Atomic physics: computer calculations and theoretical analysis
Drukarev, E. G.
2004-01-01
We demonstrate how theoretical analysis preceding the numerical calculations helps to calculate the ground-state energy of the helium atom and makes it possible to avoid qualitative errors in calculating the characteristics of double photoionization.
Calculating True Computer Access in Schools.
Slovacek, Simeon P.
1992-01-01
Discusses computer access in schools; explains how to determine sufficient quantities of computers; and describes a formula that illustrates the relationship between student access hours, the number of computers in a school, and the number of instructional hours in a typical school week. (six references) (LRW)
Computational methods for probability of instability calculations
Wu, Y.-T.; Burnside, O. H.
1990-01-01
This paper summarizes the development of the methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria, based on the roots of the characteristic equation or on Routh-Hurwitz test functions, are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
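The root-based instability criterion mentioned in the abstract can be illustrated with a short sketch (illustrative only; the parameters m, c, k and this particular test are assumptions, not the paper's code):

```python
import cmath

def is_unstable(m, c, k):
    """Check instability of m*x'' + c*x' + k*x = 0 via the roots of
    the characteristic equation m*s^2 + c*s + k = 0."""
    disc = cmath.sqrt(c * c - 4 * m * k)
    roots = ((-c + disc) / (2 * m), (-c - disc) / (2 * m))
    # The system is unstable if any root has a positive real part.
    return any(r.real > 0 for r in roots)

print(is_unstable(1.0, 2.0, 5.0))   # positively damped oscillator: stable
print(is_unstable(1.0, -1.0, 5.0))  # negative damping: unstable
```

In the probabilistic setting of the paper, m, c, and k would be random variables and this test would be evaluated inside a sampling loop.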
Classical MD calculations with parallel computers
Energy Technology Data Exchange (ETDEWEB)
Matsumoto, Mitsuhiro [Nagoya Univ. (Japan)
1998-03-01
We have developed parallel computation codes for the classical molecular dynamics (MD) method. In order to use them on workstation clusters as well as parallel supercomputers, we use the MPI (message passing interface) library for distributed-memory computers. Two algorithms are compared: (1) the particle-parallelism technique, which is easy to implement and effective for a rather small number of processors; and (2) the region-parallelism technique, which takes more effort to implement but remains effective even for many nodes. (J.P.N.)
Computing tools for accelerator design calculations
Energy Technology Data Exchange (ETDEWEB)
Fischler, M.; Nash, T.
1984-01-01
This note is intended as a brief, summary guide for accelerator designers to the new generation of commercial and special processors that allow great increases in computing cost effectiveness. New thinking is required to take best advantage of these computing opportunities, in particular, when moving from analytical approaches to tracking simulations. In this paper, we outline the relevant considerations.
CACTUS: Calculator and Computer Technology User Service.
Hyde, Hartley
1998-01-01
Presents an activity in which students use computer-based spreadsheets to find out how much grain should be added to a chess board when a grain of rice is put on the first square, the amount is doubled for the next square, and the chess board is covered. (ASK)
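The doubling activity described above reduces to a one-line computation; a minimal sketch (the spreadsheet framing is the activity's, the code is illustrative):

```python
# One grain on the first square, doubled for each of the remaining 63 squares.
grains_per_square = [2 ** n for n in range(64)]
total = sum(grains_per_square)
print(total)  # 18446744073709551615, i.e. 2**64 - 1
```

The same column-of-doublings layout is what students build in the spreadsheet version of the exercise.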
Computer calculation of Witten's 3-manifold invariant
Freed, Daniel S.; Gompf, Robert E.
1991-10-01
Witten's 2+1 dimensional Chern-Simons theory is exactly solvable. We compute the partition function, a topological invariant of 3-manifolds, on generalized Seifert spaces. Thus we test the path integral using the theory of 3-manifolds. In particular, we compare the exact solution with the asymptotic formula predicted by perturbation theory. We conclude that this path integral works as advertised and gives an effective topological invariant.
Graphical representation of supersymmetry and computer calculation
International Nuclear Information System (INIS)
A graphical representation of supersymmetry is presented. It clearly expresses the chiral flow appearing in SUSY quantities by representing spinors as directed lines (arrows). The chiral suffixes are expressed by the directions (up, down, left, right) of the arrows, and the SL(2,C) invariants are represented by wedges. This frees us from the messy bookkeeping of spinor suffixes. The method is applied to 5D supersymmetry, and many further applications are expected. The result is well suited to coding in a computer program and is expected to be applicable to various SUSY theories (including supergravity) in various dimensions. (author)
Newnes circuit calculations pocket book with computer programs
Davies, Thomas J
2013-01-01
Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book is comprised of 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.
Computer program developed for flowsheet calculations and process data reduction
Alfredson, P. G.; Anastasia, L. J.; Knudsen, I. E.; Koppel, L. B.; Vogel, G. J.
1969-01-01
The computer program PACER-65 is used for flowsheet calculations and is easily adapted to process data reduction. Each unit, vessel, meter, and processing operation in the overall flowsheet is represented by a separate subroutine, which the program calls in the order required to complete an overall flowsheet calculation.
Calculation of liquid-crystal Frank constants by computer simulation
Allen, M.P.; Frenkel, D.
1988-01-01
We present the first calculations, by computer simulation, of the Frank elastic constants of a liquid crystal composed of freely rotating and translating molecules. Extensive calculations are performed for hard prolate ellipsoids at a single density, and for hard spherocylinders at three densities.
TRIGLAV - a computer programme for research reactor calculation
Energy Technology Data Exchange (ETDEWEB)
Persic, A.; Ravnik, M.; Slavic, S.; Zagar, T. (J.Stefan Institute, Ljubljana (Slovenia))
1999-12-15
TRIGLAV is a new computer programme for burn-up calculations of mixed cores of research reactors. The code is based on a two-dimensional diffusion model, solved by an iterative procedure. The material data used in the model are calculated with the transport programme WIMS. From the fission density distribution and the energy produced by the reactor, the burn-up increment of the fuel elements is determined. (orig.)
Gravitational Field Calculations on a Dynamic Lattice by Distributed Computing
Mähönen, Petri; Punkka, Veikko
A new method of numerically calculating the time evolution of a gravitational field in general relativity is introduced. The vierbein (tetrad) formalism, a dynamic lattice and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and to facilitate the solution of problems previously considered too hard to solve, such as the time evolution of a system consisting of two or more black holes or the structure of wormholes.
Computer program for equilibrium calculation and diffusion simulation
Institute of Scientific and Technical Information of China (English)
(no author listed)
2000-01-01
A computer program called TKCALC (thermodynamic and kinetic calculation) has been successfully developed for the purpose of phase equilibrium calculation and diffusion simulation in ternary substitutional alloy systems. The program was subsequently applied to calculate the isothermal sections of the Fe-Cr-Ni system and predict the concentration profiles of two γ/γ single-phase diffusion couples in the Ni-Cr-Al system. The results are in excellent agreement with the THERMO-CALC and DICTRA software packages. A detailed mathematical derivation of some important formulae involved is also elaborated.
Computer calculation of bacterial survival during industrial poultry scalding
Computer simulation was used to model survival of bacteria during poultry scalding under common industrial conditions. Bacterial survival was calculated in a single-tank single-pass scalder with and without counterflow water movement, in a single-tank two-pass scalder, and in a three-tank two-pass ...
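Scald-tank survival calculations of this kind typically rest on log-linear thermal inactivation; a minimal sketch under that assumption (the decimal-reduction time D and all parameter values here are hypothetical, not taken from the study):

```python
def survivors(n0, minutes, d_value):
    """Log-linear thermal death model: each D-value of exposure time
    reduces the bacterial population tenfold."""
    return n0 * 10 ** (-minutes / d_value)

# Hypothetical example: 1e6 CFU entering, 2 min residence, D = 0.5 min.
print(survivors(1e6, 2.0, 0.5))  # ~100 CFU remaining
```

A multi-tank counterflow scalder would chain such calculations, with each tank's outlet population feeding the next tank at its own temperature-dependent D-value.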
Computer program for calculating water and steam properties
Hendricks, R. C.; Peller, I. C.; Baron, A. K.
1975-01-01
Computer subprogram calculates thermodynamic and transport properties of water and steam. Program accepts any two of pressure, temperature, and density as input conditions. Pressure and either entropy or enthalpy are also allowable input variables. Output includes any combination of temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, surface tension, and the Laplace constant.
Development of a computational methodology for internal dose calculations
Yoriyaz, H
2000-01-01
A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body and a more precise tool for the radiation transport simulation. The present technique shows the capability to build a patient-specific phantom from tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as the MCNP-4B code. In order to utilize the segmented human anatomy as a computational model for the simulation of radiation transport, an interface program, SCMS, was developed to build the geometric configurations for the phantom from tomographic images. This procedure makes it possible to calculate not only average dose values but also the spatial distribution of dose in regions of interest. With the present methodology, absorbed fractions for photons and electrons in various organs of the Zubal segmented phantom were calculated and compared to those reported for the mathematical phanto...
Computer program for calculating technological parameters of underground transport
Energy Technology Data Exchange (ETDEWEB)
Kreimer, E.L. (DonUGI (USSR))
1990-05-01
Reports on an analytical method developed at DonUGI for determining technological parameters and indices of mine haulage performance. A calculation program intended for personal computers and minicomputers is described and designed especially to consider haulage by electric locomotives. The program can be used in an interactive manner and it enables haulage systems of arbitrary complexity to be calculated in 2-4 minutes. The program also allows the effect of haulage on working face output to be evaluated quantitatively. Haulage systems of all mines of the Selidovugol' association were analyzed with the aid of the program in 1988; results for the Ukraina mine are presented in tables.
Methods and computer codes for nuclear systems calculations
Indian Academy of Sciences (India)
B P Kochurov; A P Knyazev; A Yu Kwaretzkheli
2007-02-01
Some numerical methods for reactor cells, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady-state and space–time calculations. The computer code TRIFON solves the space-energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN, which solves the 3D heterogeneous reactor equation for steady states and simulates 3D space–time neutron processes. A modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of the SHERHAN code for systems with external sources is under development.
Computationally efficient implementation of combustion chemistry in parallel PDF calculations
International Nuclear Information System (INIS)
In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2fmpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel ISAT
Computer Program for Point Location And Calculation of ERror (PLACER)
Granato, Gregory E.
1999-01-01
A program designed for point location and calculation of error (PLACER) was developed as part of the Quality Assurance Program of the Federal Highway Administration/U.S. Geological Survey (USGS) National Data and Methodology Synthesis (NDAMS) review process. The program provides a standard method to derive study-site locations from site maps in highway-runoff, urban-runoff, and other research reports. This report provides a guide for using PLACER, documents methods used to estimate study-site locations, documents the NDAMS Study-Site Locator Form, and documents the FORTRAN code used to implement the method. PLACER is a simple program that calculates the latitude and longitude coordinates of one or more study sites plotted on a published map and estimates the uncertainty of these calculated coordinates. PLACER calculates the latitude and longitude of each study site by interpolating between the coordinates of known features and the locations of study sites using any consistent, linear, user-defined coordinate system. This program will read data entered from the computer keyboard and(or) from a formatted text file, and will write the results to the computer screen and to a text file. PLACER is readily transferable to different computers and operating systems with few (if any) modifications because it is written in standard FORTRAN. PLACER can be used to calculate study site locations in latitude and longitude, using known map coordinates or features that are identifiable in geographic information data bases such as USGS Geographic Names Information System, which is available on the World Wide Web.
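The interpolation described in the abstract can be sketched as follows (a simplified Python illustration assuming two reference features with known coordinates and a linear map projection; not the program's actual FORTRAN):

```python
def interpolate_position(p, ref1, ref2):
    """Linearly map a plotted point p = (x, y) to (lat, lon), given two
    reference features ref = ((x, y), (lat, lon)) from the same map."""
    (x1, y1), (lat1, lon1) = ref1
    (x2, y2), (lat2, lon2) = ref2
    x, y = p
    lat = lat1 + (y - y1) * (lat2 - lat1) / (y2 - y1)
    lon = lon1 + (x - x1) * (lon2 - lon1) / (x2 - x1)
    return lat, lon

# Hypothetical map corners at known coordinates; site plotted midway.
site = interpolate_position((5.0, 5.0),
                            ((0.0, 0.0), (42.0, -71.5)),
                            ((10.0, 10.0), (42.5, -71.0)))
print(site)  # (42.25, -71.25)
```

The uncertainty estimate PLACER reports would additionally propagate the measurement error of the plotted point and reference features through this mapping.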
Hamiltonian lattice field theory: Computer calculations using variational methods
International Nuclear Information System (INIS)
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
Computer program 'SOMC2' for spherical optical model calculations
International Nuclear Information System (INIS)
This report is a description of the computer program 'SOMC2', a program for spherical optical model calculations of the nuclear scattering cross sections of neutron, proton and α particles. In the first section, the formalism and the non-linear least-squares algorithm are presented. Section II is devoted to detailed explanations of all the routines of the present program. A brief explanation is given of the methods used to obtain not only the fitting parameters but also their uncertainties and correlations. In Section III, detailed explanations of the input-data cards and of the various outputs are given. Finally, some examples of calculations are presented.
TRING: a computer program for calculating radionuclide transport in groundwater
International Nuclear Information System (INIS)
The computer program TRING is described which enables the transport of radionuclides in groundwater to be calculated for use in long term radiological assessments using methods described previously. Examples of the areas of application of the program are activity transport in groundwater associated with accidental spillage or leakage of activity, the shutdown of reactors subject to delayed decommissioning, shallow land burial of intermediate level waste and geologic disposal of high level waste. Some examples of the use of the program are given, together with full details to enable users to run the program. (author)
Million atom DFT calculations using coarse graining and petascale computing
Nicholson, Don; Odbadrakh, Kh.; Samolyuk, G. D.; Stoller, R. E.; Zhang, X. G.; Stocks, G. M.
2014-03-01
Researchers performing classical Molecular Dynamics (MD) on defect structures often find it necessary to use millions of atoms in their models. It would be useful to perform density functional calculations on these large configurations in order to observe electron-based properties such as local charge and spin and the Hellmann-Feynman forces on the atoms. The great number of atoms usually requires that a subset be "carved" from the configuration and terminated in a less than satisfactory manner, e.g. free space or inappropriate periodic boundary conditions. Coarse graining based on the Locally Self-consistent Multiple Scattering method (LSMS) and petascale computing can circumvent this problem by treating the whole system but dividing the atoms into two groups. In Coarse Grained LSMS (CG-LSMS) one group of atoms has its charge and scattering determined prescriptively based on neighboring atoms while the remaining group of atoms have their charge and scattering determined according to DFT as implemented in the LSMS. The method will be demonstrated for a one-million-atom model of a displacement cascade in Fe for which 24,130 atoms are treated with full DFT and the remaining atoms are treated prescriptively. Work supported as part of Center for Defect Physics, an Energy Frontier Research Center funded by the U.S. DOE, Office of Science, Basic Energy Sciences, used Oak Ridge Leadership Computing Facility, Oak Ridge National Lab, of DOE Office of Science.
Summaries of recent computer-assisted Feynman diagram calculations
Energy Technology Data Exchange (ETDEWEB)
Mark Fischler
2001-08-16
The AIHENP Workshop series has traditionally included cutting-edge work on the automated computation of Feynman diagrams. The conveners of the Symbolic Problem Solving topic in this ACAT conference felt it would be useful to solicit presentations of brief summaries of interesting recent calculations. Since this conference was the first in the series to be held in the Western Hemisphere, it was decided that the summaries would be solicited both from attendees and from researchers who could not attend the conference. This represents a sampling of many of the key calculations being performed. The results were presented at the Poster session; contributions from ten researchers were displayed and posted on the web. Although the poster presentation, which can be viewed at conferences.fnal.gov/acat2000/, placed equal emphasis on results presented at the conference and other contributions, here we primarily discuss the latter, which do not appear in full form in these proceedings. This brief paper cannot do full justice to each contribution; interested readers can find details of the work not presented at this conference in references (1), (2), (3), (4), (5), (6), (7).
Comparison of computer code calculations with FEBA test data
International Nuclear Information System (INIS)
The FEBA forced feed reflood experiments included base line tests with unblocked geometry. The experiments consisted of separate effect tests on a full-length 5x5 rod bundle. Experimental cladding temperatures and heat transfer coefficients of FEBA test No. 216 are compared with the analytical data postcalculated utilizing the SSYST-3 computer code. The comparison indicates a satisfactory matching of the peak cladding temperatures, quench times and heat transfer coefficients for nearly all axial positions. This agreement was made possible by the use of an artificially adjusted value of the empirical code input parameter in the heat transfer for the dispersed flow regime. A limited comparison of test data and calculations using the RELAP4/MOD6 transient analysis code are also included. In this case the input data for the water entrainment fraction and the liquid weighting factor in the heat transfer for the dispersed flow regime were adjusted to match the experimental data. On the other hand, no fitting of the input parameters was made for the COBRA-TF calculations which are included in the data comparison. (orig.)
Systems for neutronic, thermohydraulic and shielding calculation in personal computers
International Nuclear Information System (INIS)
The MTR-PC (Materials Testing Reactors-Personal Computers) system has been developed by the Nuclear Engineering Division of INVAP S.E. with the aim of providing working conditions integrated with personal computers for design and neutronic, thermohydraulic and shielding analysis for reactors employing plate type fuel. (Author)
Computing NLTE Opacities -- Node Level Parallel
Energy Technology Data Exchange (ETDEWEB)
Holladay, Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2015-09-11
Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability and compute opacities; study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.
International Nuclear Information System (INIS)
A computer program is proposed allowing the automatic calculation of control charts for accuracy and precision. The calculated charts enable the analyst to control easily the daily results for a determined radioimmunoassay. (Auth.)
Heuristic and computer calculations for the magnitude of metric spaces
Willerton, Simon
2009-01-01
The notion of the magnitude of a compact metric space was considered in arXiv:0908.1582 with Tom Leinster, where the magnitude was calculated for line segments, circles and Cantor sets. In this paper more evidence is presented for a conjectured relationship with a geometric measure theoretic valuation. Firstly, a heuristic is given for deriving this valuation by considering 'large' subspaces of Euclidean space and, secondly, numerical approximations to the magnitude are calculated for squares, disks, cubes, annuli, tori and Sierpinski gaskets. The valuation is seen to be very close to the magnitude for the convex spaces considered and is seen to be 'asymptotically' close for some other spaces.
Computer program for calculating thermodynamic and transport properties of fluids
Hendricks, R. C.; Baron, A. K.; Peller, I. C.
1975-01-01
Computer code has been developed to provide thermodynamic and transport properties of liquid argon, carbon dioxide, carbon monoxide, fluorine, helium, methane, neon, nitrogen, oxygen, and parahydrogen. Equation of state and transport coefficients are updated and other fluids added as new material becomes available.
Calculation of Linear Systems Metric Tensors via Algebraic Computation
Neto, Joao Jose de Farias
2002-01-01
A formula for the Riemannian metric tensor of differentiable manifolds of linear dynamical systems of same McMillan degree is presented in terms of their transfer function matrices. The necessary calculations for its application to ARMA and state space overlapping parametrizations are drafted. The importance of this approach for systems identification and multiple time series analysis and forecasting is explained.
A FORTRAN Computer Program for Q Sort Calculations
Dunlap, William R.
1978-01-01
The Q Sort method is a rank order procedure. A FORTRAN program is described which calculates a total value for any group of cases for the items in the Q Sort, and rank orders the items according to this composite value. (Author/JKS)
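The composite-value ranking the program performs can be sketched in a few lines (a Python re-sketch of the idea described in the abstract, not the original FORTRAN):

```python
def rank_items(case_scores):
    """case_scores: one list of Q-sort item values per case.
    Returns the per-item composite totals and the item indices
    ordered by composite value, highest first (ties keep input order)."""
    totals = [sum(item_values) for item_values in zip(*case_scores)]
    order = sorted(range(len(totals)), key=lambda i: totals[i], reverse=True)
    return totals, order

# Two cases, three Q-sort items.
totals, order = rank_items([[3, 1, 2],
                            [2, 1, 3]])
print(totals)  # [5, 2, 5]
print(order)   # [0, 2, 1]
```

Python's stable sort keeps tied items in their original order, which matches the deterministic rank-order output a batch FORTRAN program would produce.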
On the calculation of dynamic derivatives using computational fluid dynamics
Da Ronch, Andrea
2012-01-01
In this thesis, the exploitation of computational fluid dynamics (CFD) methods for the flight dynamics of manoeuvring aircraft is investigated. It is demonstrated that CFD can now be used in a reasonably routine fashion to generate stability and control databases. Different strategies to create CFD-derived simulation models across the flight envelope are explored, ranging from combined low-fidelity/high-fidelity methods to reduced-order modelling. For the representation of the unsteady aerody...
Ozgun-Koca, S. Ash
2010-01-01
Although growing numbers of secondary school mathematics teachers and students use calculators to study graphs, they mainly rely on paper-and-pencil when manipulating algebraic symbols. However, the Computer Algebra Systems (CAS) on computers or handheld calculators create new possibilities for teaching and learning algebraic manipulation. This…
Computational approach for calculating bound states in quantum field theory
Lv, Q. Z.; Norris, S.; Brennan, R.; Stefanovich, E.; Su, Q.; Grobe, R.
2016-09-01
We propose a nonperturbative approach to calculate bound-state energies and wave functions for quantum field theoretical models. It is based on the direct diagonalization of the corresponding quantum field theoretical Hamiltonian in an effectively discretized and truncated Hilbert space. We illustrate this approach for a Yukawa-like interaction between fermions and bosons in one spatial dimension and show where it agrees with the traditional method based on the potential picture and where it deviates due to recoil and radiative corrections. This method permits us also to obtain some insight into the spatial characteristics of the distribution of the fermions in the ground state, such as the bremsstrahlung-induced widening.
Computational benchmark for calculation of silane and siloxane thermochemistry.
Cypryk, Marek; Gostyński, Bartłomiej
2016-01-01
Geometries of model chlorosilanes, R3SiCl, silanols, R3SiOH, and disiloxanes, (R3Si)2O, R = H, Me, as well as the thermochemistry of the reactions involving these species were modeled using 11 common density functionals in combination with five basis sets to examine the accuracy and applicability of various theoretical methods in organosilicon chemistry. As the model reactions, the proton affinities of silanols and siloxanes, hydrolysis of chlorosilanes and condensation of silanols to siloxanes were considered. As the reference values, experimental bonding parameters and reaction enthalpies were used wherever available. Where there are no experimental data, W1 and CBS-QB3 values were used instead. For the gas phase conditions, excellent agreement between theoretical CBS-QB3 and W1 and experimental thermochemical values was observed. All DFT methods also give acceptable values and the precision of various functionals used was comparable. No significant advantage of newer more advanced functionals over 'classical' B3LYP and PBEPBE ones was noted. The accuracy of the results was improved significantly when triple-zeta basis sets were used for energy calculations, instead of double-zeta ones. The accuracy of calculations for the reactions in water solution within the SCRF model was inferior compared to the gas phase. However, by careful estimation of corrections to the ΔHsolv and ΔGsolv of H(+) and HCl, reasonable values of thermodynamic quantities for the discussed reactions can be obtained. PMID:26781663
Energy Technology Data Exchange (ETDEWEB)
Lamarcq, J. [Service de Physique Theorique, CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)
1998-07-10
Numerical simulation allows theorists to convince themselves of the validity of the models they use; in particular, by simulating spin lattices one can judge the validity of a conjecture. Simulating a system defined by a large number of degrees of freedom requires highly sophisticated machines. This study deals with modelling the magnetic interactions between the ions of a crystal. Many exact results have been found for spin-1/2 systems, but not for systems of other spins, for which many simulations have been carried out. Interest in simulations has been renewed by Haldane's conjecture, which stipulates the existence of an energy gap between the ground state and the first excited states of a spin-1 lattice. The existence of this gap has been demonstrated experimentally. This report contains the following four chapters: 1. Spin systems; 2. Calculation of eigenvalues; 3. Programming; 4. Parallel calculation. 14 refs., 6 figs.
Performing three-dimensional neutral particle transport calculations on tera scale computers
Energy Technology Data Exchange (ETDEWEB)
Woodward, C S; Brown, P N; Chang, B; Dorr, M R; Hanebutte, U R
1999-01-12
A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging tera scale computers, the parallel code successfully combines MPI message passing with complementary parallel programming paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP 'ASCI Blue-Pacific' computer located at Lawrence Livermore National Laboratory (LLNL).
A computer code for beam optics calculation--third order approximation
Institute of Scientific and Technical Information of China (English)
LÜ Jianqin; LI Jinhai
2006-01-01
To calculate beam transport in ion optical systems accurately, a beam dynamics computer program of third-order approximation has been developed. Many conventional optical elements are incorporated in the program. Users can select particle distributions of uniform or Gaussian type in the (x, y, z) 3D ellipses. Optimization procedures are provided to make the calculations reasonable and fast. The calculated results can be displayed graphically on the computer monitor.
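The element-by-element mapping such a program performs can be sketched at leading (first) order; the program described works to third order, but the structure is the same. This is an illustrative example, not the program's actual implementation: each element is a transfer matrix acting on the particle coordinates (x, x').

```python
import numpy as np

# First-order (linear) transfer matrices in the (x, x') plane.

def drift(L):
    """Field-free drift of length L."""
    return np.array([[1.0, L],
                     [0.0, 1.0]])

def thin_quad(f):
    """Thin-lens quadrupole of focal length f (focusing for f > 0)."""
    return np.array([[1.0, 0.0],
                     [-1.0 / f, 1.0]])

# A simple beamline: drift, focusing lens, drift.
# Matrices compose right-to-left (rightmost element is traversed first).
M = drift(0.5) @ thin_quad(2.0) @ drift(0.5)

x0 = np.array([1e-3, 0.0])      # 1 mm offset, zero slope
x1 = M @ x0
print(x1)
```

A useful sanity check in such codes is symplecticity: each matrix has unit determinant, so the product does too.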
Calculating absorption shifts for retinal proteins: computational challenges.
Wanko, M; Hoffmann, M; Strodel, P; Koslowski, A; Thiel, W; Neese, F; Frauenheim, T; Elstner, M
2005-03-01
Rhodopsins can modulate the optical properties of their chromophores over a wide range of wavelengths. The mechanism for this spectral tuning is based on the response of the retinal chromophore to external stress and the interaction with the charged, polar, and polarizable amino acids of the protein environment and is connected to its large change in dipole moment upon excitation, its large electronic polarizability, and its structural flexibility. In this work, we investigate the accuracy of computational approaches for modeling changes in absorption energies with respect to changes in geometry and applied external electric fields. We illustrate the high sensitivity of absorption energies on the ground-state structure of retinal, which varies significantly with the computational method used for geometry optimization. The response to external fields, in particular to point charges which model the protein environment in combined quantum mechanical/molecular mechanical (QM/MM) applications, is a crucial feature, which is not properly represented by previously used methods, such as time-dependent density functional theory (TDDFT), complete active space self-consistent field (CASSCF), and Hartree-Fock (HF) or semiempirical configuration interaction singles (CIS). This is discussed in detail for bacteriorhodopsin (bR), a protein which blue-shifts retinal gas-phase excitation energy by about 0.5 eV. As a result of this study, we propose a procedure which combines structure optimization or molecular dynamics simulation using DFT methods with a semiempirical or ab initio multireference configuration interaction treatment of the excitation energies. Using a conventional QM/MM point charge representation of the protein environment, we obtain an absorption energy for bR of 2.34 eV. This result is already close to the experimental value of 2.18 eV, even without considering the effects of protein polarization, differential dispersion, and conformational sampling.
Energy Technology Data Exchange (ETDEWEB)
Oyamatsu, Kazuhiro [Nagoya Univ. (Japan)
1998-03-01
Application programs for personal computers have been developed to calculate the decay heat power and delayed neutron activity from fission products. Because their sources are written in Fortran, the main programs can be run on any computer, from personal computers to mainframes. The programs have user-friendly interfaces so that they can be used easily, not only for research but also for educational purposes. (author)
Direct Calculation of Protein Fitness Landscapes through Computational Protein Design.
Au, Loretta; Green, David F
2016-01-01
Naturally selected amino-acid sequences or experimentally derived ones are often the basis for understanding how protein three-dimensional conformation and function are determined by primary structure. Such sequences for a protein family comprise only a small fraction of all possible variants, however, representing the fitness landscape with limited scope. Explicitly sampling and characterizing alternative, unexplored protein sequences would directly identify fundamental reasons for sequence robustness (or variability), and we demonstrate that computational methods offer an efficient mechanism toward this end, on a large scale. The dead-end elimination and A* search algorithms were used here to find all low-energy single mutant variants, and corresponding structures of a G-protein heterotrimer, to measure changes in structural stability and binding interactions to define a protein fitness landscape. We established consistency between these algorithms with known biophysical and evolutionary trends for amino-acid substitutions, and could thus recapitulate known protein side-chain interactions and predict novel ones.
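The dead-end elimination idea the abstract relies on can be sketched with toy numbers (this is the classic Goldstein-style criterion in miniature, not the authors' production code): rotamer r at position i is provably suboptimal if some competitor t beats it even under the most favorable pairwise interactions.

```python
# Minimal dead-end elimination sketch with made-up energies.
# Eliminate rotamer r at position i if for some t:
#   E_self[i][r] - E_self[i][t]
#     + sum_j min_s (E_pair[i,r,j,s] - E_pair[i,t,j,s]) > 0

def dee_eliminate(E_self, E_pair, positions):
    eliminated = set()
    for i in positions:
        for r in range(len(E_self[i])):
            for t in range(len(E_self[i])):
                if t == r:
                    continue
                bound = E_self[i][r] - E_self[i][t]
                for j in positions:
                    if j == i:
                        continue
                    bound += min(E_pair[(i, r, j, s)] - E_pair[(i, t, j, s)]
                                 for s in range(len(E_self[j])))
                if bound > 0:       # r can never beat t: dead-end
                    eliminated.add((i, r))
                    break
    return eliminated

# Two positions, two rotamers each; all pair energies zero for simplicity.
E_self = {0: [5.0, 0.0], 1: [0.0, 0.0]}
E_pair = {(i, r, j, s): 0.0 for i in (0, 1) for r in range(2)
          for j in (0, 1) for s in range(2)}
print(dee_eliminate(E_self, E_pair, [0, 1]))
```

In practice the eliminations shrink the search space until A* (or even enumeration) over the surviving rotamers becomes tractable.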
Shielding Calculations for Positron Emission Tomography - Computed Tomography Facilities
Energy Technology Data Exchange (ETDEWEB)
Baasandorj, Khashbayar [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of); Yang, Jeongseon [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)
2015-10-15
Integrated PET-CT has been shown to be more accurate for lesion localization and characterization than PET or CT alone, or than PET and CT performed separately and interpreted side by side or after software-based fusion of the PET and CT datasets. At the same time, PET-CT scans can result in high patient and staff doses; therefore, careful site planning and shielding of this imaging modality have become challenging issues in the field. In Mongolia, the introduction of PET-CT facilities is currently being considered in many hospitals. Thus, additional regulatory legislation for nuclear and radiation applications is necessary, for example, in regulating licensee processes and ensuring radiation safety during operations. This paper aims to determine appropriate PET-CT shielding designs using numerical formulas and computer code. Since there are presently no PET-CT facilities in Mongolia, contact was made with radiological staff at the Nuclear Medicine Center of the National Cancer Center of Mongolia (NCCM) to get information about facilities where the introduction of PET-CT is being considered. Well-designed facilities do not require additional shielding, which should help cut down the overall costs of PET-CT installation. According to the results of this study, the barrier thicknesses of the NCCM building are not sufficient to keep radiation doses within the limits.
Computer calculations in interstitial seed therapy: I. Radiation treatment planning
International Nuclear Information System (INIS)
Interstitial seed therapy computers can be used for radiation treatment planning and for dose control after implantation. In interstitial therapy with radioactive seeds there are much greater differences between planning and carrying out radiation treatment than in teletherapy with cobalt-60 or X-rays. Because of the short distance between radioactive sources and tumour tissue, even slight deviations from the planned implantation geometry cause considerable dose deviations. Furthermore, the distribution of seeds in an actual implant is inhomogeneous. During implantation the spatial distribution of seeds cannot be examined exactly, though X-rays are used to control the operation. The afterloading technique of Henschke allows a more exact implantation geometry, but I have no experience of this method. In spite of the technical difficulty of achieving optimum geometry, interstitial therapy still has certain advantages over teletherapy: the dose in the treated volume can be kept smaller than in teletherapy, the radiation can be better concentrated in the tumour volume, the treatment can be restricted to one or two operations, and localized inoperable tumours may be cured more easily. The latter may depend on an optimal treatment time, a relatively high tumour dose and a continuous, exponentially decreasing dose rate during the treatment time. A disadvantage of interstitial therapy is the high personnel dose, which may be reduced by the afterloading technique of Henschke (1956). However, the afterloading method requires much greater personnel and instrumental expense than free implantation of radiogold seeds and causes greater trauma for the patient.
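The core of any seed-implant dose computation is a sum of per-seed contributions. The sketch below shows only the simplest inverse-square point-source model with an illustrative constant; real planning systems add anisotropy, tissue attenuation, and decay factors.

```python
# Hedged sketch of a point-source seed dose sum; GAMMA and the seed data
# are illustrative placeholders, not clinical values.

GAMMA = 1.0  # exposure-rate constant, arbitrary illustrative units

def dose_rate(point, seeds):
    """Sum inverse-square contributions from (x, y, z, activity) seeds."""
    total = 0.0
    for (x, y, z, activity) in seeds:
        r2 = (point[0] - x) ** 2 + (point[1] - y) ** 2 + (point[2] - z) ** 2
        total += GAMMA * activity / r2
    return total

seeds = [(0, 0, 0, 1.0), (1, 0, 0, 1.0)]
print(dose_rate((0.5, 0, 0), seeds))  # midpoint of two unit seeds: 4 + 4 = 8
```

The 1/r^2 factor is why the abstract stresses that slight deviations from the planned geometry cause considerable dose deviations: close to a seed, small position errors change r^2 dramatically.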
Computational challenges in large nucleosynthesis calculations in stars
International Nuclear Information System (INIS)
The study of how the elements form in stars requires significant computational effort. The time scale of nuclear reactions in different evolutionary phases of stars changes by several orders of magnitude and requires the implementation of fully implicit solvers to obtain precise results; a lack of accuracy can be a severe issue, in particular in explosive conditions such as supernovae. Another important point to consider is the number of isotopic species that must be included in the simulations. Neutron capture processes are chiefly responsible for producing the abundances of elements heavier than iron. For the slow neutron capture process (the s process), the typical number of species is about 600, whereas for the explosive rapid neutron capture process (the r process) the dimension of the matrix that must be inverted to solve the nucleosynthesis equations is well above 1000. I aim to present these topics by providing a general overview of the astrophysical scenarios involved and showing meaningful examples to clarify the discussion. (author)
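Why stiff networks force implicit solvers, and why each step costs a matrix solve, can be shown on a two-species toy problem (illustrative rates only, not a real reaction network): backward Euler stays stable at step sizes far larger than the fastest timescale, at the price of solving (I - dt*M) Y_new = Y_old every step.

```python
import numpy as np

# Backward-Euler step for a linear "decay network" dY/dt = M Y.
# Rates are illustrative; a nucleosynthesis network has the same structure
# with hundreds to thousands of species, hence the large matrix inversions.

lam_a, lam_b = 100.0, 0.1          # widely separated timescales -> stiff
M = np.array([[-lam_a, 0.0],
              [ lam_a, -lam_b]])   # species A decays into species B

def implicit_step(Y, dt):
    return np.linalg.solve(np.eye(2) - dt * M, Y)

Y = np.array([1.0, 0.0])
dt = 0.5                            # far larger than 1/lam_a, yet stable
for _ in range(100):
    Y = implicit_step(Y, dt)
print(Y)
```

An explicit Euler step at this dt would diverge violently (the amplification factor for A would be 1 - dt*lam_a = -49); the implicit step instead damps the fast mode and keeps abundances non-negative.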
Parallel beam dynamics calculations on high performance computers
International Nuclear Information System (INIS)
Faced with a backlog of nuclear waste and weapons plutonium, as well as an ever-increasing public concern about safety and environmental issues associated with conventional nuclear reactors, many countries are studying new, accelerator-driven technologies that hold the promise of providing safe and effective solutions to these problems. Proposed projects include accelerator transmutation of waste (ATW), accelerator-based conversion of plutonium (ABC), accelerator-driven energy production (ADEP), and accelerator production of tritium (APT). Also, next-generation spallation neutron sources based on similar technology will play a major role in materials science and biological science research. The design of accelerators for these projects will require a major advance in numerical modeling capability. For example, beam dynamics simulations with approximately 100 million particles will be needed to ensure that extremely stringent beam loss requirements (less than a nanoampere per meter) can be met. Compared with typical present-day modeling using 10,000-100,000 particles, this represents an increase of 3-4 orders of magnitude. High performance computing (HPC) platforms make it possible to perform such large scale simulations, which require 10's of GBytes of memory. They also make it possible to perform smaller simulations in a matter of hours that would require months to run on a single processor workstation. This paper will describe how HPC platforms can be used to perform the numerically intensive beam dynamics simulations required for development of these new accelerator-driven technologies
Simple and fast cosine approximation method for computer-generated hologram calculation.
Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ito, Tomoyoshi
2015-12-14
The cosine function is a computationally heavy operation in computer-generated hologram (CGH) calculation; it is therefore commonly implemented by substitution methods such as a look-up table. However, the computational load and required memory space of such methods are still large. In this study, we propose a simple and fast cosine function approximation method for CGH calculation. As a result, we succeeded in creating CGHs of sufficient quality while making the calculation up to 1.6 times faster than a look-up-table implementation of the cosine function on a CPU.
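The paper's exact approximation is not reproduced in the abstract; the sketch below only illustrates the general idea behind such methods: reduce the argument to a quarter period by symmetry, then evaluate a cheap low-order polynomial instead of the library cosine.

```python
import math

# Generic fast-cosine sketch (illustrative, not the paper's method):
# quadrant reduction plus a 4th-order even polynomial on [0, pi/2].

def fast_cos(x):
    x = abs(x) % (2.0 * math.pi)
    if x > math.pi:                 # cos is symmetric about pi
        x = 2.0 * math.pi - x
    sign = 1.0
    if x > math.pi / 2.0:           # fold [pi/2, pi] onto [0, pi/2]
        x = math.pi - x
        sign = -1.0
    x2 = x * x
    return sign * (1.0 - x2 / 2.0 + x2 * x2 / 24.0)

max_err = max(abs(fast_cos(t) - math.cos(t))
              for t in (i * 0.01 for i in range(-314, 315)))
print(max_err)   # worst-case error of about 0.02, near +-pi/2
```

Whether a given error level is "sufficient quality" depends on the hologram's bit depth; the trade-off between polynomial order and speed is exactly what such papers tune.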
Zhao, Yu; Piao, Mei-lan; Li, Gang; Kim, Nam
2015-07-01
A fast calculation method for a computer-generated cylindrical hologram (CGCH) is proposed. The method consists of two steps: the first is the calculation of a virtual wavefront-recording surface (WRS) located between the 3D object and the CGCH. In the second step, to obtain the CGCH, we execute a diffraction calculation based on the fast Fourier transform (FFT) from the WRS to the CGCH, which are in the same concentric arrangement. The computational complexity is dramatically reduced in comparison with the direct integration method. The simulation results confirm that the proposed method improves the computational speed of CGCH generation.
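The FFT-based diffraction step can be sketched for the simpler planar case (the cylindrical, concentric case in the paper uses the same FFT structure with a different transfer function); this is the standard angular spectrum method, shown here only as an illustration.

```python
import numpy as np

# Angular spectrum propagation between parallel planes:
# multiply the field's 2D spectrum by exp(i*kz*z) and transform back.

def angular_spectrum(field, wavelength, dx, z):
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    prop = arg > 0                              # propagating components
    kz = 2.0 * np.pi / wavelength * np.sqrt(np.where(prop, arg, 0.0))
    H = np.where(prop, np.exp(1j * kz * z), 0.0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * H)

field = np.zeros((64, 64), dtype=complex)
field[32, 32] = 1.0                 # point source on the WRS
out = angular_spectrum(field, wavelength=633e-9, dx=10e-6, z=1e-3)
print(abs(out).max())
```

Because both transforms are O(N log N), propagating an entire surface costs little more than two FFTs, which is the source of the speedup over direct point-by-point integration.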
Radiation therapy calculations using an on-demand virtual cluster via cloud computing
Keyes, Roy W; Arnold, Dorian; Luan, Shuang
2010-01-01
Computer hardware costs are the limiting factor in producing highly accurate radiation dose calculations on convenient time scales. Because of this, large-scale, full Monte Carlo simulations and other resource intensive algorithms are often considered infeasible for clinical settings. The emerging cloud computing paradigm promises to fundamentally alter the economics of such calculations by providing relatively cheap, on-demand, pay-as-you-go computing resources over the Internet. We believe that cloud computing will usher in a new era, in which very large scale calculations will be routinely performed by clinics and researchers using cloud-based resources. In this research, several proof-of-concept radiation therapy calculations were successfully performed on a cloud-based virtual Monte Carlo cluster. Performance evaluations were made of a distributed processing framework developed specifically for this project. The expected 1/n performance was observed with some caveats. The economics of cloud-based virtual...
ANIGAM: a computer code for the automatic calculation of nuclear group data
International Nuclear Information System (INIS)
The computer code ANIGAM consists mainly of the well-known programmes GAM-I and ANISN, as well as a subroutine that reads the THERMOS cross-section library and prepares it for ANISN. ANIGAM has been written for the automatic calculation of microscopic and macroscopic cross sections of light water reactor fuel assemblies. In a single computer run, both the cross sections representative of fuel assemblies in reactor core calculations and the cross sections of each cell type of a fuel assembly are calculated. The calculated data are delivered to EXTERMINATOR and CITATION by an auxiliary programme for subsequent diffusion or burn-up calculations. This report contains a detailed description of the computer codes and methods used in ANIGAM, a description of the subroutines and of the OVERLAY structure, and an input and output description. (orig.)
SAMDIST: A computer code for calculating statistical distributions for R-matrix resonance parameters
Energy Technology Data Exchange (ETDEWEB)
Leal, L.C.; Larson, N.M.
1995-09-01
The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.
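The level-spacing statistic SAMDIST reports is conventionally compared against the Wigner surmise P(s) = (pi/2) s exp(-pi s^2/4), whose normalized spacings have mean 1. The sketch below (illustrative, not SAMDIST code) draws spacings from that distribution by inverse-CDF sampling.

```python
import numpy as np

# Wigner-surmise spacing sampler via the inverse CDF:
# CDF(s) = 1 - exp(-pi s^2 / 4)  =>  s = sqrt(-(4/pi) ln(1 - u))

rng = np.random.default_rng(0)
u = rng.random(200_000)
s = np.sqrt(-4.0 / np.pi * np.log(1.0 - u))

print(s.mean())          # ~1.0: spacings are normalized to unit mean
print((s < 0.1).mean())  # small: level repulsion suppresses tiny spacings
```

Comparing an observed spacing histogram from resolved resonances against this curve is a standard check that a resonance set is statistically complete (missed levels show up as an excess of large spacings).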
DEFF Research Database (Denmark)
Rasmussen, Claus P.; Krejbjerg, Kristian; Michelsen, Michael Locht;
2006-01-01
Approaches are presented for reducing the computation time spent on flash calculations in compositional, transient simulations. In a conventional flash calculation, the majority of the simulation time is spent on stability analysis, even for systems far into the single-phase region. A criterion has been implemented for deciding when it is justified to bypass the stability analysis. With the implementation of the developed time-saving initiatives, it has been shown for a number of compositional, transient pipeline simulations that a reduction of the computation time spent on flash calculations...
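Once stability analysis says a mixture splits into two phases, the flash itself reduces to solving the Rachford-Rice equation for the vapor fraction. The sketch below uses fixed K-values as inputs (a real simulator updates them from an equation of state) and plain bisection for robustness.

```python
# Rachford-Rice sketch: find vapor fraction V in [0, 1] satisfying
#   sum_i z_i (K_i - 1) / (1 + V (K_i - 1)) = 0.
# K-values here are illustrative constants, not EOS results.

def rachford_rice(z, K, tol=1e-12):
    def f(V):
        return sum(zi * (Ki - 1.0) / (1.0 + V * (Ki - 1.0))
                   for zi, Ki in zip(z, K))
    lo, hi = 0.0, 1.0
    for _ in range(200):            # bisection: robust, derivative-free
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0.0:
            hi = mid
        else:
            lo = mid
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

z = [0.5, 0.5]          # equimolar binary feed
K = [2.0, 0.5]          # K-values symmetric about 1
V = rachford_rice(z, K)
print(V)                # symmetry gives V = 0.5 exactly
```

The bypass criterion the abstract describes avoids ever reaching this step (and the far costlier stability test) when the fluid is clearly single-phase.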
Goel, Narenda S.; Rozehnal, I.; Thompson, R. L.
1991-01-01
A general computer graphics based model is presented for computer generation of objects of arbitrary shape and for calculating Bidirectional Reflectance Factor (BRF) and scattering from them, in the optical region. The computer generation uses a modified Lindenmayer system (L-system) approach. For rendering on a computer screen, the object is divided into polygons, and innovative computer graphics techniques are used to display the object and to calculate the scattering and reflectance from the object. The use of the technique is illustrated with scattering from canopies of simulated corn plants and from a snow covered mountain. The scattering is quantified using measures like BRF and albedo and by rendering the objects with brightness of each of the two facets of a polygon proportional to the amount of light scattered from the object in the viewer's direction.
Calculation reduction method for color computer-generated hologram using color space conversion
Shimobaba, Tomoyoshi; Oikawa, Minoru; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Ito, Tomoyoshi
2013-01-01
We report a calculation reduction method for color computer-generated holograms (CGHs) using color space conversion. Color CGHs are generally calculated in RGB space. In this paper, we calculate color CGHs in other color spaces, for example YCbCr. In YCbCr space, an RGB image is converted to a luminance component (Y), a blue-difference chroma component (Cb), and a red-difference chroma component (Cr). The human eye readily perceives even small differences in the luminance component, but is far less sensitive to differences in the chroma components. In this method, the luminance component is therefore sampled at full resolution while the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color CGHs. We compute diffraction calculations from the components and then convert the diffracted results from YCbCr space back to RGB space.
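The color-space step can be sketched with standard BT.601-style conversion coefficients (the paper does not specify its exact matrix, so these coefficients are an assumption): keep Y at full resolution and down-sample Cb/Cr by two in each direction before the per-channel diffraction calculations.

```python
import numpy as np

# RGB -> YCbCr (full-range BT.601-style coefficients), then 2x chroma
# down-sampling: one quarter as many chroma samples to propagate.

def rgb_to_ycbcr(rgb):
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

img = np.random.default_rng(1).random((8, 8, 3))
y, cb, cr = rgb_to_ycbcr(img)
cb_small = cb[::2, ::2]     # 4x fewer chroma samples
cr_small = cr[::2, ::2]
print(y.shape, cb_small.shape)
```

A gray pixel (R = G = B) maps to zero chroma, which is a quick way to check the matrix: all of its perceptual content sits in the well-sampled Y channel.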
Efficient Computation of Power, Force, and Torque in BEM Scattering Calculations
Reid, M T Homer
2013-01-01
We present concise, computationally efficient formulas for several quantities of interest in scattering calculations performed using the boundary-element method (BEM), also known as the method of moments (MOM): absorbed and scattered power, optical force (radiation pressure), and torque. Our formulas compute the quantities of interest directly from the BEM surface currents, with no need ever to compute the scattered electromagnetic fields. We derive the new formulas and demonstrate their effectiveness by computing power, force, and torque in a number of example geometries. Free, open-source software implementations of our formulas are available for download online.
Energy Technology Data Exchange (ETDEWEB)
Strenge, D.L.; Peloquin, R.A.
1981-04-01
The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external doses from air submersion and internal doses from inhalation following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model, with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.
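The dose-assessment chain the abstract describes multiplies a release by an atmospheric dilution factor and a dose conversion factor. The numbers below are illustrative placeholders, not Hanford-model values; the sketch only shows the structure of the calculation.

```python
# Minimal inhalation-dose chain (illustrative values only):
# dose = release x (chi/Q dilution) x breathing rate x dose conversion factor

release_bq = 1.0e12          # acute release, Bq
chi_over_q = 1.0e-6          # atmospheric dilution at receptor, s/m^3
breathing_rate = 3.3e-4      # m^3/s
dcf_sv_per_bq = 1.0e-8       # inhalation dose conversion factor, Sv/Bq

inhaled_bq = release_bq * chi_over_q * breathing_rate
dose_sv = inhaled_bq * dcf_sv_per_bq
print(dose_sv)
```

In a code like HADOC this product is evaluated per radionuclide and per exposure mode, then summed; the fractional contributions it reports are just each term divided by the total.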
International Nuclear Information System (INIS)
A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPCs of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.
WETAIR: A computer code for calculating thermodynamic and transport properties of air-water mixtures
Fessler, T. E.
1979-01-01
A computer program subroutine, WETAIR, was developed to calculate the thermodynamic and transport properties of air-water mixtures. It determines the thermodynamic state from assigned values of temperature and density, pressure and density, temperature and pressure, pressure and entropy, or pressure and enthalpy. WETAIR calculates the properties of dry air and water (steam) by interpolating in property tables, then uses simple mixing laws to calculate the properties of air-water mixtures. Properties of mixtures with water contents below 40 percent (by mass) can be calculated at temperatures from 273.2 to 1497 K and pressures to 450 MN/sq m. Dry-air properties can be calculated at temperatures as low as 150 K. Water properties can be calculated at temperatures to 1747 K and pressures to 100 MN/sq m. WETAIR is available in both SFTRAN and FORTRAN.
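The "simple mixing laws" step can be sketched as a mass-fraction-weighted average of the pure-component properties (WETAIR's actual tables and laws are not reproduced here; the property values below are rough room-temperature figures used only for illustration).

```python
# Mass-weighted mixing-law sketch for an air-water mixture property.

def mix(prop_air, prop_water, water_mass_fraction):
    """Mass-fraction-weighted average of two pure-component properties."""
    w = water_mass_fraction
    return (1.0 - w) * prop_air + w * prop_water

cp_air = 1005.0      # J/(kg K), dry air, approximate
cp_steam = 1860.0    # J/(kg K), water vapor, approximate
cp_mix = mix(cp_air, cp_steam, 0.10)   # 10% water by mass
print(cp_mix)
```

Specific heats mix linearly in mass fraction like this; transport properties such as viscosity generally need more elaborate mixing rules, which is part of what a dedicated routine provides.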
Computer program for calculating flow parameters and power requirements for cryogenic wind tunnels
Dress, D. A.
1985-01-01
A computer program has been written that performs the flow-parameter calculations for cryogenic wind tunnels that use nitrogen as the test gas. The flow parameters calculated include static pressure, static temperature, compressibility factor, ratio of specific heats, dynamic viscosity, total and static density, velocity, dynamic pressure, mass-flow rate, and Reynolds number. Simplifying assumptions have been made so that the calculations of Reynolds number, as well as the other flow parameters, can be made on relatively small desktop digital computers. The program, which also includes various power calculations, has been developed to the point where it has become a very useful tool for the users and possible future designers of fan-driven continuous-flow cryogenic wind tunnels.
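The payoff of cryogenic operation shows up directly in the Reynolds-number formula. The sketch below uses rough illustrative values for cold, pressurized nitrogen (not the program's real-gas tables): high density and low viscosity together give flight-scale Reynolds numbers at modest speeds.

```python
# Reynolds number Re = rho * V * L / mu with illustrative cryogenic values.

def reynolds(rho, velocity, length, mu):
    return rho * velocity * length / mu

rho = 11.0        # kg/m^3, dense cold nitrogen at elevated pressure (rough)
velocity = 100.0  # m/s
chord = 0.1       # m, model reference length
mu = 7.0e-6       # Pa*s, viscosity drops sharply at cryogenic temperature
Re = reynolds(rho, velocity, chord, mu)
print(Re)
```

The same geometry in ambient air (rho ~ 1.2 kg/m^3, mu ~ 1.8e-5 Pa*s) would give a Reynolds number roughly twenty times lower, which is the design motivation for such tunnels.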
GENGTC-JB: a computer program to calculate temperature distribution for cylindrical geometry capsule
International Nuclear Information System (INIS)
In the design of JMTR irradiation capsules containing specimens, a program named GENGTC has generally been used to evaluate temperature distributions in the capsules. The program was originally compiled at ORNL (U.S.A.) and consists of very simple calculation methods. Because of these straightforward methods, the program is easy to use and has many applications in capsule design. However, when the program was reviewed against the abilities of recent computers, it was considered desirable to replace the original computing methods with advanced ones and to simplify the complicated data input. Therefore, the program was upgraded with the aim of improving both the calculations and the input method. The present report describes the revised calculation methods and the input/output guide of the upgraded program. (author)
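The basic relation underlying radial temperature calculations in a cylindrical capsule is steady conduction through an annulus. The sketch below evaluates it with illustrative values, not JMTR capsule data.

```python
import math

# Steady radial conduction through a cylindrical shell:
#   T_inner - T_outer = q' * ln(r2 / r1) / (2 * pi * k)
# where q' is the heat flow per unit axial length.

def annulus_delta_t(q_per_length, r1, r2, k):
    return q_per_length * math.log(r2 / r1) / (2.0 * math.pi * k)

dT = annulus_delta_t(q_per_length=500.0,  # W/m, illustrative gamma heating
                     r1=0.005, r2=0.006,  # m, inner/outer shell radii
                     k=15.0)              # W/(m K), steel-like conductivity
print(dT)   # temperature drop across the shell, in kelvin
```

A capsule code chains such drops across each concentric layer (specimen, gas gap, shell), with the thin low-conductivity gas gap usually dominating the total.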
Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong
2016-04-01
The spectral power distributions (SPDs) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance of a scene to vary and produces common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be applied directly in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision, establishing a bridge between images and physical environmental information such as time, location, and weather conditions. PMID:27137018
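The transmittance structure such models build on is Beer-Lambert attenuation along the slant path: direct irradiance falls as exp(-m * tau(lambda)), with air mass m growing as the Sun drops. The optical depths below are illustrative only, chosen so that scattering favors red over blue, as in a real Rayleigh atmosphere.

```python
import math

# Direct-beam transmittance exp(-m * tau) with a plane-parallel air mass
# m = 1 / cos(zenith). tau values are illustrative, not fitted data.

def transmittance(tau, zenith_deg):
    m = 1.0 / math.cos(math.radians(zenith_deg))
    return math.exp(-m * tau)

tau_blue, tau_red = 0.30, 0.05    # Rayleigh scattering is strongest for blue
for zen in (0.0, 60.0, 80.0):
    print(zen, transmittance(tau_blue, zen), transmittance(tau_red, zen))
```

As the zenith angle grows, the blue channel attenuates much faster than the red one, which is exactly the SPD shift responsible for reddened twilight illumination.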
A FORTRAN computer code for calculating flows in multiple-blade-element cascades
Mcfarland, E. R.
1985-01-01
A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.
Energy Technology Data Exchange (ETDEWEB)
Koski, J.A.; Wix, S.D.; Cole, J.K.
1997-09-01
Shipboard fires both in the same ship hold and in an adjacent hold aboard a break-bulk cargo ship are simulated with a commercial finite-volume computational fluid mechanics code. The fire models and modeling techniques are described and discussed. Temperatures and heat fluxes to a simulated materials package are calculated and compared to experimental values. The overall accuracy of the calculations is assessed.
Computer calculation of dose distributions in radiotherapy. Report of a panel
International Nuclear Information System (INIS)
As in most areas of scientific endeavour, the advent of electronic computers has made a significant impact on the investigation of the physical aspects of radiotherapy. Since the first paper on the subject was published in 1955, the literature has rapidly expanded to include the application of computer techniques to problems of external beam, intracavitary, and interstitial dosimetry. By removing the tedium of lengthy repetitive calculations, the availability of automatic computers has encouraged physicists and radiotherapists to take a fresh look at many fundamental physical problems of radiotherapy. The most important result of the automation of dosage calculations is not simply an increase in the quantity of data but an improvement in the quality of data available as a treatment guide for the therapist. In October 1965 the International Atomic Energy Agency convened a panel in Vienna on the 'Use of Computers for Calculation of Dose Distributions in Radiotherapy' to assess the current status of work, provide guidelines for future research, explore the possibility of international cooperation and make recommendations to the Agency. The panel meeting was attended by 15 participants from seven countries, one observer, and two representatives of the World Health Organization. Participants contributed 20 working papers which served as the basis of discussion. By the nature of the work, computer techniques have been developed by a few advanced centres with access to large computer installations. However, several computer methods are now becoming 'routine' and can be used by institutions without facilities for research. It is hoped that the report of the Panel will provide a comprehensive view of the automatic computation of radiotherapeutic dose distributions and serve as a means of communication between present and potential users of computers.
Fast calculation of computer-generated hologram using run-length encoding based recurrence relation.
Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi
2015-04-20
Computer-Generated Holograms (CGHs) can be generated by superimposing zone plates. A zone plate is a grating that can concentrate incident light to a point. Since a zone plate has circular symmetry, we previously reported an algorithm that rapidly generates a zone plate by drawing concentric circles using computer graphics techniques. However, that algorithm required random memory access, which degraded its computational efficiency. In this study, we propose a fast CGH generation algorithm without random memory access, using a run-length encoding (RLE) based recurrence relation. As a result, we reduced the calculation time by 88% compared with that of the previous work.
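The core trick of such recurrence-based zone plate drawing, updating the squared radius incrementally along a scan line so that no per-pixel multiplication is needed, can be sketched as follows. This is an illustrative reconstruction, not the authors' RLE implementation, and the wavelength, distance, and pixel pitch are assumed values.

```python
import math
import numpy as np

def zone_plate(n, cx, cy, wavelength=0.5e-6, z=2e-3, pitch=10e-6):
    """Binary Fresnel zone plate drawn with the integer recurrence
    (dx+1)^2 = dx^2 + 2*dx + 1, so the squared radius is updated by an
    addition per pixel instead of being recomputed.  This captures the
    spirit of recurrence-based CGH drawing; the RLE step of the cited
    paper is not reproduced, and all optical parameters are assumed."""
    coeff = math.pi * pitch * pitch / (wavelength * z)
    h = np.empty((n, n), dtype=np.uint8)
    for iy in range(n):
        dy = iy - cy
        r2 = dy * dy + cx * cx      # squared radius at the row start (dx = -cx)
        dx = -cx
        for ix in range(n):
            h[iy, ix] = 1 if math.cos(coeff * r2) > 0.0 else 0
            r2 += 2 * dx + 1        # recurrence: one addition per pixel
            dx += 1
    return h
```

A full CGH would superimpose many such zone plates, one per object point.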
Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide
International Nuclear Information System (INIS)
The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.
Calculation reduction method for color computer-generated hologram using color space conversion
Shimobaba, Tomoyoshi; Kakue, Takashi; Oikawa, Minoru; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Ito, Tomoyoshi
2013-01-01
We report a calculation reduction method for color computer-generated holograms (CGHs) using color space conversion. Color CGHs are generally calculated in RGB space. In this paper, we calculate color CGHs in other color spaces, for example, YCbCr color space. In YCbCr color space, an RGB image is converted to a luminance component (Y), a blue-difference chroma component (Cb), and a red-difference chroma component (Cr). In terms of the human eye, although the negligible difference of the luminance compone...
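The color-space conversion underlying this approach is the standard RGB-to-YCbCr transform. The BT.601 full-range coefficients are shown here as an assumption; the paper's exact variant may differ.

```python
def rgb_to_ycbcr(r, g, b):
    """ITU-R BT.601 full-range RGB -> YCbCr conversion.

    Y carries luminance; Cb/Cr carry chroma.  In a method like the one
    reported, the chroma components can be processed at reduced
    resolution because the eye is less sensitive to chroma detail.
    Illustrative sketch, not the authors' code."""
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b + 128.0
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b + 128.0
    return y, cb, cr
```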
Energy Technology Data Exchange (ETDEWEB)
Gonzalez Portilla, M. I.; Marquez, J.
2011-07-01
Radiological protection aims to limit the ionizing radiation received by people and equipment, which on numerous occasions requires protection shields. Although analytical formulas exist to characterize these shields for certain configurations, the design work may be very intensive in numerical calculations, so the most efficient way to design the shields is by means of computer programs that calculate dose and dose rates. In the present article we review the codes most frequently used to perform these calculations, and the techniques used by such codes. (Author) 13 refs.
An approach to first principles electronic structure calculation by symbolic-numeric computation
Directory of Open Access Journals (Sweden)
Akihito Kikuchi
2013-04-01
There is a wide variety of electronic structure calculation cooperating with symbolic computation, whose main purpose is to play an auxiliary (but not unimportant) role to the numerical computation. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power, and thus resort to the intensive use of computers, namely symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models, and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former case, when one uses a special atomic basis for a specific purpose, expressing the integrals as a combination of already known analytic functions may be very difficult. In the latter, one must rearrange a number of creation and annihilation operators into a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression to a tractable and computable form. This is the main motive for introducing symbolic computation as a forerunner of numerical computation, and their collaboration has won considerable successes. The present work should be classified as one such trial. However, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation: the present work is applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.
CPS: a continuous-point-source computer code for plume dispersion and deposition calculations
Energy Technology Data Exchange (ETDEWEB)
Peterson, K.R.; Crawford, T.V.; Lawson, L.A.
1976-05-21
The continuous-point-source computer code calculates concentrations and surface deposition of radioactive and chemical pollutants at distances from 0.1 to 100 km, assuming a Gaussian plume. The basic input is atmospheric stability category and wind speed, but a number of refinements are also included.
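A minimal sketch of the Gaussian plume concentration formula on which such a code is based follows, in the textbook form with ground reflection. CPS itself adds stability-class parameterizations, depletion, and deposition that are not shown; parameter names are illustrative.

```python
import math

def gaussian_plume(q, u, y, z, h, sigma_y, sigma_z):
    """Gaussian plume concentration with ground reflection.

    q: source strength (g/s), u: wind speed (m/s), h: effective release
    height (m), y/z: crosswind and vertical receptor coordinates (m),
    sigma_y/sigma_z: dispersion parameters (m) evaluated at the downwind
    distance of interest.  Standard textbook formula, not the CPS code."""
    lateral = math.exp(-0.5 * (y / sigma_y) ** 2)
    vertical = (math.exp(-0.5 * ((z - h) / sigma_z) ** 2)
                + math.exp(-0.5 * ((z + h) / sigma_z) ** 2))
    return q / (2.0 * math.pi * u * sigma_y * sigma_z) * lateral * vertical
```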
Gordon, S.; Mcbride, B.; Zeleznik, F. J.
1984-01-01
An addition to the computer program of NASA SP-273 is given that permits transport property calculations for the gaseous phase. Approximate mixture formulas are used to obtain viscosity and frozen thermal conductivity. Reaction thermal conductivity is obtained by the same method as in NASA TN D-7056. Transport properties for 154 gaseous species were selected for use with the program.
Comparison of computer codes for calculating dynamic loads in wind turbines
Spera, D. A.
1978-01-01
The development of computer codes for calculating dynamic loads in horizontal axis wind turbines was examined, and a brief overview of each code was given. The performance of individual codes was compared against two sets of test data measured on a 100 kW Mod-0 wind turbine. All codes are aeroelastic and include loads which are gravitational, inertial and aerodynamic in origin.
Easy calculations of lod scores and genetic risks on small computers.
Lathrop, G M; Lalouel, J M
1984-01-01
A computer program that calculates lod scores and genetic risks for a wide variety of both qualitative and quantitative genetic traits is discussed. An illustration is given of the joint use of a genetic marker, affection status, and quantitative information in counseling situations regarding Duchenne muscular dystrophy.
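For a phase-known, fully informative pedigree, the two-point lod score such a program evaluates reduces to a simple binomial likelihood ratio. The following is a toy illustration of that quantity, not the published program's algorithm.

```python
import math

def lod_score(recombinants, nonrecombinants, theta):
    """Two-point lod score for phase-known informative meioses:
    Z(theta) = log10( L(theta) / L(0.5) ) for r recombinants out of
    n = r + s meioses.  Real linkage programs handle unknown phase,
    penetrance, and quantitative traits; this sketch does not."""
    if theta <= 0.0 or theta >= 0.5:
        raise ValueError("theta must be in (0, 0.5)")
    n = recombinants + nonrecombinants
    log_l = (recombinants * math.log10(theta)
             + nonrecombinants * math.log10(1.0 - theta))
    log_l_null = n * math.log10(0.5)
    return log_l - log_l_null
```

With 10 nonrecombinant meioses and no recombinants, Z(0.1) is about 2.55, above the classical threshold of 3 only with more data.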
Computer Programs for Calculating the Isentropic Flow Properties for Mixtures of R-134a and Air
Kvaternik, Raymond G.
2000-01-01
Three computer programs for calculating the isentropic flow properties of R-134a/air mixtures which were developed in support of the heavy gas conversion of the Langley Transonic Dynamics Tunnel (TDT) from dichlorodifluoromethane (R-12) to 1,1,1,2-tetrafluoroethane (R-134a) are described. The first program calculates the Mach number and the corresponding flow properties when the total temperature, total pressure, static pressure, and mole fraction of R-134a in the mixture are given. The second program calculates tables of isentropic flow properties for a specified set of free-stream Mach numbers given the total pressure, total temperature, and mole fraction of R-134a. Real-gas effects are accounted for in these programs by treating the gases comprising the mixture as both thermally and calorically imperfect. The third program is a specialized version of the first program in which the gases are thermally perfect. It was written to provide a simpler computational alternative to the first program in those cases where real-gas effects are not important. The theory and computational procedures underlying the programs are summarized, the equations used to compute the flow quantities of interest are given, and sample calculated results that encompass the operating conditions of the TDT are shown.
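For the thermally and calorically perfect limiting case handled by the third program, the isentropic total-to-static ratios follow from the standard relations. This is a hedged sketch with assumed parameter names, not the NASA programs' real-gas treatment.

```python
def isentropic_ratios(mach, gamma=1.4):
    """Isentropic total-to-static ratios for a calorically perfect gas.

    Returns (T0/T, p0/p, rho0/rho) at the given Mach number.  The TDT
    programs treat R-134a/air mixtures as thermally and calorically
    imperfect; this perfect-gas sketch shows only the limiting
    relations they reduce to."""
    t_ratio = 1.0 + 0.5 * (gamma - 1.0) * mach * mach
    p_ratio = t_ratio ** (gamma / (gamma - 1.0))
    rho_ratio = t_ratio ** (1.0 / (gamma - 1.0))
    return t_ratio, p_ratio, rho_ratio
```

At Mach 1 in air (gamma = 1.4), T0/T = 1.2 and p0/p is about 1.893.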
Gordon, Sanford; Mcbride, Bonnie J.
1994-01-01
This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
Accurate calculation of computer-generated holograms using angular-spectrum layer-oriented method.
Zhao, Yan; Cao, Liangcai; Zhang, Hao; Kong, Dezhao; Jin, Guofan
2015-10-01
Fast calculation and correct depth cues are crucial issues in the calculation of computer-generated holograms (CGHs) for high quality three-dimensional (3-D) display. An angular-spectrum based algorithm for layer-oriented CGH is proposed. Angular spectra from each layer are synthesized as a layer-corresponded sub-hologram based on the fast Fourier transform without paraxial approximation. The proposed method can avoid the huge computational cost of the point-oriented method and yield accurate predictions of the whole diffracted field compared with other layer-oriented methods. CGHs of versatile formats of 3-D digital scenes, including computed tomography and 3-D digital models, are demonstrated with precise depth performance and advanced image quality. PMID:26480062
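A minimal angular-spectrum propagation step of the kind applied per layer (FFT-based, without paraxial approximation) can be sketched as follows; the sampling parameters are illustrative assumptions, not those of the paper.

```python
import numpy as np

def angular_spectrum_propagate(field, wavelength, pitch, z):
    """Propagate a sampled complex field a distance z via the angular
    spectrum method.  The transfer function uses the exact (non-paraxial)
    kz; evanescent components are suppressed.  Illustrative sketch of
    the per-layer step in layer-oriented CGH, not the authors' code."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=pitch)             # spatial frequencies
    fxx, fyy = np.meshgrid(fx, fx)
    arg = 1.0 / wavelength ** 2 - fxx ** 2 - fyy ** 2
    kz = 2.0 * np.pi * np.sqrt(np.maximum(arg, 0.0))
    transfer = np.exp(1j * kz * z) * (arg > 0)  # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(field) * transfer)
```

Propagating forward and then backward by the same distance recovers the input field when no evanescent components are present.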
Program POD. A computer code to calculate cross sections for neutron-induced nuclear reactions
International Nuclear Information System (INIS)
A computer code, POD, was developed for neutron-induced nuclear data evaluations. This program is based on four theoretical models, (1) the optical model to calculate shape-elastic scattering and reaction cross sections, (2) the distorted wave Born approximation to calculate neutron inelastic scattering cross sections, (3) the preequilibrium model, and (4) the multi-step statistical model. With this program, cross sections can be calculated for reactions (n, γ), (n, n'), (n, p), (n, α), (n, d), (n, t), (n, 3He), (n, 2n), (n, np), (n, nα), (n, nd), and (n, 3n) in the neutron energy range above the resonance region to 20 MeV. The computational methods and input parameters are explained in this report, with sample inputs and outputs. (author)
Energy Technology Data Exchange (ETDEWEB)
Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.
1980-03-01
A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.
International Nuclear Information System (INIS)
The CITHAN computer code was developed at IPEN (Instituto de Pesquisas Energeticas e Nucleares) to link the HAMMER computer code with a fuel depletion routine and to provide neutron cross sections in the appropriate format for the CITATION code. The problem arose from the efforts to adapt the new version, denominated HAMMER-TECHION, to the routine referred to. The HAMMER-TECHION computer code was elaborated by the Haifa Institute, Israel, within a project with EPRI. This version is at CNEN to be used for multigroup constant generation for neutron diffusion calculation within the scope of the new methodology to be adopted by CNEN. The theoretical formulation of the CITHAN computer code, tests and modifications are described. (Author)
TEMP: a computer code to calculate fuel pin temperatures during a transient
International Nuclear Information System (INIS)
The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method, and a Crank-Nicolson (implicit-explicit) method.
International Nuclear Information System (INIS)
A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports - ''Calculations of Reactor Accident Consequences,'' Version 2, NUREG/CR-2326 (SAND81-1994) and ''CRAC2 Model Descriptions,'' NUREG/CR-2552 (SAND82-0342), ''CRAC Calculations for Accident Sections of Environmental Statements, '' NUREG/CR-2901 (SAND82-1693), and ''Sensitivity and Uncertainty Studies of the CRAC2 Computer Code,'' NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs
Energy Technology Data Exchange (ETDEWEB)
White, J.E.; Roussin, R.W.; Gilpin, H.
1988-12-01
An interactive computer code for calculation of gas-phase chemical equilibrium (EQLBRM)
Pratt, B. S.; Pratt, D. T.
1984-01-01
A user-friendly, menu-driven, interactive computer program known as EQLBRM, which calculates the adiabatic equilibrium temperature and product composition resulting from the combustion of hydrocarbon fuels with air at specified constant pressure and enthalpy, is discussed. The program is developed primarily as an instructional tool to be run on small computers, allowing the user to economically and efficiently explore the effects of varying fuel type, air/fuel ratio, inlet air and/or fuel temperature, and operating pressure on the performance of continuous combustion devices such as gas turbine combustors, Stirling engine burners, and power generation furnaces.
Wilson, J. W.; Khandelwal, G. S.
1976-01-01
Calculational methods for estimation of dose from external proton exposure of arbitrary convex bodies are briefly reviewed. All the necessary information for the estimation of dose in soft tissue is presented. Special emphasis is placed on retaining the effects of nuclear reaction, especially in relation to the dose equivalent. Computer subroutines to evaluate all of the relevant functions are discussed. Nuclear reaction contributions for standard space radiations are in most cases found to be significant. Many of the existing computer programs for estimating dose in which nuclear reaction effects are neglected can be readily converted to include nuclear reaction effects by use of the subroutines described herein.
Fast calculation of spherical computer generated hologram using spherical wave spectrum method.
Jackin, Boaz Jessie; Yatagai, Toyohiko
2013-01-14
A fast calculation method for computer generation of spherical holograms is proposed. This method is based on wave propagation defined in the spectral domain and in spherical coordinates. The spherical wave spectrum and transfer function were derived from boundary value solutions to the scalar wave equation. It is a spectral propagation formula analogous to the angular spectrum formula in cartesian coordinates. A numerical method to evaluate the derived formula is suggested, which uses only N(log N)^2 operations for calculations on N sampling points. Simulation results are presented to verify the correctness of the proposed method. A spherical hologram for a spherical object was generated and reconstructed successfully using the proposed method.
International Nuclear Information System (INIS)
A fast-running computer code, SHETEMP, has been developed for the analysis of reactivity-initiated accidents under constant core cooling conditions, i.e., constant coolant temperature and heat transfer coefficient on the fuel rods. The code can predict core power and fuel temperature behaviour, and a control rod movement can be taken into account in the power control system. The objective of the code is to provide the fast-running capability and easy handling required for audit and design calculations, where a large number of calculations are performed for parameter surveys during a short period of time. The fast-running capability was realized by neglecting the fluid flow calculation. SHETEMP was assembled by extracting and combining the routines for reactor kinetics and heat conduction from the transient reactor thermal-hydraulic analysis code ALARM-P1, together with newly developed routines for the reactor power control system. Like ALARM-P1, SHETEMP solves the point reactor kinetics equations by a modified Runge-Kutta method and the one-dimensional transient heat conduction equations for slab and cylindrical geometries by the Crank-Nicolson method. The model for the reactor power control system takes into account the effects of a PID regulator and the control rod drive mechanism. In order to check for programming errors, results calculated by SHETEMP were compared with an analytic solution, and the appropriateness of the programming was verified. A sample calculation for a typical model also showed that the code satisfies the fast-running capability required for audit and design calculations. This report serves as the code manual of SHETEMP. It contains descriptions of a sample problem, the code structure, input data specifications and usage of the code, in addition to the analytical models and the results of code verification calculations. (author)
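The point reactor kinetics solved by such codes can be illustrated with a classical fourth-order Runge-Kutta step for a single delayed-neutron group. SHETEMP uses a modified Runge-Kutta method and more groups; the constants below are typical assumed values, not the code's data.

```python
def point_kinetics_step(p, c, rho, beta, lam, gl, dt):
    """One classical RK4 step of one-delayed-group point kinetics:
        dp/dt = (rho - beta)/gl * p + lam * c
        dc/dt = beta/gl * p - lam * c
    p: power, c: precursor concentration, rho: reactivity, beta: delayed
    fraction, lam: decay constant (1/s), gl: prompt generation time (s).
    Illustrative reduction, not SHETEMP's solver."""
    def f(p_, c_):
        return ((rho - beta) / gl * p_ + lam * c_,
                beta / gl * p_ - lam * c_)
    k1p, k1c = f(p, c)
    k2p, k2c = f(p + 0.5 * dt * k1p, c + 0.5 * dt * k1c)
    k3p, k3c = f(p + 0.5 * dt * k2p, c + 0.5 * dt * k2c)
    k4p, k4c = f(p + dt * k3p, c + dt * k3c)
    p_new = p + dt / 6.0 * (k1p + 2 * k2p + 2 * k3p + k4p)
    c_new = c + dt / 6.0 * (k1c + 2 * k2c + 2 * k3c + k4c)
    return p_new, c_new
```

At critical (rho = 0) with the equilibrium precursor concentration c = beta*p/(lam*gl), the state is stationary; a small positive reactivity makes the power rise.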
Ablinger, J; Blümlein, J; De Freitas, A; von Manteuffel, A; Schneider, C
2015-01-01
Three loop ladder and $V$-topology diagrams contributing to the massive operator matrix element $A_{Qg}$ are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable $N$ and the dimensional parameter $\varepsilon$. Given these representations, the desired Laurent series expansions in $\varepsilon$ can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis, also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural ...
OPT13B and OPTIM4 - computer codes for optical model calculations
International Nuclear Information System (INIS)
OPT13B is a computer code in FORTRAN for optical model calculations with automatic search. A summary of the different formulae used for computation is given. Numerical methods are discussed. The 'search' technique followed to obtain the set of optical model parameters that produces the best fit to experimental data in a least-squares sense is also discussed. The different subroutines of the program are briefly described. Input-output specifications are given in detail. A modified version of OPT13B is OPTIM4. It can be used for optical model calculations where the form factors of different parts of the optical potential are known point by point. A brief description of the modifications is given. (author)
Energy Technology Data Exchange (ETDEWEB)
Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)
2015-09-01
This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" which was project #165617 and proposal #13-0144. This report merely summarizes our work. Those interested in the technical details are encouraged to read the full published results, and contact the report authors for the status of the software and follow-on projects.
Kumar, A.; Graves, R. A., Jr.; Weilmuenster, K. J.
1980-01-01
A vectorized code, EQUIL, was developed for calculating the equilibrium chemistry of a reacting gas mixture on the Control Data STAR-100 computer. The code provides species mole fractions, mass fractions, and thermodynamic and transport properties of the mixture for given temperature, pressure, and elemental mass fractions. The code is set up for the electron, H, He, C, O, N system of elements. In all, 24 chemical species are included.
Fotland, Åge; Mehl, Sigbjørn; Sunnanå, Knut
1995-01-01
Standard 0-group indices distribution maps are now produced based on hand-drawn maps using AutoCad with some additional procedures. This paper briefly describes the method. The paper further describes ways of importing coastlines and survey data directly into standard computer programs such as AutoCad and SAS. Standard methods are used for gridding data, producing isolines and further calculation of abundance indices and presentation of distributions. Interactive editing of distribution maps ...
International Nuclear Information System (INIS)
The comparatively high dose and increasing frequency of computed tomography (CT) examinations have spurred the development of techniques for reducing radiation dose to imaging patients. Among these is the application of tube current modulation (TCM), which can be applied longitudinally along the body, rotationally about the body, or both. Existing computational models for calculating dose from CT examinations do not include TCM techniques. Dose calculations using Monte Carlo methods have previously been prepared for constant-current rotational exposures at various positions along the body, and for the principal exposure projections, for several sets of computational phantoms, including adult male and female and pregnant patients. Dose calculations for CT scans with TCM are prepared by appropriately weighting the existing dose data. Longitudinal TCM doses can be obtained by weighting the dose at each z-axis scan position by the relative tube current at that position. Rotational TCM doses are weighted using the relative organ doses from the principal projections as a function of the current at the rotational angle. Significant dose reductions of 15% to 25% to fetal tissues are found from simulations of longitudinal TCM schemes for pregnant patients of different gestational ages. Weighting factors for each organ in rotational TCM schemes applied to adult male and female patients have also been found. As the application of TCM techniques becomes more prevalent, the need for including TCM in CT dose estimates will necessarily increase. (author)
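The longitudinal weighting scheme described, scaling pre-computed per-slice organ dose by the relative tube current at each z position, can be sketched as follows. The data layout and names are illustrative assumptions, not the study's implementation.

```python
def longitudinal_tcm_dose(dose_per_slice, tube_current, reference_current):
    """Weight pre-computed constant-current organ dose contributions by
    the relative tube current at each z-axis position (longitudinal TCM).

    dose_per_slice: organ dose from each slice at the reference current;
    tube_current: modulated current at each slice (same length).
    Illustrative structure only."""
    if len(dose_per_slice) != len(tube_current):
        raise ValueError("one current value is needed per slice")
    return sum(d * (i / reference_current)
               for d, i in zip(dose_per_slice, tube_current))
```

With the current held at the reference value, the weighted sum reduces to the unmodulated total dose.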
International Nuclear Information System (INIS)
The studies of the last 12 years on the CABRI, SCARABEE and PHEBUS projects are summarized. The report describes the object and genesis of the cores, the evolution of the core concept and the associated neutronic problems. The calculational scheme used is presented, together with its qualification. The formalism and the qualification of the different modules of GOLEM are presented: COXYS, a module of physical analysis to determine the best energetic and spatial mesh for the case of interest; GOLU.B, the input data management module; VAREC, the calculation module for perturbations due to materials, which enables computation of perturbed flux and reactivity variation; VARYX, the calculation module for geometric perturbations; TRACASYN, the module for 3D power shape calculation; and finally TRACASTORE, the module for management and graphic exploitation of results. Usage directions are then given for these different modules. Qualification results show that GOLEM is able to analyse the fine physics of many various cases, to calculate by perturbation effects greater than 5000 pcm, and to rebuild perturbed flux within margins near 3% for difficult situations, such as reactor voiding or spectral variation in a PWR. Furthermore, 3D hot spots are calculated within margins of a magnitude comparable to experimental ones.
Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry
2013-07-01
The effective delayed neutron fraction β plays an important role in the kinetics and static analysis of reactor physics experiments. It is used as the reactivity unit referred to as the "dollar". Usually it is obtained by computer simulation, owing to the difficulty of measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for the calculation of the effective delayed neutron fraction β. This method requires calculation of the adjoint neutron flux as a weighting function of the phase space inner products and is easy to implement in deterministic codes. With Monte Carlo codes, the solution of the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods, which do not require the explicit calculation of the adjoint neutron flux, have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated β as the ratio between the delayed and total multiplication factors (hence the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied with Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections. In recent years Meulekamp and van der Marck in 2006 and Nauchi and Kameyama
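The k-ratio estimate itself is a one-line computation once the total and prompt multiplication factors are available from two criticality runs (with and without delayed neutron data):

```python
def beta_eff_k_ratio(k_total, k_prompt):
    """Bretscher's k-ratio estimate of the effective delayed neutron
    fraction: beta_eff ~ (k_total - k_prompt) / k_total.  The two
    multiplication factors come from separate transport calculations;
    example values below are illustrative, not from the cited work."""
    return (k_total - k_prompt) / k_total
```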
Energy Technology Data Exchange (ETDEWEB)
Smith, P.D.
1978-02-01
A special purpose computer program, TRAFIC, is presented for calculating the release of metallic fission products from an HTGR core. The program is based upon Fick's law of diffusion for radioactive species. One-dimensional transient diffusion calculations are performed for the coated fuel particles and for the structural graphite web. A quasi steady-state calculation is performed for the fuel rod matrix material. The model accounts for nonlinear adsorption behavior in the fuel rod gap and on the coolant hole boundary. The TRAFIC program is designed to operate in a core survey mode; that is, it performs many repetitive calculations for a large number of spatial locations in the core. This is necessary in order to obtain an accurate volume integrated release. For this reason the program has been designed with calculational efficiency as one of its main objectives. A highly efficient numerical method is used in the solution. The method makes use of the Duhamel superposition principle to eliminate interior spatial solutions from consideration. Linear response functions relating the concentrations and mass fluxes on the boundaries of a homogeneous region are derived. Multiple regions are numerically coupled through interface conditions. Algebraic elimination is used to reduce the equations as far as possible. The problem reduces to two nonlinear equations in two unknowns, which are solved using a Newton Raphson technique.
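The abstract above reduces the release problem to two nonlinear equations in two unknowns solved by a Newton-Raphson technique. A minimal sketch of that final step, on a toy system (not TRAFIC's actual equations):

```python
import numpy as np

# Generic 2-D Newton-Raphson: at each step solve J(x) dx = -f(x)
# and update x until the step is small.
def newton_2d(f, jac, x0, tol=1e-12, max_iter=50):
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        dx = np.linalg.solve(jac(x), -f(x))
        x = x + dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy system standing in for TRAFIC's reduced equations:
#   x^2 + y^2 = 4,  x*y = 1
f = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
jac = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
root = newton_2d(f, jac, [2.0, 0.5])
```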
Brzuszek, Marcin; Daniluk, Andrzej
2006-11-01
Writing a concurrent program can be more difficult than writing a sequential one. The programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which calculate the layer coverages during the growth of thin epitaxial films and the corresponding RHEED intensities according to the kinematical approximation. The presented programs also contain graphical user interfaces, which enable displaying program data at run-time. New version program summary. Titles of programs: GROWTHGr, GROWTH06. Catalogue identifier: ADVL_v2_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Catalogue identifier of previous version: ADVL. Does the new version supersede the original program: No. Computer for which the new version is designed and others on which it has been tested: Pentium-based PC. Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT. Programming language used: Object Pascal. Memory required to execute with typical data: More than 1 MB. Number of bits in a word: 64 bits. Number of processors used: 1. No. of lines in distributed program, including test data, etc.: 20 931. Number of bytes in distributed program, including test data, etc.: 1 311 268. Distribution format: tar.gz. Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1
Plummer, L.N.; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.
1988-01-01
The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions at high concentrations, using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation indices, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction paths. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities and individual-ion activity coefficients. A database of Pitzer interaction parameters is provided at 25 °C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 °C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
Leal, Allan; Saar, Martin
2016-04-01
Computational methods for geochemical and reactive transport modeling are essential for the understanding of many natural and industrial processes. Most of these processes involve several phases and components, and quite often requires chemical equilibrium and kinetics calculations. We present an overview of novel methods for multiphase equilibrium calculations, based on both the Gibbs energy minimization (GEM) approach and on the solution of the law of mass-action (LMA) equations. We also employ kinetics calculations, assuming partial equilibrium (e.g., fluid species in equilibrium while minerals are in disequilibrium) using automatic time stepping to improve simulation efficiency and robustness. These methods are developed specifically for applications that are computationally expensive, such as reactive transport simulations. We show how efficient the new methods are, compared to other algorithms, and how easy it is to use them for geochemical modeling via a simple script language. All methods are available in Reaktoro, a unified open-source framework for modeling chemically reactive systems, which we also briefly describe.
Amirfattahi, Rassoul
2013-10-01
Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2^p algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, we propose in this paper a method for the exact calculation of the multiplicative complexity of radix-2^p algorithms. The methodology is described for the radix-2, radix-2^2 and radix-2^3 algorithms. Results show that radix-2^2 and radix-2^3 have significantly lower computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2^3 algorithm is slightly higher than in radix-2^2, the number of real multiplications for radix-2^3 is lower. This is because twiddle factors of a special form, which require fewer real multiplications, occur more frequently in the radix-2^3 algorithm.
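The baseline the abstract refines can be stated in one formula; a sketch of the standard radix-2 butterfly count (an upper bound that ignores the trivial twiddle factors whose removal is exactly what the twiddle-factor-template analysis makes precise):

```python
import math

# Radix-2 FFT: log2(N) stages, N/2 butterflies per stage, one complex
# twiddle-factor multiplication per butterfly.  Trivial factors such as
# W^0 = 1 are counted here, so this is an upper bound.
def radix2_complex_mults(n):
    assert n > 0 and n & (n - 1) == 0, "n must be a power of two"
    return (n // 2) * int(math.log2(n))

print(radix2_complex_mults(8))  # 12 butterfly multiplications for N = 8
```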
BALANCE : a computer program for calculating mass transfer for geochemical reactions in ground water
Parkhurst, David L.; Plummer, L. Niel; Thorstenson, Donald C.
1982-01-01
BALANCE is a Fortran computer program designed to define and quantify chemical reactions between ground water and minerals. Using (1) the chemical compositions of two waters along a flow path and (2) a set of mineral phases hypothesized to be the reactive constituents in the system, the program calculates the mass transfer (amounts of the phases entering or leaving the aqueous phase) necessary to account for the observed changes in composition between the two waters. Additional constraints can be included in the problem formulation to account for mixing of two end-member waters, redox reactions, and, in a simplified form, isotopic composition. The computer code and a description of the input necessary to run the program are presented. Three examples typical of ground-water systems are described. (USGS)
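At its core the mass-balance problem above is a linear system: each column is the elemental stoichiometry of a hypothesized phase, and the right-hand side is the observed change in water chemistry. A minimal sketch with illustrative numbers (not from the report):

```python
import numpy as np

# Columns: mol of element per mol of phase dissolved; positive mass
# transfer means the phase dissolves, negative means it precipitates.
#            calcite  gypsum  halite
A = np.array([[1.0,    1.0,    0.0],      # Ca
              [1.0,    0.0,    0.0],      # C (carbonate)
              [0.0,    0.0,    1.0]])     # Na
delta = np.array([1.5, 0.5, 2.0])         # mmol/kg change between two wells
transfer = np.linalg.solve(A, delta)      # mmol/kg of each phase
```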
Computational Issues Associated with Automatic Calculation of Acute Myocardial Infarction Scores
Destro-Filho, J. B.; Machado, S. J. S.; Fonseca, G. T.
2008-12-01
This paper presents a comparison among the three principal acute myocardial infarction (AMI) scores (Selvester, Aldrich, Anderson-Wilkins) as they are automatically estimated from digital electrocardiographic (ECG) files, in terms of memory occupation and processing time. Theoretical algorithm complexity is also provided. Our simulation study supposes that the ECG signal is already digitized and available within a computer platform. We perform 1,000,000 Monte Carlo experiments using the same input files, leading to average results that point out the drawbacks and advantages of each score. Since none of these calculations requires either large memory occupation or long processing times, automatic estimation is compatible with the real-time requirements associated with AMI urgency and with telemedicine systems, being faster than manual calculation even on simple, low-cost personal microcomputers.
Methods, algorithms and computer codes for calculation of electron-impact excitation parameters
Bogdanovich, P; Stonys, D
2015-01-01
We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize multireference atomic wavefunctions which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of the electron scattering codes are considered in the present work, both of them employing the configuration interaction method for the inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. The two versions differ only in the one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...
Energy Technology Data Exchange (ETDEWEB)
Lehoucq, Richard B.; Salinger, Andrew G.
1999-08-01
We present an approach for determining the linear stability of steady states of PDEs on massively parallel computers. Linearizing the transient behavior around a steady state leads to a generalized eigenvalue problem. The eigenvalues with largest real part are calculated using Arnoldi's iteration driven by a novel implementation of the Cayley transformation to recast the problem as an ordinary eigenvalue problem. The Cayley transformation requires the solution of a linear system at each Arnoldi iteration, which must be done iteratively for the algorithm to scale with problem size. A representative model problem of 3D incompressible flow and heat transfer in a rotating disk reactor is used to analyze the effect of algorithmic parameters on the performance of the eigenvalue algorithm. Successful calculations of leading eigenvalues for matrix systems of order up to 4 million were performed, identifying the critical Grashof number for a Hopf bifurcation.
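The key property the abstract relies on is that the Cayley transform maps the rightmost eigenvalues of the pencil to the largest-modulus eigenvalues, which Arnoldi finds first. A dense toy verification of that mapping (the paper does this matrix-free and iteratively; the spectrum below is illustrative):

```python
import numpy as np

# Cayley transform T = (A - sigma*I)^{-1} (A - mu*I) maps an eigenvalue
# lam of A to (lam - mu)/(lam - sigma); eigenvalues to the right of
# (sigma + mu)/2 land outside the unit circle.
sigma, mu = 10.0, -10.0
A = np.diag([1.0, -2.0, -5.0])             # known spectrum, rightmost is 1.0
T = np.linalg.solve(A - sigma * np.eye(3), A - mu * np.eye(3))
theta = np.linalg.eigvals(T)               # |theta| largest for lam = 1.0
```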
WOLF: a computer code package for the calculation of ion beam trajectories
Energy Technology Data Exchange (ETDEWEB)
Vogel, D.L.
1985-10-01
The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles will then be traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram, PISA, forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphics analysis of the computed results to be performed.
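The first step such a code performs, the vacuum-field solve, can be sketched in a few lines. This toy uses Jacobi iteration on a square grid rather than WOLF's triangular lattice and arbitrary boundary, with one electrode held at 1 V:

```python
import numpy as np

# Laplace's equation (Poisson with zero space charge) on a square:
# top boundary at 1 V, other boundaries grounded; Jacobi relaxation.
n = 33
phi = np.zeros((n, n))
phi[0, :] = 1.0                      # top electrode
for _ in range(2000):
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1] +
                              phi[1:-1, :-2] + phi[1:-1, 2:])
# By symmetry the converged potential at the centre is exactly 0.25 V.
```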
Analysis of shielding calculation methods for 16- and 64-slice computed tomography facilities
Energy Technology Data Exchange (ETDEWEB)
Moreno, C; Cenizo, E; Bodineau, C; Mateo, B; Ortega, E M, E-mail: c_morenosaiz@yahoo.e [Servicio de RadiofIsica Hospitalaria, Hospital Regional Universitario Carlos Haya, Malaga (Spain)
2010-09-15
The new multislice computed tomography (CT) machines require some new methods of shielding calculation, which need to be analysed. NCRP Report No. 147 proposes three shielding calculation methods based on the following dosimetric parameters: weighted CT dose index for the peripheral axis (CTDI{sub w,per}), dose-length product (DLP) and isodose maps. A survey of these three methods has been carried out. For this analysis, we have used measured values of the dosimetric quantities involved and also those provided by the manufacturer, making a comparison between the results obtained. The barrier thicknesses when setting up two different multislice CT instruments, a Philips Brilliance 16 or a Philips Brilliance 64, in the same room, are also compared. Shielding calculation from isodose maps provides more reliable results than the other two methods, since it is the only method that takes the actual scattered radiation distribution into account. It is concluded therefore that the most suitable method for calculating the barrier thicknesses of the CT facility is the one based on isodose maps. This study also shows that for different multislice CT machines the barrier thicknesses do not necessarily become bigger as the number of slices increases, because of the great dependence on technique used in CT protocols for different anatomical regions.
Energy Technology Data Exchange (ETDEWEB)
Yamaguchi, Kizashi [Institute for Nano Science Design Center, Osaka University, 1-3 Machikaneyama, Toyonaka, Osaka 560-8531, Japan and TOYOTA Physical and Chemical Research Institute, Nagakute, Aichi, 480-1192 (Japan); Nishihara, Satomichi; Saito, Toru; Yamanaka, Shusuke; Kitagawa, Yasutaka; Kawakami, Takashi; Yamada, Satoru; Isobe, Hiroshi; Okumura, Mitsutaka [Department of Chemistry, Graduate School of Science, Osaka University, 1-1 Machikaneyama, Toyonaka, Osaka 560-0043 (Japan)
2015-01-22
First-principles calculations of effective exchange integrals (J) in the Heisenberg model for diradical species were performed by both symmetry-adapted (SA) multi-reference (MR) and broken-symmetry (BS) single-reference (SR) methods. Mukherjee-type (Mk) state-specific (SS) MR coupled-cluster (CC) calculations using natural-orbital (NO) references from ROHF, UHF, UDFT and CASSCF solutions were carried out to elucidate J values for di- and poly-radical species. Spin-unrestricted Hartree-Fock (UHF) based coupled-cluster (CC) computations were also performed for these species. Comparison between UHF-NO(UNO)-MkMRCC and BS UHF-CC computational results indicated that spin contamination of the UHF-CC solutions still remains at the SD level. In order to eliminate the spin contamination, an approximate spin-projection (AP) scheme was applied to UCC, and the AP procedure indeed corrected the error to yield good agreement with MkMRCC in energy. CC doubles with spin-unrestricted Brueckner orbitals (UBD) was furthermore employed for these species, showing that the spin contamination involved in UHF solutions is largely suppressed, so that the AP scheme for UBCCD easily removed the rest of the spin contamination. We also performed spin-unrestricted pure- and hybrid-density functional theory (UDFT) calculations of diradical and polyradical species. Three different computational schemes for the total spin angular momenta were examined for the AP correction of hybrid (H) UDFT. HUDFT calculations followed by AP, HUDFT(AP), yielded S-T gaps that were qualitatively in good agreement with those of MkMRCCSD, UHF-CC(AP) and UB-CC(AP). Thus a systematic comparison among MkMRCCSD, UCC(AP), UBD(AP) and UDFT(AP) was performed concerning first-principles calculations of J values in di- and poly-radical species. It was found that the BS (AP) methods reproduce the MkMRCCSD results, indicating their applicability to large exchange-coupled systems.
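The AP correction in broken-symmetry calculations is commonly applied through Yamaguchi's spin-projection formula for J, which takes only the BS and high-spin energies and their total-spin expectation values. A sketch with illustrative numbers (not results from the paper):

```python
# Yamaguchi approximate spin-projection (AP) estimate of the effective
# exchange integral from broken-symmetry (BS) and high-spin (HS) solutions:
#   J = (E_BS - E_HS) / (<S^2>_HS - <S^2>_BS)
# Energies (hartree) and <S^2> values below are illustrative only.
HARTREE_TO_CM1 = 219474.63

def j_yamaguchi(e_bs, e_hs, s2_bs, s2_hs):
    return (e_bs - e_hs) / (s2_hs - s2_bs)

J = j_yamaguchi(e_bs=-150.2001, e_hs=-150.1990, s2_bs=1.02, s2_hs=2.01)
print(J * HARTREE_TO_CM1)  # J < 0: antiferromagnetic coupling
```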
Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S
2005-06-01
In calculations of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand on this and establish a general hierarchy of requirements for the treatment of geometrical approximations. These approximations fall into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally, concerning the orientations at each binding site, are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.
A Geometric Computational Model for Calculation of Longwall Face Effect on Gate Roadways
Mohammadi, Hamid; Ebrahimi Farsangi, Mohammad Ali; Jalalifar, Hossein; Ahmadi, Ali Reza
2016-01-01
In this paper a geometric computational model (GCM) has been developed for calculating the effect of the longwall face on the extension of the excavation-damaged zone (EDZ) above the gate roadways (main and tail gates), considering the advance longwall mining method. In this model, the stability of the gate roadways is investigated based on loading effects due to the EDZ and the caving zone (CZ) above the longwall face, which can extend the EDZ size. The structure of the GCM depends on four important factors: (1) geomechanical properties of the hanging wall, (2) dip and thickness of the coal seam, (3) CZ characteristics, and (4) pillar width. The investigations demonstrated that the extension of the EDZ is a function of pillar width. Considering the effect of pillar width, new mathematical relationships were presented to calculate the face influence coefficient and the characteristics of the extended EDZ. Furthermore, taking the GCM into account, a computational algorithm for stability analysis of gate roadways was suggested. Validation was carried out through instrumentation and monitoring results of a longwall face at the Parvade-2 coal mine in Tabas, Iran, demonstrating good agreement between the new model and the measured results. Finally, a sensitivity analysis was carried out on the effect of pillar width, bearing capacity of the support system and coal seam dip.
Energy Technology Data Exchange (ETDEWEB)
Honea, R.B.; Petrich, C.H.; Wilson, D.L.; Dillard, C.A.; Durfee, R.C.; Faber, J.A.
1979-04-01
This report documents methodology and computer software developed by Energy Division and Computer Sciences Division personnel at Oak Ridge National Laboratory (ORNL). The software is designed to quantify and automatically map geologic and other cost-related parameters as required to estimate coal mining costs. The software complements the detailed coal production cost models for both underground and surface mines which have been developed for the Electric Power Research Institute (EPRI) by NUS, Corp. These models require input variables such as coal seam thickness, coal seam depth, surface slope, etc., to estimate mining costs. This report provides a general overview of the software and methodology developed by ORNL to calculate some of these parameters, along with sample map output which indicates the geographical distribution of these geologic characteristics. A detailed user guide for implementing the software has been prepared and is included in the appendixes. (Sample input data which may be used to verify the operation of the software are available from ORNL.) Also included is a brief review of coal production, coal recovery, and coal resource calculation studies. This system will be useful to utilities and coal mine operators alike in estimating costs through comprehensive assessment before mining takes place.
Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.
2016-05-01
Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
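The nested harmonic sums in which the final results are expressed follow a simple recursive definition; a minimal exact-arithmetic sketch:

```python
from fractions import Fraction

# Nested harmonic sums: S_a(N) = sum_{i=1}^{N} sign(a)^i / i^|a|,
# nested as S_{a,b,...}(N) = sum_{i=1}^{N} sign(a)^i / i^|a| * S_{b,...}(i),
# with the empty sum defined as 1.
def S(indices, N):
    if not indices:
        return Fraction(1)
    a, rest = indices[0], indices[1:]
    sgn = -1 if a < 0 else 1
    return sum(Fraction(sgn**i, i**abs(a)) * S(rest, i) for i in range(1, N + 1))

print(S([1], 4))        # 25/12, the 4th harmonic number
print(S([2, 1], 3))     # a genuinely nested example
```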
DIST: a computer code system for calculation of distribution ratios of solutes in the purex system
Energy Technology Data Exchange (ETDEWEB)
Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment
1996-05-01
Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system, DIST, has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2} (DISTEX) and of Zr(IV) and Tc(VII) (DISTEXFP), together with calculation programs for the distribution ratios of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2} (DIST1), and of Zr(IV) and Tc(VII) (DIST2). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters appearing in empirical equations, using the DISTEX data which fulfil the assigned conditions, and apply them to calculate the distribution ratios of the respective solutes. Approximately 5,000 data points are stored in DISTEX and DISTEXFP. The present report describes (1) the specific features of the DIST1 and DIST2 codes with examples of calculations, and (2) the databases DISTEX and DISTEXFP and the program DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction and deletion functions; the annexes contain (3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G, (4) a user manual for DISTIN, (5) the source programs of DIST1 and DIST2, and (6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.
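The best-fit step described above, fitting an empirical distribution-ratio equation to stored data, can be sketched with a stand-in model. Here a power law log D = a + b log[HNO3] is fitted by least squares to illustrative data (not DISTEX values, and not DIST's actual equations):

```python
import numpy as np

# Fit log D = a + b*log(acid) to illustrative extraction data.
acid = np.array([0.5, 1.0, 2.0, 3.0])          # mol/L HNO3
D    = np.array([0.8, 1.9, 4.1, 6.5])          # measured distribution ratio
b, a = np.polyfit(np.log(acid), np.log(D), 1)  # best-fit slope and intercept
D_pred = np.exp(a) * acid**b                   # fitted empirical equation
```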
Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.
1996-01-01
Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier wave phase measurements due to certain multipath-producing objects. One is a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD); the other is a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code matched the measured results better than those of the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique, and characterized to a lesser fidelity using the DECAT technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers located near the GPS antennas can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.
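The millimetre-level phase shifts quoted above follow from the standard single-reflection carrier-phase multipath model; a hedged sketch (amplitude ratio and excess path are illustrative, not the ISS test's values):

```python
import math

# Carrier-phase error from one reflected ray of relative amplitude
# alpha and excess path delta_d:
#   psi  = 2*pi*delta_d/lambda
#   dphi = atan2(alpha*sin(psi), 1 + alpha*cos(psi))
WAVELENGTH_L1 = 0.1903                      # GPS L1 carrier wavelength, metres

def multipath_error_mm(alpha, delta_d, lam=WAVELENGTH_L1):
    psi = 2 * math.pi * delta_d / lam
    dphi = math.atan2(alpha * math.sin(psi), 1 + alpha * math.cos(psi))
    return dphi / (2 * math.pi) * lam * 1000.0   # phase error in millimetres

print(multipath_error_mm(alpha=0.5, delta_d=0.05))  # of order 10 mm
```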
Quantum computing applied to calculations of molecular energies: CH2 benchmark.
Veis, Libor; Pittner, Jiří
2010-11-21
Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform the full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of CH(2) molecule. This molecule was chosen as a benchmark, since its two lowest lying (1)A(1) states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.
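The role of the initial state noted above comes from phase estimation collapsing the register onto an eigenstate with probability equal to the squared overlap. A classical toy sketch of that bookkeeping (an illustrative 2x2 Hamiltonian, not CH2):

```python
import numpy as np

# Phase estimation returns eigenvalue E_k with probability
# |<initial|eigenstate_k>|^2, so the quality of the initial guess
# controls how often the ground-state energy is obtained.
H = np.array([[0.0, 0.2],
              [0.2, 1.0]])                 # toy Hamiltonian
evals, evecs = np.linalg.eigh(H)           # ascending eigenvalues
guess = np.array([1.0, 0.0])               # Hartree-Fock-like initial state
p_ground = abs(evecs[:, 0] @ guess) ** 2   # chance of measuring E0
```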
Berent, Jarosław
2010-01-01
This paper presents the new DNAStat version 2.1 for processing genetic profile databases and biostatistical calculations. The popularization of DNA studies employed in the judicial system has led to the necessity of developing appropriate computer programs. Such programs must, above all, address two critical problems, i.e. the broadly understood data processing and data storage, and biostatistical calculations. Moreover, in cases of terrorist attacks and mass natural disasters, the ability to identify victims by searching for related individuals is very important. DNAStat version 2.1 is an adequate program for such purposes. DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to versions 1.1 and 1.2. There were, however, only slight differences between those versions and the original one. DNAStat version 2.0 was launched in 2007, and the major program improvement was the introduction of group calculation options with potential application to the personal identification of victims of mass disasters and terrorism. The latest version, 2.1, has an option of language selection (Polish or English), which will enhance the usage and application of the program in other countries as well.
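The biostatistical core of such programs is the random-match probability of a multilocus profile. A minimal sketch under the usual Hardy-Weinberg and linkage-equilibrium assumptions (allele frequencies are illustrative, and this is not DNAStat's actual code):

```python
# Per-locus genotype frequency: 2pq for a heterozygote, p^2 for a
# homozygote; the multilocus random-match probability is their product.
def genotype_freq(p, q=None):
    return p * p if q is None else 2 * p * q

def random_match_probability(loci):
    rmp = 1.0
    for freqs in loci:          # each entry: (p,) homozygote or (p, q) het
        rmp *= genotype_freq(*freqs)
    return rmp

profile = [(0.11, 0.07), (0.22,), (0.05, 0.18)]   # three illustrative loci
print(random_match_probability(profile))
```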
Energy Technology Data Exchange (ETDEWEB)
Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: anees.dhabaan@emory.edu [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)
2015-03-15
Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU number) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered with use of 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis by using 2D DIR. Based on the intensity of paired MRI pixel values and HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On respective simulated brain and head and neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.
Energy Technology Data Exchange (ETDEWEB)
Proskuryakov, K.N.; Bogomazov, D.N.; Poliakov, N. [Moscow Power Engineering Institute (Technical University), Moscow (Russian Federation)
2007-07-01
A new special module for calculating coolant acoustical characteristics, for use with neutron-physics and thermal-hydraulics computer codes, has been worked out. The Russian computer code Rainbow has been selected for joint use with the developed module. This code system provides the possibility of calculating EFOCP (Eigen Frequencies of Oscillations of the Coolant Pressure) in any coolant acoustical element of the primary circuits of NPPs. EFOCP values have been calculated for transient and for stationary operation. The calculated results for nominal operation were compared with the measured EFOCP. For example, this comparison was carried out for the system 'pressurizer + surge line' of a WWER-1000 reactor: the calculated result, 0.58 Hz, practically coincides with the measured value (0.6 Hz). The EFOCP variations in transients are also shown. The presented results are intended to be useful for NPP vibration-acoustical certification. There are no serious difficulties in using this module with other computer codes.
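One can see why sub-hertz eigenfrequencies arise for a pressurizer-plus-surge-line system by treating it as a Helmholtz resonator; this is an assumption of the sketch, not necessarily the module's model, and the geometry and sound speed below are rough illustrative values, not the certified WWER-1000 data:

```python
import math

# Helmholtz resonator eigenfrequency: f = (c / 2*pi) * sqrt(A / (V * L)),
# with the surge line as the "neck" (area A, length L) and the
# pressurizer as the cavity (volume V).  All inputs are illustrative.
def helmholtz_frequency(c, area, volume, length):
    return c / (2 * math.pi) * math.sqrt(area / (volume * length))

f = helmholtz_frequency(c=1000.0, area=0.07, volume=100.0, length=40.0)
print(round(f, 2))   # ~0.67 Hz, the same order as the 0.6 Hz measurement
```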
Dekker, C. M.; Sliggers, C. J.
To spur on quality assurance for models that calculate air pollution, quality criteria for such models have been formulated. By satisfying these criteria, the developers of these models and the producers of software packages in this field can assure and account for the quality of their products. In this way, critics and users of such (computer) models can gain a clear understanding of the quality of the model. Quality criteria have been formulated for the development of mathematical models, for their programming (including user-friendliness), and for the after-sales service that is part of the distribution of such software packages. The criteria have been introduced into national and international frameworks to achieve standardization.
International Nuclear Information System (INIS)
The SMART-IST computer code models radionuclide behaviour in CANDU reactor containments during postulated accidents. It calculates nuclide concentrations in various parts of containment and releases of nuclides from containment to the atmosphere. The intended application of SMART-IST is safety and licensing analyses of public dose resulting from the releases of nuclides. SMART-IST has been developed and validated to meet the CSA N286.7 quality assurance standard, under the sponsorship of the Industry Standard Toolset (IST) partners, consisting of AECL and the Canadian nuclear utilities OPG, Bruce Power, NB Power and Hydro-Quebec. This paper presents an overview of the SMART-IST code, including its theoretical framework and models, and also presents typical examples of code predictions. (author)
Energy Technology Data Exchange (ETDEWEB)
Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Behring, A.; Bluemlein, J.; Freitas, A. de [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Manteuffel, A. von [Mainz Univ. (Germany). Inst. fuer Physik
2015-09-15
Three loop ladder and V-topology diagrams contributing to the massive operator matrix element A_Qg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis, also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
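The nested harmonic sums mentioned above have a simple recursive definition that can be evaluated exactly for integer N. A minimal sketch using exact rational arithmetic (the authors' toolbox works symbolically; this only illustrates the objects themselves):

```python
from fractions import Fraction

def nested_harmonic_sum(indices, n):
    """Nested harmonic sum S_{a1,...,ak}(N): for the outermost index a,
    sum over i = 1..N of sign(a)^i / i^|a| times the inner sum at i.
    Negative indices carry the alternating sign (-1)^i."""
    if not indices:
        return Fraction(1)
    a, rest = indices[0], indices[1:]
    total = Fraction(0)
    for i in range(1, n + 1):
        sign = -1 if (a < 0 and i % 2 == 1) else 1
        total += Fraction(sign, i ** abs(a)) * nested_harmonic_sum(rest, i)
    return total
```

For example, S_1(3) = 1 + 1/2 + 1/3 = 11/6, and the depth-2 sum S_{1,1}(N) satisfies the quasi-shuffle identity S_{1,1} = (S_1^2 + S_2)/2.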
Measurements and computer calculations of pulverized-coal combustion at Asnaes Power Station 4
Energy Technology Data Exchange (ETDEWEB)
Biede, O.; Swane Lund, J.
1996-07-01
Measurements have been performed on a front-fired 270 MW (net electrical output) pulverized-coal utility furnace with 24 swirl-stabilized burners, placed in four horizontal rows. Apart from continuous operational measurements, special measurements were performed as follows. At one horizontal level above the upper burner row, gas temperatures were measured by an acoustic pyrometer. At the same level and at the level of the second upper burner row, irradiation to the walls was measured in ten positions by means of specially designed 2π-thermal radiation meters. Fly-ash was collected and analysed for unburned carbon. The coal size distribution to each individual burner was measured. Eight different cases were measured. On a Colombian coal, three cases with different oxygen concentrations in the exit gas were measured at a load of 260 MW, and in addition, measurements were performed at reduced loads of 215 MW and 130 MW. On a South African coal blend, measurements were performed at a load of 260 MW with three different oxygen exit concentrations. Each case has been simulated by a three-dimensional numerical computer code for the prediction of the distribution of gas temperatures, species concentrations and thermal radiative net heat absorption on the furnace walls. Comparisons between measured and calculated gas temperatures, irradiation and unburned carbon are made. Measured results among the cases differ significantly, and the computational results agree well with the measured results. (au)
Hybrid approach for fast occlusion processing in computer-generated hologram calculation.
Gilles, Antonin; Gioia, Patrick; Cozot, Rémi; Morin, Luce
2016-07-10
A hybrid approach for fast occlusion processing in computer-generated hologram calculation is studied in this paper. The proposed method is based on the combination of two commonly used approaches that complement one another: the point-source and wave-field approaches. By using these two approaches together, the proposed method thus takes advantage of both of them. In this method, the 3D scene is first sliced into several depth layers parallel to the hologram plane. Light scattered by the scene is then propagated and shielded from one layer to another using either a point-source or a wave-field approach according to a threshold criterion on the number of points within the layer. Finally, the hologram is obtained by computing the propagation of light from the nearest layer to the hologram plane. Experimental results reveal that the proposed method does not produce any visible artifact and outperforms both the point-source and wave-field approaches. PMID:27409327
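A minimal sketch of the wave-field side of such a method: angular spectrum propagation between depth layers, with occlusion handled by masking the incoming field at each layer's opaque points (silhouette shielding). This is a generic textbook formulation, not the authors' implementation; pixel pitch and wavelength below are assumed values:

```python
import numpy as np

def propagate_angular_spectrum(field, wavelength, dz, pitch):
    """Propagate a sampled complex field over distance dz with the
    angular spectrum method (evanescent components suppressed)."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pitch)
    fy = np.fft.fftfreq(ny, d=pitch)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(1.0 / wavelength**2 - FX**2 - FY**2, 0.0)
    kz = 2.0 * np.pi * np.sqrt(arg)
    return np.fft.ifft2(np.fft.fft2(field) * np.exp(1j * kz * dz))

def shield_layer(field, opaque_mask):
    """Occlusion step: zero the incoming field where this layer's
    points are opaque, before adding the layer's own emission."""
    return field * np.where(opaque_mask, 0.0, 1.0)
```

The point-source alternative skips the two FFTs and sums spherical waves directly, which is cheaper only when a layer holds few points; switching between the two per layer is the hybrid idea.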
SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Ono, T; Araki, F [Faculty of Life Sciences, Kumamoto University, Kumamoto (Japan)
2014-06-01
Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using the GMctdospp (IMPS, Germany) based on the EGSnrc user code. The X-ray spectra and a bowtie filter for MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses received by individual patients.
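Evaluating organ doses from a voxel dose distribution reduces to averaging over an organ's segmentation mask and, for dose-volume histograms, sorting the voxel doses. A generic sketch of that post-processing step (not the GMctdospp workflow itself):

```python
import numpy as np

def mean_organ_dose(dose_grid, organ_mask):
    """Mean absorbed dose (e.g. mGy) over the voxels of one organ.
    dose_grid: 3D array of voxel doses; organ_mask: boolean array."""
    return float(dose_grid[organ_mask].mean())

def dvh(dose_grid, organ_mask):
    """Cumulative dose-volume histogram: dose levels (descending) and
    the fraction of organ volume receiving at least that dose."""
    doses = np.sort(dose_grid[organ_mask])[::-1]
    volume_fraction = np.arange(1, doses.size + 1) / doses.size
    return doses, volume_fraction
```

With equally sized voxels the mean over the mask is exactly the DVH-derived mean dose quoted in the abstract.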
Development of a Korean adult male computational phantom for internal dosimetry calculation
International Nuclear Information System (INIS)
A Korean adult male computational phantom was constructed based on the current anthropometric and organ volume data of Korean average adult male, and was applied to calculate internal photon dosimetry data. The stylised models of external body, skeleton, and a total of 13 internal organs (brain, gall bladder, heart, kidneys, liver, lungs, pancreas, spleen, stomach, testes, thymus, thyroid and urinary bladder) were redesigned based on the Oak Ridge National Laboratory (ORNL) adult phantom. The height of trunk of the Korean phantom was 8.6% less than that of the ORNL adult phantom, and the volumes of all organs decreased up to 65% (pancreas) except for brain, gall bladder wall and thymus. Specific absorbed fraction (SAF) was calculated using the Korean phantom and Monte Carlo code, and compared with those from the ORNL adult phantom. The SAF of organs in the Korean phantom was overall higher than that from the ORNL adult phantom. This was caused by the smaller organ volume and the shorter inter-organ distance in the Korean phantom. The self SAF was dominantly affected by the difference in organ volume, and the SAF for different source and target organs was more affected by the inter-organ distance than by the organ volume difference. The SAFs of the Korean stylised phantom differ from those of the ORNL phantom by 10-180%. The comparison study of internal dosimetry will be extended to tomographic phantom and electron source in the future. (authors)
Development of a computer code for shielding calculation in X-ray facilities
International Nuclear Information System (INIS)
The construction of an effective barrier against the ionizing radiation present in X-ray rooms requires consideration of many variables. The methodology used for specifying the thickness of primary and secondary shielding of a traditional X-ray room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the receptor. With these data it was possible to develop a computer program that identifies and uses these variables in functions obtained through regressions of the graphs given in NCRP Report 147 (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls as well as of the darkroom wall and adjacent areas. With this methodology in place, the program was validated by comparing its results with a base case provided by that report. The thicknesses obtained cover various materials such as steel, wood and concrete. After validation, the program was applied to a real radiographic room, whose visual construction was done with the help of software used for indoor and outdoor modeling. The barrier-calculation program resulted in a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3:01, published in September 2011.
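Shielding thickness is typically obtained by inverting a broad-beam transmission fit such as the Archer equation, whose parameters NCRP Report 147 tabulates per material and tube voltage. A hedged sketch of that calculation, not the program described above; the alpha, beta, gamma values in the test are illustrative, not tabulated ones:

```python
import math

def required_transmission(design_goal, distance_m, unshielded_kerma):
    """Required barrier transmission B = P * d^2 / K, with P the weekly
    shielding design goal and K the unshielded weekly air kerma at 1 m."""
    return design_goal * distance_m ** 2 / unshielded_kerma

def barrier_thickness(B, alpha, beta, gamma):
    """Invert the Archer broad-beam transmission model for thickness:
    x = (1/(alpha*gamma)) * ln((B**-gamma + beta/alpha) / (1 + beta/alpha)).
    Units of x follow the units of the fit parameters (e.g. mm)."""
    return (1.0 / (alpha * gamma)) * math.log(
        (B ** -gamma + beta / alpha) / (1.0 + beta / alpha))
```

B = 1 (no attenuation needed) gives zero thickness, and smaller required transmission gives a monotonically thicker barrier, as expected.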
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg
2016-02-01
We present a numerical method for multiphase chemical equilibrium calculations based on a Gibbs energy minimization approach. The method can accurately and efficiently determine the stable phase assemblage at equilibrium independently of the type of phases and species that constitute the chemical system. We have successfully applied our chemical equilibrium algorithm in reactive transport simulations to demonstrate its effective use in computationally intensive applications. We used FEniCS to solve the governing partial differential equations of mass transport in porous media using finite element methods in unstructured meshes. Our equilibrium calculations were benchmarked with GEMS3K, the numerical kernel of the geochemical package GEMS. This allowed us to compare our results with a well-established Gibbs energy minimization algorithm, as well as their performance on every mesh node, at every time step of the transport simulation. The benchmark shows that our novel chemical equilibrium algorithm is accurate, robust, and efficient for reactive transport applications, and it is an improvement over the Gibbs energy minimization algorithm used in GEMS3K. The proposed chemical equilibrium method has been implemented in Reaktoro, a unified framework for modeling chemically reactive systems, which is now used as an alternative numerical kernel of GEMS.
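The core idea, minimizing total Gibbs energy subject to mass balance, can be shown on a toy ideal A <-> B mixture where the single mole-balance constraint is built in. This is a stdlib-only illustration, not the Reaktoro/GEMS3K algorithm; the chemical potentials are arbitrary values in RT units:

```python
import math

def gibbs_energy(n_b, mu_a=0.0, mu_b=-1.0, n_total=1.0):
    """Total Gibbs energy (RT units) of an ideal A <-> B mixture:
    G = sum_i n_i * (mu_i + ln x_i), with n_a fixed by mass balance."""
    n_a = n_total - n_b
    x_a, x_b = n_a / n_total, n_b / n_total
    return n_a * (mu_a + math.log(x_a)) + n_b * (mu_b + math.log(x_b))

# Minimize G over the single free variable (mass balance is built in)
# by a fine grid search; production codes use constrained Newton steps.
grid = [i / 10000.0 for i in range(1, 10000)]
x_b_eq = min(grid, key=gibbs_energy)
```

At the minimum, ln(x_B/x_A) = mu_A - mu_B, so for the defaults x_b_eq should approach e/(1+e) ≈ 0.731; the same stationarity condition generalizes to the multiphase, multi-element case solved by the paper's method.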
Katz, D.; Cwik, T.; Sterling, T.
1998-01-01
This paper uses the parallel calculation of the radiation integral for examination of performance and compiler issues on a Beowulf-class computer. This type of computer, built from mass-market, commodity, off-the-shelf components, has limited communications performance and therefore also has a limited regime of codes for which it is suitable.
International Nuclear Information System (INIS)
A small-size EC 1010 computer is proposed for the calculation of dosimetric parameters of irradiation procedures on β-beam therapeutic units. A specially designed program calculates the dosimetric parameters for different methods of moving and static irradiation, taking tissue heterogeneity into account: multifield static irradiation, multizone rotation irradiation, and irradiation using dose-field-forming devices (V-shaped filters, edge blocks, a grid diaphragm). The computation of output parameters for each preset irradiation program takes no more than 1 min. The use of the EC 1010 computer for the calculation of dosimetric parameters of irradiation procedures considerably reduces calculation time, avoids possible errors and simplifies the preparation of documents.
Goc, Roman
2004-09-01
This paper describes m2rc3, a program that calculates Van Vleck second moments for solids with internal rotation of molecules, ions or their structural parts. Only rotations about C 3 axes of symmetry are allowed, but up to 15 axes of rotation per crystallographic unit cell are permitted. The program is very useful in interpreting NMR measurements in solids. Program summaryTitle of the program: m2rc3 Catalogue number: ADUC Program summary URL:http://cpc.cs.qub.ac.uk/summaries/ADUC Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland License provisions: none Computers: Cray SV1, Cray T3E-900, PCs Installation: Poznań Supercomputing and Networking Center ( http://www.man.poznan.pl/pcss/public/main/index.html) and Faculty of Physics, A. Mickiewicz University, Poznań, Poland ( http://www.amu.edu.pl/welcome.html.en) Operating system under which program has been tested: UNICOS ver. 10.0.0.6 on Cray SV1; UNICOS/mk on Cray T3E-900; Windows98 and Windows XP on PCs. Programming language: FORTRAN 90 No. of lines in distributed program, including test data, etc.: 757 No. of bytes in distributed program, including test data, etc.: 9730 Distribution format: tar.gz Nature of physical problem: The NMR second moment reflects the strength of the nuclear magnetic dipole-dipole interaction in solids. This value can be extracted from the appropriate experiment and can be calculated on the basis of Van Vleck formula. The internal rotation of molecules or their parts averages this interaction decreasing the measured value of the NMR second moment. The analysis of the internal dynamics based on the NMR second moment measurements is as follows. The second moment is measured at different temperatures. On the other hand it is also calculated for different models and frequencies of this motion. Comparison of experimental and calculated values permits the building of the most probable model of internal dynamics in the studied material. The program described
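The rigid-lattice part of a Van Vleck calculation is a lattice sum over inverse sixth powers of internuclear distances. A minimal powder-average sketch with the physical prefactors (gyromagnetic ratio, spin, mu_0) omitted, and without the C3 rotation averaging that m2rc3 performs:

```python
import numpy as np

def rigid_lattice_sum(positions, ref_index):
    """Powder-average Van Vleck lattice sum sum_j r_ij^-6 for the spin
    at ref_index; prefactors (gamma^4, hbar^2, I(I+1), ...) omitted.
    positions: (N, 3) array of nuclear coordinates."""
    r = np.linalg.norm(positions - positions[ref_index], axis=1)
    r = r[r > 0.0]  # drop the self-term
    return float(np.sum(r ** -6.0))
```

Internal rotation replaces each r_ij^-3 term by its average over the motion before squaring, which can only lower the result, matching the motional narrowing of the measured second moment described above.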
International Nuclear Information System (INIS)
The large-scale construction of atomic power stations results in a need for trainers to instruct power-station personnel. The present work considers one problem of developing training computer software, associated with the development of a high-speed algorithm for calculating the neutron field after a control-rod (CR) shift by the operator. The case considered here is that in which training units are developed on the basis of small computers of SM-2 type, which fall significantly short of the BESM-6 and EC-type computers used for the design calculations in terms of speed and memory capacity. Depending on the apparatus for solving the criticality problem, in a two-dimensional single-group approximation, the physical-calculation programs require ∼1 min of machine time on a BESM-6 computer, which translates to ∼10 min on an SM-2 machine. In practice, this time is even longer, since ultimately it is necessary to determine not the effective multiplication factor K_ef, but rather the local perturbations of the emergency-control (EC) system (to reach criticality) and the change in the neutron field on shifting the CR and the EC rods. This long time means that it is very problematic to use physical-calculation programs to work in dialog mode with a computer. The algorithm presented below allows the neutron field following a shift of the CR and EC rods to be calculated in a few seconds on a BESM-6 computer (tens of seconds on an SM-2 machine). This high speed may be achieved as a result of the preliminary calculation of the influence function (IF) for each CR. The IF may be calculated at high speed on a computer. Then it is stored in the external memory (EM) and, where necessary, used as the initial information.
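The speedup rests on superposing precomputed influence functions instead of re-solving the criticality problem online. A schematic first-order sketch of that superposition; the data layout and linear scaling by rod shift are assumptions for illustration:

```python
import numpy as np

def field_after_rod_shift(base_field, influence_funcs, rod_shifts):
    """First-order reconstruction of the neutron field after control-rod
    shifts: each rod's precomputed influence function, scaled by its
    shift, is superposed on the unperturbed field."""
    field = base_field.copy()
    for rod, dz in rod_shifts.items():
        field += dz * influence_funcs[rod]
    return field
```

Because the expensive diffusion solves happen once per rod, offline, the online trainer only performs cheap array additions, which is consistent with the seconds-scale response described in the abstract.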
Calculation of brain atrophy using computed tomography and a new atrophy measurement tool
Bin Zahid, Abdullah; Mikheev, Artem; Yang, Andrew Il; Samadani, Uzma; Rusinek, Henry
2015-03-01
Purpose: To determine if brain atrophy can be calculated by performing volumetric analysis on conventional computed tomography (CT) scans in spite of the relatively low contrast of this modality. Materials & Methods: CTs for 73 patients from the local Veterans Affairs database were selected. Exclusion criteria: AD, NPH, tumor, and alcohol abuse. Protocol: conventional clinical acquisition (Toshiba; helical, 120 kVp, X-ray tube current 300 mA, slice thickness 3-5 mm). A locally developed, automatic algorithm was used to segment the intracranial cavity (ICC) using (a) a white matter seed, (b) constrained growth, limited by the inner skull layer, and (c) topological connectivity. The ICC was further segmented into CSF and brain parenchyma using a threshold of 16 HU. Results: Age distribution: 25-95 yrs (mean 67 +/- 17.5 yrs). A significant correlation was found between age and CSF/ICC (r = 0.695, p < 0.01, 2-tailed). A quadratic model (y = 0.06 - 0.001x + 2.56x10^-5 x^2, where y = CSF/ICC and x = age) was a better fit to the data (r = 0.716, p < 0.01). This is in agreement with the MRI literature. For example, Smith et al. found the annual CSF/ICC increase in 58-94.5 y.o. individuals to be 0.2%/year, whereas our data, restricted to the same age group, yield 0.3%/year (0.2-0.4%/year, 95% C.I.). The slightly increased atrophy among elderly VA patients is attributable to the presence of other comorbidities. Conclusion: Brain atrophy can be reliably calculated using automated software and conventional CT. Compared to MRI, CT is more widely available, cheaper, and less affected by head motion due to ~100 times shorter scan time. Work is in progress to improve the precision of the measurements, possibly leading to assessment of longitudinal changes within the patient.
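The quadratic model above is an ordinary least-squares polynomial fit of CSF/ICC against age. A sketch on synthetic data generated from the reported coefficients (the data here are simulated with assumed noise, not the study's measurements):

```python
import numpy as np

rng = np.random.default_rng(0)
age = rng.uniform(25.0, 95.0, 200)
# Synthetic CSF/ICC ratios following the abstract's quadratic trend
# y = 0.06 - 0.001*x + 2.56e-5*x^2, plus assumed Gaussian noise.
ratio = 0.06 - 0.001 * age + 2.56e-5 * age**2 + rng.normal(0.0, 0.005, age.size)

# Least-squares fit of y = c2*x^2 + c1*x + c0, as in the paper's model.
c2, c1, c0 = np.polyfit(age, ratio, deg=2)
```

The recovered coefficients can then be evaluated with `np.polyval` to predict the atrophy ratio at any age within the fitted range.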
Institute of Scientific and Technical Information of China (English)
Zhishan Gao; Meimei Kong; Rihong Zhu; Lei Chen
2007-01-01
Interferometric optical testing using computer-generated holograms (CGHs) has provided an approach to highly accurate measurement of aspheric surfaces. While designing the CGH null correctors, we should make them with as small an aperture and as low a spatial frequency as possible, and with no zero slope of phase except at the center, for the sake of ensuring low risk of substrate figure error and feasibility of fabrication. On the basis of classic optics, a set of equations for calculating the phase function of the CGH is obtained. These equations lead us to find the dependence of the aperture and spatial frequency on the axial distance from the tested aspheric surface for the CGH. We also simulate the optical path difference error of the CGH relative to the accuracy of controlling the laser spot during fabrication. Meanwhile, we discuss the constraints used to avoid zero slope of phase except at the center and give a design result of the CGH for the tested aspheric surface. The results ensure the feasibility of designing a useful CGH to test aspheric surfaces fundamentally.
Imachi, Hiroto
2015-01-01
Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 on up to 10^5 cores. The procedure of GEP is decomposed into the two subprocedures of the reducer to the standard eigenvalue problem (SEP) and the solver of SEP. A hybrid solver is constructed when a routine is chosen for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. Detailed analysis of the results implies that the reducer can be a bottleneck in next-generation (exa-scale) supercomputers, which indicates guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
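The reducer step transforms the GEP A v = λ B v into a standard symmetric problem, classically via a Cholesky factorization of B; the parallel libraries named above implement distributed versions of this same transformation. A dense single-node sketch:

```python
import numpy as np

def reduce_gep_to_sep(A, B):
    """Reduce A v = lambda B v (A symmetric, B symmetric positive
    definite) to a standard symmetric eigenproblem via B = L L^T:
    C y = lambda y with C = L^-1 A L^-T, and v = L^-T y recovers
    the generalized eigenvectors."""
    L = np.linalg.cholesky(B)
    L_inv = np.linalg.inv(L)
    C = L_inv @ A @ L_inv.T
    return C, L_inv.T  # back-transform matrix

# Toy 2x2 problem standing in for the large electronic-structure matrices.
A = np.array([[2.0, 1.0], [1.0, 3.0]])
B = np.array([[2.0, 0.5], [0.5, 1.0]])
C, back = reduce_gep_to_sep(A, B)
w, y = np.linalg.eigh(C)  # the "SEP solver" stage
v = back @ y              # generalized eigenvectors
```

The reducer costs two triangular solves per matrix and is communication-heavy at scale, which is why the abstract flags it as a potential exa-scale bottleneck.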
DFT-Based Electronic Structure Calculations on Hybrid and Massively Parallel Computer Architectures
Briggs, Emil; Hodak, Miroslav; Lu, Wenchang; Bernholc, Jerry
2014-03-01
The latest generation of supercomputers is capable of multi-petaflop peak performance, achieved by using thousands of multi-core CPUs and often coupled with thousands of GPUs. However, efficient utilization of this computing power for electronic structure calculations presents significant challenges. We describe adaptations of the Real-Space Multigrid (RMG) code that enable it to scale well to thousands of nodes. A hybrid technique that uses one MPI process per node, rather than one per core, was adopted, with OpenMP and POSIX threads used for intra-node parallelization. This reduces the number of MPI processes by an order of magnitude or more and improves individual node memory utilization. GPU accelerators are also becoming common and are capable of extremely high performance for vector workloads. However, they typically have much lower scalar performance than CPUs, so achieving good performance requires that the workload is carefully partitioned and data transfer between CPU and GPU is optimized. We have used a hybrid approach utilizing MPI/OpenMP/POSIX threads and GPU accelerators to reach excellent scaling to over 100,000 cores on a Cray XE6 platform as well as a factor of three performance improvement when using a Cray XK7 system with CPU-GPU nodes.
International Nuclear Information System (INIS)
A computer program, HERMES, that provides the quantities usually needed in nuclear level density calculations has been developed. The applied model is the standard Fermi Gas Model (FGM), in which pairing correlations and shell effects are appropriately taken into account. The effects of additional nuclear structure properties, together with their inclusion in the computer program, are also considered. Using HERMES, a level density parameter systematics has been constructed for the mass range 41 ≤ A ≤ 253. (author)
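One common form of the Fermi Gas Model level density, with the spin-cutoff factor omitted and pairing treated as a simple back-shift, can be sketched as follows. The functional form is standard textbook material; it is not claimed to be HERMES's exact implementation:

```python
import math

def fgm_level_density(u_mev, a, pairing=0.0):
    """Fermi-gas level density (per MeV):
    rho(U) = (sqrt(pi)/12) * exp(2*sqrt(a*U_eff)) / (a**0.25 * U_eff**1.25),
    with pairing applied as a back-shift U_eff = U - Delta."""
    u_eff = u_mev - pairing
    if u_eff <= 0.0:
        return 0.0
    return (math.sqrt(math.pi) / 12.0) \
        * math.exp(2.0 * math.sqrt(a * u_eff)) \
        / (a ** 0.25 * u_eff ** 1.25)
```

The level density parameter `a` (MeV^-1) is exactly the quantity for which a systematics over 41 ≤ A ≤ 253 is constructed in the abstract.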
Watanabe, Ryosuke; Yamaguchi, Kazuhiro; Sakamoto, Yuji
2016-01-20
Computer generated hologram (CGH) animations can be made by switching many CGHs on an electronic display. Some fast calculation methods for CGH animations have been proposed, but none for viewpoint movement. We therefore designed a fast calculation method for CGH animations with viewpoint parallel shifts and rotation. A Fourier transform optical system was adopted to expand the viewing angle. Experiments showed that our method was over 6 times faster than the conventional method. Furthermore, the degradation in CGH animation quality was found to be sufficiently small.
Fast calculation of computer-generated-hologram on AMD HD5000 series GPU and OpenCL
Shimobaba, Tomoyoshi; Masuda, Nobuyuki; Ichihashi, Yasuyuki; Takada, Naoki
2010-01-01
In this paper, we report fast calculation of a computer-generated hologram using a new architecture of the HD5000 series GPU (RV870) made by AMD and its new software development environment, OpenCL. Using an RV870 GPU and OpenCL, we can calculate a CGH with 1,920 × 1,024 resolution from a 3D object consisting of 1,024 points in 30 milliseconds. This is approximately two times faster than a GPU made by NVIDIA.
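The point-source CGH computation that such GPU codes accelerate accumulates, at every hologram pixel, the phase of a spherical wave from each object point (here in the Fresnel approximation). A plain NumPy sketch of the underlying math, not the OpenCL kernel; parameter values are illustrative:

```python
import numpy as np

def point_source_cgh(points, amplitudes, nx, ny, pitch, wavelength):
    """Point-source CGH: sum Fresnel-approximated spherical waves from
    each object point over all hologram pixels.
    points: (N, 3) array of (x, y, z), z = distance to the hologram plane."""
    x = (np.arange(nx) - nx / 2) * pitch
    y = (np.arange(ny) - ny / 2) * pitch
    X, Y = np.meshgrid(x, y)
    k = 2.0 * np.pi / wavelength
    field = np.zeros((ny, nx), dtype=complex)
    for (px, py, pz), a in zip(points, amplitudes):
        # Fresnel (paraxial) expansion of the point-to-pixel distance.
        r = pz + ((X - px) ** 2 + (Y - py) ** 2) / (2.0 * pz)
        field += a * np.exp(1j * k * r)
    return field
```

The work is O(pixels × points), which is why a 1,920 × 1,024 hologram of 1,024 points maps so naturally onto a GPU's data-parallel hardware.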
Energy Technology Data Exchange (ETDEWEB)
Mayhall, D J; Stein, W; Gronberg, J B
2006-05-15
We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.
Energy Technology Data Exchange (ETDEWEB)
Jothi, S., E-mail: s.jothi@swansea.ac.uk [College of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP (United Kingdom); Winzer, N. [Fraunhofer Institute for Mechanics of Materials IWM, Wöhlerstraße 11, 79108 Freiburg (Germany); Croft, T.N.; Brown, S.G.R. [College of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP (United Kingdom)
2015-10-05
Highlights: • Characterized polycrystalline nickel microstructure using EBSD analysis. • Developed a meso-microstructural model based on the real microstructure. • Calculated effective diffusivity using experimental electrochemical permeation tests. • Calculated intergranular diffusivity of hydrogen using computational FE simulation. • Validated the computational simulation results against experimental results. - Abstract: Hydrogen induced intergranular embrittlement has been identified as a cause of failure of aerospace components such as combustion chambers made from electrodeposited polycrystalline nickel. Accurate computational analysis of this process requires knowledge of the differential in hydrogen transport in the intergranular and intragranular regions. The effective diffusion coefficient of hydrogen may be measured experimentally, though experimental measurement of the intergranular grain boundary diffusion coefficient of hydrogen requires significant effort. Therefore an approach to calculate the intergranular grain boundary hydrogen diffusivity using finite element analysis was developed. The effective diffusivity of hydrogen in polycrystalline nickel was measured using electrochemical permeation tests. Data from electron backscatter diffraction measurements were used to construct microstructural representative volume elements including details of grain size and shape and the volume fractions of grains and grain boundaries. A Python optimization code has been developed for the ABAQUS environment to calculate the unknown grain boundary diffusivity.
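For reference, the effective diffusivity from an electrochemical permeation transient is commonly extracted with the time-lag method, D_eff = L^2 / (6 * t_lag); whether the authors used exactly this estimator is an assumption:

```python
def effective_diffusivity(thickness_m, time_lag_s):
    """Time-lag estimate of effective diffusivity (m^2/s) from a
    permeation transient through a membrane of the given thickness:
    D_eff = L^2 / (6 * t_lag)."""
    return thickness_m ** 2 / (6.0 * time_lag_s)
```

This experimentally accessible D_eff is the quantity against which the finite-element model's unknown grain boundary diffusivity is calibrated in the optimization loop described above.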
Jaffe, L. D.
1984-01-01
The CONC/11 computer program designed for calculating the performance of dish-type solar thermal collectors and power systems is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. The CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.
A computer program for calculating relative-transmissivity input arrays to aid model calibration
Weiss, Emanuel
1982-01-01
A program is documented that calculates a transmissivity distribution for input to a digital ground-water flow model. Factors taken into account in the calculation are: aquifer thickness, ground-water viscosity and its dependence on temperature and dissolved solids, and permeability and its dependence on overburden pressure. Other factors affecting ground-water flow are indicated. With small changes in the program code, leakance could also be calculated. The purpose of these calculations is to provide a physical basis for efficient calibration, and to extend rational transmissivity trends into areas where model calibration is insensitive to transmissivity values.
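A transmissivity calculation of this kind multiplies hydraulic conductivity (permeability scaled by fluid density, gravity, and temperature-dependent viscosity) by aquifer thickness: T = K * b with K = k * rho * g / mu. A sketch using a Vogel-type viscosity fit for pure water; the documented program's exact relations (including the dissolved-solids and overburden corrections) are not reproduced here:

```python
def transmissivity(permeability_m2, thickness_m, temp_c,
                   rho=1000.0, g=9.81):
    """Aquifer transmissivity T = K * b (m^2/s), with hydraulic
    conductivity K = k * rho * g / mu and water viscosity mu (Pa*s)
    from the Vogel-type fit mu = 2.414e-5 * 10**(247.8 / (T_K - 140))."""
    mu = 2.414e-5 * 10.0 ** (247.8 / (temp_c + 273.15 - 140.0))
    return permeability_m2 * rho * g / mu * thickness_m
```

Warmer ground water is less viscous, so the same permeability and thickness yield a higher transmissivity, which is exactly the temperature dependence the program is documented as handling.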
Energy Technology Data Exchange (ETDEWEB)
Bakalov, Dimitar, E-mail: dbakalov@inrne.bas.bg [Bulgarian Academy of Sciences, INRNE (Bulgaria)
2015-08-15
The potential energy surface and the computational codes, developed for the evaluation of the density shift and broadening of the spectral lines of laser-induced transitions from metastable states of antiprotonic helium, fail to produce convergent results in the case of pionic helium. We briefly analyze the encountered computational problems and outline possible solutions of the problems.
da Silveira, Pedro Rodrigo Castro
2014-01-01
This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures: the Virtual Laboratory for Earth and Planetary Materials (VLab). VLab was developed to leverage the aggregated computational power of grid systems to solve…
International Nuclear Information System (INIS)
A computer programme which performs compound nucleus calculations using the Weisskopf-Ewing formalism is described. The programme calculates the cross-sections for multi-particle emission by treating the process as a series of stages in the cascade. The relevant compound-nucleus absorption cross-sections for particle channels are calculated with built-in optical-model routines, and gamma-ray emission is described by the giant dipole resonance formalism. Several choices for the final-nucleus level-density formula may be made using the level-density routine contained in the programme. The total cross-section for the emission of a particle at any particular stage is calculated, together with the cross-section as a function of energy. The probability of leaving the final nucleus in a state of any particular energy is also obtained. (author)
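In the Weisskopf-Ewing formalism the programme uses, the relative probability of evaporating a particle $b$ with channel energy $\varepsilon$ at a given cascade stage is governed by (a standard statement of the model, paraphrased rather than quoted from the code documentation):

```latex
\Gamma_b(\varepsilon)\,d\varepsilon \;\propto\;
(2s_b+1)\,\mu_b\,\varepsilon\,
\sigma_{\mathrm{inv}}(\varepsilon)\,
\rho_B\!\left(E^{*}-B_b-\varepsilon\right)d\varepsilon
```

where $s_b$ and $\mu_b$ are the spin and reduced mass of the emitted particle, $\sigma_{\mathrm{inv}}$ is the inverse (absorption) cross-section supplied here by the optical-model routines, $\rho_B$ is the level density of the residual nucleus, $E^{*}$ is the compound-nucleus excitation energy, and $B_b$ is the separation energy of particle $b$.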
A computationally efficient software application for calculating vibration from underground railways
International Nuclear Information System (INIS)
The PiP model is a software application with a user-friendly interface for calculating vibration from underground railways. This paper reports on the software, with a focus on its latest version and plans for future development. The software calculates the power spectral density of vibration due to a moving train on floating-slab track, with track irregularity described by typical spectra for tracks in good, average and bad condition. The latest version accounts for a tunnel embedded in a half-space by employing a toolbox developed at K.U. Leuven which calculates Green's functions for a multi-layered half-space.
Use of CYBER 203 and CYBER 205 computers for three-dimensional transonic flow calculations
Melson, N. D.; Keller, J. D.
1983-04-01
Experiences are discussed in modifying two three-dimensional transonic flow computer programs (FLO 22 and FLO 27) for use on the CDC CYBER 203 computer system. Both programs were originally written for serial machines. Several methods were attempted to optimize the execution of the two programs on the vector machine: leaving the program in scalar form (i.e., serial computation) with compiler software used to optimize and vectorize it, vectorizing parts of the existing algorithm, and incorporating a vectorizable algorithm (ZEBRA I or ZEBRA II) in the program. Comparison runs of the programs were made on CDC CYBER 175, CYBER 203, and two-pipe CDC CYBER 205 computer systems.
TRANS4: a computer code calculation of solid fuel penetration of a concrete barrier
International Nuclear Information System (INIS)
The computer code, TRANS4, models the melting and penetration of a solid barrier by a solid disc of fuel following a core disruptive accident. This computer code has been used to model fuel debris penetration of basalt, limestone concrete, basaltic concrete, and magnetite concrete. Sensitivity studies were performed to assess the importance of various properties on the rate of penetration. Comparisons were made with results from the GROWS II code.
HTR-2000: Computer program to accompany calculations during reactor operation of HTGR's
International Nuclear Information System (INIS)
HTR-2000, developed for the computational monitoring of pebble-bed high-temperature reactors with multi-pass fuel circulation, is closely coupled to the actual operation of the reactor. Using measured nuclear and thermo-hydraulic parameters, a detailed model of pebble flow, and exact information on fuel burnup, loading and discharge, it achieves an excellent simulation of the status of the reactor. The geometry is modelled in three dimensions, so asymmetries in the core can be taken into account in the nuclear and thermo-hydraulic calculations. A continuous simulation was performed over five years of AVR operation, and the agreement between calculated and measured data was very satisfactory. In addition, experiments performed at AVR to re-calculate the control rod worth were simulated. The analysis shows that, in the presence of a compensating absorber in the reactor core, the split reactivity worth of single absorbers can be determined by calculation but not by measurement. (orig.)
Fast neutron reaction data calculations with the computer code STAPRE-H
International Nuclear Information System (INIS)
The specific features of the STAPRE-H version are described. The influence of model options and parameters on the calculated results is illustrated, showing how a large body of correlated data can be accurately reproduced. (authors)
International Nuclear Information System (INIS)
Highlights: ► The atomic densities of light and heavy materials are calculated. ► The solution is obtained using the Runge–Kutta–Fehlberg method. ► The material depletion is calculated for constant-flux and constant-power conditions. - Abstract: The present work investigates an appropriate way to calculate the variations of nuclide composition in the reactor core during operation. Specific software has been designed for this purpose using C#. The mathematical approach is based on the solution of the Bateman differential equations using a Runge–Kutta–Fehlberg method. Material depletion at constant flux and constant power can be calculated with this software. The inputs include reactor power, time step, initial and final times, order of the Taylor series used to calculate the time-dependent flux, time unit, core material composition at the initial condition (consisting of light and heavy radioactive materials), acceptable error criterion, decay-constant library, cross-section database and calculation type (constant flux or constant power). The atomic densities of light and heavy fission products during reactor operation are obtained with high accuracy as the program outputs. The results from this method, compared with the analytical solution, show good agreement.
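The numerical approach described above can be sketched for the simplest case, a two-member decay chain A → B → (stable). This sketch uses a fixed-step classical fourth-order Runge-Kutta integrator for brevity (the work itself uses the adaptive Runge-Kutta-Fehlberg variant), and all names are illustrative:

```python
def bateman_rhs(n, lam_a, lam_b):
    """dN/dt for the chain A -> B -> (stable); n = [N_A, N_B]."""
    na, nb = n
    return [-lam_a * na, lam_a * na - lam_b * nb]

def rk4_step(n, h, lam_a, lam_b):
    """One classical fourth-order Runge-Kutta step of size h."""
    f = lambda state: bateman_rhs(state, lam_a, lam_b)
    k1 = f(n)
    k2 = f([n[i] + 0.5 * h * k1[i] for i in range(2)])
    k3 = f([n[i] + 0.5 * h * k2[i] for i in range(2)])
    k4 = f([n[i] + h * k3[i] for i in range(2)])
    return [n[i] + h / 6.0 * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i])
            for i in range(2)]

def deplete(n0, lam_a, lam_b, t_end, steps=1000):
    """Integrate the chain from n0 to time t_end with fixed steps."""
    n, h = list(n0), t_end / steps
    for _ in range(steps):
        n = rk4_step(n, h, lam_a, lam_b)
    return n
```

For this two-member chain the result can be checked against the analytic Bateman solution, which is how such depletion codes are typically verified.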
Directory of Open Access Journals (Sweden)
Carina Mari Aparici
2016-01-01
We present a case of a 69-year-old patient who underwent ascending aortic aneurysm repair with aortic valve replacement. On postsurgical day 12, he developed leukocytosis and low-grade fevers. The chest computed tomography (CT) showed a periaortic hematoma, which represents a postsurgical change from aortic aneurysm repair, and a small pericardial effusion. The abdominal ultrasound showed cholelithiasis without any sign of cholecystitis. Finally, a fluorodeoxyglucose (FDG)-positron emission tomography (PET)/CT examination was ordered to find the cause of fever of unknown origin, and it showed increased FDG uptake in the gallbladder wall, with no uptake in the lumen. FDG-PET/CT can diagnose acute cholecystitis in patients with nonspecific clinical symptoms and laboratory results.
Aparici, Carina Mari; Win, Aung Zaw
2016-01-01
We present a case of a 69-year-old patient who underwent ascending aortic aneurysm repair with aortic valve replacement. On postsurgical day 12, he developed leukocytosis and low-grade fevers. The chest computed tomography (CT) showed a periaortic hematoma which represents a postsurgical change from aortic aneurysm repair, and a small pericardial effusion. The abdominal ultrasound showed cholelithiasis without any sign of cholecystitis. Finally, a fluorodeoxyglucose (FDG)-positron emission tomography (PET)/CT examination was ordered to find the cause of fever of unknown origin, and it showed increased FDG uptake in the gallbladder wall, with no uptake in the lumen. FDG-PET/CT can diagnose acute cholecystitis in patients with nonspecific clinical symptoms and laboratory results. PMID:27625897
Energy Technology Data Exchange (ETDEWEB)
Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M
2006-07-01
The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment or dosimetry. The presentations were divided into two sessions: 1) methodology and 2) uses in industrial, medical or research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation; and second, a neural-network approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.
Computational Calorimetry: High-Precision Calculation of Host-Guest Binding Thermodynamics.
Henriksen, Niel M; Fenley, Andrew T; Gilson, Michael K
2015-09-01
We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van't Hoff equation. Excellent agreement between the direct and van't Hoff methods is demonstrated for both host-guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design.
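The indirect route mentioned above rests on the standard van't Hoff relation, which connects the temperature dependence of the binding free energy to the binding enthalpy:

```latex
\Delta G^{\circ}(T) = -RT\ln K, \qquad
\frac{\mathrm{d}\ln K}{\mathrm{d}(1/T)} = -\frac{\Delta H^{\circ}}{R}
\;\;\Longleftrightarrow\;\;
\Delta H^{\circ} = \frac{\partial\!\left(\Delta G^{\circ}/T\right)}{\partial\!\left(1/T\right)}
```

Here $K$ is the binding constant and $R$ the gas constant. Agreement between $\Delta H^{\circ}$ obtained this way and the value computed directly from end-point potential energies is the self-consistency check the authors describe.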
Emergency Doses (ED) - Revision 3: A calculator code for environmental dose computations
International Nuclear Information System (INIS)
The calculator program ED (Emergency Doses) was developed from several HP-41CV calculator programs documented in the report Seven Health Physics Calculator Programs for the HP-41CV, RHO-HS-ST-5P (Rittman 1984). The program was developed to enable estimates of offsite impacts more rapidly and reliably than was possible with the software available for emergency response at that time. The ED - Revision 3, documented in this report, revises the inhalation dose model to match that of ICRP 30, and adds simple estimates for air concentration downwind from a chemical release. In addition, the method for calculating the Pasquill dispersion parameters was revised to match the GENII code within the limitations of a hand-held calculator (e.g., plume rise and building wake effects are not included). The summary report generator for printed output, which had been present in the code from the original version, was eliminated in Revision 3 to make room for the dispersion model, the chemical release portion, and the methods of looping back to an input menu until there is no further change. This program runs on the Hewlett-Packard programmable calculators known as the HP-41CV and the HP-41CX. The documentation for ED - Revision 3 includes a guide for users, sample problems, detailed verification tests and results, model descriptions, code description (with program listing), and independent peer review. This software is intended to be used by individuals with some training in the use of air transport models. There are some user inputs that require intelligent application of the model to the actual conditions of the accident. The results calculated using ED - Revision 3 are only correct to the extent allowed by the mathematical models. 9 refs., 36 tabs
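A Gaussian-plume dispersion model of the kind referenced above can be sketched as follows. This is an illustrative stand-in, not ED's actual implementation: the power-law sigma coefficients are assumed placeholders for the tabulated Pasquill-Gifford values, and (as in ED - Revision 3) plume rise and building wake effects are omitted.

```python
import math

def sigma(x_m, a, b):
    """Illustrative power-law dispersion parameter, sigma = a * x^b (m)."""
    return a * x_m ** b

def plume_concentration(q, u, x, y, z, h,
                        ay=0.08, by=0.90, az=0.06, bz=0.85):
    """Ground-reflected Gaussian plume concentration.

    q: release rate, u: wind speed, (x, y, z): receptor position
    (downwind, crosswind, height), h: effective release height.
    The sigma coefficients are illustrative assumptions.
    """
    sy, sz = sigma(x, ay, by), sigma(x, az, bz)
    lateral = math.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (math.exp(-(z - h) ** 2 / (2 * sz ** 2)) +
                math.exp(-(z + h) ** 2 / (2 * sz ** 2)))  # ground reflection
    return q / (2 * math.pi * u * sy * sz) * lateral * vertical
```

The concentration is largest on the plume centerline (y = 0) and falls off as the receptor moves crosswind, which is the behavior the Pasquill parameterization encodes.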
Grimblat, Nicolas; Sarotti, Ariel M
2016-08-22
The calculations of NMR properties of molecules using quantum chemical methods have deeply impacted several branches of organic chemistry. They are particularly important in structural or stereochemical assignments of organic compounds, with implications in total synthesis, stereoselective reactions, and natural products chemistry. In studying the evolution of the strategies developed to support (or reject) a structural proposal, it becomes clear that the most effective and accurate ones involve sophisticated procedures to correlate experimental and computational data. Owing to their relatively high mathematical complexity, such calculations (CP3, DP4, ANN-PRA) are often carried out using additional computational resources provided by the authors (such as applets or Excel files). This Minireview will cover the state-of-the-art of these toolboxes in the assignment of organic molecules, including mathematical definitions, updates, and discussion of relevant examples.
International Nuclear Information System (INIS)
Hybrid data processing, which associates computer calculation of the processing filters with their use in a coherent optical set-up, may lead to real-time filtering. In principle, it is shown that instantaneous filtering of all known and unknown defects in images can be attained using a well-adapted electro-optical relay. Some synthetic holograms, holographic lenses with variable focusing, and a number of processing filters were calculated, all holograms being phase-coded in binary. The results were recorded on tape and displayed in delayed time on a 128x128-point liquid-crystal electro-optical relay, allowing the quality of reproduction of the computed holograms to be tested on a simple diffraction bench, and on a double-diffraction bench in the case of the image-filtering results.
Sauer, Stephan P. A.; Paidarová, Ivana; Čársky, Petr; Čurík, Roman
2016-05-01
In this paper we present calculations of the static polarizability and its derivatives for the adamantane molecule carried out at the density functional theory level using the B3LYP exchange-correlation functional and Sadlej's polarized valence triple zeta basis set. It is shown that the polarizability tensor is necessary to correct long-range behavior of DFT functionals used in electron-molecule scattering calculations. The impact of such a long-range correction is demonstrated on elastic and vibrationally inelastic electron collisions with adamantane, a molecule representing a large polyatomic target for electron scattering calculations. Contribution to the Topical Issue "Advances in Positron and Electron Scattering", edited by Paulo Limao-Vieira, Gustavo Garcia, E. Krishnakumar, James Sullivan, Hajime Tanuma and Zoran Petrovic.
WASP: A flexible FORTRAN 4 computer code for calculating water and steam properties
Hendricks, R. C.; Peller, I. C.; Baron, A. K.
1973-01-01
A FORTRAN 4 subprogram, WASP, was developed to calculate the thermodynamic and transport properties of water and steam. The temperature range is from the triple point to 1750 K, and the pressure range is from 0.1 to 100 MN/m2 (1 to 1000 bars) for the thermodynamic properties, to 50 MN/m2 (500 bars) for thermal conductivity, and to 80 MN/m2 (800 bars) for viscosity. WASP accepts any two of pressure, temperature, and density as input conditions. In addition, pressure and either entropy or enthalpy are also allowable input variables. This flexibility is especially useful in cycle analysis. The properties available in any combination as output include temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, surface tension, and the Laplace constant. The subroutine structure is modular so that the user can choose only those subroutines necessary for the calculations. Metastable calculations can also be made by using WASP.
International Nuclear Information System (INIS)
ACRO was developed as a computer program to calculate internal exposure doses resulting from acute or chronic inhalation and oral ingestion of radionuclides. The ICRP Task Group Lung Model (TGLM) was used as the inhalation model in ACRO, and a simple one-compartment model was used as the ingestion model. The program is written in FORTRAN IV, and it requires about 260 KB of memory.
SaiToh, Akira
2011-01-01
A C++ library, named ZKCM, has been developed for the purpose of multiprecision matrix calculations, which is based on the GNU MP and MPFR libraries. It is especially convenient for writing programs involving tensor-product operations, tracing-out operations, and singular-value decompositions. Its extension library, ZKCM_QC, for simulating quantum computing has been developed using the time-dependent matrix-product-state simulation method. This report gives a brief introduction to the libraries with sample programs.
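The tensor-product and tracing-out operations the library is built for can be illustrated generically. The sketch below is in Python with plain nested lists for clarity; it is not ZKCM's actual C++ API, only a demonstration of the two operations:

```python
def kron(a, b):
    """Tensor (Kronecker) product of two square matrices (nested lists)."""
    na, nb = len(a), len(b)
    return [[a[i // nb][j // nb] * b[i % nb][j % nb]
             for j in range(na * nb)] for i in range(na * nb)]

def partial_trace_second(m, dim_a, dim_b):
    """Trace out the second, dim_b-dimensional subsystem of a
    (dim_a*dim_b) x (dim_a*dim_b) matrix, returning a dim_a x dim_a matrix."""
    return [[sum(m[i * dim_b + k][j * dim_b + k] for k in range(dim_b))
             for j in range(dim_a)] for i in range(dim_a)]
```

For a product state rho_A ⊗ rho_B with tr(rho_B) = 1, tracing out the second subsystem recovers rho_A exactly, which makes a convenient sanity check for such routines.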
International Nuclear Information System (INIS)
A computer code system for fast calculation of activation and transmutation has been developed. The system consists of a driver code, cross-section libraries, flux libraries, a material library, and a decay library. The code is used to predict transmutations in a Ti-modified 316 stainless steel, a commercial ferritic alloy (HT9), and a V-15%Cr-5%Ti alloy in various magnetic fusion energy (MFE) test facilities and conceptual reactors
BETHSY 6.2TC test calculation with TRACE and RELAP5 computer code
International Nuclear Information System (INIS)
The TRACE code is still under development, and it will eventually have all the capabilities of RELAP5. The purpose of the present study was therefore to assess the accuracy of the TRACE calculation of the BETHSY 6.2TC test, a 15.24 cm equivalent-diameter horizontal cold-leg break. For the calculations, TRACE V5.0 Patch 1 and RELAP5/MOD3.3 Patch 4 were used. The overall results obtained with TRACE were similar to those obtained with RELAP5/MOD3.3, and the discrepancies were reasonable. (author)
International Nuclear Information System (INIS)
With the rapidly growing number of CT examinations, the consequential radiation risk has aroused more and more attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations in 1-year-old computational phantom were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database could be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations. (paper)
Pan, Yuxi; Qiu, Rui; Gao, Linfeng; Ge, Chaoyong; Zheng, Junzheng; Xie, Wenzhang; Li, Junli
2014-09-21
With the rapidly growing number of CT examinations, the consequential radiation risk has aroused more and more attention. The average dose in each organ during CT scans can only be obtained by using Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations in 1-year-old computational phantom were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses of axial scans are close to those of helical scans; therefore, this database could be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from the measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The result shows that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and the dose database provides a method for the estimation of 1-year-old patient doses in a variety of CT examinations.
Energy Technology Data Exchange (ETDEWEB)
Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)
2000-07-01
The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on five analytical methods which are the most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a useful tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in time and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature for each wellbore temperature data set analysed. STATIC_TEMP was written in Microsoft Fortran-77 for the MS-DOS environment using structured programming techniques. It runs on most IBM-compatible personal computers. The source code and its computational architecture, as well as the input and output files, are described in detail. Validation and application examples on the use of this computer code with wellbore temperature data (obtained from specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)
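One classical analytical method of the kind described above is Horner-plot extrapolation: the measured bottomhole temperature is fitted against the logarithm of the Horner time ratio, and the intercept at infinite shut-in time estimates the static formation temperature. The sketch below is a minimal least-squares version of that idea; the function name and formulation are illustrative, not taken from the code.

```python
import math

def horner_static_temperature(circulation_time_h, shut_in_times_h, bht_c):
    """Estimate static formation temperature by the Horner-plot method.

    Fits BHT = T_static + m * ln((t_c + dt) / dt) by least squares;
    the intercept at x = 0 (infinite shut-in time) is T_static.
    """
    x = [math.log((circulation_time_h + dt) / dt) for dt in shut_in_times_h]
    y = list(bht_c)
    n = len(x)
    xm, ym = sum(x) / n, sum(y) / n
    slope = (sum((xi - xm) * (yi - ym) for xi, yi in zip(x, y)) /
             sum((xi - xm) ** 2 for xi in x))
    return ym - slope * xm  # intercept at x = 0
```

As the shut-in time grows, the Horner ratio tends to 1 and its logarithm to 0, so the fitted intercept is the temperature the well would relax to, which is exactly what the five analytical methods in the code aim to recover from early shut-in data.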
International Nuclear Information System (INIS)
This investigation used symbolic manipulation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular integral and integro-differential equations which appear in radiative and mixed-mode energy transport. Contained in this report are seven papers which present the technical results as individual modules.
LWR-WIMS, a computer code for light water reactor lattice calculations
International Nuclear Information System (INIS)
LMR-WIMS is a comprehensive scheme of computation for studying the reactor physics aspects and burnup behaviour of typical lattices of light water reactors. This report describes the physics methods that have been incorporated in the code, and the modifications that have been made since the code was issued in 1972. (U.K.)
Energy Technology Data Exchange (ETDEWEB)
Frankel, J.I.
1997-09-01
This investigation used symbolic manipulation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular integral and integro-differential equations which appear in radiative and mixed-mode energy transport. Contained in this report are seven papers which present the technical results as individual modules.
Computer program calculates gamma ray source strengths of materials exposed to neutron fluxes
Heiser, P. C.; Ricks, L. O.
1968-01-01
Computer program contains an input library of nuclear data for 44 elements and their isotopes to determine the induced radioactivity for gamma emitters. Minimum input requires the irradiation history of the element, a four-energy-group neutron flux, specification of an alloy composition by elements, and selection of the output.
Beddard, Godfrey S.
2011-01-01
Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
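A minimal Metropolis calculation of the average energy for the harmonic oscillator case can be sketched as follows. The sketch samples the discrete levels E_n = n (in units of the level spacing, dropping the zero-point energy), so the exact average energy for comparison is 1/(exp(beta) - 1); all names and parameter values are illustrative, not taken from the article.

```python
import math
import random

def metropolis_average_energy(beta, steps=200_000, burn_in=10_000, seed=1):
    """Average energy of a harmonic oscillator with levels E_n = n,
    sampled with the Metropolis algorithm.

    Proposals move one level up or down; moves below the ground state
    are rejected (the walker stays put), preserving detailed balance.
    """
    rng = random.Random(seed)
    n, total, count = 0, 0.0, 0
    for step in range(steps + burn_in):
        trial = n + rng.choice((-1, 1))
        # Accept with probability min(1, exp(-beta * dE)).
        if trial >= 0 and rng.random() < math.exp(-beta * (trial - n)):
            n = trial
        if step >= burn_in:
            total += n
            count += 1
    return total / count
```

The appeal of the method, as the abstract notes, is that no partition function is ever evaluated: only energy differences between the current and proposed states enter the acceptance test.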
Blanchard, Frank N.
1980-01-01
Describes a FORTRAN IV program written to supplement a laboratory exercise dealing with quantitative x-ray diffraction analysis of mixtures of polycrystalline phases in an introductory course in x-ray diffraction. Gives an example of the use of the program and compares calculated and observed calibration data. (Author/GS)
Barbiric, Dora; Tribe, Lorena; Soriano, Rosario
2015-01-01
In this laboratory, students calculated the nutritional value of common foods to assess the energy content needed to answer an everyday life application; for example, how many kilometers can an average person run with the energy provided by 100 g (3.5 oz) of beef? The optimized geometries and the formation enthalpies of the nutritional components…
PABS: A Computer Program to Normalize Emission Probabilities and Calculate Realistic Uncertainties
International Nuclear Information System (INIS)
The program PABS normalizes relative particle emission probabilities to an absolute scale and calculates the relevant uncertainties on this scale. The program is written in Java using the JDK 1.6 library. For additional information about system requirements, the code itself, and compiling from source, see the README file distributed with this program. The mathematical procedures used are given.
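The normalize-and-propagate idea can be sketched as follows. This is a minimal illustration assuming independent input uncertainties and first-order propagation; PABS's actual mathematical procedures are given in its own documentation and may differ.

```python
import math

def normalize_probabilities(values, sigmas):
    """Normalize relative emission probabilities to unit sum and
    propagate uncertainties to first order (independent inputs assumed)."""
    total = sum(values)
    probs = [v / total for v in values]
    errs = []
    for i, v in enumerate(values):
        # d p_i / d v_j = (delta_ij * total - v_i) / total^2
        var = 0.0
        for j, sj in enumerate(sigmas):
            deriv = ((total - v) if i == j else -v) / total ** 2
            var += (deriv * sj) ** 2
        errs.append(math.sqrt(var))
    return probs, errs
```

Note that because each normalized probability depends on every input value, the propagated uncertainties are smaller than a naive per-component scaling would suggest; handling this correctly ("realistic uncertainties") is the point of such a program.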
DCHAIN 2: a computer code for calculation of transmutation of nuclides
International Nuclear Information System (INIS)
DCHAIN2 is a one-point depletion code which solves the coupled equation of radioactive growth and decay for a large number of nuclides by the Bateman method. A library of nuclear data for 1170 fission products has been prepared for providing input data to this code. The Bateman method surpasses the matrix exponential method in computational accuracy and in saving computer storage. However, most existing computer codes based on the Bateman method have shown serious drawbacks in treating cyclic chains and more than a few specific types of decay chains. The present code has surmounted the above drawbacks by improving the code FP-S, and has the following characteristics: (1) The code can treat any type of transmutation through decays or neutron-induced reactions. Multiple decays and reactions are allowed for a nuclide. (2) Unknown decay energy in the nuclear data library can be estimated. (3) The code constructs the decay scheme of each nuclide and breaks it up into linear chains. Nuclide names, decay types and branching ratios of mother nuclides are necessary as the input data for each nuclide. The order of nuclides in the library is arbitrary because each nuclide is distinguished by its nuclide name. (4) The code can treat cyclic chains by an approximation. A library of the nuclear data has been prepared for 1170 fission products, including the data for half-lives, decay schemes, neutron absorption cross sections, fission yields, and disintegration energies. While DCHAIN2 is used to compute the compositions, radioactivity and decay heat of fission products, the gamma-ray spectrum of fission products can also be computed by a separate code, FPGAM, using the composition obtained from DCHAIN2. (J.P.N.)
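For reference, the Bateman solution that such codes evaluate for each linear chain $N_1 \to N_2 \to \cdots$, with distinct decay constants $\lambda_i$ and only the chain head initially present, is (textbook form, not quoted from the code):

```latex
N_n(t) \;=\; N_1(0)\left(\prod_{i=1}^{n-1}\lambda_i\right)
\sum_{i=1}^{n}\frac{e^{-\lambda_i t}}
{\prod_{\substack{j=1 \\ j\neq i}}^{n}\left(\lambda_j-\lambda_i\right)}
```

The cyclic chains and repeated decay constants mentioned above break this closed form (the denominators vanish), which is why DCHAIN2 needs the special treatments it describes.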
International Nuclear Information System (INIS)
Computer codes incorporating advanced nuclear models (optical, statistical and pre-equilibrium decay nuclear reaction models) were used to calculate neutron cross sections needed for fusion reactor technology. The elastic and inelastic scattering, (n,2n), (n,p), (n,n'p), (n,d) and (n,γ) cross sections for the stable molybdenum isotopes Mo-92, 94, 95, 96, 97, 98 and 100, for incident neutron energies from about 100 keV (or threshold) to 20 MeV, were calculated using a consistent set of input parameters. The hydrogen production cross section, which determines the radiation damage in structural materials of fusion reactors, can be simply deduced from the presented results. More elaborate microscopic models of nuclear level density are required for high-accuracy calculations.
Bakkiyaraj, D.; Periandy, S.; Xavier, S.
2016-09-01
In this study, the spectroscopic properties of adenosine, a compound of clinical importance, were investigated computationally and the results were compared with experimental ones. Geometric optimization and conformational analysis were carried out, and vibrational spectroscopic properties were examined for the most stable conformer. NMR and TD-DFT studies on the title compound were conducted and compared with its experimental data. In addition, atomic charge distribution, NBO, frontier molecular orbital analysis, thermodynamic analysis and hyperpolarizability features were studied.
Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär
2016-01-01
To increase public awareness of theoretical materials physics, a small group of high school students was invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In the process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research. PMID:27505418
Ericsson, Jonas; Mathiesen, Christoffer; Sepahvand, Benjamin; Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär; Schröder, Elsebeth
2016-01-01
To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research projects at Chalmers University of Technology. The Chalmers research group explores methods for filtrating hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon, and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In this process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research.
Ericsson, Jonas; Husmark, Teodor; Mathiesen, Christoffer; Sepahvand, Benjamin; Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär; Schröder, Elsebeth
2016-01-01
To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in activated carbon filters. In this project, the students use graphene as an idealized model for activated carbon, and estimate the adsorption energy of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In this process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research. PMID:27505418
Multi-user software of radio therapeutical calculation using a computational network
International Nuclear Information System (INIS)
A hardware and software system has been designed for a radiotherapy department. It runs on a Novell Network operating-system platform, sharing the existing resources and those of the server; it is centralized, multi-user and offers greater safety. It resolves a variety of calculation problems and needs, patient management and administration; it is very fast and versatile, and contains a set of menus and options which may be selected with the mouse, arrow keys or abbreviated keys. (Author)
Stability Analysis of Large-Scale Incompressible Flow Calculations on Massively Parallel Computers
International Nuclear Information System (INIS)
A set of linear and nonlinear stability analysis tools has been developed to analyze steady-state incompressible flows in 3D geometries. The algorithms have been implemented to scale to hundreds of parallel processors. The linear stability of a steady-state flow is determined by calculating the rightmost eigenvalues of the associated generalized eigenvalue problem. Nonlinear stability is studied by bifurcation analysis techniques. The boundaries between desirable and undesirable operating conditions are determined for buoyant flow in a rotating-disk CVD reactor.
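The linear stability criterion described above can be sketched in a few lines: a steady state is stable when the rightmost eigenvalue of the generalized problem J x = λ M x lies in the left half-plane. The dense solver below is only an illustration of the criterion, assuming a nonsingular mass matrix; the scalable eigensolvers the abstract refers to work very differently.

```python
import numpy as np

def rightmost_eigenvalue(J, M):
    """Rightmost eigenvalue of the generalized problem J x = lambda M x.

    Assumes M is nonsingular, so the problem reduces to the ordinary
    eigenproblem inv(M) J x = lambda x (a dense toy, not a scalable solver).
    """
    vals = np.linalg.eigvals(np.linalg.solve(M, J))
    return vals[np.argmax(vals.real)]

# Toy Jacobian of a stable steady state: all eigenvalues in the left half-plane
J = np.array([[-2.0, 1.0],
              [0.0, -0.5]])
M = np.eye(2)
lam = rightmost_eigenvalue(J, M)
print(lam.real < 0)  # True: the steady state is linearly stable
```

For large sparse problems one would instead use an iterative eigensolver targeting the rightmost part of the spectrum, but the stability test itself is the same sign check on the real part.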
Institute of Scientific and Technical Information of China (English)
Reinhold Schneider; Thorsten Rohwedder; Alexey Neelov; Johannes Blauert
2009-01-01
In this article, we analyse three related preconditioned steepest descent algorithms, which are popular in Hartree-Fock and Kohn-Sham theory as well as in invariant subspace computations, from the viewpoint of minimization of the corresponding functionals constrained by orthogonality conditions. We exploit the geometry of the admissible manifold, i.e., the invariance with respect to unitary transformations, to reformulate the problem on the Grassmann manifold as the admissible set. We then prove asymptotic linear convergence of the algorithms under the condition that the Hessian of the corresponding Lagrangian is elliptic on the tangent space of the Grassmann manifold at the minimizer.
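The setting above can be made concrete with a toy version of such an algorithm: unpreconditioned steepest descent for the trace functional over orthonormal frames, with a QR retraction back to the manifold. This is only a sketch of the algorithm class the article analyses, not its preconditioned variants; the matrix and step size are illustrative.

```python
import numpy as np

def subspace_descent(A, k, steps=500, tau=0.1):
    """Minimize trace(X^T A X) over orthonormal n-by-k frames X, i.e. over
    the Grassmann manifold, by projected steepest descent with a QR
    retraction. Converges to the invariant subspace belonging to the k
    smallest eigenvalues of the symmetric matrix A."""
    rng = np.random.default_rng(0)
    X, _ = np.linalg.qr(rng.standard_normal((A.shape[0], k)))
    for _ in range(steps):
        G = A @ X
        G -= X @ (X.T @ G)                # project gradient onto tangent space
        X, _ = np.linalg.qr(X - tau * G)  # descent step + retraction
    return X

A = np.diag([1.0, 2.0, 5.0, 9.0])
X = subspace_descent(A, 2)
energy = np.trace(X.T @ A @ X)   # sum of the two smallest eigenvalues
print(energy)
```

The tangent-space projection is exactly the exploitation of unitary invariance mentioned in the abstract: components of the gradient that merely rotate the basis within span(X) are discarded.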
DITTY - a computer program for calculating population dose integrated over ten thousand years
Energy Technology Data Exchange (ETDEWEB)
Napier, B.A.; Peloquin, R.A.; Strenge, D.L.
1986-03-01
The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long-term nuclear waste disposal sites resulting from groundwater pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program design, data file requirements, input preparation, output interpretation, sample problems, and program-generated diagnostic messages.
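The central quantity here, a time integral of collective dose over a 10,000-year horizon, can be illustrated numerically. The sketch below is not DITTY's environmental-pathway modeling; it just integrates a hypothetical, exponentially decaying collective dose rate (all numbers invented) with the trapezoidal rule.

```python
import numpy as np

# Hypothetical source: collective dose rate D(t) [person-Sv/yr] that decays
# with a 1600-year half-life. Every number here is illustrative only.
half_life = 1600.0                          # years
decay = np.log(2.0) / half_life
t = np.linspace(0.0, 10_000.0, 100_001)     # years
dose_rate = 2.5e-3 * np.exp(-decay * t)     # person-Sv/yr

# Trapezoidal time integral -> person-Sv accumulated over the horizon
collective_dose = np.sum(0.5 * (dose_rate[1:] + dose_rate[:-1]) * np.diff(t))
print(f"{collective_dose:.3f} person-Sv over 10,000 years")
```

For this simple source term the integral has the closed form D0/λ · (1 − e^(−λT)), which makes a convenient check on the numerical quadrature.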
ALLDOS: a computer program for calculation of radiation doses from airborne and waterborne releases
International Nuclear Information System (INIS)
The computer code ALLDOS is described and instructions for its use are presented. ALLDOS generates tables of radiation doses to the maximum individual and to the population in the region of the release site. Acute or chronic releases of radionuclides to airborne and waterborne pathways may be considered. The code relies heavily on data files of dose conversion factors and environmental transport factors for generating the radiation doses. A source inventory data library may also be used to generate the release terms for each pathway. Codes available for preparation of the dose conversion factors are described, and a complete sample problem is provided describing preparation of the data files and execution of ALLDOS.
Parallel calculations on shared memory, NUMA-based computers using MATLAB
Krotkiewski, Marcin; Dabrowski, Marcin
2014-05-01
Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the available optimization and tuning options. Nonetheless, the advantages of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread-to-CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPUs.
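The OS-level tools mentioned (thread-to-CPU binding, which keeps first-touch allocations on the local NUMA node) are available to any process, independently of MATLAB. A minimal sketch using Python's Linux-only affinity calls, purely as an illustration of the binding step:

```python
import os

# Linux-specific: os.sched_getaffinity / os.sched_setaffinity manipulate the
# set of CPUs a process may run on. Pinning a worker to one CPU keeps its
# first-touch memory allocations on that CPU's NUMA node.
available = os.sched_getaffinity(0)   # CPU ids this process may currently use
print(f"runnable on {len(available)} CPUs")

one_cpu = {min(available)}
os.sched_setaffinity(0, one_cpu)      # pin the process to a single CPU
pinned = os.sched_getaffinity(0)

os.sched_setaffinity(0, available)    # restore the original mask
```

In a NUMA-aware run, each worker would be pinned before allocating and initializing its share of the data, so that pages land in local memory.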
International Nuclear Information System (INIS)
A calculational technique for quantifying the concentration of hydrogen generated by radiolysis in sealed radioactive waste containers was developed in a U.S. Department of Energy (DOE) study conducted by EG&G Idaho, Inc., and the Electric Power Research Institute (EPRI) TMI-2 Technology Transfer Office. The study resulted in report GEND-041, entitled "A Calculational Technique to Predict Combustible Gas Generation in Sealed Radioactive Waste Containers". The study also resulted in a presentation to the U.S. Nuclear Regulatory Commission (NRC) which gained acceptance of the methodology for use in ensuring compliance with NRC IE Information Notice No. 84-72 (NRC 1984) concerning the generation of hydrogen within packages. NRC IE Information Notice No. 84-72, "Clarification of Conditions for Waste Shipments Subject to Hydrogen Gas Generation", applies to any package containing water and/or organic substances that could radiolytically generate combustible gases. EPRI developed Radcalc, a simple computer program in a spreadsheet format utilizing the GEND-041 calculational methodology to predict hydrogen gas concentrations in low-level radioactive waste containers. The computer code was extensively benchmarked against TMI-2 (Three Mile Island) EPICOR II resin bed measurements. The benchmarking showed that the model predicted hydrogen gas concentrations within 20% of the measured concentrations. Radcalc for Windows was developed using the same calculational methodology. The code is written in Microsoft Visual C++ 2.0 and includes a Microsoft Windows compatible menu-driven front end. In addition to hydrogen gas concentration calculations, Radcalc for Windows also provides transportation and packaging information such as pressure buildup, total activity, decay heat, fissile activity, TRU activity, and transportation classifications.
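The generic radiolytic gas-generation estimate underlying such codes scales a G-value (molecules of H2 produced per 100 eV of absorbed decay energy) by the energy deposited. The sketch below illustrates that bookkeeping only; it is not Radcalc's actual correlations, and every input value is hypothetical.

```python
# Illustrative radiolysis estimate (not Radcalc's methodology).
EV_PER_J = 6.241509e18        # electron-volts per joule
AVOGADRO = 6.02214076e23      # molecules per mole

def h2_moles(decay_heat_w, absorbed_fraction, g_value, seconds):
    """Moles of H2 generated during `seconds` of irradiation.

    g_value is the radiolytic yield G(H2) in molecules per 100 eV of
    energy absorbed by the water/organic content of the waste."""
    energy_ev = decay_heat_w * absorbed_fraction * seconds * EV_PER_J
    return g_value * energy_ev / 100.0 / AVOGADRO

# 1 W of decay heat, 10% absorbed by radiolysis targets, G(H2)=0.45, one year
moles = h2_moles(1.0, 0.10, 0.45, 3.156e7)
print(f"{moles:.3f} mol H2")
```

Converting the accumulated moles to a concentration in the container's free gas volume (via the ideal gas law) then gives the quantity compared against flammability limits.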
Energy Technology Data Exchange (ETDEWEB)
Berna, G. A; Bohn, M. P.; Rausch, W. N.; Williford, R. E.; Lanning, D. D.
1981-01-01
FRAPCON-2 is a FORTRAN IV computer code that calculates the steady-state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, deformation, and failure histories of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (a) heat conduction through the fuel and cladding, (b) cladding elastic and plastic deformation, (c) fuel-cladding mechanical interaction, (d) fission gas release, (e) fuel rod internal gas pressure, (f) heat transfer between fuel and cladding, (g) cladding oxidation, and (h) heat transfer from cladding to coolant. The code contains necessary material properties, water properties, and heat transfer correlations. FRAPCON-2 is programmed for use on the CDC Cyber 175 and 176 computers. The FRAPCON-2 code is designed to generate initial conditions for transient fuel rod analysis by either the FRAP-T6 computer code or the thermal-hydraulic code RELAP4/MOD7 Version 2.
International Nuclear Information System (INIS)
The MAX phantom has been developed from existing segmented images of a male adult body, in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. In computational dosimetry, MAX can simulate the geometry of a human body under exposure to ionizing radiation, internal or external, with the objective of calculating the equivalent dose in organs and tissues for occupational, medical or environmental purposes of radiation protection. This study presents the methodology used to build the new computational exposure model MAX/EGS4: the geometric construction of the phantom; the development of algorithms for one-directional, divergent, and isotropic radioactive sources; new methods for calculating the equivalent dose in the red bone marrow and in the skin; and the coupling of the MAX phantom with the EGS4 Monte Carlo code. Finally, some radiation protection results, in the form of conversion coefficients between equivalent dose (or effective dose) and air kerma free-in-air for external photon irradiation, are presented and discussed. Comparing these results with similar data from other human phantoms, it is possible to conclude that the MAX/EGS4 coupling is satisfactory for the calculation of the equivalent dose in radiation protection. (author)
Energy Technology Data Exchange (ETDEWEB)
Hudritsch, W.W.; Smith, P.D.
1977-11-01
The one-dimensional computer program PADLOC is designed to analyze steady-state and time-dependent plateout of fission products in an arbitrary network of pipes. The problem solved is one of mass transport of impurities in a fluid, including the effects of sources in the fluid and on the plateout surfaces, convection along the flow paths, decay, adsorption on surfaces (plateout), and desorption from surfaces. These phenomena are governed by a system of coupled, nonlinear partial differential equations. The solution is achieved by (a) linearizing the equations about an approximate solution, employing a Newton-Raphson iteration technique, (b) employing a finite difference solution method with implicit time integration, and (c) employing a substructuring technique to logically organize the systems of equations for an arbitrary flow network.
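The transport problem described can be illustrated with a stripped-down version: implicit upwind finite differences for impurity transport along a single pipe, dc/dt + u·dc/dx = S − λc, i.e. convection with a volumetric source and decay, with zero-concentration inflow. This is a sketch only; PADLOC additionally couples a wall concentration for adsorption and desorption and handles whole networks.

```python
import numpy as np

def transport_step(c, u, dx, dt, source, lam):
    """One implicit (backward Euler) upwind step of
    dc/dt + u dc/dx = source - lam*c, with c = 0 at the inlet."""
    n = c.size
    r = u * dt / dx
    A = np.zeros((n, n))
    for i in range(n):
        A[i, i] = 1.0 + r + lam * dt
        if i > 0:
            A[i, i - 1] = -r          # upwind (upstream) neighbour
    return np.linalg.solve(A, c + dt * source)

n, u, dx, dt, lam = 50, 1.0, 0.1, 0.05, 0.01
c = np.zeros(n)
S = np.full(n, 2.0)                   # uniform volumetric source
for _ in range(2000):                 # integrate to an effective steady state
    c = transport_step(c, u, dx, dt, S, lam)
print(c[-1])                          # outlet concentration
```

The implicit treatment keeps the step stable regardless of the ratio u·dt/dx; in a real network code the dense solve would be replaced by a banded or substructured solver, as in (c) above.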
RUSAP: A computer program for the calculation of Roll-Up Solar Array Performance characteristics
Ross, R. G., Jr.; Coyner, J. V., Jr.
1973-01-01
RUSAP is a FORTRAN 4 computer program designed to determine the performance characteristics (power-to-weight ratio, blanket tension, structural member section dimensions, and resonant frequencies) of large-area, roll-up solar arrays of the single-boom, tensioned-substrate design. The program includes the determination of the size and weight of the base structure supporting the boom and blanket and the determination of the blanket tension and deployable boom stiffness needed to achieve the minimum-weight design for a specified frequency for the first mode of vibration. A complete listing of the program, a description of the theoretical background, and all information necessary to use the program are provided.
International Nuclear Information System (INIS)
The computer code NAIADQ is designed to simulate the course and consequences of non-destructive reactivity accidents in low-power, experimental, water-cooled reactor cores fuelled with metal plate elements. It is a coupled neutron kinetics-hydrodynamics-heat transfer code which uses point kinetics and one-dimensional thermohydraulic equations. Nucleate boiling, which occurs at the fuel surface during transients, is modelled by the growth of a superheated layer of water in which vapour is generated at a non-equilibrium rate. It is assumed that this vapour is formed at its saturation temperature and that it mixes homogeneously with the water in this layer. The code is written in FORTRAN IV and has been programmed to run as a catalogued procedure on an IBM operating system such as MVT or MVS, with facility for the inclusion of user routines.
Diamant, Roee
2016-01-01
Detection of hydroacoustic transmissions is a key enabling technology in applications such as depth measurements, detection of objects, and undersea mapping. To cope with the long channel delay spread and the low signal-to-noise ratio, hydroacoustic signals are constructed with a large time-bandwidth product, $N$. A promising detector for hydroacoustic signals is the normalized matched filter (NMF), whose detection threshold depends only on $N$, thereby obviating the need to estimate the characteristics of the sea ambient noise, which are time-varying and hard to estimate. While previous works analyzed the characteristics of the NMF, the available expressions are computationally complicated to evaluate for signals with large $N$ values. Specifically for hydroacoustic signals of large $N$ values, this paper presents approximations for the probability distribution of the NMF. These approximations are found extremely accurate in numerical simulations. We also o...
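The NMF statistic itself is simple to state: the correlation of the received window with the known signal, normalized by both signal energies, so the ambient noise power never needs to be estimated. The sketch below shows the statistic for a chirp-like probe with large $N$ (the paper's contribution, the statistic's probability distribution, is not reproduced here; the waveform and SNR are illustrative).

```python
import numpy as np

def nmf(x, s):
    """Normalized matched filter statistic in [0, 1]: correlation of the
    received window x with the known signal s, normalized by both energies."""
    return np.abs(np.vdot(s, x)) / (np.linalg.norm(s) * np.linalg.norm(x))

rng = np.random.default_rng(1)
N = 4096                                                 # large time-bandwidth product
s = np.exp(1j * np.pi * np.linspace(0.0, 1.0, N) ** 2)   # chirp-like probe signal
noise = (rng.standard_normal(N) + 1j * rng.standard_normal(N)) / np.sqrt(2)

stat_noise = nmf(noise, s)              # noise only: statistic near 0
stat_signal = nmf(s + 0.5 * noise, s)   # signal present: statistic near 1
print(stat_noise, stat_signal)
```

Because the statistic is scale-invariant in both arguments, the same detection threshold works whatever the (unknown) noise level, which is exactly the property the abstract highlights.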
da Silveira, Pedro R. C.; da Silva, Cesar R. S.; Wentzcovitch, Renata M.
2008-02-01
This paper describes the metadata and metadata management algorithms necessary to handle the concurrent execution of multiple tasks from a single workflow, in a collaborative service oriented architecture environment. Metadata requirements are imposed by the distributed workflow that calculates thermoelastic properties of materials at high pressures and temperatures. The scientific relevance of this workflow is also discussed. We explain the basic metaphor, the receipt, underlying the metadata management. We show the actual java representation of the receipt, and explain how it is converted to XML in order to be transferred between servers, and stored in a database. We also discuss how the collaborative aspect of user activity on running workflows could potentially lead to race conditions, how this affects requirements on metadata, and how these race conditions are precluded. Finally we describe an additional metadata structure, complementary to the receipts, that contains general information about the workflow.
GOBLIN computer code. Comparison between calculations and TLTA small break test
International Nuclear Information System (INIS)
GOBLIN calculations have been performed for two simulation tests of boiling water reactor (BWR) small-break loss-of-coolant accidents (LOCAs) which were conducted in the two-loop test apparatus (TLTA). The first test investigated the small break with nondegraded emergency core coolant (ECC) systems, and the second test studied the same small break but with degraded ECC systems, in which the high pressure core spray (HPCS) was assumed unavailable. Very good agreement between test data and calculations is achieved. The second test is the most challenging from a code comparison point of view, and the code's prediction of the complicated mass distribution pattern, which changes with time, is very satisfactory. In the first test, and to some extent late in the second test, multidimensional subchannel effects are evident in the core bundle region. These are not and cannot be reproduced by the code since the bundle model of GOBLIN is strictly one-dimensional. (Author)
A computer code for calculations in the algebraic collective model of the atomic nucleus
Welsh, T A
2016-01-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This, in particular, obviates the use of coefficients of fractional parentage. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [pi x q x pi]_0 and [pi x pi]_{LM}, where q_M are the model's quadrupole moments, and pi_N are corresponding conjugate momenta (-2 <= M, N <= 2). The code also provides ready access to SO(3)-reduced SO(5) Clebsch-Gordan coefficients through data files provided with the code.
Guerrero, A. F.; Mesa, J.
2016-07-01
Because of the way charged particles behave when they interact with biological material, proton therapy is shaping the future of radiation therapy in cancer treatment. The planning of radiation therapy is made up of several stages. The first is the diagnostic image, which gives an idea of the density, size and type of the tumor being treated; for this it is important to know how the particle beam interacts with the tissue. In this work, by using the Lindhard formalism and the Y.R. Waghmare model for the charge distribution of the proton, the electronic stopping power (SP) for a proton beam interacting with a liquid water target, in the range of proton energies 10^1 eV - 10^10 eV and taking into account all the charge states, is calculated.
Martínez-Cifuentes, Maximiliano; Clavijo-Allancan, Graciela; Zuñiga-Hormazabal, Pamela; Aranda, Braulio; Barriga, Andrés; Weiss-López, Boris; Araya-Maturana, Ramiro
2016-01-01
A series of a new type of tetracyclic carbazolequinones incorporating a carbonyl group at the ortho position relative to the quinone moiety was synthesized and analyzed by tandem electrospray ionization mass spectrometry (ESI/MS-MS), using collision-induced dissociation (CID) to dissociate the protonated species. Theoretical parameters such as the molecular electrostatic potential (MEP), local Fukui functions and the local Parr function for electrophilic attack, as well as proton affinity (PA) and gas-phase basicity (GB), were used to explain the preferred protonation sites. Transition states of some main fragmentation routes were obtained, and the energies calculated at the density functional theory (DFT) B3LYP level were compared with those obtained by ab initio quadratic configuration interaction with single and double excitation (QCISD). The results are in accordance with the observed distribution of ions. The nature of the substituents in the aromatic ring has a notable impact on the fragmentation routes of the molecules. PMID:27399676
Energy Technology Data Exchange (ETDEWEB)
Smith, Matthew W.; Dallmeyer, Ian; Johnson, Timothy J.; Brauer, Carolyn S.; McEwen, Jean-Sabin; Espinal, Juan F.; Garcia-Perez, Manuel
2016-04-01
Raman spectroscopy is a powerful tool for the characterization of many carbon species. The complex heterogeneous nature of chars and activated carbons has confounded complete analysis due to the additional shoulders observed on the D-band and the high-intensity valley between the D and G-bands. In this paper the effects of various vacancy and substitution defects have been systematically analyzed via molecular modeling using density functional theory (DFT), along with how these defects are manifested in the calculated gas-phase Raman spectra. The accuracy of these calculations was validated by comparison with (solid-phase) experimental spectra, with a small correction factor being applied to improve the accuracy of frequency predictions. The spectroscopic effects on the char species are best understood in terms of a reduced symmetry as compared to a "parent" coronene molecule. Based upon the simulation results, the shoulder observed in chars near 1200 cm-1 has been assigned to the totally symmetric A1g vibrations of various small polyaromatic hydrocarbons (PAHs) as well as those containing rings of seven or more carbons. Intensity between 1400 cm-1 and 1450 cm-1 is assigned to A1g-type vibrations present in small PAHs and especially those containing cyclopentane rings. Finally, band intensity between 1500 cm-1 and 1550 cm-1 is ascribed to predominantly E2g vibrational modes in strained PAH systems. A total of ten potential bands have been assigned between 1000 cm-1 and 1800 cm-1. These fitting parameters have been used to deconvolute a thermoseries of cellulose chars produced by pyrolysis at 300-700 °C. The results of the deconvolution show consistent growth of PAH clusters with temperature, development of non-benzyl rings as temperature increases, and loss of oxygenated features between 400 °C and 600 °C.
Energy Technology Data Exchange (ETDEWEB)
Yuan, Y.C. [Square Y, Orchard Park, NY (United States); Chen, S.Y.; LePoire, D.J. [Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Rothman, R. [USDOE Idaho Field Office, Idaho Falls, ID (United States)
1993-02-01
This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, semi-interactive program that can be run on an IBM or equivalent personal computer. The program language is FORTRAN-77. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors.
User's guide to SERICPAC: A computer program for calculating electric-utility avoided costs rates
Energy Technology Data Exchange (ETDEWEB)
Wirtshafter, R.; Abrash, M.; Koved, M.; Feldman, S.
1982-05-01
SERICPAC is a computer program developed to calculate average avoided cost rates for decentralized power producers and cogenerators that sell electricity to electric utilities. SERICPAC works in tandem with SERICOST, a program to calculate avoided costs, and determines the appropriate rates for the buying and selling of electricity between electric utilities and qualifying facilities (QFs) as stipulated under Section 210 of PURPA. SERICPAC contains simulation models for eight technologies including wind, hydro, biogas, and cogeneration. The simulations are converted into a diversified utility production, which can be either gross production or net production; the latter accounts for internal electricity usage by the QF. The program allows adjustments to the production to be made for scheduled and forced outages. The final output of the model is a technology-specific average annual rate. The report contains a description of the technologies and the simulations as well as a complete user's guide to SERICPAC.
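The aggregation step described, de-rating production for outages, netting internal use, and weighting by hourly avoided costs to obtain an average annual rate, can be sketched as a toy calculation. The function name and all input numbers below are hypothetical, not SERICPAC's actual models or data.

```python
# Toy illustration of the rate-aggregation step (hypothetical, not SERICPAC).
def average_annual_rate(production_kwh, avoided_cost, availability, internal_use_kwh):
    """Energy-weighted average rate ($/kWh): production is de-rated by an
    availability factor (scheduled/forced outages), netted for internal use,
    and weighted by the avoided cost in each period."""
    net = [max(p * availability - u, 0.0)
           for p, u in zip(production_kwh, internal_use_kwh)]
    revenue = sum(n * c for n, c in zip(net, avoided_cost))
    return revenue / sum(net)

prod = [100.0, 80.0, 120.0]   # kWh in three representative periods
cost = [0.05, 0.08, 0.06]     # $/kWh avoided cost in those periods
rate = average_annual_rate(prod, cost, availability=0.95,
                           internal_use_kwh=[5.0, 5.0, 5.0])
print(f"{rate:.4f} $/kWh")
```

The distinction between gross and net production in the abstract corresponds to whether the internal-use term is subtracted before weighting.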
Energy Technology Data Exchange (ETDEWEB)
Benzley, S.E.; Beisinger, Z.E.
1978-02-01
CHILES 2 is a finite-element computer program that calculates the strength of singularities in linear elastic bodies. A generalized quadrilateral finite element that includes a singular point at a corner node is incorporated in the code. The displacement formulation is used and interelement compatibility is maintained so that monotone convergence is preserved. Plane stress, plane strain, and axisymmetric conditions are treated. Isotropic and orthotropic crack tip singularity problems are solved by this version of the code, but any type of singularity may be properly modeled by modifying selected subroutines in the program.
Plummer, L. Niel; Jones, Blair F.; Truesdell, Alfred Hemingway
1976-01-01
WATEQF is a FORTRAN IV computer program that models the thermodynamic speciation of inorganic ions and complex species in solution for a given water analysis. The original version (WATEQ) was written in 1973 by A. H. Truesdell and B. F. Jones in Programming Language/One (PL/I). With but a few exceptions, the thermochemical data, speciation, coefficients, and general calculation procedure of WATEQF are identical to those of the PL/I version. This report notes the differences between WATEQF and WATEQ, demonstrates how to set up the input data to execute WATEQF, provides a test case for comparison, and makes available a listing of WATEQF. (Woodard-USGS)
International Nuclear Information System (INIS)
EQ3NR is a geochemical aqueous speciation-solubility FORTRAN program developed for application with the EQ3/6 software package. The program models the thermodynamic state of an aqueous solution by using a modified Newton-Raphson algorithm to calculate the distribution of aqueous species such as simple ions, ion pairs, and aqueous complexes. Input to EQ3NR primarily consists of data derived from total analytical concentrations of dissolved components and can also include pH, alkalinity, electrical balance, phase equilibrium (solubility) constraints, and a default value for either Eh, pe, or the logarithm of oxygen fugacity. The program evaluates the degree of disequilibrium for various reactions and computes either the saturation index (SI = log Q/K) or thermodynamic affinity (A = -2.303 RT log Q/K) for minerals. Individual values of Eh, pe, equilibrium oxygen fugacity, and Ah (redox affinity, a new parameter) are computed for aqueous redox couples. Differences in these values define the degree of aqueous redox disequilibrium. EQ3NR can be used alone. It must be used to initialize a reaction-path calculation by EQ6, its companion program. EQ3NR reads a secondary data file, DATA1, created from a primary data file, DATA0, by the database preprocessor, EQTL. The temperature range for the thermodynamic data in the file is 0 to 300°C. Addition or deletion of species, or changes in associated thermodynamic data, are made by changing only the file; changes are not made to either EQ3NR or EQTL. Modification or substitution of equilibrium constant values can be selected on the EQ3NR INPUT file by the user at run time. EQ3NR and EQTL were developed in the FTN and CFT FORTRAN languages on the CDC 7600 and Cray-1 computers. Special FORTRAN conventions have been implemented for ease of portability to IBM, UNIVAC, and VAX computers.
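One of the quantities EQ3NR reports, the saturation index SI = log Q/K, is easy to illustrate. The sketch below computes SI for gypsum (CaSO4·2H2O); the activities and the log K value are assumed for illustration, not taken from EQ3NR's data files, and real codes compute activities from analytical concentrations via activity-coefficient models.

```python
import math

def saturation_index(log_q, log_k):
    """SI = log10(Q/K): negative -> undersaturated, positive -> supersaturated."""
    return log_q - log_k

# Gypsum dissolution: CaSO4·2H2O = Ca2+ + SO4^2- + 2 H2O
a_ca, a_so4, a_h2o = 1.2e-3, 9.0e-4, 1.0   # assumed activities
log_q = math.log10(a_ca) + math.log10(a_so4) + 2.0 * math.log10(a_h2o)
log_k_gypsum = -4.58                        # assumed log K, for illustration

si = saturation_index(log_q, log_k_gypsum)
print(f"SI = {si:.2f}")  # negative here: the water is undersaturated in gypsum
```

The thermodynamic affinity quoted in the abstract is the same quantity rescaled, A = −2.303 RT · log Q/K, so the two differ only by a temperature-dependent factor.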
International Nuclear Information System (INIS)
Nuclear technology development has pointed out the need for a new assessment of the fuel cycle back-end. Treatment and disposal of radioactive wastes arising from nuclear fuel reprocessing is known as one of the problems not yet satisfactorily solved, together with the separation of uranium and plutonium from fission products in highly irradiated fuels. The aim of this work is to present an improvement of the computer code for solvent extraction process calculation previously designed by the authors. The modeling of the extraction system has been modified by introducing a new method for calculating the distribution coefficients. The new correlations were based on deriving empirical functions not only for the apparent equilibrium constants but also for the solvation number. The mathematical model derived for calculating separation performance was then tested for up to ten components and twelve theoretical stages with minor modifications to the convergence criteria. Suitable correlations for the calculation of the distribution coefficients of uranium, plutonium, nitric acid, and fission products were constructed and used to successfully simulate several experimental conditions. (Author)
A computer code to calculate the fast induced signals by electron swarms in gases
Energy Technology Data Exchange (ETDEWEB)
Tobias, Carmen C.B. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Mangiarotti, Alessio [Universidade de Coimbra (Portugal). Dept. de Fisica. Lab. de Instrumentacao e Fisica Experimental de Particulas
2010-07-01
Full text: The study of electron transport parameters (i.e. drift velocity, diffusion coefficients and first Townsend coefficient) in gases is very important in several areas of applied nuclear science. For example, they are a relevant input to the design of particle detectors employing micro-structures (MSGCs, Micromegas, GEMs) and RPCs (resistive plate chambers). Moreover, if the data are accurate and complete enough, they can be used to derive a set of electron impact cross-sections with their energy dependence, which are a key ingredient in micro-dosimetry calculations. Despite the fundamental need for such data and the long history of the field, the gases of possible interest are so many, and the effort of obtaining good-quality data so time-consuming, that an important contribution can still be made. As an example, the electron drift velocity at moderate field strengths (up to 50 Td) in pure isobutane (a tissue-equivalent gas) has been measured only recently by the IPEN-LIP collaboration using a dedicated setup. The transport parameters are derived from the recorded electric pulse induced by a swarm started with a pulsed laser shining on the cathode. To aid the data analysis, a special code has been developed to calculate the induced pulse by solving the electron continuity equation including growth, drift and diffusion. A realistic profile of the initial laser beam is taken into account as well as the boundary conditions at the cathode and anode. The approach is either semi-analytic, based on the expression derived by P. H. Purdie and J. Fletcher, or fully numerical, using a finite difference scheme improved over the one introduced by J. de Urquijo et al. The agreement between the two will be demonstrated under typical conditions for the mentioned experimental setup. A brief discussion on the stability of the finite difference scheme will be given. The new finite difference scheme allows a detailed investigation of the importance of back diffusion to
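The fully numerical approach mentioned above can be sketched as an explicit finite-difference solution of the one-dimensional electron continuity equation with growth, drift and diffusion, from which an induced current follows by Ramo-Shockley weighting. All parameter values below are illustrative (not those of the IPEN-LIP setup), and the scheme is a plain FTCS/upwind discretization rather than the improved scheme the abstract references:

```python
import numpy as np

# dn/dt = D d2n/dx2 - W dn/dx + alpha*W*n   (drift velocity W, diffusion D,
# first Townsend coefficient alpha); illustrative gas/field parameters.
d = 1e-2                     # cathode-anode gap (m)
W = 1e5                      # drift velocity (m/s)
D = 0.05                     # longitudinal diffusion coefficient (m^2/s)
alpha = 100.0                # first Townsend coefficient (1/m)
nx = 200
dx = d / nx
dt = 0.2 * min(dx / W, dx**2 / (2 * D))   # CFL-limited explicit time step

x = (np.arange(nx) + 0.5) * dx
# laser-born swarm: narrow Gaussian near the cathode
n = np.exp(-((x - 0.05 * d) ** 2) / (2 * (0.01 * d) ** 2))

def step(n):
    lap = (np.roll(n, -1) - 2 * n + np.roll(n, 1)) / dx**2
    grad = (n - np.roll(n, 1)) / dx      # upwind difference (flow toward anode)
    lap[0] = lap[-1] = 0.0               # suppress periodic-wrap artifacts
    grad[0] = 0.0
    m = n + dt * (D * lap - W * grad + alpha * W * n)
    m[0] = m[-1] = 0.0                   # crude absorbing electrodes
    return m

current = []
for _ in range(500):
    n = step(n)
    current.append(W / d * n.sum() * dx)  # induced current ~ total drifting charge
```

With these numbers the swarm multiplies by roughly exp(alpha × drift distance), so the induced current grows as the avalanche drifts toward the anode.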
A computer programmed model for calculation of fall and dispersion of particles in the atmosphere
International Nuclear Information System (INIS)
An atmospheric model has been designed and developed to provide estimates of air concentrations or ground deposit densities of particles released in the atmosphere up to 90-km altitude. Particle density and diameter may range from 1 to 10 g/cm³ and from about 3 to 300 μm, respectively, for given instantaneous point or line sources. The particle cloud is allowed to move horizontally in accordance with analytically simulated winds and to fall at terminal velocity plus vertical air velocity. Small-scale cloud growth rate is specified empirically at values based on past instantaneous tracer experiments, while large-scale growth results from trajectory subdivision and divergence of new particle trajectories. Some specific computer runs at Sandia were done to assess hazards resulting from possible rocket abort situations and atmospheric re-entry from improper orbits of isotopic or reactor power supplies. The results have been compared with other modes of estimation derived from simpler models of world-wide contaminant spread. While existing data are insufficient for full verification, it is felt that the present model is one of the most comprehensive and realistic available. (author)
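For the smaller particles in the stated size range, the terminal fall speed entering such a trajectory calculation can be estimated from Stokes' law. A hedged sketch (sea-level air properties assumed; the actual model must also handle larger particles and high altitudes, where the Stokes regime no longer applies):

```python
def stokes_terminal_velocity(d_m, rho_p, rho_air=1.2, mu=1.8e-5, g=9.81):
    """Stokes-regime terminal settling speed (m/s); valid only for Re << 1.

    d_m    particle diameter (m)
    rho_p  particle density (kg/m^3)
    mu     dynamic viscosity of air (Pa*s), sea-level value assumed
    """
    return (rho_p - rho_air) * g * d_m**2 / (18 * mu)

# a 30-um particle of density 3 g/cm^3, mid-range of the model's stated domain
v = stokes_terminal_velocity(30e-6, 3000.0)   # ~0.08 m/s
```

The strong d² dependence is why the 3-300 μm range spans fall speeds differing by several orders of magnitude, and why fall time dominates dispersion for the largest particles.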
International Nuclear Information System (INIS)
A spatial frequency index method is proposed to cull occlusion and generate a hologram. Object points with the same spatial frequency are put into a set for their mutual occlusion. The hidden surfaces of the three-dimensional (3D) scene are quickly removed by culling the object points that are furthest from the hologram plane in the set. The phases of the plane wave, which depend only on the spatial frequencies, are precomputed and stored in a table. According to the spatial frequency of the object points, the phases of the plane wave for generating fringes are obtained directly from the table. Three 3D scenes are chosen to verify the spatial frequency index method. Both numerical simulation and optical reconstruction are performed. Experimental results demonstrate that the proposed method can cull the hidden surfaces of the 3D scene correctly. The occlusion effect of the 3D scene can be well reproduced. Computation is faster than with conventional methods but remains time-consuming. (paper)
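The table-lookup idea can be sketched as: quantize each object point's spatial frequency into a bin, keep only the point nearest the hologram plane per bin (the occlusion cull), then accumulate precomputed plane-wave phases by table lookup instead of per-point trigonometry. All dimensions and the random "scene" below are illustrative; the real method's frequency and occlusion geometry is more involved:

```python
import numpy as np

N = 1024                                   # hologram samples (illustrative)
pitch = 8e-6                               # sample pitch (m)
xs = (np.arange(N) - N / 2) * pitch

n_bins = 256                               # quantized spatial-frequency bins
freqs = np.linspace(-0.4, 0.4, n_bins) / pitch        # cycles/m per bin
phase_table = np.exp(1j * 2 * np.pi * freqs[:, None] * xs[None, :])  # precomputed

rng = np.random.default_rng(0)             # toy "scene" of 5000 object points
pts_bin = rng.integers(0, n_bins, 5000)    # each point's frequency-bin index
pts_depth = rng.uniform(0.1, 0.5, 5000)    # distance to hologram plane (m)
pts_amp = rng.uniform(0.5, 1.0, 5000)

# occlusion culling: within each bin keep only the point nearest the plane
nearest = {}
for i, z, a in zip(pts_bin, pts_depth, pts_amp):
    if i not in nearest or z < nearest[i][0]:
        nearest[i] = (z, a)

field = np.zeros(N, dtype=complex)
for i, (z, a) in nearest.items():
    field += a * phase_table[i]            # table lookup, no per-point trig

hologram = np.real(field)
```

The cull bounds the fringe accumulation by the number of bins rather than the number of object points, which is where the speed-up comes from.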
WASTE_MGMT: A computer model for calculation of waste loads, profiles, and emissions
Energy Technology Data Exchange (ETDEWEB)
Kotek, T.J.; Avci, H.I.; Koebnick, B.L. [Argonne National Lab., IL (United States). Environmental Assessment Div.
1996-12-01
WASTE_MGMT is a computational model developed to provide waste loads, profiles, and emissions for the US Department of Energy's Waste Management Programmatic Environmental Impact Statement (WM PEIS). The model was developed to account for the considerable variety of waste types and processing alternatives evaluated for the WM PEIS. The model is table-driven, with three types of fundamental waste management data defining the input: (1) waste inventories and characteristics; (2) treatment, storage, and disposal facility characteristics; and (3) alternative definition. The primary output of the model consists of tables of waste loads and contaminant profiles at facilities, as well as contaminant air releases for each treatment and storage facility at each site for each waste stream. The model is implemented in Microsoft® FoxPro® for MS-DOS® version 2.5 and requires a microcomputer with at least a 386 processor and a minimum of 6 Mbytes of memory and 10 Mbytes of disk space for temporary storage.
Waste-Mgmt: A computer model for calculation of waste loads, profiles, and emissions
Energy Technology Data Exchange (ETDEWEB)
Kotek, T.J.; Avci, H.I.; Koebnick, B.L.
1995-04-01
WASTE-MGMT is a computational model that provides waste loads, profiles, and emissions for the U.S. Department of Energy's Waste Management Programmatic Environmental Impact Statement (WM PEIS). The model was developed to account for the considerable variety of waste types and processing alternatives evaluated by the WM PEIS. The model is table-driven, with three types of fundamental waste management data defining the input: (1) waste inventories and characteristics; (2) treatment, storage and disposal facility characteristics; and (3) alternative definition. The primary output of the model consists of tables of waste loads and contaminant profiles at facilities, as well as contaminant air releases for each treatment and storage facility at each site for each waste stream. The model is implemented in Microsoft® FoxPro® for MS-DOS® version 2.5 and requires a microcomputer with at least a 386 processor and a minimum of 6 MBytes of memory and 10 MBytes of disk space for temporary storage.
Attia, Khalid A M; El-Abasawi, Nasr M; Abdel-Azim, Ahmed H
2016-04-01
A computational study has been carried out, electronically and geometrically, to select the most suitable ionophore for designing a novel sensitive and selective electrochemical sensor for phenazopyridine hydrochloride (PAP). This study revealed that sodium tetraphenylborate (NaTPB) fits PAP better than potassium tetrakis(p-chlorophenyl)borate (KTClPB). The sensor design is based on the ion pair of PAP with NaTPB, using dioctyl phthalate as a plasticizer. Under optimum conditions, the proposed sensor shows a slope of 59.5 mV per concentration decade in the concentration range of 1.0 × 10⁻² to 1.0 × 10⁻⁵ M with a detection limit of 8.5 × 10⁻⁶ M. The sensor exhibits very good selectivity for PAP with respect to a large number of interfering species, such as inorganic cations and sugars. The sensor enables the determination of PAP in the presence of its oxidative degradation product, 2,3,6-triaminopyridine, which is also its toxic metabolite. The proposed sensor has been successfully applied to the selective determination of PAP in pharmaceutical formulation. The obtained results have also been statistically compared to a reported electrochemical method, indicating no significant difference between the investigated method and the reported one with respect to accuracy and precision. PMID:26838908
Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework
Energy Technology Data Exchange (ETDEWEB)
Austin, T.M.; /Tech-X, Boulder; Cary, J.R.; /Tech-X, Boulder /Colorado U.; Bellantoni, L.; /Argonne
2009-05-01
We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations, and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas, including obtaining MHD spectra from time-domain simulations.
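The post-processing step named above, least squares stabilized by Tikhonov regularization and computed through the SVD, can be illustrated on a toy mode-amplitude fit. This is a generic sketch of the regularization technique, not the Werner-Cary extraction itself; the signal model and noise level are invented:

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 via SVD filter factors."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s**2 + lam**2)) * (U.T @ b))

# toy use: recover mode amplitudes from a noisy sampled multi-mode signal
t = np.linspace(0.0, 2.0, 400)
freqs = [3.0, 5.0, 7.0]                        # mode frequencies (Hz), illustrative
A = np.cos(2 * np.pi * np.outer(t, freqs))     # design matrix: one column per mode
x_true = np.array([1.0, 0.5, 0.25])            # true mode amplitudes
rng = np.random.default_rng(1)
b = A @ x_true + 1e-3 * rng.standard_normal(t.size)

x_hat = tikhonov_solve(A, b, lam=1e-2)         # regularized amplitude estimate
```

The filter factors s/(s² + λ²) damp the contribution of small singular values, which is what keeps the fit stable when modes are nearly degenerate and the design matrix is ill-conditioned.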
A computer code for calculations in the algebraic collective model of the atomic nucleus
Welsh, T. A.; Rowe, D. J.
2016-03-01
A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (−2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.
Vanderstraeten, Barbara; Reynaert, Nick; Paelinck, Leen; Madani, Indira; De Wagter, Carlos; De Gersem, Werner; De Neve, Wilfried; Thierens, Hubert
2006-09-01
The accuracy of dose computation within the lungs depends strongly on the performance of the calculation algorithm in regions of electronic disequilibrium that arise near tissue inhomogeneities with large density variations. There is a lack of data evaluating the performance of highly developed analytical dose calculation algorithms compared to Monte Carlo computations in a clinical setting. We compared full Monte Carlo calculations (performed by our Monte Carlo dose engine MCDE) with two different commercial convolution/superposition (CS) implementations (Pinnacle-CS and Helax-TMS's collapsed cone model Helax-CC) and one pencil beam algorithm (Helax-TMS's pencil beam model Helax-PB) for 10 intensity modulated radiation therapy (IMRT) lung cancer patients. Treatment plans were created for two photon beam qualities (6 and 18 MV). For each dose calculation algorithm, patient, and beam quality, the following set of clinically relevant dose-volume values was reported: (i) minimal, median, and maximal dose (Dmin, D50, and Dmax) for the gross tumor and planning target volumes (GTV and PTV); (ii) the volume of the lungs (excluding the GTV) receiving at least 20 and 30 Gy (V20 and V30) and the mean lung dose; (iii) the 33rd percentile dose (D33) and Dmax delivered to the heart and the expanded esophagus; and (iv) Dmax for the expanded spinal cord. Statistical analysis was performed by means of one-way analysis of variance for repeated measurements and Tukey pairwise comparison of means. Pinnacle-CS showed an excellent agreement with MCDE within the target structures, whereas the best correspondence for the organs at risk (OARs) was found between Helax-CC and MCDE. Results from Helax-PB were unsatisfying for both targets and OARs. Additionally, individual patient results were analyzed. Within the target structures, deviations above 5% were found in one patient for the comparison of MCDE and Helax-CC, while all differences between MCDE and Pinnacle-CS were below 5%. For both
Energy Technology Data Exchange (ETDEWEB)
Watson, S.B.; Ford, M.R.
1980-02-01
A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations in computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limit of intake, and derived air concentration.
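The three-module chain reduces, per target organ, to multiplying cumulated activity by specific effective energy and summing over source organs. A minimal sketch with entirely hypothetical U and SEE values (the function names are illustrative, not from the code described; the lung weighting factor is the ICRP tissue weight):

```python
# committed dose equivalent H50(T) = k * sum_S U_S * SEE(T <- S), where U_S is
# the cumulated activity (transformations) in source organ S and SEE is the
# specific effective energy (MeV per gram per transformation).
K = 1.6e-10   # Sv per (MeV/g): unit conversion constant

def committed_dose(U, SEE, target):
    return K * sum(U[s] * SEE[(target, s)] for s in U)

U = {"lung": 4.0e12, "liver": 1.0e12}                        # hypothetical
SEE = {("lung", "lung"): 2.0e-4, ("lung", "liver"): 1.0e-6}  # hypothetical
H50_lung = committed_dose(U, SEE, "lung")

# if U and SEE correspond to unit intake, an annual limit on intake follows
# from the 0.05 Sv weighted stochastic limit (illustrative bookkeeping only):
w_lung = 0.12
ALI = 0.05 / (w_lung * H50_lung)
```

The real code's complexity lives in producing U and SEE (metabolic models, decay chains), not in this final summation.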
Kim, Minsuok; Ionita, Ciprian; Tranquebar, Rekha; Hoffmann, Kenneth R.; Taulbee, Dale B.; Meng, Hui; Rudin, Stephen
2006-03-01
Stenting may provide a new, less invasive therapeutic option for cerebral aneurysms. However, a conventional porous stent may be insufficient in modifying the blood flow for clinical aneurysms. We designed an asymmetric stent consisting of a low porosity patch welded onto a porous stent for an anterior cerebral artery aneurysm of a specific patient geometry to block the strong inflow jet. To evaluate the effect of the patch on aneurysmal flow dynamics, we "virtually" implanted it into the patient's aneurysm geometry and performed Computational Fluid Dynamics (CFD) analysis. The patch was computationally deformed to fit into the vessel lumen segmented from the patient CT reconstructions. After the flow calculations, a patch with the same design was fabricated using laser cutting techniques and welded onto a commercial porous stent, creating a patient-specific asymmetric stent. This stent was implanted into a phantom, which was imaged with X-ray angiography. The hemodynamics of untreated and stented aneurysms were compared both computationally and experimentally. It was found from CFD of the patient aneurysm that the asymmetric stent effectively blocked the strong inflow jet into the aneurysm and eliminated the flow impingement on the aneurysm wall at the dome. The impact zone with elevated wall shear stress was eliminated, the aneurysmal flow activity was substantially reduced, and the flow into the aneurysm was considerably diminished. Experimental observations corresponded well qualitatively with the CFD results. The demonstrated asymmetric stent could lead to a new minimally invasive image guided intervention to reduce aneurysm growth and rupture.
International Nuclear Information System (INIS)
The HARAD computer code, written in FORTRAN IV, calculates concentrations of radioactive daughters in air following the atmospheric release of a parent radionuclide under a variety of meteorological conditions. It can be applied most profitably to the assessment of doses to man from the noble gases such as 222Rn, 220Rn, and Xe and Kr isotopes. These gases can produce significant quantities of short-lived particulate daughters in an airborne plume, which are the major contributors to dose from chains with gaseous parent radionuclides. The simultaneous processes of radioactive decay, buildup, and environmental losses through wet and dry deposition on ground surfaces are calculated for a daughter chain in an airborne plume as it is dispersed downwind from the point of release of the parent. The code employs exact solutions of the differential equations describing the above processes over successive discrete segments of downwind distance. Average values for the dry deposition coefficients of the chain members over each of these distance segments are treated as constants in the equations. The advantage of HARAD is its short computing time.
Marchand, P.; Masson, J.; Chabrier, G.; Hennebelle, P.; Commerçon, B.; Vaytet, N.
2016-07-01
We develop a detailed chemical network relevant to calculate the conditions that are characteristic of prestellar core collapse. We solve the system of time-dependent differential equations to calculate the equilibrium abundances of molecules and dust grains, with a size distribution given by size-bins for these latter. These abundances are used to compute the different non-ideal magneto-hydrodynamics resistivities (ambipolar, Ohmic and Hall), needed to carry out simulations of protostellar collapse. For the first time in this context, we take into account the evaporation of the grains, the thermal ionisation of potassium, sodium, and hydrogen at high temperature, and the thermionic emission of grains in the chemical network, and we explore the impact of various cosmic ray ionisation rates. All these processes significantly affect the non-ideal magneto-hydrodynamics resistivities, which will modify the dynamics of the collapse. Ambipolar diffusion and Hall effect dominate at low densities, up to nH = 1012 cm-3, after which Ohmic diffusion takes over. We find that the time-scale needed to reach chemical equilibrium is always shorter than the typical dynamical (free fall) one. This allows us to build a large, multi-dimensional multi-species equilibrium abundance table over a large temperature, density and ionisation rate ranges. This table, which we make accessible to the community, is used during first and second prestellar core collapse calculations to compute the non-ideal magneto-hydrodynamics resistivities, yielding a consistent dynamical-chemical description of this process. The multi-dimensional multi-species equilibrium abundance table and a copy of the code are only available at the CDS via anonymous ftp to http://cdsarc.u-strasbg.fr (http://130.79.128.5) or via http://cdsarc.u-strasbg.fr/viz-bin/qcat?J/A+A/592/A18
Directory of Open Access Journals (Sweden)
Yujie Huang
2015-01-01
Full Text Available This paper theoretically investigates interactions between a template and the functional monomer required for synthesizing an efficient molecularly imprinted polymer (MIP). We employed density functional theory (DFT) to compute the geometry, single-point energy, and binding energy (ΔE) of an MIP system, where spermidine (SPD) and methacrylic acid (MAA) were selected as template and functional monomer, respectively. The geometry was calculated by using the B3LYP method with the 6-31+G(d) basis set. Furthermore, the 6-311++G(d,p) basis set was used to compute the single-point energy of the above geometry. The optimized geometries at different template-to-functional-monomer molar ratios, the mode of bonding between template and functional monomer, changes in charge on natural bond orbitals (NBO), and the binding energy were analyzed. The simulation results show that SPD and MAA form a stable complex via hydrogen bonding. At a 1:5 SPD-to-MAA ratio, the binding energy is minimum, while the amount of transferred charge between the molecules is maximum; SPD and MAA form a stable complex at the 1:5 molar ratio through six hydrogen bonds. Optimizing the structure of the template-functional monomer complex through computational modeling prior to synthesis significantly contributes towards choosing a suitable template-functional monomer pair that yields an efficient MIP with high specificity and selectivity.
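The ratio ranking rests on simple binding-energy bookkeeping, ΔE = E(complex) − [E(template) + n·E(monomer)]. A sketch with hypothetical total energies in hartree (these are placeholders, not the paper's DFT results):

```python
def binding_energy(E_complex, E_template, E_monomer, n):
    """dE = E(complex) - [E(template) + n*E(monomer)]; more negative = more stable."""
    return E_complex - (E_template + n * E_monomer)

# hypothetical single-point energies (hartree) for SPD, MAA, and SPD:MAA complexes
E_SPD = -442.10
E_MAA = -306.50
complexes = {1: -748.65, 3: -1361.72, 5: -1974.81}   # E(complex) at SPD:MAA = 1:n

dE = {n: binding_energy(E, E_SPD, E_MAA, n) for n, E in complexes.items()}
best = min(dE, key=dE.get)    # ratio with the most negative binding energy
```

In practice a basis-set superposition (counterpoise) correction would also be applied before comparing ratios; the arithmetic above only shows the selection criterion.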
International Nuclear Information System (INIS)
shortly after the deuterium-tritium experiment (DTE1) in 1997. Large computing power, both in terms of amount of data handling and storage and the CPU computing time is needed by the two methods, partly due to the complexity of the problem. With parallel versions of the MCNP code, running on two different platforms, a satisfying accuracy of the calculation has been reached in reasonable times. (authors)
García-Jerez, Antonio; Sánchez-Sesma, Francisco J; Luzón, Francisco; Perton, Mathieu
2016-01-01
During a quarter of a century, the main characteristics of the horizontal-to-vertical spectral ratio of ambient noise (HVSRN) have been extensively used for site effect assessment. In spite of the uncertainties about the optimum theoretical model to describe these observations, several schemes for inversion of the full HVSRN curve for near-surface surveying have been developed over the last decade. In this work, a computer code for forward calculation of H/V spectra based on the diffuse field assumption (DFA) is presented and tested. It takes advantage of the recently stated connection between the HVSRN and the elastodynamic Green's function which arises from ambient noise interferometry theory. The algorithm allows for (1) a natural calculation of the imaginary parts of the Green's functions by using suitable contour integrals in the complex wavenumber plane, and (2) separate calculation of the contributions of Rayleigh, Love, P-SV and SH waves as well. The stability of the algorithm at high frequencies is preserved.
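Under the DFA, the H/V curve follows from the imaginary parts of the Green's function at the receiver, H/V(ω) = sqrt((Im G₁₁ + Im G₂₂)/Im G₃₃). A toy sketch with synthetic Im G values rather than an actual layered-medium computation:

```python
import numpy as np

def hv_ratio(imG11, imG22, imG33):
    """H/V(w) = sqrt((Im G11 + Im G22) / Im G33) under the DFA."""
    return np.sqrt((imG11 + imG22) / imG33)

w = np.linspace(0.5, 20.0, 100)                         # frequency axis (toy)
imG11 = imG22 = 1.0 + 0.5 * np.exp(-((w - 5.0) ** 2))   # synthetic horizontal peak
imG33 = np.ones_like(w)                                 # flat vertical component

hv = hv_ratio(imG11, imG22, imG33)
peak_w = w[np.argmax(hv)]    # the H/V peak tracks the horizontal Green's-function peak
```

In the code described above, the Im G terms come from contour integrals in the complex wavenumber plane and split naturally into Rayleigh, Love, P-SV and SH contributions; only the final ratio is shown here.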
Energy Technology Data Exchange (ETDEWEB)
Hu, Chih-Chung [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Huang, Wen-Tao [Yuanpei Univ., Hsinchu (China). Dept. of Radiological Technology; Tsai, Chiao-Ling; Chao, Hsiao-Ling; Huang, Guo-Ming; Wang, Chun-Wei [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; Wu, Jian-Kuen [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Wu, Chien-Jang [National Taiwan Normal Univ., Taipei (China). Inst. of Electro-Optical Science and Technology; Cheng, Jason Chia-Hsien [National Taiwan Univ. Hospital and College of Medicine, Taipei (China). Division of Radiation Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Oncology; National Taiwan Univ. Taipei (China). Graduate Inst. of Clinical Medicine; National Taiwan Univ. Taipei (China). Graduate Inst. of Biomedical Electronics and Bioinformatics
2011-10-15
On-line cone-beam computed tomography (CBCT) may be used to reconstruct the dose for geometric changes of patients and tumors during a radiotherapy course. This study aims to establish a practical method to modify CBCT for accurate dose calculation in head and neck cancer. Fan-beam CT (FBCT) and Elekta's CBCT were used to acquire images. The CT numbers for different materials on CBCT were mathematically modified to match them with FBCT. Three phantoms were scanned by FBCT and CBCT to assess image uniformity, spatial resolution, and CT numbers, and to compare the dose distribution from orthogonal beams. A Rando phantom was scanned and planned with intensity-modulated radiation therapy (IMRT). Finally, two nasopharyngeal cancer patients treated with IMRT had their CBCT image sets calculated for dose comparison. With 360° acquisition of CBCT and high-resolution reconstruction, the uniformity of the CT number distribution was improved, and the otherwise large variations for background and high-density materials were reduced significantly. The dose difference between FBCT and CBCT was < 2% in phantoms. In the Rando phantom and the patients, the dose-volume histograms were similar. The corresponding isodose curves covering ≥ 90% of the prescribed dose on FBCT and CBCT were close to each other (within 2 mm). Most dosimetric differences arose from the setup errors related to the interval changes in body shape and tumor response. The specific CBCT acquisition, reconstruction, and CT number modification can generate accurate dose calculation for the potential use in adaptive radiotherapy.
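The CT-number modification step can be sketched as a piecewise-linear map from CBCT numbers to FBCT numbers, anchored at materials measured in both scanners (the paired HU values below are illustrative, not the paper's calibration):

```python
import numpy as np

# paired HU measurements of the same phantom inserts on both scanners (illustrative)
cbct_hu = np.array([-980.0, -120.0, 0.0, 250.0, 900.0])   # measured on CBCT
fbct_hu = np.array([-1000.0, -100.0, 0.0, 200.0, 800.0])  # same inserts on FBCT

def modify_ct_numbers(cbct_image):
    """Map a CBCT image's numbers onto the FBCT scale for dose calculation."""
    return np.interp(cbct_image, cbct_hu, fbct_hu)

img = np.array([[-980.0, 125.0], [0.0, 900.0]])   # toy 2x2 CBCT "image"
corrected = modify_ct_numbers(img)
```

Once the CBCT numbers sit on the FBCT scale, the planning system's existing HU-to-density calibration can be reused unchanged, which is what makes the dose calculation on CBCT practical.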
Bednarz, Bryan; Hancox, Cindy; Xu, X. George
2009-09-01
There is growing concern about radiation-induced second cancers associated with radiation treatments. Particular attention has been focused on the risk to patients treated with intensity-modulated radiation therapy (IMRT) due primarily to increased monitor units. To address this concern we have combined a detailed medical linear accelerator model of the Varian Clinac 2100 C with anatomically realistic computational phantoms to calculate organ doses from selected treatment plans. This paper describes the application to calculate organ-averaged equivalent doses using a computational phantom for three different treatments of prostate cancer: a 4-field box treatment, the same box treatment plus a 6-field 3D-CRT boost treatment and a 7-field IMRT treatment. The equivalent doses per MU to those organs that have shown a predilection for second cancers were compared between the different treatment techniques. In addition, the dependence of photon and neutron equivalent doses on gantry angle and energy was investigated. The results indicate that the box treatment plus 6-field boost delivered the highest intermediate- and low-level photon doses per treatment MU to the patient primarily due to the elevated patient scatter contribution as a result of an increase in integral dose delivered by this treatment. In most organs the contribution of neutron dose to the total equivalent dose for the 3D-CRT treatments was less than the contribution of photon dose, except for the lung, esophagus, thyroid and brain. The total equivalent dose per MU to each organ was calculated by summing the photon and neutron dose contributions. For all organs non-adjacent to the primary beam, the equivalent doses per MU from the IMRT treatment were less than the doses from the 3D-CRT treatments. This is due to the increase in the integral dose and the added neutron dose to these organs from the 18 MV treatments. However, depending on the application technique and optimization used, the required MU
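The summation described, total organ equivalent dose per MU as photon plus neutron contributions scaled by the delivered monitor units, is simple bookkeeping. A sketch with hypothetical per-MU doses (neutron contributions apply only to the high-energy beams):

```python
# hypothetical per-MU organ equivalent doses (mSv/MU), illustrative only
photon = {"lung": 2.0e-3, "thyroid": 0.8e-3}
neutron = {"lung": 2.5e-3, "thyroid": 1.0e-3}   # nonzero only for 18 MV beams

# total equivalent dose per MU = photon + neutron contribution per organ
total = {organ: photon[organ] + neutron.get(organ, 0.0) for organ in photon}

# per-treatment comparison: equivalent dose scales with the delivered MU,
# which is why IMRT's higher MU count drives the second-cancer concern
mu_3dcrt, mu_imrt = 500, 1500                   # hypothetical MU per fraction
dose_3dcrt = {o: d * mu_3dcrt for o, d in total.items()}
```

The trade-off the abstract quantifies is visible in this structure: IMRT lowers the per-MU dose to distant organs but multiplies it by a larger MU count.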
International Nuclear Information System (INIS)
A short description of the TOPRA-s computer code is presented. The code is developed to calculate the thermophysical cross-section characteristics of WWER fuel rods: fuel temperature distributions and fuel-to-cladding gap conductance. The TOPRA-s input does not require the fuel rod irradiation pre-history (time-dependent distributions of linear power, fast neutron flux and coolant temperature along the rod). The required input consists of the considered cross-section data (coolant temperature, burnup, linear power) and the overall fuel rod data (burnup and linear power). TOPRA-s is included in the KASKAD code package. Some results of the TOPRA-s code validation using the SOFIT-1 and IFA-503.1 experimental data are shown. A short description of the TRANSURANUS code for thermal and mechanical predictions of LWR fuel rod behavior under various irradiation conditions, and of its version for WWER reactors, is presented. (Authors)
Energy Technology Data Exchange (ETDEWEB)
Cismondi, Federico; Mosconi, Sergio L [Fundacion Escuela de Medicina Nuclear, Mendoza (Argentina)
2007-11-15
In this paper we present a software tool developed to allow automatic registration of 2D Scintillation Camera (SC) and Computed Tomography (CT) images. Used together with dosimetric software that takes Integrated Activity or Residence Time as input data, this tool allows the user to advise physicians on the effects of radiodiagnostic or radiotherapeutic practices. Images are registered locally and globally by maximizing the Mutual Information coefficient between the regions being registered. In the regional case, whole-body images are segmented into five regions: head, thorax, pelvis, left leg and right leg. Each region has its own registration parameters, which are optimized through the Powell-Brent minimization method so as to maximize the Mutual Information coefficient. The tool allows the user to draw ROIs, input isotope characteristics and finally calculate the Integrated Activity or Residence Time in one or more specific organs. These values can then be fed into various dosimetric software packages to obtain Absorbed Dose values.
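The registration criterion used above, maximization of the mutual information between the two image regions, can be sketched generically. The following is a plain joint-histogram MI estimate in Python; it is an illustration only, not the tool's implementation, which also segments regions and drives a Powell-Brent optimizer:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Estimate mutual information between two equally sized images
    from their joint intensity histogram."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                 # joint probability p(a, b)
    px = pxy.sum(axis=1, keepdims=True)       # marginal p(a)
    py = pxy.sum(axis=0, keepdims=True)       # marginal p(b)
    nz = pxy > 0                              # avoid log(0)
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px * py)[nz])))

# A well-aligned pair shares more information than a scrambled pair
rng = np.random.default_rng(0)
img = rng.random((64, 64))
shifted = img + 0.05 * rng.random((64, 64))   # same image with mild noise
scrambled = rng.permutation(img.ravel()).reshape(64, 64)
```

A registration loop would then adjust the transform parameters of one image so that this quantity is maximized.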
International Nuclear Information System (INIS)
This report describes the calculation procedure of the TRANCS code, which deals with fission product transport in the fuel rods of high-temperature gas-cooled reactors (HTGR). The fundamental equation modeled in the code is a cylindrical one-dimensional diffusion equation with generation and decay terms, and the non-stationary solution of the equation is obtained numerically by a finite difference method. The generation terms consist of the diffusional release from coated fuel particles, recoil release from the outermost coating layer of the fuel particle, and generation due to contaminating uranium in the graphite matrix of the fuel compact. The decay term deals with neutron capture as well as beta decay. Factors affecting the computational error have been examined, and further extension of the code has been discussed in the areas of radial transport of fission products from the graphite sleeve into the coolant helium gas and axial transport in the fuel rod. (author)
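The numerical core described above, a one-dimensional diffusion equation with generation and decay terms advanced by finite differences, can be illustrated in miniature. The sketch below uses slab rather than cylindrical geometry and invented coefficients, so it shows only the scheme, not TRANCS itself:

```python
import numpy as np

# Illustrative parameters only (not TRANCS values)
D, lam, S = 1.0e-2, 1.0e-3, 1.0      # diffusivity, decay constant, generation rate
nx, dx, dt, steps = 51, 1.0, 10.0, 2000

# Stability of the explicit scheme requires D*dt/dx**2 <= 0.5
assert D * dt / dx**2 <= 0.5

c = np.zeros(nx)                      # initial concentration, zero everywhere
for _ in range(steps):
    lap = np.zeros(nx)
    lap[1:-1] = (c[2:] - 2 * c[1:-1] + c[:-2]) / dx**2   # second difference
    c = c + dt * (D * lap + S - lam * c)                 # diffusion + birth - decay
    c[0] = c[-1] = 0.0                # concentration held at zero on the surfaces
```

Far from the boundaries the steady solution approaches the source/decay balance S/lam, which makes the sketch easy to check against the analytic limit.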
International Nuclear Information System (INIS)
This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, interactive program that can be run on an IBM or equivalent personal computer under the Windows™ environment. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors. In addition, the flexibility of the models allows them to be used for assessing any accidental release involving radioactive materials. The RISKIND code allows for user-specified accident scenarios as well as receptor locations under various exposure conditions, thereby facilitating the estimation of radiological consequences and health risks for individuals. Median (50% probability) and typical worst-case (less than 5% probability of being exceeded) doses and health consequences from potential accidental releases can be calculated by constructing a cumulative dose/probability distribution curve for a complete matrix of site joint-wind-frequency data. These consequence results, together with the estimated probability of the entire spectrum of potential accidents, form a comprehensive, probabilistic risk assessment of a spent nuclear fuel transportation accident.
Energy Technology Data Exchange (ETDEWEB)
Yuan, Y.C. [Square Y Consultants, Orchard Park, NY (US); Chen, S.Y.; Biwer, B.M.; LePoire, D.J. [Argonne National Lab., IL (US)
1995-11-01
This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, interactive program that can be run on an IBM or equivalent personal computer under the Windows™ environment. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors. In addition, the flexibility of the models allows them to be used for assessing any accidental release involving radioactive materials. The RISKIND code allows for user-specified accident scenarios as well as receptor locations under various exposure conditions, thereby facilitating the estimation of radiological consequences and health risks for individuals. Median (50% probability) and typical worst-case (less than 5% probability of being exceeded) doses and health consequences from potential accidental releases can be calculated by constructing a cumulative dose/probability distribution curve for a complete matrix of site joint-wind-frequency data. These consequence results, together with the estimated probability of the entire spectrum of potential accidents, form a comprehensive, probabilistic risk assessment of a spent nuclear fuel transportation accident.
Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn
2016-09-15
We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and a Many Integrated Core (MIC) coprocessor alongside 20 CPU cores (20×CPU). As a practical example toward large-scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles of various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. A so-called work-stealing scheduler was used for efficient heterogeneous computing via balanced dynamic distribution of workloads between all processors on a given architecture, without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement from CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc. PMID:27431905
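The balanced dynamic distribution described above can be mimicked in miniature: workers of very different speeds pull tasks from a shared queue, so no prior knowledge of per-device performance is needed and the faster worker naturally takes more of the load. This is a shared-queue analogue in Python threads (true work stealing uses per-worker deques), not the authors' GPU/MIC scheduler:

```python
import queue
import threading
import time

tasks = queue.Queue()
for i in range(100):
    tasks.put(i)                         # 100 identical work items

done = {"fast": 0, "slow": 0}
lock = threading.Lock()

def worker(name, delay):
    """Pull tasks until the queue is empty; no static assignment is made."""
    while True:
        try:
            tasks.get_nowait()
        except queue.Empty:
            return
        time.sleep(delay)                # stand-in for real computation
        with lock:
            done[name] += 1

threads = [threading.Thread(target=worker, args=("fast", 0.001)),
           threading.Thread(target=worker, args=("slow", 0.004))]
for t in threads: t.start()
for t in threads: t.join()
```

With a 4:1 speed ratio the fast worker ends up with roughly four times as many tasks, without either worker's speed ever being declared up front.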
Energy Technology Data Exchange (ETDEWEB)
Sofronov, I.D.; Voronin, B.L.; Butnev, O.I. [VNIIEF (Russian Federation)] [and others]
1997-12-31
The aim of this work is to develop a 3D parallel program for the numerical calculation of gas dynamics problems with heat conduction on distributed-memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massively parallel computations have been developed. The first approach uses a 3D data matrix decomposition that is reconstructed at each time step and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on a 3D data matrix decomposition that is not reconstructed during a time step. The program was developed on the 8-processor CS MP-3 built at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the parallelization efficiency has been evaluated as a function of the number of processors and their parameters.
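The data-decomposition idea behind both approaches can be illustrated in its simplest form: splitting one grid axis into near-equal contiguous slabs, one per processor. A minimal sketch (the actual codes decompose a full 3D data matrix):

```python
def block_decompose(n, nprocs):
    """Split n cells into nprocs contiguous slabs whose sizes differ by at most one.
    Returns (start, stop) index pairs, one per processor."""
    base, extra = divmod(n, nprocs)
    bounds, start = [], 0
    for p in range(nprocs):
        stop = start + base + (1 if p < extra else 0)  # first `extra` slabs get one more cell
        bounds.append((start, stop))
        start = stop
    return bounds

# 100 cells over 8 processors: the first four slabs get 13 cells, the rest get 12
slabs = block_decompose(100, 8)
```

Each processor then works on its own slab, exchanging only the boundary layers with its neighbors; a balanced split like this is what makes results independent of the processor count.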
International Nuclear Information System (INIS)
The computer program TRANCS has been developed for evaluating the fractional release of long-lived fission products from coated fuel particles. This code numerically gives the non-stationary solution of the diffusion equation with birth and decay terms. The birth term deals with the fissile material in the fuel kernel, the contamination in the coating layers and the fission-recoil transfer from the kernel into the buffer layer; the decay term deals with effective decay due not only to beta decay but also to neutron capture, if appropriate input data are given. The code calculates the concentration profile, the release-to-birth rate ratio (R/B), and the release and residual fractions in the coated fuel particle. Results obtained numerically are in good agreement with the corresponding analytical solutions based on the Booth model. Thus the validity of the present code was confirmed, and a further update of the code has been discussed to extend its computational scope and models. (author)
Chang, T. S.
1974-01-01
A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.
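The least-squares surface fit that replaced the double interpolation can be illustrated generically: fit z(x, y) with a low-order polynomial basis by linear least squares, then evaluate the fitted surface anywhere without table lookups. The sketch below uses an ordinary monomial basis, not the orthogonal polynomials of the original scheme:

```python
import numpy as np

def fit_surface(x, y, z, deg=2):
    """Least-squares fit of z(x, y) using all monomials x**i * y**j with i + j <= deg."""
    terms = [(i, j) for i in range(deg + 1) for j in range(deg + 1 - i)]
    A = np.column_stack([x**i * y**j for i, j in terms])   # design matrix
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    def surf(xq, yq):
        return sum(c * xq**i * yq**j for c, (i, j) in zip(coeffs, terms))
    return surf

# Recover a known quadratic surface from scattered samples
rng = np.random.default_rng(1)
x, y = rng.random(200), rng.random(200)
z = 1.0 + 2.0 * x - 3.0 * y + 0.5 * x * y + x**2
surf = fit_surface(x, y, z)
```

Once the handful of coefficients is known, each evaluation is a short polynomial sum, which is the time saving the paper exploits over repeated two-dimensional table interpolation.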
Xavier, S; Periandy, S
2015-01-01
In this paper, the spectral analysis of 1-phenyl-2-nitropropene is carried out using the FTIR, FT-Raman, FT-NMR and UV-Vis spectra of the compound with the help of quantum mechanical computations using ab initio and density functional theories. The FT-IR (4000-400 cm(-1)) and FT-Raman (4000-100 cm(-1)) spectra were recorded in the solid phase, the (1)H and (13)C NMR spectra were recorded in CDCl3 solution, and the UV-Vis (200-800 nm) spectrum was recorded in ethanol solution. The different conformers of the compound and their minimum energies were studied using the B3LYP functional with the 6-311+G(d,p) basis set; the two stable conformers with the lowest energies were identified and used for further computations. The computed wavenumbers from the different methods are scaled so as to agree with the experimental values, and the scaling factors are reported. All the modes of vibration are assigned, and the structure of the molecule is analyzed in terms of parameters such as bond length, bond angle and dihedral angle predicted by both the B3LYP and B3PW91 methods with the 6-311+G(d,p) and 6-311++G(d,p) basis sets. The values of the dipole moment (μ), polarizability (α) and hyperpolarizability (β) of the molecule are reported, from which the non-linear properties of the molecule are discussed. The HOMO-LUMO mappings are reported, which reveal the different charge transfer possibilities within the molecule. The isotropic chemical shifts predicted for the (1)H and (13)C atoms using gauge-invariant atomic orbital (GIAO) theory show good agreement with the experimental shifts. NBO analysis is carried out to picture the charge transfer between the localized bonds and lone pairs. The local reactivity of the molecule has been studied using the Fukui function. The thermodynamic properties (heat capacity, entropy and enthalpy) at different temperatures are also calculated. PMID:25965169
Schmidt, James F.
1995-01-01
An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed, and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. The code allows for bleed flows, and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for the flow loss and deviation are used to model real flow effects, and the off-design code will compute through small reverse-flow regions. The input to this off-design code is fully described, and a user's example case for a two-stage fan is included with complete input and output data sets. Also, a comparison of the off-design code predictions with experimental data is included, which generally shows good agreement.
Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.
2016-03-01
Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published manual results produced by an expert.
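Particle Swarm Optimization itself is compact enough to sketch. The following minimal global-best PSO minimizes a test function; it is a generic illustration with common default coefficients, not the PSO-Snake tracker:

```python
import numpy as np

def pso_minimize(f, dim, n_particles=40, iters=300, seed=0):
    """Minimal global-best particle swarm with standard inertia,
    cognitive, and social terms."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5                      # commonly used coefficients
    x = rng.uniform(-5.0, 5.0, (n_particles, dim))  # positions
    v = np.zeros_like(x)                            # velocities
    pbest = x.copy()                                # personal bests
    pbest_f = np.array([f(p) for p in x])
    g = pbest[pbest_f.argmin()].copy()              # global best
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = x + v
        fx = np.array([f(p) for p in x])
        better = fx < pbest_f
        pbest[better], pbest_f[better] = x[better], fx[better]
        g = pbest[pbest_f.argmin()].copy()
    return g, float(f(g))

# Minimize a simple sphere function with its minimum at (1, 1, 1)
best, val = pso_minimize(lambda p: float(np.sum((p - 1.0) ** 2)), dim=3)
```

In the PSO-Snake setting the "position" of each particle would instead encode snake control points, and f would score how well the contour fits the tracked solar feature.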
Marchand, Pierre; Chabrier, Gilles; Hennebelle, Patrick; Commerçon, Benoit; Vaytet, Neil
2016-01-01
We develop a detailed chemical network relevant to the conditions characteristic of prestellar core collapse. We solve the system of time-dependent differential equations to calculate the equilibrium abundances of molecules and dust grains, with the grain size distribution represented by size bins. These abundances are used to compute the different non-ideal magneto-hydrodynamics resistivities (ambipolar, Ohmic and Hall) needed to carry out simulations of protostellar collapse. For the first time in this context, we take into account the evaporation of the grains, the thermal ionisation of potassium, sodium and hydrogen at high temperature, and the thermionic emission of grains in the chemical network, and we explore the impact of various cosmic ray ionisation rates. All these processes significantly affect the non-ideal magneto-hydrodynamics resistivities, which will modify the dynamics of the collapse. Ambipolar diffusion and the Hall effect dominate at low densities, up to n_H = 10^12 cm^-3, after which Oh...
Directory of Open Access Journals (Sweden)
Shahamatnia Ehsan
2016-01-01
Full Text Available Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis. The proposed tool, denoted the PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm to calculating the sidereal rotational angular velocity of the solar corona. To validate the results, we compare them with published manual results produced by an expert.
ESTAB - A Computer Package for Stability Calculation of Ships
Institute of Scientific and Technical Information of China (English)
赵成璧; 邹早建
2001-01-01
This paper introduces ESTAB, a practical computer package for effective three-dimensional stability calculation of ships, which incorporates various advanced techniques to describe ship surfaces and hulls conveniently and to obtain the floating attitude and various stability curves. Using triangular subdivision and several advanced algorithms, the package generates ship compartments and various sections, and reliably obtains the floating attitude under various loading conditions, the flooding angles at different longitudinal positions, the capacity element curves for each type of compartment, the hydrostatic curves, Bonjean curves, interpolation curves, static (and dynamic) stability curves, floodable length curves, bending moment and shear force curves, as well as the floating attitude and damaged stability after compartment flooding.
Grzegorz Mazur; Marcin Makowski; Jakub Sumera; Krzysztof Kowalczyk
2012-01-01
Wavefunction-less, density-matrix-based approaches to computational quantum chemistry are briefly discussed. The implementation of second-order Møller-Plesset perturbation method energy and dipole moment calculations within the new paradigm is presented. The efficiency and reliability of the method are analyzed.
Kansky, Bob
The Technology Advisory Committee of the National Council of Teachers of Mathematics recently conducted a survey to assess the status of state-level policies affecting the use of calculators and computers in the teaching of mathematics in grades K-12. The committee determined that state-level actions related to the increased availability of…
Wai, J. C.; Blom, G.; Yoshihara, H.; Chaussee, D.
1986-01-01
The NASA/Ames parabolized Navier-Stokes computer code was used to calculate the turbulent flow over the wing/fuselage of a generic fighter at M = 2.2, 18 deg angle of attack, and 0 and 5 deg yaw. Good test/theory agreement was achieved in the zero-yaw case. No test data were available for the yaw case.
Energy Technology Data Exchange (ETDEWEB)
Pavlovichev, A.M.
2001-06-19
The report presents calculated isotopic compositions of irradiated fuel for the Quad Cities-1 reactor bundle with UO₂ and MOX fuel. The MCU-REA code, developed at the Kurchatov Institute, Russia, was used for the calculations. The MCU-REA results are compared with the experimental data and with HELIOS code results.
Institute of Scientific and Technical Information of China (English)
张敏革; 张吕鸿; 姜斌; 尹玉国; 李鑫钢
2008-01-01
Using the multiple reference frames (MRF) impeller method, the three-dimensional non-Newtonian flow field generated by a double helical ribbon (DHR) impeller has been simulated. The velocity field calculated by the numerical simulation was similar to that in previous studies, and the power constant agreed well with the experimental data. Three computational fluid dynamics (CFD) methods, labeled I, II and III, were used to compute the Metzner constant ks. The results showed that the value calculated with the slope method (method I) was consistent with the experimental data. Method II, which took the maximal circumference-averaged shear rate around the impeller as the effective shear rate to compute ks, also showed good agreement with the experiment. However, both methods suffer from the complexity of their calculation procedures. A new method (method III) was devised in this paper, which uses the area-weighted average viscosity around the impeller as the effective viscosity for calculating ks. Method III showed both good accuracy and ease of use.
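All three methods rest on the Metzner-Otto relation, which links impeller speed N to an effective shear rate, gamma_eff = ks * N; for a power-law fluid the effective viscosity then follows from the measured power draw, and ks can be solved for. A sketch of that inversion with illustrative numbers, not the paper's DHR data:

```python
import numpy as np

# Illustrative values only (not the paper's DHR configuration)
Kp = 300.0          # laminar power constant: P = Kp * eta_eff * N**2 * D**3
D = 0.1             # impeller diameter, m
K, n = 5.0, 0.4     # power-law consistency (Pa.s^n) and flow index
ks_true = 30.0      # Metzner constant used to synthesize the "measurements"

N = np.array([0.5, 1.0, 2.0, 4.0])            # impeller speeds, rev/s
eta_eff = K * (ks_true * N) ** (n - 1.0)      # Metzner-Otto: gamma_eff = ks * N
P = Kp * eta_eff * N**2 * D**3                # synthetic power measurements

# Recover ks from P(N): first back out eta_eff, then invert the power law
eta_meas = P / (Kp * N**2 * D**3)
ks_est = (eta_meas / K) ** (1.0 / (n - 1.0)) / N
```

The round trip recovers ks exactly here because the data are synthetic; with measured power curves the same inversion yields a slope-method estimate of ks.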
International Nuclear Information System (INIS)
In this manual we describe the use of the FORIG computer code to solve isotope-generation and depletion problems in fusion and fission reactors. FORIG runs on a Cray-1 computer and accepts more extensive activation cross sections than ORIGEN2, from which it was adapted. This report is an updated and combined version of the previous ORIGEN2 and FORIG manuals. 7 refs., 15 figs., 13 tabs
Pretest and posttest calculations of Semiscale Test S-07-10D with the TRAC computer program
International Nuclear Information System (INIS)
The Transient Reactor Analysis Code (TRAC) developed at the Los Alamos National Laboratory was used to predict the behavior of the small-break experiment designated Semiscale S-07-10D. This test simulates a 10 per cent communicative cold-leg break with delayed Emergency Core Coolant injection and blowdown of the broken-loop steam generator secondary. Both pretest calculations that incorporated measured initial conditions and posttest calculations that incorporated measured initial conditions and measured transient boundary conditions were completed. The posttest calculated parameters were generally between those obtained from the pretest calculations and those from the test data. The results are strongly dependent on the depressurization rate and, hence, on the break flow.
DEFF Research Database (Denmark)
Petersen, Kurt Erling
1986-01-01
...approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in analysis of very ... complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied and procedures for implementation of importance sampling are suggested.
Energy Technology Data Exchange (ETDEWEB)
Bellido, Luis F.
1995-07-01
A computer code to calculate the projectile energy degradation along a target stack was developed for an IBM or compatible personal microcomputer. A comparison of protons and deuterons bombarding uranium and aluminium targets was made. The results showed that the data obtained with TRANGE were in agreement with those of other computer codes such as TRIM and EDP, and with the Williamson and Janni range and stopping power tables. TRANGE can be used for any charged ion, for energies from 1 to 100 MeV, in metal foils and solid compound targets. (author). 8 refs., 2 tabs.
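A stack calculation of this kind amounts to integrating the stopping power through each foil, E_out = E_in - the integral of (dE/dx) over the foil thickness. A minimal sketch with a made-up 1/E-shaped stopping function (real codes such as TRANGE use tabulated stopping powers):

```python
def degrade(e_in, thickness, stopping, steps=1000):
    """Step a particle through one foil, subtracting stopping-power losses.
    `stopping(E)` returns dE/dx in MeV/cm; thickness is in cm."""
    dx = thickness / steps
    e = e_in
    for _ in range(steps):
        e -= stopping(e) * dx
        if e <= 0.0:
            return 0.0          # particle stopped inside the foil
    return e

# Made-up stopping power, roughly 1/E-shaped as in the Bethe regime; illustrative only
stop = lambda e: 120.0 / e      # MeV/cm

# Propagate a 20 MeV projectile through a stack of three 0.05 cm foils
energies = [20.0]
for foil in (0.05, 0.05, 0.05):
    energies.append(degrade(energies[-1], foil, stop))
```

For this particular 1/E form the result can be checked analytically, since E**2 decreases linearly with depth.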
Energy Technology Data Exchange (ETDEWEB)
Rosales, Mario; De la Torre, Octavio [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)
1989-12-31
This article describes the computational characteristics of the package CALIIE 2D of the Instituto de Investigaciones Electricas (IIE) for the calculation of two-dimensional electromagnetic fields. The computational implementation of the package is based on the electromagnetic and numerical formulations previously published in this series.
Energy Technology Data Exchange (ETDEWEB)
Kovscek, S.E.; Martin, S.E.
1982-10-01
ROBOT3 is a FORTRAN computer program which is used in conjunction with the CYGRO5 computer program to calculate the time-dependent inelastic bowing of a fuel rod using an incremental finite element method. The fuel rod is modeled as a viscoelastic beam whose material properties are derived as perturbations of the CYGRO5 axisymmetric model. Fuel rod supports are modeled as displacement, force, or spring-type nodal boundary conditions. The program input is described and a sample problem is given.
Gordon, S.; Mcbride, B. J.
1976-01-01
A detailed description of the equations and computer program for computations involving chemical equilibria in complex systems is given. A free-energy minimization technique is used. The program permits calculations such as (1) chemical equilibrium for assigned thermodynamic states (T,P), (H,P), (S,P), (T,V), (U,V), or (S,V), (2) theoretical rocket performance for both equilibrium and frozen compositions during expansion, (3) incident and reflected shock properties, and (4) Chapman-Jouguet detonation properties. The program considers condensed species as well as gaseous species.
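The free-energy minimization idea can be shown on the smallest possible system: an ideal-gas isomerization A = B at fixed temperature and pressure, where minimizing the total Gibbs energy over the reaction extent reproduces the classical equilibrium constant. A toy sketch with an illustrative standard Gibbs energy, far simpler than the multi-species solver described above:

```python
import math

R, T = 8.314, 1000.0            # gas constant J/(mol K), temperature K
dG0 = -2000.0                   # standard Gibbs energy of A -> B, J/mol (illustrative)

def gibbs(xi):
    """Total Gibbs energy (relative to pure A) at reaction extent xi in (0, 1);
    ideal-gas mixing terms, total amount fixed at 1 mol."""
    na, nb = 1.0 - xi, xi
    return na * R * T * math.log(na) + nb * (dG0 + R * T * math.log(nb))

# Brute-force minimization over the extent of reaction
xis = [i / 100000 for i in range(1, 100000)]
xi_min = min(xis, key=gibbs)

# Classical check: at equilibrium xi = K / (1 + K) with K = exp(-dG0 / (R T))
K = math.exp(-dG0 / (R * T))
```

Setting dG/dxi = 0 gives exactly xi/(1 - xi) = K, so the minimizer found by brute force should land on the textbook equilibrium composition.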
Energy Technology Data Exchange (ETDEWEB)
Dunn, W.N. [Sandia National Labs., Albuquerque, NM (United States). Experimental Structural Dynamics Dept.
1994-07-01
LUGSAN (LUG and Sway brace ANalysis) is an analysis and database computer program designed to calculate store lug and sway brace loads from aircraft captive carriage. LUGSAN combines the rigid-body dynamics code SWAY85 and the maneuver calculation code MILGEN with an INGRES database to function as both an analysis and an archival system. This report describes the operation of the LUGSAN application program, including function descriptions, layout examples, and sample sessions. It is intended to be a user's manual for version 1.1 of LUGSAN operating on the VAX/VMS system; it is not intended to be a programmer's or developer's manual.
Energy Technology Data Exchange (ETDEWEB)
Ribeiro, M., E-mail: ribeiro.jr@oorbit.com.br [Office of Operational Research for Business Intelligence and Technology, Principal Office, Buffalo, Wyoming 82834 (United States)
2015-06-21
Ab initio calculations of hydrogen-passivated Si nanowires were performed using density functional theory within LDA-1/2 to account for excited-state properties. A range of diameters was calculated to draw conclusions about the ability of the method to correctly describe the main trends of bandgap, quantum confinement, and self-energy corrections versus the diameter of the nanowire. Bandgaps are predicted with excellent accuracy compared with other theoretical results such as GW, as well as with experiment, but at a low computational cost.
International Nuclear Information System (INIS)
After the reflooding tests in an extremely tight bundle (p/d=1.06, FLORESTAN 1) have been completed, new experiments for a wider lattice (p/d=1.242, FLORESTAN 2), which is employed in the recent APWR design of KfK, are planned at KfK to obtain the benchmark data for validation and improvement of calculation methods. This report presents the results of pre-test calculations for the FLORESTAN 2 experiment using FLUT-FDWR, a modified version of the GRS computer code FLUT for analysis of the most important behaviour during the reflooding phase after a LOCA in the APWR design. (orig.)
International Nuclear Information System (INIS)
In order to account for the reactivity-reducing effect of burnup in the criticality safety analysis for systems with irradiated nuclear fuel ("burnup credit"), numerical methods to determine the enrichment- and burnup-dependent nuclide inventory ("burnup code") and the resulting multiplication factor k_eff ("criticality code") are applied. To allow for reliable conclusions, for both calculation systems the systematic deviations of the calculation results from the respective true values (the bias and its uncertainty) are quantified by calculating and analyzing a sufficient number of suitable experiments. This quantification is specific to the application case in question and is also called validation. GRS has developed a methodology to validate a calculation system for the application of burnup credit in the criticality safety analysis of irradiated fuel assemblies from pressurized water reactors. This methodology was demonstrated by applying the GRS in-house KENOREST burnup code and the criticality calculation sequence CSAS5 from the SCALE code package. It comprises a bounding approach and, alternatively, a stochastic approach, which were demonstrated using a generic spent fuel pool rack and a generic dry storage cask, respectively. Based on publicly available post-irradiation examination and criticality experiments, currently the isotopes of the uranium and plutonium elements can be accounted for.
Vlachopanos, A; Soupsana, E; Politou, A S; Papamokos, G V
2014-12-01
Mass spectrometry is a widely used technique for protein identification, and it has also become the method of choice for detecting and characterizing the post-translational modifications (PTMs) of proteins. Many software tools have been developed to deal with this task. In this paper we introduce a new, free and user-friendly online software tool, named POTAMOS Mass Spectrometry Calculator, which was developed in the open-source application framework Ruby on Rails. It can provide calculated mass spectrometry data in a time-saving manner, independently of instrumentation. In this web application we have focused on the well-known protein family of histones, whose PTMs are believed to play a crucial role in gene regulation, as suggested by the so-called "histone code" hypothesis. The PTMs implemented in this software are: methylations of arginines and lysines, acetylations of lysines, and phosphorylations of serines and threonines. The application is able to calculate the kind, the number and the combinations of the possible PTMs corresponding to a given peptide sequence and a given mass, along with the full set of unique primary structures produced by the possible distributions along the amino acid sequence. It can also calculate the masses and charges of a fragmented histone variant carrying predefined modifications already implemented. Additional functionality is provided by calculating the masses of fragments produced upon protein cleavage by the proteolytic enzymes most widely used in proteomics studies. PMID:25450216
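The core computation, a peptide's mass plus the mass shifts of its PTMs, reduces to summing residue masses, one water for the termini, and one delta per modification. A simplified sketch with standard monoisotopic masses (POTAMOS itself also enumerates modification combinations and fragment charges):

```python
# Monoisotopic residue masses (Da) for a few amino acids; water accounts for the termini
RESIDUES = {"G": 57.02146, "A": 71.03711, "S": 87.03203, "K": 128.09496,
            "R": 156.10111, "T": 101.04768, "V": 99.06841, "P": 97.05276}
WATER = 18.01056
# Monoisotopic mass shifts of the modification types handled by the tool
PTM = {"methyl": 14.01565, "acetyl": 42.01057, "phospho": 79.96633}

def peptide_mass(sequence, mods=()):
    """Monoisotopic mass of a peptide with an iterable of PTM names applied."""
    return sum(RESIDUES[aa] for aa in sequence) + WATER + sum(PTM[m] for m in mods)

# The N-terminal tail of histone H4 begins SGRGK...
unmodified = peptide_mass("SGRGK")
acetylated = peptide_mass("SGRGK", mods=("acetyl",))
```

Enumerating which PTM combinations fit a measured mass is then a search over sums of these deltas, which is the combinatorial part the web tool automates.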
International Nuclear Information System (INIS)
The ''Griess Correlation,'' in which the thickness of the corrosion product on aluminum alloy surfaces is expressed as a function of time and temperature for high-flux-reactor conditions, was rewritten in the form of a simple, general rate equation. Based on this equation, a computer program that calculates oxide-layer thickness for any given time-temperature transient was written. 4 refs
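A rate-equation treatment of this kind can be sketched generically: assume a parabolic growth law dx/dt = k(T)/x with an Arrhenius-type k(T), and integrate it piecewise over the time-temperature transient. The constants below are placeholders, not the Griess coefficients:

```python
import math

def oxide_thickness(transient, k0=1.0e-6, ea_over_r=5000.0, x0=0.0):
    """Integrate dx/dt = k(T)/x stepwise over (dt_seconds, T_kelvin) segments.

    Each segment uses the exact parabolic-law step x_new = sqrt(x^2 + 2*k*dt),
    so an arbitrary time-temperature transient is handled piecewise.
    """
    x = x0
    for dt, temp_k in transient:
        k = k0 * math.exp(-ea_over_r / temp_k)  # Arrhenius-type rate constant
        x = math.sqrt(x * x + 2.0 * k * dt)
    return x
```

Because each segment step is closed-form, the integration is stable for any sequence of temperature plateaus and ramps approximated as short isothermal intervals.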
International Nuclear Information System (INIS)
The computer programs ARRRG and FOOD were written to facilitate the calculation of internal and external radiation doses to man from radionuclides in the environment. Using ARRRG, radiation doses to man may be calculated for radionuclides released to bodies of water from which people might obtain fish, other aquatic foods, or drinking water, and in which they might fish, swim or boat. With the FOOD program, radiation doses to man may be calculated from deposition on farm or garden soil and crops during either an atmospheric or water release of radionuclides. Deposition may be either directly from the air or from irrigation water. Fifteen crop or animal product pathways may be chosen. ARRRG and FOOD doses may be calculated either for a maximum-exposed individual or for a population group. Doses calculated are a one-year dose and a committed dose from one year of exposure. The exposure is usually considered chronic; however, equations are included to calculate dose and dose commitment from acute (one-time) exposure. The equations for calculating internal dose and dose commitment are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and the Maximum Permissible Concentration (MPC) of each radionuclide. The radiation doses from external exposure to contaminated farm fields or shorelines are calculated assuming an infinite flat-plane source of radionuclides. A factor of two is included for surface roughness. A modifying factor to compensate for finite extent is included in the shoreline calculations.
Energy Technology Data Exchange (ETDEWEB)
Napier, B.A.; Roswell, R.L.; Kennedy, W.E. Jr.; Strenge, D.L.
1980-06-01
The computer programs ARRRG and FOOD were written to facilitate the calculation of internal and external radiation doses to man from radionuclides in the environment. Using ARRRG, radiation doses to man may be calculated for radionuclides released to bodies of water from which people might obtain fish, other aquatic foods, or drinking water, and in which they might fish, swim or boat. With the FOOD program, radiation doses to man may be calculated from deposition on farm or garden soil and crops during either an atmospheric or water release of radionuclides. Deposition may be either directly from the air or from irrigation water. Fifteen crop or animal product pathways may be chosen. ARRRG and FOOD doses may be calculated either for a maximum-exposed individual or for a population group. Doses calculated are a one-year dose and a committed dose from one year of exposure. The exposure is usually considered chronic; however, equations are included to calculate dose and dose commitment from acute (one-time) exposure. The equations for calculating internal dose and dose commitment are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and the Maximum Permissible Concentration (MPC) of each radionuclide. The radiation doses from external exposure to contaminated farm fields or shorelines are calculated assuming an infinite flat-plane source of radionuclides. A factor of two is included for surface roughness. A modifying factor to compensate for finite extent is included in the shoreline calculations.
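The per-pathway bookkeeping such codes perform reduces, for ingestion, to a concentration × annual intake × dose-factor product summed over pathways. A minimal illustrative sketch (all pathway names and numbers below are invented; a real ARRRG/FOOD run uses nuclide-specific dose factors and environmental transfer models):

```python
def annual_ingestion_dose_sv(pathways):
    """Sum conc (Bq/kg) * annual intake (kg/yr) * dose factor (Sv/Bq) over pathways."""
    return sum(conc * intake * dcf for conc, intake, dcf in pathways.values())

# Hypothetical pathway data: (concentration Bq/kg, intake kg/yr, dose factor Sv/Bq)
pathways = {
    "fish":           (50.0, 20.0, 1.3e-8),
    "leafy_veg":      (10.0, 30.0, 1.3e-8),
    "drinking_water": (1.0, 730.0, 1.3e-8),
}
```

The committed-dose and external-exposure equations of the actual codes layer decay, buildup, and geometry corrections on top of this simple product-and-sum structure.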
International Nuclear Information System (INIS)
During the last few years, the Business Unit ESC-Energy Studies of the Netherlands Energy Research Foundation (ECN) has developed calculation programs to determine the economic efficiency of energy technologies; these programs support several studies for the Dutch Ministry of Economic Affairs. Together they form the so-called BRET programs. One of these programs is ERWIN (Economische Rentabiliteit WINdenergiesystemen, in English: Economic Efficiency of Wind Energy Systems), for which an updated manual (ERWIN2) is presented in this report. An outline is given of the possibilities and limitations of carrying out calculations with the model
Directory of Open Access Journals (Sweden)
Špirtović-Halilović, Selma
2010-02-01
Coumarin-based compounds containing a chalcone moiety exhibit antimicrobial activity. These substances are potential drugs, and it is important to determine their pKa values. However, they are almost insoluble in water. The dissociation constant of 3-[3-(2-nitrophenyl)prop-2-enoyl]-2H-1-benzopyran-2-one was determined experimentally by potentiometric titration, because this compound shows good activity and solubility. A number of different computer programs for the calculation of the dissociation constants of chemical compounds have been developed. The pKa value of the target compound was calculated using three different computer programs, i.e., the ACD/pKa, CSpKaPredictor and ADME/ToxWEB programs, which are based on different theoretical approaches. The analysis demonstrated good agreement between the experimentally observed pKa value of 3-[3-(2-nitrophenyl)prop-2-enoyl]-2H-1-benzopyran-2-one and the value calculated using the computer program CSpKa.
International Nuclear Information System (INIS)
The theme of this work is the concept of mathematical dummies, also called phantoms, used in internal dosimetry and radiation protection, from the perspective of computer simulations. In this work, a mathematical phantom of the Brazilian woman was developed, to serve as the basis for calculations of Specific Absorbed Fractions (SAFs) in the organs and skeleton of the body for diagnostic or therapeutic purposes in nuclear medicine. The phantom developed here is similar in form to the Snyder phantom, adapted to be more realistic for the anthropomorphic characteristics of Brazilian women. To this end, the Monte Carlo formalism was used, through computer modeling. As a further contribution of this study, the computer system cFAE (consultation of Specific Absorbed Fractions) was developed and implemented, providing a versatile query tool for the research user
Energy Technology Data Exchange (ETDEWEB)
Lee, C.E.; Apperson, C.E. Jr.; Foley, J.E.
1976-10-01
The report describes an analytic containment building model that is used for calculating the leakage into the environment of each isotope of an arbitrary radioactive decay chain. The model accounts for the source, the buildup, the decay, the cleanup, and the leakage of isotopes that are gas-borne inside the containment building.
International Nuclear Information System (INIS)
The report describes an analytic containment building model that is used for calculating the leakage into the environment of each isotope of an arbitrary radioactive decay chain. The model accounts for the source, the buildup, the decay, the cleanup, and the leakage of isotopes that are gas-borne inside the containment building
Directory of Open Access Journals (Sweden)
Mohammadnia Meysam
2013-01-01
The flux expansion nodal method is a suitable method for considering nodalization effects in node corners. In this paper we used this method to solve for the intra-nodal flux analytically. A computer code, named MA.CODE, was then developed in the C# programming language. The code is capable of reactor core calculations for hexagonal geometries in two energy groups and three dimensions. MA.CODE imports two-group constants from the WIMS code and calculates the effective multiplication factor, the thermal and fast neutron fluxes in three dimensions, the power density, the reactivity, and the power peaking factor of each fuel assembly. Among the code's merits are its short calculation time and user-friendly interface. MA.CODE results showed good agreement with the IAEA benchmarks AER-FCM-101 and AER-FCM-001.
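The effective multiplication factor such a code converges on is the dominant eigenvalue of the discretized neutron balance, typically obtained by power (source) iteration. A generic sketch of that iteration on a small matrix (illustrative only; MA.CODE's actual two-group hexagonal solver is far more elaborate):

```python
def dominant_eigenvalue(matrix, iters=200):
    """Power iteration for the dominant eigenvalue of a small square matrix —
    the same fixed-point scheme criticality codes use to converge k_eff."""
    n = len(matrix)
    v = [1.0] * n          # initial flux guess
    lam = 1.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # eigenvalue estimate (normalization)
        v = [x / lam for x in w]       # renormalized flux shape
    return lam
```

In a reactor calculation the "matrix" is the fission-source operator acting through the inverse of the loss operator, and the converged eigenvalue is reported as k_eff.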
Kurtaj, Lavdim; Limani, Ilir; Shatri, Vjosa; Skeja, Avni
2014-01-01
The cerebellum is a part of the brain that occupies only 10% of the brain's volume, yet it contains about 80% of the total number of brain neurons. A new cerebellar function model is developed that sets cerebellar circuits in the context of multibody dynamics model computations, an important step in controlling balance and movement coordination, functions performed by the two oldest parts of the cerebellum. The model gives a new functional interpretation of the granule cell-Golgi cell circuit, including distinct function ...
Becker, Caroline
2014-01-01
A molecular understanding of protein-protein or protein-ligand binding is of crucial importance for the design of proteins or ligands with defined binding characteristics. The comprehensive analysis of biomolecular binding and the coupled rational in silico design of protein-ligand interfaces require both accurate and computationally fast methods for the prediction of free energies. Accurate free energy methods usually involve atomistic molecular dynamics simulations that are computationall...
Pasha, M A; Siddekha, Aisha; Mishra, Soni; Azzam, Sadeq Hamood Saleh; Umapathy, S
2015-02-01
In the present study, 2'-nitrophenyloctahydroquinolinedione and its 3'-nitrophenyl isomer were synthesized and characterized by FT-IR, FT-Raman, (1)H NMR and (13)C NMR spectroscopy. The molecular geometries, vibrational frequencies, and (1)H and (13)C NMR chemical shift values of the synthesized compounds in the ground state were calculated using the density functional theory (DFT) method with the 6-311++G(d,p) basis set and compared with the experimental data. Complete vibrational assignments of the wavenumbers were made on the basis of the potential energy distribution using the GAR2PED programme. Isotropic chemical shifts for (1)H and (13)C NMR were calculated using the gauge-invariant atomic orbital (GIAO) method. The experimental vibrational frequencies and (1)H and (13)C NMR chemical shift values were found to be in good agreement with the theoretical values. On the basis of the vibrational analysis, the molecular electrostatic potential and the standard thermodynamic functions were also investigated.
International Nuclear Information System (INIS)
ELKIN is based on a method of kinematic analysis that uses invariant amplitudes with two invariant indices for each particle. Differential cross sections can be calculated, expressed in invariant amplitudes and particle momenta. Conservation laws can be applied, reducing the number of amplitudes. ELKIN is written in LISP and the assembler language LAP. The simplification part of the program is an adaptation of the function SIMP from the algebraic language LAM
Hu, L H; Wong, L H; Chen, G H; Hu, LiHong; Wang, XiuJun; Wong, LaiHo; Chen, GuanHua
2003-01-01
Despite their success, the results of first-principles quantum mechanical calculations contain inherent numerical errors caused by various approximations. We propose here a neural-network algorithm to greatly reduce these inherent errors. As a demonstration, this combined quantum mechanical calculation and neural-network correction approach is applied to the evaluation of the standard heat of formation ΔH and standard Gibbs energy of formation ΔG for 180 organic molecules at 298 K. A dramatic reduction of numerical errors is clearly shown, with systematic deviations being eliminated. For example, the root-mean-square deviation of the calculated ΔH (ΔG) for the 180 molecules is reduced from 21.4 (22.3) kcal·mol⁻¹ to 3.1 (3.3) kcal·mol⁻¹ for B3LYP/6-311+G(d,p) and from 12.0 (12.9) kcal·mol⁻¹ to 3.3 (3.4) kcal·mol⁻¹ for B3LYP/6-311+G(3df,2p) after the neural-network correction.
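The spirit of the correction step can be illustrated with something much simpler than a neural network: fit the systematic part of the deviation between calculation and experiment, then remove it. The sketch below uses a plain least-squares line as a stand-in for the paper's network (synthetic numbers, not the paper's data):

```python
def fit_linear_correction(calc, expt):
    """Least-squares fit expt ≈ a*calc + b; a linear stand-in for the NN correction."""
    n = len(calc)
    mx, my = sum(calc) / n, sum(expt) / n
    a = sum((x - mx) * (y - my) for x, y in zip(calc, expt)) / \
        sum((x - mx) ** 2 for x in calc)
    return a, my - a * mx

def rmsd(pred, expt):
    """Root-mean-square deviation between two equal-length sequences."""
    return (sum((p - e) ** 2 for p, e in zip(pred, expt)) / len(pred)) ** 0.5
```

When the raw error is dominated by a systematic offset or scaling, even this linear correction collapses the RMSD; the neural network of the paper additionally captures molecule-dependent structure in the residuals.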
M. Kasemann
Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
Sato, Tatsuhiko; Endo, Akira; Niita, Koji
2010-04-01
The fluence to organ-absorbed-dose and effective-dose conversion coefficients for heavy ions with atomic numbers up to 28 and energies from 1 MeV/nucleon to 100 GeV/nucleon were calculated using the PHITS code coupled to the ICRP/ICRU adult reference computational phantoms, following the instruction given in ICRP Publication 103 (2007 (Oxford: Pergamon)). The conversion coefficients for effective dose equivalents derived using the radiation quality factors of both Q(L) and Q(y) relationships were also estimated, utilizing the functions for calculating the probability densities of absorbed dose in terms of LET (L) and lineal energy (y), respectively, implemented in PHITS. The calculation results indicate that the effective dose can generally give a conservative estimation of the effective dose equivalent for heavy-ion exposure, although it is occasionally too conservative especially for high-energy lighter-ion irradiations. It is also found from the calculation that the conversion coefficients for the Q(y)-based effective dose equivalents are generally smaller than the corresponding Q(L)-based values because of the conceptual difference between LET and y as well as the numerical incompatibility between the Q(L) and Q(y) relationships. The calculated data of these dose conversion coefficients are very useful for the dose estimation of astronauts due to cosmic-ray exposure.
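The Q(L)-based dose equivalent referred to above is the LET-weighted sum H = Σ Q(L_i)·D_i over absorbed-dose contributions binned in LET. A small sketch using the standard ICRP 60 Q(L) relationship (retained in ICRP 103); the PHITS machinery that produces the dose-versus-LET distribution is, of course, far more involved:

```python
import math

def q_of_let(let):
    """ICRP 60 quality factor Q(L); let is unrestricted LET in keV/µm."""
    if let < 10.0:
        return 1.0
    if let <= 100.0:
        return 0.32 * let - 2.2
    return 300.0 / math.sqrt(let)

def dose_equivalent(dose_by_let):
    """H = sum of Q(L_i) * D_i over (LET, absorbed-dose) contributions."""
    return sum(q_of_let(let) * d for let, d in dose_by_let)
```

The Q(y)-based quantity discussed in the abstract replaces L with lineal energy y and a different weighting function, which is why the two effective dose equivalents do not coincide numerically.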
Exit of a blast wave from a conical nozzle. [flow field calculations by Eulerian computer code DORF]
Kim, K.; Johnson, W. E.
1976-01-01
The Eulerian computer code DORF was used in the analysis of a two-dimensional, unsteady flow field resulting from semi-confined explosions for propulsive applications. Initially, the ambient gas inside the conical shaped nozzle is set into motion due to the expansion of the explosion product gas, forming a shock wave. When this shock front exits the nozzle, it takes almost a spherical form while a complex interaction between the nozzle and compression and rarefaction waves takes place behind the shock. The results show an excellent agreement with experimental data.
Rivera-Ortega, Uriel; Pico-Gonzalez, Beatriz
2016-01-01
In this manuscript an algorithm based on a graphic user interface (GUI) designed in MATLAB for an automatic phase-shifting estimation between two digitalized interferograms is presented. The proposed algorithm finds the midpoint locus of the dark and bright interference fringes in two skeletonized fringe patterns and relates their displacements with the corresponding phase-shift. In order to demonstrate the usefulness of the proposed GUI, its application to simulated and experimental interference patterns will be shown. The viability of this GUI makes it a helpful and easy-to-use computational tool for educational or research purposes in optical phenomena for undergraduate or graduate studies in the field of physics.
Prevot, Thomas
2012-01-01
This paper describes the underlying principles and algorithms for computing the primary controller-managed spacing (CMS) tools developed at NASA for precisely spacing aircraft along efficient descent paths. The trajectory-based CMS tools include slot markers, delay indications and speed advisories. These tools are one of three core NASA technologies integrated in NASA's ATM Technology Demonstration-1 (ATD-1), which will operationally demonstrate the feasibility of fuel-efficient, high-throughput arrival operations using Automatic Dependent Surveillance-Broadcast (ADS-B) and ground-based and airborne NASA technologies for precision scheduling and spacing.
M. Kasemann
Overview During the past three months activities were focused on data operations; testing and re-enforcing shift and operational procedures for data production and transfer; MC production; and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of, and addressing issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
P. McBride
The Computing Project is preparing for a busy year in which the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...
I. Fisk
2011-01-01
Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...
SEXIE 3.0 — an updated computer program for the calculation of coordination shells and geometries
Tabor-Morris, Anne E.; Rupp, Bernhard
1994-08-01
We report a new version of our FORTRAN program SEXIE (ACBV). New features permit interfacing to related programs for EXAFS calculations (FEFF by J.J. Rehr et al.) and structure visualization (SCHAKAL by E. Keller). The code has been refined, and the basis transformation matrix from fractional to Cartesian coordinates has been corrected and made compatible with IUCr (International Union of Crystallography) standards. We discuss how to determine the correct space group setting and atom position input. New examples of Unix script files are provided.
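The fractional-to-Cartesian basis transformation the abstract refers to has a standard closed form. A Python sketch under the common convention of a along x and b in the xy-plane (SEXIE's actual FORTRAN implementation differs in detail):

```python
import math

def frac_to_cart_matrix(a, b, c, alpha, beta, gamma):
    """Fractional→Cartesian matrix for cell lengths a, b, c and angles in degrees,
    with a along x and b in the xy-plane."""
    ca, cb, cg = (math.cos(math.radians(t)) for t in (alpha, beta, gamma))
    sg = math.sin(math.radians(gamma))
    # Reduced cell volume factor sqrt(1 - cos²α - cos²β - cos²γ + 2 cosα cosβ cosγ)
    v = math.sqrt(1.0 - ca * ca - cb * cb - cg * cg + 2.0 * ca * cb * cg)
    return [[a, b * cg, c * cb],
            [0.0, b * sg, c * (ca - cb * cg) / sg],
            [0.0, 0.0, c * v / sg]]
```

Multiplying this matrix by a fractional coordinate column vector yields Cartesian coordinates in the same length units as the cell parameters, which is the step a coordination-shell search needs before computing interatomic distances.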
International Nuclear Information System (INIS)
Many radiotherapy centers acquire 15 and 18 MV linear accelerators to perform more effective treatments of deep tumors. However, the acquisition of such equipment must be accompanied by additional care in the shielding planning of the rooms that will house them. In cases where space is restricted, it is common to find primary barriers made of concrete and metal. The drawback of this type of barrier is photoneutron emission when high-energy photons (e.g., 15 and 18 MV spectra) interact with the metallic material of the barrier. The emission of these particles constitutes a radiation protection problem inside and outside radiotherapy rooms, which should be properly assessed. A recent work has shown that the current model underestimates the neutron dose outside the treatment rooms. In this work, a computational model for the aforementioned problem was created from Monte Carlo simulations and artificial intelligence. The developed model is composed of three neural networks, each corresponding to a pair of material and spectrum: Pb18, Pb15 and Fe18. In a direct comparison with the McGinley method, the Pb18 network exhibited better responses for approximately 78% of the cases tested; the Pb15 network showed better results for 100% of the tested cases, while the Fe18 network produced better answers for 94% of the tested cases. Thus, the computational model composed of the three networks has shown more consistent results than the McGinley method. (author)
International Nuclear Information System (INIS)
Recent advances in sensor technology and engineering have made it possible to assemble many related sensors in a common array, often of small physical size. Sensor arrays may report an entire vector of measured values in each data collection cycle, typically one value per sensor per sampling time. The larger quantities of data provided by larger arrays certainly contain more information; in some cases, however, experience suggests that dramatic increases in array size do not always lead to corresponding improvements in the practical value of the data. The work leading to this report was motivated by the need to develop computational planning tools to approximate the relative effectiveness of arrays of different size (or scale) in a wide variety of contexts. The basis of the work is a statistical model of a generic sensor array. It includes features representing measurement error, both common to all sensors and independent from sensor to sensor, and the stochastic relationships between the quantities to be measured by the sensors. The model can be used to assess the effectiveness of hypothetical arrays in classifying objects or events from two classes. A computer program is presented for evaluating the misclassification rates which can be expected when arrays are calibrated using a given number of training samples, or the number of training samples required to attain a given level of classification accuracy. The program is also available via email from the first author for a limited time
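A toy version of the diminishing-returns effect the report models can be written down directly: with a per-sensor noise term that averages away and a common-mode term that does not, the error rate of a simple threshold classifier floors out as the array grows. All parameters below are illustrative assumptions, not the report's model:

```python
import math

def misclassification_rate(delta, sigma_common, sigma_indep, n_sensors):
    """Error rate of a mean-threshold classifier for two classes whose mean
    signals differ by delta, with common-mode noise shared by all sensors and
    independent per-sensor noise. Averaging n sensors shrinks only the
    independent variance term, so the rate saturates for large arrays."""
    var = sigma_common ** 2 + sigma_indep ** 2 / n_sensors
    z = (delta / 2.0) / math.sqrt(var)
    # Gaussian tail probability beyond the midpoint threshold
    return 0.5 * math.erfc(z / math.sqrt(2.0))
```

As n_sensors → ∞ the variance tends to sigma_common², so beyond some array size adding sensors buys essentially nothing — consistent with the practical observation quoted in the abstract.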
Lyu, Justin; Andrianarijaona, V. M.
2016-05-01
The causes of the misfolding of the prion protein, i.e., the transformation of PrPC to PrPSc, have not been clearly elucidated. Many studies have focused on identifying possible chemical conditions, such as pH, temperature and chemical denaturation, that may trigger the pathological transformation of prion proteins (Weiwei Tao, Gwonchan Yoon, Penghui Cao, "β-sheet-like formation during the mechanical unfolding of prion protein", The Journal of Chemical Physics, 2015, 143, 125101). Here, we attempt to calculate the ionization energies of the prion protein, which may shed light on the possible causes of the misfolding. We plan to use the coarse-grain method, which allows for a more feasible calculation time by means of approximation. We believe that by approximating the ionization potential, particularly that of the regions known to form stable β-strands of the PrPSc form, the possible sources of denaturation, be they chemical or mechanical, may be narrowed down.
Ho, Michelle L; Adler, Benjamin A; Torre, Michael L; Silberg, Jonathan J; Suh, Junghae
2013-12-20
Adeno-associated virus (AAV) recombination can result in chimeric capsid protein subunits whose ability to assemble into an oligomeric capsid, package a genome, and transduce cells depends on the inheritance of sequence from different AAV parents. To develop quantitative design principles for guiding site-directed recombination of AAV capsids, we have examined how capsid structural perturbations predicted by the SCHEMA algorithm correlate with experimental measurements of disruption in seventeen chimeric capsid proteins. In our small chimera population, created by recombining AAV serotypes 2 and 4, we found that protection of viral genomes and cellular transduction were inversely related to calculated disruption of the capsid structure. Interestingly, however, we did not observe a correlation between genome packaging and calculated structural disruption; a majority of the chimeric capsid proteins formed at least partially assembled capsids and more than half packaged genomes, including those with the highest SCHEMA disruption. These results suggest that the sequence space accessed by recombination of divergent AAV serotypes is rich in capsid chimeras that assemble into 60-mer capsids and package viral genomes. Overall, the SCHEMA algorithm may be useful for delineating quantitative design principles to guide the creation of libraries enriched in genome-protecting virus nanoparticles that can effectively transduce cells. Such improvements to the virus design process may help advance not only gene therapy applications but also other bionanotechnologies dependent upon the development of viruses with new sequences and functions.
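At its core, SCHEMA's disruption score E counts structural contacts whose residue pairs appear in neither parent. A hedged sketch of that count (toy sequences; a real calculation derives the contact list from a capsid crystal structure and aligned parental sequences):

```python
def schema_disruption(chimera, parents, contacts):
    """Count contacts (i, j) whose residue pair in the chimera occurs in no parent.

    chimera:  string, the recombined sequence
    parents:  list of parental sequences, same length as chimera
    contacts: list of (i, j) index pairs of residues in structural contact
    """
    broken = 0
    for i, j in contacts:
        pair = (chimera[i], chimera[j])
        if all((p[i], p[j]) != pair for p in parents):
            broken += 1  # this contact pairs residues never seen together in a parent
    return broken
```

The inverse correlation reported in the abstract is between this count and experimental measures of capsid function: chimeras with many "broken" contacts tended to protect genomes and transduce cells poorly.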
de Guzman, C. P.; Andrianarijaona, M.; Lee, Y. S.; Andrianarijaona, V.
An extensive knowledge of the ionization energies of amino acids can provide vital information on protein sequencing, structure, and function. Acidic and basic amino acids are unique in that they have three ionizable groups: the C-terminus, the N-terminus, and the side chain. The effects of multiple ionizable groups can be seen in how the ionizable side chain of aspartate heavily influences its preferred conformation (J. Phys. Chem. A 2011, 115(13), 2900-2912). Theoretical and experimental data on the ionization energies of many of these molecules are sparse. Considering each atom of the amino acid as a potential departing site for the electron gives insight into how the three ionizable groups affect the ionization process of the molecule and the dynamic coupling between the vibrational modes. In the following study, we optimized the structure of each acidic and basic amino acid and then exported the three-dimensional coordinates. We used ORCA to calculate single-point energies for a region near the optimized coordinates, systematically going through the x, y, and z coordinates of each atom in the neutral and ionized forms of the amino acid. With these calculations, we were able to graph potential energy curves to better understand the quantum dynamic properties of the amino acids. The authors thank the Pacific Union College Student Association for providing funds.
International Nuclear Information System (INIS)
This report describes the computer program COXPRO-II, which was written for performing thermal analyses of irradiated fuel assemblies in a gaseous environment with no forced cooling. The heat transfer modes within the fuel pin bundle are radiation exchange among fuel pin surfaces and conduction by the stagnant gas. The array of parallel cylindrical fuel pins may be enclosed by a metal wrapper or shroud. Heat is dissipated from the outer surface of the fuel pin assembly by radiation and convection. Both equilateral triangle and square fuel pin arrays can be analyzed. Steady-state and unsteady-state conditions are included. Temperatures predicted by the COXPRO-II code have been validated by comparing them with experimental measurements. Temperature predictions compare favorably to temperature measurements in pressurized water reactor (PWR) and liquid-metal fast breeder reactor (LMFBR) simulated, electrically heated fuel assemblies. Also, temperature comparisons are made on an actual irradiated Fast-Flux Test Facility (FFTF) LMFBR fuel assembly
da Silveira, P. R.; da Silva, C. R.; Wentzcovitch, R. M.
2006-12-01
We describe the metadata and metadata management algorithms necessary to handle the concurrent execution of multiple tasks from a single workflow in a collaborative service-oriented architecture environment. Metadata requirements are imposed by the distributed workflow that calculates elastic properties of materials at high pressures and temperatures. We explain the basic metaphor underlying the metadata management, the receipt. We also show the actual Java representation of the receipt, and explain how receipts are XML-serialized to be transferred between servers and stored in a database. We also discuss how the collaborative aspect of user activity on running workflows could potentially lead to race conditions, how this affects the requirements on metadata, and how these race conditions are avoided. Finally, we describe an additional metadata structure, complementary to the receipts, that contains general information about the workflow. Work supported by NSF/ITR 0428774 (VLab).
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is on increasing the efficiency and flexibility of the infrastructure and operations. Computing Operations is working on separating disk and tape at the Tier-1 sites and on the full implementation of the xrootd federation ...
International Nuclear Information System (INIS)
Output signals of a commercially available on-line laser turbidimeter exhibit fluctuations due to air and/or CO2 bubbles. A simple data processing algorithm and personal computer software have been developed to smooth the noisy turbidity data acquired and to utilize them for the on-line calculation of some kinetic variables involved in batch and fed-batch cultures of uniformly dispersed microorganisms. With this software, about 10³ instantaneous turbidity data acquired over 55 s are averaged and converted to dry cell concentration, X, every minute. Also, the volume of the culture broth, V, is estimated from the averaged output data of the weight loss of the feed solution reservoir, W, using an electronic balance on which the reservoir is placed. The software then performs linear regression analyses over the past 30 min of the total biomass, VX, the natural logarithm of the total biomass, ln(VX), and the weight loss, W, in order to calculate the volumetric growth rate, d(VX)/dt, the specific growth rate, μ [= dln(VX)/dt], and the rate of weight loss, dW/dt, every minute in a fed-batch culture. The software performing these first-order regression analyses of VX, ln(VX) and W was applied to batch and fed-batch cultures of Escherichia coli on minimum synthetic or natural complex media. Sample determination coefficients of the three variables (VX, ln(VX) and W) were close to unity, indicating that the calculations are accurate. Furthermore, the growth yield, Yx/s, and the specific substrate consumption rate, qsc, were approximately estimated from the regression data in a 'balanced' fed-batch culture of E. coli on the minimum synthetic medium, where the computer-aided substrate-feeding system automatically matches the cell growth well. (author)
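The specific growth rate reported by such software is simply the slope of a first-order regression of ln(VX) against time over the moving window. A minimal Python sketch of that step (window handling and unit conventions are assumptions):

```python
import math

def specific_growth_rate(times_min, total_biomass):
    """mu = slope of ln(VX) vs time over the window, via least squares.

    times_min:     sampling times (min) within the regression window
    total_biomass: corresponding VX values (must be positive)
    """
    ln_vx = [math.log(v) for v in total_biomass]
    n = len(times_min)
    mt, ml = sum(times_min) / n, sum(ln_vx) / n
    return sum((t - mt) * (y - ml) for t, y in zip(times_min, ln_vx)) / \
           sum((t - mt) ** 2 for t in times_min)
```

For truly exponential growth, VX = VX₀·exp(μt), the regression recovers μ exactly; in practice the 30-min window trades noise suppression against responsiveness to changes in growth rate.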
Energy Technology Data Exchange (ETDEWEB)
Papadimitroulas, P; Kagadis, GC [University of Patras, Rion, Ahaia (Greece); Loudos, G [Technical Educational Institute of Athens, Aigaleo, Attiki (Greece)
2014-06-15
Purpose: Our purpose is to evaluate the administered absorbed dose in pediatric nuclear imaging studies. Monte Carlo simulations with the incorporation of pediatric computational models can serve as a reference for the accurate determination of absorbed dose. The procedure for calculating the dosimetric factors is described, and a dataset of reference doses is created. Methods: Realistic simulations were executed using the GATE toolkit and a series of pediatric computational models developed by the “IT'IS Foundation”. The series of phantoms used in our work includes 6 models in the range of 5–14 years old (3 boys and 3 girls). Pre-processing techniques were applied to the images to incorporate the phantoms in GATE simulations. The resolution of the phantoms was set to 2 mm³. The most important organ densities were simulated according to the GATE “Materials Database”. Several radiopharmaceuticals used in SPECT and PET applications were tested, following the EANM pediatric dosage protocol. The biodistributions of the isotopes, used as activity maps in the simulations, were derived from the literature. Results: Initial results of absorbed dose per organ (mGy) are presented for a 5-year-old girl from whole-body exposure to 99mTc-SestaMIBI, 30 minutes after administration. Heart, kidney, liver, ovary, pancreas and brain are the most critical organs, for which the S-factors are calculated. The statistical uncertainty in the simulation procedure was kept lower than 5%. The S-factors for each target organ are calculated in Gy/(MBq·s), with the highest dose being absorbed in the kidneys and pancreas (9.29×10^10 and 0.15×10^10, respectively). Conclusion: An approach for accurate dosimetry on pediatric models is presented, creating a reference dosage dataset for several radionuclides in children computational models with the advantages of MC techniques. Our study is ongoing, extending our investigation to other reference models and
Diaz, Carlos; Echevarria, Lorenzo; Hernández, Florencio E.
2013-05-01
Herein we report on the development of a fragment-recombination approach (FRA) that allows overcoming the computational limitations found in the ab initio calculation of the two-photon circular dichroism (TPCD) spectra of large optically active molecules. Through comparative analysis of the theoretical TPCD spectra of the fragments and that of the entire molecule, we prove that TPCD is an additive property. We also demonstrate that the same property applies to two-photon absorption (TPA). TPCD-FRA is expected to find broad application in the structural analysis of large catalysts and polypeptides, owing to its reduced computational complexity, cost and time, and to reveal fingerprints in the obscure spectral region between the near and far UV.
International Nuclear Information System (INIS)
A computer code, TERFOC-N, has been developed to calculate doses to the public due to atmospheric releases of radionuclides in the normal operation of nuclear facilities. The code calculates the highest individual dose and the collective dose from four exposure pathways: internal doses due to ingestion and inhalation, and external doses due to cloudshine and groundshine. A foodchain model, originally based on U.S. NRC Regulatory Guide 1.109, has been improved to apply not only to LWRs but also to other nuclear facilities. This report describes the models employed and gives a sample run performed by the code. The parameters that were sensitive to the ingestion dose were identified from the results of a sensitivity analysis. The models that significantly contributed to the dose were identified among the models improved and extended here. (author)
International Nuclear Information System (INIS)
The problems of the mathematical description and simulation of temperature fields in annealing the closing weld of the steam generator jacket of the WWER 440 nuclear power plant are discussed. The basic principles of induction annealing are given, the method of calculating temperature fields is indicated, and the mathematical description is given of the boundary conditions on the outer and inner surfaces of the steam generator jacket for the computation of the temperature fields arising during annealing. Also described are the methods of determining the temperature of exposed parts of the heat exchange tubes inside the steam generator, and the technical possibilities of the annealing equipment are assessed from the point of view of its computer simulation. Five alternatives are given for the computation of temperature fields in the area around the weld for different boundary conditions. The values are given of the maximum differences in the temperatures of the metal in the annealed part of the steam generator jacket, which allow the assessment of the individual computation variants, mainly from the point of view of maintaining the annealing temperature over the required width of the annealed jacket along both sides of the closing weld. (B.S.)
Atkinson, Paul
2011-01-01
The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of our work and personal lives. At this point, the computer is so common we hardly notice it in our view. It is difficult to envision that, not so long ago, it was a gigantic, room-sized structure accessible only to a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati
Li, Qiang; Yu, Guichang; Liu, Shulian; Zheng, Shuiying
2012-09-01
Journal bearings are important parts for maintaining the high dynamic performance of rotating machinery. Several methods have been proposed to analyze the flow field of journal bearings, and most of them apply a simplified physical model and the classic Reynolds equation. The application of general computational fluid dynamics (CFD)-fluid structure interaction (FSI) techniques, however, is more beneficial for analysis of the fluid field in a journal bearing when more detailed solutions are needed. This paper deals with the quasi-coupled calculation of the transient fluid dynamics of the oil film in journal bearings and the rotor dynamics, using CFD-FSI techniques. The fluid dynamics of the oil film is calculated by applying the so-called "dynamic mesh" technique. A new mesh movement approach is presented, because the dynamic mesh models provided by FLUENT are not suitable for the transient oil flow in journal bearings. The proposed mesh movement approach is based on a structured mesh. When the journal moves, the movement distance of every grid point in the flow field of the bearing can be calculated, and the update of the volume mesh is then handled automatically by a user-defined function (UDF). The journal displacement at each time step is obtained by solving the equations of motion of the rotor-bearing system under the known oil film force condition. A case study is carried out to calculate the locus of the journal center and the pressure distribution of the journal in order to prove the feasibility of this method. The results indicate that the proposed method can predict the transient flow field of a journal bearing in a rotor-bearing system where more realistic models are involved. The presented calculation method provides a basis for studying the nonlinear dynamic behavior of a general rotor-bearing system.
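The quasi-coupled time march described above can be sketched with a hypothetical linearized film model standing in for the CFD pressure integration; the stiffness, damping and mass values are illustrative only, not from the paper:

```python
import numpy as np

# Hypothetical linearized oil-film model: force = -K x - C v, standing in
# for the CFD pressure integration over the film (illustrative values).
K = np.diag([2.0e6, 2.0e6])     # film stiffness, N/m
C = np.diag([1.0e3, 1.0e3])     # film damping, N s/m
m = 10.0                        # journal mass, kg
W = np.array([0.0, -m * 9.81])  # static load (gravity)

def film_force(x, v):
    """CFD side of the coupling: oil-film force for the current
    journal position x and velocity v (linearized placeholder)."""
    return -K @ x - C @ v

# Quasi-coupled march: evaluate the film force for the current journal
# position, advance the journal motion equations (rotor-dynamics side),
# then the new position would drive the mesh update (the UDF side,
# omitted here).
dt, x, v = 1e-4, np.zeros(2), np.zeros(2)
for _ in range(20000):
    a = (film_force(x, v) + W) / m
    v = v + a * dt
    x = x + v * dt    # x would be passed to the dynamic-mesh UDF

# x settles near the static equilibrium K x = W
```

In the actual scheme the placeholder `film_force` is replaced by the FLUENT solution on the moved mesh at each step.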
Reibe-Pal, Saskia; Madea, Burkhard
2015-03-01
We compared the results of calculating a minimum post-mortem interval (PMImin) in a mock crime case using two different methods: accumulated degree hours (ADH method) and a newly developed computational model called ExLAC. For the ADH method we applied five reference datasets for the development time of Calliphora vicina (Diptera: Calliphoridae) from five different countries, and our results confirmed the following: (1) reference data for blowfly development that have not been sampled from a local blowfly colony should not, in most circumstances, be used to estimate a PMI in real cases; and (2) the new method ExLAC may be a viable alternative to the ADH method.
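The ADH method itself is a simple accumulation of thermal units; the sketch below is a hypothetical illustration (the base temperature and required degree hours are invented, not taken from the reference datasets compared in the paper):

```python
def pmi_min_hours(temps_back_in_time, required_adh, t_base=1.0):
    """Minimum post-mortem interval by the accumulated-degree-hour method.
    temps_back_in_time: hourly temperatures, most recent first, walking
    backwards from the moment the specimens were sampled.
    required_adh: degree hours the observed developmental stage needs
    (would come from a reference dataset; the value here is hypothetical).
    t_base: developmental threshold temperature (also hypothetical)."""
    adh, hours = 0.0, 0
    for temp in temps_back_in_time:
        if adh >= required_adh:
            break
        adh += max(temp - t_base, 0.0)   # only heat above the threshold counts
        hours += 1
    return hours

# Constant 21 degC at a 1 degC base accumulates 20 degree hours per hour,
# so 500 ADH are reached after 25 h.
pmi = pmi_min_hours([21.0] * 48, required_adh=500.0)   # -> 25
```

The choice of reference dataset enters through `required_adh` and `t_base`, which is exactly why the paper cautions against non-local reference data.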
Energy Technology Data Exchange (ETDEWEB)
Lava, Deise D.; Borges, Diogo da S.; Affonso, Renato R.W.; Guimaraes, Antonio C.F.; Moreira, Maria de L., E-mail: deise_dy@hotmail.com, E-mail: diogosb@outlook.com, E-mail: raoniwa@yahoo.com.br, E-mail: tony@ien.gov.br, E-mail: malu@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)
2014-07-01
This paper addresses shielding calculations to minimize the exposure of patients and/or personnel to ionizing radiation. The work makes use of the radiation protection report NCRP-145 (Radiation Protection in Dentistry), which establishes calculations and standards to be adopted to ensure the safety of those who may be exposed to ionizing radiation in dental facilities, according to the dose limits established by the CNEN-NN-3.1 standard published in September 2011. The methodology comprises the use of a computer language for processing the data provided by that report, and a commercial application used for creating residential and decoration projects. The FORTRAN language was adopted for application to a real case. The result is a program capable of returning the thickness of materials such as steel, lead, wood, glass, plaster, acrylic and leaded glass, which can be used for effective shielding against single or continuous pulse beams. Several variables are used to calculate the thickness of the shield: the number of films used per week, film load, use factor, occupancy factor, distance between the wall and the source, transmission factor, workload, area definition, beam intensity, and intraoral and panoramic examinations. Before applying the methodology, the results were validated against examples provided by NCRP-145. The calculations redone from those examples give answers consistent with the report.
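The core of such a shielding calculation can be sketched as follows. The symbols follow the usual NCRP broad-beam convention (design limit P, workload W, use factor U, occupancy factor T), but the numeric values, including the tenth-value layer, are hypothetical and not taken from NCRP-145:

```python
import math

def barrier_thickness_mm(p_weekly, d_m, workload, use_factor, occ_factor, tvl_mm):
    """Barrier thickness from the broad-beam transmission factor
    B = P d^2 / (W U T), converted to a thickness via tenth-value
    layers: x = TVL * log10(1/B).  All inputs are illustrative."""
    b = p_weekly * d_m**2 / (workload * use_factor * occ_factor)
    if b >= 1.0:
        return 0.0    # required transmission already met, no shielding needed
    return tvl_mm * math.log10(1.0 / b)

# Illustrative numbers only: 0.1 mGy/week design limit, 2 m distance,
# 20 mGy/week workload, U = T = 1, and a hypothetical lead TVL of 0.1 mm.
x = barrier_thickness_mm(0.1, 2.0, 20.0, 1.0, 1.0, 0.1)
```

The program described in the paper evaluates this kind of relation for each material (steel, lead, wood, ...) using that material's attenuation data.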
Moorthy, N.; Jobe Prabakar, P. C.; Ramalingam, S.; Periandy, S.; Parasuraman, K.
2016-04-01
In order to explore the remarkable NLO properties of the prepared benzophenone thiosemicarbazone (BPTSC), experimental and theoretical investigations have been made. The theoretical calculations were performed using the RHF and CAM-B3LYP methods with the 6-311++G(d,p) basis set. The title compound contains a C=S group, which helps to improve the second harmonic generation (SHG) efficiency. The molecule has been examined in terms of its vibrational, electronic and optical properties. The molecular behavior was studied through the fundamental IR and Raman wavenumbers and compared with the theoretical results. The molecular chirality has been studied by performing vibrational circular dichroism (circularly polarized infrared radiation) calculations. The Mulliken charges of the compound confirm the perturbation of the atomic charges by the ligand. The interaction of the frontier molecular orbitals emphasizes the modification of the chemical properties of the compound along the reaction path. The large NLO activity is induced by the benzophenone moiety in the thiosemicarbazone. The Gibbs free energy was evaluated at different temperatures, from which the enhancement of chemical stability was inferred. The VCD spectrum was simulated and the optical dichroism of the compound has been analyzed.
Moorthy, N.; Prabakar, P. C. Jobe; Ramalingam, S.; Pandian, G. V.; Anbusrinivasan, P.
2016-04-01
In order to investigate the vibrational, electronic and NLO characteristics of the compound benzaldehyde thiosemicarbazone (BTSC), the XRD, FT-IR, FT-Raman, NMR and UV-visible spectra were recorded and analysed against spectra calculated using the HF and B3LYP methods with the 6-311++G(d,p) basis set. The XRD results revealed that the stabilized molecular systems were confined in an orthorhombic unit cell. The causes of the changes in the chemical and physical properties of the compound are discussed in detail using Mulliken charge levels and NBO analysis. The shift of the molecular vibrational pattern caused by fusing the thiosemicarbazone ligand group with benzaldehyde was closely observed. The occurrence of in-phase and out-of-phase molecular interactions over the frontier molecular orbitals was determined to evaluate the degeneracy of the electronic energy levels. Thermodynamic properties were investigated over the temperature region 100-1000 K to assess the thermal stability of the crystal phase of the compound. The NLO properties were evaluated by determining the polarizability and hyperpolarizability of the compound in the crystal phase. The physical stabilization of the geometry of the compound is explained by a geometry deformation analysis.
I. Fisk
2010-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...
M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...
International Nuclear Information System (INIS)
The report describes an I.B.M. 709 program written at the request of the Reactor Division, Harwell, to obtain high-energy spectra in a system containing a number of fissile and non-fissile materials, arranged as concentric cylinders of infinite length surrounded by an outer material with a square or rectangular boundary. At the cell boundary, neutrons can be lost by leakage or reflected back into the system. A specified number of fission neutrons born in the fissile materials, together with any descendants they may have, are tracked one by one through the system until they are absorbed, lost by leakage through the lattice boundary, or their energies have fallen below a specifiable cut-off energy. The neutrons may be started from anywhere in the system, and all neutron-nucleus reactions that occur in the nuclides supplied with the program are allowed. A description is given of the use of the program, the current version of which is available as a self-loading binary tape containing, in addition to the program, all the nuclear data at present available. Binary card decks are also available, and nuclear data for other nuclides can be added. A feature of the program is the flexibility with which the core storage available for input and output data can be allocated according to the requirements of the problem. The output of the program is in the form of a Binary Coded Decimal (B.C.D.) tape which can be used on the normal I.B.M. off-line equipment to print out the results. An example is given of the results obtained, for use in radiation damage calculations, of the spatial distribution of neutrons in a simple uranium-D2O system.
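The history-tracking loop the report describes can be illustrated with a toy one-group analogue; the slab geometry, cross sections and energy-loss factor below are invented for the sketch and bear no relation to the program's nuclear data:

```python
import math
import random

def track_neutron(rng, sigma_t=1.0, p_abs=0.3, slab_cm=10.0,
                  e0_mev=2.0, cutoff_mev=1e-3, loss=0.5):
    """Toy one-dimensional analogue of the tracking loop: a neutron moves
    through a slab, each flight length is sampled from the exponential
    free-path distribution, and the history ends on absorption, leakage,
    or falling below the cut-off energy.  All parameters are illustrative."""
    x, mu, e = 0.0, 1.0, e0_mev        # position, direction cosine, energy
    while True:
        x += mu * -math.log(1.0 - rng.random()) / sigma_t   # sample flight
        if x < 0.0 or x > slab_cm:
            return "leaked"
        if rng.random() < p_abs:
            return "absorbed"
        e *= loss                       # crude average loss per scatter
        if e < cutoff_mev:
            return "cutoff"
        mu = 2.0 * rng.random() - 1.0   # isotropic re-direction

rng = random.Random(1)
fates = [track_neutron(rng) for _ in range(5000)]
frac_absorbed = fates.count("absorbed") / len(fates)
```

Tallying where each fate occurs, rather than just counting fates, would give the spatial distribution the report uses for radiation damage calculations.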
Parkhurst, David L.; Appelo, C.A.J.
2013-01-01
PHREEQC version 3 is a computer program written in the C and C++ programming languages that is designed to perform a wide variety of aqueous geochemical calculations. PHREEQC implements several types of aqueous models: two ion-association aqueous models (the Lawrence Livermore National Laboratory model and WATEQ4F), a Pitzer specific-ion-interaction aqueous model, and the SIT (Specific ion Interaction Theory) aqueous model. Using any of these aqueous models, PHREEQC has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations with reversible and irreversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and pressure and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters within specified compositional uncertainty limits. Many new modeling features were added to PHREEQC version 3 relative to version 2. The Pitzer aqueous model (pitzer.dat database, with keyword PITZER) can be used for high-salinity waters that are beyond the range of application for the Debye-Hückel theory. The Peng-Robinson equation of state has been implemented for calculating the solubility of gases at high pressure. Specific volumes of aqueous species are calculated as a function of the dielectric properties of water and the ionic strength of the solution, which allows calculation of pressure effects on chemical reactions and the density of a solution. The specific conductance and the density of a solution are calculated and printed in the output file. In addition to Runge-Kutta integration, a stiff ordinary differential equation solver (CVODE) has been included for kinetic calculations with multiple rates that occur at widely different time scales
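The speciation and saturation-index capability in item (1) reduces, for a single mineral, to comparing an ion-activity product with an equilibrium constant. A minimal sketch, using illustrative ion activities and the commonly quoted log K for calcite as an assumption:

```python
import math

def saturation_index(iap, log_k):
    """Saturation index as PHREEQC reports it: SI = log10(IAP) - log10(K).
    SI > 0 means supersaturated, SI < 0 undersaturated, SI = 0 equilibrium."""
    return math.log10(iap) - log_k

# Calcite example with illustrative ion activities:
# a(Ca2+) = 1e-3, a(CO3^2-) = 1e-5, log K_sp(calcite, 25 C) ~ -8.48
si = saturation_index(1e-3 * 1e-5, -8.48)   # -> about +0.48, supersaturated
```

PHREEQC itself first computes the activities from the chosen aqueous model (ion-association, Pitzer, or SIT) before evaluating such indices for every phase in its database.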
Energy Technology Data Exchange (ETDEWEB)
1999-09-01
The fundamental objective of the project is the elaboration of a user-friendly computer programme which allows mining technicians easy application of the empirical calculation methods of mining subsidence. As is well known, these methods use, together with a suitable theoretical support, the experimental data obtained during a long period of mining activity in areas of different geological and geomechanical nature. Thus they can incorporate into the calculation the local parameters that could hardly be taken into account by purely theoretical methods. In general, as the basic calculation method, the procedure developed by the VNIMI Institute of Leningrad has been followed, a method particularly suitable for application to the most varied conditions that may occur in the mining of flat or steep seams. The computer programme has been worked out on the basis of the MicroStation System (version 5.0) of INTERGRAPH, which allows the development of new applications related to the basic aims of the project. An important feature of the programme worth noting is its easy adaptation to local conditions by adjustment of the geomechanical or mining parameters according to the values obtained from one's own working experience. (Author)
International Nuclear Information System (INIS)
To investigate the effect of computed tomography (CT) using hepatic arterial phase (HAP) and portal venous phase (PVP) contrast on dose calculation of stereotactic body radiation therapy (SBRT) for liver cancer, twenty-one patients with liver cancer were studied. HAP, PVP and non-enhanced CTs were performed on subjects scanned in identical positions under active breathing control (ABC). SBRT plans were generated using seven-field three-dimensional conformal radiotherapy (7F-3D-CRT), seven-field intensity-modulated radiotherapy (7F-IMRT) and single-arc volumetric modulated arc therapy (VMAT) based on the PVP CT. Plans were copied to the HAP and non-enhanced CTs. Radiation doses calculated from the three phases of CT were compared with respect to the planning target volume (PTV) and the organs at risk (OAR) using the Friedman test and the Wilcoxon signed-rank test. SBRT plans calculated from either PVP or HAP CT, including 3D-CRT, IMRT and VMAT plans, demonstrated significantly lower (p < 0.05) minimum absorbed doses covering 98%, 95%, 50% and 2% of the PTV (D98%, D95%, D50% and D2%) than those calculated from non-enhanced CT. The mean differences between PVP or HAP CT and non-enhanced CT were less than 2% and 1%, respectively. All mean dose differences between the three phases of CT for the OARs were less than 2%. Our data indicate that, although the differences in dose calculation between contrast phases are not clinically relevant, the dose underestimation (i.e., delivery of higher-than-intended doses) resulting from CT using PVP contrast is larger than that resulting from CT using HAP contrast, when compared against doses based upon non-contrast CT, in SBRT treatment of liver cancer using VMAT, IMRT or 3D-CRT.
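The D_x% metrics compared above are order statistics of the voxel dose distribution; a minimal sketch, under the assumption of equal-volume voxels and with made-up dose values:

```python
import numpy as np

def dose_at_volume(dose_gy, volume_pct):
    """D_x%: the minimum absorbed dose received by the hottest x% of the
    structure's volume, computed from a voxel dose array (equal-volume
    voxels assumed).  D98% is a near-minimum dose, D2% a near-maximum."""
    d = np.sort(np.asarray(dose_gy, dtype=float))[::-1]   # hottest first
    n = max(1, int(round(volume_pct / 100.0 * d.size)))
    return d[:n].min()

# Illustrative PTV voxel doses (Gy); by construction D98% <= D50% <= D2%.
doses = np.linspace(45.0, 55.0, 1000)
d98, d50, d2 = (dose_at_volume(doses, x) for x in (98.0, 50.0, 2.0))
```

In the study, these metrics are evaluated per plan on each CT phase and the paired differences are then tested with the Friedman and Wilcoxon signed-rank tests.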
Energy Technology Data Exchange (ETDEWEB)
Hadid, L; Desbree, A; Franck, D; Blanchardon, E [IRSN, Institute for Radiological Protection and Nuclear Safety, Internal Dosimetry Department, IRSN/DRPH/SDI, BP 17, F-92262 Fontenay-aux-Roses Cedex (France); Schlattl, H; Zankl, M, E-mail: lama.hadid@irsn.f [Institute of Radiation Protection, Helmholtz Zentrum Muenchen-German Research Center for Environmental Health, Neuherberg (Germany)
2010-07-07
The emission of radiation from a contaminated body region is connected with the dose received by radiosensitive tissue through the specific absorbed fractions (SAFs) of emitted energy, which is therefore an essential quantity for internal dose assessment. A set of SAFs were calculated using the new adult reference computational phantoms, released by the International Commission on Radiological Protection (ICRP) together with the International Commission on Radiation Units and Measurements (ICRU). Part of these results has been recently published in ICRP Publication 110 (2009 Adult reference computational phantoms (Oxford: Elsevier)). In this paper, we mainly discuss the results and also present them in numeric form. The emission of monoenergetic photons and electrons with energies ranging from 10 keV to 10 MeV was simulated for three source organs: lungs, thyroid and liver. SAFs were calculated for four target regions in the body: lungs, colon wall, breasts and stomach wall. For quality assurance purposes, the simulations were performed simultaneously at the Helmholtz Zentrum Muenchen (HMGU, Germany) and at the Institute for Radiological Protection and Nuclear Safety (IRSN, France), using the Monte Carlo transport codes EGSnrc and MCNPX, respectively. The comparison of results shows overall agreement for photons and high-energy electrons with differences lower than 8%. Nevertheless, significant differences were found for electrons at lower energy for distant source/target organ pairs. Finally, the results for photons were compared to the SAF values derived using mathematical phantoms. Significant variations that can amount to 200% were found. The main reason for these differences is the change of geometry in the more realistic voxel body models. For electrons, no SAFs have been computed with the mathematical phantoms; instead, approximate formulae have been used by both the Medical Internal Radiation Dose committee (MIRD) and the ICRP due to the limitations imposed
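The way SAFs enter dose assessment can be sketched as a MIRD-style S-factor calculation; the emission spectrum and SAF value below are invented placeholders, not values from the paper:

```python
MEV_TO_J = 1.602176634e-13   # exact conversion, J per MeV

def s_factor(emissions, saf_per_kg):
    """MIRD-style S factor (Gy per decay) for one source->target pair:
    S = sum_i y_i * E_i * SAF(E_i), with SAF in 1/kg as tabulated,
    E_i in MeV and yield y_i per decay.  Inputs here are made up."""
    return sum(y * e_mev * MEV_TO_J * saf_per_kg(e_mev)
               for y, e_mev in emissions)

# Hypothetical photon emitter: one 0.14 MeV photon per decay, and a
# hypothetical flat SAF of 0.02 kg^-1 for the chosen organ pair.
s = s_factor([(1.0, 0.14)], lambda e: 0.02)

# Mean target dose = cumulated activity (decays) * S; here 1 MBq for 1 h.
mean_dose_gy = s * 1e6 * 3600
```

In practice the SAF is interpolated from energy-tabulated values such as those published with the work, and summed over the full emission spectrum of the radionuclide.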
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstruction, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenge (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier-0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier-0, processing a massive number of very large files and with a high writing speed to tape. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for Tape-to-Buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
P. McBride
It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...
I. Fisk
2010-01-01
Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...
I. Fisk
2012-01-01
Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...
Energy Technology Data Exchange (ETDEWEB)
Mehrabian, M.A.; Aseman, R.D. [Mechanical Engineering Dept., Shahid Bahonar Univ., Kerman (Iran, Islamic Republic of)
2008-07-01
The central receiver solar power plant is composed of a large number of individually steered mirrors (heliostats) focusing the solar radiation onto a tower-mounted receiver. In this paper, an algorithm is developed based on vector geometry to pick an individual heliostat and calculate its characteristic angles at different times of the day and different days of the year. The algorithm then picks the other heliostats one by one and performs the same calculations as for the first. These data are used to control the orientation of the heliostats and improve the performance of the field. The procedure is relatively straightforward and quite suitable for computer programming. The effect of major parameters such as shading and blocking on the performance of the heliostat field is also studied using this algorithm. The results of computer simulation are presented in three sections: (1) the characteristic angles of individual heliostats, (2) the incidence angle of the sun rays striking individual heliostats, and (3) the blocking and shading effect of each heliostat. The calculations and comparisons of results show that: (a) the average incidence angle in the northern hemisphere at the north side of the tower is less than that at its south side; (b) the cosine losses are reduced as the latitude or the tower height is increased; (c) the blocking effect is more important in winter, and its effect is much more noticeable than shading for large fields; (d) the height of the tower does not considerably affect shading, but significantly reduces the blocking effect; and (e) to have no blocking effect throughout the year, the field design should be performed for the winter solstice noon. (orig.)
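The vector-geometry approach described above can be illustrated with a minimal Python sketch (not the paper's code): compute a sun direction vector from latitude, day of year, and solar hour using the standard Cooper declination approximation, then take the heliostat incidence angle as half the angle between the sun vector and the heliostat-to-receiver direction, since the mirror normal bisects the two. The function names, frame conventions, and solar-position formulas are illustrative assumptions.

```python
import math

def sun_vector(latitude_deg, day_of_year, solar_hour):
    """Unit vector toward the sun in a local (east, north, up) frame,
    using the Cooper declination approximation (a common textbook formula)."""
    lat = math.radians(latitude_deg)
    decl = math.radians(23.45) * math.sin(2 * math.pi * (284 + day_of_year) / 365)
    hour_angle = math.radians(15.0 * (solar_hour - 12))  # 15 degrees per hour
    sin_elev = (math.sin(lat) * math.sin(decl)
                + math.cos(lat) * math.cos(decl) * math.cos(hour_angle))
    elev = math.asin(max(-1.0, min(1.0, sin_elev)))
    # Azimuth measured from south, positive toward west
    az = math.atan2(math.sin(hour_angle),
                    math.cos(hour_angle) * math.sin(lat)
                    - math.tan(decl) * math.cos(lat))
    return (-math.cos(elev) * math.sin(az),  # east
            -math.cos(elev) * math.cos(az),  # north
            math.sin(elev))                  # up

def incidence_angle_deg(heliostat, receiver, sun):
    """Incidence angle on the mirror: half the angle between the sun
    direction and the heliostat-to-receiver direction (the mirror
    normal bisects the two)."""
    t = [r - h for r, h in zip(receiver, heliostat)]
    norm = math.sqrt(sum(c * c for c in t))
    t = [c / norm for c in t]
    cos2i = sum(a * b for a, b in zip(t, sun))
    return 0.5 * math.degrees(math.acos(max(-1.0, min(1.0, cos2i))))
```

Shading and blocking checks would then compare these heliostat-to-receiver and sun rays against the mirror outlines of neighbouring heliostats.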
International Nuclear Information System (INIS)
Tools for dosimetric calculation are of the utmost importance for the basic principles of radiological protection, in nuclear medicine as well as in other scientific applications. In this work a mathematical model of the Brazilian woman is developed to serve as a basis for calculating Specific Absorbed Fractions (SAFs) in internal organs and in the skeleton, in accord with the objectives of diagnosis or therapy in nuclear medicine. The model developed here is similar in form to that of Snyder, but modified to be more relevant to the case of the Brazilian woman. To do this, the formalism of the Monte Carlo method was used by means of the ALGAM-97R computational code. As a further contribution of this thesis, we developed the computational system cSAF (consultation of Specific Absorbed Fractions; cFAE in the Portuguese acronym), which furnishes several look-up facilities for the research user. The dialogue interface with the operator was designed following current practice in event-oriented languages; it permits the user to navigate among the reference models, choose the source organ and the desired energy, and receive an answer through an efficient and intuitive dialogue. The system furnishes, in addition to the data referring to the Brazilian woman, data referring to the model of Snyder and to the model of the Brazilian man. It makes available not only the individual SAF data of the three models, but also a comparison among them. (author)
International Nuclear Information System (INIS)
At the National Institute of Nuclear Research (ININ), a methodology is being developed to optimize the design of 10x10 fuel-assembly cells for boiling water reactors (BWRs). A linear calculation formula, based on a matrix of coefficients (the rate of change of relative pin power with respect to changes in U-235 enrichment), was proposed to estimate the relative power of each pin in a cell. On this basis, the fast-calculation computer program PreDiCeldas was developed, which uses a simple search algorithm to minimize the maximum local power peaking factor (LPPF) of the cell. This is achieved by varying the distribution of U-235 within the cell while keeping its average enrichment fixed. The accuracy of the estimated relative pin powers is of the order of 1.9% when compared with results of the 'best estimate' HELIOS code. With PreDiCeldas it was possible, in a minimal calculation time, to re-design a reference cell, lowering the beginning-of-life LPPF from 1.44 to 1.31. The low-LPPF cell design is intended to enable the design of cycles even longer than those currently achieved in the BWRs of the Laguna Verde Central. (Author)
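The linear estimate described above amounts to a first-order expansion of the relative pin powers in the enrichment changes. A hypothetical miniature version (three pins instead of a 10x10 lattice; all names and coefficient values are invented for illustration, not taken from PreDiCeldas) could look like:

```python
def estimate_powers(p_ref, coeff, d_enrich):
    """First-order estimate of relative pin powers:
    p_i = p_ref[i] + sum_j coeff[i][j] * d_enrich[j],
    where coeff[i][j] is the rate of change of pin i's relative power
    with respect to the enrichment of pin j."""
    return [p + sum(c * d for c, d in zip(row, d_enrich))
            for p, row in zip(p_ref, coeff)]

def lppf(powers):
    """Local power peaking factor: the maximum relative pin power."""
    return max(powers)

# Example: shift enrichment from a hot pin to a cold pin; the deltas
# sum to zero, so the cell-average enrichment is unchanged.
est = estimate_powers([1.2, 1.0, 0.8],
                      [[-0.1, 0.0, 0.0],
                       [0.0, 0.0, 0.0],
                       [0.0, 0.0, 0.1]],
                      [0.5, 0.0, -0.5])
```

A search over such zero-sum enrichment perturbations, accepting those that lower `lppf`, is one plausible reading of the "simple search algorithm" mentioned in the abstract.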
International Nuclear Information System (INIS)
This paper is a user manual for the computer program MAIL3.0, which produces various types of cross-section sets for neutron transport theory programs. MAIL3.0 is a revised version of the MAIL program in the JACS code system, with the following new features: (1) Both the conventional MGCL library and the new memory-saved library with a P3 scattering-matrix file can be processed. (2) A cross-section library for MULTI-KENO-II can be made. (3) The self-shielding factor f(σ0, T) at a specified temperature T can be calculated by interpolating two f-tables at different temperatures in the MGCL library. (4) The interpolation method of f(σ0) for σ0 ≥ 10^5 barn is revised. (5) MCDAN, a program to calculate Dancoff correction factors by the Monte Carlo method, is included. (6) The h-table that compensates for the narrow-resonance approximation can be read and processed. (7) A program to calculate the atomic number densities of various nuclear materials is included. (8) Atomic number densities of materials such as structural materials, moderators and poisons are available. (author)
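Feature (3), interpolation of the self-shielding factor between two tabulated temperatures, can be sketched as follows. The interpolation variable is an assumption here (linear in √T is a common choice for Doppler-broadened quantities); the actual rule used in MAIL3.0 is not specified in the abstract.

```python
import math

def f_interp(T, T1, f1, T2, f2):
    """Interpolate the self-shielding factor f(sigma0, T) between two
    tabulated temperatures T1 < T2, linearly in sqrt(T) (assumed scheme)."""
    w = (math.sqrt(T) - math.sqrt(T1)) / (math.sqrt(T2) - math.sqrt(T1))
    return f1 + w * (f2 - f1)
```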
Sato, Tatsuhiko; Endo, Akira; Sihver, Lembit; Niita, Koji
2011-03-01
Absorbed-dose and dose-equivalent rates for astronauts were estimated by multiplying fluence-to-dose conversion coefficients, in units of Gy·cm² and Sv·cm² respectively, by cosmic-ray fluxes around spacecraft in units of cm⁻² s⁻¹. The dose conversion coefficients employed in the calculation were evaluated using the general-purpose particle and heavy ion transport code system PHITS coupled to the male and female adult reference computational phantoms, which were released as a common ICRP/ICRU publication. The cosmic-ray fluxes inside and near spacecraft were also calculated by PHITS, using simplified geometries. The accuracy of the obtained absorbed-dose and dose-equivalent rates was verified against various experimental data measured both inside and outside spacecraft. The calculations quantitatively show that the effective doses for astronauts are significantly greater than their corresponding effective dose equivalents, because of the numerical incompatibility between the radiation quality factors and the radiation weighting factors. These results demonstrate the usefulness of dose conversion coefficients in space dosimetry. PMID:20835833
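The dose-rate estimate described in the first sentence is, per particle type, a product of flux and conversion coefficient summed over energy bins. A minimal sketch (illustrative only; real calculations sum over many particle species on a fine energy grid):

```python
def dose_rate(flux, conv_coeff):
    """Dose rate (Gy/s or Sv/s) as the sum over energy bins of
    flux (cm^-2 s^-1) times the fluence-to-dose conversion
    coefficient (Gy cm^2 or Sv cm^2) for that bin."""
    return sum(phi * c for phi, c in zip(flux, conv_coeff))
```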
2010-01-01
Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...
I. Fisk
2012-01-01
Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...
Contributions from I. Fisk
2012-01-01
Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...
I. Fisk
2013-01-01
Computing operations have slowed as the Run 1 samples are being completed and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and on improvements in data access and in the flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of the 2011 data being processed at the sites. Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month. Figure 2: The transfer system worked reliably and efficiently, transferring on average close to 520 TB per week with peaks close to 1.2 PB. Figure 3: The volume of data moved between CMS sites in the last six months. Tape utilisation was a focus for the operations teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier-1 and Tier-2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug-tracking system. In preparation for data taking, the role of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure, as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...
P. MacBride
The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and running the samples through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...
M. Kasemann
Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...
I. Fisk
2011-01-01
Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. A GlideInWMS installation is now deployed at CERN, which is added to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...
International Nuclear Information System (INIS)
The computer code GORGON, which calculates the energy deposition and slowing down of ions in cold materials and hot plasmas, is described and analyzed in this report. The code is under continuous development, but an intermediate stage has been reached at which it is considered useful to document the current state of the art. GORGON is an improved version of a code developed by Zinamon et al. as part of a larger program system for studying the hydrodynamic motion of plane metal targets irradiated by intense proton beams. The improvements were necessary to make the code more useful for problems related to the design and burn of heavy-ion-beam-driven inertial confinement fusion targets. (orig./GG)
M. Kasemann
CMS relies on a well-functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning, increasing the number of sites that are available to participate in CSA07 and ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...
Directory of Open Access Journals (Sweden)
Julien Guevar
The objectives of this study were to investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability, and to assess its association with the presence of neurological deficits. Medical records (2009-2013) were reviewed to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that computer-assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.
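The Cobb angle measured in the study is the angle between the two end-plate lines. A small sketch of that geometry (the point arguments are 2-D image coordinates along each end plate; purely illustrative, not the software used by the observers):

```python
import math

def cobb_angle(p1, p2, q1, q2):
    """Cobb angle in degrees: the acute angle between the line through
    the cranial end plate (p1-p2) and the line through the caudal end
    plate (q1-q2), in image coordinates."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    ang = abs(math.degrees(a1 - a2)) % 180
    return min(ang, 180 - ang)
```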
Energy Technology Data Exchange (ETDEWEB)
Zhang, J; Zhang, W; Lu, J [Cancer Hospital of Shantou University Medical College, Shantou, Guangdong (China)
2015-06-15
Purpose: To investigate the accuracy and feasibility of dose calculation using kilovoltage cone beam computed tomography in cervical cancer radiotherapy with a correction algorithm. Methods: The Hounsfield unit (HU) to electron density (HU-density) curve was obtained for both the planning CT (pCT) and the kilovoltage cone beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, so directly using the CBCT HU-density curve to calculate dose on CBCT images may introduce a deviation in the dose distribution; it is necessary to normalize the HU values between pCT and CBCT. A HU correction algorithm was therefore applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans of cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculation. Phantom and patient studies were carried out. The dose differences and dose distributions were compared between the cCBCT plans and the pCT plans. Results: The HU number of the CBCT was measured several times, and the maximum change was less than 2%. Compared with pCT, the CBCT and cCBCT show discrepancies: the dose differences in the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%) for the phantom study, respectively. For dose calculation on patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: Using CBCT images for dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
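The HU normalization step can be pictured as mapping a CBCT HU value through the CBCT calibration curve to electron density, then back through the inverted pCT curve. This is a generic sketch of that idea, not the paper's specific correction algorithm; the piecewise-linear curves and function names are assumptions.

```python
def correct_hu(hu_cbct, cbct_curve, pct_curve):
    """Map a CBCT HU value to a pCT-equivalent HU: look up electron
    density on the CBCT calibration curve, then invert the pCT curve.
    Curves are sorted lists of (HU, density) pairs; linear interpolation."""
    def interp(x, pts):
        for (x0, y0), (x1, y1) in zip(pts, pts[1:]):
            if x0 <= x <= x1:
                return y0 + (y1 - y0) * (x - x0) / (x1 - x0)
        # clamp outside the table
        return pts[0][1] if x < pts[0][0] else pts[-1][1]
    density = interp(hu_cbct, cbct_curve)
    inv = sorted((d, h) for h, d in pct_curve)  # density -> HU
    return interp(density, inv)
```

With identical calibration curves the mapping is the identity; in practice the two curves differ and the mapping shifts the CBCT HU toward pCT-consistent values.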
Jia, Jia; Wang, Yongtian; Liu, Juan; Li, Xin; Pan, Yijie; Sun, Zhumei; Zhang, Bin; Zhao, Qing; Jiang, Wei
2013-03-01
A fast algorithm with low memory usage is proposed to generate holograms for full-color 3D display, based on a compressed look-up table (C-LUT). The C-LUT is described and built to reduce memory usage and speed up the calculation of the computer-generated hologram (CGH). Numerical simulations and optical experiments are performed to confirm this method, and several other algorithms are compared. The results show that the memory usage of the C-LUT stays low as the number of depth layers of the 3D object is increased, and that the time for building the C-LUT is independent of the number of depth layers. The C-LUT algorithm is thus an efficient method for saving memory and calculation time, and it is expected that it could be used for real-time, full-color 3D holographic display in the future.
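The look-up-table family of CGH algorithms, of which the C-LUT is a compressed variant, precomputes the fringe pattern of a unit-amplitude point per depth layer and reuses it (shifted and weighted) for every object point in that layer. A minimal Fresnel-approximation sketch of such a per-depth fringe is given below; it illustrates the table entry only, and the actual C-LUT compresses this further (the parameter names are assumptions).

```python
import cmath
import math

def point_fringe(nx, ny, z, pitch, wavelength):
    """Complex fringe (nx x ny) of a unit-amplitude point at depth z,
    centred on the hologram, in the Fresnel approximation:
    exp(i*k*(x^2 + y^2) / (2*z)) with k = 2*pi/wavelength."""
    k = 2 * math.pi / wavelength
    fringe = []
    for iy in range(ny):
        row = []
        for ix in range(nx):
            x = (ix - nx // 2) * pitch
            y = (iy - ny // 2) * pitch
            row.append(cmath.exp(1j * k * (x * x + y * y) / (2 * z)))
        fringe.append(row)
    return fringe
```

A hologram is then the superposition, over all object points, of this table entry shifted to each point's transverse position, which is where the table look-up saves recomputation.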
Energy Technology Data Exchange (ETDEWEB)
Borges, Diogo da S.; Lava, Deise D.; Affonso, Renato R.W.; Moreira, Maria de L.; Guimaraes, Antonio C.F., E-mail: diogosb@outlook.com, E-mail: deise_dy@hotmail.com, E-mail: raoniwa@yahoo.com.br, E-mail: malu@ien.gov.br, E-mail: tony@ien.gov.br [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)
2014-07-01
The construction of an effective barrier against the ionizing radiation present in X-ray rooms requires consideration of many variables. The methodology used to specify the thickness of the primary and secondary shielding of a traditional X-ray room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma, and distance between the patient and the receptor. With these data, a computer program was developed to identify and use the variables in functions obtained through the graphical regressions offered by NCRP Report 147 (Structural Shielding Design for Medical X-Ray Imaging Facilities) for calculating the shielding of the room walls, as well as of the darkroom wall and adjacent areas. The program is validated by comparing its results with a base case provided by that report. The obtained thicknesses cover various materials such as steel, wood and concrete. After validation, the program is applied to a real radiographic room, whose visual model is built with the help of indoor and outdoor modeling software. The barrier-calculation program resulted in a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3:01, published in September 2011.
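The NCRP Report 147 methodology named above computes a required broad-beam transmission B = P d² / (W U T) and then inverts a fitted transmission curve to get the barrier thickness. The sketch below uses the widely cited Archer fit B(x) = [(1 + β/α) e^{αγx} − β/α]^{−1/γ}; the fit constants α, β, γ are material- and tube-voltage-specific and must be taken from the report's tables (none are hard-coded here).

```python
import math

def required_transmission(P, d, W, U, T):
    """Required broad-beam transmission B = P * d^2 / (W * U * T):
    P  shielding design goal behind the barrier (e.g. mGy/week),
    d  source-to-barrier distance (m),
    W  workload, U use factor, T occupancy factor."""
    return P * d * d / (W * U * T)

def archer_thickness(B, alpha, beta, gamma):
    """Barrier thickness x obtained by inverting the Archer fit:
    x = ln((B^-gamma + beta/alpha) / (1 + beta/alpha)) / (alpha * gamma)."""
    return (1.0 / (alpha * gamma)) * math.log(
        (B ** (-gamma) + beta / alpha) / (1.0 + beta / alpha))
```

When B = 1 no barrier is needed and the inversion correctly returns zero thickness.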
The computer-aided calculation of cocurrent multi-effect evaporation
Institute of Scientific and Technical Information of China (English)
陈文波; 陈华新; 施得志; 阮奇
2001-01-01
The model of cocurrent multi-effect evaporation with extra vapor elicitation and condensed-water flash is established, and a computer-aided calculation method is presented. The algorithm is programmed in Visual Basic 5.0. A practical example shows that the calculation method is fast and accurate. With condensed-water flash and preheating of the cane sugar solution from 26.7 °C to 70 °C by eliciting extra vapor, the fresh-vapor consumption of cocurrent four-effect evaporation is decreased by about 11%.
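A computer-aided multi-effect evaporation calculation typically starts from an overall mass balance and an initial equal split of the evaporation load across effects, which is then refined by iterating the enthalpy balances. A sketch of just that starting step (the even-split heuristic and names are illustrative assumptions, not the paper's model):

```python
def initial_effect_loads(feed, x_in, x_out, n_effects):
    """Overall mass balance: total evaporation = feed * (1 - x_in/x_out),
    where x_in and x_out are inlet and outlet solute mass fractions.
    As a first guess, split the load evenly across the effects."""
    total_evap = feed * (1.0 - x_in / x_out)
    return [total_evap / n_effects] * n_effects
```

Subsequent iterations would adjust these per-effect loads, and the steam consumption, until the enthalpy balance of each effect closes.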
Energy Technology Data Exchange (ETDEWEB)
Dong Cunku [Department of Chemistry, Harbin Institute of Technology, Harbin 150090 (China); Li Xin, E-mail: lixin@hit.edu.cn [Department of Chemistry, Harbin Institute of Technology, Harbin 150090 (China); Guo Zechong [School of Municipal Environmental Engineering, Harbin Institute of Technology, Harbin 150090 (China); Qi Jingyao, E-mail: jyq@hit.edu.cn [School of Municipal Environmental Engineering, Harbin Institute of Technology, Harbin 150090 (China)
2009-08-04
A new rational approach to the preparation of a molecularly imprinted polymer (MIP), based on the combination of molecular dynamics (MD) simulations and quantum mechanics (QM) calculations, is described in this work. Before performing the molecular modeling, a virtual library of functional monomers was created containing forty frequently used monomers. MD simulations were first conducted to screen the top three monomers from the virtual library in each porogen (acetonitrile, chloroform and carbon tetrachloride). QM calculations were then performed to select the optimum monomer and porogen solvent; the monomer giving the highest binding energy was chosen as the candidate to prepare the MIP in the corresponding solvent. Acetochlor, a widely used herbicide, was chosen as the target analyte. According to the theoretical calculation results, the MIP with acetochlor as template was prepared by emulsion polymerization using N,N-methylene bisacrylamide (MBAAM) as functional monomer and divinylbenzene (DVB) as cross-linker in chloroform. The synthesized MIP was then tested by the equilibrium-adsorption method, and the MIP demonstrated high removal efficiency for acetochlor. Mulliken charge distributions and ¹H NMR spectroscopy of the synthesized MIP provided insight into the nature of recognition during the imprinting process, probing the interactions governing selective binding-site formation at the molecular level. We believe the computer simulation method first proposed in this paper is a novel and reliable approach to the design and synthesis of MIPs.
Daniels, Jeffrey J.
1977-01-01
Three-dimensional induced polarization and resistivity modeling for buried electrode configurations can be achieved by adapting surface integral techniques for surface electrode configurations to buried electrodes. Modification of the surface technique is accomplished by considering the additional mathematical terms required to express the changes in the electrical potential and geometry caused by placing the source and receiver electrodes below the surface. This report presents a listing of a computer program to calculate the resistivity and induced polarization response of a three-dimensional body for buried electrode configurations. The program is designed to calculate the response for the following electrode configurations: (1) hole-to-surface array with a buried bipole source and a surface bipole receiver, (2) hole-to-surface array with a buried pole source and a surface bipole receiver, (3) hole-to-hole array with a buried, fixed pole source and a moving bipole receiver, (4) surface-to-hole array with a fixed pole source on the surface and a moving bipole receiver in the borehole, (5) hole-to-hole array with a buried, fixed bipole source and a buried, moving bipole receiver, (6) hole-to-hole array with a buried, moving bipole source and a buried, moving bipole receiver, and (7) single-hole, buried bipole-bipole array. Input and output examples are given for each of the arrays.
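The extra mathematical terms for buried electrodes come from the image source needed to satisfy the no-current boundary condition at the ground surface. For a homogeneous half-space, the primary potential of a buried current pole is a two-term sum, which the following sketch makes concrete (illustrative only; the report's program handles full 3-D bodies, not just this primary field):

```python
import math

def potential_buried_pole(I, rho, src, obs):
    """Potential at obs (x, y, z with z >= 0 below ground) due to a point
    current source buried in a homogeneous half-space of resistivity rho,
    via the image method: V = (I*rho / 4*pi) * (1/r + 1/r'),
    where r' is the distance to the mirror image above the surface."""
    def dist(a, b):
        return math.sqrt(sum((u - v) ** 2 for u, v in zip(a, b)))
    image = (src[0], src[1], -src[2])
    return I * rho / (4 * math.pi) * (1.0 / dist(src, obs)
                                      + 1.0 / dist(image, obs))
```

For a source on the surface the two terms coincide and the familiar V = Iρ/(2πr) is recovered, which is a convenient sanity check.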
International Nuclear Information System (INIS)
The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated by the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables.
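The construction of an equal-probability cross-section table from Monte Carlo samples can be sketched as: sort the sampled cross sections, split them into bands of equal probability, and record each band's probability and mean. This is a generic illustration of the probability-table idea, not URR's actual algorithm.

```python
def probability_table(xs_samples, n_bands):
    """Equal-probability cross-section bands: sort the sampled cross
    sections, split into n_bands groups of (nearly) equal size, and
    return a list of (probability, band-average cross section)."""
    xs = sorted(xs_samples)
    size = len(xs) // n_bands
    table = []
    for b in range(n_bands):
        band = xs[b * size:(b + 1) * size] if b < n_bands - 1 else xs[b * size:]
        table.append((len(band) / len(xs), sum(band) / len(band)))
    return table
```

Self-shielded averages then reduce to weighted sums over this table, which is the discrete form of the Lebesgue integral mentioned in the abstract.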
Parkhurst, David L.; Appelo, C.A.J.
2013-01-01
PHREEQC version 3 is a computer program written in the C and C++ programming languages that is designed to perform a wide variety of aqueous geochemical calculations. PHREEQC implements several types of aqueous models: two ion-association aqueous models (the Lawrence Livermore National Laboratory model and WATEQ4F), a Pitzer specific-ion-interaction aqueous model, and the SIT (Specific ion Interaction Theory) aqueous model. Using any of these aqueous models, PHREEQC has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations with reversible and irreversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and pressure and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters within specified compositional uncertainty limits. Many new modeling features were added to PHREEQC version 3 relative to version 2. The Pitzer aqueous model (pitzer.dat database, with keyword PITZER) can be used for high-salinity waters that are beyond the range of application for the Debye-Hückel theory. The Peng-Robinson equation of state has been implemented for calculating the solubility of gases at high pressure. Specific volumes of aqueous species are calculated as a function of the dielectric properties of water and the ionic strength of the solution, which allows calculation of pressure effects on chemical reactions and the density of a solution. The specific conductance and the density of a solution are calculated and printed in the output file. In addition to Runge-Kutta integration, a stiff ordinary differential equation solver (CVODE) has been included for kinetic calculations with multiple rates that occur at widely different time scales.
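Two of the elementary quantities underlying such speciation codes are the saturation index and, in dilute solutions, Debye-Hückel activity corrections. A minimal sketch of both (generic textbook formulas, not PHREEQC's implementation, which is far more complete):

```python
import math

def saturation_index(iap, ksp):
    """SI = log10(IAP / Ksp): 0 at equilibrium with the mineral,
    > 0 supersaturated, < 0 undersaturated."""
    return math.log10(iap / ksp)

def debye_huckel_log_gamma(z, ionic_strength, A=0.509):
    """log10 activity coefficient from the Debye-Hückel limiting law,
    log10(gamma) = -A * z^2 * sqrt(I); valid only for dilute solutions
    (A ~ 0.509 for water at 25 C)."""
    return -A * z * z * math.sqrt(ionic_strength)
```

The Pitzer and SIT models mentioned in the abstract replace the limiting law when the ionic strength is too high for Debye-Hückel theory.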
Energy Technology Data Exchange (ETDEWEB)
Onozato, Yusuke [Department of Radiation Oncology, Tohoku University School of Medicine, Sendai (Japan); Kadoya, Noriyuki, E-mail: kadoya.n@rad.med.tohoku.ac.jp [Department of Radiation Oncology, Tohoku University School of Medicine, Sendai (Japan); Fujita, Yukio; Arai, Kazuhiro [Department of Radiation Oncology, Tohoku University School of Medicine, Sendai (Japan); Dobashi, Suguru; Takeda, Ken [Department of Radiological Technology, School of Health Sciences, Faculty of Medicine, Tohoku University, Sendai (Japan); Kishi, Kazuma [Radiation Technology, Tohoku University Hospital, Sendai (Japan); Umezawa, Rei; Matsushita, Haruo; Jingu, Keiichi [Department of Radiation Oncology, Tohoku University School of Medicine, Sendai (Japan)
2014-06-01
Purpose: The purpose of this study was to estimate the accuracy of the dose calculation of On-Board Imager (Varian, Palo Alto, CA) cone beam computed tomography (CBCT) with deformable image registration (DIR), using the multilevel-threshold (MLT) algorithm and histogram matching (HM) algorithm in pelvic radiation therapy. Methods and Materials: One pelvis phantom and 10 patients with prostate cancer treated with intensity modulated radiation therapy were studied. To minimize the effect of organ deformation and different Hounsfield unit values between planning CT (PCT) and CBCT, we modified CBCT (mCBCT) with DIR by using the MLT (mCBCT{sub MLT}) and HM (mCBCT{sub HM}) algorithms. To evaluate the accuracy of the dose calculation, we compared dose differences in dosimetric parameters (mean dose [D{sub mean}], minimum dose [D{sub min}], and maximum dose [D{sub max}]) for planning target volume, rectum, and bladder between PCT (reference) and CBCTs or mCBCTs. Furthermore, we investigated the effect of organ deformation compared with DIR and rigid registration (RR). We determined whether dose differences between PCT and mCBCTs were significantly lower than in CBCT by using Student t test. Results: For patients, the average dose differences in all dosimetric parameters of CBCT with DIR were smaller than those of CBCT with RR (eg, rectum; 0.54% for DIR vs 1.24% for RR). For the mCBCTs with DIR, the average dose differences in all dosimetric parameters were less than 1.0%. Conclusions: We evaluated the accuracy of the dose calculation in CBCT, mCBCT{sub MLT}, and mCBCT{sub HM} with DIR for 10 patients. The results showed that dose differences in D{sub mean}, D{sub min}, and D{sub max} in mCBCTs were within 1%, which were significantly better than those in CBCT, especially for the rectum (P<.05). Our results indicate that the mCBCT{sub MLT} and mCBCT{sub HM} can be useful for improving the dose calculation for adaptive radiation therapy.
McCarty, George
1982-01-01
How THIS BOOK DIFFERS This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. t Furthermore, the student's own personal involvement and easy accomplishment give hi~ reassurance and en couragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of fex) = 67.SgX, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits of 10 100 1000 t A quick example is 1.1 , 1.01 , 1.001 , •••• Another example is t = 0.1, 0.01, in the functio...
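The quick example in this excerpt, 1.1^10, 1.01^100, 1.001^1000, ..., is the classic approach of (1 + 1/n)^n toward e; a few lines of Python reproduce the calculator exercise:

```python
import math

def sequence_term(n):
    """Compute (1 + 1/n)**n, the calculator exercise approximating e."""
    return (1.0 + 1.0 / n) ** n

# n = 10, 100, ..., 1,000,000 -- each term a few keystrokes on a calculator
terms = [sequence_term(10 ** k) for k in range(1, 7)]
```

The terms increase monotonically toward e ≈ 2.71828, exactly the kind of numerical evidence about a limit the book turns into an illustration of the theory.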
Adibi, Atoosa; Mortazavi, Mojgan; Shayganfar, Azin; Kamal, Sima; Azad, Roya; Aalinezhad, Marzieh
2016-01-01
It is essential to ascertain the state of health and renal function of potential kidney donors before organ removal. In this regard, one of the primary steps is to estimate the donor's glomerular filtration rate (GFR). For this purpose, the modification of diet in renal disease (MDRD) and the Cockcroft-Gault (CG) formulas has been used. However, these two formulas produce different results and finding new techniques with greater accuracy is required. Measuring the renal volume from computed tomography (CT) scan may be a valuable index to assess the renal function. This study was conducted to investigate the correlation between renal volume and the GFR values in potential living kidney donors referred to the multislice imaging center at Alzahra Hospital during 2014. The study comprised 66 subjects whose GFR was calculated using the two aforementioned formulas. Their kidney volumes were measured by using 64-slice CT angiography and the correlation between renal volume and GFR values were analyzed using the Statistical Package for the Social Science software. There was no correlation between the volume of the left and right kidneys and the MDRD-based estimates of GFR (P = 0.772, r = 0.036, P = 0.251, r = 0.143, respectively). A direct linear correlation was found between the volume of the left and right kidneys and the CG-based GFR values (P = 0.001, r = 0.397, P kidney volume derived from multislice CT scan can help predict the GFR value in kidney donors with normal renal function. The limitations of our study include the small sample size and the medium resolution of 64-slice multislice scanners. Further studies with larger sample size and using higher resolution scanners are warranted to determine the accuracy of this method in potential kidney donors. PMID:27424682
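For reference, the Cockcroft-Gault estimate used in this study follows the standard published formula; a sketch in Python (units and the 0.85 correction factor for women as commonly stated, not taken from this paper):

```python
def cockcroft_gault(age_years, weight_kg, serum_creatinine_mg_dl, female=False):
    """Cockcroft-Gault creatinine clearance estimate in mL/min:
    CrCl = (140 - age) * weight / (72 * Scr), multiplied by 0.85 for women."""
    crcl = (140.0 - age_years) * weight_kg / (72.0 * serum_creatinine_mg_dl)
    return crcl * 0.85 if female else crcl
```

A 40-year-old, 72 kg man with serum creatinine 1.0 mg/dL yields exactly 100 mL/min, a convenient sanity check.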
Daniluk, Andrzej
2011-06-01
A computational model is a computer program which attempts to simulate an abstract model of a particular system. Computational models involve enormous numbers of calculations and often require supercomputer speed. As personal computers become more and more powerful, more laboratory experiments can be converted into computer models that can be interactively examined by scientists and students without the risk and cost of the actual experiments. The future of programming is concurrent programming. The threaded programming model provides application programmers with a useful abstraction of concurrent execution of multiple tasks. The objective of this release is to address the design of an architecture for a scientific application that may execute as multiple threads, as well as implementations of the related shared data structures. New version program summary: Program title: GrowthCP Catalogue identifier: ADVL_v4_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 32 269 No. of bytes in distributed program, including test data, etc.: 8 234 229 Distribution format: tar.gz Programming language: Free Object Pascal Computer: multi-core x64-based PC Operating system: Windows XP, Vista, 7 Has the code been vectorised or parallelized?: No RAM: More than 1 GB. The program requires a 32-bit or 64-bit processor to run the generated code. Memory is addressed using 32-bit (on 32-bit processors) or 64-bit (on 64-bit processors with 64-bit addressing) pointers. The amount of addressed memory is limited only by the available amount of virtual memory. Supplementary material: The figures mentioned in the "Summary of revisions" section can be obtained here. Classification: 4.3, 7.2, 6.2, 8, 14 External routines: Lazarus [1] Catalogue
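The threaded programming model described above can be illustrated with a minimal Python sketch (GrowthCP itself is written in Free Object Pascal; this is only an analogy of multiple tasks sharing a results structure):

```python
import threading

def partial_sum(values, out, idx):
    """Worker task: write the sum of its slice into a shared results list."""
    out[idx] = sum(values)

data = list(range(1000))
results = [0, 0]  # shared structure, one slot per thread (no overlap, so no lock)
threads = [
    threading.Thread(target=partial_sum, args=(data[:500], results, 0)),
    threading.Thread(target=partial_sum, args=(data[500:], results, 1)),
]
for t in threads:
    t.start()
for t in threads:
    t.join()  # wait for both workers before combining
total = results[0] + results[1]
```

Giving each thread its own output slot sidesteps the data races that make shared mutable state the hard part of concurrent programming.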
International Nuclear Information System (INIS)
The project encompasses the following project tasks and problems: (1) Studies relating to complete failure of the main heat transfer system; (2) Pebble flow; (3) Development of computer codes for detailed calculation of hypothetical accidents: (a) the THERMIX/RZKRIT temperature buildup code (covering, among others, a variation to include exothermal heat sources); (b) the REACT/THERMIX corrosion code (a variation taking into account extremely severe air ingress into the primary loop); (c) the GRECO corrosion code (a variation for treating extremely severe water ingress into the primary loop); (d) the KIND transients code (for treating extremely fast transients during reactivity incidents); (4) Limiting devices for safety-relevant quantities; (5) Analyses relating to hypothetical accidents: (a) hypothetical air ingress; (b) effects on the fuel particles induced by fast transients. The problems of the various tasks are defined in detail and the main results obtained are explained. The contributions reporting the various project tasks and activities have been prepared for separate retrieval from the database. (orig./HP)
International Nuclear Information System (INIS)
In order to evaluate core characteristics of fast reactors, a computer code system ARCADIAN-FBR has been developed by utilizing the existing analysis codes and the latest nuclear data library JENDL-3.3. The validity of ARCADIAN-FBR was verified by using the experimental data obtained in the MONJU core physics tests. The results of the analyses are in good agreement with the experimental data, and the applicability of ARCADIAN-FBR for fast reactor core analysis is confirmed. Using ARCADIAN-FBR, the sodium void reactivity worth, which is an important parameter in the safety analysis of fast reactors, was analyzed for the MONJU core. 241Pu in the core fuel is transmuted to 241Am by radioactive decay. Therefore, the effect of 241Am accumulation on the sodium void reactivity worth was evaluated for the MONJU core. As a result of the calculation, it was confirmed that the accumulation of 241Am significantly influences the sodium void reactivity worth and hence the safety analysis of sodium-cooled fast reactors. (author)
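The 241Pu-to-241Am buildup discussed above follows simple first-order decay; a hedged sketch (the half-life value of about 14.35 years is the commonly cited one, not taken from this abstract, and the much slower decay of 241Am itself is ignored):

```python
import math

T_HALF_PU241 = 14.35  # years, approximate beta-decay half-life of Pu-241

def am241_buildup_fraction(t_years):
    """Fraction of the initial Pu-241 inventory that has decayed to Am-241
    after t years: 1 - exp(-lambda * t), lambda = ln(2) / T_half."""
    lam = math.log(2.0) / T_HALF_PU241
    return 1.0 - math.exp(-lam * t_years)
```

After one half-life exactly half the 241Pu has become 241Am, which is why cooling time between fuel discharge and analysis matters for the void-worth evaluation.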
International Nuclear Information System (INIS)
The computer program INDAR enables detailed estimates to be made of critical group radiation exposure arising from routine discharges of radioactivity for coastal sites where the discharge is close to the shore and the shoreline is reasonably straight, and for estuarine sites where radioactivity is rapidly mixed across the width of the estuary. Important processes which can be taken into account include the turbulence generated by the discharge, the effects of a sloping sea bed and the variation with time of the lateral dispersion coefficient. The significance of the timing of discharges can also be assessed. INDAR uses physically meaningful hydrographic parameters directly. For most sites the most important exposure pathways are seafood consumption, external exposure over estuarine sediments and beaches, and the handling of fishing gear. As well as for these primary pathways, INDAR enables direct calculations to be made for some additional exposure pathways. The secondary pathways considered are seaweed consumption, swimming, the handling of materials other than fishing gear and the inhalation of activity. (author)
Douhaya, Ya V; Barkaline, V V; Tsakalof, A
2016-07-01
Molecular imprinting is a promising way to create polymer materials that can be used as artificial receptors, with anticipated use in the synthetic imitation of natural antibodies. In the case of successful imprinting, the selectivity and affinity of the imprint for the substrate molecules are comparable with those of natural counterparts. Various calculation methods can be used to estimate the effects of a large range of imprinting parameters under different conditions, and to find better ways to improve polymer characteristics. However, one difficulty is that properties such as hydrogen bonding can be modeled only by quantum methods that demand a lot of computational resources. Combined quantum mechanics/molecular mechanics (QM/MM) methods allow the use of MM and QM for different parts of the modeled system. In the present study, this method was implemented in the NWChem package to compare estimates of the stability of tri-O-acetyl adenosine-monomer pre-polymerization complexes in benzene solution with previous results under vacuum. PMID:27296451
International Nuclear Information System (INIS)
Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during operation, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed, based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used, especially in the analysis of very complex systems. In order to increase the applicability of the programs, variance reduction techniques can be applied to speed up the calculation process. Variance reduction techniques have been studied, and procedures for the implementation of importance sampling are suggested. (author)
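Importance sampling, the variance-reduction technique recommended above, can be demonstrated on a toy problem: estimating a small normal tail probability by sampling from a shifted proposal and reweighting. The threshold, shift, and sample count here are illustrative, not from the report:

```python
import math
import random

def tail_prob_importance(n=50_000, threshold=3.0, shift=3.0, seed=1):
    """Estimate P(X > threshold) for X ~ N(0,1) by sampling from N(shift,1)
    and weighting each hit by the likelihood ratio phi(x) / phi(x - shift).
    Plain Monte Carlo would need ~1e6 samples for comparable accuracy."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x = rng.gauss(shift, 1.0)
        if x > threshold:
            # ratio of the target N(0,1) density to the proposal N(shift,1) density
            total += math.exp(-x * x / 2.0 + (x - shift) ** 2 / 2.0)
    return total / n
```

The true value is about 1.35e-3; centering the proposal on the rare region makes almost every sample contribute, which is exactly the effect sought when accelerating system-reliability simulations.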
Directory of Open Access Journals (Sweden)
Guez F.
2006-11-01
Full Text Available The search for optimum production conditions for a fissured reservoir depends on having a good description of the fissure pattern. Hence the sizes and volumes of the matrix blocks must be defined at all points in a structure. However, the geometry of the medium (juxtaposition and shapes of the blocks) is usually too complex for such computation. This is why, in a previous paper, we got around this problem by reasoning on the basis of averages (dips, azimuths, fissure spacing), which led us to an order of magnitude of the volumes. Yet a mean volume cannot account for the distribution law of the block volumes, and it is this distribution that governs the choice of one or several successive recovery methods. We therefore present here an original method for the statistical calculation of the distribution law of matrix-block volumes, applicable at any point of a reservoir; the share of the reservoir occupied by blocks of a given volume is deduced from it. The model is based on general knowledge of the fracturing phenomenon, with subsurface observations of the reservoir's fracturing supplying the data (histograms of fissure orientation and spacing). An application to the Eschau field (Alsace, France) is reported here to illustrate the method.
International Nuclear Information System (INIS)
The URR computer code has been developed to calculate cross-section probability tables, Bondarenko self-shielding factors, and self-indication ratios for fertile and fissile isotopes in the unresolved resonance region. Monte Carlo methods are utilized to select appropriate resonance parameters and to compute the cross sections at the desired reference energy. The neutron cross sections are calculated by the single-level Breit-Wigner formalism with s-, p-, and d-wave contributions. The cross-section probability tables are constructed by sampling the Doppler-broadened cross sections. The various self-shielding factors are computed numerically as Lebesgue integrals over the cross-section probability tables. 6 refs
Analysis and calculation of emotion in a human-computer interaction system
Institute of Scientific and Technical Information of China (English)
杨杰; 赵强
2012-01-01
Human emotion carries a great deal of information, and facial-expression and gesture images in particular contain much interfering information, which lowers the accuracy of affective computing. This paper therefore proposes a method that systematically analyzes emotional cues from the user's facial expressions and gestures. The analysis of facial expressions and gestures is a basic component of an emotionally rich human-computer interaction system, and a nonverbal-cue algorithm is used to judge the user's emotional state. Expression-related features are extracted from image sequences, and an intelligent rule system analyzes the user's emotional state, so that the resulting emotion information matches the user's true reactions. Finally, agent-based interface technology is used to handle specific emotional states such as frustration or anger. Experimental results show that the proposed method can accurately analyze and compute the user's emotional information.
Saarilahti, M.
2002-01-01
Documentation of a computer programme written in VisualBasic for comparing the trafficability of forest terrain and mobility of forest tractors using different WES-based mobility models and empirical rut depth models.
Fog computing: a boundary computing model for the Internet of Things
Institute of Scientific and Technical Information of China (English)
杨志和
2014-01-01
As the Internet of Things (IoT) and cloud computing drive technological change and industry development, the number of network-access devices is surging while network bandwidth remains limited; in response, Cisco introduced the concept of fog computing. This paper first discusses the characteristics and application modes of fog computing, then analyzes methods of interoperation among the "fog nodes" of fog computing, the "cloud nodes" of cloud computing, and the "entity nodes" of the IoT. Use cases of fog computing are summarized, and an outlook on its prospects is given.
Energy Technology Data Exchange (ETDEWEB)
Villar Sanchez, T.
2012-07-01
The Fire Dynamics Simulator (FDS) is an advanced computational fire-simulation model that numerically solves the Navier-Stokes equations in each cell of the mesh at each time interval, and is capable of accurately calculating fire parameters for which NUREG-1805 has only a limited capacity. The objective of the analysis is to compare the results obtained with FDS against those obtained from the NUREG-1805 spreadsheets, and to carry out a broader and more realistic study of the propagation of a fire in different areas of the Almaraz NPP.
Energy Technology Data Exchange (ETDEWEB)
Behler, Matthias; Hannstein, Volker; Kilger, Robert; Moser, Franz-Eberhard; Pfeiffer, Arndt; Stuke, Maik
2014-06-15
In order to account for the reactivity-reducing effect of burn-up in the criticality safety analysis for systems with irradiated nuclear fuel (''burnup credit''), numerical methods to determine the enrichment- and burnup-dependent nuclide inventory (''burnup code'') and its resulting multiplication factor k{sub eff} (''criticality code'') are applied. To allow for reliable conclusions, for both calculation systems the systematic deviations of the calculation results from the respective true values, the bias and its uncertainty, are quantified by calculation and analysis of a sufficient number of suitable experiments. This quantification is specific to the application case under scope and is also called validation. GRS has developed a methodology to validate a calculation system for the application of burnup credit in the criticality safety analysis for irradiated fuel assemblies from pressurized water reactors. This methodology was demonstrated by applying the GRS home-built KENOREST burnup code and the criticality calculation sequence CSAS5 from the SCALE code package. It comprises a bounding approach and, alternatively, a stochastic one; both have been exemplarily demonstrated on a generic spent-fuel pool rack and a generic dry storage cask, respectively. Based on publicly available post-irradiation examination and criticality experiments, currently the isotopes of the uranium and plutonium elements can be considered.
International Nuclear Information System (INIS)
This investigation uses symbolic computation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular, integral and integro-differential equations which appear in radiative and combined mode energy transport. This technical report summarizes the research conducted during the first nine months of the present investigation. The use of Chebyshev polynomials augmented with symbolic computation has clearly been demonstrated in problems involving radiative (or neutron) transport, and mixed-mode energy transport. Theoretical issues related to convergence, errors, and accuracy have also been pursued. Three manuscripts have resulted from the funded research. These manuscripts have been submitted to archival journals. At the present time, an investigation involving a conductive and radiative medium is underway. The mathematical formulation leads to a system of nonlinear, weakly-singular integral equations involving the unknown temperature and various Legendre moments of the radiative intensity in a participating medium. Some preliminary results are presented illustrating the direction of the proposed research
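The Chebyshev-polynomial machinery mentioned above can be sketched numerically: interpolation coefficients computed at Chebyshev nodes and evaluated by the three-term recurrence. This is a generic sketch of the approximation technique, not the authors' symbolic code:

```python
import math

def cheb_coeffs(f, n):
    """Chebyshev interpolation coefficients of f on [-1, 1] at n Chebyshev
    nodes x_j = cos(pi (j + 1/2) / n), via discrete orthogonality."""
    fv = [f(math.cos(math.pi * (j + 0.5) / n)) for j in range(n)]
    coeffs = [2.0 / n * sum(fv[j] * math.cos(math.pi * k * (j + 0.5) / n)
                            for j in range(n))
              for k in range(n)]
    coeffs[0] *= 0.5  # T0 term carries half weight
    return coeffs

def cheb_eval(coeffs, x):
    """Evaluate sum_k c_k T_k(x) using T_{k+1} = 2x T_k - T_{k-1}."""
    t_prev, t_curr = 1.0, x  # T0, T1
    total = coeffs[0] * t_prev + (coeffs[1] * t_curr if len(coeffs) > 1 else 0.0)
    for c in coeffs[2:]:
        t_prev, t_curr = t_curr, 2.0 * x * t_curr - t_prev
        total += c * t_curr
    return total
```

For a smooth kernel like exp(-x), 16 terms already reproduce the function to near machine precision on [-1, 1], which is why Chebyshev bases suit the transport problems described here.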
International Nuclear Information System (INIS)
Working Group 1 examined a range of reactor deployment strategies and fuel cycle options, in order to estimate the range of nuclear fuel requirements and fuel cycle service needs which would result. The computer model, its verification in comparison with other models, the strategies to be examined through use of the model, and the range of results obtained are described.
Collier, G.; Gibson, G.
1968-01-01
FORTRAN 4 program /P1-GAS/ calculates the P-0 and P-1 transfer matrices for neutron moderation in a monatomic gas. The equations used are based on the conditions that there is isotropic scattering in the center-of-mass coordinate system, the scattering cross section is constant, and the target nuclear velocities satisfy a Maxwellian distribution.
DEFF Research Database (Denmark)
Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G;
2015-01-01
RESULTS: For an individual, factors used to multiply the first result were calculated to create limits for constant cumulated significant changes. The factors were shown to be a function of the number of results included and the total coefficient of variation. CONCLUSIONS: The first result should...
International Nuclear Information System (INIS)
A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult
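The dose-equivalent components summed by SFACTOR follow the MIRD-style relation S = k Σᵢ yᵢ Eᵢ φᵢ / m. A simplified sketch, using the classic constant k = 2.13 in rad·g/(μCi·h·MeV) units rather than SFACTOR's rem/μCi-day tabulation; the function names are illustrative:

```python
def s_factor(emissions, absorbed_fractions, target_mass_g, k=2.13):
    """Simplified MIRD-style S factor (rad per uCi-hour):
    S = k * sum_i( y_i * E_i * phi_i ) / m, where y_i is the yield per decay,
    E_i the emission energy in MeV, phi_i the fraction absorbed in the target,
    and m the target-organ mass in grams."""
    total = sum(y * e * phi
                for (y, e), phi in zip(emissions, absorbed_fractions))
    return k * total / target_mass_g
```

A single 1 MeV emission with unit yield fully absorbed in a 1 g target gives S = 2.13 rad/(μCi·h), the textbook normalization check.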
Energy Technology Data Exchange (ETDEWEB)
Wolery, T.J.
1992-09-14
EQ3NR is an aqueous solution speciation-solubility modeling code. It is part of the EQ3/6 software package for geochemical modeling. It computes the thermodynamic state of an aqueous solution by determining the distribution of chemical species, including simple ions, ion pairs, and complexes, using standard state thermodynamic data and various equations which describe the thermodynamic activity coefficients of these species. The input to the code describes the aqueous solution in terms of analytical data, including total (analytical) concentrations of dissolved components and such other parameters as the pH, pHCl, Eh, pe, and oxygen fugacity. The input may also include a desired electrical balancing adjustment and various constraints which impose equilibrium with special pure minerals, solid solution end-member components (of specified mole fractions), and gases (of specified fugacities). The code evaluates the degree of disequilibrium in terms of the saturation index (SI = log Q/K) and the thermodynamic affinity (A = -2.303 RT log Q/K) for various reactions, such as mineral dissolution or oxidation-reduction in the aqueous solution itself. Individual values of Eh, pe, oxygen fugacity, and Ah (redox affinity) are computed for aqueous redox couples. Equilibrium fugacities are computed for gas species. The code is highly flexible in dealing with various parameters as either model inputs or outputs. The user can specify modification or substitution of equilibrium constants at run time by using options on the input file.
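The saturation index and affinity defined in this abstract are straightforward to compute once the activity product Q and equilibrium constant K are known; a minimal sketch (not EQ3NR code):

```python
import math

R = 8.314462618  # gas constant, J/(mol K)

def saturation_index(Q, K):
    """SI = log10(Q/K): zero at equilibrium, positive when supersaturated,
    negative when undersaturated."""
    return math.log10(Q / K)

def affinity(Q, K, T=298.15):
    """Thermodynamic affinity A = -2.303 R T log10(Q/K), in J/mol.
    Positive affinity means the forward reaction (e.g. dissolution) is favored."""
    return -2.303 * R * T * saturation_index(Q, K)
```

The sign conventions mirror the abstract: a supersaturated solution (Q > K) has SI > 0 and negative affinity for further dissolution.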
Energy Technology Data Exchange (ETDEWEB)
Pinchedez, K
1999-06-01
Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with PN simplified transport equations by a mixed finite element method. Several parallel algorithms have been developed on distributed memory machines. The performance of the parallel algorithms has been studied experimentally by implementation on a T3D Cray and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. Then the eigenproblem is expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interface between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
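The dominant-eigenvalue computation at the heart of such solvers can be illustrated by power iteration, the simplest method of this family (a generic pure-Python sketch, unrelated to CRONOS's mixed finite element discretization):

```python
def power_iteration(matrix, iters=200):
    """Estimate the dominant eigenvalue of a square matrix with a positive
    dominant eigenvalue: repeatedly apply the matrix and renormalize."""
    n = len(matrix)
    v = [1.0] * n  # arbitrary nonzero start vector
    lam = 0.0
    for _ in range(iters):
        w = [sum(matrix[i][j] * v[j] for j in range(n)) for i in range(n)]
        lam = max(abs(x) for x in w)   # current eigenvalue estimate (max norm)
        v = [x / lam for x in w]       # renormalize to avoid overflow
    return lam
```

The convergence rate depends on the ratio of the two largest eigenvalues, which is precisely what acceleration and modal-synthesis schemes are designed to improve.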
International Nuclear Information System (INIS)
An atomistic kinetic Monte Carlo (AKMC) method has been applied to study the stability and mobility of copper-vacancy clusters in Fe. This information, which cannot be obtained directly from experimental measurements, is needed to parameterise models describing the nanostructure evolution under irradiation of Fe alloys (e.g. model alloys for reactor pressure vessel steels). The physical reliability of the AKMC method has been improved by employing artificial intelligence techniques for the regression of the activation energies required by the model as input. These energies are calculated allowing for the effects of local chemistry and relaxation, using an interatomic potential fitted to reproduce them as accurately as possible and the nudged-elastic-band method. The model validation was based on comparison with available ab initio calculations for verification of the used cohesive model, as well as with other models and theories.
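The AKMC scheme described above combines Arrhenius-type rates built from activation energies with residence-time event selection; a minimal sketch (the prefactor and the energy and temperature values are illustrative, not fitted to Fe-Cu):

```python
import math
import random

K_B = 8.617e-5  # Boltzmann constant, eV/K

def arrhenius_rate(e_act_ev, temp_k, prefactor=1e13):
    """Transition rate nu = nu0 * exp(-E_a / (k_B T)), harmonic TST form."""
    return prefactor * math.exp(-e_act_ev / (K_B * temp_k))

def kmc_step(rates, rng):
    """One residence-time (BKL) step: pick event i with probability
    rate_i / sum(rates), then advance the clock by -ln(u) / sum(rates)."""
    total = sum(rates)
    r = rng.random() * total
    acc = 0.0
    for i, rate in enumerate(rates):
        acc += rate
        if r < acc:
            break
    dt = -math.log(rng.random()) / total
    return i, dt
```

Low-barrier jumps dominate the event selection while the stochastic time increment keeps the simulated clock physical, which is how AKMC reaches the long time scales of irradiation-driven nanostructure evolution.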
Calculator. Owning a Small Business.
Parma City School District, OH.
Seven activities are presented in this student workbook designed for an exploration of small business ownership and the use of the calculator in this career. Included are simulated situations in which students must use a calculator to compute property taxes; estimate payroll taxes and franchise taxes; compute pricing, approximate salaries,…
Energy Technology Data Exchange (ETDEWEB)
Allaucca P, J.J.; Picon C, C.; Zaharia B, M. [Departamento de Radioterapia, Instituto de Enfermedades Neoplasicas, Av. Angamos Este 2520, Lima 34 (Peru)
1998-12-31
A hardware and software system has been designed for a radiotherapy department. It runs on a Novell network operating-system platform, sharing the existing resources and those of the server; it is centralized, multi-user, and highly secure. It solves a variety of calculation problems and needs, as well as patient management and administration tasks; it is very fast and versatile, and it contains a set of menus and options which may be selected with the mouse, the arrow keys, or keyboard shortcuts. (Author)
International Nuclear Information System (INIS)
In 1986 the CEC sponsored a benchmark exercise on aerosol calculation based on the Demona B3 experiment. The results of this exercise were very sensitive to the calculation of energy and mass transfer between the phases. In view of the results of the study mentioned above, it had been decided to carry out a benchmark exercise for severe accident containment thermal-hydraulics codes. This exercise is based on experiment B3 in the Demona programme. The experiment B3 was a simulation of a late overpressure failure scenario in a PWR. The main objective of the benchmark exercise has been to assess the ability of the participating codes to predict atmosphere saturation levels and bulk condensation rates under conditions similar to those predicted to follow a severe accident in a PWR. Several research organizations of the Community member countries have participated at this benchmark exercise. The paper presents the comparison of the experimental results with the following calculated quantities: total system pressure, steam pressure, containment temperature, saturation ratio, steam condensation rate on walls and in the bulk volume, water mass in sump, sump temperature, heat transfer coefficient, steam mass flow rate and structural temperature for a real time period of up to 90 hours. The paper also presents the major conclusions from the exercise in order to identify the status of present codes versus the requirements needed as input and for coupling with aerosol analysis codes
Manzoor Ali, M.; George, Gene; Ramalingam, S.; Periandy, S.; Gokulakrishnan, V.
2015-11-01
In this research work, the vibrational, physical, and chemical properties of the pharmaceutically important compound 3,6-dimethylphenanthrene have been thoroughly investigated by recording its FT-IR, FT-Raman, mass, and 13C and 1H NMR spectra. The geometrical parameters of phenanthrene, as altered by the addition of the methyl groups, have been calculated using HF and DFT (B3LYP and B3PW91) methods with the 6-31++G(d,p) and 6-311++G(d,p) basis sets, and the corresponding results are discussed. The alteration of the vibrational pattern of the molecule due to the injection of the CH3 substituents is investigated. Close attention is paid to the excitations between the electronic energy levels of the molecule, which leads to the study of its electronic properties. The altered distribution of Mulliken charges after the formation of the present molecule has been correlated with the vibrational pattern of the molecular bonds. The charge transfer over the frontier molecular orbitals between the ligand and the rings has been studied. The cause of the linear and nonlinear optical activity of the molecule is interpreted in detail from the average polarizability and first-order diagonal hyperpolarizability calculations. The variation of thermodynamic properties (heat capacity, entropy, and enthalpy) of the present compound at different temperatures is calculated using the NIST thermodynamic function program and interpreted.
Scientific calculating peripheral
Energy Technology Data Exchange (ETDEWEB)
Ethridge, C.D.; Nickell, J.D. Jr.; Hanna, W.H.
1979-09-01
A scientific calculating peripheral for small intelligent data acquisition and instrumentation systems and for distributed-task processing systems is established with a number-oriented microprocessor controlled by a single-component universal peripheral interface microcontroller. A MOS/LSI number-oriented microprocessor provides the scientific calculating capability, with data in Reverse Polish Notation format. Master processor task definition storage, input data sequencing, computation processing, result reporting, and the interface protocol are managed by the single-component universal peripheral interface microcontroller.
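Reverse Polish Notation lends itself to a very small evaluation engine, which is one reason it suits a single-chip calculating peripheral. The sketch below illustrates the general stack discipline only; the token set, numeric type, and function name are illustrative assumptions, not the device's actual instruction set.

```python
# Minimal sketch of Reverse Polish Notation (RPN) evaluation.
# Operands are pushed on a stack; each operator pops its two
# operands and pushes the result, so no parentheses are needed.
def eval_rpn(tokens):
    ops = {
        "+": lambda a, b: a + b,
        "-": lambda a, b: a - b,
        "*": lambda a, b: a * b,
        "/": lambda a, b: a / b,
    }
    stack = []
    for tok in tokens:
        if tok in ops:
            b = stack.pop()   # second operand was pushed last
            a = stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()
```

For example, the infix expression (3 + 4) * 2 is entered as the token stream "3 4 + 2 *".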
Institute of Scientific and Technical Information of China (English)
莫依; 黎乐民
2001-01-01
The influence of the computation conditions on the results of calculations with Kohn-Sham density functional theory has been studied through a series of calculations on molecules of various compositions and structures, such as BCl3, SO2, ZnO, TiCl4 and LuF3. Three factors are considered. It is found that the completeness of the basis set is the most important factor; the number of grid points used in the numerical integration is less important. The projection decomposing the molecular charge into multipolar components centered on each atomic nucleus converges fairly rapidly. The molecular geometry and the fundamental vibrational frequencies are insensitive to the computation conditions, while the total energy and bond energy are quite sensitive. If the same basis set and the same numerical integration points are used for a molecule and its constituent atoms when calculating its bond energy, the error of the calculated result can be reduced. It is shown that results with an accuracy matched to the approximate DFT functionals can be obtained, with a smaller computational effort, by choosing intermediate computation conditions.
Energy Technology Data Exchange (ETDEWEB)
Osswald, F.; Roumie, M.; Frick, G.; Heusch, B.
1994-11-01
Calculations have been made to increase the high-voltage performance of some components and to explain electrical failures of the Vivitron. These involve simulations of static stresses and transient overvoltages, especially on insulating boards and electrodes, occurring before or during breakdowns. Developments made to the structure of the machine over the last years and new ideas to improve its static and dynamic behaviour are presented. The application of this study and HV tests recently led to a nominal potential near 20 MV without sparks. (author). 49 refs., 25 figs., 2 tabs.
International Nuclear Information System (INIS)
Extensive research has been carried out around the world to understand the behavior of radioactive materials in the containment building of an LWR under accident conditions. Most of this material is in the form of aerosols or is attached to non-radioactive aerosol particles in the containment atmosphere. Several computer codes have been written to describe fission product aerosol behavior under accident conditions, aimed at evaluating the time-dependent airborne concentration inside the reactor building and the fraction that leaks out to the environment. The objective of this study was to compare computer codes used in the CEC member states against a DEMONA experiment. This is a follow-up to an earlier exercise in which the codes were compared with each other using rigid benchmark cases of a more or less artificially detailed nature. In the present study the comparison was to be oriented only to the experimental results of the DEMONA experiment, without additional help provided. It should thus provide a basis for judging the practical applicability of the codes to a situation which is real but perhaps less well defined than a theoretical benchmark case.
Energy Technology Data Exchange (ETDEWEB)
Vieira, Jose Wilson
2004-07-15
The MAX phantom has been developed from existing segmented images of a male adult body, in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. In computational dosimetry, MAX can simulate the geometry of a human body under exposure to ionizing radiation, internal or external, with the objective of calculating the equivalent dose in organs and tissues for occupational, medical or environmental purposes of radiation protection. This study presents the methodology used to build the new computational exposure model MAX/EGS4: the geometric construction of the phantom; the development of algorithms for one-directional, divergent, and isotropic radioactive sources; new methods for calculating the equivalent dose in the red bone marrow and in the skin; and the coupling of the MAX phantom with the EGS4 Monte Carlo code. Finally, some radiation-protection results, in the form of conversion coefficients between equivalent dose (or effective dose) and free air-kerma for external photon irradiation, are presented and discussed. Comparing these results with similar data from other human phantoms, it is possible to conclude that the MAX/EGS4 coupling is satisfactory for the calculation of the equivalent dose in radiation protection. (author)
Radioactive cloud dose calculations
International Nuclear Information System (INIS)
Radiological dosage principles, as well as methods for calculating external and internal dose rates following dispersion and deposition of radioactive materials in the atmosphere, are described. Emphasis has been placed on analytical solutions that are appropriate for hand calculations. In addition, methods for calculating dose rates from ingestion are discussed. Brief descriptions of several computer programs are included for information on radionuclides. There has been no attempt to be comprehensive, and only a sampling of programs has been selected to illustrate the variety available
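Hand calculations of external dose from a dispersing cloud are commonly built on the Gaussian plume model. As a generic illustration (a standard textbook formula, not code from the report; the function name and unit choices are ours), the air concentration downwind of a continuous point release can be sketched as:

```python
import math

# Illustrative sketch: ground-level air concentration from a continuous
# elevated point release, per the standard Gaussian plume formula.
# Q  : source strength [Bq/s]
# u  : mean wind speed [m/s]
# sigma_y, sigma_z : dispersion coefficients [m] at the downwind distance
# H  : effective release height [m]; y : crosswind offset [m]
def plume_concentration(Q, u, sigma_y, sigma_z, H, y=0.0):
    return (Q / (math.pi * sigma_y * sigma_z * u)
            * math.exp(-y**2 / (2.0 * sigma_y**2))
            * math.exp(-H**2 / (2.0 * sigma_z**2)))
```

On the centerline (y = 0) of a ground-level release (H = 0), the expression reduces to Q/(pi * sigma_y * sigma_z * u); a dose rate then follows by multiplying the concentration by the appropriate dose conversion factor.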
SRD 166 MEMS Calculator (Web, free access) This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.
Zhu, Shun; Travis, Sue M; Elcock, Adrian H
2013-07-01
A major current challenge for drug design efforts focused on protein kinases is the development of drug resistance caused by spontaneous mutations in the kinase catalytic domain. The ubiquity of this problem means that it would be advantageous to develop fast, effective computational methods that could be used to determine the effects of potential resistance-causing mutations before they arise in a clinical setting. With this long-term goal in mind, we have conducted a combined experimental and computational study of the thermodynamic effects of active-site mutations on a well-characterized and high-affinity interaction between a protein kinase and a small-molecule inhibitor. Specifically, we developed a fluorescence-based assay to measure the binding free energy of the small-molecule inhibitor SB203580 to the p38α MAP kinase and used it to measure the inhibitor's affinity for five different kinase mutants involving two residues (Val38 and Ala51) that contact the inhibitor in the crystal structure of the inhibitor-kinase complex. We then conducted long, explicit-solvent thermodynamic integration (TI) simulations in an attempt to reproduce the experimental relative binding affinities of the inhibitor for the five mutants; in total, a combined simulation time of 18.5 μs was obtained. Two widely used force fields - OPLS-AA/L and Amber ff99SB-ILDN - were tested in the TI simulations. Both force fields produced excellent agreement with experiment for three of the five mutants; simulations performed with the OPLS-AA/L force field, however, produced qualitatively incorrect results for the constructs that contained an A51V mutation. Interestingly, the discrepancies with the OPLS-AA/L force field could be rectified by the imposition of position restraints on the atoms of the protein backbone and the inhibitor without destroying the agreement for other mutations; the ability to reproduce experiment depended, however, upon the strength of the restraints' force constant.
Deslippe, Jack; Samsonidze, Georgy; Strubbe, David A.; Jain, Manish; Cohen, Marvin L.; Louie, Steven G.
2012-06-01
BerkeleyGW is a massively parallel computational package for electron excited-state properties that is based on many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology. It can be used in conjunction with many density-functional theory codes for ground-state properties, including PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The package can be used to compute the electronic and optical properties of a wide variety of material systems, from bulk semiconductors and metals to nanostructured materials and molecules. The package scales to 10 000s of CPUs and can be used to study systems containing up to 100s of atoms. Program summary. Program title: BerkeleyGW. Catalogue identifier: AELG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: open-source BSD License; see code for licensing details. No. of lines in distributed program, including test data, etc.: 576 540. No. of bytes in distributed program, including test data, etc.: 110 608 809. Distribution format: tar.gz. Programming language: Fortran 90, C, C++, Python, Perl, BASH. Computer: Linux/UNIX workstations or clusters. Operating system: tested on a variety of Linux distributions in parallel and serial, as well as AIX and Mac OSX. RAM: (50-2000) MB per CPU (highly dependent on system size). Classification: 7.2, 7.3, 16.2, 18. External routines: BLAS, LAPACK, FFTW, ScaLAPACK (optional), MPI (optional); all available under open-source licenses. Nature of problem: the excited-state properties of materials involve the addition or subtraction of electrons as well as the optical excitations of electron-hole pairs. The excited particles interact strongly with other electrons in a material system. This interaction affects the electronic energies, wavefunctions and lifetimes. It is well known that ground-state theories, such as standard methods
Ventura, Oscar N.; Segovia, Marc
2005-02-01
The experimental enthalpy of formation of perfluoropropane (C3F8), reported originally as -1729 kJ/mol and later corrected to -1784.7 kJ/mol, is reexamined in the light of density functional and model chemistry (G3, CBS-4, CBS-Q) calculations of several isodesmic reactions relating C3F8 to smaller fluoroalkanes. The average enthalpy of formation of C3F8 obtained from all reactions studied was -1739 ± 12 kJ/mol at the DFT level and -1748 ± 12 kJ/mol at the ab initio level, thus ruling out the larger experimental value. A value of -1732 ± 5 kJ/mol is recommended from careful analysis of the theoretical results.
Computer simulation and calculation for optimizing the VCM rectification procedure
Institute of Scientific and Technical Information of China (English)
李群生; 于颖
2012-01-01
Simulation calculations for the VCM rectification columns were made with the chemical process simulation software ASPEN PLUS, and a sensitivity analysis of the operating variables was performed; suitable feed positions, reflux ratios and distillate ratios were thus obtained, providing a basis for the operational optimization of the VCM rectification procedure.
International Nuclear Information System (INIS)
The experimental and theoretical two-dimensional nuclear Overhauser effect spectra, double-quantum-filtered COSY experiments, and molecular mechanics calculations on the self-complementary decamer [d-(5'ATATATATAT3')]2 presented here indicate that the duplex, as a time average, assumes a wrinkled D conformation (B-DNA family) with a hydration tunnel in the minor groove. Formation of the tunnel is favored by non-bonded and electrostatic interchain sugar-phosphate and ion-DNA interactions in the minor groove. The size of the tunnel in the DNA perfectly accommodates three types of water molecules: one bridging interstrand N3 atoms of adenine bases, another bridging interstrand O2 atoms of thymine bases, and a third bridging the two water molecules just mentioned. 31 refs.; 3 figs.; 2 tabs
International Nuclear Information System (INIS)
Photographs of the droplets associated with the ionisations caused by charged particle tracks in the Harwell low pressure cloud chamber have been analysed. The radiation types represented are alpha particles, protons and low energy X rays (carbon and aluminium) in either a tissue-equivalent gas or water vapour. The tracks were used to test the validity of two Monte Carlo codes developed by Wilson and Paretzke, namely MOCA14 for the generation of proton and alpha particle tracks, and MOCA8 for the generation of electron tracks. The comparisons showed that the code MOCA14 would appear to be valid for protons with energies greater than about 390 keV, and for alpha particles with energies greater than 1.6 MeV. No disagreement was found between the low energy X ray tracks from the cloud chamber and the tracks calculated from MOCA8, although this comparison was severely limited by droplet diffusion. (author)
1992-12-01
ESDU 92035 provides details of a FORTRAN program that implements the calculation method of ESDU 83004. It allows performance analysis of an existing design, or the design of a new bearing, for which dimensions, subject to any space constraint, are recommended. The predicted performance includes the lubricant film thickness under load, its temperature and flow rate, the power loss, and the bearing temperature. Recommendations are also made on surface finish. Warning messages are output in the following cases, for each of which possible remedial actions are suggested: drain or pad temperature too high, churning losses too great, film thickness too small, pad number too high, ratio of inner to outer pad radius too large, flow rate too great, lubricant or pad temperature outside the usable range. A lubricant database is provided that may be extended or edited. The program applies to Newtonian lubricants in laminar flow. Worked examples illustrate the use of the program.
Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.
2016-10-01
We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the xLMA method proposed here inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present in equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro.
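In schematic form (the notation below is ours, simplified from the description above, and not the paper's exact formulation), the xLMA idea is to augment each mass-action equation with non-negative stability indices derived from the Lagrange multipliers of the GEM problem:

```latex
% Schematic only; notation is ours, not the paper's exact formulation.
% Conventional LMA for reaction j with stoichiometric coefficients \nu_{ij},
% species activities a_i, and equilibrium constant K_j:
\sum_i \nu_{ij} \ln a_i = \ln K_j
% xLMA augments each activity term with a stability index z_i, a scaled
% Lagrange multiplier from the GEM problem:
\sum_i \nu_{ij} \left( \ln a_i + z_i \right) = \ln K_j,
\qquad z_i \ge 0, \qquad z_i = 0 \;\text{for species in stable phases}
```

Because z_i vanishes only for species in stable phases, the augmented equations remain well posed even when a reaction involves unstable species, which is why no reactions need to be tentatively added or removed.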
Languages for structural calculations
International Nuclear Information System (INIS)
The differences between human and computing languages are recalled. It is argued that they are to some extent structured in antagonistic ways. Languages in structural calculation, in the past, present, and future, are considered. The contribution of artificial intelligence is stressed
Institute of Scientific and Technical Information of China (English)
朱宇兰
2016-01-01
GPGPU (general-purpose computing on graphics processing units) is a computing field that has developed rapidly in recent years; its powerful parallel processing capability provides an excellent solution for data-intensive, single-instruction computation, although its computing power meets a bottleneck imposed by the chip fabrication process. Starting from the foundation of GPGPU, the graphics APIs, this paper analyzes the characteristics of GPU parallel algorithms and of the computation process, and abstracts from them a parallel computing framework. A compute-intensive case study demonstrates how the framework is used; a comparison with the traditional GPGPU implementation approach shows that the framework yields leaner code and is independent of graphics programming.
1982-01-01
Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn
Energy Technology Data Exchange (ETDEWEB)
Sidrach-de-Cardona, M.; Carretero, J.; Martin, B.; Mora-Lopez, L.; Ramirez, L.; Varela, M.
2004-07-01
We present a computer tool to calculate the daily energy produced by a grid-connected photovoltaic system. The main novelty of this tool is that it uses maps of radiation and ambient temperature as input data; these maps allow us to obtain 365 daily values of these parameters at any point of the image. The radiation map has been obtained using images from the Meteosat satellite. For the temperature map, a geographical information system has been used with data from terrestrial meteorological stations. For the calculation of the daily energy, an empirical model obtained from the study of the behavior of different photovoltaic systems is used. The program allows the user to design the photovoltaic generator, includes a database of photovoltaic products, and supports a complete economic analysis. (Author)
Energy Technology Data Exchange (ETDEWEB)
Lodygensky, O
2006-09-15
Centralized computers have been replaced by 'client/server' distributed architectures, which are in turn in competition with new distributed systems known as 'peer to peer'. These new technologies are widely spread, and trade, industry and the research world have understood the new goals involved and invest massively in these new technologies, named 'grid'. One of the fields concerned is computing; this is the subject of the work presented here. At the Paris-Orsay University, a synergy emerged between the Computing Science Laboratory (LRI) and the Linear Accelerator Laboratory (LAL) on grid infrastructure, opening new fields of investigation for the former and new high-performance computing perspectives for the latter. The work presented here is the result of this multi-disciplinary collaboration. It is based on XtremWeb, the LRI global computing platform. We first present a state of the art of large-scale distributed systems, their principles, and their service-based architecture. We then introduce XtremWeb and detail the modifications and improvements we had to specify and implement to achieve our goals. We present two different studies: first, interconnecting grids in order to generalize resource sharing; and second, using legacy services on such platforms. We finally explain how a research community, such as the high-energy cosmic-ray detection community, can gain access to these services, and detail the Monte Carlo and data-analysis processes run over the grids. (author)
KUCHIDA, Keigo; Kono, S.; KONISHI, Kazuyuki; Van Vleck, L. D.; SUZUKI, Mitsuyoshi; Miyoshi, Syunzou
2000-01-01
Crude fat content of the longissimus (ribeye) muscle of beef cattle was predicted from the ratio of fat area (RFA) to area of ribeye muscle calculated from computer image analysis (CIA). Cross sections of 64 ribeyes taken from the 6-7(th) rib of cattle at Experiment Station A and cross sections of 94 ribeyes taken from the 6-7(th) rib of cattle at Experiment Station B were used in this study. Slices (1 to 1.5 cm in thickness) of just the Longissimus dorsi were homogenized and sampled for chemic...
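The image-analysis step reduces to counting labeled pixels in a segmented cross-section. The sketch below is a generic illustration only; the pixel encoding, function name, and threshold-free segmentation are assumptions, since the study's actual CIA procedure is not specified in the abstract.

```python
# Illustrative sketch: ratio of fat area to ribeye area from a segmented
# image. The encoding (0 = background, 1 = muscle pixel, 2 = fat pixel)
# is an assumption for illustration, not the study's actual convention.
def fat_area_ratio(segmented):
    flat = [p for row in segmented for p in row]
    fat = flat.count(2)
    muscle = flat.count(1)
    ribeye = fat + muscle      # marbling fat lies within the ribeye outline
    return fat / ribeye if ribeye else 0.0
```

Crude fat content would then be predicted from this ratio, e.g. by a linear regression fitted against the chemically measured values.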
Energy Technology Data Exchange (ETDEWEB)
Siebert, B.R.L.; Thomas, R.H.
1996-01-01
The paper presents a definition of the term "Computational Dosimetry", interpreted as the sub-discipline of computational physics devoted to radiation metrology. It is shown that computational dosimetry is more than a mere collection of computational methods. Computational simulations directed at basic understanding and modelling are important tools provided by computational dosimetry, and another very important application is the support it can give to the design, optimization and analysis of experiments. However, the primary task of computational dosimetry is to reduce the variance in the determination of absorbed dose (and its related quantities), for example in the disciplines of radiological protection and radiation therapy. In this paper emphasis is given to the discussion of potential pitfalls in the applications of computational dosimetry, and recommendations are given for their avoidance. The need to compare calculated and experimental data whenever possible is strongly stressed.
International Nuclear Information System (INIS)
We investigated whether right atrial (RA) volume could be used to predict the recurrence of atrial fibrillation (AF) after pulmonary vein catheter ablation (CA). We evaluated 65 patients with paroxysmal AF (mean age, 60±10 years; 81.5% male) and normal volunteers (57±14 years; 41.7% male). Sixty-four-slice multi-detector computed tomography was performed for left atrial (LA) and RA volume estimation before CA. The recurrence of AF was assessed for 6 months after the ablation. Both left and right atrial volumes were larger in the AF patients than in the normal volunteers (LA: 99.7±33.2 ml vs. 59.7±17.4 ml; RA: 82.9±35.7 ml vs. 43.9±12 ml). The sensitivity with large LA volumes (>100 ml) for predicting the recurrence of AF was 81.3% in 13 of 16 patients with AF recurrence, and the specificity was 69.4% in 34 of 49 patients without recurrence. The sensitivity with large RA volumes (>87 ml) was 81.3% in 13 of 16 patients with AF recurrence, and the specificity was 75.5% in 37 of 49 patients without recurrence. RA volume is a useful predictor of the recurrence of AF, similar to LA volume. (author)
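The reported percentages follow from the standard definitions of sensitivity and specificity applied to the quoted counts; as a quick check (generic definitions, not code from the study):

```python
# Standard definitions (not code from the study):
#   sensitivity = TP / (TP + FN), specificity = TN / (TN + FP)
def sensitivity(tp, fn):
    return tp / (tp + fn)

def specificity(tn, fp):
    return tn / (tn + fp)

# Counts quoted in the abstract for the RA-volume criterion (>87 ml):
# 13 of 16 recurrence patients were test-positive,
# 37 of 49 non-recurrence patients were test-negative.
ra_sens = sensitivity(13, 16 - 13)   # 13/16 = 0.8125, reported as 81.3%
ra_spec = specificity(37, 49 - 37)   # 37/49 = 0.7551..., reported as 75.5%
```

The LA criterion (>100 ml) gives the same sensitivity (13/16) and a specificity of 34/49 = 69.4%, matching the figures above.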
International Nuclear Information System (INIS)
TURTLE is a computer program useful for determining many characteristics of a particle beam once an initial design has been achieved. Charged particle beams are usually designed by adjusting various beam line parameters to obtain desired values of certain elements of a transfer or beam matrix. Such beam line parameters may describe certain magnetic fields and their gradients, lengths and shapes of magnets, spacings between magnetic elements, or the initial beam accepted into the system. For such purposes one typically employs a matrix multiplication and fitting program such as TRANSPORT. TURTLE is designed to be used after TRANSPORT. For the convenience of the user, the input formats of the two programs have been made compatible. The use of TURTLE should be restricted to beams with small phase space. The lumped element approximation, described below, precludes the inclusion of the effects of conventional local geometric aberrations (due to large phase space) of fourth and higher order. A reading of the discussion below will indicate clearly the exact uses and limitations of the approach taken in TURTLE.
International Nuclear Information System (INIS)
This research thesis has been made within the frame of a project on nuclear reactor vessel life. It deals with the use of numerical codes aimed at estimating probability densities for every input parameter in order to calculate probability margins at the output level. More precisely, it deals with codes with one-dimensional functional responses. The author studies the numerical simulation of a pressurized thermal shock on a nuclear reactor vessel, i.e. one of the possible accident types. The study of the vessel integrity relies on a thermal-hydraulic analysis and on a mechanical analysis. Algorithms are developed and proposed for each of them. Input-output data are classified using a clustering technique and a graph-based representation. A method for output dimension reduction is proposed, and a regression is applied between inputs and reduced representations. Applications are discussed in the case of modelling and sensitivity analysis for the CATHARE code (a code used at the CEA for the thermal-hydraulic analysis)
Dhoke, Gaurao V; Ensari, Yunus; Davari, Mehdi D; Ruff, Anna Joëlle; Schwaneberg, Ulrich; Bocola, Marco
2016-07-25
Zinc-dependent medium-chain reductase from Candida parapsilosis can be used in the reduction of carbonyl compounds to pharmacologically important chiral secondary alcohols. To date, the nomenclature of cpADH5 differs in the literature (CPCR2/RCR/SADH), and its natural substrate is not known. In this study, we utilized a substrate-docking-based virtual screening method, combined with searches of the KEGG and MetaCyc pathway databases and the Candida genome database, for the discovery of natural substrates of cpADH5. The virtual screening of 7834 carbonyl compounds from the ZINC database provided 94 aldehydes or methyl/ethyl ketones as putative carbonyl substrates, of which 52 carbonyl substrates of cpADH5 with catalytically active docking poses were identified by employing a mechanism-based substrate docking protocol. Comparison of the virtual screening results with the KEGG and MetaCyc database searches and Candida genome pathway analysis suggests that cpADH5 might be involved in the Ehrlich pathway (reduction of fusel aldehydes in leucine, isoleucine, and valine degradation). Our QM/MM calculations and experimental activity measurements affirmed that butyraldehyde substrates are the potential natural substrates of cpADH5, suggesting a carbonyl reductase role for this enzyme in butyraldehyde reduction in aliphatic amino acid degradation pathways. Phylogenetic tree analysis of known ADHs from Candida albicans shows that cpADH5 is close to caADH5. We therefore propose, according to the experimental substrate identification and sequence similarity, the common name butyraldehyde dehydrogenase cpADH5 for Candida parapsilosis CPCR2/RCR/SADH. PMID:27387009
Rowley, Christopher N; Roux, Benoît
2013-10-01
Electrophysiological studies have established that the permeation of Ba(2+) ions through the KcsA K(+)-channel is impeded by the presence of K(+) ions in the external solution, while no effect is observed for external Na(+) ions. This Ba(2+) "lock-in" effect suggests that at least one of the external binding sites of the KcsA channel is thermodynamically selective for K(+). We used molecular dynamics simulations to interpret these lock-in experiments in the context of the crystallographic structure of KcsA. Assuming that the Ba(2+) is bound in site S(2) in the dominant blocked state, we examine the conditions that could impede its translocation and cause the observed "lock-in" effect. Although the binding of a K(+) ion to site S(1) when site S(2) is occupied by Ba(2+) is prohibitively high in energy (>10 kcal/mol), binding to site S0 appears to be more plausible (ΔG > 4 kcal/mol). The 2D potential of mean force (PMF) for the simultaneous translocation of Ba(2+) from site S(2) to site S(1) and of a K(+) ion on the extracellular side shows a barrier that is consistent with the concept of external lock-in. The barrier opposing the movement of Ba(2+) is very high when a cation is in site S(0), and considerably smaller when the site is unoccupied. Furthermore, free energy perturbation calculations show that site S(0) is selective for K(+) by 1.8 kcal/mol when S(2) is occupied by Ba(2+). However, the same site S(0) is nonselective when site S(2) is occupied by K(+), which shows that the presence of Ba(2+) affects the selectivity of the pore. A theoretical framework within classical rate theory is presented to incorporate the concentration dependence of the external ions on the lock-in effect. PMID:24043859
Energy Technology Data Exchange (ETDEWEB)
Gutierrez Delgado, Wilson; Kubiak, Janusz; Serrano Romero, Luis Enrique [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)
1990-12-31
In the preliminary design and diagnosis of rotating machines it is very common to employ simple calculation methods for the analysis of mechanical and thermal stresses, dynamic and thermodynamic analysis, and fluid flow in these machines (Gutierrez et al., 1989). Analysis with these methods provides the results needed for the initial design stage of the machine. Later, more complex tools are employed to refine the design of some machine components. In the report by Gutierrez et al. (1989), 34 programs were developed for the preliminary design and diagnosis of rotating equipment; in this article, one of them is presented, in which a method for the analysis of mechanical and thermal stresses is applied to discs of uniform or variable thickness of the kind normally found in turbomachines and rotating equipment.
Kelley, H. J.; Lefton, L.
1976-01-01
The numerical analysis of composite differential-turn trajectory pairs was studied for 'fast-evader' and 'neutral-evader' attitude-dynamics idealizations for attack aircraft. Transversality and generalized corner conditions are examined and the joining of trajectory segments is discussed. A criterion is given for the screening of 'tandem-motion' trajectory segments. The main focus is on the computation of barrier surfaces. Fortunately, from a computational viewpoint, the trajectory pairs defining these surfaces need not be calculated completely: the final subarc of multiple-subarc pairs is not required. Some calculations for pairs of example aircraft are presented. A computer program used to perform the calculations is included.
International Nuclear Information System (INIS)
The effect of different system parameters on the critical heat flux density is reviewed in order to give an initial view of the values of several parameters. A thorough analysis of different equations for calculating burnout in steam-water flows in uniformly heated tubes, annular and rectangular channels, and rod bundles is carried out. The effects of the heat flux density distribution and of flux twisting on burnout, and the determination of the margin to burnout, are also discussed
Parton, K C
2014-01-01
The Digital Computer focuses on the principles, methodologies, and applications of the digital computer. The publication takes a look at the basic concepts involved in using a digital computer, simple autocode examples, and examples of working advanced design programs. Discussions focus on a transformer design synthesis program, a machine design analysis program, the solution of standard quadratic equations, harmonic analysis, elementary wage calculation, and scientific calculations. The manuscript then examines commercial and automatic programming, how computers work, and the components of a computer.
Dead reckoning calculating without instruments
Doerfler, Ronald W
1993-01-01
No author has gone as far as Doerfler in covering methods of mental calculation beyond simple arithmetic. Even if you have no interest in competing with computers you'll learn a great deal about number theory and the art of efficient computer programming. -Martin Gardner
Calculations of effective atomic number
Energy Technology Data Exchange (ETDEWEB)
Kaliman, Z. [Department of Physics, Faculty of Arts and Sciences, Omladinska 14, Rijeka (Croatia); Orlic, N. [Department of Physics, Faculty of Arts and Sciences, Omladinska 14, Rijeka (Croatia)], E-mail: norlic@ffri.hr; Jelovica, I. [Department of Physics, Faculty of Arts and Sciences, Omladinska 14, Rijeka (Croatia)
2007-09-21
We present and discuss the effective atomic number (Z{sub eff}) obtained by different methods of calculation. There is no unique relation between the computed values. This observation leads to the conclusion that any Z{sub eff} is valid only for a given process. We illustrate the calculations for different subshells of the atom Z=72 and for the M3 subshell of several other atoms.
Bacteria Make Computers Look like Pocket Calculators
Institute of Scientific and Technical Information of China (English)
Jacob Aron; 程杰 (selection and annotation)
2009-01-01
Computer science is advancing at a breathtaking pace. Besides the much-anticipated photonic and quantum computers, biological computers are also expected to become one of the directions of future computer development. Can you imagine that future computers might be only the size of a pocket calculator, and that the bacteria living in our guts might one day become key components of biological computers?
Institute of Scientific and Technical Information of China (English)
王书晓; 利岚; 张滨
2013-01-01
The application of tubular natural daylighting technology plays an important role in improving the daylight quality of underground and deep-plan spaces. Traditional light-pipe calculation methods are mostly derived from experimental data for one or a few specific light pipes and therefore lack general applicability. Based on the daylight coefficient method, this paper analyzes the light-transfer characteristics of cylindrical tubular daylighting devices, establishes a mathematical model of their light transmission, and verifies the accuracy of the model using the TracePro software.
Institute of Scientific and Technical Information of China (English)
李静; 宋婧; 龙鹏程; 刘鸿飞; 江平
2015-01-01
Background: Reaching acceptable statistical errors in reactor simulations based on Monte Carlo particle transport, including fission reactors and fusion-fission hybrid reactors, requires large amounts of computing time; this has become one of the challenges of the Monte Carlo method and calls for parallel computing. Purpose: This paper presents an efficient parallel computing method that resolves the communication deadlock and load-balancing problems of existing methods. Methods: A parallel algorithm for criticality calculations based on bi-directional traversal was implemented in the Super Monte Carlo simulation program for nuclear and radiation processes (SuperMC). The pool-type sodium-cooled fast reactor BN600 benchmark model was used for verification, and the results were compared with MCNP. Results: The serial and parallel calculations agree with each other. Conclusion: The parallel efficiency of SuperMC is higher than that of MCNP, demonstrating the accuracy and efficiency of the parallel computing method.
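The motivation here, that the statistical error of a Monte Carlo tally shrinks only as 1/sqrt(N), is what drives the need for parallelism. A generic sketch of the idea (independent per-worker random streams whose tallies are combined at the end) follows; it illustrates parallel Monte Carlo in general, not SuperMC's bi-directional traversal algorithm.

```python
# Generic sketch of parallel Monte Carlo (an illustration of the general
# idea, not SuperMC's bi-directional traversal scheme): each worker runs
# an independent batch with its own random seed, and the partial tallies
# are combined at the end.  Because the statistical error falls only as
# 1/sqrt(N), halving the error costs four times the histories, which is
# why parallel computing is essential for acceptable errors.
import math
import random

def batch_hits(n, seed):
    """One worker's tally: points falling inside the unit quarter-circle."""
    rng = random.Random(seed)  # independent stream per worker
    return sum(1 for _ in range(n)
               if rng.random() ** 2 + rng.random() ** 2 < 1.0)

def parallel_pi(n_workers=8, n_per_worker=50_000):
    # In a real code each call would run on its own process or MPI rank;
    # summing integer tallies needs no communication until the end.
    hits = sum(batch_hits(n_per_worker, seed) for seed in range(n_workers))
    return 4.0 * hits / (n_workers * n_per_worker)

print(parallel_pi())  # close to pi; error shrinks as 1/sqrt(total histories)
```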
Energy Technology Data Exchange (ETDEWEB)
Petrovich, C. [ENEA, Divisione Sistemi Energetici Ecosostenibili, Centro Ricerche Ezio Clementel, Bologna (Italy)
2001-07-01
The calculation of the number of atoms and the activity of materials following nuclear interactions at incident energies up to several GeV is necessary in the design of Accelerator Driven Systems, Radioactive Ion Beam facilities and proton accelerator facilities such as spallation neutron sources. As well as the radioactivity of the materials, this allows the evaluation of the formation of active gaseous elements and the assessment of possible corrosion problems. The particle energies involved here are higher than those used in typical nuclear reactors and fusion devices, for which many codes already exist. These calculations can be performed by coupling two different computer codes: MCNPX and SP-FISPACT. MCNPX performs Monte Carlo particle transport up to energies of several GeV. SP-FISPACT is a modification of FISPACT, a code designed for fusion applications and able to calculate neutron activation for energies <20 MeV. In such a way it is possible to perform a hybrid calculation in which neutron activation data are used for neutron interactions at energies <20 MeV and intermediate-energy physics models for all the other nuclear interactions.
Shielding calculational system for plutonium
International Nuclear Information System (INIS)
A computer calculational system has been developed and assembled specifically for calculating dose rates in AEC plutonium fabrication facilities. The system consists of two computer codes and all nuclear data necessary for the calculation of neutron and gamma dose rates from plutonium. The codes include the multigroup version of the Battelle Monte Carlo code, for the solution of general neutron and gamma shielding problems, and the PUSHLD code, for the solution of shielding problems where low-energy gamma and x-rays are important. The nuclear data consist of built-in neutron and gamma yields and spectra for various plutonium compounds, an automatic calculation of age effects, and all commonly used cross-sections. Experimental correlations have been performed to verify portions of the calculational system. (23 tables, 7 figs, 16 refs) (U.S.)
Configuration space Faddeev calculations
International Nuclear Information System (INIS)
The detailed study of few-body systems provides one of the most effective means for studying nuclear physics at subnucleon distance scales. For few-body systems the model equations can be solved numerically with errors less than the experimental uncertainties. We have used such systems to investigate the size of relativistic effects, the role of meson-exchange currents, and the importance of quark degrees of freedom in the nucleus. Complete calculations for momentum-dependent potentials have been performed, and the properties of the three-body bound state for these potentials have been studied. Few-body calculations of the electromagnetic form factors of the deuteron and pion have been carried out using a front-form formulation of relativistic quantum mechanics. The decomposition of the operators transforming covariantly under the Poincare group into kinematical and dynamical parts has been studied. New ways for constructing interactions between particles, as well as interactions which lead to the production of particles, have been developed in the context of a relativistic quantum mechanics. To compute scattering amplitudes in a nonperturbative way, classes of operators have been generated out of which the phase operator may be constructed. Finally, we have worked out procedures for computing Clebsch-Gordan and Racah coefficients on a computer, as well as procedures for dealing with the multiplicity problem
DEFF Research Database (Denmark)
Rasmussen, Lykke
One of the major challenges in today's post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these sensitivities have been determined using bump-and-revalue, but due to the increasing magnitude of these computations...
Institute of Scientific and Technical Information of China (English)
刘雪峰; 刘金平; 陈星龙; 陆继东
2012-01-01
Taking a reverse-return chilled-water system as the research object, and with full consideration of the regulating characteristics of the temperature-regulating valves in the terminal branches, a mathematical model of the hydraulic characteristics of the pipe network is established and a computer logic algorithm based on virtual flows is proposed. Taking a reverse-return network with ten AHU branches as an example, the valve openings and actual flows in the branches are calculated for different supply-return water pressure differences under three operating conditions: all terminal branches hydraulically adjustable, some AHUs closed, and some AHUs out of adjustment. The calculated results accord with the inherent characteristics of reverse-return pipe networks.
Lopez, Cesar
2015-01-01
MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. This book is designed for use as a scientific/business calculator so that you can get numerical solutions to problems involving a wide array of mathematics using MATLAB. Just look up the function y
International Nuclear Information System (INIS)
The technetium-99m radionuclide is the workhorse of nuclear medicine and currently accounts for over 80% of all in-vivo diagnostic procedures. Technetium, element 43 in the Periodic Table, does not occur naturally but was discovered in 1937 by Perrier and Segre. The daughter radionuclide technetium-99m (/sup 99m/Tc, T/sub 1/2/ = 6 h) is formed from the decay of the parent molybdenum-99 (/sup 99/Mo, T/sub 1/2/ = 66 h). The prominent position of /sup 99m/Tc on the market is due to its near-ideal nuclear properties, the ready availability of the convenient /sup 99/Mo-/sup 99m/Tc generator system, and the rapid progress made in recent years in the development of a variety of /sup 99m/Tc radiopharmaceuticals for applications in oncology, cardiology and other fields. The /sup 99/Mo radionuclide is produced by the fission of uranium with thermal or fast neutrons in a nuclear reactor. The separation of /sup 99m/Tc from /sup 99/Mo is based on a chromatographic alumina column, where the carrier-free daughter pertechnetate /sup 99m/TcO/sub 4//sup -/ formed from /sup 99/Mo by beta-decay is eluted periodically by 0.9% saline while the molybdate remains adsorbed on the alumina column. Examination of the /sup 99/Mo-/sup 99m/Tc-/sup 99/Tc decay scheme leads to mathematical equations for the theoretical calculation of generator parameters in order to judge the performance of a /sup 99/Mo-/sup 99m/Tc generator. However, these calculations are laborious and time consuming. A computer program 'MOGEN-TEC-2' for the calculation of /sup 99/Mo-/sup 99m/Tc generator parameters has been developed using Java 1.3 software, which has the advantage of working under different operating systems. The data for the input variables are entered in a text area and the output for each elution, after execution of the program, is displayed in a label area. The input variables are the /sup 99/Mo activity, the generator elution efficiency, the number of elutions, the growth time between elutions, the decay time between elutions and the use of /sup 99m/Tc.
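The generator equations referred to here derive from the standard parent-daughter (Bateman) decay relations. A minimal sketch using the half-lives quoted in the abstract follows; the 87.5% branching fraction of /sup 99/Mo decays feeding /sup 99m/Tc is an assumed literature value, and the function name is ours, not MOGEN-TEC-2's.

```python
# Parent-daughter activity from the Bateman relations for the
# 99Mo -> 99mTc generator.  The half-lives are those quoted in the
# abstract; the branching fraction (about 87.5% of 99Mo decays feed the
# 99mTc isomeric state) is an assumed literature value.
import math

T_MO = 66.0     # h, parent 99Mo half-life
T_TC = 6.0      # h, daughter 99mTc half-life
BRANCH = 0.875  # assumed fraction of 99Mo decays feeding 99mTc

def tc99m_activity(a_mo0, t):
    """99mTc activity at time t (hours) after a complete elution,
    given the 99Mo activity a_mo0 at the time of elution."""
    lam_p = math.log(2) / T_MO
    lam_d = math.log(2) / T_TC
    return (BRANCH * a_mo0 * lam_d / (lam_d - lam_p)
            * (math.exp(-lam_p * t) - math.exp(-lam_d * t)))

# Daughter activity builds up and peaks roughly a day after the
# previous elution, which is why generators are typically milked daily.
print(round(tc99m_activity(1000.0, 24.0), 1))
```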
Molecular calculations with B functions
Steinborn, E O; Ema, I; López, R; Ramírez, G
1998-01-01
A program for molecular calculations with B functions is reported and its performance is analyzed. All the one- and two-center integrals, and the three-center nuclear attraction integrals are computed by direct procedures, using previously developed algorithms. The three- and four-center electron repulsion integrals are computed by means of Gaussian expansions of the B functions. A new procedure for obtaining these expansions is also reported. Some results on full molecular calculations are included to show the capabilities of the program and the quality of the B functions to represent the electronic functions in molecules.
Deitel, Harvey M
1985-01-01
Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of
Institute of Scientific and Technical Information of China (English)
张涛; 洪文学
2012-01-01
In existing work on computational-geometry combining classifiers, the weight assignment for sub-classifiers has not made full use of the spatial visual information, which limits the visualization capability of the classifier. Starting from the category distribution in the class space, this paper proposes a weight-assignment method based on class-space regularity. The method first converts each sub-classifier from a category representation of the space to a spatial representation of the categories, and then uses co-occurrence rules to analyze the regularity of the distribution of the different categories in the space. Since the distribution regularity reflects the category distribution as a whole and characterizes the dispersion of the samples of different classes in the class space, the regularity of a sub-classifier's class space can be used as its weight. Experiments show that the classifier weighted by regularity not only agrees better with its visualization, making the classification process easier to understand, but also further improves classification accuracy and extends the range of applications.
Calculation of transonic aileron buzz
Steger, J. L.; Bailey, H. E.
1979-01-01
An implicit finite-difference computer code that uses a two-layer algebraic eddy viscosity model and exact geometric specification of the airfoil has been used to simulate transonic aileron buzz. The calculations, which were performed on both the Illiac IV parallel computer processor and the Control Data 7600 computer, are in essential agreement with the original expository wind-tunnel data taken in the Ames 16-Foot Wind Tunnel just after World War II. These results and a description of the pertinent numerical techniques are included.
Directory of Open Access Journals (Sweden)
Adriano Francisco de Lucca Facholli
2006-04-01
The diagnosis of the Bolton tooth-size discrepancy is of fundamental importance for a good orthodontic finishing. By measuring the teeth with the aid of a digital caliper and entering the values into the computer program developed by the authors and presented in this article, the orthodontist's work becomes simpler: no mathematical calculation or table of values is needed, eliminating the probability of mistakes. In addition, the program reports the location of the discrepancy by segment (overall, anterior and posterior) and individually, by dental element, allowing greater precision in planning the strategies for solving the problems and leading toward a successful orthodontic treatment.
Institute of Scientific and Technical Information of China (English)
关仲
2012-01-01
Based on the linearized-characteristic method and the symmetrical-component method, Visual Basic was used to program a short-circuit calculation for hydropower stations. The program covers two stages, subtransient and steady-state short circuit, and obtains the symmetrical components (positive-, negative- and zero-sequence) of the voltage and current in each branch under various short-circuit types, expressing them in phasor (complex-number) form to meet the practical requirements of relay protection and secondary electrical engineering. The short-circuit algorithm is also reviewed and locally optimized to suit computer programming.
Quantum Transport Calculations Using Periodic Boundary Conditions
Wang, Lin-Wang
2004-01-01
An efficient new method is presented to calculate quantum transport using periodic boundary conditions. This method allows the use of conventional ground-state ab initio programs without big changes. The computational effort is only a few times that of a normal ground-state calculation, thus making accurate quantum transport calculations for large systems possible.
47 CFR 1.1623 - Probability calculation.
2010-10-01
47 CFR 1.1623 (Mass Media Services, General Procedures), edition of 2010-10-01: (a) All calculations shall be computed to no less than three significant digits. Probabilities will be truncated to the number...
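The rule above distinguishes truncation from ordinary rounding. A small sketch of truncating a probability to three significant digits follows; the helper is illustrative, not an official FCC implementation.

```python
# Truncate (not round) a positive number to a given number of
# significant digits, as 47 CFR 1.1623 prescribes for probabilities.
import math

def truncate_sig(x, digits=3):
    if x == 0:
        return 0.0
    exp = math.floor(math.log10(abs(x)))       # position of the leading digit
    factor = 10.0 ** (digits - 1 - exp)
    return math.floor(x * factor) / factor

# Truncation keeps digits that ordinary rounding would have promoted.
print(truncate_sig(0.0012996))  # → 0.00129 (rounding would give 0.0013)
print(truncate_sig(0.987654))   # → 0.987
```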
Directory of Open Access Journals (Sweden)
Murray M.
2006-11-01
The aim of this article is to show that Factorial Design, a method commonly used in laboratories and production units, can also be applied very successfully to scientific computing and, in particular, to computerized simulations. Computer runs can be reduced by a factor as great as four while achieving a comprehensive understanding of how a plant or a process runs. Simple models can then be constructed to provide a good image of the investigated phenomenon. The example given here is that of a plant processing raw natural gas whose outputs are a sales gas and an NGL which must simultaneously meet five specifications. The operator in charge of the simulations begins by defining the experimental range of investigation (Table 1). Calculations (Table 1, Fig. 2) are set in a pattern defined by Factorial Design (Table 2); these correspond to the apices of the experimental cube (Fig. 2). Results of the simulations are then reported in Table 3. These require analysis, using Factorial Design theory, in conjunction with each specification. A graphical approach is used to define the regions in which each specification is met: Fig. 3 shows the zone authorized for the first specification, the Wobbe Index, and Fig. 4 gives the results for the outlet pressure of the turbo-expander. Figs. 5, 6 and 7 show the zones allowed for the CO2/C2 ratio, the TVP and the C2/C3 ratio. A satisfactory zone is found, for this last ratio, outside of the investigated range. The results acquired so far enable us
International Nuclear Information System (INIS)
The computer programme HERA is used for the comparative calculation of temperature gradients in sodium-cooled fuel element clusters. It belongs to the group of computer programmes that take the subchannels formed by the rods as the smallest element of the flow cross-section. The short description outlines the basic characteristics of this computer programme. (HR)
OnSite was developed to provide modelers and model reviewers with prepackaged tools ("calculators") for performing site assessment calculations. The philosophy behind OnSite is that the convenience of the prepackaged calculators helps provide consistency for simple calculations,...
Pressure Vessel Calculations for VVER-440 Reactors
Hordósy, G.; Hegyi, Gy.; Keresztúri, A.; Maráczy, Cs.; Temesvári, E.; Vértes, P.; Zsolnay, É.
2003-06-01
Monte Carlo calculations were performed for a selected cycle of the Paks NPP Unit II to test a computational model. In the model the source term was calculated by the core design code KARATE and the neutron transport calculations were performed by MCNP. Different forms of the source specification were examined. The calculated results were compared with measurements, and in most cases fairly good agreement was found.
Energy Technology Data Exchange (ETDEWEB)
Cowan, R. D.; Rajnak, K.; Renard, P.
1976-06-01
This is a set of three Fortran IV programs, RCN29, HFMOD7, and RCN229, based on the Herman--Skillman and Charlotte Froese Fischer programs, with extensive modifications and additions. The programs compute self-consistent-field radial wave functions and the various radial integrals involved in the computation of atomic energy levels and spectra.
Applications of computer algebra
1985-01-01
Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed for...
Institute of Scientific and Technical Information of China (English)
罗芳; 沙莎
2014-01-01
The concept of "computational thinking" has profoundly influenced the direction of computer education reform at universities in China. Starting from an understanding of computation, this paper discusses and analyzes the concept and connotation of computational thinking, pointing out that the essence of computation is the process of transforming one information state into another, and analyzing the forms and characteristics of computation. Looking at the development of computation, it proposes that computational thinking should be understood in both a broad and a narrow sense: in the broad sense, computational thinking is a mode of thinking by which people abstract information from the real world and use tools to transform that information. The specific meanings of computational thinking are then analyzed.
Energy Technology Data Exchange (ETDEWEB)
Goudreau, G.L.
1993-03-01
The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of on-going Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.
Zero Temperature Hope Calculations
Energy Technology Data Exchange (ETDEWEB)
Rozsnyai, B F
2002-07-26
The primary purpose of the HOPE code is to calculate opacities over a wide temperature and density range. It can also produce equation of state (EOS) data. Since the experimental data in the high temperature region are scarce, comparisons of predictions with the ample zero temperature data provide a valuable physics check of the code. In this report we show a selected few examples across the periodic table. Below we give brief general information about the physics of the HOPE code. The HOPE code is an ''average atom'' (AA) Dirac-Slater self-consistent code. The AA label in the case of finite temperature means that the one-electron levels are populated according to Fermi statistics; at zero temperature it means that the ''aufbau'' principle works, i.e. no a priori electronic configuration is set, although it can be done. As such, it is a one-particle model (any Hartree-Fock model is a one-particle model). The code is an ''ion-sphere'' model, meaning that the atom under investigation is neutral within the ion-sphere radius. Furthermore, the boundary conditions for the bound states are also set at the ion-sphere radius, which distinguishes the code from the INFERNO, OPAL and STA codes. Once the self-consistent AA state is obtained, the code proceeds to generate many-electron configurations and to calculate photoabsorption in the ''detailed configuration accounting'' (DCA) scheme. However, this last feature is meaningless at zero temperature. There is one important feature of the HOPE code which should be noted: any self-consistent model is self-consistent in the space of the occupied orbitals. The unoccupied orbitals, into which electrons are lifted via photoexcitation, are unphysical. The rigorous way to deal with that problem is to carry out complete self-consistent calculations in both the initial and final states connecting photoexcitations, an enormous computational task
46 CFR 170.090 - Calculations.
2010-10-01
... necessary to compute and plot any of the following curves as part of the calculations required in this subchapter, these plots must also be submitted: (1) Righting arm or moment curves. (2) Heeling arm or...
Semantic Similarity Calculation of Chinese Word
Directory of Open Access Journals (Sweden)
Liqiang Pan
2014-08-01
This paper puts forward a two-layer computing method to calculate the semantic similarity of Chinese words. First, a Latent Dirichlet Allocation (LDA) topic model is used to generate the topic space. Words are then mapped into the topic space, and the resulting topic distributions are used to calculate word semantic similarity (the first-layer computation). Finally, the semantic dictionary "HowNet" is used to further refine the semantic similarity of the words (the second-layer computation). This method not only overcomes the problem that using LDA alone is not specific enough for word similarity, but also addresses the problems of new words that have not yet been added to the dictionary and of ignoring the specific context when calculating similarity from "HowNet" alone. Experimental comparison demonstrates the feasibility, availability and advantages of the calculation method.
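The first-layer computation, comparing words through their topic distributions, can be sketched as follows. The toy distributions below are invented for illustration; a real system would obtain them from a trained LDA model, and the HowNet second layer is not shown.

```python
# First-layer sketch: similarity between words represented as topic
# distributions.  The toy 4-topic distributions are invented; a real
# system would take them from a trained LDA model.
import math

def cosine(p, q):
    """Cosine similarity between two topic-distribution vectors."""
    dot = sum(a * b for a, b in zip(p, q))
    return dot / (math.sqrt(sum(a * a for a in p)) *
                  math.sqrt(sum(b * b for b in q)))

# Hypothetical topic distributions over 4 LDA topics.
bank_finance = [0.70, 0.20, 0.05, 0.05]
loan         = [0.65, 0.25, 0.05, 0.05]
river        = [0.05, 0.05, 0.70, 0.20]

# Semantically related words share topic mass and score higher.
print(cosine(bank_finance, loan) > cosine(bank_finance, river))  # → True
```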
Efficient Finite Element Calculation of Nγ
DEFF Research Database (Denmark)
Clausen, Johan; Damkilde, Lars; Krabbenhøft, K.
2007-01-01
This paper deals with the computational aspects of the Mohr-Coulomb material model, in particular the calculation of the bearing capacity factor Nγ for a strip and a circular footing.
Calculation of mixed core safety parameters
International Nuclear Information System (INIS)
The purpose of this presentation is to explain, from the standpoint of reactor physics, the most important nuclear safety parameters in mixed TRIGA cores, as well as their calculation methods and appropriate computer codes. Nuclear core parameters such as power density peaking factors and temperature reactivity coefficients are considered. The computer codes adapted, tested and widely available for TRIGA nuclear calculations are presented. Thermal-hydraulic aspects of the safety analysis are not treated.
Distillation Calculations with a Programmable Calculator.
Walker, Charles A.; Halpern, Bret L.
1983-01-01
Describes a three-step approach for teaching multicomponent distillation to undergraduates, emphasizing patterns of distribution as an aid to understanding the separation processes. Indicates that the second step can be carried out by programmable calculators. (A more complete set of programs for additional calculations is available from the…
Direct Computation on the Kinetic Spectrophotometry
DEFF Research Database (Denmark)
Hansen, Jørgen-Walther; Broen Pedersen, P.
1974-01-01
This report describes an analog computer designed for calculations of transient absorption from photographed recordings of the oscilloscope trace of the transmitted light intensity. The computer calculates the optical density OD, the natural logarithm of OD, and the natural logarithm...
Federal Laboratory Consortium — The NETL Super Computer was designed for performing engineering calculations that apply to fossil energy research. It is one of the world’s larger supercomputers,...
Energy Technology Data Exchange (ETDEWEB)
Dubois, J.
2011-10-13
In science, simulation is a key process for research and validation. Modern computer technology allows faster numerical experiments, which are cheaper than physical models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a great deal of computing power may be necessary. The work of this thesis is, first, the evaluation of new computing hardware such as graphics cards and massively multi-core chips, and their application to eigenvalue problems in neutron simulation. Then, in order to exploit the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then test the results of this research on several national supercomputers, such as the Titane hybrid machine of the Computing Centre for Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the value of this research for everyday use with local computing resources. (author)
Constructing Programs from Example Computations.
Bierman, A. W.; Krishnaswamy, R.
This paper describes the construction and implementation of an autoprogramming system. An autoprogrammer is an interactive computer programming system which automatically constructs computer programs from example computations executed by the user. The example calculations are done in a scratch pad fashion at a computer display, and the system…
Energy Technology Data Exchange (ETDEWEB)
Siciliano, F.; Lippolis, G.; Bruno, S.G. [ENEA, Centro Ricerche Trisaia, Rotondella (Italy)
1995-11-01
In the present paper, the computational aspects of dose-rate calculation for neutron and photon radiation sources are covered, with reference both to the basic theoretical modelling of the MERCURE-4, XSDRNPM-S and MCNP-3A codes and, from a practical point of view, to safety analyses of the irradiation risk of two transportation casks. The input data set of these calculations, regarding the CEN 10/200 HLW container and a shipping cask for dry PWR spent fuel assemblies, is commented on extensively, as far as the connection between the input data and the underlying theoretical background is concerned.
Energy Technology Data Exchange (ETDEWEB)
Corpa Masa, R.; Jimenez Varas, G.; Moreno Garcia, B.
2016-08-01
To simulate the behavior of nuclear fuel under operating conditions, all representative loads must be included, among them the lateral hydraulic forces, which traditionally were not included because of the difficulty of calculating them reliably. Thanks to advances in CFD codes, it is now possible to assess them. This study calculates the local lateral hydraulic forces, caused by the contraction and expansion of the flow due to the bow of the surrounding fuel assemblies, on a fuel assembly under typical operating conditions of a three-loop Westinghouse PWR reactor. (Author)
Insertion device calculations with Mathematica
Energy Technology Data Exchange (ETDEWEB)
Carr, R. [Stanford Synchrotron Radiation Lab., CA (United States); Lidia, S. [Univ. of California, Davis, CA (United States)
1995-02-01
The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high-level languages like Fortran. In the present era, there are higher-level programming environments like IDL®, MATLAB®, and Mathematica®, in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion-device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel-function calculations of radiation for wigglers and undulators, and general radiation calculations for undulators.
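The trajectory-solution step mentioned above can also be sketched outside Mathematica. The following is an assumed, simplified integration (not the authors' routine) of the small-angle equation of motion x''(z) = -(K k_u/γ) sin(k_u z) for a planar sinusoidal undulator field, whose analytic solution is x'(z) = (K/γ) cos(k_u z):

```python
import math

def undulator_slope(K, gamma, k_u, n_periods=5, steps_per_period=2000):
    """Symplectic-Euler integration of the small-angle equation of motion
    x''(z) = -(K * k_u / gamma) * sin(k_u * z) for a planar undulator.
    Returns (z_end, x_end, slope_end); analytically x'(z) = (K/gamma)*cos(k_u*z)."""
    dz = 2.0 * math.pi / (k_u * steps_per_period)
    xp = K / gamma                 # initial slope matching the analytic orbit
    x = 0.0
    z = 0.0
    for _ in range(n_periods * steps_per_period):
        xp += -(K * k_u / gamma) * math.sin(k_u * z) * dz
        x += xp * dz
        z += dz
    return z, x, xp

# K=1 undulator, 5 cm period, gamma=100: after 5 full periods the slope
# returns to its initial value K/gamma = 0.01
z_end, x_end, xp_end = undulator_slope(K=1.0, gamma=100.0, k_u=2.0 * math.pi / 0.05)
```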
Parallel nearest neighbor calculations
Trease, Harold
We are just starting to parallelize the nearest neighbor portion of our free-Lagrange code. Our implementation of the nearest neighbor reconnection algorithm has not been parallelizable (i.e., we just flip one connection at a time). In this paper we consider what sort of nearest neighbor algorithms lend themselves to being parallelized. For example, the construction of the Voronoi mesh can be parallelized, but the construction of the Delaunay mesh (dual to the Voronoi mesh) cannot because of degenerate connections. We will show our most recent attempt to tessellate space with triangles or tetrahedrons with a new nearest neighbor construction algorithm called DAM (Dial-A-Mesh). This method has the characteristics of a parallel algorithm and produces a better tessellation of space than the Delaunay mesh. Parallel processing is becoming an everyday reality for us at Los Alamos. Our current production machines are Cray YMPs with 8 processors that can run independently or combined to work on one job. We are also exploring massive parallelism through the use of two 64K processor Connection Machines (CM2), where all the processors run in lock step mode. The effective application of 3-D computer models requires the use of parallel processing to achieve reasonable "turn around" times for our calculations.
Automation of 2-loop Amplitude Calculations
Jones, S P
2016-01-01
Some of the tools and techniques that have recently been used to compute Higgs boson pair production at NLO in QCD are discussed. The calculation relies on the use of integral reduction, to reduce the number of integrals which must be computed, and expressing the amplitude in terms of a quasi-finite basis, which simplifies their numeric evaluation. Emphasis is placed on sector decomposition and Quasi-Monte Carlo (QMC) integration which are used to numerically compute the master integrals.
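The Quasi-Monte Carlo integration mentioned above replaces pseudo-random sample points with a low-discrepancy set; a toy sketch on a smooth two-dimensional integrand (an additive Kronecker lattice is assumed here for simplicity, not the specific QMC rule used for the master integrals):

```python
import math

def qmc_integrate(f, n):
    """Quasi-Monte Carlo estimate of the integral of f over the unit square,
    using an additive (Kronecker) low-discrepancy sequence built from
    irrational generators."""
    a1 = math.sqrt(2.0) % 1.0
    a2 = math.sqrt(3.0) % 1.0
    total = 0.0
    for k in range(1, n + 1):
        x = (k * a1) % 1.0
        y = (k * a2) % 1.0
        total += f(x, y)
    return total / n

estimate = qmc_integrate(lambda x, y: x * y, 20000)   # exact value is 1/4
```

For smooth integrands such sequences typically converge faster than the 1/sqrt(n) rate of plain Monte Carlo sampling.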
Autistic Savant Calendar Calculators.
Patti, Paul J.
This study identified 10 savants with developmental disabilities and an exceptional ability to calculate calendar dates. These "calendar calculators" were asked to demonstrate their abilities, and their strategies were analyzed. The study found that the ability to calculate dates into the past or future varied widely among these calculators. Three…
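For background, weekday determination itself is algorithmically simple; Zeller's congruence is a standard formula (this illustrates the task, not the savants' mental strategies):

```python
def zeller_weekday(year, month, day):
    """Day of the week via Zeller's congruence (Gregorian calendar).
    Returns 0=Saturday, 1=Sunday, ..., 6=Friday."""
    if month < 3:           # January/February count as months 13/14 of the prior year
        month += 12
        year -= 1
    K = year % 100          # year within the century
    J = year // 100         # zero-based century
    return (day + (13 * (month + 1)) // 5 + K + K // 4 + J // 4 + 5 * J) % 7

# 2000-01-01 was a Saturday (0); 1776-07-04 was a Thursday (5)
```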
Verification of Internal Dose Calculations.
Aissi, Abdelmadjid
The MIRD internal dose calculations have been in use for more than 15 years, but their accuracy has always been questionable. There have been attempts to verify these calculations; however, these attempts had various shortcomings which kept the question of verification of the MIRD data still unanswered. The purpose of this research was to develop techniques and methods to verify the MIRD calculations in a more systematic and scientific manner. The research consisted of improving a volumetric dosimeter, developing molding techniques, and adapting the Monte Carlo computer code ALGAM to the experimental conditions and vice versa. The organic dosimetric system contained TLD-100 powder and could be shaped to represent human organs. The dosimeter possessed excellent characteristics for the measurement of internal absorbed doses, even in the case of the lungs. The molding techniques are inexpensive and were used in the fabrication of dosimetric and radioactive source organs. The adaptation of the computer program provided useful theoretical data with which the experimental measurements were compared. The experimental data and the theoretical calculations were compared for 6 source organ-7 target organ configurations. The results of the comparison indicated the existence of an agreement between measured and calculated absorbed doses, when taking into consideration the average uncertainty (16%) of the measurements, and the average coefficient of variation (10%) of the Monte Carlo calculations. However, analysis of the data gave also an indication that the Monte Carlo method might overestimate the internal absorbed doses. Even if the overestimate exists, at least it could be said that the use of the MIRD method in internal dosimetry was shown to lead to no unnecessary exposure to radiation that could be caused by underestimating the absorbed dose. The experimental and the theoretical data were also used to test the validity of the Reciprocity Theorem for heterogeneous
Calculation of plasma characteristics of the sun
Institute of Scientific and Technical Information of China (English)
Muhammad Abbas Bari; Zhong Jia-Yong; Chen Miu; Zhao Jing; Zhang Jie
2006-01-01
The ionization level and free-electron density of the most abundant elements (C, N, O, Mg, Al, Si, S, and Fe) in the sun are calculated from the centre of the sun to the surface of the photosphere. The model and computations assume local thermodynamic equilibrium (LTE). The Saha equation is used to calculate the ionization level of the elements and the electron density. Temperature values along the solar radius are taken from the references.
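The Saha-equation step mentioned above can be sketched for a single ionization stage (constants rounded, hydrogen values chosen for illustration; the paper treats heavier elements and multiple stages):

```python
import math

# Physical constants (SI, rounded)
ME = 9.109e-31      # electron mass, kg
KB = 1.381e-23      # Boltzmann constant, J/K
H  = 6.626e-34      # Planck constant, J*s
EV = 1.602e-19      # 1 eV in J

def saha_ratio(T, n_e, chi_ev, g_ratio=1.0):
    """Saha ionization ratio n_(i+1)/n_i for one ionization stage.
    T in K, electron density n_e in m^-3, ionization energy chi_ev in eV."""
    therm = (2.0 * math.pi * ME * KB * T / H**2) ** 1.5   # thermal de Broglie factor
    return 2.0 * g_ratio * therm * math.exp(-chi_ev * EV / (KB * T)) / n_e

# Hydrogen (chi = 13.6 eV) at photosphere-like conditions: mostly neutral
ratio = saha_ratio(T=6000.0, n_e=1e19, chi_ev=13.6)
```

At photospheric temperatures the ratio comes out small (of order 10^-3), reproducing the familiar result that hydrogen is predominantly neutral there, and it grows rapidly with temperature.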
Institute of Scientific and Technical Information of China (English)
李坤; 徐小照
2015-01-01
The calculation of the flywheel moment for a reciprocating compressor is one of the key parts of its dynamic calculation. Neither hand calculation nor programming alone can satisfy the requirements of precision, reliability and convenience. This article presents a method of drawing the synthetic piston-force diagram and the tangential-force diagram in AutoCAD to calculate the flywheel moment. Using a practical design example of the ZW model, the application of AutoCAD to the calculation of the flywheel moment for a reciprocating compressor is studied. The method has proved to be intuitive, precise, convenient and reliable.
Bunt, Harry; Pulman, Stephen
2013-01-01
This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue i
A Short History of the Computer.
Leon, George
1984-01-01
Briefly traces the development of computers from the abacus, John Napier's logarithms, the first computer/calculator (known as the Differential Engine), the first computer programming via steel punched cards, the electrical analog computer, electronic digital computer, and the transistor to the microchip of today's computers. (MBR)
Methods for Melting Temperature Calculation
Hong, Qi-Jun
Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of Tantalum, high-pressure Sodium, and ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
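The basic Widom particle-insertion estimate that the thesis improves upon can be sketched in a few lines: the excess chemical potential follows from averaging the Boltzmann factor of the insertion energy of a test particle, βμ_ex = -ln⟨exp(-βΔU)⟩. The toy below uses a random dilute Lennard-Jones configuration in reduced units (not the cavity-biased scheme or first-principles energies of the thesis):

```python
import math
import random

def lj(r2):
    """Lennard-Jones pair energy (reduced units) from a squared distance."""
    inv6 = 1.0 / r2 ** 3
    return 4.0 * (inv6 * inv6 - inv6)

def widom_mu_excess(coords, box, T, n_insert=2000, seed=1):
    """Widom test-particle estimate of the excess chemical potential:
    beta*mu_ex = -ln < exp(-beta * dU) > over random trial insertions."""
    rng = random.Random(seed)
    beta = 1.0 / T
    acc = 0.0
    for _ in range(n_insert):
        test = [rng.uniform(0.0, box) for _ in range(3)]
        dU = 0.0
        for p in coords:
            r2 = sum(min(abs(t - c), box - abs(t - c)) ** 2   # minimum image
                     for t, c in zip(test, p))
            dU += lj(max(r2, 0.64))    # cap the repulsive core at near-overlaps
        acc += math.exp(-beta * dU)
    return -T * math.log(acc / n_insert)

# A dilute random toy configuration: mu_ex should be small in magnitude
rng = random.Random(0)
coords = [[rng.uniform(0.0, 10.0) for _ in range(3)] for _ in range(20)]
mu_ex = widom_mu_excess(coords, box=10.0, T=2.0)
```

The inefficiency the thesis addresses is visible here: in a dense system almost all random insertions overlap a particle and contribute nothing to the average, which is why biased sampling of cavities pays off.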
The Computational Materials Repository
DEFF Research Database (Denmark)
Landis, David D.; Hummelshøj, Jens S.; Nestorov, Svetlozar;
2012-01-01
The possibilities for designing new materials based on quantum physics calculations are rapidly growing, but these design efforts lead to a significant increase in the amount of computational data created. The Computational Materials Repository (CMR) addresses this data challenge and provides...
Directory of Open Access Journals (Sweden)
Anamaria Şiclovan
2013-02-01
Full Text Available
Cloud computing was, and will continue to be, a new way of providing Internet services and computing. This approach builds on many existing services, such as the Internet, grid computing, and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in price and infrastructure. It is precisely the transition from the computer to a service offered to consumers as a product delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics they offer. It is a theoretical paper.
Keywords: Cloud computing, QoS, quality of cloud computing
Energy Technology Data Exchange (ETDEWEB)
Perache, M
2006-10-15
In the field of intensive scientific computing, the quest for performance must face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy, which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multiprocessors. It features a programming model centered on collective communications and synchronizations, and provides load-balancing facilities. The programming interface, named MPC, provides high-level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)
Energy Technology Data Exchange (ETDEWEB)
Huang, B-T; Lu, J-Y [Cancer Hospital of Shantou University Medical College, Shantou (China)
2015-06-15
Purpose: We introduce a new method, combining deformable image registration (DIR) and regions-of-interest mapping (ROIM), to accurately calculate dose on daily CBCT for esophageal cancer. Methods: Patients suffering from esophageal cancer were enrolled in the study. The prescription was set to 66 Gy/30 F and 54 Gy/30 F to the primary tumor (PTV66) and subclinical disease (PTV54), respectively. Planning CT (pCT) images were segmented into 8 substructures according to their differences in physical density: gross target volume (GTV), superior vena cava (SVC), aorta, heart, spinal cord, lung, muscle and bones. The pCT and its substructures were transferred to the MIM software to read out their mean HU values. Afterwards, a deformable registration of the planning CT to the daily kV-CBCT images was utilized to acquire a new structure set on the CBCT. The newly generated structures on the CBCT were then transferred back to the treatment planning system (TPS), and their HU information was overridden manually with the mean HU values obtained from the pCT. Finally, the treatment plan was projected onto the CBCT images with the same beam arrangements and monitor units (MUs) to accomplish the dose calculation. Planning target volumes (PTVs) and organs at risk (OARs) from both the pCT and the CBCT were compared to evaluate the dose calculation accuracy. Results: The dose distribution in the CBCT showed little difference compared to the pCT, regardless of whether PTVs or OARs were concerned. Specifically, the dose variations in GTV, PTV54, PTV66, SVC, lung and heart were within 0.1%. The maximum dose variation appeared in the spinal cord, with up to a 2.7% dose difference. Conclusion: The proposed method combining the DIR and ROIM techniques to accurately calculate the dose distribution on CBCT for esophageal cancer is feasible.
Energy Technology Data Exchange (ETDEWEB)
Santos, William S.; Carvalho Junior, Alberico B. de; Pereira, Ariana J.S.; Santos, Marcos S.; Maia, Ana F., E-mail: williathan@yahoo.com.b, E-mail: ablohem@gmail.co, E-mail: ariana-jsp@hotmail.co, E-mail: m_souzasantos@hotmail.co, E-mail: afmaia@ufs.b [Universidade Federal de Sergipe (UFS), Aracaju, SE (Brazil)
2011-10-26
In this paper, conversion coefficients (CCs) from air kerma to equivalent and effective dose, as suggested by ICRP 74, were calculated. These dose coefficients were calculated considering a planar, monoenergetic radiation source for a spectrum of energies varying from 10 keV to 2 MeV. The CCs were obtained for four irradiation geometries: anterior-posterior, posterior-anterior, right lateral and left lateral. The radiation transport code Visual Monte Carlo (VMC) and a seated female anthropomorphic voxel simulator were used. The differences observed in the CC values for the four irradiation scenarios are a direct result of the disposition of the body organs and of the distance of these organs from the irradiation source. The obtained CCs will be used for more precise dose estimates in situations where the exposed individual is seated, as the CCs available in the literature were normally calculated using simulators that are either lying or standing.
Ramalingam, S.; Periandy, S.
2011-03-01
In the present study, the FT-IR and FT-Raman spectra of 4-chloro-2-methylaniline (4CH2MA) have been recorded in the range of 4000-100 cm-1. The fundamental modes of vibrational frequencies of 4CH2MA are assigned. All the geometrical parameters have been calculated by HF and DFT (LSDA, B3LYP and B3PW91) methods with 6-31G(d,p) and 6-311G(d,p) basis sets. Optimized geometries of the molecule have been interpreted and compared with the reported experimental values for aniline and some substituted anilines. The harmonic and anharmonic vibrational wavenumbers, IR intensities and Raman activities are calculated at the same levels of theory used in the geometry optimization. The calculated frequencies are scaled and compared with experimental values. The scaled vibrational frequencies at LSDA/B3LYP/6-311G(d,p) coincide with the experimentally observed values within acceptable deviations. The impact of the substitutions on the benzene structure is investigated. The molecular interactions between the substituents (Cl, CH3 and NH2) are also analyzed.
Energy Technology Data Exchange (ETDEWEB)
Archier, P.
2011-09-14
The safety criteria to be met by Generation IV sodium fast reactors (SFR) require reduced and well-mastered uncertainties on the neutronic quantities of interest. Part of these uncertainties come from nuclear data and, in the particular case of SFR, from sodium nuclear data, which show significant differences between the available international libraries (JEFF-3.1.1, ENDF/B-VII.0, JENDL-4.0). The objective of this work is to improve the knowledge of sodium nuclear data for better calculation of SFR neutronic parameters and reliable associated uncertainties. After an overview of existing 23Na data, the impact of the differences is quantified, particularly on sodium void reactivity effects, with both deterministic and stochastic neutronic codes. The results show that a complete re-evaluation of sodium nuclear data is necessary. Several developments have been made in the evaluation code Conrad to integrate new nuclear reaction models and their associated parameters, and to perform adjustments with integral measurements. Following these developments, the analysis of differential data and the propagation of experimental uncertainties have been performed with Conrad. The resolved-resonance range has been extended up to 2 MeV, and the continuum range begins directly beyond this energy. A new 23Na evaluation and the associated multigroup covariance matrices were generated for future uncertainty calculations. The last part of this work focuses on feedback from sodium void integral data, using integral data assimilation methods to reduce the uncertainties on sodium cross sections. The work ends with uncertainty calculations for an industrial-like SFR, which show improved prediction of its neutronic parameters with the new evaluation. (author)
Research in Computational Astrobiology
Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.
2003-01-01
We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple-time-step algorithms; statistical methods for the analysis of astrophysical data via optimal partitioning methods; electronic structure calculations on water-nucleic acid complexes; incorporation of structural information into genomic sequence analysis methods; and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.
Grier, David Alan
2013-01-01
Before Palm Pilots and iPods, PCs and laptops, the term "computer" referred to the people who did scientific calculations by hand. These workers were neither calculating geniuses nor idiot savants but knowledgeable people who, in other circumstances, might have become scientists in their own right. When Computers Were Human represents the first in-depth account of this little-known, 200-year epoch in the history of science and technology. Beginning with the story of his own grandmother, who was trained as a human computer, David Alan Grier provides a poignant introduction to the wider world…
The rating reliability calculator
Directory of Open Access Journals (Sweden)
Solomon David J
2004-04-01
Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server to calculate the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean of the number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally, the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple Web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to obtain complete rating data. I would welcome other researchers revising and enhancing the program.
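The Spearman-Brown prophecy step mentioned in the Results is a one-line formula; a standalone sketch (not the PHP utility itself):

```python
def spearman_brown(r, k):
    """Predicted reliability of the mean of k ratings,
    given a single-rating reliability r."""
    return k * r / (1.0 + (k - 1.0) * r)

# A single-judge reliability of 0.5 rises to about 0.67 when two judges are averaged
boosted = spearman_brown(0.5, 2)
```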
Flow Field Calculations for Afterburner
Institute of Scientific and Technical Information of China (English)
ZhaoJianxing; LiuQuanzhong; 等
1995-01-01
In this paper a calculation procedure for simulating the combustion flow in an afterburner with a heat shield, flame stabilizer and contracting nozzle is described and evaluated by comparison with experimental data. The modified two-equation κ-ε model is employed to account for turbulence effects, and the κ-ε-g turbulent combustion model is used to determine the reaction rate. To take into account the influence of heat radiation on the gas temperature distribution, a heat-flux model is applied to predict heat-flux distributions. The solution domain spans the entire region between the centerline and the afterburner wall, with the heat shield represented as a blockage in the mesh. The enthalpy equation and the wall boundary of the heat shield require special handling for the two passages in the afterburner. In order to make the computer program suitable for engineering applications, a subregional scheme is developed for calculating flow fields in complex geometries. The computational grids employed are 100×100 and 333×100 (non-uniformly distributed). The numerical results are compared with experimental data. Agreement between predictions and measurements shows that the numerical method and the computational program used in this study are reasonable and appropriate for the primary design of the afterburner.
Personal Finance Calculations.
Argo, Mark
1982-01-01
Contains explanations and examples of mathematical calculations for a secondary level course on personal finance. How to calculate total monetary cost of an item, monthly payments, different types of interest, annual percentage rates, and unit pricing is explained. (RM)
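The monthly-payment calculation covered by the course follows the standard amortization formula M = P·r(1+r)^n / ((1+r)^n - 1); a sketch with made-up loan figures:

```python
def monthly_payment(principal, annual_rate, years):
    """Fixed monthly payment for a fully amortized loan.
    annual_rate is the nominal yearly rate, e.g. 0.06 for 6%."""
    r = annual_rate / 12.0                 # monthly interest rate
    n = years * 12                         # number of payments
    if r == 0.0:
        return principal / n               # interest-free edge case
    return principal * r * (1.0 + r) ** n / ((1.0 + r) ** n - 1.0)

payment = monthly_payment(10000.0, 0.06, 5)   # a 5-year, 6% loan of $10,000
total_cost = payment * 60                     # total monetary cost of the loan
```

The same two lines also give the "total monetary cost of an item" the abstract mentions: payment times the number of payments.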
Threlfall, John
2002-01-01
Suggests that strategy choice is a misleading characterization of efficient mental calculation and that teaching mental calculation methods as a whole is not conducive to flexibility. Proposes an alternative in which calculation is thought of as an interaction between noticing and knowledge. Presents an associated teaching approach to promote…
Energy Technology Data Exchange (ETDEWEB)
Clerc, S
1998-07-01
In this work, the numerical simulation of the fluid dynamics equations is addressed. Implicit upwind schemes of finite-volume type are used for this purpose. The first part of the dissertation deals with improving computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite-volume schemes based on Godunov's approach are unsuited to computing low-Mach-number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly, or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady-state problem. (author)
Energy Technology Data Exchange (ETDEWEB)
Diakhate, F.
2010-12-15
In recent years, there has been growing interest in using virtualization to improve the efficiency of data centers. This success is rooted in virtualization's excellent fault-tolerance and isolation properties, in the overall flexibility it brings, and in its ability to exploit multi-core architectures efficiently. These characteristics also make virtualization an ideal candidate for tackling the issues found in new compute-cluster architectures. However, in spite of recent improvements in virtualization technology, overheads remain in the execution of parallel applications, which prevent its use in the field of high-performance computing. In this thesis, we propose a virtual device dedicated to message passing between virtual machines, so as to improve the performance of parallel applications executed in a cluster of virtual machines. We also introduce a set of techniques facilitating the deployment of virtualized parallel applications. These functionalities have been implemented as part of a runtime system which makes it possible to benefit from virtualization's properties as transparently as possible to the user, while minimizing performance overheads. (author)
Energy Technology Data Exchange (ETDEWEB)
Rezende, Gabriel Fonseca da Silva
2015-06-01
Many radiotherapy centers acquire 15 and 18 MV linear accelerators to perform more effective treatments of deep tumors. However, the acquisition of this equipment must be accompanied by additional care in the shielding planning of the rooms that will house it. In cases where space is restricted, it is common to find primary barriers made of concrete and metal. The drawback of this type of barrier is photoneutron emission when high energy photons (e.g. 15 and 18 MV spectra) interact with the metallic material of the barrier. The emission of these particles constitutes a radiation protection problem inside and outside radiotherapy rooms, which should be properly assessed. A recent work has shown that the current model underestimates the neutron dose outside the treatment rooms. In this work, a computational model for the aforementioned problem was created from Monte Carlo simulations and artificial intelligence. The developed model was composed of three neural networks, one for each pair of material and spectrum: Pb18, Pb15 and Fe18. In a direct comparison with the McGinley method, the Pb18 network exhibited the better response for approximately 78% of the cases tested, the Pb15 network showed better results for 100% of the tested cases, and the Fe18 network produced better answers for 94% of the tested cases. Thus, the computational model composed of the three networks has shown more consistent results than the McGinley method. (author)
General-purpose software for science technology calculation
International Nuclear Information System (INIS)
We have developed many general-purpose software packages for parallel processing of science and technology calculations. This paper reports six of them: the STA (Seamless Thinking Aid) base software, a parallel numerical computation library, grid generation software for parallel computers, a real-time visualization system, a parallel benchmark test system, and an object-oriented parallel programming method. STA is a user interface software providing a total environment for parallel programming, a network computing environment for various parallel computers, and a desktop computing environment via the Web. Some examples using the above software are explained. One of them is a simultaneous parallel calculation of both the flow and the structure of a supersonic transport for design purposes. The others are various parallel calculations for nuclear fusion research, such as a molecular dynamics calculation and a calculation of reactor structure and fluid. The software is open to the public at the home page {http://guide.tokai.jaeri.go.jp/ccse/}. (S.Y.)
Hendricks, R. C.; Baron, A. K.; Peller, I. C.
1975-01-01
A FORTRAN IV subprogram called GASP is discussed which calculates the thermodynamic and transport properties of 10 pure fluids: parahydrogen, helium, neon, methane, nitrogen, carbon monoxide, oxygen, fluorine, argon, and carbon dioxide. The pressure range is generally from 0.1 to 400 atmospheres (to 100 atm for helium and to 1000 atm for hydrogen). The temperature ranges are from the triple point to 300 K for neon; to 500 K for carbon monoxide, oxygen, and fluorine; to 600 K for methane and nitrogen; to 1000 K for argon and carbon dioxide; to 2000 K for hydrogen; and from 6 to 500 K for helium. GASP accepts any two of pressure, temperature, and density as input conditions, as well as pressure together with either entropy or enthalpy. The properties available in any combination as output include temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, and surface tension. The subprogram design is modular so that the user can choose only those subroutines necessary for the calculations.
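As a toy illustration of the "any two of pressure, temperature, and density" input convention, the sketch below completes a state from any two of the three variables using the ideal-gas law. This is only an analogue: GASP itself uses real-gas equations of state, and the nitrogen gas constant here is merely an illustrative choice.

```python
R_SPECIFIC = 296.8  # J/(kg K), specific gas constant of nitrogen (illustrative)

def state_from_pair(p=None, t=None, rho=None):
    """Complete an ideal-gas state (P = rho * R * T) from any two of
    pressure (Pa), temperature (K) and density (kg/m^3) -- a toy analogue
    of GASP's two-of-three input convention."""
    if sum(v is not None for v in (p, t, rho)) != 2:
        raise ValueError("exactly two of p, t, rho must be given")
    if p is None:
        p = rho * R_SPECIFIC * t        # pressure from (T, rho)
    elif t is None:
        t = p / (rho * R_SPECIFIC)      # temperature from (P, rho)
    else:
        rho = p / (R_SPECIFIC * t)      # density from (P, T)
    return p, t, rho
```

A real-gas implementation would replace each branch with an equation-of-state evaluation (and an iterative inversion where the state equation is not explicit), which is why GASP's design is modular.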
Painless causality in defect calculations
Cheung, C; Cheung, Charlotte; Magueijo, Joao
1997-01-01
Topological defects must respect causality, a statement leading to restrictive constraints on the power spectrum of the total cosmological perturbations they induce. Causality constraints have long been known to require the presence of an under-density in the surrounding matter compensating the defect network on large scales. This so-called compensation can never be neglected and significantly complicates calculations in defect scenarios, e.g. computing cosmic microwave background fluctuations. A quick and dirty way to implement the compensation is through the so-called compensation fudge factors. Here we derive the complete photon-baryon-CDM backreaction effects in defect scenarios. The fudge factor comes out as an algebraic identity and so we drop the negative qualifier ``fudge''. The compensation scale is computed and physically interpreted. Secondary backreaction effects exist, and neglecting them constitutes the well-defined approximation scheme within which one should consider compensation factor calculations.
Non-commutative computer algebra and molecular computing
Directory of Open Access Journals (Sweden)
Svetlana Cojocaru
2001-12-01
Non-commutative calculations are considered from the molecular computing point of view. The main idea is that molecular computing offers more of an advantage for non-commutative computer algebra than for commutative computer algebra. The restrictions connected with coefficient handling in Gröbner basis calculations are investigated. The semigroup and group cases are considered as more appropriate. SAGBI basis constructions and possible implementations are discussed.
Rate calculation with colored noise
Bartsch, Thomas; Benito, R M; Borondo, F
2016-01-01
The usual identification of reactive trajectories for the calculation of reaction rates requires very time-consuming simulations, particularly if the environment presents memory effects. In this paper, we develop a new method that permits the identification of reactive trajectories in a system under the action of a stochastic colored driving force. This method is based on the perturbative computation of the invariant structures that act as separatrices for reactivity. Furthermore, using this perturbative scheme, we obtain a formally exact expression for the reaction rate in multidimensional systems coupled to colored noisy environments.
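The colored (exponentially correlated) noise that drives such simulations is commonly modeled as an Ornstein-Uhlenbeck process; a minimal sketch of sampling it, not taken from the paper:

```python
import math
import random

def colored_noise(n, dt, tau, sigma, seed=0):
    """Sample n points of an Ornstein-Uhlenbeck process: Gaussian noise
    with exponential correlation <x(t) x(t+s)> = sigma^2 exp(-|s|/tau).
    Uses the exact one-step update, so any time step dt is allowed."""
    rng = random.Random(seed)
    decay = math.exp(-dt / tau)                   # memory over one step
    kick = sigma * math.sqrt(1.0 - decay * decay) # keeps variance stationary
    x = sigma * rng.gauss(0.0, 1.0)               # start in the stationary state
    out = []
    for _ in range(n):
        out.append(x)
        x = decay * x + kick * rng.gauss(0.0, 1.0)
    return out
```

The correlation time tau is what produces the memory effects mentioned above; taking tau to zero recovers white noise.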
ITER Port Interspace Pressure Calculations
Energy Technology Data Exchange (ETDEWEB)
Carbajo, Juan J [ORNL; Van Hove, Walter A [ORNL
2016-01-01
The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) with high temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.
Xing, Tao; Stern, Frederick
2015-11-01
Eça and Hoekstra [1] proposed a procedure for the estimation of the numerical uncertainty of CFD calculations based on the least squares root (LSR) method. We believe that the LSR method has potential value for providing an extended Richardson-extrapolation solution verification procedure for mixed monotonic and oscillatory or only oscillatory convergent solutions (based on the usual systematic grid-triplet convergence condition R). Current Richardson-extrapolation solution verification procedures [2-7] are restricted to monotonic convergent solutions (0 < R < 1). We discuss a block diagram that summarizes the LSR procedure and its options, including some with which we are in disagreement. Compared to the grid-triplet and three-step procedure followed by most solution verification methods (convergence condition followed by error and uncertainty estimates), the LSR method follows a four-grid (minimum) and four-step procedure (error estimate, data range parameter Δϕ, FS, and uncertainty estimate).
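The grid-triplet convergence condition R and the Richardson-extrapolation error estimate referred to above can be sketched in their generic textbook form (this is not the LSR implementation of [1]):

```python
import math

def convergence_ratio(phi1, phi2, phi3):
    """R = eps21/eps32 for solutions on fine (phi1), medium (phi2) and
    coarse (phi3) grids; 0 < R < 1 indicates monotonic convergence."""
    return (phi2 - phi1) / (phi3 - phi2)

def observed_order(phi1, phi2, phi3, r=2.0):
    """Observed order of accuracy p for a constant grid refinement ratio r
    (valid only for monotonic convergence, 0 < R < 1)."""
    R = convergence_ratio(phi1, phi2, phi3)
    return math.log(1.0 / R) / math.log(r)

def richardson_error(phi1, phi2, phi3, r=2.0):
    """Estimated error of the fine-grid solution by Richardson extrapolation."""
    p = observed_order(phi1, phi2, phi3, r)
    return (phi2 - phi1) / (r ** p - 1.0)
```

For a second-order scheme with refinement ratio 2, solutions 1.01, 1.04, 1.16 on fine, medium and coarse grids (exact value 1.0) give R = 0.25, p = 2 and an estimated fine-grid error of 0.01, matching the true error.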
Energy Technology Data Exchange (ETDEWEB)
Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment
1998-03-01
In material testing reactors like the 50 MW JMTR (Japan Material Testing Reactor) of the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions. It is necessary to assess the neutron flux and neutron energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. In order to advance core calculation in the JMTR, the application of MCNP to the assessment of core reactivity and of neutron flux and spectra has been investigated. In this study, in order to reduce calculation time and variance, the results of calculations using the KCODE (criticality) mode and a fixed source, and calculations using the Weight Window technique, were compared. As to the calculation method, the modeling of the whole JMTR core, the conditions for calculation and the adopted variance reduction technique are explained, and the results of the calculations are shown. No significant difference was observed in the calculated neutron fluxes arising from the different modeling of the fuel region in the KCODE and fixed-source calculations. The method of assessing the results of the neutron flux calculation is described. (K.I.)
Directory of Open Access Journals (Sweden)
Thiago C. F. Gomes
2008-01-01
The first computational implementation that automates the procedures involved in the calculation of infrared intensities using the charge-charge flux-dipole flux model is presented. The atomic charges and dipoles from the Quantum Theory of Atoms in Molecules (QTAIM) model were implemented for Morphy98, Gaussian98 and Gaussian03 program outputs; for the ChelpG parameters, only the Gaussian programs are supported. Results of illustrative new calculations for the water, ammonia and methane molecules at the MP2/6-311++G(3d,3p) theoretical level, using the ChelpG and QTAIM/Morphy charges and dipoles, are presented. These results show excellent agreement with analytical results obtained directly at the MP2/6-311++G(3d,3p) level of theory.
Resolving resonances in R-matrix calculations
International Nuclear Information System (INIS)
We present a technique to obtain detailed resonance structures from R-matrix calculations of atomic cross sections for both collisional and radiative processes. The resolving resonances (RR) method relies on the QB method of Quigley-Berrington (Quigley L, Berrington K A and Pelan J 1998 Comput. Phys. Commun. 114 225) to find the position and width of resonances directly from the reactance matrix. Then one determines the symmetry parameters of these features and generates an energy mesh whereby fully resolved cross sections are calculated with minimum computational cost. The RR method is illustrated with the calculation of the photoionization cross sections and the unified recombination rate coefficients of Fe XXIV, O VI, and Fe XVII. The RR method reduces numerical errors arising from unresolved R-matrix cross sections in the computation of synthetic bound-free opacities, thermally averaged collision strengths and recombination rate coefficients. (author)
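The mesh-generation idea behind the RR method, a coarse background grid refined around each resonance so that narrow features are resolved at minimum cost, can be sketched as follows (an illustrative simplification, not the actual RR-method code):

```python
def resonance_mesh(e_min, e_max, resonances, pts_per_width=10, background=50):
    """Build an energy mesh that resolves narrow resonances: a coarse
    uniform background grid plus a fine grid spanning +/- 3 widths around
    each resonance. `resonances` is a list of (position, width) pairs,
    e.g. as found by the QB method."""
    mesh = [e_min + (e_max - e_min) * i / (background - 1)
            for i in range(background)]
    for e0, gamma in resonances:
        n = 6 * pts_per_width                    # points across +/- 3 widths
        for i in range(n + 1):
            e = e0 - 3.0 * gamma + 6.0 * gamma * i / n
            if e_min <= e <= e_max:
                mesh.append(e)
    return sorted(set(mesh))
```

Cross sections are then computed only at these points, concentrating the effort where the resonance profiles actually vary.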
Cloud Computing Vs. Grid Computing
Seyyed Mohsen Hashemi; Amid Khatibi Bardsiri
2012-01-01
Cloud computing has emerged as one of the hottest topics in the field of information technology. Cloud computing is based on several other computing research areas such as HPC, virtualization, utility computing and grid computing. In order to make clear the essence of cloud computing, we propose the characteristics of this area which make cloud computing what it is and distinguish it from other research areas. The service orientation, loose coupling, strong fault tolerance, business model and...
Computer Algebra in Particle Physics
Weinzierl, Stefan
2002-01-01
These lectures, given to graduate students in theoretical particle physics, provide an introduction to the ``inner workings'' of computer algebra systems. Computer algebra has become an indispensable tool for precision calculations in particle physics. A good knowledge of the basics of computer algebra systems allows one to exploit these systems more efficiently.
Argosy 4 - A programme for lattice calculations
International Nuclear Information System (INIS)
This report contains a detailed description of the methods of calculation used in the Argosy 4 computer programme, and of the input requirements and printed results produced by the programme. An outline of the physics of the Argosy method is given. Section 2 describes the lattice calculation, including the burn-up calculation, Section 3 describes the control rod calculation and Section 4 the reflector calculation; in these sections the detailed equations solved by the programme are given. In Section 5 the input requirements are given, and in Section 6 the printed output obtained from an Argosy calculation is described. In Section 7 the principal differences between Argosy 4 and earlier versions of the Argosy programme are noted.
Electrical installation calculations basic
Kitcher, Christopher
2013-01-01
All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3.
Electronics Environmental Benefits Calculator
U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...
Electrical installation calculations advanced
Kitcher, Christopher
2013-01-01
All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installation...
Energy Technology Data Exchange (ETDEWEB)
Rousseau, E
2006-12-15
An electron on helium presents a quantized energy spectrum. The interaction with the environment is considered sufficiently weak to allow the realization of a quantum bit (qubit) using the first two energy levels. The first stage in the realization of this qubit was to trap and control a single electron. This is carried out thanks to a set of micro-fabricated electrodes defining a potential well in which the electron is trapped. With such a sample we are able to trap and detect a variable number of electrons, between one and around twenty. This then allowed us to study the static behaviour of a small number of electrons in a trap. They are expected to crystallize and form structures called Wigner molecules. Such molecules have not yet been observed with electrons above helium. Our results bring circumstantial evidence for Wigner crystallization. We then sought to characterize the qubit more precisely, aiming to carry out a projective readout (depending on the state of the qubit) and a measurement of the relaxation time. The results were obtained by exciting the electron with an incoherent electric field. A clean measurement of the relaxation time would require a coherent electric field. The conclusion thus cannot be final, but it would seem that the relaxation time is shorter than theoretically calculated. That is perhaps due to a measurement of the relaxation between the oscillating states in the trap rather than between the states of the qubit. (author)
Energy Technology Data Exchange (ETDEWEB)
Rivero, Paulo C.M.; Melo, P.F. Frutuoso e [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear
2000-07-01
Nowadays, probabilistic approaches are employed for calculating the reliability of steam generators as a function of defects in their tubes, without any deterministic association with warranty assurance. Unfortunately, probabilistic models produce large failure values, as opposed to the recommendation of the U.S. Code of Federal Regulations that failure probabilities must be as small as possible. In this paper, we propose the association of the deterministic methodology with the probabilistic one. At first, the failure probability evaluation of steam generators follows a probabilistic methodology: to find the failure probability, critical cracks - obtained from Monte Carlo simulations - are limited to lengths in the interval defined by their lower value and the plugging limit, so as to obtain a failure probability of at most 1%. The distribution employed for modeling the observed (measured) cracks considers the same interval. Any length outside this interval is not considered for the probability evaluation: it is handled by the deterministic model. The deterministic approach is to plug the tube when any anomalous crack is detected in it. Such a crack is an observed one placed in the third region of the plot of the logarithmic time derivative of crack length versus the mode I stress intensity factor, while for normal cracks the plugging of tubes occurs in the second region of that plot - if they are dangerous, of course, considering their random evolution. A methodology for identifying anomalous cracks is also presented. (author)
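The probabilistic step, estimating a failure probability from Monte Carlo crack samples restricted to the interval between a lower bound and the plugging limit, can be sketched schematically. The lognormal crack-length distribution and all parameter values below are hypothetical illustrations, not taken from the paper:

```python
import random

def failure_probability(n, a_low, a_plug, a_crit, mu, sigma, seed=0):
    """Schematic Monte Carlo estimate of the probability that a crack drawn
    from a lognormal length distribution, truncated to [a_low, a_plug] as in
    the text, exceeds a (hypothetical) critical length a_crit."""
    rng = random.Random(seed)
    failures = accepted = 0
    while accepted < n:
        a = rng.lognormvariate(mu, sigma)
        if a_low <= a <= a_plug:      # rejection sampling enforces truncation
            accepted += 1
            if a > a_crit:
                failures += 1
    return failures / n
```

Cracks falling outside the interval would be handled deterministically (tube plugging), exactly as the combined methodology above prescribes.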
Calculating reliability measures for ordinal data.
Gamsu, C V
1986-11-01
Establishing the reliability of measures taken by judges is important in both clinical and research work. Calculating the statistic of choice, the kappa coefficient, unfortunately is not a particularly quick and simple procedure. Two much-needed practical tools have been developed to overcome these difficulties: a comprehensive and easily understood guide to the manual calculation of the most complex form of the kappa coefficient, weighted kappa for ordinal data, has been written; and a computer program to run under CP/M, PC-DOS and MS-DOS has been developed. With simple modification the program will also run on a Sinclair Spectrum home computer.
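For reference, weighted kappa can be computed directly from a k x k agreement table; a minimal sketch with the common linear and quadratic disagreement weights (this is a generic formulation, not the manual procedure or the CP/M program described above):

```python
def weighted_kappa(table, weights="linear"):
    """Cohen's weighted kappa from a k x k agreement table
    (rows: judge 1, columns: judge 2). Disagreement weights grow
    linearly or quadratically with the ordinal category distance."""
    k = len(table)
    n = float(sum(sum(row) for row in table))
    row_tot = [sum(table[i][j] for j in range(k)) for i in range(k)]
    col_tot = [sum(table[i][j] for i in range(k)) for j in range(k)]

    def w(i, j):
        d = abs(i - j) / (k - 1)
        return d if weights == "linear" else d * d

    # observed and chance-expected weighted disagreement
    obs = sum(w(i, j) * table[i][j] / n for i in range(k) for j in range(k))
    exp = sum(w(i, j) * row_tot[i] * col_tot[j] / (n * n)
              for i in range(k) for j in range(k))
    return 1.0 - obs / exp
```

Perfect agreement gives kappa = 1, while judges whose ratings are statistically independent give kappa near 0.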
Relaxation Method For Calculating Quantum Entanglement
Tucci, R R
2001-01-01
In a previous paper, we showed how entanglement of formation can be defined as a minimum of the quantum conditional mutual information (a.k.a. quantum conditional information transmission). In classical information theory, the Arimoto-Blahut method is one of the preferred methods for calculating extrema of mutual information. We present a new method akin to the Arimoto-Blahut method for calculating entanglement of formation. We also present several examples computed with a computer program called Causa Comun that implements the ideas of this paper.
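For context, the classical Arimoto-Blahut iteration that the paper's method is akin to looks as follows, here in its standard channel-capacity form rather than the entanglement-of-formation variant the paper develops:

```python
import math

def arimoto_blahut(P, iters=200):
    """Arimoto-Blahut iteration for the capacity (in bits) of a discrete
    memoryless channel with transition matrix P[x][y] = p(y|x)."""
    nx, ny = len(P), len(P[0])
    p = [1.0 / nx] * nx                 # input distribution, start uniform
    cap = 0.0
    for _ in range(iters):
        # output distribution induced by the current input distribution
        q = [sum(p[x] * P[x][y] for x in range(nx)) for y in range(ny)]
        # c[x] = exp( D( P(.|x) || q ) )
        c = [math.exp(sum(P[x][y] * math.log(P[x][y] / q[y])
                          for y in range(ny) if P[x][y] > 0))
             for x in range(nx)]
        Z = sum(p[x] * c[x] for x in range(nx))
        p = [p[x] * c[x] / Z for x in range(nx)]   # multiplicative update
        cap = math.log(Z) / math.log(2.0)          # lower bound, tends to C
    return cap
```

For a noiseless binary channel the iteration returns 1 bit, and for a binary symmetric channel with crossover probability 0.1 it converges to 1 - H(0.1), approximately 0.531 bits.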
Directory of Open Access Journals (Sweden)
Chen X.
2006-11-01
The problem considered here is the evaluation of second-order sum-frequency exciting loads (that is, loads occurring at the pairwise sums of the wave frequencies) on tension leg platforms. These loads are held responsible for the resonant behaviour (in roll, pitch and heave) observed in basin tests, and could significantly reduce the fatigue life of the tendons. Results are first presented for a simplified structure consisting of four vertical cylinders resting on the sea floor; the interest of this geometry is that all calculations can be carried through quasi-analytically. The results illustrate the high degree of interaction between the columns and the slow decay of the second-order diffraction potential with depth. Results are then presented for an actual platform, the Snorre TLP. Tension Leg Platforms (TLPs) are now regarded as a promising technology for the development of deep offshore fields. As the water depth increases, however, their natural periods of heave, roll and pitch tend to increase as well (roughly as the one-half power), and it is not yet clear what the maximum permissible values for these natural periods can be. For the Snorre TLP, for instance, they are only about 2.5 seconds, which seems to be sufficiently low since there is very limited free wave energy at such periods. Model tests, however, have shown some resonant response in sea states with peak periods of about 5 seconds. Often referred to as springing, this resonant motion can severely affect the fatigue life of tethers and increase their design loads. In order to calculate this springing motion at the design stage, it is necessary to identify and evaluate both the exciting loads and the mechanisms of energy dissipation. With the help of the French Norwegian Foundation a joint effort was...
Calculation of resonance integral for fuel cluster
International Nuclear Information System (INIS)
The procedure for calculating the shielding correction, formulated in the previous paper [6], was extended and applied to a cluster of cylindrical rods. The same analytical method as in the previous paper was applied. A combination of the Gauss method with the method of Almgren and Porn, used for solving the same type of integral, was employed to calculate the geometry functions. The CLUSTER code was written for the ZUSE-Z-23 computer to calculate the shielding corrections for pairs of fuel rods in the cluster. Computing time for one pair of fuel rods depends on the number of closely placed rods; for two closely placed rods it is about 3 hours. Calculations were done for clusters containing 7 and 19 UO2 rods. Results show that the calculated values of the resonance integrals are somewhat higher than the values obtained by the Helstrand empirical formula. Taking into account the results for two rods from the previous paper, it can be noted that the calculated and empirical values for clusters with 2 and 7 rods are in agreement, since the deviations do not exceed the limits of experimental error (±2%). In the case of the larger cluster with 19 rods, the deviations are higher than the experimental error. The calculated values most probably exceed the empirical ones because, in this paper, the shielding correction is calculated only in the region up to 1 keV.
Three dimensional diffusion calculations of nuclear reactors
International Nuclear Information System (INIS)
This work deals with the three dimensional calculation of nuclear reactors using the code TRITON. The purposes of the work were to perform three-dimensional computations of the core of the Soreq nuclear reactor and of the power reactor ZION and to validate the TRITON code. Possible applications of the TRITON code in Soreq reactor calculations and in power reactor research are suggested. (H.K.)
Calculators and Polynomial Evaluation.
Weaver, J. F.
The intent of this paper is to suggest and illustrate how electronic hand-held calculators, especially non-programmable ones with limited data-storage capacity, can be used to advantage by students in one particular aspect of work with polynomial functions. The basic mathematical background upon which calculator application is built is summarized.…
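The polynomial-evaluation technique best suited to a non-programmable calculator with limited data storage is Horner's scheme, which needs only a single running total; a sketch of the keystroke pattern (the paper's specific exercises are not reproduced here):

```python
def horner(coeffs, x):
    """Evaluate a polynomial using Horner's scheme, keeping one running
    total -- the pattern suited to a calculator with a single memory
    register. `coeffs` lists coefficients from the highest-degree term
    down, e.g. [2, -3, 1] for 2x^2 - 3x + 1."""
    total = 0.0
    for a in coeffs:
        total = total * x + a   # one multiply and one add per coefficient
    return total
```

For a degree-n polynomial this takes n multiplications and n additions, versus roughly n(n+1)/2 multiplications if each power of x is recomputed term by term, which is exactly why it suits a machine with no power key and one memory.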