Sample records for calculations computer

  1. Computational chemistry: Making a bad calculation (United States)

    Winter, Arthur


Computations of the energetics and mechanism of the Morita-Baylis-Hillman reaction are "not even wrong" when compared with experiments. While computational abstinence may be the purest way to calculate challenging reaction mechanisms, taking prophylactic measures to avoid regrettable outcomes may be more realistic.

  2. Calculating True Computer Access in Schools. (United States)

    Slovacek, Simeon P.


    Discusses computer access in schools; explains how to determine sufficient quantities of computers; and describes a formula that illustrates the relationship between student access hours, the number of computers in a school, and the number of instructional hours in a typical school week. (six references) (LRW)
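As a sketch of the kind of relationship described (the record does not reproduce the exact formula, so the form below is an assumption), weekly access hours per student can be estimated from the number of computers and the instructional week:

```python
# Hypothetical access formula: each computer is usable for every instructional
# hour in the week, and that pool of machine-hours is shared by all students.
def access_hours_per_student(computers, instructional_hours_per_week, students):
    return computers * instructional_hours_per_week / students

# A school with 30 computers, a 30-hour instructional week, and 600 students:
print(access_hours_per_student(30, 30, 600))  # -> 1.5 hours per student per week
```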

  3. Classical MD calculations with parallel computers

    Energy Technology Data Exchange (ETDEWEB)

    Matsumoto, Mitsuhiro [Nagoya Univ. (Japan)]


    We have developed parallel computation codes for the classical molecular dynamics (MD) method. To run them on workstation clusters as well as parallel supercomputers, we use the MPI (Message Passing Interface) library for distributed-memory computers. Two algorithms are compared: (1) the particle parallelism technique, which is easy to implement and effective for a rather small number of processors; and (2) the region parallelism technique, which takes some time to implement but remains effective even for many nodes. (J.P.N.)
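A minimal Python sketch (not the authors' MPI code) of how the two decompositions assign particles to processes; the function names and slab geometry are illustrative assumptions:

```python
# Contrast of the two decompositions for distributing N particles over P processes.

def particle_decomposition(n_particles, n_procs):
    """Particle parallelism: each rank owns a fixed subset of particle indices."""
    return {rank: [i for i in range(n_particles) if i % n_procs == rank]
            for rank in range(n_procs)}

def region_decomposition(positions, box_length, n_procs):
    """Region parallelism: each rank owns the particles currently in its slab."""
    slab = box_length / n_procs
    owners = {rank: [] for rank in range(n_procs)}
    for i, x in enumerate(positions):
        owners[min(int(x // slab), n_procs - 1)].append(i)
    return owners

print(particle_decomposition(6, 2))                       # fixed, interleaved ownership
print(region_decomposition([0.1, 0.9, 0.5, 0.4], 1.0, 2)) # ownership follows position
```

Particle parallelism keeps ownership fixed regardless of motion; region parallelism must reassign particles as they cross slab boundaries, which is why it costs more to implement but scales better to many nodes.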

  4. Computer program calculates transonic velocities in turbomachines (United States)

    Katsanis, T.


    Computer program, TSONIC, combines velocity gradient and finite difference methods to obtain numerical solution for ideal, transonic, compressible flow for axial, radial, or mixed flow cascade of turbomachinery blades.

  5. CACTUS: Calculator and Computer Technology User Service. (United States)

    Hyde, Hartley


    Presents an activity in which students use computer-based spreadsheets to find out how much grain should be added to a chess board when a grain of rice is put on the first square, the amount is doubled for the next square, and the chess board is covered. (ASK)
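The activity's arithmetic is a geometric series; the spreadsheet column of doublings can be checked in a few lines of Python:

```python
# Doubling from one grain on square 1 to square 64 gives 2**0 + 2**1 + ... + 2**63.
total = sum(2**square for square in range(64))
print(total)  # 18446744073709551615 grains, i.e. 2**64 - 1
```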

  6. Newnes circuit calculations pocket book with computer programs

    CERN Document Server

    Davies, Thomas J


    Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book is comprised of 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.

  7. Gravitation Field Calculations on a Dynamic Lattice by Distributed Computing (United States)

    Mähönen, Petri; Punkka, Veikko

    A new method of numerically calculating the time evolution of a gravitational field in General Relativity is introduced. Vierbein (tetrad) formalism, a dynamic lattice and massively parallelized computation are suggested, as they are expected to speed up the calculations considerably and facilitate the solution of problems previously considered too hard to solve, such as the time evolution of a system of two or more black holes or the structure of wormholes.

  9. Computer program for equilibrium calculation and diffusion simulation

    Institute of Scientific and Technical Information of China (English)


    A computer program called TKCALC (thermodynamic and kinetic calculation) has been successfully developed for the purpose of phase equilibrium calculation and diffusion simulation in ternary substitutional alloy systems. The program was subsequently applied to calculate the isothermal sections of the Fe-Cr-Ni system and predict the concentration profiles of two γ/γ single-phase diffusion couples in the Ni-Cr-Al system. The results are in excellent agreement with the THERMO-CALC and DICTRA software packages. Detailed mathematical derivation of some important formulae involved is also elaborated.

  10. Computer program for calculating the daylight level in a room

    NARCIS (Netherlands)

    Jordaans, A.A.


    A computer program has been developed that calculates the total quantity of daylight provided to an arbitrary place in a room by direct incident daylight, by reflected daylight from opposite buildings and ground, and by interreflected daylight from walls, ceilings and floors. Input data include the

  11. Computer program calculates velocities and streamlines in turbomachines (United States)

    Katsanis, T.


    Computer program calculates the velocity distribution and streamlines over widely separated blades of turbomachines. It gives the solutions of a two dimensional, subsonic, compressible nonviscous flow problem for a rotating or stationary circular cascade of blades on a blade-to-blade surface of revolution.

  12. Development of a computational methodology for internal dose calculations

    CERN Document Server

    Yoriyaz, H


    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body and a more precise tool for the radiation transport simulation. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. In order to utilize the segmented human anatomy as a computational model for the simulation of radiation transport, an interface program, SCMS, was developed to build the geometric configurations for the phantom through the use of tomographic images. This procedure makes it possible to calculate not only average dose values but also the spatial distribution of dose in regions of interest. With the present methodology, absorbed fractions for photons and electrons in various organs of the Zubal segmented phantom were calculated and compared to those reported for the mathematical phanto...

  13. A computational scheme usable for calculating the plume backflow region (United States)

    Cooper, B. P., Jr.


    The effects of the nozzle wall boundary layer on the plume flowfield are neglected in the majority of computational schemes which exist for the calculation of rocket engine exhaust plume flowfields. This neglect, which is unimportant in many applications, becomes unacceptable for applications where a surface which can be adversely affected by plume impingement forces, heating, or contamination is located behind the nozzle exit plane in what is called the 'plume backflow region'. The flow in this region originates in, and is highly affected by, the nozzle wall boundary layer. The inclusion of the effects of the boundary layer in the calculations is required for an appropriate determination of the flowfield properties within this region. A description is presented of the results of modifications of a method-of-characteristics computer program. The modifications were made to include the effects of the nozzle wall boundary layer on the plume flowfield. A comparison of computed and experimental data indicates that the employed computer program may be a useful tool for calculating the entire plume flowfield for liquid propellant rocket engines.

  14. A computational framework for automation of point defect calculations

    Energy Technology Data Exchange (ETDEWEB)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei; Lany, Stephan; Stevanović, Vladan


    A complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory has been developed. The framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. The package provides the capability to compute widely-accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology.

  15. Computer program for calculating technological parameters of underground transport

    Energy Technology Data Exchange (ETDEWEB)

    Kreimer, E.L. (DonUGI (USSR))


    Reports on an analytical method developed at DonUGI for determining technological parameters and indices of mine haulage performance. A calculation program intended for personal computers and minicomputers is described and designed especially to consider haulage by electric locomotives. The program can be used in an interactive manner and it enables haulage systems of arbitrary complexity to be calculated in 2-4 minutes. The program also allows the effect of haulage on working face output to be evaluated quantitatively. Haulage systems of all mines of the Selidovugol' association were analyzed with the aid of the program in 1988; results for the Ukraina mine are presented in tables.

  16. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)

    B P Kochurov; A P Knyazev; A Yu Kwaretzkheli


    Some numerical methods for reactor cell, sub-critical systems and 3D models of nuclear reactors are presented. The methods are developed for steady states and space–time calculations. Computer code TRIFON solves space-energy problem in (, ) systems of finite height and calculates heterogeneous few-group matrix parameters of reactor cells. These parameters are used as input data in the computer code SHERHAN solving the 3D heterogeneous reactor equation for steady states and 3D space–time neutron processes simulation. Modification of TRIFON was developed for the simulation of space–time processes in sub-critical systems with external sources. An option of SHERHAN code for the system with external sources is under development.

  17. Efficient algorithm and computing tool for shading calculation

    Directory of Open Access Journals (Sweden)

    Chanadda Pongpattana


    The window is always part of a building envelope, and it contributes to the architectural elegance of a building. Despite the major advantage of daylight utilization, a window inevitably allows heat from solar radiation to penetrate into a building. Hence, a window must be designed with careful consideration in order to achieve an energy-conscious design in which daylight utilization and heat gain are optimized. This paper presents the validation of a vectorial formulation of shading calculation by comparing computational results with experimental ones for three shading devices: overhang, fin, and eggcrate. A computational algorithm and interactive computer software for computing the shadow were developed. The software was designed to be user-friendly and capable of presenting shadow profiles graphically and computing the corresponding shaded areas for a given window system. Software simulation results were found to be in excellent agreement with experimental results: the average percentage error is approximately 0.25%, 0.52%, and 0.21% for the overhang, fin, and eggcrate, respectively.

  18. Computational aspects of sensitivity calculations in transient structural analysis (United States)

    Greene, William H.; Haftka, Raphael T.


    A key step in the application of formal automated design techniques to structures under transient loading is the calculation of sensitivities of response quantities to the design parameters. This paper considers structures with general forms of damping acted on by general transient loading and addresses issues of computational errors and computational efficiency. The equations of motion are reduced using the traditional basis of vibration modes and then integrated using a highly accurate, explicit integration technique. A critical point constraint formulation is used to place constraints on the magnitude of each response quantity as a function of time. Three different techniques for calculating sensitivities of the critical point constraints are presented. The first two are based on the straightforward application of the forward and central difference operators, respectively. The third is based on explicit differentiation of the equations of motion. Condition errors, finite difference truncation errors, and modal convergence errors for the three techniques are compared by applying them to a simple five-span-beam problem. Sensitivity results are presented for two different transient loading conditions and for both damped and undamped cases.

  19. Color calculations for and perceptual assessment of computer graphic images

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, G.W.


    Realistic image synthesis involves the modelling of an environment in accordance with the laws of physics and the production of a final simulation that is perceptually acceptable. To be considered a scientific endeavor, synthetic image generation should also include the final step of experimental verification. This thesis concentrates on the color calculations that are inherent in the production of the final simulation and on the perceptual assessment of the computer graphic images that result. The fundamental spectral sensitivity functions that are active in the human visual system are introduced and are used to address color-blindness issues in computer graphics. A digitally controlled color television monitor is employed to successfully implement both the Farnsworth-Munsell 100 Hue test and a new color vision test that yields more accurate diagnoses. Images that simulate color-blind vision are synthesized and are used to evaluate color scales for data display. Gaussian quadrature is used with a set of opponent fundamentals to select the wavelengths at which to perform synthetic image generation.

  20. Computing NLTE Opacities -- Node Level Parallel Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Holladay, Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)]


    Presentation. The goal is to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability, compute opacities, and study science problems. The library uses efficient algorithms that expose many levels of parallelism and exhibit good memory access patterns for use on advanced architectures. It is portable to multiple types of hardware, including multicore processors, manycore processors such as KNL, and GPUs, and is easily coupled to radiation hydrodynamics and thermal radiative transfer codes.

  1. Stored energy in transformers: calculation by a computer program. [Computer code TFORMR calculates and prints the stored energy in a transformer with an iron core

    Energy Technology Data Exchange (ETDEWEB)

    Willmann, P.A.; Hooper, E.B. Jr.


    A computer program was written to calculate the stored energy in a transformer. This result easily yields the inductance and leakage reactance of the transformer and is estimated to be accurate to better than 5 percent. The program was used to calculate the leakage reactance of the main transformer for the LLL neutral beam High Voltage Test Stand.
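The stored energy "easily yields the inductance" presumably via the standard relation E = ½LI²; the inversion below is a generic sketch under that assumption, not TFORMR itself:

```python
def inductance_from_energy(stored_energy_joules, current_amps):
    # Invert E = 0.5 * L * I**2 for the inductance L.
    return 2.0 * stored_energy_joules / current_amps**2

print(inductance_from_energy(50.0, 10.0))  # 50 J stored at 10 A -> 1.0 H
```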

  2. Computer code for double beta decay QRPA based calculations

    Energy Technology Data Exchange (ETDEWEB)

    Barbero, C. A.; Mariano, A. [Departamento de Física, Facultad de Ciencias Exactas, Universidad Nacional de La Plata, La Plata, Argentina and Instituto de Física La Plata, CONICET, La Plata (Argentina)]; Krmpotić, F. [Instituto de Física La Plata, CONICET, La Plata, Argentina and Instituto de Física Teórica, Universidade Estadual Paulista, São Paulo (Brazil)]; Samana, A. R.; Ferreira, V. dos Santos [Departamento de Ciências Exatas e Tecnológicas, Universidade Estadual de Santa Cruz, BA (Brazil)]; Bertulani, C. A. [Department of Physics, Texas A and M University-Commerce, Commerce, TX (United States)]


    The computer code developed by our group some years ago for the evaluation of the nuclear matrix elements involved in neutrino-nucleus reactions, muon capture, and β± processes, within the QRPA and PQRPA nuclear structure models, is extended to also include nuclear double beta decay.

  3. Computer program for calculating thermodynamic and transport properties of fluids (United States)

    Hendricks, R. C.; Braon, A. K.; Peller, I. C.


    Computer code has been developed to provide thermodynamic and transport properties of liquid argon, carbon dioxide, carbon monoxide, fluorine, helium, methane, neon, nitrogen, oxygen, and parahydrogen. Equation of state and transport coefficients are updated and other fluids added as new material becomes available.

  4. A FORTRAN Computer Program for Q Sort Calculations (United States)

    Dunlap, William R.


    The Q Sort method is a rank order procedure. A FORTRAN program is described which calculates a total value for any group of cases for the items in the Q Sort, and rank orders the items according to this composite value. (Author/JKS)
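A minimal Python analogue (not the original FORTRAN) of the described procedure: total each item's value over a group of cases, then rank order the items by the composite value:

```python
def rank_items(case_scores):
    """case_scores: list of per-case dicts mapping item -> assigned value.
    Returns the items ordered from highest to lowest composite value."""
    totals = {}
    for case in case_scores:
        for item, value in case.items():
            totals[item] = totals.get(item, 0) + value
    return sorted(totals, key=totals.get, reverse=True)

cases = [{"A": 3, "B": 1, "C": 2}, {"A": 2, "B": 3, "C": 1}]
print(rank_items(cases))  # -> ['A', 'B', 'C']  (composite totals 5, 4, 3)
```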

  5. [Correlation between ventricular volume calculated manually and by computer]. (United States)

    Gil Moreno, M; Martínez Ríos, M; Grande, F; Cisneros, F; García Moreira, C; Soní, J


    We present a program for ventricular volume measurement in which an area-length procedure and a digital computer were used. The results were compared with those obtained by the manual method using the same formula. Statistical correlation analysis of these results showed a high index of 0.95 for the end-diastolic volumes obtained by both techniques, while the index reached 0.99 for the end-systolic volumes and the ejection fraction.
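The volumes compared above feed the standard ejection-fraction relation EF = (EDV - ESV) / EDV, sketched here with illustrative numbers (not data from the study):

```python
def ejection_fraction(end_diastolic_ml, end_systolic_ml):
    # Fraction of the end-diastolic volume ejected during systole.
    return (end_diastolic_ml - end_systolic_ml) / end_diastolic_ml

print(ejection_fraction(120.0, 50.0))  # -> 0.583..., i.e. about 58%
```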

  6. Prospective Teachers' Views on the Use of Calculators with Computer Algebra System in Algebra Instruction (United States)

    Ozgun-Koca, S. Ash


    Although growing numbers of secondary school mathematics teachers and students use calculators to study graphs, they mainly rely on paper-and-pencil when manipulating algebraic symbols. However, the Computer Algebra Systems (CAS) on computers or handheld calculators create new possibilities for teaching and learning algebraic manipulation. This…

  7. Computational approach for calculating bound states in quantum field theory (United States)

    Lv, Q. Z.; Norris, S.; Brennan, R.; Stefanovich, E.; Su, Q.; Grobe, R.


    We propose a nonperturbative approach to calculate bound-state energies and wave functions for quantum field theoretical models. It is based on the direct diagonalization of the corresponding quantum field theoretical Hamiltonian in an effectively discretized and truncated Hilbert space. We illustrate this approach for a Yukawa-like interaction between fermions and bosons in one spatial dimension and show where it agrees with the traditional method based on the potential picture and where it deviates due to recoil and radiative corrections. This method permits us also to obtain some insight into the spatial characteristics of the distribution of the fermions in the ground state, such as the bremsstrahlung-induced widening.

  8. Algorithms for computer algebra calculations in spacetime; 1, the calculation of curvature

    CERN Document Server

    Pollney, Denis; Musgrave, Peter; Santosuosso, Kevin; Lake, Kayll


    We examine the relative performance of algorithms for the calculation of curvature in spacetime. The classical coordinate component method is compared to two distinct versions of the Newman-Penrose tetrad approach for a variety of spacetimes, and distinct coordinates and tetrads for a given spacetime. Within the system GRTensorII, we find that there is no single preferred approach on the basis of speed. Rather, we find that the fastest algorithm is the one that minimizes the amount of time spent on simplification. This means that arguments concerning the theoretical superiority of an algorithm need not translate into superior performance when applied to a specific spacetime calculation. In all cases it is the global simplification strategy which is of paramount importance. An appropriate simplification strategy can change an intractable problem into one which can be solved essentially instantaneously.

  9. Performing three-dimensional neutral particle transport calculations on terascale computers

    Energy Technology Data Exchange (ETDEWEB)

    Woodward, C S; Brown, P N; Chang, B; Dorr, M R; Hanebutte, U R


    A scalable, parallel code system to perform neutral particle transport calculations in three dimensions is presented. To utilize the hyper-cluster architecture of emerging terascale computers, the parallel code combines MPI message passing with complementary parallel paradigms. The code's capabilities are demonstrated by a shielding calculation containing over 14 billion unknowns. This calculation was accomplished on the IBM SP 'ASCI Blue-Pacific' computer located at Lawrence Livermore National Laboratory (LLNL).

  10. A computer code for beam optics calculation--third order approximation

    Institute of Scientific and Technical Information of China (English)

    LÜ Jianqin; LI Jinhai


    To accurately calculate beam transport in ion optical systems, a beam dynamics computer program of third-order approximation has been developed. Many conventional optical elements are incorporated in the program. Particle distributions of uniform type or Gaussian type in (x, y, z) 3D ellipses can be selected by the users. Optimization procedures are provided to make the calculations reasonable and fast. The calculated results can be graphically displayed on the computer monitor.

  11. Easy-to-use application programs for decay heat and delayed neutron calculations on personal computers

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, Kazuhiro [Nagoya Univ. (Japan)]


    Application programs for personal computers have been developed to calculate the decay heat power and delayed neutron activity from fission products. The main programs can be used on any computer, from personal computers to mainframes, because their source code is written in Fortran. These programs have user-friendly interfaces so that they can be used easily, not only for research activities but also for educational purposes. (author)

  12. Calculating absorption shifts for retinal proteins: computational challenges. (United States)

    Wanko, M; Hoffmann, M; Strodel, P; Koslowski, A; Thiel, W; Neese, F; Frauenheim, T; Elstner, M


    Rhodopsins can modulate the optical properties of their chromophores over a wide range of wavelengths. The mechanism for this spectral tuning is based on the response of the retinal chromophore to external stress and the interaction with the charged, polar, and polarizable amino acids of the protein environment and is connected to its large change in dipole moment upon excitation, its large electronic polarizability, and its structural flexibility. In this work, we investigate the accuracy of computational approaches for modeling changes in absorption energies with respect to changes in geometry and applied external electric fields. We illustrate the high sensitivity of absorption energies on the ground-state structure of retinal, which varies significantly with the computational method used for geometry optimization. The response to external fields, in particular to point charges which model the protein environment in combined quantum mechanical/molecular mechanical (QM/MM) applications, is a crucial feature, which is not properly represented by previously used methods, such as time-dependent density functional theory (TDDFT), complete active space self-consistent field (CASSCF), and Hartree-Fock (HF) or semiempirical configuration interaction singles (CIS). This is discussed in detail for bacteriorhodopsin (bR), a protein which blue-shifts retinal gas-phase excitation energy by about 0.5 eV. As a result of this study, we propose a procedure which combines structure optimization or molecular dynamics simulation using DFT methods with a semiempirical or ab initio multireference configuration interaction treatment of the excitation energies. Using a conventional QM/MM point charge representation of the protein environment, we obtain an absorption energy for bR of 2.34 eV. This result is already close to the experimental value of 2.18 eV, even without considering the effects of protein polarization, differential dispersion, and conformational sampling.

  13. Shielding Calculations for Positron Emission Tomography - Computed Tomography Facilities

    Energy Technology Data Exchange (ETDEWEB)

    Baasandorj, Khashbayar [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)]; Yang, Jeongseon [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)]


    Integrated PET-CT has been shown to be more accurate for lesion localization and characterization than PET or CT alone, or than results obtained from PET and CT separately and interpreted side by side or following software-based fusion of the PET and CT datasets. At the same time, PET-CT scans can result in high patient and staff doses; therefore, careful site planning and shielding of this imaging modality have become challenging issues in the field. In Mongolia, the introduction of PET-CT facilities is currently being considered in many hospitals. Thus, additional regulatory legislation for nuclear and radiation applications is necessary, for example, in regulating licensee processes and ensuring radiation safety during operations. This paper aims to determine appropriate PET-CT shielding designs using numerical formulas and computer code. Since there are presently no PET-CT facilities in Mongolia, contact was made with radiological staff at the Nuclear Medicine Center of the National Cancer Center of Mongolia (NCCM) to obtain information about facilities where the introduction of PET-CT is being considered. Well-designed facilities do not require additional shielding, which should help cut down the overall costs of PET-CT installation. According to the results of this study, the barrier thicknesses of the NCCM building are not sufficient to keep radiation doses within the limits.

  14. Direct Calculation of Protein Fitness Landscapes through Computational Protein Design. (United States)

    Au, Loretta; Green, David F


    Naturally selected amino-acid sequences or experimentally derived ones are often the basis for understanding how protein three-dimensional conformation and function are determined by primary structure. Such sequences for a protein family comprise only a small fraction of all possible variants, however, representing the fitness landscape with limited scope. Explicitly sampling and characterizing alternative, unexplored protein sequences would directly identify fundamental reasons for sequence robustness (or variability), and we demonstrate that computational methods offer an efficient mechanism toward this end, on a large scale. The dead-end elimination and A* search algorithms were used here to find all low-energy single mutant variants, and corresponding structures of a G-protein heterotrimer, to measure changes in structural stability and binding interactions to define a protein fitness landscape. We established consistency between these algorithms with known biophysical and evolutionary trends for amino-acid substitutions, and could thus recapitulate known protein side-chain interactions and predict novel ones.

  15. Computational method for general multicenter electronic structure calculations. (United States)

    Batcho, P F


    Here a three-dimensional fully numerical (i.e., chemical basis-set free) method [P. F. Batcho, Phys. Rev. A 57, 6 (1998)] is formulated and applied to the calculation of the electronic structure of general multicenter Hamiltonian systems. The numerical method is presented and applied to the solution of Schrödinger-type operators, where a given number of nuclear point singularities are present in the potential field. The numerical method combines the rapid "exponential" convergence rates of modern spectral methods with the multiresolution flexibility of finite element methods, and can be viewed as an extension of the spectral element method. The approximation of cusps in the wave function and the formulation of multicenter nuclear singularities are efficiently dealt with by the combination of a coordinate transformation and a piecewise variational spectral approximation. The complete system can be efficiently inverted by established iterative methods for elliptic partial differential equations; an application of the method is presented for atomic, diatomic, and triatomic systems, and comparisons are made to the literature when possible. In particular, local density approximations are studied within the context of Kohn-Sham density functional theory, and are presented for selected subsets of atomic and diatomic molecules as well as the ozone molecule.

  16. Fast calculation method of computer-generated cylindrical hologram using wave-front recording surface. (United States)

    Zhao, Yu; Piao, Mei-lan; Li, Gang; Kim, Nam


    Fast calculation method for a computer-generated cylindrical hologram (CGCH) is proposed. The method consists of two steps: the first step is a calculation of a virtual wave-front recording surface (WRS), which is located between the 3D object and CGCH. In the second step, in order to obtain a CGCH, we execute the diffraction calculation based on the fast Fourier transform (FFT) from the WRS to the CGCH, which are in the same concentric arrangement. The computational complexity is dramatically reduced in comparison with direct integration method. The simulation results confirm that our proposed method is able to improve the computational speed of CGCH.

  17. Simple and fast cosine approximation method for computer-generated hologram calculation. (United States)

    Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Arai, Daisuke; Ito, Tomoyoshi


    The cosine function is a computationally heavy operation in computer-generated hologram (CGH) calculation; therefore, it is commonly implemented by substitution methods such as a look-up table. However, the computational load and required memory space of such methods are still large. In this study, we propose a simple and fast cosine function approximation method for CGH calculation. As a result, we succeeded in creating CGHs of sufficient quality while making the calculation up to 1.6 times as fast as a cosine look-up table in a CPU implementation.
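For context, the look-up-table substitution that the paper improves on can be sketched generically as follows (this is not the authors' approximation method; the table size and nearest-entry indexing are illustrative assumptions):

```python
import math

# Precompute cosine on a fixed grid over one period, then index into the table.
TABLE_SIZE = 4096
TABLE = [math.cos(2 * math.pi * i / TABLE_SIZE) for i in range(TABLE_SIZE)]

def cos_lut(x):
    """Nearest-entry look-up-table cosine for any real argument x."""
    idx = round(x / (2 * math.pi) * TABLE_SIZE) % TABLE_SIZE
    return TABLE[idx]

# Worst-case error over a sample of arguments stays below the step-induced
# bound of about pi / TABLE_SIZE (~8e-4 for a 4096-entry table).
err = max(abs(cos_lut(x / 100) - math.cos(x / 100)) for x in range(1000))
print(err)
```

The trade-off the abstract describes: the table avoids evaluating `math.cos` per sample, but its memory footprint and lookups still dominate tight CGH kernels, motivating a cheaper closed-form approximation.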

  18. Radiation therapy calculations using an on-demand virtual cluster via cloud computing

    CERN Document Server

    Keyes, Roy W; Arnold, Dorian; Luan, Shuang


    Computer hardware costs are the limiting factor in producing highly accurate radiation dose calculations on convenient time scales. Because of this, large-scale, full Monte Carlo simulations and other resource intensive algorithms are often considered infeasible for clinical settings. The emerging cloud computing paradigm promises to fundamentally alter the economics of such calculations by providing relatively cheap, on-demand, pay-as-you-go computing resources over the Internet. We believe that cloud computing will usher in a new era, in which very large scale calculations will be routinely performed by clinics and researchers using cloud-based resources. In this research, several proof-of-concept radiation therapy calculations were successfully performed on a cloud-based virtual Monte Carlo cluster. Performance evaluations were made of a distributed processing framework developed specifically for this project. The expected 1/n performance was observed with some caveats. The economics of cloud-based virtual...

  19. SAMDIST A Computer Code for Calculating Statistical Distributions for R-Matrix Resonance Parameters

    CERN Document Server

    Leal, L C


    The SAMDIST computer code has been developed to calculate distributions of resonance parameters of the Reich-Moore R-matrix type. The program assumes the parameters are in a format compatible with that of the multilevel R-matrix code SAMMY. SAMDIST calculates the energy-level spacing distribution, the resonance-width distribution, and the long-range correlation of the energy levels. Results of these calculations are presented in both graphic and tabular forms.

  20. Increasing the computational speed of flash calculations with applications for compositional, transient simulations

    DEFF Research Database (Denmark)

    Rasmussen, Claus P.; Krejbjerg, Kristian; Michelsen, Michael Locht


    Approaches are presented for reducing the computation time spent on flash calculations in compositional, transient simulations. In a conventional flash calculation, the majority of the simulation time is spent on stability analysis, even for systems far into the single-phase region. A criterion has been implemented for deciding when it is justified to bypass the stability analysis. With the implementation of the developed time-saving initiatives, it has been shown for a number of compositional, transient pipeline simulations that a reduction of the computation time spent on flash calculations...

  1. Efficient Computation of Power, Force, and Torque in BEM Scattering Calculations

    CERN Document Server

    Reid, M T Homer


    We present concise, computationally efficient formulas for several quantities of interest -- including absorbed and scattered power, optical force (radiation pressure), and torque -- in scattering calculations performed using the boundary-element method (BEM) [also known as the method of moments (MOM)]. Our formulas compute the quantities of interest directly from the BEM surface currents, with no need ever to compute the scattered electromagnetic fields. We derive our new formulas and demonstrate their effectiveness by computing power, force, and torque in a number of example geometries. Free, open-source software implementations of our formulas are available for download online.

  2. HADOC: a computer code for calculation of external and inhalation doses from acute radionuclide releases

    Energy Technology Data Exchange (ETDEWEB)

    Strenge, D.L.; Peloquin, R.A.


    The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model, with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.

  3. Computer program for calculating flow parameters and power requirements for cryogenic wind tunnels (United States)

    Dress, D. A.


    A computer program has been written that performs the flow parameter calculations for cryogenic wind tunnels which use nitrogen as a test gas. The flow parameters calculated include static pressure, static temperature, compressibility factor, ratio of specific heats, dynamic viscosity, total and static density, velocity, dynamic pressure, mass-flow rate, and Reynolds number. Simplifying assumptions have been made so that the calculations of Reynolds number, as well as the other flow parameters, can be made on relatively small desktop digital computers. The program, which also includes various power calculations, has been developed to the point where it has become a very useful tool for the users and possible future designers of fan-driven continuous-flow cryogenic wind tunnels.
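
    The kinds of relations such a program evaluates can be sketched in a few lines. The sketch below assumes ideal-gas nitrogen and a Sutherland viscosity law with illustrative textbook constants; it is not the NASA program's model, which includes real-gas corrections:

    ```python
    import math

    GAMMA = 1.4   # ratio of specific heats for N2 (ideal-gas assumption)
    R_N2 = 296.8  # specific gas constant for nitrogen, J/(kg*K)

    def flow_parameters(mach: float, p_static: float, t_static: float, chord: float):
        """Return density, velocity, dynamic pressure, and Reynolds number."""
        rho = p_static / (R_N2 * t_static)      # ideal-gas density, kg/m^3
        a = math.sqrt(GAMMA * R_N2 * t_static)  # speed of sound, m/s
        v = mach * a
        q = 0.5 * rho * v * v                   # dynamic pressure, Pa
        # Sutherland's law for N2 (mu0 at T0 = 273.15 K, S = 107 K; textbook values).
        mu = 1.663e-5 * (t_static / 273.15) ** 1.5 * (273.15 + 107.0) / (t_static + 107.0)
        reynolds = rho * v * chord / mu
        return rho, v, q, reynolds

    # Cooling the gas raises density and lowers viscosity, so the Reynolds
    # number at Mach 0.8 and 1 atm grows substantially at cryogenic temperature.
    rho_c, v_c, q_c, re_cold = flow_parameters(0.8, 101325.0, 100.0, 0.25)
    _, _, _, re_warm = flow_parameters(0.8, 101325.0, 300.0, 0.25)
    ```

    This Reynolds-number gain without increasing dynamic pressure is the central motivation for cryogenic tunnels.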

  4. Computer program to calculate three-dimensional boundary layer flows over wings with wall mass transfer (United States)

    Mclean, J. D.; Randall, J. L.


    A system of computer programs for calculating three-dimensional transonic flow over wings, including details of the three-dimensional viscous boundary-layer flow, was developed. The flow is calculated in two overlapping regions: an outer potential-flow region, and a boundary-layer region in which the first-order, three-dimensional boundary-layer equations are numerically solved. A consistent matching of the two solutions is achieved iteratively, thus taking into account viscous-inviscid interaction. For the inviscid outer-flow calculations, the Jameson-Caughey transonic wing program FLO 27 is used, and the boundary-layer calculations are performed by a finite-difference boundary-layer prediction program. Interface programs provide communication between the two basic flow analysis programs. Computed results are presented for the NASA F8 research wing, both with and without distributed surface suction.

  5. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision. (United States)

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong


    The spectral power distributions (SPDs) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or solar zenith angles) and under different atmospheric conditions is of interest to physically based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be applied directly in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of these applications demonstrate that our calculation method has practical value in computer vision. It establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions.

  6. A FORTRAN computer code for calculating flows in multiple-blade-element cascades (United States)

    Mcfarland, E. R.


    A solution technique has been developed for solving the multiple-blade-element, surface-of-revolution, blade-to-blade flow problem in turbomachinery. The calculation solves approximate flow equations which include the effects of compressibility, radius change, blade-row rotation, and variable stream sheet thickness. An integral equation solution (i.e., panel method) is used to solve the equations. A description of the computer code and computer code input is given in this report.

  7. Fast calculation of computer-generated hologram using run-length encoding based recurrence relation. (United States)

    Nishitsuji, Takashi; Shimobaba, Tomoyoshi; Kakue, Takashi; Ito, Tomoyoshi


    Computer-generated holograms (CGHs) can be generated by superimposing zoneplates. A zoneplate is a grating that can concentrate incident light into a point. Since a zoneplate has circular symmetry, we previously reported an algorithm that rapidly generates a zoneplate by drawing concentric circles using computer graphics techniques. However, that algorithm required random memory access, which degraded its computational efficiency. In this study, we propose a fast CGH generation algorithm without random memory access, using a run-length encoding (RLE) based recurrence relation. As a result, we succeeded in reducing the calculation time by 88% compared with that of the previous work.
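
    For reference, the baseline computation being accelerated — superimposing one Fresnel zoneplate per object point — can be sketched directly. The grid size, wavelength, and pitch below are hypothetical toy values, and the RLE recurrence itself is not reproduced:

    ```python
    import math

    WAVELENGTH = 0.5e-6  # illustrative wavelength, m
    PITCH = 10e-6        # illustrative hologram pixel pitch, m
    NX = NY = 64         # toy hologram resolution

    def cgh_by_zoneplates(points):
        """Superimpose one zoneplate (Fresnel phase pattern) per object point."""
        hologram = [[0.0] * NX for _ in range(NY)]
        for (px, py, pz) in points:
            for iy in range(NY):
                for ix in range(NX):
                    dx = (ix - NX / 2) * PITCH - px
                    dy = (iy - NY / 2) * PITCH - py
                    # Fresnel approximation of a point source's phase at depth pz.
                    phase = math.pi * (dx * dx + dy * dy) / (WAVELENGTH * pz)
                    hologram[iy][ix] += math.cos(phase)
        return hologram

    # Two object points 0.1 m behind the hologram plane.
    h = cgh_by_zoneplates([(0.0, 0.0, 0.1), (1e-4, 0.0, 0.1)])
    ```

    The cosine evaluated per pixel per point is exactly the cost that circle-drawing and RLE-based algorithms avoid.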

  8. Finite element computer program for the calculation of the resonant frequencies of anisotropic materials

    Energy Technology Data Exchange (ETDEWEB)

    Fleury, W.H.; Rosinger, H.E.; Ritchie, I.G.


    A set of computer programs for the calculation of the flexural and torsional resonant frequencies of rectangular section bars of materials of orthotropic or high symmetry are described. The calculations are used in the experimental determination and verification of the elastic constants of anisotropic materials. The simple finite element technique employed separates the inertial and elastic properties of the beam element into station and field transfer matrices respectively. It includes the Timoshenko beam corrections for flexure and Lekhnitskii's theory for torsion-flexure coupling. The programs also calculate the vibration shapes and surface nodal contours or Chladni figures of the vibration modes. (auth)

  9. Computer programming for nucleic acid studies. III. Calculated ultraviolet absorption spectra of protected oligodeoxyribonucleotides. (United States)

    Kan, L; Kettell, R W; Miller, P S


    A computer program called UV. FOR was written in FORTRAN. This program primarily utilizes the digitized UV absorption spectra of 8 protected deoxyribonucleosides in 95% ethanol solution to compose the UV spectrum of an oligodeoxynucleotide of any sequence. The calculated and observed UV spectra of 2 protected oligodeoxynucleotides are carefully compared. The results show that the calculated UV spectrum is virtually identical to the observed spectrum. Thus, the calculated spectra provide rapid confirmation of oligonucleotide compositions during the course of oligonucleotide synthesis by the phosphotriester method.
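
    The core idea — composing an oligomer spectrum as the sum of its monomer spectra — can be sketched with hypothetical digitized data. The per-nucleoside values at two wavelengths below are made up for illustration; the actual program uses full digitized spectra of the 8 protected nucleosides:

    ```python
    # Hypothetical extinction values (arbitrary units) at two wavelengths, nm.
    MONOMER_SPECTRA = {
        "dA": {260: 15.4, 280: 2.4},
        "dC": {260: 7.4, 280: 6.1},
        "dG": {260: 11.5, 280: 8.0},
        "dT": {260: 8.7, 280: 5.6},
    }

    def compose_spectrum(sequence):
        """Sum the monomer spectra of each residue to build the oligomer spectrum."""
        total = {}
        for base in sequence:
            for wavelength, eps in MONOMER_SPECTRA[base].items():
                total[wavelength] = total.get(wavelength, 0.0) + eps
        return total

    spec = compose_spectrum(["dA", "dC", "dG", "dT"])
    ```

    Comparing such a composed spectrum against a measured one gives a quick composition check, which is how the program is used during synthesis.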

  10. Shielding calculations using computer techniques; Calculo de blindajes mediante tecnicas de computacion

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez Portilla, M. I.; Marquez, J.


    Radiological protection aims to limit the ionizing radiation received by people and equipment, which on numerous occasions requires protection shields. Although analytical formulas exist to characterize these shields for certain configurations, the design process may be very intensive in numerical calculations, so the most efficient way to design the shields is by means of computer programs that calculate dose and dose rates. In the present article we review the codes most frequently used to perform these calculations, and the techniques used by such codes. (Author) 13 refs.

  11. Massively parallel computational fluid dynamics calculations for aerodynamics and aerothermodynamics applications

    Energy Technology Data Exchange (ETDEWEB)

    Payne, J.L.; Hassan, B.


    Massively parallel computers have enabled the analyst to solve complicated flow fields (turbulent, chemically reacting) that were previously intractable. Calculations are presented using a massively parallel CFD code called SACCARA (Sandia Advanced Code for Compressible Aerothermodynamics Research and Analysis) currently under development at Sandia National Laboratories as part of the Department of Energy (DOE) Accelerated Strategic Computing Initiative (ASCI). Computations were made on a generic reentry vehicle in a hypersonic flowfield utilizing three different distributed parallel computers to assess the parallel efficiency of the code with increasing numbers of processors. The parallel efficiencies for the SACCARA code will be presented for cases using 1, 150, 100 and 500 processors. Computations were also made on a subsonic/transonic vehicle using both 236 and 521 processors on a grid containing approximately 14.7 million grid points. Ongoing and future plans to implement a parallel overset grid capability and couple SACCARA with other mechanics codes in a massively parallel environment are discussed.

  12. An approach to first principles electronic structure calculation by symbolic-numeric computation

    Directory of Open Access Journals (Sweden)

    Akihito Kikuchi


    There is a wide variety of electronic structure calculation cooperating with symbolic computation. The main purpose of the latter is to play an auxiliary role (but not without importance) to the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power. Thus one resorts to the intensive use of computers, namely, symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models, and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former, when one uses a special atomic basis for a specific purpose, expressing the integrals as a combination of already known analytic functions may sometimes be very difficult. In the latter, one must rearrange a number of creation and annihilation operators into a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression into a tractable and computable form. This is the main motive for the introduction of symbolic computation as a forerunner of the numerical one, and their collaboration has won considerable successes. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation. The present work is applicable to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.

  13. Calculating nasoseptal flap dimensions : a cadaveric study using cone beam computed tomography

    NARCIS (Netherlands)

    ten Dam, Ellen; Korsten-Meijer, Astrid G. W.; Schepers, Rutger H.; van der Meer, Wicher J.; Gerrits, Peter O.; van der Laan, Bernard F. A. M.; Feijen, Robert A.


    We hypothesize that three-dimensional imaging using cone beam computed tomography (CBCT) is suitable for calculating nasoseptal flap (NSF) dimensions. To evaluate our hypothesis, we compared CBCT NSF dimensions with anatomical dissections. The NSF reach and vascularity were studied. In an anatomical

  14. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis (United States)

    Gordon, Sanford; Mcbride, Bonnie J.


    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.

  15. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform. (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter


    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  16. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.


    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of the mathematical models used and the calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPCs of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculating the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.
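
    The accumulated-dose idea — integrating a decaying dose rate over a commitment period — can be sketched with a generic first-order decay model. This is an illustration of the concept only, not PABLM's actual pathway equations:

    ```python
    import math

    def accumulated_dose(initial_dose_rate: float, half_life_y: float, years: float) -> float:
        """Integrate D(t) = D0 * exp(-lambda * t) from t = 0 to `years`.

        Closed form: D0 * (1 - exp(-lambda * years)) / lambda.
        """
        lam = math.log(2.0) / half_life_y
        return initial_dose_rate * (1.0 - math.exp(-lam * years)) / lam

    # First-year committed dose vs. a 50-year integrated dose for a
    # hypothetical radionuclide with a 30-year half-life (unit dose rate).
    d1 = accumulated_dose(1.0, 30.0, 1.0)
    d50 = accumulated_dose(1.0, 30.0, 50.0)
    ```

    Decay during holdup or after deposition shortens the effective exposure, which is why the 50-year integral is well below 50 times the first-year dose rate.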

  17. Simplified calculation method for computer-generated holographic stereograms from multi-view images. (United States)

    Takaki, Yasuhiro; Ikeda, Kyohei


    A simple calculation method to synthesize computer-generated holographic stereograms, which does not involve diffraction calculations, is proposed. It is assumed that three-dimensional (3D) image generation by holographic stereograms is similar to that of multi-view autostereoscopic displays, in that multiple parallax images are displayed with rays converging to corresponding viewpoints. Therefore, a wavefront is calculated whose amplitude is the square root of the intensity distribution of a parallax image and whose phase is the quadric phase distribution of a spherical wave converging to a viewpoint. The wavefronts calculated for multiple viewpoints are summed to obtain an object wave, which is then used to determine a hologram pattern. The proposed technique was experimentally verified.
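
    The construction described — per-viewpoint wavefronts with amplitude √I and a converging quadratic phase, summed into an object wave — can be sketched for a toy 1D hologram. The wavelength, pitch, and flat parallax images below are illustrative assumptions, not the authors' parameters:

    ```python
    import cmath
    import math

    WAVELENGTH = 0.5e-6  # illustrative wavelength, m
    PITCH = 10e-6        # illustrative pixel pitch, m
    N = 128              # toy 1D hologram length

    def stereogram_object_wave(parallax_images, viewpoints, z_view):
        """Sum per-viewpoint wavefronts: sqrt(intensity) * converging spherical phase."""
        wave = [0j] * N
        for image, vx in zip(parallax_images, viewpoints):
            for i in range(N):
                x = (i - N / 2) * PITCH
                # Quadratic (Fresnel) phase of a wave converging to (vx, z_view).
                phase = -math.pi * (x - vx) ** 2 / (WAVELENGTH * z_view)
                wave[i] += math.sqrt(image[i]) * cmath.exp(1j * phase)
        return wave

    # Two flat parallax images (toy data) seen from two viewpoints at z = 0.5 m.
    images = [[1.0] * N, [0.25] * N]
    obj = stereogram_object_wave(images, [-1e-3, 1e-3], 0.5)
    hologram = [w.real for w in obj]  # one simple way to encode the pattern
    ```

    No per-point diffraction integral appears: each viewpoint contributes one analytic quadratic phase, which is the source of the method's simplicity.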

  18. Open Quantum Dynamics Calculations with the Hierarchy Equations of Motion on Parallel Computers. (United States)

    Strümpfer, Johan; Schulten, Klaus


    Calculating the evolution of an open quantum system, i.e., a system in contact with a thermal environment, has presented a theoretical and computational challenge for many years. With the advent of supercomputers containing large amounts of memory and many processors, the computational challenge posed by the previously intractable theoretical models can now be addressed. The hierarchy equations of motion present one such model and offer a powerful method that has remained under-utilized so far due to its considerable computational expense. By exploiting concurrent processing on parallel computers, the hierarchy equations of motion can be applied to biological-scale systems. Herein we introduce the quantum dynamics software PHI, which solves the hierarchy equations of motion. We describe the integrator employed by PHI and demonstrate PHI's scaling and efficiency running on large parallel computers by applying the software to the calculation of inter-complex excitation transfer between the light-harvesting complexes 1 and 2 of purple photosynthetic bacteria, a 50-pigment system.

  19. Guide for licensing evaluations using CRAC2: A computer program for calculating reactor accident consequences

    Energy Technology Data Exchange (ETDEWEB)

    White, J.E.; Roussin, R.W.; Gilpin, H.


    A version of the CRAC2 computer code applicable for use in analyses of consequences and risks of reactor accidents in case work for environmental statements has been implemented for use on the Nuclear Regulatory Commission Data General MV/8000 computer system. Input preparation is facilitated through the use of an interactive computer program which operates on an IBM personal computer. The resulting CRAC2 input deck is transmitted to the MV/8000 by using an error-free file transfer mechanism. To facilitate the use of CRAC2 at NRC, relevant background material on input requirements and model descriptions has been extracted from four reports: "Calculations of Reactor Accident Consequences," Version 2, NUREG/CR-2326 (SAND81-1994); "CRAC2 Model Descriptions," NUREG/CR-2552 (SAND82-0342); "CRAC Calculations for Accident Sections of Environmental Statements," NUREG/CR-2901 (SAND82-1693); and "Sensitivity and Uncertainty Studies of the CRAC2 Computer Code," NUREG/CR-4038 (ORNL-6114). When this background information is combined with instructions on the input processor, this report provides a self-contained guide for preparing CRAC2 input data with a specific orientation toward applications on the MV/8000. 8 refs., 11 figs., 10 tabs.

  20. Implementation of a Thermodynamic Solver within a Computer Program for Calculating Fission-Product Release Fractions (United States)

    Barber, Duncan Henry

    During some postulated accidents at nuclear power stations, fuel cooling may be impaired. In such cases, the fuel heats up and the subsequent increased fission-gas release from the fuel to the gap may result in fuel sheath failure. After fuel sheath failure, the barrier between the coolant and the fuel pellets is lost or impaired, and gases and vapours from the fuel-to-sheath gap and other open voids in the fuel pellets can be vented. Gases and steam from the coolant can enter the broken fuel sheath and interact with the fuel pellet surfaces and the fission-product inclusions on the fuel surface (including material at the surface of the fuel matrix). The chemistry of this interaction is an important mechanism to model in order to assess fission-product releases from fuel. Starting in 1995, the computer program SOURCE 2.0 was developed by the Canadian nuclear industry to model fission-product release from fuel during such accidents. SOURCE 2.0 has employed an early thermochemical model of irradiated uranium dioxide fuel developed at the Royal Military College of Canada. To overcome the limitations of computers of that time, the implementation of the RMC model employed look-up tables of pre-calculated equilibrium conditions. In the intervening years, the RMC model has been improved, the power of computers has increased significantly, and thermodynamic subroutine libraries have become available. This thesis is the result of extensive work based on these three factors. A prototype computer program (referred to as SC11) has been developed that uses a thermodynamic subroutine library to calculate thermodynamic equilibria using Gibbs energy minimization. The Gibbs energy minimization requires the system temperature (T) and pressure (P), and the inventory of chemical elements (n) in the system. In order to calculate the inventory of chemical elements in the fuel, the list of nuclides and nuclear isomers modelled in SC11 had to be expanded from the list used by SOURCE 2.0. A

  1. Mathematic-computational modeling for the calculations involved in the Stern-Volmer theory (United States)

    Thadeu, Felipe C.; Silva, Juliana A.; Silva, Dilson


    The present work describes a mathematic-computational routine that performs the calculations, statistics, and graph plotting needed to obtain the binding constants of ligands to transport proteins, as described by the Stern-Volmer theory. The fluorescence-quenching technique used to analyze samples produces a great amount of data for building spectral plots. The aim of this work is to develop a computational tool that simplifies, makes more reliable, and speeds up the handling of the great mass of data generated by fluorescence spectroscopy equipment.

  2. Computational aspects of maximum likelihood estimation and reduction in sensitivity function calculations (United States)

    Gupta, N. K.; Mehra, R. K.


    This paper discusses numerical aspects of computing maximum likelihood estimates for linear dynamical systems in state-vector form. Different gradient-based nonlinear programming methods are discussed in a unified framework and their applicability to maximum likelihood estimation is examined. The problems due to singular Hessian or singular information matrix that are common in practice are discussed in detail and methods for their solution are proposed. New results on the calculation of state sensitivity functions via reduced order models are given. Several methods for speeding convergence and reducing computation time are also discussed.

  3. Computer subroutines for the estimation of nuclear reaction effects in proton-tissue-dose calculations (United States)

    Wilson, J. W.; Khandelwal, G. S.


    Calculational methods for estimation of dose from external proton exposure of arbitrary convex bodies are briefly reviewed. All the necessary information for the estimation of dose in soft tissue is presented. Special emphasis is placed on retaining the effects of nuclear reaction, especially in relation to the dose equivalent. Computer subroutines to evaluate all of the relevant functions are discussed. Nuclear reaction contributions for standard space radiations are in most cases found to be significant. Many of the existing computer programs for estimating dose in which nuclear reaction effects are neglected can be readily converted to include nuclear reaction effects by use of the subroutines described herein.

  4. A Computationally Efficient Approach for Calculating Galaxy Two-Point Correlations

    CERN Document Server

    Demina, Regina; BenZvi, Segev; Hindrichs, Otto


    We develop a modification to the calculation of the two-point correlation function commonly used in the analysis of large scale structure in cosmology. An estimator of the two-point correlation function is constructed by contrasting the observed distribution of galaxies with that of a uniformly populated random catalog. Using the assumption that the distribution of random galaxies in redshift is independent of angular position allows us to replace pairwise combinatorics with fast integration over probability maps. The new method significantly reduces the computation time while simultaneously increasing the precision of the calculation.
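
    The pair-count estimator that the paper accelerates can be sketched in 1D with the simple (natural) estimator ξ = DD/RR − 1 on toy catalogs; the paper's probability-map integration replaces exactly these explicit pair loops:

    ```python
    import itertools
    import random

    def pair_count(points, r_max):
        """Count pairs of points separated by less than r_max (O(n^2) loop)."""
        return sum(1 for a, b in itertools.combinations(points, 2)
                   if abs(a - b) < r_max)

    random.seed(1)
    data = [random.random() for _ in range(200)]  # stand-in "galaxy" catalog
    rand = [random.random() for _ in range(200)]  # uniform random catalog

    dd = pair_count(data, 0.05)  # data-data pairs within the separation bin
    rr = pair_count(rand, 0.05)  # random-random pairs in the same bin
    xi = dd / rr - 1.0           # natural estimator of the correlation
    ```

    Because both toy catalogs are uniform, ξ fluctuates around zero; clustering in real data would make DD exceed RR. The quadratic pairwise loop is the cost the probability-map method removes.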

  5. Fast calculation of spherical computer generated hologram using spherical wave spectrum method. (United States)

    Jackin, Boaz Jessie; Yatagai, Toyohiko


    A fast calculation method for the computer generation of spherical holograms is proposed. The method is based on wave propagation defined in the spectral domain and in spherical coordinates. The spherical wave spectrum and transfer function were derived from boundary-value solutions to the scalar wave equation. The result is a spectral propagation formula analogous to the angular spectrum formula in Cartesian coordinates. A numerical method to evaluate the derived formula is suggested, which uses only N(log N)^2 operations for calculations on N sampling points. Simulation results are presented to verify the correctness of the proposed method. A spherical hologram for a spherical object was generated and reconstructed successfully using the proposed method.

  6. VORSTAB: A computer program for calculating lateral-directional stability derivatives with vortex flow effect (United States)

    Lan, C. Edward


    A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combination. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. A summary of the theoretical method, program capabilities, input format, output variables and program job control set-up are described. Three test cases are presented as guides for potential users of the code.

  7. Research on feasibility of computational fluid dynamics (CFD) method for traffic signs board calculation (United States)

    Chao, S.; Jiao, C. W.; Liu, S.


    At this stage of the development of China's highways, the quantity and size of traffic signs are growing as the amount of guiding information increases. In this paper, a calculation method is provided for special sign boards with wind-load-reducing measures, to save construction materials and cost. The empirical model widely used in China is introduced for normal sign structure design. This paper then presents a computational fluid dynamics method that can calculate both normal and special sign structures. The two methods are compared and analyzed with examples to establish the applicability and feasibility of the CFD method.

  8. Calculating Three Loop Ladder and V-Topologies for Massive Operator Matrix Elements by Computer Algebra

    CERN Document Server

    Ablinger, J; Blümlein, J; De Freitas, A; von Manteuffel, A; Schneider, C


    Three loop ladder and $V$-topology diagrams contributing to the massive operator matrix element $A_{Qg}$ are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable $N$ and the dimensional parameter $\varepsilon$. Given these representations, the desired Laurent series expansions in $\varepsilon$ can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis, also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural ...

  9. Computer Program for Calculation of a Gas Temperature Profile by Infrared Emission: Absorption Spectroscopy (United States)

    Buchele, D. R.


    A computer program to calculate the temperature profile of a flame or hot gas was presented in detail. Emphasis was on profiles found in jet engine or rocket engine exhaust streams containing H2O or CO2 radiating gases. The temperature profile was assumed axisymmetric with an assumed functional form controlled by two variable parameters. The parameters were calculated using measurements of gas radiation at two wavelengths in the infrared. The program also gave some information on the pressure profile. A method of selection of wavelengths was given that is likely to lead to an accurate determination of the parameters. The program is written in FORTRAN IV language and runs in less than 60 seconds on a Univac 1100 computer.
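
The two-parameter profile inversion described above can be sketched numerically. The following is a hypothetical stand-in, not Buchele's FORTRAN program: it assumes an optically thin power-of-parabola profile T(r) = Tc + (Tp - Tc)(1 - (r/R)^2)^p and recovers the two parameters (Tp, p) from simulated radiances at two infrared wavelengths.

```python
import numpy as np
from scipy.optimize import fsolve

# Hypothetical two-parameter axisymmetric profile (not the report's exact
# functional form): T(r) = Tc + (Tp - Tc) * (1 - (r/R)**2)**p
R, Tc = 0.1, 600.0                       # path half-width [m], edge temperature [K]
h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

def planck(lam, T):
    """Blackbody spectral radiance (Planck's law)."""
    return 2*h*c**2 / lam**5 / (np.exp(h*c/(lam*kB*T)) - 1.0)

def radiance(lam, Tp, p, n=400):
    """Optically thin emission integral along a diametral path."""
    r = np.linspace(-0.999*R, 0.999*R, n)
    T = Tc + (Tp - Tc) * (1.0 - (r/R)**2)**p
    return (r[1] - r[0]) * planck(lam, T).sum()

lams = (4.3e-6, 2.7e-6)                  # CO2 and H2O infrared bands
meas = [radiance(lam, 1800.0, 2.0) for lam in lams]  # synthetic "measurements"

def residual(x):
    # Normalized residuals: two wavelengths determine the two parameters.
    return [radiance(lam, x[0], x[1]) / m - 1.0 for lam, m in zip(lams, meas)]

Tp_fit, p_fit = fsolve(residual, x0=(1500.0, 1.5))
```

Because the two wavelengths sit on different parts of the Planck curve, their radiances weight the hot core and cooler edges differently, which is what makes the two parameters separately identifiable.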

  10. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)


    This SAND report summarizes our work on the Sandia National Laboratory LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" (project #165617, proposal #13-0144). This report is only a summary; those interested in the technical details are encouraged to read the full published results and to contact the report authors about the status of the software and follow-on projects.

  11. Monte Carlo and deterministic computational methods for the calculation of the effective delayed neutron fraction (United States)

    Zhong, Zhaopeng; Talamo, Alberto; Gohar, Yousry


    The effective delayed neutron fraction β plays an important role in the kinetics and static analysis of reactor physics experiments. It is used as the reactivity unit referred to as the "dollar". It is usually obtained by computer simulation because of the difficulty of measuring it experimentally. In 1965, Keepin proposed a method, widely used in the literature, for calculating the effective delayed neutron fraction β. This method requires calculating the adjoint neutron flux as a weighting function of the phase-space inner products and is easy to implement in deterministic codes. With Monte Carlo codes, solving the adjoint neutron transport equation is much more difficult because of the continuous-energy treatment of nuclear data. Consequently, alternative methods that do not require the explicit calculation of the adjoint neutron flux have been proposed. In 1997, Bretscher introduced the k-ratio method for calculating the effective delayed neutron fraction; this method is based on calculating the multiplication factor of a nuclear reactor core with and without the contribution of delayed neutrons. The multiplication factor set by the delayed neutrons (the delayed multiplication factor) is obtained as the difference between the total and the prompt multiplication factors. Using Monte Carlo calculations, Bretscher evaluated β as the ratio between the delayed and total multiplication factors (hence the method is often referred to as the k-ratio method). In the present work, the k-ratio method is applied with Monte Carlo (MCNPX) and deterministic (PARTISN) codes. In the latter case, the ENDF/B nuclear data library of the fuel isotopes (235U and 238U) has been processed by the NJOY code with and without the delayed neutron data to prepare multi-group WIMSD neutron libraries for the lattice physics code DRAGON, which was used to generate the PARTISN macroscopic cross sections. In recent years Meulekamp and van der Marck in 2006 and Nauchi and Kameyama
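
The k-ratio idea reduces to one line of arithmetic; a minimal sketch follows, with made-up multiplication factors since the abstract quotes none.

```python
# Bretscher's k-ratio method: beta_eff is estimated from two criticality
# runs, one with and one without the delayed-neutron data.
def beta_eff_k_ratio(k_total, k_prompt):
    """beta_eff ~= (k_total - k_prompt) / k_total = 1 - k_prompt / k_total."""
    return 1.0 - k_prompt / k_total

# Illustrative values only (not from the paper):
beta = beta_eff_k_ratio(k_total=1.00000, k_prompt=0.99325)   # ~675 pcm
```

In practice the statistical uncertainties of the two Monte Carlo k-eigenvalues propagate into β, which is why the difference of two nearly equal numbers makes this estimate noisy unless both runs are well converged.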

  12. DCHAIN: A user-friendly computer program for radioactive decay and reaction chain calculations

    Energy Technology Data Exchange (ETDEWEB)

    East, L.V.


    A computer program for calculating the time-dependent daughter populations in radioactive decay and nuclear reaction chains is described. Chain members can have non-zero initial populations and be produced from the preceding chain member as the result of radioactive decay, a nuclear reaction, or both. As presently implemented, chains can contain up to 15 members. Program input can be supplied interactively or read from ASCII data files. Time units for half-lives, etc. can be specified during data entry. Input values are verified and can be modified, if necessary, before being used in calculations. Output results can be saved in ASCII files in a format suitable for inclusion in reports or other documents. The calculational method, described in some detail, utilizes a generalized form of the Bateman equations. The program is written in the C language in conformance with current ANSI standards and can be used on multiple hardware platforms.
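
As an illustration of the Bateman-type calculation such a program performs, here is a minimal sketch (a hypothetical three-member chain, not DCHAIN itself) that solves the chain equations dN/dt = M N with a matrix exponential, which is equivalent to the generalized Bateman solution for distinct decay constants.

```python
import numpy as np
from scipy.linalg import expm

# Decay chain A -> B -> C (stable); half-lives of 1 h and 5 h (illustrative).
lam = np.array([np.log(2)/1.0, np.log(2)/5.0, 0.0])

# Chain matrix: member i decays at rate lam[i] and feeds member i+1.
M = np.diag(-lam) + np.diag(lam[:-1], k=-1)

def populations(N0, t):
    """Daughter populations at time t via the matrix exponential,
    equivalent to the (generalized) Bateman solution."""
    return expm(M * t) @ N0

N0 = np.array([1.0e6, 0.0, 0.0])   # only the parent is present initially
Nt = populations(N0, t=2.0)        # populations after 2 h
```

The matrix-exponential route avoids the numerical cancellation that the closed-form Bateman sums suffer when two decay constants are nearly equal.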

  13. A computer program incorporating Pitzer's equations for calculation of geochemical reactions in brines (United States)

    Plummer, L.N.; Parkhurst, D.L.; Fleming, G.W.; Dunkle, S.A.


    The program named PHRQPITZ is a computer code capable of making geochemical calculations in brines and other electrolyte solutions at high concentrations using the Pitzer virial-coefficient approach for activity-coefficient corrections. Reaction-modeling capabilities include calculation of (1) aqueous speciation and mineral-saturation indices, (2) mineral solubility, (3) mixing and titration of aqueous solutions, (4) irreversible reactions and mineral-water mass transfer, and (5) reaction paths. The computed results for each aqueous solution include the osmotic coefficient, water activity, mineral saturation indices, mean activity coefficients, total activity coefficients, and scale-dependent values of pH, individual-ion activities, and individual-ion activity coefficients. A database of Pitzer interaction parameters is provided at 25 °C for the system Na-K-Mg-Ca-H-Cl-SO4-OH-HCO3-CO3-CO2-H2O, and extended to include largely untested literature data for Fe(II), Mn(II), Sr, Ba, Li, and Br, with provision for calculations at temperatures other than 25 °C. An extensive literature review of published Pitzer interaction parameters for many inorganic salts is given. Also described is an interactive input code for PHRQPITZ called PITZINPT. (USGS)
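
To give a flavor of the Pitzer activity-coefficient machinery such a code implements, the following sketch evaluates the mean activity coefficient of a single 1:1 electrolyte. The NaCl parameters (beta0 = 0.0765, beta1 = 0.2664, C_phi = 0.00127, A_phi = 0.392 at 25 °C) are standard literature values; the full multicomponent virial expansion used by PHRQPITZ contains many additional mixing terms omitted here.

```python
import math

# Pitzer mean activity coefficient for a pure 1:1 electrolyte (NaCl, 25 C).
A_phi, b, alpha = 0.392, 1.2, 2.0
beta0, beta1, C_phi = 0.0765, 0.2664, 0.00127

def ln_gamma_pm(m):
    """ln of the mean activity coefficient at molality m (I = m for 1:1)."""
    I = m
    sI = math.sqrt(I)
    # Debye-Hueckel-like electrostatic term
    f = -A_phi * (sI / (1 + b*sI) + (2.0/b) * math.log(1 + b*sI))
    # Second virial coefficient with ionic-strength dependence
    B = 2*beta0 + (2*beta1 / (alpha**2 * I)) * \
        (1 - (1 + alpha*sI - alpha**2 * I / 2) * math.exp(-alpha*sI))
    # Third virial (triple-ion) contribution
    return f + m*B + m**2 * 1.5*C_phi

gamma_1m = math.exp(ln_gamma_pm(1.0))
```

At 1 molal this gives a mean activity coefficient of about 0.655, close to the measured value of roughly 0.657 for NaCl, which is the kind of agreement the Pitzer approach maintains up to very high ionic strengths where Debye-Hückel-type models fail.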

  14. Computational methods for multiphase equilibrium and kinetics calculations for geochemical and reactive transport applications (United States)

    Leal, Allan; Saar, Martin


    Computational methods for geochemical and reactive transport modeling are essential for understanding many natural and industrial processes. Most of these processes involve several phases and components, and quite often require chemical equilibrium and kinetics calculations. We present an overview of novel methods for multiphase equilibrium calculations based both on the Gibbs energy minimization (GEM) approach and on the solution of the law-of-mass-action (LMA) equations. We also employ kinetics calculations assuming partial equilibrium (e.g., fluid species in equilibrium while minerals are in disequilibrium), using automatic time stepping to improve simulation efficiency and robustness. These methods are developed specifically for applications that are computationally expensive, such as reactive transport simulations. We show how efficient the new methods are compared to other algorithms, and how easy it is to use them for geochemical modeling via a simple script language. All methods are available in Reaktoro, a unified open-source framework for modeling chemically reactive systems, which we also briefly describe.

  15. Calculation of Computational Complexity for Radix-2 (p) Fast Fourier Transform Algorithms for Medical Signals. (United States)

    Amirfattahi, Rassoul


    Owing to its simplicity, radix-2 is a popular algorithm for implementing the fast Fourier transform. Radix-2(p) algorithms have the same order of computational complexity as higher-radix algorithms, but still retain the simplicity of radix-2. By defining a new concept, the twiddle factor template, we propose in this paper a method for the exact calculation of the multiplicative complexity of radix-2(p) algorithms. The methodology is described for the radix-2, radix-2(2), and radix-2(3) algorithms. Results show that radix-2(2) and radix-2(3) have significantly lower computational complexity than radix-2. Another interesting result is that while the number of complex multiplications in the radix-2(3) algorithm is slightly higher than in radix-2(2), the number of real multiplications for radix-2(3) is lower than for radix-2(2). This is because twiddle factors of a particular form, which require fewer real multiplications, occur more frequently in the radix-2(3) algorithm.
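
The kind of operation counting the paper formalizes with twiddle factor templates can be illustrated with a toy radix-2 transform that tallies its own non-trivial complex multiplications (here counting every twiddle except W^0 = 1; the paper's templates refine this accounting further down to real multiplications).

```python
import cmath

def fft(x, count):
    """Recursive radix-2 decimation-in-time FFT; count[0] accumulates the
    number of complex multiplications by non-unity twiddle factors."""
    n = len(x)
    if n == 1:
        return list(x)
    even = fft(x[0::2], count)
    odd = fft(x[1::2], count)
    out = [0j] * n
    for k in range(n // 2):
        w = cmath.exp(-2j * cmath.pi * k / n)
        if k != 0:                 # W^0 = 1 costs no multiplication
            count[0] += 1
        t = w * odd[k]
        out[k] = even[k] + t
        out[k + n // 2] = even[k] - t
    return out

count = [0]
X = fft([1, 2, 3, 4, 0, 0, 0, 0], count)
```

For N = 8 the nominal N/2*log2(N) = 12 twiddle multiplications include only 5 non-trivial ones, which is exactly the sort of gap between nominal and exact counts that the template method quantifies.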

  16. Large-Scale Eigenvalue Calculations for Stability Analysis of Steady Flows on Massively Parallel Computers

    Energy Technology Data Exchange (ETDEWEB)

    Lehoucq, Richard B.; Salinger, Andrew G.


    We present an approach for determining the linear stability of steady states of PDEs on massively parallel computers. Linearizing the transient behavior around a steady state leads to a generalized eigenvalue problem. The eigenvalues with largest real part are calculated using Arnoldi's iteration driven by a novel implementation of the Cayley transformation to recast the problem as an ordinary eigenvalue problem. The Cayley transformation requires the solution of a linear system at each Arnoldi iteration, which must be done iteratively for the algorithm to scale with problem size. A representative model problem of 3D incompressible flow and heat transfer in a rotating disk reactor is used to analyze the effect of algorithmic parameters on the performance of the eigenvalue algorithm. Successful calculations of leading eigenvalues for matrix systems of order up to 4 million were performed, identifying the critical Grashof number for a Hopf bifurcation.
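
The Cayley-transform trick can be shown on a small dense analogue (a NumPy stand-in for the paper's parallel iterative solves, with invented matrices): eigenvalues theta of C = (J - sigma*B)^(-1)(J - mu*B) relate to the generalized eigenvalues lambda of (J, B) by theta = (lambda - mu)/(lambda - sigma), so the rightmost lambda map to the dominant theta that Arnoldi's iteration finds first.

```python
import numpy as np

# Toy generalized problem J x = lam * B x with known spectrum -1, ..., -50.
n = 50
J = -np.diag(np.arange(1.0, n + 1))    # stand-in Jacobian
B = np.eye(n)                          # stand-in mass matrix

# Cayley transform: C = (J - sigma*B)^{-1} (J - mu*B); sigma, mu bracket
# the region of interest near the imaginary axis.
sigma, mu = 1.0, -1.0
C = np.linalg.solve(J - sigma*B, J - mu*B)

theta = np.linalg.eigvals(C)               # Arnoldi would find largest |theta|
lam = (sigma*theta - mu) / (theta - 1.0)   # invert theta = (lam-mu)/(lam-sigma)
rightmost = max(lam.real)                  # leading eigenvalue, -1 here
```

In the paper's setting the solve with (J - sigma*B) is the step that must be done iteratively at each Arnoldi iteration for the method to scale; here a dense solve stands in for it.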

  17. Computational Issues Associated with Automatic Calculation of Acute Myocardial Infarction Scores (United States)

    Destro-Filho, J. B.; Machado, S. J. S.; Fonseca, G. T.


    This paper presents a comparison among the three principal acute myocardial infarction (AMI) scores (Selvester, Aldrich, Anderson-Wilkins) as they are automatically estimated from digital electrocardiographic (ECG) files, in terms of memory occupation and processing time. Theoretical algorithm complexity is also provided. Our simulation study assumes that the ECG signal is already digitized and available on a computer platform. We perform 1,000,000 Monte Carlo experiments using the same input files, leading to average results that point out the drawbacks and advantages of each score. Since none of these calculations requires either large memory occupation or long processing times, automatic estimation is compatible with the real-time requirements of AMI urgency and with telemedicine systems, and it is faster than manual calculation even on simple, inexpensive personal microcomputers.

  18. Methods, algorithms and computer codes for calculation of electron-impact excitation parameters

    CERN Document Server

    Bogdanovich, P; Stonys, D


    We describe the computer codes, developed at Vilnius University, for the calculation of electron-impact excitation cross sections, collision strengths, and excitation rates in the plane-wave Born approximation. These codes utilize multireference atomic wavefunctions, which are also adopted to calculate radiative transition parameters of complex many-electron ions. This leads to consistent data sets suitable for plasma modelling codes. Two versions of the electron scattering codes are considered in the present work, both employing the configuration interaction method for the inclusion of correlation effects and the Breit-Pauli approximation to account for relativistic effects. The versions differ only in their one-electron radial orbitals: the first employs non-relativistic numerical radial orbitals, while the other uses quasirelativistic radial orbitals. The accuracy of the produced results is assessed by comparing radiative transition and electron-impact excitation data for neutral hydrogen, helium...

  19. WOLF: a computer code package for the calculation of ion beam trajectories

    Energy Technology Data Exchange (ETDEWEB)

    Vogel, D.L.


    The WOLF code solves Poisson's equation within a user-defined problem boundary of arbitrary shape. The code is compatible with ANSI FORTRAN and uses a two-dimensional Cartesian coordinate geometry represented on a triangular lattice. The vacuum electric fields and equipotential lines are calculated for the input problem. The user may then introduce a series of emitters from which particles of different charge-to-mass ratios and initial energies can originate. These non-relativistic particles are then traced by WOLF through the user-defined region. Effects of ion and electron space charge are included in the calculation. A subprogram, PISA, forms part of this code and enables optimization of various aspects of the problem. The WOLF package also allows detailed graphical analysis of the computed results.
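
A minimal flavor of the field-solve step in such a code is sketched below: Jacobi relaxation of the charge-free Poisson (Laplace) equation on a rectangular grid, standing in for WOLF's triangular lattice, with electrode potentials as Dirichlet boundaries.

```python
import numpy as np

# Left electrode at 0 V, right electrode at 1 V, top/bottom rows follow a
# linear ramp, so the exact vacuum solution is phi(x, y) = x.
n = 21
phi = np.zeros((n, n))
x = np.linspace(0.0, 1.0, n)
phi[:, -1] = 1.0       # right electrode
phi[0, :] = x          # ramped top boundary
phi[-1, :] = x         # ramped bottom boundary

# Jacobi sweeps: each interior node relaxes to the mean of its neighbours.
for _ in range(2000):
    phi[1:-1, 1:-1] = 0.25 * (phi[2:, 1:-1] + phi[:-2, 1:-1] +
                              phi[1:-1, 2:] + phi[1:-1, :-2])

# Electric field E = -grad(phi) by central differences.
Ey, Ex = np.gradient(-phi, 1.0 / (n - 1))
```

With the ramped boundary the interior relaxes to a uniform field E = (-1, 0); tracing particle trajectories through this field (the second half of what WOLF does) is then an ordinary-differential-equation integration.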

  20. Application of bilateral filtration with weight coefficients for similarity metric calculation in optical flow computation algorithm (United States)

    Panin, S. V.; Titkov, V. V.; Lyubutin, P. S.; Chemezov, V. O.; Eremin, A. V.


    The use of bilateral-filter weight coefficients to compute weighted similarity metrics of image regions was investigated within an optical flow computation algorithm that employs 3-dimensional recursive search (3DRS). Testing the algorithm on images from the public Middlebury benchmark database demonstrated the effectiveness of this weighted similarity metric for the image processing problem. Matching the parameter values of the weight-coefficient equations to the image texture features was shown to be necessary for higher noise resistance during vector-field construction. An adaptation technique that eliminates manual selection of the parameter values was proposed and its efficiency demonstrated.
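
A sketch of this kind of bilaterally weighted similarity metric follows (the functional form and parameter values are illustrative assumptions, not the authors' exact equations): the weight of each pixel combines spatial closeness and photometric similarity to the patch centre before the absolute differences are accumulated, so pixels across an intensity edge contribute little to the metric.

```python
import numpy as np

def bilateral_sad(p, q, sigma_s=2.0, sigma_r=25.0):
    """Bilaterally weighted, normalized sum of absolute differences between
    two equally sized square patches p and q (floats)."""
    n = p.shape[0]
    c = n // 2
    yy, xx = np.mgrid[0:n, 0:n]
    w_spatial = np.exp(-((yy - c)**2 + (xx - c)**2) / (2 * sigma_s**2))
    w_range = np.exp(-((p - p[c, c])**2) / (2 * sigma_r**2))
    w = w_spatial * w_range
    return np.sum(w * np.abs(p - q)) / np.sum(w)

patch = np.full((5, 5), 100.0)
sad_same = bilateral_sad(patch, patch)            # identical patches
sad_shift = bilateral_sad(patch, patch + 10.0)    # uniform 10-level offset
```

Because the weights are normalized, a uniform intensity offset of 10 yields a metric of exactly 10, while texture-dependent weighting only changes how non-uniform differences are pooled.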

  1. A computer module used to calculate the horizontal control surface size of a conceptual aircraft design (United States)

    Sandlin, Doral R.; Swanson, Stephen Mark


    The creation of a computer module used to calculate the size of the horizontal control surfaces of a conceptual aircraft design is discussed. The control surface size is determined by first calculating the size needed to rotate the aircraft during takeoff and, second, by determining whether the calculated size is large enough to maintain stability of the aircraft throughout any specified mission. The tail size needed to rotate during takeoff is calculated from a summation of forces about the main landing gear of the aircraft. The stability of the aircraft is determined from a summation of forces about the center of gravity during different phases of the aircraft's flight. Included in the horizontal control surface analysis are downwash effects on an aft tail, upwash effects on a forward canard, and effects due to flight in close proximity to the ground. Comparisons of production aircraft with numerical models show good accuracy for control surface sizing. A modified canard design verified the accuracy of the module for canard configurations. Added to this stability and control module is a subroutine that determines one of three design variables for a stable vectored-thrust aircraft: forward thrust nozzle position, aft thrust nozzle angle, and forward thrust split.
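
The takeoff-rotation moment balance can be sketched as follows; all numbers and the simplifications (wing lift, thrust moment, and pitch inertia are neglected in the balance) are illustrative assumptions, not the module's actual equations.

```python
# Hypothetical takeoff-rotation sizing: sum moments about the main gear and
# solve for the tail area that just rotates the aircraft at rotation speed.
rho = 1.225            # air density, kg/m^3
W = 50_000 * 9.81      # aircraft weight, N
d_cg = 0.9             # cg position ahead of the main gear, m
l_t = 14.0             # tail arm aft of the main gear, m
V_R = 65.0             # rotation speed, m/s
CL_t = -1.0            # tail lift coefficient (a download pitches the nose up)

# Moment balance about the gear: W * d_cg = |L_t| * l_t
L_t = W * d_cg / l_t                              # required tail download, N
S_t = L_t / (0.5 * rho * V_R**2 * abs(CL_t))      # required tail area, m^2
```

The actual module refines this balance with downwash, ground effect, and mission-long stability checks, but the core sizing logic is this lever-arm trade between cg offset and tail arm.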

  2. Development of a locally mass flux conservative computer code for calculating 3-D viscous flow in turbomachines (United States)

    Walitt, L.


    The VANS successive approximation numerical method was extended to the computation of three-dimensional, viscous, transonic flows in turbomachines. A cross-sectional computer code, which conserves mass flux at each point of the cross-sectional surface of computation, was developed. In the VANS numerical method, the cross-sectional computation follows a blade-to-blade calculation. Numerical calculations were made for an axial annular turbine cascade and a transonic centrifugal impeller with splitter vanes. The subsonic turbine cascade computation was performed on a blade-to-blade surface to evaluate the accuracy of the blade-to-blade mode of marching. Calculated blade pressures at the hub, mid, and tip radii of the cascade agreed with corresponding measurements. The transonic impeller computation was conducted to test the newly developed, locally mass flux conservative, cross-sectional computer code. Both blade-to-blade and cross-sectional modes of calculation were implemented for this problem. A triple-point shock structure was computed in the inducer region of the impeller. In addition, time-averaged shroud static pressures generally agreed with measured shroud pressures. It is concluded that the blade-to-blade computation produces a useful engineering flow field in regions of subsonic relative flow, and that cross-sectional computation, with a locally mass flux conservative continuity equation, is required to compute the shock waves in regions of supersonic relative flow.

  3. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra (United States)

    Ablinger, J.; Behring, A.; Blümlein, J.; De Freitas, A.; von Manteuffel, A.; Schneider, C.


    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element AQg are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.

  4. Use of Monte Carlo simulation software for calculating effective dose in cone beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gomes B, W. O., E-mail: [Instituto Federal da Bahia, Rua Emidio dos Santos s/n, Barbalho 40301-015, Salvador de Bahia (Brazil)


    This study aimed to develop an irradiation geometry applicable to the PCXMC software and the consequent calculation of effective dose in cone beam computed tomography (CBCT) applications. We evaluated CBCT units for dental applications: the Carestream CS 9000 3D tomograph, the GENDEX GXCB-500, and the Classic i-CAT. Initially we characterized each protocol by measuring the entrance surface kerma and the kerma-area product, P_KA, with RADCAL solid-state detectors and a PTW transmission chamber. We then introduced the technical parameters of each preset protocol and the geometric conditions into the PCXMC software to obtain the effective dose values. The calculated effective dose is in the range of 9.0 to 15.7 μSv for the CS 9000 3D; 44.5 to 89 μSv for the GXCB-500; and 62 to 111 μSv for the Classic i-CAT. These values were compared with dosimetry results obtained with TLDs implanted in an anthropomorphic phantom and are considered consistent. The effective dose results are very sensitive to the irradiation geometry (beam position on the mathematical phantom), which is a weak point of the software; nevertheless, it is a very useful tool for obtaining quick answers concerning protocol optimization. We conclude that the PCXMC Monte Carlo simulation software is useful for assessing CBCT protocols in dental applications. (Author)

  5. Overcoming computational uncertainties to reveal chemical sensitivity in single molecule conduction calculations. (United States)

    Solomon, Gemma C; Reimers, Jeffrey R; Hush, Noel S


    In the calculation of conduction through single molecules, approximations about the geometry and electronic structure of the system are usually made in order to simplify the problem. Previously [G. C. Solomon, J. R. Reimers, and N. S. Hush, J. Chem. Phys. 121, 6615 (2004)], we have shown that, in calculations employing cluster models for the electrodes, proper treatment of the open-shell nature of the clusters is the most important computational feature required to make the results sensitive to variations in the structural and chemical features of the system. Here, we expand on this and establish a general hierarchy of requirements involving the treatment of geometrical approximations. These approximations fall into two classes: those associated with finite-dimensional methods for representing the semi-infinite electrodes, and those associated with the chemisorption topology. We show that ca. 100 unique atoms are required in order to properly characterize each electrode: using fewer atoms leads to nonsystematic variations in conductivity that can overwhelm the subtler changes. The choice of binding site is shown to be the next most important feature, while some effects that are difficult to control experimentally, concerning the orientations at each binding site, are actually shown to be insignificant. Verification of this result provides a general test for the precision of computational procedures for molecular conductivity. Predictions concerning the dependence of conduction on substituent and other effects on the central molecule are found to be meaningful only when they exceed the uncertainties of the effects associated with binding-site variation.

  6. A Geometric Computational Model for Calculation of Longwall Face Effect on Gate Roadways (United States)

    Mohammadi, Hamid; Ebrahimi Farsangi, Mohammad Ali; Jalalifar, Hossein; Ahmadi, Ali Reza


    In this paper a geometric computational model (GCM) has been developed for calculating the effect of the longwall face on the extension of the excavation-damaged zone (EDZ) above the gate roadways (main and tail gates), considering the advance longwall mining method. In this model, the stability of the gate roadways is investigated based on loading effects due to the EDZ and the caving zone (CZ) above the longwall face, which can extend the EDZ size. The structure of the GCM depends on four important factors: (1) geomechanical properties of the hanging wall, (2) dip and thickness of the coal seam, (3) CZ characteristics, and (4) pillar width. The investigations demonstrated that the extension of the EDZ is a function of pillar width. Considering the effect of pillar width, new mathematical relationships were presented to calculate the face influence coefficient and the characteristics of the extended EDZ. Furthermore, taking the GCM into account, a computational algorithm for stability analysis of gate roadways was suggested. Validation was carried out through instrumentation and monitoring results of a longwall face at the Parvade-2 coal mine in Tabas, Iran, demonstrating good agreement between the new model and the measured results. Finally, a sensitivity analysis was carried out on the effects of pillar width, bearing capacity of the support system, and coal seam dip.

  7. DIST: a computer code system for calculation of distribution ratios of solutes in the purex system

    Energy Technology Data Exchange (ETDEWEB)

    Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment


    Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system, DIST, has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data for U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO3 and HNO2 (DISTEX) and for Zr(IV) and Tc(VII) (DISTEXFP), together with programs to calculate distribution ratios of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO3 and HNO2 (DIST1), and of Zr(IV) and Tc(VII) (DIST2). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters of the empirical equations, using those DISTEX data that fulfill the assigned conditions, and apply them to calculate the distribution ratios of the respective solutes. Approximately 5,000 data points are stored in DISTEX and DISTEXFP. The present report describes 1) the specific features of the DIST1 and DIST2 codes and examples of calculations, 2) the databases DISTEX and DISTEXFP and a program, DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction, and delete functions, and, in the annex, 3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G, 4) a user manual for DISTIN, 5) the source programs of DIST1 and DIST2, and 6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.

  8. Computational scheme for pH-dependent binding free energy calculation with explicit solvent. (United States)

    Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R


    We present a computational scheme to compute the pH dependence of binding free energy with explicit solvent. Despite the importance of pH, its effect has generally been neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant-pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state, or of releasing a single protonation state to multiple states, the pH-dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME) method, and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of the binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while the cutoff-method results are off by 2 kcal mol(-1). We also discuss the characteristics of the three long-range interaction calculation methods for constant-pH simulations.
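
The BAR step of such an analysis can be sketched in a few lines; the Gaussian work distributions below are synthetic stand-ins for real simulation output (in units of k_B*T), chosen to be consistent with the Crooks fluctuation relation so the true free energy difference is known.

```python
import numpy as np
from scipy.optimize import brentq

def bar(w_f, w_r):
    """Bennett acceptance ratio estimate of Delta F from forward and reverse
    work samples (equal sample sizes, k_B*T = 1): solve the self-consistency
    condition sum_F f(w_f - dF) = sum_R f(w_r + dF), f(x) = 1/(1 + e^x)."""
    def g(dF):
        return (np.sum(1.0 / (1.0 + np.exp(w_f - dF))) -
                np.sum(1.0 / (1.0 + np.exp(w_r + dF))))
    return brentq(g, -50.0, 50.0)   # g is monotone, so one bracketed root

rng = np.random.default_rng(1)
dF_true, s = 2.0, 1.0
# Gaussian work distributions with mean dF + s^2/2 (forward) and
# -dF + s^2/2 (reverse) satisfy the Crooks relation for variance s^2.
w_f = rng.normal(dF_true + s**2 / 2, s, 20000)
w_r = rng.normal(-dF_true + s**2 / 2, s, 20000)
dF_est = bar(w_f, w_r)
```

The same estimator, applied between the constrained and released protonation-state ensembles, yields the free energy terms that assemble into the pH-dependent binding profile.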

  9. Evaluation of Two Computational Techniques of Calculating Multipath Using Global Positioning System Carrier Phase Measurements (United States)

    Gomez, Susan F.; Hood, Laura; Panneton, Robert J.; Saunders, Penny E.; Adkins, Antha; Hwu, Shian U.; Lu, Ba P.


    Two computational techniques are used to calculate differential phase errors on Global Positioning System (GPS) carrier wave phase measurements due to certain multipath-producing objects. The first is a rigorous computational electromagnetics technique called the Geometric Theory of Diffraction (GTD); the other is a simple ray tracing method. The GTD technique has been used successfully to predict microwave propagation characteristics by taking into account the dominant multipath components due to reflections and diffractions from scattering structures. The ray tracing technique only solves for reflected signals. The results from the two techniques are compared to GPS differential carrier phase measurements taken on the ground using a GPS receiver in the presence of typical International Space Station (ISS) interference structures. The calculations produced using the GTD code compared to the measured results better than those of the ray tracing technique. The agreement was good, demonstrating that the phase errors due to multipath can be modeled and characterized using the GTD technique, and characterized to a lesser fidelity using the DECAT technique. However, some discrepancies were observed. Most of the discrepancies occurred at lower elevations and were due either to phase center deviations of the antenna, the background multipath environment, or the receiver itself. Selected measured and predicted differential carrier phase error results are presented and compared. Results indicate that reflections and diffractions caused by the multipath producers located near the GPS antennas can produce phase shifts of greater than 10 mm, and as high as 95 mm. It should be noted that the field test configuration was meant to simulate typical ISS structures, but the two environments are not identical. The GTD and DECAT techniques have been used to calculate phase errors due to multipath on the ISS configuration to quantify the expected attitude determination errors.

  10. Quantum computing applied to calculations of molecular energies: CH2 benchmark. (United States)

    Veis, Libor; Pittner, Jiří


    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that they, if available, would be able to perform the full configuration interaction (FCI) energy calculations with a polynomial scaling. This is in contrast to conventional computers where FCI scales exponentially. We have developed a code for simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of the assessment of its performance on the four lowest lying electronic states of CH(2) molecule. This molecule was chosen as a benchmark, since its two lowest lying (1)A(1) states exhibit a multireference character at the equilibrium geometry. It has been shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.

  11. DNAStat, version 2.1--a computer program for processing genetic profile databases and biostatistical calculations. (United States)

    Berent, Jarosław


    This paper presents the new DNAStat version 2.1 for processing genetic profile databases and biostatistical calculations. The popularization of DNA studies in the judicial system has made it necessary to develop appropriate computer programs. Such programs must, above all, address two critical problems: broadly understood data processing and storage, and biostatistical calculations. Moreover, in cases of terrorist attacks and mass natural disasters, the ability to identify victims by searching for related individuals is very important. DNAStat version 2.1 is an adequate program for such purposes. DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to versions 1.1 and 1.2, which differed only slightly from the original. DNAStat version 2.0 was launched in 2007; its major improvement was the introduction of group calculation options with potential application to the personal identification of victims of mass disasters and terrorism. The latest version, 2.1, adds a language selection option (Polish or English), which will broaden the usage and application of the program in other countries.

  12. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis (United States)

    Greene, William H.


    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first is an overall finite difference method, in which the analysis is repeated for perturbed designs. The second is termed semianalytical because it involves direct analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models.
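
    The "overall finite difference" idea, rerunning the analysis at perturbed designs and differencing the responses, can be sketched on a toy single-degree-of-freedom oscillator. This is a hypothetical stand-in for a large finite element model; the function names and parameter values are illustrative only:

```python
import numpy as np

def displacement(k, t, m=1.0, c=0.1, x0=1.0):
    """Free decay of an underdamped single-DOF oscillator
    m*x'' + c*x' + k*x = 0 with x(0) = x0, x'(0) = 0."""
    wn = np.sqrt(k / m)
    zeta = c / (2.0 * m * wn)
    wd = wn * np.sqrt(1.0 - zeta ** 2)
    return x0 * np.exp(-zeta * wn * t) * (
        np.cos(wd * t) + (zeta * wn / wd) * np.sin(wd * t))

def overall_fd_sensitivity(k, t, h=1e-6):
    """Overall finite difference: rerun the 'analysis' at the perturbed
    designs k + h and k - h and central-difference the responses."""
    return (displacement(k + h, t) - displacement(k - h, t)) / (2.0 * h)

print(overall_fd_sensitivity(100.0, 0.5))  # d(displacement)/d(stiffness)
```

    In a real finite element setting, each call to `displacement` is a full transient analysis, which is why reusing the original design's approximation vectors for the perturbed models is essential for practicality.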

  13. User's Guide to Handlens - A Computer Program that Calculates the Chemistry of Minerals in Mixtures (United States)

    Eberl, D.D.


    HandLens is a computer program, written in Excel macro language, that calculates the chemistry of minerals in mineral mixtures (for example, in rocks, soils, and sediments) for related samples from inputs of quantitative mineralogy and chemistry. For best results, the related samples should contain minerals having the same chemical compositions; that is, the samples should differ only in the proportions of minerals present. This manual describes how to use the program, discusses the theory behind its operation, and presents test results of the program's accuracy. Required input for HandLens includes quantitative mineralogical data, obtained, for example, by RockJock analysis of X-ray diffraction (XRD) patterns, and quantitative chemical data, obtained, for example, by X-ray fluorescence (XRF) analysis of the same samples. Other quantitative data, such as sample depth, temperature, and surface area, can also be entered. The minerals present in the samples are selected from a list, and the program is started. The results of the calculation include: (1) a table of linear coefficients of determination (r²) relating pairs of input data (for example, Si versus quartz weight percents); (2) a utility for plotting all input data, either as pairs of variables or as sums of up to eight variables; (3) a table that presents the calculated chemical formulae for minerals in the samples; (4) a table that lists the calculated concentrations of major, minor, and trace elements in the various minerals; and (5) a table that presents chemical formulae for the minerals that have been corrected for possible systematic errors in the mineralogical and/or chemical analyses. In addition, the program contains a method for testing the assumption of constant chemistry of the minerals within a sample set.
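
    At its core, the kind of calculation HandLens performs, relating bulk chemistry to mineral proportions across related samples, is a linear least-squares problem. A minimal numpy sketch with hypothetical two-mineral data (not the program's actual algorithm or data):

```python
import numpy as np

# Hypothetical inputs: mineral weight fractions per sample (e.g. from XRD)
# and bulk Si concentration per sample (e.g. from XRF), in weight percent.
W = np.array([[0.80, 0.20],   # sample 1: quartz, calcite fractions
              [0.50, 0.50],   # sample 2
              [0.30, 0.70]])  # sample 3
si = np.array([37.4, 23.4, 14.0])  # bulk Si wt% in each sample

# Least-squares estimate of the Si wt% carried inside each mineral.
c, *_ = np.linalg.lstsq(W, si, rcond=None)
print(np.round(c, 1))  # Si wt% in quartz ~ 46.8; calcite carries essentially none
```

    With more samples than minerals the system is overdetermined, which is what lets the program both estimate per-mineral formulae and flag systematic errors in the mineralogical or chemical analyses.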

  14. MRI-Based Computed Tomography Metal Artifact Correction Method for Improving Proton Range Calculation Accuracy

    Energy Technology Data Exchange (ETDEWEB)

    Park, Peter C. [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Schreibmann, Eduard; Roper, Justin; Elder, Eric; Crocker, Ian [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States); Fox, Tim [Varian Medical Systems, Palo Alto, California (United States); Zhu, X. Ronald [Department of Radiation Physics, University of Texas MD Anderson Cancer Center, Houston, Texas (United States); Dong, Lei [Scripps Proton Therapy Center, San Diego, California (United States); Dhabaan, Anees, E-mail: [Department of Radiation Oncology, Winship Cancer Institute of Emory University, Atlanta, Georgia (United States)


    Purpose: Computed tomography (CT) artifacts can severely degrade dose calculation accuracy in proton therapy. Prompted by the recently increased popularity of magnetic resonance imaging (MRI) in the radiation therapy clinic, we developed an MRI-based CT artifact correction method for improving the accuracy of proton range calculations. Methods and Materials: The proposed method replaces corrupted CT data by mapping CT Hounsfield units (HU) from a nearby artifact-free slice, using a coregistered MRI. MRI and CT volumetric images were registered using 3-dimensional (3D) deformable image registration (DIR). The registration was fine-tuned on a slice-by-slice basis using 2D DIR. Based on the intensities of paired MRI pixel values and the HU from an artifact-free slice, we performed a comprehensive analysis to predict the correct HU for the corrupted region. For a proof-of-concept validation, metal artifacts were simulated on a reference data set. Proton range was calculated using the reference, artifactual, and corrected images to quantify the reduction in proton range error. The correction method was applied to 4 unique clinical cases. Results: The correction method resulted in substantial artifact reduction, both quantitatively and qualitatively. On simulated brain and head-and-neck CT images, the mean error was reduced from 495 and 370 HU to 108 and 92 HU, respectively, after correction. Correspondingly, the absolute mean proton range errors of 2.4 cm and 1.7 cm were reduced to less than 2 mm in both cases. Conclusions: Our MRI-based CT artifact correction method can improve CT image quality and proton range calculation accuracy for patients with severe CT artifacts.

  15. Development of selective photoionization spectroscopy technology - Development of a computer program to calculate selective ionization of atoms with multistep processes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Soon; Nam, Baek Il [Myongji University, Seoul (Korea, Republic of)


    We have developed computer programs to calculate 2- and 3-step selective resonant multiphoton ionization of atoms. Autoionization resonances in the final continuum can be taken into account via the B-spline basis set method. 8 refs., 5 figs. (author)

  16. Development of additional module to neutron-physic and thermal-hydraulic computer codes for coolant acoustical characteristics calculation

    Energy Technology Data Exchange (ETDEWEB)

    Proskuryakov, K.N.; Bogomazov, D.N.; Poliakov, N. [Moscow Power Engineering Institute (Technical University), Moscow (Russian Federation)


    A new special module for calculating coolant acoustical characteristics has been developed for neutron-physics and thermal-hydraulic computer codes. The Russian computer code Rainbow was selected for joint use with the developed module. This code system provides the possibility of calculating EFOCP (Eigen Frequencies of Oscillations of the Coolant Pressure) in any coolant acoustical element of the primary circuits of an NPP. EFOCP values have been calculated for both transient and stationary operation. The results calculated for nominal operation were compared with measured EFOCP values. For example, this comparison was carried out for the 'pressurizer + surge line' system of a WWER-1000 reactor. The calculated result of 0.58 Hz practically coincides with the result of measurement (0.6 Hz). The EFOCP variations in transients are also shown. The presented results are intended to be useful for NPP vibration-acoustical certification. There are no serious difficulties in using this module with other computer codes.

  17. Good manufacturing practice for modelling air pollution: Quality criteria for computer models to calculate air pollution (United States)

    Dekker, C. M.; Sliggers, C. J.

    To spur on quality assurance for models that calculate air pollution, quality criteria for such models have been formulated. By satisfying these criteria the developers of these models and producers of the software packages in this field can assure and account for the quality of their products. In this way critics and users of such (computer) models can gain a clear understanding of the quality of the model. Quality criteria have been formulated for the development of mathematical models, for their programming—including user-friendliness, and for the after-sales service, which is part of the distribution of such software packages. The criteria have been introduced into national and international frameworks to obtain standardization.

  18. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao


    Multi-core processing environments have become the norm in generic computing and are being considered as a way to add an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment consisting of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and can serve as a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  19. Calculating three loop ladder and V-topologies for massive operator matrix elements by computer algebra

    Energy Technology Data Exchange (ETDEWEB)

    Ablinger, J.; Schneider, C. [Johannes Kepler Univ., Linz (Austria). Research Inst. for Symbolic Computation; Behring, A.; Bluemlein, J.; Freitas, A. de [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany); Manteuffel, A. von [Mainz Univ. (Germany). Inst. fuer Physik


    Three loop ladder and V-topology diagrams contributing to the massive operator matrix element A{sub Qg} are calculated. The corresponding objects can all be expressed in terms of nested sums and recurrences depending on the Mellin variable N and the dimensional parameter ε. Given these representations, the desired Laurent series expansions in ε can be obtained with the help of our computer algebra toolbox. Here we rely on generalized hypergeometric functions and Mellin-Barnes representations, on difference ring algorithms for symbolic summation, on an optimized version of the multivariate Almkvist-Zeilberger algorithm for symbolic integration, and on new methods to calculate Laurent series solutions of coupled systems of differential equations. The solutions can be computed for general coefficient matrices directly for any basis also performing the expansion in the dimensional parameter in case it is expressible in terms of indefinite nested product-sum expressions. This structural result is based on new results of our difference ring theory. In the cases discussed we deal with iterative sum- and integral-solutions over general alphabets. The final results are expressed in terms of special sums, forming quasi-shuffle algebras, such as nested harmonic sums, generalized harmonic sums, and nested binomially weighted (cyclotomic) sums. Analytic continuations to complex values of N are possible through the recursion relations obeyed by these quantities and their analytic asymptotic expansions. The latter lead to a host of new constants beyond the multiple zeta values, the infinite generalized harmonic and cyclotomic sums in the case of V-topologies.
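
    The nested harmonic sums mentioned above obey simple recursions and exact shuffle relations. A small sketch with exact rational arithmetic (illustrative only, far removed from the computer algebra toolbox used in the paper) verifies one such relation, 2 S_{1,1}(N) = S_1(N)^2 + S_2(N):

```python
from fractions import Fraction

def S(a, N):
    """Harmonic sum S_a(N) = sum_{i=1}^N 1/i^a, computed exactly.
    A negative index a would give the alternating variant."""
    sign = -1 if a < 0 else 1
    return sum((Fraction(sign ** i, i ** abs(a)) for i in range(1, N + 1)),
               Fraction(0))

def S11(N):
    """Nested sum S_{1,1}(N) = sum_{i=1}^N S_1(i)/i."""
    return sum((S(1, i) / i for i in range(1, N + 1)), Fraction(0))

N = 20
print(2 * S11(N) == S(1, N) ** 2 + S(2, N))  # shuffle relation, exact: True
```

    It is recursions of exactly this kind, S_a(N) = S_a(N-1) + 1/N^a and their nested analogues, that underlie the analytic continuation to complex N mentioned in the abstract.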

  20. MILDOS - A Computer Program for Calculating Environmental Radiation Doses from Uranium Recovery Operations

    Energy Technology Data Exchange (ETDEWEB)

    Strange, D. L.; Bander, T. J.


    The MILDOS computer code estimates impacts from radioactive emissions from uranium milling facilities. These impacts are presented as dose commitments to individuals and to the regional population within an 80 km radius of the facility. Only airborne releases of radioactive materials are considered: releases to surface water and to groundwater are not addressed in MILDOS. This code is multi-purposed and can be used to evaluate population doses for NEPA assessments, maximum individual doses for predictive 40 CFR 190 compliance evaluations, or maximum offsite air concentrations for predictive evaluations of 10 CFR 20 compliance. Emissions of radioactive materials from fixed point-source locations and from area sources are modeled using a sector-averaged Gaussian plume dispersion model, which utilizes user-provided wind frequency data. Mechanisms such as deposition of particulates, resuspension, radioactive decay, and ingrowth of daughter radionuclides are included in the transport model. Annual average air concentrations are computed, from which subsequent impacts to humans through various pathways are computed. Ground surface concentrations are estimated from deposition buildup and ingrowth of radioactive daughters, and are modified by radioactive decay, weathering, and other environmental processes. The MILDOS computer code allows the user to vary the emission sources as a step function of time by adjusting the emission rates, which includes shutting them off completely. Thus the results of a computer run can be made to reflect changing processes throughout the facility's operational lifetime. The pathways considered for individual dose commitments and for population impacts are inhalation, external exposure from ground concentrations, external exposure from cloud immersion, and ingestion of vegetables, meat, and milk. Dose commitments are calculated using dose conversion factors, which are ultimately based
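
    The sector-averaged Gaussian plume model mentioned above has a standard closed form for ground-level air concentration. A hedged sketch, with hypothetical stability-class coefficients rather than the actual MILDOS parameterization:

```python
import math

def sector_avg_conc(Q, x, u, f, H=0.0, n_sectors=16, a=0.06, b=0.92):
    """Sector-averaged Gaussian plume ground-level air concentration.
    Q release rate [Bq/s], x downwind distance [m], u wind speed [m/s],
    f wind-rose frequency for this sector, H effective release height [m].
    sigma_z = a * x**b is a hypothetical stability-class fit, not the
    coefficients used in MILDOS."""
    sigma_z = a * x ** b
    sector_width = 2.0 * math.pi * x / n_sectors  # arc length of the sector
    return (2.0 * Q * f
            / (math.sqrt(2.0 * math.pi) * sigma_z * u * sector_width)
            * math.exp(-H * H / (2.0 * sigma_z ** 2)))

print(f"{sector_avg_conc(1.0e6, 1000.0, 3.0, 0.2):.3g} Bq/m^3")
```

    Multiplying such annual-average concentrations by pathway-specific dose conversion factors is what turns the dispersion result into the dose commitments the code reports.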

  1. SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ono, T; Araki, F [Faculty of Life Sciences, Kumamoto University, Kumamoto (Japan)


    Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany), based on the EGSnrc user code. The X-ray spectra and a bowtie filter for the MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profile in air. The MC dose was calibrated from absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%. The calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol =51.4 mGy), mean doses for the brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol =35.6 mGy), mean doses for the brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol =19.0 mGy), mean doses for the lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol =14.4 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol =6.76 mGy), mean doses for the kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses were considerably different from CTDIvol values. Conclusion: MC dose distributions calculated using patient CT images are useful for evaluating the organ doses absorbed by individual patients.

  2. APOLLO: A computer program for the calculation of chemical equilibrium and reaction kinetics of chemical systems

    Energy Technology Data Exchange (ETDEWEB)

    Nguyen, H.D.


    Several of the technologies being evaluated for the treatment of waste material involve chemical reactions. Our example is the in situ vitrification (ISV) process, where electrical energy is used to melt soil and waste into a ``glass like`` material that immobilizes and encapsulates any residual waste. During the ISV process, various chemical reactions may occur that produce significant amounts of products which must be contained and treated. The APOLLO program was developed to assist in predicting the composition of the gases that are formed. Although the development of this program was directed toward ISV applications, it should be applicable to other technologies where chemical reactions are of interest. This document presents the mathematical methodology of the APOLLO computer code. APOLLO is a computer code that calculates the products of both equilibrium and kinetic chemical reactions. The current version, written in FORTRAN, is readily adaptable to existing transport programs designed for the analysis of chemically reacting flow systems. Separate subroutines, EQREACT and KIREACT, for equilibrium and kinetic chemistry, respectively, have been developed. A detailed description of the numerical techniques used, which include both Lagrange multipliers and a third-order integration scheme, is presented. Sample test problems are presented, and the results are in excellent agreement with those reported in the literature.
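
    The equilibrium part of such a calculation amounts to minimizing the Gibbs free energy subject to element balances. A toy single-reaction sketch (direct minimization on a grid, not APOLLO's Lagrange-multiplier machinery) that recovers the analytic equilibrium fraction for A <=> B:

```python
import math

R = 8.314  # gas constant, J/(mol K)

def equilibrium_fraction(dG0, T, grid=200000):
    """Mole fraction of B at equilibrium for the reaction A <=> B with
    standard reaction Gibbs energy dG0 [J/mol], found by brute-force
    minimization of G(x) = x*dG0 + RT*(x ln x + (1-x) ln(1-x))."""
    RT = R * T
    best_x, best_g = None, float("inf")
    for i in range(1, grid):
        x = i / grid
        g = x * dG0 + RT * (x * math.log(x) + (1 - x) * math.log(1 - x))
        if g < best_g:
            best_x, best_g = x, g
    return best_x

T = 1000.0
K = math.exp(-20000.0 / (R * T))   # analytic equilibrium constant
x_num = equilibrium_fraction(20000.0, T)
x_ana = K / (1.0 + K)
print(round(x_num, 4), round(x_ana, 4))
```

    Real codes replace the brute-force scan with Lagrange multipliers on the element-balance constraints, which scales to many species and elements.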

  4. CATARACT: Computer code for improving power calculations at NREL's high-flux solar furnace (United States)

    Scholl, K.; Bingham, C.; Lewandowski, A.


    The High-Flux Solar Furnace (HFSF), operated by the National Renewable Energy Laboratory, uses a camera-based flux-mapping system to analyze the flux distribution and to determine total power at the focal point. The flux-mapping system consists of a diffusely reflecting plate with seven circular foil calorimeters, a charge-coupled device (CCD) camera, an IBM-compatible personal computer with a frame-grabber board, and commercial image analysis software. The calorimeters provide flux readings that are used to scale the image captured from the plate by the camera. The image analysis software can estimate the total power incident on the plate by integrating under the 3-dimensional image. Because of the physical layout of the HFSF, the camera is positioned at a 20° angle to the flux-mapping plate normal. The resulting foreshortening of the captured images represents a systematic error in the power calculations, because the software incorrectly assumes the image plane is parallel to the camera's array. We have written a FORTRAN computer program called CATARACT (camera/target angle correction) that transforms the original flux-mapper image to a plane normal to the camera's optical axis. A description of the code and the results of experiments performed to verify it are presented. Also presented are comparisons of the total power available from the HFSF as determined from the flux-mapping system and from theoretical considerations.

  5. Monte Carlo Modeling of Computed Tomography Ceiling Scatter for Shielding Calculations. (United States)

    Edwards, Stephen; Schick, Daniel


    Radiation protection for clinical staff and members of the public is of paramount importance, particularly in occupied areas adjacent to computed tomography scanner suites. Increased patient workloads and the adoption of multi-slice scanning systems may make unshielded secondary scatter from ceiling surfaces a significant contributor to dose. The present paper expands upon an existing analytical model for calculating ceiling scatter accounting for variable room geometries and provides calibration data for a range of clinical beam qualities. The practical effect of gantry, false ceiling, and wall attenuation in limiting ceiling scatter is also explored and incorporated into the model. Monte Carlo simulations were used to calibrate the model for scatter from both concrete and lead surfaces. Gantry attenuation experimental data showed an effective blocking of scatter directed toward the ceiling at angles up to 20-30° from the vertical for the scanners examined. The contribution of ceiling scatter from computed tomography operation to the effective dose of individuals in areas surrounding the scanner suite could be significant and therefore should be considered in shielding design according to the proposed analytical model.

  6. An Examination of the Performance of Parallel Calculation of the Radiation Integral on a Beowulf-Class Computer (United States)

    Katz, D.; Cwik, T.; Sterling, T.


    This paper uses the parallel calculation of the radiation integral for examination of performance and compiler issues on a Beowulf-class computer. This type of computer, built from mass-market, commodity, off-the-shelf components, has limited communications performance and therefore also has a limited regime of codes for which it is suitable.

  7. Calculation of brain atrophy using computed tomography and a new atrophy measurement tool (United States)

    Bin Zahid, Abdullah; Mikheev, Artem; Yang, Andrew Il; Samadani, Uzma; Rusinek, Henry


    Purpose: To determine whether brain atrophy can be calculated by performing volumetric analysis on conventional computed tomography (CT) scans despite the relatively low contrast of this modality. Materials & Methods: CTs for 73 patients from the local Veterans Affairs database were selected. Exclusion criteria: AD, NPH, tumor, and alcohol abuse. Protocol: conventional clinical acquisition (Toshiba; helical, 120 kVp, X-ray tube current 300 mA, slice thickness 3-5 mm). A locally developed automatic algorithm was used to segment the intracranial cavity (ICC) using (a) a white matter seed, (b) constrained growth limited by the inner skull layer, and (c) topological connectivity. The ICC was further segmented into CSF and brain parenchyma using a threshold of 16 HU. Results: Age distribution: 25-95 yrs (mean 67+/-17.5 yrs). A significant correlation was found between age and CSF/ICC (r=0.695); part of the atrophy among elderly VA patients is attributable to the presence of other comorbidities. Conclusion: Brain atrophy can be reliably calculated using automated software and conventional CT. Compared to MRI, CT is more widely available, cheaper, and less affected by head motion due to an approximately 100 times shorter scan time. Work is in progress to improve the precision of the measurements, possibly leading to assessment of longitudinal changes within the patient.

  8. An extension of the computer program for dynamical calculations of RHEED intensity oscillations. Heterostructures (United States)

    Daniluk, Andrzej


    A practical computing algorithm working in real time has been developed for calculations of the reflection high-energy electron diffraction from the molecular beam epitaxy growing surface. The calculations are based on a dynamical diffraction theory in which the electrons are scattered on a potential, which is periodic in the direction perpendicular to the surface.
    New version program summary
    Title of program: RHEED_v2
    Catalogue identifier: ADUY_v1_1
    Program summary URL:
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Catalogue identifier of previous version: ADUY
    Authors of the original program: A. Daniluk
    Does the new version supersede the original program: Yes
    Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT, Linux
    Programming language used: C++
    Memory required to execute with typical data: more than 1 MB
    Number of bits in a word: 64
    Number of processors used: 1
    Number of bytes in distributed program, including test data, etc.: 1 074 131
    Number of lines in distributed program, including test data, etc.: 3408
    Distribution format: tar.gz
    Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying the growth and the surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). RHEED rocking curves recorded from heteroepitaxial layers are used for the non-destructive evaluation of epilayer thickness and composition with a high degree of accuracy. Rocking curves from such heterostructures are often very complex because the thickness fringes from every layer beat together. Simulations based on dynamical diffraction theory are generally used to interpret the rocking curves of such structures, from which very small changes in thickness and composition can be

  9. Computer calculation of the Van Vleck second moment for materials with internal rotation of spin groups (United States)

    Goc, Roman


    This paper describes m2rc3, a program that calculates Van Vleck second moments for solids with internal rotation of molecules, ions or their structural parts. Only rotations about C3 axes of symmetry are allowed, but up to 15 axes of rotation per crystallographic unit cell are permitted. The program is very useful in interpreting NMR measurements in solids.
    Program summary
    Title of the program: m2rc3
    Catalogue number: ADUC
    Program summary URL:
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    License provisions: none
    Computers: Cray SV1, Cray T3E-900, PCs
    Installation: Poznań Supercomputing and Networking Center and Faculty of Physics, A. Mickiewicz University, Poznań, Poland
    Operating systems under which the program has been tested: UNICOS ver. on Cray SV1; UNICOS/mk on Cray T3E-900; Windows 98 and Windows XP on PCs
    Programming language: FORTRAN 90
    Number of lines in distributed program, including test data, etc.: 757
    Number of bytes in distributed program, including test data, etc.: 9730
    Distribution format: tar.gz
    Nature of physical problem: The NMR second moment reflects the strength of the nuclear magnetic dipole-dipole interaction in solids. This value can be extracted from the appropriate experiment and can be calculated on the basis of the Van Vleck formula. The internal rotation of molecules or their parts averages this interaction, decreasing the measured value of the NMR second moment. The analysis of internal dynamics based on NMR second moment measurements is as follows. The second moment is measured at different temperatures. On the other hand, it is also calculated for different models and frequencies of this motion. Comparison of experimental and calculated values permits the building of the most probable model of internal dynamics in the studied material. The program described
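
    For a rigid lattice (no internal rotation), the powder-averaged Van Vleck second moment reduces to a lattice sum over r^-6. A minimal sketch for like spins of I = 1/2 on a simple cubic lattice (illustrative only; m2rc3's averaging over rotating spin groups is not reproduced here):

```python
MU0_4PI = 1e-7     # mu_0 / 4 pi, T*m/A
GAMMA_H = 2.675e8  # proton gyromagnetic ratio, rad/(s*T)
HBAR = 1.0546e-34  # reduced Planck constant, J*s

def van_vleck_m2(a, shells=4, I=0.5):
    """Powder-averaged rigid-lattice Van Vleck second moment, (rad/s)^2,
    for like spins I on a simple cubic lattice of spacing a [m]:
    M2 = (3/5) (mu0/4pi)^2 gamma^4 hbar^2 I(I+1) * sum_j r_j^-6."""
    rng = range(-shells, shells + 1)
    # r^-6 = (i^2 + j^2 + k^2)^-3 / a^6 over all neighbours in the cube.
    lattice_sum = sum((i * i + j * j + k * k) ** -3
                      for i in rng for j in rng for k in rng
                      if (i, j, k) != (0, 0, 0)) / a ** 6
    return 0.6 * MU0_4PI ** 2 * GAMMA_H ** 4 * HBAR ** 2 * I * (I + 1) * lattice_sum

m2 = van_vleck_m2(2.0e-10)  # 2 Å spacing, hypothetical
print(f"{m2:.3e} (rad/s)^2")
```

    The r^-6 falloff makes the sum converge after only a few coordination shells; internal rotation enters by replacing the geometric factor of each pair with its average over the motion, which is the part m2rc3 automates.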


    The general conditional equations which govern the phase equilibria in three-component systems are presented. Using the general conditional equations...a general method has been developed to precalculate the phase equilibria in three-component systems from first principle using computer technique...The method developed has been applied to several model examples and the system Ta-Hf-C. The phase equilibria in three-component systems calculated

  11. Absolute binding free energy calculations: on the accuracy of computational scoring of protein-ligand interactions. (United States)

    Singh, Nidhi; Warshel, Arieh


    Calculating absolute binding free energies is a challenging task. Reliable estimates of binding free energies should provide a guide for rational drug design, as well as a deeper understanding of the correlation between protein structure and function. Further applications include identifying novel molecular scaffolds and optimizing lead compounds in computer-aided drug design. Available options for evaluating absolute binding free energies range from the rigorous but expensive free energy perturbation, through the microscopic linear response approximation (LRA/beta version) and related approaches including the linear interaction energy (LIE), to the more approximate and considerably faster scaled protein dipoles Langevin dipoles (PDLD/S-LRA version), as well as the less rigorous molecular mechanics Poisson-Boltzmann/surface area (MM/PBSA) and generalized Born/surface area (MM/GBSA) methods, down to the less accurate scoring functions. There is a need for an assessment of the performance of these approaches in terms of computer time and reliability. We present a comparative study of LRA/beta, LIE, PDLD/S-LRA/beta, and the more widely used MM/PBSA, and assess their abilities to estimate absolute binding energies. The LRA and LIE methods perform reasonably well but require specialized parameterization for the nonelectrostatic term. The PDLD/S-LRA/beta performs effectively without the need for reparameterization. Our assessment of MM/PBSA is less optimistic: this approach appears to provide erroneous estimates of the absolute binding energies because of its incorrect entropies and problematic treatment of electrostatic energies. Overall, the PDLD/S-LRA/beta appears to offer an appealing option for the final stages of massive screening approaches.

  12. Hybrid Numerical Solvers for Massively Parallel Eigenvalue Computation and Their Benchmark with Electronic Structure Calculations

    CERN Document Server

    Imachi, Hiroto


    Optimally hybrid numerical solvers were constructed for the massively parallel generalized eigenvalue problem (GEP). The strong scaling benchmark was carried out on the K computer and other supercomputers for electronic structure calculation problems with matrix sizes of M = 10^4-10^6 and up to 10^5 cores. The procedure for a GEP is decomposed into two subprocedures: the reducer to a standard eigenvalue problem (SEP) and the solver of the SEP. A hybrid solver is constructed by choosing a routine for each subprocedure from the three parallel solver libraries ScaLAPACK, ELPA, and EigenExa. The hybrid solvers with the two newer libraries, ELPA and EigenExa, give better benchmark results than the conventional ScaLAPACK library. Detailed analysis of the results implies that the reducer can become a bottleneck on next-generation (exa-scale) supercomputers, which gives guidance for future research. The code was developed as a middleware and a mini-application and will appear online.
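    The reducer/solver decomposition described in this record can be sketched with dense linear algebra (a toy NumPy sketch of the two subprocedures, not the ScaLAPACK/ELPA/EigenExa routines themselves):

```python
import numpy as np

def solve_gep(A, B):
    """Solve the generalized eigenvalue problem A v = lambda B v by the
    two-step procedure named in the abstract:
    (1) the "reducer": Cholesky-factor B = L L^T and form
        C = L^{-1} A L^{-T}, a standard eigenvalue problem (SEP);
    (2) the "SEP solver": diagonalize the symmetric matrix C,
        then back-transform the eigenvectors via v = L^{-T} y."""
    L = np.linalg.cholesky(B)        # B = L L^T (B must be SPD)
    Linv = np.linalg.inv(L)
    C = Linv @ A @ Linv.T            # reduced standard problem
    w, y = np.linalg.eigh(C)         # SEP solver (eigenvalues ascending)
    V = Linv.T @ y                   # back-transform eigenvectors
    return w, V

# small symmetric test pair
rng = np.random.default_rng(0)
M = 6
X = rng.standard_normal((M, M))
A = (X + X.T) / 2
Y = rng.standard_normal((M, M))
B = Y @ Y.T + M * np.eye(M)          # symmetric positive definite
w, V = solve_gep(A, B)
# residual of A v = lambda B v for the lowest eigenpair
res = np.linalg.norm(A @ V[:, 0] - w[0] * B @ V[:, 0])
```

    The production libraries differ mainly in how these two stages are parallelized and blocked, which is why the reducer stage can dominate at scale.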

  13. A computational approach to calculate the heat of transport of aqueous solutions (United States)

    Di Lecce, Silvia; Albrecht, Tim; Bresme, Fernando


    Thermal gradients induce concentration gradients in alkali halide solutions, and the salt migrates towards hot or cold regions depending on the average temperature of the solution. This effect has been interpreted using the heat of transport, which provides a route to rationalize thermophoretic phenomena. Early theories provide estimates of the heat of transport at infinite dilution. These values are used to interpret thermodiffusion (Soret) and thermoelectric (Seebeck) effects. However, accessing heats of transport of individual ions at finite concentration remains an outstanding question both theoretically and experimentally. Here we discuss a computational approach to calculate heats of transport of aqueous solutions at finite concentrations, and apply our method to study lithium chloride solutions at concentrations >0.5 M. The heats of transport are significantly different for Li+ and Cl− ions, unlike what is expected at infinite dilution. We find theoretical evidence for the existence of minima in the Soret coefficient of LiCl, where the magnitude of the heat of transport is maximized. The Seebeck coefficient obtained from the ionic heats of transport varies significantly with temperature and concentration. We identify thermodynamic conditions leading to a maximization of the thermoelectric response of aqueous solutions.

  14. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease

    Directory of Open Access Journals (Sweden)

    Lopes Antonio


    Full Text Available Background: In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. Objective: We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Materials and Methods: Using Microsoft® Excel facilities, we constructed a matrix containing 5 models (equations) for prediction of oxygen consumption and all additional formulas needed to obtain replicate estimates of hemodynamic parameters. Results: By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. Conclusion: The organized matrix allows rapid generation of replicate parameter estimates, without error due to exhaustive calculations.
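    The replicate-estimate idea is straightforward to sketch: apply the standard Fick relations once per predicted oxygen consumption value and report the resulting range. The haemoglobin, saturations, and predicted VO2 values below are hypothetical, and the five published prediction models are not reproduced here:

```python
def o2_content(hb_g_dl, sat_fraction):
    """Oxygen content in mL O2 per litre of blood (dissolved O2 neglected).
    1.36 mL O2 binds per gram of haemoglobin; the factor 10 converts dL to L."""
    return 1.36 * hb_g_dl * sat_fraction * 10.0

def fick_flow(vo2_ml_min, content_in, content_out):
    """Blood flow (L/min) by the Fick principle: flow = VO2 / delta(O2 content)."""
    return vo2_ml_min / (content_in - content_out)

# Hypothetical patient data (illustrative, not from the article)
hb = 12.0                        # g/dL
sat_ao, sat_mv = 0.98, 0.70      # aortic and mixed-venous saturations
sat_pv, sat_pa = 0.98, 0.85      # pulmonary venous and arterial (L-to-R shunt)

# Replicate estimates from multiple predicted VO2 values, as in the matrix
predicted_vo2 = [110.0, 125.0, 140.0]    # mL/min, hypothetical model outputs
qp = [fick_flow(v, o2_content(hb, sat_pv), o2_content(hb, sat_pa))
      for v in predicted_vo2]            # pulmonary flow estimates
qs = [fick_flow(v, o2_content(hb, sat_ao), o2_content(hb, sat_mv))
      for v in predicted_vo2]            # systemic flow estimates
qp_range = (min(qp), max(qp))            # likely range, not a point estimate
```

    Note that the Qp/Qs ratio is independent of the predicted VO2, since VO2 cancels, which is why shunt ratios are robust even when absolute flows are not.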

  15. Problems on design of computer-generated holograms for testing aspheric surfaces: principle and calculation

    Institute of Scientific and Technical Information of China (English)

    Zhishan Gao; Meimei Kong; Rihong Zhu; Lei Chen


    Interferometric optical testing using computer-generated holograms (CGHs) has provided an approach to highly accurate measurement of aspheric surfaces. When designing CGH null correctors, we should make them with as small an aperture and as low a spatial frequency as possible, and with no zero slope of phase except at the center, to ensure a low risk of substrate figure error and feasibility of fabrication. On the basis of classical optics, a set of equations for calculating the phase function of the CGH is obtained. These equations reveal the dependence of the aperture and spatial frequency of the CGH on the axial distance from the tested aspheric surface. We also simulate the optical path difference error of the CGH relative to the accuracy of controlling the laser spot during fabrication. Meanwhile, we discuss the constraints used to avoid zero slope of phase except at the center and give a design result of the CGH for the tested aspheric surface. The results ensure the feasibility of designing a useful CGH to test aspheric surfaces.

  16. Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.


    The CRAC2 computer code is a revision of the Calculation of Reactor Accident Consequences computer code, CRAC, developed for the Reactor Safety Study. The CRAC2 computer code incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  17. Computer Calculations of Eddy-Current Power Loss in Rotating Titanium Wheels and Rims in Localized Axial Magnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    Mayhall, D J; Stein, W; Gronberg, J B


    We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.

  18. Fast calculation method of computer generated hologram animation for viewpoint parallel shift and rotation using Fourier transform optical system. (United States)

    Watanabe, Ryosuke; Yamaguchi, Kazuhiro; Sakamoto, Yuji


    Computer generated hologram (CGH) animations can be made by switching many CGHs on an electronic display. Some fast calculation methods for CGH animations have been proposed, but none for viewpoint movement. Therefore, we designed a fast calculation method for CGH animations with viewpoint parallel shifts and rotation. A Fourier transform optical system was adopted to expand the viewing angle. Experiments showed that the calculation time of our method was over 6 times shorter than that of the conventional method. Furthermore, the degradation in CGH animation quality was found to be sufficiently small.

  19. Meso-microstructural computational simulation of the hydrogen permeation test to calculate intergranular, grain boundary and effective diffusivities

    Energy Technology Data Exchange (ETDEWEB)

    Jothi, S., E-mail: [College of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP (United Kingdom); Winzer, N. [Fraunhofer Institute for Mechanics of Materials IWM, Wöhlerstraße 11, 79108 Freiburg (Germany); Croft, T.N.; Brown, S.G.R. [College of Engineering, Swansea University, Singleton Park, Swansea SA2 8PP (United Kingdom)


    Highlights: • Characterized polycrystalline nickel microstructure using EBSD analysis. • Developed a meso-microstructural model based on the real microstructure. • Calculated effective diffusivity using the experimental electrochemical permeation test. • Calculated intergranular diffusivity of hydrogen using computational FE simulation. • Validated the computational simulation results against experimental results. - Abstract: Hydrogen induced intergranular embrittlement has been identified as a cause of failure of aerospace components such as combustion chambers made from electrodeposited polycrystalline nickel. Accurate computational analysis of this process requires knowledge of the differential in hydrogen transport between the intergranular and intragranular regions. The effective diffusion coefficient of hydrogen may be measured experimentally, though experimental measurement of the intergranular grain boundary (GB) diffusion coefficient of hydrogen requires significant effort. Therefore an approach to calculate the intergranular GB hydrogen diffusivity using finite element analysis was developed. The effective diffusivity of hydrogen in polycrystalline nickel was measured using electrochemical permeation tests. Data from electron backscatter diffraction measurements were used to construct microstructural representative volume elements including details of grain size and shape and volume fractions of grains and grain boundaries. A Python optimization code has been developed for the ABAQUS environment to calculate the unknown grain boundary diffusivity.
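    On the experimental side, effective diffusivity is commonly extracted from a permeation transient by the time-lag method, D_eff = L^2 / (6 t_lag). A minimal sketch (the abstract does not state that this exact analysis was used, and the foil thickness and time lag below are hypothetical):

```python
def time_lag_diffusivity(thickness_m, t_lag_s):
    """Effective diffusivity (m^2/s) from the permeation time-lag method:
    D_eff = L^2 / (6 * t_lag), where L is the membrane thickness and
    t_lag is the intercept of the steady-state flux asymptote on the
    time axis of the integrated permeation curve."""
    return thickness_m ** 2 / (6.0 * t_lag_s)

# hypothetical specimen: 0.5 mm nickel membrane, 900 s time lag
D_eff = time_lag_diffusivity(0.5e-3, 900.0)   # m^2/s
```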

  20. CONC/11: a computer program for calculating the performance of dish-type solar thermal collectors and power systems

    Energy Technology Data Exchange (ETDEWEB)

    Jaffe, L. D.


    CONC/11 is a computer program designed for calculating the performance of dish-type solar thermal collectors and power systems. It is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended Fortran (similar to Fortran 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers.

  1. CONC/11: A computer program for calculating the performance of dish-type solar thermal collectors and power systems (United States)

    Jaffe, L. D.


    The CONC/11 computer program, designed for calculating the performance of dish-type solar thermal collectors and power systems, is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that will provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.

  2. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems (United States)

    da Silveira, Pedro Rodrigo Castro


    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  3. Structure problems in the analog computation; Problemes de structure dans le calcul analogique

    Energy Technology Data Exchange (ETDEWEB)

    Braffort, P.L. [Commissariat a l' Energie Atomique, Saclay (France). Centre d' Etudes Nucleaires


    Recent mathematical developments have shown the importance of the elementary structures (algebraic, topological, etc.) underlying the great domains of classical analysis. Such structures in analog computation are brought out, and possible developments of applied mathematics are discussed. The topological structures of the standard representation of analog schemes, such as addition triangles, integrators, phase inverters, and function generators, are also studied. The analog method yields only functions of the variable time as the results of its computations. But the course of the computation, for systems including reactive circuits, introduces order structures which are called 'chronological'. Finally, it is shown that the approximation methods of ordinary numerical and digital computation present the same structure as analog computation. This structure analysis permits fruitful comparisons between the several domains of applied mathematics and suggests important new domains of application for the analog method. (M.P.)

  4. WRAITH - A Computer Code for Calculating Internal and External Doses Resulting From An Atmospheric Release of Radioactive Material

    Energy Technology Data Exchange (ETDEWEB)

    Scherpelz, R. I.; Borst, F. J.; Hoenes, G. R.


    WRAITH is a FORTRAN computer code which calculates the doses received by a standard man exposed to an accidental release of radioactive material. The movement of the released material through the atmosphere is calculated using a bivariate straight-line Gaussian distribution model, with Pasquill values for the standard deviations. The quantity of material in the released cloud is modified during its transit time to account for radioactive decay and daughter production. External doses due to exposure to the cloud can be calculated using a semi-infinite cloud approximation. In situations where the semi-infinite cloud approximation is not a good one, the external dose can be calculated by a "finite plume" three-dimensional point-kernel numerical integration technique. Internal doses due to acute inhalation are calculated using the ICRP Task Group Lung Model and a four-segment gastrointestinal tract model. Translocation of the material between body compartments and retention in the body compartments are calculated using multiple exponential retention functions. Internal doses to each organ are calculated as sums of cross-organ doses, with each target organ irradiated by radioactive material in a number of source organs. All doses are calculated in rads, with separate values determined for high-LET and low-LET radiation.
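    The bivariate straight-line Gaussian plume model named above has a standard closed form with ground reflection; a minimal sketch (the sigma values below are illustrative Pasquill-Gifford class-D magnitudes, not taken from WRAITH):

```python
import math

def plume_concentration(Q, u, sigmas, y, z, H):
    """Straight-line Gaussian plume concentration (per unit volume) with
    total reflection at the ground:
        chi = Q / (2 pi sy sz u) * exp(-y^2 / 2 sy^2)
              * [exp(-(z-H)^2 / 2 sz^2) + exp(-(z+H)^2 / 2 sz^2)]
    Q: release rate (e.g. Bq/s), u: wind speed (m/s),
    sigmas: (sigma_y, sigma_z) at the downwind distance of interest (m),
    y: crosswind offset (m), z: receptor height (m), H: release height (m)."""
    sy, sz = sigmas
    lateral = math.exp(-y ** 2 / (2 * sy ** 2))
    vertical = (math.exp(-(z - H) ** 2 / (2 * sz ** 2))
                + math.exp(-(z + H) ** 2 / (2 * sz ** 2)))
    return Q / (2 * math.pi * sy * sz * u) * lateral * vertical

# illustrative ground-level release, receptor on the plume centerline
chi = plume_concentration(Q=1.0e9, u=4.0, sigmas=(68.0, 31.0),
                          y=0.0, z=0.0, H=0.0)
```

    On the centerline of a ground-level release the reflection term doubles the concentration, which the sketch reproduces exactly.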

  5. Computation of nodal surfaces in fixed-node diffusion Monte Carlo calculations using a genetic algorithm. (United States)

    Ramilowski, Jordan A; Farrelly, David


    The fixed-node diffusion Monte Carlo (DMC) algorithm is a powerful way of computing excited state energies in a remarkably diverse number of contexts in quantum chemistry and physics. The main difficulty in implementing the procedure lies in obtaining a good estimate of the nodal surface of the excited state in question. Although the nodal surface can sometimes be obtained from symmetry or by making approximations, this is not always the case. In any event, nodal surfaces are usually obtained in an ad hoc way. In fact, the search for nodal surfaces can be formulated as an optimization problem within the DMC procedure itself. Here we investigate the use of a genetic algorithm to systematically and automatically compute nodal surfaces. Application is made to the computation of excited states of the HCN-⁴He complex and to the computation of tunneling splittings in the hydrogen bonded HCl-HCl complex.
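    The genetic-algorithm idea can be illustrated in miniature: evolve a population of trial nodal-surface parameters under a fitness function. The quadratic surrogate fitness below stands in for the DMC energy criterion and is purely hypothetical:

```python
import random

def genetic_optimize(fitness, lo, hi, pop_size=30, generations=60,
                     mutation=0.1, seed=1):
    """Minimal real-valued genetic algorithm: tournament selection of
    size 2, blend crossover, Gaussian mutation, clipping to [lo, hi].
    Returns the fittest individual of the final population."""
    rng = random.Random(seed)
    pop = [rng.uniform(lo, hi) for _ in range(pop_size)]
    for _ in range(generations):
        def pick():
            a, b = rng.sample(pop, 2)
            return a if fitness(a) > fitness(b) else b   # tournament of 2
        children = []
        for _ in range(pop_size):
            p1, p2 = pick(), pick()
            w = rng.random()
            child = w * p1 + (1 - w) * p2                 # blend crossover
            child += rng.gauss(0.0, mutation * (hi - lo))  # Gaussian mutation
            children.append(min(hi, max(lo, child)))
        pop = children
    return max(pop, key=fitness)

# surrogate "fitness": how well a trial nodal parameter theta matches an
# (assumed) optimum at 0.7 -- a stand-in for minimizing the DMC energy
best = genetic_optimize(lambda t: -(t - 0.7) ** 2, lo=0.0, hi=1.0)
```

    In the actual procedure each fitness evaluation would be a fixed-node DMC run, so the GA's population parallelism is what makes the search tractable.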

  6. Effectiveness of a computer based medication calculation education and testing programme for nurses. (United States)

    Sherriff, Karen; Burston, Sarah; Wallis, Marianne


    The aim of the study was to evaluate the effect of an on-line medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care, and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety.

  7. Calculation method of reflectance distributions for computer-generated holograms using the finite-difference time-domain method. (United States)

    Ichikawa, Tsubasa; Sakamoto, Yuji; Subagyo, Agus; Sueoka, Kazuhisa


    The research on reflectance distributions in computer-generated holograms (CGHs) is particularly sparse, and the textures of materials are not expressed. Thus, we propose a method for calculating reflectance distributions in CGHs that uses the finite-difference time-domain method. In this method, reflected light from an uneven surface made on a computer is analyzed by finite-difference time-domain simulation, and the reflected light distribution is applied to the CGH as an object light. We report the relations between the surface roughness of the objects and the reflectance distributions, and show that the reflectance distributions are given to CGHs by imaging simulation.

  8. Computer codes in nuclear safety, radiation transport and dosimetry; Les codes de calcul en radioprotection, radiophysique et dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M


    The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment, or dosimetry. The presentations were divided into two sessions: 1) methodology and 2) uses in the industrial, medical, and research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation; and secondly, a neural-network approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.

  9. Acute Calculous Cholecystitis Missed on Computed Tomography and Ultrasound but Diagnosed with Fluorodeoxyglucose-Positron Emission Tomography/Computed Tomography

    Directory of Open Access Journals (Sweden)

    Carina Mari Aparici


    Full Text Available We present a case of a 69-year-old patient who underwent ascending aortic aneurysm repair with aortic valve replacement. On postsurgical day 12, he developed leukocytosis and low-grade fevers. Chest computed tomography (CT) showed a periaortic hematoma, a postsurgical change from the aortic aneurysm repair, and a small pericardial effusion. Abdominal ultrasound showed cholelithiasis without any sign of cholecystitis. Finally, a fluorodeoxyglucose (FDG) positron emission tomography (PET)/CT examination was ordered to find the cause of the fever of unknown origin; it showed increased FDG uptake in the gallbladder wall, with no uptake in the lumen. FDG-PET/CT can diagnose acute cholecystitis in patients with nonspecific clinical symptoms and laboratory results.

  10. Emergency Doses (ED) - Revision 3: A calculator code for environmental dose computations

    Energy Technology Data Exchange (ETDEWEB)

    Rittmann, P.D.


    The calculator program ED (Emergency Doses) was developed from several HP-41CV calculator programs documented in the report Seven Health Physics Calculator Programs for the HP-41CV, RHO-HS-ST-5P (Rittman 1984). The program was developed to enable estimates of offsite impacts more rapidly and reliably than was possible with the software available for emergency response at that time. ED - Revision 3, documented in this report, revises the inhalation dose model to match that of ICRP 30 and adds simple estimates of the air concentration downwind from a chemical release. In addition, the method for calculating the Pasquill dispersion parameters was revised to match the GENII code within the limitations of a hand-held calculator (e.g., plume rise and building wake effects are not included). The summary report generator for printed output, which had been present in the code from the original version, was eliminated in Revision 3 to make room for the dispersion model, the chemical release portion, and a method of looping back to an input menu until there are no further changes. This program runs on the Hewlett-Packard programmable calculators known as the HP-41CV and the HP-41CX. The documentation for ED - Revision 3 includes a guide for users, sample problems, detailed verification tests and results, model descriptions, a code description (with program listing), and an independent peer review. This software is intended to be used by individuals with some training in the use of air transport models. Some user inputs require intelligent application of the model to the actual conditions of the accident. The results calculated using ED - Revision 3 are only correct to the extent allowed by the mathematical models. 9 refs., 36 tabs.

  11. Computational Calorimetry: High-Precision Calculation of Host-Guest Binding Thermodynamics. (United States)

    Henriksen, Niel M; Fenley, Andrew T; Gilson, Michael K


    We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van't Hoff equation. Excellent agreement between the direct and van't Hoff methods is demonstrated for both host-guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design.
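    The direct versus van't Hoff consistency check rests on the Gibbs-Helmholtz relation d(ΔG/T)/d(1/T) = ΔH: fitting ΔG/T against 1/T recovers the binding enthalpy. A sketch on synthetic data with assumed constant ΔH and ΔS (not values from the study):

```python
import numpy as np

# Gibbs-Helmholtz: d(dG/T)/d(1/T) = dH, so a linear fit of dG/T against
# 1/T over a set of simulation temperatures recovers the binding enthalpy.
dH_true = -10.0       # kcal/mol (assumed, for the synthetic data)
dS_true = 0.02        # kcal/mol/K (assumed)
T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])   # temperatures, K
dG = dH_true - T * dS_true                            # dG = dH - T dS

# dG/T = dH * (1/T) - dS is linear in 1/T: slope = dH, intercept = -dS
slope, intercept = np.polyfit(1.0 / T, dG / T, 1)
dH_vant_hoff = slope
dS_vant_hoff = -intercept
```

    With real simulation data the fitted van't Hoff enthalpy carries statistical error from each ΔG(T), which is why well-converged windows are needed before the direct and indirect routes can be compared meaningfully.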

  12. Computational Chemistry to the Rescue: Modern Toolboxes for the Assignment of Complex Molecules by GIAO NMR Calculations. (United States)

    Grimblat, Nicolas; Sarotti, Ariel M


    The calculations of NMR properties of molecules using quantum chemical methods have deeply impacted several branches of organic chemistry. They are particularly important in structural or stereochemical assignments of organic compounds, with implications in total synthesis, stereoselective reactions, and natural products chemistry. In studying the evolution of the strategies developed to support (or reject) a structural proposal, it becomes clear that the most effective and accurate ones involve sophisticated procedures to correlate experimental and computational data. Owing to their relatively high mathematical complexity, such calculations (CP3, DP4, ANN-PRA) are often carried out using additional computational resources provided by the authors (such as applets or Excel files). This Minireview will cover the state-of-the-art of these toolboxes in the assignment of organic molecules, including mathematical definitions, updates, and discussion of relevant examples.
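    A DP4-style score multiplies, over all nuclei, the Student-t likelihood of each calculated-minus-experimental shift error. A simplified sketch (the published DP4 uses the t cumulative distribution and scaled shifts; the density is used here to keep the sketch dependency-free, and the shift lists are invented):

```python
import math

NU, SIGMA = 11.38, 2.306   # Student-t parameters for 13C from the DP4 paper

def t_pdf(x, nu):
    """Student-t probability density with nu degrees of freedom."""
    return (math.gamma((nu + 1) / 2)
            / (math.sqrt(nu * math.pi) * math.gamma(nu / 2))
            * (1 + x * x / nu) ** (-(nu + 1) / 2))

def dp4_style_probabilities(candidates, experimental):
    """Relative DP4-style probabilities for candidate structures.
    candidates: one list of computed 13C shifts (ppm) per isomer;
    experimental: the matching experimental shifts.  NOTE: this is a
    simplification -- rankings coincide with the CDF form here because
    both decrease monotonically with the error magnitude."""
    likelihoods = []
    for calc in candidates:
        L = 1.0
        for c, e in zip(calc, experimental):
            L *= t_pdf((c - e) / SIGMA, NU)   # per-nucleus likelihood
        likelihoods.append(L)
    total = sum(likelihoods)
    return [L / total for L in likelihoods]   # Bayes-normalized

# two hypothetical diastereomers scored against one experimental shift list
exp_shifts = [25.1, 71.8, 128.4]
isomer_a   = [25.9, 72.3, 128.9]   # small errors
isomer_b   = [22.0, 76.0, 131.5]   # larger errors
probs = dp4_style_probabilities([isomer_a, isomer_b], exp_shifts)
```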

  13. Independent-Trajectory Thermodynamic Integration: a practical guide to protein-drug binding free energy calculations using distributed computing. (United States)

    Lawrenz, Morgan; Baron, Riccardo; Wang, Yi; McCammon, J Andrew


    The Independent-Trajectory Thermodynamic Integration (IT-TI) approach for free energy calculation with distributed computing is described. IT-TI utilizes diverse conformational sampling obtained from multiple, independent simulations to obtain more reliable free energy estimates compared to single TI predictions. The latter may significantly under- or over-estimate the binding free energy due to finite sampling. We exemplify the advantages of the IT-TI approach using two distinct cases of protein-ligand binding. In both cases, IT-TI yields distributions of absolute binding free energy estimates that are remarkably centered on the target experimental values. Alternative protocols for the practical and general application of IT-TI calculations are investigated. We highlight a protocol that maximizes predictive power and computational efficiency.
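    The IT-TI recipe reduces to: run several independent TI estimates, integrate ⟨∂U/∂λ⟩ over λ for each, and pool them. A synthetic-data sketch (the ⟨∂U/∂λ⟩ profile and noise level are invented stand-ins for simulation output):

```python
import numpy as np

rng = np.random.default_rng(7)
lam = np.linspace(0.0, 1.0, 11)      # lambda windows

def dUdlam_mean(l, rng):
    """Stand-in for <dU/dlambda> from one simulation window: a smooth
    hypothetical curve plus sampling noise."""
    return -20.0 + 15.0 * l + rng.normal(0.0, 0.5)

# IT-TI: several independent trajectories, each yielding one TI estimate
estimates = []
for _ in range(8):                    # 8 independent runs
    profile = np.array([dUdlam_mean(l, rng) for l in lam])
    # trapezoid rule: dG = integral over lambda of <dU/dlambda>
    dG = 0.5 * np.sum((profile[1:] + profile[:-1]) * np.diff(lam))
    estimates.append(dG)

dG_mean = float(np.mean(estimates))   # pooled IT-TI estimate
dG_sem = float(np.std(estimates, ddof=1) / np.sqrt(len(estimates)))
```

    Pooling independent runs gives both a central estimate and an honest spread, which is the advantage over a single TI trajectory that the abstract describes.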

  14. Calculation of dipole polarizability derivatives of adamantane and their use in electron scattering computations (United States)

    Sauer, Stephan P. A.; Paidarová, Ivana; Čársky, Petr; Čurík, Roman


    In this paper we present calculations of the static polarizability and its derivatives for the adamantane molecule carried out at the density functional theory level using the B3LYP exchange-correlation functional and Sadlej's polarized valence triple zeta basis set. It is shown that the polarizability tensor is necessary to correct long-range behavior of DFT functionals used in electron-molecule scattering calculations. The impact of such a long-range correction is demonstrated on elastic and vibrationally inelastic electron collisions with adamantane, a molecule representing a large polyatomic target for electron scattering calculations. Contribution to the Topical Issue "Advances in Positron and Electron Scattering", edited by Paulo Limao-Vieira, Gustavo Garcia, E. Krishnakumar, James Sullivan, Hajime Tanuma and Zoran Petrovic.

  15. Computational program to neutron flux calculation; Programa computacional para calculo de fluxo de neutrons

    Energy Technology Data Exchange (ETDEWEB)

    Souza, Maria Ines Silvani; Furieri, Rosanne Cefaly de Aranda Amado [Instituto de Engenharia Nuclear (IEN), Rio de Janeiro, RJ (Brazil)


    The absolute value of the neutron flux is of paramount importance in reactor physics and other applications in the nuclear field. Several corrections must be made, such as radioactive decay of the produced nuclides, normalization factors between different irradiations, neutron spectrum perturbation, cross-section behaviour, and the growth of reactor power, among other factors, which make the calculation of the neutron flux very cumbersome. The software FLUXO was developed to overcome these inconveniences. It is programmed in FORTRAN and was written to calculate the absolute flux of thermal, epithermal and fast neutrons through the foil activation technique. The magnitude of this activation can be measured by a 4π β-γ coincidence measurement or by gamma spectroscopy alone. The software also calculates the absolute activity of radioactive sources and of reactor-irradiated samples. (author)
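    The route from measured foil activity to absolute flux follows the standard activation equation; a sketch (FLUXO's additional corrections are omitted, and the foil parameters are a typical gold-foil example, not taken from the paper):

```python
import math

def flux_from_foil(activity_bq, n_atoms, sigma_cm2, half_life_s,
                   t_irr_s, t_decay_s):
    """Neutron flux (n cm^-2 s^-1) from a foil-activation measurement,
    solving the standard activation equation for phi:
        A = N * sigma * phi * (1 - e^{-lambda t_irr}) * e^{-lambda t_decay}
    Corrections listed in the abstract (spectrum perturbation, power
    growth, ...) are NOT included in this sketch."""
    lam = math.log(2.0) / half_life_s
    saturation = 1.0 - math.exp(-lam * t_irr_s)   # build-up during irradiation
    decay = math.exp(-lam * t_decay_s)            # decay before counting
    return activity_bq / (n_atoms * sigma_cm2 * saturation * decay)

# typical gold foil: 10 mg Au-197, sigma = 98.65 b, T1/2(Au-198) = 2.695 d
N = 0.010 / 196.97 * 6.022e23                     # number of Au-197 atoms
phi = flux_from_foil(activity_bq=5.0e4, n_atoms=N, sigma_cm2=98.65e-24,
                     half_life_s=2.695 * 86400.0, t_irr_s=3600.0,
                     t_decay_s=600.0)
```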

  16. Robust computational method for fast calculations of multicharged ions lineshapes affected by a low-frequency electrostatic plasma turbulence (United States)

    Dalimier, E.; Oks, E.


    Transport phenomena in plasmas, such as resistivity, can be affected by electrostatic turbulence, which frequently occurs in various kinds of laboratory and astrophysical plasmas. Transport phenomena are affected most significantly by low-frequency electrostatic turbulence, such as ion acoustic waves (also known as ionic sound), causing anomalous resistivity. In this case, for computing profiles of spectral lines emitted by plasma ions with any appropriate code for diagnostic purposes, it is necessary to calculate the distribution of the total quasistatic field. For a practically important situation, where the average turbulent field is much greater than the characteristic ion microfield, we develop a robust computational method valid for any appropriate distribution of the ion microfield at a charged point. We show that the correction to the Rayleigh distribution of the turbulent field is controlled by the behavior of the ion microfield distribution at large fields, in distinction to the opposite (and therefore erroneous) result in the literature. We also obtain a universal analytical expression for the correction to the Rayleigh distribution based on the asymptotic behavior of the ion microfield distribution at large fields at a charged point. By comparison with various known distributions of the ion microfield, we show that our asymptotic formula has a sufficiently high accuracy. Exact computations are also used to verify the high accuracy of the method. This robust, approximate, but accurate method yields results faster than exact calculations and should therefore be important for practical situations requiring simultaneous computation of a large number of spectral lineshapes (e.g., for calculating opacities), especially for laser-produced plasmas.

  17. Development of 1-year-old computational phantom and calculation of organ doses during CT scans using Monte Carlo simulation. (United States)

    Pan, Yuxi; Qiu, Rui; Gao, Linfeng; Ge, Chaoyong; Zheng, Junzheng; Xie, Wenzhang; Li, Junli


    With the rapidly growing number of CT examinations, the associated radiation risk has attracted increasing attention. The average dose in each organ during CT scans can only be obtained by Monte Carlo simulation with computational phantoms. Since children tend to have higher radiation sensitivity than adults, the radiation dose of pediatric CT examinations requires special attention and needs to be assessed accurately. So far, studies on organ doses from CT exposures for pediatric patients are still limited. In this work, a 1-year-old computational phantom was constructed. The body contour was obtained from the CT images of a 1-year-old physical phantom, and the internal organs were deformed from an existing Chinese reference adult phantom. To ensure that the organ locations in the 1-year-old computational phantom were consistent with those of the physical phantom, the organ locations were manually adjusted one by one, and the organ masses were adjusted to the corresponding Chinese reference values. Moreover, a CT scanner model was developed using the Monte Carlo technique, and the 1-year-old computational phantom was applied to estimate organ doses derived from simulated CT exposures. As a result, a database including doses to 36 organs and tissues from 47 single axial scans was built. It has been verified by calculation that doses from axial scans are close to those from helical scans; therefore, this database can be applied to helical scans as well. Organ doses were calculated using the database and compared with those obtained from measurements made in the physical phantom for helical scans. The differences between simulation and measurement were less than 25% for all organs. The results show that the 1-year-old phantom developed in this work can be used to calculate organ doses in CT exposures, and that the dose database provides a method for estimating 1-year-old patient doses in a variety of CT examinations.

  18. A Computer Program to Calculate the Supersonic Flow over a Solid Cone in Air or Water. (United States)


    in air or water. The main objective is to calculate the cone semi-vertex angle given prescribed initial conditions. The program was written to model the motion of the metal jet from an explosive shaped-charge fired underwater. A typical result for supersonic flow over a cone in water is as follows...the cone semi-vertex angle is calculated to be 7.23 degrees. Generally, pressures involved in water flow are much larger than for air flow, and the

  19. STATIC_TEMP: a useful computer code for calculating static formation temperatures in geothermal wells

    Energy Technology Data Exchange (ETDEWEB)

    Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)


    The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on the five analytical methods most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a useful tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in time and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature for each wellbore temperature data set analysed. STATIC_TEMP was written in Microsoft Fortran-77 for the MS-DOS environment using structured programming techniques. It runs on most IBM-compatible personal computers. The source code and its computational architecture, as well as the input and output files, are described in detail. Validation and application examples of the code with wellbore temperature data (obtained from the specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)

  20. The use of symbolic computation in radiative, energy, and neutron transport calculations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Frankel, J.I.


    This investigation used symbolic manipulation in developing analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular, integral and integro-differential equations that appear in radiative and mixed-mode energy transport. Contained in this report are seven papers which present the technical results as individual modules.

  1. Computer program calculates gamma ray source strengths of materials exposed to neutron fluxes (United States)

    Heiser, P. C.; Ricks, L. O.


    Computer program contains an input library of nuclear data for 44 elements and their isotopes to determine the induced radioactivity for gamma emitters. Minimum input requires the irradiation history of the element, a four-energy-group neutron flux, specification of an alloy composition by elements, and selection of the output.

  2. A Computer Program for Calculation of Calibration Curves for Quantitative X-Ray Diffraction Analysis. (United States)

    Blanchard, Frank N.


    Describes a FORTRAN IV program written to supplement a laboratory exercise dealing with quantitative x-ray diffraction analysis of mixtures of polycrystalline phases in an introductory course in x-ray diffraction. Gives an example of the use of the program and compares calculated and observed calibration data. (Author/GS)

  3. Computational Chemistry Laboratory: Calculating the Energy Content of Food Applied to a Real-Life Problem (United States)

    Barbiric, Dora; Tribe, Lorena; Soriano, Rosario


    In this laboratory, students calculated the nutritional value of common foods to assess the energy content needed to answer an everyday life application; for example, how many kilometers can an average person run with the energy provided by 100 g (3.5 oz) of beef? The optimized geometries and the formation enthalpies of the nutritional components…

  4. Calculation of dipole polarizability derivatives of adamantane and their use in electron scattering computations

    DEFF Research Database (Denmark)

    Sauer, Stephan P. A.; Paidarová, Ivana; Čársky, Petr


    In this paper we present calculations of the static polarizability and its derivatives for the adamantane molecule carried out at the density functional theory level using the B3LYP exchange correlation functional and Sadlej’s polarized valence triple zeta basis set. It is shown that the polariza...

  5. Two methods for calculating regional cerebral blood flow from emission computed tomography of inert gas concentrations

    DEFF Research Database (Denmark)

    Kanno, I; Lassen, N A


    Two methods are described for calculating regional cerebral blood flow from computed tomographic data of radioactive inert gas distribution in a slice of brain tissue. It is assumed that the tomographic picture gives the average inert gas concentration in each pixel over data collection periods...

  6. Using the Metropolis Algorithm to Calculate Thermodynamic Quantities: An Undergraduate Computational Experiment (United States)

    Beddard, Godfrey S.


    Thermodynamic quantities such as the average energy, heat capacity, and entropy are calculated using a Monte Carlo method based on the Metropolis algorithm. This method is illustrated with reference to the harmonic oscillator but is particularly useful when the partition function cannot be evaluated; an example using a one-dimensional spin system…
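    As a sketch of the kind of undergraduate exercise this record describes (hedged: the level scheme, units, and proposal move below are my own illustrative choices, not taken from the article), a Metropolis walk over harmonic-oscillator levels E_n = n (in units of the level spacing, zero-point energy dropped) reproduces the exact Boltzmann average energy:

    ```python
    import math
    import random

    random.seed(1)
    T = 2.0                      # temperature in units of the level spacing / k_B
    beta = 1.0 / T
    steps = 200_000

    # Metropolis algorithm: propose n -> n +/- 1, accept with
    # probability min(1, exp(-beta * dE)); downhill moves always accepted.
    n = 0
    total = 0.0
    for _ in range(steps):
        trial = n + random.choice((-1, 1))
        if trial >= 0 and (trial < n or random.random() < math.exp(-beta * (trial - n))):
            n = trial
        total += n

    mc_energy = total / steps
    exact = 1.0 / math.expm1(beta)   # exact <E> of the geometric Boltzmann distribution
    print(mc_energy, exact)
    ```

    The same loop, with E_n replaced by the energy of a spin configuration and the proposal by a single spin flip, gives the one-dimensional spin-system example mentioned in the abstract.
    
    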

  7. An advanced computational scheme for the optimization of 2D radial reflector calculations in pressurized water reactors

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, T., E-mail: [Institut de Génie Nucléaire, P.O. Box 6079, Station “Centre-Ville”, Montréal, Qc., Canada H3C 3A7 (Canada); Hébert, A., E-mail: [Institut de Génie Nucléaire, P.O. Box 6079, Station “Centre-Ville”, Montréal, Qc., Canada H3C 3A7 (Canada); Leroyer, H.; Argaud, J.P.; Bouriquet, B.; Ponçot, A. [Électricité de France, R and D, SINETICS, 1 Av. du Général de Gaulle, 92141 Clamart (France)


    Highlights: • We present a computational scheme for the determination of reflector properties in a PWR. • The approach is based on the minimization of a functional. • We use a data assimilation method or a parametric complementarity principle. • The reference target is a solution obtained with the method of characteristics. • The simplified flux solution is based on diffusion theory or on the simplified Pn method. - Abstract: This paper presents a computational scheme for the determination of equivalent 2D multi-group spatially dependent reflector parameters in a Pressurized Water Reactor (PWR). The proposed strategy is to define a full-core calculation consistent with a reference lattice code calculation such as the Method Of Characteristics (MOC) as implemented in the APOLLO2 lattice code. The computational scheme presented here relies on the data assimilation module known as “Assimilation de données et Aide à l’Optimisation (ADAO)” of the SALOME platform developed at Électricité De France (EDF), coupled with the full-core code COCAGNE and with the lattice code APOLLO2. A first code-to-code verification of the computational scheme is made using the OPTEX reflector model developed at École Polytechnique de Montréal (EPM). As a result, we obtain 2D multi-group, spatially dependent reflector parameters, using either diffusion or SP{sub N} operators. We observe important improvements in the power discrepancy distribution over the core when using reflector parameters computed with the proposed computational scheme, and the SP{sub N} operator enables additional improvements.

  8. VORCAM: A computer program for calculating vortex lift effect of cambered wings by the suction analogy (United States)

    Lan, C. E.; Chang, J. F.


    A user's guide to an improved version of Woodward's chord plane aerodynamic panel computer code is presented. The code can be applied to cambered wings exhibiting edge-separated flow, including those with leading-edge vortex flow, at subsonic and supersonic speeds. New orientations for the rotated suction force are employed based on the momentum principle. The supersonic suction analogy method is improved by using an effective angle of attack defined through a semiempirical method.

  9. Involving High School Students in Computational Physics University Research: Theory Calculations of Toluene Adsorbed on Graphene (United States)

    Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär


    To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon, and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In this process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research. PMID:27505418

  10. Involving High School Students in Computational Physics University Research: Theory Calculations of Toluene Adsorbed on Graphene. (United States)

    Ericsson, Jonas; Husmark, Teodor; Mathiesen, Christoffer; Sepahvand, Benjamin; Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär; Schröder, Elsebeth


    To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon, and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In this process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research.

  11. Involving high school students in computational physics university research: Theory calculations of toluene adsorbed on graphene

    CERN Document Server

    Ericsson, Jonas; Mathiesen, Christoffer; Sepahvand, Benjamin; Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär; Schröder, Elsebeth


    To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon, and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In this process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research.

  12. DITTY - a computer program for calculating population dose integrated over ten thousand years

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.


    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages.
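    The core quantity this record describes, a time integral of collective dose over a ten-thousand-year period, can be sketched with a simple quadrature. The release rate and dose conversion factor below are purely illustrative placeholders of my own, not DITTY data or models:

    ```python
    import numpy as np

    # Hypothetical time-variant release to a water pathway: 1e9 Bq/yr,
    # decaying with a 1000-year effective half-life (assumed values).
    years = np.linspace(0.0, 10_000.0, 10_001)           # annual grid
    release_rate = 1.0e9 * 0.5 ** (years / 1000.0)       # Bq/yr
    dose_factor = 2.0e-13                                # person-Sv per Bq (assumed)

    # Trapezoidal time integral of the collective dose rate over 10,000 years,
    # analogous to what DITTY accumulates for each exposure pathway.
    dose_rate = release_rate * dose_factor               # person-Sv/yr
    collective_dose = float(np.sum(0.5 * (dose_rate[1:] + dose_rate[:-1])
                                   * np.diff(years)))
    print(collective_dose)                               # person-Sv
    ```

    In the real code this integrand is built from radionuclide transport and pathway models rather than an assumed exponential, but the final step is the same time integration.
    
    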

  13. Parallel calculations on shared memory, NUMA-based computers using MATLAB (United States)

    Krotkiewski, Marcin; Dabrowski, Marcin


    Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantages of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone; most importantly, we use custom memory allocation routines, thread-to-CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU


    Directory of Open Access Journals (Sweden)



    Full Text Available Economic information is an essential element of progress, being present in all fields. With the development of the market economy, economic information must also grow, in order to reflect as accurately as possible the patrimonial situation and the results of the financial and economic activity of enterprises. The main source of economic information is accounting, which is the main instrument for the knowledge, management and control of the assets and results of any enterprise. In this paper we present a computer model for analyzing economic information on profitability and economic risk, applicable both to vegetable farms and to the livestock sector.

  15. FRAPCON-2: A Computer Code for the Calculation of Steady State Thermal-Mechanical Behavior of Oxide Fuel Rods

    Energy Technology Data Exchange (ETDEWEB)

    Berna, G. A; Bohn, M. P.; Rausch, W. N.; Williford, R. E.; Lanning, D. D.


    FRAPCON-2 is a FORTRAN IV computer code that calculates the steady-state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, deformation, and failure histories of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (a) heat conduction through the fuel and cladding, (b) cladding elastic and plastic deformation, (c) fuel-cladding mechanical interaction, (d) fission gas release, (e) fuel rod internal gas pressure, (f) heat transfer between fuel and cladding, (g) cladding oxidation, and (h) heat transfer from cladding to coolant. The code contains the necessary material properties, water properties, and heat transfer correlations. FRAPCON-2 is programmed for use on the CDC Cyber 175 and 176 computers. The FRAPCON-2 code is designed to generate initial conditions for transient fuel rod analysis by either the FRAP-T6 computer code or the thermal-hydraulic code RELAP4/MOD7 Version 2.

  16. PADLOC: a one-dimensional computer program for calculating coolant and plateout fission product concentrations. [HTGR

    Energy Technology Data Exchange (ETDEWEB)

    Hudritsch, W.W.; Smith, P.D.


    The one-dimensional computer program PADLOC is designed to analyze steady-state and time-dependent plateout of fission products in an arbitrary network of pipes. The problem solved is one of mass transport of impurities in a fluid, including the effects of sources in the fluid and on the plateout surfaces, convection along the flow paths, decay, adsorption on surfaces (plateout), and desorption from surfaces. These phenomena are governed by a system of coupled, nonlinear partial differential equations. The solution is achieved by (a) linearizing the equations about an approximate solution, employing a Newton-Raphson iteration technique, (b) employing a finite-difference solution method with implicit time integration, and (c) employing a substructuring technique to logically organize the systems of equations for an arbitrary flow network.
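    The linearize-and-iterate strategy in step (a) is ordinary Newton-Raphson iteration. A scalar sketch of the idea (the nonlinear adsorption balance below is a made-up illustration of mine, not a PADLOC equation):

    ```python
    def newton(f, dfdx, x0, tol=1e-12, max_iter=50):
        """Newton-Raphson: linearize f about the current guess and solve
        the resulting linear problem, repeating until the step is tiny."""
        x = x0
        for _ in range(max_iter):
            step = f(x) / dfdx(x)
            x -= step
            if abs(step) < tol:
                return x
        raise RuntimeError("Newton iteration did not converge")

    # Toy nonlinear surface-concentration balance: f(c) = c + 0.5*c**3 - 2 = 0
    # (purely illustrative; PADLOC solves a coupled PDE system this way).
    root = newton(lambda c: c + 0.5 * c**3 - 2.0,
                  lambda c: 1.0 + 1.5 * c**2,
                  x0=1.0)
    print(root)
    ```

    In PADLOC the scalar division becomes the solution of a linear system for the whole flow network at each iteration, with the implicit time step of (b) nested inside.
    
    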

  17. Computationally Efficient Calculations of Target Performance of the Normalized Matched Filter Detector for Hydroacoustic Signals

    CERN Document Server

    Diamant, Roee


    Detection of hydroacoustic transmissions is a key enabling technology in applications such as depth measurement, detection of objects, and undersea mapping. To cope with the long channel delay spread and the low signal-to-noise ratio, hydroacoustic signals are constructed with a large time-bandwidth product, $N$. A promising detector for hydroacoustic signals is the normalized matched filter (NMF), whose detection threshold depends only on $N$, thereby obviating the need to estimate the characteristics of the sea ambient noise, which are time-varying and hard to estimate. While previous works analyzed the characteristics of the NMF, the available expressions are computationally complicated to evaluate for signals with large $N$ values. Specifically for hydroacoustic signals of large $N$ values, this paper presents approximations for the probability distribution of the NMF. These approximations are found to be extremely accurate in numerical simulations. We also o...
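    For reference, the NMF statistic itself is just the normalized correlation between a received block and the replica signal; its scale-invariance is what makes the threshold depend only on $N$. A sketch (the chirp-like replica and the amplitudes are illustrative assumptions, not the paper's signals):

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    N = 1024                               # block length (stands in for the time-bandwidth product)

    # Reference waveform: an illustrative chirp-like signal, unit energy.
    t = np.arange(N)
    s = np.cos(2.0 * np.pi * (0.01 + 0.0001 * t) * t)
    s /= np.linalg.norm(s)

    def nmf(x, s):
        """Normalized matched filter statistic in [0, 1]: the magnitude of
        the correlation coefficient between block x and replica s."""
        return abs(np.dot(x, s)) / (np.linalg.norm(x) * np.linalg.norm(s))

    noise_only = rng.standard_normal(N)
    received = 50.0 * s + 0.5 * rng.standard_normal(N)

    print(nmf(noise_only, s))              # small for pure noise, regardless of noise power
    print(nmf(received, s))                # near 1 when the replica is present
    ```

    Because the statistic is normalized by the received energy, rescaling the noise leaves its distribution unchanged, which is why the detection threshold can be set from $N$ alone.
    
    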

  18. A calculation procedure for viscous flow in turbomachines, volume 3. [computer programs (United States)

    Khalil, I.; Sheoran, Y.; Tabakoff, W.


    A method for analyzing the nonadiabatic viscous flow through turbomachine blade passages was developed. The field analysis is based upon the numerical integration of the full incompressible Navier-Stokes equations, together with the energy equation on the blade-to-blade surface. A FORTRAN IV computer program was written based on this method. The numerical code used to solve the governing equations employs a nonorthogonal boundary fitted coordinate system. The flow may be axial, radial or mixed and there may be a change in stream channel thickness in the through-flow direction. The inputs required for two FORTRAN IV programs are presented. The first program considers laminar flows and the second can handle turbulent flows. Numerical examples are included to illustrate the use of the program, and to show the results that are obtained.

  19. New Developments on Inverse Polygon Mapping to Calculate Gravitational Lensing Magnification Maps: Optimized Computations (United States)

    Mediavilla, E.; Mediavilla, T.; Muñoz, J. A.; Ariza, O.; Lopez, P.; Gonzalez-Morcillo, C.; Jimenez-Vicente, J.


    We derive an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps. It is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells (inverse polygon mapping, IPM), not including critical points (except perhaps at the cell boundaries). The zeroth-order term of the series expansion leads to the method described by Mediavilla et al. The first-order term is used to study the error induced by the truncation of the series at zeroth order, explaining the high accuracy of the IPM even at this low order of approximation. Interpreting the Inverse Ray Shooting (IRS) method in terms of IPM, we explain the previously reported N^{-3/4} dependence of the IRS error on the number of collected rays per pixel. Cells intersected by critical curves (critical cells) transform to non-simply connected regions with topological pathologies like auto-overlapping or non-preservation of the boundary under the transformation. To define a non-critical partition, we use a linear approximation of the critical curve to divide each critical cell into two non-critical subcells. The optimal choice of the cell size depends basically on the curvature of the critical curves. For typical applications in which the pixel of the magnification map is a small fraction of the Einstein radius, a one-to-one relationship between the cell and pixel sizes in the absence of lensing guarantees both the consistency of the method and a very high accuracy. This prescription is simple but very conservative. We show that substantially larger cells can be used to obtain magnification maps with huge savings in computation time.
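    The IRS baseline that the paper reinterprets can be sketched for a single point lens in Einstein-radius units: rays on a regular image-plane grid are deflected by the lens equation y = x - x/|x|^2 and binned into source-plane pixels, and the count ratio to the unlensed ray density estimates the magnification. The grid sizes and the point-lens choice below are my illustrative assumptions:

    ```python
    import numpy as np

    n_rays = 1500                       # rays per axis on the image plane
    half = 4.0                          # image-plane half-width (Einstein radii)
    xs = np.linspace(-half, half, n_rays)
    X, Y = np.meshgrid(xs, xs)
    R2 = X**2 + Y**2 + 1e-12            # avoid the central singularity

    # Backward (lens-to-source) mapping for a point lens: y = x - x/|x|^2.
    SX = X - X / R2
    SY = Y - Y / R2

    # Collect rays in source-plane pixels covering [-1, 1]^2.
    bins = 50
    H, _, _ = np.histogram2d(SX.ravel(), SY.ravel(),
                             bins=bins, range=[[-1.0, 1.0], [-1.0, 1.0]])

    rays_per_area = (n_rays / (2.0 * half)) ** 2
    pixel_area = (2.0 / bins) ** 2
    mag = H / (rays_per_area * pixel_area)   # magnification map estimate

    # Magnification rises steeply toward the point-source caustic at y = 0.
    print(mag[bins // 2, bins // 2], mag[2, 2])
    ```

    IPM replaces the ray counting with exact polygonal cell areas, which is where the accuracy gain over this simple counting estimate comes from.
    
    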

  20. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco


    Multiscale modelling of tunnel fires that uses a coupled 3D (fire area) and 1D (the rest of the tunnel) model is seen as the solution to the numerical problem of the large domains associated with long tunnels. The present study demonstrates the feasibility of the implementation of this method in FDS version 6.0, a widely used fire-specific, open source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with the one given by the use of multiple processor calculation. This was done using a 1200 m long tunnel with a rectangular cross... The feasibility analysis showed a difference of only 2% in temperature results from the published reference work that was performed with Ansys Fluent (Colella et al., 2010). The reduction in simulation time was significantly larger when using multiscale modelling than when performing multiple processor calculation directly.

  1. Protonation Sites, Tandem Mass Spectrometry and Computational Calculations of o-Carbonyl Carbazolequinone Derivatives (United States)

    Martínez-Cifuentes, Maximiliano; Clavijo-Allancan, Graciela; Zuñiga-Hormazabal, Pamela; Aranda, Braulio; Barriga, Andrés; Weiss-López, Boris; Araya-Maturana, Ramiro


    A series of a new type of tetracyclic carbazolequinones incorporating a carbonyl group at the ortho position relative to the quinone moiety was synthesized and analyzed by tandem electrospray ionization mass spectrometry (ESI/MS-MS), using Collision-Induced Dissociation (CID) to dissociate the protonated species. Theoretical parameters such as molecular electrostatic potential (MEP), local Fukui functions and the local Parr function for electrophilic attack, as well as proton affinity (PA) and gas-phase basicity (GB), were used to explain the preferred protonation sites. Transition states of some main fragmentation routes were obtained, and the energies calculated at the density functional theory (DFT) B3LYP level were compared with those obtained by ab initio quadratic configuration interaction with single and double excitation (QCISD). The results are in accordance with the observed distribution of ions. The nature of the substituents in the aromatic ring has a notable impact on the fragmentation routes of the molecules. PMID:27399676

  2. Electronic stopping power calculation for water under the Lindhard formalism for application in proton computed tomography (United States)

    Guerrero, A. F.; Mesa, J.


    Because of the way charged particles behave when they interact with biological material, proton therapy is shaping the future of radiation therapy in cancer treatment. The planning of radiation therapy is made up of several stages. The first one is the diagnostic image, which gives an idea of the density, size and type of the tumor being treated; to obtain this it is important to know how the particle beam interacts with the tissue. In this work, using the Lindhard formalism and the Y.R. Waghmare model for the charge distribution of the proton, the electronic stopping power (SP) for a proton beam interacting with a liquid water target is calculated in the range of proton energies 10^1 eV - 10^10 eV, taking into account all the charge states.

  3. A computer code for calculations in the algebraic collective model of the atomic nucleus

    CERN Document Server

    Welsh, T A


    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) x SO(5) dynamical group. This, in particular, obviates the use of coefficients of fractional parentage. This paper reviews the mathematical formulation of the ACM and serves as a manual for the code. The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [pi x q x pi]_0 and [pi x pi]_{LM}, where q_M are the model's quadrupole moments and pi_N are the corresponding conjugate momenta (-2 <= M,N <= 2). The code also provides ready access to SO(3)-reduced SO(5) Clebsch-Gordan coefficients through data files provided with the code.

  4. Structural analysis of char by Raman spectroscopy: Improving band assignments through computational calculations from first principles

    Energy Technology Data Exchange (ETDEWEB)

    Smith, Matthew W.; Dallmeyer, Ian; Johnson, Timothy J.; Brauer, Carolyn S.; McEwen, Jean-Sabin; Espinal, Juan F.; Garcia-Perez, Manuel


    Raman spectroscopy is a powerful tool for the characterization of many carbon species. The complex heterogeneous nature of chars and activated carbons has confounded complete analysis due to the additional shoulders observed on the D-band and the high-intensity valley between the D and G-bands. In this paper the effects of various vacancy and substitution defects have been systematically analyzed via molecular modeling using density functional theory (DFT), along with how these effects are manifested in the calculated gas-phase Raman spectra. The accuracy of these calculations was validated by comparison with (solid-phase) experimental spectra, with a small correction factor being applied to improve the accuracy of frequency predictions. The spectroscopic effects on the char species are best understood in terms of a reduced symmetry as compared to a “parent” coronene molecule. Based upon the simulation results, the shoulder observed in chars near 1200 cm-1 has been assigned to the totally symmetric A1g vibrations of various small polyaromatic hydrocarbons (PAH) as well as those containing rings of seven or more carbons. Intensity between 1400 cm-1 and 1450 cm-1 is assigned to A1g-type vibrations present in small PAHs, especially those containing cyclopentane rings. Finally, band intensity between 1500 cm-1 and 1550 cm-1 is ascribed to predominantly E2g vibrational modes in strained PAH systems. A total of ten potential bands have been assigned between 1000 cm-1 and 1800 cm-1. These fitting parameters have been used to deconvolute a thermoseries of cellulose chars produced by pyrolysis at 300-700 °C. The results of the deconvolution show consistent growth of PAH clusters with temperature, development of non-benzyl rings as temperature increases, and loss of oxygenated features between 400 °C and 600 °C.

  5. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Y.C. [Square Y, Orchard Park, NY (United States); Chen, S.Y.; LePoire, D.J. [Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Rothman, R. [USDOE Idaho Field Office, Idaho Falls, ID (United States)


    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, semi-interactive program that can be run on an IBM or equivalent personal computer. The program language is FORTRAN-77. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors.

  6. User's guide for the computer code COLTS for calculating the coupled laminar and turbulent flow over a Jovian entry probe (United States)

    Kumar, A.; Graeves, R. A.


    A user's guide is provided for the computer code COLTS (Coupled Laminar and Turbulent Solutions), which calculates laminar and turbulent hypersonic flows with radiation and coupled ablation injection past a Jovian entry probe. Time-dependent viscous-shock-layer equations are used to describe the flow field. These equations are solved by an explicit, two-step, time-asymptotic finite-difference method. Eddy viscosity in the turbulent flow is approximated by a two-layer model. In all, 19 chemical species are used to describe the injection of carbon-phenolic ablator into the hydrogen-helium gas mixture. The equilibrium composition of the mixture is determined by a free-energy minimization technique. A detailed frequency dependence of the absorption coefficient for the various species is considered to obtain the radiative flux. The code is written for a CDC-CYBER-203 computer and is also capable of providing solutions for ablated probe shapes.
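The "explicit, two-step" time-marching idea can be illustrated with a MacCormack-style predictor-corrector step for the scalar advection equation. This is a generic sketch of the class of scheme, not COLTS itself, and the grid and coefficients below are made up for the example.

```python
import numpy as np

def maccormack_step(u, a, dt, dx):
    """One two-step (predictor-corrector) update for u_t + a u_x = 0
    on a periodic grid: forward difference in the predictor, backward
    difference in the corrector, then average."""
    up = u - a * dt / dx * (np.roll(u, -1) - u)          # predictor
    return 0.5 * (u + up - a * dt / dx * (up - np.roll(up, 1)))  # corrector

# March a smooth profile in time at CFL number 0.5
n, a = 100, 1.0
x = np.linspace(0.0, 1.0, n, endpoint=False)
u = np.sin(2 * np.pi * x)
dx = x[1] - x[0]
dt = 0.5 * dx / a
for _ in range(200):
    u = maccormack_step(u, a, dt, dx)
print(float(np.max(np.abs(u))))  # amplitude remains bounded (stable at CFL 0.5)
```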

  7. A computer code for forward calculation and inversion of the H/V spectral ratio under the diffuse field assumption (United States)

    García-Jerez, Antonio; Piña-Flores, José; Sánchez-Sesma, Francisco J.; Luzón, Francisco; Perton, Mathieu


    During a quarter of a century, the main characteristics of the horizontal-to-vertical spectral ratio of ambient noise HVSRN have been extensively used for site effect assessment. In spite of the uncertainties about the optimum theoretical model to describe these observations, over the last decade several schemes for inversion of the full HVSRN curve for near surface surveying have been developed. In this work, a computer code for forward calculation of H/V spectra based on the diffuse field assumption (DFA) is presented and tested. It takes advantage of the recently stated connection between the HVSRN and the elastodynamic Green's function which arises from the ambient noise interferometry theory. The algorithm allows for (1) a natural calculation of the Green's functions imaginary parts by using suitable contour integrals in the complex wavenumber plane, and (2) separate calculation of the contributions of Rayleigh, Love, P-SV and SH waves as well. The stability of the algorithm at high frequencies is preserved by means of an adaptation of the Wang's orthonormalization method to the calculation of dispersion curves, surface-waves medium responses and contributions of body waves. This code has been combined with a variety of inversion methods to make up a powerful tool for passive seismic surveying.

  8. Computer programming for nucleic acid studies. II. Total chemical shifts calculation of all protons of double-stranded helices. (United States)

    Giessner-Prettre, C; Ribas Prado, F; Pullman, B; Kan, L; Kast, J R; Ts'o, P O


    A FORTRAN computer program called SHIFTS is described. Through SHIFTS, one can calculate the NMR chemical shifts of the proton resonances of single- and double-stranded nucleic acids of known sequence and predetermined conformation. The program can handle RNA and DNA for an arbitrary sequence of a set of 4 out of the 6 base types A, U, G, C, I and T. Data files for the geometrical parameters are available for the A-, A'-, B-, D- and S-conformations. The positions of all the atoms are calculated using a modified version of the SEQ program [1]. Then, based on this defined geometry, three chemical shift effects exerted by the atoms of the neighboring nucleotides on the protons of each monomeric unit are calculated separately: the ring current shielding effect; the local atomic magnetic susceptibility effect (including both diamagnetic and paramagnetic terms); and the polarization or electric field effect. Results of the program are compared with experimental results for a gamma (ApApGpCpUpU)2 helical duplex and with calculated results on this same helix based on model building of the A'-form and B-form and on a graphical procedure for evaluating the ring current effects.

  9. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ. (United States)

    Greene, William H.


    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
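The "overall finite difference" approach described above amounts to re-running the analysis at perturbed designs and differencing the responses. A minimal sketch, using a toy one-degree-of-freedom "analysis" in place of a finite element model (the spring example and step size are assumptions for illustration):

```python
def fd_sensitivity(response, x, h=1e-6):
    """Overall finite-difference sensitivity: repeat the analysis at
    perturbed designs x +/- h and form a central difference."""
    return (response(x + h) - response(x - h)) / (2.0 * h)

# Toy "analysis": static displacement u = F / k of a spring; the
# sensitivity of u with respect to stiffness k is analytically -F/k^2.
F = 10.0
u = lambda k: F / k
print(fd_sensitivity(u, 2.0))
```

In a real transient analysis, `response` would be an entire reduced-basis time integration, which is why the text stresses reusing the original design's approximation vectors to keep the repeated analyses affordable.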

  10. Analyse des erreurs dans les calculs sur ordinateurs Error Analysis in Computing

    Directory of Open Access Journals (Sweden)

    Vignes J.


    Full Text Available This paper describes a new method for evaluating the error in the results of computation of an algorithm, errors due to the limited-precision arithmetic of the machine. The basic idea underlying the method is that while in algebra a given algorithm provides a single result r, this same algorithm carried out on a computer provides a set R of numerical results that are all representative of the exact algebraic result r. The permutation-perturbation method described here can be used to obtain the elements of R. The perturbation acts on the data and results of each elementary operation, and the permutation acts on the order in which operations are carried out. A statistical analysis of the elements of R is performed to determine the error committed. In practice, 2 to 4 elements of R are sufficient for determining the error.
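The perturbation half of the method can be sketched by running the same algorithm a few times on randomly last-bit-perturbed data and estimating the common significant digits from the spread of the results. This is a simplified stand-in for the full permutation-perturbation scheme (the permutation of operation order is omitted, and the digit estimate is a crude heuristic):

```python
import math
import random

def perturb(x, eps=2**-52):
    """Randomly perturb the last bit of a floating-point value."""
    return x * (1.0 + random.choice((-1.0, 1.0)) * eps)

def perturbation_digits(algorithm, data, runs=3):
    """Run the algorithm on randomly perturbed copies of the data and
    estimate the number of common significant decimal digits from the
    spread of the results."""
    results = [algorithm([perturb(v) for v in data]) for _ in range(runs)]
    mean = sum(results) / runs
    spread = max(results) - min(results)
    if spread == 0.0:
        return mean, 15  # all runs agree to full double precision
    rel = spread / max(abs(mean), 1e-300)
    return mean, max(0, int(-math.log10(rel)))

# Well-conditioned sum: essentially all digits survive
print(perturbation_digits(lambda d: d[0] + d[1], [1.0, 2.0]))
# Ill-conditioned sum: large cancellation destroys significant digits
print(perturbation_digits(lambda d: (d[0] + d[1]) - d[2], [1e16, 1.0, 1e16]))
```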

  11. A computer code to calculate the fast induced signals by electron swarms in gases

    Energy Technology Data Exchange (ETDEWEB)

    Tobias, Carmen C.B. [Instituto de Pesquisas Energeticas e Nucleares (IPEN/CNEN-SP), Sao Paulo, SP (Brazil); Mangiarotti, Alessio [Universidade de Coimbra (Portugal). Dept. de Fisica. Lab. de Instrumentacao e Fisica Experimental de Particulas


    Full text: The study of electron transport parameters (i.e. drift velocity, diffusion coefficients and first Townsend coefficient) in gases is very important in several areas of applied nuclear science. For example, they are a relevant input to the design of particle detectors employing micro-structures (MSGC's, micromegas, GEM's) and RPC's (resistive plate chambers). Moreover, if the data are accurate and complete enough, they can be used to derive a set of electron impact cross-sections with their energy dependence, which are a key ingredient in micro-dosimetry calculations. Despite the fundamental need for such data and the long history of the field, the gases of possible interest are so many, and the effort of obtaining good quality data so time demanding, that an important contribution can still be made. As an example, electron drift velocities at moderate field strengths (up to 50 Td) in pure isobutane (a tissue-equivalent gas) have been measured only recently by the IPEN-LIP collaboration using a dedicated setup. The transport parameters are derived from the recorded electric pulse induced by a swarm started with a pulsed laser shining on the cathode. To aid the data analysis, a special code has been developed to calculate the induced pulse by solving the electron continuity equation including growth, drift and diffusion. A realistic profile of the initial laser beam is taken into account, as well as the boundary conditions at the cathode and anode. The approach is either semi-analytic, based on the expression derived by P. H. Purdie and J. Fletcher, or fully numerical, using a finite difference scheme improved over the one introduced by J. de Urquijo et al. The agreement between the two will be demonstrated under typical conditions for the mentioned experimental setup. A brief discussion on the stability of the finite difference scheme will be given. The new finite difference scheme allows a detailed investigation of the importance of back diffusion to
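The continuity equation with growth, drift and diffusion mentioned above can be sketched with a simple explicit finite-difference step. This is a generic illustration under assumed, dimensionless parameters, not the record's improved scheme:

```python
import numpy as np

def swarm_step(n, v, D, alpha, dt, dz):
    """One explicit step of the electron continuity equation
    dn/dt = alpha*v*n - v dn/dz + D d2n/dz2, with absorbing electrodes."""
    adv = -v * (n - np.roll(n, 1)) / dz                       # upwind drift
    dif = D * (np.roll(n, -1) - 2 * n + np.roll(n, 1)) / dz**2  # diffusion
    n_new = n + dt * (adv + dif + alpha * v * n)              # Townsend growth
    n_new[0] = n_new[-1] = 0.0                                # cathode/anode
    return n_new

# A Gaussian swarm released near the cathode drifts toward the anode
z = np.linspace(0.0, 1.0, 200)
n = np.exp(-((z - 0.1) / 0.02) ** 2)
for _ in range(300):
    n = swarm_step(n, v=1.0, D=1e-4, alpha=0.5, dt=1e-3, dz=z[1] - z[0])
print(float(z[np.argmax(n)]))  # swarm peak has moved downstream of z = 0.1
```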

  12. Computer-based training for improving mental calculation in third- and fifth-graders. (United States)

    Caviola, Sara; Gerotto, Giulia; Mammarella, Irene C


    The literature on intervention programs to improve arithmetical abilities is fragmentary and few studies have examined training on the symbolic representation of numbers (i.e. Arabic digits). In the present research, three groups of 3rd- and 5th-grade schoolchildren were given training on mental additions: 76 were assigned to a computer-based strategic training (ST) group, 73 to a process-based training (PBT) group, and 71 to a passive control (PC) group. Before and after the training, the children were given a criterion task involving complex addition problems, a nearest transfer task on complex subtraction problems, two near transfer tasks on math fluency, and a far transfer task on numerical reasoning. Our results showed developmental differences: 3rd-graders benefited more from the ST, with transfer effects on subtraction problems and math fluency, while 5th-graders benefited more from the PBT, improving their response times in the criterion task. Developmental, clinical and educational implications of these findings are discussed.

  13. Highly Accurate Frequency Calculations of Crab Cavities Using the VORPAL Computational Framework

    Energy Technology Data Exchange (ETDEWEB)

    Austin, T.M.; /Tech-X, Boulder; Cary, J.R.; /Tech-X, Boulder /Colorado U.; Bellantoni, L.; /Argonne


    We have applied the Werner-Cary method [J. Comp. Phys. 227, 5200-5214 (2008)] for extracting modes and mode frequencies from time-domain simulations of crab cavities, as are needed for the ILC and the beam delivery system of the LHC. This method for frequency extraction relies on a small number of simulations and post-processing using the SVD algorithm with Tikhonov regularization. The time-domain simulations were carried out using the VORPAL computational framework, which is based on the eminently scalable finite-difference time-domain algorithm. A validation study was performed on an aluminum model of the 3.9 GHz RF separators built originally at Fermi National Accelerator Laboratory in the US. Comparisons with measurements of the A15 cavity show that this method can provide accuracy to within 0.01% of experimental results after accounting for manufacturing imperfections. To capture the near degeneracies, two simulations, requiring in total a few hours on 600 processors, were employed. This method has applications across many areas, including obtaining MHD spectra from time-domain simulations.
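The SVD-with-Tikhonov-regularization post-processing step can be sketched as a damped least-squares solve, which stabilizes the fit when two modes are nearly degenerate. The two-mode test signal below is an invented toy problem, not the cavity data:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Solve min ||Ax - b||^2 + lam^2 ||x||^2 via the SVD, damping the
    small singular values that make near-degenerate fits ill-conditioned."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + lam**2)          # Tikhonov filter factors
    return Vt.T @ (filt * (U.T @ b))

# Fit amplitudes of two nearly degenerate mode frequencies to noisy samples
rng = np.random.default_rng(1)
t = np.linspace(0.0, 1.0, 50)
A = np.column_stack([np.cos(2 * np.pi * 3.90 * t), np.cos(2 * np.pi * 3.91 * t)])
b = A @ np.array([1.0, 0.5]) + 1e-3 * rng.standard_normal(50)
x = tikhonov_svd(A, b, lam=1e-2)
print(x)
```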

  14. A computer code for calculations in the algebraic collective model of the atomic nucleus (United States)

    Welsh, T. A.; Rowe, D. J.


    A Maple code is presented for algebraic collective model (ACM) calculations. The ACM is an algebraic version of the Bohr model of the atomic nucleus, in which all required matrix elements are derived by exploiting the model's SU(1,1) × SO(5) dynamical group. This paper reviews the mathematical formulation of the ACM, and serves as a manual for the code. The code enables a wide range of model Hamiltonians to be analysed. This range includes essentially all Hamiltonians that are rational functions of the model's quadrupole moments q̂_M and are at most quadratic in the corresponding conjugate momenta π̂_N (-2 ≤ M, N ≤ 2). The code makes use of expressions for matrix elements derived elsewhere and newly derived matrix elements of the operators [π̂ ⊗ q̂ ⊗ π̂]_0 and [π̂ ⊗ π̂]_LM. The code is made efficient by use of an analytical expression for the needed SO(5)-reduced matrix elements, and use of SO(5) ⊃ SO(3) Clebsch-Gordan coefficients obtained from precomputed data files provided with the code.

  15. User's manual to the ICRP Code: a series of computer programs to perform dosimetric calculations for the ICRP Committee 2 report

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.B.; Ford, M.R.


    A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations for computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limit on intake, and derived air concentration.
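The cumulated-activity module can be illustrated for the simplest retention model, a single exponential cleared by both radioactive decay and biological elimination. This is a textbook simplification of the ICRP compartment models, with illustrative half-lives, not the code's algorithm:

```python
import math

def cumulated_activity(A0, t_half_rad, t_half_bio, t):
    """Cumulated activity (Bq*s) over time t for single-exponential
    retention: A_tilde = A0/lam * (1 - exp(-lam*t)), where lam combines
    the radiological and biological decay constants."""
    lam = math.log(2) / t_half_rad + math.log(2) / t_half_bio
    return A0 / lam * (1.0 - math.exp(-lam * t))

# 1 Bq deposited; 8-day radiological and 80-day biological half-lives,
# integrated over a 50-year commitment period (all values illustrative)
print(cumulated_activity(1.0, 8 * 86400, 80 * 86400, 50 * 365 * 86400))
```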

  16. Design of a computer software for calculation of required barrier against radiation at the diagnostic x-ray units

    Directory of Open Access Journals (Sweden)

    S.A. Rahimi


    Full Text Available Background and purpose: Installation of protective barriers against diagnostic x-rays is generally done based on the recommendations of NCRP 49. Analytic methods exist for designing protective barriers; however, they lack sufficient efficiency. According to the NCRP 49 reports, designing a mechanical protective barrier against the primary x-ray beam differs from shielding against the lower-quality scattered and leakage radiation; therefore, the protective barrier for each type of radiation is calculated separately. In this study, a computer program was designed to calculate the needed barrier with high accuracy. Materials and methods: Calculation of the required protective barrier is time-consuming and impractical to perform manually, particularly when two or more generators are in use at a diagnostic x-ray unit, when the installed diagnostic equipment does not have proper room space, or when other changes in parameters are constrained. For proper determination of the thickness of the protective barrier, relevant information such as radiation attenuation curves and dose limits should be entered. The program runs under Windows and is designed so that the operator works easily; the flexibility of the program is acceptable and its accuracy and sensitivity are high. Results: Results of this program indicate that, in most cases, x-ray units did not use the required protective barrier, while in other cases the shielding exceeds what is required, which lacks technical standards and cost effectiveness. When the application index is different from zero, the thickness from the NCRP 49 calculation is about 20% less than that calculated by the method of this study. When the application index is equal to zero (the only situation in which the secondary barrier is considered), the thickness of the required lead barrier is about 15% less, and the concrete barrier calculated in this project is 8% less, than that calculated by the McGuire method. Conclusion: In this study proper
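The NCRP 49-style primary-barrier calculation can be sketched in two steps: the required broad-beam transmission factor, then a thickness from the material's tenth-value layer. The numerical inputs below (dose limit, workload, TVL) are illustrative placeholders, not values from the study:

```python
import math

def required_transmission(P, d, W, U, T):
    """NCRP 49 broad-beam transmission factor B = P*d^2 / (W*U*T), with
    P the weekly design dose limit, d the distance in metres, W the
    workload, U the use factor and T the occupancy factor."""
    return P * d**2 / (W * U * T)

def barrier_thickness(B, tvl):
    """Barrier thickness from the tenth-value layer: each TVL cuts the
    transmitted dose by a factor of ten."""
    return 0.0 if B >= 1.0 else tvl * math.log10(1.0 / B)

# Illustrative numbers: P = 0.1 mGy/week at d = 2 m, W = 400 mGy*m^2/week,
# U = T = 1, and a lead TVL of 0.8 mm (placeholder value)
B = required_transmission(0.1, 2.0, 400.0, 1.0, 1.0)
print(B, barrier_thickness(B, tvl=0.8))  # B = 1e-3 needs 3 TVLs = 2.4 mm
```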

  17. Evaluation of an asymmetric stent patch design for a patient specific intracranial aneurysm using computational fluid dynamic (CFD) calculations in the computed tomography (CT) derived lumen (United States)

    Kim, Minsuok; Ionita, Ciprian; Tranquebar, Rekha; Hoffmann, Kenneth R.; Taulbee, Dale B.; Meng, Hui; Rudin, Stephen


    Stenting may provide a new, less invasive therapeutic option for cerebral aneurysms. However, a conventional porous stent may be insufficient in modifying the blood flow for clinical aneurysms. We designed an asymmetric stent consisting of a low porosity patch welded onto a porous stent for an anterior cerebral artery aneurysm of a specific patient geometry to block the strong inflow jet. To evaluate the effect of the patch on aneurysmal flow dynamics, we "virtually" implanted it into the patient's aneurysm geometry and performed Computational Fluid Dynamics (CFD) analysis. The patch was computationally deformed to fit into the vessel lumen segmented from the patient CT reconstructions. After the flow calculations, a patch with the same design was fabricated using laser cutting techniques and welded onto a commercial porous stent, creating a patient-specific asymmetric stent. This stent was implanted into a phantom, which was imaged with X-ray angiography. The hemodynamics of untreated and stented aneurysms were compared both computationally and experimentally. It was found from CFD of the patient aneurysm that the asymmetric stent effectively blocked the strong inflow jet into the aneurysm and eliminated the flow impingement on the aneurysm wall at the dome. The impact zone with elevated wall shear stress was eliminated, the aneurysmal flow activity was substantially reduced, and the inflow was considerably diminished. Experimental observations corresponded well qualitatively with the CFD results. The demonstrated asymmetric stent could lead to a new minimally invasive image guided intervention to reduce aneurysm growth and rupture.

  18. The DEPOSIT computer code: Calculations of electron-loss cross-sections for complex ions colliding with neutral atoms (United States)

    Litsarev, Mikhail S.


    A description of the DEPOSIT computer code is presented. The code is intended to calculate total and m-fold electron-loss cross-sections (m is the number of ionized electrons) and the energy T(b) deposited to the projectile (positive or negative ion) during a collision with a neutral atom at low and intermediate collision energies, as a function of the impact parameter b. The deposited energy is calculated as a 3D integral over the projectile coordinate space in the classical energy-deposition model. Examples of the calculated deposited energies, ionization probabilities and electron-loss cross-sections are given, as well as a description of the input and output data. Program summary: Program title: DEPOSIT. Catalogue identifier: AENP_v1_0. Program summary URL: Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: GNU General Public License version 3. No. of lines in distributed program, including test data, etc.: 8726. No. of bytes in distributed program, including test data, etc.: 126650. Distribution format: tar.gz. Programming language: C++. Computer: any computer that can run a C++ compiler. Operating system: any operating system that can run C++. Has the code been vectorised or parallelized?: an MPI version is included in the distribution. Classification: 2.4, 2.6, 4.10, 4.11. Nature of problem: for a given impact parameter b, to calculate the deposited energy T(b) as a 3D integral over coordinate space, and the ionization probabilities Pm(b); for a given energy, to calculate the total and m-fold electron-loss cross-sections using the T(b) values. Solution method: direct calculation of the 3D integral T(b). A one-dimensional quadrature formula of the highest accuracy, based upon the nodes of the Jacobi polynomials for the cos θ = x ∈ [-1, 1] angular variable, is applied. The Simpson rule is used for the φ ∈ [0, 2π] angular variable. The Newton-Cotes pattern of the seventh order
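The angular part of such a 3D integral can be sketched with a product quadrature: a Gaussian rule in x = cos θ and a uniform rule in φ. For simplicity this sketch substitutes Gauss-Legendre nodes for the Jacobi-based rule and a periodic trapezoid for Simpson; it illustrates the structure, not DEPOSIT's actual quadrature:

```python
import numpy as np

def sphere_integral(f, n_theta=16, n_phi=64):
    """Integrate f(x, phi) over the unit sphere using Gauss-Legendre
    nodes in x = cos(theta) and a uniform periodic rule in phi."""
    x, w = np.polynomial.legendre.leggauss(n_theta)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi, endpoint=False)
    vals = np.array([[f(xi, p) for p in phi] for xi in x])
    return float((w @ vals.sum(axis=1)) * (2.0 * np.pi / n_phi))

# Sanity check: the surface area of the unit sphere is 4*pi
print(sphere_integral(lambda x, p: 1.0))
```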

  19. ALPHN: A computer program for calculating ([alpha], n) neutron production in canisters of high-level waste

    Energy Technology Data Exchange (ETDEWEB)

    Salmon, R.; Hermann, O.W.


    The rate of neutron production from ([alpha], n) reactions in canisters of immobilized high-level waste containing borosilicate glass or glass-ceramic compositions is significant and must be considered when estimating neutron shielding requirements. The personal computer program ALPHN calculates the ([alpha], n) neutron production rate of a canister of vitrified high-level waste. The user supplies the chemical composition of the glass or glass-ceramic and the curies of the alpha-emitting actinides present. The output of the program gives the ([alpha], n) neutron production of each actinide in neutrons per second and the total for the canister. The ([alpha], n) neutron production rates are source terms only; that is, they are production rates within the glass and do not take into account the shielding effect of the glass. For a given glass composition, the user can calculate up to eight cases simultaneously; these cases are based on the same glass composition but contain different quantities of actinides per canister. In a typical application, these cases might represent the same canister of vitrified high-level waste at eight different decay times. Run time for a typical problem containing 20 chemical species, 24 actinides, and 8 decay times was 35 s on an IBM AT personal computer. Results of an example based on an expected canister composition at the Defense Waste Processing Facility are shown.
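The per-actinide and total source-term bookkeeping described above reduces to activities multiplied by composition-dependent neutron yields. A minimal sketch; the actinide names are real nuclides but every yield and activity value below is an invented placeholder, not DWPF data:

```python
def alpha_n_rate(actinide_curies, yields_per_ci):
    """Total (alpha, n) source term: sum over actinides of activity (Ci)
    times a glass-composition-dependent neutron yield (n/s per Ci)."""
    per_actinide = {a: ci * yields_per_ci[a] for a, ci in actinide_curies.items()}
    return per_actinide, sum(per_actinide.values())

# Illustrative placeholder inventory and yields
rates, total = alpha_n_rate(
    {"Pu-238": 100.0, "Am-241": 50.0, "Cm-244": 10.0},
    {"Pu-238": 1.0e4, "Am-241": 8.0e3, "Cm-244": 1.2e4},
)
print(rates, total)  # total = 1.52e6 n/s for these made-up inputs
```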

  1. Computer-aided calculating technological dimension chain%计算机辅助求解工艺尺寸链

    Institute of Scientific and Technical Information of China (English)



    Computer-aided analysis and calculation of technological dimension chains is an indispensable part of CAPP. A software system for computer-aided conversion of technological dimension chains was developed for the case in which the process datum does not coincide with the design datum, and the steps of establishing and solving a dimension chain with the help of the computer are illustrated through a concrete example. The system adds a real-time plotting function and has a user-friendly interface, greatly improving the speed and quality of dimension-chain solution.
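The core arithmetic of a linear dimension chain is simple to sketch: the closing link's nominal size is the sum of the increasing links minus the sum of the decreasing links, and worst-case deviations stack accordingly. The link values below are made-up illustrations:

```python
def closing_link(increasing, decreasing):
    """Worst-case solution of a linear dimension chain. Each link is a
    (nominal, upper_dev, lower_dev) triple, e.g. in millimetres."""
    nom = sum(n for n, _, _ in increasing) - sum(n for n, _, _ in decreasing)
    upper = sum(u for _, u, _ in increasing) - sum(l for _, _, l in decreasing)
    lower = sum(l for _, _, l in increasing) - sum(u for _, u, _ in decreasing)
    return nom, upper, lower

# One 50 +/- 0.1 mm increasing link and one 20 +/- 0.05 mm decreasing link
print(closing_link([(50.0, 0.1, -0.1)], [(20.0, 0.05, -0.05)]))
```

Note how the closing link's tolerance band is the sum of all member tolerances, which is exactly why re-dimensioning against a non-coincident datum degrades achievable accuracy.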

  2. Computational Modeling and Theoretical Calculations on the Interactions between Spermidine and Functional Monomer (Methacrylic Acid in a Molecularly Imprinted Polymer

    Directory of Open Access Journals (Sweden)

    Yujie Huang


    Full Text Available This paper theoretically investigates interactions between a template and functional monomer required for synthesizing an efficient molecularly imprinted polymer (MIP). We employed density functional theory (DFT) to compute the geometry, single-point energy, and binding energy (ΔE) of an MIP system, where spermidine (SPD) and methacrylic acid (MAA) were selected as template and functional monomer, respectively. The geometry was calculated by using the B3LYP method with the 6-31+G(d) basis set. Furthermore, the 6-311++G(d,p) basis set was used to compute the single-point energy of the above geometry. The optimized geometries at different template-to-functional-monomer molar ratios, the mode of bonding between template and functional monomer, the changes in charge from natural bond orbital (NBO) analysis, and the binding energy were analyzed. The simulation results show that SPD and MAA form a stable complex via hydrogen bonding. At a 1:5 SPD-to-MAA ratio, the binding energy is minimum, while the amount of transferred charge between the molecules is maximum; SPD and MAA form a stable complex at the 1:5 molar ratio through six hydrogen bonds. Optimizing the structure of the template-functional monomer complex through computational modeling prior to synthesis significantly contributes towards choosing a suitable template-functional monomer pair that yields an efficient MIP with high specificity and selectivity.

  3. Two computational approaches for Monte Carlo based shutdown dose rate calculation with applications to the JET fusion machine

    Energy Technology Data Exchange (ETDEWEB)

    Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)


    shortly after the deuterium-tritium experiment (DTE1) in 1997. Large computing power, both in terms of amount of data handling and storage and the CPU computing time is needed by the two methods, partly due to the complexity of the problem. With parallel versions of the MCNP code, running on two different platforms, a satisfying accuracy of the calculation has been reached in reasonable times. (authors)

  4. A computer code for forward calculation and inversion of the H/V spectral ratio under the diffuse field assumption

    CERN Document Server

    García-Jerez, Antonio; Sánchez-Sesma, Francisco J; Luzón, Francisco; Perton, Mathieu


    During a quarter of a century, the main characteristics of the horizontal-to-vertical spectral ratio of ambient noise HVSRN have been extensively used for site effect assessment. In spite of the uncertainties about the optimum theoretical model to describe these observations, several schemes for inversion of the full HVSRN curve for near surface surveying have been developed over the last decade. In this work, a computer code for forward calculation of H/V spectra based on the diffuse field assumption (DFA) is presented and tested. It takes advantage of the recently stated connection between the HVSRN and the elastodynamic Green's function which arises from the ambient noise interferometry theory. The algorithm allows for (1) a natural calculation of the Green's functions imaginary parts by using suitable contour integrals in the complex wavenumber plane, and (2) separate calculation of the contributions of Rayleigh, Love, P-SV and SH waves as well. The stability of the algorithm at high frequencies is preserv...

  5. Evaluation of open MPI and MPICH2 performances for the computation time in proton therapy dose calculations with Geant4 (United States)

    Kazemi, M.; Afarideh, H.; Riazi, Z.


    The aim of this research work is to use a better parallel software structure to improve the performance of the Monte Carlo Geant4 code in proton treatment planning. The hadron therapy simulation is rewritten to run in parallel on shared-memory multiprocessor systems by using the Message-Passing Interface (MPI). The speedup performance of the code has been studied by using two MPI-compliant libraries, Open MPI and MPICH2, separately. The speedup results are almost linear for both Open MPI and MPICH2; the latter was chosen because of its better characteristics and lower computation time. The Geant4 parameters, including the step limiter and the set cut, have been analyzed to minimize the simulation time as much as possible. For a reasonable compromise between the spatial dose distribution and the calculation time, the improvement in the time reduction coefficient reaches about 157.
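The "almost linear" speedup claim is usually quantified with speedup and parallel efficiency derived from wall-clock timings. A minimal sketch with invented timing numbers (not measurements from the study):

```python
def speedup_table(t_serial, timings):
    """Speedup S(p) = T(1)/T(p) and efficiency E(p) = S(p)/p from
    wall-clock timings keyed by process count."""
    return {p: (t_serial / t, t_serial / t / p) for p, t in sorted(timings.items())}

# Illustrative timings in seconds showing near-linear scaling
print(speedup_table(1000.0, {2: 510.0, 4: 260.0, 8: 135.0}))
```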

  6. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    Energy Technology Data Exchange (ETDEWEB)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I. [VNIIEF (Russian Federation)] [and others]


    The aim of the work performed is to develop a 3D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massive parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on using a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments have been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated as a function of the number of processors and their parameters.

  7. Performance of heterogeneous computing with graphics processing unit and many integrated core for hartree potential calculations on a numerical grid. (United States)

    Choi, Sunghwan; Kwon, Oh-Kyoung; Kim, Jaewook; Kim, Woo Youn


    We investigated the performance of heterogeneous computing with graphics processing units (GPUs) and many integrated core (MIC) with 20 CPU cores (20×CPU). As a practical example toward large scale electronic structure calculations using grid-based methods, we evaluated the Hartree potentials of silver nanoparticles with various sizes (3.1, 3.7, 4.9, 6.1, and 6.9 nm) via a direct integral method supported by the sinc basis set. The so-called work stealing scheduler was used for efficient heterogeneous computing via the balanced dynamic distribution of workloads between all processors on a given architecture without any prior information on their individual performances. 20×CPU + 1GPU was up to ∼1.5 and ∼3.1 times faster than 1GPU and 20×CPU, respectively. 20×CPU + 2GPU was ∼4.3 times faster than 20×CPU. The performance enhancement by CPU + MIC was considerably lower than expected because of the large initialization overhead of MIC, although its theoretical performance is similar to that of CPU + GPU. © 2016 Wiley Periodicals, Inc.
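A toy version of the work-stealing idea can be written with per-worker deques: each worker executes from the front of its own queue and, when it runs dry, steals from the back of a random victim. This is a sketch of the scheduling principle only, not the paper's actual GPU/MIC scheduler.

```python
from collections import deque
import random

def work_stealing_run(tasks, n_workers, seed=0):
    """Simulate a work-stealing schedule; return how many tasks each
    worker executed.  Idle workers steal one task at a time, so faster
    (more often idle) workers naturally absorb more of the load."""
    rng = random.Random(seed)
    queues = [deque() for _ in range(n_workers)]
    for t, task in enumerate(tasks):
        queues[t % n_workers].append(task)   # initial static distribution
    done = [0] * n_workers
    active = True
    while active:
        active = False
        for w in range(n_workers):
            if queues[w]:
                queues[w].popleft()          # execute own work from the front
                done[w] += 1
                active = True
            else:
                victims = [v for v in range(n_workers) if v != w and queues[v]]
                if victims:
                    v = rng.choice(victims)
                    queues[w].append(queues[v].pop())   # steal from the back
                    active = True
    return done
```

Every task is executed exactly once, regardless of how the stealing interleaves.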

  8. Recommendations for computer code selection of a flow and transport code to be used in undisturbed vadose zone calculations for TWRS immobilized environmental analyses

    Energy Technology Data Exchange (ETDEWEB)

    VOOGD, J.A.


    An analysis of three software proposals is performed to recommend a computer code for immobilized low activity waste flow and transport modeling. The document uses criteria established in HNF-1839, ''Computer Code Selection Criteria for Flow and Transport Codes to be Used in Undisturbed Vadose Zone Calculation for TWRS Environmental Analyses'', as the basis for this analysis.

  9. Method of Characteristics Calculations and Computer Code for Materials with Arbitrary Equations of State and Using Orthogonal Polynomial Least Square Surface Fits (United States)

    Chang, T. S.


    A numerical scheme using the method of characteristics to calculate the flow properties and pressures behind decaying shock waves for materials under hypervelocity impact is developed. Time-consuming double interpolation subroutines are replaced by a technique based on orthogonal polynomial least square surface fits. Typical calculated results are given and compared with the double interpolation results. The complete computer program is included.

  10. Computer program TRACK_TEST for calculating parameters and plotting profiles for etch pits in nuclear track materials (United States)

    Nikezic, D.; Yu, K. N.


    A computer program called TRACK_TEST for calculating parameters (lengths of the major and minor axes) and plotting profiles of etch pits in nuclear track materials resulting from light-ion irradiation and subsequent chemical etching is described. The programming steps are outlined, including calculation of alpha-particle ranges, determination of the distance along the particle trajectory penetrated by the chemical etchant, calculation of track coordinates, determination of the lengths of the major and minor axes, and determination of the contour of the track opening. Descriptions of the program are given, including the built-in V functions for the two commonly employed nuclear track materials commercially known as LR 115 (cellulose nitrate) and CR-39 (poly allyl diglycol carbonate) irradiated by alpha particles.
    Program summary
    Title of program: TRACK_TEST
    Catalogue identifier: ADWT
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL:
    Computer: Pentium PC
    Operating systems: Windows 95 and later
    Programming language: Fortran 90
    Memory required to execute with typical data: 256 MB
    No. of lines in distributed program, including test data, etc.: 2739
    No. of bytes in distributed program, including test data, etc.: 204 526
    Distribution format: tar.gz
    External subprograms used: the entire code must be linked with the MSFLIB library
    Nature of problem: Fast heavy charged particles (such as alpha particles and other light ions) create latent tracks in some dielectric materials. After chemical etching in aqueous NaOH or KOH solutions, these tracks become visible under an optical microscope. The growth of a track is based on the simultaneous action of the etchant on undamaged regions (with the bulk etch rate Vb) and along the particle track (with the track etch rate Vt). Growth of the track is described satisfactorily by these two parameters (Vb and Vt). Several models have been presented in the past describing
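For the simplest special case (normal incidence and a constant etch-rate ratio V = Vt/Vb), the track-opening diameter has a well-known closed form that illustrates what such a program computes; this sketch does not attempt TRACK_TEST's general treatment of oblique incidence or a depth-dependent V(x).

```python
import math

def track_diameter(v_ratio, bulk_etch):
    """Opening diameter (same units as bulk_etch) of an etched track for
    normal incidence and constant V = Vt/Vb, using the classic relation
    D = 2*h*sqrt((V - 1)/(V + 1)) with h the removed bulk layer."""
    if v_ratio <= 1.0:
        return 0.0   # no visible track unless the track etch rate exceeds the bulk rate
    return 2.0 * bulk_etch * math.sqrt((v_ratio - 1.0) / (v_ratio + 1.0))
```

The guard clause reflects the physical condition Vt > Vb for a track to develop at all.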

  11. A chemical solver to compute molecule and grain abundances and non-ideal MHD resistivities in prestellar core collapse calculations

    CERN Document Server

    Marchand, Pierre; Chabrier, Gilles; Hennebelle, Patrick; Commerçon, Benoit; Vaytet, Neil


    We develop a detailed chemical network relevant to the conditions characteristic of prestellar core collapse. We solve the system of time-dependent differential equations to calculate the equilibrium abundances of molecules and dust grains, with the grain size distribution represented by size bins. These abundances are used to compute the different non-ideal magneto-hydrodynamics resistivities (ambipolar, Ohmic and Hall) needed to carry out simulations of protostellar collapse. For the first time in this context, we take into account the evaporation of the grains, the thermal ionisation of potassium, sodium and hydrogen at high temperature, and the thermionic emission of grains in the chemical network, and we explore the impact of various cosmic ray ionisation rates. All these processes significantly affect the non-ideal magneto-hydrodynamics resistivities, which will modify the dynamics of the collapse. Ambipolar diffusion and the Hall effect dominate at low densities, up to n_H = 10^12 cm^-3, after which Oh...

  12. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    Directory of Open Access Journals (Sweden)

    Shahamatnia Ehsan


    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.
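A minimal global-best PSO shows the optimization building block named above. The coefficients are common textbook defaults, not those of the PSO-Snake tracker, and the objective here is a stand-in test function.

```python
import random

def pso_minimize(f, bounds, n_particles=20, iters=150, seed=1):
    """Minimal particle swarm optimizer (global-best topology).
    Returns (best_position, best_value)."""
    rng = random.Random(seed)
    dim = len(bounds)
    w, c1, c2 = 0.7, 1.5, 1.5                     # inertia and acceleration weights
    pos = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                   # personal bests
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]  # global best
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val
```

In the PSO-Snake setting the particles would encode snake control points and `f` would be an image-energy term rather than an analytic function.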

  13. Off-design computer code for calculating the aerodynamic performance of axial-flow fans and compressors (United States)

    Schmidt, James F.


    An off-design axial-flow compressor code is presented and is available from COSMIC for predicting the aerodynamic performance maps of fans and compressors. Steady axisymmetric flow is assumed and the aerodynamic solution reduces to solving the two-dimensional flow field in the meridional plane. A streamline curvature method is used for calculating this flow field outside the blade rows. This code allows for bleed flows and the first five stators can be reset for each rotational speed, capabilities which are necessary for large multistage compressors. The accuracy of the off-design performance predictions depends upon the validity of the flow loss and deviation correlation models. These empirical correlations for the flow loss and deviation are used to model the real flow effects, and the off-design code will compute through small reverse flow regions. The input to this off-design code is fully described and a user's example case for a two-stage fan is included with complete input and output data sets. Also, a comparison of the off-design code predictions with experimental data is included which generally shows good agreement.

  14. ESTAB: A Computer Package for Stability Calculation of Ships

    Institute of Scientific and Technical Information of China (English)

    赵成璧; 邹早建


    This paper introduces ESTAB, a practical computer package for effective three-dimensional stability calculation of ships. Using triangular mesh subdivision and a number of advanced algorithms, it generates ship compartments and sections of various types, and reliably obtains, for various loading conditions, the floating attitude, the downflooding angles at different longitudinal positions, the capacity element curves for each type of compartment, the hydrostatic curves, Bonjean curves, interpolation curves, static and dynamic stability curves, floodable length curves, bending moment and shear force curves, as well as the floating attitude and damage stability after compartment flooding.

  15. One-hundred-nm-scale electronic structure and transport calculations of organic polymers on the K computer (United States)

    Imachi, Hiroto; Yokoyama, Seiya; Kaji, Takami; Abe, Yukiya; Tada, Tomofumi; Hoshi, Takeo


    One-hundred-nm-scale electronic structure calculations were carried out on the K supercomputer by our original simulation code ELSES. The present paper reports preliminary results of transport calculations for condensed organic polymers. Large-scale calculations are realized by novel massively parallel order-N algorithms. The transport calculations were carried out as a theoretical extension of the quantum wavepacket dynamics simulation. The method was applied to a single polymer chain and to condensed polymers.

  16. A computer program for the calculation of the flow field including boundary layer effects for mixed-compression inlets at angle of attack (United States)

    Vadyak, J.; Hoffman, J. D.


    A computer program was developed which is capable of calculating the flow field in the supersonic portion of a mixed compression aircraft inlet operating at angle of attack. The supersonic core flow is computed using a second-order three dimensional method-of-characteristics algorithm. The bow shock and the internal shock train are treated discretely using a three dimensional shock fitting procedure. The boundary layer flows are computed using a second-order implicit finite difference method. The shock wave-boundary layer interaction is computed using an integral formulation. The general structure of the computer program is discussed, and a brief description of each subroutine is given. All program input parameters are defined, and a brief discussion on interpretation of the output is provided. A number of sample cases, complete with data listings, are provided.

  17. Technology Policy Survey: A Study of State Policies Supporting the Use of Calculators and Computers in the Study of Precollege Mathematics. (United States)

    Kansky, Bob

    The Technology Advisory Committee of the National Council of Teachers of Mathematics recently conducted a survey to assess the status of state-level policies affecting the use of calculators and computers in the teaching of mathematics in grades K-12. The committee determined that state-level actions related to the increased availability of…

  18. TIMED: a computer program for calculating cumulated activity of a radionuclide in the organs of the human body at a given time, t, after deposition

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.B.; Snyder, W.S.; Ford, M.R.


    TIMED is a computer program designed to calculate the cumulated radioactivity in the various source organs at various times after radionuclide deposition. TIMED embodies a system of differential equations which describes activity transfer in the lungs, gastrointestinal tract, and other organs of the body. The system accounts for delayed transfer of activity between compartments of the body and for radioactive daughters.
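The kind of compartment system such a code integrates can be illustrated by the simplest analytic (Bateman-type) case: one source compartment losing material by biological transfer, with radioactive decay everywhere. This is a sketch of the principle, not TIMED's actual multi-organ model.

```python
import math

def two_compartment_activity(q0, lam_transfer, lam_decay, t):
    """Contents (q1, q2) at time t of a two-compartment chain:
    compartment 1 starts with q0 and loses material at biological rate
    lam_transfer; both compartments decay at lam_decay; compartment 2
    accumulates what is transferred.  Analytic Bateman-type solution of
        dq1/dt = -(lam_transfer + lam_decay) * q1
        dq2/dt =  lam_transfer * q1 - lam_decay * q2
    """
    q1 = q0 * math.exp(-(lam_transfer + lam_decay) * t)
    q2 = q0 * (math.exp(-lam_decay * t) - math.exp(-(lam_transfer + lam_decay) * t))
    return q1, q2
```

With decay switched off the two compartments conserve the initial amount exactly, which is a convenient sanity check on the solution.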

  19. PN/S calculations for a fighter W/F at high-lift yaw conditions. [parabolized Navier-Stokes computer code (United States)

    Wai, J. C.; Blom, G.; Yoshihara, H.; Chaussee, D.


    The NASA/Ames parabolized Navier-Stokes computer code was used to calculate the turbulent flow over the wing/fuselage of a generic fighter at M = 2.2, 18 deg angle of attack, and 0 and 5 deg yaw. Good test/theory agreement was achieved in the zero-yaw case. No test data were available for the yaw case.

  20. High-throughput calculation of protein-ligand binding affinities: modification and adaptation of the MM-PBSA protocol to enterprise grid computing. (United States)

    Brown, Scott P; Muchmore, Steven W


    We have developed a system for performing computations on an enterprise grid using a freely available package for grid computing that allows us to harvest unused CPU cycles from employee desktop computers. By modifying the traditional formulation of Molecular Mechanics with Poisson-Boltzmann Surface Area (MM-PBSA) methodology, in combination with a coarse-grain parallelized implementation suitable for deployment onto our enterprise grid, we show that it is possible to produce rapid physics-based estimates of protein-ligand binding affinities that have good correlation to experimental data. This is demonstrated by examining the correlation of our calculated binding affinities to experimental data and also by comparison to the correlation obtained from the binding-affinity calculations using traditional MM-PBSA that are reported in the literature.

  1. Numerical study for the calculation of computer-generated hologram in color holographic 3D projection enabled by modified wavefront recording plane method (United States)

    Chang, Chenliang; Qi, Yijun; Wu, Jun; Yuan, Caojin; Nie, Shouping; Xia, Jun


    A method of calculating computer-generated hologram (CGH) for color holographic 3D projection is proposed. A color 3D object is decomposed into red, green and blue components. For each color component, a virtual wavefront recording plane (WRP) is established which is nonuniformly sampled according to the depth map of the 3D object. The hologram of each color component is calculated from the nonuniform sampled WRP using the shifted Fresnel diffraction algorithm. Finally three holograms of RGB components are encoded into one single CGH based on the multiplexing encoding method. The computational cost of CGH generation is reduced by converting diffraction calculation from huge 3D voxels to three 2D planar images. Numerical experimental results show that the CGH generated by our method is capable to project zoomable color 3D object with clear quality.
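What the shifted Fresnel algorithm evaluates can be shown with a brute-force 1-D Fresnel diffraction sum. The real method uses FFTs, 2-D wavefront recording planes, and an adjustable output pitch; the O(N²) direct sum below, with illustrative names, only shows the quantity being computed.

```python
import cmath
import math

def fresnel_1d(aperture, pitch_m, wavelength_m, z_m, out_points):
    """Direct evaluation of the paraxial Fresnel diffraction sum
        U(x) = sum_x0 A(x0) * exp(i * pi * (x - x0)^2 / (lambda * z))
    for a 1-D sampled aperture centred on the axis.  Returns the complex
    field at each requested output coordinate."""
    n = len(aperture)
    xs0 = [(i - (n - 1) / 2) * pitch_m for i in range(n)]   # source sample positions
    field = []
    for x in out_points:
        u = 0j
        for a, x0 in zip(aperture, xs0):
            u += a * cmath.exp(1j * math.pi * (x - x0) ** 2 / (wavelength_m * z_m))
        field.append(u)
    return field
```

A symmetric aperture must produce a field with mirror symmetry about the axis, which gives a quick correctness check.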

  2. Calculation of Quad-Cities Central Bundle Documented by the U.S. in FY98 Using Russian Computer Codes

    Energy Technology Data Exchange (ETDEWEB)

    Pavlovichev, A.M.


    The report presents calculation results of the isotopic composition of irradiated fuel performed for the Quad Cities-1 reactor bundle with UO{sub 2} and MOX fuel. The MCU-REA code, developed at the Kurchatov Institute, Russia, was used for the calculations. The MCU-REA results are compared with the experimental data and with HELIOS code results.

  3. FORIG: a computer code for calculating radionuclide generation and depletion in fusion and fission reactors. User's manual

    Energy Technology Data Exchange (ETDEWEB)

    Blink, J.A.


    In this manual we describe the use of the FORIG computer code to solve isotope-generation and depletion problems in fusion and fission reactors. FORIG runs on a Cray-1 computer and accepts more extensive activation cross sections than ORIGEN2, from which it was adapted. This report is an updated and combined version of the previous ORIGEN2 and FORIG manuals. 7 refs., 15 figs., 13 tabs.

  4. A user's guide to LUGSAN II. A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N. [Sandia National Labs., Albuquerque, NM (United States). Mechanical and Thermal Environments Dept.]


    LUG and Sway brace ANalysis (LUGSAN) II is an analysis and database computer program that is designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN II combines the rigid body dynamics code, SWAY85, with a Macintosh Hypercard database to function both as an analysis and archival system. This report describes the LUGSAN II application program, which operates on the Macintosh System (Hypercard 2.2 or later) and includes function descriptions, layout examples, and sample sessions. Although this report is primarily a user's manual, a brief overview of the LUGSAN II computer code is included with suggested resources for programmers.

  5. ROBOT3: a computer program to calculate the in-pile three-dimensional bowing of cylindrical fuel rods (AWBA Development Program)

    Energy Technology Data Exchange (ETDEWEB)

    Kovscek, S.E.; Martin, S.E.


    ROBOT3 is a FORTRAN computer program which is used in conjunction with the CYGRO5 computer program to calculate the time-dependent inelastic bowing of a fuel rod using an incremental finite element method. The fuel rod is modeled as a viscoelastic beam whose material properties are derived as perturbations of the CYGRO5 axisymmetric model. Fuel rod supports are modeled as displacement, force, or spring-type nodal boundary conditions. The program input is described and a sample problem is given.

  6. TRANGE: computer code to calculate the beam energy degradation in a target stack

    Energy Technology Data Exchange (ETDEWEB)

    Bellido, Luis F.


    A computer code to calculate the projectile energy degradation along a target stack was developed for an IBM or compatible personal microcomputer. A comparison of protons and deuterons bombarding uranium and aluminium targets was made. The results showed that the data obtained with TRANGE were in agreement with those of other computer codes such as TRIM and EDP, and also with the Williamson and Janni range and stopping power tables. TRANGE can be used for any charged ion, at energies between 1 and 100 MeV, in metal foils and solid compound targets. (author). 8 refs., 2 tabs.
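The stepping scheme such a code uses can be sketched by subtracting dE = S(E) dx foil by foil. The power-law stopping power below is made up for illustration, standing in for the interpolated Williamson/Janni tables the real code relies on.

```python
def degrade_energy(e_mev, thickness_um, n_steps=1000,
                   stopping=lambda e: 0.05 * e ** -0.8):
    """Step a charged particle through one foil, subtracting dE = S(E)*dx
    at each step.  `stopping` is an illustrative MeV/um model, not real
    stopping-power data.  Returns 0.0 if the particle stops in the foil."""
    dx = thickness_um / n_steps
    e = e_mev
    for _ in range(n_steps):
        e -= stopping(e) * dx
        if e <= 0.0:
            return 0.0
    return e

def degrade_stack(e_mev, thicknesses_um):
    """Propagate the beam energy through a whole target stack."""
    for t in thicknesses_um:
        e_mev = degrade_energy(e_mev, t)
        if e_mev == 0.0:
            break
    return e_mev
```

Each additional foil strictly lowers the exit energy, which is the behaviour a stack calculation must reproduce.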

  7. Computer programs for the interpretation of low resolution mass spectra: Program for calculation of molecular isotopic distribution and program for assignment of molecular formulas (United States)

    Miller, R. A.; Kohl, F. J.


    Two FORTRAN computer programs for the interpretation of low resolution mass spectra were prepared and tested. One is for the calculation of the molecular isotopic distribution of any species from stored elemental distributions. The program requires only the input of the molecular formula and was designed for compatibility with any computer system. The other program is for the determination of all possible combinations of atoms (and radicals) which may form an ion having a particular integer mass. It also uses a simplified input scheme and was designed for compatibility with any system.
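The molecular isotopic distribution is, at heart, a repeated convolution of per-element isotope patterns. The sketch below illustrates that idea in Python (the FORTRAN original and its stored elemental distributions are not reproduced; abundances are standard textbook values).

```python
from collections import defaultdict

def isotope_pattern(elements):
    """Convolve per-element isotope distributions into a molecular pattern.

    `elements` is a list of (isotope_list, count) pairs, each isotope a
    (nominal_mass, abundance) tuple.  Returns {nominal_mass: probability}.
    """
    pattern = {0: 1.0}
    for isotopes, count in elements:
        for _ in range(count):
            nxt = defaultdict(float)
            for mass, p in pattern.items():
                for m, a in isotopes:
                    nxt[mass + m] += p * a      # one more atom of this element
            pattern = dict(nxt)
    return dict(sorted(pattern.items()))

# chlorine: ~75.77% 35Cl, ~24.23% 37Cl
CL = [(35, 0.7577), (37, 0.2423)]
```

For Cl2 this reproduces the familiar M, M+2, M+4 pattern at masses 70, 72 and 74.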

  8. Calculation of electromagnetic fields in electric machines by means of the finite element method. Computational aspects

    Energy Technology Data Exchange (ETDEWEB)

    Rosales, Mario; De la Torre, Octavio [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)


    In this article the computational characteristics of the CALIIE 2D package of the Instituto de Investigaciones Electricas (IIE), for the calculation of two-dimensional electromagnetic fields, are described. The computational implementation of the package is based on the electromagnetic and numerical formulations previously published in this series.

  9. A user's guide to LUGSAN 1.1: A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N. [Sandia National Labs., Albuquerque, NM (United States). Experimental Structural Dynamics Dept.]


    LUGSAN (LUG and Sway brace ANalysis) is an analysis and database computer program designed to calculate store lug and sway brace loads for aircraft captive carriage. LUGSAN combines the rigid body dynamics code SWAY85 and the maneuver calculation code MILGEN with an INGRES database to function as both an analysis and an archival system. This report describes the operation of the LUGSAN application program, including function descriptions, layout examples, and sample sessions. This report is intended to be a user's manual for version 1.1 of LUGSAN operating on the VAX/VMS system. It is not intended to be a programmer's or developer's manual.

  10. ACDOS1: a computer code to calculate dose rates from neutron activation of neutral beamlines and other fusion-reactor components

    Energy Technology Data Exchange (ETDEWEB)

    Keney, G.S.


    A computer code has been written to calculate neutron induced activation of neutral-beam injector components and the corresponding dose rates as a function of geometry, component composition, and time after shutdown. The code, ACDOS1, was written in FORTRAN IV to calculate both activity and dose rates for up to 30 target nuclides and 50 neutron groups. Sufficient versatility has also been incorporated into the code to make it applicable to a variety of general activation problems due to neutrons of energy less than 20 MeV.
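The standard single-group activation expression such a code evaluates, nuclide by nuclide, can be sketched directly; ACDOS1 sums terms of this kind over up to 50 neutron groups and 30 target nuclides, while the sketch keeps one group and one nuclide.

```python
import math

def activity_bq(n_atoms, sigma_cm2, flux, t_irr_s, t_cool_s, half_life_s):
    """Induced activity after irradiation and cooling:
        A = N * sigma * phi * (1 - exp(-lambda * t_irr)) * exp(-lambda * t_cool)
    with N target atoms, sigma the activation cross section (cm^2) and
    phi the neutron flux (n/cm^2/s).  Returns becquerels."""
    lam = math.log(2.0) / half_life_s
    production = n_atoms * sigma_cm2 * flux
    return production * (1.0 - math.exp(-lam * t_irr_s)) * math.exp(-lam * t_cool_s)
```

Two limits check the formula: a very long irradiation saturates at N·σ·φ, and one half-life of cooling halves the activity.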

  11. Computational prediction for emission energy of iridium (III) complexes based on TDDFT calculations using exchange-correlation functionals containing various HF exchange percentages. (United States)

    Xu, Shengxian; Wang, Jinglan; Xia, Hongying; Zhao, Feng; Wang, Yibo


    The accurate prediction of the emission energies of phosphorescent Ir(III) complexes is very useful for realizing full-color displays and large-area solid-state lighting in the OLED field. Quantum chemistry calculations based on TDDFT methods are most widely used to compute the triplet vertical excitation energies directly, yet sometimes the universality of these calculations is limited by the lack of experimental data for the relevant family of structural analogues. In this letter, 16 literature emission energies at low temperature are linearly correlated with their theoretical values computed by TDDFT using exchange-correlation functionals containing various HF exchange percentages, with the relation E_exp(em) = 1.2 E_calc(em). The relation is proven to be robust across a wide range of structures for Ir(III) complexes. These theoretical studies should be expected to provide some guidance for the design and synthesis of efficient emitting materials.

  12. Estimation of subcriticality with the computed values. 3. Application of the 'indirect estimation method for calculation error' to the exponential experiment

    Energy Technology Data Exchange (ETDEWEB)

    Sakurai, Kiyoshi; Arakawa, Takuya; Yamamoto, Toshihiro; Naito, Yoshitaka [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment


    The estimation accuracy for subcriticality in the 'Indirect Estimation Method for Calculation Error' is expressed in the form rho_m - rho_c = K(gamma_zc^2 - gamma_zm^2). This means that the estimation accuracy for subcriticality is proportional to (gamma_zc^2 - gamma_zm^2), the estimation accuracy of the axial buckling. The proportionality constant K is calculated, but the influence of the uncertainty of K on the estimation accuracy for subcriticality is smaller than in the case of comparing rho_m = -K(gamma_zm^2 + B_z^2) with the calculated rho_c. When the values of K were calculated, the estimation accuracy was found to be sufficient. If gamma_zc^2 equals gamma_zm^2, then rho_c equals rho_m. The reliability of this method is demonstrated on the basis of results calculated using MCNP 4A for four subcritical cores of TCA. (author)
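The quoted relation can be turned into a small helper for illustration; the function and variable names are assumptions, but the arithmetic is exactly the expression rho_m = rho_c + K(gamma_zc^2 - gamma_zm^2) given above.

```python
def estimated_rho_m(rho_c, k_coeff, gamma2_zc, gamma2_zm):
    """Correct a calculated reactivity rho_c with the difference between
    the calculated and measured squared axial relaxation constants,
    rho_m = rho_c + K * (gamma_zc^2 - gamma_zm^2).
    When the two gamma^2 values agree, rho_m equals rho_c."""
    return rho_c + k_coeff * (gamma2_zc - gamma2_zm)
```

The degenerate case gamma_zc^2 = gamma_zm^2 reproduces the statement in the abstract that the two reactivities then coincide.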

  13. Computer programs for calculating pressure distributions including vortex effects on supersonic monoplane or cruciform wing-body-tail combinations with round or elliptical bodies (United States)

    Dillenius, M. F. E.; Nielsen, J. N.


    Computer programs are presented which are capable of calculating detailed aerodynamic loadings and pressure distributions acting on pitched and rolled supersonic missile configurations which utilize bodies of circular or elliptical cross sections. The applicable range of angle of attack is up to 20 deg, and the Mach number range is 1.3 to about 2.5. Effects of body and fin vortices are included in the methods, as well as arbitrary deflections of canard or fin panels.

  14. Computer programs for calculating pKa: a comparative study for 3-[3-(2-nitrophenyl)prop-2-enoyl]-2H-1-benzopyran-2-one

    Directory of Open Access Journals (Sweden)



    Coumarin-based compounds containing a chalcone moiety exhibit antimicrobial activity. These substances are potential drugs, and it is important to determine their pKa values; however, they are almost insoluble in water. The dissociation constant was determined experimentally by potentiometric titration for 3-[3-(2-nitrophenyl)prop-2-enoyl]-2H-1-benzopyran-2-one, because this compound shows good activity and solubility. A number of different computer programs for the calculation of the dissociation constants of chemical compounds have been developed. The pKa value of the target compound was calculated using three different computer programs, i.e., the ACD/pKa, CSpKaPredictor and ADME/ToxWEB programs, which are based on different theoretical approaches. The analysis demonstrated good agreement between the experimentally observed pKa value of 3-[3-(2-nitrophenyl)prop-2-enoyl]-2H-1-benzopyran-2-one and the value calculated using the computer program CSpKa.

  15. Description and Evaluation of a Digital-Computer Program for Calculating the Viscous Drag of Bodies of Revolution (United States)


    Charles W. Dawson and Janet S. Dean of the Computation and Mathematics Department at DTNSRDC assisted in matters relating to the modification and use...

  16. A computer code for calculation of radioactive nuclide generation and depletion, decay heat and {gamma} ray spectrum. FPGS90

    Energy Technology Data Exchange (ETDEWEB)

    Ihara, Hitoshi; Katakura, Jun-ichi; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment


    In a nuclear reactor, radioactive nuclides are generated and depleted as the nuclear fuel burns up. These radioactive nuclides, emitting {gamma} rays and {beta} rays, act as sources of decay heat in the reactor and of radiation exposure. In safety evaluations of nuclear reactors and the nuclear fuel cycle, it is necessary to estimate the inventories of nuclides generated in nuclear fuel under the various burn-up conditions of the many kinds of fuel used in a reactor. FPGS90 is a code that calculates the nuclide inventories, the decay heat and the spectrum of {gamma} rays emitted by fission products produced in nuclear fuel under various burn-up conditions. The nuclear data library used in the FPGS90 code is the 'JNDC Nuclear Data Library of Fission Products - second version -', compiled by a working group of the Japanese Nuclear Data Committee for evaluating decay heat in reactors. The code can also process evaluated nuclear data files such as ENDF/B, JENDL and ENSDF, and can plot the calculated results. Using the FPGS90 code it is possible to carry out the whole workflow, from building the library, through calculating nuclide generation and decay heat, to plotting the results. (author).
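The summation method for decay heat that such a code implements is a single weighted sum over the nuclide inventory. The sketch below shows the sum itself; the numbers in the test are illustrative, not library data.

```python
def decay_heat_watts(inventory):
    """Summation-method decay heat, P = sum_i lambda_i * N_i * E_i, where
    for each nuclide lambda_i is the decay constant (1/s), N_i the number
    of atoms, and E_i the mean beta+gamma energy per decay in joules.
    A real code evaluates this over an entire fission-product library."""
    return sum(lam * n * e_joule for lam, n, e_joule in inventory)
```

Because each term lambda_i * N_i is an activity in becquerels, multiplying by the energy per decay gives watts directly.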

  17. Computational chemistry calculations of stability for bismuth nanotubes, fullerene-like structures and hydrogen-containing nanostructures. (United States)

    Kharissova, Oxana V; Osorio, Mario; Vázquez, Mario Sánchez; Kharisov, Boris I


    Using molecular mechanics (MM+), semi-empirical (PM6) and density functional theory (DFT, B3LYP) methods, we characterized bismuth nanotubes. In addition, we predicted the bismuth clusters {Bi(20)(C(5v)), Bi(24)(C(6v)), Bi(28)(C(1)), Bi(32)(D(3h)), Bi(60)(C(i))} and calculated their conductor properties.

  18. LEAF: a computer program to calculate fission product release from a reactor containment building for arbitrary radioactive decay chains

    Energy Technology Data Exchange (ETDEWEB)

    Lee, C.E.; Apperson, C.E. Jr.; Foley, J.E.


    The report describes an analytic containment building model that is used for calculating the leakage into the environment of each isotope of an arbitrary radioactive decay chain. The model accounts for the source, the buildup, the decay, the cleanup, and the leakage of isotopes that are gas-borne inside the containment building.

  19. Development of a computer code for neutronic calculations of a hexagonal lattice of nuclear reactor using the flux expansion nodal method

    Directory of Open Access Journals (Sweden)

    Mohammadnia Meysam


    The flux expansion nodal method is a suitable method for considering nodalization effects in node corners. In this paper we used this method to solve the intra-nodal flux analytically. Then, a computer code, named MA.CODE, was developed using the C# programming language. The code is capable of reactor core calculations for hexagonal geometries in two energy groups and three dimensions. The MA.CODE imports two group constants from the WIMS code and calculates the effective multiplication factor, thermal and fast neutron flux in three dimensions, power density, reactivity, and the power peaking factor of each fuel assembly. Some of the code's merits are low calculation time and a user-friendly interface. MA.CODE results showed good agreement with IAEA benchmarks, i.e., AER-FCM-101 and AER-FCM-001.

  20. Computational prediction of binding affinity for CYP1A2-ligand complexes using empirical free energy calculations

    DEFF Research Database (Denmark)

    Poongavanam, Vasanthanathan; Olsen, Lars; Jørgensen, Flemming Steen


    Predicting binding affinities for receptor-ligand complexes is still one of the challenging processes in computational structure-based ligand design. Many computational methods have been developed to achieve this goal, such as docking and scoring methods, the linear interaction energy (LIE) method, and methods based on statistical mechanics. In the present investigation, we started from an LIE model to predict the binding free energy of structurally diverse compounds of cytochrome P450 1A2 ligands, one of the important human metabolizing isoforms of the cytochrome P450 family. The data set includes both substrates and inhibitors. It appears that the electrostatic contribution to the binding free energy becomes negligible in this particular protein, and a simple empirical model was derived, based on a training set of eight compounds. The root mean square error for the training set was 3.7 kJ/mol. Subsequent...
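The LIE functional form referred to above is a linear model in the averaged interaction energies. The sketch uses common literature coefficients, not the paper's fitted values; the abstract's finding that the electrostatic term is negligible for this protein amounts to setting beta near zero here.

```python
def lie_binding_energy(dE_vdw, dE_elec, alpha=0.18, beta=0.5, gamma=0.0):
    """Linear interaction energy estimate of the binding free energy:
        dG_bind = alpha * <dE_vdw> + beta * <dE_elec> + gamma
    where the dE terms are the differences in average van der Waals and
    electrostatic ligand-surroundings energies between the bound and the
    free states.  Coefficients are illustrative defaults."""
    return alpha * dE_vdw + beta * dE_elec + gamma
```

In practice alpha, beta and gamma are regressed against a training set of measured affinities, exactly as the eight-compound fit described in the abstract.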

  1. X-Ray Crystallographic Analysis, EPR Studies, and Computational Calculations of a Cu(II) Tetramic Acid Complex (United States)

    Matiadis, Dimitrios; Tsironis, Dimitrios; Stefanou, Valentina; Igglessi–Markopoulou, Olga; McKee, Vickie; Sanakis, Yiannis; Lazarou, Katerina N.


    In this work we present a structural and spectroscopic analysis of a copper(II) N-acetyl-5-arylidene tetramic acid by using both experimental and computational techniques. The crystal structure of the Cu(II) complex was determined by single crystal X-ray diffraction and shows that the copper ion lies on a centre of symmetry, with each ligand ion coordinated to two copper ions, forming a 2D sheet. Moreover, the EPR spectroscopic properties of the Cu(II) tetramic acid complex were also explored and discussed. Finally, a computational approach was performed in order to obtain detailed and precise insight into the product structures and properties. It is hoped that this study can enrich the field of functional supramolecular systems, giving rise to the formation of coordination-driven self-assembly architectures. PMID:28316540

  2. Nuclear magnetic resonance, vibrational spectroscopic studies, physico-chemical properties and computational calculations on (nitrophenyl) octahydroquinolindiones by DFT method. (United States)

    Pasha, M A; Siddekha, Aisha; Mishra, Soni; Azzam, Sadeq Hamood Saleh; Umapathy, S


    In the present study, 2'-nitrophenyloctahydroquinolinedione and its 3'-nitrophenyl isomer were synthesized and characterized by FT-IR, FT-Raman, (1)H NMR and (13)C NMR spectroscopy. The molecular geometry, vibrational frequencies, (1)H and (13)C NMR chemical shift values of the synthesized compounds in the ground state have been calculated by using the density functional theory (DFT) method with the 6-311++G (d,p) basis set and compared with the experimental data. The complete vibrational assignments of wave numbers were made on the basis of potential energy distribution using GAR2PED programme. Isotropic chemical shifts for (1)H and (13)C NMR were calculated using gauge-invariant atomic orbital (GIAO) method. The experimental vibrational frequencies, (1)H and (13)C NMR chemical shift values were found to be in good agreement with the theoretical values. On the basis of vibrational analysis, molecular electrostatic potential and the standard thermodynamic functions have been investigated.
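
For context, GIAO calculations return isotropic shielding constants, which are converted to chemical shifts by subtracting them from the shielding of a reference compound (TMS for (1)H and (13)C) computed at the same level of theory. A minimal sketch of that conversion (the numeric values used in any example are arbitrary, not the paper's data):

```python
def chemical_shift_ppm(sigma_ref, sigma_nucleus):
    """GIAO chemical shift: delta = sigma(reference) - sigma(nucleus), in ppm.

    sigma_ref: isotropic shielding of the reference (e.g. TMS) at the
    same level of theory; sigma_nucleus: shielding of the nucleus of
    interest.
    """
    return sigma_ref - sigma_nucleus
```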

  3. Validation of computational fluid dynamics calculation using Rossendorf coolant mixing model flow measurements in primary loop of coolant in a pressurized water reactor model

    Energy Technology Data Exchange (ETDEWEB)

    Farkas, Istvan; Hutli, Ezddin; Faekas, Tatiana; Takacs, Antal; Guba, Attila; Toth, Ivan [Dept. of Thermohydraulics, Centre for Energy Research, Hungarian Academy of Sciences, Budapest (Hungary)


    The aim of this work is to simulate the thermohydraulic consequences of a main steam line break and to compare the obtained results with Rossendorf Coolant Mixing Model (ROCOM) 1.1 experimental results. The objective is to utilize data from steady-state mixing experiments and computational fluid dynamics (CFD) calculations to determine the flow distribution and the effect of thermal mixing phenomena in the primary loops for the improvement of normal operation conditions and structural integrity assessment of pressurized water reactors. The numerical model of ROCOM was developed using the FLUENT code. The positions of the inlet and outlet boundary conditions and the distribution of detailed velocity/turbulence parameters were determined by preliminary calculations. The temperature fields of transient calculation were averaged in time and compared with time-averaged experimental data. The perforated barrel under the core inlet homogenizes the flow, and therefore, a uniform temperature distribution is formed in the pressure vessel bottom. The calculated and measured values of lowest temperature were equal. The inlet temperature is an essential parameter for safety assessment. The calculation predicts precisely the experimental results at the core inlet central region. CFD results showed a good agreement (both qualitatively and quantitatively) with experimental results.

  4. SEXIE 3.0 — an updated computer program for the calculation of coordination shells and geometries (United States)

    Tabor-Morris, Anne E.; Rupp, Bernhard


    We report a new version of our FORTRAN program SEXIE (ACBV). New features permit interfacing to related programs for EXAFS calculations (FEFF by J.J. Rehr et al.) and structure visualization (SCHAKAL by E. Keller). The code has been refined, and the basis transformation matrix from fractional to Cartesian coordinates has been corrected and made compatible with IUCr (International Union of Crystallography) standards. We discuss how to determine the correct space group setting and atom position input. New examples for Unix script files are provided.

  5. Fast calculation method for computer-generated cylindrical holograms based on the three-dimensional Fourier spectrum. (United States)

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko


    The relation between a three-dimensional (3D) object and its diffracted wavefront in the 3D Fourier space is discussed first, and then a rigorous diffraction formula onto cylindrical surfaces is derived. The azimuthal direction and the spatial frequency direction corresponding to height can be expressed with a one-dimensional (1D) convolution integral and a 1D inverse Fourier transform in the 3D Fourier space, respectively, and fast Fourier transforms are available for fast calculation. A numerical simulation of a diffracted wavefront on cylindrical surfaces is presented. An alternative optical experiment equivalent to the optical reconstruction from cylindrical holograms is also demonstrated.
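
The speed-up claimed here comes from evaluating the azimuthal 1D convolution with FFTs, which reduces the cost from O(N^2) to O(N log N). A generic sketch of FFT-based circular convolution (illustrative only; the paper's actual kernel encodes cylindrical diffraction, which is not reproduced here):

```python
import numpy as np

def circular_convolve_fft(f, h):
    """Circular 1D convolution via the convolution theorem:
    g[n] = sum_m f[m] * h[(n - m) mod N], computed as ifft(fft(f)*fft(h))."""
    return np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(h)))
```

The same identity underlies the fast evaluation of each azimuthal line of the cylindrical hologram.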

  6. A computer program for calculating the perfect gas inviscid flow field about blunt axisymmetric bodies at an angle of attack of 0 deg (United States)

    Zoby, E. V.; Graves, R. A., Jr.


    A method for the rapid calculation of the inviscid shock layer about blunt axisymmetric bodies at an angle of attack of 0 deg has been developed. The procedure is of an inverse nature, that is, a shock wave is assumed and calculations proceed along rays normal to the shock. The solution is iterated until the given body is computed. The flow field solution procedure is programed at the Langley Research Center for the Control Data 6600 computer. The geometries specified in the program are spheres, ellipsoids, paraboloids, and hyperboloids, which may have conical afterbodies. The normal momentum equation is replaced with an approximate algebraic expression. This simplification significantly reduces machine computation time. Comparisons of the present results with shock shapes and surface pressure distributions obtained by the more exact methods indicate that the program provides reasonably accurate results for smooth bodies in axisymmetric flow. However, further research is required to establish the proper approximate form of the normal momentum equation for the two-dimensional case.

  7. SCHEMA computational design of virus capsid chimeras: calibrating how genome packaging, protection, and transduction correlate with calculated structural disruption. (United States)

    Ho, Michelle L; Adler, Benjamin A; Torre, Michael L; Silberg, Jonathan J; Suh, Junghae


    Adeno-associated virus (AAV) recombination can result in chimeric capsid protein subunits whose ability to assemble into an oligomeric capsid, package a genome, and transduce cells depends on the inheritance of sequence from different AAV parents. To develop quantitative design principles for guiding site-directed recombination of AAV capsids, we have examined how capsid structural perturbations predicted by the SCHEMA algorithm correlate with experimental measurements of disruption in seventeen chimeric capsid proteins. In our small chimera population, created by recombining AAV serotypes 2 and 4, we found that protection of viral genomes and cellular transduction were inversely related to calculated disruption of the capsid structure. Interestingly, however, we did not observe a correlation between genome packaging and calculated structural disruption; a majority of the chimeric capsid proteins formed at least partially assembled capsids and more than half packaged genomes, including those with the highest SCHEMA disruption. These results suggest that the sequence space accessed by recombination of divergent AAV serotypes is rich in capsid chimeras that assemble into 60-mer capsids and package viral genomes. Overall, the SCHEMA algorithm may be useful for delineating quantitative design principles to guide the creation of libraries enriched in genome-protecting virus nanoparticles that can effectively transduce cells. Such improvements to the virus design process may help advance not only gene therapy applications but also other bionanotechnologies dependent upon the development of viruses with new sequences and functions.
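
SCHEMA's disruption score E counts residue-residue contacts whose amino-acid pair in the chimera appears in neither parent. A minimal sketch with toy sequences and a toy contact map (not AAV data):

```python
def schema_disruption(chimera, parents, contacts):
    """SCHEMA disruption E: number of contacts (i, j) whose residue pair
    in the chimera is found in neither parent sequence."""
    e = 0
    for i, j in contacts:
        pair = (chimera[i], chimera[j])
        if all((p[i], p[j]) != pair for p in parents):
            e += 1
    return e
```

A chimera inheriting both contacting residues from the same parent contributes nothing to E; mixing parents across a conserved contact is what the algorithm penalizes.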

  8. Simulations of the pipe overpack to compute constitutive model parameters for use in WIPP room closure calculations.

    Energy Technology Data Exchange (ETDEWEB)

    Park, Byoung Yoon; Hansen, Francis D.


    The regulatory compliance determination for the Waste Isolation Pilot Plant includes the consideration of room closure. Elements of the geomechanical processes include salt creep, gas generation and mechanical deformation of the waste residing in the rooms. The WIPP was certified as complying with regulatory requirements based in part on the implementation of room closure and material models for the waste. Since the WIPP began receiving waste in 1999, waste packages have been identified that are appreciably more robust than the 55-gallon drums characterized for the initial calculations. The pipe overpack comprises one such waste package. This report develops material model parameters for the pipe overpack containers by using axisymmetrical finite element models. Known material properties and structural dimensions allow well constrained models to be completed for uniaxial, triaxial, and hydrostatic compression of the pipe overpack waste package. These analyses show that the pipe overpack waste package is far more rigid than the originally certified drum. The model parameters developed in this report are used subsequently to evaluate the implications to performance assessment calculations.

  9. Quantum Computational Calculations of the Ionization Energies of Acidic and Basic Amino Acids: Aspartate, Glutamate, Arginine, Lysine, and Histidine (United States)

    de Guzman, C. P.; Andrianarijaona, M.; Lee, Y. S.; Andrianarijaona, V.

    An extensive knowledge of the ionization energies of amino acids can provide vital information on protein sequencing, structure, and function. Acidic and basic amino acids are unique because they have three ionizable groups: the C-terminus, the N-terminus, and the side chain. The effects of multiple ionizable groups can be seen in how Aspartate's ionizable side chain heavily influences its preferred conformation (J Phys Chem A. 2011 April 7; 115(13): 2900-2912). Theoretical and experimental data on the ionization energies of many of these molecules are sparse. Considering each atom of the amino acid as a potential departing site for the electron gives insight into how the three ionizable groups affect the ionization process of the molecule and the dynamic coupling between the vibrational modes. In the following study, we optimized the structure of each acidic and basic amino acid and then exported the three-dimensional coordinates of the amino acids. We used ORCA to calculate single point energies for a region near the optimized coordinates and systematically went through the x, y, and z coordinates of each atom in the neutral and ionized forms of the amino acid. With the calculations, we were able to graph potential energy curves to better understand the quantum dynamic properties of the amino acids. The authors thank Pacific Union College Student Association for providing funds.
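
The quantity behind such single-point scans is the vertical ionization energy: the total energy of the cation minus that of the neutral, evaluated at the same geometry. A minimal sketch of the bookkeeping (the energy values in any example are arbitrary, not computed data):

```python
# CODATA hartree-to-electronvolt conversion factor
HARTREE_TO_EV = 27.211386245988

def vertical_ionization_energy_ev(e_neutral_ha, e_cation_ha):
    """Vertical IE = E(cation) - E(neutral) at the neutral geometry,
    with both single-point energies given in hartree; returns eV."""
    return (e_cation_ha - e_neutral_ha) * HARTREE_TO_EV
```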

  10. Computational Calculation Of The Ionization Energies Of The Human Prion Protein By The Coarse-grain Method (United States)

    Lyu, Justin; Andrianarijaona, V. M.


    The causes of the misfolding of prion protein, i.e., the transformation of PrPC to PrPSc, have not been clearly elucidated. Many studies have focused on identifying possible chemical conditions, such as pH, temperature and chemical denaturation, that may trigger the pathological transformation of prion proteins (Weiwei Tao, Gwonchan Yoon, Penghui Cao, "β-sheet-like formation during the mechanical unfolding of prion protein", The Journal of Chemical Physics, 2015, 143, 125101). Here, we attempt to calculate the ionization energies of the prion protein, which will be able to shed light onto the possible causes of the misfolding. We plan on using the coarse-grain method which allows for a more feasible calculation time by means of approximation. We believe that by being able to approximate the ionization potential, particularly that of the regions known to form stable β-strands of the PrPSc form, the possible sources of denaturation, be it chemical or mechanical, may be narrowed down.


    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  12. Conversion Coefficients for Proton Beams using Standing and Sitting Male Hybrid Computational Phantom Calculated in Idealized Irradiation Geometries. (United States)

    Alves, M C; Santos, W S; Lee, C; Bolch, W E; Hunt, J G; Júnior, A B Carvalho


    The aim of this study was to calculate conversion coefficients for absorbed dose per fluence (DT/Φ) using the sitting and standing male hybrid phantom (UFH/NCI) exposed to monoenergetic protons with energies ranging from 2 MeV to 10 GeV. Sex-averaged effective dose per fluence (E/Φ), using the DT/Φ results for the male and female hybrid phantoms in standing and sitting postures, was also calculated. The E/Φ results for the standing UFH/NCI phantom were also compared with the tabulated effective dose conversion coefficients provided in ICRP Publication 116. The radiation transport code MCNPX was used to develop exposure scenarios implementing the male UFH/NCI phantom in sitting and standing postures. Whole-body irradiations were performed using the irradiation geometries recommended by ICRP Publication 116: antero-posterior (AP), postero-anterior (PA), right and left lateral, rotational (ROT) and isotropic (ISO). In most organs, the conversion coefficients DT/Φ were similar for both postures. However, relative differences were significant for organs located in the lower abdominal region, such as the prostate, testes and urinary bladder, especially in the AP geometry. Effective dose conversion coefficients were 18% higher in the standing posture of the UFH/NCI phantom, especially below 100 MeV in AP and PA. In lateral geometry, the conversion coefficient values below 20 MeV were 16% higher in the sitting posture. In ROT geometry, the differences were below 10% for almost all energies. In ISO geometry, the differences in E/Φ were negligible. The E/Φ results for the UFH/NCI phantom were in general below the conversion coefficients provided in ICRP Publication 116.
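
The sex-averaging step follows the ICRP 103 definition of effective dose: tissue-weighted equivalent doses averaged over the male and female phantoms. A minimal sketch (the tissue names and numeric weights supplied by the caller are illustrative, not the full ICRP 103 weighting-factor table):

```python
def effective_dose_per_fluence(tissue_weights, h_male, h_female):
    """E/Phi = sum over tissues T of w_T * (H_T,male + H_T,female) / 2,
    where H_T are equivalent-dose-per-fluence values for tissue T."""
    return sum(w * (h_male[t] + h_female[t]) / 2.0
               for t, w in tissue_weights.items())
```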

  13. User's manual for DELSOL2: a computer code for calculating the optical performance and optimal system design for solar-thermal central-receiver plants

    Energy Technology Data Exchange (ETDEWEB)

    Dellin, T.A.; Fish, M.J.; Yang, C.L.


    DELSOL2 is a revised and substantially extended version of the DELSOL computer program for calculating collector field performance and layout, and optimal system design for solar thermal central receiver plants. The code consists of a detailed model of the optical performance, a simpler model of the non-optical performance, an algorithm for field layout, and a searching algorithm to find the best system design. The latter two features are coupled to a cost model of central receiver components and an economic model for calculating energy costs. The code can handle flat, focused and/or canted heliostats, and external cylindrical, multi-aperture cavity, and flat plate receivers. The program optimizes the tower height, receiver size, field layout, heliostat spacings, and tower position at user specified power levels subject to flux limits on the receiver and land constraints for field layout. The advantages of speed and accuracy characteristic of Version I are maintained in DELSOL2.


    CERN Multimedia

    M. Kasemann

    Overview During the past three months activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production and on user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was developed at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing the impact of and addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...


    CERN Multimedia

    I. Fisk


    Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year; this results in longer reconstruction times and harder events to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...


    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies ramping to 50 M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS specific tests to be included in Service Availa...

  17. Calculating time since death in a mock crime case comparing a new computational method (ExLAC) with the ADH method. (United States)

    Reibe-Pal, Saskia; Madea, Burkhard


    We compared the results of calculating a minimum post-mortem interval (PMImin) in a mock crime case using two different methods: accumulated degree hours (ADH method) and a newly developed computational model called ExLAC. For the ADH method we further applied five reference datasets for the development time of Calliphora vicina (Diptera: Calliphoridae) from 5 different countries and our results confirmed the following: (1) Reference data for blowfly development that has not been sampled using a local blowfly colony should not, in most circumstances, be used in estimating a PMI in real cases; and (2) The new method ExLAC might be a potential alternative to the ADH method.
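
For context, the ADH method accumulates temperature above the species' developmental threshold over time until the total reaches the value required for the observed developmental stage. A minimal sketch of the resulting PMImin estimate (the threshold and required ADH used in any example are illustrative, not C. vicina reference values):

```python
def pmi_min_hours(required_adh, hourly_temps, base_temp):
    """Walk hourly temperatures backwards in time from discovery,
    accumulating degree hours above base_temp; return the number of
    hours needed to reach required_adh (the minimum PMI)."""
    total = 0.0
    for hours, temp in enumerate(hourly_temps, start=1):
        total += max(temp - base_temp, 0.0)
        if total >= required_adh:
            return hours
    raise ValueError("temperature record too short to reach required ADH")
```

The record's caveat about reference data matters here: required_adh comes from a published development dataset, and using data from a geographically distant blowfly colony shifts the estimate.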

  18. Development of a computational code for calculations of shielding in dental facilities; Desenvolvimento de um codigo computacional para calculos de blindagem em instalacoes odontologicas

    Energy Technology Data Exchange (ETDEWEB)

    Lava, Deise D.; Borges, Diogo da S.; Affonso, Renato R.W.; Guimaraes, Antonio C.F.; Moreira, Maria de L. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)


    This paper addresses shielding calculations intended to minimize the exposure of patients and/or personnel to ionizing radiation. The work is based on the report Radiation Protection in Dentistry (NCRP-145), which establishes calculations and standards to be adopted to ensure the safety of those who may be exposed to ionizing radiation in dental facilities, according to the dose limits established by the CNEN-NN-3.1 standard published in September 2011. The methodology comprises a computer language for processing the data provided by that report, together with a commercial application used for creating residential and decoration projects. The FORTRAN language was adopted and applied to a real case. The result is a program capable of returning the thickness of materials such as steel, lead, wood, glass, plaster, acrylic, and leaded glass, which can be used for effective shielding against single-pulse or continuous beams. Several variables are used to calculate the thickness of the shield: number of films used per week, film load, use factor, occupancy factor, distance between the wall and the source, transmission factor, workload, area definition, beam intensity, and intraoral and panoramic examinations. Before applying the methodology, the results were validated against examples provided by NCRP-145; the recalculated examples agree with the report.
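
The core of such a shielding code is converting a required transmission factor into a material thickness. A common textbook approach expresses attenuation in tenth-value layers; the sketch below follows that approach, and the TVL value in any example is a placeholder, not NCRP-145 data:

```python
import math

def barrier_thickness_mm(transmission, tvl_mm):
    """Smallest thickness x (mm) such that 10**(-x / TVL) <= transmission,
    i.e. x = TVL * log10(1 / transmission) for a broad-beam barrier."""
    if not 0.0 < transmission <= 1.0:
        raise ValueError("transmission must be in (0, 1]")
    return tvl_mm * math.log10(1.0 / transmission)
```

A real implementation would derive the required transmission from workload, use factor, occupancy factor, distance, and the dose limit before calling a routine like this.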

  19. A new computational algorithm for the calculation of maximum wind energy penetration in autonomous electrical generation systems

    Energy Technology Data Exchange (ETDEWEB)

    Kaldellis, J.K.; Kavadias, K.A. [Lab of Soft Energy Applications and Environmental Protection, TEI Piraeus, P.O. Box 41046, Athens 12201 (Greece); Filios, A.E. [Fluid Mechanics and Turbomachines Lab., School of Pedagogical and Technological Education, 14121 N. Heraklio Attica (Greece)


    Over the last decade, the Aegean Sea islands, including Crete, have experienced a considerable increase in electrical power demand, exceeding 5% annually. This continuously growing electricity consumption is barely met by several outmoded internal combustion engines, usually at very high operational cost. On the other hand, most of the islands possess high wind potential that could substantially contribute to meeting the corresponding load demand. However, in this case some wind energy absorption problems related to the collaboration between wind parks and the local electricity production system cannot be neglected. In this context, the present study is devoted to realistically estimating the maximum wind energy absorption in autonomous electrical island networks. For this purpose a new, reliable and integrated numerical algorithm is developed, using the available information on the corresponding electricity generation system, in order to calculate the maximum acceptable wind power contribution to the system under the normal restrictions that the system manager imposes. The proposed algorithm compares well with existing historical data as well as with the results of a recent investigation based almost exclusively on the existing wind parks' energy production. (author)
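
A central restriction in such algorithms is that, at each time step, the wind power absorbed cannot push the conventional units below their technical minimum loading. A toy hourly sketch under that single restriction (real algorithms add spinning reserve, unit commitment, and ramp-rate constraints):

```python
def max_wind_absorption(load, wind_avail, conv_min):
    """Per-hour wind energy the autonomous grid can absorb: limited both
    by the available wind and by load minus the technical minimum output
    of the committed thermal units (all in the same power units)."""
    return [min(wind, max(demand - conv_min, 0.0))
            for demand, wind in zip(load, wind_avail)]
```

Summing the returned series over a year and dividing by total demand gives the kind of maximum penetration figure the study estimates.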

  20. Vibrational, NMR and UV-visible spectroscopic investigation and NLO studies on benzaldehyde thiosemicarbazone using computational calculations (United States)

    Moorthy, N.; Prabakar, P. C. Jobe; Ramalingam, S.; Pandian, G. V.; Anbusrinivasan, P.


    In order to investigate the vibrational, electronic and NLO characteristics of the compound benzaldehyde thiosemicarbazone (BTSC), the XRD, FT-IR, FT-Raman, NMR and UV-visible spectra were recorded and analysed against the spectra calculated using the HF and B3LYP methods with the 6-311++G(d,p) basis set. The XRD results revealed that the stabilized molecular systems were confined in an orthorhombic unit cell. The causes of the changes in the chemical and physical properties of the compound are discussed in detail using Mulliken charges and NBO analysis. The shift in the molecular vibrational pattern caused by fusing the thiosemicarbazone ligand with benzaldehyde was closely observed. The occurrence of in-phase and out-of-phase molecular interactions over the frontier molecular orbitals was determined to evaluate the degeneracy of the electronic energy levels. Thermodynamic functions over the temperature range 100-1000 K were investigated to assess the thermal stability of the crystal phase of the compound. The NLO properties were evaluated by determining the polarizability and hyperpolarizability of the compound in the crystal phase. The physical stabilization of the geometry of the compound is explained by geometry deformation analysis.


    CERN Multimedia

    I. Fisk


    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, making it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  2. Computer

    CERN Document Server

    Atkinson, Paul


    The pixelated rectangle we spend most of our day staring at in silence is not the television, as many long feared, but the computer: the ubiquitous portal of our work and personal lives. At this point, the computer is so common that we hardly notice it in our view. It is difficult to envision that, not that long ago, it was a gigantic, room-sized structure accessible to only a few, inspiring as much awe and respect as fear and mystery. Now that the machine has decreased in size and increased in popular use, the computer has become a prosaic appliance, little more noted than a toaster. These dramati...

  3. Development of computer programme for the use of empirical calculation of mining subsidence; Desarrollo informatico para utilizacion de los metodos empiricos de calculo de subsidencia minera

    Energy Technology Data Exchange (ETDEWEB)


    The fundamental objective of the project is the elaboration of a user-friendly computer programme that allows mining technicians to easily apply the empirical methods for calculating mining subsidence. As is well known, these methods use, together with a suitable theoretical support, the experimental data obtained during a long period of mining activity in areas of different geological and geomechanical nature. Thus they can incorporate into the calculation the local parameters that could hardly be taken into account by purely theoretical methods. In general, the procedure developed by the VNIMI Institute of Leningrad has been followed as the basic calculation method, a procedure particularly suitable for application to the varied conditions that may occur in the mining of flat or steep seams. The computer programme has been worked out on the basis of the MicroStation System (version 5.0) of INTERGRAPH, which allows the development of new applications related to the basic aims of the project. An important feature of the programme is its easy adaptation to local conditions by adjusting the geomechanical or mining parameters according to values obtained from one's own working experience. (Author)

  4. Application of the ICRP/ICRU reference computational phantoms to internal dosimetry: calculation of specific absorbed fractions of energy for photons and electrons. (United States)

    Hadid, L; Desbrée, A; Schlattl, H; Franck, D; Blanchardon, E; Zankl, M


    The emission of radiation from a contaminated body region is connected with the dose received by radiosensitive tissue through the specific absorbed fractions (SAFs) of emitted energy, which is therefore an essential quantity for internal dose assessment. A set of SAFs were calculated using the new adult reference computational phantoms, released by the International Commission on Radiological Protection (ICRP) together with the International Commission on Radiation Units and Measurements (ICRU). Part of these results has been recently published in ICRP Publication 110 (2009 Adult reference computational phantoms (Oxford: Elsevier)). In this paper, we mainly discuss the results and also present them in numeric form. The emission of monoenergetic photons and electrons with energies ranging from 10 keV to 10 MeV was simulated for three source organs: lungs, thyroid and liver. SAFs were calculated for four target regions in the body: lungs, colon wall, breasts and stomach wall. For quality assurance purposes, the simulations were performed simultaneously at the Helmholtz Zentrum München (HMGU, Germany) and at the Institute for Radiological Protection and Nuclear Safety (IRSN, France), using the Monte Carlo transport codes EGSnrc and MCNPX, respectively. The comparison of results shows overall agreement for photons and high-energy electrons with differences lower than 8%. Nevertheless, significant differences were found for electrons at lower energy for distant source/target organ pairs. Finally, the results for photons were compared to the SAF values derived using mathematical phantoms. Significant variations that can amount to 200% were found. The main reason for these differences is the change of geometry in the more realistic voxel body models. 
For electrons, no SAFs have been computed with the mathematical phantoms; instead, approximate formulae have been used by both the Medical Internal Radiation Dose committee (MIRD) and the ICRP due to the limitations imposed
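    In the MIRD/ICRP formalism these SAFs feed into, the mean dose to a target region is the cumulated activity in the source region multiplied by an S value assembled from the SAFs. A minimal sketch of that bookkeeping, with an entirely hypothetical emission spectrum and SAF curve (real values come from tabulations such as the one in this record):

```python
# Toy MIRD-style dose estimate from SAFs (standard formalism; all numbers hypothetical).
# S-value: S = sum_i y_i * E_i * SAF(E_i); dose D = A_tilde * S.

MEV_TO_J = 1.602e-13  # joules per MeV

def s_value(emissions, saf):
    """emissions: list of (yield per decay, energy in MeV);
    saf: callable returning the specific absorbed fraction (1/kg) at energy E."""
    return sum(y * e * saf(e) * MEV_TO_J for y, e in emissions)  # Gy per decay

# Hypothetical photon spectrum and SAF curve for some source -> target pair
emissions = [(0.85, 0.140), (0.10, 0.018)]   # (yield, MeV)
saf = lambda e: 0.004 / (1.0 + e)            # 1/kg, made-up shape

S = s_value(emissions, saf)                  # Gy per decay
a_tilde = 1.0e12                             # cumulated activity (total decays)
dose = a_tilde * S                           # mean absorbed dose in Gy
print(f"S-value: {S:.3e} Gy/decay, dose: {dose:.3e} Gy")
```

The structure (energy times yield times SAF, summed over emissions) is the standard one; only the numbers are placeholders.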



    CERN Document Server

    I. Fisk


    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB of RAW per event. The central collisions are more complex and...


    CERN Multimedia

    M. Kasemann, P. McBride. Edited by M-C. Sawley, with contributions from: P. Kreuzer, D. Bonacorsi, S. Belforte, F. Wuerthwein, L. Bauerdick, K. Lassila-Perini, M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  8. Real-space finite-difference calculation method of generalized Bloch wave functions and complex band structures with reduced computational cost. (United States)

    Tsukamoto, Shigeru; Hirose, Kikuji; Blügel, Stefan


    Generalized Bloch wave functions of bulk structures, which are composed of not only propagating waves but also decaying and growing evanescent waves, are known to be essential for defining the open boundary conditions in the calculations of the electronic surface states and scattering wave functions of surface and junction structures. Electronic complex band structures derived from the generalized Bloch wave functions are also essential for studying bound states of the surface and junction structures, which do not appear in conventional band structures. We present a novel calculation method to obtain the generalized Bloch wave functions of periodic bulk structures by solving a generalized eigenvalue problem, whose dimension is drastically reduced in comparison with the conventional generalized eigenvalue problem derived by Fujimoto and Hirose [Phys. Rev. B 67, 195315 (2003)]. The generalized eigenvalue problem derived in this work is mathematically equivalent to the conventional one; thus, the computational cost of solving the eigenvalue problem is reduced considerably without any approximation or loss of rigour in the formulation. To exhibit the performance of the present method, we demonstrate practical calculations of electronic complex band structures and electron transport properties of Al and Cu nanoscale systems. Moreover, employing atom-structured electrodes and jellium-approximated ones for both Al and Si monatomic chains, we investigate how much the electron transport properties are unphysically affected by the jellium parts.
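    To illustrate what generalized Bloch solutions look like, a 1D tight-binding chain is the smallest possible example: the Bloch condition reduces to a quadratic in the factor λ = c_{n+1}/c_n, with |λ| = 1 for propagating waves and |λ| ≠ 1 for evanescent ones. This toy model is illustrative only and is not the authors' real-space finite-difference formulation:

```python
import numpy as np

# 1D tight-binding chain: -t*c_{n-1} + eps*c_n - t*c_{n+1} = E*c_n.
# Writing c_{n+1} = lam*c_n gives the quadratic -t*lam^2 + (eps - E)*lam - t = 0,
# a (trivially small) generalized Bloch eigenproblem: |lam| = 1 -> propagating,
# |lam| != 1 -> evanescent. Parameters are illustrative.
def bloch_factors(E, eps=0.0, t=1.0):
    return np.roots([-t, eps - E, -t])

inside = bloch_factors(0.5)    # |E - eps| < 2t: inside the band
outside = bloch_factors(3.0)   # |E - eps| > 2t: outside the band

print(np.abs(inside))   # both moduli ~ 1 (propagating Bloch waves)
print(np.abs(outside))  # one decaying (<1), one growing (>1) evanescent wave
```

In realistic multi-orbital cells the quadratic becomes the (much larger) generalized eigenvalue problem the paper reduces in size.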


    CERN Multimedia

    I. Fisk


    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping-up and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, that was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...


    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...


    CERN Multimedia

    I. Fisk


    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...


    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...


    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...


    CERN Multimedia

    I. Fisk


    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...


    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenge (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier-0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier-0, processing a massive number of very large files and with a high writing speed to tapes. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier-1s: response time, data transfer rate and success rate for tape-to-buffer staging of files kept exclusively on tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successes prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...


    CERN Multimedia

    I. Fisk


    Computing operation has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013 with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand with a peak of over 750 Million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week with peaks at close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams with frequent deletion campaigns from deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...


    CERN Document Server

    I. Fisk


      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...


    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier-1 and Tier-2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure, as well as interfacing to the data operations tasks, are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...


    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort during the last period was focused on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between the Tier-0 and Tier-1s. In addition, the capa...


    CERN Multimedia

    Contributions from I. Fisk


    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...


    CERN Multimedia

    P. MacBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and running the samples through the High Level Trigger (HLT). The data were then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...


    CERN Multimedia


    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  3. SU-E-I-05: A Correction Algorithm for Kilovoltage Cone-Beam Computed Tomography Dose Calculations in Cervical Cancer Patients

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, J; Zhang, W; Lu, J [Cancer Hospital of Shantou University Medical College, Shantou, Guangdong (China)]


    Purpose: To investigate the accuracy and feasibility of dose calculations using kilovoltage cone-beam computed tomography in cervical cancer radiotherapy with a correction algorithm. Methods: The Hounsfield unit (HU) to electron density (HU-density) curve was obtained for both planning CT (pCT) and kilovoltage cone-beam CT (CBCT) using a CIRS-062 calibration phantom. The pCT and kV-CBCT images have different HU values, so directly using the CBCT HU-density curve for dose calculation on CBCT images may introduce deviations in the dose distribution; it is therefore necessary to normalize the HU values between pCT and CBCT. A HU correction algorithm was applied to the CBCT images (cCBCT). Fifteen intensity-modulated radiation therapy (IMRT) plans of cervical cancer were chosen, and the plans were transferred to the pCT and cCBCT data sets without any changes for dose calculations. Phantom and patient studies were carried out. The dose differences and dose distributions were compared between the cCBCT and pCT plans. Results: The HU numbers of CBCT were measured several times, and the maximum change was less than 2%. Compared with pCT, both CBCT and cCBCT showed discrepancies: the dose differences in the CBCT and cCBCT images were 2.48%±0.65% (range: 1.3%∼3.8%) and 0.48%±0.21% (range: 0.1%∼0.82%) for the phantom study, respectively. For dose calculation in patient images, the dose differences were 2.25%±0.43% (range: 1.4%∼3.4%) and 0.63%±0.35% (range: 0.13%∼0.97%), respectively. For the dose distributions, the passing rate of cCBCT was higher than that of CBCT. Conclusion: CBCT-based dose calculation is feasible in cervical cancer radiotherapy, and the correction algorithm offers acceptable accuracy. It will become a useful tool for adaptive radiation therapy.
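    The HU-density calibration step at the heart of such a workflow can be sketched as a piecewise-linear lookup; the curve points below are illustrative, not CIRS-062 measurements:

```python
import numpy as np

# Piecewise-linear HU -> relative electron density lookup, the kind of
# calibration curve measured with an electron-density phantom. The sample
# points below are illustrative placeholders, not measured data.
hu_points      = np.array([-1000.0, -500.0, 0.0, 300.0, 1200.0])
density_points = np.array([0.00,     0.50,  1.00, 1.10,  1.70])

def hu_to_density(hu):
    """Map HU values to relative electron density by linear interpolation."""
    return np.interp(hu, hu_points, density_points)

out = hu_to_density(np.array([-750.0, 0.0, 600.0]))
print(out)  # [0.25 1.   1.3 ]
```

A CBCT-specific correction would amount to remapping CBCT HU values so that this same curve applies, which is the normalization step the abstract describes.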

  4. Computer-assisted radiographic calculation of spinal curvature in brachycephalic "screw-tailed" dog breeds with congenital thoracic vertebral malformations: reliability and clinical evaluation.

    Directory of Open Access Journals (Sweden)

    Julien Guevar

    Full Text Available The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer-assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.

  5. Computer-assisted radiographic calculation of spinal curvature in brachycephalic "screw-tailed" dog breeds with congenital thoracic vertebral malformations: reliability and clinical evaluation. (United States)

    Guevar, Julien; Penderis, Jacques; Faller, Kiterie; Yeamans, Carmen; Stalin, Catherine; Gutierrez-Quintana, Rodrigo


    The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.
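    The Cobb angle measurement itself reduces to the angle between the two end-plate lines; a minimal sketch, with hypothetical landmark coordinates:

```python
import math

# Cobb angle from two end-plate landmark pairs (cranial end plate of the
# vertebra cranial to the malformation, caudal end plate of the vertebra
# caudal to it). Landmark coordinates below are illustrative pixel positions.
def cobb_angle(p1, p2, q1, q2):
    """Angle in degrees between the lines p1-p2 and q1-q2."""
    a1 = math.atan2(p2[1] - p1[1], p2[0] - p1[0])
    a2 = math.atan2(q2[1] - q1[1], q2[0] - q1[0])
    deg = abs(math.degrees(a1 - a2)) % 180.0
    return min(deg, 180.0 - deg)  # always report the acute intersection angle

# Two end plates tilted roughly 40 degrees relative to each other
angle = cobb_angle((0, 0), (100, 0), (0, 50), (100, 134))
print(f"Cobb angle: {angle:.1f} degrees")
```

Against the study's threshold, a value above 35° would fall in the range where neurological deficits were more common.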

  6. A Deep Insight into the Details of the Interisomerization and Decomposition Mechanism of o-Quinolyl and o-Isoquinolyl Radicals. Quantum Chemical Calculations and Computer Modeling. (United States)

    Dubnikova, Faina; Tamburu, Carmen; Lifshitz, Assa


    The isomerization of o-quinolyl ↔ o-isoquinolyl radicals and their thermal decomposition were studied by quantum chemical methods, where potential energy surfaces of the reaction channels and their kinetic rate parameters were determined. A detailed kinetics scheme containing 40 elementary steps was constructed. Computer simulations were carried out to determine the isomerization mechanism and the distribution of reaction products in the decomposition. The calculated mole percent of the stable products was compared to the experimental values obtained in this laboratory in the past using a single-pulse shock tube. The agreement between the experimental and the calculated mole percents was very good. Figures showing the mole percents of eight stable decomposition products plotted vs T are presented. The fast isomerization of o-quinolyl → o-isoquinolyl radicals via the intermediate indene imine radical, and the attainment of fast equilibrium between these two radicals, is the reason for the identical product distribution regardless of whether the reactant radical is o-quinolyl or o-isoquinolyl. Three of the main decomposition products of the o-quinolyl radical are those containing the benzene ring, namely the phenyl, benzonitrile and phenylacetylene radicals. They undergo further decomposition, mainly at high temperatures, via two types of reactions: (1) opening of the benzene ring in the radicals, followed by splitting into fragments; (2) dissociative attachment of hydrogen atoms to benzonitrile and phenylacetylene to form hydrogen cyanide and acetylene.
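    The role of the fast pre-equilibrium can be seen in a two-state toy model A ⇌ B standing in for o-quinolyl ↔ o-isoquinolyl; the rate constants below are illustrative, not the paper's fitted parameters:

```python
import math

# Toy first-order isomerization A <-> B (standing in for
# o-quinolyl <-> o-isoquinolyl; kf, kr are illustrative, not fitted values).
# Closed-form solution:
#   [A](t) = A_eq + (A0 - A_eq) * exp(-(kf + kr) * t),  A_eq = kr / (kf + kr)
def fraction_A(t, kf, kr, A0=1.0):
    a_eq = kr / (kf + kr)
    return a_eq + (A0 - a_eq) * math.exp(-(kf + kr) * t)

kf, kr = 5.0e6, 2.0e6   # s^-1, illustrative high-temperature magnitudes
print(fraction_A(1.0e-8, kf, kr))  # early times: still mostly A
print(fraction_A(1.0e-5, kf, kr))  # equilibrated: A_eq = kr/(kf+kr) = 2/7
```

Because equilibration (timescale 1/(kf+kr)) is much faster than the slow decomposition channels, the product distribution becomes independent of which isomer is the starting reactant, exactly the behaviour the abstract describes.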

  7. Development of a computer code for shielding calculation in X-ray facilities; Desenvolvimento de um codigo computacional para calculos de blindagem em salas radiograficas

    Energy Technology Data Exchange (ETDEWEB)

    Borges, Diogo da S.; Lava, Deise D.; Affonso, Renato R.W.; Moreira, Maria de L.; Guimaraes, Antonio C.F. [Instituto de Engenharia Nuclear (IEN/CNEN-RJ), Rio de Janeiro, RJ (Brazil)]


    The construction of an effective barrier against the ionizing radiation present in X-ray rooms requires consideration of many variables. The methodology used for specifying the thickness of the primary and secondary shielding of a traditional X-ray room considers the following factors: use factor, occupancy factor, distance between the source and the wall, workload, air kerma and distance between the patient and the receptor. These data made it possible to develop a computer program that identifies and uses these variables in functions obtained by regression of the graphs provided in NCRP Report No. 147 (Structural Shielding Design for Medical X-Ray Imaging Facilities) to calculate the shielding of the room walls as well as the darkroom wall and adjacent areas. The program was validated by comparing its results with a base case provided by that report. The obtained thicknesses cover various materials such as steel, wood and concrete. After validation, the program was applied to a real radiographic room, whose visual model was built with indoor/outdoor modeling software. The resulting barrier-calculation program is a user-friendly tool for planning radiographic rooms that comply with the limits established by CNEN-NN-3:01, published in September 2011.
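    NCRP Report No. 147 fits its transmission curves with the Archer model, B(x) = [(1 + β/α)e^{αγx} − β/α]^{−1/γ}, which can be inverted in closed form for the barrier thickness; the fit parameters below are illustrative placeholders, not the tabulated NCRP values:

```python
import math

# Archer transmission model used by NCRP Report No. 147:
#   B(x) = [(1 + b/a) * exp(a*g*x) - b/a] ** (-1/g)
# Inverted for the thickness x that achieves a required transmission B:
def barrier_thickness(B, a, g, b):
    """Thickness (in the length unit of 1/a) giving transmission factor B."""
    return (1.0 / (a * g)) * math.log((B ** (-g) + b / a) / (1.0 + b / a))

# Illustrative fit parameters (NOT the tabulated NCRP values), in mm^-1:
a, g, b = 2.3, 0.59, 7.0
x = barrier_thickness(B=1e-3, a=a, g=g, b=b)
print(f"required thickness: {x:.2f} mm")

# Round trip: plugging x back into the forward model reproduces B
B_check = ((1 + b / a) * math.exp(a * g * x) - b / a) ** (-1.0 / g)
```

A program like the one described would look up α, β, γ for the material and tube potential, derive the required B from workload, use, occupancy and distance, then apply this inversion.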

  8. Reducing the memory usage for effective computer-generated hologram calculation using compressed look-up table in full-color holographic display. (United States)

    Jia, Jia; Wang, Yongtian; Liu, Juan; Li, Xin; Pan, Yijie; Sun, Zhumei; Zhang, Bin; Zhao, Qing; Jiang, Wei


    A fast algorithm with low memory usage is proposed to generate the hologram for full-color 3D display based on a compressed look-up table (C-LUT). The C-LUT is described and built to reduce the memory usage and speed up the calculation of the computer-generated hologram (CGH). Numerical simulations and optical experiments are performed to confirm the method, and several other algorithms are compared. The results show that the memory usage of the C-LUT stays low as the number of depth layers of the 3D object increases, and that the time for building the C-LUT is independent of the number of depth layers. The C-LUT algorithm is an efficient method for saving memory and calculation time, and it is expected to enable real-time, full-color 3D holographic display in the future.
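    A look-up-table CGH scheme of this family can be sketched as follows: precompute one zone pattern per depth layer, then build the hologram by shifting and accumulating that pattern for each object point. This is a generic LUT sketch, not the paper's compressed C-LUT:

```python
import numpy as np

# Minimal LUT-style CGH sketch (illustrative; not the paper's C-LUT).
# Precompute one Fresnel zone pattern per depth layer, then accumulate a
# shifted copy of the layer's pattern for each object point.
N, wavelength, pitch = 256, 532e-9, 8e-6     # grid size, green light, pixel pitch
y, x = np.mgrid[-N//2:N//2, -N//2:N//2] * pitch

def zone_pattern(z):
    """Complex Fresnel zone pattern of a point source at depth z (paraxial)."""
    return np.exp(1j * np.pi * (x**2 + y**2) / (wavelength * z))

depths = [0.10, 0.12]                          # depth layers in metres
lut = {z: zone_pattern(z) for z in depths}     # the look-up table

points = [(10, -20, 0.10), (-30, 5, 0.12)]     # (dx, dy, z) object points
cgh = np.zeros((N, N), dtype=complex)
for dx, dy, z in points:
    cgh += np.roll(lut[z], shift=(dy, dx), axis=(0, 1))

hologram = np.angle(cgh)                       # phase-only hologram
```

The memory cost of a plain LUT grows with the number of depth layers, which is exactly the scaling the compressed C-LUT is designed to avoid.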

  9. Development of a model for the rational design of molecular imprinted polymer: Computational approach for combined molecular dynamics/quantum mechanics calculations

    Energy Technology Data Exchange (ETDEWEB)

    Dong Cunku [Department of Chemistry, Harbin Institute of Technology, Harbin 150090 (China); Li Xin, E-mail: [Department of Chemistry, Harbin Institute of Technology, Harbin 150090 (China); Guo Zechong [School of Municipal Environmental Engineering, Harbin Institute of Technology, Harbin 150090 (China); Qi Jingyao, E-mail: [School of Municipal Environmental Engineering, Harbin Institute of Technology, Harbin 150090 (China)


    A new rational approach for the preparation of molecularly imprinted polymers (MIPs), based on the combination of molecular dynamics (MD) simulations and quantum mechanics (QM) calculations, is described in this work. Before performing molecular modeling, a virtual library of forty frequently used functional monomers was created. MD simulations were first conducted to screen the top three monomers from the virtual library in each porogen: acetonitrile, chloroform and carbon tetrachloride. QM calculations were then performed to select the optimum monomer and porogen solvent; the monomer giving the highest binding energy was chosen as the candidate for preparing the MIP in its corresponding solvent. Acetochlor, a widely used herbicide, was chosen as the target analyte. According to the theoretical results, an MIP with acetochlor as template was prepared by emulsion polymerization using N,N-methylene bisacrylamide (MBAAM) as functional monomer and divinylbenzene (DVB) as cross-linker in chloroform. The synthesized MIP was then tested by the equilibrium-adsorption method and demonstrated high removal efficiency for acetochlor. Mulliken charge distributions and {sup 1}H NMR spectroscopy of the synthesized MIP provided insight into the nature of recognition during the imprinting process, probing the interactions that govern selective binding-site formation at the molecular level. We believe the computer simulation method proposed in this paper is a novel and reliable approach to the design and synthesis of MIPs.

  10. Methods and computer executable instructions for rapidly calculating simulated particle transport through geometrically modeled treatment volumes having uniform volume elements for use in radiotherapy (United States)

    Frandsen, Michael W.; Wessol, Daniel E.; Wheeler, Floyd J.


    Methods and computer executable instructions are disclosed for ultimately developing a dosimetry plan for a treatment volume targeted for irradiation during cancer therapy. The dosimetry plan is available in "real-time", which especially enhances clinical use for in vivo applications. The real-time performance is achieved because of the novel geometric model constructed for the planned treatment volume, which, in turn, allows rapid calculations to be performed for simulated movements of particles along particle tracks therethrough. The particles are exemplary representations of neutrons emanating from a neutron source during BNCT. In a preferred embodiment, a medical image having a plurality of pixels of information representative of a treatment volume is obtained. The pixels are: (i) converted into a plurality of substantially uniform volume elements having substantially the same shape and volume as the pixels; and (ii) arranged into a geometric model of the treatment volume. An anatomical material associated with each uniform volume element is defined and stored. Thereafter, a movement of a particle along a particle track is defined through the geometric model along a primary direction of movement that begins in a starting element of the uniform volume elements and traverses to a next element. The particle movement along the particle track is effectuated in integer-based increments along the primary direction of movement until a position of intersection occurs, representing a condition where the anatomical material of the next element is substantially different from that of the starting element. This position of intersection is then useful for indicating whether a neutron has been captured, scattered or has exited the geometric model. From this intersection, a distribution of radiation doses can be computed for use in the cancer therapy.
The foregoing represents an advance in computational times by multiple factors of
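    The integer-based traversal described above is, in essence, a grid-marching scheme: step cell by cell along the primary direction of movement until the material changes or the particle exits. A minimal sketch of the idea on a hypothetical material grid (not the patented implementation):

```python
def walk_to_boundary(grid, start, direction):
    """Step cell-by-cell through a 3-D material grid from `start` along the
    dominant (primary) axis of `direction`, mimicking integer-based traversal,
    until the material differs from the starting element. Returns the index
    of the first differing cell, or None if the particle exits the grid."""
    nx, ny, nz = len(grid), len(grid[0]), len(grid[0][0])
    i, j, k = start
    material = grid[i][j][k]
    # Primary direction of movement: the axis with the largest |component|
    axis = max(range(3), key=lambda a: abs(direction[a]))
    step = [0, 0, 0]
    step[axis] = 1 if direction[axis] > 0 else -1
    while True:
        i, j, k = i + step[0], j + step[1], k + step[2]
        if not (0 <= i < nx and 0 <= j < ny and 0 <= k < nz):
            return None                  # particle exited the model
        if grid[i][j][k] != material:
            return (i, j, k)             # intersection: material changed

# Hypothetical 4-cell slab: "tissue" for x < 2, then "bone", along x
grid = [[[("tissue" if x < 2 else "bone")]] for x in range(4)]
boundary = walk_to_boundary(grid, (0, 0, 0), (1.0, 0.0, 0.0))
```

The returned index is the analogue of the "position of intersection" at which capture, scatter, or exit would be decided.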


    CERN Multimedia

    I. Fisk


    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. A variety of activities are still ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and the success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support is provided for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations. A GlideInWMS installation is now deployed at CERN, complementing the GlideInWMS factory located in the US. There is new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  12. Cooperative and competitive concurrency in scientific computing. A full open-source upgrade of the program for dynamical calculations of RHEED intensity oscillations (United States)

    Daniluk, Andrzej


    A computational model is a computer program that attempts to simulate an abstract model of a particular system. Computational models involve enormous numbers of calculations and often require supercomputer speed. As personal computers become more and more powerful, more laboratory experiments can be converted into computer models that can be interactively examined by scientists and students without the risk and cost of the actual experiments. The future of programming is concurrent programming. The threaded programming model provides application programmers with a useful abstraction of the concurrent execution of multiple tasks. The objective of this release is to address the design of an architecture for a scientific application that may execute as multiple threads, as well as implementations of the related shared data structures. New version program summary Program title: GrowthCP Catalogue identifier: ADVL_v4_0 Program summary URL: Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, No. of lines in distributed program, including test data, etc.: 32 269 No. of bytes in distributed program, including test data, etc.: 8 234 229 Distribution format: tar.gz Programming language: Free Object Pascal Computer: multi-core x64-based PC Operating system: Windows XP, Vista, 7 Has the code been vectorised or parallelized?: No RAM: More than 1 GB. The program requires a 32-bit or 64-bit processor to run the generated code. Memory is addressed using 32-bit (on 32-bit processors) or 64-bit (on 64-bit processors with 64-bit addressing) pointers. The amount of addressed memory is limited only by the available amount of virtual memory. Supplementary material: The figures mentioned in the "Summary of revisions" section can be obtained here. Classification: 4.3, 7.2, 6.2, 8, 14 External routines: Lazarus [1] Catalogue

  13. Calculator calculus

    CERN Document Server

    McCarty, George


    How THIS BOOK DIFFERS This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x) = 67.5^x, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits. A quick example is 1.1^10, 1.01^100, 1.001^1000, .... Another example is t = 0.1, 0.01, in the functio...
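    The sequence mentioned in the blurb can be reproduced on any calculator or computer; it is the classic (1 + 1/n)^n exercise, whose terms increase toward e:

```python
import math

# The calculator exercise: (1 + 1/n)^n for n = 10, 100, 1000.
# 1.1^10, 1.01^100, 1.001^1000 climb toward Euler's number e.
approximations = [(1 + 1 / n) ** n for n in (10, 100, 1000)]
```

Each successive term is larger than the last and the thousandth-power term already agrees with e to about three decimal places, which is exactly the kind of numerical evidence the book uses to motivate limits.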

  14. Description of input and examples for PHREEQC version 3: a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations (United States)

    Parkhurst, David L.; Appelo, C.A.J.


    PHREEQC version 3 is a computer program written in the C and C++ programming languages that is designed to perform a wide variety of aqueous geochemical calculations. PHREEQC implements several types of aqueous models: two ion-association aqueous models (the Lawrence Livermore National Laboratory model and WATEQ4F), a Pitzer specific-ion-interaction aqueous model, and the SIT (Specific ion Interaction Theory) aqueous model. Using any of these aqueous models, PHREEQC has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations with reversible and irreversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and pressure and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters within specified compositional uncertainty limits. Many new modeling features were added to PHREEQC version 3 relative to version 2. The Pitzer aqueous model (pitzer.dat database, with keyword PITZER) can be used for high-salinity waters that are beyond the range of application for the Debye-Hückel theory. The Peng-Robinson equation of state has been implemented for calculating the solubility of gases at high pressure. Specific volumes of aqueous species are calculated as a function of the dielectric properties of water and the ionic strength of the solution, which allows calculation of pressure effects on chemical reactions and the density of a solution. The specific conductance and the density of a solution are calculated and printed in the output file. In addition to Runge-Kutta integration, a stiff ordinary differential equation solver (CVODE) has been included for kinetic calculations with multiple rates that occur at widely different time scales
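    A saturation-index calculation, the most basic capability listed above, reduces to comparing the ion activity product with the equilibrium constant: SI = log10(IAP/K). A minimal sketch with hypothetical ion activities (the log K used for calcite, about -8.48, is a commonly tabulated value, not necessarily the one in any particular PHREEQC database):

```python
import math

def saturation_index(activities, log_k):
    """SI = log10(IAP) - log10(K).
    SI > 0: supersaturated; SI < 0: undersaturated; SI = 0: equilibrium."""
    log_iap = sum(math.log10(a) for a in activities)
    return log_iap - log_k

# Calcite dissolution: CaCO3 = Ca(2+) + CO3(2-), log K ~ -8.48
# Hypothetical activities: a(Ca2+) = 1e-3, a(CO3 2-) = 1e-5
si = saturation_index([1e-3, 1e-5], -8.48)
```

Here IAP = 10^-8 exceeds K = 10^-8.48, so the water is slightly supersaturated with respect to calcite (SI of about +0.48).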


    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase availability of more sites such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned, it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhE-DEx topology. Since mid-February, a transfer volume of about 12 P...

  16. ISICS2011, an updated version of ISICS: A program for calculating K-, L-, and M-shell cross sections from PWBA and ECPSSR theories using a personal computer (United States)

    Cipolla, Sam J.


    In this new version of ISICS, called ISICS2011, a few omissions and incorrect entries in the built-in file of electron binding energies have been corrected; operational situations leading to un-physical behavior have been identified and flagged. New version program summaryProgram title: ISICS2011 Catalogue identifier: ADDS_v5_0 Program summary URL: Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, No. of lines in distributed program, including test data, etc.: 6011 No. of bytes in distributed program, including test data, etc.: 130 587 Distribution format: tar.gz Programming language: C Computer: 80486 or higher-level PCs Operating system: WINDOWS XP and all earlier operating systems Classification: 16.7 Catalogue identifier of previous version: ADDS_v4_0 Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1716. Does the new version supersede the previous version?: Yes Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions. Solution method: Numerical integration of form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits. Reasons for new version: General need for higher precision in output format for projectile energies; some built-in binding energies needed correcting; some anomalous results occur due to faulty read-in data or calculated parameters becoming un-physical; erroneous calculations could result for the L and M shells when restricted K-shell options are inadvertently chosen; to achieve general compatibility with ISICSoo, a companion C++ version that is portable to Linux and MacOS platforms, has been submitted for publication in the CPC Program Library approximately at the same time as this present new standalone version of ISICS [1]. Summary of revisions: The format field for

  17. The repeatability and reproducibility of fetal cardiac ventricular volume calculations utilizing Spatio-Temporal Image Correlation (STIC) and Virtual Organ Computer-aided AnaLysis (VOCAL™) (United States)

    Hamill, Neil; Romero, Roberto; Hassan, Sonia S; Lee, Wesley; Myers, Stephen A; Mittal, Pooja; Kusanovic, Juan Pedro; Chaiworapongsa, Tinnakorn; Vaisbuch, Edi; Espinoza, Jimmy; Gotsch, Francesca; Carletti, Angela; Goncalves, Luis F.; Yeo, Lami


    Objective To quantify the repeatability and reproducibility of fetal cardiac ventricular volumes obtained utilizing STIC and VOCAL™. Methods A technique was developed to compute ventricular volumes using the sub-feature Contour Finder: Trace. Twenty-five normal pregnancies were evaluated for the following: (1) to compare the coefficient of variation (CV) in ventricular volumes between 15° and 30° rotation; (2) to compare the CV between three methods of quantifying ventricular volumes: (a) Manual Trace, (b) Inversion Mode and (c) Contour Finder: Trace; and (3) to determine repeatability by calculating agreement and reliability of ventricular volumes when each STIC was measured twice by 3 observers. Reproducibility was assessed by obtaining two STICs from each of 44 normal pregnancies. For each STIC, 2 ventricular volume calculations were performed, and agreement and reliability were evaluated. Additionally, measurement error was examined. Results (1) Agreement was better with 15° rotation than 30° (15°: 3.6%, 95% CI: 3.0 – 4.2 versus 30°: 7.1%, 95% CI: 5.8 – 8.6; p<0.001); (2) ventricular volumes obtained with Contour Finder: Trace had better agreement than those obtained using either Inversion Mode (Contour Finder: Trace: 3.6%, 95% CI 3.0 – 4.2 versus Inversion Mode: 6.0%, 95% CI 4.9 – 7.2; p < 0.001) or Manual Trace (10.5%, 95% CI 8.7 – 12.5; p < 0.001); (3) ventricular volumes were repeatable with good agreement and excellent reliability for both intra-observer and inter-observer measurements; and (4) ventricular volumes were reproducible with negligible difference in agreement and good reliability. In addition, bias between STIC acquisitions was minimal (<1%; mean percent difference −0.4%, 95% limits of agreement: −5.4 – 5.9). Conclusions Fetal echocardiography utilizing STIC and VOCAL allows repeatable and reproducible calculation of ventricular volumes with the sub-feature Contour Finder: Trace. PMID:19778875
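    The coefficient of variation used above to express agreement can be computed directly from paired measurements. A minimal sketch with hypothetical volumes, using one common within-pair definition (not necessarily the paper's exact estimator):

```python
import math

def pairwise_cv(first, second):
    """Mean within-pair coefficient of variation for paired measurements:
    for each pair, the sd of the two values (|a - b| / sqrt(2)) divided by
    the pair mean, then averaged across subjects."""
    cvs = []
    for a, b in zip(first, second):
        mean = (a + b) / 2
        sd = abs(a - b) / math.sqrt(2)   # sample sd of two values
        cvs.append(sd / mean)
    return sum(cvs) / len(cvs)

# Hypothetical repeated ventricular volume measurements, mL
first_measurement  = [1.20, 0.95, 1.40]
second_measurement = [1.25, 0.93, 1.35]
cv = pairwise_cv(first_measurement, second_measurement)
```

With differences of a few percent per pair, the mean CV lands in the single-digit-percent range, the same scale as the 3.6% agreement the study reports for 15° rotation.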

  18. Theoretical variance analysis of single- and dual-energy computed tomography methods for calculating proton stopping power ratios of biological tissues. (United States)

    Yang, M; Virshup, G; Clayton, J; Zhu, X R; Mohan, R; Dong, L


    We discovered an empirical relationship between the logarithm of the mean excitation energy (ln Im) and the effective atomic number (EAN) of human tissues, which allows patient-specific proton stopping power ratios (SPRs) to be computed from dual-energy CT (DECT) imaging. The accuracy of the DECT method was evaluated for 'standard' human tissues as well as for variations on their compositions. The DECT method was compared to the existing standard clinical practice: the stoichiometric calibration method introduced by Schneider et al at the Paul Scherrer Institute. In this simulation study, SPRs were derived from calculated CT numbers of known material compositions rather than from measurement. For standard human tissues, both methods achieved good accuracy, with root-mean-square (RMS) errors well below 1%. For human tissues with small perturbations from standard compositions, the DECT method was shown to be less sensitive to the perturbations than the stoichiometric calibration method. The RMS error remained below 1% for most cases using the DECT method, which implies that the DECT method might be more suitable for measuring patient-specific tissue compositions to improve the accuracy of treatment planning for charged-particle therapy. In this study, the effects of CT imaging artifacts due to beam hardening, scatter, noise, patient movement, etc. were not analyzed, so the potential the DECT method achieves in theoretical conditions may not be fully achievable in clinical settings. Further research and development may be needed to take advantage of the DECT method for characterizing individual human tissues.
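    The SPR that the DECT method estimates follows from the Bethe equation once the relative electron density and mean excitation energy I are known. A minimal sketch without shell or density corrections, using illustrative soft-tissue-like values (hypothetical, not from the paper):

```python
import math

def stopping_power_ratio(rel_electron_density, I_medium_eV, I_water_eV=75.0,
                         kinetic_energy_MeV=200.0):
    """Proton stopping power ratio to water from the Bethe equation
    (no shell or density-effect corrections): SPR = rho_e * L(I_m) / L(I_w),
    where L is the stopping number per electron."""
    m_e_c2 = 0.511e6      # electron rest energy, eV
    m_p_c2 = 938.272      # proton rest energy, MeV
    gamma = 1 + kinetic_energy_MeV / m_p_c2
    beta2 = 1 - 1 / gamma ** 2

    def L(I_eV):
        return math.log(2 * m_e_c2 * beta2 / (I_eV * (1 - beta2))) - beta2

    return rel_electron_density * L(I_medium_eV) / L(I_water_eV)

# Hypothetical soft-tissue values: relative electron density 1.04, I = 72 eV
spr = stopping_power_ratio(1.04, 72.0)
```

Because the stopping number depends on I only logarithmically, the SPR is dominated by the electron density term, which is why small errors in the DECT-derived ln Im translate into sub-percent SPR errors.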

  19. Exact p-value calculation for heterotypic clusters of regulatory motifs and its application in computational annotation of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Roytberg Mikhail A


    Full Text Available Abstract Background cis-Regulatory modules (CRMs of eukaryotic genes often contain multiple binding sites for transcription factors. The phenomenon that binding sites form clusters in CRMs is exploited in many algorithms to locate CRMs in a genome. This gives rise to the problem of calculating the statistical significance of the event that multiple sites, recognized by different factors, would be found simultaneously in a text of a fixed length. The main difficulty comes from overlapping occurrences of motifs. So far, no tools have been developed allowing the computation of p-values for simultaneous occurrences of different motifs which can overlap. Results We developed and implemented an algorithm computing the p-value that s different motifs occur respectively k1, ..., ks or more times, possibly overlapping, in a random text. Motifs can be represented with most popular motif models, but in all cases without indels. Zero or first order Markov chains can be adopted as a model for the random text. The computational tool was tested on the set of cis-regulatory modules involved in D. melanogaster early development, for which there exists an annotation of binding sites for transcription factors. Our test allowed us to correctly identify transcription factors cooperatively/competitively binding to DNA. Method The algorithm that precisely computes the probability of simultaneous motif occurrences is inspired by the Aho-Corasick automaton and employs a prefix tree together with a transition function. The algorithm runs with O(n|Σ|(m|ℋ| + K|σ|^K) ∏i ki time complexity, where n is the length of the text, |Σ| is the alphabet size, m is the
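    The quantity computed by the algorithm, the probability that several possibly overlapping motifs each occur a required number of times in a random text, can also be estimated naively by Monte Carlo sampling, which makes the problem statement concrete. A sketch with hypothetical motifs (the paper's automaton computes this exactly and far more efficiently):

```python
import random

def joint_pvalue(motifs, counts_needed, n, trials=2000, seed=1):
    """Monte Carlo estimate of the probability that every motif occurs at
    least its required number of times (overlapping hits allowed) in a
    random i.i.d. text of length n over {A, C, G, T}."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        text = "".join(rng.choice("ACGT") for _ in range(n))
        ok = True
        for motif, k in zip(motifs, counts_needed):
            # count occurrences, including overlapping ones
            count = sum(1 for i in range(len(text) - len(motif) + 1)
                        if text.startswith(motif, i))
            if count < k:
                ok = False
                break
        hits += ok
    return hits / trials

# Hypothetical motifs: each must occur at least once in a length-30 text
p = joint_pvalue(["AT", "GC"], [1, 1], n=30)
```

For short, common motifs this joint probability is large; the interesting (and computationally hard) regime is many long motifs with high required counts, where sampling becomes infeasible and the exact automaton-based computation is needed.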

  20. A computer program for the calculation of the flow field in supersonic mixed-compression inlets at angle of attack using the three-dimensional method of characteristics with discrete shock wave fitting (United States)

    Vadyak, J.; Hoffman, J. D.; Bishop, A. R.


    The calculation procedure is based on the method of characteristics for steady three-dimensional flow. The bow shock wave and the internal shock wave system were computed using a discrete shock wave fitting procedure. The general structure of the computer program is discussed, and a brief description of each subroutine is given. All program input parameters are defined, and a brief discussion on interpretation of the output is provided. A number of sample cases, complete with data deck listings, are presented.

  1. Calcul statistique du volume des blocs matriciels d'un gisement fissuré / The Statistical Computing of Matrix Block Volume in a Fissured Reservoir

    Directory of Open Access Journals (Sweden)

    Guez F.


    Full Text Available The search for optimum production conditions for a fissured reservoir depends on having a good description of the fissure pattern. Hence the sizes and volumes of the matrix blocks must be defined at all points in a structure. However, the geometry of the medium (juxtaposition and shapes of the blocks) is usually too complex for such computation. This is why, in a previous paper, we got around this difficulty by reasoning on averages (dips, azimuths, fissure spacing), which led us to an order of magnitude of the volumes. Yet a mean volume cannot account for the distribution law of block volumes, and it is this distribution that governs the choice of one or several successive recovery methods. We therefore present here an original method for the statistical computation of the distribution law of matrix-block volumes, applicable at any point of a reservoir; the fraction of the reservoir occupied by blocks of a given volume is deduced from it. General knowledge of the fracturing phenomenon serves as the basis for the model, and subsurface observations of reservoir fracturing provide its data (histograms of fissure orientation and spacing). An application to the Eschau field (Alsace, France) is reported here to illustrate the method.
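    The core statistical idea, that a block volume is the product of fissure spacings drawn from observed histograms, can be sketched with a simple Monte Carlo sampler. Hypothetical spacing data; real fissure sets need not be orthogonal as assumed here:

```python
import random

def block_volume_distribution(spacings_x, spacings_y, spacings_z,
                              samples=5000, seed=0):
    """Sample the distribution of matrix-block volumes by drawing one
    fissure spacing per (assumed orthogonal) fissure set and multiplying
    them -- a simplified stand-in for the paper's statistical computation."""
    rng = random.Random(seed)
    return sorted(rng.choice(spacings_x) * rng.choice(spacings_y) *
                  rng.choice(spacings_z) for _ in range(samples))

# Hypothetical spacing histograms (metres), as from subsurface observations
volumes = block_volume_distribution([0.5, 1.0, 2.0], [0.5, 1.0], [1.0, 2.0])
median_volume = volumes[len(volumes) // 2]
```

From the sorted sample one can read off the fraction of the reservoir made up of blocks below any given volume, which is the quantity that guides the choice of recovery method.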

  2. Simulating biochemical physics with computers: 1. Enzyme catalysis by phosphotriesterase and phosphodiesterase; 2. Integration-free path-integral method for quantum-statistical calculations (United States)

    Wong, Kin-Yiu

    We have simulated two enzymatic reactions with molecular dynamics (MD) and combined quantum mechanical/molecular mechanical (QM/MM) techniques. One reaction is the hydrolysis of the insecticide paraoxon catalyzed by phosphotriesterase (PTE). PTE is a bioremediation candidate for environments contaminated by toxic nerve gases (e.g., sarin) or pesticides. Based on the potential of mean force (PMF) and the structural changes of the active site during the catalysis, we propose a revised reaction mechanism for PTE. Another reaction is the hydrolysis of the second-messenger cyclic adenosine 3'-5'-monophosphate (cAMP) catalyzed by phosphodiesterase (PDE). Cyclic-nucleotide PDE is a vital protein in signal-transduction pathways and thus a popular target for inhibition by drugs (e.g., Viagra®). A two-dimensional (2-D) free-energy profile has been generated showing that the catalysis by PDE proceeds in a two-step SN2-type mechanism. Furthermore, to characterize a chemical reaction mechanism in experiment, a direct probe is measuring kinetic isotope effects (KIEs). KIEs primarily arise from internuclear quantum-statistical effects, e.g., quantum tunneling and quantization of vibration. To systematically incorporate the quantum-statistical effects during MD simulations, we have developed an automated integration-free path-integral (AIF-PI) method based on Kleinert's variational perturbation theory for the centroid density of Feynman's path integral. Using this analytic method, we have performed ab initio path-integral calculations to study the origin of KIEs on several series of proton-transfer reactions from carboxylic acids to aryl substituted alpha-methoxystyrenes in water. In addition, we also demonstrate that the AIF-PI method can be used to systematically compute the exact value of zero-point energy (beyond the harmonic approximation) by simply minimizing the centroid effective potential.
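    A KIE measured in experiment maps onto the free-energy barriers computed in such simulations through transition-state theory: the rate ratio is the exponential of the barrier difference over RT. A minimal sketch with hypothetical barrier heights:

```python
import math

def kie_from_barriers(dG_light_kcal, dG_heavy_kcal, T=298.15):
    """Kinetic isotope effect k_light/k_heavy from transition-state theory,
    given the two activation free energies in kcal/mol:
    KIE = exp((dG_heavy - dG_light) / (R * T))."""
    R = 1.98720425e-3  # gas constant, kcal/(mol*K)
    return math.exp((dG_heavy_kcal - dG_light_kcal) / (R * T))

# Hypothetical barriers: ~1.4 kcal/mol lower for H transfer than D transfer,
# roughly the zero-point-energy difference of a C-H vs C-D stretch
kie = kie_from_barriers(10.0, 11.4)
```

A 1.4 kcal/mol barrier difference at room temperature gives a KIE near 10, the magnitude typical of a primary H/D isotope effect; the AIF-PI method's job is to compute such barrier differences with quantized nuclear motion included.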

  3. Magnetic Field Grid Calculator (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Properties Calculator computes the estimated values of Earth's magnetic field (declination, inclination, vertical component, northerly...
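    Declination itself is a simple function of the horizontal field components such a calculator provides. A minimal sketch with hypothetical component values:

```python
import math

def declination_deg(north_nT, east_nT):
    """Magnetic declination: the angle between the horizontal magnetic field
    and true north, positive toward the east, from the northerly (X) and
    easterly (Y) field components."""
    return math.degrees(math.atan2(east_nT, north_nT))

# Hypothetical horizontal components in nanotesla
d_north = declination_deg(20000.0, 0.0)     # field pointing due north
d_east10 = declination_deg(20000.0, 3526.0)  # roughly 10 degrees east
```

The grid calculators evaluate a spherical-harmonic field model (IGRF or WMM) to get these components at each grid point; the final declination step is just this arctangent.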

  4. Computer code selection criteria for flow and transport code(s) to be used in undisturbed vadose zone calculations for TWRS environmental analyses

    Energy Technology Data Exchange (ETDEWEB)

    Mann, F.M.


    The Tank Waste Remediation System (TWRS) is responsible for the safe storage, retrieval, and disposal of waste currently being held in 177 underground tanks at the Hanford Site. In order to successfully carry out its mission, TWRS must perform environmental analyses describing the consequences of tank contents leaking from tanks and associated facilities during the storage, retrieval, or closure periods, and of immobilized low-activity tank waste contaminants leaving disposal facilities. Because of the large size of the facilities and the great depth of the dry zone (known as the vadose zone) underneath the facilities, sophisticated computer codes are needed to model the transport of the tank contents or contaminants. This document presents the code selection criteria for those vadose zone analyses (a subset of the above analyses) where the hydraulic properties of the vadose zone are constant in time, the geochemical behavior of the contaminant-soil interaction can be described by simple models, and the geologic or engineered structures are complicated enough to require a two- or three-dimensional model. Thus, simple analyses would not need to use the fairly sophisticated codes which would meet the selection criteria in this document. Similarly, those analyses which involve complex chemical modeling (such as analyses involving large tank leaks or the modeling of contaminant release from glass waste forms) are excluded. The analyses covered here are those where the movement of contaminants can be relatively simply calculated from the moisture flow. These code selection criteria are based on information from the low-level waste programs of the US Department of Energy (DOE) and of the US Nuclear Regulatory Commission, as well as experience gained in the DOE Complex in applying these criteria. Appendix table A-1 provides a comparison between the criteria in these documents and those used here.
This document does not define the models (that

  5. Computational chemistry of natural products: a comparison of the chemical reactivity of isonaringin calculated with the M06 family of density functionals. (United States)

    Glossman-Mitnik, Daniel


    The M06 family of density functionals has been assessed for the calculation of the molecular structure and properties of the Isonaringin flavonoid, which may be an interesting material for dye-sensitized solar cells (DSSC). The chemical reactivity descriptors have been calculated through chemical reactivity theory within DFT (CR-DFT). The active sites for nucleophilic and electrophilic attacks have been chosen by relating them to the Fukui function indices and the dual descriptor f^(2)(r). A comparison between the descriptors calculated through vertical energy values and those arising from the Janak's theorem approximation has been performed in order to check the validity of the latter procedure.
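    The global descriptors of CR-DFT follow from finite differences of the vertical ionization energy I and electron affinity A. A minimal sketch with hypothetical values (note that some authors define hardness with an extra factor of 1/2):

```python
def reactivity_descriptors(I, A):
    """Global conceptual-DFT descriptors from vertical ionization energy I
    and electron affinity A (eV), in the finite-difference approximation:
    chemical potential mu, global hardness eta, electrophilicity omega."""
    mu = -(I + A) / 2           # negative of the Mulliken electronegativity
    eta = I - A                 # global hardness (two-point difference)
    omega = mu ** 2 / (2 * eta)  # Parr's electrophilicity index
    return mu, eta, omega

# Hypothetical vertical values for a flavonoid-like molecule, in eV
mu, eta, omega = reactivity_descriptors(7.5, 1.5)
```

Comparing descriptors from explicit vertical energies against Janak/Koopmans-style orbital-energy estimates, as the abstract describes, amounts to feeding this function two different (I, A) pairs and comparing the outputs.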

  6. Declination Calculator (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

  7. High Velocity Jet Noise Source Location and Reduction. Task 2 Supplement. Computer Program for Calculating the Aeroacoustic Characteristics of Jets from Nozzles of Arbitrary Shape. (United States)


    noise) by specifying RMIN as input, but with NCBDY = 0. This option causes the computation to begin at r = RMIN(KA), where KA is the axial station...

  8. A boundary calculation model of IoT fog computing (物联网的边界计算模型:雾计算)

    Institute of Scientific and Technical Information of China (English)



    During the process of technological change and industry development driven by the IoT and cloud computing, Cisco introduced the concept of fog computing in response to the surge in network access devices and the limited network bandwidth. First, the characteristics and application modes of fog computing are discussed. Then the interoperation among the "fog nodes" of fog computing, the "cloud nodes" of cloud computing and the "entity nodes" of the IoT is analyzed, and use cases of fog computing are summarized. Finally, a forecast of its prospects is given.

  9. A parametric study of planform and aeroelastic effects on aerodynamic center, alpha- and q- stability derivatives. Appendix A: A computer program for calculating alpha- and q- stability derivatives and induced drag for thin elastic aeroplanes at subsonic and supersonic speeds (United States)

    Roskam, J.; Lan, C.; Mehrotra, S.


    The computer program used to determine the rigid and elastic stability derivatives presented in the summary report is listed in this appendix along with instructions for its use, sample input data and answers. This program represents the airplane at subsonic and supersonic speeds as (a) thin surface(s) (without dihedral) composed of discrete panels of constant pressure according to the method of Woodward for the aerodynamic effects and slender beam(s) for the structural effects. Given a set of input data, the computer program calculates an aerodynamic influence coefficient matrix and a structural influence coefficient matrix.

  10. Quantification of the computational accuracy of code systems on the burn-up credit using experimental re-calculations; Quantifizierung der Rechengenauigkeit von Codesystemen zum Abbrandkredit durch Experimentnachrechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Behler, Matthias; Hannstein, Volker; Kilger, Robert; Moser, Franz-Eberhard; Pfeiffer, Arndt; Stuke, Maik


    In order to account for the reactivity-reducing effect of burn-up in the criticality safety analysis for systems with irradiated nuclear fuel (''burnup credit''), numerical methods are applied to determine the enrichment- and burnup-dependent nuclide inventory (''burnup code'') and the resulting multiplication factor k{sub eff} (''criticality code''). To allow for reliable conclusions, for both calculation systems the systematic deviations of the calculation results from the respective true values, the bias and its uncertainty, are quantified by calculating and analyzing a sufficient number of suitable experiments. This quantification is specific to the application case under scope and is also called validation. GRS has developed a methodology to validate a calculation system for the application of burnup credit in the criticality safety analysis of irradiated fuel assemblies from pressurized water reactors. This methodology was demonstrated by applying the GRS home-built KENOREST burnup code and the criticality calculation sequence CSAS5 from the SCALE code package. It comprises a bounding approach and, alternatively, a stochastic approach, both of which have been exemplarily demonstrated using a generic spent fuel pool rack and a generic dry storage cask, respectively. Based on publicly available post-irradiation examinations and criticality experiments, currently the isotopes of uranium and plutonium can be taken into account.

  11. Calcul de la pluie sur le bassin versant du lac Titicaca pendant l'Holocène. Computation of the rainfall on Lake Titicaca catchment during the Holocene (United States)

    Talbi, Amal; Coudrain, Anne; Ribstein, Pierre; Pouyaud, Bernard


    The water levels of a lake situated in an endorheic catchment make it possible to calculate the associated rainfall rate on the basis of the water balance over the whole catchment. The previously published evolution of water levels in Lake Titicaca (Bolivia) during the Holocene shows that in the most arid period, between 8 000 yr and 4 000 yr BP, the average level was 50 m lower than today. The calculated rainfall associated with this low level is 635 ± 50 mm·yr{sup -1}, i.e. about 18% lower than the present amount.
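
The water-balance reasoning the abstract relies on can be sketched in a few lines. This is an illustrative steady-state balance for an endorheic catchment (rain in equals evaporation out), not the authors' model; the function name and the numbers used below are hypothetical.

```python
def basin_rainfall(evap_lake_mm, evap_land_mm, area_lake_km2, area_basin_km2):
    """Steady-state water balance of an endorheic catchment, all rates in
    mm/yr: rainfall over the whole basin balances open-water evaporation
    over the lake plus evapotranspiration over the remaining land."""
    area_land = area_basin_km2 - area_lake_km2
    total_loss = evap_lake_mm * area_lake_km2 + evap_land_mm * area_land
    return total_loss / area_basin_km2


# Hypothetical figures: lake evaporation 1500 mm/yr over 8000 km2,
# land evapotranspiration 500 mm/yr, basin area 57000 km2.
rainfall = basin_rainfall(1500.0, 500.0, 8000.0, 57000.0)
```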

  12. Calculation of limits for significant bidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G;


    .01). RESULTS: For an individual, the factors used to multiply the first result were calculated to create limits for constant cumulated significant changes. These factors were shown to be a function of the number of results included and the total coefficient of variation. CONCLUSIONS: The first result should...
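
The limits described above build on the classic reference change value (RCV). As a hedged illustration of the underlying idea only (the basic two-result case, not the paper's multi-result simulation model; function names are illustrative):

```python
import math


def reference_change_value(cv_analytical, cv_within_subject, z=1.96):
    """Classic two-sample reference change value, in percent.
    CV inputs in percent; z = 1.96 for two-sided 5% significance."""
    cv_total = math.sqrt(cv_analytical**2 + cv_within_subject**2)
    return z * math.sqrt(2) * cv_total


def change_factor(cv_analytical, cv_within_subject, z=1.96):
    """Factor to multiply the first result by to get the upper limit
    for a significant increase."""
    return 1 + reference_change_value(cv_analytical, cv_within_subject, z) / 100
```

For example, with 3% analytical and 4% within-subject variation, a rise of roughly 14% over the first result would be needed before a change is significant at the 5% level.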

  13. Calculator. Owning a Small Business. (United States)

    Parma City School District, OH.

    Seven activities are presented in this student workbook designed for an exploration of small business ownership and the use of the calculator in this career. Included are simulated situations in which students must use a calculator to compute property taxes; estimate payroll taxes and franchise taxes; compute pricing, approximate salaries,…

  14. Scientific calculating peripheral

    Energy Technology Data Exchange (ETDEWEB)

    Ethridge, C.D.; Nickell, J.D. Jr.; Hanna, W.H.


    A scientific calculating peripheral for small intelligent data acquisition and instrumentation systems and for distributed-task processing systems is established with a number-oriented microprocessor controlled by a single-component universal peripheral interface microcontroller. A MOS/LSI number-oriented microprocessor provides the scientific calculating capability with the Reverse Polish Notation data format. Master processor task definition storage, input data sequencing, computation processing, result reporting, and interface protocol are managed by a single-component universal peripheral interface microcontroller.
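
Reverse Polish Notation, the data format mentioned above, can be illustrated with a minimal stack evaluator. This is a generic sketch of the notation, unrelated to the actual microprocessor firmware:

```python
def eval_rpn(tokens):
    """Evaluate a Reverse Polish Notation token list: operands are pushed
    on a stack, and each operator pops its two arguments and pushes the
    result, so no parentheses are needed."""
    ops = {'+': lambda a, b: a + b, '-': lambda a, b: a - b,
           '*': lambda a, b: a * b, '/': lambda a, b: a / b}
    stack = []
    for tok in tokens:
        if tok in ops:
            b, a = stack.pop(), stack.pop()
            stack.append(ops[tok](a, b))
        else:
            stack.append(float(tok))
    return stack.pop()


# (3 + 4) * 2 in RPN:
result = eval_rpn("3 4 + 2 *".split())
```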

  15. Parallel computing for homogeneous diffusion and transport equations in neutronics; Calcul parallele pour les equations de diffusion et de transport homogenes en neutronique

    Energy Technology Data Exchange (ETDEWEB)

    Pinchedez, K


    Parallel computing meets the ever-increasing requirements for neutronics computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the simplified PN transport equations by a mixed finite element method. Several parallel algorithms have been developed on distributed-memory machines. The performance of the parallel algorithms has been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal-synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. The eigenproblem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interface between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
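
The dominant-eigenvalue computation mentioned above can be illustrated with the textbook power iteration. This is a generic sketch of the idea, not CRONOS's mixed finite element solver:

```python
import numpy as np


def dominant_eigenvalue(A, iters=500, tol=1e-12):
    """Power iteration: repeatedly apply A and renormalize; the Rayleigh
    quotient of the iterate converges to the dominant eigenvalue."""
    x = np.random.default_rng(0).standard_normal(A.shape[0])
    x /= np.linalg.norm(x)
    lam = x @ A @ x
    for _ in range(iters):
        y = A @ x
        x = y / np.linalg.norm(y)         # renormalize the iterate
        lam_new = x @ A @ x               # Rayleigh quotient estimate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x
```

Convergence is geometric in the ratio of the two largest eigenvalue magnitudes, which is why production codes accelerate it (e.g. by Chebyshev extrapolation) rather than use it raw.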

  16. EQ3NR, a computer program for geochemical aqueous speciation-solubility calculations: Theoretical manual, user's guide, and related documentation (Version 7.0); Part 3

    Energy Technology Data Exchange (ETDEWEB)

    Wolery, T.J.


    EQ3NR is an aqueous solution speciation-solubility modeling code. It is part of the EQ3/6 software package for geochemical modeling. It computes the thermodynamic state of an aqueous solution by determining the distribution of chemical species, including simple ions, ion pairs, and complexes, using standard state thermodynamic data and various equations which describe the thermodynamic activity coefficients of these species. The input to the code describes the aqueous solution in terms of analytical data, including total (analytical) concentrations of dissolved components and such other parameters as the pH, pHCl, Eh, pe, and oxygen fugacity. The input may also include a desired electrical balancing adjustment and various constraints which impose equilibrium with special pure minerals, solid solution end-member components (of specified mole fractions), and gases (of specified fugacities). The code evaluates the degree of disequilibrium in terms of the saturation index (SI = log Q/K) and the thermodynamic affinity (A = -2.303 RT log Q/K) for various reactions, such as mineral dissolution or oxidation-reduction in the aqueous solution itself. Individual values of Eh, pe, oxygen fugacity, and Ah (redox affinity) are computed for aqueous redox couples. Equilibrium fugacities are computed for gas species. The code is highly flexible in dealing with various parameters as either model inputs or outputs. The user can specify modification or substitution of equilibrium constants at run time by using options on the input file.
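
The saturation index and affinity quantities defined in the abstract are straightforward to evaluate once the ion activity product Q and equilibrium constant K are known. A minimal sketch (not EQ3NR's implementation), using the abstract's 2.303 log-conversion factor:

```python
import math

R = 8.314462618  # gas constant, J/(mol K)


def saturation_index(Q, K):
    """SI = log10(Q/K): positive means supersaturated, negative
    undersaturated, zero at equilibrium."""
    return math.log10(Q / K)


def affinity(Q, K, T=298.15):
    """Thermodynamic affinity A = -2.303 R T log10(Q/K), in J/mol;
    positive affinity drives the reaction forward."""
    return -2.303 * R * T * saturation_index(Q, K)
```

For a mineral with Q/K = 10 at 25 degrees C, SI = 1 and the affinity is about -5.7 kJ/mol, i.e. the dissolution reaction is past equilibrium and precipitation is favored.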

  17. Multi-user software of radio therapeutical calculation using a computational network; Software multiusuario de calculo radioterapeutico usando una red de computo

    Energy Technology Data Exchange (ETDEWEB)

    Allaucca P, J.J.; Picon C, C.; Zaharia B, M. [Departamento de Radioterapia, Instituto de Enfermedades Neoplasicas, Av. Angamos Este 2520, Lima 34 (Peru)


    A hardware and software system has been designed for a radiotherapy department. It runs on a Novell Network operating system platform, sharing the existing resources and those of the server; it is centralized, multi-user, and offers greater safety. It resolves a variety of calculation problems and needs as well as patient workflow and administration; it is very fast and versatile, and contains a set of menus and options which may be selected with the mouse, direction arrows, or abbreviated keys. (Author)

  18. MEMS Calculator (United States)

    SRD 166 MEMS Calculator (Web, free access)   This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.
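
The residual stress step described above (stress from measured strain once Young's modulus is known from resonating beams) reduces to uniaxial Hooke's law. A hedged sketch with hypothetical helper names, not the MEMS Calculator's actual data analysis:

```python
def residual_stress_mpa(residual_strain, youngs_modulus_gpa):
    """sigma = E * eps (uniaxial Hooke's law); strain dimensionless,
    E in GPa, result in MPa. Compressive strain gives negative stress."""
    return youngs_modulus_gpa * 1e3 * residual_strain


def stress_gradient(strain_gradient_per_um, youngs_modulus_gpa):
    """Stress gradient (MPa/um) from a measured strain gradient (1/um)."""
    return youngs_modulus_gpa * 1e3 * strain_gradient_per_um


# E.g. polysilicon (E roughly 160 GPa, an assumed value) with a measured
# residual strain of -1e-4 is under about 16 MPa of compressive stress.
sigma = residual_stress_mpa(-1e-4, 160.0)
```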

  19. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2. (l'Hypersonique experimentale et de calcul - capacite, amelioration et validation) (United States)


    providing reliable information on real-gas properties via first-principles quantum mechanical calculations on collision cross sections of electronic...

  20. Computer programs for calculation of sting pitch and roll angles required to obtain angles of attack and sideslip on wind tunnel models (United States)

    Peterson, John B., Jr.


    Two programs have been developed to calculate the pitch and roll angles of a wind-tunnel sting drive system that will position a model at the desired angle of attack and angle of sideslip in the wind tunnel. These programs account for the effects of sting offset angles, sting bending angles, and wind-tunnel stream flow angles. In addition, the second program incorporates inputs from on-board accelerometers that measure model pitch and roll with respect to gravity. The programs are presented in the report along with a description of their numerical operation and a definition of the variables used in the programs.
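
For an idealized pitch-then-roll sting with no offset, bending, or stream-flow corrections (i.e. without the effects the programs above account for), the attitude conversion has a closed form. Function names are illustrative:

```python
import math


def alpha_beta(pitch, roll):
    """Angle of attack and sideslip (rad) from sting pitch (total angle)
    and roll, for an ideal pitch-then-roll drive: resolve the freestream
    unit vector into body axes, then alpha = atan2(w, u), beta = asin(v)."""
    u = math.cos(pitch)
    v = math.sin(pitch) * math.sin(roll)
    w = math.sin(pitch) * math.cos(roll)
    return math.atan2(w, u), math.asin(v)


def pitch_roll(alpha, beta):
    """Inverse: sting pitch and roll that realize the requested angles."""
    pitch = math.acos(math.cos(alpha) * math.cos(beta))
    roll = math.atan2(math.sin(beta), math.sin(alpha) * math.cos(beta))
    return pitch, roll
```

The pitch angle here is the "total angle of attack" (cos sigma = cos alpha cos beta); the real programs then superimpose sting deflection and tunnel flow-angle corrections on this geometric core.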

  1. Monte Carlo computations of F-region incoherent radar spectra at high latitudes and the use of a simple method for non-Maxwellian spectral calculations (United States)

    Kikuchi, K.; Barakat, A.; St-Maurice, J.-P.


    Monte Carlo simulations of ion velocity distributions in the high-latitude F region have been performed in order to improve the calculation of incoherent radar spectra in the auroral ionosphere. The results confirm that when the ion temperature becomes large due to frictional heating in the presence of collisions with the neutral background constituent, F-region spectra evolve from a normal double hump, to a triple hump, to a spectrum with a single maximum. An empirical approach is developed to overcome the inadequacy of the Maxwellian assumption for radar aspect angles between 30 and 70 deg.

  2. Construction of a computational exposure model for dosimetric calculations using the EGS4 Monte Carlo code and voxel phantoms; Construcao de um modelo computacional de exposicao para calculos dosimetricos utilizando o codigo Monte Carlo EGS4 e fantomas de voxels

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose Wilson


    The MAX phantom has been developed from existing segmented images of a male adult body in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. In computational dosimetry, MAX can simulate the geometry of a human body under exposure to ionizing radiation, internal or external, with the objective of calculating the equivalent dose in organs and tissues for occupational, medical or environmental purposes of radiation protection. This study presents the methodology used to build the new computational exposure model MAX/EGS4: the geometric construction of the phantom; the development of algorithms for unidirectional, divergent, and isotropic radioactive sources; new methods for calculating the equivalent dose in the red bone marrow and in the skin; and the coupling of the MAX phantom with the EGS4 Monte Carlo code. Finally, some radiation protection results, in the form of conversion coefficients between equivalent dose (or effective dose) and free air-kerma for external photon irradiation, are presented and discussed. Comparing these results with similar data from other human phantoms, it is possible to conclude that the MAX/EGS4 coupling is satisfactory for the calculation of the equivalent dose in radiation protection. (author)

  3. Synthesis, characterization, computational calculation and biological studies of some 2,6-diaryl-1-(prop-2-yn-1-yl)piperidin-4-one oxime derivatives. (United States)

    Sundararajan, G; Rajaraman, D; Srinivasan, T; Velmurugan, D; Krishnasamy, K


    A new series of 2,6-diaryl-1-(prop-2-yn-1-yl)piperidin-4-one oximes (17-24) was designed and synthesized from 2,6-diarylpiperidin-4-one oximes (9-16) with propargyl bromide. Unambiguous structural elucidation has been carried out by IR, NMR ((1)H, (13)C, (1)H-(1)H COSY and HSQC) and mass spectral techniques and theoretical (DFT) calculations. Further, the crystal structure of compound 17 was determined by single-crystal X-ray diffraction analysis, which evidenced that the configuration about the C=N double bond is syn to the C-5 carbon (E-form). The existence of the chair conformation was further confirmed by theoretical DFT calculation. All the synthesized compounds were screened for in vitro antimicrobial activity against a panel of selected bacterial and fungal strains using Ciprofloxacin and Ketoconazole as standards. The minimum inhibitory concentration (MIC) results revealed that most of the 2,6-diaryl-1-(prop-2-yn-1-yl)piperidin-4-one oximes (17, 19, 20 and 23) exhibited better activity against the selected bacterial and fungal strains.

  4. Computer simulation and calculation for optimizing the VCM rectification procedure

    Institute of Scientific and Technical Information of China (English)

    李群生; 于颖


    Simulation calculations for the VCM rectification section were performed with the chemical process simulation software ASPEN PLUS, and a sensitivity analysis of the operating variables was carried out; the appropriate feed position, reflux ratio, and distillate ratio were thereby obtained, providing a basis for optimizing the operation of the VCM rectification procedure.

  5. Computational methods to calculate accurate activation and reaction energies of 1,3-dipolar cycloadditions of 24 1,3-dipoles. (United States)

    Lan, Yu; Zou, Lufeng; Cao, Yang; Houk, K N


    Theoretical calculations were performed on the 1,3-dipolar cycloaddition reactions of 24 1,3-dipoles with ethylene and acetylene. The 24 1,3-dipoles are of the formula X≡Y(+)-Z(-) (where X is HC or N, Y is N, and Z is CH(2), NH, or O) or X═Y(+)-Z(-) (where X and Z are CH(2), NH, or O and Y is NH, O, or S). The high-accuracy G3B3 method was employed as the reference. CBS-QB3, CCSD(T)//B3LYP, SCS-MP2//B3LYP, B3LYP, M06-2X, and B97-D methods were benchmarked to assess their accuracies and to determine an accurate method that is practical for large systems. Several basis sets were also evaluated. Compared to the G3B3 method, the CBS-QB3 and CCSD(T)/maug-cc-pV(T+d)Z//B3LYP methods give similar results for both activation and reaction enthalpies (mean average deviation, MAD, < 1.5 kcal/mol). SCS-MP2//B3LYP and M06-2X give small errors for the activation enthalpies (MAD < 1.5 kcal/mol), while B3LYP has MAD = 2.3 kcal/mol. SCS-MP2//B3LYP and B3LYP give reasonable reaction enthalpies (MAD < 5.0 kcal/mol). The B3LYP functional also gives good results for most 1,3-dipoles (MAD = 1.9 kcal/mol for 17 common 1,3-dipoles), but the activation and reaction enthalpies for ozone and sulfur dioxide are difficult to calculate by any of the density functional methods.
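
The MAD figures quoted above are plain mean absolute deviations of each method's enthalpies from the G3B3 reference values. As a trivial sketch of that benchmark metric (the numbers below are made up, not data from the paper):

```python
def mean_absolute_deviation(values, reference):
    """MAD of one method's computed enthalpies against reference values
    (e.g. G3B3), in whatever units the inputs share."""
    return sum(abs(v - r) for v, r in zip(values, reference)) / len(values)


# Hypothetical activation enthalpies (kcal/mol) for three reactions:
mad = mean_absolute_deviation([1.0, 2.0, 4.0], [1.5, 2.5, 3.0])
```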

  6. Verification Calculation Results to Validate the Procedures and Codes for Pin-by-Pin Power Computation in VVER Type Reactors with MOX Fuel Loading

    Energy Technology Data Exchange (ETDEWEB)

    Chizhikova, Z.N.; Kalashnikov, A.G.; Kapranova, E.N.; Korobitsyn, V.E.; Manturov, G.N.; Tsiboulia, A.A.


    One of the important problems in ensuring the safety of a VVER type reactor partially loaded with MOX fuel is the choice of appropriate physical zoning to achieve maximum flattening of the pin-by-pin power distribution. When uranium fuel is replaced by MOX fuel, provided that the reactivity due to the fuel assemblies is kept constant, the fuel enrichment slightly decreases. However, the average neutron-spectrum fission microscopic cross-section for {sup 239}Pu is approximately twice that for {sup 235}U. Therefore power peaks occur in the peripheral fuel elements of assemblies containing MOX fuel, aggravated by the inter-assembly water. Physical zoning has to be applied to flatten the power peaks in fuel assemblies containing MOX fuel. Moreover, physical zoning cannot be confined to one row of fuel elements, as is the case with a uniform lattice of uranium fuel assemblies. Both the water gap and the jump in neutron absorption macroscopic cross-sections at the interface of fuel assemblies with different fuels complicate the calculation of the space-energy neutron flux distribution, since they increase non-diffusion effects. To solve this problem it is necessary to update the current codes, to develop new codes, and to verify all the codes, including the nuclear-physical constants libraries employed. In doing so it is important to develop and validate codes of different levels--from design codes to benchmark ones. This paper presents the results of the burnup calculation for a multi-assembly structure consisting of MOX fuel assemblies surrounded by uranium dioxide fuel assemblies. The structure concerned can be assumed to model a fuel assembly lattice symmetry element of the VVER-1000 type reactor in which 1/4 of all fuel assemblies contain MOX fuel.

  7. Computational methods for reactive transport modeling: An extended law of mass-action, xLMA, method for multiphase equilibrium calculations (United States)

    Leal, Allan M. M.; Kulik, Dmitrii A.; Kosakowski, Georg; Saar, Martin O.


    We present an extended law of mass-action (xLMA) method for multiphase equilibrium calculations and apply it in the context of reactive transport modeling. This extended LMA formulation differs from its conventional counterpart in that (i) it is directly derived from the Gibbs energy minimization (GEM) problem (i.e., the fundamental problem that describes the state of equilibrium of a chemical system under constant temperature and pressure); and (ii) it extends the conventional mass-action equations with Lagrange multipliers from the Gibbs energy minimization problem, which can be interpreted as stability indices of the chemical species. Accounting for these multipliers enables the method to determine all stable phases without presuming their types (e.g., aqueous, gaseous) or their presence in the equilibrium state. Therefore, the here proposed xLMA method inherits traits of Gibbs energy minimization algorithms that allow it to naturally detect the phases present in equilibrium, which can be single-component phases (e.g., pure solids or liquids) or non-ideal multi-component phases (e.g., aqueous, melts, gaseous, solid solutions, adsorption, or ion exchange). Moreover, our xLMA method requires no technique that tentatively adds or removes reactions based on phase stability indices (e.g., saturation indices for minerals), since the extended mass-action equations are valid even when their corresponding reactions involve unstable species. We successfully apply the proposed method to a reactive transport modeling problem in which we use PHREEQC and GEMS as alternative backends for the calculation of thermodynamic properties such as equilibrium constants of reactions, standard chemical potentials of species, and activity coefficients. Our tests show that our algorithm is efficient and robust for demanding applications, such as reactive transport modeling, where it converges within 1-3 iterations in most cases. The proposed xLMA method is implemented in Reaktoro, a

  8. Possibilités actuelles du calcul des constantes élastiques de polymères par des méthodes de simulation atomistique Current Possibilities of the Computation of Elastic Constants of Polymers Using Atomistic Simulations

    Directory of Open Access Journals (Sweden)

    Dal Maso F.


    The elastic properties of the pure amorphous and crystalline phases of a semicrystalline polymer are generally not directly measurable by the usual physical means, so it is necessary to resort to numerical computing methods. This paper describes some of these methods, based on atomistic simulations, and assesses their current implementations. It is shown that the method proposed by Zehnder et al. (1996) gives the best results, at the expense of long computing times due to molecular dynamics simulation. Nevertheless, none of these methods is really usable on a day-to-day basis, since they demand substantial computing capabilities.

  9. Parallel Algorithm Based on General Purpose Computing on GPU and the Implementation of a Calculation Framework

    Institute of Scientific and Technical Information of China (English)



    GPGPU (General-Purpose computing on Graphics Processing Units) is a computing field that has developed rapidly in recent years. Its powerful parallel processing capability offers an excellent solution for data-intensive, single-instruction computations, although its compute capability is ultimately bottlenecked by chip fabrication limits. Starting from the foundation of GPGPU, the graphics APIs, this paper analyzes the features, computation process, and characteristics of GPU parallel algorithms, and abstracts from them a parallel computing framework. A compute-intensive case demonstrates how the framework is used; comparison with the traditional GPGPU implementation approach shows that the framework yields more concise code and is independent of graphics concepts.

  10. Computer tool to calculate the daily energy produced by a grid-connected photovoltaic system; Aplicacion informatica para estimar la energia diaria producida por sistemas fotovoltaicos conectados a red

    Energy Technology Data Exchange (ETDEWEB)

    Sidrach-de-Cardona, M.; Carretero, J.; Martin, B.; Mora-Lopez, L.; Ramirez, L.; Varela, M.


    We present a computer tool to calculate the daily energy produced by a grid-connected photovoltaic system. The main novelty of this tool is that it uses radiation and ambient-temperature maps as input data; these maps provide 365 daily values of each parameter at any point of the image. The radiation map has been obtained using images from the Meteosat satellite. For the temperature map, a geographical information system has been used with data from terrestrial meteorological stations. For the calculation of the daily energy, an empirical model obtained from the study of the behavior of different photovoltaic systems is used. The program allows the user to design the photovoltaic generator, includes a database of photovoltaic products, and supports a complete economic analysis. (Author)
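
The abstract does not give its empirical model. As a generic placeholder for how daily irradiation and temperature maps feed such an estimate, here is a common performance-ratio model with a linear temperature derating; all parameter names and default values are assumptions, not the tool's actual model:

```python
def daily_energy_kwh(p_stc_kw, irradiation_kwh_m2, cell_temp=45.0,
                     gamma=-0.004, performance_ratio=0.85):
    """Rough daily AC energy of a grid-connected PV system:
    rated power (kW at STC) times daily irradiation expressed as
    peak-sun hours (kWh/m2/day), derated linearly for cell temperature
    above 25 C (gamma in 1/K) and by a system performance ratio."""
    temp_factor = 1 + gamma * (cell_temp - 25.0)
    return p_stc_kw * irradiation_kwh_m2 * temp_factor * performance_ratio


# 1 kWp array, 5 peak-sun hours, warm cells: a bit under 4 kWh/day.
energy = daily_energy_kwh(1.0, 5.0)
```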

  11. Contributing to global computing platform: gliding, tunneling standard services and high energy physics application; Contribution aux infrastructures de calcul global: delegation inter plates-formes, integration de services standards et application a la physique des hautes energies

    Energy Technology Data Exchange (ETDEWEB)

    Lodygensky, O


    Centralized computers have been replaced by 'client/server' distributed architectures, which are in turn in competition with new distributed systems known as 'peer to peer'. These new technologies are widely spread, and trade, industry and the research world have understood the new goals involved and invest massively in these new technologies, named 'grid'. One of the fields concerned is computing, which is the subject of the work presented here. At the Paris-Orsay University, a synergy emerged between the Computing Science Laboratory (LRI) and the Linear Accelerator Laboratory (LAL) on grid infrastructure, opening new fields of investigation for the former and new high-performance computing perspectives for the latter. The work presented here is the result of this multi-disciplinary collaboration. It is based on XtremWeb, the LRI global computing platform. We first present a state of the art of large-scale distributed systems, their principles and their service-based architecture. We then introduce XtremWeb and detail the modifications and improvements we had to specify and implement to achieve our goals. We present two different studies: first, interconnecting grids in order to generalize resource sharing; and secondly, using legacy services on such platforms. We finally explain how a research community, such as the high-energy cosmic radiation detection community, can gain access to these services, and we detail Monte Carlo and data analysis processes over the grids. (author)

  12. Computational dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Siebert, B.R.L.; Thomas, R.H.


    The paper presents a definition of the term "Computational Dosimetry", which is interpreted as the sub-discipline of computational physics devoted to radiation metrology. It is shown that computational dosimetry is more than a mere collection of computational methods. Computational simulations directed at basic understanding and modelling are important tools provided by computational dosimetry, while another very important application is the support it can give to the design, optimization and analysis of experiments. However, the primary task of computational dosimetry is to reduce the variance in the determination of absorbed dose (and its related quantities), for example in the disciplines of radiological protection and radiation therapy. In this paper emphasis is given to the discussion of potential pitfalls in the applications of computational dosimetry, and recommendations are given for their avoidance. The need for comparison of calculated and experimental data whenever possible is strongly stressed.

  13. Computer program for the calculation of stresses in rotary equipment discs; Programas de computo para el calculo de esfuerzos en discos de equipo rotatorio

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez Delgado, Wilson; Kubiak, Janusz; Serrano Romero, Luis Enrique [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)


    In the preliminary design and diagnosis of rotary machines it is very common to use simple calculation methods for mechanical and thermal stresses, dynamic and thermodynamic analysis, and fluid flow in these machines (Gutierrez et al., 1989). Analysis with these methods provides the results needed for the initial project stage of the machine. Later, more complex tools are employed to refine the design of some machine components. In Gutierrez et al. (1989), 34 programs were developed for the preliminary design and diagnosis of rotating equipment; this article presents one of them, which applies a method for the analysis of mechanical and thermal stresses in discs of uniform or variable thickness such as are commonly found in turbomachines and rotary equipment.
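
As a flavor of the "simple calculation methods" referred to above, the textbook result for a uniform solid rotating disc (radial and hoop stresses coincide and peak at the center) can be coded in one line. This is the classic constant-thickness formula, not the program's variable-thickness method:

```python
def solid_disc_max_stress(rho, omega, radius, nu=0.3):
    """Maximum stress (Pa) at the center of a uniform solid rotating disc:
    sigma = (3 + nu)/8 * rho * omega^2 * b^2, with density rho (kg/m3),
    angular speed omega (rad/s), outer radius b (m), Poisson ratio nu."""
    return (3 + nu) / 8 * rho * omega**2 * radius**2


# Steel-like disc (assumed rho = 8000 kg/m3), 100 rad/s, 1 m radius:
sigma = solid_disc_max_stress(8000.0, 100.0, 1.0)
```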

  14. Exploring excited-state tunability in luminescent tris-cyclometalated platinum(IV) complexes: synthesis of heteroleptic derivatives and computational calculations. (United States)

    Juliá, Fabio; Aullón, Gabriel; Bautista, Delia; González-Herrero, Pablo


    The synthesis, structure, electrochemistry, and photophysical properties of a series of heteroleptic tris-cyclometalated Pt(IV) complexes are reported. The complexes mer-[Pt(C^N)2(C'^N')]OTf, with C^N=C-deprotonated 2-(2,4-difluorophenyl)pyridine (dfppy) or 2-phenylpyridine (ppy), and C'^N'=C-deprotonated 2-(2-thienyl)pyridine (thpy) or 1-phenylisoquinoline (piq), were obtained by reacting bis-cyclometalated precursors [Pt(C^N)2Cl2] with AgOTf (2 equiv) and an excess of the N'^C'H pro-ligand. The complex mer-[Pt(dfppy)2(ppy)]OTf was obtained analogously and photoisomerized to its fac counterpart. The new complexes display long-lived luminescence at room temperature in the blue to orange color range. The emitting states involve electronic transitions almost exclusively localized on the ligand with the lowest π-π* energy gap and have very little metal character. DFT and time-dependent DFT (TD-DFT) calculations on mer-[Pt(ppy)2(C'^N')](+) (C'^N'=thpy, piq) and mer/fac-[Pt(ppy)3](+) support this assignment and provide a basis for the understanding of the luminescence of tris-cyclometalated Pt(IV) complexes. Excited states of LMCT character may become thermally accessible from the emitting state in the mer isomers containing dfppy or ppy as chromophoric ligands, leading to strong nonradiative deactivation. This effect does not operate in the fac isomers or the mer complexes containing thpy or piq, for which nonradiative deactivation originates mainly from vibrational coupling to the ground state.

  15. Studies of aircraft differential maneuvering. Report 75-27: Calculating of differential-turning barrier surfaces. Report 75-26: A user's guide to the aircraft energy-turn and tandem-motion computer programs. Report 75-7: A user's guide to the aircraft energy-turn hodograph program. [numerical analysis of tactics and aircraft maneuvers of supersonic attack aircraft (United States)

    Kelley, H. J.; Lefton, L.


    Numerical analysis of composite differential-turn trajectory pairs is studied for 'fast-evader' and 'neutral-evader' attitude-dynamics idealizations for attack aircraft. Transversality and generalized corner conditions are examined and the joining of trajectory segments discussed. A criterion is given for the screening of 'tandem-motion' trajectory segments. The main focus is upon the computation of barrier surfaces. Fortunately, from a computational viewpoint, the trajectory pairs defining these surfaces need not be calculated completely, the final subarc of multiple-subarc pairs not being required. Some calculations for pairs of example aircraft are presented. A computer program used to perform the calculations is included.

  16. Intraocular lens power calculation after corneal refractive surgery using self-designed computer software programmed with optimized calculation method%应用优化计算方法与计算机软件计算角膜屈光手术后人工晶状体屈光力

    Institute of Scientific and Technical Information of China (English)

    郭海科; 金海鹰; Gerd.U.AUFFARTH; 张洪洋


    Objective: To optimize the calculation of intraocular lens (IOL) power after excimer laser corneal refractive surgery, develop the method into computer software, and evaluate its accuracy and reliability. Methods: The IOL power calculation method was optimized in the following aspects: corrective calculation of corneal power; estimation of the effective lens position with application of the double-K method; and use of a standardized calculation formula. The method was programmed into self-designed computer software (IOL calculator for post-refractive cases). The software was used to calculate IOL power for 49 cataract patients who had previously undergone corneal refractive surgery. Taking the actual refraction after cataract surgery as the standard, the difference between predicted and actual refraction was defined as the prediction error, and its absolute value as the absolute prediction error. The mean and distribution of both errors were analyzed with SPSS 11.0. Results: Postoperative refraction ranged from -2.50 to 0.75 D, with a mean of (-0.78±0.83) D; 3 eyes (6.1%) were emmetropic, 36 (73.5%) myopic, and 10 (20.4%) hyperopic. The prediction error ranged from -1.26 to 1.96 D, with a mean of (-0.02±0.75) D, close to emmetropia. The absolute prediction error ranged from 0 to 1.96 D, with a mean of (0.62±0.42) D; it was ≤0.5 D in 19 eyes (38.8%), >0.5 D and ≤1.0 D in 22 eyes (44.9%), >1.0 D and ≤1.5 D in 7 eyes (14.3%), and >1.5 D and ≤2.0 D in 1 eye (2.0%). Conclusions: Optimizing the calculation method and developing it into computer software greatly simplifies IOL power calculation after excimer laser corneal refractive surgery and improves its accuracy and reliability.
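    The corneal-power correction the software automates can be illustrated with the simplest textbook approach, the clinical history method: the cornea's true power drops by the surgically induced change in spherical equivalent. This is a hedged sketch only, not the paper's optimized algorithm; the double-K refinement then uses the pre-operative K to estimate effective lens position while using the corrected K in the vergence formula.

```python
def corneal_power_clinical_history(k_pre_d, se_pre_d, se_post_d):
    """Post-refractive corneal power (diopters) by the clinical history method.

    The true corneal power falls by the surgically induced change in
    spherical equivalent (refractions ideally referenced to the corneal
    plane).  Simplified textbook correction, not the optimized method
    of the paper's IOL calculator.
    """
    return k_pre_d - (se_post_d - se_pre_d)

# Myopic LASIK that corrected -4 D: post-op keratometry overestimates
# the true corneal power, so the corrected value is ~4 D lower.
print(corneal_power_clinical_history(44.0, -4.0, 0.0))  # -> 40.0
```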


    Institute of Scientific and Technical Information of China (English)

    韩冰; 宋正江; 鲁阳; 陈建成


    The traffic characteristics of cloud computing data center networks are the basis for the study and design of cloud computing networks. Existing traffic measurement methods usually require switches that support additional function modules or are programmable, a requirement most switches in today's cloud data center networks do not meet. We propose an end-to-end traffic inference algorithm based on network tomography that quickly and accurately computes end-to-end traffic information using only SNMP (Simple Network Management Protocol) data, which switches universally support. Simulation experiments comparing the algorithm with an existing network tomography algorithm show that the new algorithm is better suited to large-scale cloud computing data center networks and obtains more accurate results in less time, providing an important reference for the design and study of cloud computing networks.
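    As a rough sketch of the underlying idea (not the authors' algorithm), network tomography treats SNMP link byte counts y as linear observations y = A x of the unknown end-to-end flows x through a known routing matrix A. Because there are far fewer links than flows, the system is under-determined and some prior is needed; the minimum-norm (pseudo-inverse) solution below is the simplest such choice, and the toy topology is invented for illustration.

```python
import numpy as np

# Toy topology: 3 end-to-end flows sharing 2 links.
# Routing matrix A[i, j] = 1 if flow j traverses link i.
A = np.array([[1.0, 1.0, 0.0],   # link 1 carries flows 1 and 2
              [0.0, 1.0, 1.0]])  # link 2 carries flows 2 and 3

true_flows = np.array([10.0, 5.0, 20.0])   # unknown in practice
link_counts = A @ true_flows               # what SNMP byte counters observe

# Under-determined system: take the minimum-norm solution consistent
# with the link counts (the simplest regularisation used in tomography).
est = np.linalg.pinv(A) @ link_counts
print(est, A @ est)
```

Any estimate must at least reproduce the observed link counts; recovering the individual flows accurately requires additional priors or repeated measurements, which is where the algorithmic work of papers like this one lies.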

  18. Dead reckoning calculating without instruments

    CERN Document Server

    Doerfler, Ronald W


    No author has gone as far as Doerfler in covering methods of mental calculation beyond simple arithmetic. Even if you have no interest in competing with computers you'll learn a great deal about number theory and the art of efficient computer programming. -Martin Gardner

  19. The digital computer

    CERN Document Server

    Parton, K C


    The Digital Computer focuses on the principles, methodologies, and applications of the digital computer. The publication takes a look at the basic concepts involved in using a digital computer, simple autocode examples, and examples of working advanced design programs. Discussions focus on transformer design synthesis program, machine design analysis program, solution of standard quadratic equations, harmonic analysis, elementary wage calculation, and scientific calculations. The manuscript then examines commercial and automatic programming, how computers work, and the components of a computer

  20. Bacteria Make Computers Look like Pocket Calculators

    Institute of Scientific and Technical Information of China (English)

    Jacob Aron; 程杰(选注)



  1. Computer Calculations of PIN Diode Limiter Characteristics. (United States)



  2. Daylight Coefficient Computational Method-based Study on Calculation Method of Tubular Daylight Device Efficiency%基于日光系数法的导光管效率计算方法研究

    Institute of Scientific and Technical Information of China (English)

    王书晓; 利岚; 张滨


    The application of natural tubular daylighting technology plays an important role in improving daylight quality in underground and deep-plan spaces. Traditional light-pipe calculation methods are mostly derived from experimental data for one or a few specific light pipes and therefore lack general applicability. Based on the daylight coefficient method, this paper analyzes the transfer characteristics of light in a cylindrical light pipe, establishes a mathematical model of those transfer characteristics, and verifies the accuracy of the model with the TracePro software.
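    The paper's daylight-coefficient model is not reproduced here, but the basic physics of a mirrored light pipe can be sketched with a common simplified specular model: a ray entering at angle θ from the tube axis undergoes roughly L·tan(θ)/D wall reflections, each attenuated by the wall reflectance. All parameter values below are illustrative assumptions.

```python
import math

def pipe_transmittance(length_m, diameter_m, incidence_deg, reflectance=0.98):
    """Fraction of flux surviving a straight mirrored light pipe.

    Simplified specular model: a ray entering at `incidence_deg` from
    the tube axis makes about L*tan(theta)/D wall reflections, each
    multiplying the flux by the wall reflectance.  This is a textbook
    approximation, not the paper's daylight-coefficient model.
    """
    n_reflections = length_m * math.tan(math.radians(incidence_deg)) / diameter_m
    return reflectance ** n_reflections

# A 3 m long, 0.5 m diameter pipe with 98 % reflective walls, 30 deg sun.
print(pipe_transmittance(3.0, 0.5, 30.0))
```

The model already captures the qualitative behaviour the paper studies: efficiency falls with pipe length and ray obliquity and rises with wall reflectance.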

  3. Parallel computing method of Monte Carlo criticality calculation based on bi-directional traversal%基于双向遍历的蒙特卡罗临界计算并行方法

    Institute of Scientific and Technical Information of China (English)

    李静; 宋婧; 龙鹏程; 刘鸿飞; 江平


    Background: In reactor simulations based on Monte Carlo particle transport, such as fission reactors and fusion-fission hybrid reactors, reaching acceptable statistical errors requires a great deal of computing time; this has become one of the challenges of the Monte Carlo method and must be addressed with parallel computing. Purpose: This paper presents an efficient parallel computing method that resolves the communication deadlock and load-balancing problems of existing methods. Methods: A parallel criticality calculation algorithm based on bi-directional traversal was implemented in the Super Monte Carlo simulation program for nuclear and radiation processes (SuperMC) and verified on the pool-type sodium-cooled fast reactor BN600 benchmark model, with MCNP used for comparison. Results: The serial and parallel calculations agree with each other. Conclusion: The parallel efficiency of SuperMC is higher than that of MCNP, demonstrating the accuracy and efficiency of the parallel computing method.

  4. SP-FISPACT2001. A computer code for activation and decay calculations for intermediate energies. A connection of FISPACT with MCNPX; SP-FISPACT2001. Una connessione di FISPACT con MCNPX per la codifica computerizzata delle energie intermedie

    Energy Technology Data Exchange (ETDEWEB)

    Petrovich, C. [ENEA, Divisione Sistemi Energetici Ecosostenibili, Centro Ricerche Ezio Clementel, Bologna (Italy)


    The calculation of the number of atoms and the activity of materials following nuclear interactions at incident energies up to several GeV is necessary in the design of Accelerator Driven Systems, Radioactive Ion Beam facilities, and proton accelerator facilities such as spallation neutron sources. As well as the radioactivity of the materials, this allows the evaluation of the formation of active gaseous elements and the assessment of possible corrosion problems. The particle energies involved here are higher than those used in typical nuclear reactors and fusion devices, for which many codes already exist. These calculations can be performed by coupling two different computer codes: MCNPX and SP-FISPACT. MCNPX performs Monte Carlo particle transport up to energies of several GeV. SP-FISPACT is a modification of FISPACT, a code designed for fusion applications and able to calculate neutron activation at energies below 20 MeV. In this way it is possible to perform a hybrid calculation in which neutron activation data are used for neutron interactions at energies below 20 MeV and intermediate-energy physics models for all other nuclear interactions.

  5. Matlab numerical calculations

    CERN Document Server

    Lopez, Cesar


    MATLAB is a high-level language and environment for numerical computation, visualization, and programming. Using MATLAB, you can analyze data, develop algorithms, and create models and applications. The language, tools, and built-in math functions enable you to explore multiple approaches and reach a solution faster than with spreadsheets or traditional programming languages, such as C/C++ or Java. This book is designed for use as a scientific/business calculator so that you can get numerical solutions to problems involving a wide array of mathematics using MATLAB. Just look up the function y

  6. Computational Finance

    DEFF Research Database (Denmark)

    Rasmussen, Lykke

    One of the major challenges in today's post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these sensitivities have been determined using bump-and-revalue, but due to the increasing magnitude of these computations...
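    Bump-and-revalue can be sketched in a few lines: reprice the product with the input nudged up and down and take the central difference. The Black-Scholes call below is only an illustrative pricer with made-up parameters, not one of the complex products the thesis targets; for such a simple model the finite-difference delta can be checked against the analytic value N(d1).

```python
import math

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes price of a European call (illustrative pricer)."""
    d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    return S * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)

def bump_and_revalue_delta(S, K, T, r, sigma, h=1e-4):
    """Central-difference delta: two extra revaluations per sensitivity."""
    return (bs_call(S + h, K, T, r, sigma) - bs_call(S - h, K, T, r, sigma)) / (2 * h)

S, K, T, r, sigma = 100.0, 100.0, 1.0, 0.05, 0.2
d1 = (math.log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
print(bump_and_revalue_delta(S, K, T, r, sigma), norm_cdf(d1))
```

The cost problem the thesis describes is visible even here: every sensitivity of every product needs two full revaluations, which is what motivates faster alternatives such as adjoint differentiation.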

  7. Calculating Electromagnetic Fields Of A Loop Antenna (United States)

    Schieffer, Mitchell B.


    Approximate field values computed rapidly. MODEL computer program developed to calculate electromagnetic field values of large loop antenna at all distances to observation point. Antenna assumed to be in x-y plane with center at origin of coordinate system. Calculates field values in both rectangular and spherical components. Also solves for wave impedance. Written in MicroSoft FORTRAN 77.
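    The MODEL program itself is not reproduced here; as a hedged illustration of the kind of quantity involved, the classic closed-form magnetostatic field on the axis of a circular current loop is sketched below. MODEL computes the full radiated near and far fields at all distances, which this static special case does not.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, H/m

def loop_axial_b_field(current_a, radius_m, z_m):
    """Magnetostatic flux density (tesla) on the axis of a circular loop.

    Classic closed-form result B = mu0*I*a^2 / (2*(a^2 + z^2)^{3/2});
    illustrative only, since the NASA MODEL code evaluates the full
    electromagnetic fields of a large loop antenna, not this static case.
    """
    return MU0 * current_a * radius_m**2 / (2.0 * (radius_m**2 + z_m**2) ** 1.5)

# Field at the center of a 1 m radius loop carrying 1 A.
print(loop_axial_b_field(1.0, 1.0, 0.0))
```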

  8. Molecular calculations with B functions

    CERN Document Server

    Steinborn, E O; Ema, I; López, R; Ramírez, G


    A program for molecular calculations with B functions is reported and its performance is analyzed. All the one- and two-center integrals, and the three-center nuclear attraction integrals are computed by direct procedures, using previously developed algorithms. The three- and four-center electron repulsion integrals are computed by means of Gaussian expansions of the B functions. A new procedure for obtaining these expansions is also reported. Some results on full molecular calculations are included to show the capabilities of the program and the quality of the B functions to represent the electronic functions in molecules.

  9. PIC: Protein Interactions Calculator. (United States)

    Tina, K G; Bhadra, R; Srinivasan, N


    Interactions within a protein structure and interactions between proteins in an assembly are essential considerations in understanding the molecular basis of the stability and function of proteins and their complexes. Several weak and strong interactions render stability to a protein structure or an assembly. Protein Interactions Calculator (PIC) is a server that, given the coordinate set of the 3D structure of a protein or an assembly, computes various interactions such as disulphide bonds, interactions between hydrophobic residues, ionic interactions, hydrogen bonds, aromatic-aromatic interactions, aromatic-sulphur interactions and cation-pi interactions within a protein or between proteins in a complex. Interactions are calculated on the basis of standard, published criteria. The identified interactions between residues can be visualized using a RasMol and Jmol interface. The advantage of the PIC server is the easy availability of inter-residue interaction calculations at a single site. It also determines the accessible surface area and residue depth, which is the distance of a residue from the surface of the protein. Users can also retrieve specific kinds of interactions, such as apolar-apolar residue interactions or ionic interactions, formed between buried or exposed residues, near the surface or deep inside.
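    Most of the interaction types PIC reports reduce to distance criteria applied to atomic coordinates. The sketch below shows the pattern on invented C-beta coordinates with an illustrative 5 Å cutoff; PIC itself applies published, interaction-specific criteria to full atomic structures.

```python
import math
from itertools import combinations

# Toy C-beta coordinates (angstroms) for a few apolar residues.
# Residue names and positions are made up for illustration.
residues = {
    "LEU10": (0.0, 0.0, 0.0),
    "VAL24": (3.5, 1.0, 0.5),
    "ALA51": (12.0, 0.0, 0.0),
}

def hydrophobic_contacts(coords, cutoff=5.0):
    """Pairs of residues whose representative atoms lie within `cutoff` A.

    Illustrative distance criterion only; PIC applies published,
    interaction-specific criteria to complete atomic coordinate sets.
    """
    return [(r1, r2)
            for (r1, p1), (r2, p2) in combinations(coords.items(), 2)
            if math.dist(p1, p2) <= cutoff]

print(hydrophobic_contacts(residues))
```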

  10. 轴压加筋壁板承载能力计算方法探讨%Investigation of the Computing Methods to Calculate Load-carrying Capacity of Stiffened Panels under Axial Compression

    Institute of Scientific and Technical Information of China (English)

    王海燕; 童贤鑫


    Currently, engineering methods are usually used to compute the load-carrying capacity of stiffened panels in a design process. This paper briefly describes and reviews three major engineering methods: the segment treatment method, the Johnson method, and the ultimate load method. To determine which best meets engineering needs, the three methods are evaluated by comparing calculated results with test data for the stiffened panels of the central wing and fuselage of an aircraft. The results from the ultimate load method are on the safe side and agree well with the experimental data, and this method was accordingly used to specify the allowable stress for the central-wing stiffened panels. Finally, eleven types of stiffened panels of the aircraft fuselage under axial compression were calculated using the ultimate load method; the computed failure loads agree closely with the experimental results, confirming that the ultimate load method is an accurate and practical engineering method for calculating the load-carrying capacity of stiffened panels under axial compression.

  11. Weight Calculation for Computational Geometry Combining Classifier Using Regularity of Class Space%类空间规整度的计算几何组合分类器权重分配

    Institute of Scientific and Technical Information of China (English)

    张涛; 洪文学


    In computational geometry combining classifiers, the weight assignment for sub-classifiers has not fully exploited the visual information of the space, so the visualization capability of the classifier cannot be fully realized. Starting from the category distribution in class space, a weight calculation method based on the regularity of class space is proposed. The method first transforms each sub-classifier from a category representation of the space into a spatial representation of the categories, and then uses co-occurrence rules to analyze the distribution regularity of the different categories in the space. Since the distribution regularity reflects the category distribution as a whole and describes the dispersion of samples of different classes in the class space, the regularity of the current class space can serve as the weight of the corresponding sub-classifier. Experiments show that a classifier weighted by regularity not only agrees better with its visual presentation, making the classification process more comprehensible, but also further improves classification accuracy and extends the range of applications.

  12. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan


    Cloud computing is, and will remain, a new way of providing Internet services and computing. This computing approach builds on many existing services, such as the Internet, grid computing, and Web services. As a system, cloud computing aims to provide on-demand services that are more acceptable in price and infrastructure: it is precisely the transition from the computer to a service offered to consumers as a product delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics they offer. It is a theoretical paper. Keywords: cloud computing, QoS, quality of cloud computing.

  13. Computers and data processing

    CERN Document Server

    Deitel, Harvey M


    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  14. Apresentação de um programa de computador para calcular a discrepância de tamanho dentário de Bolton Presentation of a computer program to calculate the Bolton’s tooth size discrepancy

    Directory of Open Access Journals (Sweden)

    Adriano Francisco de Lucca Facholli


    The diagnosis of the Bolton tooth size discrepancy is of fundamental importance for a good orthodontic finishing. By measuring the teeth with a digital caliper and entering the values into the computer program developed by the authors and presented in this article, the orthodontist's work becomes simpler: no mathematical calculation or lookup table is needed, eliminating the probability of errors. In addition, the program reports the location of the discrepancy by segment (overall, anterior and posterior) and individually for each tooth, allowing greater precision in planning strategies to solve the problems and leading toward a successful orthodontic treatment.

  15. Réduire les coûts de la simulation informatique grâce aux plans d'expériences : un exemple en calcul de procédé Reducing Computer Simulation Costs with Factorial Designs: an Example of Process Calculation

    Directory of Open Access Journals (Sweden)

    Murray M.


    The aim of this article is to show that factorial design, a method commonly used in laboratories and production units, is also applicable to scientific computation and, in particular, to computer simulation. Its use can reduce the number of computer runs by a factor as great as four while achieving a comprehensive understanding of how a plant or a process runs. Simple empirical mathematical models can then be constructed that guide the search toward the right solution and provide a good image of the phenomenon investigated. The example given here is a plant processing raw natural gas whose outputs, a sales gas and an NGL, must simultaneously meet five specifications. The operator in charge of the simulations begins by defining the experimental range of investigation (Table 1). Calculations (Table 1, Fig. 2) are set in a pattern defined by factorial design (Table 2), corresponding to the apices of the experimental cube (Fig. 2). Results of the simulations are reported in Table 3 and analyzed, using factorial design theory, in conjunction with each specification. A graphical approach is used to define the regions in which each specification is met: Fig. 3 shows the zone authorized for the first specification, the Wobbe index, and Fig. 4 gives the results for the outlet pressure of the turbo-expander. Figs. 5, 6 and 7 show the zones allowed for the CO2/C2 ratio, the TVP and the C2/C3 ratio. A satisfactory zone is found, for this last ratio, outside of the investigated range. The results acquired so far enable us
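    A minimal sketch of the approach: a 2^3 full factorial supplies the simulator runs (the cube apices of Fig. 2), and an empirical linear model is then fitted by least squares. The `simulate` function below is a cheap invented stand-in for an expensive process simulation, so the recovered coefficients can be checked exactly.

```python
import itertools
import numpy as np

# Coded 2^3 full-factorial design: every combination of three factors at +/-1.
design = np.array(list(itertools.product([-1.0, 1.0], repeat=3)))

def simulate(run):
    """Stand-in for one expensive simulator run (hypothetical response)."""
    x1, x2, x3 = run
    return 10.0 + 2.0 * x1 - 3.0 * x2 + 0.5 * x1 * x2 + 0.1 * x3

responses = np.array([simulate(run) for run in design])

# Least-squares fit of an empirical model: intercept, three main effects,
# and one two-factor interaction.
X = np.column_stack([np.ones(len(design)), design,
                     design[:, 0] * design[:, 1]])
coef, *_ = np.linalg.lstsq(X, responses, rcond=None)
print(coef)  # intercept, x1, x2, x3 main effects, x1*x2 interaction
```

Eight runs suffice to estimate all five coefficients, which is the economy the article advocates compared with varying one factor at a time.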

  16. Molecular Dynamics Calculations (United States)


    The development of thermodynamics and statistical mechanics is very important in the history of physics, and it underlines the difficulty in dealing with systems involving many bodies, even if those bodies are identical. Macroscopic systems of atoms typically contain so many particles that it would be virtually impossible to follow the behavior of all of the particles involved. Therefore, the behavior of a complete system can only be described or predicted in statistical ways. Under a grant to the NASA Lewis Research Center, scientists at the Case Western Reserve University have been examining the use of modern computing techniques that may be able to investigate and find the behavior of complete systems that have a large number of particles by tracking each particle individually. This is the study of molecular dynamics. In contrast to Monte Carlo techniques, which incorporate uncertainty from the outset, molecular dynamics calculations are fully deterministic. Although it is still impossible to track, even on high-speed computers, each particle in a system of a trillion trillion particles, it has been found that such systems can be well simulated by calculating the trajectories of a few thousand particles. Modern computers and efficient computing strategies have been used to calculate the behavior of a few physical systems and are now being employed to study important problems such as supersonic flows in the laboratory and in space. In particular, an animated video (available in mpeg format--4.4 MB) was produced by Dr. M.J. Woo, now a National Research Council fellow at Lewis, and the G-VIS laboratory at Lewis. This video shows the behavior of supersonic shocks produced by pistons in enclosed cylinders by following exactly the behavior of thousands of particles. The major assumptions made were that the particles involved were hard spheres and that all collisions with the walls and with other particles were fully elastic. The animated video was voted one of two

  17. Pressure Vessel Calculations for VVER-440 Reactors (United States)

    Hordósy, G.; Hegyi, Gy.; Keresztúri, A.; Maráczy, Cs.; Temesvári, E.; Vértes, P.; Zsolnay, É.


    Monte Carlo calculations were performed for a selected cycle of the Paks NPP Unit II to test a computational model. In the model the source term was calculated by the core design code KARATE and the neutron transport calculations were performed by the MCNP. Different forms of the source specification were examined. The calculated results were compared with measurements and in most cases fairly good agreement was found.

  18. Program Calculates Current Densities Of Electronic Designs (United States)

    Cox, Brian


    PDENSITY computer program calculates current densities for use in calculating power densities of electronic designs. Reads parts-list file for given design, file containing current required for each part, and file containing size of each part. For each part in design, program calculates current density in units of milliamperes per square inch. Written by use of AWK utility for Sun4-series computers running SunOS 4.x and IBM PC-series and compatible computers running MS-DOS. Sun version of program (NPO-19588). PC version of program (NPO-19171).
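    The calculation PDENSITY performs per part is a simple quotient; the sketch below mirrors the described behaviour in Python rather than AWK, with made-up part names and values.

```python
def current_densities(currents_ma, areas_in2):
    """Current density (mA per square inch) for each part.

    `currents_ma` and `areas_in2` map part reference designators to the
    current drawn (mA) and the part footprint area (square inches).
    Mirrors the per-part calculation described for PDENSITY; the part
    names and numbers here are invented for illustration.
    """
    return {part: currents_ma[part] / areas_in2[part] for part in currents_ma}

print(current_densities({"U1": 150.0, "R7": 2.0}, {"U1": 0.25, "R7": 0.5}))
# -> {'U1': 600.0, 'R7': 4.0}
```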

  19. Méthodes de calcul pour la conception des systèmes de protection cathodique des structures longilignes Computing Methods for Designing Cathodic Protection Systemes for Elongate Stuctures

    Directory of Open Access Journals (Sweden)

    Roche M.


    The different elongate structures used by the hydrocarbon industry are, in most cases, protected by a cathodic protection system consisting of sacrificial anodes or an impressed current. The design of such systems must be based on a study of the variation of potential and current along the structure caused by the ohmic drop. The conventional computing method readily solves the case of elongate structures of constant diameter running through ground whose resistivity is considered constant over the entire length. When the make-up of the structure varies, as is the case for borehole casings, or when the structure crosses several types of formations, the problem becomes more complicated. We propose a general method for quickly dealing with any problem of this type, with no limit to the number of segments involved. This method makes use of the notions of reflection factor and equivalent resistance already described in the literature but whose use does not seem to have become widespread.
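    For the uniform single-segment case that the conventional method solves, the potential shift along the structure follows the classic transmission-line attenuation equation; the multi-segment method of the article generalizes this with reflection factors at each change of properties. The sketch below implements only the uniform case, with illustrative parameter values.

```python
import math

def potential_shift(x_m, length_m, e0_v, alpha_per_m):
    """Potential shift along a uniform line protected from one end.

    Classic transmission-line attenuation for a line of length L fed at
    x = 0 and insulated at x = L: E(x) = E0*cosh(alpha*(L-x))/cosh(alpha*L),
    where alpha = sqrt(R*G) combines the longitudinal resistance R
    (ohm/m) and the coating leakage conductance G (S/m).  Uniform
    single-segment case only; values below are illustrative.
    """
    return e0_v * math.cosh(alpha_per_m * (length_m - x_m)) / math.cosh(alpha_per_m * length_m)

L, E0, alpha = 10000.0, 0.85, 2.0e-4   # 10 km line, 0.85 V shift at the drain
print(potential_shift(0.0, L, E0, alpha), potential_shift(L, L, E0, alpha))
```

The attenuation from drain point to far end is what dictates anode spacing; when alpha*L is large, the far end is under-protected, which is exactly the design question the article's method addresses for non-uniform structures.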

  20. Influence of the Different Primary Cancers and Different Types of Bone Metastasis on the Lesion-based Artificial Neural Network Value Calculated by a Computer-aided Diagnostic System,BONENAVI, on Bone Scintigraphy Images

    Directory of Open Access Journals (Sweden)



    Objective(s): BONENAVI, a computer-aided diagnostic system, is used in bone scintigraphy. The system provides artificial neural network (ANN) and bone scan index (BSI) values. ANN is associated with the probability of bone metastasis, while BSI is related to the amount of bone metastasis. The degree of uptake on bone scintigraphy can be affected by the type of bone metastasis, so the ANN value provided by BONENAVI may be influenced by the characteristics of bone metastasis. In this study, we aimed to assess the relationship between the ANN value and the characteristics of bone metastasis. Methods: We analyzed 50 patients (36 males, 14 females; age range: 42-87 yrs, median age: 72.5 yrs) with prostate, breast, or lung cancer who had undergone bone scintigraphy and were diagnosed with bone metastasis (32 cases of prostate cancer, nine of breast cancer, and nine of lung cancer). Patients who had received systemic therapy in the preceding years were excluded. Bone metastases were diagnosed clinically, and the type of bone metastasis (osteoblastic, mildly osteoblastic, osteolytic, or mixed) was decided visually by the agreement of two radiologists. We compared the ANN values (case-based and lesion-based) among the three primary cancers and the four types of bone metastasis. Results: There was no significant difference in case-based ANN values among prostate, breast, and lung cancers. However, lesion-based ANN values were highest in prostate cancer and lowest in lung cancer (median values: prostate cancer, 0.980; breast cancer, 0.909; lung cancer, 0.864). Mildly osteoblastic lesions showed significantly lower ANN values than the other three types of bone metastasis (median values: osteoblastic, 0.939; mildly osteoblastic, 0.788; mixed type, 0.991; osteolytic, 0.969).
    The possibility of a lesion-based ANN value below 0.5 was 10.9% for bone metastasis in prostate cancer, 12.9% for breast cancer, and 37

  1. Zero Temperature Hope Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Rozsnyai, B F


    The primary purpose of the HOPE code is to calculate opacities over a wide temperature and density range. It can also produce equation of state (EOS) data. Since experimental data in the high-temperature region are scarce, comparisons of predictions with the ample zero-temperature data provide a valuable physics check of the code. In this report we show a selected few examples across the periodic table. Below we give brief general information about the physics of the HOPE code. The HOPE code is an "average atom" (AA) Dirac-Slater self-consistent code. The AA label in the case of finite temperature means that the one-electron levels are populated according to Fermi statistics; at zero temperature it means that the "aufbau" principle works, i.e. no a priori electronic configuration is set, although it can be done. As such, it is a one-particle model (any Hartree-Fock model is a one-particle model). The code is an "ion-sphere" model, meaning that the atom under investigation is neutral within the ion-sphere radius. Furthermore, the boundary conditions for the bound states are also set at the ion-sphere radius, which distinguishes the code from the INFERNO, OPAL and STA codes. Once the self-consistent AA state is obtained, the code proceeds to generate many-electron configurations and to calculate photoabsorption in the "detailed configuration accounting" (DCA) scheme. However, this last feature is meaningless at zero temperature. There is one important feature of the HOPE code which should be noted: any self-consistent model is self-consistent in the space of the occupied orbitals. The unoccupied orbitals, where electrons are lifted via photoexcitation, are unphysical. The rigorous way to deal with that problem is to carry out complete self-consistent calculations both in the initial and final states connecting photoexcitations, an enormous computational task

  2. Applications of computer algebra

    CERN Document Server


    Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed for...
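The "non-numeric computation" referred to above means manipulating expressions rather than numbers. A toy illustration (in Python, not one of the systems named above) of symbolic differentiation, the kind of rule-driven rewriting a computer algebra system performs:

```python
def diff(expr, var):
    """Differentiate a tiny expression tree.
    Expressions are numbers, variable names (strings), or tuples
    ('+', a, b) and ('*', a, b)."""
    if isinstance(expr, (int, float)):
        return 0
    if isinstance(expr, str):
        return 1 if expr == var else 0
    op, a, b = expr
    if op == '+':
        return ('+', diff(a, var), diff(b, var))
    if op == '*':  # product rule: (ab)' = a'b + ab'
        return ('+', ('*', diff(a, var), b), ('*', a, diff(b, var)))
    raise ValueError("unknown operator: %s" % op)

def evaluate(expr, env):
    """Numerically evaluate an expression tree."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        return env[expr]
    op, a, b = expr
    va, vb = evaluate(a, env), evaluate(b, env)
    return va + vb if op == '+' else va * vb

# d/dx (x*x + 3*x) = 2x + 3, so the derivative at x = 4 is 11.
f = ('+', ('*', 'x', 'x'), ('*', 3, 'x'))
df = diff(f, 'x')
```

Real systems add simplification, integration, and closed-form equation solving on top of exactly this kind of tree rewriting.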

  3. 46 CFR 170.090 - Calculations. (United States)


    ... necessary to compute and plot any of the following curves as part of the calculations required in this subchapter, these plots must also be submitted: (1) Righting arm or moment curves. (2) Heeling arm or...

  4. Efficient Finite Element Calculation of Nγ

    DEFF Research Database (Denmark)

    Clausen, Johan; Damkilde, Lars; Krabbenhøft, K.


    This paper deals with the computational aspects of the Mohr-Coulomb material model, in particular the calculation of the bearing capacity factor Nγ for a strip and a circular footing.
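For context, finite element values of Nγ are commonly compared against the classical closed-form factors: Prandtl-Reissner for Nq and Nc, and Meyerhof's fit for Nγ. These are textbook approximations, not the paper's FE results:

```python
import math

def bearing_capacity_factors(phi_deg):
    """Classical bearing capacity factors for a friction angle phi (degrees):
    Nq (Prandtl-Reissner), Nc, and Meyerhof's approximation of Ngamma."""
    phi = math.radians(phi_deg)
    nq = math.exp(math.pi * math.tan(phi)) * math.tan(math.pi / 4 + phi / 2) ** 2
    nc = (nq - 1.0) / math.tan(phi)
    ngamma = (nq - 1.0) * math.tan(1.4 * phi)  # Meyerhof (1963) fit
    return nq, nc, ngamma

# For phi = 30 degrees the classical values are roughly
# Nq ~ 18.4, Nc ~ 30.1, Ngamma ~ 15.7.
nq, nc, ng = bearing_capacity_factors(30.0)
```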

  5. Magnetic Field Calculator (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...

  6. Evaluation of PWR and BWR assembly benchmark calculations. Status report of EPRI computational benchmark results, performed in the framework of the Netherlands' PINK programme (Joint project of ECN, IRI, KEMA and GKN)

    Energy Technology Data Exchange (ETDEWEB)

    Gruppelaar, H. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Klippel, H.T. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Kloosterman, J.L. [Netherlands Energy Research Foundation (ECN), Petten (Netherlands); Hoogenboom, J.E. [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Leege, P.F.A. de [Technische Univ. Delft (Netherlands). Interfacultair Reactor Instituut; Verhagen, F.C.M. [Keuring van Electrotechnische Materialen NV, Arnhem (Netherlands); Bruggink, J.C. [Gemeenschappelijke Kernenergiecentrale Nederland N.V., Dodewaard (Netherlands)


    Benchmark results of the Dutch PINK working group on calculational benchmarks of single pin cells and multipin assemblies as defined by EPRI are presented and evaluated. First, a short update of the methods used by the various institutes involved is given, as well as an update of the status with respect to previously performed pin-cell calculations. Problems detected in previous pin-cell calculations are inspected more closely. A detailed discussion of the results of the multipin assembly calculations is given. The assembly consists of 9 pins in a multicell square lattice in which the central pin is filled differently, i.e. a Gd pin for the BWR assembly and a control rod/guide tube for the PWR assembly. The results for pin cells showed rather good overall agreement between the four participants, although BWR pins with high void fraction turned out to be difficult to calculate. With respect to burnup calculations, good overall agreement for the reactivity swing was obtained, provided that a fine time grid is used. (orig.)

  7. Direct calculation of wind turbine tip loss

    DEFF Research Database (Denmark)

    Wood, D.H.; Okulov, Valery; Bhattacharjee, D.


    . We develop three methods for the direct calculation of the tip loss. The first is the computationally expensive calculation of the velocities induced by the helicoidal wake which requires the evaluation of infinite sums of products of Bessel functions. The second uses the asymptotic evaluation...
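For reference, the standard approximation that direct tip loss calculations are usually measured against is Prandtl's tip loss factor. A minimal sketch, with blade count B, tip radius R, local radius r and inflow angle phi as illustrative inputs:

```python
import math

def prandtl_tip_loss(B, r, R, phi):
    """Prandtl's tip loss factor F = (2/pi) * acos(exp(-f)) with
    f = (B/2) * (R - r) / (r * sin(phi)); F -> 0 at the blade tip."""
    f = 0.5 * B * (R - r) / (r * math.sin(phi))
    return (2.0 / math.pi) * math.acos(math.exp(-f))

# The loss is stronger (F smaller) near the tip than inboard.
F_inboard = prandtl_tip_loss(3, 0.70, 1.0, 0.1)
F_near_tip = prandtl_tip_loss(3, 0.95, 1.0, 0.1)
```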

  8. Insertion device calculations with mathematica

    Energy Technology Data Exchange (ETDEWEB)

    Carr, R. [Stanford Synchrotron Radiation Lab., CA (United States); Lidia, S. [Univ. of California, Davis, CA (United States)


    The design of accelerator insertion devices such as wigglers and undulators has usually been aided by numerical modeling on digital computers, using code in high level languages like Fortran. In the present era, there are higher level programming environments like IDL®, MatLab®, and Mathematica® in which these calculations may be performed by writing much less code, and in which standard mathematical techniques are very easily used. The authors present a suite of standard insertion device modeling routines in Mathematica to illustrate the new techniques. These routines include a simple way to generate magnetic fields using blocks of CSEM materials, trajectory solutions from the Lorentz force equations for given magnetic fields, Bessel function calculations of radiation for wigglers and undulators and general radiation calculations for undulators.
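The trajectory solutions mentioned above come from integrating the Lorentz force in a prescribed field. As a language-agnostic sketch (in Python rather than Mathematica, non-relativistic, with a hypothetical sinusoidal undulator field), the Boris scheme is a standard choice because it conserves the particle speed exactly in a pure magnetic field:

```python
import math

def cross(a, b):
    return (a[1] * b[2] - a[2] * b[1],
            a[2] * b[0] - a[0] * b[2],
            a[0] * b[1] - a[1] * b[0])

def boris_step(r, v, qm, B, dt):
    """One Boris rotation step for dv/dt = qm * (v x B), no E field."""
    t = tuple(0.5 * dt * qm * Bi for Bi in B)
    t2 = sum(ti * ti for ti in t)
    s = tuple(2.0 * ti / (1.0 + t2) for ti in t)
    v_prime = tuple(vi + ci for vi, ci in zip(v, cross(v, t)))
    v_new = tuple(vi + ci for vi, ci in zip(v, cross(v_prime, s)))
    r_new = tuple(ri + dt * vi for ri, vi in zip(r, v_new))
    return r_new, v_new

# Hypothetical planar undulator field: B = (0, B0 * cos(k z), 0).
B0, k, qm, dt = 1.0, 2.0 * math.pi, 1.0, 1e-3
r, v = (0.0, 0.0, 0.0), (0.0, 0.0, 1.0)
for _ in range(2000):
    B = (0.0, B0 * math.cos(k * r[2]), 0.0)
    r, v = boris_step(r, v, qm, B, dt)
speed = math.sqrt(sum(vi * vi for vi in v))
```

The particle wiggles in x while drifting in z, and its speed is unchanged because the magnetic force does no work.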

  9. MBPT calculations with ABINIT (United States)

    Giantomassi, Matteo; Huhs, Georg; Waroquiers, David; Gonze, Xavier


    Many-Body Perturbation Theory (MBPT) defines a rigorous framework for the description of excited-state properties based on the Green's function formalism. Within MBPT, one can calculate charged excitations using e.g. Hedin's GW approximation for the electron self-energy. In the same framework, neutral excitations are also well described through the solution of the Bethe-Salpeter equation (BSE). In this talk, we report on recent developments concerning the parallelization of the MBPT algorithms available in the ABINIT code. In particular, we discuss how to improve the parallel efficiency thanks to a hybrid version that employs MPI for the coarse-grained parallelization and OpenMP (a de facto standard for parallel programming on shared memory architectures) for the fine-grained parallelization of the most CPU-intensive parts. Benchmark results obtained with the new implementation are discussed. Finally, we present results for the GW corrections of amorphous SiO2 in the presence of defects and the BSE absorption spectrum. This work has been supported by the PRACE project (Partnership for Advanced Computing in Europe).

  10. La résistance de vague des carènes. Calcul de la fonction de Green par intégration numérique et par une méthode asymptotique. 1° Partie Hull Resistance to Waves. Computing the Green Function by Numerical Integration and by an Asymptotic Method. Part One

    Directory of Open Access Journals (Sweden)

    Carou A.


    Full Text Available Le calcul de la résistance de vague d'une carène par éléments finis concentrés sur un ouvert borné nécessite la connaissance de la fonction de Green du problème à grande distance. Cette fonction est très difficile à calculer numériquement. On justifie dans ce travail une méthode asymptotique rapide, remplaçant avantageusement l'intégration numérique. Computing wave resistance by finite elements concentrated on a bounded open set requires the prior knowledge of the Green function of the problem at a great distance. Computing this function is numerically very difficult. A fast asymptotic method is justified in this article, and it can be used to advantage as a replacement for numerical integration.

  11. Automation of 2-loop Amplitude Calculations

    CERN Document Server

    Jones, S P


    Some of the tools and techniques that have recently been used to compute Higgs boson pair production at NLO in QCD are discussed. The calculation relies on integral reduction, to reduce the number of integrals which must be computed, and on expressing the amplitude in terms of a quasi-finite basis, which simplifies its numeric evaluation. Emphasis is placed on sector decomposition and Quasi-Monte Carlo (QMC) integration, which are used to numerically compute the master integrals.
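As a toy illustration of the QMC idea (not the rank-1 lattice rules actually used for master integrals), low-discrepancy points such as a 2-D Halton sequence replace pseudo-random samples in the integral average, giving faster convergence for smooth integrands:

```python
def van_der_corput(n, base):
    """Radical-inverse of n in the given base, a value in [0, 1)."""
    q, denom = 0.0, 1.0
    while n:
        n, rem = divmod(n, base)
        denom *= base
        q += rem / denom
    return q

def halton_2d(n_points):
    """2-D Halton sequence built from the coprime bases 2 and 3."""
    return [(van_der_corput(i, 2), van_der_corput(i, 3))
            for i in range(1, n_points + 1)]

# QMC estimate of the integral of x*y over the unit square (exact: 1/4).
pts = halton_2d(4096)
estimate = sum(x * y for x, y in pts) / len(pts)
```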

  12. Contribution to the algorithmic and efficient programming of new parallel architectures including accelerators for neutron physics and shielding computations; Contribution a l'algorithmique et a la programmation efficace des nouvelles architectures paralleles comportant des accelerateurs de calcul dans le domaine de la neutronique et de la radioprotection

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, J.


    In science, simulation is a key process for research and validation. Modern computer technology allows faster numerical experiments, which are cheaper than physical models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a great deal of computing power may be necessary. The first part of this thesis is the evaluation of new computing hardware, such as graphics cards or massively multi-core chips, and its application to eigenvalue problems in neutron simulation. Then, in order to exploit the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then test the results of this research on several national supercomputers, such as the Titane hybrid machine of the Computing Center, Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the value of this research for everyday use with local computing resources. (author) [French] Les travaux de cette these concernent dans un premier temps l'evaluation des nouveaux materiels de calculs tels que les cartes graphiques ou les puces massivement multicoeurs, et leur application aux problemes de valeurs propres pour la neutronique. Ensuite, dans le but d'utiliser le parallelisme massif des supercalculateurs, nous etudions egalement l'utilisation de methodes hybrides asynchrones pour resoudre des problemes a valeur propre avec ce tres haut niveau de parallelisme. Nous experimentons ensuite le resultat de ces recherches sur plusieurs supercalculateurs nationaux tels que la machine hybride Titane du Centre de Calcul, Recherche et Technologies (CCRT), la machine Curie du Tres Grand Centre de Calcul (TGCC) qui
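The eigenvalue problems referred to above (e.g. the criticality eigenvalue of a reactor core) are typically attacked with power-type iterations, which also parallelize naturally. A minimal serial sketch on a small dense matrix:

```python
def power_iteration(A, iters=500):
    """Dominant eigenvalue of a square matrix by repeated multiplication,
    the serial ancestor of the parallel eigensolvers discussed above."""
    n = len(A)
    x = [1.0] * n
    lam = 0.0
    for _ in range(iters):
        y = [sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
        lam = max(abs(v) for v in y)  # normalization factor
        x = [v / lam for v in y]
    return lam

# Matrix with eigenvalues 5 and 2; the iteration converges to 5.
A = [[4.0, 1.0], [2.0, 3.0]]
lam = power_iteration(A)
```

Each iteration is dominated by the matrix-vector product, which is exactly the operation distributed across nodes (and overlapped asynchronously) in large-scale solvers.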

  13. Radiation doses from radiation sources of neutrons and photons by different computer calculation; Tecniche di calcolo di intensita` di dose da sorgenti di radiazione neutronica e fotonica con l`uso di codici basati su metodologie diverse

    Energy Technology Data Exchange (ETDEWEB)

    Siciliano, F.; Lippolis, G.; Bruno, S.G. [ENEA, Centro Ricerche Trisaia, Rotondella (Italy)


    In the present paper, the calculation of dose rates from neutron and photon radiation sources is covered, with reference both to the basic theoretical modeling of the MERCURE-4, XSDRNPM-S and MCNP-3A codes and, from a practical point of view, to safety analyses of the irradiation risk of two transportation casks. The input data set of these calculations, regarding the CEN 10/200 HLW container and a shipping cask for dry PWR spent fuel assemblies, is commented on in detail as far as the connection between the input data and the underlying theory is concerned.

  14. Lateral hydraulic forces calculation on PWR fuel assemblies with computational fluid dynamics codes; Calculo de fuerzas laterales hidraulicas en elementos combustibles tipo PWR con codigos de dinamica de fluidos computacional

    Energy Technology Data Exchange (ETDEWEB)

    Corpa Masa, R.; Jimenez Varas, G.; Moreno Garcia, B.


    To simulate the behavior of nuclear fuel under operating conditions, all representative loads must be included, among them the lateral hydraulic forces, which traditionally were not included because of the difficulty of calculating them reliably. Thanks to advances in CFD codes, it is now possible to assess them. This study calculates the local lateral hydraulic forces, caused by the contraction and expansion of the flow due to the bow of the surrounding fuel assemblies, on a fuel assembly under typical operating conditions in a three-loop Westinghouse PWR reactor. (Author)

  15. Program Calculates Power Demands Of Electronic Designs (United States)

    Cox, Brian


    CURRENT computer program calculates power requirements of electronic designs. For a given design, CURRENT reads in the applicable parts-list file and a file containing the current required for each part. The program also calculates the power required for the circuit at supply potentials of 5.5, 5.0, and 4.5 volts. Written using the AWK utility for Sun4-series computers running SunOS 4.x and for IBM PC-series and compatible computers running MS-DOS. Sun version of program (NPO-19590). PC version of program (NPO-19111).
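The calculation CURRENT performs is simple to sketch: sum the per-part supply currents from the parts list, then multiply by each supply potential. Shown here in Python rather than AWK; the part names and currents are hypothetical:

```python
# Hypothetical parts list: part name -> supply current in amperes.
parts = {"U1": 0.120, "U2": 0.045, "U3": 0.008, "R5": 0.002}

total_current = sum(parts.values())  # 0.175 A for this list

# Power P = V * I at the three supply potentials checked by CURRENT.
power = {v: v * total_current for v in (5.5, 5.0, 4.5)}
```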

  16. Methods for Melting Temperature Calculation (United States)

    Hong, Qi-Jun

    Melting temperature calculation has important applications in the theoretical study of phase diagrams and in computational materials screening. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of tantalum, high-pressure sodium, and the ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
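The particle-insertion idea above can be sketched for the simplest nontrivial case, a hard-sphere fluid, where the excess chemical potential reduces to mu_ex = -kT ln(fraction of trial insertions free of overlap). The configuration here is random rather than an MD snapshot, and the parameters are illustrative only:

```python
import math
import random

def widom_hard_spheres(positions, box, sigma, n_trials, rng):
    """Widom insertion for hard spheres: beta * mu_ex = -ln <accept>,
    where a trial insertion is accepted if no overlap occurs."""
    accepted = 0
    for _ in range(n_trials):
        trial = [rng.uniform(0.0, box) for _ in range(3)]
        ok = True
        for p in positions:
            d2 = 0.0
            for a, b in zip(trial, p):
                d = a - b
                d -= box * round(d / box)  # minimum-image convention
                d2 += d * d
            if d2 < sigma * sigma:
                ok = False
                break
        accepted += ok
    frac = accepted / n_trials
    return -math.log(frac) if frac > 0.0 else float("inf")

rng = random.Random(42)
box, sigma = 10.0, 1.0
positions = [[rng.uniform(0.0, box) for _ in range(3)] for _ in range(20)]
beta_mu_ex = widom_hard_spheres(positions, box, sigma, 20000, rng)
```

The thesis's improvement amounts to sampling the trial points preferentially in cavities instead of uniformly, so far fewer insertions are wasted on certain rejections.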

  17. NETL Super Computer (United States)

    Federal Laboratory Consortium — The NETL Super Computer was designed for performing engineering calculations that apply to fossil energy research. It is one of the world’s larger supercomputers,...

  18. Computing meaning v.4

    CERN Document Server

    Bunt, Harry; Pulman, Stephen


    This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue i

  19. Contributing to the design of run-time systems dedicated to high performance computing; Contribution a l'elaboration d'environnements de programmation dedies au calcul scientifique hautes performances

    Energy Technology Data Exchange (ETDEWEB)

    Perache, M


    In the field of intensive scientific computing, the quest for performance has to face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multi-processors. It features a programming model centered around collective communications and synchronizations, and provides load balancing facilities. The programming interface, named MPC, provides high-level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)

  20. New developments in the Denox technology. Plant optimization by means of computer calculations and model experiments. Neueste Entwicklungen in der Denox-Technologie. Optimierung von Anlagen durch Berechnungsprogramme und Modellversuche

    Energy Technology Data Exchange (ETDEWEB)

    Herrlander, B. (Flaekt Industriella Processer AB, Flaekt (Sweden))


    NO{sub x} contained in the flue gases of power plants and waste incineration plants causes acidification of soil and water. Plants for emission reduction are already in use and are being further developed. Selective catalytic reduction (SCR) is the leading technology worldwide for NO{sub x} removal from flue gases. A Swedish company offers plant optimisation on the basis of computer programs and model experiments; their technique has been tested in a number of reference plants. (orig.)

  1. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J


    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program will upload them to the server for calculating the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally the program will estimate the reliability of an average of a number of ratings for each subject via the Spearman-Brown prophecy formula. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies in order to provide complete rating data. I would welcome other researchers revising and enhancing the program.
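The Spearman-Brown prophecy step mentioned above is a one-line formula: the reliability of the mean of k ratings, given single-rating reliability r, is r_k = k*r / (1 + (k-1)*r). A minimal sketch:

```python
def spearman_brown(r, k):
    """Predicted reliability of the average of k ratings, given the
    reliability r of a single rating (Spearman-Brown prophecy formula)."""
    return k * r / (1.0 + (k - 1.0) * r)

# Doubling the number of judges raises a reliability of 0.5 to about 0.667.
r2 = spearman_brown(0.5, 2)
```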

  2. Flow Field Calculations for Afterburner

    Institute of Scientific and Technical Information of China (English)

    ZhaoJianxing; LiuQuanzhong; 等


    In this paper a calculation procedure for simulating the combustion flow in the afterburner with the heat shield, flame stabilizer and the contracting nozzle is described and evaluated by comparison with experimental data. The modified two-equation κ-ε model is employed to consider the turbulence effects, and the κ-ε-g turbulent combustion model is used to determine the reaction rate. To take into account the influence of heat radiation on the gas temperature distribution, a heat flux model is applied to predictions of heat flux distributions. The solution domain spans the entire region between the centerline and the afterburner wall, with the heat shield represented as a blockage to the mesh. The enthalpy equation and the wall boundary of the heat shield require special handling for the two passages in the afterburner. In order to make the computer program suitable for engineering applications, a subregional scheme is developed for calculating flow fields of complex geometries. The computational grids employed are 100×100 and 333×100 (non-uniformly distributed). The numerical results are compared with experimental data. Agreement between predictions and measurements shows that the numerical method and the computational program used in the study are fairly reasonable and appropriate for the primary design of the afterburner.

  3. SU-E-J-100: The Combination of Deformable Image Registration and Regions-Of-Interest Mapping Technique to Accomplish Accurate Dose Calculation On Cone Beam Computed Tomography for Esophageal Cancer

    Energy Technology Data Exchange (ETDEWEB)

    Huang, B-T; Lu, J-Y [Cancer Hospital of Shantou University Medical College, Shantou (China)


    Purpose: We introduce a new method combining the deformable image registration (DIR) and regions-of-interest mapping (ROIM) techniques to accurately calculate dose on daily CBCT for esophageal cancer. Methods: Patients suffering from esophageal cancer were enrolled in the study. The prescription was set to 66 Gy/30 F and 54 Gy/30 F to the primary tumor (PTV66) and subclinical disease (PTV54). The planning CT (pCT) was segmented into 8 substructures in terms of their differences in physical density, such as gross target volume (GTV), venae cava superior (SVC), aorta, heart, spinal cord, lung, muscle and bones. The pCT and its substructures were transferred to the MIM software to read out their mean HU values. Afterwards, a deformable planning CT to daily kV-CBCT image registration method was utilized to acquire a new structure set on the CBCT. The newly generated structures on the CBCT were then transferred back to the treatment planning system (TPS) and their HU values were overridden manually with the mean HU values obtained from the pCT. Finally, the treatment plan was projected onto the CBCT images with the same beam arrangements and monitor units (MUs) to accomplish the dose calculation. The planning target volume (PTV) and organs at risk (OARs) from both the pCT and the CBCT were compared to evaluate the dose calculation accuracy. Results: It was found that the dose distribution in the CBCT showed little difference compared to the pCT, regardless of whether the PTV or OARs were concerned. Specifically, the dose variations in GTV, PTV54, PTV66, SVC, lung and heart were within 0.1%. The maximum dose variation was found in the spinal cord, with up to a 2.7% dose difference. Conclusion: The proposed method, combining the DIR and ROIM techniques to accurately calculate the dose distribution on CBCT for esophageal cancer, is feasible.

  4. Computer-assisted Crystallization. (United States)

    Semeister, Joseph J., Jr.; Dowden, Edward


    To avoid the tedious task of recording temperatures, a computer was used to calculate the heat of crystallization for the compound sodium thiosulfate. Described are the computer-interfacing procedures. Provides pictures of laboratory equipment and typical graphs from experiments. (YP)

  5. The Computational Materials Repository

    DEFF Research Database (Denmark)

    Landis, David D.; Hummelshøj, Jens S.; Nestorov, Svetlozar


    The possibilities for designing new materials based on quantum physics calculations are rapidly growing, but these design efforts lead to a significant increase in the amount of computational data created. The Computational Materials Repository (CMR) addresses this data challenge and provides a software infrastructure that supports the collection, storage, retrieval, analysis, and sharing of data produced by many electronic-structure simulators.

  6. Vibrational spectroscopy [FTIR and FTRaman] investigation, computed vibrational frequency analysis and IR intensity and Raman activity peak resemblance analysis on 4-chloro 2-methylaniline using HF and DFT [LSDA, B3LYP and B3PW91] calculations. (United States)

    Ramalingam, S; Periandy, S


    In the present study, the FT-IR and FT-Raman spectra of 4-chloro-2-methylaniline (4CH2MA) have been recorded in the range of 4000-100 cm(-1). The fundamental modes of vibrational frequencies of 4CH2MA are assigned. All the geometrical parameters have been calculated by HF and DFT (LSDA, B3LYP and B3PW91) methods with 6-31G (d, p) and 6-311G (d, p) basis sets. Optimized geometries of the molecule have been interpreted and compared with the reported experimental values for aniline and some substituted aniline. The harmonic and anharmonic vibrational wavenumbers, IR intensities and Raman activities are calculated at the same theory levels used in geometry optimization. The calculated frequencies are scaled and compared with experimental values. The scaled vibrational frequencies at LSDA/B3LYP/6-311G (d, p) seem to coincide with the experimentally observed values with acceptable deviations. The impact of substitutions on the benzene structure is investigated. The molecular interactions between the substitutions (Cl, CH(3) and NH(2)) are also analyzed.

  7. Calculation of conversion coefficients of dose of a computational anthropomorphic simulator sit exposed to a plane source; Calculo de coeficientes de conversao de dose de um simulador antropomorfico computacional sentado exposto a uma fonte plana

    Energy Technology Data Exchange (ETDEWEB)

    Santos, William S.; Carvalho Junior, Alberico B. de; Pereira, Ariana J.S.; Santos, Marcos S.; Maia, Ana F. [Universidade Federal de Sergipe (UFS), Aracaju, SE (Brazil)


    In this paper, conversion coefficients (CCs) of equivalent and effective dose in terms of air kerma, as suggested by ICRP 74, were calculated. These dose coefficients were calculated considering a plane, monoenergetic radiation source with an energy spectrum varying from 10 keV to 2 MeV. The CCs were obtained for four irradiation geometries: anterior-posterior, posterior-anterior, right lateral and left lateral. The radiation transport code Visual Monte Carlo (VMC) and a seated female voxel anthropomorphic simulator were used. The differences observed in the CC values for the four irradiation scenarios are a direct result of the disposition of the body organs and of the distance of these organs from the irradiation source. The obtained CCs will allow a more precise estimation of dose in situations where the exposed individual is seated, since the CCs available in the literature were normally calculated using simulators that are standing or lying down.

  8. Molecular structure and computational studies on 2-((2-(4-(3-(2,5-dimethylphenyl)-3-methylcyclobutyl)thiazol-2-yl)hydrazono)methyl)phenol monomer and dimer by DFT calculations (United States)

    Karakurt, Tuncay; Cukurovali, Alaaddin; Subasi, Nuriye Tuna; Kani, Ibrahim


    The title compound, 2-((2-(4-(3-(2,5-Dimethylphenyl)-3-methylcyclobutyl)thiazol-2-yl)hydrazono)methyl)phenol, was characterized by single-crystal X-ray diffraction. In order to calculate the molecular geometry along with the infrared spectrum, Atoms in Molecules (AIM) analysis and 1H and 13C NMR chemical shift values, the density functional theory (DFT) method with the 6-311G++(d,p) basis set was utilized. Experimental data were then used for comparison. While the title crystal structure is photochromic, the molecule is nonplanar. It takes on an enol form including a strong intramolecular O-H⋯N hydrogen bond as well as a strong intermolecular N-H⋯N hydrogen bond. The 6-311G++(d,p) basis function was used to examine the intramolecular tautomeric single proton transfer reaction between the hydrogen-bonded enol-imine and keto-amine monomers in the title crystal structure at the B3LYP theory level. Further, the frontier molecular orbitals (FMO), molecular docking and NLO properties were studied using theoretical calculations. The calculated NLO properties of the title compound are much greater than those of urea. The title compound forms a stable complex with CDK2, as is evident from the binding energy values. These results suggest that the compound might exhibit an inhibitory effect against CDK2, which is important in the development of new antitumor agents.

  9. Geochemical Calculations Using Spreadsheets. (United States)

    Dutch, Steven Ian


    Spreadsheets are well suited to many geochemical calculations, especially those that are highly repetitive. Some of the kinds of problems that can be conveniently solved with spreadsheets include elemental abundance calculations, equilibrium abundances in nuclear decay chains, and isochron calculations. (Author/PR)
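The nuclear-decay-chain example translates directly into a formula cell: for a two-member chain parent -> daughter, the Bateman solution for the daughter population is N2(t) = N1(0) * l1/(l2 - l1) * (exp(-l1 t) - exp(-l2 t)). A sketch in Python rather than a spreadsheet, with illustrative decay constants, cross-checked against direct numerical integration of the rate equations:

```python
import math

def daughter_atoms(n1_0, lam1, lam2, t):
    """Two-member Bateman equation: daughter population at time t,
    starting from n1_0 parent atoms and no daughter atoms."""
    return (n1_0 * lam1 / (lam2 - lam1)
            * (math.exp(-lam1 * t) - math.exp(-lam2 * t)))

# Euler integration of dN1/dt = -l1 N1, dN2/dt = l1 N1 - l2 N2.
n1_0, lam1, lam2, t_end, steps = 1.0e6, 0.01, 0.05, 100.0, 200000
n1, n2, dt = n1_0, 0.0, t_end / steps
for _ in range(steps):
    dn1 = -lam1 * n1
    dn2 = lam1 * n1 - lam2 * n2
    n1 += dt * dn1
    n2 += dt * dn2
analytic = daughter_atoms(n1_0, lam1, lam2, t_end)
```

The repetitive time-stepping column is exactly the kind of calculation the article recommends delegating to a spreadsheet; the closed form provides the check.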

  10. Autistic Savant Calendar Calculators. (United States)

    Patti, Paul J.

    This study identified 10 savants with developmental disabilities and an exceptional ability to calculate calendar dates. These "calendar calculators" were asked to demonstrate their abilities, and their strategies were analyzed. The study found that the ability to calculate dates into the past or future varied widely among these…

  11. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan


    Full Text Available

    Cloud computing was, and will be, a new way of providing Internet services and computing. This computing approach builds on many existing services, such as the Internet, grid computing and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in price and infrastructure. It is precisely the transition from the computer as a product to a service offered to consumers and delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics it offers. It is a theoretical paper.

    Keywords: Cloud computing, QoS, quality of cloud computing

  12. Improvement of Sodium Neutronic Nuclear Data for the Computation of Generation IV Reactors; Contribution a l'amelioration des donnees nucleaires neutroniques du sodium pour le calcul des reacteurs de generation IV

    Energy Technology Data Exchange (ETDEWEB)

    Archier, P.


    The safety criteria to be met for Generation IV sodium fast reactors (SFR) require reduced and mastered uncertainties on the neutronic quantities of interest. Part of these uncertainties comes from nuclear data and, in the particular case of SFR, from sodium nuclear data, which show significant differences between the available international libraries (JEFF-3.1.1, ENDF/B-VII.0, JENDL-4.0). The objective of this work is to improve the knowledge of sodium nuclear data for a better calculation of SFR neutronic parameters and reliable associated uncertainties. After an overview of existing {sup 23}Na data, the impact of the differences is quantified, particularly on sodium void reactivity effects, with both deterministic and stochastic neutronic codes. Results show that it is necessary to completely re-evaluate sodium nuclear data. Several developments have been made in the evaluation code Conrad, to integrate new nuclear reaction models and their associated parameters and to perform adjustments with integral measurements. Following these developments, the analysis of differential data and the propagation of experimental uncertainties have been performed with Conrad. The resolved resonance range has been extended up to 2 MeV and the continuum range begins directly beyond this energy. A new {sup 23}Na evaluation and the associated multigroup covariance matrices were generated for future uncertainty calculations. The last part of this work focuses on the feedback from sodium void integral data, using integral data assimilation methods to reduce the uncertainties on sodium cross sections. This work ends with uncertainty calculations for an industrial-like SFR, which show an improved prediction of its neutronic parameters with the new evaluation. (author) [French] Les criteres de surete exiges pour les reacteurs rapides au sodium de Generation IV (RNR-Na) se traduisent par la necessite d'incertitudes reduites et maitrisees sur les grandeurs neutroniques d'interet. Une part

  13. How Do Calculators Calculate Trigonometric Functions? (United States)

    Underwood, Jeremy M.; Edwards, Bruce H.

    How does your calculator quickly produce values of trigonometric functions? You might be surprised to learn that it does not use series or polynomial approximations, but rather the so-called CORDIC method. This paper will focus on the geometry of the CORDIC method, as originally developed by Volder in 1959. This algorithm is a wonderful…
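    The rotation-mode iteration Volder described can be sketched in a few lines. This is an illustrative floating-point version (a real calculator would use fixed-point integers, shifts and a stored arctangent table), not the code any particular device runs:

```python
import math

def cordic(theta, n=32):
    """Approximate (cos(theta), sin(theta)) for |theta| <= pi/2 using the
    CORDIC rotation mode: only additions, halvings and a small atan table."""
    angles = [math.atan(2.0 ** -i) for i in range(n)]
    # Aggregate gain of the pseudo-rotations; its inverse rescales the result.
    k = 1.0
    for i in range(n):
        k *= 1.0 / math.sqrt(1.0 + 2.0 ** (-2 * i))
    x, y, z = 1.0, 0.0, theta
    for i in range(n):
        d = 1.0 if z >= 0 else -1.0          # rotate toward the residual angle
        x, y = x - d * y * 2.0 ** -i, y + d * x * 2.0 ** -i
        z -= d * angles[i]
    return x * k, y * k

c, s = cordic(math.pi / 6)  # c ~ 0.866, s ~ 0.5
```

    Each step rotates the vector by a fixed angle atan(2^-i), choosing only the sign, which is why the hardware needs no multiplier.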

  14. Energy-Constrained Recharge, Assimilation, and Fractional Crystallization (EC-RAχFC): A Visual Basic computer code for calculating trace element and isotope variations of open-system magmatic systems (United States)

    Bohrson, Wendy A.; Spera, Frank J.


    Volcanic and plutonic rocks provide abundant evidence for complex processes that occur in magma storage and transport systems. The fingerprint of these processes, which include fractional crystallization, assimilation, and magma recharge, is captured in petrologic and geochemical characteristics of suites of cogenetic rocks. Quantitatively evaluating the relative contributions of each process requires integration of mass, species, and energy constraints, applied in a self-consistent way. The energy-constrained model Energy-Constrained Recharge, Assimilation, and Fractional Crystallization (EC-RAχFC) tracks the trace element and isotopic evolution of a magmatic system (melt + solids) undergoing simultaneous fractional crystallization, recharge, and assimilation. Mass, thermal, and compositional (trace element and isotope) output is provided for melt in the magma body, cumulates, enclaves, and anatectic (i.e., country rock) melt. Theory of the EC computational method has been presented by Spera and Bohrson (2001, 2002, 2004), and applications to natural systems have been elucidated by Bohrson and Spera (2001, 2003) and Fowler et al. (2004). The purpose of this contribution is to make the final version of the EC-RAχFC computer code available and to provide instructions for code implementation, description of input and output parameters, and estimates of typical values for some input parameters. A brief discussion highlights measures by which the user may evaluate the quality of the output and also provides some guidelines for implementing nonlinear productivity functions. The EC-RAχFC computer code is written in Visual Basic, the programming language of Excel. The code therefore launches in Excel and is compatible with both PC and MAC platforms. The code is available on the authors' Web sites as well as in the auxiliary material.

  15. Research in Computational Astrobiology (United States)

    Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.


    We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods, and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.

  16. When computers were human

    CERN Document Server

    Grier, David Alan


    Before Palm Pilots and iPods, PCs and laptops, the term "computer" referred to the people who did scientific calculations by hand. These workers were neither calculating geniuses nor idiot savants but knowledgeable people who, in other circumstances, might have become scientists in their own right. When Computers Were Human represents the first in-depth account of this little-known, 200-year epoch in the history of science and technology. Beginning with the story of his own grandmother, who was trained as a human computer, David Alan Grier provides a poignant introduction to the wider wo

  17. Painless causality in defect calculations

    CERN Document Server

    Cheung, C; Cheung, Charlotte; Magueijo, Joao


    Topological defects must respect causality, a statement leading to restrictive constraints on the power spectrum of the total cosmological perturbations they induce. Causality constraints have long been known to require the presence of an under-density in the surrounding matter compensating the defect network on large scales. This so-called compensation can never be neglected and significantly complicates calculations in defect scenarios, e.g. computing cosmic microwave background fluctuations. A quick and dirty way to implement the compensation is via the so-called compensation fudge factors. Here we derive the complete photon-baryon-CDM backreaction effects in defect scenarios. The fudge factor comes out as an algebraic identity and so we drop the negative qualifier ``fudge''. The compensation scale is computed and physically interpreted. Secondary backreaction effects exist, and neglecting them constitutes the well-defined approximation scheme within which one should consider compensation factor calculatio...

  18. Rate calculation with colored noise

    CERN Document Server

    Bartsch, Thomas; Benito, R M; Borondo, F


    The usual identification of reactive trajectories for the calculation of reaction rates requires very time-consuming simulations, particularly if the environment presents memory effects. In this paper, we develop a new method that permits the identification of reactive trajectories in a system driven by stochastic colored noise. This method is based on the perturbative computation of the invariant structures that act as separatrices for reactivity. Furthermore, using this perturbative scheme, we have obtained a formally exact expression for the reaction rate in multidimensional systems coupled to colored noisy environments.

  19. Towards the development of run times leveraging virtualization for high performance computing; Contribution a l'elaboration de supports executifs exploitant la virtualisation pour le calcul hautes performances

    Energy Technology Data Exchange (ETDEWEB)

    Diakhate, F.


    In recent years, there has been a growing interest in using virtualization to improve the efficiency of data centers. This success is rooted in virtualization's excellent fault tolerance and isolation properties, in the overall flexibility it brings, and in its ability to exploit multi-core architectures efficiently. These characteristics also make virtualization an ideal candidate to tackle issues found in new compute cluster architectures. However, in spite of recent improvements in virtualization technology, overheads in the execution of parallel applications remain, which prevent its use in the field of high performance computing. In this thesis, we propose a virtual device dedicated to message passing between virtual machines, so as to improve the performance of parallel applications executed in a cluster of virtual machines. We also introduce a set of techniques facilitating the deployment of virtualized parallel applications. These functionalities have been implemented as part of a runtime system that lets applications benefit from virtualization's properties in a way that is as transparent as possible to the user while minimizing performance overheads. (author)

  20. Implicit upwind schemes for computational fluid dynamics. Solution by domain decomposition; Etude des schemas decentres implicites pour le calcul numerique en mecanique des fluides. Resolution par decomposition de domaine

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, S


    In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with the improvement of the computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite volume schemes based on Godunov's approach are unsuited to compute low Mach number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly, or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady state problem. (author)
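    As a minimal illustration of the upwinding idea discussed above (the thesis itself concerns implicit finite-volume schemes; this explicit scalar version is only a sketch under simple assumptions):

```python
def upwind_step(u, c, dx, dt):
    """One explicit first-order upwind step for the linear advection
    equation u_t + c u_x = 0 with c > 0: information travels rightward,
    so each cell differences against its left neighbour."""
    nu = c * dt / dx  # CFL number; stability requires nu <= 1
    return [u[0]] + [u[i] - nu * (u[i] - u[i - 1]) for i in range(1, len(u))]

# With nu == 1 a step profile is advected exactly one cell per step.
u = [1.0] * 5 + [0.0] * 5
u1 = upwind_step(u, c=1.0, dx=0.1, dt=0.1)  # -> [1.0]*6 + [0.0]*4
```

    The implicit variants studied in the thesis solve a linear system per step instead, which removes the CFL restriction at the cost of the solver machinery the abstract describes.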

  1. Development of a computational model for the calculation of neutron dose equivalent in laminated primary barriers of radiotherapy rooms; Desenvolvimento de um modelo computacional para calculo do equivalente de dose de neutrons em barreiras primarias laminadas de salas de radioterapia

    Energy Technology Data Exchange (ETDEWEB)

    Rezende, Gabriel Fonseca da Silva


    Many radiotherapy centers acquire 15 and 18 MV linear accelerators to perform more effective treatments for deep tumors. However, the acquisition of this equipment must be accompanied by additional care in the shielding planning of the rooms that will house it. In cases where space is restricted, it is common to find primary barriers made of concrete and metal. The drawback of this type of barrier is photoneutron emission when high energy photons (e.g. 15 and 18 MV spectra) interact with the metallic material of the barrier. The emission of these particles constitutes a radiation protection problem inside and outside radiotherapy rooms, which should be properly assessed. A recent work has shown that the current model underestimates the neutron dose outside the treatment rooms. In this work, a computational model for the aforementioned problem was created from Monte Carlo simulations and artificial intelligence. The developed model was composed of three neural networks, each corresponding to a pair of material and spectrum: Pb18, Pb15 and Fe18. In a direct comparison with the McGinley method, the Pb18 network exhibited the best responses for approximately 78% of the cases tested; the Pb15 network showed better results for 100% of the tested cases, while the Fe18 network produced better answers for 94% of the tested cases. Thus, the computational model composed of the three networks has shown more consistent results than the McGinley method. (author)

  2. ITER Port Interspace Pressure Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Carbajo, Juan J [ORNL; Van Hove, Walter A [ORNL


    The ITER Vacuum Vessel (VV) is equipped with 54 access ports. Each of these ports has an opening in the bioshield that communicates with a dedicated port cell. During Tokamak operation, the bioshield opening must be closed with a concrete plug to shield the radiation coming from the plasma. This port plug separates the port cell into a Port Interspace (between VV closure lid and Port Plug) on the inner side and the Port Cell on the outer side. This paper presents calculations of pressures and temperatures in the ITER (Ref. 1) Port Interspace after a double-ended guillotine break (DEGB) of a pipe of the Tokamak Cooling Water System (TCWS) with high temperature water. It is assumed that this DEGB occurs during the worst possible conditions, which are during water baking operation, with water at a temperature of 523 K (250 C) and at a pressure of 4.4 MPa. These conditions are more severe than during normal Tokamak operation, with the water at 398 K (125 C) and 2 MPa. Two computer codes are employed in these calculations: RELAP5-3D Version 4.2.1 (Ref. 2) to calculate the blowdown releases from the pipe break, and MELCOR, Version 1.8.6 (Ref. 3) to calculate the pressures and temperatures in the Port Interspace. A sensitivity study has been performed to optimize some flow areas.

  3. GASP: A computer code for calculating the thermodynamic and transport properties for ten fluids: Parahydrogen, helium, neon, methane, nitrogen, carbon monoxide, oxygen, fluorine, argon, and carbon dioxide. [enthalpy, entropy, thermal conductivity, and specific heat] (United States)

    Hendricks, R. C.; Baron, A. K.; Peller, I. C.


    A FORTRAN IV subprogram called GASP is discussed which calculates the thermodynamic and transport properties for 10 pure fluids: parahydrogen, helium, neon, methane, nitrogen, carbon monoxide, oxygen, fluorine, argon, and carbon dioxide. The pressure range is generally from 0.1 to 400 atmospheres (to 100 atm for helium and to 1000 atm for hydrogen). The temperature ranges are from the triple point to 300 K for neon; to 500 K for carbon monoxide, oxygen, and fluorine; to 600 K for methane and nitrogen; to 1000 K for argon and carbon dioxide; to 2000 K for hydrogen; and from 6 to 500 K for helium. GASP accepts any two of pressure, temperature, and density as input conditions, as well as pressure together with either entropy or enthalpy. The properties available in any combination as output include temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, and surface tension. The subprogram design is modular so that the user can choose only those subroutines necessary to the calculations.

  4. Core calculations of JMTR

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment


    In material testing reactors like the JMTR (Japan Material Testing Reactor) of 50 MW at the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions. It is necessary to assess the neutron flux and energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. In order to advance core calculation in the JMTR, the application of MCNP to the assessment of core reactivity, neutron flux and spectra has been investigated. In this study, in order to reduce computation time and variance, calculations using the K-code (criticality) mode and the fixed-source mode, with and without the Weight Window technique, were compared. As to the calculation method, the modeling of the whole JMTR core, the calculation conditions and the adopted variance reduction technique are explained, and the results of the calculations are shown. No significant difference was observed between the neutron fluxes obtained with the different modelings of the fuel region in the K-code and fixed-source calculations. The method of assessing the results of the neutron flux calculation is also described. (K.I.)

  5. Computational programs for shielding calculation with transport of one dimensional and monoenergetic S{sub N}; Aplicativo computacional para calculos de blindagem com modelo de transporte S{sub N} unidimensional e monoenergetico

    Energy Technology Data Exchange (ETDEWEB)

    Nunes, Carlos Eduardo A.; Barros, Ricardo C., E-mail: ceanunes@iprj.uerj.b, E-mail: rcbarros@pq.cnpq.b [Universidade do Estado, Nova Friburgo, RJ (Brazil). Inst. Politecnico. Dept. de Modelagem Computacional


    This paper describes a computational program for simulating one-speed neutron transport problems with isotropic scattering in one-dimensional Cartesian geometry. After the physical modelling comes the mathematical modelling of the physical problem for simulation of the neutron distribution. The mathematical modelling uses the linearized Boltzmann equation, which represents a balance between the production and loss of particles. The discrete ordinates (S{sub N}) formulation consists of the discretization of the angular variables in N directions (discrete ordinates) and the use of a set of angular quadratures for the approximation of the integral scattering source terms. The S{sub N} equations are solved numerically. This work describes three numerical methods: diamond difference, step and characteristic step. The paper also presents numerical results to illustrate the efficiency of the developed program.
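    Of the three schemes named, diamond difference is the classic one. A hypothetical sweep in a single positive direction over a uniform slab might look like this (a sketch for illustration, not the authors' program):

```python
import math

def dd_sweep(psi_in, mu, sigma_t, q, dx, ncells):
    """Diamond-difference transport sweep in one positive direction mu across
    a uniform slab. Cell balance: mu*(out - in)/dx + sigma_t*avg = q, closed
    with the diamond relation avg = (in + out)/2. Returns (cell averages,
    exit flux)."""
    avgs = []
    psi = psi_in
    a = mu / dx
    for _ in range(ncells):
        psi_out = ((a - 0.5 * sigma_t) * psi + q) / (a + 0.5 * sigma_t)
        avgs.append(0.5 * (psi + psi_out))
        psi = psi_out
    return avgs, psi

# Pure absorber (q = 0): the exit flux should approach exp(-sigma_t * L).
_, exit_flux = dd_sweep(psi_in=1.0, mu=1.0, sigma_t=1.0, q=0.0,
                        dx=0.01, ncells=100)
```

    A full S{sub N} solver iterates such sweeps over all quadrature directions, updating the scattering source between iterations.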

  6. Application of Computer in Profit Calculation of Commercial Coal Mining

    Institute of Scientific and Technical Information of China (English)



    The profits of coal resources development attract close attention not only in China but also in some advanced developed countries, where they are an important research topic. This paper discusses how to use a computer to solve profit-related problems in commercial coal mining.

  7. Computer programs for the calculation of dual sting pitch and roll angles required for an articulated sting to obtain angles of attack and sideslip on wind-tunnel models (United States)

    Peterson, John B., Jr.


    Two programs were developed to calculate the pitch and roll position of the conventional sting drive and the pitch of a high-angle articulated sting, in order to position a wind tunnel model at the desired angle of attack and sideslip and as near as possible to the centerline of the tunnel. These programs account for the effects of sting offset angles, sting bending angles, and wind-tunnel stream flow angles. In addition, the second program incorporates inputs from on-board accelerometers that measure model pitch and roll with respect to gravity. The programs are presented, together with a description of their numerical operation and a definition of the variables used.

  8. Electrical installation calculations advanced

    CERN Document Server

    Kitcher, Christopher


    All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installatio

  9. Electrical installation calculations basic

    CERN Document Server

    Kitcher, Christopher


    All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. Fo

  10. Calculating correct compilers

    DEFF Research Database (Denmark)

    Bahr, Patrick; Hutton, Graham


    In this article, we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high-level semantics by systematic calculation, with all details of the implementation of the compilers falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language features and their combination, including arithmetic expressions, exceptions, state, various forms...
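    The correctness statement such calculations establish, roughly run(compile(e)) = eval(e), can be illustrated with a toy expression language (a Python sketch; the paper itself works in a Haskell-style equational calculus):

```python
# Expressions are either an int literal or ('add', e1, e2).
def evaluate(e):
    """Reference high-level semantics."""
    return e if isinstance(e, int) else evaluate(e[1]) + evaluate(e[2])

def compile_expr(e, code=None):
    """Compile to a simple stack machine: PUSH literals, ADD the top two."""
    code = [] if code is None else code
    if isinstance(e, int):
        code.append(('PUSH', e))
    else:
        compile_expr(e[1], code)
        compile_expr(e[2], code)
        code.append(('ADD',))
    return code

def run(code):
    """Execute stack-machine code and return the top of the stack."""
    stack = []
    for op in code:
        if op[0] == 'PUSH':
            stack.append(op[1])
        else:
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
    return stack[-1]

e = ('add', 1, ('add', 2, 3))
result = run(compile_expr(e))  # -> 6, agreeing with evaluate(e)
```

    In the calculational approach, the compiler and machine are not written first and verified afterwards; they are derived so that this agreement holds by construction.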

  11. Radar Signature Calculation Facility (United States)

    Federal Laboratory Consortium — FUNCTION: The calculation, analysis, and visualization of the spatially extended radar signatures of complex objects such as ships in a sea multipath environment and...

  12. Electronics Environmental Benefits Calculator (United States)

    U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...

  13. Non-commutative computer algebra and molecular computing

    Directory of Open Access Journals (Sweden)

    Svetlana Cojocaru


    Non-commutative calculations are considered from the molecular computing point of view. The main idea is that one can gain more advantage in using molecular computing for non-commutative computer algebra than for commutative computer algebra. The restrictions connected with coefficient handling in Gröbner basis calculations are investigated. Semigroup and group cases are considered as more appropriate. SAGBI basis constructions and possible implementations are discussed.

  15. Implementação computacional do modelo carga-fluxo de carga-fluxo de dipolo para cálculo e interpretação das intensidades do espectro infravermelho Computational implementation of the model charge-charge flux-dipole flux for calculation and analysis of infrared intensities

    Directory of Open Access Journals (Sweden)

    Thiago C. F. Gomes


    The first computational implementation that automates the procedures involved in the calculation of infrared intensities using the charge-charge flux-dipole flux model is presented. The atomic charges and dipoles from the Quantum Theory of Atoms in Molecules (QTAIM) model were programmed for Morphy98, Gaussian98 and Gaussian03 output files, whereas for the ChelpG parameters only the Gaussian programs are supported. Results of illustrative but new calculations for the water, ammonia and methane molecules at the MP2/6-311++G(3d,3p) theoretical level, using the ChelpG and QTAIM/Morphy charges and dipoles, are presented. These results show excellent agreement with analytical results obtained directly at the MP2/6-311++G(3d,3p) level of theory.

  16. Multigrid Methods in Electronic Structure Calculations

    CERN Document Server

    Briggs, E L; Bernholc, J


    We describe a set of techniques for performing large scale ab initio calculations using multigrid accelerations and a real-space grid as a basis. The multigrid methods provide effective convergence acceleration and preconditioning on all length scales, thereby permitting efficient calculations for ill-conditioned systems with long length scales or high energy cut-offs. We discuss specific implementations of multigrid and real-space algorithms for electronic structure calculations, including an efficient multigrid-accelerated solver for Kohn-Sham equations, compact yet accurate discretization schemes for the Kohn-Sham and Poisson equations, optimized pseudopotentials for real-space calculations, efficacious computation of ionic forces, and a complex-wavefunction implementation for arbitrary sampling of the Brillouin zone. A particular strength of a real-space multigrid approach is its ready adaptability to massively parallel computer architectures, and we present an implementation for the Cray-T3D with essen...

  17. Relaxation Method For Calculating Quantum Entanglement

    CERN Document Server

    Tucci, R R


    In a previous paper, we showed how entanglement of formation can be defined as a minimum of the quantum conditional mutual information (a.k.a. quantum conditional information transmission). In classical information theory, the Arimoto-Blahut method is one of the preferred methods for calculating extrema of mutual information. We present a new method akin to the Arimoto-Blahut method for calculating entanglement of formation. We also present several examples computed with a computer program called Causa Comun that implements the ideas of this paper.

  18. Calculating reliability measures for ordinal data. (United States)

    Gamsu, C V


    Establishing the reliability of measures taken by judges is important in both clinical and research work. Calculating the statistic of choice, the kappa coefficient, is unfortunately not a particularly quick and simple procedure. Two much-needed practical tools have been developed to overcome these difficulties: a comprehensive and easily understood guide to the manual calculation of the most complex form of the kappa coefficient, weighted kappa for ordinal data, has been written; and a computer program to run under CP/M, PC-DOS and MS-DOS has been developed. With simple modification, the program will also run on a Sinclair Spectrum home computer.
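    For readers without access to that program, weighted kappa for two raters can be computed directly. A sketch with quadratic weights, one common choice for ordinal data (the paper's own weighting scheme may differ):

```python
def weighted_kappa(ratings_a, ratings_b, categories):
    """Cohen's weighted kappa for two raters over ordered categories,
    using quadratic disagreement weights."""
    k = len(categories)
    index = {c: i for i, c in enumerate(categories)}
    n = len(ratings_a)
    # Observed joint proportions.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(ratings_a, ratings_b):
        obs[index[a]][index[b]] += 1.0 / n
    # Marginal proportions give the chance-expected table.
    row = [sum(obs[i]) for i in range(k)]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    # Quadratic weights penalise distant disagreements more heavily.
    w = [[(i - j) ** 2 / (k - 1) ** 2 for j in range(k)] for i in range(k)]
    d_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    d_exp = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return 1.0 - d_obs / d_exp

a = [1, 2, 3, 2, 1, 3, 2]
b = [1, 2, 2, 2, 1, 3, 3]
kappa = weighted_kappa(a, b, [1, 2, 3])
```

    Perfect agreement yields 1.0; agreement no better than chance yields 0.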

  19. Flow calculation in a bulb turbine

    Energy Technology Data Exchange (ETDEWEB)

    Goede, E.; Pestalozzi, J.


    In recent years remarkable progress has been made in the field of computational fluid dynamics. When reading the relevant literature, the impression may sometimes arise that most of the problems in this field have already been solved. Upon studying the matter more deeply, however, it is apparent that some questions still remain unanswered. The use of the quasi-3D (Q3D) computational method for calculating the flow in a bulb hydraulic turbine is described.

  20. Computerized calculation of material balances in carbonization

    Energy Technology Data Exchange (ETDEWEB)

    Chistyakov, A.M.


    Charge formulations and carbonisation schedules are described by empirical formulae used to calculate the yield of coking products. An algorithm is proposed for calculating the material balance, and associated computer program. The program can be written in conventional languages, e.g. Fortran, Algol etc. The information obtained can be used for on-line assessment of the effects of charge composition and properties on the coke and by-products yields, as well as the effects of the carbonisation conditions.

  1. Calculating Cumulative Binomial-Distribution Probabilities (United States)

    Scheuer, Ernest M.; Bowerman, Paul N.


    Cumulative-binomial computer program, CUMBIN, one of set of three programs, calculates cumulative binomial probability distributions for arbitrary inputs. CUMBIN, NEWTONP (NPO-17556), and CROSSER (NPO-17557), used independently of one another. Reliabilities and availabilities of k-out-of-n systems analyzed. Used by statisticians and users of statistical procedures, test planners, designers, and numerical analysts. Used for calculations of reliability and availability. Program written in C.
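    The recurrence behind such cumulative-binomial programs is short. A sketch (hypothetical, not the CUMBIN source; valid for 0 < p < 1), using a running-product update of successive terms to avoid factorial overflow:

```python
def binomial_cdf(k, n, p):
    """P(X <= k) for X ~ Binomial(n, p), 0 < p < 1, built term by term:
    each P(X = i) is the previous term times (n-i+1)/i * p/(1-p)."""
    if k < 0:
        return 0.0
    term = (1.0 - p) ** n   # P(X = 0)
    total = term
    for i in range(1, min(k, n) + 1):
        term *= (n - i + 1) / i * p / (1.0 - p)
        total += term
    return min(total, 1.0)

# k-out-of-n reliability: P(at least 2 of 3 components work), each p = 0.9.
p_sys = 1.0 - binomial_cdf(1, 3, 0.9)
```

    The k-out-of-n usage mirrors the reliability and availability analyses the abstract mentions.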

  2. Calculators and Polynomial Evaluation. (United States)

    Weaver, J. F.

    The intent of this paper is to suggest and illustrate how electronic hand-held calculators, especially non-programmable ones with limited data-storage capacity, can be used to advantage by students in one particular aspect of work with polynomial functions. The basic mathematical background upon which calculator application is built is summarized.…
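    One classic technique fitting the paper's setting of limited storage is Horner's rule (an assumption about the paper's content; the rule itself is standard): the polynomial is nested so that only one running value need be kept.

```python
def horner(coeffs, x):
    """Evaluate a polynomial at x by Horner's rule. Coefficients are given
    from the highest-degree term down; only one accumulator is needed,
    which suits a calculator with a single memory register."""
    result = 0.0
    for c in coeffs:
        result = result * x + c
    return result

# 2x^3 - 6x^2 + 2x - 1 at x = 3:  ((2*3 - 6)*3 + 2)*3 - 1
value = horner([2.0, -6.0, 2.0, -1.0], 3.0)  # -> 5.0
```

    On a non-programmable calculator the same nesting is carried out keystroke by keystroke, with no need to store intermediate powers of x.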

  3. Experimental evaluation of quantum computing elements (qubits) made of electrons trapped over a liquid helium film; Evaluation experimentale d'elements de calcul quantique (qubit) formes d'electrons pieges sur l'helium liquide

    Energy Technology Data Exchange (ETDEWEB)

    Rousseau, E


    An electron on helium presents a quantized energy spectrum. The interaction with the environment is considered sufficiently weak to allow the realization of a quantum bit (qubit) using the first two energy levels. The first stage in the realization of this qubit was to trap and control a single electron. This is carried out thanks to a set of micro-fabricated electrodes defining a potential well in which the electron is trapped. With such a sample we are able to trap and detect a variable number of electrons, from one to around twenty. This then allowed us to study the static behaviour of a small number of electrons in a trap. They are expected to crystallize and form structures called Wigner molecules. Such molecules have not yet been observed with electrons above helium. Our results bring circumstantial evidence for Wigner crystallization. We then sought to characterize the qubit more precisely, aiming at a projective reading (depending on the state of the qubit) and a measurement of the relaxation time. The results were obtained by exciting the electron with an incoherent electric field; a clean measurement of the relaxation time would require a coherent electric field. The conclusion thus cannot be final, but it would seem that the relaxation time is shorter than calculated theoretically. That is perhaps due to measuring the relaxation between the oscillating states in the trap rather than between the states of the qubit. (author)

  4. A computational model for reliability calculation of steam generators from defects in its tubes; Um modelo computacional para o calculo da confiabilidade de geradores de vapor a partir de defeitos em seus tubos

    Energy Technology Data Exchange (ETDEWEB)

    Rivero, Paulo C.M.; Melo, P.F. Frutuoso e [Universidade Federal, Rio de Janeiro, RJ (Brazil). Coordenacao dos Programas de Pos-graduacao de Engenharia. Programa de Engenharia Nuclear


    Nowadays, probabilistic approaches are employed for calculating the reliability of steam generators as a function of defects in their tubes, without any deterministic association with warranty assurance. Unfortunately, probabilistic models produce large failure values, contrary to the recommendation of the U.S. Code of Federal Regulations that failure probabilities be as small as possible. In this paper, we propose combining the deterministic methodology with the probabilistic one. At first, the failure probability evaluation of steam generators follows a probabilistic methodology: to find the failure probability, critical cracks, obtained from Monte Carlo simulations, are limited to lengths in the interval defined by their lower bound and the plugging limit, so as to obtain a failure probability of at most 1%. The distribution employed for modeling the observed (measured) cracks considers the same interval. Any length outside the mentioned interval is not considered for the probability evaluation: it is handled by the deterministic model. The deterministic approach is to plug the tube when any anomalous crack is detected in it. Such a crack is an observed one placed in the third region of the plot of the logarithmic time derivative of crack length versus the mode I stress intensity factor, while for normal cracks the plugging of tubes occurs in the second region of that plot, if they are dangerous, of course, considering their random evolution. A methodology for identifying anomalous cracks is also presented. (author)

  5. Calcul des efforts de deuxième ordre à très haute fréquence sur des plates-formes à lignes tendues Computing High-Frequency Second Order Loads on Tension Leg Platforms

    Directory of Open Access Journals (Sweden)

    Chen X.


    The problem considered here is the evaluation of second-order sum-frequency exciting loads (i.e. taking place at the pairwise sums of the wave frequencies) on tension leg platforms. These loads are held responsible for the resonant behavior (in roll, pitch and heave) observed during basin tests and could significantly reduce the fatigue life of the tendons. Results are first presented for a simplified structure consisting of 4 vertical cylinders resting on the sea bed. The interest of this geometry is that all the computations can be carried through quasi-analytically. The results illustrate the high degree of interaction between the columns and the slow decay of the second-order diffraction potential with depth. Results are then presented for an actual platform, the Snorre TLP. Tension Leg Platforms (TLPs) are now regarded as a promising technology for the development of deep offshore fields. As the water depth increases, however, their natural periods of heave, roll and pitch tend to increase as well (roughly to the one-half power), and it is not yet clear what the maximum permissible values for these natural periods can be. For the Snorre TLP, for instance, they are only about 2.5 seconds, which seems to be sufficiently low since there is very limited free wave energy at such periods. Model tests, however, have shown some resonant response in sea states with peak periods of about 5 seconds. Often referred to as "springing", this resonant motion can severely affect the fatigue life of tethers and increase their design loads. In order to calculate this springing motion at the design stage, it is necessary to identify and evaluate both the exciting loads and the mechanisms of energy dissipation. With the help of the French Norwegian Foundation a joint effort was

  6. Parallel solutions of correlation dimension calculation

    Institute of Scientific and Technical Information of China (English)


The calculation of the correlation dimension is a key problem in the study of fractals. The standard algorithm requires O(N²) computations. Previous improvements reduce redundant computation sequentially, and only when many phase spaces of different dimensions are involved, which limits both their range of application and the speedup they achieve. This paper presents two fast parallel algorithms: an O(N²/p + log p)-time p-processor PRAM algorithm and an O(N²/p)-time p-processor LARPBS algorithm. Analysis and numerical results indicate that the parallel algorithms achieve efficient speedup over the sequential ones. Compared with the PRAM algorithm, the LARPBS algorithm is practical, optimally scalable and cost-optimal.
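The O(N²) baseline the abstract refers to is the standard correlation-sum computation (Grassberger–Procaccia). As an illustration only, here is a minimal serial Python sketch; the function and variable names are hypothetical, and this is the sequential algorithm, not the paper's PRAM/LARPBS parallelizations:

```python
import numpy as np

def correlation_sum(points, r):
    """Standard O(N^2) correlation sum C(r): the fraction of point
    pairs lying closer together than radius r."""
    n = len(points)
    count = 0
    for i in range(n):
        for j in range(i + 1, n):
            if np.linalg.norm(points[i] - points[j]) < r:
                count += 1
    return 2.0 * count / (n * (n - 1))

# The correlation dimension is estimated as the slope of log C(r)
# versus log r over a range of small radii.
rng = np.random.default_rng(0)
pts = rng.random((200, 2))                 # points filling a 2-D square
radii = np.array([0.05, 0.1, 0.2])
sums = np.array([correlation_sum(pts, r) for r in radii])
slope = np.polyfit(np.log(radii), np.log(sums), 1)[0]
print(round(slope, 1))                     # close to 2 for a plane-filling set
```

Every pair is examined once, which is exactly the O(N²) cost that the parallel algorithms distribute over p processors.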

  7. Automatic Calculation of Dimension Chains in AutoCAD

    Institute of Scientific and Technical Information of China (English)


In the course of mechanical part design, process planning and assembly design, we often have to establish and analyse a dimension chain. Traditionally, a dimension chain is established and calculated manually. With the wide application of computers in the field of mechanical design and manufacture, people began to use computers to acquire and calculate dimension chains automatically. In reported work, a dimension chain can be established and calculated automatically. However, dimension text value...
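For illustration, the arithmetic (worst-case) calculation for the closing link of a dimension chain can be sketched as follows; the function name and the example chain are hypothetical, and CAD-integrated work such as that described here also automates the extraction of the links themselves:

```python
def chain_closing_link(increasing, decreasing):
    """Worst-case (arithmetic) dimension-chain calculation.

    Each link is (nominal, tolerance). The closing link's nominal size is
    the sum of the increasing links minus the sum of the decreasing links;
    its worst-case tolerance is the sum of all link tolerances."""
    nominal = sum(n for n, _ in increasing) - sum(n for n, _ in decreasing)
    tolerance = sum(t for _, t in increasing) + sum(t for _, t in decreasing)
    return nominal, tolerance

# Hypothetical chain: an overall length of 100 +/- 0.1 minus two segments.
nominal, tol = chain_closing_link(
    increasing=[(100.0, 0.1)],
    decreasing=[(40.0, 0.05), (35.0, 0.05)],
)
print(nominal, round(tol, 3))  # 25.0 0.2
```

The worst-case rule is the simplest convention; statistical (root-sum-square) stacking would replace the tolerance sum with a quadrature sum.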

  8. Effectively calculable quantum mechanics


    Bolotin, Arkady


    According to mathematical constructivism, a mathematical object can exist only if there is a way to compute (or "construct") it; so, what is non-computable is non-constructive. In the example of the quantum model, whose Fock states are associated with Fibonacci numbers, this paper shows that the mathematical formalism of quantum mechanics is non-constructive since it permits an undecidable (or effectively impossible) subset of Hilbert space. On the other hand, as it is argued in the paper, if...

  9. Interval arithmetic in calculations (United States)

    Bairbekova, Gaziza; Mazakov, Talgat; Djomartova, Sholpan; Nugmanova, Salima


Interval arithmetic is the mathematical structure which, for real intervals, defines operations analogous to ordinary arithmetic ones. This field of mathematics is also called interval analysis or interval calculations. The given mathematical model is convenient for investigating various applied objects: quantities whose approximate values are known; quantities obtained during calculations whose values are not exact because of rounding errors; and random quantities. As a whole, the idea of interval calculations is the use of intervals as basic data objects. In this paper, we considered the definition of interval mathematics, investigated its properties, proved a theorem, and showed the efficiency of the new interval arithmetic. Besides, we briefly reviewed the works devoted to interval analysis and observed basic tendencies in the development of interval analysis and interval calculations.
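The basic idea of using intervals as data objects can be sketched in a few lines of Python (illustrative only; the class and operator choices are ours, not the paper's):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Interval:
    """A closed real interval [lo, hi] with the usual interval operations."""
    lo: float
    hi: float

    def __add__(self, other):
        return Interval(self.lo + other.lo, self.hi + other.hi)

    def __sub__(self, other):
        # Subtraction pairs opposite endpoints.
        return Interval(self.lo - other.hi, self.hi - other.lo)

    def __mul__(self, other):
        # The product range is spanned by the four endpoint products.
        p = (self.lo * other.lo, self.lo * other.hi,
             self.hi * other.lo, self.hi * other.hi)
        return Interval(min(p), max(p))

a = Interval(1.0, 2.0)
b = Interval(-1.0, 3.0)
print(a + b)   # Interval(lo=0.0, hi=5.0)
print(a * b)   # Interval(lo=-2.0, hi=6.0)
```

Each operation returns an interval guaranteed to enclose every result obtainable from points of the operand intervals, which is what makes rounding-error and uncertainty tracking possible.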

  10. Unit Cost Compendium Calculations (United States)

    U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...

  11. Calculativeness and trust

    DEFF Research Database (Denmark)

    Frederiksen, Morten


Williamson’s characterisation of calculativeness as inimical to trust contradicts most sociological trust research. However, a similar argument is found within trust phenomenology. This paper re-investigates Williamson’s argument from the perspective of Løgstrup’s phenomenological theory of trust....... Contrary to Williamson, however, Løgstrup’s contention is that trust, not calculativeness, is the default attitude, and only when suspicion is awoken does trust falter. The paper argues that while Williamson’s distinction between calculativeness and trust is supported by phenomenology, the analysis needs...... to take actual subjective experience into consideration. It points out that, first, Løgstrup places trust alongside calculativeness as a different mode of engaging in social interaction, rather than conceiving of trust as a state or the outcome of a decision-making process. Secondly, the analysis must take...


    Institute of Scientific and Technical Information of China (English)



This paper presents a procedure for calculating the effective discharge for rivers with alluvial channels. An alluvial river adjusts the bankfull shape and dimensions of its channel to the wide range of flows that mobilize the boundary sediments. It has been shown that time-averaged river morphology is adjusted to the flow that, over a prolonged period, transports most sediment. This is termed the effective discharge. The effective discharge may be calculated provided that the necessary data are available or can be synthesized. The procedure for effective discharge calculation presented here is designed to have general applicability, have the capability to be applied consistently, and represent the effects of the physical processes responsible for determining the channel dimensions. An example of the necessary calculations and applications of the effective discharge concept is presented.
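A common way to implement such a procedure is to partition the flow record into classes, weight each class frequency by a sediment-transport rating curve, and pick the class that moves the most sediment. The Python sketch below uses an assumed power-law rating curve Qs = a·Q^b and synthetic data; the names, coefficients and class count are hypothetical, not the paper's:

```python
import numpy as np

def effective_discharge(daily_flows, a=0.01, b=1.5, n_classes=25):
    """Effective-discharge sketch: the flow class that transports the most
    sediment over the record. Sediment load per class = class frequency x
    transport rate from an assumed rating curve Qs = a * Q**b."""
    edges = np.linspace(daily_flows.min(), daily_flows.max(), n_classes + 1)
    freq, _ = np.histogram(daily_flows, bins=edges)
    mid = 0.5 * (edges[:-1] + edges[1:])   # class midpoints
    load = freq * a * mid**b               # sediment moved per class
    return mid[np.argmax(load)]

rng = np.random.default_rng(1)
flows = rng.lognormal(mean=3.0, sigma=0.8, size=3650)  # synthetic 10-yr record
q_eff = effective_discharge(flows)
print(round(q_eff, 1))
```

Because frequency decreases while transport capacity increases with discharge, the product typically peaks at a moderate flow, which is the physical content of the effective-discharge concept.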

  13. Current interruption transients calculation

    CERN Document Server

    Peelo, David F


Provides an original, detailed and practical description of current interruption transients, their origins, the circuits involved, and how they can be calculated. Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits. This book provides an original, detailed and practical description of current interruption transients, origins,

  14. Source and replica calculations

    Energy Technology Data Exchange (ETDEWEB)

    Whalen, P.P.


    The starting point of the Hiroshima-Nagasaki Dose Reevaluation Program is the energy and directional distributions of the prompt neutron and gamma-ray radiation emitted from the exploding bombs. A brief introduction to the neutron source calculations is presented. The development of our current understanding of the source problem is outlined. It is recommended that adjoint calculations be used to modify source spectra to resolve the neutron discrepancy problem.

  15. Computational modeling of the mathematical dummy of the Brazilian woman for calculations of internal dosimetry and ends of comparison of the fractions absorbed specific with the woman reference; Modelagem computacional do manequim matematico da mulher brasileira para calculos de dosimetria interna e para fins de comparacao das fracoes absorvidas especificas com a mulher referencia

    Energy Technology Data Exchange (ETDEWEB)

    Ximenes, Edmir


Tools for dosimetric calculations are of the utmost importance for the basic principles of radiological protection, not only in nuclear medicine, but also in other scientific calculations. In this work a mathematical model of the Brazilian woman is developed in order to be used as a basis for calculations of Specific Absorbed Fractions (SAFs) in internal organs and in the skeleton, in accord with the objectives of diagnosis or therapy in nuclear medicine. The model developed here is similar in form to that of Snyder, but modified to be more relevant to the case of the Brazilian woman. To do this, the formalism of the Monte Carlo method was used by means of the ALGAM-97 computational code. As a contribution to the objectives of this thesis, we developed the computational system cSAF (consultation for Specific Absorbed Fractions; cFAE in the Portuguese acronym), which furnishes several look-up facilities for the research user. The dialogue interface with the operator was planned following current practices in the utilization of event-oriented languages. This interface permits the user to navigate by means of the reference models, choose the source organ and the energy desired, and receive an answer through an efficient and intuitive dialogue. The system furnishes, in addition to the data referring to the Brazilian woman, data referring to the model of Snyder and to the model of the Brazilian man. The system makes available not only individual data for the SAFs of the three models, but also a comparison among them. (author)

  16. Calculation of Thermochemical Constants of Propellants

    Directory of Open Access Journals (Sweden)

    K. P. Rao


Full Text Available A method for the calculation of thermochemical constants and products of explosion of propellants from the knowledge of the molecular formulae and heats of formation of the ingredients is given. A computer programme in AUTOMATH-400 has been developed for the method. The results of applying the method to a number of propellants are given.

  17. On the calculation of Mossbauer isomer shift

    NARCIS (Netherlands)

    Filatov, Michael


    A quantum chemical computational scheme for the calculation of isomer shift in Mossbauer spectroscopy is suggested. Within the described scheme, the isomer shift is treated as a derivative of the total electronic energy with respect to the radius of a finite nucleus. The explicit use of a finite nuc

  18. Development of a computer program of fast calculation for the pre design of advanced nuclear fuel 10 x 10 for BWR type reactors; Desarrollo de un program de computo de calculo rapido para el prediseno de celdas de combustible nuclear avanzado 10 x 10 para reactores de agua en ebullicion

    Energy Technology Data Exchange (ETDEWEB)

    Perusquia, R.; Montes, J.L.; Ortiz, J.J. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail:


In the National Institute of Nuclear Research (ININ), a methodology has been developed to optimize the design of 10x10 fuel-assembly cells for boiling water reactors (BWRs). A linear calculation formula based on a coefficient matrix (of the rate of change of relative power with changes in U-235 enrichment) was proposed to estimate the relative pin powers of a cell. On this basis, the fast-calculation computer program named PreDiCeldas was developed. By means of a simple search algorithm, it minimizes the maximum local power peaking factor (LPPF) of the cell. This is achieved by varying the U-235 distribution inside the cell while keeping its average enrichment fixed. The accuracy in the estimation of the relative pin powers is of the order of 1.9% when compared with results of the 'best estimate' HELIOS code. With PreDiCeldas it was possible, in a minimal calculation time, to re-design a reference cell, reducing the beginning-of-life LPPF from 1.44 to 1.31. The low-LPPF cell design is intended to allow cycles even longer than those currently reached in the BWRs of the Laguna Verde station. (Author)

  19. Good Practices in Free-energy Calculations (United States)

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher


As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices, are followed. For the most part, the theory upon which these good practices rely has been known for many years, but is often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations are reviewed, demonstrating that, at little to no additional cost, free-energy estimates could be markedly improved and bounded by meaningful error estimates. In free-energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, can markedly improve the efficiency and accuracy of free-energy calculations without incurring any additional computational expense.
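As a concrete example of the free-energy perturbation (exponential averaging) estimator the abstract refers to, the following Python sketch computes the free-energy difference between two harmonic wells, where the exact answer is known analytically; this is a toy model of ours, not an example from the paper:

```python
import numpy as np

# Zwanzig's free-energy perturbation identity (with beta = 1):
#   dF = -ln < exp(-dU) >_0,  dU = U1 - U0, sampled in state 0.
# Toy system: U0(x) = x^2/2 and U1(x) = x^2, for which the exact
# answer is dF = (1/2) ln 2 ~= 0.3466.
rng = np.random.default_rng(42)
x = rng.normal(0.0, 1.0, size=200_000)  # Boltzmann samples of state 0
dU = x**2 / 2.0                         # U1(x) - U0(x)
dF_fep = -np.log(np.mean(np.exp(-dU)))
print(round(dF_fep, 2))                 # 0.35, vs exact 0.5*ln 2
```

Monitoring the spread of the dU distribution and running the transformation in both directions, as the abstract recommends, diagnoses exactly when this estimator becomes unreliable.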

  20. Recent computational chemistry (United States)

    Onishi, Taku


Thanks to earlier developments in quantum theory and calculation methods, we can now investigate quantum phenomena in real materials and molecules, and design functional materials by computation. As limits and problems still exist in theory, cooperation between theory and computation is becoming more important in order to clarify unknown quantum mechanisms and discover more efficient functional materials; it is likely to become the next-generation standard. Finally, our theoretical methodology for boundary solids is introduced.

  1. Recent computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Taku [Department of Chemistry for Materials, and The Center of Ultimate Technology on nano-Electronics, Mie University (Japan); Center for Theoretical and Computational Chemistry, Department of Chemistry, University of Oslo (Norway)


Thanks to earlier developments in quantum theory and calculation methods, we can now investigate quantum phenomena in real materials and molecules, and design functional materials by computation. As limits and problems still exist in theory, cooperation between theory and computation is becoming more important in order to clarify unknown quantum mechanisms and discover more efficient functional materials; it is likely to become the next-generation standard. Finally, our theoretical methodology for boundary solids is introduced.

  2. Computing Borel's Regulator II

    CERN Document Server

    Choo, Zacky; Sánchez-García, Rubén J; Snaith, Victor P


    In our earlier article we described a power series formula for the Borel regulator evaluated on the odd-dimensional homology of the general linear group of a number field and, concentrating on dimension three for simplicity, described a computer algorithm which calculates the value to any chosen degree of accuracy. In this sequel we give an algorithm for the construction of the input homology classes and describe the results of one cyclotomic field computation.

  3. Numerical inductance calculations based on first principles. (United States)

    Shatz, Lisa F; Christensen, Craig W


    A method of calculating inductances based on first principles is presented, which has the advantage over the more popular simulators in that fundamental formulas are explicitly used so that a deeper understanding of the inductance calculation is obtained with no need for explicit discretization of the inductor. It also has the advantage over the traditional method of formulas or table lookups in that it can be used for a wider range of configurations. It relies on the use of fast computers with a sophisticated mathematical computing language such as Mathematica to perform the required integration numerically so that the researcher can focus on the physics of the inductance calculation and not on the numerical integration.
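As an illustration of the approach (a first-principles formula evaluated by brute-force numerical integration), the Neumann double integral for the mutual inductance of two coaxial circular loops can be evaluated directly. This Python/NumPy sketch is ours, not the authors' Mathematica code, and the loop geometry is hypothetical:

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability (H/m)

def mutual_inductance_coaxial(r1, r2, d, n=400):
    """Mutual inductance of two coaxial circular loops by direct numerical
    integration of the Neumann double integral (first principles, with no
    simulator and no table lookup)."""
    phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
    dphi = 2.0 * np.pi / n
    delta = phi[:, None] - phi[None, :]          # angle between line elements
    sep = np.sqrt(r1**2 + r2**2 - 2.0 * r1 * r2 * np.cos(delta) + d**2)
    integrand = np.cos(delta) / sep
    return MU0 * r1 * r2 / (4.0 * np.pi) * integrand.sum() * dphi**2

# Far-apart loops should match the magnetic-dipole approximation.
r1, r2, d = 0.05, 0.05, 1.0
m_num = mutual_inductance_coaxial(r1, r2, d)
m_dip = MU0 * np.pi * r1**2 * r2**2 / (2.0 * d**3)
print(round(m_num / m_dip, 3))  # close to 1.0
```

Checking the numerical result against a known limiting case (here the dipole approximation for well-separated loops) is the kind of sanity check that keeps the focus on the physics rather than the numerics.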

  4. Parallel scalability of Hartree–Fock calculations

    Energy Technology Data Exchange (ETDEWEB)

    Chow, Edmond, E-mail:; Liu, Xing [School of Computational Science and Engineering, Georgia Institute of Technology, Atlanta, Georgia 30332-0765 (United States); Smelyanskiy, Mikhail; Hammond, Jeff R. [Parallel Computing Lab, Intel Corporation, Santa Clara, California 95054-1549 (United States)


    Quantum chemistry is increasingly performed using large cluster computers consisting of multiple interconnected nodes. For a fixed molecular problem, the efficiency of a calculation usually decreases as more nodes are used, due to the cost of communication between the nodes. This paper empirically investigates the parallel scalability of Hartree–Fock calculations. The construction of the Fock matrix and the density matrix calculation are analyzed separately. For the former, we use a parallelization of Fock matrix construction based on a static partitioning of work followed by a work stealing phase. For the latter, we use density matrix purification from the linear scaling methods literature, but without using sparsity. When using large numbers of nodes for moderately sized problems, density matrix computations are network-bandwidth bound, making purification methods potentially faster than eigendecomposition methods.
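The density-matrix purification mentioned above can be illustrated with a dense-matrix McWeeny iteration. This is a sketch under our own simplifications (grand-canonical form with a known chemical potential, no sparsity, a toy 4x4 matrix), not the paper's actual code:

```python
import numpy as np

def mcweeny_density(F, mu, iters=60):
    """Density matrix by McWeeny purification (grand-canonical form).

    Maps Fock eigenvalues below the chemical potential mu toward
    occupation 1 and those above toward 0 via D <- 3D^2 - 2D^3."""
    n = F.shape[0]
    alpha = np.linalg.norm(F - mu * np.eye(n), 2)  # spectral bound
    D = 0.5 * (np.eye(n) - (F - mu * np.eye(n)) / alpha)  # eigenvalues in [0,1]
    for _ in range(iters):
        D2 = D @ D
        D = 3.0 * D2 - 2.0 * D @ D2
    return D

# Toy symmetric "Fock" matrix; occupy the 2 lowest of 4 levels.
rng = np.random.default_rng(3)
A = rng.normal(size=(4, 4))
F = 0.5 * (A + A.T)
e = np.linalg.eigvalsh(F)
mu = 0.5 * (e[1] + e[2])                    # midpoint of the HOMO-LUMO gap
D = mcweeny_density(F, mu)
print(round(np.trace(D), 6))                # 2.0 = number of electrons
print(round(np.linalg.norm(D @ D - D), 6))  # ~0: D is idempotent
```

The iteration uses only matrix multiplications, which is why, at large node counts, its cost is dominated by network bandwidth rather than by the diagonalization-style computation of eigendecomposition methods.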

  5. Calculations in apheresis. (United States)

    Neyrinck, Marleen M; Vrielink, Hans


It is important to work smoothly with your apheresis equipment when you are an apheresis nurse. Attention should be paid to your donor/patient and to the product you are collecting. It adds value to your work when you are able to calculate the efficiency of your procedures. You must be capable of obtaining an optimal product without putting your donor/patient at risk. Not only the total blood volume (TBV) of the donor/patient plays an important role; specific blood values also influence the apheresis procedure. Therefore, not all donors/patients should be addressed in the same way. Calculation of the TBV, the extracorporeal volume, and the total plasma volume is needed. Many issues determine your procedure time. By knowing the collection efficiency (CE) of your apheresis machine, you can calculate the number of blood volumes to be processed to obtain specific results, and whether you need one procedure or more. It is not always necessary to process 3x the TBV; in this way, keeping the donor/patient connected to the apheresis device needlessly long can be avoided. By calculating the CE of each device, you can also compare the various devices for quality control purposes, and likewise the nurses/operators.
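For illustration, two of the calculations described (total blood volume by Nadler's formula, and the blood volume to process for a target yield given the machine's collection efficiency) can be sketched in Python. The donor numbers are made up, and the yield model is deliberately simplified:

```python
def nadler_tbv(height_m, weight_kg, male=True):
    """Total blood volume (litres) by Nadler's formula."""
    if male:
        return 0.3669 * height_m**3 + 0.03219 * weight_kg + 0.6041
    return 0.3561 * height_m**3 + 0.03308 * weight_kg + 0.1833

def volume_to_process(target_yield, precount, ce):
    """Blood volume (litres) to process for a target cell yield, given the
    pre-procedure count (cells per litre) and the device's collection
    efficiency CE (0-1). Simplified model: ignores recruitment and the
    fall in circulating count during the run."""
    return target_yield / (precount * ce)

tbv = nadler_tbv(1.80, 80.0, male=True)
print(round(tbv, 2))  # ~5.3 L for a hypothetical 1.80 m, 80 kg male donor

# e.g. 4e11 platelets from a donor at 250e9 platelets/L with CE = 0.55:
v = volume_to_process(4e11, 250e9, 0.55)
print(round(v / tbv, 1))  # blood volumes to process (~0.5 here)
```

Comparing the target volume with the TBV is what tells you whether one procedure suffices, which is the point the abstract makes about not processing 3x the TBV by default.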

  6. Computer Music (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  7. INVAP's Nuclear Calculation System

    Directory of Open Access Journals (Sweden)

    Ignacio Mochi


Full Text Available Since its origins in 1976, INVAP has been in continuous development of the calculation system used for the design and optimization of nuclear reactors. The calculation codes have been polished and enhanced with new capabilities as they became needed or useful for the new challenges that the market imposed. The current state of the code packages enables INVAP to design nuclear installations with complex geometries using a set of easy-to-use input files that minimize user errors due to confusion or misinterpretation. A set of intuitive graphic postprocessors has also been developed, providing a fast and complete visualization tool for the parameters obtained in the calculations. The capabilities and general characteristics of this deterministic software package are presented throughout the paper, including several examples of its recent application.

  8. Calculating Quenching Weights

    CERN Document Server

    Salgado, C A; Salgado, Carlos A.; Wiedemann, Urs Achim


We calculate the probability ("quenching weight") that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov "BDMPS-Z" formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus colli...


    Directory of Open Access Journals (Sweden)

    Malte BETHKE


Full Text Available A food calculator for elderly people was elaborated by Centiv GmbH, an active partner in the European FP7 OPTIFEL Project, based on the functional requirement specifications and the existing recommendations for daily allowances across Europe; these data were synthesized and used to set targets in amounts per portion. The OPTIFEL Personalised Nutritional Calculator is the only available online tool which allows the nutrients required by elderly people (65+) to be determined on a personalised level. It has been developed mainly to support nursing homes in providing the best possible personalised, nutrient-enriched food to their patients. The European FP7 OPTIFEL project "Optimised Food Products for Elderly Populations" aims to develop innovative products based on vegetables and fruits for elderly populations to increase the length of independence. The OPTIFEL Personalised Nutritional Calculator is recommended to be used by nursing homes.

  10. EOSPEC: a complementary toolbox for MODTRAN calculations (United States)

    Dion, Denis


For more than a decade, Defence Research and Development Canada (DRDC) has been developing a library of computer models for the calculation of atmospheric effects on EO-IR sensor performance. The library, called EOSPEC-LIB (EO-IR Sensor PErformance Computation LIBrary), has been designed as a complement to MODTRAN, the radiative transfer code developed by the Air Force Research Laboratory and Spectral Sciences Inc. in the USA. The library comprises modules for the definition of atmospheric conditions, including aerosols, and provides modules for the calculation of turbulence and fine refraction effects. SMART (Suite for Multi-resolution Atmospheric Radiative Transfer), a key component of EOSPEC, allows one to perform fast computations of transmittances and radiances using MODTRAN through a wide-band correlated-k computational approach. In its most recent version, EOSPEC includes a MODTRAN toolbox whose functions help generate, in a format compatible with MODTRAN 5 and 6, atmospheric and aerosol profiles, user-defined refracted optical paths and inputs for configuring the MODTRAN sea radiance (BRDF) model. The paper gives an overall description of the EOSPEC features and capabilities. EOSPEC provides augmented capabilities for computations in the lower atmosphere and in maritime environments.

  11. Spin Resonance Strength Calculations (United States)

    Courant, E. D.


    In calculating the strengths of depolarizing resonances it may be convenient to reformulate the equations of spin motion in a coordinate system based on the actual trajectory of the particle, as introduced by Kondratenko, rather than the conventional one based on a reference orbit. It is shown that resonance strengths calculated by the conventional and the revised formalisms are identical. Resonances induced by radiofrequency dipoles or solenoids are also treated; with rf dipoles it is essential to consider not only the direct effect of the dipole but also the contribution from oscillations induced by it.

  12. Spin resonance strength calculations

    Energy Technology Data Exchange (ETDEWEB)



    In calculating the strengths of depolarizing resonances it may be convenient to reformulate the equations of spin motion in a coordinate system based on the actual trajectory of the particle, as introduced by Kondratenko, rather than the conventional one based on a reference orbit. It is shown that resonance strengths calculated by the conventional and the revised formalisms are identical. Resonances induced by radiofrequency dipoles or solenoids are also treated; with rf dipoles it is essential to consider not only the direct effect of the dipole but also the contribution from oscillations induced by it.

  13. Environmental flow allocation and statistics calculator (United States)

    Konrad, Christopher P.


The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate a synthetic daily flow series from an input file, based on rules for allocating and protecting streamflow, and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files and writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
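The allocate-then-analyze idea can be illustrated with a toy allocation rule in Python; this rule is ours for illustration, and EFASC's actual allocation and protection rules, and its Excel/VBA implementation, are more involved:

```python
import numpy as np

def allocate_with_protection(flows, protected, demand):
    """Synthetic flow series under a simple allocation rule: water may be
    withdrawn up to `demand`, but never below the `protected` instream flow
    (illustrative rule, not EFASC's)."""
    available = np.clip(flows - protected, 0.0, None)  # withdrawable water
    withdrawal = np.minimum(available, demand)
    return flows - withdrawal

flows = np.array([10.0, 8.0, 3.0, 1.0, 6.0])  # daily flows
managed = allocate_with_protection(flows, protected=2.0, demand=5.0)
print(managed.tolist())  # [5.0, 3.0, 2.0, 1.0, 2.0]
```

Statistics computed on `managed` versus `flows` then quantify how the allocation rule alters the flow regime, which is the comparison the program is built to support.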

  14. Digital computers in action

    CERN Document Server

    Booth, A D


    Digital Computers in Action is an introduction to the basics of digital computers as well as their programming and various applications in fields such as mathematics, science, engineering, economics, medicine, and law. Other topics include engineering automation, process control, special purpose games-playing devices, machine translation and mechanized linguistics, and information retrieval. This book consists of 14 chapters and begins by discussing the history of computers, from the idea of performing complex arithmetical calculations to the emergence of a modern view of the structure of a ge

  15. Computing with Harmonic Functions


    Axler, Sheldon


    This document is the manual for a free Mathematica package for computing with harmonic functions. This package allows the user to make calculations that would take a prohibitive amount of time if done without a computer. For example, the Poisson integral of any polynomial can be computed exactly. This software can find exact solutions to Dirichlet, Neumann, and biDirichlet problems in R^n with polynomial data on balls, ellipsoids, and annular regions. It can also find bases for spaces of sphe...

  16. Storage to Energy Calculator

    NARCIS (Netherlands)

    Taal, A.; Makkes, M.X.; Grosso, P.


    Computational and storage tasks can nowadays be offloaded among data centers, in order to optimize costs and or performance. We set out to investigate what are the environmental effects, namely the total CO2 emission, of such offloading. We built models for the various components present in these of

  17. Curvature calculations with GEOCALC

    Energy Technology Data Exchange (ETDEWEB)

    Moussiaux, A.; Tombal, P.


A new method for calculating the curvature tensor has recently been proposed by D. Hestenes. This method is a particular application of geometric calculus, which has been implemented in an algebraic programming language in the form of a package called GEOCALC. The authors show how to apply this package to the Schwarzschild case and discuss the different results.

  18. Haida Numbers and Calculation. (United States)

    Cogo, Robert

    Experienced traders in furs, blankets, and other goods, the Haidas of the 1700's had a well-developed decimal system for counting and calculating. Their units of linear measure included the foot, yard, and fathom, or six feet. This booklet lists the numbers from 1 to 20 in English and Haida; explains the Haida use of ten, hundred, and thousand…

  19. Daylight calculations in practice

    DEFF Research Database (Denmark)

    Iversen, Anne; Roy, Nicolas; Hvass, Mette;

    programs can give different results. This can be due to restrictions in the program itself and/or be due to the skills of the persons setting up the models. This is crucial as daylight calculations are used to document that the demands and recommendations to daylight levels outlined by building authorities...

  20. Dynamics Calculation of Spoke

    Institute of Scientific and Technical Information of China (English)


Compared with the elliptical cavity, the spoke cavity has many advantages, especially at low and medium beam energies, and it will be widely used in future superconducting accelerators. Based on the spoke cavity, we design and calculate an accelerator

  1. Computational invariant theory

    CERN Document Server

    Derksen, Harm


    This book is about the computational aspects of invariant theory. Of central interest is the question how the invariant ring of a given group action can be calculated. Algorithms for this purpose form the main pillars around which the book is built. There are two introductory chapters, one on Gröbner basis methods and one on the basic concepts of invariant theory, which prepare the ground for the algorithms. Then algorithms for computing invariants of finite and reductive groups are discussed. Particular emphasis lies on interrelations between structural properties of invariant rings and computational methods. Finally, the book contains a chapter on applications of invariant theory, covering fields as disparate as graph theory, coding theory, dynamical systems, and computer vision. The book is intended for postgraduate students as well as researchers in geometry, computer algebra, and, of course, invariant theory. The text is enriched with numerous explicit examples which illustrate the theory and should be ...

  2. Grid Computing

    Indian Academy of Sciences (India)


A computing grid interconnects resources such as high performance computers, scientific databases, and computer-controlled scientific instruments of cooperating organizations, each of which is autonomous. It precedes and is quite different from cloud computing, which provides computing resources by vendors to customers on demand. In this article, we describe the grid computing model and enumerate the major differences between grid and cloud computing.

  3. Computer Virus

    Institute of Scientific and Technical Information of China (English)



If you work with a computer, it is certain that you cannot avoid dealing with at least one computer virus. But how much do you know about it? Well, actually, a computer virus is not a biological one that causes illnesses to people. It is a kind of computer program

  4. Analog computing

    CERN Document Server

    Ulmann, Bernd


    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  5. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan


    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer's point of view, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture.

  6. Computational chemistry



    Computational chemistry has come of age. With significant strides in computer hardware and software over the last few decades, computational chemistry has achieved full partnership with theory and experiment as a tool for understanding and predicting the behavior of a broad range of chemical, physical, and biological phenomena. The Nobel Prize award to John Pople and Walter Kohn in 1998 highlighted the importance of these advances in computational chemistry. With massively parallel computers ...

  7. Computing Logarithms by Hand (United States)

    Reed, Cameron


    How can old-fashioned tables of logarithms be computed without technology? Today, of course, no practicing mathematician, scientist, or engineer would actually use logarithms to carry out a calculation, let alone worry about deriving them from scratch. But high school students may be curious about the process. This article develops a…
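One way to produce logarithms without technology beyond arithmetic (a sketch, not necessarily the method the article develops) is digit counting: since floor(log10(m)) is just the number of digits of m minus one, raising n to the power 10^k and counting digits yields the first k decimal digits of log10(n) exactly.

```python
# Sketch (not necessarily the article's method): computing log10 by
# exponentiation and digit counting, using only integer arithmetic.
# floor(log10(n**(10**k))) = floor(10**k * log10(n)), and
# floor(log10(m)) is simply len(str(m)) - 1 for a positive integer m.

def log10_by_counting(n: int, k: int) -> float:
    """Return log10(n) truncated to k decimal digits, for an integer n > 1."""
    power = n ** (10 ** k)          # a huge integer; Python handles it exactly
    digits = len(str(power)) - 1    # floor(log10(power))
    return digits / 10 ** k

print(log10_by_counting(2, 5))  # 0.30102
print(log10_by_counting(3, 4))  # 0.4771
```

Of course, by hand one would use repeated square roots or series instead of raising 2 to the 100000th power, but the digit-counting identity is the same.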

  8. Computer Technology for Industry (United States)


    Shell Oil Company used a COSMIC program called VISCEL to ensure the accuracy of the company's new computer code for analyzing polymers and chemical compounds. Shell reported that there were no other programs available that could provide the necessary calculations. Shell produces chemicals for plastic products used in the manufacture of automobiles, housewares, appliances, film, textiles, electronic equipment and furniture.

  9. Computer Series, 38. (United States)

    Moore, John W., Ed.


    Discusses the numerical solution of the one-dimensional Schrödinger equation. A PASCAL computer program for the Apple II which performs the calculations is available from the authors. Also discusses quantization and perturbation theory using microcomputers, indicating the benefits of adding a perturbation term to the harmonic oscillator as an…
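The original PASCAL source is not reproduced in the record; a minimal modern analogue of such a calculation, assuming the standard Numerov shooting method and the dimensionless harmonic oscillator (V = x²/2, ħ = m = ω = 1, exact ground-state energy 1/2), looks like this:

```python
# Hypothetical re-creation (the record's PASCAL program is not shown):
# solve -u''/2 + (x**2/2) u = E u by Numerov integration and shooting.

def shoot(E, xmax=6.0, h=0.004):
    """Integrate from the left; the sign of u(+xmax) flips across an eigenvalue."""
    n = int(round(2 * xmax / h))
    k2 = lambda x: 2.0 * (E - 0.5 * x * x)    # u'' = -k2(x) * u
    f = lambda x: 1.0 + h * h * k2(x) / 12.0  # Numerov weight factor
    u0, u1 = 0.0, 1e-10                       # tiny start in the forbidden region
    f0, f1 = f(-xmax), f(-xmax + h)
    for i in range(2, n + 1):
        f2 = f(-xmax + i * h)
        u2 = ((12.0 - 10.0 * f1) * u1 - f0 * u0) / f2
        u0, u1, f0, f1 = u1, u2, f1, f2
    return u1

# Bisect on the sign of the right-hand tail to bracket the ground state.
lo, hi = 0.3, 0.7
for _ in range(50):
    mid = 0.5 * (lo + hi)
    if shoot(mid) > 0.0:
        lo = mid
    else:
        hi = mid
E0 = 0.5 * (lo + hi)
print(round(E0, 4))  # 0.5
```

The perturbation-theory exercise mentioned in the abstract amounts to adding a small term to `k2` and watching the eigenvalue shift.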

  10. Radioprotection calculations for MEGAPIE. (United States)

    Zanini, L


    The MEGAwatt PIlot Experiment (MEGAPIE) liquid lead-bismuth spallation neutron source will commence operation in 2006 at the SINQ facility of the Paul Scherrer Institut. Such an innovative system presents radioprotection concerns peculiar to a liquid spallation target. Several radioprotection issues have been addressed and studied by means of the Monte Carlo transport code, FLUKA. The dose rates in the room above the target, where personnel access may be needed at times, from the activated lead-bismuth and from the volatile species produced were calculated. Results indicate that the dose rate level is of the order of 40 mSv h(-1) 2 h after shutdown, but it can be reduced below the mSv h(-1) level with slight modifications to the shielding. Neutron spectra and dose rates from neutron transport, of interest for possible damage to radiation sensitive components, have also been calculated.

  11. Comparing Implementations of a Calculator for Exact Real Number Computation

    Directory of Open Access Journals (Sweden)

    José Raymundo Marcial-Romero


    Full Text Available As one of the first theoretical programming languages for computation with real numbers, Real PCF proved impractical because of the parallel constructors it needs for computing basic functions. Later, LRT was proposed as a variant of Real PCF which avoids parallel constructors by introducing a non-deterministic constructor into the language. This article presents the implementation of a calculator for exact real number computation based on LRT and compares its effectiveness with a standard real-number application in an imperative programming language. Finally, the implementation is compared with a standard implementation of exact real number computation based on the signed-digit representation.
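The LRT calculator itself is not reproduced in the record; the flavor of exact real computation can be illustrated (with nested rational intervals rather than the signed-digit representation the paper compares against) in a few lines:

```python
# Illustration only (not the LRT calculator): an exact real can be handled as
# a shrinking rational interval that always brackets the value, with no
# floating-point rounding anywhere in the computation.
from fractions import Fraction

def sqrt2_interval(steps: int):
    """Exact bisection for sqrt(2): returns (lo, hi) with lo**2 < 2 < hi**2."""
    lo, hi = Fraction(1), Fraction(2)
    for _ in range(steps):
        mid = (lo + hi) / 2
        if mid * mid < 2:
            lo = mid
        else:
            hi = mid
    return lo, hi

lo, hi = sqrt2_interval(120)     # interval width is exactly 2**-120
assert lo * lo < 2 < hi * hi     # the bracket is exact -- no rounding error
print(float(lo))  # 1.4142135623730951
```

Real exact-real implementations stream digits lazily instead of fixing the precision up front, but the invariant (the answer always lies inside the current bracket) is the same.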

  12. Computer program /TURBLE/ for calculating velocities and streamlines in turbomachines (United States)

    Katsanis, T.; Mcnally, W. D.


    Program is used in design of turbomachinery blade rows, where fluid velocities in blade to blade passage must be obtained. TURBLE requires input data on blade geometry, meridional stream-channel geometry, total flow conditions, weight flow, and inlet and outlet flow angles.

  13. CACTUS (Calculator and Computer Technology User Service): Some Easter Mathematics (United States)

    Hyde, Hartley


    In the Western Gregorian Calendar, the date of Easter Sunday is defined as the Sunday following the ecclesiastical Full Moon that falls on or next after March 21. While the pattern of dates so defined usually repeats each 19 years, there is a 0.08 day difference between the cycles. More accurately, the system has a period of 70 499 183 lunations…
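The rule quoted above is exactly what the well-known anonymous Gregorian ("Meeus/Jones/Butcher") algorithm encodes with integer arithmetic alone:

```python
# Anonymous Gregorian (Meeus/Jones/Butcher) Easter algorithm: locates the
# ecclesiastical Full Moon and the following Sunday with pure integer math.

def easter(year: int):
    a = year % 19                       # position in the 19-year Metonic cycle
    b, c = divmod(year, 100)
    d, e = divmod(b, 4)
    f = (b + 8) // 25
    g = (b - f + 1) // 3
    h = (19 * a + b - d - g + 15) % 30  # epact-based Paschal Full Moon offset
    i, k = divmod(c, 4)
    l = (32 + 2 * e + 2 * i - h - k) % 7
    m = (a + 11 * h + 22 * l) // 451
    month, day = divmod(h + l - 7 * m + 114, 31)
    return month, day + 1               # (3, d) = March d; (4, d) = April d

print(easter(2024))  # (3, 31)
print(easter(2025))  # (4, 20)
```

The corrections `f`, `g` and `m` are precisely the adjustments for the small per-cycle drift the article mentions.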

  14. Efforts to transform computers reach milestone

    CERN Multimedia

    Johnson, G


    Scientists in San Jose, California, have performed the most complex calculation ever using a quantum computer: factoring the number 15. In contrast to the switches in conventional computers, which, although tiny, consist of billions of atoms, quantum computations are carried out by manipulating single atoms. The laws of quantum mechanics which govern these actions in fact mean that multiple computations could be done in parallel, which would drastically cut down the time needed to carry out very complex calculations.

  15. S-parameter uncertainty computations

    DEFF Research Database (Denmark)

    Vidkjær, Jens


    A method for computing uncertainties of measured s-parameters is presented. Unlike the specification software provided with network analyzers, the new method is capable of calculating the uncertainties of arbitrary s-parameter sets and instrument settings.

  16. VLSI Architectures for Computing DFT's (United States)

    Truong, T. K.; Chang, J. J.; Hsu, I. S.; Reed, I. S.; Pei, D. Y.


    Simplifications result from the use of residue Fermat number systems. A system of finite arithmetic over residue Fermat number systems enables calculation of the discrete Fourier transform (DFT) of a series of complex numbers with a reduced number of multiplications. Computer architectures based on this approach are suitable for the design of very-large-scale integrated (VLSI) circuits for computing DFT's. The general approach is not limited to DFT's; it is applicable to the decoding of error-correcting codes and other transform calculations. The system is readily implemented in VLSI.
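The VLSI architecture itself is not given in the record, but the underlying arithmetic idea, transforms over a residue system modulo a Fermat number, can be sketched with a toy number-theoretic transform over the Fermat prime F3 = 2^8 + 1 = 257 (all values and the modulus here are illustrative, not the paper's design):

```python
# Toy sketch: a number-theoretic transform modulo the Fermat prime
# F3 = 2**8 + 1 = 257. Since 3 is a primitive root mod 257,
# w = 3**(256 // N) mod 257 is a principal N-th root of unity for any N | 256,
# so the "DFT" becomes exact modular arithmetic with no rounding error.
P = 257

def ntt(x, inverse=False):
    n = len(x)                      # n must divide 256
    w = pow(3, 256 // n, P)
    if inverse:
        w = pow(w, -1, P)           # modular inverse (Python 3.8+)
    out = [sum(v * pow(w, i * j, P) for j, v in enumerate(x)) % P
           for i in range(n)]
    if inverse:
        n_inv = pow(n, -1, P)
        out = [v * n_inv % P for v in out]
    return out

data = [1, 2, 3, 4, 5, 6, 7, 0]
assert ntt(ntt(data), inverse=True) == data   # exact round trip
```

In hardware the appeal is that multiplication by powers of 2 modulo 2^b + 1 reduces to shifts; the naive O(N²) loop above is only for clarity.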

  17. Duality Computing in Quantum Computers

    Institute of Scientific and Technical Information of China (English)

    LONG Gui-Lu; LIU Yang


    In this letter, we propose a duality computing mode, which resembles the particle-wave duality property exhibited when a quantum system such as a quantum computer passes through a double-slit. In this mode, computing operations are not necessarily unitary. The duality mode provides a natural link between classical computing and quantum computing. In addition, the duality mode provides a new tool for quantum algorithm design.

  18. Three-phase Short Circuit Calculating Method Based on Pre-computed Surface for Doubly-fed Induction Generators Considering Low-voltage Ride Through

    Institute of Scientific and Technical Information of China (English)

    王强钢; 周念成; 李喜兰; 谢光莉


    The voltage support from conventional energy sources to doubly-fed induction generators (DFIGs) is analyzed by reducing the system outside the DFIG's point of coupling to its Thevenin equivalent. A short-circuit calculating method based on a pre-computed surface is proposed by developing the surface of short-circuit current as a function of the calculated reactance and the open-circuit voltage. Moreover, to capture the differences in the DFIG's transient behaviour with the crowbar activated and deactivated, the rotor current peak is used to judge activation, and the relation between crowbar activation and the post-fault terminal voltage is investigated. The short-circuit currents are then derived taking into account the rotor excitation control and the crowbar activation time delay. Finally, the pre-computed surfaces of short-circuit current at different times are established, and a procedure for DFIG short-circuit calculation is designed. The correctness of the proposed method is verified by simulation.

  19. Numerical calculation of impurity charge state distributions

    Energy Technology Data Exchange (ETDEWEB)

    Crume, E. C.; Arnurius, D. E.


    The numerical calculation of impurity charge state distributions using the computer program IMPDYN is discussed. The time-dependent corona atomic physics model used in the calculations is reviewed, and general and specific treatments of electron impact ionization and recombination are referenced. The complete program and two examples relating to tokamak plasmas are given on a microfiche so that a user may verify that his version of the program is working properly. In the discussion of the examples, the corona steady-state approximation is shown to have significant defects when the plasma environment, particularly the electron temperature, is changing rapidly.
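The IMPDYN program itself is on the microfiche, not in the record; the corona steady state it perturbs can, however, be sketched. In equilibrium each ionization flow q → q+1 is balanced by the recombination flow q+1 → q, so adjacent populations satisfy n(q+1)/n(q) = S(q)/α(q). The rate coefficients below are made up for illustration, not taken from the paper:

```python
# Corona-model steady state for a chain of charge states 0..Z.
# Illustrative (hypothetical) rate coefficients, not for any real ion:
# S[q] ionizes q -> q+1, alpha[q] recombines q+1 -> q.
S     = [1.0, 0.5, 0.2]
alpha = [0.3, 0.4, 0.6]

# Build the distribution from the flow-balance ratios, then normalize.
n = [1.0]
for q in range(len(S)):
    n.append(n[-1] * S[q] / alpha[q])
total = sum(n)
n = [v / total for v in n]

# Verify stationarity: dn_q/dt = 0 for every charge state.
def dndt(q):
    rate = 0.0
    if q > 0:
        rate += S[q - 1] * n[q - 1] - alpha[q - 1] * n[q]
    if q < len(S):
        rate += alpha[q] * n[q + 1] - S[q] * n[q]
    return rate

assert all(abs(dndt(q)) < 1e-12 for q in range(len(n)))
```

The defect the abstract notes appears exactly when this balance is assumed while the electron temperature (and hence S and α) changes faster than the populations can relax.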

  20. Computational manufacturing

    Institute of Scientific and Technical Information of China (English)


    This paper presents a general framework for computational manufacturing. The methodology of computational manufacturing aims at integrating computational geometry, machining principle, sensor information fusion, optimization, computational intelligence and virtual prototyping to solve problems of the modeling, reasoning, control, planning and scheduling of manufacturing processes and systems. There are three typical problems in computational manufacturing, i.e., scheduling (time-domain), geometric reasoning (space-domain) and decision-making (interaction between time-domain and space-domain). Some theoretical fundamentals of computational manufacturing are also discussed.

  1. Calculations in furnace technology

    CERN Document Server

    Davies, Clive; Hopkins, DW; Owen, WS


    Calculations in Furnace Technology presents the theoretical and practical aspects of furnace technology. This book provides information pertinent to the development, application, and efficiency of furnace technology. Organized into eight chapters, this book begins with an overview of the exothermic reactions that occur when carbon, hydrogen, and sulfur are burned to release the energy available in the fuel. This text then evaluates the efficiencies to measure the quantity of fuel used, of flue gases leaving the plant, of air entering, and the heat lost to the surroundings. Other chapters consi

  2. Acute calculous cholecystitis


    Angarita, Fernando A.; University Health Network; Acuña, Sergio A.; Mount Sinai Hospital; Jimenez, Carolina; University of Toronto; Garay, Javier; Pontificia Universidad Javeriana; Gómez, David; University of Toronto; Domínguez, Luis Carlos; Pontificia Universidad Javeriana


    Acute calculous cholecystitis is the most important cause of cholecystectomies worldwide. We review the physiopathology of the inflammatory process in this organ secondary to biliary tract obstruction, as well as its clinical manifestations, workup, and the treatment it requires.

  3. Linewidth calculations and simulations

    CERN Document Server

    Strandberg, Ingrid


    We are currently developing a new technique to further enhance the sensitivity of collinear laser spectroscopy in order to study the most exotic nuclides available at radioactive ion beam facilities, such as ISOLDE at CERN. The overall goal is to evaluate the feasibility of the new method. This report will focus on the determination of the expected linewidth (hence resolution) of this approach. Different effects which could lead to a broadening of the linewidth, e.g. the ions' energy spread and their trajectories inside the trap, are studied with theoretical calculations as well as simulations.

  4. Isogeometric analysis in electronic structure calculations

    CERN Document Server

    Cimrman, Robert; Kolman, Radek; Tůma, Miroslav; Vackář, Jiří


    In electronic structure calculations, various material properties can be obtained by means of computing the total energy of a system as well as derivatives of the total energy w.r.t. atomic positions. The derivatives, also known as Hellmann-Feynman forces, require, because of practical computational reasons, the discretized charge density and wave functions having continuous second derivatives in the whole solution domain. We describe an application of isogeometric analysis (IGA), a spline modification of finite element method (FEM), to achieve the required continuity. The novelty of our approach is in employing the technique of Bézier extraction to add the IGA capabilities to our FEM based code for ab-initio calculations of electronic states of non-periodic systems within the density-functional framework, built upon the open source finite element package SfePy. We compare FEM and IGA in benchmark problems and several numerical results are presented.

  5. Computer Algebra. (United States)

    Pavelle, Richard; And Others


    Describes the nature and use of computer algebra and its applications to various physical sciences. Includes diagrams illustrating, among others, a computer algebra system and flow chart of operation of the Euclidean algorithm. (SK)

  6. Quantum computing


    Li, Shu-Shen; Long, Gui-lu; Bai, Feng-Shan; Feng, Song-Lin; Zheng, Hou-Zhi


    Quantum computing is a quickly growing research field. This article introduces the basic concepts of quantum computing, recent developments in quantum searching, and decoherence in a possible quantum dot realization.

  7. Contextual Computing

    CERN Document Server

    Porzel, Robert


    This book uses the latest in knowledge representation and human-computer interaction to address the problem of contextual computing in artificial intelligence. It uses high-level context to solve some challenging problems in natural language understanding.

  8. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini


    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers and peripherals as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  9. Energy Dissipation in Quantum Computers

    CERN Document Server

    Granik, A


    A method is described for calculating the heat generated in a quantum computer due to the loss of quantum phase information. Amazingly enough, this heat generation can take place at zero temperature, and may explain why it is impossible to extract energy from vacuum fluctuations. Implications for optical computers and quantum cosmology are also briefly discussed.

  10. Computable models

    CERN Document Server

    Turner, Raymond


    Computational models can be found everywhere in present day science and engineering. In providing a logical framework and foundation for the specification and design of specification languages, Raymond Turner uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al

  11. Multilayer optical calculations

    CERN Document Server

    Byrnes, Steven J


    When light hits a multilayer planar stack, it is reflected, refracted, and absorbed in a way that can be derived from the Fresnel equations. The analysis is treated in many textbooks, and implemented in many software programs, but certain aspects of it are difficult to find explicitly and consistently worked out in the literature. Here, we derive the formulas underlying the transfer-matrix method of calculating the optical properties of these stacks, including oblique-angle incidence, absorption-vs-position profiles, and ellipsometry parameters. We discuss and explain some strange consequences of the formulas in the situation where the incident and/or final (semi-infinite) medium are absorptive, such as calculating $T>1$ in the absence of gain. We also discuss some implementation details like complex-plane branch cuts. Finally, we derive modified formulas for including one or more "incoherent" layers, i.e. very thick layers in which interference can be neglected. This document was written in conjunction with ...
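The full oblique-incidence machinery is in the paper; a minimal normal-incidence instance of the characteristic-matrix formulas (single non-absorbing film on a substrate) already shows the structure, with the quarter-wave antireflection condition n1 = sqrt(ns) as a built-in sanity check:

```python
# Minimal transfer-matrix sketch (normal incidence, one film on a substrate).
# M is the film's characteristic matrix; r follows from the stack admittances.
import cmath, math

def reflectance(n0, n1, ns, d, lam):
    delta = 2 * math.pi * n1 * d / lam          # phase thickness of the film
    m11 = cmath.cos(delta)
    m12 = 1j * cmath.sin(delta) / n1
    m21 = 1j * n1 * cmath.sin(delta)
    m22 = m11
    top = n0 * (m11 + m12 * ns) - (m21 + m22 * ns)
    bot = n0 * (m11 + m12 * ns) + (m21 + m22 * ns)
    return abs(top / bot) ** 2

# d = 0 reproduces the bare-interface Fresnel result ((n0-ns)/(n0+ns))**2 ...
assert abs(reflectance(1.0, 1.5, 2.25, 0.0, 550e-9) - (1.25 / 3.25) ** 2) < 1e-12
# ... and a quarter-wave film with n1 = sqrt(ns) suppresses reflection entirely.
assert reflectance(1.0, 1.5, 2.25, 550e-9 / (4 * 1.5), 550e-9) < 1e-12
```

The subtleties the paper focuses on (absorptive end media, branch cuts, incoherent layers) appear precisely when the indices above become complex.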

  12. Calculating Speed of Sound (United States)

    Bhatnagar, Shalabh


    Sound is an emerging source of renewable energy, but it has some limitations. The main limitation is that the amount of energy that can be extracted from sound is very small, and that is because of the velocity of sound. The velocity of sound changes with the medium. If we could increase the velocity of sound in a medium, we would probably be able to extract more energy from sound and transfer it at a higher rate. To increase the velocity of sound we should know the speed of sound. According to classical mechanics, speed is the distance travelled by a particle divided by time, whereas velocity is the displacement of the particle divided by time. The speed of sound in dry air at 20 °C (68 °F) is considered to be 343.2 meters per second, and it would not be wrong to say that 343.2 meters per second is the velocity of sound rather than the speed, as it reflects the displacement of the sound, not the total distance the sound wave covered. Sound travels in the form of a mechanical wave, so when calculating the speed of sound the whole path of the wave should be considered, not just the distance traveled by the sound. In this paper I focus on calculating the actual speed of the sound wave, which can help us to extract more energy and make sound travel with faster velocity.
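Whatever one calls the quantity, the 343.2 m/s figure quoted for dry air at 20 °C comes from the standard ideal-gas expression c = sqrt(γRT/M), which can be checked directly (the paper's own "actual speed" correction is not reproduced here):

```python
# Ideal-gas speed of sound, c = sqrt(gamma * R * T / M) -- the textbook
# expression behind the 343.2 m/s figure for dry air at 20 C.
import math

def speed_of_sound(T_kelvin, gamma=1.4, M=0.0289645, R=8.314462618):
    """gamma: adiabatic index of air; M: molar mass of dry air in kg/mol."""
    return math.sqrt(gamma * R * T_kelvin / M)

c20 = speed_of_sound(293.15)
print(round(c20, 1))  # 343.2
```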

  13. A Generally Applicable Computer Algorithm Based on the Group Additivity Method for the Calculation of Seven Molecular Descriptors: Heat of Combustion, LogPO/W, LogS, Refractivity, Polarizability, Toxicity and LogBB of Organic Compounds; Scope and Limits of Applicability. (United States)

    Naef, Rudolf


    A generally applicable computer algorithm for the calculation of the seven molecular descriptors heat of combustion, logPoctanol/water, logS (water solubility), molar refractivity, molecular polarizability, aqueous toxicity (protozoan growth inhibition) and logBB (log (cblood/cbrain)) is presented. The method, an extendable form of the group-additivity method, is based on the complete break-down of the molecules into their constituting atoms and their immediate neighbourhood. The contribution of the resulting atom groups to the descriptor values is calculated using the Gauss-Seidel fitting method, based on experimental data gathered from literature. The plausibility of the method was tested for each descriptor by means of a k-fold cross-validation procedure demonstrating good to excellent predictive power for the former six descriptors and low reliability of logBB predictions. The goodness of fit (Q²) and the standard deviation of the 10-fold cross-validation calculation was >0.9999 and 25.2 kJ/mol, respectively, (based on N = 1965 test compounds) for the heat of combustion, 0.9451 and 0.51 (N = 2640) for logP, 0.8838 and 0.74 (N = 1419) for logS, 0.9987 and 0.74 (N = 4045) for the molar refractivity, 0.9897 and 0.77 (N = 308) for the molecular polarizability, 0.8404 and 0.42 (N = 810) for the toxicity and 0.4709 and 0.53 (N = 383) for logBB. The latter descriptor revealing a very low Q² for the test molecules (R² was 0.7068 and standard deviation 0.38 for N = 413 training molecules) is included as an example to show the limits of the group-additivity method. An eighth molecular descriptor, the heat of formation, was indirectly calculated from the heat of combustion data and correlated with published experimental heat of formation data with a correlation coefficient R² of 0.9974 (N = 2031).
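The paper's atom-group definitions and experimental data are not reproduced in the record; the fitting step it describes can still be sketched with hypothetical groups and property values: stack the molecules as rows of group counts, assume the descriptor is additive in the groups, and solve the normal equations by Gauss-Seidel iteration as the abstract indicates.

```python
# Toy group-additivity fit (hypothetical groups, molecules and values -- NOT
# the paper's data): descriptor = sum over groups of count * contribution,
# with contributions fitted by Gauss-Seidel on the normal equations
# A^T A c = A^T y.
rows = [[2, 0, 0],   # "ethane":   2 x CH3
        [2, 1, 0],   # "propane":  2 x CH3 + 1 x CH2
        [2, 2, 0],   # "butane"
        [1, 1, 1],   # "ethanol":  CH3 + CH2 + OH
        [1, 2, 1]]   # "propanol"
true_c = [-10.0, -5.0, -40.0]                  # made-up group contributions
y = [sum(r[j] * true_c[j] for j in range(3)) for r in rows]

m = 3
AtA = [[sum(r[i] * r[j] for r in rows) for j in range(m)] for i in range(m)]
Aty = [sum(r[i] * yi for r, yi in zip(rows, y)) for i in range(m)]

c = [0.0] * m
for _ in range(2000):          # Gauss-Seidel sweeps; A^T A is SPD, so this converges
    for i in range(m):
        s = Aty[i] - sum(AtA[i][j] * c[j] for j in range(m) if j != i)
        c[i] = s / AtA[i][i]

assert all(abs(ci - ti) < 1e-6 for ci, ti in zip(c, true_c))
```

With noisy experimental values the recovered contributions are least-squares estimates rather than exact, which is where the cross-validation statistics quoted above come from.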

  14. Computing fundamentals introduction to computers

    CERN Document Server

    Wempen, Faithe


    The absolute beginner's guide to learning basic computer skills Computing Fundamentals, Introduction to Computers gets you up to speed on basic computing skills, showing you everything you need to know to conquer entry-level computing courses. Written by a Microsoft Office Master Instructor, this useful guide walks you step-by-step through the most important concepts and skills you need to be proficient on the computer, using nontechnical, easy-to-understand language. You'll start at the very beginning, getting acquainted with the actual, physical machine, then progress through the most common

  15. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan


    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  16. The Future of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Anamaroa Siclovan


    Full Text Available

    Cloud computing was and will be a new way of providing Internet services and computing. This computing approach builds on many existing services, such as the Internet, grid computing, and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in terms of price and infrastructure. It is exactly the transition from the computer as a product to a service offered to consumers and delivered online. This represents an advantage for the organization both regarding the cost and the opportunity for new business. This paper presents the future perspectives in cloud computing. The paper presents some issues of the cloud computing paradigm. It is a theoretical paper.

    Keywords: Cloud Computing, Pay-per-use

  17. Theoretical Calculations of Atomic Data for Spectroscopy (United States)

    Bautista, Manuel A.


    Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics to better understand the qualities and limitations of various data sources and to assess how reliable spectral models based on those data are.

  18. Practical Rhumb Line Calculations on the Spheroid (United States)

    Bennett, G. G.

    About ten years ago this author wrote the software for a suite of navigation programmes which was resident in a small hand-held computer. In the course of this work it became apparent that the standard text books of navigation were perpetuating a flawed method of calculating rhumb lines on the Earth considered as an oblate spheroid. On further investigation it became apparent that these incorrect methods were being used in programming a number of calculator/computers and satellite navigation receivers. Although the discrepancies were not large, it was disquieting to compare the results of the same rhumb line calculations from a number of such devices and find variations of some miles when the output was given, and therefore purported to be accurate, to a tenth of a mile in distance and/or a tenth of a minute of arc in position. The problem has been highlighted in the past and the references at the end of this paper show that a number of methods have been proposed for the amelioration of this problem. This paper summarizes formulae that the author recommends should be used for accurate solutions. Most of these may be found in standard geodetic text books, such as, but also provided are new formulae and schemes of solution which are suitable for use with computers or tables. The latter also take into account situations when a near-indeterminate solution may arise. Some examples are provided in an appendix which demonstrate the methods. The data for these problems do not refer to actual terrestrial situations but have been selected for illustrative purposes only. Practising ships' navigators will find the methods described in detail in this paper to be directly applicable to their work and also they should find ready acceptance because they are similar to current practice. In none of the references cited at the end of this paper has the practical task of calculating, using either a computer or tabular techniques, been addressed.
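The paper's spheroidal formulae are not reproduced in the record; the spherical version of the rhumb-line solution (a sketch of the textbook baseline, where the spheroid differs mainly through an eccentricity term in the isometric latitude and the meridian-arc distance) runs as follows:

```python
# Spherical rhumb-line (loxodrome) sketch. On the spheroid the paper treats,
# the isometric latitude acquires an eccentricity correction and the meridian
# distance requires the meridian-arc integral; the sphere keeps the structure.
import math

def rhumb_sphere(lat1, lon1, lat2, lon2, R=6371000.0):
    """Course (degrees clockwise from north) and distance (m) along a loxodrome."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    dpsi = math.log(math.tan(math.pi / 4 + p2 / 2) /
                    math.tan(math.pi / 4 + p1 / 2))  # isometric-latitude difference
    course = math.atan2(dlon, dpsi)
    if abs(p2 - p1) > 1e-12:
        dist = R * abs(p2 - p1) / abs(math.cos(course))
    else:                                            # east-west: constant-latitude arc
        dist = R * abs(dlon) * math.cos(p1)
    return math.degrees(course) % 360.0, dist

# Due east along the equator for 90 degrees: a quarter of the circumference.
course, dist = rhumb_sphere(0.0, 0.0, 0.0, 90.0)
assert abs(course - 90.0) < 1e-9
assert abs(dist - 6371000.0 * math.pi / 2) < 1e-3
```

The east-west special case is one instance of the near-indeterminate situations the paper warns about: the general formula divides by cos(course), which vanishes there.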

  19. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    of the new microprocessors and network technologies. However, the understanding of the computer represented within this program poses a challenge for the intentions of the program. The computer is understood as a multitude of invisible intelligent information devices which confines the computer as a tool...

  20. Distributed Computing. (United States)

    Ryland, Jane N.


    The microcomputer revolution, in which small and large computers have gained tremendously in capability, has created a distributed computing environment. This circumstance presents administrators with the opportunities and the dilemmas of choosing appropriate computing resources for each situation. (Author/MSE)

  1. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren


    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) Its basic concept of natural computing has neither been defined theoretically nor implemented practically. (2) It cannot en...

  2. Computational Complexity

    Directory of Open Access Journals (Sweden)

    J. A. Tenreiro Machado


    Full Text Available Complex systems (CS) involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...

  3. Computer Ease. (United States)

    Drenning, Susan; Getz, Lou


    Computer Ease is an intergenerational program designed to put an Ohio elementary school's computer lab, software library, staff, and students at the disposal of older adults desiring to become computer literate. Three 90-minute instructional sessions allow seniors to experience 1-to-1 high-tech instruction by enthusiastic, nonthreatening…

  4. Computational drug discovery

    Institute of Scientific and Technical Information of China (English)

    Si-sheng OU-YANG; Jun-yan LU; Xiang-qian KONG; Zhong-jie LIANG; Cheng LUO; Hualiang JIANG


    Computational drug discovery is an effective strategy for accelerating and economizing the drug discovery and development process. Because of the dramatic increase in the availability of biological macromolecule and small-molecule information, the applicability of computational drug discovery has been extended and broadly applied to nearly every stage in the drug discovery and development workflow, including target identification and validation, lead discovery and optimization, and preclinical tests. Over the past decades, computational drug discovery methods such as molecular docking, pharmacophore modeling and mapping, de novo design, molecular similarity calculation and sequence-based virtual screening have been greatly improved. In this review, we present an overview of these important computational methods, platforms and successful applications in this field.

  5. Computer science

    CERN Document Server

    Blum, Edward K


    Computer Science: The Hardware, Software and Heart of It focuses on the deeper aspects of the two recognized subdivisions of Computer Science, Software and Hardware. These subdivisions are shown to be closely interrelated as a result of the stored-program concept. Computer Science: The Hardware, Software and Heart of It includes certain classical theoretical computer science topics such as Unsolvability (e.g. the halting problem) and Undecidability (e.g. Godel's incompleteness theorem) that treat problems that exist under the Church-Turing thesis of computation. These problem topics explain in

  6. Human Computation

    CERN Document Server

    CERN. Geneva


    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  7. Computer Science Research: Computation Directorate

    Energy Technology Data Exchange (ETDEWEB)

    Durst, M.J. (ed.); Grupe, K.F. (ed.)


    This report contains short papers in the following areas: large-scale scientific computation; parallel computing; general-purpose numerical algorithms; distributed operating systems and networks; knowledge-based systems; and technology information systems.

  8. Impact cratering calculations (United States)

    Ahrens, Thomas J.; Okeefe, J. D.; Smither, C.; Takata, T.


    In the course of carrying out finite difference calculations, it was discovered that for large craters a previously unrecognized type of crater (diameter) growth occurred, which was called lip wave propagation. This type of growth is illustrated for an impact of a 1000 km (2a) silicate bolide at 12 km/sec (U) onto a silicate half-space at earth gravity (1 g). The von Mises crustal strength is 2.4 kbar. The motion at the crater lip associated with this wave-type phenomenon is up, outward, and then down, similar to the particle motion of a surface wave. It is shown that the crater diameter has grown from d/a of approximately 25 to d/a of approximately 4 via lip propagation from Ut/a = 5.56 to 17.0 during the time when rebound occurs. A new code is being used to study partitioning of energy and momentum and cratering efficiency with self-gravity for finite-sized objects, rather than the previously discussed planetary half-space problems. These are important and fundamental subjects which can be addressed with smoothed particle hydrodynamics (SPH) codes. The SPH method was used to model various problems in astrophysics and planetary physics. The initial work demonstrates that the energy budget for normal and oblique impacts is distinctly different from that in earlier calculations of silicate projectile impact on a silicate half-space. Motivated by the first striking radar images of Venus obtained by Magellan, the effect of the atmosphere on impact cratering was studied. In order to further quantify the processes of meteor break-up and trajectory scattering upon break-up, the reentry physics of meteors striking Venus' atmosphere versus that of the Earth was studied.

  9. Proton Affinity Calculations with High Level Methods. (United States)

    Kolboe, Stein


    Proton affinities, ranging from small reference compounds up to the methylbenzenes, naphthalene, and anthracene, have been calculated with high-accuracy computational methods, viz. W1BD, G4, G3B3, CBS-QB3, and M06-2X. Computed proton affinities and the currently accepted reference values are generally in excellent accord, but there are deviations. The literature value for propene appears to be 6-7 kJ/mol too high. Reported proton affinities for the methylbenzenes seem 4-5 kJ/mol too high. G4 and G3 computations generally give results in good accord with the high-level W1BD. Proton affinity values computed with the CBS-QB3 scheme are too low, and the error increases with increasing molecule size, reaching nearly 10 kJ/mol for the xylenes. The functional M06-2X fails markedly for some of the small reference compounds, in particular for CO and ketene, but calculates methylbenzene proton affinities with high accuracy.

  10. Computer sciences (United States)

    Smith, Paul H.


    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  11. Reaction to Indispensable Manual Calculation Skills in a CAS Environment. (United States)

    Monaghan, John


    Reacts to an article published in a previous issue of this journal on the effects of graphing calculators and computer algebra systems (CAS) on students' manual calculation and algebraic manipulation skills. Considers the contribution made by Jean-Baptiste Lagrange to thinking about the role of CAS in teaching algebra. (ASK)

  12. Perturbation calculation of thermodynamic density of states. (United States)

    Brown, G; Schulthess, T C; Nicholson, D M; Eisenbach, M; Stocks, G M


    The density of states g(ε) is frequently used to calculate the temperature-dependent properties of a thermodynamic system. Here a derivation is given for calculating the warped density of states g*(ε) resulting from the addition of a perturbation. The method is validated for a classical Heisenberg model of bcc Fe, and the errors in the free energy are shown to be second order in the perturbation. Taking the perturbation to be the difference between a first-principles quantum-mechanical energy and a corresponding classical energy, this method can significantly reduce the computational effort required to calculate g(ε) for quantum systems using the Wang-Landau approach.
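
    The role of g(ε) described above, turning a density of states into temperature-dependent properties, can be illustrated for a discrete spectrum. The toy two-level data below are invented for the example; this sketches only the standard canonical sums, not the Wang-Landau sampling or the warped-DOS derivation itself.

```python
import math

def thermo_from_dos(energies, g, T, kB=1.0):
    """Partition function Z, free energy F, and mean energy <E> from a
    discrete density of states g(eps) at temperature T (units with kB = 1).
    Z = sum_eps g(eps) exp(-eps/kT),  F = -kT ln Z."""
    beta = 1.0 / (kB * T)
    e0 = min(energies)  # shift energies so the exponentials cannot overflow
    w = [gi * math.exp(-beta * (e - e0)) for e, gi in zip(energies, g)]
    Z = sum(w)  # shifted partition function
    F = e0 - kB * T * math.log(Z)
    E = sum(e * wi for e, wi in zip(energies, w)) / Z
    return Z, F, E

# Toy two-level system: ground state (degeneracy 1), excited state (degeneracy 2)
Z, F, E = thermo_from_dos([0.0, 1.0], [1.0, 2.0], T=1.0)
```

    A perturbed system would simply swap in the warped g*(ε) while reusing the same sums, which is the point of computing g*(ε) rather than resampling from scratch.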

  13. Benchmarking calculations of excitonic couplings between bacteriochlorophylls

    CERN Document Server

    Kenny, Elise P


    Excitonic couplings between (bacterio)chlorophyll molecules are necessary for simulating energy transport in photosynthetic complexes. Many techniques for calculating the couplings are in use, from the simple (but inaccurate) point-dipole approximation to fully quantum-chemical methods. We compared several approximations to determine their range of applicability, noting that the propagation of experimental uncertainties poses a fundamental limit on the achievable accuracy. In particular, the uncertainty in crystallographic coordinates yields an uncertainty of about 20% in the calculated couplings. Because quantum-chemical corrections are smaller than 20% in most biologically relevant cases, their considerable computational cost is rarely justified. We therefore recommend the electrostatic TrEsp method across the entire range of molecular separations and orientations because its cost is minimal and it generally agrees with quantum-chemical calculations to better than the geometric uncertainty. We also caution ...
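
    The point-dipole approximation mentioned above (the simple end of the spectrum, not the TrEsp method the authors recommend) can be sketched in a few lines. The prefactor 1/(4πε₀) and unit conversions are omitted, so the result is in arbitrary units; the vectors below are illustrative only.

```python
import math

def point_dipole_coupling(mu1, mu2, r1, r2):
    """Excitonic coupling in the point-dipole approximation:
    J ∝ [mu1·mu2 - 3 (mu1·r̂)(mu2·r̂)] / r³,
    where r̂ is the unit vector between the two dipole positions."""
    def sub(a, b):
        return [x - y for x, y in zip(a, b)]
    def dot(a, b):
        return sum(x * y for x, y in zip(a, b))
    r = sub(r2, r1)
    d = math.sqrt(dot(r, r))
    rh = [x / d for x in r]  # unit separation vector
    kappa = dot(mu1, mu2) - 3.0 * dot(mu1, rh) * dot(mu2, rh)
    return kappa / d ** 3

# Two parallel transition dipoles side by side (H-aggregate geometry)
J_side = point_dipole_coupling([0, 0, 1], [0, 0, 1], [0, 0, 0], [1, 0, 0])
# The same dipoles head-to-tail (J-aggregate geometry): opposite sign
J_line = point_dipole_coupling([0, 0, 1], [0, 0, 1], [0, 0, 0], [0, 0, 1])
```

    The 20% coordinate-driven uncertainty the abstract quotes enters here directly: J depends on r as 1/r³, so a small error in the pigment positions is amplified in the coupling.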

  14. Lagrange interpolation for the radiation shielding calculation

    CERN Document Server

    Isozumi, Y; Miyatake, H; Kato, T; Tosaki, M


    Based on some formulas of Lagrange interpolation derived in this paper, a computer program for table calculations has been prepared. The main features of the program are as follows: 1) the maximum degree of the polynomial in the Lagrange interpolation is 10, 2) tables with either one or two variables can be handled, 3) logarithmic transformations of function and/or variable values can be included, and 4) tables with discontinuities and cusps can be handled. The program has been carefully tested using the data tables in the manual of shielding calculation for radiation facilities. For all available tables in the manual, calculations with the program have been performed reasonably under conditions of 1) logarithmic transformation of both function and variable values and 2) degree 4 or 5 of the polynomial.
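
    The evaluation step underlying such a table-interpolation program can be sketched as plain Lagrange evaluation; the paper's logarithmic transformations, two-variable tables, and handling of cusps are not reproduced here.

```python
def lagrange_interp(xs, ys, x):
    """Evaluate the Lagrange interpolating polynomial through the points
    (xs[i], ys[i]) at the abscissa x:
    L(x) = sum_i y_i * prod_{j != i} (x - x_j) / (x_i - x_j)."""
    total = 0.0
    for i, (xi, yi) in enumerate(zip(xs, ys)):
        term = yi
        for j, xj in enumerate(xs):
            if j != i:
                term *= (x - xj) / (xi - xj)
        total += term
    return total

# Degree-2 interpolation through three points of y = x**2
y = lagrange_interp([0.0, 1.0, 2.0], [0.0, 1.0, 4.0], 1.5)  # → 2.25
```

    The paper's logarithmic option corresponds to interpolating ln(f) against ln(x) and exponentiating the result, which suits the smoothly varying attenuation tables typical of shielding manuals; degrees much above 10 are avoided because high-degree Lagrange polynomials oscillate between tabulated points.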

  15. Computer Literacy: Teaching Computer Ethics. (United States)

    Troutner, Joanne


    Suggests learning activities for teaching computer ethics in three areas: (1) equal access; (2) computer crime; and (3) privacy. Topics include computer time, advertising, class enrollments, copyright law, sabotage ("worms"), the Privacy Act of 1974 and the Freedom of Information Act of 1966. (JM)

  16. Renormalization-group calculation of excitation properties for impurity models (United States)

    Yoshida, M.; Whitaker, M. A.; Oliveira, L. N.


    The renormalization-group method developed by Wilson to calculate thermodynamical properties of dilute magnetic alloys is generalized to allow the calculation of dynamical properties of many-body impurity Hamiltonians. As a simple illustration, the impurity spectral density for the resonant-level model (i.e., the U=0 Anderson model) is computed. As a second illustration, for the same model, the longitudinal relaxation rate for a nuclear spin coupled to the impurity is calculated as a function of temperature.

  17. The experience of GPU calculations at Lunarc (United States)

    Sjöström, Anders; Lindemann, Jonas; Church, Ross


    To meet the ever increasing demand for computational speed and the use of ever larger datasets, multi-GPU installations look very tempting. Lunarc and the Theoretical Astrophysics group at Lund Observatory collaborate on a pilot project to evaluate and utilize multi-GPU architectures for scientific calculations. Starting with a small workshop in 2009, continued investigations eventually led to the procurement of the GPU resource Timaeus, a four-node, eight-GPU cluster with two Nvidia M2050 GPU cards per node. The resource is housed within the larger cluster Platon and shares disk, network and system resources with that cluster. The inauguration of Timaeus coincided with the meeting "Computational Physics with GPUs" in November 2010, hosted by the Theoretical Astrophysics group at Lund Observatory. The meeting comprised a two-day workshop on GPU computing and a two-day science meeting on using GPUs as a tool for computational physics research, with a particular focus on astrophysics and computational biology. Today Timaeus is used by research groups from Lund, Stockholm and Luleå in fields ranging from astrophysics to molecular chemistry. We are investigating the use of GPUs with commercial software packages and user-supplied MPI-enabled codes. Looking ahead, Lunarc will be installing a new cluster during the summer of 2011 which will have a small number of GPU-enabled nodes, enabling us to continue working with the combination of parallel codes and GPU computing. It is clear that the combination of GPUs and CPUs is becoming an important part of high performance computing, and here we describe what has been done at Lunarc regarding GPU computations and how we will continue to investigate the new and coming multi-GPU servers and how they can be utilized in our environment.

  18. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony


    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses automatic electronic digital computers, symbolic language, Reverse Polish Notation, and the translation of Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, the general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  19. Graph Partitioning Models for Parallel Computing

    Energy Technology Data Exchange (ETDEWEB)

    Hendrickson, B.; Kolda, T.G.


    Calculations can naturally be described as graphs in which vertices represent computation and edges reflect data dependencies. By partitioning the vertices of a graph, the calculation can be divided among processors of a parallel computer. However, the standard methodology for graph partitioning minimizes the wrong metric and lacks expressibility. We survey several recently proposed alternatives and discuss their relative merits.
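
    The "wrong metric" the abstract refers to is the edge cut, which counts cut edges rather than true communication volume. A toy sketch of the standard quantities, with a made-up graph and partition, illustrates what partitioners actually optimize:

```python
def edge_cut(edges, part):
    """Number of edges whose endpoints land in different parts -- the
    classical (imperfect) proxy for interprocessor communication."""
    return sum(1 for u, v in edges if part[u] != part[v])

def imbalance(part, n_parts):
    """Size of the largest part relative to the ideal even split;
    1.0 means a perfectly balanced computational load."""
    sizes = [0] * n_parts
    for p in part.values():
        sizes[p] += 1
    return max(sizes) / (len(part) / n_parts)

# Hypothetical task graph: a square of tasks with one diagonal dependency,
# split across two processors
edges = [("a", "b"), ("b", "c"), ("c", "d"), ("d", "a"), ("a", "c")]
part = {"a": 0, "b": 0, "c": 1, "d": 1}
cut = edge_cut(edges, part)        # 3 edges cross the partition boundary
bal = imbalance(part, 2)           # both processors hold 2 vertices
```

    The survey's point is that minimizing this cut while holding the imbalance near 1.0 can still mis-estimate communication, e.g. when several cut edges share an endpoint and their messages could be combined.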

  20. Direct Computation on the Kinetic Spectrophotometry

    DEFF Research Database (Denmark)

    Hansen, Jørgen-Walther; Broen Pedersen, P.


    This report describes an analog computer designed for calculations of transient absorption from photographed recordings of the oscilloscope trace of the transmitted light intensity. The computer calculates the optical density OD, the natural logarithm of OD, and the natural logarithm of the diffe...
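
    The quantities named above follow directly from the transmitted intensity. A minimal digital sketch (the report describes an analog computer) assuming the conventional base-10 definition of optical density:

```python
import math

def optical_density(I0, It):
    """Optical density (absorbance) from incident intensity I0 and
    transmitted intensity It, using the usual convention OD = log10(I0/It)."""
    return math.log10(I0 / It)

# Illustrative trace point: 10% of the light is transmitted
od = optical_density(100.0, 10.0)   # OD = 1.0
ln_od = math.log(od)                # ln(OD), as produced by the analog computer
```

    Working with ln(OD) is convenient in transient-absorption kinetics because an exponential decay of OD becomes a straight line whose slope gives the rate constant.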