WorldWideScience

Sample records for calculations computer

  1. Computer Calculation of Fire Danger

    Science.gov (United States)

    William A. Main

    1969-01-01

    This paper describes a computer program that calculates National Fire Danger Rating indexes. Fuel moisture, buildup index, and drying factor are also available. The program is written in FORTRAN and is usable on even the smallest computers.

  2. Computational methods for probability of instability calculations

    Science.gov (United States)

    Wu, Y.-T.; Burnside, O. H.

    1990-01-01

    This paper summarizes the development of the methods and a computer program to compute the probability of instability of a dynamic system that can be represented by a system of second-order ordinary linear differential equations. Two instability criteria based upon the roots of the characteristic equation or Routh-Hurwitz test functions are investigated. Computational methods based on system reliability analysis methods and importance sampling concepts are proposed to perform efficient probabilistic analysis. Numerical examples are provided to demonstrate the methods.
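
    Because the systems are second order, the Routh-Hurwitz test reduces to sign checks on the characteristic-polynomial coefficients, so the probability-of-instability calculation is easy to sketch. The following is a minimal plain Monte Carlo version for a single degree of freedom, not the paper's reliability-analysis/importance-sampling method, and the parameter distributions are invented for illustration:

```python
import numpy as np

# Single degree of freedom: m*x'' + c*x' + k*x = 0 with uncertain c and k.
# Routh-Hurwitz for m*s^2 + c*s + k (m > 0): stable iff c > 0 and k > 0.
rng = np.random.default_rng(0)
n = 100_000
c = rng.normal(0.05, 0.05, n)   # damping: hypothetical mean/std
k = rng.normal(1.00, 0.10, n)   # stiffness: hypothetical mean/std

unstable = (c <= 0.0) | (k <= 0.0)
p_f = unstable.mean()
se = unstable.std(ddof=1) / np.sqrt(n)
print(f"P(instability) ~ {p_f:.4f} +/- {se:.4f}")
```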

  3. MacSPOC: Orbital trajectory calculations on a laptop computer

    Science.gov (United States)

    Adamo, Dan

    1991-01-01

    Orbital trajectory calculations on a laptop computer are presented in the form of viewgraphs. The following subject areas are covered: laptop computing in the Space Shuttle program; current laptop prototyping with MacSPOC; future laptop applications; and a summary.

  4. Computational system for activity calculation of radiopharmaceuticals

    African Journals Online (AJOL)

    ... this is especially relevant in large countries like Brazil, where the distance from one state to another can be greater than the distance between countries in Europe. The purpose of this paper is to describe a computational system developed to evaluate the dose of radiopharmaceuticals from production until the ...

  5. Newnes circuit calculations pocket book with computer programs

    CERN Document Server

    Davies, Thomas J

    2013-01-01

    Newnes Circuit Calculations Pocket Book: With Computer Programs presents equations, examples, and problems in circuit calculations. The text includes 300 computer programs that help solve the problems presented. The book comprises 20 chapters that tackle different aspects of circuit calculation. The coverage of the text includes dc voltage, dc circuits, and network theorems. The book also covers oscillators, phasors, and transformers. The text will be useful to electrical engineers and other professionals whose work involves electronic circuitry.

  6. Fast calculation method for computer-generated cylindrical holograms.

    Science.gov (United States)

    Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi

    2008-07-01

    Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable in 360 deg. Most cylindrical holograms are optical holograms, and there are few reports of computer-generated cylindrical holograms, because the spatial resolution of output devices is not high enough; one must therefore make a large hologram or use a small object to satisfy the sampling theorem. In addition, in calculating the large fringe, the calculation amount increases in proportion to the hologram size. Therefore, we propose what we believe to be a new method for fast fringe calculation. Then, we print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.

  7. Effect of Calculator Use in Mathematics Computation by Junior ...

    African Journals Online (AJOL)

    The purpose of the study was to find out whether the use of calculators in the computation of mathematics would enhance pupils' achievement in the subject. A sample of 160 pupils, randomly selected from three junior secondary schools, was involved in the study. In one school, 60 pupils were selected while in the other ...

  8. Development of a computational methodology for internal dose calculations

    CERN Document Server

    Yoriyaz, H

    2000-01-01

    A new approach for calculating internal dose estimates was developed through the use of a more realistic computational model of the human body and a more precise tool for the radiation transport simulation. The present technique shows the capability to build a patient-specific phantom with tomography data (a voxel-based phantom) for the simulation of radiation transport and energy deposition using Monte Carlo methods such as in the MCNP-4B code. In order to utilize the segmented human anatomy as a computational model for the simulation of radiation transport, an interface program, SCMS, was developed to build the geometric configurations for the phantom through the use of tomographic images. This procedure allows the calculation not only of average dose values but also of the spatial distribution of dose in regions of interest. With the present methodology absorbed fractions for photons and electrons in various organs of the Zubal segmented phantom were calculated and compared to those reported for the mathematical phanto...

  9. Computationally efficient implementation of combustion chemistry in parallel PDF calculations

    Science.gov (United States)

    Lu, Liuyan; Lantz, Steven R.; Ren, Zhuyin; Pope, Stephen B.

    2009-08-01

    In parallel calculations of combustion processes with realistic chemistry, the serial in situ adaptive tabulation (ISAT) algorithm [S.B. Pope, Computationally efficient implementation of combustion chemistry using in situ adaptive tabulation, Combustion Theory and Modelling, 1 (1997) 41-63; L. Lu, S.B. Pope, An improved algorithm for in situ adaptive tabulation, Journal of Computational Physics 228 (2009) 361-386] substantially speeds up the chemistry calculations on each processor. To improve the parallel efficiency of large ensembles of such calculations in parallel computations, in this work, the ISAT algorithm is extended to the multi-processor environment, with the aim of minimizing the wall clock time required for the whole ensemble. Parallel ISAT strategies are developed by combining the existing serial ISAT algorithm with different distribution strategies, namely purely local processing (PLP), uniformly random distribution (URAN), and preferential distribution (PREF). The distribution strategies enable the queued load redistribution of chemistry calculations among processors using message passing. They are implemented in the software x2f_mpi, which is a Fortran 95 library for facilitating many parallel evaluations of a general vector function. The relative performance of the parallel ISAT strategies is investigated in different computational regimes via the PDF calculations of multiple partially stirred reactors burning methane/air mixtures. The results show that the performance of ISAT with a fixed distribution strategy strongly depends on certain computational regimes, based on how much memory is available and how much overlap exists between tabulated information on different processors. No one fixed strategy consistently achieves good performance in all the regimes. Therefore, an adaptive distribution strategy, which blends PLP, URAN and PREF, is devised and implemented. It yields consistently good performance in all regimes. In the adaptive parallel

  10. An efficient computational method for calculating ligand binding affinities.

    Directory of Open Access Journals (Sweden)

    Atsushi Suenaga

    Full Text Available Virtual compound screening using molecular docking is widely used in the discovery of new lead compounds for drug design. However, the docking scores are not sufficiently precise to represent the protein-ligand binding affinity. Here, we developed an efficient computational method for calculating protein-ligand binding affinity, which is based on molecular mechanics generalized Born/surface area (MM-GBSA) calculations and the Jarzynski identity. The Jarzynski identity is an exact relation between free energy differences and the work done through a non-equilibrium process, and MM-GBSA is a semimacroscopic approach to calculate the potential energy. To calculate the work distribution when a ligand is pulled out of its binding site, multiple protein-ligand conformations are randomly generated as an alternative to performing an explicit single-molecule pulling simulation. We assessed the new method, multiple random conformation/MM-GBSA (MRC-MMGBSA), by evaluating ligand-binding affinities (scores) for four target proteins, and comparing these scores with experimental data. The calculated scores were qualitatively in good agreement with the experimental binding affinities, and the optimal docking structure could be determined by ranking the scores of the multiple docking poses obtained by the molecular docking process. Furthermore, the scores showed a strong linear response to experimental binding free energies, so that the free energy difference of the ligand binding (ΔΔG) could be calculated by linear scaling of the scores. The error of the calculated ΔΔG was within ≈ ±1.5 kcal·mol⁻¹ of the experimental values. Particularly, in the case of flexible target proteins, the MRC-MMGBSA scores were more effective in ranking ligands than those generated by the MM-GBSA method using a single protein-ligand conformation. The results suggest that, owing to its lower computational costs and greater accuracy, the MRC-MMGBSA offers efficient means to rank the ligands, in
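
    The scoring step is just the Jarzynski average over the sampled work values, ΔF = -kT ln⟨exp(-W/kT)⟩. A minimal sketch of that estimator, with synthetic work samples standing in for the MM-GBSA pulling energies and a log-sum-exp for numerical stability:

```python
import numpy as np

kT = 0.593  # kcal/mol near 298 K

def jarzynski_free_energy(work, kT=kT):
    """dF = -kT * ln< exp(-W/kT) >, evaluated stably via log-sum-exp."""
    w = np.asarray(work) / kT
    return -kT * (np.logaddexp.reduce(-w) - np.log(len(w)))

# Synthetic work distribution (kcal/mol); real values would come from the
# randomly generated protein-ligand conformations described above.
work = np.random.default_rng(1).normal(8.0, 2.0, 500)
print(f"Jarzynski estimate: {jarzynski_free_energy(work):.2f} kcal/mol")
```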

  11. A Computational Framework for Automation of Point Defect Calculations

    Science.gov (United States)

    Goyal, Anuj; Gorai, Prashun; Peng, Haowei; Lany, Stephan; Stevanovic, Vladan; National Renewable Energy Laboratory, Golden, Colorado 80401 Collaboration

    A complete and rigorously validated open-source Python framework to automate point defect calculations using density functional theory has been developed. The framework provides an effective and efficient method for defect structure generation, and creation of simple yet customizable workflows to analyze defect calculations. The package provides the capability to compute widely accepted correction schemes to overcome finite-size effects, including (1) potential alignment, (2) image-charge correction, and (3) band filling correction to shallow defects. Using Si, ZnO and In2O3 as test examples, we demonstrate the package capabilities and validate the methodology. We believe that a robust automated tool like this will enable the materials-by-design community to assess the impact of point defects on materials performance.

  12. On the computation of momentum distributions within wavepacket propagation calculations

    Energy Technology Data Exchange (ETDEWEB)

    Feuerstein, Bernold; Thumm, Uwe [Department of Physics, Kansas State University, Manhattan, KS 66506 (United States)

    2003-02-28

    We present a new method to extract momentum distributions from time-dependent wavepacket calculations. In contrast to established Fourier transformation of the spatial wavepacket at a fixed time, the proposed 'virtual detector' method examines the time dependence of the wavepacket at a fixed position. In first applications to the ionization of model atoms and the dissociation of H2+, we find a significant reduction of computing time and are able to extract reliable fragment momentum distributions by using a comparatively small spatial numerical grid for the time-dependent wavefunction.
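
    In essence, the detector records the wavefunction at one grid point, converts its local phase gradient into a momentum k(t), and histograms it weighted by the probability current. A schematic one-dimensional sketch in atomic units, demonstrated on an exactly propagated free Gaussian wavepacket (all numbers invented):

```python
import numpy as np

def virtual_detector(psi_t, x, i_d, bins):
    """Momentum histogram from the time series of psi at grid index i_d."""
    dx = x[1] - x[0]
    psi = psi_t[:, i_d]
    dpsi = (psi_t[:, i_d + 1] - psi_t[:, i_d - 1]) / (2 * dx)
    with np.errstate(divide="ignore", invalid="ignore"):
        k_local = np.imag(dpsi / psi)        # local momentum k(t) at x_d
    flux = np.imag(np.conj(psi) * dpsi)      # probability current j(x_d, t)
    ok = np.isfinite(k_local) & (flux > 0)   # keep outgoing flux only
    return np.histogram(k_local[ok], bins=bins, weights=flux[ok])

# Free Gaussian wavepacket with mean momentum k0 = 2, propagated exactly.
x = np.linspace(-50, 250, 2048)
k = 2 * np.pi * np.fft.fftfreq(x.size, x[1] - x[0])
psi0 = np.exp(-(x + 20.0) ** 2 / 8.0 + 2j * x)
psi_t = np.stack([np.fft.ifft(np.fft.fft(psi0) * np.exp(-0.5j * k**2 * t))
                  for t in np.linspace(0, 60, 600)])
hist, edges = virtual_detector(psi_t, x, np.searchsorted(x, 30.0),
                               bins=np.linspace(0.0, 4.0, 65))
print("peak near k =", edges[hist.argmax()])   # close to the true k0 = 2
```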

  13. On the computation of momentum distributions within wavepacket propagation calculations

    Science.gov (United States)

    Feuerstein, Bernold; Thumm, Uwe

    2003-02-01

    We present a new method to extract momentum distributions from time-dependent wavepacket calculations. In contrast to established Fourier transformation of the spatial wavepacket at a fixed time, the proposed 'virtual detector' method examines the time dependence of the wavepacket at a fixed position. In first applications to the ionization of model atoms and the dissociation of H2+, we find a significant reduction of computing time and are able to extract reliable fragment momentum distributions by using a comparatively small spatial numerical grid for the time-dependent wavefunction.

  14. Activity computer program for calculating ion irradiation activation

    Science.gov (United States)

    Palmer, Ben; Connolly, Brian; Read, Mark

    2017-07-01

    A computer program, Activity, was developed to predict the activity and gamma lines of materials irradiated with an ion beam. It uses the TENDL (Koning and Rochman, 2012) [1] proton reaction cross section database, the Stopping and Range of Ions in Matter (SRIM) (Biersack et al., 2010) code, a Nuclear Data Services (NDS) radioactive decay database (Sonzogni, 2006) [2] and an ENDF gamma decay database (Herman and Chadwick, 2006) [3]. An extended version of Bateman's equation is used to calculate the activity at time t, and this equation is solved analytically, with the option to also solve by numeric inverse Laplace Transform as a failsafe. The program outputs the expected activity and gamma lines of the activated material.
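
    For a two-member chain A -> B (with B decaying to a stable nuclide) the Bateman solution can be written down directly; the snippet below works through that textbook special case with arbitrary numbers, whereas Activity solves the extended form that also includes production during the irradiation:

```python
import numpy as np

def bateman_daughter(n1_0, lam1, lam2, t):
    """Atoms of the daughter B at time t for the chain A -> B -> stable."""
    return n1_0 * lam1 / (lam2 - lam1) * (np.exp(-lam1 * t) - np.exp(-lam2 * t))

lam1 = np.log(2) / 3600.0            # parent half-life: 1 h
lam2 = np.log(2) / 600.0             # daughter half-life: 10 min
t = np.linspace(0.0, 4 * 3600.0, 5)  # sample times over 4 h
n2 = bateman_daughter(1e10, lam1, lam2, t)
print("daughter activity (Bq):", lam2 * n2)
```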

  15. Color calculations for and perceptual assessment of computer graphic images

    Energy Technology Data Exchange (ETDEWEB)

    Meyer, G.W.

    1986-01-01

    Realistic image synthesis involves the modelling of an environment in accordance with the laws of physics and the production of a final simulation that is perceptually acceptable. To be considered a scientific endeavor, synthetic image generation should also include the final step of experimental verification. This thesis concentrates on the color calculations that are inherent in the production of the final simulation and on the perceptual assessment of the computer graphic images that result. The fundamental spectral sensitivity functions that are active in the human visual system are introduced and are used to address color-blindness issues in computer graphics. A digitally controlled color television monitor is employed to successfully implement both the Farnsworth-Munsell 100-hue test and a new color vision test that yields more accurate diagnoses. Images that simulate color-blind vision are synthesized and are used to evaluate color scales for data display. Gaussian quadrature is used with a set of opponent fundamentals to select the wavelengths at which to perform synthetic image generation.

  16. A Distributed Computing Infrastructure for Computational Thermodynamic Calculations of Solid-Liquid Phase Equilibria

    Science.gov (United States)

    Ghiorso, M. S.; Kress, V. C.

    2004-12-01

    Software tools like MELTS (Ghiorso and Sack, 1995, CMP 119:197) and its derivatives (Ghiorso et al., 2002, G3 3:10.1029/2001GC000217) are sophisticated calculators used by geoscientists to quantify the chemistry of melt production, transport and storage. These tools utilize computational thermodynamics to evaluate the equilibrium state of the system under specified external conditions by minimizing a suitably constructed thermodynamic potential. Like any thermodynamically based tool, the principal advantage in employing these techniques to model igneous processes is the intrinsic ability to couple the chemistry and energetics of the evolution of the system in a self-consistent and rigorous formalism. Access to MELTS is normally accomplished via a standalone X11-based executable or as a Java-based web applet. The latter is a dedicated client-server application rooted at the University of Chicago. Our on-going objective is the development of a distributed computing infrastructure to provide "MELTS-like" computations on demand to remote network users by utilizing a language-independent client-server protocol based on CORBA. The advantages of this model are numerous. First, the burden of implementing and executing MELTS computations is centralized with a software implementation optimized to a compute cluster dedicated for that purpose. Improvements and updates to MELTS software are handled locally on the server side without intervention of the user, and the server model lessens the burden of supporting the computational code on a variety of hardware and OS platforms. Second, the client hardware platform does not incur the computational cost of performing a MELTS simulation and the remote user can focus on the task of incorporating results into their model. Third, the client user can write software in a computer language of their choosing and procedural calls to the MELTS library can be executed transparently over the network as if a local language-compatible library of

  17. Computing NLTE Opacities -- Node Level Parallel Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Holladay, Daniel [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2015-09-11

    Presentation. The goal: to produce a robust library capable of computing reasonably accurate opacities in-line, with the assumption of LTE relaxed (non-LTE). Near term: demonstrate acceleration of non-LTE opacity computation. Far term (if funded): connect to application codes with in-line capability and compute opacities. Study science problems. Use efficient algorithms that expose many levels of parallelism and utilize good memory access patterns for use on advanced architectures. Portability to multiple types of hardware including multicore processors, manycore processors such as KNL, GPUs, etc. Easily coupled to radiation hydrodynamics and thermal radiative transfer codes.

  18. Quantum computing applied to calculations of molecular energies

    Czech Academy of Sciences Publication Activity Database

    Pittner, Jiří; Veis, L.

    2011-01-01

    Roč. 241, - (2011), 151-phys ISSN 0065-7727. [National Meeting and Exposition of the American-Chemical-Society (ACS) /241./. 27.03.2011-31.03.2011, Anaheim] Institutional research plan: CEZ:AV0Z40400503 Keywords: molecular energies * quantum computers Subject RIV: CF - Physical; Theoretical Chemistry

  19. Parallel computer calculation of quantum spin lattices; Calcul de chaines de spins quantiques sur ordinateur parallele

    Energy Technology Data Exchange (ETDEWEB)

    Lamarcq, J. [Service de Physique Theorique, CEA Centre d'Etudes de Saclay, 91 - Gif-sur-Yvette (France)

    1998-07-10

    Numerical simulation allows theorists to convince themselves of the validity of the models they use. In particular, by simulating spin lattices one can judge the validity of a conjecture. Simulating a system defined by a large number of degrees of freedom requires highly sophisticated machines. This study deals with modelling the magnetic interactions between the ions of a crystal. Many exact results have been found for spin-1/2 systems, but not for systems of other spins, for which many simulations have been carried out. Interest in simulations has been renewed by Haldane's conjecture stipulating the existence of an energy gap between the ground state and the first excited states of a spin-1 lattice. The existence of this gap has been experimentally demonstrated. This report contains the following four chapters: 1. Spin systems; 2. Calculation of eigenvalues; 3. Programming; 4. Parallel calculation. 14 refs., 6 figs.

  20. Time-partitioning simulation models for calculation on parallel computers

    Science.gov (United States)

    Milner, Edward J.; Blech, Richard A.; Chima, Rodrick V.

    1987-01-01

    A technique allowing time-staggered solution of partial differential equations is presented in this report. Using this technique, called time-partitioning, simulation execution speedup is proportional to the number of processors used because all processors operate simultaneously, with each updating of the solution grid at a different time point. The technique is limited by neither the number of processors available nor by the dimension of the solution grid. Time-partitioning was used to obtain the flow pattern through a cascade of airfoils, modeled by the Euler partial differential equations. An execution speedup factor of 1.77 was achieved using a two processor Cray X-MP/24 computer.

  2. Computational benchmark for calculation of silane and siloxane thermochemistry.

    Science.gov (United States)

    Cypryk, Marek; Gostyński, Bartłomiej

    2016-01-01

    Geometries of model chlorosilanes, R3SiCl, silanols, R3SiOH, and disiloxanes, (R3Si)2O, R = H, Me, as well as the thermochemistry of the reactions involving these species were modeled using 11 common density functionals in combination with five basis sets to examine the accuracy and applicability of various theoretical methods in organosilicon chemistry. As the model reactions, the proton affinities of silanols and siloxanes, hydrolysis of chlorosilanes and condensation of silanols to siloxanes were considered. As the reference values, experimental bonding parameters and reaction enthalpies were used wherever available. Where there are no experimental data, W1 and CBS-QB3 values were used instead. For the gas phase conditions, excellent agreement between theoretical CBS-QB3 and W1 and experimental thermochemical values was observed. All DFT methods also give acceptable values and the precision of the various functionals used was comparable. No significant advantage of newer, more advanced functionals over 'classical' B3LYP and PBEPBE ones was noted. The accuracy of the results was improved significantly when triple-zeta basis sets were used for energy calculations, instead of double-zeta ones. The accuracy of calculations for the reactions in water solution within the SCRF model was inferior compared to the gas phase. However, by careful estimation of corrections to the ΔHsolv and ΔGsolv of H+ and HCl, reasonable values of thermodynamic quantities for the discussed reactions can be obtained.

  3. Trends in high-performance computing for engineering calculations.

    Science.gov (United States)

    Giles, M B; Reguly, I

    2014-08-13

    High-performance computing has evolved remarkably over the past 20 years, and that progress is likely to continue. However, in recent years, this progress has been achieved through greatly increased hardware complexity with the rise of multicore and manycore processors, and this is affecting the ability of application developers to achieve the full potential of these systems. This article outlines the key developments on the hardware side, both in the recent past and in the near future, with a focus on two key issues: energy efficiency and the cost of moving data. It then discusses the much slower evolution of system software, and the implications of all of this for application developers.

  4. Computer calculations of collisions, black holes, and naked singularities

    CERN Document Server

    Shapiro, S L

    1993-01-01

    We describe a method for the numerical solution of Einstein's equations for the dynamical evolution of a collisionless gas of particles in general relativity. The gravitational field can be arbitrarily strong and particle velocities can approach the speed of light. The computational method uses the tools of numerical relativity and N-body particle simulation to follow the full nonlinear behavior of these systems. Specifically, we solve the Vlasov equation in general relativity by particle simulation. The gravitational field is integrated using the 3 + 1 formalism of Arnowitt, Deser, and Misner. One application of our method is the study of head-on collisions of relativistic clusters. We have constructed and followed the evolution of three classes of initial configurations: spheres of particles at rest; spheres of particles boosted towards each other; and spheres of particles in circular orbits about their respective centers. In the first two cases, the spheres implode towards their cente...

  5. Easy-to-use application programs for decay heat and delayed neutron calculations on personal computers

    Energy Technology Data Exchange (ETDEWEB)

    Oyamatsu, Kazuhiro [Nagoya Univ. (Japan)

    1998-03-01

    Application programs for personal computers are developed to calculate the decay heat power and delayed neutron activity from fission products. The main programs can be used on any computer, from personal computers to mainframes, because their sources are written in Fortran. These programs have user-friendly interfaces and can be used easily, not only for research activities but also for educational purposes. (author)

  6. Plane stress calculations with a two dimensional elastic-plastic computer program. [HEMP

    Energy Technology Data Exchange (ETDEWEB)

    Wilkins, M.L.; Guinan, M.W.

    1976-04-05

    In the study of ductile fracture it is useful to simulate fracture on the computer under plane stress conditions. In general, this is a three dimensional problem. Presented here is a method for adapting a two dimensional elastic-plastic computer program to calculate problems in plane stress as well as plane strain geometry. A simulation of a tension test of a flat aluminum plate pulled to failure is calculated with the modified two dimensional program. The results are compared with a fully three dimensional calculation. Finally a comparison is made with an experiment to demonstrate the effectiveness of the computational methods for studying fracture of work hardening materials.

  7. Radiation therapy calculations using an on-demand virtual cluster via cloud computing

    CERN Document Server

    Keyes, Roy W; Arnold, Dorian; Luan, Shuang

    2010-01-01

    Computer hardware costs are the limiting factor in producing highly accurate radiation dose calculations on convenient time scales. Because of this, large-scale, full Monte Carlo simulations and other resource intensive algorithms are often considered infeasible for clinical settings. The emerging cloud computing paradigm promises to fundamentally alter the economics of such calculations by providing relatively cheap, on-demand, pay-as-you-go computing resources over the Internet. We believe that cloud computing will usher in a new era, in which very large scale calculations will be routinely performed by clinics and researchers using cloud-based resources. In this research, several proof-of-concept radiation therapy calculations were successfully performed on a cloud-based virtual Monte Carlo cluster. Performance evaluations were made of a distributed processing framework developed specifically for this project. The expected 1/n performance was observed with some caveats. The economics of cloud-based virtual...

  8. Calculation of collision integrals and computation of diffusion coefficients in metal vapor mixtures

    Science.gov (United States)

    Aref'ev, K. M.; Guseva, M. A.; Novikov, A. A.

    1992-06-01

    Results of numerical calculations of diffusion coefficients are presented for binary mixtures of metal vapors with gases and different metal vapors. The calculations have been performed by using a program for computing collision integrals for potential functions defined in arbitrary form. A table of integrals for the Buckingham-Corner potential is presented.

  9. COMPUTER PROGRAM FOR CALCULATION MICROCHANNEL HEAT EXCHANGERS FOR AIR CONDITIONING SYSTEMS

    Directory of Open Access Journals (Sweden)

    Olga V. Olshevska

    2016-08-01

    Full Text Available A computer program for calculating microchannel air condensers was created to reduce design time and enable variant calculations. Software packages for the thermophysical properties of the working substance and the coolant, correlation equations for calculating heat transfer, aerodynamics and hydrodynamics, and the thermodynamic equations for the irreversible losses and their minimization in the heat exchanger were used in creating it. Borland Delphi 7 was used to create the software package.

  10. Calculation reduction method for color computer-generated hologram using color space conversion

    CERN Document Server

    Shimobaba, Tomoyoshi; Oikawa, Minoru; Takada, Naoki; Okada, Naohisa; Endo, Yutaka; Hirayama, Ryuji; Ito, Tomoyoshi

    2013-01-01

    We report a calculation reduction method for color computer-generated holograms (CGHs) using color space conversion. Color CGHs are generally calculated in RGB space. In this paper, we calculate color CGHs in other color spaces, for example YCbCr color space. In YCbCr color space, an RGB image is converted to the luminance component (Y), blue-difference chroma (Cb) and red-difference chroma (Cr) components. The human eye readily perceives small differences in the luminance component but not in the chroma components. In this method, the luminance component is therefore sampled at full resolution while the chroma components are down-sampled. The down-sampling allows us to accelerate the calculation of the color CGHs. We compute diffraction calculations from the components, and then we convert the diffracted results in YCbCr color space to RGB color space.
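
    The color-space step itself is a fixed linear transform; a sketch using the ITU-R BT.601 weights (assumed here; the paper does not fix a particular RGB/YCbCr matrix) with 2x chroma down-sampling:

```python
import numpy as np

def rgb_to_ycbcr(rgb):
    """Split an RGB image (floats in [0, 1], shape (H, W, 3)) into Y, Cb, Cr."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  =  0.299 * r + 0.587 * g + 0.114 * b
    cb = -0.168736 * r - 0.331264 * g + 0.5 * b
    cr =  0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

rgb = np.random.default_rng(2).random((512, 512, 3))   # stand-in image
y, cb, cr = rgb_to_ycbcr(rgb)
cb_ds, cr_ds = cb[::2, ::2], cr[::2, ::2]              # chroma down-sampled 2x
# The diffraction calculation now runs on y at full resolution and on the
# smaller cb_ds/cr_ds planes; the results are up-sampled and converted back.
```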

  11. Computer codes used in the calculation of high-temperature thermodynamic properties of sodium

    Energy Technology Data Exchange (ETDEWEB)

    Fink, J.K.

    1979-12-01

    Three computer codes - SODIPROP, NAVAPOR, and NASUPER - were written in order to calculate a self-consistent set of thermodynamic properties for saturated, subcooled, and superheated sodium. These calculations incorporate new critical parameters (temperature, pressure, and density) and recently derived single equations for enthalpy and vapor pressure. The following thermodynamic properties have been calculated in these codes: enthalpy, heat capacity, entropy, vapor pressure, heat of vaporization, density, volumetric thermal expansion coefficient, compressibility, and thermal pressure coefficient. In the code SODIPROP, these properties are calculated for saturated and subcooled liquid sodium. Thermodynamic properties of saturated sodium vapor are calculated in the code NAVAPOR. The code NASUPER calculates thermodynamic properties for super-heated sodium vapor only for low (< 1644 K) temperatures. No calculations were made for the supercritical region.

  12. HADOC: a computer code for calculation of external and inhalation doses from acute radionuclide releases

    Energy Technology Data Exchange (ETDEWEB)

    Strenge, D.L.; Peloquin, R.A.

    1981-04-01

    The computer code HADOC (Hanford Acute Dose Calculations) is described and instructions for its use are presented. The code calculates external dose from air submersion and inhalation doses following acute radionuclide releases. Atmospheric dispersion is calculated using the Hanford model, with options to determine maximum conditions. Building wake effects and terrain variation may also be considered. Doses are calculated using dose conversion factors supplied in a data library. Doses are reported for one- and fifty-year dose commitment periods for the maximum individual and the regional population (within 50 miles). The fractional contributions to dose by radionuclide and exposure mode are also printed if requested.
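
    For orientation only, here is a generic ground-level Gaussian-plume dispersion factor chi/Q of the kind such a code multiplies by dose conversion factors; the actual Hanford model and HADOC's options differ in detail, and the numbers below are arbitrary:

```python
import numpy as np

def chi_over_q(y, z, sigma_y, sigma_z, u):
    """Normalized air concentration (s/m^3) for a ground-level release."""
    return (np.exp(-y**2 / (2 * sigma_y**2)) * np.exp(-z**2 / (2 * sigma_z**2))
            / (np.pi * sigma_y * sigma_z * u))

# Plume centerline at ground level; dispersion parameters picked arbitrarily
# for roughly 1 km downwind in neutral conditions with a 3 m/s wind.
print(f"chi/Q = {chi_over_q(0.0, 0.0, 75.0, 32.0, 3.0):.2e} s/m^3")
```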

  13. Computer program to calculate three-dimensional boundary layer flows over wings with wall mass transfer

    Science.gov (United States)

    Mclean, J. D.; Randall, J. L.

    1979-01-01

    A system of computer programs for calculating three dimensional transonic flow over wings, including details of the three dimensional viscous boundary layer flow, was developed. The flow is calculated in two overlapping regions: an outer potential flow region, and a boundary layer region in which the first order, three dimensional boundary layer equations are numerically solved. A consistent matching of the two solutions is achieved iteratively, thus taking into account viscous-inviscid interaction. For the inviscid outer flow calculations, the Jameson-Caughey transonic wing program FLO 27 is used, and the boundary layer calculations are performed by a finite difference boundary layer prediction program. Interface programs provide communication between the two basic flow analysis programs. Computed results are presented for the NASA F8 research wing, both with and without distributed surface suction.

  14. Depth compensating calculation method of computer-generated holograms using symmetry and similarity of zone plates

    Science.gov (United States)

    Wei, Hui; Gong, Guanghong; Li, Ni

    2017-10-01

    The computer-generated hologram (CGH) is a promising 3D display technology, but it is challenged by heavy computation load and vast memory requirements. To solve these problems, a depth compensating CGH calculation method based on symmetry and similarity of zone plates is proposed and implemented on a graphics processing unit (GPU). An improved LUT method is put forward to compute the distances between object points and hologram pixels in the XY direction. The concept of a depth compensating factor is defined and used for calculating the holograms of points with different depth positions, instead of layer-based methods. The proposed method is suitable for arbitrary sampling objects with lower memory usage and higher computational efficiency compared to other CGH methods. The effectiveness of the proposed method is validated by numerical and optical experiments.

  15. Simple and effective calculations about spectral power distributions of outdoor light sources for computer vision.

    Science.gov (United States)

    Tian, Jiandong; Duan, Zhigang; Ren, Weihong; Han, Zhi; Tang, Yandong

    2016-04-04

    The spectral power distributions (SPD) of outdoor light sources are not constant over time and atmospheric conditions, which causes the appearance variation of a scene and common natural illumination phenomena, such as twilight, shadow, and haze/fog. Calculating the SPD of outdoor light sources at different times (or zenith angles) and under different atmospheric conditions is of interest to physically-based vision. In this paper, for computer vision and its applications, we propose a feasible, simple, and effective SPD calculation method based on analyzing the transmittance functions of absorption and scattering along the path of solar radiation through the atmosphere in the visible spectrum. Compared with previous SPD calculation methods, our model has fewer parameters and is accurate enough to be directly applied in computer vision. It can be applied in computer vision tasks including spectral inverse calculation, lighting conversion, and shadowed image processing. The experimental results of the applications demonstrate that our calculation methods have practical value in computer vision. It establishes a bridge between image and physical environmental information, e.g., time, location, and weather conditions.

  16. Shielding calculations using computer techniques; Calculo de blindajes mediante tecnicas de computacion

    Energy Technology Data Exchange (ETDEWEB)

    Gonzalez Portilla, M. I.; Marquez, J.

    2011-07-01

    Radiological protection aims to limit the ionizing radiation received by people and equipment, which on numerous occasions requires protective shields. Although analytical formulas exist to characterize these shields for certain configurations, the design process may be very intensive in numerical calculations; therefore the most efficient way to design the shields is by means of computer programs that calculate dose and dose rates. In the present article we review the codes most frequently used to perform these calculations, and the techniques used by such codes. (Author) 13 refs.

  17. Massively parallel computational fluid dynamics calculations for aerodynamics and aerothermodynamics applications

    Energy Technology Data Exchange (ETDEWEB)

    Payne, J.L.; Hassan, B.

    1998-09-01

    Massively parallel computers have enabled the analyst to solve complicated flow fields (turbulent, chemically reacting) that were previously intractable. Calculations are presented using a massively parallel CFD code called SACCARA (Sandia Advanced Code for Compressible Aerothermodynamics Research and Analysis) currently under development at Sandia National Laboratories as part of the Department of Energy (DOE) Accelerated Strategic Computing Initiative (ASCI). Computations were made on a generic reentry vehicle in a hypersonic flowfield utilizing three different distributed parallel computers to assess the parallel efficiency of the code with increasing numbers of processors. The parallel efficiencies for the SACCARA code will be presented for cases using 1, 150, 100 and 500 processors. Computations were also made on a subsonic/transonic vehicle using both 236 and 521 processors on a grid containing approximately 14.7 million grid points. Ongoing and future plans to implement a parallel overset grid capability and couple SACCARA with other mechanics codes in a massively parallel environment are discussed.

  18. An approach to first principles electronic structure calculation by symbolic-numeric computation

    Directory of Open Access Journals (Sweden)

    Akihito Kikuchi

    2013-04-01

    Full Text Available There is a wide variety of electronic structure calculation cooperating with symbolic computation, the main purpose of the latter being to play an auxiliary (but not unimportant) role to the former. In the field of quantum physics [1-9], researchers sometimes have to handle complicated mathematical expressions whose derivation seems almost beyond human power, and thus resort to the intensive use of computers, namely, symbolic computation [10-16]. Examples of this can be seen in various topics: atomic energy levels, molecular dynamics, molecular energy and spectra, collision and scattering, lattice spin models, and so on [16]. How to obtain molecular integrals analytically, or how to manipulate complex formulas in many-body interactions, is one such problem. In the former case, when one uses a special atomic basis for a specific purpose, expressing the integrals as combinations of already known analytic functions may sometimes be very difficult. In the latter, one must rearrange a number of creation and annihilation operators in a suitable order and calculate the analytical expectation value. It is usual that a quantitative and massive computation follows a symbolic one; for the convenience of the numerical computation, it is necessary to reduce a complicated analytic expression into a tractable and computable form. This is the main motive for the introduction of symbolic computation as a forerunner of the numerical one, and their collaboration has won considerable successes. The present work should be classified as one such trial. Meanwhile, the use of symbolic computation in the present work is not limited to an indirect and auxiliary part of the numerical computation: the present work can be applied to a direct and quantitative estimation of the electronic structure, skipping conventional computational methods.

  19. Easy calculations of lod scores and genetic risks on small computers.

    Science.gov (United States)

    Lathrop, G M; Lalouel, J M

    1984-01-01

    A computer program that calculates lod scores and genetic risks for a wide variety of both qualitative and quantitative genetic traits is discussed. An illustration is given of the joint use of a genetic marker, affection status, and quantitative information in counseling situations regarding Duchenne muscular dystrophy. PMID:6585139
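
    For the simplest two-point case the arithmetic is compact enough to show directly: lod(theta) = log10[L(theta)/L(1/2)] over phase-known meioses scored as recombinant or not. A toy sketch with invented counts, far simpler than the pedigree-and-risk machinery the program provides:

```python
import numpy as np

def lod(theta, n_rec, n_nonrec):
    """Two-point lod score for n_rec recombinants out of n_rec + n_nonrec."""
    l_theta = n_rec * np.log10(theta) + n_nonrec * np.log10(1.0 - theta)
    l_null = (n_rec + n_nonrec) * np.log10(0.5)
    return l_theta - l_null

thetas = np.linspace(0.01, 0.5, 50)
scores = lod(thetas, n_rec=2, n_nonrec=18)
print(f"max lod = {scores.max():.2f} at theta = {thetas[scores.argmax()]:.2f}")
```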

  20. Calculating nasoseptal flap dimensions: a cadaveric study using cone beam computed tomography

    NARCIS (Netherlands)

    ten Dam, Ellen; Korsten-Meijer, Astrid G. W.; Schepers, Rutger H.; van der Meer, Wicher J.; Gerrits, Peter O.; van der Laan, Bernard F. A. M.; Feijen, Robert A.

    We hypothesize that three-dimensional imaging using cone beam computed tomography (CBCT) is suitable for calculating nasoseptal flap (NSF) dimensions. To evaluate our hypothesis, we compared CBCT NSF dimensions with anatomical dissections. The NSF reach and vascularity were studied. In an anatomical

  1. Computer program for calculation of complex chemical equilibrium compositions and applications. Part 1: Analysis

    Science.gov (United States)

    Gordon, Sanford; Mcbride, Bonnie J.

    1994-01-01

    This report presents the latest in a number of versions of chemical equilibrium and applications programs developed at the NASA Lewis Research Center over more than 40 years. These programs have changed over the years to include additional features and improved calculation techniques and to take advantage of constantly improving computer capabilities. The minimization-of-free-energy approach to chemical equilibrium calculations has been used in all versions of the program since 1967. The two principal purposes of this report are presented in two parts. The first purpose, which is accomplished here in part 1, is to present in detail a number of topics of general interest in complex equilibrium calculations. These topics include mathematical analyses and techniques for obtaining chemical equilibrium; formulas for obtaining thermodynamic and transport mixture properties and thermodynamic derivatives; criteria for inclusion of condensed phases; calculations at a triple point; inclusion of ionized species; and various applications, such as constant-pressure or constant-volume combustion, rocket performance based on either a finite- or infinite-chamber-area model, shock wave calculations, and Chapman-Jouguet detonations. The second purpose of this report, to facilitate the use of the computer code, is accomplished in part 2, entitled 'Users Manual and Program Description'. Various aspects of the computer code are discussed, and a number of examples are given to illustrate its versatility.
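
    The minimization-of-free-energy idea can be illustrated on a toy system: minimize G/RT = sum_i n_i (g_i/RT + ln(n_i/n_tot)) for an ideal-gas mixture at 1 bar, subject to element balance. The sketch below uses made-up dimensionless chemical potentials (not CEA data) and a generic constrained optimizer rather than the program's own iteration:

```python
import numpy as np
from scipy.optimize import minimize

g = np.array([0.0, 0.0, -10.0])      # H2, O2, H2O: placeholder mu0/RT values
A = np.array([[2, 0, 2],             # H atoms per mole of each species
              [0, 2, 1]])            # O atoms per mole of each species
b = A @ np.array([1.0, 0.5, 0.0])    # elements in 1 mol H2 + 0.5 mol O2

def gibbs(n):                        # dimensionless G/RT at p = 1 bar
    n = np.clip(n, 1e-12, None)
    return float(np.sum(n * (g + np.log(n / n.sum()))))

res = minimize(gibbs, x0=np.array([0.5, 0.25, 0.5]), method="SLSQP",
               bounds=[(1e-12, None)] * 3,
               constraints={"type": "eq", "fun": lambda n: A @ n - b})
print("equilibrium moles (H2, O2, H2O):", res.x.round(4))
```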

  2. Calculating CyberShake Map 1.0 on Shared Open Science High Performance Computing Resources

    Science.gov (United States)

    Callaghan, S.; Maechling, P. J.; Deelman, E.; Small, P.; Vahi, K.; Mehta, G.; Juve, G.; Milner, K.; Graves, R. W.; Field, E. H.; Okaya, D. A.; Jordan, T. H.

    2009-12-01

    Current Probabilistic Seismic Hazard Analysis (PSHA) calculations produce predictive seismic hazard curves using an earthquake rupture forecast (ERF) and a ground motion prediction equation (GMPE) that defines how ground motions decay with distance from an earthquake. Traditionally, GMPEs are empirically-based attenuation models. However, these GMPEs have important limitations. The observational data used to develop the attenuation relationships do not cover the full range of possible earthquake magnitudes. These GMPEs predict only peak ground motions, and do not produce ground motion time series. Phenomena such as rupture directivity and basin effects may not be well captured with these GMPEs. To improve the accuracy of PSHA calculations, researchers at the Southern California Earthquake Center (SCEC) are performing physics-based PSHA using the CyberShake project. CyberShake utilizes full 3D waveform modeling as the GMPE to compute PSHA hazard curves for various sites in Southern California. For each rupture in the Uniform California Earthquake Rupture Forecast (UCERF) 2.0, we capture rupture variability by varying the hypocenter and slip distribution to produce about 410,000 different events (“rupture variations”) per site of interest. We calculate strain Green’s tensors for each site and use seismic reciprocity to compute the intensity measure of interest for each rupture variation. These intensity measures are then synthesized into a hazard curve, relating shaking levels to probability of exceedance. The goal of CyberShake is to calculate more accurate hazard curves using high performance computing techniques. A set of hazard curves, computed for many sites in a region, can be integrated to produce a regional hazard map. In 2009, a series of CyberShake calculations, called the CyberShake 1.0 Map calculation, were performed using distributed resources on Texas Advanced Computing Center’s Ranger system, part of the NSF-funded TeraGrid, and the University

  3. Calculating a checksum with inactive networking components in a computing system

    Science.gov (United States)

    Aho, Michael E; Chen, Dong; Eisley, Noel A; Gooding, Thomas M; Heidelberger, Philip; Tauferner, Andrew T

    2015-01-27

    Calculating a checksum utilizing inactive networking components in a computing system, including: identifying, by a checksum distribution manager, an inactive networking component, wherein the inactive networking component includes a checksum calculation engine for computing a checksum; sending, to the inactive networking component by the checksum distribution manager, metadata describing a block of data to be transmitted by an active networking component; calculating, by the inactive networking component, a checksum for the block of data; transmitting, to the checksum distribution manager from the inactive networking component, the checksum for the block of data; and sending, by the active networking component, a data communications message that includes the block of data and the checksum for the block of data.

  4. Semiempirical Quantum Chemical Calculations Accelerated on a Hybrid Multicore CPU-GPU Computing Platform.

    Science.gov (United States)

    Wu, Xin; Koslowski, Axel; Thiel, Walter

    2012-07-10

    In this work, we demonstrate that semiempirical quantum chemical calculations can be accelerated significantly by leveraging the graphics processing unit (GPU) as a coprocessor on a hybrid multicore CPU-GPU computing platform. Semiempirical calculations using the MNDO, AM1, PM3, OM1, OM2, and OM3 model Hamiltonians were systematically profiled for three types of test systems (fullerenes, water clusters, and solvated crambin) to identify the most time-consuming sections of the code. The corresponding routines were ported to the GPU and optimized employing both existing library functions and a GPU kernel that carries out a sequence of noniterative Jacobi transformations during pseudodiagonalization. The overall computation times for single-point energy calculations and geometry optimizations of large molecules were reduced by one order of magnitude for all methods, as compared to runs on a single CPU core.

  5. PABLM: a computer program to calculate accumulated radiation doses from radionuclides in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Kennedy, W.E. Jr.; Soldat, J.K.

    1980-03-01

    A computer program, PABLM, was written to facilitate the calculation of internal radiation doses to man from radionuclides in food products and external radiation doses from radionuclides in the environment. This report contains details of mathematical models used and calculational procedures required to run the computer program. Radiation doses from radionuclides in the environment may be calculated from deposition on the soil or plants during an atmospheric or liquid release, or from exposure to residual radionuclides in the environment after the releases have ended. Radioactive decay is considered during the release of radionuclides, after they are deposited on the plants or ground, and during holdup of food after harvest. The radiation dose models consider several exposure pathways. Doses may be calculated for either a maximum-exposed individual or for a population group. The doses calculated are accumulated doses from continuous chronic exposure. A first-year committed dose is calculated as well as an integrated dose for a selected number of years. The equations for calculating internal radiation doses are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and MPC's of each radionuclide. The radiation doses from external exposure to contaminated water and soil are calculated using the basic assumption that the contaminated medium is large enough to be considered an infinite volume or plane relative to the range of the emitted radiations. The equations for calculations of the radiation dose from external exposure to shoreline sediments include a correction for the finite width of the contaminated beach.

  6. TEMP: a computer code to calculate fuel pin temperatures during a transient. [LMFBR

    Energy Technology Data Exchange (ETDEWEB)

    Bard, F E; Christensen, B Y; Gneiting, B C

    1980-04-01

    The computer code TEMP calculates fuel pin temperatures during a transient. It was developed to accommodate temperature calculations in any system of axi-symmetric concentric cylinders. When used to calculate fuel pin temperatures, the code will handle a fuel pin as simple as a solid cylinder or as complex as a central void surrounded by fuel that is broken into three regions by two circumferential cracks. Any fuel situation between these two extremes can be analyzed along with additional cladding, heat sink, coolant or capsule regions surrounding the fuel. The one-region version of the code accurately calculates the solution to two problems having closed-form solutions. The code uses an implicit method, an explicit method and a Crank-Nicolson (implicit-explicit) method.
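
    As a minimal illustration of the implicit-explicit family of schemes the code offers, here is a Crank-Nicolson step for one-region radial conduction in a solid cylinder with a fixed surface temperature; the geometry, properties and temperatures are invented:

```python
import numpy as np

def cn_step_cylinder(T, alpha, dr, dt, theta=0.5):
    """One theta-scheme step of dT/dt = alpha*(1/r)d/dr(r dT/dr); T[-1] fixed."""
    n = len(T)
    L = np.zeros((n, n))
    L[0, 0], L[0, 1] = -4.0, 4.0                  # symmetry condition at r = 0
    for i in range(1, n - 1):
        rm, rp = i - 0.5, i + 0.5                 # half-node radii, in units of dr
        L[i, i - 1], L[i, i], L[i, i + 1] = rm / i, -2.0, rp / i
    L /= dr**2
    A = np.eye(n) - theta * alpha * dt * L
    rhs = (np.eye(n) + (1.0 - theta) * alpha * dt * L) @ T
    A[-1, :], A[-1, -1], rhs[-1] = 0.0, 1.0, T[-1]  # Dirichlet surface node
    return np.linalg.solve(A, rhs)

T = np.full(51, 800.0); T[-1] = 500.0   # fuel at 800 K, surface held at 500 K
for _ in range(100):                    # advance 5 s in 0.05 s steps
    T = cn_step_cylinder(T, alpha=1e-6, dr=1e-4, dt=0.05)
print(f"centerline temperature after 5 s: {T[0]:.1f} K")
```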

  7. Open Quantum Dynamics Calculations with the Hierarchy Equations of Motion on Parallel Computers.

    Science.gov (United States)

    Strümpfer, Johan; Schulten, Klaus

    2012-08-14

    Calculating the evolution of an open quantum system, i.e., a system in contact with a thermal environment, has presented a theoretical and computational challenge for many years. With the advent of supercomputers containing large amounts of memory and many processors, the computational challenge posed by the previously intractable theoretical models can now be addressed. The hierarchy equations of motion present one such model and offer a powerful method that has remained under-utilized so far due to its considerable computational expense. By exploiting concurrent processing on parallel computers, the hierarchy equations of motion can be applied to biological-scale systems. Herein we introduce the quantum dynamics software PHI, which solves the hierarchy equations of motion. We describe the integrator employed by PHI and demonstrate PHI's scaling and efficiency running on large parallel computers by applying the software to the calculation of inter-complex excitation transfer between the light harvesting complexes 1 and 2 of purple photosynthetic bacteria, a 50-pigment system.

  8. VORSTAB: A computer program for calculating lateral-directional stability derivatives with vortex flow effect

    Science.gov (United States)

    Lan, C. Edward

    1985-01-01

    A computer program based on the Quasi-Vortex-Lattice Method of Lan is presented for calculating longitudinal and lateral-directional aerodynamic characteristics of nonplanar wing-body combination. The method is based on the assumption of inviscid subsonic flow. Both attached and vortex-separated flows are treated. For the vortex-separated flow, the calculation is based on the method of suction analogy. The effect of vortex breakdown is accounted for by an empirical method. A summary of the theoretical method, program capabilities, input format, output variables and program job control set-up are described. Three test cases are presented as guides for potential users of the code.

  9. Fast calculation of spherical computer generated hologram using spherical wave spectrum method.

    Science.gov (United States)

    Jackin, Boaz Jessie; Yatagai, Toyohiko

    2013-01-14

    A fast calculation method for computer generation of spherical holograms is proposed. This method is based on wave propagation defined in the spectral domain and in spherical coordinates. The spherical wave spectrum and transfer function were derived from boundary value solutions to the scalar wave equation. It is a spectral propagation formula analogous to the angular spectrum formula in cartesian coordinates. A numerical method to evaluate the derived formula is suggested, which uses only N(log N)^2 operations for calculations on N sampling points. Simulation results are presented to verify the correctness of the proposed method. A spherical hologram for a spherical object was generated and reconstructed successfully using the proposed method.

  10. An interactive computer code for calculation of gas-phase chemical equilibrium (EQLBRM)

    Science.gov (United States)

    Pratt, B. S.; Pratt, D. T.

    1984-01-01

    A user-friendly, menu-driven, interactive computer program known as EQLBRM, which calculates the adiabatic equilibrium temperature and product composition resulting from the combustion of hydrocarbon fuels with air at specified constant pressure and enthalpy, is discussed. The program is developed primarily as an instructional tool to be run on small computers, allowing the user to economically and efficiently explore the effects of varying fuel type, air/fuel ratio, inlet air and/or fuel temperature, and operating pressure on the performance of continuous combustion devices such as gas turbine combustors, Stirling engine burners, and power generation furnaces.

  11. THE COMPARISON BETWEEN COMPUTER SIMULATION AND PHYSICAL MODEL IN CALCULATING ILLUMINANCE LEVEL OF ATRIUM BUILDING

    Directory of Open Access Journals (Sweden)

    Sushardjanti Felasari

    2003-01-01

    Full Text Available This research examines the accuracy of computer programmes in simulating the illuminance level in atrium buildings, compared with measurements in physical models. The case was an atrium building with four roof types: pitched roof, barrel vault roof, monitor roof (both monitor pitched roof and monitor barrel vault roof), and north light roof (both north and south orientation). The results show both agreement and disagreement between the methods. The two methods show the same pattern of daylight distribution. On the other hand, in terms of daylight factors, computer simulation tends to underestimate compared with physical model measurement, while for average and minimum illuminance it tends to overestimate.

  12. Fast calculation method of computer-generated hologram using a depth camera with point cloud gridding

    Science.gov (United States)

    Zhao, Yu; Shi, Chen-Xiao; Kwon, Ki-Chul; Piao, Yan-Ling; Piao, Mei-Lan; Kim, Nam

    2018-03-01

    We propose a fast calculation method for a computer-generated hologram (CGH) of real objects that uses a point cloud gridding method. The depth information of the scene is acquired using a depth camera and the point cloud model is reconstructed virtually. Because each point of the point cloud is distributed precisely to the exact coordinates of each layer, each point of the point cloud can be classified into grids according to its depth. A diffraction calculation is performed on the grids using a fast Fourier transform (FFT) to obtain a CGH. The computational complexity is reduced dramatically in comparison with conventional methods. The feasibility of the proposed method was confirmed by numerical and optical experiments.
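
    The gist of the gridding idea: bin the depth-camera points into layers, propagate each layer to the hologram plane with a single FFT-based diffraction, and sum the fields. A schematic sketch using an angular-spectrum transfer function; the geometry, wavelength and layer count are assumed, and the paper's exact diffraction kernel may differ:

```python
import numpy as np

def angular_spectrum(u0, wavelength, z, dx):
    """Propagate a square field u0 a distance z (angular-spectrum method)."""
    n = u0.shape[0]
    fx = np.fft.fftfreq(n, dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)   # drop evanescent waves
    return np.fft.ifft2(np.fft.fft2(u0) * H)

n, dx, wavelength = 512, 8e-6, 532e-9
pts = np.random.default_rng(3).random((2000, 3))      # x, y, z in [0, 1)
z_layers = np.linspace(0.05, 0.10, 8)                 # 8 depth layers (m)
hologram = np.zeros((n, n), dtype=complex)
for j, z in enumerate(z_layers):
    layer = np.zeros((n, n), dtype=complex)           # grid this layer's points
    sel = (pts[:, 2] * len(z_layers)).astype(int) == j
    ij = (pts[sel, :2] * n).astype(int)
    layer[ij[:, 1], ij[:, 0]] = 1.0
    hologram += angular_spectrum(layer, wavelength, z, dx)
fringe = np.angle(hologram)                           # e.g. a phase-only CGH
```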

  13. Efficient Probability of Failure Calculations for QMU using Computational Geometry LDRD 13-0144 Final Report

    Energy Technology Data Exchange (ETDEWEB)

    Mitchell, Scott A. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ebeida, Mohamed Salah [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Romero, Vicente J. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Swiler, Laura Painton [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rushdi, Ahmad A. [Univ. of Texas, Austin, TX (United States); Abdelkader, Ahmad [Univ. of Maryland, College Park, MD (United States)

    2015-09-01

    This SAND report summarizes our work on the Sandia National Laboratories LDRD project titled "Efficient Probability of Failure Calculations for QMU using Computational Geometry" (project #165617, proposal #13-0144). Readers interested in the technical details are encouraged to consult the full published results and to contact the report authors for the status of the software and follow-on projects.

  14. Two methods for calculating regional cerebral blood flow from emission computed tomography of inert gas concentrations

    DEFF Research Database (Denmark)

    Kanno, I; Lassen, N A

    1979-01-01

    Two methods are described for calculating regional cerebral blood flow from computed tomographic data of radioactive inert gas distribution in a slice of brain tissue. It is assumed that the tomographic picture gives the average inert gas concentration in each pixel over the data collection perio... ...are implemented using synthetic data of xenon-133 emission computed tomography, and some of the difficulties likely to be encountered in practice are stressed.

  15. STATIC_TEMP: a useful computer code for calculating static formation temperatures in geothermal wells

    Science.gov (United States)

    Santoyo, E.; Garcia, A.; Espinosa, G.; Hernandez, I.; Santoyo, S.

    2000-03-01

    The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on five analytical methods, the most frequently used in the geothermal industry; conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a tool that can be used reliably in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in times and bottomhole temperature measurements logged during well completion activities are required as input data. The output can include up to seven computations of the static formation temperature for each wellbore temperature data set analysed. STATIC_TEMP was written in Microsoft Fortran-77 for the MS-DOS environment using structured programming techniques, and runs on most IBM-compatible personal computers. The source code and its computational architecture, as well as the input and output files, are described in detail. Validation and application examples on the use of the code with wellbore temperature data (obtained from the specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented.
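
    The five methods themselves are not reproduced in the abstract; a minimal sketch of one classical analytical approach of this kind, Horner-plot extrapolation, is shown below with illustrative data (the method fits logged bottomhole temperatures against the logarithm of the Horner time ratio and reads the static temperature off the intercept at infinite shut-in time):

        import numpy as np

        # Horner model: T(dt) = T_static + slope * ln((tc + dt) / dt), where tc is the
        # circulation time and dt the shut-in time; as dt -> infinity the log term -> 0,
        # so the regression intercept estimates the static formation temperature.
        def horner_static_temperature(shut_in_hours, bht_celsius, circulation_hours):
            x = np.log((circulation_hours + shut_in_hours) / shut_in_hours)
            slope, intercept = np.polyfit(x, bht_celsius, 1)
            return intercept

        dt = np.array([6.0, 12.0, 18.0, 24.0])        # shut-in times (h), illustrative
        bht = np.array([98.0, 104.0, 107.0, 109.0])   # logged bottomhole temperatures (deg C)
        print(horner_static_temperature(dt, bht, circulation_hours=4.0))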

  17. Strategy for reflector pattern calculation - Let the computer do the work

    Science.gov (United States)

    Lam, P. T.; Lee, S.-W.; Hung, C. C.; Acosta, R.

    1986-04-01

    Using high frequency approximations, the secondary pattern of a reflector antenna can be calculated by numerically evaluating a radiation integral I(u,v). In recent years, tremendous effort has been expended on reducing I(u,v) to Fourier integrals. These reduction schemes are invariably reflector-geometry dependent; hence, separate analyses and computer software must be developed for different reflector shapes and boundaries. It is pointed out that, as computer power improves, these reduction schemes are no longer necessary. Comparable accuracy and computation time can be achieved by evaluating I(u,v) with the brute-force FFT described in this note. Furthermore, the brute-force FFT places virtually no restriction on the reflector geometry.
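
    As a rough illustration of the brute-force approach (the aperture model and sampling below are assumptions, not the authors' code), an arbitrary reflector boundary is handled simply by masking the aperture field, after which a single 2D FFT evaluates I(u,v) on a uniform output grid:

        import numpy as np

        n, dx = 256, 0.05                      # samples per side and spacing (wavelengths)
        x = (np.arange(n) - n // 2) * dx
        X, Y = np.meshgrid(x, x)

        A = np.where(X**2 + Y**2 <= 4.0**2, 1.0, 0.0)   # aperture field; any boundary via masking

        # I on the FFT grid: sum of A(x, y) * exp(-j 2 pi (u x + v y)) * dx * dy
        I = np.fft.fftshift(np.fft.fft2(np.fft.ifftshift(A))) * dx * dx
        u = np.fft.fftshift(np.fft.fftfreq(n, d=dx))    # output coordinates of the pattern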

  18. Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)

    2017-07-15

    The radiation dose of computed tomography (CT) is generally represented by the CT dose index (CTDI). CTDI, however, does not accurately predict the actual patient dose for different human body sizes, because it relies on cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantoms. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based and geometry-based methods were compared with those of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, whereas the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. The method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
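
    For reference, the effective diameter used as the ground truth here is a simple geometric quantity; a sketch of the two baseline estimates follows (the attenuation-based variant needs the projection data and is not reproduced; inputs are illustrative):

        import numpy as np

        def effective_diameter_geometric(ap_cm, lat_cm):
            """Geometric estimate from anteroposterior and lateral dimensions."""
            return np.sqrt(ap_cm * lat_cm)

        def effective_diameter_from_area(area_cm2):
            """Diameter of the circle with the same cross-sectional area (axial-image based)."""
            return 2.0 * np.sqrt(area_cm2 / np.pi)

        print(effective_diameter_geometric(22.0, 30.0))   # abdomen-like dimensions, illustrative
        print(effective_diameter_from_area(550.0))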

  19. Fast polygon-based method for calculating computer-generated holograms in three-dimensional display.

    Science.gov (United States)

    Pan, Yijie; Wang, Yongtian; Liu, Juan; Li, Xin; Jia, Jia

    2013-01-01

    In holographic three-dimensional (3D) display, the numerical synthesis of computer-generated holograms requires tremendous computation. To address this problem, a fast polygon-based method based on two-dimensional Fourier analysis of 3D affine transformations is proposed. From one primitive polygon, the proposed method calculates the diffracted optical field of each arbitrary polygon in the 3D model, employing the pseudo-inverse matrix, interpolation, and compensation of the power spectral density. The proposed method saves computation time in hologram synthesis, since it needs neither a fast Fourier transform for each polygonal surface nor additional diffusion computation. Numerical simulation and optical experimental results are presented to demonstrate the effectiveness of the method. The results reveal that the proposed method can reconstruct a 3D scene with a solid effect and without depth limitation. The factors that influence image quality are discussed, and thresholds are proposed to ensure reconstruction quality.

  20. Multithreaded transactions in scientific computing: New versions of a computer program for kinematical calculations of RHEED intensity oscillations

    Science.gov (United States)

    Brzuszek, Marcin; Daniluk, Andrzej

    2006-11-01

    Writing a concurrent program can be more difficult than writing a sequential program: the programmer needs to think about synchronisation, race conditions and shared variables. Transactions help reduce the inconvenience of using threads. A transaction is an abstraction which allows programmers to group a sequence of actions on the program into a logical, higher-level computation unit. This paper presents multithreaded versions of the GROWTH program, which allow the layer coverages during the growth of thin epitaxial films, and the corresponding RHEED intensities according to the kinematical approximation, to be calculated. The presented programs also contain graphical user interfaces, which enable program data to be displayed at run-time.
    New version program summary
    Titles of programs: GROWTHGr, GROWTH06
    Catalogue identifier: ADVL_v2_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v2_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Catalogue identifier of previous version: ADVL
    Does the new version supersede the original program: No
    Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
    Programming language used: Object Pascal
    Memory required to execute with typical data: More than 1 MB
    Number of bits in a word: 64
    Number of processors used: 1
    No. of lines in distributed program, including test data, etc.: 20 931
    Number of bytes in distributed program, including test data, etc.: 1 311 268
    Distribution format: tar.gz
    Nature of physical problem: The programs compute the RHEED intensities during the growth of thin epitaxial structures prepared using molecular beam epitaxy (MBE). The computations are based on the use of kinematical diffraction theory [P.I. Cohen, G.S. Petrich, P.R. Pukite, G.J. Whaley, A.S. Arrott, Surf. Sci. 216 (1989) 222. [1

  1. CALCULATION OF MATRIX CORRESPONDENCE WITH THE USE OF PARALLEL COMPUTING TECHNOLOGIES

    Directory of Open Access Journals (Sweden)

    E. E. Ilyasov

    2014-01-01

    The increasing number of vehicles has led to urban congestion, hours-long traffic jams, obstruction of pedestrian traffic, a growing number of accidents, etc. Hence the importance of optimal network planning, improved traffic management, and optimization of public transport route systems. Such problems cannot be solved without mathematical modeling of traffic flows, and an important modeling task is the calculation of trip distribution. In this paper, we develop a program for calculating trip distribution using parallel computing technologies. Applying these technologies improves the efficiency of the simulation and increases the accuracy and speed of the algorithm.
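
    The paper's algorithm is not reproduced in the abstract; below is a minimal sketch of one standard trip distribution computation, a doubly constrained gravity model balanced by iterative proportional fitting, whose row and column updates vectorize naturally and can be distributed across processes (all data are illustrative):

        import numpy as np

        def trip_distribution(productions, attractions, cost, beta=0.1, iters=50):
            """T[i, j] = a[i] * b[j] * P[i] * A[j] * exp(-beta * cost[i, j])."""
            f = np.exp(-beta * cost)                   # deterrence function of travel cost
            a = np.ones(len(productions))
            b = np.ones(len(attractions))
            for _ in range(iters):
                a = 1.0 / (f @ (b * attractions))      # row balancing factors
                b = 1.0 / (f.T @ (a * productions))    # column balancing factors
            return (a * productions)[:, None] * f * (b * attractions)[None, :]

        P = np.array([100.0, 200.0, 150.0])            # trips produced per zone
        A = np.array([180.0, 120.0, 150.0])            # trips attracted per zone
        C = np.array([[1.0, 3.0, 5.0], [3.0, 1.0, 4.0], [5.0, 4.0, 1.0]])
        print(trip_distribution(P, A, C).round(1))     # row sums -> P, column sums -> A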

  2. Computational Issues Associated with Automatic Calculation of Acute Myocardial Infarction Scores

    Science.gov (United States)

    Destro-Filho, J. B.; Machado, S. J. S.; Fonseca, G. T.

    2008-12-01

    This paper presents a comparison among the three principal acute myocardial infarction (AMI) scores (Selvester, Aldrich, Anderson-Wilkins), as they are automatically estimated from digital electrocardiographic (ECG) files, in terms of memory occupation and processing time. Theoretical algorithm complexity is also provided. Our simulation study assumes that the ECG signal is already digitized and available within a computer platform. We performed 1,000,000 Monte Carlo experiments using the same input files, leading to average results that point out the drawbacks and advantages of each score. Since none of these calculations requires either large memory occupation or long processing, automatic estimation is compatible with the real-time requirements associated with AMI urgency and with telemedicine systems, being faster than manual calculation even on simple, low-cost personal microcomputers.

  3. UDATE1: A computer program for the calculation of uranium-series isotopic ages

    Science.gov (United States)

    Rosenbauer, Robert J.

    UDATE1 is a FORTRAN-77 program with an interface for an Apple Macintosh computer that calculates isotope activities from measured count rates to date geologic materials by uranium-series disequilibria. Dates on pure samples can be determined directly by the accumulation of 230Th from 234U and of 231Pa from 235U. Dates for samples contaminated by clays containing abundant natural thorium can be corrected by the program using various mixing models. Input to the program and file management are made simple and user friendly by a series of Macintosh modal dialog boxes.

  4. Parallel computing and first-principles calculations: Applications to complex ceramics and Vitamin B12

    Science.gov (United States)

    Ouyang, Lizhi

    A systematic improvement and extension of the orthogonalized linear combinations of atomic orbitals (OLCAO) method was carried out using a combined computational and theoretical approach. For high-performance parallel computing, a Beowulf-class personal computer cluster was constructed. It also served as a parallel program development platform that helped us port the method's programs to the national supercomputer facilities. The program received a language upgrade from Fortran 77 to Fortran 90 and a dynamic memory allocation feature. A preliminary parallel High Performance Fortran version of the program has been developed as well, although scalability improvements are needed for it to be of more benefit. To circumvent the difficulties of analytical force calculation in the method, we developed a geometry optimization scheme using a finite difference approximation based on the total energy calculation. The implementation of this scheme was facilitated by the powerful General Utility Lattice Program (GULP), which offers many desired features such as multiple optimization schemes and use of space group symmetry. So far, many ceramic oxides have been tested with the geometry optimization program, and their optimized geometries are in excellent agreement with the experimental data: for nine ceramic oxide crystals, the optimized cell parameters differ from the experimental ones by less than 0.5%. Moreover, the geometry optimization was recently used to predict a new phase of TiNx. The method has also been used to investigate a complex Vitamin B12 derivative, the OHCbl crystal. To overcome the prohibitive disk I/O demand, an on-demand version of the method was developed. Based on the electronic structure calculation of the OHCbl crystal, a partial density of states analysis and a bond order analysis were carried out. The calculated bonding of the corrin ring of the OHCbl model was consistent with the large open-ring pi bond. One interesting find of the calculation was
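
    A minimal sketch of energy-only geometry optimization of this kind, with forces from central finite differences of the total energy; the energy function here is a toy Lennard-Jones dimer standing in for the OLCAO total energy, and the step size and iteration count are illustrative:

        import numpy as np

        def lennard_jones(pos):                        # toy stand-in for a total-energy call
            r = np.linalg.norm(pos[0] - pos[1])
            return 4.0 * (r**-12 - r**-6)

        def fd_gradient(energy, pos, h=1e-4):
            """Gradient from central differences: only energy evaluations are needed."""
            g = np.zeros_like(pos)
            for i in np.ndindex(pos.shape):
                pos[i] += h; e_plus = energy(pos)
                pos[i] -= 2 * h; e_minus = energy(pos)
                pos[i] += h                            # restore the coordinate
                g[i] = (e_plus - e_minus) / (2 * h)
            return g

        pos = np.array([[0.0, 0.0, 0.0], [1.5, 0.0, 0.0]])
        for _ in range(300):                           # steepest-descent relaxation
            pos -= 0.01 * fd_gradient(lennard_jones, pos)
        print(np.linalg.norm(pos[0] - pos[1]))         # ~ 2**(1/6), the LJ minimum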

  5. DIST: a computer code system for calculation of distribution ratios of solutes in the purex system

    Energy Technology Data Exchange (ETDEWEB)

    Tachimori, Shoichi [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1996-05-01

    Purex is a solvent extraction process for reprocessing spent nuclear fuel using tri-n-butyl phosphate (TBP). A computer code system, DIST, has been developed to calculate distribution ratios for the major solutes in the Purex process. The DIST system is composed of databases storing experimental distribution data of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2} (DISTEX) and of Zr(IV) and Tc(VII) (DISTEXFP), together with programs that calculate the distribution ratios of U(IV), U(VI), Pu(III), Pu(IV), Pu(VI), Np(IV), Np(VI), HNO{sub 3} and HNO{sub 2} (DIST1) and of Zr(IV) and Tc(VII) (DIST2). DIST1 and DIST2 determine, by best-fit procedures, the most appropriate values of the many parameters in the empirical equations, using the DISTEX data that fulfill the assigned conditions, and apply them to calculate distribution ratios of the respective solutes. Approximately 5,000 data were stored in DISTEX and DISTEXFP. The present report describes: 1) the specific features of the DIST1 and DIST2 codes with examples of calculation; 2) the databases DISTEX and DISTEXFP, and a program, DISTIN, which manages the data in DISTEX and DISTEXFP through input, search, correction and deletion functions; and, in the annex, 3) the programs DIST1 and DIST2 and the figure-drawing programs DIST1G and DIST2G; 4) a user manual for DISTIN; 5) the source programs of DIST1 and DIST2; and 6) the experimental data stored in DISTEX and DISTEXFP. (author). 122 refs.

  6. Use of Monte Carlo simulation software for calculating effective dose in cone beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Gomes B, W. O., E-mail: wilsonottobatista@gmail.com [Instituto Federal da Bahia, Rua Emidio dos Santos s/n, Barbalho 40301-015, Salvador de Bahia (Brazil)

    2016-10-15

    This study aimed to develop an irradiation geometry applicable to the PCXMC software and the consequent calculation of effective dose in cone beam computed tomography (CBCT) applications. We evaluated CBCT systems for dental applications: the Carestream CS 9000 3D tomograph, the GENDEX GXCB-500, and the Classic i-CAT. We first characterized each protocol by measuring the entrance surface kerma and the air kerma-area product, P{sub KA}, with RADCAL solid state detectors and a PTW transmission chamber. We then introduced the technical parameters of each preset protocol and the geometric conditions into the PCXMC software to obtain effective dose values. The calculated effective dose is within the range of 9.0 to 15.7 μSv for the CS 9000 3D tomograph, 44.5 to 89 μSv for the GXCB-500 equipment, and 62-111 μSv for the Classic i-CAT equipment. These values were compared with results obtained by dosimetry using TLDs implanted in an anthropomorphic phantom and are considered consistent. The effective dose results are very sensitive to the irradiation geometry (beam position on the mathematical phantom), which is a fragility of the software's usage; nevertheless, the tool is very useful for obtaining quick answers concerning the optimization of protocols. We conclude that the PCXMC Monte Carlo simulation software is useful for assessing protocols in dental CBCT examinations. (Author)

  7. Impact on Dose Coefficients Calculated with ICRP Adult Mesh-type Reference Computational Phantoms

    Energy Technology Data Exchange (ETDEWEB)

    Yeom, Yeon Soo; Nguyen, Thang Tat; Choi, Chan Soo; Lee, Han Jin; Han, Hae Gin; Han, Min Cheol; Shin, Bang Ho; Kim, Chan Hyeong [Dept. of Nuclear Engineering, Hanyang University, Seoul (Korea, Republic of)

    2017-04-15

    In 2016, the International Commission on Radiological Protection (ICRP) formed a new task group (TG 103) within Committee 2. The ultimate aim of TG 103 is to develop mesh-type reference computational phantoms (MRCPs) that can address the dosimetric limitations of the currently used voxel-type reference computational phantoms (VRCPs), which stem from their limited voxel resolutions. The objective of the present study is to investigate the dosimetric impact of the adult MRCPs by comparing dose coefficients (DCs) calculated with the MRCPs for some external and internal exposure cases against the reference DCs in ICRP Publications 116 and 133, which were produced with the adult VRCPs. The comparison shows that the MRCPs generally provide very similar DCs for uncharged particles, but significantly different DCs for charged particles, owing to the improvements of the MRCPs.

  8. Computational scheme for pH-dependent binding free energy calculation with explicit solvent.

    Science.gov (United States)

    Lee, Juyong; Miller, Benjamin T; Brooks, Bernard R

    2016-01-01

    We present a computational scheme to compute the pH dependence of binding free energy with explicit solvent. Despite the importance of pH, its effect has generally been neglected in binding free energy calculations because of a lack of accurate methods to model it. To address this limitation, we use a constant-pH methodology to obtain a true ensemble of multiple protonation states of a titratable system at a given pH, and analyze the ensemble using the Bennett acceptance ratio (BAR) method. The constant-pH method is based on the combination of enveloping distribution sampling (EDS) with the Hamiltonian replica exchange method (HREM), which yields an accurate semi-grand canonical ensemble of a titratable system. By considering the free energy change of constraining multiple protonation states to a single state, or of releasing a single protonation state to multiple states, the pH-dependent binding free energy profile can be obtained. We perform benchmark simulations of a host-guest system: cucurbit[7]uril (CB[7]) and benzimidazole (BZ). BZ experiences a large pKa shift upon complex formation. The pH-dependent binding free energy profiles of the benchmark system are obtained with three different long-range interaction calculation schemes: a cutoff, the particle mesh Ewald (PME) method, and the isotropic periodic sum (IPS) method. Our scheme captures the pH-dependent behavior of binding free energy successfully. Absolute binding free energy values obtained with the PME and IPS methods are consistent, while the cutoff method results are off by 2 kcal/mol. We also discuss the characteristics of the three long-range interaction calculation methods for constant-pH simulations. © 2015 The Protein Society.
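
    As context for the analysis step, a minimal sketch of the BAR estimator itself (not the authors' EDS/HREM constant-pH machinery): with equal sample sizes and work values in units of kT, the free energy difference solves a self-consistent equation, found here by root bracketing:

        import numpy as np
        from scipy.optimize import brentq

        def bar(w_forward, w_reverse):
            """Solve sum_i f(wF[i] - dF) = sum_j f(wR[j] + dF), f(x) = 1/(1 + exp(x))."""
            f = lambda x: 1.0 / (1.0 + np.exp(x))
            g = lambda dF: np.sum(f(w_forward - dF)) - np.sum(f(w_reverse + dF))
            return brentq(g, -50.0, 50.0)              # bracketing interval, illustrative

        # Toy check with Gaussian work distributions consistent with dF = 2 kT
        # (mean work = dF + variance/2 in each direction, per the fluctuation relation).
        rng = np.random.default_rng(0)
        wf = rng.normal(loc=2.0 + 1.0, scale=np.sqrt(2.0), size=5000)
        wr = rng.normal(loc=-2.0 + 1.0, scale=np.sqrt(2.0), size=5000)
        print(bar(wf, wr))                             # ~ 2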

  9. Computational Aspects of Sensitivity Calculations in Linear Transient Structural Analysis. Ph.D. Thesis

    Science.gov (United States)

    Greene, William H.

    1989-01-01

    A study has been performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of the number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first is an overall finite difference method in which the analysis is repeated for perturbed designs. The second is termed semianalytical because it involves direct analytical differentiation of the equations of motion, with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference method must use the approximation vectors from the original design in the analyses of the perturbed models.
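
    A minimal sketch of the overall finite difference idea on a toy problem (the model, integrator, and numbers are illustrative, not the thesis code): the entire transient analysis is rerun at perturbed designs, and the sensitivity of a response quantity is taken by central differences:

        import numpy as np

        def peak_displacement(k, m=1.0, c=0.1, f0=1.0, dt=1e-3, t_end=20.0):
            """Full transient analysis: step load on a damped oscillator; returns peak |u|."""
            u, v, peak = 0.0, 0.0, 0.0
            for _ in range(int(t_end / dt)):           # semi-implicit Euler time stepping
                a = (f0 - c * v - k * u) / m
                v += a * dt
                u += v * dt
                peak = max(peak, abs(u))
            return peak

        k, dk = 4.0, 1e-3
        sens = (peak_displacement(k + dk) - peak_displacement(k - dk)) / (2 * dk)
        print(sens)                                    # d(peak displacement)/d(stiffness)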

  10. DNAStat, version 2.1--a computer program for processing genetic profile databases and biostatistical calculations.

    Science.gov (United States)

    Berent, Jarosław

    2010-01-01

    This paper presents the new DNAStat version 2.1 for processing genetic profile databases and performing biostatistical calculations. The popularization of DNA studies in the judicial system has made it necessary to develop appropriate computer programs. Such programs must, above all, address two critical problems: broadly understood data processing and storage, and biostatistical calculations. Moreover, in the case of terrorist attacks and mass natural disasters, the ability to identify victims by searching for related individuals is very important. DNAStat version 2.1 is an adequate program for such purposes. DNAStat version 1.0 was launched in 2005. In 2006, the program was updated to versions 1.1 and 1.2; there were, however, only slight differences between those versions and the original one. DNAStat version 2.0 was launched in 2007, and its major improvement was the introduction of group calculation options with potential application to the personal identification of victims of mass disasters and terrorism. The latest version, 2.1, adds a choice of language, Polish or English, which will widen the usage and application of the program in other countries.

  11. Quantum computing applied to calculations of molecular energies: CH2 benchmark.

    Science.gov (United States)

    Veis, Libor; Pittner, Jiří

    2010-11-21

    Quantum computers are appealing for their ability to solve some tasks much faster than their classical counterparts. It was shown in [Aspuru-Guzik et al., Science 309, 1704 (2005)] that quantum computers, if available, would be able to perform full configuration interaction (FCI) energy calculations with polynomial scaling. This is in contrast to conventional computers, where FCI scales exponentially. We have developed a code for the simulation of quantum computers and implemented our version of the quantum FCI algorithm. We provide a detailed description of this algorithm and the results of an assessment of its performance on the four lowest-lying electronic states of the CH(2) molecule. This molecule was chosen as a benchmark since its two lowest-lying (1)A(1) states exhibit multireference character at the equilibrium geometry. It is shown that with a suitably chosen initial state of the quantum register, one is able to achieve the probability amplification regime of the iterative phase estimation algorithm even in this case.

  12. Geometrical splitting technique to improve the computational efficiency in Monte Carlo calculations for proton therapy.

    Science.gov (United States)

    Ramos-Méndez, José; Perl, Joseph; Faddegon, Bruce; Schümann, Jan; Paganetti, Harald

    2013-04-01

    To present the implementation and validation of a geometry-based variance reduction technique for the calculation of phase space data for proton therapy dose calculation. The treatment heads at the Francis H. Burr Proton Therapy Center were modeled with a new Monte Carlo tool (TOPAS, based on Geant4). For variance reduction purposes, two particle-splitting planes were implemented. First, the particles were split upstream of the second scatterer or at the second ionization chamber. Then, particles reaching another plane immediately upstream of the field-specific aperture were split again. In each case, particles were split by a factor of 8. At the second ionization chamber and at the latter plane, the cylindrical symmetry of the proton beam was exploited to position the split particles at randomly spaced locations rotated around the beam axis. Phase space data in IAEA format were recorded at the treatment head exit, and the computational efficiency was calculated. Depth-dose curves and beam profiles were analyzed, and dose distributions were compared for a voxelized water phantom for different treatment fields for both the reference and optimized simulations. In addition, dose in two patients was simulated with and without particle splitting to compare the efficiency and accuracy of the technique. A normalized computational efficiency gain of a factor of 10-20.3 was reached for phase space calculations for the different treatment head options simulated. Depth-dose curves and beam profiles were in reasonable agreement with the simulation done without splitting: within 1% for depth-dose, with an average difference of (0.2 ± 0.4)% (1 standard deviation) and a 0.3% statistical uncertainty of the simulations in the high dose region; within 1.6% for planar fluence, with an average difference of (0.4 ± 0.5)% and a statistical uncertainty of 0.3% in the high fluence region. The percentage differences between dose distributions in water for simulations done with and without particle
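
    A minimal sketch of the splitting step described above (the particle record layout is an illustrative assumption): each particle crossing a splitting plane becomes 8 copies of weight 1/8, each rotated by a random azimuth about the beam (z) axis to exploit the beam's cylindrical symmetry:

        import numpy as np

        def split_particle(x, y, px, py, pz, weight, n_split=8, rng=np.random.default_rng()):
            out = []
            for _ in range(n_split):
                phi = rng.uniform(0.0, 2.0 * np.pi)
                c, s = np.cos(phi), np.sin(phi)
                out.append((c * x - s * y, s * x + c * y,        # rotate position
                            c * px - s * py, s * px + c * py,    # rotate transverse momentum
                            pz, weight / n_split))               # conserve total weight
            return out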

  13. Development of selective photoionization spectroscopy technology - Development of a computer program to calculate selective ionization of atoms with multistep processes

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Young Soon; Nam, Baek Il [Myongji University, Seoul (Korea, Republic of)

    1995-08-01

    We have developed computer programs to calculate two- and three-step selective resonant multiphoton ionization of atoms. Autoionization resonances in the final continuum can be taken into account via the B-spline basis set method. 8 refs., 5 figs. (author)

  14. Electronic Structure Calculations and Adaptation Scheme in Multi-core Computing Environments

    Energy Technology Data Exchange (ETDEWEB)

    Seshagiri, Lakshminarasimhan; Sosonkina, Masha; Zhang, Zhao

    2009-05-20

    Multi-core processors have become the norm in generic computing environments and are being considered for adding an extra dimension to the execution of any application. The T2 Niagara processor is a unique environment consisting of eight cores, each capable of running eight threads simultaneously. Applications like the General Atomic and Molecular Electronic Structure System (GAMESS), used for ab initio molecular quantum chemistry calculations, can be good indicators of the performance of such machines and a guideline for both hardware designers and application programmers. In this paper we benchmark GAMESS performance on a T2 Niagara processor for a couple of molecules. We also show the suitability of using a middleware-based adaptation algorithm with GAMESS in such a multi-core environment.

  15. A computational approach to calculate personalized pennation angle based on MRI: effect on motion analysis.

    Science.gov (United States)

    Chincisan, Andra; Tecante, Karelia; Becker, Matthias; Magnenat-Thalmann, Nadia; Hurschler, Christof; Choi, Hon Fai

    2016-05-01

    Muscles are the primary component responsible for locomotion and changes of posture of the human body. The physiological basis of muscle force production and movement is determined by the muscle architecture: maximum muscle force, optimal muscle fiber length, tendon slack length, and pennation angle at optimal muscle fiber length. The pennation angle is related to maximum force production and to the range of motion. The aim of this study was to investigate a computational approach for calculating subject-specific pennation angles from 3D anatomical models based on magnetic resonance images (MRI), and to determine the impact of this approach on motion analysis with personalized musculoskeletal models. A 3D method that calculates the pennation angle using MRI was developed. The fiber orientations were computed automatically, while the muscle line of action was determined using approaches based on anatomical landmarks and on centroids of image segmentation. Three healthy male volunteers were recruited for MRI scanning and motion capture acquisition. This work evaluates the effect of subject-specific pennation angle as a musculoskeletal parameter in the lower limb, focusing on the quadriceps group. A comparison was made to assess the contribution of personalized models to motion analysis. Gait and deep squat were analyzed using neuromuscular simulations (OpenSim). The results showed variation of the pennation angle between the generic and subject-specific models, demonstrating important interindividual differences, especially for the vastus intermedius and vastus medialis muscles. The pennation angle variation between personalized and generic musculoskeletal models generated significant variation in muscle moments and forces during dynamic motion analysis. An MRI-based approach to define subject-specific pennation angle was proposed and evaluated in motion analysis models. The

  16. SU-E-I-28: Evaluating the Organ Dose From Computed Tomography Using Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ono, T; Araki, F [Faculty of Life Sciences, Kumamoto University, Kumamoto (Japan)

    2014-06-01

    Purpose: To evaluate organ doses from computed tomography (CT) using Monte Carlo (MC) calculations. Methods: A Philips Brilliance CT scanner (64 slice) was simulated using GMctdospp (IMPS, Germany), based on the EGSnrc user code. The X-ray spectra and bowtie filter for the MC simulations were determined to coincide with measurements of half-value layer (HVL) and off-center ratio (OCR) profiles in air. The MC dose was calibrated against absorbed dose measurements using a Farmer chamber and a cylindrical water phantom. The dose distribution from CT was calculated using patient CT images, and organ doses were evaluated from dose volume histograms. Results: The HVLs of Al at 80, 100, and 120 kV were 6.3, 7.7, and 8.7 mm, respectively. The calculated HVLs agreed with measurements within 0.3%, and the calculated and measured OCR profiles agreed within 3%. For adult head scans (CTDIvol = 51.4 mGy), mean doses for brain stem, eye, and eye lens were 23.2, 34.2, and 37.6 mGy, respectively. For pediatric head scans (CTDIvol = 35.6 mGy), mean doses for brain stem, eye, and eye lens were 19.3, 24.5, and 26.8 mGy, respectively. For adult chest scans (CTDIvol = 19.0 mGy), mean doses for lung, heart, and spinal cord were 21.1, 22.0, and 15.5 mGy, respectively. For adult abdominal scans (CTDIvol = 14.4 mGy), the mean doses for kidney, liver, pancreas, spleen, and spinal cord were 17.4, 16.5, 16.8, 16.8, and 13.1 mGy, respectively. For pediatric abdominal scans (CTDIvol = 6.76 mGy), mean doses for kidney, liver, pancreas, spleen, and spinal cord were 8.24, 8.90, 8.17, 8.31, and 6.73 mGy, respectively. In head scans, organ doses differed considerably from CTDIvol values. Conclusion: MC dose distributions calculated from patient CT images are useful for evaluating the organ doses absorbed by individual patients.

  17. MILDOS - A Computer Program for Calculating Environmental Radiation Doses from Uranium Recovery Operations

    Energy Technology Data Exchange (ETDEWEB)

    Strange, D. L.; Bander, T. J.

    1981-04-01

    The MILDOS computer code estimates impacts from radioactive emissions from uranium milling facilities. These impacts are presented as dose commitments to individuals and to the regional population within an 80 km radius of the facility. Only airborne releases of radioactive materials are considered: releases to surface water and to groundwater are not addressed in MILDOS. The code is multi-purpose and can be used to evaluate population doses for NEPA assessments, maximum individual doses for predictive 40 CFR 190 compliance evaluations, or maximum offsite air concentrations for predictive evaluations of 10 CFR 20 compliance. Emissions of radioactive materials from fixed point source locations and from area sources are modeled using a sector-averaged Gaussian plume dispersion model, which utilizes user-provided wind frequency data. Mechanisms such as deposition of particulates, resuspension, radioactive decay and ingrowth of daughter radionuclides are included in the transport model. Annual average air concentrations are computed, from which subsequent impacts to humans through various pathways are computed. Ground surface concentrations are estimated from deposition buildup and ingrowth of radioactive daughters; the surface concentrations are modified by radioactive decay, weathering and other environmental processes. The MILDOS computer code allows the user to vary the emission sources as a step function of time by adjusting the emission rates, which includes shutting them off completely. Thus the results of a computer run can be made to reflect changing processes throughout the facility's operational lifetime. The pathways considered for individual dose commitments and for population impacts are:
    • Inhalation
    • External exposure from ground concentrations
    • External exposure from cloud immersion
    • Ingestion of vegetables
    • Ingestion of meat
    • Ingestion of milk
    Dose commitments are calculated using dose conversion factors, which are ultimately based

  18. Fast diffraction calculation of cylindrical computer generated hologram based on outside-in propagation model

    Science.gov (United States)

    Wang, Jun; Wang, Qiong-Hua; Hu, Yuhen

    2017-11-01

    The cylindrical computer-generated hologram is a promising approach to realizing a display with a 360° field of view. However, the conventional cylindrical hologram employs an inside-out propagation model and suffers from two main drawbacks: limited object size and the lack of an effective reconstruction method. Previously, we proposed to fix these problems using an outside-in propagation model, in which the propagation direction of the inside-out model is reversed, and derived the corresponding diffraction calculation formula. In this work, we investigate a non-constant obliquity factor in the outside-in propagation model and show that it is the projection of the unit complex amplitude in the propagation direction onto the outer normal of the observation point. We then propose applying the fast Fourier transform to accelerate the convolution operation needed for the diffraction calculation. We conducted experiments on inverse diffraction and reconstruction of cylindrical objects, and very encouraging results demonstrate the validity of the proposed approach.

  19. Measurements and computer calculations of pulverized-coal combustion at Asnaes Power Station 4

    Energy Technology Data Exchange (ETDEWEB)

    Biede, O.; Swane Lund, J.

    1996-07-01

    Measurements have been performed on a front-fired 270 MW (net electrical output) pulverized-coal utility furnace with 24 swirl-stabilized burners placed in four horizontal rows. Apart from continuous operational measurements, special measurements were performed as follows. At one horizontal level above the upper burner row, gas temperatures were measured by an acoustic pyrometer. At the same level and at the level of the second upper burner row, irradiation to the walls was measured at ten positions by means of specially designed 2 {pi} thermal radiation meters. Fly-ash was collected and analysed for unburned carbon, and the coal size distribution to each individual burner was measured. Eight different cases were measured. With a Colombian coal, three cases with different oxygen concentrations in the exit gas were measured at a load of 260 MW, and in addition, measurements were performed at reduced loads of 215 MW and 130 MW. With a South African coal blend, measurements were performed at a load of 260 MW with three different oxygen exit concentrations. Each case has been simulated with a three-dimensional numerical computer code predicting the distribution of gas temperatures, species concentrations and thermal radiative net heat absorption on the furnace walls. Comparisons between measured and calculated gas temperatures, irradiation and unburned carbon are made. Measured results differ significantly among the cases, and the computational results agree well with the measured results. (au)

  20. Calculation of local skin doses with ICRP adult mesh-type reference computational phantoms

    Science.gov (United States)

    Yeom, Yeon Soo; Han, Haegin; Choi, Chansoo; Nguyen, Thang Tat; Lee, Hanjin; Shin, Bangho; Kim, Chan Hyeong; Han, Min Cheol

    2018-01-01

    Recently, Task Group 103 of the International Commission on Radiological Protection (ICRP) developed new mesh-type reference computational phantoms (MRCPs) for adult males and females in order to address the limitations of the current voxel-type reference phantoms described in ICRP Publication 110, which arise from their limited voxel resolutions and the nature of the voxel geometry. One of the substantial advantages of the MRCPs over the ICRP-110 reference phantoms is the inclusion of a 50-μm-thick radiosensitive skin basal-cell layer; however, a methodology for calculating the local skin dose (LSD), i.e., the maximum dose to the basal layer averaged over a 1 cm² area, had yet to be developed. In the present study, a dedicated program for LSD calculation with the MRCPs was developed based on the mean shift algorithm and the Geant4 Monte Carlo code. The developed program was used to calculate local skin dose coefficients (LSDCs) for electrons and alpha particles, which were then compared with the values given in ICRP Publication 116, which were produced with a simple tissue-equivalent cube model. The results of the present study show that the LSDCs of the MRCPs are generally in good agreement with the ICRP-116 values for alpha particles, but for electrons, significant differences are found at energies higher than 0.15 MeV: the LSDCs of the MRCPs are greater than the ICRP-116 values by as much as 2.7 times at 10 MeV, due mainly to the different curvature of the realistic MRCPs (i.e., curved) versus the simple cube model (i.e., flat).

  1. An Examination of the Performance of Parallel Calculation of the Radiation Integral on a Beowulf-Class Computer

    Science.gov (United States)

    Katz, D.; Cwik, T.; Sterling, T.

    1998-01-01

    This paper uses the parallel calculation of the radiation integral for examination of performance and compiler issues on a Beowulf-class computer. This type of computer, built from mass-market, commodity, off-the-shelf components, has limited communications performance and therefore also has a limited regime of codes for which it is suitable.

  2. Do physicians correctly calculate thromboembolic risk scores? A comparison of concordance between manual and computer-based calculation of CHADS2 and CHA2DS2-VASc scores.

    Science.gov (United States)

    Esteve-Pastor, M A; Marín, F; Bertomeu-Martinez, V; Roldán-Rabadán, I; Cequier-Fillat, Á; Badimon, L; Muñiz-García, J; Valdés, M; Anguita-Sánchez, M

    2016-05-01

    Clinical risk scores, the CHADS2 and CHA2DS2-VASc scores, are the established tools for assessing stroke risk in patients with atrial fibrillation (AF). The aim of this study was to assess the concordance between manual and computer-based calculation of CHADS2 and CHA2DS2-VASc scores, as well as to analyse the patient categories obtained using CHADS2 and the potential improvement in stroke risk stratification with the CHA2DS2-VASc score. We linked data from the Spanish atrial fibrillation registry FANTASIIA. Between June 2013 and March 2014, 1318 consecutive outpatients were recruited. We explored the concordance between manual scoring and computer-based calculation, and compared the distribution of embolic risk using both the CHADS2 and CHA2DS2-VASc scores. The mean age was 73.8 ± 9.4 years, and 758 (57.5%) patients were male. For the CHADS2 score, concordance between manual scoring and computer-based calculation was 92.5%, whereas for the CHA2DS2-VASc score it was 96.4%. With the CHADS2 score, 6.37% of patients with AF had a changed indication for antithrombotic therapy (3.49% changed from no treatment to needing antithrombotic treatment, and 2.88% the reverse). Using the CHA2DS2-VASc score, only 0.45% of patients with AF needed a change in the antithrombotic therapy recommendation. We found strong concordance between manual and computer-based calculation of both the CHADS2 and CHA2DS2-VASc risk scores, with minimal changes in anticoagulation recommendations. The use of the CHA2DS2-VASc score significantly improves the classification of AF patients at low and intermediate risk of stroke into a higher thromboembolic score grade. Moreover, the CHA2DS2-VASc score could identify 'truly low risk' patients compared with the CHADS2 score. © 2016 Royal Australasian College of Physicians.
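
    For reference, the CHA2DS2-VASc score is a simple additive rule, which is what makes exact computer-based calculation straightforward; a sketch follows (argument names are illustrative):

        def cha2ds2_vasc(age, female, chf, hypertension, diabetes, stroke_tia, vascular):
            """Congestive heart failure 1, Hypertension 1, Age >= 75 2 (65-74: 1),
            Diabetes 1, prior Stroke/TIA 2, Vascular disease 1, Sex category (female) 1."""
            score = 2 if age >= 75 else (1 if age >= 65 else 0)
            score += 1 if female else 0
            score += 1 if chf else 0
            score += 1 if hypertension else 0
            score += 1 if diabetes else 0
            score += 2 if stroke_tia else 0
            score += 1 if vascular else 0
            return score

        print(cha2ds2_vasc(age=78, female=True, chf=False, hypertension=True,
                           diabetes=False, stroke_tia=False, vascular=True))   # -> 5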

  3. The presence of mathematics and computer anxiety in nursing students and their effects on medication dosage calculations.

    Science.gov (United States)

    Glaister, Karen

    2007-05-01

    To determine whether the presence of mathematical and computer anxiety in nursing students affects the learning of dosage calculations. The quasi-experimental study compared learning outcomes at differing levels of mathematical and computer anxiety when integrative and computer-based learning approaches were used. Participants were a cohort of second-year nursing students (n=97). Mathematical anxiety was present in 20% (n=19) of the student nurse population, and 14% (n=13) experienced mathematical testing anxiety. Students more anxious about mathematics and the testing of mathematics benefited from integrative learning to develop conditional knowledge (F(4,66)=2.52 at p<.05). Computer anxiety was present in 12% (n=11) of participants, with those reporting medium and high levels of computer anxiety performing less well than those with low levels (F(1,81)=3.98 at p<.05). Mathematical and computer anxiety should therefore be considered when planning an educational program to develop competency in dosage calculations.

  4. Development of a computer code for calculating the steady super/hypersonic inviscid flow around real configurations. Volume 1: Computational technique

    Science.gov (United States)

    Marconi, F.; Salas, M.; Yaeger, L.

    1976-01-01

    A numerical procedure has been developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt-nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.

  5. A Scientific Calculator for Exact Real Number Computation Based on LRT, GMP and FC++

    Directory of Open Access Journals (Sweden)

    J. A. Hernández

    2012-03-01

    Language for Redundant Test (LRT) is a programming language for exact real number computation. Its lazy evaluation mechanism (also called call-by-need) and its infinite list requirement make the language appropriate for implementation in a functional programming language such as Haskell. However, a direct translation of the operational semantics of LRT into Haskell, as well as of the algorithms implementing the basic operations (addition, subtraction, multiplication, division) and trigonometric functions (sine, cosine, tangent, etc.), makes the resulting scientific calculator time-consuming and inefficient. In this paper, we present an alternative implementation of the scientific calculator using FC++ and GMP. FC++ is a functional C++ library, while GMP is a GNU multiple precision library. We show that a direct translation of LRT into FC++ results in a faster scientific calculator than the one presented in Haskell.

  6. Calculation of computer-generated hologram (CGH) from 3D object of arbitrary size and viewing angle

    Science.gov (United States)

    Xu, Liyao; Chang, Chenliang; Feng, Shaotong; Yuan, Caojin; Nie, Shouping

    2017-11-01

    We propose a method to calculate the computer-generated hologram (CGH) of a 3D object of arbitrary size and viewing angle. The spectral relation between a 3D voxel object and the CGH wavefront is established. The CGH is generated via diffraction calculation from the 3D voxel object, based on the development of a scaled Fresnel diffraction algorithm. This CGH calculation method overcomes the limitations on sampling imposed by the conventional Fourier-transform-based algorithm, and the calculation and reconstruction of 3D objects with arbitrary size and viewing angle are achieved. Both simulation and optical experiments prove the validity of the proposed method.

  8. A computer-based matrix for rapid calculation of pulmonary hemodynamic parameters in congenital heart disease.

    Science.gov (United States)

    Lopes, Antonio Augusto; Dos Anjos Miranda, Rogério; Gonçalves, Rilvani Cavalcante; Thomaz, Ana Maria

    2009-07-01

    In patients with congenital heart disease undergoing cardiac catheterization for hemodynamic purposes, parameter estimation by the indirect Fick method using a single predicted value of oxygen consumption has been a matter of criticism. We developed a computer-based routine for rapid estimation of replicate hemodynamic parameters using multiple predicted values of oxygen consumption. Using Microsoft® Excel facilities, we constructed a matrix containing five models (equations) for the prediction of oxygen consumption, plus all the additional formulas needed to obtain replicate estimates of hemodynamic parameters. By entering data from 65 patients with ventricular septal defects, aged 1 month to 8 years, it was possible to obtain multiple predictions for oxygen consumption, with clear between-age-group (P < .001) and between-method (P < .001) differences. Using these predictions in the individual patient, it was possible to obtain the upper and lower limits of a likely range for any given parameter, which made estimation more realistic. The organized matrix allows replicate parameter estimates to be obtained rapidly, without the errors associated with exhaustive manual calculations.
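
    A minimal sketch of the core computation such a matrix automates: indirect-Fick flow from oxygen consumption and the arteriovenous oxygen content difference, evaluated over several predicted VO2 values to yield a likely range rather than a single point estimate (all numbers are illustrative):

        def o2_content(hb_g_dl, sat_fraction):
            """Oxygen content, mL O2 per liter of blood: 1.36 * Hb (g/dL) * SO2 * 10."""
            return 1.36 * hb_g_dl * sat_fraction * 10.0

        def fick_flow(vo2_ml_min, hb_g_dl, sat_art, sat_ven):
            """Indirect Fick: flow (L/min) = VO2 / arteriovenous O2 content difference."""
            return vo2_ml_min / (o2_content(hb_g_dl, sat_art) - o2_content(hb_g_dl, sat_ven))

        vo2_predictions = [110.0, 125.0, 140.0]        # mL/min, from different prediction models
        flows = [fick_flow(v, hb_g_dl=12.0, sat_art=0.98, sat_ven=0.72)
                 for v in vo2_predictions]
        print(min(flows), max(flows))                  # likely range for systemic blood flow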

  9. A toolchain for the automatic generation of computer codes for correlated wavefunction calculations.

    Science.gov (United States)

    Krupička, Martin; Sivalingam, Kantharuban; Huntington, Lee; Auer, Alexander A; Neese, Frank

    2017-06-05

    In this work, the automated generator environment for ORCA (ORCA-AGE) is described. It is a powerful toolchain for the automatic implementation of wavefunction-based quantum chemical methods. ORCA-AGE consists of three main modules: (1) generation of "raw" equations from a second-quantized ansatz for the wavefunction, (2) factorization and optimization of the equations, and (3) generation of actual computer code. We generate code for the ORCA package, making use of the powerful functionality for wavefunction-based correlation calculations that is already present in the code. The equation generation makes use of the most elementary commutation relations and hence is extremely general. Consequently, code can be generated for single-reference as well as multireference approaches, and for spin-independent as well as spin-dependent operators. The performance of the generated code is demonstrated through comparison with efficient hand-optimized code for some well-understood standard configuration interaction and coupled cluster methods. In general, the generated code is no more than 30% slower than the hand-optimized code, thus allowing for routine application of canonical ab initio methods to molecules with about 500-1000 basis functions. Using the toolchain, complicated methods, especially those surpassing human ability to handle complexity, can be implemented efficiently and reliably in very short times. This enables the developer to shift attention from debugging code to the physical content of the chosen wavefunction ansatz. Automatic code generation also has the desirable property that any improvement in the toolchain immediately applies to all generated code. © 2017 Wiley Periodicals, Inc.

  10. Computer Calculations of Eddy-Current Power Loss in Rotating Titanium Wheels and Rims in Localized Axial Magnetic Fields

    Energy Technology Data Exchange (ETDEWEB)

    Mayhall, D J; Stein, W; Gronberg, J B

    2006-05-15

    We have performed preliminary computer-based, transient, magnetostatic calculations of the eddy-current power loss in rotating titanium-alloy and aluminum wheels and wheel rims in the predominantly axially-directed, steady magnetic fields of two small, solenoidal coils. These calculations have been undertaken to assess the eddy-current power loss in various possible International Linear Collider (ILC) positron target wheels. They have also been done to validate the simulation code module against known results published in the literature. The commercially available software package used in these calculations is the Maxwell 3D, Version 10, Transient Module from the Ansoft Corporation.

  11. Calculations of reactor-accident consequences, Version 2. CRAC2: computer code user's guide

    Energy Technology Data Exchange (ETDEWEB)

    Ritchie, L.T.; Johnson, J.D.; Blond, R.M.

    1983-02-01

    The CRAC2 computer code is a revision of CRAC, the Calculation of Reactor Accident Consequences computer code developed for the Reactor Safety Study. CRAC2 incorporates significant modeling improvements in the areas of weather sequence sampling and emergency response, and refinements to the plume rise, atmospheric dispersion, and wet deposition models. New output capabilities have also been added. This guide is intended to facilitate the informed and intelligent use of CRAC2. It includes descriptions of the input data, the output results, the file structures, control information, and five sample problems.

  12. Bessel function expansion to reduce the calculation time and memory usage for cylindrical computer-generated holograms.

    Science.gov (United States)

    Sando, Yusuke; Barada, Daisuke; Jackin, Boaz Jessie; Yatagai, Toyohiko

    2017-07-10

    This study proposes a method to reduce the calculation time and memory usage required for calculating cylindrical computer-generated holograms. The wavefront on the cylindrical observation surface is represented as a convolution integral in the 3D Fourier domain. The Fourier transform of the kernel function in this convolution integral is performed analytically using a Bessel function expansion. Compared with the numerical approach, which Fourier transforms the kernel function with a fast Fourier transform, the analytical solution drastically reduces the calculation time and memory usage at no additional cost. In this study, we present the analytical derivation, the efficient calculation of the Bessel function series, and a numerical simulation. Furthermore, we demonstrate the effectiveness of the analytical solution through comparisons of calculation time and memory usage.

  13. Calculation method for computer-generated holograms with cylindrical basic object light by using a graphics processing unit.

    Science.gov (United States)

    Sakata, Hironobu; Hosoyachi, Kouhei; Yang, Chan-Young; Sakamoto, Yuji

    2011-12-01

    It takes an enormous amount of time to calculate a computer-generated hologram (CGH). A fast calculation method for a CGH using precalculated object light has been proposed in which the light waves of an arbitrary object are calculated using transform calculations of the precalculated object light. However, this method requires a huge amount of memory. This paper proposes the use of a method that uses a cylindrical basic object light to reduce the memory requirement. Furthermore, it is accelerated by using a graphics processing unit (GPU). Experimental results show that the calculation speed on a GPU is about 65 times faster than that on a CPU. © 2011 Optical Society of America
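    For orientation, the baseline that such precalculation methods accelerate is the direct point-source summation U(x,y) = Σⱼ aⱼ/rⱼ · exp(ikrⱼ). The NumPy sketch below shows that textbook sum on a made-up object; on a GPU the same array code could run via a drop-in array library such as CuPy (an assumed tool choice, not the authors' implementation).

```python
# Baseline point-source CGH sum: U(x,y) = sum_j a_j / r_j * exp(i k r_j).
# Vectorized with NumPy; the object points below are hypothetical.
import numpy as np

wavelength = 633e-9
k = 2 * np.pi / wavelength
nx = ny = 256
pitch = 10e-6
x = (np.arange(nx) - nx / 2) * pitch
y = (np.arange(ny) - ny / 2) * pitch
X, Y = np.meshgrid(x, y)

rng = np.random.default_rng(0)
pts = rng.uniform([-1e-3, -1e-3, 0.05], [1e-3, 1e-3, 0.10], size=(50, 3))

U = np.zeros((ny, nx), dtype=complex)
for px, py, pz in pts:
    r = np.sqrt((X - px) ** 2 + (Y - py) ** 2 + pz ** 2)
    U += np.exp(1j * k * r) / r

fringe = np.angle(U)  # phase hologram pattern
print(fringe.shape, fringe.min(), fringe.max())
```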

  14. CONC/11: A computer program for calculating the performance of dish-type solar thermal collectors and power systems

    Science.gov (United States)

    Jaffe, L. D.

    1984-01-01

    The CONC/11 computer program, designed for calculating the performance of dish-type solar thermal collectors and power systems, is discussed. This program is intended to aid the system or collector designer in evaluating the performance to be expected with possible design alternatives. From design or test data on the characteristics of the various subsystems, CONC/11 calculates the efficiencies of the collector and the overall power system as functions of the receiver temperature for a specified insolation. If desired, CONC/11 will also determine the receiver aperture and the receiver temperature that provide the highest efficiencies at a given insolation. The program handles both simple and compound concentrators. CONC/11 is written in Athena Extended FORTRAN (similar to FORTRAN 77) to operate primarily in an interactive mode on a Sperry 1100/81 computer. It could also be used on many small computers. A user's manual is also provided for this program.

  15. A method for calculating regional cerebral blood flow from emission computed tomography of inert gas concentrations

    DEFF Research Database (Denmark)

    Celsis, P; Goldman, T; Henriksen, L

    1981-01-01

    Emission tomography of positron or gamma emitting inert gases allows calculation of regional cerebral blood flow (rCBF) in cross-sectional slices of human brain. An algorithm is presented for rCBF calculations from a sequence of time averaged tomograms using inhaled 133Xe. The approach is designed...

  16. Calculation of the density shift and broadening of the transition lines in pionic helium: Computational problems

    Energy Technology Data Exchange (ETDEWEB)

    Bakalov, Dimitar, E-mail: dbakalov@inrne.bas.bg [Bulgarian Academy of Sciences, INRNE (Bulgaria)

    2015-08-15

    The potential energy surface and the computational codes, developed for the evaluation of the density shift and broadening of the spectral lines of laser-induced transitions from metastable states of antiprotonic helium, fail to produce convergent results in the case of pionic helium. We briefly analyze the encountered computational problems and outline possible solutions of the problems.

  17. VLab: A Science Gateway for Distributed First Principles Calculations in Heterogeneous High Performance Computing Systems

    Science.gov (United States)

    da Silveira, Pedro Rodrigo Castro

    2014-01-01

    This thesis describes the development and deployment of a cyberinfrastructure for distributed high-throughput computations of materials properties at high pressures and/or temperatures--the Virtual Laboratory for Earth and Planetary Materials--VLab. VLab was developed to leverage the aggregated computational power of grid systems to solve…

  18. Calculation of dipole polarizability derivatives of adamantane and their use in electron scattering computations

    DEFF Research Database (Denmark)

    Sauer, Stephan P. A.; Paidarová, Ivana; Čársky, Petr

    2016-01-01

    In this paper we present calculations of the static polarizability and its derivatives for the adamantane molecule carried out at the density functional theory level using the B3LYP exchange correlation functional and Sadlej's polarized valence triple zeta basis set. It is shown that the polarizability tensor is necessary to correct the long-range behavior of DFT functionals used in electron-molecule scattering calculations. The impact of such a long-range correction is demonstrated on elastic and vibrationally inelastic electron collisions with adamantane, a molecule representing a large polyatomic target for electron scattering calculations.

  19. A new version of a computer program for dynamical calculations of RHEED intensity oscillations

    Science.gov (United States)

    Daniluk, Andrzej; Skrobas, Kazimierz

    2006-01-01

    We present a new version of the RHEED program, which contains a graphical user interface enabling use of the program in a graphical environment. The program also contains a graphical component that displays program data at run-time through an easy-to-use graphical interface.
    New version program summary:
    Title of program: RHEEDGr
    Catalogue identifier: ADWV
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWV
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Catalogue identifier of previous version: ADUY
    Authors of the original program: A. Daniluk
    Does the new version supersede the original program: no
    Computer for which the new version is designed and others on which it has been tested: Pentium-based PC
    Operating systems or monitors under which the new version has been tested: Windows 9x, XP, NT
    Programming language used: Borland C++ Builder
    Memory required to execute with typical data: more than 1 MB
    Number of bits in a word: 64
    Number of processors used: 1
    Number of lines in distributed program, including test data, etc.: 5797
    Number of bytes in distributed program, including test data, etc.: 588 121
    Distribution format: tar.gz
    Nature of physical problem: Reflection high-energy electron diffraction (RHEED) is a very useful technique for studying growth and surface analysis of thin epitaxial structures prepared by molecular beam epitaxy (MBE). The RHEED technique can reveal, almost instantaneously, changes either in the coverage of the sample surface by adsorbates or in the surface structure of a thin film.
    Method of solution: RHEED intensities are calculated within the framework of the general matrix formulation of Peng and Whelan [1] under the one-beam condition.
    Reasons for the new version: Responding to user feedback, we designed a graphical package that enables displaying program data at run-time through an easy-to-use graphical interface.
    Summary of revisions: In the present form…

  20. Computation of nodal surfaces in fixed-node diffusion Monte Carlo calculations using a genetic algorithm.

    Science.gov (United States)

    Ramilowski, Jordan A; Farrelly, David

    2010-10-21

    The fixed-node diffusion Monte Carlo (DMC) algorithm is a powerful way of computing excited state energies in a remarkably diverse number of contexts in quantum chemistry and physics. The main difficulty in implementing the procedure lies in obtaining a good estimate of the nodal surface of the excited state in question. Although the nodal surface can sometimes be obtained from symmetry or by making approximations this is not always the case. In any event, nodal surfaces are usually obtained in an ad hoc way. In fact, the search for nodal surfaces can be formulated as an optimization problem within the DMC procedure itself. Here we investigate the use of a genetic algorithm to systematically and automatically compute nodal surfaces. Application is made to the computation of excited states of the HCN-(4)He complex and to the computation of tunneling splittings in the hydrogen bonded HCl-HCl complex.
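    As a schematic illustration of the idea (not the authors' actual implementation), the sketch below evolves a one-parameter nodal surface with a tiny genetic algorithm. The "fixed-node energy" is stood in for by a toy objective that is lowest when the trial node coincides with the exact node, mirroring the variational property of fixed-node DMC; the exact node position and all GA settings are assumptions.

```python
# Toy genetic algorithm for a one-parameter nodal surface.
# The true fixed-node energy is variational: it is lowest when the
# trial node coincides with the exact node. Here a quadratic toy
# objective around the (assumed) exact node at x = 0 plays that role.
import random

def fixed_node_energy(node):          # stand-in for a DMC energy estimate
    return 1.0 + (node - 0.0) ** 2    # minimum at the exact node

def evolve(pop_size=20, generations=40, sigma=0.1):
    pop = [random.uniform(-2, 2) for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fixed_node_energy)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(children) < pop_size - len(parents):
            a, b = random.sample(parents, 2)
            child = 0.5 * (a + b)                  # crossover: midpoint
            child += random.gauss(0.0, sigma)      # mutation
            children.append(child)
        pop = parents + children
    return min(pop, key=fixed_node_energy)

random.seed(1)
print("best node position:", evolve())
```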

  1. Effectiveness of a computer based medication calculation education and testing programme for nurses.

    Science.gov (United States)

    Sherriff, Karen; Burston, Sarah; Wallis, Marianne

    2012-01-01

    The aim of the study was to evaluate the effect of an on-line medication calculation education and testing programme. The outcome measures were medication calculation proficiency and self-efficacy. This quasi-experimental study involved the administration of questionnaires before and after nurses completed annual medication calculation testing. The study was conducted in two hospitals in south-east Queensland, Australia, which provide a variety of clinical services including obstetrics, paediatrics, ambulatory, mental health, acute and critical care, and community services. Participants were registered nurses (RNs) and enrolled nurses with a medication endorsement (EN(Med)) working as clinicians (n=107). Data pertaining to success rate, number of test attempts, self-efficacy, medication calculation error rates and nurses' satisfaction with the programme were collected. Medication calculation scores at first test attempt showed improvement following one year of access to the programme. Two of the self-efficacy subscales improved over time and nurses reported satisfaction with the online programme. Results of this study may facilitate the continuation and expansion of medication calculation and administration education to improve nursing knowledge, inform practice and directly improve patient safety. Crown Copyright © 2011. Published by Elsevier Ltd. All rights reserved.

  2. Computer codes in nuclear safety, radiation transport and dosimetry; Les codes de calcul en radioprotection, radiophysique et dosimetrie

    Energy Technology Data Exchange (ETDEWEB)

    Bordy, J.M.; Kodeli, I.; Menard, St.; Bouchet, J.L.; Renard, F.; Martin, E.; Blazy, L.; Voros, S.; Bochud, F.; Laedermann, J.P.; Beaugelin, K.; Makovicka, L.; Quiot, A.; Vermeersch, F.; Roche, H.; Perrin, M.C.; Laye, F.; Bardies, M.; Struelens, L.; Vanhavere, F.; Gschwind, R.; Fernandez, F.; Quesne, B.; Fritsch, P.; Lamart, St.; Crovisier, Ph.; Leservot, A.; Antoni, R.; Huet, Ch.; Thiam, Ch.; Donadille, L.; Monfort, M.; Diop, Ch.; Ricard, M

    2006-07-01

    The purpose of this conference was to describe the present state of computer codes dedicated to radiation transport, radiation source assessment and dosimetry. The presentations were divided into two sessions: (1) methodology and (2) uses in industrial, medical and research domains. It appears that two different calculation strategies prevail, both based on preliminary Monte-Carlo calculations with data storage: first, quick simulations made from a database of particle histories built through a previous Monte-Carlo simulation; and secondly, a neural-network approach involving a learning platform generated through a previous Monte-Carlo simulation. This document gathers the slides of the presentations.

  3. The Computer Code NOVO for the Calculation of Wake Potentials of the Very Short Ultra-relativistic Bunches

    Energy Technology Data Exchange (ETDEWEB)

    Novokhatski, Alexander; /SLAC

    2005-12-01

    The problem of the electromagnetic interaction of a beam with accelerator elements is very important for linear colliders, electron-positron factories, and free electron lasers. Precise calculation of wake fields is required for beam dynamics studies in these machines. We describe a method which allows computation of the wake fields of very short bunches. The computer code NOVO was developed based on this method. The method is free of unphysical solutions such as "self-acceleration" of the bunch head, which are common to well-known wake field codes. Code NOVO has been used for wake field studies for many accelerator projects all over the world.

  4. Computer program for calculating supersonic flow on the windward side of conical delta wings by the method of lines

    Science.gov (United States)

    Klunker, E. B.; South, J. C., Jr.; Davis, R. M.

    1972-01-01

    A user's manual is presented for a program that calculates the supersonic flow on the windward side of conical delta wings with shock attached at the sharp leading edge by the method of lines. The program also has a limited capability for computing the flow about circular and elliptic cones at incidence. It provides information including the shock shape, flow field, isentropic surface-flow properties, and force coefficients. A description of the program operation, a sample computation, and a FORTRAN 4 program listing are included.

  5. Computational Calorimetry: High-Precision Calculation of Host-Guest Binding Thermodynamics.

    Science.gov (United States)

    Henriksen, Niel M; Fenley, Andrew T; Gilson, Michael K

    2015-09-08

    We present a strategy for carrying out high-precision calculations of binding free energy and binding enthalpy values from molecular dynamics simulations with explicit solvent. The approach is used to calculate the thermodynamic profiles for binding of nine small molecule guests to either the cucurbit[7]uril (CB7) or β-cyclodextrin (βCD) host. For these systems, calculations using commodity hardware can yield binding free energy and binding enthalpy values with a precision of ∼0.5 kcal/mol (95% CI) in a matter of days. Crucially, the self-consistency of the approach is established by calculating the binding enthalpy directly, via end point potential energy calculations, and indirectly, via the temperature dependence of the binding free energy, i.e., by the van't Hoff equation. Excellent agreement between the direct and van't Hoff methods is demonstrated for both host-guest systems and an ion-pair model system for which particularly well-converged results are attainable. Additionally, we find that hydrogen mass repartitioning allows marked acceleration of the calculations with no discernible cost in precision or accuracy. Finally, we provide guidance for accurately assessing numerical uncertainty of the results in settings where complex correlations in the time series can pose challenges to statistical analysis. The routine nature and high precision of these binding calculations opens the possibility of including measured binding thermodynamics as target data in force field optimization so that simulations may be used to reliably interpret experimental data and guide molecular design.
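    The van't Hoff route mentioned here follows from ΔG(T) = ΔH − TΔS: if ΔH and ΔS are approximately temperature independent, ΔG/T is linear in 1/T with slope ΔH. The Python sketch below extracts ΔH that way from made-up free energies; it illustrates the relation only and uses none of the paper's data.

```python
# Van't Hoff extraction of binding enthalpy from Delta_G(T).
# With Delta_H, Delta_S approximately constant:
#   Delta_G / T = Delta_H * (1/T) - Delta_S
# so a linear fit of Delta_G/T against 1/T has slope Delta_H.
import numpy as np

T = np.array([280.0, 290.0, 300.0, 310.0, 320.0])        # K (assumed)
dG = np.array([-10.1, -9.8, -9.5, -9.2, -8.9])           # kcal/mol (made up)

slope, intercept = np.polyfit(1.0 / T, dG / T, 1)
print(f"Delta_H ~ {slope:.2f} kcal/mol, Delta_S ~ {-intercept*1000:.1f} cal/mol/K")
```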

  6. Emergency Doses (ED) - Revision 3: A calculator code for environmental dose computations

    Energy Technology Data Exchange (ETDEWEB)

    Rittmann, P.D.

    1990-12-01

    The calculator program ED (Emergency Doses) was developed from several HP-41CV calculator programs documented in the report Seven Health Physics Calculator Programs for the HP-41CV, RHO-HS-ST-5P (Rittman 1984). The program was developed to enable estimates of offsite impacts more rapidly and reliably than was possible with the software available for emergency response at that time. ED - Revision 3, documented in this report, revises the inhalation dose model to match that of ICRP 30 and adds simple estimates of the air concentration downwind from a chemical release. In addition, the method for calculating the Pasquill dispersion parameters was revised to match the GENII code within the limitations of a hand-held calculator (e.g., plume rise and building wake effects are not included). The summary report generator for printed output, which had been present in the code from the original version, was eliminated in Revision 3 to make room for the dispersion model, the chemical release portion, and the methods of looping back to an input menu until there are no further changes. This program runs on the Hewlett-Packard programmable calculators known as the HP-41CV and the HP-41CX. The documentation for ED - Revision 3 includes a guide for users, sample problems, detailed verification tests and results, model descriptions, a code description (with program listing), and an independent peer review. This software is intended to be used by individuals with some training in the use of air transport models. Some user inputs require intelligent application of the model to the actual conditions of the accident. The results calculated using ED - Revision 3 are only correct to the extent allowed by the mathematical models. 9 refs., 36 tabs.
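    For orientation, the dispersion part of a calculator like this typically rests on the Gaussian plume relation χ/Q = exp(−y²/2σᵧ²)/(π u σᵧ σ_z) for a ground-level release with ground reflection. The sketch below is a generic illustration of that formula; the power-law σ(x) coefficients are placeholders standing in for Pasquill-Gifford fits, not the values coded in ED.

```python
# Generic Gaussian plume, ground-level release with ground reflection:
#   chi/Q = exp(-y^2 / (2 sigma_y^2)) / (pi * u * sigma_y * sigma_z)
# The sigma(x) power laws below use placeholder coefficients standing in
# for Pasquill-Gifford fits; they are NOT the coefficients used in ED.
import math

def sigma(x_m, a, b):
    return a * x_m ** b

def chi_over_q(x_m, y_m, u_ms, ay=0.08, by=0.90, az=0.06, bz=0.85):
    sy, sz = sigma(x_m, ay, by), sigma(x_m, az, bz)
    return math.exp(-y_m ** 2 / (2 * sy ** 2)) / (math.pi * u_ms * sy * sz)

# air concentration factor 1 km downwind on the plume centerline, 3 m/s wind
print(f"chi/Q = {chi_over_q(1000.0, 0.0, 3.0):.2e} s/m^3")
```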

  8. GRAIN: a computer program to calculate ancestral and partial inbreeding coefficients using a gene dropping approach.

    Science.gov (United States)

    Baumung, R; Farkas, J; Boichard, D; Mészáros, G; Sölkner, J; Curik, I

    2015-04-01

    GRain is freely available software intended to enable and promote testing of hypotheses with respect to purging and heterogeneity of inbreeding depression. The program is based on a stochastic approach, the gene dropping method, and calculates various coefficients from large and complex pedigrees. GRain calculates, together with the 'classical' inbreeding coefficient, the ancestral inbreeding coefficients proposed by Ballou (1997) J. Hered., 88, 169 and Kalinowski et al. (2000) Conserv. Biol., 14, 1375, as well as an ancestral history coefficient (AHC), defined here for the first time. AHC is defined as the number of times, during pedigree segregation (gene dropping), that a randomly taken allele has been in identity-by-descent (IBD) status. Furthermore, GRain enables testing of heterogeneity and/or purging of inbreeding depression with respect to different founders/ancestors by calculating partial coefficients for all previously obtained coefficients. © 2015 Blackwell Verlag GmbH.
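    Gene dropping itself is easy to sketch: founders receive unique allele labels, alleles are propagated down the pedigree at random, and IBD statistics are tallied over many replicates. The Python sketch below is a minimal illustration with a hypothetical three-generation pedigree, not GRain's algorithm; it estimates a classical inbreeding coefficient this way.

```python
# Minimal gene-dropping estimate of an inbreeding coefficient F:
# founders get unique alleles; each individual inherits one random
# allele from each parent; F = P(the two alleles of X are IBD).
import random

# hypothetical pedigree: child -> (sire, dam); None marks founders
pedigree = {
    "A": (None, None), "B": (None, None),
    "C": ("A", "B"), "D": ("A", "B"),
    "X": ("C", "D"),              # X is a full-sib mating offspring
}

def drop_genes():
    genes = {}
    def alleles(ind):
        if ind not in genes:
            sire, dam = pedigree[ind]
            if sire is None:                    # founder: two unique alleles
                genes[ind] = (f"{ind}.1", f"{ind}.2")
            else:
                genes[ind] = (random.choice(alleles(sire)),
                              random.choice(alleles(dam)))
        return genes[ind]
    return alleles("X")

random.seed(0)
n = 100_000
f_hat = sum(a == b for a, b in (drop_genes() for _ in range(n))) / n
print(f"estimated F(X) = {f_hat:.3f} (exact for full sibs: 0.25)")
```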

  9. An on-line calculator to compute phonotactic probability and neighborhood density based on child corpora of spoken American English

    Science.gov (United States)

    Storkel, Holly L.; Hoover, Jill R.

    2010-01-01

    An on-line calculator was developed (http://www.bncdnet.ku.edu/cml/info_ccc.vi) to compute phonotactic probability, the likelihood of occurrence of a sound sequence, and neighborhood density, the number of phonologically similar words, based on child corpora of American English (Kolson, 1960; Moe, Hopkins, & Rush, 1982) and compared to an adult calculator. Phonotactic probability and neighborhood density were computed for a set of 380 nouns (Fenson et al., 1993) using both the child and adult corpora. Child and adult raw values were significantly correlated. However, significant differences were detected. Specifically, child phonotactic probability was higher than adult phonotactic probability, especially for high probability words; and child neighborhood density was lower than adult neighborhood density, especially for high density words. These differences were reduced or eliminated when relative measures (i.e., z scores) were used. Suggestions are offered regarding which values to use in future research. PMID:20479181
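    In the usual formulation, positional segment probability sums the corpus frequency of each phoneme at each word position, and neighborhood density counts words one phoneme substitution, deletion, or addition away. The Python sketch below is a simplified version of those measures on a made-up mini-corpus; it does not reproduce the calculator's corpora or exact weighting.

```python
# Simplified phonotactic probability and neighborhood density.
# Corpus words are phoneme tuples; the tiny corpus here is made up.
from collections import Counter

corpus = [("k","æ","t"), ("b","æ","t"), ("k","ɪ","t"), ("b","æ","d"), ("k","æ","p")]

pos_counts = Counter((i, p) for w in corpus for i, p in enumerate(w))
pos_totals = Counter(i for w in corpus for i, _ in enumerate(w))

def phonotactic_probability(word):
    """Sum of positional segment probabilities."""
    return sum(pos_counts[(i, p)] / pos_totals[i] for i, p in enumerate(word))

def neighbors(word):
    """Words differing by one substitution, deletion, or addition."""
    def one_edit(a, b):
        if abs(len(a) - len(b)) > 1 or a == b:
            return False
        if len(a) == len(b):
            return sum(x != y for x, y in zip(a, b)) == 1
        short, long_ = sorted((a, b), key=len)
        return any(long_[:i] + long_[i+1:] == short for i in range(len(long_)))
    return [w for w in corpus if one_edit(word, w)]

w = ("k","æ","t")
print(phonotactic_probability(w), len(neighbors(w)))
```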

  10. Reducing the computational requirements for simulating tunnel fires by combining multiscale modelling and multiple processor calculation

    DEFF Research Database (Denmark)

    Vermesi, Izabella; Rein, Guillermo; Colella, Francesco

    2017-01-01

    This study investigates the feasibility of multiscale modelling of tunnel fires in FDS version 6.0, a widely used fire-specific, open source CFD software. Furthermore, it compares the reduction in simulation time given by multiscale modelling with the one given by the use of multiple processor calculation. This was done using a 1200 m long tunnel with a rectangular cross-section. Multiscale modelling reduced the simulation time far more than multiple processor calculation (97% faster when using a single mesh and multiscale modelling; only 46% faster when using the full tunnel and multiple meshes). In summary, it was found that multiscale modelling with FDS v.6.0 is feasible, and the combination of multiple meshes and multiscale modelling was established…

  11. Acute Calculous Cholecystitis Missed on Computed Tomography and Ultrasound but Diagnosed with Fluorodeoxyglucose-Positron Emission Tomography/Computed Tomography.

    Science.gov (United States)

    Aparici, Carina Mari; Win, Aung Zaw

    2016-01-01

    We present a case of a 69-year-old patient who underwent ascending aortic aneurysm repair with aortic valve replacement. On postsurgical day 12, he developed leukocytosis and low-grade fevers. The chest computed tomography (CT) showed a periaortic hematoma which represents a postsurgical change from aortic aneurysm repair, and a small pericardial effusion. The abdominal ultrasound showed cholelithiasis without any sign of cholecystitis. Finally, a fluorodeoxyglucose (FDG)-positron emission tomography (PET)/CT examination was ordered to find the cause of fever of unknown origin, and it showed increased FDG uptake in the gallbladder wall, with no uptake in the lumen. FDG-PET/CT can diagnose acute cholecystitis in patients with nonspecific clinical symptoms and laboratory results.

  13. X-ray Computed Tomography of Gas Diffusion Layers of PEM Fuel Cells - Calculation of Thermal Conductivity

    OpenAIRE

    Pfrang, Andreas; VEYRET Damien; SIEKER Frank; Tsotridis, Georgios

    2009-01-01

    Three commercially available gas diffusion layers were investigated by 3D X-ray computed tomography (CT). The carbon fibers and the 3D structure of the gas diffusion layers were clearly resolved by this lab-based technique. Based on 3D structures reconstructed from tomography data, the macroscopic, anisotropic effective thermal conductivities of the gas diffusion layers were calculated by solving the energy equation, considering a pure thermal conduction problem. The average in-plane thermal…

  14. Computational Chemistry Laboratory: Calculating the Energy Content of Food Applied to a Real-Life Problem

    Science.gov (United States)

    Barbiric, Dora; Tribe, Lorena; Soriano, Rosario

    2015-01-01

    In this laboratory, students calculated the nutritional value of common foods to assess the energy content needed to answer an everyday life application; for example, how many kilometers can an average person run with the energy provided by 100 g (3.5 oz) of beef? The optimized geometries and the formation enthalpies of the nutritional components…
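    The headline question has a back-of-the-envelope answer under two rough, commonly used assumptions (neither taken from the article): running costs on the order of 1 kcal per kg of body mass per km, and 100 g of lean beef supplies roughly 250 kcal.

```python
# Rough answer to "how far can you run on 100 g of beef?"
# Both inputs are coarse textbook-style assumptions, not article data.
energy_beef_kcal = 250.0     # ~energy in 100 g lean beef (assumed)
body_mass_kg = 70.0          # average person (assumed)
cost_kcal_per_km = 1.0 * body_mass_kg   # ~1 kcal/kg/km running rule of thumb

print(f"~{energy_beef_kcal / cost_kcal_per_km:.1f} km")   # ~3.6 km
```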

  15. SIMPLE METHOD OF SIZE-SPECIFIC DOSE ESTIMATES CALCULATION FROM PATIENT WEIGHT ON COMPUTED TOMOGRAPHY.

    Science.gov (United States)

    Iriuchijima, Akiko; Fukushima, Yasuhiro; Nakajima, Takahito; Tsushima, Yoshito; Ogura, Akio

    2017-07-28

    The purpose of this study is to develop a new and simple methodology for calculating the mean size-specific dose estimate (SSDE) over the entire scan range (mSSDE) from patient weight and the volume CT dose index (CTDIvol). We retrospectively analyzed data from a dose index registry. Scan areas were divided into two regions: chest and abdomen-pelvis. The original mSSDE was calculated with commercially available software. Conversion formulas for mSSDE were estimated from weight and CTDIvol (SSDEweight) in each region. SSDEweight was compared with the original mSSDE using Bland-Altman analysis. Root mean square differences were 1.4 mGy for the chest and 1.5 mGy for the abdomen-pelvis. Our formula-based method can calculate SSDEweight from weight and CTDIvol without dedicated software, and can be used to calculate diagnostic reference levels to optimize CT exposure doses. © The Author 2017. Published by Oxford University Press. All rights reserved. For Permissions, please email: journals.permissions@oup.com.
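    A conversion of this kind can be sketched as a regression problem: fit f(weight) such that mSSDE ≈ f(weight)·CTDIvol, then apply it to new scans. The Python sketch below uses a hypothetical exponential model form (echoing the exponential size dependence of AAPM-style SSDE conversion factors) with made-up data; the study's actual formulas and coefficients are not reproduced here.

```python
# Fit a weight-based SSDE conversion factor f(w) = a * exp(-b * w) so
# that mSSDE ~ f(weight) * CTDIvol. Model form and data are assumptions
# for illustration; the paper's fitted formulas are not shown here.
import numpy as np
from scipy.optimize import curve_fit

weight = np.array([50.0, 60.0, 70.0, 80.0, 90.0])      # kg (made up)
ctdi   = np.array([8.0, 9.0, 10.0, 12.0, 14.0])        # mGy (made up)
mssde  = np.array([12.2, 12.8, 13.2, 14.6, 15.8])      # mGy (made up)

def model(X, a, b):
    w, c = X
    return a * np.exp(-b * w) * c

(a, b), _ = curve_fit(model, (weight, ctdi), mssde, p0=(2.0, 0.005))
print(f"SSDE_weight = {a:.3f} * exp(-{b:.4f} * weight) * CTDIvol")
```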

  16. The use of symbolic computation in radiative, energy, and neutron transport calculations. Final report

    Energy Technology Data Exchange (ETDEWEB)

    Frankel, J.I.

    1997-09-01

    This investigation used symbolic manipulation to develop analytical methods and general computational strategies for solving both linear and nonlinear, regular and singular, integral and integro-differential equations which appear in radiative and mixed-mode energy transport. Contained in this report are seven papers which present the technical results as individual modules.

  17. Assessment of Computer-Mediated Module Intervention in a Pharmacy Calculations Course

    Science.gov (United States)

    Bell, Edward C.; Fike, David S.; Liang, Dong; Lockman, Paul R.; McCall, Kenneth L.

    2017-01-01

    Computer module intervention is the process of exposing students to a series of discrete exercises for the purpose of strengthening students' familiarity with conceptual material. The method has been suggested as a remedy to student under-preparedness. This study was conducted to determine the effectiveness of module intervention in improving and…

  18. Application of Heat-Transfer Calculations and Computational Fluid Mechanics to the Design of Protective Clothing

    Science.gov (United States)

    Cherunova, I.; Kornev, N.; Jacobi, G.; Treshchun, I.; Gross, A.; Turnow, J.; Schreier, S.; Paschen, M.

    2014-07-01

    Three examples of the use of computational fluid dynamics in designing clothing that protects the human body from high and low temperatures, with and without an incident air flow, are presented. The internal thermodynamics of the human body and its interaction with the surroundings were investigated. The inner and outer problems were considered separately, each with its own boundary conditions.

  19. STATIC_TEMP: a useful computer code for calculating static formation temperatures in geothermal wells

    Energy Technology Data Exchange (ETDEWEB)

    Santoyo, E. [Universidad Nacional Autonoma de Mexico, Centro de Investigacion en Energia, Temixco (Mexico); Garcia, A.; Santoyo, S. [Unidad Geotermia, Inst. de Investigaciones Electricas, Temixco (Mexico); Espinosa, G. [Universidad Autonoma Metropolitana, Co. Vicentina (Mexico); Hernandez, I. [ITESM, Centro de Sistemas de Manufactura, Monterrey (Mexico)

    2000-07-01

    The development and application of the computer code STATIC_TEMP, a useful tool for calculating static formation temperatures from actual bottomhole temperature data logged in geothermal wells, is described. STATIC_TEMP is based on five analytical methods which are the most frequently used in the geothermal industry. Conductive and convective heat flow models (radial, spherical/radial and cylindrical/radial) were selected. The computer code is a useful tool that can be reliably used in situ to determine static formation temperatures before or during the completion stages of geothermal wells (drilling and cementing). Shut-in time and bottomhole temperature measurements logged during well completion activities are required as input data. Output results can include up to seven computations of the static formation temperature for each wellbore temperature data set analysed. STATIC_TEMP was written in Fortran-77 Microsoft language for the MS-DOS environment using structured programming techniques. It runs on most IBM compatible personal computers. The source code and its computational architecture as well as the input and output files are described in detail. Validation and application examples on the use of this computer code with wellbore temperature data (obtained from specialised literature) and with actual bottomhole temperature data (taken from completion operations of some geothermal wells) are also presented. (Author)
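    One classic analytical method in this family (quite possibly among the five implemented, though that is an assumption) is the Horner-plot extrapolation T_bh = T_static + m·ln((t_c + Δt)/Δt), where t_c is the circulation time and Δt the shut-in time. The Python sketch below fits it to made-up shut-in data; as Δt grows the log term vanishes, so the fit intercept estimates the static temperature.

```python
# Horner-plot estimate of static formation temperature:
#   T_bh(dt) = T_static + m * ln((tc + dt) / dt)
# As dt -> infinity the log term -> 0, so the intercept of a linear fit
# against ln((tc+dt)/dt) is T_static. Data below are made up.
import numpy as np

tc = 5.0                                             # circulation time, h (assumed)
dt = np.array([6.0, 12.0, 18.0, 24.0, 36.0])         # shut-in times, h
Tbh = np.array([118.0, 127.0, 131.0, 134.0, 137.5])  # logged temps, deg C

x = np.log((tc + dt) / dt)
m, T_static = np.polyfit(x, Tbh, 1)
print(f"static formation temperature ~ {T_static:.1f} deg C")
```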

  20. Involving High School Students in Computational Physics University Research: Theory Calculations of Toluene Adsorbed on Graphene.

    Science.gov (United States)

    Ericsson, Jonas; Husmark, Teodor; Mathiesen, Christoffer; Sepahvand, Benjamin; Borck, Øyvind; Gunnarsson, Linda; Lydmark, Pär; Schröder, Elsebeth

    2016-01-01

    To increase public awareness of theoretical materials physics, a small group of high school students is invited to participate actively in a current research project at Chalmers University of Technology. The Chalmers research group explores methods for filtering hazardous and otherwise unwanted molecules from drinking water, for example by adsorption in active carbon filters. In this project, the students use graphene as an idealized model for active carbon, and estimate the energy of adsorption of the methylbenzene toluene on graphene with the help of the atomic-scale calculational method density functional theory. In this process the students develop an insight into applied quantum physics, a topic usually not taught at this educational level, and gain some experience with a couple of state-of-the-art calculational tools in materials research.

  2. Software abstractions and computational issues in parallel structure adaptive mesh methods for electronic structure calculations

    Energy Technology Data Exchange (ETDEWEB)

    Kohn, S.; Weare, J.; Ong, E.; Baden, S.

    1997-05-01

    We have applied structured adaptive mesh refinement techniques to the solution of the LDA equations for electronic structure calculations. Local spatial refinement concentrates memory resources and numerical effort where they are most needed, near the atomic centers and in regions of rapidly varying charge density. The structured grid representation enables us to employ efficient iterative solver techniques such as conjugate gradient with FAC multigrid preconditioning. We have parallelized our solver using an object-oriented adaptive mesh refinement framework.

  3. Parallel calculations on shared memory, NUMA-based computers using MATLAB

    Science.gov (United States)

    Krotkiewski, Marcin; Dabrowski, Marcin

    2014-05-01

    Achieving satisfactory computational performance in numerical simulations on modern computer architectures can be a complex task. Multi-core design makes it necessary to parallelize the code. Efficient parallelization on NUMA (Non-Uniform Memory Access) shared memory architectures necessitates explicit placement of the data in memory close to the CPU that uses it. In addition, using more than 8 CPUs (~100 cores) requires a cluster solution of interconnected nodes, which involves (expensive) communication between the processors. It takes significant effort to overcome these challenges even when programming in low-level languages, which give the programmer full control over data placement and work distribution. Instead, many modelers use high-level tools such as MATLAB, which severely limit the optimization/tuning options available. Nonetheless, the advantage of programming simplicity and a large available code base can tip the scale in favor of MATLAB. We investigate whether MATLAB can be used for efficient, parallel computations on modern shared memory architectures. A common approach to performance optimization of MATLAB programs is to identify a bottleneck and migrate the corresponding code block to a MEX file implemented in, e.g., C. Instead, we aim at achieving scalable parallel performance of MATLAB's core functionality. Some of MATLAB's internal functions (e.g., bsxfun, sort, BLAS3, operations on vectors) are multi-threaded. Achieving high parallel efficiency of those may potentially improve the performance of a significant portion of MATLAB's code base. Since we do not have MATLAB's source code, our performance tuning relies on the tools provided by the operating system alone. Most importantly, we use custom memory allocation routines, thread-to-CPU binding, and memory page migration. The performance tests are carried out on multi-socket shared memory systems (2- and 4-way Intel-based computers), as well as a Distributed Shared Memory machine with 96 CPU…

  4. DITTY - a computer program for calculating population dose integrated over ten thousand years

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Peloquin, R.A.; Strenge, D.L.

    1986-03-01

    The computer program DITTY (Dose Integrated Over Ten Thousand Years) was developed to determine the collective dose from long-term nuclear waste disposal sites resulting from the ground-water pathways. DITTY estimates the time integral of collective dose over a ten-thousand-year period for time-variant radionuclide releases to surface waters, wells, or the atmosphere. This document includes the following information on DITTY: a description of the mathematical models, program designs, data file requirements, input preparation, output interpretations, sample problems, and program-generated diagnostic messages.

  5. Fractional time stepping for unsteady engineering calculations on parallel computer systems

    Science.gov (United States)

    Molev, Sergey; Podaruev, Vladimir; Troshin, Alexey

    2017-11-01

    A tool for accelerating explicit schemes is described. Its essence is a reduction in the number of arithmetic operations: the cells of the mesh are partitioned into groups called levels, each level advances with its own time step, and the levels are kept coordinated. The method may be useful for aerodynamics problems with widely separated time scales. Causes of degraded accuracy in modelling unsteady processes are identified, and remedies are proposed. An example demonstrating the conditions under which such problems arise, and their successful elimination, is presented. The limit of the achievable acceleration is stated. Means of efficient parallel computing with the method are discussed.
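    The core idea, local (fractional) time stepping with subcycling, can be sketched for a toy decay equation du/dt = −ku: cells on level ℓ take steps of Δt/2^ℓ and are subcycled so that all levels meet at the common synchronization time. The Python below is a generic illustration of that technique, not the authors' scheme; the cell stiffnesses and level assignments are assumed.

```python
# Local time stepping with subcycling for du/dt = -k*u per cell.
# Cells on level L advance with step dt / 2**L and are subcycled
# 2**L times per global step so all levels stay synchronized.
import math

cells = [  # (k, level) -- stiffer cells get finer levels (assumed setup)
    (1.0, 0), (1.0, 0), (4.0, 1), (16.0, 2),
]
u = [1.0] * len(cells)
dt = 0.05
steps = 40

for _ in range(steps):
    for i, (k, level) in enumerate(cells):
        sub_dt = dt / 2 ** level
        for _ in range(2 ** level):          # subcycle to reach t + dt
            u[i] += -k * u[i] * sub_dt       # forward Euler substep

t = steps * dt
print([f"{v:.4f} (exact {math.exp(-k*t):.4f})" for v, (k, _) in zip(u, cells)])
```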

  6. Sassena — X-ray and neutron scattering calculated from molecular dynamics trajectories using massively parallel computers

    Science.gov (United States)

    Lindner, Benjamin; Smith, Jeremy C.

    2012-07-01

    Massively parallel computers now permit the molecular dynamics (MD) simulation of multi-million atom systems on time scales up to the microsecond. However, the subsequent analysis of the resulting simulation trajectories has now become a high performance computing problem in itself. Here, we present software for calculating X-ray and neutron scattering intensities from MD simulation data that scales well on massively parallel supercomputers. The calculation and data staging schemes used maximize the degree of parallelism and minimize the IO bandwidth requirements. The strong scaling tested on the Jaguar Petaflop Cray XT5 at Oak Ridge National Laboratory exhibits virtually linear scaling up to 7000 cores for most benchmark systems. Since both MPI and thread parallelism is supported, the software is flexible enough to cover scaling demands for different types of scattering calculations. The result is a high performance tool capable of unifying large-scale supercomputing and a wide variety of neutron/synchrotron technology.
    Program summary:
    Catalogue identifier: AELW_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELW_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License, version 3
    No. of lines in distributed program, including test data, etc.: 1 003 742
    No. of bytes in distributed program, including test data, etc.: 798
    Distribution format: tar.gz
    Programming language: C++, OpenMPI
    Computer: Distributed memory machines; clusters of computers with a high-performance network; supercomputers
    Operating system: UNIX, LINUX, OSX
    Has the code been vectorized or parallelized?: Yes, the code has been parallelized using MPI directives; tested with up to 7000 processors
    RAM: Up to 1 GByte/core
    Classification: 6.5, 8
    External routines: Boost Library, FFTW3, CMAKE, GNU C++ Compiler, OpenMPI, LibXML, LAPACK
    Nature of problem: Recent developments in supercomputing allow molecular dynamics simulations to…
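    At its core, a coherent scattering intensity is a sum over atoms, I(q) = ⟨|Σⱼ bⱼ exp(iq·rⱼ)|²⟩, averaged over trajectory frames. The NumPy sketch below illustrates that kernel on random coordinates; the scattering lengths and positions are made up, and none of Sassena's actual parallel machinery is reproduced.

```python
# Coherent scattering intensity for one q vector:
#   I(q) = < | sum_j b_j * exp(i q . r_j) |^2 >_frames
# Random coordinates and unit scattering lengths stand in for a real
# MD trajectory; Sassena's parallel machinery is not reproduced here.
import numpy as np

rng = np.random.default_rng(42)
n_frames, n_atoms = 10, 500
traj = rng.uniform(0.0, 50.0, size=(n_frames, n_atoms, 3))   # angstroms
b = np.ones(n_atoms)                                         # scattering lengths

def intensity(q_vec):
    phases = np.exp(1j * traj @ q_vec)          # (frames, atoms)
    amp = (phases * b).sum(axis=1)              # coherent sum per frame
    return float(np.mean(np.abs(amp) ** 2))

q = np.array([0.5, 0.0, 0.0])                   # 1/angstrom
print(f"I(q) = {intensity(q):.1f}")
```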

  7. FRAPCON-2: A Computer Code for the Calculation of Steady State Thermal-Mechanical Behavior of Oxide Fuel Rods

    Energy Technology Data Exchange (ETDEWEB)

    Berna, G. A; Bohn, M. P.; Rausch, W. N.; Williford, R. E.; Lanning, D. D.

    1981-01-01

    FRAPCON-2 is a FORTRAN IV computer code that calculates the steady-state response of light water reactor fuel rods during long-term burnup. The code calculates the temperature, pressure, deformation, and failure histories of a fuel rod as functions of time-dependent fuel rod power and coolant boundary conditions. The phenomena modeled by the code include (a) heat conduction through the fuel and cladding, (b) cladding elastic and plastic deformation, (c) fuel-cladding mechanical interaction, (d) fission gas release, (e) fuel rod internal gas pressure, (f) heat transfer between fuel and cladding, (g) cladding oxidation, and (h) heat transfer from cladding to coolant. The code contains the necessary material properties, water properties, and heat transfer correlations. FRAPCON-2 is programmed for use on the CDC Cyber 175 and 176 computers. The FRAPCON-2 code is designed to generate initial conditions for transient fuel rod analysis by either the FRAP-T6 computer code or the thermal-hydraulic code RELAP4/MOD7 Version 2.

  8. Protonation Sites, Tandem Mass Spectrometry and Computational Calculations of o-Carbonyl Carbazolequinone Derivatives

    Directory of Open Access Journals (Sweden)

    Maximiliano Martínez-Cifuentes

    2016-07-01

    A series of a new type of tetracyclic carbazolequinones incorporating a carbonyl group at the ortho position relative to the quinone moiety was synthesized and analyzed by tandem electrospray ionization mass spectrometry (ESI/MS-MS), using collision-induced dissociation (CID) to dissociate the protonated species. Theoretical parameters such as the molecular electrostatic potential (MEP), local Fukui functions and the local Parr function for electrophilic attack, as well as proton affinity (PA) and gas-phase basicity (GB), were used to explain the preferred protonation sites. Transition states of some main fragmentation routes were obtained, and the energies calculated at the density functional theory (DFT) B3LYP level were compared with those obtained by ab initio quadratic configuration interaction with single and double excitation (QCISD). The results are in accordance with the observed distribution of ions. The nature of the substituents on the aromatic ring has a notable impact on the fragmentation routes of the molecules.

  9. Computational Fluid Dynamics calculation of a planar solid oxide fuel cell design running on syngas

    Directory of Open Access Journals (Sweden)

    Pianko-Oprych Paulina

    2017-12-01

    The present study deals with modelling and validation of a planar solid oxide fuel cell (SOFC) design fuelled by a gas mixture of partially pre-reformed methane. A 3D model was developed using the ANSYS Fluent computational fluid dynamics (CFD) tool, supported by an additional Fuel Cell Tools module. The governing equations for momentum, heat, gas species, ion and electron transport were implemented and coupled to kinetics describing the electrochemical and reforming reactions. The model includes the water-gas shift reaction in a porous anode layer. Electrochemical oxidation of both hydrogen and carbon monoxide fuels was considered. The developed model enabled prediction of the distributions of temperature, current density and gas flow in the fuel cell.

  10. Computational analysis of calculated physicochemical and ADMET properties of protein-protein interaction inhibitors

    Science.gov (United States)

    Lagorce, David; Douguet, Dominique; Miteva, Maria A.; Villoutreix, Bruno O.

    2017-04-01

    The modulation of PPIs by low molecular weight chemical compounds, particularly by orally bioavailable molecules, would be very valuable in numerous disease indications. However, it is known that PPI inhibitors (iPPIs) tend to have properties that are linked to poor Absorption, Distribution, Metabolism, Excretion and Toxicity (ADMET) and in some cases to poor clinical outcomes. Previously reported in silico analyses of iPPIs have essentially focused on physicochemical properties but several other ADMET parameters would be important to assess. In order to gain new insights into the ADMET properties of iPPIs, computations were carried out on eight datasets collected from several databases. These datasets involve compounds targeting enzymes, GPCRs, ion channels, nuclear receptors, allosteric modulators, oral marketed drugs, oral natural product-derived marketed drugs and iPPIs. Several trends are reported that should assist the design and optimization of future PPI inhibitors, either for drug discovery endeavors or for chemical biology projects.
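    Analyses of this kind usually start from computed physicochemical descriptors per compound. The sketch below shows how such a descriptor table might be built with the open-source RDKit toolkit; the tool choice and example molecules are assumptions, and the paper's own pipeline and ADMET models are not reproduced.

```python
# Physicochemical descriptor table for a small compound set, using
# RDKit (assumed tooling; not the authors' actual pipeline).
from rdkit import Chem
from rdkit.Chem import Descriptors, Crippen, Lipinski

smiles = {                       # illustrative molecules
    "aspirin": "CC(=O)Oc1ccccc1C(=O)O",
    "caffeine": "Cn1cnc2c1c(=O)n(C)c(=O)n2C",
}

for name, smi in smiles.items():
    mol = Chem.MolFromSmiles(smi)
    print(name,
          f"MW={Descriptors.MolWt(mol):.1f}",
          f"logP={Crippen.MolLogP(mol):.2f}",
          f"TPSA={Descriptors.TPSA(mol):.1f}",
          f"HBD={Lipinski.NumHDonors(mol)}",
          f"HBA={Lipinski.NumHAcceptors(mol)}")
```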

  11. GO-STOP Control Using Optical Brain-Computer Interface during Calculation Task

    Science.gov (United States)

    Utsugi, Kei; Obata, Akiko; Sato, Hiroki; Aoki, Ryuta; Maki, Atsushi; Koizumi, Hideaki; Sagara, Kazuhiko; Kawamichi, Hiroaki; Atsumori, Hirokazu; Katura, Takusige

    We have developed a prototype optical brain-computer interface (BCI) system that can be used by an operator to manipulate external, electrically controlled equipment. Our optical BCI uses near-infrared spectroscopy and functions as a compact, practical, unrestrictive, non-invasive brain-switch. The optical BCI system measured spatiotemporal changes in the hemoglobin concentrations in the blood flow of a subject's prefrontal cortex at 22 measurement points. An exponential moving average (EMA) filter was applied to the data, and their weighted sum with a task-related parameter derived from a pretest was utilized for time-indicated (GO-STOP) control of an external object. In experiments using untrained subjects, the system achieved GO-STOP control within an accuracy of ±6 sec in more than 80% of cases.
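    An exponential moving average of the kind described updates as yₜ = αxₜ + (1−α)yₜ₋₁. The sketch below applies it to a synthetic hemoglobin-like trace and thresholds the filtered value into a binary GO/STOP state; the α, the threshold, and the signal are illustrative assumptions, not the study's parameters.

```python
# EMA filtering of a noisy NIRS-like trace with a GO/STOP threshold.
# y_t = alpha * x_t + (1 - alpha) * y_{t-1}; alpha, threshold and the
# synthetic signal are illustrative assumptions only.
import random

def ema(samples, alpha=0.1):
    y, out = samples[0], []
    for x in samples:
        y = alpha * x + (1 - alpha) * y
        out.append(y)
    return out

random.seed(3)
signal = [0.0] * 50 + [1.0] * 50                     # task starts at t=50
noisy = [s + random.gauss(0, 0.3) for s in signal]

threshold = 0.5
states = ["GO" if y > threshold else "STOP" for y in ema(noisy)]
print("first GO at sample:", states.index("GO"))
```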

  13. A simple computer program for calculating PSA recurrence in prostate cancer patients

    Directory of Open Access Journals (Sweden)

    Liao Zhongyue

    2004-06-01

    Background: Prostate cancer is the most common tumor in men. The most commonly used diagnostic and tumor recurrence marker is prostate specific antigen (PSA). After surgical removal or radiation treatment, PSA levels drop (PSA nadir), and subsequently elevated or increasing PSA levels are indicative of recurrent disease (PSA recurrence). For clinical follow-up and local care, PSA nadir and recurrence are often hand-calculated for patients, which can result in the application of heterogeneous criteria. For large datasets of prostate cancer patients used in clinical studies, PSA measurements are used as surrogate measures of disease progression. In these datasets a method to measure PSA recurrence is needed for the subsequent analysis of outcomes data, and as such it needs to be applied in a uniform and reproducible manner. This method needs to be simple and reproducible, and based on known aspects of PSA biology. Methods: We have created a simple Perl-based algorithm for the calculation of post-treatment PSA outcomes based on the initial PSA and multiple PSA values obtained after treatment. The algorithm tracks the post-surgical PSA nadir and, if present, subsequent PSA recurrence. Times to PSA recurrence or recurrence-free intervals are supplied in months. Results: Use of the algorithm is demonstrated with a sample dataset from prostate cancer patients. The results are compared with hand-annotated PSA recurrence analysis. The strengths and limitations are discussed. Conclusions: The use of this simple PSA algorithm allows for the standardized analysis of PSA recurrence in large datasets of patients who have undergone treatment for prostate cancer. The script is freely available and easily modifiable for desired user parameters and improvements.
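    The nadir/recurrence logic is easy to express in a few lines: track the running minimum after treatment, and flag recurrence when a later value exceeds the nadir by some criterion. The Python sketch below mirrors the Perl script's role, not its code, and its 0.4 ng/mL rise threshold is a hypothetical placeholder rather than the published algorithm's rule.

```python
# Track post-treatment PSA nadir and flag recurrence when a later value
# exceeds nadir + threshold. The 0.4 ng/mL rise criterion is a
# hypothetical placeholder, not the published algorithm's rule.
def psa_outcome(series, rise_threshold=0.4):
    """series: list of (months_after_treatment, psa_ng_ml), sorted by time."""
    nadir_t, nadir = series[0]
    for t, psa in series:
        if psa < nadir:
            nadir_t, nadir = t, psa
        elif psa >= nadir + rise_threshold:
            return {"nadir": nadir, "nadir_month": nadir_t,
                    "recurrence_month": t}
    return {"nadir": nadir, "nadir_month": nadir_t,
            "recurrence_month": None}

example = [(1, 2.0), (3, 0.3), (6, 0.1), (12, 0.2), (18, 0.9)]
print(psa_outcome(example))   # recurrence flagged at month 18
```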

  14. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Y.C. [Square Y, Orchard Park, NY (United States); Chen, S.Y.; LePoire, D.J. [Argonne National Lab., IL (United States). Environmental Assessment and Information Sciences Div.; Rothman, R. [USDOE Idaho Field Office, Idaho Falls, ID (United States)

    1993-02-01

    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, semi-interactive program that can be run on an IBM or equivalent personal computer. The program language is FORTRAN-77. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors.

  15. A computer code for forward calculation and inversion of the H/V spectral ratio under the diffuse field assumption

    Science.gov (United States)

    García-Jerez, Antonio; Piña-Flores, José; Sánchez-Sesma, Francisco J.; Luzón, Francisco; Perton, Mathieu

    2016-12-01

    For a quarter of a century, the main characteristics of the horizontal-to-vertical spectral ratio of ambient noise (HVSRN) have been extensively used for site effect assessment. In spite of the uncertainties about the optimum theoretical model to describe these observations, over the last decade several schemes for inversion of the full HVSRN curve for near-surface surveying have been developed. In this work, a computer code for forward calculation of H/V spectra based on the diffuse field assumption (DFA) is presented and tested. It takes advantage of the recently established connection between the HVSRN and the elastodynamic Green's function which arises from ambient noise interferometry theory. The algorithm allows for (1) a natural calculation of the imaginary parts of the Green's functions by using suitable contour integrals in the complex wavenumber plane, and (2) separate calculation of the contributions of Rayleigh, Love, P-SV and SH waves. The stability of the algorithm at high frequencies is preserved by means of an adaptation of Wang's orthonormalization method to the calculation of dispersion curves, surface-wave medium responses and body-wave contributions. This code has been combined with a variety of inversion methods to make up a powerful tool for passive seismic surveying.
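    On the observational side, the quantity being modelled is commonly estimated as HVSR(f) = sqrt((P_N + P_E)/2) / sqrt(P_V) from smoothed three-component noise spectra. The NumPy sketch below computes that ratio for synthetic records; the horizontal-combination, windowing and smoothing choices are simplifying assumptions, and this is not the DFA forward solver described here.

```python
# Observational H/V spectral ratio from three-component noise records:
#   HVSR(f) = sqrt((P_N(f) + P_E(f)) / 2) / sqrt(P_V(f))
# Synthetic records and boxcar smoothing are simplifying assumptions.
import numpy as np

rng = np.random.default_rng(7)
fs, n = 100.0, 4096                      # Hz, samples
north, east, vert = rng.standard_normal((3, n))

def power_spectrum(x):
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x)))) ** 2
    kernel = np.ones(16) / 16.0          # crude boxcar smoothing
    return np.convolve(spec, kernel, mode="same")

pn, pe, pv = map(power_spectrum, (north, east, vert))
freqs = np.fft.rfftfreq(n, d=1.0 / fs)
hvsr = np.sqrt((pn + pe) / 2.0) / np.sqrt(pv)
print(f"HVSR near 1 Hz: {hvsr[np.argmin(np.abs(freqs - 1.0))]:.2f}")
```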

  16. Detecting number processing and mental calculation in patients with disorders of consciousness using a hybrid brain-computer interface system.

    Science.gov (United States)

    Li, Yuanqing; Pan, Jiahui; He, Yanbin; Wang, Fei; Laureys, Steven; Xie, Qiuyou; Yu, Ronghao

    2015-12-15

    For patients with disorders of consciousness such as coma, a vegetative state or a minimally conscious state, one challenge is to detect and assess the residual cognitive functions in their brains. Number processing and mental calculation are important brain functions but are difficult to detect in patients with disorders of consciousness using motor response-based clinical assessment scales such as the Coma Recovery Scale-Revised due to the patients' motor impairments and inability to provide sufficient motor responses for number- and calculation-based communication. In this study, we presented a hybrid brain-computer interface that combines P300 and steady state visual evoked potentials to detect number processing and mental calculation in Han Chinese patients with disorders of consciousness. Eleven patients with disorders of consciousness who were in a vegetative state (n = 6) or in a minimally conscious state (n = 3) or who emerged from a minimally conscious state (n = 2) participated in the brain-computer interface-based experiment. During the experiment, the patients with disorders of consciousness were instructed to perform three tasks, i.e., number recognition, number comparison, and mental calculation, including addition and subtraction. In each experimental trial, an arithmetic problem was first presented. Next, two number buttons, only one of which was the correct answer to the problem, flickered at different frequencies to evoke steady state visual evoked potentials, while the frames of the two buttons flashed in a random order to evoke P300 potentials. The patients needed to focus on the target number button (the correct answer). Finally, the brain-computer interface system detected P300 and steady state visual evoked potentials to determine the button to which the patients attended, further presenting the results as feedback. Two of the six patients who were in a vegetative state, one of the three patients who were in a minimally conscious state, and

  17. Computational aspects of sensitivity calculations in linear transient structural analysis. Ph.D. Thesis - Virginia Polytechnic Inst. and State Univ.

    Science.gov (United States)

    Greene, William H.

    1990-01-01

    A study was performed focusing on the calculation of sensitivities of displacements, velocities, accelerations, and stresses in linear, structural, transient response problems. One significant goal of the study was to develop and evaluate sensitivity calculation techniques suitable for large-order finite element analyses. Accordingly, approximation vectors such as vibration mode shapes are used to reduce the dimensionality of the finite element model. Much of the research focused on the accuracy of both response quantities and sensitivities as a function of number of vectors used. Two types of sensitivity calculation techniques were developed and evaluated. The first type of technique is an overall finite difference method where the analysis is repeated for perturbed designs. The second type of technique is termed semi-analytical because it involves direct, analytical differentiation of the equations of motion with finite difference approximation of the coefficient matrices. To be computationally practical in large-order problems, the overall finite difference methods must use the approximation vectors from the original design in the analyses of the perturbed models. In several cases this fixed mode approach resulted in very poor approximations of the stress sensitivities. Almost all of the original modes were required for an accurate sensitivity and for small numbers of modes, the accuracy was extremely poor. To overcome this poor accuracy, two semi-analytical techniques were developed. The first technique accounts for the change in eigenvectors through approximate eigenvector derivatives. The second technique applies the mode acceleration method of transient analysis to the sensitivity calculations. Both result in accurate values of the stress sensitivities with a small number of modes and much lower computational costs than if the vibration modes were recalculated and then used in an overall finite difference method.
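    For the static analogue of the semi-analytical approach, differentiating Ku = f gives K(du/dp) = df/dp − (dK/dp)u, with dK/dp approximated by finite differences of the coefficient matrix. The NumPy sketch below works a tiny 2-DOF example; the spring-chain matrices and the design parameter are made up, and the transient modal machinery of the thesis is not reproduced.

```python
# Semi-analytical design sensitivity for a static system K(p) u = f:
#   K * du/dp = df/dp - (dK/dp) * u,  dK/dp by finite differences.
# The 2-DOF spring chain and parameter p (a stiffness) are made up.
import numpy as np

def K(p):   # stiffness of a 2-spring chain; p is the first spring rate
    return np.array([[p + 50.0, -50.0],
                     [-50.0,     50.0]])

f = np.array([0.0, 1.0])
p, dp = 100.0, 1e-4

u = np.linalg.solve(K(p), f)
dK_dp = (K(p + dp) - K(p - dp)) / (2.0 * dp)   # central difference
du_dp = np.linalg.solve(K(p), -dK_dp @ u)      # df/dp = 0 here

# check against overall finite differences of the full solution
u_fd = (np.linalg.solve(K(p + dp), f) - np.linalg.solve(K(p - dp), f)) / (2 * dp)
print(du_dp, u_fd)
```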

  18. Computer visualization of the results for calculating the Ink «dusting»

    Science.gov (United States)

    Varepo, L. G.; Trapeznikova, O. V.; Panichkin, A. V.; Nagornova, I. V.; Bobrov, V. I.

    2017-06-01

    The paper presents the results of numerical modeling for the quantitative assessment of the offset printing ink "dusting" index at the outlet of the printing contact zone, while the ink is transferred to substrates having various surface characteristics. The modeling was carried out with finite-difference methods. The paper considers the results of the practical implementation of software that calculates printing ink "dusting". The splitting of ink filaments into many fine particles, which are sprayed intensively into the surrounding space by centrifugal force, reduces print quality. Creating new concepts for reducing the level of "dusting" in offset printing, and for its quantitative assessment, is therefore important. A new approach to reducing "dusting" is suggested, characterized by acting on the ink in the engagement zone with an idle pulse of varying power. To quantify the ink transferred into filaments and participating in the formation of ink "dusting", numerical modeling techniques based on finite-difference solution methods are applied. Graphic visualization of the solution results on the basis of graphic modeling is presented. The practical implementation of this method improves the quality of the final printed product and allows the quantitative assessment of the "dusting" indices to be predicted directly while preparing an order for printing, optimizing the selection of printing system components.

  19. The DEPOSIT computer code: Calculations of electron-loss cross-sections for complex ions colliding with neutral atoms

    Science.gov (United States)

    Litsarev, Mikhail S.

    2013-02-01

    A description of the DEPOSIT computer code is presented. The code is intended to calculate total and m-fold electron-loss cross-sections (m is the number of ionized electrons) and the energy T(b) deposited to the projectile (positive or negative ion) during a collision with a neutral atom at low and intermediate collision energies as a function of the impact parameter b. The deposited energy is calculated as a 3D integral over the projectile coordinate space in the classical energy-deposition model. Examples of the calculated deposited energies, ionization probabilities and electron-loss cross-sections are given, as well as the description of the input and output data.
    Program summary
    Program title: DEPOSIT
    Catalogue identifier: AENP_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AENP_v1_0.html
    Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
    Licensing provisions: GNU General Public License version 3
    No. of lines in distributed program, including test data, etc.: 8726
    No. of bytes in distributed program, including test data, etc.: 126650
    Distribution format: tar.gz
    Programming language: C++.
    Computer: Any computer that can run a C++ compiler.
    Operating system: Any operating system that can run C++.
    Has the code been vectorised or parallelized?: An MPI version is included in the distribution.
    Classification: 2.4, 2.6, 4.10, 4.11.
    Nature of problem: For a given impact parameter b, to calculate the deposited energy T(b) as a 3D integral over a coordinate space, and the ionization probabilities Pm(b). For a given energy, to calculate the total and m-fold electron-loss cross-sections using the T(b) values.
    Solution method: Direct calculation of the 3D integral T(b). A one-dimensional quadrature formula of the highest accuracy, based upon the nodes of the Jacobi polynomials for the cosθ = x ∈ [-1, 1] angular variable, is applied. The Simpson rule is used for the φ ∈ [0, 2π] angular variable. The Newton-Cotes pattern of the seventh order …
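
    The quadrature scheme described in the solution method can be sketched compactly. The snippet below is a hedged illustration, not the DEPOSIT code: it uses Gauss-Legendre nodes (the α = β = 0 member of the Gauss-Jacobi family) for x = cosθ, Simpson's rule for φ, and a mapped Gauss rule for the radial variable, with a toy integrand standing in for the energy-deposition model.

```python
import numpy as np
from scipy.integrate import simpson

def deposited_energy(f, b, r_max, n_r=64, n_x=32, n_phi=65):
    """T(b) = integral of f over 3D space in spherical coordinates
    (r, x = cos(theta), phi): Gauss-Legendre in x and r, Simpson in phi."""
    xr, wr = np.polynomial.legendre.leggauss(n_r)
    r = 0.5 * r_max * (xr + 1.0)                # map [-1, 1] -> [0, r_max]
    wr = 0.5 * r_max * wr
    x, wx = np.polynomial.legendre.leggauss(n_x)
    phi = np.linspace(0.0, 2.0 * np.pi, n_phi)  # odd count for Simpson

    total = 0.0
    for ri, wri in zip(r, wr):
        for xi, wxi in zip(x, wx):
            g = f(ri, xi, phi, b) * ri**2       # Jacobian r^2 (dx absorbs sin)
            total += wri * wxi * simpson(g, x=phi)
    return total

# toy integrand: a Gaussian cloud centred at impact parameter b on the x-axis;
# its exact integral is pi**1.5, which the quadrature should reproduce
def f(r, x, phi, b):
    s = r * np.sqrt(1.0 - x**2)
    return np.exp(-((s * np.cos(phi) - b)**2 + (s * np.sin(phi))**2 + (r * x)**2))

print(deposited_energy(f, b=1.0, r_max=6.0), np.pi**1.5)
```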

  20. User's manual to the ICRP Code: a series of computer programs to perform dosimetric calculations for the ICRP Committee 2 report

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.B.; Ford, M.R.

    1980-02-01

    A computer code has been developed that implements the recommendations of ICRP Committee 2 for computing limits for occupational exposure to radionuclides. The purpose of this report is to describe the various modules of the computer code and to present a description of the methods and criteria used to compute the tables published in the Committee 2 report. The computer code contains three modules: (1) one computes specific effective energy; (2) one calculates cumulated activity; and (3) one computes dose and the series of ICRP tables. The description of the first two modules emphasizes the new ICRP Committee 2 recommendations for computing specific effective energy and cumulated activity. For the third module, the complex criteria are discussed for calculating the tables of committed dose equivalent, weighted committed dose equivalents, annual limit of intake, and derived air concentration.

  1. An online calculator to compute phonotactic probability and neighborhood density on the basis of child corpora of spoken American English.

    Science.gov (United States)

    Storkel, Holly L; Hoover, Jill R

    2010-05-01

    An online calculator was developed (www.bncdnet.ku.edu/cml/info_ccc.vi) to compute phonotactic probability--the likelihood of occurrence of a sound sequence--and neighborhood density--the number of phonologically similar words--on the basis of child corpora of American English (Kolson, 1960; Moe, Hopkins, & Rush, 1982) and to compare its results to those of an adult calculator. Phonotactic probability and neighborhood density were computed for a set of 380 nouns (Fenson et al., 1993) using both the child and adult corpora. The child and adult raw values were significantly correlated. However, significant differences were detected. Specifically, child phonotactic probability was higher than adult phonotactic probability, especially for high-probability words, and child neighborhood density was lower than adult neighborhood density, especially for words with high-density neighborhoods. These differences were reduced or eliminated when relative measures (i.e., z scores) were used. Suggestions are offered regarding which values to use in future research.
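
    A minimal sketch of the relative-measure idea mentioned above, with made-up raw values standing in for the calculator's corpus-derived ones:

```python
import numpy as np

# Hypothetical raw phonotactic-probability values for the same five words
# computed from a child corpus and an adult corpus (invented numbers).
child = np.array([0.012, 0.031, 0.022, 0.008, 0.045])
adult = np.array([0.009, 0.024, 0.018, 0.007, 0.038])

def to_relative(values):
    """Convert raw values to the relative measure (z scores) that,
    per the study, reduces or eliminates child/adult differences."""
    return (values - values.mean()) / values.std(ddof=1)

print(np.corrcoef(child, adult)[0, 1])          # raw values correlate...
print(to_relative(child) - to_relative(adult))  # ...and z scores align more closely
```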

  2. ALPHN: A computer program for calculating (α, n) neutron production in canisters of high-level waste

    Energy Technology Data Exchange (ETDEWEB)

    Salmon, R.; Hermann, O.W.

    1992-10-01

    The rate of neutron production from (α, n) reactions in canisters of immobilized high-level waste containing borosilicate glass or glass-ceramic compositions is significant and must be considered when estimating neutron shielding requirements. The personal computer program ALPHN calculates the (α, n) neutron production rate of a canister of vitrified high-level waste. The user supplies the chemical composition of the glass or glass-ceramic and the curies of the alpha-emitting actinides present. The output of the program gives the (α, n) neutron production of each actinide in neutrons per second and the total for the canister. The (α, n) neutron production rates are source terms only; that is, they are production rates within the glass and do not take into account the shielding effect of the glass. For a given glass composition, the user can calculate up to eight cases simultaneously; these cases are based on the same glass composition but contain different quantities of actinides per canister. In a typical application, these cases might represent the same canister of vitrified high-level waste at eight different decay times. Run time for a typical problem containing 20 chemical species, 24 actinides, and 8 decay times was 35 s on an IBM AT personal computer. Results of an example based on an expected canister composition at the Defense Waste Processing Facility are shown.
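
    The bookkeeping the program performs can be sketched as follows; the per-curie neutron yields below are placeholders (in ALPHN they depend on the supplied glass composition), so the numbers are purely illustrative:

```python
# Minimal sketch of ALPHN-style bookkeeping: the user supplies curies per
# actinide, and a composition-dependent neutron yield is folded in.
NEUTRONS_PER_CURIE = {          # hypothetical (alpha,n) yields in n/s/Ci
    "Pu-238": 1.0e4,            # in reality these depend on the glass matrix
    "Am-241": 8.0e3,
    "Cm-244": 1.2e4,
}

def canister_source_term(curies):
    """Return per-actinide and total (alpha,n) neutron production (n/s)."""
    per_actinide = {nuc: ci * NEUTRONS_PER_CURIE[nuc]
                    for nuc, ci in curies.items()}
    return per_actinide, sum(per_actinide.values())

per, total = canister_source_term({"Pu-238": 50.0, "Am-241": 120.0, "Cm-244": 8.0})
print(per, total)
```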

  4. Two computational approaches for Monte Carlo based shutdown dose rate calculation with applications to the JET fusion machine

    Energy Technology Data Exchange (ETDEWEB)

    Petrizzi, L.; Batistoni, P.; Migliori, S. [Associazione EURATOM ENEA sulla Fusione, Frascati (Roma) (Italy); Chen, Y.; Fischer, U.; Pereslavtsev, P. [Association FZK-EURATOM Forschungszentrum Karlsruhe (Germany); Loughlin, M. [EURATOM/UKAEA Fusion Association, Culham Science Centre, Abingdon, Oxfordshire, OX (United Kingdom); Secco, A. [Nice Srl Via Serra 33 Camerano Casasco AT (Italy)

    2003-07-01

    …shortly after the deuterium-tritium experiment (DTE1) in 1997. Large computing power is needed by the two methods, both in terms of the amount of data handling and storage and in terms of CPU time, partly due to the complexity of the problem. With parallel versions of the MCNP code, running on two different platforms, a satisfactory accuracy of the calculation has been reached in reasonable times. (authors)

  5. A computer code for forward calculation and inversion of the H/V spectral ratio under the diffuse field assumption

    CERN Document Server

    García-Jerez, Antonio; Sánchez-Sesma, Francisco J; Luzón, Francisco; Perton, Mathieu

    2016-01-01

    During a quarter of a century, the main characteristics of the horizontal-to-vertical spectral ratio of ambient noise (HVSRN) have been extensively used for site effect assessment. In spite of the uncertainties about the optimum theoretical model to describe these observations, several schemes for inversion of the full HVSRN curve for near surface surveying have been developed over the last decade. In this work, a computer code for forward calculation of H/V spectra based on the diffuse field assumption (DFA) is presented and tested. It takes advantage of the recently stated connection between the HVSRN and the elastodynamic Green's function which arises from the ambient noise interferometry theory. The algorithm allows for (1) a natural calculation of the imaginary parts of the Green's functions by using suitable contour integrals in the complex wavenumber plane, and (2) separate calculation of the contributions of Rayleigh, Love, P-SV and SH waves as well. The stability of the algorithm at high frequencies is preserv...
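
    Under the DFA, the H/V ratio reduces to the imaginary parts of the Green's function components at the receiver. A schematic sketch of that final step is shown below; the formula follows the DFA literature as summarized above, and the imaginary parts are random placeholders standing in for the contour-integral results:

```python
import numpy as np

def hv_dfa(im_g11, im_g22, im_g33):
    """H/V under the diffuse field assumption: the ratio of directional
    energy densities reduces to HV = sqrt((Im G11 + Im G22) / Im G33),
    with Gii the Green's function at the receiver (per the DFA literature)."""
    return np.sqrt((im_g11 + im_g22) / im_g33)

# placeholder spectra: in the real code the imaginary parts come from
# contour integrals in the complex wavenumber plane
rng = np.random.default_rng(0)
n_freq = 20
im_g11 = 1.0 + 0.2 * rng.random(n_freq)
im_g22 = 1.0 + 0.2 * rng.random(n_freq)
im_g33 = 1.0 + 0.2 * rng.random(n_freq)
print(hv_dfa(im_g11, im_g22, im_g33))
```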

  6. Computational Modeling and Theoretical Calculations on the Interactions between Spermidine and Functional Monomer (Methacrylic Acid) in a Molecularly Imprinted Polymer

    Directory of Open Access Journals (Sweden)

    Yujie Huang

    2015-01-01

    Full Text Available This paper theoretically investigates interactions between a template and the functional monomer required for synthesizing an efficient molecularly imprinted polymer (MIP). We employed density functional theory (DFT) to compute the geometry, single-point energy, and binding energy (ΔE) of an MIP system, where spermidine (SPD) and methacrylic acid (MAA) were selected as template and functional monomer, respectively. The geometry was calculated by using the B3LYP method with the 6-31+G(d) basis set. Furthermore, the 6-311++G(d,p) basis set was used to compute the single-point energy of the above geometry. The optimized geometries at different template to functional monomer molar ratios, the mode of bonding between template and functional monomer, the changes in charge on natural bond orbitals (NBO), and the binding energy were analyzed. The simulation results show that SPD and MAA form a stable complex via hydrogen bonding. At a 1 : 5 SPD to MAA ratio, the binding energy is minimum, while the amount of transferred charge between the molecules is maximum; SPD and MAA form a stable complex at a 1 : 5 molar ratio through six hydrogen bonds. Optimizing the structure of the template-functional monomer complex through computational modeling prior to synthesis significantly contributes towards choosing a suitable pair of template and functional monomer that yields an efficient MIP with high specificity and selectivity.
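
    The ratio screening described above amounts to comparing binding energies ΔE = E(complex) − [E(template) + n·E(monomer)] across molar ratios. A sketch with hypothetical single-point energies (the hartree values below are invented for illustration):

```python
# Sketch of the binding-energy bookkeeping used to rank template:monomer
# ratios; energies are DFT single-point values in hartree (placeholders).
E_SPD = -470.512        # hypothetical energy of spermidine
E_MAA = -306.204        # hypothetical energy of methacrylic acid
complexes = {           # hypothetical energies of SPD:(MAA)_n complexes
    1: -776.742, 2: -1082.971, 3: -1389.198, 4: -1695.426, 5: -2001.668,
}

HARTREE_TO_KJ = 2625.5
for n_monomers, E_cplx in complexes.items():
    dE = E_cplx - (E_SPD + n_monomers * E_MAA)   # Delta E = complex - parts
    print(f"1:{n_monomers}  dE = {dE * HARTREE_TO_KJ:8.1f} kJ/mol")
```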

  7. In silico molecular modeling, docking and spectroscopic [FT-IR/FT-Raman/UV/NMR] analysis of Chlorfenson using computational calculations

    Science.gov (United States)

    Ramalingam, S.; Periandy, S.; Sugunakala, S.; Prabhu, T.; Bououdina, M.

    2013-11-01

    In the present work, the recorded FT-IR/FT-Raman spectra of Chlorfenson (4-chlorophenyl 4-chlorobenzenesulfonate) are analysed. The observed vibrational frequencies are assigned, and computational calculations are carried out by DFT (LSDA, B3LYP and B3PW91) methods with 6-31++G(d,p) and 6-311++G(d,p) basis sets, and the corresponding results are investigated together with the UV/NMR data. The change of structure of the chlorobenzenesulfonate due to the substitution of C6H4Cl is investigated. The vibrational sequence pattern of the molecule related to the substitutions is analysed in detail. Moreover, 13C NMR and 1H NMR chemical shifts are calculated by using the gauge-independent atomic orbital (GIAO) technique with HF/B3LYP/B3PW91 methods on the same basis sets. A study of the electronic properties (absorption wavelengths, excitation energy, dipole moment and frontier molecular orbital energies) is performed by HF and DFT methods. The calculated HOMO-LUMO energy gap ensures that charge transfer occurs within the molecule. Besides the frontier molecular orbitals (FMOs), the molecular electrostatic potential (MEP) is computed. NLO properties and Mulliken charges of Chlorfenson are also calculated and interpreted. Biological properties of this compound, such as the target receptor and the interacting residues, are identified and analysed by using SWISSMODEL, Castp, Hex and Pdb Sum. By using these properties, the mechanism of action of this compound on the ATP synthase of Tetranychus urticae is found, which is very useful for developing efficient pesticides that are less toxic to the environment.

  8. Estimation of radiation exposure in low-dose multislice computed tomography of the heart and comparison with a calculation program

    Energy Technology Data Exchange (ETDEWEB)

    Hohl, C.; Muehlenbruch, G.; Wildberger, J.E.; Schmidt, T.; Guenther, R.W.; Mahnken, A.H. [University of Technology of Aachen, Department of Diagnostic Radiology, Aachen (Germany); Leidecker, C. [University of Erlangen-Nuremberg, Institute of Medical Physics, Erlangen (Germany); Suess, C. [Siemens Medical Solutions Computed Tomography, Forchheim (Germany)

    2006-08-15

    The purpose of this study was to evaluate the achievable organ dose savings in low-dose multislice computed tomography (MSCT) of the heart using different tube voltages (80 kVp, 100 kVp, 120 kVp) and to compare the measurements with calculated values. A female Alderson-Rando phantom was equipped with thermoluminescent dosimeters (TLDs) in five different positions to assess the mean doses within representative organs (thyroid gland, thymus, oesophagus, pancreas, liver). Radiation exposure was performed on a 16-row MSCT scanner with six different routine scan protocols: a 120-kV and a 100-kV CT angiography (CTA) protocol with the same collimation, two 120-kV Ca-scoring (CS) protocols with different collimations and two 80-kV CS protocols with the same collimation as the 120-kV CS protocols. Each scan protocol was repeated five times. The measured dose values for the organs were compared with the values calculated by a commercially available computer program. Directly irradiated organs, such as the oesophagus, received doses of 34.7 mSv (CTA 16 x 0.75, 120 kVp), 21.9 mSv (CTA 16 x 0.75, 100 kVp) and 4.96 mSv (CS 12 x 1.5, 80 kVp); the thyroid, an organ receiving only scattered radiation, collected organ doses of 2.98 mSv (CTA 16 x 0.75, 120 kVp), 1.97 mSv (CTA 16 x 0.75, 100 kVp) and 0.58 mSv (CS 12 x 1.5, 80 kVp). The measured relative organ dose reductions from standard to low-kV protocols ranged from 30.9% to 55.9% and were statistically significant (P<0.05). The comparison with the calculated organ doses showed that the calculation program can precisely predict the relative dose reduction of cardiac low photon-energy protocols. (orig.)

  9. Acceleration of the calculation speed of computer-generated holograms using the sparsity of the holographic fringe pattern for a 3D object.

    Science.gov (United States)

    Kim, Hak Gu; Jeong, Hyunwook; Man Ro, Yong

    2016-10-31

    In computer-generated hologram (CGH) calculations, a diffraction pattern needs to be calculated from all points of a 3-D object, which requires a heavy computational cost. In this paper, we propose a novel fast computer-generated hologram calculation method using a sparse fast Fourier transform. The proposed method consists of two steps. First, the sparse dominant signals of the CGH are measured by calculating a wavefront on a virtual plane between the object and the CGH plane. Second, the wavefront on the CGH plane is calculated by using the measured sparsity with sparse Fresnel diffraction. Experimental results proved that the proposed method is much faster than existing works while preserving the visual quality.
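
    The two-step idea can be sketched with standard tools. The snippet below is an illustration under simplifying assumptions, not the authors' code: a plain FFT-based angular-spectrum propagator stands in for the sparse Fresnel diffraction, and sparsity is enforced by hard-thresholding the spectrum on the virtual plane.

```python
import numpy as np

def angular_spectrum(u, wavelength, dx, z):
    """Fresnel propagation of field u over distance z via the standard
    paraxial angular-spectrum transfer function (not the paper's code)."""
    n = u.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    H = np.exp(-1j * np.pi * wavelength * z * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(u) * H)

def sparse_cgh(obj_field, wavelength, dx, z1, z2, keep=0.05):
    """Two-step sketch: propagate to a virtual plane, keep only the
    'keep' fraction of dominant spectral coefficients (the measured
    sparsity), then propagate the sparsified wavefront to the CGH plane."""
    u_mid = angular_spectrum(obj_field, wavelength, dx, z1)
    spec = np.fft.fft2(u_mid)
    thresh = np.quantile(np.abs(spec), 1.0 - keep)
    spec[np.abs(spec) < thresh] = 0.0           # enforce sparsity
    u_mid = np.fft.ifft2(spec)
    return angular_spectrum(u_mid, wavelength, dx, z2 - z1)

obj = np.zeros((256, 256), complex); obj[120:136, 120:136] = 1.0  # toy object
holo = sparse_cgh(obj, wavelength=633e-9, dx=8e-6, z1=0.05, z2=0.1)
print(np.angle(holo).shape)
```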

  10. A Computer Program to Calculate Two-Stage Short-Run Control Chart Factors for (X, MR) Charts

    Directory of Open Access Journals (Sweden)

    Matthew E. Elam

    2006-04-01

    Full Text Available This paper is the second in a series of two papers that fully develops two-stage short-run (X, MR) control charts. This paper describes the development and execution of a computer program that accurately calculates first- and second-stage short-run control chart factors for (X, MR) charts using the equations derived in the first paper. The software used is Mathcad. The program accepts values for the number of subgroups, α for the X chart, and α for the MR chart, both above the upper control limit and below the lower control limit. Tables are generated for specific values of these inputs and the implications of the results are discussed. A numerical example illustrates the use of the program.
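
    For orientation, the conventional individuals (X, MR) control limits look as follows; the paper's contribution is to replace the standard asymptotic constants below with exactly calculated first- and second-stage short-run factors, which this sketch does not reproduce:

```python
import numpy as np

def individuals_limits(x):
    """Conventional (X, MR) chart limits for individuals data; the paper's
    two-stage short-run factors replace the constants d2 and D4 below
    with exact small-sample values."""
    x = np.asarray(x, dtype=float)
    mr = np.abs(np.diff(x))                # moving ranges of span 2
    x_bar, mr_bar = x.mean(), mr.mean()
    d2, D4 = 1.128, 3.267                  # standard constants for n = 2
    return {
        "X":  (x_bar - 3.0 * mr_bar / d2, x_bar + 3.0 * mr_bar / d2),
        "MR": (0.0, D4 * mr_bar),
    }

print(individuals_limits([10.2, 9.8, 10.5, 10.1, 9.9, 10.4, 10.0]))
```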

  11. RISKIND: A computer program for calculating radiological consequences and health risks from transportation of spent nuclear fuel

    Energy Technology Data Exchange (ETDEWEB)

    Yuan, Y.C. [Square Y Consultants, Orchard Park, NY (US); Chen, S.Y.; Biwer, B.M.; LePoire, D.J. [Argonne National Lab., IL (US)

    1995-11-01

    This report presents the technical details of RISKIND, a computer code designed to estimate potential radiological consequences and health risks to individuals and the collective population from exposures associated with the transportation of spent nuclear fuel. RISKIND is a user-friendly, interactive program that can be run on an IBM or equivalent personal computer under the Windows™ environment. Several models are included in RISKIND that have been tailored to calculate the exposure to individuals under various incident-free and accident conditions. The incident-free models assess exposures from both gamma and neutron radiation and can account for different cask designs. The accident models include accidental release, atmospheric transport, and the environmental pathways of radionuclides from spent fuels; these models also assess health risks to individuals and the collective population. The models are supported by databases that are specific to spent nuclear fuels and include a radionuclide inventory and dose conversion factors. In addition, the flexibility of the models allows them to be used for assessing any accidental release involving radioactive materials. The RISKIND code allows for user-specified accident scenarios as well as receptor locations under various exposure conditions, thereby facilitating the estimation of radiological consequences and health risks for individuals. Median (50% probability) and typical worst-case (less than 5% probability of being exceeded) doses and health consequences from potential accidental releases can be calculated by constructing a cumulative dose/probability distribution curve for a complete matrix of site joint-wind-frequency data. These consequence results, together with the estimated probability of the entire spectrum of potential accidents, form a comprehensive, probabilistic risk assessment of a spent nuclear fuel transportation accident.

  12. 3-D parallel program for numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS)

    Energy Technology Data Exchange (ETDEWEB)

    Sofronov, I.D.; Voronin, B.L.; Butnev, O.I. [VNIIEF (Russian Federation)] [and others]

    1997-12-31

    The aim of the work performed is to develop a 3D parallel program for the numerical calculation of gas dynamics problems with heat conductivity on distributed memory computational systems (CS), satisfying the condition that the numerical results be independent of the number of processors involved. Two basically different approaches to the structure of massive parallel computations have been developed. The first approach uses a 3D data matrix decomposition reconstructed at each temporal cycle and is a development of parallelization algorithms for multiprocessor CS with shared memory. The second approach is based on using a 3D data matrix decomposition that is not reconstructed during a temporal cycle. The program was developed on the 8-processor CS MP-3 made at VNIIEF and was adapted to the massively parallel CS Meiko-2 at LLNL by the joint efforts of the VNIIEF and LLNL staffs. A large number of numerical experiments has been carried out with different numbers of processors, up to 256, and the efficiency of parallelization has been evaluated in dependence on the processor number and parameters.

  13. Direct application of UNIFAC activity coefficient computer programs to the calculation of solvent activities and χ-parameters for polymer solutions

    NARCIS (Netherlands)

    van den Berg, J.W.A.

    1984-01-01

    Application of UNIFAC computer calculations to polymer solutions does not seem to make sense because of the value of the solvent activity: close to 1.000 over a considerable range of concentrations (up to 90% of polymer). A simple procedure is proposed to calculate solvent activity coefficients, and …
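
    The χ-parameter extraction referred to in the title can be illustrated with the standard Flory-Huggins relation; this is the textbook inversion, not the UNIFAC programs themselves, and the input values are invented:

```python
import numpy as np

def chi_from_activity(a1, phi2, r):
    """Invert the Flory-Huggins relation
       ln a1 = ln(1 - phi2) + (1 - 1/r) * phi2 + chi * phi2**2
    to get the chi-parameter from a solvent activity a1 at polymer
    volume fraction phi2 (r = molar volume ratio polymer/solvent)."""
    return (np.log(a1) - np.log(1.0 - phi2) - (1.0 - 1.0 / r) * phi2) / phi2**2

# illustrative: a1 stays near 1.000 even at high polymer content,
# yet chi is still well defined through this inversion
print(chi_from_activity(a1=0.995, phi2=0.30, r=1000.0))
```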

  14. Computer program MCAP-TOSS calculates steady-state fluid dynamics of coolant in parallel channels and temperature distribution in surrounding heat-generating solid

    Science.gov (United States)

    Lee, A. Y.

    1967-01-01

    Computer program calculates the steady state fluid distribution, temperature rise, and pressure drop of a coolant, the material temperature distribution of a heat generating solid, and the heat flux distributions at the fluid-solid interfaces. It performs the necessary iterations automatically within the computer, in one machine run.

  15. Computer program TRACK_TEST for calculating parameters and plotting profiles for etch pits in nuclear track materials

    Science.gov (United States)

    Nikezic, D.; Yu, K. N.

    2006-01-01

    A computer program called TRACK_TEST for calculating parameters (lengths of the major and minor axes) and plotting profiles of etch pits in nuclear track materials resulting from light-ion irradiation and subsequent chemical etching is described. The programming steps are outlined, including calculations of alpha-particle ranges, determination of the distance along the particle trajectory penetrated by the chemical etchant, calculations of track coordinates, determination of the lengths of the major and minor axes and determination of the contour of the track opening. Descriptions of the program are given, including the built-in V functions for the two commonly employed nuclear track materials commercially known as LR 115 (cellulose nitrate) and CR-39 (polyallyl diglycol carbonate) irradiated by alpha particles.
    Program summary
    Title of the program: TRACK_TEST
    Catalogue identifier: ADWT
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADWT
    Computer: Pentium PC
    Operating systems: Windows 95+
    Programming language: Fortran 90
    Memory required to execute with typical data: 256 MB
    No. of lines in distributed program, including test data, etc.: 2739
    No. of bytes in distributed program, including test data, etc.: 204 526
    Distribution format: tar.gz
    External subprograms used: The entire code must be linked with the MSFLIB library
    Nature of problem: Fast heavy charged particles (like alpha particles and other light ions) create latent tracks in some dielectric materials. After chemical etching in aqueous NaOH or KOH solutions, these tracks become visible under an optical microscope. The growth of a track is based on the simultaneous actions of the etchant on undamaged regions (with the bulk etch rate Vb) and along the particle track (with the track etch rate Vt). Growth of the track is described satisfactorily by these two parameters (Vb and Vt). Several models have been presented in the past describing …

  16. Theoretical background and user's manual for the computer code on groundwater flow and radionuclide transport calculation in porous rock

    Energy Technology Data Exchange (ETDEWEB)

    Shirakawa, Toshihiko [Computer Software Development Co., Ltd., Tokyo (Japan); Hatanaka, Koichiro [Japan Nuclear Cycle Development Inst., Tokai Works, Tokai, Ibaraki (Japan)

    2001-11-01

    In order to provide a basic manual covering the input data, the output data and the execution of a computer code for groundwater flow and radionuclide transport calculation in heterogeneous porous rock, we investigated the theoretical background of the geostatistical computer codes and prepared the user's manual for the computer code, which calculates three-dimensional water flow, the paths of moving radionuclides and one-dimensional radionuclide migration. In this report, based on the above investigation, we describe the geostatistical background for simulating a heterogeneous permeability field. We also describe the construction of the files, the input and output data, and an example of the calculations of the programs which simulate a heterogeneous permeability field and calculate groundwater flow and radionuclide transport. By utilizing the information in this report, heterogeneous porous rock can be modeled, and groundwater flow and radionuclide transport can be analyzed. (author)

  17. An evolutionary computation based algorithm for calculating solar differential rotation by automatic tracking of coronal bright points

    Science.gov (United States)

    Shahamatnia, Ehsan; Dorotovič, Ivan; Fonseca, Jose M.; Ribeiro, Rita A.

    2016-03-01

    Developing specialized software tools is essential to support studies of solar activity evolution. With new space missions such as the Solar Dynamics Observatory (SDO), solar images are being produced in unprecedented volumes. To capitalize on that huge data availability, the scientific community needs a new generation of software tools for automatic and efficient data processing. In this paper a prototype of a modular framework for solar feature detection, characterization, and tracking is presented. To develop an efficient system capable of automatic solar feature tracking and measuring, a hybrid approach combining specialized image processing, evolutionary optimization, and soft computing algorithms is being followed. The specialized hybrid algorithm for tracking solar features allows automatic feature tracking while gathering characterization details about the tracked features. The hybrid algorithm takes advantage of the snake model, a specialized image processing algorithm widely used in applications such as boundary delineation, image segmentation, and object tracking. Further, it exploits the flexibility and efficiency of Particle Swarm Optimization (PSO), a stochastic population-based optimization algorithm. PSO has been used successfully in a wide range of applications including combinatorial optimization, control, clustering, robotics, scheduling, and image processing and video analysis applications. The proposed tool, denoted PSO-Snake model, was already successfully tested in other works for tracking sunspots and coronal bright points. In this work, we discuss the application of the PSO-Snake algorithm for calculating the sidereal rotational angular velocity of the solar corona. To validate the results we compare them with published manual results performed by an expert.

  18. QUANTITATIVE STRUCTURE-ACTIVITY RELATIONSHIP ANALYSIS OF CURCUMIN AND ITS DERIVATIVES AS GST INHIBITORS BASED ON COMPUTATIONAL CHEMISTRY CALCULATION

    Directory of Open Access Journals (Sweden)

    Enade Perdana Istyastono

    2010-06-01

    Full Text Available The Quantitative Structure-Activity Relationship (QSAR) study was established on curcumin and its derivatives as glutathione S-transferase(s) (GSTs) inhibitors using atomic net charges as the descriptors. The charges were obtained from semiempirical AM1 and PM3 quantum-chemical calculations using a computational chemistry approach. The inhibition activity was expressed as the concentration that gave 50% inhibition of GSTs activity (IC50). The selection of the best QSAR equation models was determined by multiple linear regression analysis. This research was related to the nature of GSTs as multifunctional enzymes, which play an important role in the detoxification of electrophilic compounds, the process of inflammation and the effectivity of anticancer compounds. The results showed that the AM1 semiempirical method gave a better descriptor for the construction of the QSAR equation model than PM3 did. The best QSAR equation model was described by: log 1/IC50 = -2.238 - 17.326 qC2' + 1.876 qC4' + 9.200 qC6'. The equation was significant at the 95% level with statistical parameters: n = 10, m = 3, r = 0.839, SE = 0.254, F = 4.764, F/Ftable = 1.001. Keywords: QSAR analysis, curcumin, glutathione S-transferase(s) (GSTs), atomic net charge
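
    The quoted best model can be used directly as a prediction function; the charge inputs below are invented for illustration:

```python
# The best QSAR model quoted above, used as a prediction function;
# the atomic net charge values below are made-up inputs for illustration.
def log_inv_ic50(qC2p, qC4p, qC6p):
    """log(1/IC50) = -2.238 - 17.326*qC2' + 1.876*qC4' + 9.200*qC6'"""
    return -2.238 - 17.326 * qC2p + 1.876 * qC4p + 9.200 * qC6p

pred = log_inv_ic50(qC2p=-0.15, qC4p=-0.21, qC6p=-0.18)
ic50 = 10.0 ** (-pred)   # back-transform to an IC50 in the source's units
print(pred, ic50)
```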

  19. Computer calculations of the thermally-induced magnetic and electronic properties of the rare earth compounds RERu₂Si₂

    Energy Technology Data Exchange (ETDEWEB)

    Michalski, R. [Inst. of Physics, Pedagocial Univ., Cracow (Poland); Radwanski, R.J. [Center for Solid State Physics, Cracow (Poland)

    2005-07-01

    The aim of this paper is to demonstrate the effectiveness of a calculation method which takes into consideration the electrostatic ligand field as well as the magnetic interactions. Our calculation method is based on treating the crystal field (CEF) together with the Zeeman effect in one Hamiltonian, and it allows calculation of many of the temperature dependencies of the magnetic and electronic properties of rare earth compounds. The results of the calculations show the accuracy of the approach even for intermetallic compounds. The results obtained for the compounds of the RERu₂Si₂ family (RE = rare-earth element) fully confirm the experimental data, such as: the easy magnetic direction of all the analyzed compounds; the thermal dependencies of the magnetic properties, in particular the giant magnetocrystalline anisotropy of PrRu₂Si₂ with the calculated anisotropy field B_A > 400 T; the in-plane anisotropy of ErRu₂Si₂; and the cause of the difficulty in magnetic ordering of the compounds TmRu₂Si₂ and YbRu₂Si₂, as well as effects and dependencies not foreseen before. In this paper we have put together the elementary calculated magnetic properties for the chosen compounds of RERu₂Si₂ in the paramagnetic region. All calculations are made with the computer package BIREC 1.5. (orig.)
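
    A minimal single-ion sketch of the "CEF plus Zeeman in one Hamiltonian" idea, keeping only the axial B20 Stevens term for brevity (BIREC handles the full CEF parameter set; the B20 value below is invented):

```python
import numpy as np

def jz_matrix(J):
    """Diagonal Jz matrix in the |J, m> basis, m = J, J-1, ..., -J."""
    return np.diag(np.arange(J, -J - 1, -1.0))

def magnetization(T, J, gJ, B20, B):
    """Thermal average M = gJ*<Jz> (in Bohr magnetons) for a single-ion
    Hamiltonian combining an axial CEF term with the Zeeman effect:
      H = B20 * O20 - gJ * muB * B * Jz,  O20 = 3*Jz^2 - J*(J+1)."""
    kB, muB = 0.08617, 0.05788            # meV/K, meV/T
    Jz = jz_matrix(J)
    O20 = 3.0 * Jz @ Jz - J * (J + 1) * np.eye(Jz.shape[0])
    H = B20 * O20 - gJ * muB * B * Jz
    E, V = np.linalg.eigh(H)
    w = np.exp(-(E - E.min()) / (kB * T))
    w /= w.sum()                          # Boltzmann populations
    jz_levels = np.einsum('in,ij,jn->n', V.conj(), Jz, V).real
    return gJ * float(w @ jz_levels)

# Pr3+ (J = 4, gJ = 4/5) in a 1 T field; the B20 value is illustrative
for T in (2.0, 20.0, 100.0, 300.0):
    print(f"T = {T:5.1f} K   M = {magnetization(T, 4, 0.8, -0.05, 1.0):6.3f} mu_B")
```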

  20. Description and application of the EAP computer program for calculating life-cycle energy use and greenhouse gas emissions of household consumption items

    NARCIS (Netherlands)

    Benders, R.M.J.; Wilting, H.C.; Kramer, K.J.; Moll, H.C.

    2001-01-01

    Focusing on reduction in energy use and greenhouse gas emissions, a life-cycle-based analysis tool has been developed. The energy analysis program (EAP) is a computer program for determining energy use and greenhouse gas emissions related to household consumption items, using a hybrid calculation …

  1. TIMED: a computer program for calculating cumulated activity of a radionuclide in the organs of the human body at a given time, t, after deposition

    Energy Technology Data Exchange (ETDEWEB)

    Watson, S.B.; Snyder, W.S.; Ford, M.R.

    1976-12-01

    TIMED is a computer program designed to calculate cumulated radioactivity in the various source organs at various times after radionuclide deposition. TIMED embodies a system of differential equations which describes activity transfer in the lungs, gastrointestinal tract, and other organs of the body. This system accounts for delay of transfer of activity between compartments of the body and radioactive daughters.
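
    A two-compartment sketch of the kind of linear system TIMED integrates, with illustrative rate constants rather than ICRP values; the cumulated activity is the time integral of the compartment activity:

```python
import numpy as np
from scipy.integrate import solve_ivp, trapezoid

lam = np.log(2.0) / 8.02        # physical decay constant (1/day), illustrative
k12, k10 = 0.5, 0.1             # transfer rates: compartment 1 -> 2, 1 -> excretion

def rhs(t, a):
    a1, a2 = a
    return [-(k12 + k10 + lam) * a1,      # compartment 1 (e.g. transfer compartment)
            k12 * a1 - lam * a2]          # compartment 2 (e.g. an organ)

t_end = 50.0                              # days after deposition of 1 Bq
sol = solve_ivp(rhs, (0.0, t_end), [1.0, 0.0], dense_output=True, rtol=1e-8)

t = np.linspace(0.0, t_end, 2001)
a = sol.sol(t)
print("cumulated activity in the organ:", trapezoid(a[1], t), "Bq*d per Bq deposited")
```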

  2. The comparative analysis of different computations methods of strength of materials by the example of calculations of the axle beam

    Science.gov (United States)

    Evtushenko, S. I.; Petrov, I. A.; Shutova, M. N.; Alekseeva, A. S.

    2017-02-01

    The paper presents data from the calculation of the main strength characteristics by different methods. The basic data for the article were the calculations of the guiding axle beam of a vehicle. The calculation was performed by the analytic method, and it was necessary to re-check the strength-of-materials results by another method to ensure the reliability of the work carried out. The finite element method was chosen as the competing option.

  3. Detecting number processing and mental calculation in patients with disorders of consciousness using a hybrid brain-computer interface system

    National Research Council Canada - National Science Library

    Li, Yuanqing; Pan, Jiahui; He, Yanbin; Wang, Fei; Laureys, Steven; Xie, Qiuyou; Yu, Ronghao

    2015-01-01

    .... Number processing and mental calculation are important brain functions but are difficult to detect in patients with disorders of consciousness using motor response-based clinical assessment scales...

  4. Calculating buoy response for a wave energy converter—A comparison of two computational methods and experimental results

    Directory of Open Access Journals (Sweden)

    Linnea Sjökvist

    2017-05-01

    Full Text Available When designing a wave power plant, reliable and fast simulation tools are required. Computational fluid dynamics (CFD) software provides high accuracy but with a very high computational cost, and in operational, moderate sea states, linear potential flow theories may be sufficient to model the hydrodynamics. In this paper, a model is built in COMSOL Multiphysics to solve for the hydrodynamic parameters of a point-absorbing wave energy device. The results are compared with those of a linear model where the hydrodynamical parameters are computed using WAMIT, and with experimental results from the Lysekil research site. The agreement with experimental data is good for both numerical models.

  5. Development of a computer code for calculating the steady super/hypersonic inviscid flow around real configurations. Volume 2: Code description

    Science.gov (United States)

    Marconi, F.; Yaeger, L.

    1976-01-01

    A numerical procedure was developed to compute the inviscid super/hypersonic flow field about complex vehicle geometries accurately and efficiently. A second-order accurate finite difference scheme is used to integrate the three-dimensional Euler equations in regions of continuous flow, while all shock waves are computed as discontinuities via the Rankine-Hugoniot jump conditions. Conformal mappings are used to develop a computational grid. The effects of blunt nose entropy layers are computed in detail. Real gas effects for equilibrium air are included using curve fits of Mollier charts. Typical calculated results for shuttle orbiter, hypersonic transport, and supersonic aircraft configurations are included to demonstrate the usefulness of this tool.

  6. Aircraft Trajectories Computation-Prediction-Control (La Trajectoire de l’Avion Calcul-Prediction-Controle). Volume 2

    Science.gov (United States)

    1990-05-01

    …example a VDU with a touch-sensitive screen. Having indicated his choice he can leave the computer to compose and launch the data link message or messages… IBM-compatible computers (XT and AT types); on-line regulation of traffic and 4-D guidance of flight systems such as ROSAS/CINTIA; simulations of… utilisation of runway facilities. This component of the Zone of Convergence concept is referred to as ROSAS (Regional Optimised Sequencing And…

  7. Burnup calculations using the OREST computer code for uranium dioxide fuel elements of boiling water reactors. Abbrandberechnung mit OREST fuer Urandioxid-Siedewasserreaktor-Brennelemente

    Energy Technology Data Exchange (ETDEWEB)

    Hesse, U.

    1991-01-01

    There are plans to use plutonium-containing fuel elements (mixed oxide fuel) also in BWR type reactors, with a proportion of up to one third of the entire fuel core. The new concept uses complete MOX fuel elements, as are used in PWR type reactors. The OREST computer code was designed for burnup calculations in PWRs. The situation in BWRs is different, as in these reactor types the fuel elements are heterogeneous in design, and burnup calculations have to take into account the axial variations of the void fraction, so that multi-dimensional effects have to be calculated. The report explains that the one-dimensional OREST code can be enhanced by supplementary calculations, performed with the Monte Carlo type KENO code in this case, and is thus suitable without restrictions for performing burnup calculations for MOX fuel elements in BWRs. The calculation method and performance are illustrated by the example of a UO₂ fuel element of the Wuergassen reactor. The model calculations predict a relatively high residual activity in the upper part of the fuel element, and a distinct curium buildup in the lower third of the BWR fuel element. (orig./HP).

  8. Calculation of Lung Cancer Volume of Target Based on Thorax Computed Tomography Images using Active Contour Segmentation Method for Treatment Planning System

    Science.gov (United States)

    Patra Yosandha, Fiet; Adi, Kusworo; Edi Widodo, Catur

    2017-06-01

    In this research, the volume of the lung cancer target was calculated from thorax computed tomography (CT) images. The target volume was calculated for the treatment planning system in radiotherapy. The calculation of the target volume covers the gross tumor volume (GTV), the clinical target volume (CTV), the planning target volume (PTV) and the organs at risk (OAR). The target volume was calculated by summing the target areas on each slice and then multiplying the result by the slice thickness. The areas were calculated using digital image processing techniques with the active contour segmentation method; this segmentation provides the contours from which the target volume is obtained. The calculated volumes are 577.2 cm³ for GTV, 769.9 cm³ for CTV, 877.8 cm³ for PTV, 618.7 cm³ for OAR 1, 1,162 cm³ for OAR 2 right, and 1,597 cm³ for OAR 2 left. These values indicate that the image processing techniques developed can be implemented to calculate the lung cancer target volume based on thorax CT images. This research is expected to help doctors and medical physicists in determining and contouring the target volume quickly and precisely.
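
    The volume rule described above is a one-line computation once the per-slice contour areas are available; the areas below are made-up stand-ins for the active-contour output:

```python
import numpy as np

def target_volume(areas_cm2, slice_thickness_cm):
    """Volume = sum of per-slice contour areas x slice thickness, as
    described above (the areas would come from active-contour
    segmentation of each CT slice; these numbers are invented)."""
    return float(np.sum(areas_cm2) * slice_thickness_cm)

gtv_areas = [10.5, 18.2, 24.9, 27.3, 22.0, 14.1, 6.8]  # cm^2 per slice
print(target_volume(gtv_areas, slice_thickness_cm=0.5), "cm^3")
```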

  9. A user's guide to LUGSAN 1.1: A computer program to calculate and archive lug and sway brace loads for aircraft-carried stores

    Energy Technology Data Exchange (ETDEWEB)

    Dunn, W.N. [Sandia National Labs., Albuquerque, NM (United States). Experimental Structural Dynamics Dept.

    1994-07-01

    LUGSAN (LUG and Sway brace ANalysis) is an analysis and database computer program designed to calculate store lug and sway brace loads from aircraft captive carriage. LUGSAN combines the rigid body dynamics code SWAY85 and the maneuver calculation code MILGEN with an INGRES database to function as both an analysis and archival system. This report describes the operation of the LUGSAN application program, including function description, layout examples, and sample sessions. This report is intended to be a user's manual for version 1.1 of LUGSAN operating on the VAX/VMS system. The report is not intended to be a programmer's or developer's manual.

  10. ACDOS1: a computer code to calculate dose rates from neutron activation of neutral beamlines and other fusion-reactor components

    Energy Technology Data Exchange (ETDEWEB)

    Keney, G.S.

    1981-08-01

    A computer code has been written to calculate the neutron-induced activation of neutral-beam injector components and the corresponding dose rates as a function of geometry, component composition, and time after shutdown. The code, ACDOS1, was written in FORTRAN IV to calculate both activity and dose rates for up to 30 target nuclides and 50 neutron groups. Sufficient versatility has also been incorporated into the code to make it applicable to a variety of general activation problems due to neutrons of energy less than 20 MeV.
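
    The underlying per-nuclide physics is the standard activation-decay estimate, sketched below with invented numbers (ACDOS1 applies this per target nuclide and neutron group, folded with its cross-section library):

```python
import numpy as np

def activation(n_atoms, sigma_cm2, flux, t_irr, t_cool, half_life):
    """Standard point activation estimate: saturation buildup during
    irradiation followed by decay after shutdown,
      A = N*sigma*phi*(1 - exp(-lambda*t_irr))*exp(-lambda*t_cool)."""
    lam = np.log(2.0) / half_life
    return (n_atoms * sigma_cm2 * flux
            * (1.0 - np.exp(-lam * t_irr)) * np.exp(-lam * t_cool))

# illustrative numbers only: 1 mol of target atoms, 1 barn cross-section,
# 1 h irradiation, 1 d cooling, 6 h half-life
A = activation(n_atoms=6.022e23, sigma_cm2=1e-24, flux=1e10,   # n/cm^2/s
               t_irr=3600.0, t_cool=86400.0, half_life=6.0 * 3600.0)
print(A, "Bq")
```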

  11. Calculation of electromagnetic fields in electric machines by means of the finite element. Computational aspects; Calculo de campos electromagneticos en maquinas electricas mediante elemento finito. Aspectos computacionales

    Energy Technology Data Exchange (ETDEWEB)

    Rosales, Mario; De la Torre, Octavio [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1989-12-31

    This article describes the computational characteristics of the package CALIIE 2D of the Instituto de Investigaciones Electricas (IIE) for the calculation of two-dimensional electromagnetic fields. The computational implementation of the package is based on the electromagnetic and numerical formulations previously published in this series.

  12. POTAMOS mass spectrometry calculator: computer aided mass spectrometry to the post-translational modifications of proteins. A focus on histones.

    Science.gov (United States)

    Vlachopanos, A; Soupsana, E; Politou, A S; Papamokos, G V

    2014-12-01

    Mass spectrometry is a widely used technique for protein identification, and it has also become the method of choice to detect and characterize the post-translational modifications (PTMs) of proteins. Many software tools have been developed to deal with this complication. In this paper we introduce a new, free and user-friendly online software tool, named POTAMOS Mass Spectrometry Calculator, which was developed in the open source application framework Ruby on Rails. It can provide calculated mass spectrometry data in a time-saving manner, independently of instrumentation. In this web application we have focused on the well-known protein family of histones, whose PTMs are believed to play a crucial role in gene regulation, as suggested by the so-called "histone code" hypothesis. The PTMs implemented in this software are: methylations of arginines and lysines, acetylations of lysines and phosphorylations of serines and threonines. The application is able to calculate the kind, the number and the combinations of the possible PTMs corresponding to a given peptide sequence and a given mass, along with the full set of the unique primary structures produced by the possible distributions along the amino acid sequence. It can also calculate the masses and charges of a fragmented histone variant which carries predefined modifications already implemented. Additional functionality is provided by the calculation of the masses of fragments produced upon protein cleavage by the proteolytic enzymes that are most widely used in proteomics studies.
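
    The core computation is a sum of residue masses plus PTM mass shifts, enumerated over PTM combinations. A sketch with standard monoisotopic masses and an illustrative histone H3 peptide and modification set:

```python
# Monoisotopic residue masses (Da) for the amino acids in the example;
# the mass values are standard, the peptide and PTM assignment illustrative.
RES = {'A': 71.03711, 'R': 156.10111, 'T': 101.04768, 'K': 128.09496,
       'Q': 128.05858, 'S': 87.03203, 'G': 57.02146}
H2O = 18.01056
PTM = {'methyl': 14.01565, 'acetyl': 42.01057, 'phospho': 79.96633}

def peptide_mass(seq, mods=()):
    """Neutral monoisotopic mass of a peptide plus a list of PTMs,
    the kind of sum POTAMOS enumerates over all PTM combinations."""
    return sum(RES[a] for a in seq) + H2O + sum(PTM[m] for m in mods)

# histone H3 N-terminal peptide ARTKQTARKS with K4 dimethyl + S10 phospho
print(peptide_mass('ARTKQTARKS', mods=('methyl', 'methyl', 'phospho')))
```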

  13. Recent developments in methodologies for calculating the entropy and free energy of biological systems by computer simulation.

    Science.gov (United States)

    Meirovitch, Hagai

    2007-04-01

    The Helmholtz free energy, F, plays an important role in proteins because of their rugged potential energy surface, which is 'decorated' with a tremendous number of local wells (denoted microstates, m). F governs protein folding, whereas differences ΔF(mn) determine the relative populations of microstates that are visited by a flexible cyclic peptide or a flexible protein segment (e.g. a surface loop). Recently developed methodologies for calculating ΔF(mn) (and entropy differences, ΔS(mn)) mainly use thermodynamic integration and calculation of the absolute F; interesting new approaches in these categories are the adaptive integration method and the hypothetical scanning molecular dynamics method, respectively.

  14. Description of the computations and pilot procedures for planning fuel-conservative descents with a small programmable calculator

    Energy Technology Data Exchange (ETDEWEB)

    Vicroy, D.D.; Knox, C.E.

    1983-05-01

    A simplified flight management descent algorithm was developed and programmed on a small programmable calculator. It was designed to aid the pilot in planning and executing a fuel conservative descent to arrive at a metering fix at a time designated by the air traffic control system. The algorithm may also be used for planning fuel conservative descents when time is not a consideration. The descent path was calculated for a constant Mach/airspeed schedule from linear approximations of airplane performance with considerations given for gross weight, wind, and nonstandard temperature effects. The flight management descent algorithm and the vertical performance modeling required for the DC-10 airplane is described.
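
    The geometry the algorithm linearizes can be illustrated with a back-of-the-envelope version; all numbers below are invented, and the real algorithm additionally corrects for gross weight and nonstandard temperature:

```python
# Rough sketch of descent-planning geometry at constant Mach/airspeed:
# ground distance needed to lose altitude at a given descent rate,
# corrected for an average wind component (illustrative numbers only).
def descent_distance_nm(alt_ft, descent_rate_fpm, tas_kt, wind_kt=0.0):
    """Distance before the metering fix at which descent must begin;
    wind_kt > 0 is a tailwind, < 0 a headwind."""
    time_min = alt_ft / descent_rate_fpm
    return (tas_kt + wind_kt) * time_min / 60.0

# plan a descent from 31,000 ft to a 10,000 ft metering fix
print(descent_distance_nm(alt_ft=31000 - 10000, descent_rate_fpm=2200,
                          tas_kt=320, wind_kt=-25), "nm before the fix")
```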

  16. A Computational Model for Real-Time Calculation of Electric Field due to Transcranial Magnetic Stimulation in Clinics

    Directory of Open Access Journals (Sweden)

    Alessandra Paffi

    2015-01-01

    Full Text Available The aim of this paper is to propose an approach for an accurate and fast (real-time) computation of the electric field induced inside the whole brain volume during a transcranial magnetic stimulation (TMS) procedure. The numerical solution implements the admittance method for a discretized realistic brain model derived from Magnetic Resonance Imaging (MRI). Results are in good agreement with those obtained using commercial codes and require much less computational time. An integration of the developed code with neuronavigation tools will permit real-time evaluation of the stimulated brain regions during the TMS delivery, thus improving the efficacy of clinical applications.

  17. Assessment of effectiveness of geologic isolation systems. ARRRG and FOOD: computer programs for calculating radiation dose to man from radionuclides in the environment

    Energy Technology Data Exchange (ETDEWEB)

    Napier, B.A.; Roswell, R.L.; Kennedy, W.E. Jr.; Strenge, D.L.

    1980-06-01

    The computer programs ARRRG and FOOD were written to facilitate the calculation of internal and external radiation doses to man from radionuclides in the environment. Using ARRRG, radiation doses to man may be calculated for radionuclides released to bodies of water from which people might obtain fish, other aquatic foods, or drinking water, and in which they might fish, swim or boat. With the FOOD program, radiation doses to man may be calculated from deposition on farm or garden soil and crops during either an atmospheric or a water release of radionuclides. Deposition may be either directly from the air or from irrigation water. Fifteen crop or animal product pathways may be chosen. ARRRG and FOOD doses may be calculated for either a maximum-exposed individual or for a population group. Doses calculated are a one-year dose and a committed dose from one year of exposure. The exposure is usually considered as chronic; however, equations are included to calculate dose and dose commitment from acute (one-time) exposure. The equations for calculating internal dose and dose commitment are derived from those given by the International Commission on Radiological Protection (ICRP) for body burdens and the Maximum Permissible Concentration (MPC) of each radionuclide. The radiation doses from external exposure to contaminated farm fields or shorelines are calculated assuming an infinite flat plane source of radionuclides. A factor of two is included for surface roughness. A modifying factor to compensate for finite extent is included in the shoreline calculations.

  18. DOSEFU: Computer application for dose calculation and effluent management in normal operation; DOSEFU: Aplicacion informatica para calculo de dosis y gestion de efluents en operacion normal

    Energy Technology Data Exchange (ETDEWEB)

    Martin Garcia, J. E.; Gonzalvo Manovel, A.; Revuelta Garcia, L.

    2002-07-01

    DOSEFU is a computer application for Windows that implements the methodology of nuclear power plant Exterior Dose Calculation Manuals (Manuales de Calculo de Dosis al Exterior, MACADE) for calculating doses in normal operation caused by radioactive liquid and gaseous effluents, for the purpose of complying with the new Spanish Regulation on Health Protection against Ionizing Radiations, Royal Decree 783/2001, resulting from the transposition of Directive 96/29/Euratom, whereby the basic rules regarding health protection of workers and the population against the risks resulting from ionizing radiations are established. In addition to making dose calculations, DOSEFU generates, on a magnetic support, the information regarding radioactive liquid and gaseous effluents that plants must periodically send to the CSN (ELGA format). The computer application has been developed for the specific case of Jose Cabrera NPP; this plant-specific version is called DOEZOR. The application can be easily implemented for any other nuclear or radioactive facility. The application is user-friendly: the end user inputs data and executes the different modules through keys and dialogue boxes that are enabled by clicking the mouse (see figures 2, 3, 4 and 5). The application runs under Windows 95. Digital Visual Fortran has been used as the development tool; as it does not require additional libraries (DLLs), the application can be installed on any computer without affecting other installed programs. (Author)

  19. A computer code for calculation of radioactive nuclide generation and depletion, decay heat and γ-ray spectrum. FPGS90

    Energy Technology Data Exchange (ETDEWEB)

    Ihara, Hitoshi; Katakura, Jun-ichi; Nakagawa, Tsuneo [Japan Atomic Energy Research Inst., Tokai, Ibaraki (Japan). Tokai Research Establishment

    1995-11-01

    In a nuclear reactor, radioactive nuclides are generated and depleted with the burning up of nuclear fuel. The radioactive nuclides, emitting γ rays and β rays, act as the radioactive sources of decay heat in a reactor and of radiation exposure. In the safety evaluation of nuclear reactors and the nuclear fuel cycle, it is necessary to estimate the inventories of nuclides generated in nuclear fuel under the various burn-up conditions of the many kinds of nuclear fuel used in a nuclear reactor. FPGS90 is a code calculating the inventories of nuclides, the decay heat and the spectrum of emitted γ rays from fission products produced in a nuclear fuel under various kinds of burn-up conditions. The nuclear data library used in the FPGS90 code is the 'JNDC Nuclear Data Library of Fission Products - second version -', which was compiled by a working group of the Japanese Nuclear Data Committee for evaluating decay heat in a reactor. The code has a function for processing so-called evaluated nuclear data files such as ENDF/B, JENDL, ENSDF and so on. It also has a function for making figures of the calculated results. Using the FPGS90 code it is possible to do all the work from building the library, through calculating nuclide generation and decay heat, to making figures of the calculated results. (author).
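
    The decay-heat part is a summation over the fission-product inventory, H(t) = Σᵢ λᵢ Nᵢ(t) Eᵢ. A sketch with three invented pseudo-nuclides in place of the JNDC library:

```python
import numpy as np

# Sketch of the decay-heat summation a code like FPGS90 performs over its
# fission-product library; the three pseudo-nuclides are placeholders.
nuclides = [   # (initial atoms, half-life in s, mean beta+gamma energy in MeV)
    (1.0e18, 3.0e2, 1.2),
    (5.0e17, 8.6e4, 0.8),
    (2.0e17, 2.7e6, 0.5),
]

def decay_heat_watts(t):
    MEV_TO_J = 1.602e-13
    h = 0.0
    for n0, t_half, e_mev in nuclides:
        lam = np.log(2.0) / t_half
        h += lam * n0 * np.exp(-lam * t) * e_mev * MEV_TO_J  # lambda*N(t)*E
    return h

for t in (0.0, 1.0e3, 1.0e5, 1.0e7):   # seconds after shutdown
    print(f"t = {t:8.0f} s  H = {decay_heat_watts(t):10.3e} W")
```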

  20. Calculating how long it takes for a diffusion process to effectively reach steady state without computing the transient solution

    Science.gov (United States)

    Carr, Elliot J.

    2017-07-01

    Mathematically, it takes an infinite amount of time for the transient solution of a diffusion equation to transition from the initial to the steady state. Calculating a finite transition time, defined as the time required for the transient solution to transition to within a small prescribed tolerance of the steady-state solution, is much more useful in practice. In this paper, we study estimates of finite transition times that avoid explicit calculation of the transient solution by using the property that the transition to steady state defines a cumulative distribution function when time is treated as a random variable. In total, three approaches are studied: (i) mean action time, (ii) mean plus one standard deviation of action time, and (iii) an approach we derive by approximating the large-time asymptotic behavior of the cumulative distribution function. Our approach leads to a simple formula for calculating the finite transition time that depends on the prescribed tolerance δ and the (k−1)th and kth moments (k ≥ 1) of the distribution. Results comparing exact and approximate finite transition times lead to two key findings. First, although the first two approaches are useful at characterizing the time scale of the transition, they do not provide accurate estimates for diffusion processes. Second, the new approach allows one to calculate finite transition times accurate to effectively any number of significant digits using only the moments, with the accuracy increasing as the index k is increased.
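
    Approaches (i) and (ii) can be written down directly from the moments; the moment values below are invented for illustration:

```python
import numpy as np

def transition_time_estimates(m1, m2):
    """Approaches (i) and (ii) above: mean action time and mean plus one
    standard deviation, from the first two raw moments of the action-time
    distribution (m_k = integral of t^k dF, computable without the
    transient solution)."""
    std = np.sqrt(m2 - m1**2)
    return {"mean action time": m1, "mean + 1 std": m1 + std}

# made-up moment values standing in for the moments of a 1D diffusion problem
print(transition_time_estimates(m1=12.5, m2=180.0))
```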

  1. An Improved Computational Technique for Calculating Electromagnetic Forces and Power Absorptions Generated in Spherical and Deformed Body in Levitation Melting Devices

    Science.gov (United States)

    Zong, Jin-Ho; Szekely, Julian; Schwartz, Elliot

    1992-01-01

    An improved computational technique for calculating the electromagnetic force field, the power absorption and the deformation of an electromagnetically levitated metal sample is described. The technique is based on the volume integral method, but represents a substantial refinement; the coordinate transformation employed allows the efficient treatment of a broad class of rotationally symmetrical bodies. Computed results are presented to represent the behavior of levitation melted metal samples in a multi-coil, multi-frequency levitation unit to be used in microgravity experiments. The theoretical predictions are compared both with analytical solutions and with the results of previous computational efforts for spherical samples, and the agreement has been very good. The treatment of problems involving deformed surfaces, actually predicting the deformed shape of the specimens, breaks new ground and should be the major usefulness of the proposed method.

  2. YASEIS: Yet Another computer program to calculate synthetic SEISmograms for a spherically multi-layered Earth model

    Science.gov (United States)

    Ma, Yanlu

    2013-04-01

    Although most research nowadays focuses on the lateral heterogeneity of the 3D Earth, a spherically multi-layered model where the parameters depend only on depth still represents a good first-order approximation of the real Earth. Such 1D models can be used as starting models for seismic tomographic inversion or as background models for inverting source mechanisms. The problem of wave propagation in a spherically layered model was solved theoretically a long time ago (Takeuchi and Saito, 1972). Existing computer programs such as Mineos (developed by G. Masters, J. Woodhouse and F. Gilbert), Gemini (Friederich and Dalkolmo 1995), DSM (Kawai et al. 2006) and QSSP (Wang 1999) tackled the computational aspects of the problem. A new, simple and fast program for computing the Green's function of a stack of spherical dissipative layers is presented here. The analytical solutions within each homogeneous spherical layer are joined through the continuity boundary conditions and propagated from the center of the model up to the level of the source depth. Another solution is built by propagating downward from the free surface of the model to the source level. The final solution is then constructed in the frequency domain from the previous two solutions so as to satisfy the discontinuities of displacements and stresses at the source level which are required by the focal mechanism. The numerical instability in the propagator approach is removed by complementing the matrix propagation with an orthonormalization procedure (Wang 1999). Another instability, due to the high attenuation in the upper mantle low velocity zone, is overcome by switching the bases of the solutions from the spherical Bessel functions to the spherical Hankel functions when necessary. We compared the synthetic seismograms obtained from the new program YASEIS with those computed by Gemini and QSSP. In the range of near distances, the synthetics from a reflectivity code for horizontal layers are also compared with …

  3. Calculation Software

    Science.gov (United States)

    1994-01-01

    MathSoft Plus 5.0 is a calculation software package for electrical engineers and computer scientists who need advanced math functionality. It incorporates SmartMath, an expert system that determines a strategy for solving difficult mathematical problems. SmartMath was the result of the integration into Mathcad of CLIPS, a NASA-developed shell for creating expert systems. By using CLIPS, MathSoft, Inc. was able to save the time and money involved in writing the original program.

  4. Coupled magneto-thermal field computation in three-phase gas insulated cables. Pt. 2. Calculation of ampacity and losses

    Energy Technology Data Exchange (ETDEWEB)

    Hatziathanassiou, V. [Dept. of Electrical Engineering, Section of Electrical Energy, Aristotelian Univ. of Thessaloniki (Greece); Labridis, D. [Dept. of Electrical Engineering, Section of Electrical Energy, Aristotelian Univ. of Thessaloniki (Greece)

    1993-12-31

    The calculation of the ampacity and losses of three-phase gas-insulated cables, based on the FEM formulation developed in Part 1, is presented. Limitations of using a common mesh for both problems (electromagnetic and thermal) are also presented. Comparisons with existing calculations are made. Results concerning the sensitivity of cable ampacity and losses to variations of design and environmental parameters (burial depth, ambient temperature, soil thermal conductivity, cable emissivities, heat transfer coefficient, sheath radius) are finally presented. (orig.)

  5. Implementation of a Pseudo-Bending Seismic Travel-Time Calculator in a Distributed Parallel Computing Environment

    Science.gov (United States)

    2008-09-01

    [Fragmentary abstract; recoverable content: the calculator specifies the interfaces with which a given seismic phase must interact (Moho, 410-km and 660-km discontinuities), plus additional interfaces within the Earth model; rays can be constrained to bottom between the Moho and other mantle interfaces down to, but not including, the 660-km discontinuity; supported phase types include reflections, diffractions and refractions at the Moho and at mantle interfaces such as M100, M150, M175, M200, M410 and M660.]

  6. Evaluation of Flood Level under Main Feedwater Line Break Accident using GOTHIC Computer Code and Analytical Calculation by ANSI 56.11

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Keon Yeop; Park, Jae Won; Jeon, Woo Jae [FNC Technology Co., Yongin (Korea, Republic of)

    2016-10-15

    The design basis internal flooding is caused by postulated pipe ruptures or component failures. The flooding can cause failure of safety-related equipment and affect the integrity of the structure. Although large-diameter pipe ruptures are the most significant cases in flooding analysis, split breaks should also be considered across a spectrum of pipe break sizes and power levels. The pipe rupture analysis should be based on the most severe single active failure. To envelop the spectrum of pipe break conditions, flood relief paths are necessary, and passive flood protection requiring no operator action shall, as a rule, be applied. In this study, the evaluation of the flood level in case of a Main Feedwater Line Break (MFLB) was performed using the GOTHIC computer program and hand calculation. The flooding analyses were performed by hand calculation and GOTHIC analysis for an assumed MFLB condition. The calculated flood levels were 0.823 m and 0.691 m for the hand calculation and the GOTHIC analysis, respectively. Compared with the GOTHIC analysis, the hand calculation gave conservative results. However, in the actual flood protection design, a margin for uncertainty shall be included in order to reflect the outflow-reducing effects of vortex formation and air intake.
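    The hand calculation of a flood level is, at its core, a volume-over-area estimate. A back-of-envelope sketch with hypothetical numbers (not the ANSI 56.11 procedure or the plant data used in the paper):

        # Back-of-envelope flood-level estimate: released inventory spread
        # uniformly over the free floor area; both numbers are hypothetical.
        released_volume = 120.0   # m^3 of feedwater released before isolation
        floor_area = 150.0        # m^2 of free floor area in the compartment
        print(f"flood level = {released_volume / floor_area:.3f} m")  # 0.800 m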

  7. Computing the full spectrum of large sparse palindromic quadratic eigenvalue problems arising from surface Green's function calculations

    Science.gov (United States)

    Huang, Tsung-Ming; Lin, Wen-Wei; Tian, Heng; Chen, Guan-Hua

    2018-03-01

    The full spectrum of a large sparse ⊤-palindromic quadratic eigenvalue problem (⊤-PQEP) is considered, arguably for the first time, in this article. Such a problem is posed by the calculation of surface Green's functions (SGFs) of mesoscopic transistors with a very large non-periodic cross-section. For this problem, general-purpose eigensolvers are not efficient, nor is it advisable to resort to the decimation method etc. to obtain the Wiener-Hopf factorization. After reviewing some rigorous understanding of SGF calculation from the perspective of the ⊤-PQEP and nonlinear matrix equations, we present our new approach to this problem. In a nutshell, the unit disk where the spectrum of interest lies is broken down adaptively into pieces small enough that each can be locally tackled by the generalized ⊤-skew-Hamiltonian implicitly restarted shift-and-invert Arnoldi (G⊤SHIRA) algorithm with suitable shifts and other parameters, and the eigenvalues missed by this divide-and-conquer strategy can be recovered thanks to the accurate estimation provided by our newly developed scheme. Notably, a novel non-equivalence deflation is proposed to avoid as much as possible the duplication of nearby known eigenvalues when a new shift for G⊤SHIRA is determined. We demonstrate our new approach by calculating the SGF of a realistic nanowire whose unit cell is described by a matrix of size 4000 × 4000 at the density functional tight-binding level, corresponding to an 8 × 8 nm² cross-section. We believe that quantum transport simulation of realistic nano-devices in the mesoscopic regime will greatly benefit from this work.

  8. Computation of Viscous-Inviscid Interactions (Le Calcul de l’Interaction Fluide Parfait-Fluide Visqueux).

    Science.gov (United States)

    1981-10-01

    [Fragmentary OCR of the French abstract; approximate translation of the recoverable passages: the methods are able to calculate flow separations; elements of analysis are given in the lectures of Melnik [10] and Le Balleur [13]. To the uncertainties already mentioned concerning the numerical technique adopted for the external flow is added the difficulty of the discretization mesh, and of the numerical algorithms indispensable to a rigorous coupled solution. Uncertainties remain concerning trailing-edge problems.]

  9. Computational tool for phase-shift calculation in an interference pattern by fringe displacements based on a skeletonized image

    Science.gov (United States)

    Rivera-Ortega, Uriel; Pico-Gonzalez, Beatriz

    2016-01-01

    In this manuscript, an algorithm with a graphical user interface (GUI) designed in MATLAB for automatic estimation of the phase shift between two digitized interferograms is presented. The proposed algorithm finds the midpoint locus of the dark and bright interference fringes in two skeletonized fringe patterns and relates their displacements to the corresponding phase shift. In order to demonstrate the usefulness of the proposed GUI, its application to simulated and experimental interference patterns is shown. The viability of this GUI makes it a helpful and easy-to-use computational tool for educational and research purposes in optical phenomena for undergraduate or graduate studies in the field of physics.
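    The underlying relation used by such tools is that, for fringes of period p, a lateral skeleton displacement dx corresponds to a phase shift of 2π·dx/p. The sketch below (Python rather than the authors' MATLAB GUI) applies it to hypothetical skeleton midpoints:

        import numpy as np

        # Skeleton midpoints (in pixels) of the same fringes in two patterns;
        # the positions below are hypothetical.
        fringes_1 = np.array([12.0, 37.0, 62.0, 87.0])
        fringes_2 = np.array([18.0, 43.0, 68.0, 93.0])

        period = np.mean(np.diff(fringes_1))         # fringe spacing p
        shift = np.mean(fringes_2 - fringes_1)       # mean displacement dx
        phase_shift = 2 * np.pi * shift / period     # dphi = 2*pi*dx/p
        print(np.degrees(phase_shift))               # ~86.4 degrees here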

  10. Computational Calculation Of The Ionization Energies Of The Human Prion Protein By The Coarse-grain Method

    Science.gov (United States)

    Lyu, Justin; Andrianarijaona, V. M.

    2016-05-01

    The causes of the misfolding of the prion protein, i.e. the transformation of PrPC to PrPSc, have not been clearly elucidated. Many studies have focused on identifying possible chemical conditions, such as pH, temperature and chemical denaturation, that may trigger the pathological transformation of prion proteins (Weiwei Tao, Gwonchan Yoon, Penghui Cao, "β-sheet-like formation during the mechanical unfolding of prion protein", The Journal of Chemical Physics, 2015, 143, 125101). Here, we attempt to calculate the ionization energies of the prion protein, which may shed light on the possible causes of the misfolding. We plan to use the coarse-grain method, which allows for a feasible calculation time by means of approximation. We believe that by being able to approximate the ionization potential, particularly that of the regions known to form stable β-strands of the PrPSc form, the possible sources of denaturation, be it chemical or mechanical, may be narrowed down.

  11. SCHEMA computational design of virus capsid chimeras: calibrating how genome packaging, protection, and transduction correlate with calculated structural disruption.

    Science.gov (United States)

    Ho, Michelle L; Adler, Benjamin A; Torre, Michael L; Silberg, Jonathan J; Suh, Junghae

    2013-12-20

    Adeno-associated virus (AAV) recombination can result in chimeric capsid protein subunits whose ability to assemble into an oligomeric capsid, package a genome, and transduce cells depends on the inheritance of sequence from different AAV parents. To develop quantitative design principles for guiding site-directed recombination of AAV capsids, we have examined how capsid structural perturbations predicted by the SCHEMA algorithm correlate with experimental measurements of disruption in seventeen chimeric capsid proteins. In our small chimera population, created by recombining AAV serotypes 2 and 4, we found that protection of viral genomes and cellular transduction were inversely related to calculated disruption of the capsid structure. Interestingly, however, we did not observe a correlation between genome packaging and calculated structural disruption; a majority of the chimeric capsid proteins formed at least partially assembled capsids and more than half packaged genomes, including those with the highest SCHEMA disruption. These results suggest that the sequence space accessed by recombination of divergent AAV serotypes is rich in capsid chimeras that assemble into 60-mer capsids and package viral genomes. Overall, the SCHEMA algorithm may be useful for delineating quantitative design principles to guide the creation of libraries enriched in genome-protecting virus nanoparticles that can effectively transduce cells. Such improvements to the virus design process may help advance not only gene therapy applications but also other bionanotechnologies dependent upon the development of viruses with new sequences and functions.

  12. Development of a computer application for the calculation of the thermodynamic properties of the ammonia-water mixture

    Directory of Open Access Journals (Sweden)

    Iván Vera-Romero

    2017-06-01

    The design and optimization of energy systems are very important today. Some of these systems use the ammonia-water mixture as the working fluid; therefore, calculation of the thermodynamic properties becomes indispensable for their evaluation, design and optimization. In the present work an application has been developed in Excel™ using Visual Basic for Applications (VBA), based on a formulation using the excess Gibbs free energy, in order to simulate different systems such as cooling, air conditioning, heat pumps, cogeneration and power cycles, without the need to acquire commercial simulators for this purpose. To validate this program, the results were compared with data obtained from the National Institute of Standards and Technology (NIST) software and with experimental data reported in the literature.

  13. Calculation of the multipole coefficients of 2-dimensional magnetic fields from the spectrum analysis computed by FLUX2D

    CERN Document Server

    Gyr, Marcel

    1997-01-01

    Two-dimensional fields in regions with constant material properties (magnetic permeability, dielectric permittivity, heat conductivity, etc.) can be derived from a complex potential function W, whose real and imaginary parts correspond to the scalar potential function Φ and the stream function Ψ, respectively. In source-free regions (no currents, electric charges, heat sinks, etc.), the field is conservative and its complex potential W is a harmonic function whose Φ and Ψ satisfy the Laplace equation. In such regions W can be expressed as a power series, the coefficients of which are the multipole components of the field with respect to the origin of the series expansion. This note describes how these multipole coefficients are related to the amplitudes and phase angles as computed by FLUX2D in the spectrum analysis of the magnetic vector potential. In the case of electrical or thermal problems, these same considerations apply by analogy.
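    The relation between a spectrum analysis on a reference circle and the multipole coefficients can be sketched as follows: sampling the complex potential W on |z| = r0 and taking a discrete Fourier transform yields c_n·r0^n, from which the multipole coefficients c_n follow. The radius and coefficients below are hypothetical, and this illustrates the general principle rather than FLUX2D's output format.

        import numpy as np

        # Hypothetical multipole content: a dipole (n=1) plus a small n=3
        # term; r0 is a hypothetical reference radius in metres.
        r0 = 0.02
        c_true = {1: 1.0 + 0.0j, 3: 5e-3 * np.exp(1j * 0.3)}

        N = 256
        theta = 2 * np.pi * np.arange(N) / N
        z = r0 * np.exp(1j * theta)
        W = sum(c * z**n for n, c in c_true.items())  # potential on the circle

        spec = np.fft.fft(W) / N           # spec[n] ~ c_n * r0**n
        for n in (1, 2, 3):
            print(n, spec[n] / r0**n)      # n=1, 3 recover c_true; n=2 is ~0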

  14. The self-consistent calculation of pseudo-molecule energy levels, construction of energy level correlation diagrams and an automated computation system for SCF-X(Alpha)-SW calculations

    Science.gov (United States)

    Schlosser, H.

    1981-01-01

    The self-consistent calculation of the electronic energy levels of noble gas pseudo-molecules formed when a metal surface is bombarded by noble gas ions is discussed, along with the construction of energy level correlation diagrams as a function of interatomic spacing. The self-consistent field Xα scattered wave (SCF-Xα-SW) method is utilized. Preliminary results on the Ne-Mg system are given. An interactive Xα programming system, implemented on the LeRC IBM 370 computer, is described in detail. This automated system makes use of special PROCDEFs (procedure definitions) to minimize the data to be entered manually at a remote terminal. Listings of the special PROCDEFs and of typical input data are given.

  15. Application of computational fluid dynamics and fluid structure interaction techniques for calculating the 3D transient flow of journal bearings coupled with rotor systems

    Science.gov (United States)

    Li, Qiang; Yu, Guichang; Liu, Shulian; Zheng, Shuiying

    2012-09-01

    Journal bearings are important parts for maintaining the high dynamic performance of rotor machinery. Several methods have been proposed to analyze the flow field of journal bearings; most of them apply a simplified physical model and the classic Reynolds equation. General computational fluid dynamics (CFD)-fluid-structure interaction (FSI) techniques are more suitable when more detailed solutions of the fluid field in a journal bearing are needed. This paper deals with the quasi-coupled calculation of the transient fluid dynamics of the oil film in journal bearings and rotor dynamics with CFD-FSI techniques. The fluid dynamics of the oil film is calculated by applying the so-called "dynamic mesh" technique. A new mesh movement approach is presented, since the dynamic mesh models provided by FLUENT are not suitable for the transient oil flow in journal bearings. The proposed mesh movement approach is based on a structured mesh. When the journal moves, the displacement of every grid node in the bearing flow field can be calculated, and the update of the volume mesh is then handled automatically by a user-defined function (UDF). The journal displacement at each time step is obtained by solving the equations of motion of the rotor-bearing system under the known oil film force condition. A case study is carried out to calculate the locus of the journal center and the pressure distribution in order to demonstrate the feasibility of this method. The calculated results indicate that the proposed method can predict the transient flow field of a journal bearing in a rotor-bearing system where more realistic models are involved. The presented calculation method provides a basis for studying the nonlinear dynamic behavior of a general rotor-bearing system.
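    The mesh-movement rule described above can be sketched in a few lines (this stands in for the authors' FLUENT UDF, which is written against FLUENT's API): each node between the journal and bearing surfaces receives a linearly decaying share of the journal-centre displacement; radii and displacements below are hypothetical.

        import numpy as np

        # Nodes between the journal surface (r = Rj) and the fixed bearing
        # surface (r = Rb) move by a share of the centre motion that falls
        # linearly from 1 at the journal to 0 at the bearing.
        Rj, Rb = 0.025, 0.0253          # journal and bearing radii, m
        dx, dy = 1.0e-6, -2.0e-6        # journal-centre displacement this step

        r = np.linspace(Rj, Rb, 11)     # radial node positions on one grid line
        w = (Rb - r) / (Rb - Rj)        # weight: 1 at journal, 0 at bearing
        node_dx, node_dy = w * dx, w * dy   # displacement applied to each node
        print(node_dx)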

  16. Calculation of DNA strand breaks due to direct and indirect effects of Auger electrons from incorporated 123I and 125I radionuclides using the Geant4 computer code.

    Science.gov (United States)

    Raisali, Gholamreza; Mirzakhanian, Lalageh; Masoudi, Seyed Farhad; Semsarha, Farid

    2013-01-01

    In this work the number of DNA single-strand breaks (SSB) and double-strand breaks (DSB) due to direct and indirect effects of Auger electrons from incorporated (123)I and (125)I has been calculated using the Geant4-DNA toolkit. We have performed and compared the calculations for several cases ((125)I versus (123)I, source positions, and direct versus indirect breaks) to study the capability of Geant4-DNA in calculating DNA damage yields. Two different simple geometries of a 41-base-pair segment of B-DNA have been simulated. The (123)I source was located as in (123)IdUrd, and three different locations were considered for (125)I. The results showed that the simpler geometry is sufficient for direct break calculations, while the indirect damage yield is more sensitive to the helical shape of DNA. For (123)I Auger electrons, the average number of DSB due to direct hits is almost twice the DSB due to indirect hits. Furthermore, a comparison between the average number of SSB or DSB caused by Auger electrons of (125)I and (123)I in (125)IdUrd and (123)IdUrd shows that (125)I is 1.5 times more effective than (123)I per decay. The results are in reasonable agreement with previous experimental and theoretical results, which shows the applicability of the Geant4-DNA toolkit to nanodosimetry calculations; the toolkit benefits from open-source accessibility, and the DNA models used in this work save computational time. Also, the results showed that the simpler geometry is suitable for direct break calculations, while for the indirect damage yield the more precise model is preferred.

  17. Vibrational, NMR and UV-visible spectroscopic investigation and NLO studies on benzaldehyde thiosemicarbazone using computational calculations

    Science.gov (United States)

    Moorthy, N.; Prabakar, P. C. Jobe; Ramalingam, S.; Pandian, G. V.; Anbusrinivasan, P.

    2016-04-01

    In order to investigate the vibrational, electronic and NLO characteristics of benzaldehyde thiosemicarbazone (BTSC), the XRD, FT-IR, FT-Raman, NMR and UV-visible spectra were recorded and analysed against spectra calculated using the HF and B3LYP methods with the 6-311++G(d,p) basis set. The XRD results revealed that the stabilized molecular systems crystallize in an orthorhombic unit cell. The causes of the changes in the chemical and physical properties of the compound are discussed in detail using Mulliken charges and NBO analysis. The shift of the molecular vibrational pattern caused by fusing the thiosemicarbazone ligand with benzaldehyde has been closely observed. The occurrence of in-phase and out-of-phase molecular interactions over the frontier molecular orbitals was determined to evaluate the degeneracy of the electronic energy levels. Thermodynamic properties over the temperature range 100-1000 K were studied to assess the thermal stability of the crystal phase of the compound. The NLO properties were evaluated by determining the polarizability and hyperpolarizability of the compound in the crystal phase. The physical stability of the geometry of the compound is explained by geometry deformation analysis.

  18. Vibrational, NMR and UV-Visible spectroscopic investigation, VCD and NLO studies on Benzophenone thiosemicarbazone using computational calculations

    Science.gov (United States)

    Moorthy, N.; Jobe Prabakar, P. C.; Ramalingam, S.; Periandy, S.; Parasuraman, K.

    2016-04-01

    In order to explore the remarkable NLO properties of the prepared benzophenone thiosemicarbazone (BPTSC), experimental and theoretical investigations have been made. The theoretical calculations were made using the RHF and CAM-B3LYP methods with the 6-311++G(d,p) basis set. The title compound contains a C=S ligand, which helps to improve the second-harmonic generation (SHG) efficiency. The molecule has been examined in terms of its vibrational, electronic and optical properties. The molecular behavior was studied through the fundamental IR and Raman wavenumbers and compared with theory. The molecular chirality has been studied by performing vibrational circular dichroism (circularly polarized infrared radiation). The Mulliken charges of the compound show how the atomic charges are perturbed by the ligand. The molecular interaction of frontier orbitals emphasizes the modification of the chemical properties of the compound along the reaction path. Substantial NLO activity is induced by the benzophenone moiety in the thiosemicarbazone. The Gibbs free energy was evaluated at different temperatures, from which the enhancement of chemical stability was inferred. The VCD spectrum was simulated and the optical dichroism of the compound has been analyzed.

  19. Development of computer programme for the use of empirical calculation of mining subsidence; Desarrollo informatico para utilizacion de los metodos empiricos de calculo de subsidencia minera

    Energy Technology Data Exchange (ETDEWEB)

    1999-09-01

    The fundamental objective of the project is the development of a user-friendly computer programme that allows mining technicians to apply empirical methods for calculating mining subsidence easily. As is well known, these methods combine a suitable theoretical framework with experimental data obtained over a long period of mining activity in areas of different geological and geomechanical nature. They can thus incorporate into the calculation local parameters that could hardly be taken into account by purely theoretical methods. The basic calculation method follows the procedure developed by the VNIMI Institute of Leningrad, which is particularly suitable for the wide variety of conditions that may occur in the mining of flat or steep seams. The computer programme was built on INTERGRAPH's MicroStation system (version 5.0), which allows the development of new applications related to the basic aims of the project. An important feature of the programme is its easy adaptation to local conditions by adjusting the geomechanical or mining parameters according to values obtained from one's own working experience. (Author)

  20. Clinical application of calculated split renal volume using computed tomography-based renal volumetry after partial nephrectomy: Correlation with technetium-99m dimercaptosuccinic acid renal scan data.

    Science.gov (United States)

    Lee, Chan Ho; Park, Young Joo; Ku, Ja Yoon; Ha, Hong Koo

    2017-06-01

    To evaluate the clinical application of computed tomography-based measurement of renal cortical volume and split renal volume as a single tool to assess the anatomy and renal function in patients with renal tumors before and after partial nephrectomy, and to compare the findings with technetium-99m dimercaptosuccinic acid renal scan. The data of 51 patients with a unilateral renal tumor managed by partial nephrectomy were retrospectively analyzed. The renal cortical volume of tumor-bearing and contralateral kidneys was measured using ImageJ software. Split estimated glomerular filtration rate and split renal volume calculated using this renal cortical volume were compared with the split renal function measured with technetium-99m dimercaptosuccinic acid renal scan. A strong correlation between split renal function and split renal volume of the tumor-bearing kidney was observed before and after surgery (r = 0.89, P < 0.001 and r = 0.94, P < 0.001). The preoperative and postoperative split estimated glomerular filtration rate of the operated kidney showed a moderate correlation with split renal function (r = 0.39, P = 0.004 and r = 0.49, P < 0.001). The correlation between reductions in split renal function and split renal volume of the operated kidney (r = 0.87, P < 0.001) was stronger than that between split renal function and percent reduction in split estimated glomerular filtration rate (r = 0.64, P < 0.001). The split renal volume calculated using computed tomography-based renal volumetry had a strong correlation with the split renal function measured using technetium-99m dimercaptosuccinic acid renal scan. Computed tomography-based split renal volume measurement before and after partial nephrectomy can be used as a single modality for anatomical and functional assessment of the tumor-bearing kidney. © 2017 The Japanese Urological Association.
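    The split renal volume itself is simple arithmetic: the operated kidney's share of the total renal cortical volume, which the study shows tracks the split renal function from the DMSA scan. A minimal sketch with hypothetical volumes:

        # Hypothetical cortical volumes (cm^3) from CT-based volumetry.
        vol_operated, vol_contralateral = 130.0, 150.0
        srv = 100 * vol_operated / (vol_operated + vol_contralateral)
        print(f"split renal volume of operated kidney = {srv:.1f}%")  # 46.4%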

  1. Calculator calculus

    CERN Document Server

    McCarty, George

    1982-01-01

    How THIS BOOK DIFFERS This book is about the calculus. What distinguishes it, however, from other books is that it uses the pocket calculator to illustrate the theory. A computation that requires hours of labor when done by hand with tables is quite inappropriate as an example or exercise in a beginning calculus course. But that same computation can become a delicate illustration of the theory when the student does it in seconds on his calculator. Furthermore, the student's own personal involvement and easy accomplishment give him reassurance and encouragement. The machine is like a microscope, and its magnification is a hundred millionfold. We shall be interested in limits, and no stage of numerical approximation proves anything about the limit. However, the derivative of f(x) = 67.5^x, for instance, acquires real meaning when a student first appreciates its values as numbers, as limits. A quick example is 1.1^10, 1.01^100, 1.001^1000, .... Another example is t = 0.1, 0.01, ... in the function...
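    The calculator experiment alluded to above can be reproduced in a few lines; the sketch below (in Python rather than on a pocket calculator) evaluates the sequence (1 + 1/n)^n, whose limit is e ≈ 2.71828.

        # The book's keystroke experiment, reproduced programmatically:
        # (1 + 1/n)**n approaches e as n grows.
        for n in (10, 100, 1000, 10_000):
            print(n, (1 + 1 / n) ** n)   # 2.5937..., 2.7048..., 2.7169..., 2.7181...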

  2. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October, a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers at the end of November; it will take about two weeks. The Computing Shifts procedure was tested at full scale during this period and proved to be very efficient: 30 Computing Shift Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...

  3. COMPUTING

    CERN Multimedia

    M. Kasemann

    Overview During the past three months, activities were focused on data operations, testing and re-enforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created for supporting the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, collecting the user experience and feedback during analysis activities and developing tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity for discussing and addressing issues and solutions to the main challenges facing CMS computing. The lack of manpower is particul...

  4. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and events that are harder to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load is close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference. A large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...

  5. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Project is preparing for a busy year where the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in preparation of large samples for physics and detector studies, ramping to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...

  6. A Deep Insight into the Details of the Interisomerization and Decomposition Mechanism of o-Quinolyl and o-Isoquinolyl Radicals. Quantum Chemical Calculations and Computer Modeling.

    Science.gov (United States)

    Dubnikova, Faina; Tamburu, Carmen; Lifshitz, Assa

    2016-09-29

    The isomerization of o-quinolyl ↔ o-isoquinolyl radicals and their thermal decomposition were studied by quantum chemical methods, whereby the potential energy surfaces of the reaction channels and their kinetic rate parameters were determined. A detailed kinetics scheme containing 40 elementary steps was constructed. Computer simulations were carried out to determine the isomerization mechanism and the distribution of reaction products in the decomposition. The calculated mole percents of the stable products were compared to the experimental values obtained in this laboratory in the past, using the single-pulse shock tube. The agreement between the experimental and the calculated mole percents was very good. Figures showing the mole percents of eight stable decomposition products plotted vs. T are presented. The fast isomerization of o-quinolyl → o-isoquinolyl radicals via the intermediate indene imine radical and the attainment of fast equilibrium between these two radicals is the reason for the identical product distribution regardless of whether the reactant radical is o-quinolyl or o-isoquinolyl. Three of the main decomposition products of the o-quinolyl radical are those containing the benzene ring, namely phenyl, benzonitrile, and phenylacetylene radicals. They undergo further decomposition, mainly at high temperatures, via two types of reactions: (1) opening of the benzene ring in the radicals, followed by splitting into fragments; (2) dissociative attachment of benzonitrile and phenylacetylene by hydrogen atoms to form hydrogen cyanide and acetylene.
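    The role of the fast isomerization equilibrium can be illustrated with a toy two-state kinetic model (an assumption of the sketch; the actual scheme has 40 elementary steps), integrated with SciPy; all rate constants are hypothetical.

        import numpy as np
        from scipy.integrate import solve_ivp

        # Toy two-state model: fast q <-> iq interconversion with a slow,
        # equal depletion channel; all rate constants are hypothetical.
        k_fwd, k_rev, k_dec = 1.0e6, 8.0e5, 1.0e2   # s^-1

        def rhs(t, y):
            q, iq = y                    # [o-quinolyl], [o-isoquinolyl]
            return [-k_fwd * q + k_rev * iq - k_dec * q,
                    k_fwd * q - k_rev * iq - k_dec * iq]

        sol = solve_ivp(rhs, (0.0, 0.05), [1.0, 0.0], method="LSODA", rtol=1e-8)
        q, iq = sol.y[:, -1]
        print(iq / q)   # ~k_fwd/k_rev = 1.25: equilibrium is reached quickly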

  7. Human dental age estimation by calculation of pulp-tooth volume ratios yielded on clinically acquired cone beam computed tomography images of monoradicular teeth.

    Science.gov (United States)

    Star, Hazha; Thevissen, Patrick; Jacobs, Reinhilde; Fieuws, Steffen; Solheim, Tore; Willems, Guy

    2011-01-01

    Secondary dentine is responsible for a decrease in the volume of the dental pulp cavity with aging. The aim of this study is to evaluate a human dental age estimation method based on the ratio between the volume of the pulp and the volume of its corresponding tooth, calculated on clinically taken cone beam computed tomography (CBCT) images of monoradicular teeth. On 111 clinically obtained CBCT images (Scanora® 3D dental cone beam unit) of 57 female and 54 male patients ranging in age between 10 and 65 years, the pulp-tooth volume ratio of 64 incisors, 32 canines, and 15 premolars was calculated with Simplant® Pro software. A linear regression model was fit with age as the dependent variable and the ratio as predictor, allowing for interactions with gender and tooth type. The obtained pulp-tooth volume ratios were most strongly related to age for incisors. © 2010 American Academy of Forensic Sciences.
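    As an illustration of the regression step, the sketch below fits age on the pulp-tooth volume ratio with simulated data standing in for the 111 CBCT cases; the generating relation and noise level are hypothetical.

        import numpy as np

        # Simulated stand-in for the 111 cases; the generating relation
        # (age falls as the ratio rises) and noise level are hypothetical.
        rng = np.random.default_rng(3)
        ratio = rng.uniform(0.02, 0.09, 111)             # pulp/tooth volume
        age = 70 - 600 * ratio + rng.normal(0, 5, 111)   # hypothetical ages

        slope, intercept = np.polyfit(ratio, age, 1)     # age ~ ratio
        print(slope, intercept)   # recovers roughly -600 and 70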

  8. PACER -- A fast running computer code for the calculation of short-term containment/confinement loads following coolant boundary failure. Volume 2: User information

    Energy Technology Data Exchange (ETDEWEB)

    Sienicki, J.J. [Argonne National Lab., IL (United States). Reactor Engineering Div.

    1997-06-01

    A fast running and simple computer code has been developed to calculate pressure loadings inside light water reactor containments/confinements under loss-of-coolant accident conditions. PACER was originally developed to calculate containment/confinement pressure and temperature time histories for loss-of-coolant accidents in Soviet-designed VVER reactors and is relevant to the activities of the US International Nuclear Safety Center. The code employs a multicompartment representation of the containment volume and is focused upon application to early-time containment phenomena during and immediately following blowdown. PACER has been developed for FORTRAN 77 and earlier versions of FORTRAN. The code has been successfully compiled and executed on SUN SPARC and Hewlett-Packard HP-735 workstations provided that appropriate compiler options are specified. The code incorporates both capabilities built around a hardwired default generic VVER-440 Model V230 design and fairly general user-defined input. However, array dimensions are hardwired and must be changed by modifying the source code if the number of compartments/cells differs from the default number of nine. Detailed input instructions are provided as well as a description of outputs. Input files and selected output are presented for two sample problems run on both HP-735 and SUN SPARC workstations.

  9. Computer-assisted radiographic calculation of spinal curvature in brachycephalic "screw-tailed" dog breeds with congenital thoracic vertebral malformations: reliability and clinical evaluation.

    Directory of Open Access Journals (Sweden)

    Julien Guevar

    The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.

  10. Computer-assisted radiographic calculation of spinal curvature in brachycephalic "screw-tailed" dog breeds with congenital thoracic vertebral malformations: reliability and clinical evaluation.

    Science.gov (United States)

    Guevar, Julien; Penderis, Jacques; Faller, Kiterie; Yeamans, Carmen; Stalin, Catherine; Gutierrez-Quintana, Rodrigo

    2014-01-01

    The objectives of this study were: To investigate computer-assisted digital radiographic measurement of Cobb angles in dogs with congenital thoracic vertebral malformations, to determine its intra- and inter-observer reliability and its association with the presence of neurological deficits. Medical records were reviewed (2009-2013) to identify brachycephalic screw-tailed dog breeds with radiographic studies of the thoracic vertebral column and with at least one vertebral malformation present. Twenty-eight dogs were included in the study. The end vertebrae were defined as the cranial end plate of the vertebra cranial to the malformed vertebra and the caudal end plate of the vertebra caudal to the malformed vertebra. Three observers performed the measurements twice. Intraclass correlation coefficients were used to calculate the intra- and inter-observer reliabilities. The intraclass correlation coefficient was excellent for all intra- and inter-observer measurements using this method. There was a significant difference in the kyphotic Cobb angle between dogs with and without associated neurological deficits. The majority of dogs with neurological deficits had a kyphotic Cobb angle higher than 35°. No significant difference in the scoliotic Cobb angle was observed. We concluded that the computer assisted digital radiographic measurement of the Cobb angle for kyphosis and scoliosis is a valid, reproducible and reliable method to quantify the degree of spinal curvature in brachycephalic screw-tailed dog breeds with congenital thoracic vertebral malformations.

  11. PYFLOW: A computer code for the calculation of the impact parameters of Dilute Pyroclastic Density Currents (DPDC) based on field data

    Science.gov (United States)

    Dioguardi, Fabio; Dellino, Pierfrancesco

    2014-05-01

    PYFLOW is a computer code designed for quantifying the hazard related to Dilute Pyroclastic Density Currents (DPDCs). DPDCs are multiphase flows that form during explosive volcanic eruptions. They are a major source of hazard related to volcanic eruptions, as they exert significant stress on buildings and transport large amounts of volcanic ash, which is hot and unbreathable. The program calculates the DPDC's impact parameters (e.g. dynamic pressure and particle volumetric concentration) and is founded on turbulent boundary layer theory adapted to a multiphase framework. Fluid-dynamic variables are sought with a probabilistic approach, meaning that for each variable the average, maximum and minimum solutions are calculated. From these values, PYFLOW creates probability functions that allow the parameter to be calculated at a given percentile. The code is written in Fortran 90 and can be compiled and installed on Windows, Mac OS X and Linux operating systems. A User's Manual is provided, explaining the details of the theoretical background, the setup and running procedure, and the input data. The model inputs are DPDC deposit data, e.g. particle grain size, layer thickness, particle shape factor and density. PYFLOW reads input data from a specifically designed input file or from direct user typing at the command line. Guidelines for writing input data are also contained in the package. PYFLOW guides the user at each step of execution, asking for additional data and inputs. The program is a tool for DPDC hazard assessment and, as an example, an application to the DPDC deposits of the Agnano-Monte Spina eruption (4.1 ky BP) at Campi Flegrei (Italy) is presented.
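    The probabilistic step described above (minimum, average and maximum solutions turned into a probability function that is read off at a percentile) can be illustrated with a sketch; treating the average solution as the mode of a triangular distribution is an assumption of the sketch, not necessarily PYFLOW's internal choice, and the dynamic-pressure values are hypothetical.

        from scipy import stats

        # Hypothetical minimum, average and maximum dynamic pressures (kPa);
        # treating the average as the mode is an assumption of this sketch.
        p_min, p_avg, p_max = 1.2, 2.0, 3.5
        c = (p_avg - p_min) / (p_max - p_min)
        dyn_pressure = stats.triang(c, loc=p_min, scale=p_max - p_min)

        print(dyn_pressure.ppf(0.84))   # 84th-percentile dynamic pressure
        print(dyn_pressure.mean())      # mean of the fitted distribution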

  12. A study of computer-aided diagnosis for pulmonary nodule: comparison between classification accuracies using calculated image features and imaging findings annotated by radiologists.

    Science.gov (United States)

    Kawagishi, Masami; Chen, Bin; Furukawa, Daisuke; Sekiguchi, Hiroyuki; Sakai, Koji; Kubo, Takeshi; Yakami, Masahiro; Fujimoto, Koji; Sakamoto, Ryo; Emoto, Yutaka; Aoyama, Gakuto; Iizuka, Yoshio; Nakagomi, Keita; Yamamoto, Hiroyuki; Togashi, Kaori

    2017-05-01

    In our previous study, we developed a computer-aided diagnosis (CADx) system using imaging findings annotated by radiologists. The system, however, requires radiologists to input many imaging findings. In order to reduce this radiologist interaction, we further developed a CADx system using imaging findings derived from calculated image features, in which the system requires only a few user operations. The purpose of this study is to check whether calculated image features (CFT) or derived imaging findings (DFD) can represent the information in imaging findings annotated by radiologists (AFD). We calculate 2282 image features and derive 39 imaging findings by using information on a nodule's position and its type (solid or ground-glass). These image features are categorized into shape features, texture features and imaging-findings-specific features. Each imaging finding is derived by a corresponding random forest classifier. To check whether CFT or DFD can represent the information in AFD, under the assumption that the accuracies of classifiers are the same if the information included in the input is the same, we constructed classifiers using various types of information (CFT, DFD and AFD) and compared their accuracies on an inferred diagnosis of a nodule. We employ an SVM with an RBF kernel as the classifier to infer a diagnosis name. Accuracies of classifiers using DFD, CFT, AFD and the combination CFT + AFD were 0.613, 0.577, 0.773 and 0.790, respectively. Concordance rates between DFD and AFD for shape findings, texture findings and surrounding findings were 0.644, 0.871 and 0.768, respectively. The results suggest that CFT and AFD carry similar information but that CFT represents only a portion of AFD. In particular, CFT did not capture the shape information in AFD. In order to decrease radiologist interaction, the development of a method which overcomes these problems is necessary.
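    The two-stage pipeline described above (random forests deriving findings from calculated features, then an RBF-kernel SVM inferring a diagnosis) can be sketched with scikit-learn; the sketch below uses random data in place of the 2282 image features and 39 findings, and all shapes and labels are hypothetical.

        import numpy as np
        from sklearn.ensemble import RandomForestClassifier
        from sklearn.svm import SVC

        # Random data stands in for the calculated features and labels.
        rng = np.random.default_rng(0)
        X_feat = rng.normal(size=(200, 50))          # calculated image features
        y_finding = rng.integers(0, 2, size=200)     # one annotated finding
        y_diag = rng.integers(0, 3, size=200)        # diagnosis labels

        # Stage 1: derive an imaging finding from calculated features (DFD).
        finding_clf = RandomForestClassifier().fit(X_feat, y_finding)
        derived = finding_clf.predict_proba(X_feat)[:, 1:]

        # Stage 2: infer a diagnosis from derived findings with an RBF SVM.
        diag_clf = SVC(kernel="rbf").fit(derived, y_diag)
        print(diag_clf.score(derived, y_diag))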

  13. Cooperative and competitive concurrency in scientific computing. A full open-source upgrade of the program for dynamical calculations of RHEED intensity oscillations

    Science.gov (United States)

    Daniluk, Andrzej

    2011-06-01

    A computational model is a computer program that attempts to simulate an abstract model of a particular system. Computational models perform enormous numbers of calculations and often require supercomputer speed. As personal computers become more and more powerful, more laboratory experiments can be converted into computer models that can be interactively examined by scientists and students without the risk and cost of the actual experiments. The future of programming is concurrent programming. The threaded programming model provides application programmers with a useful abstraction of concurrent execution of multiple tasks. The objective of this release is to address the design of an architecture for scientific applications that may execute as multiple threads, as well as implementations of the related shared data structures. New version program summary Program title: GrowthCP Catalogue identifier: ADVL_v4_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADVL_v4_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 32 269 No. of bytes in distributed program, including test data, etc.: 8 234 229 Distribution format: tar.gz Programming language: Free Object Pascal Computer: multi-core x64-based PC Operating system: Windows XP, Vista, 7 Has the code been vectorised or parallelized?: No RAM: More than 1 GB. The program requires a 32-bit or 64-bit processor to run the generated code. Memory is addressed using 32-bit (on 32-bit processors) or 64-bit (on 64-bit processors with 64-bit addressing) pointers. The amount of addressed memory is limited only by the available amount of virtual memory. Supplementary material: The figures mentioned in the "Summary of revisions" section can be obtained here. Classification: 4.3, 7.2, 6.2, 8, 14 External routines: Lazarus [1] Catalogue

  14. Improved tissue assignment using dual-energy computed tomography in low-dose rate prostate brachytherapy for Monte Carlo dose calculation

    Energy Technology Data Exchange (ETDEWEB)

    Côté, Nicolas [Département de Physique, Université de Montréal, Pavillon Roger-Gaudry (D-428), 2900 Boulevard Édouard-Montpetit, Montréal, Québec H3T 1J4 (Canada); Bedwani, Stéphane [Département de Radio-Oncologie, Centre Hospitalier de l’Université de Montréal (CHUM), 1560 Rue Sherbrooke Est, Montréal, Québec H2L 4M1 (Canada); Carrier, Jean-François, E-mail: jean-francois.carrier.chum@ssss.gouv.qc.ca [Département de Physique, Université de Montréal, Pavillon Roger-Gaudry (D-428), 2900 Boulevard Édouard-Montpetit, Montréal, Québec H3T 1J4, Canada and Département de Radio-Oncologie, Centre Hospitalier de l’Université de Montréal (CHUM), 1560 Rue Sherbrooke Est, Montréal, Québec H2L 4M1 (Canada)

    2016-05-15

    Purpose: An improvement in tissue assignment for low-dose rate brachytherapy (LDRB) patients, enabling more accurate Monte Carlo (MC) dose calculation, was accomplished with a metallic artifact reduction (MAR) method specific to dual-energy computed tomography (DECT). Methods: The proposed MAR algorithm followed a four-step procedure. The first step involved applying a weighted blend of both DECT scans (I_H/L) to generate a new image (I_Mix). This action minimized Hounsfield unit (HU) variations surrounding the brachytherapy seeds. In the second step, the mean HU of the prostate in I_Mix was calculated and shifted toward the mean HU of the two original DECT images (I_H/L). The third step involved smoothing the newly shifted I_Mix and the two original I_H/L, followed by a subtraction of both, generating an image that represented the metallic artifact (I_A,(H/L)) with reduced noise levels. The final step consisted of subtracting the original I_H/L from the newly generated I_A,(H/L), obtaining a final image corrected for metallic artifacts. Following the completion of the algorithm, a DECT stoichiometric method was used to extract the relative electronic density (ρ_e) and effective atomic number (Z_eff) at each voxel of the corrected scans. Tissue assignment could then be determined with these two newly acquired physical parameters. Each voxel was assigned the tissue bearing the closest resemblance in terms of ρ_e and Z_eff, compared with values from the ICRU 42 database. A MC study was then performed to compare the dosimetric impacts of alternative MAR algorithms. Results: An improvement in tissue assignment was observed with the DECT MAR algorithm, compared to the single-energy computed tomography (SECT) approach. In a phantom study, tissue misassignment was found to reach 0.05% of voxels using the DECT approach, compared with 0.40% using the SECT method. Comparison of the DECT and SECT D
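    The four-step procedure described above can be sketched on toy arrays; the blend weight, smoothing scale, and the sign convention of the final subtraction (corrected = original minus artifact) are illustrative assumptions of the sketch, not the published values.

        import numpy as np
        from scipy import ndimage

        # Hedged sketch of the four-step MAR procedure on toy 2D arrays
        # standing in for the high/low-energy scans I_H and I_L.
        def mar_dect(I_H, I_L, prostate_mask, w=0.5, sigma=2.0):
            I_mix = w * I_H + (1 - w) * I_L                   # step 1: blend
            corrected = []
            for I in (I_H, I_L):
                shift = I[prostate_mask].mean() - I_mix[prostate_mask].mean()
                I_shifted = I_mix + shift                     # step 2: HU shift
                artifact = (ndimage.gaussian_filter(I, sigma)            # step 3:
                            - ndimage.gaussian_filter(I_shifted, sigma)) # smooth, subtract
                corrected.append(I - artifact)                # step 4: correct
            return corrected

        rng = np.random.default_rng(1)
        I_H = rng.normal(40.0, 5.0, (64, 64))    # hypothetical HU maps
        I_L = I_H + 10.0
        mask = np.zeros((64, 64), dtype=bool)
        mask[20:40, 20:40] = True                # hypothetical prostate region
        I_H_corr, I_L_corr = mar_dect(I_H, I_L, mask)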

  15. Reliability Calculations

    DEFF Research Database (Denmark)

    Petersen, Kurt Erling

    1986-01-01

    Risk and reliability analysis is increasingly being used in evaluations of plant safety and plant reliability. The analysis can be performed either during the design process or during the operation time, with the purpose of improving the safety or the reliability. Due to plant complexity and safety and availability requirements, sophisticated tools, which are flexible and efficient, are needed. Such tools have been developed in the last 20 years and they have to be continuously refined to meet the growing requirements. Two different areas of application were analysed. In structural reliability, probabilistic approaches have been introduced in some cases for the calculation of the reliability of structures or components. A new computer program has been developed based upon numerical integration in several variables. In systems reliability, Monte Carlo simulation programs are used especially in analysis of very...

  16. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources like the San Diego Supercomputer Center, which made it possible to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 TB. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...

  17. Description of input and examples for PHREEQC version 3: a computer program for speciation, batch-reaction, one-dimensional transport, and inverse geochemical calculations

    Science.gov (United States)

    Parkhurst, David L.; Appelo, C.A.J.

    2013-01-01

    PHREEQC version 3 is a computer program written in the C and C++ programming languages that is designed to perform a wide variety of aqueous geochemical calculations. PHREEQC implements several types of aqueous models: two ion-association aqueous models (the Lawrence Livermore National Laboratory model and WATEQ4F), a Pitzer specific-ion-interaction aqueous model, and the SIT (Specific ion Interaction Theory) aqueous model. Using any of these aqueous models, PHREEQC has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations with reversible and irreversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and pressure and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters within specified compositional uncertainty limits. Many new modeling features were added to PHREEQC version 3 relative to version 2. The Pitzer aqueous model (pitzer.dat database, with keyword PITZER) can be used for high-salinity waters that are beyond the range of application for the Debye-Hückel theory. The Peng-Robinson equation of state has been implemented for calculating the solubility of gases at high pressure. Specific volumes of aqueous species are calculated as a function of the dielectric properties of water and the ionic strength of the solution, which allows calculation of pressure effects on chemical reactions and the density of a solution. The specific conductance and the density of a solution are calculated and printed in the output file. In addition to Runge-Kutta integration, a stiff ordinary differential equation solver (CVODE) has been included for kinetic calculations with multiple rates that occur at widely different time scales

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...

  19. COMPUTING

    CERN Multimedia

    M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...

  20. ISICS2011, an updated version of ISICS: A program for calculating K-, L-, and M-shell cross sections from PWBA and ECPSSR theories using a personal computer

    Science.gov (United States)

    Cipolla, Sam J.

    2011-11-01

    In this new version of ISICS, called ISICS2011, a few omissions and incorrect entries in the built-in file of electron binding energies have been corrected; operational situations leading to un-physical behavior have been identified and flagged. New version program summary Program title: ISICS2011 Catalogue identifier: ADDS_v5_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADDS_v5_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 6011 No. of bytes in distributed program, including test data, etc.: 130 587 Distribution format: tar.gz Programming language: C Computer: 80486 or higher-level PCs Operating system: WINDOWS XP and all earlier operating systems Classification: 16.7 Catalogue identifier of previous version: ADDS_v4_0 Journal reference of previous version: Comput. Phys. Commun. 180 (2009) 1716. Does the new version supersede the previous version?: Yes Nature of problem: Ionization and X-ray production cross section calculations for ion-atom collisions. Solution method: Numerical integration of form factor using a logarithmic transform and Gaussian quadrature, plus exact integration limits. Reasons for new version: General need for higher precision in the output format for projectile energies; some built-in binding energies needed correcting; some anomalous results occurred due to faulty read-in data or calculated parameters becoming un-physical; erroneous calculations could result for the L and M shells when restricted K-shell options were inadvertently chosen; and to achieve general compatibility with ISICSoo, a companion C++ version portable to Linux and MacOS platforms, which has been submitted for publication in the CPC Program Library at approximately the same time as this present new standalone version of ISICS [1]. Summary of revisions: The format field for

  1. COMPUTING

    CERN Multimedia

    P. McBride

    It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...

  2. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

    Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...

  3. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...

  4. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the co...

  5. COMPUTING

    CERN Multimedia

    M. Kasemann

    CCRC’08 challenges and CSA08 During the February campaign of the Common Computing Readiness Challenges (CCRC’08), the CMS computing team achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and writing to tape at high speed. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...

  6. COMPUTING

    CERN Multimedia

    I. Fisk

    2010-01-01

    Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...

  7. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time, operations for MC production, real data reconstruction and re-reconstructions, and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave a good indication of the readiness of the WLCG infrastructure, with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated that they are fully capable of performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...

  8. Multiphase flow calculation software

    Science.gov (United States)

    Fincke, James R.

    2003-04-15

    Multiphase flow calculation software and computer-readable media carrying computer executable instructions for calculating liquid and gas phase mass flow rates of high void fraction multiphase flows. The multiphase flow calculation software employs various given, or experimentally determined, parameters in conjunction with a plurality of pressure differentials of a multiphase flow, preferably supplied by a differential pressure flowmeter or the like, to determine liquid and gas phase mass flow rates of the high void fraction multiphase flows. Embodiments of the multiphase flow calculation software are suitable for use in a variety of applications, including real-time management and control of an object system.
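    The record does not disclose the correlations used by the software, but the general idea of inferring a mass flow rate from a measured pressure differential can be sketched with the textbook single-phase orifice equation; the discharge coefficient, geometry and density below are illustrative assumptions only, not the multiphase model of the record.

    import math

    def orifice_mass_flow(dp_pa, density, c_d=0.62, d_orifice=0.025, d_pipe=0.05):
        """Single-phase mass flow rate (kg/s) from one pressure differential,
        using the textbook orifice equation
        mdot = Cd * A * sqrt(2 * rho * dP / (1 - beta**4))."""
        beta = d_orifice / d_pipe
        area = math.pi * (d_orifice / 2.0) ** 2
        return c_d * area * math.sqrt(2.0 * density * dp_pa / (1.0 - beta**4))

    # Example: 20 kPa differential across a water flow.
    print(orifice_mass_flow(20e3, 998.0), "kg/s")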

  9. Magnetic Field Grid Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Properties Calculator computes the estimated values of Earth's magnetic field (declination, inclination, vertical component, northerly...

  10. COMPUTING

    CERN Multimedia

    I. Fisk

    2013-01-01

    Computing activity has been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites.   Figure 1: MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month.   Figure 2: The transfer system worked reliably and efficiently and transferred on average close to 520 TB per week, with peaks close to 1.2 PB.   Figure 3: The volume of data moved between CMS sites in the last six months   The tape utilisation was a focus for the operation teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...

  11. COMPUTING

    CERN Multimedia

    I. Fisk

    2012-01-01

      Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users, we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently.  Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...

  12. COMPUTING

    CERN Multimedia

    2010-01-01

    Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...

  13. COMPUTING

    CERN Multimedia

    Matthias Kasemann

    Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...

  14. COMPUTING

    CERN Multimedia

    M. Kasemann

    Introduction A large fraction of the effort was focused during the last period on the preparation and monitoring of the February tests of the Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...

  15. COMPUTING

    CERN Multimedia

    Contributions from I. Fisk

    2012-01-01

    Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences.  Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...

  16. COMPUTING

    CERN Multimedia

    P. McBride

    The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format, and running the samples through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...

  17. Exact p-value calculation for heterotypic clusters of regulatory motifs and its application in computational annotation of cis-regulatory modules

    Directory of Open Access Journals (Sweden)

    Roytberg Mikhail A

    2007-10-01

    Background cis-Regulatory modules (CRMs) of eukaryotic genes often contain multiple binding sites for transcription factors. The phenomenon that binding sites form clusters in CRMs is exploited in many algorithms to locate CRMs in a genome. This gives rise to the problem of calculating the statistical significance of the event that multiple sites, recognized by different factors, would be found simultaneously in a text of a fixed length. The main difficulty comes from overlapping occurrences of motifs. So far, no tools have been developed allowing the computation of p-values for simultaneous occurrences of different motifs which can overlap. Results We developed and implemented an algorithm computing the p-value that s different motifs occur respectively k1, ..., ks or more times, possibly overlapping, in a random text. Motifs can be represented with a majority of popular motif models, but in all cases without indels. Zero or first order Markov chains can be adopted as a model for the random text. The computational tool was tested on the set of cis-regulatory modules involved in D. melanogaster early development, for which there exists an annotation of binding sites for transcription factors. Our test allowed us to correctly identify transcription factors cooperatively/competitively binding to DNA. Method The algorithm that precisely computes the probability of simultaneous motif occurrences is inspired by the Aho-Corasick automaton and employs a prefix tree together with a transition function. The algorithm runs with O(n|Σ|(m|ℋ| + K|σ|^K ∏i ki)) time complexity, where n is the length of the text, |Σ| is the alphabet size, m is the
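    The exact algorithm above (an Aho-Corasick-style automaton over several, possibly overlapping motifs) is more general than can be shown briefly, but its core idea, propagating probabilities through a pattern-matching automaton while counting overlapping occurrences, can be sketched in Python for a single exact word under an order-0 (i.i.d.) text model:

    def prob_at_least_k(word, k, n, probs):
        """P(word occurs >= k times, overlaps allowed, in an i.i.d. random
        text of length n); DP state = (KMP automaton state, count capped at k)."""
        m = len(word)
        fail = [0] * (m + 1)                       # KMP failure links
        for i in range(1, m):
            j = fail[i]
            while j and word[i] != word[j]:
                j = fail[j]
            fail[i + 1] = j + 1 if word[i] == word[j] else 0

        def step(state, ch):                       # automaton transition
            while state and word[state] != ch:
                state = fail[state]
            return state + 1 if word[state] == ch else 0

        dp = {(0, 0): 1.0}
        for _ in range(n):
            nxt = {}
            for (s, c), p in dp.items():
                for ch, p_ch in probs.items():
                    s2, c2 = step(s, ch), c
                    if s2 == m:                    # full occurrence found
                        c2 = min(c + 1, k)
                        s2 = fail[m]               # keep overlaps countable
                    nxt[(s2, c2)] = nxt.get((s2, c2), 0.0) + p * p_ch
            dp = nxt
        return sum(p for (s, c), p in dp.items() if c >= k)

    # P("AT" occurs at least twice in a random 10-letter DNA text).
    print(prob_at_least_k("AT", 2, 10, {b: 0.25 for b in "ACGT"}))

    The run time grows linearly with the text length, consistent with the factor n in the record's complexity bound.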

  18. COMPUTING

    CERN Multimedia

    I. Fisk

    2011-01-01

    Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. A GlideInWMS installation and its components are now deployed at CERN, in addition to the GlideInWMS factory located in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...

  19. Computer-based malnutrition risk calculation may enhance the ability to identify pediatric patients at malnutrition-related risk for unfavorable outcome.

    Science.gov (United States)

    Karagiozoglou-Lampoudi, Thomais; Daskalou, Efstratia; Lampoudis, Dimitrios; Apostolou, Aggeliki; Agakidis, Charalampos

    2015-05-01

    The study aimed to test the hypothesis that computer-based calculation of malnutrition risk may enhance the ability to identify pediatric patients at malnutrition-related risk for an unfavorable outcome. The Pediatric Digital Scaled MAlnutrition Risk screening Tool (PeDiSMART), incorporating the World Health Organization (WHO) growth reference data and malnutrition-related parameters, was used. This was a prospective cohort study of 500 pediatric patients aged 1 month to 17 years. Upon admission, the PeDiSMART score was calculated and anthropometry was performed. The Pediatric Yorkhill Malnutrition Score (PYMS), Screening Tool Risk on Nutritional Status and Growth (STRONGkids), and Screening Tool for the Assessment of Malnutrition in Pediatrics (STAMP) malnutrition screening tools were also applied. PeDiSMART's association with the clinical outcome measures (weight loss/nutrition support and hospitalization duration) was assessed and compared with the other screening tools. The PeDiSMART score was inversely correlated with anthropometry and bioelectrical impedance phase angle (BIA PhA). The score's grading scale was based on BIA PhA quartiles. Weight loss/nutrition support during hospitalization was significantly and independently associated with the malnutrition risk group allocation on admission, after controlling for anthropometric parameters and age. Receiver operating characteristic curve analysis showed a sensitivity of 87%, a specificity of 75% and a significant area under the curve, which differed significantly from that of STRONGkids and STAMP. In the subgroups of patients with PeDiSMART-based risk allocation different from that based on the other tools, PeDiSMART allocation was more closely related to outcome measures. PeDiSMART, applicable to the full age range of patients hospitalized in pediatric departments, graded according to BIA PhA, and embeddable in medical electronic records, enhances efficacy and reproducibility in identifying pediatric patients at

  20. COMPUTING

    CERN Multimedia

    M. Kasemann

    CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of more sites, such that they are available to participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a four-fold increase in throughput with respect to the LCG Resource Broker was observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...

  1. Declination Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...

  2. Blockade of neuronal α7-nAChR by α-conotoxin ImI explained by computational scanning and energy calculations.

    Directory of Open Access Journals (Sweden)

    Rilei Yu

    2011-03-01

    α-Conotoxins potently inhibit isoforms of nicotinic acetylcholine receptors (nAChRs), which are essential for neuronal and neuromuscular transmission. They are also used as neurochemical tools to study nAChR physiology and are being evaluated as drug leads to treat various neuronal disorders. A number of experimental studies have been performed to investigate the structure-activity relationships of conotoxin/nAChR complexes. However, the structural determinants of their binding interactions are still ambiguous in the absence of experimental structures of conotoxin-receptor complexes. In this study, the binding modes of α-conotoxin ImI to the α7-nAChR, currently the best experimentally studied system, were investigated using comparative modeling and molecular dynamics simulations. The structures of more than 30 single-point mutants of either the conotoxin or the receptor were modeled and analyzed. The models were used to explain qualitatively the change of affinities measured experimentally, including for some nAChR positions located outside the binding site. Mutational energies were calculated using different methods that combine a conformational refinement procedure (minimization with a distance-dependent dielectric constant or explicit water, or molecular dynamics using five restraint strategies) and a binding energy function (MM-GB/SA or MM-PB/SA). The protocol using explicit water energy minimization and MM-GB/SA gave the best correlations with experimental binding affinities, with an R2 value of 0.74. The van der Waals and non-polar desolvation components were found to be the main driving force for binding of the conotoxin to the nAChR. The electrostatic component was responsible for the selectivity of the various ImI mutants. Overall, this study provides novel insights into the binding mechanism of α-conotoxins to nAChRs, and the methodological developments reported here open avenues for computational scanning studies of a rapidly expanding

  3. Quantum chemical calculation of electron ionization mass spectra for general organic and inorganic molecules

    Science.gov (United States)

    Ásgeirsson, Vilhjálmur; Bauer, Christoph A.

    2017-01-01

    We introduce a fully stand-alone version of the Quantum Chemistry Electron Ionization Mass Spectra (QCEIMS) program [S. Grimme, Angew. Chem. Int. Ed., 2013, 52, 6306] allowing efficient simulations for molecules composed of elements with atomic numbers up to Z = 86. The recently developed extended tight-binding semi-empirical method GFN-xTB has been combined with QCEIMS, thereby eliminating dependencies on third-party electronic structure software. Furthermore, for reasonable calculations of ionization potentials, as required by the method, a second tight-binding variant, IPEA-xTB, is introduced here. This novel combination of methods allows the automatic, fast and reasonably accurate computation of electron ionization mass spectra for structurally different molecules across the periodic table. In order to validate and inspect the transferability of the method, we perform large-scale simulations for some representative organic, organometallic, and main-group inorganic systems. Theoretical spectra for 23 molecules are compared directly to experimental data taken from standard databases. For the first time, realistic quantum chemistry based EI-MS for organometallic systems like ferrocene or copper(II) acetylacetonate are presented. Compared to previously used semiempirical methods, GFN-xTB is faster, more robust, and yields overall higher quality spectra. The partially analysed theoretical reaction and fragmentation mechanisms are chemically reasonable and reveal in unprecedented detail the extreme complexity of high energy gas phase ion chemistry including complicated rearrangement reactions prior to dissociation. PMID:28959412

  4. Calculation of limits for significant unidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2015-01-01

    relative differences were computed to give limits for significant unidirectional differences with a constant cumulated maximum probability of both 95% (P ... concept on more than two results will increase the number of false-positive results. Therefore, a simple method is needed to interpret the significance of a difference when all available serial biomarker results are considered. METHODS: A computer simulation model using Excel was developed. Based on 10...
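    The simulation idea described in the record can be re-created in a few lines; the coefficients of variation, the normal model and the series length below are illustrative assumptions, not the values used in the paper.

    import numpy as np

    rng = np.random.default_rng(1)

    def serial_difference_limits(cv_within=0.05, cv_anal=0.03,
                                 n_results=10, n_sim=10_000, alpha=0.05):
        """Simulate series of results around each individual's homeostatic
        set point and return, for results 2..n, the empirical limit (%) for
        a significant increase vs. result 1 such that the *cumulated*
        false-positive rate stays at alpha."""
        cv_total = np.hypot(cv_within, cv_anal)
        series = rng.normal(1.0, cv_total, size=(n_sim, n_results))
        rel_diff = (series[:, 1:] - series[:, [0]]) / series[:, [0]]
        running_max = np.maximum.accumulate(rel_diff, axis=1)
        return np.quantile(running_max, 1.0 - alpha, axis=0) * 100.0

    print(np.round(serial_difference_limits(), 1))   # % limits for results 2..10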

  5. MEMS Calculator

    Science.gov (United States)

    SRD 166 MEMS Calculator (Web, free access)   This MEMS Calculator determines the following thin film properties from data taken with an optical interferometer or comparable instrument: a) residual strain from fixed-fixed beams, b) strain gradient from cantilevers, c) step heights or thicknesses from step-height test structures, and d) in-plane lengths or deflections. Then, residual stress and stress gradient calculations can be made after an optical vibrometer or comparable instrument is used to obtain Young's modulus from resonating cantilevers or fixed-fixed beams. In addition, wafer bond strength is determined from micro-chevron test structures using a material test machine.

  6. Quantification of the computational accuracy of code systems on the burn-up credit using experimental re-calculations; Quantifizierung der Rechengenauigkeit von Codesystemen zum Abbrandkredit durch Experimentnachrechnungen

    Energy Technology Data Exchange (ETDEWEB)

    Behler, Matthias; Hannstein, Volker; Kilger, Robert; Moser, Franz-Eberhard; Pfeiffer, Arndt; Stuke, Maik

    2014-06-15

    In order to account for the reactivity-reducing effect of burn-up in the criticality safety analysis for systems with irradiated nuclear fuel ("burnup credit"), numerical methods to determine the enrichment- and burnup-dependent nuclide inventory ("burnup code") and its resulting multiplication factor k_eff ("criticality code") are applied. To allow for reliable conclusions, for both calculation systems the systematic deviations of the calculation results from the respective true values, the bias and its uncertainty, are quantified by calculation and analysis of a sufficient number of suitable experiments. This quantification is specific to the application case under scope and is also called validation. GRS has developed a methodology to validate a calculation system for the application of burnup credit in the criticality safety analysis for irradiated fuel assemblies from pressurized water reactors. This methodology was demonstrated by applying the GRS home-built KENOREST burnup code and the criticality calculation sequence CSAS5 from the SCALE code package. It comprises a bounding approach and, alternatively, a stochastic one, both of which have been demonstrated exemplarily using a generic spent fuel pool rack and a generic dry storage cask, respectively. Based on publicly available post-irradiation examination and criticality experiments, currently the isotopes of the elements uranium and plutonium can be taken into account.

  7. Calculation of limits for significant bidirectional changes in two or more serial results of a biomarker based on a computer simulation model

    DEFF Research Database (Denmark)

    Lund, Flemming; Petersen, Per Hyltoft; Fraser, Callum G

    2015-01-01

    -subject biological variation plus the analytical variation. Each new result in this series was compared to the initial result. These successive serial differences were computed to give limits for significant bidirectional changes with constant cumulated maximum probabilities of 95% (p ... will increase the number of false positive results. METHODS: A computer simulation model was developed using Excel. Based on 10,000 simulated measurements among healthy individuals, a series of up to 20 results of a biomarker from each individual was generated using different values for the within...

  8. Algorithm of calculation of energy consumption on the basis of differential model of the production task performed on machines with computer numeric control (CNC)

    Science.gov (United States)

    Safarov, D. T.; Kondrashov, A. G.; Glinina, G. F.; Safarova, L. R.

    2017-09-01

    A calculation algorithm for the power consumption of all consumers involved in the operation and in production tasks is developed, using the example of workplaces equipped with CNC machines. The algorithm takes into account the actual status, operating modes and switching sequence of all electricity consumers.
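    A differential model of this kind reduces to summing, over every electricity consumer, its power in each operating mode multiplied by the time spent in that mode. The sketch below illustrates the bookkeeping with hypothetical consumers and switching times (expressed in hours within one task):

    from dataclasses import dataclass

    @dataclass
    class Consumer:
        name: str
        power_kw: float          # rated power while switched on
        intervals: list          # list of (on time h, off time h)

    def task_energy(consumers):
        """Energy (kWh) of a production task: sum of power * on-time."""
        return sum(c.power_kw * sum(t1 - t0 for t0, t1 in c.intervals)
                   for c in consumers)

    machine = [                  # hypothetical CNC workplace
        Consumer("spindle drive", 7.5, [(0.10, 0.85)]),
        Consumer("coolant pump", 1.1, [(0.10, 0.90)]),
        Consumer("control + CNC", 0.4, [(0.00, 1.00)]),
    ]
    print(f"{task_energy(machine):.2f} kWh per task")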

  9. Agreement between gamma passing rates using computed tomography in radiotherapy and secondary cancer risk prediction from more advanced dose calculated models.

    Science.gov (United States)

    Chaikh, Abdulhamid; Balosso, Jacques

    2017-06-01

    During the past decades in radiotherapy, dose distributions were calculated using density correction methods with pencil beam as a type 'a' algorithm. The objectives of this study are to assess and evaluate the impact of the dose distribution shift on the predicted secondary cancer risk (SCR), using modern advanced dose calculation algorithms (point kernel, as type 'b'), which consider the change in lateral electron transport. Clinical examples of pediatric cranio-spinal irradiation patients were evaluated. For each case, two radiotherapy treatment plans were generated using the same prescribed dose to the target, resulting in different numbers of monitor units (MUs) per field. The dose distributions were calculated using both algorithm types, respectively. A gamma index (γ) analysis was used to compare the dose distributions in the lung. The organ equivalent dose (OED) has been calculated with three different models: the linear, the linear-exponential and the plateau dose-response curves. The excess absolute risk ratio (EAR) was also evaluated as (EAR = OED type 'b' / OED type 'a'). The γ analysis results indicated an acceptable dose distribution agreement of 95% with 3%/3 mm. However, the γ-maps displayed dose displacements >1 mm around the healthy lungs. Compared to type 'a', the OED values from type 'b' dose distributions were about 8% to 16% higher, leading to an EAR ratio >1, ranging from 1.08 to 1.13 depending on the SCR model. The shift of dose calculation in radiotherapy, according to the algorithm, can significantly influence the SCR prediction and the plan optimization, since OEDs are calculated from the DVH for a specific treatment. The agreement between dose distribution and SCR prediction depends on dose-response models and epidemiological data. In addition, a γ passing rate of 3%/3 mm does not reflect the difference, of up to 15%, in the predictions of SCR resulting from alternative algorithms. Considering that modern algorithms are more accurate, showing
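    The three dose-response models named above are commonly written (in Schneider-type OED formulations) as a volume-weighted sum over the differential DVH. The sketch below uses that reading; the model parameters and the two DVHs are illustrative inventions, not the clinical data of the study.

    import numpy as np

    def oed(dose_gy, volume_frac, model="linear", alpha=0.044, delta=0.139):
        """Organ equivalent dose from a differential DVH; alpha and delta
        (1/Gy) are example dose-response parameters."""
        d = np.asarray(dose_gy, float)
        v = np.asarray(volume_frac, float)
        v = v / v.sum()                          # normalise volume fractions
        if model == "linear":
            resp = d
        elif model == "linear-exponential":
            resp = d * np.exp(-alpha * d)
        elif model == "plateau":
            resp = (1.0 - np.exp(-delta * d)) / delta
        else:
            raise ValueError(model)
        return float(np.sum(v * resp))

    # EAR ratio between two algorithms' lung DVHs (hypothetical numbers).
    dvh_a = ([0.5, 1.5, 3.0, 6.0], [0.4, 0.3, 0.2, 0.1])
    dvh_b = ([0.6, 1.7, 3.4, 6.5], [0.4, 0.3, 0.2, 0.1])
    print(oed(*dvh_b, "plateau") / oed(*dvh_a, "plateau"))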

  10. SFACTOR: a computer code for calculating dose equivalent to a target organ per microcurie-day residence of a radionuclide in a source organ

    Energy Technology Data Exchange (ETDEWEB)

    Dunning, D.E. Jr.; Pleasant, J.C.; Killough, G.G.

    1977-11-01

    A computer code SFACTOR was developed to estimate the average dose equivalent S (rem/μCi-day) to each of a specified list of target organs per microcurie-day residence of a radionuclide in source organs in man. Source and target organs of interest are specified in the input data stream, along with the nuclear decay information. The SFACTOR code computes components of the dose equivalent rate from each type of decay present for a particular radionuclide, including alpha, electron, and gamma radiation. For those transuranic isotopes which also decay by spontaneous fission, components of S from the resulting fission fragments, neutrons, betas, and gammas are included in the tabulation. Tabulations of all components of S are provided for an array of 22 source organs and 24 target organs for 52 radionuclides in an adult.
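    The quantity SFACTOR tabulates is, in MIRD-style notation, a sum over radiation types of the mean energy emitted per decay times the absorbed fraction, divided by the target mass. A minimal sketch in SI units follows; the emission spectrum and absorbed fractions are invented, and no conversion to the code's rem/μCi-day units is attempted.

    def s_value(emissions, absorbed_fraction, target_mass_kg):
        """Mean dose to the target per decay in the source (Gy/decay):
        S = sum_i E_i * y_i * phi_i / m_T.
        emissions: list of (mean energy J, yield per decay)."""
        return sum(e * y * absorbed_fraction.get(i, 0.0)
                   for i, (e, y) in enumerate(emissions)) / target_mass_kg

    MEV = 1.602176634e-13          # joules per MeV

    # Hypothetical nuclide: one beta (mean 0.3 MeV), one gamma (0.5 MeV).
    emissions = [(0.3 * MEV, 1.0), (0.5 * MEV, 0.9)]
    phi = {0: 1.0, 1: 0.03}        # absorbed fractions, source -> target
    print(s_value(emissions, phi, 0.31), "Gy per decay")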

  11. Multi-user software of radio therapeutical calculation using a computational network; Software multiusuario de calculo radioterapeutico usando una red de computo

    Energy Technology Data Exchange (ETDEWEB)

    Allaucca P, J.J.; Picon C, C.; Zaharia B, M. [Departamento de Radioterapia, Instituto de Enfermedades Neoplasicas, Av. Angamos Este 2520, Lima 34 (Peru)

    1998-12-31

    A hardware and software system has been designed for a radiotherapy department. It runs on a Novell Network operating-system platform, sharing the existing resources and those of the server; it is centralized, multi-user and offers greater safety. It resolves a variety of calculation problems and needs as well as patient workflow and administration; it is very fast and versatile, and contains a set of menus and options which may be selected with the mouse, arrow keys or keyboard shortcuts. (Author)

  12. Hypersonic Experimental and Computational Capability, Improvement and Validation. Volume 2. (l’Hypersonique experimentale et de calcul - capacite, amelioration et validation)

    Science.gov (United States)

    1998-12-01

    ...and K. Friedrichs, Supersonic Flow and Shock Waves, Springer-Verlag, New York, 1948. [Conical flow; Figure 113: spherical polar coordinates] ...flux-split, Gauss-Seidel relaxation numerical technique. A five-species air model (N2, O2, N, O, NO) is used in the solutions. The computational... "Flow Over Spheres", J. Fluid Mech. 199, 389-405 (1995). Kastell, D., Carl, M., and Eitelberg, G., "Phase Step Holographic Interferometry

  13. EQ3NR, a computer program for geochemical aqueous speciation-solubility calculations: Theoretical manual, user's guide, and related documentation (Version 7.0); Part 3

    Energy Technology Data Exchange (ETDEWEB)

    Wolery, T.J.

    1992-09-14

    EQ3NR is an aqueous solution speciation-solubility modeling code. It is part of the EQ3/6 software package for geochemical modeling. It computes the thermodynamic state of an aqueous solution by determining the distribution of chemical species, including simple ions, ion pairs, and complexes, using standard state thermodynamic data and various equations which describe the thermodynamic activity coefficients of these species. The input to the code describes the aqueous solution in terms of analytical data, including total (analytical) concentrations of dissolved components and such other parameters as the pH, pHCl, Eh, pe, and oxygen fugacity. The input may also include a desired electrical balancing adjustment and various constraints which impose equilibrium with special pure minerals, solid solution end-member components (of specified mole fractions), and gases (of specified fugacities). The code evaluates the degree of disequilibrium in terms of the saturation index (SI = log Q/K) and the thermodynamic affinity (A = −2.303 RT log Q/K) for various reactions, such as mineral dissolution or oxidation-reduction in the aqueous solution itself. Individual values of Eh, pe, oxygen fugacity, and Ah (redox affinity) are computed for aqueous redox couples. Equilibrium fugacities are computed for gas species. The code is highly flexible in dealing with various parameters as either model inputs or outputs. The user can specify modification or substitution of equilibrium constants at run time by using options on the input file.
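    The two disequilibrium measures quoted above are direct one-liners once log Q and log K are known; the calcite numbers below are illustrative activity values, not EQ3NR output.

    import math

    R = 8.314462618                      # J/(mol K)

    def saturation_index(log_q, log_k):
        """SI = log10(Q/K): zero at equilibrium, positive if supersaturated."""
        return log_q - log_k

    def affinity(log_q, log_k, temp_k=298.15):
        """Thermodynamic affinity A = -2.303 R T log10(Q/K), in J/mol."""
        return -math.log(10.0) * R * temp_k * (log_q - log_k)

    si = saturation_index(-8.2, -8.48)   # e.g. calcite, hypothetical log Q
    print(f"SI = {si:.2f}, A = {affinity(-8.2, -8.48) / 1000.0:.2f} kJ/mol")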

  14. Parallel computing for homogeneous diffusion and transport equations in neutronics; Calcul parallele pour les equations de diffusion et de transport homogenes en neutronique

    Energy Technology Data Exchange (ETDEWEB)

    Pinchedez, K

    1999-06-01

    Parallel computing meets the ever-increasing requirements for neutronic computer code speed and accuracy. In this work, two different approaches have been considered. We first parallelized the sequential algorithm used by the neutronics code CRONOS developed at the French Atomic Energy Commission. The algorithm computes the dominant eigenvalue associated with the simplified PN transport equations by a mixed finite element method. Several parallel algorithms have been developed on distributed memory machines. The performance of the parallel algorithms has been studied experimentally by implementation on a Cray T3D and theoretically by complexity models. A comparison of various parallel algorithms has confirmed the chosen implementations. We next applied a domain sub-division technique to the two-group diffusion eigenproblem. In the modal synthesis-based method, the global spectrum is determined from the partial spectra associated with the sub-domains. The eigenproblem is then expanded on a family composed, on the one hand, of eigenfunctions associated with the sub-domains and, on the other hand, of functions corresponding to the contribution from the interfaces between the sub-domains. For a 2-D homogeneous core, this modal method has been validated and its accuracy has been measured. (author)
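    The record names the dominant-eigenvalue problem without giving CRONOS internals, so the sketch below shows only the generic power-iteration view of such a computation; it is the matrix-vector product inside the loop that domain decomposition parallelises across sub-domains.

    import numpy as np

    def power_iteration(a, tol=1e-10, max_iter=1000):
        """Dominant eigenvalue and eigenvector of a by power iteration."""
        x = np.ones(a.shape[0])
        lam = 0.0
        for _ in range(max_iter):
            y = a @ x                    # the parallelisable kernel
            lam_new = np.linalg.norm(y)
            x = y / lam_new
            if abs(lam_new - lam) < tol:
                break
            lam = lam_new
        return lam, x

    # Small symmetric stand-in for a discretised diffusion operator.
    a = np.array([[4.0, 1.0, 0.0], [1.0, 3.0, 1.0], [0.0, 1.0, 2.0]])
    print(power_iteration(a)[0])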

  15. Synthesis, characterization, computational calculation and biological studies of some 2,6-diaryl-1-(prop-2-yn-1-yl)piperidin-4-one oxime derivatives

    Science.gov (United States)

    Sundararajan, G.; Rajaraman, D.; Srinivasan, T.; Velmurugan, D.; Krishnasamy, K.

    2015-03-01

    A new series of 2,6-diaryl-1-(prop-2-yn-1-yl)piperidin-4-one oximes (17-24) was designed and synthesized from 2,6-diarylpiperidin-4-one oximes (9-16) with propargyl bromide. Unambiguous structural elucidation has been carried out by IR, NMR (1H, 13C, 1H-1H COSY and HSQC) and mass spectral techniques, together with theoretical (DFT) calculations. Further, the crystal structure of compound 17 was evaluated by single-crystal X-ray diffraction analysis. The X-ray structural analysis of compound 17 showed that the configuration about the C=N double bond is syn to the C-5 carbon (E-form). The existence of the chair conformation was further confirmed by theoretical DFT calculation. All the synthesized compounds were screened for in vitro antimicrobial activity against a panel of selected bacterial and fungal strains using Ciprofloxacin and Ketoconazole as standards. The minimum inhibitory concentration (MIC) results revealed that most of the 2,6-diaryl-1-(prop-2-yn-1-yl)piperidin-4-one oximes (17, 19, 20 and 23) exhibited better activity against the selected bacterial and fungal strains.

  16. Hydrogen bonding interactions and supramolecular assemblies in 2-amino guanidinium 4-methyl benzene sulphonate crystal structure: Hirshfeld surfaces and computational calculations

    Science.gov (United States)

    Muthuraja, P.; Joselin Beaula, T.; Balachandar, S.; Bena Jothy, V.; Dhandapani, M.

    2017-10-01

    2-Aminoguanidinium 4-methyl benzene sulphonate (AGMS), an organic compound with a large assembly of hydrogen-bonding interactions, was crystallized at room temperature. The structure of the compound was confirmed by FT-IR, NMR and single-crystal X-ray diffraction analysis. Numerous hydrogen-bonded interactions were found to form supramolecular assemblies in the molecular structure. Fingerprint plots of the Hirshfeld surface analysis spell out the interactions in various directions. The molecular structure of AGMS was optimised by HF, MP2 and DFT (B3LYP and CAM-B3LYP) methods with the 6-311G(d,p) basis set, and the geometrical parameters were compared. Electrostatic potential calculations of the reactants and product confirm the transfer of a proton. Optical properties of AGMS were ascertained by UV-Vis absorbance and reflectance spectra. The band gap of AGMS is found to be 2.689 eV. Due to the numerous hydrogen bonds, the crystal is thermally stable up to 200 °C. Hyperconjugative interactions, which are responsible for the second hyperpolarizabilities, were accounted for by NBO analysis. Static and frequency-dependent optical properties were calculated at the HF and DFT levels. The hyperpolarizabilities of AGMS increase rapidly at frequencies of 0.0428 and 0.08 a.u. compared to the static ones. The compound exhibits violet and blue emission.

  17. Development of a computer code to calculate the distribution of radionuclides within the human body by the biokinetic models of the ICRP.

    Science.gov (United States)

    Matsumoto, Masaki; Yamanaka, Tsuneyasu; Hayakawa, Nobuhiro; Iwai, Satoshi; Sugiura, Nobuyuki

    2015-03-01

    This paper describes the Basic Radionuclide vAlue for Internal Dosimetry (BRAID) code, which was developed to calculate the time-dependent activity distribution in each organ and tissue characterised by the biokinetic compartmental models provided by the International Commission on Radiological Protection (ICRP). Translocation from one compartment to the next is taken to be governed by first-order kinetics, which is formulated as a system of first-order differential equations. In the source program of this code, the conservation equations are solved for the mass balance that describes the transfer of a radionuclide between compartments. This code is applicable to the evaluation of the radioactivity of nuclides in an organ or tissue without modification of the source program. It is also possible to handle easily the cases of a revision of the biokinetic model or the application of a uniquely defined model by a user, because this code is designed so that all information on the biokinetic model structure is imported from an input file. Sample calculations are performed with the ICRP model, and the results are compared with analytic solutions using simple models. It is suggested that this code provides sufficient results for dose estimation and the interpretation of monitoring data.
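    First-order compartmental kinetics of the kind BRAID solves condense to dN/dt = M N, where M collects the transfer coefficients and the decay constant; a minimal sketch with an invented three-compartment chain (not an ICRP model) follows.

    import numpy as np
    from scipy.linalg import expm

    def solve_compartments(k, decay, n0, t):
        """dN/dt = M N with k[i][j] = transfer rate (1/day) from
        compartment j to i and radioactive decay in every compartment."""
        k = np.asarray(k, float)
        m = k - np.diag(k.sum(axis=0)) - decay * np.eye(len(n0))
        return expm(m * t) @ n0          # matrix-exponential solution

    # Hypothetical chain blood -> organ -> excreta, decay 0.1/day.
    k = np.array([[0.0, 0.0, 0.0],
                  [0.5, 0.0, 0.0],
                  [0.0, 0.2, 0.0]])
    n = solve_compartments(k, 0.1, np.array([1.0, 0.0, 0.0]), t=2.0)
    print(np.round(n, 4))                # compartment contents after 2 days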

  18. Construction of a computational exposure model for dosimetric calculations using the EGS4 Monte Carlo code and voxel phantoms; Construcao de um modelo computacional de exposicao para calculos dosimetricos utilizando o codigo Monte Carlo EGS4 e fantomas de voxels

    Energy Technology Data Exchange (ETDEWEB)

    Vieira, Jose Wilson

    2004-07-15

    The MAX phantom has been developed from existing segmented images of a male adult body, in order to achieve a representation as close as possible to the anatomical properties of the reference adult male specified by the ICRP. In computational dosimetry, MAX can simulate the geometry of a human body under exposure to ionizing radiations, internal or external, with the objective of calculating the equivalent dose in organs and tissues for occupational, medical or environmental purposes of the radiation protection. This study presents a methodology used to build a new computational exposure model MAX/EGS4: the geometric construction of the phantom; the development of the algorithm of one-directional, divergent, and isotropic radioactive sources; new methods for calculating the equivalent dose in the red bone marrow and in the skin, and the coupling of the MAX phantom with the EGS4 Monte Carlo code. Finally, some results of radiation protection, in the form of conversion coefficients between equivalent dose (or effective dose) and free air-kerma for external photon irradiation are presented and discussed. Comparing the results presented with similar data from other human phantoms it is possible to conclude that the coupling MAX/EGS4 is satisfactory for the calculation of the equivalent dose in radiation protection. (author)

  19. BerkeleyGW: A massively parallel computer package for the calculation of the quasiparticle and optical properties of materials and nanostructures

    Science.gov (United States)

    Deslippe, Jack; Samsonidze, Georgy; Strubbe, David A.; Jain, Manish; Cohen, Marvin L.; Louie, Steven G.

    2012-06-01

    BerkeleyGW is a massively parallel computational package for electron excited-state properties that is based on many-body perturbation theory employing the ab initio GW and GW plus Bethe-Salpeter equation methodology. It can be used in conjunction with many density-functional theory codes for ground-state properties, including PARATEC, PARSEC, Quantum ESPRESSO, SIESTA, and Octopus. The package can be used to compute the electronic and optical properties of a wide variety of material systems from bulk semiconductors and metals to nanostructured materials and molecules. The package scales to 10 000s of CPUs and can be used to study systems containing up to 100s of atoms. Program summary: Program title: BerkeleyGW. Catalogue identifier: AELG_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELG_v1_0.html. Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland. Licensing provisions: Open source BSD License. See code for licensing details. No. of lines in distributed program, including test data, etc.: 576 540. No. of bytes in distributed program, including test data, etc.: 110 608 809. Distribution format: tar.gz. Programming language: Fortran 90, C, C++, Python, Perl, BASH. Computer: Linux/UNIX workstations or clusters. Operating system: Tested on a variety of Linux distributions in parallel and serial as well as AIX and Mac OSX. RAM: (50-2000) MB per CPU (highly dependent on system size). Classification: 7.2, 7.3, 16.2, 18. External routines: BLAS, LAPACK, FFTW, ScaLAPACK (optional), MPI (optional). All available under open-source licenses. Nature of problem: The excited-state properties of materials involve the addition or subtraction of electrons as well as the optical excitations of electron-hole pairs. The excited particles interact strongly with other electrons in a material system. This interaction affects the electronic energies, wavefunctions and lifetimes. It is well known that ground-state theories, such as standard methods

  20. Possibilités actuelles du calcul des constantes élastiques de polymères par des méthodes de simulation atomistique Current Possibilities of the Computation of Elastic Constants of Polymers Using Atomistic Simulations

    Directory of Open Access Journals (Sweden)

    Dal Maso F.

    2006-12-01

    Les propriétés élastiques des phases amorphe et cristalline pures de polymères semi-cristallins ne sont en général pas mesurables directement avec les moyens physiques habituels. Il est donc nécessaire de recourir à des méthodes de calcul numérique. Cet article décrit certaines de ces méthodes, fondées sur des modélisations atomistiques, ainsi qu'une évaluation des implémentations actuelles. Il est montré que la méthode proposée par Zehnder et al. (1996) fournit les meilleurs résultats, au prix d'un temps de calcul long, dû à la dynamique moléculaire. Néanmoins, aucune de ces méthodes n'est vraiment utilisable simplement au jour le jour, car elles requièrent des moyens importants de calcul. Elastic properties of the pure crystalline and amorphous phases of a semicrystalline polymer are usually not directly measurable by the usual physical means. It is therefore necessary to resort to numerical computing methods. This paper describes some of these methods, based on atomistic simulations, as well as an assessment of current implementations. It is shown that the method proposed by Zehnder et al. (1996) gives the best results, at the expense of long computing times, due to molecular dynamics simulation. Nevertheless, none of these methods is really usable on a daily basis, since they require substantial computing capabilities.

  1. Computer tool to calculate the daily energy produced by a grid-connected photovoltaic system; Aplicacion informatica para estimar la energia diaria producida por sistemas fotovoltaicos conectados a red

    Energy Technology Data Exchange (ETDEWEB)

    Sidrach-de-Cardona, M.; Carretero, J.; Martin, B.; Mora-Lopez, L.; Ramirez, L.; Varela, M.

    2004-07-01

    We present a computer tool to calculate the daily energy produced by a grid-connected photovoltaic system. The main novelty of this tool is that it uses radiation and ambient temperature maps as input data; these maps allow us to obtain 365 daily values of these parameters at any point of the image. The radiation map has been obtained using images from the Meteosat satellite. For the temperature map, a geographical information system has been used with data from terrestrial meteorological stations. For the calculation of the daily energy, an empirical model obtained from the study of the behaviour of different photovoltaic systems is used. The program allows the user to design the photovoltaic generator, includes a database of photovoltaic products, and allows a complete economic analysis to be carried out. (Author)
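    The empirical model itself is not given in the record, so the sketch below substitutes a generic performance-ratio formulation; pr, gamma and the crude cell-temperature offset are illustrative assumptions, not the fitted model of the tool.

    def daily_energy_kwh(h_kwh_m2, t_amb, p_nominal_kwp,
                         pr=0.78, gamma=0.004, t_ref=25.0):
        """Daily AC energy of a grid-connected PV system from daily
        irradiation (kWh/m2) and mean ambient temperature (degC)."""
        t_cell = t_amb + 25.0            # crude NOCT-style offset
        derate = 1.0 - gamma * (t_cell - t_ref)
        return p_nominal_kwp * h_kwh_m2 * pr * derate

    # 5 kWp system, 6.1 kWh/m2 of daily irradiation, 24 degC mean.
    print(f"{daily_energy_kwh(6.1, 24.0, 5.0):.1f} kWh")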

  2. Application of a General Computer Algorithm Based on the Group-Additivity Method for the Calculation of Two Molecular Descriptors at Both Ends of Dilution: Liquid Viscosity and Activity Coefficient in Water at Infinite Dilution.

    Science.gov (United States)

    Naef, Rudolf; Acree, William E

    2017-12-21

    The application of a commonly used computer algorithm based on the group-additivity method for the calculation of the liquid viscosity coefficient at 293.15 K and the activity coefficient at infinite dilution in water at 298.15 K of organic molecules is presented. The method is based on the complete breakdown of the molecules into their constituting atoms, further subdividing them by their immediate neighborhood. A fast Gauss-Seidel fitting method using experimental data from the literature is applied for the calculation of the atom groups' contributions. Plausibility tests have been carried out on each of the calculations using a ten-fold cross-validation procedure, which confirms the excellent predictive quality of the method. The goodness of fit (Q²) and the standard deviation (σ) of the cross-validation calculations for the viscosity coefficient, expressed as log(η), were 0.9728 and 0.11, respectively, for 413 test molecules, and for the activity coefficient log(γ∞) the corresponding values were 0.9736 and 0.31, respectively, for 621 test compounds. The present approach has proven its versatility in that it enabled the simultaneous evaluation of the liquid viscosity of normal organic compounds as well as of ionic liquids.
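    The fitting step described above, a Gauss-Seidel solution of the least-squares problem "property = sum of atom-group contributions", can be sketched as follows; the group counts and property values are invented toy data.

    import numpy as np

    def fit_group_contributions(group_counts, measured, n_sweeps=200):
        """Fit contributions so that A @ x ~ y (A = molecules x groups),
        using Gauss-Seidel sweeps on the normal equations A^T A x = A^T y."""
        a = np.asarray(group_counts, float)
        y = np.asarray(measured, float)
        ata, aty = a.T @ a, a.T @ y
        x = np.zeros(a.shape[1])
        for _ in range(n_sweeps):
            for i in range(len(x)):
                r = aty[i] - ata[i] @ x + ata[i, i] * x[i]
                x[i] = r / ata[i, i]
        return x

    counts = [[2, 1], [1, 3], [3, 0], [0, 2]]     # toy molecules x groups
    log_eta = [0.35, 0.55, 0.30, 0.40]            # invented log(viscosity)
    print(fit_group_contributions(counts, log_eta))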

  3. Contributing to global computing platform: gliding, tunneling standard services and high energy physics application; Contribution aux infrastructures de calcul global: delegation inter plates-formes, integration de services standards et application a la physique des hautes energies

    Energy Technology Data Exchange (ETDEWEB)

    Lodygensky, O

    2006-09-15

    Centralized computers have been replaced by 'client/server' distributed architectures, which are in turn in competition with new distributed systems known as 'peer to peer'. These new technologies are widespread, and commerce, industry and the research world have understood the new goals involved and are investing massively in these new technologies, known as the 'grid'. One of these fields is computing, which is the subject of the work presented here. At the Paris Orsay University, a synergy emerged between the Computing Science Laboratory (LRI) and the Linear Accelerator Laboratory (LAL) on grid infrastructure, opening new fields of investigation for the former and new high-performance computing perspectives for the latter. The work presented here is the result of this multi-disciplinary collaboration. It is based on XtremWeb, the LRI global computing platform. We first present a state of the art of large-scale distributed systems, their principles and their service-based architecture. We then introduce XtremWeb and detail the modifications and improvements we had to specify and implement to achieve our goals. We present two different studies: first, interconnecting grids in order to generalize resource sharing; and secondly, making legacy services usable on such platforms. We finally explain how a research community, such as the high-energy cosmic radiation detection community, can gain access to these services, and detail Monte Carlo and data analysis processes over the grids. (author)

  4. Computational model for calculating body-core temperature elevation in rabbits due to whole-body exposure at 2.45 GHz

    Science.gov (United States)

    Hirata, Akimasa; Sugiyama, Hironori; Kojima, Masami; Kawai, Hiroki; Yamashiro, Yoko; Fujiwara, Osamu; Watanabe, Soichi; Sasaki, Kazuyuki

    2008-06-01

    In the current international guidelines and standards with regard to human exposure to electromagnetic waves, the basic restriction is defined in terms of the whole-body average specific absorption rate (SAR). The rationale for the guidelines is that the characteristic pattern of thermoregulatory response is observed for whole-body average SARs above a certain level. However, the relationship between energy absorption and temperature elevation was not well quantified. In this study, we improved our thermal computation model for rabbits, which was developed for localized exposure of the eye, in order to investigate the body-core temperature elevation due to whole-body exposure at 2.45 GHz. The effect of anesthesia on the body-core temperature elevation was also discussed in comparison with measured results. For a whole-body average SAR of 3.0 W kg-1, the body-core temperature in rabbits rises with time without becoming saturated. The administration of anesthesia suppressed the body-core temperature elevation, which is attributed to the reduced basal metabolic rate.
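    The anatomical model of the record is far more detailed, but the qualitative behaviour reported, a body-core temperature that keeps rising while absorbed power exceeds dissipation, can be shown with a single-compartment heat balance; every parameter below is an order-of-magnitude assumption.

    def core_temperature(sar_wkg=3.0, mass_kg=3.0, c_jkgk=3500.0,
                         basal_wkg=2.5, loss_wk=0.44, t_env=22.0,
                         t0=39.0, t_end_s=3600, dt=1.0):
        """Lumped heat balance m*c*dT/dt = m*(SAR + basal) - k*(T - Tenv).
        loss_wk is chosen so basal heat is balanced at the start (no SAR)."""
        t = t0
        for _ in range(int(t_end_s / dt)):
            net_w = mass_kg * (sar_wkg + basal_wkg) - loss_wk * (t - t_env)
            t += net_w / (mass_kg * c_jkgk) * dt
        return t

    print(f"core temperature after 1 h: {core_temperature():.2f} degC")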

  5. Computer program for the calculation of stresses in rotary equipment discs; Programas de computo para el calculo de esfuerzos en discos de equipo rotatorio

    Energy Technology Data Exchange (ETDEWEB)

    Gutierrez Delgado, Wilson; Kubiak, Janusz; Serrano Romero, Luis Enrique [Instituto de Investigaciones Electricas, Cuernavaca (Mexico)

    1990-12-31

    In the preliminary design and diagnosis of rotary machines it is very common to use simple calculation methods for mechanical and thermal stresses, dynamic and thermodynamic analysis, and fluid flow in these machines (Gutierrez et al., 1989). Analysis with these methods provides the results needed for the initial project stage of the machine. Later on, more complex tools are employed to refine the design of some machine components. In the report by Gutierrez et al. (1989), 34 programs were developed for the preliminary design and diagnosis of rotating equipment; in this article, one of them is presented, applying a method for the analysis of mechanical and thermal stresses in discs of uniform or variable thickness of the kind normally found in turbomachines and rotary equipment.
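
    The report's programs are not listed in the record; as an indication of the kind of calculation involved, here is the classic closed-form plane-stress result for a solid rotating disc of uniform thickness (a textbook formula, not necessarily the method the authors implemented):

        import numpy as np

        def rotating_disc_stresses(r, b, rho, omega, nu):
            """Radial and hoop stresses (Pa) in a solid uniform-thickness disc
            of outer radius b (m), density rho (kg/m^3), angular speed omega
            (rad/s). Plane-stress textbook result, stress-free rim."""
            sigma_r = (3 + nu) / 8 * rho * omega**2 * (b**2 - r**2)
            sigma_t = rho * omega**2 / 8 * ((3 + nu) * b**2 - (1 + 3 * nu) * r**2)
            return sigma_r, sigma_t

        # Illustrative steel disc, 1 m diameter, spinning at 3600 rpm.
        r = np.linspace(0.0, 0.5, 6)
        sr, st = rotating_disc_stresses(r, b=0.5, rho=7850.0,
                                        omega=3600 * 2 * np.pi / 60, nu=0.3)
        for ri, s1, s2 in zip(r, sr, st):
            print(f"r={ri:.2f} m  sigma_r={s1/1e6:7.1f} MPa  "
                  f"sigma_theta={s2/1e6:7.1f} MPa")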

  6. What's My Substrate? Computational Function Assignment of Candida parapsilosis ADH5 by Genome Database Search, Virtual Screening, and QM/MM Calculations.

    Science.gov (United States)

    Dhoke, Gaurao V; Ensari, Yunus; Davari, Mehdi D; Ruff, Anna Joëlle; Schwaneberg, Ulrich; Bocola, Marco

    2016-07-25

    Zinc-dependent medium-chain reductase from Candida parapsilosis can be used in the reduction of carbonyl compounds to pharmacologically important chiral secondary alcohols. To date, the nomenclature of cpADH5 differs in the literature (CPCR2/RCR/SADH), and its natural substrate is not known. In this study, we utilized a substrate-docking-based virtual screening method combined with searches of the KEGG and MetaCyc pathway databases and the Candida genome database to discover natural substrates of cpADH5. Virtual screening of 7834 carbonyl compounds from the ZINC database provided 94 aldehydes or methyl/ethyl ketones as putative carbonyl substrates, of which 52 with catalytically active docking poses were identified by employing a mechanism-based substrate docking protocol. Comparison of the virtual screening results with the KEGG and MetaCyc searches and Candida genome pathway analysis suggests that cpADH5 might be involved in the Ehrlich pathway (reduction of fusel aldehydes in leucine, isoleucine, and valine degradation). Our QM/MM calculations and experimental activity measurements affirmed that butyraldehydes are the likely natural substrates of cpADH5, suggesting a carbonyl reductase role for this enzyme in butyraldehyde reduction in aliphatic amino acid degradation pathways. Phylogenetic tree analysis of known ADHs from Candida albicans shows that cpADH5 is close to caADH5. We therefore propose, according to the experimental substrate identification and sequence similarity, the common name butyraldehyde dehydrogenase cpADH5 for Candida parapsilosis CPCR2/RCR/SADH.

  7. Exploring excited-state tunability in luminescent tris-cyclometalated platinum(IV) complexes: synthesis of heteroleptic derivatives and computational calculations.

    Science.gov (United States)

    Juliá, Fabio; Aullón, Gabriel; Bautista, Delia; González-Herrero, Pablo

    2014-12-22

    The synthesis, structure, electrochemistry, and photophysical properties of a series of heteroleptic tris-cyclometalated Pt(IV) complexes are reported. The complexes mer-[Pt(C^N)2(C'^N')]OTf, with C^N = C-deprotonated 2-(2,4-difluorophenyl)pyridine (dfppy) or 2-phenylpyridine (ppy), and C'^N' = C-deprotonated 2-(2-thienyl)pyridine (thpy) or 1-phenylisoquinoline (piq), were obtained by reacting bis-cyclometalated precursors [Pt(C^N)2Cl2] with AgOTf (2 equiv) and an excess of the N'^C'H pro-ligand. The complex mer-[Pt(dfppy)2(ppy)]OTf was obtained analogously and photoisomerized to its fac counterpart. The new complexes display long-lived luminescence at room temperature in the blue to orange color range. The emitting states involve electronic transitions almost exclusively localized on the ligand with the lowest π-π* energy gap and have very little metal character. DFT and time-dependent DFT (TD-DFT) calculations on mer-[Pt(ppy)2(C'^N')](+) (C'^N' = thpy, piq) and mer/fac-[Pt(ppy)3](+) support this assignment and provide a basis for understanding the luminescence of tris-cyclometalated Pt(IV) complexes. Excited states of LMCT character may become thermally accessible from the emitting state in the mer isomers containing dfppy or ppy as chromophoric ligands, leading to strong nonradiative deactivation. This effect does not operate in the fac isomers or the mer complexes containing thpy or piq, for which nonradiative deactivation originates mainly from vibrational coupling to the ground state. © 2014 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.

  8. Polysulfide chemistry in sodium-sulfur batteries and related systems--a computational study by G3X(MP2) and PCM calculations.

    Science.gov (United States)

    Steudel, Ralf; Steudel, Yana

    2013-02-25

    The sodium-sulfur (NAS) battery is a candidate for energy storage and load leveling in power systems, by using the reversible reduction of elemental sulfur by sodium metal to give a liquid mixture of polysulfides (Na(2)S(n)) at approximately 320°C. We investigated a large number of reactions possibly occurring in such sodium polysulfide melts by using density functional calculations at the G3X(MP2)/B3LYP/6-31+G(2df,p) level of theory including polarizable continuum model (PCM) corrections for two polarizable phases, to obtain geometric and, for the first time, thermodynamic data for the liquid sodium-sulfur system. Novel reaction sequences for the electrochemical reduction of elemental sulfur are proposed on the basis of their Gibbs reaction energies. We suggest that the primary reduction product of S(8) is the radical anion S(8)(˙-), which decomposes at the operating temperature of NAS batteries exergonically to the radicals S(2)(˙-) and S(3)(˙-) together with the neutral species S(6) and S(5), respectively. In addition, S(8)(˙-) is predicted to disproportionate exergonically to S(8) and S(8)(2-) followed by the dissociation of the latter into two S(4)(˙-) radical ions. By recombination reactions of these radicals various polysulfide dianions can in principle be formed. However, polysulfide dianions larger than S(4)(2-) are thermally unstable at 320°C and smaller dianions as well as radical monoanions dominate in Na(2)S(n) (n=2-5) melts instead. The reverse reactions are predicted to take place when the NAS battery is charged. We show that ion pairs of the types NaS(2)˙, NaS(n)(-), and Na(2)S(n) can be expected at least for n=2 and 3 in NAS batteries, but are unlikely in aqueous sodium polysulfide except at high concentrations. The structures of such radicals and anions with up to nine sulfur atoms are reported, because they are predicted to play a key role in the electrochemical reduction process. A large number of isomerization, disproportionation, and

  9. Dead reckoning calculating without instruments

    CERN Document Server

    Doerfler, Ronald W

    1993-01-01

    No author has gone as far as Doerfler in covering methods of mental calculation beyond simple arithmetic. Even if you have no interest in competing with computers you'll learn a great deal about number theory and the art of efficient computer programming. -Martin Gardner

  10. Algorithm Calculates Cumulative Poisson Distribution

    Science.gov (United States)

    Bowerman, Paul N.; Nolty, Robert C.; Scheuer, Ernest M.

    1992-01-01

    Algorithm calculates accurate values of cumulative Poisson distribution under conditions where other algorithms fail because numbers are so small (underflow) or so large (overflow) that computer cannot process them. Factors inserted temporarily to prevent underflow and overflow. Implemented in CUMPOIS computer program described in "Cumulative Poisson Distribution Program" (NPO-17714).
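
    The CUMPOIS listing is not part of the record. A generic guard against the same underflow/overflow problem is to accumulate the Poisson terms in log space, sketched below; this is a different but equivalent device to the temporary scaling factors the abstract mentions.

        import math

        def poisson_cdf(k, lam):
            """P(X <= k) for X ~ Poisson(lam), accumulated in log space so
            that neither exp(-lam) nor lam**i / i! ever under- or overflows
            on its own."""
            log_term = -lam                # log of the i = 0 term, exp(-lam)
            log_sum = log_term
            for i in range(1, k + 1):
                log_term += math.log(lam / i)    # term_i = term_{i-1} * lam/i
                # log-sum-exp update: log(exp(a) + exp(b))
                hi, lo = max(log_sum, log_term), min(log_sum, log_term)
                log_sum = hi + math.log1p(math.exp(lo - hi))
            return math.exp(log_sum)

        # Works where the naive sum of exp(-1000) * 1000**i / i! underflows.
        print(poisson_cdf(900, 1000.0))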

  11. Heat transfer, insulation calculations simplified

    Energy Technology Data Exchange (ETDEWEB)

    Ganapathy, V.

    1985-08-19

    Determination of heat transfer coefficients for air, water, and steam flowing in tubes and calculation of heat loss through multilayered insulated surfaces have been simplified by two computer programs. The programs, written in BASIC, have been developed for the IBM and equivalent personal computers.
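
    The BASIC listings themselves are not reproduced in the record. A generic sketch of the second program's task, heat loss through multilayered insulation on a pipe treated as cylindrical conduction resistances in series (illustrative values, film coefficients ignored):

        import math

        def pipe_heat_loss(t_inner, t_outer, length, r_inner, layers):
            """Heat loss (W) through concentric insulation layers on a pipe.
            layers: list of (thickness_m, conductivity_W_per_mK), inside out.
            Series of cylindrical conduction resistances only."""
            resistance, r = 0.0, r_inner
            for thickness, k in layers:
                r_next = r + thickness
                resistance += math.log(r_next / r) / (2 * math.pi * k * length)
                r = r_next
            return (t_inner - t_outer) / resistance

        # 10 m of 100 mm-diameter pipe at 250 C with 50 mm mineral wool
        # plus 30 mm calcium silicate (all values illustrative).
        q = pipe_heat_loss(250.0, 25.0, 10.0, 0.05,
                           [(0.05, 0.045), (0.03, 0.060)])
        print(f"heat loss: {q:.0f} W")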

  12. Online plasma calculator

    Science.gov (United States)

    Wisniewski, H.; Gourdain, P.-A.

    2017-10-01

    APOLLO is an online, Linux-based plasma calculator. Users can input variables that correspond to their specific plasma, such as ion and electron densities, temperatures, and external magnetic fields. The system is based on a webserver where a FastCGI protocol computes key plasma parameters including frequencies, lengths, velocities, and dimensionless numbers. FastCGI was chosen to overcome security problems caused by Java-based plugins; it also speeds up calculations relative to PHP-based systems. APOLLO is built upon the Wt library, which turns any web browser into a versatile, fast graphical user interface. All values with units are expressed in SI units except temperature, which is in electron-volts; SI units were chosen over cgs units because of the gradual shift toward SI units within the plasma community. APOLLO is intended to be a fast calculator that also provides the user with the proper equations used to calculate the plasma parameters. The system is intended for undergraduates taking plasma courses as well as graduate students and researchers who need a quick reference calculation.
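
    Two of the parameters such a calculator returns can be sketched directly from standard SI formulary expressions (the formulas below are textbook results; APOLLO's full parameter set and Wt-based interface are of course much larger):

        import math

        # SI constants
        e, eps0, me = 1.602176634e-19, 8.8541878128e-12, 9.1093837015e-31

        def plasma_parameters(n_e, T_e_eV):
            """Electron plasma frequency (rad/s) and Debye length (m) for
            electron density n_e (m^-3) and temperature T_e (eV)."""
            omega_pe = math.sqrt(n_e * e**2 / (eps0 * me))
            # With T in eV, k_B * T = e * T_eV joules, so lambda_D becomes:
            lambda_D = math.sqrt(eps0 * T_e_eV / (n_e * e))
            return omega_pe, lambda_D

        w, lD = plasma_parameters(n_e=1e19, T_e_eV=10.0)
        print(f"omega_pe = {w:.3e} rad/s, lambda_D = {lD:.3e} m")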

  13. Parallel computations

    CERN Document Server

    1982-01-01

    Parallel Computations focuses on parallel computation, with emphasis on algorithms used in a variety of numerical and physical applications and for many different types of parallel computers. Topics covered range from vectorization of fast Fourier transforms (FFTs) and of the incomplete Cholesky conjugate gradient (ICCG) algorithm on the Cray-1 to calculation of table lookups and piecewise functions. Single tridiagonal linear systems and vectorized computation of reactive flow are also discussed.Comprised of 13 chapters, this volume begins by classifying parallel computers and describing techn

  14. Computational dosimetry

    Energy Technology Data Exchange (ETDEWEB)

    Siebert, B.R.L.; Thomas, R.H.

    1996-01-01

    The paper presents a definition of the term "Computational Dosimetry", which is interpreted as the sub-discipline of computational physics devoted to radiation metrology. It is shown that computational dosimetry is more than a mere collection of computational methods. Computational simulations directed at basic understanding and modelling are important tools provided by computational dosimetry, while another very important application is the support that it can give to the design, optimization and analysis of experiments. However, the primary task of computational dosimetry is to reduce the variance in the determination of absorbed dose (and its related quantities), for example in the disciplines of radiological protection and radiation therapy. In this paper emphasis is given to the discussion of potential pitfalls in the applications of computational dosimetry and recommendations are given for their avoidance. The need for comparison of calculated and experimental data whenever possible is strongly stressed.

  15. Advanced Computational Methods for Monte Carlo Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2018-01-12

    This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.

  16. Computational system for activity calculation of radiopharmaceuticals

    African Journals Online (AJOL)

    STORAGESEVER

    2008-12-29

    Dec 29, 2008 ... The preparation of radiopharmaceuticals for distribution to several hospitals is practised widely, and the transport is usually by road and plane; this is especially common in big countries like Brazil, where the distance from one state to another is greater than that between countries in continents like…

  17. Chronobiometry with pocket calculators and computer systems.

    Science.gov (United States)

    Cornélissen, G; Halberg, F; Stebbings, J; Halberg, E; Carandente, F; Hsi, B

    1980-01-01

    Selected methods for the study of biologic time series are reviewed and their relative merits are discussed in the light of underlying assumptions. Their potential applications are exemplified in several fields of biology and medicine. The monitoring of environmental integrity, notably of pollution, is investigated, and the need for specifying optimal sampling requirements is underlined. An individualized and time-qualified definition of health, established through reference intervals, is required for increasingly rational, individualized programs for the prevention and/or treatment of disease. With these reference intervals and rhythm characteristics available, one can better interpret, from single samples or time series, an increased risk of a certain disease or the inception of the disease. For all of these aims the monitoring of environmental and/or personal marker rhythms is essential: to obtain large data bases from which information can be derived for monitoring personal health, to recognize risk, to diagnose disease early, and to optimize treatment by timing it according to rhythms.

  18. Titration Calculations with Computer Algebra Software

    Science.gov (United States)

    Lachance, Russ; Biaglow, Andrew

    2012-01-01

    This article examines the symbolic algebraic solution of the titration equations for a diprotic acid, as obtained using "Mathematica," "Maple," and "Mathcad." The equilibrium and conservation equations are solved symbolically by the programs to eliminate the approximations that normally would be performed by the student. Of the three programs,…

  19. Computational system for activity calculation of radiopharmaceuticals

    African Journals Online (AJOL)

    STORAGESEVER

    2008-12-29

    Dec 29, 2008 ... 3. "Tempo de síntese", "Controle de Qualidade" and "Embalagem e Expedição" ("synthesis time", "quality control" and "packaging and shipment"): these fields hold the estimated times spent in the respective processes of synthesis, quality control, and packaging and shipment. Santos-Oliveira and Benevides 4983. After all these fields have been filled in, the "Calcular" button…

  20. Simple, intuitive calculations of free energy of binding for protein-ligand complexes. 2. Computational titration and pH effects in molecular models of neuraminidase-inhibitor complexes.

    Science.gov (United States)

    Fornabaio, Micaela; Cozzini, Pietro; Mozzarelli, Andrea; Abraham, Donald J; Kellogg, Glen E

    2003-10-09

    One factor that can strongly influence predicted free energy of binding is the ionization state of functional groups on the ligands and at the binding site at which calculations are performed. This analysis is seldom performed except in very detailed computational simulations. In this work, we address the issues of (i) modeling the complexity resulting from the different ionization states of ligand and protein residues involved in binding, (ii) if, and how, computational methods can evaluate the pH dependence of ligand inhibition constants, and (iii) how to score the protonation-dependent models. We developed a new and fairly rapid protocol called "computational titration" that enables parallel modeling of multiple ionization ensembles for each distinct protonation level. Models for possible protonation combinations for site/ligand ionizable groups are built, and the free energy of interaction for each of them is quantified by the HINT (Hydropathic INTeractions) software. We applied this procedure to the evaluation of the binding affinity of nine inhibitors (six derived from 2,3-didehydro-2-deoxy-N-acetylneuraminic acid, DANA) of influenza virus neuraminidase (NA), a surface glycoprotein essential for virus replication and thus a pharmaceutically relevant target for the design of anti-influenza drugs. The three-dimensional structures of the NA enzyme-inhibitor complexes indicate considerable complexity as the ligand-protein recognition site contains several ionizable moieties. Each computational titration experiment reveals a peak HINT score as a function of added protons. This maximum HINT score indicates the optimum pH (or the optimum protonation state of each inhibitor-protein binding site) for binding. The pH at which inhibition is measured and/or crystals were grown and analyzed can vary from this optimum. A protonation model is proposed for each ligand that reconciles the experimental complex structure with measured inhibition and the free energy of binding
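
    HINT is proprietary software, but the bookkeeping of a computational titration is simple to sketch: enumerate every way of distributing n protons over the ionizable groups, score each model, and track the best score as a function of n. The group names and the scoring function below are placeholders, not data from the study.

        from itertools import combinations

        # Hypothetical ionizable groups in a neuraminidase-inhibitor model;
        # the real protocol reads these from the 3-D complex structure.
        ionizable_groups = ["Glu119", "Asp151", "Arg292", "ligand-N4"]

        def hint_score(model):
            """Placeholder for the HINT interaction score of one protonation
            model; a deterministic dummy so the sketch runs end to end."""
            return sum(len(name) * (i + 1)
                       for i, name in enumerate(sorted(model)))

        # Computational titration: for each number of added protons, score
        # every distribution over the ionizable groups and keep the best.
        for n_protons in range(len(ionizable_groups) + 1):
            models = [frozenset(c)
                      for c in combinations(ionizable_groups, n_protons)]
            best = max(models, key=hint_score)
            print(n_protons, hint_score(best), sorted(best))
        # The peak of the best score as a function of n_protons marks the
        # optimum protonation state (and hence the optimum pH) for binding.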

  1. Microcomputer calculations in physics

    Science.gov (United States)

    Killingbeck, J. P.

    1985-01-01

    The use of microcomputers to carry out computations in an interactive manner allows the judgement of the operator to be allied with the calculating power of the machine in a synthesis which speeds up the creation and testing of mathematical techniques for physical problems. This advantage is accompanied by a disadvantage, in that microcomputers are limited in capacity and power, and special analysis is needed to compensate for this. These two features together mean that there is a fairly recognisable body of methods which are particularly appropriate for interactive microcomputing. This article surveys a wide range of mathematical methods used in physics, indicating how they can be applied using microcomputers and giving several original calculations which illustrate the value of the microcomputer in stimulating the exploration of new methods. Particular emphasis is given to methods which use iteration, recurrence relation or extrapolation procedures which are well adapted to the capabilities of modern microcomputers.

  2. Calculation of the Poisson cumulative distribution function

    Science.gov (United States)

    Bowerman, Paul N.; Nolty, Robert G.; Scheuer, Ernest M.

    1990-01-01

    A method for calculating the Poisson cdf (cumulative distribution function) is presented. The method avoids computer underflow and overflow during the process. The computer program uses this technique to calculate the Poisson cdf for arbitrary inputs. An algorithm that determines the Poisson parameter required to yield a specified value of the cdf is presented.

  3. A Romberg Integral Spreadsheet Calculator

    Directory of Open Access Journals (Sweden)

    Kim Gaik Tay

    2015-04-01

    Full Text Available Motivated by earlier work on a Richardson extrapolation spreadsheet calculator (up to level 4) for approximating derivatives, we have developed a Romberg integral spreadsheet calculator to approximate definite integrals. The main feature of this version of the spreadsheet calculator is a friendly graphical user interface developed to capture the information needed to solve the integral by the Romberg method. Users simply enter the variable in the integral, the function to be integrated, the lower and upper limits of the integral, select the desired accuracy of computation, select the exact function if it exists, and lastly click the Compute button, which is associated with VBA programming written to compute the Romberg integral table. The full solution of the Romberg integral table up to any level can be obtained quickly and easily using this method. The attached spreadsheet calculator, together with this paper, helps educators prepare their marking schemes easily and assists students in checking their answers instead of reconstructing them from scratch. A summative evaluation of this Romberg spreadsheet calculator has been conducted with a sample of 36 students. The data were collected using a questionnaire. The findings showed that the majority of the students agreed that the Romberg spreadsheet calculator provides a structured learning environment that allows learners to be guided through a step-by-step solution.
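
    The VBA behind the Compute button is not included in the record; the standard Romberg table it tabulates, trapezoid estimates refined by Richardson extrapolation, looks like this:

        import math

        def romberg(f, a, b, levels=5):
            """Romberg table R[i][j]: composite trapezoid estimates R[i][0]
            refined column by column with Richardson extrapolation. This is
            the textbook algorithm, not the authors' VBA."""
            R = [[0.0] * (levels + 1) for _ in range(levels + 1)]
            h = b - a
            R[0][0] = h * (f(a) + f(b)) / 2
            for i in range(1, levels + 1):
                h /= 2
                # reuse the previous trapezoid sum, add the new midpoints
                R[i][0] = R[i - 1][0] / 2 + h * sum(
                    f(a + (2 * k - 1) * h) for k in range(1, 2**(i - 1) + 1))
                for j in range(1, i + 1):
                    R[i][j] = R[i][j - 1] + (
                        R[i][j - 1] - R[i - 1][j - 1]) / (4**j - 1)
            return R[levels][levels]

        print(romberg(math.sin, 0.0, math.pi))   # -> 2.0 to ~1e-10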

  4. Recursive calculation of Hansen coefficients

    Science.gov (United States)

    Branham, Richard L., Jr.

    1990-06-01

    Hansen coefficients are used in expansions of the elliptic motion. Three methods for calculating the coefficients are studied: Tisserand's method, the Von Zeipel-Andoyer (VZA) method with explicit representation of the polynomials required to compute the Hansen coefficients, and the VZA method with the values of the polynomials calculated recursively. The VZA method with explicit polynomials is by far the most rapid, but the tabulation of the polynomials only extends to 12th order in powers of the eccentricity, and unless one has access to the polynomials in machine-readable form their entry is laborious and error-prone. The recursive calculation of the VZA polynomials needed to compute the Hansen coefficients, while slower, is faster than the calculation of the Hansen coefficients by Tisserand's method up to 10th order in the eccentricity, and is still relatively efficient for higher orders. The main advantages of the recursive calculation are the simplicity of the program and the ease with which the expansions can be extended to any order of eccentricity. Because FORTRAN does not implement recursive procedures, this paper used C for all of the calculations. The most important conclusion is recursion's genuine usefulness in scientific computing.

  5. The digital computer

    CERN Document Server

    Parton, K C

    2014-01-01

    The Digital Computer focuses on the principles, methodologies, and applications of the digital computer. The publication takes a look at the basic concepts involved in using a digital computer, simple autocode examples, and examples of working advanced design programs. Discussions focus on transformer design synthesis program, machine design analysis program, solution of standard quadratic equations, harmonic analysis, elementary wage calculation, and scientific calculations. The manuscript then examines commercial and automatic programming, how computers work, and the components of a computer

  6. Calculation of collisional mixing

    Science.gov (United States)

    Koponen, I.; Hautala, M.

    1990-06-01

    Collisional mixing of markers is calculated by splitting the calculation into two parts. Relocation cross sections have been calculated using a realistic potential in a Monte Carlo simulation. The cross sections are used in the computation of marker relocation. The cumulative effect of successive relocations is assumed to be an uncorrelated transport process and is treated as a weighted random walk. Matrix relocation was not included in the calculations. The results from this two-step simulation model are compared with analytical models. A fit to the simulated differential relocation cross sections has been found which makes the numerical integration of the Bothe formula feasible. The influence of primaries has been treated in this way. When all the recoils are included the relocation profiles are nearly Gaussian, and the Pearson IV distributions yield acceptable profiles in the studied cases. The approximations and cut-off procedures which cause the major uncertainties in the calculations are pointed out. The choice of the cut-off energy is shown to be the source of the largest uncertainty, whereas the mathematical approximations can be used with good accuracy. The methods are used to study the broadening of a Pt marker in Si mixed by 300 keV Xe ions, of a Ti marker in Al mixed by 300 keV Xe ions, and of a Ti marker in Hf mixed by 750 keV Kr ions. The fluence in each case is 2 × 10^16 ions/cm^2. The calculated averages of half widths at half maximum vary between 11-18, 9-12 and 10-15 nm, respectively, depending on the cut-off energy, and the mixing efficiencies vary between 11-29, 6-11 and 6-14 Å^5/eV, respectively. The broadenings of Pt in Si and Ti in Al are about two times smaller than the measured values, and the broadening of Ti in Hf is in agreement with the measured values.

  7. 47 CFR 1.1623 - Probability calculation.

    Science.gov (United States)

    2010-10-01

    47 CFR § 1.1623 (Mass Media Services, General Procedures), Probability calculation: (a) All calculations shall be computed to no less than three significant digits. Probabilities will be truncated to the number of…

  8. Apresentação de um programa de computador para calcular a discrepância de tamanho dentário de Bolton Presentation of a computer program to calculate the Bolton’s tooth size discrepancy

    Directory of Open Access Journals (Sweden)

    Adriano Francisco de Lucca Facholli

    2006-04-01

    Full Text Available The diagnosis of the Bolton tooth-size discrepancy is of fundamental importance for a good orthodontic finish. By measuring the teeth with a digital caliper and entering the values in the computer program developed by the authors and presented in this article, the orthodontist's work becomes simpler: no mathematical calculation or table of values is needed, eliminating the likelihood of mistakes. In addition, the program reports the location of the discrepancy by segment (overall, anterior and posterior) and individually, per dental element, allowing greater precision in planning strategies to resolve the problems and leading toward a successful orthodontic treatment.

  9. Pressure Vessel Calculations for VVER-440 Reactors

    Science.gov (United States)

    Hordósy, G.; Hegyi, Gy.; Keresztúri, A.; Maráczy, Cs.; Temesvári, E.; Vértes, P.; Zsolnay, É.

    2003-06-01

    Monte Carlo calculations were performed for a selected cycle of the Paks NPP Unit II to test a computational model. In the model the source term was calculated by the core design code KARATE and the neutron transport calculations were performed by the MCNP. Different forms of the source specification were examined. The calculated results were compared with measurements and in most cases fairly good agreement was found.

  10. Réduire les coûts de la simulation informatique grâce aux plans d'expériences : un exemple en calcul de procédé Reducing Computer Simulation Costs with Factorial Designs: an Example of Process Calculation

    Directory of Open Access Journals (Sweden)

    Murray M.

    2006-11-01

    Full Text Available This article shows that the Factorial Design method, commonly used in laboratories and production units, is equally applicable to scientific computing and, in particular, to computer simulation. Its use reduces the number of computer runs by a large proportion, by a factor as great as four, while still achieving a comprehensive understanding of how a plant or process runs. Simple empirical models can then be constructed that steer the investigation toward the right solution and provide a good image of the phenomenon under study. The example given here is that of a plant processing raw natural gas whose outputs, a sales gas and an NGL, must simultaneously meet five specifications. The operator in charge of the simulations begins by defining the experimental range of investigation (Table 1). Calculations (Table 1, Fig. 2) are set in a pattern defined by factorial design (Table 2); these correspond to the apices of the experimental cube (Fig. 2). The results of the simulations, reported in Table 3, are then analyzed using factorial design theory in conjunction with each specification. A graphical approach is used to define the regions in which each specification is met: Fig. 3 shows the zone authorized for the first specification, the Wobbe index, and Fig. 4 gives the results for the outlet pressure of the turbo-expander. Figs. 5, 6 and 7 show the zones allowed for the CO2/C2 ratio, the TVP and the C2/C3 ratio. A satisfactory zone is found, for this last ratio, outside the investigated range. The results acquired so far enable us…
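
    A minimal sketch of the design-generation step described above, with hypothetical factor names; each of the 2^3 = 8 coded runs corresponds to one apex of the experimental cube:

        from itertools import product

        # Hypothetical factors for a gas-plant simulation study; each factor
        # is explored at a low (-1) and high (+1) coded level.
        factors = ["feed_rate", "expander_pressure", "column_temperature"]

        # Full 2^k factorial design: one simulation run per apex of the cube.
        design = list(product([-1, +1], repeat=len(factors)))
        for run, levels in enumerate(design, start=1):
            settings = ", ".join(f"{f}={l:+d}" for f, l in zip(factors, levels))
            print(f"run {run}: {settings}")
        # 2^3 = 8 computer runs instead of a blind scan of the operating
        # range; main effects are then estimated from differences of run
        # averages.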

  11. Computational Chemistry for Kids

    National Research Council Canada - National Science Library

    Naef, Olivier

    2000-01-01

    This article aims to show that computational chemistry is not exclusively restricted to molecular energy and structure calculations but also includes chemical process control and reaction simulation...

  12. Magnetic Field Calculator

    Data.gov (United States)

    National Oceanic and Atmospheric Administration, Department of Commerce — The Magnetic Field Calculator will calculate the total magnetic field, including components (declination, inclination, horizontal intensity, northerly intensity,...

  13. Alcohol Calorie Calculator

    Science.gov (United States)

    An interactive NIAAA web calculator for tallying the weekly calories consumed from alcoholic drinks.

  14. Méthodes de calcul pour la conception des systèmes de protection cathodique des structures longilignes Computing Methods for Designing Cathodic Protection Systemes for Elongate Stuctures

    Directory of Open Access Journals (Sweden)

    Roche M.

    2006-11-01

    Full Text Available The various elongate structures used by the hydrocarbon industry are, in most cases, protected by a cathodic protection system consisting of sacrificial anodes or an impressed current. The design of such systems must be based on an analysis of the variations in potential and current along the structure caused by the ohmic drop. The conventional computing method readily solves the case of elongate structures of constant diameter running through ground whose resistivity is considered constant over the entire length. When the make-up of the structure varies, as is the case for borehole casings, or when it runs through several types of formation, the problem becomes more complicated. We propose a general method for quickly dealing with any problem of this type, with no limit to the number of segments involved. This method makes use of the notions of reflection factor and equivalent resistance already described in the literature, whose use does not seem to have become widespread.
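
    For the single-section case that the classical method already handles, the attenuation along a uniformly coated line can be sketched with the standard transmission-line result; all parameter values below are illustrative, and the article's actual contribution, chaining dissimilar sections via reflection factors and equivalent resistances, is not reproduced here.

        import math

        def attenuation_profile(E0, r, g, L, x):
            """Potential shift E(x) (V) along a coated pipeline of length L
            (m) with a drain point at x = 0 and an insulated far end: the
            classic uniform transmission-line result.
            r: longitudinal pipe resistance, ohm/m; g: coating conductance
            to remote earth, S/m."""
            alpha = math.sqrt(r * g)          # attenuation constant, 1/m
            return E0 * math.cosh(alpha * (L - x)) / math.cosh(alpha * L)

        # Illustrative values: 20 km line, steel pipe, reasonably good
        # coating, -0.3 V polarization shift applied at the drain point.
        L, r, g = 20_000.0, 6e-6, 1e-4
        for x in range(0, 21_000, 5_000):
            E = attenuation_profile(-0.3, r, g, L, x)
            print(f"x = {x/1000:4.0f} km  E = {E:.3f} V")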

  15. Influence of the Different Primary Cancers and Different Types of Bone Metastasis on the Lesion-based Artificial Neural Network Value Calculated by a Computer-aided Diagnostic System,BONENAVI, on Bone Scintigraphy Images

    Directory of Open Access Journals (Sweden)

    TAKURO ISODA

    2017-01-01

    Full Text Available Objective(s): BONENAVI, a computer-aided diagnostic system, is used in bone scintigraphy. This system provides the artificial neural network (ANN) and bone scan index (BSI) values. ANN is associated with the possibility of bone metastasis, while BSI is related to the amount of bone metastasis. The degree of uptake on bone scintigraphy can be affected by the type of bone metastasis; therefore, the ANN value provided by BONENAVI may be influenced by the characteristics of bone metastasis. In this study, we aimed to assess the relationship between ANN value and the characteristics of bone metastasis. Methods: We analyzed 50 patients (36 males, 14 females; age range: 42-87 yrs, median age: 72.5 yrs) with prostate, breast, or lung cancer who had undergone bone scintigraphy and were diagnosed with bone metastasis (32 cases of prostate cancer, nine of breast cancer, and nine of lung cancer). Those who had received systemic therapy over the past years were excluded. Bone metastases were diagnosed clinically, and the type of bone metastasis (osteoblastic, mildly osteoblastic, osteolytic, and mixed components) was decided visually by the agreement of two radiologists. We compared the ANN values (case-based and lesion-based) among the three primary cancers and the four types of bone metastasis. Results: There was no significant difference in case-based ANN values among prostate, breast, and lung cancers. However, the lesion-based ANN values were highest in cases of prostate cancer and lowest in cases of lung cancer (median values: prostate cancer, 0.980; breast cancer, 0.909; lung cancer, 0.864). Mildly osteoblastic lesions showed significantly lower ANN values than the other three types of bone metastasis (median values: osteoblastic, 0.939; mildly osteoblastic, 0.788; mixed type, 0.991; osteolytic, 0.969). The possibility of a lesion-based ANN value below 0.5 was 10.9% for bone metastasis in prostate cancer, 12.9% for breast cancer, and 37.2% for lung cancer.

  16. Influence of the Different Primary Cancers and Different Types of Bone Metastasis on the Lesion-based Artificial Neural Network Value Calculated by a Computer-aided Diagnostic System, BONENAVI, on Bone Scintigraphy Images.

    Science.gov (United States)

    Isoda, Takuro; BaBa, Shingo; Maruoka, Yasuhiro; Kitamura, Yoshiyuki; Tahara, Keiichiro; Sasaki, Masayuki; Hatakenaka, Masamitsu; Honda, Hiroshi

    2017-01-01

    BONENAVI, a computer-aided diagnostic system, is used in bone scintigraphy. This system provides the artificial neural network (ANN) and bone scan index (BSI) values. ANN is associated with the possibility of bone metastasis, while BSI is related to the amount of bone metastasis. The degree of uptake on bone scintigraphy can be affected by the type of bone metastasis; therefore, the ANN value provided by BONENAVI may be influenced by the characteristics of bone metastasis. In this study, we aimed to assess the relationship between ANN value and the characteristics of bone metastasis. We analyzed 50 patients (36 males, 14 females; age range: 42-87 yrs, median age: 72.5 yrs) with prostate, breast, or lung cancer who had undergone bone scintigraphy and were diagnosed with bone metastasis (32 cases of prostate cancer, nine of breast cancer, and nine of lung cancer). Those who had received systemic therapy over the past years were excluded. Bone metastases were diagnosed clinically, and the type of bone metastasis (osteoblastic, mildly osteoblastic, osteolytic, and mixed components) was decided visually by the agreement of two radiologists. We compared the ANN values (case-based and lesion-based) among the three primary cancers and the four types of bone metastasis. There was no significant difference in case-based ANN values among prostate, breast, and lung cancers. However, the lesion-based ANN values were highest in cases of prostate cancer and lowest in cases of lung cancer (median values: prostate cancer, 0.980; breast cancer, 0.909; lung cancer, 0.864). Mildly osteoblastic lesions showed significantly lower ANN values than the other three types of bone metastasis (median values: osteoblastic, 0.939; mildly osteoblastic, 0.788; mixed type, 0.991; osteolytic, 0.969). The possibility of a lesion-based ANN value below 0.5 was 10.9% for bone metastasis in prostate cancer, 12.9% for breast cancer, and 37.2% for lung cancer. The corresponding…

  17. Selvester QRS score and total perfusion deficit calculated by quantitative gated single-photon emission computed tomography in patients with prior anterior myocardial infarction in the coronary intervention era.

    Science.gov (United States)

    Kurisu, Satoshi; Shimonaga, Takashi; Ikenaga, Hiroki; Watanabe, Noriaki; Higaki, Tadanao; Ishibashi, Ken; Dohi, Yoshihiro; Fukuda, Yukihiro; Kihara, Yasuki

    2017-04-01

    The Selvester QRS scoring system has the advantage of being inexpensive and easily accessible for estimating myocardial infarct (MI) size. We assessed the correlation and agreement between QRS score and total perfusion deficit (TPD) calculated by quantitative gated single-photon emission computed tomography (QGS) in patients with prior anterior MI undergoing coronary intervention. Sixty-six patients with prior anterior MI and 66 age- and sex-matched control subjects were enrolled. QRS score was obtained using a 50-criteria, 31-point system. QRS score was significantly higher in patients with prior anterior MI than in control subjects (12.8 ± 8.9 vs 1.1 ± 2.7%, p < 0.001). In the overall patient group (n = 132), QRS score correlated well with TPD (r = 0.81, p < 0.001). This good correlation held even in patients with TPD ≤40% (n = 126) or TPD ≤30% (n = 117). Overall, MI size estimated by QRS score was 7.0 ± 8.8%, significantly smaller than TPD, 11.4 ± 14.0% (p < 0.001). A Bland-Altman plot showed an increasing difference between QRS score and TPD with increasing MI size. When Bland-Altman plots were applied to patients with TPD ≤40% and further to patients with TPD ≤30%, the difference between QRS score and TPD became smaller and the agreement better. Overall, QRS score correlated well with QGS measurements such as end-diastolic volume (r = 0.62, p < 0.001), end-systolic volume (r = 0.67, p < 0.001), and ejection fraction (r = -0.73, p < 0.001). Our results suggest that QRS score reflects TPD well in patients with prior anterior MI whose TPD is less than approximately 30%, even in the coronary intervention era.
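
    The agreement statistics behind a Bland-Altman comparison are easy to reproduce; the sketch below uses made-up paired MI-size values, not the study's data.

        import numpy as np

        def bland_altman(a, b):
            """Bias and 95% limits of agreement between two paired
            measurement series: the quantities behind a Bland-Altman plot."""
            a, b = np.asarray(a, float), np.asarray(b, float)
            diff = a - b
            bias, sd = diff.mean(), diff.std(ddof=1)
            return bias, bias - 1.96 * sd, bias + 1.96 * sd

        qrs_score = [5.0, 9.0, 12.0, 21.0, 30.0]   # hypothetical MI sizes, %
        tpd       = [6.0, 10.0, 16.0, 28.0, 41.0]
        bias, lo, hi = bland_altman(qrs_score, tpd)
        print(f"bias = {bias:.1f}%, limits of agreement = [{lo:.1f}, {hi:.1f}]%")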

  18. Cloud Computing Quality

    Directory of Open Access Journals (Sweden)

    Anamaria Şiclovan

    2013-02-01

    Full Text Available Cloud computing is a new way of providing Internet-based services and computing. This approach builds on many existing services, such as the Internet, grid computing and Web services. As a system, cloud computing aims to provide on-demand services at a more acceptable price and infrastructure; it is precisely the transition from the computer to a service offered to consumers as a product delivered online. This paper describes the quality of cloud computing services, analyzing the advantages and characteristics they offer. It is a theoretical paper. Keywords: cloud computing, QoS, quality of cloud computing

  19. Computers and data processing

    CERN Document Server

    Deitel, Harvey M

    1985-01-01

    Computers and Data Processing provides information pertinent to the advances in the computer field. This book covers a variety of topics, including the computer hardware, computer programs or software, and computer applications systems.Organized into five parts encompassing 19 chapters, this book begins with an overview of some of the fundamental computing concepts. This text then explores the evolution of modern computing systems from the earliest mechanical calculating devices to microchips. Other chapters consider how computers present their results and explain the storage and retrieval of

  20. Efficient Finite Element Calculation of Nγ

    DEFF Research Database (Denmark)

    Clausen, Johan; Damkilde, Lars; Krabbenhøft, K.

    2007-01-01

    This paper deals with the computational aspects of the Mohr-Coulomb material model, in particular the calculation of the bearing capacity factor Nγ for a strip and a circular footing.

  1. Calculation of Rydberg interaction potentials

    Science.gov (United States)

    Weber, Sebastian; Tresp, Christoph; Menke, Henri; Urvoy, Alban; Firstenberg, Ofer; Büchler, Hans Peter; Hofferberth, Sebastian

    2017-07-01

    The strong interaction between individual Rydberg atoms provides a powerful tool exploited in an ever-growing range of applications in quantum information science, quantum simulation and ultracold chemistry. One hallmark of the Rydberg interaction is that both its strength and angular dependence can be fine-tuned with great flexibility by choosing appropriate Rydberg states and applying external electric and magnetic fields. More and more experiments are probing this interaction at short atomic distances or with such high precision that perturbative calculations as well as restrictions to the leading dipole-dipole interaction term are no longer sufficient. In this tutorial, we review all relevant aspects of the full calculation of Rydberg interaction potentials. We discuss the derivation of the interaction Hamiltonian from the electrostatic multipole expansion, numerical and analytical methods for calculating the required electric multipole moments and the inclusion of electromagnetic fields with arbitrary direction. We focus specifically on symmetry arguments and selection rules, which greatly reduce the size of the Hamiltonian matrix, enabling the direct diagonalization of the Hamiltonian up to higher multipole orders on a desktop computer. Finally, we present example calculations showing the relevance of the full interaction calculation to current experiments. Our software for calculating Rydberg potentials including all features discussed in this tutorial is available as open source.
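
    As a toy illustration of the diagonalization step: a two-pair-state model with a single dipole-dipole coupling and invented numbers, nothing like the large multipole-complete basis the authors' open-source software handles.

        import numpy as np

        # Toy model: two coupled pair states |ss'> and |pp'> separated by a
        # Foerster defect delta and coupled by a dipole-dipole element C3/R^3.
        # Diagonalising H(R) gives the interaction potential; at large R the
        # lower eigenvalue reduces to the van der Waals form -C3^2/(delta R^6).
        delta = 2 * np.pi * 100e6        # Foerster defect, rad/s (invented)
        C3 = 2 * np.pi * 1e9 * 1e-18     # dipole-dipole coefficient, rad/s m^3

        for R in (2e-6, 4e-6, 8e-6):                 # interatomic distance, m
            V = C3 / R**3
            H = np.array([[0.0, V],
                          [V, delta]])
            shift = np.linalg.eigvalsh(H)[0]         # energy shift of |ss'>
            print(f"R = {R*1e6:.0f} um: shift/2pi = {shift/(2*np.pi):.3e} Hz")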

  2. Precision Calculations in Supersymmetric Theories

    Directory of Open Access Journals (Sweden)

    L. Mihaila

    2013-01-01

    Full Text Available In this paper we report on the newest developments in precision calculations in supersymmetric theories. An important issue related to this topic is the construction of a regularization scheme preserving simultaneously gauge invariance and supersymmetry. In this context, we discuss in detail dimensional reduction in component field formalism as it is currently the preferred framework employed in the literature. Furthermore, we set special emphasis on the application of multi-loop calculations to the analysis of gauge coupling unification, the prediction of the lightest Higgs boson mass, and the computation of the hadronic Higgs production and decay rates in supersymmetric models. Such precise theoretical calculations up to the fourth order in perturbation theory are required in order to cope with the expected experimental accuracy on the one hand and to enable us to distinguish between the predictions of the Standard Model and those of supersymmetric theories on the other hand.

  3. Monte Carlo calculations for HTRs

    Energy Technology Data Exchange (ETDEWEB)

    Hogenbirk, A. [ECN Nuclear Research, Petten (Netherlands)

    1998-09-01

    From a neutronics point of view, pebble-bed HTRs are completely different from standard LWRs. The most important differences are found in the reactor geometry, the properties of the moderator (graphite instead of water) and the self-shielding of the fuel regions. Therefore, computer packages normally used for core analyses should be validated against experimental data before they can be used for HTR analyses. This especially holds for deterministic computer codes, in which approximations are made that may not be valid in pebble-bed HTRs. Monte Carlo codes, being based more directly on first principles, suffer much less from this problem. In order to study small- and medium-sized LEU-HTR systems, an IAEA Coordinated Research Programme (CRP) was started in the late 1980s, directed mainly at the effects of water ingress and neutron streaming. The PROTEUS facility at the Paul Scherrer Institute (PSI) in Villigen, Switzerland, played a central role in this CRP, providing benchmark-quality measurements in clean, easy-to-interpret critical configurations using pebble-type fuel. ECN in Petten, Netherlands, contributed to the CRP by performing reactor calculations with the deterministic WIMS code system. However, a need was felt for reference calculations in which as few approximations as possible were made. These analyses were performed with the Monte Carlo code MCNP-4A. In this contribution the results of the main MCNP calculations are given. The analyses used a detailed model of the PROTEUS experimental set-up together with high-quality continuous-energy cross-section data. Attention was focused on the calculation of k_eff and of streaming effects in the pebble-bed core. 15 refs.

  4. Field-theoretic calculation of kinetic helicity flux

    Indian Academy of Sciences (India)

    …turbulence and compute the fluxes of energy and kinetic helicity. The renormalized viscosity computed using the RG procedure is used in the calculation; contrast this with the arbitrary constant used in the EDQNM calculation. In addition, the EDQNM calculations require numerical integration of the energy equation, which is not required here.

  5. Methods for Melting Temperature Calculation

    Science.gov (United States)

    Hong, Qi-Jun

    Melting temperature calculation has important applications in the theoretical study of phase diagrams and computational materials screenings. In this thesis, we present two new methods, i.e., the improved Widom's particle insertion method and the small-cell coexistence method, which we developed in order to capture melting temperatures both accurately and quickly. We propose a scheme that drastically improves the efficiency of Widom's particle insertion method by efficiently sampling cavities while calculating the integrals providing the chemical potentials of a physical system. This idea enables us to calculate chemical potentials of liquids directly from first-principles without the help of any reference system, which is necessary in the commonly used thermodynamic integration method. As an example, we apply our scheme, combined with the density functional formalism, to the calculation of the chemical potential of liquid copper. The calculated chemical potential is further used to locate the melting temperature. The calculated results closely agree with experiments. We propose the small-cell coexistence method based on the statistical analysis of small-size coexistence MD simulations. It eliminates the risk of a metastable superheated solid in the fast-heating method, while also significantly reducing the computer cost relative to the traditional large-scale coexistence method. Using empirical potentials, we validate the method and systematically study the finite-size effect on the calculated melting points. The method converges to the exact result in the limit of a large system size. An accuracy within 100 K in melting temperature is usually achieved when the simulation contains more than 100 atoms. DFT examples of Tantalum, high-pressure Sodium, and ionic material NaCl are shown to demonstrate the accuracy and flexibility of the method in its practical applications. The method serves as a promising approach for large-scale automated material screening in which
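
    The Widom average at the heart of the improved insertion method can be sketched in a few lines; this bare-bones version uses reduced Lennard-Jones units, uniform rather than cavity-biased trial insertions, and toy configurations instead of an equilibrated trajectory, so it illustrates the estimator only.

        import numpy as np

        rng = np.random.default_rng(0)

        def lj_energy(r2):
            """Lennard-Jones pair energy in reduced units from squared
            distances."""
            inv6 = 1.0 / r2**3
            return 4.0 * (inv6**2 - inv6)

        def widom_mu_ex(configs, box, T, n_insert=200):
            """Excess chemical potential via Widom insertion:
               mu_ex = -T * ln < exp(-dU / T) >
            averaged over configurations and random trial positions."""
            boltz = []
            for pos in configs:                  # pos: (N, 3) coordinates
                for _ in range(n_insert):
                    trial = rng.random(3) * box
                    d = pos - trial
                    d -= box * np.round(d / box)     # minimum image
                    r2 = np.maximum((d**2).sum(axis=1), 1e-12)
                    boltz.append(np.exp(-lj_energy(r2).sum() / T))
            return -T * np.log(np.mean(boltz))

        # Toy 'trajectory': random configurations for demonstration only;
        # real use requires equilibrated liquid configurations from MD/MC.
        configs = [rng.random((50, 3)) * 8.0 for _ in range(10)]
        print("mu_ex (toy):", widom_mu_ex(configs, box=8.0, T=1.5))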

  6. Computer Spectrometers

    Science.gov (United States)

    Dattani, Nikesh S.

    2017-06-01

    Ideally, the cataloguing of spectroscopic linelists would not demand laborious and expensive experiments. Whatever an experiment might achieve, the same information would be attainable by running a calculation on a computer. Kolos and Wolniewicz were the first to demonstrate that calculations on a computer can outperform even the most sophisticated molecular spectroscopic experiments of the time, when their 1964 calculations of the dissociation energies of H_2 and D_{2} were found to be more than 1 cm^{-1} larger than the best experiments by Gerhard Herzberg, suggesting the experiment violated a strict variational principle. As explained in his Nobel Lecture, it took 5 more years for Herzberg to perform an experiment which caught up to the accuracy of the 1964 calculations. Today, numerical solutions to the Schrödinger equation, supplemented with relativistic and higher-order quantum electrodynamics (QED) corrections can provide ro-vibrational spectra for molecules that we strongly believe to be correct, even in the absence of experimental data. Why do we believe these calculated spectra are correct if we do not have experiments against which to test them? All evidence seen so far suggests that corrections due to gravity or other forces are not needed for a computer simulated QED spectrum of ro-vibrational energy transitions to be correct at the precision of typical spectrometers. Therefore a computer-generated spectrum can be considered to be as good as one coming from a more conventional spectrometer, and this has been shown to be true not just for the H_2 energies back in 1964, but now also for several other molecules. So are we at the stage where we can launch an array of calculations, each with just the atomic number changed in the input file, to reproduce the NIST energy level databases? Not quite. But I will show that for the 6e^- molecule Li_2, we have reproduced the vibrational spacings to within 0.001 cm^{-1} of the experimental spectrum, and I will

  7. Test Your Calculator IQ.

    Science.gov (United States)

    Williams, David E.

    1981-01-01

    This short quiz for teachers is intended to help them to brush up on their calculator operating skills and to prepare for the types of questions their students will ask about calculator idiosyncracies. (SJL)

  8. Calculating correct compilers

    OpenAIRE

    Bahr, Patrick; Hutton, Graham

    2015-01-01

    In this article we present a new approach to the problem of calculating compilers. In particular, we develop a simple but general technique that allows us to derive correct compilers from high- level semantics by systematic calculation, with all details of the implementation of the compilers falling naturally out of the calculation process. Our approach is based upon the use of standard equational reasoning techniques, and has been applied to calculate compilers for a wide range of language f...

  9. IPC - Isoelectric Point Calculator.

    Science.gov (United States)

    Kozlowski, Lukasz P

    2016-10-21

    Accurate estimation of the isoelectric point (pI) based on the amino acid sequence is useful for many analytical biochemistry and proteomics techniques, such as 2-D polyacrylamide gel electrophoresis or capillary isoelectric focusing used in combination with high-throughput mass spectrometry. Additionally, pI estimation can be helpful during protein crystallization trials. Here, I present the Isoelectric Point Calculator (IPC), a web service and a standalone program for the accurate estimation of protein and peptide pI using different sets of dissociation constant (pKa) values, including two new computationally optimized pKa sets. According to the presented benchmarks, the newly developed IPC pKa sets outperform previous algorithms by at least 14.9% for proteins and 0.9% for peptides (on average, 22.1% and 59.6%, respectively), which corresponds to an average pI estimation error of 0.87 and 0.25 pH units for proteins and peptides, respectively. Moreover, prediction of pI using the IPC pKa sets leads to fewer outliers, i.e., predictions affected by errors greater than a given threshold. The IPC service is freely available at http://isoelectric.ovh.org. The peptide and protein datasets used in the study and the precalculated pI values for the PDB and some of the most frequently used proteomes are available for large-scale analysis and future development. This article was reviewed by Frank Eisenhaber and Zoltán Gáspári.
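
    The charge-balance idea underlying any pI calculator can be sketched with a generic pKa set and bisection; the pKa values below are a textbook-style set, not IPC's optimized ones.

        # Henderson-Hasselbalch charge balance + bisection on the net charge.
        PKA_POS = {"K": 10.5, "R": 12.5, "H": 6.0, "N_term": 9.0}
        PKA_NEG = {"D": 3.9, "E": 4.1, "C": 8.3, "Y": 10.1, "C_term": 3.1}

        def net_charge(seq, pH):
            pos = [PKA_POS["N_term"]] + [PKA_POS[a] for a in seq if a in PKA_POS]
            neg = [PKA_NEG["C_term"]] + [PKA_NEG[a] for a in seq if a in PKA_NEG]
            q = sum(1.0 / (1.0 + 10 ** (pH - pka)) for pka in pos)
            q -= sum(1.0 / (1.0 + 10 ** (pka - pH)) for pka in neg)
            return q

        def isoelectric_point(seq, lo=0.0, hi=14.0, tol=1e-4):
            # Net charge is monotone decreasing in pH, so bisection converges.
            while hi - lo > tol:
                mid = (lo + hi) / 2
                lo, hi = (mid, hi) if net_charge(seq, mid) > 0 else (lo, mid)
            return (lo + hi) / 2

        print(f"pI = {isoelectric_point('ACDEFGHIKLMNPQRSTVWY'):.2f}")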

  10. The rating reliability calculator

    Directory of Open Access Journals (Sweden)

    Solomon David J

    2004-04-01

    Full Text Available Abstract Background Rating scales form an important means of gathering evaluation data. Since important decisions are often based on these evaluations, determining the reliability of rating data can be critical. Most commonly used methods of estimating reliability require a complete set of ratings, i.e. every subject being rated must be rated by each judge. Over fifty years ago Ebel described an algorithm for estimating the reliability of ratings based on incomplete data. While his article has been widely cited over the years, software based on the algorithm is not readily available. This paper describes an easy-to-use Web-based utility for estimating the reliability of ratings based on incomplete data using Ebel's algorithm. Methods The program is available for public use on our server and the source code is freely available under the GNU General Public License. The utility is written in PHP, a common open-source embedded scripting language. The rating data can be entered in a convenient format on the user's personal computer, and the program uploads them to the server to calculate the reliability and other statistics describing the ratings. Results When the program is run it displays the reliability, the number of subjects rated, the harmonic mean number of judges rating each subject, and the mean and standard deviation of the averaged ratings per subject. The program also displays the mean, standard deviation and number of ratings for each subject rated. Additionally the program estimates, via the Spearman-Brown prophecy formula, the reliability of an average of a number of ratings per subject. Conclusion This simple web-based program provides a convenient means of estimating the reliability of rating data without the need to conduct special studies to provide complete rating data. I would welcome other researchers revising and enhancing the program.
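
    The Spearman-Brown step mentioned at the end is a one-line formula: given the reliability of a single rating, it predicts the reliability of an average of k ratings.

        def spearman_brown(r_single, k):
            """Predicted reliability of the average of k ratings, given the
            reliability r_single of one rating (Spearman-Brown prophecy)."""
            return k * r_single / (1 + (k - 1) * r_single)

        # e.g. a 0.45 single-rater reliability rises to ~0.77 with 4 judges
        print(spearman_brown(0.45, 4))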

  11. Autistic Savant Calendar Calculators.

    Science.gov (United States)

    Patti, Paul J.

    This study identified 10 savants with developmental disabilities and an exceptional ability to calculate calendar dates. These "calendar calculators" were asked to demonstrate their abilities, and their strategies were analyzed. The study found that the ability to calculate dates into the past or future varied widely among these…

  12. Flexible Mental Calculation.

    Science.gov (United States)

    Threlfall, John

    2002-01-01

    Suggests that strategy choice is a misleading characterization of efficient mental calculation and that teaching mental calculation methods as a whole is not conducive to flexibility. Proposes an alternative in which calculation is thought of as an interaction between noticing and knowledge. Presents an associated teaching approach to promote…

  13. Wave Resistance of Hulls: Computing the Green Function by Numerical Integration and by an Asymptotic Method. Part One

    Directory of Open Access Journals (Sweden)

    Carou A.

    2006-11-01

    Full Text Available Computing the wave resistance of a hull by finite elements concentrated on a bounded open set requires prior knowledge of the Green function of the problem at a great distance. This function is very difficult to compute numerically. This paper justifies a fast asymptotic method that can be used to advantage as a replacement for numerical integration.

  14. Instructions for the use of the CIVM-Jet 4C finite-strain computer code to calculate the transient structural responses of partial and/or complete arbitrarily-curved rings subjected to fragment impact

    Science.gov (United States)

    Rodal, J. J. A.; French, S. E.; Witmer, E. A.; Stagliano, T. R.

    1979-01-01

    The CIVM-JET 4C computer program for the finite-strain analysis of two-dimensional transient structural responses of complete or partial rings and beams subjected to fragment impact is stored on tape as a series of individual files. The subroutines contained in each of these files are described in detail. All references to the CIVM-JET 4C program assume that the user has a copy of NASA CR-134907 (ASRL TR 154-9), which serves as a user's guide to (1) the CIVM-JET 4B computer code and (2) the CIVM-JET 4C computer code with the modified input instructions attached hereto.

  15. Dose-rate calculation techniques for neutron and photon radiation sources using codes based on different methodologies

    Energy Technology Data Exchange (ETDEWEB)

    Siciliano, F.; Lippolis, G.; Bruno, S.G. [ENEA, Centro Ricerche Trisaia, Rotondella (Italy)

    1995-11-01

    This paper covers the calculation techniques for dose rates from neutron and photon radiation sources, with reference both to the basic theoretical modeling of the MERCURE-4, XSDRNPM-S and MCNP-3A codes and, from a practical point of view, to safety analyses of the irradiation risk of two transportation casks. The input data sets of these calculations, which concern the CEN 10/200 HLW container and a shipping cask for dry PWR spent fuel assemblies, are extensively commented as far as the connection between the input data and the underlying theoretical background is concerned.

  16. Lateral hydraulic forces calculation on PWR fuel assemblies with computational fluid dynamics codes

    Energy Technology Data Exchange (ETDEWEB)

    Corpa Masa, R.; Jimenez Varas, G.; Moreno Garcia, B.

    2016-08-01

    To be able to simulate the behavior of nuclear fuel under operating conditions, it is required to include all the representative loads, including the lateral hydraulic forces, which traditionally were not included because of the difficulty of calculating them in a reliable way. Thanks to advances in CFD codes, it is now possible to assess them. This study calculates the local lateral hydraulic forces, caused by the contraction and expansion of the flow due to the bow of the surrounding fuel assemblies, on a fuel assembly under typical operating conditions in a three-loop Westinghouse PWR reactor. (Author)

  17. Contribution to the algorithmics and efficient programming of new parallel architectures, including compute accelerators, for neutron physics and shielding computations

    Energy Technology Data Exchange (ETDEWEB)

    Dubois, J.

    2011-10-13

    In science, simulation is a key process for research and validation. Modern computer technology allows faster numerical experiments, which are cheaper than real models. In the field of neutron simulation, the calculation of eigenvalues is one of the key challenges. The complexity of these problems is such that a lot of computing power may be necessary. The work of this thesis is, first, the evaluation of new computing hardware such as graphics cards or massively multi-core chips, and their application to eigenvalue problems for neutron simulation. Then, in order to exploit the massive parallelism of national supercomputers, we also study the use of asynchronous hybrid methods for solving eigenvalue problems at this very high level of parallelism. We then test the results of this research on several national supercomputers, such as the Titane hybrid machine of the Computing Centre for Research and Technology (CCRT), the Curie machine of the Very Large Computing Centre (TGCC), currently being installed, and the Hopper machine at the Lawrence Berkeley National Laboratory (LBNL). We also run our experiments on local workstations to illustrate the value of this research for everyday use with local computing resources. (author)

  18. Applications of computer algebra

    CERN Document Server

    1985-01-01

    Today, certain computer software systems exist which surpass the computational ability of researchers when their mathematical techniques are applied to many areas of science and engineering. These computer systems can perform a large portion of the calculations seen in mathematical analysis. Despite this massive power, thousands of people use these systems as a routine resource for everyday calculations. These software programs are commonly called "Computer Algebra" systems. They have names such as MACSYMA, MAPLE, muMATH, REDUCE and SMP. They are receiving credit as a computational aid with increasing regularity in articles in the scientific and engineering literature. When most people think about computers and scientific research these days, they imagine a machine grinding away, processing numbers arithmetically. It is not generally realized that, for a number of years, computers have been performing non-numeric computations. This means, for example, that one inputs an equation and obtains a closed for...

  19. Mathematica as program support in the integral calculations

    OpenAIRE

    Zlatanovska, Biljana; Stojanova, Aleksandra; Kocaleva, Mirjana; Stojkovic, Natasa; Krstev, Aleksandar

    2016-01-01

    In this paper, we give a connection between mathematical notions and the use of the computer as educational support at university level. Specifically, mathematical notions used in integral calculations are explained with the help of computer programs. The notions of indefinite and definite integrals, their calculation and their applications can be easily understood using computer programs for their presentation. Images obtained with computer programs allow students to better understand a...
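
    The paper demonstrates the idea with Mathematica; the same classroom demonstration can be sketched in Python with SymPy, computing an indefinite and a definite integral of an arbitrary example function.

        import sympy as sp

        x = sp.symbols('x')
        f = x * sp.exp(-x)

        indefinite = sp.integrate(f, x)            # (-x - 1)*exp(-x)
        definite = sp.integrate(f, (x, 0, sp.oo))  # 1

        print(indefinite, definite)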

  20. Core calculations of JMTR

    Energy Technology Data Exchange (ETDEWEB)

    Nagao, Yoshiharu [Japan Atomic Energy Research Inst., Oarai, Ibaraki (Japan). Oarai Research Establishment

    1998-03-01

    In material testing reactors like the JMTR (Japan Material Testing Reactor), a 50 MW reactor at the Japan Atomic Energy Research Institute, the neutron flux and neutron energy spectra of irradiated samples show complex distributions. It is necessary to assess the neutron flux and neutron energy spectra of an irradiation field by carrying out a nuclear calculation of the core for every operation cycle. In order to advance core calculation at the JMTR, the application of MCNP to the assessment of core reactivity, neutron flux and spectra has been investigated. In this study, in order to reduce the calculation time and variance, the results of calculations using the K code and a fixed source were compared, and the use of Weight Windows was investigated. As to the calculation method, the modeling of the whole JMTR core, the calculation conditions and the adopted variance reduction techniques are explained, and the results of the calculations are shown. No significant difference was observed in the neutron flux results arising from the different modeling of the fuel region in the K-code and fixed-source calculations. The method of assessing the neutron flux results is also described. (K.I.)

  1. Algorithm for aircraft project weight calculation

    Directory of Open Access Journals (Sweden)

    Г. В. Абрамова

    2013-07-01

    Full Text Available The paper describes the design process of a complex technical object, using the aircraft as an example and information technology such as CAD/CAM/CAE systems. It presents the basic models of the aircraft which are developed in the process of designing and which reflect the different aspects of its structure and function. The idea of a controlling parametric model for complex technical object design is introduced; such a model is a set of initial data for the development of design stations and enables optimal control of the complex technical object at all stages of design using modern computer technology. The paper also describes the weight design process, which is associated with all stages of aircraft development and production, and the use of a scheduling algorithm that organizes the weight calculations carried out at the various stages of planning and weighing options, so as to make optimal use of the available database of formulas and calculation methods.

  2. Ab initio calculations of biomolecules

    Science.gov (United States)

    Leś, Andrzej; Adamowicz, Ludwik

    1995-08-01

    Ab initio quantum mechanical calculations are valuable tools for the interpretation and elucidation of elemental processes in biochemical systems. With the ab initio approach one can calculate data that are sometimes difficult to obtain by experimental techniques. The most popular computational theoretical methods include the Hartree-Fock method as well as some lower-level variational and perturbational post-Hartree-Fock approaches, which allow one to predict molecular structures and to calculate spectral properties. We have been involved in a number of joint theoretical and experimental studies in the past, and some examples of these studies are given in this presentation. The systems chosen cover a wide variety of simple biomolecules, such as precursors of nucleic acids, double-proton-transferring molecules, and simple systems involved in processes related to the first stages of substrate-enzyme interactions. In particular, examples of some ab initio calculations used in the assignment of IR spectra of matrix-isolated pyrimidine nucleic bases are shown. Some radiation-induced transformations in model chromophores are also presented. Lastly, we demonstrate how the ab initio approach can be used to determine the initial several steps of the molecular mechanism of thymidylate synthase inhibition by dUMP analogues.

  3. Calculation of confined swirling jets

    Science.gov (United States)

    Chen, C. P.

    1986-01-01

    Computations of a confined coaxial swirling jet are carried out using a standard two-equation (k-epsilon) model and two modifications of this model based on Richardson-number corrections of the length-scale (epsilon) governing equation. To avoid any uncertainty involved in setting up the inlet boundary conditions, actual measurements are used at the inlet plane of the calculation domain. The results of the numerical investigation indicate that the k-epsilon model is inadequate for the prediction of confined swirling flows. Although marginal improvement of the flow predictions can be achieved by these two corrections, neither can be judged satisfactory.

  4. Computational mechanics

    Energy Technology Data Exchange (ETDEWEB)

    Goudreau, G.L.

    1993-03-01

    The Computational Mechanics thrust area sponsors research into the underlying solid, structural and fluid mechanics and heat transfer necessary for the development of state-of-the-art general purpose computational software. The scale of computational capability spans office workstations, departmental computer servers, and Cray-class supercomputers. The DYNA, NIKE, and TOPAZ codes have achieved world fame through our broad collaborators program, in addition to their strong support of ongoing Lawrence Livermore National Laboratory (LLNL) programs. Several technology transfer initiatives have been based on these established codes, teaming LLNL analysts and researchers with counterparts in industry, extending code capability to specific industrial interests of casting, metalforming, and automobile crash dynamics. The next-generation solid/structural mechanics code, ParaDyn, is targeted toward massively parallel computers, which will extend performance from gigaflop to teraflop power. Our work for FY-92 is described in the following eight articles: (1) Solution Strategies: New Approaches for Strongly Nonlinear Quasistatic Problems Using DYNA3D; (2) Enhanced Enforcement of Mechanical Contact: The Method of Augmented Lagrangians; (3) ParaDyn: New Generation Solid/Structural Mechanics Codes for Massively Parallel Processors; (4) Composite Damage Modeling; (5) HYDRA: A Parallel/Vector Flow Solver for Three-Dimensional, Transient, Incompressible Viscous Flow; (6) Development and Testing of the TRIM3D Radiation Heat Transfer Code; (7) A Methodology for Calculating the Seismic Response of Critical Structures; and (8) Reinforced Concrete Damage Modeling.

  5. Radar Signature Calculation Facility

    Data.gov (United States)

    Federal Laboratory Consortium — FUNCTION: The calculation, analysis, and visualization of the spatially extended radar signatures of complex objects such as ships in a sea multipath environment and...

  6. Waste Package Lifting Calculation

    Energy Technology Data Exchange (ETDEWEB)

    H. Marr

    2000-05-11

    The objective of this calculation is to evaluate the structural response of the waste package during the horizontal and vertical lifting operations in order to support the waste package lifting feature design. The scope of this calculation includes the evaluation of the 21 PWR UCF (pressurized water reactor uncanistered fuel) waste package, the naval waste package, the 5 DHLW/DOE SNF (defense high-level waste/Department of Energy spent nuclear fuel) short waste package, and the 44 BWR (boiling water reactor) UCF waste package. Procedure AP-3.12Q, Revision 0, ICN 0, Calculations, is used to develop and document this calculation.

  7. Electrical installation calculations advanced

    CERN Document Server

    Kitcher, Christopher

    2013-01-01

    All the essential calculations required for advanced electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of the electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. For apprentices and electrical installatio...

  8. Evapotranspiration Calculator Desktop Tool

    Science.gov (United States)

    The Evapotranspiration Calculator estimates evapotranspiration time series data for hydrological and water quality models for the Hydrologic Simulation Program - Fortran (HSPF) and the Stormwater Management Model (SWMM).

  9. Electronics Environmental Benefits Calculator

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Electronics Environmental Benefits Calculator (EEBC) was developed to assist organizations in estimating the environmental benefits of greening their purchase,...

  10. Electrical installation calculations basic

    CERN Document Server

    Kitcher, Christopher

    2013-01-01

    All the essential calculations required for basic electrical installation work. The Electrical Installation Calculations series has proved an invaluable reference for over forty years, for both apprentices and professional electrical installation engineers alike. The book provides a step-by-step guide to the successful application of the electrical installation calculations required in day-to-day electrical engineering practice. A step-by-step guide to everyday calculations used on the job. An essential aid to the City & Guilds certificates at Levels 2 and 3. Fo...

  11. Comment on: "FT-IR, FT-Raman and UV spectral investigation; computed frequency estimation analysis and electronic structure calculations on 1-nitronaphthalene" by M. Govindarajan and M. Karabacak [Spectrochim. Acta A 85 (2012) 251-260]

    Science.gov (United States)

    Alparone, Andrea; Librando, Vito

    2012-12-01

    The title paper [1] incorrectly establishes that, in the gas phase, the global minimum energy structure of 1-nitronaphthalene is planar (Cs symmetry). By contrast, the present calculations indicate that the planar Cs form is an unstable structure on the potential energy surface, exhibiting an imaginary vibrational wavenumber corresponding to the torsional mode of the nitro group around the C-N bond. At the B3LYP/6-311++G(d,p) level of calculation, the global minimum energy structure of 1-nitronaphthalene in the gas phase has a non-planar geometry, characterized by O-N-C-C dihedral angles of ca. 30° and lying 0.35 kcal/mol below the Cs form.

  12. Contributing to the design of run-time systems dedicated to high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Perache, M

    2006-10-15

    In the field of intensive scientific computing, the quest for performance has to face the increasing complexity of parallel architectures. Nowadays, these machines exhibit a deep memory hierarchy which complicates the design of efficient parallel applications. This thesis proposes a programming environment for designing efficient parallel programs on top of clusters of multi-processors. It features a programming model centered around collective communications and synchronizations, and provides load balancing facilities. The programming interface, named MPC, provides high-level paradigms which are optimized according to the underlying architecture. The environment is fully functional and used within the CEA/DAM (TERANOVA) computing center. The evaluations presented in this document confirm the relevance of our approach. (author)

  13. Calculation of dose conversion coefficients for a seated computational anthropomorphic phantom exposed to a plane source

    Energy Technology Data Exchange (ETDEWEB)

    Santos, William S.; Carvalho Junior, Alberico B. de; Pereira, Ariana J.S.; Santos, Marcos S.; Maia, Ana F., E-mail: williathan@yahoo.com.b, E-mail: ablohem@gmail.co, E-mail: ariana-jsp@hotmail.co, E-mail: m_souzasantos@hotmail.co, E-mail: afmaia@ufs.b [Universidade Federal de Sergipe (UFS), Aracaju, SE (Brazil)

    2011-10-26

    In this paper, conversion coefficients (CCs) of equivalent and effective dose in terms of air kerma, as suggested by ICRP 74, were calculated. These coefficients were calculated for a plane, monoenergetic radiation source with energies ranging from 10 keV to 2 MeV. The CCs were obtained for four irradiation geometries: anterior-posterior, posterior-anterior, right lateral and left lateral. The Visual Monte Carlo (VMC) radiation transport code and a seated female voxel anthropomorphic phantom were used. The differences in the CC values found for the four irradiation scenarios are a direct result of the disposition of the body's organs and the distances of these organs to the radiation source. The obtained CCs will allow more precise dose estimates in situations where the exposed individual is seated, since the CCs available in the literature were calculated using phantoms that are always standing or lying down.

  14. Chemical calculations and chemicals that might calculate

    Science.gov (United States)

    Barnett, Michael P.

    I summarize some applications of symbolic calculation to the evaluation of molecular integrals over Slater orbitals, and discuss some spin-offs of this work that have wider potential. These include the exploration of the mechanized use of analogy. I explain the methods that I use to do this, in relation to mathematical proofs and to modeling step by step processes such as organic syntheses and NMR pulse sequences. Another spin-off relates to biological information processing. Some challenges and opportunities in the information infrastructure of interdisciplinary research are discussed.

  15. Improvement of Sodium Neutronic Nuclear Data for the Computation of Generation IV Reactors

    Energy Technology Data Exchange (ETDEWEB)

    Archier, P.

    2011-09-14

    The safety criteria to be met by Generation IV sodium fast reactors (SFR) require reduced and mastered uncertainties on the neutronic quantities of interest. Part of these uncertainties come from nuclear data and, in the particular case of the SFR, from sodium nuclear data, which show significant differences between the available international libraries (JEFF-3.1.1, ENDF/B-VII.0, JENDL-4.0). The objective of this work is to improve the knowledge of sodium nuclear data for a better calculation of SFR neutronic parameters and reliable associated uncertainties. After an overview of the existing ²³Na data, the impact of the differences is quantified, particularly on sodium void reactivity effects, with both deterministic and stochastic neutronic codes. The results show that it is necessary to completely re-evaluate sodium nuclear data. Several developments have been made in the evaluation code Conrad to integrate new nuclear reaction models and their associated parameters and to perform adjustments with integral measurements. Following these developments, the analysis of differential data and the propagation of experimental uncertainties have been performed with Conrad. The resolved resonance range has been extended up to 2 MeV, and the continuum range begins directly beyond this energy. A new ²³Na evaluation and the associated multigroup covariance matrices were generated for future uncertainty calculations. The last part of this work focuses on feedback from sodium void integral data, using integral data assimilation methods to reduce the uncertainties on sodium cross sections. This work ends with uncertainty calculations for an industrial-like SFR, which show an improved prediction of its neutronic parameters with the new evaluation. (author)

  16. [Understanding dosage calculations].

    Science.gov (United States)

    Benlahouès, Daniel

    2016-01-01

    The calculation of dosages in paediatrics is the concern of the whole medical and paramedical team. This activity must generate a minimum of risks in order to prevent care-related adverse events. In this context, the calculation of dosages is a practice which must be understood by everyone. Copyright © 2016 Elsevier Masson SAS. All rights reserved.

  17. Resolving resonances in R-matrix calculations

    Energy Technology Data Exchange (ETDEWEB)

    Ramirez, J.M.; Bautista, Manuel A. [Centro de Fisica, Instituto Venezolano de Investigaciones Cientificas (IVIC), Caracas (Venezuela)

    2002-10-28

    We present a technique to obtain detailed resonance structures from R-matrix calculations of atomic cross sections for both collisional and radiative processes. The resolving resonances (RR) method relies on the QB method of Quigley-Berrington (Quigley L, Berrington K A and Pelan J 1998 Comput. Phys. Commun. 114 225) to find the position and width of resonances directly from the reactance matrix. Then one determines the symmetry parameters of these features and generates an energy mesh whereby fully resolved cross sections are calculated with minimum computational cost. The RR method is illustrated with the calculation of the photoionization cross sections and the unified recombination rate coefficients of Fe XXIV, O VI, and Fe XVII. The RR method reduces numerical errors arising from unresolved R-matrix cross sections in the computation of synthetic bound-free opacities, thermally averaged collision strengths and recombination rate coefficients. (author)

  18. NASCAP/LEO calculations of current collection

    Science.gov (United States)

    Mandell, Myron J.; Katz, Ira; Davis, Victoria A.; Kuharski, Robert A.

    1990-12-01

    NASCAP/LEO is a 3-dimensional computer code for calculating the interaction of a high-voltage spacecraft with the cold dense plasma found in Low Earth Orbit. Although based on a cubic grid structure, NASCAP/LEO accepts object definition input from standard computer aided design (CAD) programs so that a model may be correctly proportioned and important features resolved. The potential around the model is calculated by solving the finite element formulation of Poisson's equation with an analytic space charge function. Five previously published NASCAP/LEO calculations for three ground test experiments and two space flight experiments are presented. The three ground test experiments are a large simulated panel, a simulated pinhole, and a 2-slit experiment with overlapping sheaths. The two space flight experiments are a solar panel biased up to 1000 volts, and a rocket-mounted sphere biased up to 46 kilovolts. In all cases, the authors find good agreement between calculation and measurement.
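
    NASCAP/LEO itself solves the full 3-D finite-element problem; as a much-reduced illustration of the underlying step (Poisson's equation with an analytic space-charge term around a biased surface), here is a hypothetical 1-D linearized sheath calculation. All plasma parameters below are made-up placeholders, not values from the code or the experiments.

        import numpy as np

        # Toy 1-D sheath: phi'' = phi / lambda_D^2 (linearized space charge),
        # Dirichlet boundaries: biased surface on the left, plasma at 0 V far away.
        eps0 = 8.854e-12            # F/m
        qe = 1.602e-19              # C
        n0, Te = 1e11, 0.1          # plasma density (m^-3), electron temp (eV); illustrative
        debye2 = eps0 * Te / (n0 * qe)   # Debye length squared, m^2

        L, N = 0.5, 200
        x = np.linspace(0.0, L, N)
        dx = x[1] - x[0]
        phi = np.zeros(N)
        phi[0] = -100.0             # surface bias in volts (placeholder)

        # Jacobi iteration on the discretized equation
        for _ in range(20000):
            phi[1:-1] = (phi[:-2] + phi[2:]) / (2.0 + dx**2 / debye2)

        print(phi[:5])              # potential decays over a few Debye lengths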

  19. GASP: A computer code for calculating the thermodynamic and transport properties for ten fluids: Parahydrogen, helium, neon, methane, nitrogen, carbon monoxide, oxygen, fluorine, argon, and carbon dioxide. [enthalpy, entropy, thermal conductivity, and specific heat

    Science.gov (United States)

    Hendricks, R. C.; Baron, A. K.; Peller, I. C.

    1975-01-01

    A FORTRAN IV subprogram called GASP is discussed which calculates the thermodynamic and transport properties for 10 pure fluids: parahydrogen, helium, neon, methane, nitrogen, carbon monoxide, oxygen, fluorine, argon, and carbon dioxide. The pressure range is generally from 0.1 to 400 atmospheres (to 100 atm for helium and to 1000 atm for hydrogen). The temperature ranges are from the triple point to 300 K for neon; to 500 K for carbon monoxide, oxygen, and fluorine; to 600 K for methane and nitrogen; to 1000 K for argon and carbon dioxide; to 2000 K for hydrogen; and from 6 to 500 K for helium. GASP accepts any two of pressure, temperature, and density as input conditions, as well as pressure together with either entropy or enthalpy. The properties available in any combination as output include temperature, density, pressure, entropy, enthalpy, specific heats, sonic velocity, viscosity, thermal conductivity, and surface tension. The subprogram design is modular so that the user can choose only those subroutines necessary to the calculations.
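
    The flexible input convention ("any two of pressure, temperature, and density") is easy to mimic. The toy below recovers the missing state variable from the ideal-gas law; GASP, of course, uses real-fluid property correlations, and the specific gas constants here are merely illustrative.

        # Toy analogue of GASP's input flexibility: given any two of (P, T, rho),
        # recover the third from an equation of state. Ideal gas only, for show.
        R_SPECIFIC = {"nitrogen": 296.8, "argon": 208.1}   # J/(kg K)

        def state(fluid, P=None, T=None, rho=None):
            R = R_SPECIFIC[fluid]
            if [v is not None for v in (P, T, rho)].count(True) != 2:
                raise ValueError("specify exactly two of P, T, rho")
            if P is None:
                P = rho * R * T
            elif T is None:
                T = P / (rho * R)
            else:
                rho = P / (R * T)
            return {"P": P, "T": T, "rho": rho}

        print(state("nitrogen", P=101325.0, T=300.0))   # rho ~ 1.14 kg/m^3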

  20. Towards the development of run times leveraging virtualization for high performance computing

    Energy Technology Data Exchange (ETDEWEB)

    Diakhate, F.

    2010-12-15

    In recent years, there has been a growing interest in using virtualization to improve the efficiency of data centers. This success is rooted in virtualization's excellent fault tolerance and isolation properties, in the overall flexibility it brings, and in its ability to exploit multi-core architectures efficiently. These characteristics also make virtualization an ideal candidate to tackle issues found in new compute cluster architectures. However, in spite of recent improvements in virtualization technology, overheads in the execution of parallel applications remain, which prevent its use in the field of high performance computing. In this thesis, we propose a virtual device dedicated to message passing between virtual machines, so as to improve the performance of parallel applications executed in a cluster of virtual machines. We also introduce a set of techniques facilitating the deployment of virtualized parallel applications. These functionalities have been implemented as part of a runtime system which allows to benefit from virtualization's properties in a way that is as transparent as possible to the user while minimizing performance overheads. (author)

  1. Implicit upwind schemes for computational fluid dynamics. Solution by domain decomposition

    Energy Technology Data Exchange (ETDEWEB)

    Clerc, S

    1998-07-01

    In this work, the numerical simulation of fluid dynamics equations is addressed. Implicit upwind schemes of finite volume type are used for this purpose. The first part of the dissertation deals with the improvement of computational precision in unfavourable situations. A non-conservative treatment of some source terms is studied in order to correct some shortcomings of the usual operator-splitting method. Besides, finite volume schemes based on Godunov's approach are ill-suited to computing low Mach number flows. A modification of the upwinding by preconditioning is introduced to correct this defect. The second part deals with the solution of steady-state problems arising from an implicit discretization of the equations. A well-posed linearized boundary value problem is formulated. We prove the convergence of a domain decomposition algorithm of Schwarz type for this problem. This algorithm is implemented either directly, or in a Schur complement framework. Finally, another approach is proposed, which consists in decomposing the non-linear steady-state problem. (author)

  2. Development of a computational model for the calculation of neutron dose equivalent in laminated primary barriers of radiotherapy rooms

    Energy Technology Data Exchange (ETDEWEB)

    Rezende, Gabriel Fonseca da Silva

    2015-06-01

    Many radiotherapy centers acquire 15 and 18 MV linear accelerators to perform more effective treatments of deep tumors. However, the acquisition of this equipment must be accompanied by additional care in the shielding planning of the rooms that will house it. In cases where space is restricted, it is common to find primary barriers made of concrete and metal. The drawback of this type of barrier is photoneutron emission when high energy photons (e.g. 15 and 18 MV spectra) interact with the metallic material of the barrier. The emission of these particles constitutes a radiation protection problem inside and outside radiotherapy rooms, which should be properly assessed. A recent work has shown that current models underestimate the neutron dose outside the treatment rooms. In this work, a computational model for the aforementioned problem was created from Monte Carlo simulations and artificial intelligence. The developed model is composed of three neural networks, each corresponding to a pair of material and spectrum: Pb18, Pb15 and Fe18. In a direct comparison with the McGinley method, the Pb18 network exhibited the better response for approximately 78% of the cases tested, the Pb15 network showed better results for 100% of the tested cases, and the Fe18 network produced better answers for 94% of the tested cases. Thus, the computational model composed of the three networks has shown more consistent results than the McGinley method. (author)
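
    As a sketch of the model's structure (not the authors' trained networks), one regression network per (material, spectrum) pair can be selected at prediction time, as below with scikit-learn and synthetic training data standing in for the Monte Carlo results. All features and the toy attenuation law are placeholders.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def make_network():
            # Inputs could be e.g. barrier layer thicknesses; the target would be
            # the neutron dose equivalent beyond the barrier (synthetic here).
            X = rng.uniform(0.0, 1.0, size=(200, 2))
            y = np.exp(-3.0 * X[:, 0]) * (1.0 + X[:, 1])   # toy attenuation law
            return MLPRegressor(hidden_layer_sizes=(16,), max_iter=5000,
                                random_state=0).fit(X, y)

        networks = {("Pb", 18): make_network(),
                    ("Pb", 15): make_network(),
                    ("Fe", 18): make_network()}

        def predict_dose(material, mv, features):
            # Dispatch to the network trained for this (material, spectrum) pair
            return networks[(material, mv)].predict([features])[0]

        print(predict_dose("Pb", 18, [0.3, 0.5]))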

  3. Flow calculation in a bulb turbine

    Energy Technology Data Exchange (ETDEWEB)

    Goede, E.; Pestalozzi, J.

    1987-02-01

    In recent years remarkable progress has been made in the field of computational fluid dynamics. Sometimes the impression may arise when reading the relevant literature that most of the problems in this field have already been solved. Upon studying the matter more deeply, however, it is apparent that some questions still remain unanswered. The use of the quasi-3D (Q3D) computational method for calculating the flow in a bulb hydraulic turbine is described.

  4. Calculation methods of the nuclear characteristics

    OpenAIRE

    Dubovichenko, S. B.

    2010-01-01

    The book presents mathematical methods for calculating nuclear cross sections and elastic scattering phases, as well as the energies and characteristics of bound states in two- and three-particle nuclear systems, when the interaction potentials contain not only a central but also a tensor component. Descriptions are given of the numerical calculation methods and of the computer programs, written in the algorithmic language BASIC ("Turbo Basic" by Borland) for computers of the type IBM PC AT...

  5. Computational Finance

    DEFF Research Database (Denmark)

    Rasmussen, Lykke

    One of the major challenges in today's post-crisis finance environment is calculating the sensitivities of complex products for hedging and risk management. Historically, these sensitivities have been determined using bump-and-revalue, but due to the increasing magnitude of these computations this becomes increasingly difficult on available hardware. In this paper three alternative methods for evaluating derivatives are compared: the complex-step derivative approximation, the algorithmic forward mode and the algorithmic backward mode. These are applied to the price of the Credit Value Adjustment...
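
    Of the three methods compared, the complex-step derivative approximation is the simplest to demonstrate: evaluating f at a complex-perturbed point yields the derivative from the imaginary part, with no subtractive cancellation, so the step can be taken extremely small. A short sketch, using a classic test function rather than a CVA pricer:

        import numpy as np

        def complex_step_derivative(f, x, h=1e-20):
            # f'(x) ~ Im(f(x + i*h)) / h, exact to machine precision for tiny h
            return np.imag(f(x + 1j * h)) / h

        f = lambda x: np.exp(x) / np.sqrt(np.sin(x) ** 3 + np.cos(x) ** 3)
        print(complex_step_derivative(f, 1.5))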

  6. Calculativeness and trust

    DEFF Research Database (Denmark)

    Frederiksen, Morten

    2014-01-01

    Williamson’s characterisation of calculativeness as inimical to trust contradicts most sociological trust research. However, a similar argument is found within trust phenomenology. This paper re-investigates Williamson’s argument from the perspective of Løgstrup’s phenomenological theory of trust. Contrary to Williamson, however, Løgstrup’s contention is that trust, not calculativeness, is the default attitude, and only when suspicion is awoken does trust falter. The paper argues that while Williamson’s distinction between calculativeness and trust is supported by phenomenology, the analysis needs to take actual subjective experience into consideration. It points out that, first, Løgstrup places trust alongside calculativeness as a different mode of engaging in social interaction, rather than conceiving of trust as a state or the outcome of a decision-making process. Secondly, the analysis must take...

  7. Unit Cost Compendium Calculations

    Data.gov (United States)

    U.S. Environmental Protection Agency — The Unit Cost Compendium (UCC) Calculations raw data set was designed to provide for greater accuracy and consistency in the use of unit costs across the USEPA...

  8. National Stormwater Calculator

    Science.gov (United States)

    EPA’s National Stormwater Calculator (SWC) is a desktop application that estimates the annual amount of rainwater and frequency of runoff from a specific site anywhere in the United States (including Puerto Rico).

  9. Calculation Tool for Engineering

    OpenAIRE

    Lampinen, Samuli

    2016-01-01

    The study was conducted as qualitative research for K-S Konesuunnittelu Oy. The company provides mechanical engineering for technology suppliers in the Finnish export industries. The main objective was to study whether the competitiveness of the case company could be improved using a self-made calculation tool (Excel Tool). The mission was to clarify processes in the case company in order to see the possibilities of the Excel Tool and to compare it with other potential calculation applications. In addition,...

  10. Current interruption transients calculation

    CERN Document Server

    Peelo, David F

    2014-01-01

    Provides an original, detailed and practical description of current interruption transients, origins, and the circuits involved, and how they can be calculated. Current Interruption Transients Calculation is a comprehensive resource for the understanding, calculation and analysis of the transient recovery voltages (TRVs) and related re-ignition or re-striking transients associated with fault current interruption and the switching of inductive and capacitive load currents in circuits. This book provides an original, detailed and practical description of current interruption transients, origins,

  11. Floating calculation in Mesopotamia

    OpenAIRE

    Proust, Christine

    2016-01-01

    Sophisticated computation methods were developed 4000 years ago in Mesopotamia in the context of scribal schools. The basics of this computation can be detected in clay tablets written by young students educated in these scribal schools. At first glance, the mathematical exercises contained in school tablets seem very simple and quite familiar, and therefore they have attracted little attention from historians of mathematics. Yet if we look more closely at these modest writings, their...

  12. Computing meaning v.4

    CERN Document Server

    Bunt, Harry; Pulman, Stephen

    2013-01-01

    This book is a collection of papers by leading researchers in computational semantics. It presents a state-of-the-art overview of recent and current research in computational semantics, including descriptions of new methods for constructing and improving resources for semantic computation, such as WordNet, VerbNet, and semantically annotated corpora. It also presents new statistical methods in semantic computation, such as the application of distributional semantics in the compositional calculation of sentence meanings. Computing the meaning of sentences, texts, and spoken or texted dialogue i

  13. Sampling the potential energy surface of a DNA duplex damaged by a food carcinogen: Force field parameterization by ab initio quantum calculations and conformational searching using molecular mechanics computations

    Science.gov (United States)

    Wu, Xiangyang

    1999-07-01

    The heterocyclic amine 2-amino-3-methylimidazo (4,5-f) quinoline (IQ) is one of a number of carcinogens found in barbecued meat and fish. It induces tumors in mammals and is probably involved in human carcinogenesis, because of the great exposure to such food carcinogens. IQ is biochemically activated to a derivative which reacts with DNA to form a covalent adduct. This adduct may deform the DNA and consequently cause a mutation, which may initiate carcinogenesis. To understand this cancer-initiating event, it is necessary to obtain atomic resolution structures of the damaged DNA. No such structures are available experimentally due to synthesis difficulties. Therefore, we employ extensive molecular mechanics and dynamics calculations for this purpose. The major IQ-DNA adduct in the specific DNA sequence d(5'G1G2C G3CCA3') - d(5'TGGCGCC3'), with IQ modified at G3, is studied. The d(5'G1G2C G3CC3') sequence has recently been shown to be a hot-spot for mutations when IQ modification is at G3. Although this sequence is prone to -2 deletions via a "slippage mechanism" even when unmodified, a key question is why IQ increases the mutation frequency of the unmodified DNA by about 10^4-fold. Is there a structural feature imposed by IQ that is responsible? The molecular mechanics and dynamics program AMBER for nucleic acids with the latest force field was chosen for this work. This force field has been demonstrated to reproduce the B-DNA structure well. However, some parameters (the partial charges, bond lengths and angles, and the dihedral parameters of the modified residue) are not available in the AMBER database. We parameterized the force field using high-level ab initio quantum calculations. We created 800 starting conformations, which uniformly sampled in combination, at 18° intervals, three torsion angles that govern the IQ-DNA orientations, and energy-minimized them. The most important structures are abnormal; the IQ-damaged guanine is rotated out of its standard B

  14. Computer-assisted Crystallization.

    Science.gov (United States)

    Semeister, Joseph J., Jr.; Dowden, Edward

    1989-01-01

    To avoid the tedious task of recording temperature, a computer was used for calculating the heat of crystallization of the compound sodium thiosulfate. Described are the computer-interfacing procedures. Provides pictures of laboratory equipment and typical graphs from experiments. (YP)

  15. The Computational Materials Repository

    DEFF Research Database (Denmark)

    Landis, David D.; Hummelshøj, Jens S.; Nestorov, Svetlozar

    2012-01-01

    The possibilities for designing new materials based on quantum physics calculations are rapidly growing, but these design efforts lead to a significant increase in the amount of computational data created. The Computational Materials Repository (CMR) addresses this data challenge and provides...

  16. Experimental evaluation of quantum computing elements (qubits) made of electrons trapped over a liquid helium film

    Energy Technology Data Exchange (ETDEWEB)

    Rousseau, E

    2006-12-15

    An electron on helium presents a quantized energy spectrum. The interaction with the environment is considered sufficiently weak to allow the realization of a quantum bit (qubit) using the first two energy levels. The first stage in the realization of this qubit was to trap and control a single electron. This is carried out thanks to a set of micro-fabricated electrodes defining a potential well in which the electron is trapped. With such a sample we are able to trap and detect a variable number of electrons, between one and around twenty. This allowed us to study the static behaviour of a small number of electrons in a trap. They are expected to crystallize and form structures called Wigner molecules. Such molecules have not yet been observed with electrons above helium. Our results bring circumstantial evidence for Wigner crystallization. We then sought to characterize the qubit more precisely, aiming to carry out a projective readout (depending on the state of the qubit) and a measurement of the relaxation time. The results were obtained by exciting the electron with an incoherent electric field; a clean measurement of the relaxation time would require a coherent electric field. The conclusion thus cannot be final, but it would seem that the relaxation time is shorter than theoretically calculated, perhaps because the measured relaxation is between the oscillating states in the trap and not between the states of the qubit. (author)

  17. Probability calculations for three-part mineral resource assessments

    Science.gov (United States)

    Ellefsen, Karl J.

    2017-06-27

    Three-part mineral resource assessment is a methodology for predicting, in a specified geographic region, both the number of undiscovered mineral deposits and the amount of mineral resources in those deposits. These predictions are based on probability calculations that are performed with newly implemented computer software. Compared to the previous implementation, the new implementation includes new features for the probability calculations themselves and for checks of those calculations. The development of the new implementation led to a new understanding of the probability calculations, namely the assumptions inherent in them. Several assumptions strongly affect the mineral resource predictions, so it is crucial that they are checked during an assessment. The evaluation of the new implementation leads to new findings about the probability calculations, namely findings regarding the precision of the computations, the computation time, and the sensitivity of the calculation results to the input.
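
    The core of such probability calculations can be sketched as a Monte Carlo aggregation: draw a deposit count from the elicited distribution, then a resource amount per deposit, and accumulate. The distributions below are placeholders, not the implemented software's actual inputs.

        import numpy as np

        rng = np.random.default_rng(42)
        n_sims = 10_000

        # Elicited probabilities for the number of undiscovered deposits (placeholder)
        counts = rng.choice([0, 1, 2, 5], p=[0.2, 0.4, 0.3, 0.1], size=n_sims)

        # Lognormal tonnage model for contained resource per deposit (placeholder)
        totals = np.array([rng.lognormal(mean=2.0, sigma=1.0, size=n).sum()
                           for n in counts])

        print("mean total resource:", totals.mean())
        print("P(total > 50):", (totals > 50).mean())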

  18. Computing High-Frequency Second-Order Loads on Tension Leg Platforms

    Directory of Open Access Journals (Sweden)

    Chen X.

    2006-11-01

    Full Text Available The problem considered here is the evaluation of second-order sum-frequency exciting loads, i.e. loads occurring at the pairwise sums of the wave frequencies, on tension leg platforms. These loads are held responsible for the resonant behaviour (in roll, pitch and heave) observed during basin tests and could appreciably reduce the fatigue life of the tendons. Results are first presented for a simplified structure consisting of four vertical cylinders resting on the sea bed; the interest of this geometry is that all the calculations can be carried through quasi-analytically. The results obtained illustrate the high degree of interaction between the columns and the slow decay of the second-order diffraction potential with depth. Results are then presented for an actual platform, the Snorre TLP. Tension Leg Platforms (TLPs) are now regarded as a promising technology for the development of deep offshore fields. As the water depth increases, however, their natural periods of heave, roll and pitch tend to increase as well (roughly to the one-half power), and it is not yet clear what the maximum permissible values for these natural periods can be. For the Snorre TLP, for instance, they are only about 2.5 seconds, which seems to be sufficiently low since there is very limited free wave energy at such periods. Model tests, however, have shown some resonant response in sea states with peak periods of about 5 seconds. Often referred to as "springing", this resonant motion can severely affect the fatigue life of tethers and increase their design loads. In order to calculate this springing motion at the design stage, it is necessary to identify and evaluate both the exciting loads and the mechanisms of energy dissipation. With the help of the French Norwegian Foundation a joint effort was

  19. Simple Calculation Programs for Biology Immunological Methods

    Indian Academy of Sciences (India)

    Computation of Ab/Ag concentration from ELISA data: graphical method (Raghava et al., 1992, J. Immunol. Methods 153: 263). Determination of the affinity of a monoclonal antibody using non-competitive ...

  20. Fast calculation of best focus position

    NARCIS (Netherlands)

    Bezzubik, V.; Belashenkov, N.; Vdovin, G.V.

    2015-01-01

    A new computational technique based on linear-scale differential analysis (LSDA) of the digital image is proposed to find the best focus position in digital microscopy by means of defocus estimation at two near-focal positions only. The method is based on the calculation of local gradients of the image on
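
    A generic version of a gradient-based focus measure (not the authors' exact LSDA formulation) can be sketched as follows: score each of the two near-focal images by its mean local-gradient magnitude and move toward the sharper one.

        import numpy as np

        def sharpness(img):
            # Mean local-gradient magnitude as a simple focus score
            gy, gx = np.gradient(img.astype(float))
            return np.mean(np.hypot(gx, gy))

        def toward_best_focus(img_a, img_b):
            """Given images at two near-focal positions, pick the sharper one."""
            return "a" if sharpness(img_a) > sharpness(img_b) else "b"

        rng = np.random.default_rng(1)
        sharp = rng.random((64, 64))
        blurred = (sharp + np.roll(sharp, 1, 0) + np.roll(sharp, 1, 1)) / 3  # crude blur
        print(toward_best_focus(sharp, blurred))   # -> "a"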

  1. LHC Bellows Impedance Calculations

    CERN Document Server

    Dyachkov, M

    1997-01-01

    To compensate for thermal expansion the LHC ring has to accommodate about 2500 bellows which, together with beam position monitors, are the main contributors to the LHC broad-band impedance budget. In order to reduce this impedance to an acceptable value the bellows have to be shielded. In this paper we compare different designs proposed for the bellows and calculate their transverse and longitudinal wakefields and impedances. Owing to the 3D geometry of the bellows, the code MAFIA was used for the wakefield calculations; when possible, the MAFIA results were compared to those obtained with ABCI. The results presented in this paper indicate that the latest bellows design, in which shielding is provided by sprung fingers which can slide along the beam screen, has impedances smaller than those previously estimated according to a rather conservative scaling of SSC calculations and LEP measurements. Several failure modes, such as missing fingers and imperfect RF contact, have also been studied.

  2. INVAP's Nuclear Calculation System

    Directory of Open Access Journals (Sweden)

    Ignacio Mochi

    2011-01-01

    Full Text Available Since its origins in 1976, INVAP has continuously developed the calculation system used for the design and optimization of nuclear reactors. The calculation codes have been polished and enhanced with new capabilities as these became needed or useful for the new challenges the market imposed. The present state of the code package enables INVAP to design nuclear installations with complex geometries using a set of easy-to-use input files that minimize user errors due to confusion or misinterpretation. A set of intuitive graphic postprocessors has also been developed, providing a fast and complete visualization tool for the parameters obtained in the calculations. The capabilities and general characteristics of this deterministic software package are presented throughout the paper, including several examples of its recent applications.

  3. Calculating Quenching Weights

    CERN Document Server

    Salgado, C A; Salgado, Carlos A.; Wiedemann, Urs Achim

    2003-01-01

    We calculate the probability ("quenching weight") that a hard parton radiates an additional energy fraction due to scattering in spatially extended QCD matter. This study is based on an exact treatment of finite in-medium path length, it includes the case of a dynamically expanding medium, and it extends to the angular dependence of the medium-induced gluon radiation pattern. All calculations are done in the multiple soft scattering approximation (the Baier-Dokshitzer-Mueller-Peigné-Schiff-Zakharov "BDMPS-Z" formalism) and in the single hard scattering approximation (N=1 opacity approximation). By comparison, we establish a simple relation between transport coefficient, Debye screening mass and opacity, for which both approximations lead to comparable results. Together with this paper, a CPU-inexpensive numerical subroutine for calculating quenching weights is provided electronically. To illustrate its applications, we discuss the suppression of hadronic transverse momentum spectra in nucleus-nucleus colli...

  4. Graphing Calculator Mini Course

    Science.gov (United States)

    Karnawat, Sunil R.

    1996-01-01

    The "Graphing Calculator Mini Course" project provided a mathematically-intensive technologically-based summer enrichment workshop for teachers of American Indian students on the Turtle Mountain Indian Reservation. Eleven such teachers participated in the six-day workshop in summer of 1996 and three Sunday workshops in the academic year. The project aimed to improve science and mathematics education on the reservation by showing teachers effective ways to use high-end graphing calculators as teaching and learning tools in science and mathematics courses at all levels. In particular, the workshop concentrated on applying TI-82's user-friendly features to understand the various mathematical and scientific concepts.

  5. Good Practices in Free-energy Calculations

    Science.gov (United States)

    Pohorille, Andrew; Jarzynski, Christopher; Chipot, Christopher

    2013-01-01

    As access to computational resources continues to increase, free-energy calculations have emerged as a powerful tool that can play a predictive role in drug design. Yet, in a number of instances, the reliability of these calculations can be improved significantly if a number of precepts, or good practices, are followed. For the most part, the theory upon which these good practices rely has been known for many years, but is often overlooked, or simply ignored. In other cases, the theoretical developments are too recent for their potential to be fully grasped and merged into popular platforms for the computation of free-energy differences. The current best practices for carrying out free-energy calculations are reviewed, demonstrating that, at little to no additional cost, free-energy estimates can be markedly improved and bounded by meaningful error estimates. In free-energy perturbation and nonequilibrium work methods, monitoring the probability distributions that underlie the transformation between the states of interest, performing the calculation bidirectionally, stratifying the reaction pathway, and choosing the most appropriate paradigms and algorithms for transforming between states offer significant gains in both accuracy and precision. In thermodynamic integration and probability distribution (histogramming) methods, properly designed adaptive techniques yield nearly uniform sampling of the relevant degrees of freedom and, by doing so, can markedly improve the efficiency and accuracy of free-energy calculations without incurring any additional computational expense.
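
    As an example of one such practice, the free-energy perturbation (Zwanzig) estimator below is computed bidirectionally on synthetic Gaussian work distributions, for which the exact answer (1.5 in reduced units) is known; in a real calculation, a large gap between the two directions would flag poor phase-space overlap between the states.

        import numpy as np

        beta = 1.0                    # 1/kT in reduced units
        rng = np.random.default_rng(0)

        # Energy differences dU = U1 - U0, sampled in state 0 (forward)
        # and in state 1 (reverse); Gaussians chosen so both imply dF = 1.5
        dU_fwd = rng.normal(loc=2.0, scale=1.0, size=50_000)
        dU_rev = rng.normal(loc=1.0, scale=1.0, size=50_000)

        def fep(dU):
            # dF = -kT ln <exp(-beta dU)>, with log-sum-exp for stability
            m = (-beta * dU).max()
            return -(m + np.log(np.mean(np.exp(-beta * dU - m)))) / beta

        dF_forward = fep(dU_fwd)
        dF_backward = -fep(-dU_rev)   # reverse estimate of the same dF
        print(dF_forward, dF_backward)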

  6. FLAG-SGH Sedov calculations

    Energy Technology Data Exchange (ETDEWEB)

    Fung, Jimmy [Los Alamos National Laboratory; Schofield, Sam [LLNL; Shashkov, Mikhail J. [Los Alamos National Laboratory

    2012-06-25

    We did not run with a 'cylindrically painted region'. However, we did compute two general variants of the original problem: refinement studies in which a single zone at each level of refinement contains the entire internal energy at t=0, or a 'finite' energy source which has the same physical dimensions as that for the 91 x 46 mesh but consists of increasing numbers of zones with refinement. Nominal mesh resolution: 91 x 46. Other mesh resolutions: 181 x 92 and 361 x 184. Note: not identical to the original specification. To maintain symmetry for the 'fixed' energy source, the mesh resolution was adjusted slightly. FLAG Lagrange or full (Eulerian) ALE was used with various options for each simulation. Observation: for either Lagrange or ALE, point or 'fixed' source, calculations converge on density and pressure with mesh resolution, but not on energy (nor on vorticity).

  7. When computers were human

    CERN Document Server

    Grier, David Alan

    2013-01-01

    Before Palm Pilots and iPods, PCs and laptops, the term "computer" referred to the people who did scientific calculations by hand. These workers were neither calculating geniuses nor idiot savants but knowledgeable people who, in other circumstances, might have become scientists in their own right. When Computers Were Human represents the first in-depth account of this little-known, 200-year epoch in the history of science and technology. Beginning with the story of his own grandmother, who was trained as a human computer, David Alan Grier provides a poignant introduction to the wider wo

  8. Research in Computational Astrobiology

    Science.gov (United States)

    Chaban, Galina; Colombano, Silvano; Scargle, Jeff; New, Michael H.; Pohorille, Andrew; Wilson, Michael A.

    2003-01-01

    We report on several projects in the field of computational astrobiology, which is devoted to advancing our understanding of the origin, evolution and distribution of life in the Universe using theoretical and computational tools. Research projects included modifying existing computer simulation codes to use efficient, multiple time step algorithms, statistical methods for analysis of astrophysical data via optimal partitioning methods, electronic structure calculations on water-nucleic acid complexes, incorporation of structural information into genomic sequence analysis methods, and calculations of shock-induced formation of polycyclic aromatic hydrocarbon compounds.

  9. Green's function calculation from equipartition theorem.

    Science.gov (United States)

    Perton, Mathieu; Sánchez-Sesma, Francisco José

    2016-08-01

    A method is presented to calculate the elastodynamic Green's functions by using the equipartition principle. The imaginary part is calculated as the average cross-correlation of the displacement fields generated by the incidence of body and surface waves with amplitudes weighted by partition factors. The real part is retrieved using the Hilbert transform. The calculation of the partition factors is discussed for several geometrical configurations in two-dimensional space: the full space, a basin in a half-space, and layered media. For the last case, the method results in a fast computation of the full Green's functions. Additionally, if the contribution of only selected states is desired, as for instance the surface-wave part, the computation is even faster. Its use for full waveform inversion may then be advantageous.
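
    The real-part recovery step can be sketched with a discrete Hilbert transform. The toy Green's function, frequency grid, and sign convention below are illustrative assumptions, not values from the paper.

        # Recover Re G from Im G for a causal Green's function
        # (a Kramers-Kronig pair) via scipy's FFT-based Hilbert transform.
        import numpy as np
        from scipy.signal import hilbert

        w = np.linspace(-50.0, 50.0, 4001)   # frequency grid
        w0, eta = 5.0, 1.0                   # toy pole position and damping

        G = 1.0 / (w - w0 + 1j * eta)        # simple causal Green's function
        im_g = G.imag

        # scipy's hilbert() returns the analytic signal x + i*H[x], so its
        # imaginary part is the Hilbert transform of the input.
        re_recovered = -np.imag(hilbert(im_g))   # sign set by this convention

        interior = slice(400, -400)   # grid edges suffer from periodization
        err = np.max(np.abs(re_recovered[interior] - G.real[interior]))
        print(f"max interior reconstruction error: {err:.2e}")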

  10. Gravitational constant calculation methodologies

    OpenAIRE

    Shakhparonov, V. M.; Karagioz, O. V.; Izmailov, V. P.

    2011-01-01

    We consider the gravitational constant calculation methodologies for a rectangular block of the torsion balance body presented in Phys. Rev. Lett. 102, 240801 (2009) and Phys. Rev. D 82, 022001 (2010). We have established the influence of non-equilibrium gas flows on the obtained values of G.

  11. The Quality of the Embedding Potential Is Decisive for Minimal Quantum Region Size in Embedding Calculations

    DEFF Research Database (Denmark)

    Nåbo, Lina J; Olsen, Jógvan Magnus Haugaard; Martínez, Todd J

    2017-01-01

    The calculation of spectral properties for photoactive proteins is challenging because of the large cost of electronic structure calculations on large systems. Mixed quantum mechanical (QM) and molecular mechanical (MM) methods are typically employed to make such calculations computationally tractable...

  12. [Structure and function of the cardiotocographic score (CTG-score) calculated by the "quantitative cardiotocography" computer method. Determining the significance of its components for the accuracy of the estimates for the ph of the fetus].

    Science.gov (United States)

    Ignatov, P; Atanasov, B

    2011-01-01

    In the last three years, "quantitative cardiotocography" has become the main method for fetal monitoring during late pregnancy and birth in Sheynovo hospital, Sofia, Bulgaria. Our previous studies presented opportunities for increasing the diagnostic potential of the methodology. In this paper we offer a new approach to further improve the accuracy of prognostic values for fetal pH during labor. This is achieved by analyzing the individual components of the CTG-score (microfluctuation - OSZ, basic fetal heart rate - FRQ, and decelerations - DEC). Several groups of CTG-scores were formed, according to the composition of the score and the correlation between forecast and actual results for the pH of the fetus. For each of the stored 171 recordings we compared the CTG-score produced prior to the delivery with the pH measured in the umbilical artery (UA) before cutting the umbilical cord. As the fetal pH forecast is based strictly on the CTG-score value, the difference between actual and prognostic results for the pH actually shows how accurate the CTG-score itself is. We used the standard deviation (Std. dev.) to assess this variability. We defined several groups of CTG-score based on its composition and the respective standard deviations. Each group includes CTG-scores with no significant statistical difference between the calculated standard deviations: CTG-score with low (composed of OSZ; Std. dev. 0.065), satisfactory (composed of OSZ + FRQ and FRQ; Std. dev. 0.048 and 0.044), high (composed of OSZ + DEC and DEC; Std. dev. 0.032 and 0.027), and very high (composed of FRQ + DEC and OSZ + FRQ + DEC; Std. dev. 0.019 and 0.012) predictive value. We observed substantial variety in the prognostic results, depending on which components of the CTG-score are involved in the evaluation of pH. The composition of the CTG-score seems to be crucial for the accuracy of the prognostic fetal pH values. In order to organize the gathered information it is necessary to develop clinical

  13. Calcul des paramètres de l'équation de Wilson. Analyse comparative des représentations d'équilibres liquide-vapeur isothermes par les modèles de Wilson et NRTL. Computing Parameters in the Wilson Equation: Comparative Analysis of Representations of Isothermal Liquid-Vapor Equilibria by the Wilson and NRTL Models

    Directory of Open Access Journals (Sweden)

    Desplanches H.

    2006-11-01

    Full Text Available A program has been developed for computing the parameters in the Wilson equation. It uses an iterative method of minimizing the differences in pressure and vapor composition, or in each quantity taken separately. The methods are tested on nine isothermal liquid-vapor equilibria of binary mixtures with positive or negative deviations. The mean differences between the experimental values of the pressure, the vapor composition, and the excess free enthalpy and the values computed from the Wilson parameters are compared with those obtained from the NRTL model.
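
    As a companion to the abstract, here is a minimal sketch of the Wilson model itself for a binary mixture, with made-up parameters and vapor pressures rather than the paper's data. Wrapping the pressure and vapor-composition residuals in an optimizer such as scipy.optimize.least_squares over (lam12, lam21) would mirror the iterative minimization described above.

        # Wilson activity coefficients and the modified-Raoult P, y1 they imply.
        import numpy as np

        def wilson_gamma(x1, lam12, lam21):
            # Activity coefficients (gamma1, gamma2) from the Wilson equation.
            x2 = 1.0 - x1
            s12 = x1 + lam12 * x2
            s21 = x2 + lam21 * x1
            k = lam12 / s12 - lam21 / s21
            g1 = np.exp(-np.log(s12) + x2 * k)
            g2 = np.exp(-np.log(s21) - x1 * k)
            return g1, g2

        p1_sat, p2_sat = 47.0, 29.0   # pure-component vapor pressures (kPa, assumed)
        lam12, lam21 = 0.45, 0.85     # assumed Wilson parameters

        x1 = np.linspace(0.01, 0.99, 5)
        g1, g2 = wilson_gamma(x1, lam12, lam21)
        p = x1 * g1 * p1_sat + (1 - x1) * g2 * p2_sat   # modified Raoult's law
        y1 = x1 * g1 * p1_sat / p                       # vapor composition

        for xi, pi, yi in zip(x1, p, y1):
            print(f"x1={xi:.2f}  P={pi:6.2f} kPa  y1={yi:.3f}")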

  14. Environmental flow allocation and statistics calculator

    Science.gov (United States)

    Konrad, Christopher P.

    2011-01-01

    The Environmental Flow Allocation and Statistics Calculator (EFASC) is a computer program that calculates hydrologic statistics based on a time series of daily streamflow values. EFASC will calculate statistics for daily streamflow in an input file or will generate a synthetic daily flow series from an input file based on rules for allocating and protecting streamflow and then calculate statistics for the synthetic time series. The program reads dates and daily streamflow values from input files and writes statistics out to a series of worksheets and text files. Multiple sites can be processed in series as one run. EFASC is written in Microsoft® Visual Basic® for Applications and implemented as a macro in Microsoft® Office Excel 2007. EFASC is intended as a research tool for users familiar with computer programming. The code for EFASC is provided so that it can be modified for specific applications. All users should review how output statistics are calculated and recognize that the algorithms may not comply with conventions used to calculate streamflow statistics published by the U.S. Geological Survey.
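
    A minimal pandas sketch of the kind of daily-flow statistics EFASC produces is shown below; the synthetic record, column name, and choice of statistics are assumptions, and EFASC itself is an Excel/VBA macro with its own conventions.

        # Toy daily-streamflow statistics: annual means, annual 7-day low flow,
        # and exceedance quantiles, on a synthetic 10-year record.
        import numpy as np
        import pandas as pd

        idx = pd.date_range("2000-01-01", "2009-12-31", freq="D")
        rng = np.random.default_rng(0)
        flow = (100.0
                + 50.0 * np.sin(2 * np.pi * idx.dayofyear / 365.25)
                + rng.gamma(2.0, 10.0, len(idx)))
        s = pd.Series(flow, index=idx, name="flow_cfs")

        annual_mean = s.resample("YS").mean()                      # mean flow per year
        seven_day_min = s.rolling(7).mean().resample("YS").min()   # annual 7-day low
        q10, q50, q90 = s.quantile([0.90, 0.50, 0.10])   # exceeded 10/50/90% of days

        print(annual_mean.round(1).head(3))
        print(seven_day_min.round(1).head(3))
        print(f"Q10={q10:.1f}  Q50={q50:.1f}  Q90={q90:.1f}")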

  15. Computational modeling of the mathematical dummy of the Brazilian woman for calculations of internal dosimetry and ends of comparison of the fractions absorbed specific with the woman reference; Modelagem computacional do manequim matematico da mulher brasileira para calculos de dosimetria interna e para fins de comparacao das fracoes absorvidas especificas com a mulher referencia

    Energy Technology Data Exchange (ETDEWEB)

    Ximenes, Edmir

    2006-07-01

    Tools for dosimetric calculations are of the utmost importance for the basic principles of radiological protection, not only in nuclear medicine, but also in other scientific calculations. In this work a mathematical model of the Brazilian woman is developed in order to be used as a basis for calculations of Specific Absorbed Fractions (SAFs) in internal organs and in the skeleton, in accord with the objectives of diagnosis or therapy in nuclear medicine. The model developed here is similar in form to that of Snyder, but modified to be more relevant to the case of the Brazilian woman. To do this, the formalism of the Monte Carlo method was used by means of the ALGAM-97 computational code. As a contribution to the objectives of this thesis, we developed the computational system cSAF - consultation for Specific Absorbed Fractions (cFAE, from the Portuguese acronym) - which furnishes several look-up facilities for the research user. The dialogue interface with the operator was planned following current practices in the utilization of event-oriented languages. This interface permits the user to navigate by means of the reference models, choose the source organ and the desired energy, and receive an answer through an efficient and intuitive dialogue. The system furnishes, in addition to the data referring to the Brazilian woman, data referring to the model of Snyder and to the model of the Brazilian man. The system makes available not only individual data for the SAFs of the three models, but also a comparison among them. (author)

  16. Development of a computer program of fast calculation for the pre design of advanced nuclear fuel 10 x 10 for BWR type reactors; Desarrollo de un program de computo de calculo rapido para el prediseno de celdas de combustible nuclear avanzado 10 x 10 para reactores de agua en ebullicion

    Energy Technology Data Exchange (ETDEWEB)

    Perusquia, R.; Montes, J.L.; Ortiz, J.J. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: mrpc@nuclear.inin.mx

    2005-07-01

    At the National Institute of Nuclear Research (ININ), a methodology is being developed to optimize the design of 10x10 cells for fuel assemblies of boiling water reactors (BWRs). A linear calculation formula, based on a matrix of coefficients (the rate of change of relative pin power due to changes in U-235 enrichment), was proposed to estimate the relative power of each pin in a cell. On this basis, the fast-calculation computer program PreDiCeldas was developed. By means of a simple search algorithm, it minimizes the maximum local peak power factor (LPPF) of the cell by varying the U-235 distribution inside the cell while keeping its average enrichment fixed. The accuracy of the estimated relative pin powers is of the order of 1.9% when compared with results of the 'best estimate' HELIOS code. With PreDiCeldas it was possible, in a minimal calculation time, to re-design a reference cell, lowering the beginning-of-life LPPF from 1.44 to 1.31. With low-LPPF cell designs, the aim is to design cycles even longer than those reached at present in the BWRs of the Laguna Verde plant. (Author)
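
    The linear formula described above can be sketched in a few lines; the sensitivity matrix, cell size, and enrichment perturbations below are toy values, not PreDiCeldas data.

        # Linear pin-power estimate: p_new = p_ref + M @ de, where M holds the
        # rate of change of relative pin power with U-235 enrichment.
        import numpy as np

        n = 4                                   # toy cell with 4 pins (real case: 10x10)
        rng = np.random.default_rng(1)

        p_ref = np.full(n, 1.0)                 # reference relative pin powers
        m = 0.05 * rng.standard_normal((n, n))  # d(power)/d(enrichment) coefficients

        de = np.array([0.2, -0.1, 0.0, -0.1])   # enrichment changes (wt% U-235)
        p_new = p_ref + m @ de                  # linear prediction of new pin powers

        lppf = p_new.max() / p_new.mean()       # local peak power factor
        print("predicted pin powers:", np.round(p_new, 3))
        print(f"LPPF = {lppf:.3f}")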

  17. CONVEYOR FOUNDATIONS CALCULATION

    Energy Technology Data Exchange (ETDEWEB)

    S. Romanos

    1995-03-10

    The purpose of these calculations is to design foundations for all conveyor supports for the surface conveyors that transport the muck resulting from the TBM operation, from the belt storage to the muck stockpile. These conveyors consist of: (1) Conveyor W-TO3, from the belt storage, at the starter tunnel, to the transfer tower. (2) Conveyor W-SO1, from the transfer tower to the material stacker, at the muck stockpile.

  18. Clinical calculators in hospital medicine: Availability, classification, and needs.

    Science.gov (United States)

    Dziadzko, Mikhail A; Gajic, Ognjen; Pickering, Brian W; Herasevich, Vitaly

    2016-09-01

    Clinical calculators are widely used in modern clinical practice, but are not generally applied to electronic health record (EHR) systems. Important barriers to the application of these clinical calculators into existing EHR systems include the need for real-time calculation, human-calculator interaction, and data source requirements. The objective of this study was to identify, classify, and evaluate the use of available clinical calculators for clinicians in the hospital setting. Dedicated online resources with medical calculators and providers of aggregated medical information were queried for readily available clinical calculators. Calculators were mapped by clinical categories, mechanism of calculation, and the goal of calculation. Online statistics from selected Internet resources and clinician opinion were used to assess the use of clinical calculators. One hundred seventy-six readily available calculators in 4 categories, 6 primary specialties, and 40 subspecialties were identified. The goals of calculation included prediction, severity, risk estimation, diagnostic, and decision-making aid. A combination of summation logic with cutoffs or rules was the most frequent mechanism of computation. Combined results, online resources, statistics, and clinician opinion identified 13 most utilized calculators. Although not an exhaustive list, a total of 176 validated calculators were identified, classified, and evaluated for usefulness. Most of these calculators are used for adult patients in the critical care or internal medicine settings. Thirteen of 176 clinical calculators were determined to be useful in our institution. All of these calculators have an interface for manual input. Copyright © 2016 Elsevier Ireland Ltd. All rights reserved.

  19. Calculations in furnace technology

    CERN Document Server

    Davies, Clive; Hopkins, DW; Owen, WS

    2013-01-01

    Calculations in Furnace Technology presents the theoretical and practical aspects of furnace technology. This book provides information pertinent to the development, application, and efficiency of furnace technology. Organized into eight chapters, this book begins with an overview of the exothermic reactions that occur when carbon, hydrogen, and sulfur are burned to release the energy available in the fuel. This text then evaluates the efficiencies to measure the quantity of fuel used, of flue gases leaving the plant, of air entering, and the heat lost to the surroundings. Other chapters consi

  20. Approximate calculation of integrals

    CERN Document Server

    Krylov, V I

    2006-01-01

    A systematic introduction to the principal ideas and results of the contemporary theory of approximate integration, this volume approaches its subject from the viewpoint of functional analysis. In addition, it offers a useful reference for practical computations. Its primary focus lies in the problem of approximate integration of functions of a single variable, rather than the more difficult problem of approximate integration of functions of more than one variable.The three-part treatment begins with concepts and theorems encountered in the theory of quadrature. The second part is devoted to t

  1. Calculating Speed of Sound

    Science.gov (United States)

    Bhatnagar, Shalabh

    2017-01-01

    Sound is an emerging source of renewable energy, but it has limitations. The main limitation is that the amount of energy that can be extracted from sound is very small, and that is because of the velocity of sound. The velocity of sound changes with the medium. If we could increase the velocity of sound in a medium, we could probably extract more energy from sound and transfer it at a higher rate. To increase the velocity of sound, we should first know the speed of sound. In classical mechanics, speed is the distance travelled by a particle divided by time, whereas velocity is the displacement of the particle divided by time. The speed of sound in dry air at 20 °C (68 °F) is considered to be 343.2 meters per second, and it would not be wrong to say that 343.2 meters per second is the velocity of sound rather than the speed, as it reflects the displacement of the sound, not the total distance the sound wave covered. Sound travels in the form of a mechanical wave, so when calculating the speed of sound, the whole path of the wave should be considered, not just the distance traveled by the sound. In this paper I focus on calculating the actual speed of the sound wave, which can help us extract more energy and make sound travel with a higher velocity.
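
    For reference, the 343.2 m/s figure quoted above follows from the ideal-gas relation a = sqrt(gamma*R*T/M); this standard formula is offered as context, not as the calculation the paper proposes.

        # Ideal-gas speed of sound in dry air at 20 C.
        import math

        gamma = 1.4          # adiabatic index of dry air
        R = 8.314462618      # J/(mol*K)
        M = 0.0289645        # kg/mol, molar mass of dry air
        T = 293.15           # 20 C in kelvin

        a = math.sqrt(gamma * R * T / M)
        print(f"speed of sound at 20 C: {a:.1f} m/s")   # ~343.2 m/s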

  2. Multilayer optical calculations

    CERN Document Server

    Byrnes, Steven J

    2016-01-01

    When light hits a multilayer planar stack, it is reflected, refracted, and absorbed in a way that can be derived from the Fresnel equations. The analysis is treated in many textbooks, and implemented in many software programs, but certain aspects of it are difficult to find explicitly and consistently worked out in the literature. Here, we derive the formulas underlying the transfer-matrix method of calculating the optical properties of these stacks, including oblique-angle incidence, absorption-vs-position profiles, and ellipsometry parameters. We discuss and explain some strange consequences of the formulas in the situation where the incident and/or final (semi-infinite) medium are absorptive, such as calculating $T>1$ in the absence of gain. We also discuss some implementation details like complex-plane branch cuts. Finally, we derive modified formulas for including one or more "incoherent" layers, i.e. very thick layers in which interference can be neglected. This document was written in conjunction with ...
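
    The method can be condensed, for normal incidence, into a few lines built on the characteristic (transfer) matrix of each layer. This is a textbook-style sketch of the approach the paper derives in general, not the author's code; the quarter-wave example data are assumptions.

        # Transfer-matrix reflection/transmission of a planar stack at normal
        # incidence, tested on an ideal single-layer antireflection coating.
        import numpy as np

        def stack_rt(n0, ns, n_layers, d_layers, wavelength):
            # Characteristic-matrix product over the layers.
            m = np.eye(2, dtype=complex)
            for n, d in zip(n_layers, d_layers):
                delta = 2.0 * np.pi * n * d / wavelength
                m = m @ np.array([[np.cos(delta), 1j * np.sin(delta) / n],
                                  [1j * n * np.sin(delta), np.cos(delta)]])
            b, c = m @ np.array([1.0, ns])
            r = (n0 * b - c) / (n0 * b + c)   # reflection amplitude
            t = 2.0 * n0 / (n0 * b + c)       # transmission amplitude
            return r, t

        lam = 550e-9
        n1 = np.sqrt(1.5)                 # ideal AR index for glass (ns = 1.5)
        r, t = stack_rt(1.0, 1.5, [n1], [lam / (4.0 * n1)], lam)
        print(f"R = {abs(r)**2:.2e}")     # ~0: quarter-wave layer kills reflection
        print(f"T = {(1.5 / 1.0) * abs(t)**2:.4f}")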

  3. Representation and calculation of economic uncertainties

    DEFF Research Database (Denmark)

    Schjær-Jacobsen, Hans

    2002-01-01

    To represent the economic uncertainties involved, different procedures have been suggested. This paper discusses the representation of economic uncertainties by intervals, fuzzy numbers and probabilities, including double, triple and quadruple estimates, and the problems of applying the four basic arithmetical operations without introducing additional uncertainties not present in the original economic problem. The paper finally discusses the applicability and limitations of a few computational procedures, based on available computer programs, used for practical economic calculations with uncertain values. (C) 2002 Elsevier Science B.V. All rights reserved.

  4. The CPC Risk Calculator

    DEFF Research Database (Denmark)

    Røder, Martin Andreas; Berg, Kasper Drimer; Loft, Mathias Dyrberg

    2017-01-01

    BACKGROUND: It can be challenging to predict the risk of biochemical recurrence (BR) during follow-up after radical prostatectomy (RP) in men who have undetectable prostate-specific antigen (PSA), even years after surgery. OBJECTIVE: To establish and validate a contemporary nomogram that predicts the absolute risk of BR every year after RP in men with undetectable PSA while accounting for competing risks of death. DESIGN, SETTING, AND PARTICIPANTS: A total of 3746 patients from Rigshospitalet (Copenhagen, Denmark) and Stanford Urology (Stanford, CA, USA) who underwent RP between 1995 and 2013 were included. OUTCOME MEASUREMENTS AND STATISTICAL ANALYSIS: Time to BR was defined as the first PSA result ≥0.2 ng/ml. BR risk was computed using multiple cause-specific Cox regression including preoperative PSA, pT category, RP Gleason score (GS), and surgical margin (R) status; death without BR was treated as a competing risk.

  5. Methods and computer codes for nuclear systems calculations

    Indian Academy of Sciences (India)


  6. Comparing Implementations of a Calculator for Exact Real Number Computation

    Directory of Open Access Journals (Sweden)

    José Raymundo Marcial-Romero

    2012-01-01

    Full Text Available As one of the first theoretical programming languages for computation with real numbers, Real PCF proved impractical because of the parallel constructors it requires for computing basic functions. LRT was later proposed as a variant of Real PCF that avoids parallel constructors by introducing a nondeterministic constructor into the language. This article presents the implementation of a calculator for exact real number computation based on LRT and compares its efficiency with a standard real-number application in an imperative programming language. Finally, the implementation is compared with a standard implementation of exact real number computation based on the signed-digit representation.

  7. Paintings, photographs, and computer graphics are calculated appearances

    Science.gov (United States)

    McCann, John

    2012-03-01

    Painters reproduce the appearances they see, or visualize. The entire human visual system is the first part of that process, providing extensive spatial processing. Painters have used spatial techniques since the Renaissance to render HDR scenes. Silver halide photography responds to the light falling on single film pixels. Film can only mimic the retinal response of the cones at the start of the visual process. Film cannot mimic the spatial processing in humans. Digital image processing can. This talk studies three dramatic visual illusions and uses the spatial mechanisms found in human vision to interpret their appearances.

  8. Computers and Computer Cultures.

    Science.gov (United States)

    Papert, Seymour

    1981-01-01

    Instruction using computers is viewed as different from most other approaches to education, by allowing more than right or wrong answers, by providing models for systematic procedures, by shifting the boundary between formal and concrete processes, and by influencing the development of thinking in many new ways. (MP)

  9. Through-Flow Calculations in Axial Turbomachinery

    Science.gov (United States)

    1976-10-01

    Glassman, Lewis Research Center, NASA SP-290, 1973. 3. Dzung, L.S.: Schaufelgitter mit dicker Hinterkante, Technical Note BBC (unpublished). 4. ... of peak efficiency was taken from: Warner, L.S., ASME Paper 61-WA-37; Glassman, A.J., NASA TN-D-6702. The method for computing incidence losses is ... devise more intelligent flow models which will enable us to do simpler, semi-empirical calculations. One of the things that I have in mind and has not

  10. Theoretical Calculations of Atomic Data for Spectroscopy

    Science.gov (United States)

    Bautista, Manuel A.

    2000-01-01

    Several different approximations and techniques have been developed for the calculation of atomic structure, ionization, and excitation of atoms and ions. These techniques have been used to compute large amounts of spectroscopic data of various levels of accuracy. This paper presents a review of these theoretical methods to help non-experts in atomic physics to better understand the qualities and limitations of various data sources and to assess how reliable spectral models based on those data are.

  11. XML in scientific computing

    CERN Document Server

    Pozrikidis, C

    2013-01-01

    While the extensible markup language (XML) has received a great deal of attention in web programming and software engineering, far less attention has been paid to XML in mainstream computational science and engineering. Correcting this imbalance, XML in Scientific Computing introduces XML to scientists and engineers in a way that illustrates the similarities and differences with traditional programming languages and suggests new ways of saving and sharing the results of scientific calculations. The author discusses XML in the context of scientific computing, demonstrates how the extensible stylesheet language (XSL) can be used to perform various calculations, and explains how to create and navigate through XML documents using traditional languages such as Fortran, C++, and MATLAB®. A suite of computer programs is available on the author's website.

  12. Computation of the flow structure of a hydrogen/air mixture downstream of a steady system of shock waves; Calcul de la structure de l'ecoulement d'un melange air-hydrogene a l'aval d'un systeme stationnaire d'ondes de choc

    Energy Technology Data Exchange (ETDEWEB)

    D'Angelo, Y. [CERMICS, INRIA, 06 - Sophia-Antipolis (France)

    1997-07-01

    This paper deals with the analysis of the flow structure when a supersonic air-hydrogen mixture encounters a deflection ramp. We are interested in the conditions of a deflection, a normal reflection, or a Mach reflection, involving a portion of a curved quasi-normal shock wave. Behind this last shock, due to the rise in temperature, one may expect combustion to be stabilized. To conduct the analysis, we first determine the physical state of the flow by computing the 'deflected-shock' and 'reflected-shock' polars, the deflection angle and the incident Mach number being given. These polars are computed in both cases, assuming no or complete combustion behind the shock, and taking into account two models for the enthalpy of the gas mixture (an affine function of the temperature or a fifth-degree polynomial of the temperature). We thus show that the combustion effects cannot be neglected when predicting the structure of the flow and the ignition lengths, and that the realistic model leads to highly diverging quantitative results in comparison with the usual simplified model. For a given configuration, we have carried out the complete calculation of the Mach reflection. It should be noted that, in this particular case, a 'very hot' region is observed near the point where the three shocks meet, a region where the temperature is significantly higher than in the portion behind the normal shock. (author) 13 refs.

  13. SEECAL: Program to calculate age-dependent specific effective energies

    Energy Technology Data Exchange (ETDEWEB)

    Cristy, M.; Eckerman, K.F.

    1993-12-01

    This report describes the computer program SEECAL, which calculates specific effective energies (SEE) to specified target regions for ages newborn, 1 y, 5 y, 10 y, 15 y, a 70-kg adult male, and a 58-kg adult female. The dosimetric methodology is that of the International Commission on Radiological Protection (ICRP) and is generally consistent with the schema of the Medical Internal Radiation Dose committee of the US Society of Nuclear Medicine. Computation of SEEs is necessary in the computation of equivalent dose rate in a target region, for occupational or public exposure to radionuclides taken into the body. Program SEECAL replaces the program SEE that was previously used by the Dosimetry Research Group at Oak Ridge National Laboratory. The program SEE was used in the dosimetric calculations for occupational exposures for ICRP Publication 30 and is limited to adults. SEECAL was used to generate age-dependent SEEs for ICRP Publication 56, Part 1. SEECAL is also incorporated into DCAL, a radiation dose and risk calculational system being developed for the Environmental Protection Agency. Electronic copies of the program and data files and this report are available from the Radiation Shielding Information Center at Oak Ridge National Laboratory.
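
    The way SEE values enter a dose computation can be sketched as a matrix-vector product; the organs, SEE matrix, and activities below are invented for illustration and are not SEECAL output.

        # Equivalent dose rate per target = sum over sources of
        # (activity in source) x (SEE from source to target).
        import numpy as np

        targets = ["liver", "lungs", "red marrow"]

        # Hypothetical SEE matrix [Sv per (Bq*s)]; rows = targets,
        # columns = source regions (liver, lungs).
        see = np.array([[4.0e-16, 2.0e-17],
                        [2.0e-17, 6.0e-16],
                        [5.0e-17, 8.0e-17]])

        activity = np.array([1.0e6, 2.5e5])   # Bq in each source region (made up)

        dose_rate = see @ activity            # Sv/s in each target region
        for t, h in zip(targets, dose_rate):
            print(f"{t:>11s}: {h:.2e} Sv/s")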

  14. Cloud Computing

    Indian Academy of Sciences (India)

    IAS Admin

    2014-03-01

    ... a group of computers connected to the Internet in a cloud-like boundary (Box 1). In essence, computing is transitioning from an era of users owning computers to one in which users do not own computers but have access to computing hardware and software maintained by providers. Users access the ...

  15. Recent computational chemistry

    Energy Technology Data Exchange (ETDEWEB)

    Onishi, Taku [Department of Chemistry for Materials, and The Center of Ultimate Technology on nano-Electronics, Mie University (Japan); Center for Theoretical and Computational Chemistry, Department of Chemistry, University of Oslo (Norway)

    2015-12-31

    Thanks to earlier developments in quantum theory and calculation methods, we can now investigate quantum phenomena in real materials and molecules, and design functional materials by computation. As limits and problems still exist in theory, cooperation between theory and computation is becoming more important for clarifying unknown quantum mechanisms and for discovering more efficient functional materials; this is likely to become the next-generation standard. Finally, our theoretical methodology for boundary solids is introduced.

  16. Parallelizing Gaussian Process Calculations in R

    Directory of Open Access Journals (Sweden)

    Christopher J. Paciorek

    2015-02-01

    Full Text Available We consider parallel computation for Gaussian process calculations to overcome computational and memory constraints on the size of datasets that can be analyzed. Using a hybrid parallelization approach that uses both threading (shared memory) and message-passing (distributed memory), we implement the core linear algebra operations used in spatial statistics and Gaussian process regression in an R package called bigGP that relies on C and MPI. The approach divides the covariance matrix into blocks such that the computational load is balanced across processes while communication between processes is limited. The package provides an API enabling R programmers to implement Gaussian process-based methods by using the distributed linear algebra operations without any C or MPI coding. We illustrate the approach and software by analyzing an astrophysics dataset with n = 67,275 observations.

  17. Massively parallel self-consistent-field calculations

    Energy Technology Data Exchange (ETDEWEB)

    Tilson, J.L.

    1994-10-29

    The advent of supercomputers with many computational nodes, each with its own independent memory, makes possible extremely fast computations. The author's work, as part of the US High Performance Computing and Communications Program (HPCCP), is focused on the development of electronic structure techniques for the solution of Grand Challenge-size molecules containing hundreds of atoms. These efforts have resulted in a fully scalable Direct-SCF program that is portable and efficient. This code, named NWCHEM, is built around a distributed-data model. The distributed data are managed by a software package called Global Arrays, developed within the HPCCP. Performance results are presented for Direct-SCF calculations of interest to the consortium.

  18. Calculation of aberration coefficients by ray tracing.

    Science.gov (United States)

    Oral, M; Lencová, B

    2009-10-01

    In this paper we present an approach for the calculation of aberration coefficients using accurate ray tracing. For a given optical system, the intersections of a large number of trajectories with a given plane are computed. In the Gaussian image plane, the imaging with the selected optical system can be described by paraxial and aberration coefficients (geometric and chromatic), which can be calculated by least-squares fitting of the analytical model to the computed trajectory positions. An advantage of this way of computing the aberration coefficients is that, in comparison with aberration integrals and the differential algebra method, it is relatively easy to use and its complexity stays almost constant as the complexity of the optical system grows. This paper presents a tested procedure for choosing proper initial conditions and computing the coefficients of the fifth-order geometrical and third-order, first-degree chromatic aberrations by ray tracing, on the example of a weak electrostatic lens. The results are compared with the values for the same lens from the paper by Liu [Ultramicroscopy 106 (2006) 220-232].
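
    The fitting step can be illustrated with synthetic data standing in for the ray tracer; the polynomial model and coefficients below are assumptions, not the lens of the paper.

        # Least-squares extraction of paraxial and aberration-like coefficients
        # from (synthetic) ray-trace results.
        import numpy as np

        rng = np.random.default_rng(2)
        alpha = rng.uniform(-0.05, 0.05, 400)   # initial ray slopes (rad)
        x0 = rng.uniform(-1e-4, 1e-4, 400)      # initial ray positions (m)

        # Synthetic 'traced' image positions: magnification, a linear slope
        # term, and a third-order (spherical-aberration-like) term.
        m_true, c1_true, c3_true = -2.0, 1.0e-3, 0.5
        x_img = m_true * x0 + c1_true * alpha + c3_true * alpha**3
        x_img += rng.normal(0.0, 1e-9, alpha.size)   # numerical noise

        design = np.column_stack([x0, alpha, alpha**3])
        coef, *_ = np.linalg.lstsq(design, x_img, rcond=None)
        print("fitted m, C1, C3:", coef)
        print("true   m, C1, C3:", [m_true, c1_true, c3_true])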

  19. Undergraduate paramedic students cannot do drug calculations

    Science.gov (United States)

    Eastwood, Kathryn; Boyle, Malcolm J; Williams, Brett

    2012-01-01

    BACKGROUND: Previous investigation of the drug calculation skills of qualified paramedics has highlighted poor mathematical ability, with no published studies having been undertaken on undergraduate paramedics. There are three major error classifications. Conceptual errors involve an inability to formulate an equation from information given, arithmetical errors involve an inability to operate a given equation, and finally computational errors are simple errors of addition, subtraction, division and multiplication. The objective of this study was to determine if undergraduate paramedics at a large Australian university could accurately perform common drug calculations and basic mathematical equations normally required in the workplace. METHODS: A cross-sectional study using a paper-based questionnaire was administered to undergraduate paramedic students to collect demographical data, student attitudes regarding their drug calculation performance, and answers to a series of basic mathematical and drug calculation questions. Ethics approval was granted. RESULTS: The mean score of correct answers was 39.5%, with one student scoring 100%, 3.3% of students (n=3) scoring greater than 90%, and 63% (n=58) scoring 50% or less, despite 62% (n=57) of the students stating they 'did not have any drug calculation issues'. On average, those who completed a minimum of year 12 Specialist Maths achieved scores over 50%. Conceptual errors made up 48.5%, arithmetical 31.1% and computational 17.4%. CONCLUSIONS: This study suggests undergraduate paramedics have deficiencies in performing accurate calculations, with conceptual errors indicating a fundamental lack of mathematical understanding. The results suggest an unacceptable level of mathematical competence to practice safely in the unpredictable prehospital environment. PMID:25215067

  20. Coil current and vacuum magnetic flux calculation for axisymmetric equilibria

    Science.gov (United States)

    Guazzotto, L.

    2017-12-01

    In fixed-boundary axisymmetric equilibrium calculations, the plasma shape is assigned from input. In several circumstances, the plasma shape may not be known a priori, or one may desire to also compute the magnetic field in the volume surrounding the plasma through the calculation of a free-boundary equilibrium. This requires either the coil currents or the magnetic poloidal flux on a curve in the vacuum region to be assigned as input for the free-boundary equilibrium calculation. The FREE-FIX code presented in this article is a general tool for calculating coil currents given a fixed-boundary calculation. A new formulation is presented which considerably reduces the computational cost of the calculation. FREE-FIX performs well for different geometries and experiments.

  1. Numerical calculation of the ground state of Helium atom using ...

    African Journals Online (AJOL)

    Hylleraas did the calculation of the ground state in 1926 using the variational parameter a. In this paper we trace Hylleraas' historic calculation; the use of a computer enables us to improve the approximation found by Hylleraas. The program was written in the FORTRAN language and designed in such a way that for a particular value ...
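
    The one-parameter calculation referred to above has a compact closed form: in atomic units the trial energy is E(a) = a^2 - 2Za + (5/8)a for helium (Z = 2), minimized at a = Z - 5/16. The sketch below reproduces the classic -2.8477 hartree estimate (the exact value is about -2.9037); it is a modern restatement, not the FORTRAN program of the paper.

        # Single-parameter variational estimate of the helium ground state.
        from scipy.optimize import minimize_scalar

        Z = 2.0

        def energy(a):
            # Expectation value of H for exp(-a r1) exp(-a r2), in hartrees.
            return a**2 - 2.0 * Z * a + 0.625 * a

        res = minimize_scalar(energy, bounds=(0.5, 3.0), method="bounded")
        print(f"optimal a : {res.x:.4f} (analytic: Z - 5/16 = {Z - 5/16:.4f})")
        print(f"E minimum : {res.fun:.4f} hartree (exact: about -2.9037)")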

  2. Quantum Computation

    Indian Academy of Sciences (India)

    Quantum Computation - Particle and Wave Aspects of Algorithms. Apoorva Patel. General Article, Volume 16, Issue 9. Keywords: Boolean logic; computation; computational complexity; digital language; Hilbert space; qubit; superposition; Feynman.

  3. Spreadsheet Based Scaling Calculations and Membrane Performance

    Energy Technology Data Exchange (ETDEWEB)

    Wolfe, T D; Bourcier, W L; Speth, T F

    2000-12-28

    Many membrane element manufacturers provide a computer program to aid buyers in the use of their elements. However, to date there are few examples of fully integrated public-domain software available for calculating reverse osmosis and nanofiltration system performance. The Total Flux and Scaling Program (TFSP), written for Excel 97 and above, provides designers and operators new tools to predict membrane system performance, including scaling and fouling parameters, for a wide variety of membrane system configurations and feedwaters. The TFSP development was funded under EPA contract 9C-R193-NTSX. It is freely downloadable at www.reverseosmosis.com/download/TFSP.zip. TFSP includes detailed calculations of reverse osmosis and nanofiltration system performance. Of special significance, the program provides scaling calculations for mineral species not normally addressed in commercial programs, including aluminum, iron, and phosphate species. In addition, ASTM calculations for common species such as calcium sulfate (CaSO₄·2H₂O), BaSO₄, SrSO₄, SiO₂, and LSI are also provided. Scaling calculations in commercial membrane design programs are normally limited to the common minerals and typically follow basic ASTM methods, which are for the most part graphical approaches adapted to curves. In TFSP, the scaling calculations for the less common minerals use subsets of the USGS PHREEQE and WATEQ4F databases and use the same general calculational approach as PHREEQE and WATEQ4F. The activities of ion complexes are calculated iteratively. Complexes that are unlikely to form in significant concentration were eliminated to simplify the calculations. The calculation provides the distribution of ions and ion complexes that is used to calculate an effective ion product "Q". The effective ion product is then compared to temperature-adjusted solubility products (Ksp's) of solids in order to calculate a Saturation Index (SI).
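
    The saturation-index step reduces to SI = log10(Q/Ksp). The sketch below uses a toy ion-activity product for gypsum; the activities and Ksp are order-of-magnitude placeholders, not TFSP database values.

        # Saturation index from an effective ion-activity product Q.
        import math

        a_ca = 2.0e-3        # activity of Ca2+ (assumed)
        a_so4 = 1.5e-3       # activity of SO4 2- (assumed)
        ksp_gypsum = 2.5e-5  # rough solubility product near 25 C

        q = a_ca * a_so4                  # effective ion-activity product
        si = math.log10(q / ksp_gypsum)   # SI > 0: scaling thermodynamically favored

        print(f"Q = {q:.2e}, SI = {si:.2f}")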

  4. Computer Music

    Science.gov (United States)

    Cook, Perry R.

    This chapter covers algorithms, technologies, computer languages, and systems for computer music. Computer music involves the application of computers and other digital/electronic technologies to music composition, performance, theory, history, and the study of perception. The field combines digital signal processing, computational algorithms, computer languages, hardware and software systems, acoustics, psychoacoustics (low-level perception of sounds from the raw acoustic signal), and music cognition (higher-level perception of musical style, form, emotion, etc.).

  5. The experience of GPU calculations at Lunarc

    Science.gov (United States)

    Sjöström, Anders; Lindemann, Jonas; Church, Ross

    2011-09-01

    To meet the ever increasing demand for computational speed and the use of ever larger datasets, multi-GPU installations look very tempting. Lunarc and the Theoretical Astrophysics group at Lund Observatory collaborate on a pilot project to evaluate and utilize multi-GPU architectures for scientific calculations. Starting with a small workshop in 2009, continued investigations eventually led to the procurement of the GPU resource Timaeus, which is a four-node, eight-GPU cluster with two Nvidia M2050 GPU cards per node. The resource is housed within the larger cluster Platon and shares disk, network and system resources with that cluster. The inauguration of Timaeus coincided with the meeting "Computational Physics with GPUs" in November 2010, hosted by the Theoretical Astrophysics group at Lund Observatory. The meeting comprised a two-day workshop on GPU computing and a two-day science meeting on using GPUs as a tool for computational physics research, with a particular focus on astrophysics and computational biology. Today Timaeus is used by research groups from Lund, Stockholm and Luleå in fields ranging from astrophysics to molecular chemistry. We are investigating the use of GPUs with commercial software packages and user-supplied MPI-enabled codes. Looking ahead, Lunarc will be installing a new cluster during the summer of 2011 which will have a small number of GPU-enabled nodes that will enable us to continue working with the combination of parallel codes and GPU computing. It is clear that the combination of GPUs/CPUs is becoming an important part of high-performance computing, and here we describe what has been done at Lunarc regarding GPU computations and how we will continue to investigate the new and coming multi-GPU servers and how they can be utilized in our environment.

  6. Calculations of optical rotation: Influence of molecular structure

    Directory of Open Access Journals (Sweden)

    Yu Jia

    2012-01-01

    Full Text Available Ab initio Hartree-Fock (HF) and Density Functional Theory (DFT) methods were used to calculate the optical rotation of 26 chiral compounds. The effects of the level of theory and basis set used for the calculation, and the influence of solvents on the geometry and on the calculated optical rotation values, are all discussed. Including the polarizable continuum model in the calculation did not improve the accuracy effectively, but it was superior to γs. The optical rotations of five- or six-membered cyclic compounds were calculated, and the 17 pyrrolidine or piperidine derivatives calculated by the HF and DFT methods gave acceptable predictions. The nitrogen atom affects the calculated results dramatically and needs to be present in the molecular structure in order to obtain an accurate computational result; namely, when the nitrogen atom in the ring was substituted by an oxygen atom, the calculated results deteriorated.

  7. An atlas of functions: with equator, the atlas function calculator

    National Research Council Canada - National Science Library

    Oldham, Keith

    2008-01-01

    ... of arguments. The first edition of An Atlas of Functions, the product of collaboration between a mathematician and a chemist, appeared during an era when the programmable calculator was the workhorse for the numerical evaluation of functions. That role has now been taken over by the omnipresent computer, and therefore the second edition delegates this duty to Equator, the Atlas function calculator. This is a software program that, as well as carrying out other tasks, will calculate va...

  8. Digital computers in action

    CERN Document Server

    Booth, A D

    1965-01-01

    Digital Computers in Action is an introduction to the basics of digital computers as well as their programming and various applications in fields such as mathematics, science, engineering, economics, medicine, and law. Other topics include engineering automation, process control, special purpose games-playing devices, machine translation and mechanized linguistics, and information retrieval. This book consists of 14 chapters and begins by discussing the history of computers, from the idea of performing complex arithmetical calculations to the emergence of a modern view of the structure of a ge

  9. Calculation of Weighted Geometric Dilution of Precision

    Directory of Open Access Journals (Sweden)

    Chien-Sheng Chen

    2013-01-01

    Full Text Available To achieve high accuracy in wireless positioning systems, both accurate measurements and a good geometric relationship between the mobile device and the measurement units are required. Geometric dilution of precision (GDOP) is widely used as a criterion for selecting measurement units, since it represents the geometric effect on the relationship between measurement error and positioning determination error. In the calculation of GDOP values, the maximum volume method does not necessarily guarantee the selection of the optimal four measurement units with minimum GDOP. The conventional matrix inversion method for GDOP calculation demands a large amount of operations and causes high power consumption. To select the subset of the most appropriate measurement units which give the minimum positioning error, we need to consider not only the GDOP effect but also the error statistics. In this paper, we employ the weighted GDOP (WGDOP), instead of GDOP, to select measurement units so as to improve the accuracy of location. Handheld global positioning system (GPS) devices and mobile phones with GPS chips can provide only limited calculation ability and power capacity, so it is imperative to obtain WGDOP accurately and efficiently. This paper proposes two formulations of WGDOP with less computation when four measurements are available for location purposes. The proposed formulae reduce the computational complexity required for computing the matrix inversion. The simpler WGDOP formulae for both 2D and 3D location estimation, without inverting a matrix, can be applied not only to GPS but also to wireless sensor networks (WSN) and cellular communication systems. Furthermore, the proposed formulae provide a precise solution of the WGDOP calculation without incurring any approximation error.
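
    For contrast with the paper's closed-form formulae, the brute-force definition they avoid is easy to state; the satellite geometry and weights below are made up for illustration.

        # GDOP and weighted GDOP from a geometry matrix H whose rows are unit
        # line-of-sight vectors augmented with 1 for the receiver clock bias.
        import numpy as np

        h = np.array([[ 0.577,  0.577,  0.577, 1.0],
                      [-0.577,  0.577,  0.577, 1.0],
                      [ 0.577, -0.577,  0.577, 1.0],
                      [ 0.0,    0.0,    1.0,   1.0]])

        w = np.diag([1.0, 0.8, 0.6, 0.9])   # per-measurement weights (assumed)

        gdop = np.sqrt(np.trace(np.linalg.inv(h.T @ h)))
        wgdop = np.sqrt(np.trace(np.linalg.inv(h.T @ w @ h)))
        print(f"GDOP  = {gdop:.2f}")
        print(f"WGDOP = {wgdop:.2f}")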

  10. Entropy in spin foam models: the statistical calculation

    Energy Technology Data Exchange (ETDEWEB)

    Garcia-Islas, J Manuel, E-mail: jmgislas@leibniz.iimas.unam.m [Instituto de Investigaciones en Matematicas Aplicadas y en Sistemas, Universidad Nacional Autonoma de Mexico, UNAM, A. Postal 20-726, 01000, Mexico DF (Mexico)

    2010-07-21

    Recently an idea for computing the entropy of black holes in the spin foam formalism has been introduced. In particular, complete calculations for the three-dimensional Euclidean BTZ black hole were performed. The whole calculation is based on observables living at the horizon of the black hole universe. Departing from this idea of observables living at the horizon, we now go further and compute the entropy of the BTZ black hole in the spirit of statistical mechanics. We compare both calculations and show that they are closely interrelated and equally valid. This latter behaviour is certainly due to the importance of the observables.

  11. Consequences and Limitations of Conventional Computers and their Solutions through Quantum Computers

    OpenAIRE

    Nilesh BARDE; Thakur, Deepak; Pranav BARDAPURKAR; Sanjaykumar DALVI

    2012-01-01

    Quantum computing is a current topic of research in the field of computational science which uses the principles of quantum mechanics. Quantum computers will be much more powerful than classical computers due to their enormous computational speed. Recent developments in quantum computers, which are based on the laws of quantum mechanics, show different ways of performing efficient calculations, along with various results that are not possible on classical computers in an efficient peri

  12. Computer-aided solvent screening for biocatalysis

    NARCIS (Netherlands)

    Abildskov, J.; Leeuwen, van M.B.; Boeriu, C.G.; Broek, van den L.A.M.

    2013-01-01

    A computer-aided solvent screening methodology is described and tested for biocatalytic systems composed of enzyme, essential water and substrates/products dissolved in a solvent medium, without cells. The methodology is computationally simple, using group contribution methods for calculating

  13. Computational invariant theory

    CERN Document Server

    Derksen, Harm

    2015-01-01

    This book is about the computational aspects of invariant theory. Of central interest is the question how the invariant ring of a given group action can be calculated. Algorithms for this purpose form the main pillars around which the book is built. There are two introductory chapters, one on Gröbner basis methods and one on the basic concepts of invariant theory, which prepare the ground for the algorithms. Then algorithms for computing invariants of finite and reductive groups are discussed. Particular emphasis lies on interrelations between structural properties of invariant rings and computational methods. Finally, the book contains a chapter on applications of invariant theory, covering fields as disparate as graph theory, coding theory, dynamical systems, and computer vision. The book is intended for postgraduate students as well as researchers in geometry, computer algebra, and, of course, invariant theory. The text is enriched with numerous explicit examples which illustrate the theory and should be ...

  14. Point Defect Calculations in Tungsten

    National Research Council Canada - National Science Library

    Danilowicz, Ronald

    1968-01-01

    .... The vacancy migration energy for tungsten was calculated. The calculated value of 1.73 electron volts, together with experimental data, suggests that vacancies migrate in stage III recovery in tungsten...

  15. Computing Logarithms by Hand

    Science.gov (United States)

    Reed, Cameron

    2016-01-01

    How can old-fashioned tables of logarithms be computed without technology? Today, of course, no practicing mathematician, scientist, or engineer would actually use logarithms to carry out a calculation, let alone worry about deriving them from scratch. But high school students may be curious about the process. This article develops a…
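
    One classic hand method the article alludes to can be stated in a few lines: squaring a number doubles its logarithm, so repeated squaring extracts the binary digits of log10(x). This sketch is offered as an illustration of the genre, not necessarily the article's own construction.

        # Binary digits of log10(x) by repeated squaring, for 1 <= x < 10.
        import math

        def log10_by_squaring(x, digits=30):
            assert 1.0 <= x < 10.0
            result, weight = 0.0, 0.5
            for _ in range(digits):
                x *= x               # squaring doubles log10(x)
                if x >= 10.0:        # the doubled log has integer part 1
                    x /= 10.0
                    result += weight
                weight /= 2.0
            return result

        print(log10_by_squaring(2.0))   # ~0.30103
        print(math.log10(2.0))          # reference value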

  16. Computer Technology for Industry

    Science.gov (United States)

    1982-01-01

    Shell Oil Company used a COSMIC program called VISCEL to ensure the accuracy of the company's new computer code for analyzing polymers and chemical compounds. Shell reported that no other programs were available that could provide the necessary calculations. Shell produces chemicals for plastic products used in the manufacture of automobiles, housewares, appliances, film, textiles, electronic equipment and furniture.

  17. Efforts to transform computers reach milestone

    CERN Multimedia

    Johnson, G

    2001-01-01

    Scientists in San Jose, California, have performed the most complex calculation ever using a quantum computer: factoring the number 15. In contrast to the switches in conventional computers, which although tiny consist of billions of atoms, quantum computations are carried out by manipulating single atoms. The laws of quantum mechanics which govern these actions in fact mean that multiple computations could be done in parallel, which would drastically cut down the time needed to carry out very complex calculations.

  18. Analog computing

    CERN Document Server

    Ulmann, Bernd

    2013-01-01

    This book is a comprehensive introduction to analog computing. As most textbooks about this powerful computing paradigm date back to the 1960s and 1970s, it fills a void and forges a bridge from the early days of analog computing to future applications. The idea of analog computing is not new. In fact, this computing paradigm is nearly forgotten, although it offers a path to both high-speed and low-power computing, which are in even more demand now than they were back in the heyday of electronic analog computers.

  19. Une méthode de calcul par éléments finis de la résistence de vague des corps flottants ou immergés en théorie linéaire A Finite Elements Method for Computing the Resistance of Floating Or Submerged Bodies to Wave Action Using a Linear Theory

    Directory of Open Access Journals (Sweden)

    Cariou A.

    2006-11-01

    Full Text Available To compute the flow potential around a body in uniform rectilinear motion, either in an unlimited fluid (submarine craft) or on an infinite sea (a body floating near the free surface), consideration must be given to the outside Neumann problem or to the Neumann-Kelvin problem. To solve these problems, this article proposes to delimit a finite fluid domain around the body, whose boundaries are: (1) the body (SC); (2) a surface (SE) surrounding the body; and eventually (3) the portion of free surface (SL) bounded by the waterlines of SC and SE. The solution inside the finite domain is determined by a finite elements method, and it is matched to the solution in the infinite domain, which in turn is computed using the Green functions of the problem (or elementary solutions).

  20. Computational composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.; Redström, Johan

    2007-01-01

    Computational composite is introduced as a new type of composite material. Arguing that this is not just a metaphorical maneuver, we provide an analysis of computational technology as material in design, which shows how computers share important characteristics with other materials used in design and architecture. We argue that the notion of computational composites provides a precise understanding of the computer as material, and of how computations need to be combined with other materials to come to expression as material. Besides working as an analysis of computers from a designer's point of view, the notion of computational composites may also provide a link for computer science and human-computer interaction to an increasingly rapid development and use of new materials in design and architecture.

  1. Quantum computing

    OpenAIRE

    Traub, Joseph F.

    2014-01-01

    The aim of this thesis was to explain what quantum computing is. The information for the thesis was gathered from books, scientific publications, and news articles. The analysis of the information revealed that quantum computing can be broken down to three areas: theories behind quantum computing explaining the structure of a quantum computer, known quantum algorithms, and the actual physical realizations of a quantum computer. The thesis reveals that moving from classical memor...

  2. Relativistic Few-Body Hadronic Physics Calculations

    Energy Technology Data Exchange (ETDEWEB)

    Polyzou, Wayne [Univ. of Iowa, Iowa City, IA (United States)

    2016-06-20

    The goal of this research proposal was to use "few-body" methods to understand the structure and reactions of systems of interacting hadrons (neutrons, protons, mesons, quarks) over a broad range of energy scales. Realistic mathematical models of few-hadron systems have the advantage that they are sufficiently simple that they can be solved with mathematically controlled errors. These systems are also simple enough that it is possible to perform complete, accurate experimental measurements on them. Comparison between theory and experiment puts strong constraints on the structure of the models. Even though these systems are "simple", both the experiments and the computations push the limits of technology. The important property of "few-body" systems is that the "cluster property" implies that the interactions that appear in few-body systems are identical to the interactions that appear in complicated many-body systems. Of particular interest are models that correctly describe physics at distance scales that are sensitive to the internal structure of the individual nucleons. The Heisenberg uncertainty principle implies that in order to be sensitive to physics on distance scales that are a fraction of the proton or neutron radius, a relativistic treatment of quantum mechanics is necessary. The research supported by this grant involved 30 years of effort devoted to studying all aspects of interacting two- and three-body systems. Realistic interactions were used to compute bound states of two- and three-nucleon, and two- and three-quark systems. Scattering observables for these systems were computed for a broad range of energies - from zero-energy scattering to few-GeV scattering, where experimental evidence of sub-nucleon degrees of freedom is beginning to appear. Benchmark calculations were produced, which when compared with calculations of other groups provided an essential check on these complicated calculations. In

  3. Elliptic curves a computational approach

    CERN Document Server

    Schmitt, Susanne; Pethö, Attila

    2003-01-01

    The basics of the theory of elliptic curves should be known to everybody, be he (or she) a mathematician or a computer scientist. Especially everybody concerned with cryptography should know the elements of this theory. The purpose of the present textbook is to give an elementary introduction to elliptic curves. Since this branch of number theory is particularly accessible to computer-assisted calculations, the authors make use of it by approaching the theory from a computational point of view. Specifically, the computer-algebra package SIMATH can be applied on several occasions. However, the book can also be read by those not interested in any computations. Of course, the theory of elliptic curves is very comprehensive and becomes correspondingly sophisticated. That is why the authors made a choice of the topics treated. Topics covered include the determination of torsion groups, computations regarding the Mordell-Weil group, height calculations, and S-integral points. The contents are kept as elementary as poss...
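
    As a taste of the computer-assisted calculations the book is built around, here is a minimal sketch of textbook point addition on an elliptic curve over a prime field (an illustration only; the book itself uses the SIMATH package, and the curve and point below are toy values):

```python
# Hypothetical sketch: affine point addition on y^2 = x^3 + ax + b over GF(p).
def ec_add(P, Q, a, p):
    """Add two points on the curve; None represents the point at infinity."""
    if P is None:
        return Q
    if Q is None:
        return P
    (x1, y1), (x2, y2) = P, Q
    if x1 == x2 and (y1 + y2) % p == 0:
        return None                                         # P + (-P) = O
    if P == Q:
        lam = (3 * x1 * x1 + a) * pow(2 * y1, -1, p) % p    # tangent slope
    else:
        lam = (y2 - y1) * pow(x2 - x1, -1, p) % p           # chord slope
    x3 = (lam * lam - x1 - x2) % p
    y3 = (lam * (x1 - x3) - y1) % p
    return (x3, y3)

# Example: double the point (5, 1) on y^2 = x^3 + 2x + 2 over GF(17).
print(ec_add((5, 1), (5, 1), a=2, p=17))   # (6, 3)
```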

  4. Gas flow calculation method of a ramjet engine

    Science.gov (United States)

    Kostyushin, Kirill; Kagenov, Anuar; Eremin, Ivan; Zhiltsov, Konstantin; Shuvarikov, Vladimir

    2017-11-01

    In the present study, a calculation methodology for the gas dynamics equations of a ramjet engine is presented. The algorithm is based on Godunov's scheme. To implement the calculation algorithm, a data storage system is proposed that does not depend on mesh topology and allows the use of computational meshes with an arbitrary number of cell faces. An algorithm for building a block-structured grid is given. The calculation algorithm is implemented in the software package "FlashFlow". The software package is verified on calculations of simple configurations of air intakes and scramjet models.
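
    For readers unfamiliar with the building block named above: Godunov's scheme advances cell averages using fluxes obtained from Riemann problems at cell interfaces. The sketch below (not FlashFlow code; a one-dimensional toy with periodic boundaries) shows the scheme in its simplest setting, linear advection, where the Riemann solution reduces to upwinding:

```python
import numpy as np

# Godunov's scheme for u_t + a u_x = 0: the exact interface Riemann solution
# gives the upwind flux, applied in a conservative finite-volume update.
def godunov_advection(u0, a, dx, dt, steps):
    u = u0.copy()
    for _ in range(steps):
        flux = a * (u if a > 0 else np.roll(u, -1))     # flux at interface i+1/2
        u -= dt / dx * (flux - np.roll(flux, 1))        # conservative update
    return u

x = np.linspace(0.0, 1.0, 200, endpoint=False)
u0 = np.exp(-200 * (x - 0.3) ** 2)                      # Gaussian pulse
dx = x[1] - x[0]
u = godunov_advection(u0, a=1.0, dx=dx, dt=0.4 * dx, steps=250)  # CFL = 0.4
```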

  5. An expert system for the calculation of sample size.

    Science.gov (United States)

    Ebell, M H; Neale, A V; Hodgkins, B J

    1994-06-01

    Calculation of sample size is a useful technique for researchers who are designing a study, and for clinicians who wish to interpret research findings. The elements that must be specified to calculate the sample size include alpha and beta (the Type I and Type II error rates), 1- or 2-tailed tests, confidence intervals, and confidence levels. A computer software program written by one of the authors (MHE), Sample Size Expert, facilitates sample size calculations. The program uses an expert system to help inexperienced users calculate sample sizes for analytic and descriptive studies. The software is available at no cost from the author or electronically via several on-line information services.
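
    To make the inputs concrete, here is a minimal sketch (not the Sample Size Expert program) of one standard formula: the per-group sample size for a two-tailed comparison of two proportions, driven exactly by the alpha and beta parameters listed above:

```python
import math
from scipy.stats import norm

# Per-group n for detecting a difference between proportions p1 and p2.
def n_two_proportions(p1, p2, alpha=0.05, power=0.80):
    z_a = norm.ppf(1 - alpha / 2)        # two-tailed Type I error (alpha)
    z_b = norm.ppf(power)                # power = 1 - beta (Type II error)
    var = p1 * (1 - p1) + p2 * (1 - p2)
    return math.ceil((z_a + z_b) ** 2 * var / (p1 - p2) ** 2)

print(n_two_proportions(0.10, 0.20))     # 197 subjects per group
```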

  6. Labview virtual instruments for calcium buffer calculations.

    Science.gov (United States)

    Reitz, Frederick B; Pollack, Gerald H

    2003-01-01

    Labview VIs based upon the calculator programs of Fabiato and Fabiato (J. Physiol. Paris 75 (1979) 463) are presented. The VIs comprise the necessary computations for the accurate preparation of multiple-metal buffers, for the back-calculation of buffer composition given known free metal concentrations and stability constants used, for the determination of free concentrations from a given buffer composition, and for the determination of apparent stability constants from absolute constants. As implemented, the VIs can concurrently account for up to three divalent metals, two monovalent metals and four ligands thereof, and the modular design of the VIs facilitates further extension of their capacity. As Labview VIs are inherently graphical, these VIs may serve as useful templates for those wishing to adapt this software to other platforms.
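
    The flavor of these computations can be shown with a stripped-down, single-metal, single-ligand version of the equilibrium problem the VIs solve for up to three divalent metals and four ligands (a sketch, not the Fabiato and Fabiato algorithm; the constants below are illustrative):

```python
import math

# Free metal concentration for 1:1 metal-ligand binding, from mass balance:
#   K*Mf^2 + (1 + K*(L_total - M_total))*Mf - M_total = 0  (positive root).
def free_metal(M_total, L_total, K):
    b = 1.0 + K * (L_total - M_total)
    disc = b * b + 4.0 * K * M_total
    return (-b + math.sqrt(disc)) / (2.0 * K)

# 1 mM chelator, 0.5 mM total Ca2+, apparent K ~ 1e7 1/M (pH-dependent):
print(free_metal(5e-4, 1e-3, 1e7))       # free [Ca2+] ~ 1e-7 M
```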

  7. Calculation of coherent synchrotron radiation using mesh

    Directory of Open Access Journals (Sweden)

    T. Agoh

    2004-05-01

    Full Text Available We develop a new method to simulate coherent synchrotron radiation numerically. It is based on a mesh calculation of the electromagnetic field in the frequency domain. We make an approximation in the Maxwell equations which allows a mesh size much larger than the relevant wavelength, so that the computing time is tolerable. Using this equation, we can perform a mesh calculation of coherent synchrotron radiation in transient states, with shielding effects by the vacuum chamber. The simulation results obtained by this method are compared with analytic solutions. Although, for the comparison with theory, we adopt simplifications such as a longitudinal Gaussian distribution, zero-width transverse distribution, a horizontal uniform bend, and a vacuum chamber with rectangular cross section, the method is applicable to general cases.

  8. Comparative Study of Daylighting Calculation Methods

    Directory of Open Access Journals (Sweden)

    Mandala Ariani

    2018-01-01

    Full Text Available The aim of this study is to assess five daylighting calculation methods commonly used in architectural studies. The methods include hand calculation (the SNI/DPMB method and the BRE daylighting protractors), scale models studied in an artificial sky simulator, and computer programs using the Dialux and Velux lighting software. The test room is set under uniform sky conditions with a simple room geometry and variations of room reflectance (black, grey, and white). The analysis compares the results (including daylight factor, illuminance, and coefficient of uniformity) and examines their similarities and differences. The reflectance variations are used to analyze the contribution of internal reflection to the results.

  9. Improving the calculation of interdiffusion coefficients

    Science.gov (United States)

    Kapoor, Rakesh R.; Eagar, Thomas W.

    1990-12-01

    Least-squares spline interpolation techniques are reviewed and presented as a mathematical tool for noise reduction and interpolation of diffusion profiles. Numerically simulated diffusion profiles were interpolated using a sixth-order spline. The spline fit data were successfully used in conjunction with the Boltzmann-Matano treatment to compute the interdiffusion coefficient, demonstrating the usefulness of splines as a numerical tool for such calculations. Simulations conducted on noisy data indicate that the technique can extract the correct diffusivity data given compositional data that contain only three digits of information and are contaminated with a noise level of 0.001. Splines offer a reproducible and reliable alternative to graphical evaluation of the slope of a diffusion profile, which is used in the Boltzmann-Matano treatment. Hence, use of splines reduces the numerical errors associated with calculation of interdiffusion coefficients from raw diffusion profile data.
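
    A compact way to reproduce the workflow with off-the-shelf tools (a sketch under stated assumptions, not the authors' code: synthetic data, SciPy's smoothing spline, which is capped at quintic order, and the Boltzmann-Matano relation with x measured from the Matano plane):

```python
import numpy as np
from scipy.interpolate import UnivariateSpline
from scipy.integrate import trapezoid

rng = np.random.default_rng(0)
x = np.linspace(-1e-3, 1e-3, 101)                 # distance from Matano plane (m)
c = 0.5 * (1 - np.tanh(x / 2e-4)) + 0.002 * rng.standard_normal(x.size)
t = 3600.0                                        # diffusion time (s)

# Smoothing spline fit (SciPy caps k at 5; the paper used a sixth-order spline).
spline = UnivariateSpline(x, c, k=5, s=x.size * 0.002 ** 2)
cs, dcdx = spline(x), spline.derivative()(x)

def matano_D(i):
    """D(c) at grid point i via D = -(1/2t)*(dx/dc)*int x dc (avoid the tails)."""
    integral = trapezoid(x[:i + 1], cs[:i + 1])   # integral of x dc up to c(x_i)
    return -integral / (2.0 * t * dcdx[i])

print(matano_D(50))                               # D near mid-composition, m^2/s
```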

  10. Distributed Function Calculation over Noisy Networks

    Directory of Open Access Journals (Sweden)

    Zhidun Zeng

    2016-01-01

    Full Text Available Considering any connected network with unknown initial states for all nodes, the nearest-neighbor rule is utilized by each node to update its own state at every discrete time step. The distributed function calculation problem is defined as one node computing some function of the initial values of all the nodes based on its own observations. In this paper, taking into account uncertainties in the network and in the observations, an algorithm is proposed to compute and explicitly characterize the value of the function in question when the number of successive observations is large enough. When the number of successive observations is not large enough, we provide an approach to obtain the tightest possible bounds on the function by using linear programming optimization techniques. Simulations are provided to demonstrate the theoretical results.
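
    The nearest-neighbor rule itself is a one-line update. The noise-free toy below (made-up states on a small regular graph, where the uniform weights happen to be doubly stochastic) shows how repeated neighbor averaging drives every node to the network-wide average:

```python
import numpy as np

A = np.array([[0, 1, 0, 1],               # adjacency of a 4-node ring
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [1, 0, 1, 0]], dtype=float)
# Uniform self-inclusive averaging; doubly stochastic here because the graph
# is regular, so the consensus value is the exact average of initial states.
W = (A + np.eye(4)) / (A.sum(axis=1, keepdims=True) + 1)
x = np.array([3.0, -1.0, 4.0, 2.0])       # unknown initial states

for _ in range(50):
    x = W @ x                             # nearest-neighbor rule
print(x)                                  # all entries near the average, 2.0
```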

  11. IDA: An implicit, parallelizable method for calculating drainage area

    Science.gov (United States)

    Richardson, Alan; Hill, Christopher N.; Perron, J. Taylor

    2014-05-01

    Models of landscape evolution or hydrological processes typically depend on the accurate determination of upslope drainage area from digital elevation data, but such calculations can be very computationally demanding when applied to high-resolution topographic data. To overcome this limitation, we propose calculating drainage area in an implicit, iterative manner using linear solvers. The basis of this method is a recasting of the flow routing problem as a sparse system of linear equations, which can be solved using established computational techniques. This approach is highly parallelizable, enabling data to be spread over multiple computer processors. Good scalability is exhibited, rendering it suitable for contemporary high-performance computing architectures with many processors, such as graphics processing units (GPUs). In addition, the iterative nature of the computational algorithms we use to solve the linear system creates the possibility of accelerating the solution by providing an initial guess, making the method well suited to iterative calculations such as numerical landscape evolution models. We compare this method with a previously proposed parallel drainage area algorithm and present several examples illustrating its advantages, including a continent-scale flow routing calculation at 3 arc sec resolution, improvements to models of fluvial sediment yield, and acceleration of drainage area calculations in a landscape evolution model. We additionally describe a modification that allows the method to be used for parallel basin delineation.
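
    The recasting is easy to demonstrate at toy scale (a sketch of the idea, not the authors' implementation; single-direction flow routing on a six-cell chain with unit cell areas):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import spsolve

# Drainage area satisfies a_i = c_i + sum of a_j over cells j draining into i,
# i.e. the sparse linear system (I - B) a = c.
n = 6
downstream = [1, 2, 3, 4, 5, None]        # cell i sends its flow to downstream[i]
c = np.ones(n)                            # each cell contributes one unit of area

pairs = [(d, i) for i, d in enumerate(downstream) if d is not None]
rows, cols = zip(*pairs)
B = sp.csr_matrix((np.ones(len(pairs)), (rows, cols)), shape=(n, n))

a = spsolve((sp.eye(n) - B).tocsc(), c)   # solve (I - B) a = c
print(a)                                  # [1. 2. 3. 4. 5. 6.]: accumulates downstream
```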

  12. The Future of Cloud Computing

    Directory of Open Access Journals (Sweden)

    Anamaroa SIclovan

    2011-12-01

    Full Text Available Cloud computing was, and will be, a new way of providing Internet services and computing. This computing approach is based on many existing services, such as the Internet, grid computing, and Web services. Cloud computing as a system aims to provide on-demand services that are more acceptable in price and infrastructure. It is exactly the transition from the computer to a service offered to consumers as a product delivered online. This represents an advantage for the organization regarding both cost and the opportunity for new business. This paper presents the future perspectives in cloud computing and discusses some issues of the cloud computing paradigm. It is a theoretical paper. Keywords: Cloud Computing, Pay-per-use

  13. Ab initio calculations of biomolecules

    Energy Technology Data Exchange (ETDEWEB)

    Les, A. [Department of Chemistry, University of Warsaw, 02-093 Warsaw (Poland); Department of Chemistry, University of Arizona, Tucson, Arizona 85721 (United States)]; Adamowicz, L. [Department of Theoretical Chemistry, University of Lund, Lund S-22100 (Sweden); Department of Chemistry, University of Arizona, Tucson, Arizona 85721 (United States)]

    1995-08-01

    Ab initio quantum mechanical calculations are valuable tools for interpretation and elucidation of elemental processes in biochemical systems. With the ab initio approach one can calculate data that sometimes are difficult to obtain by experimental techniques. The most popular computational theoretical methods include the Hartree-Fock method as well as some lower-level variational and perturbational post-Hartree-Fock approaches which allow one to predict molecular structures and to calculate spectral properties. We have been involved in a number of joint theoretical and experimental studies in the past, and some examples of these studies are given in this presentation. The systems chosen cover a wide variety of simple biomolecules, such as precursors of nucleic acids, double-proton-transferring molecules, and simple systems involved in processes related to the first stages of substrate-enzyme interactions. In particular, examples of some ab initio calculations used in the assignment of IR spectra of matrix-isolated pyrimidine nucleic bases are shown. Some radiation-induced transformations in model chromophores are also presented. Lastly, we demonstrate how the ab initio approach can be used to determine the initial several steps of the molecular mechanism of thymidylate synthase inhibition by dUMP analogues.

  14. Calculation of persistent currents in superconducting magnets

    Directory of Open Access Journals (Sweden)

    C. Völlinger

    2000-12-01

    Full Text Available This paper describes a semianalytical hysteresis model for hard superconductors. The model is based on the critical state model, considering the dependence of the critical current density on the varying local field in the superconducting filaments. By combining this hysteresis model with numerical field computation methods, it is possible to calculate the persistent-current multipole errors in the magnet, taking local saturation effects in the magnetic iron parts into consideration. As an application of the method, the use of soft magnetic iron sheets (coil protection sheets, mounted between the coils and the collars) for partial compensation of the multipole errors during the ramping of the magnets is investigated.

  15. Cobalamins uncovered by modern electronic structure calculations

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Ryde, Ulf

    2009-01-01

    This review describes how computational methods have contributed to the field of cobalamin chemistry since the start of the new millennium. Cobalamins are cobalt-dependent cofactors that are used for alkyl transfer and radical initiation by several classes of enzymes. Since the entry of modern electronic-structure calculations, in particular density functional methods, the understanding of the molecular mechanism of cobalamins has changed dramatically, going from a dominating view of trans-steric strain effects to a much more complex view involving an arsenal of catalytic strategies. Among...

  16. Atomic Reference Data for Electronic Structure Calculations

    CERN Document Server

    Kotochigova, S; Shirley, E L

    We have generated data for atomic electronic structure calculations, to provide a standard reference for results of specified accuracy under commonly used approximations. Results are presented here for total energies and orbital energy eigenvalues for all atoms from H to U, at microHartree accuracy in the total energy, as computed in the local-density approximation (LDA); the local-spin-density approximation (LSD); the relativistic local-density approximation (RLDA); and the scalar-relativistic local-density approximation (ScRLDA).

  17. Using reciprocity in Boundary Element Calculations

    DEFF Research Database (Denmark)

    Juhl, Peter Møller; Cutanda Henriquez, Vicente

    2010-01-01

    The concept of reciprocity is widely used in both theoretical and experimental work. In Boundary Element calculations reciprocity is sometimes employed in the solution of computationally expensive scattering problems, which can sometimes be dealt with more efficiently when formulated as the reciprocal radiation problem. The present paper concerns the situation of having a point source (which is reciprocal to a point receiver) at or near a discretized boundary element surface. The accuracy of the original and the reciprocal problem is compared in a test case for which an analytical solution...

  18. Quantum computing

    OpenAIRE

    Li, Shu-shen; Long, Gui-Lu; Bai, Feng-Shan; Feng, Song-Lin; Zheng, Hou-Zhi

    2001-01-01

    Quantum computing is a quickly growing research field. This article introduces the basic concepts of quantum computing, recent developments in quantum searching, and decoherence in a possible quantum dot realization.

  19. Green Computing

    Directory of Open Access Journals (Sweden)

    K. Shalini

    2013-01-01

    Full Text Available Green computing is all about using computers in a smarter and eco-friendly way. It is the environmentally responsible use of computers and related resources, which includes the implementation of energy-efficient central processing units, servers, and peripherals, as well as reduced resource consumption and proper disposal of electronic waste. Computers certainly make up a large part of many people's lives, and traditionally are extremely damaging to the environment. Manufacturers of computers and their parts have been espousing the green cause to help protect the environment from computers and electronic waste in any way. Research continues into key areas such as making the use of computers as energy-efficient as possible, and designing algorithms and systems for efficiency-related computer technologies.

  20. Computers boost structural technology

    Science.gov (United States)

    Noor, Ahmed K.; Venneri, Samuel L.

    1989-01-01

    Derived from matrix methods of structural analysis and finite element methods developed over the last three decades, computational structures technology (CST) blends computer science, numerical analysis, and approximation theory into structural analysis and synthesis. Recent significant advances in CST include stochastic-based modeling, strategies for performing large-scale structural calculations on new computing systems, and the integration of CST with other disciplinary modules for multidisciplinary analysis and design. New methodologies have been developed at NASA for integrated fluid-thermal structural analysis and integrated aerodynamic-structure-control design. The need for multiple views of data for different modules also led to the development of a number of sophisticated data-base management systems. For CST to play a role in the future development of structures technology and in the multidisciplinary design of future flight vehicles, major advances and computational tools are needed in a number of key areas.

  1. Phenomenological Computation?

    DEFF Research Database (Denmark)

    Brier, Søren

    2014-01-01

    Open peer commentary on the article “Info-computational Constructivism and Cognition” by Gordana Dodig-Crnkovic. Upshot: The main problems with info-computationalism are: (1) its basic concept of natural computing has neither been defined theoretically nor implemented practically; (2) it cannot en... cybernetics and Maturana and Varela’s theory of autopoiesis, which are both erroneously taken to support info-computationalism...

  2. Cognitive Computing

    OpenAIRE

    2015-01-01

    "Cognitive Computing" has initiated a new era in computer science. Cognitive computers are not rigidly programmed computers anymore, but they learn from their interactions with humans, from the environment and from information. They are thus able to perform amazing tasks on their own, such as driving a car in dense traffic, piloting an aircraft in difficult conditions, taking complex financial investment decisions, analysing medical-imaging data, and assist medical doctors in diagnosis and th...

  3. Computable models

    CERN Document Server

    Turner, Raymond

    2009-01-01

    Computational models can be found everywhere in present day science and engineering. In providing a logical framework and foundation for the specification and design of specification languages, Raymond Turner uses this framework to introduce and study computable models. In doing so he presents the first systematic attempt to provide computational models with a logical foundation. Computable models have wide-ranging applications from programming language semantics and specification languages, through to knowledge representation languages and formalism for natural language semantics. They are al

  4. Quantum Computing for Computer Architects

    CERN Document Server

    Metodi, Tzvetan

    2011-01-01

    Quantum computers can (in theory) solve certain problems far faster than a classical computer running any known classical algorithm. While existing technologies for building quantum computers are in their infancy, it is not too early to consider their scalability and reliability in the context of the design of large-scale quantum computers. To architect such systems, one must understand what it takes to design and model a balanced, fault-tolerant quantum computer architecture. The goal of this lecture is to provide architectural abstractions for the design of a quantum computer and to explore

  5. Computing fundamentals introduction to computers

    CERN Document Server

    Wempen, Faithe

    2014-01-01

    The absolute beginner's guide to learning basic computer skills Computing Fundamentals, Introduction to Computers gets you up to speed on basic computing skills, showing you everything you need to know to conquer entry-level computing courses. Written by a Microsoft Office Master Instructor, this useful guide walks you step-by-step through the most important concepts and skills you need to be proficient on the computer, using nontechnical, easy-to-understand language. You'll start at the very beginning, getting acquainted with the actual, physical machine, then progress through the most common

  6. Computational Complexity

    Directory of Open Access Journals (Sweden)

    J. A. Tenreiro Machado

    2017-02-01

    Full Text Available Complex systems (CS involve many elements that interact at different scales in time and space. The challenges in modeling CS led to the development of novel computational tools with applications in a wide range of scientific areas. The computational problems posed by CS exhibit intrinsic difficulties that are a major concern in Computational Complexity Theory. [...

  7. Optical Computing

    Indian Academy of Sciences (India)

    tal computers are still some years away, however a number of devices that can ultimately lead to real optical computers have already been manufactured, including optical logic gates, optical switches, optical interconnections, and opti- cal memory. The most likely near-term optical computer will really be a hybrid composed ...

  8. Quantum Computing

    Indian Academy of Sciences (India)

    In the early 1980s Richard Feynman noted that quantum systems cannot be efficiently simulated on a classical computer. Till then the accepted view was that any reasonable model of computation can be efficiently simulated on a classical computer. Hence, this observation led to a lot of rethinking about the basic ...

  9. Pervasive Computing

    NARCIS (Netherlands)

    Silvis-Cividjian, N.

    This book provides a concise introduction to Pervasive Computing, otherwise known as Internet of Things (IoT) and Ubiquitous Computing (Ubicomp) which addresses the seamless integration of computing systems within everyday objects. By introducing the core topics and exploring assistive pervasive

  10. Cloud Computing

    Indian Academy of Sciences (India)

    Cloud computing; services on a cloud; cloud types; computing utility; risks in using cloud computing. Author Affiliations: V Rajaraman, Supercomputer Education and Research Centre, Indian Institute of Science, Bangalore 560 012, India. Resonance – Journal of Science Education.

  11. Computational vision

    CERN Document Server

    Wechsler, Harry

    1990-01-01

    The book is suitable for advanced courses in computer vision and image processing. In addition to providing an overall view of computational vision, it contains extensive material on topics that are not usually covered in computer vision texts (including parallel distributed processing and neural networks) and considers many real applications.

  12. Coupled-cluster calculations of nucleonic matter

    Science.gov (United States)

    Hagen, G.; Papenbrock, T.; Ekström, A.; Wendt, K. A.; Baardsen, G.; Gandolfi, S.; Hjorth-Jensen, M.; Horowitz, C. J.

    2014-01-01

    Background: The equation of state (EoS) of nucleonic matter is central for the understanding of bulk nuclear properties, the physics of neutron star crusts, and the energy release in supernova explosions. Because nuclear matter exhibits a finely tuned saturation point, its EoS also constrains nuclear interactions. Purpose: This work presents coupled-cluster calculations of infinite nucleonic matter using modern interactions from chiral effective field theory (EFT). It assesses the role of correlations beyond particle-particle and hole-hole ladders, and the role of three-nucleon forces (3NFs) in nuclear matter calculations with chiral interactions. Methods: This work employs the optimized nucleon-nucleon (NN) potential NNLOopt at next-to-next-to leading order, and presents coupled-cluster computations of the EoS for symmetric nuclear matter and neutron matter. The coupled-cluster method employs up to selected triples clusters and the single-particle space consists of a momentum-space lattice. We compare our results with benchmark calculations and control finite-size effects and shell oscillations via twist-averaged boundary conditions. Results: We provide several benchmarks to validate the formalism and show that our results exhibit a good convergence toward the thermodynamic limit. Our calculations agree well with recent coupled-cluster results based on a partial wave expansion and particle-particle and hole-hole ladders. For neutron matter at low densities, and for simple potential models, our calculations agree with results from quantum Monte Carlo computations. While neutron matter with interactions from chiral EFT is perturbative, symmetric nuclear matter requires nonperturbative approaches. Correlations beyond the standard particle-particle ladder approximation yield non-negligible contributions. The saturation point of symmetric nuclear matter is sensitive to the employed 3NFs and the employed regularization scheme. 3NFs with nonlocal cutoffs exhibit a

  13. Human Computation

    CERN Multimedia

    CERN. Geneva

    2008-01-01

    What if people could play computer games and accomplish work without even realizing it? What if billions of people collaborated to solve important problems for humanity or generate training data for computers? My work aims at a general paradigm for doing exactly that: utilizing human processing power to solve computational problems in a distributed manner. In particular, I focus on harnessing human time and energy for addressing problems that computers cannot yet solve. Although computers have advanced dramatically in many respects over the last 50 years, they still do not possess the basic conceptual intelligence or perceptual capabilities...

  14. Computers in the Mathematics Curriculum: Spreadsheets | Mereku ...

    African Journals Online (AJOL)

    Today school mathematics stresses the place of information technology, i.e. calculators and computers, in the development of mathematical concepts in students. Calculators (i.e. ordinary, scientific and/or graphic) and computers can be used to provide an ideal environment for teaching the subject. This paper examines ...

  15. Using the Computer in Evolution Studies

    Science.gov (United States)

    Mariner, James L.

    1973-01-01

    Describes a high school biology exercise in which a computer greatly reduces time spent on calculations. Genetic equilibrium demonstrated by the Hardy-Weinberg principle and the subsequent effects of violating any of its premises are more readily understood when frequencies of alleles through many generations are calculated by the computer. (JR)
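
    In the same spirit, the sketch below (illustrative Python, not the original exercise's program) tracks allele frequencies through generations under Hardy-Weinberg assumptions and shows what happens when one premise - no selection - is violated:

```python
# One generation of selection on genotypes AA, Aa, aa with fitnesses w_*;
# with all fitnesses equal to 1, p stays constant (Hardy-Weinberg equilibrium).
def next_generation(p, w_AA=1.0, w_Aa=1.0, w_aa=1.0):
    q = 1.0 - p
    mean_w = p * p * w_AA + 2 * p * q * w_Aa + q * q * w_aa
    return (p * p * w_AA + p * q * w_Aa) / mean_w   # new frequency of allele A

p = 0.5
for gen in range(5):
    print(f"generation {gen}: p = {p:.4f}")
    p = next_generation(p, w_aa=0.8)                # select against genotype aa
```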

  16. Direct Computation on the Kinetic Spectrophotometry

    DEFF Research Database (Denmark)

    Hansen, Jørgen-Walther; Broen Pedersen, P.

    1974-01-01

    This report describes an analog computer designed for calculations of transient absorption from photographed recordings of the oscilloscope trace of the transmitted light intensity. The computer calculates the optical density OD, the natural logarithm of OD, and the natural logarithm of the diffe...
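
    The core relation such a device implements is simple enough to state in a couple of lines; the sketch below is a digital stand-in (the intensity values are made up):

```python
import math

# Optical density from incident (I0) and transmitted (I) light intensities,
# plus ln(OD), one of the quantities the analog computer output.
def optical_density(I0, I):
    OD = math.log10(I0 / I)
    return OD, math.log(OD)

print(optical_density(100.0, 25.0))   # OD ~ 0.602, ln(OD) ~ -0.507
```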

  17. relline: Relativistic line profiles calculation

    Science.gov (United States)

    Dauser, Thomas

    2015-05-01

    relline calculates relativistic line profiles; it is compatible with the common X-ray data analysis software XSPEC (ascl:9910.005) and ISIS (ascl:1302.002). The two basic forms are an additive line model (RELLINE) and a convolution model to calculate relativistic smearing (RELCONV).

  18. Using graphics processors to accelerate protein docking calculations.

    Science.gov (United States)

    Ritchie, David W; Venkatraman, Vishwesh; Mavridis, Lazaros

    2010-01-01

    Protein docking is the computationally intensive task of calculating the three-dimensional structure of a protein complex starting from the individual structures of the constituent proteins. In order to make the calculation tractable, most docking algorithms begin by assuming that the structures to be docked are rigid. This article describes some recent developments we have made to adapt our FFT-based "Hex" rigid-body docking algorithm to exploit the computational power of modern graphics processors (GPUs). The Hex algorithm is very efficient on conventional central processor units (CPUs), yet significant further speed-ups have been obtained by using GPUs. Thus, FFT-based docking calculations which formerly took many hours to complete using CPUs may now be carried out in a matter of seconds using GPUs. The Hex docking program and access to a server version of Hex on a GPU-based compute cluster are both available for public use.
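
    The FFT trick underlying grid-based rigid docking can be illustrated at toy scale (a Cartesian-grid correlation sketch in the spirit of FFT docking generally, not Hex's spherical-polar formulation): the scores for every translation of one grid against another come from a single forward/inverse FFT pair.

```python
import numpy as np

# Circular cross-correlation of two 3-D grids via the correlation theorem:
# scores[t] = sum_x receptor(x + t) * ligand(x), for all shifts t at once.
def correlation_scores(receptor_grid, ligand_grid):
    R = np.fft.fftn(receptor_grid)
    L = np.fft.fftn(ligand_grid)
    return np.real(np.fft.ifftn(R * np.conj(L)))

rng = np.random.default_rng(0)
receptor = rng.random((32, 32, 32))
ligand = np.zeros_like(receptor)
ligand[:8, :8, :8] = receptor[4:12, 4:12, 4:12]    # a fragment of the receptor
scores = correlation_scores(receptor, ligand)
print(np.unravel_index(scores.argmax(), scores.shape))   # recovers the (4, 4, 4) shift
```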

  19. An ab-initio calculation

    Indian Academy of Sciences (India)

    Tufan Roy

    2017-06-19

    Theory and Simulations Laboratory, Human Resources Development Section, Raja Ramanna Centre for Advanced Technology, Indore 452 013, India. We study the magnetic exchange interaction between the atoms for the materials ...

  20. Computer sciences

    Science.gov (United States)

    Smith, Paul H.

    1988-01-01

    The Computer Science Program provides advanced concepts, techniques, system architectures, algorithms, and software for both space and aeronautics information sciences and computer systems. The overall goal is to provide the technical foundation within NASA for the advancement of computing technology in aerospace applications. The research program is improving the state of knowledge of fundamental aerospace computing principles and advancing computing technology in space applications such as software engineering and information extraction from data collected by scientific instruments in space. The program includes the development of special algorithms and techniques to exploit the computing power provided by high performance parallel processors and special purpose architectures. Research is being conducted in the fundamentals of data base logic and improvement techniques for producing reliable computing systems.

  1. Calculated neutron intensities for SINQ

    Energy Technology Data Exchange (ETDEWEB)

    Atchison, F

    1998-03-01

    A fully detailed calculation of the performance of the SINQ neutron source, using the PSI version of the HETC code package, was made in 1996 to provide information useful for source commissioning. Relevant information about the formulation of the problem, cascade analysis and some of the results are presented. Aspects of the techniques used to verify the results are described and discussed together with a limited comparison with earlier results obtained from neutron source design calculations. A favourable comparison between the measured and calculated differential neutron flux in one of the guides gives further indirect evidence that such calculations can give answers close to reality in absolute terms. Due to the complex interaction between the many nuclear (and other) models involved, no quantitative evaluation of the accuracy of the calculational method in general terms can be given. (author) refs., 13 figs., 9 tabs.

  2. Propulsion controlled aircraft computer

    Science.gov (United States)

    Cogan, Bruce R. (Inventor)

    2010-01-01

    A low-cost, easily retrofit Propulsion Controlled Aircraft (PCA) system for use on a wide range of commercial and military aircraft consists of a propulsion controlled aircraft computer that reads in aircraft data, including aircraft state, pilot commands, and other related data; calculates an aircraft throttle position for a given maneuver commanded by the pilot; and then displays both the current and calculated throttle positions on a cockpit display to show the pilot where to move the throttles to achieve the commanded maneuver, or sends the calculated position digitally to command the engines directly.

  3. ARTc: Anisotropic reflectivity and transmissivity calculator

    Science.gov (United States)

    Malehmir, Reza; Schmitt, Douglas R.

    2016-08-01

    While seismic anisotropy is known to exist within the Earth's crust and even deeper, isotropic or highly symmetric elastic anisotropic assumptions in seismic imaging are an over-simplification which may create artifacts in the image, target mis-positioning, and hence flawed interpretation. In this paper, we have developed the ARTc algorithm to solve for reflectivity and transmissivity, as well as velocity and particle polarization, in the most general case of elastic anisotropy. This algorithm is able to provide the reflectivity solution at the boundary between two anisotropic slabs with arbitrary symmetry and orientation, up to triclinic. To achieve this, the algorithm solves the full elastic wave equation to find the polarization, slowness, and amplitude of all six wave modes generated from the incident plane wave and the welded interface. In the first step of the reflectivity calculation, the algorithm solves for properties of the incident wave such as particle polarization and slowness. After calculating the directions of the generated waves, the algorithm solves for their respective slownesses and particle polarizations. With this information, the algorithm then solves a system of equations incorporating the imposed boundary conditions to arrive at the scattered wave amplitudes, and thus reflectivity and transmissivity. The reflectivity results, as well as slowness and polarization, are then tested in complex computational anisotropic models to ensure their accuracy and reliability. ARTc is coded in MATLAB® and bundled with an interactive GUI and bash script to run on single- or multi-processor computers.

  4. Calculating potential fields using microchannel spatial light modulators

    Science.gov (United States)

    Reid, Max B.

    1993-01-01

    We describe and present experimental results of the optical calculation of potential field maps suitable for mobile robot navigation. The optical computation employs two write modes of a microchannel spatial light modulator (MSLM). In one mode, written patterns expand spatially, and this characteristic is used to create an extended two dimensional function representing the influence of the goal in a robot's workspace. Distinct obstacle patterns are written in a second, non-expanding, mode. A model of the mechanisms determining MSLM write mode characteristics is developed and used to derive the optical calculation time for full potential field maps. Field calculations at a few hertz are possible with current technology, and calculation time vs. map size scales favorably in comparison to digital electronic computation.
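
    For reference, the quantity being computed optically is the classic attractive-plus-repulsive potential field; a digital sketch (illustrative parameters, standard Khatib-style potentials rather than the paper's optical model) looks like this:

```python
import numpy as np

# Potential field over a grid: a quadratic well spreading from the goal plus
# sharp repulsive penalties within a cutoff radius rho0 of each obstacle.
def potential_map(shape, goal, obstacles, k_att=1.0, k_rep=50.0, rho0=5.0):
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    U = 0.5 * k_att * ((ys - goal[0]) ** 2 + (xs - goal[1]) ** 2)
    for (oy, ox) in obstacles:
        rho = np.hypot(ys - oy, xs - ox) + 1e-9
        mask = rho < rho0
        U[mask] += 0.5 * k_rep * (1.0 / rho[mask] - 1.0 / rho0) ** 2
    return U

U = potential_map((40, 40), goal=(35, 35), obstacles=[(20, 18), (15, 30)])
# A robot descends the gradient of U from any start cell toward the goal.
```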

  5. Calculation of wind turbine aeroelastic behaviour. The Garrad Hassan approach

    Energy Technology Data Exchange (ETDEWEB)

    Quarton, D.C. [Garrad Hassan and Partners Ltd., Bristol (United Kingdom)

    1996-09-01

    The Garrad Hassan approach to the prediction of wind turbine loading and response has been developed over the last decade. The goal of this development has been to produce calculation methods that contain realistic representation of the wind, include sensible aerodynamic and dynamic models of the turbine and can be used to predict fatigue and extreme loads for design purposes. The Garrad Hassan calculation method is based on a suite of four key computer programs: WIND3D for generation of the turbulent wind field; EIGEN for modal analysis of the rotor and support structure; BLADED for time domain calculation of the structural loads; and SIGNAL for post-processing of the BLADED predictions. The interaction of these computer programs is illustrated. A description of the main elements of the calculation method will be presented. (au)

  6. Efficient Calculation of Near Fields in the FDTD Method

    DEFF Research Database (Denmark)

    Franek, Ondrej

    2011-01-01

    When calculating frequency-domain near fields by the FDTD method, almost 50 % reduction in memory and CPU operations can be achieved if only E-fields are stored during the main time-stepping loop and H-fields are computed later. An improved method of obtaining the H-fields from Faraday's Law is presented.
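
    The post-processing step is straightforward to sketch (illustrative code, not the paper's improved scheme: a collocated uniform grid and the time-harmonic convention E, H proportional to e^{j omega t}, under which Faraday's law gives H = (j / (omega mu0)) curl E):

```python
import numpy as np

MU0 = 4e-7 * np.pi

def h_from_e(Ex, Ey, Ez, d, omega):
    """H phasors from stored E phasors on a uniform grid indexed [x, y, z]."""
    dEz_dy = np.gradient(Ez, d, axis=1); dEy_dz = np.gradient(Ey, d, axis=2)
    dEx_dz = np.gradient(Ex, d, axis=2); dEz_dx = np.gradient(Ez, d, axis=0)
    dEy_dx = np.gradient(Ey, d, axis=0); dEx_dy = np.gradient(Ex, d, axis=1)
    f = 1j / (omega * MU0)                # from curl E = -j*omega*mu0*H
    return f * (dEz_dy - dEy_dz), f * (dEx_dz - dEz_dx), f * (dEy_dx - dEx_dy)

# Check: a z-polarized plane wave travelling in x yields H along y,
# with |Hy| ~ |Ez| / 377 (free-space impedance).
k, w = 2 * np.pi, 2 * np.pi * 299792458.0          # 1 m wavelength
x = np.linspace(0, 2, 64)
Ez = np.exp(-1j * k * x)[:, None, None] * np.ones((64, 4, 4))
Hy = h_from_e(np.zeros_like(Ez), np.zeros_like(Ez), Ez, x[1] - x[0], w)[1]
```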

  7. Computational aerodynamics and artificial intelligence

    Science.gov (United States)

    Mehta, U. B.; Kutler, P.

    1984-01-01

    The general principles of artificial intelligence are reviewed and speculations are made concerning how knowledge based systems can accelerate the process of acquiring new knowledge in aerodynamics, how computational fluid dynamics may use expert systems, and how expert systems may speed the design and development process. In addition, the anatomy of an idealized expert system called AERODYNAMICIST is discussed. Resource requirements for using artificial intelligence in computational fluid dynamics and aerodynamics are examined. Three main conclusions are presented. First, there are two related aspects of computational aerodynamics: reasoning and calculating. Second, a substantial portion of reasoning can be achieved with artificial intelligence. It offers the opportunity of using computers as reasoning machines to set the stage for efficient calculating. Third, expert systems are likely to be new assets of institutions involved in aeronautics for various tasks of computational aerodynamics.

  8. Computer programming and computer systems

    CERN Document Server

    Hassitt, Anthony

    1966-01-01

    Computer Programming and Computer Systems imparts a "reading knowledge" of computer systems. This book describes the aspects of machine-language programming, monitor systems, computer hardware, and advanced programming that every thorough programmer should be acquainted with. This text discusses the automatic electronic digital computers, symbolic language, Reverse Polish Notation, and Fortran into assembly language. The routine for reading blocked tapes, dimension statements in subroutines, general-purpose input routine, and efficient use of memory are also elaborated. This publication is inten

  9. On the Carter's Factor Calculation for Slotted Electric Machines

    Directory of Open Access Journals (Sweden)

    VIOREL, I. A.

    2007-11-01

    Full Text Available The air-gap flux density in a single-side slotted, unsaturated machine is computed via the two-dimensional finite element method (2D-FEM) and via some analytical approximations. The Carter's factor values are calculated using different equations, and a comparison between the obtained results is presented, allowing pertinent conclusions concerning the analytical estimation of the flux density and the calculation of the Carter's factor.
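
    For context, one classical closed-form Carter's factor expression of the kind such comparisons use is sketched below (hedged: several variants of the gamma term circulate in the literature, this may not be the exact equation set the paper compares, and the dimensions are illustrative):

```python
import math

# Carter's factor k_C = tau_s / (tau_s - gamma * g) for slot pitch tau_s,
# slot opening b0, and mechanical air gap g (all in the same units).
def carter_factor(tau_s, b0, g):
    u = b0 / (2.0 * g)
    gamma = (4.0 / math.pi) * (u * math.atan(u) - math.log(math.sqrt(1.0 + u * u)))
    return tau_s / (tau_s - gamma * g)

print(carter_factor(tau_s=12.0, b0=3.0, g=0.8))   # effective-gap factor > 1
```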

  10. Identification of HIV Inhibitors Guided by Free Energy Perturbation Calculations

    OpenAIRE

    Acevedo, Orlando; Ambrose, Zandrea; Patrick T. Flaherty; Aamer, Hadega; Jain, Prashi; Sambasivarao, Somisetti V.

    2012-01-01

    Free energy perturbation (FEP) theory coupled to molecular dynamics (MD) or Monte Carlo (MC) statistical mechanics offers a theoretically precise method for determining the free energy differences of related biological inhibitors. FEP has traditionally required extensive computational resources and expertise, and it is only recently that its impact has been felt in drug discovery. A review of computer-aided anti-HIV efforts employing FEP calculations is provided here that describes early and recent succ...
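
    At the heart of FEP is a single identity, the Zwanzig relation, which is easy to sketch with synthetic numbers (illustrative only; production calculations average over MD/MC configurations and split the transformation into many small lambda windows):

```python
import numpy as np

# Zwanzig (1954): dG(A -> B) = -kT * ln < exp(-(U_B - U_A)/kT) >_A,
# averaged over configurations sampled from state A.
def fep_delta_g(dU, kT=0.593):          # kT in kcal/mol near 298 K
    return -kT * np.log(np.mean(np.exp(-np.asarray(dU) / kT)))

rng = np.random.default_rng(1)
dU = rng.normal(1.0, 0.5, size=10000)   # stand-in for U_B - U_A along a trajectory
print(fep_delta_g(dU))                  # ~ mu - sigma^2/(2 kT) for Gaussian dU
```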

  11. Modifying and Accelerating the Method of Moments Calculation

    OpenAIRE

    René Harťanský; Viktor Smieško; Michal Rafaj

    2017-01-01

    This manuscript deals with optimizing the numerical method called the method of moments (MoM). This method is widely utilized for field computation of 3D structures. MoM is exploited in hydraulics as well as in electromagnetic field theory. Emphasis is put on minimizing the calculations necessary for constructing a system of linear equations by exploiting the symmetry or similarity of elements of the geometric structure. The manuscript also contains a comparison of computing times using standard MoM and...

  12. Adaptive Navier-Stokes Calculations for Vortical Flows

    Science.gov (United States)

    1993-03-12


  13. Alaska Village Electric Load Calculator

    Energy Technology Data Exchange (ETDEWEB)

    Devine, M.; Baring-Gould, E. I.

    2004-10-01

    As part of designing a village electric power system, the present and future electric loads must be defined, including both seasonal and daily usage patterns. However, in many cases, detailed electric load information is not readily available. NREL developed the Alaska Village Electric Load Calculator to help estimate the electricity requirements in a village given basic information about the types of facilities located within the community. The purpose of this report is to explain how the load calculator was developed and to provide instructions on its use so that organizations can then use this model to calculate expected electrical energy usage.

  14. Practical astronomy with your calculator

    CERN Document Server

    Duffett-Smith, Peter

    1989-01-01

    Practical Astronomy with your Calculator, first published in 1979, has enjoyed immense success. The author's clear and easy to follow routines enable you to solve a variety of practical and recreational problems in astronomy using a scientific calculator. Mathematical complexity is kept firmly in the background, leaving just the elements necessary for swiftly making calculations. The major topics are: time, coordinate systems, the Sun, the planetary system, binary stars, the Moon, and eclipses. In the third edition there are entirely new sections on generalised coordinate transformations, nutr

  15. Water waves, fixed cylinders and floating spheres: fully nonlinear diffraction calculations compared to detailed experiments

    NARCIS (Netherlands)

    Ballast, A.

    2004-01-01

    To increase the capabilities of computer calculations, a computer code for fully nonlinear potential calculations with water waves and floating bodies was developed earlier. It uses a boundary integral equation formulation, which is discretized to give a higher-order panel method. In the

  16. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  17. Calculate Your Body Mass Index

    Science.gov (United States)


  18. Landfill Gas Energy Benefits Calculator

    Science.gov (United States)

    This page contains the LFG Energy Benefits Calculator to estimate direct, avoided, and total greenhouse gas reductions, as well as environmental and energy benefits, for a landfill gas energy project.

  19. Nursing students' mathematic calculation skills.

    Science.gov (United States)

    Rainboth, Lynde; DeMasi, Chris

    2006-12-01

    This mixed-method study used a pre-test/post-test design to evaluate the efficacy of a teaching strategy in improving beginning nursing students' learning outcomes. During a 4-week teaching period, a convenience sample of 54 sophomore-level nursing students was required to complete calculation assignments, taught one calculation method, and mandated to attend medication calculation classes. These students completed pre- and post-math tests and a major medication mathematics exam. Scores from the intervention group were compared to those achieved by the previous sophomore class. Results demonstrated a statistically significant improvement from pre- to post-test, and the students who received the intervention had statistically significantly higher scores on the major medication calculation exam than did the students in the control group. The evaluation completed by the intervention group showed that the students were satisfied with the method and outcome.

  20. Quantum Computing

    Science.gov (United States)

    Steffen, Matthias

    Solving computational problems require resources such as time, memory, and space. In the classical model of computation, computational complexity theory has categorized problems according to how difficult it is to solve them as the problem size increases. Remarkably, a quantum computer could solve certain problems using fundamentally fewer resources compared to a conventional computer, and therefore has garnered significant attention. Yet because of the delicate nature of entangled quantum states, the construction of a quantum computer poses an enormous challenge for experimental and theoretical scientists across multi-disciplinary areas including physics, engineering, materials science, and mathematics. While the field of quantum computing still has a long way to grow before reaching full maturity, state-of-the-art experiments on the order of 10 qubits are beginning to reach a fascinating stage at which they can no longer be emulated using even the fastest supercomputer. This raises the hope that small quantum computer demonstrations could be capable of approximately simulating or solving problems that also have practical applications. In this talk I will review the concepts behind quantum computing, and focus on the status of superconducting qubits which includes steps towards quantum error correction and quantum simulations.

  1. Biological computation

    CERN Document Server

    Lamm, Ehud

    2011-01-01

    Contents: Introduction and Biological Background; Biological Computation; The Influence of Biology on Mathematics - Historical Examples; Biological Introduction; Models and Simulations. Cellular Automata: Biological Background; The Game of Life; General Definition of Cellular Automata; One-Dimensional Automata; Examples of Cellular Automata; Comparison with a Continuous Mathematical Model; Computational Universality; Self-Replication; Pseudo Code. Evolutionary Computation: Evolutionary Biology and Evolutionary Computation; Genetic Algorithms; Example Applications; Analysis of the Behavior of Genetic Algorithms; Lamarckian Evolution; Genet...

  2. Computational Composites

    DEFF Research Database (Denmark)

    Vallgårda, Anna K. A.

    ...to understand the computer as a material like any other material we would use for design, like wood, aluminum, or plastic. That as soon as the computer forms a composition with other materials it becomes just as approachable and inspiring as other smart materials. I present a series of investigations of what ... Computational Composite, and Telltale). Through the investigations, I show how the computer can be understood as a material and how it partakes in a new strand of materials whose expressions come to be in context. I uncover some of their essential material properties and potential expressions. I develop a way...

  3. Transfer Area Mechanical Handling Calculation

    Energy Technology Data Exchange (ETDEWEB)

    B. Dianda

    2004-06-23

    This calculation is intended to support the License Application (LA) submittal of December 2004, in accordance with the directive given by DOE correspondence received on the 27th of January 2004 entitled: ''Authorization for Bechtel SAIC Company L.L.C. to Include a Bare Fuel Handling Facility and Increased Aging Capacity in the License Application, Contract Number DE-AC28-01RW12101'' (Arthur, W.J., III 2004). This correspondence was appended by further correspondence received on the 19th of February 2004 entitled: ''Technical Direction to Bechtel SAIC Company L.L.C. for Surface Facility Improvements, Contract Number DE-AC28-01RW12101; TDL No. 04-024'' (BSC 2004a). These documents give the authorization for a Fuel Handling Facility to be included in the baseline. The purpose of this calculation is to establish preliminary bounding equipment envelopes and weights for the Fuel Handling Facility (FHF) transfer area equipment. This calculation provides preliminary information only, to support development of facility layouts and preliminary load calculations. The limitations of this preliminary calculation lie within the assumptions of Section 5, as this calculation is part of an evolutionary design process. It is intended that this calculation be superseded as the design advances, to reflect information necessary to support the License Application. The design choices outlined within this calculation represent a demonstration of feasibility and may or may not be included in the completed design. This calculation provides preliminary weight, dimensional envelope, and equipment position in the building for the purposes of defining interface variables. This calculation identifies and sizes major equipment and assemblies that dictate overall equipment dimensions and facility interfaces. Sizing of components is based on the selection of commercially available products, where applicable. This is not a specific recommendation for the future use

  4. GPU/CPU Algorithm for Generalized Born/Solvent-Accessible Surface Area Implicit Solvent Calculations.

    Science.gov (United States)

    Tanner, David E; Phillips, James C; Schulten, Klaus

    2012-07-10

    Molecular dynamics methodologies comprise a vital research tool for structural biology. Molecular dynamics has benefited from technological advances in computing, such as multi-core CPUs and graphics processing units (GPUs), but harnessing the full power of hybrid GPU/CPU computers remains difficult. The generalized Born/solvent-accessible surface area implicit solvent model (GB/SA) stands to benefit from hybrid GPU/CPU computers, employing the GPU for the GB calculation and the CPU for the SA calculation. Here, we explore the computational challenges facing GB/SA calculations on hybrid GPU/CPU computers and demonstrate how NAMD, a parallel molecular dynamics program, is able to efficiently utilize GPUs and CPUs simultaneously for fast GB/SA simulations. The hybrid computation principles demonstrated here are generally applicable to parallel applications employing hybrid GPU/CPU calculations.

  5. Platform computing

    CERN Multimedia

    2002-01-01

    "Platform Computing releases first grid-enabled workload management solution for IBM eServer Intel and UNIX high performance computing clusters. This Out-of-the-box solution maximizes the performance and capability of applications on IBM HPC clusters" (1/2 page) .

  6. Cloud Computing

    DEFF Research Database (Denmark)

    Krogh, Simon

    2013-01-01

    with technological changes, the paradigmatic pendulum has swung between increased centralization on one side and a focus on distributed computing that pushes IT power out to end users on the other. With the introduction of outsourcing and cloud computing, centralization in large data centers is again dominating...

  7. Computational Deception

    NARCIS (Netherlands)

    Nijholt, Antinus; Acosta, P.S.; Cravo, P.

    2010-01-01

    In the future our daily life interactions with other people, with computers, robots and smart environments will be recorded and interpreted by computers or embedded intelligence in environments, furniture, robots, displays, and wearables. These sensors record our activities, our behaviour, and our

  8. Computational astrophysics

    Science.gov (United States)

    Miller, Richard H.

    1987-01-01

    Astronomy is an area of applied physics in which unusually beautiful objects challenge the imagination to explain observed phenomena in terms of known laws of physics. It is a field that has stimulated the development of physical laws and of mathematical and computational methods. Current computational applications are discussed in terms of stellar and galactic evolution, galactic dynamics, and particle motions.

  9. Computational Pathology

    Science.gov (United States)

    Louis, David N.; Feldman, Michael; Carter, Alexis B.; Dighe, Anand S.; Pfeifer, John D.; Bry, Lynn; Almeida, Jonas S.; Saltz, Joel; Braun, Jonathan; Tomaszewski, John E.; Gilbertson, John R.; Sinard, John H.; Gerber, Georg K.; Galli, Stephen J.; Golden, Jeffrey A.; Becich, Michael J.

    2016-01-01

    Context We define the scope and needs within the new discipline of computational pathology, a discipline critical to the future of both the practice of pathology and, more broadly, medical practice in general. Objective To define the scope and needs of computational pathology. Data Sources A meeting was convened in Boston, Massachusetts, in July 2014 prior to the annual Association of Pathology Chairs meeting, and it was attended by a variety of pathologists, including individuals highly invested in pathology informatics as well as chairs of pathology departments. Conclusions The meeting made recommendations to promote computational pathology, including clearly defining the field and articulating its value propositions; asserting that the value propositions for health care systems must include means to incorporate robust computational approaches to implement data-driven methods that aid in guiding individual and population health care; leveraging computational pathology as a center for data interpretation in modern health care systems; stating that realizing the value proposition will require working with institutional administrations, other departments, and pathology colleagues; declaring that a robust pipeline should be fostered that trains and develops future computational pathologists, for those with both pathology and non-pathology backgrounds; and deciding that computational pathology should serve as a hub for data-related research in health care systems. The dissemination of these recommendations to pathology and bioinformatics departments should help facilitate the development of computational pathology. PMID:26098131

  10. Analysis of a Model for Computer Virus Transmission

    OpenAIRE

    Qin, Peng

    2015-01-01

    Computer viruses remain a significant threat to computer networks. In this paper, the incorporation of new computers into the network and the removal of old computers from the network are considered. Meanwhile, the computers on the network are equipped with antivirus software. A computer virus model is established. Through the analysis of the model, disease-free and endemic equilibrium points are calculated. The stability conditions of the equilibria are derived. To illustrate our t...
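
    A minimal numerical sketch of this family of models (my own illustrative compartments and rates, not the paper's exact system) shows how such equilibria emerge:

```python
# Susceptible (S) and infected (I) computers: machines join at rate b, retire
# at rate d, infection spreads at rate beta, antivirus cures at rate gamma.
def simulate(S0=990.0, I0=10.0, b=5.0, d=0.005, beta=0.0004, gamma=0.05,
             dt=0.1, steps=2000):
    S, I = S0, I0
    for _ in range(steps):                    # forward-Euler integration
        dS = b - beta * S * I + gamma * I - d * S
        dI = beta * S * I - gamma * I - d * I
        S, I = S + dt * dS, I + dt * dI
    return S, I

# For these rates the infection persists: the state approaches the endemic
# equilibrium S* = (gamma + d)/beta = 137.5, I* = 862.5 (with N* = b/d = 1000).
print(simulate())
```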

  11. Method and computer program product for maintenance and modernization backlogging

    Science.gov (United States)

    Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M

    2013-02-19

    According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
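
    The claimed relation is a simple per-period sum, which a short sketch makes concrete; the class and field names below are hypothetical illustrations, not taken from the patent.

        from dataclasses import dataclass

        @dataclass
        class PeriodFactors:              # hypothetical names, not from the patent
            maintenance_cost: float       # time-period-specific maintenance cost
            modernization: float          # time-period-specific modernization factor
            backlog: float                # time-period-specific backlog factor

        def future_facility_condition(p: PeriodFactors) -> float:
            # Future facility conditions = maintenance cost + modernization + backlog
            return p.maintenance_cost + p.modernization + p.backlog

        print(future_facility_condition(PeriodFactors(120_000.0, 45_000.0, 30_000.0)))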

  12. Computational Streetscapes

    Directory of Open Access Journals (Sweden)

    Paul M. Torrens

    2016-09-01

    Streetscapes have presented a long-standing interest in many fields. Recently, there has been a resurgence of attention on streetscape issues, catalyzed in large part by computing. Because of computing, there is more understanding, vistas, data, and analysis of and on streetscape phenomena than ever before. This diversity of lenses trained on streetscapes permits us to address long-standing questions, such as how people use information while mobile, how interactions with people and things occur on streets, how we might safeguard crowds, how we can design services to assist pedestrians, and how we could better support special populations as they traverse cities. Amid each of these avenues of inquiry, computing is facilitating new ways of posing these questions, particularly by expanding the scope of what-if exploration that is possible. With assistance from computing, consideration of streetscapes now reaches across scales, from the neurological interactions that form among place cells in the brain up to informatics that afford real-time views of activity over whole urban spaces. For some streetscape phenomena, computing allows us to build realistic but synthetic facsimiles in computation, which can function as artificial laboratories for testing ideas. In this paper, I review the domain science for studying streetscapes from vantages in physics, urban studies, animation and the visual arts, psychology, biology, and behavioral geography. I also review the computational developments shaping streetscape science, with particular emphasis on modeling and simulation as informed by data acquisition and generation, data models, path-planning heuristics, artificial intelligence for navigation and way-finding, timing, synthetic vision, steering routines, kinematics, and geometrical treatment of collision detection and avoidance. I also discuss the implications that the advances in computing streetscapes might have on emerging developments in cyber
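
    As one concrete example of the "steering routines" and collision avoidance the review surveys, here is a minimal, generic pedestrian-steering sketch (goal seeking plus short-range repulsion between agents); it is not drawn from the paper, and every parameter value is an illustrative assumption.

        import numpy as np

        rng = np.random.default_rng(0)
        n, dt = 20, 0.1
        pos = rng.uniform(0.0, 10.0, (n, 2))    # pedestrian positions (m)
        vel = np.zeros((n, 2))
        goal = np.array([10.0, 10.0])

        for _ in range(200):
            steer = goal - pos                  # unit vector toward the goal
            steer /= np.linalg.norm(steer, axis=1, keepdims=True) + 1e-9
            for i in range(n):                  # repulsion from neighbours within 1 m
                diff = pos[i] - pos
                dist = np.linalg.norm(diff, axis=1)
                near = (dist > 0.0) & (dist < 1.0)
                if near.any():
                    steer[i] += (diff[near] / dist[near, None] ** 2).sum(axis=0)
            vel = 0.9 * vel + dt * steer        # inertia plus steering acceleration
            speed = np.linalg.norm(vel, axis=1, keepdims=True)
            vel = np.where(speed > 1.4, vel * 1.4 / speed, vel)  # cap at walking pace
            pos += dt * vel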

  13. Mordred: a molecular descriptor calculator.

    Science.gov (United States)

    Moriwaki, Hirotomo; Tian, Yu-Shi; Kawashita, Norihito; Takagi, Tatsuya

    2018-02-06

    Molecular descriptors are widely employed to represent molecular characteristics in cheminformatics. Various molecular-descriptor-calculation software programs have been developed. However, users of those programs must contend with several issues, including software bugs, insufficient update frequencies, and software licensing constraints. To address these issues, we propose Mordred, a newly developed descriptor-calculation software application that can calculate more than 1800 two- and three-dimensional descriptors. It is freely available via GitHub. Mordred can be easily installed and used through a command-line interface, as a web application, or as a high-flexibility Python package on all major platforms (Windows, Linux, and macOS). Performance benchmark results show that Mordred is at least twice as fast as the well-known PaDEL-Descriptor, and it can calculate descriptors for large molecules, which other software cannot accomplish. Owing to its good performance, convenience, large number of descriptors, and lax licensing constraints, Mordred is a promising choice of molecular-descriptor-calculation software for cheminformatics studies, such as those on quantitative structure-property relationships.
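
    Basic usage follows the package's documented API (RDKit molecules in, descriptor values out); exact descriptor counts depend on the installed version, but the pattern is:

        from rdkit import Chem
        from mordred import Calculator, descriptors

        calc = Calculator(descriptors, ignore_3D=True)  # register all 2D descriptors
        print(len(calc.descriptors))                    # number of registered descriptors

        mol = Chem.MolFromSmiles("c1ccccc1")            # benzene
        result = calc(mol)                              # dict-like descriptor values

        df = calc.pandas([mol])                         # DataFrame for a list of molecules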

  14. On the accuracy of HITEMP-2010 calculated emissivities of Water Vapor and Carbon Dioxide

    DEFF Research Database (Denmark)

    Alberti, M.; Weber, R.; Mancini, M.

    Nowadays, spectral line-by-line calculations using either the HITRAN or HITEMP databases are frequently used for calculating gas radiation properties such as absorption coefficients or emissivities. Such calculations are computationally very expensive because of the vast number of spectral lines and...
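
    The abstract is cut off, but the calculation it refers to is well defined: resolve the absorption coefficient kappa(eta) on a very fine wavenumber grid, then Planck-average the spectral emissivity 1 - exp(-kappa*L). The sketch below uses a handful of synthetic Lorentzian lines rather than real HITRAN/HITEMP line data, purely to show the structure of the calculation and why it is expensive (millions of grid points for two lines, let alone millions of lines).

        import numpy as np

        C2 = 1.4388  # second radiation constant (cm K)

        def planck_weight(eta, T):
            b = eta ** 3 / np.expm1(C2 * eta / T)   # Planck distribution vs wavenumber
            return b / np.trapz(b, eta)             # normalized spectral weight

        def kappa_lorentz(eta, centers, strengths, widths):
            k = np.zeros_like(eta)
            for c, S, g in zip(centers, strengths, widths):
                k += S * g / (np.pi * ((eta - c) ** 2 + g ** 2))  # Lorentzian lines
            return k

        eta = np.linspace(100.0, 10_000.0, 2_000_000)   # fine wavenumber grid (1/cm)
        T, L = 1500.0, 100.0                            # gas temperature (K), path (cm)
        kappa = kappa_lorentz(eta, [2350.0, 3700.0], [50.0, 30.0], [0.1, 0.1])
        eps = np.trapz(planck_weight(eta, T) * (1.0 - np.exp(-kappa * L)), eta)
        print(eps)   # total (Planck-averaged) emissivity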

  15. Methods for the algorithms for calculation of tunable coaxial bandpass microwave filters

    Directory of Open Access Journals (Sweden)

    Parfilov A. A.

    2012-12-01

    The article describes the models and algorithms used to calculate the characteristics of mechanically tunable coaxial bandpass microwave filters, on the basis of which a computer program for such calculations can be written. Ways are proposed to resolve ambiguities that arise during the development of the analytical algorithm for calculating coaxial tunable bandpass filters.
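
    The article's own algorithm is not given in the abstract; as a stand-in, the following sketch performs the textbook step such a program typically starts from: computing Chebyshev lowpass-prototype g-values and turning them into inter-resonator coupling coefficients and external Q for a coupled-resonator bandpass filter (standard Matthaei/Young/Jones formulas; all numerical values are illustrative).

        import numpy as np

        def chebyshev_g(n, ripple_db):
            # Lowpass-prototype element values for a Chebyshev response
            beta = np.log(1.0 / np.tanh(ripple_db / 17.37))
            gamma = np.sinh(beta / (2.0 * n))
            a = np.array([np.sin((2 * k - 1) * np.pi / (2 * n)) for k in range(1, n + 1)])
            b = np.array([gamma ** 2 + np.sin(k * np.pi / n) ** 2 for k in range(1, n + 1)])
            g = np.empty(n + 2)
            g[0], g[1] = 1.0, 2.0 * a[0] / gamma
            for k in range(2, n + 1):
                g[k] = 4.0 * a[k - 2] * a[k - 1] / (b[k - 2] * g[k - 1])
            g[n + 1] = 1.0 if n % 2 else 1.0 / np.tanh(beta / 4.0) ** 2
            return g

        n, ripple_db, f0, bw = 5, 0.1, 2.0e9, 80.0e6   # order, ripple (dB), centre (Hz), bandwidth (Hz)
        fbw = bw / f0                                   # fractional bandwidth
        g = chebyshev_g(n, ripple_db)
        k = fbw / np.sqrt(g[1:n] * g[2:n + 1])          # inter-resonator couplings k12 ... k45
        Qe = g[0] * g[1] / fbw                          # external quality factor
        print(k, Qe)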

  16. Visual Method for Spectral Energy Distribution Calculation of Blazars

    Indian Academy of Sciences (India)

    Y. Huang (School of Computer Science and Education Software, Guangzhou University, Guangzhou 510006, China) & J. H. Fan (Centre for Astrophysics, Guangzhou University, Guangzhou 510006, China). © Indian Academy of Sciences.

  17. Drying schedules calculation of Camiyani Black Pine (Pinus nigra var. pallasiana) ...

    African Journals Online (AJOL)

    In this study, computer-aided drying schedules were developed for Camiyani Black Pine (Pinus nigra var. pallasiana) lumber in three thickness classes: less than 30 mm, 30-60 mm, and more than 60 mm. Schedules were calculated on a drying-gradient basis. In this software (named KILNBRAIN), users can find more than one ...

  18. 17 CFR 190.07 - Calculation of allowed net equity.

    Science.gov (United States)

    2010-04-01

    17 CFR 190.07 (Commodity and Securities Exchanges; Commodity Futures Trading Commission; Bankruptcy): Calculation of allowed net equity. Allowed net equity shall be computed as follows: (a) Allowed claim. The allowed net equity claim of a customer shall be equal to the aggregate of the funded...

  19. Viscous-Inviscid Interaction Method for Wing Calculations

    NARCIS (Netherlands)

    Coenen, Edith G.M.; Veldman, Arthur E.P.; Patrianakos, George

    2000-01-01

    A quasi-simultaneous viscous-inviscid coupling method is developed for the calculation of three-dimensional steady incompressible flow over transport wing configurations. The external inviscid flow is computed with a constant-potential (Dirichlet) panel method, constructed from a constant source and
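
    The abstract stops short of the equations, but the core idea behind quasi-simultaneous coupling can be shown on a toy scalar problem: iterating the viscous and inviscid relations in sequence is repelled from the strong-interaction solution, whereas solving the two relations together (here with a Newton solve standing in for coupling via an interaction law) converges. Everything below is an illustrative stand-in, not the paper's formulation.

        # Toy scalar stand-in for viscous-inviscid coupling (illustrative only)
        b, c = 1.2, 0.2
        E = lambda d: 1.0 - b * d   # "inviscid": edge velocity drops with displacement thickness
        V = lambda u: c / u         # "viscous": displacement thickness grows as velocity drops

        d = 0.52                    # start near the strong-interaction root d* = 0.5
        for _ in range(5):          # direct (sequential) coupling: d <- V(E(d))
            d = V(E(d))
        print("direct coupling is repelled, d =", d)     # iterates move away from 0.5

        d = 0.52                    # simultaneous treatment: Newton on F(d) = d - V(E(d))
        for _ in range(20):
            F = d - c / (1.0 - b * d)
            dF = 1.0 - c * b / (1.0 - b * d) ** 2        # derivative of F
            d -= F / dF
        print("simultaneous solve converges, d =", d)    # approaches 0.5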

  20. Prospects in deterministic three dimensional whole-core transport calculations

    Energy Technology Data Exchange (ETDEWEB)

    Sanchez, Richard [Commissariat a l'Energie Atomique et aux Energies Alternatives, Direction de l'Energie Nucleaire, Service d'Etudes de Reacteurs et de Mathematiques Appliquees, CEA de Saclay, Cedex (France)]

    2012-03-15

    The point we make in this paper is that, although detailed and precise three-dimensional (3D) whole-core transport calculations may be obtained in the future with massively parallel computers, they would apply to only some of the problems of the nuclear industry, more precisely those regarding multiphysics, methodology validation, or nuclear safety calculations. On the other hand, typical reactor design cycle calculations, comprising many one-point core calculations, have very strict constraints on computing time and will not directly benefit from advances in large-scale computers. Consequently, in this paper we review some of the deterministic 3D transport methods which in the very near future may have potential for industrial applications and which, even with low-order approximations such as a coarse energy resolution, might represent an advantage over the present industrial methodology, one of whose main approximations is the power-reconstruction step. These methods comprise the response-matrix method and methods based on the two-dimensional (2D) method of characteristics, such as the fusion method.
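
    As a pocket-sized illustration of the characteristics-based transport methods mentioned above (far simpler than 2D/3D MOC, but built on the same attenuation-plus-source update along a ray), here is a 1D fixed-source slab solved with step characteristics and source iteration; geometry, cross sections, and quadrature order are all assumed for illustration.

        import numpy as np

        nx, width = 100, 10.0
        dx = width / nx
        sigma_t, sigma_s = 1.0, 0.5                 # total / scattering cross sections (1/cm)
        external = np.ones(nx)                      # uniform external source
        mu, w = np.polynomial.legendre.leggauss(8)  # angular quadrature on [-1, 1]

        phi = np.zeros(nx)                          # scalar flux
        for _ in range(500):                        # source iteration
            q = (sigma_s * phi + external) / 2.0    # isotropic emission per unit mu
            phi_new = np.zeros(nx)
            for m in range(len(mu)):
                tau = sigma_t * dx / abs(mu[m])     # optical length of a cell
                att = np.exp(-tau)
                psi_in = 0.0                        # vacuum boundary
                cells = range(nx) if mu[m] > 0 else range(nx - 1, -1, -1)
                for i in cells:
                    # characteristic update: attenuate incoming flux, add local source
                    psi_out = psi_in * att + (q[i] / sigma_t) * (1.0 - att)
                    psi_avg = q[i] / sigma_t - (psi_out - psi_in) / tau
                    phi_new[i] += w[m] * psi_avg
                    psi_in = psi_out
            err = np.max(np.abs(phi_new - phi))
            phi = phi_new
            if err < 1e-8:
                break
        print(phi[nx // 2])                         # converged mid-slab scalar flux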