Computational methods for ab initio detection of microRNAs
Directory of Open Access Journals (Sweden)
Malik eYousef
2012-10-01
Full Text Available MicroRNAs are small RNA sequences of 18-24 nucleotides in length, which serve as templates to drive post-transcriptional gene silencing. The canonical microRNA pathway starts with transcription from DNA and is followed by processing via the Microprocessor complex, yielding a hairpin structure, which is then exported into the cytosol, where it is processed by Dicer and incorporated into the RNA-induced silencing complex. All of these biogenesis steps add to the overall specificity of miRNA production and effect. Unfortunately, their modes of action are only beginning to be elucidated, and computational prediction algorithms therefore cannot model the process but are usually forced to employ machine-learning approaches. This work focuses on ab initio prediction methods throughout; homology-based miRNA detection methods are therefore not discussed. Current ab initio prediction algorithms, their ties to data mining, and their prediction accuracy are detailed.
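Ab initio pre-miRNA classifiers of the kind surveyed here typically encode a candidate hairpin as a feature vector before handing it to a machine-learning model. A minimal, purely illustrative sketch of such feature extraction (no real secondary-structure folding; the feature choices are hypothetical, not those of any specific published tool):

```python
def mirna_features(seq):
    """Toy feature vector for a candidate pre-miRNA hairpin: length, GC
    content, and a crude count of complementary bases between the two arms
    (hypothetical features; real tools use proper RNA folding)."""
    seq = seq.upper().replace("T", "U")
    pairs = {("A", "U"), ("U", "A"), ("G", "C"), ("C", "G"),
             ("G", "U"), ("U", "G")}  # Watson-Crick plus G-U wobble
    half = len(seq) // 2
    arm5 = seq[:half]
    arm3 = seq[::-1][:half]  # 3' arm, read back toward the loop
    paired = sum((a, b) in pairs for a, b in zip(arm5, arm3))
    gc = (seq.count("G") + seq.count("C")) / len(seq)
    return {"length": len(seq), "gc": gc, "paired_fraction": paired / half}

features = mirna_features("GGGGAAAACCCC")  # a perfect 4-bp toy stem
```

A real classifier would add thermodynamic and structural features (e.g., folding free energy) on top of such simple sequence statistics.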
Studies of urea geometry by means of ab initio methods and computer simulations of liquids
Cirino, José Jair Vianna; Bertran, Celso Aparecido
2002-01-01
A study was carried out on the urea geometries using ab initio calculations and Monte Carlo computational simulation of liquids. The ab initio results showed that urea has a non-planar conformation in the gas phase, in which the hydrogen atoms are out of the plane formed by the heavy atoms. Free energies associated with the rotation of the amino groups of urea in water were obtained using the Monte Carlo method, in which the thermodynamic perturbation theory is implemented. The magnitud...
International Nuclear Information System (INIS)
Jursic, B.S.
1996-01-01
Up to four ionization potentials of elements from the second row of the periodic table were computed using the ab initio (HF, MP2, MP3, MP4, QCISD, G1, G2, and G2MP2) and DFT (B3LYP, B3P86, B3PW91, XALPHA, HFS, HFB, BLYP, BP86, BPW91, BVWN, XALYP, XAP86, XAPW91, XAVWN, SLYP, SP86, SPW91, and SVWN) methods. In all of the calculations, the large 6-311++G(3df,3pd) Gaussian-type basis set was used. The computed values were compared with the experimental results, and the suitability of the ab initio and DFT methods for reproducing the experimental data was discussed. From the computed ionization potentials of the second-row elements, it can be concluded that HF ab initio computation is not capable of reproducing the experimental results; the computed ionization potentials are too low. However, with ab initio methods that include electron correlation, the computed IPs become much closer to the experimental values. In all cases, with the exception of the first ionization potential of oxygen, the G2 computation produces ionization potentials that are indistinguishable from the experimental results.
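The ionization potentials compared here follow from total-energy differences (the Delta-SCF recipe): IP = E(cation) - E(neutral). A trivial sketch with made-up total energies (the numbers below are illustrative, not the paper's results):

```python
HARTREE_TO_EV = 27.211386  # conversion factor, eV per hartree

def ionization_potential_ev(e_neutral_ha, e_cation_ha):
    """Delta-SCF ionization potential: IP = E(cation) - E(neutral),
    with both total energies in hartree, returned in eV."""
    return (e_cation_ha - e_neutral_ha) * HARTREE_TO_EV

# Illustrative, made-up total energies for a second-row atom:
ip = ionization_potential_ev(-54.6009, -54.0689)  # about 14.5 eV
```

Higher ionization potentials follow the same pattern from successive cation energies.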
Sumowski, Chris Vanessa; Hanni, Matti; Schweizer, Sabine; Ochsenfeld, Christian
2014-01-14
The structural sensitivity of NMR chemical shifts as computed by quantum chemical methods is compared to a variety of empirical approaches for the example of a prototypical peptide, the 38-residue kaliotoxin KTX comprising 573 atoms. Despite the simplicity of empirical chemical shift prediction programs, the agreement with experimental results is rather good, underlining their usefulness. However, we show in our present work that they are highly insensitive to structural changes, which renders their use for validating predicted structures questionable. In contrast, quantum chemical methods show the expected high sensitivity to structural and electronic changes. This appears to be independent of the quantum chemical approach or the inclusion of solvent effects. For the latter, explicit solvent simulations with an increasing number of snapshots were performed for two conformers of an eight-amino-acid sequence. In conclusion, the empirical approaches provide neither the expected magnitude nor the patterns of NMR chemical shifts determined by the clearly more costly ab initio methods upon structural changes. This restricts the use of empirical prediction programs in studies where peptide and protein structures are utilized for NMR chemical shift evaluation, such as in NMR refinement processes, structural model verifications, or calculations of NMR nuclear spin relaxation rates.
Perspective: Ab initio force field methods derived from quantum mechanics
Xu, Peng; Guidez, Emilie B.; Bertoni, Colleen; Gordon, Mark S.
2018-03-01
It is often desirable to accurately and efficiently model the behavior of large molecular systems in the condensed phase (thousands to tens of thousands of atoms) over long time scales (from nanoseconds to milliseconds). In these cases, ab initio methods are difficult due to the increasing computational cost with the number of electrons. A more computationally attractive alternative is to perform the simulations at the atomic level using a parameterized function to model the electronic energy. Many empirical force fields have been developed for this purpose. However, the functions that are used to model interatomic and intermolecular interactions contain many fitted parameters obtained from selected model systems, and such classical force fields cannot properly simulate important electronic effects. Furthermore, while such force fields are computationally affordable, they are not reliable when applied to systems that differ significantly from those used in their parameterization. They also cannot provide the information necessary to analyze the interactions that occur in the system, making it difficult to systematically improve the functional forms that are used. Ab initio force field methods aim to combine the merits of both types of methods. The ideal ab initio force fields are built on first principles and require no fitted parameters. The ab initio force field methods surveyed in this perspective are based on fragmentation approaches and intermolecular perturbation theory. This perspective summarizes their theoretical foundation and the key components of their formulation, and discusses key aspects of these methods such as accuracy and formal computational cost. The ab initio force fields considered here were developed for different targets, and this perspective also aims to provide a balanced presentation of their strengths and shortcomings. Finally, this perspective suggests some future directions for this actively developing area.
Many-body optimization using an ab initio Monte Carlo method.
Haubein, Ned C; McMillan, Scott A; Broadbelt, Linda J
2003-01-01
Advances in computing power have made it possible to study solvated molecules using ab initio quantum chemistry. Inclusion of discrete solvent molecules is required to determine geometric information about solute/solvent clusters. Monte Carlo methods are well suited to finding minima in many-body systems, and ab initio methods are applicable to the widest range of systems. A first principles Monte Carlo (FPMC) method was developed to find minima in many-body systems, and emphasis was placed on implementing moves that increase the likelihood of finding minimum energy structures. Partial optimization and molecular interchange moves aid in finding minima and overcome the incomplete sampling that is unavoidable when using ab initio methods. FPMC was validated by studying the boron trifluoride-water system, and then the method was used to examine the methyl carbenium ion in water to demonstrate its application to solvation problems.
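The FPMC idea of Metropolis sampling interleaved with occasional partial-optimization moves that relax the current configuration toward a nearby minimum can be sketched on a toy one-dimensional "energy surface" standing in for an ab initio potential (potential and parameters are invented for illustration):

```python
import math
import random

def energy(x):
    """Toy double-well surface standing in for an ab initio energy."""
    return (x * x - 1.0) ** 2 + 0.3 * x

def grad(x):
    return 4.0 * x * (x * x - 1.0) + 0.3

def fpmc_sketch(steps=20000, beta=2.0, seed=1):
    """Metropolis sampling with periodic partial-optimization moves,
    tracking the lowest-energy configuration seen (toy FPMC-like loop)."""
    rng = random.Random(seed)
    x = 2.0
    best_x, best_e = x, energy(x)
    for i in range(steps):
        if i % 100 == 0:
            trial = x                        # partial-optimization move:
            for _ in range(10):              # a few steepest-descent steps
                trial -= 0.05 * grad(trial)
        else:
            trial = x + rng.uniform(-0.3, 0.3)  # ordinary random displacement
        de = energy(trial) - energy(x)
        if de <= 0.0 or rng.random() < math.exp(-beta * de):
            x = trial
        if energy(x) < best_e:
            best_x, best_e = x, energy(x)
    return best_x, best_e

best_x, best_e = fpmc_sketch()  # global minimum is near x = -1.04
```

In the real method, "energy" is an ab initio single-point calculation and the moves also include molecular interchanges between solvation shells.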
Computational prediction of muon stopping sites using ab initio random structure searching (AIRSS)
Liborio, Leandro; Sturniolo, Simone; Jochym, Dominik
2018-04-01
The stopping site of the muon in a muon-spin relaxation experiment is in general unknown. There are some techniques that can be used to guess the muon stopping site, but they often rely on approximations and are not generally applicable to all cases. In this work, we propose a purely theoretical method to predict muon stopping sites in crystalline materials from first principles. The method is based on a combination of ab initio calculations, random structure searching, and machine learning, and it has successfully predicted the MuT and MuBC stopping sites of muonium in Si, diamond, and Ge, as well as the muonium stopping site in LiF, without any recourse to experimental results. The method makes use of Soprano, a publicly released Python library developed to aid ab initio computational crystallography, which contains all the software tools necessary to reproduce our analysis.
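The random-structure-searching loop at the heart of AIRSS, namely generating random starting geometries, relaxing each one, and keeping the distinct minima, can be sketched with a toy one-dimensional landscape standing in for the ab initio relaxation (everything here is illustrative, not the actual AIRSS/Soprano code):

```python
import random

def random_search_minima(minimize, n_trials, box, seed=0, tol=1e-2):
    """AIRSS-style skeleton: random starting points, local relaxation,
    keep the distinct minima. 'minimize' stands in for an ab initio
    geometry optimization of a randomly placed muon."""
    rng = random.Random(seed)
    minima = []
    for _ in range(n_trials):
        xm = minimize(rng.uniform(-box, box))
        if not any(abs(xm - m) < tol for m in minima):
            minima.append(xm)
    return sorted(minima)

# Toy 1D landscape (x^2 - 1)^2 with two minima at x = -1 and x = +1:
def minimize(x, steps=200, lr=0.02):
    for _ in range(steps):
        x -= lr * 4.0 * x * (x * x - 1.0)  # gradient descent on the toy surface
    return x

mins = random_search_minima(minimize, n_trials=30, box=2.0)
```

The real workflow replaces the toy relaxation with DFT geometry optimizations and clusters the resulting sites with machine-learning tools.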
International Nuclear Information System (INIS)
Sakane, Shinichi; Yezdimer, Eric M.; Liu, Wenbin; Barriocanal, Jose A.; Doren, Douglas J.; Wood, Robert H.
2000-01-01
The ab initio/classical free energy perturbation (ABC-FEP) method proposed previously by Wood et al. [J. Chem. Phys. 110, 1329 (1999)] uses classical simulations to calculate solvation free energies within an empirical potential model, then applies free energy perturbation theory to determine the effect of changing the empirical solute-solvent interactions to corresponding interactions calculated from ab initio methods. This approach allows accurate calculation of solvation free energies using an atomistic description of the solvent and solute, with interactions calculated from first principles. Results can be obtained at a feasible computational cost without making use of approximations such as a continuum solvent or an empirical cavity formation energy. As such, the method can be used far from ambient conditions, where the empirical parameters needed for approximate theories of solvation may not be available. The sources of error in the ABC-FEP method are the approximations in the ab initio method, the finite sample of configurations, and the classical solvent model. This article explores the accuracy of various approximations used in the ABC-FEP method by comparing to the experimentally well-known free energy of hydration of water at two state points (ambient conditions, and 973.15 K and 600 kg/m3). The TIP4P-FQ model [J. Chem. Phys. 101, 6141 (1994)] is found to be a reliable solvent model for use with this method, even at supercritical conditions. Results depend strongly on the ab initio method used: a gradient-corrected density functional theory is not adequate, but a localized MP2 method yields excellent agreement with experiment. Computational costs are reduced by using a cluster approximation, in which ab initio pair interaction energies are calculated between the solute and up to 60 solvent molecules, while multi-body interactions are calculated with only a small cluster (5 to 12 solvent molecules). Sampling errors for the ab initio contribution to
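The central perturbation step of ABC-FEP is the standard Zwanzig free-energy perturbation formula, evaluated over configurations sampled with the empirical potential. A minimal sketch with invented energy differences:

```python
import math

def fep_correction(delta_u, kT):
    """Zwanzig FEP estimate of the free-energy change for switching
    empirical -> ab initio solute-solvent interactions:
    dA = -kT * ln < exp(-dU/kT) >, averaged over configurations drawn
    from the empirical-potential ensemble."""
    avg = sum(math.exp(-du / kT) for du in delta_u) / len(delta_u)
    return -kT * math.log(avg)

# Invented ab-initio-minus-empirical energy differences (kJ/mol):
samples = [1.2, 0.8, 1.5, 0.9, 1.1, 1.3, 0.7, 1.0]
dA = fep_correction(samples, kT=2.479)  # kT near 298 K, in kJ/mol
```

By Jensen's inequality the FEP estimate is bounded above by the plain average of the energy differences, which is a useful sanity check on any implementation.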
Use of ab initio quantum chemical methods in battery technology
Energy Technology Data Exchange (ETDEWEB)
Deiss, E [Paul Scherrer Inst. (PSI), Villigen (Switzerland)
1997-06-01
Ab initio quantum chemistry can nowadays predict physical and chemical properties of molecules and solids. An attempt should be made to use this tool more widely for predicting technologically favourable materials. To demonstrate the use of ab initio quantum chemistry in battery technology, the theoretical energy density (energy per volume of active electrode material) and specific energy (energy per mass of active electrode material) of a rechargeable lithium-ion battery consisting of a graphite electrode and a nickel oxide electrode have been calculated with this method. (author) 1 fig., 1 tab., 7 refs.
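The theoretical specific energy of such a cell follows from Faraday's law: energy per mass = nFV/M. A generic sketch (the example numbers are hypothetical, not the paper's graphite/nickel-oxide values):

```python
FARADAY = 96485.3  # Faraday constant, C/mol

def theoretical_specific_energy(n_electrons, cell_voltage, molar_mass_g):
    """Theoretical specific energy of an electrode couple in Wh/kg:
    n*F*V joules per mole of cell reaction, divided by the molar mass of
    the active materials (g/mol); 1 Wh/kg = 3.6 J/g."""
    return n_electrons * FARADAY * cell_voltage / (3.6 * molar_mass_g)

# Hypothetical one-electron couple at 3.7 V with 170 g/mol of active mass:
e_spec = theoretical_specific_energy(1, 3.7, 170.0)  # about 583 Wh/kg
```

Ab initio calculations enter by supplying the cell voltage (from total-energy differences) and the densities needed for the volumetric figure.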
Predicting lattice thermal conductivity with help from ab initio methods
Broido, David
2015-03-01
The lattice thermal conductivity is a fundamental transport parameter that determines the utility of a material for specific thermal management applications. Materials with low thermal conductivity find applicability in thermoelectric cooling and energy harvesting. High thermal conductivity materials are urgently needed to help address the ever-growing heat dissipation problem in microelectronic devices. Predictive computational approaches can provide critical guidance in the search for and development of new materials for such applications. Ab initio methods for calculating lattice thermal conductivity have demonstrated predictive capability, but while they are becoming increasingly efficient, they are still computationally expensive, particularly for complex crystals with large unit cells. In this talk, I will review our work on first-principles phonon transport for which the intrinsic lattice thermal conductivity is limited only by phonon-phonon scattering arising from anharmonicity. I will examine the use of the phase space for anharmonic phonon scattering and the Grüneisen parameters as measures of the thermal conductivities for a range of materials and compare these to the widely used guidelines stemming from the theory of Leibfried and Schlömann. This research was supported primarily by the NSF under Grant CBET-1402949, and by S3TEC, an Energy Frontier Research Center funded by the US DOE, Office of Basic Energy Sciences, under Award No. DE-SC0001299.
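The "widely used guidelines" are commonly applied in Slack's form of the Leibfried-Schlömann expression, kappa ≈ A·M·theta_D^3·delta/(gamma^2·n^(2/3)·T). A sketch with rough, assumed numbers for silicon (gamma = 1.0 is an assumed average Grüneisen parameter, and A is an empirical constant, so the result is only an order-of-magnitude guideline):

```python
def slack_kappa(M_amu, theta_D, delta_ang, n_atoms, gamma, T, A=3.1e-6):
    """Slack's form of the Leibfried-Schloemann guideline for lattice
    thermal conductivity (W/m/K). M_amu: average atomic mass (amu);
    theta_D: Debye temperature (K); delta_ang: cube root of the volume
    per atom (angstrom); n_atoms: atoms per primitive cell; gamma:
    Grueneisen parameter; A: empirical constant for these units."""
    return (A * M_amu * theta_D ** 3 * delta_ang
            / (gamma ** 2 * n_atoms ** (2.0 / 3.0) * T))

# Rough silicon numbers (theta_D = 645 K, delta = 2.71 A, gamma assumed 1.0):
kappa_si = slack_kappa(28.09, 645.0, 2.71, 2, 1.0, 300.0)  # order 100 W/m/K
```

The first-principles approach reviewed in the talk replaces such guideline formulas with explicit anharmonic phonon-phonon scattering rates.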
Ab initio calculations of mechanical properties: Methods and applications
Czech Academy of Sciences Publication Activity Database
Pokluda, J.; Černý, Miroslav; Šob, Mojmír; Umeno, Y.
2015-01-01
Roč. 73, AUG (2015), s. 127-158 ISSN 0079-6425 R&D Projects: GA ČR(CZ) GAP108/12/0311 Institutional support: RVO:68081723 Keywords : Ab initio methods * Elastic moduli * Intrinsic hardness * Stability analysis * Theoretical strength * Intrinsic brittleness/ductility Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 31.083, year: 2015
A Force Balanced Fragmentation Method for ab Initio Molecular Dynamic Simulation of Protein
Directory of Open Access Journals (Sweden)
Mingyuan Xu
2018-05-01
Full Text Available A force-balanced generalized molecular fractionation with conjugate caps (FB-GMFCC) method is proposed for ab initio molecular dynamics simulation of proteins. In this approach, the energy of the protein is computed by a linear combination of the QM energies of individual residues and molecular fragments that account for the two-body hydrogen-bond interactions between backbone peptides. The atomic forces on the capped H atoms are corrected to conserve the total force on the protein. Using this approach, ab initio molecular dynamics simulation of an Ace-(Ala)9-NME linear peptide showed conservation of the total energy of the system throughout the simulation. Further, a more robust 110 ps ab initio molecular dynamics simulation was performed for a protein with 56 residues and 862 atoms in explicit water. Compared with the classical force field, the ab initio molecular dynamics simulations gave a better description of the geometry of peptide bonds. Although further development is still needed, the current approach is highly efficient, trivially parallel, and can be applied to ab initio molecular dynamics simulation studies of large proteins.
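The force-balancing idea, i.e., correcting fragment forces so that their sum conserves the total force on the protein, can be illustrated with the simplest possible scheme: fold the force on each cap hydrogen back onto a host atom. This is a stand-in for, not a reproduction of, the actual FB-GMFCC correction:

```python
def fold_cap_forces(forces, cap_to_host):
    """Toy force balancing: add the force on each cap/link hydrogen to its
    host atom and drop the cap, so the total force on the real atoms
    equals the total fragment force (illustrative only, not the exact
    FB-GMFCC scheme). forces: list of [fx, fy, fz] per atom;
    cap_to_host: cap atom index -> host atom index."""
    real = {i: list(f) for i, f in enumerate(forces) if i not in cap_to_host}
    for cap, host in cap_to_host.items():
        for k in range(3):
            real[host][k] += forces[cap][k]
    return real

# Two real atoms plus one cap hydrogen (index 2) capping atom 0:
forces = [[1.0, 0.0, 0.0], [0.0, 2.0, 0.0], [-0.5, 0.1, 0.0]]
balanced = fold_cap_forces(forces, {2: 0})
```

Conserving the total force in this way is what keeps the total energy stable over a long molecular dynamics run.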
Efficient approach to compute melting properties fully from ab initio with application to Cu
Zhu, Li-Fang; Grabowski, Blazej; Neugebauer, Jörg
2017-12-01
Applying thermodynamic integration within an ab initio-based free-energy approach is a state-of-the-art method to calculate melting points of materials. However, the high computational cost and the reliance on a good reference system for calculating the liquid free energy have so far hindered a general application. To overcome these challenges, we propose the two-optimized references thermodynamic integration using Langevin dynamics (TOR-TILD) method in this work by extending the two-stage upsampled thermodynamic integration using Langevin dynamics (TU-TILD) method, which was originally developed to obtain anharmonic free energies of solids, to the calculation of liquid free energies. The core idea of TOR-TILD is to fit two empirical potentials to the energies from density functional theory based molecular dynamics runs for the solid and the liquid phase and to use these potentials as reference systems for thermodynamic integration. Because the empirical potentials closely reproduce the ab initio system in the relevant part of the phase space, the convergence of the thermodynamic integration is very rapid. Therefore, the proposed approach significantly improves the computational efficiency while preserving the required accuracy. As a test case, we apply TOR-TILD to fcc Cu, computing not only the melting point but also various other melting properties, such as the entropy and enthalpy of fusion and the volume change upon melting. The generalized gradient approximation (GGA) with the Perdew-Burke-Ernzerhof (PBE) exchange-correlation functional and the local-density approximation (LDA) are used. Using both functionals gives a reliable ab initio confidence interval for the melting point, the enthalpy of fusion, and the entropy of fusion.
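Once the ensemble averages <U_DFT - U_ref> at each coupling parameter lambda are available from the reference-potential simulations, the thermodynamic-integration step itself is a one-dimensional quadrature. A sketch with made-up averages:

```python
def thermodynamic_integration(lambdas, mean_du):
    """Free-energy difference between a reference potential and the
    target (e.g., DFT) potential,
    dF = integral_0^1 <U_tgt - U_ref>_lambda dlambda,
    evaluated by the trapezoidal rule over precomputed ensemble averages."""
    dF = 0.0
    for i in range(len(lambdas) - 1):
        dF += 0.5 * (mean_du[i] + mean_du[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dF

# Made-up <U_DFT - U_ref>_lambda averages (eV/atom) on a 5-point lambda grid:
lams = [0.0, 0.25, 0.5, 0.75, 1.0]
du = [0.041, 0.038, 0.036, 0.035, 0.034]
dF = thermodynamic_integration(lams, du)
```

The flatter this integrand (i.e., the better the fitted reference potential), the fewer lambda points and samples are needed, which is the efficiency argument behind TOR-TILD.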
Sumner, Isaiah; Iyengar, Srinivasan S
2007-10-18
We have introduced a computational methodology to study vibrational spectroscopy in clusters inclusive of critical nuclear quantum effects. This approach is based on the recently developed quantum wavepacket ab initio molecular dynamics method, which combines quantum wavepacket dynamics with ab initio molecular dynamics. The computational efficiency of the dynamical procedure is drastically improved (by several orders of magnitude) through the utilization of wavelet-based techniques combined with the previously introduced time-dependent deterministic sampling procedure to achieve stable, picosecond-length quantum-classical dynamics of electrons and nuclei in clusters. The dynamical information is employed to construct a novel cumulative flux/velocity correlation function, where the wavepacket flux from the quantized particle is combined with classical nuclear velocities to obtain the vibrational density of states. The approach is demonstrated by computing the vibrational density of states of [Cl-H-Cl]-, inclusive of critical quantum nuclear effects, and our results are in good agreement with experiment. A general hierarchical procedure is also provided, based on electronic structure harmonic frequencies, classical ab initio molecular dynamics, computation of nuclear quantum-mechanical eigenstates, and quantum wavepacket ab initio dynamics, to understand vibrational spectroscopy in hydrogen-bonded clusters that display large degrees of anharmonicity.
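The flux/velocity correlation function described above generalizes the classical recipe of obtaining a vibrational density of states from the Fourier transform of a velocity autocorrelation function. That classical baseline can be sketched as follows (toy harmonic signal; this is not the authors' quantum wavepacket machinery):

```python
import math

def vdos(velocities, dt, freqs):
    """Vibrational density of states as the cosine transform of the
    velocity autocorrelation function (classical baseline for the
    flux/velocity correlation approach)."""
    n = len(velocities)
    vacf = [sum(velocities[i] * velocities[i + lag] for i in range(n - lag))
            / (n - lag) for lag in range(n // 2)]
    return [sum(c * math.cos(2.0 * math.pi * f * lag * dt)
                for lag, c in enumerate(vacf)) for f in freqs]

# Velocity trace of a single harmonic mode at 5 Hz, sampled at 100 Hz:
dt = 0.01
v = [math.cos(2.0 * math.pi * 5.0 * i * dt) for i in range(400)]
spectrum = vdos(v, dt, [1.0, 3.0, 5.0, 7.0, 9.0])  # peaks at the 5 Hz entry
```

In the quantum-classical method, the classical velocity of the quantized nucleus is replaced by the wavepacket flux, which is what restores anharmonic and tunneling contributions to the spectrum.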
Ab initio methods for electron-molecule collisions
International Nuclear Information System (INIS)
Collins, L.A.; Schneider, B.I.
1987-01-01
This review concentrates on the recent advances in treating the electronic aspect of the electron-molecule interaction and leaves to other articles the description of the rotational and vibrational motions. Those methods which give the most complete treatment of the direct, exchange, and correlation effects are focused on. Such full treatments are generally necessary at energies below a few Rydbergs (≅ 60 eV). This choice unfortunately necessitates omission of those active and vital areas devoted to the development of model potentials and approximate scattering formulations. The ab initio and model approaches complement each other and are both extremely important to the full explication of the electron-scattering process. Due to the rapid developments of recent years, the approaches that provide the fullest treatment are concentrated on. 81 refs
International Nuclear Information System (INIS)
Bernholc, J.
1998-01-01
The field of computational materials physics has grown very quickly in the past decade, and it is now possible to simulate properties of complex materials completely from first principles. The presentation mostly focused on first-principles dynamic simulations. Such simulations were pioneered by Car and Parrinello, who introduced a method for performing realistic simulations within the context of density functional theory. The Car-Parrinello method and related plane-wave approaches were reviewed in depth and illustrated with several applications: the dynamics of the C60 solid, diffusion across Si steps, and the computation of free-energy differences. Alternative ab initio simulation schemes, which use preconditioned conjugate-gradient techniques for energy minimization and dynamics, were also discussed.
Summation of Parquet diagrams as an ab initio method in nuclear structure calculations
International Nuclear Information System (INIS)
Bergli, Elise; Hjorth-Jensen, Morten
2011-01-01
Research highlights: We present a Green's function based approach for ab initio nuclear structure calculations, in which we sum the subset of so-called Parquet diagrams. Applying the theory to a simple but realistic model gives results in good agreement with other ab initio methods, which opens the way to ab initio calculations for medium-heavy nuclei. - Abstract: In this work we discuss the summation of the Parquet class of diagrams within Green's function theory as a possible framework for ab initio nuclear structure calculations. The theory is presented and some numerical details are discussed, in particular the approximations employed. We apply the Parquet method to a simple model and compare our results with those from an exact solution. The main conclusion is that even at the level of approximation presented here, the results show good agreement with other comparable ab initio approaches.
An Efficient Method for Electron-Atom Scattering Using Ab-initio Calculations
Energy Technology Data Exchange (ETDEWEB)
Xu, Yuan; Yang, Yonggang; Xiao, Liantuan; Jia, Suotang [Shanxi University, Taiyuan (China)
2017-02-15
We present an efficient method based on ab initio calculations to investigate electron-atom scattering. These calculations profit from methods implemented in standard quantum chemistry programs. The new approach is applied to electron-helium scattering, and the results are compared with experimental and other theoretical references to demonstrate the efficiency of our method.
Essential numerical computer methods
Johnson, Michael L
2010-01-01
The use of computers and computational methods has become ubiquitous in biological and biomedical research. During the last two decades most basic algorithms have not changed, but what has is the huge increase in computer speed and ease of use, along with the corresponding orders-of-magnitude decrease in cost. A general perception exists that the only applications of computers and computer methods in biological and biomedical research are either basic statistical analysis or the searching of DNA sequence databases. While these are important applications, they only scratch the surface of the current and potential applications of computers and computer methods in biomedical research. The various chapters within this volume include a wide variety of applications that extend far beyond this limited perception. As part of the Reliable Lab Solutions series, Essential Numerical Computer Methods brings together chapters from volumes 210, 240, 321, 383, 384, 454, and 467 of Methods in Enzymology. These chapters provide ...
Energy Technology Data Exchange (ETDEWEB)
Orimoto, Yuuichi, E-mail: orimoto.yuuichi.888@m.kyushu-u.ac.jp [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Aoki, Yuriko [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012 (Japan)
2016-07-14
An automated property optimization method was developed based on the ab initio O(N) elongation (ELG) method and applied to the optimization of nonlinear optical (NLO) properties in DNA as a first test. The ELG method mimics a polymerization reaction on a computer: the reaction terminal of a starting cluster is attacked by monomers sequentially to elongate the electronic structure of the system by solving in each step a limited space including the terminal (localized molecular orbitals at the terminal) and the monomer. The ELG-finite field (ELG-FF) method for calculating (hyper-)polarizabilities was used as the engine program of the optimization method, and it was found to show linear-scaling efficiency while maintaining high computational accuracy for a randomly sequenced DNA model. Furthermore, the self-consistent field convergence was significantly improved by using the ELG-FF method compared with a conventional method, which can lead to more feasible NLO property values in the FF treatment. The automated optimization method successfully chose an appropriate base pair from the four base pairs (A, T, G, and C) for each elongation step according to an evaluation function. In test optimizations of the first-order hyperpolarizability (β) in DNA, a substantial difference was observed between the “choose-maximum” condition (choose the base pair giving the maximum β at each step) and the “choose-minimum” condition (choose the base pair giving the minimum β). In contrast, the difference between these conditions was ambiguous for the second-order hyperpolarizability (γ), because of the small absolute value of γ and the limitation of numerical differentiation in the FF method. It can be concluded that the ab initio level property optimization method introduced here can be an effective step towards an advanced computer-aided material design method, as long as the numerical limitation of the FF method is taken into account.
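The finite-field (FF) treatment used by the ELG-FF engine obtains (hyper-)polarizabilities by numerically differentiating the total energy with respect to an applied field; the numerical limitation mentioned in the abstract comes from exactly this step. A one-dimensional sketch for beta = -d^3E/dF^3 using a central-difference stencil on a model energy expansion:

```python
def beta_finite_field(E, h):
    """First hyperpolarizability from total energies in static fields:
    beta = -d^3E/dF^3 at F = 0, via the central-difference stencil
    f'''(0) ~ [f(2h) - 2f(h) + 2f(-h) - f(-2h)] / (2h^3)
    (the finite-field idea in one dimension)."""
    return -(E(2 * h) - 2 * E(h) + 2 * E(-h) - E(-2 * h)) / (2 * h ** 3)

# Model energy expansion E(F) = -mu*F - alpha*F^2/2 - beta*F^3/6:
mu, alpha, beta_true = 0.5, 10.0, 200.0
E = lambda F: -mu * F - 0.5 * alpha * F ** 2 - beta_true * F ** 3 / 6.0
beta_ff = beta_finite_field(E, 0.001)  # recovers beta_true = 200
```

For small property values (the gamma case in the abstract) the higher-order stencils needed amplify round-off in the energy differences, which is the numerical limitation referred to.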
Ab Initio Molecular-Dynamics Simulation of Neuromorphic Computing in Phase-Change Memory Materials.
Skelton, Jonathan M; Loke, Desmond; Lee, Taehoon; Elliott, Stephen R
2015-07-08
We present an in silico study of the neuromorphic-computing behavior of the prototypical phase-change material, Ge2Sb2Te5, using ab initio molecular-dynamics simulations. Stepwise changes in structural order in response to temperature pulses of varying length and duration are observed, and a good reproduction of the spike-timing-dependent plasticity observed in nanoelectronic synapses is demonstrated. Short above-melting pulses lead to instantaneous loss of structural and chemical order, followed by delayed partial recovery upon structural relaxation. We also investigate the link between structural order and electrical and optical properties. These results pave the way toward a first-principles understanding of phase-change physics beyond binary switching.
Bistafa, Carlos; Kitamura, Yukichi; Martins-Costa, Marilia T C; Nagaoka, Masataka; Ruiz-López, Manuel F
2018-05-22
We describe a method to locate stationary points on the free-energy hypersurface of complex molecular systems using high-level correlated ab initio potentials. In this work, we assume a combined QM/MM description of the system, although generalization to full ab initio potentials or other theoretical schemes is straightforward. The free-energy gradient (FEG) is obtained as the mean force acting on the relevant nuclei using a dual-level strategy. First, a statistical simulation is carried out using an appropriate low-level quantum mechanical force field. Free-energy perturbation (FEP) theory is then used to obtain the free-energy derivatives for the target, high-level quantum mechanical force field. We show that this composite FEG-FEP approach is able to reproduce the results of a standard free-energy minimization procedure with high accuracy, while simultaneously allowing a drastic reduction of both computational and wall-clock time. The method has been applied to study the structure of the water molecule in liquid water at the QCISD/aug-cc-pVTZ level of theory, using sampling from QM/MM molecular dynamics simulations at the B3LYP/6-311+G(d,p) level. The obtained values for the geometrical parameters and the dipole moment of the water molecule are within the experimental error, and they also display excellent agreement with other theoretical estimates. The developed methodology therefore represents an important step toward the accurate determination of the mechanism, kinetics, and thermodynamic properties of processes in solution, in enzymes, and in other disordered chemical systems using state-of-the-art ab initio potentials.
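The FEP reweighting step of such a dual-level strategy can be sketched as an exponentially weighted average: properties evaluated on the high-level surface are averaged over low-level-sampled configurations with weights exp(-dU/kT). A toy one-dimensional version with invented numbers (a stand-in for the composite FEG-FEP machinery, not the authors' implementation):

```python
import math

def reweighted_mean_force(forces_high, delta_u, kT):
    """Dual-level sketch: mean force on the high-level surface estimated
    from configurations sampled with the low-level potential, using FEP
    weights w_i = exp(-(U_high - U_low)_i / kT)."""
    w = [math.exp(-du / kT) for du in delta_u]
    return sum(wi * f for wi, f in zip(w, forces_high)) / sum(w)

# With equal energy differences, the estimate reduces to a plain average:
f_mean = reweighted_mean_force([1.0, 2.0, 3.0], [0.5, 0.5, 0.5], kT=1.0)
```

Configurations whose low-level energies differ strongly from the high-level ones are exponentially de-weighted, so the quality of the low-level sampling controls the statistical efficiency of the estimate.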
International Nuclear Information System (INIS)
Almeida, Wagner B. de
2000-01-01
The determination of the molecular structure of molecules is of fundamental importance in chemistry. X-ray and electron diffraction methods constitute important tools for the elucidation of the molecular structure of systems in the solid state and gas phase, respectively. The use of quantum mechanical molecular orbital ab initio methods offers an alternative for conformational analysis studies. Comparison between theoretical results and those obtained experimentally in the gas phase can make a significant contribution to an unambiguous determination of the geometrical parameters. In this article the determination of the molecular structure of the cyclooctane molecule by gas-phase electron diffraction and ab initio calculations will be addressed, providing an example of a comparative analysis of theoretical and experimental predictions. (author)
Odell, Anders; Delin, Anna; Johansson, Börje; Ulman, Kanchan; Narasimhan, Shobhana; Rungger, Ivan; Sanvito, Stefano
2011-01-01
The influence of the electrode's Fermi surface on the transport properties of a photoswitching molecule is investigated with state-of-the-art ab initio transport methods. We report results for the conducting properties of the two forms
Projector augmented wave method: ab initio molecular dynamics ...
Indian Academy of Sciences (India)
Unknown
kinetic energy is small and the wave function is smooth. However, the wave ... and various strategies have been developed. ... methods let us briefly review the history of augmented ... alleviated by adding an intelligent zero: If an operator B.
Room temperature linelists for CO2 asymmetric isotopologues with ab initio computed intensities
Zak, Emil J.; Tennyson, Jonathan; Polyansky, Oleg L.; Lodi, Lorenzo; Zobov, Nikolay F.; Tashkun, Sergei A.; Perevalov, Valery I.
2017-12-01
The present paper reports room temperature line lists for six asymmetric isotopologues of carbon dioxide: 16O12C18O (628), 16O12C17O (627), 16O13C18O (638), 16O13C17O (637), 17O12C18O (728) and 17O13C18O (738), covering the range 0-8000 cm-1. Variational rotation-vibration wavefunctions and energy levels are computed using the DVR3D software suite and a high quality semi-empirical potential energy surface (PES), followed by computation of intensities using an ab initio dipole moment surface (DMS). A theoretical procedure for quantifying the sensitivity of line intensities to minor distortions of the PES/DMS allows our theoretical model to be regarded as critically evaluated. Several recent high quality measurements and theoretical approaches are discussed to benchmark our results against the most accurate available data. Indeed, the thesis that accuracy is transferable among different isotopologues through the use of a mass-independent PES is supported by several examples. We therefore conclude that the majority of line intensities for strong bands are predicted with sub-percent accuracy. Accurate line positions are generated using an effective Hamiltonian constructed from the latest experiments. This study completes the list of relevant isotopologues of carbon dioxide; these line lists are available for remote sensing studies and for inclusion in databases.
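Line lists of this kind are tabulated at a reference temperature and rescaled via the standard HITRAN-style temperature dependence: a lower-state Boltzmann factor, a stimulated-emission factor, and the partition-sum ratio. A minimal sketch follows; the input numbers and the partition-sum ratio are placeholders, not data from this work.

```python
import numpy as np

C2 = 1.4387769  # second radiation constant c2 = hc/k, in cm*K

def scale_intensity(s_ref, nu, e_lower, T, Q_ratio):
    """Scale a reference line intensity s_ref (given at T_ref = 296 K, the
    HITRAN standard) to temperature T.
    nu: transition wavenumber (cm^-1); e_lower: lower-state energy (cm^-1);
    Q_ratio: Q(T_ref)/Q(T), the partition-sum ratio (placeholder here)."""
    T_ref = 296.0
    boltz = np.exp(-C2 * e_lower / T) / np.exp(-C2 * e_lower / T_ref)
    stim = (1 - np.exp(-C2 * nu / T)) / (1 - np.exp(-C2 * nu / T_ref))
    return s_ref * Q_ratio * boltz * stim

# at T = T_ref the scaling must reduce to the identity
s = scale_intensity(1.0e-23, nu=2349.0, e_lower=100.0, T=296.0, Q_ratio=1.0)
print(s)  # → 1e-23
```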
Quantum chemistry the development of ab initio methods in molecular electronic structure theory
Schaefer III, Henry F
2004-01-01
This guide is guaranteed to prove of keen interest to the broad spectrum of experimental chemists who use electronic structure theory to assist in the interpretation of their laboratory findings. A list of 150 landmark papers in ab initio molecular electronic structure methods, it features the first page of each paper (which usually encompasses the abstract and introduction). Its primary focus is methodology, rather than the examination of particular chemical problems, and the selected papers either present new and important methods or illustrate the effectiveness of existing methods in predi
Energy Technology Data Exchange (ETDEWEB)
Ribeiro, M., E-mail: ribeiro.jr@oorbit.com.br [Office of Operational Research for Business Intelligence and Technology, Principal Office, Buffalo, Wyoming 82834 (United States)
2015-06-21
Ab initio calculations of hydrogen-passivated Si nanowires were performed using density functional theory within LDA-1/2, to account for the excited states properties. A range of diameters was calculated to draw conclusions about the ability of the method to correctly describe the main trends of bandgap, quantum confinement, and self-energy corrections versus the diameter of the nanowire. Bandgaps are predicted with excellent accuracy if compared with other theoretical results like GW, and with the experiment as well, but with a low computational cost.
International Nuclear Information System (INIS)
Ribeiro, M.
2015-01-01
Ab initio calculations of hydrogen-passivated Si nanowires were performed using density functional theory within LDA-1/2, to account for the excited states properties. A range of diameters was calculated to draw conclusions about the ability of the method to correctly describe the main trends of bandgap, quantum confinement, and self-energy corrections versus the diameter of the nanowire. Bandgaps are predicted with excellent accuracy if compared with other theoretical results like GW, and with the experiment as well, but with a low computational cost.
Ab initio computational study of vincristine as a biological active compound: NMR and NBO analyses
Directory of Open Access Journals (Sweden)
Shiva Joohari
2015-06-01
Full Text Available Vincristine is a biologically active alkaloid that has been used clinically against a variety of neoplasms. In the current study we have theoretically investigated the magnetic properties of the title compound to predict physical and chemical properties of vincristine as a biological inhibitor. Ab initio computations using HF and B3LYP with the 3-21G(d) and 6-31G(d) levels of theory have been performed, and the magnetic shielding tensor (σ, ppm), shielding asymmetry (η), magnetic shielding anisotropy (σaniso, ppm), the skew of the tensor (κ), chemical shift anisotropy (Δδ) and chemical shift (δ) were calculated to indicate the details of the interaction mechanism between microtubules and vincristine. Moreover, EHOMO, ELUMO and Ebg were evaluated. The maximum and minimum values of Ebg were found in HF/3-21G and B3LYP/3-21G, respectively. It was also suggested that O24, O37, O49 and O55, with minimum values of σiso, are active sites of the title compound. Furthermore, the calculated chemical shifts were compared with experimental data in DMSO and CDCl3 solvents.
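The tensor-derived quantities listed above all follow from the principal components of the shielding tensor. A small sketch under one common convention is given below; conventions for the anisotropy, asymmetry and skew differ between programs, so the exact definitions here should be treated as assumptions rather than the ones used in this paper.

```python
import numpy as np

def shielding_invariants(sigma):
    """Isotropic value, anisotropy, asymmetry and skew of a 3x3 magnetic
    shielding tensor, under one common convention (definitions vary)."""
    # principal components from the symmetric part, sorted ascending
    s11, s22, s33 = np.sort(np.linalg.eigvalsh(0.5 * (sigma + sigma.T)))
    iso = (s11 + s22 + s33) / 3.0
    aniso = s33 - 0.5 * (s11 + s22)            # Haeberlen-style anisotropy
    span = s33 - s11                           # Herzfeld-Berger span
    skew = 3.0 * (iso - s22) / span if span else 0.0
    eta = (s22 - s11) / (s33 - iso) if s33 != iso else 0.0
    return iso, aniso, eta, skew

# example on a diagonal tensor with principal values 10, 20, 60 ppm
iso, aniso, eta, skew = shielding_invariants(np.diag([10.0, 20.0, 60.0]))
print(iso, aniso)  # → 30.0 45.0
```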
Light focusing through a multiple scattering medium: ab initio computer simulation
Danko, Oleksandr; Danko, Volodymyr; Kovalenko, Andrey
2018-01-01
The present study considers ab initio computer simulation of light focusing through a complex scattering medium. The focusing is performed by shaping the incident light beam in order to obtain a small focused spot on the opposite side of the scattering layer. MSTM software (Auburn University) is used to simulate the propagation of an arbitrary monochromatic Gaussian beam and obtain the 2D distribution of the optical field in the selected plane of the investigated volume. Based on the set of incident and scattered fields, the pair of right and left eigenbases and the corresponding singular values were calculated. The pair of right and left eigenmodes together with the corresponding singular value constitute a transmittance eigenchannel of the disordered medium. Thus, the scattering process is described in three steps: 1) decomposition of the initial field in the right eigenbasis; 2) scaling of the decomposition coefficients by the corresponding singular values; 3) assembly of the scattered field as a composition of the weighted left eigenmodes. Basis fields are represented as linear combinations of the original Gaussian beams and scattered fields. It was demonstrated that 60 independent control channels provide focusing of the light into a spot with a minimal radius of approximately 0.4 μm at half maximum. The intensity enhancement in the focal plane was equal to 68, which coincided with the theoretical prediction.
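The three-step eigenchannel picture maps directly onto a singular-value decomposition of the transmission matrix. A toy numerical sketch follows, with a random complex matrix standing in for the MSTM-simulated medium; the dimensions and the random matrix are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)
N, M = 60, 400                       # control channels, output field samples
# random complex "transmission matrix" standing in for the simulated medium
T = (rng.normal(size=(M, N)) + 1j * rng.normal(size=(M, N))) / np.sqrt(2 * N)

# right eigenmodes (rows of Vh), singular values s, left eigenmodes (cols of U)
U, s, Vh = np.linalg.svd(T, full_matrices=False)

e_in = rng.normal(size=N) + 1j * rng.normal(size=N)
c = Vh @ e_in          # 1) decompose the input field in the right eigenbasis
c *= s                 # 2) scale the coefficients by the singular values
e_out = U @ c          # 3) reassemble the output from the left eigenmodes

# the three steps reproduce direct propagation through the medium
print(np.allclose(e_out, T @ e_in))  # → True
```

Each (right mode, singular value, left mode) triple is one transmittance eigenchannel, so shaping the input within the span of a few strong right eigenmodes is what makes focusing through the layer possible.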
A computational ab initio study of surface diffusion of sulfur on the CdTe (111) surface
Energy Technology Data Exchange (ETDEWEB)
Naderi, Ebadollah, E-mail: enaderi42@gmail.com [Department of Physics, Savitribai Phule Pune University (SPPU), Pune-411007 (India); Ghaisas, S. V. [Department of Electronic Science, Savitribai Phule Pune University (SPPU), Pune-411007 (India)
2016-08-15
In order to discern the formation of epitaxial growth of a CdS shell over CdTe nanocrystals, the kinetics of the initial stages of the growth of CdS on CdTe is investigated using ab-initio methods. We report diffusion of a sulfur adatom on the CdTe (111) A-type (Cd-terminated) and B-type (Te-terminated) surfaces within density functional theory (DFT). The barriers are computed by applying the climbing-image Nudged Elastic Band (c-NEB) method. From the results, surface hopping emerges as the major mode of diffusion. In addition, there is a distinct contribution from kick-out type diffusion, in which a CdTe surface atom is kicked out from its position and replaced by the diffusing sulfur atom. Surface vacancy substitution also contributes to the concomitant dynamics. There are sites on the B-type surface that are competitively close, in terms of binding energy, to the lowest-energy site of epitaxy on the surface. The kick-out process is more likely for the B-type surface, where a Te atom of the surface is displaced by a sulfur adatom. Further, on the B-type surface, subsurface migration of sulfur is indicated. Furthermore, the binding energies of S on CdTe reveal that on the A-type surface, epitaxial sites provide relatively higher binding energies and barriers than on the B-type surface.
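The NEB idea behind such barrier calculations — relax a chain of images with the true force projected perpendicular to the path plus spring forces along it — can be illustrated on a toy two-dimensional double-well potential. This is a plain-NEB sketch without the climbing image; the potential, image count and parameters are arbitrary choices for illustration, not this paper's setup.

```python
import numpy as np

def V(p):                 # toy double well: minima at (±1, 0), saddle at (0, 0)
    x, y = p
    return x**4 - 2*x**2 + y**2

def gradV(p):
    x, y = p
    return np.array([4*x**3 - 4*x, 2*y])

n_img, k_spring, step = 11, 5.0, 0.01
path = np.linspace([-1.0, 0.0], [1.0, 0.0], n_img)   # endpoints at the minima
path[1:-1, 1] += 0.3                                 # perturb the initial guess

for _ in range(2000):
    new = path.copy()
    for i in range(1, n_img - 1):
        tau = path[i + 1] - path[i - 1]              # local path tangent
        tau /= np.linalg.norm(tau)
        g = gradV(path[i])
        f_perp = -(g - (g @ tau) * tau)              # true force, perpendicular part
        f_spr = k_spring * (np.linalg.norm(path[i + 1] - path[i])
                            - np.linalg.norm(path[i] - path[i - 1])) * tau
        new[i] = path[i] + step * (f_perp + f_spr)   # steepest-descent update
    path = new

barrier = max(V(p) for p in path) - V(path[0])
print(round(barrier, 2))  # ≈ 1.0, the saddle height of this potential
```

The climbing-image variant used in the paper additionally drives the highest image uphill along the tangent so that it converges onto the saddle point itself.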
A computational ab initio study of surface diffusion of sulfur on the CdTe (111) surface
Naderi, Ebadollah; Ghaisas, S. V.
2016-08-01
In order to discern the formation of epitaxial growth of a CdS shell over CdTe nanocrystals, the kinetics of the initial stages of the growth of CdS on CdTe is investigated using ab-initio methods. We report diffusion of a sulfur adatom on the CdTe (111) A-type (Cd-terminated) and B-type (Te-terminated) surfaces within density functional theory (DFT). The barriers are computed by applying the climbing-image Nudged Elastic Band (c-NEB) method. From the results, surface hopping emerges as the major mode of diffusion. In addition, there is a distinct contribution from kick-out type diffusion, in which a CdTe surface atom is kicked out from its position and replaced by the diffusing sulfur atom. Surface vacancy substitution also contributes to the concomitant dynamics. There are sites on the B-type surface that are competitively close, in terms of binding energy, to the lowest-energy site of epitaxy on the surface. The kick-out process is more likely for the B-type surface, where a Te atom of the surface is displaced by a sulfur adatom. Further, on the B-type surface, subsurface migration of sulfur is indicated. Furthermore, the binding energies of S on CdTe reveal that on the A-type surface, epitaxial sites provide relatively higher binding energies and barriers than on the B-type surface.
A computational ab initio study of surface diffusion of sulfur on the CdTe (111) surface
International Nuclear Information System (INIS)
Naderi, Ebadollah; Ghaisas, S. V.
2016-01-01
In order to discern the formation of epitaxial growth of a CdS shell over CdTe nanocrystals, the kinetics of the initial stages of the growth of CdS on CdTe is investigated using ab-initio methods. We report diffusion of a sulfur adatom on the CdTe (111) A-type (Cd-terminated) and B-type (Te-terminated) surfaces within density functional theory (DFT). The barriers are computed by applying the climbing-image Nudged Elastic Band (c-NEB) method. From the results, surface hopping emerges as the major mode of diffusion. In addition, there is a distinct contribution from kick-out type diffusion, in which a CdTe surface atom is kicked out from its position and replaced by the diffusing sulfur atom. Surface vacancy substitution also contributes to the concomitant dynamics. There are sites on the B-type surface that are competitively close, in terms of binding energy, to the lowest-energy site of epitaxy on the surface. The kick-out process is more likely for the B-type surface, where a Te atom of the surface is displaced by a sulfur adatom. Further, on the B-type surface, subsurface migration of sulfur is indicated. Furthermore, the binding energies of S on CdTe reveal that on the A-type surface, epitaxial sites provide relatively higher binding energies and barriers than on the B-type surface.
Ab-initio Computation of the Electronic, transport, and Bulk Properties of Calcium Oxide.
Mbolle, Augustine; Banjara, Dipendra; Malozovsky, Yuriy; Franklin, Lashounda; Bagayoko, Diola
We report results from ab-initio, self-consistent, local density approximation (LDA) calculations of the electronic and related properties of calcium oxide (CaO) in the rock-salt structure. We employed the Ceperley and Alder LDA potential and the linear combination of atomic orbitals (LCAO) formalism. Our calculations are non-relativistic. We implemented the LCAO formalism following the Bagayoko, Zhao, and Williams (BZW) method, as enhanced by Ekuma and Franklin (BZW-EF). The BZW-EF method involves a methodical search for the optimal basis set that yields the absolute minima of the occupied energies, as required by density functional theory (DFT). Our calculated indirect band gap of 6.91 eV, from Γ towards the L point, is in excellent agreement with the experimental value of 6.93-7.7 eV at room temperature (RT). We have also calculated the total (DOS) and partial (pDOS) densities of states as well as the bulk modulus. Our calculated bulk modulus is in excellent agreement with experiment. Work funded in part by the US Department of Energy (DOE), National Nuclear Security Administration (NNSA) (Award No. DE-NA0002630), the National Science Foundation (NSF) (Award No. 1503226), LaSPACE, and LONI-SUBR.
International Nuclear Information System (INIS)
Nakayama, Akira; Taketsugu, Tetsuya; Shiga, Motoyuki
2009-01-01
Efficiency of the ab initio hybrid Monte Carlo and ab initio path integral hybrid Monte Carlo methods is enhanced by employing an auxiliary potential energy surface that is used to update the system configuration via a molecular dynamics scheme. As a simple illustration of this method, a dual-level approach is introduced in which potential energy gradients are evaluated by computationally less expensive ab initio electronic structure methods. (author)
Directory of Open Access Journals (Sweden)
Hirokazu Takaki
2014-01-01
Full Text Available We present an efficient computation technique for ab-initio electron transport calculations based on density functional theory and the nonequilibrium Green’s function formalism for application to heterostructures with two-dimensional (2D) interfaces. The computational load for constructing the Green’s functions, which depends not only on the energy but also on the 2D Bloch wave vector along the interfaces and is thus catastrophically heavy, is circumvented by parallel computational techniques with the message passing interface, which divide the calculations of the Green’s functions with respect to energies and wave vectors. To demonstrate the computational efficiency of the present code, we perform ab-initio electron transport calculations on Al(100)-Si(100)-Al(100) heterostructures, one of the most typical metal-semiconductor-metal systems, and show their transmission spectra, densities of states (DOS), and dependence on the thickness of the Si layers.
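The MPI work division described above — splitting the independent Green's-function evaluations over the energy grid and the 2D Bloch wave vectors — amounts to partitioning an (E, k) task grid over ranks. A serial sketch of such a partition follows; the grid sizes and the round-robin scheme are illustrative assumptions, not details of the actual code.

```python
from itertools import product

def partition_tasks(n_energies, n_kpts, n_ranks):
    """Round-robin distribution of independent (E, k) Green's-function
    evaluations over MPI ranks (serial sketch of the work division)."""
    tasks = list(product(range(n_energies), range(n_kpts)))
    return [tasks[r::n_ranks] for r in range(n_ranks)]

# e.g. 101 energy points and an 8x8 2D Bloch k-mesh over 16 ranks
chunks = partition_tasks(n_energies=101, n_kpts=64, n_ranks=16)
# every task is assigned exactly once, and loads differ by at most one task
print(sum(len(c) for c in chunks), max(map(len, chunks)) - min(map(len, chunks)))
# → 6464 0
```

Because each G(E, k) is independent, this embarrassingly parallel split is what circumvents the heavy cost noted in the abstract.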
Indian Academy of Sciences (India)
Chemistry for their pioneering contributions to the development of computational methods in quantum chemistry and density functional theory .... program of Pople for ab-initio electronic structure calculation of molecules. This ab-initio MO ...
Czech Academy of Sciences Publication Activity Database
Ma, D.; Friák, Martin; von Pezold, J.; Raabe, D.; Neugebauer, J.
2015-01-01
Roč. 85, FEB (2015), s. 53-66 ISSN 1359-6454 Institutional support: RVO:68081723 Keywords : Solid-solution strengthening * DFT * Peierls–Nabarro model * Ab initio * Al alloys Subject RIV: BM - Solid Matter Physics ; Magnetism Impact factor: 5.058, year: 2015
Indian Academy of Sciences (India)
mechanisms of two molecular crystals: An ab initio molecular dynamics ... for Computation in Molecular and Materials Science and Department of Chemistry, School of ..... NSAF Foundation of National Natural Science Foun- ... Matter 14 2717.
Energy Technology Data Exchange (ETDEWEB)
Palmer, Michael H., E-mail: m.h.palmer@ed.ac.uk; Ridley, Trevor, E-mail: t.ridley@ed.ac.uk [School of Chemistry, University of Edinburgh, Joseph Black Building, David Brewster Road, Edinburgh EH9 3FJ, Scotland (United Kingdom); Hoffmann, Søren Vrønning, E-mail: vronning@phys.au.dk; Jones, Nykola C., E-mail: nykj@phys.au.dk [ISA, Department of Physics and Astronomy, Aarhus University, Ny Munkegade 120, DK-8000 Aarhus C (Denmark); Coreno, Marcello, E-mail: marcello.coreno@elettra.eu [CNR-IMIP, Montelibretti, c/o Laboratorio Elettra, Trieste (Italy); Simone, Monica de, E-mail: desimone@iom.cnr.it [CNR-IOM Laboratorio TASC, Trieste (Italy); Grazioli, Cesare [CNR-IOM Laboratorio TASC, Trieste (Italy); Department of Chemical and Pharmaceutical Sciences, University of Trieste, Trieste (Italy); Zhang, Teng [Department of Physics and Astronomy, University of Uppsala, Uppsala (Sweden); and others
2015-10-28
New photoelectron, ultraviolet (UV), and vacuum UV (VUV) spectra have been obtained for bromobenzene by synchrotron study with higher sensitivity and resolution than previous work. This, together with use of ab initio calculations with both configuration interaction and time dependent density functional theoretical methods, has led to major advances in interpretation. The VUV spectrum has led to identification of a considerable number of Rydberg states for the first time. The Franck-Condon (FC) analyses including both hot and cold bands lead to identification of the vibrational structure of both ionic and electronically excited states including two Rydberg states. The UV onset has been interpreted in some detail, and an interpretation based on the superposition of FC and Herzberg-Teller contributions has been performed. In a similar way, the 6 eV absorption band which is poorly resolved is analysed in terms of the presence of two ππ* states of ¹A₁ (higher oscillator strength) and ¹B₂ (lower oscillator strength) symmetries, respectively. The detailed analysis of the vibrational structure of the 2²B₁ ionic state is particularly challenging, and the best interpretation is based on equation-of-motion-coupled cluster with singles and doubles computations. A number of equilibrium structures of the ionic and singlet excited states show that the molecular structure is less subject to variation than corresponding studies for iodobenzene. The equilibrium structures of the 3b₁3s and 6b₂3s (valence shell numbering) Rydberg states have been obtained and compared with the corresponding ionic limit structures.
International Nuclear Information System (INIS)
Ghosh, G.; Olson, G.B.
2007-01-01
An optimal integration of modern computational tools and efficient experimentation is presented for the accelerated design of Nb-based superalloys. Integrated within a systems engineering framework, we have used ab initio methods along with alloy theory tools to predict phase stability of solid solutions and intermetallics to accelerate assessment of thermodynamic and kinetic databases enabling comprehensive predictive design of multicomponent multiphase microstructures as dynamic systems. Such an approach is also applicable for the accelerated design and development of other high performance materials. Based on established principles underlying Ni-based superalloys, the central microstructural concept is a precipitation strengthened system in which coherent cubic aluminide phase(s) provide both creep strengthening and a source of Al for Al₂O₃ passivation enabled by a Nb-based alloy matrix with required ductile-to-brittle transition temperature, atomic transport kinetics and oxygen solubility behaviors. Ultrasoft and PAW pseudopotentials, as implemented in VASP, are used to calculate total energy, density of states and bonding charge densities of aluminides with B2 and L2₁ structures relevant to this research. Characterization of prototype alloys by transmission and analytical electron microscopy demonstrates the precipitation of B2 or L2₁ aluminide in a (Nb) matrix. Employing Thermo-Calc and DICTRA software systems, thermodynamic and kinetic databases are developed for substitutional alloying elements and interstitial oxygen to enhance the diffusivity ratio of Al to O for promotion of Al₂O₃ passivation. However, the oxidation study of a Nb-Hf-Al alloy, with enhanced solubility of Al in (Nb) than in binary Nb-Al alloys, at 1300 °C shows the presence of a mixed oxide layer of NbAlO₄ and HfO₂ exhibiting parabolic growth.
International Nuclear Information System (INIS)
Hicks, Latorya D.; Fry, Albert J.; Kurzweil, Vanessa C.
2004-01-01
The electron affinities (EAs) of a training set of 29 monosubstituted benzalacetophenones (chalcones) were computed at the ab initio density functional B3LYP/6-31G* level of theory. The EAs and experimental reduction potentials of the training set are highly linearly correlated (correlation coefficient of 0.969 and standard deviation of 10.8 mV). An additional 72 di-, tri-, and tetrasubstituted chalcones were then synthesized. Their reduction potentials were predicted from computed EAs using the linear correlation derived from the training set. Agreement between the experimental and computed reduction potentials is remarkably good, with a standard deviation of less than 22 mV for this very large set of substances whose potentials extend over a range of almost 700 mV.
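The calibration described here is a least-squares line mapping computed EAs to measured reduction potentials, which then predicts potentials for new compounds. A sketch with invented numbers follows; the data below are hypothetical stand-ins for illustration, not the paper's values.

```python
import numpy as np

# hypothetical training data: computed EA (eV) vs measured E_red (V)
ea    = np.array([1.10, 1.22, 0.95, 1.40, 1.31, 1.05])
e_red = np.array([-1.52, -1.41, -1.66, -1.24, -1.32, -1.57])

slope, intercept = np.polyfit(ea, e_red, 1)     # least-squares calibration line
r = np.corrcoef(ea, e_red)[0, 1]                # correlation coefficient

def predict_potential(ea_new):
    """Predict a reduction potential from a computed electron affinity."""
    return slope * ea_new + intercept

# a strong linear correlation, and easier reduction at higher EA
print(r > 0.95, predict_potential(1.2) < predict_potential(1.3))  # → True True
```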
Computing Nash equilibria through computational intelligence methods
Pavlidis, N. G.; Parsopoulos, K. E.; Vrahatis, M. N.
2005-03-01
Nash equilibrium constitutes a central solution concept in game theory. The task of detecting the Nash equilibria of a finite strategic game remains a challenging problem to date. This paper investigates the effectiveness of three computational intelligence techniques, namely covariance matrix adaptation evolution strategies, particle swarm optimization, and differential evolution, to compute Nash equilibria of finite strategic games as global minima of a real-valued, nonnegative function. An issue of particular interest is to detect more than one Nash equilibrium of a game. The performance of the considered computational intelligence methods on this problem is investigated using multistart and deflection.
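The reformulation as a minimization problem can be illustrated on a 2×2 bimatrix game: the function below sums squared best-response regrets, so it is nonnegative and vanishes exactly at a Nash equilibrium. A brute-force grid search stands in for the evolutionary optimizers studied in the paper; the game (matching pennies) and the grid resolution are illustrative choices.

```python
import numpy as np

# matching pennies: payoff matrices for players 1 and 2
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A

def regret(p, q):
    """Nonnegative function whose global minima (value 0) are the Nash
    equilibria of the bimatrix game (a Lyapunov-style formulation)."""
    x = np.array([p, 1 - p])
    y = np.array([q, 1 - q])
    r1 = max(A @ y) - x @ A @ y      # player 1's best-response gain
    r2 = max(x @ B) - x @ B @ y      # player 2's best-response gain
    return max(r1, 0.0) ** 2 + max(r2, 0.0) ** 2

# crude global search over the mixed-strategy square (a PSO/DE stand-in)
grid = np.linspace(0.0, 1.0, 201)
p_star, q_star = min(((p, q) for p in grid for q in grid),
                     key=lambda pq: regret(*pq))
print(p_star, q_star)  # → 0.5 0.5, the unique mixed equilibrium
```

Deflection and multistart, as used in the paper, repeat such a search while penalizing already-found minima so that further equilibria can be detected.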
Nahar, S. N.
2003-01-01
Most astrophysical plasmas entail a balance between ionization and recombination. We present new results from a unified method for self-consistent and ab initio calculations for the inverse processes of photoionization and (e + ion) recombination. The treatment for (e + ion) recombination subsumes the non-resonant radiative recombination and the resonant dielectronic recombination processes in a unified scheme (S.N. Nahar and A.K. Pradhan, Phys. Rev. A 49, 1816 (1994); H.L. Zhang, S.N. Nahar, and A.K. Pradhan, J. Phys. B 32, 1459 (1999)). Calculations are carried out using the R-matrix method in the close coupling approximation, using an identical wavefunction expansion for both processes to ensure self-consistency. The results for photoionization and recombination cross sections may also be compared with state-of-the-art experiments on synchrotron radiation sources for photoionization, and on heavy ion storage rings for recombination. The new experiments display heretofore unprecedented detail in terms of resonances and background cross sections and thereby calibrate the theoretical data precisely. We find a level of agreement between theory and experiment at about 10% for not only the ground state but also the metastable states. The recent experiments therefore verify the estimated accuracy of the vast amount of photoionization data computed under the OP, IP and related works. The present work also reports photoionization cross sections including relativistic effects in the Breit-Pauli R-matrix (BPRM) approximation. Detailed features in the calculated cross sections exhibit the missing resonances due to fine structure. Self-consistent datasets for photoionization and recombination have so far been computed for approximately 45 atoms and ions. These are being reported in a continuing series of publications in Astrophysical J. Supplements (e.g. references below).
These data will also be available from the electronic database TIPTOPBASE (http://heasarc.gsfc.nasa.gov)
Computational Chemistry Comparison and Benchmark Database
SRD 101 NIST Computational Chemistry Comparison and Benchmark Database (Web, free access) The NIST Computational Chemistry Comparison and Benchmark Database is a collection of experimental and ab initio thermochemical properties for a selected set of molecules. The goals are to provide a benchmark set of molecules for the evaluation of ab initio computational methods and allow the comparison between different ab initio computational methods for the prediction of thermochemical properties.
Method for computed tomography
International Nuclear Information System (INIS)
Wagner, W.
1980-01-01
In transversal computed tomography apparatus, in which the positioning zone in which the patient can be positioned is larger than the scanning zone in which a body slice can be scanned, reconstruction errors are liable to occur. These errors are caused by incomplete irradiation of the body during examination. They become manifest not only as an incorrect image of the area not irradiated, but also have an adverse effect on the images of the other, completely irradiated areas. The invention enables reduction of these errors.
Computational methods working group
International Nuclear Information System (INIS)
Gabriel, T.A.
1997-09-01
During the Cold Moderator Workshop several working groups were established including one to discuss calculational methods. The charge for this working group was to identify problems in theory, data, program execution, etc., and to suggest solutions considering both deterministic and stochastic methods including acceleration procedures.
The In-Medium Similarity Renormalization Group: A novel ab initio method for nuclei
Energy Technology Data Exchange (ETDEWEB)
Hergert, H., E-mail: hergert@nscl.msu.edu [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics, The Ohio State University, Columbus, OH 43210 (United States); Bogner, S.K., E-mail: bogner@nscl.msu.edu [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Morris, T.D., E-mail: morrist@nscl.msu.edu [Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Schwenk, A., E-mail: schwenk@physik.tu-darmstadt.de [Institut für Kernphysik, Technische Universität Darmstadt, 64289 Darmstadt (Germany); ExtreMe Matter Institute EMMI, GSI Helmholtzzentrum für Schwerionenforschung GmbH, 64291 Darmstadt (Germany); Tsukiyama, K., E-mail: tsuki.kr@gmail.com [Center for Nuclear Study, Graduate School of Science, University of Tokyo, Hongo, Tokyo, 113-0033 (Japan)
2016-03-21
We present a comprehensive review of the In-Medium Similarity Renormalization Group (IM-SRG), a novel ab initio method for nuclei. The IM-SRG employs a continuous unitary transformation of the many-body Hamiltonian to decouple the ground state from all excitations, thereby solving the many-body problem. Starting from a pedagogical introduction of the underlying concepts, the IM-SRG flow equations are developed for systems with and without explicit spherical symmetry. We study different IM-SRG generators that achieve the desired decoupling, and how they affect the details of the IM-SRG flow. Based on calculations of closed-shell nuclei, we assess possible truncations for closing the system of flow equations in practical applications, as well as choices of the reference state. We discuss the issue of center-of-mass factorization and demonstrate that the IM-SRG ground-state wave function exhibits an approximate decoupling of intrinsic and center-of-mass degrees of freedom, similar to Coupled Cluster (CC) wave functions. To put the IM-SRG in context with other many-body methods, in particular many-body perturbation theory and non-perturbative approaches like CC, a detailed perturbative analysis of the IM-SRG flow equations is carried out. We conclude with a discussion of ongoing developments, including IM-SRG calculations with three-nucleon forces, the multi-reference IM-SRG for open-shell nuclei, first non-perturbative derivations of shell-model interactions, and the consistent evolution of operators in the IM-SRG. We dedicate this review to the memory of Gerry Brown, one of the pioneers of many-body calculations of nuclei.
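The flow-equation idea can be caricatured on a plain matrix: with the Wegner generator η = [H_d, H], integrating dH/ds = [η, H] suppresses off-diagonal couplings while the unitary flow preserves the spectrum. Below is a toy forward-Euler sketch; the matrix, step size and flow time are arbitrary choices, and a real IM-SRG works with normal-ordered operators and truncated commutators rather than a bare matrix.

```python
import numpy as np

def wegner_flow(H, ds=1e-4, steps=60000):
    """Integrate dH/ds = [eta, H] with eta = [H_d, H] (Wegner generator)
    by forward Euler; drives off-diagonal matrix elements to zero."""
    H = H.copy()
    for _ in range(steps):
        Hd = np.diag(np.diag(H))
        eta = Hd @ H - H @ Hd          # [H_d, H]
        H = H + ds * (eta @ H - H @ eta)
    return H

H0 = np.array([[1.0, 0.5, 0.2],
               [0.5, 3.0, 0.4],
               [0.2, 0.4, 5.0]])
Hf = wegner_flow(H0)

offdiag = np.abs(Hf - np.diag(np.diag(Hf))).max()
# the diagonal of the flowed matrix approaches the exact eigenvalues
drift = np.abs(np.sort(np.diag(Hf)) - np.linalg.eigvalsh(H0)).max()
print(offdiag < 1e-6, drift < 1e-2)  # → True True
```

In the IM-SRG proper, the same decoupling is aimed only at the reference state, so the flow "solves" the many-body problem for the ground state without full diagonalization.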
The In-Medium Similarity Renormalization Group: A novel ab initio method for nuclei
International Nuclear Information System (INIS)
Hergert, H.; Bogner, S.K.; Morris, T.D.; Schwenk, A.; Tsukiyama, K.
2016-01-01
We present a comprehensive review of the In-Medium Similarity Renormalization Group (IM-SRG), a novel ab initio method for nuclei. The IM-SRG employs a continuous unitary transformation of the many-body Hamiltonian to decouple the ground state from all excitations, thereby solving the many-body problem. Starting from a pedagogical introduction of the underlying concepts, the IM-SRG flow equations are developed for systems with and without explicit spherical symmetry. We study different IM-SRG generators that achieve the desired decoupling, and how they affect the details of the IM-SRG flow. Based on calculations of closed-shell nuclei, we assess possible truncations for closing the system of flow equations in practical applications, as well as choices of the reference state. We discuss the issue of center-of-mass factorization and demonstrate that the IM-SRG ground-state wave function exhibits an approximate decoupling of intrinsic and center-of-mass degrees of freedom, similar to Coupled Cluster (CC) wave functions. To put the IM-SRG in context with other many-body methods, in particular many-body perturbation theory and non-perturbative approaches like CC, a detailed perturbative analysis of the IM-SRG flow equations is carried out. We conclude with a discussion of ongoing developments, including IM-SRG calculations with three-nucleon forces, the multi-reference IM-SRG for open-shell nuclei, first non-perturbative derivations of shell-model interactions, and the consistent evolution of operators in the IM-SRG. We dedicate this review to the memory of Gerry Brown, one of the pioneers of many-body calculations of nuclei.
The ab initio model potential method. Second series transition metal elements
International Nuclear Information System (INIS)
Barandiaran, Z.; Seijo, L.; Huzinaga, S.
1990-01-01
The ab initio model potential (AIMP) method has already been presented in its nonrelativistic version and applied to the main group and first series transition metal elements [J. Chem. Phys. 86, 2132 (1987); 91, 7011 (1989)]. In this paper we extend the AIMP method to include relativistic effects within the Cowan-Griffin approximation, and we present relativistic Zn-like core model potentials and valence basis sets, as well as their nonrelativistic Zn-like core and Kr-like core counterparts. The pilot molecular calculations on YO, TcO, AgO, and AgH reveal that the 4p orbital is indeed a core orbital only at the end part of the series, whereas the 4s orbital can be safely frozen from Y to Cd. The all-electron and model potential results agree to within 0.01-0.02 Å in Rₑ and 25-50 cm⁻¹ in ν̄ₑ if the same type of valence part of the basis set is used. The comparison of the relativistic results on AgH with those of the all-electron Dirac-Fock calculations by Lee and McLean is satisfactory: the absolute value of Rₑ is reproduced within the 0.01 Å margin and the relativistic contraction of 0.077 Å is also very well reproduced (0.075 Å). Finally, the relative magnitudes of the effects of the core orbital change, mass-velocity potential, and Darwin potential on the net relativistic effects are analyzed in the four molecules studied.
Computational chemistry, in conjunction with gas chromatography/mass spectrometry/Fourier transform infrared spectrometry (GC/MS/FT-IR), was used to tentatively identify seven tetrachlorobutadiene (TCBD) isomers detected in an environmental sample. Computation of the TCBD infrare...
Short-range order in ab initio computer generated amorphous and liquid Cu–Zr alloys: A new approach
International Nuclear Information System (INIS)
Galván-Colín, Jonathan; Valladares, Ariel A.; Valladares, Renela M.; Valladares, Alexander
2015-01-01
Using ab initio molecular dynamics and a new approach based on the undermelt-quench method, we generated amorphous and liquid samples of Cu_xZr_(100−x) (x = 64, 50, 36) alloys. We characterized the topology of the resulting structures by means of the pair distribution function and the bond-angle distribution; a coordination number distribution was also calculated. Our results for both the amorphous and liquid samples agree well with experiment. The dependence of short-range order on concentration is reported. We found that icosahedron-like geometry plays a major role whenever the alloys are Cu-rich or Zr-rich, regardless of whether the samples are amorphous or liquid. The validation of these results, in turn, would let us calculate other properties so far disregarded in the literature.
Joshi, Prasad Ramesh; Ramanathan, N; Sundararajan, K; Sankaran, K
2015-04-09
The weak interaction between PCl3 and CH3OH was investigated using matrix isolation infrared spectroscopy and ab initio computations. In a nitrogen matrix at low temperature, the noncovalent adduct was generated and characterized using Fourier transform infrared spectroscopy. Computations were performed at the B3LYP/6-311++G(d,p), B3LYP/aug-cc-pVDZ, and MP2/6-311++G(d,p) levels of theory to optimize the possible geometries of PCl3-CH3OH adducts. Computations revealed two minima on the potential energy surface, of which the global minimum is stabilized by a noncovalent P···O interaction known as pnictogen bonding (phosphorus bonding or P-bonding). The local minimum corresponded to a cyclic adduct stabilized by conventional hydrogen bonding (Cl···H-O and Cl···H-C interactions). Experimentally, a 1:1 P-bonded PCl3-CH3OH adduct was identified in the nitrogen matrix, with shifts observed in the P-Cl modes of the PCl3 submolecule and in the O-C and O-H modes of the CH3OH submolecule. The observed vibrational frequencies of the P-bonded adduct in the nitrogen matrix agreed well with the computed frequencies. Furthermore, computations predicted that the P-bonded adduct is more strongly bound than the H-bonded adduct by ∼1.56 kcal/mol. Atoms-in-molecules and natural bond orbital analyses were performed to understand the nature of the interactions and the effect of charge transfer on the stability of the adducts.
Computational Methods in Medicine
Directory of Open Access Journals (Sweden)
Angel Garrido
2010-01-01
Full Text Available Artificial Intelligence requires Logic, but its Classical version shows too many insufficiencies. It is therefore necessary to introduce more sophisticated tools, such as Fuzzy Logic, Modal Logic, Non-Monotonic Logic, and so on [2]. Among the things that AI needs to represent are Categories, Objects, Properties, Relations between objects, Situations, States, Time, Events, Causes and effects, Knowledge about knowledge, and so on. The problems in AI can be classified into two general types [3, 4]: Search Problems and Representation Problems. There exist different ways to reach this objective; so we have [3] Logics, Rules, Frames, Associative Nets, Scripts, and so on, which are often interconnected. It is also very useful, in dealing with problems of uncertainty and causality, to introduce Bayesian Networks and, in particular, a principal tool, the Essential Graph. We attempt here to show the scope of application of such versatile methods, currently fundamental in Medicine.
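Since the review singles out Bayesian networks as a principal tool for uncertainty and causality in medicine, a minimal two-node example may make the idea concrete. The Python sketch below computes the posterior probability of a disease given a symptom by Bayes' rule; all probabilities are illustrative placeholders, not clinical data.

```python
# Minimal two-node Bayesian network: Disease -> Symptom.
# All probabilities below are illustrative placeholders, not clinical data.
p_disease = 0.01                    # prior P(disease)
p_symptom_given_disease = 0.90      # P(symptom | disease)
p_symptom_given_healthy = 0.05      # P(symptom | no disease)

def posterior_disease(symptom_present=True):
    """P(disease | symptom observation) by Bayes' rule."""
    if symptom_present:
        num = p_symptom_given_disease * p_disease
        den = num + p_symptom_given_healthy * (1 - p_disease)
    else:
        num = (1 - p_symptom_given_disease) * p_disease
        den = num + (1 - p_symptom_given_healthy) * (1 - p_disease)
    return num / den

# Observing the symptom raises the disease probability well above the prior,
# while its absence lowers it below the prior.
```

Even this toy network shows the behavior larger medical networks generalize: evidence propagates through conditional probability tables to update beliefs about unobserved causes.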
Ab initio computational study of reaction mechanism of peptide bond formation on HF/6-31G(d,p) level
Siahaan, P.; Lalita, M. N. T.; Cahyono, B.; Laksitorini, M. D.; Hildayani, S. Z.
2017-02-01
Peptides play an important role in the modulation of various cell functions; the peptide bond formation reaction is therefore of chemical importance. One way to probe peptide synthesis is through computational methods. The purpose of this research is to determine the reaction mechanism of peptide bond formation in the synthesis of Ac-PV-NH2 and Ac-VP-NH2 from the amino acids proline and valine by an ab initio computational approach. The calculations were carried out at the HF/6-31G(d,p) level of theory for the four mechanisms (paths 1 to 4) proposed in this research. The results show that the rate-determining-step barriers between reactant and transition state (TS) for paths 1, 2, 3, and 4 are 163.06, 1868, 5685, and 1837 kJ·mol-1, respectively. The calculation shows that the most preferred reaction for the synthesis of Ac-PV-NH2 and Ac-VP-NH2 from proline and valine is path 1 (initiated by the removal of H+ from the proline amino acid), which produces Ac-PV-NH2.
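The kinetic argument above (the path with the lowest rate-determining barrier is preferred) can be sketched numerically. The snippet below is a hedged illustration rather than part of the original study: it selects the preferred path from the reported barriers and uses a simple Arrhenius factor to show how decisively a 163 kJ/mol barrier beats a 1837 kJ/mol one at room temperature.

```python
import math

R = 8.314e-3   # gas constant in kJ/(mol*K)
T = 298.15     # room temperature in K

# Rate-determining-step barriers (kJ/mol) reported for the four mechanisms.
barriers = {"path 1": 163.06, "path 2": 1868.0, "path 3": 5685.0, "path 4": 1837.0}

def preferred_path(b):
    """The kinetically preferred mechanism is the one with the lowest barrier."""
    return min(b, key=b.get)

def rate_ratio(e_low, e_high, temperature=T):
    """Arrhenius estimate of how much faster the lower-barrier path is,
    assuming equal pre-exponential factors (an illustrative simplification)."""
    return math.exp((e_high - e_low) / (R * temperature))

best = preferred_path(barriers)
```

With equal prefactors assumed, the ratio between the 163.06 and 1837 kJ/mol paths is astronomically large, so only path 1 is kinetically relevant.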
Numerical methods in matrix computations
Björck, Åke
2015-01-01
Matrix algorithms are at the core of scientific computing and are indispensable tools in most applications in engineering. This book offers a comprehensive and up-to-date treatment of modern methods in matrix computation. It uses a unified approach to direct and iterative methods for linear systems, least squares and eigenvalue problems. A thorough analysis of the stability, accuracy, and complexity of the treated methods is given. Numerical Methods in Matrix Computations is suitable for use in courses on scientific computing and applied technical areas at advanced undergraduate and graduate level. A large bibliography is provided, which includes both historical and review papers as well as recent research papers. This makes the book useful also as a reference and guide to further study and research work. Åke Björck is a professor emeritus at the Department of Mathematics, Linköping University. He is a Fellow of the Society of Industrial and Applied Mathematics.
Numerical computer methods part D
Johnson, Michael L
2004-01-01
The aim of this volume is to brief researchers on the importance of data analysis in enzymology and on the modern methods that have developed concomitantly with computer hardware, and to help them validate their computer programs with real and synthetic data to ascertain that the results produced are what they expect. Selected Contents: Prediction of protein structure; modeling and studying proteins with molecular dynamics; statistical error in isothermal titration calorimetry; analysis of circular dichroism data; model comparison methods.
Computational Methods in Plasma Physics
Jardin, Stephen
2010-01-01
Assuming no prior knowledge of plasma physics or numerical methods, Computational Methods in Plasma Physics covers the computational mathematics and techniques needed to simulate magnetically confined plasmas in modern magnetic fusion experiments and future magnetic fusion reactors. Largely self-contained, the text presents the basic concepts necessary for the numerical solution of partial differential equations. Along with discussing numerical stability and accuracy, the author explores many of the algorithms used today in enough depth so that readers can analyze their stability, efficiency,
Calibration of Sn-119 isomer shift using ab initio wave function methods
Kurian, Reshmi; Filatov, Michael
2009-01-01
The isomer shift for the 23.87 keV M1 resonant transition in the Sn-119 nucleus is calibrated with the help of ab initio calculations. The calibration constant alpha(Sn-119) obtained from Hartree-Fock (HF) calculations (alpha(HF)(Sn-119)=(0.081 +/- 0.002)a(0)(-3) mm/s) and from second-order
Legrain, Fleur; Carrete, Jesús; van Roekeghem, Ambroise; Madsen, Georg K H; Mingo, Natalio
2018-01-18
Machine learning (ML) is increasingly becoming a helpful tool in the search for novel functional compounds. Here we use classification via random forests to predict the stability of half-Heusler (HH) compounds, using only experimentally reported compounds as a training set. Cross-validation yields an excellent agreement between the fraction of compounds classified as stable and the actual fraction of truly stable compounds in the ICSD. The ML model is then employed to screen 71 178 different 1:1:1 compositions, yielding 481 likely stable candidates. The predicted stability of HH compounds from three previous high-throughput ab initio studies is critically analyzed from the perspective of the alternative ML approach. The incomplete consistency among the three separate ab initio studies and between them and the ML predictions suggests that additional factors beyond those considered by ab initio phase stability calculations might be determinant to the stability of the compounds. Such factors can include configurational entropies and quasiharmonic contributions.
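As a rough illustration of the classification idea (not the authors' actual model, which used random forests trained on real ICSD data), the following self-contained Python sketch builds a miniature "forest" of bootstrapped decision stumps that vote on stability from two made-up composition features. A production study would use a library such as scikit-learn; every number below is an illustrative placeholder.

```python
import random
from collections import Counter

# Toy training set: (valence-electron count, electronegativity spread) -> stable?
# These feature values are illustrative placeholders, not real ICSD data.
TRAIN = [
    ((18, 0.3), 1), ((18, 0.5), 1), ((18, 0.2), 1), ((17, 1.2), 0),
    ((20, 1.0), 0), ((18, 0.4), 1), ((19, 0.9), 0), ((18, 0.6), 1),
]

def train_stump(sample):
    """Pick the single-feature threshold rule with the best accuracy on `sample`."""
    best = None
    for f in (0, 1):
        for (xs, _) in sample:
            for sign in (1, -1):
                t = xs[f]
                acc = sum((sign * (x[f] - t) <= 0) == bool(y)
                          for x, y in sample) / len(sample)
                if best is None or acc > best[0]:
                    best = (acc, f, t, sign)
    _, f, t, sign = best
    return lambda x: int(sign * (x[f] - t) <= 0)

def random_forest(data, n_trees=25, seed=0):
    """Bagging: each stump is trained on a bootstrap resample; prediction is
    the majority vote, as in a (heavily simplified) random forest."""
    rng = random.Random(seed)
    stumps = [train_stump([rng.choice(data) for _ in data])
              for _ in range(n_trees)]
    def predict(x):
        return Counter(s(x) for s in stumps).most_common(1)[0][0]
    return predict

model = random_forest(TRAIN)
```

The ensemble vote is what makes the method robust: individual stumps trained on skewed resamples can err, but the majority recovers the underlying trend, which is the property exploited when screening tens of thousands of candidate compositions.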
Optical absorption spectra and g factor of MgO: Mn2+ explored by ab initio and semi-empirical methods
Andreici Eftimie, E.-L.; Avram, C. N.; Brik, M. G.; Avram, N. M.
2018-02-01
In this paper we present a methodology for calculating the optical absorption spectra, ligand field parameters, and g factor of Mn2+ (3d5) ions doped in an MgO host crystal. The proposed technique combines two methods: the ab initio multireference (MR) approach and the semi-empirical ligand field (LF) approach in the framework of the exchange charge model (ECM). Both methods are applied to the [MnO6]10− cluster embedded in an extended point-charge field of host-matrix ligands based on the Gellé-Lepetit procedure. The first step of the investigation was the full optimization of the cubic structure of the perfect MgO crystal, followed by the structural optimization of the doped MgO:Mn2+ system, using periodic density functional theory (DFT). Ab initio MR wave function approaches, such as the complete active space self-consistent field (CASSCF), N-electron valence second-order perturbation theory (NEVPT2), and spectroscopy-oriented configuration interaction (SORCI), are used for the calculations. Scalar relativistic effects have also been taken into account through the second-order Douglas-Kroll-Hess (DKH2) procedure. Ab initio ligand field theory (AILFT) allows all LF parameters and the spin-orbit coupling constant to be extracted from such calculations. In addition, the ECM of ligand field theory (LFT) has been used for modelling the optical absorption spectra, and perturbation theory (PT) was employed for the g factor calculation in the semi-empirical LFT. The results of each of the aforementioned types of calculations are discussed, and the comparisons between the computed and experimental results show reasonable agreement, which justifies this new methodology based on the simultaneous use of both methods. This study establishes fundamental principles for the further modelling of larger embedded-cluster models of doped metal oxides.
Study on the effects of fluorine and oxygen deficiency on YBa2Cu3O7 by ab initio method
Institute of Scientific and Technical Information of China (English)
刘洪霖; 曹晓卫; 瞿丽曼; 陈念贻
1997-01-01
Calculations on clusters modeling the fluorine doping and oxygen deficiency of YBa2Cu3O7 have been performed by the all-electron ab initio Hartree-Fock method with a self-consistent crystal field. Results show that in the CuO planes the electric charge significantly increases, the chemical valence of Cu decreases, and the covalent bonding of Cu-O greatly weakens owing to oxygen deficiency, while the effect of F restores the local electronic structure of YBa2Cu3O7. The reported opinion that F occupies the oxygen vacancy in the Cu-O chains seems disputable according to the calculated bonding characteristics.
Computational methods in earthquake engineering
Plevris, Vagelis; Lagaros, Nikos
2017-01-01
This is the third book in a series on Computational Methods in Earthquake Engineering. The purpose of this volume is to bring together the scientific communities of Computational Mechanics and Structural Dynamics, offering a wide coverage of timely issues on contemporary Earthquake Engineering. This volume will facilitate the exchange of ideas in topics of mutual interest and can serve as a platform for establishing links between research groups with complementary activities. The computational aspects are emphasized in order to address difficult engineering problems of great social and economic importance.
Directory of Open Access Journals (Sweden)
F. Kiani
2017-07-01
Full Text Available Analytical measurement of materials requires exact knowledge of their acid dissociation constant (pKa) values. In recent years, quantum mechanical calculations have been used extensively to study acidities in aqueous solution, and the results have been compared with experimental values. In this study, a theoretical investigation of xylenol orange in aqueous solution was carried out by ab initio methods. We calculated the pKa values of xylenol orange in water using PM3, HF, and DFT (B3LYP/6-31+G(d)) methods together with the SCRF solvation model. The experimental determination of these pKa values is a challenge because xylenol orange has a low solubility in water. We considered several ionization reactions and equilibria in water, which constitute the indispensable theoretical basis for calculating the pKa values of xylenol orange. The results show that the calculated pKa values are in reasonable agreement with the experimentally determined ones. Therefore, this method can be used to predict such properties for indicators, drugs, and other important molecules.
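The link between a computed aqueous deprotonation free energy and a pKa value is the standard thermodynamic relation pKa = ΔG/(RT ln 10). The short Python sketch below illustrates that conversion; the 40 kJ/mol input is an arbitrary illustrative value, not a computed xylenol orange result.

```python
import math

R = 8.314    # gas constant, J/(mol*K)
T = 298.15   # temperature, K

def pka_from_free_energy(delta_g_kj):
    """pKa from the aqueous deprotonation free energy (kJ/mol):
    pKa = dG / (R * T * ln 10)."""
    return delta_g_kj * 1000.0 / (R * T * math.log(10))

# Illustrative input only: a deprotonation free energy of ~40 kJ/mol
# corresponds to a pKa of about 7.
```

This conversion is also why computed pKa values are so sensitive to the solvation model: an error of only ~6 kJ/mol in ΔG shifts the pKa by a full unit.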
Methods for computing color anaglyphs
McAllister, David F.; Zhou, Ya; Sullivan, Sophia
2010-02-01
A new computation technique is presented for calculating pixel colors in anaglyph images. The method depends upon knowing the RGB spectral distributions of the display device and the transmission functions of the filters in the viewing glasses. It requires the solution of a nonlinear least-squares program for each pixel in a stereo pair and is based on minimizing color distances in the CIEL*a*b* uniform color space. The method is compared with several techniques for computing anaglyphs including approximation in CIE space using the Euclidean and Uniform metrics, the Photoshop method and its variants, and a method proposed by Peter Wimmer. We also discuss the methods of desaturation and gamma correction for reducing retinal rivalry.
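The per-pixel minimization described above can be sketched in simplified form. The Python code below finds a displayed RGB triple whose filtered appearance is closest to a target color, using plain Euclidean distance in place of CIEL*a*b* and projected gradient descent in place of a full nonlinear least-squares solver; the 3x3 filter matrix is a stand-in for measured display and filter spectra.

```python
def solve_pixel(F, target, steps=2000, lr=0.05):
    """Find a displayed RGB x in [0,1]^3 minimizing ||F x - target||^2.

    F is a 3x3 matrix modeling display spectra seen through a viewing filter
    (a placeholder for real measured data); the distance is plain Euclidean,
    a simplification of the CIEL*a*b* metric used by the actual method.
    """
    x = [0.5, 0.5, 0.5]
    for _ in range(steps):
        # residual r = F x - target
        r = [sum(F[i][j] * x[j] for j in range(3)) - target[i]
             for i in range(3)]
        # gradient of the squared error is 2 F^T r; step, then clamp
        # each channel back into the displayable gamut [0, 1]
        for j in range(3):
            g = 2 * sum(F[i][j] * r[i] for i in range(3))
            x[j] = min(1.0, max(0.0, x[j] - lr * g))
    return x
```

Running this independently for every pixel of a stereo pair mirrors the structure of the paper's method, with the perceptual color metric and measured transmission functions swapped in for the placeholders above.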
Exploring proton transfer in 1,2,3-triazole-triazolium dimer with ab initio method
Li, Ailin; Yan, Tianying; Shen, Panwen
Ab initio calculations are utilized to search for transition state structures for proton transfer in 1,2,3-triazole-triazolium complexes on the basis of optimized dimers. The results suggest six transition state structures for single proton transfer in the complexes, most of which are coplanar. The energy barriers between the different stable and transition-state structures, with zero-point energy (ZPE) corrections, show that proton transfer occurs at room temperature via the coplanar configuration that has the lowest energy. The results clearly support that reorientation gives the triazole flexibility for proton transfer.
Energy Technology Data Exchange (ETDEWEB)
Almeida, Wagner B. de [Minas Gerais Univ., Belo Horizonte, MG (Brazil). Dept. de Quimica
2000-10-01
The determination of molecular structure is of fundamental importance in chemistry. X-ray and electron diffraction methods constitute important tools for the elucidation of the molecular structure of systems in the solid state and gas phase, respectively. The use of quantum mechanical ab initio molecular orbital methods offers an alternative for conformational analysis studies. Comparison between theoretical results and those obtained experimentally in the gas phase can make a significant contribution to an unambiguous determination of the geometrical parameters. In this article the determination of the molecular structure of the cyclooctane molecule by gas-phase electron diffraction and ab initio calculations is addressed, providing an example of a comparative analysis of theoretical and experimental predictions. (author)
Computational methods in drug discovery
Directory of Open Access Journals (Sweden)
Sumudu P. Leelananda
2016-12-01
Full Text Available The process of drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, helping to expedite this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery projects. Additionally, increasing knowledge of biological structures, as well as increasing computer power, have made it possible to use computational methods effectively in various phases of the drug discovery and development pipeline. The importance of in silico tools is greater than ever before and has advanced pharmaceutical research. Here we present an overview of computational methods used in different facets of drug discovery and highlight some of the recent successes. In this review, both structure-based and ligand-based drug discovery methods are discussed. Advances in virtual high-throughput screening, protein structure prediction methods, protein–ligand docking, pharmacophore modeling and QSAR techniques are reviewed.
Combinatorial methods with computer applications
Gross, Jonathan L
2007-01-01
Combinatorial Methods with Computer Applications provides in-depth coverage of recurrences, generating functions, partitions, and permutations, along with some of the most interesting graph and network topics, design constructions, and finite geometries. Requiring only a foundation in discrete mathematics, it can serve as the textbook in a combinatorial methods course or in a combined graph theory and combinatorics course.After an introduction to combinatorics, the book explores six systematic approaches within a comprehensive framework: sequences, solving recurrences, evaluating summation exp
International Nuclear Information System (INIS)
Ma, Duancheng; Friák, Martin; Pezold, Johann von; Raabe, Dierk; Neugebauer, Jörg
2015-01-01
We propose an approach for the computationally efficient and quantitatively accurate prediction of solid-solution strengthening. It combines the 2-D Peierls–Nabarro model and a recently developed solid-solution strengthening model. Solid-solution strengthening is examined with Al–Mg and Al–Li as representative alloy systems, demonstrating a good agreement between theory and experiments within the temperature range in which the dislocation motion is overdamped. Through a parametric study, two guideline maps of the misfit parameters against (i) the critical resolved shear stress, τ0, at 0 K and (ii) the energy barrier, ΔEb, against dislocation motion in a solid solution with randomly distributed solute atoms are created. With these two guideline maps, τ0 at finite temperatures is predicted for other Al binary systems and compared with available experiments, achieving good agreement.
Computational methods for fluid dynamics
Ferziger, Joel H
2002-01-01
In its 3rd revised and extended edition the book offers an overview of the techniques used to solve problems in fluid mechanics on computers and describes in detail those most often used in practice. Included are advanced methods in computational fluid dynamics, like direct and large-eddy simulation of turbulence, multigrid methods, parallel computing, moving grids, structured, block-structured and unstructured boundary-fitted grids, free surface flows. The 3rd edition contains a new section dealing with grid quality and an extended description of discretization methods. The book shows common roots and basic principles for many different methods. The book also contains a great deal of practical advice for code developers and users, it is designed to be equally useful to beginners and experts. The issues of numerical accuracy, estimation and reduction of numerical errors are dealt with in detail, with many examples. A full-feature user-friendly demo-version of a commercial CFD software has been added, which ca...
Numerical computer methods part E
Johnson, Michael L
2004-01-01
The contributions in this volume emphasize analysis of experimental data and analytical biochemistry, with examples taken from biochemistry. They serve to inform biomedical researchers of the modern data analysis methods that have developed concomitantly with computer hardware. Selected Contents: A practical approach to interpretation of SVD results; modeling of oscillations in endocrine networks with feedback; quantifying asynchronous breathing; sample entropy; wavelet modeling and processing of nasal airflow traces.
Computational methods for stellarator configurations
International Nuclear Information System (INIS)
Betancourt, O.
1992-01-01
This project had two main objectives. The first was to continue to develop computational methods for the study of three-dimensional magnetic confinement configurations. The second was to collaborate and interact with researchers in the field who can use these techniques to study and design fusion experiments. The first objective has been achieved with the development of the spectral code BETAS and the formulation of a new variational approach for the study of magnetic island formation in a self-consistent fashion. The code can compute the correct width of the saturated island, a result shown by comparing the computed island with the results of unstable tearing modes in Tokamaks and with experimental results in the IMS Stellarator. In addition to being used to study three-dimensional nonlinear effects in Tokamak configurations, these self-consistently computed island equilibria will be used to study transport effects due to magnetic island formation and nonlinearly bifurcated equilibria. The second objective was achieved through direct collaboration with Steve Hirshman at Oak Ridge and D. Anderson and R. Talmage at Wisconsin, as well as through participation in the Sherwood and APS meetings.
Directory of Open Access Journals (Sweden)
Peng Xu
2018-04-01
Full Text Available The understanding of the excited-state properties of electron donors, acceptors and their interfaces in organic optoelectronic devices is a fundamental issue for their performance optimization. In order to obtain a balanced description of the different excitation types for electron donor-acceptor systems, including singlet charge-transfer (CT) states, local excitations, and triplet excited states, several ab initio and density functional theory (DFT) methods for excited-state calculations were evaluated on the selected model system of benzene-tetracyanoethylene (B-TCNE) complexes. On the basis of benchmark calculations with the equation-of-motion coupled-cluster with single and double excitations method, the arithmetic means of the absolute errors and standard errors of the electronic excitation energies for the different computational methods suggest that the M11 functional in DFT is superior to the other tested DFT functionals, and that time-dependent DFT (TDDFT) with the Tamm–Dancoff approximation improves the accuracy of the calculated excitation energies relative to full TDDFT. The performance of the M11 functional underlines the importance of kinetic energy density, spin-density gradient, and range separation in the development of novel DFT functionals. According to the TDDFT results, the performances of the different TDDFT methods on the CT properties of the B-TCNE complexes were also analyzed.
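The error statistics used in the benchmark (arithmetic mean of absolute errors, and standard errors of the signed errors, against reference excitation energies) are straightforward to compute. Below is a minimal Python sketch with purely illustrative energies in place of the actual B-TCNE data.

```python
import math

# Hypothetical excitation energies (eV): a benchmark-quality reference set
# versus one tested functional. The numbers are illustrative only.
reference = [3.52, 4.10, 2.95, 5.01]
tested    = [3.70, 4.05, 3.20, 5.30]

def mae(ref, test):
    """Arithmetic mean of the absolute errors."""
    return sum(abs(a - b) for a, b in zip(ref, test)) / len(ref)

def standard_error(ref, test):
    """Standard error of the signed errors: sample std dev / sqrt(n)."""
    errs = [b - a for a, b in zip(ref, test)]
    mean = sum(errs) / len(errs)
    var = sum((e - mean) ** 2 for e in errs) / (len(errs) - 1)
    return math.sqrt(var) / math.sqrt(len(errs))
```

Ranking functionals by these two numbers, computed against a common coupled-cluster reference, is exactly the comparison protocol the abstract describes.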
Ab initio and Gordon-Kim intermolecular potentials for two nitrogen molecules
International Nuclear Information System (INIS)
Ree, F.H.; Winter, N.W.
1980-01-01
Both ab initio MO-LCAO-SCF and electron-gas (Gordon-Kim) methods have been used to compute the intermolecular potential (Φ) of N2 molecules for seven different N2-N2 orientations. The ab initio calculations were carried out using a [4s3p] contracted Gaussian basis set with and without 3d polarization functions. The larger basis set provides adequate results for Φ > 0.002 hartree, or intermolecular separations less than 6.5-7 bohr. We use a convenient analytic expression to represent the ab initio data in terms of the intermolecular distance and three angles defining the orientations of the two N2 molecules. The Gordon-Kim method with Rae's self-exchange correction yields a Φ which agrees reasonably well over a large repulsive range. However, a detailed comparison of the electron kinetic energy contributions shows a large difference between the ab initio and Gordon-Kim calculations. Using the ab initio data we derive an atom-atom potential for the two N2 molecules. Although this expression does not accurately fit the data at some orientations, its spherical average agrees remarkably well with the corresponding average of the ab initio Φ. The spherically averaged ab initio Φ is also compared with the corresponding quantities derived from experimental considerations. The approach of the ab initio Φ to the classical quadrupole-quadrupole interaction at large intermolecular separation is also discussed.
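The atom-atom representation and its spherical average can be sketched with a toy site-site potential. The Python code below sums an exp-6 term over the four N..N pairs of a dimer and Monte Carlo averages over random orientations; the A, b, C parameters are placeholders chosen only to give a repulsive wall and a dispersion tail, not the fitted values of the paper.

```python
import math
import random

BOND = 2.074  # approximate N2 bond length in bohr

def atom_atom(r):
    """Illustrative exp-6 site-site term, phi(r) = A exp(-b r) - C / r^6.
    A, b, C are placeholders, not the fitted parameters of the paper."""
    A, b, C = 420.0, 2.7, 22.0  # hartree, bohr^-1, hartree*bohr^6
    return A * math.exp(-b * r) - C / r**6

def dimer_energy(R, u1, u2):
    """Sum atom_atom over the four N..N pairs; u1, u2 are unit vectors along
    each molecular axis, R is the center-to-center separation (bohr)."""
    sites1 = [[s * BOND / 2 * c for c in u1] for s in (1, -1)]
    sites2 = [[R + s * BOND / 2 * u2[0],
               s * BOND / 2 * u2[1],
               s * BOND / 2 * u2[2]] for s in (1, -1)]
    return sum(atom_atom(math.dist(a, c)) for a in sites1 for c in sites2)

def spherical_average(R, n=4000, seed=1):
    """Monte Carlo average of the dimer potential over random orientations."""
    rng = random.Random(seed)
    def rand_unit():
        z = rng.uniform(-1, 1)
        t = rng.uniform(0, 2 * math.pi)
        s = math.sqrt(1 - z * z)
        return (s * math.cos(t), s * math.sin(t), z)
    return sum(dimer_energy(R, rand_unit(), rand_unit()) for _ in range(n)) / n
```

Comparing `spherical_average(R)` over a grid of separations against the orientation-resolved `dimer_energy` mirrors the paper's observation that the spherical average of an atom-atom fit can agree well even where individual orientations fit poorly.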
Computational methods for molecular imaging
Shi, Kuangyu; Li, Shuo
2015-01-01
This volume contains original submissions on the development and application of molecular imaging computing. The editors invited authors to submit high-quality contributions on a wide range of topics including, but not limited to: • Image Synthesis & Reconstruction of Emission Tomography (PET, SPECT) and other Molecular Imaging Modalities • Molecular Imaging Enhancement • Data Analysis of Clinical & Pre-clinical Molecular Imaging • Multi-Modal Image Processing (PET/CT, PET/MR, SPECT/CT, etc.) • Machine Learning and Data Mining in Molecular Imaging. Molecular imaging is an evolving clinical and research discipline enabling the visualization, characterization and quantification of biological processes taking place at the cellular and subcellular levels within intact living subjects. Computational methods play an important role in the development of molecular imaging, from image synthesis to data analysis and from clinical diagnosis to therapy individualization. This work will bring readers fro...
Odell, Anders
2011-10-03
The influence of the electrode's Fermi surface on the transport properties of a photoswitching molecule is investigated with state-of-the-art ab initio transport methods. We report results for the conducting properties of the two forms of dithienylethene attached either to Ag or to nonmagnetic Ni leads. The I-V curves of the Ag/dithienylethene/Ag device are found to be very similar to those reported previously for Au. In contrast, when Ni is used as the electrode material the zero-bias transmission coefficient is profoundly different as a result of the role played by the Ni d bands in the bonding between the molecule and the electrodes. Intriguingly, despite these differences the overall conducting properties depend little on the electrode material. We thus conclude that electron transport in dithienylethene is, for the cases studied, mainly governed by the intrinsic electronic structure of the molecule. © 2011 American Physical Society.
Hao, Xiao-Hu; Zhang, Gui-Jun; Zhou, Xiao-Gen; Yu, Xu-Feng
2016-01-01
To address the problem of searching the protein conformational space in ab initio protein structure prediction, a novel method using abstract convex underestimation (ACUE), based on the framework of an evolutionary algorithm, was proposed. Computing such conformations, essential to associate structural and functional information with gene sequences, is challenging due to the high dimensionality and rugged energy surface of the protein conformational space. As a consequence, the dimension of the conformational space should be reduced to a proper level. In this paper, the original high-dimensional conformational space was converted into a feature space whose dimension is considerably reduced by a feature extraction technique, and the underestimate space could then be constructed according to abstract convex theory. Thus, the entropy effect caused by searching the high-dimensional conformational space could be avoided through such conversion. The tight lower-bound estimate information was obtained to guide the searching direction, and invalid searching areas in which the global optimal solution is not located could be eliminated in advance. Moreover, instead of expensively calculating the energy of conformations in the original conformational space, the estimate value is employed to judge whether a conformation is worth exploring, thereby lowering computational cost and making the searching process more efficient. Additionally, fragment assembly and the Monte Carlo method are combined to generate a series of metastable conformations by sampling the conformational space. The proposed method provides a novel technique to solve the searching problem of protein conformational space. Twenty small-to-medium structurally diverse proteins were tested, and the proposed ACUE method was compared with It Fix, HEA, Rosetta and the developed method LEDE without underestimate information. Test results show that the ACUE method can more rapidly and more
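The core ACUE idea (prune any candidate whose cheap lower-bound estimate already exceeds the best energy found, so the expensive evaluation is skipped) can be illustrated on a one-dimensional toy landscape. In the Python sketch below, the "energy" and its abstract-convex-style minorant are invented stand-ins; the pruning is lossless because the bound never exceeds the true energy.

```python
import math

def energy(x):
    """Stand-in for an expensive, rugged conformational energy."""
    return (x - 1.3) ** 2 + math.sin(8 * x)

def lower_bound(x):
    """Cheap minorant: since sin(8x) >= -1, (x - 1.3)^2 - 1 underestimates
    energy(x) everywhere, in the spirit of an abstract convex underestimate."""
    return (x - 1.3) ** 2 - 1.0

def pruned_search(candidates):
    """Best-first scan: evaluate the costly energy only when the cheap
    lower bound says the candidate could still beat the current best."""
    best_x, best_e, evals = None, math.inf, 0
    for x in candidates:
        if lower_bound(x) >= best_e:
            continue  # pruned: this candidate cannot improve on best_e
        evals += 1
        e = energy(x)
        if e < best_e:
            best_x, best_e = x, e
    return best_x, best_e, evals

grid = [i * 0.01 for i in range(-500, 501)]  # candidates on [-5, 5]
x, e, evals = pruned_search(grid)
```

Because the bound is a true minorant, every pruned candidate provably has energy at least `best_e`, so the same minimum is found with fewer expensive evaluations; this is the mechanism the paper scales up to high-dimensional feature spaces.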
Computational modeling in biomechanics
Mofrad, Mohammad
2010-01-01
This book provides a glimpse of the diverse and important roles that modern computational technology is playing in various areas of biomechanics. It includes unique chapters on ab initio quantum mechanical, molecular dynamics and scale-coupling methods.
Conformational study of glyoxal bis(amidinohydrazone) by ab initio methods
Mannfors, B.; Koskinen, J. T.; Pietilä, L.-O.
1997-08-01
We report the first ab initio molecular orbital study on the ground state of the endiamine tautomer of glyoxal bis(amidinohydrazone) (or glyoxal bis(guanylhydrazone), GBG) free base. The calculations were performed at the following levels of theory: Hartree-Fock, second-order Møller-Plesset perturbation theory and density functional theory (B-LYP and B3-LYP) as implemented in the Gaussian 94 software. The standard basis set 6-31G(d) was found to be sufficient. The default fine grid of Gaussian 94 was used in the density functional calculations. Molecular properties, such as optimized structures, total energies and the electrostatic potential derived (CHELPG) atomic charges, were studied as functions of C-C and N-N conformations. The lowest energy conformation was found to be all-trans, in agreement with the experimental solid-state structure. The second conformer with respect to rotation around the central C-C bond was found to be the cis conformer with an MP2//HF energy of 4.67 kcal mol⁻¹. For rotation around the N-N bond the energy increased monotonically from the trans conformation to the cis conformation, the cis energy being very high, 22.01 kcal mol⁻¹ (MP2//HF). The atomic charges were shown to be conformation dependent, and the bond charge increments and especially the conformational changes of the bond charge increments were found to be easily transferable between structurally related systems.
Analysis of the zirconia structure by 'ab initio' and Rietveld methods
International Nuclear Information System (INIS)
Bechepeche, A.P.; Nasar, R.S.; Longo, E.; Treu Junior, O.; Varela, J.A.
1995-01-01
Zirconia was doped with 0.113 mol of MgO and 0.005 mol of TiO2, calcined at 1550 deg C and analyzed by XRD. The results show that pure zirconia contains 96.19% monoclinic phase and 3.18% cubic. Doping with magnesia, however, stabilizes the zirconia at 17.24% monoclinic, 29.63% tetragonal and 53.13% cubic phase. The addition of titanium to zirconia gives 25.85% tetragonal phase and 37.66% cubic, which shows the non-stabilizing action of this transition metal. The ab initio calculations show the same tendency, with the following total energies: pure zirconia: monoclinic -11,316.86 a.u., tetragonal -8742.09 a.u. and cubic -8742.80 a.u.; ZrO2-TiO2 system: monoclinic -9463.02 a.u., tetragonal -9459.39 a.u. and cubic -9459.97 a.u. (author)
Berggren, Elisabet; White, Andrew; Ouedraogo, Gladys; Paini, Alicia; Richarz, Andrea-Nicole; Bois, Frederic Y; Exner, Thomas; Leite, Sofia; Grunsven, Leo A van; Worth, Andrew; Mahony, Catherine
2017-11-01
We describe and illustrate a workflow for chemical safety assessment that completely avoids animal testing. The workflow, which was developed within the SEURAT-1 initiative, is designed to be applicable to cosmetic ingredients as well as to other types of chemicals, e.g. active ingredients in plant protection products, biocides or pharmaceuticals. The aim of this work was to develop a workflow to assess chemical safety without relying on any animal testing, but instead constructing a hypothesis based on existing data, in silico modelling, biokinetic considerations and then by targeted non-animal testing. For illustrative purposes, we consider a hypothetical new ingredient x as a new component in a body lotion formulation. The workflow is divided into tiers in which points of departure are established through in vitro testing and in silico prediction, as the basis for estimating a safe external dose in a repeated use scenario. The workflow includes a series of possible exit (decision) points, with increasing levels of confidence, based on the sequential application of the Threshold of Toxicological Concern (TTC) approach, read-across, followed by an "ab initio" assessment, in which chemical safety is determined entirely by new in vitro testing and in vitro to in vivo extrapolation by means of mathematical modelling. We believe that this workflow could be applied as a tool to inform targeted and toxicologically relevant in vitro testing, where necessary, and to gain confidence in safety decision making without the need for animal testing.
Computer methods in general relativity: algebraic computing
Araujo, M E; Skea, J E F; Koutras, A; Krasinski, A; Hobill, D; McLenaghan, R G; Christensen, S M
1993-01-01
Karlhede & MacCallum [1] gave a procedure for determining the Lie algebra of the isometry group of an arbitrary pseudo-Riemannian manifold, which they intended to implement using the symbolic manipulation package SHEEP but never did. We have recently finished making this procedure explicit by giving an algorithm suitable for implementation on a computer [2]. Specifically, we have written an algorithm for determining the isometry group of a spacetime (in four dimensions), and partially implemented this algorithm using the symbolic manipulation package CLASSI, which is an extension of SHEEP.
Fast computation of the characteristics method on vector computers
International Nuclear Information System (INIS)
Kugo, Teruhiko
2001-11-01
Fast computation with the method of characteristics for solving the neutron transport equation in a heterogeneous geometry has been studied. Two vector computation algorithms, an odd-even sweep (OES) method and an independent sequential sweep (ISS) method, have been developed and their efficiency for a typical fuel assembly calculation has been investigated. For both methods, vector computation is 15 times faster than scalar computation. Comparing the OES and ISS methods, the following was found: 1) there is only a small difference in computation speed, 2) the ISS method shows faster convergence, and 3) the ISS method saves about 80% of the computer memory required by the OES method. It is therefore concluded that the ISS method is superior to the OES method as a vectorization method. In the vector computation, a table-look-up method to reduce the computation time of the exponential function saves only 20% of the whole computation time. Both the coarse-mesh rebalance method and the Aitken acceleration method are effective as acceleration methods for the characteristics method; a combination of them saves 70-80% of the outer iterations compared with free iteration. (author)
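The table-look-up idea for the exponential attenuation factor can be sketched as follows; the grid spacing, range and linear interpolation scheme are illustrative choices, not those of the paper:

```python
import math

def build_exp_table(x_max=10.0, n=10_000):
    """Precompute exp(-x) on a uniform grid in [0, x_max] for later
    linear interpolation (assumes lookups with x >= 0)."""
    h = x_max / n
    table = [math.exp(-i * h) for i in range(n + 1)]

    def lookup(x):
        if x >= x_max:
            return 0.0                      # tail cut-off beyond the table
        i, frac = divmod(x / h, 1.0)        # grid cell index and offset
        i = int(i)
        return table[i] * (1.0 - frac) + table[i + 1] * frac

    return lookup

exp_approx = build_exp_table()
```

With step h = 0.001 the linear-interpolation error for exp(-x) is bounded by roughly h²/8, i.e. well below 1e-6, which is the usual trade: a modest accuracy loss for a cheaper inner-loop evaluation.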
Study of the behaviour of cesium fission product in uranium dioxide by the ab initio method
International Nuclear Information System (INIS)
Gupta, Florence
2008-01-01
The knowledge of the behaviour of fission products in nuclear fuel is very important for safety considerations and for understanding the evolution of fuel properties under irradiation. In this work, we focussed mainly on the behaviour of caesium in UO2 through ab initio studies of its solubility at point defects in the matrix, its diffusion and its contribution to the formation of solid phases in the fuel. The role of electronic correlation effects of the f electrons of uranium on these properties, and on the description of the defect-free crystal, is assessed. The formation energies of the main point defects are calculated and their concentration as a function of fuel stoichiometry and temperature is estimated. The migration barriers and migration paths for the self-diffusion of oxygen and uranium vacancies and oxygen interstitials in UO2 are discussed. The solubility of Cs is found to be very low in UO2, in agreement with experimental findings. The most favourable trapping sites are determined as a function of oxygen concentration in the fuel. Our results show that in the hyper-stoichiometric regime, the diffusion of Cs from its most favourable trapping site is limited by the uranium vacancy diffusion mechanism. We also considered the formation of the main solid phases of caesium resulting from its oxidation (Cs2O, Cs2O2, CsO2) and from its interaction with the fuel (Cs2UO4), with molybdenum (Cs2MoO4) and with the zirconium of the cladding (Cs2ZrO3), since the formation of such phases, their solubility and their interdependence will affect the release of caesium. (author)
Czech Academy of Sciences Publication Activity Database
Sychrovský, Vladimír; Buděšínský, Miloš; Benda, Ladislav; Špirko, Vladimír; Vokáčová, Zuzana; Šebestík, Jaroslav; Bouř, Petr
2008-01-01
Roč. 112, č. 6 (2008), s. 1796-1805 ISSN 1520-6106 R&D Projects: GA ČR GA203/06/0420; GA ČR GA202/07/0732; GA AV ČR IAA400550702; GA AV ČR IAA400550701; GA MŠk LC512 Institutional research plan: CEZ:AV0Z40550506 Keywords : NMR * ab initio * dipeptide Subject RIV: CF - Physical ; Theoretical Chemistry Impact factor: 4.189, year: 2008
Computational Methods and Function Theory
Saff, Edward; Salinas, Luis; Varga, Richard
1990-01-01
The volume is devoted to the interaction of modern scientific computation and classical function theory. Many problems in pure and more applied function theory can be tackled using modern computing facilities: numerically as well as in the sense of computer algebra. On the other hand, computer algorithms are often based on complex function theory, and dedicated research on their theoretical foundations can lead to great enhancements in performance. The contributions - original research articles, a survey and a collection of problems - cover a broad range of such problems.
Computational methods for reversed-field equilibrium
International Nuclear Information System (INIS)
Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.
1980-01-01
Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described
International Nuclear Information System (INIS)
Branda, M.M.; Ferullo, R.; Castellani, N.J.
1990-01-01
The present work describes the application of an atomic Hartree-Fock-Slater method for the simultaneous determination of all parameters used in the extended Hückel method with charge interaction (IEH): the diagonal elements of the Hamiltonian, the constants of the quadratic relation between. (Author). 16 refs., 3 tabs
Energy Technology Data Exchange (ETDEWEB)
Orimoto, Yuuichi; Xie, Peng; Liu, Kai [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Yamamoto, Ryohei [Department of Molecular and Material Sciences, Interdisciplinary Graduate School of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Imamura, Akira [Hiroshima Kokusai Gakuin University, 6-20-1 Nakano, Aki-ku, Hiroshima 739-0321 (Japan); Aoki, Yuriko, E-mail: aoki.yuriko.397@m.kyushu-u.ac.jp [Department of Material Sciences, Faculty of Engineering Sciences, Kyushu University, 6-1 Kasuga-Park, Fukuoka 816-8580 (Japan); Japan Science and Technology Agency, CREST, 4-1-8 Hon-chou, Kawaguchi, Saitama 332-0012 (Japan)
2015-03-14
An elongation-counterpoise (ELG-CP) method was developed for performing accurate and efficient interaction energy analysis and correcting the basis set superposition error (BSSE) in biosystems. The method combines our previously developed ab initio O(N) elongation method with the conventional counterpoise method proposed for solving the BSSE problem. As a test, the ELG-CP method was applied to the analysis of DNA inter-strand interaction energies with respect to the alkylation-induced base-pair mismatch phenomenon that causes a transition from G⋯C to A⋯T. It was found that the ELG-CP method showed high efficiency (nearly linear scaling) and high accuracy, with a negligibly small energy error in the total energy calculations (on the order of 10^-7-10^-8 hartree/atom) compared with the conventional method during the counterpoise treatment. Furthermore, the magnitude of the BSSE was found to be ca. -290 kcal/mol for the calculation of a DNA model with 21 base pairs. This emphasizes the importance of BSSE correction when a limited-size basis set is used to study DNA models and compare small energy differences between them. In this work, we quantitatively estimated the inter-strand interaction energy for each possible step in the transition process from G⋯C to A⋯T by the ELG-CP method. It was found that the base-pair replacement in the process only affects the interaction energy in a limited area around the mismatch position, within a few adjacent base pairs. From the interaction energy point of view, our results showed that a base-pair sliding mechanism possibly occurs after the alkylation of guanine to gain the maximum possible number of hydrogen bonds between the bases. In addition, the steps leading to the A⋯T replacement accompanied by replication were found to be unfavorable processes, corresponding to a loss of ca. 10 kcal/mol in stabilization energy. The present study indicated that the ELG-CP method is promising for
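The counterpoise bookkeeping itself is simple arithmetic once the component energies are in hand: each monomer is recomputed in the full dimer basis. A minimal sketch, with made-up energies in hartree rather than values from the paper:

```python
def cp_corrected_interaction(e_ab, e_a_in_ab_basis, e_b_in_ab_basis):
    """Counterpoise-corrected interaction energy: the dimer energy minus
    monomer energies evaluated in the full dimer (AB) basis."""
    return e_ab - e_a_in_ab_basis - e_b_in_ab_basis

def bsse(e_a_own, e_a_in_ab_basis, e_b_own, e_b_in_ab_basis):
    """Basis set superposition error; non-negative in variational methods,
    because each monomer can only be lowered by borrowing the partner's
    basis functions."""
    return (e_a_own - e_a_in_ab_basis) + (e_b_own - e_b_in_ab_basis)

# Hypothetical energies (hartree) for illustration only.
e_ab, e_a_own, e_b_own = -200.050, -100.000, -100.020
e_a_ab, e_b_ab = -100.004, -100.026   # monomers recomputed in dimer basis

e_int_cp = cp_corrected_interaction(e_ab, e_a_ab, e_b_ab)
e_int_raw = e_ab - e_a_own - e_b_own   # uncorrected interaction energy
```

The corrected interaction energy equals the uncorrected one plus the BSSE, so CP correction always makes the binding look weaker, which is why it matters when comparing small energy differences in a limited basis.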
Ab initio pair potentials for FCC metals: An application of the method of Moebius transform
International Nuclear Information System (INIS)
Mookerjee, A.; Chen Nanxian; Kumar, V.; Satter, M.A.
1991-10-01
We use the method of Moebius transform introduced by one of us (Chen, Phys. Rev. Lett. 64, 1193 (1990)) to obtain pair potentials for fcc metals from first principles total energy calculations. The derivation is exact for radial potentials and it converges much faster than the earlier reported method of Carlsson-Gelatt-Ehrenreich. We have tested this formulation for Cu using the tight binding representation of the linear muffin tin orbital method. Our results agree with those obtained by Carlsson et al. and qualitatively with the other Morse-type pair potentials derived from effective medium theories. (author). 18 refs, 3 figs, 3 tabs
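The Möbius-transform inversion is easiest to see in a one-dimensional toy setting: if the cohesive energy of a chain with spacing a is E(a) = Σ_{n≥1} φ(na), then number-theoretic Möbius inversion recovers the pair potential as φ(a) = Σ_{n≥1} μ(n) E(na). A sketch (the exponential pair potential is an arbitrary test function, not the paper's Cu result):

```python
import math

def mobius(n):
    """Number-theoretic Moebius function mu(n) by trial factorization."""
    result, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            n //= p
            if n % p == 0:      # squared prime factor -> mu = 0
                return 0
            result = -result
        p += 1
    return -result if n > 1 else result

def cohesive_energy(phi, a, n_max=40):
    """E(a) = sum_n phi(n*a): one-sided neighbours of a 1-D chain."""
    return sum(phi(n * a) for n in range(1, n_max + 1))

def inverted_pair_potential(E, a, n_max=40):
    """phi(a) = sum_n mu(n) E(n*a): Moebius inversion of E."""
    return sum(mobius(n) * E(n * a) for n in range(1, n_max + 1))

phi_true = lambda r: math.exp(-2.0 * r)          # toy pair potential
E = lambda a: cohesive_energy(phi_true, a)
phi_rec = inverted_pair_potential(E, 1.0)        # should recover phi_true(1.0)
```

For a rapidly decaying potential the truncated sums converge quickly, which mirrors the fast convergence the abstract claims relative to the Carlsson-Gelatt-Ehrenreich construction.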
DEFF Research Database (Denmark)
Chen, Jingzhe; Thygesen, Kristian S.; Jacobsen, Karsten W.
2012-01-01
We present an efficient implementation of a nonequilibrium Green's function method for self-consistent calculations of electron transport and forces in nanostructured materials. The electronic structure is described at the level of density functional theory using the projector augmented wave method...... over k points and real space makes the code highly efficient and applicable to systems containing several hundreds of atoms. The method is applied to a number of different systems, demonstrating the effects of bias and gate voltages, multiterminal setups, nonequilibrium forces, and spin transport....
DEFF Research Database (Denmark)
Svendsen, Casper Steinmann; Jensen, Jan; Fedorov, Dmitri
2013-01-01
We extend the Effective Fragment Molecular Orbital (EFMO) method to the frozen domain approach where only the geometry of an active part is optimized, while the many-body polarization effects are considered for the whole system. The new approach efficiently mapped out the entire reaction path of ...
International Nuclear Information System (INIS)
Vidal-Vidal, Á.; Pérez-Rodríguez, M.; Piñeiro, M.M.
2017-01-01
Highlights: • OCS hydrolysis equilibrium constants were calculated using QM composite methods. • CBS-QB3 was found to be the most adequate method for OCS thermodynamic calculations. • Calculated hydrolysis yields decrease when temperature increases. • The isotopic effect is less significant than temperature or initial concentration dependences. - Abstract: Carbonyl sulphide is the predominant sulphur compound in the atmosphere, contributing to the formation of aerosol particles that affect global climate. Human activity has significantly increased its total amount since the beginning of the Industrial Revolution, due to its presence in petroleum and coal, which is why it is necessary to understand and control its emissions. On the other hand, carbonyl sulphide is an undesired substance for catalysis in important industrial processes. Hydrolysis is the most promising among the different strategies to reduce its presence, giving carbon dioxide and hydrogen sulphide as products. In the present work, the mechanism of the carbonyl sulphide hydrolysis reaction in the gas phase was studied from 400 K to 1500 K; equilibrium constants were obtained and reaction yields were estimated by means of composite quantum-computational methods. Good agreement with experimental results from the literature confirms the suitability of the chosen methods, especially CBS-QB3, in supporting the reaction mechanism, giving accurate equilibrium constant values, and obtaining realistic yields. The effect of isotopic substitution in OCS was also studied, from 300 K to 1500 K, and found to be much less significant than the temperature dependence.
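The link from computed thermochemistry to equilibrium constants is K(T) = exp(-ΔG°(T)/RT); with the van 't Hoff approximation ΔG° ≈ ΔH° - TΔS°, an exothermic hydrolysis reproduces the decreasing-yield trend the highlights describe. The ΔH° and ΔS° values below are illustrative placeholders, not the paper's CBS-QB3 results:

```python
import math

R = 8.314            # gas constant, J mol^-1 K^-1
dH = -35_000.0       # J/mol, hypothetical exothermic reaction enthalpy
dS = -30.0           # J/(mol K), hypothetical reaction entropy

def equilibrium_constant(T):
    """K(T) = exp(-dG/(R*T)) with dG = dH - T*dS (van 't Hoff form)."""
    dG = dH - T * dS
    return math.exp(-dG / (R * T))

K_400 = equilibrium_constant(400.0)
K_1500 = equilibrium_constant(1500.0)
```

With ΔH° < 0, K falls monotonically with temperature, so the equilibrium conversion (and hence the hydrolysis yield) drops as T rises, exactly the qualitative behaviour reported in the abstract.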
Novel methods in computational finance
Günther, Michael; Maten, E
2017-01-01
This book discusses the state-of-the-art and open problems in computational finance. It presents a collection of research outcomes and reviews of the work from the STRIKE project, an FP7 Marie Curie Initial Training Network (ITN) project in which academic partners trained early-stage researchers in close cooperation with a broader range of associated partners, including from the private sector. The aim of the project was to arrive at a deeper understanding of complex (mostly nonlinear) financial models and to develop effective and robust numerical schemes for solving linear and nonlinear problems arising from the mathematical theory of pricing financial derivatives and related financial products. This was accomplished by means of financial modelling, mathematical analysis and numerical simulations, optimal control techniques and validation of models. In recent years the computational complexity of mathematical models employed in financial mathematics has witnessed tremendous growth. Advanced numerical techni...
COMPUTER METHODS OF GENETIC ANALYSIS.
Directory of Open Access Journals (Sweden)
A. L. Osipov
2017-02-01
Full Text Available The article describes the basic statistical methods used in conducting the genetic analysis of human traits: segregation analysis, linkage analysis and allelic associations. Software supporting the implementation of these methods was developed.
Computational methods in drug discovery
Sumudu P. Leelananda; Steffen Lindert
2016-01-01
The process for drug discovery and development is challenging, time consuming and expensive. Computer-aided drug discovery (CADD) tools can act as a virtual shortcut, assisting in the expedition of this long process and potentially reducing the cost of research and development. Today CADD has become an effective and indispensable tool in therapeutic development. The human genome project has made available a substantial amount of sequence data that can be used in various drug discovery project...
International Nuclear Information System (INIS)
Ho, Minhhuy; Schmider, H.; Edgecombe, K.E.
1994-01-01
Topological properties of the charge density ρ(r) of a series of diatomic molecules, as well as ethane, ethene, and acetylene, are calculated at the Hartree-Fock level employing various basis sets, and by the AM1 method. The effect of the core orbitals on the bonding regions in these molecules is examined. The results help to evaluate the utility of AM1 wavefunctions for analyzing the topological properties of the charge density.
Hybrid Monte Carlo methods in computational finance
Leitao Rodriguez, A.
2017-01-01
Monte Carlo methods are highly appreciated and intensively employed in computational finance in the context of financial derivatives valuation or risk management. The method offers valuable advantages like flexibility, easy interpretation and straightforward implementation. Furthermore, the
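A minimal example of the kind of Monte Carlo valuation referred to above: pricing a European call under geometric Brownian motion and checking it against the Black-Scholes closed form. This is standard textbook material, not code from the thesis:

```python
import math
import random

def black_scholes_call(S0, K, r, sigma, T):
    """Closed-form Black-Scholes price of a European call option."""
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    N = lambda x: 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))  # standard normal CDF
    return S0 * N(d1) - K * math.exp(-r * T) * N(d2)

def monte_carlo_call(S0, K, r, sigma, T, n_paths=200_000, seed=1):
    """Plain Monte Carlo: simulate terminal prices, discount the mean payoff."""
    rng = random.Random(seed)
    drift = (r - 0.5 * sigma**2) * T
    vol = sigma * math.sqrt(T)
    payoff_sum = 0.0
    for _ in range(n_paths):
        ST = S0 * math.exp(drift + vol * rng.gauss(0.0, 1.0))
        payoff_sum += max(ST - K, 0.0)
    return math.exp(-r * T) * payoff_sum / n_paths

bs = black_scholes_call(100.0, 100.0, 0.05, 0.2, 1.0)
mc = monte_carlo_call(100.0, 100.0, 0.05, 0.2, 1.0)
```

The statistical error shrinks as 1/√N, which is exactly the convergence limitation that motivates the hybrid variance-reduction ideas the thesis pursues.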
Advanced computational electromagnetic methods and applications
Li, Wenxing; Elsherbeni, Atef; Rahmat-Samii, Yahya
2015-01-01
This new resource covers the latest developments in computational electromagnetic methods, with emphasis on cutting-edge applications. This book is designed to extend existing literature to the latest development in computational electromagnetic methods, which are of interest to readers in both academic and industrial areas. The topics include advanced techniques in MoM, FEM and FDTD, spectral domain method, GPU and Phi hardware acceleration, metamaterials, frequency and time domain integral equations, and statistics methods in bio-electromagnetics.
Computational Methods for Biomolecular Electrostatics
Dong, Feng; Olsen, Brett; Baker, Nathan A.
2008-01-01
An understanding of intermolecular interactions is essential for insight into how cells develop, operate, communicate and control their activities. Such interactions include several components: contributions from linear, angular, and torsional forces in covalent bonds, van der Waals forces, as well as electrostatics. Among the various components of molecular interactions, electrostatics are of special importance because of their long range and their influence on polar or charged molecules, including water, aqueous ions, and amino or nucleic acids, which are some of the primary components of living systems. Electrostatics, therefore, play important roles in determining the structure, motion and function of a wide range of biological molecules. This chapter presents a brief overview of electrostatic interactions in cellular systems with a particular focus on how computational tools can be used to investigate these types of interactions. PMID:17964951
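One of the simplest computational ingredients behind such electrostatics calculations is Debye-Hückel screening: the bare Coulomb interaction in a dielectric is damped by exp(-κr), where κ is the inverse Debye length set by the salt concentration. The two-charge geometry below is a made-up example, not from the chapter:

```python
import math

COULOMB_K = 8.9875517873681764e9   # 1/(4*pi*eps0), N m^2 / C^2
E_CHARGE = 1.602176634e-19         # elementary charge, C

def coulomb_energy(q1, q2, r, eps_r=78.5):
    """Bare Coulomb interaction energy in a uniform dielectric (joules)."""
    return COULOMB_K * q1 * q2 / (eps_r * r)

def debye_huckel_energy(q1, q2, r, kappa, eps_r=78.5):
    """Screened (Debye-Hueckel) energy: bare Coulomb times exp(-kappa*r)."""
    return coulomb_energy(q1, q2, r, eps_r) * math.exp(-kappa * r)

r = 1.0e-9                 # 1 nm separation
kappa = 1.0 / 0.96e-9      # inverse Debye length, ~0.1 M 1:1 salt at 298 K
bare = coulomb_energy(E_CHARGE, -E_CHARGE, r)
screened = debye_huckel_energy(E_CHARGE, -E_CHARGE, r, kappa)
```

Screening weakens the long-range attraction between the opposite charges without changing its sign, illustrating why ionic strength is a key input to Poisson-Boltzmann-type solvers.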
Thiessen, P. A.; Treder, H.-J.
Ab initio vel ex eventu. The present state of physical knowledge, in particular atomistics and quantum theory, makes possible (in well-defined energy ranges) an ab initio calculation of all physical and chemical processes and structures. The Schrödinger equation, together with the principles of quantum statistics (Pauli principle), permits the calculation, from Planck's quantum of action h and the atomic constants, of all energy exchanges, time courses, etc., which in particular govern chemical physics. The calculated results hold even quantitatively, apart from the unavoidable stochastics. These ab initio calculations on the one hand correspond to, and on the other hand are complementary to, results ex eventu based on the methods of theoretical chemistry and classical thermodynamics. Ab initio theoretical treatment leads to mathematical experiments which supplement or even substitute for laboratory experiments.
Computational methods in power system analysis
Idema, Reijer
2014-01-01
This book treats state-of-the-art computational methods for power flow studies and contingency analysis. In the first part the authors present the relevant computational methods and mathematical concepts. In the second part, power flow and contingency analysis are treated. Furthermore, traditional methods to solve such problems are compared to modern solvers, developed using the knowledge of the first part of the book. Finally, these solvers are analyzed both theoretically and experimentally, clearly showing the benefits of the modern approach.
Computational methods for data evaluation and assimilation
Cacuci, Dan Gabriel
2013-01-01
Data evaluation and data combination require the use of a wide range of probability theory concepts and tools, from deductive statistics mainly concerning frequencies and sample tallies to inductive inference for assimilating non-frequency data and a priori knowledge. Computational Methods for Data Evaluation and Assimilation presents interdisciplinary methods for integrating experimental and computational information. This self-contained book shows how the methods can be applied in many scientific and engineering areas. After presenting the fundamentals underlying the evaluation of experiment
Electromagnetic field computation by network methods
Felsen, Leopold B; Russer, Peter
2009-01-01
This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents new strong models by combining important computational methods. This is the last book of the late Leopold Felsen.
Methods in computed angiotomography of the brain
International Nuclear Information System (INIS)
Yamamoto, Yuji; Asari, Shoji; Sadamoto, Kazuhiko.
1985-01-01
The authors introduce methods for computed angiotomography of the brain. The setting of the scan planes and levels and the minimum-dose bolus (MinDB) injection of contrast medium are described in detail. These methods are easily and safely employed with already widely available CT scanners. Computed angiotomography is expected to find clinical application in many institutions because of its diagnostic value in screening for cerebrovascular lesions and in demonstrating the relationship between pathological lesions and cerebral vessels. (author)
Methods and experimental techniques in computer engineering
Schiaffonati, Viola
2014-01-01
Computing and science reveal a synergic relationship. On the one hand, it is widely evident that computing plays an important role in the scientific endeavor. On the other hand, the role of scientific method in computing is getting increasingly important, especially in providing ways to experimentally evaluate the properties of complex computing systems. This book critically presents these issues from a unitary conceptual and methodological perspective by addressing specific case studies at the intersection between computing and science. The book originates from, and collects the experience of, a course for PhD students in Information Engineering held at the Politecnico di Milano. Following the structure of the course, the book features contributions from some researchers who are working at the intersection between computing and science.
Multiple time step integrators in ab initio molecular dynamics
International Nuclear Information System (INIS)
Luehr, Nathan; Martínez, Todd J.; Markland, Thomas E.
2014-01-01
Multiple time-scale algorithms exploit the natural separation of time-scales in chemical systems to greatly improve the efficiency of molecular dynamics simulations. Although the utility of these methods in systems where the interactions are described by empirical potentials is now well established, their application to ab initio molecular dynamics calculations has been limited by difficulties associated with splitting the ab initio potential into fast and slowly varying components. Here we present two schemes that enable efficient time-scale separation in ab initio calculations: one based on fragment decomposition and the other on range separation of the Coulomb operator in the electronic Hamiltonian. We demonstrate for both water clusters and a solvated hydroxide ion that multiple time-scale molecular dynamics allows for outer time steps of 2.5 fs, which are as large as those obtained when such schemes are applied to empirical potentials, while still allowing for bonds to be broken and reformed throughout the dynamics. This permits computational speedups of up to 4.4x, compared to standard Born-Oppenheimer ab initio molecular dynamics with a 0.5 fs time step, while maintaining the same energy conservation and accuracy.
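The reference-system propagator (RESPA) structure underlying such multiple time-step schemes applies the slow force as an outer half-kick wrapped around several inner velocity-Verlet steps driven by the fast force. Below is a one-particle harmonic toy split, not the fragment-based or range-separated ab initio splitting of the paper:

```python
def respa_step(x, v, dt, n_inner, f_fast, f_slow, m=1.0):
    """One outer RESPA step: slow half-kick, n_inner fast velocity-Verlet
    steps with the fast force only, then the closing slow half-kick."""
    v += 0.5 * dt * f_slow(x) / m
    h = dt / n_inner
    for _ in range(n_inner):
        v += 0.5 * h * f_fast(x) / m
        x += h * v
        v += 0.5 * h * f_fast(x) / m
    v += 0.5 * dt * f_slow(x) / m
    return x, v

# Toy split: a stiff spring (evaluated often) plus a soft spring (evaluated rarely).
k_fast, k_slow = 100.0, 1.0
f_fast = lambda x: -k_fast * x
f_slow = lambda x: -k_slow * x
energy = lambda x, v: 0.5 * v * v + 0.5 * (k_fast + k_slow) * x * x

x, v = 1.0, 0.0
e0 = energy(x, v)
drift = 0.0
for _ in range(1000):
    x, v = respa_step(x, v, dt=0.05, n_inner=5, f_fast=f_fast, f_slow=f_slow)
    drift = max(drift, abs(energy(x, v) - e0))
```

The slow force is evaluated only once per outer step of 0.05 while the stiff motion is resolved with an inner step of 0.01, yet the integrator remains symplectic and the energy stays bounded, which is the payoff of the splitting.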
Bicanonical ab Initio Molecular Dynamics for Open Systems.
Frenzel, Johannes; Meyer, Bernd; Marx, Dominik
2017-08-08
Performing ab initio molecular dynamics simulations of open systems, where the chemical potential rather than the number of both nuclei and electrons is fixed, is still a challenge. Here, drawing on bicanonical sampling ideas introduced two decades ago by Swope and Andersen [J. Chem. Phys. 1995, 102, 2851-2863] to calculate chemical potentials of liquids and solids, an ab initio simulation technique is devised which introduces a fictitious dynamics of two superimposed but otherwise independent periodic systems including full electronic structure, such that either the chemical potential or the average fractional particle number of a specific chemical species can be kept constant. As proof of concept, we demonstrate that solvation free energies can be computed from these bicanonical ab initio simulations upon directly superimposing pure bulk water and the respective aqueous solution as the two limiting systems. The method is useful in many circumstances, for instance for studying heterogeneous catalytic processes taking place on surfaces where the chemical potential of reactants rather than their number is controlled, and opens a pathway toward ab initio simulations at constant electrochemical potential.
Computational techniques of the simplex method
Maros, István
2003-01-01
Computational Techniques of the Simplex Method is a systematic treatment focused on the computational issues of the simplex method. It provides a comprehensive coverage of the most important and successful algorithmic and implementation techniques of the simplex method. It is a unique source of essential, never discussed details of algorithmic elements and their implementation. On the basis of the book the reader will be able to create a highly advanced implementation of the simplex method which, in turn, can be used directly or as a building block in other solution algorithms.
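The mechanics the book dissects can be compressed into a naive dense-tableau sketch: no pricing strategies, no sparsity handling, none of the book's implementation techniques, just the textbook pivot loop, here solving a classic small LP:

```python
def simplex_max(c, A, b):
    """Naive tableau simplex for: max c.x subject to A x <= b, x >= 0,
    assuming b >= 0 (slack basis is feasible). Educational only."""
    m, n = len(A), len(c)
    # Tableau rows: [A | I | b]; objective row: [-c | 0 | 0].
    T = [A[i] + [1.0 if j == i else 0.0 for j in range(m)] + [b[i]]
         for i in range(m)]
    T.append([-ci for ci in c] + [0.0] * m + [0.0])
    basis = [n + i for i in range(m)]          # slacks start in the basis
    while True:
        col = min(range(n + m), key=lambda j: T[-1][j])   # entering column
        if T[-1][col] >= -1e-12:               # no negative reduced cost
            break                              # -> current basis is optimal
        ratios = [(T[i][-1] / T[i][col], i) for i in range(m)
                  if T[i][col] > 1e-12]        # minimum-ratio test
        if not ratios:
            raise ValueError("unbounded LP")
        _, row = min(ratios)
        piv = T[row][col]
        T[row] = [t / piv for t in T[row]]     # normalize pivot row
        for i in range(m + 1):                 # eliminate column elsewhere
            if i != row and abs(T[i][col]) > 1e-12:
                factor = T[i][col]
                T[i] = [t - factor * s for t, s in zip(T[i], T[row])]
        basis[row] = col
    x = [0.0] * n
    for i, bvar in enumerate(basis):
        if bvar < n:
            x[bvar] = T[i][-1]
    return T[-1][-1], x

# Classic example: max 3x + 5y s.t. x <= 4, 2y <= 12, 3x + 2y <= 18.
value, x = simplex_max([3.0, 5.0],
                       [[1.0, 0.0], [0.0, 2.0], [3.0, 2.0]],
                       [4.0, 12.0, 18.0])
```

A production implementation of the kind the book describes replaces this dense tableau with factorized basis updates, pricing, and anti-degeneracy safeguards, which is precisely where the interesting computational detail lives.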
Faas, S.; Snijders, Jaap; van Lenthe, J.H.; HernandezLaguna, A; Maruani, J; McWeeny, R; Wilson, S
2000-01-01
In this paper we present the first application of the ZORA (Zeroth Order Regular Approximation of the Dirac Fock equation) formalism in Ab Initio electronic structure calculations. The ZORA method, which has been tested previously in the context of Density Functional Theory, has been implemented in
Computational and mathematical methods in brain atlasing.
Nowinski, Wieslaw L
2017-12-01
Brain atlases have a wide range of use from education to research to clinical applications. Mathematical methods as well as computational methods and tools play a major role in the process of brain atlas building and developing atlas-based applications. Computational methods and tools cover three areas: dedicated editors for brain model creation, brain navigators supporting multiple platforms, and atlas-assisted specific applications. Mathematical methods in atlas building and developing atlas-aided applications deal with problems in image segmentation, geometric body modelling, physical modelling, atlas-to-scan registration, visualisation, interaction and virtual reality. Here I overview computational and mathematical methods in atlas building and developing atlas-assisted applications, and share my contribution to and experience in this field.
Siahaan, P.; Salimah, S. N. M.; Sipangkar, M. J.; Hudiyanti, D.; Djunaidi, M. C.; Laksitorini, M. D.
2018-04-01
Chitosan application in the pharmaceutical and cosmeceutical industries is limited by its poor solubility. Modification of the -NH2 and -OH functional groups of chitosan by adding a carboxyl group has been shown to improve its solubility and applicability. Attempts to synthesize carboxymethyl chitosan (CMC) from monochloroacetic acid (MCAA) were made prior to this report; however, no information is available on whether -OH (-O-C bond formation) or -NH2 (-N-C bond formation) is the preferred attachment site for -CH2COOH. In the current study, the reaction mechanism between the chitosan and MCAA reactants to form carboxymethyl chitosan (CMC) was examined by a computational approach. A chitosan dimer was used as the molecular model in the calculations, and all molecular structures involved in the reaction mechanism were optimized by ab initio computation at the HF/6-31G(d,p) level of theory. The results showed that -N-C bond formation via SN2 is kinetically favored over -O-C bond formation via SN2, with activation energies of 469.437 kJ/mol and 533.219 kJ/mol, respectively. However, -O-C bond formation is more spontaneous than -N-C bond formation, because the ΔG of the O-CMC-2 formation reaction is more negative than that of the N-CMC-2 formation reaction (-4.353 kJ/mol and -1.095 kJ/mol, respectively). The synthesis of N,O-CMC thus first forms -O-CH2COOH and then continues to form -NH-CH2COOH. This information is valuable for further optimizing the reaction conditions for CMC synthesis.
Numerical Methods for Stochastic Computations A Spectral Method Approach
Xiu, Dongbin
2010-01-01
The first graduate-level textbook to focus on fundamental aspects of numerical methods for stochastic computations, this book describes the class of numerical methods based on generalized polynomial chaos (gPC). These fast, efficient, and accurate methods are an extension of the classical spectral methods to high-dimensional random spaces. Designed to simulate complex systems subject to random inputs, these methods are widely used in many areas of computer science and engineering. The book introduces polynomial approximation theory and probability theory; describes the basic theory of gPC meth
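The core of gPC is a spectral expansion in polynomials orthogonal under the input's probability measure. A stdlib-only sketch of this idea (my illustration, assuming a single standard normal input and probabilists' Hermite polynomials): project f(Z) = e^Z onto the basis by quadrature, then recover the mean and variance directly from the coefficients.

```python
import math

def hermite_e(n, x):
    # Probabilists' Hermite polynomials He_n via the three-term recurrence
    # He_{k+1}(x) = x*He_k(x) - k*He_{k-1}(x).
    h0, h1 = 1.0, x
    if n == 0:
        return h0
    for k in range(1, n):
        h0, h1 = h1, x * h1 - k * h0
    return h1

def gpc_coefficients(f, order, dz=1e-3, zmax=12.0):
    # Spectral projection c_n = E[f(Z) He_n(Z)] / n! for Z ~ N(0,1),
    # with the Gaussian-weighted integral done by the trapezoidal rule.
    steps = int(2 * zmax / dz)
    norm = math.sqrt(2.0 * math.pi)
    coeffs = []
    for n in range(order + 1):
        total = 0.0
        for i in range(steps + 1):
            z = -zmax + i * dz
            w = 0.5 if i in (0, steps) else 1.0
            total += w * f(z) * hermite_e(n, z) * math.exp(-0.5 * z * z)
        coeffs.append(total * dz / (norm * math.factorial(n)))
    return coeffs

# Expand f(Z) = exp(Z); the moments follow from the coefficients:
# mean = c_0 (exact: sqrt(e)), variance = sum c_n^2 n! (exact: e*(e-1)).
c = gpc_coefficients(math.exp, order=10)
mean = c[0]
var = sum(c[n] ** 2 * math.factorial(n) for n in range(1, len(c)))
```

This is the "spectral" character the abstract refers to: with a smooth f, a handful of coefficients already reproduce the lognormal moments to high accuracy.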
Empirical evaluation methods in computer vision
Christensen, Henrik I
2002-01-01
This book provides comprehensive coverage of methods for the empirical evaluation of computer vision techniques. The practical use of computer vision requires empirical evaluation to ensure that the overall system has a guaranteed performance. The book contains articles that cover the design of experiments for evaluation, range image segmentation, the evaluation of face recognition and diffusion methods, image matching using correlation methods, and the performance of medical image processing algorithms. Sample Chapter(s). Foreword (228 KB). Chapter 1: Introduction (505 KB). Contents: Automate
A computational method for sharp interface advection
DEFF Research Database (Denmark)
Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje
2016-01-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volu...
Computing discharge using the index velocity method
Levesque, Victor A.; Oberg, Kevin A.
2012-01-01
Application of the index velocity method for computing continuous records of discharge has become increasingly common, especially since the introduction of low-cost acoustic Doppler velocity meters (ADVMs) in 1997. Presently (2011), the index velocity method is being used to compute discharge records for approximately 470 gaging stations operated and maintained by the U.S. Geological Survey. The purpose of this report is to document and describe techniques for computing discharge records using the index velocity method. Computing discharge using the index velocity method differs from the traditional stage-discharge method by separating velocity and area into two ratings—the index velocity rating and the stage-area rating. The outputs from each of these ratings, mean channel velocity (V) and cross-sectional area (A), are then multiplied together to compute a discharge. For the index velocity method, V is a function of such parameters as streamwise velocity, stage, cross-stream velocity, and velocity head, and A is a function of stage and cross-section shape. The index velocity method can be used at locations where stage-discharge methods are used, but it is especially appropriate when more than one specific discharge can be measured for a specific stage. After the ADVM is selected, installed, and configured, the stage-area rating and the index velocity rating must be developed. A standard cross section is identified and surveyed in order to develop the stage-area rating. The standard cross section should be surveyed every year for the first 3 years of operation and thereafter at a lesser frequency, depending on the susceptibility of the cross section to change. Periodic measurements of discharge are used to calibrate and validate the index rating for the range of conditions experienced at the gaging station. Data from discharge measurements, ADVMs, and stage sensors are compiled for index-rating analysis. Index ratings are developed by means of regression
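The two-rating structure described above can be sketched in a few lines. In this sketch the linear form of the index rating, the idealized trapezoidal standard cross section, and all the numbers are my illustrative assumptions, not USGS procedure (a real stage-area rating comes from the surveyed section).

```python
def fit_index_rating(v_index, v_mean):
    # Ordinary least squares for the linear index rating V = a + b * v_index,
    # calibrated from paired ADVM readings and measured mean channel velocities.
    n = len(v_index)
    sx, sy = sum(v_index), sum(v_mean)
    sxx = sum(x * x for x in v_index)
    sxy = sum(x * y for x, y in zip(v_index, v_mean))
    b = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    a = (sy - b * sx) / n
    return a, b

def stage_area(stage, bottom_width=10.0, side_slope=2.0):
    # Stage-area rating for an idealized trapezoidal cross section.
    return stage * (bottom_width + side_slope * stage)

def discharge(v_index, stage, rating):
    # The index velocity method: Q = V * A from the two separate ratings.
    a, b = rating
    return (a + b * v_index) * stage_area(stage)

# Synthetic calibration measurements (illustrative numbers only).
rating = fit_index_rating([0.5, 1.0, 1.5], [0.55, 1.00, 1.45])
q = discharge(1.0, 2.0, rating)   # V = 1.0 m/s, A = 28 m^2 -> Q = 28 m^3/s
```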
Computational efficiency for the surface renewal method
Kelley, Jason; Higgins, Chad
2018-04-01
Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and that tested for sensitivity to length of flux averaging period, ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. Increased speed of computation time grants flexibility to implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
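One computational ingredient of SR analysis is tabulating moments of lagged differences of the high-frequency scalar series (used, for example, in Van Atta-style ramp analysis). A single-pass sketch of that tabulation — illustrative, not the authors' code, and only one building block of a full SR calibration:

```python
def structure_functions(series, lag, orders=(2, 3, 5)):
    # Moments of lagged differences of a high-frequency scalar series --
    # the raw quantities from which SR ramp amplitude and period are derived.
    diffs = [series[i] - series[i - lag] for i in range(lag, len(series))]
    n = len(diffs)
    return {p: sum(d ** p for d in diffs) / n for p in orders}

# Sanity check on a pure ramp T[i] = i: every lagged difference equals the lag.
sf = structure_functions(list(range(100)), lag=3)   # {2: 9.0, 3: 27.0, 5: 243.0}
```

Because this loop runs once per candidate lag over 10 Hz+ records, the cost scales as (record length) x (number of lags), which is exactly where the algebraic and signal-processing shortcuts discussed in the paper pay off.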
Computational methods in molecular imaging technologies
Gunjan, Vinit Kumar; Venkatesh, C; Amarnath, M
2017-01-01
This book highlights the experimental investigations that have been carried out on magnetic resonance imaging and computed tomography (MRI & CT) images using state-of-the-art Computational Image processing techniques, and tabulates the statistical values wherever necessary. In a very simple and straightforward way, it explains how image processing methods are used to improve the quality of medical images and facilitate analysis. It offers a valuable resource for researchers, engineers, medical doctors and bioinformatics experts alike.
Ab initio pseudopotential theory
International Nuclear Information System (INIS)
Yin, M.T.; Cohen, M.L.
1982-01-01
The ab initio norm-conserving pseudopotential is generated from a reference atomic configuration in which the pseudoatomic eigenvalues and wave functions outside the core region agree with the corresponding ab initio all-electron results within the density-functional formalism. This paper explains why such pseudopotentials accurately reproduce the all-electron results in both atoms and in multiatomic systems. In particular, a theorem is derived to demonstrate the energy- and perturbation-independent properties of ab initio pseudopotentials
Digital image processing mathematical and computational methods
Blackledge, J M
2005-01-01
This authoritative text (the second part of a complete MSc course) provides mathematical methods required to describe images, image formation and different imaging systems, coupled with the principle techniques used for processing digital images. It is based on a course for postgraduates reading physics, electronic engineering, telecommunications engineering, information technology and computer science. This book relates the methods of processing and interpreting digital images to the 'physics' of imaging systems. Case studies reinforce the methods discussed, with examples of current research
Zonal methods and computational fluid dynamics
International Nuclear Information System (INIS)
Atta, E.H.
1985-01-01
Recent advances in developing numerical algorithms for solving fluid flow problems, and the continuing improvement in the speed and storage of large scale computers have made it feasible to compute the flow field about complex and realistic configurations. Current solution methods involve the use of a hierarchy of mathematical models ranging from the linearized potential equation to the Navier Stokes equations. Because of the increasing complexity of both the geometries and flowfields encountered in practical fluid flow simulation, there is a growing emphasis in computational fluid dynamics on the use of zonal methods. A zonal method is one that subdivides the total flow region into interconnected smaller regions or zones. The flow solutions in these zones are then patched together to establish the global flow field solution. Zonal methods are primarily used either to limit the complexity of the governing flow equations to a localized region or to alleviate the grid generation problems about geometrically complex and multicomponent configurations. This paper surveys the application of zonal methods for solving the flow field about two and three-dimensional configurations. Various factors affecting their accuracy and ease of implementation are also discussed. From the presented review it is concluded that zonal methods promise to be very effective for computing complex flowfields and configurations. Currently there are increasing efforts to improve their efficiency, versatility, and accuracy
Domain decomposition methods and parallel computing
International Nuclear Information System (INIS)
Meurant, G.
1991-01-01
In this paper, we show how to efficiently solve large linear systems on parallel computers. These linear systems arise from discretization of scientific computing problems described by systems of partial differential equations. We show how to get a discrete finite dimensional system from the continuous problem, and the chosen conjugate gradient iterative algorithm is briefly described. Then, the different kinds of parallel architectures are reviewed and their advantages and deficiencies are emphasized. We sketch the problems found in programming the conjugate gradient method on parallel computers. For this algorithm to be efficient on parallel machines, domain decomposition techniques are introduced. We give results of numerical experiments showing that these techniques allow a good rate of convergence for the conjugate gradient algorithm as well as computational speeds in excess of a billion floating point operations per second. (author). 5 refs., 11 figs., 2 tabs., 1 inset
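The conjugate gradient iteration referred to above fits in a few lines when the matrix is supplied only as a matrix-vector product. This unpreconditioned sketch (mine, on a 1D Laplacian test problem) omits the step where domain decomposition enters in practice, namely applying subdomain solves as a preconditioner at each iteration:

```python
def cg(matvec, b, tol=1e-10, max_iter=1000):
    # Conjugate gradient for a symmetric positive definite operator.
    x = [0.0] * len(b)
    r = list(b)                    # residual b - A x, with x = 0 initially
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * ai for pi, ai in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * ai for ri, ai in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break                  # residual norm below tolerance
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

def laplacian_1d(v):
    # Tridiagonal (2, -1) discretization of -u'' with Dirichlet boundaries --
    # the kind of PDE discretization the abstract refers to.
    n = len(v)
    return [2.0 * v[i]
            - (v[i - 1] if i > 0 else 0.0)
            - (v[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

n = 50
b = laplacian_1d([1.0] * n)        # manufactured so the solution is all ones
x = cg(laplacian_1d, b)
```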
Directory of Open Access Journals (Sweden)
Fredrik Nilsson
2018-03-01
Full Text Available Substantial progress has been achieved in the last couple of decades in computing the electronic structure of correlated materials from first principles. This progress has been driven by parallel developments in theory and numerical algorithms. Theoretical development in combining ab initio approaches and many-body methods is particularly promising. A crucial role is also played by a systematic method for deriving a low-energy model, which bridges the gap between real and model systems. In this article, an overview is given tracing the development from LDA+U to the latest progress in combining the GW method and extended dynamical mean-field theory (GW+EDMFT). The emphasis is on conceptual and theoretical aspects rather than technical ones.
Computational and instrumental methods in EPR
Bender, Christopher J
2006-01-01
Computational and Instrumental Methods in EPR Prof. Bender, Fordham University Prof. Lawrence J. Berliner, University of Denver Electron magnetic resonance has been greatly facilitated by the introduction of advances in instrumentation and better computational tools, such as the increasingly widespread use of the density matrix formalism. This volume is devoted to both instrumentation and computation aspects of EPR, while addressing applications such as spin relaxation time measurements, the measurement of hyperfine interaction parameters, and the recovery of Mn(II) spin Hamiltonian parameters via spectral simulation. Key features: Microwave Amplitude Modulation Technique to Measure Spin-Lattice (T1) and Spin-Spin (T2) Relaxation Times Improvement in the Measurement of Spin-Lattice Relaxation Time in Electron Paramagnetic Resonance Quantitative Measurement of Magnetic Hyperfine Parameters and the Physical Organic Chemistry of Supramolecular Systems New Methods of Simulation of Mn(II) EPR Spectra: Single Cryst...
Proceedings of computational methods in materials science
International Nuclear Information System (INIS)
Mark, J.E.; Glicksman, M.E.; Marsh, S.P.
1992-01-01
The Symposium on which this volume is based was conceived as a timely expression of some of the fast-paced developments occurring throughout materials science and engineering. It focuses particularly on those involving modern computational methods applied to model and predict the response of materials under a diverse range of physico-chemical conditions. The current easy access of many materials scientists in industry, government laboratories, and academe to high-performance computers has opened many new vistas for predicting the behavior of complex materials under realistic conditions. Some have even argued that modern computational methods in materials science and engineering are literally redefining the bounds of our knowledge from which we predict structure-property relationships, perhaps forever changing the historically descriptive character of the science and much of the engineering
Computational botany methods for automated species identification
Remagnino, Paolo; Wilkin, Paul; Cope, James; Kirkup, Don
2017-01-01
This book discusses innovative methods for mining information from images of plants, especially leaves, and highlights the diagnostic features that can be implemented in fully automatic systems for identifying plant species. Adopting a multidisciplinary approach, it explores the problem of plant species identification, covering both the concepts of taxonomy and morphology. It then provides an overview of morphometrics, including the historical background and the main steps in the morphometric analysis of leaves together with a number of applications. The core of the book focuses on novel diagnostic methods for plant species identification developed from a computer scientist’s perspective. It then concludes with a chapter on the characterization of botanists' visions, which highlights important cognitive aspects that can be implemented in a computer system to more accurately replicate the human expert’s fixation process. The book not only represents an authoritative guide to advanced computational tools fo...
International Nuclear Information System (INIS)
Zeng Xiancheng; Hu Hao; Hu Xiangqian; Yang Weitao
2009-01-01
A quantum mechanical/molecular mechanical minimum free energy path (QM/MM-MFEP) method was developed to calculate the redox free energies of large systems in solution with greatly enhanced efficiency for conformation sampling. The QM/MM-MFEP method describes the thermodynamics of a system on the potential of mean force surface of the solute degrees of freedom. The molecular dynamics (MD) sampling is carried out only with the QM subsystem fixed. It thus avoids 'on-the-fly' QM calculations and overcomes the high computational cost of direct QM/MM MD sampling. In applications to two metal complexes in aqueous solution, the new QM/MM-MFEP method yielded redox free energies in good agreement with those calculated from the direct QM/MM MD method. Two larger biologically important redox molecules, lumichrome and riboflavin, were further investigated to demonstrate the efficiency of the method. The enhanced efficiency and uncompromised accuracy are especially significant for biochemical systems. The QM/MM-MFEP method thus provides an efficient approach to free energy simulation of complex electron transfer reactions.
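Free energy estimators of this family ultimately reduce to averages of Boltzmann-weighted energy differences over sampled configurations. A minimal sketch of the Zwanzig (free energy perturbation) formula — a simpler relative of the potential-of-mean-force machinery used here, shown with a synthetic Gaussian ΔU so the exact answer is known:

```python
import math, random

def zwanzig_free_energy(du_samples, kT=1.0):
    # Zwanzig / free energy perturbation: dF = -kT * ln < exp(-dU/kT) >_0,
    # averaged over configurations sampled in the reference state.
    n = len(du_samples)
    avg = sum(math.exp(-du / kT) for du in du_samples) / n
    return -kT * math.log(avg)

# Synthetic check: for Gaussian dU ~ N(mu, sigma^2) the exact result is
# dF = mu - sigma^2 / (2 kT).
rng = random.Random(0)
mu, sigma, kT = 1.0, 0.5, 1.0
samples = [rng.gauss(mu, sigma) for _ in range(200000)]
dF = zwanzig_free_energy(samples, kT)
exact = mu - sigma ** 2 / (2.0 * kT)   # 0.875
```

The exponential average converges slowly when ΔU fluctuations are large relative to kT, which is one reason methods like QM/MM-MFEP restructure the sampling problem rather than brute-forcing it.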
Computer-Aided Modelling Methods and Tools
DEFF Research Database (Denmark)
Cameron, Ian; Gani, Rafiqul
2011-01-01
The development of models for a range of applications requires methods and tools. In many cases a reference model is required that allows the generation of application specific models that are fit for purpose. There are a range of computer aided modelling tools available that help to define the m...
Applying Human Computation Methods to Information Science
Harris, Christopher Glenn
2013-01-01
Human Computation methods such as crowdsourcing and games with a purpose (GWAP) have each recently drawn considerable attention for their ability to synergize the strengths of people and technology to accomplish tasks that are challenging for either to do well alone. Despite this increased attention, much of this transformation has been focused on…
The asymptotic expansion method via symbolic computation
Navarro, Juan F.
2012-01-01
This paper describes an algorithm for implementing a perturbation method based on an asymptotic expansion of the solution to a second-order differential equation. We also introduce a new symbolic computation system which works with the so-called modified quasipolynomials, as well as an implementation of the algorithm on it.
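The flavor of such perturbation expansions can be shown on a toy algebraic problem (my example, far simpler than the second-order ODEs the paper treats): the positive root of x² + εx − 1 = 0 expands as x(ε) = 1 − ε/2 + ε²/8 + O(ε⁴), and the truncation error behaves like the first omitted term.

```python
import math

def perturbative_root(eps):
    # Truncated asymptotic expansion of the positive root of
    # x^2 + eps*x - 1 = 0:  x(eps) = 1 - eps/2 + eps^2/8 + O(eps^4).
    return 1.0 - eps / 2.0 + eps ** 2 / 8.0

eps = 0.1
approx = perturbative_root(eps)
exact = (-eps + math.sqrt(eps ** 2 + 4.0)) / 2.0   # closed-form root
err = abs(approx - exact)   # first omitted term is eps^4/128 ~ 7.8e-7
```

Symbolic computation systems such as the one described here automate exactly this bookkeeping, order by order, for far larger expansions than can be managed by hand.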
Computationally efficient methods for digital control
Guerreiro Tome Antunes, D.J.; Hespanha, J.P.; Silvestre, C.J.; Kataria, N.; Brewer, F.
2008-01-01
The problem of designing a digital controller is considered with the novelty of explicitly taking into account the computation cost of the controller implementation. A class of controller emulation methods inspired by numerical analysis is proposed. Through various examples it is shown that these
International Nuclear Information System (INIS)
1979-01-01
Goal of this workshop was to provide an introduction to the use of state-of-the-art computer codes for the semi-empirical and ab initio computation of the electronic structure and geometry of small and large molecules. The workshop consisted of 15 lectures on the theoretical foundations of the codes, followed by laboratory sessions which utilized these codes
Advances of evolutionary computation methods and operators
Cuevas, Erik; Oliva Navarro, Diego Alberto
2016-01-01
The goal of this book is to present advances that discuss alternative Evolutionary Computation (EC) developments and non-conventional operators which have proved to be eﬀective in the solution of several complex problems. The book has been structured so that each chapter can be read independently from the others. The book contains nine chapters with the following themes: 1) Introduction, 2) the Social Spider Optimization (SSO), 3) the States of Matter Search (SMS), 4) the collective animal behavior (CAB) algorithm, 5) the Allostatic Optimization (AO) method, 6) the Locust Search (LS) algorithm, 7) the Adaptive Population with Reduced Evaluations (APRE) method, 8) the multimodal CAB, 9) the constrained SSO method.
Computational Methods in Stochastic Dynamics Volume 2
Stefanou, George; Papadopoulos, Vissarion
2013-01-01
The considerable influence of inherent uncertainties on structural behavior has led the engineering community to recognize the importance of a stochastic approach to structural problems. Issues related to uncertainty quantification and its influence on the reliability of the computational models are continuously gaining in significance. In particular, the problems of dynamic response analysis and reliability assessment of structures with uncertain system and excitation parameters have been the subject of continuous research over the last two decades as a result of the increasing availability of powerful computing resources and technology. This book is a follow up of a previous book with the same subject (ISBN 978-90-481-9986-0) and focuses on advanced computational methods and software tools which can highly assist in tackling complex problems in stochastic dynamic/seismic analysis and design of structures. The selected chapters are authored by some of the most active scholars in their respective areas and...
Directory of Open Access Journals (Sweden)
Leszek Bober
2012-05-01
Full Text Available A pharmacological and physicochemical classification of furan and thiophene amide derivatives by multiple regression analysis and partial least squares (PLS), based on semi-empirical and ab initio molecular modeling studies and high-performance liquid chromatography (HPLC) retention data, is proposed. Structural parameters obtained from the PCM (Polarizable Continuum Model) method and literature values of biological activity (antiproliferative activity against A431 cells, expressed as LD_{50}) of the examined furan and thiophene derivatives were used to search for relationships. It was tested how varying molecular modeling conditions, considered together with or without HPLC retention data, allow evaluation of the structural recognition of furan and thiophene derivatives with respect to their pharmacological properties.
Ab initio multiple cloning algorithm for quantum nonadiabatic molecular dynamics
Energy Technology Data Exchange (ETDEWEB)
Makhov, Dmitry V.; Shalashilin, Dmitrii V. [Department of Chemistry, University of Leeds, Leeds LS2 9JT (United Kingdom); Glover, William J.; Martinez, Todd J. [Department of Chemistry and The PULSE Institute, Stanford University, Stanford, California 94305, USA and SLAC National Accelerator Laboratory, Menlo Park, California 94025 (United States)
2014-08-07
We present a new algorithm for ab initio quantum nonadiabatic molecular dynamics that combines the best features of ab initio Multiple Spawning (AIMS) and Multiconfigurational Ehrenfest (MCE) methods. In this new method, ab initio multiple cloning (AIMC), the individual trajectory basis functions (TBFs) follow Ehrenfest equations of motion (as in MCE). However, the basis set is expanded (as in AIMS) when these TBFs become sufficiently mixed, preventing prolonged evolution on an averaged potential energy surface. We refer to the expansion of the basis set as “cloning,” in analogy to the “spawning” procedure in AIMS. This synthesis of AIMS and MCE allows us to leverage the benefits of mean-field evolution during periods of strong nonadiabatic coupling while simultaneously avoiding mean-field artifacts in Ehrenfest dynamics. We explore the use of time-displaced basis sets, “trains,” as a means of expanding the basis set for little cost. We also introduce a new bra-ket averaged Taylor expansion (BAT) to approximate the necessary potential energy and nonadiabatic coupling matrix elements. The BAT approximation avoids the necessity of computing electronic structure information at intermediate points between TBFs, as is usually done in saddle-point approximations used in AIMS. The efficiency of AIMC is demonstrated on the nonradiative decay of the first excited state of ethylene. The AIMC method has been implemented within the AIMS-MOLPRO package, which was extended to include Ehrenfest basis functions.
International Nuclear Information System (INIS)
Thompson, K.; Martinez, T.J.
1999-01-01
We present a new approach to first-principles molecular dynamics that combines a general and flexible interpolation method with ab initio evaluation of the potential energy surface. This hybrid approach extends significantly the domain of applicability of ab initio molecular dynamics. Use of interpolation significantly reduces the computational effort associated with the dynamics over most of the time scale of interest, while regions where potential energy surfaces are difficult to interpolate, for example near conical intersections, are treated by direct solution of the electronic Schroedinger equation during the dynamics. We demonstrate the concept through application to the nonadiabatic dynamics of collisional electronic quenching of Li(2p). Full configuration interaction is used to describe the wave functions of the ground and excited electronic states. The hybrid approach agrees well with full ab initio multiple spawning dynamics, while being more than an order of magnitude faster. copyright 1999 American Institute of Physics
Computational methods for industrial radiation measurement applications
International Nuclear Information System (INIS)
Gardner, R.P.; Guo, P.; Ao, Q.
1996-01-01
Computational methods have been used with considerable success to complement radiation measurements in solving a wide range of industrial problems. The almost exponential growth of computer capability and applications in the last few years leads to a "black box" mentality for radiation measurement applications. If a black box is defined as any radiation measurement device that is capable of measuring the parameters of interest when a wide range of operating and sample conditions may occur, then the development of computational methods for industrial radiation measurement applications should now be focused on the black box approach and the deduction of properties of interest from the response with acceptable accuracy and reasonable efficiency. Nowadays, increasingly better understanding of radiation physical processes, more accurate and complete fundamental physical data, and more advanced modeling and software/hardware techniques have made it possible to make giant strides in that direction with new ideas implemented with computer software. The Center for Engineering Applications of Radioisotopes (CEAR) at North Carolina State University has been working on a variety of projects in the area of radiation analyzers and gauges for accomplishing this for quite some time, and they are discussed here with emphasis on current accomplishments
BLUES function method in computational physics
Indekeu, Joseph O.; Müller-Nedebock, Kristian K.
2018-04-01
We introduce a computational method in physics that goes ‘beyond linear use of equation superposition’ (BLUES). A BLUES function is defined as a solution of a nonlinear differential equation (DE) with a delta source that is at the same time a Green’s function for a related linear DE. For an arbitrary source, the BLUES function can be used to construct an exact solution to the nonlinear DE with a different, but related source. Alternatively, the BLUES function can be used to construct an approximate piecewise analytical solution to the nonlinear DE with an arbitrary source. For this alternative use the related linear DE need not be known. The method is illustrated in a few examples using analytical calculations and numerical computations. Areas for further applications are suggested.
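The "linear use of equation superposition" that BLUES goes beyond can be made concrete with a stdlib sketch: for the linear ODE u′ + u = s(t), the Green's function G(t) = e^(−t) (t ≥ 0) is the response to a delta source, and convolving G with any source solves the equation. This illustrates only the linear ingredient; the BLUES construction itself, in which G simultaneously solves a related nonlinear equation, is in the paper.

```python
import math

def green(t):
    # Green's function of the linear ODE u' + u = delta(t): e^(-t) for t >= 0.
    return math.exp(-t) if t >= 0.0 else 0.0

def superpose(source, t, n=10000):
    # Linear superposition: u(t) = integral_0^t G(t - tau) source(tau) dtau,
    # evaluated with the trapezoidal rule on n panels.
    dt = t / n
    total = 0.0
    for i in range(n + 1):
        tau = t * i / n
        w = 0.5 if i in (0, n) else 1.0
        total += w * green(t - tau) * source(tau)
    return total * dt

# Source s(tau) = e^(-2 tau): the exact solution is u(t) = e^(-t) - e^(-2t).
u1 = superpose(lambda tau: math.exp(-2.0 * tau), 1.0)
exact = math.exp(-1.0) - math.exp(-2.0)
```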
Spatial analysis statistics, visualization, and computational methods
Oyana, Tonny J
2015-01-01
An introductory text for the next generation of geospatial analysts and data scientists, Spatial Analysis: Statistics, Visualization, and Computational Methods focuses on the fundamentals of spatial analysis using traditional, contemporary, and computational methods. Outlining both non-spatial and spatial statistical concepts, the authors present practical applications of geospatial data tools, techniques, and strategies in geographic studies. They offer a problem-based learning (PBL) approach to spatial analysis-containing hands-on problem-sets that can be worked out in MS Excel or ArcGIS-as well as detailed illustrations and numerous case studies. The book enables readers to: Identify types and characterize non-spatial and spatial data Demonstrate their competence to explore, visualize, summarize, analyze, optimize, and clearly present statistical data and results Construct testable hypotheses that require inferential statistical analysis Process spatial data, extract explanatory variables, conduct statisti...
Computer Animation Based on Particle Methods
Directory of Open Access Journals (Sweden)
Rafal Wcislo
1999-01-01
Full Text Available The paper presents the main issues of a computer animation of a set of elastic macroscopic objects based on the particle method. The main goal of the generated animations is to achieve very realistic movements in a scene observed on the computer display. The objects (solid bodies) interact mechanically with each other. The movements and deformations of solids are calculated using the particle method. Phenomena connected with the behaviour of solids in the gravitational field, their deformations caused by collisions, and interactions with an optional liquid medium are simulated. The simulation of the liquid is performed using the cellular automata method. The paper presents both simulation schemes (the particle method and the cellular automata rules) and the method of combining them in a single animation program. In order to speed up the execution of the program, a parallel version based on a network of workstations was developed. The paper describes the methods of parallelization and considers problems of load balancing, collision detection, process synchronization, and distributed control of the animation.
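The core loop of such a particle method — compute interparticle forces, then advance positions and velocities by Newtonian integration — can be sketched with a single mass on a spring. The velocity-Verlet integrator is my choice for the sketch; the paper does not specify its time-stepping scheme.

```python
import math

def spring_force(x, k=1.0):
    # Linear elastic restoring force: the basic particle-method interaction
    # from which deformable solids are assembled.
    return -k * x

def velocity_verlet(x, v, dt, steps, m=1.0):
    # Standard velocity-Verlet time stepping for one degree of freedom.
    a = spring_force(x) / m
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = spring_force(x) / m
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

# With k = m = 1 the period is 2*pi; after one period the mass returns home.
dt = 1e-3
x_end, v_end = velocity_verlet(1.0, 0.0, dt, round(2.0 * math.pi / dt))
```

A full animation system repeats this loop over thousands of coupled particles per frame, which is why the paper parallelizes it across workstations.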
Computational methods of electron/photon transport
International Nuclear Information System (INIS)
Mack, J.M.
1983-01-01
A review of computational methods simulating the non-plasma transport of electrons and their attendant cascades is presented. Remarks are mainly restricted to linearized formalisms at electron energies above 1 keV. The effectiveness of various methods is discussed, including moments, point-kernel, invariant imbedding, discrete-ordinates, and Monte Carlo. Future research directions and the potential impact on various aspects of science and engineering are indicated
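Of these, Monte Carlo is the most general; its elementary move — sampling an exponential free path and checking for escape — can be sketched for uncollided photon transmission through a slab. This is a deliberately stripped-down illustration of the sampling idea, not the coupled electron-photon cascade transport the review covers.

```python
import math, random

def transmission(mu, thickness, n_photons=200000, seed=1):
    # Monte Carlo estimate of uncollided transmission through a slab:
    # free path lengths are sampled from p(l) = mu * exp(-mu * l), and a
    # photon is transmitted if its first sampled path exceeds the thickness.
    rng = random.Random(seed)
    hits = sum(1 for _ in range(n_photons)
               if -math.log(1.0 - rng.random()) / mu > thickness)
    return hits / n_photons

t_mc = transmission(0.5, 2.0)   # exact answer: exp(-mu * d) = exp(-1)
```

The statistical error shrinks only as 1/sqrt(N), which is the classic trade-off against the deterministic methods (moments, discrete-ordinates) listed above.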
Mathematical optics classical, quantum, and computational methods
Lakshminarayanan, Vasudevan
2012-01-01
Going beyond standard introductory texts, Mathematical Optics: Classical, Quantum, and Computational Methods brings together many new mathematical techniques from optical science and engineering research. Profusely illustrated, the book makes the material accessible to students and newcomers to the field. Divided into six parts, the text presents state-of-the-art mathematical methods and applications in classical optics, quantum optics, and image processing. Part I describes the use of phase space concepts to characterize optical beams and the application of dynamic programming in optical wave
Energy Technology Data Exchange (ETDEWEB)
Liu, Cong; Assary, Rajeev S.; Curtiss, Larry A.
2014-06-26
Upgrading of furan and small oxygenates obtained from the decomposition of cellulosic materials via formation of carbon-carbon bonds is critical to effective conversion of biomass to liquid transportation fuels. Simulation-driven molecular-level understanding of carbon-carbon bond formation is required to design efficient catalysts and processes. Accurate quantum chemical methods are utilized here to predict the reaction energetics for conversion of furan (C4H4O) to C5-C8 ethers and the transformation of furfural (C5H6O2) to C13-C26 alkanes. Furan can be coupled with various C1 to C4 lower-molecular-weight carbohydrates obtained from pyrolysis, via Diels-Alder-type reactions in the gas phase, to produce C5-C8 cyclic ethers. The computed reaction barriers for these reactions (~25 kcal/mol) are lower than those of the cellulose activation or decomposition reactions (~50 kcal/mol). Cycloaddition of C5-C8 cyclo-ethers with furans can also occur in the gas phase, and the computed activation energy is similar to that of the first Diels-Alder reaction. Furfural, obtained from biomass, can be coupled with aldehydes or ketones with α-hydrogen atoms to form longer-chain aldol products, and these aldol products can undergo vapor-phase hydrocycloaddition (activation barrier of ~20 kcal/mol) to form the precursors of C26 cyclic hydrocarbons. These thermochemical studies provide the basis for further vapor-phase catalytic studies required for upgrading of furans/furfurals to longer-chain hydrocarbons.
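The practical significance of the ~25 vs. ~50 kcal/mol barrier gap quoted above can be made concrete with a simple Arrhenius estimate. This is an illustration, not a calculation from the paper: equal pre-exponential factors are assumed, and the 500 K temperature is an arbitrary choice.

```python
import math

R_CAL = 1.987204  # gas constant in cal/(mol*K)

def arrhenius_rate_ratio(ea1_kcal, ea2_kcal, T):
    """Ratio k1/k2 of two Arrhenius rates with equal pre-exponential factors."""
    return math.exp((ea2_kcal - ea1_kcal) * 1000.0 / (R_CAL * T))

# Diels-Alder coupling (~25 kcal/mol) vs. cellulose decomposition (~50 kcal/mol)
ratio = arrhenius_rate_ratio(25.0, 50.0, 500.0)
```

Under these assumptions the lower-barrier coupling channel is favoured kinetically by roughly ten orders of magnitude at 500 K.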
Delamination detection using methods of computational intelligence
Ihesiulor, Obinna K.; Shankar, Krishna; Zhang, Zhifang; Ray, Tapabrata
2012-11-01
A reliable delamination prediction scheme is indispensable in order to prevent potential risks of catastrophic failures in composite structures. The existence of delaminations changes the vibration characteristics of composite laminates, and hence such indicators can be used to quantify the health characteristics of laminates. An approach for online health monitoring of in-service composite laminates that relies on methods based on computational intelligence is presented in this paper. Typical changes in the observed vibration characteristics (i.e. changes in natural frequencies) are considered as inputs to identify the existence, location and magnitude of delaminations. The performance of the proposed approach is demonstrated using numerical models of composite laminates. Since this identification problem essentially involves the solution of an optimization problem, the use of finite element (FE) methods as the underlying tool for analysis turns out to be computationally expensive. A surrogate-assisted optimization approach is hence introduced to contain the computational time within affordable limits. An artificial neural network (ANN) model with Bayesian regularization is used as the underlying approximation scheme, while an improved rate of convergence is achieved using a memetic algorithm. However, building ANN surrogate models usually requires large training datasets. K-means clustering is effectively employed to reduce the size of the datasets. ANN is also used via inverse modeling to determine the position, size and location of delaminations using changes in measured natural frequencies. The results clearly highlight the efficiency and the robustness of the approach.
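The K-means dataset-reduction step mentioned above can be sketched with a plain Lloyd's-algorithm implementation: the cluster centroids then stand in for the full set of training samples. This is a generic sketch, not the authors' implementation; the feature vectors, cluster count, and seed are illustrative.

```python
import random

def kmeans_reduce(points, k, iters=50, seed=0):
    """Reduce a dataset to k representative centroids via Lloyd's algorithm.

    points: list of equal-length feature vectors (e.g. natural-frequency
    shifts). Returns the k centroids, usable as a smaller training set.
    """
    rng = random.Random(seed)
    centroids = [list(p) for p in rng.sample(points, k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for p in points:                      # assignment step: nearest centroid
            d = [sum((a - b) ** 2 for a, b in zip(p, c)) for c in centroids]
            clusters[d.index(min(d))].append(p)
        for ci, cl in enumerate(clusters):    # update step: cluster means
            if cl:
                centroids[ci] = [sum(col) / len(cl) for col in zip(*cl)]
    return centroids
```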
Method of generating a computer readable model
DEFF Research Database (Denmark)
2008-01-01
A method of generating a computer readable model of a geometrical object constructed from a plurality of interconnectable construction elements, wherein each construction element has a number of connection elements for connecting the construction element with another construction element. The method comprises encoding a first and a second one of the construction elements as corresponding data structures, each representing the connection elements of the corresponding construction element, and each of the connection elements having associated with it a predetermined connection type. The method further comprises determining a first connection element of the first construction element and a second connection element of the second construction element located in a predetermined proximity of each other; and retrieving connectivity information of the corresponding connection types of the first...
Efficient computation method of Jacobian matrix
International Nuclear Information System (INIS)
Sasaki, Shinobu
1995-05-01
As is well known, the elements of the Jacobian matrix are complex trigonometric functions of the joint angles, resulting in a matrix of staggering complexity when written out in full. This article shows that these difficulties are overcome by using a velocity representation. The main point is that its recursive algorithm and computer algebra technologies allow us to derive the analytical formulation with no human intervention. In particular, it is to be noted that, compared to previous results, the elements are greatly simplified through the effective use of frame transformations. Furthermore, in the case of a spherical wrist, it is shown that the present approach is computationally the most efficient. Owing to these advantages, the proposed method is useful in studying kinematically peculiar properties such as singularity problems. (author)
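The velocity-representation idea can be illustrated numerically for the simplest case, a planar two-revolute-joint arm: each Jacobian column is the joint axis crossed with the vector from that joint to the end-effector, so no symbolic differentiation of the trigonometric position equations is needed. This sketch (unit link lengths assumed) is a generic textbook construction, not Sasaki's recursive computer-algebra scheme.

```python
import math

def planar_2r_jacobian(theta1, theta2, l1=1.0, l2=1.0):
    """Geometric Jacobian of a planar 2R arm via the velocity (cross-product)
    formulation: column i is z x (p_end - p_i) for revolute axis z = +z,
    which for planar vectors reduces to (x, y) -> (-y, x)."""
    p1 = (l1 * math.cos(theta1), l1 * math.sin(theta1))           # joint 2 origin
    pe = (p1[0] + l2 * math.cos(theta1 + theta2),
          p1[1] + l2 * math.sin(theta1 + theta2))                  # end-effector
    col1 = (-(pe[1] - 0.0), pe[0] - 0.0)       # z x (p_e - p_0)
    col2 = (-(pe[1] - p1[1]), pe[0] - p1[1 - 1])  # z x (p_e - p_1)
    return [[col1[0], col2[0]], [col1[1], col2[1]]]
```

The columns agree term by term with the analytic Jacobian obtained by differentiating x = l1 cos θ1 + l2 cos(θ1 + θ2), y = l1 sin θ1 + l2 sin(θ1 + θ2).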
Computational method for free surface hydrodynamics
International Nuclear Information System (INIS)
Hirt, C.W.; Nichols, B.D.
1980-01-01
There are numerous flow phenomena in pressure vessel and piping systems that involve the dynamics of free fluid surfaces. For example, fluid interfaces must be considered during the draining or filling of tanks, in the formation and collapse of vapor bubbles, and in seismically shaken vessels that are partially filled. To aid in the analysis of these types of flow phenomena, a new technique has been developed for the computation of complicated free-surface motions. This technique is based on the concept of a local average volume of fluid (VOF) and is embodied in a computer program for two-dimensional, transient fluid flow called SOLA-VOF. The basic approach used in the VOF technique is briefly described and compared to other free-surface methods. Specific capabilities of the SOLA-VOF program are illustrated by generic examples of bubble growth and collapse, flows of immiscible fluid mixtures, and the confinement of spilled liquids.
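The VOF bookkeeping at the heart of such methods can be shown in its simplest possible form: a 1D donor-cell (upwind) update of the cell fluid fractions. This is only the fraction-transport kernel under a constant positive velocity, a far cry from the full 2D transient SOLA-VOF solver, and the interface-sharpening logic of real VOF codes is deliberately omitted.

```python
def vof_advect_1d(frac, u, dx, dt):
    """One donor-cell (upwind) advection step for a 1D volume-of-fluid field.

    frac: list of cell fluid fractions in [0, 1]; u: constant velocity > 0.
    The flux through each face is the donor cell's fraction times u*dt/dx,
    the simplest version of VOF face-flux accounting.
    """
    c = u * dt / dx                            # Courant number, must be <= 1
    assert 0.0 <= c <= 1.0
    influx = [0.0] + [c * f for f in frac]     # flux entering each cell from the left
    return [f - c * f + influx[i] for i, f in enumerate(frac)]
```

A single step with Courant number 0.5 moves a sharp interface half a cell to the right while conserving the fluid volume inside the domain.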
Soft Computing Methods for Disulfide Connectivity Prediction.
Márquez-Chamorro, Alfonso E; Aguilar-Ruiz, Jesús S
2015-01-01
The problem of protein structure prediction (PSP) is one of the main challenges in structural bioinformatics. To tackle this problem, PSP can be divided into several subproblems. One of these subproblems is the prediction of disulfide bonds. The disulfide connectivity prediction problem consists in identifying which nonadjacent cysteines would be cross-linked from all possible candidates. Determining the disulfide bond connectivity between the cysteines of a protein is desirable as a previous step of 3D PSP, as the protein conformational search space is thereby highly reduced. The most representative soft computing approaches for the disulfide bond connectivity prediction problem of the last decade are summarized in this paper. Certain aspects, such as the different methodologies based on soft computing approaches (artificial neural networks or support vector machines) or features of the algorithms, are used for the classification of these methods.
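Once a soft-computing predictor (ANN or SVM) has assigned a bonding score to each candidate cysteine pair, the connectivity step reduces to a maximum-weight perfect matching. The brute-force sketch below makes that formulation explicit for a handful of cysteines; the scores are hypothetical, and real methods replace the exhaustive search with polynomial matching algorithms.

```python
from itertools import permutations

def best_connectivity(scores):
    """Brute-force maximum-weight perfect matching for an even number of
    cysteines. scores[(i, j)] is a predicted bonding score for i < j."""
    n = max(max(p) for p in scores) + 1
    best_total, best_pairs = float("-inf"), None
    for perm in permutations(range(n)):
        # consecutive elements of the permutation define one candidate pairing
        pairs = [tuple(sorted((perm[k], perm[k + 1]))) for k in range(0, n, 2)]
        total = sum(scores[p] for p in pairs)
        if total > best_total:
            best_total, best_pairs = total, sorted(pairs)
    return best_pairs, best_total
```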
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge, as it requires both an accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical/molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling, are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
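For orientation, the simplest free-energy estimator in this family is Zwanzig's exponential averaging (free energy perturbation), of which Bennett's acceptance ratio is a more robust, bidirectional refinement. The sketch below is that elementary estimator only, not the QM-NBB or nonequilibrium-work methods the chapter describes.

```python
import math

def fep_delta_a(du_samples, beta):
    """Zwanzig free-energy perturbation (exponential averaging):
    dA = -(1/beta) * ln < exp(-beta * dU) >_0, where dU = U1 - U0 is
    evaluated on configurations sampled from state 0."""
    avg = sum(math.exp(-beta * du) for du in du_samples) / len(du_samples)
    return -math.log(avg) / beta
```

As a sanity check, a constant energy gap dU = c must give dA = c exactly, independent of the temperature.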
Computational methods for nuclear criticality safety analysis
International Nuclear Information System (INIS)
Maragni, M.G.
1992-01-01
Nuclear criticality safety analyses require the utilization of methods which have been tested and verified against benchmark results. In this work, criticality calculations based on the KENO-IV and MCNP codes are studied, aiming at the qualification of these methods at IPEN-CNEN/SP and COPESP. The utilization of variance reduction techniques is important to reduce the computer execution time, and several of them are analysed. As a practical example of the above methods, a criticality safety analysis for the storage tubes for irradiated fuel elements from the IEA-R1 research reactor has been carried out. This analysis showed that the MCNP code is more adequate for problems with complex geometries, while the KENO-IV code gives conservative results when the generalized geometry option is not used. (author)
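Two of the variance-reduction techniques such codes offer, implicit capture and Russian roulette, can be demonstrated in a deliberately stripped-down Monte Carlo model: straight-ahead particle flights through a slab, with absorption handled by weight reduction rather than history termination. This toy (with its arbitrary weight cutoff and survival probability) illustrates only the weight bookkeeping, not the KENO-IV or MCNP transport physics.

```python
import math
import random

def transmit_implicit_capture(sigma_t, sigma_a, slab, n, seed=1):
    """Estimate slab transmission with implicit capture: at each collision the
    particle's weight is multiplied by the scattering survival probability
    instead of the history being killed; low-weight histories are then
    terminated fairly by Russian roulette."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n):
        x, w = 0.0, 1.0
        while True:
            x += -math.log(rng.random()) / sigma_t   # sampled flight distance
            if x >= slab:
                total += w                            # scored with current weight
                break
            w *= 1.0 - sigma_a / sigma_t              # implicit capture
            if w < 0.1:                               # Russian roulette cutoff
                if rng.random() < 0.5:
                    w *= 2.0                          # survivor: weight doubled
                else:
                    break                             # killed without bias
    return total / n
```

For a purely absorbing slab the estimate should reproduce exponential attenuation, exp(-sigma_t * thickness).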
Tang, Mei; Hu, Cui-E; Lv, Zhen-Long; Chen, Xiang-Rong; Cai, Ling-Cang
2016-12-01
The structures of cationic water clusters (H2O)8+ have been globally explored by the particle swarm optimization method in combination with quantum chemical calculations. Geometry optimizations and vibrational analyses for the 15 most interesting clusters were performed at the MP2/aug-cc-pVDZ level, and infrared spectra were calculated at the MPW1K/6-311++G** level. Special attention was paid to the relationships between their configurations and energies. Both MP2 and B3LYP-D3 calculations revealed that the cage-like structure is the most stable, differing from the five-membered-ring lowest-energy structure reported previously but agreeing well with a cage-like structure in the literature. Furthermore, our cage-like structure is more stable by 0.87 and 1.23 kcal/mol than the previously reported structures at the MP2 and B3LYP-D3 levels, respectively. Interestingly, on the basis of their relative Gibbs free energies and the temperature dependence of populations, the cage-like structure predominates only at very low temperatures, and the most dominant species transforms into a newfound four-membered-ring structure from 100 to 400 K, which can contribute greatly to the experimental infrared spectrum. By topological analysis and reduced density gradient analysis, we also investigated the structural characteristics and bonding strengths of these water cluster radical cations.
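The temperature-dependent populations referred to above follow from a Boltzmann weighting of the isomers' relative Gibbs free energies. The helper below shows that weighting with illustrative, made-up energies; the real analysis uses temperature-dependent Gibbs energies for each cluster, which is what lets the dominant isomer switch with temperature.

```python
import math

R_KCAL = 0.0019872  # gas constant in kcal/(mol*K)

def populations(rel_g_kcal, T):
    """Boltzmann populations of isomers from relative Gibbs free energies
    (kcal/mol, lowest = 0) at temperature T (K)."""
    w = [math.exp(-g / (R_KCAL * T)) for g in rel_g_kcal]
    z = sum(w)
    return [x / z for x in w]
```

With a 1 kcal/mol gap the lower isomer dominates almost completely at 100 K, while at 1000 K the two populations approach each other, the generic mechanism behind the cage-to-ring crossover.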
Evolutionary Computing Methods for Spectral Retrieval
Terrile, Richard; Fink, Wolfgang; Huntsberger, Terrance; Lee, Seungwon; Tisdale, Edwin; VonAllmen, Paul; Tinetti, Giovanna
2009-01-01
A methodology for processing spectral images to retrieve information on underlying physical, chemical, and/or biological phenomena is based on evolutionary and related computational methods implemented in software. In a typical case, the solution (the information that one seeks to retrieve) consists of parameters of a mathematical model that represents one or more of the phenomena of interest. The methodology was developed for the initial purpose of retrieving the desired information from spectral image data acquired by remote-sensing instruments aimed at planets (including the Earth). Examples of information desired in such applications include trace gas concentrations, temperature profiles, surface types, day/night fractions, cloud/aerosol fractions, seasons, and viewing angles. The methodology is also potentially useful for retrieving information on chemical and/or biological hazards in terrestrial settings. In this methodology, one utilizes an iterative process that minimizes a fitness function indicative of the degree of dissimilarity between observed and synthetic spectral and angular data. The evolutionary computing methods that lie at the heart of this process yield a population of solutions (sets of the desired parameters) within an accuracy represented by a fitness-function value specified by the user. The evolutionary computing methods (ECM) used in this methodology are Genetic Algorithms and Simulated Annealing, both of which are well-established optimization techniques and have also been described in previous NASA Tech Briefs articles. These are embedded in a conceptual framework, represented in the architecture of the implementing software, that enables automatic retrieval of spectral and angular data and analysis of the retrieved solutions for uniqueness.
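One of the two evolutionary computing methods named above, simulated annealing, can be sketched as a generic fitness minimizer. The code below is an illustrative annealer over a user-supplied misfit function, not the NASA retrieval software: the step size, cooling schedule, and iteration budget are arbitrary.

```python
import math
import random

def anneal(fitness, x0, step=0.1, t0=1.0, cooling=0.999, iters=5000, seed=0):
    """Minimize a fitness (misfit) function by simulated annealing.

    fitness: callable mapping a parameter list to a scalar to minimize,
    e.g. the dissimilarity between observed and synthetic spectra.
    """
    rng = random.Random(seed)
    x, fx, t = list(x0), fitness(x0), t0
    best, fbest = list(x), fx
    for _ in range(iters):
        cand = [xi + rng.gauss(0.0, step) for xi in x]   # random perturbation
        fc = fitness(cand)
        # accept downhill moves always, uphill moves with Boltzmann probability
        if fc < fx or rng.random() < math.exp((fx - fc) / t):
            x, fx = cand, fc
            if fx < fbest:
                best, fbest = list(x), fx
        t *= cooling                                     # geometric cooling
    return best, fbest
```

In a retrieval setting the parameter vector would hold quantities such as trace-gas concentrations, and the fitness would compare synthetic to observed spectra.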
Saab, Mohamad; Réal, Florent; Šulka, Martin; Cantrel, Laurent; Virot, François; Vallet, Valérie
2017-06-01
Tributyl-phosphate (TBP), a ligand used in the PUREX liquid-liquid separation process of spent nuclear fuel, can form an explosive mixture in contact with nitric acid that might lead to a violent explosive thermal runaway. In the context of the safety of a nuclear reprocessing plant facility, it is crucial to predict the stability of TBP at elevated temperatures. So far, only the enthalpies of formation of TBP are available in the literature, with rather large uncertainties, while those of its degradation products, di-(HDBP) and mono-(H2MBP) butyl phosphates, are unknown. To this end, we have used state-of-the-art quantum chemical methods to compute the formation enthalpies and entropies of TBP and its degradation products HDBP and H2MBP in the gas and liquid phases. Comparisons of levels of quantum chemical theory revealed significant correlation effects on their electronic structures, calling not only for a high-level treatment of electronic correlation, namely local coupled cluster with single and double excitations and a perturbative treatment of triple excitations, but also for extrapolations to the complete basis set to produce reliable and accurate thermodynamic data. Solvation enthalpies were computed with the conductor-like screening model for real solvents (COSMO-RS), for which we observe errors not exceeding 22 kJ mol-1. We thus propose, with a final uncertainty of about 20 kJ mol-1, standard enthalpies of formation of TBP, HDBP, and H2MBP of -1281.7 ± 24.4, -1229.4 ± 19.6, and -1176.7 ± 14.8 kJ mol-1, respectively, in the gas phase. In the liquid phase, the predicted values are -1367.3 ± 24.4, -1348.7 ± 19.6, and -1323.8 ± 14.8 kJ mol-1, to which we may add about -22 kJ mol-1 error from the COSMO-RS solvent model. From these data, the complete hydrolysis of TBP is predicted to be exothermic but slightly endergonic.
Ratschek, H
2003-01-01
This undergraduate and postgraduate text will familiarise readers with interval arithmetic and related tools to gain reliable and validated results and logically correct decisions for a variety of geometric computations, plus the means for alleviating the effects of the errors. It also considers computations on geometric point-sets, whose processing with standard methods is neither robust nor reliable. The authors provide two effective tools for obtaining correct results: (a) interval arithmetic, and (b) ESSA, the new powerful algorithm which improves many geometric computations and makes th
Routine calculation of ab initio melting curves: application to aluminum
Robert, Grégory; Legrand, Philippe; Arnault, Philippe; Desbiens, Nicolas; Clérouin, Jean
2014-01-01
We present a simple, fast, and reliable method to compute the melting curves of materials with ab initio molecular dynamics. It is based on the two-phase thermodynamic model of [Lin et al., J. Chem. Phys. 119, 11792 (2003)] and its improved version given by [Desjarlais, Phys. Rev. E, 88, 062145 (2013)]. In this model, the velocity autocorrelation function is utilized to calculate the contribution of the nuclei motion to the entropy of the solid and liquid phases. It is then possible to find t...
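The key ingredient of the two-phase model mentioned above is the velocity autocorrelation function (VACF), whose Fourier transform gives the density of states from which the solid- and liquid-phase entropies are built. The sketch below computes only the normalized VACF from a velocity time series; the subsequent transform and entropy partitioning of Lin et al. are omitted.

```python
def vacf(vels, max_lag):
    """Normalized velocity autocorrelation function C(t)/C(0), with
    C(t) = < v(0) . v(t) > averaged over particles and time origins.

    vels: per-timestep list of per-particle velocity vectors.
    """
    def corr(lag):
        acc, cnt = 0.0, 0
        for t0 in range(len(vels) - lag):
            for v0, vt in zip(vels[t0], vels[t0 + lag]):
                acc += sum(a * b for a, b in zip(v0, vt))
                cnt += 1
        return acc / cnt
    c0 = corr(0)
    return [corr(lag) / c0 for lag in range(max_lag + 1)]
```

For a purely oscillatory velocity the normalized VACF tracks the cosine of the phase lag, the signature of vibrational (solid-like) motion, whereas diffusive motion makes it decay to zero.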
Song, Lingchun; Han, Jaebeom; Lin, Yen-lin; Xie, Wangshen; Gao, Jiali
2009-10-29
The explicit polarization (X-Pol) method has been examined using ab initio molecular orbital theory and density functional theory. The X-Pol potential was designed to provide a novel theoretical framework for developing next-generation force fields for biomolecular simulations. Importantly, the X-Pol potential is a general method, which can be employed with any level of electronic structure theory. The present study illustrates the implementation of the X-Pol method using ab initio Hartree-Fock theory and hybrid density functional theory. The computational results are illustrated by considering a set of bimolecular complexes of small organic molecules and ions with water. The computed interaction energies and hydrogen bond geometries are in good accord with CCSD(T) calculations and B3LYP/aug-cc-pVDZ optimizations.
Sphinx: merging knowledge-based and ab initio approaches to improve protein loop prediction.
Marks, Claire; Nowak, Jaroslaw; Klostermann, Stefan; Georges, Guy; Dunbar, James; Shi, Jiye; Kelm, Sebastian; Deane, Charlotte M
2017-05-01
Loops are often vital for protein function, however, their irregular structures make them difficult to model accurately. Current loop modelling algorithms can mostly be divided into two categories: knowledge-based, where databases of fragments are searched to find suitable conformations and ab initio, where conformations are generated computationally. Existing knowledge-based methods only use fragments that are the same length as the target, even though loops of slightly different lengths may adopt similar conformations. Here, we present a novel method, Sphinx, which combines ab initio techniques with the potential extra structural information contained within loops of a different length to improve structure prediction. We show that Sphinx is able to generate high-accuracy predictions and decoy sets enriched with near-native loop conformations, performing better than the ab initio algorithm on which it is based. In addition, it is able to provide predictions for every target, unlike some knowledge-based methods. Sphinx can be used successfully for the difficult problem of antibody H3 prediction, outperforming RosettaAntibody, one of the leading H3-specific ab initio methods, both in accuracy and speed. Sphinx is available at http://opig.stats.ox.ac.uk/webapps/sphinx. deane@stats.ox.ac.uk. Supplementary data are available at Bioinformatics online. © The Author 2017. Published by Oxford University Press.
A computational method for sharp interface advection
Roenby, Johan; Bredmose, Henrik; Jasak, Hrvoje
2016-01-01
We devise a numerical method for passive advection of a surface, such as the interface between two incompressible fluids, across a computational mesh. The method is called isoAdvector, and is developed for general meshes consisting of arbitrary polyhedral cells. The algorithm is based on the volume of fluid (VOF) idea of calculating the volume of one of the fluids transported across the mesh faces during a time step. The novelty of the isoAdvector concept consists of two parts. First, we exploit an isosurface concept for modelling the interface inside cells in a geometric surface reconstruction step. Second, from the reconstructed surface, we model the motion of the face–interface intersection line for a general polygonal face to obtain the time evolution within a time step of the submerged face area. Integrating this submerged area over the time step leads to an accurate estimate for the total volume of fluid transported across the face. The method was tested on simple two-dimensional and three-dimensional interface advection problems on both structured and unstructured meshes. The results are very satisfactory in terms of volume conservation, boundedness, surface sharpness and efficiency. The isoAdvector method was implemented as an OpenFOAM® extension and is published as open source. PMID:28018619
Computational electromagnetic methods for transcranial magnetic stimulation
Gomez, Luis J.
Transcranial magnetic stimulation (TMS) is a noninvasive technique used both as a research tool for cognitive neuroscience and as an FDA-approved treatment for depression. During TMS, coils positioned near the scalp generate electric fields and activate targeted brain regions. In this thesis, several computational electromagnetics methods that improve the analysis, design, and uncertainty quantification of TMS systems were developed. Analysis: A new fast direct technique for solving the large and sparse linear systems of equations (LSEs) arising from the finite difference (FD) discretization of Maxwell's quasi-static equations was developed. Following a factorization step, the solver permits computation of TMS fields inside realistic brain models in seconds, allowing for patient-specific real-time usage during TMS. The solver is an alternative to iterative methods for solving FD LSEs, which often require run-times of minutes. A new integral equation (IE) method for analyzing TMS fields was also developed. The human head is highly heterogeneous and characterized by high relative permittivities (on the order of 10^7). IE techniques for analyzing electromagnetic interactions with such media suffer from high-contrast and low-frequency breakdowns. A novel internally combined volume-surface IE method that is stable for high permittivities and low frequencies was therefore developed. The method not only applies to the analysis of high-permittivity objects, but is also the first IE tool that is stable when analyzing highly inhomogeneous negative-permittivity plasmas. Design: TMS applications call for electric fields to be sharply focused on regions that lie deep inside the brain. Unfortunately, fields generated by present-day Figure-8 coils stimulate relatively large regions near the brain surface. An optimization method for designing single-feed TMS coil-arrays capable of producing more localized and deeper stimulation was developed. Results show that the coil-arrays stimulate 2.4 cm into the head while stimulating 3
International Nuclear Information System (INIS)
Jezierski, Andrzej; Szytuła, Andrzej
2016-01-01
The electronic structures and thermodynamic properties of LaPtIn and CePtIn are studied by means of the ab-initio fully relativistic full-potential local orbital (FPLO) method within density functional theory (DFT). We have also examined the influence of hydrogen on the electronic structure and stability of the CePtInH and LaPtInH systems. The positions of the hydrogen atoms have been found from the minimum of the total energy. Our calculations have shown that the band structure and the topology of the Fermi surfaces change significantly upon hydrogenation. The thermodynamic properties (bulk modulus, Debye temperature, constant-pressure heat capacity) calculated in the quasi-harmonic Debye-Grüneisen model are in good agreement with the experimental data. We have applied different methods for calculating the equation of state (EOS) (Murnaghan, Birch-Murnaghan, Poirier-Tarantola, Vinet). The thermodynamic properties are presented for the pressure range 0 < P < 9 GPa and the temperature range 0 < T < 300 K. - Highlights: • Full relativistic band structure of LaPtIn and CePtIn. • Fermi surface of LaPtIn, LaPtInH, CePtIn, CePtInH. • Effect of hydrogenation on the electronic structure of LaPtIn and CePtIn. • Thermodynamic properties in the quasi-harmonic Debye-Grüneisen model.
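One of the four EOS forms listed above, the third-order Birch-Murnaghan equation, can be written out directly as a pressure-volume relation. The parameter values used in the check below (V0, B0, B0') are arbitrary illustrations, not the LaPtIn/CePtIn fit results.

```python
def birch_murnaghan_p(V, V0, B0, B0p):
    """Third-order Birch-Murnaghan pressure P(V).

    V0: equilibrium volume, B0: bulk modulus at V0 (e.g. GPa),
    B0p: pressure derivative of the bulk modulus at V0.
    """
    eta = (V0 / V) ** (1.0 / 3.0)
    return 1.5 * B0 * (eta ** 7 - eta ** 5) * (
        1.0 + 0.75 * (B0p - 4.0) * (eta ** 2 - 1.0))
```

By construction P(V0) = 0 and the slope at V0 reproduces the bulk modulus, B0 = -V dP/dV, which is the standard consistency check on any EOS implementation.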
Computational predictive methods for fracture and fatigue
Cordes, J.; Chang, A. T.; Nelson, N.; Kim, Y.
1994-09-01
The damage-tolerant design philosophy as used by aircraft industries enables aircraft components and aircraft structures to operate safely with minor damage, small cracks, and flaws. Maintenance and inspection procedures ensure that damage developed during service remains below design values. When damage is found, repairs or design modifications are implemented and flight is resumed. Design and redesign guidelines, such as military specification MIL-A-83444, have successfully reduced the incidence of damage and cracks. However, fatigue cracks continue to appear in aircraft well before the design life has expired. The F16 airplane, for instance, developed small cracks in the engine mount, wing support, bulkheads, the fuselage upper skin, the fuel shelf joints, and along the upper wings. Some cracks were found after 600 hours of the 8000-hour design service life, and design modifications were required. Tests on the F16 plane showed that the design loading conditions were close to the predicted loading conditions. Improvements to analytic methods for predicting fatigue crack growth adjacent to holes, when multiple damage sites are present, and in corrosive environments would result in more cost-effective designs, fewer repairs, and fewer redesigns. The overall objective of the research described in this paper is to develop, verify, and extend the computational efficiency of analysis procedures necessary for damage-tolerant design. This paper describes an elastic/plastic fracture method and an associated fatigue analysis method for damage-tolerant design. Both methods are unique in that material parameters such as fracture toughness, R-curve data, and fatigue constants are not required. The methods are implemented with a general-purpose finite element package. Several proof-of-concept examples are given. With further development, the methods could be extended for analysis of multi-site damage, creep-fatigue, and corrosion fatigue problems.
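For context, the conventional fatigue bookkeeping that such damage-tolerant analyses build on (and which the paper's methods aim to improve by removing the need for fatigue constants) is Paris-law crack-growth integration. The sketch below integrates da/dN = C (ΔK)^m numerically; the constants C, m, stress range, and crack sizes are illustrative values, not data from the paper.

```python
import math

def cycles_to_grow(a0, af, C, m, dsigma, Y=1.0, steps=100000):
    """Cycles to grow a crack from a0 to af under the Paris law
    da/dN = C * (dK)^m with dK = Y * dsigma * sqrt(pi * a),
    integrated by the midpoint rule over dN = da / (C * dK^m)."""
    n = 0.0
    da = (af - a0) / steps
    for i in range(steps):
        a = a0 + (i + 0.5) * da                      # midpoint crack length
        dk = Y * dsigma * math.sqrt(math.pi * a)     # stress-intensity range
        n += da / (C * dk ** m)
    return n
```

For m other than 2 the integral also has a closed form, which makes a convenient correctness check on the numerical version.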
Modules and methods for all photonic computing
Schultz, David R.; Ma, Chao Hung
2001-01-01
A method for all-photonic computing, comprising the steps of: encoding a first optical/electro-optical element with a two-dimensional mathematical function representing input data; illuminating the first optical/electro-optical element with a collimated beam of light; illuminating a second optical/electro-optical element with light from the first optical/electro-optical element, the second optical/electro-optical element having a characteristic response corresponding to an iterative algorithm useful for solving a partial differential equation; iteratively recirculating light through the second optical/electro-optical element for a predetermined number of iterations; and, after the predetermined number of iterations, optically and/or electro-optically collecting output data representing an iterative optical solution from the second optical/electro-optical element.
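The kind of iterative PDE algorithm the recirculating optical element embodies can be illustrated electronically by Jacobi relaxation for Laplace's equation: each optical pass corresponds to one sweep in which every interior point is replaced by the average of its four neighbours. This is a generic analogy, not the patented optical implementation.

```python
def jacobi_laplace(grid, iters=500):
    """Jacobi relaxation for Laplace's equation on a 2D grid.

    Boundary values are held fixed; each iteration replaces every interior
    point by the average of its four neighbours, converging to the discrete
    harmonic solution.
    """
    h, w = len(grid), len(grid[0])
    for _ in range(iters):
        nxt = [row[:] for row in grid]
        for i in range(1, h - 1):
            for j in range(1, w - 1):
                nxt[i][j] = 0.25 * (grid[i - 1][j] + grid[i + 1][j]
                                    + grid[i][j - 1] + grid[i][j + 1])
        grid = nxt
    return grid
```

With boundary data taken from the linear function f(x, y) = x, the interior must relax to the same linear profile, an easy check that the iteration converges to the right fixed point.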
Optical design teaching by computing graphic methods
Vazquez-Molini, D.; Muñoz-Luna, J.; Fernandez-Balbuena, A. A.; Garcia-Botella, A.; Belloni, P.; Alda, J.
2012-10-01
One of the key challenges in the teaching of Optics is that students need to know not only the math of optical design but also, and more importantly, to grasp and understand the optics in a three-dimensional space. Having a clear image of the problem to solve is the first step towards solving it. Therefore, students must not only know the equation of the law of refraction but also understand how the main parameters of this law interact. This should be a major goal of the teaching course. Optical graphic methods are a valuable tool here, since they combine the advantage of visual information with the accuracy of a computer calculation.
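A minimal numeric companion to such graphic constructions is the law of refraction itself, including the total-internal-reflection case where Snell's law has no real solution. This is a generic teaching sketch, not taken from the paper.

```python
import math

def refract_angle(n1, n2, theta_i_deg):
    """Snell's law n1*sin(theta_i) = n2*sin(theta_t): returns the refraction
    angle in degrees, or None when total internal reflection occurs."""
    s = n1 / n2 * math.sin(math.radians(theta_i_deg))
    if abs(s) > 1.0:
        return None                      # total internal reflection
    return math.degrees(math.asin(s))
```

Plotting the refracted ray for a range of incidence angles and index ratios is exactly the kind of parameter-interaction exercise the graphic approach aims at.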
Roy, Dipankar; Marianski, Mateusz; Maitra, Neepa T.; Dannenberg, J. J.
2012-10-01
We compare dispersion and induction interactions for noble gas dimers and for Ne, methane, and 2-butyne with HF and LiF using a variety of functionals (including some specifically parameterized to evaluate dispersion interactions) with ab initio methods including CCSD(T) and MP2. We see that inductive interactions tend to enhance dispersion and may be accompanied by charge transfer. We show that the functionals do not generally follow the expected trends in interaction energies, basis set superposition errors (BSSE), and interaction distances as a function of basis set size. The functionals parameterized to treat dispersion interactions often overestimate these interactions, sometimes by quite a lot, when compared to higher-level calculations. Which functionals work best depends upon the examples chosen. The B3LYP and X3LYP functionals, which do not describe pure dispersion interactions, appear to describe dispersion mixed with induction about as accurately as those parameterized to treat dispersion. We observed significant differences in high-level wavefunction calculations performed in a basis set larger than those used to generate the structures in many of the databases. We discuss the implications for highly parameterized functionals based on these databases, as well as the use of simple potential energies, rather than experimentally determinable thermodynamic state functions that involve consideration of vibrational states, for fitting the parameters.
International Nuclear Information System (INIS)
Cukovicova, M.; Cernusak, I.
2010-01-01
For our study we have chosen a series of diatomic molecules MeB (where Me = Li, Na, K, Rb, Cs, Fr). These molecules represent experimentally unknown species, which motivated us to predict theoretically the potential energy curves, equilibrium bond lengths, harmonic frequencies, anharmonicity constants, dipole moments and dissociation energies for the ground and low-lying excited states using high-level ab initio techniques. Based on previous state-averaged MRCI calculations in the ANO-S basis set for the NaB and KB molecules, we have focused on the four lowest-lying electronic states: the ground state 3Π and the excited states 1Σ+, 1Π and 3Σ+. All four states dissociate to atoms in their ground states, 2P1/2(B) and 2S1/2(Me). The 3Π, 1Σ+, 1Π and 3Σ+ electronic states were investigated with the CCSD(T) method using the relativistic ANO-RCC basis set. Our calculations include scalar relativistic effects via the second-order one-component (spin-free) Douglas-Kroll-Hess Hamiltonian. Relativistic effects become significant for heavy atoms, so the properties of the CsB and FrB molecules may deviate from the trend along the series from LiB to FrB. Spectroscopic properties of each state were obtained from an analysis of the potential energy curves using the VIBROT and DUNHAM programs.
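The last step above, extracting spectroscopic constants from computed potential-energy-curve points, can be sketched by fitting a Morse potential (this is not the VIBROT/DUNHAM machinery itself, and every number below, including the reduced mass, is a made-up stand-in):

```python
import numpy as np
from scipy.optimize import curve_fit

def morse(r, d_e, a, r_e):
    """Morse potential with the energy zero at the minimum."""
    return d_e * (1.0 - np.exp(-a * (r - r_e)))**2

# Hypothetical potential-energy-curve points (bohr, hartree) standing in
# for ab initio CCSD(T) energies:
r_pts = np.linspace(2.5, 6.0, 25)
v_pts = morse(r_pts, 0.035, 0.9, 3.4)

(d_e, a, r_e), _ = curve_fit(morse, r_pts, v_pts, p0=[0.03, 1.0, 3.0])

mu = 12000.0                              # reduced mass (electron masses), hypothetical
omega_e = a * np.sqrt(2.0 * d_e / mu)     # harmonic frequency, atomic units
omega_e_x_e = omega_e**2 / (4.0 * d_e)    # anharmonicity constant, atomic units
au_to_cm = 219474.63
print(f"De = {d_e:.4f} Eh, re = {r_e:.2f} a0, "
      f"we = {omega_e * au_to_cm:.1f} cm^-1, wexe = {omega_e_x_e * au_to_cm:.2f} cm^-1")
```

For a Morse oscillator the harmonic frequency and anharmonicity follow analytically from the fitted well depth and width; a Dunham analysis generalizes this to arbitrary curve shapes.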
Computed tomography shielding methods: a literature review.
Curtis, Jessica Ryann
2010-01-01
To investigate available shielding methods in an effort to further awareness and understanding of existing preventive measures related to patient exposure in computed tomography (CT) scanning. Searches were conducted to locate literature discussing the effectiveness of commercially available shields. Literature containing information regarding breast, gonad, eye and thyroid shielding was identified. Because of rapidly advancing technology, the selection of articles was limited to those published within the past 5 years. The selected studies were examined using the following topics as guidelines: the effectiveness of the shield (percentage of dose reduction), the shield's effect on image quality, arguments for or against its use (including practicality) and overall recommendation for its use in clinical practice. Only a limited number of studies have been performed on the use of shields for the eyes, thyroid and gonads, but the evidence shows an overall benefit to their use. Breast shielding has been the most studied shielding method, with consistent agreement throughout the literature on its effectiveness at reducing radiation dose. The effect of shielding on image quality was not remarkable in a majority of studies. Although it is noted that more studies need to be conducted regarding the impact on image quality, the currently published literature stresses the importance of shielding in reducing dose. Commercially available shields for the breast, thyroid, eyes and gonads should be implemented in clinical practice. Further research is needed to ascertain the prevalence of shielding in the clinical setting.
Computational methods in calculating superconducting current problems
Brown, David John, II
Various computational problems in treating superconducting currents are examined. First, field inversion in spatial Fourier transform space is reviewed to obtain both one-dimensional transport currents flowing down a long thin tape and a localized two-dimensional current. The problems associated with spatial high-frequency noise, created by finite resolution and experimental equipment, are presented and resolved with a smooth Gaussian cutoff in spatial frequency space. Convergence of the Green's functions for the one-dimensional transport current densities is discussed, and particular attention is devoted to the negative effects of performing discrete Fourier transforms alone on fields asymptotically dropping like 1/r. Results of imaging simulated current densities compare favorably to the original distributions after the resulting magnetic fields undergo the imaging procedure. The behavior of high-frequency spatial noise and of fields with a 1/r asymptote in the imaging procedure is analyzed in our simulations and compared to the treatment of these phenomena in the published literature. Next, we examine the calculation of Mathieu and spheroidal wave functions, solutions to the wave equation in elliptical cylindrical and in oblate and prolate spheroidal coordinates, respectively. These functions are also solutions to Schrödinger's equation with certain potential wells, and are useful in solving time-varying superconducting problems. The Mathieu functions are Fourier expanded, and the spheroidal functions expanded in associated Legendre polynomials, to convert the defining differential equations into recursion relations. The infinite set of linear recursion equations is converted to an infinite matrix multiplying a vector of expansion coefficients, thus becoming an eigenvalue problem. The eigenvalue problem is solved with root solvers, and the eigenvector problem is solved using a Jacobi-type iteration method, after preconditioning the
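The smooth Gaussian cutoff in spatial frequency space can be sketched as follows; the field, grid spacing, and cutoff wavenumber are hypothetical stand-ins for the measured magnetic-field maps, not the thesis data:

```python
import numpy as np

def gaussian_lowpass(field, dx, k_cut):
    """Suppress high-spatial-frequency noise in a 2-D field map with a
    smooth Gaussian cutoff in Fourier space (a hard cutoff would ring)."""
    ny, nx = field.shape
    kx = 2 * np.pi * np.fft.fftfreq(nx, d=dx)
    ky = 2 * np.pi * np.fft.fftfreq(ny, d=dx)
    k2 = kx[None, :]**2 + ky[:, None]**2
    window = np.exp(-k2 / (2.0 * k_cut**2))
    return np.real(np.fft.ifft2(np.fft.fft2(field) * window))

# A smooth "field" plus grid-scale noise: the filter keeps the former.
x = np.linspace(0, 1, 128, endpoint=False)
signal = np.tile(np.sin(2 * np.pi * 3 * x), (128, 1))
noisy = signal + 0.5 * np.random.default_rng(0).standard_normal((128, 128))
smoothed = gaussian_lowpass(noisy, dx=1 / 128, k_cut=2 * np.pi * 10)
print(np.abs(smoothed - signal).mean(), np.abs(noisy - signal).mean())
```

Because the window rolls off smoothly, structure well below the cutoff survives almost unattenuated while grid-scale noise is strongly damped.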
Computational Studies of Protein Hydration Methods
Morozenko, Aleksandr
It is widely appreciated that water plays a vital role in proteins' functions. Long-range proton transfer inside proteins is usually carried out by the Grotthuss mechanism and requires a chain of hydrogen bonds composed of internal water molecules and amino acid residues of the protein. In other cases, water molecules can facilitate an enzyme's catalytic reactions by becoming a temporary proton donor/acceptor. Yet a reliable way of predicting water in the protein interior is still not available to the biophysics community. This thesis presents computational studies performed to gain insight into the problem of fast and accurate prediction of potential water sites inside the internal cavities of a protein. Specifically, we focus on attaining correspondence between the results obtained from computational experiments and the experimental data available from X-ray structures. An overview of existing methods of predicting water molecules in the interior of a protein, along with a discussion of the trustworthiness of these predictions, is a second major subject of this thesis. A description of the differences of water molecules in various media, particularly gas, liquid and protein interior, and theoretical aspects of designing an adequate model of water for the protein environment are discussed in chapters 3 and 4. In chapter 5, we discuss recently developed methods of placing water molecules into the internal cavities of a protein. We propose a new methodology based on the principle of docking water molecules to a protein body, which achieves a higher degree of agreement with the experimental data reported in protein crystal structures than other available biophysical software. The new methodology is tested on a set of high-resolution crystal structures of oligopeptide-binding protein (OppA) containing a large number of resolved internal water molecules and applied to bovine heart cytochrome c oxidase in the fully
International Nuclear Information System (INIS)
Hirose, Kenji; Kobayashi, Nobuhiko
2006-01-01
Using the recursion-transfer-matrix (RTM) method combined with the nonequilibrium Green's function (NEGF) method, we study the electronic states and current-voltage (I-V) characteristics of atomic-scale nanocontact systems. We find that non-linear behaviors appear in the I-V characteristics even without molecules between the electrodes. Such non-linear behaviors emerge when the nanocontacts are not well constructed and the transport properties change from the tunneling to the ballistic regime.
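As a toy illustration of how a non-linear I-V curve arises in coherent transport, consider a single resonant level with Lorentzian transmission in the Landauer formula. This is a minimal sketch, not the paper's RTM+NEGF machinery, and all parameters are hypothetical:

```python
import numpy as np

def current(v_bias, eps0=0.3, gamma=0.05, kT=0.025):
    """Landauer current (units of 2e/h) through one resonant level with a
    Lorentzian transmission centered at eps0; energies in eV-like units.
    The current stays small until the bias window reaches the level,
    then rises sharply -- a non-linear I-V without any molecule."""
    e = np.linspace(-2.0, 2.0, 4001)
    de = e[1] - e[0]
    transmission = gamma**2 / ((e - eps0)**2 + gamma**2)
    fermi = lambda mu: 1.0 / (1.0 + np.exp((e - mu) / kT))
    return np.sum(transmission * (fermi(+v_bias / 2) - fermi(-v_bias / 2))) * de

for v in (0.2, 0.4, 0.8):
    print(f"V = {v:.1f}  I = {current(v):.4f}")
```

A full NEGF calculation replaces the fixed Lorentzian with a transmission computed self-consistently from the contact geometry, which is where the tunneling-to-ballistic crossover enters.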
Ab initio valence calculations in chemistry
Cook, D B
1974-01-01
Ab Initio Valence Calculations in Chemistry describes the theory and practice of ab initio valence calculations in chemistry and applies the ideas to a specific example, linear BeH2. Topics covered include the Schrödinger equation and the orbital approximation to atomic orbitals; molecular orbital and valence bond methods; practical molecular wave functions; and molecular integrals. Open shell systems, molecular symmetry, and localized descriptions of electronic structure are also discussed. This book is comprised of 13 chapters and begins by introducing the reader to the use of the Schrödinge
Simple calculation of ab initio melting curves: Application to aluminum.
Robert, Grégory; Legrand, Philippe; Arnault, Philippe; Desbiens, Nicolas; Clérouin, Jean
2015-03-01
We present a simple, fast, and promising method to compute the melting curves of materials with ab initio molecular dynamics. It is based on the two-phase thermodynamic model of Lin et al. [J. Chem. Phys. 119, 11792 (2003)] and its improved version given by Desjarlais [Phys. Rev. E 88, 062145 (2013)]. In this model, the velocity autocorrelation function is utilized to calculate the contribution of nuclear motion to the entropy of the solid and liquid phases. It is then possible to find the thermodynamic conditions of equal Gibbs free energy between these phases, defining the melting curve. The first benchmark on the face-centered cubic melting curve of aluminum from 0 to 300 GPa demonstrates how to obtain an accuracy of 5%-10%, comparable to the most sophisticated methods, for a much lower computational cost.
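The velocity autocorrelation function at the heart of the two-phase model can be computed from a stored trajectory as below; the harmonic toy trajectory is only a sanity check, not an ab initio MD run:

```python
import numpy as np

def vacf(velocities):
    """Normalized velocity autocorrelation function of an MD trajectory.

    velocities: array (n_steps, n_atoms, 3). Uses the Wiener-Khinchin
    route (inverse FFT of the power spectrum), O(N log N) per degree
    of freedom."""
    n = velocities.shape[0]
    v = velocities.reshape(n, -1)               # flatten atom/xyz axes
    f = np.fft.rfft(v, n=2 * n, axis=0)         # zero-pad: linear, not circular
    acf = np.fft.irfft(f * np.conj(f), axis=0)[:n]
    acf /= (n - np.arange(n))[:, None]          # unbiased lag normalization
    c = acf.sum(axis=1)
    return c / c[0]

# Sanity check on a toy harmonic trajectory: the VACF is a cosine.
t = np.arange(2000) * 0.01
v_toy = np.tile(np.cos(2 * np.pi * 1.7 * t)[:, None, None], (1, 8, 3))
c = vacf(v_toy)
print(c[0], round(c[29], 3))
```

In the two-phase model, the Fourier transform of this VACF gives the vibrational density of states, which is then partitioned into solid-like and gas-like contributions to obtain the entropy.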
Computing and physical methods to calculate Pu
International Nuclear Information System (INIS)
Mohamed, Ashraf Elsayed Mohamed
2013-01-01
Main limitations due to the enhancement of the plutonium content are related to the coolant void effect: as the spectrum becomes faster, the neutron flux in the thermal region tends towards zero and is concentrated in the region from 10 keV to 1 MeV. Thus, all captures by 240Pu and 242Pu in the thermal and epithermal resonances disappear, and the 240Pu and 242Pu contributions to the void effect become positive. The higher the Pu content and the poorer the Pu quality, the larger the void effect. As for core control in nominal or transient conditions, Pu enrichment leads to a decrease in βeff and in the efficiency of soluble boron and control rods. Also, the Doppler effect tends to decrease when Pu replaces U, so that in case of transients the core could diverge again if the control is not effective enough. As for the voiding effect, the plutonium degradation and the 240Pu and 242Pu accumulation after multiple recycling lead to spectrum hardening and to a decrease in control. One solution would be to use enriched boron in the soluble boron and shutdown rods. In this paper, I discuss the advanced computing and physical methods used to calculate Pu inside nuclear reactors and gloveboxes, the different solutions for overcoming the difficulties that affect safety parameters and reactor performance, and the consequences of plutonium management on the whole fuel cycle, such as raw-material savings and the fraction of nuclear electric power involved in Pu management. This is done through two types of scenario: one involving a low fraction of the nuclear park dedicated to plutonium management, the other involving a dilution of the plutonium in the whole nuclear park. (author)
Computational methods in sequence and structure prediction
Lang, Caiyi
This dissertation is organized into two parts. In the first part, we will discuss three computational methods for cis-regulatory element recognition in three different gene regulatory networks, as follows: (a) Using a comprehensive "Phylogenetic Footprinting Comparison" method, we will investigate the promoter sequence structures of three enzymes (PAL, CHS and DFR) that catalyze sequential steps in the pathway from phenylalanine to anthocyanins in plants. Our result shows there exists a putative cis-regulatory element "AC(C/G)TAC(C)" upstream of these enzyme genes. We propose this cis-regulatory element to be responsible for the genetic regulation of these three enzymes, and this element might also be the binding site for the MYB-class transcription factor PAP1. (b) We will investigate the role of the Arabidopsis gene glutamate receptor 1.1 (AtGLR1.1) in C and N metabolism by utilizing the microarray data we obtained from AtGLR1.1-deficient lines (antiAtGLR1.1). We focus our investigation on the putatively co-regulated transcript profile of 876 genes we have collected in antiAtGLR1.1 lines. By (a) scanning the occurrence of several groups of known abscisic acid (ABA) related cis-regulatory elements in the upstream regions of the 876 Arabidopsis genes, and (b) exhaustively scanning all possible 6-10 bp motif occurrences in the upstream regions of the same set of genes, we are able to make a quantitative estimation of the enrichment level of each of the cis-regulatory element candidates. We finally conclude that one specific cis-regulatory element group, the "ABRE" elements, is statistically highly enriched within the 876-gene group as compared to its occurrence within the genome. (c) We will introduce a new general-purpose algorithm, called "fuzzy REDUCE1", which we have developed recently for automated cis-regulatory element identification. In the second part, we will discuss our newly devised protein design framework. With this framework we have developed
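The motif-scanning-and-enrichment step described in (b) can be sketched as follows; the sequences, the motif, and the hypergeometric tail test are illustrative stand-ins, not the study's actual data or statistic:

```python
import re
from math import comb

def motif_enrichment(fg, bg, motif):
    """Count sequences containing a motif (given as a regex) in a
    foreground versus a background set, and return a hypergeometric
    tail p-value for enrichment of the motif in the foreground."""
    hit = lambda s: re.search(motif, s) is not None
    k = sum(map(hit, fg))                      # foreground sequences with a hit
    K = k + sum(map(hit, bg))                  # all sequences with a hit
    n, N = len(fg), len(fg) + len(bg)
    p = sum(comb(K, i) * comb(N - K, n - i)
            for i in range(k, min(K, n) + 1)) / comb(N, n)   # P(X >= k)
    return k, p

fg = ["TTACGTACCA", "GGACGTACCT", "ACGTACCTTA"]   # hypothetical upstream regions
bg = ["TTTTTTTTTT", "GGGGGGGGGG", "ACACGTACCA", "CCCCCCCCCC"]
k, p = motif_enrichment(fg, bg, "ACGTACC")        # stand-in for AC(C/G)TAC(C)
print(k, round(p, 3))
```

Genome-scale scans work the same way, with the background drawn from all upstream regions and the degenerate positions of the motif expanded into a regex character class.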
Computational methods for corpus annotation and analysis
Lu, Xiaofei
2014-01-01
This book reviews computational tools for lexical, syntactic, semantic, pragmatic and discourse analysis, with instructions on how to obtain, install and use each tool. Covers studies using Natural Language Processing, and offers ideas for better integration.
Cloud computing methods and practical approaches
Mahmood, Zaigham
2013-01-01
This book presents both state-of-the-art research developments and practical guidance on approaches, technologies and frameworks for the emerging cloud paradigm. Topics and features: presents the state of the art in cloud technologies, infrastructures, and service delivery and deployment models; discusses relevant theoretical frameworks, practical approaches and suggested methodologies; offers guidance and best practices for the development of cloud-based services and infrastructures, and examines management aspects of cloud computing; reviews consumer perspectives on mobile cloud computing an
Advanced Computational Methods in Bio-Mechanics.
Al Qahtani, Waleed M S; El-Anwar, Mohamed I
2018-04-15
A novel partnership between surgeons and machines, made possible by advances in computing and engineering technology, could overcome many of the limitations of traditional surgery. By extending surgeons' ability to plan and carry out surgical interventions more accurately and with less trauma, computer-integrated surgery (CIS) systems could help to improve clinical outcomes and the efficiency of healthcare delivery. CIS systems could have an impact on surgery similar to that long since realised in computer-integrated manufacturing. Mathematical modelling and computer simulation have proved tremendously successful in engineering. Computational mechanics has enabled technological developments in virtually every area of our lives. One of the greatest challenges for mechanists is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. Biomechanics has significant potential for applications in the orthopaedic industry and in the performing arts, since the skills needed for these activities are visibly related to the human musculoskeletal and nervous systems. Although biomechanics is widely used nowadays in the orthopaedic industry to design orthopaedic implants for human joints, dental parts, external fixations and other medical purposes, numerous research efforts funded by billions of dollars are still under way to build a new future for sports and human healthcare in what is called the biomechanics era.
Ab Initio molecular dynamics with excited electrons
Alavi, A.; Kohanoff, J.; Parrinello, M.; Frenkel, D.
1994-01-01
A method to do ab initio molecular dynamics suitable for metallic and electronically hot systems is described. It is based on a density functional which is costationary with the finite-temperature functional of Mermin, with states being included with possibly fractional occupation numbers.
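In the Mermin finite-temperature picture, the fractional occupation numbers follow a Fermi-Dirac distribution at the electronic temperature. A minimal sketch of how they are filled, with hypothetical eigenvalues and electron count:

```python
import numpy as np

def fermi_occupations(eigenvalues, n_electrons, kT):
    """Fractional occupations f_i = 1/(exp((e_i - mu)/kT) + 1), with the
    chemical potential mu located by bisection so that sum(f) = N."""
    def occ(mu):
        return 1.0 / (np.exp((eigenvalues - mu) / kT) + 1.0)
    lo = eigenvalues.min() - 20 * kT
    hi = eigenvalues.max() + 20 * kT
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if occ(mu).sum() < n_electrons:
            lo = mu          # too few electrons: raise mu
        else:
            hi = mu          # too many: lower mu
    return occ(mu)

eps = np.array([-0.5, -0.1, 0.0, 0.02, 0.3])   # hypothetical eigenvalues (hartree)
f = fermi_occupations(eps, n_electrons=3.0, kT=0.05)
print(f.round(3), f.sum().round(6))
```

States far below the chemical potential come out fully occupied, states far above empty, and those within a few kT of it fractionally occupied, which is precisely what makes the scheme stable for metallic and hot systems.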
Ab initio electronic stopping power in materials
International Nuclear Information System (INIS)
Shukri, Abdullah-Atef
2015-01-01
The average energy loss of an ion per unit path length when it is moving through matter is called the stopping power. Knowledge of the stopping power is essential for a variety of contemporary applications which depend on the transport of ions in matter, especially ion beam analysis techniques and ion implantation. Most notably, the use of proton or heavier ion beams in radiotherapy requires knowledge of the stopping power. Whereas experimental data are readily available for elemental solids, the data are much more scarce for compounds. The linear response dielectric formalism has been widely used in the past to study the electronic stopping power. In particular, the famous pioneering calculations due to Lindhard evaluate the electronic stopping power of a free electron gas. In this thesis, we develop a fully ab initio scheme based on linear-response time-dependent density functional theory to predict the impact-parameter-averaged quantity known as the random electronic stopping power (RESP) of materials without any empirical fitting. The purpose is to be capable of predicting the outcome of experiments without any knowledge of the target material besides its crystallographic structure. Our developments have been done within the open source ab initio code named ABINIT, where two approximations are now available: the Random-Phase Approximation (RPA) and the Adiabatic Local Density Approximation (ALDA). Furthermore, a new method named the 'extrapolation scheme' has been introduced to overcome the stringent convergence issues we have encountered. These convergence issues have prevented the previous studies in the literature from offering a direct comparison to experiment. First of all, we demonstrate the importance of describing the realistic ab initio electronic structure by comparing with the historical Lindhard stopping power evaluation. Whereas the Lindhard stopping power provides a first order description that captures the general features of the
Energy Technology Data Exchange (ETDEWEB)
Feller, D.F.
1993-07-01
This collection of benchmark timings represents a snapshot of the hardware and software capabilities available for ab initio quantum chemical calculations at Pacific Northwest Laboratory's Molecular Science Research Center in late 1992 and early 1993. The "snapshot" nature of these results should not be underestimated, because of the speed with which both hardware and software are changing. Even during the brief period of this study, we were presented with newer, faster versions of several of the codes. However, the deadline for completing this edition of the benchmarks precluded updating all the relevant entries in the tables. As will be discussed below, a similar situation occurred with the hardware. The timing data included in this report are subject to all the normal failures, omissions, and errors that accompany any human activity. In an attempt to mimic the manner in which calculations are typically performed, we have run the calculations with the maximum number of defaults provided by each program and a near minimum amount of memory. This approach may not produce the fastest performance that a particular code can deliver. It is not known to what extent improved timings could be obtained for each code by varying the run parameters. If sufficient interest exists, it might be possible to compile a second list of timing data corresponding to the fastest observed performance from each application, using an unrestricted set of input parameters. Improvements in I/O might have been possible by fine tuning the Unix kernel, but we resisted the temptation to make changes to the operating system. Due to the large number of possible variations in levels of operating system, compilers, speed of disks and memory, versions of applications, etc., readers of this report may not be able to exactly reproduce the times indicated. Copies of the output files from individual runs are available if questions arise about a particular set of timings.
International Nuclear Information System (INIS)
Rode, Michał F.; Sobolewski, Andrzej L.
2014-01-01
Effect of chemical substitutions to the molecular structure of 3-hydroxy-picolinic acid on the photo-switching properties of the system operating on the excited-state intramolecular double proton transfer (d-ESIPT) process [M. F. Rode and A. L. Sobolewski, Chem. Phys. 409, 41 (2012)] was studied with the aid of electronic structure theory methods. It was shown that simultaneous application of electron-donating and electron-withdrawing substitutions at certain positions of the molecular frame increases the height of the S0-state tautomerization barrier (ensuring thermal stability of the isomers) and facilitates barrierless access to the S1/S0 conical intersection from the Franck-Condon region of the S1 potential-energy surface. The results of the study point to the conclusion that the most challenging issue for the practical design of a fast molecular photoswitch based on the d-ESIPT phenomenon is to ensure selectivity of optical excitation of a given tautomeric form of the system.
New or improved computational methods and advanced reactor design
International Nuclear Information System (INIS)
Nakagawa, Masayuki; Takeda, Toshikazu; Ushio, Tadashi
1997-01-01
Nuclear computational methods have been studied continuously up to the present day as a fundamental technology supporting nuclear development. At present, research on computational methods based on new theories, and on calculation methods previously considered impractical, also continues actively, seeking new developments enabled by the remarkable improvement in computer performance. In Japan, many light water reactors are now in operation, new computational methods are being introduced for nuclear design, and considerable effort is devoted to further improving economics and safety. In this paper, some new research results on nuclear computational methods and their application to reactor nuclear design are described, to introduce recent trends in reactor nuclear design: 1) advancement of computational methods, 2) reactor core design and management of light water reactors, and 3) nuclear design of fast reactors. (G.K.)
Ab initio derivation of model energy density functionals
International Nuclear Information System (INIS)
Dobaczewski, Jacek
2016-01-01
I propose a simple and manageable method that allows for deriving coupling constants of model energy density functionals (EDFs) directly from ab initio calculations performed for finite fermion systems. A proof-of-principle application allows for linking properties of finite nuclei, determined by using the nuclear nonlocal Gogny functional, to the coupling constants of the quasilocal Skyrme functional. The method does not rely on properties of infinite fermion systems but on the ab initio calculations in finite systems. It also allows for quantifying merits of different model EDFs in describing the ab initio results. (letter)
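If the model EDF energy is linear in its coupling constants, matching it to ab initio results for a set of finite systems reduces to a least-squares problem. The following is a schematic sketch with synthetic numbers, not the actual Gogny-to-Skyrme mapping of the letter:

```python
import numpy as np

# Schematically: E_model(nucleus j) = sum_k A[j, k] * c[k], where A[j, k]
# is a density integral evaluated for nucleus j and c[k] are the coupling
# constants.  Matching ab initio energies E_ai then determines c.
rng = np.random.default_rng(1)
n_nuclei, n_couplings = 12, 4
A = rng.normal(size=(n_nuclei, n_couplings))            # hypothetical integrals
c_true = np.array([-1.2, 0.4, 3.0, -0.7])               # hypothetical constants
E_ai = A @ c_true + 0.01 * rng.normal(size=n_nuclei)    # "ab initio" energies

c_fit, residual, *_ = np.linalg.lstsq(A, E_ai, rcond=None)
print(c_fit.round(2))
```

The leftover residual of such a fit is one way to quantify the merit of a given model EDF in reproducing the ab initio results, in the spirit of the comparison proposed above.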
International Nuclear Information System (INIS)
Solomonik, V.G.; Pogrebnaya, T.P.
2001-01-01
The ground and first excited electronic states of the VF4, NbF4 and TaF4 molecules were studied by the multiconfigurational self-consistent-field method in the full active orbital space approximation. The symmetry of these states is 2E and 2T2 at the tetrahedral configuration of the nuclei. The electronic excitation energies 2E → 2T2 amount to 11610 (VF4), 13450 (NbF4) and 12560 cm-1 (TaF4). In line with the Jahn-Teller theorem, the calculations showed the instability of the tetrahedral configuration of the molecules in the orbitally degenerate electronic states 2E and 2T2. The properties of the molecular potential energy surfaces were determined with respect to deformation of the tetrahedral configuration of the nuclei along the vibrational coordinates of e symmetry, which are active in the Jahn-Teller effect. The equilibrium geometric configuration of the molecules with the lowest energy has D2d symmetry in both the ground (2A1) and the first excited electronic state. The adiabatic electronic excitation energy 2A1 → 2B2 is equal to 8440 (VF4), 9050 (NbF4) and 11920 cm-1 (TaF4). Deviations of the equilibrium geometry of the molecules from the regular tetrahedron and the Jahn-Teller stabilization energy EJT grow substantially along the series VF4 → NbF4 → TaF4: EJT = E(Td, 2E) - E(D2d, 2A1) = 412, 1856, 5970 cm-1; EJT = E(Td, 2T2) - E(D2d, 2B2) = 3584, 6259, 6611 cm-1. The quadratic force constants, the frequencies of normal vibrations and the intensities in the IR spectra of the VF4, NbF4 and TaF4 molecules in the ground state were also calculated.
A computer method for spectral classification
International Nuclear Information System (INIS)
Appenzeller, I.; Zekl, H.
1978-01-01
The authors describe the start of an attempt to improve the accuracy of spectroscopic parallaxes by evaluating spectroscopic temperature and luminosity criteria, such as those of the MK classification, on spectrograms analyzed automatically by means of a suitable computer program. (Auth.)
Computational structural biology: methods and applications
National Research Council Canada - National Science Library
Schwede, Torsten; Peitsch, Manuel Claude
2008-01-01
... sequencing reinforced the observation that structural information is needed to understand the detailed function and mechanism of biological molecules such as enzyme reactions and molecular recognition events. Furthermore, structures are obviously key to the design of molecules with new or improved functions. In this context, computational structural biology...
Soft computing methods for geoidal height transformation
Akyilmaz, O.; Özlüdemir, M. T.; Ayan, T.; Çelik, R. N.
2009-07-01
Soft computing techniques, such as fuzzy logic and artificial neural network (ANN) approaches, have enabled researchers to create precise models for use in many scientific and engineering applications. Applications that can be employed in geodetic studies include the estimation of earth rotation parameters and the determination of mean sea level changes. Another important field of geodesy in which these computing techniques can be applied is geoidal height transformation. We report here our use of a conventional polynomial model, the Adaptive Network-based Fuzzy (or in some publications, Adaptive Neuro-Fuzzy) Inference System (ANFIS), an ANN and a modified ANN approach to approximate geoid heights. These approximation models have been tested on a number of test points. The results obtained through the transformation processes from ellipsoidal heights into local levelling heights have also been compared.
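A conventional polynomial model of the kind mentioned above can be sketched as follows; the control points, coefficients, and coordinates are synthetic, not the study's test data:

```python
import numpy as np

def design(lat, lon, lat0=41.0, lon0=29.0):
    """Second-order polynomial surface in centered (lat, lon) -- the
    conventional baseline model for local geoid-height transformation."""
    u, v = lat - lat0, lon - lon0
    return np.column_stack([np.ones_like(u), u, v, u**2, u * v, v**2])

# Synthetic control points where both ellipsoidal (h) and levelled (H)
# heights are known, so the geoid height N = h - H is observed:
rng = np.random.default_rng(0)
lat = rng.uniform(40.8, 41.3, 60)
lon = rng.uniform(28.5, 29.5, 60)
coef_true = np.array([36.0, 0.8, -0.3, 0.05, 0.02, -0.04])
N_obs = design(lat, lon) @ coef_true + 0.01 * rng.normal(size=60)  # ~1 cm noise

coef, *_ = np.linalg.lstsq(design(lat, lon), N_obs, rcond=None)

# Transform a new point's ellipsoidal height into a levelled height:
h_new = 100.000
N_new = (design(np.array([41.0]), np.array([29.0])) @ coef)[0]
print(f"N = {N_new:.3f} m, H = {h_new - N_new:.3f} m")
```

The ANFIS and ANN models of the study play the same role as the fitted surface here, replacing the fixed polynomial basis with learned nonlinear terms; centering the coordinates keeps the polynomial fit well conditioned.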
Soft Computing Methods in Design of Superalloys
Cios, K. J.; Berke, L.; Vary, A.; Sharma, S.
1996-01-01
Soft computing techniques of neural networks and genetic algorithms are used in the design of superalloys. The cyclic oxidation attack parameter K(sub a), generated from tests at NASA Lewis Research Center, is modelled as a function of the superalloy chemistry and test temperature using a neural network. This model is then used in conjunction with a genetic algorithm to obtain an optimized superalloy composition resulting in low K(sub a) values.
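The model-then-optimize loop described above can be sketched as follows. The quadratic surrogate stands in for the trained neural network (the real model is fit to the NASA test data), and the composition variables, bounds, and optimum are all hypothetical:

```python
import random

def ka_surrogate(x):
    """Stand-in for the trained neural-network model of the cyclic
    oxidation attack parameter Ka as a function of composition; the
    quadratic form and the optimum at (0.15, 0.05, 0.03) are made up."""
    cr, al, ti = x
    return (cr - 0.15)**2 + 2 * (al - 0.05)**2 + (ti - 0.03)**2 + 0.01

def genetic_minimize(f, n_gen=60, pop_size=40, seed=7):
    """Tiny real-coded GA: tournament selection, blend crossover,
    Gaussian mutation, one elite survivor per generation."""
    rng = random.Random(seed)
    pop = [[rng.uniform(0.0, 0.3) for _ in range(3)] for _ in range(pop_size)]
    for _ in range(n_gen):
        def pick():
            a, b = rng.sample(pop, 2)
            return a if f(a) < f(b) else b
        children = []
        for _ in range(pop_size - 1):
            p, q, w = pick(), pick(), rng.random()
            child = [w * pi + (1 - w) * qi for pi, qi in zip(p, q)]
            children.append([min(0.3, max(0.0, g + rng.gauss(0, 0.01)))
                             for g in child])
        pop = children + [min(pop, key=f)]      # elitism
    return min(pop, key=f)

best = genetic_minimize(ka_surrogate)
print([round(g, 3) for g in best], round(ka_surrogate(best), 4))
```

Because the GA only ever queries the surrogate, the same loop works unchanged when the surrogate is replaced by a trained network evaluating Ka from alloy chemistry and temperature.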
Statistical methods and computing for big data
Wang, Chun; Chen, Ming-Hui; Schifano, Elizabeth; Wu, Jing
2016-01-01
Big data are data on a massive scale in terms of volume, intensity, and complexity that exceed the capacity of standard analytic tools. They present opportunities as well as challenges to statisticians. The role of computational statisticians in scientific discovery from big data analyses has been under-recognized even by peer statisticians. This article summarizes recent methodological and software developments in statistics that address the big data challenges. Methodologies are grouped into three classes: subsampling-based, divide and conquer, and online updating for stream data. As a new contribution, the online updating approach is extended to variable selection with commonly used criteria, and their performances are assessed in a simulation study with stream data. Software packages are summarized with a focus on the open source R and its packages, covering recent tools that help break the barriers of computer memory and computing power. Some of the tools are illustrated in a case study with a logistic regression for the chance of airline delay. PMID:27695593
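A minimal illustration of the online-updating idea for the logistic-regression case study; the data are simulated, not the article's airline-delay data set, and the estimator is a plain streaming gradient scheme rather than the article's specific online-updating estimator:

```python
import numpy as np

def sgd_logistic_stream(chunks, dim, lr=0.5):
    """Online-updating logistic regression: the coefficient vector is
    refreshed chunk by chunk, so the full data set never has to sit in
    memory at once."""
    w = np.zeros(dim)
    for X, y in chunks:                        # single pass over the stream
        p = 1.0 / (1.0 + np.exp(-X @ w))
        w += lr * X.T @ (y - p) / len(y)       # averaged gradient step
    return w

# Simulated "delay" stream: the log-odds of the outcome follow w_true.
rng = np.random.default_rng(42)
w_true = np.array([-1.0, 2.0])
def make_chunk(n=500):
    X = np.column_stack([np.ones(n), rng.normal(size=n)])
    y = (rng.random(n) < 1.0 / (1.0 + np.exp(-X @ w_true))).astype(float)
    return X, y

w = sgd_logistic_stream((make_chunk() for _ in range(300)), dim=2)
print(w.round(2))
```

Only one chunk is materialized at a time, which is what lets such estimators break the memory barrier the article discusses.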
Efficacy of the SU(3) scheme for ab initio large-scale calculations beyond the lightest nuclei
Energy Technology Data Exchange (ETDEWEB)
Dytrych, T. [Academy of Sciences of the Czech Republic (ASCR), Prague (Czech Republic); Louisiana State Univ., Baton Rouge, LA (United States); Maris, Pieter [Iowa State Univ., Ames, IA (United States); Launey, K. D. [Louisiana State Univ., Baton Rouge, LA (United States); Draayer, J. P. [Louisiana State Univ., Baton Rouge, LA (United States); Vary, James [Iowa State Univ., Ames, IA (United States); Langr, D. [Czech Technical Univ., Prague (Czech Republic); Aerospace Research and Test Establishment, Prague (Czech Republic); Saule, E. [Univ. of North Carolina, Charlotte, NC (United States); Caprio, M. A. [Univ. of Notre Dame, IN (United States); Catalyurek, U. [The Ohio State Univ., Columbus, OH (United States). Dept. of Electrical and Computer Engineering; Sosonkina, M. [Old Dominion Univ., Norfolk, VA (United States)
2016-06-09
We report on the computational characteristics of ab initio nuclear structure calculations in a symmetry-adapted no-core shell model (SA-NCSM) framework. We examine the computational complexity of the current implementation of the SA-NCSM approach, dubbed LSU3shell, by analyzing ab initio results for ^{6}Li and ^{12}C in large harmonic oscillator model spaces and SU(3)-selected subspaces. We demonstrate LSU3shell's strong-scaling properties achieved with highly-parallel methods for computing the many-body matrix elements. Results compare favorably with complete model space calculations and significant memory savings are achieved in physically important applications. In particular, a well-chosen symmetry-adapted basis affords memory savings in calculations of states with a fixed total angular momentum in large model spaces while exactly preserving translational invariance.
Tensor network method for reversible classical computation
Yang, Zhi-Cheng; Kourtis, Stefanos; Chamon, Claudio; Mucciolo, Eduardo R.; Ruckenstein, Andrei E.
2018-03-01
We develop a tensor network technique that can solve universal reversible classical computational problems, formulated as vertex models on a square lattice [Nat. Commun. 8, 15303 (2017), 10.1038/ncomms15303]. By encoding the truth table of each vertex constraint in a tensor, the total number of solutions compatible with partial inputs and outputs at the boundary can be represented as the full contraction of a tensor network. We introduce an iterative compression-decimation (ICD) scheme that performs this contraction efficiently. The ICD algorithm first propagates local constraints to longer ranges via repeated contraction-decomposition sweeps over all lattice bonds, thus achieving compression on a given length scale. It then decimates the lattice via coarse-graining tensor contractions. Repeated iterations of these two steps gradually collapse the tensor network and ultimately yield the exact tensor trace for large systems, without the need for manual control of tensor dimensions. Our protocol allows us to obtain the exact number of solutions for computations where a naive enumeration would take astronomically long times.
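The counting-by-contraction idea can be illustrated on a single vertex (a toy CNOT gate of my own choosing, not the paper's square-lattice models): the gate's truth table becomes a 0/1 tensor, and contracting boundary vectors counts the compatible assignments.

```python
import numpy as np

# Encode a reversible CNOT gate's truth table as a 0/1 tensor:
# T[a, b, c, d] = 1 iff outputs (c, d) = (a, a XOR b).
T = np.zeros((2, 2, 2, 2))
for a in range(2):
    for b in range(2):
        T[a, b, a, a ^ b] = 1.0

free = np.ones(2)            # boundary leg left unconstrained
fix1 = np.array([0.0, 1.0])  # boundary leg pinned to value 1

# Outputs pinned to (c, d) = (1, 1): only input (a, b) = (1, 0) works.
n_fixed = np.einsum('abcd,a,b,c,d->', T, free, free, fix1, fix1)
# All boundary legs free: every one of the 4 inputs is a solution.
n_free = np.einsum('abcd,a,b,c,d->', T, free, free, free, free)
print(n_fixed, n_free)  # 1.0 4.0
```

On a lattice of many such vertices, contracting all shared legs yields the total solution count, which is the quantity the ICD scheme computes without enumerating inputs.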
Advanced Computational Methods for Monte Carlo Calculations
Energy Technology Data Exchange (ETDEWEB)
Brown, Forrest B. [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-01-12
This course is intended for graduate students who already have a basic understanding of Monte Carlo methods. It focuses on advanced topics that may be needed for thesis research, for developing new state-of-the-art methods, or for working with modern production Monte Carlo codes.
Computational methods for two-phase flow and particle transport
Lee, Wen Ho
2013-01-01
This book describes mathematical formulations and computational methods for solving two-phase flow problems with a computer code that calculates thermal hydraulic problems related to light water and fast breeder reactors. The physical model also handles the particle and gas flow problems that arise from coal gasification and fluidized beds. The second part of this book deals with the computational methods for particle transport.
Reference depth for geostrophic computation - A new method
Digital Repository Service at National Institute of Oceanography (India)
Varkey, M.J.; Sastry, J.S.
Various methods are available for the determination of reference depth for geostrophic computation. A new method based on the vertical profiles of mean and variance of the differences of mean specific volume anomaly (delta x 10) for different layers...
Lattice Boltzmann method fundamentals and engineering applications with computer codes
Mohamad, A A
2014-01-01
Introducing the Lattice Boltzmann Method in a readable manner, this book provides detailed examples with complete computer codes. It avoids the most complicated mathematics and physics without sacrificing the basic fundamentals of the method.
An Augmented Fast Marching Method for Computing Skeletons and Centerlines
Telea, Alexandru; Wijk, Jarke J. van
2002-01-01
We present a simple and robust method for computing skeletons for arbitrary planar objects and centerlines for 3D objects. We augment the Fast Marching Method (FMM) widely used in level set applications by computing the parameterized boundary location every pixel came from during the boundary
Classical versus Computer Algebra Methods in Elementary Geometry
Pech, Pavel
2005-01-01
Computer algebra methods based on results of commutative algebra like Groebner bases of ideals and elimination of variables make it possible to solve complex, elementary and non elementary problems of geometry, which are difficult to solve using a classical approach. Computer algebra methods permit the proof of geometric theorems, automatic…
Methods for teaching geometric modelling and computer graphics
Energy Technology Data Exchange (ETDEWEB)
Rotkov, S.I.; Faitel'son, Yu. Ts.
1992-05-01
This paper considers methods for teaching the methods and algorithms of geometric modelling and computer graphics to programmers, designers and users of CAD and computer-aided research systems. There is a bibliography that can be used to prepare lectures and practical classes. 37 refs., 1 tab.
Feasible and reliable ab initio atomistic modeling for nuclear waste management
Energy Technology Data Exchange (ETDEWEB)
Beridze, George
2016-07-01
The studies in this PhD dissertation focus on finding a computationally feasible ab initio methodology that would make reliable first-principles atomistic modeling of nuclear materials possible. Here we tested the performance of different DFT functionals and DFT-based methods that explicitly account for electronic correlations, such as the DFT+U approach, for the prediction of structural and thermochemical properties of lanthanide- and actinide-bearing materials. In previous studies, the value of the Hubbard U parameter required by the DFT+U method was often guessed or empirically derived. We applied and extensively tested recently developed ab initio methods, such as the constrained local density approximation (cLDA) and the constrained random phase approximation (cRPA), to compute the Hubbard U parameter values from first principles, thus making the DFT+U method a truly ab initio, parameter-free approach. Our successful benchmarking studies of the parameter-free DFT+U method for prediction of the structures and reaction enthalpies of actinide- and lanthanide-bearing molecular compounds and solids indicate that the linear response method (cLDA) provides a very good estimate of the Hubbard U parameter, consistent with the cRPA prediction. In particular, we found that the Hubbard U parameter value, which describes the strength of the on-site Coulomb repulsion between f-electrons, depends strongly on the oxidation state of the f-element, its local bonding environment and the crystalline structure of the material, which had never been considered in such detail before. We have shown that the applied computational approach substantially, if not dramatically, reduces the error of the predicted reaction enthalpies, making the accuracy of the prediction comparable with the uncertainty of computationally unfeasible higher-order methods of quantum chemistry and of experiments. The derived methodology resulted in various, already published
Study of the geometry of urea by ab initio methods and computational simulation of liquids
Directory of Open Access Journals (Sweden)
Cirino José Jair Vianna
2002-01-01
Full Text Available A study was carried out on the urea geometries using ab initio calculations and Monte Carlo computational simulation of liquids. The ab initio results showed that urea has a non-planar conformation in the gas phase, in which the hydrogen atoms are out of the plane formed by the heavy atoms. Free energies associated with the rotation of the amino groups of urea in water were obtained using the Monte Carlo method in which thermodynamic perturbation theory is implemented. The magnitude of the free energy obtained from this simulation did not permit us to conclude that urea is non-planar in water.
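The thermodynamic-perturbation step used in such Monte Carlo simulations follows Zwanzig's formula, ΔA = -kT ln⟨exp(-ΔU/kT)⟩₀. A toy 1-D check of that formula where the exact answer is zero (my own illustration, not the authors' simulation):

```python
import numpy as np

# Zwanzig free-energy perturbation on a toy 1-D system: shifting a
# harmonic well U0 = x^2/2 to U1 = (x-d)^2/2 leaves the free energy
# unchanged (dA = 0), which the sample average should reproduce.
rng = np.random.default_rng(2)
kT, d = 1.0, 0.5
x = rng.standard_normal(200_000)       # samples from exp(-U0/kT)
dU = 0.5 * (x - d) ** 2 - 0.5 * x**2   # perturbation U1 - U0
dA = -kT * np.log(np.mean(np.exp(-dU / kT)))
print(abs(dA) < 0.01)  # True
```

In the urea study the same average is taken over Monte Carlo configurations of the solvated system rather than over an analytic distribution.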
Computational Methods for Conformational Sampling of Biomolecules
DEFF Research Database (Denmark)
Bottaro, Sandro
Proteins play a fundamental role in virtually every process within living organisms. For example, some proteins act as enzymes, catalyzing a wide range of reactions necessary for life, others mediate the cell's interaction with the surrounding environment, and still others have regulatory functions. First, we have developed a mathematical approach to a classic geometrical problem in protein simulations, and demonstrated its superiority compared to existing approaches. Secondly, we have constructed a more accurate implicit model of the aqueous environment, which is of fundamental importance in protein chemistry. This model is computationally much faster than models where water molecules are represented explicitly. Finally, in collaboration with the group of structural bioinformatics at the Department of Biology (KU), we have applied these techniques in the context of modeling of protein structure and flexibility from low...
Computational Method for Atomistic-Continuum Homogenization
National Research Council Canada - National Science Library
Chung, Peter
2002-01-01
The homogenization method is used as a framework for developing a multiscale system of equations involving atoms at zero temperature at the small scale and continuum mechanics at the very large scale...
Bridging a gap between continuum-QCD and ab initio predictions of hadron observables
Energy Technology Data Exchange (ETDEWEB)
Binosi, Daniele [European Centre for Theoretical Studies in Nuclear Physics and Related Areas - ECT* and Fondazione Bruno Kessler, Villa Tambosi, Strada delle Tabarelle 286, I-38123 Villazzano (Italy); Chang, Lei [CSSM, School of Chemistry and Physics, University of Adelaide, Adelaide, SA 5005 (Australia); Papavassiliou, Joannis [Department of Theoretical Physics and IFIC, University of Valencia and CSIC, E-46100, Valencia (Spain); Roberts, Craig D., E-mail: cdroberts@anl.gov [Physics Division, Argonne National Laboratory, Argonne, IL 60439 (United States)
2015-03-06
Within contemporary hadron physics there are two common methods for determining the momentum-dependence of the interaction between quarks: the top-down approach, which works toward an ab initio computation of the interaction via direct analysis of the gauge-sector gap equations; and the bottom-up scheme, which aims to infer the interaction by fitting data within a well-defined truncation of those equations in the matter sector that are relevant to bound-state properties. We unite these two approaches by demonstrating that the renormalisation-group-invariant running-interaction predicted by contemporary analyses of QCD's gauge sector coincides with that required in order to describe ground-state hadron observables using a nonperturbative truncation of QCD's Dyson–Schwinger equations in the matter sector. This bridges a gap that had lain between nonperturbative continuum-QCD and the ab initio prediction of bound-state properties.
Instrument design optimization with computational methods
Energy Technology Data Exchange (ETDEWEB)
Moore, Michael H. [Old Dominion Univ., Norfolk, VA (United States)
2017-08-01
Using Finite Element Analysis to approximate the solution of differential equations, two different instruments in experimental Hall C at the Thomas Jefferson National Accelerator Facility are analyzed. The time dependence of density fluctuations from the liquid hydrogen (LH2) target used in the Q_weak experiment (2011-2012) is studied with Computational Fluid Dynamics (CFD), and the simulation results are compared to data from the experiment. The 2.5 kW liquid hydrogen target was the highest-power LH2 target in the world and the first to be designed with CFD at Jefferson Lab. The first complete magnetic field simulation of the Super High Momentum Spectrometer (SHMS) is presented, with a focus on primary electron beam deflection downstream of the target. The SHMS consists of a superconducting horizontal bending magnet (HB) and three superconducting quadrupole magnets. The HB allows particles scattered at an angle of 5.5 deg to the beam line to be steered into the quadrupole magnets which make up the optics of the spectrometer. Without mitigation, remnant fields from the SHMS may steer the unscattered beam outside of the acceptable envelope on the beam dump and limit beam operations at small scattering angles. A solution is proposed using optimal placement of a minimal amount of shielding iron around the beam line.
Computer methods in physics 250 problems with guided solutions
Landau, Rubin H
2018-01-01
Our future scientists and professionals must be conversant in computational techniques. In order to facilitate integration of computer methods into existing physics courses, this textbook offers a large number of worked examples and problems with fully guided solutions in Python as well as other languages (Mathematica, Java, C, Fortran, and Maple). It is also intended as a self-study guide for learning how to use computer methods in physics. The authors include an introductory chapter on numerical tools and an indication of computational and physics difficulty level for each problem.
Electromagnetic computation methods for lightning surge protection studies
Baba, Yoshihiro
2016-01-01
This book is the first to consolidate current research and to examine the theories of electromagnetic computation methods in relation to lightning surge protection. The authors introduce and compare existing electromagnetic computation methods such as the method of moments (MOM), the partial element equivalent circuit (PEEC), the finite element method (FEM), the transmission-line modeling (TLM) method, and the finite-difference time-domain (FDTD) method. The application of FDTD method to lightning protection studies is a topic that has matured through many practical applications in the past decade, and the authors explain the derivation of Maxwell's equations required by the FDTD, and modeling of various electrical components needed in computing lightning electromagnetic fields and surges with the FDTD method. The book describes the application of FDTD method to current and emerging problems of lightning surge protection of continuously more complex installations, particularly in critical infrastructures of e...
Three-dimensional protein structure prediction: Methods and computational strategies.
Dorn, Márcio; E Silva, Mariel Barbachan; Buriol, Luciana S; Lamb, Luis C
2014-10-12
A long-standing problem in structural bioinformatics is to determine the three-dimensional (3-D) structure of a protein when only a sequence of amino acid residues is given. Many computational methodologies and algorithms have been proposed as a solution to the 3-D Protein Structure Prediction (3-D-PSP) problem. These methods can be divided into four main classes: (a) first principle methods without database information; (b) first principle methods with database information; (c) fold recognition and threading methods; and (d) comparative modeling methods and sequence alignment strategies. Deterministic computational techniques, optimization techniques, data mining and machine learning approaches are typically used in the construction of computational solutions for the PSP problem. Our main goal with this work is to review the methods and computational strategies that are currently used in 3-D protein prediction.
DEFF Research Database (Denmark)
Novitsky, Andrey; de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn
2017-01-01
Five state-of-the-art computational methods are benchmarked by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Specia...
Computational Methods for Physicists Compendium for Students
Sirca, Simon
2012-01-01
This book helps advanced undergraduate, graduate and postdoctoral students in their daily work by offering them a compendium of numerical methods. The choice of methods pays significant attention to error estimates, stability and convergence issues, as well as to ways to optimize program execution speed. Many examples are given throughout the chapters, and each chapter is followed by at least a handful of more comprehensive problems which may be dealt with, for example, on a weekly basis in a one- or two-semester course. In these end-of-chapter problems the physics background is pronounced, and the main text preceding them is intended as an introduction or as a later reference. Less stress is placed on the explanation of individual algorithms. The aim is to foster in the reader independent thinking and a healthy amount of scepticism and scrutiny, rather than blind reliance on readily available commercial tools.
Measurement method of cardiac computed tomography (CT)
International Nuclear Information System (INIS)
Watanabe, Shigeru; Yamamoto, Hironori; Yumura, Yasuo; Yoshida, Hideo; Morooka, Nobuhiro
1980-01-01
The CT was carried out in 126 cases consisting of 31 normals, 17 cases of mitral stenosis (MS), 8 cases of mitral regurgitation (MR), 11 cases of aortic stenosis (AS), 9 cases of aortic regurgitation (AR), 20 cases of myocardial infarction (MI), 8 cases of atrial septal defect (ASD) and 22 hypertensives. The 20-second scans were performed every 1.5 cm from the 2nd intercostal space to the 5th or 6th intercostal space. The computed tomograms obtained were classified into 8 levels by cross-sectional anatomy: levels of (1) the aortic arch, (2) just beneath the aortic arch, (3) the pulmonary artery bifurcation, (4) the right atrial appendage or the upper right atrium, (5) the aortic root, (6) the upper left ventricle, (7) the mid left ventricle, and (8) the lower left ventricle. The diameter (anteroposterior and transverse) and cross-sectional area were measured for the ascending aorta (Ao), descending aorta (AoD), superior vena cava (SVC), inferior vena cava (IVC), pulmonary artery branch (PA), main pulmonary artery (mPA), left atrium (LA), right atrium (RA), and right ventricular outflow tract (RVOT) on each level where they were clearly distinguished. However, it was difficult to separate the cardiac wall from the cardiac cavity because there was little difference in X-ray attenuation coefficient between the myocardium and blood. Therefore, at the mid-ventricular level, the diameter and area of the total cardiac shadow were measured, and the cardiac ratios to the thorax were then calculated. The normal range of these values is shown in the table, and abnormal characteristics in cardiac disease are presented in comparison with normal values. In MS, the diameter and area of the LA were significantly larger than normal. In MS and ASD, the entire right cardiac system was larger than normal, especially RA and SVC in MS, and PA and RVOT in ASD. The diameter and area of the aortic root were larger than normal, in the order AR, AS and HT. (author)
Three numerical methods for the computation of the electrostatic energy
International Nuclear Information System (INIS)
Poenaru, D.N.; Galeriu, D.
1975-01-01
The FORTRAN programs for computation of the electrostatic energy of a body with axial symmetry by the Lawrence, Hill-Wheeler and Beringer methods are presented in detail. The accuracy, time of computation and the required memory of these methods are tested at various deformations for two simple parametrisations: two overlapping identical spheres and a spheroid. On this basis the field of application of each method is recommended.
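As an illustration of the kind of analytic benchmark such programs are tested against, the self-energy of a uniformly charged sphere has the closed form U = (3/5)kQ²/R, which a shell-by-shell numerical assembly must reproduce. The code below is my own sketch, not the report's FORTRAN:

```python
import numpy as np

# Analytic benchmark for electrostatic-energy codes: the self-energy
# of a uniformly charged sphere, assembled shell by shell, versus
# the closed form U = (3/5) k Q^2 / R. (Toy sketch, SI units.)
k = 8.9875517873681764e9   # Coulomb constant [N m^2 / C^2]
Q, R = 1e-9, 0.01          # total charge [C], radius [m]

N = 200_000
dr = R / N
r = (np.arange(N) + 0.5) * dr          # midpoint rule
q_inner = Q * (r / R) ** 3             # charge already assembled
dq_dr = 3.0 * Q * r**2 / R**3          # charge per unit radius
U_num = np.sum(k * q_inner * dq_dr / r) * dr

U_exact = 0.6 * k * Q**2 / R
print(abs(U_num - U_exact) / U_exact < 1e-6)  # True
```

The spheroid parametrisation mentioned in the abstract admits a similar closed form, which makes the pair a natural accuracy test for the three methods.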
Computational Methods for ChIP-seq Data Analysis and Applications
Ashoor, Haitham
2017-04-25
integrates several experimental data including ChIP-seq data for TF binding sites. Finally, I present an extensive computational comparison of different ab initio motif identification methods based on TF ChIP-seq data. The comparison covered 10 different methods over 159 different TF datasets. The recommendations from this comparison indicate that simple methods outperform higher-order models.
Reduced order methods for modeling and computational reduction
Rozza, Gianluigi
2014-01-01
This monograph addresses the state of the art of reduced order methods for modeling and computational reduction of complex parametrized systems, governed by ordinary and/or partial differential equations, with a special emphasis on real time computing techniques and applications in computational mechanics, bioengineering and computer graphics. Several topics are covered, including: design, optimization, and control theory in real-time with applications in engineering; data assimilation, geometry registration, and parameter estimation with special attention to real-time computing in biomedical engineering and computational physics; real-time visualization of physics-based simulations in computer science; the treatment of high-dimensional problems in state space, physical space, or parameter space; the interactions between different model reduction and dimensionality reduction approaches; the development of general error estimation frameworks which take into account both model and discretization effects. This...
Testing and Validation of Computational Methods for Mass Spectrometry.
Gatto, Laurent; Hansen, Kasper D; Hoopmann, Michael R; Hermjakob, Henning; Kohlbacher, Oliver; Beyer, Andreas
2016-03-04
High-throughput methods based on mass spectrometry (proteomics, metabolomics, lipidomics, etc.) produce a wealth of data that cannot be analyzed without computational methods. The impact of the choice of method on the overall result of a biological study is often underappreciated, but different methods can result in very different biological findings. It is thus essential to evaluate and compare the correctness and relative performance of computational methods. The volume of the data as well as the complexity of the algorithms render unbiased comparisons challenging. This paper discusses some problems and challenges in testing and validation of computational methods. We discuss the different types of data (simulated and experimental validation data) as well as different metrics to compare methods. We also introduce a new public repository for mass spectrometric reference data sets ( http://compms.org/RefData ) that contains a collection of publicly available data sets for performance evaluation for a wide range of different methods.
DEFF Research Database (Denmark)
Palmer, Michael H.; Hoffmann, Søren Vrønning; Jones, Nykola C.
2011-01-01
The Rydberg states in the vacuum ultraviolet photoabsorption spectrum of 1,2,3-triazole have been measured and analyzed with the aid of comparison to the UV valence photoelectron ionizations and the results of ab initio configuration interaction (CI) calculations. Calculated electronic ionization and excitation energies for singlet, triplet valence, and Rydberg states were obtained using multireference multiroot CI procedures with an aug-cc-pVTZ [5s3p3d1f] basis set and a set of Rydberg [4s3p3d3f] functions. Adiabatic excitation energies were obtained for several electronic states using coupled... ...are the excitations consistent with an f-series.
Developing a multimodal biometric authentication system using soft computing methods.
Malcangi, Mario
2015-01-01
Robust personal authentication is becoming ever more important in computer-based applications. Among a variety of methods, biometrics offers several advantages, mainly in embedded system applications. Hard and soft multi-biometrics, combined with hard and soft computing methods, can be applied to improve the personal authentication process and to generalize its applicability. This chapter describes the embedded implementation of a multi-biometric (voiceprint and fingerprint) multimodal identification system based on hard computing methods (DSP) for feature extraction and matching, an artificial neural network (ANN) for soft feature pattern matching, and a fuzzy logic engine (FLE) for data fusion and decision.
Computational Simulations and the Scientific Method
Kleb, Bil; Wood, Bill
2005-01-01
As scientific simulation software becomes more complicated, the scientific-software implementor's need for component tests from new model developers becomes more crucial. The community's ability to follow the basic premise of the Scientific Method requires independently repeatable experiments, and model innovators are in the best position to create these test fixtures. Scientific software developers also need to quickly judge the value of the new model, i.e., its cost-to-benefit ratio in terms of gains provided by the new model and implementation risks such as cost, time, and quality. This paper asks two questions. The first is whether other scientific software developers would find published component tests useful, and the second is whether model innovators think publishing test fixtures is a feasible approach.
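A component test of the kind the paper advocates can be very small. The example below uses a hypothetical Sutherland-viscosity model as the shipped fixture; the function name, constants and tolerances are my own illustration, not from the paper:

```python
# A model innovator ships the component together with fixtures that
# pin down known inputs and expected behavior, so any implementor
# can re-run them independently.
def sutherland_viscosity(T, mu_ref=1.716e-5, T_ref=273.15, S=110.4):
    """Sutherland's law for the dynamic viscosity of air [Pa s]."""
    return mu_ref * (T / T_ref) ** 1.5 * (T_ref + S) / (T + S)

def test_sutherland_reference_point():
    # The reference condition must be reproduced exactly.
    assert abs(sutherland_viscosity(273.15) - 1.716e-5) < 1e-12

def test_sutherland_monotone_in_T():
    # Gas viscosity grows with temperature under this model.
    assert sutherland_viscosity(400.0) > sutherland_viscosity(300.0)

test_sutherland_reference_point()
test_sutherland_monotone_in_T()
```

Published alongside the model description, such fixtures make the "independently repeatable experiment" cheap for every downstream implementor.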
Computer systems and methods for visualizing data
Stolte, Chris; Hanrahan, Patrick
2013-01-29
A method for forming a visual plot using a hierarchical structure of a dataset. The dataset comprises a measure and a dimension. The dimension consists of a plurality of levels. The plurality of levels form a dimension hierarchy. The visual plot is constructed based on a specification. A first level from the plurality of levels is represented by a first component of the visual plot. A second level from the plurality of levels is represented by a second component of the visual plot. The dataset is queried to retrieve data in accordance with the specification. The data includes all or a portion of the dimension and all or a portion of the measure. The visual plot is populated with the retrieved data in accordance with the specification.
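The hierarchical aggregation step of the method can be sketched with invented data (a year → quarter dimension hierarchy over a sales measure; the data and names are my own, not from the patent):

```python
from collections import defaultdict

# One query pass aggregates the measure at each level of the
# dimension hierarchy; each level then populates one component
# of the visual plot.
rows = [
    ("2012", "Q1", 10.0), ("2012", "Q2", 12.0),
    ("2013", "Q1", 11.0), ("2013", "Q2", 15.0),
]  # (year, quarter, sales): year -> quarter is the hierarchy

by_year = defaultdict(float)       # first plot component
by_quarter = defaultdict(float)    # second plot component
for year, quarter, sales in rows:
    by_year[year] += sales
    by_quarter[(year, quarter)] += sales

print(dict(by_year))
# {'2012': 22.0, '2013': 26.0}
```

In the patented method the specification decides which levels map to which plot components; this sketch shows only the level-wise aggregation that feeds them.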
Control rod computer code IAMCOS: general theory and numerical methods
International Nuclear Information System (INIS)
West, G.
1982-11-01
IAMCOS is a computer code for the description of the mechanical and thermal behavior of cylindrical control rods for fast breeders. This code version was applied, tested and modified from 1979 to 1981. This report describes the basic model (02 version), theoretical definitions and computation methods. [fr]
Towards hydrogen metallization: an Ab initio approach
International Nuclear Information System (INIS)
Bernard, St.
1998-01-01
The quest for metallic hydrogen is a major goal for both theoretical and experimental condensed matter physics. Hydrogen and deuterium have been compressed up to 200 GPa in diamond anvil cells, without any clear evidence for metallic behaviour. Loubeyre has recently suggested that hydrogen could metallize, at pressures within experimental range, in a new Van der Waals compound, Ar(H2)2, which is characterized at ambient pressure by an open and anisotropic sublattice of hydrogen molecules, stabilized by an argon skeleton. This thesis presents a detailed ab initio investigation, by Car-Parrinello molecular dynamics methods, of the evolution of this compound under pressure. In a last chapter, we go to much higher pressures and temperatures, in order to compare orbital and orbital-free ab initio methods for the dense hydrogen plasma. (author)
Computation of saddle-type slow manifolds using iterative methods
DEFF Research Database (Denmark)
Kristiansen, Kristian Uldall
2015-01-01
This paper presents an alternative approach for the computation of trajectory segments on slow manifolds of saddle type. This approach is based on iterative methods rather than collocation-type methods. Compared to collocation methods, which require mesh refinements to ensure uniform convergence with respect to ..., appropriate estimates are directly attainable using the method of this paper. The method is applied to several examples, including a model for a pair of neurons coupled by reciprocal inhibition with two slow and two fast variables, and the computation of homoclinic connections in the Fitz...
Vanicek, Jiri
2014-03-01
Rigorous quantum-mechanical calculations of coherent ultrafast electronic spectra remain difficult. I will present several approaches developed in our group that increase the efficiency and accuracy of such calculations: First, we justified the feasibility of evaluating time-resolved spectra of large systems by proving that the number of trajectories needed for convergence of the semiclassical dephasing representation/phase averaging is independent of dimensionality. Recently, we further accelerated this approximation with a cellular scheme employing inverse Weierstrass transform and optimal scaling of the cell size. The accuracy of potential energy surfaces was increased by combining the dephasing representation with accurate on-the-fly ab initio electronic structure calculations, including nonadiabatic and spin-orbit couplings. Finally, the inherent semiclassical approximation was removed in the exact quantum Gaussian dephasing representation, in which semiclassical trajectories are replaced by communicating frozen Gaussian basis functions evolving classically with an average Hamiltonian. Among other examples I will present an on-the-fly ab initio semiclassical dynamics calculation of the dispersed time-resolved stimulated emission spectrum of the 54-dimensional azulene. This research was supported by EPFL and by the Swiss National Science Foundation NCCR MUST (Molecular Ultrafast Science and Technology) and Grant No. 200021124936/1.
Discrete linear canonical transform computation by adaptive method.
Zhang, Feng; Tao, Ran; Wang, Yue
2013-07-29
The linear canonical transform (LCT) describes the effect of quadratic phase systems on a wavefield and generalizes many optical transforms. In this paper, the computation method for the discrete LCT using the adaptive least-mean-square (LMS) algorithm is presented. The computation approaches of the block-based discrete LCT and the stream-based discrete LCT using the LMS algorithm are derived, and the implementation structures of these approaches by the adaptive filter system are considered. The proposed computation approaches have the inherent parallel structures which make them suitable for efficient VLSI implementations, and are robust to the propagation of possible errors in the computation process.
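The abstract does not reproduce the authors' discrete-LCT filter structure, but the least-mean-square update it builds on is the standard stochastic-gradient rule w ← w + μ·e·x. A minimal sketch of that rule in Python (system identification of an invented 4-tap filter, not the paper's DLCT scheme):

```python
import random

def lms_identify(x, d, taps, mu):
    """Adapt FIR weights with the LMS rule  w <- w + mu * e * x  so that
    the filter output tracks the desired signal d."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        window = x[n - taps + 1:n + 1][::-1]           # newest sample first
        y = sum(wi * xi for wi, xi in zip(w, window))  # filter output
        e = d[n] - y                                   # instantaneous error
        w = [wi + mu * e * xi for wi, xi in zip(w, window)]
    return w

# Identify a known 4-tap filter from its input/output signals.
random.seed(0)
true_w = [0.5, -0.3, 0.2, 0.1]
x = [random.gauss(0.0, 1.0) for _ in range(5000)]
d = [sum(true_w[k] * x[n - k] for k in range(4)) if n >= 3 else 0.0
     for n in range(len(x))]
w = lms_identify(x, d, taps=4, mu=0.05)
```

Because each weight update depends only on the local error and window, such filters map naturally onto the parallel VLSI structures the paper mentions.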
Platform-independent method for computer aided schematic drawings
Vell, Jeffrey L [Slingerlands, NY; Siganporia, Darius M [Clifton Park, NY; Levy, Arthur J [Fort Lauderdale, FL
2012-02-14
A CAD/CAM method is disclosed for a computer system to capture and interchange schematic drawing and associated design information. The schematic drawing and design information are stored in an extensible, platform-independent format.
Simulating elastic light scattering using high performance computing methods
Hoekstra, A.G.; Sloot, P.M.A.; Verbraeck, A.; Kerckhoffs, E.J.H.
1993-01-01
The Coupled Dipole method, as originally formulated by Purcell and Pennypacker, is a very powerful method to simulate the Elastic Light Scattering from arbitrary particles. This method, which is a particle simulation model for Computational Electromagnetics, has one major drawback: if the size of the …
Computational and experimental methods for enclosed natural convection
International Nuclear Information System (INIS)
Larson, D.W.; Gartling, D.K.; Schimmel, W.P. Jr.
1977-10-01
Two computational procedures and one optical experimental procedure for studying enclosed natural convection are described. The finite-difference and finite-element numerical methods are developed and several sample problems are solved. Results obtained from the two computational approaches are compared. A temperature-visualization scheme using laser holographic interferometry is described, and results from this experimental procedure are compared with results from both numerical methods
Method and computer program product for maintenance and modernization backlogging
Mattimore, Bernard G; Reynolds, Paul E; Farrell, Jill M
2013-02-19
According to one embodiment, a computer program product for determining future facility conditions includes a computer readable medium having computer readable program code stored therein. The computer readable program code includes computer readable program code for calculating a time period specific maintenance cost, for calculating a time period specific modernization factor, and for calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. In another embodiment, a computer-implemented method for calculating future facility conditions includes calculating a time period specific maintenance cost, calculating a time period specific modernization factor, and calculating a time period specific backlog factor. Future facility conditions equal the time period specific maintenance cost plus the time period specific modernization factor plus the time period specific backlog factor. Other embodiments are also presented.
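The claimed computation is a plain sum of the three time-period-specific terms; as a one-line sketch (function and argument names are mine, not the patent's):

```python
def future_facility_condition(maintenance_cost, modernization_factor, backlog_factor):
    """Future facility condition for one time period, per the patent's stated
    formula: the time-period-specific maintenance cost plus the modernization
    factor plus the backlog factor."""
    return maintenance_cost + modernization_factor + backlog_factor

# e.g. a period with $100k maintenance, a 20-unit modernization factor,
# and a 5-unit backlog factor (illustrative figures):
condition = future_facility_condition(100.0, 20.0, 5.0)
```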
Computer Anti-forensics Methods and their Impact on Computer Forensic Investigation
Pajek, Przemyslaw; Pimenidis, Elias
2009-01-01
Electronic crime is very difficult to investigate and prosecute, mainly due to the fact that investigators have to build their cases based on artefacts left on computer systems. Nowadays, computer criminals are aware of computer forensics methods and techniques and try to use countermeasure techniques to efficiently impede the investigation processes. In many cases investigation with such countermeasure techniques in place appears to be too expensive, or too time consuming t…
Fibonacci’s Computation Methods vs Modern Algorithms
Directory of Open Access Journals (Sweden)
Ernesto Burattini
2013-12-01
Full Text Available In this paper we discuss some computational procedures given by Leonardo Pisano Fibonacci in his famous Liber Abaci, and we propose their translation into a modern computer language (C++). Among other things, we describe the method of “cross” multiplication, evaluate its computational complexity in algorithmic terms, and show the output of a C++ code that implements the method applied to the product of two integers. In a similar way we show the operations performed on fractions introduced by Fibonacci. The possibility of reproducing Fibonacci's different computational procedures on a computer made it possible to identify some calculation errors present in the different versions of the original text.
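Fibonacci's "cross" multiplication sums digit cross-products diagonal by diagonal, carrying as it goes. A sketch of that idea in Python rather than the paper's C++ (the digit bookkeeping here is mine; consult the paper for Fibonacci's exact scheme):

```python
def cross_multiply(a, b):
    """Multiply two non-negative integers digit by digit: for each output
    position k, sum every cross-product of digits whose positions add to k,
    then propagate the carry -- the 'cross' scheme of the Liber Abaci."""
    da = [int(c) for c in str(a)][::-1]   # least significant digit first
    db = [int(c) for c in str(b)][::-1]
    digits, carry = [], 0
    for k in range(len(da) + len(db) - 1):
        s = carry
        for i in range(len(da)):
            j = k - i
            if 0 <= j < len(db):
                s += da[i] * db[j]        # cross-product on diagonal k
        digits.append(s % 10)
        carry = s // 10
    while carry:                          # flush the final carry
        digits.append(carry % 10)
        carry //= 10
    return int(''.join(map(str, digits[::-1])))

product = cross_multiply(1234, 5678)
```

The double loop makes the O(n·m) digit-operation cost of the method explicit, which is the complexity the paper evaluates.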
SmartShadow models and methods for pervasive computing
Wu, Zhaohui
2013-01-01
SmartShadow: Models and Methods for Pervasive Computing offers a new perspective on pervasive computing with SmartShadow, which is designed to model a user as a personality "shadow" and to model pervasive computing environments as user-centric dynamic virtual personal spaces. Just like human beings' shadows in the physical world, it follows people wherever they go, providing them with pervasive services. The model, methods, and software infrastructure for SmartShadow are presented and an application for smart cars is also introduced. The book can serve as a valuable reference work for resea…
International Nuclear Information System (INIS)
Page, B.; Hilty, L.M.
1994-01-01
Environmental computer science is a new subdiscipline of applied computer science that applies methods and techniques of information processing to environmental protection. Owing to the interdisciplinary nature of environmental problems, computer science acts as a mediator between the numerous disciplines and institutions in this sector. The handbook reflects the broad spectrum of state-of-the-art environmental computer science. The following important subjects are dealt with: environmental databases and information systems, environmental monitoring, modelling and simulation, visualization of environmental data, and knowledge-based systems in the environmental sector. (orig.) [de
Computational methods for protein identification from mass spectrometry data.
Directory of Open Access Journals (Sweden)
Leo McHugh
2008-02-01
Full Text Available Protein identification using mass spectrometry is an indispensable computational tool in the life sciences. A dramatic increase in the use of proteomic strategies to understand the biology of living systems generates an ongoing need for more effective, efficient, and accurate computational methods for protein identification. A wide range of computational methods, each with various implementations, are available to complement different proteomic approaches. A solid knowledge of the range of algorithms available and, more critically, the accuracy and effectiveness of these techniques is essential to ensure as many of the proteins as possible, within any particular experiment, are correctly identified. Here, we undertake a systematic review of the currently available methods and algorithms for interpreting, managing, and analyzing biological data associated with protein identification. We summarize the advances in computational solutions as they have responded to corresponding advances in mass spectrometry hardware. The evolution of scoring algorithms and metrics for automated protein identification is also discussed with a focus on the relative performance of different techniques. We also consider the relative advantages and limitations of different techniques in particular biological contexts. Finally, we present our perspective on future developments in the area of computational protein identification by considering the most recent literature on new and promising approaches to the problem as well as identifying areas yet to be explored and the potential application of methods from other areas of computational biology.
Big data mining analysis method based on cloud computing
Cai, Qing Qiu; Cui, Hong Gang; Tang, Hao
2017-08-01
In the era of information explosion, the enormous scale, discreteness, and unstructured or semi-structured nature of big data have gone far beyond what traditional data management approaches can handle. With the arrival of the cloud computing era, cloud computing provides a new technical way to analyze massive data, which can effectively solve the problem that traditional data mining methods cannot scale to massive data. This paper introduces the meaning and characteristics of cloud computing, analyzes the advantages of using cloud computing technology for data mining, designs a mining algorithm for association rules based on the MapReduce parallel processing architecture, and carries out experimental verification. The parallel association rule mining algorithm based on a cloud computing platform can greatly improve the execution speed of data mining.
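The paper's MapReduce algorithm is not reproduced here, but its counting core can be sketched: a map phase emits (itemset, 1) pairs for every candidate 2-itemset in each transaction, and a reduce phase sums per key and keeps itemsets meeting minimum support (single-process Python standing in for a MapReduce cluster; the toy transactions are mine):

```python
from collections import Counter
from itertools import combinations

def map_phase(transactions):
    """Mapper: emit (pair, 1) for every candidate 2-itemset per transaction."""
    for t in transactions:
        for pair in combinations(sorted(set(t)), 2):
            yield pair, 1

def reduce_phase(pairs, min_support):
    """Reducer: sum counts per key; keep pairs meeting minimum support."""
    counts = Counter()
    for key, one in pairs:
        counts[key] += one
    return {k: v for k, v in counts.items() if v >= min_support}

tx = [["bread", "milk"],
      ["bread", "butter", "milk"],
      ["milk", "butter"]]
frequent = reduce_phase(map_phase(tx), min_support=2)
```

On a real cluster the shuffle between the two phases groups identical keys onto the same reducer, which is what makes the counting embarrassingly parallel.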
A Krylov Subspace Method for Unstructured Mesh SN Transport Computation
International Nuclear Information System (INIS)
Yoo, Han Jong; Cho, Nam Zin; Kim, Jong Woon; Hong, Ser Gi; Lee, Young Ouk
2010-01-01
Hong et al. developed a computer code MUST (Multi-group Unstructured geometry SN Transport) for neutral particle transport calculations in three-dimensional unstructured geometry. In this code, the discrete ordinates transport equation is solved by using the discontinuous finite element method (DFEM) or subcell balance methods with linear discontinuous expansion. In this paper, the conventional source iteration in the MUST code is replaced by a Krylov subspace method to reduce computing time, and the numerical test results are given
Computational methods for high-energy source shielding
International Nuclear Information System (INIS)
Armstrong, T.W.; Cloth, P.; Filges, D.
1983-01-01
The computational methods for high-energy radiation transport related to shielding of the SNQ-spallation source are outlined. The basic approach is to couple radiation-transport computer codes which use Monte Carlo methods and discrete ordinates methods. A code system is suggested that incorporates state-of-the-art radiation-transport techniques. The stepwise verification of that system is briefly summarized. The complexity of the resulting code system suggests a more straightforward code specially tailored for thick shield calculations. A short guide line to future development of such a Monte Carlo code is given
Monte Carlo methods of PageRank computation
Litvak, Nelli
2004-01-01
We describe and analyze an on-line Monte Carlo method of PageRank computation. The PageRank is estimated based on the results of a large number of short independent simulation runs initiated from each page that contains outgoing hyperlinks. The method does not require any storage of the hyperlink…
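The visit-count flavor of such a scheme can be sketched in a few lines: from every page start several walks that continue along a random outlink with probability c, and scale the visit counts by (1−c)/(n·runs) so they estimate the PageRank vector (parameter names and the three-node test graph are mine; see the paper for the estimators actually analyzed):

```python
import random

def mc_pagerank(graph, c=0.85, runs=200, seed=0):
    """Estimate PageRank from short random walks.  Each walk continues along
    a uniformly random outlink with probability c; visit counts scaled by
    (1 - c) / (n * runs) estimate the PageRank vector."""
    rng = random.Random(seed)
    visits = {v: 0 for v in graph}
    for start in graph:
        for _ in range(runs):
            v = start
            visits[v] += 1
            while rng.random() < c and graph[v]:
                v = rng.choice(graph[v])
                visits[v] += 1
    scale = (1.0 - c) / (len(graph) * runs)
    return {v: k * scale for v, k in visits.items()}

# A and B both link to C; C links back to A, so C should rank highest.
g = {"A": ["C"], "B": ["C"], "C": ["A"]}
pr = mc_pagerank(g)
```

Because each walk is independent, the runs parallelize trivially and no transition matrix ever needs to be stored.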
Geometric optical transfer function and its computation method
International Nuclear Information System (INIS)
Wang Qi
1992-01-01
The geometric optical transfer function formula is derived after expounding some easily overlooked points, and its computation method is given, using the Bessel function of order zero, numerical integration, and spline interpolation. The method has the advantage of ensuring accuracy while saving calculation.
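A hedged sketch of that kind of computation: evaluate J0 by quadrature of its integral form, then take the zero-order Hankel transform of a radially symmetric PSF. A Gaussian PSF is used here because its MTF, exp(−2π²σ²f²), is known in closed form; the paper's spline-interpolation step is omitted:

```python
import math

def j0(x, m=200):
    """Bessel J0 via its integral form J0(x) = (1/pi) ∫0^pi cos(x sin t) dt,
    evaluated with the trapezoidal rule."""
    h = math.pi / m
    s = 0.5 * (math.cos(x * math.sin(0.0)) + math.cos(x * math.sin(math.pi)))
    for k in range(1, m):
        s += math.cos(x * math.sin(k * h))
    return s * h / math.pi

def geometric_mtf(psf, f, rmax, n=400):
    """MTF of a radially symmetric PSF as its normalized zero-order Hankel
    transform: ∫ psf(r) J0(2 pi f r) r dr / ∫ psf(r) r dr (midpoint rule)."""
    h = rmax / n
    num = den = 0.0
    for k in range(1, n + 1):
        r = (k - 0.5) * h
        w = psf(r) * r * h
        num += w * j0(2.0 * math.pi * f * r)
        den += w
    return num / den

sigma = 0.01  # mm, Gaussian blur-spot radius (illustrative)
psf = lambda r: math.exp(-r * r / (2.0 * sigma * sigma))
mtf = geometric_mtf(psf, f=10.0, rmax=6.0 * sigma)   # f in cycles/mm
```

The normalization by the PSF's own radial integral guarantees MTF(0) = 1, the usual sanity check.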
Efficient Numerical Methods for Stochastic Differential Equations in Computational Finance
Happola, Juho
2017-09-19
Stochastic Differential Equations (SDE) offer a rich framework to model the probabilistic evolution of the state of a system. Numerical approximation methods are typically needed in evaluating relevant Quantities of Interest arising from such models. In this dissertation, we present novel effective methods for evaluating Quantities of Interest relevant to computational finance when the state of the system is described by an SDE.
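The dissertation's own methods are not reproduced here, but the baseline they improve on, crude Monte Carlo over Euler–Maruyama paths, can be sketched as follows. Geometric Brownian motion is chosen because the Quantity of Interest E[X_T] = x0·e^{rT} is known in closed form (all parameter values are illustrative):

```python
import math
import random

def euler_maruyama(mu, sigma, x0, T, n, rng):
    """One Euler-Maruyama path of dX = mu(X) dt + sigma(X) dW on [0, T]."""
    dt = T / n
    x = x0
    for _ in range(n):
        dw = rng.gauss(0.0, math.sqrt(dt))   # Brownian increment
        x = x + mu(x) * dt + sigma(x) * dw
    return x

# Quantity of Interest: E[X_T] for GBM dX = r X dt + s X dW.
rng = random.Random(42)
r, s, x0, T = 0.05, 0.2, 1.0, 1.0
paths = 20000
est = sum(euler_maruyama(lambda x: r * x, lambda x: s * x, x0, T, 100, rng)
          for _ in range(paths)) / paths
exact = x0 * math.exp(r * T)
```

The statistical error of this estimator decays like O(N^{-1/2}) in the number of paths, and the Euler bias like O(dt), which is exactly the cost structure that variance-reduction and higher-order schemes aim to beat.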
Fully consistent CFD methods for incompressible flow computations
DEFF Research Database (Denmark)
Kolmogorov, Dmitry; Shen, Wen Zhong; Sørensen, Niels N.
2014-01-01
Nowadays collocated grid based CFD methods are one of the most efficient tools for computations of the flows past wind turbines. To ensure the robustness of the methods they require special attention to the well-known problem of pressure-velocity coupling. Many commercial codes to ensure the pressure…
Haworth, Naomi L.; Bacskay, George B.
2002-12-01
The heats of formation of a range of phosphorus containing molecules (P2, P4, PH, PH2, PH3, P2H2, P2H4, PO, PO2, PO3, P2O, P2O2, HPO, HPOH, H2POH, H3PO, HOPO, and HOPO2) have been determined by high level quantum chemical calculations. The equilibrium geometries and vibrational frequencies were computed via density functional theory, utilizing the B3LYP/6-31G(2df,p) functional and basis set. Atomization energies were obtained by the application of ab initio coupled cluster theory with single and double excitations from (spin)-restricted Hartree-Fock reference states with perturbative correction for triples [CCSD(T)], in conjunction with cc-pVnZ basis sets (n=T, Q, 5) which include an extra d function on the phosphorus atoms and diffuse functions on the oxygens, as recommended by Bauschlicher [J. Phys. Chem. A 103, 11126 (1999)]. The valence correlated atomization energies were extrapolated to the complete basis limit and corrected for core-valence (CV) correlation and scalar relativistic effects, as well as for basis set superposition errors (BSSE) in the CV terms. This methodology is effectively the same as the one adopted by Bauschlicher in his study of PO, PO2, PO3, HPO, HOPO, and HOPO2. Consequently, for these molecules the results of this work closely match Bauschlicher's computed values. The theoretical heats of formation, whose accuracy is estimated as ranging from ±1.0 to ±2.5 kcal mol-1, are consistent with the available experimental data. The current set of theoretical data represents a convenient benchmark, against which the results of other computational procedures, such as G3, G3X, and G3X2, can be compared. Despite the fact that G3X2 [which is an approximation to the quadratic CI procedure QCISD(T,Full)/G3Xlarge] is a formally higher level theory than G3X, the heats of formation obtained by these two methods are found to be of comparable accuracy. Both reproduce the benchmark heats of formation on the average to within ±2 kcal mol-1 and, for these…
Computational methods for structural load and resistance modeling
Thacker, B. H.; Millwater, H. R.; Harren, S. V.
1991-01-01
An automated capability for computing structural reliability considering uncertainties in both load and resistance variables is presented. The computations are carried out using an automated Advanced Mean Value iteration algorithm (AMV +) with performance functions involving load and resistance variables obtained by both explicit and implicit methods. A complete description of the procedures used is given as well as several illustrative examples, verified by Monte Carlo Analysis. In particular, the computational methods described in the paper are shown to be quite accurate and efficient for a material nonlinear structure considering material damage as a function of several primitive random variables. The results show clearly the effectiveness of the algorithms for computing the reliability of large-scale structural systems with a maximum number of resolutions.
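As a sketch of the Monte Carlo check used to verify such reliability results (the limit state and distributions below are illustrative, not from the paper): estimate the failure probability P(g < 0) by direct sampling, and compare against the closed form Φ(−β) available when g = R − L is linear in independent normal load and resistance variables:

```python
import math
import random

def failure_probability(g, sample, n=200_000, seed=1):
    """Crude Monte Carlo estimate of P(g(X) < 0): draw samples of the
    primitive random variables and count limit-state violations."""
    rng = random.Random(seed)
    fails = sum(1 for _ in range(n) if g(sample(rng)) < 0.0)
    return fails / n

# Linear limit state g = R - L, resistance R ~ N(5, 1), load L ~ N(2, 1).
muR, sigR, muL, sigL = 5.0, 1.0, 2.0, 1.0
pf = failure_probability(lambda x: x[0] - x[1],
                         lambda rng: (rng.gauss(muR, sigR),
                                      rng.gauss(muL, sigL)))
beta = (muR - muL) / math.hypot(sigR, sigL)   # reliability index
exact = 0.5 * math.erfc(beta / math.sqrt(2.0))  # Phi(-beta)
```

The crude estimator needs O(1/p_f) samples for small failure probabilities, which is precisely why iteration schemes such as AMV+ are attractive for expensive implicit performance functions.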
Computational mathematics models, methods, and analysis with Matlab and MPI
White, Robert E
2004-01-01
Computational Mathematics: Models, Methods, and Analysis with MATLAB and MPI explores and illustrates this process. Each section of the first six chapters is motivated by a specific application. The author applies a model, selects a numerical method, implements computer simulations, and assesses the ensuing results. These chapters include an abundance of MATLAB code. By studying the code instead of using it as a "black box, " you take the first step toward more sophisticated numerical modeling. The last four chapters focus on multiprocessing algorithms implemented using message passing interface (MPI). These chapters include Fortran 9x codes that illustrate the basic MPI subroutines and revisit the applications of the previous chapters from a parallel implementation perspective. All of the codes are available for download from www4.ncsu.edu./~white.This book is not just about math, not just about computing, and not just about applications, but about all three--in other words, computational science. Whether us...
Thermal transport in nanocrystalline Si and SiGe by ab initio based Monte Carlo simulation.
Yang, Lina; Minnich, Austin J
2017-03-14
Nanocrystalline thermoelectric materials based on Si have long been of interest because Si is earth-abundant, inexpensive, and non-toxic. However, a poor understanding of phonon grain boundary scattering and its effect on thermal conductivity has impeded efforts to improve the thermoelectric figure of merit. Here, we report an ab-initio based computational study of thermal transport in nanocrystalline Si-based materials using a variance-reduced Monte Carlo method with the full phonon dispersion and intrinsic lifetimes from first-principles as input. By fitting the transmission profile of grain boundaries, we obtain excellent agreement with experimental thermal conductivity of nanocrystalline Si [Wang et al. Nano Letters 11, 2206 (2011)]. Based on these calculations, we examine phonon transport in nanocrystalline SiGe alloys with ab-initio electron-phonon scattering rates. Our calculations show that low energy phonons still transport substantial amounts of heat in these materials, despite scattering by electron-phonon interactions, due to the high transmission of phonons at grain boundaries, and thus improvements in ZT are still possible by disrupting these modes. This work demonstrates the important insights into phonon transport that can be obtained using ab-initio based Monte Carlo simulations in complex nanostructured materials.
Early stage precipitation in aluminum alloys : An ab initio study
Zhang, X.
2017-01-01
Multiscale computational materials science has reached a stage where many complicated phenomena or properties that are of great importance to manufacturing can be predicted or explained. The term “ab initio study” has become commonplace as the development of density functional theory has enabled the…
Embedded atom approach for gold–silicon system from ab initio
Indian Academy of Sciences (India)
In the present paper, an empirical embedded atom method (EAM) potential for gold–silicon (Au–Si) is developed by fitting to ab initio force (the 'force matching' method) and experimental data. The force database is generated within ab initio molecular dynamics (AIMD). The database includes liquid phase at various ...
International Nuclear Information System (INIS)
Sekimura, Naoto; Okita, Taira
2006-01-01
Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has therefore been prepared in serial form. This is the fourth issue, giving an overview of scientific computational methods with an introduction to continuum simulation methods and their applications. Simulation methods for physical radiation effects on materials are reviewed in terms of processes such as the binary collision approximation, molecular dynamics, the kinetic Monte Carlo method, the reaction rate method, and dislocation dynamics. (T. Tanaka)
International Nuclear Information System (INIS)
Freire, Ricardo O.; Rocha, Gerd B.; Albuquerque, Rodrigo Q.; Simas, Alfredo M.
2005-01-01
The second version of the sparkle model for the calculation of lanthanide complexes (SMLC II) as well as ab-initio calculations (HF/STO-3G and HF/3-21G) have been used to calculate the geometries of a series of europium (III) complexes with different coordination numbers (CN=7, 8 and 9), ligating atoms (O and N) and ligands (mono, bi and polydentate). The so-called ligand field parameters, Bqk's, have been calculated from both SMLC II and ab-initio optimized structures and compared to the ones calculated from crystallographic data. The results show that the SMLC II model represents a significant improvement over the previous version (SMLC) and has given good results when compared to ab-initio methods, which demand a much higher computational effort. Indeed, ab-initio methods take around a hundred times more computing time than SMLC. As such, our results indicate that our sparkle model can be a very useful and a fast tool when applied to the prediction of both ground state geometries and ligand field parameters of europium (III) complexes
Energy Technology Data Exchange (ETDEWEB)
Masrour, R., E-mail: rachidmasrour@hotmail.com [Laboratory of Materials, Processes, Environment and Quality, Cady Ayyed University, National School of Applied Sciences, BP. 63, 46000 Safi (Morocco); LMPHE (URAC 12), Faculty of Science, Mohammed V-Agdal University, Rabat (Morocco); Hlil, E.K. [Institut Néel, CNRS et Université Joseph Fourier, BP 166, F-38042 Grenoble Cedex 9 (France); Hamedoun, M. [Institute of Nanomaterials and Nanotechnologies, MAScIR, Rabat (Morocco); Benyoussef, A. [LMPHE (URAC 12), Faculty of Science, Mohammed V-Agdal University, Rabat (Morocco); Institute of Nanomaterials and Nanotechnologies, MAScIR, Rabat (Morocco); Hassan II Academy of Science and Technology, Rabat (Morocco); Mounkachi, O.; El Moussaoui, H. [Institute of Nanomaterials and Nanotechnologies, MAScIR, Rabat (Morocco)
2014-06-01
Self-consistent ab initio calculations, based on the DFT (Density Functional Theory) approach and using the FLAPW (Full potential Linear Augmented Plane Wave) method, are performed to investigate both electronic and magnetic properties of the MnSe lattice. Polarized spin and spin–orbit coupling are included in calculations within the framework of the antiferromagnetic state between two adjacent Mn lattices. Magnetic moments considered to lie along (001) axes are computed. Obtained data from ab initio calculations are used as input for the high temperature series expansions (HTSEs) calculations to compute other magnetic parameters. The zero-field high temperature static susceptibility series of the spin −4.28 nearest-neighbor Ising model on face centered cubic (fcc) lattices is thoroughly analyzed by means of a power series coherent anomaly method (CAM). The exchange interaction between the magnetic atoms and the Néel temperature are deduced using the mean field and HTSEs theories. - Highlights: • Ab initio calculations are used to investigate both electronic and magnetic properties of the MnSe alloys. • Obtained data from ab initio calculations are used as input for the HTSEs. • The Néel temperature is obtained for MnSe alloys.
Class of reconstructed discontinuous Galerkin methods in computational fluid dynamics
International Nuclear Information System (INIS)
Luo, Hong; Xia, Yidong; Nourgaliev, Robert
2011-01-01
A class of reconstructed discontinuous Galerkin (DG) methods is presented to solve compressible flow problems on arbitrary grids. The idea is to combine the efficiency of the reconstruction methods in finite volume methods and the accuracy of the DG methods to obtain a better numerical algorithm in computational fluid dynamics. The beauty of the resulting reconstructed discontinuous Galerkin (RDG) methods is that they provide a unified formulation for both finite volume and DG methods, and contain both classical finite volume and standard DG methods as two special cases of the RDG methods, and thus allow for a direct efficiency comparison. Both Green-Gauss and least-squares reconstruction methods and a least-squares recovery method are presented to obtain a quadratic polynomial representation of the underlying linear discontinuous Galerkin solution on each cell via a so-called in-cell reconstruction process. The devised in-cell reconstruction aims to augment the accuracy of the discontinuous Galerkin method by increasing the order of the underlying polynomial solution. These three reconstructed discontinuous Galerkin methods are used to compute a variety of compressible flow problems on arbitrary meshes to assess their accuracy. The numerical experiments demonstrate that all three reconstructed discontinuous Galerkin methods can significantly improve the accuracy of the underlying second-order DG method, although the least-squares reconstructed DG method provides the best performance in terms of accuracy, efficiency, and robustness. (author)
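The least-squares reconstruction idea can be illustrated at its simplest: recover a cell's gradient from neighboring cell averages by minimizing Σ_j (u_j − u_c − g·(x_j − x_c))², i.e. solving the 2×2 normal equations. This is a standalone sketch of the principle, not the RDG implementation (which reconstructs a quadratic from a linear DG solution):

```python
def ls_gradient(xc, uc, nbr_x, nbr_u):
    """Least-squares gradient of u at a cell: minimize
    sum_j (u_j - u_c - g . (x_j - x_c))^2 over g = (gx, gy)
    by solving the 2x2 normal equations."""
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (x, y), u in zip(nbr_x, nbr_u):
        dx, dy, du = x - xc[0], y - xc[1], u - uc
        a11 += dx * dx; a12 += dx * dy; a22 += dy * dy
        b1 += dx * du;  b2 += dy * du
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det,
            (a11 * b2 - a12 * b1) / det)

# A linear field u = 2x + 3y must be reconstructed exactly.
gx, gy = ls_gradient((0.0, 0.0), 0.0,
                     [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (-1.0, 0.0)],
                     [2.0, 3.0, 5.0, -2.0])
```

Exact recovery of linear fields is the consistency property that makes such reconstructions raise the effective order of the underlying scheme.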
Data analysis through interactive computer animation method (DATICAM)
International Nuclear Information System (INIS)
Curtis, J.N.; Schwieder, D.H.
1983-01-01
DATICAM is an interactive computer animation method designed to aid in the analysis of nuclear research data. DATICAM was developed at the Idaho National Engineering Laboratory (INEL) by EG and G Idaho, Inc. INEL analysts use DATICAM to produce computer codes that are better able to predict the behavior of nuclear power reactors. In addition to increased code accuracy, DATICAM has saved manpower and computer costs. DATICAM has been generalized to assist in the data analysis of virtually any data-producing dynamic process
Multigrid methods for the computation of propagators in gauge fields
International Nuclear Information System (INIS)
Kalkreuter, T.
1992-11-01
In the present work, generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions, which define averaging kernels C. An efficient algorithm for computing C numerically is presented. The averaging kernels C can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)
Water demand forecasting: review of soft computing methods.
Ghalehkhondabi, Iman; Ardjmand, Ehsan; Young, William A; Weckman, Gary R
2017-07-01
Demand forecasting plays a vital role in resource management for governments and private companies. Considering the scarcity of water and its inherent constraints, demand management and forecasting in this domain are critically important. Several soft computing techniques have been developed over the last few decades for water demand forecasting. This study focuses on soft computing methods of water consumption forecasting published between 2005 and 2015. These methods include artificial neural networks (ANNs), fuzzy and neuro-fuzzy models, support vector machines, metaheuristics, and system dynamics. Furthermore, it is discussed that while ANNs have been superior in many short-term forecasting cases, it is still very difficult to pick a single method as the overall best. According to the literature, various methods and their hybrids are applied to water demand forecasting. However, it seems soft computing has a lot more to contribute to water demand forecasting. These contribution areas include, but are not limited to, various ANN architectures, unsupervised methods, deep learning, various metaheuristics, and ensemble methods. Moreover, it is found that soft computing methods are mainly used for short-term demand forecasting.
Short-term electric load forecasting using computational intelligence methods
Jurado, Sergio; Peralta, J.; Nebot, Àngela; Mugica, Francisco; Cortez, Paulo
2013-01-01
Accurate time series forecasting is a key issue to support individual and organizational decision making. In this paper, we introduce several methods for short-term electric load forecasting. All the presented methods stem from computational intelligence techniques: Random Forest, Nonlinear Autoregressive Neural Networks, Evolutionary Support Vector Machines and Fuzzy Inductive Reasoning. The performance of the suggested methods is experimentally justified with several experiments carried out...
A stochastic method for computing hadronic matrix elements
Energy Technology Data Exchange (ETDEWEB)
Alexandrou, Constantia [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; The Cyprus Institute, Nicosia (Cyprus). Computational-based Science and Technology Research Center; Dinter, Simon; Drach, Vincent [Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Jansen, Karl [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Deutsches Elektronen-Synchrotron (DESY), Zeuthen (Germany). John von Neumann-Inst. fuer Computing NIC; Hadjiyiannakou, Kyriakos [Cyprus Univ., Nicosia (Cyprus). Dept. of Physics; Renner, Dru B. [Thomas Jefferson National Accelerator Facility, Newport News, VA (United States); Collaboration: European Twisted Mass Collaboration
2013-02-15
We present a stochastic method for the calculation of baryon three-point functions that is more versatile compared to the typically used sequential method. We analyze the scaling of the error of the stochastically evaluated three-point function with the lattice volume and find a favorable signal-to-noise ratio suggesting that our stochastic method can be used efficiently at large volumes to compute hadronic matrix elements.
Ab initio study of spin-dependent transport in carbon nanotubes with iron and vanadium adatoms
DEFF Research Database (Denmark)
Fürst, Joachim Alexander; Brandbyge, Mads; Jauho, Antti-Pekka
2008-01-01
We present an ab initio study of spin-dependent transport in armchair carbon nanotubes with transition metal adsorbates: iron or vanadium. The method, based on density functional theory and nonequilibrium Green's functions, is used to compute the electronic structure and zero-bias conductance… …(majority or minority) being scattered depends on the adsorbate and is explained in terms of d-state filling. We contrast the single-walled carbon nanotube results to the simpler case of the adsorbate on a flat graphene sheet with periodic boundary conditions and corresponding width in the zigzag direction…
The Direct Lighting Computation in Global Illumination Methods
Wang, Changyaw Allen
1994-01-01
Creating realistic images is a computationally expensive process, but it is very important for applications such as interior design, product design, education, virtual reality, and movie special effects. To generate realistic images, state-of-the-art rendering techniques are employed to simulate global illumination, which accounts for the interreflection of light among objects. In this document, we formalize the global illumination problem into an eight-dimensional integral and discuss various methods that can accelerate the process of approximating this integral. We focus on the direct lighting computation, which accounts for the light reaching the viewer from the emitting sources after exactly one reflection; on Monte Carlo sampling methods; and on light source simplification. Results include a new sample generation method, a framework for the prediction of the total number of samples used in a solution, and a generalized Monte Carlo approach for computing the direct lighting from an environment which for the first time makes ray tracing feasible for highly complex environments.
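The direct-lighting estimator such work builds on has the generic area-sampling form E ≈ (1/N) Σ L·cosθ_receiver·cosθ_light / (r²·pdf). A sketch for a configuration with a closed-form answer, a uniform disk light directly above the receiving point (geometry, names, and parameters are mine, not the thesis's method):

```python
import math
import random

def direct_lighting_disk(L, R, d, n=100_000, seed=7):
    """Monte Carlo irradiance at a point on an upward-facing floor from a
    uniform-radiance disk light of radius R centered at height d directly
    above it: sample the light's area uniformly, weight each sample by
    cos(theta_receiver) * cos(theta_light) / r^2 and divide by the pdf."""
    rng = random.Random(seed)
    pdf = 1.0 / (math.pi * R * R)         # uniform area sampling on the disk
    total = 0.0
    for _ in range(n):
        rad = R * math.sqrt(rng.random()) # sqrt for uniform density in area
        phi = 2.0 * math.pi * rng.random()
        x, y = rad * math.cos(phi), rad * math.sin(phi)
        r2 = x * x + y * y + d * d
        cos_recv = d / math.sqrt(r2)      # floor normal is +z
        cos_light = d / math.sqrt(r2)     # emitter faces straight down
        total += L * cos_recv * cos_light / (r2 * pdf)
    return total / n

E = direct_lighting_disk(L=1.0, R=1.0, d=1.0)
# This configuration has the closed form  E = pi * L * R^2 / (R^2 + d^2).
exact = math.pi * 1.0 * 1.0 / (1.0 + 1.0)
```

Better sample-generation strategies, one of the thesis's stated results, reduce the variance of exactly this kind of estimator.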
Numerical methods design, analysis, and computer implementation of algorithms
Greenbaum, Anne
2012-01-01
Numerical Methods provides a clear and concise exploration of standard numerical analysis topics, as well as nontraditional ones, including mathematical modeling, Monte Carlo methods, Markov chains, and fractals. Filled with appealing examples that will motivate students, the textbook considers modern application areas, such as information retrieval and animation, and classical topics from physics and engineering. Exercises use MATLAB and promote understanding of computational results. The book gives instructors the flexibility to emphasize different aspects--design, analysis, or computer implementation--of numerical algorithms, depending on the background and interests of students. Designed for upper-division undergraduates in mathematics or computer science classes, the textbook assumes that students have prior knowledge of linear algebra and calculus, although these topics are reviewed in the text. Short discussions of the history of numerical methods are interspersed throughout the chapters. The book a...
Integrating computational methods to retrofit enzymes to synthetic pathways.
Brunk, Elizabeth; Neri, Marilisa; Tavernelli, Ivano; Hatzimanikatis, Vassily; Rothlisberger, Ursula
2012-02-01
Microbial production of desired compounds provides an efficient framework for the development of renewable energy resources. To be competitive to traditional chemistry, one requirement is to utilize the full capacity of the microorganism to produce target compounds with high yields and turnover rates. We use integrated computational methods to generate and quantify the performance of novel biosynthetic routes that contain highly optimized catalysts. Engineering a novel reaction pathway entails addressing feasibility on multiple levels, which involves handling the complexity of large-scale biochemical networks while respecting the critical chemical phenomena at the atomistic scale. To pursue this multi-layer challenge, our strategy merges knowledge-based metabolic engineering methods with computational chemistry methods. By bridging multiple disciplines, we provide an integral computational framework that could accelerate the discovery and implementation of novel biosynthetic production routes. Using this approach, we have identified and optimized a novel biosynthetic route for the production of 3HP from pyruvate. Copyright © 2011 Wiley Periodicals, Inc.
The Experiment Method for Manufacturing Grid Development on Single Computer
Institute of Scientific and Technical Information of China (English)
XIAO Youan; ZHOU Zude
2006-01-01
In this paper, an experiment method for developing Manufacturing Grid application systems in a single-personal-computer environment is proposed. The characteristic of the proposed method is that it constructs a full prototype Manufacturing Grid application system hosted on a single personal computer using virtual machine technology. First, it builds all the Manufacturing Grid physical resource nodes on an abstraction layer of a single personal computer with virtual machine technology. Second, the virtual Manufacturing Grid resource nodes are connected with a virtual network and the application software is deployed on each node. A prototype Manufacturing Grid application system running on a single personal computer is thus obtained, and experiments can be carried out on this foundation. Compared with known experiment methods for Manufacturing Grid application system development, the proposed method retains their advantages, such as low cost and simple operation, and yields trustworthy experimental results easily. A Manufacturing Grid application system constructed with the proposed method has high scalability, stability and reliability, and can be migrated to the real application environment rapidly.
Sakamoto, Shinichi; Otsuru, Toru
2014-01-01
This book reviews a variety of methods for wave-based acoustic simulation and recent applications to architectural and environmental acoustic problems. Following an introduction providing an overview of computational simulation of sound environment, the book is in two parts: four chapters on methods and four chapters on applications. The first part explains the fundamentals and advanced techniques for three popular methods, namely, the finite-difference time-domain method, the finite element method, and the boundary element method, as well as alternative time-domain methods. The second part demonstrates various applications to room acoustics simulation, noise propagation simulation, acoustic property simulation for building components, and auralization. This book is a valuable reference that covers the state of the art in computational simulation for architectural and environmental acoustics.
Marques, Yuri Bento; de Paiva Oliveira, Alcione; Ribeiro Vasconcelos, Ana Tereza; Cerqueira, Fabio Ribeiro
2016-12-15
MicroRNAs (miRNAs) are key gene expression regulators in plants and animals. Therefore, miRNAs are involved in several biological processes, making the study of these molecules one of the most relevant topics of molecular biology nowadays. However, characterizing miRNAs in vivo is still a complex task. As a consequence, in silico methods have been developed to predict miRNA loci. A common ab initio strategy to find miRNAs in genomic data is to search for sequences that can fold into the typical hairpin structure of miRNA precursors (pre-miRNAs). The current ab initio approaches, however, have selectivity issues, i.e., a high number of false positives is reported, which can lead to laborious and costly attempts to provide biological validation. This study presents an extension of the ab initio method miRNAFold, with the aim of improving selectivity through machine learning techniques, namely, random forest combined with the SMOTE procedure to cope with imbalanced datasets. By comparing our method, termed Mirnacle, with other important approaches in the literature, we demonstrate that Mirnacle substantially improves selectivity without compromising sensitivity. For the three datasets used in our experiments, our method achieved at least 97% sensitivity and delivered two-fold, 20-fold, and 6-fold increases in selectivity, respectively, compared with the best results of current computational tools. The extension of miRNAFold by the introduction of machine learning techniques significantly increases selectivity in pre-miRNA ab initio prediction, which contributes to advanced studies on miRNAs, as the need for biological validation is diminished. Hopefully, new research, such as studies of severe diseases caused by miRNA malfunction, will benefit from the proposed computational tool.
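The SMOTE step used to cope with imbalanced datasets can be illustrated with a minimal sketch: synthetic minority samples are generated by interpolating between a minority example and one of its k nearest minority neighbours. This is not Mirnacle's code; the function name and the simplified neighbour handling are assumptions.

```python
import numpy as np

def smote(X_min, n_new, k=3, rng=None):
    """Minimal SMOTE sketch: synthesize `n_new` minority-class samples by
    interpolating each randomly picked sample toward one of its k nearest
    minority neighbours (a simplification of the full SMOTE procedure)."""
    rng = rng or np.random.default_rng(0)
    X_min = np.asarray(X_min, dtype=float)
    # pairwise Euclidean distances within the minority class
    d = np.linalg.norm(X_min[:, None, :] - X_min[None, :, :], axis=-1)
    np.fill_diagonal(d, np.inf)              # a point is not its own neighbour
    nbrs = np.argsort(d, axis=1)[:, :k]      # indices of k nearest neighbours
    out = []
    for _ in range(n_new):
        i = rng.integers(len(X_min))
        j = nbrs[i, rng.integers(k)]
        gap = rng.random()                   # interpolation fraction in [0, 1)
        out.append(X_min[i] + gap * (X_min[j] - X_min[i]))
    return np.array(out)
```

Synthetic points lie on segments between existing minority samples, so the oversampled class stays inside its original convex hull.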
Hamiltonian lattice field theory: Computer calculations using variational methods
International Nuclear Information System (INIS)
Zako, R.L.
1991-01-01
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
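The core step, approximating energy eigenvalues of a continuum theory on a finite spatial lattice, can be illustrated for a single quantum-mechanical degree of freedom: diagonalising the discretised Hamiltonian is the Rayleigh-Ritz principle applied in the grid basis. The harmonic-oscillator example and all parameter choices below are assumptions, not the author's actual computation.

```python
import numpy as np

def lattice_spectrum(n=400, L=20.0):
    """Energy levels of a 1D harmonic oscillator (hbar = m = omega = 1)
    on a finite spatial lattice. Diagonalising T + V in the grid basis is
    equivalent to Rayleigh-Ritz with grid functions as trial states."""
    x = np.linspace(-L / 2, L / 2, n)
    h = x[1] - x[0]
    # kinetic term: second-order finite-difference Laplacian, -(1/2) d^2/dx^2
    T = (np.diag(np.full(n, 2.0))
         - np.diag(np.ones(n - 1), 1)
         - np.diag(np.ones(n - 1), -1)) / (2 * h * h)
    V = np.diag(0.5 * x * x)
    return np.linalg.eigvalsh(T + V)

E = lattice_spectrum()
# lowest levels approach the exact values 0.5, 1.5, 2.5, ... as h -> 0
```

Shrinking the lattice spacing and enlarging the box reduces the discretisation error, mirroring the limits discussed in the abstract.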
Application of statistical method for FBR plant transient computation
International Nuclear Information System (INIS)
Kikuchi, Norihiro; Mochizuki, Hiroyasu
2014-01-01
Highlights: • A statistical method with a large trial number, up to 10,000, is applied to the plant system analysis. • A turbine trip test conducted at the “Monju” reactor is selected as a plant transient. • A method for reducing the number of trials is discussed. • The result with a reduced trial number can express the base regions of the computed distribution. -- Abstract: It is obvious that design tolerances, errors included in operation, and statistical errors in empirical correlations affect the transient behavior. The purpose of the present study is to apply the above-mentioned statistical errors to a plant system computation in order to evaluate the statistical distribution contained in the transient evolution. The selected computation case is the turbine trip test conducted at 40% electric power of the prototype fast reactor “Monju”. All of the heat transport systems of “Monju” are modeled with the NETFLOW++ system code, which has been validated using the plant transient tests of the experimental fast reactor Joyo and of “Monju”. The effects of parameters on upper plenum temperature are confirmed by sensitivity analyses, and dominant parameters are chosen. The statistical errors are applied to each computation deck by using pseudorandom numbers and the Monte Carlo method. The dSFMT (Double-precision SIMD-oriented Fast Mersenne Twister), a refined version of the Mersenne Twister (MT), is adopted as the pseudorandom number generator. In the present study, uniform random numbers are generated by dSFMT, and these random numbers are transformed to the normal distribution by the Box–Muller method. Ten thousand different computations are performed at once. In every computation case, the steady-state calculation is performed for 12,000 s, and the transient calculation is performed for 4000 s. For the purpose of the present statistical computation, it is important that the base regions of the distribution functions be calculated precisely. A large number of
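The Box–Muller step used to turn uniform variates into normally distributed perturbations can be sketched as follows. Python's built-in Mersenne Twister stands in for dSFMT here, and the function name is an assumption.

```python
import math
import random

def box_muller(n, rng=None):
    """Transform uniform variates (Python's Mersenne Twister here, as a
    stand-in for dSFMT) into standard normal variates via Box-Muller."""
    rng = rng or random.Random(42)
    out = []
    for _ in range((n + 1) // 2):
        u1 = 1.0 - rng.random()          # in (0, 1], avoids log(0)
        u2 = rng.random()
        r = math.sqrt(-2.0 * math.log(u1))
        out.append(r * math.cos(2.0 * math.pi * u2))
        out.append(r * math.sin(2.0 * math.pi * u2))
    return out[:n]
```

Each pair of uniforms yields two independent standard normals; scaling and shifting them gives the perturbed input parameters for each Monte Carlo trial.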
Electron beam treatment planning: A review of dose computation methods
International Nuclear Information System (INIS)
Mohan, R.; Riley, R.; Laughlin, J.S.
1983-01-01
Various methods of dose computation are reviewed. The equivalent path length methods used to account for body curvature and internal structure are not adequate because they ignore the lateral diffusion of electrons. The Monte Carlo method for the broad-field three-dimensional situation in treatment planning is impractical because of the enormous computer time required. The pencil beam technique may represent a suitable compromise. The behavior of a pencil beam may be described by multiple scattering theory or, alternatively, generated using the Monte Carlo method. Although nearly two orders of magnitude slower than the equivalent path length technique, the pencil beam method improves accuracy sufficiently to justify its use. It applies very well when accounting for the effect of surface irregularities; the formulation for handling inhomogeneous internal structure is yet to be developed
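The pencil beam idea, a broad field modelled as a superposition of narrow beams with Gaussian lateral spread, can be sketched in one dimension. The Gaussian kernel, spacing and widths below are illustrative assumptions, not clinical data or the authors' formulation.

```python
import math

def broad_beam_profile(xs, field_half_width, sigma, spacing=0.05):
    """Lateral dose profile at one depth as a superposition of Gaussian
    pencil beams of lateral spread `sigma`, spaced `spacing` apart across
    the field (illustrative units, not clinical data)."""
    n = int(2 * field_half_width / spacing) + 1
    centers = [-field_half_width + i * spacing for i in range(n)]
    norm = 1.0 / (sigma * math.sqrt(2.0 * math.pi))
    profile = []
    for x in xs:
        d = sum(norm * math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))
                for c in centers)
        profile.append(d * spacing)   # weight each pencil by its spacing
    return profile
```

Inside a wide field the pencils sum to roughly unit dose, while the profile falls smoothly to zero over a few sigma beyond the field edge, which is the lateral-diffusion behaviour the equivalent path length methods miss.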
International Nuclear Information System (INIS)
Masrour, R.; Jabar, A.; Hlil, E.K.; Hamedoun, M.; Benyoussef, A.; Hourmatallah, A.; Rezzouk, A.; Bouslykhane, K.; Benzakour, N.
2017-01-01
Self-consistent ab initio calculations, based on the Density Functional Theory (DFT) approach and using the Full-potential Linearized Augmented Plane Wave (FLAPW) method, are performed to investigate both the electronic and magnetic properties of Mn2NiAl. Magnetic moments, considered to lie along the (001) axis, are computed. Data obtained from the ab initio calculations are used as input for Monte Carlo simulations to compute other magnetic parameters. The magnetic properties of Mn2NiAl are thus studied using Monte Carlo simulations. The variation of the magnetization and magnetic susceptibility with the reduced temperature of Mn2NiAl is investigated. The transition temperature of this system is deduced for different values of the exchange interaction and crystal field. The thermal total magnetization has been obtained, and the magnetic hysteresis cycle is established. The total magnetic moment is larger than those obtained by the other method and is mainly determined by the antiparallel aligned MnI, MnII and Ni spin moments. A superparamagnetic phase is found in the neighborhood of the transition temperature. - Highlights: • Ab initio calculations are used to study magnetic and electronic properties of Mn2NiX. • The transition temperature of Mn2NiX is established. • The magnetic hysteresis cycle of Mn2NiX (X = Al, Ga, In, Sn) is deduced. • The magnetic coercive field of Mn2NiX (X = Al, Ga, In, Sn) is given.
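The Monte Carlo stage of such a study, sampling spin configurations with the Metropolis rule to obtain the magnetization as a function of temperature, can be sketched with a 2D Ising toy model. The lattice, couplings and reduced units below are assumptions and are far simpler than the Heusler-alloy Hamiltonian used in the paper.

```python
import math
import random

def ising_magnetization(L=16, T=1.5, J=1.0, sweeps=400, rng=None):
    """Metropolis Monte Carlo magnetisation per spin of a 2D Ising
    ferromagnet with periodic boundaries (J and T in reduced units;
    a toy stand-in for the paper's DFT-parameterised simulations)."""
    rng = rng or random.Random(1)
    s = [[1] * L for _ in range(L)]          # start fully ordered
    for _ in range(sweeps):
        for _ in range(L * L):
            i, j = rng.randrange(L), rng.randrange(L)
            nb = (s[(i + 1) % L][j] + s[(i - 1) % L][j]
                  + s[i][(j + 1) % L] + s[i][(j - 1) % L])
            dE = 2.0 * J * s[i][j] * nb      # energy cost of flipping
            if dE <= 0.0 or rng.random() < math.exp(-dE / T):
                s[i][j] = -s[i][j]           # accept the flip
    return abs(sum(sum(row) for row in s)) / (L * L)
```

Scanning T through the critical region would reproduce the magnetization and susceptibility curves described in the abstract; below the transition the system stays strongly magnetized.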
International Nuclear Information System (INIS)
Fink, R.F.; Pfister, J.; Schneider, A.; Zhao, H.; Engels, B.
2008-01-01
We present new, generally applicable protocols for the computation of the coupling parameter, J, of excitation energy transfer with quantum chemical ab initio methods. The protocols allow one to select the degree of approximation and computational demand such that they are applicable to realistic systems while still allowing the quality of the approach to be controlled. We demonstrate the capabilities of the different protocols using the CO dimer as a first example. Correlation effects are found to scale J by a factor of about 0.7, which is in good agreement with earlier results obtained for the ethene dimer. The various levels of the protocol allow one to assess the influence of ionic configurations and of the polarisation within the dimer. Further, the interplay between the Förster and Dexter contributions to J is investigated. The computations also show error compensation within approximations that are widely used for extended systems, in particular the transition density cube method
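For orientation, the crude point-dipole (Förster) limit that such ab initio protocols go beyond reduces to a single formula, J = kappa * |mu1| * |mu2| / R^3 in atomic units, with kappa the orientation factor. The function name is an assumption.

```python
def forster_coupling(mu1, mu2, kappa, R):
    """Point-dipole (Foerster) estimate of the excitonic coupling J in
    atomic units: J = kappa * |mu1| * |mu2| / R**3. This is the crude
    limit that transition-density and ab initio treatments refine."""
    return kappa * mu1 * mu2 / R ** 3
```

The 1/R^3 falloff and the orientation factor are exactly the pieces that break down at short range, where transition density cube or full ab initio couplings are required.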
Computational methods for three-dimensional microscopy reconstruction
Frank, Joachim
2014-01-01
Approaches to the recovery of three-dimensional information on a biological object, which are often formulated or implemented initially in an intuitive way, are concisely described here based on physical models of the object and the image-formation process. Both three-dimensional electron microscopy and X-ray tomography can be captured in the same mathematical framework, leading to closely-related computational approaches, but the methodologies differ in detail and hence pose different challenges. The editors of this volume, Gabor T. Herman and Joachim Frank, are experts in the respective methodologies and present research at the forefront of biological imaging and structural biology. Computational Methods for Three-Dimensional Microscopy Reconstruction will serve as a useful resource for scholars interested in the development of computational methods for structural biology and cell biology, particularly in the area of 3D imaging and modeling.
Kempisty, Pawel; Strąk, Paweł; Sakowski, Konrad; Kangawa, Yoshihiro; Krukowski, Stanisław
2017-11-08
Thermodynamic foundations of ab initio modeling of vapor-solid and vapor-surface equilibria are introduced. The chemical potential change is divided into enthalpy and entropy terms. The enthalpy path passes through the vapor-solid transition at zero temperature. The entropy path avoids the singular point at zero temperature by passing through a solid-vapor transition under normal conditions, where the evaporation entropy is employed. In addition, the thermal changes are calculated. The chemical potential difference is obtained as the contribution of the following terms: the vaporization enthalpy, the vaporization entropy, the temperature-entropy-related change, the thermal enthalpy change and the mechanical pressure. The latter term is negligibly small for pressures typical of epitaxy. The thermal enthalpy change is two orders of magnitude smaller than the first three terms, which have to be taken into account explicitly. The configurational vaporization entropy change is derived for adsorption processes. The same formulation is derived for vapor-surface equilibria, using hydrogen at the GaN(0001) surface as an example. The critical factor is the dependence of the enthalpy of evaporation (desorption energy) on the pinning of the Fermi level, which brings a drastic change of the value from 2.24 eV to -2.38 eV. In addition, it is shown that entropic contributions considerably change the hydrogen equilibrium pressure over the GaN(0001) surface by several orders of magnitude. Thus a complete and exact formulation of vapor-solid and vapor-surface equilibria is presented.
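The way these enthalpy and entropy terms combine into an equilibrium pressure can be sketched via Delta G = Delta H - T * Delta S and p = p0 * exp(-Delta G / (k_B * T)), which also makes plain why a desorption-energy swing from 2.24 eV to -2.38 eV shifts the pressure by many orders of magnitude. The values passed in below are illustrative, not the paper's computed numbers.

```python
import math

K_B = 8.617e-5  # Boltzmann constant in eV/K

def equilibrium_pressure(dH, dS, T, p0=1.0):
    """Equilibrium pressure from Delta G = Delta H - T * Delta S via
    p = p0 * exp(-Delta G / (k_B * T)). dH in eV, dS in eV/K, T in K.
    Illustrative sketch only; inputs are not the GaN(0001) values."""
    dG = dH - T * dS
    return p0 * math.exp(-dG / (K_B * T))
```

At 1000 K, k_B * T is about 0.086 eV, so changing the enthalpy term from +2.24 eV to -2.38 eV changes the exponent by more than 50 units, a pressure swing of over twenty orders of magnitude.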
Mo, Yuxiang; Gao, Shuming; Dai, Zuyang; Li, Hua
2013-06-01
We report a combined experimental and theoretical study on the vibronic structure of CH3F+. The results show that tunneling splittings of the vibrational energy levels occur in CH3F+ due to the Jahn-Teller effect. Experimentally, we have measured a high-resolution ZEKE spectrum of CH3F up to 3500 cm^-1 above the ground state. Theoretically, we performed an ab initio calculation based on the diabatic model. The adiabatic potential energy surfaces (APES) of CH3F+ have been calculated at the MRCI/CAS/avq(t)z level and expressed as Taylor expansions with normal coordinates as variables. The energy gradients for the lower and upper APES, the derivative couplings between them and also the energies of the APES have been used to determine the coefficients in the Taylor expansion. The spin-vibronic energy levels have been calculated by accounting for all six vibrational modes and their couplings. The experimental ZEKE spectra were assigned based on the theoretical calculations. W. Domcke, D. R. Yarkony, and H. Köppel (Eds.), Conical Intersections: Electronic Structure, Dynamics and Spectroscopy (World Scientific, Singapore, 2004). M. S. Schuurman, D. E. Weinberg, and D. R. Yarkony, J. Chem. Phys. 127, 104309 (2007).
Computations of finite temperature QCD with the pseudofermion method
International Nuclear Information System (INIS)
Fucito, F.; Solomon, S.
1985-01-01
The authors discuss the phase diagram of finite temperature QCD as it is obtained when the effects of dynamical quarks are included via the pseudofermion method. They compare their results with those obtained by other groups and comment on the actual state of the art for this kind of computation
Multiscale methods in computational fluid and solid mechanics
Borst, de R.; Hulshoff, S.J.; Lenz, S.; Munts, E.A.; Brummelen, van E.H.; Wall, W.; Wesseling, P.; Onate, E.; Periaux, J.
2006-01-01
First, an attempt is made towards gaining a more systematic understanding of recent progress in multiscale modelling in computational solid and fluid mechanics. Subsequently, the discussion is focused on variational multiscale methods for the compressible and incompressible Navier-Stokes equations
International Nuclear Information System (INIS)
Oka, Yoshiaki; Okuda, Hiroshi
2006-01-01
Scientific computational methods have advanced remarkably with the progress of nuclear development. They have played the role of a weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies was therefore prepared in serial form. This first issue gives an overview and an introduction to continuum simulation methods. The finite element method, as one of their applications, is also reviewed. (T. Tanaka)
Regression modeling methods, theory, and computation with SAS
Panik, Michael
2009-01-01
Regression Modeling: Methods, Theory, and Computation with SAS provides an introduction to a diverse assortment of regression techniques using SAS to solve a wide variety of regression problems. The author fully documents the SAS programs and thoroughly explains the output produced by the programs. The text presents the popular ordinary least squares (OLS) approach before introducing many alternative regression methods. It covers nonparametric regression, logistic regression (including Poisson regression), Bayesian regression, robust regression, fuzzy regression, random coefficients regression,
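The OLS approach the book starts from can be sketched outside SAS in a few lines; this mirrors the basic quantities a PROC REG run reports (coefficients, fitted values, R-squared). The function name and return layout are assumptions.

```python
import numpy as np

def ols(X, y):
    """Ordinary least squares with an intercept: returns the coefficient
    vector, fitted values, and R^2 (a numpy sketch of the core quantities
    a SAS PROC REG run reports)."""
    X1 = np.column_stack([np.ones(len(X)), X])   # prepend intercept column
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    yhat = X1 @ beta
    ss_res = float(np.sum((y - yhat) ** 2))
    ss_tot = float(np.sum((y - np.mean(y)) ** 2))
    return beta, yhat, 1.0 - ss_res / ss_tot
```

On data generated exactly as y = 2 + 3x, the fit recovers the intercept 2, the slope 3, and R^2 = 1.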
Recent Development in Rigorous Computational Methods in Dynamical Systems
Arai, Zin; Kokubu, Hiroshi; Pilarczyk, Paweł
2009-01-01
We highlight selected results of recent development in the area of rigorous computations which use interval arithmetic to analyse dynamical systems. We describe general ideas and selected details of different ways of approach and we provide specific sample applications to illustrate the effectiveness of these methods. The emphasis is put on a topological approach, which combined with rigorous calculations provides a broad range of new methods that yield mathematically rel...
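The interval-arithmetic core of such rigorous computations can be sketched with a tiny class. A genuinely rigorous enclosure would also need outward rounding of the floating-point bounds, which is omitted here; the class illustrates only the algebra of propagating intervals through a map.

```python
class Interval:
    """Toy interval arithmetic: an enclosure [lo, hi] propagated through
    +, -, *. Outward rounding is omitted, so this is illustrative, not a
    fully rigorous enclosure."""
    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def __add__(self, o):
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __sub__(self, o):
        return Interval(self.lo - o.hi, self.hi - o.lo)

    def __mul__(self, o):
        # the product interval is bounded by the four endpoint products
        ps = [self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi]
        return Interval(min(ps), max(ps))

    def contains(self, x):
        return self.lo <= x <= self.hi
```

Evaluating the logistic map f(x) = r*x*(1-x) on an interval yields an enclosure guaranteed (up to rounding) to contain the image of every point in it, which is the building block for computer-assisted proofs about dynamical systems.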
Method and system for environmentally adaptive fault tolerant computing
Copenhaver, Jason L. (Inventor); Jeremy, Ramos (Inventor); Wolfe, Jeffrey M. (Inventor); Brenner, Dean (Inventor)
2010-01-01
A method and system for adapting fault tolerant computing. The method includes the steps of measuring an environmental condition representative of an environment. An on-board processing system's sensitivity to the measured environmental condition is measured. It is determined whether to reconfigure a fault tolerance of the on-board processing system based in part on the measured environmental condition. The fault tolerance of the on-board processing system may be reconfigured based in part on the measured environmental condition.
Numerical evaluation of methods for computing tomographic projections
International Nuclear Information System (INIS)
Zhuang, W.; Gopal, S.S.; Hebert, T.J.
1994-01-01
Methods for computing forward/back projections of 2-D images can be viewed as numerical integration techniques. The accuracy of any ray-driven projection method can be improved by increasing the number of ray-paths that are traced per projection bin. The accuracy of pixel-driven projection methods can be increased by dividing each pixel into a number of smaller sub-pixels and projecting each sub-pixel. The authors compared four competing methods of computing forward/back projections: bilinear interpolation, ray-tracing, pixel-driven projection based upon sub-pixels, and pixel-driven projection based upon circular, rather than square, pixels. This latter method is equivalent to a fast, bi-nonlinear interpolation. These methods and the choice of the number of ray-paths per projection bin or the number of sub-pixels per pixel present a trade-off between computational speed and accuracy. To solve the problem of assessing backprojection accuracy, the analytical inverse Fourier transform of the ramp filtered forward projection of the Shepp and Logan head phantom is derived
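The pixel-driven, sub-pixel projection scheme compared in the paper can be sketched for a parallel-beam geometry: each pixel is split into sub-pixels, and each sub-pixel deposits its share of the pixel value into the detector bin its centre projects onto. The coordinate conventions and names below are assumptions.

```python
import math

def forward_project(img, theta, n_bins, bin_width, subs=2):
    """Pixel-driven parallel-beam forward projection with sub-pixel
    splitting: each of the subs x subs sub-pixels contributes an equal
    share of the pixel value to the bin its centre projects onto."""
    n = len(img)
    proj = [0.0] * n_bins
    c, s = math.cos(theta), math.sin(theta)
    for i in range(n):
        for j in range(n):
            if img[i][j] == 0.0:
                continue
            share = img[i][j] / (subs * subs)
            for a in range(subs):
                for b in range(subs):
                    # sub-pixel centre in image-centred coordinates
                    x = j - n / 2 + (b + 0.5) / subs
                    y = i - n / 2 + (a + 0.5) / subs
                    t = x * c + y * s          # signed detector coordinate
                    k = int(t / bin_width + n_bins / 2)
                    if 0 <= k < n_bins:
                        proj[k] += share
    return proj
```

Increasing `subs` refines the numerical integration per pixel, which is exactly the accuracy-versus-speed trade-off the abstract describes; as long as all sub-pixels land on the detector, the total projected mass equals the image sum.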
High-integrity software, computation and the scientific method
International Nuclear Information System (INIS)
Hatton, L.
2012-01-01
Computation rightly occupies a central role in modern science. Datasets are enormous and the processing implications of some algorithms are equally staggering. With the continuing difficulties in quantifying the results of complex computations, it is of increasing importance to understand the role of computation in the essentially Popperian scientific method. In this paper, some of the problems with computation, for example the long-term unquantifiable presence of undiscovered defects, problems with programming languages, and process issues, will be explored with numerous examples. One of the aims of the paper is to understand the implications of trying to produce high-integrity software and the limitations which still exist. Unfortunately, Computer Science itself suffers from an inability to be suitably critical of its practices and has operated in a largely measurement-free vacuum since its earliest days. Within computer science itself, this has not been so damaging, in that it simply leads to unconstrained creativity and a rapid turnover of new technologies. In the applied sciences, however, which have to depend on computational results, such unquantifiability significantly undermines trust. It is time this particular demon was put to rest. (author)
Computational biology in the cloud: methods and new insights from computing at scale.
Kasson, Peter M
2013-01-01
The past few years have seen both explosions in the size of biological data sets and the proliferation of new, highly flexible on-demand computing capabilities. The sheer amount of information available from genomic and metagenomic sequencing, high-throughput proteomics, experimental and simulation datasets on molecular structure and dynamics affords an opportunity for greatly expanded insight, but it creates new challenges of scale for computation, storage, and interpretation of petascale data. Cloud computing resources have the potential to help solve these problems by offering a utility model of computing and storage: near-unlimited capacity, the ability to burst usage, and cheap and flexible payment models. Effective use of cloud computing on large biological datasets requires dealing with non-trivial problems of scale and robustness, since performance-limiting factors can change substantially when a dataset grows by a factor of 10,000 or more. New computing paradigms are thus often needed. The use of cloud platforms also creates new opportunities to share data, reduce duplication, and to provide easy reproducibility by making the datasets and computational methods easily available.
Edwin, Bismi; Joe, I. Hubert
2013-10-01
Vibrational analysis of anti-epileptic drug vigabatrin, a structural GABA analog was carried out using NIR FT-Raman and FTIR spectroscopic techniques. The equilibrium geometry, various bonding features and harmonic vibrational wavenumbers were studied using density functional theory method. The detailed interpretation of the vibrational spectra has been carried out with the aid of VEDA.4 program. Vibrational spectra, natural bond orbital analysis and optimized molecular structure show clear evidence for the effect of electron charge transfer on the activity of the molecule. Predicted electronic absorption spectrum from TD-DFT calculation has been compared with the UV-vis spectrum. The Mulliken population analysis on atomic charges and the HOMO-LUMO energy were also calculated. Good consistency is found between the calculated results and experimental data for the electronic absorption as well as IR and Raman spectra. The blue-shifting of the C-C stretching wavenumber reveals that the vinyl group is actively involved in the conjugation path. The NBO analysis confirms the occurrence of intramolecular hyperconjugative interactions resulting in ICT causing stabilization of the system.
Palmer, Michael H.; Vrønning Hoffmann, Søren; Jones, Nykola C.; Coreno, Marcello; de Simone, Monica; Grazioli, Cesare
2018-06-01
The vacuum ultraviolet (VUV) spectrum for CH2F2 from a new synchrotron study has been combined with earlier data and subjected to detailed scrutiny. The onset of absorption (band I) and also band IV are resolved into broad vibrational peaks, which contrast with the continuous absorption previously claimed. A new theoretical analysis, using a combination of time-dependent density functional theory (TDDFT) calculations and complete active space self-consistent field calculations, leads to a major new interpretation. Adiabatic excitation energies (AEEs) and vertical excitation energies, evaluated by these methods, are used to interpret the spectra in unprecedented detail using theoretical vibronic analysis. This includes both Franck-Condon (FC) and Herzberg-Teller (HT) effects on cold and hot bands. These results lead to the re-assignment of several known excited states and the identification of new ones. The lowest calculated AEE sequence for singlet states is 1¹B₁ ~ 1¹A₂, as expected; the onset of the 15.5 eV band shows a set of vibrational peaks, but the vibration frequency does not correspond to any of the photoelectron spectral (PES) structure and is clearly valence in nature. The routine use of PES footprints to detect Rydberg states in VUV spectra is shown to be inadequate. The combined effects of FC and HT in the VUV spectral bands lead to additional vibrations when compared with the PES.
Computational Methods for Modeling Aptamers and Designing Riboswitches
Directory of Open Access Journals (Sweden)
Sha Gong
2017-11-01
Full Text Available Riboswitches, which are located within certain noncoding RNA regions, function as genetic “switches”, regulating when and where genes are expressed in response to certain ligands. Understanding the numerous functions of riboswitches requires computational models to predict structures and structural changes of the aptamer domains. Although aptamers often form complex structures, computational approaches such as RNAComposer and Rosetta have already been applied to model the tertiary (three-dimensional, 3D) structure of several aptamers. As structural changes in aptamers must be achieved within a certain time window for effective regulation, kinetics is another key point for understanding aptamer function in riboswitch-mediated gene regulation. The coarse-grained self-organized polymer (SOP) model using Langevin dynamics simulation has been successfully developed to investigate the folding kinetics of aptamers, while their co-transcriptional folding kinetics can be modeled by the helix-based computational method and the BarMap approach. Based on known aptamers, the web server Riboswitch Calculator and other theoretical methods provide new tools for designing synthetic riboswitches. This review presents an overview of these computational methods for modeling the structure and kinetics of riboswitch aptamers and for designing riboswitches.
Computational electrodynamics the finite-difference time-domain method
Taflove, Allen
2005-01-01
This extensively revised and expanded third edition of the Artech House bestseller, Computational Electrodynamics: The Finite-Difference Time-Domain Method, offers engineers the most up-to-date and definitive resource on this critical method for solving Maxwell's equations. The method helps practitioners design antennas, wireless communications devices, high-speed digital and microwave circuits, and integrated optical devices with unsurpassed efficiency. There has been considerable advancement in FDTD computational technology over the past few years, and the third edition brings professionals the very latest details with entirely new chapters on important techniques, major updates on key topics, and new discussions on emerging areas such as nanophotonics. What's more, to supplement the third edition, the authors have created a Web site with solutions to problems, downloadable graphics and videos, and updates, making this new edition the ideal textbook on the subject as well.
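The heart of the FDTD method can be illustrated with a one-dimensional Yee-style leapfrog update. This sketch is not taken from the book: the grid size, step count, Courant number and Gaussian soft source are arbitrary illustrative choices.

```python
import numpy as np

# One-dimensional FDTD sketch in normalized units.
nz, nt, S = 200, 400, 0.5     # grid points, time steps, Courant number
ez = np.zeros(nz)             # E field at integer grid points
hy = np.zeros(nz - 1)         # H field at the half-integer points between

for n in range(nt):
    hy += S * np.diff(ez)                    # update H from the curl of E
    ez[1:-1] += S * np.diff(hy)              # update E from the curl of H
    ez[nz // 2] += np.exp(-((n - 30) / 10.0) ** 2)   # soft Gaussian source

# The leapfrog scheme is stable in 1D for S <= 1, so amplitudes stay bounded.
print(float(np.max(np.abs(ez))))
```

The E and H fields live on staggered grid points and are updated in alternation; the full 3D Yee grid, boundary conditions and dispersion analysis covered in the book follow the same pattern.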
A Computationally Efficient Method for Polyphonic Pitch Estimation
Directory of Open Access Journals (Sweden)
Ruohua Zhou
2009-01-01
Full Text Available This paper presents a computationally efficient method for polyphonic pitch estimation. The method employs the Fast Resonator Time-Frequency Image (RTFI) as the basic time-frequency analysis tool. The approach is composed of two main stages. First, a preliminary pitch estimation is obtained by means of a simple peak-picking procedure in the pitch energy spectrum. This spectrum is calculated from the original RTFI energy spectrum according to harmonic grouping principles. Then, incorrect estimations are removed according to spectral irregularity and knowledge of the harmonic structures of the music notes played on commonly used music instruments. The new approach is compared with a variety of other frame-based polyphonic pitch estimation methods, and results demonstrate the high performance and computational efficiency of the approach.
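The first stage (harmonic grouping, then peak picking) can be sketched as follows. This is a simplified stand-in: a plain FFT replaces the paper's RTFI analysis, and the candidate range, harmonic count and threshold are assumed values.

```python
import numpy as np

fs = 8000
t = np.arange(0, 1.0, 1 / fs)
# Two simultaneous tones (220 Hz and 330 Hz), three harmonics each.
x = sum(np.sin(2 * np.pi * f0 * k * t) / k
        for f0 in (220.0, 330.0) for k in (1, 2, 3))

spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
freqs = np.fft.rfftfreq(len(x), 1 / fs)

def harmonic_energy(f0, n_harm=5):
    # Harmonic grouping: sum spectral magnitude at multiples of f0.
    idx = [np.argmin(np.abs(freqs - k * f0)) for k in range(1, n_harm + 1)]
    return float(sum(spec[i] for i in idx))

candidates = np.arange(150.0, 500.0, 1.0)
energy = np.array([harmonic_energy(f) for f in candidates])

# Stage 1: simple peak picking in the "pitch energy spectrum".
peaks = [float(candidates[i]) for i in range(1, len(energy) - 1)
         if energy[i - 1] < energy[i] > energy[i + 1]
         and energy[i] > 0.5 * energy.max()]
print(peaks)
```

On this synthetic mixture the peak picker recovers 220 Hz and 330 Hz but also reports a spurious subharmonic near 165 Hz; that is exactly the kind of incorrect estimate the paper's second stage removes using spectral irregularity and harmonic-structure knowledge.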
Evolutionary Computation Methods and their applications in Statistics
Directory of Open Access Journals (Sweden)
Francesco Battaglia
2013-05-01
Full Text Available A brief discussion of the genesis of evolutionary computation methods, their relationship to artificial intelligence, and the contribution of genetics and Darwin’s theory of natural evolution is provided. Then, the main evolutionary computation methods are illustrated: evolution strategies, genetic algorithms, estimation of distribution algorithms, differential evolution, and a brief description of some evolutionary behavior methods such as ant colony and particle swarm optimization. We also discuss the role of the genetic algorithm in random generation of multivariate probability distributions, rather than as a function optimizer. Finally, some relevant applications of genetic algorithms to statistical problems are reviewed: selection of variables in regression, time series model building, outlier identification, cluster analysis, and design of experiments.
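A genetic algorithm, one of the methods surveyed, can be shown in miniature on the OneMax toy problem (maximize the number of 1-bits in a string). All parameters here are illustrative choices, not taken from the article.

```python
import random

random.seed(0)
L, POP, GENS, MUT = 20, 30, 60, 0.02   # bits, population, generations, mutation rate

def fitness(bits):
    return sum(bits)                    # OneMax: count the 1-bits

pop = [[random.randint(0, 1) for _ in range(L)] for _ in range(POP)]

def select():
    # Tournament selection: the fitter of two random individuals wins.
    a, b = random.sample(pop, 2)
    return a if fitness(a) >= fitness(b) else b

for _ in range(GENS):
    nxt = []
    while len(nxt) < POP:
        p1, p2 = select(), select()
        cut = random.randrange(1, L)                 # one-point crossover
        child = [b ^ (random.random() < MUT)         # bit-flip mutation
                 for b in p1[:cut] + p2[cut:]]
        nxt.append(child)
    pop = nxt

best = max(pop, key=fitness)
print(fitness(best))   # typically at or near the optimum of 20
```

The same selection/crossover/mutation loop underlies the statistical applications listed in the article, with the bit string replaced by, for example, a subset of regression variables.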
Variational-moment method for computing magnetohydrodynamic equilibria
International Nuclear Information System (INIS)
Lao, L.L.
1983-08-01
A fast yet accurate method to compute magnetohydrodynamic equilibria is provided by the variational-moment method, which is similar to the classical Rayleigh-Ritz-Galerkin approximation. The equilibrium solution sought is decomposed into a spectral representation. The partial differential equations describing the equilibrium are then recast into their equivalent variational form and systematically reduced to an optimum finite set of coupled ordinary differential equations. An appropriate spectral decomposition can make the series representing the solution converge rapidly and hence substantially reduces the amount of computational time involved. The moment method was developed first to compute fixed-boundary inverse equilibria in axisymmetric toroidal geometry, and was demonstrated to be both efficient and accurate. The method since has been generalized to calculate free-boundary axisymmetric equilibria, to include toroidal plasma rotation and pressure anisotropy, and to treat three-dimensional toroidal geometry. In all these formulations, the flux surfaces are assumed to be smooth and nested so that the solutions can be decomposed in Fourier series in inverse coordinates. These recent developments and the advantages and limitations of the moment method are reviewed. The use of alternate coordinates for decomposition is discussed
DEFF Research Database (Denmark)
Palmer, Michael H.; Camp, Philip J.; Hoffmann, Søren Vrønning
2012-01-01
The first vacuum ultraviolet absorption spectrum of a 1,2,4-triazole has been obtained and analyzed in detail, with assistance from both an enhanced UV photoelectron spectroscopic study and ab initio multi-reference multi-root configuration interaction procedures. For both 1H- and 1-methyl-1,2...
Computer-aided methods of determining thyristor thermal transients
International Nuclear Information System (INIS)
Lu, E.; Bronner, G.
1988-08-01
An accurate tracing of the thyristor thermal response is investigated. This paper offers several alternatives for thermal modeling and analysis by using an electrical circuit analog: topological method, convolution integral method, etc. These methods are adaptable to numerical solutions and well suited to the use of the digital computer. The thermal analysis of thyristors was performed for the 1000 MVA converter system at the Princeton Plasma Physics Laboratory. Transient thermal impedance curves for individual thyristors in a given cooling arrangement were known from measurements and from manufacturer's data. The analysis pertains to almost any loading case, and the results are obtained in a numerical or a graphical format. 6 refs., 9 figs
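The convolution-integral method mentioned above can be sketched numerically: the junction temperature rise is the convolution of the power-loss history with the increments of the transient thermal impedance. The Foster-network parameters and load pulse below are hypothetical values, not the PPPL thyristor data.

```python
import numpy as np

dt = 0.01                         # time step [s]
t = np.arange(0, 5, dt)
R = np.array([0.02, 0.05])        # Foster thermal resistances [K/W] (invented)
tau = np.array([0.1, 1.0])        # Foster time constants [s] (invented)
zth = sum(r * (1 - np.exp(-t / tc)) for r, tc in zip(R, tau))

p = np.where(t < 2.0, 500.0, 0.0)   # 500 W loss pulse for 2 s, then off

# Superposition: dT(n) = sum_k P(k) * [Zth(n-k+1) - Zth(n-k)]
dzth = np.diff(zth, prepend=0.0)
dT = np.convolve(p, dzth)[: len(t)]

# The rise approaches P * sum(R) = 35 K for a long pulse and decays
# after the pulse ends.
print(round(float(dT[int(2.0 / dt) - 1]), 2))
```

Because the convolution is a plain discrete sum, this formulation pertains to almost any loading case, which is what makes the approach well suited to the digital computer.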
Ab initio path-integral molecular dynamics and the quantum nature of hydrogen bonds
International Nuclear Information System (INIS)
Feng Yexin; Chen Ji; Wang Enge; Li Xin-Zheng
2016-01-01
The hydrogen bond (HB) is an important type of intermolecular interaction, which is generally weak, ubiquitous, and essential to life on earth. The small mass of hydrogen means that many properties of HBs are quantum mechanical in nature. In recent years, because of the development of computer simulation methods and computational power, the influence of nuclear quantum effects (NQEs) on the structural and energetic properties of some hydrogen bonded systems has been intensively studied. Here, we present a review of these studies by focussing on the explanation of the principles underlying the simulation methods, i.e., the ab initio path-integral molecular dynamics. Its extension in combination with the thermodynamic integration method for the calculation of free energies will also be introduced. We use two examples to show how this influence of NQEs in realistic systems is simulated in practice. (topical review)
Energy Technology Data Exchange (ETDEWEB)
HOLM,ELIZABETH A.; BATTAILE,CORBETT C.; BUCHHEIT,THOMAS E.; FANG,HUEI ELIOT; RINTOUL,MARK DANIEL; VEDULA,VENKATA R.; GLASS,S. JILL; KNOROVSKY,GERALD A.; NEILSEN,MICHAEL K.; WELLMAN,GERALD W.; SULSKY,DEBORAH; SHEN,YU-LIN; SCHREYER,H. BUCK
2000-04-01
Computational materials simulations have traditionally focused on individual phenomena: grain growth, crack propagation, plastic flow, etc. However, real materials behavior results from a complex interplay between phenomena. In this project, the authors explored methods for coupling mesoscale simulations of microstructural evolution and micromechanical response. In one case, massively parallel (MP) simulations for grain evolution and microcracking in alumina stronglink materials were dynamically coupled. In the other, codes for domain coarsening and plastic deformation in CuSi braze alloys were iteratively linked. This program provided the first comparison of two promising ways to integrate mesoscale computer codes. Coupled microstructural/micromechanical codes were applied to experimentally observed microstructures for the first time. In addition to the coupled codes, this project developed a suite of new computational capabilities (PARGRAIN, GLAD, OOF, MPM, polycrystal plasticity, front tracking). The problem of plasticity length scale in continuum calculations was recognized and a solution strategy was developed. The simulations were experimentally validated on stockpile materials.
Fast calculation method for computer-generated cylindrical holograms.
Yamaguchi, Takeshi; Fujii, Tomohiko; Yoshikawa, Hiroshi
2008-07-01
Since a general flat hologram has a limited viewable area, we usually cannot see the other side of a reconstructed object. There are some holograms that can solve this problem. A cylindrical hologram is well known to be viewable in 360 deg. Most cylindrical holograms are optical holograms; there are few reports of computer-generated cylindrical holograms. Computer-generated cylindrical holograms are scarce because the spatial resolution of output devices is not high enough; one must therefore make a large hologram or use a small object to satisfy the sampling theorem. In addition, in calculating the large fringe, the calculation amount increases in proportion to the hologram size. Therefore, we propose what we believe to be a new method for fast fringe calculation. We then print these fringes with our prototype fringe printer. As a result, we obtain a good reconstructed image from a computer-generated cylindrical hologram.
Computational methods in metabolic engineering for strain design.
Long, Matthew R; Ong, Wai Kit; Reed, Jennifer L
2015-08-01
Metabolic engineering uses genetic approaches to control microbial metabolism to produce desired compounds. Computational tools can identify new biological routes to chemicals and the changes needed in host metabolism to improve chemical production. Recent computational efforts have focused on exploring what compounds can be made biologically using native enzymes, heterologous enzymes, and/or enzymes with broad specificity. Additionally, computational methods have been developed to suggest different types of genetic modifications (e.g. gene deletion/addition or up/down regulation), as well as suggest strategies meeting different criteria (e.g. high yield, high productivity, or substrate co-utilization). Strategies to improve runtime performance have also been developed, which allow more complex metabolic engineering strategies to be identified. Future incorporation of kinetic considerations will further improve strain design algorithms. Copyright © 2015 Elsevier Ltd. All rights reserved.
Method of Computer-aided Instruction in Situation Control Systems
Directory of Open Access Journals (Sweden)
Anatoliy O. Kargin
2013-01-01
Full Text Available The article considers the problem of computer-aided instruction in a context-chain motivated situation control system for complex technical system behavior. Conceptual and formal models of situation control with practical instruction are considered. Acquisition of new behavior knowledge is presented as structural changes in system memory in the form of a situational agent set. The model and method of computer-aided instruction formalize nondistinct theories from physiologists and cognitive psychologists. The formal instruction model describes situation and reaction formation and their dependence on the parameters affecting education, such as the reinforcement value and the time between the stimulus, action and reinforcement. The change of the contextual link between situational elements during use is formalized. Examples and results are given of computer instruction experiments with the robot device “LEGO MINDSTORMS NXT”, equipped with ultrasonic distance, touch and light sensors.
A new fault detection method for computer networks
International Nuclear Information System (INIS)
Lu, Lu; Xu, Zhengguo; Wang, Wenhai; Sun, Youxian
2013-01-01
Over the past few years, fault detection for computer networks has attracted extensive attention for its importance in network management. Most existing fault detection methods are based on active probing techniques, which can detect the occurrence of faults quickly and precisely. But these methods suffer from the limitation of traffic overhead, especially in large scale networks. To relieve the traffic overhead induced by active-probing-based methods, a new fault detection method, whose key is to divide the detection process into multiple stages, is proposed in this paper. During each stage, only a small region of the network is detected by using a small set of probes. Meanwhile, it also ensures that the entire network can be covered after multiple detection stages. This method can guarantee that the traffic used by probes during each detection stage is sufficiently small that the network can operate without severe disturbance from probes. Several simulation results verify the effectiveness of the proposed method
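The staged-probing idea reduces to a toy sketch: partition the nodes into small regions and probe one region per stage, so per-stage probe traffic stays low while coverage is complete after all stages. The probe function, region size and the "faulty" node below are invented for illustration.

```python
# Split 20 nodes into four regions of five; each stage probes one region.
nodes = list(range(20))
stage_size = 5
stages = [nodes[i:i + stage_size] for i in range(0, len(nodes), stage_size)]

def probe(node):
    # Placeholder for a real active probe (for instance an end-to-end
    # path measurement); node 13 is simply pretended to be faulty here.
    return node != 13

detected = []
for k, region in enumerate(stages):          # one small probe set per stage
    detected.extend(n for n in region if not probe(n))

print(detected)   # the fault surfaces in the stage covering nodes 10-14
```

Only `stage_size` probes run at any one time, which is the mechanism the paper relies on to keep the per-stage traffic overhead small.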
Practical methods to improve the development of computational software
International Nuclear Information System (INIS)
Osborne, A. G.; Harding, D. W.; Deinert, M. R.
2013-01-01
The use of computation has become ubiquitous in science and engineering. As the complexity of computer codes has increased, so has the need for robust methods to minimize errors. Past work has shown that the number of functional errors is related to the number of commands that a code executes. Since the late 1960's, major participants in the field of computation have encouraged the development of best practices for programming to help reduce coder-induced error, and this has led to the emergence of 'software engineering' as a field of study. Best practices for coding and software production have now evolved and become common in the development of commercial software. These same techniques, however, are largely absent from the development of computational codes by research groups. Many of the best-practice techniques from the professional software community would be easy for research groups in nuclear science and engineering to adopt. This paper outlines the history of software engineering, as well as issues in modern scientific computation, and recommends practices that should be adopted by individual scientific programmers and university research groups. (authors)
Computing homography with RANSAC algorithm: a novel method of registration
Li, Xiaowei; Liu, Yue; Wang, Yongtian; Yan, Dayuan
2005-02-01
An AR (Augmented Reality) system can integrate computer-generated objects with the image sequences of real world scenes in either an off-line or a real-time way. Registration, or camera pose estimation, is one of the key techniques that determine its performance. Registration methods can be classified as model-based and move-matching. The former approach can accomplish relatively accurate registration results, but it requires a precise model of the scene, which is hard to obtain. The latter approach carries out registration by computing the ego-motion of the camera. Because it does not require prior knowledge of the scene, its registration results sometimes turn out to be less accurate. When the model defined is as simple as a plane, a mixed method is introduced to take advantage of the virtues of the two methods mentioned above. Although unexpected objects often occlude this plane in an AR system, one can still try to detect corresponding points with a contract-expand method, but this will introduce erroneous correspondences. Computing the homography with the RANSAC algorithm is used to overcome such shortcomings. Using the robustly estimated homography resulting from RANSAC, the camera projective matrix can be recovered and thus registration is accomplished even when the markers are lost in the scene.
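A minimal sketch of homography estimation with RANSAC, under assumptions: exact correspondences between two views of a plane with a fraction corrupted into gross outliers; the DLT solver, iteration count and inlier threshold are illustrative choices (a production AR system would typically use a library routine such as OpenCV's findHomography).

```python
import numpy as np

def dlt_homography(src, dst):
    # Direct Linear Transform: solve A h = 0 for the 3x3 homography
    # mapping src points to dst points (each an (N, 2) array, N >= 4).
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, float))
    H = vt[-1].reshape(3, 3)          # null-space vector = flattened H
    return H / H[2, 2]

def project(H, pts):
    p = np.c_[pts, np.ones(len(pts))] @ H.T
    return p[:, :2] / p[:, 2:3]

def ransac_homography(src, dst, iters=200, thresh=2.0, seed=0):
    # Repeatedly fit H to 4 random correspondences and keep the model
    # with the most inliers (reprojection error below `thresh` pixels).
    rng = np.random.default_rng(seed)
    best = np.zeros(len(src), bool)
    for _ in range(iters):
        idx = rng.choice(len(src), 4, replace=False)
        H = dlt_homography(src[idx], dst[idx])
        inliers = np.linalg.norm(project(H, src) - dst, axis=1) < thresh
        if inliers.sum() > best.sum():
            best = inliers
    return dlt_homography(src[best], dst[best])   # refit on all inliers

# Synthetic demo: exact correspondences under a known homography,
# with the first 10 of 40 points corrupted into gross outliers.
rng = np.random.default_rng(1)
H_true = np.array([[1.1, 0.02, 5.0], [0.01, 0.95, -3.0], [1e-4, 2e-4, 1.0]])
src = rng.uniform(0, 100, (40, 2))
dst = project(H_true, src)
dst[:10] += rng.uniform(20, 60, (10, 2))
H_est = ransac_homography(src, dst)
```

Because the inliers are exact here, the final refit recovers the ground-truth homography to numerical precision despite the 25% outlier rate.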
Pair Programming as a Modern Method of Teaching Computer Science
Directory of Open Access Journals (Sweden)
Irena Nančovska Šerbec
2008-10-01
Full Text Available At the Faculty of Education, University of Ljubljana, we educate future computer science teachers. Besides didactical, pedagogical, mathematical and other interdisciplinary knowledge, students gain knowledge and skills of programming that are crucial for computer science teachers. For all courses, the main emphasis is the absorption of professional competences related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM Computing Curricula. The professional knowledge is therefore associated and combined with the teaching knowledge and skills. In the paper we present how to achieve competences related to programming by using different didactical models (semiotic ladder, cognitive objectives taxonomy, problem solving) and the modern teaching method “pair programming”. Pair programming differs from standard methods (individual work, seminars, projects, etc.). It belongs to extreme programming as a discipline of software development and is known to have positive effects on teaching a first programming language. We have experimentally observed pair programming in the introductory programming course. The paper presents and analyzes the results of using this method: the aspects of satisfaction during programming and the level of gained knowledge. The results are in general positive and demonstrate the promising usage of this teaching method.
Applications of meshless methods for damage computations with finite strains
International Nuclear Information System (INIS)
Pan Xiaofei; Yuan Huang
2009-01-01
Material defects such as cavities have great effects on the damage process in ductile materials. Computations based on finite element methods (FEMs) often suffer from instability due to material failure as well as large distortions. To improve computational efficiency and robustness, the element-free Galerkin (EFG) method is applied in the micro-mechanical constitutive damage model proposed by Gurson and modified by Tvergaard and Needleman (the GTN damage model). The EFG algorithm is implemented in the general-purpose finite element code ABAQUS via the user interface UEL. With the help of the EFG method, damage processes in uniaxial tension specimens and notched specimens are analyzed and verified with experimental data. Computational results reveal that damage which takes place in the interior of specimens extends to the exterior and causes fracture of the specimens; the damage is a fast process relative to the whole tensile loading history. The EFG method provides a more stable and robust numerical solution in comparison with the FEM analysis
Improved computation method in residual life estimation of structural components
Directory of Open Access Journals (Sweden)
Maksimović Stevan M.
2013-01-01
Full Text Available This work considers the numerical computation methods and procedures for predicting the fatigue crack growth of cracked, notched structural components. The computation method is based on fatigue life prediction using the strain energy density approach. Based on strain energy density (SED) theory, a fatigue crack growth model is developed to predict the lifetime of fatigue crack growth for single- or mixed-mode cracks. The model is based on an equation expressed in terms of low-cycle fatigue parameters. Attention is focused on crack growth analysis of structural components under variable amplitude loads. Crack growth is largely influenced by the effect of the plastic zone at the front of the crack. To obtain an efficient computational model, the plasticity-induced crack closure phenomenon is considered during fatigue crack growth. The use of the strain energy density method is efficient for fatigue crack growth prediction under cyclic loading in damaged structural components. The strain energy density method is easy for engineering applications since it does not require any additional determination of fatigue parameters (those would need to be separately determined for the fatigue crack propagation phase); low-cycle fatigue parameters are used instead. Accurate determination of fatigue crack closure has been a complex task for years. The influence of this phenomenon can be considered by means of experimental and numerical methods. Both of these models are considered. Finite element analysis (FEA) has been shown to be a powerful and useful tool [1, 6] to analyze crack growth and crack closure effects. Computation results are compared with available experimental results. [Projekat Ministarstva nauke Republike Srbije, br. OI 174001]
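For contrast with the SED-based model, cycle-by-cycle integration of a crack-growth law is easy to sketch with a Paris-type relation (not the paper's model); the constants, geometry factor and load level below are invented for illustration.

```python
import math

# Paris-type law: da/dN = C * (dK)^m, with dK = Y * dS * sqrt(pi * a).
C, m = 1e-11, 3.0             # Paris constants (invented values)
Y, dS = 1.12, 100.0           # geometry factor, stress range [MPa]
a, a_crit = 1e-3, 1e-2        # initial and critical crack length [m]

N = 0                         # cycle counter
while a < a_crit:
    dK = Y * dS * math.sqrt(math.pi * a)   # stress intensity range
    a += C * dK ** m                       # growth in one load cycle
    N += 1

print(N)   # cycles to grow the crack from 1 mm to 10 mm
```

The SED-based model of the paper replaces the Paris constants with low-cycle fatigue parameters and adds the crack-closure correction, but the cycle-by-cycle integration structure is the same.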
Ab initio lattice dynamics of metal surfaces
International Nuclear Information System (INIS)
Heid, R.; Bohnen, K.-P.
2003-01-01
Dynamical properties of atoms on surfaces depend sensitively on their bonding environment and thus provide valuable insight into the local geometry and chemical binding at the boundary of a solid. Density-functional theory provides a unified approach to the calculation of structural and dynamical properties from first principles. Its high accuracy and predictive power for lattice dynamical properties of semiconductor surfaces has been demonstrated in a previous article by Fritsch and Schroeder (Phys. Rep. 309 (1999) 209). In this report, we review the state-of-the-art of these ab initio approaches to surface dynamical properties of metal surfaces. We give a brief introduction to the conceptual framework with focus on recent advances in computational procedures for the ab initio linear-response approach, which have been a prerequisite for an efficient treatment of surface dynamics of noble and transition metals. The discussed applications to clean and adsorbate-covered surfaces demonstrate the high accuracy and reliability of this approach in predicting detailed microscopic properties of the phonon dynamics for a wide range of metallic surfaces
An introduction to computer simulation methods applications to physical systems
Gould, Harvey; Christian, Wolfgang
2007-01-01
Now in its third edition, this book teaches physical concepts using computer simulations. The text incorporates object-oriented programming techniques and encourages readers to develop good programming habits in the context of doing physics. Designed for readers at all levels, An Introduction to Computer Simulation Methods uses Java, currently the most popular programming language. Introduction, Tools for Doing Simulations, Simulating Particle Motion, Oscillatory Systems, Few-Body Problems: The Motion of the Planets, The Chaotic Motion of Dynamical Systems, Random Processes, The Dynamics of Many Particle Systems, Normal Modes and Waves, Electrodynamics, Numerical and Monte Carlo Methods, Percolation, Fractals and Kinetic Growth Models, Complex Systems, Monte Carlo Simulations of Thermal Systems, Quantum Systems, Visualization and Rigid Body Dynamics, Seeing in Special and General Relativity, Epilogue: The Unity of Physics For all readers interested in developing programming habits in the context of doing phy...
NATO Advanced Study Institute on Methods in Computational Molecular Physics
Diercksen, Geerd
1992-01-01
This volume records the lectures given at a NATO Advanced Study Institute on Methods in Computational Molecular Physics held in Bad Windsheim, Germany, from 22nd July until 2nd August, 1991. This NATO Advanced Study Institute sought to bridge the quite considerable gap which exists between the presentation of molecular electronic structure theory found in contemporary monographs such as, for example, McWeeny's Methods of Molecular Quantum Mechanics (Academic Press, London, 1989) or Wilson's Electron correlation in molecules (Clarendon Press, Oxford, 1984) and the realization of the sophisticated computational algorithms required for their practical application. It sought to underline the relation between the electronic structure problem and the study of nuclear motion. Software for performing molecular electronic structure calculations is now being applied in an increasingly wide range of fields in both the academic and the commercial sectors. Numerous applications are reported in areas as diverse as catalysi...
An Adaptive Reordered Method for Computing PageRank
Directory of Open Access Journals (Sweden)
Yi-Ming Bu
2013-01-01
Full Text Available We propose an adaptive reordered method to deal with the PageRank problem. It has been shown that one can reorder the hyperlink matrix of the PageRank problem to calculate a reduced system and get the full PageRank vector through forward substitutions. This method can provide a speedup for calculating the PageRank vector. We observe that in the existing reordered method, the cost of the recursive reordering procedure can offset the computational reduction brought by minimizing the dimension of the linear system. With this observation, we introduce an adaptive reordered method to accelerate the total calculation, in which we terminate the reordering procedure appropriately instead of reordering to the end. Numerical experiments show the effectiveness of this adaptive reordered method.
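For reference, the underlying PageRank problem that the reordering accelerates can be computed on a toy graph by plain power iteration; the link structure and the damping factor alpha = 0.85 below are illustrative.

```python
import numpy as np

links = {0: [1, 2], 1: [2], 2: [0], 3: [2]}   # out-links of a 4-page web
n, alpha = 4, 0.85

# Column-stochastic link matrix: column j spreads page j's rank
# evenly over its out-links (no dangling nodes in this toy graph).
M = np.zeros((n, n))
for j, outs in links.items():
    for i in outs:
        M[i, j] = 1.0 / len(outs)

r = np.full(n, 1.0 / n)
for _ in range(100):
    r_next = alpha * M @ r + (1 - alpha) / n   # damped PageRank update
    converged = np.abs(r_next - r).sum() < 1e-12
    r = r_next
    if converged:
        break

print(np.round(r, 4))   # page 2, with three in-links, ranks highest
```

The reordered method of the paper solves the equivalent linear system (I - alpha*M) r = (1 - alpha)/n * e instead, shrinking it by reordering so that most components follow from forward substitutions.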
Experiences using DAKOTA stochastic expansion methods in computational simulations.
Energy Technology Data Exchange (ETDEWEB)
Templeton, Jeremy Alan; Ruthruff, Joseph R.
2012-01-01
Uncertainty quantification (UQ) methods bring rigorous statistical connections to the analysis of computational and experiment data, and provide a basis for probabilistically assessing margins associated with safety and reliability. The DAKOTA toolkit developed at Sandia National Laboratories implements a number of UQ methods, which are being increasingly adopted by modeling and simulation teams to facilitate these analyses. This report disseminates results as to the performance of DAKOTA's stochastic expansion methods for UQ on a representative application. Our results provide a number of insights that may be of interest to future users of these methods, including the behavior of the methods in estimating responses at varying probability levels, and the expansion levels for the methodologies that may be needed to achieve convergence.
Interval sampling methods and measurement error: a computer simulation.
Wirth, Oliver; Slaven, James; Taylor, Matthew A
2014-01-01
A simulation study was conducted to provide a more thorough account of measurement error associated with interval sampling methods. A computer program simulated the application of momentary time sampling, partial-interval recording, and whole-interval recording methods on target events randomly distributed across an observation period. The simulation yielded measures of error for multiple combinations of observation period, interval duration, event duration, and cumulative event duration. The simulations were conducted up to 100 times to yield measures of error variability. Although the present simulation confirmed some previously reported characteristics of interval sampling methods, it also revealed many new findings that pertain to each method's inherent strengths and weaknesses. The analysis and resulting error tables can help guide the selection of the most appropriate sampling method for observation-based behavioral assessments. © Society for the Experimental Analysis of Behavior.
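A stripped-down version of such a simulation takes only a few lines: scatter event episodes over a session, score the timeline with the three methods, and compare against the true proportion. The session length, interval size and event durations are arbitrary choices, not those of the study.

```python
import random

random.seed(42)
session, interval = 600, 10            # seconds
occupied = [False] * session
for _ in range(12):                    # 12 event episodes of 5-20 s each
    start = random.randrange(session - 20)
    for s in range(start, start + random.randrange(5, 21)):
        occupied[s] = True

true_prop = sum(occupied) / session
n_int = session // interval
mts = pir = wir = 0
for i in range(n_int):
    chunk = occupied[i * interval:(i + 1) * interval]
    mts += chunk[-1]                   # momentary time sampling
    pir += any(chunk)                  # partial-interval recording
    wir += all(chunk)                  # whole-interval recording

# By construction PIR can only overestimate and WIR can only
# underestimate the true proportion; MTS is roughly unbiased.
print(true_prop, mts / n_int, pir / n_int, wir / n_int)
```

Running this over many random timelines and parameter combinations, as the study does, yields the error and error-variability tables used to choose a sampling method.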
Advanced soft computing diagnosis method for tumour grading.
Papageorgiou, E I; Spyridonos, P P; Stylios, C D; Ravazoula, P; Groumpos, P P; Nikiforidis, G N
2006-01-01
To develop an advanced diagnostic method for urinary bladder tumour grading. A novel soft computing modelling methodology based on the augmentation of fuzzy cognitive maps (FCMs) with the unsupervised active Hebbian learning (AHL) algorithm is applied. One hundred and twenty-eight cases of urinary bladder cancer were retrieved from the archives of the Department of Histopathology, University Hospital of Patras, Greece. All tumours had been characterized according to the classical World Health Organization (WHO) grading system. To design the FCM model for tumour grading, three expert histopathologists defined the main histopathological features (concepts) and their impact on grade characterization. The resulting FCM model consisted of nine concepts. Eight concepts represented the main histopathological features for tumour grading; the ninth concept represented the tumour grade. To increase the classification ability of the FCM model, the AHL algorithm was applied to adjust the weights of the FCM. The proposed FCM grading model achieved a classification accuracy of 72.5%, 74.42% and 95.55% for tumours of grades I, II and III, respectively. An advanced computerized method to support the tumour grade diagnosis decision was proposed and developed. The novelty of the method lies in employing the soft computing method of FCMs to represent specialized knowledge on histopathology and in augmenting the FCMs' ability using an unsupervised learning algorithm, the AHL. The proposed method performs with reasonably high accuracy compared to other existing methods and at the same time meets the physicians' requirements for transparency and explicability.
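The basic FCM inference step such a model relies on is compact: each concept's next activation is a sigmoid-squashed weighted sum of the influencing concepts. The 3-concept map below is invented for illustration; the paper's model has nine concepts with expert-defined weights tuned by AHL.

```python
import math

W = [[0.0, 0.4, 0.6],    # W[j][i]: influence of concept j on concept i
     [0.3, 0.0, 0.5],    # (invented weights)
     [0.0, 0.0, 0.0]]    # concept 2 (the "output") only receives influence

def step(A, W):
    # One FCM inference step: each concept keeps its own activation and
    # adds the weighted activations of the others, then squashes to (0, 1).
    n = len(A)
    return [1.0 / (1.0 + math.exp(-(A[i] + sum(A[j] * W[j][i]
                                               for j in range(n) if j != i))))
            for i in range(n)]

A = [0.8, 0.6, 0.5]          # initial concept activations in [0, 1]
for _ in range(20):          # iterate until the activations settle
    A = step(A, W)
print([round(a, 3) for a in A])
```

After the map settles to a fixed point, the activation of the output concept is read off as the decision value; AHL adjusts the weights W between such runs.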
Directory of Open Access Journals (Sweden)
Bashir A Akhoon
Full Text Available The rapid appearance of resistant malarial parasites after the introduction of the drug atovaquone (ATQ) has prompted the search for new drugs, as even single point mutations in the active site of the Cytochrome b protein can rapidly render ATQ ineffective. The presence of Y268 mutations in the Cytochrome b (Cyt b) protein has previously been suggested to be responsible for ATQ resistance in Plasmodium falciparum (P. falciparum). In this study, we examined the resistance mechanism against ATQ in P. falciparum through computational methods. We report a reliable protein model of the Cyt bc1 complex containing Cyt b and the Iron-Sulphur Protein (ISP) of P. falciparum, built with a composite modeling method combining threading, ab initio modeling and atomic-level structure refinement. The molecular dynamics simulations suggest that the Y268S mutation causes ATQ resistance by reducing hydrophobic interactions between the Cyt bc1 protein complex and ATQ. Moreover, the important histidine contact of ATQ with the ISP chain is also lost due to the Y268S mutation. We observed that the induced mutation alters the arrangement of active-site residues in a fashion that forces ATQ to find a new stable binding site far from the wild-type binding pocket. The MM-PBSA calculations also show that the binding affinity of ATQ with the Cyt bc1 complex is sufficient to hold it at this new site, which ultimately leads to ATQ resistance.
Ab initio potential for solids
DEFF Research Database (Denmark)
Chetty, N.; Stokbro, Kurt; Jacobsen, Karsten Wedel
1992-01-01
. At the most approximate level, the theory is equivalent to the usual effective-medium theory. At all levels of approximation, every term in the total-energy expression is calculated ab initio, that is, without any fitting to experiment or to other calculations. Every step in the approximation procedure can...
Splitting method for computing coupled hydrodynamic and structural response
International Nuclear Information System (INIS)
Ash, J.E.
1977-01-01
A numerical method is developed for application to unsteady fluid dynamics problems, in particular to the mechanics following a sudden release of high energy. Solution of the initial compressible flow phase provides input to a power-series method for the incompressible fluid motions. The system is split into spatial and time domains leading to the convergent computation of a sequence of elliptic equations. Two sample problems are solved, the first involving an underwater explosion and the second the response of a nuclear reactor containment shell structure to a hypothetical core accident. The solutions are correlated with experimental data
Complex Data Modeling and Computationally Intensive Statistical Methods
Mantovan, Pietro
2010-01-01
The last years have seen the advent and development of many devices able to record and store an always increasing amount of complex and high dimensional data; 3D images generated by medical scanners or satellite remote sensing, DNA microarrays, real time financial data, system control datasets. The analysis of this data poses new challenging problems and requires the development of novel statistical models and computational methods, fueling many fascinating and fast growing research areas of modern statistics. The book offers a wide variety of statistical methods and is addressed to statistici
Computational methods for planning and evaluating geothermal energy projects
International Nuclear Information System (INIS)
Goumas, M.G.; Lygerou, V.A.; Papayannakis, L.E.
1999-01-01
In planning, designing and evaluating a geothermal energy project, a number of technical, economic, social and environmental parameters should be considered. The use of computational methods provides a rigorous analysis improving the decision-making process. This article demonstrates the application of decision-making methods developed in operational research for the optimum exploitation of geothermal resources. Two characteristic problems are considered: (1) the economic evaluation of a geothermal energy project under uncertain conditions using a stochastic analysis approach and (2) the evaluation of alternative exploitation schemes for optimum development of a low enthalpy geothermal field using a multicriteria decision-making procedure. (Author)
Automated uncertainty analysis methods in the FRAP computer codes
International Nuclear Information System (INIS)
Peck, S.O.
1980-01-01
A user oriented, automated uncertainty analysis capability has been incorporated in the Fuel Rod Analysis Program (FRAP) computer codes. The FRAP codes have been developed for the analysis of Light Water Reactor fuel rod behavior during steady state (FRAPCON) and transient (FRAP-T) conditions as part of the United States Nuclear Regulatory Commission's Water Reactor Safety Research Program. The objective of uncertainty analysis of these codes is to obtain estimates of the uncertainty in computed outputs of the codes as a function of known uncertainties in input variables. This paper presents the methods used to generate an uncertainty analysis of a large computer code, discusses the assumptions that are made, and shows techniques for testing them. An uncertainty analysis of FRAP-T calculated fuel rod behavior during a hypothetical loss-of-coolant transient is presented as an example and carried through the discussion to illustrate the various concepts.
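The general idea of propagating known input uncertainties through a code can be sketched with a Monte Carlo stand-in; the response function and the input distributions below are assumptions for illustration, not FRAP's actual models or data.

```python
import random
import statistics

def fuel_rod_code(power_kw_per_m, gap_conductance_kw_m2k):
    """Toy stand-in for one run of a FRAP-like code: a hypothetical
    centreline-temperature response (the real codes are far more elaborate)."""
    return 500.0 + 30.0 * power_kw_per_m + 2000.0 / gap_conductance_kw_m2k

rng = random.Random(0)
outputs = []
for _ in range(2000):
    # Sample each input from its assumed uncertainty distribution.
    power = rng.gauss(30.0, 2.0)   # kW/m, mean +/- standard deviation
    gap = rng.gauss(5.0, 0.5)      # kW/m^2-K
    outputs.append(fuel_rod_code(power, gap))

mean_t = statistics.mean(outputs)   # estimate of the output's expectation
std_t = statistics.stdev(outputs)   # estimate of the output's uncertainty
```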
Comparison of different methods for shielding design in computed tomography
International Nuclear Information System (INIS)
Ciraj-Bjelac, O.; Arandjic, D.; Kosutic, D.
2011-01-01
The purpose of this work is to compare different methods for shielding calculation in computed tomography (CT). The BIR-IPEM (British Inst. of Radiology and Inst. of Physics in Engineering in Medicine) and NCRP (National Council on Radiation Protection) methods were used for shielding thickness calculation. Scattered dose levels and calculated barrier thicknesses were also compared with those obtained by scatter dose measurements in the vicinity of a dedicated CT unit. Minimal requirements for protective barriers based on the BIR-IPEM method ranged between 1.1 and 1.4 mm of lead, demonstrating an underestimation of up to 20 % and an overestimation of up to 30 % when compared with thicknesses based on measured dose levels. For the NCRP method, calculated thicknesses were 33 % higher (27-42 %). BIR-IPEM methodology-based results were comparable with values based on scattered dose measurements, while results obtained using the NCRP methodology demonstrated an overestimation of the minimal required barrier thickness. (authors)
International Nuclear Information System (INIS)
Ng, T Y; Yeak, S H; Liew, K M
2008-01-01
A multiscale technique is developed that couples empirical molecular dynamics (MD) and ab initio density functional theory (DFT). An overlap handshaking region between the empirical MD and ab initio DFT regions is formulated and the interaction forces between the carbon atoms are calculated based on the second-generation reactive empirical bond order potential, the long-range Lennard-Jones potential as well as the quantum-mechanical DFT derived forces. A density-of-points algorithm is also developed to track all interatomic distances in the system and to activate and establish the DFT and handshaking regions. Through parallel computing, this multiscale method is used here to study the dynamic behavior of single-walled carbon nanotubes (SWCNTs) under asymmetrical axial compression. The detection of sideways buckling due to the asymmetrical axial compression is reported and discussed. It is noted from this study on SWCNTs that the MD results may be stiffer than those obtained with electron-density considerations, i.e. first-principles ab initio methods
Multiscale methods in turbulent combustion: strategies and computational challenges
International Nuclear Information System (INIS)
Echekki, Tarek
2009-01-01
A principal challenge in modeling turbulent combustion flows is associated with their complex, multiscale nature. Traditional paradigms in the modeling of these flows have attempted to address this nature through different strategies, including exploiting the separation of turbulence and combustion scales and a reduced description of the composition space. The resulting moment-based methods often yield reasonable predictions of flow and reactive scalars' statistics under certain conditions. However, these methods must constantly evolve to address combustion at different regimes, modes or with dominant chemistries. In recent years, alternative multiscale strategies have emerged, which although in part inspired by the traditional approaches, also draw upon basic tools from computational science, applied mathematics and the increasing availability of powerful computational resources. This review presents a general overview of different strategies adopted for multiscale solutions of turbulent combustion flows. Within these strategies, some specific models are discussed or outlined to illustrate their capabilities and underlying assumptions. These strategies may be classified under four different classes, including (i) closure models for atomistic processes, (ii) multigrid and multiresolution strategies, (iii) flame-embedding strategies and (iv) hybrid large-eddy simulation-low-dimensional strategies. A combination of these strategies and models can potentially represent a robust alternative strategy to moment-based models; but a significant challenge remains in the development of computational frameworks for these approaches as well as their underlying theories. (topical review)
Mathematical modellings and computational methods for structural analysis of LMFBR's
International Nuclear Information System (INIS)
Liu, W.K.; Lam, D.
1983-01-01
In this paper, two aspects of nuclear reactor problems are discussed: modelling techniques and computational methods for large scale linear and nonlinear analyses of LMFBRs. For a nonlinear fluid-structure interaction problem with large deformation, an arbitrary Lagrangian-Eulerian description is applicable. For certain linear fluid-structure interaction problems, the structural response spectrum can be found via an 'added mass' approach. In a sense, the fluid inertia is accounted for by a mass matrix added to the structural mass. The fluid/structural modes of certain fluid-structure problems can be uncoupled to obtain the reduced added mass. The advantage of this approach is that it can account for the many repeated structures of a nuclear reactor. For nonlinear dynamic problems, the coupled nonlinear fluid-structure equations usually have to be solved by direct time integration. The computation can be very expensive and time consuming for nonlinear problems. Thus, it is desirable to balance accuracy and computational effort by using an implicit-explicit mixed time integration method. (orig.)
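In the single-degree-of-freedom case, the 'added mass' idea reduces to adding fluid inertia to the structural mass, which lowers the wet natural frequency; the stiffness and mass values below are hypothetical.

```python
import math

def natural_frequency_hz(k, m_structure, m_added=0.0):
    """Added-mass idealization: the fluid inertia enters only as extra
    mass on the structural degree of freedom, so the wet frequency is
    lower than the dry one."""
    return math.sqrt(k / (m_structure + m_added)) / (2.0 * math.pi)

k = 4.0e6     # N/m, hypothetical stiffness
m_s = 100.0   # kg, structural mass
m_a = 60.0    # kg, hypothetical added (fluid) mass

f_dry = natural_frequency_hz(k, m_s)        # in vacuo frequency
f_wet = natural_frequency_hz(k, m_s, m_a)   # frequency with fluid inertia
```

For the repeated-structure case described in the abstract, the same idea applies with an added mass matrix in a generalized eigenvalue problem rather than a scalar.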
A numerical method to compute interior transmission eigenvalues
International Nuclear Information System (INIS)
Kleefeld, Andreas
2013-01-01
In this paper the numerical calculation of eigenvalues of the interior transmission problem arising in acoustic scattering for constant contrast in three dimensions is considered. From the computational point of view existing methods are very expensive, and are only able to show the existence of such transmission eigenvalues. Furthermore, they have trouble finding them if two or more eigenvalues are situated closely together. We present a new method based on complex-valued contour integrals and the boundary integral equation method which is able to calculate highly accurate transmission eigenvalues. So far, this is the first paper providing such accurate values for various surfaces different from a sphere in three dimensions. Additionally, the computational cost is even lower than those of existing methods. Furthermore, the algorithm is capable of finding complex-valued eigenvalues for which no numerical results have been reported yet. Until now, the proof of existence of such eigenvalues is still open. Finally, highly accurate eigenvalues of the interior Dirichlet problem are provided and might serve as test cases to check newly derived Faber–Krahn type inequalities for larger transmission eigenvalues that are not yet available. (paper)
Ab initio study of hydrogen adsorption on benzenoid linkers in metal-organic framework materials
International Nuclear Information System (INIS)
Gao Yi; Zeng, X C
2007-01-01
We have computed the energies of adsorption of molecular hydrogen on a number of molecular linkers in metal-organic framework solid materials using density functional theory (DFT) and ab initio molecular orbital methods. We find that the hybrid B3LYP (Becke three-parameter Lee-Yang-Parr) DFT method gives a qualitatively incorrect prediction of the hydrogen binding with benzenoid molecular linkers. Both local-density approximation (LDA) and generalized gradient approximation (GGA) DFT methods are inaccurate in predicting the values of hydrogen binding energies, but can give a qualitatively correct prediction of the hydrogen binding. When compared to the more accurate binding-energy results based on the ab initio Moeller-Plesset second-order perturbation (MP2) method, the LDA results may be viewed as an upper limit while the GGA results may be viewed as a lower limit. Since the MP2 calculation is impractical for realistic metal-organic framework systems, the combined LDA and GGA calculations provide a cost-effective way to assess the hydrogen binding capability of these systems
Laboratory Sequence in Computational Methods for Introductory Chemistry
Cody, Jason A.; Wiser, Dawn C.
2003-07-01
A four-exercise laboratory sequence for introductory chemistry integrating hands-on, student-centered experience with computer modeling has been designed and implemented. The progression builds from exploration of molecular shapes to intermolecular forces and the impact of those forces on chemical separations made with gas chromatography and distillation. The sequence ends with an exploration of molecular orbitals. The students use the computers as a tool; they build the molecules, submit the calculations, and interpret the results. Because of the construction of the sequence and its placement spanning the semester break, good laboratory notebook practices are reinforced and the continuity of course content and methods between semesters is emphasized. The inclusion of these techniques in the first year of chemistry has had a positive impact on student perceptions and student learning.
An analytical method for computing atomic contact areas in biomolecules.
Mach, Paul; Koehl, Patrice
2013-01-15
We propose a new analytical method for detecting and computing contacts between atoms in biomolecules. It is based on the alpha shape theory and proceeds in three steps. First, we compute the weighted Delaunay triangulation of the union of spheres representing the molecule. In the second step, the Delaunay complex is filtered to derive the dual complex. Finally, contacts between spheres are collected. In this approach, two atoms i and j are defined to be in contact if their centers are connected by an edge in the dual complex. The contact areas between atom i and its neighbors are computed based on the caps formed by these neighbors on the surface of i; the total area of all these caps is partitioned according to their spherical Laguerre Voronoi diagram on the surface of i. This method is analytical and its implementation in a new program BallContact is fast and robust. We have used BallContact to study contacts in a database of 1551 high resolution protein structures. We show that with this new definition of atomic contacts, we generate realistic representations of the environments of atoms and residues within a protein. In particular, we establish the importance of nonpolar contact areas that complement the information represented by the accessible surface areas. This new method bears similarity to the tessellation methods used to quantify atomic volumes and contacts, with the advantage that it does not require the presence of explicit solvent molecules if the surface of the protein is to be considered. © 2012 Wiley Periodicals, Inc.
Ab initio molecular dynamics simulation of laser melting of silicon
Silvestrelli, P.-L.; Alavi, A.; Parrinello, M.; Frenkel, D.
1996-01-01
The method of ab initio molecular dynamics, based on finite temperature density functional theory, is used to simulate laser heating of crystal silicon. We have found that a high concentration of excited electrons dramatically weakens the covalent bond. As a result, the system undergoes a melting
Ab initio calculations and modelling of atomic cluster structure
DEFF Research Database (Denmark)
Solov'yov, Ilia; Lyalin, Andrey G.; Solov'yov, Andrey V.
2004-01-01
The optimized structure and electronic properties of small sodium and magnesium clusters have been investigated using ab initio theoretical methods based on density-functional theory and post-Hartree-Fock many-body perturbation theory accounting for all electrons in the system. A new theoretical...
An Accurate liver segmentation method using parallel computing algorithm
International Nuclear Information System (INIS)
Elbasher, Eiman Mohammed Khalied
2014-12-01
Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs. CT scans are more detailed than standard X-rays. CT scans may be done with or without "contrast". Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing the computed tomography images. Anther
A discrete ordinate response matrix method for massively parallel computers
International Nuclear Information System (INIS)
Hanebutte, U.R.; Lewis, E.E.
1991-01-01
A discrete ordinate response matrix method is formulated for the solution of neutron transport problems on massively parallel computers. The response matrix formulation eliminates iteration on the scattering source. The nodal matrices which result from the diamond-differenced equations are utilized in a factored form which minimizes memory requirements and significantly reduces the required number of operations. The algorithm utilizes massive parallelism by assigning each spatial node to a processor. The algorithm is accelerated effectively by a synthetic method in which the low-order diffusion equations are also solved by massively parallel red/black iterations. The method has been implemented on a 16k Connection Machine-2, and S8 and S16 solutions have been obtained for fixed-source benchmark problems in X-Y geometry
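The red/black iteration mentioned for the diffusion acceleration can be sketched on a model Poisson problem; the grid size and source term below are arbitrary, and a real massively parallel implementation would update all points of one colour concurrently rather than in a serial loop.

```python
def red_black_solve(n=16, sweeps=500, source=1.0):
    """Red/black (checkerboard) Gauss-Seidel for a 2-D Poisson problem
    -Laplacian(u) = source on the unit square with u = 0 on the boundary.
    Points of one colour depend only on points of the other colour, so
    each half-sweep is fully parallel."""
    h = 1.0 / (n + 1)
    u = [[0.0] * (n + 2) for _ in range(n + 2)]   # includes zero boundary
    for _ in range(sweeps):
        for colour in (0, 1):                      # red half-sweep, then black
            for i in range(1, n + 1):
                for j in range(1, n + 1):
                    if (i + j) % 2 == colour:
                        u[i][j] = 0.25 * (u[i - 1][j] + u[i + 1][j] +
                                          u[i][j - 1] + u[i][j + 1] +
                                          h * h * source)
    return u

u = red_black_solve()
centre = u[8][8]   # value near the centre of the square
```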
Review methods for image segmentation from computed tomography images
International Nuclear Information System (INIS)
Mamat, Nurwahidah; Rahman, Wan Eny Zarina Wan Abdul; Soh, Shaharuddin Cik; Mahmud, Rozi
2014-01-01
Image segmentation is a challenging process in terms of accuracy, automation and robustness, especially for medical images. There exist many segmentation methods that can be applied to medical images, but not all methods are suitable. For medical purposes, the aims of image segmentation are to study the anatomical structure, identify the region of interest, measure tissue volume to track tumor growth, and help in treatment planning prior to radiation therapy. In this paper, we present a review of segmentation methods for Computed Tomography (CT) images. CT images have their own characteristics that affect the ability to visualize anatomic structures and pathologic features, such as blurring of the image and visual noise. The details of each method, its strengths and the problems it incurs will be defined and explained. It is necessary to know the suitable segmentation method in order to get accurate segmentation. This paper can serve as a guide for researchers choosing a suitable segmentation method, especially for segmenting images from CT scans
A computer method for simulating the decay of radon daughters
International Nuclear Information System (INIS)
Hartley, B.M.
1988-01-01
The analytical equations representing the decay of a series of radioactive atoms through a number of daughter products are well known. These equations are for an idealized case in which the expectation value of the number of atoms which decay in a certain time can be represented by a smooth curve. The real curve of the total number of disintegrations from a radioactive species consists of a series of Heaviside step functions, with the steps occurring at the times of the disintegrations. The disintegration of radioactive atoms is said to be random, but this random behaviour is such that a single species forms an ensemble in which the times of disintegration follow a geometric distribution. Numbers which have a geometric distribution can be generated by computer and can be used to simulate the decay of one or more radioactive species. A computer method is described for simulating such decay of radioactive atoms, and this method is applied specifically to the decay of the short half-life daughters of radon-222 and the emission of alpha particles from polonium-218 and polonium-214. Repeating the simulation of the decay a number of times provides a method for investigating the statistical uncertainty inherent in methods for measurement of exposure to radon daughters. This statistical uncertainty is difficult to investigate analytically since the time of decay of an atom of polonium-218 is not independent of the time of decay of the subsequent polonium-214. The method is currently being used to investigate the statistical uncertainties of a number of commonly used methods for the counting of alpha particles from radon daughters and the calculation of exposure
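The geometric-distribution sampling described above can be sketched directly; the one-minute time step and atom count are illustrative choices, and only the first decay (polonium-218's parent step) is shown rather than the full chain.

```python
import random

def decay_steps(half_life_steps, rng):
    """Discrete waiting time to disintegration: in each time step the atom
    decays with probability p = 1 - 2**(-1/half_life), so the waiting
    times follow a geometric distribution."""
    p = 1.0 - 2.0 ** (-1.0 / half_life_steps)
    steps = 1
    while rng.random() >= p:
        steps += 1
    return steps

rng = random.Random(42)
half_life = 3.05   # minutes (polonium-218), with a 1-minute time step
times = [decay_steps(half_life, rng) for _ in range(5000)]

mean_steps = sum(times) / len(times)
# For a geometric distribution the mean waiting time is 1/p.
expected = 1.0 / (1.0 - 2.0 ** (-1.0 / half_life))
```

Chaining a second call of `decay_steps` for each atom (with the daughter's half-life) reproduces the correlated parent-daughter decay times that make the analytical treatment difficult.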
Pal, Amrita; Arabnejad, Saeid; Yamashita, Koichi; Manzhos, Sergei
2018-05-01
C60 and C60 based molecules are efficient acceptors and electron transport layers for planar perovskite solar cells. While properties of these molecules are well studied by ab initio methods, those of solid C60, specifically its optical absorption properties, are not. We present a combined density functional theory-Density Functional Tight Binding (DFTB) study of the effect of solid state packing on the band structure and optical absorption of C60. The valence and conduction band edge energies of solid C60 differ on the order of 0.1 eV from single molecule frontier orbital energies. We show that calculations of optical properties using linear response time dependent-DFT(B) or the imaginary part of the dielectric constant (dipole approximation) can result in unrealistically large redshifts in the presence of intermolecular interactions compared to available experimental data. We show that optical spectra computed from the frequency-dependent real polarizability can better reproduce the effect of C60 aggregation on optical absorption, specifically with a generalized gradient approximation functional, and may be more suited to study effects of molecular aggregation.
Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.
Battiti, Roberto
1990-01-01
This thesis presents new algorithms for low and intermediate level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple-scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution chosen to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion with respect to the homogeneous scheme. In some cases, the introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real-time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from
Ab initio results for intermediate-mass, open-shell nuclei
Baker, Robert B.; Dytrych, Tomas; Launey, Kristina D.; Draayer, Jerry P.
2017-01-01
A theoretical understanding of nuclei in the intermediate-mass region is vital to astrophysical models, especially for nucleosynthesis. Here, we employ the ab initio symmetry-adapted no-core shell model (SA-NCSM) in an effort to push first-principle calculations across the sd-shell region. The ab initio SA-NCSM's advantages come from its ability to control the growth of model spaces by including only physically relevant subspaces, which allows us to explore ultra-large model spaces beyond the reach of other methods. We report on calculations for 19Ne and 20Ne up through 13 harmonic oscillator shells using realistic interactions and discuss the underlying structure as well as implications for various astrophysical reactions. This work was supported by the U.S. NSF (OCI-0904874 and ACI -1516338) and the U.S. DOE (DE-SC0005248), and also benefitted from the Blue Waters sustained-petascale computing project and high performance computing resources provided by LSU.
A new computational method for reactive power market clearing
International Nuclear Information System (INIS)
Zhang, T.; Elkasrawy, A.; Venkatesh, B.
2009-01-01
After deregulation of electricity markets, ancillary services such as reactive power supply are priced separately. However, unlike real power supply, procedures for costing and pricing reactive power supply are still evolving, and spot markets for reactive power do not exist as of now. Further, traditional formulations proposed for clearing reactive power markets use a non-linear mixed integer programming formulation that is difficult to solve. This paper proposes a new reactive power supply market clearing scheme. The novelty of this formulation lies in a pricing scheme that rewards transformers for tap shifting while participating in this market. The proposed model is a challenging non-linear mixed integer program. A significant portion of the manuscript is devoted to the development of a new successive mixed integer linear programming (MILP) technique to solve this formulation. The successive MILP method is computationally robust and fast. The IEEE 6-bus and 300-bus systems are used to test the proposed method. These tests serve to demonstrate the computational speed and rigor of the proposed method. (author)
Empirical method for simulation of water tables by digital computers
International Nuclear Information System (INIS)
Carnahan, C.L.; Fenske, P.R.
1975-09-01
An empirical method is described for computing a matrix of water-table elevations from a matrix of topographic elevations and a set of observed water-elevation control points which may be distributed randomly over the area of interest. The method is applicable to regions, such as the Great Basin, where the water table can be assumed to conform to a subdued image of overlying topography. A first approximation to the water table is computed by smoothing a matrix of topographic elevations and adjusting each node of the smoothed matrix according to a linear regression between observed water elevations and smoothed topographic elevations. Each observed control point is assumed to exert a radially decreasing influence on the first approximation surface. The first approximation is then adjusted further to conform to observed water-table elevations near control points. Outside the domain of control, the first approximation is assumed to represent the most probable configuration of the water table. The method has been applied to the Nevada Test Site and the Hot Creek Valley areas in Nevada
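The first-approximation step (smooth the topographic matrix, then regress observed water elevations on the smoothed elevations) can be sketched as follows; the elevation matrix and control wells are invented, and the radially decreasing control-point adjustment is only indicated in a comment.

```python
def moving_average(grid, w=1):
    """Box smoothing of a 2-D elevation matrix (list of lists);
    window is clipped at the edges."""
    nr, nc = len(grid), len(grid[0])
    out = [[0.0] * nc for _ in range(nr)]
    for i in range(nr):
        for j in range(nc):
            vals = [grid[a][b]
                    for a in range(max(0, i - w), min(nr, i + w + 1))
                    for b in range(max(0, j - w), min(nc, j + w + 1))]
            out[i][j] = sum(vals) / len(vals)
    return out

def fit_line(xs, ys):
    """Least-squares slope/intercept of observed water level vs smoothed topo."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

# Hypothetical 4x4 topographic matrix (m) and three control wells.
topo = [[100, 110, 120, 130],
        [105, 115, 125, 135],
        [110, 120, 130, 140],
        [115, 125, 135, 145]]
smooth = moving_average(topo)
controls = [((0, 0), 80.0), ((2, 2), 95.0), ((3, 3), 102.0)]

a, b = fit_line([smooth[i][j] for (i, j), _ in controls],
                [wl for _, wl in controls])
water = [[a * v + b for v in row] for row in smooth]   # first approximation
# A further pass would adjust this surface near each control point with a
# radially decreasing weight so it honours the observed elevations exactly.
```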
A Novel Automated Method for Analyzing Cylindrical Computed Tomography Data
Roth, D. J.; Burke, E. R.; Rauser, R. W.; Martin, R. E.
2011-01-01
A novel software method is presented that is applicable for analyzing cylindrical and partially cylindrical objects inspected using computed tomography. This method involves unwrapping and re-slicing data so that the CT data from the cylindrical object can be viewed as a series of 2-D sheets in the vertical direction, in addition to the volume rendering and normal plane views provided by traditional CT software. The method is based on interior and exterior surface edge detection and, under proper conditions, is fully automated, requiring no input from the user except the correct voxel dimension from the CT scan. The software is available from NASA in 32- and 64-bit versions that can be applied to gigabyte-sized data sets, processing data either in random access memory or primarily on the computer hard drive. Please inquire with the presenting author if further interested. This software differentiates itself from other possible re-slicing software solutions through its complete automation and advanced processing and analysis capabilities.
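The unwrapping idea (re-sampling each axial slice around a circle so the cylinder wall becomes a flat sheet) can be sketched with nearest-neighbour sampling on a synthetic slice; the described software additionally detects the interior and exterior surfaces automatically, which is omitted here.

```python
import math

def unwrap_slice(slice2d, center, radius, n_theta=360):
    """Re-sample one axial CT slice along a circle of the given radius so
    the cylinder wall becomes one row of an unwrapped 2-D sheet
    (nearest-neighbour sampling)."""
    cy, cx = center
    row = []
    for k in range(n_theta):
        theta = 2.0 * math.pi * k / n_theta
        y = int(round(cy + radius * math.sin(theta)))
        x = int(round(cx + radius * math.cos(theta)))
        row.append(slice2d[y][x])
    return row

# Synthetic 64x64 slice: a bright ring (the "cylinder wall") of radius ~20.
n = 64
slice2d = [[0.0] * n for _ in range(n)]
for y in range(n):
    for x in range(n):
        if 19.0 <= math.hypot(y - 32, x - 32) <= 21.0:
            slice2d[y][x] = 1.0

unwrapped = unwrap_slice(slice2d, (32, 32), 20.0)
```

Stacking one such row per axial slice yields the 2-D sheet view described in the abstract.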
Computer codes and methods for simulating accelerator driven systems
International Nuclear Information System (INIS)
Sartori, E.; Byung Chan Na
2003-01-01
A large set of computer codes and associated data libraries have been developed by nuclear research and industry over the past half century. A large number of them are in the public domain and can be obtained under agreed conditions from different Information Centres. The areas covered comprise: basic nuclear data and models, reactor spectra and cell calculations, static and dynamic reactor analysis, criticality, radiation shielding, dosimetry and material damage, fuel behaviour, safety and hazard analysis, heat conduction and fluid flow in reactor systems, spent fuel and waste management (handling, transportation, and storage), economics of fuel cycles, impact on the environment of nuclear activities, etc. These codes and models have been developed mostly for critical systems used for research or power generation and other technological applications. Many of them have not been designed for accelerator driven systems (ADS), but with competent use they can be used for studying such systems or can form the basis for adapting existing methods to the specific needs of ADS. The present paper describes the types of methods, codes and associated data available and their role in the applications. It provides Web addresses to facilitate searches for such tools. Some indications are given on the effect of inappropriate or 'blind' use of existing tools on ADS studies. Reference is made to available experimental data that can be used for validating the methods used. Finally, some international activities linked to the different computational aspects are described briefly. (author)
Methodics of computing the results of monitoring the exploratory gallery
Directory of Open Access Journals (Sweden)
Krúpa Víazoslav
2000-09-01
Full Text Available At the building site of the Višňové-Dubná skala motorway tunnel, priority is given to driving an exploration gallery that secures detailed geological, engineering-geological, hydrogeological and geotechnical research. This research is based on gathering information for the planned use of a full-profile tunnelling machine to drive the motorway tunnel. In the part of the exploration gallery driven by the TBM method, detailed information about the parameters of the driving process is gathered by a computer monitoring system mounted on the driving machine. The monitoring system, based on an industrial PC 104 computer, records four basic values of the driving process: the electromotor power of the Voest-Alpine ATB 35HA driving machine, the speed of the driving advance, the rotation speed of the TBM disintegrating head, and the total head pressure. The pressure force is evaluated from the pressure in the hydraulic cylinders of the machine. From these values, the strength of the rock mass, the angle of internal friction, etc. are calculated mathematically; these values characterize the rock mass properties and their changes. To assess the effectiveness of the driving process, the specific energy and the working ability of the driving head are used. The article defines the method of computing the gathered monitoring information, prepared for the Voest-Alpine ATB 35HA driving machine at the Institute of Geotechnics SAS. It describes the input forms (protocols) of the developed method, created in an EXCEL program, and shows selected samples of the graphical elaboration of the first monitoring results obtained from the exploratory gallery driving process in the Višňové-Dubná skala motorway tunnel.
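The specific energy mentioned in the abstract (energy spent per excavated volume) can be sketched from the monitored quantities. This is an illustrative formula under assumed SI-style units, not the Institute of Geotechnics' exact protocol:

```python
import math

def specific_energy(power_kw, advance_m_per_h, head_diameter_m):
    """Specific energy of excavation (MJ/m^3): cutterhead power divided by
    the volumetric excavation rate of the full-profile machine."""
    area_m2 = math.pi * (head_diameter_m / 2.0) ** 2       # bored cross-section
    volume_rate_m3_s = area_m2 * advance_m_per_h / 3600.0  # m^3 per second
    return power_kw * 1e3 / volume_rate_m3_s / 1e6         # J/m^3 -> MJ/m^3
```

A rising specific energy at constant thrust would then indicate harder or more abrasive rock, which is how such a monitored value characterizes rock mass changes along the gallery.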
Description of a method for computing fluid-structure interaction
International Nuclear Information System (INIS)
Gantenbein, F.
1982-02-01
A general formulation for computing structural vibrations in a dense fluid is described. It is based on modelling the fluid with fluid finite elements. Two variables are associated with each fluid node: the pressure p and a variable π defined by p = d²π/dt². Coupling between the structure and the fluid is introduced by surface elements. This method is easy to introduce into a general finite element code. Validation was obtained by analytical calculations and tests. It is widely used for vibrational and seismic studies of pipes and internals of nuclear reactors; some applications are presented.
Computer Aided Flowsheet Design using Group Contribution Methods
DEFF Research Database (Denmark)
Bommareddy, Susilpa; Eden, Mario R.; Gani, Rafiqul
2011-01-01
In this paper, a systematic group contribution based framework is presented for synthesis of process flowsheets from a given set of input and output specifications. Analogous to the group contribution methods developed for molecular design, the framework employs process groups to represent...... information of each flowsheet to minimize the computational load and information storage. The design variables for the selected flowsheet(s) are identified through a reverse simulation approach and are used as initial estimates for rigorous simulation to verify the feasibility and performance of the design....
COMPUTER-IMPLEMENTED METHOD OF PERFORMING A SEARCH USING SIGNATURES
DEFF Research Database (Denmark)
2017-01-01
A computer-implemented method of processing a query vector and a data vector, comprising: generating a set of masks and a first set of multiple signatures and a second set of multiple signatures by applying the set of masks to the query vector and the data vector, respectively, and generating...... candidate pairs, of a first signature and a second signature, by identifying matches of a first signature and a second signature. The set of masks comprises a configuration of the elements that is a Hadamard code; a permutation of a Hadamard code; or a code that deviates from a Hadamard code...
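The mask-and-signature idea can be sketched as follows. This is a toy stand-in (random index masks and sign-pattern signatures are assumptions for illustration; the claimed method uses Hadamard-code-structured masks):

```python
import numpy as np

def make_masks(dim, n_masks, bits_per_mask, rng):
    """Hypothetical masks: each mask selects a subset of vector positions."""
    return [rng.choice(dim, size=bits_per_mask, replace=False)
            for _ in range(n_masks)]

def signatures(vec, masks):
    # One signature per mask: the sign pattern of the masked elements.
    return [tuple(vec[m] > 0) for m in masks]

def candidate_pairs(query, data, masks):
    """Indices of masks whose query and data signatures match."""
    qs, ds = signatures(query, masks), signatures(data, masks)
    return [i for i, (a, b) in enumerate(zip(qs, ds)) if a == b]
```

Matching signatures act as a cheap filter: only vector pairs sharing many signatures are passed on for exact comparison.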
Method and apparatus for managing transactions with connected computers
Goldsmith, Steven Y.; Phillips, Laurence R.; Spires, Shannon V.
2003-01-01
The present invention provides a method and apparatus that make use of existing computer and communication resources and that reduce the errors and delays common to complex transactions such as international shipping. The present invention comprises an agent-based collaborative work environment that assists geographically distributed commercial and government users in the management of complex transactions such as the transshipment of goods across the U.S.-Mexico border. Software agents can mediate the creation, validation and secure sharing of shipment information and regulatory documentation over the Internet, using the World-Wide Web to interface with human users.
Numerical methods and computers used in elastohydrodynamic lubrication
Hamrock, B. J.; Tripp, J. H.
1982-01-01
Some of the methods of obtaining approximate numerical solutions to boundary value problems that arise in elastohydrodynamic lubrication are reviewed. The highlights of four general approaches (direct, inverse, quasi-inverse, and Newton-Raphson) are sketched. Advantages and disadvantages of these approaches are presented along with a flow chart showing some of the details of each. The basic question of numerical stability of the elastohydrodynamic lubrication solutions, especially in the pressure spike region, is considered. Computers used to solve this important class of lubrication problems are briefly described, with emphasis on supercomputers.
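Of the four approaches reviewed, the Newton-Raphson scheme is the easiest to sketch in isolation. The driver below is a generic illustration of the idea (iterating x ← x − J⁻¹F(x) on a residual system), not the coupled Reynolds/elasticity solver itself:

```python
import numpy as np

def newton_raphson(residual, jacobian, x0, tol=1e-10, max_iter=50):
    """Generic Newton-Raphson iteration on a vector residual F(x) = 0."""
    x = np.asarray(x0, dtype=float)
    for _ in range(max_iter):
        f = residual(x)
        if np.linalg.norm(f) < tol:
            break
        x = x - np.linalg.solve(jacobian(x), f)  # Newton update step
    return x
```

In elastohydrodynamic lubrication the residual couples film thickness, pressure, and elastic deformation, which is why the quadratic convergence of this scheme near the solution is so valuable in the stiff pressure-spike region.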
A hybrid method for the parallel computation of Green's functions
International Nuclear Information System (INIS)
Petersen, Dan Erik; Li Song; Stokbro, Kurt; Sorensen, Hans Henrik B.; Hansen, Per Christian; Skelboe, Stig; Darve, Eric
2009-01-01
Quantum transport models for nanodevices using the non-equilibrium Green's function method require the repeated calculation of the block tridiagonal part of the Green's and lesser Green's function matrices. This problem is related to the calculation of the inverse of a sparse matrix. Because of the large number of times this calculation needs to be performed, this is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized. This practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. Then, we propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
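The serial recurrences that the paper sets out to improve on can be sketched for the diagonal blocks of the inverse of a block-tridiagonal matrix. This is the classical forward/backward (recursive Green's function) scheme, shown here as a plain illustration rather than the authors' parallel algorithm:

```python
import numpy as np

def rgf_diagonal_blocks(Ad, Au, Al):
    """Diagonal blocks of inv(A) for block-tridiagonal A.
    Ad[i] = A[i,i]; Au[i] = A[i,i+1]; Al[i] = A[i+1,i]."""
    n = len(Ad)
    gL = [None] * n
    gL[0] = np.linalg.inv(Ad[0])
    for i in range(1, n):                      # forward (left-connected) sweep
        gL[i] = np.linalg.inv(Ad[i] - Al[i - 1] @ gL[i - 1] @ Au[i - 1])
    G = [None] * n
    G[n - 1] = gL[n - 1]
    for i in range(n - 2, -1, -1):             # backward sweep adds right coupling
        G[i] = gL[i] + gL[i] @ Au[i] @ G[i + 1] @ Al[i] @ gL[i]
    return G
```

Because each step depends on the previous one, the sweep is inherently sequential; the Schur-complement/cyclic-reduction strategy in the paper exists precisely to break this chain across processors.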
GAUSSIAN 76: an ab initio molecular orbital program
International Nuclear Information System (INIS)
Binkley, J.S.; Whiteside, R.; Hariharan, P.C.; Seeger, R.; Hehre, W.J.; Lathan, W.A.; Newton, M.D.; Ditchfield, R.; Pople, J.A.
Gaussian 76 is a general-purpose computer program for ab initio Hartree-Fock molecular orbital calculations. It can handle basis sets involving s, p and d-type gaussian functions. Certain standard sets (STO-3G, 4-31G, 6-31G*, etc.) are stored internally for easy use. Closed shell (RHF) or unrestricted open shell (UHF) wave functions can be obtained. Facilities are provided for geometry optimization to potential minima and for limited potential surface scans
Multigrid Methods for the Computation of Propagators in Gauge Fields
Kalkreuter, Thomas
Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18⁴ conjugate gradient is superior.
Fluid history computation methods for reactor safeguards problems using MNODE computer program
International Nuclear Information System (INIS)
Huang, Y.S.; Savery, C.W.
1976-10-01
A method for predicting the pressure-temperature histories of air, liquid water, and vapor flowing in a zoned containment as a result of a high-energy pipe rupture is described. The computer code MNODE has been developed for 12 connected control volumes and 24 inertia flow paths. Predictions by the code are compared with the results of an analytical gas dynamics problem, semiscale blowdown experiments, full-scale MARVIKEN test results, and Battelle-Frankfurt model PWR containment test data. The MNODE solutions to NRC/AEC subcompartment benchmark problems are also compared with results predicted by other computer codes such as RELAP-3, FLASH-2, and CONTEMPT-PS. The analytical treatment is consistent with Section 6.2.1.2 of the Standard Format (Rev. 2) issued by the U.S. Nuclear Regulatory Commission in September 1975
Ab initio molecular crystal structures, spectra, and phase diagrams.
Hirata, So; Gilliard, Kandis; He, Xiao; Li, Jinjin; Sode, Olaseni
2014-09-16
Conspectus Molecular crystals are chemists' solids in the sense that their structures and properties can be understood in terms of those of the constituent molecules merely perturbed by a crystalline environment. They form a large and important class of solids including ices of atmospheric species, drugs, explosives, and even some organic optoelectronic materials and supramolecular assemblies. Recently, surprisingly simple yet extremely efficient, versatile, easily implemented, and systematically accurate electronic structure methods for molecular crystals have been developed. The methods, collectively referred to as the embedded-fragment scheme, divide a crystal into monomers and overlapping dimers and apply modern molecular electronic structure methods and software to these fragments of the crystal that are embedded in a self-consistently determined crystalline electrostatic field. They enable facile applications of accurate but otherwise prohibitively expensive ab initio molecular orbital theories such as Møller-Plesset perturbation and coupled-cluster theories to a broad range of properties of solids such as internal energies, enthalpies, structures, equation of state, phonon dispersion curves and density of states, infrared and Raman spectra (including band intensities and sometimes anharmonic effects), inelastic neutron scattering spectra, heat capacities, Gibbs energies, and phase diagrams, while accounting for many-body electrostatic (namely, induction or polarization) effects as well as two-body exchange and dispersion interactions from first principles. They can fundamentally alter the role of computing in the studies of molecular crystals in the same way ab initio molecular orbital theories have transformed research practices in gas-phase physical chemistry and synthetic chemistry in the last half century. In this Account, after a brief summary of formalisms and algorithms, we discuss applications of these methods performed in our group as compelling
Szostak, Roman; Aubé, Jeffrey; Szostak, Michal
2015-08-21
Twisted amides containing nitrogen at the bridgehead position are attractive practical prototypes for the investigation of the electronic and structural properties of nonplanar amide linkages. Changes that occur during rotation around the N-C(O) axis in one-carbon-bridged twisted amides have been studied using ab initio molecular orbital methods. Calculations at the MP2/6-311++G(d,p) level performed on a set of one-carbon-bridged lactams, including 20 distinct scaffolds ranging from [2.2.1] to [6.3.1] ring systems, with the C═O bond on the shortest bridge indicate significant variations in structures, resonance energies, proton affinities, core ionization energies, frontier molecular orbitals, atomic charges, and infrared frequencies that reflect structural changes corresponding to the extent of resonance stabilization during rotation along the N-C(O) axis. The results are discussed in the context of resonance theory and activation of amides toward N-protonation (N-activation) by distortion. This study demonstrates that one-carbon-bridged lactams, a class of readily available, hydrolytically robust twisted amides, are ideally suited to span the whole spectrum of the amide bond distortion energy surface. Notably, this study provides a blueprint for the rational design and application of nonplanar amides in organic synthesis. The presented findings strongly support the classical amide bond resonance model in predicting the properties of nonplanar amides.
Oligomerization of G protein-coupled receptors: computational methods.
Selent, J; Kaczor, A A
2011-01-01
Recent research has unveiled the complexity of mechanisms involved in G protein-coupled receptor (GPCR) functioning, in which receptor dimerization/oligomerization may play an important role. Although the first high-resolution X-ray structure for a likely functional chemokine receptor dimer has been deposited in the Protein Data Bank, the interactions and mechanisms of dimer formation are not yet fully understood. In this respect, computational methods play a key role for predicting accurate GPCR complexes. This review outlines computational approaches focusing on sequence- and structure-based methodologies, and discusses their advantages and limitations. Sequence-based approaches that search for possible protein-protein interfaces in GPCR complexes have been applied with success in several studies, but have not always yielded consistent results. Structure-based methodologies are a potent complement to sequence-based approaches. For instance, protein-protein docking is a valuable method, especially when guided by experimental constraints. Some disadvantages, such as limited receptor flexibility and non-consideration of the membrane environment, have to be taken into account. Molecular dynamics simulation can overcome these drawbacks, giving a detailed description of conformational changes in a native-like membrane. Successful prediction of GPCR complexes using computational approaches combined with experimental efforts may help to understand the role of dimeric/oligomeric GPCR complexes in fine-tuning receptor signaling. Moreover, since such GPCR complexes have attracted interest as potential drug targets for diverse diseases, unveiling the molecular determinants of dimerization/oligomerization can provide important implications for drug discovery.
Computing thermal Wigner densities with the phase integration method
International Nuclear Information System (INIS)
Beutier, J.; Borgis, D.; Vuilleumier, R.; Bonella, S.
2014-01-01
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems
Computing thermal Wigner densities with the phase integration method.
Beutier, J; Borgis, D; Vuilleumier, R; Bonella, S
2014-08-28
We discuss how the Phase Integration Method (PIM), recently developed to compute symmetrized time correlation functions [M. Monteferrante, S. Bonella, and G. Ciccotti, Mol. Phys. 109, 3015 (2011)], can be adapted to sampling/generating the thermal Wigner density, a key ingredient, for example, in many approximate schemes for simulating quantum time dependent properties. PIM combines a path integral representation of the density with a cumulant expansion to represent the Wigner function in a form calculable via existing Monte Carlo algorithms for sampling noisy probability densities. The method is able to capture highly non-classical effects such as correlation among the momenta and coordinates parts of the density, or correlations among the momenta themselves. By using alternatives to cumulants, it can also indicate the presence of negative parts of the Wigner density. Both properties are demonstrated by comparing PIM results to those of reference quantum calculations on a set of model problems.
Data graphing methods, articles of manufacture, and computing devices
Energy Technology Data Exchange (ETDEWEB)
Wong, Pak Chung; Mackey, Patrick S.; Cook, Kristin A.; Foote, Harlan P.; Whiting, Mark A.
2016-12-13
Data graphing methods, articles of manufacture, and computing devices are described. In one aspect, a method includes accessing a data set, displaying a graphical representation including data of the data set which is arranged according to a first of different hierarchical levels, wherein the first hierarchical level represents the data at a first of a plurality of different resolutions which respectively correspond to respective ones of the hierarchical levels, selecting a portion of the graphical representation wherein the data of the portion is arranged according to the first hierarchical level at the first resolution, modifying the graphical representation by arranging the data of the portion according to a second of the hierarchical levels at a second of the resolutions, and after the modifying, displaying the graphical representation wherein the data of the portion is arranged according to the second hierarchical level at the second resolution.
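The level/resolution idea can be sketched with plain block averaging: a coarse hierarchical level summarizes the data, and a selected portion is redrawn at a finer level. This is an illustrative stand-in, not the patented scheme:

```python
import numpy as np

def coarsen(data, factor):
    """One hierarchical level: represent a series at 1/factor resolution
    by block averaging."""
    n = len(data) // factor * factor
    return np.asarray(data[:n], dtype=float).reshape(-1, factor).mean(axis=1)

def drill_down(data, start, stop):
    """Replace a selected coarse portion by its full-resolution values."""
    return np.asarray(data[start:stop], dtype=float)
```

A viewer would display `coarsen(data, k)` for the whole set and swap in `drill_down(...)` for the portion the user selects.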
A finite element solution method for quadrics parallel computer
International Nuclear Information System (INIS)
Zucchini, A.
1996-08-01
A distributed preconditioned conjugate gradient method for finite element analysis has been developed and implemented on a parallel SIMD Quadrics computer. The main characteristic of the method is that it does not require any actual assembling of all element equations in a global system. The physical domain of the problem is partitioned in cells of n_p finite elements and each cell element is assigned to a different node of an n_p-processor machine. Element stiffness matrices are stored in the data memory of the assigned processing node and the solution process is completely executed in parallel at element level. Inter-element and therefore inter-processor communications are required once per iteration to perform local sums of vector quantities between neighbouring elements. A prototype implementation has been tested on an 8-node Quadrics machine in a simple 2D benchmark problem
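The key ingredient, a matrix-vector product computed element by element with no globally assembled matrix, can be sketched serially (the actual implementation distributes the loop over processors):

```python
import numpy as np

def unassembled_matvec(element_matrices, element_dofs, x, n_dof):
    """y = A @ x computed element by element, never forming the global A."""
    y = np.zeros(n_dof)
    for Ke, dofs in zip(element_matrices, element_dofs):
        y[dofs] += Ke @ x[dofs]          # local product, then scatter-add
    return y
```

Since conjugate gradient only ever needs A through such products, the global stiffness matrix never has to exist; the scatter-add on shared degrees of freedom is exactly the once-per-iteration neighbour communication the abstract mentions.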
A novel dual energy method for enhanced quantitative computed tomography
Emami, A.; Ghadiri, H.; Rahmim, A.; Ay, M. R.
2018-01-01
Accurate assessment of bone mineral density (BMD) is critically important in clinical practice, and conveniently enabled via quantitative computed tomography (QCT). Meanwhile, dual-energy QCT (DEQCT) enables enhanced detection of small changes in BMD relative to single-energy QCT (SEQCT). In the present study, we aimed to investigate the accuracy of QCT methods, with particular emphasis on a new dual-energy approach, in comparison to single-energy and conventional dual-energy techniques. We used a sinogram-based analytical CT simulator to model the complete chain of CT data acquisitions, and assessed performance of SEQCT and different DEQCT techniques in quantification of BMD. We demonstrate a 120% reduction in error when using a proposed dual-energy Simultaneous Equation by Constrained Least-squares method, enabling more accurate bone mineral measurements.
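The dual-energy principle behind all DEQCT variants is to solve a small linear system: the measured attenuation at each energy is a weighted sum of the basis-material densities. The sketch below illustrates the generic idea with assumed coefficients; it is not the authors' constrained-least-squares formulation:

```python
import numpy as np

def dual_energy_bmd(mu_low, mu_high, coeffs):
    """Solve mu_E = a_E * rho_bone + b_E * rho_soft for the two densities.
    `coeffs` rows are [a_E, b_E] attenuation coefficients (assumed values);
    nonnegativity is handled crudely by clamping."""
    A = np.asarray(coeffs, dtype=float)               # shape (2, 2)
    mu = np.array([mu_low, mu_high], dtype=float)
    rho, *_ = np.linalg.lstsq(A, mu, rcond=None)      # least-squares solve
    return np.clip(rho, 0.0, None)
```

With two energies the system is exactly determined; the constrained formulation in the paper matters when noise and additional constraints make the naive solve unstable.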
DEFF Research Database (Denmark)
de Lasson, Jakob Rosenkrantz; Frandsen, Lars Hagedorn; Burger, Sven
2016-01-01
We benchmark four state-of-the-art computational methods by computing quality factors and resonance wavelengths in photonic crystal membrane L5 and L9 line defect cavities. The convergence of the methods with respect to resolution, degrees of freedom and number of modes is investigated. Special attention is paid to the influence of the size of the computational domain. Convergence is not obtained for some of the methods, indicating that some are more suitable than others for analyzing line defect cavities.
Computer prediction of subsurface radionuclide transport: an adaptive numerical method
International Nuclear Information System (INIS)
Neuman, S.P.
1983-01-01
Radionuclide transport in the subsurface is often modeled with the aid of the advection-dispersion equation. A review of existing computer methods for the solution of this equation shows that there is need for improvement. To answer this need, a new adaptive numerical method is proposed based on an Eulerian-Lagrangian formulation. The method is based on a decomposition of the concentration field into two parts, one advective and one dispersive, in a rigorous manner that does not leave room for ambiguity. The advective component of steep concentration fronts is tracked forward with the aid of moving particles clustered around each front. Away from such fronts the advection problem is handled by an efficient modified method of characteristics called single-step reverse particle tracking. When a front dissipates with time, its forward tracking stops automatically and the corresponding cloud of particles is eliminated. The dispersion problem is solved by an unconventional Lagrangian finite element formulation on a fixed grid which involves only symmetric and diagonal matrices. Preliminary tests against analytical solutions of one- and two-dimensional dispersion in a uniform steady state velocity field suggest that the proposed adaptive method can handle the entire range of Peclet numbers from 0 to infinity, with Courant numbers well in excess of 1
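The "single-step reverse particle tracking" step can be sketched in one dimension: the new concentration at each grid node is the old concentration at the foot of the characteristic, x − v·dt. This minimal illustration covers pure advection only (the dispersive part is handled separately by the finite element step):

```python
import numpy as np

def reverse_particle_tracking(c, v, dt, dx):
    """One advection step on a uniform 1-D grid by tracing characteristics
    backwards and interpolating the old field at the feet."""
    n = len(c)
    x = np.arange(n) * dx
    feet = x - v * dt                # backward-traced characteristic feet
    return np.interp(feet, x, c)     # linear interpolation, clamped at ends
```

Because the interpolation point can lie many cells upstream, this scheme remains stable for Courant numbers well above 1, which is the property the abstract highlights.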
Parallel computation of multigroup reactivity coefficient using iterative method
Susmikanti, Mike; Dewayatna, Winter
2013-09-01
One of the research activities supporting the commercial radioisotope production program is safety research on the irradiation of FPM (Fission Product Molybdenum) targets. FPM targets form a stainless steel tube onto which layers of high-enriched uranium are superimposed, and the tube is irradiated to obtain fission products; the fission material is widely used in kits in nuclear medicine. Irradiating FPM tubes in the reactor core can, however, interfere with its performance; one of the disturbances comes from changes in flux, or reactivity. It is therefore necessary to study a method for calculating safety margins for the configuration changes occurring during the life of the reactor, and making the code faster becomes an absolute necessity. The neutron safety margin for the research reactor can be reassessed without a full recalculation of the reactor reactivity, which is an advantage of using the perturbation method. The criticality and flux in a multigroup diffusion model were calculated at various irradiation positions for several uranium contents. This model requires complex computation. Several parallel algorithms with iterative methods have been developed for solving large sparse matrix systems. The Red-Black Gauss-Seidel iteration and the parallel power iteration method can be used to solve the multigroup diffusion equation system and to calculate the criticality and reactivity coefficients. In this research, a code for reactivity calculation was developed as part of the safety analysis, with parallel processing; the calculation can be done more quickly and efficiently by exploiting the parallelism of a multicore computer. The code was applied to the safety-limit calculations of irradiated FPM targets with increasing uranium content.
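The power iteration mentioned above can be sketched generically: it extracts the dominant eigenvalue (the multiplication factor in criticality problems) and the corresponding flux-like eigenvector of an operator. This serial toy version is for illustration only:

```python
import numpy as np

def power_iteration(A, tol=1e-12, max_iter=1000):
    """Dominant eigenvalue/eigenvector of A by repeated application and
    renormalization; A stands in for the multigroup diffusion operator."""
    x = np.ones(A.shape[0])
    lam = 0.0
    for _ in range(max_iter):
        y = A @ x
        lam_new = np.linalg.norm(y)      # eigenvalue estimate
        x = y / lam_new                  # renormalized flux iterate
        if abs(lam_new - lam) < tol:
            break
        lam = lam_new
    return lam, x
```

The matrix-vector product inside the loop is where the Red-Black Gauss-Seidel sweeps and the multicore parallelism of the described code would do their work.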
Böhm, Karl-Heinz; Banert, Klaus; Auer, Alexander A
2014-04-23
We present ab-initio calculations of secondary isotope effects on NMR chemical shieldings. The change of the NMR chemical shift of a certain nucleus that is observed if another nucleus is replaced by a different isotope can be calculated by computing vibrational corrections on the NMR parameters using electronic structure methods. We demonstrate that the accuracy of the computational results is sufficient to even distinguish different conformers. For this purpose, benchmark calculations for fluoro(2-2H)ethane in gauche and antiperiplanar conformation are carried out at the HF, MP2 and CCSD(T) level of theory using basis sets ranging from double- to quadruple-zeta quality. The methodology is applied to the secondary isotope shifts for 2-fluoronorbornane in order to resolve an ambiguity in the literature on the assignment of endo- and exo-2-fluoronorbornanes with deuterium substituents in endo-3 and exo-3 positions, also yielding insight into mechanistic details of the corresponding synthesis.
Directory of Open Access Journals (Sweden)
Karl-Heinz Böhm
2014-04-01
Full Text Available We present ab-initio calculations of secondary isotope effects on NMR chemical shieldings. The change of the NMR chemical shift of a certain nucleus that is observed if another nucleus is replaced by a different isotope can be calculated by computing vibrational corrections on the NMR parameters using electronic structure methods. We demonstrate that the accuracy of the computational results is sufficient to even distinguish different conformers. For this purpose, benchmark calculations for fluoro(2-2H)ethane in gauche and antiperiplanar conformation are carried out at the HF, MP2 and CCSD(T) level of theory using basis sets ranging from double- to quadruple-zeta quality. The methodology is applied to the secondary isotope shifts for 2-fluoronorbornane in order to resolve an ambiguity in the literature on the assignment of endo- and exo-2-fluoronorbornanes with deuterium substituents in endo-3 and exo-3 positions, also yielding insight into mechanistic details of the corresponding synthesis.
Application of Computational Methods in Planaria Research: A Current Update
Directory of Open Access Journals (Sweden)
Ghosh Shyamasree
2017-07-01
Full Text Available Planaria is a member of the phylum Platyhelminthes, the flatworms. Planarians possess the unique ability to regenerate from adult stem cells, or neoblasts, and are important as a model organism for regeneration and developmental studies. Although research is being actively carried out globally through conventional methods to understand the process of regeneration from neoblasts and the developmental biology, neurobiology and immunology of Planaria, there are many thought-provoking questions related to stem cell plasticity and the uniqueness of the regenerative potential of planarians among other members of the phylum Platyhelminthes. The complexity of receptors and signalling mechanisms, the immune system network, the biology of repair, and the responses to injury are yet to be understood in Planaria. Genomic and transcriptomic studies have generated a vast repository of data, but their availability and analysis is a challenging task. Data mining, computational approaches to gene curation, bioinformatics tools for the analysis of transcriptomic data, the design of databases, the application of algorithms in deciphering morphological changes produced by RNA interference (RNAi) approaches, and the interpretation of regeneration experiments are a new venture in Planaria research that is helping researchers across the globe understand the biology. We highlight the applications of Hidden Markov models (HMMs) in the design of computational tools and their applications in Planaria, decoding their complex biology.
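The HMM machinery highlighted in the review rests on a handful of standard algorithms; the Viterbi decoder below is a minimal, generic sketch (toy two-state model, not a Planaria-specific profile HMM):

```python
import numpy as np

def viterbi(obs, pi, A, B):
    """Most likely hidden-state path for an observation sequence.
    pi: initial probabilities; A: transitions; B[state, symbol]: emissions."""
    n_states, T = len(pi), len(obs)
    logd = np.log(pi) + np.log(B[:, obs[0]])        # log-probabilities at t=0
    back = np.zeros((T, n_states), dtype=int)
    for t in range(1, T):
        scores = logd[:, None] + np.log(A)          # scores[i, j]: i -> j
        back[t] = scores.argmax(axis=0)
        logd = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(logd.argmax())]
    for t in range(T - 1, 0, -1):                   # backtrack through pointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

Profile HMMs used for gene and domain curation are elaborations of exactly this recursion, with match/insert/delete states in place of the two toy states.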
Ab Initio Calculation of Hyperfine Interaction Parameters: Recent Evolutions, Recent Examples
International Nuclear Information System (INIS)
Cottenier, Stefaan; Vanhoof, Veerle; Torumba, Doru; Bellini, Valerio; Cakmak, Mehmet; Rots, Michel
2004-01-01
For some years already, ab initio calculations based on Density Functional Theory (DFT) belong to the toolbox of the field of hyperfine interaction studies. In this paper, the standard ab initio approach is schematically sketched. New features, methods and possibilities that broke through during the past few years are listed, and their relation to the standard approach is explained. All this is illustrated by some highlights of recent ab initio work done by the Nuclear Condensed Matter Group at the K.U.Leuven.
Software Defects, Scientific Computation and the Scientific Method
CERN. Geneva
2011-01-01
Computation has rapidly grown in the last 50 years so that in many scientific areas it is the dominant partner in the practice of science. Unfortunately, unlike the experimental sciences, it does not adhere well to the principles of the scientific method as espoused by, for example, the philosopher Karl Popper. Such principles are built around the notions of deniability and reproducibility. Although much research effort has been spent on measuring the density of software defects, much less has been spent on the more difficult problem of measuring their effect on the output of a program. This talk explores these issues with numerous examples suggesting how this situation might be improved to match the demands of modern science. Finally it develops a theoretical model based on an amalgam of statistical mechanics and Hartley/Shannon information theory which suggests that software systems have strong implementation independent behaviour and supports the widely observed phenomenon that defects clust...
Computation of Hemagglutinin Free Energy Difference by the Confinement Method
2017-01-01
Hemagglutinin (HA) mediates membrane fusion, a crucial step during influenza virus cell entry. How many HAs are needed for this process is still subject to debate. To aid in this discussion, the confinement free energy method was used to calculate the conformational free energy difference between the extended intermediate and postfusion state of HA. Special care was taken to comply with the general guidelines for free energy calculations, thereby obtaining convergence and demonstrating reliability of the results. The energy that one HA trimer contributes to fusion was found to be 34.2 ± 3.4 kBT, similar to the known contributions from other fusion proteins. Although computationally expensive, the technique used is a promising tool for the further energetic characterization of fusion protein mechanisms. Knowledge of the energetic contributions per protein, and of conserved residues that are crucial for fusion, aids in the development of fusion inhibitors for antiviral drugs. PMID:29151344
Conference on Boundary and Interior Layers : Computational and Asymptotic Methods
Stynes, Martin; Zhang, Zhimin
2017-01-01
This volume collects papers associated with lectures that were presented at the BAIL 2016 conference, which was held from 14 to 19 August 2016 at Beijing Computational Science Research Center and Tsinghua University in Beijing, China. It showcases the variety and quality of current research into numerical and asymptotic methods for theoretical and practical problems whose solutions involve layer phenomena. The BAIL (Boundary And Interior Layers) conferences, held usually in even-numbered years, bring together mathematicians and engineers/physicists whose research involves layer phenomena, with the aim of promoting interaction between these often-separate disciplines. These layers appear as solutions of singularly perturbed differential equations of various types, and are common in physical problems, most notably in fluid dynamics. This book is of interest for current researchers from mathematics, engineering and physics whose work involves the accurate approximation of solutions of singularly perturbed diffe...
Computational Methods for Sensitivity and Uncertainty Analysis in Criticality Safety
International Nuclear Information System (INIS)
Broadhead, B.L.; Childs, R.L.; Rearden, B.T.
1999-01-01
Interest in the sensitivity methods that were developed and widely used in the 1970s (the FORSS methodology at ORNL, among others) has increased recently as a result of their potential use in criticality safety data validation procedures to define computational bias, uncertainties and area(s) of applicability. Functional forms of the resulting sensitivity coefficients can be used as formal parameters in determining the applicability of benchmark experiments to their corresponding industrial application areas. In order for these techniques to be generally useful to the criticality safety practitioner, the procedures governing their use had to be updated and simplified. This paper describes the resulting sensitivity analysis tools that have been generated for potential use by the criticality safety community.
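The sensitivity coefficients discussed here are relative derivatives of the form S_i = (x_i/k)(dk/dx_i). As a hedged illustration, they can be estimated by central finite differences; the toy multiplication-factor model and parameter values below are invented stand-ins, not FORSS or SCALE output:

```python
# Relative sensitivity coefficients S_i = (x_i / k) * dk/dx_i,
# estimated by central finite differences on a toy model
# (a hypothetical stand-in for a transport calculation).

def k_model(sigma_f, sigma_a, leakage):
    # Toy k-effective: production over absorption plus leakage
    return sigma_f / (sigma_a + leakage)

def sensitivities(f, x, h=1e-6):
    """Return S_i = (x_i / f(x)) * df/dx_i via central differences."""
    f0 = f(*x)
    coeffs = []
    for i, xi in enumerate(x):
        xp = list(x); xp[i] = xi * (1 + h)
        xm = list(x); xm[i] = xi * (1 - h)
        dfdx = (f(*xp) - f(*xm)) / (2 * h * xi)
        coeffs.append(xi * dfdx / f0)
    return coeffs

x = (1.2, 0.8, 0.2)            # sigma_f, sigma_a, leakage (illustrative)
S = sensitivities(k_model, x)
# Analytically for this model: S_f = 1.0, S_a = -0.8, S_leak = -0.2
```

A coefficient of 1.0 means a 1% change in the parameter produces a 1% change in k, which is the form in which benchmark applicability comparisons are made.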
Statistical physics and computational methods for evolutionary game theory
Javarone, Marco Alberto
2018-01-01
This book presents an introduction to Evolutionary Game Theory (EGT), an emerging field in the area of complex systems attracting the attention of researchers from disparate scientific communities. EGT allows one to represent and study several complex phenomena, such as the emergence of cooperation in social systems, the role of conformity in shaping the equilibrium of a population, and the dynamics in biological and ecological systems. Since EGT models belong to the area of complex systems, statistical physics constitutes a fundamental ingredient for investigating their behavior. At the same time, the complexity of some EGT models, such as those realized by means of agent-based methods, often requires the implementation of numerical simulations. Therefore, beyond providing an introduction to EGT, this book gives a brief overview of the main statistical physics tools (such as phase transitions and the Ising model) and computational strategies for simulating evolutionary games (such as Monte Carlo algor...
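As a flavour of the numerical side of EGT, the following minimal sketch integrates the replicator equation for a one-shot prisoner's dilemma; the payoff values T=5, R=3, P=1, S=0 are conventional textbook assumptions, not taken from the book:

```python
# Replicator dynamics for a two-strategy prisoner's dilemma -- a
# minimal example of the kind of numerical experiment used in EGT.
# Assumed payoffs: T=5 (temptation), R=3 (reward), P=1 (punishment),
# S=0 (sucker).

T_, R_, P_, S_ = 5.0, 3.0, 1.0, 0.0

def step(x, dt=0.01):
    """One Euler step of the replicator equation for the
    cooperator fraction x."""
    f_c = R_ * x + S_ * (1 - x)       # cooperator fitness
    f_d = T_ * x + P_ * (1 - x)       # defector fitness
    f_bar = x * f_c + (1 - x) * f_d   # mean population fitness
    return x + dt * x * (f_c - f_bar)

x = 0.9                               # start with 90% cooperators
for _ in range(10000):
    x = step(x)
# Defection dominates the one-shot PD, so x decays toward 0.
```

Spatial or network structure (the lattice Monte Carlo simulations the book covers) is exactly what can rescue cooperation from this well-mixed outcome.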
Activation method for measuring the neutron spectra parameters. Computer software
International Nuclear Information System (INIS)
Efimov, B.V.; Ionov, V.S.; Konyaev, S.I.; Marin, S.V.
2005-01-01
The mathematical formulation of the problem of determining the spectral characteristics of neutron fields using the unified activation detectors (UKD) developed at RRC KI is described. The method proposed by the authors for processing the results of activation measurements and for calculating the parameters used to estimate the characteristics of neutron spectra is discussed. Features of the processing of the experimental activation data obtained with the UKD are considered. The UKD activation detectors contain several specially selected isotopes which, upon irradiation, produce distinct activity peaks within the overall activity spectrum. Computer processing of the measurement results is applied to determine spectrum parameters for nuclear reactor installations with thermal and near-thermal neutron spectra. An example of the processing of measurement data obtained at the RRC KI research reactor F-1 is given [ru
A computed microtomography method for understanding epiphyseal growth plate fusion
Staines, Katherine A.; Madi, Kamel; Javaheri, Behzad; Lee, Peter D.; Pitsillides, Andrew A.
2017-12-01
The epiphyseal growth plate is a developmental region responsible for linear bone growth, in which chondrocytes undertake a tightly regulated series of biological processes. Concomitant with the cessation of growth and sexual maturation, the human growth plate undergoes progressive narrowing, and ultimately disappears. Despite the crucial role of this growth plate fusion ‘bridging’ event, the precise mechanisms by which it is governed are complex and yet to be established. Progress is likely hindered by the current methods for growth plate visualisation; these are invasive and largely rely on histological procedures. Here we describe our non-invasive method utilising synchrotron x-ray computed microtomography for the examination of growth plate bridging, which ultimately leads to its closure coincident with termination of further longitudinal bone growth. We then apply this method to a dataset obtained from a benchtop microcomputed tomography scanner to highlight its potential for wide usage. Furthermore, we conduct finite element modelling at the micron-scale to reveal the effects of growth plate bridging on local tissue mechanics. Employment of these 3D analyses of growth plate bone bridging is likely to advance our understanding of the physiological mechanisms that control growth plate fusion.
Methods and computer codes for probabilistic sensitivity and uncertainty analysis
International Nuclear Information System (INIS)
Vaurio, J.K.
1985-01-01
This paper describes the methods and applications experience with two computer codes that are now available from the National Energy Software Center at Argonne National Laboratory. The purpose of the SCREEN code is to identify a group of the most important input variables of a code that has many (tens, hundreds) of input variables with uncertainties, and to do this without relying on judgment or exhaustive sensitivity studies. The purpose of the PROSA-2 code is to propagate uncertainties and calculate the distributions of interesting output variable(s) of a safety analysis code using response surface techniques, based on the same runs used for screening. Several applications are discussed, but the codes are generic, not tailored to any specific safety application code. They are compatible in terms of input/output requirements but also independent of each other; e.g., PROSA-2 can be used without first using SCREEN if a set of important input variables has been selected by other methods. Also, although SCREEN can select cases to be run (by random sampling), a user can select cases by other methods if he so prefers, and still use the rest of SCREEN for identifying important input variables.
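The response-surface idea can be sketched as follows. The "expensive code" here is a hypothetical stand-in, and the quadratic basis and design grid are illustrative choices, not PROSA-2's actual algorithm: a surrogate is fitted to a handful of code runs, then many random samples are pushed through the cheap surrogate instead of the code.

```python
import numpy as np

# Response-surface uncertainty propagation sketch: fit a quadratic
# surrogate to a few "code runs", then Monte Carlo sample the
# surrogate. The model function is a hypothetical stand-in for an
# expensive safety analysis code.

def expensive_code(x1, x2):
    return np.exp(0.1 * x1) + x2 ** 2

def basis(x1, x2):
    return np.column_stack([np.ones_like(x1), x1, x2,
                            x1**2, x1 * x2, x2**2])

# Design points: the only "code runs" needed (a 5x5 grid here)
g = np.linspace(-3, 3, 5)
X1, X2 = np.meshgrid(g, g)
x1d, x2d = X1.ravel(), X2.ravel()
coef, *_ = np.linalg.lstsq(basis(x1d, x2d),
                           expensive_code(x1d, x2d), rcond=None)

# Propagate 100000 random inputs through the cheap surrogate
rng = np.random.default_rng(0)
s1, s2 = rng.standard_normal(100000), rng.standard_normal(100000)
y_surrogate = basis(s1, s2) @ coef

# Reference: direct (expensive) propagation on the same samples
y_direct = expensive_code(s1, s2)
```

For a smooth response, the surrogate reproduces the output distribution at the cost of only the 25 design runs.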
Emerging Computational Methods for the Rational Discovery of Allosteric Drugs.
Wagner, Jeffrey R; Lee, Christopher T; Durrant, Jacob D; Malmstrom, Robert D; Feher, Victoria A; Amaro, Rommie E
2016-06-08
Allosteric drug development holds promise for delivering medicines that are more selective and less toxic than those that target orthosteric sites. To date, the discovery of allosteric binding sites and lead compounds has been mostly serendipitous, achieved through high-throughput screening. Over the past decade, structural data has become more readily available for larger protein systems and more membrane protein classes (e.g., GPCRs and ion channels), which are common allosteric drug targets. In parallel, improved simulation methods now provide better atomistic understanding of the protein dynamics and cooperative motions that are critical to allosteric mechanisms. As a result of these advances, the field of predictive allosteric drug development is now on the cusp of a new era of rational structure-based computational methods. Here, we review algorithms that predict allosteric sites based on sequence data and molecular dynamics simulations, describe tools that assess the druggability of these pockets, and discuss how Markov state models and topology analyses provide insight into the relationship between protein dynamics and allosteric drug binding. In each section, we first provide an overview of the various method classes before describing relevant algorithms and software packages.
Computation of rectangular source integral by rational parameter polynomial method
International Nuclear Information System (INIS)
Prabha, Hem
2001-01-01
Hubbell et al. (J. Res. Nat. Bureau Standards 64C (1960) 121) have obtained a series expansion for the calculation of the radiation field generated by a plane isotropic rectangular source (plaque), in which the leading term is the integral H(a,b). In this paper another integral I(a,b), which is related to the integral H(a,b), is solved by the rational parameter polynomial method. From I(a,b) we compute H(a,b). Using this method the integral I(a,b) is expressed in the form of a polynomial in a rational parameter: whereas a function f(x) is usually expressed in terms of x, in this method it is expressed in terms of x/(1+x). In this way the accuracy of the expression is good over a wide range of x compared to the earlier approach. The results for I(a,b) and H(a,b) are given for a sixth-degree polynomial and are found to be in good agreement with the results obtained by numerical integration. The accuracy could be increased either by increasing the degree of the polynomial or by dividing the range of integration. The results for H(a,b) and I(a,b) are given for values of b and a up to 2.0 and 20.0, respectively
Energy Technology Data Exchange (ETDEWEB)
Hoy, Erik P.; Mazziotti, David A., E-mail: damazz@uchicago.edu [Department of Chemistry and The James Franck Institute, The University of Chicago, Chicago, Illinois 60637 (United States)
2015-08-14
Tensor factorization of the 2-electron integral matrix is a well-known technique for reducing the computational scaling of ab initio electronic structure methods toward that of Hartree-Fock and density functional theories. The simplest factorization that maintains the positive semidefinite character of the 2-electron integral matrix is the Cholesky factorization. In this paper, we introduce a family of positive semidefinite factorizations that generalize the Cholesky factorization. Using an implementation of the factorization within the parametric 2-RDM method [D. A. Mazziotti, Phys. Rev. Lett. 101, 253002 (2008)], we study several inorganic molecules, alkane chains, and potential energy curves and find that this generalized factorization retains the accuracy and size extensivity of the Cholesky factorization, even in the presence of multi-reference correlation. The generalized family of positive semidefinite factorizations has potential applications to low-scaling ab initio electronic structure methods that treat electron correlation with a computational cost approaching that of the Hartree-Fock method or density functional theory.
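The Cholesky factorization named in the abstract can be sketched with an ordinary pivoted Cholesky routine; the matrix below is a random low-rank stand-in for a 2-electron integral matrix, not chemistry data:

```python
import numpy as np

# Pivoted Cholesky factorization of a positive semidefinite matrix,
# the simplest member of the family discussed in the abstract.
# Stopping after r steps yields a rank-r approximation M ~= L @ L.T,
# which is how 2-electron integral matrices are compressed in
# low-scaling methods.

def pivoted_cholesky(M, tol=1e-10):
    M = np.asarray(M, dtype=float)
    n = M.shape[0]
    d = np.diag(M).copy()          # remaining Schur-complement diagonal
    perm = np.arange(n)
    L = np.zeros((n, n))
    for k in range(n):
        j = k + np.argmax(d[perm[k:]])   # pivot on the largest diagonal
        perm[[k, j]] = perm[[j, k]]
        pk = perm[k]
        if d[pk] <= tol:                 # numerical rank reached
            return L[:, :k]
        L[pk, k] = np.sqrt(d[pk])
        rest = perm[k + 1:]
        L[rest, k] = (M[rest, pk] - L[rest, :k] @ L[pk, :k]) / L[pk, k]
        d[rest] -= L[rest, k] ** 2
    return L

# Rank-3 PSD test matrix (stands in for an ERI matrix)
rng = np.random.default_rng(1)
B = rng.standard_normal((8, 3))
M = B @ B.T
L = pivoted_cholesky(M)
# L has only 3 columns yet reconstructs M to round-off.
```

The factorization stops automatically at the numerical rank, which is the source of the computational savings the abstract refers to.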
Thermal, spectroscopic, and ab initio structural characterization of carprofen polymorphs.
Bruni, Giovanna; Gozzo, Fabia; Capsoni, Doretta; Bini, Marcella; Macchi, Piero; Simoncic, Petra; Berbenni, Vittorio; Milanese, Chiara; Girella, Alessandro; Ferrari, Stefania; Marini, Amedeo
2011-06-01
Commercial and recrystallized polycrystalline samples of carprofen, a nonsteroidal anti-inflammatory drug, were studied by thermal, spectroscopic, and structural techniques. Our investigations demonstrated that the recrystallized sample, stable at room temperature (RT), is a single polymorphic form of carprofen (polymorph I) that undergoes an isostructural polymorphic transformation on heating (to polymorph II). Polymorph II then remains metastable at ambient conditions. The commercial sample is instead a mixture of polymorphs I and II. The thermodynamic relationship between the two polymorphs was determined through the construction of an energy/temperature diagram. The ab initio structural determination performed on synchrotron X-ray powder diffraction patterns recorded at RT on both polymorphs allowed us to elucidate, for the first time, their crystal structures. Both crystallize in the monoclinic space group P2(1)/c, and the unit cell similarity index and the volumetric isostructurality index indicate that the temperature-induced polymorphic transformation I → II is isostructural. Polymorphs I and II are conformational polymorphs, sharing a very similar hydrogen bond network but with different conformations of the propanoic skeleton, which produce two different packings. The small conformational change agrees with the low transition enthalpy obtained by differential scanning calorimetry measurements and the small internal energy difference computed with density functional methods. Copyright © 2011 Wiley-Liss, Inc.
Ab initio study of point defects in magnesium oxide
International Nuclear Information System (INIS)
Gilbert, C. A.; Kenny, S. D.; Smith, R.; Sanville, E.
2007-01-01
Energetics of a variety of point defects in MgO have been considered from an ab initio perspective using density functional theory. The considered defects are isolated Schottky and Frenkel defects and interstitial pairs, along with a number of Schottky defects and di-interstitials. Comparisons were made between the density functional theory results and results obtained from empirical potential simulations and these generally showed good agreement. Both methodologies predicted the first nearest neighbor Schottky defects to be the most energetically favorable of the considered Schottky defects and that the first, second, and fifth nearest neighbor di-interstitials were of similar energy and were favored over the other di-interstitial configurations. Relaxed structures of the defects were analyzed, which showed that empirical potential simulations were accurately predicting the displacements of atoms surrounding di-interstitials, but were overestimating O atom displacement for Schottky defects. Transition barriers were computed for the defects using the nudged elastic band method. Vacancies and Schottky defects were found to have relatively high energy barriers, the majority of which were over 2 eV, in agreement with conclusions reached using empirical potentials. The lowest barriers for di-interstitial transitions were found to be for migration into a first nearest neighbor configuration. Charges were calculated using a Bader analysis and this found negligible charge transfer during the defect transitions and only small changes in the charges on atoms surrounding defects, indicating why fixed charge models work as well as they do
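The nudged elastic band method used above for the transition barriers can be sketched on an assumed two-dimensional model potential (minima at (±1, 0), saddle of energy 1 at (0, 1)); this illustrates the algorithm only, not the MgO calculations:

```python
import numpy as np

# Minimal nudged elastic band (NEB) sketch. A band of images between
# two minima is relaxed under the perpendicular component of the true
# force plus a spring force along the path tangent; the highest image
# then sits near the saddle point, giving the transition barrier.

def energy(p):
    x, y = p
    return (x**2 - 1)**2 + (y - (1 - x**2))**2

def gradient(p):
    x, y = p
    dyterm = y - (1 - x**2)
    return np.array([4*x*(x**2 - 1) + 4*x*dyterm, 2*dyterm])

n_img, k_spring, lr = 11, 2.0, 0.02
band = np.linspace([-1.0, 0.0], [1.0, 0.0], n_img)   # straight guess

for _ in range(4000):
    new = band.copy()
    for i in range(1, n_img - 1):
        tau = band[i + 1] - band[i - 1]
        tau /= np.linalg.norm(tau)               # path tangent
        g = gradient(band[i])
        g_perp = g - (g @ tau) * tau             # true force, perpendicular
        f_spring = k_spring * (np.linalg.norm(band[i + 1] - band[i])
                               - np.linalg.norm(band[i] - band[i - 1]))
        new[i] = band[i] - lr * g_perp + lr * f_spring * tau
    band = new

barrier = max(energy(p) for p in band)   # ~1.0, at the saddle (0, 1)
```

The straight initial guess passes through (0, 0) with energy 2; the band relaxes onto the curved minimum-energy path and the barrier estimate drops to the true saddle energy of 1.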
A fast iterative method for computing particle beams penetrating matter
International Nuclear Information System (INIS)
Boergers, C.
1997-01-01
Beams of microscopic particles penetrating matter are important in several fields. The application motivating our parameter choices in this paper is electron beam cancer therapy. Mathematically, a steady particle beam penetrating matter, or a configuration of several such beams, is modeled by a boundary value problem for a Boltzmann equation. Grid-based discretization of this problem leads to a system of algebraic equations. This system is typically very large because of the large number of independent variables in the Boltzmann equation (six if time independence is the only dimension-reducing assumption). If grid-based methods are to be practical at all, it is therefore necessary to develop fast solvers for the discretized problems. This is the subject of the present paper. For two-dimensional, mono-energetic, linear particle beam problems, we describe an iterative domain decomposition algorithm based on overlapping decompositions of the set of particle directions and computationally demonstrate its rapid, grid independent convergence. There appears to be no fundamental obstacle to generalizing the method to three-dimensional, energy dependent problems. 34 refs., 15 figs., 6 tabs
Global Seabed Materials and Habitats Mapped: The Computational Methods
Jenkins, C. J.
2016-02-01
What the seabed is made of has proven difficult to map on the scale of whole ocean basins. Direct sampling and observation can be augmented with proxy-parameter methods such as acoustics. Both avenues are essential to obtain enough detail and coverage, and also to validate the mapping methods. We focus on the direct observations such as samplings, photo and video, probes, diver and sub reports, and surveyed features. These are often in word-descriptive form: over 85% of the records for site materials are in this form, whether as sample/view descriptions or classifications, or described parameters such as consolidation, color, odor, structures and components. Descriptions are absolutely necessary for unusual materials and for processes - in other words, for research. The dbSEABED project not only holds the largest collection of seafloor materials data worldwide, it also uses advanced computational mathematics to obtain the best possible coverage and detail. These techniques include linguistic text analysis (e.g., Natural Language Processing, NLP), fuzzy set theory (FST), and machine learning (ML, e.g., Random Forest). They allow efficient and accurate import of huge datasets, thereby making the most of the data that exist. They merge quantitative and qualitative types of data into rich parameter sets, and extrapolate where the data are sparse for the best map production. The dbSEABED data resources are now very widely used worldwide in oceanographic research, environmental management, the geosciences, engineering and surveying.
Semi-coarsening multigrid methods for parallel computing
Energy Technology Data Exchange (ETDEWEB)
Jones, J.E.
1996-12-31
Standard multigrid methods are not well suited for problems with anisotropic coefficients which can occur, for example, on grids that are stretched to resolve a boundary layer. There are several different modifications of the standard multigrid algorithm that yield efficient methods for anisotropic problems. In the paper, we investigate the parallel performance of these multigrid algorithms. Multigrid algorithms which work well for anisotropic problems are based on line relaxation and/or semi-coarsening. In semi-coarsening multigrid algorithms a grid is coarsened in only one of the coordinate directions unlike standard or full-coarsening multigrid algorithms where a grid is coarsened in each of the coordinate directions. When both semi-coarsening and line relaxation are used, the resulting multigrid algorithm is robust and automatic in that it requires no knowledge of the nature of the anisotropy. This is the basic multigrid algorithm whose parallel performance we investigate in the paper. The algorithm is currently being implemented on an IBM SP2 and its performance is being analyzed. In addition to looking at the parallel performance of the basic semi-coarsening algorithm, we present algorithmic modifications with potentially better parallel efficiency. One modification reduces the amount of computational work done in relaxation at the expense of using multiple coarse grids. This modification is also being implemented with the aim of comparing its performance to that of the basic semi-coarsening algorithm.
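The structural difference between semi-coarsening and full coarsening shows up already in the grid-transfer operators. A minimal sketch, with smoothing and coarse-grid solves omitted, showing only the one-directional transfers and the resulting hierarchy:

```python
import numpy as np

# Semi-coarsening grid transfers: the grid is coarsened in the
# x-direction only, leaving the y-resolution unchanged -- the key
# structural difference from full coarsening, where both directions
# are halved.

def restrict_x(u):
    """Vertex-centred full weighting along axis 0 only:
    (nx, ny) -> ((nx + 1) // 2, ny), nx odd."""
    c = u[::2].copy()
    c[1:-1] = 0.25 * u[1:-2:2] + 0.5 * u[2:-1:2] + 0.25 * u[3::2]
    return c

def prolong_x(c):
    """Linear interpolation along axis 0 only."""
    f = np.zeros((2 * c.shape[0] - 1,) + c.shape[1:])
    f[::2] = c
    f[1::2] = 0.5 * (c[:-1] + c[1:])
    return f

# Semi-coarsened hierarchy: (33, 33) -> (17, 33) -> (9, 33) -> ...
u = np.ones((33, 33))
shapes = [u.shape]
while u.shape[0] > 3:
    u = restrict_x(u)
    shapes.append(u.shape)
# Both transfers reproduce a constant field exactly.
```

In a full semi-coarsening solver these transfers are combined with line relaxation in the uncoarsened direction, which is what makes the method robust for anisotropic problems without knowing the anisotropy direction in advance.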
Particular application of methods of AdaBoost and LBP to the problems of computer vision
Волошин, Микола Володимирович
2012-01-01
The application of the AdaBoost method and the local binary pattern (LBP) method to different areas of computer vision, such as personality identification and computer iridology, is considered in the article. The goal of the research is to develop error-correcting methods and systems for applications of computer vision, and of computer iridology in particular. The article also considers the problem of colour spaces, which are used as a filter and for pre-processing of the images. Method of AdaB...
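The LBP descriptor mentioned above can be computed in a few lines. This is the basic 8-neighbour variant on a 3x3 window, one of several LBP definitions in the literature:

```python
import numpy as np

# 8-neighbour local binary pattern (LBP): each interior pixel is
# encoded by comparing its 8 neighbours with the centre value and
# packing the comparison bits into one byte. Histograms of these
# codes are the texture features typically fed to AdaBoost.

def lbp(img):
    img = np.asarray(img, dtype=float)
    c = img[1:-1, 1:-1]                     # centre pixels
    # neighbour offsets, clockwise from the top-left corner
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = np.zeros_like(c, dtype=int)
    for bit, (dy, dx) in enumerate(offsets):
        nb = img[1 + dy: img.shape[0] - 1 + dy,
                 1 + dx: img.shape[1] - 1 + dx]
        code |= (nb >= c).astype(int) << bit
    return code
```

On a constant image every neighbour equals the centre, so every interior code is 255; a left-to-right intensity ramp gives a single uniform code, illustrating LBP's invariance to monotonic brightness changes.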
Non-unitary probabilistic quantum computing circuit and method
Williams, Colin P. (Inventor); Gingrich, Robert M. (Inventor)
2009-01-01
A quantum circuit performing quantum computation in a quantum computer. A chosen transformation of an initial n-qubit state is probabilistically obtained. The circuit comprises a unitary quantum operator obtained from a non-unitary quantum operator, operating on an n-qubit state and an ancilla state. When operation on the ancilla state provides a success condition, computation is stopped. When operation on the ancilla state provides a failure condition, computation is performed again on the ancilla state and the n-qubit state obtained in the previous computation, until a success condition is obtained.
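One standard way to realize this idea numerically is a unitary dilation: a non-unitary contraction M is embedded in a larger unitary acting on the system plus one ancilla qubit, and the ancilla measurement outcome signals success or failure. A sketch (the operator M below is an arbitrary example, not from the patent):

```python
import numpy as np

# Unitary dilation of a non-unitary operator M (a contraction,
# i.e. largest singular value <= 1). Measuring the ancilla in |0>
# ("success") leaves the system in M|psi>/||M|psi>||; on "failure"
# the computation would be repeated, as in the abstract.

def sqrtm_psd(A):
    """Square root of a Hermitian positive semidefinite matrix."""
    w, V = np.linalg.eigh(A)
    return (V * np.sqrt(np.clip(w, 0, None))) @ V.conj().T

def dilate(M):
    """Return the unitary [[M, sqrt(I-MM+)], [sqrt(I-M+M), -M+]]."""
    I = np.eye(M.shape[0])
    return np.block([[M, sqrtm_psd(I - M @ M.conj().T)],
                     [sqrtm_psd(I - M.conj().T @ M), -M.conj().T]])

M = np.array([[0.6, 0.2],
              [0.0, 0.5]])              # non-unitary contraction
U = dilate(M)

psi = np.array([1.0, 0.0])
full = U @ np.concatenate([psi, np.zeros(2)])   # ancilla starts in |0>
success_prob = np.linalg.norm(full[:2]) ** 2    # equals ||M psi||^2
```

The ancilla-|0> block of the output is exactly M applied to the input state, so repeating on failure implements the chosen non-unitary transformation probabilistically.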
Nuclear power reactor analysis, methods, algorithms and computer programs
International Nuclear Information System (INIS)
Matausek, M.V
1981-01-01
Full text: For a developing country buying its first nuclear power plants from a foreign supplier, disregarding the type and scope of the contract, there is a certain number of activities which have to be performed by local stuff and domestic organizations. This particularly applies to the choice of the nuclear fuel cycle strategy and the choice of the type and size of the reactors, to bid parameters specification, bid evaluation and final safety analysis report evaluation, as well as to in-core fuel management activities. In the Nuclear Engineering Department of the Boris Kidric Institute of Nuclear Sciences (NET IBK) the continual work is going on, related to the following topics: cross section and resonance integral calculations, spectrum calculations, generation of group constants, lattice and cell problems, criticality and global power distribution search, fuel burnup analysis, in-core fuel management procedures, cost analysis and power plant economics, safety and accident analysis, shielding problems and environmental impact studies, etc. The present paper gives the details of the methods developed and the results achieved, with the particular emphasis on the NET IBK computer program package for the needs of planning, construction and operation of nuclear power plants. The main problems encountered so far were related to small working team, lack of large and powerful computers, absence of reliable basic nuclear data and shortage of experimental and empirical results for testing theoretical models. Some of these difficulties have been overcome thanks to bilateral and multilateral cooperation with developed countries, mostly through IAEA. It is the authors opinion, however, that mutual cooperation of developing countries, having similar problems and similar goals, could lead to significant results. Some activities of this kind are suggested and discussed. (author)
Justification of computational methods to ensure information management systems
Directory of Open Access Journals (Sweden)
E. D. Chertov
2016-01-01
Full Text Available Summary. Due to the diversity and complexity of the organizational management tasks of a large enterprise, the construction of an information management system requires the establishment of interconnected complexes of means that implement, in the most efficient way, the collection, transfer, accumulation and processing of the information needed by decision makers of various ranks in the governance process. The main trends in the construction of integrated logistics management information systems can be considered to be: the creation of integrated data-processing systems by centralizing the storage and processing of data arrays; the organization of computer systems that realize time-sharing; the aggregate-block principle of integrated logistics; and the use of a wide range of peripheral devices with unified information and hardware interfaces. Main attention is paid to the systematic study of the complex of technical support: in particular, the definition of quality criteria for the operation of the technical complex, the development of methods for analysing the information base of management information systems, the definition of requirements for the technical means, and methods for the structural synthesis of the major subsystems of integrated logistics. Thus, the aim is to study integrated logistics management information systems on the basis of a systematic approach, and to develop a number of methods of analysis and synthesis of complex logistics that are suitable for use in the practice of engineering systems design. The objective function of the complex logistics management information system is to gather, transmit and process specified amounts of information within regulated time intervals, with the required degree of accuracy, while minimizing the reduced costs of establishing and operating the technical complex. Achieving this objective function requires a certain organization of the interaction of information
26 CFR 1.167(b)-0 - Methods of computing depreciation.
2010-04-01
26 CFR 1.167(b)-0 Methods of computing depreciation. (a) In general. Any reasonable and consistently applied method of computing depreciation may be used or continued in use under section 167. Regardless of the...
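Two of the methods the regulation goes on to enumerate, straight line and declining balance, amount to the following arithmetic (a simplified sketch, not tax advice; conventions such as salvage-value limits and mid-year placement are omitted):

```python
# Two depreciation methods allowed under section 167, in their
# simplest annual form.

def straight_line(cost, salvage, life):
    """Equal deduction each year over the useful life."""
    return [(cost - salvage) / life] * life

def declining_balance(cost, life, rate=2.0):
    """Fixed percentage of the remaining basis each year
    (rate=2.0 is 200% declining balance)."""
    basis, schedule = cost, []
    for _ in range(life):
        deduction = basis * rate / life
        schedule.append(deduction)
        basis -= deduction
    return schedule

sl = straight_line(10000, 0, 5)     # 2000 per year for 5 years
db = declining_balance(10000, 5)    # 4000, 2400, 1440, ...
```

Declining balance front-loads the deductions, which is why the regulation treats the choice of method as significant even though total depreciation is capped either way.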
Use of digital computers for correction of gamma method and neutron-gamma method indications
International Nuclear Information System (INIS)
Lakhnyuk, V.M.
1978-01-01
The program for the NAIRI-S computer is described, which is intended for accounting for and eliminating the effects of secondary processes when interpreting gamma and neutron-gamma logging indications. With slight corrections it is possible to use the program as a mathematical basis for logging diagram standardization by the method of multidimensional regression analysis and for the estimation of rock reservoir properties
International Nuclear Information System (INIS)
Solomonik, V.G.; Marochko, O.Yu.
2000-01-01
Structure and vibrational spectra of MHal3 molecules (M = Sc, Y, La, Lu; Hal = F, Cl, Br, I) are studied by the CISD+Q method. It is ascertained that the equilibrium configuration of the nuclei in all the molecules except LaF3 is planar (D3h symmetry), while that of the LaF3 molecule is pyramidal (C3v symmetry). The results of the calculations are compared with previously published experimental data. The band assignments in the IR spectra of the ScBr3, YF3 and YCl3 molecules have been corrected [ru
International Nuclear Information System (INIS)
Garrett, W.R.
1979-01-01
Through the use of a molecular pseudopotential method, we determine the approximate magnitudes of the errors that result when electron affinity determinations of polar negative ions are made through ab initio calculations in which the basis set used yields inappropriate values for the permanent and induced dipole moments of the neutral molecule. These results should prove useful in assessing the adequacy of basis sets in ab initio calculations of molecular electron affinities for simple linear polar molecules
Methodical Approaches to Teaching of Computer Modeling in Computer Science Course
Rakhimzhanova, B. Lyazzat; Issabayeva, N. Darazha; Khakimova, Tiyshtik; Bolyskhanova, J. Madina
2015-01-01
The purpose of this study was to justify a technique for forming a representation of modeling methodology in computer science lessons. The necessity of studying computer modeling is that the current trends of strengthening the general-education and worldview functions of computer science define the necessity of additional research of the…
Recent advances in computational structural reliability analysis methods
Thacker, Ben H.; Wu, Y.-T.; Millwater, Harry R.; Torng, Tony Y.; Riha, David S.
1993-10-01
The goal of structural reliability analysis is to determine the probability that the structure will adequately perform its intended function when operating under the given environmental conditions. Thus, the notion of reliability admits the possibility of failure. Given the fact that many different modes of failure are usually possible, achievement of this goal is a formidable task, especially for large, complex structural systems. The traditional (deterministic) design methodology attempts to assure reliability by the application of safety factors and conservative assumptions. However, the safety factor approach lacks a quantitative basis in that the level of reliability is never known and usually results in overly conservative designs because of compounding conservatisms. Furthermore, problem parameters that control the reliability are not identified, nor their importance evaluated. A summary of recent advances in computational structural reliability assessment is presented. A significant level of activity in the research and development community was seen recently, much of which was directed towards the prediction of failure probabilities for single mode failures. The focus is to present some early results and demonstrations of advanced reliability methods applied to structural system problems. This includes structures that can fail as a result of multiple component failures (e.g., a redundant truss), or structural components that may fail due to multiple interacting failure modes (e.g., excessive deflection, resonant vibration, or creep rupture). From these results, some observations and recommendations are made with regard to future research needs.
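The contrast drawn above between safety factors and quantified reliability can be made concrete with the simplest possible limit state, g = R - S with normally distributed capacity and demand (the parameter values below are illustrative):

```python
import math
import numpy as np

# Monte Carlo estimate of a failure probability for the limit state
# g = R - S (capacity minus demand), compared with the exact
# normal-theory result. This quantifies what a safety factor alone
# cannot: the actual probability of failure.

rng = np.random.default_rng(42)
n = 400_000
R = rng.normal(10.0, 1.0, n)        # capacity (assumed distribution)
S = rng.normal(7.0, 1.0, n)         # demand (assumed distribution)
pf_mc = np.mean(R - S < 0.0)        # fraction of failed realizations

# Exact result for independent normals: pf = Phi(-beta)
beta = (10.0 - 7.0) / math.sqrt(1.0**2 + 1.0**2)   # reliability index
pf_exact = 0.5 * math.erfc(beta / math.sqrt(2))
```

Note the central safety factor here is a comfortable-looking 10/7 ≈ 1.4, yet the failure probability is about 1.7%; this is exactly the kind of information the deterministic approach leaves unquantified.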
Computational Methods for Physical Model Information Management: Opening the Aperture
International Nuclear Information System (INIS)
Moser, F.; Kirgoeze, R.; Gagne, D.; Calle, D.; Murray, J.; Crowley, J.
2015-01-01
The volume, velocity and diversity of data available to analysts are growing exponentially, increasing the demands on analysts to stay abreast of developments in their areas of investigation. In parallel to the growth in data, technologies have been developed to efficiently process, store, and effectively extract information suitable for the development of a knowledge base capable of supporting inferential (decision logic) reasoning over semantic spaces. These technologies and methodologies, in effect, allow for automated discovery and mapping of information to specific steps in the Physical Model (Safeguards' standard reference for the Nuclear Fuel Cycle). This paper will describe and demonstrate an integrated service under development at the IAEA that utilizes machine learning techniques, computational natural language models, Bayesian methods and semantic/ontological reasoning capabilities to process large volumes of (streaming) information and associate relevant, discovered information to the appropriate process step in the Physical Model. The paper will detail how this capability will consume open source and controlled information sources and be integrated with other capabilities within the analysis environment, and provide the basis for a semantic knowledge base suitable for hosting future mission focused applications. (author)
THE METHOD OF DESIGNING ASSISTED ON COMPUTER OF THE
Directory of Open Access Journals (Sweden)
LUCA Cornelia
2015-05-01
Full Text Available The shoe last is the basis of footwear sole design. Shoe lasts have irregular shapes with various curves that cannot be represented by a simple mathematical function. In order to design footwear soles, it is necessary to take some base contours from the shoe last; these contours are obtained with high precision in a 3D CAD system. This paper presents a computer-assisted method for designing footwear soles. The shoe last is copied using a 3D digitizer: the shoe last is positioned on the data-gathering peripheral, which automatically follows the last's surface. The wire network obtained through digitizing is numerically interpolated with interpolation functions in order to obtain the spatial numerical shape of the shoe last. The 3D design of the sole is carried out on this numerical shape in the following steps: construction of the sole's surface, generation of the lateral surface of the sole's shape, construction of the linking surface between the lateral and plantar sides of the sole and of the sole's margin, and design of the skid-proof area. The main advantages of the design method are its precision, the 3D visualization of the sole, and the possibility of making a well-founded decision on the acceptance of a new sole pattern.
A method of paralleling computer calculation for two-dimensional kinetic plasma model
International Nuclear Information System (INIS)
Brazhnik, V.A.; Demchenko, V.V.; Dem'yanov, V.G.; D'yakov, V.E.; Ol'shanskij, V.V.; Panchenko, V.I.
1987-01-01
A method for parallel computation and the OSIRIS program complex that implements it, designed for numerical plasma simulation by the macroparticle method, are described. The calculation can be carried out either on one computer or simultaneously on two BESM-6 computers, which is made possible by a package of interacting programs running on each machine. Program interaction on each computer is based on the event techniques implemented in the DISPAK operating system. Parallel computation on two BESM-6 computers speeds up the calculation by a factor of 1.5
Energy Technology Data Exchange (ETDEWEB)
Assary, Rajeev S.; Kim, Taijin; Low, John; Greeley, Jeffrey P.; Curtiss, Larry A.
2012-12-28
Molecular level understanding of acid-catalysed conversion of sugar molecules to platform chemicals such as hydroxy-methyl furfural (HMF), furfuryl alcohol (FAL), and levulinic acid (LA) is essential for efficient biomass conversion. In this paper, the high-level G4MP2 method along with the SMD solvation model is employed to understand detailed reaction energetics of the acid-catalysed decomposition of glucose and fructose to HMF. Based on protonation free energies of various hydroxyl groups of the sugar molecule, the relative reactivity of gluco-pyranose, fructo-pyranose and fructo-furanose are predicted. Calculations suggest that, in addition to the protonated intermediates, a solvent assisted dehydration of one of the fructo-furanosyl intermediates is a competing mechanism, indicating the possibility of multiple reaction pathways for fructose to HMF conversion in aqueous acidic medium. Two reaction pathways were explored to understand the thermodynamics of glucose to HMF; the first one is initiated by the protonation of a C2–OH group and the second one through an enolate intermediate involving acyclic intermediates. Additionally, a pathway is proposed for the formation of furfuryl alcohol from glucose initiated by the protonation of a C2–OH position, which includes a C–C bond cleavage, and the formation of formic acid. The detailed free energy landscapes predicted in this study can be used as benchmarks for further exploring the sugar decomposition reactions, prediction of possible intermediates, and finally designing improved catalysts for biomass conversion chemistry in the future.
Augmented wave ab initio EFG calculations: some methodological warnings
International Nuclear Information System (INIS)
Errico, Leonardo A.; Renteria, Mario; Petrilli, Helena M.
2007-01-01
We discuss some accuracy aspects inherent to ab initio electronic structure calculations in the understanding of nuclear quadrupole interactions. We use the projector augmented wave method to study the electric-field gradient (EFG) at both Sn and O sites in the prototype cases SnO and SnO2. The term ab initio is used in the standard sense of the so-called first-principles methods in the framework of Density Functional Theory. As the main contributions of EFG calculations to problems in condensed matter physics are related to structural characterizations on the atomic scale, we discuss the 'state of the art' of theoretical EFG calculations and give a brief critical review of the subject, calling attention to some fundamental theoretical aspects.
Augmented wave ab initio EFG calculations: some methodological warnings
Energy Technology Data Exchange (ETDEWEB)
Errico, Leonardo A. [Departamento de Fisica-IFLP (CONICET), Facultad de Ciencias Exactas, Universidad Nacional de La Plata, CC67 (1900) La Plata (Argentina); Renteria, Mario [Departamento de Fisica-IFLP (CONICET), Facultad de Ciencias Exactas, Universidad Nacional de La Plata, CC67 (1900) La Plata (Argentina); Petrilli, Helena M. [Instituto de Fisica-DFMT, Universidade de Sao Paulo, C.P. 66318, 05315-970 Sao Paulo, SP (Brazil)]. E-mail: hmpetril@macbeth.if.usp.br
2007-02-01
We discuss some accuracy aspects inherent to ab initio electronic structure calculations in the understanding of nuclear quadrupole interactions. We use the projector augmented wave method to study the electric-field gradient (EFG) at both Sn and O sites in the prototype cases SnO and SnO{sub 2}. The term ab initio is used in the standard sense of the so-called first-principles methods in the framework of Density Functional Theory. As the main contributions of EFG calculations to problems in condensed matter physics are related to structural characterizations on the atomic scale, we discuss the 'state of the art' of theoretical EFG calculations and give a brief critical review of the subject, calling attention to some fundamental theoretical aspects.
Overview of Computer Simulation Modeling Approaches and Methods
Robert E. Manning; Robert M. Itami; David N. Cole; Randy Gimblett
2005-01-01
The field of simulation modeling has grown greatly with recent advances in computer hardware and software. Much of this work has involved large scientific and industrial applications for which substantial financial resources are available. However, advances in object-oriented programming and simulation methodology, concurrent with dramatic increases in computer...
Computational Fluid Dynamics Methods and Their Applications in Medical Science
Directory of Open Access Journals (Sweden)
Kowalewski Wojciech
2016-12-01
Full Text Available As defined by the National Institutes of Health: “Biomedical engineering integrates physical, chemical, mathematical, and computational sciences and engineering principles to study biology, medicine, behavior, and health”. Many issues in this area are closely related to fluid dynamics. This paper provides an overview of the basic concepts concerning Computational Fluid Dynamics and its applications in medicine.
Ab Initio Calculations of the Electronic Structures and Biological Functions of Protein Molecules
Zheng, Haoping
2003-04-01
The self-consistent cluster-embedding (SCCE) calculation method reduces the computational effort from M^3 to about M^1 (where M is the number of atoms in the system) with unchanged calculation precision. Ab initio, all-electron calculation of the electronic structure and biological function of protein molecules thus becomes a reality, which will considerably benefit proteomics. Calculated results for two real protein molecules are presented: the trypsin inhibitor from the seeds of the squash Cucurbita maxima (CMTI-I, 436 atoms) and the Ascaris trypsin inhibitor (912 atoms, two three-dimensional structures). The reactive sites of the inhibitors are determined and explained, and the precision of the structure determination of the inhibitors is tested theoretically.
Carbon diffusion in molten uranium: an ab initio molecular dynamics study
Garrett, Kerry E.; Abrecht, David G.; Kessler, Sean H.; Henson, Neil J.; Devanathan, Ram; Schwantes, Jon M.; Reilly, Dallas D.
2018-04-01
In this work we used ab initio molecular dynamics within the framework of density functional theory and the projector-augmented wave method to study carbon diffusion in liquid uranium at temperatures above 1600 K. The electronic interactions of carbon and uranium were described using the local density approximation (LDA). The self-diffusion of uranium based on this approach is compared with literature computational and experimental results for liquid uranium. The temperature dependence of carbon and uranium diffusion in the melt was evaluated by fitting the resulting diffusion coefficients to an Arrhenius relationship. We found that the LDA calculated activation energy for carbon was nearly twice that of uranium: 0.55 ± 0.03 eV for carbon compared to 0.32 ± 0.04 eV for uranium. Structural analysis of the liquid uranium-carbon system is also discussed.
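The Arrhenius fitting step described in the abstract, extracting an activation energy from diffusion coefficients at several temperatures, can be sketched as follows; the prefactor, temperatures, and synthetic data are assumptions for illustration, with the 0.55 eV carbon activation energy used only as a target value to recover.

```python
import numpy as np

KB_EV = 8.617333262e-5  # Boltzmann constant in eV/K

def arrhenius_fit(T, D):
    """Fit D = D0 * exp(-Ea / (kB * T)) by linear regression of
    ln(D) against 1/T; returns (D0, Ea) with Ea in eV."""
    slope, intercept = np.polyfit(1.0 / np.asarray(T), np.log(np.asarray(D)), 1)
    return np.exp(intercept), -slope * KB_EV

# Illustrative data generated from Ea = 0.55 eV (the paper's carbon value);
# the prefactor D0 and the temperature set are hypothetical.
T = np.array([1600.0, 1800.0, 2000.0, 2200.0])
D = 1.0e-7 * np.exp(-0.55 / (KB_EV * T))
D0, Ea = arrhenius_fit(T, D)
print(f"Ea ~ {Ea:.3f} eV")
```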
AB INITIO Modeling of Thermomechanical Properties of Mo-Based Alloys for Fossil Energy Conversion
Energy Technology Data Exchange (ETDEWEB)
Ching, Wai-Yim
2013-12-31
In this final scientific/technical report, covering the 3.5-year period that started on July 1, 2011, we report accomplishments in the study of the thermo-mechanical properties of Mo-based intermetallic compounds under NETL support. These include computational method development and investigation of the physical properties of Mo-based compounds and alloys. The main focus is on the mechanical and thermo-mechanical properties at high temperature, since these are the most crucial properties for the potential applications. In particular, the recent application of ab initio molecular dynamics (AIMD) simulations to the T1 (Mo{sub 5}Si{sub 3}) and T2 (Mo{sub 5}SiB{sub 2}) phases is highlighted as a route to alloy design for further improving their properties.
Sanfilippo, Antonio P [Richland, WA; Tratz, Stephen C [Richland, WA; Gregory, Michelle L [Richland, WA; Chappell, Alan R [Seattle, WA; Whitney, Paul D [Richland, WA; Posse, Christian [Seattle, WA; Baddeley, Robert L [Richland, WA; Hohimer, Ryan E [West Richland, WA
2011-10-11
Methods of defining ontologies, word disambiguation methods, computer systems, and articles of manufacture are described according to some aspects. In one aspect, a word disambiguation method includes accessing textual content to be disambiguated, wherein the textual content comprises a plurality of words individually comprising a plurality of word senses, for an individual word of the textual content, identifying one of the word senses of the word as indicative of the meaning of the word in the textual content, for the individual word, selecting one of a plurality of event classes of a lexical database ontology using the identified word sense of the individual word, and for the individual word, associating the selected one of the event classes with the textual content to provide disambiguation of a meaning of the individual word in the textual content.
International Nuclear Information System (INIS)
Satake, Shin-ichi; Kunugi, Tomoaki
2006-01-01
Scientific computational methods have advanced remarkably with the progress of nuclear development. They have served as the weft connecting the various realms of nuclear engineering, and an introductory course on advanced scientific computational methods and their applications to nuclear technologies has been prepared in serial form. This third issue introduces continuum simulation methods and their applications. Spectral methods and multi-interface calculation methods in fluid dynamics are reviewed. (T. Tanaka)
A Simple Method for Dynamic Scheduling in a Heterogeneous Computing System
Žumer, Viljem; Brest, Janez
2002-01-01
A simple method for dynamic scheduling on a heterogeneous computing system is proposed in this paper. It was implemented to minimize the parallel program execution time. The proposed method decomposes the program workload into computationally homogeneous subtasks, which may differ in size depending on the current load of each machine in the heterogeneous computing system.
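A minimal sketch of the decomposition idea, assuming each machine's current load is summarized by a single available-capacity number; the sizing rule and names are hypothetical illustrations, not the paper's algorithm.

```python
def decompose_workload(total, capacities):
    """Split `total` work units into per-machine chunks proportional
    to each machine's available capacity (an assumed load metric).
    Leftover units go to the highest-capacity machines so that all
    work is assigned."""
    cap_sum = sum(capacities)
    shares = [total * c // cap_sum for c in capacities]
    # distribute the integer remainder, largest capacity first
    order = sorted(range(len(capacities)), key=lambda i: -capacities[i])
    rest = total - sum(shares)
    for i in order[:rest]:
        shares[i] += 1
    return shares

print(decompose_workload(100, [4, 2, 1, 1]))  # → [51, 25, 12, 12]
```

In a dynamic scheduler the capacities would be re-measured between scheduling rounds, so the chunk sizes track the machines' changing loads.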
Energy Technology Data Exchange (ETDEWEB)
Bernard, St
1999-12-31
The quest for metallic hydrogen is a major goal for both theoretical and experimental condensed matter physics. Hydrogen and deuterium have been compressed up to 200 GPa in diamond anvil cells, without any clear evidence for metallic behaviour. Loubeyre has recently suggested that hydrogen could metallize, at pressures within experimental range, in a new Van der Waals compound: Ar(H{sub 2}){sub 2}, which is characterized at ambient pressure by an open and anisotropic sublattice of hydrogen molecules, stabilized by an argon skeleton. This thesis presents a detailed ab initio investigation, by Car-Parrinello molecular dynamics methods, of the evolution under pressure of this compound. In a last chapter, we go to much higher pressures and temperatures, in order to compare orbital and orbital-free ab initio methods for the dense hydrogen plasma. (author) 109 refs.
Collective rotation from ab initio theory
International Nuclear Information System (INIS)
Caprio, M.A.; Maris, P.; Vary, J.P.; Smith, R.
2015-01-01
Through ab initio approaches in nuclear theory, we may now seek to quantitatively understand the wealth of nuclear collective phenomena starting from the underlying internucleon interactions. No-core configuration interaction (NCCI) calculations for p-shell nuclei give rise to rotational bands, as evidenced by rotational patterns for excitation energies, electromagnetic moments and electromagnetic transitions. In this review, NCCI calculations of 7-9Be are used to illustrate and explore ab initio rotational structure, and the resulting predictions for rotational band properties are compared with experiment. We highlight the robustness of ab initio rotational predictions across different choices for the internucleon interaction. (author)
Modelling of elementary computer operations using the intellect method
Energy Technology Data Exchange (ETDEWEB)
Shabanov-kushnarenko, Yu P
1982-01-01
The formal apparatus of intellect theory is used to describe machine intelligence functions. A mathematical description of some simple computer operations is proposed, together with their machine realization as switching networks. 5 references.
Pair Programming as a Modern Method of Teaching Computer Science
Irena Nančovska Šerbec; Branko Kaučič; Jože Rugelj
2008-01-01
At the Faculty of Education, University of Ljubljana, we educate future computer science teachers. Besides didactic, pedagogical, mathematical and other interdisciplinary knowledge, students gain knowledge and skills in programming that are crucial for computer science teachers. In all courses, the main emphasis is on the acquisition of professional competences related to the teaching profession and the programming profile. The latter are selected according to the well-known document, the ACM C...
Hafner, Jürgen
2010-09-29
During the last 20 years computer simulations based on a quantum-mechanical description of the interactions between electrons and atomic nuclei have developed an increasingly important impact on materials science, not only in promoting a deeper understanding of the fundamental physical phenomena, but also enabling the computer-assisted design of materials for future technologies. The backbone of atomic-scale computational materials science is density-functional theory (DFT) which allows us to cast the intractable complexity of electron-electron interactions into the form of an effective single-particle equation determined by the exchange-correlation functional. Progress in DFT-based calculations of the properties of materials and of simulations of processes in materials depends on: (1) the development of improved exchange-correlation functionals and advanced post-DFT methods and their implementation in highly efficient computer codes, (2) the development of methods allowing us to bridge the gaps in the temperature, pressure, time and length scales between the ab initio calculations and real-world experiments and (3) the extension of the functionality of these codes, permitting us to treat additional properties and new processes. In this paper we discuss the current status of techniques for performing quantum-based simulations on materials and present some illustrative examples of applications to complex quasiperiodic alloys, cluster-support interactions in microporous acid catalysts and magnetic nanostructures.
Ab-Initio Description and Prediction of Properties of Carbon-Based and Other Non-Metallic Materials
Bagayoko, D.; Zhao, G. L.; Hasan, S.
2001-01-01
We have resolved the long-standing problem consisting of 30%-50% theoretical underestimates of the band gaps of non-metallic materials. We describe the Bagayoko, Zhao, and Williams (BZW) method that rigorously circumvents the basis-set and variational effect presumed to be a cause of these underestimates. We present ab-initio, computational results that are in agreement with experiment for diamond (C), silicon (Si), silicon carbides (3C-SiC and 4H-SiC), and other semiconductors (GaN, BaTiO3, AlN, ZnSe, ZnO). We illustrate the predictive capability of the BZW method in the case of the newly discovered cubic phase of silicon nitride (c-Si3N4) and of selected carbon nanotubes [(10,0) and (8,4)]. Our conclusion underscores the inescapable need for the BZW method in ab-initio calculations that employ a basis set in a variational approach. Current nanoscale trends amplify this need. We estimate that the potential impact of applications of the BZW method in advancing our understanding of nonmetallic materials, in informing experiment, and particularly in guiding device design and fabrication is simply priceless.
High performance computing and quantum trajectory method in CPU and GPU systems
International Nuclear Information System (INIS)
Wiśniewska, Joanna; Sawerwain, Marek; Leoński, Wiesław
2015-01-01
Nowadays, dynamic progress in computational techniques allows for the development of various methods which offer significant speed-ups of computations, especially those related to problems of quantum optics and quantum computing. In this work, we propose computational solutions which re-implement the quantum trajectory method (QTM) algorithm in modern parallel computation environments in which multi-core CPUs and modern many-core GPUs can be used. In consequence, the new computational routines are developed in a more effective way than those applied in other commonly used packages, such as the Quantum Optics Toolbox (QOT) for Matlab or QuTiP for Python.
Ab Initio Calculations Of Light-Ion Reactions
International Nuclear Information System (INIS)
Navratil, P.; Quaglioni, S.; Roth, R.; Horiuchi, W.
2012-01-01
The exact treatment of nuclei starting from the constituent nucleons and the fundamental interactions among them has been a long-standing goal in nuclear physics. In addition to the complex nature of nuclear forces, one faces the quantum-mechanical many-nucleon problem governed by an interplay between bound and continuum states. In recent years, significant progress has been made in ab initio nuclear structure and reaction calculations based on input from QCD, employing Hamiltonians constructed within chiral effective field theory. In this contribution, we present one such promising technique capable of describing simultaneously both bound and scattering states in light nuclei. By combining the resonating-group method (RGM) with the ab initio no-core shell model (NCSM), we complement a microscopic cluster approach with the use of realistic interactions and a microscopic and consistent description of the clusters. We discuss applications to light-nuclei scattering, radiative capture and fusion reactions.
Computational methods in several fields of radiation dosimetry
International Nuclear Information System (INIS)
Paretzke, Herwig G.
2010-01-01
Full text: Radiation dosimetry has to cope with a wide spectrum of applications and requirements in time and size. The ubiquitous presence of various radiation fields or radionuclides in the human home, working, urban or agricultural environment can lead to various dosimetric tasks, from radioecology, retrospective and predictive dosimetry, and personal dosimetry, up to measurements of radionuclide concentrations in environmental and food products and, finally, in persons and their excreta. In all these fields, measurements and computational models for the interpretation or understanding of observations are employed explicitly or implicitly. In this lecture some examples of our own computational models will be given from the various dosimetric fields, including a) radioecology (e.g. with the code systems based on ECOSYS, which was developed well before the Chernobyl reactor accident and tested thoroughly afterwards), b) internal dosimetry (improved metabolism models based on our own data), c) external dosimetry (with the new ICRU-ICRP voxel phantom developed by our lab), d) radiation therapy (with GEANT IV applied to mixed reactor radiation incident on individualized voxel phantoms), and e) some aspects of nanodosimetric track structure computations (not dealt with in the other presentation of this author). Finally, some general remarks will be made on the high explicit or implicit importance of computational models in radiation protection and other research fields dealing with large systems, as well as on the good scientific practices which should generally be followed when developing and applying such computational models
Ab initio nuclear structure - the large sparse matrix eigenvalue problem
Energy Technology Data Exchange (ETDEWEB)
Vary, James P; Maris, Pieter [Department of Physics, Iowa State University, Ames, IA, 50011 (United States); Ng, Esmond; Yang, Chao [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Sosonkina, Masha, E-mail: jvary@iastate.ed [Scalable Computing Laboratory, Ames Laboratory, Iowa State University, Ames, IA, 50011 (United States)
2009-07-01
The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem, where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10{sup 10} and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.
Ab initio nuclear structure - the large sparse matrix eigenvalue problem
International Nuclear Information System (INIS)
Vary, James P; Maris, Pieter; Ng, Esmond; Yang, Chao; Sosonkina, Masha
2009-01-01
The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem, where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10^10 and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.
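Problems of this kind, finding a few of the lowest eigenpairs of a huge sparse symmetric matrix, are typically attacked with Lanczos-type iterations. A toy sketch using SciPy's `eigsh` on a small sparse stand-in matrix (the matrix is purely illustrative, not a nuclear Hamiltonian, and real NCSM/NCFC matrices are many orders of magnitude larger):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Small sparse symmetric stand-in "Hamiltonian": increasing diagonal
# with nearest-neighbour couplings, stored in CSR format.
n = 1000
diag = np.arange(n, dtype=float)
off = -np.ones(n - 1)
H = sp.diags([off, diag, off], offsets=[-1, 0, 1], format="csr")

# Lanczos iteration for the few lowest eigenpairs (ground and
# low-lying states); 'SA' selects the smallest algebraic eigenvalues.
vals, vecs = eigsh(H, k=4, which="SA")
print(np.round(np.sort(vals), 4))
```

The key point is that only matrix-vector products with the sparse matrix are needed, which is what makes dimensions of 10^10 tractable on distributed-memory machines.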
A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system
Jia, Meng; Fan, Yang-Yu; Tian, Wei-Jian
2011-03-01
Attempting to find a fast computing method for the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction as the trajectories are extended. This means that the perturbed stable flow approaches the real trajectory as it extends over time. Based on this result, and combined with the improved DHT computing method, this paper reports a new fast DHT computing method that increases the computation speed without decreasing accuracy. Project supported by the National Natural Science Foundation of China (Grant No. 60872159).
A fast computing method to distinguish the hyperbolic trajectory of a non-autonomous system
International Nuclear Information System (INIS)
Jia Meng; Fan Yang-Yu; Tian Wei-Jian
2011-01-01
Attempting to find a fast computing method for the DHT (distinguished hyperbolic trajectory), this study first proves that the errors of the stable DHT can be ignored in the normal direction as the trajectories are extended. This means that the perturbed stable flow approaches the real trajectory as it extends over time. Based on this result, and combined with the improved DHT computing method, this paper reports a new fast DHT computing method that increases the computation speed without decreasing accuracy. (electromagnetism, optics, acoustics, heat transfer, classical mechanics, and fluid dynamics)
Decomposition and Cross-Product-Based Method for Computing the Dynamic Equation of Robots
Directory of Open Access Journals (Sweden)
Ching-Long Shih
2012-08-01
Full Text Available This paper aims to demonstrate a clear relationship between the Lagrange and Newton-Euler equations regarding computational methods for robot dynamics, from which we derive a systematic method suitable for either symbolic or on-line numerical computation. Based on the decomposition approach and the cross-product operation, a computing method for robot dynamics can be easily developed. The advantages of this computing framework are that it can be used for both symbolic and on-line numeric computation purposes, and that it can also be applied to biped systems as well as to some simple closed-chain robot systems.
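The cross-product operation at the core of such formulations is conveniently encoded as a skew-symmetric matrix, so that vector cross products become matrix-vector products that compose cleanly in symbolic and numeric dynamics code. A minimal sketch of this standard identity (not the paper's full algorithm):

```python
import numpy as np

def skew(w):
    """Skew-symmetric matrix [w]x such that skew(w) @ v == np.cross(w, v),
    the cross-product operator used in Newton-Euler style dynamics."""
    return np.array([[0.0, -w[2], w[1]],
                     [w[2], 0.0, -w[0]],
                     [-w[1], w[0], 0.0]])

w = np.array([1.0, 2.0, 3.0])  # e.g. an angular velocity vector
v = np.array([4.0, 5.0, 6.0])  # e.g. a position vector
assert np.allclose(skew(w) @ v, np.cross(w, v))
print(skew(w) @ v)
```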
Recent Advances in Computational Methods for Nuclear Magnetic Resonance Data Processing
Gao, Xin
2013-01-11
Although three-dimensional protein structure determination using nuclear magnetic resonance (NMR) spectroscopy is a computationally costly and tedious process that would benefit from advanced computational techniques, it has not garnered much research attention from specialists in bioinformatics and computational biology. In this paper, we review recent advances in computational methods for NMR protein structure determination. We summarize the advantages of and bottlenecks in the existing methods and outline some open problems in the field. We also discuss current trends in NMR technology development and suggest directions for research on future computational methods for NMR.
Simplified method of computation for fatigue crack growth
International Nuclear Information System (INIS)
Stahlberg, R.
1978-01-01
A procedure is described for drastically reducing the computation time in calculating crack growth for variable-amplitude fatigue loading when the loading sequence is periodic. In the proposed procedure, the crack growth r per loading period is approximated as a smooth function and its reciprocal is integrated, rather than summing crack growth cycle by cycle. The savings in computation time result because only a few pointwise values of r must be computed to generate an accurate interpolation function for numerical integration. Further time savings can be achieved by selecting the stress intensity coefficient (stress intensity divided by load) as the argument of r. Once r has been obtained as a function of the stress intensity coefficient for a given material, environment, and loading sequence, it applies to any configuration of cracked structure. (orig.) [de]
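The described procedure (evaluate the growth per loading period r at a few crack lengths, interpolate, and integrate its reciprocal over crack length) can be sketched as follows; the Paris-law growth model and all constants are illustrative assumptions, not values from the paper.

```python
import numpy as np

# Assumed growth model for illustration: one cycle per loading period,
# Paris law da/dN = C * (dK)^m, with hypothetical constants.
C, m, dS = 1.0e-11, 3.0, 100.0            # Paris constants, stress range

def r(a):
    """Crack growth per loading period at crack length a."""
    dK = dS * np.sqrt(np.pi * a)          # stress intensity range
    return C * dK**m

a0, af = 0.001, 0.02                      # initial / final crack length (m)
a_pts = np.linspace(a0, af, 9)            # only a few pointwise values of r
grid = np.linspace(a0, af, 2001)          # fine grid for quadrature
y = 1.0 / np.interp(grid, a_pts, r(a_pts))
N = np.sum((y[1:] + y[:-1]) / 2.0) * (grid[1] - grid[0])  # trapezoid rule
print(f"loading periods to failure ~ {N:.3e}")
```

Nine evaluations of r replace roughly a million cycle-by-cycle growth updates, which is the source of the claimed speed-up.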
A hybrid method for the parallel computation of Green's functions
DEFF Research Database (Denmark)
Petersen, Dan Erik; Li, Song; Stokbro, Kurt
2009-01-01
Because of the large number of times this calculation needs to be performed, it is computationally very expensive even on supercomputers. The classical approach is based on recurrence formulas which cannot be efficiently parallelized; this practically prevents the solution of large problems with hundreds of thousands of atoms. We propose new recurrences for a general class of sparse matrices to calculate Green's and lesser Green's function matrices, which extend formulas derived by Takahashi and others. We show that these recurrences may lead to a dramatically reduced computational cost because they only require computing a small number of entries of the inverse matrix. We then propose a parallelization strategy for block tridiagonal matrices which involves a combination of Schur complement calculations and cyclic reduction. It achieves good scalability even on problems of modest size.
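The flavour of such recurrences, computing only selected entries of an inverse rather than the full inverse, can be seen in the scalar (non-block) tridiagonal case, where the diagonal of the inverse follows from classical forward and backward continuant recurrences. This is an analogue for illustration, not the paper's block algorithm:

```python
import numpy as np

def tridiag_inverse_diagonal(d, e):
    """Diagonal of the inverse of a symmetric tridiagonal matrix
    (main diagonal d, off-diagonal e) in O(n), via forward (theta)
    and backward (phi) determinant recurrences:
    (A^-1)_ii = theta_{i-1} * phi_{i+1} / theta_n."""
    n = len(d)
    theta = np.empty(n + 1)
    theta[0], theta[1] = 1.0, d[0]
    for i in range(2, n + 1):
        theta[i] = d[i - 1] * theta[i - 1] - e[i - 2] ** 2 * theta[i - 2]
    phi = np.empty(n + 2)
    phi[n + 1], phi[n] = 1.0, d[n - 1]
    for i in range(n - 1, 0, -1):
        phi[i] = d[i - 1] * phi[i + 1] - e[i - 1] ** 2 * phi[i + 2]
    return np.array([theta[i - 1] * phi[i + 1] / theta[n]
                     for i in range(1, n + 1)])

# check against a dense inverse on a small example
d = np.array([4.0, 5.0, 6.0, 5.0, 4.0])
e = np.array([1.0, 1.5, 1.5, 1.0])
A = np.diag(d) + np.diag(e, 1) + np.diag(e, -1)
assert np.allclose(tridiag_inverse_diagonal(d, e), np.diag(np.linalg.inv(A)))
```

In the Green's-function setting the scalars become matrix blocks and the squared off-diagonal terms become Schur-complement-like products, but the pattern of sweeping forward and backward and then combining is the same.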
Ab initio calculations on hydrogen storage in porous carbons
International Nuclear Information System (INIS)
Maresca, O.; Marinelli, F.; Pellenq, R.J.M.; Duclaux, L.; Azais, Ph.; Conard, J.
2005-01-01
We have investigated through ab initio computations the possible ways to achieve efficient hydrogen storage on carbons. Firstly, we have considered how the curvature of a carbon surface could affect the chemisorption of atomic H0. Secondly, we show that electron donor elements such as Li and K, used as dopants for the carbon substrate, strongly enhance the physisorption energy of H2, allowing in principle its storage in this type of material at room temperature under mild conditions of pressure. (authors)
Comparison of four classification methods for brain-computer interface
Czech Academy of Sciences Publication Activity Database
Frolov, A.; Húsek, Dušan; Bobrov, P.
2011-01-01
Roč. 21, č. 2 (2011), s. 101-115 ISSN 1210-0552 R&D Projects: GA MŠk(CZ) 1M0567; GA ČR GA201/05/0079; GA ČR GAP202/10/0262 Institutional research plan: CEZ:AV0Z10300504 Keywords : brain computer interface * motor imagery * visual imagery * EEG pattern classification * Bayesian classification * Common Spatial Patterns * Common Tensor Discriminant Analysis Subject RIV: IN - Informatics, Computer Science Impact factor: 0.646, year: 2011
The Extrapolation-Accelerated Multilevel Aggregation Method in PageRank Computation
Directory of Open Access Journals (Sweden)
Bing-Yuan Pu
2013-01-01
Full Text Available An accelerated multilevel aggregation method is presented for calculating the stationary probability vector of an irreducible stochastic matrix in PageRank computation, where the vector extrapolation method is its accelerator. We show how to periodically combine the extrapolation method together with the multilevel aggregation method on the finest level for speeding up the PageRank computation. Detailed numerical results are given to illustrate the behavior of this method, and comparisons with the typical methods are also made.
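The acceleration pattern, ordinary power iteration with a periodic extrapolation step, can be sketched as follows. An Aitken delta-squared step is used here as a simple stand-in for the paper's vector extrapolation accelerator, and the 4-node web and damping factor are illustrative:

```python
import numpy as np

def pagerank_extrapolated(P, alpha=0.85, tol=1e-10, extrap_every=5):
    """Power iteration on the Google matrix with a periodic Aitken
    delta-squared step; P must be column-stochastic."""
    n = P.shape[0]
    v = np.full(n, 1.0 / n)                    # uniform teleportation vector
    x_prev2, x_prev, x = None, None, v.copy()
    for k in range(1000):
        x_prev2, x_prev = x_prev, x
        x = alpha * (P @ x_prev) + (1 - alpha) * v
        if x_prev2 is not None and k % extrap_every == 0:
            denom = x - 2 * x_prev + x_prev2   # componentwise Aitken step
            safe = np.abs(denom) > 1e-12
            y = x.copy()
            y[safe] -= (x[safe] - x_prev[safe]) ** 2 / denom[safe]
            if np.all(y > 0):                  # keep it a probability vector
                x = y / y.sum()
        if np.linalg.norm(x - x_prev, 1) < tol:
            break
    return x

# Tiny 4-page web; column j holds the out-link probabilities of page j.
P = np.array([[0.0, 0.0, 1.0, 0.5],
              [0.5, 0.0, 0.0, 0.0],
              [0.5, 1.0, 0.0, 0.5],
              [0.0, 0.0, 0.0, 0.0]])
ranks = pagerank_extrapolated(P)
```

The guard that skips a non-positive extrapolated iterate mirrors the paper's concern with keeping the iterate a valid stationary probability vector.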
Computational design and experimental validation of new thermal barrier systems
Energy Technology Data Exchange (ETDEWEB)
Guo, Shengmin [Louisiana State Univ., Baton Rouge, LA (United States)
2015-03-31
The focus of this project is on the development of a reliable and efficient ab initio based computational high temperature material design method which can be used to assist the Thermal Barrier Coating (TBC) bond-coat and top-coat design. Experimental evaluations on the new TBCs are conducted to confirm the new TBCs’ properties. Southern University is the subcontractor on this project with a focus on the computational simulation method development. We have performed ab initio density functional theory (DFT) method and molecular dynamics simulation on screening the top coats and bond coats for gas turbine thermal barrier coating design and validation applications. For experimental validations, our focus is on the hot corrosion performance of different TBC systems. For example, for one of the top coatings studied, we examined the thermal stability of TaZr_{2.75}O_{8} and confirmed its hot corrosion performance.
B. Bavishna, M. Agalya & G. Kavitha
2018-01-01
A great deal of research has been done in the field of cloud computing, and a variety of algorithms has been proposed to improve its performance. The role of virtualization is significant, and its performance depends on VM migration and allocation. Because the cloud consumes considerable energy, algorithms are needed to save energy and enhance efficiency. In the proposed work, a green algorithm has been considered with ...
International Nuclear Information System (INIS)
Izquierdo, J.; Vega, A.; Balbas, L. C.; Sanchez-Portal, Daniel; Junquera, Javier; Artacho, Emilio; Soler, Jose M.; Ordejon, Pablo
2000-01-01
We present a theoretical study of the electronic and magnetic properties of iron systems in different environments: pure iron systems [dimer, bcc bulk, (100) surface, and free-standing iron monolayer], and low-dimensional iron systems deposited on Ag (100) surface (monoatomic linear wires, iron monolayer, planar, and three-dimensional clusters). Electronic and magnetic properties have been calculated using a recently developed total-energy first-principles method based on density-functional theory with numerical atomic orbitals as a basis set for the description of valence electrons and nonlocal pseudopotentials for the atomic core. The Kohn-Sham equations are solved self-consistently within the generalized gradient approximation for the exchange-correlation potential. Tests on the pseudopotential, the basis set, grid spacing, and k sampling are carefully performed. This technique, which has been proved to be very efficient for large nonmagnetic systems, is applied in this paper to calculate electronic and magnetic properties of different iron nanostructures. The results compare well with previous ab initio all-electron calculations and with experimental data. The method predicts the correct trends in the magnetic moments of Fe systems for a great variety of environments and requires a smaller computational effort than other ab initio methods. (c) 2000 The American Physical Society
Energy Technology Data Exchange (ETDEWEB)
Izquierdo, J. [Departamento de Fisica Teorica, Universidad de Valladolid, E-47011 Valladolid, (Spain); Vega, A. [Departamento de Fisica Teorica, Universidad de Valladolid, E-47011 Valladolid, (Spain); Balbas, L. C. [Departamento de Fisica Teorica, Universidad de Valladolid, E-47011 Valladolid, (Spain); Sanchez-Portal, Daniel [Department of Physics and Materials Research Laboratory, University of Illinois, Urbana, Illinois 61801 (United States); Junquera, Javier [Departamento de Fisica de la Materia Condensada, C-III, and Institut Nicolas Cabrera, Universidad Autonoma de Madrid, 28049 Madrid, (Spain); Artacho, Emilio [Departamento de Fisica de la Materia Condensada, C-III, and Institut Nicolas Cabrera, Universidad Autonoma de Madrid, 28049 Madrid, (Spain); Soler, Jose M. [Department of Physics, Harvard University, Cambridge, Massachusetts 02138 (United States); Ordejon, Pablo [Institut de Ciencia de Materials de Barcelona (CSIC), Campus de la U.A.B., Bellaterra, E-08193 Barcelona, (Spain)
2000-05-15
We present a theoretical study of the electronic and magnetic properties of iron systems in different environments: pure iron systems [dimer, bcc bulk, (100) surface, and free-standing iron monolayer], and low-dimensional iron systems deposited on Ag (100) surface (monoatomic linear wires, iron monolayer, planar, and three-dimensional clusters). Electronic and magnetic properties have been calculated using a recently developed total-energy first-principles method based on density-functional theory with numerical atomic orbitals as a basis set for the description of valence electrons and nonlocal pseudopotentials for the atomic core. The Kohn-Sham equations are solved self-consistently within the generalized gradient approximation for the exchange-correlation potential. Tests on the pseudopotential, the basis set, grid spacing, and k sampling are carefully performed. This technique, which has been proved to be very efficient for large nonmagnetic systems, is applied in this paper to calculate electronic and magnetic properties of different iron nanostructures. The results compare well with previous ab initio all-electron calculations and with experimental data. The method predicts the correct trends in the magnetic moments of Fe systems for a great variety of environments and requires a smaller computational effort than other ab initio methods. (c) 2000 The American Physical Society.
International Nuclear Information System (INIS)
Wimmer, E
2008-01-01
A workshop, 'Theory Meets Industry', was held on 12-14 June 2007 in Vienna, Austria, attended by a well balanced number of academic and industrial scientists from America, Europe, and Japan. The focus was on advances in ab initio solid state calculations and their practical use in industry. The theoretical papers addressed three dominant themes, namely (i) more accurate total energies and electronic excitations, (ii) more complex systems, and (iii) more diverse and accurate materials properties. Hybrid functionals give some improvements in energies, but encounter difficulties for metallic systems. Quantum Monte Carlo methods are progressing, but no clear breakthrough is on the horizon. Progress in order-N methods is steady, as is the case for efficient methods for exploring complex energy hypersurfaces and large numbers of structural configurations. The industrial applications were dominated by materials issues in energy conversion systems, the quest for hydrogen storage materials, improvements of electronic and optical properties of microelectronic and display materials, and the simulation of reactions on heterogeneous catalysts. The workshop is a clear testimony that ab initio computations have become an industrial practice with increasingly recognized impact.
Wimmer, E.
2008-02-01
A workshop, 'Theory Meets Industry', was held on 12-14 June 2007 in Vienna, Austria, attended by a well balanced number of academic and industrial scientists from America, Europe, and Japan. The focus was on advances in ab initio solid state calculations and their practical use in industry. The theoretical papers addressed three dominant themes, namely (i) more accurate total energies and electronic excitations, (ii) more complex systems, and (iii) more diverse and accurate materials properties. Hybrid functionals give some improvements in energies, but encounter difficulties for metallic systems. Quantum Monte Carlo methods are progressing, but no clear breakthrough is on the horizon. Progress in order-N methods is steady, as is the case for efficient methods for exploring complex energy hypersurfaces and large numbers of structural configurations. The industrial applications were dominated by materials issues in energy conversion systems, the quest for hydrogen storage materials, improvements of electronic and optical properties of microelectronic and display materials, and the simulation of reactions on heterogeneous catalysts. The workshop is a clear testimony that ab initio computations have become an industrial practice with increasingly recognized impact.
Computational methods to dissect cis-regulatory transcriptional ...
Indian Academy of Sciences (India)
The formation of diverse cell types from an invariant set of genes is governed by biochemical and molecular processes that regulate gene activity. A complete understanding of the regulatory mechanisms of gene expression is the major function of genomics. Computational genomics is a rapidly emerging area for ...
Computer methods in designing tourist equipment for people with disabilities
Zuzda, Jolanta GraŻyna; Borkowski, Piotr; Popławska, Justyna; Latosiewicz, Robert; Moska, Eleonora
2017-11-01
Modern technologies enable disabled people to enjoy physical activity every day. Many new structures are matched individually and created for people who fancy active tourism, giving them wider opportunities for active pastime. The process of creating this type of devices in every stage, from initial design through assessment to validation, is assisted by various types of computer support software.
New design methods for computer aided architectural design methodology teaching
Achten, H.H.
2003-01-01
Architects and architectural students are exploring new ways of design using Computer Aided Architectural Design software. This exploration is seldom backed up from a design methodological viewpoint. In this paper, a design methodological framework for reflection on innovative design processes by
Computational methods for more fuel-efficient ship
Koren, B.
2008-01-01
The flow of water around a ship powered by a combustion engine is a key factor in the ship's fuel consumption. The simulation of flow patterns around ship hulls is therefore an important aspect of ship design. While lengthy computations are required for such simulations, research by Jeroen Wackers
New Methods of Mobile Computing: From Smartphones to Smart Education
Sykes, Edward R.
2014-01-01
Every aspect of our daily lives has been touched by the ubiquitous nature of mobile devices. We have experienced an exponential growth of mobile computing--a trend that seems to have no limit. This paper provides a report on the findings of a recent offering of an iPhone Application Development course at Sheridan College, Ontario, Canada. It…
An affective music player: Methods and models for physiological computing
Janssen, J.H.; Westerink, J.H.D.M.; van den Broek, Egon
2009-01-01
Affective computing is embraced by many to create more intelligent systems and smart environments. In this thesis, a specific affective application is envisioned: an affective physiological music player (APMP), which should be able to direct its user's mood. In a first study, the relationship
Computer Facilitated Mathematical Methods in Chemical Engineering--Similarity Solution
Subramanian, Venkat R.
2006-01-01
High-performance computers coupled with highly efficient numerical schemes and user-friendly software packages have helped instructors to teach numerical solutions and analysis of various nonlinear models more efficiently in the classroom. One of the main objectives of a model is to provide insight about the system of interest. Analytical…
All for One: Integrating Budgetary Methods by Computer.
Herman, Jerry J.
1994-01-01
With the advent of high speed and sophisticated computer programs, all budgetary systems can be combined in one fiscal management information system. Defines and provides examples for the four budgeting systems: (1) function/object; (2) planning, programming, budgeting system; (3) zero-based budgeting; and (4) site-based budgeting. (MLF)
Method for quantitative assessment of nuclear safety computer codes
International Nuclear Information System (INIS)
Dearien, J.A.; Davis, C.B.; Matthews, L.J.
1979-01-01
A procedure has been developed for the quantitative assessment of nuclear safety computer codes and tested by comparison of RELAP4/MOD6 predictions with results from two Semiscale tests. This paper describes the developed procedure, the application of the procedure to the Semiscale tests, and the results obtained from the comparison
A Parameter Estimation Method for Dynamic Computational Cognitive Models
Thilakarathne, D.J.
2015-01-01
A dynamic computational cognitive model can be used to explore a selected complex cognitive phenomenon by providing some features or patterns over time. More specifically, it can be used to simulate, analyse and explain the behaviour of such a cognitive phenomenon. It generates output data in the
Computed radiography imaging plates and associated methods of manufacture
Henry, Nathaniel F.; Moses, Alex K.
2015-08-18
Computed radiography imaging plates incorporating an intensifying material that is coupled to or intermixed with the phosphor layer, allowing electrons and/or low energy x-rays to impart their energy on the phosphor layer, while decreasing internal scattering and increasing resolution. The radiation needed to perform radiography can also be reduced as a result.
Verifying a computational method for predicting extreme ground motion
Harris, R.A.; Barall, M.; Andrews, D.J.; Duan, B.; Ma, S.; Dunham, E.M.; Gabriel, A.-A.; Kaneko, Y.; Kase, Y.; Aagaard, Brad T.; Oglesby, D.D.; Ampuero, J.-P.; Hanks, T.C.; Abrahamson, N.
2011-01-01
In situations where seismological data is rare or nonexistent, computer simulations may be used to predict ground motions caused by future earthquakes. This is particularly practical in the case of extreme ground motions, where engineers of special buildings may need to design for an event that has not been historically observed but which may occur in the far-distant future. Once the simulations have been performed, however, they still need to be tested. The SCEC-USGS dynamic rupture code verification exercise provides a testing mechanism for simulations that involve spontaneous earthquake rupture. We have performed this examination for the specific computer code that was used to predict maximum possible ground motion near Yucca Mountain. Our SCEC-USGS group exercises have demonstrated that the specific computer code that was used for the Yucca Mountain simulations produces similar results to those produced by other computer codes when tackling the same science problem. We also found that the 3D ground motion simulations produced smaller ground motions than the 2D simulations.
A comparison of methods for the assessment of postural load and duration of computer use
Heinrich, J.; Blatter, B.M.; Bongers, P.M.
2004-01-01
Aim: To compare two different methods for assessment of postural load and duration of computer use in office workers. Methods: The study population existed of 87 computer workers. Questionnaire data about exposure were compared with exposures measured by a standardised or objective method. Measuring
Minimizing the Free Energy: A Computer Method for Teaching Chemical Equilibrium Concepts.
Heald, Emerson F.
1978-01-01
Presents a computer method for teaching chemical equilibrium concepts using material balance conditions and the minimization of the free energy. Method for the calculation of chemical equilibrium, the computer program used to solve equilibrium problems and applications of the method are also included. (HM)
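The teaching idea, finding equilibrium as the composition that minimizes the free energy under a material balance, can be sketched for the simplest case of an ideal A ⇌ B mixture (the energies and temperature below are made-up classroom numbers, not from the article):

```python
import math

R = 8.314  # gas constant, J/(mol K)

def gibbs(x, g_a, g_b, T):
    """Molar Gibbs energy of an ideal A/B mixture, x = mole fraction of B.
    The material balance is built in: fractions are x and 1 - x."""
    g_mix = R * T * (x * math.log(x) + (1 - x) * math.log(1 - x))
    return (1 - x) * g_a + x * g_b + g_mix

def equilibrium_fraction(g_a, g_b, T, iters=200):
    """Minimise G by ternary search; G is strictly convex on (0, 1)."""
    lo, hi = 1e-9, 1.0 - 1e-9
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3.0
        m2 = hi - (hi - lo) / 3.0
        if gibbs(m1, g_a, g_b, T) < gibbs(m2, g_a, g_b, T):
            hi = m2
        else:
            lo = m1
    return 0.5 * (lo + hi)

# A <-> B with B lower in free energy by 2 kJ/mol at 298.15 K.
x_eq = equilibrium_fraction(0.0, -2000.0, 298.15)
```

The minimizer agrees with the analytic condition dG/dx = 0, i.e. x = 1/(1 + exp(ΔG/RT)), which makes the connection between minimization and the familiar equilibrium constant explicit for students.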
Energy Technology Data Exchange (ETDEWEB)
Anzelj, D.; Pye, C.C., E-mail: diki1979@hotmail.com, E-mail: cory.pye@smu.ca [Saint Mary's University, Halifax, NS (Canada)
2015-07-01
One of the undesirable processes hindering development of Generation IV SCWR is the possibility of corrosion of construction material. Formation of corrosion products such as metal-ligand complexes is poorly understood both experimentally and computationally. It is essential to predict and control its water chemistry to ensure sustainability of SCWR. Pressurized and heated solutions are challenging for experimental research; computational method becomes an important research tool. A series of ab initio calculations of chloroaqualead (II) complexes have been performed at HF, MP2 and B3LYP levels of theory with CEP-121G, LANL2DZ, SDD basis sets for Pb and 6-31G*, 6-31+G*, 6-311+G* for water. (author)
International Nuclear Information System (INIS)
Anzelj, D.; Pye, C.C.
2015-01-01
One of the undesirable processes hindering development of Generation IV SCWR is the possibility of corrosion of construction material. Formation of corrosion products such as metal-ligand complexes is poorly understood both experimentally and computationally. It is essential to predict and control its water chemistry to ensure sustainability of SCWR. Pressurized and heated solutions are challenging for experimental research; computational method becomes an important research tool. A series of ab initio calculations of chloroaqualead (II) complexes have been performed at HF, MP2 and B3LYP levels of theory with CEP-121G, LANL2DZ, SDD basis sets for Pb and 6-31G*, 6-31+G*, 6-311+G* for water. (author)
Inferring biological functions of guanylyl cyclases with computational methods
Alquraishi, May Majed; Meier, Stuart Kurt
2013-01-01
A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.
Inferring biological functions of guanylyl cyclases with computational methods
Alquraishi, May Majed
2013-09-03
A number of studies have shown that functionally related genes are often co-expressed and that computational based co-expression analysis can be used to accurately identify functional relationships between genes and by inference, their encoded proteins. Here we describe how a computational based co-expression analysis can be used to link the function of a specific gene of interest to a defined cellular response. Using a worked example we demonstrate how this methodology is used to link the function of the Arabidopsis Wall-Associated Kinase-Like 10 gene, which encodes a functional guanylyl cyclase, to host responses to pathogens. © Springer Science+Business Media New York 2013.
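The core computation, scoring every gene by its expression correlation with a gene of interest, can be sketched with Pearson correlation over a toy expression matrix. All data below are synthetic and the gene names are placeholders, not the Arabidopsis data from the chapter:

```python
import numpy as np

# Toy expression matrix: one vector of expression values per gene,
# columns = experimental conditions. Values are synthetic.
rng = np.random.default_rng(0)
base = rng.normal(size=20)
expression = {
    "gene_of_interest": base,
    "coexpressed":      base + rng.normal(scale=0.1, size=20),
    "unrelated":        rng.normal(size=20),
}

def coexpression_scores(target, table):
    """Pearson correlation of every gene's profile with the target gene."""
    t = table[target]
    return {g: float(np.corrcoef(t, x)[0, 1])
            for g, x in table.items() if g != target}

scores = coexpression_scores("gene_of_interest", expression)
```

Genes whose scores stand out would then be candidates for sharing the target's cellular role, which is the inference step the chapter formalizes.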
Computational Nuclear Physics and Post Hartree-Fock Methods
Energy Technology Data Exchange (ETDEWEB)
Lietz, Justin [Michigan State University; Sam, Novario [Michigan State University; Hjorth-Jensen, M. [University of Oslo, Norway; Hagen, Gaute [ORNL; Jansen, Gustav R. [ORNL
2017-05-01
We present a computational approach to infinite nuclear matter employing Hartree-Fock theory, many-body perturbation theory and coupled cluster theory. These lectures are closely linked with those of chapters 9, 10 and 11 and serve as input for the correlation functions employed in Monte Carlo calculations in chapter 9, the in-medium similarity renormalization group theory of dense fermionic systems of chapter 10 and the Green's function approach in chapter 11. We provide extensive code examples and benchmark calculations, thereby allowing readers to start writing their own codes. We start with an object-oriented serial code and end with discussions on strategies for porting the code to present and planned high-performance computing facilities.
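One ingredient, many-body perturbation theory, can be illustrated at its very simplest: the second-order energy correction for a toy two-level Hamiltonian, checked against exact diagonalization. This is a generic textbook example, not code from the lectures:

```python
import numpy as np

# Toy two-level Hamiltonian: diagonal H0 plus a weak coupling v.
e0, e1, v = -1.0, 1.0, 0.1
H = np.array([[e0, v],
              [v,  e1]])

# Second-order (Rayleigh-Schroedinger) correction to the ground state:
#   E ~ e0 + |v|^2 / (e0 - e1)
e_pt2 = e0 + v ** 2 / (e0 - e1)

# Exact ground-state energy from diagonalisation, for comparison.
e_exact = np.linalg.eigvalsh(H)[0]
```

For a weak coupling the second-order estimate already lands within O(v⁴) of the exact answer, which is the behaviour the perturbative hierarchy in the lectures builds on.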
Multi-Level iterative methods in computational plasma physics
International Nuclear Information System (INIS)
Knoll, D.A.; Barnes, D.C.; Brackbill, J.U.; Chacon, L.; Lapenta, G.
1999-01-01
Plasma physics phenomena occur on a wide range of spatial scales and on a wide range of time scales. When attempting to model plasma physics problems numerically, the authors are inevitably faced with the need for both fine spatial resolution (fine grids) and implicit time integration methods. Fine grids can tax the efficiency of iterative methods and large time steps can challenge the robustness of iterative methods. To meet these challenges they are developing a hybrid approach where multigrid methods are used as preconditioners to Krylov subspace based iterative methods such as conjugate gradients or GMRES. For nonlinear problems they apply multigrid preconditioning to a matrix-free Newton-GMRES method. Results are presented for application of these multilevel iterative methods to the field solves in implicit moment method PIC, multidimensional nonlinear Fokker-Planck problems, and their initial efforts in particle MHD.
Methods for the development of large computer codes under LTSS
International Nuclear Information System (INIS)
Sicilian, J.M.
1977-06-01
TRAC is a large computer code being developed by Group Q-6 for the analysis of the transient thermal hydraulic behavior of light-water nuclear reactors. A system designed to assist the development of TRAC is described. The system consists of a central HYDRA dataset, R6LIB, containing files used in the development of TRAC, and a file maintenance program, HORSE, which facilitates the use of this dataset
Easy computer assisted teaching method for undergraduate surgery
Agrawal, Vijay P
2015-01-01
Use of computers to aid or support the education or training of people has become commonplace in medical education. Recent studies have shown that it can improve learning outcomes in diagnostic abilities, clinical skills and knowledge across different learner levels from undergraduate medical education to continuing medical education. It also enhance the educational process by increasing access to learning materials, standardising the educational process, providing opportunities for asynchron...
The cell method a purely algebraic computational method in physics and engineering
Ferretti, Elena
2014-01-01
The Cell Method (CM) is a computational tool that maintains critical multidimensional attributes of physical phenomena in analysis. This information is neglected in the differential formulations of the classical approaches of finite element, boundary element, finite volume, and finite difference analysis, often leading to numerical instabilities and spurious results. This book highlights the central theoretical concepts of the CM that preserve a more accurate and precise representation of the geometric and topological features of variables for practical problem solving. Important applications occur in fields such as electromagnetics, electrodynamics, solid mechanics and fluids. CM addresses non-locality in continuum mechanics, an especially important circumstance in modeling heterogeneous materials. Professional engineers and scientists, as well as graduate students, are offered: A general overview of physics and its mathematical descriptions; Guidance on how to build direct, discrete formulations; Coverag...
Stable numerical method in computation of stellar evolution
International Nuclear Information System (INIS)
Sugimoto, Daiichiro; Eriguchi, Yoshiharu; Nomoto, Ken-ichi.
1982-01-01
To compute the stellar structure and evolution in different stages, such as (1) red-giant stars in which the density and density gradient change over quite wide ranges, (2) rapid evolution with neutrino loss or unstable nuclear flashes, (3) hydrodynamical stages of star formation or supernova explosion, (4) transition phases from quasi-static to dynamical evolutions, (5) mass-accreting or losing stars in binary-star systems, and (6) evolution of a stellar core whose mass is increasing by shell burning or decreasing by penetration of the convective envelope into the core, we face "multi-timescale problems" which can be treated neither by a simple-minded explicit scheme nor by an implicit one. This problem has been resolved by three prescriptions: one by introducing a hybrid scheme suitable for the multi-timescale problems of quasi-static evolution with heat transport, another by introducing a hybrid scheme suitable for the multi-timescale problems of hydrodynamic evolution, and the third by introducing the Eulerian or, in other words, the mass fraction coordinate for evolution with changing mass. When all of them are combined in a single computer code, we can compute numerically stably any phase of stellar evolution, including transition phases, as far as the star is spherically symmetric. (author)
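The stability gap that motivates such hybrid schemes can be seen in a few lines: for a stiff decay equation, an explicit step blows up at a step size where the implicit step is harmless. This is a generic stiff-ODE illustration, not the authors' scheme:

```python
# Stiff linear test problem u' = -lam * u, u(0) = 1, integrated with a
# step size that is mild for backward Euler but unstable for forward Euler.
lam, h, steps = 1.0e4, 1.0e-3, 100

u_explicit = 1.0
u_implicit = 1.0
for _ in range(steps):
    u_explicit = u_explicit + h * (-lam * u_explicit)   # forward Euler
    u_implicit = u_implicit / (1.0 + lam * h)           # backward Euler

# Forward Euler amplifies each step by |1 - lam*h| = 9 and diverges,
# while backward Euler damps toward the true decaying solution.
```

A multi-timescale stellar model contains both stiff and non-stiff regions at once, which is why the abstract's hybrid explicit/implicit prescriptions are needed rather than either scheme alone.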
Unconventional methods of imaging: computational microscopy and compact implementations
McLeod, Euan; Ozcan, Aydogan
2016-07-01
In the past two decades or so, there has been a renaissance of optical microscopy research and development. Much work has been done in an effort to improve the resolution and sensitivity of microscopes, while at the same time to introduce new imaging modalities, and make existing imaging systems more efficient and more accessible. In this review, we look at two particular aspects of this renaissance: computational imaging techniques and compact imaging platforms. In many cases, these aspects go hand-in-hand because the use of computational techniques can simplify the demands placed on optical hardware in obtaining a desired imaging performance. In the first main section, we cover lens-based computational imaging, in particular, light-field microscopy, structured illumination, synthetic aperture, Fourier ptychography, and compressive imaging. In the second main section, we review lensfree holographic on-chip imaging, including how images are reconstructed, phase recovery techniques, and integration with smart substrates for more advanced imaging tasks. In the third main section we describe how these and other microscopy modalities have been implemented in compact and field-portable devices, often based around smartphones. Finally, we conclude with some comments about opportunities and demand for better results, and where we believe the field is heading.
Analysis of multigrid methods on massively parallel computers: Architectural implications
Matheson, Lesley R.; Tarjan, Robert E.
1993-01-01
We study the potential performance of multigrid algorithms running on massively parallel computers with the intent of discovering whether presently envisioned machines will provide an efficient platform for such algorithms. We consider the domain parallel version of the standard V cycle algorithm on model problems, discretized using finite difference techniques in two and three dimensions on block structured grids of size 10(exp 6) and 10(exp 9), respectively. Our models of parallel computation were developed to reflect the computing characteristics of the current generation of massively parallel multicomputers. These models are based on an interconnection network of 256 to 16,384 message passing, 'workstation size' processors executing in an SPMD mode. The first model accomplishes interprocessor communications through a multistage permutation network. The communication cost is a logarithmic function which is similar to the costs in a variety of different topologies. The second model allows single stage communication costs only. Both models were designed with information provided by machine developers and utilize implementation derived parameters. With the medium grain parallelism of the current generation and the high fixed cost of an interprocessor communication, our analysis suggests an efficient implementation requires the machine to support the efficient transmission of long messages, (up to 1000 words) or the high initiation cost of a communication must be significantly reduced through an alternative optimization technique. Furthermore, with variable length message capability, our analysis suggests the low diameter multistage networks provide little or no advantage over a simple single stage communications network.
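A stripped-down version of such a performance model can be written in a few lines: per V-cycle, sum over grid levels the local arithmetic plus one halo exchange, with the exchange cost either logarithmic in the processor count (multistage network) or flat (single stage). All machine parameters below are illustrative, not the paper's calibrated values:

```python
import math

def vcycle_time(n_points, procs, t_flop=1e-9, t_start=1e-4, t_word=1e-8,
                multistage=True):
    """Rough per-V-cycle time model for a 3-D problem: local work plus one
    halo exchange per grid level. Parameters are illustrative only."""
    total, points = 0.0, n_points
    while points >= procs:
        local = points / procs
        compute = 10.0 * local * t_flop            # ~10 flops/point/level
        words = max(local ** (2.0 / 3.0), 1.0)     # 3-D face halo size
        stages = math.log2(procs) if multistage else 1.0
        total += compute + stages * t_start + words * t_word
        points //= 8                               # 3-D coarsening
    return total

t_256 = vcycle_time(10 ** 9, 256)
t_16k = vcycle_time(10 ** 9, 16384)
```

Evaluating the model shows the paper's qualitative point: as the processor count grows, the fixed per-message startup term on the coarse levels comes to dominate the shrinking local work, so message initiation cost (or long-message support) controls efficiency.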
An Overview of the Computational Physics and Methods Group at Los Alamos National Laboratory
Energy Technology Data Exchange (ETDEWEB)
Baker, Randal Scott [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2018-02-22
CCS Division was formed to strengthen the visibility and impact of computer science and computational physics research on strategic directions for the Laboratory. Both computer science and computational science are now central to scientific discovery and innovation. They have become indispensable tools for all other scientific missions at the Laboratory. CCS Division forms a bridge between external partners and Laboratory programs, bringing new ideas and technologies to bear on today’s important problems and attracting high-quality technical staff members to the Laboratory. The Computational Physics and Methods Group CCS-2 conducts methods research and develops scientific software aimed at the latest and emerging HPC systems.
An historical survey of computational methods in optimal control.
Polak, E.
1973-01-01
Review of some of the salient theoretical developments in the specific area of optimal control algorithms. The first algorithms for optimal control were aimed at unconstrained problems and were derived by using first- and second-variation methods of the calculus of variations. These methods have subsequently been recognized as gradient, Newton-Raphson, or Gauss-Newton methods in function space. A much more recent addition to the arsenal of unconstrained optimal control algorithms are several variations of conjugate-gradient methods. At first, constrained optimal control problems could only be solved by exterior penalty function methods. Later algorithms specifically designed for constrained problems have appeared. Among these are methods for solving the unconstrained linear quadratic regulator problem, as well as certain constrained minimum-time and minimum-energy problems. Differential-dynamic programming was developed from dynamic programming considerations. The conditional-gradient method, the gradient-projection method, and a couple of feasible directions methods were obtained as extensions or adaptations of related algorithms for finite-dimensional problems. Finally, the so-called epsilon-methods combine the Ritz method with penalty function techniques.
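The earliest family surveyed, first-variation (gradient) methods in function space, can be sketched on a scalar discrete-time linear-quadratic problem: the gradient of the cost with respect to the whole control sequence comes from one backward adjoint sweep, then steepest descent updates the controls. All numbers are illustrative:

```python
import numpy as np

# Scalar discrete-time LQ problem: x_{k+1} = a x_k + b u_k,
# J = sum q x_k^2 + sum r u_k^2. Values are made up for illustration.
a, b, q, r, N, x0 = 0.9, 0.5, 1.0, 0.1, 20, 1.0

def rollout(u):
    x = np.empty(N + 1)
    x[0] = x0
    for k in range(N):
        x[k + 1] = a * x[k] + b * u[k]
    return x

def cost(u):
    x = rollout(u)
    return q * np.sum(x ** 2) + r * np.sum(u ** 2)

def gradient(u):
    """dJ/du_k = 2 r u_k + b lam_{k+1}, from one backward costate sweep."""
    x = rollout(u)
    lam = np.empty(N + 1)
    lam[N] = 2.0 * q * x[N]
    for k in range(N - 1, 0, -1):
        lam[k] = 2.0 * q * x[k] + a * lam[k + 1]
    return 2.0 * r * u + b * lam[1:]

u = np.zeros(N)
j0 = cost(u)
for _ in range(2000):
    u -= 0.02 * gradient(u)        # steepest descent with a fixed step
```

The conjugate-gradient, Newton-Raphson, and penalty-function algorithms the survey covers all replace or augment this plain descent step while keeping the same adjoint-based gradient.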
Computation of Optimal Monotonicity Preserving General Linear Methods
Ketcheson, David I.
2009-07-01
Monotonicity preserving numerical methods for ordinary differential equations prevent the growth of propagated errors and preserve convex boundedness properties of the solution. We formulate the problem of finding optimal monotonicity preserving general linear methods for linear autonomous equations, and propose an efficient algorithm for its solution. This algorithm reliably finds optimal methods even among classes involving very high order accuracy and that use many steps and/or stages. The optimality of some recently proposed methods is verified, and many more efficient methods are found. We use similar algorithms to find optimal strong stability preserving linear multistep methods of both explicit and implicit type, including methods for hyperbolic PDEs that use downwind-biased operators.
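One classic member of the method class the paper optimizes is the three-stage, third-order strong stability preserving Runge-Kutta method in Shu-Osher form; a minimal sketch (the test equation is an invented example):

```python
import math

def ssprk3_step(f, u, dt):
    # Optimal three-stage, third-order SSP Runge-Kutta step (Shu-Osher
    # form): each stage is a convex combination of forward-Euler steps,
    # so any monotonicity/boundedness property of forward Euler carries
    # over under the same step-size restriction.
    u1 = u + dt * f(u)
    u2 = 0.75 * u + 0.25 * (u1 + dt * f(u1))
    return u / 3.0 + (2.0 / 3.0) * (u2 + dt * f(u2))

# Integrate u' = -u, u(0) = 1 up to t = 1; exact answer is exp(-1).
u, dt = 1.0, 0.01
for _ in range(100):
    u = ssprk3_step(lambda v: -v, u, dt)
```

The convex-combination structure is exactly what the optimization in the paper searches for, over the much larger space of general linear methods.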
Systematic Methods and Tools for Computer Aided Modelling
DEFF Research Database (Denmark)
Fedorova, Marina
…and processes can be faster, cheaper and more efficient. The developed modelling framework involves five main elements: 1) a modelling tool that includes algorithms for model generation; 2) a template library, which provides building blocks for the templates (generic models previously developed); 3) computer… …-format and COM-objects are incorporated to allow the export and import of mathematical models; 5) a user interface that provides the work-flow and data-flow to guide the user through the different modelling tasks…
Spatial Analysis Along Networks Statistical and Computational Methods
Okabe, Atsuyuki
2012-01-01
In the real world, there are numerous and various events that occur on and alongside networks, including the occurrence of traffic accidents on highways, the location of stores alongside roads, the incidence of crime on streets and the contamination along rivers. In order to carry out analyses of those events, the researcher needs to be familiar with a range of specific techniques. Spatial Analysis Along Networks provides a practical guide to the necessary statistical techniques and their computational implementation. Each chapter illustrates a specific technique, from Stochastic Point Process
Improved methods for computing masses from numerical simulations
Energy Technology Data Exchange (ETDEWEB)
Kronfeld, A.S.
1989-11-22
An important advance in the computation of hadron and glueball masses has been the introduction of non-local operators. This talk summarizes the critical signal-to-noise ratio of glueball correlation functions in the continuum limit, and discusses the case of (q{bar q} and qqq) hadrons in the chiral limit. A new strategy for extracting the masses of excited states is outlined and tested. The lessons learned here suggest that gauge-fixed momentum-space operators might be a suitable choice of interpolating operators. 15 refs., 2 tabs.
Advanced Computational Methods for Thermal Radiative Heat Transfer
Energy Technology Data Exchange (ETDEWEB)
Tencer, John; Carlberg, Kevin Thomas; Larsen, Marvin E.; Hogan, Roy E.,
2016-10-01
Participating media radiation (PMR) calculations in weapon safety analyses for abnormal thermal environments are too costly to perform routinely. This cost may be substantially reduced by applying reduced order modeling (ROM) techniques. The application of ROM to PMR is a new and unique approach for this class of problems. This approach was investigated by the authors and shown to provide significant reductions in the computational expense associated with typical PMR simulations. Once this technology is migrated into production heat transfer analysis codes, this capability will enable the routine use of PMR heat transfer in higher-fidelity simulations of weapon response in fire environments.
The null-event method in computer simulation
International Nuclear Information System (INIS)
Lin, S.L.
1978-01-01
The simulation of collisions of ions moving under the influence of an external field through a neutral gas at non-zero temperatures is discussed as an example of computer models of processes in which a probe particle undergoes a series of interactions with an ensemble of other particles, such that the frequency and outcome of the events depend on internal properties of the second particles. The introduction of null events removes the need for much complicated algebra, leads to a more efficient simulation and reduces the likelihood of logical error. (Auth.)
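The null-event device can be sketched for a time-varying event rate: tentative events are drawn at a constant majorant rate and then thinned, so no state-dependent waiting-time algebra is needed. A hedged toy (the rate function and majorant are invented for the example):

```python
import math
import random

def null_event_times(rate_fn, rate_max, t_end, rng):
    # Null-event (null-collision) sampling: draw tentative events from a
    # homogeneous Poisson process at the majorant rate `rate_max`, then
    # accept each as a real event with probability rate(t) / rate_max.
    # Rejected draws are "null events": they cost nothing physically but
    # keep the waiting-time sampling trivial.
    t, events = 0.0, []
    while True:
        t += rng.expovariate(rate_max)      # tentative waiting time
        if t >= t_end:
            return events
        if rng.random() < rate_fn(t) / rate_max:
            events.append(t)                # a real event

rng = random.Random(42)
# Time-varying rate lam(t) = 1 + sin(t)^2, bounded above by rate_max = 2.
ev = null_event_times(lambda t: 1.0 + math.sin(t) ** 2, 2.0, 10000.0, rng)
```

Over [0, 10000] the integrated rate is about 15000, so roughly that many real events are produced; the thinning reproduces the inhomogeneous process exactly, which is the correctness argument the abstract alludes to.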
Lattice QCD computations: Recent progress with modern Krylov subspace methods
Energy Technology Data Exchange (ETDEWEB)
Frommer, A. [Bergische Universitaet GH Wuppertal (Germany)
1996-12-31
Quantum chromodynamics (QCD) is the fundamental theory of the strong interaction of matter. In order to compare the theory with results from experimental physics, the theory has to be reformulated as a discrete problem of lattice gauge theory using stochastic simulations. The computational challenge consists in solving several hundred very large linear systems with several right-hand sides. A considerable part of the world's supercomputer time is spent in such QCD calculations. This paper presents results on solving systems for the Wilson fermions. Recent progress is reviewed on algorithms obtained in cooperation with partners from theoretical physics.
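A minimal Krylov subspace solver sketch: conjugate gradient on a small symmetric positive definite "lattice" operator. This is illustrative only; Wilson-fermion matrices are non-Hermitian, so production codes use variants such as BiCGStab or GMRES, and the 1-D Laplacian below is an invented stand-in for a lattice Dirac operator:

```python
def cg(matvec, b, tol=1e-10, max_iter=500):
    # Conjugate gradient for A x = b, A symmetric positive definite.
    # Krylov methods build the approximation from span{b, Ab, A^2 b, ...}
    # using only matrix-vector products, which is why they suit huge
    # sparse lattice systems.
    n = len(b)
    x = [0.0] * n
    r = b[:]                      # residual r = b - A x  (x = 0)
    p = r[:]
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        Ap = matvec(p)
        alpha = rs / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        rs_new = sum(ri * ri for ri in r)
        if rs_new < tol * tol:
            break
        p = [ri + (rs_new / rs) * pi for ri, pi in zip(r, p)]
        rs = rs_new
    return x

# 1-D discrete Laplacian with zero boundaries, an SPD toy operator.
def lap(v):
    n = len(v)
    return [2 * v[i] - (v[i - 1] if i > 0 else 0.0)
                     - (v[i + 1] if i < n - 1 else 0.0)
            for i in range(n)]

x = cg(lap, [1.0] * 8)   # exact solution: x_i = i * (9 - i) / 2, i = 1..8
```

For several right-hand sides, as in the QCD setting, the same operator is reused while only `b` changes; block and deflated Krylov methods exploit exactly that structure.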
Permeability computation on a REV with an immersed finite element method
International Nuclear Information System (INIS)
Laure, P.; Puaux, G.; Silva, L.; Vincent, M.
2011-01-01
An efficient method to compute the permeability of fibrous media is presented. An immersed domain approach is used to represent the porous material at its microscopic scale, and the flow motion is computed with a stabilized mixed finite element method. The Stokes equations are solved on the whole domain (including the solid part) using a penalty method. The accuracy is controlled by refining the mesh around the solid-fluid interface defined by a level set function. Using homogenisation techniques, the permeability of a representative elementary volume (REV) is computed. The computed permeabilities of regular fibre packings are compared to classical analytical relations reported in the literature.
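The final homogenisation step can be sketched independently of the finite element solver: once the volume-averaged velocity through the REV is known, Darcy's law yields the permeability. A hedged illustration (all numerical values are invented, not taken from the paper):

```python
def permeability(mean_velocity, viscosity, pressure_drop, length):
    # Darcy's law on the REV: <u> = (K / mu) * (dp / L), so
    #   K = mu * <u> * L / dp.
    # mean_velocity is the volume-averaged velocity through the REV,
    # pressure_drop the imposed drop over the REV length.
    return viscosity * mean_velocity * length / pressure_drop

# Invented example: water-like viscosity, 1 cm REV, 100 Pa drop.
K = permeability(mean_velocity=2.0e-4, viscosity=1.0e-3,
                 pressure_drop=100.0, length=0.01)   # -> 2e-11 m^2
```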
Higher-Order Integral Equation Methods in Computational Electromagnetics
DEFF Research Database (Denmark)
Jørgensen, Erik; Meincke, Peter
Higher-order integral equation methods have been investigated. The study has focused on improving the accuracy and efficiency of the Method of Moments (MoM) applied to electromagnetic problems. A new set of hierarchical Legendre basis functions of arbitrary order is developed. The new basis...
A computational method for the solution of one-dimensional ...
Indian Academy of Sciences (India)
embedding parameter p ∈ [0, 1], which is considered a 'small parameter'. Considerable research work has recently been conducted in applying this method to a class of linear and nonlinear equations. This method was further developed and improved by He, and applied to nonlinear oscillators with discontinuities [1], ...
The Ulam Index: Methods of Theoretical Computer Science Help in Identifying Chemical Substances
Beltran, Adriana; Salvador, James
1997-01-01
In this paper, we show how methods developed for solving a theoretical computer problem of graph isomorphism are used in structural chemistry. We also discuss potential applications of these methods to exobiology: the search for life outside Earth.
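One standard graph-invariant computation used in isomorphism testing is 1-dimensional Weisfeiler-Lehman colour refinement; a minimal sketch (this is a generic technique, not the Ulam index itself, and the toy "molecules" are invented):

```python
from collections import Counter

def wl_invariant(adj, labels, rounds=3):
    # 1-D Weisfeiler-Lehman colour refinement. Each round replaces a
    # node's colour by (own colour, sorted multiset of neighbour
    # colours). The final colour histogram is a graph invariant:
    # isomorphic graphs always agree on it (the converse fails on some
    # graphs, so this is a necessary test only).
    colours = dict(labels)
    for _ in range(rounds):
        colours = {v: hash((colours[v],
                            tuple(sorted(colours[u] for u in adj[v]))))
                   for v in adj}
    return sorted(Counter(colours.values()).items())

# Toy "molecules" as labelled graphs: C-C-O written in two orders
# (isomorphic), versus C-O-C (a different substance).
g1 = {0: [1], 1: [0, 2], 2: [1]}
lab1 = {0: "C", 1: "C", 2: "O"}
g2 = {0: [1], 1: [0, 2], 2: [1]}
lab2 = {0: "O", 1: "C", 2: "C"}   # O-C-C: same molecule, relabelled
g3 = {0: [1], 1: [0, 2], 2: [1]}
lab3 = {0: "C", 1: "O", 2: "C"}   # C-O-C: same atoms, different bonding
```

Comparing such invariants is the cheap first filter in structure identification; only graphs that agree need the expensive full isomorphism check.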
Computation Method Comparison for Th Based Seed-Blanket Cores
International Nuclear Information System (INIS)
Kolesnikov, S.; Galperin, A.; Shwageraus, E.
2004-01-01
This work compares two methods for calculating a given nuclear fuel cycle in the WASB configuration. Both methods use the ELCOS code system, comprising the 2-D transport code BOXER and the 3-D nodal code SILWER [4]. In the first method, the cross-sections of the Seed and Blanket needed for the 3-D nodal code are generated separately for each region by the 2-D transport code. In the second method, these cross-sections are generated from Seed-Blanket colorsets (Fig. 1) calculated by the 2-D transport code. The evaluation of the error introduced by the first method is the main objective of the present study.
Numerical computation of FCT equilibria by inverse equilibrium method
International Nuclear Information System (INIS)
Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki
1986-11-01
FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)
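The shooting idea (guess the missing initial data, integrate, correct the guess from the far-boundary miss) can be sketched on a toy boundary value problem. This is a simple secant shooting with RK4, not the authors' quasi-linearization, and the BVP below is invented:

```python
def integrate(s, n=1000):
    # RK4 integration of u'' = 6 x as the first-order system
    # (u, v)' = (v, 6 x) from x = 0 with u(0) = 0, u'(0) = s.
    h = 1.0 / n
    x, u, v = 0.0, 0.0, s
    def f(x, u, v):
        return v, 6.0 * x
    for _ in range(n):
        k1u, k1v = f(x, u, v)
        k2u, k2v = f(x + h / 2, u + h / 2 * k1u, v + h / 2 * k1v)
        k3u, k3v = f(x + h / 2, u + h / 2 * k2u, v + h / 2 * k2v)
        k4u, k4v = f(x + h, u + h * k3u, v + h * k3v)
        u += h / 6 * (k1u + 2 * k2u + 2 * k3u + k4u)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
        x += h
    return u

def shoot(target, s0=0.0, s1=1.0, tol=1e-9):
    # Secant iteration on the unknown initial slope s so that the
    # integrated solution hits the far boundary value `target`.
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    while abs(f1) > tol:
        s0, s1 = s1, s1 - f1 * (s1 - s0) / (f1 - f0)
        f0, f1 = f1, integrate(s1) - target
    return s1

# BVP: u'' = 6x, u(0) = 0, u(1) = 1; exact solution u = x**3, u'(0) = 0.
slope = shoot(1.0)
```

Quasi-linearization replaces the secant correction with a Newton step obtained by integrating the linearized equations, but the guess-integrate-correct structure is the same.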
A Review of Solid-Solution Models of High-Entropy Alloys Based on Ab Initio Calculations
Directory of Open Access Journals (Sweden)
Fuyang Tian
2017-11-01
Full Text Available Similar to the importance of XRD in experiments, ab initio calculations are a powerful theoretical tool for predicting potential new materials and investigating their intrinsic properties. As typical solid-solution materials, high-entropy alloys (HEAs) exhibit a large degree of chemical disorder, which makes the application of ab initio calculations to HEAs difficult. The present review focuses on the available ab initio based solid-solution models (virtual lattice approximation, coherent potential approximation, special quasirandom structure, similar local atomic environment, maximum-entropy method, and hybrid Monte Carlo/molecular dynamics) and their applications and limits in single-phase HEAs.
Classification of methods of production of computer forensics using a graph theory approach
Directory of Open Access Journals (Sweden)
Anna Ravilyevna Smolina
2016-06-01
Full Text Available A classification of the methods of production of computer forensics based on a graph theory approach is proposed. Using this classification, the search for a suitable method of production of computer forensics can be accelerated and simplified, and the process can be automated.
Classification of methods of production of computer forensics using a graph theory approach
Anna Ravilyevna Smolina; Alexander Alexandrovich Shelupanov
2016-01-01
A classification of the methods of production of computer forensics based on a graph theory approach is proposed. Using this classification, the search for a suitable method of production of computer forensics can be accelerated and simplified, and the process can be automated.
Reference interval computation: which method (not) to choose?
Pavlov, Igor Y; Wilson, Andrew R; Delgado, Julio C
2012-07-11
When different methods are applied to reference interval (RI) calculation the results can sometimes be substantially different, especially for small reference groups. If there are no reliable RI data available, there is no way to confirm which method generates results closest to the true RI. We randomly drew samples from a public database for 33 markers. For each sample, RIs were calculated by bootstrapping, parametric, and Box-Cox transformed parametric methods. Results were compared to the values of the population RI. For approximately half of the 33 markers, results of all 3 methods were within 3% of the true reference value. For other markers, parametric results were either unavailable or deviated considerably from the true values. The transformed parametric method was more accurate than bootstrapping for a sample size of 60, very close to bootstrapping for a sample size of 120, but in some cases unavailable. We recommend against using parametric calculations to determine RIs. The transformed parametric method utilizing the Box-Cox transformation would be the preferable way of RI calculation, if it satisfies a normality test. If not, bootstrapping is always available, and is almost as accurate and precise as the transformed parametric method. Copyright © 2012 Elsevier B.V. All rights reserved.
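The bootstrapping approach can be sketched as follows. This is a simplified illustration (endpoint averaging over resamples; the Gaussian marker values, sample size, and seeds are invented), not the authors' exact procedure:

```python
import random

def percentile(xs, p):
    # Linear-interpolation percentile on a sorted copy (0 <= p <= 100).
    s = sorted(xs)
    k = (len(s) - 1) * p / 100.0
    lo, hi = int(k), min(int(k) + 1, len(s) - 1)
    return s[lo] + (s[hi] - s[lo]) * (k - lo)

def bootstrap_ri(sample, n_boot=2000, rng=None):
    # Bootstrap estimate of a 95% reference interval: resample with
    # replacement, take the 2.5th and 97.5th percentiles of each
    # resample, and average the endpoint estimates over all resamples.
    rng = rng or random.Random()
    lows, highs = [], []
    for _ in range(n_boot):
        b = [rng.choice(sample) for _ in sample]
        lows.append(percentile(b, 2.5))
        highs.append(percentile(b, 97.5))
    return sum(lows) / n_boot, sum(highs) / n_boot

# Simulated reference group of 120 values for a marker with true
# distribution N(100, 10), i.e. a true RI of about (80.4, 119.6).
rng = random.Random(1)
sample = [rng.gauss(100.0, 10.0) for _ in range(120)]
lo, hi = bootstrap_ri(sample, rng=random.Random(2))
```

The appeal noted in the abstract is visible here: the procedure needs no distributional assumption, so it is always available, at the cost of some extra variability for small reference groups.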
Ab initio molecular dynamics in a finite homogeneous electric field.
Umari, P; Pasquarello, Alfredo
2002-10-07
We treat homogeneous electric fields within density functional calculations with periodic boundary conditions. A nonlocal energy functional depending on the applied field is used within an ab initio molecular dynamics scheme. The reliability of the method is demonstrated in the case of bulk MgO for the Born effective charges, and the high- and low-frequency dielectric constants. We evaluate the static dielectric constant by performing a damped molecular dynamics in an electric field and avoiding the calculation of the dynamical matrix. Application of this method to vitreous silica shows good agreement with experiment and illustrates its potential for systems of large size.
International Nuclear Information System (INIS)
Dragt, A.J.; Gluckstern, R.L.
1992-11-01
The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods, and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics, including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam-cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides.
Computational methods for constructing protein structure models from 3D electron microscopy maps.
Esquivel-Rodríguez, Juan; Kihara, Daisuke
2013-10-01
Protein structure determination by cryo-electron microscopy (EM) has made significant progress in the past decades. Resolutions of EM maps have been improving as evidenced by recently reported structures that are solved at high resolutions close to 3Å. Computational methods play a key role in interpreting EM data. Among many computational procedures applied to an EM map to obtain protein structure information, in this article we focus on reviewing computational methods that model protein three-dimensional (3D) structures from a 3D EM density map that is constructed from two-dimensional (2D) maps. The computational methods we discuss range from de novo methods, which identify structural elements in an EM map, to structure fitting methods, where known high resolution structures are fit into a low-resolution EM map. A list of available computational tools is also provided. Copyright © 2013 Elsevier Inc. All rights reserved.
Interpolation method by whole body computed tomography, Artronix 1120
International Nuclear Information System (INIS)
Fujii, Kyoichi; Koga, Issei; Tokunaga, Mitsuo
1981-01-01
Reconstruction of whole-body CT images by an interpolation method with rapid scanning was investigated. An Artronix 1120 with a fixed collimator was used to obtain CT images every 5 mm. The X-ray source was circularly movable so that the beam stayed perpendicular to the detector. A length of 150 mm was scanned in about 15 min with a slice width of 5 mm. The images were reproduced every 7.5 mm, which could be reduced to every 1.5 mm when necessary. Out of 420 inspections of the chest, abdomen, and pelvis, 5 representative cases for which this method was valuable are described. The cases were fibrous histiocytoma of the upper mediastinum, left adrenal adenoma, left ureter fibroma, recurrence of colon cancer in the pelvis, and an abscess around the rectum. This method improved the image quality of lesions in the vicinity of the ureters, main artery, and rectum. The time required and the exposure dose were reduced to 50% by this method. (Nakanishi, T.)
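The interpolation step can be sketched as a linear blend of the two measured slices that bracket each requested plane. A hedged toy (the 1x1 "images" and z positions are invented; real CT slices are large 2-D arrays):

```python
def interp_slices(slices, z_positions, z_query):
    # Linear inter-slice interpolation: each queried axial plane is a
    # weighted blend of the two measured slices that bracket it.
    # `slices` is a list of 2-D images (lists of rows) acquired at the
    # increasing axial coordinates `z_positions`.
    out = []
    for z in z_query:
        # index of the measured slice at or below z (clamped so that
        # k + 1 is always a valid slice)
        k = max(i for i, zp in enumerate(z_positions) if zp <= z)
        k = min(k, len(slices) - 2)
        z0, z1 = z_positions[k], z_positions[k + 1]
        w = (z - z0) / (z1 - z0)
        blend = [[(1 - w) * a + w * b for a, b in zip(r0, r1)]
                 for r0, r1 in zip(slices[k], slices[k + 1])]
        out.append(blend)
    return out

# Slices measured every 5 mm; reconstruct planes every 7.5 mm.
measured = [[[0.0]], [[10.0]], [[20.0]], [[30.0]]]   # 1x1 toy "images"
recon = interp_slices(measured, [0.0, 5.0, 10.0, 15.0], [0.0, 7.5, 15.0])
```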
A new method for computing the quark-gluon vertex
International Nuclear Information System (INIS)
Aguilar, A C
2015-01-01
In this talk we present a new method for determining the nonperturbative quark-gluon vertex, which constitutes a crucial ingredient for a variety of theoretical and phenomenological studies. This new method relies heavily on the exact all-order relation connecting the conventional quark-gluon vertex with the corresponding vertex of the background field method, which is Abelian-like. The longitudinal part of this latter quantity is fixed using the standard gauge technique, whereas the transverse part is estimated with the help of the so-called transverse Ward identities. This method allows the approximate determination of the nonperturbative behavior of all twelve form factors comprising the quark-gluon vertex, for arbitrary values of the momenta. Numerical results are presented for the form factors in three special kinematical configurations (soft gluon and quark symmetric limit, zero quark momentum), and compared with the corresponding lattice data. (paper)
Medical Data Probabilistic Analysis by Optical Computing Methods
Directory of Open Access Journals (Sweden)
Alexander LARKIN
2014-06-01
Full Text Available The purpose of this article is to show that coherent laser photonics methods can be used for the classification of medical information. It is shown that holographic methods can be used not only for work with images: they can also process information represented in a universal multi-parametric form. It is shown that, along with the usual correlation algorithm, a number of classification algorithms can be realized: searching for a precedent, Hamming distance measurement, the Bayes probability algorithm, and deterministic and "correspondence" algorithms. Significantly, this preserves all the advantages of the holographic method: speed, two-dimensionality, record-breaking memory capacity, flexibility of data processing and representation of results, and high radiation resistance in comparison with electronic equipment. As an example, the result of solving one problem of medical diagnostics is presented: a forecast of the state of an organism after massive traumatic lesions.
Directory of Open Access Journals (Sweden)
Ling Kang
2017-03-01
Full Text Available Compared to the hydrostatic hydrodynamic model, the non-hydrostatic hydrodynamic model can accurately simulate flows that feature vertical accelerations, but its low computational efficiency severely restricts its wider application. This paper proposes a non-hydrostatic hydrodynamic model based on a multithreading parallel computing method. The horizontal momentum equation is obtained by integrating the Navier–Stokes equations from the bottom to the free surface. The vertical momentum equation is approximated by the Keller-box scheme. A two-step method is used to solve the model equations. A parallel strategy based on block decomposition computation is utilized. The original computational domain is subdivided into two subdomains that are physically connected via a virtual boundary technique. Two sub-threads are created and tasked with the computation of the two subdomains. The producer–consumer model and the thread lock technique are used to achieve synchronous communication between sub-threads. The validity of the model was verified by solitary wave propagation experiments over a flat bottom and a slope, followed by two sinusoidal wave propagation experiments over a submerged breakwater. The parallel computing method proposed here was found to effectively enhance computational efficiency, saving 20%–40% of the computation time compared to serial computing. The parallel speedup and parallel efficiency are approximately 1.45 and 72%, respectively. The parallel computing method makes a contribution to the popularization of non-hydrostatic models.
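The block-decomposition threading strategy can be sketched as follows. This is a hedged toy: 1-D explicit diffusion stands in for the non-hydrostatic equations, and a `threading.Barrier` stands in for the producer-consumer synchronization at the virtual boundary:

```python
import threading

def run_split_diffusion(u, steps, alpha=0.25):
    # Block decomposition: the 1-D grid is split into two subdomains,
    # each advanced by its own thread. Every step, both threads first
    # read the ghost (virtual-boundary) values they need, wait at the
    # barrier, and only then write their updates, so reads and writes
    # never race. A second barrier separates writes from the next
    # step's reads.
    n = len(u)
    mid = n // 2
    barrier = threading.Barrier(2)

    def worker(lo, hi):
        for _ in range(steps):
            left = u[lo - 1] if lo > 0 else u[lo]      # zero-flux edge
            right = u[hi] if hi < n else u[hi - 1]
            halo = [left] + u[lo:hi] + [right]
            barrier.wait()           # all ghost values have been read
            for i in range(lo, hi):
                j = i - lo + 1
                u[i] = halo[j] + alpha * (halo[j - 1] - 2 * halo[j]
                                          + halo[j + 1])
            barrier.wait()           # all updates of this step written

    threads = [threading.Thread(target=worker, args=(0, mid)),
               threading.Thread(target=worker, args=(mid, n))]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return u

grid = [0.0] * 20
grid[10] = 1.0                       # initial spike to diffuse
run_split_diffusion(grid, steps=50)
```

With zero-flux edges the scheme conserves the total, which makes the correctness of the two-thread exchange easy to check.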
Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)
2012-01-01
Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.
Vectorization on the star computer of several numerical methods for a fluid flow problem
Lambiotte, J. J., Jr.; Howser, L. M.
1974-01-01
A reexamination of some numerical methods is considered in light of the new class of computers which use vector streaming to achieve high computation rates. A study has been made of the effect on the relative efficiency of several numerical methods applied to a particular fluid flow problem when they are implemented on a vector computer. The method of Brailovskaya, the alternating direction implicit method, a fully implicit method, and a new method called partial implicitization have been applied to the problem of determining the steady state solution of the two-dimensional flow of a viscous incompressible fluid in a square cavity driven by a sliding wall. Results are obtained for three mesh sizes and a comparison is made of the methods for serial computation.
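The payoff of a vector formulation can be sketched with a Jacobi relaxation of a cavity-like Laplace problem, written once point-by-point and once as whole-array operations (a hedged modern analogue: NumPy slice arithmetic stands in for vector streaming, and Jacobi stands in for the methods named above):

```python
import numpy as np

def jacobi_serial(u, sweeps):
    # Point-by-point (scalar) Jacobi relaxation of Laplace's equation
    # on the interior, boundaries held fixed.
    for _ in range(sweeps):
        v = u.copy()
        for i in range(1, u.shape[0] - 1):
            for j in range(1, u.shape[1] - 1):
                v[i, j] = 0.25 * (u[i - 1, j] + u[i + 1, j]
                                  + u[i, j - 1] + u[i, j + 1])
        u = v
    return u

def jacobi_vector(u, sweeps):
    # The same sweep as whole-array (vector) operations: one
    # shifted-slice expression updates every interior point at once,
    # the formulation a vector machine streams efficiently.
    for _ in range(sweeps):
        v = u.copy()
        v[1:-1, 1:-1] = 0.25 * (u[:-2, 1:-1] + u[2:, 1:-1]
                                + u[1:-1, :-2] + u[1:-1, 2:])
        u = v
    return u

u0 = np.zeros((16, 16))
u0[0, :] = 1.0                  # "sliding wall" boundary value
a = jacobi_serial(u0.copy(), 30)
b = jacobi_vector(u0.copy(), 30)
```

Both functions compute identical iterates; only the second exposes the full sweep as one long vector operation, which is the property the paper's vectorization study measures.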
Wu, Xiangyang
1999-07-01
The heterocyclic amine 2-amino-3-methylimidazo(4,5-f)quinoline (IQ) is one of a number of carcinogens found in barbecued meat and fish. It induces tumors in mammals and is probably involved in human carcinogenesis because of widespread exposure to such food carcinogens. IQ is biochemically activated to a derivative which reacts with DNA to form a covalent adduct. This adduct may deform the DNA and consequently cause a mutation, which may initiate carcinogenesis. To understand this cancer-initiating event, it is necessary to obtain atomic resolution structures of the damaged DNA. No such structures are available experimentally due to synthesis difficulties. Therefore, we employ extensive molecular mechanics and dynamics calculations for this purpose. The major IQ-DNA adduct in the specific DNA sequence d(5'G1G2C G3CCA3') - d(5'TGGCGCC3') with IQ modified at G3 is studied. The d(5'G1G2C G3CC3') sequence has recently been shown to be a hot-spot for mutations when IQ modification is at G3. Although this sequence is prone to -2 deletions via a "slippage mechanism" even when unmodified, a key question is why IQ increases the mutation frequency of the unmodified DNA by a factor of about 10^4. Is there a structural feature imposed by IQ that is responsible? The molecular mechanics and dynamics program AMBER for nucleic acids with the latest force field was chosen for this work. This force field has been demonstrated to reproduce the B-DNA structure well. However, some parameters of the modified residue (the partial charges, bond lengths and angles, and dihedral parameters) are not available in the AMBER database. We parameterized the force field using high-level ab initio quantum calculations. We created 800 starting conformations which uniformly sampled, in combination at 18° intervals, the three torsion angles that govern the IQ-DNA orientations, and energy-minimized them. The most important structures are abnormal; the IQ-damaged guanine is rotated out of its standard B
Computation of nonuniform transmission lines using the FDTD method
Energy Technology Data Exchange (ETDEWEB)
Miranda, G.C.; Paulino, J.O.S. [Universidade Federal de Minas Gerais, Belo Horizonte, MG (Brazil). School of Engineering
1997-12-31
The calculation of lightning overvoltages on transmission lines is described. Lightning-induced overvoltages are of great significance under certain conditions because of the main characteristics of the phenomena. The lightning channel model is one of the most important parameters essential to obtaining the generated electromagnetic fields. In this study, nonuniform transmission line equations were solved using the finite difference method with a leap-frog scheme, i.e. the Finite Difference Time Domain (FDTD) method. The subroutine was interfaced with the Electromagnetic Transients Program (EMTP). Two models were used to represent the characteristic impedance of the nonuniform lines used to model the transmission line towers and the main lightning channel. The advantages of the FDTD method were a much smaller code and faster processing time. 35 refs., 5 figs.
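A minimal sketch of a leap-frog FDTD update for a nonuniform lossless line (the telegrapher's equations with per-cell L and C). This is a hedged illustration: the graded inductance profile and the Gaussian source are invented stand-ins for a tower model and a lightning stroke, not the paper's models:

```python
import math

def fdtd_line(nz=200, nt=400, dz=1.0):
    # Leap-frog FDTD for the lossless telegrapher's equations
    #   dv/dt = -(1/C) di/dx,   di/dt = -(1/L) dv/dx
    # on a staggered grid: voltages v[k] at nodes, currents i[k] between
    # nodes. Per-cell L[k], C[k] make the line nonuniform, so the
    # characteristic impedance sqrt(L/C) varies along it.
    L = [1.0 + 0.5 * k / nz for k in range(nz)]   # graded inductance p.u.l.
    C = [1.0] * nz                                # capacitance p.u.l.
    dt = 0.5 * dz * math.sqrt(min(L) * min(C))    # CFL-stable time step
    v = [0.0] * nz
    i = [0.0] * (nz - 1)
    for n in range(nt):
        # Gaussian voltage source at the sending end (stroke surrogate).
        v[0] = math.exp(-((n * dt - 20.0) / 6.0) ** 2)
        for k in range(nz - 1):
            i[k] -= dt / (L[k] * dz) * (v[k + 1] - v[k])
        for k in range(1, nz - 1):
            v[k] -= dt / (C[k] * dz) * (i[k] - i[k - 1])
    return v

v = fdtd_line()
```

The two nested loops are the entire kernel, which illustrates the abstract's point about the FDTD method's small code and fast processing time; a production version would add the EMTP interface and loss terms.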