WorldWideScience

Sample records for accurate field computations

  1. Beyond mean-field approximations for accurate and computationally efficient models of on-lattice chemical kinetics

    Science.gov (United States)

    Pineda, M.; Stamatakis, M.

    2017-07-01

    Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computational cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. By contrast, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.

  2. Accurate computation of transfer maps from magnetic field data

    International Nuclear Information System (INIS)

    Venturini, Marco; Dragt, Alex J.

    1999-01-01

    Consider an arbitrary beamline magnet. Suppose one component (for example, the radial component) of the magnetic field is known on the surface of some imaginary cylinder coaxial to and contained within the magnet aperture. This information can be obtained either by direct measurement or by computation with the aid of some 3D electromagnetic code. Alternatively, suppose that the field harmonics have been measured by using a spinning coil. We describe how this information can be used to compute the exact transfer map for the beamline element. This transfer map takes into account all effects of real beamline elements including fringe-field, pseudo-multipole, and real multipole error effects. The method we describe automatically takes into account the smoothing properties of the Laplace-Green function. Consequently, it is robust against both measurement and electromagnetic code errors. As an illustration we apply the method to the field analysis of high-gradient interaction region quadrupoles in the Large Hadron Collider (LHC).
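
    As a concrete illustration of the harmonic input such a method consumes, the sketch below (a Python toy under stated assumptions, not the authors' transfer-map code) extracts normal and skew multipole coefficients from the radial field sampled uniformly in angle on a circle inside the aperture:

        import numpy as np

        def field_harmonics(br_samples):
            """Harmonic content of B_r sampled uniformly in angle on a
            circle of fixed radius inside the magnet aperture."""
            n = len(br_samples)
            c = np.fft.rfft(br_samples) / n   # complex Fourier coefficients
            # For f = b_m sin(m phi) + a_m cos(m phi): c_m = (a_m - i b_m)/2
            normal = -2.0 * c[1:].imag        # b_m (normal components)
            skew = 2.0 * c[1:].real           # a_m (skew components)
            return normal, skew

        # toy check: a pure quadrupole radial field B_r = G * R * sin(2 phi)
        phi = np.linspace(0.0, 2.0 * np.pi, 256, endpoint=False)
        normal, skew = field_harmonics(1.7 * 0.02 * np.sin(2.0 * phi))
        print(normal[1], skew[1])             # m = 2 carries G * R = 0.034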

  3. An efficient and accurate method for calculating nonlinear diffraction beam fields

    Energy Technology Data Exchange (ETDEWEB)

    Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)

    2016-04-15

    This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.

  4. Accurate evaluation of exchange fields in finite element micromagnetic solvers

    Science.gov (United States)

    Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.

    2012-04-01

    Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow reducing the error of computing the exchange field by increasing the mesh density for structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.
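
    A 1D analogue may help fix ideas. The sketch below (illustrative only; the paper solves the 3D Landau-Lifshitz-Gilbert problem with special testing functions) assembles linear and quadratic finite elements for a Poisson model problem and compares nodal errors, showing the accuracy gain of quadratic basis functions:

        import numpy as np

        def solve_poisson(n_elem, quadratic):
            """-u'' = pi^2 sin(pi x), u(0) = u(1) = 0; exact u = sin(pi x)."""
            h = 1.0 / n_elem
            if quadratic:
                # standard 3-node quadratic element stiffness for -u''
                ke = np.array([[7, -8, 1], [-8, 16, -8], [1, -8, 7]]) / (3 * h)
                n_nodes = 2 * n_elem + 1
                conn = [(2 * e, 2 * e + 1, 2 * e + 2) for e in range(n_elem)]
                fw = np.array([1, 4, 1]) * h / 6.0   # Simpson-type load
            else:
                ke = np.array([[1, -1], [-1, 1]]) / h
                n_nodes = n_elem + 1
                conn = [(e, e + 1) for e in range(n_elem)]
                fw = np.array([1, 1]) * h / 2.0      # trapezoid load
            x = np.linspace(0.0, 1.0, n_nodes)
            K = np.zeros((n_nodes, n_nodes))
            F = np.zeros(n_nodes)
            for ids in conn:                         # element assembly
                for a, i in enumerate(ids):
                    F[i] += fw[a] * np.pi ** 2 * np.sin(np.pi * x[i])
                    for b, j in enumerate(ids):
                        K[i, j] += ke[a, b]
            K[0, :] = K[-1, :] = 0.0
            K[0, 0] = K[-1, -1] = 1.0
            F[0] = F[-1] = 0.0                       # Dirichlet ends
            u = np.linalg.solve(K, F)
            return np.max(np.abs(u - np.sin(np.pi * x)))

        for n in (8, 16, 32):   # quadratic errors fall far below linear ones
            print(n, solve_poisson(n, False), solve_poisson(n, True))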

  5. Fast magnetic field computation in fusion technology using GPU technology

    Energy Technology Data Exchange (ETDEWEB)

    Chiariello, Andrea Gaetano [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Formisano, Alessandro, E-mail: Alessandro.Formisano@unina2.it [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Martone, Raffaele [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy)

    2013-10-15

    Highlights: ► The paper deals with high accuracy numerical simulations of high field magnets. ► The porting of existing codes to High Performance Computing architectures allowed a relevant speedup to be obtained without reducing computational accuracy. ► Some examples of applications, referred to ITER-like magnets, are reported. -- Abstract: One of the main issues in the simulation of Tokamak operation is the reliable and accurate computation of actual field maps in the plasma chamber. In this paper a tool able to accurately compute magnetic field maps produced by active coils of any 3D shape, wound with a high number of conductors, is presented. Under the linearity assumption, the coil winding is modeled by means of "sticks", following each conductor's shape, and the contribution of each stick is computed using high-speed graphics processing units (GPUs). Relevant speed enhancements with respect to a standard parallel computing environment are achieved in this way.
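
    The stick decomposition lends itself to a compact vectorized sketch. The Python toy below (plain NumPy standing in for the paper's GPU kernels; the Hanson-Hirshman segment formula used here is one standard choice, not necessarily the authors') sums the Biot-Savart field of straight current segments:

        import numpy as np

        MU0_OVER_4PI = 1.0e-7   # T m / A

        def b_field_sticks(points, nodes, current):
            """Field of a coil modeled as a polyline of straight sticks.
            points: (M, 3) observation points; nodes: (N, 3) vertices,
            stick k running from nodes[k] to nodes[k+1]."""
            B = np.zeros_like(points, dtype=float)
            for a, b in zip(nodes[:-1], nodes[1:]):
                r1, r2 = points - a, points - b
                n1 = np.linalg.norm(r1, axis=1)
                n2 = np.linalg.norm(r2, axis=1)
                # Hanson-Hirshman form, numerically stable off the segment
                num = (n1 + n2)[:, None] * np.cross(r1, r2)
                den = n1 * n2 * (n1 * n2 + np.einsum('ij,ij->i', r1, r2))
                B += MU0_OVER_4PI * current * num / den[:, None]
            return B

        # sanity check: a nearly infinite wire along z, observed 1 m away,
        # should give ~mu0 I / (2 pi d) = 2e-7 T in the +y direction
        nodes = np.array([[0.0, 0.0, -1000.0], [0.0, 0.0, 1000.0]])
        print(b_field_sticks(np.array([[1.0, 0.0, 0.0]]), nodes, 1.0))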

  6. Accurate Calculation of Fringe Fields in the LHC Main Dipoles

    CERN Document Server

    Kurz, S; Siegel, N

    2000-01-01

    The ROXIE program developed at CERN for the design and optimization of the superconducting LHC magnets has recently been extended, in a collaboration with the University of Stuttgart, Germany, with a field computation method based on the coupling between the boundary element (BEM) and the finite element (FEM) techniques. This avoids meshing the coils and the air regions, and avoids artificial far-field boundary conditions. The method is therefore especially suited for the accurate calculation of fields in superconducting magnets, in which the field is dominated by the coil. We present fringe-field calculations in both 2D and 3D geometries to evaluate the effect of the connections and the cryostat on the field quality and on the flux density to which auxiliary bus-bars are exposed.

  7. The importance of accurate meteorological input fields and accurate planetary boundary layer parameterizations, tested against ETEX-1

    International Nuclear Information System (INIS)

    Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.

    1997-01-01

    Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere were given in full detail (infinitely accurate information about wind speed, etc.) and infinitely fast computers were available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces uncertainties and errors in the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Among these are, e.g., the numerical treatment of the transport equation, the accuracy of the mean meteorological input fields, and parameterizations of sub-grid scale phenomena (such as parameterizations of the 2nd and higher order turbulence terms needed to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also different parameterizations of the mixing height and the vertical exchange are compared. (author)

  8. Accurate measurement of surface areas of anatomical structures by computer-assisted triangulation of computed tomography images

    Energy Technology Data Exchange (ETDEWEB)

    Allardice, J.T.; Jacomb-Hood, J.; Abulafi, A.M.; Williams, N.S. (Royal London Hospital (United Kingdom)); Cookson, J.; Dykes, E.; Holman, J. (London Hospital Medical College (United Kingdom))

    1993-05-01

    There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and it is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation, which shows that the surface area of the walls varies from 143 cm² to 392 cm². (Author)

  9. Accurate atom-mapping computation for biochemical reactions.

    Science.gov (United States)

    Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D

    2012-11-26

    The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways has to account for the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
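
    As a much-simplified illustration of casting atom mapping as discrete optimization (the paper's MWED method is a bond-propensity-weighted MILP, considerably richer than this), a minimum-cost bijection between reactant and product atoms can be computed with a plain assignment solver:

        import numpy as np
        from scipy.optimize import linear_sum_assignment

        def toy_atom_mapping(reactant_elems, product_elems, cost_fn):
            """Minimum-cost bijection of reactant atoms onto product atoms;
            a toy stand-in for the MWED optimization, not the real method."""
            cost = np.array([[cost_fn(r, p) for p in product_elems]
                             for r in reactant_elems])
            rows, cols = linear_sum_assignment(cost)
            return list(zip(rows, cols)), cost[rows, cols].sum()

        # forbid cross-element matches with a large penalty
        penalty = lambda r, p: 0.0 if r == p else 1.0e6
        mapping, total = toy_atom_mapping(list("CCOH"), list("COCH"), penalty)
        print(mapping, total)   # a zero-cost, element-preserving bijection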

  10. A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation

    Energy Technology Data Exchange (ETDEWEB)

    Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)

    2012-06-15

    Purpose: To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. Methods: We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers, using the third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air, and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. Results: The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm² to 40 × 40 cm². The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within

  11. An efficient hysteresis modeling methodology and its implementation in field computation applications

    Energy Technology Data Exchange (ETDEWEB)

    Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)

    2017-07-15

    Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing the ensemble of triangular sub-regions hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended to 3D tetrahedral sub-volumes. - Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNN) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach on discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.

  12. Rigorous bounds on survival times in circular accelerators and efficient computation of fringe-field transfer maps

    International Nuclear Information System (INIS)

    Hoffstaetter, G.H.

    1994-12-01

    Analyzing stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamic apertures, which can also be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)

  13. Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics

    Science.gov (United States)

    Goodrich, John W.; Dyson, Rodger W.

    1999-01-01

    The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that

  14. Computer-based personality judgments are more accurate than those made by humans.

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-27

    Judging others' personalities is an essential skill in successful social living, as personality is a key driver behind people's interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants' Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy.

  15. An efficient and accurate method for computation of energy release rates in beam structures with longitudinal cracks

    DEFF Research Database (Denmark)

    Blasques, José Pedro Albergaria Amaral; Bitsche, Robert

    2015-01-01

    This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.

  16. Fast and accurate computation of projected two-point functions

    Science.gov (United States)

    Grasshorn Gebhardt, Henry S.; Jeong, Donghui

    2018-01-01

    We present the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ, χ′). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
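
    For orientation, the brute-force integral that 2-FAST is designed to avoid can be written directly (toy power spectrum and integration cutoff assumed here); quadrature over the oscillatory Bessel kernel is exactly what becomes expensive at large separations:

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import spherical_jn

        def pk_toy(k):
            """Placeholder power spectrum (not a fitted cosmology)."""
            return k / (1.0 + (k / 0.02) ** 3)

        def xi_direct(r, ell=0, kmax=10.0):
            """Brute-force xi_l(r) = (1/2 pi^2) int k^2 P(k) j_l(kr) dk;
            the slow oscillatory integral that 2-FAST circumvents."""
            integrand = lambda k: k * k * pk_toy(k) * spherical_jn(ell, k * r)
            val, _ = quad(integrand, 0.0, kmax, limit=2000)
            return val / (2.0 * np.pi ** 2)

        print(xi_direct(100.0))   # hundreds of oscillations of j_0 by k = 10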

  17. Computer-based personality judgments are more accurate than those made by humans

    Science.gov (United States)

    Youyou, Wu; Kosinski, Michal; Stillwell, David

    2015-01-01

    Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507

  18. Magnetostatic fields computed using an integral equation derived from Green's theorems

    International Nuclear Information System (INIS)

    Simkin, J.; Trowbridge, C.W.

    1976-04-01

    A method of computing magnetostatic fields is described that is based on a numerical solution of the integral equation obtained from Green's Theorems. The magnetic scalar potential and its normal derivative on the surfaces of volumes are found by solving a set of linear equations. These are obtained from Green's Second Theorem and the continuity conditions at interfaces between volumes. Results from a two-dimensional computer program are presented and these show the method to be accurate and efficient. (author)

  19. Accurate method of the magnetic field measurement of quadrupole magnets

    International Nuclear Information System (INIS)

    Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.

    1983-01-01

    We present an accurate method for measuring the magnetic field of quadrupole magnets. The method of obtaining the information of the field gradient and the effective focussing length is given. A new scheme to obtain the information of the skew field components is also proposed. The relative accuracy of the measurement was 1 × 10⁻⁴ or less. (author)
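
    The abstract gives no implementation detail, but the two quantities it names can be illustrated with a short sketch (hypothetical data layout, not the authors' measurement scheme): the gradient from a transverse field scan, and the effective focusing length from a longitudinal gradient profile:

        import numpy as np

        def gradient_and_effective_length(x, by, z, g_profile):
            """x: transverse positions (m); by: measured B_y(x) (T);
            z: longitudinal positions (m); g_profile: gradient G(z) (T/m)."""
            G = np.polyfit(x, by, 1)[0]        # quadrupole: B_y = G x
            dz = np.diff(z)                    # trapezoid integral of G(z)
            integral = np.sum(0.5 * (g_profile[1:] + g_profile[:-1]) * dz)
            L_eff = integral / g_profile[z.size // 2]   # L = int G dz / G0
            return G, L_eff

        # synthetic scan: 10 T/m quadrupole with a flat-top fringe profile
        x = np.linspace(-0.01, 0.01, 11)
        z = np.linspace(-0.5, 0.5, 201)
        Gz = 10.0 * np.exp(-(z / 0.2) ** 8)
        print(gradient_and_effective_length(x, 10.0 * x, z, Gz))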

  20. Accurate computer simulation of a drift chamber

    International Nuclear Information System (INIS)

    Killian, T.J.

    1980-01-01

    A general purpose program for drift chamber studies is described. First the capacitance matrix is calculated using a Green's function technique. The matrix is used in a linear-least-squares fit to choose optimal operating voltages. Next the electric field is computed, and given knowledge of gas parameters and magnetic field environment, a family of electron trajectories is determined. These are finally used to make drift distance vs time curves which may be used directly by a track reconstruction program. Results are compared with data obtained from the cylindrical chamber in the Axial Field Magnet experiment at the CERN ISR.

  1. Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation

    Science.gov (United States)

    Laakso, Ilkka; Hirata, Akimasa

    2012-12-01

    In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and the affected brain areas, depends on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less even when using an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in less than ten iterations independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s for the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.
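
    A 1D toy version of a geometric multigrid V-cycle (damped-Jacobi smoothing, injection restriction, linear prolongation; the paper's solver is a matrix-free 3D finite-element variant) conveys why the iteration count stays small and essentially grid-independent:

        import numpy as np

        def vcycle(u, f, h, n_smooth=3, w=2.0 / 3.0):
            """One V-cycle for -u'' = f on a uniform grid, zero Dirichlet ends."""
            n = u.size - 1
            for _ in range(n_smooth):   # pre-smoothing: damped Jacobi
                u[1:-1] = (1 - w) * u[1:-1] + \
                    w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            if n <= 2:
                return u
            r = np.zeros_like(u)        # residual of the 3-point Laplacian
            r[1:-1] = f[1:-1] - (2 * u[1:-1] - u[:-2] - u[2:]) / (h * h)
            ec = vcycle(np.zeros(n // 2 + 1), r[::2].copy(), 2 * h, n_smooth, w)
            u += np.interp(np.arange(n + 1.0), np.arange(0, n + 1, 2), ec)
            for _ in range(n_smooth):   # post-smoothing
                u[1:-1] = (1 - w) * u[1:-1] + \
                    w * 0.5 * (u[:-2] + u[2:] + h * h * f[1:-1])
            return u

        n = 256
        x = np.linspace(0.0, 1.0, n + 1)
        f = np.pi ** 2 * np.sin(np.pi * x)
        u = np.zeros(n + 1)
        for it in range(8):             # error shrinks steadily per cycle
            u = vcycle(u, f, 1.0 / n)
            print(it, np.max(np.abs(u - np.sin(np.pi * x))))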

  2. Accurate computer simulation of a drift chamber

    CERN Document Server

    Killian, T J

    1980-01-01

    The author describes a general purpose program for drift chamber studies. First the capacitance matrix is calculated using a Green's function technique. The matrix is used in a linear-least-squares fit to choose optimal operating voltages. Next the electric field is computed, and given knowledge of gas parameters and magnetic field environment, a family of electron trajectories is determined. These are finally used to make drift distance vs time curves which may be used directly by a track reconstruction program. The results are compared with data obtained from the cylindrical chamber in the Axial Field Magnet experiment at the CERN ISR. (1 refs).

  3. Accurate nonadiabatic quantum dynamics on the cheap: Making the most of mean field theory with master equations

    Energy Technology Data Exchange (ETDEWEB)

    Kelly, Aaron; Markland, Thomas E., E-mail: tmarkland@stanford.edu [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Brackbill, Nora [Department of Physics, Stanford University, Stanford, California 94305 (United States)

    2015-03-07

    In this article, we show how Ehrenfest mean field theory can be made both a more accurate and efficient method to treat nonadiabatic quantum dynamics by combining it with the generalized quantum master equation framework. The resulting mean field generalized quantum master equation (MF-GQME) approach is a non-perturbative and non-Markovian theory to treat open quantum systems without any restrictions on the form of the Hamiltonian that it can be applied to. By studying relaxation dynamics in a wide range of dynamical regimes, typical of charge and energy transfer, we show that MF-GQME provides a much higher accuracy than a direct application of mean field theory. In addition, these increases in accuracy are accompanied by computational speed-ups of between one and two orders of magnitude that become larger as the system becomes more nonadiabatic. This combination of quantum-classical theory and master equation techniques thus makes it possible to obtain the accuracy of much more computationally expensive approaches at a cost lower than even mean field dynamics, providing the ability to treat the quantum dynamics of atomistic condensed phase systems for long times.

  4. Accurate nonadiabatic quantum dynamics on the cheap: making the most of mean field theory with master equations.

    Science.gov (United States)

    Kelly, Aaron; Brackbill, Nora; Markland, Thomas E

    2015-03-07

    In this article, we show how Ehrenfest mean field theory can be made both a more accurate and efficient method to treat nonadiabatic quantum dynamics by combining it with the generalized quantum master equation framework. The resulting mean field generalized quantum master equation (MF-GQME) approach is a non-perturbative and non-Markovian theory to treat open quantum systems without any restrictions on the form of the Hamiltonian that it can be applied to. By studying relaxation dynamics in a wide range of dynamical regimes, typical of charge and energy transfer, we show that MF-GQME provides a much higher accuracy than a direct application of mean field theory. In addition, these increases in accuracy are accompanied by computational speed-ups of between one and two orders of magnitude that become larger as the system becomes more nonadiabatic. This combination of quantum-classical theory and master equation techniques thus makes it possible to obtain the accuracy of much more computationally expensive approaches at a cost lower than even mean field dynamics, providing the ability to treat the quantum dynamics of atomistic condensed phase systems for long times.

  5. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description.

    Science.gov (United States)

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σ(i) of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σ(i). A summarizing discussion highlights the achievements of the new theory and of its approximate solution particularly by

  6. Electrostatics of proteins in dielectric solvent continua. I. An accurate and efficient reaction field description

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Sebastian; Mathias, Gerald; Tavan, Paul, E-mail: paul.tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig–Maximilians Universität München, Oettingenstr. 67, 80538 München (Germany)

    2014-03-14

    We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σᵢ of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σᵢ. A summarizing discussion highlights the achievements of the new theory and of its approximate solution

  7. Field computation for two-dimensional array transducers with limited diffraction array beams.

    Science.gov (United States)

    Lu, Jian-Yu; Cheng, Jiqi

    2005-10-01

    A method is developed for calculating fields produced with a two-dimensional (2D) array transducer. This method decomposes an arbitrary 2D aperture weighting function into a set of limited diffraction array beams. Using the analytical expressions of limited diffraction beams, arbitrary continuous wave (cw) or pulse wave (pw) fields of 2D arrays can be obtained with a simple superposition of these beams. In addition, this method can be simplified and applied to a 1D array transducer of a finite or infinite elevation height. For beams produced with axially symmetric aperture weighting functions, this method can be reduced to the Fourier-Bessel method studied previously where an annular array transducer can be used. The advantage of the method is that it is accurate and computationally efficient, especially in regions that are not far from the surface of the transducer (near field), where it is important for medical imaging. Both computer simulations and a synthetic array experiment are carried out to verify the method. Results (Bessel beam, focused Gaussian beam, X wave and asymmetric array beams) show that the method is accurate as compared to that using the Rayleigh-Sommerfeld diffraction formula and agrees well with the experiment.
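
    The simplest member of the limited diffraction family is the zeroth-order Bessel beam, whose transverse profile is independent of depth. A small sketch (parameter values are illustrative only):

        import numpy as np
        from scipy.special import j0

        def bessel_beam(r, z, t, alpha, omega, c=1540.0):
            """Zeroth-order Bessel beam p = J0(alpha r) exp(i(beta z - omega t)),
            with beta = sqrt(omega^2 / c^2 - alpha^2); the J0 profile does
            not spread with z, which is the 'limited diffraction' property."""
            beta = np.sqrt((omega / c) ** 2 - alpha ** 2)
            return j0(alpha * r) * np.exp(1j * (beta * z - omega * t))

        # 2.5 MHz in a tissue-like medium: same magnitude profile at 2 depths
        r = np.linspace(0.0, 0.01, 5)
        w = 2.0 * np.pi * 2.5e6
        print(np.abs(bessel_beam(r, 0.05, 0.0, 1000.0, w)))
        print(np.abs(bessel_beam(r, 0.10, 0.0, 1000.0, w)))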

  8. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields

    Science.gov (United States)

    Zhu, Wuming; Trickey, S. B.

    2017-12-01

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.

  9. Accurate and balanced anisotropic Gaussian type orbital basis sets for atoms in strong magnetic fields.

    Science.gov (United States)

    Zhu, Wuming; Trickey, S B

    2017-12-28

    In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allow identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li⁺, Be⁺, and B⁺, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few mHs for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B

  10. Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)

    2017-07-15

    The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient dose for different human body sizes because it relies on a cylinder-shaped head (diameter: 16 cm) and body (diameter: 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based and geometry-based methods were compared with those of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to that of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, but the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on patient size. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
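
    The attenuation-based estimate can be sketched compactly (the water attenuation coefficient below is a placeholder, and the study's calibration details are not reproduced): each ray's attenuation is converted to a water-equivalent path length, the paths are summed into a water-equivalent area, and the area is converted to a diameter:

        import numpy as np

        MU_WATER = 0.02   # 1/mm at an assumed effective energy (placeholder)

        def effective_diameter(projection, pixel_mm):
            """projection: line integrals ln(I0/I) across one projection.
            Water-equivalent area A_w = sum(path lengths) * pixel width,
            then d_eff = 2 sqrt(A_w / pi)."""
            water_paths_mm = projection / MU_WATER
            area_mm2 = np.sum(water_paths_mm) * pixel_mm
            return 2.0 * np.sqrt(area_mm2 / np.pi)

        # sanity check: parallel projection of a 300 mm water cylinder
        x = np.linspace(-150.0, 150.0, 600)
        chords = 2.0 * np.sqrt(np.clip(150.0 ** 2 - x ** 2, 0.0, None))
        print(effective_diameter(MU_WATER * chords, x[1] - x[0]))  # ~300 mm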

  11. Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter

    Science.gov (United States)

    2009-03-31

    AFRL-RV-HA-TR-2009-1055, Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter (final scientific report, 02-08-2006 to 31-12-2008). Only fragments of the abstract survived extraction: "...m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable..."

  12. Computational methods for reversed-field equilibrium

    International Nuclear Information System (INIS)

    Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.

    1980-01-01

    Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described.

  13. An Accurate liver segmentation method using parallel computing algorithm

    International Nuclear Information System (INIS)

    Elbasher, Eiman Mohammed Khalied

    2014-12-01

    Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat and organs. CT scans are more detailed than standard x-rays. CT scans may be done with or without "contrast". Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing the computed tomography images. Another
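
    The transform named in the study, marker-controlled watershed, can be sketched with standard scientific-Python tools (a generic toy on synthetic shapes, not the authors' parallel CT pipeline):

        import numpy as np
        from scipy import ndimage as ndi
        from skimage.feature import peak_local_max
        from skimage.segmentation import watershed

        def watershed_segment(binary_mask):
            """Split touching objects in a binary mask: seed one marker per
            peak of the distance map, then flood with watershed."""
            distance = ndi.distance_transform_edt(binary_mask)
            coords = peak_local_max(distance, footprint=np.ones((3, 3)),
                                    labels=binary_mask)
            peaks = np.zeros(distance.shape, dtype=bool)
            peaks[tuple(coords.T)] = True
            markers, _ = ndi.label(peaks)
            return watershed(-distance, markers, mask=binary_mask)

        # two overlapping disks are separated into two labels
        yy, xx = np.mgrid[0:80, 0:80]
        mask = (((xx - 28) ** 2 + (yy - 40) ** 2) < 400) | \
               (((xx - 52) ** 2 + (yy - 40) ** 2) < 400)
        print(np.unique(watershed_segment(mask)))   # background + 2 objects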

  14. Reservoir computer predictions for the Three Meter magnetic field time evolution

    Science.gov (United States)

    Perevalov, A.; Rojas, R.; Lathrop, D. P.; Shani, I.; Hunt, B. R.

    2017-12-01

    The source of the Earth's magnetic field is the turbulent flow of liquid metal in the outer core. Our experiment's goal is to create an Earth-like dynamo, to explore the mechanisms and to understand the dynamics of the magnetic and velocity fields. Since it is a complicated system, prediction of the magnetic field is a challenging problem. We present results of mimicking the Three Meter experiment by a reservoir computer deep learning algorithm. The experiment is a three-meter diameter outer sphere and a one-meter diameter inner sphere with the gap filled with liquid sodium. The spheres can rotate up to 4 and 14 Hz respectively, giving a Reynolds number near 10⁸. Two external electromagnets apply magnetic fields, while an array of 31 external and 2 internal Hall sensors measures the resulting induced fields. We use this magnetic probe data to train a reservoir computer to predict the 3M time evolution and mimic waves in the experiment. Surprisingly accurate predictions can be made for several magnetic dipole time scales. This shows that such a complicated MHD system's behavior can be predicted. We gratefully acknowledge support from NSF EAR-1417148.
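
    A minimal echo state network, one common realization of a reservoir computer (the abstract does not specify the experiment's architecture, so everything below is an assumption-level sketch), can be trained as follows:

        import numpy as np

        def train_esn(u, y, n_res=400, rho=0.9, washout=100, seed=0):
            """u: (T, d_in) inputs, y: (T, d_out) targets. Returns the fixed
            random reservoir and the ridge-regressed linear readout."""
            rng = np.random.default_rng(seed)
            w_in = rng.uniform(-0.5, 0.5, (n_res, u.shape[1]))
            w = rng.normal(size=(n_res, n_res))
            w *= rho / np.max(np.abs(np.linalg.eigvals(w)))  # spectral radius
            x = np.zeros(n_res)
            states = np.empty((len(u), n_res))
            for t, ut in enumerate(u):          # drive the reservoir
                x = np.tanh(w @ x + w_in @ ut)
                states[t] = x
            S, Y, lam = states[washout:], y[washout:], 1e-6
            w_out = np.linalg.solve(S.T @ S + lam * np.eye(n_res), S.T @ Y)
            return w_in, w, w_out

        # toy task: one-step-ahead prediction of a noisy "magnetic probe"
        t = np.arange(2000) * 0.05
        sig = np.sin(t) + 0.05 * np.random.default_rng(1).normal(size=t.size)
        u, y = sig[:-1, None], sig[1:, None]
        w_in, w, w_out = train_esn(u, y)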

  15. Hydration free energies of cyanide and hydroxide ions from molecular dynamics simulations with accurate force fields

    Science.gov (United States)

    Lee, M.W.; Meuwly, M.

    2013-01-01

    The evaluation of hydration free energies is a sensitive test to assess force fields used in atomistic simulations. We showed recently that the vibrational relaxation times, 1D- and 2D-infrared spectroscopies for CN(-) in water can be quantitatively described from molecular dynamics (MD) simulations with multipolar force fields and slightly enlarged van der Waals radii for the C- and N-atoms. To validate such an approach, the present work investigates the solvation free energy of cyanide in water using MD simulations with accurate multipolar electrostatics. It is found that larger van der Waals radii are indeed necessary to obtain results close to the experimental values when a multipolar force field is used. For CN(-), the van der Waals ranges refined in our previous work yield hydration free energy between -72.0 and -77.2 kcal mol(-1), which is in excellent agreement with the experimental data. In addition to the cyanide ion, we also study the hydroxide ion to show that the method used here is readily applicable to similar systems. Hydration free energies are found to sensitively depend on the intermolecular interactions, while bonded interactions are less important, as expected. We also investigate in the present work the possibility of applying the multipolar force field in scoring trajectories generated using computationally inexpensive methods, which should be useful in broader parametrization studies with reduced computational resources, as scoring is much faster than the generation of the trajectories.

  16. An Accurate and Dynamic Computer Graphics Muscle Model

    Science.gov (United States)

    Levine, David Asher

    1997-01-01

    A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.

  17. Computers in field ion microscopy

    International Nuclear Information System (INIS)

    Suvorov, A.L.; Razinkova, T.L.; Sokolov, A.G.

    1980-01-01

    A review is presented of computer applications in field ion microscopy (FIM). The following topics are discussed in detail: (1) modeling field ion images in perfect crystals, (2) a general scheme of modeling, (3) modeling of the process of field evaporation, (4) crystal structure defects, (5) alloys, and (6) automation of FIM experiments and computer-assisted processing of real images. 146 references are given

  18. A method for accurate computation of elastic and discrete inelastic scattering transfer matrix

    International Nuclear Information System (INIS)

    Garcia, R.D.M.; Santina, M.D.

    1986-05-01

    A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has a discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program, which incorporates the proposed method. A comparison with numerical results from existing processing codes is also presented. (author) [pt

  19. Methods for Efficiently and Accurately Computing Quantum Mechanical Free Energies for Enzyme Catalysis.

    Science.gov (United States)

    Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L

    2016-01-01

    Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
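
    The conceptual starting point for such estimators is Zwanzig's exponential free energy perturbation formula, which methods like non-Boltzmann Bennett generalize; a toy sketch with an analytic check (not the chapter's QM/MM workflow):

        import numpy as np

        def fep_zwanzig(dU, kT=0.593):   # kT in kcal/mol near 298 K
            """Zwanzig estimator dF = -kT ln < exp(-dU/kT) >_0, where dU are
            target-minus-reference energies on reference-ensemble samples."""
            dU = np.asarray(dU)
            return -kT * np.log(np.mean(np.exp(-dU / kT)))

        # analytic check: shifting a potential by a constant c gives dF = c
        rng = np.random.default_rng(0)
        x = rng.normal(0.0, 1.0, 100000)   # reference-ensemble samples
        dU = 0.25 + 0.0 * x                # constant perturbation c = 0.25
        print(fep_zwanzig(dU))             # -> 0.25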

  20. Fast and Accurate Computation of Gauss--Legendre and Gauss--Jacobi Quadrature Nodes and Weights

    KAUST Repository

    Hale, Nicholas; Townsend, Alex

    2013-01-01

    An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.
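
    The essential structure of the algorithm, Newton iteration on the Legendre polynomial with good starting points, can be sketched as follows (the classical Chebyshev-type initial guess and direct three-term recurrence used here are simpler, and O(n²) overall, versus the paper's O(n) asymptotic evaluations):

        import numpy as np

        def gauss_legendre(n, iters=10):
            k = np.arange(1, n + 1)
            x = np.cos(np.pi * (k - 0.25) / (n + 0.5))   # initial guesses
            for _ in range(iters):
                # evaluate P_n and P_n' via the three-term recurrence
                p0, p1 = np.ones_like(x), x
                for m in range(2, n + 1):
                    p0, p1 = p1, ((2 * m - 1) * x * p1 - (m - 1) * p0) / m
                dp = n * (x * p1 - p0) / (x * x - 1.0)   # derivative identity
                x -= p1 / dp                             # Newton update
            w = 2.0 / ((1.0 - x * x) * dp * dp)          # quadrature weights
            return x[::-1], w[::-1]

        x, w = gauss_legendre(100)
        print(np.sum(w * x ** 4), 2.0 / 5.0)   # exact for low-degree monomials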

  1. Fast and Accurate Computation of Gauss--Legendre and Gauss--Jacobi Quadrature Nodes and Weights

    KAUST Repository

    Hale, Nicholas

    2013-03-06

    An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.

  2. TE/TM scheme for computation of electromagnetic fields in accelerators

    International Nuclear Information System (INIS)

    Zagorodnov, Igor; Weiland, Thomas

    2005-01-01

    We propose a new two-level economical conservative scheme for short-range wake field calculation in three dimensions. The scheme does not have dispersion in the longitudinal direction and is staircase free (second-order convergent). Unlike the finite-difference time domain (FDTD) method, it is based on a TE/TM-like splitting of the field components in time. Additionally, it uses an enhanced alternating direction splitting of the transverse space operator that makes the scheme computationally as effective as the conventional FDTD method. Unlike the ADI-FDTD and low-order Strang methods, the splitting error in our scheme is only of fourth order. As numerical examples show, the new scheme is much more accurate on the long-time scale than the conventional FDTD approach

  3. Accurate van der Waals force field for gas adsorption in porous materials.

    Science.gov (United States)

    Sun, Lei; Yang, Li; Zhang, Ya-Dong; Shi, Qi; Lu, Rui-Feng; Deng, Wei-Qiao

    2017-09-05

    An accurate van der Waals force field (VDW FF) was derived from highly precise quantum mechanical (QM) calculations. Small molecular clusters were used to explore van der Waals interactions between gas molecules and porous materials. The parameters of the accurate van der Waals force field were determined by QM calculations. To validate the force field, the prediction results from the VDW FF were compared with standard FFs, such as UFF, Dreiding, Pcff, and Compass. The results from the VDW FF were in excellent agreement with the experimental measurements. This force field can be applied to the prediction of the gas density (H2, CO2, C2H4, CH4, N2, O2) and adsorption performance inside porous materials, such as covalent organic frameworks (COFs), zeolites and metal organic frameworks (MOFs), consisting of H, B, N, C, O, S, Si, Al, Zn, Mg, Ni, and Co. This work provides a solid basis for studying gas adsorption in porous materials. © 2017 Wiley Periodicals, Inc.
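
    The fitting step behind such a force field can be illustrated in miniature: given interaction energies for one molecule–framework atom pair (synthetic values standing in for the QM cluster data), fit the parameters of a pairwise van der Waals form. The 12-6 Lennard-Jones form and all numbers below are illustrative assumptions, not the paper's functional form or parameters:

```python
import numpy as np
from scipy.optimize import curve_fit

def lj(r, eps, sigma):
    """12-6 Lennard-Jones pair energy."""
    sr6 = (sigma / r) ** 6
    return 4.0 * eps * (sr6**2 - sr6)

# Synthetic "QM cluster" energies along a separation scan (Angstrom).
r = np.linspace(3.0, 8.0, 12)
rng = np.random.default_rng(3)
e_qm = lj(r, 0.25, 3.4) + rng.normal(0.0, 0.003, r.size)  # noisy reference data

(eps_fit, sigma_fit), _ = curve_fit(lj, r, e_qm, p0=[0.1, 3.0])
print(f"eps = {eps_fit:.3f} kcal/mol, sigma = {sigma_fit:.3f} A")
```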

  4. Defect correction and multigrid for an efficient and accurate computation of airfoil flows

    NARCIS (Netherlands)

    Koren, B.

    1988-01-01

    Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction

  5. Magnetic field computations of the magnetic circuits with permanent magnets by infinite element method

    International Nuclear Information System (INIS)

    Hahn, Song Yop

    1985-01-01

    A method employing infinite elements is described for the magnetic field computations of magnetic circuits with permanent magnets. The system stiffness matrix is derived by a variational approach, while the interfacial boundary conditions between the finite element regions and the infinite element regions are dealt with using a collocation method. The proposed method is applied to simple linear problems, and the numerical results are compared with those of the standard finite element method and the analytic solutions. It is observed that the proposed method gives more accurate results than the standard finite element method for the same computing effort. (Author)

  6. Nurturing a growing field: Computers & Geosciences

    Science.gov (United States)

    Mariethoz, Gregoire; Pebesma, Edzer

    2017-10-01

    Computational issues are becoming increasingly critical for virtually all fields of geoscience. This includes the development of improved algorithms and models, strategies for implementing high-performance computing, or the management and visualization of the large datasets provided by an ever-growing number of environmental sensors. Such issues are central to scientific fields as diverse as geological modeling, Earth observation, geophysics or climatology, to name just a few. Related computational advances, across a range of geoscience disciplines, are the core focus of Computers & Geosciences, which is thus a truly multidisciplinary journal.

  7. Spherical near-field antenna measurements — The most accurate antenna measurement technique

    DEFF Research Database (Denmark)

    Breinbjerg, Olav

    2016-01-01

    The spherical near-field antenna measurement technique combines several advantages and generally constitutes the most accurate technique for experimental characterization of radiation from antennas. This paper/presentation discusses these advantages, briefly reviews the early history and present...

  8. How accurate are adolescents in portion-size estimation using the computer tool young adolescents' nutrition assessment on computer (YANA-C)?

    OpenAIRE

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-01-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amou...

  9. Fast and accurate algorithm for the computation of complex linear canonical transforms.

    Science.gov (United States)

    Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus

    2010-09-01

    A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.
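
    The chirp-FFT-chirp structure named in the abstract can be sketched for a real-parameter LCT as follows; sampling, scaling, and the complex-parameter and space-bandwidth machinery of the actual algorithm are deliberately glossed over here:

```python
import numpy as np

def lct_chirp_fft_chirp(f, a, b, d, dt=0.05):
    """Illustrative discrete linear canonical transform for a real ABCD
    matrix [[a, b], [c, d]] with ad - bc = 1 and b != 0, computed as
    input chirp multiplication -> Fourier transform -> output chirp."""
    n = len(f)
    t = (np.arange(n) - n // 2) * dt                    # centered input grid
    u = np.fft.fftshift(np.fft.fftfreq(n, d=dt)) * b    # output grid, scaled by b
    g = f * np.exp(1j * np.pi * (a / b) * t**2)         # input chirp
    G = np.fft.fftshift(np.fft.fft(np.fft.ifftshift(g))) * dt
    return np.exp(1j * np.pi * (d / b) * u**2) * G / np.sqrt(1j * b), u

# Sanity check: the matrix [[0, 1], [-1, 0]] reduces the LCT to a Fourier
# transform, and a Gaussian is its own Fourier transform.
t = (np.arange(256) - 128) * 0.05
gauss = np.exp(-np.pi * t**2)
F, u = lct_chirp_fft_chirp(gauss, a=0.0, b=1.0, d=0.0)
```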

  10. Computation of Electromagnetic Fields Scattered From Dielectric Objects of Uncertain Shapes Using MLMC

    KAUST Repository

    Litvinenko, Alexander

    2016-01-06

    Simulators capable of computing scattered fields from objects of uncertain shapes are highly useful in electromagnetics and photonics, where device designs are typically subject to fabrication tolerances. Knowledge of statistical variations in scattered fields is useful in ensuring error-free functioning of devices. Oftentimes such simulators use a Monte Carlo (MC) scheme to sample the random domain, where the variables parameterize the uncertainties in the geometry. At each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver is executed to compute the scattered fields. However, to obtain accurate statistics of the scattered fields, the number of MC samples has to be large. This significantly increases the total execution time. In this work, to address this challenge, the Multilevel MC (MLMC [1]) scheme is used together with a (deterministic) surface integral equation solver. The MLMC achieves a higher efficiency by balancing the statistical errors due to sampling of the random domain and the numerical errors due to discretization of the geometry at each of these samples. Error balancing results in a smaller number of samples requiring coarser discretizations. Consequently, total execution time is significantly shortened.

  11. Computation of Electromagnetic Fields Scattered From Dielectric Objects of Uncertain Shapes Using MLMC

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    Simulators capable of computing scattered fields from objects of uncertain shapes are highly useful in electromagnetics and photonics, where device designs are typically subject to fabrication tolerances. Knowledge of statistical variations in scattered fields is useful in ensuring error-free functioning of devices. Oftentimes such simulators use a Monte Carlo (MC) scheme to sample the random domain, where the variables parameterize the uncertainties in the geometry. At each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver is executed to compute the scattered fields. However, to obtain accurate statistics of the scattered fields, the number of MC samples has to be large. This significantly increases the total execution time. In this work, to address this challenge, the Multilevel MC (MLMC [1]) scheme is used together with a (deterministic) surface integral equation solver. The MLMC achieves a higher efficiency by “balancing” the statistical errors due to sampling of the random domain and the numerical errors due to discretization of the geometry at each of these samples. Error balancing results in a smaller number of samples requiring coarser discretizations. Consequently, total execution time is significantly shortened.
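
    The balancing idea is independent of the electromagnetic solver and can be sketched generically; the `solve` routine below is a stand-in toy model, not the surface integral equation solver of the abstract:

```python
import numpy as np

rng = np.random.default_rng(1)

def solve(samples, level):
    """Stand-in for the deterministic solver at mesh level `level`:
    a toy quantity of interest plus an O(h) discretization error."""
    h = 2.0 ** (-level)
    return np.sin(samples) + h * np.cos(3.0 * samples)

def mlmc_estimate(n_per_level):
    """Multilevel MC telescoping sum E[Q_L] = E[Q_0] + sum_l E[Q_l - Q_{l-1}]:
    many cheap coarse samples, few expensive fine ones, with the same
    random inputs used on both levels of each difference."""
    total = 0.0
    for level, n in enumerate(n_per_level):
        xs = rng.normal(size=n)               # random shape parameters
        fine = solve(xs, level)
        coarse = solve(xs, level - 1) if level > 0 else np.zeros(n)
        total += np.mean(fine - coarse)
    return total

print(mlmc_estimate([4000, 1000, 250]))       # fewer samples on finer levels
```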

  12. A computational study of the near-field generation and decay of wingtip vortices

    International Nuclear Information System (INIS)

    Craft, T.J.; Gerasimov, A.V.; Launder, B.E.; Robinson, C.M.E.

    2006-01-01

    The numerical prediction of the downstream trailing vortex shed from an aircraft wingtip is a particularly challenging CFD task because, besides predicting the development of the strong vortex itself, one needs to accurately compute the flow over the wing to resolve the boundary layer roll-up and shedding which provide the initial conditions for the free vortex. Computations are here reported of the flow over a NACA 0012 half-wing with rounded wing tip and the near-field wake as measured by [Chow, J.S., Zilliac, G., Bradshaw, P., 1997. Turbulence measurements in the near-field of a wingtip vortex. NASA Tech Mem 110418, NASA]. The aim is to assess the performance of two turbulence models which, in principle, might be seen as capable of resolving both the three-dimensional boundary layer on the wing and the generation and near-field decay of the strongly accelerated vortex that develops from the wingtip. Results using linear and non-linear eddy-viscosity models are presented, but both exhibit a far too rapid decay of the vortex core. Only a stress-transport (or second-moment) model that satisfies the 'two-component limit' [Lumley, J.L., 1978. Computational modelling of turbulent flows. Adv. Appl. Mech. 18, 123-176] reproduces the principal features found in the experimental measurements

  13. Machine learning of accurate energy-conserving molecular force fields

    Science.gov (United States)

    Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert

    2017-01-01

    Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076

  14. Computers for lattice field theories

    International Nuclear Information System (INIS)

    Iwasaki, Y.

    1994-01-01

    Parallel computers dedicated to lattice field theories are reviewed with emphasis on the three recent projects, the Teraflops project in the US, the CP-PACS project in Japan and the 0.5-Teraflops project in the US. Some new commercial parallel computers are also discussed. Recent development of semiconductor technologies is briefly surveyed in relation to possible approaches toward Teraflops computers. (orig.)

  15. An Efficient Approach for Fast and Accurate Voltage Stability Margin Computation in Large Power Grids

    Directory of Open Access Journals (Sweden)

    Heng-Yi Su

    2016-11-01

    This paper proposes an efficient approach for the computation of the voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to voltage collapse phenomena. The proposed approach is based on the impedance-match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with a cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
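
    The extrapolation half of such a scheme can be illustrated with a toy two-bus Thevenin model: fit the sampled P-V points and extrapolate to the nose of the PV curve, whose distance from the operating point is the loadability margin. This is a deliberately simplified stand-in (polynomial fit, synthetic data) for the paper's cubic-spline/continuation machinery:

```python
import numpy as np

# Toy PV curve: for a two-bus Thevenin model, P(V) = V * (E - V) / X.
E, X = 1.0, 0.2
V = np.linspace(0.95, 0.80, 6)      # sampled voltages, light to heavier load
P = V * (E - V) / X                  # corresponding load powers

coef = np.polyfit(V, P, 3)           # fit P(V) near the operating region
roots = np.roots(np.polyder(coef))   # candidates for dP/dV = 0 (the nose)
v_nose = roots[np.isreal(roots)].real
v_nose = v_nose[(v_nose > 0.0) & (v_nose < 1.0)][0]
p_nose = np.polyval(coef, v_nose)

print(f"estimated margin: {p_nose - P[0]:.3f} p.u.")  # ~1.0125 for this toy model
```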

  16. Accurate calculation of field and carrier distributions in doped semiconductors

    Directory of Open Access Journals (Sweden)

    Wenji Yang

    2012-06-01

    We use the numerical squeezing algorithm (NSA) combined with the shooting method to accurately calculate the built-in fields and carrier distributions in doped silicon films (SFs) in the micron and sub-micron thickness range, and results are presented in graphical form for a variety of doping profiles under different boundary conditions. As a complementary approach, we also present the methods and the results of the inverse problem (IVP): finding the doping profile in the SFs for a given field distribution. The solution of the IVP provides an approach to arbitrarily design the field distribution in SFs, which is very important for low-dimensional (LD) systems and device design. Furthermore, the solution of the IVP is both direct and straightforward for one-, two-, and three-dimensional semiconductor systems. With current efforts focused on LD physics, knowledge of the field and carrier distribution details in LD systems will facilitate research on other aspects, and hence the current work provides a platform for those studies.

  17. Accurate computations of monthly average daily extraterrestrial irradiation and the maximum possible sunshine duration

    International Nuclear Information System (INIS)

    Jain, P.C.

    1985-12-01

    The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. They are generally calculated by solar scientists and engineers each time they are needed, often by approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all latitudes from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell-Stokes sunshine recorder are also computed and presented. These tables avoid the need for repetitive and approximate calculations and serve as a useful ready reference providing accurate values to solar energy scientists and engineers
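
    Spencer's series are compact enough to reproduce directly; the sketch below computes the daily extraterrestrial irradiation H0 and the maximum possible sunshine duration for a given latitude and day of year (the solar constant value and the day-number convention are assumptions):

```python
import numpy as np

I_SC = 1367.0  # solar constant, W/m^2 (assumed value)

def spencer_declination_e0(day_of_year):
    """Spencer (1971) series for the solar declination (rad) and the
    eccentricity correction factor E0 (dimensionless)."""
    g = 2.0 * np.pi * (day_of_year - 1) / 365.0
    dec = (0.006918 - 0.399912 * np.cos(g) + 0.070257 * np.sin(g)
           - 0.006758 * np.cos(2 * g) + 0.000907 * np.sin(2 * g)
           - 0.002697 * np.cos(3 * g) + 0.00148 * np.sin(3 * g))
    e0 = (1.000110 + 0.034221 * np.cos(g) + 0.001280 * np.sin(g)
          + 0.000719 * np.cos(2 * g) + 0.000077 * np.sin(2 * g))
    return dec, e0

def daily_h0_and_daylength(lat_deg, day_of_year):
    """Daily extraterrestrial irradiation on a horizontal plane (J/m^2)
    and maximum possible sunshine duration (hours)."""
    phi = np.radians(lat_deg)
    dec, e0 = spencer_declination_e0(day_of_year)
    # Sunset hour angle; the clip handles polar day and polar night.
    ws = np.arccos(np.clip(-np.tan(phi) * np.tan(dec), -1.0, 1.0))
    h0 = (24 * 3600 / np.pi) * I_SC * e0 * (
        np.cos(phi) * np.cos(dec) * np.sin(ws) + ws * np.sin(phi) * np.sin(dec))
    return h0, 2.0 * np.degrees(ws) / 15.0

print(daily_h0_and_daylength(40.0, 172))  # ~June 21 at 40 deg N: ~41.9 MJ/m^2, ~14.8 h
```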

  18. Visualizing the Computational Intelligence Field

    NARCIS (Netherlands)

    L. Waltman (Ludo); J.H. van den Berg (Jan); U. Kaymak (Uzay); N.J.P. van Eck (Nees Jan)

    2006-01-01

    In this paper, we visualize the structure and the evolution of the computational intelligence (CI) field. Based on our visualizations, we analyze the way in which the CI field is divided into several subfields. The visualizations provide insight into the characteristics of each subfield

  19. Accurate three dimensional characterization of ultrasonic sound fields (by computer controlled rotational scanning)

    International Nuclear Information System (INIS)

    Gundtoft, H.E.; Nielsen, T.

    1981-07-01

    A rotational scanning system has recently been developed at Risoe National Laboratory. It allows sound fields from ultrasonic transducers to be examined in 3 dimensions. Using different calculation and plotting programs, any section in the sound field can be plotted. Results from examination of transducers for automatic inspection are presented. (author)

  20. Fast and accurate three-dimensional point spread function computation for fluorescence microscopy.

    Science.gov (United States)

    Li, Jizhou; Xue, Feng; Blu, Thierry

    2017-06-01

    The point spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance of 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way of performing the calculation. The explicit approximation error in terms of the parameters is given numerically. Experiments demonstrate that the proposed approach achieves the same accuracy in significantly less computational time than current state-of-the-art techniques. This approach can also be extended to other microscopy PSF models.
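
    The basis idea can be tried on any radial profile: approximate it by least squares in a set of rescaled J0 functions, after which integrals against the basis admit closed forms. Everything below (the target profile, the scalings) is an illustrative assumption, not the paper's optimized construction:

```python
import numpy as np
from scipy.special import j0

rho = np.linspace(0.0, 1.0, 200)
target = np.exp(-2.0 * rho**2) * np.cos(4.0 * rho)   # stand-in radial profile

scales = np.linspace(0.5, 25.0, 12)                  # assumed basis scalings s_m
B = np.stack([j0(s * rho) for s in scales], axis=1)  # 200 x 12 design matrix
coef, *_ = np.linalg.lstsq(B, target, rcond=None)    # least-squares coefficients

print(f"max abs error: {np.max(np.abs(B @ coef - target)):.2e}")
```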

  1. Time-Accurate Simulations of Synthetic Jet-Based Flow Control for An Axisymmetric Spinning Body

    National Research Council Canada - National Science Library

    Sahu, Jubaraj

    2004-01-01

    .... A time-accurate Navier-Stokes computational technique has been used to obtain numerical solutions for the unsteady jet-interaction flow field for a spinning projectile at a subsonic speed, Mach...

  2. Optimization of the GBMV2 implicit solvent force field for accurate simulation of protein conformational equilibria.

    Science.gov (United States)

    Lee, Kuo Hao; Chen, Jianhan

    2017-06-15

    Accurate treatment of the solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as using the generalized Born (GB) class of models, arguably provides an optimal balance between computational efficiency and physical accuracy. Yet, GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of cancellation of errors. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage the improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias, and can generate structural ensembles of several intrinsically disordered proteins of various lengths that seem highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.

  3. Three dimensional field computation

    International Nuclear Information System (INIS)

    Trowbridge, C.W.

    1981-06-01

    Recent research work carried out at Rutherford and Appleton Laboratories into the Computation of Electromagnetic Fields is summarised. The topics covered include algorithms for integral and differential methods for the solution of 3D magnetostatic fields, comparison of results with experiment and an investigation into the strengths and weaknesses of both methods for an analytic problem. The paper concludes with a brief summary of the work in progress on the solution of 3D eddy currents using differential finite elements. (author)

  4. Methods for Computing Accurate Atomic Spin Moments for Collinear and Noncollinear Magnetism in Periodic and Nonperiodic Materials.

    Science.gov (United States)

    Manz, Thomas A; Sholl, David S

    2011-12-13

    The partitioning of electron spin density among atoms in a material gives atomic spin moments (ASMs), which are important for understanding magnetic properties. We compare ASMs computed using different population analysis methods and introduce a method for computing density derived electrostatic and chemical (DDEC) ASMs. Bader and DDEC ASMs can be computed for periodic and nonperiodic materials with either collinear or noncollinear magnetism, while natural population analysis (NPA) ASMs can be computed for nonperiodic materials with collinear magnetism. Our results show Bader, DDEC, and (where applicable) NPA methods give similar ASMs, but different net atomic charges. Because they are optimized to reproduce both the magnetic field and the chemical states of atoms in a material, DDEC ASMs are especially suitable for constructing interaction potentials for atomistic simulations. We describe the computation of accurate ASMs for (a) a variety of systems using collinear and noncollinear spin DFT, (b) highly correlated materials (e.g., magnetite) using DFT+U, and (c) various spin states of ozone using coupled cluster expansions. The computed ASMs are in good agreement with available experimental results for a variety of periodic and nonperiodic materials. Examples considered include the antiferromagnetic metal organic framework Cu3(BTC)2, several ozone spin states, mono- and binuclear transition metal complexes, ferri- and ferro-magnetic solids (e.g., Fe3O4, Fe3Si), and simple molecular systems. We briefly discuss the theory of exchange-correlation functionals for studying noncollinear magnetism. A method for finding the ground state of systems with highly noncollinear magnetism is introduced. We use these methods to study the spin-orbit coupling potential energy surface of the single molecule magnet Fe4C40H52N4O12, which has highly noncollinear magnetism, and find that it contains unusual features that give a new interpretation to experimental data.

  5. Computation within the auxiliary field approach

    International Nuclear Information System (INIS)

    Baeurle, S.A.

    2003-01-01

    Recently, the classical auxiliary field methodology has been developed as a new simulation technique for performing calculations within the framework of classical statistical mechanics. Since the approach suffers from a sign problem, a judicious choice of the sampling algorithm, allowing fast statistical convergence and efficient generation of field configurations, is of fundamental importance for a successful simulation. In this paper we focus on the computational aspects of this simulation methodology. We introduce two different types of algorithms: the single-move auxiliary field Metropolis Monte Carlo algorithm, and two new classes of force-based algorithms which enable multiple-move propagation. In addition, to further optimize the sampling, we describe a preconditioning scheme which permits each field degree of freedom to be treated individually with regard to its evolution through the auxiliary field configuration space. Finally, we demonstrate the validity and assess the competitiveness of these algorithms on a representative practical example. We believe that they may also provide an interesting possibility for enhancing the computational efficiency of other auxiliary field methodologies

  6. Accurate Computation of Reduction Potentials of 4Fe−4S Clusters Indicates a Carboxylate Shift in Pyrococcus furiosus Ferredoxin

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Ooi, Bee Lean; Christensen, Hans Erik Mølager

    2007-01-01

    This work describes the computation and accurate reproduction of subtle shifts in reduction potentials for two mutants of the iron-sulfur protein Pyrococcus furiosus ferredoxin. The computational models involved only first-sphere ligands and differed with respect to one ligand, either acetate (as...

  7. An emergency computation model for the wind field and diffusion during accidental nuclear pollutants releases

    International Nuclear Information System (INIS)

    Yoshikawa, T.; Kimura, F.; Koide, T.; Kurita, S.

    1990-01-01

    Since 1986, a simple computation model for nuclear accidents has been operating in the emergency information center of the Japan Agency for Science and Technology. It was developed by introducing the variational method for wind and a random-walk particle model for diffusion on the 50-100 km scale. Furthermore, we developed a new model with dynamic equations and a diffusion equation to predict the wind and diffusion more accurately, including local thermal convection. The momentum equation and the continuity equation are solved numerically under nonhydrostatic and incompressible conditions, using a finite difference technique. Then, the equation of thermal energy conservation is solved for the potential temperature in the predicted wind field at every time step. The diffusion of nuclear pollutants is computed numerically in the predicted wind field, using diffusion coefficients obtained from the predictive dynamic equations. These computations were verified against meteorological surveys and gas tracer diffusion experiments over flat land, along a sea shore and over a mountainous area. Horizontal circulations and vertical convections can be computed with any mesh size from several tens of meters to several kilometers, whereas small vertical convections of less than about 1 km cannot be represented with the former hydrostatic circulation models. (author)
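
    The random-walk particle idea is easy to state: each particle is advected by the local wind and receives an independent Gaussian kick whose variance grows with the eddy diffusivity. A minimal sketch with a uniform wind (all numbers are illustrative assumptions, not the agency model's parameters):

```python
import numpy as np

rng = np.random.default_rng(7)

def advect_diffuse(n_particles, n_steps, dt, wind, K):
    """Random-walk particle model: advection by the wind field plus
    Gaussian kicks of std sqrt(2*K*dt) per step (K = eddy diffusivity).
    `wind(x)` returns the local wind vector at particle positions x."""
    x = np.zeros((n_particles, 2))                 # release at the origin
    for _ in range(n_steps):
        x += wind(x) * dt
        x += rng.normal(scale=np.sqrt(2.0 * K * dt), size=x.shape)
    return x

# Uniform 3 m/s westerly wind, K = 50 m^2/s, one hour in 60 s steps.
plume = advect_diffuse(10000, 60, 60.0, lambda x: np.array([3.0, 0.0]), 50.0)
print(plume.mean(axis=0))   # centroid ~ (10800 m, 0)
print(plume.std(axis=0))    # spread ~ sqrt(2*K*t) = 600 m in each direction
```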

  8. Entertainment computing, social transformation and the quantum field

    OpenAIRE

    Rauterberg, G.W.M.; Nijholt, A.; Reidsma, D.; Hondorp, H.

    2009-01-01

    Entertainment computing is on its way to becoming an established academic discipline. The scope of entertainment computing is quite broad (see the scope of the international journal Entertainment Computing). One unifying idea in this diverse community of entertainment researchers and developers might be a normative position to enhance human living through social transformation. One possible option in this direction is a shared 'conscious' field. Several ideas about a new kind of field based on qu...

  9. Computation of Electromagnetic Fields Scattered From Dielectric Objects of Uncertain Shapes Using MLMC Center for Uncertainty

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    Simulators capable of computing scattered fields from objects of uncertain shapes are highly useful in electromagnetics and photonics, where device designs are typically subject to fabrication tolerances. Knowledge of statistical variations in scattered fields is useful in ensuring error-free functioning of devices. Oftentimes such simulators use a Monte Carlo (MC) scheme to sample the random domain, where the variables parameterize the uncertainties in the geometry. At each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver is executed to compute the scattered fields. However, to obtain accurate statistics of the scattered fields, the number of MC samples has to be large. This significantly increases the total execution time. In this work, to address this challenge, the Multilevel MC (MLMC) scheme is used together with a (deterministic) surface integral equation solver. The MLMC achieves a higher efficiency by “balancing” the statistical errors due to sampling of the random domain and the numerical errors due to discretization of the geometry at each of these samples. Error balancing results in a smaller number of samples requiring coarser discretizations. Consequently, total execution time is significantly shortened.

  10. Modified cable equation incorporating transverse polarization of neuronal membranes for accurate coupling of electric fields.

    Science.gov (United States)

    Wang, Boshuo; Aberra, Aman S; Grill, Warren M; Peterchev, Angel V

    2018-04-01

    We present a theory and computational methods to incorporate transverse polarization of neuronal membranes into the cable equation to account for the secondary electric field generated by the membrane in response to transverse electric fields. The effect of transverse polarization on nonlinear neuronal activation thresholds is quantified and discussed in the context of previous studies using linear membrane models. The response of neuronal membranes to applied electric fields is derived under two time scales and a unified solution of transverse polarization is given for spherical and cylindrical cell geometries. The solution is incorporated into the cable equation re-derived using an asymptotic model that separates the longitudinal and transverse dimensions. Two numerical methods are proposed to implement the modified cable equation. Several common neural stimulation scenarios are tested using two nonlinear membrane models to compare thresholds of the conventional and modified cable equations. The implementations of the modified cable equation incorporating transverse polarization are validated against previous results in the literature. The test cases show that transverse polarization has limited effect on activation thresholds. The transverse field only affects thresholds of unmyelinated axons for short pulses and in low-gradient field distributions, whereas myelinated axons are mostly unaffected. The modified cable equation captures the membrane's behavior on different time scales and models more accurately the coupling between electric fields and neurons. It addresses the limitations of the conventional cable equation and allows sound theoretical interpretations. The implementation provides simple methods that are compatible with current simulation approaches to study the effect of transverse polarization on nonlinear membranes. The minimal influence by transverse polarization on axonal activation thresholds for the nonlinear membrane models indicates that

  11. Temperature Field Accurate Modeling and Cooling Performance Evaluation of Direct-Drive Outer-Rotor Air-Cooling In-Wheel Motor

    Directory of Open Access Journals (Sweden)

    Feng Chai

    2016-10-01

    High power density outer-rotor motors commonly use water or oil cooling. A reasonable thermal design for outer-rotor air-cooling motors can effectively enhance the power density without a fluid circulating device. Research on the heat dissipation mechanism of an outer-rotor air-cooling motor can provide guidelines for the selection of the suitable cooling mode and the design of the cooling structure. This study investigates the temperature field of the motor through computational fluid dynamics (CFD) and presents a method to overcome the difficulties in building an accurate temperature field model. The proposed method mainly includes two aspects: a new method for calculating the equivalent thermal conductivity (ETC) of the air-gap in the laminar state, and an equivalent treatment of the thermal circuit that comprises the hub, shaft, and bearings. Using an outer-rotor air-cooling in-wheel motor as an example, the temperature field of this motor is calculated numerically using the proposed method; the results are experimentally verified. The heat transfer rate (HTR) of each cooling path is obtained using the numerical results and analytic formulas. The influences of the structural parameters on temperature increases and on the HTR of each cooling path are analyzed. Thereafter, the overload capability of the motor is analyzed in various overload conditions.

  12. Semi-analytical computation of the acoustic field of a segment of a cylindrically concave transducer in lossless and attenuating media.

    Science.gov (United States)

    Karbeyaz, Başak Ulker; Miller, Eric L; Cleveland, Robin O

    2007-02-01

    Conventional ultrasound transducers used for medical diagnosis generally consist of linearly aligned rectangular apertures with elements that are focused in one plane. While traditional beamforming is easily accomplished with such transducers, the development of quantitative, physics-based imaging methods, such as tomography, requires an accurate and computationally efficient model of the field radiated by the transducer. The field can be expressed in terms of the Helmholtz-Kirchhoff integral; however, its direct numerical evaluation is a computationally intensive task. Here, a fast semi-analytical method based on Stepanishen's spatial impulse response formulation [J. Acoust. Soc. Am. 49, 1627-1638 (1971)] is developed to compute the acoustic field of a rectangular element of cylindrically concave transducers in a homogeneous medium. The pressure field, for lossless and attenuating media, is expressed as a superposition of Bessel functions, which can be evaluated rapidly. In particular, the coefficients of the Bessel series are frequency independent and need only be evaluated once for a given transducer. A speed-up of two orders of magnitude is obtained compared to an optimized direct numerical integration. The numerical results are compared with Field II and the Fresnel approximation.

  13. A Computational Model for Real-Time Calculation of Electric Field due to Transcranial Magnetic Stimulation in Clinics

    Directory of Open Access Journals (Sweden)

    Alessandra Paffi

    2015-01-01

    The aim of this paper is to propose an approach for an accurate and fast (real-time) computation of the electric field induced inside the whole brain volume during a transcranial magnetic stimulation (TMS) procedure. The numerical solution implements the admittance method for a discretized realistic brain model derived from Magnetic Resonance Imaging (MRI). Results are in good agreement with those obtained using commercial codes and require much less computational time. Integration of the developed code with neuronavigation tools will permit real-time evaluation of the stimulated brain regions during TMS delivery, thus improving the efficacy of clinical applications.

  14. Finite element electromagnetic field computation on the Sequent Symmetry 81 parallel computer

    International Nuclear Information System (INIS)

    Ratnajeevan, S.; Hoole, H.

    1990-01-01

    Finite element field analysis algorithms lend themselves to parallelization, and this fact is exploited in this paper to implement a finite element analysis program for electromagnetic field computation on the Sequent Symmetry 81 parallel computer with three processors. In terms of waiting time, the maximum gains are to be made in matrix solution, and therefore this paper concentrates on the gains from parallelizing the solution part of finite element analysis. An outline of how parallelization could be exploited in most finite element operations is given, although the actual implementation of parallelism on the Sequent Symmetry 81 was in the sparsity computation, matrix assembly and matrix solution areas. In all cases, the algorithms were modified to suit the parallel programming application rather than allowing the compiler to parallelize existing algorithms

  15. Computation of Surface Integrals of Curl Vector Fields

    Science.gov (United States)

    Hu, Chenglie

    2007-01-01

    This article presents a way of computing a surface integral when the vector field of the integrand is a curl field. Presented in some advanced calculus textbooks such as [1], the technique, as the author experienced, is simple and applicable. The computation is based on Stokes' theorem in 3-space calculus, and thus provides not only a means to…
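
    The technique trades the surface integral for a line integral around the boundary; as a one-line worked example (standard orientation conventions assumed), take F = (-y, x, 0) over the upper unit hemisphere S:

```latex
\iint_S (\nabla \times \mathbf{F}) \cdot d\mathbf{S}
  = \oint_{\partial S} \mathbf{F} \cdot d\mathbf{r}
  = \int_0^{2\pi} (-\sin t,\ \cos t,\ 0) \cdot (-\sin t,\ \cos t,\ 0)\, dt
  = \int_0^{2\pi} 1\, dt = 2\pi .
```

    This agrees with the direct computation, since curl F = (0, 0, 2) has flux 2 times the area of the unit disk, i.e. 2*pi, through any capping surface.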

  16. A flexible and accurate digital volume correlation method applicable to high-resolution volumetric images

    Science.gov (United States)

    Pan, Bing; Wang, Bo

    2017-10-01

    Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.

  17. Approximation and Computation

    CERN Document Server

    Gautschi, Walter; Rassias, Themistocles M

    2011-01-01

    Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research including theoretic developments, new computational alg

  18. Time Accurate Unsteady Pressure Loads Simulated for the Space Launch System at a Wind Tunnel Condition

    Science.gov (United States)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.

    2015-01-01

    Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location for peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods such as determining minimized computed errors based on CFL number and sub-iterations, as well as evaluating the frequency content of the unsteady pressures and oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled the development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.

  19. Reconfigurable computing the theory and practice of FPGA-based computation

    CERN Document Server

    Hauck, Scott

    2010-01-01

    Reconfigurable Computing marks a revolutionary and hot topic that bridges the gap between the separate worlds of hardware and software design: the key feature of reconfigurable computing is its groundbreaking ability to perform computations in hardware to increase performance while retaining the flexibility of a software solution. Reconfigurable computers serve as affordable, fast, and accurate tools for developing designs ranging from single-chip architectures to multi-chip and embedded systems. Scott Hauck and Andre DeHon have assembled a group of the key experts in the fields of both hardwa

  20. Online monitoring of the two-dimensional temperature field in a boiler furnace based on acoustic computed tomography

    International Nuclear Information System (INIS)

    Zhang, Shiping; Shen, Guoqing; An, Liansuo; Niu, Yuguang

    2015-01-01

    Online monitoring of the temperature field is crucial to optimally adjusting combustion within a boiler. In this paper, acoustic computed tomography (CT) technology was used to obtain the temperature profile of a furnace cross-section. The physical principles behind acoustic CT, acoustic signals and time delay estimation were studied. Then, the technique was applied to a domestic 600-MW coal-fired boiler. Acoustic CT technology was used to monitor the temperature field of the cross-section in the boiler furnace, and the temperature profile was reconstructed through ART iteration. A linear frequency-sweep signal was adopted as the sound source, with a sweep range of 500 to 3000 Hz and a sweep cycle of 0.1 s. Generalized cross-correlation techniques with PHAT and ML weighting were used for time delay estimation when the boiler was in different states. Actual operation indicated that the monitored images accurately represented the combustion state of the boiler, and the acoustic CT system was determined to be accurate and reliable. - Highlights: • An online monitoring approach for the two-dimensional temperature field in a boiler furnace. • Acoustic CT technology is used to obtain the temperature profile of a furnace cross-section. • The temperature profile is reconstructed through ART iteration. • The technique is applied to a domestic 600-MW coal-fired boiler. • The monitored images accurately represent the combustion state of the boiler
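
    The ART step is a Kaczmarz-type row update; a toy sketch reconstructing a slowness (inverse sound speed) field from path-integrated times of flight (grid size, relaxation factor, and ray geometry are illustrative assumptions):

```python
import numpy as np

def art_reconstruct(A, t, n_iter=50, relax=0.5):
    """Kaczmarz-type ART: for each ray i, project the slowness estimate
    x onto the hyperplane A[i] @ x = t[i], where A[i, j] is the length
    of ray i inside cell j and t[i] its measured time of flight."""
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        for i in range(A.shape[0]):
            row = A[i]
            x += relax * (t[i] - row @ x) / (row @ row) * row
    return x

# Toy 3x3 grid crossed by 3 horizontal and 3 vertical rays (unit cell size).
true_slowness = np.array([2.0, 2.0, 2.0,
                          2.0, 3.0, 2.0,
                          2.0, 2.0, 2.0])   # slower (cooler) cell in the center
A = np.zeros((6, 9))
for k in range(3):
    A[k, 3 * k:3 * k + 3] = 1.0             # horizontal rays
    A[3 + k, k::3] = 1.0                    # vertical rays
t = A @ true_slowness                        # simulated times of flight
print(art_reconstruct(A, t).reshape(3, 3).round(2))
```

    In the furnace application the reconstructed sound speed is then mapped to temperature via the ideal-gas relation c = z√T, with the constant z set by the flue-gas composition; that mapping is the standard acoustic-pyrometry assumption, not a detail quoted from the paper.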

  1. Computational methods in several fields of radiation dosimetry

    International Nuclear Information System (INIS)

    Paretzke, Herwig G.

    2010-01-01

    Full text: Radiation dosimetry has to cope with a wide spectrum of applications and requirements in time and size. The ubiquitous presence of various radiation fields or radionuclides in the human home, working, urban or agricultural environment can lead to various dosimetric tasks, from radioecology, retrospective and predictive dosimetry, and personal dosimetry, up to measurements of radionuclide concentrations in environmental and food products and, finally, in persons and their excreta. In all these fields, measurements and computational models for the interpretation or understanding of observations are employed explicitly or implicitly. In this lecture, examples of the author's own computational models are given from the various dosimetric fields, including a) radioecology (e.g. with the code systems based on ECOSYS, which was developed well before the Chernobyl reactor accident and tested thoroughly afterwards), b) internal dosimetry (improved metabolism models based on our own data), c) external dosimetry (with the new ICRU-ICRP voxel phantom developed by our lab), d) radiation therapy (with GEANT IV as applied to mixed reactor radiation incident on individualized voxel phantoms), and e) some aspects of nanodosimetric track structure computations (not dealt with in the other presentation by this author). Finally, some general remarks are made on the high explicit or implicit importance of computational models in radiation protection and other research fields dealing with large systems, as well as on good scientific practices which should generally be followed when developing and applying such computational models.

  2. Quantum fields on the computer

    CERN Document Server

    1992-01-01

    This book provides an overview of recent progress in computer simulations of nonperturbative phenomena in quantum field theory, particularly in the context of the lattice approach. It is a collection of extensive self-contained reviews of various subtopics, including algorithms, spectroscopy, finite temperature physics, Yukawa and chiral theories, bounds on the Higgs meson mass, the renormalization group, and weak decays of hadrons.Physicists with some knowledge of lattice gauge ideas will find this book a useful and interesting source of information on the recent developments in the field.

  3. FASTSIM2: a second-order accurate frictional rolling contact algorithm

    Science.gov (United States)

    Vollebregt, E. A. H.; Wilders, P.

    2011-01-01

    In this paper we consider the frictional (tangential) steady rolling contact problem. We confine ourselves to the simplified theory, instead of using full elastostatic theory, in order to be able to compute results fast, as needed for online application in vehicle system dynamics (VSD) simulation packages. The FASTSIM algorithm is the leading technology in this field and is employed in all dominant railway vehicle system dynamics packages in the world. The main contribution of this paper is a new version, "FASTSIM2", of the FASTSIM algorithm, which is second-order accurate. This is relevant for VSD because, with the new algorithm, 16 times fewer grid points are required for sufficiently accurate computations of the contact forces. The approach is based on new insights into the characteristics of the rolling contact problem when using the simplified theory, and on taking precise care of the contact conditions in the numerical integration scheme employed.

  4. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    International Nuclear Information System (INIS)

    Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang

    2016-01-01

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. Results are shown for the ground state and some excited states; moreover, formulae are available to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n=100.

  5. Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field

    Energy Technology Data Exchange (ETDEWEB)

    Hoang, Ngoc-Tram D. [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Nguyen, Duy-Anh P. [Department of Natural Science, Thu Dau Mot University, 6, Tran Van On Street, Thu Dau Mot City, Binh Duong Province (Viet Nam); Hoang, Van-Hung [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Le, Van-Hoang, E-mail: levanhoang@tdt.edu.vn [Atomic Molecular and Optical Physics Research Group, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)

    2016-08-15

    Explicit expressions are given for analytically describing the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. Results are shown for the ground state and some excited states; moreover, formulae are available to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n=100.

  6. Electromagnetic field computation by network methods

    CERN Document Server

    Felsen, Leopold B; Russer, Peter

    2009-01-01

    This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book presents powerful new models by combining important computational methods. This is the last book of the late Leopold Felsen.

  7. Field-programmable custom computing technology architectures, tools, and applications

    CERN Document Server

    Luk, Wayne; Pocek, Ken

    2000-01-01

    Field-Programmable Custom Computing Technology: Architectures, Tools, and Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. In seven selected chapters, the book describes the latest advances in architectures, design methods, and applications of field-programmable devices for high-performance reconfigurable systems. The contributors to this work were selected from the leading researchers and practitioners in the field. It will be valuable to anyone working or researching in the field of custom computing technology. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.

  8. Multigrid methods for the computation of propagators in gauge fields

    International Nuclear Information System (INIS)

    Kalkreuter, T.

    1992-11-01

    In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions, and present an efficient algorithm for computing the averaging kernels C numerically. These kernels can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)

  9. Optimal usage of computing grid network in the fields of nuclear fusion computing task

    International Nuclear Information System (INIS)

    Tenev, D.

    2006-01-01

    Nowadays nuclear power is becoming a main source of energy. To make its usage more efficient, scientists have created complicated simulation models, which require powerful computers. Grid computing is the answer to the need for powerful and accessible computing resources. The article examines and estimates the optimal configuration of the grid environment for complicated nuclear fusion computing tasks. (author)

  10. Computer codes for shaping the magnetic field of the JINR phasotron

    International Nuclear Information System (INIS)

    Zaplatin, N.L.; Morozov, N.A.

    1983-01-01

    The computer codes providing for the shaping of the magnetic field of the JINR high-current phasotron are presented. Using these codes, control of the magnetic field mapping was realized in on-line or off-line regimes. The field parameters were then calculated, and the settings of the ferromagnetic correcting elements and trim coils were chosen. Additional computer codes were written for measurements of the horizontal component of the magnetic field. Data on the capabilities of some of the codes are presented. The codes were run on EC-1010 and CDC-6500 computers

  11. Computing discrete signed distance fields from triangle meshes

    DEFF Research Database (Denmark)

    Bærentzen, Jakob Andreas; Aanæs, Henrik

    2002-01-01

    A method for generating a discrete, signed 3D distance field is proposed. Distance fields are used in a number of contexts. In particular, the popular level set method is usually initialized by a distance field. The main focus of our work is on simplifying the computation of the sign when generating...

  12. Accurate Computation of Electric Field Enhancement Factors for Metallic Nanoparticles Using the Discrete Dipole Approximation

    Directory of Open Access Journals (Sweden)

    DePrince A

    2010-01-01

    We model the response of nanoscale Ag prolate spheroids to an external uniform static electric field using simulations based on the discrete dipole approximation, in which the spheroid is represented as a collection of polarizable subunits. We compare the results of simulations that employ subunit polarizabilities derived from the Clausius–Mossotti relation with those of simulations that employ polarizabilities that include a local environmental correction for subunits near the spheroid's surface [Rahmani et al. Opt Lett 27: 2118 (2002)]. The simulations that employ corrected polarizabilities give predictions in very good agreement with exact results obtained by solving Laplace's equation. In contrast, simulations that employ uncorrected Clausius–Mossotti polarizabilities substantially underestimate the extent of the electric field "hot spot" near the spheroid's sharp tip, and give predictions for the field enhancement factor near the tip that are 30 to 50% too small.

  13. Accurate Computation of Electric Field Enhancement Factors for Metallic Nanoparticles Using the Discrete Dipole Approximation

    Science.gov (United States)

    2010-01-01

    We model the response of nanoscale Ag prolate spheroids to an external uniform static electric field using simulations based on the discrete dipole approximation, in which the spheroid is represented as a collection of polarizable subunits. We compare the results of simulations that employ subunit polarizabilities derived from the Clausius–Mossotti relation with those of simulations that employ polarizabilities that include a local environmental correction for subunits near the spheroid’s surface [Rahmani et al. Opt Lett 27: 2118 (2002)]. The simulations that employ corrected polarizabilities give predictions in very good agreement with exact results obtained by solving Laplace’s equation. In contrast, simulations that employ uncorrected Clausius–Mossotti polarizabilities substantially underestimate the extent of the electric field “hot spot” near the spheroid’s sharp tip, and give predictions for the field enhancement factor near the tip that are 30 to 50% too small. PMID:20672062
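
    The Clausius–Mossotti prescription at the center of this comparison is simple to write down; the following is a minimal static-field coupled-dipole sketch (Gaussian units, a tiny dipole "rod", no surface correction). A moderate dielectric permittivity is used here purely for numerical stability — it is an illustrative assumption, not the Ag response modeled in the paper:

```python
import numpy as np

def clausius_mossotti(eps, d):
    """Clausius-Mossotti polarizability of a cubic subunit with lattice
    spacing d for relative permittivity eps (Gaussian units)."""
    return (3.0 * d**3 / (4.0 * np.pi)) * (eps - 1.0) / (eps + 2.0)

def dipole_field(r, p):
    """Static field at displacement r from a point dipole p."""
    dist = np.linalg.norm(r)
    rhat = r / dist
    return (3.0 * rhat * (rhat @ p) - p) / dist**3

def solve_dipoles(pos, alpha, e0):
    """Solve the coupled-dipole equations p_i = alpha*(E0 + sum_j G_ij p_j)."""
    n = len(pos)
    M = np.eye(3 * n)
    for i in range(n):
        for j in range(n):
            if i == j:
                continue
            r = pos[i] - pos[j]
            dist = np.linalg.norm(r)
            rhat = r / dist
            G = (3.0 * np.outer(rhat, rhat) - np.eye(3)) / dist**3
            M[3*i:3*i+3, 3*j:3*j+3] = -alpha * G
    return np.linalg.solve(M, np.tile(alpha * np.asarray(e0), n)).reshape(n, 3)

# A 5-subunit "rod" aligned with the applied field.
d = 1.0
pos = np.array([[0.0, 0.0, k * d] for k in range(-2, 3)])
e0 = np.array([0.0, 0.0, 1.0])
p = solve_dipoles(pos, clausius_mossotti(10.0, d), e0)

# Field enhancement just beyond the rod's tip.
probe = np.array([0.0, 0.0, 2.0 * d + 0.6])
E = e0 + sum(dipole_field(probe - q, pk) for q, pk in zip(pos, p))
print(np.linalg.norm(E))   # > 1, i.e., enhancement relative to |E0| = 1
```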

  14. Towards a scalable and accurate quantum approach for describing vibrations of molecule–metal interfaces

    Directory of Open Access Journals (Sweden)

    David M. Benoit

    2011-08-01

    We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate, and that it can be easily deployed on computational grids to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters.

  15. Systems, computer-implemented methods, and tangible computer-readable storage media for wide-field interferometry

    Science.gov (United States)

    Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)

    2012-01-01

    Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes, for each point in a two-dimensional detector array over the field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating the path length for a signal from the image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulation at every point in the two-dimensional detector array comprising the first detector and the second detector over the field of view of the image. The method then generates a wide-field data cube based on the overlaid first and second data for each point. The method can generate an image from the wide-field data cube.

  16. Relativistic force field: parametric computations of proton-proton coupling constants in (1)H NMR spectra.

    Science.gov (United States)

    Kutateladze, Andrei G; Mukhina, Olga A

    2014-09-05

    Spin-spin coupling constants in (1)H NMR carry a wealth of structural information and offer a powerful tool for deciphering molecular structures. However, accurate ab initio or DFT calculations of spin-spin coupling constants have been very challenging and expensive. Scaling of (easy) Fermi contacts, fc, especially in the context of recent findings by Bally and Rablen (Bally, T.; Rablen, P. R. J. Org. Chem. 2011, 76, 4818), offers a framework for achieving practical evaluation of spin-spin coupling constants. We report a faster and more precise parametrization approach utilizing a new basis set for hydrogen atoms, optimized in conjunction with (i) inexpensive B3LYP/6-31G(d) molecular geometries, (ii) the inexpensive 4-31G basis set for carbon atoms in fc calculations, and (iii) individual parametrization for different atom types/hybridizations, not unlike a force field in molecular mechanics, but designed for the fc's. With a training set of 608 experimental constants we achieved an rmsd < 0.19 Hz. The methodology performs very well, as we illustrate with a set of complex organic natural products, including strychnine (rmsd 0.19 Hz), morphine (rmsd 0.24 Hz), etc. This precision is achieved with much shorter computational times: accurate spin-spin coupling constants for the two conformers of strychnine were computed in parallel on two 16-core nodes of a Linux cluster within 10 min.
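
    A minimal sketch of the kind of scaling such a parametrization rests on: computed Fermi-contact values are mapped onto experimental couplings by a least-squares scale and offset fitted per atom-type/hybridization class. The arrays and fitted factors below are hypothetical placeholders, not the authors' parameters.

        import numpy as np

        # Hypothetical training data: raw computed Fermi contacts (Hz) and
        # corresponding experimental 1H-1H couplings (Hz) for one atom-type class.
        fc_computed = np.array([6.8, 2.1, 11.5, 7.3, 0.9])
        j_experimental = np.array([7.0, 2.3, 11.9, 7.5, 1.0])

        # One scale + offset per class, fitted by linear least squares: J ~ a*fc + b.
        A = np.vstack([fc_computed, np.ones_like(fc_computed)]).T
        (a, b), *_ = np.linalg.lstsq(A, j_experimental, rcond=None)

        j_predicted = a * fc_computed + b
        rmsd = np.sqrt(np.mean((j_predicted - j_experimental) ** 2))
        print(f"scale={a:.3f}, offset={b:.3f}, rmsd={rmsd:.3f} Hz")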

  17. A Fixpoint-Based Calculus for Graph-Shaped Computational Fields

    DEFF Research Database (Denmark)

    Lluch Lafuente, Alberto; Loreti, Michele; Montanari, Ugo

    2015-01-01

    …topology is represented by a graph-shaped field, namely a network with attributes on both nodes and arcs, where arcs represent interaction capabilities between nodes. We propose a calculus where computation is strictly synchronous and corresponds to sequential computations of fixpoints in the graph-shaped field. Under some conditions, those fixpoints can be computed by synchronised iterations, where in each iteration the attributes of a node are updated based on the attributes of its neighbours in the previous iteration. Basic constructs are reminiscent of the semiring μ-calculus, a semiring-valued generalisation of the modal μ-calculus, which provides a flexible mechanism to specify the neighbourhood range (according to path formulae) and the way attributes should be combined (through semiring operators). Additional control-flow constructs allow one to conveniently structure the fixpoint computations…
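
    A minimal sketch of the synchronised-iteration idea, under concrete assumptions not taken from the paper: attributes are hop distances to a set of source nodes, neighbour values are combined with the (min, +) semiring, and rounds repeat until a fixpoint is reached.

        # Synchronous fixpoint iteration on a graph-shaped field: every round,
        # each node recomputes its attribute from neighbours' previous values.
        INF = float("inf")

        def fixpoint_distances(neighbours, sources):
            """neighbours: dict node -> list of (neighbour, arc_weight)."""
            value = {n: (0 if n in sources else INF) for n in neighbours}
            while True:
                # (min, +) semiring: combine neighbour values with arc weights.
                new = {n: min([value[n]] + [value[m] + w for m, w in neighbours[n]])
                       for n in neighbours}
                if new == value:        # fixpoint reached
                    return new
                value = new

        graph = {"a": [("b", 1)], "b": [("a", 1), ("c", 2)], "c": [("b", 2)]}
        print(fixpoint_distances(graph, sources={"a"}))   # {'a': 0, 'b': 1, 'c': 3}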

  18. Efficient reconfigurable hardware architecture for accurately computing success probability and data complexity of linear attacks

    DEFF Research Database (Denmark)

    Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar

    2012-01-01

    An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm 2… The use of realistic block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between theory and practice for Matsui's Algorithm 2 for larger block sizes…

  19. Asynchronous Distributed Execution of Fixpoint-Based Computational Fields

    DEFF Research Database (Denmark)

    Lluch Lafuente, Alberto; Loreti, Michele; Montanari, Ugo

    2017-01-01

    …Computational fields are a key ingredient of aggregate programming, a promising software engineering methodology particularly relevant for the Internet of Things. In our approach, space topology is represented by a fixed graph-shaped field, namely a network with attributes on both nodes and arcs, where arcs…

  20. Computation of the magnetic field of a spectrometer in detectors region

    International Nuclear Information System (INIS)

    Zhidkov, E.P.; Yuldasheva, M.B.; Yudin, I.P.; Yuldashev, O.I.

    1995-01-01

    Computed results for the 3D magnetic field of a spectrometer intended for the investigation of hadron production of charmed particles and the identification of narrow resonances in neutron-nucleus interactions are presented. The methods used in the computations, the finite element method and the finite element method with newly suggested infinite elements, are described. For accuracy control, the computations were carried out on a sequence of three-dimensional meshes. Special attention is devoted to the behaviour of the magnetic field in the region of the basic detector (proportional chambers). The presented results can be used to estimate the field behaviour of similar spectrometer magnets. (orig.)

  1. 25th Annual International Symposium on Field-Programmable Custom Computing Machines

    CERN Document Server

    The IEEE Symposium on Field-Programmable Custom Computing Machines is the original and premier forum for presenting and discussing new research related to computing that exploits the unique features and capabilities of FPGAs and other reconfigurable hardware. Over the past two decades, FCCM has been the place to present papers on architectures, tools, and programming models for field-programmable custom computing machines as well as applications that use such systems.

  2. Precise and Fast Computation of the Gravitational Field of a General Finite Body and Its Application to the Gravitational Study of Asteroid Eros

    International Nuclear Information System (INIS)

    Fukushima, Toshio

    2017-01-01

    In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.
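
    A minimal sketch of the final step described above, obtaining the acceleration by numerically differentiating the potential with central differences. A point-mass potential stands in for the numerically integrated one, and the step size is illustrative.

        import numpy as np

        def potential(r):
            """Stand-in potential: point mass with GM = 1 (the real method
            would evaluate the split-quadrature surface integrals here)."""
            return -1.0 / np.linalg.norm(r)

        def acceleration(r, h=1e-5):
            """Central-difference gradient: a = -grad(Phi)."""
            a = np.zeros(3)
            for i in range(3):
                e = np.zeros(3); e[i] = h
                a[i] = -(potential(r + e) - potential(r - e)) / (2.0 * h)
            return a

        r = np.array([1.0, 0.5, 0.2])
        print(acceleration(r))     # ~ -r / |r|^3 for the point-mass case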

  3. Precise and Fast Computation of the Gravitational Field of a General Finite Body and Its Application to the Gravitational Study of Asteroid Eros

    Energy Technology Data Exchange (ETDEWEB)

    Fukushima, Toshio, E-mail: Toshio.Fukushima@nao.ac.jp [National Astronomical Observatory/SOKENDAI, Ohsawa, Mitaka, Tokyo 181-8588 (Japan)

    2017-10-01

    In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.

  4. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
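
    A minimal sketch of the variational workflow the abstract describes, shown for a toy Hamiltonian: represent the Hamiltonian on a finite set of states and diagonalize; with a genuine basis-set projection the Rayleigh-Ritz eigenvalues are upper bounds on the true energies. The 1D harmonic oscillator below stands in for the lattice field Hamiltonian.

        import numpy as np

        # Toy Hamiltonian H = -1/2 d^2/dx^2 + 1/2 x^2 (hbar = m = omega = 1),
        # discretized on a grid; the exact spectrum is 0.5, 1.5, 2.5, ...
        n, L = 400, 20.0
        x = np.linspace(-L / 2, L / 2, n)
        dx = x[1] - x[0]

        kinetic = (-0.5 / dx**2) * (np.diag(np.ones(n - 1), 1)
                                    + np.diag(np.ones(n - 1), -1)
                                    - 2.0 * np.eye(n))
        H = kinetic + np.diag(0.5 * x**2)

        # Diagonalize in the finite representation; errors shrink as n grows
        # and can be bounded by Temple-style formulas as in the abstract.
        print(np.linalg.eigvalsh(H)[:3])   # ~ [0.5, 1.5, 2.5]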

  5. Hamiltonian lattice field theory: Computer calculations using variational methods

    International Nuclear Information System (INIS)

    Zako, R.L.

    1991-01-01

    A variational method is developed for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and presents numerical results for simple quantum mechanical systems

  6. Experiment and theory at the convergence limit: accurate equilibrium structure of picolinic acid by gas-phase electron diffraction and coupled-cluster computations.

    Science.gov (United States)

    Vogt, Natalja; Marochkin, Ilya I; Rykov, Anatolii N

    2018-04-18

    The accurate molecular structure of picolinic acid has been determined from experimental data and computed at the coupled cluster level of theory. Only one conformer, with the O=C-C-N and H-O-C=O fragments in antiperiplanar (ap) positions, ap-ap, has been detected under the conditions of the gas-phase electron diffraction (GED) experiment (T_nozzle = 375(3) K). The semiexperimental equilibrium structure, r_e^se, of this conformer has been derived from the GED data taking into account the anharmonic vibrational effects estimated from the ab initio force field. The equilibrium structures of the two lowest-energy conformers, ap-ap and ap-sp (with the synperiplanar H-O-C=O fragment), have been fully optimized at the CCSD(T)_ae level of theory in conjunction with the triple-ζ basis set (cc-pwCVTZ). The quality of the optimized structures has been improved by extrapolation to the quadruple-ζ basis set. The high accuracy of both the GED determination and the CCSD(T) computations has been demonstrated by a correct comparison of structures having the same physical meaning. The ap-ap conformer has been found to be stabilized by the relatively strong NH-O hydrogen bond of 1.973(27) Å (GED) and predicted to be lower in energy by 16 kJ mol^-1 with respect to the ap-sp conformer, which has no hydrogen bond. The influence of this bond on the structure of picolinic acid has been analyzed within the Natural Bond Orbital model. The possibility of decarboxylation of picolinic acid has been considered in the GED analysis, but no significant amounts of pyridine and carbon dioxide could be detected. To reveal the structural changes reflecting the mesomeric and inductive effects of the carboxylic substituent, the accurate structure of pyridine has also been computed at the CCSD(T)_ae level with basis sets from triple- to 5-ζ quality. The comprehensive structure computations for pyridine as well as for…

  7. Accurate Extraction of Charge Carrier Mobility in 4-Probe Field-Effect Transistors

    KAUST Repository

    Choi, Hyun Ho; Rodionov, Yaroslav I.; Paterson, Alexandra F.; Panidi, Julianna; Saranin, Danila; Kharlamov, Nikolai; Didenko, Sergei I.; Anthopoulos, Thomas D.; Cho, Kilwon; Podzorov, Vitaly

    2018-01-01

    Charge carrier mobility is an important characteristic of organic field-effect transistors (OFETs) and other semiconductor devices. However, accurate mobility determination in FETs is frequently compromised by issues related to Schottky-barrier contact resistance, which can be efficiently addressed by measurements in 4-probe/Hall-bar contact geometry. Here, it is shown that this technique, widely used in materials science, can still lead to significant mobility overestimation due to longitudinal channel shunting caused by voltage probes in 4-probe structures. This effect is investigated numerically and experimentally in specially designed multiterminal OFETs based on optimized novel organic-semiconductor blends and bulk single crystals. Numerical simulations reveal that 4-probe FETs with long but narrow channels and wide voltage probes are especially prone to channel shunting, which can lead to mobilities overestimated by as much as 350%. In addition, the first Hall effect measurements in blended OFETs are reported, and it is shown how the Hall mobility can be affected by channel shunting. As a solution to this problem, a numerical correction factor is introduced that can be used to obtain much more accurate experimental mobilities. This methodology is relevant to the characterization of a variety of materials, including organic semiconductors, inorganic oxides, monolayer materials, as well as carbon nanotube and semiconductor nanocrystal arrays.

  8. Accurate Extraction of Charge Carrier Mobility in 4-Probe Field-Effect Transistors

    KAUST Repository

    Choi, Hyun Ho

    2018-04-30

    Charge carrier mobility is an important characteristic of organic field-effect transistors (OFETs) and other semiconductor devices. However, accurate mobility determination in FETs is frequently compromised by issues related to Schottky-barrier contact resistance, which can be efficiently addressed by measurements in 4-probe/Hall-bar contact geometry. Here, it is shown that this technique, widely used in materials science, can still lead to significant mobility overestimation due to longitudinal channel shunting caused by voltage probes in 4-probe structures. This effect is investigated numerically and experimentally in specially designed multiterminal OFETs based on optimized novel organic-semiconductor blends and bulk single crystals. Numerical simulations reveal that 4-probe FETs with long but narrow channels and wide voltage probes are especially prone to channel shunting, which can lead to mobilities overestimated by as much as 350%. In addition, the first Hall effect measurements in blended OFETs are reported, and it is shown how the Hall mobility can be affected by channel shunting. As a solution to this problem, a numerical correction factor is introduced that can be used to obtain much more accurate experimental mobilities. This methodology is relevant to the characterization of a variety of materials, including organic semiconductors, inorganic oxides, monolayer materials, as well as carbon nanotube and semiconductor nanocrystal arrays.

  9. Computation of the Magnetic Field of a Spectrometer in Detectors Region

    International Nuclear Information System (INIS)

    Zhidkov, E.P.; Yuldasheva, M.B.; Yudin, I.P.; Yuldashev, O.I.

    1994-01-01

    Computed results for the 3D magnetic field of a spectrometer intended for the investigation of hadron production of charmed particles and the identification of narrow resonances in neutron-nucleus interactions are presented. The methods used in the computations, the finite element method and the finite element method with newly suggested infinite elements, are described. For accuracy control, the computations were carried out on a sequence of three-dimensional meshes. Special attention is devoted to the behaviour of the magnetic field in the region of the basic detectors (proportional chambers). The presented results can be used to estimate the field behaviour of similar spectrometer magnets. 12 refs., 16 figs

  10. Accurate computation of Mathieu functions

    CERN Document Server

    Bibby, Malcolm M

    2013-01-01

    This lecture presents a modern approach for the computation of Mathieu functions. These functions find application in boundary value analysis such as electromagnetic scattering from elliptic cylinders and flat strips, as well as the analogous acoustic and optical problems, and many other applications in science and engineering. The authors review the traditional approach used for these functions, show its limitations, and provide an alternative "tuned" approach enabling improved accuracy and convergence. The performance of this approach is investigated for a wide range of parameters and mach…

  11. Use of time space Green's functions in the computation of transient eddy current fields

    International Nuclear Information System (INIS)

    Davey, K.; Turner, L.

    1988-01-01

    The utility of integral equations to solve eddy current problems has been borne out by numerous computations in the past few years, principally in sinusoidal steady-state problems. This paper attempts to examine the applicability of the integral approaches in both time and space for the more generic transient problem. The basic formulation for the time-space Green's function approach is laid out. A technique employing Gauss-Laguerre integration is employed to realize the temporal solution, while Gauss-Legendre integration is used to resolve the spatial field character. The technique is then applied to the fusion electromagnetic induction experiments (FELIX) cylinder experiments in both two and three dimensions. It is found that quite accurate solutions can be obtained using rather coarse time steps and very few unknowns; the three-dimensional field solution worked out in this context used basically only four unknowns. The solution appears to be somewhat sensitive to the choice of time step, a consequence of a numerical instability embedded in the Green's function near the origin
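
    A minimal sketch of the Gauss-Laguerre rule used for the temporal integration: it evaluates integrals of the form ∫_0^∞ f(t) e^(-t) dt from a few nodes and weights, exactly when f is a polynomial of degree at most 2n - 1. The integrand here is an illustrative stand-in for a transient kernel.

        import numpy as np

        # Gauss-Laguerre nodes/weights integrate  int_0^inf f(t) e^{-t} dt
        # exactly for polynomial f of degree <= 2n - 1.
        nodes, weights = np.polynomial.laguerre.laggauss(8)

        f = lambda t: t**3                  # stand-in temporal kernel
        approx = np.sum(weights * f(nodes))
        print(approx)                       # 6.0 = Gamma(4), machine precision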

  12. Field microcomputerized multichannel γ ray spectrometer based on notebook computer

    International Nuclear Information System (INIS)

    Jia Wenyi; Wei Biao; Zhou Rongsheng; Li Guodong; Tang Hong

    1996-01-01

    Field γ-ray spectrometry currently cannot measure the full γ-ray spectrum rapidly. A field microcomputerized multichannel γ-ray spectrometer based on a notebook computer is therefore described, with which the full γ-ray spectrum can be measured rapidly in the field

  13. Room Acoustical Fields

    CERN Document Server

    Mechel, Fridolin

    2013-01-01

    This book presents the theory of room acoustical fields and revises the Mirror Source Methods for practical computational use, emphasizing the wave character of acoustical fields. The higher-order methods presented include the concepts of “Mirror Point Sources” and “Corner Sources”, which allow for an excellent approximation of complex room geometries and even equipped rooms. In contrast to the classical description, this book extends the theory of sound fields, describing them by their complex sound pressure and the particle velocity. This approach enables accurate descriptions of interference and absorption phenomena.

  14. Forward Field Computation with OpenMEEG

    Directory of Open Access Journals (Sweden)

    Alexandre Gramfort

    2011-01-01

    …must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green Representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: Electroencephalography (EEG), Magnetoencephalography (MEG), Electrical Impedance Tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip has been operational since release 2.0.

  15. Automatic procedure for realistic 3D finite element modelling of human brain for bioelectromagnetic computations

    International Nuclear Information System (INIS)

    Aristovich, K Y; Khan, S H

    2010-01-01

    Realistic computer modelling of biological objects requires building very accurate and realistic computer models based on geometric and material data, and on the type and accuracy of the numerical analyses. This paper presents some of the automatic tools and algorithms that were used to build an accurate and realistic 3D finite element (FE) model of the whole brain. These models were used to solve the forward problem in magnetic field tomography (MFT) based on Magnetoencephalography (MEG). The forward problem involves modelling and computation of the magnetic fields produced by the human brain during cognitive processing. The geometric parameters of the model were obtained from accurate Magnetic Resonance Imaging (MRI) data and the material properties from Diffusion Tensor MRI (DTMRI) data. The 3D FE models of the brain built using this approach have been shown to be very accurate in terms of both geometric and material properties. The model is stored on the computer in Computer-Aided Parametrical Design (CAD) format. This allows the model to be used with a wide range of analysis methods, such as the finite element method (FEM), the Boundary Element Method (BEM), Monte-Carlo simulations, etc. The generic model building approach presented here could be used for accurate and realistic modelling of the human brain and many other biological objects.

  16. Development of highly accurate approximate scheme for computing the charge transfer integral

    Energy Technology Data Exchange (ETDEWEB)

    Pershin, Anton; Szalay, Péter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)

    2015-08-21

    The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using high-level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy splitting in dimer and the fragment charge difference methods are equivalent to the exact formulation for symmetrical displacements, they are less efficient in describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs, when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
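
    A minimal sketch of the energy-splitting-in-dimer estimate named above: for a symmetric dimer the transfer integral is approximated as half the splitting between the two relevant dimer levels. The energies below are hypothetical placeholders, not values from the paper.

        # Energy splitting in dimer (ESID): for a symmetric dimer the charge
        # transfer integral is approximated as half the splitting between
        # the two frontier dimer levels, |t| = (E2 - E1) / 2.
        def transfer_integral_esid(e1, e2):
            return abs(e2 - e1) / 2.0

        # Hypothetical dimer level energies (eV), e.g. from an electronic
        # structure run on the stacked dimer.
        E1, E2 = -0.412, -0.288
        print(f"|t| = {transfer_integral_esid(E1, E2):.3f} eV")   # 0.062 eV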

  17. Computer algebra in quantum field theory integration, summation and special functions

    CERN Document Server

    Schneider, Carsten

    2013-01-01

    The book focuses on advanced computer algebra methods and special functions that have striking applications in the context of quantum field theory. It presents the state of the art and new methods for (infinite) multiple sums, multiple integrals, in particular Feynman integrals, difference and differential equations, in the format of survey articles. The presented techniques emerge from interdisciplinary fields: mathematics, computer science and theoretical physics; the articles are written by mathematicians and physicists with the goal that both groups can learn from the other field, including…

  18. Numerical computation of space shuttle orbiter flow field

    Science.gov (United States)

    Tannehill, John C.

    1988-01-01

    A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous, chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.

  19. Calcium ions in aqueous solutions: Accurate force field description aided by ab initio molecular dynamics and neutron scattering

    Science.gov (United States)

    Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel

    2018-06-01

    We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.
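
    A minimal sketch of the charge-rescaling (electronic continuum correction) recipe the abstract refers to: ionic charges are scaled by 1/√ε_el, where ε_el ≈ 1.78 is the high-frequency dielectric constant of water, giving a factor of about 0.75. The recipe shown is the generic one; the paper's final parameters may differ.

        import math

        # Electronic continuum correction (ECC): account for electronic
        # polarization in a mean-field way by rescaling ionic charges.
        EPS_ELECTRONIC = 1.78    # high-frequency dielectric constant of water

        def ecc_charge(formal_charge):
            return formal_charge / math.sqrt(EPS_ELECTRONIC)

        print(f"Ca2+ : {ecc_charge(+2.0):+.3f} e")   # ~ +1.50 e
        print(f"Cl-  : {ecc_charge(-1.0):+.3f} e")   # ~ -0.75 e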

  20. Spectrally accurate contour dynamics

    International Nuclear Information System (INIS)

    Van Buskirk, R.D.; Marcus, P.S.

    1994-01-01

    We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use
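
    A minimal sketch of the spectral representation underlying such contour methods: a closed contour sampled at equispaced points is treated as a Fourier sum and differentiated in wavenumber space, with errors at machine precision for band-limited contours. The ellipse is illustrative.

        import numpy as np

        # Represent a closed contour z(s) = x(s) + i y(s), s in [0, 2*pi),
        # as a spectral (Fourier) sum and differentiate in wavenumber space.
        n = 64
        s = 2.0 * np.pi * np.arange(n) / n
        z = 2.0 * np.cos(s) + 1j * np.sin(s)          # illustrative ellipse

        k = np.fft.fftfreq(n, d=1.0 / n)              # integer wavenumbers
        dz_ds = np.fft.ifft(1j * k * np.fft.fft(z))   # spectral derivative

        exact = -2.0 * np.sin(s) + 1j * np.cos(s)
        print(np.max(np.abs(dz_ds - exact)))          # ~ 1e-14: spectral accuracy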

  1. Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields

    Science.gov (United States)

    Milstead, Jonathan

    The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach combines information from different disciplines. First, we make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials whose ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.
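
    A minimal sketch of the Newton polygon computation that the ramification polygon builds on: take the lower convex hull of the points (i, v_p(a_i)) for f = Σ a_i x^i with p-adic coefficient valuations. The Eisenstein example and p = 3 are illustrative.

        from fractions import Fraction

        def p_val(n, p):
            """p-adic valuation of a nonzero integer n."""
            v = 0
            while n % p == 0:
                n //= p
                v += 1
            return v

        def newton_polygon(coeffs, p):
            """Lower convex hull of (i, v_p(a_i)) for f = sum a_i x^i."""
            pts = [(i, p_val(a, p)) for i, a in enumerate(coeffs) if a != 0]
            hull = []
            for pt in pts:
                # Pop points that break lower convexity (slopes must increase).
                while len(hull) >= 2:
                    (x1, y1), (x2, y2) = hull[-2], hull[-1]
                    if Fraction(y2 - y1, x2 - x1) >= Fraction(pt[1] - y2, pt[0] - x2):
                        hull.pop()
                    else:
                        break
                hull.append(pt)
            return hull

        # Eisenstein-at-3 example f(x) = x^4 + 6x + 3: one segment, slope -1/4.
        print(newton_polygon([3, 6, 0, 0, 1], 3))   # [(0, 1), (4, 0)]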

  2. Acoustic radiosity for computation of sound fields in diffuse environments

    Science.gov (United States)

    Muehleisen, Ralph T.; Beamer, C. Walter

    2002-05-01

    The use of image and ray tracing methods (and variations thereof) for the computation of sound fields in rooms is relatively well developed. In their regime of validity, both methods work well for prediction in rooms with small amounts of diffraction and mostly specular reflection at the walls. While extensions of the methods to include diffuse reflections and diffraction have been made, they are limited at best. In the fields of illumination and computer graphics, the ray tracing and image methods are joined by another method called luminous radiative transfer, or radiosity. In radiosity, an energy balance between surfaces is computed assuming diffuse reflection at the reflective surfaces. Because the interaction between surfaces is constant, much of the computation required for sound field prediction with multiple or moving source and receiver positions can be reduced. In acoustics the radiosity method has had little attention because of the problems of diffraction and specular reflection. The utility of radiosity in acoustics and an approach to a useful development of the method for acoustics will be presented. The method looks especially useful for sound level prediction in industrial and office environments. [Work supported by NSF.]
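
    A minimal sketch of the radiosity energy balance referred to above: with form factors F and diffuse reflectances ρ, the surface radiosities B solve B = E + diag(ρ) F B, a linear system independent of source/receiver placement once F is known. The three-patch numbers are illustrative.

        import numpy as np

        # Radiosity balance: B = E + diag(rho) F B  =>  (I - diag(rho) F) B = E.
        F = np.array([[0.0, 0.6, 0.4],     # form factors F[i, j]: fraction of
                      [0.3, 0.0, 0.7],     # energy leaving patch i reaching j
                      [0.2, 0.8, 0.0]])
        rho = np.array([0.3, 0.5, 0.2])    # diffuse reflectances
        E = np.array([1.0, 0.0, 0.0])      # direct source contribution

        B = np.linalg.solve(np.eye(3) - rho[:, None] * F, E)
        print(B)                           # steady-state radiosity per patch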

  3. How accurate are adolescents in portion-size estimation using the computer tool Young Adolescents' Nutrition Assessment on Computer (YANA-C)?

    Science.gov (United States)

    Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea

    2010-06-01

    Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amounts of ten commonly consumed foods (breakfast cereals, French fries, pasta, rice, apple sauce, carrots and peas, crisps, creamy velouté, red cabbage, and peas). Two procedures were followed: (1) short-term recall: adolescents (n = 73) self-served their usual portions of the ten foods and estimated the amounts later the same day; (2) real-time perception: adolescents (n = 128) estimated two sets (different portions) of pre-weighed portions displayed near the computer. Self-served portions were, on average, 8% underestimated; significant underestimates were found for breakfast cereals, French fries, peas, and carrots and peas. Spearman's correlations between the self-served and estimated weights varied between 0.51 and 0.84, with an average of 0.72. The kappa statistics were moderate (>0.4) for all but one item. Pre-weighed portions were, on average, 15% underestimated, with significant underestimates for fourteen of the twenty portions. Photographs of food items can serve as a good aid in ranking subjects; however, to assess actual intake at a group level, underestimation must be considered.

  4. Towards accurate simulation of fringe field effects

    International Nuclear Information System (INIS)

    Berz, M.; Erdelyi, B.; Makino, K.

    2001-01-01

    In this paper, we study various fringe field effects. Previously, we showed the large impact that fringe fields can have on certain lattice scenarios of the proposed Neutrino Factory. Besides the linear design of the lattice, the effects depend strongly on the details of the field fall off. Various scenarios are compared. Furthermore, in the absence of detailed information, we study the effects for the LHC, a case where the fringe fields are known, and try to draw some conclusions for Neutrino Factory lattices

  5. Assessing smoking status in disadvantaged populations: is computer administered self report an accurate and acceptable measure?

    Directory of Open Access Journals (Sweden)

    Bryant Jamie

    2011-11-01

    Background: Self report of smoking status is potentially unreliable in certain situations and in high-risk populations. This study aimed to determine the accuracy and acceptability of computer administered self-report of smoking status among a low socioeconomic (SES) population. Methods: Clients attending a community service organisation for welfare support were invited to complete a cross-sectional touch screen computer health survey. Following survey completion, participants were invited to provide a breath sample to measure exposure to tobacco smoke in expired air. Sensitivity, specificity, positive predictive value and negative predictive value were calculated. Results: Three hundred and eighty three participants completed the health survey, and 330 (86%) provided a breath sample. Of participants included in the validation analysis, 59% reported being a daily or occasional smoker. Sensitivity was 94.4% and specificity 92.8%. The positive and negative predictive values were 94.9% and 92.0% respectively. The majority of participants reported that the touch screen survey was both enjoyable (79%) and easy (88%) to complete. Conclusions: Computer administered self report is both acceptable and accurate as a method of assessing smoking status among low SES smokers in a community setting. Routine collection of health information using touch-screen computers has the potential to identify smokers and increase provision of support and referral in the community setting.

  6. Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number

    Directory of Open Access Journals (Sweden)

    Wang Xingyuan

    2010-01-01

    This paper presents two methods for accurately computing the periodic regions' centers. One method fits the general M-sets with integer index number, the other fits the general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations that determine the periodic regions' centers. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of periodic regions' centers on the principal symmetric axis and in the principal symmetric interior. We can obtain the centers' coordinates with at least 48 significant digits after the decimal point in both real and imaginary parts by applying Newton's method to the transformed polynomial equations that determine the periodic regions' centers. In this paper, we list some centers' coordinates of the general M-sets' k-periodic regions (k = 3, 4, 5, 6) for the index numbers α = −25, −24, …, −1, all of which have high numerical accuracy.
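
    A minimal sketch of the Newton step for the classical quadratic M-set (index 2), not the generalized sets treated in the paper: centers of period-k components are roots of the critical-orbit polynomial p_k(c), and the derivative needed by Newton's method is accumulated alongside the orbit.

        def pk(c, k):
            """Critical orbit p_k(c): z_{n+1} = z_n^2 + c from z_0 = 0.
            Centers of period-k components satisfy p_k(c) = 0."""
            z = 0.0 + 0.0j
            dz = 0.0 + 0.0j
            for _ in range(k):
                dz = 2.0 * z * dz + 1.0   # d p / d c, by the chain rule
                z = z * z + c
            return z, dz

        def center(c0, k, iters=50):
            """Newton iteration on p_k(c) = 0 from an initial guess c0."""
            c = complex(c0)
            for _ in range(iters):
                z, dz = pk(c, k)
                c -= z / dz
            return c

        print(center(-1.0, 2))            # -1: center of the period-2 disk
        print(center(-0.1 + 0.75j, 3))    # ~ -0.1226 + 0.7449j: period-3 center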

  7. Accurate Assessment of Computed Order Tracking

    Directory of Open Access Journals (Sweden)

    P.N. Saavedra

    2006-01-01

    Spectral vibration analysis using the Fourier transform is the most common technique for evaluating the mechanical condition of machinery operating in a stationary regime. However, machinery operating in transient modes, such as variable speed equipment, generates spectra with distinct frequency content at each instant, and the standard approach is not directly applicable for diagnostics. The "order tracking" technique is a suitable tool for analyzing variable speed machines. We have studied computed order tracking (COT), and a new computation procedure is proposed for resolving the indeterminate results generated by the traditional method at constant speed. The effect of the assumptions inherent in COT on its accuracy was assessed using data from various simulations. The use of these simulations allowed us to determine the effect of different user-defined factors on the overall accuracy of the method: the signal and tachometric pulse sampling frequency, the method of amplitude interpolation, and the number of tachometric pulses per revolution. Tests on real data measured on the main transmissions of a mining shovel were carried out, and we concluded that the new method is appropriate for the condition monitoring of this type of machine.
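
    A minimal sketch of the core COT step: the shaft angle as a function of time (known analytically here; in practice interpolated from tachometer pulses) is used to resample the vibration signal at constant angle increments, so that orders appear at fixed spectral lines. The run-up signal is synthetic.

        import numpy as np

        # Synthetic run-up: shaft speed ramps 10 -> 20 Hz over 2 s; the
        # vibration is locked to the 3rd order of rotation.
        fs, T = 2048.0, 2.0
        t = np.arange(0.0, T, 1.0 / fs)
        angle = 2.0 * np.pi * (10.0 * t + 2.5 * t**2)   # shaft angle (rad)
        signal = np.sin(3.0 * angle)                    # 3rd-order vibration

        # Computed order tracking: resample on a uniform angle grid.
        samples_per_rev = 64
        total_revs = angle[-1] / (2.0 * np.pi)
        phi = np.linspace(0.0, angle[-1], int(total_revs * samples_per_rev))
        resampled = np.interp(phi, angle, signal)

        # In the angle domain the 3rd order sits at a fixed "order line".
        spectrum = np.abs(np.fft.rfft(resampled))
        orders = np.fft.rfftfreq(len(resampled), d=1.0 / samples_per_rev)
        print(orders[np.argmax(spectrum)])              # ~ 3.0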

  8. Comparison of Experimental Surface and Flow Field Measurements to Computational Results of the Juncture Flow Model

    Science.gov (United States)

    Roozeboom, Nettie H.; Lee, Henry C.; Simurda, Laura J.; Zilliac, Gregory G.; Pulliam, Thomas H.

    2016-01-01

    Wing-body juncture flow fields on commercial aircraft configurations are challenging to compute accurately. The NASA Advanced Air Vehicle Program's juncture flow committee is designing an experiment to provide data to improve Computational Fluid Dynamics (CFD) modeling in the juncture flow region. Preliminary design of the model was done using CFD, yet CFD tends to over-predict the separation in the juncture flow region. Risk reduction wind tunnel tests were requisitioned by the committee to obtain a better understanding of the flow characteristics of the designed models. NASA Ames Research Center's Fluid Mechanics Lab performed one of the risk reduction tests. The results of one case, accompanied by CFD simulations, are presented in this paper. Experimental results suggest that the wall-mounted wind tunnel model produces a thicker boundary layer on the fuselage than the CFD predictions, resulting in a larger wing horseshoe vortex that suppresses the side-of-body separation in the juncture flow region. Compared to the experimental results, CFD predicts that a thinner boundary layer on the fuselage generates a weaker wing horseshoe vortex, resulting in a larger side-of-body separation.

  9. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    KAUST Repository

    Ahmed, Ahfaz

    2015-03-01

    Gasoline is the most widely used fuel for light duty automobile transportation, but its molecular complexity makes it intractable to experimentally and computationally study the fundamental combustion properties. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels were determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach was compared to the real fuel properties, as well as with surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates…
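
    A minimal sketch of the regression idea, assuming each target property blends linearly in mole fraction and that fractions sum to one. The palette species, property values, weights, and targets are hypothetical placeholders, and linear RON blending is itself a simplification.

        import numpy as np
        from scipy.optimize import minimize

        # Rows: properties (H/C ratio, density kg/L, RON); columns: palette
        # species (hypothetical n-heptane, iso-octane, toluene values).
        P = np.array([[2.29, 2.25, 1.14],
                      [0.684, 0.692, 0.867],
                      [0.0, 100.0, 120.0]])
        target = np.array([2.10, 0.72, 87.0])
        weights = np.array([1.0, 1.0, 0.01])   # balance units/magnitudes

        def mismatch(x):
            return np.sum((weights * (P @ x - target)) ** 2)

        cons = {"type": "eq", "fun": lambda x: np.sum(x) - 1.0}
        res = minimize(mismatch, x0=np.full(3, 1.0 / 3.0),
                       bounds=[(0.0, 1.0)] * 3, constraints=cons)
        print(res.x)   # optimal mole fractions of the surrogate palette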

  10. Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates

    Science.gov (United States)

    Carbogno, Christian; Scheffler, Matthias

    In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude and thus enables their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD, the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
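
    A minimal sketch of the Green-Kubo expression behind such simulations, in one common convention: κ = V/(3 k_B T²) ∫_0^∞ ⟨j(0)·j(t)⟩ dt for the heat-flux density j. The flux trace below is synthetic noise standing in for MD output, so the printed value only demonstrates the bookkeeping.

        import numpy as np

        kB = 1.380649e-23                       # J/K

        def green_kubo_kappa(J, dt, volume, T):
            """kappa = V / (3 kB T^2) * integral of <j(0).j(t)> dt,
            for a heat-flux-density trace J of shape (steps, 3)."""
            n = len(J)
            acf = np.array([np.mean(np.sum(J[: n - lag] * J[lag:], axis=1))
                            for lag in range(n // 2)])
            return volume / (3.0 * kB * T**2) * np.sum(acf) * dt

        rng = np.random.default_rng(0)
        J = rng.normal(size=(4096, 3)) * 1e9    # synthetic stand-in (SI units)
        print(green_kubo_kappa(J, dt=1e-15, volume=1e-26, T=300.0))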

  11. A hybrid solution using computational prediction and measured data to accurately determine process corrections with reduced overlay sampling

    Science.gov (United States)

    Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen

    2017-03-01

    Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
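
    A minimal sketch of the hybrid idea as described: fit a smooth global model to sparse overlay measurements, then blend measured local residuals back in near the sampled sites so that localized errors survive the up-sampling. The model form, blend radius, and data are illustrative assumptions.

        import numpy as np

        rng = np.random.default_rng(1)

        # Sparse overlay measurements (x, y) -> error, with one local defect.
        pts = rng.uniform(-1.0, 1.0, size=(40, 2))
        err = 0.5 * pts[:, 0] + 0.3 * pts[:, 1] ** 2
        err[0] += 2.0                              # localized overlay error

        def design(p):
            """2nd-order polynomial design matrix for the global model."""
            x, y = p[:, 0], p[:, 1]
            return np.column_stack([np.ones_like(x), x, y, x * y, x**2, y**2])

        coef, *_ = np.linalg.lstsq(design(pts), err, rcond=None)
        residual = err - design(pts) @ coef        # what the global model misses

        def hybrid_fingerprint(grid, radius=0.15):
            """Global model plus measured residuals blended in near sites."""
            base = design(grid) @ coef
            d = np.linalg.norm(grid[:, None, :] - pts[None, :, :], axis=2)
            w = np.exp(-((d / radius) ** 2))       # influence decays with distance
            return base + w @ residual             # ~0 far from any measurement

        grid = np.stack(np.meshgrid(np.linspace(-1, 1, 5),
                                    np.linspace(-1, 1, 5)), axis=-1).reshape(-1, 2)
        print(hybrid_fingerprint(grid).round(2))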

  12. Novel computational approaches for the analysis of cosmic magnetic fields

    Energy Technology Data Exchange (ETDEWEB)

    Saveliev, Andrey [Universitaet Hamburg, Hamburg (Germany); Keldysh Institut, Moskau (Russian Federation)

    2016-07-01

    In order to give a consistent picture of cosmic, i.e. galactic and extragalactic, magnetic fields, different approaches are possible and often even necessary. Here we present three of them: First, a semianalytic analysis of the time evolution of primordial magnetic fields, from which their properties and, subsequently, the nature of present-day intergalactic magnetic fields may be deduced. Second, the use of high-performance computing infrastructure, developing powerful algorithms for (magneto-)hydrodynamic simulations and applying them to astrophysical problems. We are currently developing a code which applies kinetic schemes in massively parallel computing on high-performance multiprocessor systems in a new way to calculate both hydro- and electrodynamic quantities. Finally, as a third approach, astroparticle physics might be used, as magnetic fields leave imprints of their properties on charged particles traversing them. Here we focus on electromagnetic cascades, developing software based on CRPropa which simulates the propagation of particles from such cascades through the intergalactic medium in three dimensions. This may in particular be used to obtain information about the helicity of extragalactic magnetic fields.

  13. Magnetic fields for transporting charged beams

    International Nuclear Information System (INIS)

    Parzen, G.

    1976-01-01

    The transport of charged particle beams requires magnetic fields that must be shaped correctly and very accurately. During the last 20 years or so, many studies have been made, both analytically and through the use of computer programs, of various magnetic shapes that have proved to be useful. Many of the results for magnetic field shapes can be applied equally well to electric field shapes. A report is given which gathers together the results that have more general significance and would be useful in designing a configuration to produce a desired magnetic field shape. The field shapes studied include the fields in dipoles, quadrupoles, sextupoles, octupoles, septum magnets, combined-function magnets, and electrostatic septums. Where possible, empirical formulas are proposed, based on computer and analytical studies and on magnetic field measurements. These empirical formulas are often easier to use than analytical formulas and often include effects that are difficult to compute analytically. In addition, results given in the form of tables and graphs serve as illustrative examples. The field shapes studied include uniform fields produced by window-frame magnets, C-magnets, H-magnets, and cosine magnets; linear fields produced by various types of quadrupoles; quadratic and cubic fields produced by sextupoles and octupoles; combinations of uniform and linear fields; and septum fields with sharp boundaries

  14. Magnetic field computations for ISX using GFUN-3D

    International Nuclear Information System (INIS)

    Cain, W.D.

    1977-01-01

    This paper presents a comparison between measured magnetic fields and the magnetic fields calculated by the three-dimensional computer program GFUN-3D for the Impurity Study Experiment (ISX). Several iron models are considered, ranging in sophistication from 50 to 222 tetrahedral iron elements. The effects of air gaps and the efforts made to simulate the effects of grain orientation and packing factor are detailed. The results obtained are compared with the measured magnetic fields, and explanations are presented to account for the variations which occur

  15. Effect of computational grid on accurate prediction of a wind turbine rotor using delayed detached-eddy simulations

    Energy Technology Data Exchange (ETDEWEB)

    Bangga, Galih; Weihing, Pascal; Lutz, Thorsten; Krämer, Ewald [University of Stuttgart, Stuttgart (Germany)

    2017-05-15

    The present study focuses on the impact of the grid on accurate prediction of the MEXICO rotor under stalled conditions. Two different blade mesh topologies, O and C-H meshes, and two different grid resolutions are tested for several time step sizes. The simulations are carried out using delayed detached-eddy simulation (DDES) with two eddy-viscosity RANS turbulence models, namely Spalart-Allmaras (SA) and Menter shear stress transport (SST) k-ω. A high-order spatial discretization, the WENO (weighted essentially non-oscillatory) scheme, is used in these computations. The results are validated against measurement data with regard to the sectional loads and the chordwise pressure distributions. The C-H mesh topology is observed to give the best results when employing the SST k-ω turbulence model, but the computational cost is higher as the grid contains a wake block that increases the number of cells.

  16. Accurate and efficient computation of synchrotron radiation functions

    International Nuclear Information System (INIS)

    MacLeod, Allan J.

    2000-01-01

    We consider the computation of three functions which appear in the theory of synchrotron radiation. These are F(x) = x ∫_x^∞ K_{5/3}(y) dy, F_p(x) = x K_{2/3}(x) and G_p(x) = x^{1/3} K_{1/3}(x), where K_ν denotes a modified Bessel function. Chebyshev series coefficients are given which enable the functions to be computed with an accuracy of up to 15 significant figures
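
    A minimal sketch of a direct (non-Chebyshev) evaluation of the same three functions using modified Bessel routines and numerical quadrature, which can serve as a cross-check of tabulated coefficients:

        import numpy as np
        from scipy.integrate import quad
        from scipy.special import kv

        def F(x):
            """F(x) = x * int_x^inf K_{5/3}(y) dy (synchrotron kernel)."""
            integral, _ = quad(lambda y: kv(5.0 / 3.0, y), x, np.inf)
            return x * integral

        def Fp(x):
            return x * kv(2.0 / 3.0, x)

        def Gp(x):
            return x ** (1.0 / 3.0) * kv(1.0 / 3.0, x)

        for x in (0.1, 1.0, 5.0):
            print(f"x={x}: F={F(x):.6f}  F_p={Fp(x):.6f}  G_p={Gp(x):.6f}")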

  17. Analysis of the Extremely Low Frequency Magnetic Field Emission from Laptop Computers

    Directory of Open Access Journals (Sweden)

    Brodić Darko

    2016-03-01

    This study addresses the problem of the magnetic field emission produced by laptop computers. Although the magnetic field is spread over the entire frequency spectrum, the part most dangerous to laptop users is the frequency range from 50 to 500 Hz, commonly called the extremely low frequency magnetic field. In this frequency region the magnetic field is characterized by high peak values. To examine the influence of the laptop's magnetic field emission in the office, a specific experiment is proposed. It includes measurement of the magnetic field at six laptop positions that are in close contact with the user. The results obtained from ten different laptop computers show extremely high emission at some positions, depending on power dissipation or bad ergonomics. The experiment thus identifies these dangerous positions of magnetic field emission and suggests possible solutions.

  18. Computational strong-field quantum dynamics. Intense light-matter interactions

    International Nuclear Information System (INIS)

    Bauer, Dieter

    2017-01-01

    This graduate textbook introduces the computational techniques to study ultra-fast quantum dynamics of matter exposed to strong laser fields. Coverage includes methods to propagate wavefunctions according to the time-dependent Schroedinger, Klein-Gordon or Dirac equation, the calculation of typical observables, time-dependent density functional theory, multi-configurational time-dependent Hartree-Fock, time-dependent configuration interaction singles, the strong-field approximation, and the microscopic particle-in-cell approach.
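
    The first class of methods mentioned, wavefunction propagation, is compactly illustrated by a 1D split-operator scheme; the sketch below is a generic illustration in atomic units (the grid, soft-core potential and pulse parameters are invented for the example, and none of it is code from the book):

      # Sketch: split-operator propagation of the 1D time-dependent
      # Schroedinger equation for a model atom in a strong laser field
      # (atomic units; potential and pulse parameters are illustrative).
      import numpy as np

      N, L = 2048, 200.0
      x = np.linspace(-L / 2, L / 2, N, endpoint=False)
      dx = x[1] - x[0]
      k = 2.0 * np.pi * np.fft.fftfreq(N, d=dx)

      V0 = -1.0 / np.sqrt(x**2 + 2.0)              # soft-core Coulomb potential
      psi = np.exp(-0.5 * x**2)                    # crude initial wavepacket
      psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)  # normalize

      dt, E0, omega = 0.05, 0.05, 0.057            # step, field amplitude, frequency
      expT = np.exp(-0.5j * k**2 * dt)             # full kinetic step in k-space

      for n in range(2000):
          V = V0 + E0 * np.sin(omega * n * dt) * x   # length-gauge laser coupling
          psi *= np.exp(-0.5j * V * dt)              # half potential step
          psi = np.fft.ifft(expT * np.fft.fft(psi))  # kinetic step
          psi *= np.exp(-0.5j * V * dt)              # half potential step

      print("norm:", np.sum(np.abs(psi)**2) * dx)    # ~1 (no absorbing boundary)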

  19. Computational strong-field quantum dynamics. Intense light-matter interactions

    Energy Technology Data Exchange (ETDEWEB)

    Bauer, Dieter (ed.) [Rostock Univ. (Germany). Inst. fuer Physik]

    2017-09-01

    This graduate textbook introduces the computational techniques to study ultra-fast quantum dynamics of matter exposed to strong laser fields. Coverage includes methods to propagate wavefunctions according to the time-dependent Schroedinger, Klein-Gordon or Dirac equation, the calculation of typical observables, time-dependent density functional theory, multi-configurational time-dependent Hartree-Fock, time-dependent configuration interaction singles, the strong-field approximation, and the microscopic particle-in-cell approach.

  20. Computational strong-field quantum dynamics intense light-matter interactions

    CERN Document Server

    2017-01-01

    This graduate textbook introduces the computational techniques to study ultra-fast quantum dynamics of matter exposed to strong laser fields. Coverage includes methods to propagate wavefunctions according to the time-dependent Schrödinger, Klein-Gordon or Dirac equation, the calculation of typical observables, time-dependent density functional theory, multi-configurational time-dependent Hartree-Fock, time-dependent configuration interaction singles, the strong-field approximation, and the microscopic particle-in-cell approach.

  1. Experience with a distributed computing system for magnetic field analysis

    International Nuclear Information System (INIS)

    Newman, M.J.

    1978-08-01

    The development of a general-purpose computer system, THESEUS, is described; its initial use has been magnetic field analysis. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities; others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and the problems experienced are highlighted, together with possible future developments. (U.K.)

  2. Computer simulation of induced electric currents and fields in biological bodies by 60 Hz magnetic fields

    International Nuclear Information System (INIS)

    Xi Weiguo; Stuchly, M.A.; Gandhi, O.P.

    1993-01-01

    Possible health effects of human exposure to 60 Hz magnetic fields are a subject of increasing concern. An understanding of the coupling of electromagnetic fields to human body tissues is essential for assessment of their biological effects. A method is presented for the computerized simulation of electric currents and fields induced in the bodies of humans and rodents by power-line frequency magnetic fields. In the impedance method, the body is represented by a three-dimensional impedance network. The computational model consists of several tens of thousands of cubic numerical cells and thus represents a realistic body shape. The modelling for humans is performed with two models: a heterogeneous model based on cross-sectional anatomy and a homogeneous one using an average tissue conductivity. A summary of computed results of induced electric currents and fields is presented. It is confirmed that induced currents are lower than dangerous current levels for most environmental exposures. However, the induced current density varies greatly, with the maximum being at least 10 times larger than the average. This difference is likely to be greater when more detailed anatomy and morphology are considered. 15 refs., 2 figs., 1 tab
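
    For an order-of-magnitude feel for such results, the textbook homogeneous-disc estimate from Faraday's law gives an induced current density J(r) = σπfBr at radius r in a uniform sinusoidal field; a minimal sketch with illustrative tissue values (not values from the paper):

      # Sketch: eddy-current density induced in a homogeneous conducting
      # disc by a uniform 60 Hz magnetic field, J(r) = sigma*pi*f*B*r.
      # All numbers are illustrative, not taken from the paper.
      import math

      sigma = 0.2      # tissue conductivity, S/m (illustrative)
      f = 60.0         # frequency, Hz
      B = 1e-4         # flux density amplitude, T (0.1 mT, illustrative)
      r = 0.1          # radial distance from the loop centre, m

      J = sigma * math.pi * f * B * r   # induced current density, A/m^2
      print(f"J = {J:.2e} A/m^2")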

  3. Numerical computation of gravitational field of general extended body and its application to rotation curve study of galaxies

    Science.gov (United States)

    Fukushima, Toshio

    2017-06-01

    Reviewed are recently developed methods for the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridders' algorithm of numerical differentiation (Ridders 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the integration variable transformation to local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as $\Phi(\vec{x}) = -G \int_0^{\infty} \left( \int_{-1}^{1} \left( \int_0^{2\pi} \rho(\vec{x}+\vec{q})\, d\psi \right) d\gamma \right) q\, dq$, where $\vec{q} = q\,(\sqrt{1-\gamma^2}\cos\psi,\; \sqrt{1-\gamma^2}\sin\psi,\; \gamma)$ is the relative position vector referred to $\vec{x}$, the position vector at which the potential is evaluated. As a result, the new methods can compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential to 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs. Among them, the investigation of the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface
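
    Device (a) is simple to illustrate in isolation: the tanh-sinh substitution x = tanh((π/2) sinh t) lets the plain trapezoidal rule converge very quickly even for endpoint singularities. The sketch below is a generic implementation of the rule, not the authors' code; the step size and truncation are chosen for double precision:

      # Sketch: double exponential (tanh-sinh) quadrature on (-1, 1),
      # the Takahashi-Mori rule behind the split quadrature method in (a).
      import numpy as np

      def de_quad(f, h=0.1, n=30):
          # Abscissae cluster double-exponentially toward the endpoints;
          # keep |t| <= 3 or so: beyond that x rounds to +-1 in doubles.
          t = h * np.arange(-n, n + 1)
          x = np.tanh(0.5 * np.pi * np.sinh(t))
          w = 0.5 * np.pi * np.cosh(t) / np.cosh(0.5 * np.pi * np.sinh(t)) ** 2
          return h * np.sum(w * f(x))

      # Endpoint-singular test: integral of 1/sqrt(1-x^2) over (-1,1) is pi.
      print(de_quad(lambda x: 1.0 / np.sqrt(1.0 - x * x)), np.pi)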

  4. Program package for the computation of lenses and deflectors

    International Nuclear Information System (INIS)

    Lencova, B.; Wisselink, G.

    1990-01-01

    In this paper a set of computer programs is described for the design of electrostatic and magnetic electron lenses and of multipoles for electron microscopy and lithography. The two-dimensional field computation is performed by the finite-element method. To meet the high demands on accuracy, the programs include a variable step in the fine mesh generated by an automeshing procedure, improved methods for coefficient evaluation, a fast solution procedure for the linear equations, and modified algorithms for the computation of multipoles and electrostatic lenses. They allow fast and accurate computation of electron-optical elements. For the input and modification of data, and for the presentation of results, graphical menu-driven programs written for personal computers are used. For the computation of electron-optical properties, axial fields are used. (orig.)

  5. Improved Off-Shell Scattering Amplitudes in String Field Theory and New Computational Methods

    CERN Document Server

    Park, I Y; Bars, Itzhak

    2004-01-01

    We report on new results in Witten's cubic string field theory for the off-shell factor in the 4-tachyon amplitude that was not fully obtained explicitly before. This is achieved by completing the derivation of the Veneziano formula in the Moyal star formulation of Witten's string field theory (MSFT). We also demonstrate detailed agreement of MSFT with a number of on-shell and off-shell computations in other approaches to Witten's string field theory. We extend the techniques of computation in MSFT, and show that the j=0 representation of SL(2,R) generated by the Virasoro operators $L_0, L_{\pm 1}$ is a key structure in practical computations for generating numbers. We provide more insight into the Moyal structure that simplifies string field theory, and develop techniques that could be applied more generally, including nonperturbative processes.

  6. Transversity results and computations in symplectic field theory

    International Nuclear Information System (INIS)

    Fabert, Oliver

    2008-01-01

    Although the definition of symplectic field theory suggests that one has to count holomorphic curves in cylindrical manifolds R x V equipped with a cylindrical almost complex structure J, it is already well-known from Gromov-Witten theory that, due to the presence of multiply-covered curves, we in general cannot achieve transversality for all moduli spaces even for generic choices of J. In this thesis we treat the transversality problem of symplectic field theory in two important cases. In the first part of this thesis we are concerned with the rational symplectic field theory of Hamiltonian mapping tori, which is also called the Floer case. For this observe that in the general geometric setup for symplectic field theory, the contact manifolds can be replaced by mapping tori $M_\phi$ of symplectic manifolds $(M, \omega_M)$ with symplectomorphisms $\phi$. While the cylindrical contact homology of $M_\phi$ is given by the Floer homologies of powers of $\phi$, the other algebraic invariants of symplectic field theory for $M_\phi$ provide natural generalizations of symplectic Floer homology. For symplectically aspherical M and Hamiltonian $\phi$ we study the moduli spaces of rational curves and prove a transversality result, which does not need the polyfold theory by Hofer, Wysocki and Zehnder and allows us to compute the full contact homology of $M_\phi \cong S^1 \times M$. The second part of this thesis is devoted to the branched covers of trivial cylinders over closed Reeb orbits, which are the trivial examples of punctured holomorphic curves studied in rational symplectic field theory. Since all moduli spaces of trivial curves with virtual dimension one cannot be regular, we use obstruction bundles in order to find compact perturbations making the Cauchy-Riemann operator transversal to the zero section and show that the algebraic count of elements in the resulting regular moduli spaces is zero. Once the analytical foundations of symplectic field theory are established, our result implies that the

  7. Accurate overlaying for mobile augmented reality

    NARCIS (Netherlands)

    Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.

    1999-01-01

    Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency

  8. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models

    Science.gov (United States)

    Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T.

    2015-01-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo. PMID:26657024
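
    The shape of such a fixed linear proxy is easy to sketch; the weight and delay below are illustrative placeholders, not the coefficients fitted in the paper:

      # Sketch: an LFP proxy built as a fixed linear combination of the
      # summed absolute AMPA and GABA synaptic currents of a LIF network.
      # The weights and delay are placeholders, not the fitted values.
      import numpy as np

      def lfp_proxy(i_ampa, i_gaba, alpha=1.0, beta=1.6, delay_steps=6):
          """i_ampa, i_gaba: summed synaptic currents per time step (1D)."""
          g = np.roll(np.abs(i_gaba), delay_steps)  # delayed GABA term
          g[:delay_steps] = 0.0                     # discard wrapped samples
          return alpha * np.abs(i_ampa) - beta * g

      t = np.arange(1000)
      proxy = lfp_proxy(np.sin(0.05 * t), np.cos(0.05 * t))  # toy currents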

  9. Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.

    Directory of Open Access Journals (Sweden)

    Alberto Mazzoni

    2015-12-01

    Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology, spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.

  10. Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques

    Energy Technology Data Exchange (ETDEWEB)

    Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)

    2001-05-01

    Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre-resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre-resolution patient anatomy, it is possible to obtain a millimetre-resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre-resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, the E-vector-field distribution obtained after down-scaling with 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlations of 97% and 96%, and absolute averaged differences of 6% and 14%, respectively). (author)
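
    Of the three techniques, 'volumetric averaging' is the simplest to state: each coarse cell takes the mean of the fine-grid dielectric values it covers. A minimal sketch for the 5x reduction used here (2 mm to 1 cm); the array name and shape are assumptions for the example ('winner-takes-all' would take the most frequent tissue label instead):

      # Sketch: 'volumetric averaging' down-scaling of a dielectric
      # geometry from 2 mm to 1 cm resolution (factor 5 per axis).
      # eps_fine is an assumed (nx, ny, nz) array with sides divisible by 5.
      import numpy as np

      def downscale_average(eps_fine, f=5):
          nx, ny, nz = (s // f for s in eps_fine.shape)
          blocks = eps_fine[:nx * f, :ny * f, :nz * f].reshape(nx, f, ny, f, nz, f)
          return blocks.mean(axis=(1, 3, 5))   # mean over each f^3 block

      eps_fine = np.random.rand(100, 100, 50)   # toy 2 mm geometry
      eps_coarse = downscale_average(eps_fine)  # (20, 20, 10) at 1 cm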

  11. Computationally efficient and quantitatively accurate multiscale simulation of solid-solution strengthening by ab initio calculation

    International Nuclear Information System (INIS)

    Ma, Duancheng; Friák, Martin; Pezold, Johann von; Raabe, Dierk; Neugebauer, Jörg

    2015-01-01

    We propose an approach for the computationally efficient and quantitatively accurate prediction of solid-solution strengthening. It combines the 2-D Peierls–Nabarro model and a recently developed solid-solution strengthening model. Solid-solution strengthening is examined with Al–Mg and Al–Li as representative alloy systems, demonstrating a good agreement between theory and experiments within the temperature range in which the dislocation motion is overdamped. Through a parametric study, two guideline maps of the misfit parameters against (i) the critical resolved shear stress, $\tau_0$, at 0 K and (ii) the energy barrier, $\Delta E_b$, against dislocation motion in a solid solution with randomly distributed solute atoms are created. With these two guideline maps, $\tau_0$ at finite temperatures is predicted for other Al binary systems, and compared with available experiments, achieving good agreement.

  12. Computer simulation of ductile fracture

    International Nuclear Information System (INIS)

    Wilkins, M.L.; Streit, R.D.

    1979-01-01

    Finite difference computer simulation programs are capable of very accurate solutions to problems in plasticity with large deformations and rotations. This opens the possibility of developing models of ductile fracture by correlating experiments with equivalent computer simulations. Selected experiments were done to emphasize different aspects of the model. A difficult problem is the establishment of a fracture-size effect. This paper is a study of the strain field around notched tensile specimens of aluminum 6061-T651. A series of geometrically scaled specimens is tested to fracture. The scaled experiments are conducted for different notch radius-to-diameter ratios. The strains at fracture are determined from computer simulations. An estimate is made of the fracture-size effect.

  13. Computation of magnetic field in DC brushless linear motors built with NdFeB magnets

    International Nuclear Information System (INIS)

    Basak, A.; Shirkoohi, G.H.

    1990-01-01

    A software package based on the finite element technique has been used to compute three-dimensional magnetic fields and static forces developed in brushless d.c. linear motors. Two different types of permanent magnets, one of them the high-energy neodymium-iron-boron type, were used as the field flux source in the computer models. Motors with the same specifications as the computer models were built, and experimental results obtained from them are compared with the computed results.

  14. BQP-completeness of scattering in scalar quantum field theory

    Directory of Open Access Journals (Sweden)

    Stephen P. Jordan

    2018-01-01

    Recent work has shown that quantum computers can compute scattering probabilities in massive quantum field theories, with a run time that is polynomial in the number of particles, their energy, and the desired precision. Here we study a closely related quantum field-theoretical problem: estimating the vacuum-to-vacuum transition amplitude, in the presence of spacetime-dependent classical sources, for a massive scalar field theory in (1+1) dimensions. We show that this problem is BQP-hard; in other words, its solution enables one to solve any problem that is solvable in polynomial time by a quantum computer. Hence, the vacuum-to-vacuum amplitude cannot be accurately estimated by any efficient classical algorithm, even if the field theory is very weakly coupled, unless BQP=BPP. Furthermore, the corresponding decision problem can be solved by a quantum computer in a time scaling polynomially with the number of bits needed to specify the classical source fields, and this problem is therefore BQP-complete. Our construction can be regarded as an idealized architecture for a universal quantum computer in a laboratory system described by massive phi^4 theory coupled to classical spacetime-dependent sources.

  15. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation

    Energy Technology Data Exchange (ETDEWEB)

    Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)

    2015-01-01

    The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  16. Touchable Computing: Computing-Inspired Bio-Detection.

    Science.gov (United States)

    Chen, Yifan; Shi, Shaolong; Yao, Xin; Nakano, Tadashi

    2017-12-01

    We propose a new computing-inspired bio-detection framework called touchable computing (TouchComp). Under the rubric of TouchComp, the best solution is the cancer to be detected, the parameter space is the tissue region at high risk of malignancy, and the agents are the nanorobots loaded with contrast medium molecules for tracking purposes. Subsequently, the cancer detection procedure (CDP) can be interpreted from the computational optimization perspective: a population of externally steerable agents (i.e., nanorobots) locate the optimal solution (i.e., cancer) by moving through the parameter space (i.e., tissue under screening), whose landscape (i.e., a prescribed feature of the tissue environment) may be altered by these agents while the location of the best solution remains unchanged. One can then infer the landscape by observing the movement of the agents, applying the "seeing-is-sensing" principle. The term "touchable" emphasizes the framework's similarity to controlling by touching the screen with a finger, where the external field for controlling and tracking acts as the finger. Given this analogy, we aim to answer the following profound question: can we look to the fertile field of computational optimization algorithms for solutions to achieve effective cancer detection that are fast, accurate, and robust? Along this line of thought, we consider the classical particle swarm optimization (PSO) as an example and propose the PSO-inspired CDP, which differs from the standard PSO by taking into account realistic in vivo propagation and controlling of nanorobots. Finally, we present comprehensive numerical examples to demonstrate the effectiveness of the PSO-inspired CDP for different blood flow velocity profiles caused by tumor-induced angiogenesis. The proposed TouchComp bio-detection framework may be regarded as one form of natural computing that employs natural materials to compute.
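
    For reference, the classical PSO update on which the CDP builds takes only a few lines; the sketch below is the textbook algorithm on a toy landscape, without the in vivo propagation and steering physics that the PSO-inspired CDP adds:

      # Sketch: classical particle swarm optimization, the starting point
      # of the PSO-inspired CDP (no in vivo propagation/steering physics).
      import numpy as np

      def pso(f, dim=2, n=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(-5, 5, (n, dim))       # positions ("nanorobots")
          v = np.zeros((n, dim))
          pbest = x.copy()
          pval = np.apply_along_axis(f, 1, x)
          g = pbest[pval.argmin()]               # global best ("cancer" estimate)
          for _ in range(iters):
              r1, r2 = rng.random((2, n, dim))
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
              x = x + v
              val = np.apply_along_axis(f, 1, x)
              better = val < pval
              pbest[better], pval[better] = x[better], val[better]
              g = pbest[pval.argmin()]
          return g

      print(pso(lambda p: np.sum(p**2)))         # toy landscape, optimum at 0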

  17. Transversity results and computations in symplectic field theory

    Energy Technology Data Exchange (ETDEWEB)

    Fabert, Oliver

    2008-02-21

    Although the definition of symplectic field theory suggests that one has to count holomorphic curves in cylindrical manifolds R x V equipped with a cylindrical almost complex structure J, it is already well-known from Gromov-Witten theory that, due to the presence of multiply-covered curves, we in general cannot achieve transversality for all moduli spaces even for generic choices of J. In this thesis we treat the transversality problem of symplectic field theory in two important cases. In the first part of this thesis we are concerned with the rational symplectic field theory of Hamiltonian mapping tori, which is also called the Floer case. For this observe that in the general geometric setup for symplectic field theory, the contact manifolds can be replaced by mapping tori $M_\phi$ of symplectic manifolds $(M, \omega_M)$ with symplectomorphisms $\phi$. While the cylindrical contact homology of $M_\phi$ is given by the Floer homologies of powers of $\phi$, the other algebraic invariants of symplectic field theory for $M_\phi$ provide natural generalizations of symplectic Floer homology. For symplectically aspherical M and Hamiltonian $\phi$ we study the moduli spaces of rational curves and prove a transversality result, which does not need the polyfold theory by Hofer, Wysocki and Zehnder and allows us to compute the full contact homology of $M_\phi \cong S^1 \times M$. The second part of this thesis is devoted to the branched covers of trivial cylinders over closed Reeb orbits, which are the trivial examples of punctured holomorphic curves studied in rational symplectic field theory. Since all moduli spaces of trivial curves with virtual dimension one cannot be regular, we use obstruction bundles in order to find compact perturbations making the Cauchy-Riemann operator transversal to the zero section and show that the algebraic count of elements in the resulting regular moduli spaces is zero. Once the analytical foundations of symplectic

  18. Quantitative x-ray dark-field computed tomography

    International Nuclear Information System (INIS)

    Bech, M; Pfeiffer, F; Bunk, O; Donath, T; David, C; Feidenhans'l, R

    2010-01-01

    The basic principles of x-ray image formation in radiology have remained essentially unchanged since Roentgen first discovered x-rays over a hundred years ago. The conventional approach relies on x-ray attenuation as the sole source of contrast and draws exclusively on ray or geometrical optics to describe and interpret image formation. Phase-contrast or coherent scatter imaging techniques, which can be understood using wave optics rather than ray optics, offer ways to augment or complement the conventional approach by incorporating the wave-optical interaction of x-rays with the specimen. With a recently developed approach based on x-ray optical gratings, advanced phase-contrast and dark-field scatter imaging modalities are now in reach for routine medical imaging and non-destructive testing applications. To quantitatively assess the new potential of particularly the grating-based dark-field imaging modality, we here introduce a mathematical formalism together with a material-dependent parameter, the so-called linear diffusion coefficient, and show that this description can yield quantitative dark-field computed tomography (QDFCT) images of experimental test phantoms.

  19. Accurate metacognition for visual sensory memory representations.

    Science.gov (United States)

    Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F

    2014-04-01

    The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition--the degree of knowledge that subjects have about the correctness of their decisions--for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.

  20. Computation of demagnetizing fields and particle distribution in magnetic fluid with inhomogeneous density

    International Nuclear Information System (INIS)

    Pshenichnikov, A.F.

    2012-01-01

    A new algorithm for calculating magnetic fields in a concentrated magnetic fluid with inhomogeneous density is proposed. Inhomogeneity of the fluid is caused by magnetophoresis. In this case, the diffusion and magnetostatic parts of the problem are tightly linked together and are solved jointly. The dynamic diffusion equation is solved by the finite volume method and, to calculate the magnetic field inside the fluid, an iterative process is performed in parallel. The solution to the problem is sought in Cartesian coordinates, and the computational domain is decomposed into rectangular elements. This technique eliminates the need to solve the related boundary-value problem for magnetic fields, accelerates computations and eliminates the error caused by the finite sizes of the outer region. Formulas describing the contribution of a rectangular element to the field intensity in the case of a plane problem are given. The magnetic and concentration fields inside a magnetic fluid filling a rectangular cavity, generated under the action of a uniform external field, are calculated. - Highlights: ▶ A new algorithm for calculating the magnetic field in a concentrated magnetic fluid, accounting for magnetophoresis and particle diffusion. ▶ No related boundary-value problem needs to be solved; computations are accelerated and some errors are eliminated. ▶ The nonlinear diffusion equation is solved by the finite volume method, and the magnetic and concentration fields in the fluid are calculated for the plane case.

  1. Homogenization of linear viscoelastic three phase media: internal variable formulation versus full-field computation

    International Nuclear Information System (INIS)

    Blanc, V.; Barbie, L.; Masson, R.

    2011-01-01

    Homogenization of linear viscoelastic heterogeneous media is here extended from two-phase inclusion-matrix media to three-phase inclusion-matrix media. With each phase obeying a compressible Maxwellian behaviour, this analytic method leads to an equivalent elastic homogenization problem in the Laplace-Carson space. For some particular microstructures, such as the Hashin composite sphere assemblage, an exact solution is obtained. The inversion of the Laplace-Carson transforms of the overall stress-strain behaviour gives in such cases an internal variable formulation. As expected, the number of these internal variables and their evolution laws are modified to take into account the third phase. Moreover, evolution laws of averaged stresses and strains per phase can still be derived for three-phase media. Results of this model are compared with full-field computations on representative volume elements using the finite element method, for various concentrations and sizes of inclusion. Relaxation and creep test cases are performed in order to compare predictions of the effective response. The internal variable formulation is shown to yield accurate predictions in both cases. (authors)

  2. The accurate particle tracer code

    Science.gov (United States)

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun

    2017-11-01

    The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the Lua and HDF5 libraries are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully deployed on the world's fastest computer, the Sunway TaihuLight supercomputer, supporting the master-slave architecture of the Sunway many-core processors. Based on large-scale simulations of a runaway beam under ITER tokamak parameters, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.
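
    Among particle pushers of the kind such codes bundle, the volume-preserving Boris scheme is the canonical example; the sketch below is a generic non-relativistic version, not APT's actual implementation:

      # Sketch: one Boris step for a charged particle in E and B fields,
      # a volume-preserving pusher of the kind geometric particle codes
      # use. Generic illustration only, not APT's algorithm.
      import numpy as np

      def boris_step(x, v, E, B, q_over_m, dt):
          v_minus = v + 0.5 * q_over_m * dt * E      # half electric kick
          t = 0.5 * q_over_m * dt * B                # rotation vector
          s = 2.0 * t / (1.0 + np.dot(t, t))
          v_prime = v_minus + np.cross(v_minus, t)
          v_plus = v_minus + np.cross(v_prime, s)    # magnetic rotation
          v_new = v_plus + 0.5 * q_over_m * dt * E   # second half kick
          return x + dt * v_new, v_new

      x, v = np.zeros(3), np.array([1.0e5, 0.0, 0.0])
      E, B = np.zeros(3), np.array([0.0, 0.0, 1.0])  # uniform B along z
      for _ in range(1000):                          # electron gyromotion
          x, v = boris_step(x, v, E, B, q_over_m=1.76e11, dt=1e-12)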

  3. Manual cross check of computed dose times for motorised wedged fields

    International Nuclear Information System (INIS)

    Porte, J.

    2001-01-01

    If a mass of tissue-equivalent material is exposed in turn to wedged and open radiation fields of the same size, for equal times, it is incorrect to assume that the resultant isodose pattern will be effectively that of a wedge having half the angle of the wedged field. Computer programs have been written to address the problem of creating an intermediate wedge field, commonly known as a motorized wedge. The total exposure time is apportioned between the open and wedged fields to produce a beam modification equivalent to that of a wedged field of a given wedge angle. (author)
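
    A common first-order manual check apportions the dose between the open and wedged fields with the tangent rule, tan(θ_eff) = w · tan(θ_wedge), where w is the wedged dose fraction. The sketch below assumes that rule; it is only an approximation, which is exactly why the paper's independent cross check of computed dose times is worthwhile:

      # Sketch: tangent-rule apportionment of dose between open and
      # wedged fields for a motorized wedge. A first-order approximation
      # for a starting estimate, not a definitive calculation.
      import math

      def wedged_fraction(theta_eff_deg, theta_wedge_deg=60.0):
          """Fraction of dose delivered through the wedge (tangent rule)."""
          return (math.tan(math.radians(theta_eff_deg))
                  / math.tan(math.radians(theta_wedge_deg)))

      w = wedged_fraction(30.0)          # e.g. a 30-degree effective wedge
      print(f"wedged dose fraction: {w:.3f}, open: {1 - w:.3f}")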

  4. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    Energy Technology Data Exchange (ETDEWEB)

    Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)

    2015-01-15

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0

  5. Toward accurate tooth segmentation from computed tomography images using a hybrid level set model

    International Nuclear Information System (INIS)

    Gan, Yangzhou; Zhao, Qunfei; Xia, Zeyang; Hu, Ying; Xiong, Jing; Zhang, Jianwei

    2015-01-01

    Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice by slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, a developed hybrid level set model is applied to segment tooth contours from each slice. Tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), RMS (root mean square) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0.28 ± 0.03 mm

  6. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion.

    Science.gov (United States)

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan

    2013-09-01

    Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precisely measuring the volume of a pleural effusion still involves many challenges, and there is currently no recognized accurate measurement method. To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume, and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the effusion. The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume; no significant differences were found (P = 0.836). For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences (P = 0.989). The following linear regression equation was used to relate the pleural effusion volume (V), measured by the CT volume-rendering technique, to the greatest depth of the effusion (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression related the volume to the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). The 64-slice CT volume-rendering technique can
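
    The reported regressions translate directly into a quick estimate; a sketch using the coefficients quoted above (the abstract does not state units, so centimetres and millilitres are assumed here, and the formulas apply only within the range of effusions studied):

      # Sketch: pleural effusion volume estimates from the regression
      # equations reported above. Units assumed (cm in, ml out); valid
      # only within the measured range of the study.
      def volume_from_depth(d):
          """V from greatest effusion depth d: V = 158.16*d - 116.01."""
          return 158.16 * d - 116.01

      def volume_from_diameters(l, h, d):
          """V from the diameter product: V = 0.56*(l*h*d) + 39.44."""
          return 0.56 * (l * h * d) + 39.44

      print(volume_from_depth(5.0), volume_from_diameters(10.0, 8.0, 5.0))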

  7. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion

    International Nuclear Information System (INIS)

    Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao

    2013-01-01

    Background: Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precisely measuring the volume of a pleural effusion still involves many challenges, and there is currently no recognized accurate measurement method. Purpose: To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the effusion. Material and Methods: The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. Results: After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume; no significant differences were found (P = 0.836). For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences (P = 0.989). The following linear regression equation was used to relate the pleural effusion volume (V), measured by the CT volume-rendering technique, to the greatest depth of the effusion (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression related the volume to the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0

  8. The preliminary exploration of 64-slice volume computed tomography in the accurate measurement of pleural effusion

    Energy Technology Data Exchange (ETDEWEB)

    Guo, Zhi-Jun [Dept. of Radiology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)], e-mail: Gzj3@163.com; Lin, Qiang [Dept. of Oncology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)]; Liu, Hai-Tao [Dept. of General Surgery, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)] [and others]

    2013-09-15

    Background: Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precisely measuring the volume of a pleural effusion still involves many challenges, and there is currently no recognized accurate measurement method. Purpose: To explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the effusion. Material and Methods: The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. Results: After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume; no significant differences were found (P = 0.836). For the 25 patients with drained pleural effusions, the comparison of the reduction volume with the actual volume of the liquid extract revealed no significant differences (P = 0.989). The following linear regression equation was used to relate the pleural effusion volume (V), measured by the CT volume-rendering technique, to the greatest depth of the effusion (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression related the volume to the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0

  9. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    Science.gov (United States)

    Pan, Bing; Wang, Bo; Wu, Dafang; Lubineau, Gilles

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved through three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly, and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and of existing typical DVC algorithms is first analyzed quantitatively in terms of the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost.

  10. An efficient and accurate 3D displacements tracking strategy for digital volume correlation

    KAUST Repository

    Pan, Bing

    2014-07-01

    Owing to its inherent computational complexity, practical implementation of digital volume correlation (DVC) for internal displacement and strain mapping faces important challenges in improving its computational efficiency. In this work, an efficient and accurate 3D displacement tracking strategy is proposed for fast DVC calculation. The efficiency advantage is achieved through three improvements. First, to eliminate the need to update the Hessian matrix in each iteration, an efficient 3D inverse compositional Gauss-Newton (3D IC-GN) algorithm is introduced to replace existing forward additive algorithms for accurate sub-voxel displacement registration. Second, to ensure that the 3D IC-GN algorithm converges accurately and rapidly, and to avoid time-consuming integer-voxel displacement searching, a generalized reliability-guided displacement tracking strategy is designed to transfer an accurate and complete initial guess of deformation to each calculation point from its computed neighbors. Third, to avoid the repeated computation of sub-voxel intensity interpolation coefficients, an interpolation coefficient lookup table is established for tricubic interpolation. The computational complexity of the proposed fast DVC and of existing typical DVC algorithms is first analyzed quantitatively in terms of the necessary arithmetic operations. Then, numerical tests are performed to verify the performance of the fast DVC algorithm in terms of measurement accuracy and computational efficiency. The experimental results indicate that, compared with the existing DVC algorithm, the presented fast DVC algorithm produces similar precision and slightly higher accuracy at a substantially reduced computational cost. © 2014 Elsevier Ltd.

  11. Field load and displacement boundary condition computer program used for the finite element analysis and design of toroidal field coils in a tokamak

    International Nuclear Information System (INIS)

    Smith, R.A.

    1975-06-01

    The design evaluation of toroidal field coils on the Princeton Large Torus (PLT), the Poloidal Divertor Experiment (PDX) and the Tokamak Fusion Test Reactor (TFTR) has been performed by structural analysis with the finite element method. The technique employed has been simplified with supplementary computer programs that generate the input data for the finite element computer program. Significant automation has been provided by computer codes in three areas of data input: the definition of coil geometry by a mesh of node points, the definition of finite elements via the node points, and the definition of the node point force/displacement boundary conditions. The computer programs that perform these functions are PDXNODE, ELEMENT and PDXFORC. The geometric finite element modeling options for toroidal field coils provided by PDXNODE include one-fourth or one-half symmetric sections of circular coils, oval-shaped coils or dee-shaped coils, with or without a beveled wedging surface. The program ELEMENT, which defines the finite elements for input to the finite element code, can provide considerable savings of time and labor when defining the model of coils of non-uniform cross-section, or of coils whose material properties differ in the R and THETA directions owing to the laminations of alternate epoxy and copper windings. The modeling features of ELEMENT have been used to analyze the PLT and TFTR toroidal field coils with integral support structures. The program PDXFORC computes the node point forces in a model of a toroidal field coil from the vector cross product of the coil current and the magnetic field. The model can be of one-half or one-fourth symmetry, consistent with the node model defined by PDXNODE, and the magnetic field is computed from toroidal or poloidal coils.

  12. Two-dimensional computer simulation of high intensity proton beams

    CERN Document Server

    Lapostolle, Pierre M

    1972-01-01

    A computer program has been developed which simulates the two- dimensional transverse behaviour of a proton beam in a focusing channel. The model is represented by an assembly of a few thousand 'superparticles' acted upon by their own self-consistent electric field and an external focusing force. The evolution of the system is computed stepwise in time by successively solving Poisson's equation and Newton's law of motion. Fast Fourier transform techniques are used for speed in the solution of Poisson's equation, while extensive area weighting is utilized for the accurate evaluation of electric field components. A computer experiment has been performed on the CERN CDC 6600 computer to study the nonlinear behaviour of an intense beam in phase space, showing under certain circumstances a filamentation due to space charge and an apparent emittance growth. (14 refs).
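
    The ingredients described, an FFT-based Poisson solve plus area weighting (now usually called cloud-in-cell), still define the minimal electrostatic particle-in-cell loop; a compact 2D periodic sketch in normalized units, with all parameters illustrative:

      # Sketch: minimal 2D periodic electrostatic PIC step with an FFT
      # Poisson solve and cloud-in-cell (area-weighted) deposit/gather,
      # the same ingredients as the superparticle model described above.
      import numpy as np

      ng, L, npart, dt, q, m = 64, 1.0, 4000, 0.01, -1.0, 1.0
      dx = L / ng
      rng = np.random.default_rng(1)
      xp = rng.uniform(0.0, L, (npart, 2))       # superparticle positions
      vp = np.zeros((npart, 2))

      kx = 2.0 * np.pi * np.fft.fftfreq(ng, d=dx)
      KX, KY = np.meshgrid(kx, kx, indexing="ij")
      K2 = KX**2 + KY**2
      K2[0, 0] = 1.0                             # mean mode handled below

      def cic(xcol):                             # lower cell index and weight
          g = xcol / dx
          i0 = np.floor(g).astype(int)
          return i0 % ng, (i0 + 1) % ng, g - i0

      for step in range(100):
          ix0, ix1, wx = cic(xp[:, 0])
          iy0, iy1, wy = cic(xp[:, 1])
          rho = np.zeros((ng, ng))               # area-weighted charge deposit
          np.add.at(rho, (ix0, iy0), q * (1 - wx) * (1 - wy))
          np.add.at(rho, (ix1, iy0), q * wx * (1 - wy))
          np.add.at(rho, (ix0, iy1), q * (1 - wx) * wy)
          np.add.at(rho, (ix1, iy1), q * wx * wy)
          rho = rho / dx**2
          rho -= rho.mean()                      # neutralizing background
          phik = np.fft.fft2(rho) / K2           # Poisson: k^2 phi_k = rho_k
          Ex = np.fft.ifft2(-1j * KX * phik).real
          Ey = np.fft.ifft2(-1j * KY * phik).real
          w00, w10, w01, w11 = (1-wx)*(1-wy), wx*(1-wy), (1-wx)*wy, wx*wy
          Exp = (Ex[ix0, iy0]*w00 + Ex[ix1, iy0]*w10
                 + Ex[ix0, iy1]*w01 + Ex[ix1, iy1]*w11)
          Eyp = (Ey[ix0, iy0]*w00 + Ey[ix1, iy0]*w10
                 + Ey[ix0, iy1]*w01 + Ey[ix1, iy1]*w11)
          vp += (q / m) * np.column_stack([Exp, Eyp]) * dt
          xp = (xp + vp * dt) % L                # push, periodic boundaries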

  13. A Neural Information Field Approach to Computational Cognition

    Science.gov (United States)

    2016-11-18

    The work models the effects of distraction during list memory; these distractions include short and long delays before recall, and continuous distraction (forced rehearsal). Related results cover memory encoding and replay in hippocampus (Computational Neuroscience Society (CNS), p. 166, 2014; D. A. Pinotsis, Neural Field Coding of Short Term ...), the performance of children learning to count in a SPA model, a new SPA model of cognitive load using the N-back task, and a new model of the

  14. TBCI and URMEL - New computer codes for wake field and cavity mode calculations

    International Nuclear Information System (INIS)

    Weiland, T.

    1983-01-01

    Wake force computation is important for any study of instabilities in high-current accelerators and storage rings. These forces are generated by intense bunches of charged particles passing cylindrically symmetric structures on or off axis. The adequate method for computing such forces is the time-domain approach. The computer code TBCI computes, for relativistic as well as nonrelativistic bunches of arbitrary shape, longitudinal and transverse wake forces up to the octupole component. TBCI is not limited to cavity-like objects and is thus applicable to bellows, beam pipes with varying cross sections, and any other nonresonant structures. For the accelerating cavities one also needs to know the resonant modes and frequencies for the study of instabilities and mode couplers. The complementary code, named URMEL, computes these fields for any azimuthal dependence of the fields in ascending order. The mathematical procedure used is very safe and does not miss modes. Together, the two codes represent a unique tool for accelerator design and are easy to use

  15. Velocity field calculation for non-orthogonal numerical grids

    Energy Technology Data Exchange (ETDEWEB)

    Flach, G. P. [Savannah River Site (SRS), Aiken, SC (United States). Savannah River National Lab. (SRNL)

    2015-03-01

    Computational grids containing cell faces that do not align with an orthogonal (e.g. Cartesian, cylindrical) coordinate system are routinely encountered in porous-medium numerical simulations. Such grids are referred to in this study as non-orthogonal grids because some cell faces are not orthogonal to a coordinate system plane (e.g. xy, yz or xz plane in Cartesian coordinates). Non-orthogonal grids are routinely encountered at the Savannah River Site in porous-medium flow simulations for Performance Assessments and groundwater flow modeling. Examples include grid lines that conform to the sloping roof of a waste tank or disposal unit in a 2D Performance Assessment simulation, and grid surfaces that conform to undulating stratigraphic surfaces in a 3D groundwater flow model. Particle tracking is routinely performed after a porous-medium numerical flow simulation to better understand the dynamics of the flow field and/or as an approximate indication of the trajectory and timing of advective solute transport. Particle tracks are computed by integrating the velocity field from cell to cell starting from designated seed (starting) positions. An accurate velocity field is required to attain accurate particle tracks. However, many numerical simulation codes report only the volumetric flowrate (e.g. PORFLOW) and/or flux (flowrate divided by area) crossing cell faces. For an orthogonal grid, the normal flux at a cell face is a component of the Darcy velocity vector in the coordinate system, and the pore velocity for particle tracking is attained by dividing by water content. For a non-orthogonal grid, the flux normal to a cell face that lies outside a coordinate plane is not a true component of velocity with respect to the coordinate system. Nonetheless, normal fluxes are often taken as Darcy velocity components, either naively or with accepted approximation. To enable accurate particle tracking, or otherwise present an accurate depiction of the velocity field, the true velocity components for a non-orthogonal grid must therefore be reconstructed from the reported face fluxes.
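
    The flux-to-velocity conversion at the heart of the tracking problem can be illustrated for the simple orthogonal case. The sketch below, with assumed names and a Pollock-style linear interpolation between opposite faces, divides the Darcy flux by water content to obtain pore velocity and advances a particle with an Euler step; it is an illustration, not the report's method for non-orthogonal grids.

        import numpy as np

        def pore_velocity(q_in, q_out, area, water_content, x, x0, x1):
            """Linear interpolation of the Darcy flux between two opposite faces,
            divided by water content to give pore velocity along one axis."""
            v0 = q_in / (area * water_content)    # pore velocity at face x0
            v1 = q_out / (area * water_content)   # pore velocity at face x1
            t = (x - x0) / (x1 - x0)
            return (1 - t) * v0 + t * v1

        # Euler steps of a particle track within one orthogonal cell
        x, dt = 0.2, 0.05
        for _ in range(10):
            v = pore_velocity(q_in=1.0, q_out=1.2, area=1.0, water_content=0.3,
                              x=x, x0=0.0, x1=1.0)
            x += v * dt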

  16. Fast and accurate methods for phylogenomic analyses

    Directory of Open Access Journals (Sweden)

    Warnow Tandy

    2011-10-01

    Abstract Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.

  17. Measurement and tricubic interpolation of the magnetic field for the OLYMPUS experiment

    Energy Technology Data Exchange (ETDEWEB)

    Bernauer, J.C. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Diefenbach, J. [Hampton University, Hampton, VA (United States); Elbakian, G. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Gavrilov, G. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); Goerrissen, N. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Hasell, D.K.; Henderson, B.S. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Holler, Y. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Karyan, G. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Ludwig, J. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Marukyan, H. [Alikhanyan National Science Laboratory (Yerevan Physics Institute), Yerevan (Armenia); Naryshkin, Y. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation); O'Connor, C.; Russell, R.L.; Schmidt, A. [Massachusetts Institute of Technology, Laboratory for Nuclear Science, Cambridge, MA (United States); Schneekloth, U. [Deutsches Elektronen-Synchrotron DESY, Hamburg (Germany); Suvorov, K.; Veretennikov, D. [Petersburg Nuclear Physics Institute, Gatchina (Russian Federation)

    2016-07-01

    The OLYMPUS experiment used a 0.3 T toroidal magnetic spectrometer to measure the momenta of outgoing charged particles. In order to accurately determine particle trajectories, knowledge of the magnetic field was needed throughout the spectrometer volume. For that purpose, the magnetic field was measured at over 36,000 positions using a three-dimensional Hall probe actuated by a system of translation tables. We used these field data to fit a numerical magnetic field model, which could be employed to calculate the magnetic field at any point in the spectrometer volume. Calculations with this model were computationally intensive; for analysis applications where speed was crucial, we pre-computed the magnetic field and its derivatives on an evenly spaced grid so that the field could be interpolated between grid points. We developed a spline-based interpolation scheme suitable for SIMD implementations, with a memory layout chosen to minimize space and optimize the cache behavior to quickly calculate field values. This scheme requires only one-eighth of the memory needed to store necessary coefficients compared with a previous scheme (Lekien and Marsden, 2005 [1]). This method was accurate for the vast majority of the spectrometer volume, though special fits and representations were needed to improve the accuracy close to the magnet coils and along the toroidal axis.
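
    The grid-based interpolation idea can be illustrated with the simpler trilinear scheme; the OLYMPUS interpolation was a tricubic spline with a SIMD-friendly memory layout, so the sketch below (with assumed grid names and spacing, and query points strictly inside the grid) only conveys the basic lookup-and-blend structure.

        import numpy as np

        def interpolate_B(Bgrid, origin, spacing, p):
            """Trilinear interpolation of a vector field stored on a regular grid.
            Bgrid has shape (nx, ny, nz, 3); p is the query point."""
            f = (np.asarray(p) - origin) / spacing     # fractional grid coordinates
            i0 = np.floor(f).astype(int)               # lower corner of enclosing cell
            t = f - i0                                 # position within the cell, in [0, 1)
            B = np.zeros(3)
            for dx in (0, 1):
                for dy in (0, 1):
                    for dz in (0, 1):
                        w = ((1 - t[0]) if dx == 0 else t[0]) * \
                            ((1 - t[1]) if dy == 0 else t[1]) * \
                            ((1 - t[2]) if dz == 0 else t[2])
                        B += w * Bgrid[i0[0] + dx, i0[1] + dy, i0[2] + dz]
            return B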

  18. Probe-Hole Field Emission Microscope System Controlled by Computer

    Science.gov (United States)

    Gong, Yunming; Zeng, Haishan

    1991-09-01

    A probe-hole field emission microscope system, controlled by an Apple II computer, has been developed and operated successfully for measuring the work function of a single crystal plane. The work functions on the clean W(100) and W(111) planes are measured to be 4.67 eV and 4.45 eV, respectively.

  19. A simple highly accurate field-line mapping technique for three-dimensional Monte Carlo modeling of plasma edge transport

    International Nuclear Information System (INIS)

    Feng, Y.; Sardei, F.; Kisslinger, J.

    2005-01-01

    The paper presents a new simple and accurate numerical field-line mapping technique providing a high-quality representation of field lines as required by a Monte Carlo modeling of plasma edge transport in the complex magnetic boundaries of three-dimensional (3D) toroidal fusion devices. Using a toroidal sequence of precomputed 3D finite flux-tube meshes, the method advances field lines through a simple bilinear, forward/backward symmetric interpolation at the interfaces between two adjacent flux tubes. It is a reversible field-line mapping (RFLM) algorithm ensuring a continuous and unique reconstruction of field lines at any point of the 3D boundary. The reversibility property has a strong impact on the efficiency of modeling the highly anisotropic plasma edge transport in general closed or open configurations of arbitrary ergodicity as it avoids artificial cross-field diffusion of the fast parallel transport. For stellarator-symmetric magnetic configurations, which are the standard case for stellarators, the reversibility additionally provides an average cancellation of the radial interpolation errors of field lines circulating around closed magnetic flux surfaces. The RFLM technique has been implemented in the 3D edge transport code EMC3-EIRENE and is used routinely for plasma transport modeling in the boundaries of several low-shear and high-shear stellarators as well as in the boundary of a tokamak with 3D magnetic edge perturbations

  20. Modeling electric fields in two dimensions using computer aided design

    International Nuclear Information System (INIS)

    Gilmore, D.W.; Giovanetti, D.

    1992-01-01

    The authors describe a method for analyzing static electric fields in two dimensions using AutoCAD. The algorithm is coded in LISP and is modeled after Coloumb's Law. The software platform allows for facile graphical manipulations of field renderings and supports a wide range of hardcopy-output and data-storage formats. More generally, this application is representative of the ability to analyze data that is the solution to known mathematical functions with computer aided design (CAD)

  1. Unsteady computational fluid dynamics in aeronautics

    CERN Document Server

    Tucker, P G

    2014-01-01

    The field of Large Eddy Simulation (LES) and hybrids is a vibrant research area. This book runs through all the potential unsteady modelling fidelity ranges, from low-order to LES, the latter probably being the highest fidelity practical for aerospace systems modelling. Cutting-edge new frontiers are defined. One example of a pressing environmental concern is noise; for its accurate prediction, unsteady modelling is needed, and hence computational aeroacoustics is explored. It is also emerging that there is a critical need for coupled simulations, so this area is considered as well, together with the tensions of utilizing such simulations alongside the already expensive LES. This work has relevance to the general field of CFD and LES and to a wide variety of non-aerospace aerodynamic systems (e.g. cars, submarines, ships, electronics, buildings). Topics treated include unsteady flow techniques; LES and hybrids; general numerical methods; computational aeroacoustics; computational aeroelasticity; coupled simulations and...

  2. Numerical computation of gravitational field for general axisymmetric objects

    Science.gov (United States)

    Fukushima, Toshio

    2016-10-01

    We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (I) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (II) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (I) finite uniform objects covering rhombic spindles and circular toroids, (II) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (III) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
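
    The two numerical ingredients, quadrature of ring potentials over the meridional cross-section and numerical differentiation of the potential, can be sketched as follows. The uniform-sphere test case, the quadrature routine and the plain central difference are assumptions for illustration; the paper itself uses double exponential quadrature rules and Ridder's algorithm.

        import numpy as np
        from scipy.integrate import dblquad
        from scipy.special import ellipk

        G = 1.0

        def ring_potential(R, z, s, zp, dm):
            """Potential at (R, z) of a thin ring of radius s at height zp, mass dm."""
            d = np.sqrt((R + s)**2 + (z - zp)**2)
            m = 4.0 * R * s / d**2                 # parameter m = k^2 of ellipk
            return -2.0 * G * dm * ellipk(m) / (np.pi * d)

        def potential_uniform_sphere(R, z, radius=1.0, rho=1.0):
            """Sum ring contributions over the meridional cross-section of a sphere."""
            integrand = lambda zp, s: ring_potential(R, z, s, zp, 2.0 * np.pi * s * rho)
            half = lambda s: np.sqrt(radius**2 - s**2)
            val, _ = dblquad(integrand, 0.0, radius, lambda s: -half(s), half)
            return val

        # acceleration by numerical differentiation of the potential (the paper uses
        # Ridder's algorithm; a plain central difference is shown here)
        R, z, h = 2.0, 0.0, 1e-4
        gR = -(potential_uniform_sphere(R + h, z)
               - potential_uniform_sphere(R - h, z)) / (2 * h)
        print(gR, -G * (4.0 / 3.0) * np.pi / R**2)   # both close to -G*M/R^2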

  3. Computation of the velocity field and mass balance in the finite-element modeling of groundwater flow

    International Nuclear Information System (INIS)

    Yeh, G.T.

    1980-01-01

    Darcian velocity has been conventionally calculated in the finite-element modeling of groundwater flow by taking the derivatives of the computed pressure field. This results in discontinuities in the velocity field at nodal points and element boundaries. Discontinuities become enormous when the computed pressure field is far from a linear distribution. It is proposed in this paper that the finite element procedure that is used to simulate the pressure field or the moisture content field also be applied to Darcy's law with the derivatives of the computed pressure field as the load function. The problem of discontinuity is then eliminated, and the error of mass balance over the region of interest is much reduced. The reduction is from 23.8 to 2.2% by one numerical scheme and from 29.7 to -3.6% by another for a transient problem
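
    A one-dimensional sketch of the proposed projection, with linear elements and an assumed pressure field, shows the idea: Darcy's law is itself projected onto the finite element basis by solving a mass-matrix system, yielding a continuous nodal velocity instead of the discontinuous element-by-element derivative.

        import numpy as np

        n, L, K = 10, 1.0, 1.0                 # elements, domain length, conductivity
        xn = np.linspace(0.0, L, n + 1)        # node coordinates
        p = 1.0 - xn**2                        # a computed (here: prescribed) pressure

        h = np.diff(xn)
        grad_p = np.diff(p) / h                # piecewise-constant derivative per element

        # assemble the mass matrix M and load vector b with b_i = int phi_i * (-K dp/dx),
        # then solve M v = b for continuous nodal velocities
        M = np.zeros((n + 1, n + 1))
        b = np.zeros(n + 1)
        for e in range(n):
            he = h[e]
            Me = he / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])   # element mass matrix
            be = -K * grad_p[e] * he / 2.0 * np.ones(2)          # element load vector
            idx = [e, e + 1]
            M[np.ix_(idx, idx)] += Me
            b[idx] += be

        v = np.linalg.solve(M, b)              # continuous nodal Darcy velocities
        print(v[:3], (2 * K * xn)[:3])         # compare with exact -K dp/dx = 2*K*x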

  4. Multigrid Methods for the Computation of Propagators in Gauge Fields

    Science.gov (United States)

    Kalkreuter, Thomas

    Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground-state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18⁴ conjugate gradient is superior.

  5. Numerical computation of the transport matrix in toroidal plasma with a stochastic magnetic field

    Science.gov (United States)

    Zhu, Siqiang; Chen, Dunqiang; Dai, Zongliang; Wang, Shaojie

    2018-04-01

    A new numerical method for computing the transport matrix, based on integrating along the full orbits of guiding centers, is realized. The method is successfully applied to compute the phase-space diffusion tensor of passing electrons in a tokamak with a stochastic magnetic field. The new method also computes the Lagrangian correlation function, which can be used to evaluate the Lagrangian correlation time and the turbulence correlation length. For the case of the stochastic magnetic field, we find that the order of magnitude of the parallel correlation length can be estimated by qR₀, as expected previously.

  6. In pursuit of an accurate spatial and temporal model of biomolecules at the atomistic level: a perspective on computer simulation.

    Science.gov (United States)

    Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A

    2015-01-01

    Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.

  7. Development and Validation of a Multidisciplinary Tool for Accurate and Efficient Rotorcraft Noise Prediction (MUTE)

    Science.gov (United States)

    Liu, Yi; Anusonti-Inthra, Phuriwat; Diskin, Boris

    2011-01-01

    A physics-based, systematically coupled, multidisciplinary prediction tool (MUTE) for rotorcraft noise was developed and validated with a wide range of flight configurations and conditions. MUTE is an aggregation of multidisciplinary computational tools that accurately and efficiently model the physics of the source of rotorcraft noise, and predict the noise at far-field observer locations. It uses systematic coupling approaches among multiple disciplines including Computational Fluid Dynamics (CFD), Computational Structural Dynamics (CSD), and high fidelity acoustics. Within MUTE, advanced high-order CFD tools are used around the rotor blade to predict the transonic flow (shock wave) effects, which generate the high-speed impulsive noise. Predictions of the blade-vortex interaction noise in low speed flight are also improved by using the Particle Vortex Transport Method (PVTM), which preserves the wake flow details required for blade/wake and fuselage/wake interactions. The accuracy of the source noise prediction is further improved by utilizing a coupling approach between CFD and CSD, so that the effects of key structural dynamics, elastic blade deformations, and trim solutions are correctly represented in the analysis. The blade loading information and/or the flow field parameters around the rotor blade predicted by the CFD/CSD coupling approach are used to predict the acoustic signatures at far-field observer locations with a high-fidelity noise propagation code (WOPWOP3). The predicted results from the MUTE tool for rotor blade aerodynamic loading and far-field acoustic signatures are compared and validated with a variety of experimental data sets, such as UH-60A data, DNW test data and HART II test data.

  8. Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation

    International Nuclear Information System (INIS)

    Turner, L.R.; Levine, D.; Huang, M.; Papka, M.

    1995-01-01

    One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed.

  9. Multi-GPU Jacobian accelerated computing for soft-field tomography

    International Nuclear Information System (INIS)

    Borsic, A; Attardo, E A; Halter, R J

    2012-01-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15–20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times

  10. Multi-GPU Jacobian accelerated computing for soft-field tomography.

    Science.gov (United States)

    Borsic, A; Attardo, E A; Halter, R J

    2012-10-01

    Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15-20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical application of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20

  11. Portable, remotely operated, computer-controlled, quadrupole mass spectrometer for field use

    International Nuclear Information System (INIS)

    Friesen, R.D.; Newton, J.C.; Smith, C.F.

    1982-04-01

    A portable, remote-controlled mass spectrometer was required at the Nevada Test Site to analyze prompt post-event gas from the nuclear cavity in support of the underground testing program. A Balzers QMG-511 quadrupole was chosen for its ability to be interfaced to a DEC LSI-11 computer and to withstand the ground movement caused by this field environment. The inlet system valves, the pumps, the pressure and temperature transducers, and the quadrupole mass spectrometer are controlled by a read-only-memory-based DEC LSI-11/2 with a high-speed microwave link to the control point which is typically 30 miles away. The computer at the control point is a DEC LSI-11/23 running the RSX-11 operating system. The instrument was automated as much as possible because the system is run by inexperienced operators at times. The mass spectrometer has been used on an initial field event with excellent performance. The gas analysis system is described, including automation by a novel computer control method which reduces operator errors and allows dynamic access to the system parameters

  12. THEORETICAL COMPUTATION OF A STRESS FIELD IN A CYLINDRICAL GLASS SPECIMEN

    Directory of Open Access Journals (Sweden)

    NORBERT KREČMER

    2011-03-01

    This work deals with the computation of the stress field generated in an infinitely long glass cylinder during cooling. The theory of structural relaxation is used to compute the heat capacity, the thermal expansion coefficient, and the viscosity. The relaxation of the stress components is solved within the Maxwell viscoelasticity model. The results were verified by a sensitivity analysis and compared with experimental data.
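
    The Maxwell stress-relaxation step invoked above can be sketched for a single stress component: d(sigma)/dt = E * d(eps)/dt - sigma/tau with tau = eta/E. The material values below are placeholders for illustration, not the glass data of the paper.

        import numpy as np

        E, eta = 70e9, 1e12           # elastic modulus [Pa], viscosity [Pa s] (assumed)
        tau = eta / E                 # relaxation time [s]
        dt, nsteps = 0.1, 1000
        sigma, eps_rate = 10e6, 0.0   # initial stress [Pa]; strain held constant

        history = []
        for _ in range(nsteps):
            sigma += dt * (E * eps_rate - sigma / tau)   # explicit Euler update
            history.append(sigma)
        # under constant strain the stress decays as sigma0 * exp(-t/tau)
        print(history[-1], 10e6 * np.exp(-nsteps * dt / tau))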

  13. A Computational Model of Cellular Response to Modulated Radiation Fields

    Energy Technology Data Exchange (ETDEWEB)

    McMahon, Stephen J., E-mail: stephen.mcmahon@qub.ac.uk [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); Butterworth, Karl T. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); McGarry, Conor K. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); Radiotherapy Physics, Northern Ireland Cancer Centre, Belfast Health and Social Care Trust, Northern Ireland (United Kingdom); Trainor, Colman [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); O'Sullivan, Joe M. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); Clinical Oncology, Northern Ireland Cancer Centre, Belfast Health and Social Care Trust, Belfast, Northern Ireland (United Kingdom); Hounsell, Alan R. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); Radiotherapy Physics, Northern Ireland Cancer Centre, Belfast Health and Social Care Trust, Northern Ireland (United Kingdom); Prise, Kevin M. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom)

    2012-09-01

    Purpose: To develop a model to describe the response of cell populations to spatially modulated radiation exposures of relevance to advanced radiotherapies. Materials and Methods: A Monte Carlo model of cellular radiation response was developed. This model incorporated damage from both direct radiation and intercellular communication including bystander signaling. The predictions of this model were compared to previously measured survival curves for a normal human fibroblast line (AGO1522) and prostate tumor cells (DU145) exposed to spatially modulated fields. Results: The model was found to be able to accurately reproduce cell survival both in populations which were directly exposed to radiation and those which were outside the primary treatment field. The model predicts that the bystander effect makes a significant contribution to cell killing even in uniformly irradiated cells. The bystander effect contribution varies strongly with dose, falling from a high of 80% at low doses to 25% and 50% at 4 Gy for AGO1522 and DU145 cells, respectively. This was verified using the inducible nitric oxide synthase inhibitor aminoguanidine to inhibit the bystander effect in cells exposed to different doses, which showed significantly larger reductions in cell killing at lower doses. Conclusions: The model presented in this work accurately reproduces cell survival following modulated radiation exposures, both in and out of the primary treatment field, by incorporating a bystander component. In addition, the model suggests that the bystander effect is responsible for a significant portion of cell killing in uniformly irradiated cells, 50% and 70% at doses of 2 Gy in AGO1522 and DU145 cells, respectively. This description is a significant departure from accepted radiobiological models and may have a significant impact on optimization of treatment planning approaches if proven to be applicable in vivo.

  14. A Computational Model of Cellular Response to Modulated Radiation Fields

    International Nuclear Information System (INIS)

    McMahon, Stephen J.; Butterworth, Karl T.; McGarry, Conor K.; Trainor, Colman; O’Sullivan, Joe M.; Hounsell, Alan R.; Prise, Kevin M.

    2012-01-01

    Purpose: To develop a model to describe the response of cell populations to spatially modulated radiation exposures of relevance to advanced radiotherapies. Materials and Methods: A Monte Carlo model of cellular radiation response was developed. This model incorporated damage from both direct radiation and intercellular communication including bystander signaling. The predictions of this model were compared to previously measured survival curves for a normal human fibroblast line (AGO1522) and prostate tumor cells (DU145) exposed to spatially modulated fields. Results: The model was found to be able to accurately reproduce cell survival both in populations which were directly exposed to radiation and those which were outside the primary treatment field. The model predicts that the bystander effect makes a significant contribution to cell killing even in uniformly irradiated cells. The bystander effect contribution varies strongly with dose, falling from a high of 80% at low doses to 25% and 50% at 4 Gy for AGO1522 and DU145 cells, respectively. This was verified using the inducible nitric oxide synthase inhibitor aminoguanidine to inhibit the bystander effect in cells exposed to different doses, which showed significantly larger reductions in cell killing at lower doses. Conclusions: The model presented in this work accurately reproduces cell survival following modulated radiation exposures, both in and out of the primary treatment field, by incorporating a bystander component. In addition, the model suggests that the bystander effect is responsible for a significant portion of cell killing in uniformly irradiated cells, 50% and 70% at doses of 2 Gy in AGO1522 and DU145 cells, respectively. This description is a significant departure from accepted radiobiological models and may have a significant impact on optimization of treatment planning approaches if proven to be applicable in vivo.

  15. Dual field theories of quantum computation

    International Nuclear Information System (INIS)

    Vanchurin, Vitaly

    2016-01-01

    Given two quantum states of N q-bits, we are interested in finding the shortest quantum circuit consisting of only one- and two-q-bit gates that would transfer one state into another. We call it the quantum maze problem, for the reasons described in the paper. We argue that in a large-N limit the quantum maze problem is equivalent to the problem of finding a semiclassical trajectory of some lattice field theory (the dual theory) on an N+1 dimensional space-time with geometrically flat, but topologically compact spatial slices. The spatial fundamental domain is an N dimensional hyper-rhombohedron, and the temporal direction describes transitions from an arbitrary initial state to an arbitrary target state, so the initial and final dual field theory conditions are described by these two quantum computational states. We first consider a complex Klein-Gordon field theory and argue that it can only be used to study the shortest quantum circuits which do not involve generators composed of tensor products of multiple Pauli Z matrices. Since such a situation is not generic, we call it the Z-problem. On the dual field theory side, the Z-problem corresponds to massless excitations of the phase (Goldstone modes) that we attempt to fix using the Higgs mechanism. The simplest dual theory which does not suffer from the massless excitation (or from the Z-problem) is the Abelian-Higgs model, which we argue can be used for finding the shortest quantum circuits. Since every trajectory of the field theory is mapped directly to a quantum circuit, the shortest quantum circuits are identified with semiclassical trajectories. We also discuss the complexity of an actual algorithm that uses a dual theory perspective for solving the quantum maze problem and compare it with a geometric approach. We argue that it might be possible to solve the problem in sub-exponential time in 2^N, but for that we must consider the Klein-Gordon theory on curved spatial geometry and/or more complicated (than N

  16. Three-dimensional magnetic field computation on a distributed memory parallel processor

    International Nuclear Information System (INIS)

    Barion, M.L.

    1990-01-01

    The analysis of three-dimensional magnetic fields by finite element methods frequently proves too onerous a task for the computing resource on which it is attempted. When non-linear and transient effects are included, it may become impossible to calculate the field distribution to sufficient resolution. One approach to this problem is to exploit the natural parallelism in the finite element method via parallel processing. This paper reports on an implementation of a finite element code for non-linear three-dimensional low-frequency magnetic field calculation on Intel's iPSC/2

  17. Accurate Modeling of Advanced Reflectarrays

    DEFF Research Database (Denmark)

    Zhou, Min

    ... to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared ... of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important ... using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility ...

  18. Accurate Calculation of Magnetic Fields in the End Regions of Superconducting Accelerator Magnets using the BEM-FEM Coupling Method

    CERN Document Server

    Kurz, S

    1999-01-01

    In this paper a new technique for the accurate calculation of magnetic fields in the end regions of superconducting accelerator magnets is presented. The method couples boundary elements (BEM), which discretize the surface of the iron yoke, with finite elements (FEM) for modelling the nonlinear interior of the yoke. The BEM-FEM method is therefore specially suited for the calculation of 3-dimensional effects in the magnets, as the coils and the air regions do not have to be represented in the finite-element mesh and discretization errors only influence the calculation of the magnetization (reduced field) of the yoke. The method has recently been implemented into the CERN-ROXIE program package for the design and optimization of the LHC magnets. The field shape and multipole errors in the two-in-one LHC dipoles, whose coil ends stick out of the common iron yoke, are presented.

  19. Computed tomographic simulation of craniospinal fields in pediatric patients: improved treatment accuracy and patient comfort.

    Science.gov (United States)

    Mah, K; Danjoux, C E; Manship, S; Makhani, N; Cardoso, M; Sixel, K E

    1998-07-15

    To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac are enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients and to conventional simulation films as well in the first four patients. Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.

  20. Computed tomographic simulation of craniospinal fields in pediatric patients: improved treatment accuracy and patient comfort

    International Nuclear Information System (INIS)

    Mah, Katherine; Danjoux, Cyril E.; Manship, Sharan; Makhani, Nadiya; Cardoso, Marlene; Sixel, Katharina E.

    1998-01-01

    Purpose: To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. Methods and Materials: A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac are enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients and to conventional simulation films as well in the first four patients. Results: Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was <10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Conclusions: Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.

  1. Engineering drawing field verification program. Revision 3

    International Nuclear Information System (INIS)

    Ulk, P.F.

    1994-01-01

    Safe, efficient operation of waste tank farm facilities is dependent in part upon the availability of accurate, up-to-date plant drawings. Accurate plant drawings are also required in support of facility upgrades and future engineering remediation projects. This supporting document establishes the procedure for performing a visual field verification of engineering drawings, the degree of visual observation to be performed, and the documentation of the results. A copy of the drawing attesting to the degree of visual observation will be paginated into the released Engineering Change Notice (ECN) documenting the field verification, for future retrieval and reference. All waste tank farm essential and support drawings within the scope of this program will be converted from manual to computer aided drafting (CAD) drawings. A permanent reference to the field verification status will be placed along the right border of the CAD-converted drawing, referencing the revision level at which the visual verification was performed and documented.

  2. Development of accurate techniques for controlling polarization of a long wavelength neutron beam in very low magnetic fields. I

    International Nuclear Information System (INIS)

    Kawai, Takeshi; Ebisawa, Toru; Tasaki, Seiji; Akiyoshi, Tsunekazu; Eguchi, Yoshiaki; Hino, Masahiro; Achiwa, Norio.

    1995-01-01

    The purpose of our study is to develop accurate techniques for controlling the polarization of a long wavelength neutron beam and to make a thin-film dynamical spin-flip device operated in magnetizing fields of less than 100 gauss and at switching rates up to 20 kHz. The device would work as a chopper for a polarized neutron beam and as a magnetic switching device for a multilayer neutron interferometer. We have started to develop multilayer polarizing mirrors functioning under magnetizing fields of less than 100 gauss. The multilayers of Permalloy-Ge and Fe-Ge have been produced by evaporation under magnetizing fields of about 100 gauss parallel to the Si-wafer substrate surface. The hysteresis loops for in-plane magnetization of the multilayers were measured to assess their feasibility as polarizing devices functioning under very low magnetizing fields. The polarizing efficiencies of Fe-Ge and Permalloy-Ge multilayers were 95 % and 91 %, with reflectivities of 50 % and 66 % respectively, under magnetizing fields of 80 gauss. The report also discusses problems in applying these multilayer polarizing mirrors to ultracold neutrons. (author)

  3. A computationally efficient tool for assessing the depth resolution in large-scale potential-field inversion

    DEFF Research Database (Denmark)

    Paoletti, Valeria; Hansen, Per Christian; Hansen, Mads Friis

    2014-01-01

    In potential-field inversion, careful management of singular value decomposition components is crucial for obtaining information about the source distribution with respect to depth. In principle, the depth-resolution plot provides a convenient visual tool for this analysis, but its computational ... on memory and computing time. We used the ApproxDRP to study retrievable depth resolution in inversion of the gravity field of the Neapolitan Volcanic Area. Our main contribution is the combined use of the Lanczos bidiagonalization algorithm, established in the scientific computing community, and the depth ...

  4. A modeling approach to predict acoustic nonlinear field generated by a transmitter with an aluminum lens.

    Science.gov (United States)

    Fan, Tingbo; Liu, Zhenbo; Chen, Tao; Li, Faqi; Zhang, Dong

    2011-09-01

    In this work, the authors propose a modeling approach to compute the nonlinear acoustic field generated by a flat piston transmitter with an attached aluminum lens. In this approach, the geometrical parameters (radius and focal length) of a virtual source are initially determined by Snell's law of refraction and then adjusted based on the Rayleigh integral result in the linear case. This virtual source is then used with the nonlinear spheroidal beam equation (SBE) model to predict the nonlinear acoustic field in the focal region. To examine the validity of this approach, the calculated nonlinear result is compared with those from the Westervelt and Khokhlov-Zabolotskaya-Kuznetsov (KZK) equations for a focal intensity of 7 kW/cm². Results indicate that this approach accurately describes the nonlinear acoustic field in the focal region; compared with the Westervelt equation, its computation time is significantly reduced. It might also be applicable to the widely used concave focused transmitter with a large aperture angle.
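
    The linear building block of the approach, the Rayleigh integral over the piston face, can be sketched and checked against the closed-form on-axis piston solution. Frequency, radius and medium values below are assumptions for illustration; the virtual-source adjustment and the SBE model itself are not shown.

        import numpy as np

        rho0, c, f, a, v0 = 1000.0, 1500.0, 1e6, 0.01, 1.0   # water, 1 MHz, 1 cm piston
        k = 2 * np.pi * f / c

        def p_rayleigh(z, n=400):
            """Rayleigh integral over the piston face (polar grid, midpoint rule)."""
            r = (np.arange(n) + 0.5) * a / n
            dr = a / n
            R = np.sqrt(r**2 + z**2)           # axisymmetric: the phi integral gives 2*pi
            integrand = np.exp(1j * k * R) / R * r * dr
            return -1j * rho0 * c * k * v0 * np.sum(integrand)

        def p_axis_exact(z):
            """Closed-form on-axis pressure magnitude of a baffled circular piston."""
            return 2 * rho0 * c * v0 * np.abs(np.sin(0.5 * k * (np.sqrt(z**2 + a**2) - z)))

        z = 0.05
        print(abs(p_rayleigh(z)), p_axis_exact(z))   # the two should agree closely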

  5. Internal and external Field of View: computer games and cybersickness

    NARCIS (Netherlands)

    Vries, S.C. de; Bos, J.E.; Emmerik, M.L. van; Groen, E.L.

    2007-01-01

    In an experiment with a computer game environment, we studied the effect of Field-of-View (FOV) on cybersickness. In particular, we examined the effect of differences between the internal FOV (IFOV, the FOV which the graphics generator is using to render its images) and the external FOV (EFOV, the

  6. Computation of Galois field expressions for quaternary logic functions on GPUs

    Directory of Open Access Journals (Sweden)

    Gajić Dušan B.

    2014-01-01

    Galois field (GF) expressions are polynomials used as representations of multiple-valued logic (MVL) functions. For this purpose, MVL functions are considered as functions defined over a finite (Galois) field of order p, GF(p). The problem of computing these functional expressions has an important role in areas such as digital signal processing and logic design. The time needed for computing GF-expressions increases exponentially with the number of variables in MVL functions and, as a result, often represents a limiting factor in applications. This paper proposes a method for accelerated computation of GF(4)-expressions for quaternary (four-valued) logic functions using graphics processing units (GPUs). The method is based on the spectral interpretation of GF-expressions, permitting the use of fast Fourier transform (FFT)-like algorithms for their computation. These algorithms are then adapted for highly parallel processing on GPUs. The performance of the proposed solutions is compared with reference C/C++ implementations of the same algorithms running on central processing units (CPUs). Experimental results confirm that the presented approach leads to a significant reduction in processing times (up to 10.86 times) when compared to CPU processing. Therefore, the proposed approach widens the set of problem instances which can be efficiently handled in practice. [Project of the Ministry of Science of the Republic of Serbia, nos. ON174026 and III44006]
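
    For a single quaternary variable the object being computed can be illustrated directly: the GF(4)-expression of a function is the unique polynomial of degree at most three over GF(4) matching its truth vector. The naive search below is an illustration only; the paper computes the same coefficients with FFT-like butterfly algorithms on the GPU.

        from itertools import product

        # GF(4) arithmetic: addition is bitwise XOR of the 2-bit field elements;
        # multiplication follows the field table with 2 = w, 3 = w + 1, w^2 = w + 1
        ADD = lambda a, b: a ^ b
        MUL_TABLE = [
            [0, 0, 0, 0],
            [0, 1, 2, 3],
            [0, 2, 3, 1],   # 2*2 = 3, 2*3 = 1
            [0, 3, 1, 2],   # 3*3 = 2
        ]
        MUL = lambda a, b: MUL_TABLE[a][b]

        def gf_pow(x, e):
            r = 1
            for _ in range(e):
                r = MUL(r, x)
            return r

        def evaluate(coeffs, x):
            """Evaluate c0 + c1*x + c2*x^2 + c3*x^3 over GF(4)."""
            acc = 0
            for e, c in enumerate(coeffs):
                acc = ADD(acc, MUL(c, gf_pow(x, e)))
            return acc

        def gf_expression(f):
            """Find the unique GF(4) polynomial matching the truth vector f."""
            for coeffs in product(range(4), repeat=4):
                if all(evaluate(coeffs, x) == f[x] for x in range(4)):
                    return coeffs

        print(gf_expression([0, 1, 3, 2]))   # squaring function x -> x^2: (0, 0, 1, 0)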

  7. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for a Thin Solenoid with Uniform Current Density

    Energy Technology Data Exchange (ETDEWEB)

    Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-07

    A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for radially thin solenoids with uniform current density is described in this note. An algorithm for computing the vector potential Aθ is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since these derivations are not all easily found in textbooks. Numerical calculations are based on the evaluation of complete elliptic integrals using the Bulirsch algorithm cel. The (apparently) new feature of the algorithms described in this note applies to cases where the field point is outside the bore of the solenoid and the field-point radius approaches the solenoid radius. Since the elliptic integrals of the third kind normally used in computing Bz and Aθ become infinite in this region of parameter space, fields for points with the axial coordinate z outside the ends of the solenoid and near the solenoid radius are treated by use of elliptic integrals of the third kind of modified argument, derived by use of an addition theorem. The algorithms also avoid the numerical difficulties that the textbook solutions have for points near the axis, which arise from explicit factors of 1/r or 1/r² in some of the expressions.
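
    The simplest special case reduces to an elementary expression and makes a useful sanity check: on the axis of a radially thin solenoid the elliptic integrals collapse, and deep inside a long solenoid the field approaches mu0 times the sheet current density. The sketch below shows only this on-axis case; off axis, and near the winding radius, the cel-based algorithms of the note are required.

        import numpy as np

        mu0 = 4e-7 * np.pi

        def Bz_on_axis(z, a, z1, z2, n_I):
            """Axial field at axial position z for a thin solenoid of radius a
            extending from z1 to z2 with sheet current density n_I [A-turns/m]."""
            t1 = (z - z1) / np.hypot(z - z1, a)
            t2 = (z - z2) / np.hypot(z - z2, a)
            return 0.5 * mu0 * n_I * (t1 - t2)

        # deep inside a long solenoid the field approaches mu0 * n_I
        print(Bz_on_axis(0.0, a=0.05, z1=-2.0, z2=2.0, n_I=1e4), mu0 * 1e4)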

  8. An accurate determination of the flux within a slab

    International Nuclear Information System (INIS)

    Ganapol, B.D.; Lapenta, G.

    1993-01-01

    During the past decade, several articles have been written concerning accurate solutions to the monoenergetic neutron transport equation in infinite and semi-infinite geometries. The numerical formulations found in these articles were based primarily on the extensive theoretical investigations performed by the "transport greats" such as Chandrasekhar, Busbridge, Sobolev, and Ivanov, to name a few. The development of numerical solutions in infinite and semi-infinite geometries represents an example of how mathematical transport theory can be utilized to provide highly accurate and efficient numerical transport solutions. These solutions, or analytical benchmarks, are useful as "industry standards," which provide guidance to code developers and promote learning in the classroom. The high accuracy of these benchmarks is directly attributable to the rapid advancement of the state of computing and computational methods. Transport calculations that were beyond the capability of the "supercomputers" of just a few years ago are now possible at one's desk. In this paper, we again build upon the past to tackle the slab problem, which is of the next level of difficulty in comparison to infinite media problems. The formulation is based on the monoenergetic Green's function, which is the most fundamental transport solution. This method of solution requires a fast and accurate evaluation of the Green's function, which, with today's computational power, is now readily available.

  9. Computation of mode eigenfunctions in graded-index optical fibers by the propagating beam method

    International Nuclear Information System (INIS)

    Feit, M.D.; Fleck, J.A. Jr.

    1980-01-01

    The propagating beam method utilizes discrete Fourier transforms for generating configuration-space solutions to optical waveguide problems without reference to modes. The propagating beam method can also give a complete description of the field in terms of modes by a Fourier analysis with respect to axial distance of the computed fields. Earlier work dealt with the accurate determination of mode propagation constants and group delays. In this paper the method is extended to the computation of mode eigenfunctions. The method is efficient, allowing generation of a large number of eigenfunctions from a single propagation run. Computations for parabolic-index profiles show excellent agreement between analytic and numerically generated eigenfunctions
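
    A minimal split-step Fourier sketch of a beam-propagation step is given below; propagating a launch field through a parabolic-index profile and storing the field at each z-plane is exactly the raw material that the axial Fourier analysis described above would turn into mode profiles. The profile, launch field and all numerical values are assumptions for illustration, and a one-dimensional paraxial form is used for brevity.

        import numpy as np

        n0, dn, lam = 1.5, 0.01, 1.0e-6          # core index, index contrast, wavelength
        k0 = 2 * np.pi / lam
        a = 20e-6                                 # core radius of the parabolic profile
        N, W = 512, 200e-6                        # transverse grid points and width
        x = (np.arange(N) - N // 2) * W / N
        kx = 2 * np.pi * np.fft.fftfreq(N, d=W / N)

        n = n0 - dn * np.minimum(x**2 / a**2, 1.0)          # graded (parabolic) index
        E = np.exp(-x**2 / (5e-6) ** 2).astype(complex)     # launch field: a Gaussian

        dz, nz = 1e-6, 2000
        fields = np.empty((nz, N), complex)
        for j in range(nz):
            # half phase step, full spectral diffraction step, half phase step
            E *= np.exp(1j * k0 * (n - n0) * dz / 2)
            E = np.fft.ifft(np.fft.fft(E) * np.exp(-1j * kx**2 * dz / (2 * k0 * n0)))
            E *= np.exp(1j * k0 * (n - n0) * dz / 2)
            fields[j] = E
        # an FFT of fields along axis 0 would now expose the mode beat spectrum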

  10. Adiabatic compression of elongated field-reversed configurations

    International Nuclear Information System (INIS)

    Spencer, R.L.; Tuszewski, M.; Linford, R.K.

    1983-01-01

    The adiabatic compression of an elongated field-reversed configuration (FRC) is computed by using a one-dimensional approximation. The one-dimensional results are checked against a two-dimensional equilibrium code. For ratios of FRC separatrix length to separatrix radius greater than about ten, the one-dimensional results are accurate within 10%. To this accuracy, the adiabatic compression of FRC's can be described by simple analytic formulas

  11. Accurate quantum chemical calculations

    Science.gov (United States)

    Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.

    1989-01-01

    An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.

  12. 38 CFR 4.76a - Computation of average concentric contraction of visual fields.

    Science.gov (United States)

    2010-07-01

    38 CFR § 4.76a — Computation of average concentric contraction of visual fields (Pensions, Bonuses, and Veterans' Relief; Department of Veterans Affairs, Schedule for Rating Disabilities, Disability Ratings, The Organs of Special Sense). Table III—Normal Visual ...

  13. A computational theory of visual receptive fields.

    Science.gov (United States)

    Lindeberg, Tony

    2013-12-01

    A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory for what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance properties under scale changes, affine image deformations, and Galilean transformations of space-time as occur for real-world image data as well as specific requirements of (ii) temporal causality implying that the future cannot be accessed and (iii) a time-recursive updating mechanism of a limited temporal buffer of the past as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity regarding spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, regarding feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a both theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative
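
    A minimal sketch of the kind of receptive-field profile the theory singles out, here a first-order Gaussian-derivative kernel in Python, with the size and scale chosen arbitrarily for illustration:

        import numpy as np

        def gaussian_derivative_rf(size=33, sigma=4.0, order_x=1, order_y=0):
            """Sampled Gaussian-derivative receptive field (orders 0 or 1)."""
            ax = np.arange(size) - size // 2
            xx, yy = np.meshgrid(ax, ax)
            g = np.exp(-(xx**2 + yy**2) / (2 * sigma**2)) / (2 * np.pi * sigma**2)
            if order_x:                  # d/dx brings down a factor -x/sigma^2
                g = -xx / sigma**2 * g
            if order_y:                  # d/dy likewise
                g = -yy / sigma**2 * g
            return g

        # Convolving an image with this kernel acts as an idealized
        # orientation-selective linear receptive field at scale sigma.
        rf = gaussian_derivative_rf()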

  14. On the computation of the demagnetization tensor field for an arbitrary particle shape using a Fourier space approach

    International Nuclear Information System (INIS)

    Beleggia, M.; Graef, M. de

    2003-01-01

    A method is presented to compute the demagnetization tensor field for uniformly magnetized particles of arbitrary shape. By means of a Fourier space approach it is possible to compute analytically the Fourier representation of the demagnetization tensor field for a given shape. Then, specifying the direction of the uniform magnetization, the demagnetizing field and the magnetostatic energy associated with the particle can be evaluated. In some particular cases, the real space representation is computable analytically. In general, a numerical inverse fast Fourier transform is required to perform the inversion. As an example, the demagnetization tensor field for the tetrahedron will be given.
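
    The construction can be sketched in a few lines of Python for a shape where the answer is known (a sphere, whose diagonal tensor components tend to 1/3 inside); grid size and shape are illustrative only:

        import numpy as np

        n, L = 64, 2.0
        ax = (np.arange(n) - n // 2) * (L / n)
        X, Y, Z = np.meshgrid(ax, ax, ax, indexing="ij")
        shape = (X**2 + Y**2 + Z**2 <= 0.5**2).astype(float)  # sphere, radius 0.5

        D = np.fft.fftn(np.fft.ifftshift(shape))              # shape amplitude D(k)
        kf = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
        KX, KY, KZ = np.meshgrid(kf, kf, kf, indexing="ij")
        k2 = KX**2 + KY**2 + KZ**2
        k2[0, 0, 0] = 1.0                                     # avoid 0/0 at k = 0

        # zz-component of the demagnetization tensor field via inverse FFT;
        # deep inside the sphere Nzz approaches the textbook value 1/3.
        Nzz = np.real(np.fft.fftshift(np.fft.ifftn(KZ * KZ / k2 * D)))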

  15. A program for computing cohomology of Lie superalgebras of vector fields

    International Nuclear Information System (INIS)

    Kornyak, V.V.

    1998-01-01

    An algorithm and its C implementation for computing the cohomology of Lie algebras and superalgebras is described. When elaborating the algorithm we paid primary attention to cohomology in trivial, adjoint and coadjoint modules for Lie algebras and superalgebras of the formal vector fields. These algebras have found many applications to modern supersymmetric models of theoretical and mathematical physics. As an example, we present 3- and 5-cocycles from the cohomology in the trivial module for the Poisson algebra Po (2), as found by computer

  16. Accurate determination of antenna directivity

    DEFF Research Database (Denmark)

    Dich, Mikael

    1997-01-01

    The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna for which the radiated power density is known in a finite number of points on the far-field sphere is presented. The main application of the formula is determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence...
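
    For orientation, directivity from a sampled power pattern can also be approximated with ordinary quadrature; the spherical-wave-expansion formula of the paper is far more economical in samples, but the brute-force Python version below (with an invented cos^2 hemispheric pattern) shows the quantities involved:

        import numpy as np

        theta = np.linspace(0.0, np.pi, 181)
        phi = np.linspace(0.0, 2 * np.pi, 360, endpoint=False)
        T, P = np.meshgrid(theta, phi, indexing="ij")

        U = np.cos(T) ** 2 * (T < np.pi / 2)   # example radiation intensity

        # P_rad = integral of U(theta, phi) sin(theta) dtheta dphi
        P_rad = np.trapz(np.trapz(U * np.sin(T), phi, axis=1), theta)
        D = 4 * np.pi * U.max() / P_rad
        print(D)   # about 6, i.e. 7.8 dBi, for this pattern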

  17. Combined tangential-normal vector elements for computing electric and magnetic fields

    International Nuclear Information System (INIS)

    Sachdev, S.; Cendes, Z.J.

    1993-01-01

    A direct method for computing electric and magnetic fields in two dimensions is developed. This method determines both the fields and fluxes directly from Maxwell's curl and divergence equations without introducing potential functions. This allows both the curl and the divergence of the field to be set independently in all elements. The technique is based on a new type of vector finite element that simultaneously interpolates to the tangential component of the electric or the magnetic field and the normal component of the electric or magnetic flux. Continuity conditions are imposed across element edges simply by setting like variables to be the same across element edges. This guarantees the continuity of the field and flux at the mid-point of each edge and that for all edges the average value of the tangential component of the field and of the normal component of the flux is identical

  18. Preliminary Phase Field Computational Model Development

    Energy Technology Data Exchange (ETDEWEB)

    Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)

    2014-12-15

    This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films, and by more complicated and representative alloys. In addition, the modeling incrementally addresses inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models. The models are based on the numerical solution of the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof-of-concept to investigate major loop effects of single versus polycrystalline bulk iron and effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
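
    At its core, the phase-field magnetic model advances the Landau-Lifshitz-Gilbert equation in time; a single-macrospin Python sketch with placeholder constants (not PNNL model inputs) looks like this:

        import numpy as np

        # dm/dt = -gamma/(1 + alpha^2) * [m x H + alpha * m x (m x H)]
        gamma, alpha = 1.76e11, 0.1         # gyromagnetic ratio (rad/s/T), damping
        dt, nsteps = 1e-13, 20000
        H_eff = np.array([0.0, 0.0, 0.1])   # effective field (T), fixed here

        m = np.array([1.0, 0.0, 0.0])       # unit magnetization
        for _ in range(nsteps):
            mxH = np.cross(m, H_eff)
            m = m + dt * (-gamma / (1 + alpha**2)) * (mxH + alpha * np.cross(m, mxH))
            m /= np.linalg.norm(m)          # renormalize to keep |m| = 1
        # In the full model H_eff also collects exchange, anisotropy, and
        # demagnetizing contributions on a spatial grid.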

  19. Computational algorithms for simulations in atmospheric optics.

    Science.gov (United States)

    Konyaev, P A; Lukin, V P

    2016-04-20

    A computer simulation technique for atmospheric and adaptive optics based on parallel programming is discussed. A parallel propagation algorithm is designed and a modified spectral-phase method for computer generation of 2D time-variant random fields is developed. Temporal power spectra of Laguerre-Gaussian beam fluctuations are considered as an example to illustrate the applications discussed. Implementation of the proposed algorithms using Intel MKL and IPP libraries and NVIDIA CUDA technology is shown to be very fast and accurate. The hardware system for the computer simulation is an off-the-shelf desktop with an Intel Core i7-4790K CPU operating at a turbo-speed frequency up to 5 GHz and an NVIDIA GeForce GTX-960 graphics accelerator with 1024 1.5 GHz processors.
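
    Spectral-phase generation of a 2D random field follows the usual FFT recipe: filter complex white noise with the square root of a target spectrum and inverse-transform. The Kolmogorov-type spectrum and all parameters below are assumptions for illustration, and the absolute variance calibration is omitted:

        import numpy as np

        n, dx, r0 = 256, 0.01, 0.1                 # grid, spacing (m), Fried radius (m)
        fx = np.fft.fftfreq(n, dx)
        FX, FY = np.meshgrid(fx, fx, indexing="ij")
        f2 = FX**2 + FY**2
        f2[0, 0] = np.inf                          # suppress the undefined piston term
        psd = 0.023 * r0 ** (-5.0 / 3.0) * f2 ** (-11.0 / 6.0)   # Kolmogorov-like PSD

        rng = np.random.default_rng(0)
        noise = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
        screen = np.real(np.fft.ifft2(noise * np.sqrt(psd)))     # one phase-screen draw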

  20. An Accurate Computational Tool for Performance Estimation of FSO Communication Links over Weak to Strong Atmospheric Turbulent Channels

    Directory of Open Access Journals (Sweden)

    Theodore D. Katsilieris

    2017-03-01

    Full Text Available The terrestrial optical wireless communication links have attracted significant research and commercial worldwide interest over the last few years due to the fact that they offer very high and secure data rate transmission with relatively low installation and operational costs, and without need of licensing. However, since the propagation path of the information signal, i.e., the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on the rain, the fog, the hail, the atmospheric turbulence, etc. Due to the influence of these effects, it is necessary to study such a communication system very carefully, theoretically and numerically, before its installation. In this work, we present exact and accurately approximated mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link’s parameters, the transmitted power, the attenuation due to the fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver’s end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma–gamma distribution for weak or moderate to strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, accomplish and present a computational tool for the estimation of these systems’ performance, while also taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using
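
    As a small, hedged illustration of one such metric, the outage probability of a lognormal channel with unit-mean irradiance reduces to an error-function evaluation (the threshold and scintillation strength below are invented):

        import math

        def outage_lognormal(i_th, sigma2_l):
            """Pr(I <= i_th) for lognormal irradiance I with E[I] = 1.

            sigma2_l: variance of ln(I); i_th: normalized irradiance threshold.
            """
            mu = -sigma2_l / 2.0                       # enforces unit mean
            z = (math.log(i_th) - mu) / math.sqrt(2.0 * sigma2_l)
            return 0.5 * (1.0 + math.erf(z))

        print(outage_lognormal(i_th=0.1, sigma2_l=0.3))   # weak-turbulence example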

  1. Adiabatic compression of elongated field-reversed configurations

    Energy Technology Data Exchange (ETDEWEB)

    Spencer, R.L.; Tuszewski, M.; Linford, R.K.

    1983-06-01

    The adiabatic compression of an elongated field-reversed configuration (FRC) is computed by using a one-dimensional approximation. The one-dimensional results are checked against a two-dimensional equilibrium code. For ratios of FRC separatrix length to separatrix radius greater than about ten, the one-dimensional results are accurate within 10%. To this accuracy, the adiabatic compression of FRC's can be described by simple analytic formulas.

  2. Symbolic math for computation of radiation shielding

    International Nuclear Information System (INIS)

    Suman, Vitisha; Datta, D.; Sarkar, P.K.; Kushwaha, H.S.

    2010-01-01

    Radiation transport calculations for shielding studies in the field of accelerator technology often involve intensive numerical computations. Traditionally, the radiation transport equation is solved using a finite difference scheme or the more advanced finite element method with respect to specific initial and boundary conditions suitable for the geometry of the problem. All these computations need CPU-intensive computer codes for accurate calculation of scalar and angular fluxes. Computation using symbols of the analytical expression representing the transport equation as objects is an enhanced numerical technique in which the computation is completely algorithm and data oriented. An algorithm based on this symbolic math architecture is developed using the Symbolic Math Toolbox of the MATLAB software. The present paper describes the symbolic math algorithm and its application in a case study in which the shielding calculation of a rectangular slab geometry is studied for a line source of specific activity. The application of symbolic math in this domain opens a new paradigm compared to existing computer codes such as DORT. (author)
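
    The flavor of the approach, carrying the attenuation expression symbolically and substituting numbers only at the end, can be mimicked in Python with SymPy; a point isotropic source behind a slab is used here as a simpler stand-in for the paper's line-source case:

        import sympy as sp

        mu, T, d, S = sp.symbols("mu T d S", positive=True)

        # Uncollided flux behind a slab of thickness T at distance d:
        # exponential attenuation times the inverse-square factor.
        flux = S * sp.exp(-mu * T) / (4 * sp.pi * d**2)

        print(flux)                                            # symbolic form
        print(flux.subs({S: 1e6, mu: 0.5, T: 10.0, d: 100.0}).evalf())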

  3. Efficient and Accurate Computational Framework for Injector Design and Analysis, Phase I

    Data.gov (United States)

    National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...

  4. Sentinel nodes identified by computed tomography-lymphography accurately stage the axilla in patients with breast cancer

    International Nuclear Information System (INIS)

    Motomura, Kazuyoshi; Sumino, Hiroshi; Noguchi, Atsushi; Horinouchi, Takashi; Nakanishi, Katsuyuki

    2013-01-01

    Sentinel node biopsy often results in the identification and removal of multiple nodes as sentinel nodes, although most of these nodes could be non-sentinel nodes. This study investigated whether computed tomography-lymphography (CT-LG) can distinguish sentinel nodes from non-sentinel nodes and whether sentinel nodes identified by CT-LG can accurately stage the axilla in patients with breast cancer. This study included 184 patients with breast cancer and clinically negative nodes. Contrast agent was injected interstitially. The location of sentinel nodes was marked on the skin surface using a CT laser light navigator system. Lymph nodes located just under the marks were first removed as sentinel nodes. Then, all dyed nodes or all hot nodes were removed. The mean number of sentinel nodes identified by CT-LG was significantly lower than that of dyed and/or hot nodes removed (1.1 vs 1.8, p <0.0001). Twenty-three (12.5%) patients had ≥2 sentinel nodes identified by CT-LG removed, whereas 94 (51.1%) of patients had ≥2 dyed and/or hot nodes removed (p <0.0001). Pathological evaluation demonstrated that 47 (25.5%) of 184 patients had metastasis to at least one node. All 47 patients demonstrated metastases to at least one of the sentinel nodes identified by CT-LG. CT-LG can distinguish sentinel nodes from non-sentinel nodes, and sentinel nodes identified by CT-LG can accurately stage the axilla in patients with breast cancer. Successful identification of sentinel nodes using CT-LG may facilitate image-based diagnosis of metastasis, possibly leading to the omission of sentinel node biopsy

  5. Axial-field permanent magnet motors for electric vehicles

    Science.gov (United States)

    Campbell, P.

    1981-01-01

    The modelling of an anisotropic alnico magnet for the purpose of field computation involves assigning a value for the material's permeability in the transverse direction. This is generally based upon the preferred direction properties, being all that are easily available. By analyzing the rotation of intrinsic magnetization due to the self demagnetizing field, it is shown that the common assumptions relating the transverse to the preferred direction are not accurate. Transverse magnetization characteristics are needed, and these are given for Alnico 5, 5-7, and 8 magnets, yielding appropriate permeability values.

  6. High accurate time system of the Low Latitude Meridian Circle.

    Science.gov (United States)

    Yang, Jing; Wang, Feng; Li, Zhiming

    In order to obtain a highly accurate time signal for the Low Latitude Meridian Circle (LLMC), a new GPS-based accurate time system was developed, which includes a GPS receiver, a 1 MHz frequency source, and a self-made clock system. The one-pulse-per-second signal of GPS is used synchronously in the clock system, and the information can be collected by a computer automatically. The difficulties caused by eliminating the timekeeper can be overcome by using this system.

  7. Computations of the Magnus effect for slender bodies in supersonic flow

    Science.gov (United States)

    Sturek, W. B.; Schiff, L. B.

    1980-01-01

    A recently reported Parabolized Navier-Stokes code has been employed to compute the supersonic flow field about spinning cone, ogive-cylinder, and boattailed bodies of revolution at moderate incidence. The computations were performed for flow conditions where extensive measurements for wall pressure, boundary layer velocity profiles and Magnus force had been obtained. Comparisons between the computational results and experiment indicate excellent agreement for angles of attack up to six degrees. The comparisons for Magnus effects show that the code accurately predicts the effects of body shape and Mach number for the selected models for Mach numbers in the range of 2-4.

  8. Dosimetry of normal and wedge fields for a cobalt-60 teletherapy unit

    International Nuclear Information System (INIS)

    Tripathi, U.B.; Kelkar, N.Y.

    1980-01-01

    A simple analytical method for computation of dose distributions for normal and wedge fields is described, and the use of the method in planning radiation treatment is outlined. Formulas are given to compute: (1) the depth dose along the central axis of a cobalt-60 beam, (2) the dose to off-axis points, and (3) the dose distribution for a wedge field. Good agreement has been found between theoretical and experimental values. With the help of these formulas, the dose at any point can be easily and accurately calculated and radiotherapy can be planned for tumours of very odd shapes and sizes. The limitation of the method is that the formulas have been derived for 50% field definition. For a cobalt-60 machine having any other field definition, appropriate correction factors have to be applied. (M.G.B.)
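
    The paper's fitted formulas are not reproduced in the abstract; purely as a generic illustration of the central-axis ingredients (inverse-square falloff times effective attenuation), one might write:

        import math

        def central_axis_dose(d_cm, d_max=0.5, ssd=80.0, mu_eff=0.06, d0=1.0):
            """Illustrative Co-60 central-axis model, not the paper's formula.

            d0: dose at depth of maximum d_max; mu_eff: effective attenuation (1/cm).
            """
            inverse_square = ((ssd + d_max) / (ssd + d_cm)) ** 2
            attenuation = math.exp(-mu_eff * (d_cm - d_max))
            return d0 * inverse_square * attenuation

        print(central_axis_dose(10.0))   # relative dose at 10 cm depth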

  9. Late enhanced computed tomography in Hypertrophic Cardiomyopathy enables accurate left-ventricular volumetry

    Energy Technology Data Exchange (ETDEWEB)

    Langer, Christoph; Lutz, M.; Kuehl, C.; Frey, N. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany); Partner Site Hamburg/Kiel/Luebeck, DZHK (German Centre for Cardiovascular Research), Kiel (Germany); Both, M.; Sattler, B.; Jansen, O.; Schaefer, P. [Christian-Albrechts-Universitaet Kiel, Department of Diagnostic Radiology, University Medical Center Schleswig-Holstein (Germany); Harders, H.; Eden, M. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany)

    2014-10-15

    Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-) myocardial fibrosis in Hypertrophic Cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT derived LV muscle mass (LV-MM) which may be related to LE resulting in LE proportion for potential risk stratification in HCM. N=26 HCM-patients underwent leMDCT (64-slice-CT) and cardiovascular magnetic resonance (CMR). In leMDCT iodine contrast (Iopromid, 350 mg/mL; 150 mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV-contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV-contrast assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 g in CMR (p > 0.05). In the group with sufficient contrast LV-MM appeared with 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 g in CMR (p > 0.05). Overall intra-/inter-observer variability of semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT measures correlated well with CMR (r > 0.9). LeMDCT primarily performed for LE-visualization in HCM allows for accurate LV-volumetry including LV-MM in > 90 % of the cases. (orig.)

  10. Electron correlation in molecules: concurrent computation Many-Body Perturbation Theory (ccMBPT) calculations using macrotasking on the NEC SX-3/44 computer

    International Nuclear Information System (INIS)

    Moncrieff, D.; Wilson, S.

    1992-06-01

    The ab initio determination of the electronic structure of molecules is a many-fermion problem involving the approximate description of the motion of the electrons in the field of fixed nuclei. It is an area of research which demands considerable computational resources but having enormous potential in fields as diverse as interstellar chemistry and drug design, catalysis and solid state chemistry, molecular biology and environmental chemistry. Electronic structure calculations almost invariably divide into two main stages: the approximate solution of an independent electron model, in which each electron moves in the average field created by the other electrons in the system, and then, the more computationally demanding determination of a series of corrections to this model, the electron correlation effects. The many-body perturbation theory expansion affords a systematic description of correlation effects, which leads directly to algorithms which are suitable for concurrent computation. We term this concurrent computation Many-Body Perturbation Theory (ccMBPT). The use of a dynamic load balancing technique on the NEC SX-3/44 computer in electron correlation calculations is investigated for the calculation of the most demanding energy component in the most accurate of contemporary ab initio studies. An application to the ground state of the nitrogen molecule is described. We also briefly discuss the extent to which the calculation of the dominant corrections to such studies can be rendered computationally tractable by exploiting both the vector processing and parallel processor capabilities of the NEC SX-3/44 computer. (author)

  11. Effects of Force Field Selection on the Computational Ranking of MOFs for CO2 Separations.

    Science.gov (United States)

    Dokur, Derya; Keskin, Seda

    2018-02-14

    Metal-organic frameworks (MOFs) have been considered as highly promising materials for adsorption-based CO2 separations. The number of synthesized MOFs has been increasing very rapidly. High-throughput molecular simulations are very useful to screen large numbers of MOFs in order to identify the most promising adsorbents prior to extensive experimental studies. Results of molecular simulations depend on the force field used to define the interactions between gas molecules and MOFs. Choosing the appropriate force field for MOFs is essential to make reliable predictions about the materials' performance. In this work, we performed two sets of molecular simulations using the two widely used generic force fields, Dreiding and UFF, and obtained adsorption data of CO2/H2, CO2/N2, and CO2/CH4 mixtures in 100 different MOF structures. Using this adsorption data, several adsorbent evaluation metrics including selectivity, working capacity, sorbent selection parameter, and percent regenerability were computed for each MOF. MOFs were then ranked based on these evaluation metrics, and top performing materials were identified. We then examined the sensitivity of the MOF rankings to the force field type. Our results showed that although there are significant quantitative differences between some adsorbent evaluation metrics computed using different force fields, rankings of the top MOF adsorbents for CO2 separations are generally similar: 8, 8, and 9 out of the top 10 most selective MOFs were found to be identical in the ranking for CO2/H2, CO2/N2, and CO2/CH4 separations using Dreiding and UFF. We finally suggested a force field factor depending on the energy parameters of atoms present in the MOFs to quantify the robustness of the simulation results to the force field selection. This easily computable factor will be highly useful to determine whether the results are sensitive to the force field type or not prior to performing computationally demanding

  12. Evaluating amber force fields using computed NMR chemical shifts.

    Science.gov (United States)

    Koes, David R; Vries, John K

    2017-10-01

    NMR chemical shifts can be computed from molecular dynamics (MD) simulations using a template matching approach and a library of conformers containing chemical shifts generated from ab initio quantum calculations. This approach has potential utility for evaluating the force fields that underlie these simulations. Imperfections in force fields generate flawed atomic coordinates. Chemical shifts obtained from flawed coordinates have errors that can be traced back to these imperfections. We use this approach to evaluate a series of AMBER force fields that have been refined over the course of two decades (ff94, ff96, ff99SB, ff14SB, ff14ipq, and ff15ipq). For each force field a series of MD simulations are carried out for eight model proteins. The calculated chemical shifts for the 1H, 15N, and 13Cα atoms are compared with experimental values. Initial evaluations are based on root mean squared (RMS) errors at the protein level. These results are further refined based on secondary structure and the types of atoms involved in nonbonded interactions. The best chemical shift for identifying force field differences is the shift associated with peptide protons. Examination of the model proteins on a residue by residue basis reveals that force field performance is highly dependent on residue position. Examination of the time course of nonbonded interactions at these sites provides explanations for chemical shift differences at the atomic coordinate level. Results show that the newer ff14ipq and ff15ipq force fields developed with the implicitly polarized charge method perform better than the older force fields. © 2017 Wiley Periodicals, Inc.

  13. Computer-based measurement and automatization application research in nuclear technology fields

    International Nuclear Information System (INIS)

    Jiang Hongfei; Zhang Xiangyang

    2003-01-01

    This paper introduces computer-based measurement and automatization application research in nuclear technology fields. The emphasis of the narration is on the role of software in the development of such systems, and on a network measurement and control software model which has promising prospects for application. Application examples of the research and development are also presented. (authors)

  14. Computer-implemented method and apparatus for autonomous position determination using magnetic field data

    Science.gov (United States)

    Ketchum, Eleanor A. (Inventor)

    2000-01-01

    A computer-implemented method and apparatus for determining the position of a vehicle within 100 km autonomously from magnetic field measurements and attitude data, without a priori knowledge of position. An inverted dipole solution giving two possible position solutions for each measurement of magnetic field data is deterministically calculated by a program-controlled processor solving the inverted first-order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and a vehicle distance from the center of the earth. Correction schemes such as successive substitution and the Newton-Raphson method are applied to each dipole. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths is computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.
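
    A toy version of the correction step, assuming a pure first-order (dipole) field model with nominal Earth constants: given the measured field magnitude and the magnetic colatitude inferred from attitude, Newton-Raphson refines the geocentric distance. The dipole magnitude is in fact invertible in closed form, so this sketch is purely illustrative:

        import math

        B0, RE = 3.12e-5, 6371.2e3      # equatorial surface field (T), Earth radius (m)

        def dipole_mag(r, colat):
            return B0 * (RE / r) ** 3 * math.sqrt(1.0 + 3.0 * math.cos(colat) ** 2)

        def solve_r(B_meas, colat, r=7.0e6, iters=20):
            for _ in range(iters):
                f = dipole_mag(r, colat) - B_meas
                dfdr = -3.0 * dipole_mag(r, colat) / r   # derivative of the r^-3 law
                r -= f / dfdr                            # Newton-Raphson update
            return r

        print(solve_r(B_meas=2.5e-5, colat=math.radians(60.0)) / 1e3, "km")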

  15. Computation of 3-D magnetostatic fields using a reduced scalar potential

    International Nuclear Information System (INIS)

    Biro, O.; Preis, K.; Vrisk, G.; Richter, K.R.

    1993-01-01

    The paper presents some improvements to the finite element computation of static magnetic fields in three dimensions using a reduced magnetic scalar potential. New methods are described for obtaining an edge element representation of the rotational part of the magnetic field from a given source current distribution. In the case when the current distribution is not known in advance, a boundary value problem is set up in terms of a current vector potential. An edge element representation of the solution can be directly used in the subsequent magnetostatic calculation. The magnetic field in a D.C. arc furnace is calculated by first determining the current distribution in terms of a current vector potential. A three dimensional problem involving a permanent magnet as well as a coil is solved and the magnetic field in some points is compared with measurement results

  16. Algorithms for Computing the Magnetic Field, Vector Potential, and Field Derivatives for Circular Current Loops in Cylindrical Coordinates

    Energy Technology Data Exchange (ETDEWEB)

    Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-08-24

    A numerical algorithm for computing the field components Br and Bz and their r and z derivatives with open boundaries in cylindrical coordinates for circular current loops is described. An algorithm for computing the vector potential is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations (especially for the field derivatives) are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. Since cel can evaluate complete elliptic integrals of a fairly general type, in some cases the elliptic integrals can be evaluated without first reducing them to forms containing standard Legendre forms. The algorithms avoid the numerical difficulties that many of the textbook solutions have for points near the axis because of explicit factors of 1/r or 1/r^2 in some of the expressions.
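
    The standard Legendre-form expressions the report starts from are easy to evaluate with SciPy's complete elliptic integrals (the report itself uses Bulirsch's cel routine; the two routes should agree). Loop radius, current, and field point below are arbitrary:

        import numpy as np
        from scipy.special import ellipe, ellipk

        MU0 = 4e-7 * np.pi

        def loop_field(a, I, r, z):
            """(Br, Bz) of a circular loop of radius a carrying current I."""
            if r == 0.0:                    # regular on-axis limit
                return 0.0, MU0 * I * a**2 / (2.0 * (a**2 + z**2) ** 1.5)
            q = (a + r) ** 2 + z**2
            d = (a - r) ** 2 + z**2
            m = 4.0 * a * r / q             # parameter m = k^2 of K(m), E(m)
            K, E = ellipk(m), ellipe(m)
            Br = MU0 * I * z / (2.0 * np.pi * r * np.sqrt(q)) * (
                -K + (a**2 + r**2 + z**2) / d * E)
            Bz = MU0 * I / (2.0 * np.pi * np.sqrt(q)) * (
                K + (a**2 - r**2 - z**2) / d * E)
            return Br, Bz

        print(loop_field(a=0.1, I=100.0, r=0.05, z=0.02))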

  17. Fully automated laboratory and field-portable goniometer used for performing accurate and precise multiangular reflectance measurements

    Science.gov (United States)

    Harms, Justin D.; Bachmann, Charles M.; Ambeau, Brittany L.; Faulring, Jason W.; Ruiz Torres, Andres J.; Badura, Gregory; Myers, Emily

    2017-10-01

    Field-portable goniometers are created for a wide variety of applications. Many of these applications require specific types of instruments and measurement schemes and must operate in challenging environments. Therefore, designs are based on the requirements that are specific to the application. We present a field-portable goniometer that was designed for measuring the hemispherical-conical reflectance factor (HCRF) of various soils and low-growing vegetation in austere coastal and desert environments and biconical reflectance factors in laboratory settings. Unlike some goniometers, this system features a requirement for "target-plane tracking" to ensure that measurements can be collected on sloped surfaces, without compromising angular accuracy. The system also features a second upward-looking spectrometer to measure the spatially dependent incoming illumination, an integrated software package to provide full automation, an automated leveling system to ensure a standard frame of reference, a design that minimizes the obscuration due to self-shading to measure the opposition effect, and the ability to record a digital elevation model of the target region. This fully automated and highly mobile system obtains accurate and precise measurements of HCRF in a wide variety of terrain and in less time than most other systems while not sacrificing consistency or repeatability in laboratory environments.

  18. Spectral fitting method for the solution of time-dependent Schroedinger equations: Applications to atoms in intense laser fields

    International Nuclear Information System (INIS)

    Qiao Haoxue; Cai Qingyu; Rao Jianguo; Li Baiwen

    2002-01-01

    A spectral fitting method for solving the time-dependent Schroedinger equation has been developed and applied to the atom in intense laser fields. This method allows us to obtain a highly accurate time-dependent wave function with a contribution from the high-order term of Δt. Moreover, the time-dependent wave function is determined on a small number of discrete mesh points, thus making calculations simple and accurate. This method is illustrated by computing wave functions and harmonic generation spectra of a model atom in laser fields

  19. Use of Cone Beam Computed Tomography in Endodontics

    Science.gov (United States)

    Scarfe, William C.; Levin, Martin D.; Gane, David; Farman, Allan G.

    2009-01-01

    Cone Beam Computed Tomography (CBCT) is a diagnostic imaging modality that provides high-quality, accurate three-dimensional (3D) representations of the osseous elements of the maxillofacial skeleton. CBCT systems are available that provide small field of view images at low dose with sufficient spatial resolution for applications in endodontic diagnosis, treatment guidance, and posttreatment evaluation. This article provides a literature review and pictorial demonstration of CBCT as an imaging adjunct for endodontics. PMID:20379362

  20. Funnel metadynamics as accurate binding free-energy method

    Science.gov (United States)

    Limongelli, Vittorio; Bonomi, Massimiliano; Parrinello, Michele

    2013-01-01

    A detailed description of the events ruling ligand/protein interaction and an accurate estimation of the drug affinity to its target is of great help in speeding drug discovery strategies. We have developed a metadynamics-based approach, named funnel metadynamics, that allows the ligand to enhance the sampling of the target binding sites and its solvated states. This method leads to an efficient characterization of the binding free-energy surface and an accurate calculation of the absolute protein–ligand binding free energy. We illustrate our protocol in two systems, benzamidine/trypsin and SC-558/cyclooxygenase 2. In both cases, the X-ray conformation has been found as the lowest free-energy pose, and the computed protein–ligand binding free energy in good agreement with experiments. Furthermore, funnel metadynamics unveils important information about the binding process, such as the presence of alternative binding modes and the role of waters. The results achieved at an affordable computational cost make funnel metadynamics a valuable method for drug discovery and for dealing with a variety of problems in chemistry, physics, and material science. PMID:23553839

  1. Toward Improved Force-Field Accuracy through Sensitivity Analysis of Host-Guest Binding Thermodynamics

    Science.gov (United States)

    Yin, Jian; Fenley, Andrew T.; Henriksen, Niel M.; Gilson, Michael K.

    2015-01-01

    Improving the capability of atomistic computer models to predict the thermodynamics of noncovalent binding is critical for successful structure-based drug design, and the accuracy of such calculations remains limited by non-optimal force field parameters. Ideally, one would incorporate protein-ligand affinity data into force field parametrization, but this would be inefficient and costly. We now demonstrate that sensitivity analysis can be used to efficiently tune Lennard-Jones parameters of aqueous host-guest systems for increasingly accurate calculations of binding enthalpy. These results highlight the promise of a comprehensive use of calorimetric host-guest binding data, along with existing validation data sets, to improve force field parameters for the simulation of noncovalent binding, with the ultimate goal of making protein-ligand modeling more accurate and hence speeding drug discovery. PMID:26181208

  2. Accurate prediction of stability changes in protein mutants by combining machine learning with structure based computational mutagenesis.

    Science.gov (United States)

    Masso, Majid; Vaisman, Iosif I

    2008-09-15

    Accurate predictive models for the impact of single amino acid substitutions on protein stability provide insight into protein structure and function. Such models are also valuable for the design and engineering of new proteins. Previously described methods have utilized properties of protein sequence or structure to predict the free energy change of mutants due to thermal (DeltaDeltaG) and denaturant (DeltaDeltaG(H2O)) denaturations, as well as mutant thermal stability (DeltaT(m)), through the application of either computational energy-based approaches or machine learning techniques. However, accuracy associated with applying these methods separately is frequently far from optimal. We detail a computational mutagenesis technique based on a four-body, knowledge-based, statistical contact potential. For any mutation due to a single amino acid replacement in a protein, the method provides an empirical normalized measure of the ensuing environmental perturbation occurring at every residue position. A feature vector is generated for the mutant by considering perturbations at the mutated position and its ordered six nearest neighbors in the 3-dimensional (3D) protein structure. These predictors of stability change are evaluated by applying machine learning tools to large training sets of mutants derived from diverse proteins that have been experimentally studied and described. Predictive models based on our combined approach are either comparable to, or in many cases significantly outperform, previously published results. A web server with supporting documentation is available at http://proteins.gmu.edu/automute.

  3. A robust and accurate approach to computing compressible multiphase flow: Stratified flow model and AUSM+-up scheme

    International Nuclear Information System (INIS)

    Chang, Chih-Hao; Liou, Meng-Sing

    2007-01-01

    In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems will show the capability to capture enormous detail and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosion.
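
    For orientation, the single-phase gas-dynamics AUSM+ flux that the paper's AUSM+-up extends can be written compactly; this Python sketch follows the standard splitting polynomials for a gamma-law gas and is not the paper's multiphase scheme:

        import numpy as np

        GAMMA = 1.4

        def ausm_plus_flux(rhoL, uL, pL, rhoR, uR, pR, alpha=3/16, beta=1/8):
            """AUSM+ interface flux (mass, momentum, energy) for 1-D Euler."""
            a = 0.5 * (np.sqrt(GAMMA * pL / rhoL) + np.sqrt(GAMMA * pR / rhoR))
            ML, MR = uL / a, uR / a

            def M_plus(M):
                return 0.5 * (M + abs(M)) if abs(M) >= 1 else \
                       0.25 * (M + 1) ** 2 + beta * (M**2 - 1) ** 2

            def M_minus(M):
                return 0.5 * (M - abs(M)) if abs(M) >= 1 else \
                       -0.25 * (M - 1) ** 2 - beta * (M**2 - 1) ** 2

            def P_plus(M):
                return 0.5 * (1 + np.sign(M)) if abs(M) >= 1 else \
                       0.25 * (M + 1) ** 2 * (2 - M) + alpha * M * (M**2 - 1) ** 2

            def P_minus(M):
                return 0.5 * (1 - np.sign(M)) if abs(M) >= 1 else \
                       0.25 * (M - 1) ** 2 * (2 + M) - alpha * M * (M**2 - 1) ** 2

            m = M_plus(ML) + M_minus(MR)              # interface Mach number
            p = P_plus(ML) * pL + P_minus(MR) * pR    # interface pressure
            HL = GAMMA * pL / ((GAMMA - 1) * rhoL) + 0.5 * uL**2
            HR = GAMMA * pR / ((GAMMA - 1) * rhoR) + 0.5 * uR**2
            psi = np.array([rhoL, rhoL * uL, rhoL * HL]) if m > 0 else \
                  np.array([rhoR, rhoR * uR, rhoR * HR])
            return a * m * psi + np.array([0.0, p, 0.0])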

  4. The Accurate Particle Tracer Code

    OpenAIRE

    Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi

    2016-01-01

    The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusio...

  5. CAST: a new program package for the accurate characterization of large and flexible molecular systems.

    Science.gov (United States)

    Grebner, Christoph; Becker, Johannes; Weber, Daniel; Bellinger, Daniel; Tafipolski, Maxim; Brückner, Charlotte; Engels, Bernd

    2014-09-15

    The presented program package, Conformational Analysis and Search Tool (CAST), allows the accurate treatment of large and flexible (macro)molecular systems. For the determination of thermally accessible minima CAST offers the newly developed TabuSearch algorithm, but algorithms such as Monte Carlo (MC), MC with minimization, and molecular dynamics are implemented as well. For the determination of reaction paths, CAST provides the PathOpt, the Nudged Elastic Band, and the umbrella sampling approach. Access to free energies is possible through the free energy perturbation approach. Along with a number of standard force fields, a newly developed symmetry-adapted perturbation theory-based force field is included. Semiempirical computations are possible through DFTB+ and MOPAC interfaces. For calculations based on density functional theory, a Message Passing Interface (MPI) interface to the Graphics Processing Unit (GPU)-accelerated TeraChem program is available. The program is available on request. Copyright © 2014 Wiley Periodicals, Inc.

  6. Accurate magnetic field calculations for contactless energy transfer coils

    OpenAIRE

    Sonntag, C.L.W.; Spree, M.; Lomonova, E.A.; Duarte, J.L.; Vandenput, A.J.A.

    2007-01-01

    In this paper, a method for estimating the magnetic field intensity from hexagon spiral windings commonly found in contactless energy transfer applications is presented. The hexagonal structures are modeled in a magneto-static environment using Biot-Savart current stick vectors. The accuracy of the models is evaluated by mapping the current sticks and the hexagon spiral winding tracks to a local two-dimensional plane, and comparing their two-dimensional magnetic field intensities. The accurac...
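
    A current-stick evaluator in the numerically robust two-endpoint form takes only a few lines of Python; the hexagonal turn below is an invented example, not a geometry from the paper:

        import numpy as np

        MU0 = 4e-7 * np.pi

        def current_stick_B(a, b, p, I=1.0):
            """Field at p from current I flowing along the segment a -> b."""
            r1, r2 = p - a, p - b
            n1, n2 = np.linalg.norm(r1), np.linalg.norm(r2)
            num = np.cross(r1, r2) * (n1 + n2)
            den = n1 * n2 * (n1 * n2 + np.dot(r1, r2))
            return MU0 * I / (4.0 * np.pi) * num / den

        # One hexagonal turn of unit circumradius in the z = 0 plane
        ts = np.linspace(0.0, 2.0 * np.pi, 7)
        verts = [np.array([np.cos(t), np.sin(t), 0.0]) for t in ts]
        point = np.array([0.0, 0.0, 0.1])
        B = sum(current_stick_B(verts[i], verts[i + 1], point) for i in range(6))
        print(B)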

  7. Neural Network Design on the SRC-6 Reconfigurable Computer

    Science.gov (United States)

    2006-12-01

    fingerprint identification. In this field, automatic identification methods are used to save time, especially for the purpose of fingerprint matching in...grid widths and lengths and therefore was useful in producing an accurate canvas with which to create sample training images. The added benefit of...tools available free of charge and readily accessible on the computer, it was simple to design bitmap data files visually on a canvas and then

  8. Prostate cancer nodal oligometastasis accurately assessed using prostate-specific membrane antigen positron emission tomography-computed tomography and confirmed histologically following robotic-assisted lymph node dissection.

    Science.gov (United States)

    O'Kane, Dermot B; Lawrentschuk, Nathan; Bolton, Damien M

    2016-01-01

    We herein present a case of a 76-year-old gentleman, where prostate-specific membrane antigen positron emission tomography-computed tomography (PSMA PET-CT) was used to accurately detect prostate cancer (PCa), pelvic lymph node (LN) metastasis in the setting of biochemical recurrence following definitive treatment for PCa. The positive PSMA PET-CT result was confirmed with histological examination of the involved pelvic LNs following pelvic LN dissection.

  9. Prostate cancer nodal oligometastasis accurately assessed using prostate-specific membrane antigen positron emission tomography-computed tomography and confirmed histologically following robotic-assisted lymph node dissection

    Directory of Open Access Journals (Sweden)

    Dermot B O'Kane

    2016-01-01

    Full Text Available We herein present a case of a 76-year-old gentleman, where prostate-specific membrane antigen positron emission tomography-computed tomography (PSMA PET-CT) was used to accurately detect prostate cancer (PCa), pelvic lymph node (LN) metastasis in the setting of biochemical recurrence following definitive treatment for PCa. The positive PSMA PET-CT result was confirmed with histological examination of the involved pelvic LNs following pelvic LN dissection.

  10. Computation of magnetic fields within source regions of ionospheric and magnetospheric currents

    DEFF Research Database (Denmark)

    Engels, U.; Olsen, Nils

    1998-01-01

    A general method of computing the magnetic effect caused by a predetermined three-dimensional external current density is presented. It takes advantage of the representation of solenoidal vector fields in terms of toroidal and poloidal modes expressed by two independent series of spherical harmon...

  11. Thermodynamically Consistent Algorithms for the Solution of Phase-Field Models

    KAUST Repository

    Vignal, Philippe

    2016-02-11

    Phase-field models are emerging as a promising strategy to simulate interfacial phenomena. Rather than tracking interfaces explicitly as done in sharp interface descriptions, these models use a diffuse order parameter to monitor interfaces implicitly. This implicit description, as well as solid physical and mathematical footings, allow phase-field models to overcome problems found by predecessors. Nonetheless, the method has significant drawbacks. The phase-field framework relies on the solution of high-order, nonlinear partial differential equations. Solving these equations entails a considerable computational cost, so finding efficient strategies to handle them is important. Also, standard discretization strategies can many times lead to incorrect solutions. This happens because, for numerical solutions to phase-field equations to be valid, physical conditions such as mass conservation and free energy monotonicity need to be guaranteed. In this work, we focus on the development of thermodynamically consistent algorithms for time integration of phase-field models. The first part of this thesis focuses on an energy-stable numerical strategy developed for the phase-field crystal equation. This model was put forward to model microstructure evolution. The algorithm developed conserves, guarantees energy stability and is second order accurate in time. The second part of the thesis presents two numerical schemes that generalize literature regarding energy-stable methods for conserved and non-conserved phase-field models. The time discretization strategies can conserve mass if needed, are energy-stable, and second order accurate in time. We also develop an adaptive time-stepping strategy, which can be applied to any second-order accurate scheme. This time-adaptive strategy relies on a backward approximation to give an accurate error estimator. The spatial discretization, in both parts, relies on a mixed finite element formulation and isogeometric analysis. The codes are

  12. Application verification research of cloud computing technology in the field of real time aerospace experiment

    Science.gov (United States)

    Wan, Junwei; Chen, Hongyan; Zhao, Jing

    2017-08-01

    According to the requirements of real-time operation, reliability, and safety for aerospace experiments, a single-center cloud computing technology application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments is tested and verified. Based on the analysis of the test results, a preliminary conclusion is obtained: a cloud computing platform can be applied to computing-intensive aerospace experiment business. For I/O-intensive business, it is recommended to use traditional physical machines.

  13. Advanced methods for the computation of particle beam transport and the computation of electromagnetic fields and beam-cavity interactions

    International Nuclear Information System (INIS)

    Dragt, A.J.; Gluckstern, R.L.

    1992-11-01

    The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides

  14. A boundary integral method for numerical computation of radar cross section of 3D targets using hybrid BEM/FEM with edge elements

    Science.gov (United States)

    Dodig, H.

    2017-11-01

    This contribution presents a boundary integral formulation for the numerical computation of the time-harmonic radar cross section of 3D targets. The method relies on a hybrid edge-element BEM/FEM to compute near-field edge element coefficients that are associated with the near electric and magnetic fields at the boundary of the computational domain. A special boundary integral formulation is presented that computes the radar cross section directly from these edge element coefficients. Consequently, there is no need for a near-to-far-field transformation (NTFFT), which is a common step in RCS computations. By the end of the paper it is demonstrated that the formulation yields accurate results for canonical models such as spheres, cubes, cones and pyramids. The method demonstrates accuracy even in the case of a dielectrically coated PEC sphere at an interior resonance frequency, which is a common problem for computational electromagnetics codes.

  15. Time-Accurate Unsteady Pressure Loads Simulated for the Space Launch System at Wind Tunnel Conditions

    Science.gov (United States)

    Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, William L.; Glass, Christopher E.; Streett, Craig L.; Schuster, David M.

    2015-01-01

    A transonic flow field about a Space Launch System (SLS) configuration was simulated with the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics (CFD) code at wind tunnel conditions. Unsteady, time-accurate computations were performed using second-order Delayed Detached Eddy Simulation (DDES) for up to 1.5 physical seconds. The surface pressure time history was collected at 619 locations, 169 of which matched locations on a 2.5 percent wind tunnel model that was tested in the 11 ft. x 11 ft. test section of the NASA Ames Research Center's Unitary Plan Wind Tunnel. Comparisons between computation and experiment showed that the peak surface pressure RMS level occurs behind the forward attach hardware, and good agreement for frequency and power was obtained in this region. Computational domain, grid resolution, and time step sensitivity studies were performed. These included an investigation of pseudo-time sub-iteration convergence. Using these sensitivity studies and experimental data comparisons, a set of best practices to date has been established for FUN3D simulations for SLS launch vehicle analysis. To the author's knowledge, this is the first time DDES has been used in a systematic approach to establish the simulation time needed to analyze unsteady pressure loads on a space launch vehicle such as the NASA SLS.

  16. RichMol: A general variational approach for rovibrational molecular dynamics in external electric fields

    Science.gov (United States)

    Owens, Alec; Yachmenev, Andrey

    2018-03-01

    In this paper, a general variational approach for computing the rovibrational dynamics of polyatomic molecules in the presence of external electric fields is presented. Highly accurate, full-dimensional variational calculations provide a basis of field-free rovibrational states for evaluating the rovibrational matrix elements of high-rank Cartesian tensor operators and for solving the time-dependent Schrödinger equation. The effect of the external electric field is treated as a multipole moment expansion truncated at the second hyperpolarizability interaction term. Our fully numerical and computationally efficient method has been implemented in a new program, RichMol, which can simulate the effects of multiple external fields of arbitrary strength, polarization, pulse shape, and duration. Illustrative calculations of two-color orientation and rotational excitation with an optical centrifuge of NH3 are discussed.
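
    The truncated multipole expansion mentioned in the abstract has the familiar form below (sign and factorial conventions vary between authors; repeated Greek indices are summed):

        H_{\mathrm{int}}(t) = -\mu_\alpha E_\alpha(t)
            - \tfrac{1}{2}\,\alpha_{\alpha\beta}\, E_\alpha(t) E_\beta(t)
            - \tfrac{1}{6}\,\beta_{\alpha\beta\gamma}\, E_\alpha(t) E_\beta(t) E_\gamma(t)
            - \tfrac{1}{24}\,\gamma_{\alpha\beta\gamma\delta}\, E_\alpha(t) E_\beta(t) E_\gamma(t) E_\delta(t),

    with μ the dipole moment, α the polarizability, and β, γ the first and second hyperpolarizabilities.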

  17. Experimental and computational investigation of the NASA low-speed centrifugal compressor flow field

    Science.gov (United States)

    Hathaway, Michael D.; Chriss, Randall M.; Wood, Jerry R.; Strazisar, Anthony J.

    1993-01-01

    An experimental and computational investigation of the NASA Lewis Research Center's low-speed centrifugal compressor (LSCC) flow field was conducted using laser anemometry and Dawes' three-dimensional viscous code. The experimental configuration consisted of a backswept impeller followed by a vaneless diffuser. Measurements of the three-dimensional velocity field were acquired at several measurement planes through the compressor. The measurements describe both the throughflow and secondary velocity field along each measurement plane. In several cases the measurements provide details of the flow within the blade boundary layers. Insight into the complex flow physics within centrifugal compressors is provided by the computational fluid dynamics analysis (CFD), and assessment of the CFD predictions is provided by comparison with the measurements. Five-hole probe and hot-wire surveys at the inlet and exit to the impeller as well as surface flow visualization along the impeller blade surfaces provided independent confirmation of the laser measurement technique. The results clearly document the development of the throughflow velocity wake that is characteristic of unshrouded centrifugal compressors.

  18. A novel method for computation of the discrete Fourier transform over characteristic two finite field of even extension degree

    OpenAIRE

    Fedorenko, Sergei V.

    2011-01-01

    A novel method for computation of the discrete Fourier transform over a finite field with reduced multiplicative complexity is described. If the number of multiplications is to be minimized, then the novel method for the finite field of even extension degree is the best known method of discrete Fourier transform computation. A constructive method of building a cyclic convolution over a finite field is introduced.
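
    As a baseline for the multiplication counts such methods reduce, a naive DFT over GF(2^4) can be coded in Python directly from exp/log tables; the primitive polynomial x^4 + x + 1 and the delta input are assumptions for the example:

        # GF(16) arithmetic via exp/log tables for the primitive element alpha = x
        PRIM = 0b10011                      # x^4 + x + 1
        exp_t, log_t = [0] * 30, [0] * 16
        v = 1
        for i in range(15):
            exp_t[i] = exp_t[i + 15] = v
            log_t[v] = i
            v <<= 1
            if v & 0b10000:
                v ^= PRIM

        def gf_mul(a, b):
            return 0 if a == 0 or b == 0 else exp_t[log_t[a] + log_t[b]]

        def dft_gf16(f):
            """F_j = sum_i f_i * alpha^(i*j); addition in GF(2^4) is XOR."""
            out = []
            for j in range(15):
                s = 0
                for i, fi in enumerate(f):
                    if fi:
                        s ^= gf_mul(fi, exp_t[(i * j) % 15])
                out.append(s)
            return out

        print(dft_gf16([1] + [0] * 14))   # delta input gives an all-ones spectrum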

  19. Supplemental computational phantoms to estimate out-of-field absorbed dose in photon radiotherapy

    Science.gov (United States)

    Gallagher, Kyle J.; Tannous, Jaad; Nabha, Racile; Feghali, Joelle Ann; Ayoub, Zeina; Jalbout, Wassim; Youssef, Bassem; Taddei, Phillip J.

    2018-01-01

    The purpose of this study was to develop a straightforward method of supplementing patient anatomy and estimating out-of-field absorbed dose for a cohort of pediatric radiotherapy patients with limited recorded anatomy. A cohort of nine children, aged 2-14 years, who had received 3D conformal radiotherapy for low-grade localized brain tumors (LBTs) was randomly selected for this study. The extent of these patients' computed tomography simulation image sets was cranial only. To approximate their missing anatomy, we supplemented the LBT patients' image sets with computed tomography images of patients in a previous study with larger extents of matched sex, height, and mass and for whom contours of organs at risk for radiogenic cancer had already been delineated. Rigid fusion was performed between the LBT patients' data and that of the supplemental computational phantoms using commercial software and in-house codes. In-field dose was calculated with a clinically commissioned treatment planning system, and out-of-field dose was estimated with a previously developed analytical model that was re-fit with parameters based on new measurements for intracranial radiotherapy. Mean doses greater than 1 Gy were found in the red bone marrow, remainder, thyroid, and skin of the patients in this study. Mean organ doses between 150 mGy and 1 Gy were observed in the breast tissue of the girls and lungs of all patients. Distant organs, i.e. prostate, bladder, uterus, and colon, received mean organ doses less than 150 mGy. The mean organ doses of the younger, smaller LBT patients (0-4 years old) were a factor of 2.4 greater than those of the older, larger patients (8-12 years old). Our findings demonstrated the feasibility of a straightforward method of applying supplemental computational phantoms and dose-calculation models to estimate absorbed dose for a set of children of various ages who received radiotherapy and for whom anatomies were largely missing in their original computed tomography simulation image sets.

  20. Computer-aided dispatch--traffic management center field operational test : Washington State final report

    Science.gov (United States)

    2006-05-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...

  1. Use of Cone Beam Computed Tomography in Endodontics

    Directory of Open Access Journals (Sweden)

    William C. Scarfe

    2009-01-01

    Cone Beam Computed Tomography (CBCT) is a diagnostic imaging modality that provides high-quality, accurate three-dimensional (3D) representations of the osseous elements of the maxillofacial skeleton. CBCT systems are available that provide small field-of-view images at low dose with sufficient spatial resolution for applications in endodontic diagnosis, treatment guidance, and posttreatment evaluation. This article provides a literature review and pictorial demonstration of CBCT as an imaging adjunct for endodontics.

  2. High degree utilization of computers for design of nuclear power plants

    International Nuclear Information System (INIS)

    Masui, Takao; Sawada, Takashi

    1992-01-01

    Nuclear power plants are a huge technology in which various technologies are combined, and high safety is demanded of them. In the design of nuclear power plants it is therefore necessary to sufficiently grasp the behavior of the plants and to confirm their safety by accurate design evaluation under various assumed operational conditions, and the most advanced computers of each age have been utilized as the indispensable tool for this analysis and evaluation. Computers are utilized in the fields of design, analysis and evaluation, in the support of design work, and also in operation control. The utilization of computers for the core design, hydrothermal design, core structure design, safety analysis and structural analysis of PWR plants, and for the nuclear design, safety analysis and heat flow analysis of FBR plants, as well as their application to design support and to operation control, is explained. (K.I.)

  3. Quality Assurance Programme for Computed Tomography: Diagnostic and Therapy Applications

    Energy Technology Data Exchange (ETDEWEB)

    NONE

    2012-08-15

    This publication presents a harmonized approach to quality assurance in the field of computed tomography applied to both diagnostics and therapy. It gives a careful analysis of the principles and specific instructions that can be used for a quality assurance programme for optimal performance and reduced patient dose in diagnostic radiology. In some cases, radiotherapy programmes are making a transition from 2-D to 3-D radiotherapy, a complex process which critically depends on accurate treatment planning. In this respect, the authors also provide detailed information about the elements needed for quality assurance testing, including those relating to accurate patient characterization as needed for radiotherapy treatment planning.

  4. Determination of electrical characteristics of body tissues for computational dosimetry studies

    International Nuclear Information System (INIS)

    Silva, Rafael Monteiro da Cruz; Domingues, Luis Adriano M.C.; Neto, Athanasio Mpalantinos; Barbosa, Carlos Ruy Nunez

    2008-01-01

    Increasing public concern about human exposure to electromagnetic fields led to the development of international exposure standards, which reflect the current scientific knowledge on this subject. Existing exposure limits (reference levels) are based on maximum admissible fields or induced current densities inside the human body, called basic restrictions. Since those physical quantities cannot be readily measured, they must be estimated using techniques of computational dosimetry. These techniques rely on accurate computational modelling of human bodies to establish the relation of the external field (electric/magnetic) to the induced current (internal field). Nowadays the models available for human body simulation (FEM, FDM,...) are quite accurate, especially when using geometric discretization obtained from medical imaging techniques; however, the determination of tissue characteristics (permittivity and conductivity) is still an issue to be dealt with. In current studies the electrical characteristics (permittivity and conductivity) of body tissues are based on values obtained from measurements done on tissue samples taken from dead bodies. However, those values may not adequately represent the behaviour of living tissues. In this paper a research effort designed to characterize the permittivity of human body tissues is presented, consisting of measurements and simulations designed to determine, using indirect methods, the electrical behaviour of living tissues. A study of exposure assessment on a real high-voltage transmission line in Brazil, using measured permittivity values combined with a finite element model of the human body, is presented in the panel. (author)

  5. Approach and tool for computer animation of fields in electrical apparatus

    International Nuclear Information System (INIS)

    Miltchev, Radoslav; Yatchev, Ivan S.; Ritchie, Ewen

    2002-01-01

    The paper presents a technical approach and post-processing tool for creating and displaying computer animation. The approach enables the handling of results for two- and three-dimensional physical field phenomena obtained from finite element software, and the display of movement processes in simulations of electrical apparatus. The main goal of this work is to extend the auxiliary features built into general-purpose CAD software working in the Windows environment. Different storage techniques were examined and the one employing image capturing was chosen. The developed tool provides the benefits of independent visualisation, the creation of scenarios, and facilities for exporting animations in common file formats for distribution on different computer platforms. It also provides a valuable educational tool. (Author)

  6. Multidetector row computed tomography may accurately estimate plaque vulnerability. Does MDCT accurately estimate plaque vulnerability? (Pro)

    International Nuclear Information System (INIS)

    Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa

    2011-01-01

    Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)

  7. Cone Beam Computed Tomography (CBCT) in the Field of Interventional Oncology of the Liver

    Energy Technology Data Exchange (ETDEWEB)

    Bapst, Blanche, E-mail: blanchebapst@hotmail.com; Lagadec, Matthieu, E-mail: matthieu.lagadec@bjn.aphp.fr [Beaujon Hospital, University Hospitals Paris Nord Val de Seine, Beaujon, Department of Radiology (France); Breguet, Romain, E-mail: romain.breguet@hcuge.ch [University Hospital of Geneva (Switzerland); Vilgrain, Valérie, E-mail: Valerie.vilgrain@bjn.aphp.fr; Ronot, Maxime, E-mail: maxime.ronot@bjn.aphp.fr [Beaujon Hospital, University Hospitals Paris Nord Val de Seine, Beaujon, Department of Radiology (France)

    2016-01-15

    Cone beam computed tomography (CBCT) is an imaging modality that provides computed tomographic images using a rotational C-arm equipped with a flat panel detector as part of the Angiography suite. The aim of this technique is to provide additional information to conventional 2D imaging to improve the performance of interventional liver oncology procedures (intraarterial treatments such as chemoembolization or selective internal radiation therapy, and percutaneous tumor ablation). CBCT provides accurate tumor detection and targeting, periprocedural guidance, and post-procedural evaluation of treatment success. This technique can be performed during intraarterial or intravenous contrast agent administration with various acquisition protocols to highlight liver tumors, liver vessels, or the liver parenchyma. The purpose of this review is to present an extensive overview of published data on CBCT in interventional oncology of the liver, for both percutaneous ablation and intraarterial procedures.

  8. Accurate Bit Error Rate Calculation for Asynchronous Chaos-Based DS-CDMA over Multipath Channel

    Science.gov (United States)

    Kaddoum, Georges; Roviras, Daniel; Chargé, Pascal; Fournier-Prunaret, Daniele

    2009-12-01

    An accurate approach to compute the bit error rate expression for a multiuser chaos-based DS-CDMA system is presented in this paper. For a more realistic communication system, a slow-fading multipath channel is considered, together with a simple RAKE receiver structure. Based on the bit energy distribution, this approach gives accurate results at a low computational cost compared with other computation methods in the literature. Perfect estimation of the channel coefficients, the associated delays, and chaos synchronization is assumed. The bit error rate is derived in terms of the bit energy distribution, the number of paths, the noise variance, and the number of users. Results are illustrated by theoretical calculations and numerical simulations, which point out the accuracy of our approach.

  9. Fast sweeping algorithm for accurate solution of the TTI eikonal equation using factorization

    KAUST Repository

    bin Waheed, Umair

    2017-06-10

    Traveltime computation is essential for many seismic data processing applications and velocity analysis tools. High-resolution seismic imaging requires eikonal solvers to account for anisotropy whenever it significantly affects the seismic wave kinematics. Moreover, computation of auxiliary quantities, such as amplitude and take-off angle, rely on highly accurate traveltime solutions. However, the finite-difference based eikonal solution for a point-source initial condition has an upwind source-singularity at the source position, since the wavefront curvature is large near the source point. Therefore, all finite-difference solvers, even the high-order ones, show inaccuracies since the errors due to source-singularity spread from the source point to the whole computational domain. We address the source-singularity problem for tilted transversely isotropic (TTI) eikonal solvers using factorization. We solve a sequence of factored tilted elliptically anisotropic (TEA) eikonal equations iteratively, each time by updating the right hand side function. At each iteration, we factor the unknown TEA traveltime into two factors. One of the factors is specified analytically, such that the other factor is smooth in the source neighborhood. Therefore, through the iterative procedure we obtain accurate solution to the TTI eikonal equation. Numerical tests show significant improvement in accuracy due to factorization. The idea can be easily extended to compute accurate traveltimes for models with lower anisotropic symmetries, such as orthorhombic, monoclinic or even triclinic media.
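
    To make the factorization idea concrete in the simplest (isotropic) setting — the TEA case carries extra anisotropic coefficients — the unknown traveltime is split as a product of an analytic factor and a smooth unknown:

```latex
% Isotropic illustration of the factored eikonal equation; the paper's
% solver applies the same idea to the tilted elliptically anisotropic case.
\begin{align*}
|\nabla T|^{2} &= \frac{1}{v^{2}(\mathbf{x})}, \qquad
T = T_{0}\,\hat{\tau}, \qquad
T_{0}(\mathbf{x}) = \frac{|\mathbf{x}-\mathbf{x}_{s}|}{v(\mathbf{x}_{s})},\\[4pt]
T_{0}^{2}\,|\nabla\hat{\tau}|^{2}
 + 2\,T_{0}\,\hat{\tau}\,\big(\nabla T_{0}\cdot\nabla\hat{\tau}\big)
 + \hat{\tau}^{2}\,|\nabla T_{0}|^{2} &= \frac{1}{v^{2}(\mathbf{x})}.
\end{align*}
```

    Because T0 absorbs the point-source singularity exactly, the remaining factor τ̂ is smooth near the source and is well captured by finite differences.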

  10. Advances in Computational Stability Analysis of Composite Aerospace Structures

    International Nuclear Information System (INIS)

    Degenhardt, R.; Araujo, F. C. de

    2010-01-01

    The European aircraft industry demands reduced development and operating costs. Structural weight reduction by exploitation of structural reserves in composite aerospace structures contributes to this aim; however, it requires accurate and experimentally validated stability analysis of real structures under realistic loading conditions. This paper presents different advances in the computational stability analysis of composite aerospace structures which contribute to that field. For stringer-stiffened panels, main results of the finished EU project COCOMAT are given; it investigated the exploitation of reserves in primary fibre composite fuselage structures through an accurate and reliable simulation of postbuckling and collapse. For unstiffened cylindrical composite shells, a proposal for a new design method is presented.

  11. Absolute Hounsfield unit measurement on noncontrast computed tomography cannot accurately predict struvite stone composition.

    Science.gov (United States)

    Marchini, Giovanni Scala; Gebreselassie, Surafel; Liu, Xiaobo; Pynadath, Cindy; Snyder, Grace; Monga, Manoj

    2013-02-01

    The purpose of our study was to determine, in vivo, whether single-energy noncontrast computed tomography (NCCT) can accurately predict the presence/percentage of struvite stone composition. We retrospectively searched for all patients with struvite components on stone composition analysis between January 2008 and March 2012. Inclusion criteria were NCCT prior to stone analysis and stone size ≥4 mm. A single urologist, blinded to stone composition, reviewed all NCCT to acquire stone location, dimensions, and Hounsfield unit (HU). HU density (HUD) was calculated by dividing mean HU by the stone's largest transverse diameter. Stone analysis was performed via Fourier transform infrared spectrometry. Independent-sample Student's t-test and analysis of variance (ANOVA) were used to compare HU/HUD among groups. Spearman's correlation test was used to determine the correlation between HU and stone size and between HU/HUD and the percentage of each component within the stone. Significance was considered if p<0.05. The percentage of struvite correlated weakly and positively with HU (R=0.017; p=0.912) and negatively with HUD (R=-0.20; p=0.898). Comparing pure struvite stones (n=5) with other miscellaneous stones (n=39), no difference was found for HU (p=0.09), but HUD was significantly lower for pure stones (27.9±23.6 v 72.5±55.9, respectively; p=0.006), although significant overlaps were seen. Pure struvite stones have significantly lower HUD than mixed struvite stones, but overlap exists. A low HUD may increase the suspicion for a pure struvite calculus.

  12. Parallel computation of electrostatic potentials and fields in technical geometries on SUPRENUM

    International Nuclear Information System (INIS)

    Alef, M.

    1990-02-01

    The programs EPOTZR and EFLDZR have been developed in order to compute electrostatic potentials and the corresponding fields in technical geometries (example: diode geometry for optimum focussing of ion beams in pulsed high-current ion diodes). The Poisson equation is discretized on a two-dimensional boundary-fitted grid in the (r,z)-plane and solved using multigrid methods. The z- and r-components of the field are determined by numerical differentiation of the potential. This report contains the user's guide for the SUPRENUM versions EPOTZR-P and EFLDZR-P. (orig./HP)
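
    As a minimal sketch of the final step (not the EPOTZR/EFLDZR code itself; the grid, spacing, and stand-in potential below are assumptions), the field follows from the potential by numerical differentiation:

```python
import numpy as np

# Differentiate a potential phi(z, r) on a uniform grid to get E = -grad(phi).
dz = dr = 1e-3                                        # grid spacing, m (assumed)
z, r = np.meshgrid(np.arange(0.0, 0.1, dz),
                   np.arange(0.0, 0.05, dr), indexing="ij")
phi = 1e3 * np.exp(-((z - 0.05)**2 + r**2) / 1e-4)    # stand-in potential, V

# Central differences in the interior, one-sided at the boundaries.
Ez, Er = np.gradient(-phi, dz, dr)
```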

  13. An accurate solver for forward and inverse transport

    International Nuclear Information System (INIS)

    Monard, Francois; Bal, Guillaume

    2010-01-01

    This paper presents a robust and accurate way to solve steady-state linear transport (radiative transfer) equations numerically. Our main objective is to address the inverse transport problem, in which the optical parameters of a domain of interest are reconstructed from measurements performed at the domain's boundary. This inverse problem has important applications in medical and geophysical imaging, and more generally in any field involving high frequency waves or particles propagating in scattering environments. Stable solutions of the inverse transport problem require that the singularities of the measurement operator, which maps the optical parameters to the available measurements, be captured with sufficient accuracy. This in turn requires that the free propagation of particles be calculated with care, which is a difficult problem on a Cartesian grid. A standard discrete ordinates method is used for the direction of propagation of the particles. Our methodology to address spatial discretization is based on rotating the computational domain so that each direction of propagation is always aligned with one of the grid axes. Rotations are performed in the Fourier domain to achieve spectral accuracy. The numerical dispersion of the propagating particles is therefore minimal. As a result, the ballistic and single scattering components of the transport solution are calculated robustly and accurately. Physical blurring effects, such as small angular diffusion, are also incorporated into the numerical tool. Forward and inverse calculations performed in a two-dimensional setting exemplify the capabilities of the method. Although the methodology might not be the fastest way to solve transport equations, its physical accuracy provides us with a numerical tool to assess what can and cannot be reconstructed in inverse transport theory.

  14. Computing black hole entropy in loop quantum gravity from a conformal field theory perspective

    International Nuclear Information System (INIS)

    Agulló, Iván; Borja, Enrique F.; Díaz-Polo, Jacobo

    2009-01-01

    Motivated by the analogy proposed by Witten between Chern-Simons and conformal field theories, we explore an alternative way of computing the entropy of a black hole starting from the isolated horizon framework in loop quantum gravity. The consistency of the result opens a window for the interplay between conformal field theory and the description of black holes in loop quantum gravity

  15. Classification of integrable two-dimensional models of relativistic field theory by means of computer

    International Nuclear Information System (INIS)

    Getmanov, B.S.

    1988-01-01

    The results of the classification of two-dimensional relativistic field models ((1) spinor; (2) essentially nonlinear scalar) possessing higher conservation laws, obtained using a system of symbolic computer calculations, are briefly presented.

  16. Multiscale Methods, Parallel Computation, and Neural Networks for Real-Time Computer Vision.

    Science.gov (United States)

    Battiti, Roberto

    1990-01-01

    This thesis presents new algorithms for low- and intermediate-level computer vision. The guiding ideas in the presented approach are those of hierarchical and adaptive processing, concurrent computation, and supervised learning. Processing of the visual data at different resolutions is used not only to reduce the amount of computation necessary to reach the fixed point, but also to produce a more accurate estimation of the desired parameters. The presented adaptive multiple-scale technique is applied to the problem of motion field estimation. Different parts of the image are analyzed at a resolution that is chosen in order to minimize the error in the coefficients of the differential equations to be solved. Tests with video-acquired images show that velocity estimation is more accurate over a wide range of motion than with the homogeneous scheme. In some cases the introduction of explicit discontinuities coupled to the continuous variables can be used to avoid propagation of visual information from areas corresponding to objects with different physical and/or kinematic properties. The human visual system uses concurrent computation in order to process the vast amount of visual data in "real time." Although with different technological constraints, parallel computation can be used efficiently for computer vision. All the presented algorithms have been implemented on medium-grain distributed-memory multicomputers with a speed-up approximately proportional to the number of processors used. A simple two-dimensional domain decomposition assigns regions of the multiresolution pyramid to the different processors. The inter-processor communication needed during the solution process is proportional to the linear dimension of the assigned domain, so that efficiency is close to 100% if a large region is assigned to each processor. Finally, learning algorithms are shown to be a viable technique to engineer computer vision systems for different applications starting from

  17. Real-time temperature field measurement based on acoustic tomography

    International Nuclear Information System (INIS)

    Bao, Yong; Jia, Jiabin; Polydorides, Nick

    2017-01-01

    Acoustic tomography can be used to measure the temperature field from the time-of-flight (TOF). In order to capture real-time temperature field changes and accurately yield quantitative temperature images, two improvements to the conventional acoustic tomography system are studied: simultaneous acoustic transmission and TOF collection along multiple ray paths, and an offline iteration reconstruction algorithm. During system operation, all the acoustic transceivers send modulated and filtered wideband Kasami sequences simultaneously to facilitate fast and accurate TOF measurements using cross-correlation detection. For image reconstruction, the iteration process is separated and executed offline beforehand to shorten computation time for online temperature field reconstruction. The feasibility and effectiveness of the developed methods are validated in the simulation study. The simulation results demonstrate that the proposed method can reduce the processing time per frame from 160 ms to 20 ms, while the reconstruction error remains less than 5%. Hence, the proposed method has great potential in the measurement of rapid temperature change with good temporal and spatial resolution. (paper)
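
    A minimal sketch of the two per-ray steps described above — TOF from a cross-correlation peak, then a path-mean temperature from the speed of sound — assuming a simplified dry-air relation c ≈ 20.05·√T; the paper's actual pipeline adds modulation, filtering, and the offline-iteration image reconstruction:

```python
import numpy as np

def estimate_tof(tx, rx, fs):
    """Time of flight from the cross-correlation peak between the emitted
    sequence tx and the received signal rx, both sampled at fs (Hz). The
    paper transmits wideband Kasami sequences to sharpen this peak."""
    xc = np.correlate(rx, tx, mode="full")
    lag = int(np.argmax(np.abs(xc))) - (len(tx) - 1)
    return lag / fs

def path_mean_temperature(length_m, tof_s):
    """Mean gas temperature (K) along one ray, via c = L/TOF and the dry-air
    approximation c = 20.05 * sqrt(T) m/s (an assumed, simplified gas model)."""
    c = length_m / tof_s
    return (c / 20.05) ** 2
```

    The tomographic step then inverts the set of path-mean values for the 2D temperature field, which is the part the paper accelerates by moving the iterative computation offline.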

  18. Computational model for superconducting toroidal-field magnets for a tokamak reactor

    International Nuclear Information System (INIS)

    Turner, L.R.; Abdou, M.A.

    1978-01-01

    A computational model for predicting the performance characteristics and cost of superconducting toroidal-field (TF) magnets in tokamak reactors is presented. The model can be used to compare the technical and economic merits of different approaches to the design of TF magnets for a reactor system. The model has been integrated into the ANL Systems Analysis Program. Samples of results obtainable with the model are presented

  19. Atomistic Force Field for Pyridinium-Based Ionic Liquids: Reliable Transport Properties

    DEFF Research Database (Denmark)

    Voroshylova, I. V.; Chaban, V. V.

    2014-01-01

    Reliable force field (FF) is a central issue in successful prediction of physical chemical properties via computer simulations. This work introduces refined FF parameters for six popular ionic liquids (ILs) of the pyridinium family (butylpyridinium tetrafluoroborate, bis(trifluoromethanesulfonyl)......Reliable force field (FF) is a central issue in successful prediction of physical chemical properties via computer simulations. This work introduces refined FF parameters for six popular ionic liquids (ILs) of the pyridinium family (butylpyridinium tetrafluoroborate, bis......(trifluoromethanesulfonyl)imide, dicyanamide, hexafluorophosphate, triflate, chloride). We elaborate a systematic procedure, which allows accounting for specific cationanion interactions in the liquid phase. Once these interactions are described accurately, all experimentally determined transport properties can be reproduced. We prove...... and elevated temperature. The developed atomistic models provide a systematic refinement upon the well-known Canongia LopesPadua (CL&P) FF. Together with the original CL&P parameters the present models foster a computational investigation of ionic liquids....

  20. High-Precision Computation and Mathematical Physics

    International Nuclear Information System (INIS)

    Bailey, David H.; Borwein, Jonathan M.

    2008-01-01

    At the present time, IEEE 64-bit floating-point arithmetic is sufficiently accurate for most scientific applications. However, for a rapidly growing body of important scientific computing applications, a higher level of numeric precision is required. Such calculations are facilitated by high-precision software packages that include high-level language translation modules to minimize the conversion effort. This paper presents a survey of recent applications of these techniques and provides some analysis of their numerical requirements. These applications include supernova simulations, climate modeling, planetary orbit calculations, Coulomb n-body atomic systems, scattering amplitudes of quarks, gluons and bosons, nonlinear oscillator theory, Ising theory, quantum field theory and experimental mathematics. We conclude that high-precision arithmetic facilities are now an indispensable component of a modern large-scale scientific computing environment.
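
    As a small illustration of why such packages matter (using the mpmath Python package as one example of high-precision software, not taken from the paper), a textbook cancellation that wipes out every digit in IEEE 64-bit arithmetic is harmless at 50 digits:

```python
from math import cos as dcos
from mpmath import mp, mpf, cos

# f(x) = (1 - cos x)/x^2 -> 1/2 as x -> 0, but near x = 1e-8 double precision
# loses all significant digits to cancellation; 50-digit arithmetic does not.
mp.dps = 50                       # 50 significant decimal digits
x = mpf("1e-8")
print((1 - cos(x)) / x**2)        # 0.49999999999999999996..., correct
print((1 - dcos(1e-8)) / 1e-16)   # 0.0 in IEEE 64-bit: cos(1e-8) rounds to 1
```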

  1. Computer codes for tasks in the fields of isotope and radiation research

    International Nuclear Information System (INIS)

    Friedrich, K.; Gebhardt, O.

    1978-11-01

    Concise descriptions of computer codes developed for solving problems in the fields of isotope and radiation research at the Zentralinstitut fuer Isotopen- und Strahlenforschung (ZfI) are compiled. In part two the structure of the ZfI program library MABIF is outlined and a complete list of all codes available is given

  2. Computational efficiency for the surface renewal method

    Science.gov (United States)

    Kelley, Jason; Higgins, Chad

    2018-04-01

    Measuring surface fluxes using the surface renewal (SR) method requires programmatic algorithms for tabulation, algebraic calculation, and data quality control. A number of different methods have been published describing automated calibration of SR parameters. Because the SR method utilizes high-frequency (10 Hz+) measurements, some steps in the flux calculation are computationally expensive, especially when automating SR to perform many iterations of these calculations. Several new algorithms were written that perform the required calculations more efficiently and rapidly, and they were tested for sensitivity to the length of the flux-averaging period, the ability to measure over a large range of lag timescales, and overall computational efficiency. These algorithms utilize signal-processing techniques and algebraic simplifications that demonstrate simple modifications that dramatically improve computational efficiency. The results here complement efforts by other authors to standardize a robust and accurate computational SR method. The increased speed of computation grants flexibility in implementing the SR method, opening new avenues for SR to be used in research, for applied monitoring, and in novel field deployments.
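
    As one hedged illustration of the lag-based computations involved (the paper's exact algorithms are not reproduced here), structure functions of a high-frequency scalar record can be evaluated for many lags with vectorized differences; in Van Atta-style SR analysis these moments feed the ramp-amplitude fit:

```python
import numpy as np

def structure_functions(series, max_lag):
    """S_n(r) = mean((s(t) - s(t - r))^n) for n = 2, 3, 5 at each lag r,
    computed with vectorized array differences instead of loops over t."""
    out = {}
    for r in range(1, max_lag + 1):
        d = series[r:] - series[:-r]
        out[r] = (np.mean(d**2), np.mean(d**3), np.mean(d**5))
    return out

# usage on a synthetic 10 Hz, 1 h temperature record
temps = np.cumsum(np.random.default_rng(0).normal(size=36000)) * 0.01
moments = structure_functions(temps, max_lag=50)
```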

  3. The FLUKA code: An accurate simulation tool for particle therapy

    CERN Document Server

    Battistoni, Giuseppe; Böhlen, Till T; Cerutti, Francesco; Chin, Mary Pik Wai; Dos Santos Augusto, Ricardo M; Ferrari, Alfredo; Garcia Ortega, Pablo; Kozlowska, Wioletta S; Magro, Giuseppe; Mairani, Andrea; Parodi, Katia; Sala, Paola R; Schoofs, Philippe; Tessonnier, Thomas; Vlachoudis, Vasilis

    2016-01-01

    Monte Carlo (MC) codes are increasingly spreading in the hadrontherapy community due to their detailed description of radiation transport and interaction with matter. The suitability of a MC code for application to hadrontherapy demands accurate and reliable physical models capable of handling all components of the expected radiation field. This becomes extremely important for correctly performing not only physical but also biologically-based dose calculations, especially in cases where ions heavier than protons are involved. In addition, accurate prediction of emerging secondary radiation is of utmost importance in innovative areas of research aiming at in-vivo treatment verification. This contribution will address the recent developments of the FLUKA MC code and its practical applications in this field. Refinements of the FLUKA nuclear models in the therapeutic energy interval lead to an improved description of the mixed radiation field as shown in the presented benchmarks against experimental data with bot...

  4. Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1

    International Nuclear Information System (INIS)

    Kawasaki, Shoji; Nakamura, Kazuo; Nakamura, Yukio; Hiraki, Naoharu; Toi, Kazuo

    1981-01-01

    A data processing system was designed and constructed for the purpose of analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 K byte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to keep the programs of the computer. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer as an I/O device can be connected. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown by one of the I/O devices. The results of test run showed good performance. (Kato, T.)

  5. Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1

    Energy Technology Data Exchange (ETDEWEB)

    Kawasaki, S; Nakamura, K; Nakamura, Y; Hiraki, N; Toi, K [Kyushu Univ., Fukuoka (Japan). Research Inst. for Applied Mechanics

    1981-02-01

    A data processing system was designed and constructed for the purpose of analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 K byte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to keep the programs of the computer. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer as an I/O device can be connected. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown by one of the I/O devices. The results of test run showed good performance.

  6. Near real-time digital holographic microscope based on GPU parallel computing

    Science.gov (United States)

    Zhu, Gang; Zhao, Zhixiong; Wang, Huarui; Yang, Yan

    2018-01-01

    A transmission near real-time digital holographic microscope with in-line and off-axis light paths is presented, in which parallel computing technology based on the compute unified device architecture (CUDA) and digital holographic microscopy are combined. Compared to other holographic microscopes, which have to implement reconstruction in multiple focal planes and are therefore time-consuming, the reconstruction speed of the near real-time digital holographic microscope can be greatly improved with CUDA-based parallel computing, so it is especially suitable for measurements of particle fields on the micrometer and nanometer scale. Simulations and experiments show that the proposed transmission digital holographic microscope can accurately measure and display the velocity of a particle field on the micrometer scale, with an average velocity error lower than 10%. With the graphics processing unit (GPU), the computing time for 100 reconstruction planes (512×512 grids) is lower than 120 ms, whereas it is 4.9 s using the traditional CPU-based reconstruction method. The reconstruction speed is thus raised by a factor of 40; in other words, the system can handle holograms at 8.3 frames per second, realizing near real-time measurement and display of the particle velocity field. Real-time three-dimensional reconstruction of the particle velocity field is expected to be achieved by further optimization of software and hardware.
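
    For concreteness, the FFT-based numerical reconstruction at the heart of such a pipeline can be sketched on the CPU with the angular spectrum method (a plain numpy analogue; the paper executes equivalent kernels on the GPU with CUDA, and all parameters below are illustrative):

```python
import numpy as np

def angular_spectrum(u0, wavelength, dx, z):
    """Propagate the hologram-plane complex field u0 a distance z.
    Evanescent components (imaginary k_z) are discarded."""
    n, m = u0.shape
    k = 2.0 * np.pi / wavelength
    fx = np.fft.fftfreq(m, d=dx)                 # spatial frequencies, 1/m
    fy = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = k**2 - (2*np.pi*FX)**2 - (2*np.pi*FY)**2
    kz = np.sqrt(np.maximum(arg, 0.0))
    H = np.where(arg > 0, np.exp(1j * kz * z), 0.0)   # transfer function
    return np.fft.ifft2(np.fft.fft2(u0) * H)

# reconstructing one of many focal planes from a 512x512 hologram
holo = np.random.rand(512, 512).astype(complex)       # stand-in hologram
plane = angular_spectrum(holo, wavelength=632.8e-9, dx=3.45e-6, z=5e-3)
```

    On a GPU the same two FFTs and the pointwise multiply map directly onto CUDA library kernels, which is where the reported 40x speed-up over the CPU comes from.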

  7. Finite element and node point generation computer programs used for the design of toroidal field coils in tokamak fusion devices

    International Nuclear Information System (INIS)

    Smith, R.A.

    1975-06-01

    The structural analysis of toroidal field coils in Tokamak fusion machines can be performed with the finite element method. This technique has been employed for design evaluations of toroidal field coils on the Princeton Large Torus (PLT), the Poloidal Diverter Experiment (PDX), and the Tokamak Fusion Test Reactor (TFTR). The application of the finite element method can be simplified with computer programs that are used to generate the input data for the finite element code. There are three areas of data input where significant automation can be provided by supplementary computer codes. These concern the definition of geometry by a node point mesh, the definition of the finite elements from the geometric node points, and the definition of the node point force/displacement boundary conditions. The node point forces in a model of a toroidal field coil are computed from the vector cross product of the coil current and the magnetic field. The computer programs named PDXNODE and ELEMENT are described. The program PDXNODE generates the geometric node points of a finite element model for a toroidal field coil. The program ELEMENT defines the finite elements of the model from the node points and from material property considerations. The program descriptions include input requirements, the output, the program logic, the methods of generating complex geometries with multiple runs, computational time and computer compatibility. The output format of PDXNODE and ELEMENT makes them compatible with PDXFORC and two general-purpose finite element computer codes: ANSYS, the Engineering Analysis System written by Swanson Analysis Systems, Inc., and WECAN, the Westinghouse Electric Computer Analysis general-purpose finite element program. The Fortran listings of PDXNODE and ELEMENT are provided

  8. Digital imaging of root traits (DIRT): a high-throughput computing and collaboration platform for field-based root phenomics.

    Science.gov (United States)

    Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander

    2015-01-01

    Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate the progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual low-throughput. Here, we present an open-source phenomics platform "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons" enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots

  9. Accurate calculation of Green functions on the d-dimensional hypercubic lattice

    International Nuclear Information System (INIS)

    Loh, Yen Lee

    2011-01-01

    We write the Green function of the d-dimensional hypercubic lattice in a piecewise form covering the entire real frequency axis. Each piece is a single integral involving modified Bessel functions of the first and second kinds. The smoothness of the integrand allows both real and imaginary parts of the Green function to be computed quickly and accurately for any dimension d and any real frequency, and the computational time scales only linearly with d.
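
    The representation the abstract describes can be sketched directly in the simplest regime — real frequencies above the band edge, w > 2d, where a single Laplace-type integral applies (the paper's piecewise form covers the entire real axis):

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import ive   # exponentially scaled I_0: ive(0,x) = I_0(x)*e^-x

def lattice_green(w, d):
    """G(w) = (2*pi)^-d * Int d^dk / (w - 2*sum_i cos k_i), for real w > 2d,
    via G(w) = Int_0^inf exp(-w t) I_0(2t)^d dt. Scaling with ive keeps the
    integrand bounded: exp(-wt) I_0(2t)^d = exp(-(w-2d)t) * ive(0,2t)^d."""
    assert w > 2 * d, "this single-integral form holds only above the band"
    f = lambda t: np.exp(-(w - 2.0 * d) * t) * ive(0, 2.0 * t) ** d
    val, _ = quad(f, 0.0, np.inf)
    return val

print(lattice_green(6.1, d=3))   # 3-D value just above the band edge w = 6
```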

  10. Leveraging Two Kinect Sensors for Accurate Full-Body Motion Capture

    Directory of Open Access Journals (Sweden)

    Zhiquan Gao

    2015-09-01

    Accurate motion capture plays an important role in sports analysis, the medical field and virtual reality. Current methods for motion capture often suffer from occlusions, which limit the accuracy of their pose estimation. In this paper, we propose a complete system to measure the pose parameters of the human body accurately. Different from previous monocular depth-camera systems, we leverage two Kinect sensors to acquire more information about human movements, which ensures that we can still get an accurate estimation even when significant occlusion occurs. Because human motion is continuous in time, we adopt a learning analysis to mine the temporal information across the posture variations. Using this information, we estimate human pose parameters accurately, regardless of rapid movement. Our experimental results show that our system can perform an accurate pose estimation of the human body with the constraint of information from the temporal domain.

  11. Cycle accurate and cycle reproducible memory for an FPGA based hardware accelerator

    Science.gov (United States)

    Asaad, Sameh W.; Kapur, Mohit

    2016-03-15

    A method, system and computer program product are disclosed for using a Field Programmable Gate Array (FPGA) to simulate operations of a device under test (DUT). The DUT includes a device memory having a first number of input ports, and the FPGA is associated with a target memory having a second number of input ports, the second number being less than the first. In one embodiment, a given set of inputs is applied to the device memory at a frequency Fd and in a defined cycle of time, and the same set of inputs is applied to the target memory at a frequency Ft. Ft is greater than Fd, and cycle accuracy is maintained between the device memory and the target memory. In an embodiment, a cycle-accurate model of the DUT memory is created by separating the DUT memory interface protocol from the target memory storage array.

  12. Computer model of copper resistivity will improve the efficiency of field-compression devices

    International Nuclear Information System (INIS)

    Burgess, T.J.

    1977-01-01

    By detonating a ring of high explosive around an existing magnetic field, we can, under certain conditions, compress the field and multiply its strength tremendously. In this way, we can duplicate for a fraction of a second the extreme pressures that normally exist only in the interior of stars and planets. Under such pressures, materials may exhibit behavior that will confirm or alter current notions about the fundamental structure of matter and the ongoing processes in planetary interiors. However, we cannot design an efficient field-compression device unless we can calculate the electrical resistivity of certain basic metal components, which interact with the field. To aid in the design effort, we have developed a computer code that calculates the resistivity of copper and other metals over the wide range of temperatures and pressures found in a field-compression device

  13. A study of computer graphics technology in application of communication resource management

    Science.gov (United States)

    Li, Jing; Zhou, Liang; Yang, Fei

    2017-08-01

    With the development of computer technology, computer graphics technology has been widely used. In particular, the success of object-oriented technology and multimedia technology has promoted the development of graphics technology in computer software systems. Computer graphics theory and application technology have therefore become an important topic in the computer field, and computer graphics technology is being applied ever more extensively in various fields. In recent years, with the development of the social economy, and especially the rapid development of information technology, the traditional way of managing communication resources can no longer effectively meet the needs of resource management. Communication resource management still uses the original management tools and methods for equipment management and maintenance, which has brought many problems: it is very difficult for non-professionals to understand the equipment and its status in communication resource management, resource utilization is relatively low, and managers cannot quickly and accurately assess resource conditions. Aimed at the above problems, this paper proposes to introduce computer graphics technology into communication resource management. The introduction of computer graphics not only makes communication resource management more vivid, but also reduces the cost of resource management and improves work efficiency.

  14. Boundary element numerical method for the electric field generated by oblique multi-needle electrodes

    Institute of Scientific and Technical Information of China (English)

    LIU FuPing; WANG AnLing; WANG AnXuan; CAO YueZu; CHEN Qiang; YANG ChangChun

    2009-01-01

    According to the electric potential of oblique multi-needle electrodes (OMNE) in biological tissue, discrete equations based on the undetermined linear current density were established from the boundary element integral equations (BEIE). The non-uniform distribution of the current flowing from the multi-needle electrodes into the conductive biological tissue was imaged by solving a set of linear equations. The electric field and potential generated by OMNE in biological tissue at any point may then be determined through the boundary element method (BEM). The running time and stability of the computing method are examined with an example, which demonstrates that the algorithm is fast and the computed results are stable. This method therefore provides a useful reference for computing the field and the potential generated by OMNE in biological tissue, being a fast, effective and accurate computing method.

  15. High accuracy ion optics computing

    International Nuclear Information System (INIS)

    Amos, R.J.; Evans, G.A.; Smith, R.

    1986-01-01

    Computer simulation of focused ion beams for surface analysis of materials by SIMS, or for microfabrication by ion beam lithography, plays an important role in the design of low-energy ion beam transport and optical systems. Many computer packages currently available are limited in their applications, being inaccurate or inappropriate for a number of practical purposes. This work describes an efficient and accurate computer programme which has been developed and tested for use on medium-sized machines. The programme is written in Algol 68 and models the behaviour of a beam of charged particles through an electrostatic system. A variable-grid finite difference method is used with a unique data structure to calculate the electric potential in an axially symmetric region, for arbitrarily shaped boundaries. Emphasis has been placed upon finding an economic method of solving the resulting set of sparse linear equations in the calculation of the electric field, and several such methods are described. Applications include individual ion lenses, extraction optics for ions in surface analytical instruments and the design of columns for ion beam lithography. Computational results have been compared with analytical calculations and with data obtained from individual einzel lenses. (author)

  16. Study of electric and magnetic fields on transmission lines using a computer simulation program

    International Nuclear Information System (INIS)

    Robelo Mojica, Nelson

    2011-01-01

    A study was conducted to determine and reduce the levels of electric and magnetic fields produced by the different configurations used by the Instituto Costarricense de Electricidad on power transmission lines in Costa Rica. The computer simulation program PLS-CADD, with the EPRI algorithm, was used to obtain field values close to the actual values at the line easements in service to date. Different configurations were compared on equal terms and the lowest levels of electric and magnetic fields were determined. The most appropriate tower configuration was thus obtained, decreasing people's exposure to electromagnetic fields without affecting the energy demand of the population. (author)

  17. An efficient approach for computing the geometrical optics field reflected from a numerically specified surface

    Science.gov (United States)

    Mittra, R.; Rushdi, A.

    1979-01-01

    An approach for computing the geometrical optics fields reflected from a numerically specified surface is presented. The approach includes the step of deriving a specular point and begins by computing the rays reflected off the surface at the points where the surface coordinates, as well as the partial derivatives (or, equivalently, the direction of the normal), are numerically specified. Then a cluster of three adjacent rays is chosen to define a 'mean ray' and the divergence factor associated with this mean ray. Finally, the amplitude, phase, and vector direction of the reflected field at a given observation point are derived by associating this point with the nearest mean ray and determining its position relative to such a ray.
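
    A minimal sketch of the first step (illustrative only; the paper's contribution lies in the mean-ray clustering and divergence factor that follow): with the surface normal known at a mesh point, the reflected ray direction follows from the usual specular rule.

```python
import numpy as np

def reflect(d, n):
    """Specular reflection of a unit incident direction d about a unit
    surface normal n: r = d - 2 (d . n) n."""
    return d - 2.0 * np.dot(d, n) * n

# one reflected ray off a numerically specified mesh point
d = np.array([0.0, -np.sqrt(0.5), -np.sqrt(0.5)])   # incident direction
n = np.array([0.0, 0.0, 1.0])                       # normal at the point
print(reflect(d, n))                                # -> [0, -0.707, +0.707]
```

    Three adjacent reflected rays then define the mean ray, and the change in cross-section of that ray tube along the propagation path gives the divergence factor used to set the reflected amplitude.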

  18. Accurate predictions for the LHC made easy

    CERN Multimedia

    CERN. Geneva

    2014-01-01

    The data recorded by the LHC experiments are of very high quality. To get the most out of the data, precise theory predictions, including uncertainty estimates, are needed to reduce as much as possible the theoretical bias in the experimental analyses. Recently, significant progress has been made in performing Next-to-Leading Order (NLO) computations, including matching to the parton shower, that allow for these accurate, hadron-level predictions. I shall discuss one of these efforts, the MadGraph5_aMC@NLO program, which aims at the complete automation of predictions at NLO accuracy within the SM as well as New Physics theories. I'll illustrate some of the theoretical ideas behind this program, show some selected applications to LHC physics, and describe the future plans.

  19. Computation of multiphase systems with phase field models

    International Nuclear Information System (INIS)

    Badalassi, V.E.; Ceniceros, H.D.; Banerjee, S.

    2003-01-01

    Phase field models offer a systematic physical approach for investigating complex multiphase systems behaviors such as near-critical interfacial phenomena, phase separation under shear, and microstructure evolution during solidification. However, because interfaces are replaced by thin transition regions (diffuse interfaces), phase field simulations require resolution of very thin layers to capture the physics of the problems studied. This demands robust numerical methods that can efficiently achieve high resolution and accuracy, especially in three dimensions. We present here an accurate and efficient numerical method to solve the coupled Cahn-Hilliard/Navier-Stokes system, known as Model H, that constitutes a phase field model for density-matched binary fluids with variable mobility and viscosity. The numerical method is a time-split scheme that combines a novel semi-implicit discretization for the convective Cahn-Hilliard equation with an innovative application of high-resolution schemes employed for direct numerical simulations of turbulence. This new semi-implicit discretization is simple but effective since it removes the stability constraint due to the nonlinearity of the Cahn-Hilliard equation at the same cost as that of an explicit scheme. It is derived from a discretization used for diffusive problems that we further enhance to efficiently solve flow problems with variable mobility and viscosity. Moreover, we solve the Navier-Stokes equations with a robust time-discretization of the projection method that guarantees better stability properties than those for Crank-Nicolson-based projection methods. For channel geometries, the method uses a spectral discretization in the streamwise and spanwise directions and a combination of spectral and high order compact finite difference discretizations in the wall normal direction. The capabilities of the method are demonstrated with several examples including phase separation with, and without, shear in two and three
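
    For reference, Model H in one common dimensionless form (sign and scaling conventions vary across the literature) couples an advected Cahn-Hilliard equation to the incompressible Navier-Stokes equations through the chemical potential:

```latex
% Schematic Model H: phi is the order parameter, mu the chemical potential,
% M the (possibly phi-dependent) mobility, eta the viscosity.
\begin{align*}
\frac{\partial\phi}{\partial t} + \mathbf{u}\cdot\nabla\phi
  &= \nabla\cdot\big(M(\phi)\,\nabla\mu\big),
  \qquad \mu = f'(\phi) - \epsilon^{2}\nabla^{2}\phi,\\
\rho\Big(\frac{\partial\mathbf{u}}{\partial t} + \mathbf{u}\cdot\nabla\mathbf{u}\Big)
  &= -\nabla p
   + \nabla\cdot\big(\eta(\phi)\,(\nabla\mathbf{u} + \nabla\mathbf{u}^{\mathsf T})\big)
   + \mu\,\nabla\phi,
  \qquad \nabla\cdot\mathbf{u} = 0.
\end{align*}
```

    The fourth-order, nonlinear character of the Cahn-Hilliard part is what makes fully explicit time stepping so restrictive, and it is this term that the paper's semi-implicit discretization targets.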

  20. Apps for Angiosperms: The Usability of Mobile Computers and Printed Field Guides for UK Wild Flower and Winter Tree Identification

    Science.gov (United States)

    Stagg, Bethan C.; Donkin, Maria E.

    2017-01-01

    We investigated usability of mobile computers and field guide books with adult botanical novices, for the identification of wildflowers and deciduous trees in winter. Identification accuracy was significantly higher for wildflowers using a mobile computer app than field guide books but significantly lower for deciduous trees. User preference…

  1. Computer-aided dispatch--traffic management center field operational test : state of Utah final report

    Science.gov (United States)

    2006-07-01

    This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...

  2. Online corrections - Evidence based practice utilizing electronic portal imaging to improve the accuracy of field placement for locally advanced prostate cancer

    International Nuclear Information System (INIS)

    Middleton, M.; Medwell, S.; Rolfo, A.; Joon, M.L.

    2003-01-01

    The requirement of accurate field placement in the treatment of locally advanced prostate cancer is of great significance given the onset of dose escalation and increased Planning Target Volume (PTV) conformity. With these factors in mind, it becomes essential to ensure accurate field placement for the duration of a course of radiotherapy. This study examines the role of Online Corrections in increasing the accuracy of field placement, utilizing Varian Vision EPI equipment. The study also examines, incorporating TCP and NTCP data, the hypothetical effect on three-dimensional computer dosimetry if Online Corrections were not performed. Field placement data were collected on patients receiving radical radiotherapy to the prostate utilizing the Varian Vision (TM) EPI software. Both intra- and inter-field data were collected, with Online Corrections being carried out within the confines of the BAROC PROSTATE EPI POLICY. The data were analysed to illustrate the value of Online Corrections in the pursuit of accurate field placement. This evidence was further supported by computer dosimetry presenting the worst possible impact upon a patient's total course of treatment if Online Corrections were not performed. The use of Online Corrections can prove to be of enormous benefit to both patient and practitioner. For centres with the available technology, it places the responsibility of field placement upon the Radiation Therapist. This responsibility in turn impacts on the education, training and empowerment of the Radiation Therapy group. These are issues of the utmost importance to centres considering the use of Online Corrections

  3. The advantage of the three dimensional computed tomographic (3D-CT) for ensuring accurate bone incision in sagittal split ramus osteotomy

    Directory of Open Access Journals (Sweden)

    Coen Pramono D

    2005-03-01

    Functional and aesthetic dysgnathia surgery requires accurate pre-surgical planning, including the choice of surgical technique in relation to the differences in anatomical structures among individuals. Programs that simulate the surgery are becoming increasingly important. This can be mediated by using a surgical model, conventional x-ray projections such as panoramic and cephalometric views, or a more sophisticated method such as three-dimensional computed tomography (3D-CT). A patient who had undergone double jaw surgery with difficult anatomical landmarks is presented. In this case the mandibular foramina were located relatively high with respect to the sigmoid notches; therefore, ensuring accurate bone incisions for the sagittal split was presumed to be difficult. A 3D-CT was made and was considered very helpful in supporting the pre-operative diagnostics.

  4. Computation and analysis of backward ray-tracing in aero-optics flow fields.

    Science.gov (United States)

    Xu, Liang; Xue, Deting; Lv, Xiaoyi

    2018-01-08

    A backward ray-tracing method is proposed for aero-optics simulation. Different from forward tracing, the backward tracing direction is from the internal sensor to the distant target. Along this direction, the tracing passes in turn through the internal gas region, the aero-optics flow field, and the freestream. The coordinate value, the density, and the refractive index are calculated at each tracing step. A stopping criterion is developed to ensure the tracing stops at the outer edge of the aero-optics flow field. As a demonstration, the analysis is carried out for a typical blunt-nosed vehicle. The backward tracing method and stopping criterion greatly simplify the ray-tracing computations in the aero-optics flow field, and they can be extended to our active laser illumination aero-optics study because of the reciprocity principle.
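
    As a rough illustration of the scheme described above, the sketch below marches a ray from an internal sensor through a made-up refractive-index field and stops once the local density has relaxed to an assumed freestream value; the Gladstone-Dale constant, the density model, the step size, and the tolerance are all illustrative assumptions rather than values from the paper.

      import numpy as np

      K_GD = 2.27e-4  # Gladstone-Dale constant, m^3/kg (illustrative)

      def density(p):
          """Hypothetical axisymmetric density bump standing in for a CFD flow field."""
          return 1.2 * (1.0 + 0.5 * np.exp(-np.dot(p, p) / 0.01))

      def refractive_index(p):
          return 1.0 + K_GD * density(p)

      def grad_n(p, h=1e-4):
          """Central-difference gradient of the refractive index."""
          g = np.zeros(3)
          for i in range(3):
              e = np.zeros(3)
              e[i] = h
              g[i] = (refractive_index(p + e) - refractive_index(p - e)) / (2 * h)
          return g

      def backward_trace(p0, d0, ds=1e-3, rho_inf=1.2, tol=1e-3, max_steps=10000):
          """March from the internal sensor toward the target; stop at the flow-field edge."""
          p = np.array(p0, float)
          d = np.array(d0, float) / np.linalg.norm(d0)
          for _ in range(max_steps):
              # Discretized ray equation: d/ds (n * d) = grad n
              d = d + ds * grad_n(p) / refractive_index(p)
              d /= np.linalg.norm(d)
              p = p + ds * d
              # Stopping criterion: density has relaxed to the freestream value.
              if abs(density(p) - rho_inf) < tol:
                  break
          return p, d

      exit_point, exit_dir = backward_trace([0.0, 0.0, 0.0], [1.0, 0.2, 0.0])
      print(exit_point, exit_dir)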

  5. Computation of antenna pattern correlation and MIMO performance by means of surface current distribution and spherical wave theory

    Directory of Open Access Journals (Sweden)

    O. Klemp

    2006-01-01

    Full Text Available In order to satisfy the stringent demand for an accurate prediction of MIMO channel capacity and diversity performance in wireless communications, more effective and suitable models that account for real antenna radiation behavior have to be taken into account. One of the main challenges is the accurate modeling of antenna correlation, which is directly related to the channel capacity or diversity gain that can be achieved in multi-element antenna configurations. Spherical wave theory in electromagnetics is a well-known technique to express antenna far fields by means of a compact field expansion with a reduced number of unknowns, and it was recently applied to derive an analytical approach for the computation of antenna pattern correlation. In this paper we present a novel and efficient computational technique to determine antenna pattern correlation based on the evaluation of the surface current distribution by means of a spherical mode expansion.
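
    For context, the quantity at stake can be computed directly once far-field patterns are available. The sketch below evaluates the standard envelope correlation coefficient between two sampled complex patterns by numerical quadrature over the sphere; the two example patterns and the grid resolution are assumptions for illustration, not the spherical-mode machinery of the paper.

      import numpy as np

      # Spherical sampling grid (illustrative resolution).
      theta = np.linspace(0, np.pi, 181)
      phi = np.linspace(0, 2 * np.pi, 361)
      TH, PH = np.meshgrid(theta, phi, indexing="ij")
      dOmega = np.sin(TH) * (theta[1] - theta[0]) * (phi[1] - phi[0])

      # Hypothetical complex far-field patterns of two array elements
      # (a real analysis would take these from a spherical-mode expansion).
      E1 = np.cos(TH / 2) * np.exp(1j * 0.0)
      E2 = np.cos(TH / 2) * np.exp(1j * np.pi * np.sin(TH) * np.cos(PH))

      def envelope_correlation(Ea, Eb, w):
          """rho_e = |sum Ea*conj(Eb) dOmega|^2 / (sum |Ea|^2 dOmega * sum |Eb|^2 dOmega)"""
          num = np.abs(np.sum(Ea * np.conj(Eb) * w)) ** 2
          den = np.sum(np.abs(Ea) ** 2 * w) * np.sum(np.abs(Eb) ** 2 * w)
          return num / den

      print(f"envelope correlation: {envelope_correlation(E1, E2, dOmega):.3f}")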

  6. Reducing dose calculation time for accurate iterative IMRT planning

    International Nuclear Information System (INIS)

    Siebers, Jeffrey V.; Lauterbach, Marc; Tong, Shidong; Wu Qiuwen; Mohan, Radhe

    2002-01-01

    A time-consuming component of IMRT optimization is the dose computation required in each iteration for the evaluation of the objective function. Accurate superposition/convolution (SC) and Monte Carlo (MC) dose calculations are currently considered too time-consuming for iterative IMRT dose calculation. Thus, fast but less accurate algorithms such as pencil beam (PB) algorithms are typically used in most current IMRT systems. This paper describes two hybrid methods that utilize the speed of fast PB algorithms yet achieve the accuracy of optimizing based upon SC algorithms via the application of dose correction matrices. In one method, the ratio method, an infrequently computed voxel-by-voxel dose ratio matrix (R = D_SC/D_PB) is applied for each beam to the dose distributions calculated with the PB method during the optimization. That is, D_PB × R is used for the dose calculation during the optimization. The optimization proceeds until both the IMRT beam intensities and the dose correction ratio matrix converge. In the second method, the correction method, a periodically computed voxel-by-voxel correction matrix for each beam, defined to be the difference between the SC and PB dose computations, is used to correct PB dose distributions. To validate the methods, IMRT treatment plans developed with the hybrid methods are compared with those obtained when the SC algorithm is used for all optimization iterations and with those obtained when PB-based optimization is followed by SC-based optimization. In the 12 patient cases studied, no clinically significant differences exist in the final treatment plans developed with each of the dose computation methodologies. However, the number of time-consuming SC iterations is reduced from 6-32 for pure SC optimization to four or less for the ratio matrix method and five or less for the correction method. Because the PB algorithm is faster at computing dose, this reduces the inverse planning optimization time for our implementation.
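
    A minimal sketch of the ratio method described above, with a random matrix standing in for the pencil-beam dose operator and a perturbed copy standing in for the superposition/convolution computation; the optimizer, step size, and convergence thresholds are illustrative assumptions.

      import numpy as np

      rng = np.random.default_rng(0)
      n_vox, n_beams = 200, 10
      A_pb = rng.random((n_vox, n_beams))    # fast pencil-beam dose operator (illustrative)
      A_sc = A_pb * (1 + 0.1 * rng.standard_normal((n_vox, n_beams)))  # stand-in for SC dose
      prescription = np.ones(n_vox)

      x = np.ones(n_beams)                   # beam intensities
      R = np.ones(n_vox)                     # voxel-by-voxel ratio matrix
      for outer in range(6):
          # Inner iterations: optimize against the ratio-corrected PB dose, D_PB * R.
          for _ in range(500):
              dose = R * (A_pb @ x)
              grad = A_pb.T @ (R * (dose - prescription))
              x = np.maximum(x - 1e-3 * grad, 0.0)   # keep intensities non-negative
          # Outer iteration: one expensive "SC" dose computation refreshes R.
          R_new = (A_sc @ x) / np.maximum(A_pb @ x, 1e-9)
          if np.max(np.abs(R_new - R)) < 1e-4:       # ratio matrix has converged
              break
          R = R_new
      print(f"stopped after {outer + 1} outer (SC) iterations")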

  7. Retrieval of effective cloud field parameters from radiometric data

    Science.gov (United States)

    Paulescu, Marius; Badescu, Viorel; Brabec, Marek

    2017-06-01

    Clouds play a key role in establishing the Earth's climate. Real cloud fields are highly diverse and very complex in both morphological and microphysical senses. Consequently, the numerical description of the cloud field is a critical task for accurate climate modeling. This study explores the feasibility of retrieving the effective cloud field parameters (namely the cloud aspect ratio and cloud factor) from systematic radiometric measurements at high frequency (a measurement is taken every 15 s). Two different procedures are proposed, evaluated, and discussed with respect to both physical and numerical restrictions. Neither procedure is classified as best; therefore, the specific advantages and weaknesses of each are discussed. It is shown that the relationship between the cloud shade and point cloudiness computed using the estimated cloud field parameters recovers the typical relationship derived from measurements.

  8. Computationally efficient near-field source localization using third-order moments

    Science.gov (United States)

    Chen, Jian; Liu, Guohong; Sun, Xiaoying

    2014-12-01

    In this paper, a third-order moment-based estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm is proposed for passive localization of near-field sources. By properly choosing sensor outputs of the symmetric uniform linear array, two special third-order moment matrices are constructed, in which the steering matrix is a function of the electric angle γ, while the rotational factor is a function of the electric angles γ and ϕ. With a singular value decomposition (SVD) operation, all directions of arrival (DOAs) are estimated via polynomial rooting. After substituting the DOA information into the steering matrix, the rotational factor is determined via total least squares (TLS), and the corresponding range estimates are obtained. Compared with the high-order ESPRIT method, the proposed algorithm requires a lower computational burden, and it avoids the parameter-match procedure. Computer simulations are carried out to demonstrate the performance of the proposed algorithm.

  9. Computational physics an introduction to Monte Carlo simulations of matrix field theory

    CERN Document Server

    Ydri, Badis

    2017-01-01

    This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories which involve finite dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...

  10. Toward Accurate On-Ground Attitude Determination for the Gaia Spacecraft

    Science.gov (United States)

    Samaan, Malak A.

    2010-03-01

    The work presented in this paper concerns the accurate On-Ground Attitude (OGA) reconstruction for the astrometry spacecraft Gaia in the presence of disturbance and control torques acting on the spacecraft. The reconstruction of the expected environmental torques which influence the spacecraft dynamics will also be investigated. The telemetry data from the spacecraft will include the on-board real-time attitude, which is accurate to the order of several arcsec. This raw attitude is the starting point for the further attitude reconstruction. The OGA will use the inputs from the field coordinates of known stars (attitude stars) and also the field coordinate differences of objects on the Sky Mapper (SM) and Astrometric Field (AF) payload instruments to improve this raw attitude. The on-board attitude determination uses a Kalman Filter (KF) to minimize the attitude errors and produce a more accurate attitude estimation than the pure star tracker measurement. Therefore the first approach for the OGA will be an adapted version of the KF. Furthermore, we will design a batch least squares algorithm to investigate how to obtain a more accurate OGA estimation. Finally, a comparison between these different attitude determination techniques in terms of accuracy, robustness, speed and memory required will be evaluated in order to choose the best attitude algorithm for the OGA. The expected resulting accuracy for the OGA determination will be on the order of milli-arcsec.
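
    As a toy illustration of the Kalman-filtering step mentioned above, the sketch below smooths a noisy one-dimensional attitude angle with a scalar random-walk filter; the noise variances and the drift model are assumed values, and the real OGA filter operates on full attitude states rather than a single angle.

      import numpy as np

      def refine_attitude(measurements, r=3.0**2, q=0.01**2):
          """Scalar Kalman filter sketch: smooth a noisy attitude angle (arcsec).
          r and q are assumed measurement and process noise variances."""
          x, p = measurements[0], r          # initial state and covariance
          out = []
          for z in measurements[1:]:
              p = p + q                      # predict: random-walk process model
              k = p / (p + r)                # Kalman gain
              x = x + k * (z - x)            # update with the new measurement
              p = (1 - k) * p
              out.append(x)
          return np.array(out)

      # Noisy raw attitude around a slowly drifting truth (illustrative numbers).
      t = np.linspace(0, 100, 500)
      truth = 0.02 * t
      raw = truth + np.random.default_rng(1).normal(0, 3.0, t.size)  # several-arcsec noise
      smoothed = refine_attitude(raw)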

  11. Computer simulation of nonstationary thermal fields in design and operation of northern oil and gas fields

    Energy Technology Data Exchange (ETDEWEB)

    Vaganova, N. A., E-mail: vna@imm.uran.ru [Institute of Mathematics and Mechanics of Ural Branch of Russian Academy of Sciences, Ekaterinburg (Russian Federation); Filimonov, M. Yu., E-mail: fmy@imm.uran.ru [Ural Federal University, Ekaterinburg, Russia and Institute of Mathematics and Mechanics of Ural Branch of Russian Academy of Sciences, Ekaterinburg (Russian Federation)

    2015-11-30

    A mathematical model, numerical algorithm and program code for simulation and long-term forecasting of changes in permafrost resulting from the operation of a multiple-well pad of a northern oil and gas field are presented. The model takes into account the most significant climatic and physical factors, such as solar radiation determined by the specific geographical location, the heterogeneous structure of frozen soil, thermal stabilization of soil, possible insulation of the objects, seasonal fluctuations in air temperature, and freezing and thawing of the upper soil layer. Results of the computations are presented.
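
    A minimal sketch of the kind of computation involved: one-dimensional heat conduction in a soil column driven by a seasonal surface temperature, solved with an explicit finite-difference scheme. The diffusivity, forcing amplitudes, and boundary conditions are illustrative assumptions, and the latent heat of freezing and thawing, which the actual model includes, is omitted here.

      import numpy as np

      # Illustrative soil parameters (frozen-soil latent heat and insulation omitted).
      alpha = 1e-6                      # thermal diffusivity, m^2/s
      depth, nz = 10.0, 101             # 10 m soil column
      dz = depth / (nz - 1)
      dt = 0.4 * dz**2 / alpha          # explicit stability limit: dt <= dz^2/(2*alpha)
      year = 365.25 * 24 * 3600.0

      T = np.full(nz, -5.0)             # initial permafrost temperature, deg C
      T_mean, T_amp = -8.0, 20.0        # assumed annual surface forcing

      t = 0.0
      while t < 5 * year:               # five simulated years
          T[0] = T_mean + T_amp * np.sin(2 * np.pi * t / year)  # seasonal surface BC
          T[-1] = T[-2]                 # zero-flux bottom boundary
          T[1:-1] += alpha * dt / dz**2 * (T[2:] - 2 * T[1:-1] + T[:-2])
          t += dt

      print("temperature at 5 m depth after 5 years:", T[nz // 2])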

  12. Advanced Computational Methods in Bio-Mechanics.

    Science.gov (United States)

    Al Qahtani, Waleed M S; El-Anwar, Mohamed I

    2018-04-15

    A novel partnership between surgeons and machines, made possible by advances in computing and engineering technology, could overcome many of the limitations of traditional surgery. By extending surgeons' ability to plan and carry out surgical interventions more accurately and with less trauma, computer-integrated surgery (CIS) systems could help to improve clinical outcomes and the efficiency of healthcare delivery. CIS systems could have an impact on surgery similar to that long since realised in computer-integrated manufacturing. Mathematical modelling and computer simulation have proved tremendously successful in engineering. Computational mechanics has enabled technological developments in virtually every area of our lives. One of the greatest challenges for mechanists is to extend the success of computational mechanics to fields outside traditional engineering, in particular to biology, the biomedical sciences, and medicine. Biomechanics has significant potential for applications in the orthopaedic industry and the performing arts, since the skills needed for these activities are visibly related to the human musculoskeletal and nervous systems. Although biomechanics is widely used nowadays in the orthopaedic industry to design orthopaedic implants for human joints, dental parts, external fixations and other medical purposes, numerous research efforts funded by billions of dollars are still under way to build a new future for sports and human healthcare in what is called the biomechanics era.

  13. Phantom measurements and computed estimates of breast dose with radiotherapy for Hodgkin's lymphoma: dose reduction with the use of the involved field

    International Nuclear Information System (INIS)

    Wirth, A.; Kron, T.; Sorell, G.; Cramb, J.; Wittwer, H.; Sullivan, K.

    2008-01-01

    Full text: The risk of breast cancer following radiotherapy for Hodgkin's lymphoma appears to be dose related. In this study we compared breast dose in an anthropomorphic phantom for conventional 'mantle', upper mediastinal/bilateral neck (minimantle) and unilateral neck fields, and evaluated the accuracy of computer-planned dose estimates for out-of-field doses. For each field, computer-planned breast dose (CPD) estimates were compared with thermoluminescence dosimetry measurements at five locations within 'breast tissue'. CPD were also compared with ion chamber measurements in a slab phantom. Measured dose and CPD were within 20% of each other up to approximately 10 cm from the field edge. Beyond 10 cm, the CPD underestimated dose by a factor of 2 or more. The minimantle reduced the breast dose by a factor of approximately 10 compared with the mantle treatment. Treating the neck field alone lowered the breast dose by a further 50% or more. Modern involved-field radiotherapy for lymphoma substantially reduces breast dose compared with mantle fields. Computer dosimetry underestimated dose at larger distances from the field. This needs to be considered if computer dosimetry is used to estimate breast dose and, by extrapolation, breast cancer risk.

  14. A combined vector potential-scalar potential method for FE computation of 3D magnetic fields in electrical devices with iron cores

    Science.gov (United States)

    Wang, R.; Demerdash, N. A.

    1991-01-01

    A method of combined use of magnetic vector potential based finite-element (FE) formulations and magnetic scalar potential (MSP) based formulations for the computation of three-dimensional magnetostatic fields is introduced. In this method, the curl component of the magnetic field intensity is computed from a reduced magnetic vector potential. This field intensity forms the basis of a forcing function for a global magnetic scalar potential solution over the entire volume of the region. The method allows one to include iron portions sandwiched between conductors within partitioned current-carrying subregions. The method is most suited for large-scale, global-type 3-D magnetostatic field computations in electrical devices, and in particular in rotating electric machinery.

  15. Solar magnetic field - 1976 through 1985: an atlas of photospheric magnetic field observations and computed coronal magnetic fields from the John M. Wilcox Solar Observatory at Stanford, 1976-1985

    International Nuclear Information System (INIS)

    Hoeksema, J.T.; Scherrer, P.H.

    1986-01-01

    Daily magnetogram observations of the large-scale photospheric magnetic field have been made at the John M. Wilcox Solar Observatory at Stanford since May 1976. These measurements provide a homogeneous record of the changing solar field through most of Solar Cycle 21. Using the photospheric data, the configuration of the coronal and heliospheric fields can be calculated using a Potential Field - Source Surface model. This provides a three-dimensional picture of the heliospheric field evolution during the solar cycle. In this report the authors present the complete set of synoptic charts of the measured photospheric magnetic field, the computed field at the source surface, and the coefficients of the multipole expansion of the coronal field. The general underlying structure of the solar and heliospheric fields, which determines the environment for solar-terrestrial relations and provides the context within which solar-activity-related events occur, can be approximated from these data.

  16. Accurate X-Ray Spectral Predictions: An Advanced Self-Consistent-Field Approach Inspired by Many-Body Perturbation Theory.

    Science.gov (United States)

    Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David

    2017-03-03

    Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predict x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.

  17. Developing a framework for assessing muscle effort and postures during computer work in the field: The effect of computer activities on neck/shoulder muscle effort and postures

    NARCIS (Netherlands)

    Garza, J.L.B.; Eijckelhof, B.H.W.; Johnson, P.W.; Raina, S.M.; Rynell, P.; Huysmans, M.A.; Dieën, J.H. van; Beek, A.J. van der; Blatter, B.M.; Dennerlein, J.T.

    2012-01-01

    The present study, a part of the PROOF (PRedicting Occupational biomechanics in OFfice workers) study, aimed to determine whether trapezius muscle effort was different across computer activities in a field study of computer workers, and also investigated whether head and shoulder postures were

  18. Developing a framework for assessing muscle effort and postures during computer work in the field: the effect of computer activities on neck/shoulder muscle effort and postures

    NARCIS (Netherlands)

    Bruno-Garza, J.L.; Eijckelhof, B.H.W.; Johnson, P.W.; Raina, S.M.; Rynell, P.; Huijsmans, M.A.; van Dieen, J.H.; van der Beek, A.J.; Blatter, B.M.; Dennerlein, J.T.

    2012-01-01

    The present study, a part of the PROOF (PRedicting Occupational biomechanics in OFfice workers) study, aimed to determine whether trapezius muscle effort was different across computer activities in a field study of computer workers, and also investigated whether head and shoulder postures were

  19. Accurate Predictions of Mean Geomagnetic Dipole Excursion and Reversal Frequencies, Mean Paleomagnetic Field Intensity, and the Radius of Earth's Core Using McLeod's Rule

    Science.gov (United States)

    Voorhies, Coerte V.; Conrad, Joy

    1996-01-01

    The geomagnetic spatial power spectrum R_n(r) is the mean-square magnetic induction represented by degree-n spherical harmonic coefficients of the internal scalar potential, averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core-surface power spectrum ⟨R_nc(c)⟩ is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main-field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree-1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 × 10^22 A m^2 rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 μT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ² with 2n + 1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤ 20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration of such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field

  20. A newly developed maneuver, field change conversion (FCC), improves the accuracy of left ventricular volume evaluation on quantitative gated SPECT (QGS) analysis

    International Nuclear Information System (INIS)

    Tajima, Osamu; Shibasaki, Masaki; Hoshi, Toshiko; Imai, Kamon

    2002-01-01

    The purpose of this study was to investigate whether a newly developed maneuver that reduces the reconstruction area by half evaluates left ventricular (LV) volume more accurately on quantitative gated SPECT (QGS) analysis. The subjects were 38 patients who underwent left ventricular angiography (LVG) followed by G-SPECT within 2 weeks. Acquisition was performed with a general-purpose collimator and a 64 x 64 matrix. On QGS analysis, the field magnification was 34 cm in the original image (Original: ORI), and it was then changed from 34 cm to 17 cm to enlarge the reconstructed image (Field Change Conversion: FCC). End-diastolic volume (EDV) and end-systolic volume (ESV) of the left ventricle were also obtained using LVG. EDV was 71±19 ml, 83±20 ml and 98±23 ml for ORI, FCC and LVG, respectively (p<0.001: ORI versus LVG, p<0.001: ORI versus FCC, p<0.001: FCC versus LVG). ESV was 28±12 ml, 34±13 ml and 41±14 ml for ORI, FCC and LVG, respectively (p<0.001: ORI versus LVG, p<0.001: ORI versus FCC, p<0.001: FCC versus LVG). FCC was better than ORI for calculating LV volume in clinical cases. Furthermore, FCC is a useful method for accurately measuring LV volume on QGS analysis. (author)

  1. Simultaneous fitting of a potential-energy surface and its corresponding force fields using feedforward neural networks

    Science.gov (United States)

    Pukrittayakamee, A.; Malshe, M.; Hagan, M.; Raff, L. M.; Narulkar, R.; Bukkapatnum, S.; Komanduri, R.

    2009-04-01

    An improved neural network (NN) approach is presented for the simultaneous development of accurate potential-energy hypersurfaces and corresponding force fields that can be utilized to conduct ab initio molecular dynamics and Monte Carlo studies on gas-phase chemical reactions. The method is termed the combined function derivative approximation (CFDA). The novelty of the CFDA method lies in the fact that although the NN has only a single output neuron that represents potential energy, the network is trained in such a way that the derivatives of the NN output match the gradient of the potential-energy hypersurface. Accurate force fields can therefore be computed simply by differentiating the network. Both the computed energies and the gradients are then accurately interpolated using the NN. This approach is superior to having the gradients appear in the output layer of the NN because it greatly simplifies the required architecture of the network. The CFDA permits weighting of function fitting relative to gradient fitting. In every test that we have run on six different systems, CFDA training (without a validation set) has produced smaller out-of-sample testing error than early stopping (with a validation set) or Bayesian regularization (without a validation set). This indicates that CFDA training does a better job of preventing overfitting than the standard methods currently in use. The training data can be obtained using an empirical potential surface or any ab initio method. The accuracy and interpolation power of the method have been tested for the reaction dynamics of H+HBr using an analytical potential. The results show that the present NN training technique produces more accurate fits to both the potential-energy surface as well as the corresponding force fields than the previous methods. The fitting and interpolation accuracy is so high (rms error = 1.2 cm⁻¹) that trajectories computed on the NN potential exhibit point-by-point agreement with corresponding
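
    The core idea, fitting function values and derivatives simultaneously, can be shown with a linear-in-parameters stand-in for the neural network. In the sketch below, a polynomial model is fit by least squares to both the energies and the forces of a Morse-like potential, with a weight w_F controlling the balance between function and gradient fitting; all model details are illustrative assumptions, not the CFDA network itself.

      import numpy as np

      # Training data from a model potential (Morse-like, illustrative).
      x = np.linspace(0.5, 3.0, 40)
      E = (1 - np.exp(-(x - 1.0))) ** 2
      F = -2 * (1 - np.exp(-(x - 1.0))) * np.exp(-(x - 1.0))   # force = -dE/dx

      # Polynomial basis: E(x) = sum_k w_k x^k, so dE/dx = sum_k k w_k x^(k-1).
      K = 8
      Phi = np.vander(x, K, increasing=True)                   # function design matrix
      dPhi = np.hstack([np.zeros((x.size, 1)),
                        Phi[:, :-1] * np.arange(1, K)])        # derivative design matrix

      w_F = 0.5   # relative weight of gradient fitting vs. function fitting
      A = np.vstack([Phi, w_F * dPhi])
      b = np.hstack([E, w_F * (-F)])                           # gradient = -force
      w, *_ = np.linalg.lstsq(A, b, rcond=None)

      E_fit, F_fit = Phi @ w, -(dPhi @ w)
      print("rms energy error:", np.sqrt(np.mean((E_fit - E) ** 2)))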

  2. Possible Computer Vision Systems and Automated or Computer-Aided Edging and Trimming

    Science.gov (United States)

    Philip A. Araman

    1990-01-01

    This paper discusses research which is underway to help our industry reduce costs, increase product volume and value recovery, and market more accurately graded and described products. The research is part of a team effort to help the hardwood sawmill industry automate with computer vision systems, and computer-aided or computer controlled processing. This paper...

  3. Accurate expansion of cylindrical paraxial waves for its straightforward implementation in electromagnetic scattering

    Science.gov (United States)

    Naserpour, Mahin; Zapata-Rodríguez, Carlos J.

    2018-01-01

    The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally we discuss how our procedure can be favorably implemented in scattering problems.
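
    As a rough numerical illustration of such an expansion, the sketch below fits a finite series of J0 Bessel modes, exact transverse solutions of the Helmholtz equation in cylindrical coordinates, to a low-NA Gaussian focal profile; the wavelength, aperture, and mode set are assumptions chosen for the example.

      import numpy as np
      from scipy.special import j0

      # Transverse grid and a target low-NA Gaussian focal profile (illustrative).
      r = np.linspace(0, 50e-6, 400)              # radius, m
      w0 = 10e-6
      target = np.exp(-(r / w0) ** 2)

      # Basis: J0(k_rho * r) with assumed low-NA transverse wavenumbers.
      k = 2 * np.pi / 633e-9                      # vacuum wavenumber, HeNe line
      NA = 0.1
      k_rho = np.linspace(0.01, NA, 25) * k
      B = j0(np.outer(r, k_rho))                  # design matrix, shape (Nr, Nmodes)

      coef, *_ = np.linalg.lstsq(B, target, rcond=None)
      recon = B @ coef
      print("max reconstruction error:", np.max(np.abs(recon - target)))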

  4. Simple, accurate equations for human blood O2 dissociation computations.

    Science.gov (United States)

    Severinghaus, J W

    1979-03-01

    Hill's equation can be slightly modified to fit the standard human blood O2 dissociation curve to within ±0.0055 fractional saturation (S) for 0 < S < 1. Other modifications of Hill's equation may be used to compute Po2 (Torr) from S (Eq. 2), and the temperature coefficient of Po2 (Eq. 3). Variations of the Bohr coefficient with Po2 are given by Eq. 4.

    S = ((Po2^3 + 150 Po2)^-1 × 23,400 + 1)^-1   (1)

    ln Po2 = 0.385 ln(S^-1 - 1)^-1 + 3.32 - (72 S)^-1 - 0.17 S^6   (2)

    Δln Po2/ΔT = 0.058 ((0.243 × Po2/100)^3.88 + 1)^-1 + 0.013   (3)

    Δln Po2/ΔpH = (Po2/26.6)^0.184 - 2.2   (4)

    Procedures are described to determine Po2 and S of blood iteratively after extraction or addition of a defined amount of O2, and to compute P50 of blood from a single sample after measuring Po2, pH, and S.
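
    A direct transcription of Eqs. 1 and 2 into code; the round-trip check at Po2 = 26.6 Torr (the nominal P50 of the standard curve) is an illustrative usage example.

      import math

      def sat_from_po2(po2):
          """Severinghaus Eq. 1: fractional O2 saturation S from Po2 (Torr)."""
          return 1.0 / (23400.0 / (po2**3 + 150.0 * po2) + 1.0)

      def po2_from_sat(s):
          """Severinghaus Eq. 2: Po2 (Torr) back from fractional saturation S."""
          return math.exp(0.385 * math.log(1.0 / (1.0 / s - 1.0))
                          + 3.32 - 1.0 / (72.0 * s) - 0.17 * s**6)

      # Round trip near P50: Po2 = 26.6 Torr should give S close to 0.5.
      s = sat_from_po2(26.6)
      print(s, po2_from_sat(s))   # ~0.494 and ~26.6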

  5. High speed resonant frequency determination applied to field mapping using perturbation techniques

    International Nuclear Information System (INIS)

    Smith, B.H.; Burton, R.J.; Hutcheon, R.M.

    1992-01-01

    Perturbation techniques are commonly used for measuring electric and magnetic field distributions in resonant structures. A field measurement system has been assembled using a Hewlett Packard model 8753C network analyzer interfaced via an HPIB bus to a personal computer to form an accurate, rapid and flexible system for data acquisition, control, and analysis of such measurements. Characterization of long linac structures (up to 3 m) is accomplished in about three minutes, minimizing thermal drift effects. This paper describes the system, its application and its extension to applications such as confirming the presence of weak, off-axis quadrupole fields in an on-axis coupled linac. (Author) 5 figs., 10 refs

  6. Accurate magnetic field calculations for contactless energy transfer coils

    NARCIS (Netherlands)

    Sonntag, C.L.W.; Spree, M.; Lomonova, E.A.; Duarte, J.L.; Vandenput, A.J.A.

    2007-01-01

    In this paper, a method for estimating the magnetic field intensity from hexagon spiral windings commonly found in contactless energy transfer applications is presented. The hexagonal structures are modeled in a magneto-static environment using Biot-Savart current stick vectors. The accuracy of the
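
    A minimal sketch of the Biot-Savart current-stick idea: the closed-form field of a finite straight segment, summed over the segments of a flat hexagonal winding. The winding geometry, dimensions, and current are illustrative assumptions, not the coil of the paper.

      import numpy as np

      MU0 = 4e-7 * np.pi

      def current_stick_B(p, a, b, current):
          """Magnetic flux density at point p from a straight current stick a->b
          (closed-form Biot-Savart result for a finite segment)."""
          r1, r2 = np.asarray(p, float) - a, np.asarray(p, float) - b
          n1, n2 = np.linalg.norm(r1), np.linalg.norm(r2)
          return (MU0 * current / (4 * np.pi) * (n1 + n2) * np.cross(r1, r2)
                  / (n1 * n2 * (n1 * n2 + r1 @ r2)))

      def hexagon_spiral_B(p, turns, radius0, pitch, current):
          """Field of a flat hexagonal spiral approximated by current sticks;
          the winding layout here is an illustrative assumption."""
          B = np.zeros(3)
          for t in range(turns):
              r = radius0 + t * pitch
              corners = [r * np.array([np.cos(k * np.pi / 3), np.sin(k * np.pi / 3), 0.0])
                         for k in range(7)]          # 7 points close the hexagon
              for a, b in zip(corners[:-1], corners[1:]):
                  B += current_stick_B(p, a, b, current)
          return B

      print(hexagon_spiral_B(np.array([0.0, 0.0, 0.01]), turns=5,
                             radius0=0.02, pitch=0.002, current=1.0))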

  7. Unsteady Reynolds averaged Navier-Stokes: toward accurate predictions in fuel-bundles and T-junctions

    International Nuclear Information System (INIS)

    Merzari, E.; Ninokata, H.; Baglietto, E.

    2008-01-01

    Traditional steady-state simulation and turbulence modelling are not always reliable. Even in simple flows, the results can be inaccurate when particular conditions occur; examples are buoyancy, flow oscillations, and turbulent mixing. Often, unsteady simulations are necessary, but they tend to be computationally unaffordable. The Unsteady Reynolds-Averaged Navier-Stokes (URANS) approach holds promise to be less computationally expensive than Large Eddy Simulation (LES) or Direct Numerical Simulation (DNS) while reaching a considerable degree of accuracy. Moreover, URANS methodologies do not need complex boundary formulations for the inlet and the outlet like LES or DNS. The test cases for this methodology are fuel bundles and T-junctions. Tight fuel rod bundles present large-scale coherent structures that cannot be captured by a simple steady-state simulation. T-junctions, where a hot fluid and a cold fluid mix, present temperature fluctuations and therefore thermal fatigue. For both cases the capacity of the methodology to reproduce the flow field is assessed, and it is concluded that URANS holds promise to become the industrial standard in nuclear engineering applications that do not involve buoyancy. The codes employed are STAR-CD 3.26 and 4.06. (author)

  8. Snore related signals processing in a private cloud computing system.

    Science.gov (United States)

    Qian, Kun; Guo, Jian; Xu, Huijie; Zhu, Zhaomeng; Zhang, Gongxuan

    2014-09-01

    Snore related signals (SRS) have been demonstrated in recent years to carry important information about the obstruction site and degree in the upper airway of Obstructive Sleep Apnea-Hypopnea Syndrome (OSAHS) patients. To make this acoustic signal analysis method more accurate and robust, processing of large volumes of SRS data is inevitable. As an emerging concept and technology, cloud computing has motivated numerous researchers and engineers to develop applications in both academia and industry, and it has the potential to enable large-scale applications in biomedical engineering. Considering the security and transfer requirements of biomedical data, we designed a system based on private cloud computing to process SRS. We then ran comparative experiments, processing a 5-hour audio recording of an OSAHS patient on a personal computer, a server, and a private cloud computing system, to demonstrate the efficiency of the proposed infrastructure.

  9. Computer Based Procedures for Field Workers - FY16 Research Activities

    International Nuclear Information System (INIS)

    Oxstrand, Johanna; Bly, Aaron

    2016-01-01

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages - Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and path forward to

  10. Computer Based Procedures for Field Workers - FY16 Research Activities

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bly, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages – Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and path forward to

  11. Complex image method for calculating electric and magnetic fields produced by an auroral electrojet of finite length

    Directory of Open Access Journals (Sweden)

    R. Pirjola

    1998-11-01

    Full Text Available The electromagnetic field due to ionospheric currents has to be known when evaluating space weather effects at the earth's surface. Forecasting methods for these effects, which include geomagnetically induced currents in technological systems, are being developed. Such applications are time-critical, so the calculation techniques for the electromagnetic field have to be fast but still accurate. The contribution of secondary sources induced within the earth leads to complicated integral formulas for the field at the earth's surface, with a time-consuming computation. An approximate method of calculation, based on replacing the earth contribution by an image source having mathematically a complex location, results in closed-form expressions and a much faster computation. In this paper we extend the complex image method (CIM) to the case of a more realistic electrojet system consisting of a horizontal line current filament with vertical currents at its ends above a layered earth. To be able to utilize previous CIM results, we prove that the current system can be replaced by a purely horizontal current distribution which is equivalent regarding the total (= primary + induced) magnetic field and the total horizontal electric field at the earth's surface. The latter result is new. Numerical calculations demonstrate that CIM is very accurate and several orders of magnitude faster than the exact conventional approach.
    Key words: Electromagnetic theory · Geomagnetic induction · Auroral ionosphere
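
    The computational appeal of CIM is easy to see in the simplest configuration: for an infinite horizontal line current at height h above a uniform earth, the induced contribution to the surface field is approximated by an image current at the complex depth h + 2p, with p the complex skin depth. The sketch below computes the horizontal magnetic field on this basis; the uniform half-space, line geometry, and parameter values are simplifying assumptions relative to the layered-earth, finite-electrojet case treated in the paper.

      import numpy as np

      MU0 = 4e-7 * np.pi

      def surface_Bx(x, I, h, sigma, freq):
          """Horizontal surface magnetic field of an infinite line current at height h
          above a uniform earth of conductivity sigma. The induced part is modeled as
          an image at complex depth h + 2p, p = 1/sqrt(i*omega*mu0*sigma); a sketch
          for a uniform half-space, not the layered-earth case."""
          omega = 2 * np.pi * freq
          p = 1.0 / np.sqrt(1j * omega * MU0 * sigma)   # complex skin depth
          primary = MU0 * I / (2 * np.pi) * h / (x**2 + h**2)
          image = MU0 * I / (2 * np.pi) * (h + 2 * p) / (x**2 + (h + 2 * p) ** 2)
          return primary + image    # complex amplitude; take .real for the instantaneous field

      x = np.linspace(-300e3, 300e3, 7)                 # surface points, m
      print(surface_Bx(x, I=1e6, h=100e3, sigma=1e-3, freq=1e-3))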

  12. Perspective: Ab initio force field methods derived from quantum mechanics

    Science.gov (United States)

    Xu, Peng; Guidez, Emilie B.; Bertoni, Colleen; Gordon, Mark S.

    2018-03-01

    It is often desirable to accurately and efficiently model the behavior of large molecular systems in the condensed phase (thousands to tens of thousands of atoms) over long time scales (from nanoseconds to milliseconds). In these cases, ab initio methods are difficult due to the increasing computational cost with the number of electrons. A more computationally attractive alternative is to perform the simulations at the atomic level using a parameterized function to model the electronic energy. Many empirical force fields have been developed for this purpose. However, the functions that are used to model interatomic and intermolecular interactions contain many fitted parameters obtained from selected model systems, and such classical force fields cannot properly simulate important electronic effects. Furthermore, while such force fields are computationally affordable, they are not reliable when applied to systems that differ significantly from those used in their parameterization. They also cannot provide the information necessary to analyze the interactions that occur in the system, making the systematic improvement of the functional forms that are used difficult. Ab initio force field methods aim to combine the merits of both types of methods. The ideal ab initio force fields are built on first principles and require no fitted parameters. Ab initio force field methods surveyed in this perspective are based on fragmentation approaches and intermolecular perturbation theory. This perspective summarizes their theoretical foundation, key components in their formulation, and discusses key aspects of these methods such as accuracy and formal computational cost. The ab initio force fields considered here were developed for different targets, and this perspective also aims to provide a balanced presentation of their strengths and shortcomings. Finally, this perspective suggests some future directions for this actively developing area.

  13. Computer-aided dispatch--traffic management center field operational test final detailed test plan : WSDOT deployment

    Science.gov (United States)

    2003-10-01

    The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : WSDOT deployment". This document defines the objective, approach,...

  14. Enriching Elementary Quantum Mechanics with the Computer: Self-Consistent Field Problems in One Dimension

    Science.gov (United States)

    Bolemon, Jay S.; Etzold, David J.

    1974-01-01

    Discusses the use of a small computer to solve self-consistent field problems of one-dimensional systems of two or more interacting particles in an elementary quantum mechanics course. Indicates that the calculation can serve as a useful introduction to the iterative technique. (CC)
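
    A minimal sketch of such an exercise: two particles in a one-dimensional harmonic trap interacting through a soft-Coulomb potential, solved by iterating a mean-field (Hartree-type) eigenvalue problem to self-consistency. Grid size, potentials, and the damping factor are illustrative assumptions.

      import numpy as np

      # Grid and external potential (1D harmonic trap; parameters illustrative).
      n, L = 200, 10.0
      x = np.linspace(-L / 2, L / 2, n)
      dx = x[1] - x[0]
      v_ext = 0.5 * x**2

      # Kinetic energy: second-order finite differences (hbar = m = 1).
      T = (np.diag(np.full(n, 1.0 / dx**2))
           - np.diag(np.full(n - 1, 0.5 / dx**2), 1)
           - np.diag(np.full(n - 1, 0.5 / dx**2), -1))

      # Soft-Coulomb interaction between the two particles.
      W = 1.0 / np.sqrt((x[:, None] - x[None, :]) ** 2 + 1.0)

      psi = np.exp(-x**2 / 2)
      psi /= np.sqrt(np.sum(psi**2) * dx)

      for it in range(100):
          v_h = W @ (psi**2) * dx                 # mean-field potential from the partner
          H = T + np.diag(v_ext + v_h)
          eps, vecs = np.linalg.eigh(H)
          psi_new = vecs[:, 0] / np.sqrt(np.sum(vecs[:, 0] ** 2) * dx)
          psi_new *= np.sign(psi_new[n // 2])     # fix arbitrary eigenvector sign
          if np.max(np.abs(psi_new - psi)) < 1e-8:  # self-consistency reached
              break
          psi = 0.5 * psi + 0.5 * psi_new         # damped update aids convergence
          psi /= np.sqrt(np.sum(psi**2) * dx)

      print(f"orbital energy after {it + 1} iterations: {eps[0]:.4f}")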

  15. Some exact computations on the twisted butterfly state in string field theory

    International Nuclear Information System (INIS)

    Okawa, Yuji

    2004-01-01

    The twisted butterfly state solves the equation of motion of vacuum string field theory in the singular limit. The finiteness of the energy density of the solution is an important issue, but possible conformal anomaly resulting from the twisting has prevented us from addressing this problem. We present a description of the twisted regulated butterfly state in terms of a conformal field theory with a vanishing central charge which consists of the ordinary bc ghosts and a matter system with c=26. Various quantities relevant to vacuum string field theory are computed exactly using this description. We find that the energy density of the solution can be finite in the limit, but the finiteness depends on the sub leading structure of vacuum string field theory. We further argue, contrary to our previous expectation, that contributions from sub leading terms in the kinetic term to the energy density can be of the same order as the contribution from the leading term which consists of the midpoint ghost insertion. (author)

  16. Accurate calculation of mutational effects on the thermodynamics of inhibitor binding to p38α MAP kinase: a combined computational and experimental study.

    Science.gov (United States)

    Zhu, Shun; Travis, Sue M; Elcock, Adrian H

    2013-07-09

    A major current challenge for drug design efforts focused on protein kinases is the development of drug resistance caused by spontaneous mutations in the kinase catalytic domain. The ubiquity of this problem means that it would be advantageous to develop fast, effective computational methods that could be used to determine the effects of potential resistance-causing mutations before they arise in a clinical setting. With this long-term goal in mind, we have conducted a combined experimental and computational study of the thermodynamic effects of active-site mutations on a well-characterized and high-affinity interaction between a protein kinase and a small-molecule inhibitor. Specifically, we developed a fluorescence-based assay to measure the binding free energy of the small-molecule inhibitor, SB203580, to the p38α MAP kinase and used it to measure the inhibitor's affinity for five different kinase mutants involving two residues (Val38 and Ala51) that contact the inhibitor in the crystal structure of the inhibitor-kinase complex. We then conducted long, explicit-solvent thermodynamic integration (TI) simulations in an attempt to reproduce the experimental relative binding affinities of the inhibitor for the five mutants; in total, a combined simulation time of 18.5 μs was obtained. Two widely used force fields - OPLS-AA/L and Amber ff99SB-ILDN - were tested in the TI simulations. Both force fields produced excellent agreement with experiment for three of the five mutants; simulations performed with the OPLS-AA/L force field, however, produced qualitatively incorrect results for the constructs that contained an A51V mutation. Interestingly, the discrepancies with the OPLS-AA/L force field could be rectified by the imposition of position restraints on the atoms of the protein backbone and the inhibitor without destroying the agreement for other mutations; the ability to reproduce experiment depended, however, upon the strength of the restraints' force constant.
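
    For orientation, the TI estimate itself is a one-dimensional quadrature over window averages of dU/dλ. The sketch below applies the trapezoidal rule to hypothetical window data; the λ schedule and the numbers are invented for illustration.

      import numpy as np

      # Hypothetical window averages <dU/dlambda> from equilibrium simulations at
      # a few fixed lambda values (kcal/mol; numbers illustrative).
      lam = np.array([0.0, 0.1, 0.3, 0.5, 0.7, 0.9, 1.0])
      dU_dlam = np.array([12.4, 9.8, 5.1, 1.9, -0.8, -2.6, -3.1])

      # Thermodynamic integration: Delta G = integral_0^1 <dU/dlambda> dlambda.
      delta_G = np.trapz(dU_dlam, lam)
      print(f"Delta G = {delta_G:.2f} kcal/mol")

      # A relative binding free energy (mutant vs. wild type) would come from the
      # difference of two such legs of a thermodynamic cycle:
      # ddG_bind = delta_G_complex - delta_G_apo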

  17. Computation of Ground-State Properties in Molecular Systems: Back-Propagation with Auxiliary-Field Quantum Monte Carlo.

    Science.gov (United States)

    Motta, Mario; Zhang, Shiwei

    2017-11-14

    We address the computation of ground-state properties of chemical systems and realistic materials within the auxiliary-field quantum Monte Carlo method. The phase constraint to control the Fermion phase problem requires the random walks in Slater determinant space to be open-ended with branching. This in turn makes it necessary to use back-propagation (BP) to compute averages and correlation functions of operators that do not commute with the Hamiltonian. Several BP schemes are investigated, and their optimization with respect to the phaseless constraint is considered. We propose a modified BP method for the computation of observables in electronic systems, discuss its numerical stability and computational complexity, and assess its performance by computing ground-state properties in several molecular systems, including small organic molecules.

  18. Auxiliary fields as a tool for computing analytical solutions of the Schroedinger equation

    International Nuclear Information System (INIS)

    Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien

    2008-01-01

    We propose a new method to obtain approximate solutions of the Schroedinger equation for an arbitrary potential that possesses bound states. This method, relying on the auxiliary field technique, allows one to find analytical solutions in many cases. It offers a convenient way to study the qualitative features of the energy spectrum of bound states in any potential. In particular, we illustrate our method by solving the case of central potentials with power-law form and with logarithmic form. For these types of potentials, we propose very accurate analytical energy formulae which greatly improve the corresponding formulae that can be found in the literature

  19. Auxiliary fields as a tool for computing analytical solutions of the Schroedinger equation

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be

    2008-07-11

    We propose a new method to obtain approximate solutions of the Schroedinger equation for an arbitrary potential that possesses bound states. This method, relying on the auxiliary field technique, allows one to find analytical solutions in many cases. It offers a convenient way to study the qualitative features of the energy spectrum of bound states in any potential. In particular, we illustrate our method by solving the case of central potentials with power-law form and with logarithmic form. For these types of potentials, we propose very accurate analytical energy formulae which greatly improve the corresponding formulae that can be found in the literature.

  20. THE CHALLENGE OF THE PERFORMANCE CONCEPT WITHIN THE SUSTAINABILITY AND COMPUTATIONAL DESIGN FIELD

    Directory of Open Access Journals (Sweden)

    Marcio Nisenbaum

    2017-11-01

    Full Text Available This paper discusses the notion of performance and its appropriation within the research fields related to sustainability and computational design, focusing on the design processes of the architectural and urban fields. Recently, terms such as “performance oriented design” or “performance driven architecture”, especially when related to sustainability, have been used by many authors and professionals as an attempt to engender project guidelines based on simulation processes and systematic use of digital tools. In this context, the notion of performance has basically been understood as the way in which an action is fulfilled, agreeing to contemporary discourses of efficiency and optimization – in this circumstance it is considered that a building or urban area “performs” if it fulfills certain objective sustainability evaluation criteria, reduced to mathematical parameters. This paper intends to broaden this understanding by exploring new theoretical interpretations, referring to etymological investigation, historical research, and literature review, based on authors from different areas and on the case study of the solar houses academic competition, Solar Decathlon. This initial analysis is expected to contribute to the emergence of new forms of interpretation of the performance concept, relativizing the notion of the “body” that “performs” in different manners, thus enhancing its appropriation and use within the fields of sustainability and computational design.

  1. An energy-stable time-integrator for phase-field models

    KAUST Repository

    Vignal, Philippe

    2016-12-27

    We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.

  2. An energy-stable time-integrator for phase-field models

    KAUST Repository

    Vignal, Philippe; Collier, N.; Dalcin, Lisandro; Brown, D.L.; Calo, V.M.

    2016-01-01

    We introduce a provably energy-stable time-integration method for general classes of phase-field models with polynomial potentials. We demonstrate how Taylor series expansions of the nonlinear terms present in the partial differential equations of these models can lead to expressions that guarantee energy-stability implicitly, which are second-order accurate in time. The spatial discretization relies on a mixed finite element formulation and isogeometric analysis. We also propose an adaptive time-stepping discretization that relies on a first-order backward approximation to give an error-estimator. This error estimator is accurate, robust, and does not require the computation of extra solutions to estimate the error. This methodology can be applied to any second-order accurate time-integration scheme. We present numerical examples in two and three spatial dimensions, which confirm the stability and robustness of the method. The implementation of the numerical schemes is done in PetIGA, a high-performance isogeometric analysis framework.
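
    A minimal sketch of the error-estimation idea on a scalar model problem: a second-order (trapezoidal) step is compared against a first-order backward approximation of the same step, and their difference drives the step-size controller. The model equation, tolerances, and growth factors are illustrative assumptions, not the phase-field setting of the paper.

      import numpy as np

      def f(u):
          return -u**3 + u          # illustrative Allen-Cahn-type scalar ODE

      def step_trap(u, dt):
          """Second-order (trapezoidal) step, solved by fixed-point iteration."""
          v = u
          for _ in range(50):
              v = u + 0.5 * dt * (f(u) + f(v))
          return v

      def step_be(u, dt):
          """First-order backward-Euler step, used only to estimate the error."""
          v = u
          for _ in range(50):
              v = u + dt * f(v)
          return v

      u, t, dt, tol = 2.0, 0.0, 1e-2, 1e-5
      while t < 5.0:
          u2, u1 = step_trap(u, dt), step_be(u, dt)
          err = abs(u2 - u1)        # first-order estimate of the local error
          if err > tol:
              dt *= 0.5             # reject and retry with a smaller step
              continue
          u, t = u2, t + dt
          if err < 0.1 * tol:
              dt = min(1.5 * dt, 0.1)  # grow the step, capped for iteration stability
      print(t, u, dt)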

  3. Trajectory Evaluation of Rotor-Flying Robots Using Accurate Inverse Computation Based on Algorithm Differentiation

    Directory of Open Access Journals (Sweden)

    Yuqing He

    2014-01-01

    Full Text Available Autonomous maneuvering flight control of rotor-flying robots (RFR is a challenging problem due to the highly complicated structure of its model and significant uncertainties regarding many aspects of the field. As a consequence, it is difficult in many cases to decide whether or not a flight maneuver trajectory is feasible. It is necessary to conduct an analysis of the flight maneuvering ability of an RFR prior to test flight. Our aim in this paper is to use a numerical method called algorithm differentiation (AD to solve this problem. The basic idea is to compute the internal state (i.e., attitude angles and angular rates and input profiles based on predetermined maneuvering trajectory information denoted by the outputs (i.e., positions and yaw angle and their higher-order derivatives. For this purpose, we first present a model of the RFR system and show that it is flat. We then cast the procedure for obtaining the required state/input based on the desired outputs as a static optimization problem, which is solved using AD and a derivative based optimization algorithm. Finally, we test our proposed method using a flight maneuver trajectory to verify its performance.
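
    A toy version of the flatness-based inversion: for a double-integrator stand-in for the RFR model, the input profile follows from differentiating the planned flat output, here done analytically on a polynomial trajectory rather than by algorithm differentiation. The mass and trajectory coefficients are assumed values.

      import numpy as np

      # Flat output: a polynomial position trajectory x(t) (coefficients illustrative).
      # For a double-integrator model m*x'' = u, the required input profile follows
      # directly from differentiating the planned output twice.
      m = 1.5                                        # vehicle mass, kg (assumed)
      coeffs = np.array([0.0, 0.0, 3.0, -2.0])       # x(t) = 3t^2 - 2t^3 on [0, 1]

      pos = np.polynomial.polynomial.Polynomial(coeffs)
      vel = pos.deriv(1)
      acc = pos.deriv(2)

      t = np.linspace(0.0, 1.0, 11)
      u = m * acc(t)                                  # required force profile
      for ti, xi, ui in zip(t, pos(t), u):
          print(f"t={ti:.1f}  x={xi:+.3f}  u={ui:+.3f}")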

  4. Effect of Numerical Error on Gravity Field Estimation for GRACE and Future Gravity Missions

    Science.gov (United States)

    McCullough, Christopher; Bettadpur, Srinivas

    2015-04-01

    In recent decades, gravity field determination from low Earth orbiting satellites, such as the Gravity Recovery and Climate Experiment (GRACE), has become increasingly more effective due to the incorporation of high accuracy measurement devices. Since instrumentation quality will only increase in the near future and the gravity field determination process is computationally and numerically intensive, numerical error from the use of double precision arithmetic will eventually become a prominent error source. While using double-extended or quadruple precision arithmetic will reduce these errors, the numerical limitations of current orbit determination algorithms and processes must be accurately identified and quantified in order to adequately inform the science data processing techniques of future gravity missions. The most obvious numerical limitation in the orbit determination process is evident in the comparison of measured observables with computed values, derived from mathematical models relating the satellites' numerically integrated state to the observable. Significant error in the computed trajectory will corrupt this comparison and induce error in the least squares solution of the gravitational field. In addition, errors in the numerically computed trajectory propagate into the evaluation of the mathematical measurement model's partial derivatives. These errors amalgamate in turn with numerical error from the computation of the state transition matrix, computed using the variational equations of motion, in the least squares mapping matrix. Finally, the solution of the linearized least squares system, computed using a QR factorization, is also susceptible to numerical error. Certain interesting combinations of each of these numerical errors are examined in the framework of GRACE gravity field determination to analyze and quantify their effects on gravity field recovery.
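
    The effect discussed above is easy to demonstrate in miniature: accumulating many terms in single precision loses digits that double precision (or compensated summation) retains. The sketch below is purely illustrative and unrelated to any specific GRACE processing code.

      import numpy as np

      def kahan_sum(values, dtype):
          """Compensated (Kahan) summation: recovers most of the round-off error."""
          s = dtype(0.0)
          c = dtype(0.0)
          for v in values:
              y = dtype(v) - c
              t = s + y
              c = (t - s) - y
              s = t
          return s

      vals = [0.1] * 100_000
      exact = 10_000.0

      f32 = np.float32(0.0)
      for v in vals:
          f32 += np.float32(v)          # naive single-precision accumulation
      f64 = np.float64(0.0)
      for v in vals:
          f64 += np.float64(v)          # same loop in double precision

      print("float32 error:", abs(float(f32) - exact))
      print("float64 error:", abs(float(f64) - exact))
      print("kahan f32 err:", abs(float(kahan_sum(vals, np.float32)) - exact))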

  5. A stabilized second-order time accurate finite element formulation for incompressible viscous flow with heat transfer

    International Nuclear Information System (INIS)

    Curi, Marcos Filardy

    2011-01-01

    In view of the problem of global warming and the search for clean energy sources, a worldwide expansion of the use of nuclear energy is foreseen. Thus, the development of science and technology regarding nuclear power plants is essential, in particular in the field of reactor engineering. Fluid mechanics and heat transfer play an important role in the development of nuclear reactors. Computational fluid dynamics (CFD) is becoming ever more important in the optimization of cost and safety of the designs. This work presents a stabilized second-order time accurate finite element formulation for incompressible flows with heat transfer. A second-order time discretization precedes a spatial discretization using finite elements. The terms that stabilize the finite element method arise naturally from the discretization process, rather than being introduced a priori in the variational formulation. The method was implemented in the program 'ns_new_solvec2d_av2_MPI', written in FORTRAN90 and developed in the Parallel Computing Laboratory at the Institute of Nuclear Engineering (LCP/IEN). Numerical solutions of some representative examples, including free, mixed and forced convection, demonstrate that the proposed stabilized formulation attains very good agreement with experimental and computational results available in the literature. (author)

  6. Computer-aided dispatch--traffic management center field operational test final test plans : state of Utah

    Science.gov (United States)

    2004-01-01

    The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : state of Utah". This document defines the objective, approach, an...

  7. Accurate donor electron wave functions from a multivalley effective mass theory.

    Science.gov (United States)

    Pendo, Luke; Hu, Xuedong

    Multivalley effective mass (MEM) theories combine physical intuition with a marginal need for computational resources, but they tend to be insensitive to variations in the wavefunction. However, recent papers suggest full Bloch functions and suitable central cell donor potential corrections are essential to replicating qualitative and quantitative features of the wavefunction. In this talk, we consider a variational MEM method that can accurately predict both the spectrum and the wavefunction of isolated phosphorus donors. Following Gamble et al., we employ a truncated series representation of the Bloch function with a tetrahedrally symmetric central cell correction. We use a dynamic dielectric constant, a feature commonly seen in tight-binding methods. Uniquely, we use a freely extensible basis of either all Slater- or all Gaussian-type functions. With a large basis able to capture the influence of higher energy eigenstates, this method is well positioned to consider the influence of external perturbations, such as electric field or applied strain, on the charge density. This work is supported by the US Army Research Office (W911NF1210609).
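
    The variational machinery, a generalized eigenproblem in a non-orthogonal Gaussian basis, can be illustrated on the hydrogenic ground state. The sketch below uses a standard textbook set of four s-type exponents; it is a toy stand-in, since the actual donor problem additionally requires Bloch functions and the central cell correction.

```python
# Toy illustration of a variational calculation in an all-Gaussian basis:
# hydrogen-like ground state (not the multivalley donor problem itself).
import numpy as np
from scipy.linalg import eigh

# Four s-type Gaussian exponents (a standard textbook set for hydrogen).
a = np.array([13.00773, 1.962079, 0.444529, 0.1219492])

ai, aj = np.meshgrid(a, a, indexing="ij")
S = (np.pi / (ai + aj)) ** 1.5                  # overlap matrix
T = 3.0 * ai * aj / (ai + aj) * S               # kinetic energy matrix
V = -2.0 * np.pi / (ai + aj)                    # nuclear attraction (Z = 1)

E, _ = eigh(T + V, S)                           # generalized eigenproblem
print("variational ground state (hartree):", E[0])  # ~ -0.49928 vs exact -0.5
```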

  8. Wide-field two-dimensional multifocal optical-resolution photoacoustic computed microscopy

    Science.gov (United States)

    Xia, Jun; Li, Guo; Wang, Lidai; Nasiriavanaki, Mohammadreza; Maslov, Konstantin; Engelbach, John A.; Garbow, Joel R.; Wang, Lihong V.

    2014-01-01

    Optical-resolution photoacoustic microscopy (OR-PAM) is an emerging technique that directly images optical absorption in tissue at high spatial resolution. To date, the majority of OR-PAM systems are based on single focused optical excitation and ultrasonic detection, limiting the wide-field imaging speed. While one-dimensional multifocal OR-PAM (1D-MFOR-PAM) has been developed, the potential of microlens and transducer arrays has not been fully realized. Here, we present the development of two-dimensional multifocal optical-resolution photoacoustic computed microscopy (2D-MFOR-PACM), using a 2D microlens array and a full-ring ultrasonic transducer array. The 10 × 10 mm² microlens array generates 1800 optical foci within the focal plane of the 512-element transducer array, and raster scanning the microlens array yields optical-resolution photoacoustic images. The system has improved the in-plane resolution of a full-ring transducer array from ≥100 µm to 29 µm and achieved an imaging time of 36 seconds over a 10 × 10 mm² field of view. In comparison, the 1D-MFOR-PAM would take more than 4 minutes to image over the same field of view. The imaging capability of the system was demonstrated on phantoms and animals both ex vivo and in vivo. PMID:24322226

  9. Extensions of the auxiliary field method to solve Schroedinger equations

    International Nuclear Information System (INIS)

    Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien

    2008-01-01

    It has recently been shown that the auxiliary field method is an interesting tool to compute approximate analytical solutions of the Schroedinger equation. This technique can generate the spectrum associated with an arbitrary potential V(r) starting from the analytically known spectrum of a particular potential P(r). In the present work, general important properties of the auxiliary field method are proved, such as scaling laws and independence of the results on the choice of P(r). The method is extended in order to find accurate analytical energy formulae for radial potentials of the form aP(r) + V(r), and several explicit examples are studied. Connections existing between the perturbation theory and the auxiliary field method are also discussed

  10. Extensions of the auxiliary field method to solve Schroedinger equations

    Energy Technology Data Exchange (ETDEWEB)

    Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be

    2008-10-24

    It has recently been shown that the auxiliary field method is an interesting tool to compute approximate analytical solutions of the Schroedinger equation. This technique can generate the spectrum associated with an arbitrary potential V(r) starting from the analytically known spectrum of a particular potential P(r). In the present work, general important properties of the auxiliary field method are proved, such as scaling laws and independence of the results on the choice of P(r). The method is extended in order to find accurate analytical energy formulae for radial potentials of the form aP(r) + V(r), and several explicit examples are studied. Connections existing between the perturbation theory and the auxiliary field method are also discussed.

  11. Computation of fields in an arbitrarily shaped heterogeneous dielectric or biological body by an iterative conjugate gradient method

    International Nuclear Information System (INIS)

    Wang, J.J.H.; Dubberley, J.R.

    1989-01-01

    Electromagnetic (EM) fields in a three-dimensional, arbitrarily shaped heterogeneous dielectric or biological body illuminated by a plane wave are computed by an iterative conjugate gradient method. The method is a generalized method of moments applied to the volume integral equation. Because no matrix is explicitly involved or stored, the present iterative method is capable of computing EM fields in objects an order of magnitude larger than those that can be handled by the conventional method of moments. Excellent numerical convergence is achieved. Perfect convergence to the result of the conventional moment method using the same basis and weighted with delta functions is consistently achieved in all the cases computed, indicating that these two algorithms (direct and iterative) are equivalent
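
    The essential computational idea, applying the operator iteratively without ever forming or storing a matrix, is sketched below with a generic matrix-free conjugate gradient solver; a simple symmetric positive definite stand-in replaces the EM volume-integral operator (CG is used for brevity, although general integral operators typically call for BiCG- or GMRES-type iterations).

```python
# Sketch of the matrix-free idea behind iterative integral-equation solvers:
# the solver needs only a routine that applies the operator to a vector.
import numpy as np

def cg(apply_A, b, tol=1e-10, maxit=5000):
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

n = 1000
def laplacian(v):              # tridiagonal SPD operator, never stored as a matrix
    out = 2.0 * v
    out[:-1] -= v[1:]
    out[1:] -= v[:-1]
    return out

b = np.ones(n)
x = cg(laplacian, b)
print("residual norm:", np.linalg.norm(b - laplacian(x)))
```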

  12. Computing the scalar field couplings in 6D supergravity

    Science.gov (United States)

    Saidi, El Hassan

    2008-11-01

    Using non-chiral supersymmetry in 6D space-time, we compute the explicit expression of the metric of the scalar manifold SO(1,1)×{SO(4,20)}/{SO(4)×SO(20)} of the ten-dimensional type IIA superstring on generic K3. We consider as well the scalar field self-couplings in the general case where the non-chiral 6D supergravity multiplet is coupled to generic n vector supermultiplets with moduli space SO(1,1)×{SO(4,n)}/{SO(4)×SO(n)}. We also work out a dictionary giving a correspondence between hyper-Kähler geometry and the Kähler geometry of the Coulomb branch of 10D type IIA on Calabi-Yau threefolds. Other features are also discussed.

  13. Reproducing Quantum Probability Distributions at the Speed of Classical Dynamics: A New Approach for Developing Force-Field Functors.

    Science.gov (United States)

    Sundar, Vikram; Gelbwaser-Klimovsky, David; Aspuru-Guzik, Alán

    2018-04-05

    Modeling nuclear quantum effects is required for accurate molecular dynamics (MD) simulations of molecules. The community has paid special attention to water and other biomolecules that show hydrogen bonding. Standard methods of modeling nuclear quantum effects, like Ring Polymer Molecular Dynamics (RPMD), are computationally costlier than running classical trajectories. A force-field functor (FFF) is an alternative method that computes an effective force field that replicates quantum properties of the original force field. In this work, we propose an efficient method of computing FFF using the Wigner-Kirkwood expansion. As a test case, we calculate a range of thermodynamic properties of neon, obtaining the same level of accuracy as RPMD, but with the shorter runtime of classical simulations. By modifying existing MD programs, the proposed method could be used in the future to increase the efficiency and accuracy of MD simulations involving water and proteins.
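
    The record does not spell out the functor construction, but the flavor of an hbar²-order effective potential can be shown with the closely related Feynman-Hibbs quadratic correction applied to a Lennard-Jones pair potential. All parameters below (epsilon, sigma, mass, temperature) are illustrative neon-like assumptions, not values from the paper.

```python
# Illustration of a quantum-corrected effective pair potential: a
# Feynman-Hibbs-style hbar^2 correction to a Lennard-Jones potential with
# rough neon-like parameters (assumed values, not from the paper).
import numpy as np

kB   = 1.380649e-23                  # J/K
hbar = 1.054571817e-34               # J*s
eps, sigma = 35.6 * kB, 2.75e-10     # rough LJ parameters for neon
m    = 20.18 * 1.66053907e-27        # neon atomic mass, kg
mu   = m / 2.0                       # reduced mass of the pair
T    = 25.0                          # K
beta = 1.0 / (kB * T)

def V(r):
    s6 = (sigma / r) ** 6
    return 4.0 * eps * (s6**2 - s6)

def V_eff(r, h=1e-13):
    # hbar^2-order correction: V + (beta*hbar^2 / (24*mu)) * (V'' + 2 V'/r),
    # with derivatives approximated by central finite differences.
    d1 = (V(r + h) - V(r - h)) / (2.0 * h)
    d2 = (V(r + h) - 2.0 * V(r) + V(r - h)) / h**2
    return V(r) + beta * hbar**2 / (24.0 * mu) * (d2 + 2.0 * d1 / r)

for ri in np.linspace(0.9 * sigma, 2.5 * sigma, 8):
    print(f"r = {ri:.3e} m   V = {V(ri)/kB:8.3f} K   V_eff = {V_eff(ri)/kB:8.3f} K")
```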

  14. Matrix-vector multiplication using digital partitioning for more accurate optical computing

    Science.gov (United States)

    Gary, C. K.

    1992-01-01

    Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than electronic computers, as well as the ability to perform analog operations at a much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and the speed required for an equivalent throughput as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms if coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
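
    The partitioning idea can be emulated in software: each operand is split into base-b digits, every digit-pair product corresponds to one low-accuracy analog pass, and the partial results are recombined with exact digital weights. The sketch below is a numerical emulation, not a model of the optical hardware.

```python
# Numerical emulation of digital partitioning for a matrix-vector processor:
# operands are split into base-b digits, each digit-pair product would be one
# low-accuracy analog pass, and results are recombined with digital weights.
import numpy as np

def digits(x, base, ndig):
    """Decompose non-negative integers into ndig base-`base` digits."""
    out = []
    for _ in range(ndig):
        out.append(x % base)
        x = x // base
    return out                         # least significant digit first

rng = np.random.default_rng(1)
base, ndig = 4, 4                      # 4 digits base 4 -> 8-bit operands
M = rng.integers(0, base**ndig, size=(3, 3))
v = rng.integers(0, base**ndig, size=3)

Md = digits(M, base, ndig)
vd = digits(v, base, ndig)

y = np.zeros(3, dtype=np.int64)
for i in range(ndig):                  # each (i, j) pass involves only small
    for j in range(ndig):              # digit values, within analog accuracy
        y += base**(i + j) * (Md[i] @ vd[j])

assert np.array_equal(y, M @ v)        # recombination restores full precision
print(y)
```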

  15. Geant4 Hadronic Cascade Models and CMS Data Analysis : Computational Challenges in the LHC era

    CERN Document Server

    Heikkinen, Aatos

    This work belongs to the field of computational high-energy physics (HEP). The key methods used in this thesis work to meet the challenges raised by the Large Hadron Collider (LHC) era experiments are object-orientation with software engineering, Monte Carlo simulation, the computer technology of clusters, and artificial neural networks. The first aspect discussed is the development of hadronic cascade models, used for the accurate simulation of medium-energy hadron-nucleus reactions, up to 10 GeV. These models are typically needed in hadronic calorimeter studies and in the estimation of radiation backgrounds. Various applications outside HEP include the medical field (such as hadron treatment simulations), space science (satellite shielding), and nuclear physics (spallation studies). Validation results are presented for several significant improvements released in Geant4 simulation tool, and the significance of the new models for computing in the Large Hadron Collider era is estimated. In particular, we es...

  16. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    International Nuclear Information System (INIS)

    Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.

    1999-01-01

    We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm
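
    A generic version of the CHO computation is sketched below: image data are projected onto a small set of channels, and the detectability follows from the channelized class means and covariance. Channels, signal, and statistics are synthetic stand-ins rather than the paper's MAP/PET model.

```python
# Generic channelized Hotelling observer (CHO) sketch on synthetic data.
import numpy as np

rng = np.random.default_rng(2)
npix, nchan, ntrain = 64 * 64, 6, 500

U = rng.standard_normal((npix, nchan))        # stand-in channel matrix
signal = np.zeros(npix)
signal[:50] = 0.5                             # stand-in lesion profile

def sample(with_signal, n):
    imgs = rng.standard_normal((n, npix)) + (signal if with_signal else 0.0)
    return imgs @ U                           # channel outputs, shape (n, nchan)

v0, v1 = sample(False, ntrain), sample(True, ntrain)
dmean = v1.mean(axis=0) - v0.mean(axis=0)
S = 0.5 * (np.cov(v0, rowvar=False) + np.cov(v1, rowvar=False))

snr2 = dmean @ np.linalg.solve(S, dmean)      # Hotelling detectability SNR^2
print("CHO SNR:", np.sqrt(snr2))
```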

  17. Quantum field theory and coalgebraic logic in theoretical computer science.

    Science.gov (United States)

    Basti, Gianfranco; Capolupo, Antonio; Vitiello, Giuseppe

    2017-11-01

    We suggest that in the framework of Category Theory it is possible to demonstrate the mathematical and logical dual equivalence between the category of the q-deformed Hopf Coalgebras and the category of the q-deformed Hopf Algebras in quantum field theory (QFT), interpreted as a thermal field theory. Each pair algebra-coalgebra characterizes a QFT system and its mirroring thermal bath, respectively, so as to model dissipative quantum systems in far-from-equilibrium conditions, with an evident significance also for biological sciences. Our study is in fact inspired by applications to neuroscience, where the brain memory capacity, for instance, has been modeled by using the QFT unitarily inequivalent representations. The q-deformed Hopf Coalgebras and the q-deformed Hopf Algebras constitute two dual categories because they are characterized by the same functor T, related to the Bogoliubov transform, and by its contravariant application T^op, respectively. The q-deformation parameter is related to the Bogoliubov angle, and it is effectively a thermal parameter. Therefore, the different values of q univocally identify and label the vacua appearing in the foliation process of the quantum vacuum. This means that, in the framework of Universal Coalgebra, as a general theory of dynamic and computing systems ("labelled state-transition systems"), the so-labelled infinitely many quantum vacua can be interpreted as the Final Coalgebra of an "Infinite State Black-Box Machine". All this opens the way to the possibility of designing a new class of universal quantum computing architectures based on this coalgebraic QFT formulation, as its ability to naturally generate a Fibonacci progression demonstrates.

  18. Multidisciplinary Computational Research

    National Research Council Canada - National Science Library

    Visbal, Miguel R

    2006-01-01

    The purpose of this work is to develop advanced multidisciplinary numerical simulation capabilities for aerospace vehicles with emphasis on highly accurate, massively parallel computational methods...

  19. Graduate Enrollment Increases in Science and Engineering Fields, Especially in Engineering and Computer Sciences. InfoBrief: Science Resources Statistics.

    Science.gov (United States)

    Burrelli, Joan S.

    This brief describes graduate enrollment increases in the science and engineering fields, especially in engineering and computer sciences. Graduate student enrollment is summarized by enrollment status, citizenship, race/ethnicity, and fields. (KHR)

  20. Design Guidance for Computer-Based Procedures for Field Workers

    Energy Technology Data Exchange (ETDEWEB)

    Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bly, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States)

    2016-09-01

    Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially human error rates associated with procedure use. As a step toward the goal of improving field workers’ procedure use and adherence and hence improve human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner that is intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant for the task and situation at hand, which has potential consequences of taking up valuable time when operators must be responding to the situation, and potentially leading operators down an incorrect response path. Other challenges related to use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and relying

  1. Integrated Design of Superconducting Magnets with the CERN Field Computation Program ROXIE

    CERN Document Server

    Russenschuck, Stephan; Bazan, M; Lucas, J; Ramberger, S; Völlinger, Christine

    2000-01-01

    The program package ROXIE has been developed at CERN for the field computation of superconducting accelerator magnets and is used as an approach towards the integrated design of such magnets. It is also an example of fruitful international collaborations in software development. The integrated design of magnets includes feature-based geometry generation, conceptual design using genetic optimization algorithms, optimization of the iron yoke (both in 2d and 3d) using deterministic methods, end-spacer design and inverse field calculation. The paper describes version 8.0 of ROXIE, which comprises an automatic mesh generator, a hysteresis model for the magnetization in superconducting filaments, the BEM-FEM coupling method for the 3d field calculation, a routine for the calculation of the peak temperature during a quench and neural network approximations of the objective function for the speed-up of optimization algorithms, amongst others. New results of the magnet design work for the LHC are given as examples.

  2. Computed tomography in radiation therapy planning: Thoracic region

    International Nuclear Information System (INIS)

    Seydel, H.G.; Zingas, A.; Haghbin, M.; Mondalek, P.; Smereka, R.

    1983-01-01

    With the explosive spread of computed tomographic (CT) scanning throughout the United States, one of the main applications has been in patients who are treated for cancer by surgery, radiation therapy, or chemotherapy. For the radiation oncologist, the desire to provide local tumor control and avoid geographic misses to achieve an expected prolongation of survival has led to the use of large radiation fields in the treatment of intrathoracic cancer, including bronchogenic carcinoma, cancer of the esophagus, and other malignant tumors. The optimal radiation therapy plan is a balance between local tumor control and the necessity to preserve normal structures by the use of directed and limited fields for bulk disease. CT scanning has been employed to accurately demonstrate the extent of tumor as well as to determine the isodose distribution of radiation, including the spatial distribution of radiation portals in single planar and three-dimensional aspects as well as consideration of tissue inhomogeneities. The accurate planning of the distribution of therapeutic irradiation includes both the tumor-bearing target volume and the critical normal tissues. This chapter provides information regarding these aspects of the application of CT scanning to radiation therapy for bronchogenic carcinoma and carcinoma of the esophagus

  3. Finite Element Modelling of a Field-Sensed Magnetic Suspended System for Accurate Proximity Measurement Based on a Sensor Fusion Algorithm with Unscented Kalman Filter.

    Science.gov (United States)

    Chowdhury, Amor; Sarjaš, Andrej

    2016-09-15

    The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
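
    A minimal scalar unscented Kalman filter conveys the sensor-fusion idea: a Hall-type proximity reading z = k/d² is nonlinear in the gap d, so sigma points are propagated through the measurement function. The model, constants, and noise levels below are illustrative assumptions, not the paper's FEM-derived dynamic model.

```python
# Minimal scalar UKF sketch. Assumed toy model: the gap d follows a random
# walk, and the Hall-type reading is z = k/d^2 + noise.
import numpy as np

k_field = 1.0e-4                # assumed sensor constant
Q, R = 1e-8, 1e-6               # assumed process / measurement noise variances
n, alpha, beta, kappa = 1, 0.5, 2.0, 0.0
lam = alpha**2 * (n + kappa) - n
Wm = np.array([lam / (n + lam), 0.5 / (n + lam), 0.5 / (n + lam)])
Wc = Wm.copy()
Wc[0] += 1.0 - alpha**2 + beta

def h(d):
    return k_field / d**2       # nonlinear measurement function

def ukf_step(x, P, z):
    P = P + Q                                   # predict (random-walk model)
    s = np.sqrt((n + lam) * P)
    X = np.array([x, x + s, x - s])             # sigma points
    Z = h(X)
    z_hat = Wm @ Z
    Pzz = Wc @ (Z - z_hat) ** 2 + R
    Pxz = Wc @ ((X - x) * (Z - z_hat))
    K = Pxz / Pzz                               # Kalman gain
    return x + K * (z - z_hat), P - K**2 * Pzz

rng = np.random.default_rng(3)
d_true = 0.010                                  # metres
x, P = 0.012, 1e-6                              # initial guess and variance
for _ in range(50):
    z = h(d_true) + rng.normal(0.0, np.sqrt(R))
    x, P = ukf_step(x, P, z)
print(f"estimated gap: {x * 1e3:.3f} mm (true 10.000 mm)")
```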

  4. Solving computationally expensive engineering problems

    CERN Document Server

    Leifsson, Leifur; Yang, Xin-She

    2014-01-01

    Computational complexity is a serious bottleneck for the design process in virtually any engineering area. While migration from prototyping and experimental-based design validation to verification using computer simulation models is inevitable and has a number of advantages, high computational costs of accurate, high-fidelity simulations can be a major issue that slows down the development of computer-aided design methodologies, particularly those exploiting automated design improvement procedures, e.g., numerical optimization. The continuous increase of available computational resources does not always translate into shortening of the design cycle because of the growing demand for higher accuracy and necessity to simulate larger and more complex systems. Accurate simulation of a single design of a given system may be as long as several hours, days or even weeks, which often makes design automation using conventional methods impractical or even prohibitive. Additional problems include numerical noise often pr...

  5. Look back and look forward to the future of computer applications in the field of nuclear science and technology

    International Nuclear Information System (INIS)

    Yang Yanming; Dai Guiling

    1988-01-01

    All previous National Conferences on computer applications in the field of nuclear science and technology sponsored by the Society of Nuclear Electronics and Detection Technology are reviewed. Surveys are given on the basic situations and technical levels of computer applications for each time period. Some points concerning possible developments of computer techniques are given as well

  6. Accurate and efficient calculation of response times for groundwater flow

    Science.gov (United States)

    Carr, Elliot J.; Simpson, Matthew J.

    2018-03-01

    We study measures of the amount of time required for transient flow in heterogeneous porous media to effectively reach steady state, also known as the response time. Here, we develop a new approach that extends the concept of mean action time. Previous applications of the theory of mean action time to estimate the response time use the first two central moments of the probability density function associated with the transition from the initial condition, at t = 0, to the steady state condition that arises in the long time limit, as t → ∞. This previous approach leads to a computationally convenient estimation of the response time, but the accuracy can be poor. Here, we outline a powerful extension using the first k raw moments, showing how to produce an extremely accurate estimate by making use of asymptotic properties of the cumulative distribution function. Results are validated using an existing laboratory-scale data set describing flow in a homogeneous porous medium. In addition, we demonstrate how the results also apply to flow in heterogeneous porous media. Overall, the new method is: (i) extremely accurate; and (ii) computationally inexpensive. In fact, the computational cost of the new method is orders of magnitude less than the computational effort required to study the response time by solving the transient flow equation. Furthermore, the approach provides a rigorous mathematical connection with the heuristic argument that the response time for flow in a homogeneous porous medium is proportional to L²/D, where L is a relevant length scale, and D is the aquifer diffusivity. Here, we extend such heuristic arguments by providing a clear mathematical definition of the proportionality constant.
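
    The closing L²/D scaling is easy to check numerically: time-step the transient 1D diffusion problem to steady state for several domain lengths and compare the elapsed time with L²/D, as in the sketch below (a toy homogeneous-medium check, not the moment-based method of the paper).

```python
# Sketch verifying the L^2/D scaling of the response time for 1D diffusion:
# step the transient equation to steady state and compare the elapsed time
# against L^2/D for several domain lengths.
import numpy as np

def response_time(L, D=1.0, N=100, tol=1e-2):
    dx = L / N
    dt = 0.25 * dx**2 / D                    # stable explicit step
    u = np.zeros(N + 1)
    u[0] = 1.0                               # head fixed at x=0, zero at x=L
    u_ss = np.linspace(1.0, 0.0, N + 1)      # linear steady state
    scale = np.abs(u - u_ss).max()
    t = 0.0
    while np.abs(u - u_ss).max() > tol * scale:
        u[1:-1] += dt * D * (u[2:] - 2.0 * u[1:-1] + u[:-2]) / dx**2
        t += dt
    return t

for L in (0.5, 1.0, 2.0):
    t = response_time(L)
    print(f"L = {L}:  t_response = {t:.4f},  t / (L^2/D) = {t / L**2:.4f}")
```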

  7. Accurate Determination of Glacier Surface Velocity Fields with a DEM-Assisted Pixel-Tracking Technique from SAR Imagery

    Directory of Open Access Journals (Sweden)

    Shiyong Yan

    2015-08-01

    We obtained accurate, detailed motion distributions of glaciers in Central Asia by applying a digital elevation model (DEM) assisted pixel-tracking method to L-band synthetic aperture radar imagery. The paper first introduces and briefly analyzes each component of the offset field, and then describes the method used to efficiently and precisely compensate the topography-related offset caused by the large spatial baseline and rugged terrain with the help of the DEM. The results indicate that the rugged topography not only forms the complex shapes of the glaciers, but also affects the glacier velocity estimation, especially with a large spatial baseline. The maximum velocity, 0.85 m·d⁻¹, was observed in the middle part of the Fedchenko Glacier, which is the world's longest mountain glacier. The motion fluctuation on its main trunk is apparently influenced by mass flowing in from tributaries, as well as by the angles between the tributaries and the main stream. The approach presented in this paper proved to be highly appropriate for monitoring glacier motion and will provide valuable, sensitive indicators of current and future climate change for environmental analysis.
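
    The core of a pixel-tracking step can be sketched as follows: the integer shift between two patches is read off the peak of their cross-correlation computed via FFT. Real SAR offset tracking adds oversampling, sub-pixel peak fitting, and the DEM-based removal of the topography-related offset discussed above; the images here are synthetic.

```python
# Sketch of a pixel-tracking (offset-tracking) step: estimate the integer
# shift between two patches from the peak of their FFT cross-correlation.
import numpy as np

def estimate_shift(ref, sec):
    ref = (ref - ref.mean()) / ref.std()
    sec = (sec - sec.mean()) / sec.std()
    # circular cross-correlation via FFT
    xc = np.fft.ifft2(np.fft.fft2(ref) * np.conj(np.fft.fft2(sec))).real
    iy, ix = np.unravel_index(np.argmax(xc), xc.shape)
    # map FFT indices to signed shifts
    ny, nx = ref.shape
    return ((iy + ny // 2) % ny - ny // 2,
            (ix + nx // 2) % nx - nx // 2)

rng = np.random.default_rng(4)
patch = rng.standard_normal((64, 64))
shifted = np.roll(patch, shift=(3, -5), axis=(0, 1))
print("estimated (dy, dx):", estimate_shift(shifted, patch))  # -> (3, -5)
```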

  8. Accurate technique for complete geometric calibration of cone-beam computed tomography systems

    International Nuclear Information System (INIS)

    Cho Youngbin; Moseley, Douglas J.; Siewerdsen, Jeffrey H.; Jaffray, David A.

    2005-01-01

    Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems have been constructed using this approach, a clinical linear accelerator (Elekta Synergy RP) and an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems, and is essential for accurate image reconstruction. We have developed a general analytic algorithm and corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry. Twelve BBs are spaced evenly at 30 deg in two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, position, and rotation of the detector, and gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm was analyzed. The calibration algorithm estimates geometric parameters in a high level of accuracy such that the quality of CT reconstruction is not degraded by the error of estimation. Sensitivity analysis shows uncertainty of 0.01 deg. (around beam direction) to 0.3 deg. (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory bench Cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0.1 mm in

  9. 3D deformation field throughout the interior of materials.

    Energy Technology Data Exchange (ETDEWEB)

    Jin, Huiqing; Lu, Wei-Yang

    2013-09-01

    This report contains the one-year feasibility study for our three-year LDRD proposal, which aims to develop an experimental technique to measure the 3D deformation fields inside a material body. In this feasibility study, we first apply the Digital Volume Correlation (DVC) algorithm to pre-existing in-situ X-ray Computed Tomography (XCT) image sets with pure rigid body translation. The calculated displacement field has very large random errors and low precision that are unacceptable. Then we enhance these tomography images by setting a threshold on the intensity of each slice. The DVC algorithm is able to obtain accurate deformation fields from these enhanced image sets, and the deformation fields are consistent with the global mechanical loading that is applied to the specimen. Through this study, we prove that the internal markers inside the pre-existing tomography images of aluminum alloy can be enhanced and are suitable for DVC to calculate the deformation field throughout the material body.

  10. Advanced field-solver techniques for RC extraction of integrated circuits

    CERN Document Server

    Yu, Wenjian

    2014-01-01

    Resistance and capacitance (RC) extraction is an essential step in modeling the interconnection wires and substrate coupling effect in nanometer-technology integrated circuits (IC). The field-solver techniques for RC extraction guarantee the accuracy of modeling, and are becoming increasingly important in meeting the demand for accurate modeling and simulation of VLSI designs. Advanced Field-Solver Techniques for RC Extraction of Integrated Circuits presents a systematic introduction to, and treatment of, the key field-solver methods for RC extraction of VLSI interconnects and substrate coupling in mixed-signal ICs. Various field-solver techniques are explained in detail, with real-world examples to illustrate the advantages and disadvantages of each algorithm. This book will benefit graduate students and researchers in the field of electrical and computer engineering, as well as engineers working in the IC design and design automation industries. Dr. Wenjian Yu is an Associate Professor at the Department of ...

  11. Can smartphones be used to bring computer-based tasks from the lab to the field? A mobile experience-sampling method study about the pace of life.

    Science.gov (United States)

    Stieger, Stefan; Lewetz, David; Reips, Ulf-Dietrich

    2017-12-06

    Researchers are increasingly using smartphones to collect scientific data. To date, most smartphone studies have collected questionnaire data or data from the built-in sensors. So far, few studies have analyzed whether smartphones can also be used to conduct computer-based tasks (CBTs). Using a mobile experience-sampling method study and a computer-based tapping task as examples (N = 246; twice a day for three weeks, 6,000+ measurements), we analyzed how well smartphones can be used to conduct a CBT. We assessed methodological aspects such as potential technologically induced problems, dropout, task noncompliance, and the accuracy of millisecond measurements. Overall, we found few problems: Dropout rate was low, and the time measurements were very accurate. Nevertheless, particularly at the beginning of the study, some participants did not comply with the task instructions, probably because they did not read the instructions before beginning the task. To summarize, the results suggest that smartphones can be used to transfer CBTs from the lab to the field, and that real-world variations across device manufacturers, OS types, and CPU load conditions did not substantially distort the results.

  12. Algorithms for computing solvents of unilateral second-order matrix polynomials over prime finite fields using lambda-matrices

    Science.gov (United States)

    Burtyka, Filipp

    2018-01-01

    The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so called solvents) of monic arbitrary unilateral second-order matrix polynomial over prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over field of complex numbers. We analyze how techniques for complex numbers can be adapted for finite field and estimate asymptotic complexity of the obtained algorithms.
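
    For very small p and matrix size, the solvents can be found by exhaustive search, which makes a useful baseline for the algebraic lambda-matrix algorithms the paper develops. In the sketch below, B is constructed from a chosen root X0 so that at least one solvent is guaranteed to exist; A and X0 are arbitrary examples.

```python
# Brute-force baseline: find all solvents X (2x2 matrices over GF(p)) of the
# monic quadratic matrix polynomial X^2 + A X + B = 0. Exhaustive search is
# only feasible for tiny p and size, which is why algebraic methods matter.
import itertools
import numpy as np

p = 3
A = np.array([[1, 2], [0, 1]])
X0 = np.array([[0, 1], [2, 2]])
B = (-(X0 @ X0) - A @ X0) % p          # built so that X0 is one solvent

solvents = []
for entries in itertools.product(range(p), repeat=4):
    X = np.array(entries).reshape(2, 2)
    if not ((X @ X + A @ X + B) % p).any():
        solvents.append(X)

print(f"{len(solvents)} solvent(s) of X^2 + AX + B = 0 over GF({p}):")
for X in solvents:
    print(X)
```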

  13. Explorations in quantum computing

    CERN Document Server

    Williams, Colin P

    2011-01-01

    By the year 2020, the basic memory components of a computer will be the size of individual atoms. At such scales, the current theory of computation will become invalid. "Quantum computing" is reinventing the foundations of computer science and information theory in a way that is consistent with quantum physics - the most accurate model of reality currently known. Remarkably, this theory predicts that quantum computers can perform certain tasks breathtakingly faster than classical computers -- and, better yet, can accomplish mind-boggling feats such as teleporting information, breaking suppos

  14. Accurate expansion of cylindrical paraxial waves for its straightforward implementation in electromagnetic scattering

    International Nuclear Information System (INIS)

    Naserpour, Mahin; Zapata-Rodríguez, Carlos J.

    2018-01-01

    Highlights: • Paraxial beams are represented in a series expansion in terms of Bessel wave functions. • The coefficients of the series expansion can be analytically determined by using the pattern in the focal plane. • In particular, Gaussian beams and apertured wave fields have been critically examined. • This representation of the wave field is adequate for scattering problems with shaped beams. Abstract: The evaluation of vector wave fields can be accurately performed by means of diffraction integrals, differential equations and also series expansions. In this paper, a Bessel series expansion whose basis relies on the exact solution of the Helmholtz equation in cylindrical coordinates is theoretically developed for the straightforward yet accurate description of low-numerical-aperture focal waves. The validity of this approach is confirmed by explicit application to Gaussian beams and apertured focused fields in the paraxial regime. Finally we discuss how our procedure can be favorably implemented in scattering problems.
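
    A minimal version of the expansion is sketched below: a radially symmetric focal-plane profile f(r) is projected onto the Dirichlet Fourier-Bessel basis J0(j_{0,n} r/R), whose orthogonality fixes the coefficients from the focal-plane pattern. A Gaussian spot is used as an assumed test field.

```python
# Fourier-Bessel expansion sketch: expand a Gaussian focal-plane profile in
# J0(j_{0,n} r / R) modes and check the reconstruction.
import numpy as np
from scipy.special import jv, jn_zeros

R, N, w0 = 5.0, 20, 1.0                  # aperture radius, modes, spot size
r = np.linspace(0.0, R, 2000)
f = np.exp(-(r / w0) ** 2)               # focal-plane amplitude (Gaussian)

zeros = jn_zeros(0, N)                   # j_{0,n}
kn = zeros / R
# orthogonality on [0,R] with weight r:  norm_n = (R^2/2) * J1(j_{0,n})^2
c = [np.trapz(f * jv(0, k * r) * r, r) / (0.5 * R**2 * jv(1, z) ** 2)
     for k, z in zip(kn, zeros)]

f_rec = sum(cn * jv(0, k * r) for cn, k in zip(c, kn))
print("max reconstruction error:", np.abs(f - f_rec).max())
```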

  15. Computational study of the influence of mirror parameters on FRC (field-reversed configuration) equilibria:

    International Nuclear Information System (INIS)

    Fuentes, N.O.; Sakanaka, P.H.

    1990-01-01

    Field-reversed configuration equilibria are studied by solving the Grad-Shafranov equation. A multiple coil system (main coil and end mirrors) is considered to simulate the coil geometry of the CNEA device. First results are presented for computed two-dimensional FRC equilibria produced by varying the mirror coil current with two different mirror lengths. (Author)

  16. Infrared Testing of the Wide-field Infrared Survey Telescope Grism Using Computer Generated Holograms

    Science.gov (United States)

    Dominguez, Margaret Z.; Content, David A.; Gong, Qian; Griesmann, Ulf; Hagopian, John G.; Marx, Catherine T; Whipple, Arthur L.

    2017-01-01

    Infrared Computer Generated Holograms (CGHs) were designed, manufactured and used to measure the performance of the grism (grating prism) prototype which includes testing Diffractive Optical Elements (DOE). The grism in the Wide Field Infrared Survey Telescope (WFIRST) will allow the surveying of a large section of the sky to find bright galaxies.

  17. Covariance approximation for fast and accurate computation of channelized Hotelling observer statistics

    Science.gov (United States)

    Bonetto, P.; Qi, Jinyi; Leahy, R. M.

    2000-08-01

    Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.

  18. Calculations of transient fields in the Felix experiments at Argonne using null field integrated techniques

    International Nuclear Information System (INIS)

    Han, H.C.; Davey, K.R.; Turner, L.

    1985-08-01

    The transient eddy current problem is characteristically computationally intensive. The motivation for this research was to realize an efficient, accurate solution technique involving small matrices via an eigenvalue approach. Such a technique is indeed realized and tested using the null field integral technique. Using smart (i.e., efficient, global) basis functions to represent the unknowns with a minimum number of degrees of freedom, homogeneous eigenvectors and eigenvalues are first determined. The general excitatory response is then represented in terms of these eigenvalues/eigenvectors. Excellent results are obtained for the Argonne Felix cylinder experiments using a 4 x 4 matrix. Extension to the 3-D problem (short cylinder) is set up in terms of an 8 x 8 matrix
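
    The eigenvalue approach to the transient response can be sketched in a few lines: once the small homogeneous system dx/dt = -Ax is diagonalized, the response to a constant excitation follows in closed form from the modes, with no time stepping. The 4x4 matrix below is a random symmetric positive definite stand-in, not the Felix cylinder operator.

```python
# Modal (eigenvalue) solution of dx/dt = -A x + b with x(0) = 0:
# x(t) = V diag((1 - exp(-lam t)) / lam) V^T b, from one small eigensolve.
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)            # stand-in stable system matrix
b = np.ones(4)

lam, V = np.linalg.eigh(A)               # modes of the homogeneous problem
Vtb = V.T @ b

def x(t):
    return V @ (Vtb / lam * (1.0 - np.exp(-lam * t)))

for t in (0.0, 0.1, 1.0, 10.0):
    print(f"t = {t:5.1f}  x = {x(t)}")
print("steady state A^-1 b:", np.linalg.solve(A, b))
```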

  19. Application of large computers for predicting the oil field production

    Energy Technology Data Exchange (ETDEWEB)

    Philipp, W; Gunkel, W; Marsal, D

    1971-10-01

    The flank injection drive plays a dominant role in the exploitation of the BEB oil fields. Therefore, 2-phase flow computer models were built up, adapted to a predominance of a single flow direction and combining a high accuracy of prediction with a low job time. Any case study starts with the partitioning of the reservoir into blocks. Then the statistics of the time-independent reservoir properties are analyzed by means of an IBM 360/25 unit. Using these results and the past production of oil, water and gas, a Fortran program running on a CDC-3300 computer yields oil recoveries and the ratios of the relative permeabilities as a function of the local oil saturation for all blocks penetrated by mobile water. In order to assign k_w/k_o functions to blocks not yet reached by the advancing water front, correlation analysis is used to relate reservoir properties to k_w/k_o functions. All these results are used as input into a CDC-6600 Fortran program, allowing short-, medium-, and long-term forecasts as well as the handling of special problems.

  20. Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes

    DEFF Research Database (Denmark)

    Kepp, Kasper Planeta; Cirera, J

    2009-01-01

    Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. By the use of six functionals, B3LYP, BP86, TPSS, TPSSh, M06, and M06L, this work studies nine complexes (seven with iron...

  1. Computing the demagnetizing tensor for finite difference micromagnetic simulations via numerical integration

    International Nuclear Information System (INIS)

    Chernyshenko, Dmitri; Fangohr, Hans

    2015-01-01

    In the finite difference method, which is commonly used in computational micromagnetics, the demagnetizing field is usually computed as a convolution of the magnetization vector field with the demagnetizing tensor that describes the magnetostatic field of a cuboidal cell with constant magnetization. An analytical expression for the demagnetizing tensor is available, however at distances far from the cuboidal cell, the numerical evaluation of the analytical expression can be very inaccurate. Due to this large-distance inaccuracy, numerical packages such as OOMMF compute the demagnetizing tensor using the explicit formula at distances close to the originating cell, but at distances far from the originating cell a formula based on an asymptotic expansion has to be used. In this work, we describe a method to calculate the demagnetizing field by numerical evaluation of the multidimensional integral in the demagnetizing tensor terms using a sparse grid integration scheme. This method improves the accuracy of computation at intermediate distances from the origin. We compute and report the accuracy of (i) the numerical evaluation of the exact tensor expression which is best for short distances, (ii) the asymptotic expansion best suited for large distances, and (iii) the new method based on numerical integration, which is superior to methods (i) and (ii) for intermediate distances. For all three methods, we show the measurements of accuracy and execution time as a function of distance, for calculations using single precision (4-byte) and double precision (8-byte) floating point arithmetic. We make recommendations for the choice of scheme order and integrating coefficients for the numerical integration method (iii). Highlights: • We study the accuracy of demagnetization in finite difference micromagnetics. • We introduce a new sparse integration method to compute the tensor more accurately. • Newell, sparse integration and asymptotic methods are compared for all ranges
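
    The accuracy issue at large distances is, at heart, floating-point cancellation: an analytically exact expression built from nearly equal terms loses digits exactly where an asymptotic expansion is most accurate. The sketch below illustrates the phenomenon on a simple model function, not on the (much longer) demagnetizing tensor expressions.

```python
# Cancellation vs asymptotic expansion: the closed form f(x) = log(x+1)-log(x)
# loses accuracy at large x, while the large-x series stays accurate.
import numpy as np

def exact_naive(x):
    # closed form evaluated directly in double precision
    return np.log(x + 1.0) - np.log(x)

def asymptotic(x, terms=6):
    # series log(1 + 1/x) = 1/x - 1/(2x^2) + 1/(3x^3) - ...  for large x
    return sum((-1) ** (k + 1) / (k * x**k) for k in range(1, terms + 1))

def reference(x):
    return np.log1p(1.0 / x)       # numerically stable evaluation

for x in (1e2, 1e6, 1e12, 1e15):
    ref = reference(x)
    print(f"x = {x:.0e}   rel. err (closed form): "
          f"{abs(exact_naive(x) - ref) / ref:.1e}   "
          f"rel. err (asymptotic): {abs(asymptotic(x) - ref) / ref:.1e}")
```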

  2. Accurate phylogenetic tree reconstruction from quartets: a heuristic approach.

    Science.gov (United States)

    Reaz, Rezwana; Bayzid, Md Shamsuzzoha; Rahman, M Sohel

    2014-01-01

    Supertree methods construct trees on a set of taxa (species) combining many smaller trees on overlapping subsets of the entire set of taxa. A 'quartet' is an unrooted tree over 4 taxa, hence quartet-based supertree methods combine many 4-taxon unrooted trees into a single and coherent tree over the complete set of taxa. Quartet-based phylogeny reconstruction methods have been receiving considerable attention in recent years. An accurate and efficient quartet-based method might be competitive with the current best phylogenetic tree reconstruction methods (such as maximum likelihood or Bayesian MCMC analyses), without being as computationally intensive. In this paper, we present a novel and highly accurate quartet-based phylogenetic tree reconstruction method. We performed an extensive experimental study to evaluate the accuracy and scalability of our approach on both simulated and biological datasets.

  3. Accurate first-principles structures and energies of diversely bonded systems from an efficient density functional.

    Science.gov (United States)

    Sun, Jianwei; Remsing, Richard C; Zhang, Yubo; Sun, Zhaoru; Ruzsinszky, Adrienn; Peng, Haowei; Yang, Zenghui; Paul, Arpita; Waghmare, Umesh; Wu, Xifan; Klein, Michael L; Perdew, John P

    2016-09-01

    One atom or molecule binds to another through various types of bond, the strengths of which range from several meV to several eV. Although some computational methods can provide accurate descriptions of all bond types, those methods are not efficient enough for many studies (for example, large systems, ab initio molecular dynamics and high-throughput searches for functional materials). Here, we show that the recently developed non-empirical strongly constrained and appropriately normed (SCAN) meta-generalized gradient approximation (meta-GGA) within the density functional theory framework predicts accurate geometries and energies of diversely bonded molecules and materials (including covalent, metallic, ionic, hydrogen and van der Waals bonds). This represents a significant improvement at comparable efficiency over its predecessors, the GGAs that currently dominate materials computation. Often, SCAN matches or improves on the accuracy of a computationally expensive hybrid functional, at almost-GGA cost. SCAN is therefore expected to have a broad impact on chemistry and materials science.

  4. Cheap but accurate calculation of chemical reaction rate constants from ab initio data, via system-specific, black-box force fields.

    Science.gov (United States)

    Steffen, Julien; Hartke, Bernd

    2017-10-28

    Building on the recently published quantum-mechanically derived force field (QMDFF) and its empirical valence bond extension, EVB-QMDFF, it is now possible to generate a reliable potential energy surface for any given elementary reaction step in an essentially black box manner. This requires a limited and pre-defined set of reference data near the reaction path and generates an accurate approximation of the reference potential energy surface, on and off the reaction path. This intermediate representation can be used to generate reaction rate data, with far better accuracy and reliability than with traditional approaches based on transition state theory (TST) or variational extensions thereof (VTST), even if those include sophisticated tunneling corrections. However, the additional expense at the reference level remains very modest. We demonstrate all this for three arbitrarily chosen example reactions.

  5. Improved fingercode alignment for accurate and compact fingerprint recognition

    CSIR Research Space (South Africa)

    Brown, Dane

    2016-05-01

    FingerCode [1] uses circular tessellation of filtered fingerprint images centered at the reference point, which results in a circular ROI...

  6. New Distributed Multipole Methods for Accurate Electrostatics for Large-Scale Biomolecular Simulations.

    Science.gov (United States)

    Sagui, Celeste

    2006-03-01

    An accurate and numerically efficient treatment of electrostatics is essential for biomolecular simulations, as this stabilizes much of the delicate 3-d structure associated with biomolecules. Currently, force fields such as AMBER and CHARMM assign ``partial charges'' to every atom in a simulation in order to model the interatomic electrostatic forces, so that the calculation of the electrostatics rapidly becomes the computational bottleneck in large-scale simulations. There are two main issues associated with the current treatment of classical electrostatics: (i) how does one eliminate the artifacts associated with the point-charges (e.g., the underdetermined nature of the current RESP fitting procedure for large, flexible molecules) used in the force fields in a physically meaningful way? (ii) how does one efficiently simulate the very costly long-range electrostatic interactions? Recently, we have dealt with both of these challenges as follows. In order to improve the description of the molecular electrostatic potentials (MEPs), a new distributed multipole analysis based on localized functions -- Wannier, Boys, and Edmiston-Ruedenberg -- was introduced, which allows for a first principles calculation of the partial charges and multipoles. Through a suitable generalization of the particle mesh Ewald (PME) and multigrid methods, one can treat electrostatic multipoles all the way to hexadecapoles without prohibitive extra costs. The importance of these methods for large-scale simulations will be discussed, and exemplified by simulations from polarizable DNA models.

  7. A New Multiscale Technique for Time-Accurate Geophysics Simulations

    Science.gov (United States)

    Omelchenko, Y. A.; Karimabadi, H.

    2006-12-01

    Large-scale geophysics systems are frequently described by multiscale reactive flow models (e.g., wildfire and climate models, multiphase flows in porous rocks, etc.). Accurate and robust simulations of such systems by traditional time-stepping techniques face a formidable computational challenge. Explicit time integration suffers from global (CFL and accuracy) timestep restrictions due to inhomogeneous convective and diffusion processes, as well as closely coupled physical and chemical reactions. Application of adaptive mesh refinement (AMR) to such systems may not be always sufficient since its success critically depends on a careful choice of domain refinement strategy. On the other hand, implicit and timestep-splitting integrations may result in a considerable loss of accuracy when fast transients in the solution become important. To address this issue, we developed an alternative explicit approach to time-accurate integration of such systems: Discrete-Event Simulation (DES). DES enables asynchronous computation by automatically adjusting the CPU resources in accordance with local timescales. This is done by encapsulating flux- conservative updates of numerical variables in the form of events, whose execution and synchronization is explicitly controlled by imposing accuracy and causality constraints. As a result, at each time step DES self- adaptively updates only a fraction of the global system state, which eliminates unnecessary computation of inactive elements. DES can be naturally combined with various mesh generation techniques. The event-driven paradigm results in robust and fast simulation codes, which can be efficiently parallelized via a new preemptive event processing (PEP) technique. We discuss applications of this novel technology to time-dependent diffusion-advection-reaction and CFD models representative of various geophysics applications.
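
    The event-driven idea can be conveyed with a toy priority-queue loop: each element schedules its own next update from its local stable timestep, so stiff elements take many small steps while quiescent ones take few. This is a conceptual sketch of asynchronous updating, not the flux-conservative DES machinery described above.

```python
# Toy discrete-event loop: each cell schedules its own next update according
# to its local timescale, and a priority queue executes events causally.
import heapq

rates = [100.0, 1.0, 0.01]          # per-cell decay rates: stiff ... slow
state = [1.0, 1.0, 1.0]
t_cell = [0.0, 0.0, 0.0]            # each cell's local time
counts = [0, 0, 0]
t_end = 10.0

events = [(0.1 / r, i) for i, r in enumerate(rates)]  # (next update time, cell)
heapq.heapify(events)

while events:
    t, i = heapq.heappop(events)
    if t > t_end:
        break
    dt = t - t_cell[i]                      # local step, dt = 0.1 / rate
    state[i] *= 1.0 - rates[i] * dt         # local explicit decay update
    t_cell[i] = t
    counts[i] += 1
    heapq.heappush(events, (t + 0.1 / rates[i], i))

print("updates per cell:", counts)          # stiff cell is updated far more often
print("final state:", state)
```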

  8. A comparative design view for accurate control of servos using a field programmable gate array

    International Nuclear Information System (INIS)

    Tickle, A J; Harvey, P K; Smith, J S; Wu, F; Buckle, J R

    2009-01-01

    An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions. Altera DSP Builder presents designers and users with an alternative approach when creating their systems, by employing a blockset similar to that already used in Simulink. The application considered in this paper is the design of a Pulse Width Modulation (PWM) system for use in stereo vision. PWM can replace a digital-to-analogue converter to control audio speakers, LED intensity, motor speed, and servo position. Rather than the conventional HDL coding approach, this Simulink approach provides an easily understood platform for the PWM design. This paper includes a comparison between the two approaches regarding resource usage, flexibility, and related factors. Included is how DSP Builder manipulates an onboard clock signal in order to create the control pulses, in contrast to the 'raw' coding of a PWM generator in VHDL. Both methods were shown to a selection of people, and their views on which version they would subsequently use in their respective fields are discussed.
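
    The counter arithmetic behind a PWM servo driver is simple enough to sketch: a free-running counter at the board clock defines a 50 Hz frame, and a compare value sets the 1-2 ms high time that encodes the servo angle. The 50 MHz clock below is an assumed figure for illustration, not a parameter from the paper.

```python
# Back-of-envelope counter arithmetic for an RC-servo PWM generator.
CLOCK_HZ = 50_000_000                           # assumed board clock
FRAME_HZ = 50                                   # standard RC-servo frame rate
PERIOD_TICKS = CLOCK_HZ // FRAME_HZ             # counter rolls over here

def compare_value(angle_deg: float) -> int:
    """Map 0..180 degrees to a 1.0..2.0 ms high time, in clock ticks."""
    pulse_ms = 1.0 + angle_deg / 180.0
    return round(pulse_ms * 1e-3 * CLOCK_HZ)

for angle in (0, 90, 180):
    cmp = compare_value(angle)
    print(f"{angle:3d} deg -> compare {cmp:6d} / period {PERIOD_TICKS}"
          f"  (duty {100 * cmp / PERIOD_TICKS:.2f}%)")
```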

  9. Image communication scheme based on dynamic visual cryptography and computer generated holography

    Science.gov (United States)

    Palevicius, Paulius; Ragulskis, Minvydas

    2015-01-01

    Computer generated holograms are often exploited to implement optical encryption schemes. This paper proposes the integration of dynamic visual cryptography (an optical technique based on the interplay of visual cryptography and time-averaging geometric moiré) with the Gerchberg-Saxton algorithm. A stochastic moiré grating is used to embed the secret into a single cover image. The secret can be visually decoded by the naked eye only if the amplitude of harmonic oscillations corresponds to an accurately preselected value. The proposed visual image encryption scheme is based on computer generated holography, optical time-averaging moiré and principles of dynamic visual cryptography. Dynamic visual cryptography is used both for the initial encryption of the secret image and for the final decryption. Phase data of the encrypted image are computed by using the Gerchberg-Saxton algorithm. The optical image is decrypted using the computationally reconstructed field of amplitudes.
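
    A minimal Gerchberg-Saxton loop is sketched below: iterate between the hologram plane and the image plane, enforcing the known amplitude in each, until the retrieved phase reproduces the target intensity in the far field. The square target is a stand-in image, not the moiré-encoded secret of the proposed scheme.

```python
# Minimal Gerchberg-Saxton phase retrieval between two amplitude constraints.
import numpy as np

rng = np.random.default_rng(6)
n = 128
target = np.zeros((n, n))
target[48:80, 48:80] = 1.0                          # stand-in image amplitude
source = np.ones((n, n))                            # uniform illumination

phase = rng.uniform(0.0, 2.0 * np.pi, (n, n))       # random initial phase
for _ in range(200):
    field = np.fft.fft2(source * np.exp(1j * phase))    # to image plane
    field = target * np.exp(1j * np.angle(field))       # impose image amplitude
    back = np.fft.ifft2(field)                          # back to hologram plane
    phase = np.angle(back)                              # keep phase, impose source

recon = np.abs(np.fft.fft2(source * np.exp(1j * phase)))
recon *= target.max() / recon.max()
print("rms image error:", np.sqrt(np.mean((recon - target) ** 2)))
```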

  10. An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor

    Science.gov (United States)

    Chou, M. D.

    1980-01-01

    The method is based upon molecular line parameters and makes use of a far-wing scaling approximation and a k-distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line-by-line calculations. The method introduces a maximum error of 0.06 °C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.
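
    The k-distribution idea can be demonstrated on synthetic data: the band-averaged transmission depends only on the distribution of absorption coefficients, not on their spectral arrangement, so sorting k into a cumulative g(k) lets a few quadrature points replace a fine spectral sum. The lognormal spectrum below is a stand-in, not the paper's water-vapor line data.

```python
# Toy k-distribution: band-averaged transmission from a sorted k spectrum.
import numpy as np

rng = np.random.default_rng(7)
nu = np.linspace(0.0, 1.0, 20000)                   # normalized wavenumber grid
k = np.exp(rng.normal(-2.0, 2.0, nu.size))          # synthetic absorption coeffs

u = 1.0                                             # absorber amount
T_lbl = np.exp(-k * u).mean()                       # fine "line-by-line" average

k_sorted = np.sort(k)                               # k(g): sorted coefficients
g_nodes, w = np.polynomial.legendre.leggauss(8)     # 8-point Gauss quadrature
g = 0.5 * (g_nodes + 1.0)                           # map nodes to g in (0,1)
k_g = np.interp(g, np.linspace(0.0, 1.0, k.size), k_sorted)
T_kdist = 0.5 * np.sum(w * np.exp(-k_g * u))        # integral over g in [0,1]

print(f"line-by-line: {T_lbl:.6f}   8-point k-distribution: {T_kdist:.6f}")
```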

  11. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    Directory of Open Access Journals (Sweden)

    Karlheinz Schwarz

    2013-09-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized below. In each section a further focusing will be provided by occasionally organizing special issues on topics of high interest, collecting papers on fundamental work in the field. More applied papers should be submitted to their corresponding specialist journals. To help us achieve our goal with this journal, we have an excellent editorial board to advise us on the exciting current and future trends in computation from methodology to application. We very much look forward to hearing all about the research going on across the world. [...

  12. Computation: A New Open Access Journal of Computational Chemistry, Computational Biology and Computational Engineering

    OpenAIRE

    Karlheinz Schwarz; Rainer Breitling; Christian Allen

    2013-01-01

    Computation (ISSN 2079-3197; http://www.mdpi.com/journal/computation) is an international scientific open access journal focusing on fundamental work in the field of computational science and engineering. Computational science has become essential in many research areas by contributing to solving complex problems in fundamental science all the way to engineering. The very broad range of application domains suggests structuring this journal into three sections, which are briefly characterized ...

  13. The effect of metal artifacts on the identification of vertical root fractures using different fields of view in cone-beam computed tomography

    Energy Technology Data Exchange (ETDEWEB)

    Moudi, Ehsan; Haghanifar, Sina; Madani, Zahrasadat; Bijani, Ali; Nabavi, Zeynab Sadat [Babol University of Medical Science, Babol (Iran, Islamic Republic of)

    2015-09-15

    The aim of this study was to investigate the effects of metal artifacts on the accurate diagnosis of root fractures using cone-beam computed tomography (CBCT) images with large and small/limited fields of view (FOVs). Forty extracted molar and premolar teeth were collected. Access canals were made in all teeth using a rotary system. In half of the teeth, fractures were created by the application of mild pressure with a hammer. The teeth were then randomly put into a wax rim on an acrylic base designed in the shape of a mandible. CBCT scans were obtained using a NewTom 5G system with FOVs of 18 cm×16 cm and 6 cm×6 cm. A metal pin was then placed into each tooth, and CBCT imaging was again performed using the same fields of view. All scans were evaluated by two oral and maxillofacial radiologists. The specificity, sensitivity, positive predictive value, negative predictive value, and likelihood ratios (positive and negative) were calculated. The maximum levels of sensitivity and specificity (100% and 100%, respectively) were observed in small volume CBCT scans of teeth without pins. The highest negative predictive value was found in the small-volume group without pins, whereas the positive predictive value was 100% in all groups except the large-volume group with pins.
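
    For reference, the figures of merit reported above all follow from a 2×2 decision table; a minimal sketch (the counts below are hypothetical, not the study's data):

        # Hypothetical counts from rating CBCT scans for root fractures.
        tp, fp, fn, tn = 18, 1, 2, 19

        sensitivity = tp / (tp + fn)              # true-positive rate
        specificity = tn / (tn + fp)              # true-negative rate
        ppv = tp / (tp + fp)                      # positive predictive value
        npv = tn / (tn + fn)                      # negative predictive value
        lr_pos = sensitivity / (1 - specificity)  # positive likelihood ratio
        lr_neg = (1 - sensitivity) / specificity  # negative likelihood ratio
        print(sensitivity, specificity, ppv, npv, lr_pos, lr_neg)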

  14. Color fields of the static pentaquark system computed in SU(3) lattice QCD

    Science.gov (United States)

    Cardoso, Nuno; Bicudo, Pedro

    2013-02-01

    We compute the color fields of SU(3) lattice QCD created by static pentaquark systems, in a 24³×48 lattice at β=6.2 corresponding to a lattice spacing a = 0.07261(85) fm. We find that the pentaquark color fields are well described by a multi-Y-type shaped flux tube. The flux tube junction points are compatible with Fermat-Steiner points minimizing the total flux tube length. We also compare the pentaquark flux tube profile with the diquark-diantiquark central flux tube profile in the tetraquark and the quark-antiquark fundamental flux tube profile in the meson, and they match, thus showing that the pentaquark flux tubes are composed of fundamental flux tubes.
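
    The Fermat-Steiner points mentioned above minimize the total distance to a set of fixed points; they can be located numerically by Weiszfeld's algorithm. A minimal planar sketch with made-up coordinates:

        import numpy as np

        def fermat_point(points, iterations=100):
            """Weiszfeld iteration: the point minimizing the summed distance
            (total flux-tube length) to the given points."""
            x = points.mean(axis=0)  # start at the centroid
            for _ in range(iterations):
                d = np.maximum(np.linalg.norm(points - x, axis=1), 1e-12)
                w = 1.0 / d
                x = (points * w[:, None]).sum(axis=0) / w.sum()
            return x

        quarks = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.9]])
        print(fermat_point(quarks))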

  15. Color fields computed in SU(3) lattice QCD for the static tetraquark system

    International Nuclear Information System (INIS)

    Cardoso, Nuno; Cardoso, Marco; Bicudo, Pedro

    2011-01-01

    The color fields created by the static tetraquark system are computed in quenched SU(3) lattice QCD, in a 24³×48 lattice at β=6.2 corresponding to a lattice spacing a = 0.07261(85) fm. We find that the tetraquark color fields are well described by a double-Y, or butterfly, shaped flux tube. The two flux-tube junction points are compatible with Fermat points minimizing the total flux-tube length. We also compare the diquark-diantiquark central flux-tube profile in the tetraquark with the quark-antiquark fundamental flux-tube profile in the meson, and they match, thus showing that the tetraquark flux tubes are composed of fundamental flux tubes.

  16. Fast and accurate calculation of the properties of water and steam for simulation

    International Nuclear Information System (INIS)

    Szegi, Zs.; Gacs, A.

    1990-01-01

    A basic principle simulator was developed at the CRIP, Budapest, for real-time simulation of the transients of WWER-440 type nuclear power plants. Its integral part is the fast and accurate calculation of the thermodynamic properties of water and steam. To eliminate successive approximations, the model system of the secondary coolant circuit requires binary forms which are known as inverse functions, continuous when crossing the saturation line, accurate and coherent for all argument combinations. A solution which reduces the computer memory and execution time demand is reported. (author) 36 refs.; 5 figs.; 3 tabs

  17. Computational Tools To Model Halogen Bonds in Medicinal Chemistry.

    Science.gov (United States)

    Ford, Melissa Coates; Ho, P Shing

    2016-03-10

    The use of halogens in therapeutics dates back to the earliest days of medicine when seaweed was used as a source of iodine to treat goiters. The incorporation of halogens to improve the potency of drugs is now fairly standard in medicinal chemistry. In the past decade, halogens have been recognized as direct participants in defining the affinity of inhibitors through a noncovalent interaction called the halogen bond or X-bond. Incorporating X-bonding into structure-based drug design requires computational models for the anisotropic distribution of charge and the nonspherical shape of halogens, which lead to their highly directional geometries and stabilizing energies. We review here current successes and challenges in developing computational methods to introduce X-bonding into lead compound discovery and optimization during drug development. This fast-growing field will push further development of more accurate and efficient computational tools to accelerate the exploitation of halogens in medicinal chemistry.

  18. Accurately bi-orthogonal direct and adjoint lambda modes via two-sided Eigen-solvers

    International Nuclear Information System (INIS)

    Roman, J.E.; Vidal, V.; Verdu, G.

    2005-01-01

    This work is concerned with the accurate computation of the dominant λ-modes (lambda modes) of the reactor core in order to approximate the solution of the neutron diffusion equation in different situations such as transient modal analysis. In a previous work, the problem was already addressed by implementing a parallel program based on SLEPc (Scalable Library for Eigenvalue Problem Computations), a public domain software for the solution of eigenvalue problems. Now, the proposed solution is extended by incorporating also the computation of the adjoint λ-modes in such a way that the bi-orthogonality condition is enforced very accurately. This feature is very desirable in some types of analyses, and in the proposed scheme it is achieved by making use of two-sided eigenvalue solving software. Current implementations of such software, while still open to improvement, show that they can be competitive in terms of response time and accuracy with respect to other types of eigenvalue solving software. The code developed by the authors has parallel capabilities in order to be able to analyze reactors with a great level of detail in a short time. (authors)
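
    A minimal dense-matrix sketch of the underlying idea, using SciPy's two-sided eigensolver to obtain direct and adjoint modes and checking bi-orthogonality; the random matrix is a stand-in for the discretized diffusion operator, not the authors' SLEPc-based parallel code:

        import numpy as np
        from scipy.linalg import eig

        rng = np.random.default_rng(1)
        A = rng.standard_normal((50, 50))  # stand-in for the discretized operator

        # Right (direct) and left (adjoint) eigenvectors in one call.
        w, vl, vr = eig(A, left=True, right=True)

        # Bi-orthogonality: VL^H @ VR should be diagonal (up to scaling).
        G = vl.conj().T @ vr
        off_diag = G - np.diag(np.diag(G))
        print(np.max(np.abs(off_diag)))  # small when eigenvalues are well separated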

  19. Accurately bi-orthogonal direct and adjoint lambda modes via two-sided Eigen-solvers

    Energy Technology Data Exchange (ETDEWEB)

    Roman, J.E.; Vidal, V. [Valencia Univ. Politecnica, D. Sistemas Informaticos y Computacion (Spain); Verdu, G. [Valencia Univ. Politecnica, D. Ingenieria Quimica y Nuclear (Spain)

    2005-07-01

    This work is concerned with the accurate computation of the dominant λ-modes (lambda modes) of the reactor core in order to approximate the solution of the neutron diffusion equation in different situations such as transient modal analysis. In a previous work, the problem was already addressed by implementing a parallel program based on SLEPc (Scalable Library for Eigenvalue Problem Computations), a public domain software for the solution of eigenvalue problems. Now, the proposed solution is extended by incorporating also the computation of the adjoint λ-modes in such a way that the bi-orthogonality condition is enforced very accurately. This feature is very desirable in some types of analyses, and in the proposed scheme it is achieved by making use of two-sided eigenvalue solving software. Current implementations of such software, while still open to improvement, show that they can be competitive in terms of response time and accuracy with respect to other types of eigenvalue solving software. The code developed by the authors has parallel capabilities in order to be able to analyze reactors with a great level of detail in a short time. (authors)

  20. μ-PIV measurements of the ensemble flow fields surrounding a migrating semi-infinite bubble.

    Science.gov (United States)

    Yamaguchi, Eiichiro; Smith, Bradford J; Gaver, Donald P

    2009-08-01

    Microscale particle image velocimetry (μ-PIV) measurements of the ensemble flow fields surrounding a steadily-migrating semi-infinite bubble were obtained through the novel adaptation of a computer-controlled linear motor flow control system. The system was programmed to generate a square wave velocity input in order to produce accurate constant bubble propagation repeatedly and effectively through a fused glass capillary tube. We present a novel technique for re-positioning of the coordinate axis to the bubble tip frame of reference in each instantaneous field through the analysis of the sudden change of the standard deviation of centerline velocity profiles across the bubble interface. Ensemble averages were then computed in this bubble tip frame of reference. Combined fluid systems of water/air, glycerol/air, and glycerol/Si-oil were used to investigate flows comparable to the computational simulations described in Smith and Gaver (2008) and to past experimental observations of interfacial shape. Fluorescent particle images were also analyzed to measure the residual film thickness trailing behind the bubble. The flow fields and film thickness agree very well with the computational simulations as well as existing experimental and analytical results. Particle accumulation and migration associated with the flow patterns near the bubble tip after long experimental durations are discussed as potential sources of error in the experimental method.

  1. μ-PIV measurements of the ensemble flow fields surrounding a migrating semi-infinite bubble

    Science.gov (United States)

    Yamaguchi, Eiichiro; Smith, Bradford J.; Gaver, Donald P.

    2012-01-01

    Microscale particle image velocimetry (μ-PIV) measurements of the ensemble flow fields surrounding a steadily-migrating semi-infinite bubble were obtained through the novel adaptation of a computer-controlled linear motor flow control system. The system was programmed to generate a square wave velocity input in order to produce accurate constant bubble propagation repeatedly and effectively through a fused glass capillary tube. We present a novel technique for re-positioning of the coordinate axis to the bubble tip frame of reference in each instantaneous field through the analysis of the sudden change of the standard deviation of centerline velocity profiles across the bubble interface. Ensemble averages were then computed in this bubble tip frame of reference. Combined fluid systems of water/air, glycerol/air, and glycerol/Si-oil were used to investigate flows comparable to the computational simulations described in Smith and Gaver (2008) and to past experimental observations of interfacial shape. Fluorescent particle images were also analyzed to measure the residual film thickness trailing behind the bubble. The flow fields and film thickness agree very well with the computational simulations as well as existing experimental and analytical results. Particle accumulation and migration associated with the flow patterns near the bubble tip after long experimental durations are discussed as potential sources of error in the experimental method. PMID:23049158

  2. Field programmable gate array-assigned complex-valued computation and its limits

    Energy Technology Data Exchange (ETDEWEB)

    Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria); Zwick, Wolfgang; Klier, Jochen [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Wenzel, Lothar [National Instruments, 11500 N MOPac Expy, Austin, Texas 78759 (United States); Gröschl, Martin [Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria)

    2014-09-15

    We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
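
    A minimal sketch of the kind of fixed-point complex arithmetic being compared, emulated with Python integers in a Q1.14 format; the word length is an illustrative assumption, not the paper's FPGA configuration:

        FRAC_BITS = 14  # Q1.14: one sign/integer bit, 14 fractional bits

        def to_fixed(x):
            return int(round(x * (1 << FRAC_BITS)))

        def fixed_cmul(ar, ai, br, bi):
            """(ar + i*ai) * (br + i*bi) in fixed point, rescaling after each product."""
            re = (ar * br - ai * bi) >> FRAC_BITS
            im = (ar * bi + ai * br) >> FRAC_BITS
            return re, im

        a = (to_fixed(0.5), to_fixed(-0.25))
        b = (to_fixed(0.3), to_fixed(0.8))
        re, im = fixed_cmul(*a, *b)
        # Exact result is 0.35 + 0.325i; the fixed-point value is close.
        print(re / (1 << FRAC_BITS), im / (1 << FRAC_BITS))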

  3. K Basins Field Verification Program

    International Nuclear Information System (INIS)

    Booth, H.W.

    1994-01-01

    The Field Verification Program establishes a uniform and systematic process to ensure that technical information depicted on selected engineering drawings accurately reflects the actual existing physical configuration. This document defines the Field Verification Program necessary to perform the field walkdown and inspection process that identifies the physical configuration of the systems required to support the mission objectives of K Basins. This program is intended to provide an accurate accounting of the actual field configuration by documenting the as-found information on a controlled drawing

  4. High-speed algorithm for calculating the neutron field in a reactor when working in dialog mode with a computer

    International Nuclear Information System (INIS)

    Afanas'ev, A.M.

    1987-01-01

    The large-scale construction of atomic power stations results in a need for trainers to instruct power-station personnel. The present work considers one problem of developing training computer software, associated with the development of a high-speed algorithm for calculating the neutron field after control-rod (CR) shift by the operator. The case considered here is that in which training units are developed on the basis of small computers of SM-2 type, which fall significantly short of the BESM-6 and EC-type computers used for the design calculations, in terms of speed and memory capacity. Depending on the apparatus for solving the criticality problem, in a two-dimensional single-group approximation, the physical-calculation programs require ∼ 1 min of machine time on a BESM-6 computer, which translates to ∼ 10 min on an SM-2 machine. In practice, this time is even longer, since ultimately it is necessary to determine not the effective multiplication factor K_eff, but rather the local perturbations of the emergency-control (EC) system (to reach criticality) and the change in the neutron field on shifting the CR and the EC rods. This long time means that it is very problematic to use physical-calculation programs to work in dialog mode with a computer. The algorithm presented below allows the neutron field following shift of the CR and EC rods to be calculated in a few seconds on a BESM-6 computer (tens of seconds on an SM-2 machine). This high speed may be achieved as a result of the preliminary calculation of the influence function (IF) for each CR. The IF can be calculated in advance on a computer; it is then stored in the external memory (EM) and, where necessary, used as the initial information
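
    The essence of the algorithm is that the expensive diffusion solve is replaced online by superposition of precomputed influence functions; a minimal sketch (the field shapes and rod responses below are invented for illustration):

        import numpy as np

        # Precomputed offline: a baseline flux map and one influence function (IF)
        # per control rod, i.e. the flux change per unit insertion of that rod.
        nx, ny, n_rods = 32, 32, 4
        rng = np.random.default_rng(2)
        baseline_flux = np.abs(rng.standard_normal((nx, ny)))
        influence = rng.standard_normal((n_rods, nx, ny)) * 0.01

        def flux_after_shift(rod_insertions):
            """Fast online estimate: baseline plus a weighted sum of stored IFs.
            rod_insertions: length-n_rods array of insertion changes."""
            return baseline_flux + np.tensordot(rod_insertions, influence, axes=1)

        phi = flux_after_shift(np.array([0.2, 0.0, -0.1, 0.05]))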

  5. A singularity extraction technique for computation of antenna aperture fields from singular plane wave spectra

    DEFF Research Database (Denmark)

    Cappellin, Cecilia; Breinbjerg, Olav; Frandsen, Aksel

    2008-01-01

    An effective technique for extracting the singularity of plane wave spectra in the computation of antenna aperture fields is proposed. The singular spectrum is first factorized into a product of a finite function and a singular function. The finite function is inverse Fourier transformed numerically using the Inverse Fast Fourier Transform, while the singular function is inverse Fourier transformed analytically, using the Weyl identity, and the two resulting spatial functions are then convolved to produce the antenna aperture field. This article formulates the theory of the singularity...

  6. Computational Model Prediction and Biological Validation Using Simplified Mixed Field Exposures for the Development of a GCR Reference Field

    Science.gov (United States)

    Hada, M.; Rhone, J.; Beitman, A.; Saganti, P.; Plante, I.; Ponomarev, A.; Slaba, T.; Patel, Z.

    2018-01-01

    The yield of chromosomal aberrations has been shown to increase in the lymphocytes of astronauts after long-duration missions of several months in space. Chromosome exchanges, especially translocations, are positively correlated with many cancers and are therefore a potential biomarker of cancer risk associated with radiation exposure. Although extensive studies have been carried out on the induction of chromosomal aberrations by low- and high-LET radiation in human lymphocytes, fibroblasts, and epithelial cells exposed in vitro, there is a lack of data on chromosome aberrations induced by low dose-rate chronic exposure and mixed field beams such as those expected in space. Chromosome aberration studies at NSRL will provide the biological validation needed to extend the computational models over a broader range of experimental conditions (more complicated mixed fields leading up to the galactic cosmic rays (GCR) simulator), helping to reduce uncertainties in radiation quality effects and dose-rate dependence in cancer risk models. These models can then be used to answer some of the open questions regarding requirements for a full GCR reference field, including particle type and number, energy, dose rate, and delivery order. In this study, we designed a simplified mixed field beam with a combination of proton, helium, oxygen, and iron ions with shielding, or proton, helium, oxygen, and titanium ions without shielding. Human fibroblast cells were irradiated with these mixed field beams as well as with each single beam at acute and chronic dose rates, and chromosome aberrations (CA) were measured with 3-color fluorescence in situ hybridization (FISH) chromosome painting methods. The frequency and types of CA induced at acute and chronic dose rates with single and mixed field beams will be discussed. A computational chromosome and radiation-induced DNA damage model, BDSTRACKS (Biological Damage by Stochastic Tracks), was updated to simulate various types of CA induced by

  7. Robust and accurate vectorization of line drawings.

    Science.gov (United States)

    Hilaire, Xavier; Tombre, Karl

    2006-06-01

    This paper presents a method for vectorizing the graphical parts of paper-based line drawings. The method consists of separating the input binary image into layers of homogeneous thickness, skeletonizing each layer, segmenting the skeleton by a method based on random sampling, and simplifying the result. The segmentation method is robust, with a best bound of 50 percent noise reached for indefinitely long primitives. Accurate estimation of the recognized vectors' parameters is enabled by explicitly computing their feasibility domains. Theoretical performance analysis and expression of the complexity of the segmentation method are derived. Experimental results and comparisons with other vectorization systems are also provided.

  8. An evaluation method of cross-type H-coil angle for accurate two-dimensional vector magnetic measurement

    International Nuclear Information System (INIS)

    Maeda, Yoshitaka; Todaka, Takashi; Shimoji, Hiroyasu; Enokizono, Masato; Sievert, Johanes

    2006-01-01

    Recently, two-dimensional vector magnetic measurement has become popular, and many researchers in this field have worked to develop more accurate measuring systems and standard measurement systems. Because the two-dimensional vector magnetic property is the relationship between the magnetic flux density vector B and the magnetic field strength vector H, the most important parameters are these vector components. For the accurate measurement of the field strength vector, we have developed an evaluation apparatus, which consists of a standard solenoid coil and a high-precision turntable. Angle errors of a double H-coil (a cross-type H-coil), which is wound one after the other around a former, can be evaluated with this apparatus. The magnetic field strength is compensated using the measured angle error.

  9. Application of CT-PSF-based computer-simulated lung nodules for evaluating the accuracy of computer-aided volumetry.

    Science.gov (United States)

    Funaki, Ayumu; Ohkubo, Masaki; Wada, Shinichi; Murao, Kohei; Matsumoto, Toru; Niizuma, Shinji

    2012-07-01

    With the wide dissemination of computed tomography (CT) screening for lung cancer, measuring the nodule volume accurately with computer-aided volumetry software is increasingly important. Many studies for determining the accuracy of volumetry software have been performed using a phantom with artificial nodules. These phantom studies are limited, however, in their ability to reproduce the nodules both accurately and in the variety of sizes and densities required. Therefore, we propose a new approach of using computer-simulated nodules based on the point spread function measured in a CT system. The validity of the proposed method was confirmed by the excellent agreement obtained between computer-simulated nodules and phantom nodules regarding the volume measurements. A practical clinical evaluation of the accuracy of volumetry software was achieved by adding simulated nodules onto clinical lung images, including noise and artifacts. The tested volumetry software was revealed to be accurate to within 20% for nodules larger than 5 mm with a nodule-to-background (lung) density difference of 400-600 HU. Such a detailed analysis can provide clinically useful information on the use of volumetry software in CT screening for lung cancer. We concluded that the proposed method is effective for evaluating the performance of computer-aided volumetry software.
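
    A minimal sketch of the simulation idea: blur an ideal sphere with the scanner's point spread function and add it to a lung background. Here the PSF is approximated as an isotropic Gaussian, and all sizes and HU values are illustrative assumptions:

        import numpy as np
        from scipy.ndimage import gaussian_filter

        # Ideal nodule: a sphere of given diameter and density contrast (HU).
        n, voxel_mm = 64, 0.5
        diameter_mm, contrast_hu = 6.0, 500.0
        z, y, x = np.ogrid[:n, :n, :n]
        r2 = (z - n / 2) ** 2 + (y - n / 2) ** 2 + (x - n / 2) ** 2
        sphere = (r2 <= (diameter_mm / 2 / voxel_mm) ** 2) * contrast_hu

        # Blur with a Gaussian stand-in for the measured CT PSF.
        psf_sigma_mm = 0.8
        nodule = gaussian_filter(sphere.astype(float), sigma=psf_sigma_mm / voxel_mm)

        # Superimpose on a (here uniform) lung background of -800 HU.
        image = np.full((n, n, n), -800.0) + nodule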

  10. Infinities in Quantum Field Theory and in Classical Computing: Renormalization Program

    Science.gov (United States)

    Manin, Yuri I.

    Introduction. The main observable quantities in Quantum Field Theory, correlation functions, are expressed by the celebrated Feynman path integrals. A mathematical definition of them involving a measure and actual integration is still lacking. Instead, it is replaced by a series of ad hoc but highly efficient and suggestive heuristic formulas such as perturbation formalism. The latter interprets such an integral as a formal series of finite-dimensional but divergent integrals, indexed by Feynman graphs, the list of which is determined by the Lagrangian of the theory. Renormalization is a prescription that allows one to systematically "subtract infinities" from these divergent terms, producing an asymptotic series for quantum correlation functions. On the other hand, graphs, treated as "flowcharts," also form a combinatorial skeleton of abstract computation theory. Partial recursive functions, which according to Church's thesis exhaust the universe of (semi)computable maps, are generally not everywhere defined due to potentially infinite searches and loops. In this paper I argue that such infinities can be addressed in the same way as Feynman divergences. More details can be found in [9,10].

  11. Accurately Detecting Students' Lies regarding Relational Aggression by Correctional Instructions

    Science.gov (United States)

    Dickhauser, Oliver; Reinhard, Marc-Andre; Marksteiner, Tamara

    2012-01-01

    This study investigates the effect of correctional instructions when detecting lies about relational aggression. Based on models from the field of social psychology, we predict that correctional instruction will lead to a less pronounced lie bias and to more accurate lie detection. Seventy-five teachers received videotapes of students' true denial…

  12. An accurate method for computer-generating tungsten anode x-ray spectra from 30 to 140 kV.

    Science.gov (United States)

    Boone, J M; Seibert, J A

    1997-11-01

    A tungsten anode spectral model using interpolating polynomials (TASMIP) was used to compute x-ray spectra at 1 keV intervals over the range from 30 kV to 140 kV. The TASMIP is not semi-empirical and uses no physical assumptions regarding x-ray production, but rather interpolates measured constant potential x-ray spectra published by Fewell et al. [Handbook of Computed Tomography X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1981)]. X-ray output measurements (mR/mAs measured at 1 m) were made on a calibrated constant potential generator in our laboratory from 50 kV to 124 kV, and with 0-5 mm added aluminum filtration. The Fewell spectra were slightly modified (numerically hardened) and normalized based on the attenuation and output characteristics of a constant potential generator and metal-insert x-ray tube in our laboratory. Then, using the modified Fewell spectra of different kVs, the photon fluence φ at each 1 keV energy bin (E) over energies from 10 keV to 140 keV was characterized using polynomial functions of the form φ(E) = a₀[E] + a₁[E]·kV + a₂[E]·kV² + … + aₙ[E]·kVⁿ. A total of 131 polynomial functions were used to calculate accurate x-ray spectra, each function requiring between two and four terms. The resulting TASMIP algorithm produced x-ray spectra that match both the quality and quantity characteristics of the x-ray system in our laboratory. For photon fluences above 10% of the peak fluence in the spectrum, the average percent difference (and standard deviation) between the modified Fewell spectra and the TASMIP photon fluence was -1.43% (3.8%) for the 50 kV spectrum, -0.89% (1.37%) for the 70 kV spectrum, and for the 80, 90, 100, 110, 120, 130 and 140 kV spectra, the mean differences between spectra were all less than 0.20% and the standard deviations were less than approximately 1.1%. The model was also extended to include the effects of generator-induced kV ripple. Finally, the x-ray photon fluence in the units of
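
    A minimal sketch of the TASMIP-style evaluation, one polynomial in kV per 1 keV energy bin; the coefficient array below is random filler standing in for the published fit:

        import numpy as np

        n_bins, max_order = 131, 4  # energy bins; terms a0..a3 per bin
        rng = np.random.default_rng(3)
        coeffs = rng.random((n_bins, max_order))  # stand-in for fitted a_i[E]

        def tasmip_spectrum(kv, coeffs):
            """Photon fluence per bin: phi(E) = sum_i a_i[E] * kV**i,
            with bins above the tube potential forced to zero."""
            powers = kv ** np.arange(coeffs.shape[1])       # [1, kV, kV^2, ...]
            phi = coeffs @ powers
            energies = np.arange(10, 10 + coeffs.shape[0])  # 10 keV up, 1 keV bins
            phi[energies > kv] = 0.0  # no photons above the tube potential
            return np.clip(phi, 0.0, None)

        spectrum_80kv = tasmip_spectrum(80.0, coeffs)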

  13. Three dimensional field computation software package DE3D and its applications

    International Nuclear Information System (INIS)

    Fan Mingwu; Zhang Tianjue; Yan Weili

    1992-07-01

    A software package, DE3D, that can be run on a PC for three-dimensional electrostatic and magnetostatic field analysis has been developed at CIAE (China Institute of Atomic Energy). A two-scalar-potential method and special numerical techniques give the code high precision. It can be used for electrostatic and magnetostatic field computations with complex boundary conditions. In most cases, the result accuracy is better than 1% compared with measurements. In some situations, the results are more acceptable than those of other codes because special techniques are used for the current integral. Typical examples, the design of a cyclotron magnet and of magnetic elements on its beam transport line, given in the paper show how the program helps the designer to improve the design of the product. The software package could bring advantages to producers and designers

  14. Experimental study of matrix carbon field-emission cathodes and computer aided design of electron guns for microwave power devices, exploring these cathodes

    International Nuclear Information System (INIS)

    Grigoriev, Y.A.; Petrosyan, A.I.; Penzyakov, V.V.; Pimenov, V.G.; Rogovin, V.I.; Shesterkin, V.I.; Kudryashov, V.P.; Semyonov, V.C.

    1997-01-01

    The experimental study of matrix carbon field-emission cathodes (MCFECs), which has led to stable operation of the cathodes with emission currents up to 100 mA, is described. A method of computer aided design of TWT electron guns (EGs) with MCFECs, based on the results of the MCFEC emission experimental study, is presented. The experimental MCFEC emission characteristics are used to define the field gain coefficient K and the cathode effective emission area S_eff. The EG program computes the electric field on the MCFEC surface, multiplies it by the K value and uses the Fowler-Nordheim law and the S_eff value to calculate the MCFEC current; the electron trajectories are computed as well. copyright 1997 American Vacuum Society
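
    A minimal sketch of the current calculation described above, using the elementary Fowler-Nordheim law with its standard first and second constants; the field gain K, effective area S_eff, applied field, and work function below are illustrative assumptions, not the paper's measured values:

        import numpy as np

        A_FN = 1.541434e-6  # A eV V^-2
        B_FN = 6.830890e9   # eV^-3/2 V m^-1

        def cathode_current(E_applied, K, S_eff, work_function_eV=4.6):
            """Local field F = K * E_applied (V/m); current density
            J = (A*F^2/phi) * exp(-B*phi^1.5/F) integrated over S_eff (m^2)."""
            F = K * E_applied
            J = (A_FN * F ** 2 / work_function_eV) * np.exp(
                -B_FN * work_function_eV ** 1.5 / F)
            return J * S_eff

        # e.g. a 5 MV/m macroscopic field, field gain 500, S_eff = 1e-12 m^2
        print(cathode_current(5e6, 500.0, 1e-12))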

  15. Computer applications in nuclear medicine

    International Nuclear Information System (INIS)

    Lancaster, J.L.; Lasher, J.C.; Blumhardt, R.

    1987-01-01

    Digital computers were introduced to nuclear medicine research as an imaging modality in the mid-1960s. Widespread use of imaging computers (scintigraphic computers) was not seen in nuclear medicine clinics until the mid-1970s. For the user, the ability to acquire scintigraphic images into the computer for quantitative purposes, with accurate selection of regions of interest (ROIs), promised almost endless computational capabilities. Investigators quickly developed many new methods for quantitating the distribution patterns of radiopharmaceuticals within the body, both spatially and temporally. The computer was used to acquire data on practically every organ that could be imaged by means of gamma cameras or rectilinear scanners. Methods of image processing borrowed from other disciplines were applied to scintigraphic computer images in an attempt to improve image quality. Image processing in nuclear medicine has evolved into a relatively extensive set of tasks that can be called on by the user to provide additional clinical information rather than to improve image quality. Digital computers are utilized in nuclear medicine departments for nonimaging applications as well. Patient scheduling, archiving, radiopharmaceutical inventory, radioimmunoassay (RIA), and health physics are just a few of the areas in which the digital computer has proven helpful. The computer is useful in any area in which a large quantity of data needs to be accurately managed, especially over a long period of time

  16. Computer Vision Malaria Diagnostic Systems—Progress and Prospects

    Directory of Open Access Journals (Sweden)

    Joseph Joel Pollak

    2017-08-01

    Accurate malaria diagnosis is critical to prevent malaria fatalities, curb overuse of antimalarial drugs, and promote appropriate management of other causes of fever. While several diagnostic tests exist, the need for a rapid and highly accurate malaria assay remains. Microscopy and rapid diagnostic tests are the main diagnostic modalities available, yet they can demonstrate poor performance and accuracy. Automated microscopy platforms have the potential to significantly improve and standardize malaria diagnosis. Based on image recognition and machine learning algorithms, these systems maintain the benefits of light microscopy and provide improvements such as quicker scanning time, greater scanning area, and increased consistency brought by automation. While these applications have been in development for over a decade, recently several commercial platforms have emerged. In this review, we discuss the most advanced computer vision malaria diagnostic technologies and investigate several of their features which are central to field use. Additionally, we discuss the technological and policy barriers to implementing these technologies in low-resource settings world-wide.

  17. Quantum control with noisy fields: computational complexity versus sensitivity to noise

    International Nuclear Information System (INIS)

    Kallush, S; Khasin, M; Kosloff, R

    2014-01-01

    A closed quantum system is defined as completely controllable if an arbitrary unitary transformation can be executed using the available controls. In practice, control fields are a source of unavoidable noise, which has to be suppressed to retain controllability. Can one design control fields such that the effect of noise is negligible on the time-scale of the transformation? This question is intimately related to the fundamental problem of a connection between the computational complexity of the control problem and the sensitivity of the controlled system to noise. The present study considers a paradigm of control, where the Lie-algebraic structure of the control Hamiltonian is fixed, while the size of the system increases with the dimension of the Hilbert space representation of the algebra. We find two types of control tasks, easy and hard. Easy tasks are characterized by a small variance of the evolving state with respect to the control operators. They are relatively immune to noise and the control field is easy to find. Hard tasks have a large variance, are sensitive to noise and the control field is hard to find. The influence of noise increases with the size of the system, which is measured by the scaling factor N of the largest weight of the representation. For fixed time and control field the ability to control degrades as O(N) for easy tasks and as O(N²) for hard tasks. As a consequence, even in the most favorable estimate, for large quantum systems, generic noise in the controls dominates for a typical class of target transformations, i.e. complete controllability is destroyed by noise. (paper)

  18. Energy law preserving C0 finite element schemes for phase field models in two-phase flow computations

    International Nuclear Information System (INIS)

    Hua Jinsong; Lin Ping; Liu Chun; Wang Qi

    2011-01-01

    Highlights: → We study phase-field models for multi-phase flow computation. → We develop an energy-law preserving C⁰ FEM. → We show that the energy-law preserving method works better. → We overcome unphysical oscillation associated with the Cahn-Hilliard model. - Abstract: We use the idea in the cited work to develop an energy law preserving method and compute the diffusive interface (phase-field) models of Allen-Cahn and Cahn-Hilliard type, respectively, governing the motion of two-phase incompressible flows. We discretize these two models using a C⁰ finite element in space and a modified midpoint scheme in time. To increase the stability in the pressure variable we treat the divergence-free condition by a penalty formulation, under which the discrete energy law can still be derived for these diffusive interface models. Through an example we demonstrate that the energy law preserving method is beneficial for computing these multi-phase flow models. We also demonstrate that when applying the energy law preserving method to the model of Cahn-Hilliard type, unphysical interfacial oscillations may occur. We examine the source of such oscillations and a remedy is presented to eliminate the oscillations. A few two-phase incompressible flow examples are computed to show the good performance of our method.

  19. Women are underrepresented in computational biology: An analysis of the scholarly literature in biology, computer science and computational biology.

    Directory of Open Access Journals (Sweden)

    Kevin S Bonham

    2017-10-01

    While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.

  20. Women are underrepresented in computational biology: An analysis of the scholarly literature in biology, computer science and computational biology.

    Science.gov (United States)

    Bonham, Kevin S; Stefan, Melanie I

    2017-10-01

    While women are generally underrepresented in STEM fields, there are noticeable differences between fields. For instance, the gender ratio in biology is more balanced than in computer science. We were interested in how this difference is reflected in the interdisciplinary field of computational/quantitative biology. To this end, we examined the proportion of female authors in publications from the PubMed and arXiv databases. There are fewer female authors on research papers in computational biology, as compared to biology in general. This is true across authorship position, year, and journal impact factor. A comparison with arXiv shows that quantitative biology papers have a higher ratio of female authors than computer science papers, placing computational biology in between its two parent fields in terms of gender representation. Both in biology and in computational biology, a female last author increases the probability of other authors on the paper being female, pointing to a potential role of female PIs in influencing the gender balance.

  1. Analysis of Craniofacial Images using Computational Atlases and Deformation Fields

    DEFF Research Database (Denmark)

    Ólafsdóttir, Hildur

    2008-01-01

    … purposes. The basis for most of the applications is non-rigid image registration. This approach brings one image into the coordinate system of another, resulting in a deformation field describing the anatomical correspondence between the two images. A computational atlas representing the average anatomy … of asymmetry. The analyses are applied to the study of three different craniofacial anomalies. The craniofacial applications include studies of Crouzon syndrome (in mice), unicoronal synostosis plagiocephaly and deformational plagiocephaly. Using the proposed methods, the thesis reveals novel findings about … the craniofacial morphology and asymmetry of Crouzon mice. Moreover, a method to plan and evaluate treatment of children with deformational plagiocephaly, based on asymmetry assessment, is established. Finally, asymmetry in children with unicoronal synostosis is automatically assessed, confirming previous results …

  2. Computer Vision Photogrammetry for Underwater Archaeological Site Recording in a Low-Visibility Environment

    Science.gov (United States)

    Van Damme, T.

    2015-04-01

    Computer Vision Photogrammetry allows archaeologists to accurately record underwater sites in three dimensions using simple two-dimensional picture or video sequences, automatically processed in dedicated software. In this article, I share my experience in working with one such software package, namely PhotoScan, to record a Dutch shipwreck site. In order to demonstrate the method's reliability and flexibility, the site in question is reconstructed from simple GoPro footage, captured in low-visibility conditions. Based on the results of this case study, Computer Vision Photogrammetry compares very favourably to manual recording methods both in recording efficiency and in the quality of the final results. In a final section, the significance of Computer Vision Photogrammetry is then assessed from a historical perspective, by placing the current research in the wider context of about half a century of successful use of Analytical and later Digital photogrammetry in the field of underwater archaeology. I conclude that while photogrammetry has been used in our discipline for several decades now, for various reasons the method was only ever used by a relatively small percentage of projects. This is likely to change in the near future since, compared to the 'traditional' photogrammetry approaches employed in the past, today Computer Vision Photogrammetry is easier to use, more reliable and more affordable than ever before, while at the same time producing more accurate and more detailed three-dimensional results.

  3. The development of accurate data for the design of fast reactors

    International Nuclear Information System (INIS)

    Rossouw, P.A.

    1976-04-01

    The proposed use of nuclear power in the generation of electricity in South Africa and the use of fast reactors in the country's nuclear program require a method for fast reactor evaluation. The availability of accurate neutron data and neutronics computation techniques for fast reactors is required for such an evaluation. The reactor physics and reactor parameters of importance in the evaluation of fast reactors are discussed, and computer programs for the computation of reactor spectra and reactor parameters from differential nuclear data are presented in this treatise. In endeavouring to increase the accuracy of fast reactor design, two methods for the improvement of differential nuclear data were developed and are discussed in detail. The computer programs which were developed for this purpose are also given. The neutron data of the most important fissionable and breeding nuclei (U-235, U-238, Pu-239 and Pu-240) are adjusted using both methods, and the improved neutron data are tested by computation with an advanced neutronics computer program. The improved and original neutron data are compared, and the use of the improved data in fast reactor design is discussed

  4. DeepBound: accurate identification of transcript boundaries via deep convolutional neural fields

    KAUST Repository

    Shao, Mingfu; Ma, Jianzhu; Wang, Sheng

    2017-01-01

    Motivation: Reconstructing the full-length expressed transcripts (a.k.a. the transcript assembly problem) from the short sequencing reads produced by the RNA-seq protocol plays a central role in identifying novel genes and transcripts as well as in studying gene expressions and gene functions. A crucial step in transcript assembly is to accurately determine the splicing junctions and boundaries of the expressed transcripts from the reads alignment. In contrast to the splicing junctions that can be efficiently detected from spliced reads, the problem of identifying boundaries remains open and challenging, due to the fact that the signal related to boundaries is noisy and weak.

  5. DeepBound: accurate identification of transcript boundaries via deep convolutional neural fields

    KAUST Repository

    Shao, Mingfu

    2017-04-20

    Motivation: Reconstructing the full-length expressed transcripts (a.k.a. the transcript assembly problem) from the short sequencing reads produced by the RNA-seq protocol plays a central role in identifying novel genes and transcripts as well as in studying gene expressions and gene functions. A crucial step in transcript assembly is to accurately determine the splicing junctions and boundaries of the expressed transcripts from the reads alignment. In contrast to the splicing junctions that can be efficiently detected from spliced reads, the problem of identifying boundaries remains open and challenging, due to the fact that the signal related to boundaries is noisy and weak.

  6. Can a numerically stable subgrid-scale model for turbulent flow computation be ideally accurate?: a preliminary theoretical study for the Gaussian filtered Navier-Stokes equations.

    Science.gov (United States)

    Ida, Masato; Taniguchi, Nobuyuki

    2003-09-01

    This paper introduces a candidate for the origin of the numerical instabilities in large eddy simulation repeatedly observed in academic and practical industrial flow computations. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, the application of the Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable term, a cross-derivative term, which is similar to one appearing in the Gaussian filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. The present result predicts that not only the numerical methods and subgrid-scale models employed, but the applied filtering process alone, can be a seed of this numerical instability. An investigation concerning the relationship between the turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. The present findings raise the question whether a numerically stable subgrid-scale model can be ideally accurate.

  7. A graphical user interface for RAId, a knowledge integrated proteomics analysis suite with accurate statistics

    OpenAIRE

    Joyce, Brendan; Lee, Danny; Rubio, Alex; Ogurtsov, Aleksey; Alves, Gelio; Yu, Yi-Kuo

    2018-01-01

    Objective: RAId is a software package that has been actively developed for the past 10 years for computationally and visually analyzing MS/MS data. Founded on rigorous statistical methods, RAId's core program computes accurate E-values for peptides and proteins identified during database searches. Making this robust tool readily accessible for the proteomics community by developing a graphical user interface (GUI) is our main goal…

  8. Fast and accurate implementation of Fourier spectral approximations of nonlocal diffusion operators and its applications

    International Nuclear Information System (INIS)

    Du, Qiang; Yang, Jiang

    2017-01-01

    This work is concerned with the Fourier spectral approximation of various integral differential equations associated with some linear nonlocal diffusion and peridynamic operators under periodic boundary conditions. For radially symmetric kernels, the nonlocal operators under consideration are diagonalizable in the Fourier space so that the main computational challenge is on the accurate and fast evaluation of their eigenvalues or Fourier symbols consisting of possibly singular and highly oscillatory integrals. For a large class of fractional power-like kernels, we propose a new approach based on reformulating the Fourier symbols both as coefficients of a series expansion and solutions of some simple ODE models. We then propose a hybrid algorithm that utilizes both truncated series expansions and high order Runge–Kutta ODE solvers to provide fast evaluation of Fourier symbols in both one and higher dimensional spaces. It is shown that this hybrid algorithm is robust, efficient and accurate. As applications, we combine this hybrid spectral discretization in the spatial variables and the fourth-order exponential time differencing Runge–Kutta for temporal discretization to offer high order approximations of some nonlocal gradient dynamics including nonlocal Allen–Cahn equations, nonlocal Cahn–Hilliard equations, and nonlocal phase-field crystal models. Numerical results show the accuracy and effectiveness of the fully discrete scheme and illustrate some interesting phenomena associated with the nonlocal models.
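
    The Fourier-space diagonalization the paper exploits can be illustrated with the simplest radially symmetric case: a periodic fractional Laplacian applied through its multiplier |k|^α. This one-dimensional toy is not the authors' kernel or their hybrid symbol evaluation:

        import numpy as np

        def fractional_laplacian(u, alpha, L=2 * np.pi):
            """Apply -(-Laplacian)^(alpha/2) to periodic samples u on [0, L):
            multiply each Fourier mode by -|k|^alpha."""
            n = u.size
            k = 2 * np.pi * np.fft.fftfreq(n, d=L / n)
            return np.fft.ifft(-np.abs(k) ** alpha * np.fft.fft(u)).real

        # Sanity check against the classical Laplacian: alpha = 2 on sin(x).
        x = np.linspace(0, 2 * np.pi, 128, endpoint=False)
        err = np.max(np.abs(fractional_laplacian(np.sin(x), 2.0) + np.sin(x)))
        print(err)  # near machine precision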

  9. Logging into the Field—Methodological Reflections on Ethnographic Research in a Pluri-Local and Computer-Mediated Field

    Directory of Open Access Journals (Sweden)

    Heike Mónika Greschke

    2007-09-01

    Full Text Available This article aims to introduce an ethnic group inhabiting a common virtual space in the World Wide Web (WWW, while being physically located in different socio-geographical contexts. Potentially global in its geographical extent, this social formation is constituted by means of interrelating virtual-global dimensions with physically grounded parts of the actors' lifeworlds. In addition, the communities' social life relies on specific communicative practices joining mediated forms of communication with co-presence based encounters. Ethnographic research in a pluri-local and computer-mediated field poses a set of problems which demand thorough reflection as well as a search for creative solutions. How can the boundaries of the field be determined? What does "being there" signify in such a case? Is it possible to enter the field while sitting at my own desk, just by visiting the respective site in the WWW, simply observing the communication going on without even being noticed by the subjects in the field? Or does "being in the field" imply that I ought to turn into a member of the studied community? Am I supposed to effectively live with the others for a while? And then, what can "living together" actually mean in that case? Will I learn enough about the field simply by participating in its virtual activities? Or do I have to account for the physically grounded dimensions of the actors' lifeworlds, as well? Ethnographic research in a pluri-local and computer-mediated field in practice raises a lot of questions regarding the ways of entering the field and being in the field. Some of them will be discussed in this paper by means of reflecting research experiences gained in the context of a recently concluded case study. URN: urn:nbn:de:0114-fqs0703321

  10. Accurate reconstruction of hyperspectral images from compressive sensing measurements

    Science.gov (United States)

    Greer, John B.; Flake, J. C.

    2013-05-01

    The emerging field of Compressive Sensing (CS) provides a new way to capture data by shifting the heaviest burden of data collection from the sensor to the computer on the user-end. This new means of sensing requires fewer measurements for a given amount of information than traditional sensors. We investigate the efficacy of CS for capturing HyperSpectral Imagery (HSI) remotely. We also introduce a new family of algorithms for constructing HSI from CS measurements with Split Bregman Iteration [Goldstein and Osher, 2009]. These algorithms combine spatial Total Variation (TV) with smoothing in the spectral dimension. We examine models for three different CS sensors: the Coded Aperture Snapshot Spectral Imager-Single Disperser (CASSI-SD) [Wagadarikar et al., 2008] and Dual Disperser (CASSI-DD) [Gehm et al., 2007] cameras, and a hypothetical random sensing model closer to CS theory, but not necessarily implementable with existing technology. We simulate the capture of remotely sensed images by applying the sensor forward models to well-known HSI scenes - an AVIRIS image of Cuprite, Nevada and the HYMAP Urban image. To measure accuracy of the CS models, we compare the scenes constructed with our new algorithm to the original AVIRIS and HYMAP cubes. The results demonstrate the possibility of accurately sensing HSI remotely with significantly fewer measurements than standard hyperspectral cameras.
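
    To make the CS premise concrete, here is a minimal sketch of recovering a sparse signal from a few random measurements with iterative soft thresholding (ISTA); this is a generic l1 solver for illustration, not the Split Bregman TV method developed in the paper:

        import numpy as np

        rng = np.random.default_rng(4)
        n, m, k = 256, 80, 8  # signal length, measurements, sparsity
        x_true = np.zeros(n)
        x_true[rng.choice(n, k, replace=False)] = rng.standard_normal(k)
        A = rng.standard_normal((m, n)) / np.sqrt(m)  # random sensing matrix
        y = A @ x_true                                # compressive measurements

        def ista(A, y, lam=0.01, iterations=500):
            """Minimize 0.5*||Ax - y||^2 + lam*||x||_1 by soft-thresholded
            gradient steps."""
            step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1 / Lipschitz constant
            x = np.zeros(A.shape[1])
            for _ in range(iterations):
                g = x - step * (A.T @ (A @ x - y))
                x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)
            return x

        x_hat = ista(A, y)
        print(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))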

  11. THE DECAY OF A WEAK LARGE-SCALE MAGNETIC FIELD IN TWO-DIMENSIONAL TURBULENCE

    Energy Technology Data Exchange (ETDEWEB)

    Kondić, Todor; Hughes, David W.; Tobias, Steven M., E-mail: t.kondic@leeds.ac.uk [Department of Applied Mathematics, University of Leeds, Leeds LS2 9JT (United Kingdom)

    2016-06-01

    We investigate the decay of a large-scale magnetic field in the context of incompressible, two-dimensional magnetohydrodynamic turbulence. It is well established that a very weak mean field, of strength significantly below equipartition value, induces a small-scale field strong enough to inhibit the process of turbulent magnetic diffusion. In light of ever-increasing computer power, we revisit this problem to investigate fluid and magnetic Reynolds numbers that were previously inaccessible. Furthermore, by exploiting the relation between the turbulent diffusion of the magnetic potential and that of the magnetic field, we are able to calculate the turbulent magnetic diffusivity extremely accurately through the imposition of a uniform mean magnetic field. We confirm the strong dependence of the turbulent diffusivity on the product of the magnetic Reynolds number and the energy of the large-scale magnetic field. We compare our findings with various theoretical descriptions of this process.

  12. Some far-field acoustics characteristics of the XV-15 tilt-rotor aircraft

    Science.gov (United States)

    Golub, Robert A.; Conner, David A.; Becker, Lawrence E.; Rutledge, C. Kendall; Smith, Rita A.

    1990-01-01

    Far-field acoustics tests have been conducted on an instrumented XV-15 tilt-rotor aircraft. The purpose of these acoustic measurements was to create an encompassing, high confidence (90 percent), and accurate (-1.4/+1.8 dB theoretical confidence interval) far-field acoustics data base to validate ROTONET and other current rotorcraft noise prediction computer codes. This paper describes the flight techniques used, with emphasis on the care taken to obtain high-quality far-field acoustic data. The quality and extensiveness of the data base collected are shown by presentation of ground acoustic contours for level flyovers for the airplane flight mode and for several forward velocities and nacelle tilts for the transition mode and helicopter flight mode. Acoustic pressure time-histories and fully analyzed ensemble averaged far-field data results (spectra) are shown for each of the ground contour cases.

  13. Entertainment computing, social transformation and the quantum field

    NARCIS (Netherlands)

    Rauterberg, G.W.M.; Nijholt, A.; Reidsma, D.; Hondorp, H.

    2009-01-01

    Entertainment computing is on its way to becoming an established academic discipline. The scope of entertainment computing is quite broad (see the scope of the international journal Entertainment Computing). One unifying idea in this diverse community of entertainment researchers and developers might be

  14. Accurate Holdup Calculations with Predictive Modeling & Data Integration

    Energy Technology Data Exchange (ETDEWEB)

    Azmy, Yousry [North Carolina State Univ., Raleigh, NC (United States). Dept. of Nuclear Engineering; Cacuci, Dan [Univ. of South Carolina, Columbia, SC (United States). Dept. of Mechanical Engineering

    2017-04-03

    In facilities that process special nuclear material (SNM) it is important to account accurately for the fissile material that enters and leaves the plant. Although there are many stages and processes through which materials must be traced and measured, the focus of this project is material that is “held-up” in equipment, pipes, and ducts during normal operation and that can accumulate over time into significant quantities. Accurately estimating the holdup is essential for proper SNM accounting (vis-à-vis nuclear non-proliferation), criticality and radiation safety, waste management, and efficient plant operation. Usually it is not possible to directly measure the holdup quantity and location, so these must be inferred from measured radiation fields, primarily gamma and less frequently neutrons. Current methods to quantify holdup, i.e. Generalized Geometry Holdup (GGH), primarily rely on simple source configurations and crude radiation transport models aided by ad hoc correction factors. This project seeks an alternate method of performing measurement-based holdup calculations using a predictive model that employs state-of-the-art radiation transport codes capable of accurately simulating such situations. Inverse and data assimilation methods use the forward transport model to search for a source configuration that best matches the measured data and simultaneously provide an estimate of the level of confidence in the correctness of such configuration. In this work the holdup problem is re-interpreted as an inverse problem that is under-determined, hence may permit multiple solutions. A probabilistic approach is applied to solving the resulting inverse problem. This approach rates possible solutions according to their plausibility given the measurements and initial information. This is accomplished through the use of Bayes’ Theorem that resolves the issue of multiple solutions by giving an estimate of the probability of observing each possible solution. To use
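
    A toy sketch of the Bayesian step described above: rank candidate source configurations by how plausibly each reproduces measured count rates under a Gaussian measurement model. The response matrix and all numbers are invented; in practice the responses come from a transport calculation:

        import numpy as np

        # Detector response (rate per unit source) for 3 candidate holdup
        # locations at 4 detector positions -- invented for illustration.
        R = np.array([[0.90, 0.10, 0.05],
                      [0.40, 0.50, 0.10],
                      [0.10, 0.80, 0.30],
                      [0.05, 0.20, 0.90]])
        measured = np.array([4.5, 2.6, 1.4, 0.6])  # measured rates
        sigma = 0.3                                # measurement uncertainty

        # Candidate configurations: source strength at each location.
        candidates = [np.array([5.0, 0.0, 0.0]),
                      np.array([4.0, 1.0, 0.0]),
                      np.array([2.0, 2.0, 1.0])]

        def log_likelihood(source):
            predicted = R @ source
            return -0.5 * np.sum(((measured - predicted) / sigma) ** 2)

        # Bayes' theorem with a flat prior: posterior weight per candidate.
        logp = np.array([log_likelihood(s) for s in candidates])
        posterior = np.exp(logp - logp.max())
        posterior /= posterior.sum()
        print(posterior)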

  15. Modeling of the near field plume of a Hall thruster

    International Nuclear Information System (INIS)

    Boyd, Iain D.; Yim, John T.

    2004-01-01

    In this study, a detailed numerical model is developed to simulate the xenon plasma near-field plume from a Hall thruster. The model uses a detailed fluid model to describe the electrons, and a particle-based kinetic approach is used to model the heavy xenon ions and atoms. The detailed model is applied to compute the near field plume of a small, 200 W Hall thruster. Results from the detailed model are compared with the standard modeling approach that employs the Boltzmann model. The usefulness of the detailed model is assessed through direct comparisons with a number of different measured data sets. The comparisons illustrate that the detailed model accurately predicts a number of features of the measured data not captured by the simpler Boltzmann approach

  16. Computer usage and national energy consumption: Results from a field-metering study

    Energy Technology Data Exchange (ETDEWEB)

    Desroches, Louis-Benoit [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Fuchs, Heidi [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Greenblatt, Jeffery [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Pratt, Stacy [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Willem, Henry [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Claybaugh, Erin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Beraki, Bereket [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Nagaraju, Mythri [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Price, Sarah [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division; Young, Scott [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States). Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division

    2014-12-01

    The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean of 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged from other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflecting laptops drawing more power in On mode as well as greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses. Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power
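
    As a rough consistency check on these figures (a back-of-the-envelope reading that assumes the national totals are simply unit AEC multiplied by installed stock, which the report does not state explicitly), the implied stocks are $20\,\mathrm{TWh/yr} / 194\,\mathrm{kWh/yr} \approx 1.0\times10^{8}$ desktops and $11\,\mathrm{TWh/yr} / 75\,\mathrm{kWh/yr} \approx 1.5\times10^{8}$ laptops, i.e. roughly one unit per U.S. household in each category.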

  17. Grating-based X-ray Dark-field Computed Tomography of Living Mice.

    Science.gov (United States)

    Velroyen, A; Yaroshenko, A; Hahn, D; Fehringer, A; Tapfer, A; Müller, M; Noël, P B; Pauwels, B; Sasov, A; Yildirim, A Ö; Eickelberg, O; Hellbach, K; Auweter, S D; Meinel, F G; Reiser, M F; Bech, M; Pfeiffer, F

    2015-10-01

    Changes in x-ray attenuating tissue caused by lung disorders like emphysema or fibrosis are subtle and thus only resolved by high-resolution computed tomography (CT). The structural reorganization, however, strongly influences lung function. Dark-field CT (DFCT), based on small-angle scattering of x-rays, reveals such structural changes even at resolutions coarser than the pulmonary network and thus provides access to their anatomical distribution. In this proof-of-concept study we present in vivo x-ray DFCTs of the lungs of a healthy, an emphysematous, and a fibrotic mouse. The tomographies show excellent depiction of the distribution of structural (and thus indirectly functional) changes in lung parenchyma, on single-modality slices in dark field as well as on multimodal fusion images. Therefore, we anticipate numerous applications of DFCT in diagnostic lung imaging. We introduce a scatter-based Hounsfield Unit (sHU) scale to facilitate comparability of scans. In this newly defined sHU scale, the pathophysiological changes caused by emphysema and fibrosis produce a shift towards lower numbers, compared to healthy lung tissue.

  18. Synthetic Computation: Chaos Computing, Logical Stochastic Resonance, and Adaptive Computing

    Science.gov (United States)

    Kia, Behnam; Murali, K.; Jahed Motlagh, Mohammad-Reza; Sinha, Sudeshna; Ditto, William L.

    Nonlinearity and chaos can exhibit numerous behaviors and patterns, and one can select different patterns from this rich library. In this paper we focus on synthetic computing, a field that engineers and synthesizes nonlinear systems to obtain computation. We explain the importance of nonlinearity, and describe how nonlinear systems can be engineered to perform computation. More specifically, we provide an overview of chaos computing, a field that manually programs chaotic systems to build different types of digital functions. We also briefly describe logical stochastic resonance (LSR), and then extend the approach of LSR to realize combinational digital logic systems via suitable concatenation of existing logical stochastic resonance blocks. Finally we demonstrate how a chaotic system can be engineered and mated with different machine learning techniques, such as artificial neural networks, random searching, and genetic algorithms, to design different autonomous systems that can adapt and respond to environmental conditions.
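
    The logical-stochastic-resonance idea can be sketched with a toy bistable element (illustrative only, not the authors' exact system; the input levels, bias, and noise strength below are made-up values):

        import numpy as np

        def lsr_gate(i1, i2, bias=-0.3, D=0.1, dt=1e-3, steps=20000, seed=0):
            """Noisy bistable element dx/dt = x - x**3 + I1 + I2 + bias + noise.
            The well the state favors encodes the logic output."""
            rng = np.random.default_rng(seed)
            I = (0.4 if i1 else -0.4) + (0.4 if i2 else -0.4)  # hypothetical input levels
            x, tail = 0.0, []
            for k in range(steps):
                x += (x - x**3 + I + bias) * dt \
                     + np.sqrt(2 * D * dt) * rng.standard_normal()
                if k > steps // 2:
                    tail.append(x)
            return int(np.mean(tail) > 0.0)  # positive well -> logic 1

        # With a small negative bias the element behaves like AND; flipping
        # the bias to +0.3 shifts the very same element toward OR.
        for a in (0, 1):
            for b in (0, 1):
                print(a, b, "->", lsr_gate(a, b))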

  19. Do Mendeley reader counts reflect the scholarly impact of conference papers? An investigation of Computer Science and Engineering fields

    Energy Technology Data Exchange (ETDEWEB)

    Aduku, K.J.; Thelwall, M.; Kousha, K.

    2016-07-01

    Counts of Mendeley readers may give useful evidence about the impact of research. Although several studies have indicated that there are significant positive correlations between counts of Mendeley readers and citation counts for journal articles, it is not known how the pattern of association may vary between journal articles and conference papers. To fill this gap, Mendeley readership data and Scopus citation counts were extracted for both journal articles and conference papers published in 2011 in four fields for which conferences are important: Computer Science Applications, Computer Software, Building & Construction Engineering, and Industrial & Manufacturing Engineering. Mendeley readership counts were found to correlate moderately with citation counts for both journal articles and conference papers in Computer Science Applications and Computer Software. Nevertheless, the correlations between Mendeley readers and citation counts were much lower for conference papers than for journal articles in Building & Construction Engineering and Industrial & Manufacturing Engineering. Hence, there seem to be disciplinary differences in the usefulness of Mendeley readership counts as impact indicators for conference papers, even between fields for which conferences are important. (Author)
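
    The kind of analysis described above reduces, in essence, to rank correlations between two count variables; a minimal sketch with made-up counts (the study itself used Scopus citations and Mendeley readers for 2011 publications):

        # Spearman correlation is a common choice for skewed count data.
        from scipy.stats import spearmanr

        mendeley_readers = [12, 0, 45, 3, 8, 31, 2, 17]
        scopus_citations = [10, 1, 52, 2, 5, 40, 0, 12]

        rho, p = spearmanr(mendeley_readers, scopus_citations)
        print(f"Spearman rho = {rho:.2f} (p = {p:.3f})")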

  20. High-field superconducting window-frame beam-transport magnets

    International Nuclear Information System (INIS)

    Allinger, J.; Carroll, A.; Danby, G.; DeVito, B.; Jackson, J.; Leonhardt, W.; Prodell, A.; Skarita, J.

    1982-01-01

    The window-frame design for high-field superconducting beam-transport magnets was first applied to two 2-m-long, 4-T modules of an 8° bending magnet which has operated for nine years in the primary proton beam line at the Brookhaven National Laboratory Alternating Gradient Synchrotron (AGS). The design of the two 1.5-m-long, 7.6-cm cold-bore superconducting window-frame magnets described in this paper, intended for the external proton beam transport system at the AGS, incorporated evolutionary changes. These magnets generated a maximum aperture field of 6.8 T with a peak field in the dipole coil of 7.1 T. Measured fields are very accurate and are compared to values calculated using the computer programs LINDA and POISSON. Results of quench-propagation studies demonstrate the excellent thermal stability of the magnets. The magnets quench safely without energy extraction at a maximum current density of J = 130 kA/cm² in the superconductor, corresponding to J = 57.6 kA/cm² overall in the conductor, at B = 6.7 T.

  1. Comprehensive evaluation of attitude and orbit estimation using real earth magnetic field data

    Science.gov (United States)

    Deutschmann, Julie; Bar-Itzhack, Itzhack

    1997-01-01

    A single, augmented extended Kalman filter (EKF) which simultaneously and autonomously estimates spacecraft attitude and orbit was developed and tested with simulated and real magnetometer and rate data. Since the earth's magnetic field is a function of time and position, and since time is accurately known, the differences between the computed and measured magnetic field components, as measured by the magnetometers throughout the entire spacecraft's orbit, are a function of orbit and attitude errors. These differences can be used to estimate the orbit and attitude. The test results of the EKF with magnetometer and gyro data from three NASA satellites are presented and evaluated.
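
    The core of such a filter is the measurement update driven by the innovation, i.e. the difference between measured and computed field components. A minimal sketch follows (a two-state toy with a hypothetical dipole-like measurement model; the flight filter estimates full attitude and orbit states):

        import numpy as np

        def ekf_update(x, P, z, h, R, eps=1e-6):
            """One extended-Kalman-filter measurement update.
            x: state estimate, P: covariance, z: measured field components,
            h: nonlinear measurement model, R: measurement noise covariance."""
            Hx = h(x)
            H = np.zeros((Hx.size, x.size))          # numerical Jacobian of h
            for i in range(x.size):
                dx = np.zeros(x.size)
                dx[i] = eps
                H[:, i] = (h(x + dx) - Hx) / eps
            S = H @ P @ H.T + R
            K = P @ H.T @ np.linalg.inv(S)           # Kalman gain
            x_new = x + K @ (z - Hx)                 # innovation: measured - computed
            P_new = (np.eye(x.size) - K @ H) @ P
            return x_new, P_new

        # Hypothetical dipole-like field model for a toy state (r, theta).
        h = lambda x: np.array([np.cos(x[1]) / x[0]**3, np.sin(x[1]) / x[0]**3])
        x, P = np.array([1.1, 0.5]), np.eye(2) * 0.1
        z = h(np.array([1.0, 0.6])) + 1e-3 * np.random.randn(2)
        x, P = ekf_update(x, P, z, h, np.eye(2) * 1e-6)
        print("updated state:", x)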

  2. A Highly Accurate Approach for Aeroelastic System with Hysteresis Nonlinearity

    Directory of Open Access Journals (Sweden)

    C. C. Cui

    2017-01-01

    We propose an accurate approach, based on the precise integration method, to solve the aeroelastic system of an airfoil with pitch hysteresis. A major procedure for achieving high precision is to design a predictor-corrector algorithm. This algorithm enables accurate determination of the switching points resulting from the hysteresis. Numerical examples show that the results obtained by the presented method are in excellent agreement with exact solutions. In addition, the high accuracy can be maintained as the time step increases within a reasonable range. It is also found that the Runge-Kutta method may sometimes provide quite different and even fallacious results, even though its step length is much smaller than that adopted in the presented method. With such high computational accuracy, the presented method could be applicable to dynamical systems with hysteresis nonlinearities.
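
    The switching-point determination can be pictured as event location within an integration step; the generic bisection sketch below illustrates the idea (it is not the paper's precise-integration predictor-corrector, and the step routine and switching function are toy stand-ins):

        import numpy as np

        def locate_switch(step, g, t, x, dt, tol=1e-12):
            """Find the time in [t, t+dt] at which the switching function g
            crosses zero, by bisection on the integrated state."""
            x_end = step(t, x, dt)
            if g(x) * g(x_end) > 0:
                return None, x_end                  # no switching point here
            lo, hi = 0.0, dt
            while hi - lo > tol:
                mid = 0.5 * (lo + hi)
                if g(x) * g(step(t, x, mid)) <= 0:
                    hi = mid                        # crossing lies in [lo, mid]
                else:
                    lo = mid
            return t + hi, step(t, x, hi)

        # Toy system: harmonic oscillator; "switch" when displacement hits 0.8.
        step = lambda t, x, dt: x + dt * np.array([x[1], -x[0]])  # Euler, for brevity
        g = lambda x: x[0] - 0.8
        t_sw, _ = locate_switch(step, g, 0.0, np.array([0.79, 1.0]), 0.05)
        print("switch located at t =", t_sw)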

  3. A computational methodology for formulating gasoline surrogate fuels with accurate physical and chemical kinetic properties

    KAUST Repository

    Ahmed, Ahfaz; Goteng, Gokop; Shankar, Vijai; Al-Qurashi, Khalid; Roberts, William L.; Sarathy, Mani

    2015-01-01

    Surrogate fuels with simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating

  4. A Backward Pyramid Oriented Optical Flow Field Computing Method for Aerial Image

    Directory of Open Access Journals (Sweden)

    LI Jiatian

    2016-09-01

    The optical flow field of aerial images is the foundation for detecting moving objects at low altitude and for obtaining change information. In general, an image pyramid structure is embedded in the numerical procedure in order to enhance global convergence. More often than not, however, the pyramid is constructed progressively in a bottom-up fashion that ignores the geometry of the imaging process. In particular, when ground objects move, this can cause the optical flow to be missed, or to become so small that it can hardly sustain subsequent modeling and analysis. A backward pyramid structure built on the foundation of a standard top-level image is therefore proposed. First, the downsampling factor of the top-level image is calculated quantitatively through central projection, so that the optical flow in the top-level image represents the shift threshold of the chosen ground target. Second, combining the top-level image with the original, the downsampling factors of the middle layers are determined in a constant-proportion way. Finally, the middle-layer images are obtained by Gaussian smoothing and image interpolation, and the pyramid is thereby formed. Comparative experiments and analysis illustrate that the backward pyramid computes the optical flow field of aerial images accurately and has advantages in handling small ground displacements.
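
    The construction can be sketched as follows (a simplified illustration: the top-level factor here is a made-up constant standing in for the value derived from central projection, and the constant-ratio spacing mimics the paper's middle-layer rule):

        import numpy as np
        from scipy.ndimage import gaussian_filter, zoom

        def backward_pyramid(image, top_factor=8.0, levels=3):
            """Build layers from the top-level image down to the original,
            each by Gaussian smoothing followed by resampling."""
            layers = []
            for k in range(levels, 0, -1):
                factor = top_factor ** (k / levels)     # constant-ratio spacing
                smoothed = gaussian_filter(image, sigma=0.5 * factor)
                layers.append(zoom(smoothed, 1.0 / factor, order=1))
            layers.append(image)                        # full resolution last
            return layers

        img = np.random.rand(256, 256)
        for layer in backward_pyramid(img):
            print(layer.shape)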

  5. TU-F-BRE-05: Experimental Determination of K Factor in Small Field Dosimetry

    International Nuclear Information System (INIS)

    Das, I; Akino, Y; Francescon, P

    2014-01-01

    Purpose: Small-field dosimetry is challenging due to charged-particle disequilibrium, source occlusion and, more importantly, the finite size of detectors. The IAEA/AAPM has published an approach to convert detector readings to dose by means of a k factor. Manufacturers have been trying to provide various types of micro-detectors that can be used in small fields. However, k factors depend on detector perturbations and are derived using Monte Carlo simulation. PTW has introduced a microDiamond for small-field dosimetry. An experimental approach is presented to derive the k factor for this detector. Methods: The PTW microDiamond is a small-volume detector with a 1.1 mm radius and a 1.0 micron thick synthetic diamond. Output factors were measured from 1×1 cm² to 12×12 cm² on a Varian machine at various depths using various micro-detectors with published k factors. Dose is calculated as reading × k. Assuming the k factor is accurate, the output factor should be identical for every micro-detector. Hence published k values (Francescon et al Med Phys 35, 504-513, 2008) were used to convert readings, and output factors were then computed. Based on the converged curve from the other detectors, the k factor for the microDiamond was computed versus field size. Results: Traditional output factors, taken as ratios of readings normalized to 10×10 cm², differ significantly among micro-detectors for fields smaller than 3×3 cm², which are now being used extensively. When readings are converted to dose, the output factor is independent of the detector. Based on this method, the k factor for the microDiamond was estimated to be nearly constant, 0.993±0.007, over the range of field sizes. Conclusion: Our method provides a unique opportunity to determine the k factor for any unknown detector. Although the k factor depends on machine type due to the focal spot, for fields ≥1×1 cm² this method provides an accurate evaluation of the k factor. Additionally, the microDiamond can be used with the assumption that its k factor is nearly unity.
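
    The conversion at the heart of the method is simple; a sketch with made-up numbers (published k values, e.g. from Francescon et al., would replace the hypothetical k below):

        def output_factor(reading_clin, k_clin, reading_ref, k_ref=1.0):
            """OF = (M_clin * k_clin) / (M_ref * k_ref), 10x10 cm2 reference."""
            return (reading_clin * k_clin) / (reading_ref * k_ref)

        m_ref = 1.000                # reading in the 10x10 cm2 reference field
        m_1x1 = 0.652                # reading in a 1x1 cm2 field (hypothetical)
        print("raw reading ratio:", m_1x1 / m_ref)
        print("dose-based OF:    ", output_factor(m_1x1, k_clin=1.028, reading_ref=m_ref))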

  6. Cone beam computed tomography: An accurate imaging technique in comparison with orthogonal portal imaging in intensity-modulated radiotherapy for prostate cancer

    Directory of Open Access Journals (Sweden)

    Om Prakash Gurjar

    2016-03-01

    Purpose: Various factors cause geometric uncertainties during prostate radiotherapy, including interfractional and intrafractional patient motion, organ motion, and daily setup errors. These may lead to increased normal-tissue complications when a high dose to the prostate is administered. More accurate treatment delivery is possible with daily imaging and localization of the prostate. This study aims to measure the shift of the prostate by using kilovoltage (kV) cone beam computed tomography (CBCT) after position verification by kV orthogonal portal imaging (OPI). Methods: Position verification in 10 patients with prostate cancer was performed by using OPI followed by CBCT before treatment delivery, in 25 sessions per patient. In each session, OPI was performed by using an on-board imaging (OBI) system and pelvic bone-to-pelvic bone matching was performed. After applying the shift noted by using OPI, CBCT was performed by using the OBI system and prostate-to-prostate matching was performed. The isocenter shifts along all three translational directions in both techniques were combined into a three-dimensional (3-D) iso-displacement vector (IDV). Results: The mean (SD) IDV (in centimeters) calculated over the 250 imaging sessions was 0.931 (0.598), median 0.825, for OPI and 0.515 (0.336), median 0.43, for CBCT; the p-value was less than 0.0001, an extremely statistically significant difference. Conclusion: Even after bone-to-bone matching by using OPI, a significant shift in the prostate was observed on CBCT. This study concludes that imaging with CBCT provides more accurate prostate localization than the OPI technique. Hence, CBCT should be chosen as the preferred imaging technique.
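
    The 3-D iso-displacement vector combines the translational shifts in the usual Euclidean way; assuming the standard definition, $\mathrm{IDV} = \sqrt{\Delta x^{2} + \Delta y^{2} + \Delta z^{2}}$, where $\Delta x$, $\Delta y$ and $\Delta z$ are the isocenter shifts along the three translational directions.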

  7. Accurate position estimation methods based on electrical impedance tomography measurements

    Science.gov (United States)

    Vergara, Samuel; Sbarbaro, Daniel; Johansen, T. A.

    2017-08-01

    Electrical impedance tomography (EIT) is a technology that estimates the electrical properties of a body or a cross section. Its main advantages are its non-invasiveness, low cost and radiation-free operation. The estimation of the conductivity field leads to low-resolution images compared with other technologies, and to high computational cost. However, in many applications the target information has a low intrinsic dimensionality within the conductivity field. Estimating this low-dimensional information is addressed in this work, which proposes optimization-based and data-driven approaches for the task. The accuracy of the results obtained with these approaches depends on modelling and experimental conditions. Optimization approaches are sensitive to model discretization, the type of cost function, and the search algorithm. Data-driven methods are sensitive to the assumed model structure and the data set used for parameter estimation. The system configuration and experimental conditions, such as the number of electrodes and the signal-to-noise ratio (SNR), also have an impact on the results. In order to illustrate the effects of all these factors, the position estimation of a circular anomaly is addressed. Optimization methods based on weighted-error cost functions and derivative-free optimization algorithms provided the best results. Data-driven approaches based on linear models provided, in this case, good estimates, but the use of nonlinear models enhanced the estimation accuracy. The results obtained by optimization-based algorithms were less sensitive to experimental conditions, such as the number of electrodes and SNR, than the data-driven approaches. Position estimation mean squared errors for simulation and experimental conditions were more than twice as large for the optimization-based approaches as for the data-driven ones. The experimental position estimation mean squared error of the data-driven models using a 16-electrode setup was less
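
    The optimization-based route can be sketched as follows (a toy: the EIT forward model is replaced by a made-up smooth electrode response, purely to illustrate the weighted-error cost and derivative-free search described above):

        import numpy as np
        from scipy.optimize import minimize

        angles = np.linspace(0, 2 * np.pi, 16, endpoint=False)  # 16-electrode ring
        elec = np.c_[np.cos(angles), np.sin(angles)]

        def response(pos):                    # hypothetical anomaly signature
            d = np.linalg.norm(elec - pos, axis=1)
            return 1.0 / (0.1 + d**2)

        true_pos = np.array([0.35, -0.2])
        meas = response(true_pos) + 0.01 * np.random.randn(16)  # noisy "data"
        w = 1.0 / 0.01**2                                       # inverse-variance weight

        cost = lambda p: w * np.sum((response(p) - meas)**2)
        fit = minimize(cost, x0=np.zeros(2), method="Nelder-Mead")  # derivative-free
        print("estimated position:", fit.x)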

  8. Exploiting Locality in Quantum Computation for Quantum Chemistry.

    Science.gov (United States)

    McClean, Jarrod R; Babbush, Ryan; Love, Peter J; Aspuru-Guzik, Alán

    2014-12-18

    Accurate prediction of chemical and material properties from first-principles quantum chemistry is a challenging task on traditional computers. Recent developments in quantum computation offer a route toward highly accurate solutions with polynomial cost; however, this solution still carries a large overhead. In this Perspective, we aim to bring together known results about the locality of physical interactions from quantum chemistry with ideas from quantum computation. We show that the utilization of spatial locality combined with the Bravyi-Kitaev transformation offers an improvement in the scaling of known quantum algorithms for quantum chemistry and provides numerical examples to help illustrate this point. We combine these developments to improve the outlook for the future of quantum chemistry on quantum computers.
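
    The encoding comparison at the center of the argument can be reproduced in a few lines, assuming the OpenFermion package is available (an illustration, not the authors' code):

        # Map a fermionic hopping term to qubit operators under both encodings
        # and compare the locality (Pauli-string weight) of the results.
        from openfermion import FermionOperator
        from openfermion.transforms import jordan_wigner, bravyi_kitaev

        hopping = FermionOperator("3^ 0") + FermionOperator("0^ 3")  # a3^dag a0 + h.c.
        print("Jordan-Wigner:\n", jordan_wigner(hopping))
        print("Bravyi-Kitaev:\n", bravyi_kitaev(hopping))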

  9. 40 CFR 194.23 - Models and computer codes.

    Science.gov (United States)

    2010-07-01

    40 CFR Protection of Environment, Vol. 24 (revised 2010-07-01), § 194.23 Models and computer codes. ... General Requirements § 194.23 Models and computer codes. (a) Any compliance application shall include: (1) ... obtain stable solutions; (iv) Computer models accurately implement the numerical models; i.e., computer ...

  10. Fast dose planning Monte Carlo simulations in inhomogeneous phantoms submerged in uniform, static magnetic fields

    International Nuclear Information System (INIS)

    Yanez, R.; Dempsey, J. F.

    2007-01-01

    We present studies in support of the development of a magnetic resonance imaging (MRI) guided intensity modulated radiation therapy (IMRT) device for the treatment of cancer patients. Fast and accurate computation of the absorbed ionizing radiation dose delivered in the presence of the MRI magnetic field is required for clinical implementation. The fast Monte Carlo simulation code DPM, optimized for radiotherapy treatment planning, is modified to simulate absorbed doses in uniform, static magnetic fields, and benchmarked against PENELOPE. Simulations of dose deposition in inhomogeneous phantoms, in which a low-density material is sandwiched in water, show that a lower MRI field strength (0.3 T) is preferable in order to avoid dose build-up near material boundaries. (authors)
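
    A quick gyroradius estimate makes the field-strength effect intuitive (a rough illustration, not part of the paper's Monte Carlo machinery): secondary electrons curl more tightly at higher field, which drives the dose build-up near low-density boundaries.

        import numpy as np

        # Relativistic gyroradius r = p / (e B) for a 1 MeV electron (illustrative).
        m_e, c, e = 9.109e-31, 2.998e8, 1.602e-19
        E_K = 1.0e6 * e                               # kinetic energy in joules
        E_tot = E_K + m_e * c**2
        p = np.sqrt(E_tot**2 - (m_e * c**2)**2) / c   # relativistic momentum

        for B in (0.3, 1.5):                          # tesla
            print(f"B = {B:3.1f} T  ->  gyroradius ~ {100 * p / (e * B):.1f} cm")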

  11. Improvement of portable computed tomography system for on-field applications

    Science.gov (United States)

    Sukrod, K.; Khoonkamjorn, P.; Tippayakul, C.

    2015-05-01

    In 2010, Thailand Institute of Nuclear Technology (TINT) received a portable Computed Tomography (CT) system from the IAEA as part of the Regional Cooperative Agreement (RCA) program. This portable CT system has since been used as the prototype for the development of a portable CT system intended for industrial applications. This paper discusses improvements made in the attempt to utilize the CT system for on-field applications. The system is foreseen to visualize the amount of agarwood in a live tree trunk. Experiments adopting Am-241 as the radiation source were conducted. The Am-241 source was selected since it emits low-energy gamma rays, which should better distinguish small density differences between wood types. Test specimens made of timbers with different densities were prepared and used in the experiments. Cross-sectional views of the test specimens were obtained from the CT system using different scanning parameters. It is found from the experiments that the results are promising, as the images clearly differentiate wood types according to their densities. The optimum scanning parameters were also determined from the experiments. The results from this work encourage the research team to advance into the next phase, which is to experiment with a real tree in the field.

  12. Improvement of Portable Computed Tomography System for On-field Applications

    International Nuclear Information System (INIS)

    Sukrod, K.; Khoonkamjorn, P.; Tippayakul, C.

    2014-01-01

    In 2010, Thailand Institute of Nuclear Technology (TINT) received a portable Computed Tomography (CT) system from the IAEA as part of the Regional Cooperative Agreement (RCA) program. This portable CT system has since been used as the prototype for the development of a portable CT system intended for industrial applications. This paper discusses improvements made in the attempt to utilize the CT system for on-field applications. The system is foreseen to visualize the amount of agarwood in a live tree trunk. Experiments adopting Am-241 as the radiation source were conducted. The Am-241 source was selected since it emits low-energy gamma rays, which should better distinguish small density differences between wood types. Test specimens made of timbers with different densities were prepared and used in the experiments. Cross-sectional views of the test specimens were obtained from the CT system using different scanning parameters. It is found from the experiments that the results are promising, as the images clearly differentiate wood types according to their densities. The optimum scanning parameters were also determined from the experiments. The results from this work encourage the research team to advance into the next phase, which is to experiment with a real tree in the field.

  13. Improvement of portable computed tomography system for on-field applications

    International Nuclear Information System (INIS)

    Sukrod, K; Khoonkamjorn, P; Tippayakul, C

    2015-01-01

    In 2010, Thailand Institute of Nuclear Technology (TINT) received a portable Computed Tomography (CT) system from the IAEA as part of the Regional Cooperative Agreement (RCA) program. This portable CT system has since been used as the prototype for the development of a portable CT system intended for industrial applications. This paper discusses improvements made in the attempt to utilize the CT system for on-field applications. The system is foreseen to visualize the amount of agarwood in a live tree trunk. Experiments adopting Am-241 as the radiation source were conducted. The Am-241 source was selected since it emits low-energy gamma rays, which should better distinguish small density differences between wood types. Test specimens made of timbers with different densities were prepared and used in the experiments. Cross-sectional views of the test specimens were obtained from the CT system using different scanning parameters. It is found from the experiments that the results are promising, as the images clearly differentiate wood types according to their densities. The optimum scanning parameters were also determined from the experiments. The results from this work encourage the research team to advance into the next phase, which is to experiment with a real tree in the field. (paper)

  14. FILMPAR: A parallel algorithm designed for the efficient and accurate computation of thin film flow on functional surfaces containing micro-structure

    Science.gov (United States)

    Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.

    2009-12-01

    , industrial and physical applications. However, despite recent modelling advances, the accurate numerical solution of the equations governing such problems is still at a relatively early stage. Indeed, recent studies employing a simplifying long-wave approximation have shown that highly efficient numerical methods are necessary to solve the resulting lubrication equations in order to achieve the level of grid resolution required to accurately capture the effects of micro- and nano-scale topographical features. Solution method: A portable parallel multigrid algorithm has been developed for the above purpose, for the particular case of flow over submerged topographical features. Within the multigrid framework adopted, a W-cycle is used to accelerate convergence in respect of the time dependent nature of the problem, with relaxation sweeps performed using a fixed number of pre- and post-Red-Black Gauss-Seidel Newton iterations. In addition, the algorithm incorporates automatic adaptive time-stepping to avoid the computational expense associated with repeated time-step failure. Running time: 1.31 minutes using 128 processors on BlueGene/P with a problem size of over 16.7 million mesh points.

  15. A fast point-cloud computing method based on spatial symmetry of Fresnel field

    Science.gov (United States)

    Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui

    2017-10-01

    Computer-generated holograms (CGH) face a great challenge in real-time holographic video display systems because of the high spatial-bandwidth product (SBP) they require. This paper is based on the point-cloud method and takes advantage of two facts: Fresnel diffraction is reversible along the propagation direction, and the fringe pattern of a point source, known as a Gabor zone plate, has spatial symmetry, so it can be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored; the Fresnel diffraction fringe pattern at a dummy plane is then obtained; and finally the field is Fresnel-propagated from the dummy plane to the hologram plane. Simulation experiments and an optical experiment based on Liquid Crystal on Silicon (LCOS) were set up to demonstrate the validity of the proposed method: while the quality of the 3D reconstruction is preserved, the proposed method shortens the computation time and improves computational efficiency.
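
    The symmetry being exploited can be sketched directly (a simplified illustration of the idea, not the paper's N-LUT implementation): the fringe of a point source depends only on $x^{2}+y^{2}$, so one quadrant determines the whole pattern.

        import numpy as np

        wl, z, dx, N = 532e-9, 0.2, 8e-6, 512      # wavelength, distance, pitch, grid
        x = np.arange(N // 2 + 1) * dx             # one quadrant of coordinates
        X, Y = np.meshgrid(x, x)
        quad = np.exp(1j * np.pi * (X**2 + Y**2) / (wl * z))  # Gabor zone plate quadrant

        # Mirror the quadrant to recover the full point-source fringe pattern.
        top = np.hstack([quad[:, :0:-1], quad])
        full = np.vstack([top[:0:-1, :], top])
        print(full.shape)                          # full symmetric fringe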

  16. Field-flood requirements for emission computed tomography with an Anger camera

    International Nuclear Information System (INIS)

    Rogers, W.L.; Clinthorne, N.H.; Harkness, B.A.; Koral, K.F.; Keyes, J.W. Jr.

    1982-01-01

    Emission computed tomography with a rotating camera places stringent requirements on camera uniformity and the stability of camera response. In terms of clinical tomographic imaging, we have studied the statistical accuracy required for camera flood correction, the requirements for flood accuracy, the utility and validity of flood and data image smoothing to reduce random noise effects, and the magnitude and effect of camera variations as a function of angular position, energy window, and tuning. Uniformity of the corrected flood response must be held to better than 1% to eliminate image artifacts that are apparent in a million-count image of a liver slice. This requires calibration with an accurate, well-mixed flood source. Both random fluctuations and variations in camera response with rotation must be kept below 1%. To meet the statistical limit, one requires at least 30 million counts for the flood-correction image. Smoothing the flood image alone introduces unacceptable image artifacts. Smoothing both the flood image and the data, however, appears to be a good approach toward reducing noise effects. Careful camera tuning and magnetic shield design provide camera stability suitable for present clinical applications.
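
    The 30-million-count requirement follows from Poisson counting statistics; assuming, for illustration, a 64×64 correction matrix, $30\times10^{6}/64^{2} \approx 7.3\times10^{3}$ counts land in each pixel, giving a relative fluctuation of $1/\sqrt{7.3\times10^{3}} \approx 1.2\%$, comparable to the 1% uniformity limit discussed above.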

  17. Enhanced Particle Swarm Optimization Algorithm: Efficient Training of ReaxFF Reactive Force Fields.

    Science.gov (United States)

    Furman, David; Carmeli, Benny; Zeiri, Yehuda; Kosloff, Ronnie

    2018-05-04

    Particle swarm optimization is a powerful metaheuristic population-based global optimization algorithm. However, when applied to non-separable objective functions, its performance on multimodal landscapes is significantly degraded. Here we show that a significant improvement in search quality and efficiency on multimodal functions can be achieved by enhancing the basic rotation-invariant particle swarm optimization algorithm with isotropic Gaussian mutation operators. The new algorithm demonstrates superior performance across several nonlinear, multimodal benchmark functions compared to the rotation-invariant Particle Swarm Optimization (PSO) algorithm and the well-established simulated annealing and sequential one-parameter parabolic interpolation methods. A search for the optimal set of parameters for the dispersion interaction model in the ReaxFF-lg reactive force field is carried out with respect to accurate DFT-TS calculations. The resulting optimized force field accurately describes the equations of state of several high-energy molecular crystals where such interactions are of crucial importance. The improved algorithm also outperforms a Genetic Algorithm optimization method in the optimization of the ReaxFF-lg correction model parameters. The computational framework is implemented in a standalone C++ code that allows straightforward development of ReaxFF reactive force fields.
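
    The enhancement itself is easy to state in code; the sketch below adds an isotropic Gaussian mutation to a plain PSO loop on a standard multimodal benchmark (illustrative hyperparameters, not the paper's tuned rotation-invariant variant or its ReaxFF objective):

        import numpy as np

        def rastrigin(x):
            return 10 * x.size + np.sum(x**2 - 10 * np.cos(2 * np.pi * x))

        rng = np.random.default_rng(1)
        n_part, dim, iters = 30, 5, 300
        pos = rng.uniform(-5, 5, (n_part, dim))
        vel = np.zeros_like(pos)
        pbest = pos.copy()
        pbest_f = np.array([rastrigin(p) for p in pos])
        gbest = pbest[np.argmin(pbest_f)].copy()

        for _ in range(iters):
            r1, r2 = rng.random((2, n_part, dim))
            vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (gbest - pos)
            pos += vel
            # Isotropic Gaussian mutation: occasionally kick a particle so it
            # can escape local minima of the multimodal landscape.
            mutate = rng.random(n_part) < 0.1
            pos[mutate] += rng.normal(0.0, 0.5, (int(mutate.sum()), dim))
            f = np.array([rastrigin(p) for p in pos])
            better = f < pbest_f
            pbest[better], pbest_f[better] = pos[better], f[better]
            gbest = pbest[np.argmin(pbest_f)].copy()

        print("best value found:", pbest_f.min())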

  18. Computational Fluid Dynamics of Whole-Body Aircraft

    Science.gov (United States)

    Agarwal, Ramesh

    1999-01-01

    The current state of the art in computational aerodynamics for whole-body aircraft flowfield simulations is described. Recent advances in geometry modeling, surface and volume grid generation, and flow simulation algorithms have led to accurate flowfield predictions for increasingly complex and realistic configurations. As a result, computational aerodynamics has emerged as a crucial enabling technology for the design and development of flight vehicles. Examples illustrating the current capability for the prediction of transport and fighter aircraft flowfields are presented. Unfortunately, accurate modeling of turbulence remains a major difficulty in the analysis of viscosity-dominated flows. In the future, inverse design methods, multidisciplinary design optimization methods, artificial intelligence technology, and massively parallel computer technology will be incorporated into computational aerodynamics, opening up greater opportunities for improved product design at substantially reduced costs.

  19. Computer aided design of solenoid magnets

    Energy Technology Data Exchange (ETDEWEB)

    DeOlivares, J.M.

    1978-06-01

    Computer programs utilizing Legendre functions and elliptic integral functions have been written to aid in the design of solenoid magnets. The field inside an axisymmetric magnet can be expanded in a converging power series of Legendre functions. The Legendre function approach is very useful for designing solenoid magnets with a high degree of field uniformity. This approach has been programmed on the LBL CDC 7600 computer so that one can design an axisymmetric magnet which meets any desired field structure. Two examples of computer-designed solenoids are presented. A computer program utilizing elliptic integral functions was also written for the LBL CDC 7600 computer. This method was used in a computer program to verify the results obtained from the Legendre approach and for field calculations within the conductor. The elliptic integral field calculations within the conductor showed that thin solenoids produce field peaking at the ends of the magnet. Computer data are generated for various magnet geometries and compared with theoretical predictions. Computer results and theoretical predictions both show that field peaking is reduced for longer coils and increased for thinner coils, and that field peaking is a logarithmic function of length, thickness and radius.
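
    The elliptic-integral method can be sketched for a single current loop using the standard textbook formula, which a solenoid code would sum over the windings (a consistency check against the on-axis closed form is included):

        import numpy as np
        from scipy.special import ellipk, ellipe

        MU0 = 4e-7 * np.pi

        def loop_Bz(I, a, rho, z):
            """Axial field of a circular loop (radius a, current I) at (rho, z),
            using complete elliptic integrals K(m), E(m) with parameter m."""
            m = 4 * a * rho / ((a + rho)**2 + z**2)
            pref = MU0 * I / (2 * np.pi * np.sqrt((a + rho)**2 + z**2))
            return pref * (ellipk(m) + (a**2 - rho**2 - z**2)
                           / ((a - rho)**2 + z**2) * ellipe(m))

        # Check against the on-axis form Bz = mu0 I a^2 / (2 (a^2+z^2)^(3/2)).
        I, a, z = 100.0, 0.1, 0.05
        print(loop_Bz(I, a, 0.0, z))
        print(MU0 * I * a**2 / (2 * (a**2 + z**2)**1.5))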

  20. Application research of computational mass-transfer differential equation in MBR concentration field simulation.

    Science.gov (United States)

    Li, Chunqing; Tie, Xiaobo; Liang, Kai; Ji, Chanjuan

    2016-01-01

    After conducting intensive research on the distribution of the fluid velocity and the biochemical reactions in the membrane bioreactor (MBR), this paper introduces the use of a mass-transfer differential equation to simulate the distribution of the chemical oxygen demand (COD) concentration in the MBR membrane pool. The solution proceeds as follows: first, use computational fluid dynamics to establish a flow control equation model of the fluid in the MBR membrane pool; second, calculate this model by direct numerical simulation to obtain the velocity field of the fluid in the membrane pool; third, combine the velocity field data to establish a mass-transfer differential equation model for the concentration field in the MBR membrane pool, and solve the model by Seidel iteration; finally, substitute real plant data into the velocity and concentration field models to calculate simulation results, and use the visualization software Tecplot to display them. Analysis of the resulting contour map of the COD concentration distribution shows that the simulation conforms to the distribution of the COD concentration in the real membrane pool and that the mass-transfer phenomenon is affected by the velocity field of the fluid in the membrane pool. The simulation results of this paper provide a useful reference for design optimization of real MBR systems.
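
    The final solution step can be sketched as Seidel sweeps over a steady advection-diffusion discretization (made-up uniform velocity field and coefficients, standing in for the CFD-derived ones):

        import numpy as np

        # Solve u dc/dx + v dc/dy = D * laplacian(c) for concentration c
        # with upwind advection and central diffusion, by Seidel sweeps.
        N, h, D = 40, 1.0 / 40, 1e-3
        u = v = 0.05 * np.ones((N, N))        # hypothetical uniform velocity field
        c = np.zeros((N, N))
        c[:, 0] = 1.0                         # inlet concentration boundary

        for sweep in range(500):              # convergence test omitted for brevity
            for i in range(1, N - 1):
                for j in range(1, N - 1):
                    diff = D / h**2 * (c[i+1, j] + c[i-1, j] + c[i, j+1] + c[i, j-1])
                    adv = (u[i, j] * c[i, j-1] + v[i, j] * c[i-1, j]) / h  # upwind
                    c[i, j] = (diff + adv) / (4 * D / h**2 + (u[i, j] + v[i, j]) / h)

        print("centerline concentration:", c[N // 2, :: N // 8].round(3))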

  1. On canonical cylinder sections for accurate determination of contact angle in microgravity

    Science.gov (United States)

    Concus, Paul; Finn, Robert; Zabihi, Farhad

    1992-01-01

    Large shifts of liquid arising from small changes in certain container shapes in zero gravity can be used as a basis for accurately determining contact angle. Canonical geometries for this purpose, recently developed mathematically, are investigated here computationally. It is found that the desired nearly-discontinuous behavior can be obtained and that the shifts of liquid have sufficient volume to be readily observed.

  2. Organic Computing

    CERN Document Server

    Würtz, Rolf P

    2008-01-01

    Organic Computing is a research field emerging around the conviction that problems of organization in complex systems in computer science, telecommunications, neurobiology, molecular biology, ethology, and possibly even sociology can be tackled scientifically in a unified way. From the computer science point of view, the apparent ease in which living systems solve computationally difficult problems makes it inevitable to adopt strategies observed in nature for creating information processing machinery. In this book, the major ideas behind Organic Computing are delineated, together with a sparse sample of computational projects undertaken in this new field. Biological metaphors include evolution, neural networks, gene-regulatory networks, networks of brain modules, hormone system, insect swarms, and ant colonies. Applications are as diverse as system design, optimization, artificial growth, task allocation, clustering, routing, face recognition, and sign language understanding.

  3. Computational methods for describing the laser-induced mechanical response of tissue

    Energy Technology Data Exchange (ETDEWEB)

    Trucano, T.; McGlaun, J.M.; Farnsworth, A.

    1994-02-01

    Detailed computational modeling of laser surgery requires treatment of the photoablation of human tissue by high intensity pulses of laser light and the subsequent thermomechanical response of the tissue. Three distinct physical regimes must be considered to accomplish this: (1) the immediate absorption of the laser pulse by the tissue and following tissue ablation, which is dependent upon tissue light absorption characteristics; (2) the near field thermal and mechanical response of the tissue to this laser pulse, and (3) the potential far field (and longer time) mechanical response of witness tissue. Both (2) and (3) are dependent upon accurate constitutive descriptions of the tissue. We will briefly review tissue absorptivity and mechanical behavior, with an emphasis on dynamic loads characteristic of the photoablation process. In this paper our focus will center on the requirements of numerical modeling and the uncertainties of mechanical tissue behavior under photoablation. We will also discuss potential contributions that computational simulations can make in the design of surgical protocols which utilize lasers, for example, in assessing the potential for collateral mechanical damage by laser pulses.

  4. Discrete Calderon’s projections on parallelepipeds and their application to computing exterior magnetic fields for FRC plasmas

    International Nuclear Information System (INIS)

    Kansa, E.; Shumlak, U.; Tsynkov, S.

    2013-01-01

    Confining dense plasma in a field reversed configuration (FRC) is considered a promising approach to fusion. Numerical simulation of this process requires setting artificial boundary conditions (ABCs) for the magnetic field because whereas the plasma itself occupies a bounded region (within the FRC coils), the field extends from this region all the way to infinity. If the plasma is modeled using single fluid magnetohydrodynamics (MHD), then the exterior magnetic field can be considered quasi-static. This field has a scalar potential governed by the Laplace equation. The quasi-static ABC for the magnetic field is obtained using the method of difference potentials, in the form of a discrete Calderon boundary equation with projection on the artificial boundary shaped as a parallelepiped. The Calderon projection itself is computed by convolution with the discrete fundamental solution on the three-dimensional Cartesian grid.

  5. Characterization of the Elusive Conformers of Glycine from State-of-the-Art Structural, Thermodynamic, and Spectroscopic Computations: Theory Complements Experiment.

    Science.gov (United States)

    Barone, Vincenzo; Biczysko, Malgorzata; Bloino, Julien; Puzzarini, Cristina

    2013-03-12

    A state-of-the-art computational strategy for the evaluation of accurate molecular structures as well as thermodynamic and spectroscopic properties, along with the direct simulation of infrared (IR) and Raman spectra, is established, validated (on the basis of the experimental data available for the Ip glycine conformer), and then used to provide a reliable and accurate characterization of the elusive IVn/gtt and IIIp/tct glycine conformers. The integrated theoretical model proposed is based on accurate post-Hartree-Fock computations (involving composite schemes) of energies, structures, properties, and harmonic force fields, coupled to DFT corrections for the proper inclusion of vibrational effects at an anharmonic level (as provided by a general second-order perturbative approach). It is shown that the approach presented here allows the evaluation of structural, thermodynamic, and spectroscopic properties with an overall accuracy of about, or better than, 0.001 Å, 20 MHz, 1 kJ·mol⁻¹, and 10 cm⁻¹ for bond distances, rotational constants, conformational enthalpies, and vibrational frequencies, respectively. The high accuracy of the computational results allows one to support and complement experimental studies, thus providing (i) an unequivocal identification of several conformers concomitantly present in the experimental mixture and (ii) data not available or difficult to derive experimentally.

  6. Fast harmonic field mapper

    International Nuclear Information System (INIS)

    Au, R.; Fowler, M.; Hanawa, H.; Riedel, J.; Qua, Z.G.

    1984-01-01

    In early 1983 it was decided to mount coils on arms separated by 120 degrees and buck them out so that the third-harmonic dφ/dt component would be cancelled, allowing the first and second field harmonics to be measured very accurately. The original intention was to do as others had done, namely, use fast ADCs to read the voltages and computer-process the result to get the Fourier components. However, because of the 100-to-1 dynamic range of the fast ADCs and the likelihood that noise would be a problem, the authors decided to do things differently. Using a fast Fourier transform analyzer was considered, but this instrument is very expensive, so a completely electronic analog approach was chosen instead: active bandpass filters are used to render the harmonic components
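
    The choice of 120° is what cancels the third harmonic: subtracting the two bucked coil signals leaves an $n$-th harmonic contribution proportional to $\cos(n\theta) - \cos\!\big(n(\theta + 2\pi/3)\big)$, which vanishes identically for $n = 3$ (and any multiple of 3) because $3 \cdot 120^{\circ} = 360^{\circ}$, while the first and second harmonics survive for measurement.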

  7. EIAGRID: In-field optimization of seismic data acquisition by real-time subsurface imaging using a remote GRID computing environment.

    Science.gov (United States)

    Heilmann, B. Z.; Vallenilla Ferrara, A. M.

    2009-04-01

    The constant growth of contaminated sites, the unsustainable use of natural resources, and, last but not least, the hydrological risk related to extreme meteorological events and increased climate variability are major environmental issues of today. Finding solutions for these complex problems requires an integrated cross-disciplinary approach, providing a unified basis for environmental science and engineering. In computer science, grid computing is emerging worldwide as a formidable tool allowing distributed computation and data management with administratively distant resources. Utilizing these modern High Performance Computing (HPC) technologies, the GRIDA3 project bundles several applications from different fields of geoscience aiming to support decision making for reasonable and responsible land use and resource management. In this abstract we present a geophysical application called EIAGRID that uses grid computing facilities to perform real-time subsurface imaging by on-the-fly processing of seismic field data and fast optimization of the processing workflow. Even though seismic reflection profiling has a broad application range, spanning from shallow targets a few meters deep to targets at a depth of several kilometers, it is used primarily by the hydrocarbon industry and hardly ever for environmental purposes. The complexity of data acquisition and processing poses severe problems for environmental and geotechnical engineering: professional seismic processing software is expensive to buy and demands considerable experience from the user. In-field processing equipment needed for real-time data Quality Control (QC) and immediate optimization of the acquisition parameters is often not available for this kind of study. As a result, the data quality will be suboptimal. In the worst case, a crucial parameter such as receiver spacing, maximum offset, or recording time turns out later to be inappropriate, and the complete acquisition campaign has to be repeated. The

  8. Battery Management Systems: Accurate State-of-Charge Indication for Battery-Powered Applications

    NARCIS (Netherlands)

    Pop, V.; Bergveld, H.J.; Danilov, D.; Regtien, Paulus P.L.; Notten, P.H.L.

    2008-01-01

    Battery Management Systems – Universal State-of-Charge indication for portable applications describes the field of State-of-Charge (SoC) indication for rechargeable batteries. With the emergence of battery-powered devices with an increasing number of power-hungry features, accurately estimating the

  9. An approximate fractional Gaussian noise model with ${\mathcal O}(n)$ computational cost

    KAUST Repository

    Sørbye, Sigrunn H.

    2017-09-18

    Fractional Gaussian noise (fGn) is a stationary time series model with long memory properties applied in various fields like econometrics, hydrology and climatology. The computational cost in fitting an fGn model of length $n$ using a likelihood-based approach is ${\mathcal O}(n^{2})$, exploiting the Toeplitz structure of the covariance matrix. In most realistic cases, we do not observe the fGn process directly but only through indirect Gaussian observations, so the Toeplitz structure is easily lost and the computational cost increases to ${\mathcal O}(n^{3})$. This paper presents an approximate fGn model of ${\mathcal O}(n)$ computational cost, both with direct or indirect Gaussian observations, with or without conditioning. This is achieved by approximating fGn with a weighted sum of independent first-order autoregressive processes, fitting the parameters of the approximation to match the autocorrelation function of the fGn model. The resulting approximation is stationary despite being Markov and gives a remarkably accurate fit using only four components. The performance of the approximate fGn model is demonstrated in simulations and two real data examples.
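
    The construction is easy to sketch (illustrative weights and memories below; the paper fits these parameters to match the fGn autocorrelation for a given Hurst exponent, finding four components sufficient):

        import numpy as np

        # Approximate fGn as a weighted sum of m independent AR(1) processes:
        #   x_k(t) = phi_k * x_k(t-1) + eps_k(t),   fGn(t) ~ sum_k w_k * x_k(t).
        rng = np.random.default_rng(0)
        n, m = 10_000, 4
        phi = np.array([0.30, 0.74, 0.95, 0.997])   # hypothetical AR(1) memories
        w = np.array([0.55, 0.25, 0.15, 0.05])      # hypothetical weights

        x = np.zeros(m)
        series = np.empty(n)
        for t in range(n):
            x = phi * x + rng.standard_normal(m) * np.sqrt(1 - phi**2)
            series[t] = w @ x                        # O(m) work per time step

        # Empirical autocorrelation at a few lags: slowly decaying, as
        # expected of a long-memory surrogate.
        for lag in (1, 10, 100):
            r = np.corrcoef(series[:-lag], series[lag:])[0, 1]
            print(f"lag {lag:3d}: acf ~ {r:.3f}")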

  10. Computational fluid dynamics simulation of indoor climate in low energy buildings: Computational set up

    Directory of Open Access Journals (Sweden)

    Risberg Daniel

    2017-01-01

    In this paper CFD was used to simulate the indoor climate in part of a low energy building. The focus of the work was on investigating the computational set-up, such as grid size and boundary conditions, in order to solve indoor climate problems in an accurate way. Future work is to model a complete building with reasonable calculation time and accuracy; a limited number of grid elements and knowledge of boundary settings are therefore essential. A grid edge size of around 0.1 m was enough to predict the climate accurately, according to a grid-independence study. Different turbulence models were compared, with only small differences in the indoor air velocities and temperatures. The models show that radiation between building surfaces has a large impact on the temperature field inside the building, with the largest differences at floor level. Simplifying the simulations by modelling the radiator as a surface in the outer wall of the room is appropriate for the calculations. The overall indoor climate is finally compared between three different cases of outdoor air temperature. The results show a good indoor climate for a low energy building all year round.

  11. Computational code in atomic and nuclear quantum optics: Advanced computing multiphoton resonance parameters for atoms in a strong laser field

    Science.gov (United States)

    Glushkov, A. V.; Gurskaya, M. Yu; Ignatenko, A. V.; Smirnov, A. V.; Serga, I. N.; Svinarenko, A. A.; Ternovsky, E. V.

    2017-10-01

    The consistent relativistic energy approach to the finite Fermi-systems (atoms and nuclei) in a strong realistic laser field is presented and applied to computing the multiphoton resonances parameters in some atoms and nuclei. The approach is based on the Gell-Mann and Low S-matrix formalism, multiphoton resonance lines moments technique and advanced Ivanov-Ivanova algorithm of calculating the Green’s function of the Dirac equation. The data for multiphoton resonance width and shift for the Cs atom and the 57Fe nucleus in dependence upon the laser intensity are listed.

  12. Reproduction of pressure field in ultrasonic-measurement-integrated simulation of blood flow.

    Science.gov (United States)

    Funamoto, Kenichi; Hayase, Toshiyuki

    2013-07-01

    Ultrasonic-measurement-integrated (UMI) simulation of blood flow is used to analyze the velocity and pressure fields by applying feedback signals of artificial body forces based on differences of Doppler velocities between ultrasonic measurement and numerical simulation. Previous studies have revealed that UMI simulation accurately reproduces the velocity field of a target blood flow, but that the reproducibility of the pressure field is not necessarily satisfactory. In the present study, the reproduction of the pressure field by UMI simulation was investigated. The effect of feedback on the pressure field was first examined by theoretical analysis, and a pressure compensation method was devised. When the divergence of the feedback force vector was not zero, it influenced the pressure field in the UMI simulation while improving the computational accuracy of the velocity field. Hence, the correct pressure was estimated by adding pressure compensation to remove the deteriorating effect of the feedback. A numerical experiment was conducted dealing with the reproduction of a synthetic three-dimensional steady flow in a thoracic aneurysm to validate results of the theoretical analysis and the proposed pressure compensation method. The ability of the UMI simulation to reproduce the pressure field deteriorated with a large feedback gain. However, by properly compensating the effects of the feedback signals on the pressure, the error in the pressure field was reduced, exhibiting improvement of the computational accuracy. It is thus concluded that the UMI simulation with pressure compensation allows for the reproduction of both velocity and pressure fields of blood flow. Copyright © 2012 John Wiley & Sons, Ltd.

  13. Applications of field portable computers to NDE of nuclear power plant steam turbine/generator rotors, discs, and retaining rings

    International Nuclear Information System (INIS)

    Reinhart, E.R.; Leon-Salamanca, T.

    2004-01-01

    The new generation of compact, powerful portable computers has been incorporated into a number of nondestructive evaluation (NDE) systems used to inspect critical areas of the steam turbine and generator units of nuclear power plants. Due to the complex geometry of turbine rotors, generator rotors, retaining rings, and shrunk-on turbine discs, the computers are needed to rapidly calculate the optimum position of an ultrasonic transducer or eddy current probe in order to detect defects at several critical areas. Examples where computers have been used to overcome problems in nondestructive evaluation include: analysis of large numbers of closely spaced near-bore ultrasonic reflectors to determine their potential for link-up in turbine and generator rotor bores; distinguishing ultrasonic crack signals from other reflectors, such as the shrink-fit form reflector detected during ultrasonic scanning of shrunk-on generator retaining rings; and detection and recording of eddy current and ultrasonic signals from defects that could be missed by data acquisition systems with inadequate response. The computers are also used to control scanners to ensure total inspection coverage. To facilitate the use of data from detected discontinuities in conjunction with stress and fracture mechanics analysis programs, the computers provide presentations of flaws in color and in three dimensions. The field computers have been instrumental in allowing inspectors to develop on-site reports that enable the owner/operator to rapidly make run/repair/replace decisions. Examples of recent experiences using field portable computers in NDE systems are presented along with anticipated future developments. (author)

  14. A fast and accurate dihedral interpolation loop subdivision scheme

    Science.gov (United States)

    Shi, Zhuo; An, Yalei; Wang, Zhongshuai; Yu, Ke; Zhong, Si; Lan, Rushi; Luo, Xiaonan

    2018-04-01

    In this paper, we propose a fast and accurate dihedral interpolation Loop subdivision scheme for subdivision surfaces based on triangular meshes. In order to solve the problem of surface shrinkage, we keep the limit condition unchanged, which is important. Extraordinary vertices are handled using modified Butterfly rules. Subdivision schemes are computationally costly as the number of faces grows exponentially at higher levels of subdivision. To address this problem, our approach is to use local surface information to adaptively refine the model. This is achieved simply by changing the threshold value of the dihedral angle parameter, i.e., the angle between the normals of a triangular face and its adjacent faces. We then demonstrate the effectiveness of the proposed method for various 3D graphic triangular meshes, and extensive experimental results show that it can match or exceed the expected results at lower computational cost.
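
    The refinement criterion can be sketched directly (a minimal illustration of the dihedral-angle test; the threshold and geometry are made-up):

        import numpy as np

        def unit_normal(v0, v1, v2):
            n = np.cross(v1 - v0, v2 - v0)
            return n / np.linalg.norm(n)

        def needs_refinement(face_normal, neighbor_normals, threshold_deg=15.0):
            """Flag a face for further subdivision if the angle between its
            normal and any neighbor's normal exceeds the threshold."""
            cosines = np.clip([face_normal @ n for n in neighbor_normals], -1.0, 1.0)
            return bool(np.any(np.degrees(np.arccos(cosines)) > threshold_deg))

        # Toy example: one nearly coplanar neighbor, one strongly creased one.
        n0 = unit_normal(np.zeros(3), np.array([1., 0., 0.]), np.array([0., 1., 0.]))
        flat = unit_normal(np.zeros(3), np.array([0., 1., 0.]), np.array([-1., 0., 0.1]))
        crease = unit_normal(np.zeros(3), np.array([0., 1., 0.]), np.array([-0.5, 0., 1.]))
        print(needs_refinement(n0, [flat]), needs_refinement(n0, [crease]))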

  15. Parente2: a fast and accurate method for detecting identity by descent

    KAUST Repository

    Rodriguez, Jesse M.; Bercovici, Sivan; Huang, Lin; Frostig, Roy; Batzoglou, Serafim

    2014-01-01

    Identity-by-descent (IBD) inference is the problem of establishing a genetic connection between two individuals through a genomic segment that is inherited by both individuals from a recent common ancestor. IBD inference is an important preceding step in a variety of population genomic studies, ranging from demographic studies to linking genomic variation with phenotype and disease. The problem of accurate IBD detection has become increasingly challenging with the availability of large collections of human genotypes and genomes: Given a cohort's size, a quadratic number of pairwise genome comparisons must be performed. Therefore, computation time and the false discovery rate can also scale quadratically. To enable accurate and efficient large-scale IBD detection, we present Parente2, a novel method for detecting IBD segments. Parente2 is based on an embedded log-likelihood ratio and uses a model that accounts for linkage disequilibrium by explicitly modeling haplotype frequencies. Parente2 operates directly on genotype data without the need to phase data prior to IBD inference. We evaluate Parente2's performance through extensive simulations using real data, and we show that it provides substantially higher accuracy compared to previous state-of-the-art methods while maintaining high computational efficiency.

  16. Role of Cone Beam Computed Tomography in Diagnosis and Treatment Planning in Dentistry: An Update.

    Science.gov (United States)

    Shukla, Sagrika; Chug, Ashi; Afrashtehfar, Kelvin I

    2017-11-01

    Accurate diagnosis and treatment planning are the backbone of any medical therapy; for this reason, cone beam computed tomography (CBCT) was introduced and has been widely used. CBCT provides three-dimensional image viewing, enabling the exact location and extent of lesions in any anatomical region to be determined. For the same reason, CBCT is useful not only in the surgical fields but also in endodontics, prosthodontics, and orthodontics for appropriate treatment planning and effective dental care. The aim and clinical significance of this review are to update dental clinicians on the applications of CBCT in each dental specialty for appropriate diagnosis and more predictable treatment.

  17. Asymmetric focusing study from twin input power couplers using realistic rf cavity field maps

    Directory of Open Access Journals (Sweden)

    Colwyn Gulliford

    2011-03-01

    Advanced simulation codes now exist that can self-consistently solve Maxwell’s equations for the combined system of an rf cavity and a beam bunch. While these simulations are important for a complete understanding of the beam dynamics in rf cavities, they require significant time and computing power. These techniques are therefore not readily included in real time simulations useful to the beam physicist during beam operations. Thus, there exists a need for a simplified algorithm which simulates realistic cavity fields significantly faster than self-consistent codes, while still incorporating enough of the necessary physics to ensure accurate beam dynamics computation. To this end, we establish a procedure for producing realistic field maps using lossless cavity eigenmode field solvers. This algorithm incorporates all relevant cavity design and operating parameters, including beam loading from a nonrelativistic beam. The algorithm is then used to investigate the asymmetric quadrupolelike focusing produced by the input couplers of the Cornell ERL injector cavity for a variety of beam and operating parameters.

  18. Three-dimensional parallel edge-based finite element modeling of electromagnetic data with field redatuming

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Čuma, Martin; Zhdanov, Michael

    2015-01-01

    This paper presents a parallelized version of the edge-based finite element method with a novel post-processing approach for numerical modeling of an electromagnetic field in complex media. The method uses an unstructured tetrahedral mesh, which can reduce the number of degrees of freedom significantly. The linear system of finite element equations is solved using parallel direct solvers, which are robust for ill-conditioned systems and efficient for multiple-source electromagnetic (EM) modeling. We also introduce a novel approach to compute the scalar components of the electric field from the tangential components along each edge, based on field redatuming, which can produce a more accurate result than the conventional approach. We have applied the developed algorithm to compute the EM response for a typical 3D anisotropic geoelectrical model of an off-shore HC reservoir with complex...

  19. Elucidation of complicated phenomena in nuclear power field by computation science techniques

    International Nuclear Information System (INIS)

    Takahashi, Ryoichi

    1996-01-01

    In this crossover research, the complicated phenomena encountered in the nuclear power field are elucidated and connected to applied engineering research through the development of high-speed computing technology and large-scale numerical simulation. The goal is to realize three-dimensional numerical simulations at the largest scale in the world, about 100 million mesh points, and to carry the results over into engineering research. The next generation of nuclear power plants must combine further improvements in economic efficiency with assured safety, so it is important that the design window be large. Confirming the size of the design window quantitatively is not easy, and it is very difficult to separate observed phenomena into elementary events. As a method for forecasting and reproducing complicated phenomena and quantifying the design window, large-scale numerical simulation is promising. The roles of theory, experiment, and computational science are discussed, and the organization for executing this crossover research is described. (K.I.)

  20. BONFIRE: benchmarking computers and computer networks

    OpenAIRE

    Bouckaert, Stefan; Vanhie-Van Gerwen, Jono; Moerman, Ingrid; Phillips, Stephen; Wilander, Jerker

    2011-01-01

    The benchmarking concept is not new in the field of computing or computer networking. The term “benchmarking tools” usually refers to a program or set of programs used to evaluate the performance of a solution under certain reference conditions, relative to the performance of another solution. Since the 1970s, benchmarking techniques have been used to measure the performance of computers and computer networks. Benchmarking of applications and virtual machines in an Infrastructure-as-a-Service...

  1. Accurate Calculations of Rotationally Inelastic Scattering Cross Sections Using Mixed Quantum/Classical Theory.

    Science.gov (United States)

    Semenov, Alexander; Babikov, Dmitri

    2014-01-16

    For the computational treatment of rotationally inelastic scattering of molecules, we propose to use the mixed quantum/classical theory, MQCT. The old idea of treating translational motion classically while using quantum mechanics for the rotational degrees of freedom is developed to a new level and applied to Na + N2 collisions in a broad range of energies. Comparison with full-quantum calculations shows that MQCT accurately reproduces all features of the energy dependence of cross sections, even minor ones, except scattering resonances at very low energies. The remarkable success of MQCT opens up wide opportunities for computational prediction of inelastic scattering cross sections at higher temperatures and/or for polyatomic molecules and heavier quenchers, cases that are computationally close to impossible within the full-quantum framework.

  2. RAMSIM: A fast computer model for mean wind flow over hills

    Energy Technology Data Exchange (ETDEWEB)

    Corbett, J-F.

    2007-06-15

    The Risø Atmospheric Mixed Spectral-Integration Model (RAMSIM) is a micro-scale, linear flow model developed to quickly calculate the mean wind flow field over orography. It was designed to bridge the gap between WAsP and similar models, which are fast but insufficiently accurate over steep slopes, and non-linear CFD models, which are accurate but too computationally expensive for routine use on a PC. RAMSIM is governed by the RANS and E-ε turbulence closure equations, expressed in non-Cartesian coordinates. A terrain-following coordinate system is created from a simple analytical expression. The equations are linearized by a perturbation expansion about the flat-terrain case. The first-order equations, representing the spatial correction due to the presence of orography, are Fourier-transformed analytically in the two horizontal dimensions. The pressure and horizontal velocity components are eliminated, resulting in a set of four ordinary differential equations (ODEs). RAMSIM is currently implemented and tested in two-dimensional space; a 3D version has been formulated but not yet implemented. In the 2D case, there are only three ODEs, depending on only two non-dimensional parameters. This is exploited by solving the ODEs by Runge-Kutta integration for all useful combinations of these parameters and storing the results in look-up tables (LUTs). The flow field over any given orography is then quickly obtained by interpolating from the LUTs, scaling the value of the flow variables for each wavenumber component of the orography, and returning to real space by an inverse Fourier transform. RAMSIM was tested against measurements, as well as other authors' flow models, in four test cases: two laboratory flows over idealized terrain and two field experiments. RAMSIM calculations generally agree with measurements over upward slopes and hilltops, but overestimate the speed very near the ground at hilltops. RAMSIM appears to have an edge over other linear models.
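
    The LUT-plus-FFT pipeline lends itself to a compact sketch: transform the terrain to Fourier space, scale each wavenumber component by a transfer function (in RAMSIM, interpolated from the precomputed look-up tables), and invert the transform. The toy transfer function below is purely illustrative and does not reproduce RAMSIM's ODE solutions.

        import numpy as np

        def flow_perturbation(height, dx, transfer):
            # First-order wind-speed correction over 1D orography: scale each
            # wavenumber of the terrain and return to real space by inverse FFT.
            h_hat = np.fft.rfft(height)
            k = 2.0 * np.pi * np.fft.rfftfreq(height.size, d=dx)
            return np.fft.irfft(transfer(k) * h_hat, n=height.size)

        x = np.linspace(-2000.0, 2000.0, 256)            # metres
        hill = 100.0 * np.exp(-(x / 500.0) ** 2)         # idealized Gaussian hill
        toy_transfer = lambda k: np.abs(k) * np.exp(-10.0 * np.abs(k))  # stand-in only
        du = flow_perturbation(hill, x[1] - x[0], toy_transfer)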

  3. Three dimensional magnetic fields in extra high speed modified Lundell alternators computed by a combined vector-scalar magnetic potential finite element method

    Science.gov (United States)

    Demerdash, N. A.; Wang, R.; Secunde, R.

    1992-01-01

    A 3D finite element (FE) approach was developed and implemented for computation of global magnetic fields in a 14.3 kVA modified Lundell alternator. The essence of the new method is the combined use of magnetic vector and scalar potential formulations in 3D FEs. This approach makes it practical, using state-of-the-art supercomputer resources, to globally analyze magnetic fields and operating performances of rotating machines which have truly 3D magnetic flux patterns. The 3D FE-computed fields and machine inductances, as well as various machine performance simulations of the 14.3 kVA machine, are presented in this paper and its two companion papers.

  4. The influence of sulcus width on simulated electric fields induced by transcranial magnetic stimulation

    Science.gov (United States)

    Janssen, A. M.; Rampersad, S. M.; Lucka, F.; Lanfer, B.; Lew, S.; Aydin, Ü.; Wolters, C. H.; Stegeman, D. F.; Oostendorp, T. F.

    2013-07-01

    Volume conduction models can help in acquiring knowledge about the distribution of the electric field induced by transcranial magnetic stimulation. One aspect of a detailed model is an accurate description of the cortical surface geometry. Since its estimation is difficult, it is important to know how accurately the geometry has to be represented. Previous studies only looked at the differences caused by neglecting the complete boundary between cerebrospinal fluid (CSF) and grey matter (Thielscher et al 2011 NeuroImage 54 234-43, Bijsterbosch et al 2012 Med. Biol. Eng. Comput. 50 671-81), or by resizing the whole brain (Wagner et al 2008 Exp. Brain Res. 186 539-50). However, due to the highly conductive properties of the CSF, it can be expected that alterations in sulcus width alone already have a significant effect on the distribution of the electric field. To investigate this, the sulcus width of a highly realistic head model, based on T1-, T2- and diffusion-weighted magnetic resonance images, was altered systematically. This study shows that alterations in the sulcus width do not cause large differences in the majority of the electric field values. However, considerable overestimation of sulcus width produces an overestimation of the calculated field strength, even at locations distant from the target location.

  5. Accurate and computationally efficient prediction of thermochemical properties of biomolecules using the generalized connectivity-based hierarchy.

    Science.gov (United States)

    Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan

    2014-08-14

    In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation for a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule and results in error-cancellation reaction schemes that are automated, general, and readily applicable to a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes with routinely employed density functionals or wave function-based methods. The heats of formation calculated herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods for the amino acids and with G3 results for the barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) yields more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
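
    To make the error-cancellation idea concrete: a connectivity-preserving (isodesmic-type) scheme balances bonds and atom environments on both sides of a reaction so that method errors largely cancel. A textbook-style illustration (our example, not one of the paper's reactions) for propane is

        CH3-CH2-CH3 + CH4 -> 2 CH3-CH3

    so that, in LaTeX,

        \Delta H_f^\circ(\mathrm{C_3H_8}) =
            2\,\Delta H_f^\circ(\mathrm{C_2H_6})
            - \Delta H_f^\circ(\mathrm{CH_4})
            - \Delta H_\mathrm{rxn}^{\mathrm{calc}},

    where \Delta H_\mathrm{rxn}^{\mathrm{calc}} is the quantum-chemically computed reaction enthalpy and the experimental heats of formation of the small reference molecules are taken as known.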

  6. Accurate calculations of bound rovibrational states for argon trimer

    Energy Technology Data Exchange (ETDEWEB)

    Brandon, Drew; Poirier, Bill [Department of Chemistry and Biochemistry, and Department of Physics, Texas Tech University, Box 41061, Lubbock, Texas 79409-1061 (United States)

    2014-07-21

    This work presents a comprehensive quantum dynamics calculation of the bound rovibrational eigenstates of argon trimer (Ar{sub 3}), using the ScalIT suite of parallel codes. The Ar{sub 3} rovibrational energy levels are computed to a very high level of accuracy (10{sup −3} cm{sup −1} or better), and up to the highest rotational and vibrational excitations for which bound states exist. For many of these rovibrational states, wavefunctions are also computed. Rare gas clusters such as Ar{sub 3} are interesting because the interatomic interactions manifest through long-range van der Waals forces, rather than through covalent chemical bonding. As a consequence, they exhibit strong Coriolis coupling between the rotational and vibrational degrees of freedom, as well as highly delocalized states, all of which renders accurate quantum dynamical calculation difficult. Moreover, with its (comparatively) deep potential well and heavy masses, Ar{sub 3} is an especially challenging rare gas trimer case. There are a great many rovibrational eigenstates to compute, and a very high density of states. Consequently, very few previous rovibrational state calculations for Ar{sub 3} may be found in the current literature—and only for the lowest-lying rotational excitations.

  7. On the accurate fast evaluation of finite Fourier integrals using cubic splines

    International Nuclear Information System (INIS)

    Morishima, N.

    1993-01-01

    Finite Fourier integrals based on a cubic-spline fit to equidistant data are shown to be evaluated quickly and accurately. Good performance, especially in computational speed, is achieved by optimizing the spline fit and internally using the fast Fourier transform (FFT) algorithm for complex data. The present procedure provides high accuracy with much shorter CPU time than a trapezoidal FFT. (author)
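
    In the spirit of the procedure (though not the paper's optimized quadrature), the following sketch fits a cubic spline to equidistant NumPy-array samples, oversamples it, and evaluates F(k) = ∫ f(x) e^{-ikx} dx numerically; the oversampling factor is an assumption made for illustration.

        import numpy as np
        from scipy.interpolate import CubicSpline

        def finite_fourier(x, f, ks, oversample=8):
            # Spline-interpolate the data, then integrate the oscillatory
            # integrand on a finer grid for each requested wavenumber k.
            spline = CubicSpline(x, f)
            xf = np.linspace(x[0], x[-1], oversample * x.size)
            ff = spline(xf)
            return np.array([np.trapz(ff * np.exp(-1j * k * xf), xf) for k in ks])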

  8. Electric Field Encephalography as a tool for functional brain research: a modeling study.

    Directory of Open Access Journals (Sweden)

    Yury Petrov

    We introduce the notion of Electric Field Encephalography (EFEG), based on measuring electric fields of the brain, and demonstrate, using computer modeling, that, given appropriate electric-field sensors, this technique may have significant advantages over the current EEG technique. Unlike EEG, EFEG can be used to measure brain activity in a contactless and reference-free manner at significant distances from the head surface. Principal component analysis using simulated cortical sources demonstrated that electric field sensors positioned 3 cm away from the scalp and characterized by the same signal-to-noise ratio as EEG sensors provided the same number of uncorrelated signals as scalp EEG. When positioned on the scalp, EFEG sensors provided 2-3 times more uncorrelated signals. This significant increase in the number of uncorrelated signals can be used for more accurate assessment of brain states for non-invasive brain-computer interfaces and neurofeedback applications. It may also lead to major improvements in source localization precision. Source localization simulations for the spherical and Boundary Element Method (BEM) head models demonstrated that the localization errors are reduced two-fold when using electric fields instead of electric potentials. We have identified several techniques that could be adapted for the measurement of the electric field vector required for EFEG and anticipate that this study will stimulate new experimental approaches to utilize this new tool for functional brain research.

  9. The efficacy of a novel mobile phone application for goldmann ptosis visual field interpretation.

    Science.gov (United States)

    Maamari, Robi N; D'Ambrosio, Michael V; Joseph, Jeffrey M; Tao, Jeremiah P

    2014-01-01

    The purpose was to evaluate the efficacy of a novel mobile phone application that calculates superior visual field defects on Goldmann visual field charts. In this experimental study, the mobile phone application and 14 oculoplastic surgeons interpreted the superior visual field defect in 10 Goldmann charts. Percent errors of the application's and the surgeons' estimates were calculated against computer-software computation of the actual defects. Precision and time efficiency of the application were evaluated by processing the same Goldmann visual field chart 10 times. The application showed a mean percent error of 1.98% (95% confidence interval [CI], 0.87%-3.10%) in superior visual field defect calculation, whereas the oculoplastic surgeons' visual estimates had an average mean percent error of 19.75% (95% CI, 14.39%-25.11%). The surgeons, on average, underestimated the defect in all 10 Goldmann charts, with high interobserver variance. The percent error of the 10 repeated measurements on a single chart was 0.93% (95% CI, 0.40%-1.46%), and the average time to process one chart was 12.9 seconds (95% CI, 10.9-15.0 seconds). The mobile phone application was thus highly accurate, precise, and time-efficient in calculating the percent superior visual field defect from Goldmann charts, whereas the surgeons' visual interpretations were highly inaccurate, highly variable, and usually underestimated the visual field loss.
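
    The record does not spell out the application's algorithm; one plausible way to compute a percent superior defect from digitized isopter outlines is the shoelace formula, sketched below with hypothetical function names.

        def polygon_area(points):
            # Unsigned area of a polygon via the shoelace formula; `points`
            # is a list of (x, y) vertices digitized from the chart.
            area = 0.0
            for (x0, y0), (x1, y1) in zip(points, points[1:] + points[:1]):
                area += x0 * y1 - x1 * y0
            return abs(area) / 2.0

        def percent_superior_defect(full_pts, ptotic_pts):
            # Percent of the superior field lost: one minus the ratio of the
            # measured (ptotic) superior area to the full superior area.
            return 100.0 * (1.0 - polygon_area(ptotic_pts) / polygon_area(full_pts))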

  10. High-performance computing using FPGAs

    CERN Document Server

    Benkrid, Khaled

    2013-01-01

    This book is concerned with the emerging field of High Performance Reconfigurable Computing (HPRC), which aims to harness the high performance and relatively low power of reconfigurable hardware, in the form of Field Programmable Gate Arrays (FPGAs), in High Performance Computing (HPC) applications. It presents the latest developments in this field from the applications, architecture, and tools and methodologies points of view. We hope that this work will form a reference for existing researchers in the field and entice new researchers and developers to join the HPRC community. The book includes: thirteen application chapters which present the most important application areas tackled by high performance reconfigurable computers, namely financial computing, bioinformatics and computational biology, data search and processing, stencil computation (e.g., computational fluid dynamics and seismic modeling), cryptanalysis, astronomical N-body simulation, and circuit simulation; and seven architecture chapters which...

  11. Parotid gland sparing effect by computed tomography-based modified lower field margin in whole brain radiotherapy

    International Nuclear Information System (INIS)

    Cho, Oyeon; Chun, Mi Son; Oh, Young Taek; Kim, Mi Hwa; Park, Hae Jin; Nam, Sang Soo; Heo, Jae Sung; Noh, O Kyu; Park, Sung Ho

    2013-01-01

    The parotid gland can be considered an organ at risk in whole brain radiotherapy (WBRT). The purpose of this study is to evaluate the parotid-sparing effect of computed tomography (CT)-based WBRT compared to a 2-dimensional plan with a conventional field margin. From January 2008 to April 2011, 53 patients underwent WBRT using CT-based simulation. A bilateral two-field arrangement was used and the prescribed dose was 30 Gy in 10 fractions. We compared the parotid dose between 2 radiotherapy plans using different lower field margins: a conventional field extending to the lower level of the atlas (CF) and a modified field fitted to the brain tissue (MF). The average mean parotid doses of the 2 plans were 17.4 Gy with CF and 8.7 Gy with MF (p < ...), while the brain volumes receiving more than 98% of the prescribed dose were 99.7% for CF and 99.5% for MF. Compared to WBRT with CF, CT-based lower field margin modification is a simple and effective technique for sparing the parotid gland while providing similar dose coverage of the whole brain.

  12. Computer tomography in otolaryngology

    International Nuclear Information System (INIS)

    Gradzki, J.

    1981-01-01

    The principles of design and operation of computer tomography, which has also been applied to the diagnosis of nose, ear and throat diseases, are discussed. Computer tomography makes possible the visualization of the structures of the nose, nasal sinuses and facial skeleton in transverse and coronal planes. The method enables an accurate evaluation of the position and size of neoplasms in these regions and the differentiation of inflammatory exudates from malignant masses. In otology, computer tomography is used particularly in the diagnosis of pontocerebellar angle tumours and otogenic brain abscesses. Computer tomography of the larynx and pharynx provides new diagnostic data owing to the possibility of obtaining transverse sections and visualizing cartilage. Computer tomograms of selected cases are presented. (author)

  13. Fast and accurate CMB computations in non-flat FLRW universes

    Science.gov (United States)

    Lesgourgues, Julien; Tram, Thomas

    2014-09-01

    We present a new method for calculating CMB anisotropies in a non-flat Friedmann universe, relying on a very stable algorithm for the calculation of hyperspherical Bessel functions that can be pushed to arbitrary precision levels. We also introduce a new approximation scheme which gradually takes over in the flat-space limit and leads to significant reductions of the computation time. Our method is implemented in the Boltzmann code class. It can be used to benchmark the accuracy of the camb code in curved space, which is found to match expectations. For default precision settings, corresponding to 0.1% for scalar temperature spectra and 0.2% for scalar polarisation spectra, our code is two to three times faster, depending on curvature. We also simplify the temperature and polarisation source terms significantly, so the different contributions to the Cℓ's are easy to identify inside the code.

  14. Fast and accurate CMB computations in non-flat FLRW universes

    International Nuclear Information System (INIS)

    Lesgourgues, Julien; Tram, Thomas

    2014-01-01

    We present a new method for calculating CMB anisotropies in a non-flat Friedmann universe, relying on a very stable algorithm for the calculation of hyperspherical Bessel functions that can be pushed to arbitrary precision levels. We also introduce a new approximation scheme which gradually takes over in the flat-space limit and leads to significant reductions of the computation time. Our method is implemented in the Boltzmann code class. It can be used to benchmark the accuracy of the camb code in curved space, which is found to match expectations. For default precision settings, corresponding to 0.1% for scalar temperature spectra and 0.2% for scalar polarisation spectra, our code is two to three times faster, depending on curvature. We also simplify the temperature and polarisation source terms significantly, so the different contributions to the Cℓ's are easy to identify inside the code.

  15. Impact of the displacement current on low-frequency electromagnetic fields computed using high-resolution anatomy models

    International Nuclear Information System (INIS)

    Barchanski, A; Gersem, H de; Gjonaj, E; Weiland, T

    2005-01-01

    We present a comparison of simulated low-frequency electromagnetic fields in the human body, calculated by means of the electro-quasistatic formulation. The geometrical data in these simulations were provided by an anatomically realistic, high-resolution human body model, while the dielectric properties of the various body tissues were modelled by the parametric Cole-Cole equation. The model was examined under two different excitation sources and various spatial resolutions in a frequency range from 10 Hz to 1 MHz. An analysis of the differences in the computed fields resulting from neglecting the permittivity was carried out. On this basis, an estimate of the impact of the displacement current on simulated low-frequency electromagnetic fields in the human body is obtained. (note)
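
    For orientation, the relative importance of the displacement current in a tissue of conductivity \sigma and permittivity \varepsilon at angular frequency \omega is governed, for time-harmonic fields, by the ratio

        \frac{|J_\mathrm{disp}|}{|J_\mathrm{cond}|} = \frac{\omega \varepsilon}{\sigma},

    which is why neglecting permittivity is commonly justified at low frequencies and must be re-examined toward the upper end of the 10 Hz to 1 MHz range studied here.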

  16. MHD computation of feedback of resistive-shell instabilities in the reversed field pinch

    International Nuclear Information System (INIS)

    Zita, E.J.; Prager, S.C.

    1992-05-01

    MHD computation demonstrates that feedback can sustain reversal and reduce the loop voltage in resistive-shell reversed field pinch (RFP) plasmas. Edge feedback on ∼2R/a tearing modes resonant near the axis is found to restore plasma parameters to nearly their levels with a close-fitting conducting shell. When the original dynamo modes are stabilized, neighboring tearing modes grow to maintain the RFP dynamo more efficiently. This suggests that the experimentally observed limitation of RFP pulse lengths to the order of the shell time can be overcome by applying feedback to a few helical modes.

  17. Communication: Calculation of interatomic forces and optimization of molecular geometry with auxiliary-field quantum Monte Carlo

    Science.gov (United States)

    Motta, Mario; Zhang, Shiwei

    2018-05-01

    We propose an algorithm for accurate, systematic, and scalable computation of interatomic forces within the auxiliary-field quantum Monte Carlo (AFQMC) method. The algorithm relies on the Hellmann-Feynman theorem and incorporates Pulay corrections in the presence of atomic orbital basis sets. We benchmark the method for small molecules by comparing the computed forces with the derivatives of the AFQMC potential energy surface and by direct comparison with other quantum chemistry methods. We then perform geometry optimizations using the steepest descent algorithm in larger molecules. With realistic basis sets, we obtain equilibrium geometries in agreement, within statistical error bars, with experimental values. The increase in computational cost for computing forces in this approach is only a small prefactor over that of calculating the total energy. This paves the way for a general and efficient approach for geometry optimization and molecular dynamics within AFQMC.
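
    A minimal sketch of the geometry-relaxation loop follows, assuming a black-box force evaluator standing in for the paper's AFQMC forces with Hellmann-Feynman and Pulay terms; the step size, tolerance, and names are illustrative, not the authors' settings.

        import numpy as np

        def steepest_descent(geom, force_fn, step=0.02, max_iter=200, ftol=1e-3):
            # Move atoms along the forces (downhill in energy) until the force
            # norm, which may be statistically noisy, falls below the tolerance.
            for _ in range(max_iter):
                f = force_fn(geom)          # array of shape (n_atoms, 3)
                if np.linalg.norm(f) < ftol:
                    break
                geom = geom + step * f
            return geom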

  18. Allele-sharing models: LOD scores and accurate linkage tests.

    Science.gov (United States)

    Kong, A; Cox, N J

    1997-11-01

    Starting with a test statistic for linkage analysis based on allele sharing, we propose an associated one-parameter model. Under general missing-data patterns, this model allows exact calculation of likelihood ratios and LOD scores and has been implemented by a simple modification of existing software. Most important, accurate linkage tests can be performed. Using an example, we show that some previously suggested approaches to handling less than perfectly informative data can be unacceptably conservative. Situations in which this model may not perform well are discussed, and an alternative model that requires additional computations is suggested.
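
    For reference, a LOD score is simply the base-10 logarithm of the likelihood ratio between the linkage model (here, the one-parameter allele-sharing model) and the null; a minimal sketch:

        import math

        def lod_score(loglik_alt, loglik_null):
            # Convert natural-log likelihoods to a base-10 LOD score.
            return (loglik_alt - loglik_null) / math.log(10.0)

    By the usual convention, LOD scores of about 3 or more are taken as evidence for linkage.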

  19. Experiments in computing: a survey.

    Science.gov (United States)

    Tedre, Matti; Moisseinen, Nella

    2014-01-01

    Experiments play a central role in science. The role of experiments in computing is, however, unclear. Questions about the relevance of experiments in computing attracted little attention until the 1980s. As the discipline then saw a push towards experimental computer science, a variety of technically, theoretically, and empirically oriented views on experiments emerged. As a consequence of those debates, today's computing fields use experiments and experiment terminology in a variety of ways. This paper analyzes experimentation debates in computing. It presents five ways in which debaters have conceptualized experiments in computing: feasibility experiment, trial experiment, field experiment, comparison experiment, and controlled experiment. This paper has three aims: to clarify experiment terminology in computing; to contribute to disciplinary self-understanding of computing; and, due to computing's centrality in other fields, to promote understanding of experiments in modern science in general.

  20. Computational prediction of miRNA genes from small RNA sequencing data

    Directory of Open Access Journals (Sweden)

    Wenjing eKang

    2015-01-01

    Next-generation sequencing now for the first time allows researchers to gauge the depth and variation of entire transcriptomes. However, as rare transcripts present in cells at single copies can now be detected, more advanced computational tools are needed to accurately annotate and profile them. miRNAs are 22-nucleotide small RNAs (sRNAs) that post-transcriptionally reduce the output of protein-coding genes. They have established roles in numerous biological processes, including cancers and other diseases. During miRNA biogenesis, the sRNAs are sequentially cleaved from precursor molecules that have a characteristic hairpin RNA structure. The vast majority of newly discovered miRNA genes are mined from small RNA sequencing (sRNA-seq), which can detect more than a billion RNAs in a single run. However, given that many of the detected RNAs are degradation products from all types of transcripts, the accurate identification of miRNAs remains a non-trivial computational problem. Here we review the tools available to predict animal miRNAs from sRNA sequencing data. We present tools for generalist and specialist use cases, including prediction from massively pooled data or in species without a reference genome. We also present wet-lab methods used to validate predicted miRNAs and approaches to computationally benchmark prediction accuracy. For each tool, we reference validation experiments and benchmarking efforts. Last, we discuss the future of the field.
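
    Production predictors score hairpins with thermodynamic folding, but the geometric idea behind the hairpin criterion can be illustrated with a crude complementarity check (all names and the loop length below are assumptions for illustration):

        def can_pair(a, b):
            # Watson-Crick pairs plus the G:U wobble allowed in RNA.
            return {a, b} in ({"A", "U"}, {"C", "G"}, {"G", "U"})

        def hairpin_fraction(precursor, loop_len=15):
            # Fold the candidate precursor in half around a presumed terminal
            # loop and count the fraction of positions that can base-pair.
            arm = (len(precursor) - loop_len) // 2
            if arm <= 0:
                return 0.0
            five_prime = precursor[:arm]
            three_prime = precursor[-arm:][::-1]
            return sum(can_pair(a, b) for a, b in zip(five_prime, three_prime)) / arm

    A genuine pre-miRNA hairpin should score close to 1; unstructured degradation products typically will not.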