Accurate computation of transfer maps from magnetic field data
International Nuclear Information System (INIS)
Venturini, Marco; Dragt, Alex J.
1999-01-01
Consider an arbitrary beamline magnet. Suppose one component (for example, the radial component) of the magnetic field is known on the surface of some imaginary cylinder coaxial to and contained within the magnet aperture. This information can be obtained either by direct measurement or by computation with the aid of some 3D electromagnetic code. Alternatively, suppose that the field harmonics have been measured by using a spinning coil. We describe how this information can be used to compute the exact transfer map for the beamline element. This transfer map takes into account all effects of real beamline elements including fringe-field, pseudo-multipole, and real multipole error effects. The method we describe automatically takes into account the smoothing properties of the Laplace-Green function. Consequently, it is robust against both measurement and electromagnetic code errors. As an illustration we apply the method to the field analysis of high-gradient interaction region quadrupoles in the Large Hadron Collider (LHC).
International Nuclear Information System (INIS)
Gundtoft, H.E.; Nielsen, T.
1981-07-01
A rotational scanning system has recently been developed at Risoe National Laboratory. It allows sound fields from ultrasonic transducers to be examined in 3 dimensions. Using different calculation and plotting programs, any section in the sound field can be plotted. Results from examination of transducers for automatic inspection are presented. (author)
Pineda, M.; Stamatakis, M.
2017-07-01
Modeling the kinetics of surface catalyzed reactions is essential for the design of reactors and chemical processes. The majority of microkinetic models employ mean-field approximations, which lead to an approximate description of catalytic kinetics by assuming spatially uncorrelated adsorbates. On the other hand, kinetic Monte Carlo (KMC) methods provide a discrete-space continuous-time stochastic formulation that enables an accurate treatment of spatial correlations in the adlayer, but at a significant computation cost. In this work, we use the so-called cluster mean-field approach to develop higher order approximations that systematically increase the accuracy of kinetic models by treating spatial correlations at a progressively higher level of detail. We further demonstrate our approach on a reduced model for NO oxidation incorporating first nearest-neighbor lateral interactions and construct a sequence of approximations of increasingly higher accuracy, which we compare with KMC and mean-field. The latter is found to perform rather poorly, overestimating the turnover frequency by several orders of magnitude for this system. On the other hand, our approximations, while more computationally intense than the traditional mean-field treatment, still achieve tremendous computational savings compared to KMC simulations, thereby opening the way for employing them in multiscale modeling frameworks.
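The mean-field closure the abstract refers to can be illustrated with a toy microkinetic model. This is a hypothetical adsorption/desorption/recombination scheme, not the paper's NO-oxidation mechanism, and the rate constants are made up; the key mean-field assumption is that the pair probability is approximated by the square of the coverage (no spatial correlations):

```python
import numpy as np

# Illustrative mean-field microkinetic model (hypothetical rate constants):
# A(g) + * <-> A*, followed by 2 A* -> product, on a uniform lattice.
k_ads, k_des, k_rxn = 1.0, 0.1, 0.5

def dtheta_dt(theta):
    # mean-field closure: the A*-A* pair probability is taken as theta**2,
    # i.e. adsorbates are assumed spatially uncorrelated
    return k_ads * (1.0 - theta) - k_des * theta - 2.0 * k_rxn * theta**2

# integrate to steady state with forward Euler
theta, dt = 0.0, 1e-3
for _ in range(200000):
    theta += dt * dtheta_dt(theta)

tof = k_rxn * theta**2  # turnover frequency per site in the MF approximation
```

A KMC simulation of the same mechanism would instead track the occupation of every lattice site, which is exactly where the accuracy/cost trade-off discussed above arises.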
Directory of Open Access Journals (Sweden)
DePrince A
2010-01-01
We model the response of nanoscale Ag prolate spheroids to an external uniform static electric field using simulations based on the discrete dipole approximation, in which the spheroid is represented as a collection of polarizable subunits. We compare the results of simulations that employ subunit polarizabilities derived from the Clausius–Mossotti relation with those of simulations that employ polarizabilities that include a local environmental correction for subunits near the spheroid’s surface [Rahmani et al. Opt Lett 27: 2118 (2002)]. The simulations that employ corrected polarizabilities give predictions in very good agreement with exact results obtained by solving Laplace’s equation. In contrast, simulations that employ uncorrected Clausius–Mossotti polarizabilities substantially underestimate the extent of the electric field “hot spot” near the spheroid’s sharp tip, and give predictions for the field enhancement factor near the tip that are 30 to 50% too small.
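The Clausius–Mossotti prescription at the heart of the comparison can be sketched in a few lines for a single polarizable subunit in a uniform field. The permittivity and lattice spacing are illustrative values, and a full DDA calculation would additionally couple the induced dipoles of all subunits to each other:

```python
import numpy as np

# Clausius-Mossotti polarizability for a cubic DDA subunit of lattice
# spacing d and relative permittivity eps (CGS-like convention).
def clausius_mossotti_alpha(eps, d):
    # alpha = (3 d^3 / 4 pi) * (eps - 1) / (eps + 2)
    return (3.0 * d**3 / (4.0 * np.pi)) * (eps - 1.0) / (eps + 2.0)

# induced dipole of one subunit in a uniform static field E0 (hypothetical values)
d, eps = 1.0, 5.0
E0 = np.array([0.0, 0.0, 1.0])
p = clausius_mossotti_alpha(eps, d) * E0
```

The surface correction discussed in the abstract replaces this bulk polarizability for subunits near the boundary, which is what restores the field enhancement at the sharp tip.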
Accurate computation of Mathieu functions
Bibby, Malcolm M
2013-01-01
This lecture presents a modern approach for the computation of Mathieu functions. These functions find application in boundary value analysis such as electromagnetic scattering from elliptic cylinders and flat strips, as well as the analogous acoustic and optical problems, and many other applications in science and engineering. The authors review the traditional approach used for these functions, show its limitations, and provide an alternative "tuned" approach enabling improved accuracy and convergence. The performance of this approach is investigated for a wide range of parameters and mach…
Accurate computer simulation of a drift chamber
International Nuclear Information System (INIS)
Killian, T.J.
1980-01-01
A general purpose program for drift chamber studies is described. First the capacitance matrix is calculated using a Green's function technique. The matrix is used in a linear-least-squares fit to choose optimal operating voltages. Next the electric field is computed, and given knowledge of gas parameters and magnetic field environment, a family of electron trajectories is determined. These are finally used to make drift distance vs time curves which may be used directly by a track reconstruction program. Results are compared with data obtained from the cylindrical chamber in the Axial Field Magnet experiment at the CERN ISR
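The voltage-optimization step described above, a capacitance matrix combined with a linear least-squares fit, can be sketched as follows. The matrix and target charges here are random stand-ins for quantities that would come from the Green's-function calculation:

```python
import numpy as np

# Sketch of the voltage-optimisation step: given a capacitance matrix C
# relating wire voltages V to induced charges Q (Q = C V), choose V by a
# linear least-squares fit to a set of target charges. The matrix and the
# targets are hypothetical stand-ins for Green's-function results.
rng = np.random.default_rng(0)
C = np.eye(4) + 0.05 * rng.standard_normal((4, 4))  # near-diagonal capacitance matrix
Q_target = np.array([1.0, -1.0, 1.0, -1.0])

V, residuals, rank, _ = np.linalg.lstsq(C, Q_target, rcond=None)
```

For a square, well-conditioned matrix this reduces to a direct solve; the least-squares form matters when there are more target conditions than free voltages.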
Accurate evaluation of exchange fields in finite element micromagnetic solvers
Chang, R.; Escobar, M. A.; Li, S.; Lubarda, M. V.; Lomakin, V.
2012-04-01
Quadratic basis functions (QBFs) are implemented for solving the Landau-Lifshitz-Gilbert equation via the finite element method. This involves the introduction of a set of special testing functions compatible with the QBFs for evaluating the Laplacian operator. The QBF approach leads to significantly more accurate results than conventionally used approaches based on linear basis functions. Importantly, QBFs allow the error of the computed exchange field to be reduced by increasing the mesh density, for both structured and unstructured meshes. Numerical examples demonstrate the feasibility of the method.
Accurate Calculation of Fringe Fields in the LHC Main Dipoles
Kurz, S; Siegel, N
2000-01-01
The ROXIE program developed at CERN for the design and optimization of the superconducting LHC magnets has been recently extended in a collaboration with the University of Stuttgart, Germany, with a field computation method based on the coupling between the boundary element (BEM) and the finite element (FEM) technique. This avoids the meshing of the coils and the air regions, and avoids the artificial far-field boundary conditions. The method is therefore especially suited for the accurate calculation of fields in superconducting magnets in which the field is dominated by the coil. We will present the fringe field calculations in both 2D and 3D geometries to evaluate the effect of connections and the cryostat on the field quality and the flux density to which auxiliary bus-bars are exposed.
Accurate Assessment of Computed Order Tracking
Directory of Open Access Journals (Sweden)
P.N. Saavedra
2006-01-01
Spectral vibration analysis using the Fourier transform is the most common technique for evaluating the mechanical condition of machinery operating in a stationary regime. However, machinery operating in transient modes, such as variable-speed equipment, generates spectra with distinct frequency content at each instant, and the standard approach is not directly applicable for diagnostics. The "order tracking" technique is a suitable tool for analyzing variable-speed machines. We have studied computed order tracking (COT), and a new computation procedure is proposed for solving the indeterminate results generated by the traditional method at constant speed. The effect on accuracy of the assumptions inherent in COT was assessed using data from various simulations. These simulations allowed us to determine the effect of different user-defined factors on the overall accuracy of the method: the signal and tachometric pulse sampling frequency, the method of amplitude interpolation, and the number of tachometric pulses per revolution. Tests on real data measured on the main transmissions of a mining shovel were carried out, and we concluded that the new method is appropriate for the condition monitoring of this type of machine.
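A minimal computed-order-tracking sketch, assuming one tachometer pulse per revolution and simple linear interpolation of shaft angle between pulses (the paper studies exactly how such user-defined choices affect accuracy). The run-up signal is synthetic:

```python
import numpy as np

# Synthetic run-up: shaft speed ramps 10 -> 30 Hz over 1 s, with a
# vibration component locked to shaft order 2.
fs = 10000.0
t = np.arange(0.0, 1.0, 1.0 / fs)
phase = 2.0 * np.pi * (10.0 * t + 10.0 * t**2)   # shaft angle in rad
signal = np.sin(2.0 * phase)                      # order-2 vibration

# tachometer pulse times: one pulse per full revolution of the shaft
revs = np.arange(0, phase[-1] / (2.0 * np.pi))
pulse_times = np.interp(revs * 2.0 * np.pi, phase, t)

# reconstruct angle(t) from the pulses, then resample the signal at
# constant angle increments (64 samples per revolution)
angle_of_t = np.interp(t, pulse_times, revs * 2.0 * np.pi)
angle_grid = np.arange(0.0, (len(revs) - 1) * 2.0 * np.pi, 2.0 * np.pi / 64)
resampled = np.interp(angle_grid, angle_of_t, signal)

# order spectrum: the dominant line should sit at order 2
spectrum = np.abs(np.fft.rfft(resampled))
dominant_order = (np.argmax(spectrum[1:]) + 1) / (len(revs) - 1.0)
```

In time domain the same signal smears across a band of frequencies; after angle-domain resampling it collapses onto a single order line.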
Towards accurate simulation of fringe field effects
International Nuclear Information System (INIS)
Berz, M.; Erdelyi, B.; Makino, K.
2001-01-01
In this paper, we study various fringe field effects. Previously, we showed the large impact that fringe fields can have on certain lattice scenarios of the proposed Neutrino Factory. Besides the linear design of the lattice, the effects depend strongly on the details of the field fall off. Various scenarios are compared. Furthermore, in the absence of detailed information, we study the effects for the LHC, a case where the fringe fields are known, and try to draw some conclusions for Neutrino Factory lattices
Accurate method of the magnetic field measurement of quadrupole magnets
International Nuclear Information System (INIS)
Kumada, M.; Sakai, I.; Someya, H.; Sasaki, H.
1983-01-01
We present an accurate method for measuring the magnetic field of quadrupole magnets. A method for obtaining the field gradient and the effective focusing length is given. A new scheme to obtain the skew field components is also proposed. The relative accuracy of the measurement was 1 × 10⁻⁴ or less. (author)
Three dimensional field computation
International Nuclear Information System (INIS)
Trowbridge, C.W.
1981-06-01
Recent research work carried out at Rutherford and Appleton Laboratories into the Computation of Electromagnetic Fields is summarised. The topics covered include algorithms for integral and differential methods for the solution of 3D magnetostatic fields, comparison of results with experiment and an investigation into the strengths and weaknesses of both methods for an analytic problem. The paper concludes with a brief summary of the work in progress on the solution of 3D eddy currents using differential finite elements. (author)
Fast and accurate computation of projected two-point functions
Grasshorn Gebhardt, Henry S.; Jeong, Donghui
2018-01-01
We present the two-point function from the fast and accurate spherical Bessel transformation (2-FAST) algorithm (our code is available at https://github.com/hsgg/twoFAST) for a fast and accurate computation of integrals involving one or two spherical Bessel functions. These types of integrals occur when projecting the galaxy power spectrum P(k) onto the configuration space, ξℓν(r), or spherical harmonic space, Cℓ(χ, χ′). First, we employ the FFTLog transformation of the power spectrum to divide the calculation into P(k)-dependent coefficients and P(k)-independent integrations of basis functions multiplied by spherical Bessel functions. We find analytical expressions for the latter integrals in terms of special functions, for which recursion provides a fast and accurate evaluation. The algorithm, therefore, circumvents direct integration of highly oscillating spherical Bessel functions.
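For contrast, the brute-force evaluation that 2-FAST is designed to avoid looks like this (toy power spectrum, ℓ = 0; a realistic galaxy P(k) would make the oscillatory integral far more expensive to converge):

```python
import numpy as np
from scipy.special import spherical_jn
from scipy.integrate import trapezoid

# Direct evaluation of xi_ell(r) = (1 / 2 pi^2) * integral dk k^2 P(k) j_ell(k r),
# i.e. the highly oscillatory integral the 2-FAST algorithm circumvents.
def xi_ell_direct(r, ell=0, kmax=50.0, nk=100000):
    k = np.linspace(1e-4, kmax, nk)
    Pk = k / (1.0 + k**4)                     # hypothetical smooth, decaying P(k)
    integrand = k**2 * Pk * spherical_jn(ell, k * r)
    return trapezoid(integrand, k) / (2.0 * np.pi**2)

xi0 = xi_ell_direct(1.0)
```

The cost grows quickly with r because the integrand oscillates with period 2π/r, which is the motivation for the FFTLog-based factorization described above.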
Accurate atom-mapping computation for biochemical reactions.
Latendresse, Mario; Malerich, Jeremiah P; Travers, Mike; Karp, Peter D
2012-11-26
The complete atom mapping of a chemical reaction is a bijection of the reactant atoms to the product atoms that specifies the terminus of each reactant atom. Atom mapping of biochemical reactions is useful for many applications of systems biology, in particular for metabolic engineering, where synthesizing new biochemical pathways must take into account the number of carbon atoms from a source compound that are conserved in the synthesis of a target compound. Rapid, accurate computation of the atom mapping(s) of a biochemical reaction remains elusive despite significant work on this topic. In particular, past researchers did not validate the accuracy of mapping algorithms. We introduce a new method for computing atom mappings called the minimum weighted edit-distance (MWED) metric. The metric is based on bond propensity to react and computes biochemically valid atom mappings for a large percentage of biochemical reactions. MWED models can be formulated efficiently as Mixed-Integer Linear Programs (MILPs). We have demonstrated this approach on 7501 reactions of the MetaCyc database, for which 87% of the models could be solved in less than 10 s. For 2.1% of the reactions, we found multiple optimal atom mappings. We show that the error rate is 0.9% (22 reactions) by comparing these atom mappings to 2446 atom mappings of the manually curated Kyoto Encyclopedia of Genes and Genomes (KEGG) RPAIR database. To our knowledge, our computational atom-mapping approach is the most accurate and among the fastest published to date. The atom-mapping data will be available in the MetaCyc database later in 2012; the atom-mapping software will be available within the Pathway Tools software later in 2012.
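The spirit of the approach, finding a minimum-cost bijection between reactant and product atoms, can be illustrated with a toy element-constrained assignment problem. The real MWED metric weights bond changes by reaction propensity and is solved as a MILP; the reaction and costs below are hypothetical:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# Toy reaction: map reactant atoms to product atoms of the same element,
# minimising a simple stand-in cost (not the paper's MWED formulation).
reactant = ["C", "C", "O", "H", "H"]
product  = ["C", "O", "C", "H", "H"]

BIG = 1e6  # forbid mapping between different elements
cost = np.zeros((len(reactant), len(product)))
for i, a in enumerate(reactant):
    for j, b in enumerate(product):
        cost[i, j] = 0.0 if a == b else BIG
# small illustrative tie-breaker: prefer keeping the original atom order
cost += 0.01 * np.abs(np.subtract.outer(np.arange(len(reactant)),
                                        np.arange(len(product))))

rows, cols = linear_sum_assignment(cost)   # minimum-cost bijection
mapping = dict(zip(rows.tolist(), cols.tolist()))
```

The assignment solver guarantees a bijection, which is the defining property of a complete atom mapping; the MILP formulation generalizes this to costs defined over bonds rather than atoms.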
An Accurate liver segmentation method using parallel computing algorithm
International Nuclear Information System (INIS)
Elbasher, Eiman Mohammed Khalied
2014-12-01
Computed Tomography (CT or CAT scan) is a noninvasive diagnostic imaging procedure that uses a combination of X-rays and computer technology to produce horizontal, or axial, images (often called slices) of the body. A CT scan shows detailed images of any part of the body, including the bones, muscles, fat, and organs; CT scans are more detailed than standard X-rays. CT scans may be done with or without contrast. Contrast refers to a substance taken by mouth and/or injected into an intravenous (IV) line that causes the particular organ or tissue under study to be seen more clearly. CT scans of the liver and biliary tract are used in the diagnosis of many diseases of the abdominal structures, particularly when another type of examination, such as X-rays, physical examination, or ultrasound, is not conclusive. Unfortunately, the presence of noise and artifacts in the edges and fine details of CT images limits the contrast resolution and makes the diagnostic procedure more difficult. This experimental study was conducted at the College of Medical Radiological Science, Sudan University of Science and Technology and Fidel Specialist Hospital. The study sample included 50 patients. The main objective of this research was to study an accurate liver segmentation method using a parallel computing algorithm, and to segment the liver and adjacent organs using image processing techniques. The main segmentation technique used in this study was the watershed transform. The scope of image processing and analysis applied to medical applications is to improve the quality of the acquired image and extract quantitative information from medical image data in an efficient and accurate way. The results of this technique agreed with the results of Jarritt et al. (2010), Kratchwil et al. (2010), Jover et al. (2011), Yomamoto et al. (1996), Cai et al. (1999), and Saudha and Jayashree (2010), who used different segmentation filtering based on methods of enhancing the computed tomography images. Another…
An Accurate and Dynamic Computer Graphics Muscle Model
Levine, David Asher
1997-01-01
A computer based musculo-skeletal model was developed at the University in the departments of Mechanical and Biomedical Engineering. This model accurately represents human shoulder kinematics. The result of this model is the graphical display of bones moving through an appropriate range of motion based on inputs of EMGs and external forces. The need existed to incorporate a geometric muscle model in the larger musculo-skeletal model. Previous muscle models did not accurately represent muscle geometries, nor did they account for the kinematics of tendons. This thesis covers the creation of a new muscle model for use in the above musculo-skeletal model. This muscle model was based on anatomical data from the Visible Human Project (VHP) cadaver study. Two-dimensional digital images from the VHP were analyzed and reconstructed to recreate the three-dimensional muscle geometries. The recreated geometries were smoothed, reduced, and sliced to form data files defining the surfaces of each muscle. The muscle modeling function opened these files during run-time and recreated the muscle surface. The modeling function applied constant volume limitations to the muscle and constant geometry limitations to the tendons.
International Nuclear Information System (INIS)
Brandt, J.; Ebel, A.; Elbern, H.; Jakobs, H.; Memmesheimer, M.; Mikkelsen, T.; Thykier-Nielsen, S.; Zlatev, Z.
1997-01-01
Atmospheric transport of air pollutants is, in principle, a well understood process. If information about the state of the atmosphere is given in full detail (infinitely accurate information about wind speed, etc.) and infinitely fast computers are available, then the advection equation could in principle be solved exactly. This is, however, not the case: discretization of the equations and input data introduces some uncertainties and errors in the results. Therefore many different issues have to be carefully studied in order to diminish these uncertainties and to develop an accurate transport model. Some of these are e.g. the numerical treatment of the transport equation, accuracy of the mean meteorological input fields and parameterizations of sub-grid scale phenomena (as e.g. parameterizations of the 2nd- and higher-order turbulence terms in order to reach closure in the perturbation equation). A tracer model for studying transport and dispersion of air pollution caused by a single but strong source is under development. The model simulations from the first ETEX release illustrate the differences caused by using various analyzed fields directly in the tracer model or using a meteorological driver. Also different parameterizations of the mixing height and the vertical exchange are compared. (author)
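One of the numerical-treatment choices mentioned above, the discretization of the advection equation, can be illustrated with a first-order upwind scheme in one dimension. The grid, wind speed, and initial puff are hypothetical; the point is to show how the discretization itself introduces (diffusive) error:

```python
import numpy as np

# 1-D advection dc/dt + u dc/dx = 0, first-order upwind discretisation.
nx, dx, u, dt = 200, 1.0, 1.0, 0.5          # CFL = u*dt/dx = 0.5
x = np.arange(nx) * dx
c = np.exp(-((x - 30.0) / 5.0) ** 2)        # initial tracer puff at x = 30

for _ in range(100):                         # advect to t = 50
    c[1:] = c[1:] - u * dt / dx * (c[1:] - c[:-1])  # upwind difference
    c[0] = 0.0                               # inflow boundary condition

# the puff centre moves to roughly x = 80, but the first-order scheme
# broadens and lowers the peak through numerical diffusion
peak_x = x[np.argmax(c)]
```

Higher-order schemes reduce this artificial diffusion at the cost of possible oscillations, which is the kind of trade-off a transport-model developer has to study carefully.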
Accurate calculation of field and carrier distributions in doped semiconductors
Directory of Open Access Journals (Sweden)
Wenji Yang
2012-06-01
We use the numerical squeezing algorithm (NSA) combined with the shooting method to accurately calculate the built-in fields and carrier distributions in doped silicon films (SFs) in the micron and sub-micron thickness range, and results are presented in graphical form for a variety of doping profiles under different boundary conditions. As a complementary approach, we also present the methods and the results of the inverse problem (IVP): finding the doping profile in the SFs for a given field distribution. The solution of the IVP provides an approach to arbitrarily design the field distribution in SFs, which is very important for low-dimensional (LD) systems and device design. Furthermore, the solution of the IVP is both direct and much easier for all one-, two-, and three-dimensional semiconductor systems. With current efforts focused on LD physics, knowledge of the field and carrier distribution details in LD systems will facilitate further research on other aspects, and hence the current work provides a platform for such research.
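A generic shooting-method sketch for a screened-Poisson boundary-value problem of this type, in dimensionless units with a hypothetical doping-like source term. The paper's NSA and the actual semiconductor equations are more involved, but the structure (guess the initial slope, integrate, correct) is the same:

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import brentq

L = 1.0
def g(x):                                   # hypothetical doping-like source
    return 2.0 * np.exp(-((x - 0.5) / 0.1) ** 2)

def residual(slope0):
    # integrate phi'' = phi - g(x) with phi(0) = 0, phi'(0) = slope0
    sol = solve_ivp(lambda x, y: [y[1], y[0] - g(x)],
                    (0.0, L), [0.0, slope0], rtol=1e-9, atol=1e-12)
    return sol.y[0, -1]                     # want phi(L) = 0

slope = brentq(residual, -5.0, 5.0)         # shoot on the initial slope
```

With the correct initial slope found by root finding, a final integration gives the potential (and hence the built-in field) across the film.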
Machine learning of accurate energy-conserving molecular force fields
Chmiela, Stefan; Tkatchenko, Alexandre; Sauceda, Huziel E.; Poltavsky, Igor; Schütt, Kristof T.; Müller, Klaus-Robert
2017-01-01
Using conservation of energy—a fundamental property of closed classical and quantum mechanical systems—we develop an efficient gradient-domain machine learning (GDML) approach to construct accurate molecular force fields using a restricted number of samples from ab initio molecular dynamics (AIMD) trajectories. The GDML implementation is able to reproduce global potential energy surfaces of intermediate-sized molecules with an accuracy of 0.3 kcal mol−1 for energies and 1 kcal mol−1 Å−1 for atomic forces using only 1000 conformational geometries for training. We demonstrate this accuracy for AIMD trajectories of molecules, including benzene, toluene, naphthalene, ethanol, uracil, and aspirin. The challenge of constructing conservative force fields is accomplished in our work by learning in a Hilbert space of vector-valued functions that obey the law of energy conservation. The GDML approach enables quantitative molecular dynamics simulations for molecules at a fraction of the cost of explicit AIMD calculations, thereby allowing the construction of efficient force fields with the accuracy and transferability of high-level ab initio methods. PMID:28508076
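The gradient-domain idea, learning the force directly while keeping it consistent with a single scalar energy, can be sketched in one dimension with plain kernel ridge regression. This is a stand-in for the actual GDML kernel (which acts on vector-valued functions of molecular geometry); the potential and hyperparameters below are made up:

```python
import numpy as np

def true_energy(x):                      # hypothetical 1-D Morse-like potential
    return (1.0 - np.exp(-(x - 1.0))) ** 2

def true_force(x, h=1e-6):               # F = -dE/dx by central differences
    return -(true_energy(x + h) - true_energy(x - h)) / (2.0 * h)

X = np.linspace(0.5, 3.0, 25)            # training geometries
y = true_force(X)                        # training data are forces, not energies

gamma, lam = 10.0, 1e-8                  # RBF width and ridge regularisation
K = np.exp(-gamma * (X[:, None] - X[None, :]) ** 2)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict_force(x):
    # smooth force model; its antiderivative defines a consistent energy
    return np.exp(-gamma * (x - X) ** 2) @ alpha
```

In one dimension any smooth force model integrates to a well-defined energy; the nontrivial part of GDML is enforcing this curl-free property in many dimensions, which is what the vector-valued Hilbert-space construction above achieves.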
Computers for lattice field theories
International Nuclear Information System (INIS)
Iwasaki, Y.
1994-01-01
Parallel computers dedicated to lattice field theories are reviewed with emphasis on the three recent projects, the Teraflops project in the US, the CP-PACS project in Japan and the 0.5-Teraflops project in the US. Some new commercial parallel computers are also discussed. Recent development of semiconductor technologies is briefly surveyed in relation to possible approaches toward Teraflops computers. (orig.)
An efficient and accurate method for calculating nonlinear diffraction beam fields
Energy Technology Data Exchange (ETDEWEB)
Jeong, Hyun Jo; Cho, Sung Jong; Nam, Ki Woong; Lee, Jang Hyun [Division of Mechanical and Automotive Engineering, Wonkwang University, Iksan (Korea, Republic of)
2016-04-15
This study develops an efficient and accurate method for calculating nonlinear diffraction beam fields propagating in fluids or solids. The Westervelt equation and quasilinear theory, from which the integral solutions for the fundamental and second harmonics can be obtained, are first considered. A computationally efficient method is then developed using a multi-Gaussian beam (MGB) model that easily separates the diffraction effects from the plane wave solution. The MGB models provide accurate beam fields when compared with the integral solutions for a number of transmitter-receiver geometries. These models can also serve as fast, powerful modeling tools for many nonlinear acoustics applications, especially in making diffraction corrections for the nonlinearity parameter determination, because of their computational efficiency and accuracy.
Computers in field ion microscopy
International Nuclear Information System (INIS)
Suvorov, A.L.; Razinkova, T.L.; Sokolov, A.G.
1980-01-01
A review is presented of computer applications in field ion microscopy (FIM). The following topics are discussed in detail: (1) modeling field ion images in perfect crystals, (2) a general scheme of modeling, (3) modeling of the process of field evaporation, (4) crystal structure defects, (5) alloys, and (6) automation of FIM experiments and computer-assisted processing of real images. 146 references are given
Visualizing the Computational Intelligence Field
L. Waltman (Ludo); J.H. van den Berg (Jan); U. Kaymak (Uzay); N.J.P. van Eck (Nees Jan)
2006-01-01
In this paper, we visualize the structure and the evolution of the computational intelligence (CI) field. Based on our visualizations, we analyze the way in which the CI field is divided into several subfields. The visualizations provide insight into the characteristics of each subfield.
International Nuclear Information System (INIS)
Komatsu, Sei; Imai, Atsuko; Kodama, Kazuhisa
2011-01-01
Over the past decade, multidetector row computed tomography (MDCT) has become the most reliable and established of the noninvasive examination techniques for detecting coronary heart disease. Now MDCT is chasing intravascular ultrasound (IVUS) in terms of spatial resolution. Among the components of vulnerable plaque, MDCT may detect lipid-rich plaque, the lipid pool, and calcified spots using computed tomography number. Plaque components are detected by MDCT with high accuracy compared with IVUS and angioscopy when assessing vulnerable plaque. The TWINS study and TOGETHAR trial demonstrated that angioscopic loss of yellow color occurred independently of volumetric plaque change by statin therapy. These 2 studies showed that plaque stabilization and regression reflect independent processes mediated by different mechanisms and time course. Noncalcified plaque and/or low-density plaque was found to be the strongest predictor of cardiac events, regardless of lesion severity, and act as a potential marker of plaque vulnerability. MDCT may be an effective tool for early triage of patients with chest pain who have a normal electrocardiogram (ECG) and cardiac enzymes in the emergency department. MDCT has the potential ability to analyze coronary plaque quantitatively and qualitatively if some problems are resolved. MDCT may become an essential tool for detecting and preventing coronary artery disease in the future. (author)
Accurate and efficient computation of synchrotron radiation functions
International Nuclear Information System (INIS)
MacLeod, Allan J.
2000-01-01
We consider the computation of three functions which appear in the theory of synchrotron radiation. These are F(x) = x ∫_x^∞ K_{5/3}(y) dy, F_p(x) = x K_{2/3}(x), and G_p(x) = x^{1/3} K_{1/3}(x), where K_ν denotes a modified Bessel function. Chebyshev series coefficients are given which enable the functions to be computed with an accuracy of up to 15 significant figures.
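For reference, all three functions can be evaluated directly from scipy's modified Bessel function K_ν. This is slower than a tuned Chebyshev expansion but handy as a cross-check:

```python
import numpy as np
from scipy.special import kv
from scipy.integrate import quad

# Direct evaluation of the synchrotron radiation functions.
def F(x):
    val, _ = quad(lambda y: kv(5.0 / 3.0, y), x, np.inf)
    return x * val                         # F(x) = x * integral_x^inf K_{5/3}(y) dy

def F_p(x):
    return x * kv(2.0 / 3.0, x)            # F_p(x) = x K_{2/3}(x)

def G_p(x):
    return x ** (1.0 / 3.0) * kv(1.0 / 3.0, x)   # G_p(x) = x^{1/3} K_{1/3}(x)
```

The quadrature in F(x) is well behaved because K_{5/3} decays exponentially, but repeated evaluation in an inner loop is exactly the situation where precomputed Chebyshev coefficients pay off.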
Simple, accurate equations for human blood O2 dissociation computations.
Severinghaus, J W
1979-03-01
Hill's equation can be slightly modified to fit the standard human blood O2 dissociation curve to within ±0.0055 fractional saturation (S) for 0 < S < 1. Other modifications of Hill's equation may be used to compute Po2 (Torr) from S (Eq. 2), and the temperature coefficient of Po2 (Eq. 3). Variations of the Bohr coefficient with Po2 are given by Eq. 4.
S = ((Po2^3 + 150 Po2)^-1 × 23,400 + 1)^-1 (1)
ln Po2 = 0.385 ln (S^-1 − 1)^-1 + 3.32 − (72 S)^-1 − 0.17 S^6 (2)
Δln Po2/ΔT = 0.058 ((0.243 × Po2/100)^3.88 + 1)^-1 + 0.013 (3)
δln Po2/δpH = (Po2/26.6)^0.184 − 2.2 (4)
Procedures are described to determine Po2 and S of blood iteratively after extraction or addition of a defined amount of O2 and to compute P50 of blood from a single sample after measuring Po2, pH, and S.
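Eq. 1 is straightforward to implement directly, and P50 can be obtained by numerically inverting it (a bisection sketch; Eq. 2 provides the closed-form inverse):

```python
# Severinghaus Eq. 1: fractional O2 saturation from Po2 (Torr) for the
# standard human blood O2 dissociation curve.
def saturation(po2):
    return 1.0 / (23400.0 / (po2 ** 3 + 150.0 * po2) + 1.0)

# Invert Eq. 1 by bisection (the curve is monotonically increasing).
def po2_from_saturation(s, lo=1e-3, hi=700.0):
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if saturation(mid) < s:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

p50 = po2_from_saturation(0.5)   # Po2 at half saturation, ~27 Torr
```

At an arterial Po2 of 100 Torr the formula gives a saturation near 0.98, consistent with the standard curve.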
Accurate magnetic field calculations for contactless energy transfer coils
Sonntag, C.L.W.; Spree, M.; Lomonova, E.A.; Duarte, J.L.; Vandenput, A.J.A.
2007-01-01
In this paper, a method for estimating the magnetic field intensity from hexagon spiral windings commonly found in contactless energy transfer applications is presented. The hexagonal structures are modeled in a magneto-static environment using Biot-Savart current stick vectors. The accuracy of the models are evaluated by mapping the current sticks and the hexagon spiral winding tracks to a local twodimensional plane, and comparing their two-dimensional magnetic field intensities. The accurac...
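A single Biot–Savart current stick can be coded from the standard closed-form result for a straight segment; the long-wire check below recovers μ0 I/(2π d). SI units, and the geometry values are illustrative:

```python
import numpy as np

MU0 = 4e-7 * np.pi   # vacuum permeability, T*m/A

def current_stick_field(a, b, I, r):
    """B-field at point r from a straight segment a -> b carrying current I."""
    a, b, r = np.asarray(a, float), np.asarray(b, float), np.asarray(r, float)
    L, ra, rb = b - a, r - a, r - b
    cross = np.cross(L, ra)
    d2 = np.dot(cross, cross) / np.dot(L, L)   # squared perpendicular distance
    if d2 < 1e-30:
        return np.zeros(3)                      # observation point on the wire axis
    bracket = (np.dot(L, ra) / np.linalg.norm(ra)
               - np.dot(L, rb) / np.linalg.norm(rb))
    return MU0 * I / (4.0 * np.pi * d2) * bracket * cross / np.dot(L, L)

# sanity check: a very long wire along z, observed 1 m away in x
B = current_stick_field([0, 0, -1000.0], [0, 0, 1000.0], 1.0, [1.0, 0, 0])
```

A hexagonal spiral winding would then be assembled as a sum of such sticks, one per straight track segment, which is the modeling strategy the abstract describes.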
Quantum fields on the computer
1992-01-01
This book provides an overview of recent progress in computer simulations of nonperturbative phenomena in quantum field theory, particularly in the context of the lattice approach. It is a collection of extensive self-contained reviews of various subtopics, including algorithms, spectroscopy, finite temperature physics, Yukawa and chiral theories, bounds on the Higgs meson mass, the renormalization group, and weak decays of hadrons.Physicists with some knowledge of lattice gauge ideas will find this book a useful and interesting source of information on the recent developments in the field.
Lee, M.W.; Meuwly, M.
2013-01-01
The evaluation of hydration free energies is a sensitive test to assess force fields used in atomistic simulations. We showed recently that the vibrational relaxation times, 1D- and 2D-infrared spectroscopies for CN(-) in water can be quantitatively described from molecular dynamics (MD) simulations with multipolar force fields and slightly enlarged van der Waals radii for the C- and N-atoms. To validate such an approach, the present work investigates the solvation free energy of cyanide in water using MD simulations with accurate multipolar electrostatics. It is found that larger van der Waals radii are indeed necessary to obtain results close to the experimental values when a multipolar force field is used. For CN(-), the van der Waals ranges refined in our previous work yield hydration free energy between -72.0 and -77.2 kcal mol(-1), which is in excellent agreement with the experimental data. In addition to the cyanide ion, we also study the hydroxide ion to show that the method used here is readily applicable to similar systems. Hydration free energies are found to sensitively depend on the intermolecular interactions, while bonded interactions are less important, as expected. We also investigate in the present work the possibility of applying the multipolar force field in scoring trajectories generated using computationally inexpensive methods, which should be useful in broader parametrization studies with reduced computational resources, as scoring is much faster than the generation of the trajectories.
A simplified approach to characterizing a kilovoltage source spectrum for accurate dose computation
Energy Technology Data Exchange (ETDEWEB)
Poirier, Yannick; Kouznetsov, Alexei; Tambasco, Mauro [Department of Physics and Astronomy, University of Calgary, Calgary, Alberta T2N 4N2 (Canada); Department of Physics and Astronomy and Department of Oncology, University of Calgary and Tom Baker Cancer Centre, Calgary, Alberta T2N 4N2 (Canada)
2012-06-15
Purpose: To investigate and validate the clinical feasibility of using half-value layer (HVL) and peak tube potential (kVp) for characterizing a kilovoltage (kV) source spectrum for the purpose of computing kV x-ray dose accrued from imaging procedures. To use this approach to characterize a Varian® On-Board Imager® (OBI) source and perform experimental validation of a novel in-house hybrid dose computation algorithm for kV x-rays. Methods: We characterized the spectrum of an imaging kV x-ray source using the HVL and the kVp as the sole beam quality identifiers using third-party freeware Spektr to generate the spectra. We studied the sensitivity of our dose computation algorithm to uncertainties in the beam's HVL and kVp by systematically varying these spectral parameters. To validate our approach experimentally, we characterized the spectrum of a Varian® OBI system by measuring the HVL using a Farmer-type Capintec ion chamber (0.06 cc) in air and compared dose calculations using our computationally validated in-house kV dose calculation code to measured percent depth-dose and transverse dose profiles for 80, 100, and 125 kVp open beams in a homogeneous phantom and a heterogeneous phantom comprising tissue, lung, and bone equivalent materials. Results: The sensitivity analysis of the beam quality parameters (i.e., HVL, kVp, and field size) on dose computation accuracy shows that typical measurement uncertainties in the HVL and kVp (±0.2 mm Al and ±2 kVp, respectively) source characterization parameters lead to dose computation errors of less than 2%. Furthermore, for an open beam with no added filtration, HVL variations affect dose computation accuracy by less than 1% for a 125 kVp beam when field size is varied from 5 × 5 cm² to 40 × 40 cm². The central axis depth dose calculations and experimental measurements for the 80, 100, and 125 kVp energies agreed within
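At its core, the HVL characterization above amounts to finding the filter thickness that halves the transmitted beam. A minimal sketch, assuming a hypothetical two-bin spectrum and illustrative (not measured) aluminum attenuation coefficients:

```python
import math

# Hypothetical two-bin spectrum: relative fluence weights and aluminum
# linear attenuation coefficients (1/mm). Illustrative numbers only,
# not data for any real kV beam.
weights = [0.6, 0.4]
mu_al = [0.5, 0.2]

def transmission(t_mm):
    """Fraction of the (fluence-weighted) beam surviving t_mm of Al."""
    passed = sum(w * math.exp(-mu * t_mm) for w, mu in zip(weights, mu_al))
    return passed / sum(weights)

def hvl(lo=0.0, hi=50.0, tol=1e-10):
    """Al thickness that halves the beam, found by bisection
    (transmission is monotone decreasing in thickness)."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        lo, hi = (mid, hi) if transmission(mid) > 0.5 else (lo, mid)
    return 0.5 * (lo + hi)
```

Because the transmitted spectrum hardens with depth, the polychromatic HVL always falls between the monoenergetic HVLs ln2/μ of the component energies.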
Zhu, Wuming; Trickey, S. B.
2017-12-01
In high magnetic field calculations, anisotropic Gaussian type orbital (AGTO) basis functions are capable of reconciling the competing demands of the spherically symmetric Coulombic interaction and cylindrical magnetic (B field) confinement. However, the best available a priori procedure for composing highly accurate AGTO sets for atoms in a strong B field [W. Zhu et al., Phys. Rev. A 90, 022504 (2014)] yields very large basis sets. Their size is problematical for use in any calculation with unfavorable computational cost scaling. Here we provide an alternative constructive procedure. It is based upon analysis of the underlying physics of atoms in B fields that allows identification of several principles for the construction of AGTO basis sets. Aided by numerical optimization and parameter fitting, followed by fine tuning of fitting parameters, we devise formulae for generating accurate AGTO basis sets in an arbitrary B field. For the hydrogen iso-electronic sequence, a set depends on B field strength, nuclear charge, and orbital quantum numbers. For multi-electron systems, the basis set formulae also include adjustment to account for orbital occupations. Tests of the new basis sets for atoms H through C (1 ≤ Z ≤ 6) and ions Li+, Be+, and B+, in a wide B field range (0 ≤ B ≤ 2000 a.u.), show an accuracy better than a few μhartree for single-electron systems and a few hundredths to a few millihartrees for multi-electron atoms. The relative errors are similar for different atoms and ions in a large B field range, from a few to a couple of tens of millionths, thereby confirming rather uniform accuracy across the nuclear charge Z and B field strength values. Residual basis set errors are two to three orders of magnitude smaller than the electronic correlation energies in multi-electron atoms, a signal of the usefulness of the new AGTO basis sets in correlated wavefunction or density functional calculations for atomic and molecular systems in an external strong B field.
Lee, Kuo Hao; Chen, Jianhan
2017-06-15
Accurate treatment of solvent environment is critical for reliable simulations of protein conformational equilibria. Implicit treatment of solvation, such as with the generalized Born (GB) class of models, arguably provides an optimal balance between computational efficiency and physical accuracy. Yet, GB models are frequently plagued by a tendency to generate overly compact structures. The physical origins of this drawback are relatively well understood, and the key to a balanced implicit solvent protein force field is careful optimization of physical parameters to achieve a sufficient level of cancellation of errors. The latter has been hampered by the difficulty of generating converged conformational ensembles of non-trivial model proteins using the popular replica exchange sampling technique. Here, we leverage improved sampling efficiency of a newly developed multi-scale enhanced sampling technique to re-optimize the generalized-Born with molecular volume (GBMV2) implicit solvent model with the CHARMM36 protein force field. Recursive optimization of key GBMV2 parameters (such as input radii) and protein torsion profiles (via the CMAP torsion cross terms) has led to a more balanced GBMV2 protein force field that recapitulates the structures and stabilities of both helical and β-hairpin model peptides. Importantly, this force field appears to be free of the over-compaction bias, and can generate structural ensembles of several intrinsically disordered proteins of various lengths that seem highly consistent with available experimental data. © 2017 Wiley Periodicals, Inc.
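The GB class of models referred to above approximates the polar solvation energy with a pairwise analytic formula. A minimal sketch of the classic Still-style GB expression with hypothetical charges and Born radii (this is not the GBMV2 model itself, which computes effective radii from the molecular volume):

```python
import numpy as np

def gb_polar_energy(q, pos, born_radii, eps_in=1.0, eps_out=78.5):
    """Still-style generalized Born polar solvation energy.

    Units: charges in e, distances and Born radii in Angstrom, energy in
    kcal/mol (ke = 332.06 is the Coulomb constant in these units).
    The double sum includes the i == j self terms, which reduce to the
    Born ion formula for a single charge."""
    ke = 332.06
    e = 0.0
    for i in range(len(q)):
        for j in range(len(q)):
            r2 = float(np.sum((pos[i] - pos[j]) ** 2))
            rb = born_radii[i] * born_radii[j]
            # Still et al. smooth interpolation between Coulomb and Born
            fgb = np.sqrt(r2 + rb * np.exp(-r2 / (4.0 * rb)))
            e += q[i] * q[j] / fgb
    return -0.5 * ke * (1.0 / eps_in - 1.0 / eps_out) * e
```

For one ion this collapses to the Born formula, a quick correctness check before using such a sketch on anything larger.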
Computational methods for reversed-field equilibrium
International Nuclear Information System (INIS)
Boyd, J.K.; Auerbach, S.P.; Willmann, P.A.; Berk, H.L.; McNamara, B.
1980-01-01
Investigating the temporal evolution of reversed-field equilibrium caused by transport processes requires the solution of the Grad-Shafranov equation and computation of field-line-averaged quantities. The technique for field-line averaging and the computation of the Grad-Shafranov equation are presented. Application of Green's function to specify the Grad-Shafranov equation boundary condition is discussed. Hill's vortex formulas used to verify certain computations are detailed. Use of computer software to implement computational methods is described
Fast and Accurate Computation of Gauss--Legendre and Gauss--Jacobi Quadrature Nodes and Weights
Hale, Nicholas; Townsend, Alex
2013-01-01
An efficient algorithm for the accurate computation of Gauss-Legendre and Gauss-Jacobi quadrature nodes and weights is presented. The algorithm is based on Newton's root-finding method with initial guesses and function evaluations computed via asymptotic formulae. The n-point quadrature rule is computed in O(n) operations to an accuracy of essentially double precision for any n ≥ 100. © 2013 Society for Industrial and Applied Mathematics.
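The Newton-iteration core of such an algorithm can be sketched as follows. This simplified version uses Chebyshev-like initial guesses and the three-term recurrence rather than the paper's asymptotic formulae, so it costs O(n²) instead of O(n), but it illustrates the root-finding structure:

```python
import numpy as np

def gauss_legendre(n, tol=1e-14, max_iter=100):
    """Gauss-Legendre nodes and weights via Newton's method on P_n.

    P_n and P_n' are evaluated with the three-term recurrence; the
    O(n) algorithm of the abstract replaces this with asymptotic
    formulae for both the initial guesses and the evaluations."""
    k = np.arange(1, n + 1)
    x = np.cos(np.pi * (k - 0.25) / (n + 0.5))   # initial guesses (descending)
    for _ in range(max_iter):
        p0, p1 = np.ones_like(x), x.copy()
        for j in range(2, n + 1):                # recurrence up to P_n
            p0, p1 = p1, ((2 * j - 1) * x * p1 - (j - 1) * p0) / j
        dp = n * (x * p1 - p0) / (x * x - 1.0)   # P_n'(x)
        dx = p1 / dp
        x -= dx                                  # Newton update
        if np.max(np.abs(dx)) < tol:
            break
    w = 2.0 / ((1.0 - x * x) * dp * dp)          # quadrature weights
    return x, w
```

The resulting rule integrates polynomials up to degree 2n-1 exactly, which is easy to verify against numpy's built-in `leggauss`.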
Wang, Boshuo; Aberra, Aman S; Grill, Warren M; Peterchev, Angel V
2018-04-01
We present a theory and computational methods to incorporate transverse polarization of neuronal membranes into the cable equation to account for the secondary electric field generated by the membrane in response to transverse electric fields. The effect of transverse polarization on nonlinear neuronal activation thresholds is quantified and discussed in the context of previous studies using linear membrane models. The response of neuronal membranes to applied electric fields is derived under two time scales and a unified solution of transverse polarization is given for spherical and cylindrical cell geometries. The solution is incorporated into the cable equation re-derived using an asymptotic model that separates the longitudinal and transverse dimensions. Two numerical methods are proposed to implement the modified cable equation. Several common neural stimulation scenarios are tested using two nonlinear membrane models to compare thresholds of the conventional and modified cable equations. The implementations of the modified cable equation incorporating transverse polarization are validated against previous results in the literature. The test cases show that transverse polarization has limited effect on activation thresholds. The transverse field only affects thresholds of unmyelinated axons for short pulses and in low-gradient field distributions, whereas myelinated axons are mostly unaffected. The modified cable equation captures the membrane's behavior on different time scales and models more accurately the coupling between electric fields and neurons. It addresses the limitations of the conventional cable equation and allows sound theoretical interpretations. The implementation provides simple methods that are compatible with current simulation approaches to study the effect of transverse polarization on nonlinear membranes. The minimal influence by transverse polarization on axonal activation thresholds for the nonlinear membrane models indicates that
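For intuition about the conventional (longitudinal-only) cable equation that the record modifies, its passive linear form can be integrated with a simple explicit finite-difference scheme and checked against the analytic steady state. This sketch omits transverse polarization and nonlinear membrane models entirely, and all parameter values are illustrative:

```python
import numpy as np

# Passive cable: tau * dV/dt = lam^2 * V_xx - V, voltage-clamped to
# V = 1 at x = 0 and sealed (zero-flux) at x = L.
lam, tau, L = 1.0, 10.0, 5.0     # space constant (mm), time constant (ms), length (mm)
dx, dt = 0.05, 0.01              # grid (mm) and step (ms); dt is below the
                                 # explicit stability limit dx**2 * tau / (2 * lam**2)
n = int(round(L / dx)) + 1
V = np.zeros(n)
V[0] = 1.0                       # clamp at x = 0

for _ in range(20000):           # 200 ms = 20 tau, well past steady state
    Vg = np.concatenate([V, [V[-2]]])             # mirror ghost node: sealed end
    lap = (Vg[2:] - 2.0 * V[1:] + V[:-1]) / dx**2
    V[1:] += dt / tau * (lam**2 * lap - V[1:])

x = np.linspace(0.0, L, n)
analytic = np.cosh((L - x) / lam) / np.cosh(L / lam)  # steady-state solution
```

At steady state the numerical profile should match the cosh solution to second order in dx.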
Energy Technology Data Exchange (ETDEWEB)
Bauer, Sebastian; Mathias, Gerald; Tavan, Paul, E-mail: paul.tavan@physik.uni-muenchen.de [Lehrstuhl für BioMolekulare Optik, Ludwig–Maximilians Universität München, Oettingenstr. 67, 80538 München (Germany)
2014-03-14
We present a reaction field (RF) method which accurately solves the Poisson equation for proteins embedded in dielectric solvent continua at a computational effort comparable to that of an electrostatics calculation with polarizable molecular mechanics (MM) force fields. The method combines an approach originally suggested by Egwolf and Tavan [J. Chem. Phys. 118, 2039 (2003)] with concepts generalizing the Born solution [Z. Phys. 1, 45 (1920)] for a solvated ion. First, we derive an exact representation according to which the sources of the RF potential and energy are inducible atomic anti-polarization densities and atomic shielding charge distributions. Modeling these atomic densities by Gaussians leads to an approximate representation. Here, the strengths of the Gaussian shielding charge distributions are directly given in terms of the static partial charges as defined, e.g., by standard MM force fields for the various atom types, whereas the strengths of the Gaussian anti-polarization densities are calculated by a self-consistency iteration. The atomic volumes are also described by Gaussians. To account for covalently overlapping atoms, their effective volumes are calculated by another self-consistency procedure, which guarantees that the dielectric function ε(r) is close to one everywhere inside the protein. The Gaussian widths σᵢ of the atoms i are parameters of the RF approximation. The remarkable accuracy of the method is demonstrated by comparison with Kirkwood's analytical solution for a spherical protein [J. Chem. Phys. 2, 351 (1934)] and with computationally expensive grid-based numerical solutions for simple model systems in dielectric continua including a di-peptide (Ac-Ala-NHMe) as modeled by a standard MM force field. The latter example shows how weakly the RF conformational free energy landscape depends on the parameters σᵢ. A summarizing discussion highlights the achievements of the new theory and of its approximate solution
A method for accurate computation of elastic and discrete inelastic scattering transfer matrix
International Nuclear Information System (INIS)
Garcia, R.D.M.; Santina, M.D.
1986-05-01
A method for accurate computation of elastic and discrete inelastic scattering transfer matrices is discussed. In particular, a partition scheme for the source energy range that avoids integration over intervals containing points where the integrand has a discontinuous derivative is developed. Five-figure accurate numerical results are obtained for several test problems with the TRAMA program, which incorporates the proposed method. A comparison with numerical results from existing processing codes is also presented. (author)
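The benefit of partitioning an integration range at points of discontinuous derivative can be illustrated with a fixed Gauss-Legendre rule applied to a toy kinked integrand (not the scattering kernels treated by TRAMA):

```python
import numpy as np

def gauss_on(f, a, b, n=10):
    """Fixed n-point Gauss-Legendre rule mapped to [a, b]."""
    x, w = np.polynomial.legendre.leggauss(n)
    mid, half = 0.5 * (a + b), 0.5 * (b - a)
    return half * float(np.sum(w * f(mid + half * x)))

f = lambda x: np.abs(x - 0.3)        # discontinuous derivative (kink) at x = 0.3
exact = 0.3**2 / 2 + 0.7**2 / 2      # integral of |x - 0.3| over [0, 1] = 0.29

err_whole = abs(gauss_on(f, 0.0, 1.0) - exact)   # rule straddles the kink
err_split = abs(gauss_on(f, 0.0, 0.3) + gauss_on(f, 0.3, 1.0) - exact)
```

On each subinterval the integrand is linear, so the partitioned rule is exact to rounding, while the rule that straddles the kink converges only slowly.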
Spherical near-field antenna measurements — The most accurate antenna measurement technique
DEFF Research Database (Denmark)
Breinbjerg, Olav
2016-01-01
The spherical near-field antenna measurement technique combines several advantages and generally constitutes the most accurate technique for experimental characterization of radiation from antennas. This paper/presentation discusses these advantages, briefly reviews the early history and present...
Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter
2009-03-31
AFRL-RV-HA-TR-2009-1055, Accurate Modeling of Ionospheric Electromagnetic Fields Generated by a Low Altitude VLF Transmitter. Final scientific report, dates covered 02-08-2006 – 31-12-2008. …m (or even 500 m) at mid to high latitudes. At low latitudes, the FDTD model exhibits variations that make it difficult to determine a reliable…
DEFF Research Database (Denmark)
Blasques, José Pedro Albergaria Amaral; Bitsche, Robert
2015-01-01
This paper proposes a novel, efficient, and accurate framework for fracture analysis of beam structures with longitudinal cracks. The three-dimensional local stress field is determined using a high-fidelity beam model incorporating a finite element based cross section analysis tool. The Virtual Crack Closure Technique is used for computation of strain energy release rates. The devised framework was employed for analysis of cracks in beams with different cross section geometries. The results show that the accuracy of the proposed method is comparable to that of conventional three-dimensional solid finite element models while using only a fraction of the computation time.
Defect correction and multigrid for an efficient and accurate computation of airfoil flows
Koren, B.
1988-01-01
Results are presented for an efficient solution method for second-order accurate discretizations of the 2D steady Euler equations. The solution method is based on iterative defect correction. Several schemes are considered for the computation of the second-order defect. In each defect correction
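Iterative defect correction in its generic form repeatedly solves with a cheap approximate operator and corrects using the defect (residual) of the target discretization. A toy linear-algebra sketch of this iteration, not the Euler-flow setting of the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 20
A = rng.standard_normal((n, n))
A += np.diag(np.abs(A).sum(axis=1) + 1.0)  # make A strictly diagonally dominant
f = rng.standard_normal(n)

# Cheap, easily inverted approximation M of the target operator A
# (here simply its diagonal).
M = np.diag(np.diag(A))

u = np.zeros(n)
for _ in range(500):
    defect = f - A @ u                  # defect of the target discretization
    u += np.linalg.solve(M, defect)     # correction with the cheap operator

err = float(np.linalg.norm(u - np.linalg.solve(A, f)))
```

The iteration converges whenever the spectral radius of I - M⁻¹A is below one, which the diagonal dominance above guarantees; in the paper's setting M is a first-order-accurate discretization and A the second-order one.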
Automated Development of Accurate Algorithms and Efficient Codes for Computational Aeroacoustics
Goodrich, John W.; Dyson, Rodger W.
1999-01-01
The simulation of sound generation and propagation in three space dimensions with realistic aircraft components is a very large time dependent computation with fine details. Simulations in open domains with embedded objects require accurate and robust algorithms for propagation, for artificial inflow and outflow boundaries, and for the definition of geometrically complex objects. The development, implementation, and validation of methods for solving these demanding problems is being done to support the NASA pillar goals for reducing aircraft noise levels. Our goal is to provide algorithms which are sufficiently accurate and efficient to produce usable results rapidly enough to allow design engineers to study the effects on sound levels of design changes in propulsion systems, and in the integration of propulsion systems with airframes. There is a lack of design tools for these purposes at this time. Our technical approach to this problem combines the development of new algorithms with the use of Mathematica and Unix utilities to automate the algorithm development, code implementation, and validation. We use explicit methods to ensure effective implementation by domain decomposition for SPMD parallel computing. There are several orders of magnitude difference in the computational efficiencies of the algorithms which we have considered. We currently have new artificial inflow and outflow boundary conditions that are stable, accurate, and unobtrusive, with implementations that match the accuracy and efficiency of the propagation methods. The artificial numerical boundary treatments have been proven to have solutions which converge to the full open domain problems, so that the error from the boundary treatments can be driven as low as is required. The purpose of this paper is to briefly present a method for developing highly accurate algorithms for computational aeroacoustics, the use of computer automation in this process, and a brief survey of the algorithms that
Computer-based personality judgments are more accurate than those made by humans
Youyou, Wu; Kosinski, Michal; Stillwell, David
2015-01-01
Judging others’ personalities is an essential skill in successful social living, as personality is a key driver behind people’s interactions, behaviors, and emotions. Although accurate personality judgments stem from social-cognitive skills, developments in machine learning show that computer models can also make valid judgments. This study compares the accuracy of human and computer-based personality judgments, using a sample of 86,220 volunteers who completed a 100-item personality questionnaire. We show that (i) computer predictions based on a generic digital footprint (Facebook Likes) are more accurate (r = 0.56) than those made by the participants’ Facebook friends using a personality questionnaire (r = 0.49); (ii) computer models show higher interjudge agreement; and (iii) computer personality judgments have higher external validity when predicting life outcomes such as substance use, political attitudes, and physical health; for some outcomes, they even outperform the self-rated personality scores. Computers outpacing humans in personality judgment presents significant opportunities and challenges in the areas of psychological assessment, marketing, and privacy. PMID:25583507
Fast magnetic field computation in fusion technology using GPU technology
Energy Technology Data Exchange (ETDEWEB)
Chiariello, Andrea Gaetano [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Formisano, Alessandro, E-mail: Alessandro.Formisano@unina2.it [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy); Martone, Raffaele [Ass. EURATOM/ENEA/CREATE, Dipartimento di Ingegneria Industriale e dell’Informazione, Seconda Università di Napoli, Via Roma 29, Aversa (CE) (Italy)
2013-10-15
Highlights: ► The paper deals with high-accuracy numerical simulations of high-field magnets. ► Porting existing codes to High Performance Computing architectures yielded a relevant speedup without reducing computational accuracy. ► Some examples of applications, referring to ITER-like magnets, are reported. -- Abstract: One of the main issues in simulating Tokamak operation is the reliable and accurate computation of actual field maps in the plasma chamber. In this paper a tool able to accurately compute the magnetic field maps produced by active coils of any 3D shape, wound with a high number of conductors, is presented. Under the linearity assumption, the coil winding is modeled by means of “sticks” following each conductor's shape, and the contribution of each stick is computed using high-speed graphics processing units (GPUs). Relevant speed enhancements with respect to a standard parallel computing environment are achieved in this way.
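The "stick" model described above is attractive for GPU parallelization because each straight segment has a closed-form Biot-Savart field. A CPU sketch of the standard per-stick formula, validated on a square loop (the geometry and current are illustrative):

```python
import numpy as np

MU0 = 4e-7 * np.pi  # vacuum permeability, T*m/A

def stick_field(p, a, b, current):
    """Magnetic field at point p from a straight current 'stick' a -> b
    (analytic Biot-Savart result for a finite segment)."""
    p, a, b = map(np.asarray, (p, a, b))
    r1, r2, L = p - a, p - b, b - a
    c = np.cross(L, r1)
    c2 = float(np.dot(c, c))
    if c2 == 0.0:               # field point on the stick's axis
        return np.zeros(3)
    scale = np.dot(L, r1) / np.linalg.norm(r1) - np.dot(L, r2) / np.linalg.norm(r2)
    return MU0 * current / (4.0 * np.pi) * c / c2 * scale

# Square loop of side 1 m carrying 1 A, modeled as four sticks.
corners = [np.array(v, float) for v in
           [(-0.5, -0.5, 0), (0.5, -0.5, 0), (0.5, 0.5, 0), (-0.5, 0.5, 0)]]
B = sum(stick_field((0.0, 0.0, 0.0), corners[i], corners[(i + 1) % 4], 1.0)
        for i in range(4))
```

For a square loop of side a the analytic field at the center is 2√2 μ₀I/(πa), a convenient check; a production coil model would sum thousands of such sticks per conductor, which is the embarrassingly parallel step offloaded to the GPU.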
Energy Technology Data Exchange (ETDEWEB)
Allardice, J.T.; Jacomb-Hood, J.; Abulafi, A.M.; Williams, N.S. (Royal London Hospital (United Kingdom)); Cookson, J.; Dykes, E.; Holman, J. (London Hospital Medical College (United Kingdom))
1993-05-01
There is a need for accurate surface area measurement of internal anatomical structures in order to define light dosimetry in adjunctive intraoperative photodynamic therapy (AIOPDT). The authors investigated whether computer-assisted triangulation of serial sections generated by computed tomography (CT) scanning can give an accurate assessment of the surface area of the walls of the true pelvis after anterior resection and before colorectal anastomosis. They show that the technique of paper density tessellation is an acceptable method of measuring the surface areas of phantom objects, with a maximum error of 0.5%, and is used as the gold standard. Computer-assisted triangulation of CT images of standard geometric objects and accurately-constructed pelvic phantoms gives a surface area assessment with a maximum error of 2.5% compared with the gold standard. The CT images of 20 patients' pelves have been analysed by computer-assisted triangulation and this shows the surface area of the walls varies from 143 cm² to 392 cm². (Author).
International Nuclear Information System (INIS)
Jain, P.C.
1985-12-01
The monthly average daily values of the extraterrestrial irradiation on a horizontal plane and the maximum possible sunshine duration are two important parameters that are frequently needed in various solar energy applications. These are generally calculated by solar scientists and engineers each time they are needed, often by using approximate short-cut methods. Using the accurate analytical expressions developed by Spencer for the declination and the eccentricity correction factor, computations for these parameters have been made for all the latitude values from 90 deg. N to 90 deg. S at intervals of 1 deg. and are presented in a convenient tabular form. Monthly average daily values of the maximum possible sunshine duration as recorded on a Campbell–Stokes sunshine recorder are also computed and presented. These tables would avoid the need for repetitive and approximate calculations and serve as a useful ready reference for providing accurate values to solar energy scientists and engineers.
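Spencer's Fourier-series expressions referred to above are short enough to state directly. A sketch of the declination, eccentricity correction factor, day length, and daily extraterrestrial irradiation (the solar constant value is an assumption of this sketch):

```python
import math

def spencer(day_of_year):
    """Solar declination (rad) and eccentricity correction factor E0
    from Spencer's Fourier series."""
    g = 2.0 * math.pi * (day_of_year - 1) / 365.0
    decl = (0.006918 - 0.399912 * math.cos(g) + 0.070257 * math.sin(g)
            - 0.006758 * math.cos(2 * g) + 0.000907 * math.sin(2 * g)
            - 0.002697 * math.cos(3 * g) + 0.00148 * math.sin(3 * g))
    e0 = (1.000110 + 0.034221 * math.cos(g) + 0.001280 * math.sin(g)
          + 0.000719 * math.cos(2 * g) + 0.000077 * math.sin(2 * g))
    return decl, e0

def day_length_hours(lat_deg, day_of_year):
    """Maximum possible sunshine duration (hours)."""
    decl, _ = spencer(day_of_year)
    phi = math.radians(lat_deg)
    cos_ws = max(-1.0, min(1.0, -math.tan(phi) * math.tan(decl)))  # polar clamp
    return (24.0 / math.pi) * math.acos(cos_ws)

def daily_extraterrestrial(lat_deg, day_of_year, solar_constant=1367.0):
    """Daily extraterrestrial irradiation on a horizontal plane, Wh/m^2."""
    decl, e0 = spencer(day_of_year)
    phi = math.radians(lat_deg)
    ws = math.acos(max(-1.0, min(1.0, -math.tan(phi) * math.tan(decl))))
    return (24.0 / math.pi) * solar_constant * e0 * (
        math.cos(phi) * math.cos(decl) * math.sin(ws)
        + ws * math.sin(phi) * math.sin(decl))
```

Sanity checks: the declination peaks near 23.45 deg at the June solstice, E0 exceeds one near perihelion in early January, and the day length at the equator is always close to 12 hours.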
Directory of Open Access Journals (Sweden)
Heng-Yi Su
2016-11-01
This paper proposes an efficient approach for the computation of voltage stability margin (VSM) in a large-scale power grid. The objective is to accurately and rapidly determine the load power margin which corresponds to voltage collapse phenomena. The proposed approach is based on the impedance match-based technique and the model-based technique. It combines the Thevenin equivalent (TE) network method with cubic spline extrapolation technique and the continuation technique to achieve fast and accurate VSM computation for a bulk power grid. Moreover, the generator Q limits are taken into account for practical applications. Extensive case studies carried out on Institute of Electrical and Electronics Engineers (IEEE) benchmark systems and the Taiwan Power Company (Taipower, Taipei, Taiwan) system are used to demonstrate the effectiveness of the proposed approach.
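The impedance-match idea underlying the TE method is that maximum power transfer, and hence voltage collapse, occurs when the load impedance magnitude equals the Thevenin impedance magnitude. A deliberately simplified DC sketch, identifying E and R from two operating points (the paper's method additionally uses cubic-spline extrapolation, continuation, and generator Q limits, none of which appear here):

```python
def thevenin_from_measurements(v1, i1, v2, i2):
    """Recover Thevenin voltage E and resistance R of V = E - R*I from
    two measured operating points."""
    r = (v1 - v2) / (i2 - i1)
    return v1 + r * i1, r

def power_margin(e, r, p_now):
    """Margin to the maximum deliverable power E**2 / (4*R), the
    impedance-match (voltage-collapse) limit of this DC model."""
    return e * e / (4.0 * r) - p_now

# Hypothetical bus behind E = 1.0 p.u., R = 0.1 p.u.; two synthetic
# measurements at different loadings.
E_true, R_true = 1.0, 0.1
(v1, i1), (v2, i2) = [(E_true - R_true * i, i) for i in (0.5, 1.0)]
e_hat, r_hat = thevenin_from_measurements(v1, i1, v2, i2)
```

With exact measurements the identification is exact; with noisy phasor data, real implementations fit the Thevenin parameters over a sliding window instead.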
Improved Patient Size Estimates for Accurate Dose Calculations in Abdomen Computed Tomography
Energy Technology Data Exchange (ETDEWEB)
Lee, Chang-Lae [Yonsei University, Wonju (Korea, Republic of)
2017-07-15
The radiation dose of CT (computed tomography) is generally represented by the CTDI (CT dose index). CTDI, however, does not accurately predict the actual patient doses for different human body sizes because it relies on a cylinder-shaped head (diameter : 16 cm) and body (diameter : 32 cm) phantom. The purpose of this study was to eliminate the drawbacks of the conventional CTDI and to provide more accurate radiation dose information. Projection radiographs were obtained from water cylinder phantoms of various sizes, and the sizes of the water cylinder phantoms were calculated and verified using attenuation profiles. The effective diameter was also calculated using the attenuation of the abdominal projection radiographs of 10 patients. When the results of the attenuation-based method and the geometry-based method were compared with the results of the reconstructed-axial-CT-image-based method, the effective diameter of the attenuation-based method was found to be similar to the effective diameter of the reconstructed-axial-CT-image-based method, with a difference of less than 3.8%, but the geometry-based method showed a difference of less than 11.4%. This paper proposes a new method of accurately computing the radiation dose of CT based on the patient sizes. This method computes and provides the exact patient dose before the CT scan, and can therefore be effectively used for imaging and dose control.
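The two size metrics compared above can be sketched directly from an axial slice. The water-equivalent form below follows the AAPM Report 220 definition, while the segmentation threshold is an ad hoc assumption of this sketch:

```python
import numpy as np

def effective_diameters(hu_image, pixel_area_cm2, mask=None):
    """Geometry-based effective diameter and attenuation-based
    water-equivalent diameter (both in cm) from an axial CT slice in HU.

    Water-equivalent diameter: D_w = 2*sqrt(A_w/pi) with
    A_w = (mean_HU/1000 + 1) * A, per AAPM Report 220."""
    if mask is None:
        mask = hu_image > -400          # crude patient/air threshold (assumption)
    area = float(mask.sum()) * pixel_area_cm2
    d_eff = 2.0 * np.sqrt(area / np.pi)         # area-equivalent circle
    mean_hu = float(hu_image[mask].mean())
    area_w = (mean_hu / 1000.0 + 1.0) * area    # water-equivalent area
    d_w = 2.0 * np.sqrt(area_w / np.pi)
    return d_eff, d_w
```

For a uniform water cylinder the two metrics coincide, which makes a convenient synthetic test before applying such code to patient data.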
Kearns, F L; Hudson, P S; Boresch, S; Woodcock, H L
2016-01-01
Enzyme activity is inherently linked to free energies of transition states, ligand binding, protonation/deprotonation, etc.; these free energies, and thus enzyme function, can be affected by residue mutations, allosterically induced conformational changes, and much more. Therefore, being able to predict free energies associated with enzymatic processes is critical to understanding and predicting their function. Free energy simulation (FES) has historically been a computational challenge as it requires both the accurate description of inter- and intramolecular interactions and adequate sampling of all relevant conformational degrees of freedom. The hybrid quantum mechanical molecular mechanical (QM/MM) framework is the current tool of choice when accurate computations of macromolecular systems are essential. Unfortunately, robust and efficient approaches that employ the high levels of computational theory needed to accurately describe many reactive processes (i.e., ab initio, DFT), while also including explicit solvation effects and accounting for extensive conformational sampling are essentially nonexistent. In this chapter, we will give a brief overview of two recently developed methods that mitigate several major challenges associated with QM/MM FES: the QM non-Boltzmann Bennett's acceptance ratio method and the QM nonequilibrium work method. We will also describe usage of these methods to calculate free energies associated with (1) relative properties and (2) along reaction paths, using simple test cases with relevance to enzymes. © 2016 Elsevier Inc. All rights reserved.
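Free energy estimators based on nonequilibrium work, such as the QM nonequilibrium work method mentioned above, build on the Jarzynski equality ΔF = -kT ln⟨exp(-W/kT)⟩. A toy check with synthetic Gaussian work values, for which the estimator's limit is known analytically:

```python
import numpy as np

rng = np.random.default_rng(0)
kT = 1.0

# Synthetic nonequilibrium work values. For Gaussian work with mean mu
# and std s, the Jarzynski estimate converges to mu - s**2 / (2*kT).
mu, s = 2.0, 0.5
work = rng.normal(mu, s, size=200_000)

# Jarzynski equality: Delta_F = -kT * ln <exp(-W/kT)>
delta_f = -kT * np.log(np.mean(np.exp(-work / kT)))
expected = mu - s * s / (2.0 * kT)
```

The exponential average is dominated by rare low-work trajectories, which is why broad work distributions converge poorly; bidirectional estimators such as Bennett's acceptance ratio mitigate exactly this.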
Field Level Computer Exploitation Package
2007-03-01
to take advantage of the data retrieved from the computer. Major Barge explained that if a tool could be designed that nearly anyone could use...the study of network forensics. This has become a necessity because of the constantly growing eCommerce industry and the stiff competition between...Security. One big advantage that Insert has is the fact that it is quite small compared to most bootable CDs. At only 60 megabytes it can be burned
Nurturing a growing field: Computers & Geosciences
Mariethoz, Gregoire; Pebesma, Edzer
2017-10-01
Computational issues are becoming increasingly critical for virtually all fields of geoscience. This includes the development of improved algorithms and models, strategies for implementing high-performance computing, or the management and visualization of the large datasets provided by an ever-growing number of environmental sensors. Such issues are central to scientific fields as diverse as geological modeling, Earth observation, geophysics or climatology, to name just a few. Related computational advances, across a range of geoscience disciplines, are the core focus of Computers & Geosciences, which is thus a truly multidisciplinary journal.
Electromagnetic field computation by network methods
Felsen, Leopold B; Russer, Peter
2009-01-01
This monograph proposes a systematic and rigorous treatment of electromagnetic field representations in complex structures. The book develops powerful new models by combining important computational methods. It is the last book by the late Leopold Felsen.
Fast and accurate three-dimensional point spread function computation for fluorescence microscopy.
Li, Jizhou; Xue, Feng; Blu, Thierry
2017-06-01
The point spread function (PSF) plays a fundamental role in fluorescence microscopy. A realistic and accurately calculated PSF model can significantly improve the performance of 3D deconvolution microscopy and also the localization accuracy in single-molecule microscopy. In this work, we propose a fast and accurate approximation of the Gibson-Lanni model, which has been shown to represent the PSF suitably under a variety of imaging conditions. We express Kirchhoff's integral in this model as a linear combination of rescaled Bessel functions, thus providing an integral-free way to perform the calculation. The explicit approximation error in terms of the parameters is given numerically. Experiments demonstrate that the proposed approach requires significantly less computational time than current state-of-the-art techniques to achieve the same accuracy. This approach can also be extended to other microscopy PSF models.
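The core trick in this abstract, representing the integrand as a linear combination of rescaled Bessel functions so that each term can be integrated in closed form, can be sketched numerically. Everything below is an illustrative assumption rather than the authors' actual setup: the `besselj0` quadrature helper, the rescaling factors, and the stand-in pupil term being fitted.

```python
import numpy as np

def besselj0(x):
    """J0 via its integral form: J0(x) = mean over theta of cos(x*sin(theta))."""
    theta = (np.arange(400) + 0.5) * np.pi / 400  # midpoint rule on [0, pi]
    return np.cos(np.outer(x, np.sin(theta))).mean(axis=-1)

rho = np.linspace(0.0, 1.0, 200)       # normalized pupil radius
sigmas = 3.0 * np.arange(1, 11)        # rescaling factors (assumed)
A = np.column_stack([besselj0(s * rho) for s in sigmas])
g = np.cos(2.5 * rho**2)               # stand-in for the pupil-phase term
c, *_ = np.linalg.lstsq(A, g, rcond=None)   # least-squares coefficients
rel_err = np.linalg.norm(A @ c - g) / np.linalg.norm(g)
```

Once `g` is expressed in this basis, each term's Hankel-type integral against the Bessel kernel has a closed form, which is what removes the numerical integration from the PSF evaluation.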
DEFF Research Database (Denmark)
Bogdanov, Andrey; Kavun, Elif Bilge; Tischhauser, Elmar
2012-01-01
An accurate estimation of the success probability and data complexity of linear cryptanalysis is a fundamental question in symmetric cryptography. In this paper, we propose an efficient reconfigurable hardware architecture to compute the success probability and data complexity of Matsui's Algorithm...... block lengths ensures that any empirical observations are not due to differences in statistical behavior for artificially small block lengths. Rather surprisingly, we observed in previous experiments a significant deviation between the theory and practice for Matsui's Algorithm 2 for larger block sizes...
Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field
International Nuclear Information System (INIS)
Hoang, Ngoc-Tram D.; Nguyen, Duy-Anh P.; Hoang, Van-Hung; Le, Van-Hoang
2016-01-01
Explicit expressions are given that analytically describe the dependence of the energy of a two-dimensional exciton on the magnetic field intensity. These expressions are highly accurate, with a precision of up to three decimal places over the whole range of magnetic field intensity. Results are shown for the ground state and some excited states; moreover, we provide all the formulae needed to obtain similar expressions for any excited state. Analysis of the numerical results shows that the precision of three decimal places is maintained for excited states with principal quantum numbers up to n = 100.
Highly accurate analytical energy of a two-dimensional exciton in a constant magnetic field
Energy Technology Data Exchange (ETDEWEB)
Hoang, Ngoc-Tram D. [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Nguyen, Duy-Anh P. [Department of Natural Science, Thu Dau Mot University, 6, Tran Van On Street, Thu Dau Mot City, Binh Duong Province (Viet Nam); Hoang, Van-Hung [Department of Physics, Ho Chi Minh City University of Pedagogy 280, An Duong Vuong Street, District 5, Ho Chi Minh City (Viet Nam); Le, Van-Hoang, E-mail: levanhoang@tdt.edu.vn [Atomic Molecular and Optical Physics Research Group, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam); Faculty of Applied Sciences, Ton Duc Thang University, 19 Nguyen Huu Tho Street, Tan Phong Ward, District 7, Ho Chi Minh City (Viet Nam)
2016-08-15
Directory of Open Access Journals (Sweden)
Bryant Jamie
2011-11-01
Background: Self-report of smoking status is potentially unreliable in certain situations and in high-risk populations. This study aimed to determine the accuracy and acceptability of computer-administered self-report of smoking status among a low socioeconomic status (SES) population. Methods: Clients attending a community service organisation for welfare support were invited to complete a cross-sectional touch-screen computer health survey. Following survey completion, participants were invited to provide a breath sample to measure exposure to tobacco smoke in expired air. Sensitivity, specificity, positive predictive value and negative predictive value were calculated. Results: Three hundred and eighty-three participants completed the health survey, and 330 (86%) provided a breath sample. Of the participants included in the validation analysis, 59% reported being a daily or occasional smoker. Sensitivity was 94.4% and specificity 92.8%. The positive and negative predictive values were 94.9% and 92.0%, respectively. The majority of participants reported that the touch-screen survey was both enjoyable (79%) and easy (88%) to complete. Conclusions: Computer-administered self-report is both acceptable and accurate as a method of assessing smoking status among low-SES smokers in a community setting. Routine collection of health information using touch-screen computers has the potential to identify smokers and increase the provision of support and referral in the community setting.
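The four accuracy measures reported in this abstract derive from a 2x2 table of self-reported status against the breath-sample reference; a minimal sketch (the counts below are made up for illustration, not the study's data):

```python
def diagnostic_metrics(tp: int, fp: int, fn: int, tn: int) -> dict:
    """Accuracy of a test (self-report) against a reference (CO breath sample).
    tp: reported smoker, confirmed smoker; fn: denied smoking, but confirmed; etc."""
    return {
        "sensitivity": tp / (tp + fn),  # confirmed smokers correctly reported
        "specificity": tn / (tn + fp),  # confirmed non-smokers correctly reported
        "ppv": tp / (tp + fp),          # reported smokers who are confirmed
        "npv": tn / (tn + fn),          # reported non-smokers who are confirmed
    }

m = diagnostic_metrics(tp=90, fp=10, fn=10, tn=90)  # hypothetical counts
```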
Fast and accurate algorithm for the computation of complex linear canonical transforms.
Koç, Aykut; Ozaktas, Haldun M; Hesselink, Lambertus
2010-09-01
A fast and accurate algorithm is developed for the numerical computation of the family of complex linear canonical transforms (CLCTs), which represent the input-output relationship of complex quadratic-phase systems. Allowing the linear canonical transform parameters to be complex numbers makes it possible to represent paraxial optical systems that involve complex parameters. These include lossy systems such as Gaussian apertures, Gaussian ducts, or complex graded-index media, as well as lossless thin lenses and sections of free space and any arbitrary combinations of them. Complex-ordered fractional Fourier transforms (CFRTs) are a special case of CLCTs, and therefore a fast and accurate algorithm to compute CFRTs is included as a special case of the presented algorithm. The algorithm is based on decomposition of an arbitrary CLCT matrix into real and complex chirp multiplications and Fourier transforms. The samples of the output are obtained from the samples of the input in approximately N log N time, where N is the number of input samples. A space-bandwidth product tracking formalism is developed to ensure that the number of samples is information-theoretically sufficient to reconstruct the continuous transform, but not unnecessarily redundant.
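The decomposition idea described above, reducing a quadratic-phase transform to chirp multiplications and a convolution that FFTs can handle, is the same mechanism as Bluestein's algorithm for the DFT, sketched below as an illustration. This is not the authors' CLCT code, only the chirp-multiply / convolve / chirp-multiply primitive it builds on.

```python
import numpy as np

def bluestein_dft(x):
    """DFT via chirp multiplications and a convolution (Bluestein's identity:
    n*k = (n**2 + k**2 - (k - n)**2) / 2)."""
    N = len(x)
    n = np.arange(N)
    chirp = np.exp(-1j * np.pi * n**2 / N)
    a = x * chirp                                # pre-chirp the input
    m = np.arange(-(N - 1), N)
    b = np.exp(1j * np.pi * m**2 / N)            # convolution kernel
    conv = np.convolve(a, b)                     # direct conv, for clarity
    return chirp * conv[N - 1: 2 * N - 1]        # post-chirp the output
```

The direct `np.convolve` here costs O(N^2); performing the same convolution with FFTs gives the N log N scaling cited in the abstract.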
Kelly, Aaron; Brackbill, Nora; Markland, Thomas E
2015-03-07
In this article, we show how Ehrenfest mean field theory can be made both more accurate and more efficient for treating nonadiabatic quantum dynamics by combining it with the generalized quantum master equation framework. The resulting mean field generalized quantum master equation (MF-GQME) approach is a non-perturbative and non-Markovian theory for treating open quantum systems without any restrictions on the form of the Hamiltonian to which it can be applied. By studying relaxation dynamics in a wide range of dynamical regimes, typical of charge and energy transfer, we show that MF-GQME provides much higher accuracy than a direct application of mean field theory. In addition, these gains in accuracy are accompanied by computational speed-ups of between one and two orders of magnitude that become larger as the system becomes more nonadiabatic. This combination of quantum-classical theory and master equation techniques thus makes it possible to obtain the accuracy of much more computationally expensive approaches at a cost lower than even mean field dynamics, providing the ability to treat the quantum dynamics of atomistic condensed phase systems for long times.
Energy Technology Data Exchange (ETDEWEB)
Kelly, Aaron; Markland, Thomas E., E-mail: tmarkland@stanford.edu [Department of Chemistry, Stanford University, Stanford, California 94305 (United States); Brackbill, Nora [Department of Physics, Stanford University, Stanford, California 94305 (United States)
2015-03-07
Accurate van der Waals force field for gas adsorption in porous materials.
Sun, Lei; Yang, Li; Zhang, Ya-Dong; Shi, Qi; Lu, Rui-Feng; Deng, Wei-Qiao
2017-09-05
An accurate van der Waals force field (VDW FF) was derived from highly precise quantum mechanical (QM) calculations. Small molecular clusters were used to explore the van der Waals interactions between gas molecules and porous materials, and the parameters of the force field were determined from the QM calculations. To validate the force field, predictions from the VDW FF were compared with those of standard FFs, such as UFF, Dreiding, Pcff, and Compass. The results from the VDW FF were in excellent agreement with experimental measurements. This force field can be applied to the prediction of gas density (H2, CO2, C2H4, CH4, N2, O2) and adsorption performance inside porous materials, such as covalent organic frameworks (COFs), zeolites, and metal organic frameworks (MOFs), consisting of H, B, N, C, O, S, Si, Al, Zn, Mg, Ni, and Co. This work provides a solid basis for studying gas adsorption in porous materials. © 2017 Wiley Periodicals, Inc.
Ahmed, Ahfaz
2015-03-01
Gasoline is the most widely used fuel for light-duty automobile transportation, but its molecular complexity makes its fundamental combustion properties intractable to study experimentally and computationally. Therefore, surrogate fuels with a simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating surrogates for FACE (fuels for advanced combustion engines) gasolines A and C by combining regression modeling with physical and chemical kinetics simulations. The computational methodology integrates simulation tools executed across different software platforms. Initially, the palette of surrogate species and carbon types for the target fuels was determined from a detailed hydrocarbon analysis (DHA). A regression algorithm implemented in MATLAB was linked to REFPROP for simulation of distillation curves and calculation of physical properties of surrogate compositions. The MATLAB code generates a surrogate composition at each iteration, which is then used to automatically generate CHEMKIN input files that are submitted to homogeneous batch reactor simulations for prediction of research octane number (RON). The regression algorithm determines the optimal surrogate composition to match the fuel properties of FACE A and C gasoline, specifically the hydrogen/carbon (H/C) ratio, density, distillation characteristics, carbon types, and RON. The optimal surrogate fuel compositions obtained using the present computational approach were compared with the real fuel properties, as well as with surrogate compositions available in the literature. Experiments were conducted within a Cooperative Fuels Research (CFR) engine operating under controlled autoignition (CAI) mode to compare the formulated surrogates against the real fuels. Carbon monoxide measurements indicated that the proposed surrogates
Manz, Thomas A; Sholl, David S
2011-12-13
The partitioning of electron spin density among atoms in a material gives atomic spin moments (ASMs), which are important for understanding magnetic properties. We compare ASMs computed using different population analysis methods and introduce a method for computing density derived electrostatic and chemical (DDEC) ASMs. Bader and DDEC ASMs can be computed for periodic and nonperiodic materials with either collinear or noncollinear magnetism, while natural population analysis (NPA) ASMs can be computed for nonperiodic materials with collinear magnetism. Our results show Bader, DDEC, and (where applicable) NPA methods give similar ASMs, but different net atomic charges. Because they are optimized to reproduce both the magnetic field and the chemical states of atoms in a material, DDEC ASMs are especially suitable for constructing interaction potentials for atomistic simulations. We describe the computation of accurate ASMs for (a) a variety of systems using collinear and noncollinear spin DFT, (b) highly correlated materials (e.g., magnetite) using DFT+U, and (c) various spin states of ozone using coupled cluster expansions. The computed ASMs are in good agreement with available experimental results for a variety of periodic and nonperiodic materials. Examples considered include the antiferromagnetic metal organic framework Cu3(BTC)2, several ozone spin states, mono- and binuclear transition metal complexes, ferri- and ferro-magnetic solids (e.g., Fe3O4, Fe3Si), and simple molecular systems. We briefly discuss the theory of exchange-correlation functionals for studying noncollinear magnetism. A method for finding the ground state of systems with highly noncollinear magnetism is introduced. We use these methods to study the spin-orbit coupling potential energy surface of the single molecule magnet Fe4C40H52N4O12, which has highly noncollinear magnetism, and find that it contains unusual features that give a new interpretation to experimental data.
International Nuclear Information System (INIS)
Ma, Duancheng; Friák, Martin; Pezold, Johann von; Raabe, Dierk; Neugebauer, Jörg
2015-01-01
We propose an approach for the computationally efficient and quantitatively accurate prediction of solid-solution strengthening. It combines the 2-D Peierls–Nabarro model and a recently developed solid-solution strengthening model. Solid-solution strengthening is examined with Al–Mg and Al–Li as representative alloy systems, demonstrating a good agreement between theory and experiments within the temperature range in which the dislocation motion is overdamped. Through a parametric study, two guideline maps of the misfit parameters against (i) the critical resolved shear stress, τ0, at 0 K and (ii) the energy barrier, ΔEb, against dislocation motion in a solid solution with randomly distributed solute atoms are created. With these two guideline maps, τ0 at finite temperatures is predicted for other Al binary systems and compared with available experiments, achieving good agreement.
Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao; Lu, Jun-Ying; Zeng, Yan-Hong; Meng, Fan-Jie; Cao, Bin; Zi, Xue-Rong; Han, Shu-Ming; Zhang, Yu-Huan
2013-09-01
Using computed tomography (CT) to rapidly and accurately quantify pleural effusion volume benefits medical and scientific research. However, precisely measuring the volume of pleural effusions still involves many challenges, and no accurate measurement method is currently recognized. The aim of this study was to explore the feasibility of using 64-slice CT volume-rendering technology to accurately measure pleural fluid volume and to then analyze the correlation between the volume of the free pleural effusion and the different diameters of the pleural effusion. The 64-slice CT volume-rendering technique was used to measure and analyze three parts. First, the fluid volume of a self-made thoracic model was measured and compared with the actual injected volume. Second, the pleural effusion volume was measured before and after pleural fluid drainage in 25 patients, and the volume reduction was compared with the actual volume of the liquid extract. Finally, the free pleural effusion volume was measured in 26 patients to analyze the correlation between it and the diameter of the effusion, which was then used to calculate the regression equation. After using the 64-slice CT volume-rendering technique to measure the fluid volume of the self-made thoracic model, the results were compared with the actual injection volume; no significant differences were found (P = 0.836). For the 25 patients with drained pleural effusions, the comparison of the reduction in volume with the actual volume of the liquid extract revealed no significant differences (P = 0.989). The following linear regression equation relates the pleural effusion volume (V), measured by the CT volume-rendering technique, to the greatest depth of the effusion (d): V = 158.16 × d - 116.01 (r = 0.91, P = 0.000). The following linear regression relates the volume to the product of the pleural effusion diameters (l × h × d): V = 0.56 × (l × h × d) + 39.44 (r = 0.92, P = 0.000). The 64-slice CT volume-rendering technique can
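The two regression fits reported above can be applied directly; a small helper, with units and parameter names assumed (depths and diameters in cm, volume in mL):

```python
def volume_from_depth(d: float) -> float:
    """Effusion volume from the greatest depth d, per the reported fit
    V = 158.16*d - 116.01 (r = 0.91)."""
    return 158.16 * d - 116.01

def volume_from_diameters(l: float, h: float, d: float) -> float:
    """Effusion volume from the product of the three effusion diameters,
    per the reported fit V = 0.56*(l*h*d) + 39.44 (r = 0.92)."""
    return 0.56 * (l * h * d) + 39.44
```

Note that linear fits like these are only meaningful within the range of effusions studied; for small depths the first fit goes negative, so it should not be extrapolated.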
International Nuclear Information System (INIS)
Guo, Zhi-Jun; Lin, Qiang; Liu, Hai-Tao
2013-01-01
Energy Technology Data Exchange (ETDEWEB)
Guo, Zhi-Jun [Dept. of Radiology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)], e-mail: Gzj3@163.com; Lin, Qiang [Dept. of Oncology, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China); Liu, Hai-Tao [Dept. of General Surgery, North China Petroleum Bureau General Hospital, Renqiu, Hebei (China)] [and others])
2013-09-15
Development of highly accurate approximate scheme for computing the charge transfer integral
Energy Technology Data Exchange (ETDEWEB)
Pershin, Anton; Szalay, Péter G. [Laboratory for Theoretical Chemistry, Institute of Chemistry, Eötvös Loránd University, P.O. Box 32, H-1518 Budapest (Hungary)
2015-08-21
The charge transfer integral is a key parameter required by various theoretical models to describe charge transport properties, e.g., in organic semiconductors. The accuracy of this important property depends on several factors, which include the level of electronic structure theory and internal simplifications of the applied formalism. The goal of this paper is to identify the performance of various approximate approaches of the latter category, while using high-level equation-of-motion coupled cluster theory for the electronic structure. The calculations have been performed on the ethylene dimer as one of the simplest model systems. By studying different spatial perturbations, it was shown that while both the energy splitting in dimer and the fragment charge difference methods are equivalent to the exact formulation for symmetric displacements, they are less efficient in describing the transfer integral along the asymmetric alteration coordinate. Since the “exact” scheme was found to be computationally expensive, we examine the possibility of obtaining the asymmetric fluctuation of the transfer integral by a Taylor expansion along the coordinate space. By exploring the efficiency of this novel approach, we show that the Taylor expansion scheme represents an attractive alternative to the “exact” calculations due to a substantial reduction of computational costs when a considerably large region of the potential energy surface is of interest. Moreover, we show that the Taylor expansion scheme, irrespective of the dimer symmetry, is very accurate for the entire range of geometry fluctuations that cover the space the molecule accesses at room temperature.
Accurate Computation of Periodic Regions' Centers in the General M-Set with Integer Index Number
Directory of Open Access Journals (Sweden)
Wang Xingyuan
2010-01-01
This paper presents two methods for accurately computing the centers of periodic regions. One method applies to general M-sets with integer index number, the other to general M-sets with negative integer index number. Both methods improve the precision of the computation by transforming the polynomial equations that determine the centers of the periodic regions. We primarily discuss the general M-sets with negative integer index, and analyze the relationship between the number of centers of periodic regions on the principal symmetric axis and in the principal symmetric interior. By applying Newton's method to the transformed polynomial equation that determines the centers of the periodic regions, we can obtain the centers' coordinates with at least 48 significant digits after the decimal point in both the real and imaginary parts. In this paper, we list some centers' coordinates of the k-periodic regions (k = 3, 4, 5, 6) of general M-sets for the index numbers α = −25, −24, …, −1, all of which have high numerical accuracy.
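The Newton iteration at the heart of this abstract can be sketched for the classical index-2 case, where the period-k centers are the roots of P_k(c) = f_c^k(0) with f_c(z) = z^2 + c; the derivative dP_k/dc is accumulated alongside the orbit. The starting guess below is illustrative, and this sketch uses ordinary double precision rather than the paper's transformed equations and 48-digit arithmetic.

```python
def period_k_center(c0: complex, k: int, iters: int = 60) -> complex:
    """Newton's method for a root of P_k(c) = f_c^k(0), f_c(z) = z*z + c.
    z and dz = dP/dc are iterated together: z -> z*z + c, dz -> 2*z*dz + 1."""
    c = c0
    for _ in range(iters):
        z, dz = 0j, 0j
        for _ in range(k):
            z, dz = z * z + c, 2.0 * z * dz + 1.0
        c -= z / dz   # Newton update
    return c

center3 = period_k_center(-1.8 + 0j, 3)   # real period-3 center, ~ -1.75488
```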
Computation within the auxiliary field approach
International Nuclear Information System (INIS)
Baeurle, S.A.
2003-01-01
Recently, the classical auxiliary field methodology has been developed as a new simulation technique for performing calculations within the framework of classical statistical mechanics. Since the approach suffers from a sign problem, a judicious choice of the sampling algorithm, allowing fast statistical convergence and efficient generation of field configurations, is of fundamental importance for a successful simulation. In this paper we focus on the computational aspects of this simulation methodology. We introduce two different types of algorithms: the single-move auxiliary field Metropolis Monte Carlo algorithm, and two new classes of force-based algorithms that enable multiple-move propagation. In addition, to further optimize the sampling, we describe a preconditioning scheme that permits each field degree of freedom to be treated individually with regard to the evolution through the auxiliary field configuration space. Finally, we demonstrate the validity and assess the competitiveness of these algorithms on a representative practical example. We believe that they may also provide an interesting possibility for enhancing the computational efficiency of other auxiliary field methodologies.
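The single-move Metropolis idea named above can be illustrated on a toy one-degree-of-freedom "field" with a quadratic action. Everything here (the action, the step size, the seed) is an assumption for illustration, not the paper's auxiliary-field sampler.

```python
import math
import random

def metropolis_quadratic(n_steps: int = 20000, step: float = 1.0, seed: int = 7):
    """Single-move Metropolis sampling of the weight exp(-S), S(x) = x^2 / 2.
    Returns the sampled second moment <x^2> and the acceptance rate."""
    rng = random.Random(seed)
    x, accepted, second_moment = 0.0, 0, 0.0
    for _ in range(n_steps):
        x_new = x + rng.uniform(-step, step)           # propose a local move
        dS = (x_new**2 - x**2) / 2.0                   # change in the action
        if dS <= 0.0 or rng.random() < math.exp(-dS):  # accept/reject
            x, accepted = x_new, accepted + 1
        second_moment += x * x
    return second_moment / n_steps, accepted / n_steps

var, acc_rate = metropolis_quadratic()   # var should approach <x^2> = 1
```

For this Gaussian weight the exact second moment is 1, so the sampled value provides a quick correctness check on the accept/reject logic.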
DEFF Research Database (Denmark)
Kepp, Kasper Planeta; Ooi, Bee Lean; Christensen, Hans Erik Mølager
2007-01-01
This work describes the computation and accurate reproduction of subtle shifts in reduction potentials for two mutants of the iron-sulfur protein Pyrococcus furiosus ferredoxin. The computational models involved only first-sphere ligands and differed with respect to one ligand, either acetate (as...
Forward Field Computation with OpenMEEG
Directory of Open Access Journals (Sweden)
Alexandre Gramfort
2011-01-01
must be computed. We present OpenMEEG, which solves the electromagnetic forward problem in the quasistatic regime for head models with piecewise constant conductivity. The core of OpenMEEG consists of the symmetric Boundary Element Method, which is based on an extended Green representation theorem. OpenMEEG is able to provide lead fields for four different electromagnetic forward problems: electroencephalography (EEG), magnetoencephalography (MEG), electrical impedance tomography (EIT), and intracranial electric potentials (IPs). OpenMEEG is open source and multiplatform. It can be used from Python and Matlab in conjunction with toolboxes that solve the inverse problem; its integration within FieldTrip has been operational since release 2.0.
Accurate Extraction of Charge Carrier Mobility in 4-Probe Field-Effect Transistors
Choi, Hyun Ho; Rodionov, Yaroslav I.; Paterson, Alexandra F.; Panidi, Julianna; Saranin, Danila; Kharlamov, Nikolai; Didenko, Sergei I.; Anthopoulos, Thomas D.; Cho, Kilwon; Podzorov, Vitaly
2018-01-01
Charge carrier mobility is an important characteristic of organic field-effect transistors (OFETs) and other semiconductor devices. However, accurate mobility determination in FETs is frequently compromised by issues related to Schottky-barrier contact resistance, which can be efficiently addressed by measurements in a 4-probe/Hall-bar contact geometry. Here, it is shown that this technique, widely used in materials science, can still lead to significant mobility overestimation due to longitudinal channel shunting caused by voltage probes in 4-probe structures. This effect is investigated numerically and experimentally in specially designed multiterminal OFETs based on optimized novel organic-semiconductor blends and bulk single crystals. Numerical simulations reveal that 4-probe FETs with long but narrow channels and wide voltage probes are especially prone to channel shunting, which can lead to mobilities overestimated by as much as 350%. In addition, the first Hall effect measurements in blended OFETs are reported, and it is shown how the Hall mobility can be affected by channel shunting. As a solution to this problem, a numerical correction factor is introduced that can be used to obtain much more accurate experimental mobilities. This methodology is relevant to the characterization of a variety of materials, including organic semiconductors, inorganic oxides, monolayer materials, as well as carbon nanotube and semiconductor nanocrystal arrays.
Toward accurate tooth segmentation from computed tomography images using a hybrid level set model
Energy Technology Data Exchange (ETDEWEB)
Gan, Yangzhou; Zhao, Qunfei [Department of Automation, Shanghai Jiao Tong University, and Key Laboratory of System Control and Information Processing, Ministry of Education of China, Shanghai 200240 (China); Xia, Zeyang, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn; Hu, Ying [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, and The Chinese University of Hong Kong, Shenzhen 518055 (China); Xiong, Jing, E-mail: zy.xia@siat.ac.cn, E-mail: jing.xiong@siat.ac.cn [Shenzhen Institutes of Advanced Technology, Chinese Academy of Sciences, Shenzhen 510855 (China); Zhang, Jianwei [TAMS, Department of Informatics, University of Hamburg, Hamburg 22527 (Germany)
2015-01-15
Purpose: A three-dimensional (3D) model of the teeth provides important information for orthodontic diagnosis and treatment planning. Tooth segmentation is an essential step in generating the 3D digital model from computed tomography (CT) images. The aim of this study is to develop an accurate and efficient tooth segmentation method from CT images. Methods: The 3D dental CT volumetric images are segmented slice by slice in a two-dimensional (2D) transverse plane. The 2D segmentation is composed of a manual initialization step and an automatic slice-by-slice segmentation step. In the manual initialization step, the user manually picks a starting slice and selects a seed point for each tooth in this slice. In the automatic slice segmentation step, the developed hybrid level set model is applied to segment tooth contours in each slice. A tooth contour propagation strategy is employed to initialize the level set function automatically. Cone beam CT (CBCT) images of two subjects were used to tune the parameters. Images of 16 additional subjects were used to validate the performance of the method. Volume overlap metrics and surface distance metrics were adopted to assess the segmentation accuracy quantitatively. The volume overlap metrics were volume difference (VD, mm³) and Dice similarity coefficient (DSC, %). The surface distance metrics were average symmetric surface distance (ASSD, mm), root mean square (RMS) symmetric surface distance (RMSSSD, mm), and maximum symmetric surface distance (MSSD, mm). Computation time was recorded to assess the efficiency. The performance of the proposed method has been compared with two state-of-the-art methods. Results: For the tested CBCT images, the VD, DSC, ASSD, RMSSSD, and MSSD for the incisor were 38.16 ± 12.94 mm³, 88.82 ± 2.14%, 0.29 ± 0.03 mm, 0.32 ± 0.08 mm, and 1.25 ± 0.58 mm, respectively; the VD, DSC, ASSD, RMSSSD, and MSSD for the canine were 49.12 ± 9.33 mm³, 91.57 ± 0.82%, 0.27 ± 0.02 mm, 0
Thermal Conductivities in Solids from First Principles: Accurate Computations and Rapid Estimates
Carbogno, Christian; Scheffler, Matthias
In spite of significant research efforts, a first-principles determination of the thermal conductivity κ at high temperatures has remained elusive. Boltzmann transport techniques that account for anharmonicity perturbatively become inaccurate under such conditions. Ab initio molecular dynamics (MD) techniques using the Green-Kubo (GK) formalism capture the full anharmonicity, but can become prohibitively costly to converge in time and size. We developed a formalism that accelerates such GK simulations by several orders of magnitude, thus enabling their application within the limited time and length scales accessible in ab initio MD. For this purpose, we determine the effective harmonic potential occurring during the MD and the associated temperature-dependent phonon properties and lifetimes. Interpolation in reciprocal and frequency space then allows extrapolation to the macroscopic scale. For both force-field and ab initio MD, we validate this approach by computing κ for Si and ZrO2, two materials known for their particularly harmonic and anharmonic character, respectively. Eventually, we demonstrate how these techniques facilitate reasonable estimates of κ from existing MD calculations at virtually no additional computational cost.
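The Green-Kubo estimator at the heart of this approach can be sketched in a few lines: κ is proportional to the time integral of the heat-flux autocorrelation function. The synthetic exponential autocorrelation below merely stands in for the flux time series of a real MD run; the volume, temperature, and decay constants are illustrative:

```python
import math

# Sketch of the Green-Kubo estimator for thermal conductivity:
#   kappa = V / (kB T^2) * integral_0^inf <J(0) J(t)> dt
# A synthetic exponential autocorrelation replaces real MD heat-flux data.

KB = 1.380649e-23  # Boltzmann constant, J/K

def green_kubo_kappa(acf, dt, volume, temperature):
    """Trapezoidal time-integration of a heat-flux autocorrelation."""
    integral = sum(0.5 * (acf[i] + acf[i + 1]) * dt
                   for i in range(len(acf) - 1))
    return volume / (KB * temperature ** 2) * integral

# Synthetic ACF C(t) = C0 exp(-t/tau); its exact integral is C0 * tau.
C0, tau, dt = 1e-22, 1.0e-12, 1.0e-15
acf = [C0 * math.exp(-i * dt / tau) for i in range(20000)]
kappa = green_kubo_kappa(acf, dt, volume=1e-26, temperature=300.0)
```

The convergence problem the abstract refers to arises because the real autocorrelation is noisy at long times, which is what the interpolation/extrapolation scheme addresses.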
Marchini, Giovanni Scala; Gebreselassie, Surafel; Liu, Xiaobo; Pynadath, Cindy; Snyder, Grace; Monga, Manoj
2013-02-01
The purpose of our study was to determine, in vivo, whether single-energy noncontrast computed tomography (NCCT) can accurately predict the presence/percentage of struvite stone composition. We retrospectively searched for all patients with struvite components on stone composition analysis between January 2008 and March 2012. Inclusion criteria were NCCT prior to stone analysis and stone size ≥4 mm. A single urologist, blinded to stone composition, reviewed all NCCT to acquire stone location, dimensions, and Hounsfield unit (HU). HU density (HUD) was calculated by dividing mean HU by the stone's largest transverse diameter. Stone analysis was performed via Fourier transform infrared spectrometry. Independent sample Student's t-test and analysis of variance (ANOVA) were used to compare HU/HUD among groups. Spearman's correlation test was used to determine the correlation between HU and stone size and also HU/HUD to % of each component within the stone. Significance was considered if pR=0.017; p=0.912) and negative with HUD (R=-0.20; p=0.898). Overall, 3 (6.8%) had stones (n=5) with other miscellaneous stones (n=39), no difference was found for HU (p=0.09) but HUD was significantly lower for pure stones (27.9±23.6 v 72.5±55.9, respectively; p=0.006). Again, significant overlaps were seen. Pure struvite stones have significantly lower HUD than mixed struvite stones, but overlap exists. A low HUD may increase the suspicion for a pure struvite calculus.
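The HU density measure used in the study is a simple ratio, sketched below; the example numbers are illustrative, not patient data:

```python
# Sketch: Hounsfield-unit density (HUD) as defined in the study --
# mean HU divided by the stone's largest transverse diameter (mm).

def hounsfield_unit_density(mean_hu, largest_transverse_diameter_mm):
    if largest_transverse_diameter_mm <= 0:
        raise ValueError("diameter must be positive")
    return mean_hu / largest_transverse_diameter_mm

# Illustrative stone: mean 350 HU, 12 mm diameter -> about 29.2 HU/mm,
# in the low range the study associates with pure struvite composition.
hud = hounsfield_unit_density(350.0, 12.0)
```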
Energy Technology Data Exchange (ETDEWEB)
Langer, Christoph; Lutz, M.; Kuehl, C.; Frey, N. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany); Partner Site Hamburg/Kiel/Luebeck, DZHK (German Centre for Cardiovascular Research), Kiel (Germany); Both, M.; Sattler, B.; Jansen, O; Schaefer, P. [Christian-Albrechts-Universitaet Kiel, Department of Diagnostic Radiology, University Medical Center Schleswig-Holstein (Germany); Harders, H.; Eden, M. [Christian-Albrechts-Universitaet Kiel, Department of Cardiology, Angiology and Critical Care Medicine, University Medical Center Schleswig-Holstein (Germany)
2014-10-15
Late enhancement (LE) multi-slice computed tomography (leMDCT) was introduced for the visualization of (intra-)myocardial fibrosis in hypertrophic cardiomyopathy (HCM). LE is associated with adverse cardiac events. This analysis focuses on leMDCT-derived LV muscle mass (LV-MM), which may be related to LE, resulting in an LE proportion for potential risk stratification in HCM. N=26 HCM patients underwent leMDCT (64-slice CT) and cardiovascular magnetic resonance (CMR). In leMDCT, iodine contrast (Iopromid, 350 mg/mL; 150 mL) was injected 7 minutes before imaging. Reconstructed short cardiac axis views served for planimetry. The study group was divided into three groups of varying LV contrast. LeMDCT was correlated with CMR. The mean age was 64.2 ± 14 years. The groups of varying contrast differed in weight and body mass index (p < 0.05). In the group with good LV contrast, assessment of LV-MM resulted in 147.4 ± 64.8 g in leMDCT vs. 147.1 ± 65.9 g in CMR (p > 0.05). In the group with sufficient contrast, LV-MM appeared as 172 ± 30.8 g in leMDCT vs. 165.9 ± 37.8 g in CMR (p > 0.05). Overall intra-/inter-observer variability of the semiautomatic assessment of LV-MM showed an accuracy of 0.9 ± 8.6 g and 0.8 ± 9.2 g in leMDCT. All leMDCT measures correlated well with CMR (r > 0.9). LeMDCT, primarily performed for LE visualization in HCM, allows for accurate LV volumetry including LV-MM in > 90% of cases. (orig.)
A computational theory of visual receptive fields.
Lindeberg, Tony
2013-12-01
A receptive field constitutes a region in the visual field where a visual cell or a visual operator responds to visual stimuli. This paper presents a theory of what types of receptive field profiles can be regarded as natural for an idealized vision system, given a set of structural requirements on the first stages of visual processing that reflect symmetry properties of the surrounding world. These symmetry properties include (i) covariance under scale changes, affine image deformations, and Galilean transformations of space-time, as occur for real-world image data, as well as specific requirements of (ii) temporal causality, implying that the future cannot be accessed, and (iii) a time-recursive updating mechanism over a limited temporal buffer of the past, as is necessary for a genuine real-time system. Fundamental structural requirements are also imposed to ensure (iv) mutual consistency and a proper handling of internal representations at different spatial and temporal scales. It is shown how a set of families of idealized receptive field profiles can be derived by necessity for spatial, spatio-chromatic, and spatio-temporal receptive fields in terms of Gaussian kernels, Gaussian derivatives, or closely related operators. Such image filters have been successfully used as a basis for expressing a large number of visual operations in computer vision, including feature detection, feature classification, motion estimation, object recognition, spatio-temporal recognition, and shape estimation. Hence, the associated so-called scale-space theory constitutes a theoretically well-founded and general framework for expressing visual operations. There are very close similarities between receptive field profiles predicted from this scale-space theory and receptive field profiles found by cell recordings in biological vision. Among the family of receptive field profiles derived by necessity from the assumptions, idealized models with very good qualitative
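The simplest members of the receptive-field families singled out by this theory, the Gaussian smoothing kernel and its first spatial derivative, can be sampled directly; the scale value below is an arbitrary illustration:

```python
import math

# Sketch: sampled Gaussian and Gaussian-derivative kernels, the kind of
# idealized spatial receptive-field profiles predicted by scale-space theory.

def gaussian(x, t):
    """1D Gaussian at scale t (variance); a smoothing receptive field."""
    return math.exp(-x * x / (2.0 * t)) / math.sqrt(2.0 * math.pi * t)

def gaussian_dx(x, t):
    """First spatial derivative: an antisymmetric, edge-selective profile."""
    return -x / t * gaussian(x, t)

t = 4.0                                   # scale parameter (variance)
xs = range(-15, 16)
smooth_kernel = [gaussian(x, t) for x in xs]   # sums to ~1 (unit DC gain)
edge_kernel = [gaussian_dx(x, t) for x in xs]  # antisymmetric, zero mean
```

Higher-order derivatives and spatio-temporal variants follow the same construction, as the abstract indicates.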
Directory of Open Access Journals (Sweden)
Yuqing He
2014-01-01
Autonomous maneuvering flight control of rotor-flying robots (RFR) is a challenging problem due to the highly complicated structure of the model and significant uncertainties in many aspects of the field. As a consequence, it is difficult in many cases to decide whether or not a flight maneuver trajectory is feasible, and it is necessary to analyze the flight maneuvering ability of an RFR prior to a test flight. Our aim in this paper is to use a numerical method called algorithmic differentiation (AD) to solve this problem. The basic idea is to compute the internal states (i.e., attitude angles and angular rates) and input profiles based on predetermined maneuvering trajectory information given by the outputs (i.e., positions and yaw angle) and their higher-order derivatives. For this purpose, we first present a model of the RFR system and show that it is flat. We then cast the procedure for obtaining the required states/inputs from the desired outputs as a static optimization problem, which is solved using AD and a derivative-based optimization algorithm. Finally, we test the proposed method on a flight maneuver trajectory to verify its performance.
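The flatness idea can be illustrated on a toy system. For a double integrator x'' = u, the flat output is x itself, so the required input is simply the second derivative of the desired trajectory; the paper performs the analogous computation for the full RFR model using algorithmic differentiation, for which a central finite difference stands in here:

```python
import math

# Sketch: recovering the required input of a differentially flat system
# from a desired output trajectory. Toy double integrator x'' = u; a
# finite difference substitutes for the AD machinery of the paper.

def required_input(y, dt):
    """Second derivative of the output trajectory by central differences."""
    return [(y[i - 1] - 2.0 * y[i] + y[i + 1]) / dt ** 2
            for i in range(1, len(y) - 1)]

dt = 1e-3
traj = [math.sin(i * dt) for i in range(1001)]  # desired output x(t) = sin t
u = required_input(traj, dt)                    # approximates u(t) = -sin t
```

Feasibility checking then amounts to verifying that the recovered states and inputs stay within actuator and attitude limits along the whole trajectory.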
Ahmed, Ahfaz; Goteng, Gokop; Shankar, Vijai; Al-Qurashi, Khalid; Roberts, William L.; Sarathy, Mani
2015-01-01
simpler molecular composition that represent real fuel behavior in one or more aspects are needed to enable repeatable experimental and computational combustion investigations. This study presents a novel computational methodology for formulating
Accurate technique for complete geometric calibration of cone-beam computed tomography systems
International Nuclear Information System (INIS)
Cho Youngbin; Moseley, Douglas J.; Siewerdsen, Jeffrey H.; Jaffray, David A.
2005-01-01
Cone-beam computed tomography systems have been developed to provide in situ imaging for the purpose of guiding radiation therapy. Clinical systems have been constructed using this approach on a clinical linear accelerator (Elekta Synergy RP) and on an iso-centric C-arm. Geometric calibration involves the estimation of a set of parameters that describes the geometry of such systems and is essential for accurate image reconstruction. We have developed a general analytic algorithm and a corresponding calibration phantom for estimating these geometric parameters in cone-beam computed tomography (CT) systems. The performance of the calibration algorithm is evaluated and its application is discussed. The algorithm makes use of a calibration phantom to estimate the geometric parameters of the system. The phantom consists of 24 steel ball bearings (BBs) in a known geometry: twelve BBs are spaced evenly at 30 deg in each of two plane-parallel circles separated by a given distance along the tube axis. The detector (e.g., a flat-panel detector) is assumed to have no spatial distortion. The method estimates geometric parameters including the position of the x-ray source, the position and rotation of the detector, and the gantry angle, and can describe complex source-detector trajectories. The accuracy and sensitivity of the calibration algorithm were analyzed. The calibration algorithm estimates geometric parameters with a level of accuracy such that the quality of the CT reconstruction is not degraded by estimation error. Sensitivity analysis shows an uncertainty of 0.01 deg (around the beam direction) to 0.3 deg (normal to the beam direction) in rotation, and 0.2 mm (orthogonal to the beam direction) to 4.9 mm (beam direction) in position for the medical linear accelerator geometry. Experimental measurements using a laboratory bench cone-beam CT system of known geometry demonstrate the sensitivity of the method in detecting small changes in the imaging geometry with an uncertainty of 0.1 mm in
Efficient and Accurate Computational Framework for Injector Design and Analysis, Phase I
National Aeronautics and Space Administration — CFD codes used to simulate upper stage expander cycle engines are not adequately mature to support design efforts. Rapid and accurate simulations require more...
A comparative design view for accurate control of servos using a field programmable gate array
International Nuclear Information System (INIS)
Tickle, A J; Harvey, P K; Smith, J S; Wu, F; Buckle, J R
2009-01-01
An embedded system is a special-purpose computer system designed to perform one or a few dedicated functions. Altera DSP Builder presents designers and users with an alternative approach to creating such systems, employing a blockset similar to that already used in Simulink. The application considered in this paper is the design of a pulse width modulation (PWM) system for use in stereo vision. PWM can replace a digital-to-analogue converter to control audio speakers, LED intensity, motor speed, and servo position. Rather than the conventional HDL coding approach, this Simulink-based approach provides an easily understood platform for the PWM design. The paper compares the two approaches in terms of resource usage, flexibility, and related factors, including how DSP Builder manipulates an onboard clock signal to create the control pulses, versus the 'raw' coding of a PWM generator in VHDL. Both methods were shown to a selection of people, and their views on which version they would subsequently use in their respective fields are discussed.
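The core logic of a counter-based PWM generator, whether expressed in VHDL or assembled from DSP Builder blocks, can be sketched in software as a free-running counter compared against a duty threshold; the period and threshold below are arbitrary illustrations:

```python
# Sketch: counter-based PWM generation. A counter driven by the clock
# wraps at `period`; the output is high while the counter is below the
# duty threshold, giving a duty cycle of threshold/period.

def pwm_samples(period, threshold, n_cycles=1):
    """One output sample per clock tick across n_cycles PWM periods."""
    out = []
    for _ in range(n_cycles):
        for count in range(period):
            out.append(1 if count < threshold else 0)
    return out

wave = pwm_samples(period=100, threshold=25, n_cycles=4)
duty = sum(wave) / len(wave)   # 25/100 -> 0.25 duty cycle
```

For servo control, the threshold would be updated each period to set the pulse width that encodes the target position.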
Dual field theories of quantum computation
International Nuclear Information System (INIS)
Vanchurin, Vitaly
2016-01-01
Given two quantum states of N q-bits, we are interested in finding the shortest quantum circuit consisting of only one- and two-q-bit gates that would transfer one state into the other. We call it the quantum maze problem, for the reasons described in the paper. We argue that in the large-N limit the quantum maze problem is equivalent to the problem of finding a semiclassical trajectory of some lattice field theory (the dual theory) on an N+1 dimensional space-time with geometrically flat, but topologically compact, spatial slices. The spatial fundamental domain is an N-dimensional hyper-rhombohedron, and the temporal direction describes transitions from an arbitrary initial state to an arbitrary target state, so the initial and final dual field theory conditions are described by these two quantum computational states. We first consider a complex Klein-Gordon field theory and argue that it can only be used to study the shortest quantum circuits that do not involve generators composed of tensor products of multiple Pauli Z matrices. Since such a situation is not generic, we call it the Z-problem. On the dual field theory side the Z-problem corresponds to massless excitations of the phase (Goldstone modes) that we attempt to fix using the Higgs mechanism. The simplest dual theory that does not suffer from the massless excitations (or from the Z-problem) is the Abelian-Higgs model, which we argue can be used for finding the shortest quantum circuits. Since every trajectory of the field theory is mapped directly to a quantum circuit, the shortest quantum circuits are identified with semiclassical trajectories. We also discuss the complexity of an actual algorithm that uses the dual theory perspective for solving the quantum maze problem and compare it with a geometric approach. We argue that it might be possible to solve the problem in time sub-exponential in 2^N, but for that we must consider the Klein-Gordon theory on curved spatial geometry and/or more complicated (than N
Preliminary Phase Field Computational Model Development
Energy Technology Data Exchange (ETDEWEB)
Li, Yulan [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Hu, Shenyang Y. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Xu, Ke [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Suter, Jonathan D. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); McCloy, John S. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Johnson, Bradley R. [Pacific Northwest National Lab. (PNNL), Richland, WA (United States); Ramuhalli, Pradeep [Pacific Northwest National Lab. (PNNL), Richland, WA (United States)
2014-12-15
This interim report presents progress towards the development of meso-scale models of magnetic behavior that incorporate microstructural information. Modeling magnetic signatures in irradiated materials with complex microstructures (such as structural steels) is a significant challenge. The complexity is addressed incrementally, using monocrystalline Fe (i.e., ferrite) films as model systems to develop and validate initial models, followed by polycrystalline Fe films and by more complicated and representative alloys. In addition, the modeling incrementally addresses the inclusion of other major phases (e.g., martensite, austenite), minor magnetic phases (e.g., carbides, FeCr precipitates), and minor nonmagnetic phases (e.g., Cu precipitates, voids). The focus of the magnetic modeling is on phase-field models, based on the numerical solution of the Landau-Lifshitz-Gilbert equation. From the computational standpoint, phase-field modeling allows the simulation of systems large enough that relevant defect structures and their effects on functional properties like magnetism can be simulated. To date, two phase-field models have been generated in support of this work. First, a bulk iron model with periodic boundary conditions was generated as a proof of concept to investigate major-loop effects of single versus polycrystalline bulk iron and the effects of single non-magnetic defects. More recently, to support the experimental program herein using iron thin films, a new model was generated that uses finite boundary conditions representing surfaces and edges. This model has provided key insights into the domain structures observed in magnetic force microscopy (MFM) measurements. Simulation results for single-crystal thin-film iron indicate the feasibility of the model for determining magnetic domain wall thickness and mobility in an externally applied field. Because the phase-field model dimensions are limited relative to the size of most specimens used in
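The Landau-Lifshitz-Gilbert dynamics underlying these phase-field models can be sketched for a single magnetization vector; the parameters are in reduced units and purely illustrative, not the report's simulation settings:

```python
# Sketch: explicit-Euler integration of the Landau-Lifshitz-Gilbert
# equation for one magnetization unit vector m in an effective field h:
#   dm/dt = -gamma/(1+alpha^2) * [ m x h + alpha * m x (m x h) ]
# Reduced units; the real phase-field model solves this per grid cell.

def cross(a, b):
    return (a[1]*b[2] - a[2]*b[1],
            a[2]*b[0] - a[0]*b[2],
            a[0]*b[1] - a[1]*b[0])

def llg_step(m, h, gamma, alpha, dt):
    """One step of precession plus Gilbert damping; renormalizes |m| = 1."""
    mxh = cross(m, h)
    mxmxh = cross(m, mxh)
    pref = -gamma / (1.0 + alpha * alpha)
    m_new = tuple(m[i] + dt * pref * (mxh[i] + alpha * mxmxh[i])
                  for i in range(3))
    norm = sum(c * c for c in m_new) ** 0.5
    return tuple(c / norm for c in m_new)

m = (1.0, 0.0, 0.0)            # initial magnetization, transverse to field
h = (0.0, 0.0, 1.0)            # applied field along z
for _ in range(2000):
    m = llg_step(m, h, gamma=1.0, alpha=0.1, dt=0.01)
# Damping relaxes m toward the field direction while |m| stays 1.
```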
Field microcomputerized multichannel γ ray spectrometer based on notebook computer
International Nuclear Information System (INIS)
Jia Wenyi; Wei Biao; Zhou Rongsheng; Li Guodong; Tang Hong
1996-01-01
Field γ-ray spectrometry currently cannot measure a full γ-ray spectrum rapidly. A field microcomputerized multichannel γ-ray spectrometer based on a notebook computer is therefore described, with which the full γ-ray spectrum can be measured rapidly in the field.
Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea
2010-01-01
Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amou...
An Accurate Method for Computing the Absorption of Solar Radiation by Water Vapor
Chou, M. D.
1980-01-01
The method is based upon molecular line parameters and makes use of a far wing scaling approximation and k distribution approach previously applied to the computation of the infrared cooling rate due to water vapor. Taking into account the wave number dependence of the incident solar flux, the solar heating rate is computed for the entire water vapor spectrum and for individual absorption bands. The accuracy of the method is tested against line by line calculations. The method introduces a maximum error of 0.06 C/day. The method has the additional advantage over previous methods in that it can be applied to any portion of the spectral region containing the water vapor bands. The integrated absorptances and line intensities computed from the molecular line parameters were compared with laboratory measurements. The comparison reveals that, among the three different sources, absorptance is the largest for the laboratory measurements.
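The k-distribution idea the method builds on can be sketched compactly: band-averaged transmission along an absorber path is a weighted sum of exponentials, one per representative absorption coefficient, instead of a costly line-by-line spectral integration. The k-values and weights below are illustrative placeholders, not Chou's tabulated values:

```python
import math

# Sketch of the k-distribution method: within a band, replace the
# line-by-line integral over wavenumber by a few representative
# absorption coefficients k_i with weights w_i (fractions of the band).

K_VALUES = [0.1, 1.0, 10.0, 100.0]   # absorption coefficients (cm^2/g)
WEIGHTS = [0.4, 0.3, 0.2, 0.1]       # band fraction in each k-bin (sums to 1)

def band_transmission(u):
    """Mean band transmission for water-vapor path amount u (g/cm^2)."""
    return sum(w * math.exp(-k * u) for k, w in zip(K_VALUES, WEIGHTS))

def band_absorption(u):
    """Mean band absorption; heating rates follow from its path derivative."""
    return 1.0 - band_transmission(u)
```

Summing such band absorptions weighted by the incident solar flux in each band gives the heating-rate computation the abstract describes.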
International Nuclear Information System (INIS)
Bonetto, Paola; Qi, Jinyi; Leahy, Richard M.
1999-01-01
We describe a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, we derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. We show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow us to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
Accurate Computed Enthalpies of Spin Crossover in Iron and Cobalt Complexes
DEFF Research Database (Denmark)
Kepp, Kasper Planeta; Cirera, J
2009-01-01
Despite their importance in many chemical processes, the relative energies of spin states of transition metal complexes have so far been haunted by large computational errors. By the use of six functionals, B3LYP, BP86, TPSS, TPSSh, M06L, and M06L, this work studies nine complexes (seven with iron...
Bonetto, P.; Qi, Jinyi; Leahy, R. M.
2000-08-01
Describes a method for computing linear observer statistics for maximum a posteriori (MAP) reconstructions of PET images. The method is based on a theoretical approximation for the mean and covariance of MAP reconstructions. In particular, the authors derive here a closed form for the channelized Hotelling observer (CHO) statistic applied to 2D MAP images. The theoretical analysis models both the Poisson statistics of PET data and the inhomogeneity of tracer uptake. The authors show reasonably good correspondence between these theoretical results and Monte Carlo studies. The accuracy and low computational cost of the approximation allow the authors to analyze the observer performance over a wide range of operating conditions and parameter settings for the MAP reconstruction algorithm.
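The CHO statistic itself has a standard form: project images onto a few channels, then apply the Hotelling template w = S⁻¹Δv, where S is the channel covariance and Δv the mean signal difference. A two-channel toy case is worked below; real use would involve several frequency-selective channels, and all numbers are illustrative:

```python
# Sketch: channelized Hotelling observer statistic in a 2-channel toy case.
# Template: w = S^{-1} (v1 - v0); test statistic: t(v) = w . v.
# The 2x2 inverse is solved by hand; numbers are illustrative only.

def hotelling_template(S, dv):
    """Solve S w = dv for a 2x2 channel covariance matrix S."""
    det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
    return ((S[1][1] * dv[0] - S[0][1] * dv[1]) / det,
            (S[0][0] * dv[1] - S[1][0] * dv[0]) / det)

def cho_statistic(w, v):
    """Observer response to a channelized image vector v."""
    return w[0] * v[0] + w[1] * v[1]

S = ((2.0, 0.5), (0.5, 1.0))    # channel covariance of the reconstructions
dv = (1.0, 0.2)                 # mean signal difference in channel space
w = hotelling_template(S, dv)
snr2 = cho_statistic(w, dv)     # observer detectability, SNR^2 = dv.S^-1.dv
```

The contribution of the paper is computing the mean and covariance entering S and dv theoretically, rather than from Monte Carlo reconstruction ensembles.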
Matrix-vector multiplication using digital partitioning for more accurate optical computing
Gary, C. K.
1992-01-01
Digital partitioning offers a flexible means of increasing the accuracy of an optical matrix-vector processor. This algorithm can be implemented with the same architecture required for a purely analog processor, which gives optical matrix-vector processors the ability to perform high-accuracy calculations at speeds comparable with or greater than those of electronic computers, as well as the ability to perform analog operations at much greater speed. Digital partitioning is compared with digital multiplication by analog convolution, residue number systems, and redundant number representation in terms of the size and speed required for an equivalent throughput, as well as in terms of the hardware requirements. Digital partitioning and digital multiplication by analog convolution are found to be the most efficient algorithms when coding time and hardware are considered, and the architecture for digital partitioning permits the use of analog computations to provide the greatest throughput for a single processor.
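The partitioning scheme can be sketched numerically: each vector component is split into low-precision digits that the analog optics can represent exactly, one analog matrix-vector pass is performed per digit plane, and the partial products are recombined electronically with base shifts. The base and digit count below are illustrative choices:

```python
# Sketch of digital partitioning for an optical matrix-vector multiplier.
# Each "analog" pass only ever sees small digits (0..BASE-1), so limited
# analog accuracy suffices; full precision is recovered by shift-and-add.

BASE = 4  # digit size the analog stage can resolve (illustrative)

def digits(x, n):
    """Little-endian base-BASE digits of a non-negative integer."""
    return [(x // BASE ** i) % BASE for i in range(n)]

def matvec_partitioned(M, v, n_digits=4):
    planes = [[digits(x, n_digits)[d] for x in v] for d in range(n_digits)]
    result = [0] * len(M)
    for d, plane in enumerate(planes):
        # one low-precision "analog" matrix-vector pass per digit plane
        partial = [sum(M[r][c] * plane[c] for c in range(len(v)))
                   for r in range(len(M))]
        for r, p in enumerate(partial):
            result[r] += p * BASE ** d     # electronic shift-and-add
    return result

M = [[3, 1], [2, 5]]
v = [27, 140]
result = matvec_partitioned(M, v)   # -> [221, 754], same as direct M @ v
```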
Magnetostatic fields computed using an integral equation derived from Green's theorems
International Nuclear Information System (INIS)
Simkin, J.; Trowbridge, C.W.
1976-04-01
A method of computing magnetostatic fields is described that is based on a numerical solution of the integral equation obtained from Green's Theorems. The magnetic scalar potential and its normal derivative on the surfaces of volumes are found by solving a set of linear equations. These are obtained from Green's Second Theorem and the continuity conditions at interfaces between volumes. Results from a two-dimensional computer program are presented and these show the method to be accurate and efficient. (author)
An accurate and computationally efficient small-scale nonlinear FEA of flexible risers
Rahmati, MT; Bahai, H; Alfano, G
2016-01-01
This paper presents a highly efficient small-scale, detailed finite-element modelling method for flexible risers which can be effectively implemented in a fully-nested (FE2) multiscale analysis based on computational homogenisation. By exploiting cyclic symmetry and applying periodic boundary conditions, only a small fraction of a flexible pipe is used for a detailed nonlinear finite-element analysis at the small scale. In this model, using three-dimensional elements, all layer components are...
Martinek, Tomas; Duboué-Dijon, Elise; Timr, Štěpán; Mason, Philip E.; Baxová, Katarina; Fischer, Henry E.; Schmidt, Burkhard; Pluhařová, Eva; Jungwirth, Pavel
2018-06-01
We present a combination of force field and ab initio molecular dynamics simulations together with neutron scattering experiments with isotopic substitution that aim at characterizing ion hydration and pairing in aqueous calcium chloride and formate/acetate solutions. Benchmarking against neutron scattering data on concentrated solutions together with ion pairing free energy profiles from ab initio molecular dynamics allows us to develop an accurate calcium force field which accounts in a mean-field way for electronic polarization effects via charge rescaling. This refined calcium parameterization is directly usable for standard molecular dynamics simulations of processes involving this key biological signaling ion.
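The charge-rescaling idea (the electronic continuum correction) amounts to dividing formal ionic charges by the square root of the electronic dielectric constant of the solvent; the snippet below sketches it, with the water value taken as an assumption of this illustration:

```python
import math

# Sketch: mean-field electronic polarization via charge rescaling.
# Formal ionic charges are divided by sqrt(eps_el), the square root of
# the electronic (high-frequency) dielectric constant of the solvent.

EPS_ELECTRONIC_WATER = 1.78   # high-frequency dielectric constant of water

def rescaled_charge(formal_charge):
    return formal_charge / math.sqrt(EPS_ELECTRONIC_WATER)

q_ca = rescaled_charge(+2.0)   # Ca2+ -> about +1.5 e
q_cl = rescaled_charge(-1.0)   # Cl-  -> about -0.75 e
```

The remaining force-field parameters (ion radii, etc.) are then refined against the neutron scattering and ab initio pairing data, as the abstract describes.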
Frangi, Attilio; Guerrieri, Andrea; Boni, Nicoló
2017-04-06
Electrostatically actuated torsional micromirrors are key elements in Micro-Opto-Electro-Mechanical Systems. When forced by means of in-plane comb-fingers, the dynamics of the main torsional response is known to be strongly nonlinear and governed by parametric resonance. Here, in order to also trace unstable branches of the mirror response, we implement a simplified continuation method with arc-length control and propose an innovative technique, based on finite elements and the concept of the material derivative, to compute the electrostatic stiffness, i.e., the derivative of the torque with respect to the torsional angle, as required by the continuation approach.
Tunnel ionization of atoms and molecules: How accurate are the weak-field asymptotic formulas?
Labeye, Marie; Risoud, François; Maquet, Alfred; Caillat, Jérémie; Taïeb, Richard
2018-05-01
Weak-field asymptotic formulas for the tunnel ionization rate of atoms and molecules in strong laser fields are often used for the analysis of strong-field recollision experiments. We investigate their accuracy and domain of validity for different model systems by comparing them with exact numerical results, obtained by solving the time-dependent Schrödinger equation. We find that corrections that take the dc-Stark shift into account are a simple and efficient way to improve the formula. Furthermore, analyzing the different approximations used, we show that error compensation plays a crucial role in the fair agreement between exact and analytical results.
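For a concrete sense of what a weak-field asymptotic formula looks like, the leading-order static-field ionization rate of ground-state hydrogen in atomic units is w(F) = (4/F) exp(-2/(3F)). This textbook special case (not the molecular formulas benchmarked in the paper) already shows the characteristic exponential sensitivity to the field strength:

```python
import math

def h_tunnel_rate(F):
    """Leading-order weak-field asymptotic ionization rate (atomic units)
    of ground-state hydrogen in a static field F (atomic units)."""
    return (4.0 / F) * math.exp(-2.0 / (3.0 * F))

# In the tunneling regime the rate grows by orders of magnitude
# for modest increases of the field strength.
rates = [h_tunnel_rate(F) for F in (0.03, 0.05, 0.08)]
```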
DeepBound: accurate identification of transcript boundaries via deep convolutional neural fields
Shao, Mingfu; Ma, Jianzhu; Wang, Sheng
2017-01-01
Motivation: Reconstructing the full-length expressed transcripts (a.k.a. the transcript assembly problem) from the short sequencing reads produced by the RNA-seq protocol plays a central role in identifying novel genes and transcripts as well as in studying gene expression and gene function. A crucial step in transcript assembly is to accurately determine the splicing junctions and boundaries of the expressed transcripts from the read alignments. In contrast to the splicing junctions, which can be efficiently detected from spliced reads, the problem of identifying boundaries remains open and challenging, because the signal related to boundaries is noisy and weak.
Alasnag, Mirvat; Umakanthan, Branavan; Foster, Gary P
2008-07-01
Coronary arteriography (CA) is the standard method to image coronary lesions. Multidetector cardiac computerized tomography (MDCT) provides high-resolution images of coronary arteries, allowing a noninvasive alternative to determine lesion type. To date, no studies have assessed the ability of MDCT to categorize coronary lesion types. The objective of this study was to determine the accuracy of lesion type categorization by MDCT using CA as a reference standard. Patients who underwent both MDCT and CA within 2 months of each other were enrolled. MDCT and CA images were reviewed in a blinded fashion. Lesions were categorized according to the SCAI classification system (Types I-IV). The origin, proximal and middle segments of the major arteries were analyzed. Each segment comprised a data point for comparison. Analysis was performed using the Spearman Correlation Test. Four hundred eleven segments were studied, of which 110 had lesions. The lesion distribution was as follows: 35 left anterior descending (LAD), 29 circumflex (Cx), 31 right coronary artery (RCA), 2 ramus intermedius, 8 diagonal, 4 obtuse marginal and 2 left internal mammary arteries. Correlations between MDCT and CA were significant in all major vessels (LAD, Cx, RCA) (p < 0.001). The overall correlation coefficient was 0.67. Concordance was strong for lesion Types II-IV (97%) and poor for Type I (30%). High-risk coronary lesion types can be accurately categorized by MDCT. This ability may allow MDCT to play an important noninvasive role in the planning of coronary interventions.
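The Spearman analysis used in the study can be sketched with the tie-free textbook formula rho = 1 - 6*sum(d^2)/(n*(n^2-1)); the per-segment scores below are hypothetical, and a real analysis of SCAI lesion types would need tie-corrected ranks:

```python
def spearman_rho(x, y):
    """Spearman rank correlation via the classic formula
    rho = 1 - 6*sum(d_i^2)/(n*(n^2-1)), valid when there are no ties."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    d2 = sum((a - b) ** 2 for a, b in zip(rx, ry))
    return 1 - 6 * d2 / (n * (n * n - 1))

# Hypothetical per-segment severity scores from CA and MDCT:
ca   = [1, 2, 3, 4, 5, 6]
mdct = [2, 1, 3, 4, 6, 5]
rho = spearman_rho(ca, mdct)   # about 0.886
```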
Fast multigrid-based computation of the induced electric field for transcranial magnetic stimulation
Laakso, Ilkka; Hirata, Akimasa
2012-12-01
In transcranial magnetic stimulation (TMS), the distribution of the induced electric field, and hence the affected brain areas, depends on the position of the stimulation coil and the individual geometry of the head and brain. The distribution of the induced electric field in realistic anatomies can be modelled using computational methods. However, existing computational methods for accurately determining the induced electric field in realistic anatomical models have suffered from long computation times, typically in the range of tens of minutes or longer. This paper presents a matrix-free implementation of the finite-element method with a geometric multigrid method that can potentially reduce the computation time to several seconds or less, even on an ordinary computer. The performance of the method is studied by computing the induced electric field in two anatomically realistic models. An idealized two-loop coil is used as the stimulating coil. Multiple computational grid resolutions ranging from 2 to 0.25 mm are used. The results show that, for macroscopic modelling of the electric field in an anatomically realistic model, computational grid resolutions of 1 mm or 2 mm appear to provide good numerical accuracy compared to higher resolutions. The multigrid iteration typically converges in fewer than ten iterations, independent of the grid resolution. Even without parallelization, each iteration takes about 1.0 s or 0.1 s for the 1 and 2 mm resolutions, respectively. This suggests that calculating the electric field with sufficient accuracy in real time is feasible.
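The grid-independent convergence reported above is the hallmark of geometric multigrid. A full anatomical solver is beyond an abstract, but the core cycle (matrix-free smoothing plus coarse-grid correction) can be sketched for a 1-D Poisson problem; all sizes, sweep counts and tolerances here are illustrative:

```python
import numpy as np

def residual(u, f, h):
    """Residual of -u'' = f on a uniform grid with Dirichlet boundaries."""
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (2*u[1:-1] - u[:-2] - u[2:]) / h**2
    return r

def smooth(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi smoothing sweeps (matrix-free)."""
    for _ in range(sweeps):
        u[1:-1] += w * 0.5 * (h**2 * f[1:-1] + u[:-2] + u[2:] - 2*u[1:-1])
    return u

def v_cycle(u, f, h):
    n = len(u) - 1
    if n == 2:
        u[1] = (h**2 * f[1] + u[0] + u[2]) / 2    # exact coarsest solve
        return u
    u = smooth(u, f, h)
    r = residual(u, f, h)
    rc = np.zeros(n//2 + 1)                       # full-weighting restriction
    rc[1:-1] = 0.25*r[1:-2:2] + 0.5*r[2:-1:2] + 0.25*r[3::2]
    ec = v_cycle(np.zeros_like(rc), rc, 2*h)
    e = np.zeros_like(u)                          # linear prolongation
    e[::2] = ec
    e[1::2] = 0.5 * (e[:-1:2] + e[2::2])
    return smooth(u + e, f, h)

n = 64
x = np.linspace(0.0, 1.0, n + 1)
f = np.pi**2 * np.sin(np.pi * x)                  # exact solution: sin(pi x)
u = np.zeros_like(x)
for _ in range(12):
    u = v_cycle(u, f, 1.0 / n)
err = np.max(np.abs(u - np.sin(np.pi * x)))       # dominated by h^2 error
```

Each V-cycle reduces the algebraic error by a roughly grid-independent factor, which is what makes real-time field computation plausible.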
Fast and accurate CMB computations in non-flat FLRW universes
Lesgourgues, Julien; Tram, Thomas
2014-09-01
We present a new method for calculating CMB anisotropies in a non-flat Friedmann universe, relying on a very stable algorithm for the calculation of hyperspherical Bessel functions that can be pushed to arbitrary precision levels. We also introduce a new approximation scheme which gradually takes over in the flat-space limit and leads to significant reductions of the computation time. Our method is implemented in the Boltzmann code CLASS. It can be used to benchmark the accuracy of the CAMB code in curved space, which is found to match expectations. For default precision settings, corresponding to 0.1% for scalar temperature spectra and 0.2% for scalar polarisation spectra, our code is two to three times faster, depending on curvature. We also simplify the temperature and polarisation source terms significantly, so the different contributions to the Cℓ's are easy to identify inside the code.
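The need for "a very stable algorithm" for Bessel-type functions comes from the fact that the standard three-term recurrence is numerically unstable in the upward direction for l > x. The standard fix, shown here for ordinary (flat-space) spherical Bessel functions rather than the hyperspherical functions of the paper, is Miller-type downward recurrence with a final normalisation:

```python
import math

def spherical_jn_downward(lmax, x, pad=20):
    """Compute j_0..j_lmax by downward (Miller-type) recurrence.
    Upward recurrence loses all accuracy for l > x; downward recurrence
    is stable and is normalised against the closed form j_0(x) = sin(x)/x."""
    top = lmax + pad
    j_hi, j = 0.0, 1e-30                 # arbitrary seeds at l = top+1, top
    out = [0.0] * (lmax + 1)
    for l in range(top, 0, -1):
        if l <= lmax:
            out[l] = j                   # j holds (unnormalised) j_l here
        j_hi, j = j, (2*l + 1) / x * j - j_hi   # three-term step to j_{l-1}
    out[0] = j
    scale = (math.sin(x) / x) / j
    return [v * scale for v in out]

jl = spherical_jn_downward(2, 1.0)       # j_0(1), j_1(1), j_2(1)
```

The `pad` margin lets the arbitrary seeds relax onto the true solution before the stored range is reached.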
RIO: a new computational framework for accurate initial data of binary black holes
Barreto, W.; Clemente, P. C. M.; de Oliveira, H. P.; Rodriguez-Mueller, B.
2018-06-01
We present a computational framework (Rio) in the ADM 3+1 approach for numerical relativity. This work enables us to carry out high-resolution calculations for initial data of two arbitrary black holes. We use the transverse conformal treatment and the Bowen-York and puncture methods. For the numerical solution of the Hamiltonian constraint we use domain decomposition and the spectral decomposition of Galerkin-Collocation. The nonlinear numerical code solves the set of equations for the spectral modes using the standard Newton-Raphson method, LU decomposition and Gaussian quadratures. We show the convergence of the Rio code. This code allows for easy deployment of large calculations. We show how the spin of one of the black holes is manifest in the conformal factor.
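The Newton-Raphson-with-LU strategy used for the spectral modes can be sketched on a toy two-equation system; `newton_system` and the example equations are illustrative, not the Hamiltonian-constraint equations of the Rio code:

```python
import numpy as np

def newton_system(F, J, x0, tol=1e-12, maxit=50):
    """Newton-Raphson for F(x) = 0: at each step solve J(x) dx = -F(x)
    (np.linalg.solve uses an LU factorisation under the hood)."""
    x = np.asarray(x0, float)
    for _ in range(maxit):
        dx = np.linalg.solve(J(x), -F(x))
        x += dx
        if np.linalg.norm(dx) < tol:
            break
    return x

# Toy nonlinear system: x^2 + y^2 = 4 and x*y = 1
F = lambda v: np.array([v[0]**2 + v[1]**2 - 4.0, v[0]*v[1] - 1.0])
J = lambda v: np.array([[2*v[0], 2*v[1]], [v[1], v[0]]])
root = newton_system(F, J, [2.0, 0.5])
```

Starting close enough to a root, the iteration converges quadratically, which is why Newton-Raphson is the workhorse for nonlinear spectral systems.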
A model for the accurate computation of the lateral scattering of protons in water
Bellinzona, EV; Embriaco, A; Ferrari, A; Fontana, A; Mairani, A; Parodi, K; Rotondi, A; Sala, P; Tessonnier, T
2016-01-01
A pencil beam model for the calculation of the lateral scattering in water of protons of any therapeutic energy and depth is presented. It is based on the full Molière theory, taking into account the energy loss and the effects of mixtures and compounds. Concerning the electromagnetic part, the model has no free parameters and is in very good agreement with the FLUKA Monte Carlo (MC) code. The effects of the nuclear interactions are parametrized with a two-parameter tail function adjusted to MC data calculated with FLUKA. The model, after convolution with the beam and the detector response, is in agreement with recent proton data in water from HIT. The model gives results with the same accuracy as the MC codes based on Molière theory, with a much shorter computing time.
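As a rough illustration of the Gaussian core of Molière scattering (not the full theory with energy loss and nuclear tails used in the paper), the widely used Highland parametrisation reads theta_0 = (13.6 MeV / (beta*c*p)) * sqrt(x/X0) * (1 + 0.038 ln(x/X0)); the water radiation length below is approximate:

```python
import math

M_P = 938.272       # proton mass [MeV/c^2]
X0_WATER = 36.08    # radiation length of water [cm], approximate

def highland_theta0(T_mev, thickness_cm):
    """Highland parametrisation of the Gaussian core of multiple Coulomb
    scattering for a proton of kinetic energy T_mev [MeV] traversing
    thickness_cm of water; returns theta_0 in radians."""
    E = T_mev + M_P                       # total energy [MeV]
    pc = math.sqrt(E**2 - M_P**2)         # momentum times c [MeV]
    beta = pc / E
    t = thickness_cm / X0_WATER           # thickness in radiation lengths
    return 13.6 / (beta * pc) * math.sqrt(t) * (1 + 0.038 * math.log(t))

theta0 = highland_theta0(150.0, 1.0)      # ~7 mrad for a 150 MeV proton
```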
Schulte, Friederike A; Lambers, Floor M; Mueller, Thomas L; Stauber, Martin; Müller, Ralph
2014-04-01
Time-lapsed in vivo micro-computed tomography is a powerful tool to analyse longitudinal changes in the bone micro-architecture. Registration can overcome problems associated with spatial misalignment between scans; however, it requires image interpolation, which might affect the outcome of a subsequent bone morphometric analysis. The impact of the interpolation error itself, though, has not been quantified to date. Therefore, the purpose of this ex vivo study was to quantify the effect of different interpolation schemes [nearest neighbour, tri-linear and B-spline (BSP)] on bone morphometric indices. None of the interpolation schemes led to significant differences between interpolated and non-interpolated images, with the lowest interpolation error found for BSPs (1.4%). Furthermore, depending on the interpolator, the processing order of registration, Gaussian filtration and binarisation played a role. Independent of the interpolator, the present findings suggest that the evaluation of bone morphometry should be done with images registered using greyscale information.
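The ranking of interpolation schemes has a simple 1-D analogue: for a smooth signal resampled at a sub-voxel shift, nearest-neighbour interpolation incurs O(h) error while linear interpolation incurs O(h^2). A sketch with an illustrative sinusoid standing in for greyscale image values:

```python
import numpy as np

h = 2 * np.pi / 40
x = np.arange(41) * h                      # sampling grid
y = np.sin(x)                              # "image" values on the grid
xs = x[:-1] + 0.4 * h                      # sub-voxel shift after registration

nearest = y[np.round(xs / h).astype(int)]  # nearest-neighbour interpolation
linear = np.interp(xs, x, y)               # 1-D analogue of tri-linear

err_nn = np.max(np.abs(nearest - np.sin(xs)))
err_lin = np.max(np.abs(linear - np.sin(xs)))   # much smaller than err_nn
```

Higher-order B-splines push the error down further, consistent with the 1.4% figure quoted for BSP above.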
Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.
Directory of Open Access Journals (Sweden)
Shanis Barnard
Full Text Available Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is
Quick, Accurate, Smart: 3D Computer Vision Technology Helps Assessing Confined Animals' Behaviour.
Barnard, Shanis; Calderara, Simone; Pistocchi, Simone; Cucchiara, Rita; Podaliri-Vulpiani, Michele; Messori, Stefano; Ferri, Nicola
2016-01-01
Mankind directly controls the environment and lifestyles of several domestic species for purposes ranging from production and research to conservation and companionship. These environments and lifestyles may not offer these animals the best quality of life. Behaviour is a direct reflection of how the animal is coping with its environment. Behavioural indicators are thus among the preferred parameters to assess welfare. However, behavioural recording (usually from video) can be very time consuming and the accuracy and reliability of the output rely on the experience and background of the observers. The outburst of new video technology and computer image processing gives the basis for promising solutions. In this pilot study, we present a new prototype software able to automatically infer the behaviour of dogs housed in kennels from 3D visual data and through structured machine learning frameworks. Depth information acquired through 3D features, body part detection and training are the key elements that allow the machine to recognise postures, trajectories inside the kennel and patterns of movement that can be later labelled at convenience. The main innovation of the software is its ability to automatically cluster frequently observed temporal patterns of movement without any pre-set ethogram. Conversely, when common patterns are defined through training, a deviation from normal behaviour in time or between individuals could be assessed. The software accuracy in correctly detecting the dogs' behaviour was checked through a validation process. An automatic behaviour recognition system, independent from human subjectivity, could add scientific knowledge on animals' quality of life in confinement as well as saving time and resources. This 3D framework was designed to be invariant to the dog's shape and size and could be extended to farm, laboratory and zoo quadrupeds in artificial housing. The computer vision technique applied to this software is innovative in non
Directory of Open Access Journals (Sweden)
Saumya Tiwari
Full Text Available Rejection is a common problem after cardiac transplants, leading to a significant number of adverse events and deaths, particularly in the first year of transplantation. The gold standard to identify rejection is endomyocardial biopsy. This technique is complex, cumbersome and requires a lot of expertise in the correct interpretation of stained biopsy sections. Traditional histopathology cannot be used actively or quickly during cardiac interventions or surgery. Our objective was to develop a stain-less approach using an emerging technology, Fourier transform infrared (FT-IR) spectroscopic imaging, to identify different components of cardiac tissue by their chemical and molecular basis aided by computer recognition, rather than by visual examination using optical microscopy. We studied this technique in the assessment of cardiac transplant rejection to evaluate efficacy in an example of complex cardiovascular pathology. We recorded data from human cardiac transplant patients' biopsies, used a Bayesian classification protocol and developed a visualization scheme to observe chemical differences without the need for stains or human supervision. Using receiver operating characteristic curves, we observed probabilities of detection greater than 95% for four out of five histological classes at 10% probability of false alarm at the cellular level, while correctly identifying samples with the hallmarks of the immune response in all cases. The efficacy of manual examination can be significantly increased by observing the inherent biochemical changes in tissues, which enables us to achieve greater diagnostic confidence in an automated, label-free manner. We developed a computational pathology system that gives high-contrast images and seems superior to traditional staining procedures. This study is a prelude to the development of real-time in situ imaging systems, which can assist interventionists and surgeons actively during procedures.
Lumme, E.; Pomoell, J.; Kilpua, E. K. J.
2017-12-01
Estimates of the photospheric magnetic, electric, and plasma velocity fields are essential for studying the dynamics of the solar atmosphere, for example through the derivative quantities of Poynting and relative helicity flux and using the fields to obtain the lower boundary condition for data-driven coronal simulations. In this paper we study the performance of a data processing and electric field inversion approach that requires only high-resolution and high-cadence line-of-sight or vector magnetograms, which we obtain from the Helioseismic and Magnetic Imager (HMI) onboard Solar Dynamics Observatory (SDO). The approach does not require any photospheric velocity estimates, and the lacking velocity information is compensated for using ad hoc assumptions. We show that the free parameters of these assumptions can be optimized to reproduce the time evolution of the total magnetic energy injection through the photosphere in NOAA AR 11158, when compared to recent state-of-the-art estimates for this active region. However, we find that the relative magnetic helicity injection is reproduced poorly, reaching at best a modest underestimation. We also discuss the effect of some of the data processing details on the results, including the masking of the noise-dominated pixels and the tracking method of the active region, neither of which has received much attention in the literature so far. In most cases the effect of these details is small, but when the optimization of the free parameters of the ad hoc assumptions is considered, a consistent use of the noise mask is required. The results found in this paper imply that the data processing and electric field inversion approach that uses only the photospheric magnetic field information offers a flexible and straightforward way to obtain photospheric magnetic and electric field estimates suitable for practical applications such as coronal modeling studies.
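Once E and B are estimated at the photosphere, the vertical Poynting flux density mentioned above follows from S_z = (E x B)_z / mu_0. A minimal sketch with purely illustrative field magnitudes:

```python
import math

MU0 = 4e-7 * math.pi   # vacuum permeability [SI units]

def poynting_z(Ex, Ey, Bx, By):
    """Vertical component of the Poynting flux density,
    S_z = (E x B)_z / mu0 = (Ex*By - Ey*Bx) / mu0  [W/m^2]."""
    return (Ex * By - Ey * Bx) / MU0

# Illustrative magnitudes only: E in V/m, B in tesla.
s = poynting_z(100.0, -50.0, 0.05, 0.02)
```

Integrating S_z over the masked active-region pixels and over time yields the total magnetic energy injection discussed in the paper.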
Creating space for biodiversity by planning swath patterns and field margins using accurate geometry
Bruin, de S.; Heijting, S.; Klompe, A.; Lerink, P.; Vonk, M.; Wal, van der T.
2009-01-01
Potential benefits of field margins or boundary strips include promotion of biodiversity and farm wildlife, maintaining landscape diversity, exploiting pest predators and parasites and enhancing crop pollinator populations. In this paper we propose and demonstrate a method to relocate areas of
Gray, Alan; Harlen, Oliver G; Harris, Sarah A; Khalid, Syma; Leung, Yuk Ming; Lonsdale, Richard; Mulholland, Adrian J; Pearson, Arwen R; Read, Daniel J; Richardson, Robin A
2015-01-01
Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.
Energy Technology Data Exchange (ETDEWEB)
Bangga, Galih; Weihing, Pascal; Lutz, Thorsten; Krämer, Ewald [University of Stuttgart, Stuttgart (Germany)
2017-05-15
The present study focuses on the impact of the grid on accurate prediction of the MEXICO rotor under stalled conditions. Two different blade mesh topologies, O and C-H meshes, and two different grid resolutions are tested for several time step sizes. The simulations are carried out using delayed detached-eddy simulation (DDES) with two eddy-viscosity RANS turbulence models, namely Spalart-Allmaras (SA) and Menter's shear stress transport (SST) k-ω. A high-order spatial discretization, the weighted essentially non-oscillatory (WENO) scheme, is used in these computations. The results are validated against measurement data with regard to the sectional loads and the chordwise pressure distributions. The C-H mesh topology is observed to give the best results employing the SST k-ω turbulence model, but the computational cost is higher as the grid contains a wake block that increases the number of cells.
Computation of Surface Integrals of Curl Vector Fields
Hu, Chenglie
2007-01-01
This article presents a way of computing a surface integral when the vector field of the integrand is a curl field. Presented in some advanced calculus textbooks such as [1], the technique, as the author experienced, is simple and applicable. The computation is based on Stokes' theorem in 3-space calculus, and thus provides not only a means to…
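The technique can be checked numerically: by Stokes' theorem, the flux of curl F through any surface bounded by the unit circle equals the line integral of F around that circle. For the illustrative field F = (-y, x, 0), whose curl is (0, 0, 2), both sides equal 2*pi:

```python
import math

# Line integral of F = (-y, x, 0) around the unit circle, approximated
# with straight segments and trapezoidal averaging of the integrand.
n = 100000
line = 0.0
for k in range(n):
    t0, t1 = 2 * math.pi * k / n, 2 * math.pi * (k + 1) / n
    x0, y0 = math.cos(t0), math.sin(t0)
    x1, y1 = math.cos(t1), math.sin(t1)
    line += -0.5 * (y0 + y1) * (x1 - x0) + 0.5 * (x0 + x1) * (y1 - y0)

# Surface side: flux of curl F = (0, 0, 2) through the unit disk is
# 2 * area = 2*pi, matching the line integral above.
```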
Computational lens for the near field
DEFF Research Database (Denmark)
Carney, P. Scott; Franzin, Richard A.; Bozhevolnyi, Sergey I.
2004-01-01
A method is presented to reconstruct the structure of a scattering object from data acquired with a photon scanning tunneling microscope. The data may be understood to form a Gabor-type near-field hologram and are obtained at a distance from the sample where the field is defocused and normally...
Are rapid population estimates accurate? A field trial of two different assessment methods.
Grais, Rebecca F; Coulombier, Denis; Ampuero, Julia; Lucas, Marcelino E S; Barretto, Avertino T; Jacquier, Guy; Diaz, Francisco; Balandine, Serge; Mahoudeau, Claude; Brown, Vincent
2006-09-01
Emergencies resulting in large-scale displacement often lead to populations resettling in areas where basic health services and sanitation are unavailable. To plan relief-related activities quickly, rapid population size estimates are needed. The currently recommended Quadrat method estimates total population by extrapolating the average population size living in square blocks of known area to the total site surface. An alternative approach, the T-Square, provides a population estimate based on analysis of the spatial distribution of housing units taken throughout a site. We field tested both methods and validated the results against a census in Esturro Bairro, Beira, Mozambique. Compared to the census (population: 9,479), the T-Square yielded a better population estimate (9,523) than the Quadrat method (7,681; 95% confidence interval: 6,160-9,201), but was more difficult for field survey teams to implement. Although applicable only to similar sites, several general conclusions can be drawn for emergency planning.
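The Quadrat method described above is straightforward to express in code: extrapolate the mean count per sampled block of known area to the whole site surface. The counts and areas below are hypothetical, not the Beira survey data:

```python
def quadrat_estimate(counts, quadrat_area_m2, site_area_m2):
    """Quadrat method: extrapolate the mean population of sampled
    square blocks of known area to the total site surface."""
    mean_density = sum(counts) / len(counts) / quadrat_area_m2
    return mean_density * site_area_m2

# Hypothetical survey: 15 quadrats of 25 m x 25 m in a 40-hectare site.
counts = [14, 9, 22, 17, 11, 13, 19, 8, 16, 12, 15, 10, 21, 14, 18]
est = quadrat_estimate(counts, 625.0, 400_000.0)
```

The T-Square alternative instead infers density from the spatial distribution of housing units, which is why it behaved differently in the field trial.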
International Nuclear Information System (INIS)
Schnurr, C.; Nessler, J.; Koenig, D.P.; Meyer, C.; Schild, H.H.; Koebke, J.
2009-01-01
The existing studies concerning image-free navigated implantation of hip resurfacing arthroplasty are based on analysis of the accuracy of conventional biplane radiography. Studies have shown that these measurements in biplane radiography are imprecise and that precision is improved by use of three-dimensional (3D) computed tomography (CT) scans. To date, the accuracy of image-free navigation devices for hip resurfacing has not been investigated using CT scans, and anteversion accuracy has not been assessed at all. Furthermore, no study has tested the reliability of the navigation software concerning the automatically calculated implant position. The purpose of our study was to analyze the accuracy of varus-valgus and anteversion using an image-free hip resurfacing navigation device. The reliability of the software-calculated implant position was also determined. A total of 32 femoral hip resurfacing components were implanted on embalmed human femurs using an image-free navigation device. In all, 16 prostheses were implanted at the position proposed by the navigation software; the other 16 prostheses were inserted in an optimized valgus position. A 3D CT scan was undertaken before and after the operation. The difference between the measured and planned varus-valgus angle averaged 1 deg (mean±standard deviation (SD): group I, 1 deg±2 deg; group II, 1 deg±1 deg). The mean±SD difference between femoral neck anteversion and anteversion of the implant was 4 deg (group I, 4 deg±4 deg; group II, 4 deg±3 deg). The software-calculated implant position differed 7 deg±8 deg from the measured neck-shaft angle. These measured accuracies did not differ significantly between the two groups. Our study proved the high accuracy of the navigation device concerning the most important biomechanical factor: the varus-valgus angle. The software calculation of the proposed implant position has been shown to be inaccurate and needs improvement. Hence, manual adjustment of the
Entertainment computing, social transformation and the quantum field
Rauterberg, G.W.M.; Nijholt, A.; Reidsma, D.; Hondorp, H.
2009-01-01
Entertainment computing is on its way to becoming an established academic discipline. The scope of entertainment computing is quite broad (see the scope of the international journal Entertainment Computing). One unifying idea in this diverse community of entertainment researchers and developers might be a normative position to enhance human living through social transformation. One possible option in this direction is a shared 'conscious' field. Several ideas about a new kind of field based on qu...
Optimal usage of computing grid network in the fields of nuclear fusion computing task
International Nuclear Information System (INIS)
Tenev, D.
2006-01-01
Nowadays, nuclear power is becoming a main source of energy. To make its usage more efficient, scientists have created complicated simulation models, which require powerful computers. Grid computing is the answer to the need for powerful and accessible computing resources. The article examines and estimates the optimal configuration of the grid environment for complicated nuclear fusion computing tasks. (author)
Noyes, Ben F.; Mokaberi, Babak; Mandoy, Ram; Pate, Alex; Huijgen, Ralph; McBurney, Mike; Chen, Owen
2017-03-01
Reducing overlay error via an accurate APC feedback system is one of the main challenges in high volume production of the current and future nodes in the semiconductor industry. The overlay feedback system directly affects the number of dies meeting overlay specification and the number of layers requiring dedicated exposure tools through the fabrication flow. Increasing the former number and reducing the latter number is beneficial for the overall efficiency and yield of the fabrication process. An overlay feedback system requires accurate determination of the overlay error, or fingerprint, on exposed wafers in order to determine corrections to be automatically and dynamically applied to the exposure of future wafers. Since current and future nodes require correction per exposure (CPE), the resolution of the overlay fingerprint must be high enough to accommodate CPE in the overlay feedback system, or overlay control module (OCM). Determining a high resolution fingerprint from measured data requires extremely dense overlay sampling that takes a significant amount of measurement time. For static corrections this is acceptable, but in an automated dynamic correction system this method creates extreme bottlenecks for the throughput of said system as new lots have to wait until the previous lot is measured. One solution is using a less dense overlay sampling scheme and employing computationally up-sampled data to a dense fingerprint. That method uses a global fingerprint model over the entire wafer; measured localized overlay errors are therefore not always represented in its up-sampled output. This paper will discuss a hybrid system shown in Fig. 1 that combines a computationally up-sampled fingerprint with the measured data to more accurately capture the actual fingerprint, including local overlay errors. Such a hybrid system is shown to result in reduced modelled residuals while determining the fingerprint, and better on-product overlay performance.
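The hybrid idea, fitting a smooth global model to sparse measurements and then re-injecting the measured local residuals so they are not averaged away, can be sketched in one dimension; the polynomial fingerprint and the injected local error below are purely illustrative:

```python
import numpy as np

x = np.linspace(-1.0, 1.0, 21)             # sparse overlay sample positions
smooth_fp = 5.0 + 3.0 * x + 2.0 * x**2     # smooth "global" fingerprint
local = np.zeros_like(x)
local[7] = 1.5                             # a localised overlay error
meas = smooth_fp + local                   # what the sparse metrology sees

coef = np.polyfit(x, meas, 2)              # global model (up-sampled part)
model = np.polyval(coef, x)
hybrid = model + (meas - model)            # re-inject measured residuals
```

At the sampled sites the hybrid reproduces the measurement exactly, including the local error the purely global model smooths over; between sites the global model still provides the dense up-sampled fingerprint.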
Are current atomistic force fields accurate enough to study proteins in crowded environments?
Directory of Open Access Journals (Sweden)
Drazen Petrov
2014-05-01
Full Text Available The high concentration of macromolecules in the crowded cellular interior influences different thermodynamic and kinetic properties of proteins, including their structural stabilities, intermolecular binding affinities and enzymatic rates. Moreover, various structural biology methods, such as NMR or different spectroscopies, typically involve samples with relatively high protein concentration. Due to large sampling requirements, however, the accuracy of classical molecular dynamics (MD) simulations in capturing protein behavior at high concentration remains largely untested. Here, we use explicit-solvent MD simulations and a total of 6.4 µs of simulated time to study wild-type (folded) and oxidatively damaged (unfolded) forms of villin headpiece at 6 mM and 9.2 mM protein concentration. We first perform an exhaustive set of simulations with multiple protein molecules in the simulation box using GROMOS 45a3 and 54a7 force fields together with different types of electrostatics treatment and solution ionic strengths. Surprisingly, the two villin headpiece variants exhibit similar aggregation behavior, despite the fact that their estimated aggregation propensities markedly differ. Importantly, regardless of the simulation protocol applied, wild-type villin headpiece consistently aggregates even under conditions at which it is experimentally known to be soluble. We demonstrate that aggregation is accompanied by a large decrease in the total potential energy, with not only hydrophobic, but also polar residues and backbone contributing substantially. The same effect is directly observed for two other major atomistic force fields (AMBER99SB-ILDN and CHARMM22-CMAP), as well as indirectly shown for two additional ones (AMBER94, OPLS-AAL), and is possibly due to a general overestimation of the potential energy of protein-protein interactions at the expense of water-water and water-protein interactions. Overall, our results suggest that current MD force fields
International Nuclear Information System (INIS)
Feng, Y.; Sardei, F.; Kisslinger, J.
2005-01-01
The paper presents a new, simple and accurate numerical field-line mapping technique providing the high-quality representation of field lines required by Monte Carlo modeling of plasma edge transport in the complex magnetic boundaries of three-dimensional (3D) toroidal fusion devices. Using a toroidal sequence of precomputed 3D finite flux-tube meshes, the method advances field lines through a simple bilinear, forward/backward-symmetric interpolation at the interfaces between two adjacent flux tubes. It is a reversible field-line mapping (RFLM) algorithm ensuring a continuous and unique reconstruction of field lines at any point of the 3D boundary. The reversibility property has a strong impact on the efficiency of modeling the highly anisotropic plasma edge transport in general closed or open configurations of arbitrary ergodicity, as it avoids artificial cross-field diffusion of the fast parallel transport. For stellarator-symmetric magnetic configurations, which are the standard case for stellarators, the reversibility additionally provides an average cancellation of the radial interpolation errors of field lines circulating around closed magnetic flux surfaces. The RFLM technique has been implemented in the 3D edge transport code EMC3-EIRENE and is used routinely for plasma transport modeling in the boundaries of several low-shear and high-shear stellarators as well as in the boundary of a tokamak with 3D magnetic edge perturbations.
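The forward/backward-symmetric bilinear interpolation at a flux-tube interface can be sketched as a bilinear map from local (u, v) coordinates to an interface cell, together with its Newton inverse for the backward step. The quad geometry below is a made-up stand-in for one cell of the precomputed mesh, not the EMC3-EIRENE data structure; the roundtrip illustrates the reversibility property.

```python
import numpy as np

def bilinear(corners, u, v):
    """Map local (u, v) in [0,1]^2 to physical coordinates on a quad interface cell."""
    p00, p10, p01, p11 = corners
    return (1-u)*(1-v)*p00 + u*(1-v)*p10 + (1-u)*v*p01 + u*v*p11

def invert_bilinear(corners, p, iters=20):
    """Newton inversion of the bilinear map (the backward step of the mapping)."""
    p00, p10, p01, p11 = corners
    uv = np.array([0.5, 0.5])
    for _ in range(iters):
        r = bilinear(corners, *uv) - p
        du = (1-uv[1])*(p10-p00) + uv[1]*(p11-p01)   # d(map)/du
        dv = (1-uv[0])*(p01-p00) + uv[0]*(p11-p10)   # d(map)/dv
        uv -= np.linalg.solve(np.column_stack([du, dv]), r)
    return uv

# A (hypothetical) non-rectangular interface cell
quad = [np.array(c, float) for c in [(0, 0), (1.2, 0.1), (-0.1, 1.0), (1.0, 1.3)]]
uv0 = np.array([0.3, 0.7])
p = bilinear(quad, *uv0)       # forward step
uv_back = invert_bilinear(quad, p)  # backward step recovers (u, v)
```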
Kurz, S
1999-01-01
In this paper a new technique for the accurate calculation of magnetic fields in the end regions of superconducting accelerator magnets is presented. This method couples boundary elements (BEM), which discretize the surface of the iron yoke, with finite elements (FEM) for the modelling of the nonlinear interior of the yoke. The BEM-FEM method is therefore specially suited for the calculation of 3-dimensional effects in the magnets, as the coils and the air regions do not have to be represented in the finite-element mesh and discretization errors only influence the calculation of the magnetization (reduced field) of the yoke. The method has recently been implemented into the CERN-ROXIE program package for the design and optimization of the LHC magnets. The field shape and multipole errors in the two-in-one LHC dipoles, with their coil ends sticking out of the common iron yoke, are presented.
Computing discrete signed distance fields from triangle meshes
DEFF Research Database (Denmark)
Bærentzen, Jakob Andreas; Aanæs, Henrik
2002-01-01
A method for generating a discrete, signed 3D distance field is proposed. Distance fields are used in a number of contexts. In particular the popular level set method is usually initialized by a distance field. The main focus of our work is on simplifying the computation of the sign when generating...
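A minimal brute-force signed distance query can be sketched as follows: exact point-to-triangle distance plus a naive sign taken from the nearest face's normal. This naive sign is exactly what is fragile near edges and vertices, which is the difficulty the proposed method addresses; the tetrahedron test geometry is an illustrative assumption.

```python
import numpy as np

def closest_point_on_triangle(p, a, b, c):
    """Closest point on triangle abc to p (standard Voronoi-region test algorithm)."""
    ab, ac, ap = b - a, c - a, p - a
    d1, d2 = ab @ ap, ac @ ap
    if d1 <= 0 and d2 <= 0:
        return a                                   # vertex region a
    bp = p - b
    d3, d4 = ab @ bp, ac @ bp
    if d3 >= 0 and d4 <= d3:
        return b                                   # vertex region b
    vc = d1*d4 - d3*d2
    if vc <= 0 and d1 >= 0 and d3 <= 0:
        return a + (d1/(d1 - d3))*ab               # edge region ab
    cp = p - c
    d5, d6 = ab @ cp, ac @ cp
    if d6 >= 0 and d5 <= d6:
        return c                                   # vertex region c
    vb = d5*d2 - d1*d6
    if vb <= 0 and d2 >= 0 and d6 <= 0:
        return a + (d2/(d2 - d6))*ac               # edge region ac
    va = d3*d6 - d5*d4
    if va <= 0 and d4 - d3 >= 0 and d5 - d6 >= 0:
        return b + ((d4 - d3)/((d4 - d3) + (d5 - d6)))*(c - b)  # edge region bc
    denom = va + vb + vc                           # interior: barycentric combination
    return a + ab*(vb/denom) + ac*(vc/denom)

def signed_distance(p, verts, faces):
    """Sign from the nearest face's normal; fragile near edges/vertices."""
    best_d, best_s = float("inf"), 1.0
    for f in faces:
        a, b, c = (verts[i] for i in f)
        q = closest_point_on_triangle(p, a, b, c)
        d = float(np.linalg.norm(p - q))
        if d < best_d:
            n = np.cross(b - a, c - a)             # outward for CCW-wound faces
            best_d = d
            best_s = 1.0 if (p - q) @ n >= 0 else -1.0
    return best_s*best_d

# Unit tetrahedron with outward-oriented (CCW) faces
verts = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], float)
faces = [(0, 2, 1), (0, 1, 3), (0, 3, 2), (1, 2, 3)]
inside = signed_distance(np.array([0.2, 0.2, 0.2]), verts, faces)    # negative
outside = signed_distance(np.array([1.0, 1.0, 1.0]), verts, faces)   # positive
```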
Energy Technology Data Exchange (ETDEWEB)
Gray, Alan [The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom); Harlen, Oliver G. [University of Leeds, Leeds LS2 9JT (United Kingdom); Harris, Sarah A., E-mail: s.a.harris@leeds.ac.uk [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Leeds, Leeds LS2 9JT (United Kingdom); Khalid, Syma; Leung, Yuk Ming [University of Southampton, Southampton SO17 1BJ (United Kingdom); Lonsdale, Richard [Max-Planck-Institut für Kohlenforschung, Kaiser-Wilhelm-Platz 1, 45470 Mülheim an der Ruhr (Germany); Philipps-Universität Marburg, Hans-Meerwein Strasse, 35032 Marburg (Germany); Mulholland, Adrian J. [University of Bristol, Bristol BS8 1TS (United Kingdom); Pearson, Arwen R. [University of Leeds, Leeds LS2 9JT (United Kingdom); University of Hamburg, Hamburg (Germany); Read, Daniel J.; Richardson, Robin A. [University of Leeds, Leeds LS2 9JT (United Kingdom); The University of Edinburgh, Edinburgh EH9 3JZ, Scotland (United Kingdom)
2015-01-01
The current computational techniques available for biomolecular simulation are described, and the successes and limitations of each with reference to the experimental biophysical methods that they complement are presented. Despite huge advances in the computational techniques available for simulating biomolecules at the quantum-mechanical, atomistic and coarse-grained levels, there is still a widespread perception amongst the experimental community that these calculations are highly specialist and are not generally applicable by researchers outside the theoretical community. In this article, the successes and limitations of biomolecular simulation and the further developments that are likely in the near future are discussed. A brief overview is also provided of the experimental biophysical methods that are commonly used to probe biomolecular structure and dynamics, and the accuracy of the information that can be obtained from each is compared with that from modelling. It is concluded that progress towards an accurate spatial and temporal model of biomacromolecules requires a combination of all of these biophysical techniques, both experimental and computational.
Multigrid methods for the computation of propagators in gauge fields
International Nuclear Information System (INIS)
Kalkreuter, T.
1992-11-01
In the present work generalizations of multigrid methods for propagators in gauge fields are investigated. We discuss proper averaging operations for bosons and for staggered fermions. An efficient algorithm for numerically computing the averaging kernels C is presented. These kernels can be used not only in deterministic multigrid computations, but also in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies of gauge theories. Actual numerical computations of kernels and propagators are performed in compact four-dimensional SU(2) gauge fields. (orig./HSI)
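The role of coarse-grid correction can be illustrated with a textbook two-grid cycle for the 1D Laplace problem. This is a generic multigrid sketch under simple assumptions (no gauge links, no averaging kernels C), not the gauge-field algorithm of the paper.

```python
import numpy as np

def jacobi(u, f, h, sweeps=3, w=2/3):
    """Weighted-Jacobi smoothing for -u'' = f with homogeneous Dirichlet boundaries."""
    for _ in range(sweeps):
        u[1:-1] = (1 - w)*u[1:-1] + w*0.5*(u[:-2] + u[2:] + h*h*f[1:-1])
    return u

def two_grid(u, f, h):
    """One two-grid cycle: smooth, restrict residual, solve coarse, prolong, smooth."""
    u = jacobi(u, f, h)
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] - (-u[:-2] + 2*u[1:-1] - u[2:])/h**2   # fine-grid residual
    rc = r[::2].copy()                                        # restriction by injection
    nc, H = len(rc), 2*h
    A = (np.diag(2*np.ones(nc - 2))
         - np.diag(np.ones(nc - 3), 1)
         - np.diag(np.ones(nc - 3), -1)) / H**2
    ec = np.zeros(nc)
    ec[1:-1] = np.linalg.solve(A, rc[1:-1])                   # exact coarse solve
    u += np.interp(np.arange(len(u)), np.arange(0, len(u), 2), ec)  # linear prolongation
    return jacobi(u, f, h)

# Model problem: -u'' = pi^2 sin(pi x), exact solution sin(pi x)
n = 65
h = 1.0/(n - 1)
x = np.linspace(0.0, 1.0, n)
f = np.pi**2*np.sin(np.pi*x)
u = np.zeros(n)
for _ in range(5):
    u = two_grid(u, f, h)
err = float(np.max(np.abs(u - np.sin(np.pi*x))))  # small after a few cycles
```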
Directory of Open Access Journals (Sweden)
Theodore D. Katsilieris
2017-03-01
Full Text Available Terrestrial optical wireless communication links have attracted significant worldwide research and commercial interest over the last few years, due to the fact that they offer very high and secure data-rate transmission with relatively low installation and operational costs, and without need of licensing. However, since the propagation path of the information signal, i.e. the laser beam, is the atmosphere, their effectiveness is strongly affected by the atmospheric conditions in the specific area. Thus, system performance depends significantly on rain, fog, hail, atmospheric turbulence, etc. Due to the influence of these effects, it is necessary to study such a communication system very carefully, theoretically and numerically, before its installation. In this work, we present exact and accurately approximated mathematical expressions for the estimation of the average capacity and the outage probability performance metrics, as functions of the link's parameters: the transmitted power, the attenuation due to fog, the ambient noise and the atmospheric turbulence phenomenon. The latter causes the scintillation effect, which results in random and fast fluctuations of the irradiance at the receiver's end. These fluctuations can be studied accurately with statistical methods. Thus, in this work, we use either the lognormal or the gamma-gamma distribution for weak or moderate-to-strong turbulence conditions, respectively. Moreover, using the derived mathematical expressions, we design, implement and present a computational tool for the estimation of these systems' performance, taking into account the parameters of the link and the atmospheric conditions. Furthermore, in order to increase the accuracy of the presented tool, for the cases where the obtained analytical mathematical expressions are complex, the performance results are verified with the numerical estimation of the appropriate integrals. Finally, using
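For the lognormal (weak-turbulence) case, the outage probability has a closed form that can be cross-checked by Monte Carlo sampling, mirroring the abstract's analytic-versus-numerical verification. The threshold and scintillation parameter below are illustrative values, not the paper's link budget.

```python
import math
import random

def outage_prob_lognormal(i_th, sigma_ln):
    """P(I < I_th) for lognormal irradiance normalized so that E[I] = 1."""
    mu = -0.5 * sigma_ln**2                      # ensures unit mean irradiance
    z = (math.log(i_th) - mu) / sigma_ln
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))

# Monte Carlo cross-check with assumed parameters
random.seed(1)
sigma, i_th = 0.3, 0.5
N = 200_000
hits = sum(math.exp(random.gauss(-0.5*sigma**2, sigma)) < i_th for _ in range(N))
analytic, empirical = outage_prob_lognormal(i_th, sigma), hits/N
```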
Computer Forensics Field Triage Process Model
Directory of Open Access Journals (Sweden)
Marcus K. Rogers
2006-06-01
Full Text Available With the proliferation of digital based evidence, the need for the timely identification, analysis and interpretation of digital evidence is becoming more crucial. In many investigations critical information is required while at the scene or within a short period of time - measured in hours as opposed to days. The traditional cyber forensics approach of seizing a system(s)/media, transporting it to the lab, making a forensic image(s), and then searching the entire system for potential evidence, is no longer appropriate in some circumstances. In cases such as child abductions, pedophiles, missing or exploited persons, time is of the essence. In these types of cases, investigators dealing with the suspect or crime scene need investigative leads quickly; in some cases it is the difference between life and death for the victim(s). The Cyber Forensic Field Triage Process Model (CFFTPM) proposes an onsite or field approach for providing the identification, analysis and interpretation of digital evidence in a short time frame, without the requirement of having to take the system(s)/media back to the lab for an in-depth examination or of acquiring a complete forensic image(s). The proposed model adheres to commonly held forensic principles, and does not negate the ability, once the initial field triage is concluded, to transport the system(s)/storage media back to a lab environment for a more thorough examination and analysis. The CFFTPM has been successfully used in various real-world cases, and its investigative importance and pragmatic approach have been amply demonstrated. Furthermore, the derived evidence from these cases has not been challenged in the court proceedings where it has been introduced. The current article describes the CFFTPM in detail, discusses the model's forensic soundness, investigative support capabilities and practical considerations.
Field-programmable custom computing technology architectures, tools, and applications
Luk, Wayne; Pocek, Ken
2000-01-01
Field-Programmable Custom Computing Technology: Architectures, Tools, and Applications brings together in one place important contributions and up-to-date research results in this fast-moving area. In seven selected chapters, the book describes the latest advances in architectures, design methods, and applications of field-programmable devices for high-performance reconfigurable systems. The contributors to this work were selected from the leading researchers and practitioners in the field. It will be valuable to anyone working or researching in the field of custom computing technology. It serves as an excellent reference, providing insight into some of the most challenging issues being examined today.
International Nuclear Information System (INIS)
Kawai, Takeshi; Ebisawa, Toru; Tasaki, Seiji; Akiyoshi, Tsunekazu; Eguchi, Yoshiaki; Hino, Masahiro; Achiwa, Norio.
1995-01-01
The purpose of our study is to develop accurate techniques for controlling the polarization of a long-wavelength neutron beam and to make a thin-film dynamical spin-flip device operated in magnetizing fields of less than 100 gauss and with switching rates of up to 20 kHz. The device would work as a chopper for a polarized neutron beam and as a magnetic switching device for a multilayer neutron interferometer. We have started to develop multilayer polarizing mirrors functioning under magnetizing fields of less than 100 gauss. The multilayers of Permalloy-Ge and Fe-Ge have been produced using the evaporation method under magnetizing fields of about 100 gauss parallel to the Si-wafer substrate surface. The hysteresis loops for in-plane magnetization of the multilayers were measured to assess their feasibility for a polarizing device functioning under very low magnetizing fields. The polarizing efficiencies of the Fe-Ge and Permalloy-Ge multilayers were 95% and 91%, with reflectivities of 50% and 66% respectively, under magnetizing fields of 80 gauss. The report also discusses problems in applying these multilayer polarizing mirrors to ultracold neutrons. (author)
Masso, Majid; Vaisman, Iosif I
2008-09-15
Accurate predictive models for the impact of single amino acid substitutions on protein stability provide insight into protein structure and function. Such models are also valuable for the design and engineering of new proteins. Previously described methods have utilized properties of protein sequence or structure to predict the free energy change of mutants due to thermal (ΔΔG) and denaturant (ΔΔG_H2O) denaturations, as well as mutant thermal stability (ΔT_m), through the application of either computational energy-based approaches or machine learning techniques. However, the accuracy associated with applying these methods separately is frequently far from optimal. We detail a computational mutagenesis technique based on a four-body, knowledge-based, statistical contact potential. For any mutation due to a single amino acid replacement in a protein, the method provides an empirical normalized measure of the ensuing environmental perturbation occurring at every residue position. A feature vector is generated for the mutant by considering perturbations at the mutated position and its six ordered nearest neighbors in the three-dimensional (3D) protein structure. These predictors of stability change are evaluated by applying machine learning tools to large training sets of mutants derived from diverse proteins that have been experimentally studied and described. Predictive models based on our combined approach are either comparable to, or in many cases significantly outperform, previously published results. A web server with supporting documentation is available at http://proteins.gmu.edu/automute.
International Nuclear Information System (INIS)
Chang, Chih-Hao; Liou, Meng-Sing
2007-01-01
In this paper, we propose a new approach to compute compressible multifluid equations. Firstly, a single-pressure compressible multifluid model based on the stratified flow model is proposed. The stratified flow model, which defines different fluids in separated regions, is shown to be amenable to the finite volume method. We can apply the conservation law to each subregion and obtain a set of balance equations. Secondly, the AUSM+ scheme, which was originally designed for compressible gas flow, is extended to solve compressible liquid flows. By introducing additional dissipation terms into the numerical flux, the new scheme, called AUSM+-up, can be applied to both liquid and gas flows. Thirdly, the contribution to the numerical flux due to interactions between different phases is taken into account and solved by the exact Riemann solver. We will show that the proposed approach yields an accurate and robust method for computing compressible multiphase flows involving discontinuities, such as shock waves and fluid interfaces. Several one-dimensional test problems are used to demonstrate the capability of our method, including Ransom's water faucet problem and the air-water shock tube problem. Finally, several two-dimensional problems show the capability to capture fine details and complicated wave patterns in flows having large disparities in the fluid density and velocities, such as interactions between a water shock wave and an air bubble, between an air shock wave and water column(s), and underwater explosions.
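The split Mach and pressure polynomials underlying AUSM+ (the original splittings; the extra AUSM+-up dissipation terms mentioned in the abstract are omitted here) are compact enough to write out, and their consistency properties M⁺+M⁻ = M and P⁺+P⁻ = 1 can be checked directly.

```python
def mach_split(M, beta=1/8):
    """AUSM+ fourth-degree split Mach functions M4±(M)."""
    if abs(M) >= 1.0:
        return max(M, 0.0), min(M, 0.0)          # supersonic: simple upwinding
    Mp = 0.25*(M + 1)**2 + beta*(M*M - 1)**2
    Mm = -0.25*(M - 1)**2 - beta*(M*M - 1)**2
    return Mp, Mm

def pressure_split(M, alpha=3/16):
    """AUSM+ fifth-degree split pressure functions P5±(M)."""
    if abs(M) >= 1.0:
        s = 1.0 if M > 0 else -1.0
        return 0.5*(1 + s), 0.5*(1 - s)
    Pp = 0.25*(M + 1)**2*(2 - M) + alpha*M*(M*M - 1)**2
    Pm = 0.25*(M - 1)**2*(2 + M) - alpha*M*(M*M - 1)**2
    return Pp, Pm
```

The interface numerical flux is then assembled from these splittings evaluated at the left/right interface Mach numbers; the consistency identities guarantee the scheme reduces to central differencing of the convective quantity plus upwind-biased dissipation.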
Asynchronous Distributed Execution of Fixpoint-Based Computational Fields
DEFF Research Database (Denmark)
Lluch Lafuente, Alberto; Loreti, Michele; Montanari, Ugo
2017-01-01
Computational fields are a key ingredient of aggregate programming, a promising software engineering methodology particularly relevant for the Internet of Things. In our approach, space topology is represented by a fixed graph-shaped field, namely a network with attributes on both nodes and arcs, where arcs...
International Nuclear Information System (INIS)
Motomura, Kazuyoshi; Sumino, Hiroshi; Noguchi, Atsushi; Horinouchi, Takashi; Nakanishi, Katsuyuki
2013-01-01
Sentinel node biopsy often results in the identification and removal of multiple nodes as sentinel nodes, although most of these nodes could be non-sentinel nodes. This study investigated whether computed tomography-lymphography (CT-LG) can distinguish sentinel nodes from non-sentinel nodes and whether sentinel nodes identified by CT-LG can accurately stage the axilla in patients with breast cancer. This study included 184 patients with breast cancer and clinically negative nodes. Contrast agent was injected interstitially. The location of sentinel nodes was marked on the skin surface using a CT laser light navigator system. Lymph nodes located just under the marks were first removed as sentinel nodes. Then, all dyed nodes or all hot nodes were removed. The mean number of sentinel nodes identified by CT-LG was significantly lower than that of dyed and/or hot nodes removed (1.1 vs 1.8, p < 0.0001). Twenty-three (12.5%) patients had ≥2 sentinel nodes identified by CT-LG removed, whereas 94 (51.1%) patients had ≥2 dyed and/or hot nodes removed (p < 0.0001). Pathological evaluation demonstrated that 47 (25.5%) of 184 patients had metastasis to at least one node. All 47 patients demonstrated metastases to at least one of the sentinel nodes identified by CT-LG. CT-LG can distinguish sentinel nodes from non-sentinel nodes, and sentinel nodes identified by CT-LG can accurately stage the axilla in patients with breast cancer. Successful identification of sentinel nodes using CT-LG may facilitate image-based diagnosis of metastasis, possibly leading to the omission of sentinel node biopsy.
International Nuclear Information System (INIS)
Hahn, Song Yop
1985-01-01
A method employing infinite elements is described for the magnetic field computation of magnetic circuits with permanent magnets. The system stiffness matrix is derived by a variational approach, while the interfacial boundary conditions between the finite element regions and the infinite element regions are dealt with using a collocation method. The proposed method is applied to simple linear problems, and the numerical results are compared with those of the standard finite element method and the analytic solutions. It is observed that the proposed method gives more accurate results than the standard finite element method for the same computational effort. (Author)
Directory of Open Access Journals (Sweden)
Feng Chai
2016-10-01
Full Text Available High power density outer-rotor motors commonly use water or oil cooling. A reasonable thermal design for outer-rotor air-cooling motors can effectively enhance the power density without the fluid circulating device. Research on the heat dissipation mechanism of an outer-rotor air-cooling motor can provide guidelines for the selection of the suitable cooling mode and the design of the cooling structure. This study investigates the temperature field of the motor through computational fluid dynamics (CFD and presents a method to overcome the difficulties in building an accurate temperature field model. The proposed method mainly includes two aspects: a new method for calculating the equivalent thermal conductivity (ETC of the air-gap in the laminar state and an equivalent treatment to the thermal circuit that comprises a hub, shaft, and bearings. Using an outer-rotor air-cooling in-wheel motor as an example, the temperature field of this motor is calculated numerically using the proposed method; the results are experimentally verified. The heat transfer rate (HTR of each cooling path is obtained using the numerical results and analytic formulas. The influences of the structural parameters on temperature increases and the HTR of each cooling path are analyzed. Thereafter, the overload capability of the motor is analyzed in various overload conditions.
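The equivalent treatment of the hub-shaft-bearing thermal circuit amounts to solving a lumped conductance network for the nodal temperature rises. A generic sketch of such a network solve follows; the node layout, conductance values and loss figures are invented for illustration and are not the paper's model.

```python
import numpy as np

def solve_thermal_network(n, conductances, sources, ambient_links):
    """Temperature rise over ambient for a lumped thermal circuit.
    conductances: {(i, j): G} internal links [W/K]; ambient_links: {i: G} to ambient;
    sources: per-node heat input [W]."""
    G = np.zeros((n, n))
    for (i, j), g in conductances.items():
        G[i, i] += g
        G[j, j] += g
        G[i, j] -= g
        G[j, i] -= g
    for i, g in ambient_links.items():
        G[i, i] += g              # ambient is the reference (0 K rise) node
    return np.linalg.solve(G, sources)

# Hypothetical example: winding (node 0, 100 W loss) -> core (node 1) -> ambient
dT = solve_thermal_network(2, {(0, 1): 5.0}, np.array([100.0, 0.0]), {1: 10.0})
# All 100 W pass through the 10 W/K ambient link: dT = [30., 10.]
```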
Vogt, Natalja; Marochkin, Ilya I; Rykov, Anatolii N
2018-04-18
The accurate molecular structure of picolinic acid has been determined from experimental data and computed at the coupled cluster level of theory. Only one conformer with the O=C-C-N and H-O-C=O fragments in antiperiplanar (ap) positions, ap-ap, has been detected under conditions of the gas-phase electron diffraction (GED) experiment (T_nozzle = 375(3) K). The semiexperimental equilibrium structure, r_e^se, of this conformer has been derived from the GED data taking into account the anharmonic vibrational effects estimated from the ab initio force field. The equilibrium structures of the two lowest-energy conformers, ap-ap and ap-sp (with the synperiplanar H-O-C=O fragment), have been fully optimized at the CCSD(T)_ae level of theory in conjunction with the triple-ζ basis set (cc-pwCVTZ). The quality of the optimized structures has been improved due to extrapolation to the quadruple-ζ basis set. The high accuracy of both GED determination and CCSD(T) computations has been disclosed by a correct comparison of structures having the same physical meaning. The ap-ap conformer has been found to be stabilized by the relatively strong NH-O hydrogen bond of 1.973(27) Å (GED) and predicted to be lower in energy by 16 kJ mol⁻¹ with respect to the ap-sp conformer without a hydrogen bond. The influence of this bond on the structure of picolinic acid has been analyzed within the Natural Bond Orbital model. The possibility of the decarboxylation of picolinic acid has been considered in the GED analysis, but no significant amounts of pyridine and carbon dioxide could be detected. To reveal the structural changes reflecting the mesomeric and inductive effects due to the carboxylic substituent, the accurate structure of pyridine has been also computed at the CCSD(T)_ae level with basis sets from triple- to 5-ζ quality. The comprehensive structure computations for pyridine as well as for
Energy Technology Data Exchange (ETDEWEB)
Adly, A.A., E-mail: adlyamr@gmail.com [Electrical Power and Machines Dept., Faculty of Engineering, Cairo University, Giza 12613 (Egypt); Abd-El-Hafiz, S.K. [Engineering Mathematics Department, Faculty of Engineering, Cairo University, Giza 12613 (Egypt)
2017-07-15
Highlights: • An approach to simulate hysteresis while taking shape anisotropy into consideration. • Utilizing an ensemble of triangular sub-region hysteresis models in field computation. • A novel tool capable of carrying out field computation while keeping track of hysteresis losses. • The approach may be extended to 3D tetrahedral sub-volumes. - Abstract: Field computation in media exhibiting hysteresis is crucial to a variety of applications such as magnetic recording processes and accurate determination of core losses in power devices. Recently, Hopfield neural networks (HNN) have been successfully configured to construct scalar and vector hysteresis models. This paper presents an efficient hysteresis modeling methodology and its implementation in field computation applications. The methodology is based on the application of the integral equation approach to discretized triangular magnetic sub-regions. Within every triangular sub-region, hysteresis properties are realized using a 3-node HNN. Details of the approach and sample computation results are given in the paper.
Harms, Justin D.; Bachmann, Charles M.; Ambeau, Brittany L.; Faulring, Jason W.; Ruiz Torres, Andres J.; Badura, Gregory; Myers, Emily
2017-10-01
Field-portable goniometers are created for a wide variety of applications. Many of these applications require specific types of instruments and measurement schemes and must operate in challenging environments. Therefore, designs are based on the requirements that are specific to the application. We present a field-portable goniometer that was designed for measuring the hemispherical-conical reflectance factor (HCRF) of various soils and low-growing vegetation in austere coastal and desert environments and biconical reflectance factors in laboratory settings. Unlike some goniometers, this system features a requirement for "target-plane tracking" to ensure that measurements can be collected on sloped surfaces, without compromising angular accuracy. The system also features a second upward-looking spectrometer to measure the spatially dependent incoming illumination, an integrated software package to provide full automation, an automated leveling system to ensure a standard frame of reference, a design that minimizes the obscuration due to self-shading to measure the opposition effect, and the ability to record a digital elevation model of the target region. This fully automated and highly mobile system obtains accurate and precise measurements of HCRF in a wide variety of terrain and in less time than most other systems while not sacrificing consistency or repeatability in laboratory environments.
Computational methods in several fields of radiation dosimetry
International Nuclear Information System (INIS)
Paretzke, Herwig G.
2010-01-01
Full text: Radiation dosimetry has to cope with a wide spectrum of applications and requirements in time and size. The ubiquitous presence of various radiation fields or radionuclides in the human home, working, urban or agricultural environment can lead to various dosimetric tasks, starting from radioecology, retrospective and predictive dosimetry, and personal dosimetry, up to measurements of radionuclide concentrations in environmental and food products and, finally, in persons and their excreta. In all these fields, measurements and computational models for the interpretation or understanding of observations are employed explicitly or implicitly. In this lecture some examples of our own computational models will be given from the various dosimetric fields, including a) radioecology (e.g. with the code systems based on ECOSYS, which was developed well before the Chernobyl reactor accident and tested thoroughly afterwards), b) internal dosimetry (improved metabolism models based on our own data), c) external dosimetry (with the new ICRU-ICRP voxel phantom developed by our lab), d) radiation therapy (with Geant4 applied to mixed reactor radiation incident on individualized voxel phantoms), e) some aspects of nanodosimetric track-structure computations (not dealt with in the other presentation of this author). Finally, some general remarks will be made on the high explicit or implicit importance of computational models in radiation protection and other research fields dealing with large systems, as well as on good scientific practices which should generally be followed when developing and applying such computational models.
Vereecken, Carine; Dohogne, Sophie; Covents, Marc; Maes, Lea
2010-06-01
Computer-administered questionnaires have received increased attention for large-scale population research on nutrition. In Belgium-Flanders, Young Adolescents' Nutrition Assessment on Computer (YANA-C) has been developed. In this tool, standardised photographs are available to assist in portion-size estimation. The purpose of the present study is to assess how accurate adolescents are in estimating portion sizes of food using YANA-C. A convenience sample, aged 11-17 years, estimated the amounts of ten commonly consumed foods (breakfast cereals, French fries, pasta, rice, apple sauce, carrots and peas, crisps, creamy velouté, red cabbage, and peas). Two procedures were followed: (1) short-term recall: adolescents (n = 73) self-served their usual portions of the ten foods and estimated the amounts later the same day; (2) real-time perception: adolescents (n = 128) estimated two sets (different portions) of pre-weighed portions displayed near the computer. Self-served portions were, on average, 8 % underestimated; significant underestimates were found for breakfast cereals, French fries, peas, and carrots and peas. Spearman's correlations between the self-served and estimated weights varied between 0.51 and 0.84, with an average of 0.72. The kappa statistics were moderate (>0.4) for all but one item. Pre-weighed portions were, on average, 15 % underestimated, with significant underestimates for fourteen of the twenty portions. Photographs of food items can serve as a good aid in ranking subjects; however, to assess the actual intake at a group level, underestimation must be considered.
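The ranking statistics used in such validation studies (Spearman rank correlation and mean relative underestimation) are straightforward to reproduce; the portion weights below are made-up numbers for illustration, not the study's data.

```python
def ranks(xs):
    """Average ranks (1-based), with ties sharing the mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    r = [0.0]*len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        avg = (i + j)/2 + 1
        for k in range(i, j + 1):
            r[order[k]] = avg
        i = j + 1
    return r

def spearman(x, y):
    """Spearman correlation = Pearson correlation of the ranks."""
    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx)/n, sum(ry)/n
    num = sum((a - mx)*(b - my) for a, b in zip(rx, ry))
    den = (sum((a - mx)**2 for a in rx)*sum((b - my)**2 for b in ry))**0.5
    return num/den

served = [30, 120, 80, 150, 60]      # weighed portions (g), hypothetical
estimated = [25, 110, 85, 120, 50]   # participant estimates (g), hypothetical
rho = spearman(served, estimated)
bias = sum((e - s)/s for e, s in zip(estimated, served))/len(served)  # mean relative error
```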
Experience with a distributed computing system for magnetic field analysis
International Nuclear Information System (INIS)
Newman, M.J.
1978-08-01
The development of a general-purpose computer system, THESEUS, whose initial use has been magnetic field analysis, is described. The system involves several computers connected by data links. Some are small computers with interactive graphics facilities and limited analysis capabilities, and others are large computers for batch execution of analysis programs with heavy processor demands. The system is highly modular for easy extension and highly portable for transfer to different computers. It can easily be adapted for a completely different application. It provides a highly efficient and flexible interface between magnet designers and specialised analysis programs. Both the advantages and the problems experienced are highlighted, together with a mention of possible future developments. (U.K.)
Modeling electric fields in two dimensions using computer aided design
International Nuclear Information System (INIS)
Gilmore, D.W.; Giovanetti, D.
1992-01-01
The authors describe a method for analyzing static electric fields in two dimensions using AutoCAD. The algorithm is coded in LISP and is modeled after Coulomb's law. The software platform allows for facile graphical manipulation of field renderings and supports a wide range of hardcopy-output and data-storage formats. More generally, this application illustrates the ability of computer-aided design (CAD) tools to analyze data that is the solution of known mathematical functions.
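A Coulomb's-law superposition over point charges is the kind of kernel such a LISP routine would evaluate at each plot point. The sketch below (function names are illustrative; strictly it applies the 3D point-charge law at points in a plane) computes the field of a small dipole.

```python
import math

def e_field(charges, x, y, k=8.9875e9):
    """Superpose Coulomb fields of point charges at (x, y); charges = [(q, xq, yq)]."""
    ex = ey = 0.0
    for q, xq, yq in charges:
        dx, dy = x - xq, y - yq
        r2 = dx*dx + dy*dy
        r = math.sqrt(r2)
        ex += k*q*dx/(r2*r)   # E = k q r_hat / r^2, resolved into components
        ey += k*q*dy/(r2*r)
    return ex, ey

# Dipole: +1 nC and -1 nC separated by 2 cm; field at the midpoint points +x
Ex, Ey = e_field([(1e-9, -0.01, 0.0), (-1e-9, 0.01, 0.0)], 0.0, 0.0)
```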
International Nuclear Information System (INIS)
Hoffstaetter, G.H.
1994-12-01
Analyzing stability of particle motion in storage rings contributes to the general field of stability analysis in weakly nonlinear motion. A method which we call pseudo invariant estimation (PIE) is used to compute lower bounds on the survival time in circular accelerators. The pseudo invariants needed for this approach are computed via nonlinear perturbative normal form theory, and the required global maxima of the highly complicated multivariate functions could only be rigorously bounded with an extension of interval arithmetic. The bounds on the survival times are large enough to be relevant; the same is true for the lower bounds on dynamical apertures, which can be computed. The PIE method can lead to novel design criteria with the objective of maximizing the survival time. A major effort in the direction of rigorous predictions only makes sense if accurate models of accelerators are available. Fringe fields often have a significant influence on optical properties, but the computation of fringe-field maps by DA based integration is slower by several orders of magnitude than DA evaluation of the propagator for main-field maps. A novel computation of fringe-field effects called symplectic scaling (SYSCA) is introduced. It exploits the advantages of Lie transformations, generating functions, and scaling properties and is extremely accurate. The computation of fringe-field maps is typically made nearly two orders of magnitude faster. (orig.)
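The role interval arithmetic plays here, producing rigorous enclosures of complicated functions over a whole region, can be illustrated with a minimal sketch; this toy `Interval` class and the example polynomial are hypothetical and far simpler than the extended arithmetic the thesis uses:

```python
class Interval:
    """Toy interval arithmetic: every operation returns an interval
    guaranteed to contain all possible point-wise results."""

    def __init__(self, lo, hi):
        self.lo, self.hi = lo, hi

    def _wrap(self, o):
        return o if isinstance(o, Interval) else Interval(o, o)

    def __add__(self, o):
        o = self._wrap(o)
        return Interval(self.lo + o.lo, self.hi + o.hi)

    def __mul__(self, o):
        o = self._wrap(o)
        p = (self.lo * o.lo, self.lo * o.hi, self.hi * o.lo, self.hi * o.hi)
        return Interval(min(p), max(p))

def enclose(f, *box):
    """Evaluate f on intervals; the result rigorously bounds the range
    of f over the box (typically with some overestimation)."""
    return f(*box)
```

Enclosing f(x, y) = x*x + x*y over the box [-1, 1] x [-1, 1] yields [-2, 2], which safely contains the true range [-0.25, 2] even though plain interval arithmetic overestimates it; rigorously bounding such enclosures is what makes the survival-time estimates rigorous.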
Probe-Hole Field Emission Microscope System Controlled by Computer
Gong, Yunming; Zeng, Haishan
1991-09-01
A probe-hole field emission microscope system, controlled by an Apple II computer, has been developed and operated successfully for measuring the work function of a single crystal plane. The work functions on the clean W(100) and W(111) planes are measured to be 4.67 eV and 4.45 eV, respectively.
Internal and external Field of View: computer games and cybersickness
Vries, S.C. de; Bos, J.E.; Emmerik, M.L. van; Groen, E.L.
2007-01-01
In an experiment with a computer game environment, we studied the effect of Field-of-View (FOV) on cybersickness. In particular, we examined the effect of differences between the internal FOV (IFOV, the FOV which the graphics generator is using to render its images) and the external FOV (EFOV, the
Design Guidance for Computer-Based Procedures for Field Workers
Energy Technology Data Exchange (ETDEWEB)
Oxstrand, Johanna [Idaho National Lab. (INL), Idaho Falls, ID (United States); Le Blanc, Katya [Idaho National Lab. (INL), Idaho Falls, ID (United States); Bly, Aaron [Idaho National Lab. (INL), Idaho Falls, ID (United States)
2016-09-01
Nearly all activities that involve human interaction with nuclear power plant systems are guided by procedures, instructions, or checklists. Paper-based procedures (PBPs) currently used by most utilities have a demonstrated history of ensuring safety; however, improving procedure use could yield significant savings in increased efficiency, as well as improved safety through human performance gains. The nuclear industry is constantly trying to find ways to decrease human error rates, especially human error rates associated with procedure use. As a step toward the goal of improving field workers’ procedure use and adherence, and hence improving human performance and overall system reliability, the U.S. Department of Energy Light Water Reactor Sustainability (LWRS) Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with computer-based procedures (CBPs). PBPs have ensured safe operation of plants for decades, but limitations in paper-based systems do not allow them to reach the full potential for procedures to prevent human errors. The environment in a nuclear power plant is constantly changing, depending on current plant status and operating mode. PBPs, which are static by nature, are being applied to a constantly changing context. This constraint often results in PBPs that are written in a manner that is intended to cover many potential operating scenarios. Hence, the procedure layout forces the operator to search through a large amount of irrelevant information to locate the pieces of information relevant for the task and situation at hand, which can consume valuable time when operators must respond to the situation and can lead operators down an incorrect response path. Other challenges related to use of PBPs are management of multiple procedures, place-keeping, finding the correct procedure for a task, and relying
Magnetic field computations for ISX using GFUN-3D
International Nuclear Information System (INIS)
Cain, W.D.
1977-01-01
This paper presents a comparison between measured magnetic fields and the magnetic fields calculated by the three-dimensional computer program GFUN-3D for the Impurity Study Experiment (ISX). Several iron models are considered, ranging in sophistication from 50 to 222 tetrahedral iron elements. The effects of air gaps and the efforts made to simulate the effects of grain orientation and packing factor are detailed. The results obtained are compared with the measured magnetic fields, and explanations are presented to account for the variations which occur.
An accurate method for computer-generating tungsten anode x-ray spectra from 30 to 140 kV.
Boone, J M; Seibert, J A
1997-11-01
A tungsten anode spectral model using interpolating polynomials (TASMIP) was used to compute x-ray spectra at 1 keV intervals over the range from 30 kV to 140 kV. The TASMIP is not semi-empirical and uses no physical assumptions regarding x-ray production, but rather interpolates measured constant potential x-ray spectra published by Fewell et al. [Handbook of Computed Tomography X-ray Spectra (U.S. Government Printing Office, Washington, D.C., 1981)]. X-ray output measurements (mR/mAs measured at 1 m) were made on a calibrated constant potential generator in our laboratory from 50 kV to 124 kV, and with 0-5 mm added aluminum filtration. The Fewell spectra were slightly modified (numerically hardened) and normalized based on the attenuation and output characteristics of a constant potential generator and metal-insert x-ray tube in our laboratory. Then, using the modified Fewell spectra of different kVs, the photon fluence phi at each 1 keV energy bin (E) over energies from 10 keV to 140 keV was characterized using polynomial functions of the form phi(E) = a0[E] + a1[E] kV + a2[E] kV^2 + ... + an[E] kV^n. A total of 131 polynomial functions were used to calculate accurate x-ray spectra, each function requiring between two and four terms. The resulting TASMIP algorithm produced x-ray spectra that match both the quality and quantity characteristics of the x-ray system in our laboratory. For photon fluences above 10% of the peak fluence in the spectrum, the average percent difference (and standard deviation) between the modified Fewell spectra and the TASMIP photon fluence was -1.43% (3.8%) for the 50 kV spectrum, -0.89% (1.37%) for the 70 kV spectrum, and for the 80, 90, 100, 110, 120, 130 and 140 kV spectra, the mean differences between spectra were all less than 0.20% and the standard deviations were less than approximately 1.1%. The model was also extended to include the effects of generator-induced kV ripple. Finally, the x-ray photon fluence in the units of
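As a hedged illustration of the interpolation scheme described above (not the published coefficients), the per-energy-bin polynomial evaluation can be sketched as:

```python
import numpy as np

def tasmip_spectrum(kv, coeffs):
    """Evaluate a TASMIP-style model: one polynomial in tube potential (kV)
    per 1 keV energy bin, phi(E) = a0[E] + a1[E]*kV + a2[E]*kV**2 + ...

    `coeffs[i]` holds (a0, a1, ..., an) for bin i; the values used in
    practice would be fitted to measured spectra, which are not given here.
    """
    # np.polyval expects the highest-order coefficient first, so reverse
    return np.array([np.polyval(np.asarray(c)[::-1], kv) for c in coeffs])
```

With two toy bins, `tasmip_spectrum(3.0, [[1.0, 2.0], [0.0, 0.0, 1.0]])` evaluates 1 + 2 kV and kV^2 at kV = 3, giving fluences 7 and 9.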
Hamiltonian lattice field theory: Computer calculations using variational methods
International Nuclear Information System (INIS)
Zako, R.L.
1991-01-01
I develop a variational method for systematic numerical computation of physical quantities -- bound state energies and scattering amplitudes -- in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. I present an algorithm for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. I also show how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. I show how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. I discuss the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, I do not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. I apply the method to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. I describe a computer implementation of the method and present numerical results for simple quantum mechanical systems
Hamiltonian lattice field theory: Computer calculations using variational methods
International Nuclear Information System (INIS)
Zako, R.L.
1991-01-01
A variational method is developed for systematic numerical computation of physical quantities (bound state energies and scattering amplitudes) in quantum field theory. An infinite-volume, continuum theory is approximated by a theory on a finite spatial lattice, which is amenable to numerical computation. An algorithm is presented for computing approximate energy eigenvalues and eigenstates in the lattice theory and for bounding the resulting errors. It is shown how to select basis states and choose variational parameters in order to minimize errors. The algorithm is based on the Rayleigh-Ritz principle and Kato's generalizations of Temple's formula. The algorithm could be adapted to systems such as atoms and molecules. It is shown how to compute Green's functions from energy eigenvalues and eigenstates in the lattice theory, and relate these to physical (renormalized) coupling constants, bound state energies and Green's functions. Thus one can compute approximate physical quantities in a lattice theory that approximates a quantum field theory with specified physical coupling constants. The author discusses the errors in both approximations. In principle, the errors can be made arbitrarily small by increasing the size of the lattice, decreasing the lattice spacing and computing sufficiently long. Unfortunately, the author does not understand the infinite-volume and continuum limits well enough to quantify errors due to the lattice approximation. Thus the method is currently incomplete. The method is applied to real scalar field theories using a Fock basis of free particle states. All needed quantities can be calculated efficiently with this basis. The generalization to more complicated theories is straightforward. The author describes a computer implementation of the method and presents numerical results for simple quantum mechanical systems
Directory of Open Access Journals (Sweden)
Shiyong Yan
2015-08-01
We obtained accurate, detailed motion distributions of glaciers in Central Asia by applying a digital elevation model (DEM)-assisted pixel-tracking method to L-band synthetic aperture radar imagery. The paper first briefly introduces and analyzes each component of the offset field, and then describes the method used to efficiently and precisely compensate the topography-related offset caused by the large spatial baseline and rugged terrain with the help of the DEM. The results indicate that the rugged topography not only forms the complex shapes of glaciers, but also affects the glacier velocity estimation, especially with a large spatial baseline. The maximum velocity, 0.85 m·d⁻¹, was observed in the middle part of the Fedchenko Glacier, which is the world’s longest mountain glacier. The motion fluctuation on its main trunk is apparently influenced by mass flowing in from tributaries, as well as by the angles between tributaries and the main stream. The approach presented in this paper proved to be highly appropriate for monitoring glacier motion and will provide valuable, sensitive indicators of current and future climate change for environmental analysis.
Liang, Yufeng; Vinson, John; Pemmaraju, Sri; Drisdell, Walter S; Shirley, Eric L; Prendergast, David
2017-03-03
Constrained-occupancy delta-self-consistent-field (ΔSCF) methods and many-body perturbation theories (MBPT) are two strategies for obtaining electronic excitations from first principles. Using the two distinct approaches, we study the O 1s core excitations that have become increasingly important for characterizing transition-metal oxides and understanding strong electronic correlation. The ΔSCF approach, in its current single-particle form, systematically underestimates the pre-edge intensity for chosen oxides, despite its success in weakly correlated systems. By contrast, the Bethe-Salpeter equation within MBPT predicts much better line shapes. This motivates one to reexamine the many-electron dynamics of x-ray excitations. We find that the single-particle ΔSCF approach can be rectified by explicitly calculating many-electron transition amplitudes, producing x-ray spectra in excellent agreement with experiments. This study paves the way to accurately predicting x-ray near-edge spectral fingerprints for physics and materials science beyond the Bethe-Salpeter equation.
Numerical computation of space shuttle orbiter flow field
Tannehill, John C.
1988-01-01
A new parabolized Navier-Stokes (PNS) code has been developed to compute the hypersonic, viscous chemically reacting flow fields around 3-D bodies. The flow medium is assumed to be a multicomponent mixture of thermally perfect but calorically imperfect gases. The new PNS code solves the gas dynamic and species conservation equations in a coupled manner using a noniterative, implicit, approximately factored, finite difference algorithm. The space-marching method is made well-posed by special treatment of the streamwise pressure gradient term. The code has been used to compute hypersonic laminar flow of chemically reacting air over cones at angle of attack. The results of the computations are compared with the results of reacting boundary-layer computations and show excellent agreement.
Infantino, Angelo; Marengo, Mario; Baschetti, Serafina; Cicoria, Gianfranco; Longo Vaschetto, Vittorio; Lucconi, Giulia; Massucci, Piera; Vichi, Sara; Zagni, Federico; Mostacci, Domiziano
2015-11-01
Biomedical cyclotrons for production of Positron Emission Tomography (PET) radionuclides and radiotherapy with hadrons or ions are widely diffused and established in hospitals as well as in industrial facilities and research sites. Guidelines for site planning and installation, as well as for radiation protection assessment, are given in a number of international documents; however, these well-established guides typically offer analytic methods of calculation of both shielding and materials activation, in approximate or idealized geometry set-ups. The availability of Monte Carlo codes with accurate and up-to-date libraries for transport and interactions of neutrons and charged particles at energies below 250 MeV, together with the continuously increasing power of today's computers, makes systematic use of simulations with realistic geometries possible, yielding equipment- and site-specific evaluation of the source terms, shielding requirements and all quantities relevant to radiation protection. In this work, the well-known Monte Carlo code FLUKA was used to simulate two representative models of cyclotron for PET radionuclide production, including their targetry, and one type of proton therapy cyclotron including the energy selection system. Simulations yield estimates of various quantities of radiological interest, including the effective dose distribution around the equipment, the effective number of neutrons produced per incident proton and the activation of target materials, the structure of the cyclotron, the energy degrader, the vault walls and the soil. The model was validated against experimental measurements and comparison with well-established reference data. Neutron ambient dose equivalent H*(10) was measured around a GE PETtrace cyclotron: an average ratio between experimental measurement and simulations of 0.99±0.07 was found. Saturation yield of 18F, produced by the well-known 18O(p,n)18F reaction, was calculated and compared with the IAEA recommended
Acoustic radiosity for computation of sound fields in diffuse environments
Muehleisen, Ralph T.; Beamer, C. Walter
2002-05-01
The use of image and ray tracing methods (and variations thereof) for the computation of sound fields in rooms is relatively well developed. In their regime of validity, both methods work well for prediction in rooms with small amounts of diffraction and mostly specular reflection at the walls. While extensions of the methods to include diffuse reflections and diffraction have been made, they are limited at best. In the fields of illumination and computer graphics, the ray tracing and image methods are joined by another method called luminous radiative transfer, or radiosity. In radiosity, an energy balance between surfaces is computed assuming diffuse reflection at the reflective surfaces. Because the interaction between surfaces is constant, much of the computation required for sound field prediction with multiple or moving source and receiver positions can be reduced. In acoustics the radiosity method has had little attention because of the problems of diffraction and specular reflection. The utility of radiosity in acoustics and an approach to a useful development of the method for acoustics will be presented. The method looks especially useful for sound level prediction in industrial and office environments. [Work supported by NSF.]
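Because the surface-to-surface interaction depends only on geometry, the energy balance reduces to a single linear solve that can be reused as sources and receivers move; a minimal sketch (the function name and the two-surface example are illustrative):

```python
import numpy as np

def radiosity(emission, reflectance, form_factors):
    """Solve the diffuse energy balance B = E + diag(rho) F B for the
    surface radiosities B.  form_factors[i, j] is the fraction of energy
    leaving surface j that arrives at surface i; it depends only on
    geometry, so the factored system can be reused for many sources.
    """
    n = len(emission)
    a = np.eye(n) - np.diag(reflectance) @ form_factors
    return np.linalg.solve(a, emission)
```

For two surfaces that fully see each other with reflectance 0.5 and unit emission from the first, the balance gives radiosities 4/3 and 2/3, which is easy to verify by hand from B1 = 1 + 0.5 B2 and B2 = 0.5 B1.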
Finite element electromagnetic field computation on the Sequent Symmetry 81 parallel computer
International Nuclear Information System (INIS)
Ratnajeevan, S.; Hoole, H.
1990-01-01
Finite element field analysis algorithms lend themselves to parallelization, and this fact is exploited in this paper to implement a finite element analysis program for electromagnetic field computation on the Sequent Symmetry 81 parallel computer with three processors. In terms of waiting time, the maximum gains are to be made in matrix solution, and therefore this paper concentrates on the gains from parallelizing the solution part of finite element analysis. An outline of how parallelization could be exploited in most finite element operations is given, although the actual implementation of parallelism on the Sequent Symmetry 81 parallel computer was in the sparsity computation, matrix assembly and matrix solution areas. In all cases, the algorithms were modified to suit the parallel programming application rather than allowing the compiler to parallelize existing algorithms.
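The row-wise parallelism exploited in the matrix-solution phase can be sketched with a block-partitioned Jacobi iteration; this is a hedged illustration using Python threads, not the Sequent's actual primitives or the paper's solver:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def jacobi_parallel(a, b, workers=3, iters=200):
    """Row-parallel Jacobi iteration for A x = b: each worker updates a
    block of unknowns from the previous iterate, mirroring the idea of
    splitting the matrix-solution phase across processors.
    Converges for diagonally dominant A.
    """
    n = len(b)
    x = np.zeros(n)
    d = np.diag(a)
    r = a - np.diag(d)                       # off-diagonal part
    blocks = np.array_split(np.arange(n), workers)

    def update(idx):
        # new iterate for this block, computed from the old full iterate
        return idx, (b[idx] - r[idx] @ x) / d[idx]

    for _ in range(iters):
        x_new = np.empty(n)
        with ThreadPoolExecutor(max_workers=workers) as pool:
            for idx, vals in pool.map(update, blocks):
                x_new[idx] = vals
        x = x_new
    return x
```

Because every block reads only the previous iterate, the updates are independent and need no locking, which is what makes Jacobi (unlike Gauss-Seidel) embarrassingly parallel.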
Field computation for two-dimensional array transducers with limited diffraction array beams.
Lu, Jian-Yu; Cheng, Jiqi
2005-10-01
A method is developed for calculating fields produced with a two-dimensional (2D) array transducer. This method decomposes an arbitrary 2D aperture weighting function into a set of limited diffraction array beams. Using the analytical expressions of limited diffraction beams, arbitrary continuous wave (cw) or pulse wave (pw) fields of 2D arrays can be obtained with a simple superposition of these beams. In addition, this method can be simplified and applied to a 1D array transducer of a finite or infinite elevation height. For beams produced with axially symmetric aperture weighting functions, this method can be reduced to the Fourier-Bessel method studied previously where an annular array transducer can be used. The advantage of the method is that it is accurate and computationally efficient, especially in regions that are not far from the surface of the transducer (near field), where it is important for medical imaging. Both computer simulations and a synthetic array experiment are carried out to verify the method. Results (Bessel beam, focused Gaussian beam, X wave and asymmetric array beams) show that the method is accurate as compared to that using the Rayleigh-Sommerfeld diffraction formula and agrees well with the experiment.
Novel computational approaches for the analysis of cosmic magnetic fields
Energy Technology Data Exchange (ETDEWEB)
Saveliev, Andrey [Universitaet Hamburg, Hamburg (Germany); Keldysh Institut, Moskau (Russian Federation)
2016-07-01
In order to give a consistent picture of cosmic, i.e. galactic and extragalactic, magnetic fields, different approaches are possible and often even necessary. Here we present three of them: First, a semianalytic analysis of the time evolution of primordial magnetic fields from which their properties and, subsequently, the nature of present-day intergalactic magnetic fields may be deduced. Second, the use of high-performance computing infrastructure by developing powerful algorithms for (magneto-)hydrodynamic simulations and applying them to astrophysical problems. We are currently developing a code which applies kinetic schemes in massive parallel computing on high performance multiprocessor systems in a new way to calculate both hydro- and electrodynamic quantities. Finally, as a third approach, astroparticle physics might be used, as magnetic fields leave imprints of their properties on charged particles traversing them. Here we focus on electromagnetic cascades by developing a software package based on CRPropa which simulates the propagation of particles from such cascades through the intergalactic medium in three dimensions. This may in particular be used to obtain information about the helicity of extragalactic magnetic fields.
A Neural Information Field Approach to Computational Cognition
2016-11-18
effects of distraction during list memory. These distractions include short and long delays before recall, and continuous distraction (forced rehearsal)... memory encoding and replay in hippocampus. Computational Neuroscience Society (CNS), p. 166, 2014. D. A. Pinotsis, Neural Field Coding of Short Term ... performance of children learning to count in a SPA model; proposed a new SPA model of cognitive load using the N-back task; developed a new model of the
Directory of Open Access Journals (Sweden)
Aliza B Rubenstein
2017-06-01
Multispecificity, the ability of a single receptor protein molecule to interact with multiple substrates, is a hallmark of molecular recognition at protein-protein and protein-peptide interfaces, including enzyme-substrate complexes. The ability to perform structure-based prediction of multispecificity would aid in the identification of novel enzyme substrates and protein interaction partners, and enable the design of novel enzymes targeted towards alternative substrates. The relatively slow speed of current biophysical, structure-based methods limits their use for prediction and, especially, design of multispecificity. Here, we develop a rapid, flexible-backbone, self-consistent mean field theory-based technique, MFPred, for multispecificity modeling at protein-peptide interfaces. We benchmark our method by predicting experimentally determined peptide specificity profiles for a range of receptors: protease and kinase enzymes, and protein recognition modules including SH2, SH3, MHC Class I and PDZ domains. We observe robust recapitulation of known specificities for all receptor-peptide complexes, and comparison with other methods shows that MFPred results in equivalent or better prediction accuracy with a ~10-1000-fold decrease in computational expense. We find that modeling bound peptide backbone flexibility is key to the observed accuracy of the method. We used MFPred to predict, with high accuracy, the impact of receptor-side mutations on the experimentally determined multispecificity of a protease enzyme. Our approach should enable the design of a wide range of altered receptor proteins with programmed multispecificities.
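The self-consistent mean-field idea at the core of such methods, iterating per-position residue probabilities against the average field of all other positions, can be sketched as a generic SCMF loop; MFPred's actual energy function, backbone flexibility, and convergence criteria are not reproduced here:

```python
import numpy as np

def scmf(site_energies, pair_energies, iters=100, beta=1.0):
    """Generic self-consistent mean field: iterate per-site Boltzmann
    probabilities over residue choices until each site's distribution is
    consistent with the mean field generated by all other sites.

    site_energies: (n_sites, n_states); pair_energies: (n, n, s, s).
    Returns per-site probability profiles (a 'specificity profile').
    """
    n, s = site_energies.shape
    p = np.full((n, s), 1.0 / s)
    for _ in range(iters):
        for i in range(n):
            e = site_energies[i].copy()
            for j in range(n):
                if j != i:
                    e += pair_energies[i, j] @ p[j]  # mean-field coupling
            w = np.exp(-beta * e)
            p[i] = w / w.sum()                       # Boltzmann update
    return p
```

With a single site and energies (0, ln 3), the converged profile is simply the Boltzmann distribution (0.75, 0.25), a quick check that the update is normalized correctly.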
Directory of Open Access Journals (Sweden)
Taiji Sohmura
2010-08-01
Two clinical cases of implant placement on three lower molars are introduced: a flap operation using a bone-supported surgical guide, and a flapless operation with a tooth-supported surgical guide and immediate loading with provisional prostheses prepared beforehand. The present simulation and drilling support using the surgical guide may help to perform safe and accurate implant surgery.
DEFF Research Database (Denmark)
Fasano, Andrea; Rasmussen, Henrik K.
2017-01-01
A third order accurate, in time and space, finite element scheme for the numerical simulation of three-dimensional time-dependent flow of the molecular stress function type of fluids in a generalized formulation is presented. The scheme is an extension of the K-BKZ Lagrangian finite element me...
Reservoir computer predictions for the Three Meter magnetic field time evolution
Perevalov, A.; Rojas, R.; Lathrop, D. P.; Shani, I.; Hunt, B. R.
2017-12-01
The source of the Earth's magnetic field is the turbulent flow of liquid metal in the outer core. Our experiment's goal is to create an Earth-like dynamo, to explore the mechanisms and to understand the dynamics of the magnetic and velocity fields. Since it is a complicated system, prediction of the magnetic field is a challenging problem. We present results of mimicking the Three Meter experiment with a reservoir computer deep learning algorithm. The experiment is a three-meter diameter outer sphere and a one-meter diameter inner sphere, with the gap filled with liquid sodium. The spheres can rotate up to 4 and 14 Hz respectively, giving a Reynolds number near 10⁸. Two external electromagnets apply magnetic fields, while an array of 31 external and 2 internal Hall sensors measures the resulting induced fields. We use this magnetic probe data to train a reservoir computer to predict the 3M time evolution and mimic waves in the experiment. Surprisingly accurate predictions can be made for several magnetic dipole time scales. This shows that such a complicated MHD system's behavior can be predicted. We gratefully acknowledge support from NSF EAR-1417148.
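A reservoir computer (echo state network) in its simplest form is a fixed random recurrent network in which only a linear readout is trained; a sketch for a scalar signal follows. All sizes, the spectral radius, and the sine-wave task are illustrative assumptions, not the parameters used on the 3M sensor data:

```python
import numpy as np

def train_reservoir(series, n_res=200, rho=0.9, seed=0, ridge=1e-6):
    """Drive a fixed random reservoir with the signal, then fit a ridge-
    regression readout that predicts the next sample from the state."""
    rng = np.random.default_rng(seed)
    w_in = rng.uniform(-0.5, 0.5, n_res)
    w = rng.uniform(-0.5, 0.5, (n_res, n_res))
    w *= rho / np.max(np.abs(np.linalg.eigvals(w)))  # set spectral radius
    states = np.zeros((len(series) - 1, n_res))
    x = np.zeros(n_res)
    for t in range(len(series) - 1):
        x = np.tanh(w @ x + w_in * series[t])        # reservoir update
        states[t] = x
    y = series[1:]                                   # one-step-ahead target
    w_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res),
                            states.T @ y)
    return w_in, w, w_out

def predict(series, model):
    """Run the trained readout over a driving signal (one-step-ahead)."""
    w_in, w, w_out = model
    x = np.zeros(len(w_in))
    out = []
    for u in series:
        x = np.tanh(w @ x + w_in * u)
        out.append(w_out @ x)
    return np.array(out)
```

Trained on one-step-ahead prediction, the same readout can also be iterated on its own outputs for free-running forecasts of the kind described above.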
Multigrid Methods for the Computation of Propagators in Gauge Fields
Kalkreuter, Thomas
Multigrid methods were invented for the solution of discretized partial differential equations in order to overcome the slowness of traditional algorithms by updates on various length scales. In the present work, generalizations of multigrid methods for propagators in gauge fields are investigated. Gauge fields are incorporated in the algorithms in a covariant way. The kernel C of the restriction operator which averages from one grid to the next coarser grid is defined by projection on the ground state of a local Hamiltonian. The idea behind this definition is that the appropriate notion of smoothness depends on the dynamics. The ground-state projection choice of C can be used in arbitrary dimension and for arbitrary gauge group. We discuss proper averaging operations for bosons and for staggered fermions. The kernels C can also be used in multigrid Monte Carlo simulations, and for the definition of block spins and blocked gauge fields in Monte Carlo renormalization group studies. Actual numerical computations are performed in four-dimensional SU(2) gauge fields. We prove that our proposals for block spins are “good”, using renormalization group arguments. A central result is that the multigrid method works in arbitrarily disordered gauge fields, in principle. It is proved that computations of propagators in gauge fields without critical slowing down are possible when one uses an ideal interpolation kernel. Unfortunately, the idealized algorithm is not practical, but it was important to answer questions of principle. Practical methods are able to outperform the conjugate gradient algorithm in the case of bosons. The case of staggered fermions is harder. Multigrid methods give considerable speed-ups compared to conventional relaxation algorithms, but on lattices up to 18⁴ conjugate gradient is superior.
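For readers unfamiliar with the underlying idea, a plain (gauge-free) V-cycle for the 1D Poisson equation shows the smooth/restrict/correct pattern that the gauge-covariant kernels C generalize; everything here is a textbook sketch, not the SU(2) implementation:

```python
import numpy as np

def smooth(u, f, h, sweeps=3):
    # Gauss-Seidel sweeps on -u'' = f with zero boundary values
    for _ in range(sweeps):
        for i in range(1, len(u) - 1):
            u[i] = 0.5 * (u[i - 1] + u[i + 1] + h * h * f[i])
    return u

def v_cycle(u, f, h):
    """Recursive cycle: smooth, restrict the residual to the coarse
    grid, solve for the coarse correction, interpolate it back, smooth
    again.  Grids have 2^k + 1 points."""
    u = smooth(u, f, h)
    if len(u) <= 5:
        return smooth(u, f, h, sweeps=50)            # coarsest: solve
    r = np.zeros_like(u)
    r[1:-1] = f[1:-1] + (u[:-2] - 2 * u[1:-1] + u[2:]) / (h * h)  # residual
    rc = r[::2].copy()                               # restrict (injection)
    ec = v_cycle(np.zeros_like(rc), rc, 2 * h)       # coarse correction
    e = np.interp(np.arange(len(u)), np.arange(len(u))[::2], ec)  # prolong
    return smooth(u + e, f, h)
```

The coarse levels remove exactly the long-wavelength error components that make single-grid relaxation slow, which is the critical-slowing-down problem the abstract refers to.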
A Computational Model of Cellular Response to Modulated Radiation Fields
Energy Technology Data Exchange (ETDEWEB)
McMahon, Stephen J., E-mail: stephen.mcmahon@qub.ac.uk [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); Butterworth, Karl T. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); McGarry, Conor K. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); Radiotherapy Physics, Northern Ireland Cancer Centre, Belfast Health and Social Care Trust, Northern Ireland (United Kingdom); Trainor, Colman [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); O'Sullivan, Joe M. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); Clinical Oncology, Northern Ireland Cancer Centre, Belfast Health and Social Care Trust, Belfast, Northern Ireland (United Kingdom); Hounsell, Alan R. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom); Radiotherapy Physics, Northern Ireland Cancer Centre, Belfast Health and Social Care Trust, Northern Ireland (United Kingdom); Prise, Kevin M. [Centre for Cancer Research and Cell Biology, Queen's University Belfast, Belfast, Northern Ireland (United Kingdom)
2012-09-01
Purpose: To develop a model to describe the response of cell populations to spatially modulated radiation exposures of relevance to advanced radiotherapies. Materials and Methods: A Monte Carlo model of cellular radiation response was developed. This model incorporated damage from both direct radiation and intercellular communication including bystander signaling. The predictions of this model were compared to previously measured survival curves for a normal human fibroblast line (AGO1522) and prostate tumor cells (DU145) exposed to spatially modulated fields. Results: The model was found to be able to accurately reproduce cell survival both in populations which were directly exposed to radiation and those which were outside the primary treatment field. The model predicts that the bystander effect makes a significant contribution to cell killing even in uniformly irradiated cells. The bystander effect contribution varies strongly with dose, falling from a high of 80% at low doses to 25% and 50% at 4 Gy for AGO1522 and DU145 cells, respectively. This was verified using the inducible nitric oxide synthase inhibitor aminoguanidine to inhibit the bystander effect in cells exposed to different doses, which showed significantly larger reductions in cell killing at lower doses. Conclusions: The model presented in this work accurately reproduces cell survival following modulated radiation exposures, both in and out of the primary treatment field, by incorporating a bystander component. In addition, the model suggests that the bystander effect is responsible for a significant portion of cell killing in uniformly irradiated cells, 50% and 70% at doses of 2 Gy in AGO1522 and DU145 cells, respectively. This description is a significant departure from accepted radiobiological models and may have a significant impact on optimization of treatment planning approaches if proven to be applicable in vivo.
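A toy Monte Carlo with the direct-plus-bystander structure described above can be sketched as follows; the linear-quadratic coefficients, the signaling probabilities, and the assumption that the bystander signal scales with the irradiated fraction are all illustrative choices, not the paper's fitted model:

```python
import numpy as np

def survival(dose_map, alpha=0.15, beta=0.05, by_prob=0.5, by_kill=0.3,
             seed=0):
    """Toy Monte Carlo of a modulated-field exposure: each cell can die
    directly, with linear-quadratic probability 1 - exp(-(a D + b D^2)),
    or via a bystander signal emitted by the irradiated population.
    Returns the surviving fraction.  Parameters are illustrative only.
    """
    rng = np.random.default_rng(seed)
    dose_map = np.asarray(dose_map, float)
    direct = 1.0 - np.exp(-(alpha * dose_map + beta * dose_map ** 2))
    # bystander signal grows with the fraction of cells that saw any dose
    signal = by_prob * np.mean(dose_map > 0)
    killed = (rng.random(dose_map.shape) < direct) | \
             (rng.random(dose_map.shape) < by_kill * signal)
    return 1.0 - killed.mean()
```

Even cells at zero dose in a half-irradiated field can die through the signal term, which is the qualitative out-of-field effect the model above was built to capture.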
A Computational Model of Cellular Response to Modulated Radiation Fields
International Nuclear Information System (INIS)
McMahon, Stephen J.; Butterworth, Karl T.; McGarry, Conor K.; Trainor, Colman; O’Sullivan, Joe M.; Hounsell, Alan R.; Prise, Kevin M.
2012-01-01
Evaluating amber force fields using computed NMR chemical shifts.
Koes, David R; Vries, John K
2017-10-01
NMR chemical shifts can be computed from molecular dynamics (MD) simulations using a template matching approach and a library of conformers containing chemical shifts generated from ab initio quantum calculations. This approach has potential utility for evaluating the force fields that underlie these simulations. Imperfections in force fields generate flawed atomic coordinates. Chemical shifts obtained from flawed coordinates have errors that can be traced back to these imperfections. We use this approach to evaluate a series of AMBER force fields that have been refined over the course of two decades (ff94, ff96, ff99SB, ff14SB, ff14ipq, and ff15ipq). For each force field a series of MD simulations is carried out for eight model proteins. The calculated chemical shifts for the ¹H, ¹⁵N, and ¹³Cα atoms are compared with experimental values. Initial evaluations are based on root mean squared (RMS) errors at the protein level. These results are further refined based on secondary structure and the types of atoms involved in nonbonded interactions. The best chemical shift for identifying force field differences is the shift associated with peptide protons. Examination of the model proteins on a residue by residue basis reveals that force field performance is highly dependent on residue position. Examination of the time course of nonbonded interactions at these sites provides explanations for chemical shift differences at the atomic coordinate level. Results show that the newer ff14ipq and ff15ipq force fields developed with the implicitly polarized charge method perform better than the older force fields. © 2017 Wiley Periodicals, Inc.
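The protein-level comparison reduces to a root-mean-squared error between matched computed and experimental shifts; a minimal sketch, with made-up shift values standing in for real data:

```python
import math

def rms_error(computed, experimental):
    """Root-mean-squared deviation between computed and experimental
    chemical shifts, matched by atom label."""
    diffs = [computed[k] - experimental[k] for k in experimental]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

# Hypothetical peptide-proton shifts (ppm); labels and values are made up
# purely to show the comparison, not taken from the paper.
exp_shifts = {"G2:H": 8.33, "A3:H": 8.10, "V4:H": 7.95}
ff_a = {"G2:H": 8.41, "A3:H": 8.02, "V4:H": 8.05}   # small errors
ff_b = {"G2:H": 8.90, "A3:H": 7.60, "V4:H": 8.55}   # large errors
```

The force field with the smaller RMS error reproduces experiment more faithfully; the paper's refinement is to break this number down by secondary structure and atom type.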
International Nuclear Information System (INIS)
Yoshikawa, T.; Kimura, F.; Koide, T.; Kurita, S.
1990-01-01
Since 1986, a simple computation model for nuclear accidents has been operating in the emergency information center of the Science and Technology Agency of Japan. It was developed by introducing a variational method for the wind field and a random-walk particle model for diffusion on the 50-100 km scale. Furthermore, we developed a new model with dynamic equations and a diffusion equation to predict the wind and diffusion more accurately, including local thermal convection. The momentum equation and the continuity equation are solved numerically under nonhydrostatic and incompressible conditions, using a finite-difference technique. Then, the equation of thermal-energy conservation is solved for potential temperature in the predicted wind field at every time step. The diffusion of nuclear pollutants is computed numerically in the predicted wind field, using diffusion coefficients obtained from the predictive dynamic equations. These computations were verified against meteorological surveys and gas-tracer diffusion experiments over flat land, along a sea shore, and over a mountainous area. Horizontal circulations and vertical convections can be computed at any mesh size from several tens of meters to several kilometers, whereas small vertical convections of less than 1 km or so cannot be represented with the former hydrostatic circulation models. (author)
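The random-walk particle idea can be sketched in a few lines; the constant wind and eddy diffusivity below are stand-ins for the model's predicted, spatially varying fields:

```python
import math
import random

def advect_diffuse(n_particles=5000, steps=100, dt=10.0,
                   u=2.0, v=0.5, K=50.0, seed=7):
    """Toy random-walk particle model: each particle is advected by the
    (here constant) wind (u, v) in m/s and takes a Gaussian random step
    of variance 2*K*dt per axis, K being an eddy diffusion coefficient
    in m^2/s. Returns the mean displacement of the particle cloud."""
    rng = random.Random(seed)
    sigma = math.sqrt(2.0 * K * dt)
    xs, ys = [], []
    for _ in range(n_particles):
        x = y = 0.0
        for _ in range(steps):
            x += u * dt + rng.gauss(0.0, sigma)
            y += v * dt + rng.gauss(0.0, sigma)
        xs.append(x)
        ys.append(y)
    return sum(xs) / n_particles, sum(ys) / n_particles
```

Each particle drifts with the wind and takes Gaussian steps; the cloud's mean tracks the advection while its spread represents diffusion.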
A computational study of the near-field generation and decay of wingtip vortices
International Nuclear Information System (INIS)
Craft, T.J.; Gerasimov, A.V.; Launder, B.E.; Robinson, C.M.E.
2006-01-01
The numerical prediction of the downstream trailing vortex shed from an aircraft wingtip is a particularly challenging CFD task because, besides predicting the development of the strong vortex itself, one needs to compute accurately the flow over the wing to resolve the boundary layer roll-up and shedding which provide the initial conditions for the free vortex. Computations are here reported of the flow over a NACA 0012 half-wing with rounded wing tip and the near-field wake as measured by [Chow, J.S., Zilliac, G., Bradshaw, P., 1997. Turbulence measurements in the near-field of a wingtip vortex. NASA Tech Mem 110418, NASA.]. The aim is to assess the performance of two turbulence models which, in principle, might be seen as capable of resolving both the three dimensional boundary layer on the wing and the generation and near-field decay of the strongly accelerated vortex that develops from the wingtip. Results using linear and non-linear eddy-viscosity models are presented, but these both exhibit a far too rapid decay of the vortex core. Only a stress-transport (or second-moment) model that satisfies the 'two-component limit', [Lumley, J.L., 1978. Computational modelling of turbulent flows. Adv. Appl. Mech. 18, 123-176.], reproduces the principal features found in the experimental measurements
Sengupta, Arkajyoti; Ramabhadran, Raghunath O; Raghavachari, Krishnan
2014-08-14
In this study we have used the connectivity-based hierarchy (CBH) method to derive accurate heats of formation of a range of biomolecules: 18 amino acids and 10 barbituric acid/uracil derivatives. The hierarchy is based on the connectivity of the different atoms in a large molecule. It results in error-cancellation reaction schemes that are automated, general, and can be readily used for a broad range of organic molecules and biomolecules. Herein, we first locate stable conformational and tautomeric forms of these biomolecules using an accurate level of theory (viz. CCSD(T)/6-311++G(3df,2p)). Subsequently, the heats of formation of the amino acids are evaluated using the CBH-1 and CBH-2 schemes and routinely employed density functionals or wave function-based methods. The calculated heats of formation obtained herein using modest levels of theory are in very good agreement with those obtained using the more expensive W1-F12 and W2-F12 methods on amino acids and with G3 results on barbituric acid derivatives. Overall, the present study (a) highlights the small effect of including multiple conformers in determining the heats of formation of biomolecules and (b), in concurrence with previous CBH studies, shows that use of the more effective error-cancelling isoatomic scheme (CBH-2) results in more accurate heats of formation with modestly sized basis sets along with common density functionals or wave function-based methods.
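The bookkeeping behind any such error-cancellation scheme is a Hess cycle: compute the reaction energy of a balanced scheme, then combine it with the known heats of formation of the reference species. A minimal sketch with illustrative numbers (not real CBH reference data):

```python
def heat_of_formation(reaction_energy, participants):
    """Hess-cycle bookkeeping behind error-cancelling schemes such as CBH.
    For a balanced scheme  target + reactants -> products  with computed
    reaction energy dE and known heats of formation for every reference
    species, dHf(target) = sum(coef * dHf_ref) - dE, where coef > 0 for
    products and coef < 0 for reactants other than the target.
    The numbers used below are illustrative, not real thermochemistry."""
    return sum(coef * dhf for coef, dhf in participants) - reaction_energy
```

The point of CBH is that the scheme is generated automatically from molecular connectivity so that bonding environments match on both sides, making the computed reaction energy, and hence dHf, insensitive to the level of theory.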
Numerical computation of gravitational field for general axisymmetric objects
Fukushima, Toshio
2016-10-01
We developed a numerical method to compute the gravitational field of a general axisymmetric object. The method (I) numerically evaluates a double integral of the ring potential by the split quadrature method using the double exponential rules, and (II) derives the acceleration vector by numerically differentiating the numerically integrated potential by Ridder's algorithm. Numerical comparison with the analytical solutions for a finite uniform spheroid and an infinitely extended object of the Miyamoto-Nagai density distribution confirmed the 13- and 11-digit accuracy of the potential and the acceleration vector computed by the method, respectively. By using the method, we present the gravitational potential contour map and/or the rotation curve of various axisymmetric objects: (I) finite uniform objects covering rhombic spindles and circular toroids, (II) infinitely extended spheroids including Sérsic and Navarro-Frenk-White spheroids, and (III) other axisymmetric objects such as an X/peanut-shaped object like NGC 128, a power-law disc with a central hole like the protoplanetary disc of TW Hya, and a tear-drop-shaped toroid like an axisymmetric equilibrium solution of plasma charge distribution in an International Thermonuclear Experimental Reactor-like tokamak. The method is directly applicable to the electrostatic field and will be easily extended for the magnetostatic field. The FORTRAN 90 programs of the new method and some test results are electronically available.
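Step (II), numerical differentiation of the integrated potential, can be sketched with Ridders' extrapolated central differences (following the well-known dfridr formulation); here it is applied to a point-mass potential Φ(r) = -1/r in units G = M = 1, whose radial derivative at r = 2 is 1/4:

```python
def ridder_derivative(f, x, h=0.5, ntab=10, shrink=1.4):
    """Ridders' method: central differences with successively smaller
    steps, combined by polynomial extrapolation; returns the tableau
    entry with the smallest estimated error."""
    a = [[0.0] * ntab for _ in range(ntab)]
    a[0][0] = (f(x + h) - f(x - h)) / (2.0 * h)
    best, err = a[0][0], float("inf")
    for i in range(1, ntab):
        h /= shrink
        a[0][i] = (f(x + h) - f(x - h)) / (2.0 * h)
        fac = shrink * shrink
        for j in range(1, i + 1):
            # extrapolate to zero step size, order by order
            a[j][i] = (a[j - 1][i] * fac - a[j - 1][i - 1]) / (fac - 1.0)
            fac *= shrink * shrink
            e = max(abs(a[j][i] - a[j - 1][i]), abs(a[j][i] - a[j - 1][i - 1]))
            if e <= err:
                err, best = e, a[j][i]
        if abs(a[i][i] - a[i - 1][i - 1]) >= 2.0 * err:
            break  # higher orders are no longer helping
    return best
```

In the paper's setting f would be the numerically integrated ring potential rather than a closed form, which is exactly why an error-controlled differentiator is needed.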
Transversity results and computations in symplectic field theory
International Nuclear Information System (INIS)
Fabert, Oliver
2008-01-01
Although the definition of symplectic field theory suggests that one has to count holomorphic curves in cylindrical manifolds R x V equipped with a cylindrical almost complex structure J, it is already well-known from Gromov-Witten theory that, due to the presence of multiply-covered curves, we in general cannot achieve transversality for all moduli spaces even for generic choices of J. In this thesis we treat the transversality problem of symplectic field theory in two important cases. In the first part of this thesis we are concerned with the rational symplectic field theory of Hamiltonian mapping tori, which is also called the Floer case. For this observe that in the general geometric setup for symplectic field theory, the contact manifolds can be replaced by mapping tori M_φ of symplectic manifolds (M, ω_M) with symplectomorphisms φ. While the cylindrical contact homology of M_φ is given by the Floer homologies of powers of φ, the other algebraic invariants of symplectic field theory for M_φ provide natural generalizations of symplectic Floer homology. For symplectically aspherical M and Hamiltonian φ we study the moduli spaces of rational curves and prove a transversality result, which does not need the polyfold theory by Hofer, Wysocki and Zehnder and allows us to compute the full contact homology of M_φ ≅ S¹ x M. The second part of this thesis is devoted to the branched covers of trivial cylinders over closed Reeb orbits, which are the trivial examples of punctured holomorphic curves studied in rational symplectic field theory. Since all moduli spaces of trivial curves with virtual dimension one cannot be regular, we use obstruction bundles in order to find compact perturbations making the Cauchy-Riemann operator transversal to the zero section and show that the algebraic count of elements in the resulting regular moduli spaces is zero. Once the analytical foundations of symplectic field theory are established, our result implies that the
O'Kane, Dermot B; Lawrentschuk, Nathan; Bolton, Damien M
2016-01-01
We herein present a case of a 76-year-old gentleman, where prostate-specific membrane antigen positron emission tomography-computed tomography (PSMA PET-CT) was used to accurately detect prostate cancer (PCa), pelvic lymph node (LN) metastasis in the setting of biochemical recurrence following definitive treatment for PCa. The positive PSMA PET-CT result was confirmed with histological examination of the involved pelvic LNs following pelvic LN dissection.
Litvinenko, Alexander
2015-01-05
Simulators capable of computing scattered fields from objects of uncertain shapes are highly useful in electromagnetics and photonics, where device designs are typically subject to fabrication tolerances. Knowledge of statistical variations in scattered fields is useful in ensuring error-free functioning of devices. Oftentimes such simulators use a Monte Carlo (MC) scheme to sample the random domain, where the variables parameterize the uncertainties in the geometry. At each sample, which corresponds to a realization of the geometry, a deterministic electromagnetic solver is executed to compute the scattered fields. However, to obtain accurate statistics of the scattered fields, the number of MC samples has to be large. This significantly increases the total execution time. In this work, to address this challenge, the Multilevel MC (MLMC) scheme is used together with a (deterministic) surface integral equation solver. The MLMC achieves a higher efficiency by “balancing” the statistical errors due to sampling of the random domain and the numerical errors due to discretization of the geometry at each of these samples. Error balancing results in a smaller number of samples requiring coarser discretizations. Consequently, total execution time is significantly shortened.
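The telescoping estimator at the heart of MLMC can be sketched generically; the payoff function and level biases below are toys, not the surface-integral-equation solver:

```python
import random

def mlmc_estimate(payoff, levels, samples_per_level, seed=0):
    """Telescoping multilevel Monte Carlo estimator:
    E[P_L] = E[P_0] + sum_l E[P_l - P_(l-1)].
    `payoff(level, u)` evaluates the quantity of interest at a given
    discretization level for random input u; passing the SAME u to the
    fine and coarse levels is what makes their difference small, so only
    a few samples are needed at the expensive fine levels."""
    rng = random.Random(seed)
    total = 0.0
    for level, n in zip(levels, samples_per_level):
        acc = 0.0
        for _ in range(n):
            u = rng.random()                      # shared randomness
            fine = payoff(level, u)
            coarse = payoff(level - 1, u) if level > 0 else 0.0
            acc += fine - coarse
        total += acc / n
    return total

# toy payoff whose level-l value is u plus a discretization bias 2**-l
toy = lambda level, u: u + 2.0 ** -level
```

Note how the sample counts shrink as the level (and the per-sample cost) grows; this is the error balancing the abstract refers to.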
TE/TM scheme for computation of electromagnetic fields in accelerators
International Nuclear Information System (INIS)
Zagorodnov, Igor; Weiland, Thomas
2005-01-01
We propose a new two-level economical conservative scheme for short-range wake field calculation in three dimensions. The scheme does not have dispersion in the longitudinal direction and is staircase free (second order convergent). Unlike the finite-difference time domain method (FDTD), it is based on a TE/TM like splitting of the field components in time. Additionally, it uses an enhanced alternating direction splitting of the transverse space operator that makes the scheme computationally as effective as the conventional FDTD method. Unlike the FDTD ADI and low-order Strang methods, the splitting error in our scheme is only of fourth order. As numerical examples show, the new scheme is much more accurate on the long-time scale than the conventional FDTD approach
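For contrast with the proposed splitting, the conventional FDTD baseline mentioned in the abstract can be sketched in one dimension (normalized units with c = 1, and a hypothetical soft Gaussian source):

```python
import math

def fdtd_1d(steps=200, n=200, courant=1.0):
    """Conventional 1D Yee leapfrog: E and H live on staggered
    half-grids and are updated alternately. At courant = 1 the 1D
    scheme happens to be free of numerical dispersion (the 'magic'
    time step); in 3D no such choice exists, which motivates the
    dispersion-free longitudinal direction of the TE/TM scheme."""
    E = [0.0] * n
    H = [0.0] * (n - 1)
    for t in range(steps):
        for i in range(n - 1):
            H[i] += courant * (E[i + 1] - E[i])
        for i in range(1, n - 1):
            E[i] += courant * (H[i] - H[i - 1])
        E[n // 2] += math.exp(-((t - 30.0) / 10.0) ** 2)  # soft source
    return E
```

This is only the standard method the paper improves upon, not the TE/TM split scheme itself.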
International Nuclear Information System (INIS)
Keshavarz, Mohammad Hossein; Gharagheizi, Farhad; Shokrolahi, Arash; Zakinejad, Sajjad
2012-01-01
Highlights: ► A novel method is introduced for desk calculation of the toxicity of benzoic acid derivatives. ► There is no need to use QSAR and QSTR methods, which are based on computer codes. ► The predicted results for 58 compounds are more reliable than those predicted by the QSTR method. ► The present method gives good predictions for a further 324 benzoic acid compounds. - Abstract: Most benzoic acid derivatives are toxic and may cause serious public health and environmental problems. Two novel, simple and reliable models are introduced for desk calculation of the toxicity of benzoic acid compounds in mice via oral LD₅₀, with as much reliance to be placed on their answers as one could attach to the outputs of more complex methods. They require only the elemental composition and molecular fragments, without using any computer codes. The first model is based only on the number of carbon and hydrogen atoms; it is improved by several molecular fragments in the second model. For 57 benzoic compounds, for which computed quantitative structure–toxicity relationship (QSTR) results were recently reported, the predictions of the two simple models of the present method are more reliable than the QSTR computations. The present simple method is also tested on a further 324 benzoic acid compounds, including complex molecular structures, which confirms the good forecasting ability of the second model.
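The first model's form, toxicity as a linear function of elemental composition, can be sketched as follows; the coefficients are placeholders, not the paper's fitted values:

```python
def predict_log_ld50(n_carbon, n_hydrogen,
                     intercept=2.0, coef_c=0.05, coef_h=0.03):
    """Shape of the first (elemental-composition) model: a linear
    function of the carbon and hydrogen counts alone. The coefficients
    here are PLACEHOLDERS for illustration only; the second model adds
    correcting terms for specific molecular fragments."""
    return intercept + coef_c * n_carbon + coef_h * n_hydrogen

# e.g. benzoic acid itself is C7H6O2
estimate = predict_log_ld50(7, 6)
```

The appeal of such a form is that it is a desk calculation: no descriptors, regression software, or computer codes are needed at prediction time.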
International Nuclear Information System (INIS)
Xi Weiguo; Stuchly, M.A.; Gandhi, O.P.
1993-01-01
Possible health effects of human exposure to 60 Hz magnetic fields are a subject of increasing concern. An understanding of the coupling of electromagnetic fields to human body tissues is essential for assessment of their biological effects. A method is presented for the computerized simulation of induced electric currents and fields in the bodies of humans and rodents exposed to power-line-frequency magnetic fields. In the impedance method, the body is represented by a 3-dimensional impedance network. The computational model consists of several tens of thousands of cubic numerical cells and thus represents a realistic shape. The modelling for humans is performed with two models: a heterogeneous model based on cross-section anatomy and a homogeneous one using an average tissue conductivity. A summary of the computed results for induced electric currents and fields is presented. It is confirmed that the induced currents are lower than dangerous current levels for most environmental exposures. However, the induced current density varies greatly, with the maximum being at least 10 times larger than the average. This difference is likely to be greater when more detailed anatomy and morphology are considered. 15 refs., 2 figs., 1 tab
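A standard analytic benchmark for such impedance-method codes is the homogeneous conducting disc, for which the induced current density has a closed form; the tissue parameters below are merely representative:

```python
import math

def induced_current_density(sigma, freq, b_peak, radius):
    """Faraday's law around a loop of radius r in a uniform sinusoidal
    field of peak value B gives a peak azimuthal electric field
    E = pi*f*B*r, so the peak induced current density in a homogeneous
    disc of conductivity sigma is J = sigma*pi*f*B*r (A/m^2)."""
    return sigma * math.pi * freq * b_peak * radius

# e.g. tissue-like conductivity 0.2 S/m, 60 Hz, 0.1 mT, 0.1 m loop radius
j_peak = induced_current_density(0.2, 60.0, 1.0e-4, 0.1)
```

The heterogeneous anatomical model departs from this uniform-disc picture, which is exactly why its maxima exceed the average by an order of magnitude.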
Computer Based Procedures for Field Workers - FY16 Research Activities
International Nuclear Information System (INIS)
Oxstrand, Johanna; Bly, Aaron
2016-01-01
The Computer-Based Procedure (CBP) research effort is a part of the Light-Water Reactor Sustainability (LWRS) Program, which provides the technical foundations for licensing and managing the long-term, safe, and economical operation of current nuclear power plants. One of the primary missions of the LWRS program is to help the U.S. nuclear industry adopt new technologies and engineering solutions that facilitate the continued safe operation of the plants and extension of the current operating licenses. One area that could yield tremendous savings in increased efficiency and safety is in improving procedure use. A CBP provides the opportunity to incorporate context-driven job aids, such as drawings, photos, and just-in-time training. The presentation of information in CBPs can be much more flexible and tailored to the task, actual plant condition, and operation mode. The dynamic presentation of the procedure will guide the user down the path of relevant steps, thus minimizing time spent by the field worker to evaluate plant conditions and decisions related to the applicability of each step. This dynamic presentation of the procedure also minimizes the risk of conducting steps out of order and/or incorrectly assessed applicability of steps. This report provides a summary of the main research activities conducted in the Computer-Based Procedures for Field Workers effort since 2012. The main focus of the report is on the research activities conducted in fiscal year 2016. The activities discussed are the Nuclear Electronic Work Packages - Enterprise Requirements initiative, the development of a design guidance for CBPs (which compiles all insights gained through the years of CBP research), the facilitation of vendor studies at the Idaho National Laboratory (INL) Advanced Test Reactor (ATR), a pilot study for how to enhance the plant design modification work process, the collection of feedback from a field evaluation study at Plant Vogtle, and path forward to
Quantitative x-ray dark-field computed tomography
International Nuclear Information System (INIS)
Bech, M; Pfeiffer, F; Bunk, O; Donath, T; David, C; Feidenhans'l, R
2010-01-01
The basic principles of x-ray image formation in radiology have remained essentially unchanged since Roentgen first discovered x-rays over a hundred years ago. The conventional approach relies on x-ray attenuation as the sole source of contrast and draws exclusively on ray or geometrical optics to describe and interpret image formation. Phase-contrast or coherent scatter imaging techniques, which can be understood using wave optics rather than ray optics, offer ways to augment or complement the conventional approach by incorporating the wave-optical interaction of x-rays with the specimen. With a recently developed approach based on x-ray optical gratings, advanced phase-contrast and dark-field scatter imaging modalities are now in reach for routine medical imaging and non-destructive testing applications. To quantitatively assess the new potential of particularly the grating-based dark-field imaging modality, we here introduce a mathematical formalism together with a material-dependent parameter, the so-called linear diffusion coefficient and show that this description can yield quantitative dark-field computed tomography (QDFCT) images of experimental test phantoms.
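The quantitative use of the linear diffusion coefficient parallels Beer-Lambert attenuation; a sketch of the forward model under that assumption:

```python
import math

def darkfield_projection(epsilons, ds):
    """Forward model for quantitative dark-field CT: the normalized
    fringe visibility obeys a Beer-Lambert-like law,
    V/V0 = exp(-sum(eps * ds)) along the ray, where eps is the linear
    diffusion coefficient of each voxel and ds the path length through
    it. -ln(V/V0) is then a line integral of eps, so ordinary CT
    reconstruction machinery applies to it directly."""
    visibility = math.exp(-sum(e * ds for e in epsilons))
    return -math.log(visibility)
```

This is the structural analogy that lets an attenuation-style reconstruction recover voxel-wise diffusion coefficients from measured visibility losses.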
Kutateladze, Andrei G; Mukhina, Olga A
2014-09-05
Spin-spin coupling constants in ¹H NMR carry a wealth of structural information and offer a powerful tool for deciphering molecular structures. However, accurate ab initio or DFT calculations of spin-spin coupling constants have been very challenging and expensive. Scaling of (easy) Fermi contacts, fc, especially in the context of recent findings by Bally and Rablen (Bally, T.; Rablen, P. R. J. Org. Chem. 2011, 76, 4818), offers a framework for achieving practical evaluation of spin-spin coupling constants. We report a faster and more precise parametrization approach utilizing a new basis set for hydrogen atoms optimized in conjunction with (i) inexpensive B3LYP/6-31G(d) molecular geometries, (ii) inexpensive 4-31G basis set for carbon atoms in fc calculations, and (iii) individual parametrization for different atom types/hybridizations, not unlike a force field in molecular mechanics, but designed for the fc's. With the training set of 608 experimental constants we achieved rmsd <0.19 Hz. The methodology performs very well as we illustrate with a set of complex organic natural products, including strychnine (rmsd 0.19 Hz), morphine (rmsd 0.24 Hz), etc. This precision is achieved with much shorter computational times: accurate spin-spin coupling constants for the two conformers of strychnine were computed in parallel on two 16-core nodes of a Linux cluster within 10 min.
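The core of the scaled-Fermi-contact idea is a one-parameter linear fit per class of coupling; a minimal sketch (the actual parametrization is per atom type/hybridization and uses a dedicated hydrogen basis set):

```python
def fit_scaling(fermi_contacts, observed_j):
    """Least-squares slope through the origin for J = s * fc: the single
    multiplicative factor that turns raw computed Fermi-contact terms
    into predicted coupling constants. The paper refines s per atom
    type and hybridization, much like atom types in a force field."""
    num = sum(f * j for f, j in zip(fermi_contacts, observed_j))
    den = sum(f * f for f in fermi_contacts)
    return num / den
```

Once s is trained on experimental couplings, new predictions cost only a cheap Fermi-contact calculation instead of the full, expensive coupling computation.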
International Nuclear Information System (INIS)
Gritzo, L.A.; Koski, J.A.; Suo-Anttila, A.J.
1999-01-01
The Container Analysis Fire Environment computer code (CAFE) is intended to provide Type B package designers with an enhanced engulfing-fire boundary condition when combined with the PATRAN/P-Thermal commercial code. Historically, an engulfing-fire boundary condition has been modeled as σT⁴, where σ is the Stefan-Boltzmann constant and T is the fire temperature. The CAFE code includes the necessary chemistry, thermal radiation, and fluid mechanics to model an engulfing fire. Effects included are the local cooling of gases that form a protective boundary layer, which reduces the incoming radiant heat flux to values lower than expected from a simple σT⁴ model. In addition, the effect of object shape on mixing, which may increase the local fire temperature, is included. Both high- and low-temperature regions that depend upon the local availability of oxygen are also calculated. Thus the competing effects that can both increase and decrease the local values of radiant heat flux are included in a manner that is not predictable a priori. The CAFE package consists of a group of computer subroutines that can be linked to workstation-based thermal analysis codes in order to predict package performance during regulatory and other accident fire scenarios
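The simple boundary condition that CAFE improves upon can be written down directly; a sketch, treating the fire as a blackbody:

```python
def radiant_flux(t_fire, t_surface=0.0, emissivity=1.0):
    """The historical engulfing-fire boundary condition the abstract
    contrasts with CAFE: net flux = eps * sigma * (T_fire^4 - T_surf^4)
    in W/m^2, with no boundary-layer shielding, oxygen availability, or
    shape-dependent mixing effects. Temperatures in kelvin."""
    sigma = 5.670374419e-8  # Stefan-Boltzmann constant, W m^-2 K^-4
    return emissivity * sigma * (t_fire ** 4 - t_surface ** 4)
```

CAFE's point is precisely that the real local flux can fall below or rise above this estimate, depending on boundary-layer cooling and mixing, in ways this closed form cannot capture.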
Harb, Moussab
2015-01-01
Using accurate first-principles quantum calculations based on DFT (including the perturbation theory DFPT) with the range-separated hybrid HSE06 exchange-correlation functional, we predict essential fundamental properties (such as bandgap, optical absorption coefficient, dielectric constant, charge carrier effective masses and exciton binding energy) of two stable monoclinic vanadium oxynitride (VON) semiconductor crystals for solar energy conversion applications. In addition to the predicted band gaps in the optimal range for making single-junction solar cells, both polymorphs exhibit relatively high absorption efficiencies in the visible range, high dielectric constants, high charge carrier mobilities and much lower exciton binding energies than the thermal energy at room temperature. Moreover, their optical absorption, dielectric and exciton dissociation properties are found to be better than those obtained for semiconductors frequently utilized in photovoltaic devices like Si, CdTe and GaAs. These novel results offer a great opportunity for this stoichiometric VON material to be properly synthesized and considered as a new good candidate for photovoltaic applications.
Milman, Mark H
2005-12-01
Astrometric measurements using stellar interferometry rely on precise measurement of the central white-light fringe to accurately obtain the optical pathlength difference of incoming starlight to the two arms of the interferometer. One standard approach to stellar interferometry uses a channeled spectrum to determine phases at a number of different wavelengths, which are then converted to the pathlength delay. When throughput is low, these channels are broadened to improve the signal-to-noise ratio. Ultimately, the ability to use monochromatic models and algorithms in each of the channels to extract phase becomes problematic, and knowledge of the spectrum must be incorporated to achieve the accuracies required of the astrometric measurements. To accomplish this, an optimization problem is posed to estimate simultaneously the pathlength delay and the spectrum of the source. Moreover, the nature of the parameterization of the spectrum that is introduced circumvents the need to solve directly for these parameters, so that the optimization problem reduces to a scalar problem in just the pathlength delay variable. A number of examples are given to show the robustness of the approach.
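In the monochromatic-channel approximation, each channel with wavelength λ_k measures a fringe phase φ_k = 2πd/λ_k, so the delay d follows from a scalar least-squares fit. A minimal sketch under that assumption (not the paper's joint delay-plus-spectrum estimator; channel count and noise level are invented):

```python
import numpy as np

# Each spectral channel k measures phi_k = 2*pi*d/lambda_k; the delay d is
# the scalar least-squares solution over all channels, assuming no 2*pi
# phase ambiguity (|d| small).

def estimate_delay(wavelengths_m, phases_rad):
    """Least-squares estimate of the pathlength delay d from per-channel phases."""
    k = 2.0 * np.pi / np.asarray(wavelengths_m)   # wavenumbers
    phi = np.asarray(phases_rad)
    return float(k @ phi / (k @ k))               # argmin_d sum_k (phi_k - k_k*d)^2

rng = np.random.default_rng(0)
lams = np.linspace(500e-9, 900e-9, 8)             # 8 channels, visible/NIR
d_true = 40e-9                                    # 40 nm delay
phases = 2 * np.pi * d_true / lams + rng.normal(0, 0.01, lams.size)
print(f"estimated delay: {estimate_delay(lams, phases) * 1e9:.2f} nm")  # close to 40 nm
```

When the channels are broad, this monochromatic model breaks down, which is exactly the regime the paper's spectrum-aware formulation addresses.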
Parallel computation of automatic differentiation applied to magnetic field calculations
International Nuclear Information System (INIS)
Hinkins, R.L.; Lawrence Berkeley Lab., CA
1994-09-01
The author presents a parallelization of an accelerator physics application to simulate magnetic fields in three dimensions. The problem involves the evaluation of high-order derivatives with respect to two variables of a multivariate function. Automatic differentiation software had been used with some success, but the computation time was prohibitive. The implementation runs on several platforms, including a network of workstations using PVM, a MasPar using MPFortran, and a CM-5 using CMFortran. A careful examination of the code led to several optimizations that improved its serial performance by a factor of 8.7. The parallelization produced further improvements, especially on the MasPar, with a speedup factor of 620. As a result, a problem that took six days on a SPARC 10/41 now runs in minutes on the MasPar, making it feasible for physicists at Lawrence Berkeley Laboratory to simulate larger magnets.
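Forward-mode automatic differentiation of the kind used in such field computations can be sketched with dual numbers; the snippet below is illustrative machinery, not the actual AD package used at LBL:

```python
# Forward-mode automatic differentiation via dual numbers a + b*eps with
# eps^2 = 0: the b component carries the exact derivative through every
# arithmetic operation, with no truncation error from finite differences.

class Dual:
    def __init__(self, val, der=0.0):
        self.val, self.der = val, der

    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.der + other.der)
    __radd__ = __add__

    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * other.val,
                    self.der * other.val + self.val * other.der)
    __rmul__ = __mul__

def derivative(f, x):
    """Evaluate f'(x) exactly (to machine precision) in one forward pass."""
    return f(Dual(x, 1.0)).der

# d/dx (x^3 + 2x) = 3x^2 + 2  ->  14 at x = 2
print(derivative(lambda x: x * x * x + 2 * x, 2.0))  # 14.0
```

High-order derivatives, as needed in the paper, are obtained by extending the same idea to truncated Taylor polynomials rather than first-order duals.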
Analysis of Craniofacial Images using Computational Atlases and Deformation Fields
DEFF Research Database (Denmark)
Ólafsdóttir, Hildur
2008-01-01
purposes. The basis for most of the applications is non-rigid image registration. This approach brings one image into the coordinate system of another resulting in a deformation field describing the anatomical correspondence between the two images. A computational atlas representing the average anatomy...... of asymmetry. The analyses are applied to the study of three different craniofacial anomalies. The craniofacial applications include studies of Crouzon syndrome (in mice), unicoronal synostosis plagiocephaly and deformational plagiocephaly. Using the proposed methods, the thesis reveals novel findings about...... the craniofacial morphology and asymmetry of Crouzon mice. Moreover, a method to plan and evaluate treatment of children with deformational plagiocephaly, based on asymmetry assessment, is established. Finally, asymmetry in children with unicoronal synostosis is automatically assessed, confirming previous results...
Computing the scalar field couplings in 6D supergravity
Saidi, El Hassan
2008-11-01
Using non-chiral supersymmetry in 6D space-time, we compute the explicit expression of the metric of the scalar manifold SO(1,1)×{SO(4,20)}/{SO(4)×SO(20)} of the ten-dimensional type IIA superstring on generic K3. We consider as well the scalar field self-couplings in the general case where the non-chiral 6D supergravity multiplet is coupled to generic n vector supermultiplets with moduli space SO(1,1)×{SO(4,n)}/{SO(4)×SO(n)}. We also work out a dictionary giving a correspondence between hyper-Kähler geometry and the Kähler geometry of the Coulomb branch of 10D type IIA on Calabi-Yau threefolds. Other features are also discussed.
Energy Technology Data Exchange (ETDEWEB)
Carrington, David Bradley [Los Alamos National Lab. (LANL), Los Alamos, NM (United States); Waters, Jiajia [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-01-05
KIVA-hpFE is a high-performance computer software for solving the physics of multi-species and multiphase turbulent reactive flow in complex geometries having immersed moving parts. The code is written in Fortran 90/95 and can be used on any computer platform with any popular compiler. The code comes in two versions, a serial version and a parallel version utilizing MPICH2-type Message Passing Interface (MPI or Intel MPI) for solving distributed domains. The parallel version is at least 30x faster than the serial version and many times faster than our previous generation of parallel engine modeling software. The 5th-generation algorithm construction is a Galerkin-type Finite Element Method (FEM) solving conservative momentum, species, and energy transport equations, along with a two-equation k-ω Reynolds-Averaged Navier-Stokes (RANS) turbulence model and a Vreman-type dynamic Large Eddy Simulation (LES) method. The LES method is capable of modeling transitional flow from laminar to fully turbulent; therefore, it does not require special hybrid or wall-blending treatment. The FEM projection method also uses a Petrov-Galerkin (P-G) stabilization along with pressure stabilization. We employ hierarchical basis sets, constructed on the fly, with enrichment in areas associated with relatively larger error as determined by error estimation methods. In addition, when not using the hp-adaptive module, the code employs Lagrangian basis or shape functions. The shape functions are constructed for hexahedral, prismatic, and tetrahedral elements. The software is designed to solve many types of reactive flow problems, from burners to internal combustion engines and turbines. In addition, the formulation allows for direct integration of solid bodies (conjugate heat transfer), as in heat transfer through housings, parts, and cylinders. It can also easily be extended to stress modeling of solids, used in fluid-structure interaction problems, solidification, and porous media
DeGregorio, Nicole; Iyengar, Srinivasan S
2018-01-09
We present two sampling measures to gauge critical regions of potential energy surfaces. These sampling measures employ (a) the instantaneous quantum wavepacket density, (b) an approximation to the potential surface, (c) its gradients, and (d) a Shannon information theory based expression that estimates the local entropy associated with the quantum wavepacket. These four criteria together enable a directed sampling of potential surfaces that appears to correctly describe the local oscillation frequencies, or the local Nyquist frequency, of a potential surface. The sampling functions are then utilized to derive a tessellation scheme that discretizes the multidimensional space to enable efficient sampling of potential surfaces. The sampled potential surface is then combined with four different interpolation procedures, namely, (a) local Hermite curve interpolation, (b) low-pass filtered Lagrange interpolation, (c) the monomial symmetrization approximation (MSA) developed by Bowman and co-workers, and (d) a modified Shepard algorithm. The sampling procedure and the fitting schemes are used to (a) compute potential surfaces in highly anharmonic hydrogen-bonded systems and (b) study hydrogen-transfer reactions in biogenic volatile organic compounds (isoprene), where the transferring hydrogen atom is found to demonstrate critical quantum nuclear effects. In the case of isoprene, the algorithm discussed here is used to derive multidimensional potential surfaces along a hydrogen-transfer reaction path to gauge the effect of quantum-nuclear degrees of freedom on the hydrogen-transfer process. Based on the decreased computational effort, facilitated by the optimal sampling of the potential surfaces through the use of sampling functions discussed here, and the accuracy of the associated potential surfaces, we believe the method will find great utility in the study of quantum nuclear dynamics problems, of which application to hydrogen-transfer reactions and hydrogen
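Criterion (d), a local Shannon entropy of the wavepacket density, can be sketched on a one-dimensional grid; the window size and the Gaussian wavepacket below are illustrative assumptions, not the paper's multidimensional construction:

```python
import numpy as np

# Local Shannon entropy -sum p log p of the normalized wavepacket density
# inside a sliding window. Near-uniform density gives entropy close to
# log(window); rapidly varying density gives lower values, flagging regions
# whose sampling requirements differ.

def local_entropy(density, window=5):
    out = np.zeros_like(density)
    half = window // 2
    for i in range(len(density)):
        seg = density[max(0, i - half): i + half + 1]
        p = seg / seg.sum()                      # normalize within the window
        out[i] = -np.sum(p * np.log(p + 1e-300)) # guard against log(0)
    return out

x = np.linspace(-5, 5, 201)
rho = np.exp(-x**2)          # Gaussian wavepacket density (illustrative)
h = local_entropy(rho)
print(f"entropy at center: {h[100]:.3f}, in the steep tail: {h[10]:.3f}")
```

In the paper this kind of measure is combined with the density, potential, and gradient criteria to direct where the surface is sampled.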
Quantum field theory and coalgebraic logic in theoretical computer science.
Basti, Gianfranco; Capolupo, Antonio; Vitiello, Giuseppe
2017-11-01
We suggest that in the framework of category theory it is possible to demonstrate the mathematical and logical dual equivalence between the category of the q-deformed Hopf coalgebras and the category of the q-deformed Hopf algebras in quantum field theory (QFT), interpreted as a thermal field theory. Each algebra-coalgebra pair characterizes a QFT system and its mirroring thermal bath, respectively, so as to model dissipative quantum systems in far-from-equilibrium conditions, with an evident significance also for the biological sciences. Our study is in fact inspired by applications to neuroscience, where the brain memory capacity, for instance, has been modeled by using the QFT unitarily inequivalent representations. The q-deformed Hopf coalgebras and the q-deformed Hopf algebras constitute two dual categories because they are characterized by the same functor T, related to the Bogoliubov transform, and by its contravariant application T^op, respectively. The q-deformation parameter is related to the Bogoliubov angle, and it is effectively a thermal parameter. Therefore, the different values of q univocally identify and label the vacua appearing in the foliation process of the quantum vacuum. This means that, in the framework of Universal Coalgebra, as a general theory of dynamic and computing systems ("labelled state-transition systems"), the so-labelled infinitely many quantum vacua can be interpreted as the Final Coalgebra of an "Infinite State Black-Box Machine". All this opens the way to the possibility of designing a new class of universal quantum computing architectures based on this coalgebraic QFT formulation, as its ability to naturally generate a Fibonacci progression demonstrates. Copyright © 2017 Elsevier Ltd. All rights reserved.
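For concreteness, the thermal Bogoliubov structure invoked above can be written in standard thermo-field-dynamics notation (a textbook sketch for a single bosonic mode, not the paper's q-deformed formalism):

```latex
a(\theta) = a\cosh\theta - \tilde{a}^{\dagger}\sinh\theta, \qquad
\tilde{a}(\theta) = \tilde{a}\cosh\theta - a^{\dagger}\sinh\theta ,
\qquad
|0(\theta)\rangle = e^{\theta\,(a^{\dagger}\tilde{a}^{\dagger} - a\tilde{a})}\,|0,\tilde{0}\rangle .
```

Here the tilde operators act on the mirror (thermal-bath) modes, and for a mode of energy ħω at inverse temperature β the angle satisfies tanh θ = e^{-βħω/2}; it is this thermal angle that the abstract relates to the q-deformation parameter labeling the inequivalent vacua.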
Application of large computers for predicting the oil field production
Energy Technology Data Exchange (ETDEWEB)
Philipp, W; Gunkel, W; Marsal, D
1971-10-01
The flank injection drive plays a dominant role in the exploitation of the BEB oil fields. Therefore, 2-phase flow computer models were built up, adapted to a predominance of a single flow direction and combining a high accuracy of prediction with a low job time. Any case study starts with the partitioning of the reservoir into blocks. Then the statistics of the time-independent reservoir properties are analyzed by means of an IBM 360/25 unit. Using these results and the past production of oil, water and gas, a Fortran program running on a CDC-3300 computer yields oil recoveries and the ratios of the relative permeabilities, k_w/k_o, as a function of the local oil saturation for all blocks penetrated by mobile water. In order to assign k_w/k_o functions to blocks not yet reached by the advancing water front, correlation analysis is used to relate reservoir properties to k_w/k_o functions. All these results are used as input into a CDC-660 Fortran program, allowing short-, medium-, and long-term forecasts as well as the handling of special problems.
Directory of Open Access Journals (Sweden)
Alessandra Paffi
2015-01-01
The aim of this paper is to propose an approach for an accurate and fast (real-time) computation of the electric field induced inside the whole brain volume during a transcranial magnetic stimulation (TMS) procedure. The numerical solution implements the admittance method for a discretized realistic brain model derived from Magnetic Resonance Imaging (MRI). Results are in good agreement with those obtained using commercial codes and require much less computational time. An integration of the developed code with neuronavigation tools will permit real-time evaluation of the stimulated brain regions during the TMS delivery, thus improving the efficacy of clinical applications.
Geng, Jiun-Hung; Tu, Hung-Pin; Shih, Paul Ming-Chen; Shen, Jung-Tsung; Jang, Mei-Yu; Wu, Wen-Jen; Li, Ching-Chia; Chou, Yii-Her; Juan, Yung-Shun
2015-01-01
Urolithiasis is a common disease of the urinary system. Extracorporeal shockwave lithotripsy (SWL) has become one of the standard treatments for renal and ureteral stones; however, the success rates range widely and failure of stone disintegration may cause additional outlay, alternative procedures, and even complications. We used the data available from noncontrast abdominal computed tomography (NCCT) to evaluate the impact of stone parameters and abdominal fat distribution on calculus-free rates following SWL. We retrospectively reviewed 328 patients who had urinary stones and had undergone SWL from August 2012 to August 2013. All of them received pre-SWL NCCT; 1 month after SWL, radiography was arranged to evaluate the condition of the fragments. These patients were classified into stone-free group and residual stone group. Unenhanced computed tomography variables, including stone attenuation, abdominal fat area, and skin-to-stone distance (SSD) were analyzed. In all, 197 (60%) were classified as stone-free and 132 (40%) as having residual stone. The mean ages were 49.35 ± 13.22 years and 55.32 ± 13.52 years, respectively. On univariate analysis, age, stone size, stone surface area, stone attenuation, SSD, total fat area (TFA), abdominal circumference, serum creatinine, and the severity of hydronephrosis revealed statistical significance between these two groups. From multivariate logistic regression analysis, the independent parameters impacting SWL outcomes were stone size, stone attenuation, TFA, and serum creatinine. [Adjusted odds ratios and (95% confidence intervals): 9.49 (3.72-24.20), 2.25 (1.22-4.14), 2.20 (1.10-4.40), and 2.89 (1.35-6.21) respectively, all p < 0.05]. In the present study, stone size, stone attenuation, TFA and serum creatinine were four independent predictors for stone-free rates after SWL. These findings suggest that pretreatment NCCT may predict the outcomes after SWL. Consequently, we can use these predictors for selecting
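The multivariate logistic-regression step reported above can be sketched on synthetic data; the predictors, coefficients, and fitting routine below are purely illustrative, not the study's data or statistical software:

```python
import numpy as np

# Plain gradient-descent fit of a logistic regression with intercept; the
# fitted coefficients beta are reported as odds ratios exp(beta), the form
# used in the abstract. Two standardized synthetic predictors stand in for
# the study's stone size, attenuation, TFA and creatinine.

def fit_logistic(X, y, lr=0.1, iters=5000):
    Xb = np.hstack([np.ones((X.shape[0], 1)), X])   # prepend intercept column
    w = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ w))           # predicted probabilities
        w -= lr * Xb.T @ (p - y) / len(y)           # negative log-likelihood gradient
    return w

rng = np.random.default_rng(1)
X = rng.normal(size=(400, 2))                       # standardized predictors
logit = -0.5 + 1.2 * X[:, 0] + 0.8 * X[:, 1]        # invented true model
y = (rng.random(400) < 1.0 / (1.0 + np.exp(-logit))).astype(float)
w = fit_logistic(X, y)
print("odds ratios:", np.round(np.exp(w[1:]), 2))   # approx exp(1.2), exp(0.8)
```

An adjusted odds ratio above 1 (with a confidence interval excluding 1) marks an independent predictor, which is how stone size, attenuation, TFA and creatinine were identified.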
DEFF Research Database (Denmark)
Dridi, Kim; Bjarklev, Anders Overgaard
1999-01-01
An electromagnetic vector-field model for the design of optical components, based on the finite-difference time-domain method and radiation integrals, is presented. Its ability to predict the optical electromagnetic dynamics in structures with complex material distribution is demonstrated. Theoretical......
Computation of multiphase systems with phase field models
International Nuclear Information System (INIS)
Badalassi, V.E.; Ceniceros, H.D.; Banerjee, S.
2003-01-01
Phase field models offer a systematic physical approach for investigating the behavior of complex multiphase systems, such as near-critical interfacial phenomena, phase separation under shear, and microstructure evolution during solidification. However, because interfaces are replaced by thin transition regions (diffuse interfaces), phase field simulations require resolution of very thin layers to capture the physics of the problems studied. This demands robust numerical methods that can efficiently achieve high resolution and accuracy, especially in three dimensions. We present here an accurate and efficient numerical method to solve the coupled Cahn-Hilliard/Navier-Stokes system, known as Model H, that constitutes a phase field model for density-matched binary fluids with variable mobility and viscosity. The numerical method is a time-split scheme that combines a novel semi-implicit discretization for the convective Cahn-Hilliard equation with an innovative application of high-resolution schemes employed for direct numerical simulations of turbulence. This new semi-implicit discretization is simple but effective since it removes the stability constraint due to the nonlinearity of the Cahn-Hilliard equation at the same cost as that of an explicit scheme. It is derived from a discretization used for diffusive problems that we further enhance to efficiently solve flow problems with variable mobility and viscosity. Moreover, we solve the Navier-Stokes equations with a robust time-discretization of the projection method that guarantees better stability properties than those for Crank-Nicolson-based projection methods. For channel geometries, the method uses a spectral discretization in the streamwise and spanwise directions and a combination of spectral and high order compact finite difference discretizations in the wall normal direction. The capabilities of the method are demonstrated with several examples including phase separation with, and without, shear in two and three
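The semi-implicit idea — treating the stiff biharmonic term implicitly in Fourier space while keeping the nonlinearity explicit — can be sketched in one dimension; the parameters are illustrative and this is a toy scheme, not the Model H solver itself:

```python
import numpy as np

# 1D Cahn-Hilliard, c_t = lap(c^3 - c - eps^2 lap c), on a periodic domain
# of length 2*pi. Per step, in Fourier space:
#   c_hat_new = (c_hat - dt*k^2*F[c^3 - c]) / (1 + dt*eps^2*k^4)
# i.e. the fourth-order term is implicit (removing the dt ~ dx^4 constraint)
# and the nonlinear term explicit, at the cost of an explicit scheme.

N, eps, dt = 128, 0.1, 1e-3
k = np.fft.fftfreq(N, d=1.0 / N)       # integer wavenumbers for length 2*pi
k2, k4 = k**2, k**4

rng = np.random.default_rng(2)
c = 0.01 * rng.standard_normal(N)      # small random perturbation of c = 0

for _ in range(2000):
    nonlin_hat = np.fft.fft(c**3 - c)
    c_hat = (np.fft.fft(c) - dt * k2 * nonlin_hat) / (1.0 + dt * eps**2 * k4)
    c = np.real(np.fft.ifft(c_hat))

# Spinodal decomposition: the field coarsens toward c = +/-1 domains
print(f"max |c| after coarsening: {np.abs(c).max():.2f}")
```

Note that the k = 0 mode is untouched by the update, so the scheme conserves the mean of c exactly, as a Cahn-Hilliard discretization should.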
International Nuclear Information System (INIS)
Cheung, Joo Yeon; Kim, Yookyung; Shim, Sung Shine; Lee, Jin Hwa; Chang, Jung Hyun; Ryu, Yon Ju; Lee, Rena J.
2012-01-01
Aim: To evaluate the accuracy of depth measurements on supine chest computed tomography (CT) for transthoracic needle biopsy (TNB). Materials and methods: We measured skin-lesion depths from the skin surface to nodules on both prebiopsy supine CT scans and CT scans obtained during cone beam CT-guided TNB in the supine (n = 29) or prone (n = 40) position in 69 patients, and analyzed the differences between the two measurements, based on patient position for the biopsy and lesion location. Results: Skin-lesion depths measured on prebiopsy supine CT scans were significantly larger than those measured on CT scans obtained during TNB in the prone position (p < 0.001; mean difference ± standard deviation (SD), 6.2 ± 5.7 mm; range, 0–18 mm), but the differences showed marginal significance in the supine position (p = 0.051; 3.5 ± 3.9 mm; 0–13 mm). Additionally, the differences were significantly larger for the upper (mean ± SD, 7.8 ± 5.7 mm) and middle (10.1 ± 6.5 mm) lung zones than for the lower lung zones (3.1 ± 3.3 mm) in the prone position (p = 0.011), and were larger for the upper lung zone (4.6 ± 5.0 mm) than for the middle (2.4 ± 2.0 mm) and lower (2.3 ± 2.3 mm) lung zones in the supine position (p = 0.004). Conclusions: Skin-lesion depths measured on prebiopsy supine chest CT scans were inaccurate for TNB in the prone position, particularly for nodules in the upper and middle lung zones.
Grudinin, Sergei; Garkavenko, Maria; Kazennov, Andrei
2017-05-01
A new method called Pepsi-SAXS is presented that calculates small-angle X-ray scattering profiles from atomistic models. The method is based on the multipole expansion scheme and is significantly faster compared with other tested methods. In particular, using the Nyquist-Shannon-Kotelnikov sampling theorem, the multipole expansion order is adapted to the size of the model and the resolution of the experimental data. It is argued that by using the adaptive expansion order, this method has the same quadratic dependence on the number of atoms in the model as the Debye-based approach, but with a much smaller prefactor in the computational complexity. The method has been systematically validated on a large set of over 50 models collected from the BioIsis and SASBDB databases. Using a laptop, it was demonstrated that Pepsi-SAXS is about seven, 29 and 36 times faster compared with CRYSOL, FoXS and the three-dimensional Zernike method in SAStbx, respectively, when tested on data from the BioIsis database, and is about five, 21 and 25 times faster compared with CRYSOL, FoXS and SAStbx, respectively, when tested on data from SASBDB. On average, Pepsi-SAXS demonstrates comparable accuracy in terms of χ² to CRYSOL and FoXS when tested on BioIsis and SASBDB profiles. Together with a small allowed variation of adjustable parameters, this demonstrates the effectiveness of the method. Pepsi-SAXS is available at http://team.inria.fr/nano-d/software/pepsi-saxs.
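The Debye approach against which Pepsi-SAXS's complexity is compared evaluates I(q) = Σ_ij f_i f_j sin(q r_ij)/(q r_ij) directly over all atom pairs; a toy O(N²) sketch with unit form factors (an illustration, not the Pepsi-SAXS code):

```python
import numpy as np

# Direct Debye-formula evaluation of the SAXS intensity for point
# scatterers with unit form factors; np.sinc(x) = sin(pi*x)/(pi*x), so
# sin(qr)/(qr) = np.sinc(q*r/pi). Cost is O(N^2) per q value.

def debye_intensity(coords, q_values):
    diff = coords[:, None, :] - coords[None, :, :]
    r = np.sqrt((diff**2).sum(-1))                 # pairwise distance matrix
    return np.array([np.sinc(q * r / np.pi).sum() for q in q_values])

coords = np.array([[0.0, 0.0, 0.0],                # toy 3-atom "molecule"
                   [1.5, 0.0, 0.0],
                   [0.0, 1.5, 0.0]])
q = np.linspace(0.0, 2.0, 5)
I = debye_intensity(coords, q)
print(I[0])   # at q = 0, I = N^2 = 9 for unit form factors
```

Pepsi-SAXS keeps this quadratic scaling but shrinks the prefactor by truncating the multipole expansion at the order the data resolution actually requires.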
International Nuclear Information System (INIS)
Carbonniere, Philippe; Begue, Didier; Dargelos, Alain; Pouchan, Claude
2004-01-01
In this work we present an attractive least-squares fitting procedure which allows for the calculation of a quartic force field by jointly using energy, gradient, and Hessian data obtained from electronic wave function calculations on a suitably chosen grid of points. We use experimental design to select the grid points: a 'simplex-sum' of Box and Behnken grid was chosen for its efficiency and accuracy. We illustrate the numerical implementation of the method by using the energy and gradient data for H₂O and H₂CO. The B3LYP/cc-pVTZ quartic force field obtained from 11 and 44 simplex-sum configurations shows excellent agreement in comparison to the classical 44 and 168 energy calculations.
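The benefit of fitting energies and gradients jointly is that each grid point contributes extra equations to the least-squares system, so fewer points are needed. A one-dimensional sketch (the paper's case is multivariate, with Hessian rows as well):

```python
import numpy as np

# Fit a quartic V(x) = sum_n c_n x^n to energy AND gradient data jointly:
# each point x_i contributes one row for V(x_i) and one for V'(x_i),
# stacked into a single overdetermined least-squares system A c = b.

def fit_quartic(x, energies, gradients, deg=4):
    rows_e = np.vander(x, deg + 1, increasing=True)   # [1, x, x^2, ...] rows
    rows_g = np.zeros_like(rows_e)                    # derivative rows
    for n in range(1, deg + 1):
        rows_g[:, n] = n * x ** (n - 1)
    A = np.vstack([rows_e, rows_g])
    b = np.concatenate([energies, gradients])
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return coef

x = np.array([-1.0, -0.3, 0.4, 1.1])                  # only 4 grid points
true = np.array([0.0, 0.1, 0.5, -0.2, 0.8])           # invented c0..c4
E = np.vander(x, 5, increasing=True) @ true
G = np.array([sum(n * c * xi ** (n - 1) for n, c in enumerate(true) if n)
              for xi in x])
coef = fit_quartic(x, E, G)
print(np.round(coef, 6))   # recovers c0..c4 from 8 equations, 5 unknowns
```

Four points alone could not determine five coefficients from energies; the gradient rows make the system overdetermined, mirroring the 11-vs-44 configuration savings quoted in the abstract.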
Directory of Open Access Journals (Sweden)
Marzena Nowakowska
Late blight (LB) caused by the oomycete Phytophthora infestans continues to thwart global tomato production, while only a few resistant cultivars have been introduced locally. In order to gain from the released tomato germplasm with LB resistance, we compared the 5-year field performance of LB resistance in several tomato cultigens with the results of controlled-conditions testing (i.e., detached leaflet/leaf, whole plant). In the case of these artificial screening techniques, the effects of plant age and inoculum concentration were additionally considered. In the field trials, LA 1033, L 3707, and L 3708 displayed the highest LB resistance and could be used for cultivar development under Polish conditions. Of the three methods using controlled conditions, the detached leaf and the whole plant tests had the highest correlation with the field experiments. The plant age effect on LB resistance in tomato reported here, irrespective of the cultigen tested or inoculum concentration used, makes it important to standardize the test parameters when screening for resistance. Our results help show why other reports disagree on LB resistance in tomato.
Self-report and long-term field measures of MP3 player use: how accurate is self-report?
Portnuff, C D F; Fligor, B J; Arehart, K H
2013-02-01
This study was designed to evaluate the usage patterns of portable listening device (PLD) listeners, and the relationships between self-report measures and long-term dosimetry measures of listening habits. This study used a descriptive correlational design. Participants (N = 52) were 18-29 year old men and women who completed surveys. A randomly assigned subset (N = 24) of participants had their listening monitored by dosimetry for one week. Median weekly noise doses reported and measured through dosimetry were low (9-93%), but 14.3% of participants reported exceeding a 100% noise dose weekly. When measured by dosimetry, 16.7% of participants exceeded a 100% noise dose weekly. The self-report question that best predicted the dosimetry-measured dose asked participants to report listening duration and usual listening level on a visual-analog scale. This study reports a novel dosimetry system that can provide accurate measures of PLD use over time. When not feasible, though, the self-report question described could provide a useful research or clinical tool to estimate exposure from PLD use. Among the participants in this study, a small but substantial percentage of PLD users incurred exposure from PLD use alone that increases their risk of music-induced hearing loss.
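A weekly noise dose of the kind referenced above can be computed with a 3-dB exchange-rate rule (NIOSH-style: 85 dBA for 8 h/day, roughly 40 h/week, equals a 100% dose); the listening sessions below are invented for illustration and are not the study's dosimetry data:

```python
# Weekly noise dose with a 3-dB exchange rate: every 3 dB above 85 dBA
# halves the allowed weekly exposure time, and dose sums across sessions.

def allowed_hours_per_week(level_dba: float) -> float:
    """Weekly allowed exposure time at a given level (40 h at 85 dBA)."""
    return 40.0 / 2.0 ** ((level_dba - 85.0) / 3.0)

def weekly_dose_percent(sessions):
    """sessions: iterable of (level_dBA, hours) pairs for one week."""
    return 100.0 * sum(h / allowed_hours_per_week(l) for l, h in sessions)

# e.g. 10 h/week at 88 dBA plus 5 h/week at 91 dBA
dose = weekly_dose_percent([(88.0, 10.0), (91.0, 5.0)])
print(f"weekly dose: {dose:.0f}%")   # 100% -> exactly at the recommended limit
```

Listeners exceeding 100% on this scale correspond to the small subgroup the study flags as at elevated risk of music-induced hearing loss.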
Energy Technology Data Exchange (ETDEWEB)
Vaganova, N. A., E-mail: vna@imm.uran.ru [Institute of Mathematics and Mechanics of Ural Branch of Russian Academy of Sciences, Ekaterinburg (Russian Federation); Filimonov, M. Yu., E-mail: fmy@imm.uran.ru [Ural Federal University, Ekaterinburg, Russia and Institute of Mathematics and Mechanics of Ural Branch of Russian Academy of Sciences, Ekaterinburg (Russian Federation)
2015-11-30
A mathematical model, numerical algorithm, and program code for simulation and long-term forecasting of changes in permafrost as a result of the operation of a multiple-well pad in a northern oil and gas field are presented. In the model, the most significant climatic and physical factors are taken into account, such as solar radiation (determined by the specific geographical location), the heterogeneous structure of the frozen soil, thermal stabilization of the soil, possible insulation of the objects, seasonal fluctuations in air temperature, and freezing and thawing of the upper soil layer. Results of the computations are presented.
Cerbino, Roberto; Piotti, Davide; Buscaglia, Marco; Giavazzi, Fabio
2018-01-01
Micro- and nanoscale objects with anisotropic shape are key components of a variety of biological systems and inert complex materials, and represent fundamental building blocks of novel self-assembly strategies. The time scale of their thermal motion is set by their translational and rotational diffusion coefficients, whose measurement may become difficult for relatively large particles with small optical contrast. Here we show that dark field differential dynamic microscopy is the ideal tool for probing the roto-translational Brownian motion of anisotropic shaped particles. We demonstrate our approach by successful application to aqueous dispersions of non-motile bacteria and of colloidal aggregates of spherical particles.
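The core quantity in (dark-field) differential dynamic microscopy is the image structure function, the lag-dependent power spectrum of frame differences; the sketch below runs it on a synthetic decorrelating image sequence (an illustration of the quantity, not the authors' analysis pipeline):

```python
import numpy as np

# Image structure function D(q, dt) = <|FFT2[I(t+dt) - I(t)]|^2>_t: frames
# separated by longer lags are more decorrelated, so the difference spectrum
# grows with lag until the dynamics fully decorrelate.

def structure_function(frames, lag):
    """Average 2D power spectrum of frame differences at a given lag."""
    diffs = frames[lag:] - frames[:-lag]
    power = np.abs(np.fft.fft2(diffs))**2          # FFT over the image axes
    return power.mean(axis=0)                      # average over time pairs

rng = np.random.default_rng(3)
n, size = 20, 64
base = rng.standard_normal((size, size))
# Synthetic movie: each frame mixes a fixed pattern with fresh noise, so
# frames decorrelate progressively with time separation.
frames = np.array([
    np.cos(0.1 * t) * base + np.sin(0.1 * t) * rng.standard_normal((size, size))
    for t in range(n)
])
d1 = structure_function(frames, 1).mean()
d5 = structure_function(frames, 5).mean()
print(f"D(lag=1) = {d1:.1f}, D(lag=5) = {d5:.1f}")  # grows with lag
```

In a real measurement, fitting the q- and lag-dependence of D yields the translational (and, with polarized dark-field contrast, rotational) diffusion coefficients.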
Describing elements of the IO field in a testing computer program
Directory of Open Access Journals (Sweden)
Igor V. Loshkov
2017-01-01
A standard for describing the process of displaying interactive windows on a computer monitor, through which questions are output and answers are input during computer testing, was presented in [11]. According to the proposed standard, this process is described with a format line containing element names, their parameters, and grouping and auxiliary symbols. Program objects are described using elements of the standard; the majority of these objects create input and output windows on a computer monitor. The aim of our research was to develop the smallest possible set of standard elements sufficient for testing in mathematics and computer science. The choice of elements was conducted in parallel with the development and testing of the program that uses them; this approach made it possible to select a sufficiently complete set of elements for testing in the fields of study mentioned above. Names for the proposed elements were selected so that, firstly, they indicate the element's function and, secondly, they coincide with the names of functionally similar elements in other programming languages. Parameters, with their names, assignments, and accepted values, are proposed for the elements; the principle of name selection for the parameters was the same as for the elements: the names should correspond to their assignments or coincide with the names of similar parameters in other programming languages. The parameters define the properties of objects: in particular, while the elements of the standard create windows, the parameters define window properties (location, size, appearance) and the sequence in which windows are created. All elements of the standard proposed in this article are composed in a table whose columns give the names and functions of these elements. Inside the table, the elements of the standard are grouped row by row into four sets: input elements, output elements, input
Alkali Rydberg states in electromagnetic fields: computational physics meets experiment
International Nuclear Information System (INIS)
Krug, A.
2001-11-01
We study highly excited hydrogen and alkali atoms ('Rydberg states') under the influence of a strong microwave field. As the external frequency is comparable to the highly excited electron's classical Kepler frequency, the external field induces a strong coupling of many different quantum mechanical energy levels and finally leads to the ionization of the outer electron. While periodically driven atomic hydrogen can be seen as a paradigm of quantum chaotic motion in an open (decaying) quantum system, the presence of the non-hydrogenic atomic core - which unavoidably has to be treated quantum mechanically - entails some complications. Indeed, laboratory experiments show clear differences in the ionization dynamics of microwave driven hydrogen and non-hydrogenic Rydberg states. In the first part of this thesis, a machinery is developed that allows for numerical experiments on alkali and hydrogen atoms under precisely identical laboratory conditions. Due to the high density of states in the parameter regime typically explored in laboratory experiments, such simulations are only possible with the most advanced parallel computing facilities, in combination with an efficient parallel implementation of the numerical approach. The second part of the thesis is devoted to the results of the numerical experiment. We identify and describe significant differences and surprising similarities in the ionization dynamics of atomic hydrogen as compared to alkali atoms, and give account of the relevant frequency scales that distinguish hydrogenic from non-hydrogenic ionization behavior. Our results necessitate a reinterpretation of the experimental results so far available, and solve the puzzle of a distinct ionization behavior of periodically driven hydrogen and non-hydrogenic Rydberg atoms - an unresolved question for about one decade. Finally, microwave-driven Rydberg states will be considered as prototypes of open, complex quantum systems that exhibit a complicated temporal decay
Mah, K; Danjoux, C E; Manship, S; Makhani, N; Cardoso, M; Sixel, K E
1998-07-15
To reduce the time required for planning and simulating craniospinal fields through the use of a computed tomography (CT) simulator and virtual simulation, and to improve the accuracy of field and shielding placement. A CT simulation planning technique was developed. Localization of critical anatomic features such as the eyes, cribriform plate region, and caudal extent of the thecal sac is enhanced by this technique. Over a 2-month period, nine consecutive pediatric patients were simulated and planned for craniospinal irradiation. Four patients underwent both conventional simulation and CT simulation. Five were planned using CT simulation only. The accuracy of CT simulation was assessed by comparing digitally reconstructed radiographs (DRRs) to portal films for all patients, and to conventional simulation films as well in the first four patients. Time spent by patients in the CT simulation suite was 20 min on average and 40 min maximally for those who were noncompliant. Image acquisition time was less than 10 min in all cases. In the absence of the patient, virtual simulation of all fields took 20 min. The DRRs were in agreement with portal and/or simulation films to within 5 mm in five of the eight cases. Discrepancies of ≥5 mm in the positioning of the inferior border of the cranial fields in the first three patients were due to a systematic error in CT scan acquisition and marker contouring, which was corrected by modifying the technique after the fourth patient. In one patient, the facial shield had to be moved 0.75 cm inferiorly owing to an error in shield construction. Our analysis showed that CT simulation of craniospinal fields was accurate. It resulted in a significant reduction in the time the patient must be immobilized during the planning process. This technique can improve accuracy in field placement and shielding by using three-dimensional CT-aided localization of critical and target structures. Overall, it has improved staff efficiency and resource utilization.
Entertainment computing, social transformation and the quantum field
Rauterberg, G.W.M.; Nijholt, A.; Reidsma, D.; Hondorp, H.
2009-01-01
Entertainment computing is on its way to becoming an established academic discipline. The scope of entertainment computing is quite broad (see the scope of the international journal Entertainment Computing). One unifying idea in this diverse community of entertainment researchers and developers might be
Ida, Masato; Taniguchi, Nobuyuki
2003-09-01
This paper introduces a candidate for the origin of the numerical instabilities repeatedly observed in large eddy simulations of academic and practical industrial flows. Without resorting to any subgrid-scale modeling, but based on a simple assumption regarding the streamwise component of flow velocity, it is shown theoretically that in a channel-flow computation, applying Gaussian filtering to the incompressible Navier-Stokes equations yields a numerically unstable cross-derivative term, similar to one appearing in the Gaussian-filtered Vlasov equation derived by Klimas [J. Comput. Phys. 68, 202 (1987)] and also to one derived recently by Kobayashi and Shimomura [Phys. Fluids 15, L29 (2003)] from the tensor-diffusivity subgrid-scale term in a dynamic mixed model. This result predicts that not only the numerical methods and subgrid-scale models employed, but the applied filtering process itself, can be a seed of this numerical instability. An investigation of the relationship between turbulent energy scattering and the unstable term shows that the instability of the term does not necessarily represent the backscatter of kinetic energy, which has been considered a possible origin of numerical instabilities in large eddy simulation. These findings raise the question of whether a numerically stable subgrid-scale model can be ideally accurate.
Directory of Open Access Journals (Sweden)
Coen Pramono D
2005-03-01
Functional and aesthetic dysgnathia surgery requires accurate pre-surgical planning, including the choice of surgical technique in relation to the differences in anatomical structures among individuals. Programs that simulate the surgery are becoming increasingly important. This can be mediated by using a surgical model, conventional x-rays such as panoramic and cephalometric projections, or a more sophisticated method such as three-dimensional computed tomography (3D-CT). A patient who had undergone double jaw surgery with difficult anatomical landmarks is presented. In this case the mandibular foramina were located relatively high with respect to the sigmoid notches, so the bone incisions for the sagittal split were presumed to be difficult. A 3D-CT was made and was considered very helpful in supporting the pre-operative diagnosis.
Computation of magnetic field in DC brushless linear motors built with NdFeB magnets
International Nuclear Information System (INIS)
Basak, A.; Shirkoohi, G.H.
1990-01-01
A software package based on the finite element technique has been used to compute three-dimensional magnetic fields and static forces developed in brushless d.c. linear motors. Two different types of permanent magnet, one of them being the high-energy neodymium-iron-boron type, were used as the field flux source in the computer models. Motors with the same specifications as the computer models were built, and experimental results obtained from them are compared with the computed results.
Chowdhury, Amor; Sarjaš, Andrej
2016-09-15
The presented paper describes accurate distance measurement for a field-sensed magnetic suspension system. The proximity measurement is based on a Hall effect sensor. The proximity sensor is installed directly on the lower surface of the electro-magnet, which means that it is very sensitive to external magnetic influences and disturbances. External disturbances interfere with the information signal and reduce the usability and reliability of the proximity measurements and, consequently, the whole application operation. A sensor fusion algorithm is deployed for the aforementioned reasons. The sensor fusion algorithm is based on the Unscented Kalman Filter, where a nonlinear dynamic model was derived with the Finite Element Modelling approach. The advantage of such modelling is a more accurate dynamic model parameter estimation, especially in the case when the real structure, materials and dimensions of the real-time application are known. The novelty of the paper is the design of a compact electro-magnetic actuator with a built-in low cost proximity sensor for accurate proximity measurement of the magnetic object. The paper successively presents a modelling procedure with the finite element method, design and parameter settings of a sensor fusion algorithm with Unscented Kalman Filter and, finally, the implementation procedure and results of real-time operation.
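As a toy illustration of the sensor-fusion idea above: the paper uses an Unscented Kalman Filter with an FEM-derived nonlinear model, but the basic mechanism of suppressing measurement disturbances can be sketched with a plain 1-D linear Kalman filter. Every constant here (noise variances, true gap) is invented for the demo.

```python
# Toy 1-D linear Kalman filter smoothing a noisy proximity signal.
# This is NOT the paper's UKF/FEM pipeline; it only conveys the idea of
# fusing a process model with disturbed Hall-sensor readings.
import random

def kalman_1d(measurements, q=1e-4, r=0.25, x0=0.0, p0=1.0):
    """Filter a list of noisy readings with a random-walk process model.

    q: process-noise variance, r: measurement-noise variance.
    Returns the list of filtered estimates."""
    x, p = x0, p0
    estimates = []
    for z in measurements:
        p = p + q                 # predict (state assumed to drift slowly)
        k = p / (p + r)           # Kalman gain
        x = x + k * (z - x)       # correct with the new measurement
        p = (1.0 - k) * p
        estimates.append(x)
    return estimates

if __name__ == "__main__":
    random.seed(1)
    true_gap = 3.2  # mm, hypothetical air gap
    noisy = [true_gap + random.gauss(0.0, 0.5) for _ in range(500)]
    print(round(kalman_1d(noisy)[-1], 2))
```

The real application replaces the random-walk model with the FEM-derived suspension dynamics, which is what motivates the unscented (nonlinear) variant of the filter.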
Roozeboom, Nettie H.; Lee, Henry C.; Simurda, Laura J.; Zilliac, Gregory G.; Pulliam, Thomas H.
2016-01-01
Wing-body juncture flow fields on commercial aircraft configurations are challenging to compute accurately. The NASA Advanced Air Vehicle Program's juncture flow committee is designing an experiment to provide data to improve Computational Fluid Dynamics (CFD) modeling in the juncture flow region. Preliminary design of the model was done using CFD, yet CFD tends to over-predict the separation in the juncture flow region. Risk reduction wind tunnel tests were requisitioned by the committee to obtain a better understanding of the flow characteristics of the designed models. NASA Ames Research Center's Fluid Mechanics Lab performed one of the risk reduction tests. The results of one case, accompanied by CFD simulations, are presented in this paper. Experimental results suggest that the wall-mounted wind tunnel model produces a thicker boundary layer on the fuselage than the CFD predictions, resulting in a larger wing horseshoe vortex that suppresses the side-of-body separation in the juncture flow region. Compared to the experimental results, CFD predicts that a thinner boundary layer on the fuselage generates a weaker wing horseshoe vortex, resulting in a larger side-of-body separation.
International Nuclear Information System (INIS)
Tajima, Osamu; Shibasaki, Masaki; Hoshi, Toshiko; Imai, Kamon
2002-01-01
The purpose of this study was to investigate whether a newly developed maneuver that reduces the reconstruction area by half more accurately evaluates left ventricular (LV) volume on quantitative gated SPECT (QGS) analysis. The subjects were 38 patients who underwent left ventricular angiography (LVG) followed by G-SPECT within 2 weeks. Acquisition was performed with a general purpose collimator and a 64 x 64 matrix. On QGS analysis, the field magnification was 34 cm in the original image (ORI); it was then changed from 34 cm to 17 cm to enlarge the reconstructed image (field change conversion: FCC). End-diastolic volume (EDV) and end-systolic volume (ESV) of the left ventricle were also obtained using LVG. EDV was 71±19 ml, 83±20 ml and 98±23 ml for ORI, FCC and LVG, respectively (p<0.001: ORI versus LVG; p<0.001: ORI versus FCC; p<0.001: FCC versus LVG). ESV was 28±12 ml, 34±13 ml and 41±14 ml for ORI, FCC and LVG, respectively (p<0.001: ORI versus LVG; p<0.001: ORI versus FCC; p<0.001: FCC versus LVG). FCC was better than ORI for calculating LV volume in clinical cases. Furthermore, FCC is a useful method for accurately measuring LV volume on QGS analysis. (author)
Technical Note: Computation of Electric Field Strength Necessary for ...
African Journals Online (AJOL)
Obviously, an electric field is established by this charge. The effects of this field on objects lying within its vicinity depend on its intensity. In this paper, the electric field of a 33 kV overhead line is considered. The aim of the paper is to determine the maximum electric field strength, or potential gradient, E of the 33 kV overhead ...
Use of time space Green's functions in the computation of transient eddy current fields
International Nuclear Information System (INIS)
Davey, K.; Turner, L.
1988-01-01
The utility of integral equations for solving eddy current problems has been borne out by numerous computations in the past few years, principally in sinusoidal steady-state problems. This paper examines the applicability of integral approaches, in both time and space, to the more generic transient problem. The basic formulation of the time-space Green's function approach is laid out. A technique employing Gauss-Laguerre integration is used to obtain the temporal solution, while Gauss-Legendre integration is used to resolve the spatial field character. The technique is then applied to the fusion electromagnetic induction experiments (FELIX) cylinder experiments in both two and three dimensions. It is found that quite accurate solutions can be obtained using rather coarse time steps and very few unknowns; the three-dimensional field solution worked out in this context used essentially only four unknowns. The solution appears to be somewhat sensitive to the choice of time step, a consequence of a numerical instability embedded in the Green's function near the origin.
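The quadrature pairing described above can be sketched as follows. The integrands here are generic stand-ins with known closed forms, not the eddy-current kernels of the paper: Gauss-Laguerre handles the semi-infinite time integral, Gauss-Legendre the finite spatial one.

```python
# Gauss-Laguerre quadrature for semi-infinite (temporal) integrals and
# Gauss-Legendre quadrature for finite (spatial) integrals, demonstrated
# on simple integrands with known exact values.
import numpy as np

def laguerre_integral(f, deg=20):
    r"""Approximate \int_0^\infty f(t) e^{-t} dt by Gauss-Laguerre."""
    t, w = np.polynomial.laguerre.laggauss(deg)
    return float(np.sum(w * f(t)))

def legendre_integral(f, a, b, deg=20):
    r"""Approximate \int_a^b f(x) dx by Gauss-Legendre."""
    x, w = np.polynomial.legendre.leggauss(deg)
    xm = 0.5 * (b - a) * x + 0.5 * (b + a)   # map [-1, 1] onto [a, b]
    return float(0.5 * (b - a) * np.sum(w * f(xm)))

if __name__ == "__main__":
    # exact values: 1/2 and 2, respectively
    print(laguerre_integral(np.cos))
    print(legendre_integral(np.sin, 0.0, np.pi))
```

For smooth integrands both rules converge extremely fast, which is consistent with the abstract's observation that very few unknowns and coarse time steps suffice.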
Zhu, Shun; Travis, Sue M; Elcock, Adrian H
2013-07-09
A major current challenge for drug design efforts focused on protein kinases is the development of drug resistance caused by spontaneous mutations in the kinase catalytic domain. The ubiquity of this problem means that it would be advantageous to develop fast, effective computational methods that could be used to determine the effects of potential resistance-causing mutations before they arise in a clinical setting. With this long-term goal in mind, we have conducted a combined experimental and computational study of the thermodynamic effects of active-site mutations on a well-characterized and high-affinity interaction between a protein kinase and a small-molecule inhibitor. Specifically, we developed a fluorescence-based assay to measure the binding free energy of the small-molecule inhibitor, SB203580, to the p38α MAP kinase and used it measure the inhibitor's affinity for five different kinase mutants involving two residues (Val38 and Ala51) that contact the inhibitor in the crystal structure of the inhibitor-kinase complex. We then conducted long, explicit-solvent thermodynamic integration (TI) simulations in an attempt to reproduce the experimental relative binding affinities of the inhibitor for the five mutants; in total, a combined simulation time of 18.5 μs was obtained. Two widely used force fields - OPLS-AA/L and Amber ff99SB-ILDN - were tested in the TI simulations. Both force fields produced excellent agreement with experiment for three of the five mutants; simulations performed with the OPLS-AA/L force field, however, produced qualitatively incorrect results for the constructs that contained an A51V mutation. Interestingly, the discrepancies with the OPLS-AA/L force field could be rectified by the imposition of position restraints on the atoms of the protein backbone and the inhibitor without destroying the agreement for other mutations; the ability to reproduce experiment depended, however, upon the strength of the restraints' force constant
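Thermodynamic integration estimates a free-energy difference as the integral of the ensemble-averaged coupling derivative over the alchemical parameter. A minimal sketch of the final quadrature step, with a synthetic profile standing in for the simulation averages:

```python
# Thermodynamic integration (TI): dG = integral over lambda in [0, 1] of
# <dH/dlambda>.  The profile below is synthetic; in practice each value
# is an ensemble average sampled at one lambda window of the simulation.

def ti_free_energy(lambdas, dh_dl):
    """Trapezoidal estimate of the TI integral."""
    dg = 0.0
    for i in range(len(lambdas) - 1):
        dg += 0.5 * (dh_dl[i] + dh_dl[i + 1]) * (lambdas[i + 1] - lambdas[i])
    return dg

if __name__ == "__main__":
    lams = [i / 10 for i in range(11)]      # 11 lambda windows
    dhdl = [3.0 - 2.0 * l for l in lams]    # synthetic linear profile
    print(ti_free_energy(lams, dhdl))       # exact integral of 3 - 2l is 2
```

Relative binding affinities for the mutants are then differences of such estimates between thermodynamic cycles; the hard part, as the abstract notes, is the sampling and the force field, not the quadrature.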
Voorhies, Coerte V.; Conrad, Joy
1996-01-01
The geomagnetic spatial power spectrum R_n(r) is the mean square magnetic induction represented by degree n spherical harmonic coefficients of the internal scalar potential averaged over the geocentric sphere of radius r. McLeod's Rule for the magnetic field generated by Earth's core geodynamo says that the expected core surface power spectrum ⟨R_nc(c)⟩ is inversely proportional to (2n + 1) for 1 < n ≤ N_E. McLeod's Rule is verified by locating Earth's core with main field models of Magsat data; the estimated core radius of 3485 km is close to the seismologic value for c of 3480 km. McLeod's Rule and similar forms are then calibrated with the model values of R_n for 3 ≤ n ≤ 12. Extrapolation to the degree 1 dipole predicts the expectation value of Earth's dipole moment to be about 5.89 x 10^22 A m^2 rms (74.5% of the 1980 value) and the expected geomagnetic intensity to be about 35.6 μT rms at Earth's surface. Archeo- and paleomagnetic field intensity data show these and related predictions to be reasonably accurate. The probability distribution χ^2 with 2n + 1 degrees of freedom is assigned to (2n + 1)R_nc/⟨R_nc⟩. Extending this to the dipole implies that an exceptionally weak absolute dipole moment (≤20% of the 1980 value) will exist during 2.5% of geologic time. The mean duration of such major geomagnetic dipole power excursions, one quarter of which feature durable axial dipole reversal, is estimated from the modern dipole power time-scale and the statistical model of excursions. The resulting mean excursion duration of 2767 years forces us to predict an average of 9.04 excursions per million years, 2.26 axial dipole reversals per million years, and a mean reversal duration of 5533 years. Paleomagnetic data show these predictions to be quite accurate. McLeod's Rule led to accurate predictions of Earth's core radius, mean paleomagnetic field
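In standard notation (the Lowes-Mauersberger spectrum; the exact normalization varies between authors), the quantities above read:

```latex
% Spatial power spectrum of the internal field on the sphere of radius r,
% with a the reference radius and g_n^m, h_n^m the Gauss coefficients:
R_n(r) \;=\; (n+1)\left(\frac{a}{r}\right)^{2n+4}
        \sum_{m=0}^{n}\left[(g_n^m)^2 + (h_n^m)^2\right] .

% McLeod's Rule for the expected spectrum at the core surface r = c:
\langle R_{nc}(c)\rangle \;\propto\; \frac{1}{2n+1},
        \qquad 1 < n \le N_E .
```

The proportionality constant is what the abstract calibrates against field-model values of R_n for 3 ≤ n ≤ 12 before extrapolating to the dipole term.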
International Nuclear Information System (INIS)
Zhang, Shiping; Shen, Guoqing; An, Liansuo; Niu, Yuguang
2015-01-01
Online monitoring of the temperature field is crucial to optimally adjust combustion within a boiler. In this paper, acoustic computed tomography (CT) technology was used to obtain the temperature profile of a furnace cross-section. The physical principles behind acoustic CT, acoustic signals and time delay estimation were studied. Then, the technique was applied to a domestic 600-MW coal-fired boiler. Acoustic CT technology was used to monitor the temperature field of the cross-section in the boiler furnace, and the temperature profile was reconstructed through ART iteration. The linear sweeping frequency signal was adopted as the sound source signal, whose sweeping frequency ranged from 500 to 3000 Hz with a sweeping cycle of 0.1 s. The generalized cross-correlation techniques with PHAT and ML were used as the time delay estimation method when the boiler was in different states. Its actual operation indicated that the monitored images accurately represented the combustion state of the boiler, and the acoustic CT system was determined to be accurate and reliable. - Highlights: • An online monitoring approach to monitor temperature field in a boiler furnace. • The paper provides acoustic CT technology to obtain the temperature profile of a furnace cross-section. • The temperature profile was reconstructed through ART iteration. • The technique is applied to a domestic 600-MW coal-fired boiler. • The monitored images accurately represent the combustion state of the boiler
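Time-delay estimation by generalized cross-correlation with PHAT weighting, as used above, can be sketched as follows. This is a generic implementation on synthetic signals, not the boiler's acoustic signal chain.

```python
# Generalized cross-correlation with phase transform (GCC-PHAT) for
# time-delay estimation between two sensor signals.
import numpy as np

def gcc_phat(sig, ref, fs=1.0):
    """Estimate the delay of `sig` relative to `ref`, in units of 1/fs."""
    n = len(sig) + len(ref)
    # Cross-power spectrum, whitened by its magnitude (the PHAT weighting)
    r = np.fft.rfft(sig, n=n) * np.conj(np.fft.rfft(ref, n=n))
    cc = np.fft.irfft(r / (np.abs(r) + 1e-15), n=n)
    max_shift = n // 2
    cc = np.concatenate((cc[-max_shift:], cc[:max_shift + 1]))
    return (np.argmax(np.abs(cc)) - max_shift) / fs

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    ref = rng.standard_normal(1024)
    sig = np.concatenate((np.zeros(5), ref[:-5]))  # ref delayed by 5 samples
    print(gcc_phat(sig, ref))
```

With the measured delays and the known transducer geometry, the local speed of sound (and hence temperature) along each path follows, which is the input to the ART reconstruction of the temperature field.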
Directory of Open Access Journals (Sweden)
Om Prakash Gurjar
2016-03-01
Purpose: Various factors cause geometric uncertainties during prostate radiotherapy, including interfractional and intrafractional patient motions, organ motion, and daily setup errors. This may lead to increased normal tissue complications when a high dose to the prostate is administered. More accurate treatment delivery is possible with daily imaging and localization of the prostate. This study aims to measure the shift of the prostate by using kilovoltage (kV) cone beam computed tomography (CBCT) after position verification by kV orthogonal portal imaging (OPI). Methods: Position verification in 10 patients with prostate cancer was performed by using OPI followed by CBCT before treatment delivery in 25 sessions per patient. In each session, OPI was performed by using an on-board imaging (OBI) system and pelvic bone-to-pelvic bone matching was performed. After applying the shift noted by using OPI, CBCT was performed by using the OBI system and prostate-to-prostate matching was performed. The isocenter shifts along all three translational directions in both techniques were combined into a three-dimensional (3-D) iso-displacement vector (IDV). Results: The mean (SD) IDV (in centimeters) calculated during the 250 imaging sessions was 0.931 (0.598), median 0.825, for OPI and 0.515 (0.336), median 0.43, for CBCT (p < 0.0001), an extremely statistically significant difference. Conclusion: Even after bone-to-bone matching by using OPI, a significant shift in the prostate was observed on CBCT. This study concludes that imaging with CBCT provides more accurate prostate localization than the OPI technique. Hence, CBCT should be chosen as the preferred imaging technique.
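The 3-D iso-displacement vector above is simply the Euclidean norm of the three translational couch shifts; a trivial helper makes the definition concrete:

```python
# The 3-D iso-displacement vector (IDV) is the Euclidean norm of the
# three translational isocenter shifts (all values in centimeters).
import math

def idv(dx, dy, dz):
    """Magnitude of the 3-D iso-displacement vector."""
    return math.sqrt(dx * dx + dy * dy + dz * dz)

if __name__ == "__main__":
    print(idv(0.3, 0.4, 0.0))  # a 3-4-5 right-triangle example
```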
25th Annual International Symposium on Field-Programmable Custom Computing Machines
The IEEE Symposium on Field-Programmable Custom Computing Machines is the original and premier forum for presenting and discussing new research related to computing that exploits the unique features and capabilities of FPGAs and other reconfigurable hardware. Over the past two decades, FCCM has been the place to present papers on architectures, tools, and programming models for field-programmable custom computing machines as well as applications that use such systems.
Computer codes for shaping the magnetic field of the JINR phasotron
International Nuclear Information System (INIS)
Zaplatin, N.L.; Morozov, N.A.
1983-01-01
The computer codes used for shaping the magnetic field of the JINR high-current phasotron are presented. Using these codes, control of the magnetic field mapping was realized in on-line or off-line regimes. The field parameters were then calculated, and the settings of the ferromagnetic correcting elements and trim coils were chosen. Some computer codes were written for measurements of the horizontal component of the magnetic field. Data on the capabilities of some of the codes are presented. The codes were used on the EC-1010 and the CDC-6500 computers.
Steffen, Julien; Hartke, Bernd
2017-10-28
Building on the recently published quantum-mechanically derived force field (QMDFF) and its empirical valence bond extension, EVB-QMDFF, it is now possible to generate a reliable potential energy surface for any given elementary reaction step in an essentially black box manner. This requires a limited and pre-defined set of reference data near the reaction path and generates an accurate approximation of the reference potential energy surface, on and off the reaction path. This intermediate representation can be used to generate reaction rate data, with far better accuracy and reliability than with traditional approaches based on transition state theory (TST) or variational extensions thereof (VTST), even if those include sophisticated tunneling corrections. However, the additional expense at the reference level remains very modest. We demonstrate all this for three arbitrarily chosen example reactions.
Spectrally accurate contour dynamics
International Nuclear Information System (INIS)
Van Buskirk, R.D.; Marcus, P.S.
1994-01-01
We present an exponentially accurate boundary integral method for calculating the equilibria and dynamics of piecewise constant distributions of potential vorticity. The method represents contours of potential vorticity as a spectral sum and solves the Biot-Savart equation for the velocity by spectrally evaluating a desingularized contour integral. We use the technique in both an initial-value code and a Newton continuation method. Our methods are tested by comparing the numerical solutions with known analytic results, and it is shown that for the same amount of computational work our spectral methods are more accurate than other contour dynamics methods currently in use.
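The exponential ("spectral") accuracy such methods exploit rests on a classical fact: the trapezoidal rule applied to a smooth periodic integrand, such as an integral around a closed contour, converges exponentially in the number of nodes. A small demonstration on an integral with a known closed form:

```python
# Trapezoidal sums on a smooth periodic integrand converge exponentially
# fast with the number of nodes; this is the property behind spectral
# contour dynamics.  Exact value used below: the integral of e^{cos t}
# over one period equals 2*pi*I0(1) (modified Bessel function).
import numpy as np

def periodic_trapezoid(f, n):
    """Equispaced trapezoidal rule over one period [0, 2*pi)."""
    t = 2.0 * np.pi * np.arange(n) / n
    return (2.0 * np.pi / n) * float(np.sum(f(t)))

if __name__ == "__main__":
    exact = 2.0 * np.pi * float(np.i0(1.0))
    approx = periodic_trapezoid(lambda t: np.exp(np.cos(t)), 16)
    print(abs(approx - exact))  # near machine precision already at n = 16
```

A polynomial-order method would need orders of magnitude more points for the same error, which is why the abstract can claim more accuracy per unit of computational work.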
Lyon, Richard G. (Inventor); Leisawitz, David T. (Inventor); Rinehart, Stephen A. (Inventor); Memarsadeghi, Nargess (Inventor)
2012-01-01
Disclosed herein are systems, computer-implemented methods, and tangible computer-readable storage media for wide field imaging interferometry. The method includes for each point in a two dimensional detector array over a field of view of an image: gathering a first interferogram from a first detector and a second interferogram from a second detector, modulating a path-length for a signal from an image associated with the first interferogram in the first detector, overlaying first data from the modulated first detector and second data from the second detector, and tracking the modulating at every point in a two dimensional detector array comprising the first detector and the second detector over a field of view for the image. The method then generates a wide-field data cube based on the overlaid first data and second data for each point. The method can generate an image from the wide-field data cube.
Je, U. K.; Cho, H. M.; Cho, H. S.; Park, Y. O.; Park, C. K.; Lim, H. W.; Kim, K. S.; Kim, G. A.; Park, S. Y.; Woo, T. H.; Choi, S. I.
2016-02-01
In this paper, we propose a new, next-generation type of CT examination, so-called Interior Computed Tomography (ICT), which may lead to a reduction of the dose delivered to the patient outside the target region-of-interest (ROI) in dental x-ray imaging. Here the x-ray beam from each projection position covers only a relatively small ROI containing the target of diagnosis within the examined structure, which brings imaging benefits such as reduced scatter, lower system cost, and lower imaging dose. We considered the compressed-sensing (CS) framework, rather than common filtered-backprojection (FBP)-based algorithms, for more accurate ICT reconstruction. We implemented a CS-based ICT algorithm and performed a systematic simulation to investigate its imaging characteristics. Simulation conditions covered two ROI-to-phantom size ratios of 0.28 and 0.14 and four projection numbers of 360, 180, 90, and 45. We successfully reconstructed ICT images of substantially high image quality by using the CS framework even with few-view projection data, while preserving sharp edges in the images.
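The paper's CS-based reconstruction is too involved for a snippet, but the row-action (Kaczmarz/ART) iteration that underlies many iterative CT solvers is easy to sketch. The 3x3 system below is a toy stand-in for the projection equations, not real CT data:

```python
# Kaczmarz (ART) row-action iteration for a consistent system Ax = b,
# the algebraic backbone of many iterative CT reconstruction schemes.
import numpy as np

def kaczmarz(A, b, sweeps=500):
    """Cyclically project the iterate onto each row's hyperplane."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for i in range(A.shape[0]):
            a = A[i]
            x = x + (b[i] - a @ x) / (a @ a) * a
    return x

if __name__ == "__main__":
    A = np.array([[2.0, 1.0, 0.0],
                  [1.0, 3.0, 1.0],
                  [0.0, 1.0, 2.0]])
    x_true = np.array([1.0, -2.0, 3.0])
    print(kaczmarz(A, A @ x_true))
```

CS-style methods add a sparsity penalty (for example on the image gradient) on top of such data-consistency updates, which is what preserves sharp edges with few-view data.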
Lee, Y. C.; Thompson, H. M.; Gaskell, P. H.
2009-12-01
, industrial and physical applications. However, despite recent modelling advances, the accurate numerical solution of the equations governing such problems is still at a relatively early stage. Indeed, recent studies employing a simplifying long-wave approximation have shown that highly efficient numerical methods are necessary to solve the resulting lubrication equations in order to achieve the level of grid resolution required to accurately capture the effects of micro- and nano-scale topographical features. Solution method: A portable parallel multigrid algorithm has been developed for the above purpose, for the particular case of flow over submerged topographical features. Within the multigrid framework adopted, a W-cycle is used to accelerate convergence in respect of the time dependent nature of the problem, with relaxation sweeps performed using a fixed number of pre- and post-Red-Black Gauss-Seidel Newton iterations. In addition, the algorithm incorporates automatic adaptive time-stepping to avoid the computational expense associated with repeated time-step failure. Running time: 1.31 minutes using 128 processors on BlueGene/P with a problem size of over 16.7 million mesh points.
Basics of thermal field theory: a tutorial on perturbative computations
Laine, Mikko
2016-01-01
This book presents thermal field theory techniques, which can be applied in both cosmology and the theoretical description of the QCD plasma generated in heavy-ion collision experiments. It focuses on gauge interactions (whether weak or strong), which are essential in both contexts. As well as the many differences in the physics questions posed and in the microscopic forces playing a central role, the authors also explain the similarities and the techniques, such as the resummations, that are needed for developing a formally consistent perturbative expansion. The formalism is developed step by step, starting from quantum mechanics; introducing scalar, fermionic and gauge fields; describing the issues of infrared divergences; resummations and effective field theories; and incorporating systems with finite chemical potentials. With this machinery in place, the important class of real-time (dynamic) observables is treated in some detail. This is followed by an overview of a number of applications, ranging from t...
Computational radiation chemistry: the emergence of a new field
International Nuclear Information System (INIS)
Bartczak, W.M.; Kroh, J.
1991-01-01
The role of the computer experiment as an information source, which is complementary to the "real" experiment in radiation chemistry, is discussed. The discussion is followed by a brief review of some of the simulation techniques which have been recently applied to the problems of radiation chemistry: ion recombination in spurs and tracks of ionization, electron tunnelling in low-temperature glasses, electron localization in disordered media. (author)
International Nuclear Information System (INIS)
Blanc, V.; Barbie, L.; Masson, R.
2011-01-01
Homogenization of linear viscoelastic heterogeneous media is here extended from two-phase inclusion-matrix media to three-phase inclusion-matrix media. With each phase obeying a compressible Maxwellian behaviour, this analytic method leads to an equivalent elastic homogenization problem in the Laplace-Carson space. For some particular microstructures, such as the Hashin composite sphere assemblage, an exact solution is obtained. The inversion of the Laplace-Carson transforms of the overall stress-strain behaviour gives in such cases an internal variable formulation. As expected, the number of these internal variables and their evolution laws are modified to take the third phase into account. Moreover, evolution laws of averaged stresses and strains per phase can still be derived for three-phase media. Results of this model are compared with full-field computations on representative volume elements using the finite element method, for various concentrations and sizes of inclusions. Relaxation and creep test cases are performed in order to compare predictions of the effective response. The internal variable formulation is shown to yield accurate predictions in both cases. (authors)
Computation of wave fields and soil structure interaction
International Nuclear Information System (INIS)
Lysmer, J.W.
1982-01-01
The basic message of the lecture is that the determination of the temporal and spatial variation of the free-field motions is the most important part of any soil-structure interaction analysis. Any interaction motions may be considered as small aberrations superimposed on the free-field motions. The current definition of the soil-structure interaction problem implies that superposition must be used, directly or indirectly, in any rational method of analysis of this problem. This implies that the use of nonlinear procedures in any part of a soil-structure interaction analysis must be questioned. Currently the most important part of the soil-structure interaction analysis, the free-field problem, cannot be solved by nonlinear methods. Hence, it does not seem reasonable to spend a large effort on trying to obtain nonlinear solutions for the interaction part of the problem. Even if such solutions are obtained they cannot legally be superimposed on the free-field motions to obtain the total motions of the structure. This of course does not preclude the possibility that such an illegal procedure may lead to solutions which are close enough for engineering purposes. However, further research is required to make a decision on this issue
Advances in computational methods for Quantum Field Theory calculations
Ruijl, B.J.G.
2017-01-01
In this work we describe three methods to improve the performance of Quantum Field Theory calculations. First, we simplify large expressions to speed up numerical integrations. Second, we design Forcer, a program for the reduction of four-loop massless propagator integrals. Third, we extend the R*
Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models
Cuntz, Hermann; Lansner, Anders; Panzeri, Stefano; Einevoll, Gaute T.
2015-01-01
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best “LFP proxy”, we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with “ground-truth” LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo. PMID:26657024
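The "fixed linear combination of the LIF synaptic currents" can be sketched as a delayed weighted sum of the excitatory and inhibitory currents. The delay and weight used below are placeholders chosen for illustration, not the fitted values reported by the authors:

```python
import numpy as np

# Illustrative weighted-sum LFP proxy from point-network synaptic currents:
# the AMPA current is delayed relative to the GABA current and the two are
# combined with a fixed ratio alpha. delay_ms and alpha are hypothetical
# placeholder values, not the coefficients from the paper.
def lfp_proxy(i_ampa, i_gaba, dt_ms=0.1, delay_ms=6.0, alpha=1.65):
    shift = int(round(delay_ms / dt_ms))
    ampa_delayed = np.concatenate([np.full(shift, i_ampa[0]), i_ampa[:-shift]])
    return ampa_delayed - alpha * i_gaba

t = np.arange(0, 500, 0.1)                     # 500 ms at 0.1 ms resolution
i_ampa = np.sin(2 * np.pi * 10 * t / 1000.0)   # toy 10 Hz excitatory current
i_gaba = 0.5 * np.cos(2 * np.pi * 10 * t / 1000.0)
proxy = lfp_proxy(i_ampa, i_gaba)
```

Any real application would calibrate the delay and weight against ground-truth LFP, as the study does with its 3D multi-compartmental network.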
Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models.
Directory of Open Access Journals (Sweden)
Alberto Mazzoni
2015-12-01
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local field potential (LFP). Given that LFPs are generated by spatially separated currents across the neuronal membrane, they cannot be computed directly from quantities defined in models of point-like LIF neurons. Here, we explore the best approximation for predicting the LFP based on standard output from point-neuron LIF networks. To search for this best "LFP proxy", we compared LFP predictions from candidate proxies based on LIF network output (e.g., firing rates, membrane potentials, synaptic currents) with "ground-truth" LFP obtained when the LIF network synaptic input currents were injected into an analogous three-dimensional (3D) network model of multi-compartmental neurons with realistic morphology and spatial distributions of somata and synapses. We found that a specific fixed linear combination of the LIF synaptic currents provided an accurate LFP proxy, accounting for most of the variance of the LFP time course observed in the 3D network for all recording locations. This proxy performed well over a broad set of conditions, including substantial variations of the neuronal morphologies. Our results provide a simple formula for estimating the time course of the LFP from LIF network simulations in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo.
Lima, Fabricio O.; Silva, Gisele S.; Furie, Karen L.; Frankel, Michael R.; Lev, Michael H.; Camargo, Érica CS; Haussen, Diogo C.; Singhal, Aneesh B.; Koroshetz, Walter J.; Smith, Wade S.; Nogueira, Raul G.
2016-01-01
Background and Purpose Patients with large vessel occlusion strokes (LVOS) may be better served by direct transfer to endovascular-capable centers, avoiding hazardous delays between primary and comprehensive stroke centers. However, accurate stroke field triage remains challenging. We aimed to develop a simple field scale to identify LVOS. Methods The FAST-ED scale was based on items of the NIHSS with higher predictive value for LVOS and tested in the STOPStroke cohort, in which patients underwent CT angiography within the first 24 hours of stroke onset. LVOS were defined by total occlusions involving the intracranial ICA, MCA-M1, MCA-M2, or basilar arteries. Patients with partial, bi-hemispheric, and/or anterior + posterior circulation occlusions were excluded. Receiver operating characteristic (ROC) curve, sensitivity, specificity, positive (PPV) and negative predictive values (NPV) of FAST-ED were compared with the NIHSS, Rapid Arterial oCclusion Evaluation (RACE) scale and Cincinnati Prehospital Stroke Severity Scale (CPSSS). Results LVO was detected in 240 of the 727 qualifying patients (33%). FAST-ED had accuracy in predicting LVO comparable to the NIHSS and higher than RACE and CPSSS (area under the ROC curve: FAST-ED=0.81 as reference; NIHSS=0.80, p=0.28; RACE=0.77, p=0.02; and CPSSS=0.75, p=0.002). A FAST-ED ≥4 had sensitivity of 0.60, specificity 0.89, PPV 0.72, and NPV 0.82, versus RACE ≥5 with 0.55, 0.87, 0.68, 0.79 and CPSSS ≥2 with 0.56, 0.85, 0.65, 0.78, respectively. Conclusions FAST-ED is a simple scale that, if successfully validated in the field, may be used by medical emergency professionals to identify LVOS in the pre-hospital setting, enabling rapid triage of patients. PMID:27364531
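The four screening metrics quoted above follow directly from a 2x2 confusion matrix. The sketch below shows the standard formulas; the counts are made-up numbers chosen only to land near the FAST-ED ≥4 operating point, not the study's actual tabulated data.

```python
# Sensitivity, specificity, PPV and NPV from a 2x2 confusion matrix, as used
# to evaluate a dichotomized field triage scale. The counts are fabricated
# for illustration.
def diagnostic_metrics(tp, fp, fn, tn):
    return {
        "sensitivity": tp / (tp + fn),   # fraction of LVO cases detected
        "specificity": tn / (tn + fp),   # fraction of non-LVO cases cleared
        "ppv": tp / (tp + fp),           # probability LVO given positive scale
        "npv": tn / (tn + fn),           # probability no LVO given negative scale
    }

m = diagnostic_metrics(tp=144, fp=56, fn=96, tn=431)
```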
Computation of the magnetic field of a spectrometer in detectors region
International Nuclear Information System (INIS)
Zhidkov, E.P.; Yuldasheva, M.B.; Yudin, I.P.; Yuldashev, O.I.
1995-01-01
Computed results of the 3D magnetic field of a spectrometer intended for the investigation of hadron production of charmed particles and the indication of narrow resonances in neutron-nucleus interactions are presented. The methods used in the computations, the finite element method and a finite element method with newly suggested infinite elements, are described. For accuracy control, the computations were carried out on a sequence of three-dimensional meshes. Special attention is devoted to the behaviour of the magnetic field in the region of the basic detector (proportional chambers). The presented results can be used to estimate the field behaviour of similar spectrometer magnets. (orig.)
Computation of the Magnetic Field of a Spectrometer in Detectors Region
International Nuclear Information System (INIS)
Zhidkov, E.P.; Yuldasheva, M.B.; Yudin, I.P.; Yuldashev, O.I.
1994-01-01
Computed results of the 3D magnetic field of a spectrometer intended for the investigation of hadron production of charmed particles and the indication of narrow resonances in neutron-nucleus interactions are presented. The methods used in the computations, the finite element method and a finite element method with newly suggested infinite elements, are described. For accuracy control, the computations were carried out on a sequence of three-dimensional meshes. Special attention is devoted to the behaviour of the magnetic field in the region of the basic detectors (proportional chambers). The presented results can be used to estimate the field behaviour of similar spectrometer magnets. 12 refs., 16 figs
Computational analysis of the flow field downstream of flow conditioners
Energy Technology Data Exchange (ETDEWEB)
Erdal, Asbjoern
1997-12-31
Technological innovations are essential for maintaining the competitiveness of gas companies, and metering technology is one important area. This thesis shows that computational fluid dynamics techniques can be a valuable tool for examining several parameters that may affect the performance of a flow conditioner (FC). Previous design methods, such as screen theory, could not provide a fundamental understanding of how an FC works. The thesis shows, among other things, that the flow pattern through a complex geometry, like a 19-hole plate FC, can be simulated with good accuracy by a k-ε turbulence model. The calculations illuminate how variations in pressure drop, overall porosity, grading of porosity across the cross-section and the number of holes affect the performance of FCs. These questions have been studied experimentally by researchers for a long time. Now an understanding of the important mechanisms behind efficient FCs emerges from the predictions. 179 refs., 110 figs., 8 tabs.
International Nuclear Information System (INIS)
Getmanov, B.S.
1988-01-01
The results of the classification of two-dimensional relativistic field models ((1) spinor; (2) essentially nonlinear scalar) possessing higher conservation laws, obtained using a system of symbolic computer calculations, are briefly presented
2006-07-01
This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch Traffic Management Center Integration Field Operations Test in the State of Utah. The document discusses evaluation findings in the followin...
2006-05-01
This document provides the final report for the evaluation of the USDOT-sponsored Computer-Aided Dispatch - Traffic Management Center Integration Field Operations Test in the State of Washington. The document discusses evaluation findings in the foll...
Analysis of the Extremely Low Frequency Magnetic Field Emission from Laptop Computers
Directory of Open Access Journals (Sweden)
Brodić Darko
2016-03-01
This study addresses the problem of the magnetic field emission produced by laptop computers. Although the magnetic field is spread over the entire frequency spectrum, the part most hazardous to laptop users is the frequency range from 50 to 500 Hz, commonly called the extremely low frequency magnetic field. In this frequency region the magnetic field is characterized by high peak values. To examine the influence of the laptop's magnetic field emission in the office, a specific experiment is proposed. It includes the measurement of the magnetic field at six positions on the laptop that are in close contact with the user. The results obtained from ten different laptop computers show extremely high emission at some positions, depending on power dissipation or poor ergonomics. Finally, the experiment identifies these high-emission positions and suggests possible solutions.
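Quantifying emission in the 50-500 Hz band from a sampled probe waveform can be done with a simple FFT band-pass followed by an RMS estimate. This is a generic sketch of that band-limited RMS computation, not the instrument or protocol used in the study; the synthetic signal and amplitudes are invented.

```python
import numpy as np

# Estimate the RMS magnetic flux density in the 50-500 Hz (ELF) band from a
# sampled waveform. The synthetic signal below stands in for a probe
# recording: a 0.5 uT mains-harmonic component plus out-of-band content.
fs = 5000.0                          # sampling rate, Hz
t = np.arange(0, 1.0, 1.0 / fs)      # 1 s record
b = 0.5 * np.sin(2 * np.pi * 50 * t) + 0.05 * np.sin(2 * np.pi * 1500 * t)

def band_rms(x, fs, f_lo=50.0, f_hi=500.0):
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    spec[(freqs < f_lo) | (freqs > f_hi)] = 0.0   # zero bins outside the band
    return np.sqrt(np.mean(np.fft.irfft(spec, n=len(x)) ** 2))

rms_elf = band_rms(b, fs)            # ~0.5/sqrt(2) uT for the pure 50 Hz tone
```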
Time-Accurate Simulations of Synthetic Jet-Based Flow Control for An Axisymmetric Spinning Body
National Research Council Canada - National Science Library
Sahu, Jubaraj
2004-01-01
.... A time-accurate Navier-Stokes computational technique has been used to obtain numerical solutions for the unsteady jet-interaction flow field for a spinning projectile at a subsonic speed, Mach...
Numerical computation of the transport matrix in toroidal plasma with a stochastic magnetic field
Zhu, Siqiang; Chen, Dunqiang; Dai, Zongliang; Wang, Shaojie
2018-04-01
A new numerical method, based on integrating along the full orbit of guiding centers, to compute the transport matrix is realized. The method is successfully applied to compute the phase-space diffusion tensor of passing electrons in a tokamak with a stochastic magnetic field. The new method also computes the Lagrangian correlation function, which can be used to evaluate the Lagrangian correlation time and the turbulence correlation length. For the case of the stochastic magnetic field, we find that the order of magnitude of the parallel correlation length can be estimated by qR0, as expected previously.
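The Lagrangian correlation time mentioned above is conventionally evaluated as the time integral of the normalized correlation function. As a minimal sketch, assuming an exponential correlation C(τ) = exp(-τ/τ_c) (an assumption for illustration, not the paper's computed function), the integral recovers τ_c:

```python
import numpy as np

# Correlation time as the time integral of the normalized Lagrangian
# correlation function C(tau)/C(0). With C(tau) = exp(-tau/tau_c) the
# integral equals tau_c, which the quadrature below should reproduce.
tau_c = 2.5                                      # illustrative value
tau = np.linspace(0.0, 50.0 * tau_c, 200_001)
c = np.exp(-tau / tau_c)

dt = tau[1] - tau[0]
corr_time = dt * (c.sum() - 0.5 * (c[0] + c[-1]))   # trapezoidal rule
```

In practice C(τ) would come from averaging products of fluctuations along the computed full orbits, and the turbulence correlation length follows analogously from the spatial correlation.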
38 CFR 4.76a - Computation of average concentric contraction of visual fields.
2010-07-01
... concentric contraction of visual fields. 4.76a Section 4.76a Pensions, Bonuses, and Veterans' Relief DEPARTMENT OF VETERANS AFFAIRS SCHEDULE FOR RATING DISABILITIES Disability Ratings The Organs of Special Sense § 4.76a Computation of average concentric contraction of visual fields. Table III—Normal Visual...
Computing black hole entropy in loop quantum gravity from a conformal field theory perspective
International Nuclear Information System (INIS)
Agulló, Iván; Borja, Enrique F.; Díaz-Polo, Jacobo
2009-01-01
Motivated by the analogy proposed by Witten between Chern-Simons and conformal field theories, we explore an alternative way of computing the entropy of a black hole starting from the isolated horizon framework in loop quantum gravity. The consistency of the result opens a window for the interplay between conformal field theory and the description of black holes in loop quantum gravity
Computer algebra in quantum field theory: integration, summation and special functions
Schneider, Carsten
2013-01-01
The book focuses on advanced computer algebra methods and special functions that have striking applications in the context of quantum field theory. It presents the state of the art and new methods for (infinite) multiple sums, multiple integrals, in particular Feynman integrals, difference and differential equations in the format of survey articles. The presented techniques emerge from interdisciplinary fields: mathematics, computer science and theoretical physics; the articles are written by mathematicians and physicists with the goal that both groups can learn from the other field, including
Wan, Junwei; Chen, Hongyan; Zhao, Jing
2017-08-01
According to the real-time, reliability and safety requirements of aerospace experiments, a single-center cloud computing technology application verification platform was constructed. At the IaaS level, the feasibility of applying cloud computing technology to the field of aerospace experiments was tested and verified. Based on an analysis of the test results, a preliminary conclusion is obtained: a cloud computing platform can be applied to computing-intensive aerospace experiment workloads. For I/O-intensive workloads, the traditional physical machine is recommended.
Manual cross check of computed dose times for motorised wedged fields
International Nuclear Information System (INIS)
Porte, J.
2001-01-01
If a mass of tissue-equivalent material is exposed in turn to wedged and open radiation fields of the same size, for equal times, it is incorrect to assume that the resultant isodose pattern will be effectively that of a wedge having half the angle of the wedged field. Computer programs have been written to address the problem of creating an intermediate wedge field, commonly known as a motorized wedge. The total exposure time is apportioned between the open and wedged fields to produce a beam modification equivalent to that of a wedged field of a given wedge angle. (author)
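One common textbook approximation for this apportionment relates the effective wedge angle to the wedged-time fraction through tangents: tan(θ_eff) ≈ f·tan(θ_w), where f is the fraction of the total exposure delivered through the wedge. The sketch below applies that rule; it is an illustration of the idea only, not the author's program, and any clinical calculation must work in dose and follow the planning system's commissioned data.

```python
import math

# Approximate apportioning of beam-on time between the open and wedged
# fields of a motorized wedge: tan(theta_eff) ~ f * tan(theta_w), where f is
# the fraction delivered through the physical wedge (here a 60 deg wedge).
def wedged_fraction(theta_eff_deg, theta_wedge_deg=60.0):
    return math.tan(math.radians(theta_eff_deg)) / math.tan(math.radians(theta_wedge_deg))

total_time = 1.0                     # arbitrary units
f = wedged_fraction(30.0)            # exactly 1/3 for a 30 deg effective wedge
t_wedged, t_open = f * total_time, (1.0 - f) * total_time
```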
Improved Off-Shell Scattering Amplitudes in String Field Theory and New Computational Methods
Park, I Y; Bars, Itzhak
2004-01-01
We report on new results in Witten's cubic string field theory for the off-shell factor in the 4-tachyon amplitude that was not fully obtained explicitly before. This is achieved by completing the derivation of the Veneziano formula in the Moyal star formulation of Witten's string field theory (MSFT). We also demonstrate detailed agreement of MSFT with a number of on-shell and off-shell computations in other approaches to Witten's string field theory. We extend the techniques of computation in MSFT, and show that the j=0 representation of SL(2,R) generated by the Virasoro operators $L_{0}, L_{\pm 1}$ is a key structure in practical computations for generating numbers. We provide more insight into the Moyal structure that simplifies string field theory, and develop techniques that could be applied more generally, including nonperturbative processes.
Energy Technology Data Exchange (ETDEWEB)
Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-07
A numerical algorithm for computing the field components B_{r} and B_{z} and their r and z derivatives with open boundaries in cylindrical coordinates for radially thin solenoids with uniform current density is described in this note. An algorithm for computing the vector potential A_{θ} is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. The (apparently) new feature of the algorithms described in this note applies to cases where the field point is outside of the bore of the solenoid and the field-point radius approaches the solenoid radius. Since the elliptic integrals of the third kind normally used in computing B_{z} and A_{θ} become infinite in this region of parameter space, fields for points with the axial coordinate z outside of the ends of the solenoid and near the solenoid radius are treated by use of elliptic integrals of the third kind of modified argument, derived by use of an addition theorem. The algorithms also avoid the numerical difficulties that the textbook solutions have for points near the axis, which arise from explicit factors of 1/r or 1/r^{2} in some of the expressions.
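The simplest special case of the fields treated in the note, the on-axis B_z of a radially thin solenoid, has a closed form with no elliptic integrals, and makes a useful sanity check for any off-axis implementation. This is the standard textbook result, not the note's algorithm:

```python
import math

MU0 = 4.0e-7 * math.pi  # vacuum permeability, T*m/A

# On-axis field B_z of a radially thin solenoid of radius R and length L,
# with n turns per unit length carrying current I. Off the axis, the full
# elliptic-integral expressions of the note are required.
def bz_on_axis(z, R, L, n, I):
    zp, zm = z + L / 2.0, z - L / 2.0
    return 0.5 * MU0 * n * I * (zp / math.hypot(zp, R) - zm / math.hypot(zm, R))

# Check against the infinite-solenoid limit: at the center of a very long,
# thin solenoid, B_z -> mu0 * n * I.
b_center = bz_on_axis(0.0, R=0.05, L=10.0, n=1000.0, I=100.0)
```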
A Fixpoint-Based Calculus for Graph-Shaped Computational Fields
DEFF Research Database (Denmark)
Lluch Lafuente, Alberto; Loreti, Michele; Montanari, Ugo
2015-01-01
topology is represented by a graph-shaped field, namely a network with attributes on both nodes and arcs, where arcs represent interaction capabilities between nodes. We propose a calculus where computation is strictly synchronous and corresponds to sequential computations of fixpoints in the graph-shaped field. Under some conditions, those fixpoints can be computed by synchronised iterations, where in each iteration the attributes of a node are updated based on the attributes of the neighbours in the previous iteration. Basic constructs are reminiscent of the semiring μ-calculus, a semiring-valued generalisation of the modal μ-calculus, which provides a flexible mechanism to specify the neighbourhood range (according to path formulae) and the way attributes should be combined (through semiring operators). Additional control-flow constructs allow one to conveniently structure the fixpoint computations. We...
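The synchronised fixpoint iteration described above can be sketched with the min-plus (tropical) semiring, where it computes shortest distances: each round, every node recombines its neighbours' previous-round attributes until nothing changes. The graph and weights below are invented for illustration.

```python
# Synchronised fixpoint iteration over a graph-shaped field with the
# min-plus semiring: node attributes are distances, arc attributes are
# weights, and the combination operator is min over (neighbour + weight).
INF = float("inf")

def fixpoint_distances(arcs, nodes, source):
    """arcs: dict (u, v) -> weight; returns shortest distances from source."""
    attr = {v: (0.0 if v == source else INF) for v in nodes}
    while True:
        new = {v: min([attr[v]] + [attr[u] + w
                                   for (u, x), w in arcs.items() if x == v])
               for v in nodes}
        if new == attr:          # fixpoint reached: no attribute changed
            return attr
        attr = new               # next synchronised round

arcs = {("a", "b"): 1.0, ("b", "c"): 2.0, ("a", "c"): 5.0}
dist = fixpoint_distances(arcs, nodes={"a", "b", "c"}, source="a")
```

Swapping in other semirings (e.g. max-min for bottleneck capacity) changes what the same iteration scheme computes, which is the flexibility the semiring μ-calculus-style constructs provide.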
Study of electric and magnetic fields on transmission lines using a computer simulation program
International Nuclear Information System (INIS)
Robelo Mojica, Nelson
2011-01-01
A study was conducted to determine and reduce the levels of electric and magnetic fields for different configurations used by the Instituto Costarricense de Electricidad in power transmission lines in Costa Rica. The computer simulation program PLS-CADD, with the EPRI algorithm, was used to obtain field values close to the actual values along the line easements in service to date. Different configurations were compared on equal terms, and those with the lowest levels of electric and magnetic fields were determined. The most appropriate tower configuration was thereby obtained, and people's exposure to electromagnetic fields has been decreased without affecting the energy demand of the population. (author) [es
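At its core, the magnetic field below a line is the vector superposition of the fields of long straight conductors, B = μ0·I/(2πr) each. The sketch below shows that superposition for a made-up geometry and current; it is a first-principles illustration, not the PLS-CADD/EPRI algorithm, and the phase weighting is a crude instantaneous snapshot rather than a proper phasor calculation.

```python
import math

MU0_OVER_2PI = 2.0e-7  # T*m/A

# Magnetic flux density at a point near a transmission line, from the
# superposition of long straight conductors. conductors: (x, y, I, phase_deg);
# geometry and currents are invented, not an ICE line design.
def b_field(point, conductors):
    bx = by = 0.0
    for (cx, cy, i, ph) in conductors:
        dx, dy = point[0] - cx, point[1] - cy
        r2 = dx * dx + dy * dy
        mag = MU0_OVER_2PI * i / r2       # mu0*I/(2*pi*r) times 1/r unit vector
        w = math.cos(math.radians(ph))    # crude instantaneous phase weight
        bx += -mag * dy * w               # field is perpendicular to radius
        by += mag * dx * w
    return math.hypot(bx, by)

# Single-conductor check: 1000 A seen from 10 m gives 20 uT.
b_single = b_field((0.0, 0.0), [(0.0, 10.0, 1000.0, 0.0)])
```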
International Nuclear Information System (INIS)
Grillo, C.; Suyu, S. H.; Umetsu, K.; Rosati, P.; Caminha, G. B.; Mercurio, A.; Balestra, I.; Munari, E.; Nonino, M.; De Lucia, G.; Borgani, S.; Biviano, A.; Girardi, M.; Lombardi, M.; Gobat, R.; Coe, D.; Koekemoer, A. M.; Postman, M.; Zitrin, A.; Halkola, A.
2015-01-01
We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the Cluster Lensing And Supernova survey with Hubble (CLASH) and Frontier Fields galaxy cluster MACS J0416.1–2403. We show and employ our extensive spectroscopic data set taken with the VIsible Multi-Object Spectrograph instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log (M * /M ☉ ) ≅ 8.6. We reproduce the measured positions of a set of 30 multiple images with a remarkable median offset of only 0.''3 by means of a comprehensive strong lensing model comprised of two cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components, parameterized with dual pseudo-isothermal total mass profiles. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ∼5%, including the systematic uncertainties estimated from six distinct mass models. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological N-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1–2403. Our findings of the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide intriguing
Energy Technology Data Exchange (ETDEWEB)
Grillo, C. [Dark Cosmology Centre, Niels Bohr Institute, University of Copenhagen, Juliane Maries Vej 30, DK-2100 Copenhagen (Denmark); Suyu, S. H.; Umetsu, K. [Institute of Astronomy and Astrophysics, Academia Sinica, P.O. Box 23-141, Taipei 10617, Taiwan (China); Rosati, P.; Caminha, G. B. [Dipartimento di Fisica e Scienze della Terra, Università degli Studi di Ferrara, Via Saragat 1, I-44122 Ferrara (Italy); Mercurio, A. [INAF - Osservatorio Astronomico di Capodimonte, Via Moiariello 16, I-80131 Napoli (Italy); Balestra, I.; Munari, E.; Nonino, M.; De Lucia, G.; Borgani, S.; Biviano, A.; Girardi, M. [INAF - Osservatorio Astronomico di Trieste, via G. B. Tiepolo 11, I-34143, Trieste (Italy); Lombardi, M. [Dipartimento di Fisica, Università degli Studi di Milano, via Celoria 16, I-20133 Milano (Italy); Gobat, R. [Laboratoire AIM-Paris-Saclay, CEA/DSM-CNRS-Universitè Paris Diderot, Irfu/Service d' Astrophysique, CEA Saclay, Orme des Merisiers, F-91191 Gif sur Yvette (France); Coe, D.; Koekemoer, A. M.; Postman, M. [Space Telescope Science Institute, 3700 San Martin Drive, Baltimore, MD 21208 (United States); Zitrin, A. [Cahill Center for Astronomy and Astrophysics, California Institute of Technology, MS 249-17, Pasadena, CA 91125 (United States); Halkola, A., E-mail: grillo@dark-cosmology.dk; and others
2015-02-10
We present a detailed mass reconstruction and a novel study on the substructure properties in the core of the Cluster Lensing And Supernova survey with Hubble (CLASH) and Frontier Fields galaxy cluster MACS J0416.1–2403. We show and employ our extensive spectroscopic data set taken with the VIsible Multi-Object Spectrograph instrument as part of our CLASH-VLT program, to confirm spectroscopically 10 strong lensing systems and to select a sample of 175 plausible cluster members to a limiting stellar mass of log (M {sub *}/M {sub ☉}) ≅ 8.6. We reproduce the measured positions of a set of 30 multiple images with a remarkable median offset of only 0.''3 by means of a comprehensive strong lensing model comprised of two cluster dark-matter halos, represented by cored elliptical pseudo-isothermal mass distributions, and the cluster member components, parameterized with dual pseudo-isothermal total mass profiles. The latter have total mass-to-light ratios increasing with the galaxy HST/WFC3 near-IR (F160W) luminosities. The measurement of the total enclosed mass within the Einstein radius is accurate to ∼5%, including the systematic uncertainties estimated from six distinct mass models. We emphasize that the use of multiple-image systems with spectroscopic redshifts and knowledge of cluster membership based on extensive spectroscopic information is key to constructing robust high-resolution mass maps. We also produce magnification maps over the central area that is covered with HST observations. We investigate the galaxy contribution, both in terms of total and stellar mass, to the total mass budget of the cluster. When compared with the outcomes of cosmological N-body simulations, our results point to a lack of massive subhalos in the inner regions of simulated clusters with total masses similar to that of MACS J0416.1–2403. Our findings of the location and shape of the cluster dark-matter halo density profiles and on the cluster substructures provide
Using the CAVE virtual-reality environment as an aid to 3-D electromagnetic field computation
International Nuclear Information System (INIS)
Turner, L.R.; Levine, D.; Huang, M.; Papka, M.
1995-01-01
One of the major problems in three-dimensional (3-D) field computation is visualizing the resulting 3-D field distributions. A virtual-reality environment, such as the CAVE (CAVE Automatic Virtual Environment), is helping to overcome this problem, thus making the results of computation more usable for designers and users of magnets and other electromagnetic devices. As a demonstration of the capabilities of the CAVE, the elliptical multipole wiggler (EMW), an insertion device being designed for the Advanced Photon Source (APS) now being commissioned at Argonne National Laboratory (ANL), was made visible, along with its fields and beam orbits. Other uses of the CAVE in preprocessing and postprocessing computation for electromagnetic applications are also discussed
THEORETICAL COMPUTATION OF A STRESS FIELD IN A CYLINDRICAL GLASS SPECIMEN
Directory of Open Access Journals (Sweden)
NORBERT KREČMER
2011-03-01
This work deals with the computation of the stress field generated in an infinitely high glass cylinder during cooling. The theory of structural relaxation is used to compute the heat capacity, the thermal expansion coefficient, and the viscosity. The relaxation of the stress components is solved within the framework of the Maxwell viscoelasticity model. The obtained results were verified by a sensitivity analysis and compared with some experimental data.
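The Maxwell viscoelastic relaxation at the heart of such a computation reduces, for a single stress component at fixed strain, to dσ/dt = E·dε/dt - σ/τ, so a stress frozen in by a strain step decays as σ(t) = E·ε0·exp(-t/τ). A minimal sketch, with illustrative constants rather than fitted glass properties and a constant relaxation time instead of the temperature-dependent one a real cooling simulation needs:

```python
import math

# Explicit-Euler integration of the Maxwell model d(sigma)/dt = E*deps/dt
# - sigma/tau for a strain step eps0 applied at t = 0. The stress should
# relax as sigma(t) = E*eps0*exp(-t/tau). Constants are made up.
E, tau, eps0 = 70.0e9, 10.0, 1.0e-4   # Pa, s, dimensionless
dt, t_end = 1.0e-3, 30.0

sigma = E * eps0                       # stress right after the strain step
t = 0.0
while t < t_end:
    sigma += dt * (-sigma / tau)       # strain is constant after the step
    t += dt

analytic = E * eps0 * math.exp(-t_end / tau)
```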
International Nuclear Information System (INIS)
Pshenichnikov, A.F.
2012-01-01
A new algorithm for calculating magnetic fields in a concentrated magnetic fluid with inhomogeneous density is proposed. The inhomogeneity of the fluid is caused by magnetophoresis. In this case, the diffusion and magnetostatic parts of the problem are tightly linked together and are solved jointly. The dynamic diffusion equation is solved by the finite volume method and, in parallel, an iterative process is performed to calculate the magnetic field inside the fluid. The solution to the problem is sought in Cartesian coordinates, and the computational domain is decomposed into rectangular elements. This technique eliminates the need to solve the related boundary-value problem for magnetic fields, accelerates computations and eliminates the error caused by the finite size of the outer region. Formulas describing the contribution of a rectangular element to the field intensity in the case of a plane problem are given. The magnetic and concentration fields inside a magnetic fluid filling a rectangular cavity, generated under the action of a uniform external field, are calculated. - Highlights: ▶ A new algorithm for calculating the magnetic field in a concentrated magnetic fluid, accounting for magnetophoresis and particle diffusion. ▶ The related boundary-value problem need not be solved, which accelerates computations and eliminates some errors. ▶ The nonlinear diffusion equation is solved by the finite volume method, and the magnetic and concentration fields in the fluid are calculated for the plane case.
International Nuclear Information System (INIS)
Fukushima, Toshio
2017-01-01
In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.
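The "double exponential rule" used for the surface integrals above refers to tanh-sinh quadrature, where the substitution x = tanh((π/2)·sinh t) makes the transformed integrand decay double-exponentially, so endpoint singularities are handled gracefully. A minimal sketch on (-1, 1), independent of the paper's gravity application:

```python
import math

# Minimal tanh-sinh (double exponential) quadrature on (-1, 1).
def tanh_sinh(f, h=0.01, t_max=3.5):
    total = 0.0
    n = int(round(t_max / h))
    for k in range(-n, n + 1):
        t = k * h
        u = 0.5 * math.pi * math.sinh(t)
        x = math.tanh(u)
        if abs(x) >= 1.0:          # abscissa indistinguishable from +-1
            continue
        w = 0.5 * math.pi * math.cosh(t) / math.cosh(u) ** 2   # dx/dt
        total += w * f(x)
    return h * total

# Endpoint-singular test integrand: integral of 1/sqrt(1 - x^2) on (-1, 1) = pi.
val = tanh_sinh(lambda x: 1.0 / math.sqrt(1.0 - x * x))
```

In practice the step h is halved until successive estimates agree, which converges extremely quickly for this rule.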
Energy Technology Data Exchange (ETDEWEB)
Fukushima, Toshio, E-mail: Toshio.Fukushima@nao.ac.jp [National Astronomical Observatory/SOKENDAI, Ohsawa, Mitaka, Tokyo 181-8588 (Japan)
2017-10-01
In order to obtain the gravitational field of a general finite body inside its Brillouin sphere, we developed a new method to compute the field accurately. First, the body is assumed to consist of some layers in a certain spherical polar coordinate system and the volume mass density of each layer is expanded as a Maclaurin series of the radial coordinate. Second, the line integral with respect to the radial coordinate is analytically evaluated in a closed form. Third, the resulting surface integrals are numerically integrated by the split quadrature method using the double exponential rule. Finally, the associated gravitational acceleration vector is obtained by numerically differentiating the numerically integrated potential. Numerical experiments confirmed that the new method is capable of computing the gravitational field independently of the location of the evaluation point, namely whether inside, on the surface of, or outside the body. It can also provide sufficiently precise field values, say of 14–15 digits for the potential and of 9–10 digits for the acceleration. Furthermore, its computational efficiency is better than that of the polyhedron approximation. This is because the computational error of the new method decreases much faster than that of the polyhedron models when the number of required transcendental function calls increases. As an application, we obtained the gravitational field of 433 Eros from its shape model expressed as the 24 × 24 spherical harmonic expansion by assuming homogeneity of the object.
International Nuclear Information System (INIS)
Dragt, A.J.; Gluckstern, R.L.
1992-11-01
The University of Maryland Dynamical Systems and Accelerator Theory Group carries out research in two broad areas: the computation of charged particle beam transport using Lie algebraic methods and advanced methods for the computation of electromagnetic fields and beam-cavity interactions. Important improvements in the state of the art are believed to be possible in both of these areas. In addition, applications of these methods are made to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. The Lie algebraic method of computing and analyzing beam transport handles both linear and nonlinear beam elements. Tests show this method to be superior to the earlier matrix or numerical integration methods. It has wide application to many areas including accelerator physics, intense particle beams, ion microprobes, high resolution electron microscopy, and light optics. With regard to the area of electromagnetic fields and beam cavity interactions, work is carried out on the theory of beam breakup in single pulses. Work is also done on the analysis of the high frequency behavior of longitudinal and transverse coupling impedances, including the examination of methods which may be used to measure these impedances. Finally, work is performed on the electromagnetic analysis of coupled cavities and on the coupling of cavities to waveguides
Computational strong-field quantum dynamics. Intense light-matter interactions
Energy Technology Data Exchange (ETDEWEB)
Bauer, Dieter (ed.) [Rostock Univ. (Germany). Inst. fuer Physik]
2017-09-01
This graduate textbook introduces the computational techniques to study ultra-fast quantum dynamics of matter exposed to strong laser fields. Coverage includes methods to propagate wavefunctions according to the time dependent Schroedinger, Klein-Gordon or Dirac equation, the calculation of typical observables, time-dependent density functional theory, multi configurational time-dependent Hartree-Fock, time-dependent configuration interaction singles, the strong-field approximation, and the microscopic particle-in-cell approach.
Computational strong-field quantum dynamics. Intense light-matter interactions
International Nuclear Information System (INIS)
Bauer, Dieter
2017-01-01
This graduate textbook introduces the computational techniques to study ultra-fast quantum dynamics of matter exposed to strong laser fields. Coverage includes methods to propagate wavefunctions according to the time dependent Schroedinger, Klein-Gordon or Dirac equation, the calculation of typical observables, time-dependent density functional theory, multi configurational time-dependent Hartree-Fock, time-dependent configuration interaction singles, the strong-field approximation, and the microscopic particle-in-cell approach.
Effects of Force Field Selection on the Computational Ranking of MOFs for CO2 Separations.
Dokur, Derya; Keskin, Seda
2018-02-14
Metal-organic frameworks (MOFs) have been considered as highly promising materials for adsorption-based CO2 separations. The number of synthesized MOFs has been increasing very rapidly. High-throughput molecular simulations are very useful to screen large numbers of MOFs in order to identify the most promising adsorbents prior to extensive experimental studies. Results of molecular simulations depend on the force field used to define the interactions between gas molecules and MOFs. Choosing the appropriate force field for MOFs is essential to make reliable predictions about the materials' performance. In this work, we performed two sets of molecular simulations using the two widely used generic force fields, Dreiding and UFF, and obtained adsorption data of CO2/H2, CO2/N2, and CO2/CH4 mixtures in 100 different MOF structures. Using this adsorption data, several adsorbent evaluation metrics including selectivity, working capacity, sorbent selection parameter, and percent regenerability were computed for each MOF. MOFs were then ranked based on these evaluation metrics, and top performing materials were identified. We then examined the sensitivity of the MOF rankings to the force field type. Our results showed that although there are significant quantitative differences between some adsorbent evaluation metrics computed using different force fields, rankings of the top MOF adsorbents for CO2 separations are generally similar: 8, 8, and 9 out of the top 10 most selective MOFs were found to be identical in the rankings for CO2/H2, CO2/N2, and CO2/CH4 separations using Dreiding and UFF. We finally suggested a force field factor depending on the energy parameters of atoms present in the MOFs to quantify the robustness of the simulation results to the force field selection. This easily computable factor will be highly useful to determine whether the results are sensitive to the force field type or not prior to performing computationally demanding molecular simulations.
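The adsorbent evaluation metrics named above can be sketched for a binary mixture; the definitions below follow common usage in the MOF screening literature (the paper's exact definitions and conditions may differ), and the loadings in the example are invented for illustration:

```python
def adsorbent_metrics(q_ads, q_des, y):
    """Adsorbent evaluation metrics for a binary mixture.

    q_ads, q_des: (q1, q2) component loadings (e.g. mol/kg) at the
    adsorption and desorption conditions; y: (y1, y2) bulk mole fractions.
    Definitions follow common screening-literature usage (assumption).
    """
    s_ads = (q_ads[0] / q_ads[1]) / (y[0] / y[1])  # adsorption selectivity
    s_des = (q_des[0] / q_des[1]) / (y[0] / y[1])  # desorption selectivity
    dn1 = q_ads[0] - q_des[0]                      # working capacity, gas 1
    dn2 = q_ads[1] - q_des[1]                      # working capacity, gas 2
    ssp = (s_ads ** 2 / s_des) * (dn1 / dn2)       # sorbent selection parameter
    r_pct = 100.0 * dn1 / q_ads[0]                 # percent regenerability
    return s_ads, ssp, r_pct

# hypothetical loadings for an equimolar mixture (illustrative numbers only)
sel, ssp, r = adsorbent_metrics((2.0, 1.0), (0.5, 0.8), (0.5, 0.5))
```

Ranking 100 MOFs under each force field then reduces to computing these numbers twice per material and comparing the resulting orderings.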
Combined tangential-normal vector elements for computing electric and magnetic fields
International Nuclear Information System (INIS)
Sachdev, S.; Cendes, Z.J.
1993-01-01
A direct method for computing electric and magnetic fields in two dimensions is developed. This method determines both the fields and fluxes directly from Maxwell's curl and divergence equations without introducing potential functions. This allows both the curl and the divergence of the field to be set independently in all elements. The technique is based on a new type of vector finite element that simultaneously interpolates to the tangential component of the electric or the magnetic field and the normal component of the electric or magnetic flux. Continuity conditions are imposed across element edges simply by setting like variables to be the same across element edges. This guarantees the continuity of the field and flux at the mid-point of each edge and that for all edges the average value of the tangential component of the field and of the normal component of the flux is identical
Glushkov, A. V.; Gurskaya, M. Yu; Ignatenko, A. V.; Smirnov, A. V.; Serga, I. N.; Svinarenko, A. A.; Ternovsky, E. V.
2017-10-01
The consistent relativistic energy approach to finite Fermi systems (atoms and nuclei) in a strong realistic laser field is presented and applied to computing the multiphoton resonance parameters of some atoms and nuclei. The approach is based on the Gell-Mann and Low S-matrix formalism, the multiphoton resonance lines moments technique and the advanced Ivanov-Ivanova algorithm for calculating the Green's function of the Dirac equation. Data for the multiphoton resonance width and shift of the Cs atom and the 57Fe nucleus as functions of the laser intensity are listed.
Bolemon, Jay S.; Etzold, David J.
1974-01-01
Discusses the use of a small computer to solve self-consistent field problems of one-dimensional systems of two or more interacting particles in an elementary quantum mechanics course. Indicates that the calculation can serve as a useful introduction to the iterative technique. (CC)
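The iterative self-consistent field technique referred to above can be sketched for two identical particles in a 1D harmonic trap coupled by a contact (Hartree-type) mean field; the grid, trap, and coupling strength are illustrative choices, not taken from the article:

```python
import numpy as np

# 1D grid and harmonic trap (hbar = m = omega = 1); all values illustrative
n, L = 201, 10.0
x = np.linspace(-L / 2, L / 2, n)
dx = x[1] - x[0]
v_ext = 0.5 * x ** 2
g = 1.0  # strength of the contact mean-field interaction (assumed)

def ground_state(v):
    """Lowest eigenpair of -(1/2) d^2/dx^2 + v via finite differences."""
    h = (np.diag(1.0 / dx ** 2 + v)
         + np.diag(-0.5 / dx ** 2 * np.ones(n - 1), 1)
         + np.diag(-0.5 / dx ** 2 * np.ones(n - 1), -1))
    e, vecs = np.linalg.eigh(h)
    return e[0], vecs[:, 0] / np.sqrt(dx)  # normalize: sum(psi**2)*dx = 1

# SCF loop: each particle moves in the mean field of its partner
v_h = np.zeros(n)
eps_old = None
for _ in range(200):
    eps, psi = ground_state(v_ext + v_h)
    v_h = 0.5 * v_h + 0.5 * g * psi ** 2  # damped update for stability
    if eps_old is not None and abs(eps - eps_old) < 1e-10:
        break
    eps_old = eps
```

The damped update is the standard remedy for the oscillation that can plague naive SCF iteration; watching `eps` settle from the bare-trap value toward the self-consistent one is exactly the pedagogical point the abstract describes.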
Computation of magnetic fields within source regions of ionospheric and magnetospheric currents
DEFF Research Database (Denmark)
Engels, U.; Olsen, Nils
1998-01-01
A general method of computing the magnetic effect caused by a predetermined three-dimensional external current density is presented. It takes advantage of the representation of solenoidal vector fields in terms of toroidal and poloidal modes expressed by two independent series of spherical harmon...
Computer-based measurement and automatization application research in nuclear technology fields
International Nuclear Information System (INIS)
Jiang Hongfei; Zhang Xiangyang
2003-01-01
This paper introduces computer-based measurement and automatization application research in nuclear technology fields. Emphasis is placed on the role of software in system development and on a network-based measurement and control software model with promising application prospects. Application examples of the research and development are also presented. (authors)
International Nuclear Information System (INIS)
Fuentes, N.O.; Sakanaka, P.H.
1990-01-01
Field-reversed configuration equilibria are studied by solving the Grad-Shafranov equation. A multiple coil system (main coil and end mirrors) is considered to simulate the coil geometry of the CNEA device. First results are presented for computed two-dimensional FRC equilibria produced by varying the mirror coil current with two different mirror lengths. (Author)
Computer codes for tasks in the fields of isotope and radiation research
International Nuclear Information System (INIS)
Friedrich, K.; Gebhardt, O.
1978-11-01
Concise descriptions of computer codes developed for solving problems in the fields of isotope and radiation research at the Zentralinstitut fuer Isotopen- und Strahlenforschung (ZfI) are compiled. In part two the structure of the ZfI program library MABIF is outlined and a complete list of all codes available is given
Dominguez, Margaret Z.; Content, David A.; Gong, Qian; Griesmann, Ulf; Hagopian, John G.; Marx, Catherine T; Whipple, Arthur L.
2017-01-01
Infrared Computer Generated Holograms (CGHs) were designed, manufactured and used to measure the performance of the grism (grating prism) prototype which includes testing Diffractive Optical Elements (DOE). The grism in the Wide Field Infrared Survey Telescope (WFIRST) will allow the surveying of a large section of the sky to find bright galaxies.
Three-dimensional magnetic field computation on a distributed memory parallel processor
International Nuclear Information System (INIS)
Barion, M.L.
1990-01-01
The analysis of three-dimensional magnetic fields by finite element methods frequently proves too onerous a task for the computing resource on which it is attempted. When non-linear and transient effects are included, it may become impossible to calculate the field distribution to sufficient resolution. One approach to this problem is to exploit the natural parallelism in the finite element method via parallel processing. This paper reports on an implementation of a finite element code for non-linear three-dimensional low-frequency magnetic field calculation on Intel's iPSC/2
Computing Galois Groups of Eisenstein Polynomials Over P-adic Fields
Milstead, Jonathan
The most efficient algorithms for computing Galois groups of polynomials over global fields are based on Stauduhar's relative resolvent method. These methods are not directly generalizable to the local field case, since they require a field that contains the global field in which all roots of the polynomial can be approximated. We present splitting-field-independent methods for computing the Galois group of an Eisenstein polynomial over a p-adic field. Our approach is to combine information from different disciplines. First, we make use of the ramification polygon of the polynomial, which is the Newton polygon of a related polynomial. This allows us to quickly calculate several invariants that serve to reduce the number of possible Galois groups. Algorithms by Greve and Pauli very efficiently return the Galois group of polynomials where the ramification polygon consists of one segment, as well as information about the subfields of the stem field. Second, we look at the factorization of linear absolute resolvents to further narrow the pool of possible groups.
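The ramification polygon used above is a Newton polygon, i.e. the lower convex hull of the points (i, v(a_i)) built from the p-adic valuations of the coefficients. A minimal sketch of that hull computation for integer coefficients (the example polynomial is ours, not from the thesis):

```python
def p_val(m, p):
    """p-adic valuation of a nonzero integer."""
    v = 0
    while m % p == 0:
        m //= p
        v += 1
    return v

def newton_polygon(coeffs, p):
    """Lower convex hull of {(i, v_p(a_i)) : a_i != 0}.

    coeffs: dict mapping degree -> integer coefficient.
    Returns hull vertices left to right (Andrew's monotone chain).
    """
    pts = sorted((i, p_val(a, p)) for i, a in coeffs.items() if a != 0)
    hull = []
    for q in pts:
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            # pop while the middle point is not strictly below the chord
            if (x2 - x1) * (q[1] - y1) - (y2 - y1) * (q[0] - x1) <= 0:
                hull.pop()
            else:
                break
        hull.append(q)
    return hull

# x^4 + 2x + 2 over Q_2: a single segment of slope -1/4 (Eisenstein)
vertices = newton_polygon({0: 2, 1: 2, 4: 1}, 2)
```

The segment slopes and lengths read off from such a hull are the invariants that cut down the list of candidate Galois groups.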
Quasistatic zooming of FDTD E-field computations: the impact of down-scaling techniques
Energy Technology Data Exchange (ETDEWEB)
Van de Kamer, J.B.; Kroeze, H.; De Leeuw, A.A.C.; Lagendijk, J.J.W. [Department of Radiotherapy, University Medical Center Utrecht, Heidelberglaan 100, 3584 CX, Utrecht (Netherlands)
2001-05-01
Due to current computer limitations, regional hyperthermia treatment planning (HTP) is practically limited to a resolution of 1 cm, whereas a millimetre resolution is desired. Using the centimetre resolution E-vector-field distribution, computed with, for example, the finite-difference time-domain (FDTD) method, and the millimetre resolution patient anatomy, it is possible to obtain a millimetre resolution SAR distribution in a volume of interest (VOI) by means of quasistatic zooming. To compute the required low-resolution E-vector-field distribution, a low-resolution dielectric geometry is needed, which is constructed by down-scaling the millimetre resolution dielectric geometry. In this study we have investigated which down-scaling technique results in a dielectric geometry that yields the best low-resolution E-vector-field distribution as input for quasistatic zooming. A segmented 2 mm resolution CT data set of a patient has been down-scaled to 1 cm resolution using three different techniques: 'winner-takes-all', 'volumetric averaging' and 'anisotropic volumetric averaging'. The E-vector-field distributions computed for those low-resolution dielectric geometries have been used as input for quasistatic zooming. The resulting zoomed-resolution SAR distributions were compared with a reference: the 2 mm resolution SAR distribution computed with the FDTD method. For both a simple phantom and the complex partial patient geometry, down-scaling using 'anisotropic volumetric averaging' resulted in zoomed-resolution SAR distributions that best approximate the corresponding high-resolution SAR distribution (correlations of 97% and 96%, and absolute averaged differences of 6% and 14%, respectively). (author)
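The first two down-scaling schemes can be sketched on a labelled voxel grid with numpy; the 'anisotropic' variant, which treats the averaging direction-dependently, is omitted, and the toy array contents are illustrative only:

```python
import numpy as np

def winner_takes_all(labels, f):
    """Down-scale an integer tissue-label grid by factor f per axis:
    each coarse voxel takes the most frequent fine-voxel label."""
    nx, ny, nz = (s // f for s in labels.shape)
    blocks = (labels[:nx * f, :ny * f, :nz * f]
              .reshape(nx, f, ny, f, nz, f)
              .transpose(0, 2, 4, 1, 3, 5)
              .reshape(nx, ny, nz, f ** 3))
    out = np.empty((nx, ny, nz), dtype=labels.dtype)
    for idx in np.ndindex(nx, ny, nz):
        vals, counts = np.unique(blocks[idx], return_counts=True)
        out[idx] = vals[np.argmax(counts)]
    return out

def volumetric_average(props, f):
    """Down-scale a dielectric-property grid by factor f per axis:
    each coarse voxel is the mean of its f^3 fine voxels."""
    nx, ny, nz = (s // f for s in props.shape)
    return (props[:nx * f, :ny * f, :nz * f]
            .reshape(nx, f, ny, f, nz, f)
            .mean(axis=(1, 3, 5)))

# toy 4x4x4 grid: one 2x2x2 corner block is mostly tissue label 1
labels = np.zeros((4, 4, 4), dtype=int)
labels[:2, :2, :2] = 1
labels[0, 0, 0] = 0
coarse = winner_takes_all(labels, 2)
mean_props = volumetric_average(labels.astype(float), 2)
```

Winner-takes-all keeps labels crisp (useful for assigning tabulated tissue properties), while volumetric averaging blends the dielectric values of mixed blocks, which is the behaviour compared in the study.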
Accurate quantum chemical calculations
Bauschlicher, Charles W., Jr.; Langhoff, Stephen R.; Taylor, Peter R.
1989-01-01
An important goal of quantum chemical calculations is to provide an understanding of chemical bonding and molecular electronic structure. A second goal, the prediction of energy differences to chemical accuracy, has been much harder to attain. First, the computational resources required to achieve such accuracy are very large, and second, it is not straightforward to demonstrate that an apparently accurate result, in terms of agreement with experiment, does not result from a cancellation of errors. Recent advances in electronic structure methodology, coupled with the power of vector supercomputers, have made it possible to solve a number of electronic structure problems exactly using the full configuration interaction (FCI) method within a subspace of the complete Hilbert space. These exact results can be used to benchmark approximate techniques that are applicable to a wider range of chemical and physical problems. The methodology of many-electron quantum chemistry is reviewed. Methods are considered in detail for performing FCI calculations. The application of FCI methods to several three-electron problems in molecular physics are discussed. A number of benchmark applications of FCI wave functions are described. Atomic basis sets and the development of improved methods for handling very large basis sets are discussed: these are then applied to a number of chemical and spectroscopic problems; to transition metals; and to problems involving potential energy surfaces. Although the experiences described give considerable grounds for optimism about the general ability to perform accurate calculations, there are several problems that have proved less tractable, at least with current computer resources, and these and possible solutions are discussed.
Harbusch, Karin; Itsova, Gergana; Koch, Ulrich; Kuhner, Christine
2009-01-01
We built a natural language processing (NLP) system implementing a "virtual writing conference" for elementary-school children, with German as the target language. Currently, state-of-the-art computer support for writing tasks is restricted to multiple-choice questions or quizzes because automatic parsing of the often ambiguous and fragmentary…
Ketchum, Eleanor A. (Inventor)
2000-01-01
A computer-implemented method and apparatus for determining the position of a vehicle within 100 km autonomously from magnetic field measurements and attitude data, without a priori knowledge of position. For each measurement of magnetic field data, an inverted dipole solution yielding two possible positions is deterministically calculated by a program-controlled processor solving the inverted first-order spherical harmonic representation of the geomagnetic field for two unit position vectors 180 degrees apart and a vehicle distance from the center of the earth. Correction schemes such as successive substitution and a Newton-Raphson method are applied to each dipole. The two position solutions for each measurement are saved separately. Velocity vectors for the position solutions are calculated so that a total energy difference for each of the two resultant position paths is computed. The position path with the smaller absolute total energy difference is chosen as the true position path of the vehicle.
Cone Beam Computed Tomography (CBCT) in the Field of Interventional Oncology of the Liver
Energy Technology Data Exchange (ETDEWEB)
Bapst, Blanche, E-mail: blanchebapst@hotmail.com; Lagadec, Matthieu, E-mail: matthieu.lagadec@bjn.aphp.fr [Beaujon Hospital, University Hospitals Paris Nord Val de Seine, Beaujon, Department of Radiology (France); Breguet, Romain, E-mail: romain.breguet@hcuge.ch [University Hospital of Geneva (Switzerland); Vilgrain, Valérie, E-mail: Valerie.vilgrain@bjn.aphp.fr; Ronot, Maxime, E-mail: maxime.ronot@bjn.aphp.fr [Beaujon Hospital, University Hospitals Paris Nord Val de Seine, Beaujon, Department of Radiology (France)
2016-01-15
Cone beam computed tomography (CBCT) is an imaging modality that provides computed tomographic images using a rotational C-arm equipped with a flat panel detector as part of the Angiography suite. The aim of this technique is to provide additional information to conventional 2D imaging to improve the performance of interventional liver oncology procedures (intraarterial treatments such as chemoembolization or selective internal radiation therapy, and percutaneous tumor ablation). CBCT provides accurate tumor detection and targeting, periprocedural guidance, and post-procedural evaluation of treatment success. This technique can be performed during intraarterial or intravenous contrast agent administration with various acquisition protocols to highlight liver tumors, liver vessels, or the liver parenchyma. The purpose of this review is to present an extensive overview of published data on CBCT in interventional oncology of the liver, for both percutaneous ablation and intraarterial procedures.
Computer model of copper resistivity will improve the efficiency of field-compression devices
International Nuclear Information System (INIS)
Burgess, T.J.
1977-01-01
By detonating a ring of high explosive around an existing magnetic field, we can, under certain conditions, compress the field and multiply its strength tremendously. In this way, we can duplicate for a fraction of a second the extreme pressures that normally exist only in the interior of stars and planets. Under such pressures, materials may exhibit behavior that will confirm or alter current notions about the fundamental structure of matter and the ongoing processes in planetary interiors. However, we cannot design an efficient field-compression device unless we can calculate the electrical resistivity of certain basic metal components, which interact with the field. To aid in the design effort, we have developed a computer code that calculates the resistivity of copper and other metals over the wide range of temperatures and pressures found in a field-compression device
Fukushima, Toshio
2017-06-01
Reviewed are recently developed methods of the numerical integration of the gravitational field of general two- or three-dimensional bodies with arbitrary shape and mass density distribution: (i) an axisymmetric infinitely-thin disc (Fukushima 2016a, MNRAS, 456, 3702), (ii) a general infinitely-thin plate (Fukushima 2016b, MNRAS, 459, 3825), (iii) a plane-symmetric and axisymmetric ring-like object (Fukushima 2016c, AJ, 152, 35), (iv) an axisymmetric thick disc (Fukushima 2016d, MNRAS, 462, 2138), and (v) a general three-dimensional body (Fukushima 2016e, MNRAS, 463, 1500). The key techniques employed are (a) the split quadrature method using the double exponential rule (Takahashi and Mori, 1973, Numer. Math., 21, 206), (b) the precise and fast computation of complete elliptic integrals (Fukushima 2015, J. Comp. Appl. Math., 282, 71), (c) Ridder's algorithm of numerical differentiation (Ridder 1982, Adv. Eng. Softw., 4, 75), (d) the recursive computation of the zonal toroidal harmonics, and (e) the integration variable transformation to the local spherical polar coordinates. These devices successfully regularize the Newton kernel in the integrands so as to provide accurate integral values. For example, the general 3D potential is regularly integrated as \Phi(\vec{x}) = -G \int_0^\infty \left( \int_{-1}^{1} \left( \int_0^{2\pi} \rho(\vec{x}+\vec{q})\, d\psi \right) d\gamma \right) q\, dq, where \vec{q} = q(\sqrt{1-\gamma^2} \cos\psi, \sqrt{1-\gamma^2} \sin\psi, \gamma) is the relative position vector referred to \vec{x}, the position vector at which the potential is evaluated. As a result, the new methods can compute the potential and acceleration vector very accurately. In fact, the axisymmetric integration reproduces the Miyamoto-Nagai potential with 14 correct digits. The developed methods are applied to the gravitational field study of galaxies and protoplanetary discs. Among them, the investigation on the rotation curve of M33 supports a disc-like structure of the dark matter with a double-power-law surface density.
Karbeyaz, Başak Ulker; Miller, Eric L; Cleveland, Robin O
2007-02-01
Conventional ultrasound transducers used for medical diagnosis generally consist of linearly aligned rectangular apertures with elements that are focused in one plane. While traditional beamforming is easily accomplished with such transducers, the development of quantitative, physics-based imaging methods, such as tomography, requires an accurate, and computationally efficient, model of the field radiated by the transducer. The field can be expressed in terms of the Helmholtz-Kirchhoff integral; however, its direct numerical evaluation is a computationally intensive task. Here, a fast semianalytical method based on Stepanishen's spatial impulse response formulation [J. Acoust. Soc. Am. 49, 1627-1638 (1971)] is developed to compute the acoustic field of a rectangular element of cylindrically concave transducers in a homogeneous medium. The pressure field, for lossless and attenuating media, is expressed as a superposition of Bessel functions, which can be evaluated rapidly. In particular, the coefficients of the Bessel series are frequency independent and need only be evaluated once for a given transducer. A speed-up of two orders of magnitude is obtained compared to an optimized direct numerical integration. The numerical results are compared with Field II and the Fresnel approximation.
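For context, the direct numerical evaluation that the semianalytical method accelerates amounts to integrating a Rayleigh (Helmholtz-Kirchhoff) kernel over the aperture. A minimal sketch for a circular piston, whose on-axis field has a closed form that serves as a check (the geometry and wavenumber are arbitrary choices, and the rectangular concave element of the paper is not reproduced here):

```python
import numpy as np

def rayleigh_on_axis(a, k, z, n=400):
    """On-axis field of a uniform circular piston of radius a and
    wavenumber k: midpoint-rule evaluation over concentric rings of
    p(z) ~ integral of exp(-j*k*R)/R over the aperture surface."""
    r = (np.arange(n) + 0.5) * (a / n)  # ring radii (midpoint rule)
    dr = a / n
    R = np.sqrt(r ** 2 + z ** 2)        # source-to-field distances
    return np.sum(np.exp(-1j * k * R) / R * 2.0 * np.pi * r * dr)

# closed form of the same integral: (2*pi/(1j*k)) * (e^{-jkz} - e^{-jkRa})
a, k, z = 0.01, 2 * np.pi / 1.5e-3, 0.05
Ra = np.hypot(a, z)
exact = 2 * np.pi / (1j * k) * (np.exp(-1j * k * z) - np.exp(-1j * k * Ra))
num = rayleigh_on_axis(a, k, z)
```

For a full 2D aperture and many field points this brute-force summation is what becomes "computationally intensive", motivating the Bessel-series approach of the paper.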
Energy Technology Data Exchange (ETDEWEB)
Walstrom, Peter Lowell [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-08-24
A numerical algorithm for computing the field components B_{r} and B_{z} and their r and z derivatives with open boundaries in cylindrical coordinates for circular current loops is described. An algorithm for computing the vector potential is also described. For the convenience of the reader, derivations of the final expressions from their defining integrals are given in detail, since their derivations (especially for the field derivatives) are not all easily found in textbooks. Numerical calculations are based on evaluation of complete elliptic integrals using the Bulirsch algorithm cel. Since cel can evaluate complete elliptic integrals of a fairly general type, in some cases the elliptic integrals can be evaluated without first reducing them to forms containing standard Legendre forms. The algorithms avoid the numerical difficulties that many of the textbook solutions have for points near the axis because of explicit factors of 1/r or 1/r^{2} in some of the expressions.
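The textbook Legendre-form expressions that such a cel-based algorithm improves upon can be sketched with SciPy's complete elliptic integrals; note this sketch uses scipy.special.ellipk/ellipe (standard Legendre forms) rather than the Bulirsch cel routine of the report, and it retains the 1/r-singular form near the axis that the report is designed to avoid:

```python
import numpy as np
from scipy.special import ellipk, ellipe

MU0 = 4e-7 * np.pi  # vacuum permeability

def loop_field(a, I, r, z):
    """B_r, B_z (tesla) of a circular loop of radius a carrying current I,
    at cylindrical point (r, z), via complete elliptic integrals K(m),
    E(m) with parameter m = k^2 (textbook Legendre-form expressions)."""
    if r == 0.0:  # analytic on-axis limit avoids the 1/r factor below
        return 0.0, MU0 * I * a ** 2 / (2.0 * (a ** 2 + z ** 2) ** 1.5)
    q = (a + r) ** 2 + z ** 2
    d = (a - r) ** 2 + z ** 2
    m = 4.0 * a * r / q
    K, E = ellipk(m), ellipe(m)
    Bz = MU0 * I / (2.0 * np.pi * np.sqrt(q)) * (
        K + (a ** 2 - r ** 2 - z ** 2) / d * E)
    Br = MU0 * I * z / (2.0 * np.pi * r * np.sqrt(q)) * (
        -K + (a ** 2 + r ** 2 + z ** 2) / d * E)
    return Br, Bz

# near-axis value should agree with the closed-form on-axis field
br0, bz0 = loop_field(1.0, 1.0, 0.0, 0.3)
_, bz_near = loop_field(1.0, 1.0, 1e-6, 0.3)
```

Near r = 0 the Br expression divides a vanishing bracket by a vanishing r, precisely the kind of cancellation the report's formulation eliminates.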
Burrelli, Joan S.
This brief describes graduate enrollment increases in the science and engineering fields, especially in engineering and computer sciences. Graduate student enrollment is summarized by enrollment status, citizenship, race/ethnicity, and fields. (KHR)
TBCI and URMEL - New computer codes for wake field and cavity mode calculations
International Nuclear Information System (INIS)
Weiland, T.
1983-01-01
Wake force computation is important for any study of instabilities in high-current accelerators and storage rings. These forces are generated by intense bunches of charged particles passing cylindrically symmetric structures on or off axis. The appropriate method for computing such forces is the time-domain approach. The computer code TBCI computes, for relativistic as well as nonrelativistic bunches of arbitrary shape, longitudinal and transverse wake forces up to the octupole component. TBCI is not limited to cavity-like objects and is thus applicable to bellows, beam pipes with varying cross sections, and any other nonresonant structures. For the accelerating cavities one also needs to know the resonant modes and frequencies for the study of instabilities and mode couplers. The complementary code named URMEL computes these fields for any azimuthal dependence of the fields in ascending order. The mathematical procedure being used is very safe and does not miss modes. Both codes together represent a unique tool for accelerator design and are easy to use
A program for computing cohomology of Lie superalgebras of vector fields
International Nuclear Information System (INIS)
Kornyak, V.V.
1998-01-01
An algorithm and its C implementation for computing the cohomology of Lie algebras and superalgebras is described. When elaborating the algorithm we paid primary attention to cohomology in trivial, adjoint and coadjoint modules for Lie algebras and superalgebras of the formal vector fields. These algebras have found many applications to modern supersymmetric models of theoretical and mathematical physics. As an example, we present 3- and 5-cocycles from the cohomology in the trivial module for the Poisson algebra Po (2), as found by computer
Jaschinski, Wolfgang; König, Mirjam; Mekontso, Tiofil M; Ohlendorf, Arne; Welscher, Monique
2015-05-01
Two types of progressive addition lenses (PALs) were compared in an office field study: 1. General purpose PALs with continuous clear vision between infinity and near reading distances and 2. Computer vision PALs with a wider zone of clear vision at the monitor and in near vision but no clear distance vision. Twenty-three presbyopic participants wore each type of lens for two weeks in a double-masked four-week quasi-experimental procedure that included an adaptation phase (Weeks 1 and 2) and a test phase (Weeks 3 and 4). Questionnaires on visual and musculoskeletal conditions as well as preferences regarding the type of lenses were administered. After eight more weeks of free use of the spectacles, the preferences were assessed again. The ergonomic conditions were analysed from photographs. Head inclination when looking at the monitor was significantly lower by 2.3 degrees with the computer vision PALs than with the general purpose PALs. Vision at the monitor was judged significantly better with computer PALs, while distance vision was judged better with general purpose PALs; however, the reported advantage of computer vision PALs differed in extent between participants. Accordingly, 61 per cent of the participants preferred the computer vision PALs, when asked without information about lens design. After full information about lens characteristics and additional eight weeks of free spectacle use, 44 per cent preferred the computer vision PALs. On average, computer vision PALs were rated significantly better with respect to vision at the monitor during the experimental part of the study. In the final forced-choice ratings, approximately half of the participants preferred either the computer vision PAL or the general purpose PAL. Individual factors seem to play a role in this preference and in the rated advantage of computer vision PALs. © 2015 The Authors. Clinical and Experimental Optometry © 2015 Optometry Australia.
The accurate particle tracer code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi; Yao, Yicun
2017-11-01
The Accurate Particle Tracer (APT) code is designed for systematic large-scale applications of geometric algorithms for particle dynamical simulations. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and nonlinear problems. To provide a flexible and convenient I/O interface, the libraries of Lua and Hdf5 are used. Following a three-step procedure, users can efficiently extend the libraries of electromagnetic configurations, external non-electromagnetic forces, particle pushers, and initialization approaches by use of the extendible module. APT has been used in simulations of key physical problems, such as runaway electrons in tokamaks and energetic particles in the Van Allen belt. As an important realization, the APT-SW version has been successfully distributed on the world's fastest computer, the Sunway TaihuLight supercomputer, by supporting the master-slave architecture of Sunway many-core processors. Based on large-scale simulations of a runaway beam under parameters of the ITER tokamak, it is revealed that the magnetic ripple field can disperse the pitch-angle distribution significantly and, at the same time, improve the confinement of the energetic runaway beam.
Auxiliary fields as a tool for computing analytical solutions of the Schroedinger equation
International Nuclear Information System (INIS)
Silvestre-Brac, Bernard; Semay, Claude; Buisseret, Fabien
2008-01-01
We propose a new method to obtain approximate solutions for the Schroedinger equation with an arbitrary potential that possesses bound states. This method, relying on the auxiliary field technique, allows one to find analytical solutions in many cases. It offers a convenient way to study the qualitative features of the energy spectrum of bound states in any potential. In particular, we illustrate our method by solving the case of central potentials with power-law form and with logarithmic form. For these types of potentials, we propose very accurate analytical energy formulae which greatly improve the corresponding formulae that can be found in the literature
Auxiliary fields as a tool for computing analytical solutions of the Schroedinger equation
Energy Technology Data Exchange (ETDEWEB)
Silvestre-Brac, Bernard [LPSC Universite Joseph Fourier, Grenoble 1, CNRS/IN2P3, Institut Polytechnique de Grenoble, Avenue des Martyrs 53, F-38026 Grenoble-Cedex (France); Semay, Claude; Buisseret, Fabien [Groupe de Physique Nucleaire Theorique, Universite de Mons-Hainaut, Academie universitaire Wallonie-Bruxelles, Place du Parc 20, B-7000 Mons (Belgium)], E-mail: silvestre@lpsc.in2p3.fr, E-mail: claude.semay@umh.ac.be, E-mail: fabien.buisseret@umh.ac.be
2008-07-11
We propose a new method to obtain approximate solutions for the Schroedinger equation with an arbitrary potential that possesses bound states. This method, relying on the auxiliary field technique, allows one to find analytical solutions in many cases. It offers a convenient way to study the qualitative features of the energy spectrum of bound states in any potential. In particular, we illustrate our method by solving the case of central potentials with power-law form and with logarithmic form. For these types of potentials, we propose very accurate analytical energy formulae which greatly improve the corresponding formulae that can be found in the literature.
Gautschi, Walter; Rassias, Themistocles M
2011-01-01
Approximation theory and numerical analysis are central to the creation of accurate computer simulations and mathematical models. Research in these areas can influence the computational techniques used in a variety of mathematical and computational sciences. This collection of contributed chapters, dedicated to renowned mathematician Gradimir V. Milovanović, represents the recent work of experts in the fields of approximation theory and numerical analysis. These invited contributions describe new trends in these important areas of research, including theoretic developments and new computational algorithms
Dudding-Byth, Tracy; Baxter, Anne; Holliday, Elizabeth G; Hackett, Anna; O'Donnell, Sheridan; White, Susan M; Attia, John; Brunner, Han; de Vries, Bert; Koolen, David; Kleefstra, Tjitske; Ratwatte, Seshika; Riveros, Carlos; Brain, Steve; Lovell, Brian C
2017-12-19
Massively parallel genetic sequencing allows rapid testing of known intellectual disability (ID) genes. However, the discovery of novel syndromic ID genes requires molecular confirmation in at least a second or a cluster of individuals with an overlapping phenotype or similar facial gestalt. Using computer face-matching technology we report an automated approach to matching the faces of non-identical individuals with the same genetic syndrome within a database of 3681 images [1600 images of one of 10 genetic syndrome subgroups together with 2081 control images]. Using the leave-one-out method, two research questions were specified: 1) Using two-dimensional (2D) photographs of individuals with one of 10 genetic syndromes within a database of images, did the technology correctly identify more than expected by chance: i) a top match? ii) at least one match within the top five matches? or iii) at least one in the top 10 with an individual from the same syndrome subgroup? 2) Was there concordance between correct technology-based matches and whether two out of three clinical geneticists would have considered the diagnosis based on the image alone? The computer face-matching technology correctly identified a top match, at least one correct match in the top five, and at least one in the top 10 more often than expected by chance for all syndromes except Kabuki syndrome. Although the accuracy of the computer face-matching technology was tested on images of individuals with known syndromic forms of intellectual disability, the results of this pilot study illustrate the potential utility of face-matching technology within deep phenotyping platforms to facilitate the interpretation of DNA sequencing data for individuals who remain undiagnosed despite testing the known developmental disorder genes.
Computational physics an introduction to Monte Carlo simulations of matrix field theory
Ydri, Badis
2017-01-01
This book is divided into two parts. In the first part we give an elementary introduction to computational physics consisting of 21 simulations which originated from a formal course of lectures and laboratory simulations delivered since 2010 to physics students at Annaba University. The second part is much more advanced and deals with the problem of how to set up working Monte Carlo simulations of matrix field theories, which involve finite-dimensional matrix regularizations of noncommutative and fuzzy field theories, fuzzy spaces and matrix geometry. The study of matrix field theory in its own right has also become very important to the proper understanding of all noncommutative, fuzzy and matrix phenomena. The second part, which consists of 9 simulations, was delivered informally to doctoral students who are working on various problems in matrix field theory. Sample codes as well as sample key solutions are also provided for convenience and completeness. An appendix containing an executive Arabic summary of t...
Computation of 3-D magnetostatic fields using a reduced scalar potential
International Nuclear Information System (INIS)
Biro, O.; Preis, K.; Vrisk, G.; Richter, K.R.
1993-01-01
The paper presents some improvements to the finite element computation of static magnetic fields in three dimensions using a reduced magnetic scalar potential. New methods are described for obtaining an edge element representation of the rotational part of the magnetic field from a given source current distribution. In the case when the current distribution is not known in advance, a boundary value problem is set up in terms of a current vector potential. An edge element representation of the solution can be directly used in the subsequent magnetostatic calculation. The magnetic field in a D.C. arc furnace is calculated by first determining the current distribution in terms of a current vector potential. A three-dimensional problem involving a permanent magnet as well as a coil is solved, and the magnetic field at several points is compared with measurement results.
Hada, M.; Rhone, J.; Beitman, A.; Saganti, P.; Plante, I.; Ponomarev, A.; Slaba, T.; Patel, Z.
2018-01-01
The yield of chromosomal aberrations has been shown to increase in the lymphocytes of astronauts after long-duration missions of several months in space. Chromosome exchanges, especially translocations, are positively correlated with many cancers and are therefore a potential biomarker of cancer risk associated with radiation exposure. Although extensive studies have been carried out on the induction of chromosomal aberrations by low- and high-LET radiation in human lymphocytes, fibroblasts, and epithelial cells exposed in vitro, there is a lack of data on chromosome aberrations induced by low-dose-rate chronic exposure and by mixed field beams such as those expected in space. Chromosome aberration studies at NSRL will provide the biological validation needed to extend the computational models over a broader range of experimental conditions (more complicated mixed fields leading up to the galactic cosmic ray (GCR) simulator), helping to reduce uncertainties in radiation quality effects and dose-rate dependence in cancer risk models. These models can then be used to answer some of the open questions regarding requirements for a full GCR reference field, including particle type and number, energy, dose rate, and delivery order. In this study, we designed a simplified mixed field beam with a combination of proton, helium, oxygen, and iron ions with shielding, or proton, helium, oxygen, and titanium without shielding. Human fibroblast cells were irradiated with these mixed field beams, as well as with each single beam, at acute and chronic dose rates, and chromosome aberrations (CA) were measured with 3-color fluorescence in situ hybridization (FISH) chromosome painting methods. The frequency and types of CA induced at acute and chronic dose rates with single and mixed field beams will be discussed. A computational chromosome and radiation-induced DNA damage model, BDSTRACKS (Biological Damage by Stochastic Tracks), was updated to simulate various types of CA induced by
DEFF Research Database (Denmark)
Cappellin, Cecilia; Breinbjerg, Olav; Frandsen, Aksel
2008-01-01
An effective technique for extracting the singularity of plane wave spectra in the computation of antenna aperture fields is proposed. The singular spectrum is first factorized into a product of a finite function and a singular function. The finite function is inverse Fourier transformed numerically using the Inverse Fast Fourier Transform, while the singular function is inverse Fourier transformed analytically, using the Weyl identity, and the two resulting spatial functions are then convolved to produce the antenna aperture field. This article formulates the theory of the singularity...
Parallel computation of electrostatic potentials and fields in technical geometries on SUPRENUM
International Nuclear Information System (INIS)
Alef, M.
1990-02-01
The programs EPOTZR and EFLDZR have been developed to compute electrostatic potentials and the corresponding fields in technical geometries (for example, a diode geometry for optimum focusing of ion beams in pulsed high-current ion diodes). The Poisson equation is discretized on a two-dimensional boundary-fitted grid in the (r,z)-plane and solved using multigrid methods. The z- and r-components of the field are determined by numerical differentiation of the potential. This report contains the user's guide for the SUPRENUM versions EPOTZR-P and EFLDZR-P. (orig./HP)
Mittra, R.; Rushdi, A.
1979-01-01
An approach for computing the geometrical optics fields reflected from a numerically specified surface is presented. The approach begins by locating the specular point and computing the reflected rays off the surface at points where the coordinates, as well as the partial derivatives (or, equivalently, the direction of the normal), are numerically specified. Then, a cluster of three adjacent rays is chosen to define a 'mean ray' and the divergence factor associated with it. Finally, the amplitude, phase, and vector direction of the reflected field at a given observation point are obtained by associating that point with the nearest mean ray and determining its position relative to such a ray.
Energy Technology Data Exchange (ETDEWEB)
Calkins, Mathew; Gates, D.E.A.; Gates, S. James Jr. [Center for String and Particle Theory, Department of Physics, University of Maryland,College Park, MD 20742-4111 (United States); Golding, William M. [Sensors and Electron Devices Directorate, US Army Research Laboratory,Adelphi, Maryland 20783 (United States)
2015-04-13
Starting with valise supermultiplets obtained from 0-branes plus field redefinitions, valise adinkra networks, and the “Garden Algebra,” we discuss an architecture for algorithms that, starting from on-shell theories and proceeding through a well-defined computational procedure, search for off-shell completions. We show in one dimension how to directly attack the notorious “off-shell auxiliary field” problem of supersymmetry with algorithms in the adinkra network-world formulation.
Computational model for superconducting toroidal-field magnets for a tokamak reactor
International Nuclear Information System (INIS)
Turner, L.R.; Abdou, M.A.
1978-01-01
A computational model for predicting the performance characteristics and cost of superconducting toroidal-field (TF) magnets in tokamak reactors is presented. The model can be used to compare the technical and economic merits of different approaches to the design of TF magnets for a reactor system. The model has been integrated into the ANL Systems Analysis Program. Samples of results obtainable with the model are presented
Computation of Galois field expressions for quaternary logic functions on GPUs
Directory of Open Access Journals (Sweden)
Gajić Dušan B.
2014-01-01
Galois field (GF) expressions are polynomials used as representations of multiple-valued logic (MVL) functions. For this purpose, MVL functions are considered as functions defined over a finite (Galois) field of order p, GF(p). The problem of computing these functional expressions has an important role in areas such as digital signal processing and logic design. The time needed for computing GF-expressions increases exponentially with the number of variables in MVL functions and, as a result, it often represents a limiting factor in applications. This paper proposes a method for the accelerated computation of GF(4)-expressions for quaternary (four-valued) logic functions using graphics processing units (GPUs). The method is based on the spectral interpretation of GF-expressions, permitting the use of fast Fourier transform (FFT)-like algorithms for their computation. These algorithms are then adapted for highly parallel processing on GPUs. The performance of the proposed solutions is compared with reference C/C++ implementations of the same algorithms processed on central processing units (CPUs). Experimental results confirm that the presented approach leads to a significant reduction in processing times (up to 10.86 times when compared to CPU processing). Therefore, the proposed approach widens the set of problem instances which can be efficiently handled in practice. [Project of the Ministry of Science of the Republic of Serbia, No. ON174026 and No. III44006]
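The FFT-like structure the paper exploits can be illustrated with a plain-Python sketch: a transform defined as the n-fold Kronecker power of a 4×4 base matrix over GF(4) is applied in n strided stages at O(n·4^n) field operations instead of forming the full 4^n × 4^n matrix. The base matrix T below is a placeholder; the actual GF(4)-expression transform matrix depends on the chosen polynomial basis.

```python
import random

# GF(4) arithmetic: addition is bitwise XOR; multiplication follows
# the field generated by x^2 + x + 1 over GF(2).
GF4_MUL = [
    [0, 0, 0, 0],
    [0, 1, 2, 3],
    [0, 2, 3, 1],
    [0, 3, 1, 2],
]

def fast_gf4_transform(f, T, n):
    """Apply the n-fold Kronecker power of T to vector f (length 4**n)
    one axis at a time, radix-4 FFT style."""
    v = list(f)
    for axis in range(n):
        stride = 4 ** axis
        out = [0] * len(v)
        for start in range(0, len(v), 4 * stride):
            for off in range(stride):
                idx = [start + off + j * stride for j in range(4)]
                for i in range(4):
                    acc = 0
                    for j in range(4):
                        acc ^= GF4_MUL[T[i][j]][v[idx[j]]]  # XOR = GF(4) add
                    out[idx[i]] = acc
        v = out
    return v

def naive_transform(f, T, n):
    """Reference: build the full Kronecker-power matrix explicitly."""
    M = [[1]]
    for _ in range(n):
        M = [[GF4_MUL[a][b] for a in ra for b in rb]
             for ra in M for rb in T]
    out = []
    for row in M:
        acc = 0
        for m, x in zip(row, f):
            acc ^= GF4_MUL[m][x]
        out.append(acc)
    return out

random.seed(1)
T = [[1, 1, 1, 1], [0, 1, 2, 3], [0, 1, 3, 2], [0, 3, 2, 1]]  # placeholder base matrix
f = [random.randrange(4) for _ in range(4 ** 3)]
assert fast_gf4_transform(f, T, 3) == naive_transform(f, T, 3)
```

The strided stages are independent of one another within a stage, which is what makes the computation map naturally onto GPU threads.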
Data processing system with a micro-computer for high magnetic field tokamak, TRIAM-1
International Nuclear Information System (INIS)
Kawasaki, Shoji; Nakamura, Kazuo; Nakamura, Yukio; Hiraki, Naoharu; Toi, Kazuo
1981-01-01
A data processing system was designed and constructed for the purpose of analyzing the data of the high magnetic field tokamak TRIAM-1. The system consists of a 10-channel A-D converter, a 20 K byte memory (RAM), an address bus control circuit, a data bus control circuit, a timing pulse and control signal generator, a D-A converter, a micro-computer, and a power source. The memory can be used as a CPU memory except at the time of sampling and data output. The output devices of the system are an X-Y recorder and an oscilloscope. The computer is composed of a CPU, a memory and an I/O part. The memory size can be extended. A cassette tape recorder is provided to keep the programs of the computer. An interface circuit between the computer and the tape recorder was designed and constructed. An electric discharge printer as an I/O device can be connected. From TRIAM-1, the signals of magnetic probes, plasma current, vertical field coil current, and one-turn loop voltage are fed into the processing system. The plasma displacement calculated from these signals is shown on one of the I/O devices. The results of a test run showed good performance. (Kato, T.)
Stagg, Bethan C.; Donkin, Maria E.
2017-01-01
We investigated the usability of mobile computers and field guide books with adult botanical novices for the identification of wildflowers and of deciduous trees in winter. Identification accuracy was significantly higher for wildflowers using a mobile computer app than with field guide books, but significantly lower for deciduous trees. User preference…
Oktem-Okullu, Sinem; Tiftikci, Arzu; Saruc, Murat; Cicek, Bahattin; Vardareli, Eser; Tozun, Nurdan; Kocagoz, Tanil; Sezerman, Ugur; Yavuz, Ahmet Sinan; Sayi-Yazgan, Ayca
2015-01-01
The outcome of H. pylori infection is closely related to the bacterium's virulence factors and the host immune response. The association between T cells and H. pylori infection has been identified, but the effects of the nine major H. pylori-specific virulence factors (cagA, vacA, oipA, babA, hpaA, napA, dupA, ureA, and ureB) on the T cell response in H. pylori-infected patients have not been fully elucidated. We developed a multiplex-PCR assay to detect nine H. pylori virulence genes within three PCR reactions. The expression levels of Th1, Th17 and Treg cell specific cytokines and transcription factors were detected by using qRT-PCR assays. Furthermore, a novel expert-derived model was developed to identify a set of factors and rules that can distinguish ulcer patients from gastritis patients. Among all the virulence factors tested, we identified for the first time a correlation between the presence of the napA virulence gene and ulcer disease. Additionally, a positive correlation between the H. pylori dupA virulence factor and IFN-γ, and between the H. pylori babA virulence factor and IL-17, was detected in gastritis and ulcer patients, respectively. By using computer-based models, the clinical outcome of a patient infected with H. pylori can be predicted by screening the patient's H. pylori vacA m1/m2, ureA and cagA status and IFN-γ (Th1), IL-17 (Th17), and FOXP3 (Treg) expression levels. Herein we report, for the first time, the relationship between H. pylori virulence factors and host immune responses for the diagnostic prediction of gastric diseases using computer-based models.
International Nuclear Information System (INIS)
Belge, Benedicte; Pasquet, Agnes; Vanoverschelde, Jean-Louis J.; Coche, Emmanuel; Gerber, Bernhard L.
2006-01-01
Retrospective reconstruction of ECG-gated images at different parts of the cardiac cycle allows the assessment of cardiac function by multi-detector row CT (MDCT) at the time of non-invasive coronary imaging. We compared the accuracy of such measurements by MDCT to cine magnetic resonance (MR). Forty patients underwent the assessment of global and regional cardiac function by 16-slice MDCT and cine MR. Left ventricular (LV) end-diastolic and end-systolic volumes estimated by MDCT (134±51 and 67±56 ml) were similar to those by MR (137±57 and 70±60 ml, respectively; both P=NS) and strongly correlated (r=0.92 and r=0.95, respectively; both P<0.001). Consequently, LV ejection fractions by MDCT and MR were also similar (55±21 vs. 56±21%; P=NS) and highly correlated (r=0.95; P<0.001). Regional end-diastolic and end-systolic wall thicknesses by MDCT were highly correlated (r=0.84 and r=0.92, respectively; both P<0.001), but significantly lower than by MR (8.3±1.8 vs. 8.8±1.9 mm and 12.7±3.4 vs. 13.3±3.5 mm, respectively; both P<0.001). Values of regional wall thickening by MDCT and MR were similar (54±30 vs. 51±31%; P=NS) and also correlated well (r=0.91; P<0.001). Retrospectively gated MDCT can accurately estimate LV volumes, EF and regional LV wall thickening compared to cine MR. (orig.)
International Nuclear Information System (INIS)
Yang Yanming; Dai Guiling
1988-01-01
All previous National Conferences on computer application in the field of nuclear science and technology, sponsored by the Society of Nuclear Electronics and Detection Technology, are reviewed. Surveys are given of the basic situation and technical level of computer applications in each period. Some possible developments of computer techniques are outlined as well.
Harrison, R. G.
2015-07-01
A mean-field positive-feedback (PFB) theory of ferromagnetism is used to explain the origin of Barkhausen noise (BN) and to show why it is most pronounced in the irreversible regions of the hysteresis loop. By incorporating the ABBM-Sablik model of BN into the PFB theory, we obtain analytical solutions that simultaneously describe both the major hysteresis loop and, by calculating separate expressions for the differential susceptibility in the irreversible and reversible regions, the BN power response at all points of the loop. The PFB theory depends on summing components of the applied field, in particular, the non-monotonic field-magnetization relationship characterizing hysteresis, associated with physical processes occurring in the material. The resulting physical model is then validated by detailed comparisons with measured single-peak BN data in three different steels. It also agrees with the well-known influence of a demagnetizing field on the position and shape of these peaks. The results could form the basis of a physics-based method for modeling and understanding the significance of the observed single-peak (and in multi-constituent materials, multi-peak) BN envelope responses seen in contemporary applications of BN, such as quality control in manufacturing, non-destructive testing, and monitoring the microstructural state of ferromagnetic materials.
International Nuclear Information System (INIS)
Beleggia, M.; Graef, M. de
2003-01-01
A method is presented to compute the demagnetization tensor field for uniformly magnetized particles of arbitrary shape. By means of a Fourier space approach it is possible to compute analytically the Fourier representation of the demagnetization tensor field for a given shape. Then, specifying the direction of the uniform magnetization, the demagnetizing field and the magnetostatic energy associated with the particle can be evaluated. In some particular cases, the real space representation is computable analytically. In general, a numerical inverse fast Fourier transform is required to perform the inversion. As an example, the demagnetization tensor field for the tetrahedron will be given
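The Fourier-space recipe above can be sketched numerically: FFT the particle's shape function, multiply by the k-space kernel k_i k_j / k², and inverse transform to get the demagnetizing field. The grid size, sphere radius, and use of a plain discrete FFT (rather than the analytical Fourier representations discussed in the paper) are illustrative choices:

```python
import numpy as np

n = 64                                # grid points per axis
x = np.arange(n) - n / 2
X, Y, Z = np.meshgrid(x, x, x, indexing="ij")
shape = (X**2 + Y**2 + Z**2 <= 8**2).astype(float)   # sphere, radius 8 cells

# Uniform magnetization along z: M(r) = shape(r) * z_hat, |M| = 1
Mk = np.fft.fftn(shape)

k = np.fft.fftfreq(n)
KX, KY, KZ = np.meshgrid(k, k, k, indexing="ij")
K2 = KX**2 + KY**2 + KZ**2
K2[0, 0, 0] = 1.0                     # avoid 0/0; numerator is zero at k = 0

# z-component of the demagnetizing field: H_z(k) = -(k_z^2 / k^2) M_z(k)
Hz = -np.real(np.fft.ifftn((KZ**2 / K2) * Mk))

# Classic check: inside a uniformly magnetized sphere, H = -M/3
print(Hz[n // 2, n // 2, n // 2])     # close to -1/3
```

For the sphere, the computed field at the centre reproduces the textbook demagnetizing factor of 1/3 to within a few percent on this grid; the paper's analytical Fourier representations avoid this discretization error.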
Garza, J.L.B.; Eijckelhof, B.H.W.; Johnson, P.W.; Raina, S.M.; Rynell, P.; Huysmans, M.A.; Dieën, J.H. van; Beek, A.J. van der; Blatter, B.M.; Dennerlein, J.T.
2012-01-01
The present study, a part of the PROOF (PRedicting Occupational biomechanics in OFfice workers) study, aimed to determine whether trapezius muscle effort was different across computer activities in a field study of computer workers, and also investigated whether head and shoulder postures were
Computation and analysis of backward ray-tracing in aero-optics flow fields.
Xu, Liang; Xue, Deting; Lv, Xiaoyi
2018-01-08
A backward ray-tracing method is proposed for aero-optics simulation. Different from forward tracing, the backward tracing direction is from the internal sensor to the distant target. Along this direction, the tracing in turn goes through the internal gas region, the aero-optics flow field, and the freestream. The coordinate value, the density, and the refractive index are calculated at each tracing step. A stopping criterion is developed to ensure the tracing stops at the outer edge of the aero-optics flow field. As a demonstration, the analysis is carried out for a typical blunt nosed vehicle. The backward tracing method and stopping criterion greatly simplify the ray-tracing computations in the aero-optics flow field, and they can be extended to our active laser illumination aero-optics study because of the reciprocity principle.
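The backward marching loop can be sketched as an integration of the ray equation d/ds(n dr/ds) = ∇n from the sensor outward, with the stopping test comparing the local index against the freestream value. The function names, Euler stepping, and the callable index field below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def trace_backward(r0, d0, n_field, grad_n, ds=0.01, n_inf=1.0, tol=1e-6,
                   max_steps=10000):
    """March from sensor position r0 along unit direction d0 until the
    local refractive index matches the freestream value n_inf."""
    r, d = np.array(r0, float), np.array(d0, float)
    opl = 0.0                               # accumulated optical path length
    for _ in range(max_steps):
        n = n_field(r)
        if abs(n - n_inf) < tol:            # stopping criterion: left the flow field
            break
        # d/ds(n d) = grad n  =>  dd/ds = (grad n - (grad n . d) d) / n
        g = grad_n(r)
        d = d + ds * (g - np.dot(g, d) * d) / n
        d /= np.linalg.norm(d)              # keep direction a unit vector
        r = r + ds * d
        opl += n * ds
    return r, d, opl

# Sanity check: in a uniform medium the ray must stay straight
n_uniform = lambda r: 1.1
grad_zero = lambda r: np.zeros(3)
r, d, opl = trace_backward([0, 0, 0], [0, 0, 1], n_uniform, grad_zero,
                           n_inf=1.0, max_steps=100)
print(d)   # direction unchanged: still along z
```

In an actual aero-optics computation, n_field and grad_n would interpolate the density-derived index on the CFD grid rather than being closed-form callables.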
Multi-GPU Jacobian accelerated computing for soft-field tomography
International Nuclear Information System (INIS)
Borsic, A; Attardo, E A; Halter, R J
2012-01-01
Image reconstruction in soft-field tomography is based on an inverse problem formulation, where a forward model is fitted to the data. In medical applications, where the anatomy presents complex shapes, it is common to use finite element models (FEMs) to represent the volume of interest and solve a partial differential equation that models the physics of the system. Over the last decade, there has been a shifting interest from 2D modeling to 3D modeling, as the underlying physics of most problems are 3D. Although the increased computational power of modern computers allows working with much larger FEM models, the computational time required to reconstruct 3D images on a fine 3D FEM model can be significant, on the order of hours. For example, in electrical impedance tomography (EIT) applications using a dense 3D FEM mesh with half a million elements, a single reconstruction iteration takes approximately 15–20 min with optimized routines running on a modern multi-core PC. It is desirable to accelerate image reconstruction to enable researchers to more easily and rapidly explore data and reconstruction parameters. Furthermore, providing high-speed reconstructions is essential for some promising clinical applications of EIT. For 3D problems, 70% of the computing time is spent building the Jacobian matrix, and 25% of the time in forward solving. In this work, we focus on accelerating the Jacobian computation by using single and multiple GPUs. First, we discuss an optimized implementation on a modern multi-core PC architecture and show how computing time is bounded by the CPU-to-memory bandwidth; this factor limits the rate at which data can be fetched by the CPU. Gains associated with the use of multiple CPU cores are minimal, since data operands cannot be fetched fast enough to saturate the processing power of even a single CPU core. GPUs have much faster memory bandwidths compared to CPUs and better parallelism. We are able to obtain acceleration factors of 20 times.
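The Jacobian-assembly step that dominates the runtime has the structure of a large batched elementwise product: in the standard adjoint formulation, each Jacobian entry couples the forward- and adjoint-field gradients on one mesh element. A numpy sketch of this bandwidth-bound kernel (shapes, names, and random data are illustrative, not the authors' code):

```python
import numpy as np

n_meas, n_elem = 200, 5000
grad_u = np.random.rand(n_meas, n_elem, 3)   # forward-field gradients per element
grad_v = np.random.rand(n_meas, n_elem, 3)   # adjoint-field gradients per element
vol = np.random.rand(n_elem)                 # element volumes

# J[m, e] = -vol[e] * (grad_u[m, e] . grad_v[m, e]); one fused batched
# product over all measurement/element pairs. Each output value needs
# seven memory reads, which is why the kernel is bandwidth-bound and
# benefits from the higher memory bandwidth of GPUs.
J = -np.einsum("med,med,e->me", grad_u, grad_v, vol)
```

On a GPU the same expression maps to one thread per (measurement, element) pair, with the gradient arrays streamed from device memory.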
Ekholm, T.; Lanoix, P.; Teerikorpi, P.; Paturel, G.; Fouqué, P.
1999-11-01
A sample of 32 galaxies with accurate distance moduli from the Cepheid PL-relation (Lanoix 1999) has been used to study the dynamical behaviour of the Local (Virgo) supercluster. We used analytical Tolman-Bondi (TB) solutions for a spherically symmetric density excess embedded in the Einstein-de Sitter universe (q_0 = 0.5). Using 12 galaxies within Theta = 30 deg from the centre, we found a mass estimate of 1.62 M_virial for the Virgo cluster. This agrees with the finding of Teerikorpi et al. (1992) that the TB estimate may be larger than the virial mass estimate of Tully & Shaya (1984). Our conclusions do not critically depend on our primary choice of the global H_0 = 57 km s^-1 Mpc^-1 established from SNe Ia (Lanoix 1999). The remaining galaxies outside the Virgo region do not disagree with this value. Finally, we also found a TB solution with the H_0 and q_0 cited yielding exactly one virial mass for the Virgo cluster.
Miyazawa, Ken; Kawaguchi, Misuzu; Tabuchi, Masako; Goto, Shigemi
2010-12-01
Miniscrew implants have proven to be effective in providing absolute orthodontic anchorage. However, as self-drilling miniscrew implants have become more popular, a problem has emerged: root contact, which can lead to perforation and other root injuries. To avoid possible root damage, a surgical guide was fabricated and cone-beam computed tomography (CBCT) was used to incorporate guide tubes drilled in accordance with the planned direction of the implants. Eighteen patients (5 males and 13 females; mean age 23.8 years; minimum 10.7, maximum 45.5) were included in the study. Forty-four self-drilling miniscrew implants (diameter 1.6 mm, length 8 mm) were placed in interradicular bone using a surgical guide procedure, the majority in the maxillary molar area. To determine the success rates, statistical analysis was undertaken using Fisher's exact probability test. CBCT images of post-surgical self-drilling miniscrew implant placement showed no root contact (0/44). However, based on CBCT evaluation, it was necessary to change the location or angle of 52.3 per cent (23/44) of the guide tubes prior to surgery in order to obtain optimal placement. If orthodontic force could be applied to the screw until completion of orthodontic treatment, screw anchorage was recorded as successful. The total success rate of all miniscrews was 90.9 per cent (40/44). Orthodontic self-drilling miniscrew implants must be inserted carefully, particularly in the case of blind placement, since even guide tubes made on casts frequently require repositioning to avoid the roots of the teeth. The use of surgical guides, fabricated using CBCT images, appears to be a promising technique for placement of orthodontic self-drilling miniscrew implants adjacent to the dental roots and maxillary sinuses.
Fedorenko, Sergei V.
2011-01-01
A novel method for the computation of the discrete Fourier transform over a finite field with reduced multiplicative complexity is described. If the number of multiplications is to be minimized, then, for finite fields of even extension degree, the novel method is the best known method for computing the discrete Fourier transform. A constructive method for building cyclic convolution algorithms over a finite field is also introduced.
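The connection between the finite-field DFT and cyclic convolution can be illustrated with a toy example. For simplicity this uses the prime field GF(17) with n = 8 and the primitive 8th root of unity ω = 2 (2^8 = 256 ≡ 1 mod 17); the paper itself targets extension fields, but the convolution theorem below holds in any field containing an n-th root of unity with n invertible:

```python
P, N, OMEGA = 17, 8, 2        # GF(17), length-8 DFT, omega = 2 has order 8

def gf_dft(a, invert=False):
    """Naive O(N^2) DFT over GF(P); inverses via Fermat's little theorem."""
    w = pow(OMEGA, P - 2, P) if invert else OMEGA
    out = [sum(a[j] * pow(w, i * j, P) for j in range(N)) % P
           for i in range(N)]
    if invert:
        n_inv = pow(N, P - 2, P)
        out = [x * n_inv % P for x in out]
    return out

def cyclic_convolution(a, b):
    return [sum(a[j] * b[(i - j) % N] for j in range(N)) % P
            for i in range(N)]

a = [1, 2, 3, 4, 0, 0, 0, 0]
b = [5, 6, 7, 0, 0, 0, 0, 0]

# Convolution theorem over GF(17): the DFT turns cyclic convolution
# into pointwise multiplication.
A, B = gf_dft(a), gf_dft(b)
via_dft = gf_dft([x * y % P for x, y in zip(A, B)], invert=True)
assert via_dft == cyclic_convolution(a, b)
```

Methods like the paper's reduce the multiplication count of exactly this transform, since multiplications (not additions) dominate the cost in finite-field arithmetic.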
Structure of receptive fields in a computational model of area 3b of primary sensory cortex.
Detorakis, Georgios Is; Rougier, Nicolas P
2014-01-01
In a previous work, we introduced a computational model of area 3b which is built upon neural field theory and receives input from a simplified model of the index distal finger pad populated by a random set of touch receptors (Merkel cells). This model has been shown to be able to self-organize following random stimulation of the finger pad model and to cope, to some extent, with cortical or skin lesions. The main hypothesis of the model is that learning of skin representations occurs at the thalamo-cortical level, while cortico-cortical connections serve a stereotyped competition mechanism that shapes the receptive fields. To further assess this hypothesis and the validity of the model, we reproduced in this article the exact experimental protocol of DiCarlo et al. that has been used to examine the structure of receptive fields in area 3b of the primary somatosensory cortex. Using the same analysis toolset, the model yields consistent results, with most of the receptive fields containing a single region of excitation and one to several regions of inhibition. We then extended our study using a dynamic competition that deeply influences the formation of the receptive fields. We hypothesized this dynamic competition to correspond to some form of somatosensory attention that may help to precisely shape the receptive fields. To test this hypothesis, we designed a protocol where an arbitrary region of interest is delineated on the index distal finger pad and we either (1) instructed explicitly the model to attend to this region (simulating an attentional signal), (2) preferentially trained the model on this region, or (3) combined the two aforementioned protocols simultaneously. Results tend to confirm that dynamic competition leads to shrunken receptive fields, and that its joint interaction with intensive training promotes massive receptive field migration and shrinkage.
Structure of Receptive Fields in a Computational Model of Area 3b of Primary Sensory Cortex
Directory of Open Access Journals (Sweden)
Georgios eDetorakis
2014-07-01
Full Text Available In a previous work, we introduced a computational model of area 3b which is built upon neural field theory and receives input from a simplified model of the index distal finger pad populated by a random set of touch receptors (Merkel cells). This model has been shown to self-organize following random stimulation of the finger pad model and to cope, to some extent, with cortical or skin lesions. The main hypothesis of the model is that learning of skin representations occurs at the thalamo-cortical level, while cortico-cortical connections serve a stereotyped competition mechanism that shapes the receptive fields. To further assess this hypothesis and the validity of the model, we reproduced in this article the exact experimental protocol of DiCarlo et al. that has been used to examine the structure of receptive fields in area 3b of the primary somatosensory cortex. Using the same analysis toolset, the model yields consistent results, with most receptive fields containing a single region of excitation and one to several regions of inhibition. We further extended our study using a dynamic competition that deeply influences the formation of the receptive fields. We hypothesized this dynamic competition to correspond to some form of somatosensory attention that may help to precisely shape the receptive fields. To test this hypothesis, we designed a protocol in which an arbitrary region of interest is delineated on the index distal finger pad and we either (1) explicitly instructed the model to attend to this region (simulating an attentional signal), (2) preferentially trained the model on this region, or (3) combined the two aforementioned protocols simultaneously. Results tend to confirm that dynamic competition leads to shrunken receptive fields, and that its joint interaction with intensive training promotes massive receptive field migration and shrinkage.
Portable, remotely operated, computer-controlled, quadrupole mass spectrometer for field use
International Nuclear Information System (INIS)
Friesen, R.D.; Newton, J.C.; Smith, C.F.
1982-04-01
A portable, remote-controlled mass spectrometer was required at the Nevada Test Site to analyze prompt post-event gas from the nuclear cavity in support of the underground testing program. A Balzers QMG-511 quadrupole was chosen for its ability to be interfaced to a DEC LSI-11 computer and to withstand the ground movement caused by this field environment. The inlet system valves, the pumps, the pressure and temperature transducers, and the quadrupole mass spectrometer are controlled by a read-only-memory-based DEC LSI-11/2 with a high-speed microwave link to the control point which is typically 30 miles away. The computer at the control point is a DEC LSI-11/23 running the RSX-11 operating system. The instrument was automated as much as possible because the system is run by inexperienced operators at times. The mass spectrometer has been used on an initial field event with excellent performance. The gas analysis system is described, including automation by a novel computer control method which reduces operator errors and allows dynamic access to the system parameters
Voepel, H.; Hodge, R. A.; Leyland, J.; Sear, D. A.; Ahmed, S. I.
2014-12-01
Uncertainty for bedload estimates in gravel bed rivers is largely driven by our inability to characterize the arrangement and orientation of the sediment grains within the bed. The characteristics of the surface structure are produced by the water working of grains, which leads to structural differences in bedforms through differential patterns of grain sorting, packing, imbrication, mortaring and degree of bed armoring. Until recently the technical and logistical difficulties of characterizing the arrangement of sediment in 3D have prohibited a full understanding of how grains interact with stream flow and the feedback mechanisms that exist. Micro-focus X-ray CT has been used for non-destructive 3D imaging of grains within a series of intact sections of river bed taken from key morphological units (see Figure 1). Volume, center of mass, points of contact, protrusion and spatial orientation of individual surface grains are derived from these 3D images, which, in turn, facilitates estimates of 3D static force properties at the grain scale such as pivoting angles, buoyancy and gravity forces, and grain exposure. By aggregating representative samples of grain-scale properties of localized interacting sediment into overall metrics, we can compare and contrast bed stability at a macro-scale with respect to stream bed morphology. Understanding differences in bed stability through representative metrics derived at the grain scale will ultimately lead to improved bedload estimates with reduced uncertainty and increased understanding of the interactions between grain-scale properties and channel morphology. Figure 1. CT scans of a water-worked gravel-filled pot. a. 3D rendered scan showing the outer mesh, and b. the same pot with the mesh removed. c. vertical change in porosity of the gravels sampled in 5 mm volumes. Values are typical of those measured in the field and lab. d. 2-D slices through the gravels at 20% depth from surface (porosity = 0.35), and e. 75% depth from
THE CHALLENGE OF THE PERFORMANCE CONCEPT WITHIN THE SUSTAINABILITY AND COMPUTATIONAL DESIGN FIELD
Directory of Open Access Journals (Sweden)
Marcio Nisenbaum
2017-11-01
Full Text Available This paper discusses the notion of performance and its appropriation within the research fields related to sustainability and computational design, focusing on the design processes of the architectural and urban fields. Recently, terms such as "performance-oriented design" or "performance-driven architecture", especially when related to sustainability, have been used by many authors and professionals in an attempt to engender project guidelines based on simulation processes and the systematic use of digital tools. In this context, the notion of performance has basically been understood as the way in which an action is fulfilled, agreeing with contemporary discourses of efficiency and optimization; in this circumstance it is considered that a building or urban area "performs" if it fulfills certain objective sustainability evaluation criteria, reduced to mathematical parameters. This paper intends to broaden this understanding by exploring new theoretical interpretations, referring to etymological investigation, historical research, and literature review, based on authors from different areas and on the case study of the academic solar-house competition Solar Decathlon. This initial analysis is expected to contribute to the emergence of new forms of interpretation of the performance concept, relativizing the notion of the "body" that "performs" in different manners, thus enhancing its appropriation and use within the fields of sustainability and computational design.
Experimental and computational investigation of the NASA low-speed centrifugal compressor flow field
Hathaway, Michael D.; Chriss, Randall M.; Wood, Jerry R.; Strazisar, Anthony J.
1993-01-01
An experimental and computational investigation of the NASA Lewis Research Center's low-speed centrifugal compressor (LSCC) flow field was conducted using laser anemometry and Dawes' three-dimensional viscous code. The experimental configuration consisted of a backswept impeller followed by a vaneless diffuser. Measurements of the three-dimensional velocity field were acquired at several measurement planes through the compressor. The measurements describe both the throughflow and secondary velocity field along each measurement plane. In several cases the measurements provide details of the flow within the blade boundary layers. Insight into the complex flow physics within centrifugal compressors is provided by the computational fluid dynamics analysis (CFD), and assessment of the CFD predictions is provided by comparison with the measurements. Five-hole probe and hot-wire surveys at the inlet and exit to the impeller as well as surface flow visualization along the impeller blade surfaces provided independent confirmation of the laser measurement technique. The results clearly document the development of the throughflow velocity wake that is characteristic of unshrouded centrifugal compressors.
Li, Chunqing; Tie, Xiaobo; Liang, Kai; Ji, Chanjuan
2016-01-01
After conducting intensive research on the distribution of the fluid's velocity and the biochemical reactions in the membrane bioreactor (MBR), this paper introduces the use of the mass-transfer differential equation to simulate the distribution of the chemical oxygen demand (COD) concentration in the MBR membrane pool. The solution is as follows: first, use computational fluid dynamics to establish a flow control equation model of the fluid in the MBR membrane pool; second, calculate this model by adopting direct numerical simulation to get the velocity field of the fluid in the membrane pool; third, combine the velocity field data to establish a mass-transfer differential equation model for the concentration field in the MBR membrane pool, and use the Seidel iteration method to solve this model; last but not least, substitute real factory data into the velocity and concentration field models to calculate simulation results, and use the visualization software Tecplot to display the results. Finally, by analyzing the nephogram of the COD concentration distribution, it can be found that the simulation result conforms to the distribution of the COD concentration in the real membrane pool, and that the mass-transfer phenomenon is affected by the velocity field of the fluid in the membrane pool. The simulation results of this paper have certain reference value for the design optimization of real MBR systems.
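The chain of steps above (velocity field first, then a Seidel-type iteration for the concentration field) can be illustrated with a minimal sketch. The grid, the upwind discretization, the boundary conditions, and all parameter names below are illustrative assumptions, not the authors' code:

```python
import numpy as np

def gauss_seidel_concentration(u, v, D, h, c_in, n_iter=2000):
    """Solve a steady 2D advection-diffusion equation for concentration c
    on a uniform grid by Gauss-Seidel sweeps (illustrative stand-in for the
    paper's Seidel iteration). Upwind advection, Dirichlet inlet at the
    left, zero-gradient outlet at the right, no-flux walls.
    u, v: velocity components on the grid; D: diffusivity; h: grid spacing."""
    ny, nx = u.shape
    c = np.zeros((ny, nx))
    c[:, 0] = c_in                                   # inlet concentration
    for _ in range(n_iter):
        for j in range(1, ny - 1):
            for i in range(1, nx - 1):
                # central diffusion of the four neighbors
                diff = D / h**2 * (c[j, i+1] + c[j, i-1]
                                   + c[j+1, i] + c[j-1, i])
                # first-order upwind advection
                adv = (max(u[j, i], 0) * c[j, i-1]
                       - min(u[j, i], 0) * c[j, i+1]
                       + max(v[j, i], 0) * c[j-1, i]
                       - min(v[j, i], 0) * c[j+1, i]) / h
                denom = 4 * D / h**2 + (abs(u[j, i]) + abs(v[j, i])) / h
                c[j, i] = (diff + adv) / denom       # in-place: Gauss-Seidel
        c[:, -1] = c[:, -2]                          # zero-gradient outlet
        c[0, :], c[-1, :] = c[1, :], c[-2, :]        # no-flux walls
    return c
```

With a uniform velocity field the iteration converges to the inlet concentration throughout the pool, which is a quick sanity check of the scheme.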
Quantum perceptron over a field and neural network architecture selection in a quantum computer.
da Silva, Adenilton José; Ludermir, Teresa Bernarda; de Oliveira, Wilson Rosa
2016-04-01
In this work, we propose a quantum neural network named quantum perceptron over a field (QPF). Quantum computers are not yet a reality and the models and algorithms proposed in this work cannot be simulated in actual (or classical) computers. QPF is a direct generalization of a classical perceptron and solves some drawbacks found in previous models of quantum perceptrons. We also present a learning algorithm named Superposition based Architecture Learning algorithm (SAL) that optimizes the neural network weights and architectures. SAL searches for the best architecture in a finite set of neural network architectures with linear time over the number of patterns in the training set. SAL is the first learning algorithm to determine neural network architectures in polynomial time. This speedup is obtained by the use of quantum parallelism and a non-linear quantum operator.
Computationally efficient near-field source localization using third-order moments
Chen, Jian; Liu, Guohong; Sun, Xiaoying
2014-12-01
In this paper, a third-order moment-based estimation of signal parameters via rotational invariance techniques (ESPRIT) algorithm is proposed for passive localization of near-field sources. By properly choosing sensor outputs of the symmetric uniform linear array, two special third-order moment matrices are constructed, in which the steering matrix is the function of electric angle γ, while the rotational factor is the function of electric angles γ and ϕ. With the singular value decomposition (SVD) operation, all direction-of-arrivals (DOAs) are estimated from a polynomial rooting version. After substituting the DOA information into the steering matrix, the rotational factor is determined via the total least squares (TLS) version, and the related range estimations are performed. Compared with the high-order ESPRIT method, the proposed algorithm requires a lower computational burden, and it avoids the parameter-match procedure. Computer simulations are carried out to demonstrate the performance of the proposed algorithm.
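The rotational-invariance step at the heart of any ESPRIT variant can be sketched in a few lines. Note that the sketch below is a simplified second-order, far-field stand-in (sample covariance instead of third-order moment matrices, plain least squares instead of TLS), intended only to illustrate how a signal subspace yields the electric angles:

```python
import numpy as np

def esprit_doa(X, n_src):
    """Generic least-squares ESPRIT on a uniform linear array.
    X: complex snapshot matrix, shape (n_sensors, n_snapshots).
    Returns the electric angles (inter-sensor phase shifts) of the sources."""
    R = X @ X.conj().T / X.shape[1]        # sample covariance matrix
    w, V = np.linalg.eigh(R)               # eigenvalues in ascending order
    Es = V[:, -n_src:]                     # signal subspace (largest n_src)
    # rotational invariance: Es (rows 1..) ~= Es (rows 0..-1) @ Psi
    Psi = np.linalg.lstsq(Es[:-1], Es[1:], rcond=None)[0]
    return np.angle(np.linalg.eigvals(Psi))
```

For a single narrowband source with electric angle mu, the recovered eigenvalue phase approximates mu directly; the paper's algorithm replaces the covariance with two third-order moment matrices so that both angles γ and ϕ of a near-field source can be extracted.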
The Risks of Cloud Computing in Accounting Field and the Solution Offers: The Case of Turkey
Directory of Open Access Journals (Sweden)
Serap Özdemir
2015-03-01
Full Text Available The cloud is a system that maintains common information sharing among information devices. It is known that there are always risks, and that one hundred percent safety is not available, in information technology environments. Service providers that operate in the accounting sector and utilize cloud technology are responsible for keeping and preserving the digital financial data that are vitally important for companies. Service providers need to take all necessary technical measures so that the digital data are not damaged, lost, or possessed by malicious third parties. Establishments that provide services for accounting systems by utilizing cloud computing opportunities in the accounting field need to consider the general and country-specific risks of cloud computing technology. Therefore, they need to build the necessary technical infrastructure and models in order to run the system flawlessly and to preserve the digital data of the establishments in a secure environment.
Approach and tool for computer animation of fields in electrical apparatus
International Nuclear Information System (INIS)
Miltchev, Radoslav; Yatchev, Ivan S.; Ritchie, Ewen
2002-01-01
The paper presents a technical approach and post-processing tool for creating and displaying computer animation. The approach enables handling of two- and three-dimensional physical field results obtained from finite element software, and display of movement processes in electrical apparatus simulations. The main goal of this work is to extend the auxiliary features built into general-purpose CAD software working in the Windows environment. Different storage techniques were examined and the one employing image capturing was chosen. The developed tool provides the benefits of independent visualisation, scenario creation, and facilities for exporting animations in common file formats for distribution on different computer platforms. It also provides a valuable educational tool. (Author)
Lucchesi, David M; Peron, Roberto
2010-12-03
The pericenter shift of a binary system represents a suitable observable to test for possible deviations from the Newtonian inverse-square law in favor of new weak interactions between macroscopic objects. We analyzed 13 years of tracking data of the LAGEOS satellites with the GEODYN II software, but with no models for general relativity. From the fit of the LAGEOS II pericenter residuals we have been able to obtain a 99.8% agreement with the predictions of Einstein's theory. This result may be considered as a 99.8% measurement, in the field of the Earth, of the combination of the γ and β parameters of general relativity, and it may be used to constrain possible deviations from the inverse-square law in favor of new weak interactions parametrized by a Yukawa-like potential with strength α and range λ. We obtained |α| ≲ 1 × 10⁻¹¹, a huge improvement at a range of about 1 Earth radius.
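In the standard Yukawa-like parametrization assumed here, the potential is V(r) = -(GMm/r)(1 + α e^{-r/λ}); differentiating gives the fractional correction to the inverse-square force. A small helper makes the size of the effect concrete (the parametrization is the conventional one, not code from the paper):

```python
import math

def yukawa_force_ratio(r, alpha, lam):
    """Ratio of the Yukawa-corrected radial force magnitude to the pure
    Newtonian one, for V(r) = -(GMm/r) * (1 + alpha * exp(-r/lam)).
    From F = -dV/dr, the correction factor is
    1 + alpha * (1 + r/lam) * exp(-r/lam)."""
    return 1.0 + alpha * (1.0 + r / lam) * math.exp(-r / lam)
```

At r = λ the correction to the force is 2α/e, so a bound |α| ≲ 10⁻¹¹ at λ of about one Earth radius constrains fractional force deviations at the 10⁻¹¹ level there.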
Supplemental computational phantoms to estimate out-of-field absorbed dose in photon radiotherapy
Gallagher, Kyle J.; Tannous, Jaad; Nabha, Racile; Feghali, Joelle Ann; Ayoub, Zeina; Jalbout, Wassim; Youssef, Bassem; Taddei, Phillip J.
2018-01-01
The purpose of this study was to develop a straightforward method of supplementing patient anatomy and estimating out-of-field absorbed dose for a cohort of pediatric radiotherapy patients with limited recorded anatomy. A cohort of nine children, aged 2-14 years, who received 3D conformal radiotherapy for low-grade localized brain tumors (LBTs), was randomly selected for this study. The extent of these patients' computed tomography simulation image sets was cranial only. To approximate their missing anatomy, we supplemented the LBT patients' image sets with computed tomography images of patients in a previous study with larger extents, of matched sex, height, and mass, and for whom contours of organs at risk for radiogenic cancer had already been delineated. Rigid fusion was performed between the LBT patients' data and that of the supplemental computational phantoms using commercial software and in-house codes. In-field dose was calculated with a clinically commissioned treatment planning system, and out-of-field dose was estimated with a previously developed analytical model that was re-fit with parameters based on new measurements for intracranial radiotherapy. Mean doses greater than 1 Gy were found in the red bone marrow, remainder, thyroid, and skin of the patients in this study. Mean organ doses between 150 mGy and 1 Gy were observed in the breast tissue of the girls and the lungs of all patients. Distant organs, i.e. prostate, bladder, uterus, and colon, received mean organ doses less than 150 mGy. The mean organ doses of the younger, smaller LBT patients (0-4 years old) were a factor of 2.4 greater than those of the older, larger patients (8-12 years old). Our findings demonstrated the feasibility of a straightforward method of applying supplemental computational phantoms and dose-calculation models to estimate absorbed dose for a set of children of various ages who received radiotherapy and for whom anatomies were largely missing in their original
International Nuclear Information System (INIS)
Watanabe, Shuichi; Kudo, Hiroyuki; Saito, Tsuneo
1993-01-01
In this paper, we propose a new reconstruction algorithm based on MAP (maximum a posteriori probability) estimation principle for emission tomography. To improve noise suppression properties of the conventional ML-EM (maximum likelihood expectation maximization) algorithm, direct three-dimensional reconstruction that utilizes intensity correlations between adjacent transaxial slices is introduced. Moreover, to avoid oversmoothing of edges, a priori knowledge of RI (radioisotope) distribution is represented by using a doubly-stochastic image model called the compound Gauss-Markov random field. The a posteriori probability is maximized by using the iterative GEM (generalized EM) algorithm. Computer simulation results are shown to demonstrate validity of the proposed algorithm. (author)
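For context, the conventional ML-EM baseline that the MAP/GEM algorithm extends has a compact multiplicative update. A generic sketch with a toy system matrix and no smoothing prior (this is the textbook algorithm, not the authors' GEM code):

```python
import numpy as np

def ml_em(A, y, n_iter=100):
    """Conventional ML-EM update for emission tomography:
        x <- x / (A^T 1) * A^T (y / (A x))
    A: system matrix (n_detectors x n_voxels); y: measured counts.
    This is the baseline whose noise behavior the MAP/GEM algorithm
    above improves with a Gauss-Markov prior."""
    x = np.ones(A.shape[1])                 # uniform initial image
    sens = A.sum(axis=0)                    # sensitivity image A^T 1
    for _ in range(n_iter):
        proj = A @ x                        # forward projection
        proj = np.where(proj == 0, 1e-12, proj)  # guard divisions
        x *= (A.T @ (y / proj)) / sens      # multiplicative update
    return x
```

For consistent, noise-free data the iteration converges to the true activity, which makes small synthetic systems a convenient correctness check.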
Field programmable gate array-assigned complex-valued computation and its limits
Energy Technology Data Exchange (ETDEWEB)
Bernard-Schwarz, Maria, E-mail: maria.bernardschwarz@ni.com [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria); Zwick, Wolfgang; Klier, Jochen [National Instruments, Ganghoferstrasse 70b, 80339 Munich (Germany); Wenzel, Lothar [National Instruments, 11500 N MOPac Expy, Austin, Texas 78759 (United States); Gröschl, Martin [Institute of Applied Physics, TU Wien, Wiedner Hauptstrasse 8, 1040 Wien (Austria)
2014-09-15
We discuss how leveraging Field Programmable Gate Array (FPGA) technology as part of a high performance computing platform reduces latency to meet the demanding real-time constraints of a quantum optics simulation. Implementations of complex-valued operations using fixed-point numerics on a Virtex-5 FPGA compare favorably to more conventional solutions on a central processing unit. Our investigation explores the performance of multiple fixed-point options along with a traditional 64-bit floating-point version. With this information, the lowest execution times can be estimated. Relative error is examined to ensure simulation accuracy is maintained.
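The kind of fixed-point complex arithmetic benchmarked here can be emulated in software to inspect scaling and rounding behavior before committing to hardware. The sketch below uses a Q-format integer representation; the format choice and helper names are assumptions, not the implementation from the paper:

```python
def to_fixed(x, frac_bits=16):
    """Convert a float to a Q(frac_bits) fixed-point integer."""
    return int(round(x * (1 << frac_bits)))

def fixed_complex_mul(a, b, frac_bits=16):
    """Fixed-point complex multiply as it would map onto FPGA DSP slices:
    operands are (real, imag) pairs of Q(frac_bits) integers; the double-
    width products are rescaled by an arithmetic right shift."""
    ar, ai = a
    br, bi = b
    re = (ar * br - ai * bi) >> frac_bits   # Re = ar*br - ai*bi
    im = (ar * bi + ai * br) >> frac_bits   # Im = ar*bi + ai*br
    return re, im
```

Comparing the emulated results against native complex arithmetic over the expected operand range is one way to estimate the relative error of each fixed-point option before synthesis.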
MHD computation of feedback of resistive-shell instabilities in the reversed field pinch
International Nuclear Information System (INIS)
Zita, E.J.; Prager, S.C.
1992-05-01
MHD computation demonstrates that feedback can sustain reversal and reduce loop voltage in resistive-shell reversed field pinch (RFP) plasmas. Edge feedback on ∼2R/a tearing modes resonant near axis is found to restore plasma parameters to nearly their levels with a close-fitting conducting shell. When original dynamo modes are stabilized, neighboring tearing modes grow to maintain the RFP dynamo more efficiently. This suggests that experimentally observed limits on RFP pulselengths to the order of the shell time can be overcome by applying feedback to a few helical modes
Three dimensional field computation software package DE3D and its applications
International Nuclear Information System (INIS)
Fan Mingwu; Zhang Tianjue; Yan Weili
1992-07-01
A software package, DE3D, that can be run on a PC for three-dimensional electrostatic and magnetostatic field analysis has been developed at CIAE (China Institute of Atomic Energy). A two-scalar-potential method and special numerical techniques give the code high precision. It can be used for electrostatic and magnetostatic field computations with complex boundary conditions. In most cases, the result accuracy is better than 1% compared with measurement. In some situations, the results are more acceptable than those of other codes because special techniques are used for the current integral. Typical examples, the design of a cyclotron magnet and of magnetic elements on its beam transport line, given in the paper show how the program helps the designer to improve the design of the product. The software package could bring advantages to producers and designers.
Color fields of the static pentaquark system computed in SU(3) lattice QCD
Cardoso, Nuno; Bicudo, Pedro
2013-02-01
We compute the color fields of SU(3) lattice QCD created by static pentaquark systems, on a 24³×48 lattice at β=6.2, corresponding to a lattice spacing a = 0.07261(85) fm. We find that the pentaquark color fields are well described by a multi-Y-type shaped flux tube. The flux-tube junction points are compatible with Fermat-Steiner points minimizing the total flux-tube length. We also compare the pentaquark flux-tube profile with the diquark-diantiquark central flux-tube profile in the tetraquark and the quark-antiquark fundamental flux-tube profile in the meson, and they match, thus showing that the pentaquark flux tubes are composed of fundamental flux tubes.
Color fields computed in SU(3) lattice QCD for the static tetraquark system
International Nuclear Information System (INIS)
Cardoso, Nuno; Cardoso, Marco; Bicudo, Pedro
2011-01-01
The color fields created by the static tetraquark system are computed in quenched SU(3) lattice QCD, on a 24³×48 lattice at β=6.2 corresponding to a lattice spacing a = 0.07261(85) fm. We find that the tetraquark color fields are well described by a double-Y, or butterfly, shaped flux tube. The two flux-tube junction points are compatible with Fermat points minimizing the total flux-tube length. We also compare the diquark-diantiquark central flux-tube profile in the tetraquark with the quark-antiquark fundamental flux-tube profile in the meson, and they match, thus showing that the tetraquark flux tubes are composed of fundamental flux tubes.
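The Fermat-point criterion used to test the flux-tube junctions in these two abstracts can be checked numerically with the classical Weiszfeld iteration. This is a sketch of the geometric criterion only, not of the lattice computation:

```python
import numpy as np

def fermat_point(points, n_iter=200):
    """Weiszfeld iteration for the point minimizing the total distance to
    the given points -- the criterion the flux-tube junctions are compared
    against. Assumes the minimizer is not at a vertex (all angles < 120
    degrees for a triangle)."""
    pts = np.asarray(points, dtype=float)
    x = pts.mean(axis=0)                      # start at the centroid
    for _ in range(n_iter):
        d = np.linalg.norm(pts - x, axis=1)
        if np.any(d < 1e-12):                 # landed on a vertex: stop
            break
        w = 1.0 / d                           # inverse-distance weights
        x = (pts * w[:, None]).sum(axis=0) / w.sum()
    return x
```

For an equilateral triangle the Fermat point coincides with the centroid, and at the optimum the unit vectors toward the vertices sum to zero (they meet at 120 degrees), both of which are easy checks.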
International Nuclear Information System (INIS)
Yeh, G.T.
1980-01-01
Darcian velocity has been conventionally calculated in the finite-element modeling of groundwater flow by taking the derivatives of the computed pressure field. This results in discontinuities in the velocity field at nodal points and element boundaries. Discontinuities become enormous when the computed pressure field is far from a linear distribution. It is proposed in this paper that the finite element procedure that is used to simulate the pressure field or the moisture content field also be applied to Darcy's law with the derivatives of the computed pressure field as the load function. The problem of discontinuity is then eliminated, and the error of mass balance over the region of interest is much reduced. The reduction is from 23.8 to 2.2% by one numerical scheme and from 29.7 to -3.6% by another for a transient problem
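In 1D with linear elements, the proposed Galerkin treatment of Darcy's law amounts to solving a consistent mass-matrix system for nodal velocities, instead of differentiating the pressure element by element. A minimal sketch, assuming a uniform conductivity K (the paper's formulation is more general):

```python
import numpy as np

def galerkin_velocity(x, p, K=1.0):
    """Continuous nodal Darcy velocity by Galerkin projection: solve
    M v = b, with M the consistent mass matrix of 1D linear elements and
    b_i = -K * integral(N_i * dp/dx). The elementwise derivative dp/dx is
    discontinuous; the projected v is single-valued at the nodes."""
    n = len(x)
    M = np.zeros((n, n))
    b = np.zeros(n)
    for e in range(n - 1):
        h = x[e + 1] - x[e]
        dpdx = (p[e + 1] - p[e]) / h          # elementwise derivative
        # consistent mass matrix of a linear element: (h/6) [[2,1],[1,2]]
        M[e:e + 2, e:e + 2] += h / 6.0 * np.array([[2.0, 1.0], [1.0, 2.0]])
        # load: integral of N_i over the element is h/2 for each node
        b[e:e + 2] += -K * dpdx * h / 2.0
    return np.linalg.solve(M, b)
```

For a linear pressure field the projection reproduces the exact constant velocity at every node, including the boundaries, where naive differentiation is one-sided.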
International Nuclear Information System (INIS)
Smith, R.A.
1975-06-01
The design evaluation of toroidal field coils on the Princeton Large Torus (PLT), the Poloidal Diverter Experiment (PDX) and the Tokamak Fusion Test Reactor (TFTR) has been performed by structural analysis with the finite element method. The technique employed has been simplified with supplementary computer programs that are used to generate the input data for the finite element computer program. Significant automation has been provided by computer codes in three areas of data input. These are the definition of coil geometry by a mesh of node points, the definition of finite elements via the node points and the definition of the node point force/displacement boundary conditions. The computer programs by name that have been used to perform the above functions are PDXNODE, ELEMENT and PDXFORC. The geometric finite element modeling options for toroidal field coils provided by PDXNODE include one-fourth or one-half symmetric sections of circular coils, oval shaped coils or dee-shaped coils with or without a beveled wedging surface. The program ELEMENT which defines the finite elements for input to the finite element computer code can provide considerable time and labor savings when defining the model of coils of non-uniform cross-section or when defining the model of coils whose material properties are different in the R and THETA directions due to the laminations of alternate epoxy and copper windings. The modeling features provided by the program ELEMENT have been used to analyze the PLT and the TFTR toroidal field coils with integral support structures. The computer program named PDXFORC is described. It computes the node point forces in a model of a toroidal field coil from the vector crossproduct of the coil current and the magnetic field. The model can be of one-half or one-fourth symmetry to be consistent with the node model defined by PDXNODE, and the magnetic field is computed from toroidal or poloidal coils
Some exact computations on the twisted butterfly state in string field theory
International Nuclear Information System (INIS)
Okawa, Yuji
2004-01-01
The twisted butterfly state solves the equation of motion of vacuum string field theory in the singular limit. The finiteness of the energy density of the solution is an important issue, but possible conformal anomaly resulting from the twisting has prevented us from addressing this problem. We present a description of the twisted regulated butterfly state in terms of a conformal field theory with a vanishing central charge which consists of the ordinary bc ghosts and a matter system with c=26. Various quantities relevant to vacuum string field theory are computed exactly using this description. We find that the energy density of the solution can be finite in the limit, but the finiteness depends on the subleading structure of vacuum string field theory. We further argue, contrary to our previous expectation, that contributions from subleading terms in the kinetic term to the energy density can be of the same order as the contribution from the leading term, which consists of the midpoint ghost insertion. (author)
International Nuclear Information System (INIS)
Chakraborty, Partha Sarathi; Karunanithi, Sellam; Dhull, Varun Singh; Kumar, Kunal; Tripathi, Madhavi
2015-01-01
We present the case of a 35-year-old man with calcinosis, Raynaud's phenomenon, esophageal dysmotility, sclerodactyly and telangiectasia (CREST) variant scleroderma who presented with dysphagia, Raynaud's phenomenon and calf pain. 99mTc-methylene diphosphonate bone scintigraphy was performed to identify the extent of the calcification. It revealed extensive dystrophic calcification in the left thigh and both legs which was involving the muscles and was well-delineated on single photon emission computed tomography/computed tomography. Calcinosis in scleroderma usually involves the skin but can be found in deeper periarticular tissues. Myopathy is associated with a poor prognosis
Directory of Open Access Journals (Sweden)
Johan Debayle
2011-05-01
Full Text Available An image analysis method has been developed in order to compute the velocity field of a granular medium (sand grains, mean diameter 600 μm) submitted to different kinds of mechanical stresses. The differential method, based on optical flow conservation, consists in describing a dense motion field with vectors associated to each pixel. A multiscale, coarse-to-fine, analytical approach through tailor-sized windows yields the best compromise between accuracy and robustness of the results, while enabling an acceptable computation time. The corresponding algorithm is presented and its validation discussed through different tests. The results of the validation tests of the proposed approach show that the method is satisfactory when attributing specific values to parameters in association with the size of the image analysis window. An application to the case of vibrated sand has been studied. An instrumented laboratory device provides sinusoidal vibrations and enables external optical observations of sand motion in 3D transparent boxes. At 50 Hz, by increasing the relative acceleration G, the onset and development of two convective rolls can be observed. An ultra-fast camera records the grain avalanches, and several pairs of images are analysed by the proposed method. The vertical velocity profiles are deduced and allow precise quantification of the dimensions of the fluidized region as a function of G.
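A single-scale version of the window-based optical-flow estimation described above can be sketched directly from the brightness-constancy constraint; the multiscale coarse-to-fine machinery and the tailored window sizing are omitted here, so this is a simplified stand-in rather than the authors' algorithm:

```python
import numpy as np

def window_flow(I1, I2, win=5):
    """Dense optical flow from the brightness-constancy constraint
    Ix*u + Iy*v + It = 0, solved by least squares over a small window
    around each pixel (single-scale sketch). Returns flow[y, x] = (u, v);
    border pixels narrower than the window are left at zero."""
    Iy, Ix = np.gradient(I1)          # np.gradient returns axis-0 (y) first
    It = I2 - I1                      # temporal derivative
    r = win // 2
    h, w = I1.shape
    flow = np.zeros((h, w, 2))
    for y in range(r, h - r):
        for x in range(r, w - r):
            sl = np.s_[y - r:y + r + 1, x - r:x + r + 1]
            A = np.stack([Ix[sl].ravel(), Iy[sl].ravel()], axis=1)
            b = -It[sl].ravel()
            flow[y, x] = np.linalg.lstsq(A, b, rcond=None)[0]
    return flow
```

Translating a smooth synthetic image by one pixel and checking that the recovered displacement is close to (1, 0) on the gradient-rich flank is a standard validation of such a scheme.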
2003-10-01
The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : WSDOT deployment". This document defines the objective, approach,...
2004-01-01
The purpose of this document is to expand upon the evaluation components presented in "Computer-aided dispatch--traffic management center field operational test final evaluation plan : state of Utah". This document defines the objective, approach, an...
A Backward Pyramid Oriented Optical Flow Field Computing Method for Aerial Image
Directory of Open Access Journals (Sweden)
LI Jiatian
2016-09-01
Full Text Available The aerial-image optical flow field is the foundation for detecting moving objects at low altitude and obtaining change information. In general, the image pyramid structure is embedded in the numerical procedure in order to enhance global convergence. However, more often than not, the pyramid structure is constructed progressively using a bottom-up approach, ignoring the geometric imaging process. In particular, when ground objects move, this can lead to missed optical flow, or to optical flow too small to sustain the subsequent modeling and analysis. A backward pyramid structure is therefore proposed on the foundation of a top-level standard image. Firstly, the down-sampling factors of the top-level image are calculated quantitatively through central projection, making the optical flow in the top-level image represent the shifting threshold of the set ground target. Secondly, combining the top-level image with the original, the down-sampling factors of the middle layers are determined in a constant-proportion way. Finally, the middle-layer images are obtained by Gaussian smoothing and image interpolation, and the pyramid is thereby formed. Comparative experiments and analysis illustrate that the backward pyramid can calculate the optical flow field in aerial images accurately, and that it has advantages in handling small ground displacements.
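The top-down construction can be sketched as follows: fix the top-level down-sampling factor (in the paper, from the central-projection geometry), fill the middle levels in constant proportion, then smooth and resample. The binomial filter, nearest-neighbour resampling, and geometric level spacing below are simplifying assumptions, not the authors' implementation:

```python
import numpy as np

def binomial_smooth(img):
    """Separable 3x3 binomial filter (edge-padded), standing in for the
    Gaussian smoothing applied before resampling."""
    k = np.array([0.25, 0.5, 0.25])
    p = np.pad(img, 1, mode='edge')
    rows = k[0] * p[:-2, :] + k[1] * p[1:-1, :] + k[2] * p[2:, :]
    return k[0] * rows[:, :-2] + k[1] * rows[:, 1:-1] + k[2] * rows[:, 2:]

def resample(img, factor):
    """Nearest-neighbour down-sampling by a (possibly fractional) factor."""
    h, w = img.shape
    nh, nw = max(1, round(h / factor)), max(1, round(w / factor))
    ys = np.minimum((np.arange(nh) * factor).astype(int), h - 1)
    xs = np.minimum((np.arange(nw) * factor).astype(int), w - 1)
    return img[np.ix_(ys, xs)]

def backward_pyramid(image, s_top, n_levels):
    """Top-down ("backward") pyramid: the top-level factor s_top is fixed
    first, and the intermediate factors are filled in constant proportion
    down to the original image (factor 1). Assumes n_levels >= 2."""
    factors = s_top ** (np.arange(n_levels) / (n_levels - 1))
    levels = [image]
    for f in factors[1:]:
        levels.append(resample(binomial_smooth(image), f))
    return levels
```

With s_top = 4 and three levels the factors come out as 1, 2, 4, halving the image size at each step.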
A fast point-cloud computing method based on spatial symmetry of Fresnel field
Wang, Xiangxiang; Zhang, Kai; Shen, Chuan; Zhu, Wenliang; Wei, Sui
2017-10-01
Computer-generated holography (CGH) faces a great challenge in real-time holographic video display systems, where a high space-bandwidth product (SBP) is required. This paper is based on the point-cloud method and takes advantage of the propagation reversibility of Fresnel diffraction along the propagation direction and of the spatial symmetry of the fringe pattern of a point source, known as a Gabor zone plate, which can therefore be used as a basis for fast calculation of the diffraction field in CGH. A fast Fresnel CGH method based on the novel look-up table (N-LUT) method is proposed: first, the principal fringe patterns (PFPs) at a virtual plane are pre-calculated by the acceleration algorithm and stored; second, the Fresnel diffraction fringe pattern at a dummy plane is obtained; finally, the field is propagated from the dummy plane to the hologram plane. Simulation experiments and optical experiments based on liquid crystal on silicon (LCOS) are set up to demonstrate the validity of the proposed method; while preserving the quality of the 3D reconstruction, the method shortens computation time and improves computational efficiency.
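The spatial symmetry exploited by N-LUT-style methods, namely that the Fresnel fringe of a laterally shifted point is a laterally shifted Gabor zone plate, can be sketched by precomputing one principal fringe pattern and summing shifted crops. Unit amplitudes and integer-pixel offsets are simplifying assumptions of this sketch:

```python
import numpy as np

def principal_fringe(n, pitch, wavelength, z):
    """Paraxial Fresnel zone-plate fringe of an on-axis point source at
    distance z, sampled on a (2n+1) x (2n+1) grid of the given pixel pitch
    -- the precomputed PFP of a look-up-table scheme."""
    c = np.arange(-n, n + 1) * pitch
    X, Y = np.meshgrid(c, c)
    k = 2 * np.pi / wavelength
    return np.exp(1j * k * (X**2 + Y**2) / (2 * z))

def hologram_from_points(points, n, pfp):
    """Hologram of a point cloud as a sum of laterally shifted crops of one
    PFP, exploiting the shift-invariance of the Fresnel field of a point.
    points: (px, py) integer pixel offsets; the PFP must be large enough
    that every shifted (2n+1)^2 crop stays inside it."""
    size = 2 * n + 1
    H = np.zeros((size, size), dtype=complex)
    m = pfp.shape[0] // 2                     # center index of the PFP
    for (px, py) in points:
        H += pfp[m - py - n : m - py + n + 1,
                 m - px - n : m - px + n + 1]
    return H
```

Because the crop is only an index shift, the fringe of an off-axis point is reproduced exactly on the sampling grid, which is what removes the per-point exponential evaluation from the inner loop.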
Review on the applications of the very high speed computing technique to atomic energy field
International Nuclear Information System (INIS)
Hoshino, Tsutomu
1981-01-01
The demand for computation in the atomic energy field is enormous; the physical and technological knowledge obtained from experiments is summarized into mathematical models and accumulated as computer programs for design, safety analysis, and operational management. These calculation code systems are classified into reactor physics, reactor technology, operational management, and nuclear fusion. In this paper, the calculation speed demanded by the diffusion and transport of neutrons, shielding, technological safety, core control, and particle simulation is explained through typical calculations. These calculations divide into two model classes: fluid models, which regard physical systems as continua, and particle models, which regard physical systems as composed of a finite number of particles. The speed of present computers is too slow, and a capability 1000 to 10000 times that of present general-purpose machines is desirable. The calculation techniques of pipeline systems and parallel processor systems are described. As an example of a practical system, the OCTOPUS computer network at the Lawrence Livermore Laboratory is shown; the CHI system at UCLA is also introduced. (Kako, I.)
Accurate Modeling of Advanced Reflectarrays
DEFF Research Database (Denmark)
Zhou, Min
… to the conventional phase-only optimization technique (POT), the geometrical parameters of the array elements are directly optimized to fulfill the far-field requirements, thus maintaining a direct relation between optimization goals and optimization variables. As a result, better designs can be obtained compared … of the incident field, the choice of basis functions, and the technique to calculate the far-field. Based on accurate reference measurements of two offset reflectarrays carried out at the DTU-ESA Spherical Near-Field Antenna Test Facility, it was concluded that the three latter factors are particularly important … using the GDOT to demonstrate its capabilities. To verify the accuracy of the GDOT, two offset contoured beam reflectarrays that radiate a high-gain beam on a European coverage have been designed and manufactured, and subsequently measured at the DTU-ESA Spherical Near-Field Antenna Test Facility …
Accurate metacognition for visual sensory memory representations.
Vandenbroucke, Annelinde R E; Sligte, Ilja G; Barrett, Adam B; Seth, Anil K; Fahrenfort, Johannes J; Lamme, Victor A F
2014-04-01
The capacity to attend to multiple objects in the visual field is limited. However, introspectively, people feel that they see the whole visual world at once. Some scholars suggest that this introspective feeling is based on short-lived sensory memory representations, whereas others argue that the feeling of seeing more than can be attended to is illusory. Here, we investigated this phenomenon by combining objective memory performance with subjective confidence ratings during a change-detection task. This allowed us to compute a measure of metacognition (the degree of knowledge that subjects have about the correctness of their decisions) for different stages of memory. We show that subjects store more objects in sensory memory than they can attend to but, at the same time, have similar metacognition for sensory memory and working memory representations. This suggests that these subjective impressions are not an illusion but accurate reflections of the richness of visual perception.
A novel potential/viscous flow coupling technique for computing helicopter flow fields
Summa, J. Michael; Strash, Daniel J.; Yoo, Sungyul
1993-01-01
The primary objective of this work was to demonstrate the feasibility of a new potential/viscous flow coupling procedure for reducing computational effort while maintaining solution accuracy. This closed-loop, overlapped velocity-coupling concept has been developed in a new two-dimensional code, ZAP2D (Zonal Aerodynamics Program - 2D), a three-dimensional code for wing analysis, ZAP3D (Zonal Aerodynamics Program - 3D), and a three-dimensional code for isolated helicopter rotors in hover, ZAPR3D (Zonal Aerodynamics Program for Rotors - 3D). Comparisons with large domain ARC3D solutions and with experimental data for a NACA 0012 airfoil have shown that the required domain size can be reduced to a few tenths of a percent chord for the low Mach and low angle of attack cases and to less than 2-5 chords for the high Mach and high angle of attack cases while maintaining solution accuracies to within a few percent. This represents CPU time reductions by a factor of 2-4 compared with ARC2D. The current ZAP3D calculation for a rectangular plan-form wing of aspect ratio 5 with an outer domain radius of about 1.2 chords represents a speed-up in CPU time over the ARC3D large domain calculation by about a factor of 2.5 while maintaining solution accuracies to within a few percent. A ZAPR3D simulation for a two-bladed rotor in hover with a reduced grid domain of about two chord lengths was able to capture the wake effects and compared accurately with the experimental pressure data. Further development is required in order to substantiate the promise of computational improvements due to the ZAPR3D coupling concept.
Quantum control with noisy fields: computational complexity versus sensitivity to noise
International Nuclear Information System (INIS)
Kallush, S; Khasin, M; Kosloff, R
2014-01-01
A closed quantum system is defined as completely controllable if an arbitrary unitary transformation can be executed using the available controls. In practice, control fields are a source of unavoidable noise, which has to be suppressed to retain controllability. Can one design control fields such that the effect of noise is negligible on the time-scale of the transformation? This question is intimately related to the fundamental problem of a connection between the computational complexity of the control problem and the sensitivity of the controlled system to noise. The present study considers a paradigm of control where the Lie-algebraic structure of the control Hamiltonian is fixed, while the size of the system increases with the dimension of the Hilbert space representation of the algebra. We find two types of control tasks, easy and hard. Easy tasks are characterized by a small variance of the evolving state with respect to the control operators; they are relatively immune to noise, and the control field is easy to find. Hard tasks have a large variance, are sensitive to noise, and the control field is hard to find. The influence of noise increases with the size of the system, which is measured by the scaling factor N of the largest weight of the representation. For fixed time and control field, the ability to control degrades as O(N) for easy tasks and as O(N^2) for hard tasks. As a consequence, even in the most favorable estimate, for large quantum systems generic noise in the controls dominates for a typical class of target transformations, i.e. complete controllability is destroyed by noise. (paper)
Computer usage and national energy consumption: Results from a field-metering study
Energy Technology Data Exchange (ETDEWEB)
Desroches, Louis-Benoit; Fuchs, Heidi; Greenblatt, Jeffery; Pratt, Stacy; Willem, Henry; Claybaugh, Erin; Beraki, Bereket; Nagaraju, Mythri; Price, Sarah; Young, Scott [all at Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States), Energy Analysis & Environmental Impacts Dept., Environmental Energy Technologies Division]
2014-12-01
The electricity consumption of miscellaneous electronic loads (MELs) in the home has grown in recent years, and is expected to continue rising. Consumer electronics, in particular, are characterized by swift technological innovation, with varying impacts on energy use. Desktop and laptop computers make up a significant share of MELs electricity consumption, but their national energy use is difficult to estimate, given uncertainties around shifting user behavior. This report analyzes usage data from 64 computers (45 desktop, 11 laptop, and 8 unknown) collected in 2012 as part of a larger field monitoring effort of 880 households in the San Francisco Bay Area, and compares our results to recent values from the literature. We find that desktop computers are used for an average of 7.3 hours per day (median = 4.2 h/d), while laptops are used for a mean 4.8 hours per day (median = 2.1 h/d). The results for laptops are likely underestimated since they can be charged in other, unmetered outlets. Average unit annual energy consumption (AEC) for desktops is estimated to be 194 kWh/yr (median = 125 kWh/yr), and for laptops 75 kWh/yr (median = 31 kWh/yr). We estimate national annual energy consumption for desktop computers to be 20 TWh. National annual energy use for laptops is estimated to be 11 TWh, markedly higher than previous estimates, likely reflective of laptops drawing more power in On mode in addition to greater market penetration. This result for laptops, however, carries relatively higher uncertainty compared to desktops. Different study methodologies and definitions, changing usage patterns, and uncertainty about how consumers use computers must be considered when interpreting our results with respect to existing analyses. Finally, as energy consumption in On mode is predominant, we outline several energy savings opportunities: improved power management (defaulting to low-power modes after periods of inactivity as well as power scaling), matching the rated power
Baniamerian, Jamaledin; Liu, Shuang; Abbas, Mahmoud Ahmed
2018-04-01
The vertical gradient is an essential tool in interpretation algorithms. It is also the primary enhancement technique for improving the resolution of measured gravity and magnetic field data, since it has higher sensitivity than the measured field to changes in the physical properties (density or susceptibility) of subsurface structures. If the field derivatives are not measured directly with gradiometers, they can be calculated from the collected gravity or magnetic data using numerical methods such as those based on the fast Fourier transform. The gradients behave like high-pass filters and enhance short-wavelength anomalies, which may be associated either with small, shallow sources or with high-frequency noise in the data, so their numerical computation is susceptible to noise amplification. This behaviour can adversely affect the stability of the derivatives in the presence of even a small level of noise and consequently limit their application in interpretation methods. Adding a smoothing term to the conventional Fourier-domain formulation of the vertical gradient can improve the stability of the numerical differentiation of the field. In this paper, we propose a strategy in which the overall efficiency of the classical Fourier-domain algorithm is improved by incorporating two different smoothing filters. For the smoothing term, a simple qualitative procedure based on upward continuation of the field to a higher altitude is introduced to estimate the related parameters, called the regularization parameter and the cut-off wavenumber in the corresponding filters. The efficiency of these new approaches is validated by computing the first- and second-order derivatives of noise-corrupted synthetic data sets and comparing the results with the true ones. The filtered and unfiltered vertical gradients are incorporated into the extended Euler deconvolution to estimate the depth and structural index of a magnetic
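The Fourier-domain vertical derivative with a smoothing term can be sketched in one dimension as follows. In the wavenumber domain the vertical derivative is multiplication by |k|; multiplying additionally by the upward-continuation factor exp(-|k| h) damps the high wavenumbers that amplify noise. The profile, wavenumber, and continuation height below are illustrative assumptions, not the paper's filters.

```python
import numpy as np

def vertical_derivative(f, dx, h=0.0):
    """First vertical derivative of a potential-field profile via FFT.

    Multiplies the spectrum by |k|; the optional upward-continuation
    factor exp(-|k| h) acts as the smoothing (regularization) term.
    """
    k = 2 * np.pi * np.fft.fftfreq(f.size, d=dx)
    F = np.fft.fft(f)
    dz = np.fft.ifft(np.abs(k) * np.exp(-np.abs(k) * h) * F)
    return dz.real

# Check against an analytic harmonic field: for f(x) = cos(k0 x) the vertical
# derivative is k0 cos(k0 x) (z positive down), and the smoothed version is
# additionally damped by exp(-k0 h).
n, dx = 512, 10.0                      # samples and spacing (m), assumed
x = np.arange(n) * dx
k0 = 2 * np.pi * 8 / (n * dx)          # exactly 8 cycles over the profile
f = np.cos(k0 * x)

dz = vertical_derivative(f, dx)
assert np.allclose(dz, k0 * np.cos(k0 * x), atol=1e-10)

dz_smooth = vertical_derivative(f, dx, h=50.0)
assert np.allclose(dz_smooth, k0 * np.exp(-k0 * 50.0) * np.cos(k0 * x), atol=1e-10)
```

With h = 0 this reproduces the classical (unregularized) derivative; increasing h trades resolution for noise suppression, which is the trade-off the paper's parameter-estimation procedure addresses.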
Wide-field two-dimensional multifocal optical-resolution photoacoustic computed microscopy
Xia, Jun; Li, Guo; Wang, Lidai; Nasiriavanaki, Mohammadreza; Maslov, Konstantin; Engelbach, John A.; Garbow, Joel R.; Wang, Lihong V.
2014-01-01
Optical-resolution photoacoustic microscopy (OR-PAM) is an emerging technique that directly images optical absorption in tissue at high spatial resolution. To date, the majority of OR-PAM systems are based on single focused optical excitation and ultrasonic detection, limiting the wide-field imaging speed. While one-dimensional multifocal OR-PAM (1D-MFOR-PAM) has been developed, the potential of microlens and transducer arrays has not been fully realized. Here, we present the development of two-dimensional multifocal optical-resolution photoacoustic computed microscopy (2D-MFOR-PACM), using a 2D microlens array and a full-ring ultrasonic transducer array. The 10 × 10 mm2 microlens array generates 1800 optical foci within the focal plane of the 512-element transducer array, and raster scanning the microlens array yields optical-resolution photoacoustic images. The system has improved the in-plane resolution of a full-ring transducer array from ≥100 µm to 29 µm and achieved an imaging time of 36 seconds over a 10 × 10 mm2 field of view. In comparison, the 1D-MFOR-PAM would take more than 4 minutes to image over the same field of view. The imaging capability of the system was demonstrated on phantoms and animals both ex vivo and in vivo. PMID:24322226
Integrated Design of Superconducting Magnets with the CERN Field Computation Program ROXIE
Russenschuck, Stephan; Bazan, M; Lucas, J; Ramberger, S; Völlinger, Christine
2000-01-01
The program package ROXIE has been developed at CERN for the field computation of superconducting accelerator magnets and is used as an approach towards the integrated design of such magnets. It is also an example of fruitful international collaboration in software development. The integrated design of magnets includes feature-based geometry generation, conceptual design using genetic optimization algorithms, optimization of the iron yoke (both in 2d and 3d) using deterministic methods, end-spacer design, and inverse field calculation. The paper describes version 8.0 of ROXIE, which comprises an automatic mesh generator, a hysteresis model for the magnetization in superconducting filaments, the BEM-FEM coupling method for the 3d field calculation, a routine for the calculation of the peak temperature during a quench, and neural network approximations of the objective function for the speed-up of optimization algorithms, amongst others. New results of the magnet design work for the LHC are given as examples.
Field-flood requirements for emission computed tomography with an Anger camera
International Nuclear Information System (INIS)
Rogers, W.L.; Clinthorne, N.H.; Harkness, B.A.; Koral, K.F.; Keyes, J.W. Jr.
1982-01-01
Emission computed tomography with a rotating camera places stringent requirements on camera uniformity and the stability of camera response. In terms of clinical tomographic imaging, we have studied the statistical accuracy required for camera flood correction, the requirements for flood accuracy, the utility and validity of flood and data image smoothing to reduce random noise effects, and the magnitude and effect of camera variations as a function of angular position, energy window, and tuning. Uniformity of the corrected flood response must be held to better than 1% to eliminate image artifacts that are apparent in a million-count image of a liver slice. This requires calibration with an accurate, well-mixed flood source. Both random fluctuations and variations in camera response with rotation must be kept below 1%. To meet the statistical limit, one requires at least 30 million counts for the flood-correction image. Smoothing the flood image alone introduces unacceptable image artifacts. Smoothing both the flood image and the data, however, appears to be a good approach to reducing noise effects. Careful camera tuning and magnetic shield design provide camera stability suitable for present clinical applications.
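The tens-of-millions-of-counts figure follows from simple counting statistics: a pixel accumulating N Poisson-distributed counts has relative standard deviation 1/sqrt(N), so keeping random fluctuations below 1% needs about 10^4 counts per pixel. A stdlib-only sketch; the 64 x 64 correction-matrix size is an assumption for illustration, not taken from the paper.

```python
import math

def counts_for_flood(rel_noise, n_pixels):
    """Total flood-image counts so that each correction-matrix pixel has
    relative Poisson noise below rel_noise (sigma/N = 1/sqrt(N) per pixel)."""
    per_pixel = math.ceil(1.0 / rel_noise ** 2)
    return per_pixel * n_pixels

# 1% per-pixel accuracy on an assumed 64 x 64 correction matrix:
total = counts_for_flood(0.01, 64 * 64)   # 40,960,000
```

With a circular useful field of view covering roughly three quarters of the square matrix, this lands in the same ballpark as the ~30 million counts quoted above.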
The Accurate Particle Tracer Code
Wang, Yulei; Liu, Jian; Qin, Hong; Yu, Zhi
2016-01-01
The Accurate Particle Tracer (APT) code is designed for large-scale particle simulations on dynamical systems. Based on a large variety of advanced geometric algorithms, APT possesses long-term numerical accuracy and stability, which are critical for solving multi-scale and non-linear problems. Under the well-designed integrated and modularized framework, APT serves as a universal platform for researchers from different fields, such as plasma physics, accelerator physics, space science, fusion…
DEFF Research Database (Denmark)
Paoletti, Valeria; Hansen, Per Christian; Hansen, Mads Friis
2014-01-01
In potential-field inversion, careful management of singular value decomposition components is crucial for obtaining information about the source distribution with respect to depth. In principle, the depth-resolution plot provides a convenient visual tool for this analysis, but its computational … on memory and computing time. We used the ApproxDRP to study retrievable depth resolution in inversion of the gravity field of the Neapolitan Volcanic Area. Our main contribution is the combined use of the Lanczos bidiagonalization algorithm, established in the scientific computing community, and the depth …
Accurate determination of antenna directivity
DEFF Research Database (Denmark)
Dich, Mikael
1997-01-01
The derivation of a formula for accurate estimation of the total radiated power from a transmitting antenna, for which the radiated power density is known in a finite number of points on the far-field sphere, is presented. The main application of the formula is the determination of directivity from power-pattern measurements. The derivation is based on the theory of spherical wave expansion of electromagnetic fields, which also establishes a simple criterion for the required number of samples of the power density. An array antenna consisting of Hertzian dipoles is used to test the accuracy and rate of convergence …
Wainwright, Carroll L.
2012-09-01
I present a numerical package (CosmoTransitions) for analyzing finite-temperature cosmological phase transitions driven by single or multiple scalar fields. The package analyzes the different vacua of a theory to determine their critical temperatures (where the vacuum energy levels are degenerate), their supercooling temperatures, and the bubble wall profiles which separate the phases and describe their tunneling dynamics. I introduce a new method of path deformation to find the profiles of both thin- and thick-walled bubbles. CosmoTransitions is freely available for public use.
Program summary
Program Title: CosmoTransitions
Catalogue identifier: AEML_v1_0
Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AEML_v1_0.html
Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland
Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html
No. of lines in distributed program, including test data, etc.: 8775
No. of bytes in distributed program, including test data, etc.: 621096
Distribution format: tar.gz
Programming language: Python.
Computer: Developed on a 2009 MacBook Pro. No computer-specific optimization was performed.
Operating system: Designed and tested on Mac OS X 10.6.8. Compatible with any OS with Python installed.
RAM: Approximately 50 MB, mostly for loading plotting packages.
Classification: 1.9, 11.1.
External routines: SciPy, NumPy, matplotlib
Nature of problem: I describe a program to analyze early-Universe finite-temperature phase transitions with multiple scalar fields. The goal is to analyze the phase structure of an input theory, determine the amount of supercooling at each phase transition, and find the bubble-wall profiles of the nucleated bubbles that drive the transitions.
Solution method: To find the bubble-wall profile, the program assumes that tunneling happens along a fixed path in field space. This reduces the equations of motion to one dimension, which can then be solved using the overshoot/undershoot method.
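The first task named above, finding the temperature at which two vacua are degenerate, can be illustrated on the standard one-field finite-temperature potential V(phi, T) = D(T^2 - T0^2) phi^2 - E T phi^3 + (lambda/4) phi^4. Writing V = phi^2 (a - b phi + c phi^2), the broken minimum is degenerate with the origin exactly when b^2 = 4ac. A stdlib-only sketch with illustrative coefficient values (not taken from the paper or the package):

```python
import math

# Illustrative coefficients of V(phi, T); all values are assumptions.
D, E, lam, T0 = 0.1, 0.01, 0.05, 100.0

def degeneracy(T):
    """Zero exactly when the broken minimum is degenerate with phi = 0.

    With a = D*(T^2 - T0^2), b = E*T, c = lam/4, degeneracy requires the
    quadratic factor a - b*phi + c*phi^2 to have a double root: b^2 = 4ac.
    """
    a, b, c = D * (T * T - T0 * T0), E * T, lam / 4.0
    return b * b - 4.0 * a * c

def bisect(f, lo, hi, tol=1e-12):
    """Plain bisection; assumes f(lo) and f(hi) bracket a single root."""
    flo = f(lo)
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if (f(mid) > 0) == (flo > 0):
            lo, flo = mid, f(mid)
        else:
            hi = mid
    return 0.5 * (lo + hi)

Tc = bisect(degeneracy, T0, 2.0 * T0)
# Closed form for this potential: Tc = T0 * sqrt(lam*D / (lam*D - E^2))
assert abs(Tc - T0 * math.sqrt(lam * D / (lam * D - E * E))) < 1e-6
```

CosmoTransitions does the analogous search numerically for arbitrary multi-field potentials, where no closed form exists.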
Computational studies of the effect of magnetic field ''ripple'' on neutral beam heating of ZEPHYR
International Nuclear Information System (INIS)
Lister, G.G.; Gruber, O.
1981-01-01
The results of computations to estimate the heating efficiency of neutral injection in the proposed ZEPHYR experiment are presented. A suitably modified version of the Monte-Carlo neutral deposition and orbit-following code FREYA was used for these calculations, in which particular emphasis has been placed on the effects of toroidal field ripple. We find that the ripple associated with the preliminary design of the experiment (±6%) would result in intolerable energy losses due to ''ripple trapping'' of the fast ions produced by the neutral beam, and insufficient heating of the central plasma. The necessary conditions for ignition can be obtained with a total heating power of 25 MW provided the ripple can be reduced to ±1%, in which case energy losses could be kept below 30%. These results are compatible with those found from transport code calculations of the losses to be expected due to ripple-enhanced thermal conduction in the plasma.
Burtyka, Filipp
2018-01-01
The paper considers algorithms for finding diagonalizable and non-diagonalizable roots (so-called solvents) of an arbitrary monic unilateral second-order matrix polynomial over a prime finite field. These algorithms are based on polynomial matrices (lambda-matrices). This is an extension of existing general methods for computing solvents of matrix polynomials over the field of complex numbers. We analyze how the techniques for complex numbers can be adapted to a finite field and estimate the asymptotic complexity of the obtained algorithms.
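For intuition about what a solvent is (not as a stand-in for the paper's lambda-matrix algorithms), solvents of a monic quadratic matrix polynomial X^2 + A X + B = 0 over GF(p) can be found by exhaustive search when n and p are tiny. A hypothetical 2 x 2 example over GF(3):

```python
import itertools
import numpy as np

def solvents(A, B, p):
    """All X with X^2 + A X + B = 0 (mod p), by brute force over GF(p)^(n x n).

    Only feasible for tiny n and p (p**(n*n) candidates); the lambda-matrix
    algorithms in the paper avoid this enumeration entirely.
    """
    n = A.shape[0]
    out = []
    for entries in itertools.product(range(p), repeat=n * n):
        X = np.array(entries).reshape(n, n)
        if ((X @ X + A @ X + B) % p == 0).all():
            out.append(X)
    return out

p = 3
X0 = np.array([[1, 2], [0, 1]])               # plant a known solvent
A = np.array([[0, 1], [1, 0]])
B = (-(X0 @ X0 + A @ X0)) % p                  # choose B so that X0 is a root

roots = solvents(A, B, p)
assert any((X == X0).all() for X in roots)     # the planted solvent is found
```

Unlike the scalar case, the polynomial may have more than two solvents, or none; the search returns all of them.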
Moon, Haksu; Teixeira, Fernando L.; Donderici, Burkay
2014-09-01
Computation of electromagnetic fields due to point sources (Hertzian dipoles) in cylindrically stratified media is a classical problem for which analytical expressions of the associated tensor Green's function have been long known. However, under finite-precision arithmetic, direct numerical computations based on the application of such analytical (canonical) expressions invariably lead to underflow and overflow problems related to the poor scaling of the eigenfunctions (cylindrical Bessel and Hankel functions) for extreme arguments and/or high-order, as well as convergence problems related to the numerical integration over the spectral wavenumber and to the truncation of the infinite series over the azimuth mode number. These problems are exacerbated when a disparate range of values is to be considered for the layers' thicknesses and material properties (resistivities, permittivities, and permeabilities), the transverse and longitudinal distances between source and observation points, as well as the source frequency. To overcome these challenges in a systematic fashion, we introduce herein different sets of range-conditioned, modified cylindrical functions (in lieu of standard cylindrical eigenfunctions), each associated with nonoverlapped subdomains of (numerical) evaluation to allow for stable computations under any range of physical parameters. In addition, adaptively-chosen integration contours are employed in the complex spectral wavenumber plane to ensure convergent numerical integration in all cases. We illustrate the application of the algorithm to problems of geophysical interest involving layer resistivities ranging from 1000 Ω m to 10-8 Ω m, frequencies of operation ranging from 10 MHz down to the low magnetotelluric range of 0.01 Hz, and for various combinations of layer thicknesses.
Elucidation of complicated phenomena in nuclear power field by computation science techniques
International Nuclear Information System (INIS)
Takahashi, Ryoichi
1996-01-01
In this crossover research, the complicated phenomena treated in the nuclear power field are elucidated and, to connect them to engineering application research, high-speed-computer utilization technology is developed and large-scale numerical simulation using it is carried out. As for the scale of calculation, the aim is to realize the world's largest three-dimensional numerical simulations, of about 100 million mesh cells, and to develop the results into engineering research. In the nuclear power plants of the next generation, further improvement of economic efficiency is demanded together with assured safety, and it is important that the design window be large. Confirming the size of the design window quantitatively is not easy, and it is very difficult to separate observed phenomena into elementary events. As a method of forecasting and reproducing complicated phenomena and quantifying the design window, large-scale numerical simulation is promising. The roles of theory, experiment, and computational science are discussed. The system for executing this crossover research is described. (K.I.)
Infinities in Quantum Field Theory and in Classical Computing: Renormalization Program
Manin, Yuri I.
Introduction. The main observable quantities in Quantum Field Theory, correlation functions, are expressed by the celebrated Feynman path integrals. A mathematical definition of them involving a measure and actual integration is still lacking. Instead, it is replaced by a series of ad hoc but highly efficient and suggestive heuristic formulas, such as the perturbation formalism. The latter interprets such an integral as a formal series of finite-dimensional but divergent integrals, indexed by Feynman graphs, the list of which is determined by the Lagrangian of the theory. Renormalization is a prescription that allows one to systematically "subtract infinities" from these divergent terms, producing an asymptotic series for quantum correlation functions. On the other hand, graphs, treated as "flowcharts", also form a combinatorial skeleton of abstract computation theory. Partial recursive functions, which according to Church's thesis exhaust the universe of (semi)computable maps, are generally not everywhere defined, due to potentially infinite searches and loops. In this paper I argue that such infinities can be addressed in the same way as Feynman divergences. More details can be found in [9,10].
Determination of strain fields in porous shape memory alloys using micro-computed tomography
Bormann, Therese; Friess, Sebastian; de Wild, Michael; Schumacher, Ralf; Schulz, Georg; Müller, Bert
2010-09-01
Shape memory alloys (SMAs) belong to the class of 'intelligent' materials, since the metal alloy can change its macroscopic shape as the result of the temperature-induced, reversible martensite-austenite phase transition. SMAs are often applied in medical applications such as stents, hinge-less instruments, artificial muscles, and dental braces. Rapid prototyping techniques, including selective laser melting (SLM), allow fabricating complex porous SMA microstructures. In the present study, the macroscopic shape changes of SMA test structures fabricated by SLM have been investigated by means of micro computed tomography (μCT). For this purpose, the SMA structures were placed into the heating stage of the μCT system SkyScan 1172™ (SkyScan, Kontich, Belgium) to acquire three-dimensional datasets above and below the transition temperature, i.e. at room temperature and at about 80°C, respectively. The two datasets were registered using an affine registration algorithm with nine independent parameters: three for the translation, three for the rotation, and three for the scaling in orthogonal directions. Essentially, the scaling parameters characterize the macroscopic deformation of the SMA structure of interest. Furthermore, applying a non-rigid registration algorithm reveals the three-dimensional strain field of the SMA structure on the micrometer scale. The strain fields obtained will serve for the optimization of the SLM process and, more importantly, of the design of complex-shaped SMA structures for tissue engineering and medical implants.
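The nine-parameter affine transform used for the registration can be composed as translation, rotation, scaling; the macroscopic deformation estimate is then read off from the three scaling parameters. A small sketch with illustrative angles and scale factors (not the measured SMA values, and not the study's registration software):

```python
import numpy as np

def affine_9p(tx, ty, tz, rx, ry, rz, sx, sy, sz):
    """4x4 homogeneous matrix: translation . rotation (Rz Ry Rx) . scaling."""
    T = np.eye(4)
    T[:3, 3] = [tx, ty, tz]
    def rot(axis, a):
        c, s = np.cos(a), np.sin(a)
        R = np.eye(4)
        i, j = [(1, 2), (0, 2), (0, 1)][axis]
        R[i, i] = R[j, j] = c
        R[i, j], R[j, i] = -s, s
        return R
    S = np.diag([sx, sy, sz, 1.0])
    return T @ rot(2, rz) @ rot(1, ry) @ rot(0, rx) @ S

# The scaling parameters recover as the column norms of the upper-left
# 3x3 block, because rotations preserve length and translation does not
# enter that block.
M = affine_9p(0.1, -0.2, 0.05, 0.02, -0.01, 0.03, 1.004, 0.998, 1.010)
scales = np.linalg.norm(M[:3, :3], axis=0)
assert np.allclose(scales, [1.004, 0.998, 1.010])
strain = scales - 1.0    # linear strain per axis
```

In the study, these scale factors compare the room-temperature and 80°C datasets, so `strain` corresponds to the macroscopic thermally induced deformation.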
Application of a computer model to the study of the geothermic field of Mofete, Italy
Energy Technology Data Exchange (ETDEWEB)
Giannone, G.; Turriani, C; Pistellie, E
1984-01-01
The aim of the study was to develop a reliable and comprehensive reservoir simulation package for a better understanding of ''in-situ'' processes pertinent to geothermal reservoir hydrodynamics and thermodynamics, and to enable an assessment of optimum reservoir management strategies as to production and reinjection policies. The study consists of four parts. The first deals with the computer programme. This is based on a programme called ''CHARG'' developed in the US; some adaptation was necessary. The second part concerns the fall-off and pit-tests of the geothermal well ''Mofete 2'' close to Naples. This has been a crucial test for the CHARG model, using asymmetric cylindrical coordinates and 14 different layers. Part three deals with predictions about the longevity of the geothermal field of Mofete. The area is divided into 2500 blocks distributed over 14 layers. Several configurations (various numbers of production and reinjection wells) have been tested. The last chapter deals with a comparison between the ISMES reservoir model, based on the finite element approach, and the AGIP model (finite differences). Both models give nearly the same results when applied to the geothermal field of Travale.
Improvement of portable computed tomography system for on-field applications
Sukrod, K.; Khoonkamjorn, P.; Tippayakul, C.
2015-05-01
In 2010, the Thailand Institute of Nuclear Technology (TINT) received a portable computed tomography (CT) system from the IAEA as part of the Regional Cooperative Agreement (RCA) program. This portable CT system has since been used as the prototype for the development of portable CT systems intended for industrial applications. This paper discusses improvements made in the attempt to utilize the CT system for on-field applications. The system is foreseen to visualize the amount of agarwood in a live tree trunk. Experiments adopting Am-241 as the radiation source were conducted. The Am-241 source was selected since it emits low-energy gamma rays, which should better distinguish small density differences between wood types. Test specimens made of timbers with different densities were prepared and used in the experiments. Cross-sectional views of the test specimens were obtained from the CT system using different scanning parameters. The experimental results are promising, as the images clearly differentiate wood types according to their densities. The optimum scanning parameters were also determined from the experiments. These results encourage the research team to advance to the next phase: experiments with a real tree in the field.
Inversion of potential field data using the finite element method on parallel computers
Gross, L.; Altinay, C.; Shaw, S.
2015-11-01
In this paper we present a formulation of the joint inversion of potential field anomaly data as an optimization problem with partial differential equation (PDE) constraints. The problem is solved using the iterative Broyden-Fletcher-Goldfarb-Shanno (BFGS) method, with the Hessian operator of the regularization and cross-gradient components of the cost function as preconditioner. We show that each iterative step requires the solution of several PDEs, namely for the potential fields, for the adjoint defects, and for the application of the preconditioner. Extending the traditional discrete formulation, the BFGS method is applied to continuous descriptions of the unknown physical properties in combination with an appropriate integral form of the dot product. The PDEs can easily be solved using standard conforming finite element methods (FEMs) with potentially different resolutions. For two examples we demonstrate that the number of PDE solutions required to reach a given tolerance in the BFGS iteration is controlled by the weighting of the regularization and cross-gradient terms but is independent of the resolution of the PDE discretization, and that as a consequence the method is weakly scalable with the number of cells on parallel computers. We also show a comparison with the UBC-GIF GRAV3D code.
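The optimization structure described above can be illustrated with a minimal Python sketch (not the authors' FEM implementation): a toy regularized least-squares inversion solved with a quasi-Newton method, where the random matrix G stands in for the PDE-based forward model and alpha plays the role of the regularization weighting.

```python
import numpy as np
from scipy.optimize import minimize

# Toy stand-in for the PDE forward operator: predicted anomaly d = G @ m
# for a discretized property model m (values are illustrative only).
rng = np.random.default_rng(0)
G = rng.standard_normal((40, 100)) / 10.0
m_true = np.zeros(100)
m_true[40:60] = 1.0
d_obs = G @ m_true

alpha = 1e-2  # regularization weight (its size controls iteration count)

def cost(m):
    r = G @ m - d_obs
    return 0.5 * r @ r + 0.5 * alpha * m @ m

def grad(m):
    return G.T @ (G @ m - d_obs) + alpha * m

# Quasi-Newton (limited-memory BFGS) minimization of the regularized misfit
res = minimize(cost, np.zeros(100), jac=grad, method="L-BFGS-B")
```

In the paper's setting each gradient evaluation would itself require PDE solves (forward and adjoint) rather than matrix products.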
Grating-based X-ray Dark-field Computed Tomography of Living Mice.
Velroyen, A; Yaroshenko, A; Hahn, D; Fehringer, A; Tapfer, A; Müller, M; Noël, P B; Pauwels, B; Sasov, A; Yildirim, A Ö; Eickelberg, O; Hellbach, K; Auweter, S D; Meinel, F G; Reiser, M F; Bech, M; Pfeiffer, F
2015-10-01
Changes in x-ray attenuating tissue caused by lung disorders like emphysema or fibrosis are subtle and thus only resolved by high-resolution computed tomography (CT). The structural reorganization, however, strongly influences lung function. Dark-field CT (DFCT), based on small-angle scattering of x-rays, reveals such structural changes even at resolutions coarser than the pulmonary network and thus provides access to their anatomical distribution. In this proof-of-concept study we present in vivo x-ray DFCTs of the lungs of a healthy, an emphysematous and a fibrotic mouse. The tomographies show excellent depiction of the distribution of structural - and thus indirectly functional - changes in lung parenchyma, on single-modality dark-field slices as well as on multimodal fusion images. We therefore anticipate numerous applications of DFCT in diagnostic lung imaging. We introduce a scatter-based Hounsfield unit (sHU) scale to facilitate comparability of scans. On this newly defined sHU scale, the pathophysiological changes caused by emphysema and fibrosis shift the values towards lower numbers compared to healthy lung tissue.
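The abstract does not spell out the sHU definition; by analogy with the conventional attenuation-based Hounsfield scale, one plausible form normalizes the dark-field (linear diffusion) signal $\epsilon$ to reference materials:

```latex
\mathrm{sHU} \;=\; 1000 \cdot \frac{\epsilon - \epsilon_{\mathrm{water}}}{\epsilon_{\mathrm{water}} - \epsilon_{\mathrm{air}}}
```

This is a hedged reconstruction of the idea, not the paper's exact working definition; the key point is that a fixed two-point calibration makes scatter values comparable across scans, just as standard HU does for attenuation.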
Energy Technology Data Exchange (ETDEWEB)
Oxstrand, Johanna; LeBlanc, Katya
2017-06-01
The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human interacts with the procedures, which can be achieved through the use of computer-based procedures (CBPs). A CBP system offers a wide variety of improvements, such as context-driven job aids, integrated human performance tools, and dynamic step presentation. As a step toward the goal of improving procedure use performance, the U.S. Department of Energy Light Water Reactor Sustainability Program researchers, together with the nuclear industry, have been investigating the possibility and feasibility of replacing current paper-based procedures with CBPs. The main purpose of the CBP research conducted at the Idaho National Laboratory was to provide design guidance to the nuclear industry to be used by both utilities and vendors. After studying existing design guidance for CBP systems, the researchers concluded that the majority of the existing guidance is intended for control room CBP systems and does not necessarily address the challenges of designing CBP systems for instructions carried out in the field. Further, the guidance is often presented at a high level, which leaves the designer to interpret what is meant by the guidance and how to specifically implement it. The authors therefore developed design guidance specifically tailored to instructions that are carried out in the field.
International Nuclear Information System (INIS)
Smith, R.A.
1975-06-01
The structural analysis of toroidal field coils in Tokamak fusion machines can be performed with the finite element method. This technique has been employed for design evaluations of toroidal field coils on the Princeton Large Torus (PLT), the Poloidal Divertor Experiment (PDX), and the Tokamak Fusion Test Reactor (TFTR). The application of the finite element method can be simplified with computer programs that are used to generate the input data for the finite element code. There are three areas of data input where significant automation can be provided by supplementary computer codes. These concern the definition of geometry by a node point mesh, the definition of the finite elements from the geometric node points, and the definition of the node point force/displacement boundary conditions. The node point forces in a model of a toroidal field coil are computed from the vector cross product of the coil current and the magnetic field. The computer programs named PDXNODE and ELEMENT are described. The program PDXNODE generates the geometric node points of a finite element model for a toroidal field coil. The program ELEMENT defines the finite elements of the model from the node points and from material property considerations. The program descriptions include input requirements, the output, the program logic, the methods of generating complex geometries with multiple runs, computational time, and computer compatibility. The output formats of PDXNODE and ELEMENT make them compatible with PDXFORC and with two general-purpose finite element computer codes: ANSYS, the Engineering Analysis System written by Swanson Analysis Systems, Inc., and WECAN, the Westinghouse Electric Computer Analysis general-purpose finite element program. The Fortran listings of PDXNODE and ELEMENT are provided.
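The node-point force computation mentioned above reduces to a cross product per node, F = I dL x B. A minimal numpy sketch with illustrative values (this is not PDXFORC's actual data format):

```python
import numpy as np

# Hypothetical per-node data: current element vector I*dL at each node of the
# coil model, and the local magnetic field B there (illustrative values only).
I_dL = np.array([[0.0, 1.0e4, 0.0],
                 [0.0, 1.0e4, 0.0]])   # A*m, current element per node
B = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 2.0]])        # T, local field per node

# Node force F = I dL x B, in newtons, applied as a boundary condition
F = np.cross(I_dL, B)
```

Each row of F would then be supplied to the finite element code as a nodal load.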
Fortenberry, Ryan
The Spitzer Space Telescope observation of spectra most likely attributable to diverse and abundant populations of polycyclic aromatic hydrocarbons (PAHs) in space has led to tremendous interest in these molecules as tracers of the physical conditions in different astrophysical regions. A major challenge in using PAHs as molecular tracers is the complexity of the spectral features in the 3-20 μm region. The large number and vibrational similarity of the putative PAHs responsible for these spectra necessitate the determination of the most accurate basis spectra possible for comparison. It is essential that these spectra be established in order for the regions explored with the newest generation of observatories, such as SOFIA and JWST, to be understood. Current strategies to develop these spectra for individual PAHs involve either matrix-isolation IR measurements or quantum chemical calculations of harmonic vibrational frequencies. These strategies have been employed to develop the successful PAH IR spectral database as a repository of basis functions used to fit astronomically observed spectra, but they are limited in important ways. Both techniques provide an adequate description of the molecules in their electronic, vibrational, and rotational ground state, but these conditions do not represent energetically hot regions for PAHs near the strong radiation fields of stars and are not direct representations of the gas phase. Some non-negligible matrix effects are known in condensed-phase studies, and the inclusion of anharmonicity in quantum chemical calculations is essential to generate physically relevant results, especially for hot bands. While scaling factors in either case can be useful, they are agnostic to the system studied and are not robustly predictive. One strategy that has emerged to calculate the molecular vibrational structure uses vibrational perturbation theory along with a quartic force field (QFF) to account for higher-order derivatives of the potential
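A quartic force field is, in standard notation, a fourth-order Taylor expansion of the potential energy in (normal or internal) displacement coordinates $q_i$:

```latex
V(q) \;=\; \frac{1}{2}\sum_{ij} F_{ij}\, q_i q_j
      \;+\; \frac{1}{6}\sum_{ijk} F_{ijk}\, q_i q_j q_k
      \;+\; \frac{1}{24}\sum_{ijkl} F_{ijkl}\, q_i q_j q_k q_l
```

The force constants $F$ are obtained from quantum chemical energy derivatives; second-order vibrational perturbation theory (VPT2) then yields anharmonic fundamentals and hot bands from these constants, which is what makes the QFF approach attractive relative to scaled harmonic frequencies.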
Rapid phenotyping of crop root systems in undisturbed field soils using X-ray computed tomography.
Pfeifer, Johannes; Kirchgessner, Norbert; Colombi, Tino; Walter, Achim
2015-01-01
X-ray computed tomography (CT) has become a powerful tool for root phenotyping. Compared to rather classical, destructive methods, CT offers various advantages. In pot experiments the growth and development of the same individual root can be followed over time, and in addition the unaltered configuration of the 3D root system architecture (RSA) interacting with a real field soil matrix can be studied. Yet the throughput, which is essential for a more widespread application of CT in basic research or breeding programs, suffers from the bottleneck of rapid and standardized segmentation methods to extract root structures. Using available methods, root segmentation is done to a large extent manually, as it requires a lot of interactive parameter optimization and interpretation, and therefore considerable time. Based on commercially available software, this paper presents a protocol that is faster, more standardized, and more versatile compared to existing segmentation methods, particularly if used to analyse field samples collected in situ. To the knowledge of the authors, this is the first study attempting to develop a comprehensive segmentation method suitable for comparatively large columns sampled in situ, which contain complex, not necessarily connected root systems from multiple plants grown in undisturbed field soil. Root systems from several crops were sampled in situ, and CT-volumes determined with the presented method were compared to the root dry matter of washed root samples. A highly significant (P < 0.01) and strong correlation (R² = 0.84) was found, demonstrating the value of the presented method in the context of field research. Subsequent to segmentation, a method for the measurement of root thickness distribution has been used. Root thickness is a central RSA trait for various physiological research questions, such as root growth in compacted soil or under oxygen-deficient soil conditions, but hardly assessable in high throughput until today, due
International Nuclear Information System (INIS)
Hoeksema, J.T.; Scherrer, P.H.
1986-01-01
Daily magnetogram observations of the large-scale photospheric magnetic field have been made at the John M. Wilcox Solar Observatory at Stanford since May of 1976. These measurements provide a homogeneous record of the changing solar field through most of Solar Cycle 21. Using the photospheric data, the configuration of the coronal and heliospheric fields can be calculated using a potential field-source surface model. This provides a 3-dimensional picture of the heliospheric field evolution during the solar cycle. In this report the authors present the complete set of synoptic charts of the measured photospheric magnetic field, the computed field at the source surface, and the coefficients of the multipole expansion of the coronal field. The general underlying structure of the solar and heliospheric fields, which determines the environment for solar-terrestrial relations and provides the context within which solar-activity-related events occur, can be approximated from these data.
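In the standard potential field-source surface construction, the corona between the photosphere ($r = R_\odot$) and the source surface ($r = R_{ss}$, typically taken near $2.5\,R_\odot$) is current-free, so $\mathbf{B} = -\nabla\Phi$ with $\nabla^2\Phi = 0$, and the scalar potential has the multipole expansion

```latex
\Phi(r,\theta,\phi) \;=\; \sum_{l,m}\left[a_{lm}\, r^{\,l} + b_{lm}\, r^{-(l+1)}\right] Y_{lm}(\theta,\phi)
```

The coefficients $a_{lm}$, $b_{lm}$ are fixed by matching the measured photospheric radial field at $r = R_\odot$ and requiring the field to be purely radial at $r = R_{ss}$; these are the multipole coefficients tabulated in the report.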
Ensemble of Neural Network Conditional Random Fields for Self-Paced Brain Computer Interfaces
Directory of Open Access Journals (Sweden)
Hossein Bashashati
2017-07-01
Classification of EEG signals in self-paced Brain Computer Interfaces (BCIs) is an extremely challenging task. The main difficulty stems from the fact that the start time of a control task is not defined; therefore it is imperative to exploit the characteristics of the EEG data to the extent possible. In sensory-motor self-paced BCIs, while performing the mental task, the user's brain goes through several well-defined internal state changes. Applying appropriate classifiers that can capture these state changes and exploit the temporal correlation in EEG data can enhance the performance of the BCI. In this paper, we propose an ensemble learning approach for self-paced BCIs. We use Bayesian optimization to train several different classifiers on different parts of the BCI hyperparameter space. We call each of these classifiers a Neural Network Conditional Random Field (NNCRF). An NNCRF is a combination of a neural network and a conditional random field (CRF). As in the standard CRF, the NNCRF is able to model the correlation between adjacent EEG samples. However, the NNCRF can also model the nonlinear dependencies between the input and the output, which makes it more powerful than the standard CRF. We compare the performance of our algorithm to those of three popular sequence labeling algorithms (Hidden Markov Models, Hidden Markov Support Vector Machines, and CRF), and to two classical classifiers (Logistic Regression and Support Vector Machines). The classifiers are compared for two cases: when the ensemble learning approach is used and when it is not. The data used in our studies are from the BCI competition IV and the SM2 dataset. We show that our algorithm is considerably superior to the other approaches in terms of the Area Under the Curve (AUC) of the BCI system.
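The sequence-decoding step shared by CRF-type models can be sketched with a generic Viterbi pass; this is a textbook illustration, not the authors' NNCRF code. In an NNCRF the per-sample label scores (here the `emissions` array) would come from a neural network, while the transition matrix captures the correlation between adjacent EEG samples.

```python
import numpy as np

def viterbi(emissions, transitions):
    """Most likely label sequence for a linear-chain model (log-domain scores).

    emissions:   (T, K) per-sample label scores, e.g. neural network outputs
    transitions: (K, K) label-to-label transition scores, as in a linear CRF
    """
    T, K = emissions.shape
    score = emissions[0].copy()            # best score ending in each label
    back = np.zeros((T, K), dtype=int)     # backpointers for path recovery
    for t in range(1, T):
        # cand[i, j]: best path ending in label i at t-1, then label j at t
        cand = score[:, None] + transitions + emissions[t][None, :]
        back[t] = cand.argmax(axis=0)
        score = cand.max(axis=0)
    path = [int(score.argmax())]
    for t in range(T - 1, 0, -1):          # trace backpointers
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With strongly negative off-diagonal transition scores the decoder favors staying in one state, which is how temporal smoothness is imposed on the predicted control/no-control labels.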
Fujii, K.
1983-01-01
A method for generating three-dimensional, finite difference grids about complicated geometries by using Poisson equations is developed. The inhomogeneous terms are automatically chosen such that orthogonality and spacing restrictions at the body surface are satisfied. Spherical variables are used to avoid the axis singularity, and an alternating-direction-implicit (ADI) solution scheme is used to accelerate the computations. Computed results are presented that show the capability of the method. Since most of the results presented have been used as grids for flow-field computations, this indicates that the method is a useful tool for generating three-dimensional grids about complicated geometries.
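In elliptic (Poisson) grid generation the computational coordinates are taken to satisfy Poisson equations whose right-hand sides control spacing and orthogonality; in the usual form, for coordinates $\xi, \eta, \zeta$:

```latex
\nabla^2 \xi = P(\xi,\eta,\zeta), \qquad
\nabla^2 \eta = Q(\xi,\eta,\zeta), \qquad
\nabla^2 \zeta = R(\xi,\eta,\zeta)
```

As the abstract notes, the control functions $P$, $Q$, $R$ are chosen automatically so that the grid meets the body surface with the prescribed spacing and orthogonality; in practice the equations are inverted and solved for the physical coordinates on the uniform computational grid.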
Motta, Mario; Zhang, Shiwei
2017-11-14
We address the computation of ground-state properties of chemical systems and realistic materials within the auxiliary-field quantum Monte Carlo method. The phase constraint to control the Fermion phase problem requires the random walks in Slater determinant space to be open-ended with branching. This in turn makes it necessary to use back-propagation (BP) to compute averages and correlation functions of operators that do not commute with the Hamiltonian. Several BP schemes are investigated, and their optimization with respect to the phaseless constraint is considered. We propose a modified BP method for the computation of observables in electronic systems, discuss its numerical stability and computational complexity, and assess its performance by computing ground-state properties in several molecular systems, including small organic molecules.
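Schematically, back-propagation addresses the fact that the mixed estimator $\langle\Psi_T|\hat O|\Psi\rangle / \langle\Psi_T|\Psi\rangle$ is exact only for operators commuting with the Hamiltonian; the BP estimator instead propagates the trial state for an extra imaginary time $\beta$:

```latex
\langle \hat O \rangle_{\mathrm{BP}} \;\approx\;
\frac{\langle \Psi_T|\, e^{-\beta \hat H}\, \hat O\, |\Psi\rangle}
     {\langle \Psi_T|\, e^{-\beta \hat H}\, |\Psi\rangle}
```

In practice the operator $e^{-\beta \hat H}$ is evaluated by retracing the walkers' auxiliary-field paths backwards. This is a schematic form of the estimator, not the paper's exact working expressions, which must additionally handle the phaseless constraint and branching.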
Current state of standardization in the field of dimensional computed tomography
International Nuclear Information System (INIS)
Bartscher, Markus; Härtig, Frank; Neuschaefer-Rube, Ulrich; Sato, Osamu
2014-01-01
Industrial x-ray computed tomography (CT) is a well-established non-destructive testing (NDT) technology and has been in use for decades. Moreover, CT has also started to become an important technology for dimensional metrology. But the requirements on dimensional CTs, i.e., on performing coordinate measurements with CT, are different from NDT. For dimensional measurements, the position of interfaces or surfaces is of importance, while this is often less critical in NDT. Standardization plays an important role here as it can create trust in new measurement technologies as is the case for dimensional CT. At the international standardization level, the ISO TC 213 WG 10 is working on specifications for dimensional CT. This paper highlights the demands on international standards in the field of dimensional CT and describes the current developments from the viewpoint of representatives of national and international standardization committees. Key aspects of the discussion are the material influence on the length measurement error E and how E can best be measured. A respective study was performed on hole plates as new reference standards for error testing of length measurements incorporating the material influence. We performed corresponding measurement data analysis and present a further elaborated hole plate design. The authors also comment on different approaches currently pursued and give an outlook on upcoming developments as far as they can be foreseen. (paper)
International Nuclear Information System (INIS)
Kulikov, N.Ya.; Snitko, Eh.I.; Rasputnis, A.M.; Solodov, V.P.
1976-01-01
The system of in-core (intrareactor) control over the reactors of the Byeloyarskaya Atomic Station is described. In the second unit of the station, use is made of direct-charge emission detectors installed in the central apertures of the superheater channels and operating reliably at temperatures up to 750 deg C. The detectors of the first and second units are connected to a computer which sends the results of processing the signals to the printer, while the deviation signals go to the mnemonic panels of the reactors. The good working order of the detectors is checked by comparison with zero as well as with the mean detector current for the reactor concerned. The application of the in-core control system has allowed the stable thermal power to be increased from 480-500 to 530 MW and makes it possible to control and maintain the neutron field with a relative error of 3-4%. The structural scheme of the system of in-core control is given.
International Nuclear Information System (INIS)
Kansa, E.; Shumlak, U.; Tsynkov, S.
2013-01-01
Confining dense plasma in a field reversed configuration (FRC) is considered a promising approach to fusion. Numerical simulation of this process requires setting artificial boundary conditions (ABCs) for the magnetic field because whereas the plasma itself occupies a bounded region (within the FRC coils), the field extends from this region all the way to infinity. If the plasma is modeled using single fluid magnetohydrodynamics (MHD), then the exterior magnetic field can be considered quasi-static. This field has a scalar potential governed by the Laplace equation. The quasi-static ABC for the magnetic field is obtained using the method of difference potentials, in the form of a discrete Calderon boundary equation with projection on the artificial boundary shaped as a parallelepiped. The Calderon projection itself is computed by convolution with the discrete fundamental solution on the three-dimensional Cartesian grid.
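The quasi-static exterior problem underlying the ABC is the scalar Laplace problem for the magnetic field outside the plasma region $\Omega$:

```latex
\mathbf{B} = -\nabla \varphi, \qquad
\nabla^2 \varphi = 0 \ \ \text{in } \mathbb{R}^3 \setminus \Omega, \qquad
\varphi \to 0 \ \ \text{as } |\mathbf{x}| \to \infty
```

The Calderon boundary equation with projection then encodes this exterior behavior as a relation among the field's values on the parallelepiped-shaped artificial boundary, so the infinite exterior never has to be gridded.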
Energy Technology Data Exchange (ETDEWEB)
Leao Junior, Reginaldo G.; Oliveira, Arno H. de; Mourao, Arnaldo P. [Universidade Federal de Minas Gerais (UFMG), Belo Horizonte, MG (Brazil). Departmento de Engenharia Nuclear; Sousa, Romulo V. de [Sao Joao de Deus Hospital, Divinopolis, MG (Brazil); Silva, Hugo L.L. [Santa Casa Hospital, Belo Horizonte, MG (Brazil)
2016-07-01
Objectives: This work aimed to obtain data from small x-ray fields that support the hypotheses cited as causes of the difficulties in their dosimetry. For this purpose, the compatibility between the dosimetric field boundary and the geometric field size was verified, and the dose-to-kerma relationship was checked against that expected for conventional fields. Materials and Methods: Computer simulations of fields smaller than 5 x 5 cm² were performed using the Monte Carlo method with the egs_chamber application, derived from the EGSnrc radiation transport code. As particle sources, phase-space files of a Clinac 2100 head model coupled to stereotactic radiosurgery cones were used. Results: The simulations suggested the existence of a plateau, close to 8%, in the discrepancies between the dose FWHM and the nominal diameter of the field. They also indicated a decrease of these values for fields with diameters smaller than 12 mm and larger than 36 mm. Simultaneously, the differences between dose and kerma at depth reached values higher than 14% in the case where the phenomenon is most significant. Conclusion: The data showed that the behavior of small fields indeed clashes with that expected for conventional fields, and that traditional dosimetric conventions do not apply to such fields, which require a specialized approach in the techniques that employ them. Furthermore, the existence of the aforementioned plateau of discrepancies, along with its decrease in fields of less than 15 mm diameter, constitutes a remarkable finding. (author)
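The dose-FWHM-versus-nominal-diameter comparison above relies on extracting the full width at half maximum from a lateral dose profile. A generic helper (not from the paper) with linear interpolation at the half-maximum crossings:

```python
import numpy as np

def fwhm(x, profile):
    """Full width at half maximum of a 1-D dose profile with a single
    central peak, using linear interpolation at the half-max crossings."""
    half = profile.max() / 2.0
    above = np.where(profile >= half)[0]
    i, j = above[0], above[-1]
    # interpolate the rising crossing between samples i-1 and i
    left = x[i-1] + (half - profile[i-1]) * (x[i] - x[i-1]) / (profile[i] - profile[i-1])
    # interpolate the falling crossing between samples j and j+1
    right = x[j] + (half - profile[j]) * (x[j+1] - x[j]) / (profile[j+1] - profile[j])
    return right - left
```

Comparing this dosimetric width against the nominal cone diameter gives the discrepancy quantity discussed in the Results.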
Application of fluence field modulation to proton computed tomography for proton therapy imaging.
Dedes, G; De Angelis, L; Rit, S; Hansen, D; Belka, C; Bashkirov, V; Johnson, R P; Coutrakon, G; Schubert, K E; Schulte, R W; Parodi, K; Landry, G
2017-07-12
This simulation study presents the application of fluence field modulated computed tomography, initially developed for x-ray CT, to proton computed tomography (pCT). By using pencil beam (PB) scanning, fluence modulated pCT (FMpCT) may achieve variable image quality in a pCT image and imaging dose reduction. Three virtual phantoms, a uniform cylinder and two patients, were studied using Monte Carlo simulations of an ideal list-mode pCT scanner. Regions of interest (ROI) were selected for high image quality and only PBs intercepting them preserved full fluence (FF). Image quality was investigated in terms of accuracy (mean) and noise (standard deviation) of the reconstructed proton relative stopping power compared to reference values. Dose calculation accuracy on FMpCT images was evaluated in terms of dose volume histograms (DVH), range difference (RD) for beam's-eye-view (BEV) dose profiles and gamma evaluation. Pseudo FMpCT scans were created from broad beam experimental data acquired with a list-mode pCT prototype. FMpCT noise in ROIs was equivalent to FF images and accuracy better than -1.3% (-0.7%) by using 1% of FF for the cylinder (patients). Integral imaging dose reduction of 37% and 56% was achieved for the two patients at that level of modulation. Corresponding DVHs from proton dose calculation on FMpCT images agreed with those from reference images, and 96% of BEV profiles had RD below 2 mm, compared to only 1% for uniform 1% of FF. Gamma pass rates (2%, 2 mm) were 98% for FMpCT, while for uniform 1% of FF they were as low as 59%. Applying FMpCT to preliminary experimental data showed that low noise levels and accuracy could be preserved in a ROI, down to 30% modulation. We have shown, using both virtual and experimental pCT scans, that FMpCT is potentially feasible and may allow a means of imaging dose reduction for a pCT scanner operating in PB scanning mode. This may be of particular importance to proton therapy given the low integral dose found
Scheimpflug with computational imaging to extend the depth of field of iris recognition systems
Sinharoy, Indranil
Despite the enormous success of iris recognition in close-range and well-regulated spaces for biometric authentication, it has hitherto failed to gain wide-scale adoption in less controlled, public environments. The problem arises from a limitation in imaging called the depth of field (DOF): the limited range of distances beyond which subjects appear blurry in the image. The loss of spatial details in the iris image outside the small DOF limits the iris image capture to a small volume: the capture volume. Existing techniques to extend the capture volume are usually expensive, computationally intensive, or afflicted by noise. Is there a way to combine the classical Scheimpflug principle with modern computational imaging techniques to extend the capture volume? The solution we found is, surprisingly, simple; yet, it provides several key advantages over existing approaches. Our method, called Angular Focus Stacking (AFS), consists of capturing a set of images while rotating the lens, followed by registration and blending of the in-focus regions from the images in the stack. The theoretical underpinnings of AFS arose from a pair of new and general imaging models we developed for Scheimpflug imaging that directly incorporate the pupil parameters. The model revealed that we could register the images in the stack analytically if we pivot the lens at the center of its entrance pupil, rendering the registration process exact. Additionally, we found that a specific lens design further reduces the complexity of image registration, making AFS suitable for real-time performance. We have demonstrated up to an order of magnitude improvement in the axial capture volume over conventional image capture without sacrificing optical resolution and signal-to-noise ratio. The total time required for capturing the set of images for AFS is less than the time needed for a single-exposure, conventional image for the same DOF and brightness level. The net reduction in capture time can
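The blending step of focus stacking can be sketched as follows; this is a simplified, generic illustration (not the AFS implementation), which assumes the stack is already registered and picks, per pixel, the frame with the highest local sharpness measured by the locally averaged squared Laplacian.

```python
import numpy as np
from scipy.ndimage import laplace, uniform_filter

def focus_stack(images):
    """Blend a registered image stack by per-pixel sharpness selection.

    For each frame, compute a local sharpness map (squared Laplacian,
    box-averaged over a 9x9 window), then take each output pixel from
    the frame where that map is largest.
    """
    sharp = np.stack([uniform_filter(laplace(im.astype(float)) ** 2, size=9)
                      for im in images])
    best = sharp.argmax(axis=0)            # index of sharpest frame per pixel
    stack = np.stack(images)
    rows, cols = np.indices(best.shape)
    return stack[best, rows, cols]
```

In AFS proper, the analytic registration enabled by pivoting the lens at the entrance pupil is what makes a per-pixel selection like this valid across the rotated-lens exposures.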
M. Kasemann
Overview In autumn the main focus was to process and handle CRAFT data and to perform the Summer08 MC production. The operational aspects were well covered by regular Computing Shifts, experts on duty and Computing Run Coordination. At the Computing Resource Board (CRB) in October a model to account for service work at Tier 2s was approved. The computing resources for 2009 were reviewed for presentation at the C-RRB. The quarterly resource monitoring is continuing. Facilities/Infrastructure operations Operations during CRAFT data taking ran fine. This proved to be a very valuable experience for T0 workflows and operations. The transfers of custodial data to most T1s went smoothly. A first round of reprocessing started at the Tier-1 centers end of November; it will take about two weeks. The Computing Shifts procedure was tested full scale during this period and proved to be very efficient: 30 Computing Shifts Persons (CSP) and 10 Computing Resources Coordinators (CRC). The shift program for the shut down w...
International Nuclear Information System (INIS)
von Milczewski, J.; Diercksen, G.H.; Uzer, T.
1996-01-01
A Rydberg atom placed in crossed static electric and magnetic fields is presented as a new testbed for phenomena not possible in two degrees of freedom. We compute the Arnol'd web for this system and explore the time scale and the physical consequences of diffusion along this web. copyright 1996 The American Physical Society
Ipek, Ismail
2010-01-01
The purpose of this study was to investigate the effects of CBI lesson sequence type and cognitive style of field dependence on learning from Computer-Based Cooperative Instruction (CBCI) in WEB on the dependent measures, achievement, reading comprehension and reading rate. Eighty-seven college undergraduate students were randomly assigned to…
Das, Abhiram; Schneider, Hannah; Burridge, James; Ascanio, Ana Karine Martinez; Wojciechowski, Tobias; Topp, Christopher N; Lynch, Jonathan P; Weitz, Joshua S; Bucksch, Alexander
2015-01-01
Plant root systems are key drivers of plant function and yield. They are also under-explored targets to meet global food and energy demands. Many new technologies have been developed to characterize crop root system architecture (CRSA). These technologies have the potential to accelerate progress in understanding the genetic control and environmental response of CRSA. Putting this potential into practice requires new methods and algorithms to analyze CRSA in digital images. Most prior approaches have solely focused on the estimation of root traits from images, yet no integrated platform exists that allows easy and intuitive access to trait extraction and analysis methods from images combined with storage solutions linked to metadata. Automated high-throughput phenotyping methods are increasingly used in laboratory-based efforts to link plant genotype with phenotype, whereas similar field-based studies remain predominantly manual and low-throughput. Here, we present an open-source phenomics platform, "DIRT", as a means to integrate scalable supercomputing architectures into field experiments and analysis pipelines. DIRT is an online platform that enables researchers to store images of plant roots, measure dicot and monocot root traits under field conditions, and share data and results within collaborative teams and the broader community. The DIRT platform seamlessly connects end-users with large-scale compute "commons", enabling the estimation and analysis of root phenotypes from field experiments of unprecedented size. DIRT is an automated high-throughput computing and collaboration platform for field-based crop root phenomics. The platform is accessible at http://www.dirt.iplantcollaborative.org/ and hosted on the iPlant cyber-infrastructure using high-throughput grid computing resources of the Texas Advanced Computing Center (TACC). DIRT is a high-volume central depository and high-throughput RSA trait computation platform for plant scientists working on crop roots.
International Nuclear Information System (INIS)
Wang, J.J.H.; Dubberley, J.R.
1989-01-01
Electromagnetic (EM) fields in a three-dimensional, arbitrarily shaped heterogeneous dielectric or biological body illuminated by a plane wave are computed by an iterative conjugate gradient method. The method is a generalized method of moments applied to the volume integral equation. Because no matrix is explicitly formed or stored, the present iterative method is capable of computing EM fields in objects an order of magnitude larger than those that can be handled by the conventional method of moments. Excellent numerical convergence is achieved. Perfect convergence to the result of the conventional moment method using the same basis and weighted with delta functions is consistently achieved in all the cases computed, indicating that these two algorithms (direct and iterative) are equivalent
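The key point of the abstract above, that the system matrix never needs to be formed or stored, can be illustrated with a matrix-free conjugate gradient solver. This is a generic sketch, not the authors' EM code: the operator below is a simple symmetric positive-definite finite-difference stencil standing in for the discretized volume integral operator.

```python
import numpy as np

def apply_operator(x):
    # y = A x for A = I + L, with L the 1D discrete Laplacian (SPD),
    # evaluated stencil-wise so that A itself is never stored.
    y = 3.0 * x
    y[1:] -= x[:-1]
    y[:-1] -= x[1:]
    return y

def conjugate_gradient(apply_A, b, tol=1e-10, max_iter=500):
    # Standard CG: only matrix-vector products apply_A(.) are needed.
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

b = np.ones(200)
x = conjugate_gradient(apply_operator, b)
print(np.max(np.abs(apply_operator(x) - b)))  # small residual; A was never formed
```

Because storage scales with the vector length rather than the squared matrix size, this is exactly what allows iterative solvers to handle problems an order of magnitude larger than a direct method-of-moments factorization.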
I. Fisk
2011-01-01
Introduction CMS distributed computing system performed well during the 2011 start-up. The events in 2011 have more pile-up and are more complex than last year's; this results in longer reconstruction times and events that are harder to simulate. Significant increases in computing capacity were delivered in April for all computing tiers, and the utilisation and load are close to the planning predictions. All computing centre tiers performed their expected functionalities. Heavy-Ion Programme The CMS Heavy-Ion Programme had a very strong showing at the Quark Matter conference, where a large number of analyses were shown. The dedicated heavy-ion reconstruction facility at the Vanderbilt Tier-2 is still involved in some commissioning activities, but is available for processing and analysis. Facilities and Infrastructure Operations Facility and Infrastructure operations have been active with operations and several important deployment tasks. Facilities participated in the testing and deployment of WMAgent and WorkQueue+Request...
P. McBride
The Computing Project is preparing for a busy year in which the primary emphasis of the project moves towards steady operations. Following the very successful completion of the Computing Software and Analysis challenge, CSA06, last fall, we have reorganized and established four groups in the computing area: Commissioning, User Support, Facility/Infrastructure Operations and Data Operations. These groups work closely together with groups from the Offline Project in planning for data processing and operations. Monte Carlo production has continued since CSA06, with about 30M events produced each month to be used for HLT studies and physics validation. Monte Carlo production will continue throughout the year in the preparation of large samples for physics and detector studies, ramping up to 50M events/month for CSA07. Commissioning of the full CMS computing system is a major goal for 2007. Site monitoring is an important commissioning component and work is ongoing to devise CMS-specific tests to be included in Service Availa...
M. Kasemann
Overview During the past three months, activities were focused on data operations, testing and reinforcing shift and operational procedures for data production and transfer, MC production, and user support. Planning of the computing resources in view of the new LHC calendar is ongoing. Two new task forces were created to support the integration work: Site Commissioning, which develops tools helping distributed sites to monitor job and data workflows, and Analysis Support, which collects user experience and feedback during analysis activities and develops tools to increase efficiency. The development plan for DMWM for 2009/2011 was drawn up at the beginning of the year, based on the requirements from the Physics, Computing and Offline groups (see Offline section). The Computing management meeting at FermiLab on February 19th and 20th was an excellent opportunity to discuss the impact of, and to address issues and solutions to, the main challenges facing CMS computing. The lack of manpower is particul...
Computer-aided detection system applied to full-field digital mammograms
International Nuclear Information System (INIS)
Vega Bolivar, Alfonso; Sanchez Gomez, Sonia; Merino, Paula; Alonso-Bartolome, Pilar; Ortega Garcia, Estrella; Munoz Cacho, Pedro; Hoffmeister, Jeffrey W.
2010-01-01
Background: Although mammography remains the mainstay for breast cancer screening, it is an imperfect examination with a sensitivity of 75-92% for breast cancer. Computer-aided detection (CAD) has been developed to improve mammographic detection of breast cancer. Purpose: To retrospectively estimate CAD sensitivity and false-positive rate with full-field digital mammograms (FFDMs). Material and Methods: CAD was used to evaluate 151 cases of ductal carcinoma in situ (DCIS) (n=48) and invasive breast cancer (n=103) detected with FFDM. Retrospectively, CAD sensitivity was estimated based on breast density, mammographic presentation, histopathology type, and lesion size. CAD false-positive rate was estimated with screening FFDMs from 200 women. Results: CAD detected 93% (141/151) of cancer cases: 97% (28/29) in fatty breasts, 94% (81/86) in breasts containing scattered fibroglandular densities, 90% (28/31) in heterogeneously dense breasts, and 80% (4/5) in extremely dense breasts. CAD detected 98% (54/55) of cancers manifesting as calcifications, 89% (74/83) as masses, and 100% (13/13) as mixed masses and calcifications. CAD detected 92% (73/79) of invasive ductal carcinomas, 89% (8/9) of invasive lobular carcinomas, 93% (14/15) of other invasive carcinomas, and 96% (46/48) of DCIS. CAD sensitivity for cancers 1-10 mm was 87% (47/54); 11-20 mm, 99% (70/71); 21-30 mm, 86% (12/14); and larger than 30 mm, 100% (12/12). The CAD false-positive rate was 2.5 marks per case. Conclusion: CAD with FFDM showed a high sensitivity in identifying cancers manifesting as calcifications or masses. CAD sensitivity was maintained in small lesions (1-20 mm) and invasive lobular carcinomas, which have lower mammographic sensitivity
Population coding and decoding in a neural field: a computational study.
Wu, Si; Amari, Shun-Ichi; Nakahara, Hiroyuki
2002-05-01
This study uses a neural field model to investigate computational aspects of population coding and decoding when the stimulus is a single variable. A general prototype model for the encoding process is proposed, in which neural responses are correlated, with strength specified by a Gaussian function of their difference in preferred stimuli. Based on the model, we study the effect of correlation on the Fisher information, compare the performances of three decoding methods that differ in the amount of encoding information being used, and investigate the implementation of the three methods using a recurrent network. This study not only rediscovers the main results in the existing literature in a unified way, but also reveals important new features, especially when the neural correlation is strong. As the neural correlation of firing becomes larger, the Fisher information decreases drastically. We confirm that as the width of correlation increases, the Fisher information saturates and no longer increases in proportion to the number of neurons. However, we prove that as the width increases further, wider than √2 times the effective width of the tuning function, the Fisher information increases again, and it increases without limit in proportion to the number of neurons. Furthermore, we clarify the asymptotic efficiency of the maximum likelihood inference (MLI) type of decoding methods for correlated neural signals. We show that when the correlation covers a nonlocal range of the population (excepting uniform correlation and the case when the noise is extremely small), the MLI type of method, whose decoding error follows a Cauchy-type distribution, is not asymptotically efficient. This implies that the variance is no longer adequate to measure decoding accuracy.
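The effect of correlation on the Fisher information described above can be illustrated numerically. For additive Gaussian noise with covariance C and tuning curves f(x), the Fisher information is I(x) = f'(x)ᵀ C⁻¹ f'(x). The tuning and correlation parameters below are illustrative choices, not those of the paper.

```python
import numpy as np

N = 60
prefs = np.linspace(-5, 5, N)          # preferred stimuli of the population
amp, width = 1.0, 1.0                  # tuning amplitude and width (assumed)
x = 0.0                                # stimulus value

# Gaussian tuning curves and their derivatives at the stimulus
f = amp * np.exp(-(x - prefs) ** 2 / (2 * width ** 2))
df = f * (prefs - x) / width ** 2

# Noise covariance: variance sigma2; correlation decays as a Gaussian
# of the difference in preferred stimuli (strength b, width w_c, assumed)
sigma2, b, w_c = 0.1, 0.5, 2.0
diff = prefs[:, None] - prefs[None, :]
C = sigma2 * ((1 - b) * np.eye(N) + b * np.exp(-diff ** 2 / (2 * w_c ** 2)))

# Fisher information for additive Gaussian noise: I = df^T C^-1 df
I_corr = df @ np.linalg.solve(C, df)
I_indep = df @ df / sigma2             # same variances, no correlation
print(I_corr, I_indep)                 # correlation reduces the Fisher information here
```

With this smooth, positive correlation structure the correlated population carries much less Fisher information than an independent one of the same size, in line with the drastic decrease the abstract reports for strong correlation.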
Full field image reconstruction is suitable for high-pitch dual-source computed tomography.
Mahnken, Andreas H; Allmendinger, Thomas; Sedlmair, Martin; Tamm, Miriam; Reinartz, Sebastian D; Flohr, Thomas
2012-11-01
The field of view (FOV) in high-pitch dual-source computed tomography (DSCT) is limited by the size of the second detector. The goal of this study was to develop and evaluate a full-FOV image reconstruction technique for high-pitch DSCT. For reconstruction beyond the FOV of the second detector, raw data of the second system were extended to the full dimensions of the first system, using the partly existing data of the first system in combination with a very smooth transition weight function. During the weighted filtered backprojection, the data of the second system were applied with an additional weighting factor. This method was tested for different pitch values from 1.5 to 3.5 on a simulated phantom and on 25 high-pitch DSCT data sets acquired at pitch values of 1.6, 2.0, 2.5, 2.8, and 3.0. Images were reconstructed with FOV sizes of 260 × 260 and 500 × 500 mm. Image quality was assessed by 2 radiologists using a 5-point Likert scale and analyzed with repeated-measures analysis of variance. In phantom and patient data, full-FOV image quality depended on pitch. Where complete projection data from both tube-detector systems were available, image quality was unaffected by pitch changes. Full-FOV image quality was not compromised at a pitch of 1.6 and remained fully diagnostic up to a pitch of 2.0. At higher pitch values, there was an increasing difference in image quality between limited- and full-FOV images (P = 0.0097). With this new image reconstruction technique, full-FOV image reconstruction can be used up to a pitch of 2.0.
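The "very smooth transition weight function" used to blend the extended data is not specified in the abstract; a raised-cosine taper is one common choice and serves here as an illustrative stand-in.

```python
import numpy as np

def transition_weight(r, r_inner, r_outer):
    """Smooth blending weight: 1 inside the second detector's measured
    region, 0 beyond the extension band, with a C^1 raised-cosine taper
    in between. An illustrative stand-in, not the vendors' function."""
    t = np.clip((r - r_inner) / (r_outer - r_inner), 0.0, 1.0)
    return 0.5 * (1.0 + np.cos(np.pi * t))

r = np.linspace(0, 250, 6)                 # radial positions in mm (assumed FOV radii)
print(transition_weight(r, 130, 250))      # weights fall smoothly from 1 to 0
```

A smooth (rather than hard) cutoff avoids introducing high-frequency discontinuities into the raw data, which the filtered backprojection kernel would otherwise amplify into streaks.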
International Nuclear Information System (INIS)
Zhao Yongxia; Song Shaojuan; Liu Chuanya; Qi Hengtao; Qin Weichang
2008-01-01
Objective: To investigate the differences in image quality and radiation dose between a full-field digital mammography (FFDM) system and a computed radiography mammography (CRM) system. Methods: The ALVIM mammographic phantom was exposed with the FFDM system using automatic exposure control (AEC) and then with the CRM system, using its dedicated imaging plate, under the same conditions. The FFDM system applied the same kV value and different mAs values (14, 16, 18, 22 and 24 mAs), and the entrance skin dose (ESD) and the average gland dose (AGD) were recorded for each exposure setting. All images were read by five experienced radiologists under the same conditions and scored on a 5-point scale. Receiver operating characteristic (ROC) curves were then drawn and the detection probability (P_det) values were calculated. The data were statistically processed with ANOVA. Results: The P_det values of microcalcifications and masses were higher with the FFDM system than with the CRM system at the same dose (1.36 mGy); the largest difference in the P_det value was 0.215 for microcalcifications and 0.245 for masses. In comparison with the CRM system, the radiation dose of the FFDM system could be reduced at the same P_det value: the ESD was reduced by 26% and the AGD by 41%. When the mAs value exceeded the AEC value, the P_det value remained almost unchanged even though the radiation dose increased. Conclusions: The detection rates of microcalcifications and masses with the FFDM system are superior to those of the CRM system at the same dose, and the radiation dose of the FFDM system is lower than that of the CRM system for the same image quality. (authors)
Cai, Sophie; Elze, Tobias; Bex, Peter J; Wiggs, Janey L; Pasquale, Louis R; Shen, Lucy Q
2017-04-01
To assess the clinical validity of visual field (VF) archetypal analysis, a previously developed machine learning method for decomposing any Humphrey VF (24-2) into a weighted sum of clinically recognizable VF loss patterns. For each of 16 previously identified VF loss patterns ("archetypes," denoted AT1 through AT16), we screened 30,995 reliable VFs to select 10-20 representative patients whose VFs had the highest decomposition coefficients for each archetype. VF global indices and patient ocular and demographic features were extracted retrospectively. Based on resemblances between VF archetypes and clinically observed VF patterns, hypotheses were generated for associations between certain VF archetypes and clinical features, such as an association between AT6 (central island, representing severe VF loss) and large cup-to-disk ratio (CDR). Distributions of the selected clinical features were compared between representative eyes of certain archetypes and all other eyes using the two-tailed t-test or Fisher exact test. 243 eyes from 243 patients were included, representative of AT1 through AT16. CDR was more often ≥ 0.7 among eyes representative of AT6 (central island; p = 0.002), AT10 (inferior arcuate defect; p = 0.048), AT14 (superior paracentral defect; p = 0.016), and AT16 (inferior paracentral defect; p = 0.016) than other eyes. CDR was more often 6D (p = 0.069). Shared clinical features between computationally derived VF archetypes and clinically observed VF patterns support the clinical validity of VF archetypal analysis.
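Decomposing a VF into a weighted sum of archetypes is, at readout time, a constrained least-squares problem. The sketch below solves it with nonnegative least squares via projected gradient descent on synthetic data; the dimensions (54 test locations for a 24-2 field, 16 archetypes) follow the text, but the archetype matrix and the solver are illustrative assumptions, not the authors' archetypal-analysis method.

```python
import numpy as np

def nnls_pg(A, y, n_iter=10000):
    """Nonnegative least squares by projected gradient descent:
    minimize ||A w - y||^2 subject to w >= 0. Step size is 1/L,
    with L the Lipschitz constant of the gradient (sigma_max(A)^2)."""
    lr = 1.0 / np.linalg.norm(A, 2) ** 2
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        w = np.maximum(w - lr * (A.T @ (A @ w - y)), 0.0)
    return w

rng = np.random.default_rng(0)
archetypes = rng.random((54, 16))       # hypothetical archetype matrix (54 VF points x 16 ATs)
true_w = np.zeros(16)
true_w[[5, 9]] = [0.7, 0.3]             # e.g. a mix dominated by AT6 with some AT10
vf = archetypes @ true_w                # synthetic "measured" visual field
w = nnls_pg(archetypes, vf)
print(w[[5, 9]])                        # dominant decomposition coefficients recovered
```

Selecting, for each archetype, the eyes with the highest decomposition coefficient (as the study does) then amounts to ranking patients by the corresponding entry of `w`.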
Directory of Open Access Journals (Sweden)
Timothy Olding
2011-01-01
Full Text Available This paper explores the combination of cone beam optical computed tomography with an N-isopropylacrylamide (NIPAM)-based polymer gel dosimeter for three-dimensional dose imaging of small field deliveries. Initial investigations indicate that cone beam optical imaging of polymer gels is complicated by scattered stray light perturbation. This can lead to significant dosimetry failures in comparison to dose readout by magnetic resonance imaging (MRI). For example, only 60% of the voxels from an optical CT dose readout of a 1 l dosimeter passed a two-dimensional Low's gamma test (at a 3%, 3 mm criterion) relative to a treatment plan for a well-characterized pencil beam delivery. When the same dosimeter was probed by MRI, a 93% pass rate was observed. The optical dose measurement was improved after modifications to the dosimeter preparation, matching its performance with the imaging capabilities of the scanner. With the new dosimeter preparation, 99.7% of the optical CT voxels passed a Low's gamma test at the 3%, 3 mm criterion and 92.7% at a 2%, 2 mm criterion. The fitted interjar dose responses of a small sample set of modified dosimeters prepared (a) from the same gel batch and (b) from different gel batches prepared on the same day were found to be in agreement to within 3.6% and 3.8%, respectively, over the full dose range. Without drawing any statistical conclusions, this experiment gives a preliminary indication that intrabatch or interbatch NIPAM dosimeters prepared on the same day should be suitable for dose sensitivity calibration.
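The gamma test referred to above (due to Low et al.) combines a dose-difference criterion with a distance-to-agreement (DTA) criterion; a point passes when γ < 1. A brute-force 1D sketch follows (the comparisons in the paper are 2D/3D, and the profiles here are synthetic).

```python
import numpy as np

def gamma_index(x, dose_eval, dose_ref, dd=0.03, dta=3.0):
    """Brute-force 1D gamma index: for each reference point, search all
    evaluated points for the minimum combined dose-difference /
    distance-to-agreement metric. dd is the dose criterion as a fraction
    of the maximum reference dose; dta is the distance criterion in mm."""
    dmax = dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        term = ((x - xi) / dta) ** 2 + ((dose_eval - di) / (dd * dmax)) ** 2
        gam[i] = np.sqrt(term.min())
    return gam

x = np.linspace(0, 100, 201)                 # positions in mm
ref = np.exp(-((x - 50) / 12) ** 2)          # reference dose profile (synthetic)
ev = np.exp(-((x - 50.5) / 12) ** 2)         # measurement shifted by 0.5 mm
g = gamma_index(x, ev, ref)
print((g < 1).mean())                        # pass rate at the 3%/3 mm criterion
```

A small spatial shift produces large point-wise dose differences on steep gradients, yet passes easily because the DTA term absorbs it; this is exactly why gamma analysis, rather than plain dose difference, is used for dosimeter-versus-plan comparisons.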
International Nuclear Information System (INIS)
Afanas'ev, A.M.
1987-01-01
The large-scale construction of atomic power stations results in a need for trainers to instruct power-station personnel. The present work considers one problem of developing training computer software, associated with the development of a high-speed algorithm for calculating the neutron field after a control-rod (CR) shift by the operator. The case considered here is that in which training units are developed on the basis of small computers of SM-2 type, which fall significantly short of the BESM-6 and EC-type computers used for the design calculations in terms of speed and memory capacity. Depending on the apparatus for solving the criticality problem, in a two-dimensional single-group approximation, the physical-calculation programs require ∼ 1 min of machine time on a BESM-6 computer, which translates to ∼ 10 min on an SM-2 machine. In practice, this time is even longer, since ultimately it is necessary to determine not the effective multiplication factor K_eff, but rather the local perturbations of the emergency-control (EC) system (to reach criticality) and the change in the neutron field on shifting the CR and EC rods. This long time means that it is very problematic to use physical-calculation programs to work in dialog mode with a computer. The algorithm presented below allows the neutron field following a shift of the CR and EC rods to be calculated in a few seconds on a BESM-6 computer (tens of seconds on an SM-2 machine). This high speed is achieved as a result of the preliminary calculation of the influence function (IF) for each CR. The IF may be calculated at high speed on a computer. Then it is stored in the external memory (EM) and, where necessary, used as the initial information
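The speed-up rests on linearity: once the influence function of each rod has been precomputed and stored, the online step is a superposition rather than a fresh criticality solve. A schematic sketch with made-up dimensions and random stand-in data (the real IFs come from the offline physics calculation):

```python
import numpy as np

rng = np.random.default_rng(1)
n_nodes, n_rods = 500, 8                  # assumed mesh size and rod count

phi0 = 1.0 + rng.random(n_nodes)          # baseline neutron field (stand-in)
IF = rng.random((n_rods, n_nodes)) * 0.01 # precomputed influence functions,
                                          # one per control rod (stored in EM)

def field_after_shift(rod_shifts):
    """Fast online evaluation: linear superposition of the stored
    influence functions, one coefficient per rod-position change.
    (A first-order approximation; the offline solve carries the physics.)"""
    return phi0 + rod_shifts @ IF

shifts = np.zeros(n_rods)
shifts[2] = -0.5                          # e.g. insert rod 2 half-way
phi = field_after_shift(shifts)
print(np.abs(phi - phi0).max())           # perturbation scales with the shift
```

The online cost is one small matrix-vector product, which is why the trainer can respond in seconds on an SM-2 rather than minutes.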
Stieger, Stefan; Lewetz, David; Reips, Ulf-Dietrich
2017-12-06
Researchers are increasingly using smartphones to collect scientific data. To date, most smartphone studies have collected questionnaire data or data from the built-in sensors. So far, few studies have analyzed whether smartphones can also be used to conduct computer-based tasks (CBTs). Using a mobile experience-sampling method study and a computer-based tapping task as examples (N = 246; twice a day for three weeks, 6,000+ measurements), we analyzed how well smartphones can be used to conduct a CBT. We assessed methodological aspects such as potential technologically induced problems, dropout, task noncompliance, and the accuracy of millisecond measurements. Overall, we found few problems: Dropout rate was low, and the time measurements were very accurate. Nevertheless, particularly at the beginning of the study, some participants did not comply with the task instructions, probably because they did not read the instructions before beginning the task. To summarize, the results suggest that smartphones can be used to transfer CBTs from the lab to the field, and that real-world variations across device manufacturers, OS types, and CPU load conditions did not substantially distort the results.
I. Fisk
2013-01-01
Computing activity had ramped down after the completion of the reprocessing of the 2012 data and parked data, but is increasing with new simulation samples for analysis and upgrade studies. Much of the Computing effort is currently involved in activities to improve the computing system in preparation for 2015. Operations Office Since the beginning of 2013, the Computing Operations team successfully re-processed the 2012 data in record time, in part by using opportunistic resources such as the San Diego Supercomputer Center to re-process the primary datasets HTMHT and MultiJet in Run2012D much earlier than planned. The Heavy-Ion data-taking period was successfully concluded in February, collecting almost 500 T. Figure 3: Number of events per month (data) In LS1, our emphasis is to increase the efficiency and flexibility of the infrastructure and operation. Computing Operations is working on separating disk and tape at the Tier-1 sites and the full implementation of the xrootd federation ...
International Nuclear Information System (INIS)
Li Weidong; Siebers, Jeffrey V.; Moore, Joseph A.
2006-01-01
This study develops a method to improve the dosimetric accuracy of computed images for an amorphous silicon flat-panel imager. Radially dependent kernels derived from Monte Carlo simulations are convolved with the treatment-planning system's energy fluence. Multileaf collimator (MLC) beam hardening is accounted for by having separate kernels for open and blocked portions of MLC fields. Field-size-dependent output factors are used to account for the field-size dependence of scatter within the imager. Gamma analysis was used to evaluate open and sliding-window test fields and intensity-modulated patient fields. For each tested field, at least 99.6% of the points had γ<1 with a 3%, 3-mm criterion. With a 2%, 2-mm criterion, between 81% and 100% of points had γ<1. Patient intensity-modulated test fields had 94%-100% of the points with γ<1 with a 2%, 2-mm criterion for all six fields tested. This study demonstrates that including the dependencies of kernel and fluence on radius and beam hardening in the convolution improves its accuracy compared with the use of radius- and beam-hardening-independent kernels; it also demonstrates that the resultant accuracy of the convolution method is sufficient for pretreatment verification of intensity-modulated patient fields
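The core of the method, convolving the planned energy fluence with dose kernels and using a separate kernel for MLC-blocked regions to capture beam hardening, can be sketched as below. The Gaussian kernel shapes, their widths, and the 2% MLC transmission are illustrative assumptions; the paper's kernels are radially dependent and Monte Carlo derived.

```python
import numpy as np

def fft_convolve2d(a, k):
    # Circular convolution with a centred kernel via the FFT.
    return np.real(np.fft.ifft2(np.fft.fft2(a) * np.fft.fft2(np.fft.ifftshift(k))))

n = 64
yy, xx = np.mgrid[-n // 2:n // 2, -n // 2:n // 2]
kernel_open = np.exp(-(xx ** 2 + yy ** 2) / (2 * 2.0 ** 2))     # open-beam kernel (assumed)
kernel_blocked = np.exp(-(xx ** 2 + yy ** 2) / (2 * 3.0 ** 2))  # hardened-beam kernel (assumed)
kernel_open /= kernel_open.sum()
kernel_blocked /= kernel_blocked.sum()

fluence = np.zeros((n, n))
fluence[16:48, 16:48] = 1.0                  # open 32x32-pixel field
blocked = np.zeros((n, n))
blocked[16:48, 30:34] = 1.0                  # MLC-blocked strip inside the field
fluence_open = fluence * (1 - blocked)
fluence_blk = fluence * blocked * 0.02       # assumed MLC leaf transmission

# Predicted imager signal: each fluence component gets its own kernel
image = (fft_convolve2d(fluence_open, kernel_open)
         + fft_convolve2d(fluence_blk, kernel_blocked))
```

Splitting the fluence into open and blocked components before convolution is what lets a single linear model express the different beam spectra under the leaves.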
Sookhak Lari, Kaveh; Johnston, Colin D; Rayner, John L; Davis, Greg B
2018-03-05
Remediation of subsurface systems, including groundwater, soil and soil gas, contaminated with light non-aqueous phase liquids (LNAPLs) is challenging. Field-scale pilot trials of multi-phase remediation were undertaken at a site to determine the effectiveness of recovery options. Sequential LNAPL skimming and vacuum-enhanced skimming, with and without water table drawdown, were trialled over 78 days, in total extracting over 5 m³ of LNAPL. For the first time, a multi-component simulation framework (including the multi-phase multi-component code TMVOC-MP and processing codes) was developed and applied to simulate the broad range of multi-phase remediation and recovery methods used in the field trials. This framework was validated against the sequential pilot trials by comparing predicted and measured LNAPL mass removal rates and compositional changes. The framework was tested on both a Cray supercomputer and a cluster. Simulations mimicked trends in LNAPL recovery rates (from 0.14 to 3 mL/s) across all remediation techniques, each operating over periods of 4-14 days over the 78-day trial. The code also approximated order-of-magnitude compositional changes of hazardous chemical concentrations in extracted gas during vacuum-enhanced recovery. The verified framework enables longer term prediction of the effectiveness of remediation approaches, allowing better determination of remediation endpoints and long-term risks.
Directory of Open Access Journals (Sweden)
Heike Mónika Greschke
2007-09-01
Full Text Available This article aims to introduce an ethnic group inhabiting a common virtual space in the World Wide Web (WWW), while being physically located in different socio-geographical contexts. Potentially global in its geographical extent, this social formation is constituted by means of interrelating virtual-global dimensions with physically grounded parts of the actors' lifeworlds. In addition, the community's social life relies on specific communicative practices joining mediated forms of communication with co-presence based encounters. Ethnographic research in a pluri-local and computer-mediated field poses a set of problems which demand thorough reflection as well as a search for creative solutions. How can the boundaries of the field be determined? What does "being there" signify in such a case? Is it possible to enter the field while sitting at my own desk, just by visiting the respective site in the WWW, simply observing the communication going on without even being noticed by the subjects in the field? Or does "being in the field" imply that I ought to turn into a member of the studied community? Am I supposed to effectively live with the others for a while? And then, what can "living together" actually mean in that case? Will I learn enough about the field simply by participating in its virtual activities? Or do I have to account for the physically grounded dimensions of the actors' lifeworlds as well? Ethnographic research in a pluri-local and computer-mediated field in practice raises a lot of questions regarding the ways of entering the field and being in the field. Some of them will be discussed in this paper by means of reflecting on research experiences gained in the context of a recently concluded case study. URN: urn:nbn:de:0114-fqs0703321
Energy Technology Data Exchange (ETDEWEB)
Moudi, Ehsan; Haghanifar, Sina; Madani, Zahrasadat; Bijani, Ali; Nabavi, Zeynab Sadat [Babol University of Medical Science, Babol (Iran, Islamic Republic of)
2015-09-15
The aim of this study was to investigate the effects of metal artifacts on the accurate diagnosis of root fractures using cone-beam computed tomography (CBCT) images with large and small/limited fields of view (FOVs). Forty extracted molar and premolar teeth were collected. Access canals were made in all teeth using a rotary system. In half of the teeth, fractures were created by the application of mild pressure with a hammer. The teeth were then randomly put into a wax rim on an acryl base designed in the shape of a mandible. CBCT scans were obtained using a Newtom 5G system with FOVs of 18 cm×16 cm and 6 cm×6 cm. A metal pin was then placed into each tooth, and CBCT imaging was again performed using the same fields of view. All scans were evaluated by two oral and maxillofacial radiologists. The specificity, sensitivity, positive predictive value, negative predictive value, and likelihood ratios (positive and negative) were calculated. The maximum levels of sensitivity and specificity (100% and 100%, respectively) were observed in small volume CBCT scans of teeth without pins. The highest negative predictive value was found in the small-volume group without pins, whereas the positive predictive value was 100% in all groups except the large-volume group with pins.
Computer aided detection of clusters of microcalcifications on full field digital mammograms
International Nuclear Information System (INIS)
Ge Jun; Sahiner, Berkman; Hadjiiski, Lubomir M.; Chan, H.-P.; Wei Jun; Helvie, Mark A.; Zhou Chuan
2006-01-01
We are developing a computer-aided detection (CAD) system to identify microcalcification clusters (MCCs) automatically on full field digital mammograms (FFDMs). The CAD system includes six stages: preprocessing; image enhancement; segmentation of microcalcification candidates; false positive (FP) reduction for individual microcalcifications; regional clustering; and FP reduction for clustered microcalcifications. At the stage of FP reduction for individual microcalcifications, a truncated sum-of-squares error function was used to improve the efficiency and robustness of the training of an artificial neural network in our CAD system for FFDMs. At the stage of FP reduction for clustered microcalcifications, morphological features and features derived from the artificial neural network outputs were extracted from each cluster. Stepwise linear discriminant analysis (LDA) was used to select the features. An LDA classifier was then used to differentiate clustered microcalcifications from FPs. A data set of 96 cases with 192 images was collected at the University of Michigan. This data set contained 96 MCCs, of which 28 clusters were proven by biopsy to be malignant and 68 were proven to be benign. The data set was separated into two independent data sets for training and testing of the CAD system in a cross-validation scheme. When one data set was used to train and validate the convolution neural network (CNN) in our CAD system, the other data set was used to evaluate the detection performance. With the use of the truncated error metric, the training of the CNN could be accelerated and the classification performance was improved. The CNN in combination with an LDA classifier could substantially reduce FPs with a small tradeoff in sensitivity. By using the free-response receiver operating characteristic methodology, it was found that our CAD system can achieve a cluster-based sensitivity of 70%, 80%, and 90% at 0.21, 0.61, and 1.49 FPs/image, respectively. For case
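One common form of the truncated sum-of-squares error mentioned above caps each squared error at a threshold, so that hard or mislabeled training candidates stop dominating the gradient; the exact form and threshold used in the paper may differ, so treat this as an illustrative sketch.

```python
import numpy as np

def truncated_sse(y_pred, y_true, t=0.1):
    """Truncated sum-of-squares error: each squared error is capped at
    t^2, limiting the influence of gross outliers during neural-network
    training. The threshold t is an assumed value."""
    e2 = (y_pred - y_true) ** 2
    return np.sum(np.minimum(e2, t ** 2))

y_true = np.array([1.0, 0.0, 1.0, 0.0])
y_pred = np.array([0.9, 0.05, 0.2, 0.95])   # last two are hard/outlier cases
print(truncated_sse(y_pred, y_true), np.sum((y_pred - y_true) ** 2))
```

Because the capped terms contribute zero gradient, the network spends its capacity fitting the typical candidates rather than chasing outliers, which is the robustness and speed benefit the abstract reports.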
de Assis, Thiago A.; Dall’Agnol, Fernando F.
2018-05-01
Numerical simulations are important when assessing the many characteristics of field emission related phenomena. In small simulation domains, the electrostatic effect from the boundaries is known to influence the calculated apex field enhancement factor (FEF) of the emitter, but no established dependence has been reported at present. In this work, we report the dependence of the apex-FEF of a single conducting ellipsoidal emitter on the lateral size, L, and the height, H, of the simulation domain. Firstly, we analyze the error, ε, in the calculation of the apex-FEF as a function of H and L. Importantly, our results show that the effects of H and L on ε are scale invariant, allowing one to predict ε for given ratios L/h and H/h, where h is the height of the emitter. Next, we analyze the fractional change of the apex-FEF, δ, between a single emitter and a pair. We show that small relative errors in the apex-FEF, due to the finite domain size, are sufficient to alter the functional dependence δ(c), where c is the distance between the emitters in the pair. We show that δ(c) obeys a recently proposed power law decay (Forbes 2016 J. Appl. Phys. 120 054302) at sufficiently large distances in the limit of infinite domain size, which is not observed when using the long-established exponential decay (Bonard et al 2001 Adv. Mater. 13 184) or a more sophisticated fitting formula proposed recently by Harris et al (2015 AIP Adv. 5 087182). We show that the inverse-third power law functional dependence is respected for various systems, such as infinite arrays and small clusters of emitters with different shapes. Thus, δ ∝ c^(-m) with m = 3 is suggested to be a universal signature of the charge-blunting effect in small clusters or arrays, at sufficiently large distances between emitters of any shape. These results improve the physical understanding of field electron emission theory, enabling an accurate characterization of emitters in small clusters or arrays.
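The inverse-third power law signature can be checked on simulation output by fitting the exponent on a log-log scale. The sketch below recovers m = 3 from synthetic δ(c) values; the amplitude, separations, and noise level are illustrative, not data from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
c = np.linspace(5.0, 50.0, 30)          # emitter separations (arbitrary units)
# Synthetic |delta|(c) = A * c^-3 with 1% multiplicative noise (assumed)
delta = 2.0 * c ** -3.0 * (1 + 0.01 * rng.standard_normal(c.size))

# Power-law fit: log|delta| = log A - m log c, so -slope estimates m
slope, intercept = np.polyfit(np.log(c), np.log(delta), 1)
print(-slope)   # close to 3, the charge-blunting signature
```

A log-log fit like this also makes it easy to distinguish the power law from an exponential decay, which would curve rather than stay linear in log-log coordinates.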
I. Fisk
2010-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs by users each day, which was already hitting the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion An activity that is still in progress is computing for the heavy-ion program. The heavy-ion events are collected without zero suppression, so the event size is much larger, at roughly 11 MB per event of RAW. The central collisions are more complex and...
M. Kasemann P. McBride Edited by M-C. Sawley with contributions from: P. Kreuzer D. Bonacorsi S. Belforte F. Wuerthwein L. Bauerdick K. Lassila-Perini M-C. Sawley
Introduction More than seventy CMS collaborators attended the Computing and Offline Workshop in San Diego, California, April 20-24th to discuss the state of readiness of software and computing for collisions. Focus and priority were given to preparations for data taking and providing room for ample dialog between groups involved in Commissioning, Data Operations, Analysis and MC Production. Throughout the workshop, aspects of software, operating procedures and issues addressing all parts of the computing model were discussed. Plans for the CMS participation in STEP’09, the combined scale testing for all four experiments due in June 2009, were refined. The article in CMS Times by Frank Wuerthwein gave a good recap of the highly collaborative atmosphere of the workshop. Many thanks to UCSD and to the organizers for taking care of this workshop, which resulted in a long list of action items and was definitely a success. A considerable amount of effort and care is invested in the estimate of the comput...
International Nuclear Information System (INIS)
Reinhart, E.R.; Leon-Salamanca, T.
2004-01-01
The new generation of compact, powerful portable computers has been incorporated into a number of nondestructive evaluation (NDE) systems used to inspect critical areas of the steam turbine and generator units of nuclear power plants. Due to the complex geometry of turbine rotors, generator rotors, retaining rings, and shrunk-on turbine discs, the computers are needed to rapidly calculate the optimum position of an ultrasonic transducer or eddy current probe in order to detect defects at several critical areas. Examples where computers have been used to overcome problems in nondestructive evaluation include: analysis of large numbers of closely spaced near-bore ultrasonic reflectors to determine their potential for link-up in turbine and generator rotor bores; distinguishing ultrasonic crack signals from other reflectors, such as the shrink-fit form reflector detected during ultrasonic scanning of shrunk-on generator retaining rings; and detection and recording of eddy current and ultrasonic signals from defects that could be missed by data acquisition systems with inadequate response. The computers are also used to control scanners to ensure total inspection coverage. To facilitate the use of data from detected discontinuities in conjunction with stress and fracture mechanics analysis programs, the computers provide presentations of flaws in color and in three dimensions. The field computers have been instrumental in allowing inspectors to develop on-site reports that enable the owner/operator to rapidly make run/repair/replace decisions. Examples of recent experiences using field-portable computers in NDE systems are presented, along with anticipated future developments. (author)
International Nuclear Information System (INIS)
Dragt, A.J.; Gluckstern, R.L.
1994-08-01
The University of Maryland Dynamical Systems and Accelerator Theory Group has been carrying out long-term research work in the general area of Dynamical Systems with a particular emphasis on applications to Accelerator Physics. This work is broadly divided into two tasks: the computation of charged particle beam transport and the computation of electromagnetic fields and beam-cavity interactions. Each of these tasks is described briefly. Work is devoted both to the development of new methods and the application of these methods to problems of current interest in accelerator physics including the theoretical performance of present and proposed high energy machines. In addition to its research effort, the Dynamical Systems and Accelerator Theory Group is actively engaged in the education of students and postdoctoral research associates. Substantial progress in research has been made during the past year. These achievements are summarized in the following report
International Nuclear Information System (INIS)
Bates, Brian A.; Cullip, Timothy J.; Rosenman, Julian G.
1995-01-01
Purpose/Objective: To demonstrate that one can obtain a homogeneous dose distribution within a specified gross tumor volume (GTV) while severely limiting the dose to a structure surrounded by that tumor volume. We present three clinical examples below. Materials and Methods: Using planning CT scans from previously treated patients, we designed a variety of radiation treatment plans in which the dose-critical normal structure was blocked, even if it meant blocking some of the tumor. To deal with the resulting dose inhomogeneities within the tumor, we introduced 3D compensation. Examples presented here include (1) blocking the spinal cord segment while treating an entire vertebral body, (2) blocking both kidneys while treating the entire peritoneal cavity, and (3) blocking one parotid gland while treating the oropharynx in its entirety along with regional nodes. A series of multiple planar and non-coplanar beam templates with automatic anatomic blocking and field shaping were designed for each scenario. Three-dimensional compensators were designed that gave the most homogeneous dose distribution for the GTV. For each beam, rays were cast from the beam source through a 2D compensator grid and out through the tumor. The average tumor dose along each ray was then used to adjust the compensator thickness over successive iterations to achieve a uniform average dose. DVH calculations for the GTV, normal structures, and the 'auto-blocked' structure were made and used for inter-plan comparisons. Results: These optimized treatment plans successfully decreased dose to the dose-limiting structure while at the same time preserving or even improving the dose distribution to the tumor volume as compared to traditional treatment plans. Conclusion: The use of 3D compensation allows one to obtain dose distributions that are, theoretically, at least, far superior to those in common clinical use. Sensible beam templates, auto-blocking, auto-field shaping, and 3D compensators form a
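The iterative compensator-thickness adjustment described in the Methods can be sketched in a few lines. The attenuation model and all numbers below are assumptions for illustration only: each ray's average tumor dose is modeled as the open-field dose times exp(-mu * t), and the thickness t of each compensator element is updated over successive iterations until every ray delivers the prescribed average dose.

```python
import numpy as np

mu = 0.5                                    # assumed attenuation coefficient (1/cm)
D_open = np.array([1.3, 1.0, 1.15, 1.25])   # assumed open-field average dose per ray
D_target = 1.0                              # uniform prescribed average dose

t = np.zeros_like(D_open)                   # compensator thickness per grid element
for _ in range(50):
    D = D_open * np.exp(-mu * t)            # current average dose along each ray
    # thicken where the ray is hot, thin where it is cold (thickness >= 0)
    t = np.clip(t + np.log(D / D_target) / mu, 0.0, None)

D_final = D_open * np.exp(-mu * t)
print(D_final)
```

With this simple exponential model the update converges immediately; the clinical procedure instead re-ran the ray casting through patient CT data at every iteration, which is why several iterations were needed.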
DEFF Research Database (Denmark)
Korshoej, Anders Rosendal; Saturnino, Guilherme Bicalho; Rasmussen, Line Kirkegaard
2016-01-01
Objective: The present work proposes a new clinical approach to TTFields therapy of glioblastoma. The approach combines targeted surgical skull removal (craniectomy) with TTFields therapy to enhance the induced electrical field in the underlying tumor tissue. Using computer simulations, we explore the potential of the intervention to improve the clinical efficacy of TTFields therapy of brain cancer. Methods: We used finite element analysis to calculate the electrical field distribution in realistic head models based on MRI data from two patients: one with left cortical/subcortical glioblastoma and one with deeply seated right thalamic anaplastic astrocytoma. Field strength was assessed in the tumor regions before and after virtual removal of bone areas of varying shape and size (10 to 100 mm) immediately above the tumor. Field strength was evaluated before and after tumor resection to assess realistic...
Charlebois, Kathleen; Palmour, Nicole; Knoppers, Bartha Maria
2016-01-01
This study aims to understand the influence of the ethical and legal issues on cloud computing adoption in the field of genomics research. To do so, we adapted Diffusion of Innovation (DoI) theory to enable understanding of how key stakeholders manage the various ethical and legal issues they encounter when adopting cloud computing. Twenty semi-structured interviews were conducted with genomics researchers, patient advocates and cloud service providers. Thematic analysis generated five major themes: 1) Getting comfortable with cloud computing; 2) Weighing the advantages and the risks of cloud computing; 3) Reconciling cloud computing with data privacy; 4) Maintaining trust and 5) Anticipating the cloud by creating the conditions for cloud adoption. Our analysis highlights the tendency among genomics researchers to gradually adopt cloud technology. Efforts made by cloud service providers to promote cloud computing adoption are confronted by researchers' perpetual cost and security concerns, along with a lack of familiarity with the technology. Further underlying those fears are researchers' legal responsibility with respect to the data that is stored on the cloud. Alternative consent mechanisms aimed at increasing patients' control over the use of their data also provide a means to circumvent various institutional and jurisdictional hurdles that restrict access by creating siloed databases. However, the risk of creating new, cloud-based silos may run counter to the goal in genomics research to increase data sharing on a global scale.
Accurate overlaying for mobile augmented reality
Pasman, W; van der Schaaf, A; Lagendijk, RL; Jansen, F.W.
1999-01-01
Mobile augmented reality requires accurate alignment of virtual information with objects visible in the real world. We describe a system for mobile communications to be developed to meet these strict alignment criteria using a combination of computer vision, inertial tracking and low-latency
Spatial Processing of Urban Acoustic Wave Fields from High-Performance Computations
National Research Council Canada - National Science Library
Ketcham, Stephen A; Wilson, D. K; Cudney, Harley H; Parker, Michael W
2007-01-01
.... The objective of this work is to develop spatial processing techniques for acoustic wave propagation data from three-dimensional high-performance computations to quantify scattering due to urban...
Prospects for the applications of computer in the field of domestic nuclear medicinal instrument
International Nuclear Information System (INIS)
Zhao Changhe
1993-01-01
The current situation and prospects of domestic nuclear medical instruments are described in the paper, together with comparisons, from various points of view, of computer applications in nuclear medical instruments with those in other medical instruments.
On the Computation of Degenerate Hopf Bifurcations for n-Dimensional Multiparameter Vector Fields
Directory of Open Access Journals (Sweden)
Michail P. Markakis
2016-01-01
Full Text Available The restriction of an n-dimensional nonlinear parametric system on the center manifold is treated via a new proper symbolic form and analytical expressions of the involved quantities are obtained as functions of the parameters by lengthy algebraic manipulations combined with computer assisted calculations. Normal forms regarding degenerate Hopf bifurcations up to codimension 3, as well as the corresponding Lyapunov coefficients and bifurcation portraits, can be easily computed for any system under consideration.
Computer-assisted training experiment used in the field of thermal energy production (EDF)
International Nuclear Information System (INIS)
Felgines, R.
1982-01-01
In 1981, the EDF carried out an experiment with computer-assisted training (EAO). This new approach, which continued until June 1982, involved about 700 employees, all of whom operated nuclear power stations. The different stages of this experiment, and the lessons which can be drawn from it, are given. The lessons were positive and make it possible to envisage complete coverage of all nuclear power stations by computer-assisted training within a very short space of time. [fr]
Directory of Open Access Journals (Sweden)
David M. Benoit
2011-08-01
Full Text Available We present a theoretical framework for the computation of anharmonic vibrational frequencies for large systems, with a particular focus on determining adsorbate frequencies from first principles. We give a detailed account of our local implementation of the vibrational self-consistent field approach and its correlation corrections. We show that our approach is robust and accurate, and can be easily deployed on computational grids in order to provide an efficient computational tool. We also present results on the vibrational spectrum of hydrogen fluoride on pyrene, on the thiophene molecule in the gas phase, and on small neutral gold clusters.
Two interacting spins in external fields and application to quantum computation
International Nuclear Information System (INIS)
Baldiotti, M.C.; Gitman, D.M.; Bagrov, V.G.
2009-01-01
We study the four-level system given by two quantum dots immersed in a time-dependent magnetic field, which are coupled to each other by an effective Heisenberg-type interaction. We describe the construction of the corresponding evolution operator in a special case of different time-dependent parallel external magnetic fields. We find a relation between the external field and the effective interaction function. The obtained results are used to analyze the theoretical implementation of a universal quantum gate
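The four-level dynamics described in this abstract can be explored numerically, even without the paper's analytic construction. The sketch below (all parameters and the field profile B(t) are assumptions for illustration) builds the Hamiltonian of two spins with a Heisenberg coupling in a shared time-dependent parallel field and composes short-time propagators exp(-i H(t) dt), with hbar = 1; for a Hermitian H each factor can be computed exactly from its eigendecomposition.

```python
import numpy as np

# Pauli matrices and identity
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

J = 0.3  # assumed Heisenberg coupling strength
H_heis = J * (np.kron(sx, sx) + np.kron(sy, sy) + np.kron(sz, sz))
Sz_tot = np.kron(sz, I2) + np.kron(I2, sz)   # parallel field couples to both spins

def expm_hermitian(H, dt):
    """exp(-i H dt) for Hermitian H via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

B = lambda t: 1.0 + 0.5 * np.sin(t)          # assumed time-dependent field profile
dt, steps = 0.01, 500
U = np.eye(4, dtype=complex)                  # evolution operator, built step by step
for n in range(steps):
    H = H_heis + B(n * dt) * Sz_tot
    U = expm_hermitian(H, dt) @ U

# The composed evolution operator must remain unitary.
print(np.linalg.norm(U.conj().T @ U - np.eye(4)))
```

This brute-force composition is the numerical counterpart of the evolution operator whose closed-form construction the abstract describes for special field profiles.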
P. McBride
It has been a very active year for the computing project with strong contributions from members of the global community. The project has focused on site preparation and Monte Carlo production. The operations group has begun processing data from P5 as part of the global data commissioning. Improvements in transfer rates and site availability have been seen as computing sites across the globe prepare for large scale production and analysis as part of CSA07. Preparations for the upcoming Computing Software and Analysis Challenge CSA07 are progressing. Ian Fisk and Neil Geddes have been appointed as coordinators for the challenge. CSA07 will include production tests of the Tier-0 production system, reprocessing at the Tier-1 sites and Monte Carlo production at the Tier-2 sites. At the same time there will be a large analysis exercise at the Tier-2 centres. Pre-production simulation of the Monte Carlo events for the challenge is beginning. Scale tests of the Tier-0 will begin in mid-July and the challenge it...
M. Kasemann
Introduction During the past six months, Computing participated in the STEP09 exercise, had a major involvement in the October exercise and has been working with CMS sites on improving open issues relevant for data taking. At the same time operations for MC production, real data reconstruction and re-reconstructions and data transfers at large scales were performed. STEP09 was successfully conducted in June as a joint exercise with ATLAS and the other experiments. It gave good indication about the readiness of the WLCG infrastructure with the two major LHC experiments stressing the reading, writing and processing of physics data. The October Exercise, in contrast, was conducted as an all-CMS exercise, where Physics, Computing and Offline worked on a common plan to exercise all steps to efficiently access and analyze data. As one of the major results, the CMS Tier-2s demonstrated to be fully capable for performing data analysis. In recent weeks, efforts were devoted to CMS Computing readiness. All th...
I. Fisk
2011-01-01
Introduction It has been a very active quarter in Computing with interesting progress in all areas. The activity level at the computing facilities, driven by both organised processing from data operations and user analysis, has been steadily increasing. The large-scale production of simulated events that has been progressing throughout the fall is wrapping up, and reprocessing with pile-up will continue. A large reprocessing of all the proton-proton data has just been released and another will follow shortly. The number of analysis jobs submitted by users each day, which was already at the computing model expectations at the time of ICHEP, is now 33% higher. We are expecting a busy holiday break to ensure samples are ready in time for the winter conferences. Heavy Ion The Tier 0 infrastructure was able to repack and promptly reconstruct heavy-ion collision data. Two copies were made of the data at CERN using a large CASTOR disk pool, and the core physics sample was replicated ...
I. Fisk
2012-01-01
Introduction Computing continued with a high level of activity over the winter in preparation for conferences and the start of the 2012 run. 2012 brings new challenges with a new energy, more complex events, and the need to make the best use of the available time before the Long Shutdown. We expect to be resource constrained on all tiers of the computing system in 2012 and are working to ensure the high-priority goals of CMS are not impacted. Heavy ions After a successful 2011 heavy-ion run, the programme is moving to analysis. During the run, the CAF resources were well used for prompt analysis. Since then in 2012 on average 200 job slots have been used continuously at Vanderbilt for analysis workflows. Operations Office As of 2012, the Computing Project emphasis has moved from commissioning to operation of the various systems. This is reflected in the new organisation structure where the Facilities and Data Operations tasks have been merged into a common Operations Office, which now covers everything ...
M. Kasemann
CCRC’08 challenges and CSA08 During the February campaign of the Common Computing readiness challenges (CCRC’08), the CMS computing team had achieved very good results. The link between the detector site and the Tier0 was tested by gradually increasing the number of parallel transfer streams well beyond the target. Tests covered the global robustness at the Tier0, processing a massive number of very large files and with a high writing speed to tapes. Other tests covered the links between the different Tiers of the distributed infrastructure and the pre-staging and reprocessing capacity of the Tier1’s: response time, data transfer rate and success rate for Tape to Buffer staging of files kept exclusively on Tape were measured. In all cases, coordination with the sites was efficient and no serious problem was found. These successful preparations prepared the ground for the second phase of the CCRC’08 campaign, in May. The Computing Software and Analysis challen...
I. Fisk
2010-01-01
Introduction The first data taking period of November produced a first scientific paper, and this is a very satisfactory step for Computing. It also gave the invaluable opportunity to learn and debrief from this first, intense period, and make the necessary adaptations. The alarm procedures between different groups (DAQ, Physics, T0 processing, Alignment/calibration, T1 and T2 communications) have been reinforced. A major effort has also been invested into remodeling and optimizing operator tasks in all activities in Computing, in parallel with the recruitment of new Cat A operators. The teams are being completed and by mid year the new tasks will have been assigned. CRB (Computing Resource Board) The Board met twice since last CMS week. In December it reviewed the experience of the November data-taking period and could measure the positive improvements made for the site readiness. It also reviewed the policy under which Tier-2 are associated with Physics Groups. Such associations are decided twice per ye...
Energy Technology Data Exchange (ETDEWEB)
Aduku, K.J.; Thelwall, M.; Kousha, K.
2016-07-01
Counts of Mendeley readers may give useful evidence about the impact of research. Although several studies have indicated that there are significant positive correlations between counts of Mendeley readers and citation counts for journal articles, it is not known how the pattern of association may vary between journal articles and conference papers. To fill this gap, Mendeley readership data and Scopus citation counts were extracted for both journal articles and conference papers published in 2011 in four fields for which conferences are important: Computer Science Applications, Computer Software, Building & Construction Engineering, and Industrial & Manufacturing Engineering. Mendeley readership counts were found to correlate moderately with citation counts for both journal articles and conference papers in Computer Science Applications and Computer Software. Nevertheless, the correlations between Mendeley readers and citation counts were much lower for conference papers than for journal articles in Building & Construction Engineering and Industrial & Manufacturing Engineering. Hence, there seem to be disciplinary differences in the usefulness of Mendeley readership counts as impact indicators for conference papers, even between fields for which conferences are important. (Author)
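The correlation analysis described here is typically done with a rank correlation, since readership and citation counts are heavily skewed. The toy example below (the counts are invented) computes Spearman's rho as Pearson's r on ranks, using only NumPy.

```python
import numpy as np

# Invented readership and citation counts for eight papers
readers   = np.array([ 5, 12,  0,  7, 30,  2, 18,  9])
citations = np.array([ 3, 10,  1,  6, 25,  0, 14,  8])

def ranks(x):
    # rank data 1..n (no tie handling needed for this toy example)
    r = np.empty(len(x), dtype=float)
    r[np.argsort(x)] = np.arange(1, len(x) + 1)
    return r

# Spearman's rho = Pearson correlation of the rank-transformed data
rho = np.corrcoef(ranks(readers), ranks(citations))[0, 1]
print(f"Spearman rho = {rho:.3f}")
```

A rho near 1 indicates that papers with more readers also tend to have more citations, which is the kind of association the study quantifies per field and per document type.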
International Nuclear Information System (INIS)
Hua Jinsong; Lin Ping; Liu Chun; Wang Qi
2011-01-01
Highlights: → We study phase-field models for multi-phase flow computation. → We develop an energy-law preserving C0 FEM. → We show that the energy-law preserving method works better. → We overcome unphysical oscillation associated with the Cahn-Hilliard model. - Abstract: We use the idea in to develop the energy-law preserving method and compute the diffusive interface (phase-field) models of Allen-Cahn and Cahn-Hilliard type, respectively, governing the motion of two-phase incompressible flows. We discretize these two models using a C0 finite element in space and a modified midpoint scheme in time. To increase the stability in the pressure variable we treat the divergence-free condition by a penalty formulation, under which the discrete energy law can still be derived for these diffusive interface models. Through an example we demonstrate that the energy-law preserving method is beneficial for computing these multi-phase flow models. We also demonstrate that when applying the energy-law preserving method to the model of Cahn-Hilliard type, unphysical interfacial oscillations may occur. We examine the source of such oscillations and present a remedy to eliminate them. A few two-phase incompressible flow examples are computed to show the good performance of our method.
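The energy law the abstract refers to says that the free energy of the phase field must decay in time. The sketch below illustrates that decay on a minimal 1-D Allen-Cahn equation with periodic finite differences and explicit Euler time stepping; this is not the paper's C0 finite element with a modified midpoint scheme, only a compact illustration of the monotone energy behavior, with all parameters chosen for stability of the explicit step.

```python
import numpy as np

# Allen-Cahn: u_t = eps^2 u_xx + u - u^3 on a periodic domain, the L2
# gradient flow of E[u] = sum( eps^2/2 |u_x|^2 + (1 - u^2)^2 / 4 ) dx.
N, L_dom, eps, dt = 128, 2 * np.pi, 0.3, 1e-3
dx = L_dom / N
x = np.arange(N) * dx
u = 0.1 * np.cos(x) + 0.05 * np.sin(3 * x)   # small assumed initial perturbation

def energy(u):
    ux = (np.roll(u, -1) - u) / dx           # forward difference, periodic
    return np.sum(0.5 * eps**2 * ux**2 + 0.25 * (1 - u**2) ** 2) * dx

E0 = energy(u)
for _ in range(2000):
    lap = (np.roll(u, -1) - 2 * u + np.roll(u, 1)) / dx**2
    u = u + dt * (eps**2 * lap + u - u**3)   # explicit Euler gradient-flow step
E1 = energy(u)
print(E0, E1)
```

The discrete energy after time stepping is smaller than the initial energy; energy-law preserving schemes like the paper's guarantee this decay structurally rather than relying on a small time step.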
DEFF Research Database (Denmark)
Richards, H.L.; Rikvold, P.A.
1996-01-01
particularly promising as materials for high-density magnetic recording media. In this paper we use analytic arguments and Monte Carlo simulations to quantitatively study the effects of the demagnetizing field on the dynamics of magnetization switching in two-dimensional, single-domain, kinetic Ising systems. For systems in the weak-field "stochastic region," where magnetization switching is on average effected by the nucleation and growth of a single droplet, the simulation results can be explained by a simple model in which the free energy is a function only of magnetization. In the intermediate-field "multidroplet region," a generalization of Avrami's law involving a magnetization-dependent effective magnetic field gives good agreement with the simulations. The effects of the demagnetizing field do not qualitatively change the droplet-theoretical picture of magnetization switching in highly anisotropic...
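The magnetization-switching setup studied here can be reproduced in miniature with a single-spin-flip Metropolis simulation. The sketch below omits the demagnetizing field that is the paper's focus and uses invented parameters; it only shows the basic field-driven switching event: a lattice prepared "up" reverses under an opposing applied field.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed parameters: 16x16 periodic square lattice, coupling J, applied
# field H_field opposing the initial magnetization, temperature T below Tc.
L_sz, J, H_field, T = 16, 1.0, -1.0, 1.5
s = np.ones((L_sz, L_sz), dtype=int)          # start fully magnetized "up"

def local_field(s, i, j):
    return J * (s[(i + 1) % L_sz, j] + s[(i - 1) % L_sz, j] +
                s[i, (j + 1) % L_sz] + s[i, (j - 1) % L_sz]) + H_field

for sweep in range(400):
    for _ in range(L_sz * L_sz):              # one Monte Carlo sweep
        i, j = rng.integers(L_sz, size=2)
        dE = 2 * s[i, j] * local_field(s, i, j)
        if dE <= 0 or rng.random() < np.exp(-dE / T):
            s[i, j] = -s[i, j]                # Metropolis acceptance

m = s.mean()
print(f"final magnetization m = {m:.2f}")
```

In the paper's terminology, whether this reversal proceeds through one droplet or many depends on the field strength; adding a magnetization-dependent demagnetizing term to `local_field` would be the natural extension toward the model actually studied.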
Portable computing - A fielded interactive scientific application in a small off-the-shelf package
Groleau, Nicolas; Hazelton, Lyman; Frainier, Rich; Compton, Michael; Colombano, Silvano; Szolovits, Peter
1993-01-01
Experience with the design and implementation of a portable computing system for STS crew-conducted science is discussed. Principal-Investigator-in-a-Box (PI) will help the SLS-2 astronauts perform vestibular (human orientation system) experiments in flight. PI is an interactive system that provides data acquisition and analysis, experiment step rescheduling, and various other forms of reasoning to astronaut users. The hardware architecture of PI consists of a computer and an analog interface box. 'Off-the-shelf' equipment is employed in the system wherever possible in an effort to use widely available tools and then to add custom functionality and application codes to them. Other projects which can help prospective teams to learn more about portable computing in space are also discussed.
Alter, Stephen J.; Brauckmann, Gregory J.; Kleb, Bil; Streett, Craig L; Glass, Christopher E.; Schuster, David M.
2015-01-01
Using the Fully Unstructured Three-Dimensional (FUN3D) computational fluid dynamics code, an unsteady, time-accurate flow field about a Space Launch System configuration was simulated at a transonic wind tunnel condition (Mach = 0.9). Delayed detached-eddy simulation combined with Reynolds-averaged Navier-Stokes and a Spalart-Allmaras turbulence model was employed for the simulation. A second-order accurate time evolution scheme was used to simulate the flow field, with a minimum of 0.2 seconds of simulated time up to as much as 1.4 seconds. Data were collected at 480 pressure tap locations, 139 of which matched a 3% wind tunnel model tested in the Transonic Dynamics Tunnel (TDT) facility at NASA Langley Research Center. Comparisons between computation and experiment showed agreement within 5% in terms of location of peak RMS levels, and 20% for frequency and magnitude of power spectral densities. Grid resolution and time-step sensitivity studies were performed to identify methods for improved accuracy in comparisons to wind tunnel data. With limited computational resources, accurate trends for reduced vibratory loads on the vehicle were observed. Exploratory methods, such as determining minimized computed errors based on CFL number and sub-iterations, evaluating the frequency content of the unsteady pressures, and evaluating oscillatory shock structures, were used in this study to enhance computational efficiency and solution accuracy. These techniques enabled development of a set of best practices for the evaluation of future flight vehicle designs in terms of vibratory loads.
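The comparison metric used here, agreement in peak frequency of the power spectral density between computation and experiment, can be sketched with a simple periodogram. The two signals below are synthetic stand-ins (invented sample rate, tone frequencies, and noise level) for a computed and a measured pressure-tap record.

```python
import numpy as np

fs, T_rec = 1000.0, 2.0                     # assumed sample rate (Hz) and record length (s)
t = np.arange(0, T_rec, 1 / fs)
rng = np.random.default_rng(1)
cfd  = np.sin(2 * np.pi * 50 * t) + 0.1 * rng.standard_normal(t.size)
expt = np.sin(2 * np.pi * 52 * t) + 0.1 * rng.standard_normal(t.size)

def psd_peak(x):
    """Frequency of the periodogram peak of a zero-meaned record."""
    X = np.fft.rfft(x - x.mean())
    psd = (np.abs(X) ** 2) / (fs * x.size)
    f = np.fft.rfftfreq(x.size, 1 / fs)
    return f[np.argmax(psd)]

fp_cfd, fp_expt = psd_peak(cfd), psd_peak(expt)
rel = abs(fp_cfd - fp_expt) / fp_expt       # relative peak-frequency disagreement
print(fp_cfd, fp_expt, rel)
```

Here the two peak frequencies disagree by a few percent, comfortably inside the 20% frequency-agreement band quoted in the abstract; the study applied the same kind of spectral comparison across all matching pressure taps.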
Program system for computation of the terrestrial gamma-radiation field
International Nuclear Information System (INIS)
Kirkegaard, P.; Loevborg, L.
1979-02-01
A system of computer programs intended for the solution of the plane one-dimensional photon transport equation in the case of two adjacent media is described, and user's guides for the programs are given. One medium represents a natural ground with uniformly distributed potassium, uranium, and thorium gamma-ray emitters. The other medium is usually air with no radioactive contaminants. The solution method is the double-P1 approximation with logarithmic energy spacing. The complete data-processing system GB contains the transport-theory code GAMP1, the code GFX for computation of scalar flux and dose rate, and a number of auxiliary programs and data files. (author)
Portable, accurate toxicity testing
International Nuclear Information System (INIS)
Sabate, R.W.; Stiffey, A.V.; Dewailly, E.L.; Hinds, A.A.; Vieaux, G.J.
1994-01-01
Ever-tightening environmental regulations, severe penalties for non-compliance, and expensive remediation costs have stimulated development of methods to detect and measure toxins. Most of these methods are bioassays that must be performed in the laboratory; none previously devised has been truly portable. The US Army, through the Small Business Innovative Research program, has developed a hand-held, field-deployable unit for testing the toxicity of battlefield water supplies. This patented system employs the measurable quenching, in the presence of toxins, of the natural bioluminescence produced by the marine dinoflagellate alga Pyrocystis lunula. The procedure's inventor used it for years to measure toxic concentrations of chemical warfare agents (actually, their simulants, primarily in the form of pesticides and herbicides) plus assorted toxic reagents, waterbottom samples, drilling fluids, even blood. While the procedure is more precise, cheaper, and faster than most bioassays, until recently it was immobile. Now it is deployable in the field. The laboratory apparatus has been proven to be sensitive to toxins in concentrations as low as a few parts per billion, repeatable within a variation of 10% or less, and, unlike some other bioassays, effective in turbid or colored media. The laboratory apparatus and the hand-held tester have been calibrated with the EPA protocol that uses the shrimplike Mysidopsis bahia. The test organism tolerates transportation well, but must be rested a few hours at the test site for regeneration of its light-producing powers. Toxicity now can be measured confidently in soils, water columns, discharge points, and many other media in situ. Most significant to the oil industry is that drilling fluids can be monitored continuously on the rig
2010-01-01
Introduction Just two months after the “LHC First Physics” event of 30th March, the analysis of the O(200) million 7 TeV collision events in CMS accumulated during the first 60 days is well under way. The consistency of the CMS computing model has been confirmed during these first weeks of data taking. This model is based on a hierarchy of use-cases deployed between the different tiers and, in particular, the distribution of RECO data to T1s, who then serve data on request to T2s, along a topology known as “fat tree”. Indeed, during this period this model was further extended by almost full “mesh” commissioning, meaning that RECO data were shipped to T2s whenever possible, enabling additional physics analyses compared with the “fat tree” model. Computing activities at the CMS Analysis Facility (CAF) have been marked by a good time response for a load almost evenly shared between ALCA (Alignment and Calibration tasks - highest p...
Contributions from I. Fisk
2012-01-01
Introduction The start of the 2012 run has been busy for Computing. We have reconstructed, archived, and served a larger sample of new data than in 2011, and we are in the process of producing an even larger new sample of simulations at 8 TeV. The running conditions and system performance are largely what was anticipated in the plan, thanks to the hard work and preparation of many people. Heavy ions Heavy Ions has been actively analysing data and preparing for conferences. Operations Office Figure 6: Transfers from all sites in the last 90 days For ICHEP and the Upgrade efforts, we needed to produce and process record amounts of MC samples while supporting the very successful data-taking. This was a large burden, especially on the team members. Nevertheless the last three months were very successful and the total output was phenomenal, thanks to our dedicated site admins who keep the sites operational and the computing project members who spend countless hours nursing the...
M. Kasemann
Introduction A large fraction of the effort was focused during the last period into the preparation and monitoring of the February tests of Common VO Computing Readiness Challenge 08. CCRC08 is being run by the WLCG collaboration in two phases, between the centres and all experiments. The February test is dedicated to functionality tests, while the May challenge will consist of running at all centres and with full workflows. For this first period, a number of functionality checks of the computing power, data repositories and archives as well as network links are planned. This will help assess the reliability of the systems under a variety of loads, and identify possible bottlenecks. Many tests are scheduled together with other VOs, allowing a full-scale stress test. The data rates (writing, accessing and transferring) are being checked under a variety of loads and operating conditions, as well as the reliability and transfer rates of the links between Tier-0 and Tier-1s. In addition, the capa...
Matthias Kasemann
Overview The main focus during the summer was to handle data coming from the detector and to perform Monte Carlo production. The lessons learned during the CCRC and CSA08 challenges in May were addressed by dedicated PADA campaigns led by the Integration team. Big improvements were achieved in the stability and reliability of the CMS Tier1 and Tier2 centres by regular and systematic follow-up of faults and errors with the help of the Savannah bug tracking system. In preparation for data taking, the roles of a Computing Run Coordinator and regular computing shifts monitoring the services and infrastructure as well as interfacing to the data operations tasks are being defined. The shift plan until the end of 2008 is being put together. User support worked on documentation and organized several training sessions. The ECoM task force delivered the report on “Use Cases for Start-up of pp Data-Taking” with recommendations and a set of tests to be performed for trigger rates much higher than the ...
P. MacBride
The Computing Software and Analysis Challenge CSA07 has been the main focus of the Computing Project for the past few months. Activities began over the summer with the preparation of the Monte Carlo data sets for the challenge and tests of the new production system at the Tier-0 at CERN. The pre-challenge Monte Carlo production was done in several steps: physics generation, detector simulation, digitization, conversion to RAW format; the samples were then run through the High Level Trigger (HLT). The data was then merged into three "Soups": Chowder (ALPGEN), Stew (Filtered Pythia) and Gumbo (Pythia). The challenge officially started when the first Chowder events were reconstructed on the Tier-0 on October 3rd. The data operations teams were very busy during the challenge period. The MC production teams continued with signal production and processing while the Tier-0 and Tier-1 teams worked on splitting the Soups into Primary Data Sets (PDS), reconstruction and skimming. The storage sys...
I. Fisk
2013-01-01
Computing operations have been lower as the Run 1 samples are completing and smaller samples for upgrades and preparations are ramping up. Much of the computing activity is focusing on preparations for Run 2 and on improvements in data access and flexibility of using resources. Operations Office Data processing was slow in the second half of 2013, with only the legacy re-reconstruction pass of 2011 data being processed at the sites (Figure 1). MC production and processing was more in demand, with a peak of over 750 million GEN-SIM events in a single month (Figure 2). The transfer system worked reliably and efficiently, transferring on average close to 520 TB per week with peaks close to 1.2 PB (Figure 3: the volume of data moved between CMS sites in the last six months). The tape utilisation was a focus for the operation teams, with frequent deletion campaigns moving deprecated 7 TeV MC GEN-SIM samples to INVALID datasets, which could be cleaned up...
I. Fisk
2012-01-01
Introduction Computing activity has been running at a sustained, high rate as we collect data at high luminosity, process simulation, and begin to process the parked data. The system is functional, though a number of improvements are planned during LS1. Many of the changes will impact users; we hope only in positive ways. We are trying to improve the distributed analysis tools as well as the ability to access more data samples more transparently. Operations Office Figure 2: Number of events per month, for 2012 Since the June CMS Week, Computing Operations teams successfully completed data re-reconstruction passes and finished the CMSSW_53X MC campaign with over three billion events available in AOD format. Recorded data was successfully processed in parallel, exceeding 1.2 billion raw physics events per month for the first time in October 2012 due to the increase in data-parking rate. In parallel, large efforts were dedicated to WMAgent development and integrati...
International Nuclear Information System (INIS)
Barchanski, A; Gersem, H de; Gjonaj, E; Weiland, T
2005-01-01
We present a comparison of simulated low-frequency electromagnetic fields in the human body, calculated by means of the electro-quasistatic formulation. The geometrical data in these simulations were provided by an anatomically realistic, high-resolution human body model, while the dielectric properties of the various body tissues were modelled by the parametric Cole-Cole equation. The model was examined under two different excitation sources and various spatial resolutions in a frequency range from 10 Hz to 1 MHz. An analysis of the differences in the computed fields resulting from neglecting the permittivity was carried out. On this basis, an estimate of the impact of the displacement current on the simulated low-frequency electromagnetic fields in the human body is obtained. (note)
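The parametric Cole-Cole model mentioned above gives the complex relative permittivity of a tissue as a function of angular frequency. A minimal sketch of a single-dispersion term is below; real tissue models (e.g. the Gabriel parametrization) sum several such dispersions, and the parameter values here are illustrative placeholders, not the tissue data used in the note.

```python
EPS0 = 8.8541878128e-12  # vacuum permittivity, F/m

def cole_cole(omega, eps_inf, delta_eps, tau, alpha, sigma_s):
    """Complex relative permittivity of one Cole-Cole dispersion:

        eps(w) = eps_inf + delta_eps / (1 + (j*w*tau)**(1 - alpha))
                 + sigma_s / (j*w*EPS0)

    alpha = 0 reduces to a Debye dispersion; sigma_s is the static
    ionic conductivity contribution."""
    jw = 1j * omega
    return (eps_inf
            + delta_eps / (1.0 + (jw * tau) ** (1.0 - alpha))
            + sigma_s / (jw * EPS0))

# Debye limit check: with alpha = 0, sigma_s = 0 and omega = 1/tau,
# the dispersive term is delta_eps * (1 - 1j) / 2.
val = cole_cole(omega=1.0, eps_inf=4.0, delta_eps=70.0, tau=1.0,
                alpha=0.0, sigma_s=0.0)
```

The exponent `1 - alpha` is what broadens the dispersion relative to the Debye model; the electro-quasistatic solver only needs the resulting complex conductivity `j*omega*EPS0*eps(omega)` per tissue.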
International Nuclear Information System (INIS)
Cho, Oyeon; Chun, Mi Son; Oh, Young Taek; Kim, Mi Hwa; Park, Hae Jin; Nam, Sang Soo; Heo, Jae Sung; Noh, O Kyu; Park, Sung Ho
2013-01-01
The parotid gland can be considered a risk organ in whole brain radiotherapy (WBRT). The purpose of this study is to evaluate the parotid gland sparing effect of computed tomography (CT)-based WBRT compared to a 2-dimensional plan with conventional field margins. From January 2008 to April 2011, 53 patients underwent WBRT using CT-based simulation. A bilateral two-field arrangement was used and the prescribed dose was 30 Gy in 10 fractions. We compared the parotid dose between 2 radiotherapy plans using different lower field margins: a conventional field to the lower level of the atlas (CF) and a modified field fitted to the brain tissue (MF). Averages of mean parotid dose for the 2 protocols with CF and MF were 17.4 Gy and 8.7 Gy, respectively. The percentages of the volume receiving at least 98% of the prescribed dose were 99.7% for CF and 99.5% for MF. Compared to WBRT with CF, CT-based lower field margin modification is a simple and effective technique for sparing the parotid gland, while providing similar dose coverage of the whole brain.
A Randomized Field Trial of the Fast ForWord Language Computer-Based Training Program
Borman, Geoffrey D.; Benson, James G.; Overman, Laura
2009-01-01
This article describes an independent assessment of the Fast ForWord Language computer-based training program developed by Scientific Learning Corporation. Previous laboratory research involving children with language-based learning impairments showed strong effects on their abilities to recognize brief and fast sequences of nonspeech and speech…
Computational Methods for Inviscid and Viscous Two-and-Three-Dimensional Flow Fields.
1975-01-01
Difference Equations Over a Network, Watson Sci. Comput. Lab. Report, 1949. 173. Isaacson, E. and Keller, H. B., Analysis of Numerical Methods... The finite element method has given a new impulse to the old mathematical theory of multivariate interpolation. We first study the one-dimensional case, which
Institute of Scientific and Technical Information of China (English)
高文; 陈熙霖
1997-01-01
The blur in target images caused by camera vibration due to robot motion or hand shaking, and by objects moving in the background scene, is difficult to deal with in a computer vision system. In this paper, the authors study the relation model between motion and blur in the case of object motion existing in a video image sequence, and work out a practical computation algorithm for both motion analysis and blurred image restoration. Combining general optical flow and stochastic processes, the paper presents an approach by which the motion velocity can be calculated from blurred images. On the other hand, the blurred image can also be restored using the obtained motion information. To overcome the small-motion limitation of general optical flow computation, a multiresolution optical flow algorithm based on MAP estimation is proposed. For restoring the blurred image, an iteration algorithm and the obtained motion velocity are used. Experiments show that the proposed approach works well for both motion velocity computation and blurred image restoration.
Phenomenography and Grounded Theory as Research Methods in Computing Education Research Field
Kinnunen, Paivi; Simon, Beth
2012-01-01
This paper discusses two qualitative research methods, phenomenography and grounded theory. We introduce both methods' data collection and analysis processes and the types of results you may get at the end, using examples from computing education research. We highlight some of the similarities and differences between the aim, data collection and…
Relativity in a Rock Field: A Study of Physics Learning with a Computer Game
Carr, David; Bossomaier, Terry
2011-01-01
The "Theory of Special Relativity" is widely regarded as a difficult topic for learners in physics to grasp, as it reformulates fundamental conceptions of space, time and motion, and predominantly deals with situations outside of everyday experience. In this paper, we describe embedding the physics of relativity into a computer game, and…
I. Fisk
2011-01-01
Introduction The Computing Team successfully completed the storage, initial processing, and distribution for analysis of proton-proton data in 2011. There are still a variety of activities ongoing to support winter conference activities and preparations for 2012. Heavy ions The heavy-ion run for 2011 started in early November and has already demonstrated good machine performance and success of some of the more advanced workflows planned for 2011. Data collection will continue until early December. Facilities and Infrastructure Operations Operational and deployment support for the WMAgent and WorkQueue+Request Manager components, routinely used in production by Data Operations, is provided. The GlideInWMS components are now deployed at CERN, adding to the GlideInWMS factory placed in the US. There has been new operational collaboration between the CERN team and the UCSD GlideIn factory operators, covering each other's time zones by monitoring/debugging pilot jobs sent from the facto...
Application of computer picture processing to dynamic strain measurement under electromagnetic field
International Nuclear Information System (INIS)
Yagawa, G.; Soneda, N.
1987-01-01
For the structural design of fusion reactors, it is very important to ensure the structural integrity of components under various dynamic loading conditions due to solid-electromagnetic field interaction, earthquakes, MHD effects and so on. As one of the experimental approaches to assess dynamic fracture, we consider strain measurement near a crack tip under a transient electromagnetic field, which in general involves several experimental difficulties. The authors have developed a strain measurement method using a picture processing technique. In this method, the locations of marks printed on the surface of a specimen are determined by picture processing. The displacement field is interpolated using the mark displacements and finite elements. Finally, the strain distribution is calculated by differentiating the displacement field. In the present study, the method is improved and automated in order to apply it to the measurement of dynamic strain distribution under an electromagnetic field. The effects of dynamic loading on the strain distribution are then investigated by comparing the dynamic results with the static ones. (orig./GL)
Computer analysis of multicircuit shells of revolution by the field method
International Nuclear Information System (INIS)
Cohen, G.A.
1975-01-01
The method of analysis developed, which has been termed the 'field method', converts the boundary-value problem into two successive initial-value problems. In the first initial-value problem, a forward integration over the shell meridian is made for the 'field functions', which may be interpreted physically as influence functions (plus additional functions to account for external loading) of the structure. The second initial-value problem consists of a backward integration (i.e., in the reverse direction) for the physical force and displacement functions, the differential equations for which depend on the already calculated field functions. In this method, no artificial subdivision of the meridian is necessary, since both initial-value problems are numerically stable. Also, because the physical response functions are obtained directly from the backward integration, their storage points may be chosen automatically during execution to obtain a uniformly 'dense' description of these functions. Studies comparing the efficiency (i.e., execution time) of the field method with that of a conventional superposition (Zarghamee) method have been made for the simple case of the linear static response of a clamped cylindrical shell. The field method has been presented previously for shells of revolution with open branched meridians. This work is now extended to the case of meridians which contain circuits. Also, a new method for the treatment of arbitrary kinematic constraints is presented
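The forward/backward structure described above (a Riccati-type sweep) can be illustrated on a scalar toy problem rather than the paper's shell equations. The sketch below, under that simplifying assumption, solves the two-point BVP u'' = u on [0, 1] with u'(0) = 0, u(1) = 1 as two successive initial-value problems: a forward integration for the "field function" s(x) in u' = s(x)u, then a backward integration for u itself.

```python
import math

# Toy BVP:  u'' = u,  u'(0) = 0,  u(1) = 1.
# Forward IVP (field function):  s' = 1 - s**2,  s(0) = 0   (ansatz u' = s*u)
# Backward IVP:                  u' = s(x)*u,    u(1) = 1
N = 2000
h = 1.0 / N

# Forward sweep: RK4 on the Riccati equation with step h/2, storing s on a
# half-step grid so the backward RK4 stages can read s at interval midpoints.
f = lambda s: 1.0 - s * s
s = [0.0]
for _ in range(2 * N):
    k1 = f(s[-1])
    k2 = f(s[-1] + 0.25 * h * k1)
    k3 = f(s[-1] + 0.25 * h * k2)
    k4 = f(s[-1] + 0.5 * h * k3)
    s.append(s[-1] + (h / 12.0) * (k1 + 2 * k2 + 2 * k3 + k4))

# Backward sweep: integrate u' = s(x)*u from x = 1 down to x = 0 with RK4.
u = 1.0
for i in range(N - 1, -1, -1):
    s1, sm, s0 = s[2 * i + 2], s[2 * i + 1], s[2 * i]
    k1 = s1 * u
    k2 = sm * (u - 0.5 * h * k1)
    k3 = sm * (u - 0.5 * h * k2)
    k4 = s0 * (u - h * k3)
    u -= (h / 6.0) * (k1 + 2 * k2 + 2 * k3 + k4)

# Exact solution is u(x) = cosh(x)/cosh(1), so u(0) = 1/cosh(1).
err = abs(u - 1.0 / math.cosh(1.0))
```

Both sweeps are stable initial-value integrations, which is the point the abstract makes: no shooting or meridian subdivision is needed.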
Pan, Bing; Wang, Bo
2017-10-01
Digital volume correlation (DVC) is a powerful technique for quantifying interior deformation within solid opaque materials and biological tissues. In the last two decades, great efforts have been made to improve the accuracy and efficiency of the DVC algorithm. However, there is still a lack of a flexible, robust and accurate version that can be efficiently implemented in personal computers with limited RAM. This paper proposes an advanced DVC method that can realize accurate full-field internal deformation measurement applicable to high-resolution volume images with up to billions of voxels. Specifically, a novel layer-wise reliability-guided displacement tracking strategy combined with dynamic data management is presented to guide the DVC computation from slice to slice. The displacements at specified calculation points in each layer are computed using the advanced 3D inverse-compositional Gauss-Newton algorithm with the complete initial guess of the deformation vector accurately predicted from the computed calculation points. Since only limited slices of interest in the reference and deformed volume images rather than the whole volume images are required, the DVC calculation can thus be efficiently implemented on personal computers. The flexibility, accuracy and efficiency of the presented DVC approach are demonstrated by analyzing computer-simulated and experimentally obtained high-resolution volume images.
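The core matching step in DVC can be sketched at integer-voxel resolution. The code below is a deliberately simplified stand-in for the paper's method (it does exhaustive zero-normalised cross-correlation search on one subvolume, not the layer-wise reliability-guided IC-GN algorithm with subvoxel refinement), but it shows the basic idea of recovering an internal displacement from a pair of volume images.

```python
import numpy as np

def zncc(a, b):
    """Zero-normalised cross-correlation of two equally-shaped subvolumes."""
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() / np.sqrt((a * a).sum() * (b * b).sum()))

def match_subvolume(ref, dfm, corner, size, search):
    """Exhaustive integer-voxel search for the displacement maximising ZNCC."""
    i, j, k = corner
    tmpl = ref[i:i + size, j:j + size, k:k + size]
    best, best_d = -2.0, None
    r = range(-search, search + 1)
    for di in r:
        for dj in r:
            for dk in r:
                cand = dfm[i + di:i + di + size,
                           j + dj:j + dj + size,
                           k + dk:k + dk + size]
                c = zncc(tmpl, cand)
                if c > best:
                    best, best_d = c, (di, dj, dk)
    return best_d, best

# Synthetic test: deform a random speckle volume by a rigid integer translation.
rng = np.random.default_rng(0)
ref = rng.random((40, 40, 40))
dfm = np.roll(ref, shift=(2, -1, 3), axis=(0, 1, 2))
d, c = match_subvolume(ref, dfm, corner=(15, 15, 15), size=8, search=4)
```

A production DVC code replaces the brute-force loop with a reliability-guided, initial-guess-propagating scheme precisely so that only a tiny neighbourhood needs searching per point, which is what makes slice-by-slice processing of billion-voxel volumes feasible.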
Electric field computation and measurements in the electroporation of inhomogeneous samples
Bernardis, Alessia; Bullo, Marco; Campana, Luca Giovanni; Di Barba, Paolo; Dughiero, Fabrizio; Forzan, Michele; Mognaschi, Maria Evelina; Sgarbossa, Paolo; Sieni, Elisabetta
2017-12-01
In clinical treatments of a class of tumors, e.g. skin tumors, the drug uptake of tumor tissue is helped by means of a pulsed electric field, which permeabilizes the cell membranes. This technique, which is called electroporation, exploits the conductivity of the tissues; however, the tumor tissue may be characterized by inhomogeneous areas, possibly causing a non-uniform distribution of current. In this paper, the authors propose a field model to predict the effect of tissue inhomogeneity, which can affect the current density distribution. In particular, finite-element simulations, considering a non-linear conductivity versus field relationship, are developed. Measurements on a set of samples subject to controlled inhomogeneity make it possible to assess the numerical model in view of identifying the equivalent resistance between pairs of electrodes.
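Electroporation field models commonly represent the non-linear conductivity versus field relationship as a smoothed step: conductivity rises between a lower and an upper field threshold as membranes permeabilize. The sketch below uses a generic smoothstep interpolation with placeholder values; the paper's actual functional form and tissue parameters are not reproduced here.

```python
def sigma(E, sigma0=0.2, sigma1=0.8, E_low=40e3, E_high=80e3):
    """Electric conductivity (S/m) as a smoothed step of field magnitude E (V/m).

    sigma0/sigma1 are the non-permeabilized/permeabilized plateaus and
    E_low/E_high the onset/saturation thresholds -- all placeholder values."""
    if E <= E_low:
        return sigma0
    if E >= E_high:
        return sigma1
    x = (E - E_low) / (E_high - E_low)
    return sigma0 + (sigma1 - sigma0) * (3.0 * x * x - 2.0 * x ** 3)  # smoothstep

# Plateaus below onset and above saturation, smooth rise in between:
lo, mid, hi = sigma(10e3), sigma(60e3), sigma(200e3)
```

In a finite-element solver this sigma(E) makes the current continuity equation non-linear, so each pulse is typically solved by fixed-point or Newton iteration on the field-dependent conductivity.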
Cranial radiotherapy guided by computed tomography with or without fields conformation in pediatric
International Nuclear Information System (INIS)
Fernandez, Diego; Caussa, Lucas; Murina, Patricia; Zunino, Silvia
2007-01-01
Many malignancies in children can be cured by radiotherapy, but acute toxicity and the significant late effects of treatment are worrying for the patient, family and society. Therefore, the aim of pediatric radiotherapy is to maintain or improve the cure rate of cancer while diminishing the aftermath of treatment. The goal of this study is to measure differences in doses to the healthy tissue of the central nervous system with two radiotherapy techniques, both guided by computed tomography. [es]
Computing the Local Field Potential (LFP) from Integrate-and-Fire Network Models
DEFF Research Database (Denmark)
Mazzoni, Alberto; Linden, Henrik; Cuntz, Hermann
2015-01-01
Leaky integrate-and-fire (LIF) network models are commonly used to study how the spiking dynamics of neural networks changes with stimuli, tasks or dynamic network states. However, neurophysiological studies in vivo often rather measure the mass activity of neuronal microcircuits with the local f...... in cases where a single pyramidal population dominates the LFP generation, and thereby facilitate quantitative comparison between computational models and experimental LFP recordings in vivo....
Directory of Open Access Journals (Sweden)
V. Javor
2012-11-01
Full Text Available A comparison of the results of different engineering models for the lightning magnetic field of negative first strokes is presented in this paper. A new function for representing the double-peaked channel-base current is used for lightning stroke modeling. This function includes the initial and subsidiary peaks in the current waveform. For experimentally measured currents, the magnetic field is calculated for three engineering models: the transmission line (TL) model, the TL model with linear decay (MTLL), and the TL model with exponential decay (MTLE).
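The three engineering models differ only in how the channel-base current is attenuated with height as it travels up the channel at the return-stroke speed. A minimal sketch is below; the Heidler waveform stands in for the paper's double-peaked channel-base current function, and all parameter values (peak current, time constants, v, H, lambda) are illustrative assumptions.

```python
import math

def heidler(t, I0=30e3, tau1=1.8e-6, tau2=95e-6, n=2):
    """Heidler channel-base current i(0, t); placeholder parameters."""
    if t <= 0.0:
        return 0.0
    eta = math.exp(-(tau1 / tau2) * (n * tau2 / tau1) ** (1.0 / n))
    x = (t / tau1) ** n
    return (I0 / eta) * (x / (1.0 + x)) * math.exp(-t / tau2)

def channel_current(z, t, model, v=1.5e8, H=7.5e3, lam=2.0e3):
    """i(z, t) along the channel for the three engineering return-stroke models."""
    if t < z / v:                       # front travelling at speed v not yet at z
        return 0.0
    attenuation = {
        "TL":   1.0,                    # no decay with height
        "MTLL": 1.0 - z / H,            # linear decay up to channel height H
        "MTLE": math.exp(-z / lam),     # exponential decay, constant lam
    }[model]
    return attenuation * heidler(t - z / v)

i_base = channel_current(0.0, 20e-6, "TL")   # all three coincide at ground level
```

Given i(z, t), the magnetic field at an observation point follows from the standard field integrals over the channel, which is where the three models' predictions diverge.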
M. Kasemann
CMS relies on a well functioning, distributed computing infrastructure. The Site Availability Monitoring (SAM) and the Job Robot submission have been very instrumental for site commissioning in order to increase the availability of sites such that they can participate in CSA07 and are ready to be used for analysis. The commissioning process has been further developed, including "lessons learned" documentation via the CMS twiki. Recently the visualization, presentation and summarizing of SAM tests for sites has been redesigned; it is now developed by the central ARDA project of WLCG. Work to test the new gLite Workload Management System was performed; a 4 times increase in throughput with respect to the LCG Resource Broker is observed. CMS has designed and launched a new-generation traffic load generator called "LoadTest" to commission and to keep exercised all data transfer routes in the CMS PhEDEx topology. Since mid-February, a transfer volume of about 12 P...
Levi, Michele; Steinhoff, Jan
2017-12-01
We present a novel public package ‘EFTofPNG’ for high precision computation in the effective field theory of post-Newtonian (PN) gravity, including spins. We created this package in view of the timely need to publicly share automated computation tools, which integrate the various types of physics manifested in the expected increasing influx of gravitational wave (GW) data. Hence, we created a free and open source package, which is self-contained, modular, all-inclusive, and accessible to the classical gravity community. The ‘EFTofPNG’ Mathematica package also uses the power of the ‘xTensor’ package, suited for complicated tensor computation, where our coding also strategically approaches the generic generation of Feynman contractions, which is universal to all perturbation theories in physics, by efficiently treating n-point functions as tensors of rank n. The package currently contains four independent units, which serve as subsidiaries to the main one. Its final unit serves as a pipeline chain for obtaining the final GW templates, and provides the full computation of derivatives and physical observables of interest. The upcoming ‘EFTofPNG’ package version 1.0 should cover the point mass sector, and all the spin sectors, up to the fourth PN order, and the two-loop level. We expect and strongly encourage public development of the package to improve its efficiency, and to extend it to further PN sectors, and observables useful for the waveform modelling.
Three-dimensional computation of magnetic fields and Lorentz forces of an LHC dipole magnet
International Nuclear Information System (INIS)
Daum, C.; Avest, D. ter
1989-07-01
Magnetic fields and Lorentz forces of an LHC dipole magnet are calculated using the method of image currents to represent the effect of the iron shield. The calculation is performed for coils of finite length using a parametrization for coil heads of constant perimeter. A comparison with calculations based on POISSON and TOSCA is made. (author). 5 refs.; 31 figs.; 6 tabs
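The image-current idea used above can be sketched in the 2D magnet cross-section (the paper works with finite-length 3D coils, which this omits). A line current inside a cylindrical iron yoke of inner radius R and relative permeability mu_r is, for the field inside the aperture, equivalent to the original current plus an image current I·(mu_r−1)/(mu_r+1) at the inverse point R²/z̄₀. All numeric values below are illustrative assumptions.

```python
import math

MU0 = 4e-7 * math.pi  # vacuum permeability, T*m/A

def b_line(z, z0, I):
    """Complex field B_y + i*B_x at point z from a line current I at z0
    (z, z0 are x + i*y positions in the magnet cross-section)."""
    return MU0 * I / (2.0 * math.pi * (z - z0))

def b_with_yoke(z, z0, I, R, mu_r):
    """Field inside a cylindrical iron yoke of inner radius R, modelled by
    adding an image current I*(mu_r - 1)/(mu_r + 1) at R**2 / conj(z0)."""
    I_im = I * (mu_r - 1.0) / (mu_r + 1.0)
    z_im = R * R / z0.conjugate()
    return b_line(z, z0, I) + b_line(z, z_im, I_im)

I, a, R = 11850.0, 0.03, 0.09          # current (A), conductor and yoke radii (m)
B_free = b_line(0j, a + 0j, I)          # field at the aperture centre, no iron
B_yoke = b_with_yoke(0j, a + 0j, I, R, mu_r=1e4)
```

The iron image adds constructively at the aperture centre, which is the field-enhancement effect the full 3D calculation quantifies for the complete coil geometry.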
DEFF Research Database (Denmark)
Si, Haiqing; Shen, Wen Zhong; Zhu, Wei Jun
2013-01-01
Acoustic propagation in the presence of a non-uniform mean flow is studied numerically by using two different acoustic propagating models, which solve linearized Euler equations (LEE) and acoustic perturbation equations (APE). As noise induced by turbulent flows often propagates from near field t...
Computer analysis of multicircuit shells of revolution by the field method
International Nuclear Information System (INIS)
Cohen, G.A.
1975-01-01
The field method has been presented previously for shells of revolution with open branched meridians. The main purpose of the present paper is to extend this work to the case of meridians which contain circuits. Also, a new method for the treatment of arbitrary kinematic constraints is presented. (Auth.)
Directory of Open Access Journals (Sweden)
Sonia López
2016-09-01
Full Text Available This study is part of a research project that aims to characterize the epistemological, psychological and didactic presuppositions of science teachers (Biology, Physics, Chemistry) who implement Computational Modeling and Simulation (CMS) activities as part of their teaching practice. We present here a synthesis of a literature review on the subject, showing how in the last two decades this form of computer use for science teaching has boomed in disciplines such as Physics and Chemistry, but to a lesser degree in Biology. Additionally, in the works that dwell on the use of CMS in Biology, we identified a lack of theoretical bases supporting their epistemological, psychological and/or didactic postures. This has significant implications for the fields of research and teacher education in Science Education.
International Nuclear Information System (INIS)
Read, D.; Broyd, T.W.
1988-01-01
This paper provides an introduction to CHEMVAL, an international project concerned with establishing the applicability of chemical speciation and coupled transport models to the simulation of realistic waste disposal situations. The project aims to validate computer-based models quantitatively by comparison with laboratory and field experiments. Verification of the various computer programs employed by research organisations within the European Community is ensured through close inter-laboratory collaboration. The compilation and review of thermodynamic data forms an essential aspect of this work and has led to the production of an internally consistent standard CHEMVAL database. The sensitivity of results to variation in fundamental constants is being monitored at each stage of the project and, where feasible, complementary laboratory studies are used to improve the data set. Currently, thirteen organisations from five countries are participating in CHEMVAL which forms part of the Commission of European Communities' MIRAGE 2 programme of research. (orig.)
Directory of Open Access Journals (Sweden)
K. Ide
2002-01-01
Full Text Available In this paper we develop analytical and numerical methods for finding special hyperbolic trajectories that govern the geometry of Lagrangian structures in time-dependent vector fields. The vector fields (or velocity fields) may have arbitrary time dependence and be realized only as data sets over finite time intervals, where space and time are discretized. While the notion of a hyperbolic trajectory is central to dynamical systems theory, much of the theoretical development for Lagrangian transport proceeds under the assumption that such a special hyperbolic trajectory exists. This brings in new mathematical issues that must be addressed in order for Lagrangian transport theory to be applicable in practice, i.e. how to determine whether or not such a trajectory exists and, if it does exist, how to identify it in a sequence of instantaneous velocity fields. We address these issues by developing the notion of a distinguished hyperbolic trajectory (DHT). We develop an existence criterion for certain classes of DHTs in general time-dependent velocity fields, based on the time evolution of Eulerian structures that are observed in individual instantaneous fields over the entire time interval of the data set. We demonstrate the concept of DHTs in inhomogeneous (or "forced") time-dependent linear systems and develop a theory and analytical formula for computing DHTs. Throughout this work the notion of linearization is very important. This is not surprising since hyperbolicity is a "linearized" notion. To extend the analytical formula to more general nonlinear time-dependent velocity fields, we develop a series of coordinate transforms including a type of linearization that is not typically used in dynamical systems theory. We refer to it as Eulerian linearization, which is related to the frame independence of DHTs, as opposed to Lagrangian linearization, which is typical in dynamical systems theory and is used in the computation of Lyapunov exponents. We
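For the forced linear systems mentioned in the abstract, the distinguished hyperbolic trajectory is the unique solution that stays bounded for all time. The sketch below computes it for the simplest case, a scalar unstable equation x' = x + f(t), where the bounded solution is the convergent integral x(t) = −∫ₜ^∞ e^(t−s) f(s) ds; the example forcing f(t) = cos t is an assumption for illustration, with known bounded solution (sin t − cos t)/2.

```python
import math

def dht_unstable(f, t, lam=1.0, T=40.0, n=40000):
    """Bounded (distinguished) solution of x' = lam*x + f(t) for lam > 0:
    x(t) = -integral_t^{t+inf} exp(lam*(t - s)) f(s) ds,
    evaluated by the trapezoid rule, truncated at s = t + T where the
    exponential weight exp(-lam*T) is negligible."""
    h = T / n
    total = 0.0
    for k in range(n + 1):
        s = t + k * h
        w = 0.5 if k in (0, n) else 1.0
        total += w * math.exp(lam * (t - s)) * f(s)
    return -h * total

# For f(t) = cos t, lam = 1 the bounded solution is x(t) = (sin t - cos t)/2;
# every other solution diverges like exp(t).
t0 = 0.7
x_num = dht_unstable(math.cos, t0)
x_ref = (math.sin(t0) - math.cos(t0)) / 2.0
```

In a saddle system the same construction is applied componentwise (backward-in-time integral for stable directions, forward for unstable ones), which is the analytical formula the paper then extends to nonlinear, frame-independent settings.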
Palmer, Grant
1991-01-01
A CFD technique is developed to calculate the electromagnetic phenomena simultaneously with the fluid flow in the shock layer over an axisymmetric blunt body in a thermal-equilibrium chemical-nonequilibrium environment. The flowfield is solved using an explicit time-marching, first-order spatially accurate scheme. The electromagnetic phenomena are coupled to the real-gas flow solver through an iterative procedure. The electromagnetic terms introduce a strong stiffness, which was overcome by using significantly smaller time steps for the electromagnetic conservation equation. The technique is applied in calculating the flow over a Mars return aerobrake vehicle entering the Earth's atmosphere. For the case where no external field is applied, the electromagnetic effects have little impact on the flowfield.
Shokri, Abbas; Ramezani, Leila; Bidgoli, Mohsen; Akbarzadeh, Mahdi; Ghazikhanlu-Sani, Karim; Fallahi-Sichani, Hamed
2018-03-01
This study aimed to evaluate the effect of field-of-view (FOV) size on the gray values derived from cone-beam computed tomography (CBCT) compared with the Hounsfield unit values from multidetector computed tomography (MDCT) scans as the gold standard. A radiographic phantom was designed with 4 acrylic cylinders. One cylinder was filled with distilled water, and the other 3 were filled with 3 types of bone substitute: namely, Nanobone, Cenobone, and Cerabone. The phantom was scanned with 2 CBCT systems using 2 different FOV sizes, and with 1 MDCT system as the gold standard. The mean gray values (MGVs) of each cylinder were calculated for each imaging protocol. In both CBCT systems, significant differences were noted in the MGVs of all materials between the 2 FOV sizes (P < .05) except for Cerabone in the Cranex3D system. Significant differences were found in the MGVs of each material compared with the others in both FOV sizes for each CBCT system. No significant difference was seen between the Cranex3D CBCT system and the MDCT system in the MGVs of bone substitutes on images obtained with a small FOV. The size of the FOV significantly changed the MGVs of all bone substitutes, except for Cerabone in the Cranex3D system. Both CBCT systems had the ability to distinguish the 3 types of bone substitutes based on a comparison of their MGVs. The Cranex3D CBCT system used with a small FOV had a significant correlation with MDCT results.
Analysis of steam turbine boresonic NDE data using a field portable computer
International Nuclear Information System (INIS)
Leon-Salamanca, T.; Reinhart, E.R.
2004-01-01
Due to the high combined stress caused by thermal and rotational loading, the highest stress in the hollow rotor forging of typical nuclear power steam turbine and generator units is in the region at or near the bore. Material discontinuities aligned along the axis of the rotor centerline, with depth in the radial plane of the rotor, have the highest probability of becoming flaws of concern to the integrity of the rotor. Due to the nature of the casting/forging process a great number of material discontinuities can be found near the rotor bore. During the ultrasonic examination of rotors with a large number of discontinuities, the engineer must determine if these discontinuities are ultrasonic reflectors caused by fabrication anomalies, reflectors that are probably fabrication discontinuities but in such close proximity that they may link up and form a defect of concern to future operation, or reflectors that have significant size and are real growing flaws, but may appear as separated indications. Until recently, plotting of ultrasonic data to determine the significance of closely spaced indications was time consuming and required special 3-D analysis methods to determine if indications were isolated or linked to form larger discontinuities. To overcome this problem, a software program, compatible with portable personal computers, was written to define a parameter necessary for determining if a group of indications detected from nondestructive ultrasonic testing of turbine and generator rotors could combine to form larger ones. The approach involved using a computer algorithm to model each indication as a three dimensional sphere with a diameter equal to the ultrasonic signal amplitude from an equivalent flat bottom hole reflector and setting a minimum gap distance between spheres necessary for a link up. The program was implemented following a commonly used data format accepted by industry recognized computer codes. The gap distance and link up parameters were
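The link-up step described above (model each indication as a sphere sized from its equivalent flat-bottom-hole amplitude, then merge spheres whose surface-to-surface gap is within a threshold) is a grouping problem that can be sketched with a union-find pass. This is a minimal illustration of the grouping logic only, with made-up coordinates; it does not reproduce the program's sizing rules or data format.

```python
import math

def link_groups(spheres, gap):
    """Group spherical indications: two spheres link when the gap between
    their surfaces is at most `gap`.  spheres: list of (x, y, z, diameter)."""
    n = len(spheres)
    parent = list(range(n))

    def find(i):                      # union-find with path halving
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    for i in range(n):
        xi, yi, zi, di = spheres[i]
        for j in range(i + 1, n):
            xj, yj, zj, dj = spheres[j]
            centre_dist = math.dist((xi, yi, zi), (xj, yj, zj))
            if centre_dist - (di + dj) / 2.0 <= gap:   # surface gap test
                parent[find(i)] = find(j)

    groups = {}
    for i in range(n):
        groups.setdefault(find(i), []).append(i)
    return sorted(groups.values())

# Two nearby indications (surface gap 0.5) and one distant one, threshold 1.0:
spheres = [(0, 0, 0, 2.0), (2.5, 0, 0, 2.0), (20, 0, 0, 2.0)]
clusters = link_groups(spheres, gap=1.0)
```

Transitivity falls out of the union-find: a chain of pairwise-linked indications is reported as one combined reflector, which is exactly the "link up to form larger discontinuities" criterion the engineer needs to evaluate.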
Overview of the assessment of the french in-field tritium experiment with computer codes
International Nuclear Information System (INIS)
Crabol, B.; Graziani, G.; Edlund, O.
1989-01-01
In the framework of the international cooperation settled for the realization of the French tritium experiment, an expert group for the assessment of computer codes, including the Joint Research Center of Ispra (European Communities), Studsvik (Sweden) and the Atomic Energy Commission (France), has been organized. The aim of the group was as follows: - to help the design of the experiment by evaluating beforehand the consequences of the release, - to interpret the results of the experiment. This paper describes the last task and gives the main conclusions drawn from the work
International Nuclear Information System (INIS)
Tran, Michel
2015-01-01
Since the advent of Cone Beam Computed Tomography (CBCT) in dento-maxillo-facial radiology, many CBCT devices with different technical characteristics have been produced. Technical variations between CBCT devices and acquisition settings may lead to differences in image quality. In order to compare the performance of three limited field-of-view CBCT devices, an objective and subjective evaluation of image quality was carried out using an ex-vivo phantom that combines both diagnostic and technical features. A significant difference in image quality was found between the five acquisition protocols of the study. (author) [fr]
DEFF Research Database (Denmark)
Mishra, Shantnu R.; Pavlasek, Tomas J. F.; Muresan, Letitia V.
1980-01-01
An automatic facility for measuring the three-dimensional structure of the near fields of microwave radiators and scatterers is described. The amplitude and phase of different polarization components can be recorded in analog and digital form using a microprocessor-based system. The stored data … are transferred to a large high-speed computer for bulk processing and for the production of isophot and equiphase contour maps or profiles. The performance of the system is demonstrated through results for a single conical horn, for interacting rectangular horns, and for multiple cylindrical scatterers …
New Research Perspectives in the Emerging Field of Computational Intelligence to Economic Modeling
Directory of Open Access Journals (Sweden)
Vasile MAZILESCU
2009-01-01
Computational Intelligence (CI) is a new development paradigm of intelligent systems which has resulted from a synergy between fuzzy sets, artificial neural networks, evolutionary computation, machine learning, etc., broadening computer science, physics, economics, engineering, mathematics, and statistics. It is imperative to know why these tools can be potentially relevant and effective for economic and financial modeling. After presenting this synergic new paradigm of intelligent systems, the paper presents as a practical case study the fuzzy and temporal properties of the knowledge formalism embedded in an Intelligent Control System (ICS) based on the FT-algorithm. We are not dealing with high-level reasoning methods, because we think that real-time problems can only be solved by rather low-level reasoning. Most of the overall run-time of fuzzy expert systems is used in the match phase. To achieve fast reasoning, the number of fuzzy set operations must be reduced. For this, we use a compiled fuzzy knowledge structure, like Rete, because it is required for real-time responses. Solving the match-time predictability problem would allow us to build much more powerful reasoning techniques.
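To illustrate why the match phase dominates fuzzy expert system run-time, the sketch below scores one crisp fact against a conjunctive fuzzy rule using a min t-norm. It is a generic toy, not the FT-algorithm or the compiled Rete-like structure the paper describes; the membership functions and variable names are invented.

```python
def trapmf(x, a, b, c, d):
    """Trapezoidal membership function with break points a <= b <= c <= d."""
    if x <= a or x >= d:
        return 0.0
    if b <= x <= c:
        return 1.0
    if x < b:
        return (x - a) / (b - a)   # rising edge
    return (d - x) / (d - c)       # falling edge

def rule_match(fact, antecedents):
    """Degree to which a crisp fact matches a conjunctive fuzzy rule.

    antecedents maps variable name -> membership function; the match
    degree is the minimum (a common t-norm) over all antecedents.
    Evaluating this for every rule against every fact is the match
    phase whose cost a compiled structure tries to reduce.
    """
    return min(mf(fact[var]) for var, mf in antecedents.items())

rule = {
    "temperature": lambda t: trapmf(t, 60, 70, 90, 100),    # "hot"
    "pressure":    lambda p: trapmf(p, 1.0, 1.5, 2.5, 3.0)  # "high"
}
fact = {"temperature": 80, "pressure": 1.25}
print(rule_match(fact, rule))  # → 0.5
```

The rule fires to the degree 0.5 because pressure is only halfway up the "high" membership ramp; a Rete-like compiled network would share such membership evaluations across rules instead of recomputing them per rule.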
International Nuclear Information System (INIS)
Valentine, G.A.; Groves, K.R.; Gable, C.W.; Perry, F.V.; Crowe, B.M.
1993-01-01
Assessing the risk of future magmatic activity at a potential Yucca Mountain radioactive waste repository requires, in addition to event probabilities, some knowledge of the consequences of such activity. Magmatic consequences are divided into an eruptive component, which pertains to the possibility of radioactive waste being erupted onto the surface of Yucca Mountain, and a subsurface component, which occurs whether there is an accompanying eruption or not. The subsurface component pertains to a suite of processes such as hydrothermal activity, changes in country rock properties, and long term alteration of the hydrologic flow field which change the waste isolation system. This paper is the second in a series describing progress on studies of the effects of magmatic activity. We describe initial results of field analog studies at small volume basaltic centers where detailed measurements are being conducted of the amount of wall rock debris that can be erupted as a function of depth in the volcanic plumbing system. Constraints from field evidence of wall rock entrainment mechanisms are also discussed. Evidence is described for a mechanism of producing subhorizontal sills versus subvertical dikes, an issue that is important for assessing subsurface effects. Finally, new modeling techniques, which are being developed in order to capture the three dimensional complexities of real geologic situations in subsurface effects, are described
Directory of Open Access Journals (Sweden)
F. Ferricci
1994-06-01
The volcanic area of Vulcano experienced major unrest, which raised the fumarolic field temperatures from slightly less than 300 °C to ca. 700 °C between 1988 and 1993. The structure underlying the crater, investigated by drillings and by different geophysical techniques, is relatively well known. This led us to attempt modelling the magnetic anomaly that could be generated by sudden pressure variations in the magma chamber at shallow depth. The rocks embedding the intrusive rock penetrated by drill-holes to a depth of ca. 2000 m are characterized by high susceptibility, which points to the possibility of obtaining significant magnetic anomalies with acceptably weak pressure pulses. The model for straightforward computation of the anomalous field was drawn accounting for (1) the inferred geometry of the Curie isotherm, (2) the presence of a spherical magma reservoir, 2 km wide and centred at a depth of 3.5 km, overlain by (3) a 0.5 km wide and 1.5 km high cylinder simulating the intrusion first revealed by drillings. The model elements (2) and (3) behave as a single source zone and are assumed to lie beyond the Curie point, the contribution to the piezomagnetic effect being provided by the surrounding medium. Under such conditions, a 10 MPa pressure pulse applied within the source zone provides a 4 nT piezomagnetic anomaly, compatible with the amplitude of the anomalies observed at those volcanoes of the world where magnetic surveillance is routinely carried out. The analytical method used for computation of the magnetic field generated by mechanical stress is extensively discussed, and the contribution of piezomagnetism to rapid variations of the magnetic field is compared to other types of magnetic anomalies likely to occur at active volcanoes.
Fast and accurate methods for phylogenomic analyses
Directory of Open Access Journals (Sweden)
Warnow Tandy
2011-10-01
Background: Species phylogenies are not estimated directly, but rather through phylogenetic analyses of different gene datasets. However, true gene trees can differ from the true species tree (and hence from one another) due to biological processes such as horizontal gene transfer, incomplete lineage sorting, and gene duplication and loss, so that no single gene tree is a reliable estimate of the species tree. Several methods have been developed to estimate species trees from estimated gene trees, differing according to the specific algorithmic technique used and the biological model used to explain differences between species and gene trees. Relatively little is known about the relative performance of these methods. Results: We report on a study evaluating several different methods for estimating species trees from sequence datasets, simulating sequence evolution under a complex model including indels (insertions and deletions), substitutions, and incomplete lineage sorting. The most important finding of our study is that some fast and simple methods are nearly as accurate as the most accurate methods, which employ sophisticated statistical methods and are computationally quite intensive. We also observe that methods that explicitly consider errors in the estimated gene trees produce more accurate trees than methods that assume the estimated gene trees are correct. Conclusions: Our study shows that highly accurate estimations of species trees are achievable, even when gene trees differ from each other and from the species tree, and that these estimations can be obtained using fairly simple and computationally tractable methods.
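One minimal example of the "fast and simple" end of the species-tree spectrum is majority-rule consensus over clades of rooted gene trees. This toy is not one of the specific methods evaluated in the study; the clade representation and taxon names are hypothetical.

```python
from collections import Counter

def majority_clades(gene_trees):
    """Majority-rule consensus clades from rooted gene trees.

    Each gene tree is given as a set of clades (frozensets of taxon
    names). Clades present in more than half of the input trees are
    retained; such clades are guaranteed mutually compatible, so they
    jointly define a consensus tree even when individual gene trees
    conflict (e.g. due to incomplete lineage sorting).
    """
    counts = Counter(clade for tree in gene_trees for clade in tree)
    half = len(gene_trees) / 2.0
    return {clade for clade, n in counts.items() if n > half}

# Three gene trees on taxa A, B, C; the second conflicts with the rest
t1 = {frozenset({"A", "B"}), frozenset({"A", "B", "C"})}
t2 = {frozenset({"B", "C"}), frozenset({"A", "B", "C"})}
t3 = {frozenset({"A", "B"}), frozenset({"A", "B", "C"})}
print(majority_clades([t1, t2, t3]))
```

The clade {A, B} survives (present in 2 of 3 trees) while the conflicting {B, C} is discarded, showing how a consensus can recover a species-tree signal that no single gene tree is trusted to carry alone.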
Energy Technology Data Exchange (ETDEWEB)
Oxstrand, Johanna; Le Blanc, Katya L.; Bly, Aaron
2015-02-01
The paper-based procedures currently used for nearly all activities in the commercial nuclear power industry have a long history of ensuring safe operation of the plants. However, there is potential to greatly increase efficiency and safety by improving how the human operator interacts with the procedures. One way to achieve these improvements is through the use of computer-based procedures (CBPs). A CBP system offers a wide variety of improvements, such as context-driven job aids, integrated human performance tools (e.g., placekeeping, correct component verification, etc.), and dynamic step presentation. The latter means that the CBP system can display only the steps relevant to the operating mode, plant status, and the task at hand. A dynamic presentation of the procedure (also known as context-sensitive procedures) guides the operator down the path of relevant steps based on the current conditions. This feature reduces the operator's workload and inherently reduces both the risk of incorrectly marking a step as not applicable and the risk of incorrectly performing a step that should be marked as not applicable. The research team at the Idaho National Laboratory has developed a prototype CBP system for field workers, which has been evaluated from a human factors and usability perspective in four laboratory studies. Based on the results of each study, revisions were made to the CBP system. However, a crucial step in gaining the end users' (e.g., auxiliary operators, maintenance technicians, etc.) acceptance is to put the system in their hands and let them use it as part of their everyday work activities. In the spring of 2014 the first field evaluation of the INL CBP system was conducted at a nuclear power plant. Auxiliary operators conduct a functional test of one of three backup air compressors each week. During the field evaluation activity, one auxiliary operator conducted the test with the paper-based procedure while a second auxiliary operator
International Nuclear Information System (INIS)
Kamide, Y.; Matsushita, S.
1980-07-01
Numerical solution of the current conservation equation gives the distributions of electric fields and currents in the global ionosphere produced by the field-aligned currents. By altering ionospheric conductivity distributions as well as the field-aligned current densities and configurations to simulate a magnetospheric substorm life cycle, which is assumed to last for five hours, various patterns of electric fields and currents are computed for every 30-second interval in the life cycle. The simulated results are compiled in the form of a color movie, where variations of electric equi-potential curves are the first sequence, electric current-vector changes are the second, and fluctuations of the electric current system are the third. The movie compresses real time by a factor of 1/180, taking 1.7 minutes of running time for one sequence. One of the most striking features of this simulation is the clear demonstration of rapid and large scale interactions between the auroral zone and middle-low latitudes during the substorm sequences. This technical note provides an outline of the numerical scheme and world-wide contour maps of the electric potential, ionospheric current vectors, and the equivalent ionospheric current system at 5-minute intervals as an aid in viewing the movie and to further detailed study of the 'model' substorms
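The current-conservation equation behind this simulation, div(Σ grad φ) = j∥, can be illustrated on a flat sheet with uniform conductance using simple Jacobi relaxation. This is a drastically simplified Cartesian sketch, not the global spherical ionospheric solver used for the movie; the grid size, conductance, and current densities below are invented.

```python
import numpy as np

def solve_potential(j_par, sigma, h, n_iter=5000):
    """Jacobi relaxation for the height-integrated current-conservation
    equation  div(sigma * grad(phi)) = j_par  on a uniform grid.

    Flat-sheet, uniform-conductance simplification of the global
    ionospheric problem: phi is held at 0 on the boundary, j_par is
    the field-aligned current density (source term), h the spacing.
    """
    phi = np.zeros_like(j_par)
    src = j_par * h * h / sigma
    for _ in range(n_iter):
        # RHS is evaluated before assignment, so this is true Jacobi
        phi[1:-1, 1:-1] = 0.25 * (
            phi[:-2, 1:-1] + phi[2:, 1:-1] +
            phi[1:-1, :-2] + phi[1:-1, 2:] - src[1:-1, 1:-1]
        )
    return phi

# An upward/downward field-aligned current pair (crude sketch)
j = np.zeros((41, 41))
j[20, 10], j[20, 30] = 1e-6, -1e-6           # A/m^2, hypothetical
phi = solve_potential(j, sigma=10.0, h=1e5)  # 10 S sheet, 100 km grid
Ey, Ex = np.gradient(-phi, 1e5)              # E = -grad(phi)
print(phi[20, 10] < 0 < phi[20, 30])         # → True
```

The resulting potential well and peak at the two current footprints, with the electric field flowing between them, is a toy version of the equipotential patterns the movie animates frame by frame.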