Calculation of the inverse data space via sparse inversion
Saragiotis, Christos; Doulgeris, Panagiotis C.; Verschuur, Dirk Jacob Eric
2011-01-01
The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function while constraining the $\ell_1$ norm of the solution, which is the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal.
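The $\ell_1$-constrained least-squares idea described above can be sketched with iterative soft thresholding (ISTA). This is a generic sketch with illustrative names; the paper's actual operator and solver are not specified here.

```python
import numpy as np

def soft_threshold(x, t):
    """Elementwise soft thresholding: the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=2000):
    """Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative shrinkage (ISTA)."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - step * (A.T @ (A @ x - b)), lam * step)
    return x
```

The $\ell_1$ penalty drives most entries of the solution to exactly zero, which is what makes the recovered inverse data space sparse.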
Inverse problems in linear transport theory
Dressler, K.
1988-01-01
Inverse problems for a class of linear kinetic equations are investigated. The aim is to identify the scattering kernel of a transport equation (corresponding to the structure of a background medium) by observing the 'albedo' part of the solution operator for the corresponding direct initial boundary value problem. This amounts to obtaining information about an integral operator in an integrodifferential equation through an overdetermined boundary value problem. We first derive a constructive method for solving direct halfspace problems and prove a new factorization theorem for the solutions. Using this result we investigate stationary inverse problems with respect to well-posedness (e.g. reduce them to classical ill-posed problems, such as integral equations of the first kind). In the time-dependent case we show that a quite general inverse problem is well posed and solve it constructively. (orig.)
An Entropic Estimator for Linear Inverse Problems
Amos Golan
2012-05-01
In this paper we examine an Information-Theoretic method for solving noisy linear inverse estimation problems which encompasses under a single framework a whole class of estimation methods. Under this framework, the prior information about the unknown parameters (when such information exists) and constraints on the parameters can be incorporated in the statement of the problem. The method builds on the basics of the maximum entropy principle and consists of transforming the original problem into the estimation of a probability density on an appropriate space naturally associated with the statement of the problem. This estimation method is generic in the sense that it provides a framework for analyzing non-normal models, is easy to implement and is suitable for all types of inverse problems, such as small, ill-conditioned or noisy-data problems. First-order approximation, large-sample properties and convergence in distribution are developed as well. Analytical examples, statistics for model comparisons and evaluations, which are inherent to this method, are discussed and complemented with explicit examples.
Microlocal analysis of a seismic linearized inverse problem
Stolk, C.C.
1999-01-01
The seismic inverse problem is to determine the wavespeed $c(x)$ in the interior of a medium from measurements at the boundary. In this paper we analyze the linearized inverse problem in general acoustic media. The problem is to find a left inverse of the linearized forward map $F$, or equivalently, to find the
Inverse Modelling Problems in Linear Algebra Undergraduate Courses
Martinez-Luaces, Victor E.
2013-01-01
This paper will offer an analysis from a theoretical point of view of mathematical modelling, applications and inverse problems of both causation and specification types. Inverse modelling problems give the opportunity to establish connections between theory and practice and to show this fact, a simple linear algebra example in two different…
LinvPy : a Python package for linear inverse problems
Beaud, Guillaume François Paul
2016-01-01
The goal of this project is to make a Python package including the tau-estimator algorithm to solve linear inverse problems. The package must be distributed, well documented, easy to use and easy to extend for future developers.
Treating experimental data of inverse kinetic method by unitary linear regression analysis
Zhao Yusen; Chen Xiaoliang
2009-01-01
The theory of treating experimental data from the inverse kinetic method by unitary linear regression analysis was described. Not only the reactivity but also the effective neutron source intensity could be calculated by this method. A computer code was compiled based on the inverse kinetic method and unitary linear regression analysis. Data from the zero-power facility BFS-1 in Russia were processed and the results were compared. The results show that the reactivity and the effective neutron source intensity can be obtained correctly by treating experimental data from the inverse kinetic method using unitary linear regression analysis, and that the precision of the reactivity measurement is improved. The central element efficiency can be calculated using the reactivity. The results also show that the effect on the reactivity measurement caused by the external neutron source should be considered when the reactor power is low and the intensity of the external neutron source is strong. (authors)
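"Unitary linear regression" here reads as a single-variable least-squares line fit. A minimal sketch of that fitting step, with illustrative variable names (the mapping of intercept and slope onto reactivity and source intensity is model-specific and not taken from the paper):

```python
import numpy as np

def line_fit(x, y):
    """Ordinary least-squares fit of y = a + b*x; returns (a, b).

    In the inverse-kinetics setting above, the intercept and slope of such a
    fit carry the reactivity and effective-source information; the exact
    physical mapping depends on the kinetics model and is not shown here.
    """
    X = np.column_stack([np.ones_like(x), x])
    (a, b), *_ = np.linalg.lstsq(X, y, rcond=None)
    return a, b
```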
Activation Product Inverse Calculations with NDI
Gray, Mark Girard [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-27
NDI-based forward calculations of activation product concentrations can be used systematically to infer structural element concentrations from measured activation product concentrations with an iterative algorithm. The algorithm converges exactly for the basic production-depletion chain with explicit activation product production, and approximately, in the least-squares sense, for the full production-depletion chain with explicit activation product production and no sub-production-depletion chain. The algorithm is suitable for automation.
Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel
El-Gebeily, M.; Yushau, B.
2008-01-01
In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equations, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…
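In a worksheet these tasks rely on array formulas such as MINVERSE and MMULT; outside Excel the same two computations look like the sketch below (NumPy is used purely for illustration, not mentioned in the note).

```python
import numpy as np

# Solve A x = b and invert A, the two core tasks discussed above.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([3.0, 5.0])

x = np.linalg.solve(A, b)    # solving the system directly (preferred)
A_inv = np.linalg.inv(A)     # explicit inverse: the MINVERSE analogue
x_via_inv = A_inv @ b        # MMULT(MINVERSE(A), b) analogue
```

Solving directly is numerically preferable to forming the inverse, which mirrors the note's point that the spreadsheet user sees every step of how the problem is solved.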
Two-Dimensional Linear Inversion of GPR Data with a Shifting Zoom along the Observation Line
Raffaele Persico
2017-09-01
Linear inverse scattering problems can be solved by regularized inversion of a matrix, whose calculation and inversion may require significant computing resources, in particular a significant amount of RAM. This effort depends on the extent of the investigation domain: when the domain becomes electrically large, a large amount of data must be gathered and a large number of unknowns must be sought, which leads in turn to the inversion of excessively large matrices. Here, we consider the problem of a ground-penetrating radar (GPR) survey in two-dimensional (2D) geometry, with antennas at an electrically short distance from the soil. In particular, we present a strategy to afford the inversion of large investigation domains, based on a shifting zoom procedure. The proposed strategy was successfully validated using experimental radar data.
An application of sparse inversion on the calculation of the inverse data space of geophysical data
Saragiotis, Christos; Doulgeris, Panagiotis C.; Verschuur, Eric
2011-07-01
Multiple reflections as observed in seismic reflection measurements often hide arrivals from the deeper target reflectors and need to be removed. The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function and by constraining the $\ell_1$ norm of the solution, which is the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal. © 2011 IEEE.
The possibilities of linearized inversion of internally scattered seismic data
Aldawood, Ali; Alkhalifah, Tariq Ali; Hoteit, Ibrahim; Zuberi, Mohammad; Turkiyyah, George
2014-01-01
Least-squares migration is an iterative linearized inversion scheme that tends to suppress the migration artifacts and enhance the spatial resolution of the migrated image. However, standard least-squares migration, based on imaging single-scattering energy, may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults. To alleviate this problem, we propose a linearized inversion framework to migrate internally multiply scattered energy. We applied this least-squares migration of internal multiples to image a vertical fault. Tests on synthetic data demonstrate the ability of the proposed method to resolve a vertical fault plane that is poorly resolved by least-squares imaging using primaries only. We also demonstrate the robustness of the proposed scheme in the presence of white Gaussian random observational noise and in the case of imaging the fault plane using inaccurate migration velocities.
A Sparse Approximate Inverse Preconditioner for Nonsymmetric Linear Systems
Benzi, M.; Tůma, Miroslav
1998-01-01
Roč. 19, č. 3 (1998), s. 968-994 ISSN 1064-8275 R&D Projects: GA ČR GA201/93/0067; GA AV ČR IAA230401 Keywords: large sparse systems * iterative methods * preconditioning * approximate inverse * sparse linear systems * sparse matrices * incomplete factorizations * conjugate gradient-type methods Subject RIV: BA - General Mathematics Impact factor: 1.378, year: 1998
Lebrun, D.
1997-05-22
The aim of the dissertation is the linearized inversion of multicomponent seismic data for 3D elastic horizontally stratified media, using the Born approximation. A Jacobian matrix is constructed; it will be used to model seismic data from elastic parameters. The inversion technique, relying on singular value decomposition (SVD) of the Jacobian matrix, is described. Next, the resolution of the inverted elastic parameters is quantitatively studied. A first use of the technique is shown in the framework of an evaluation of a sea-bottom acquisition (synthetic data). Finally, a real data set acquired with a conventional marine technique is inverted. (author) 70 refs.
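The SVD-based inversion step can be sketched as a truncated pseudo-inverse; `J`, `d` and the truncation level below are illustrative stand-ins for the dissertation's Jacobian and data, not its actual operators.

```python
import numpy as np

def tsvd_solve(J, d, k):
    """Truncated-SVD solution of J m = d, keeping the k largest singular values.

    Discarding small singular values regularizes the linearized inversion at
    the cost of resolution along the poorly constrained parameter directions,
    which is what the resolution analysis above quantifies.
    """
    U, s, Vt = np.linalg.svd(J, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[:k] = 1.0 / s[:k]
    return Vt.T @ (s_inv * (U.T @ d))
```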
Inverse boundary element calculations based on structural modes
Juhl, Peter Møller
2007-01-01
The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods sol...
Linearized versus non-linear inverse methods for seismic localization of underground sources
Oh, Geok Lian; Jacobsen, Finn
2013-01-01
The problem of localization of underground sources from seismic measurements detected by several geophones located on the ground surface is addressed. Two main approaches to the solution of the problem are considered: a beamforming approach that is derived from the linearized inversion problem, a...
New nonlinear methods for linear transport calculations
Adams, M.L.
1993-01-01
We present a new family of methods for the numerical solution of the linear transport equation. With these methods an iteration consists of an '$S_N$ sweep' followed by an '$S_2$-like' calculation. We show, by analysis as well as numerical results, that iterative convergence is always rapid. We show that this rapid convergence does not depend on a consistent discretization of the $S_2$-like equations - they can be discretized independently from the $S_N$ equations. We show further that independent discretizations can offer significant advantages over consistent ones. In particular, we find that in a wide range of problems, an accurate discretization of the $S_2$-like equation can be combined with a crude discretization of the $S_N$ equations to produce an accurate $S_N$ answer. We demonstrate this by analysis as well as numerical results. (orig.)
Point source reconstruction principle of linear inverse problems
Terazono, Yasushi; Matani, Ayumu; Fujimaki, Norio; Murata, Tsutomu
2010-01-01
Exact point source reconstruction for underdetermined linear inverse problems with a block-wise structure was studied. In a block-wise problem, elements of a source vector are partitioned into blocks. Accordingly, a leadfield matrix, which represents the forward observation process, is also partitioned into blocks. A point source is a source having only one nonzero block. An example of such a problem is current distribution estimation in electroencephalography and magnetoencephalography, where a source vector represents a vector field and a point source represents a single current dipole. In this study, the block-wise norm, a block-wise extension of the $\ell_p$-norm, was defined as the family of cost functions of the inverse method. The main result is that a set of three conditions was found to be necessary and sufficient for block-wise norm minimization to ensure exact point source reconstruction for any leadfield matrix that admits such reconstruction. The block-wise norm that satisfies the conditions is the sum of the cost of all the observations of source blocks, or in other words, the block-wisely extended leadfield-weighted $\ell_1$-norm. Additional results are that minimization of such a norm always provides block-wisely sparse solutions and that its solutions form cones in source space.
Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems
Benzi, M. [Universita di Bologna (Italy); Tuma, M. [Inst. of Computer Sciences, Prague (Czech Republic)
1996-12-31
A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
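The solver-side usage of an approximate inverse looks like the sketch below. A simple Jacobi (diagonal) approximate inverse stands in for the paper's factorized sparse approximate inverse, since the point illustrated is only how such an operator is applied as a preconditioner inside a Krylov method; the test matrix is also illustrative.

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import LinearOperator, gmres

n = 100
# A nonsymmetric sparse test matrix: tridiagonal plus an extra off-diagonal band.
A = diags([-1.0, 4.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr") \
    + diags([0.5], [2], shape=(n, n))
b = np.ones(n)

# Approximate inverse applied as a preconditioner: here M ~ diag(A)^-1.
d = A.diagonal()
M = LinearOperator((n, n), matvec=lambda v: v / d)

x, info = gmres(A, b, M=M)   # info == 0 signals convergence
```

A factorized approximate inverse is applied the same way: its action is just two sparse matrix-vector products per iteration, which is why it parallelizes better than triangular solves from incomplete factorizations.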
Xin-Jia Meng
2015-01-01
Multidisciplinary reliability is an important part of reliability-based multidisciplinary design optimization (RBMDO). However, it usually involves a considerable amount of computation. The purpose of this paper is to improve the computational efficiency of multidisciplinary inverse reliability analysis. A multidisciplinary inverse reliability analysis method based on collaborative optimization with a combination of linear approximations (CLA-CO) is proposed in this paper. In the proposed method, the multidisciplinary reliability assessment problem is first transformed into a problem of searching for the most probable failure point (MPP) of inverse reliability, and then the search for the MPP of multidisciplinary inverse reliability is performed within the framework of CLA-CO. This method improves the MPP search through two elements. One is treating the discipline analyses as equality constraints in the subsystem optimization, and the other is using linear approximations corresponding to subsystem responses as a replacement for the consistency equality constraint in the system optimization. With these two elements, the proposed method realizes parallel analysis of each discipline and achieves higher computational efficiency. Additionally, there are no difficulties in applying the proposed method to problems with non-normal distribution variables. One mathematical test problem and an electronic packaging problem are used to demonstrate the effectiveness of the proposed method.
Sound source reconstruction using inverse boundary element calculations
Schuhmacher, Andreas; Hald, Jørgen; Rasmussen, Karsten Bo
2003-01-01
Whereas standard boundary element calculations focus on the forward problem of computing the radiated acoustic field from a vibrating structure, the aim in this work is to reverse the process, i.e., to determine vibration from acoustic field data. This inverse problem is brought into a form suited … it is demonstrated that the L-curve criterion is robust with respect to the errors in a real measurement situation. In particular, it is shown that the L-curve criterion is superior to the more conventional generalized cross-validation (GCV) approach for the present tire noise studies.
Linearized inversion frameworks toward high-resolution seismic imaging
Aldawood, Ali
2016-09-01
…internally multiply scattered seismic waves to obtain highly resolved images delineating vertical faults that are otherwise not easily imaged by primaries. Seismic interferometry is conventionally based on the cross-correlation and convolution of seismic traces to transform seismic data from one acquisition geometry to another. The conventional interferometric transformation yields virtual data that suffer from low temporal resolution, wavelet distortion, and correlation/convolution artifacts. I therefore incorporate a least-squares datuming technique to interferometrically transform vertical-seismic-profile surface-related multiples to surface-seismic-profile primaries. This yields redatumed data with high temporal resolution and fewer artifacts, which are subsequently imaged to obtain highly resolved subsurface images. Tests on synthetic examples demonstrate the efficiency of the proposed techniques, yielding highly resolved migrated sections compared with images obtained by imaging conventionally redatumed data. I further advance the recently developed cost-effective Generalized Interferometric Multiple Imaging procedure, which aims to image not only first-order but also higher-order multiples. I formulate this procedure as a linearized inversion framework and solve it as a least-squares problem. Tests of the least-squares Generalized Interferometric Multiple Imaging framework on synthetic datasets demonstrate that it can provide highly resolved migrated images and delineate vertical fault planes compared with the standard procedure. The results support the assertion that this linearized inversion framework can illuminate subsurface zones that are mainly illuminated by internally scattered energy.
The linearized inversion of the generalized interferometric multiple imaging
Aldawood, Ali
2016-09-06
The generalized interferometric multiple imaging (GIMI) procedure can be used to image duplex waves and other higher-order internal multiples. Imaging duplex waves could help illuminate subsurface zones that are not easily illuminated by primaries, such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiples, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI procedure yields migrated images that suffer from low spatial resolution, migration artifacts, and cross-talk noise. To alleviate these problems, we propose a least-squares GIMI framework in which we formulate the first two steps as a linearized inversion problem when imaging first-order internal multiples. Tests on synthetic datasets demonstrate the ability to localize subsurface scatterers in their true positions, and to delineate a vertical fault plane using the proposed method. We also demonstrate the robustness of the proposed framework when imaging the scatterers or the vertical fault plane with erroneous migration velocities.
Linear ideal MHD stability calculations for ITER
Hogan, J.T.
1988-01-01
A survey of MHD stability limits has been made to address issues arising from the MHD--poloidal field design task of the US ITER project. This is a summary report on the results obtained to date. The study evaluates the dependence of ballooning, Mercier and low-n ideal linear MHD stability on key system parameters to estimate overall MHD constraints for ITER. 17 refs., 27 figs
Optimization of linear Monte Carlo calculations
Troubetzkoy, E.S.
1991-01-01
The variance of the calculation is minimized on the basis of parameters generated by a learning technique. The optimum is obtained if sampling is biased proportionally to the expected root-mean-square score. In this paper, the method is compared with existing methods, which bias proportionally to the expected score
Calculation of U, Ra, Th and K contents in uranium ore by multiple linear regression method
Lin Chao; Chen Yingqiang; Zhang Qingwen; Tan Fuwen; Peng Guanghui
1991-01-01
A multiple linear regression method was used to analyze γ spectra of uranium ore samples and to calculate the contents of U, Ra, Th and K. In comparison with the inverse matrix method, its advantage is that no standard samples of pure U, Ra, Th and K are needed for obtaining the response coefficients.
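The regression step can be sketched as an ordinary least-squares unmixing; the response matrix and contents below are synthetic stand-ins (the paper's point is precisely that its response coefficients are obtained without pure standards).

```python
import numpy as np

def estimate_contents(spectrum, responses):
    """Least-squares estimate of component contents from a measured spectrum.

    Each column of `responses` is the spectral response per unit content of
    one component (e.g. U, Ra, Th, K); the measured spectrum is modeled as
    their linear mixture, and the contents are the regression coefficients.
    """
    contents, *_ = np.linalg.lstsq(responses, spectrum, rcond=None)
    return contents
```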
An Exact Formula for Calculating Inverse Radial Lens Distortions
Pierre Drap
2016-06-01
This article presents a new approach to calculating the inverse of radial distortions. Reverse radial distortion is currently modeled by a polynomial expression; the method presented here proposes another polynomial expression whose new coefficients are a function of the original ones. After describing the state of the art, the proposed method is developed. It is based on a formal calculus involving a power series, used to deduce a recursive formula for the new coefficients. We present several implementations of this method and describe the experiments conducted to assess the validity of the new approach. Such a non-iterative approach, using another polynomial expression that can be deduced from the first one, can be interesting in terms of performance, reuse of existing software, or bridging between existing software tools that do not consider distortion from the same point of view.
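The article derives the inverse-polynomial coefficients in closed form by power-series reversion; the sketch below instead recovers them numerically by least squares, which is a simple cross-check of the idea. The one-coefficient forward model used here is an assumption for illustration.

```python
import numpy as np

def inverse_distortion_coeffs(k, n_terms=3, r_max=1.0):
    """Fit coefficients b_i of the inverse radial distortion model
    r = r_d * (1 + b_1*r_d^2 + b_2*r_d^4 + ...), given forward coefficients
    k_i in r_d = r * (1 + k_1*r^2 + k_2*r^4 + ...)."""
    r = np.linspace(1e-3, r_max, 2000)
    r_d = r * (1.0 + sum(ki * r ** (2 * (i + 1)) for i, ki in enumerate(k)))
    # Fit r/r_d - 1 as a polynomial in even powers of r_d.
    X = np.column_stack([r_d ** (2 * (i + 1)) for i in range(n_terms)])
    b, *_ = np.linalg.lstsq(X, r / r_d - 1.0, rcond=None)
    return b
```

For a single forward coefficient k1, power-series reversion gives b1 = -k1 and b2 = 3*k1**2 to leading order, which the fit reproduces to good accuracy.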
Alkhalifah, Tariq Ali; Choi, Yun Seok
2012-09-25
Traveltime inversion focuses on the geometrical features of the waveform (traveltimes), which is generally smooth, and thus tends to provide averaged (smoothed) information about the model. On the other hand, general waveform inversion uses additional elements of the wavefield, including amplitudes, to extract higher-resolution information, but this comes at the cost of introducing non-linearity to the inversion operator, complicating the convergence process. We use unwrapped phase-based objective functions in waveform inversion as a link between the two general types of inversions in a domain in which such contributions to the inversion process can be easily identified and controlled. The instantaneous traveltime is a measure of the average traveltime of the energy in a trace as a function of frequency. It unwraps the phase of wavefields, yielding far less non-linearity in the objective function than that experienced with conventional wavefields, yet it still holds most of the critical wavefield information in its frequency dependency. However, it suffers from non-linearity introduced by the model (or reflectivity), as reflections from independent events in our model interact with each other. Unwrapping the phase of such a model can mitigate this non-linearity as well. Specifically, a simple modification to the inverted domain (or model) can reduce the effect of the model-induced non-linearity and, thus, make the inversion more convergent. Simple numerical examples demonstrate these assertions.
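One common way to compute the instantaneous-traveltime attribute described above is the Fourier-domain ratio tau(omega) = Re[F{t*d(t)} / F{d(t)}], which follows from differentiating the spectral phase with respect to frequency and so avoids explicit phase unwrapping. The single-spike trace below is an illustrative test signal, not data from the paper.

```python
import numpy as np

def instantaneous_traveltime(trace, dt):
    """Instantaneous traveltime tau(omega) = Re[ F{t*d(t)} / F{d(t)} ].

    Differentiating the (unwrapped) phase of the spectrum with respect to
    angular frequency yields this ratio.  For a trace containing a single
    arrival at time t0, it returns t0 at every frequency.
    """
    t = np.arange(len(trace)) * dt
    F = np.fft.rfft(trace)
    G = np.fft.rfft(t * trace)
    return np.real(G / F)
```

Note that the division is ill-conditioned at frequencies where the spectrum is near zero; practical implementations stabilize it (e.g. with a damping term in the denominator).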
Comparison of Linear Microinstability Calculations of Varying Input Realism
Rewoldt, G.
2003-01-01
The effect of varying 'input realism', or varying completeness of the input data, for linear microinstability calculations, in particular on the critical value of the ion temperature gradient for the ion temperature gradient mode, is investigated using gyrokinetic and gyrofluid approaches. The calculations show that varying input realism can have a substantial quantitative effect on the results.
Comparison of linear microinstability calculations of varying input realism
Rewoldt, G.; Kinsey, J.E.
2004-01-01
The effect of varying 'input realism' or varying completeness of the input data for linear microinstability calculations, in particular on the critical value of the ion temperature gradient for the ion temperature gradient mode, is investigated using gyrokinetic and gyrofluid approaches. The calculations show that varying input realism can have a substantial quantitative effect on the results
Lin, Lin; Yang, Chao; Chen, Mohan; He, Lixin
2013-01-01
We describe how to apply the recently developed pole expansion and selected inversion (PEXSI) technique to Kohn–Sham density functional theory (DFT) electronic structure calculations that are based on atomic orbital discretization. We give analytic expressions for evaluating the charge density, the total energy, the Helmholtz free energy and the atomic forces (including both the Hellmann–Feynman force and the Pulay force) without using the eigenvalues and eigenvectors of the Kohn–Sham Hamiltonian. We also show how to update the chemical potential without using Kohn–Sham eigenvalues. The advantage of using PEXSI is that it has a computational complexity much lower than that associated with the matrix diagonalization procedure. We demonstrate the performance gain by comparing the timing of PEXSI with that of diagonalization on insulating and metallic nanotubes. For these quasi-1D systems, the complexity of PEXSI is linear with respect to the number of atoms. This linear scaling can be observed in our computational experiments when the number of atoms in a nanotube is larger than a few hundred. Both the wall clock time and the memory requirement of PEXSI are modest. This even makes it possible to perform Kohn–Sham DFT calculations for 10 000-atom nanotubes with a sequential implementation of the selected inversion algorithm. We also perform an accurate geometry optimization calculation on a truncated (8, 0) boron nitride nanotube system containing 1024 atoms. Numerical results indicate that the use of PEXSI does not lead to loss of the accuracy required in a practical DFT calculation. (paper)
An application of sparse inversion on the calculation of the inverse data space of geophysical data
Saragiotis, Christos; Doulgeris, Panagiotis C.; Verschuur, Eric
2011-01-01
Multiple reflections as observed in seismic reflection measurements often hide arrivals from the deeper target reflectors and need to be removed. The inverse data space provides a natural separation of primaries and surface-related multiples
Alvarez-Estrada, R.F.
1979-01-01
A comprehensive review of the inverse scattering solution of certain non-linear evolution equations of physical interest in one space dimension is presented. We explain in some detail the interrelated techniques which allow one to linearize exactly the following equations: (1) the Korteweg-de Vries equation; (2) the non-linear Schrödinger equation; (3) the modified Korteweg-de Vries equation; (4) the Sine-Gordon equation. We concentrate on discussing the pairs of linear operators which accomplish such an exact linearization and the solution of the associated initial value problem. The application of the method to other non-linear evolution equations is reviewed very briefly.
Inverse Boundary Value Problem for Non-linear Hyperbolic Partial Differential Equations
Nakamura, Gen; Vashisth, Manmohan
2017-01-01
In this article we are concerned with an inverse boundary value problem for a non-linear wave equation of divergence form with space dimension $n\geq 3$. This non-linear wave equation has a trivial solution, i.e. the zero solution. By linearizing this equation at the trivial solution, we have the usual linear isotropic wave equation with the speed $\sqrt{\gamma(x)}$ at each point $x$ in a given spatial domain. For any small solution $u=u(t,x)$ of this non-linear equation, we have the linear isotr...
Forensic analysis of explosions: Inverse calculation of the charge mass
Voort, M.M. van der; Wees, R.M.M. van; Brouwer, S.D.; Jagt-Deutekom, M.J. van der; Verreault, J.
2015-01-01
Forensic analysis of explosions consists of determining the point of origin, the explosive substance involved, and the charge mass. Within the EU FP7 project Hyperion, TNO developed the Inverse Explosion Analysis (TNO-IEA) tool to estimate the charge mass and point of origin based on observed damage
A compressive sensing approach to the calculation of the inverse data space
Khan, Babar Hasan
2012-01-01
Seismic processing in the Inverse Data Space (IDS) has its advantages; for example, the task of removing the multiples simply becomes muting the zero-offset and zero-time data in the inverse domain. Calculation of the Inverse Data Space by sparse inversion techniques has mitigated some artifacts. We reformulate the problem by taking advantage of developments from the field of Compressive Sensing. The seismic data are compressed at the sensor level by recording projections of the traces. We then process this compressed data directly to estimate the inverse data space. Because the data set is smaller, we also gain in terms of computational complexity.
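The sparse recovery underlying such compressive-sensing formulations can be sketched in a few lines of Python using iterative shrinkage-thresholding (ISTA). The matrix sizes, regularization weight, and function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def ista(A, y, lam=0.01, iters=2000):
    """Iterative shrinkage-thresholding for min ||Ax - y||^2 + lam * ||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x - step * (A.T @ (A @ x - y))          # gradient step on the data misfit
        x = np.sign(g) * np.maximum(np.abs(g) - step * lam, 0.0)  # soft-thresholding
    return x

rng = np.random.default_rng(0)
n, m, k = 200, 60, 5                                # signal length, measurements, nonzeros
x_true = np.zeros(n)
x_true[rng.choice(n, size=k, replace=False)] = rng.normal(0.0, 1.0, k)
A = rng.normal(0.0, 1.0 / np.sqrt(m), size=(m, n))  # random sensing matrix
y = A @ x_true                                      # compressed measurements

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(rel_err)
```

With far fewer measurements than signal samples (60 vs. 200), the l1 penalty is what makes the recovery well posed, which is the same mechanism the abstract relies on.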
Calculation of the exponential function of linear idempotent operators
Chavoya-Aceves, O.; Luna, H.M.
1989-01-01
We give a method to calculate the exponential exp[Ar], where A is a linear operator which satisfies the relation A^n = I, with n an integer and I the identity operator. The method is generalized to operators such that A^(n+1) = A and is applied to obtain some Lorentz transformations which generalize the notion of 'boost'. (Author)
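For the simplest case n = 2 (A^2 = I), grouping even and odd powers of the series collapses the exponential to cosh/sinh terms, which is exactly the structure of a Lorentz boost. The following Python check (an illustrative sketch, not the authors' derivation) verifies the closed form against the raw power series:

```python
import numpy as np

# Boost-like generator satisfying A @ A = I (the n = 2 case)
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])
t = 0.7  # "rapidity" parameter

# Closed form from grouping even/odd powers: exp(tA) = cosh(t) I + sinh(t) A
closed = np.cosh(t) * np.eye(2) + np.sinh(t) * A

# Direct truncated power series sum_m (tA)^m / m!
series = np.eye(2)
term = np.eye(2)
for m in range(1, 30):
    term = term @ (t * A) / m
    series = series + term

print(np.allclose(closed, series))  # True
```

For general n the same grouping yields n coefficient functions, one per residue class of the power modulo n.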
Zha, Yuanyuan; Yeh, Tian-Chyi J.; Illman, Walter A.; Zeng, Wenzhi; Zhang, Yonggen; Sun, Fangqiang; Shi, Liangsheng
2018-03-01
Hydraulic tomography (HT) is a recently developed technology for characterizing high-resolution, site-specific heterogeneity using hydraulic data (nd) from a series of cross-hole pumping tests. To properly account for the subsurface heterogeneity and to flexibly incorporate additional information, geostatistical inverse models, which permit a large number of spatially correlated unknowns (ny), are frequently used to interpret the collected data. However, the memory storage requirements for the covariance of the unknowns (ny × ny) in these models are prodigious for large-scale 3-D problems. Moreover, the sensitivity evaluation is often computationally intensive using traditional difference method (ny forward runs). Although employment of the adjoint method can reduce the cost to nd forward runs, the adjoint model requires intrusive coding effort. In order to resolve these issues, this paper presents a Reduced-Order Successive Linear Estimator (ROSLE) for analyzing HT data. This new estimator approximates the covariance of the unknowns using Karhunen-Loeve Expansion (KLE) truncated to nkl order, and it calculates the directional sensitivities (in the directions of nkl eigenvectors) to form the covariance and cross-covariance used in the Successive Linear Estimator (SLE). In addition, the covariance of unknowns is updated every iteration by updating the eigenvalues and eigenfunctions. The computational advantages of the proposed algorithm are demonstrated through numerical experiments and a 3-D transient HT analysis of data from a highly heterogeneous field site.
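The storage saving from a truncated Karhunen-Loeve expansion can be illustrated in Python: the covariance of the unknowns is replaced by its leading nkl eigenpairs, cutting storage from ny*ny to roughly ny*nkl entries. The grid size, correlation length, and covariance model below are illustrative assumptions:

```python
import numpy as np

ny, nkl = 200, 20
x = np.linspace(0.0, 1.0, ny)

# Exponential covariance model for the spatially correlated unknowns
cov = np.exp(-np.abs(x[:, None] - x[None, :]) / 0.2)

w, v = np.linalg.eigh(cov)              # eigenvalues in ascending order
w, v = w[::-1], v[:, ::-1]              # reorder to descending
cov_kl = (v[:, :nkl] * w[:nkl]) @ v[:, :nkl].T   # rank-nkl approximation

# Most of the variance is captured by the leading modes
rel_err = np.linalg.norm(cov - cov_kl) / np.linalg.norm(cov)
print(rel_err)
```

The rapid eigenvalue decay of smooth covariance kernels is what makes a 10:1 truncation like this one accurate.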
Continuity and general perturbation of the Drazin inverse for closed linear operators
N. Castro González
2002-01-01
We study perturbations and continuity of the Drazin inverse of a closed linear operator A and obtain explicit error estimates in terms of the gap between closed operators and the gap between ranges and nullspaces of operators. The results are used to derive a theorem on the continuity of the Drazin inverse for closed operators and to describe the asymptotic behavior of operator semigroups.
Chien-Wei Lee
2013-10-01
We derive a statistical physics model of the two-dimensional electron gas (2DEG) and propose an accurate approximation method for calculating the quantum-mechanical effects of the metal-oxide-semiconductor (MOS) structure in the accumulation and strong inversion regions. We use an exponential surface potential approximation in solving the quantization energy levels and derive the function of the density of states in the 2D to 3D transition region by applying the uncertainty principle and the Schrödinger equation in k-space. The simulation results show that our approximation method and theory of the density of states solve the two major problems of previous research: the non-negligible error caused by the linear potential approximation and the inconsistency of the density of states and carrier distribution in the 2D to 3D transition region.
A Projected Non-linear Conjugate Gradient Method for Interactive Inverse Kinematics
Engell-Nørregård, Morten; Erleben, Kenny
2009-01-01
Inverse kinematics is the problem of posing an articulated figure to obtain a wanted goal, without regarding inertia and forces. Joint limits are modeled as bounds on individual degrees of freedom, leading to a box-constrained optimization problem. We present a projected non-linear conjugate gradient optimization method suitable for box-constrained optimization problems for inverse kinematics. We show application on inverse kinematics positioning of a human figure. Performance is measured and compared to a traditional Jacobian Transpose method. Visual quality of the developed method...
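The box-constraint projection at the heart of such methods is easy to demonstrate. The sketch below uses a plain projected gradient step rather than the paper's projected non-linear conjugate gradient, and the toy objective and joint limits are invented for illustration:

```python
import numpy as np

def projected_gradient(grad, x0, lo, hi, step=0.1, iters=200):
    """Minimize a smooth objective subject to box constraints lo <= x <= hi."""
    x = np.clip(np.asarray(x0, dtype=float), lo, hi)
    for _ in range(iters):
        x = np.clip(x - step * grad(x), lo, hi)   # gradient step, then project
    return x

# Toy objective: pull two "joint angles" toward an unreachable target pose
target = np.array([2.0, -1.5])
grad = lambda q: q - target                        # gradient of 0.5 * ||q - target||^2
lo = np.array([-1.0, -1.0])                        # joint limits (box bounds)
hi = np.array([1.0, 1.0])

q = projected_gradient(grad, [0.0, 0.0], lo, hi)
print(q)   # converges to the clamped target [1.0, -1.0]
```

Because the feasible set is a box, the projection is a cheap per-coordinate clip, which is what makes this family of methods attractive for interactive use.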
Pahn, T. [Pahn Ingenieure, Am Seegraben 17b, 03051 Cottbus, Germany]; Rolfes, R. [Institut für Statik und Dynamik, Leibniz Universität Hannover, Appelstraße 9A, 30167 Hannover, Germany]; Jonkman, J. [National Renewable Energy Laboratory, 15013 Denver West Parkway, Golden, Colorado 80401, USA]
2017-02-20
A significant number of wind turbines installed today have reached their designed service life of 20 years, and the number will rise continuously. Most of these turbines promise a more economical performance if they operate for more than 20 years. To assess a continued operation, we have to analyze the load-bearing capacity of the support structure with respect to site-specific conditions. Such an analysis requires the comparison of the loads used for the design of the support structure with the actual loads experienced. This publication presents the application of a so-called inverse load calculation to a 5-MW wind turbine support structure. The inverse load calculation determines external loads derived from a mechanical description of the support structure and from measured structural responses. Using numerical simulations with the software FAST, we investigated the influence of wind-turbine-specific effects such as the wind turbine control or the dynamic interaction between the loads and the support structure on the presented inverse load calculation procedure. FAST is used to study the inverse calculation of simultaneously acting wind and wave loads, which has not been carried out until now. Furthermore, the application of the inverse load calculation procedure to a real 5-MW wind turbine support structure is demonstrated. In terms of this practical application, setting up the mechanical system for the support structure using measurement data is discussed. The paper presents results for defined load cases and assesses the accuracy of the inversely derived dynamic loads for both the simulations and the practical application.
Linear filtering applied to Monte Carlo criticality calculations
Morrison, G.W.; Pike, D.H.; Petrie, L.M.
1975-01-01
A significant improvement in the acceleration of the convergence of the eigenvalue computed by Monte Carlo techniques has been developed by applying linear filtering theory to Monte Carlo calculations for multiplying systems. A Kalman filter was applied to a KENO Monte Carlo calculation of an experimental critical system consisting of eight interacting units of fissile material. A comparison of the filter estimate and the Monte Carlo realization was made. The Kalman filter converged in five iterations to 0.9977. After 95 iterations, the average k-eff from the Monte Carlo calculation was 0.9981. This demonstrates that the Kalman filter has the potential to reduce the calculational effort for multiplying systems. Other examples and results are discussed
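A minimal scalar analogue of that filtering idea treats k-eff as a constant state observed through noisy per-generation estimates. The noise level and initialization below are illustrative assumptions, not values from the KENO study:

```python
import numpy as np

def scalar_kalman(measurements, r, x0=1.0, p0=1.0):
    """Filter noisy samples of a constant state; r is the measurement variance."""
    x, p = x0, p0
    for z in measurements:
        k = p / (p + r)          # Kalman gain
        x = x + k * (z - x)      # pull the estimate toward the new measurement
        p = (1.0 - k) * p        # shrink the error covariance
    return x

rng = np.random.default_rng(1)
true_keff = 0.998
samples = true_keff + rng.normal(0.0, 0.01, 100)   # noisy cycle-by-cycle estimates
est = scalar_kalman(samples, r=0.01 ** 2)
print(est)   # close to 0.998
```

For a constant state the gain decays like 1/(number of samples), so the filter effectively performs an optimally weighted running average of the Monte Carlo realizations.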
Parametrisation of linear accelerator electron beam for computerised dosimetry calculations
Millan, P.E.; Millan, S.; Hernandez, A.; Andreo, P.
1979-01-01
A previously published age-diffusion model has been adapted to obtain parameters for the Saggittaire linear accelerator electron beams. The calculations are shown and the results discussed. A comparison is presented between measured and predicted percentage depth doses for electron beams at various energies between 10 and 32 MeV. Theoretical isodose curves are compared, for an energy of 10 MeV, with experimental curves. The parameters obtained are used for computer electron isodose curve calculation in a program called FIJOE adapted from a previously published program. This program makes it possible to correct for irregular body contours, but not for internal inhomogeneities. (UK)
Power calculation of linear and angular incremental encoders
Prokofev, Aleksandr V.; Timofeev, Aleksandr N.; Mednikov, Sergey V.; Sycheva, Elena A.
2016-04-01
Automation technology is constantly expanding its role in improving the efficiency of manufacturing and testing processes in all branches of industry. More than ever before, the mechanical movements of linear slides, rotary tables, robot arms, actuators, etc. are numerically controlled. Linear and angular incremental photoelectric encoders measure mechanical motion and transmit the measured values back to the control unit. The capabilities of these systems are undergoing continual development in terms of their resolution, accuracy and reliability, their measuring ranges, and maximum speeds. This article discusses a method of power calculation for linear and angular incremental photoelectric encoders, to find the optimum parameters for components such as light emitters, photo-detectors, linear and angular scales, and optical components. It analyzes methods and devices that permit high resolutions in the order of 0.001 mm or 0.001°, as well as large measuring lengths of over 100 mm. In linear and angular incremental photoelectric encoders, the optical beam, usually formed by a condenser lens, passes through the measuring unit and changes its value depending on the movement of a scanning head or measuring raster. The transmitted light beam is converted into an electrical signal by the photo-detector block for processing in the electronics block. The starting point of the power calculation is therefore the required value of the optical signal at the input of the photo-detector block, which must be reliably recorded and processed in the electronic unit of linear and angular incremental optoelectronic encoders.
Emittance calculations for the Stanford Linear Collider injector
Sheppard, J.C.; Clendenin, J.E.; Helm, R.H.; Lee, M.J.; Miller, R.H.; Blocker, C.A.
1983-03-01
A series of measurements have been performed to determine the emittance of the high intensity, single bunch beam that is to be injected into the Stanford Linear Collider. On-line computer programs were used to control the Linac for the purpose of data acquisition and to fit the data to a model in order to deduce the beam emittance. This paper will describe the method of emittance calculation and present some of the measurement results
Calculations of beam dynamics in Sandia linear electron accelerators, 1984
Poukey, J.W.; Coleman, P.D.
1985-03-01
A number of code and analytic studies were made during 1984 which pertain to the Sandia linear accelerators MABE and RADLAC. In this report the authors summarize the important results of the calculations. New results include a better understanding of gap-induced radial oscillations, leakage currents in a typical MABE gas, emittance growth in a beam passing through a series of gaps, some new diocotron results, and the latest diode simulations for both accelerators. 23 references, 30 figures, 1 table
Planktonic food webs revisited: Reanalysis of results from the linear inverse approach
Hlaili, Asma Sakka; Niquil, Nathalie; Legendre, Louis
2014-01-01
Identification of the trophic pathway that dominates a given planktonic assemblage is generally based on the distribution of biomasses among food-web compartments, or better, the flows of materials or energy among compartments. These flows are obtained by field observations and a posteriori analyses, including the linear inverse approach. In the present study, we re-analysed carbon flows obtained by inverse analysis at 32 stations in the global ocean and one large lake. Our results do not support two "classical" views of plankton ecology, i.e. that the herbivorous food web is dominated by mesozooplankton grazing on large phytoplankton, and the microbial food web is based on microzooplankton significantly consuming bacteria; our results suggest instead that phytoplankton are generally grazed by microzooplankton, of which they are the main food source. Furthermore, we identified the "phyto-microbial food web", where microzooplankton largely feed on phytoplankton, in addition to the already known "poly-microbial food web", where microzooplankton consume more or less equally various types of food. These unexpected results led to a (re)definition of the conceptual models corresponding to the four trophic pathways we found to exist in plankton, i.e. the herbivorous, multivorous, and two types of microbial food web. We illustrated the conceptual trophic pathways using carbon flows that were actually observed at representative stations. The latter can be calibrated to correspond to any field situation. Our study also provides researchers and managers with operational criteria for identifying the dominant trophic pathway in a planktonic assemblage, these criteria being based on the values of two carbon ratios that could be calculated from flow values that are relatively easy to estimate in the field.
On The Structure of The Inverse of a Linear Constant Multivariable ...
On The Structure of The Inverse of a Linear Constant Multivariable System. ... It is shown that the use of this representation has certain advantages in the design of multivariable feedback systems. Typical examples were considered to indicate the corresponding application. Keywords: Stability Functions, multivariable ...
Inverse chaos synchronization in linearly and nonlinearly coupled systems with multiple time-delays
Shahverdiev, E.M.; Hashimov, R.H.; Nuriev, R.A.; Hashimova, L.H.; Huseynova, E.M.; Shore, K.A.
2005-04-01
We report on inverse chaos synchronization between two unidirectionally linearly and nonlinearly coupled chaotic systems with multiple time-delays and find the existence and stability conditions for different synchronization regimes. We also study the effect of parameter mismatches on synchronization regimes. The method is tested on the famous Ikeda model. Numerical simulations fully support the analytical approach. (author)
Linear augmented plane wave method for self-consistent calculations
Takeda, T.; Kuebler, J.
1979-01-01
O.K. Andersen has recently introduced a linear augmented plane wave method (LAPW) for the calculation of electronic structure that was shown to be computationally fast. A more general formulation of an LAPW method is presented here. It makes use of a freely disposable number of eigenfunctions of the radial Schroedinger equation. These eigenfunctions can be selected in a self-consistent way. The present formulation also results in a computationally fast method. It is shown that Andersen's LAPW is obtained in a special limit from the present formulation. Self-consistent test calculations for copper show the present method to be remarkably accurate. As an application, scalar-relativistic self-consistent calculations are presented for the band structure of FCC lanthanum. (author)
Park, J. J.
2017-12-01
Sheared Layers in the Continental Crust: Nonlinear and Linearized inversion for Ps receiver functions Jeffrey Park, Yale University The interpretation of seismic receiver functions (RFs) in terms of isotropic and anisotropic layered structure can be complex. The relationship between structure and body-wave scattering is nonlinear. The anisotropy can involve more parameters than the observations can readily constrain. Finally, reflectivity-predicted layer reverberations are often not prominent in data, so that nonlinear waveform inversion can search in vain to match ghost signals. Multiple-taper correlation (MTC) receiver functions have uncertainties in the frequency domain that follow Gaussian statistics [Park and Levin, 2016a], so grid-searches for the best-fitting collections of interfaces can be performed rapidly to minimize weighted misfit variance. Tests for layer-reverberations can be performed in the frequency domain without reflectivity calculations, allowing flexible modelling of weak, but nonzero, reverberations. Park and Levin [2016b] linearized the hybridization of P and S body waves in an anisotropic layer to predict first-order Ps conversion amplitudes at crust and mantle interfaces. In an anisotropic layer, the P wave acquires small SV and SH components. To ensure continuity of displacement and traction at the top and bottom boundaries of the layer, shear waves are generated. Assuming hexagonal symmetry with an arbitrary symmetry axis, theory confirms the empirical stacking trick of phase-shifting transverse RFs by 90 degrees in back-azimuth [Shiomi and Park, 2008; Schulte-Pelkum and Mahan, 2014] to enhance 2-lobed and 4-lobed harmonic variation. Ps scattering is generated by sharp interfaces, so that RFs resemble the first derivative of the model. MTC RFs in the frequency domain can be manipulated to obtain a first-order reconstruction of the layered anisotropy, under the above modeling constraints and neglecting reverberations. Examples from long
Retrieval of collision kernels from the change of droplet size distributions with linear inversion
Onishi, Ryo; Takahashi, Keiko [Earth Simulator Center, Japan Agency for Marine-Earth Science and Technology, 3173-25 Showa-machi, Kanazawa-ku, Yokohama Kanagawa 236-0001 (Japan); Matsuda, Keigo; Kurose, Ryoichi; Komori, Satoru [Department of Mechanical Engineering and Science, Kyoto University, Yoshida-honmachi, Sakyo-ku, Kyoto 606-8501 (Japan)], E-mail: onishi.ryo@jamstec.go.jp, E-mail: matsuda.keigo@t03.mbox.media.kyoto-u.ac.jp, E-mail: takahasi@jamstec.go.jp, E-mail: kurose@mech.kyoto-u.ac.jp, E-mail: komori@mech.kyoto-u.ac.jp
2008-12-15
We have developed a new simple inversion scheme for retrieving collision kernels from the change of droplet size distribution due to collision growth. Three-dimensional direct numerical simulations (DNS) of steady isotropic turbulence with colliding droplets are carried out in order to investigate the validity of the developed inversion scheme. In the DNS, air turbulence is calculated using a quasi-spectral method; droplet motions are tracked in a Lagrangian manner. The initial droplet size distribution is set to be equivalent to that obtained in a wind tunnel experiment. Collision kernels retrieved by the developed inversion scheme are compared to those obtained by the DNS. The comparison shows that the collision kernels can be retrieved within 15% error. This verifies the feasibility of retrieving collision kernels using the present inversion scheme.
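The linear part of such a retrieval reduces to an overdetermined least-squares problem: invert a discretized forward operator for an unknown kernel vector. The operator, bin counts, and noise level in this Python sketch are invented for illustration and are not the authors' scheme:

```python
import numpy as np

rng = np.random.default_rng(3)
nbins, nker = 30, 8        # observed size-distribution bins, kernel unknowns

# Discretized forward model: change in the size distribution dN = G @ k
G = rng.uniform(0.0, 1.0, size=(nbins, nker))
k_true = np.abs(rng.normal(1.0, 0.3, nker))        # "true" collision kernel values
dN = G @ k_true + rng.normal(0.0, 1e-3, nbins)     # slightly noisy observations

# Least-squares retrieval of the kernel from the observed change
k_est, *_ = np.linalg.lstsq(G, dN, rcond=None)
print(np.max(np.abs(k_est - k_true)))
```

In practice the error bound quoted in the abstract (within 15%) depends on the conditioning of the forward operator, which this kind of singular-value analysis makes explicit.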
Fukuda, J.; Johnson, K. M.
2009-12-01
Studies utilizing inversions of geodetic data for the spatial distribution of coseismic slip on faults typically present the result as a single fault plane and slip distribution. Commonly the geometry of the fault plane is assumed to be known a priori and the data are inverted for slip. However, sometimes there is not strong a priori information on the geometry of the fault that produced the earthquake and the data is not always strong enough to completely resolve the fault geometry. We develop a method to solve for the full posterior probability distribution of fault slip and fault geometry parameters in a Bayesian framework using Monte Carlo methods. The slip inversion problem is particularly challenging because it often involves multiple data sets with unknown relative weights (e.g. InSAR, GPS), model parameters that are related linearly (slip) and nonlinearly (fault geometry) through the theoretical model to surface observations, prior information on model parameters, and a regularization prior to stabilize the inversion. We present the theoretical framework and solution method for a Bayesian inversion that can handle all of these aspects of the problem. The method handles the mixed linear/nonlinear nature of the problem through combination of both analytical least-squares solutions and Monte Carlo methods. We first illustrate and validate the inversion scheme using synthetic data sets. We then apply the method to inversion of geodetic data from the 2003 M6.6 San Simeon, California earthquake. We show that the uncertainty in strike and dip of the fault plane is over 20 degrees. We characterize the uncertainty in the slip estimate with a volume around the mean fault solution in which the slip most likely occurred. Slip likely occurred somewhere in a volume that extends 5-10 km in either direction normal to the fault plane. We implement slip inversions with both traditional, kinematic smoothing constraints on slip and a simple physical condition of uniform stress
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
Desmal, Abdulla
2014-07-01
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
Sparse contrast-source inversion using linear-shrinkage-enhanced inexact Newton method
Desmal, Abdulla; Bagci, Hakan
2014-01-01
A contrast-source inversion scheme is proposed for microwave imaging of domains with sparse content. The scheme uses inexact Newton and linear shrinkage methods to account for the nonlinearity and ill-posedness of the electromagnetic inverse scattering problem, respectively. Thresholded shrinkage iterations are accelerated using a preconditioning technique. Additionally, during Newton iterations, the weight of the penalty term is reduced consistently with the quadratic convergence of the Newton method to increase accuracy and efficiency. Numerical results demonstrate the applicability of the proposed method.
Inverse kinematics of a dual linear actuator pitch/roll heliostat
Freeman, Joshua; Shankar, Balakrishnan; Sundaram, Ganesh
2017-06-01
This work presents a simple, computationally efficient inverse kinematics solution for a pitch/roll heliostat using two linear actuators. The heliostat design and kinematics have been developed, modeled and tested using computer simulation software. A physical heliostat prototype was fabricated to validate the theoretical computations and data. Pitch/roll heliostats have numerous advantages including reduced cost potential and reduced space requirements, with a primary disadvantage being the significantly more complicated kinematics, which are solved here. Novel methods are applied to simplify the inverse kinematics problem which could be applied to other similar problems.
Sakurai, K; Shima, H [OYO Corp., Tokyo (Japan)]
1996-10-01
This paper proposes a modeling method for one-dimensional complex resistivity using a linear filter technique extended to complex resistivity. In addition, a numerical test of inversion was conducted using the monitoring results, to discuss the measured frequency band. The linear filter technique is a method by which the theoretical potential can be calculated for stratified structures, and it is widely used for the one-dimensional analysis of dc electrical exploration. The modeling can be carried out using only values of complex resistivity, without using values of potential. In this study, a bipolar method was employed as the electrode configuration. The numerical test of one-dimensional complex resistivity inversion was conducted using the formulated modeling. A three-layered structure model was used as a numerical model. A multi-layer structure with a thickness of 5 m was analyzed on the basis of apparent complex resistivity calculated from the model. From the results of the numerical test, it was found that both the chargeability and the time constant agreed well with those of the original model. A trade-off was observed between the chargeability and the time constant at the stage of convergence. 3 refs., 9 figs., 1 tab.
Li, Guo; Xia, Jun; Li, Lei; Wang, Lidai; Wang, Lihong V.
2015-03-01
Linear transducer arrays are readily available for ultrasonic detection in photoacoustic computed tomography. They offer low cost, hand-held convenience, and conventional ultrasonic imaging. However, the elevational resolution of linear transducer arrays, which is usually determined by the weak focus of the cylindrical acoustic lens, is about one order of magnitude worse than the in-plane axial and lateral spatial resolutions. Therefore, conventional linear scanning along the elevational direction cannot provide high-quality three-dimensional photoacoustic images due to the anisotropic spatial resolutions. Here we propose an innovative method to achieve isotropic resolutions for three-dimensional photoacoustic images through combined linear and rotational scanning. In each scan step, we first elevationally scan the linear transducer array, and then rotate the linear transducer array along its center in small steps, and scan again until 180 degrees have been covered. To reconstruct isotropic three-dimensional images from the multiple-directional scanning dataset, we use the standard inverse Radon transform originating from X-ray CT. We acquired a three-dimensional microsphere phantom image through the inverse Radon transform method and compared it with a single-elevational-scan three-dimensional image. The comparison shows that our method improves the elevational resolution by up to one order of magnitude, approaching the in-plane lateral-direction resolution. In vivo rat images were also acquired.
Chu, Dezhang; Lawson, Gareth L; Wiebe, Peter H
2016-05-01
The linear inversion commonly used in fisheries and zooplankton acoustics assumes a constant inversion kernel and ignores the uncertainties associated with the shape and behavior of the scattering targets, as well as other relevant animal parameters. Here, errors of the linear inversion due to uncertainty associated with the inversion kernel are quantified. A scattering model-based nonlinear inversion method is presented that takes into account the nonlinearity of the inverse problem and is able to estimate simultaneously animal abundance and the parameters associated with the scattering model inherent to the kernel. It uses sophisticated scattering models to estimate first, the abundance, and second, the relevant shape and behavioral parameters of the target organisms. Numerical simulations demonstrate that the abundance, size, and behavior (tilt angle) parameters of marine animals (fish or zooplankton) can be accurately inferred from the inversion by using multi-frequency acoustic data. The influence of the singularity and uncertainty in the inversion kernel on the inversion results can be mitigated by examining the singular values for linear inverse problems and employing a non-linear inversion involving a scattering model-based kernel.
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Nwankwo, Obioma; Glatting, Gerhard; Wenz, Frederik; Fleckenstein, Jens
2017-01-01
To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3% /1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3% /1 mm.
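Inverse transform sampling, as used for the VSM's position and energy PDFs, maps uniform random numbers through the inverse cumulative distribution function. A minimal Python example with an exponential target distribution (the distribution choice is illustrative, not the one used in the VSM):

```python
import numpy as np

rng = np.random.default_rng(42)

# Target: exponential distribution with rate lam.
# CDF F(x) = 1 - exp(-lam * x)  =>  inverse CDF F^{-1}(u) = -log(1 - u) / lam
lam = 2.0
u = rng.uniform(0.0, 1.0, 100_000)       # uniform draws
samples = -np.log1p(-u) / lam            # mapped through the inverse CDF

print(samples.mean())   # ≈ 1 / lam = 0.5
```

For tabulated PDFs such as those extracted from a phase space file, the same idea applies with a numerically inverted empirical CDF in place of the closed-form inverse.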
A single-source photon source model of a linear accelerator for Monte Carlo dose calculation.
Obioma Nwankwo
To introduce a new method of deriving a virtual source model (VSM) of a linear accelerator photon beam from a phase space file (PSF) for Monte Carlo (MC) dose calculation. A PSF of a 6 MV photon beam was generated by simulating the interactions of primary electrons with the relevant geometries of a Synergy linear accelerator (Elekta AB, Stockholm, Sweden) and recording the particles that reach a plane 16 cm downstream the electron source. Probability distribution functions (PDFs) for particle positions and energies were derived from the analysis of the PSF. These PDFs were implemented in the VSM using inverse transform sampling. To model particle directions, the phase space plane was divided into a regular square grid. Each element of the grid corresponds to an area of 1 mm2 in the phase space plane. The average direction cosines, Pearson correlation coefficient (PCC) between photon energies and their direction cosines, as well as the PCC between the direction cosines were calculated for each grid element. Weighted polynomial surfaces were then fitted to these 2D data. The weights are used to correct for heteroscedasticity across the phase space bins. The directions of the particles created by the VSM were calculated from these fitted functions. The VSM was validated against the PSF by comparing the doses calculated by the two methods for different square field sizes. The comparisons were performed with profile and gamma analyses. The doses calculated with the PSF and VSM agree to within 3%/1 mm (>95% pixel pass rate) for the evaluated fields. A new method of deriving a virtual photon source model of a linear accelerator from a PSF file for MC dose calculation was developed. Validation results show that the doses calculated with the VSM and the PSF agree to within 3%/1 mm.
Noh, Si Wan; Sol, Jeong; Lee, Jai Ki; Lee, Jong Il; Kim, Jang Lyul
2012-01-01
Calculation of the total number of disintegrations after intake of radioactive nuclides is indispensable for calculating a dose coefficient, which means committed effective dose per unit activity (Sv/Bq). In order to calculate the total number of disintegrations analytically, Birchall's algorithm has been commonly used. As described below, an inverse matrix should be calculated in the algorithm. As biokinetic models have become complicated, however, the inverse matrix sometimes does not exist and the total number of disintegrations cannot be calculated. Thus, a numerical method has been applied in the DCAL code used to calculate dose coefficients in ICRP publications and in the IMBA code. In this study, however, we applied the pseudo-inverse matrix to handle the cases in which the inverse matrix does not exist. In order to validate our method, the method was applied to two examples and the results were compared to the tabulated data in ICRP publications. MATLAB 2012a was used to calculate the total number of disintegrations, and the expm and pinv MATLAB built-in functions were employed
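The role of the pseudo-inverse is easy to see on a toy singular system. The NumPy sketch below mirrors what MATLAB's pinv does (the 2x2 matrix is invented for illustration): it returns the minimum-norm solution where a plain inverse would fail.

```python
import numpy as np

# Rank-deficient matrix: np.linalg.inv(M) raises LinAlgError here
M = np.array([[1.0, 2.0],
              [2.0, 4.0]])              # rank 1, determinant 0
b = np.array([1.0, 2.0])               # consistent right-hand side

# The Moore-Penrose pseudo-inverse still yields the minimum-norm solution
x = np.linalg.pinv(M) @ b
print(x)                                # [0.2, 0.4]
```

When the system is consistent, as here, the pseudo-inverse solution satisfies the original equations exactly while staying well defined even though det(M) = 0.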
On the internal stability of non-linear dynamic inversion: application to flight control
Alam, M.; Čelikovský, Sergej
2017-01-01
Roč. 11, č. 12 (2017), s. 1849-1861 ISSN 1751-8644 R&D Projects: GA ČR(CZ) GA17-04682S Institutional support: RVO:67985556 Keywords : flight control * non-linear dynamic inversion * stability Subject RIV: BC - Control Systems Theory OBOR OECD: Automation and control systems Impact factor: 2.536, year: 2016 http://library.utia.cas.cz/separaty/2017/TR/celikovsky-0476150.pdf
Frequency-domain full-waveform inversion with non-linear descent directions
Geng, Yu; Pan, Wenyong; Innanen, Kristopher A.
2018-05-01
Full-waveform inversion (FWI) is a highly non-linear inverse problem, normally solved iteratively, with each iteration involving an update constructed through linear operations on the residuals. Incorporating a flexible degree of non-linearity within each update may have important consequences for convergence rates, determination of low model wavenumbers and discrimination of parameters. We examine one approach for doing so, wherein higher order scattering terms are included within the sensitivity kernel during the construction of the descent direction, adjusting it away from that of the standard Gauss-Newton approach. These scattering terms are naturally admitted when we construct the sensitivity kernel by varying not the current but the to-be-updated model at each iteration. Linear and/or non-linear inverse scattering methodologies allow these additional sensitivity contributions to be computed from the current data residuals within any given update. We show that in the presence of pre-critical reflection data, the error in a second-order non-linear update to a background of s0 is, in our scheme, proportional to at most (Δs/s0)³ in the actual parameter jump Δs causing the reflection. In contrast, the error in a standard Gauss-Newton FWI update is proportional to (Δs/s0)². For numerical implementation of more complex cases, we introduce a non-linear frequency-domain scheme, with an inner and an outer loop. A perturbation is determined from the data residuals within the inner loop, and a descent direction based on the resulting non-linear sensitivity kernel is computed in the outer loop. We examine the response of this non-linear FWI using acoustic single-parameter synthetics derived from the Marmousi model. The inverted results vary depending on data frequency ranges and initial models, but we conclude that the non-linear FWI has the capability to generate high-resolution model estimates in both shallow and deep regions, and to converge rapidly, relative to a
A Closed Loop Inverse Kinematics Solver Intended for Offline Calculation Optimized with GA
Emil Dale Bjoerlykhaug
2018-01-01
This paper presents a simple approach to building a robotic control system. Instead of a conventional control system which solves the inverse kinematics in real time as the robot moves, an alternative approach is presented where the inverse kinematics is calculated ahead of time. This approach reduces the complexity and code necessary for the control system. Robot control systems are usually implemented in a low-level programming language. This new approach enables the use of high-level programming for the complex inverse kinematics problem. For our approach, we implement a program to solve the inverse kinematics, called the Inverse Kinematics Solver (IKS), in Java, with a simple graphical user interface (GUI) to load a file with desired end effector poses and edit the configuration of the robot using the Denavit-Hartenberg (DH) convention. The program uses the closed-loop inverse kinematics (CLIK) algorithm to solve the inverse kinematics problem. As an example, the IKS was set up to solve the kinematics for a custom-built serial link robot. The kinematics for the custom robot is presented, and an example of input and output files is also presented. Additionally, the gain of the loop in the IKS is optimized using a GA, resulting in almost a 50% decrease in computational time.
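The CLIK iteration the solver is built on can be sketched for a planar two-link arm (a toy Python sketch, not the paper's Java/DH implementation; link lengths, gain and target are arbitrary):

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> end effector."""
    return np.array([l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1]),
                     l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])])

def jacobian(q, l1=1.0, l2=1.0):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def clik(target, q0, gain=10.0, dt=0.01, steps=2000, tol=1e-8):
    """Closed-loop IK: integrate qdot = pinv(J(q)) * K * (target - fk(q))."""
    q = np.array(q0, dtype=float)
    for _ in range(steps):
        e = target - fk(q)                    # task-space error drives the loop
        if np.dot(e, e) < tol ** 2:
            break
        q += dt * gain * (np.linalg.pinv(jacobian(q)) @ e)
    return q

q = clik(np.array([1.2, 0.8]), [0.3, 0.5])
print(fk(q))   # ≈ [1.2, 0.8]
```

The product `gain * dt` is the effective per-step gain; tuning it is exactly the knob the paper optimizes with a GA.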
An investigation on the solutions for the linear inverse problem in gamma ray tomography
Araujo, Bruna G.M.; Dantas, Carlos C.; Santos, Valdemir A. dos; Finkler, Christine L.L.; Oliveira, Eric F. de; Melo, Silvio B.; Santos, M. Graca dos
2009-01-01
In this paper the results obtained in single-beam gamma-ray tomography are investigated with respect to the direct problem formulation and the applied solution for the linear system of equations. For image reconstruction, algebraic computational algorithms are used. The sparse under- and over-determined linear systems of equations were analyzed. Built-in functions of the Matlab software were applied and optimal solutions were investigated. Experimentally, a section of the tube is scanned from various positions and at different angles. The solution, to find the vector of coefficients μ from the vector of measured p values through inversion of the W matrix, constitutes an inverse problem. An industrial tomography process requires a numerical solution of the system of equations. The definition of an inverse problem according to Hadamard is considered, as well as the requirement of a well-posed problem in order to find stable solutions. The formulation of the basis function and the computational algorithm to structure the weight matrix W were analyzed. For a full-rank matrix W the obtained solution is unique, as expected. Total Least Squares was implemented, whose theory and computational algorithm give adequate treatment for the problems due to non-unique solutions of the system of equations. Stability of the solution was investigated by means of a regularization technique, and the comparison shows that it improves the results. An optimal solution as a function of the image quality, computation time and minimum residuals was quantified. The corresponding reconstructed images are shown in 3D graphics in order to compare with the solution. (author)
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Liu, Long; Liu, Wei
2018-04-01
A forward modeling and inversion algorithm is adopted in order to determine the water injection plan in an oilfield water injection network. The main idea of the algorithm is as follows: firstly, the oilfield water injection network is inversely calculated and the pumping station demand flow is obtained. Then, a forward modeling calculation is carried out to judge whether all water injection wells meet the requirements of injection allocation. If all water injection wells meet the requirements, the calculation is stopped; otherwise the demanded injection allocation flow rate is reduced by a certain step size for the water injection wells which do not meet the requirements, and the next iteration is started. The algorithm does not need to be embedded into the water injection network system solver and can be realized easily. An iterative method is used, which is suitable for computer programming. Experimental results show that the algorithm is fast and accurate.
Burkitt, A.N.; Irving, A.C.
1988-01-01
Two of the methods that are widely used in lattice gauge theory calculations requiring inversion of the fermion matrix are the Lanczos and the conjugate gradient algorithms. These algorithms are already known to be closely related. In fact, for matrix inversion in exact arithmetic they give identical results at each iteration and are just alternative formulations of a single algorithm. This equivalence survives rounding errors. We give the identities between the coefficients of the two formulations, enabling the best features of each to be combined. (orig.)
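For reference, the conjugate gradient iteration discussed above can be sketched for a generic symmetric positive-definite system (a textbook CG sketch, not a lattice fermion matrix, for which CG is typically applied to the normal equations):

```python
import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Plain conjugate gradient for a symmetric positive-definite matrix A."""
    n = len(b)
    max_iter = max_iter or 10 * n
    x = np.zeros(n)
    r = b - A @ x                       # residual
    p = r.copy()                        # search direction
    rs = r @ r
    for _ in range(max_iter):
        Ap = A @ p
        alpha = rs / (p @ Ap)           # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p       # A-conjugate direction update
        rs = rs_new
    return x

rng = np.random.default_rng(1)
M = rng.standard_normal((50, 50))
A = M @ M.T + 50.0 * np.eye(50)         # SPD and reasonably conditioned
b = rng.standard_normal(50)
x = conjugate_gradient(A, b)
print(np.linalg.norm(A @ x - b))        # small residual
```

The Lanczos coefficients can be read off from the same `alpha` and `rs_new / rs` quantities, which is the identity the abstract refers to.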
Guliyev, Namig J.
2008-01-01
International audience; Inverse problems of recovering the coefficients of Sturm–Liouville problems with the eigenvalue parameter linearly contained in one of the boundary conditions are studied: 1) from the sequences of eigenvalues and norming constants; 2) from two spectra. Necessary and sufficient conditions for the solvability of these inverse problems are obtained.
Resolution limits of migration and linearized waveform inversion images in a lossy medium
Schuster, Gerard T.; Dutta, Gaurav; Li, Jing
2017-01-01
The vertical- and horizontal-resolution limits Δx_lossy and Δz_lossy of post-stack migration and linearized waveform inversion images are derived for lossy data in the far-field approximation. Unlike the horizontal resolution limit Δx ∝ λz/L in a lossless medium, which worsens linearly with depth z, Δx_lossy ∝ z²/(QL) worsens quadratically with depth for a medium with small Q values. Here, Q is the quality factor, λ is the effective wavelength, L is the recording aperture, and loss in the resolution formulae is accounted for by replacing λ with z/Q. In contrast, the lossy vertical-resolution limit Δz_lossy only worsens linearly with depth, compared to Δz ∝ λ for a lossless medium. For both the causal and acausal Q models, the resolution limits are linearly proportional to 1/Q for small Q. These theoretical predictions are validated with migration images computed from lossy data.
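The contrast between the two horizontal-resolution scalings can be illustrated directly (illustrative numbers only; units are metres):

```python
def horizontal_resolution(z, wavelength, L, Q=None):
    """Far-field horizontal resolution limit of a migration image.

    Lossless: dx = wavelength * z / L (linear in depth z).  For a strongly
    attenuating medium, substituting wavelength -> z / Q as in the abstract
    gives dx = z**2 / (Q * L), i.e. quadratic degradation with depth.
    """
    if Q is None:
        return wavelength * z / L
    return z ** 2 / (Q * L)

L, lam = 2000.0, 30.0                   # recording aperture and wavelength
for z in (500.0, 1000.0, 2000.0):
    print(z,
          horizontal_resolution(z, lam, L),
          horizontal_resolution(z, lam, L, Q=20.0))
```

Doubling the depth doubles the lossless limit but quadruples the lossy one, which is the paper's headline result.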
Shi, Baoli; Wang, Yue; Jia, Lina
2011-02-11
Inverse gas chromatography (IGC) is an important technique for the characterization of surface properties of solid materials. A standard method of surface characterization is that the surface dispersive free energy of the solid stationary phase is first determined by using a series of linear alkane liquids as molecular probes, and the acid-base parameters are then calculated from the dispersive parameters. However, for the calculation of the surface dispersive free energy, two different methods are generally used: the Dorris-Gray method and the Schultz method. In this paper, the results calculated with the Dorris-Gray method and the Schultz method are compared by calculating their ratio from their basic equations and parameters. It can be concluded that the dispersive parameters calculated with the Dorris-Gray method will always be larger than those calculated with the Schultz method, and the ratio increases as the measuring temperature increases. Compared with the parameters in solvent handbooks, it seems that the traditional surface free energy parameters of n-alkanes listed in the papers using the Schultz method are not accurate enough, which can be shown with a published IGC experimental result. © 2010 Elsevier B.V. All rights reserved.
Comparison of inverse dynamics calculated by two- and three-dimensional models during walking
Alkjaer, T; Simonsen, E B; Dyhre-Poulsen, P
2001-01-01
…recorded the subjects as they walked across two force plates. The subjects were invited to approach a walking speed of 4.5 km/h. The ankle, knee and hip joint moments in the sagittal plane were calculated by 2D and 3D inverse dynamics analysis and compared. Despite the uniform walking speed (4.53 km/h) and similar footwear, relatively large inter-individual variations were found in the joint moment patterns during the stance phase. The differences between individuals were present in both the 2D and 3D analysis. For the entire sample of subjects the overall time course pattern of the ankle, knee and hip … the magnitude of the joint moments calculated by 2D and 3D inverse dynamics, but the inter-individual variation was not affected by the different models. The simpler 2D model seems therefore appropriate for human gait analysis. However, comparisons of gait data from different studies are problematic…
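A heavily simplified sketch of the 2D sagittal-plane moment calculation underlying such an analysis (quasi-static, force-plate term only; a full inverse-dynamics model as used in the study also includes segment weights and inertial terms, and all numbers here are invented):

```python
import numpy as np

def sagittal_joint_moment(joint_xy, cop_xy, grf_xy):
    """Quasi-static 2D joint moment from a force-plate measurement.

    Scalar cross product r x F of the lever arm (joint centre -> centre of
    pressure) with the ground reaction force.  Positive = counter-clockwise
    in the sagittal plane; units N*m for metres and newtons.
    """
    r = np.asarray(cop_xy, dtype=float) - np.asarray(joint_xy, dtype=float)
    fx, fy = grf_xy
    return r[0] * fy - r[1] * fx

# Ankle 12 cm behind and 10 cm above the centre of pressure; GRF roughly one
# body weight with a small horizontal braking component.
M_ankle = sagittal_joint_moment(joint_xy=(-0.12, 0.10),
                                cop_xy=(0.0, 0.0),
                                grf_xy=(-30.0, 800.0))
print(M_ankle)   # ≈ 93 N·m plantar-flexor-side moment
```

The 3D analysis replaces this scalar cross product with a full vector cross product plus segment kinematics, which is exactly where the 2D/3D differences studied in the paper arise.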
An improved method of inverse kinematics calculation for a six-link manipulator
Sasaki, Shinobu
1987-07-01
As one method of solving the inverse problem for a six-link manipulator, an improvement was made to a previously proposed calculation algorithm based on the solution of an algebraic equation of the 24th order. In this paper, the same type of polynomial was derived in the form of an equation of the 16th order, i.e., the order reduced by 8 compared to the previous algorithm. The accuracy of the solutions was found to be much improved. (author)
Linear and non-linear calculations of the hose instability in the ion-focused regime
Buchanan, H.L.
1982-01-01
A simple model is adopted to study the hose instability of an intense relativistic electron beam in a partially neutralized, low density ion channel (ion focused regime). Equations of motion for the beam and the channel are derived and linearized to obtain an approximate dispersion relation. The non-linear equations of motion are then solved numerically and the results compared to linearized data
The Inverse System Method Applied to the Derivation of Power System Non—linear Control Laws
Donghai LI; Xuezhi JIANG; et al.
1997-01-01
The differential geometric method has been applied effectively to a series of power system non-linear control problems. However, a set of differential equations must be solved to obtain the required diffeomorphic transformation, so the derivation of control laws is very complicated. In fact, because of the specificity of power system models, the required diffeomorphic transformation may be obtained directly, so it is unnecessary to solve a set of differential equations. In addition, the inverse system method is in reality equivalent to the differential geometric method and is not limited to affine nonlinear systems. Its physical meaning can be viewed directly and its deduction needs only algebraic operations and differentiation, so control laws can be obtained easily and the application to engineering is very convenient. The authors of this paper take steam valving control of a power system as a typical case to be studied. It is demonstrated that the control law deduced by the inverse system method is exactly the same as the one obtained by the differential geometric method. This conclusion will simplify the derivation of control laws for steam valving, excitation, converters and static var compensators by the differential geometric method and may be suited to similar control problems in other areas.
Kuchment, Peter
2015-05-10
© 2015, Springer Basel. In the previous paper (Kuchment and Steinhauer in Inverse Probl 28(8):084007, 2012), the authors introduced a simple procedure that allows one to detect whether and explain why internal information arising in several novel coupled physics (hybrid) imaging modalities could turn extremely unstable techniques, such as optical tomography or electrical impedance tomography, into stable, good-resolution procedures. It was shown that in all cases of interest, the Fréchet derivative of the forward mapping is a pseudo-differential operator with an explicitly computable principal symbol. If one can set up the imaging procedure in such a way that the symbol is elliptic, this would indicate that the problem was stabilized. In the cases when the symbol is not elliptic, the technique suggests how to change the procedure (e.g., by adding extra measurements) to achieve ellipticity. In this article, we consider the situation arising in acousto-optical tomography (also called ultrasound modulated optical tomography), where the internal data available involves the Green’s function, and thus depends globally on the unknown parameter(s) of the equation and its solution. It is shown that the technique of (Kuchment and Steinhauer in Inverse Probl 28(8):084007, 2012) can be successfully adopted to this situation as well. A significant part of the article is devoted to results on generic uniqueness for the linearized problem in a variety of situations, including those arising in acousto-electric and quantitative photoacoustic tomography.
Shielding calculation for treatment rooms of high energy linear accelerator
Elleithy, M.A.
2006-01-01
A review of the German Institute for Standardization (DIN) scheme of shielding calculation and the essential data required has been done for X-rays and electron beams in the energy range from 1 MeV to 50 MeV. Shielding calculations were done for the primary and secondary radiations generated during X-ray operation of the linac. In addition, shielding was calculated against the X-rays (bremsstrahlung) generated by useful electron beams. The calculations also covered the neutrons generated from the interactions of useful X-rays (at energies above 8 MeV) with the surroundings. The present application involved the computation of shielding against the doubly scattered components of X-rays and neutrons in the maze area and the thickness of the paraffin wax of the room door. A newly developed computer program was designed to assist shielding thickness calculations for a new linac installation or for replacing an existing machine. The program used a combination of published tables and figures in computing the shielding thickness at different locations for all possible radiation situations. The DIN published data for a 40 MeV accelerator room were compared with the program calculations, and good agreement was found between both. The developed program improved the accuracy and speed of the calculations.
Ungan, F.; Yesilgul, U.; Kasapoglu, E.; Sari, H.; Sökmen, I.
2012-01-01
In the present work, we have theoretically investigated the effects of applied electric and magnetic fields on the linear and nonlinear optical properties in a GaAs/Al_xGa_{1-x}As inverse parabolic quantum well for different Al concentrations at the well center. The Al concentration at the barriers was always x_max = 0.3. The energy levels and wave functions are calculated within the effective mass approximation and the envelope function approach. The analytical expressions of the optical properties are obtained by using the compact density-matrix approach. The linear, third-order nonlinear and total absorption and refractive index changes depending on the Al concentration at the well center are investigated as a function of the incident photon energy for different values of the applied electric and magnetic fields. The results show that the applied electric and magnetic fields have a great effect on these optical quantities. Highlights: (1) The x_c concentration has a great effect on the optical characteristics of these structures. (2) The electric and magnetic fields have a great effect on the optical properties of these structures. (3) The total absorption coefficients increase as the electric and magnetic fields increase. (4) The refractive index changes are reduced as the electric and magnetic fields increase.
Non-linear calculation of PCRV using dynamic relaxation
Schnellenbach, G.
1979-01-01
A brief review is presented of a numerical method called the dynamic relaxation method for stress analysis of the concrete in prestressed concrete pressure vessels. By this method the three-dimensional elliptic differential equations of the continuum are changed into the four-dimensional hyperbolic differential equations known as wave equations. The boundary value problem of the static system is changed into an initial and boundary value problem for which a solution exists if the physical system is defined at time t=0. The effect of non-linear stress-strain behaviour of the material as well as creep and cracking are considered
Ganapol, B.D.; Sumini, M.
1990-01-01
The time-dependent, space second-order discrete form of the monokinetic transport equation is given an analytical solution within the Laplace transform domain. The A_n dynamic model is presented and the general resolution procedure is worked out. The solution in the time domain is then obtained through the application of a numerical transform inversion technique. The justification of the research lies in the need to produce reliable and physically meaningful transport benchmarks for dynamic calculations. The paper is concluded by a few results followed by some physical comments.
Jiang, Yi; Li, Guoyang; Qian, Lin-Xue; Liang, Si; Destrade, Michel; Cao, Yanping
2015-10-01
We use supersonic shear wave imaging (SSI) technique to measure not only the linear but also the nonlinear elastic properties of brain matter. Here, we tested six porcine brains ex vivo and measured the velocities of the plane shear waves induced by acoustic radiation force at different states of pre-deformation when the ultrasonic probe is pushed into the soft tissue. We relied on an inverse method based on the theory governing the propagation of small-amplitude acoustic waves in deformed solids to interpret the experimental data. We found that, depending on the subjects, the resulting initial shear modulus [Formula: see text] varies from 1.8 to 3.2 kPa, the stiffening parameter [Formula: see text] of the hyperelastic Demiray-Fung model from 0.13 to 0.73, and the third- [Formula: see text] and fourth-order [Formula: see text] constants of weakly nonlinear elasticity from [Formula: see text]1.3 to [Formula: see text]20.6 kPa and from 3.1 to 8.7 kPa, respectively. Paired [Formula: see text] test performed on the experimental results of the left and right lobes of the brain shows no significant difference. These values are in line with those reported in the literature on brain tissue, indicating that the SSI method, combined to the inverse analysis, is an efficient and powerful tool for the mechanical characterization of brain tissue, which is of great importance for computer simulation of traumatic brain injury and virtual neurosurgery.
Fitting the two-compartment model in DCE-MRI by linear inversion.
Flouri, Dimitra; Lesnic, Daniel; Sourbron, Steven P
2016-09-01
Model fitting of dynamic contrast-enhanced MRI (DCE-MRI) data with nonlinear least squares (NLLS) methods is slow and may be biased by the choice of initial values. The aim of this study was to develop and evaluate a linear least squares (LLS) method to fit the two-compartment exchange and two-compartment filtration models. A second-order linear differential equation for the measured concentrations was derived in which the model parameters act as coefficients. Simulations of normal and pathological data were performed to determine calculation time, accuracy and precision under different noise levels and temporal resolutions. Performance of the LLS was evaluated by comparison against the NLLS. The LLS method is about 200 times faster, which reduces the calculation time for a 256 × 256 MR slice from 9 min to 3 s. For ideal data with low noise and high temporal resolution the LLS and NLLS were equally accurate and precise. The LLS was more accurate and precise than the NLLS at low temporal resolution, but less accurate at high noise levels. The data show that the LLS leads to a significant reduction in calculation times and to more reliable results at low noise levels. At higher noise levels the LLS becomes exceedingly inaccurate compared to the NLLS, but this may be improved using a suitable weighting strategy. Magn Reson Med 76:998-1006, 2016. © 2015 Wiley Periodicals, Inc.
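The linearization idea can be sketched on a one-compartment analogue (a simplification, not the paper's two-compartment models): integrating the model ODE once turns the parameters into linear coefficients, so a single least-squares solve recovers them with no initial guess and no iterations:

```python
import numpy as np

# One-compartment analogue: dc/dt = Ktrans*ca(t) - kep*c(t).  Integrating
# once gives c(t) = Ktrans*I[ca](t) - kep*I[c](t), linear in (Ktrans, kep).
dt = 0.002
t = np.arange(0.0, 6.0, dt)
ca = t * np.exp(-t)                              # synthetic arterial input
K_true, k_true = 0.6, 0.4

c = np.zeros_like(t)                             # forward-simulate the ODE
for i in range(len(t) - 1):
    c[i + 1] = c[i] + dt * (K_true * ca[i] - k_true * c[i])

# Design matrix of cumulative (trapezoidal) integrals, then one lstsq call.
cum = lambda y: np.concatenate(([0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * dt)))
X = np.column_stack([cum(ca), -cum(c)])
K_fit, k_fit = np.linalg.lstsq(X, c, rcond=None)[0]
print(K_fit, k_fit)                              # ≈ 0.6, 0.4
```

The paper applies the same trick one derivative higher: a second-order equation whose coefficients encode the two-compartment parameters, fitted by one linear solve per voxel.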
The giant resonances in hot nuclei. Linear response calculations
Braghin, F.L.; Vautherin, D.; Abada, A.
1995-01-01
The isovector response function of hot nuclear matter is calculated using various effective Skyrme interactions. For Skyrme forces with a small effective mass, the strength distribution is found to be nearly independent of temperature and shows little collective effect. In contrast, effective forces with an effective mass close to unity produce sizeable collective effects at zero temperature, which disappear at temperatures of a few MeV. The relevance of these results to the saturation of the multiplicity of photons emitted by the giant dipole resonance in hot nuclei, observed in recent experiments beyond T = 3 MeV, is discussed. (authors). 12 refs., 3 figs
NUMERICAL CALCULATIONS IN GEOMECHANICS APPLICABLE TO LINEAR STRUCTURES
Vlasov Aleksandr Nikolaevich
2012-10-01
The article covers the problem of applicability of finite-element and engineering methods to the development of a model of interaction between pipeline structures and the environment in complex conditions, with a view to the simulation and projection of exogenous geological processes, trustworthy assessment of their impacts on the pipeline, and the testing of varied calculation methodologies. Pipelining in areas that have a severe continental climate and permafrost soils is accompanied by cryogenic and exogenous processes and developments. It may also involve the development of karst and/or thermokarst. The adverse effect of the natural environment is intensified by the anthropogenic impact produced on the natural state of the area, causing destruction of forests and other vegetation, changing the ratio of soils in the course of the site planning, changing the conditions that impact the surface and underground waters, and causing the thawing of the bedding in the course of energy carrier pumping, etc. The aforementioned consequences are not covered by effective regulatory documents; the latter constitute general and incomplete recommendations in this respect. The appropriate mathematical description of physical processes in complex heterogeneous environments is a separate task to be addressed. The failure to consider the above consequences has repeatedly caused both minor damage (denudation of the pipeline, insulation stripping) and substantial accidents, the rectification of whose consequences was utterly expensive. Pipelining produces a thermal impact on the environment; it may alter the mechanical properties of soils and de-frost the clay. The stress of the pipeline is one of the principal factors that determine its strength and safety. The pipeline stress exposure caused by loads and impacts (self-weight, internal pressure, etc.) may be calculated in advance, and the accuracy of these calculations is sufficient for practical
Huang Can
2014-08-01
In the present paper, a numerical model combining radiation and conduction for porous materials is developed based on the finite volume method. The model can be used to investigate high-temperature thermal insulations, which are widely used in metallic thermal protection systems on reusable launch vehicles and in high-temperature fuel cells. The effective thermal conductivities (ETCs) which are measured experimentally can hardly be used separately to analyze the heat transfer behaviors of conduction and radiation in high-temperature insulation. By fitting the effective thermal conductivities to experimental data, the equivalent radiation transmittance, absorptivity and reflectivity, as well as a linear function to describe the relationship between temperature and conductivity, can be estimated by an inverse problems method. The deviation between the calculated and measured effective thermal conductivities is less than 4%. Using the material parameters so obtained for conduction and radiation, the heat transfer process in multilayer thermal insulation (MTI) is calculated, and the deviation between the calculated and the measured transient temperatures at a certain depth in the multilayer thermal insulation is less than 6.5%.
On a finite moment perturbation of linear functionals and the inverse Szegö transformation
Edinson Fuentes
2016-05-01
Given a sequence of moments $\\{c_{n}\\}_{n\in\ze}$ associated with an Hermitian linear functional $\mathcal{L}$ defined in the space of Laurent polynomials, we study a new functional $\mathcal{L}_{\Omega}$ which is a perturbation of $\mathcal{L}$ in such a way that a finite number of moments are perturbed. Necessary and sufficient conditions are given for the regularity of $\mathcal{L}_{\Omega}$, and a connection formula between the corresponding families of orthogonal polynomials is obtained. On the other hand, assuming $\mathcal{L}_{\Omega}$ is positive definite, the perturbation is analyzed through the inverse Szegö transformation.
Surface waves tomography and non-linear inversion in the southeast Carpathians
Raykova, R.B.; Panza, G.F.
2005-11-01
A set of shear-wave velocity models of the lithosphere-asthenosphere system in the southeast Carpathians is determined by the non-linear inversion of surface wave group velocity data, obtained from a tomographic analysis. The local dispersion curves are assembled for the period range 7 s - 150 s, combining regional group velocity measurements and published global Rayleigh wave dispersion data. The lithosphere-asthenosphere velocity structure is reliably reconstructed to depths of about 250 km. The thickness of the lithosphere in the region varies from about 120 km to 250 km and the depth of the asthenosphere between 150 km and 250 km. Mantle seismicity concentrates where the high velocity lid is detected just below the Moho. The obtained results are in agreement with recent seismic refraction, receiver function, and travel time P-wave tomography investigations in the region. The similarity among the results obtained from different kinds of structural investigations (including the present work) highlights some new features of the lithosphere-asthenosphere system in the southeast Carpathians, such as the relatively thin crust under the Transylvania basin and the Vrancea zone. (author)
Caiyan Qin
2017-12-01
Due to its simple mechanical structure and high motion stability, the H-shaped platform has been increasingly widely used in precision measuring, numerical control machining and semiconductor packaging equipment, etc. The H-shaped platform is normally driven by multiple (three) permanent magnet synchronous linear motors. The main challenges for H-shaped platform control include synchronous control between the two linear motors in the Y direction as well as the total positioning error of the platform mover, a combination of position deviations in the X and Y directions. To deal with the above challenges, this paper proposes a control strategy based on the inverse system method through state feedback and dynamic decoupling of the thrust force. First, the mechanical dynamics equations have been deduced through the analysis of system coupling based on the platform structure. Second, the mathematical model of the linear motors and the relevant coordinate transformation between dq-axis currents and ABC-phase currents are analyzed. Third, after the main concept of the inverse system method is explained, the inverse system model of the platform control system is designed after defining the relevant system variables. The inverse system model compensates the original nonlinear coupled system into a pseudo-linear decoupled system, for which typical linear control methods, like PID, can be adopted to control the system. The simulation model of the control system is built in MATLAB/Simulink, and the simulation result shows that the designed control system has both small synchronous deviation and small total trajectory tracking error. Furthermore, the control program has been run on an NI controller for both fixed-loop-time and free-loop-time modes, and the test result shows that the average loop computation time needed is rather small, which makes it suitable for real industrial applications. Overall, it proves that the proposed new control strategy can be used in
Liu Guanghui [Department of Physics, College of Physics and Electronic Engineering, Guangzhou University, Guangzhou 510006 (China); Guo Kangxian, E-mail: axguo@sohu.com [Department of Physics, College of Physics and Electronic Engineering, Guangzhou University, Guangzhou 510006 (China); Wang Chao [Institute of Public Administration, Guangzhou University, Guangzhou 510006 (China)
2012-06-15
The linear and nonlinear optical absorption in a disk-shaped quantum dot (DSQD) with parabolic potential plus an inverse squared potential in the presence of a static magnetic field are theoretically investigated within the framework of the compact-density-matrix approach and iterative method. The energy levels and the wave functions of an electron in the DSQD are obtained by using the effective mass approximation. Numerical calculations are presented for typical GaAs/AlAs DSQD. It is found that the optical absorption coefficients are strongly affected not only by a static magnetic field, but also by the strength of external field, the confinement frequency and the incident optical intensity.
Linear GPR inversion for lossy soil and a planar air-soil interface
Meincke, Peter
2001-01-01
A three-dimensional inversion scheme for fixed-offset ground penetrating radar (GPR) is derived that takes into account the loss in the soil and the planar air-soil interface. The forward model of this inversion scheme is based upon the first Born approximation and the dyadic Green function...
Mediavilla, E.; Lopez, P.; Mediavilla, T.; Ariza, O.; Muñoz, J. A.; Gonzalez-Morcillo, C.; Jimenez-Vicente, J.
2011-01-01
We derive an exact solution (in the form of a series expansion) to compute gravitational lensing magnification maps. It is based on the backward gravitational lens mapping of a partition of the image plane in polygonal cells (inverse polygon mapping, IPM), not including critical points (except perhaps at the cell boundaries). The zeroth-order term of the series expansion leads to the method described by Mediavilla et al. The first-order term is used to study the error induced by the truncation of the series at zeroth order, explaining the high accuracy of the IPM even at this low order of approximation. Interpreting the Inverse Ray Shooting (IRS) method in terms of IPM, we explain the previously reported N^(-3/4) dependence of the IRS error with the number of collected rays per pixel. Cells intersected by critical curves (critical cells) transform to non-simply connected regions with topological pathologies like auto-overlapping or non-preservation of the boundary under the transformation. To define a non-critical partition, we use a linear approximation of the critical curve to divide each critical cell into two non-critical subcells. The optimal choice of the cell size depends basically on the curvature of the critical curves. For typical applications in which the pixel of the magnification map is a small fraction of the Einstein radius, a one-to-one relationship between the cell and pixel sizes in the absence of lensing guarantees both the consistence of the method and a very high accuracy. This prescription is simple but very conservative. We show that substantially larger cells can be used to obtain magnification maps with huge savings in computation time.
An Analytical-empirical Calculation of Linear Attenuation Coefficient of Megavoltage Photon Beams.
Seif, F; Tahmasebi-Birgani, M J; Bayatiani, M R
2017-09-01
In this study, a method for calculating the linear attenuation coefficient was introduced. The linear attenuation coefficient was calculated with a new method based on the physics of photon interaction with matter, mathematical calculation and consideration of the x-ray spectrum. The calculation was done for Cerrobend, a common radiotherapy beam modifier, and for mercury. The values of the linear attenuation coefficient calculated with this new method are in an acceptable range. Also, the linear attenuation coefficient decreases slightly as the thickness of the attenuating filter (Cerrobend or mercury) increases, so the variation of the linear attenuation coefficient is in agreement with other documents. The results showed that the attenuation ability of mercury was about 1.44 times that of Cerrobend. The method introduced in this study for linear attenuation coefficient calculation is general enough to treat beam modifiers of any shape or material using the same formalism; however, calculations were made only for the mercury and Cerrobend attenuators. On the other hand, this method seems suitable for designing high-energy shields or protectors.
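The relation underlying any linear attenuation coefficient measurement is the Beer-Lambert law, I = I0·exp(−μx). As a minimal sketch (the intensities and thickness below are made up for illustration, not data from the study):

```python
import math

def linear_attenuation_coefficient(i0, i, thickness_cm):
    """Solve I = I0 * exp(-mu * x) for mu (units: 1/cm)."""
    return math.log(i0 / i) / thickness_cm

def transmitted_intensity(i0, mu, thickness_cm):
    """Beer-Lambert attenuation through a uniform absorber."""
    return i0 * math.exp(-mu * thickness_cm)

# Illustrative numbers only: a filter transmitting 25% of a narrow
# beam over 2 cm has mu = ln(4)/2 per cm.
mu = linear_attenuation_coefficient(1000.0, 250.0, 2.0)
```

Inverting the same relation gives the required filter thickness for a target transmission, which is how such coefficients are used in modifier design.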
3D Analytical Calculation of Forces between Linear Halbach-Type Permanent Magnet Arrays
Allag , Hicham; Yonnet , Jean-Paul; Latreche , Mohamed E. H.
2009-01-01
International audience; Usually, in analytical calculations of the magnetic and mechanical quantities of Halbach systems, authors use the Fourier series approximation because the exact calculations are more difficult. In this work the interaction forces between linear Halbach arrays are analytically calculated thanks to our recently developed 3D exact calculation of forces between two cuboidal magnets with parallel and perpendicular magnetization. We essentially describe the way to separately ca...
Shearman, Gemma C; Khoo, Bee J; Motherwell, Mary-Lynn; Brakke, Kenneth A; Ces, Oscar; Conn, Charlotte E; Seddon, John M; Templer, Richard H
2007-06-19
Inverse bicontinuous cubic lyotropic phases are a complex solution to the dilemma faced by all self-assembled water-amphiphile systems: how to satisfy the incompatible requirements for uniform interfacial curvature and uniform molecular packing. The solution reached in this case is for the water-amphiphile interfaces to deform hyperbolically onto triply periodic minimal surfaces. We have previously suggested that although the molecular packing in these structures is rather uniform the relative phase behavior of the gyroid, double diamond, and primitive inverse bicontinuous cubic phases can be understood in terms of subtle differences in packing frustration. In this work, we have calculated the packing frustration for these cubics under the constraint that their interfaces have constant mean curvature. We find that the relative packing stress does indeed differ between phases. The gyroid cubic has the least packing stress, and at low water volume fraction, the primitive cubic has the greatest packing stress. However, at very high water volume fraction, the double diamond cubic becomes the structure with the greatest packing stress. We have tested the model in two ways. For a system with a double diamond cubic phase in excess water, the addition of a hydrophobe may release packing frustration and preferentially stabilize the primitive cubic, since this has previously been shown to have lower curvature elastic energy. We have confirmed this prediction by adding the long chain alkane tricosane to 1-monoolein in excess water. The model also predicts that if one were able to hydrate the double diamond cubic to high water volume fractions, one should destabilize the phase with respect to the primitive cubic. We have found that such highly swollen metastable bicontinuous cubic phases can be formed within onion vesicles. Data from monoelaidin in excess water display a well-defined transition, with the primitive cubic appearing above a water volume fraction of 0.75. Both of
Recent progress for Linear Collider SM/BSM Higgs/electroweak symmetry breaking calculations
Reuter, Juergen
2012-01-01
In this paper I review the calculations (and partially simulations and theoretical studies) that have been made and published during the last two to three years focusing on the electroweak symmetry breaking sector and the Higgs boson(s) within the Standard Model and models beyond the Standard Model (BSM) at or relevant for either the International Linear Collider (ILC) or the Compact Linear Collider (CLIC), commonly abbreviated as Linear Collider (LC). (orig.)
Slope Safety Calculation With A Non-Linear Mohr Criterion Using Finite Element Method
Clausen, Johan; Damkilde, Lars
2005-01-01
Safety factors for soil slopes are calculated using a non-linear Mohr envelope. The often used linear Mohr-Coulomb envelope tends to overestimate the safety as the material parameters are usually determined at much higher stress levels, than those present at slope failure. Experimental data...
A Simple and Convenient Method of Multiple Linear Regression to Calculate Iodine Molecular Constants
Cooper, Paul D.
2010-01-01
A new procedure using a student-friendly least-squares multiple linear-regression technique utilizing a function within Microsoft Excel is described that enables students to calculate molecular constants from the vibronic spectrum of iodine. This method is advantageous pedagogically as it calculates molecular constants for ground and excited…
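The regression underlying this kind of analysis is easy to reproduce outside Excel. The sketch below fits synthetic vibronic term energies of the form E(v) = Te + ωe(v+1/2) − ωexe(v+1/2)² by ordinary least squares; the constants are invented for illustration and are not real iodine values:

```python
import numpy as np

# Synthetic band positions with made-up constants (cm^-1) standing in
# for real iodine data.
we_true, wexe_true, te_true = 125.0, 0.75, 15000.0
v = np.arange(10, 41)        # vibrational quantum numbers
x = v + 0.5
energies = te_true + we_true * x - wexe_true * x**2

# Multiple linear regression on the columns 1, (v+1/2), (v+1/2)^2.
design = np.column_stack([np.ones_like(x), x, x**2])
coeffs, *_ = np.linalg.lstsq(design, energies, rcond=None)
te, we, wexe = coeffs[0], coeffs[1], -coeffs[2]
```

With real spectra the fit residuals, not an exact recovery, would indicate how well the anharmonic-oscillator model describes the data.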
Cleather Daniel J
2010-11-01
Background A vast number of biomechanical studies have employed inverse dynamics methods to calculate inter-segmental moments during movement. Although all inverse dynamics methods are rooted in classical mechanics and are thus theoretically the same, there exist a number of distinct computational methods. Recent research has demonstrated a key influence of the dynamics computation of the inverse dynamics method on the calculated moments, despite the theoretical equivalence of the methods. The purpose of this study was therefore to explore the influence of the choice of inverse dynamics method on the calculation of inter-segmental moments. Methods An inverse dynamics analysis was performed to analyse vertical jumping and weightlifting movements using two distinct methods. The first method was the traditional inverse dynamics approach, in this study characterized as the 3-step method, where inter-segmental moments were calculated in the local coordinate system of each segment, thus requiring multiple coordinate system transformations. The second method (the 1-step method) was the recently proposed approach based on wrench notation that allows all calculations to be performed in the global coordinate system. In order to best compare the effect of the inverse dynamics computation, a number of the key assumptions and methods were harmonized; in particular, unit quaternions were used to parameterize rotation in both methods in order to standardize the kinematics. Results Mean peak inter-segmental moments calculated by the two methods were found to agree to 2 decimal places in all cases and were not significantly different (p > 0.05). Equally, the normalized dispersions of the two methods were small. Conclusions In contrast to previously documented research, the difference between the two methods was found to be negligible. This study demonstrates that the 1- and 3-step methods are computationally equivalent and can thus be used interchangeably in
An inverse method for non linear ablative thermics with experimentation of automatic differentiation
Alestra, S [Simulation Information Technology and Systems Engineering, EADS IW Toulouse (France); Collinet, J [Re-entry Systems and Technologies, EADS ASTRIUM ST, Les Mureaux (France); Dubois, F [Professor of Applied Mathematics, Conservatoire National des Arts et Metiers Paris (France)], E-mail: stephane.alestra@eads.net, E-mail: jean.collinet@astrium.eads.net, E-mail: fdubois@cnam.fr
2008-11-01
Thermal Protection System is a key element for atmospheric re-entry missions of aerospace vehicles. The high level of heat fluxes encountered in such missions has a direct effect on the mass balance of the heat shield. Consequently, the identification of heat fluxes is of great industrial interest, but in flight it is only available through indirect methods based on temperature measurements. This paper is concerned with inverse analyses of highly evolutive heat fluxes. An inverse problem is used to estimate transient surface heat fluxes (convection coefficient) for a degradable thermal material (ablation and pyrolysis), by using time-domain temperature measurements on the thermal protection. The inverse problem is formulated as a minimization problem involving an objective functional, through an optimization loop. An optimal control formulation (Lagrangian, adjoint and gradient steepest-descent method combined with quasi-Newton computations) is then developed and applied, using Monopyro, a transient one-dimensional thermal model with one moving boundary (the ablative surface) that has been developed over many years by ASTRIUM-ST. To compute numerically the adjoint and gradient quantities for the inverse problem in the heat convection coefficient, we have used both analytical manual differentiation and an Automatic Differentiation (AD) engine tool, Tapenade, developed at INRIA Sophia-Antipolis by the TROPICS team. Several validation test cases, using synthetic temperature measurements, are carried out by applying the results of the inverse method with the minimization algorithm. Accurate identification results on high-flux test cases, and good agreement for the reconstructed temperatures, are obtained, both without and with ablation and pyrolysis, even from poor initial guesses for the fluxes. First encouraging results with an automatic differentiation procedure are also presented in this paper.
Miranda-Alonso, S.
1991-01-01
A Cauchy-Riemann problem is solved for the case of the linearized equations for long waves. The initial values are amplitudes and phases measured at the coast; no boundary values are used. This inverse problem is solved by starting the calculations at the coast and continuing outwards to the open ocean, in a rectangular area with one side at the coast and the other three at the open ocean. The initial values were extended into the complex plane to obtain a platform on which to perform the calculations. This non-well-posed problem was solved by means of two different mathematical techniques for comparison. The results produced with the inverse model were compared with those produced with a 'classical' model initialized at the three open boundaries with the results of the inverse model. The oscillating systems produced by both models were quite similar, giving validity to this inverse modeling approach, which should be a useful technique for solving problems when only initial values are known. (orig.)
The effect of dendrimer charge inversion in complexes with linear polyelectrolytes
Lyulin, S.V.; Lyulin, A.V.; Darinskii, A.A.; Emri, I.
2005-01-01
The structure of complexes formed by charged dendrimers and oppositely charged linear chains, with a total charge at least equal to that of the dendrimer, was studied by computer simulation using the Brownian dynamics method. The freely jointed, free-draining model of the dendrimer and the linear chain
Friedrich, R.; Drewelow, W.
1978-01-01
An algorithm is described that is based on the method of breaking the Laplace transform down into partial fractions which are then inverse-transformed separately. The sum of the resulting partial functions is the wanted time function. Any problems caused by equation system forms are largely limited by appropriate normalization using an auxiliary parameter. The practical limits of program application are reached when the degree of the denominator of the Laplace transform is seven to eight.
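The partial-fraction strategy the abstract describes can be sketched for the simple-pole case: find the poles of the denominator, compute each residue as num(p)/den'(p), and sum the inverse-transformed exponentials. This is a generic illustration, not the program described above:

```python
import numpy as np

def invert_laplace_partial_fractions(num, den):
    """For a strictly proper F(s) = num(s)/den(s) with simple poles,
    return (residues, poles) so that f(t) = sum_i r_i * exp(p_i * t).
    Residues of simple poles: r_i = num(p_i) / den'(p_i)."""
    poles = np.roots(den)
    dden = np.polyder(den)
    residues = np.polyval(num, poles) / np.polyval(dden, poles)
    return residues, poles

def time_function(residues, poles, t):
    """Sum of the inverse-transformed partial fractions."""
    return sum(r * np.exp(p * t) for r, p in zip(residues, poles)).real

# F(s) = 1/((s+1)(s+2)) = 1/(s+1) - 1/(s+2)  =>  f(t) = e^-t - e^-2t
res, poles = invert_laplace_partial_fractions([1.0], [1.0, 3.0, 2.0])
```

Repeated poles need the more general residue formula, which is where the normalization issues mentioned in the abstract typically arise.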
A Method of Calculating Motion Error in a Linear Motion Bearing Stage
Khim, Gyungho; Park, Chun Hong; Oh, Jeong Seok
2015-01-01
We report a method of calculating the motion error of a linear motion bearing stage. The transfer function method, which exploits reaction forces of individual bearings, is effective for estimating motion errors; however, it requires the rail-form errors. This is not suitable for a linear motion bearing stage because obtaining the rail-form errors is not straightforward. In the method described here, we use the straightness errors of a bearing block to calculate the reaction forces on the bearing block. The reaction forces were compared with those of the transfer function method. Parallelism errors between two rails were considered, and the motion errors of the linear motion bearing stage were measured and compared with the results of the calculations, revealing good agreement. PMID:25705715
A versatile program for the calculation of linear accelerator room shielding.
Hassan, Zeinab El-Taher; Farag, Nehad M; Elshemey, Wael M
2018-03-22
This work aims at designing a computer program to calculate the necessary amount of shielding for a given or proposed linear accelerator room design in radiotherapy. The program (Shield Calculation in Radiotherapy, SCR) has been developed using Microsoft Visual Basic. It applies the treatment room shielding calculations of NCRP report no. 151 to calculate proper shielding thicknesses for a given linear accelerator treatment room design. The program is composed of six main user-friendly interfaces. The first enables the user to upload their choice of treatment room design and to measure the distances required for shielding calculations. The second interface enables the user to calculate the primary barrier thickness in case of three-dimensional conventional radiotherapy (3D-CRT), intensity modulated radiotherapy (IMRT) and total body irradiation (TBI). The third interface calculates the required secondary barrier thickness due to both scattered and leakage radiation. The fourth and fifth interfaces provide a means to calculate the photon dose equivalent for low and high energy radiation, respectively, in door and maze areas. The sixth interface enables the user to calculate the skyshine radiation for photons and neutrons. The SCR program has been successfully validated, precisely reproducing all of the calculated examples presented in NCRP report no. 151 in a simple and fast manner. Moreover, it easily performed the same calculations for a test design that was also calculated manually, and produced the same results. The program includes a new and important feature that is the ability to calculate required treatment room thickness in case of IMRT and TBI. It is characterised by simplicity, precision, data saving, printing and retrieval, in addition to providing a means for uploading and testing any proposed treatment room shielding design. The SCR program provides comprehensive, simple, fast and accurate room shielding calculations in radiotherapy.
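For orientation, the primary-barrier calculation of NCRP report no. 151 that such a program automates reduces to a required transmission factor and a tenth-value-layer count. The sketch below uses illustrative workload and TVL numbers (roughly concrete at 6 MV), not values from the SCR program:

```python
import math

def primary_barrier_thickness(p_goal, d_m, w, u, t, tvl1_cm, tvle_cm):
    """Primary-barrier thickness in the NCRP 151 formalism (a sketch).
    p_goal: shielding design goal (Sv/wk); d_m: target-to-barrier distance (m);
    w: workload (Gy/wk at 1 m); u: use factor; t: occupancy factor;
    tvl1_cm, tvle_cm: first and equilibrium tenth-value layers (cm)."""
    b = p_goal * d_m**2 / (w * u * t)   # required transmission factor
    n = math.log10(1.0 / b)             # number of tenth-value layers
    return tvl1_cm + (n - 1.0) * tvle_cm

# Illustrative inputs only.
thickness = primary_barrier_thickness(
    p_goal=1e-4, d_m=6.0, w=450.0, u=0.25, t=1.0,
    tvl1_cm=37.0, tvle_cm=33.0)
```

Secondary barriers follow the same pattern with leakage and scatter source terms in place of the primary workload.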
Investigation and Calculation of Magnetic Field in Tubular Linear Reluctance Motor Using FEM
MOSALLANEJAD, A.
2010-11-01
In this paper the magnetic flux density of the tubular linear reluctance motor (TLRM) in an open-type magnetic circuit is studied. All methods for calculating the magnetic flux density in the winding of a tubular linear reluctance motor are also described. The effect of structure parameters on magnetic flux density is discussed as well. Electromagnetic finite-element analysis is used for simulation of the magnetic field, and simulation results of the magnetic field analysis with DC voltage excitation are compared with results obtained from the calculation methods. The comparison yields a good agreement.
Slope Safety Factor Calculations With Non-Linear Yield Criterion Using Finite Elements
Clausen, Johan; Damkilde, Lars
2006-01-01
The factor of safety for a slope is calculated with the finite element method using a non-linear yield criterion of the Hoek-Brown type. The parameters of the Hoek-Brown criterion are found from triaxial test data. Parameters of the linear Mohr-Coulomb criterion are calibrated to the same triaxial … are carried out at much higher stress levels than present in a slope failure; this leads to the conclusion that the use of the non-linear criterion leads to a safer slope design…
Spurr, Robert; Stamnes, Knut; Eide, Hans; Li Wei; Zhang Kexin; Stamnes, Jakob
2007-01-01
In this paper and the sequel, we investigate the application of classic inverse methods based on iterative least-squares cost-function minimization to the simultaneous retrieval of aerosol and ocean properties from visible and near infrared spectral radiance measurements such as those from the SeaWiFS and MODIS instruments. Radiance measurements at the satellite are simulated directly using an accurate coupled atmosphere-ocean-discrete-ordinate radiative transfer (CAO-DISORT) code as the main component of the forward model. For this kind of cost-function inverse problem, we require the forward model to generate weighting functions (radiance partial derivatives) with respect to the aerosol and marine properties to be retrieved, and to other model parameters which are sources of error in the retrievals. In this paper, we report on the linearization of the CAO-DISORT model. This linearization provides a complete analytic differentiation of the coupled-media radiative transfer theory, and it allows the model to generate analytic weighting functions for any atmospheric or marine parameter. For high solar zenith angles, we give an implementation of the pseudo-spherical (P-S) approach to solar beam attenuation in the atmosphere in the linearized model. We summarize a number of performance enhancements such as the use of an exact single-scattering calculation to improve accuracy. We derive inherent optical property inputs for the linearized CAO-DISORT code for a simple 2-parameter bio-optical model for the marine environment coupled to a 2-parameter bimodal atmospheric aerosol medium
Linearity of bulk-controlled inverter ring VCO in weak and strong inversion
Wismar, Ulrik Sørensen; Wisland, D.; Andreani, Pietro
2007-01-01
In this paper, linearity of frequency modulation in voltage-controlled inverter ring oscillators for non-feedback sigma-delta converter applications is studied. The linearity is studied through theoretical models of the oscillator operating at supply voltages above and below the threshold voltage. … Process variations and temperature variations have also been simulated to indicate the advantages of having the soft rail bias transistor in the VCO.
Fayazbakhsh, M.A.; Bagheri, F.; Bahrami, M.
2015-01-01
Highlights: • An inverse method is proposed to calculate thermal inertia in HVAC-R systems. • Real-time thermal loads are estimated using the proposed intelligent algorithm. • Calculation algorithm is validated with on-site measurements. • Freezer duty cycle data are extracted only based on temperature measurements. - Abstract: A new inverse method is proposed for estimation of thermal inertia and heat gain in air conditioning and refrigeration systems using on-site temperature measurements. The method is applied on a walk-in freezer room of a restaurant in Surrey, British Columbia, Canada during one week of its regular operation. The thermal inertia and instantaneous heat gain are calculated and the results are validated using actual information of the materials inside the freezer room. The proposed method can be implemented in intelligent control systems designed for new and existing HVAC-R systems to improve their overall energy efficiency and reduce their environmental impacts
Generalized Uncertainty Quantification for Linear Inverse Problems in X-ray Imaging
Fowler, Michael James [Clarkson Univ., Potsdam, NY (United States)
2014-04-25
In industrial and engineering applications, X-ray radiography has attained wide use as a data collection protocol for the assessment of material properties in cases where direct observation is not possible. The direct measurement of nuclear materials, particularly when they are under explosive or implosive loading, is not feasible, and radiography can serve as a useful tool for obtaining indirect measurements. In such experiments, high energy X-rays are pulsed through a scene containing material of interest, and a detector records a radiograph by measuring the radiation that is not attenuated in the scene. One approach to the analysis of these radiographs is to model the imaging system as an operator that acts upon the object being imaged to produce a radiograph. In this model, the goal is to solve an inverse problem to reconstruct the values of interest in the object, which are typically material properties such as density or areal density. The primary objective in this work is to provide quantitative solutions with uncertainty estimates for three separate applications in X-ray radiography: deconvolution, Abel inversion, and radiation spot shape reconstruction. For each problem, we introduce a new hierarchical Bayesian model for determining a posterior distribution on the unknowns and develop efficient Markov chain Monte Carlo (MCMC) methods for sampling from the posterior. A Poisson likelihood, based on a noise model for photon counts at the detector, is combined with a prior tailored to each application: an edge-localizing prior for deconvolution; a smoothing prior with non-negativity constraints for spot reconstruction; and a full covariance sampling prior based on a Wishart hyperprior for Abel inversion. After developing our methods in a general setting, we demonstrate each model on both synthetically generated datasets, including those from a well known radiation transport code, and real high energy radiographs taken at two U. S. Department of Energy
Linear beam-beam tune shift calculations for the Tevatron Collider
Johnson, D.
1989-01-01
A realistic estimate of the linear beam-beam tune shift is necessary for the selection of an optimum working point in the tune diagram. Estimates of the beam-beam tune shift using the "Round Beam Approximation" (RBA) have overestimated the tune shift for the Tevatron. For a hadron machine with unequal lattice functions and beam sizes, an explicit calculation using the beam size at the crossings is required. Calculations for various Tevatron lattices used in Collider operation are presented, as are comparisons between the RBA and the explicit calculation for elliptical beams. This paper discusses the calculation of the linear tune shift using the program SYNCH. Selection of a working point is discussed. The magnitude of the tune shift is influenced by the choice of crossing points in the lattice as determined by the pbar "cogging" effects. Current cogging procedures are also discussed, and results of calculations of tune shifts at various crossing points in the lattice are presented. Finally, a comparison of early pbar tune measurements with the present linear tune shift calculations is presented. 17 refs., 13 figs., 3 tabs
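For reference, the standard linear beam-beam parameter for a Gaussian elliptical beam is usually written ξ = N·r_p·β / (2πγ·σ·(σ + σ_other)), with the round-beam approximation as the special case σ_other = σ. A hedged sketch (generic formula, nothing here is Tevatron-specific):

```python
import math

R_PROTON = 1.535e-18  # classical proton radius (m)

def beam_beam_tune_shift(n_particles, beta_m, sigma_m, sigma_other_m, gamma):
    """Linear beam-beam tune shift in one plane for a Gaussian beam:
    xi = N * r_p * beta / (2*pi*gamma * sigma * (sigma + sigma_other)).
    Passing sigma_other_m == sigma_m recovers the round-beam approximation."""
    return (n_particles * R_PROTON * beta_m /
            (2.0 * math.pi * gamma * sigma_m * (sigma_m + sigma_other_m)))
```

With unequal beta functions the two planes give different σ values, which is exactly why the explicit elliptical calculation and the RBA disagree.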
Parker, Peter A.; Geoffrey, Vining G.; Wilson, Sara R.; Szarka, John L., III; Johnson, Nels G.
2010-01-01
The calibration of measurement systems is a fundamental but under-studied problem within industrial statistics. The origins of this problem go back to basic chemical analysis based on NIST standards. In today's world these issues extend to mechanical, electrical, and materials engineering. Often, these new scenarios do not provide "gold standards" such as the standard weights provided by NIST. This paper considers the classic "forward regression followed by inverse regression" approach. In this approach the initial experiment treats the "standards" as the regressor and the observed values as the response to calibrate the instrument. The analyst then must invert the resulting regression model in order to use the instrument to make actual measurements in practice. This paper compares this classical approach to "reverse regression," which treats the standards as the response and the observed measurements as the regressor in the calibration experiment. Such an approach is intuitively appealing because it avoids the need for the inverse regression. However, it also violates some of the basic regression assumptions.
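The two calibration strategies compared in the paper are easy to contrast on synthetic data. In this sketch the linear instrument response, noise level, and seed are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical calibration experiment: known standards vs. noisy readings
# from an instrument whose true response is reading = 2*standard + 1.
standards = np.linspace(1.0, 10.0, 20)
readings = 2.0 * standards + 1.0 + rng.normal(0.0, 0.05, standards.size)

# Classical approach: forward regression (reading on standard), then invert.
slope_f, intercept_f = np.polyfit(standards, readings, 1)
def classical_estimate(new_reading):
    return (new_reading - intercept_f) / slope_f

# Reverse regression: treat the standard as the response directly.
slope_r, intercept_r = np.polyfit(readings, standards, 1)
def reverse_estimate(new_reading):
    return slope_r * new_reading + intercept_r
```

Both estimators return nearly the same value here; the paper's point is that they differ in which regression assumptions (errors in the regressor vs. the response) are violated.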
В.Т. Чемерис
2006-04-01
This article presents a simplified calculation method and a choice of design parameters, with corresponding justification, for the induction system of an electron-beam sterilizer based on a linear induction accelerator, taking into account the parameters of the magnetic material used for the cores and the parameters of the pulsed voltage.
Gauvain, J.; Hoffmann, A.; Jeandidier, C.; Livolant, M.
1978-01-01
This study presents the tests of a reinforced concrete beam conducted by the Department of Mechanical and Thermal Studies at the Centre d'Etudes Nucleaires de Saclay, France. The actual behavior of nuclear power plant buildings subjected to seismic loads is generally non-linear, even for moderate seismic levels. The non-linearity is especially important for reinforced concrete buildings. To estimate the safety factors when the building is designed by standard methods, accurate non-linear calculations are necessary. For such calculations, one of the most difficult points is to define a correct model for the behavior of a reinforced concrete beam subjected to reversed loads. For that purpose, static and dynamic experimental tests on a shaking table have been carried out, and a reasonably accurate model has been established and checked against the test results [fr]
Linear response calculation using the canonical-basis TDHFB with a schematic pairing functional
Ebata, Shuichiro; Nakatsukasa, Takashi; Yabana, Kazuhiro
2011-01-01
A canonical-basis formulation of the time-dependent Hartree-Fock-Bogoliubov (TDHFB) theory is obtained with an approximation that the pair potential is assumed to be diagonal in the time-dependent canonical basis. The canonical-basis formulation significantly reduces the computational cost. We apply the method to linear-response calculations for even-even nuclei. E1 strength distributions for proton-rich Mg isotopes are systematically calculated. The calculation suggests strong Landau damping of giant dipole resonance for drip-line nuclei.
Modelling and Inverse-Modelling: Experiences with O.D.E. Linear Systems in Engineering Courses
Martinez-Luaces, Victor
2009-01-01
In engineering careers courses, differential equations are widely used to solve problems concerned with modelling. In particular, ordinary differential equations (O.D.E.) linear systems appear regularly in Chemical Engineering, Food Technology Engineering and Environmental Engineering courses, due to the usefulness in modelling chemical kinetics,…
Syrio. A program for the calculation of the inverse of a matrix
Garcia de Viedma Alonso, L.
1963-01-01
SYRIO is a code for the inversion of a non-singular square matrix whose order is not higher than 40, written for the UNIVAC-UCT (SS-90). The treatment starts from the Sherman-Morrison inversion formula and, following Herbert S. Wilf's method for special matrices, generalizes the procedure to any kind of non-singular square matrix. The limitation on the matrix order is not inherent to the program itself but is imposed by the storage capacity of the computer for which it was coded. (Author)
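The Sherman-Morrison formula that SYRIO builds on updates a known inverse after a rank-one modification. A short sketch of that identity (generic, not the SYRIO code itself):

```python
import numpy as np

def sherman_morrison(a_inv, u, v):
    """Update a known inverse after the rank-one change A -> A + u v^T:
    (A + u v^T)^-1 = A^-1 - (A^-1 u)(v^T A^-1) / (1 + v^T A^-1 u)."""
    au = a_inv @ u
    va = v @ a_inv
    denom = 1.0 + v @ au  # must be nonzero for the update to exist
    return a_inv - np.outer(au, va) / denom

# Deterministic check data: a diagonal matrix with a trivial inverse,
# plus a rank-one correction.
a = np.diag([2.0, 3.0, 4.0, 5.0])
u = np.array([1.0, 1.0, 0.0, 2.0])
v = np.array([0.5, 1.0, 1.0, 0.0])
updated = sherman_morrison(np.linalg.inv(a), u, v)
```

For a single rank-one update this costs O(n²) instead of the O(n³) of a fresh inversion, which is what makes building a general inverter out of repeated updates attractive on a small machine.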
Geodynamic inversion to constrain the non-linear rheology of the lithosphere
Baumann, T. S.; Kaus, Boris J. P.
2015-08-01
One of the main methods to determine the strength of the lithosphere is by estimating its effective elastic thickness. This method assumes that the lithosphere is a thin elastic plate that floats on the mantle and uses both topography and gravity anomalies to estimate the plate thickness. Whereas this seems to work well for oceanic plates, it has given controversial results in continental collision zones. For most of these locations, additional geophysical data sets such as receiver functions and seismic tomography exist that constrain the geometry of the lithosphere and often show that it is rather complex. Yet, lithospheric geometry by itself is insufficient to understand the dynamics of the lithosphere as this also requires knowledge of the rheology of the lithosphere. Laboratory experiments suggest that rocks deform in a viscous manner if temperatures are high and stresses low, or in a plastic/brittle manner if the yield stress is exceeded. Yet, the experimental results show significant variability between various rock types and there are large uncertainties in extrapolating laboratory values to nature, which leaves room for speculation. An independent method is thus required to better understand the rheology and dynamics of the lithosphere in collision zones. The goal of this paper is to discuss such an approach. Our method relies on performing numerical thermomechanical forward models of the present-day lithosphere with an initial geometry that is constructed from geophysical data sets. We employ experimentally determined creep-laws for the various parts of the lithosphere, but assume that the parameters of these creep-laws as well as the temperature structure of the lithosphere are uncertain. This is used as a priori information to formulate a Bayesian inverse problem that employs topography, gravity, horizontal and vertical surface velocities to invert for the unknown material parameters and temperature structure. In order to test the general methodology
A study on the calculation of the shielding wall thickness in medical linear accelerator
Lee, Dong Yeon [Dept. of Radiation Oncology, Dongnam Ins. of Radiological and Medical Science, Busan (Korea, Republic of); Park, Eun Tae [Dept. of Radiation Oncology, Inje University Busan Paik Hospital, Busan (Korea, Republic of); Kim, Jung Hoon [Dept. of Radiological science, college of health sciences, Catholic University of Pusan, Busan (Korea, Republic of)
2017-06-15
The purpose of this study is to calculate the shielding thickness of concrete, the material mainly used for radiation shielding, and to study the walls constructed to shield a medical linear accelerator. The optimal shielding thickness was calculated using MCNPX (Ver. 2.5.0) for the 10 MV photon beam energy generated by the linear accelerator. As a result, the TVL for photon shielding was 50⁓100 cm for pure concrete and 80⁓100 cm for concrete with Boron+polyethylene. The neutron shielding thickness was calculated as 100⁓140 cm for pure concrete and 90⁓100 cm for concrete with Boron+polyethylene. Based on this study, the most efficient approach is considered to be using steel plates and adding Boron+polyethylene to the concrete.
Inverse estimation of multiple muscle activations based on linear logistic regression.
Sekiya, Masashi; Tsuji, Toshiaki
2017-07-01
This study deals with a technology to estimate muscle activity from movement data using a statistical model. Linear regression (LR) models and artificial neural networks (ANN) are known statistical models for such use. Although an ANN has a high estimation capability, in clinical applications the limited amount of data often leads to performance deterioration. On the other hand, the LR model has limited generalization performance. We therefore propose a muscle activity estimation method that improves generalization performance through the use of a linear logistic regression model. The proposed method was compared with the LR model and an ANN in a verification experiment with 7 participants. As a result, the proposed method showed better generalization performance than the conventional methods in various tasks.
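A minimal sketch of linear logistic regression fitted by plain gradient descent may clarify the idea; the toy features and labels below are invented for illustration and are not the study's muscle data.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def fit_logistic(X, y, lr=0.5, epochs=2000):
    """Gradient-descent logistic regression (weights + bias)."""
    n, d = len(X), len(X[0])
    w, b = [0.0] * d, 0.0
    for _ in range(epochs):
        gw, gb = [0.0] * d, 0.0
        for xi, yi in zip(X, y):
            p = sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b)
            err = p - yi                      # gradient of log-loss
            for j in range(d):
                gw[j] += err * xi[j]
            gb += err
        for j in range(d):
            w[j] -= lr * gw[j] / n
        b -= lr * gb / n
    return w, b

# hypothetical data: "muscle active" when velocity and load are both high
X = [[0.1, 0.2], [0.9, 0.8], [0.2, 0.1], [0.8, 0.9], [0.15, 0.3], [0.7, 0.95]]
y = [0, 1, 0, 1, 0, 1]
w, b = fit_logistic(X, y)
pred = [1 if sigmoid(sum(wj * xj for wj, xj in zip(w, xi)) + b) > 0.5 else 0
        for xi in X]
assert pred == y
```

Because the output is a probability in (0, 1), the logistic model avoids the unbounded predictions of plain LR, which is the generalization advantage the abstract alludes to.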
Linear calculations of edge current driven kink modes with BOUT++ code
Li, G. Q., E-mail: ligq@ipp.ac.cn; Xia, T. Y. [Institute of Plasma Physics, CAS, Hefei, Anhui 230031 (China); Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Xu, X. Q. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); Snyder, P. B.; Turnbull, A. D. [General Atomics, San Diego, California 92186 (United States); Ma, C. H.; Xi, P. W. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States); FSC, School of Physics, Peking University, Beijing 100871 (China)
2014-10-15
This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density.
Linear calculations of edge current driven kink modes with BOUT++ code
Li, G. Q.; Xia, T. Y.; Xu, X. Q.; Snyder, P. B.; Turnbull, A. D.; Ma, C. H.; Xi, P. W.
2014-01-01
This work extends previous BOUT++ work to systematically study the impact of edge current density on edge localized modes, and to benchmark with the GATO and ELITE codes. Using the CORSICA code, a set of equilibria was generated with different edge current densities by keeping total current and pressure profile fixed. Based on these equilibria, the effects of the edge current density on the MHD instabilities were studied with the 3-field BOUT++ code. For the linear calculations, with increasing edge current density, the dominant modes are changed from intermediate-n and high-n ballooning modes to low-n kink modes, and the linear growth rate becomes smaller. The edge current provides stabilizing effects on ballooning modes due to the increase of local shear at the outer mid-plane with the edge current. For edge kink modes, however, the edge current does not always provide a destabilizing effect; with increasing edge current, the linear growth rate first increases, and then decreases. In benchmark calculations for BOUT++ against the linear results with the GATO and ELITE codes, the vacuum model has important effects on the edge kink mode calculations. By setting a realistic density profile and Spitzer resistivity profile in the vacuum region, the resistivity was found to have a destabilizing effect on both the kink mode and on the ballooning mode. With diamagnetic effects included, the intermediate-n and high-n ballooning modes can be totally stabilized for finite edge current density
Ruggeri, Fabrizio
2016-05-12
In this work we develop a Bayesian setting to infer unknown parameters in initial-boundary value problems related to linear parabolic partial differential equations. We realistically assume that the boundary data are noisy, for a given prescribed initial condition. We show how to derive the joint likelihood function for the forward problem, given some measurements of the solution field subject to Gaussian noise. Given Gaussian priors for the time-dependent Dirichlet boundary values, we analytically marginalize the joint likelihood using the linearity of the equation. Our hierarchical Bayesian approach is fully implemented in an example that involves the heat equation. In this example, the thermal diffusivity is the unknown parameter. We assume that the thermal diffusivity parameter can be modeled a priori through a lognormal random variable or by means of a space-dependent stationary lognormal random field. Synthetic data are used to test the inference. We exploit the behavior of the non-normalized log posterior distribution of the thermal diffusivity. Then, we use the Laplace method to obtain an approximated Gaussian posterior and therefore avoid costly Markov Chain Monte Carlo computations. Expected information gains and predictive posterior densities for observable quantities are numerically estimated using Laplace approximation for different experimental setups.
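The Laplace-method step described above, approximating a posterior by a Gaussian centred at the MAP with variance set by the local curvature, can be sketched in one dimension. The log-posterior below is a hypothetical example, not the heat-equation posterior of the paper.

```python
import math

def laplace_approx(logpost, x0, h=1e-4, iters=200, lr=0.1):
    """1-D Laplace approximation: climb to the MAP by gradient ascent,
    then return a Gaussian with variance -1/logpost''(MAP)."""
    x = x0
    for _ in range(iters):
        g = (logpost(x + h) - logpost(x - h)) / (2 * h)  # central difference
        x += lr * g
    curv = (logpost(x + h) - 2 * logpost(x) + logpost(x - h)) / h ** 2
    return x, -1.0 / curv

# Gaussian log-posterior with mean 2 and variance 0.5: Laplace is exact here
mu, var = laplace_approx(lambda x: -(x - 2.0) ** 2 / (2 * 0.5), 0.0)
assert abs(mu - 2.0) < 1e-3 and abs(var - 0.5) < 1e-3
```

For a truly Gaussian posterior the approximation is exact, which is why it is a cheap substitute for MCMC when the posterior is close to unimodal and smooth.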
J. S. de Villiers
2014-10-01
This research focuses on the inversion of geomagnetic variation field measurements to obtain source currents in the ionosphere. During a geomagnetic disturbance, the ionospheric currents create magnetic field variations that induce geoelectric fields, which drive geomagnetically induced currents (GIC) in power systems. These GIC may disturb the operation of power systems and cause damage to grounded power transformers. The geoelectric fields at any location of interest can be determined from the source currents in the ionosphere through a solution of the forward problem. Line currents running east–west along a given surface position are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground having magnetic north and down components, and an electric east component. Ionospheric currents are modelled by inverting Fourier integrals (over the wavenumber) of elementary geomagnetic fields using the Levenberg–Marquardt technique. The output parameters of the inversion model are the current strength, height and surface position of the ionospheric current system. A ground conductivity structure with five layers from Quebec, Canada, based on the Layered-Earth model, is used to obtain the complex skin depth at a given angular frequency. This paper presents preliminary inversion results based on these structures and simulated geomagnetic fields. The results show some interesting features in the frequency domain. Model parameters obtained through inversion are within 2% of simulated values. This technique has applications for modelling the currents of electrojets at the equator and auroral regions, as well as currents in the magnetosphere.
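A bare-bones Levenberg–Marquardt iteration of the kind used for such inversions might look as follows; the two-parameter exponential model and synthetic data are illustrative stand-ins for the ionospheric current parameters, not the paper's actual forward model.

```python
import math

def model(x, a, b):
    return a * math.exp(b * x)

def lm_fit(xs, ys, a, b, lam=1e-3, iters=200):
    """Minimal Levenberg-Marquardt for y = a*exp(b*x) (two parameters)."""
    def sse(a, b):
        return sum((model(x, a, b) - y) ** 2 for x, y in zip(xs, ys))
    cost = sse(a, b)
    for _ in range(iters):
        JtJ = [[0.0, 0.0], [0.0, 0.0]]
        Jtr = [0.0, 0.0]
        for x, y in zip(xs, ys):
            j = (math.exp(b * x), a * x * math.exp(b * x))  # Jacobian row
            r = y - model(x, a, b)                          # residual
            for p in range(2):
                Jtr[p] += j[p] * r
                for q in range(2):
                    JtJ[p][q] += j[p] * j[q]
        # damped normal equations: (JtJ + lam*diag(JtJ)) delta = Jtr
        A00 = JtJ[0][0] * (1 + lam)
        A11 = JtJ[1][1] * (1 + lam)
        A01 = JtJ[0][1]
        det = A00 * A11 - A01 * A01
        da = (Jtr[0] * A11 - A01 * Jtr[1]) / det
        db = (A00 * Jtr[1] - A01 * Jtr[0]) / det
        new_cost = sse(a + da, b + db)
        if new_cost < cost:                   # accept step, relax damping
            a, b, cost, lam = a + da, b + db, new_cost, lam / 3
        else:                                 # reject step, increase damping
            lam *= 3
    return a, b

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [model(x, 2.0, -1.3) for x in xs]        # noise-free synthetic data
a, b = lm_fit(xs, ys, a=1.0, b=-0.5)
assert abs(a - 2.0) < 1e-4 and abs(b + 1.3) < 1e-4
```

The damping parameter `lam` blends between gradient descent (large `lam`, robust far from the optimum) and Gauss-Newton (small `lam`, fast near it), which is the defining feature of the method.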
Confirm calculation of 12 MeV non-destructive testing electron linear accelerator target
Ma Shudong; Zhang Rutong; Guo Yanbin; Zhou Yuan; Li Xuexian; Chen Yan
2012-01-01
The confirmation calculation of a 12 MeV non-destructive testing (NDT) electron linear accelerator (LINAC) target was studied. First, the optimal target thickness and the related photon dose yield, dose-rate distributions, and photon conversion efficiencies were obtained by calculation, with a specific analysis of the physical mechanism of the interactions between the beam and the target. Second, the photon dose-rate distributions, conversion efficiencies, and thicknesses of various kinds of targets, such as W, Au and Ta, were verified by MCNP simulation, and the optimal target was determined using the MCNP code. Finally, the theoretical and MCNP results were compared to confirm the validity of the target calculation. (authors)
Mikulović Jovan Č.
2014-01-01
A methodology for the calculation of overvoltages in transformer windings, based on a numerical method of inverse Laplace transform, is presented. The mathematical model of the transformer windings is described by partial differential equations corresponding to electrical circuits with distributed parameters. The procedure for calculating overvoltages is applied to windings having either an isolated neutral point, a grounded neutral point, or a neutral point grounded through an impedance. A comparative analysis of the calculation results obtained by the proposed numerical method and by an analytical method for calculating overvoltages in transformer windings is presented. The results computed by the proposed method are compared with measured voltage distributions when a voltage surge is applied to a three-phase 30 kVA power transformer. [Project of the Ministry of Science of the Republic of Serbia, nos. TR-33037 and TR-33020]
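One standard numerical inverse Laplace transform, the Gaver-Stehfest algorithm, can be sketched as follows. This is a generic real-axis method; the abstract does not state which numerical inversion the authors actually used.

```python
import math

def stehfest_coeffs(N):
    """Gaver-Stehfest weights V_k (N must be even)."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s) via Stehfest."""
    ln2 = math.log(2.0)
    V = stehfest_coeffs(N)
    return ln2 / t * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))

# check on F(s) = 1/(s+1), whose exact inverse is f(t) = exp(-t)
f1 = invert_laplace(lambda s: 1.0 / (s + 1.0), 1.0)
assert abs(f1 - math.exp(-1.0)) < 1e-4
```

Stehfest evaluates F only at real s, which makes it simple to apply, but the alternating large weights limit usable N to roughly 10-16 in double precision.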
Murray L. Ireland
2015-06-01
Multirotor is the umbrella term for the family of unmanned aircraft that includes the quadrotor, hexarotor and other vertical take-off and landing (VTOL) aircraft employing multiple main rotors for lift and control. Development and testing of novel multirotor designs has been aided by the proliferation of 3D printing and inexpensive flight controllers and components. Different multirotor configurations exhibit specific strengths, while presenting unique challenges with regard to design and control. This article highlights the primary differences between three multirotor platforms: a quadrotor; a fully-actuated hexarotor; and an octorotor. Each platform is modelled and then controlled using non-linear dynamic inversion. The differences in dynamics, control and performance are then discussed.
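Non-linear dynamic inversion can be illustrated on a one-dimensional vertical-motion model: invert the known dynamics to compute the thrust that yields a desired linear closed-loop response. The vehicle parameters and gains below are hypothetical, not those of the article's platforms.

```python
def dynamic_inversion_thrust(m, g, z, z_dot, z_ref, kp=4.0, kd=4.0):
    """Dynamic inversion for vertical motion z'' = u/m - g: choose thrust u
    so the closed loop obeys the linear law z'' = kp*(z_ref - z) - kd*z_dot."""
    a_des = kp * (z_ref - z) - kd * z_dot
    return m * (a_des + g)          # invert the plant: u = m*(a_des + g)

# simulate a hypothetical 1 kg vehicle climbing to z_ref = 1 m
m, g, dt = 1.0, 9.81, 0.001
z, z_dot = 0.0, 0.0
for _ in range(10000):              # 10 s of forward-Euler simulation
    u = dynamic_inversion_thrust(m, g, z, z_dot, 1.0)
    z_ddot = u / m - g
    z_dot += z_ddot * dt
    z += z_dot * dt
assert abs(z - 1.0) < 1e-2 and abs(z_dot) < 1e-2
```

With kp = 4 and kd = 4 the closed loop is critically damped (s² + 4s + 4 = 0), so the altitude settles at the reference without overshoot; the gravity term is cancelled exactly because the inversion uses the plant model.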
Baran, Richard; Northen, Trent R
2013-10-15
Untargeted metabolite profiling using liquid chromatography and mass spectrometry coupled via electrospray ionization is a powerful tool for the discovery of novel natural products, metabolic capabilities, and biomarkers. However, the elucidation of the identities of uncharacterized metabolites from spectral features remains challenging. A critical step in the metabolite identification workflow is the assignment of redundant spectral features (adducts, fragments, multimers) and calculation of the underlying chemical formula. Inspection of the data by experts using computational tools solving partial problems (e.g., chemical formula calculation for individual ions) can be performed to disambiguate alternative solutions and provide reliable results. However, manual curation is tedious and not readily scalable or standardized. Here we describe an automated procedure for the robust automated mass spectra interpretation and chemical formula calculation using mixed integer linear programming optimization (RAMSI). Chemical rules among related ions are expressed as linear constraints and both the spectra interpretation and chemical formula calculation are performed in a single optimization step. This approach is unbiased in that it does not require predefined sets of neutral losses and positive and negative polarity spectra can be combined in a single optimization. The procedure was evaluated with 30 experimental mass spectra and was found to effectively identify the protonated or deprotonated molecule ([M + H](+) or [M - H](-)) while being robust to the presence of background ions. RAMSI provides a much-needed standardized tool for interpreting ions for subsequent identification in untargeted metabolomics workflows.
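The chemical-formula-calculation step can be illustrated with a brute-force search over integer element counts. The real RAMSI procedure solves this jointly with ion assignment as a mixed integer linear program, but the toy version below (C, H, O only, illustrative ranges) conveys the integer-constraint idea.

```python
def candidate_formulas(target_mass, tol=0.01):
    """Brute-force search for CxHyOz whose monoisotopic mass is within
    tol of target_mass. A stand-in for the MILP step in RAMSI-style
    formula assignment."""
    masses = {"C": 12.0, "H": 1.00783, "O": 15.9949}   # monoisotopic, u
    hits = []
    for c in range(0, 15):
        for h in range(0, 30):
            for o in range(0, 10):
                m = c * masses["C"] + h * masses["H"] + o * masses["O"]
                if abs(m - target_mass) <= tol:
                    hits.append((c, h, o))
    return hits

# glucose C6H12O6 has monoisotopic mass ~180.063 u
hits = candidate_formulas(180.063)
assert (6, 12, 6) in hits
```

A MILP formulation replaces the exhaustive loops with linear mass constraints and adds chemical rules (valence, adduct relations) as further linear constraints, which is what makes the single-optimization approach in the abstract scale.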
Choi, Yun Seok
2017-05-26
Full waveform inversion (FWI) using an energy-based objective function has the potential to provide long-wavelength model information even without low frequencies in the data. However, without the back-propagation (adjoint-state) method, its implementation is impractical for model sizes typical of a seismic survey. We derive the gradient of the energy-based objective function using the back-propagation method to make its FWI feasible. We also raise the energy signal to the power of a small positive number to properly handle the energy signal imbalance as a function of offset. Examples demonstrate that the proposed FWI algorithm provides a convergent long-wavelength structure model even without low-frequency information, which can be used as a good starting model for subsequent conventional FWI.
Inverse calculation of strain profiles from ETDR measurements using artificial neural networks
R. Höhne
2017-12-01
A novel carbon fibre sensor is developed for spatially resolved strain measurement. Unique features of the sensor are the fibre-break resistive measurement principle and the two-core transmission line design. Electrical time domain reflectometry (ETDR) is used in order to realize a spatially resolved measurement of the electrical parameters of the sensor. In this contribution, the process of mapping the ETDR signals to the existing strain profile is described. Artificial neural networks (ANNs) are used to solve the inverse electromagnetic problem. The investigations were carried out with a sensor patch in a cantilever arm configuration. Overall, 136 experiments with varying strain distribution over the sensor length were performed to generate the necessary training data for the ANN model. The validation of the ANN highlights the feasibility as well as the current limits concerning the quantitative accuracy of mapping ETDR signals to strain profiles.
Fernández-Fernández, Mario; Rodríguez-González, Pablo; García Alonso, J Ignacio
2016-10-01
We have developed a novel, rapid and easy calculation procedure for Mass Isotopomer Distribution Analysis based on multiple linear regression which allows the simultaneous calculation of the precursor pool enrichment and the fraction of newly synthesized labelled proteins (fractional synthesis) using linear algebra. To test this approach, we used the peptide RGGGLK as a model tryptic peptide containing three subunits of glycine. We selected glycine labelled in two ¹³C atoms (¹³C₂-glycine) as the labelled amino acid to demonstrate that spectral overlap is not a problem in the proposed methodology. The developed methodology was tested first in vitro by changing the precursor pool enrichment from 10 to 40% of ¹³C₂-glycine. Secondly, a simulated in vivo synthesis of proteins was designed by combining the natural abundance RGGGLK peptide and 10 or 20% ¹³C₂-glycine at 1:1, 1:3 and 3:1 ratios. Precursor pool enrichments and fractional synthesis values were calculated with satisfactory precision and accuracy using a simple spreadsheet. This novel approach can provide a relatively rapid and easy means to measure protein turnover based on stable isotope tracers. Copyright © 2016 John Wiley & Sons, Ltd.
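The linear-algebra flavour of the approach can be sketched with a one-parameter least-squares mixing model: a measured isotopomer distribution is fitted as a blend of a natural-abundance spectrum and a fully labelled one. The abundances below are invented for illustration and the full method additionally solves for precursor enrichment.

```python
def fractional_synthesis(measured, natural, labeled):
    """Closed-form least-squares fraction f in:
       measured ≈ f * labeled + (1 - f) * natural."""
    num = sum((m - n) * (l - n) for m, n, l in zip(measured, natural, labeled))
    den = sum((l - n) ** 2 for n, l in zip(natural, labeled))
    return num / den

natural = [0.90, 0.08, 0.02, 0.00]   # hypothetical isotopomer abundances
labeled = [0.10, 0.20, 0.40, 0.30]
f_true = 0.25
measured = [f_true * l + (1 - f_true) * n for n, l in zip(natural, labeled)]
f = fractional_synthesis(measured, natural, labeled)
assert abs(f - 0.25) < 1e-12
```

Because the model is linear in f, the estimate is a single dot-product ratio, which is exactly the kind of computation that fits in "a simple spreadsheet" as the abstract notes.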
Calculating Higher-Order Moments of Phylogenetic Stochastic Mapping Summaries in Linear Time
Dhar, Amrit
2017-01-01
Stochastic mapping is a simulation-based method for probabilistically mapping substitution histories onto phylogenies according to continuous-time Markov models of evolution. This technique can be used to infer properties of the evolutionary process on the phylogeny and, unlike parsimony-based mapping, conditions on the observed data to randomly draw substitution mappings that do not necessarily require the minimum number of events on a tree. Most stochastic mapping applications simulate substitution mappings only to estimate the mean and/or variance of two commonly used mapping summaries: the number of particular types of substitutions (labeled substitution counts) and the time spent in a particular group of states (labeled dwelling times) on the tree. Fast, simulation-free algorithms for calculating the mean of stochastic mapping summaries exist. Importantly, these algorithms scale linearly in the number of tips/leaves of the phylogenetic tree. However, to our knowledge, no such algorithm exists for calculating higher-order moments of stochastic mapping summaries. We present one such simulation-free dynamic programming algorithm that calculates prior and posterior mapping variances and scales linearly in the number of phylogeny tips. Our procedure suggests a general framework that can be used to efficiently compute higher-order moments of stochastic mapping summaries without simulations. We demonstrate the usefulness of our algorithm by extending previously developed statistical tests for rate variation across sites and for detecting evolutionarily conserved regions in genomic sequences. PMID:28177780
Yang, James N.; Pino, Ramiro [Department of Radiation Physics, Unit 94, University of Texas M. D. Anderson Cancer Center, Houston, Texas 77030 (United States); Department of Radiology, Baylor College of Medicine and Methodist Hospital, Houston, Texas 77030 (United States)
2008-10-15
Narrow beams are extensively used in stereotactic radiosurgery. The accuracy of treatment planning dose calculation depends largely on how well the dosimetric data are measured during the machine commissioning. Narrow beams are characterized by the lack of lateral electronic equilibrium. The lateral electronic disequilibrium in the radiation field and detector's finite size are likely to compromise the accuracy in dose measurements in these beams. This may have a profound impact on outcome in patients who undergo stereotactic radiosurgery. To confirm the measured commissioning data for a dedicated 6-MV linear accelerator-based radiosurgery system, we developed an analytical model to calculate the narrow photon beam central-axis dose. This model is an extension of a previously reported method of Nizin and Mooij for the calculation of the absorbed dose under lateral electronic disequilibrium conditions at depth of d_max or greater. The scatter factor and tissue-maximum ratio were calculated for narrow beams using the parametrized model and compared to carefully measured results for the same beams. For narrow beam radii ranging from 0.2 to 1.5 cm, the differences between the analytical and measured scatter factors were no greater than 1.4%. In addition, the differences between the analytical and measured tissue-maximum ratios were within 3.3% for regions greater than the maximum dose depth. The estimated error of this analytical calculation was less than 2%, which is sufficient to validate measurement results.
Calculations for Extra Well Shielding for 15 MV Clinical Linear accelerator
Mahmoud, M.A.; Emran, M.M.; Ahmad, A.S.
2000-01-01
A radiological survey was conducted around the walls of a clinical linear accelerator (Siemens Mevatron) in the South Egypt Cancer Institute, Assiut University. Neutron measurements showed adequate results for all beam orientations. Photon measurements showed adequate results for all beam orientations except the 270 degree orientation, facing the control room. During operation, photon measurements were taken in order to calculate the additional shield thickness required to reduce measurements to accepted values. For convenience, lead was the material of choice for extra shielding. A value for the buildup factor needed in the calculations of broad beam attenuation was estimated. Measurements inside the control room after adding the calculated lead thickness are much lower than the annual effective dose equivalent limits recommended by ICRP-60 (International Commission on Radiological Protection) for occupational exposure. Also, measurements taken in the patients' waiting hall recorded levels consistent with the six-hour daily occupancy of members of the public. The value of the buildup factor was verified by calculations, and the variation of the buildup factor with distance from the field centre was also calculated. Important and useful recommendations were reached from this experience, which should be considered to avoid similar situations in radiotherapy departments in Egypt.
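The extra-shield-thickness calculation described above follows from broad-beam attenuation expressed in tenth-value layers (TVLs). A minimal sketch, with hypothetical dose rates and a hypothetical lead TVL, ignoring the buildup-factor correction the abstract discusses:

```python
import math

def extra_shield_thickness(measured_rate, target_rate, tvl_cm):
    """Thickness needed to attenuate measured_rate down to target_rate,
    given the tenth-value layer (TVL) of the shielding material.
    Simple broad-beam model: each TVL cuts the rate by a factor of 10."""
    n_tvl = math.log10(measured_rate / target_rate)
    return n_tvl * tvl_cm

# hypothetical numbers: reduce 80 uSv/h to 2.5 uSv/h, lead TVL of 5.7 cm
t = extra_shield_thickness(80.0, 2.5, 5.7)
assert abs(t - 5.7 * math.log10(32.0)) < 1e-9   # about 8.6 cm
```

In practice a buildup factor multiplies the transmitted dose and thickens the requirement, which is why the abstract's authors estimated and verified it separately.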
Quantum density fluctuations in liquid neon from linearized path-integral calculations
Poulsen, Jens Aage; Scheers, Johan; Nyman, Gunnar; Rossky, Peter J.
2007-01-01
The Feynman-Kleinert linearized path-integral [J. A. Poulsen et al., J. Chem. Phys. 119, 12179 (2003)] representation of quantum correlation functions is applied to compute the spectrum of density fluctuations for liquid neon at T = 27.6 K, p = 1.4 bar, and momentum transfer Q = 1.55 Å⁻¹. The calculated spectrum as well as the kinetic energy of the liquid are in excellent agreement with the experiment of Cunsolo et al. [Phys. Rev. B 67, 024507 (2003)].
Integral linear momentum balance in combining flows for calculating the pressure drop coefficients
Bollmann, A.
1983-01-01
Equations for calculating the loss coefficient in combining flows in tee junctions are obtained by an integral linear momentum balance. It is common practice, when solving this type of problem, to neglect the pressure difference at the upstream location as well as the wall-fluid interaction in the lateral branch of the junction. In this work the influence of these parameters on the loss coefficient is demonstrated, based on experimental values and on appropriate algebraic manipulation of the loss coefficient values published by previous investigators. (Author)
Liu, Yishan; Han, Ping [School of Biological Sciences, The University of Hong Kong, Pokfulam Road, Hong Kong (China); Li, Xiao-yan; Shih, Kaimin [Department of Civil Engineering, The University of Hong Kong, Pokfulam Road, Hong Kong (China); Gu, Ji-Dong, E-mail: jdgu@hkucc.hku.hk [School of Biological Sciences, The University of Hong Kong, Pokfulam Road, Hong Kong (China); The Swire Institute of Marine Science, The University of Hong Kong, Shek O, Cape d' Aguilar, Hong Kong (China)
2011-09-15
Highlights: • We isolated a Xanthobacter flavus strain PA1 utilizing racemic 2-PBA and the single enantiomers as the sole source of carbon and energy. • Both the (R) and (S) enantiomers were degraded in a sequential manner in which the (S) form disappeared before the (R) form. • The biochemical degradation pathway involves an initial oxidation of the alkyl side chain before aromatic ring cleavage. - Abstract: Microbial degradation of the chiral 2-phenylbutyric acid (2-PBA), a metabolite of the surfactant linear alkylbenzene sulfonates (LAS), was investigated using both racemic and enantiomer-pure compounds together with quantitative stereoselective analyses. A pure culture of bacteria, identified as Xanthobacter flavus strain PA1 isolated from the mangrove sediment of Hong Kong Mai Po Nature Reserve, was able to utilize the racemic 2-PBA as well as the single enantiomers as the sole source of carbon and energy. In the presence of the racemic compound, X. flavus PA1 degraded both (R) and (S) forms of the enantiomers to completion in a sequential manner in which the (S) enantiomer disappeared much faster than the (R) enantiomer. When a single pure enantiomer was supplied as the sole substrate, a unidirectional chiral inversion from the (S) enantiomer to the (R) enantiomer was evident. No major difference was observed in the degradation intermediates when either of the individual enantiomers was used as the growth substrate. Two major degradation intermediates were detected and identified as 3-hydroxy-2-phenylbutanoic acid and 4-methyl-3-phenyloxetan-2-one, using a combination of liquid chromatography-mass spectrometry (LC-MS), and ¹H and ¹³C nuclear magnetic resonance (NMR) spectroscopy. The biochemical degradation pathway follows an initial oxidation of the alkyl side chain before aromatic ring cleavage. This study reveals new evidence for enantiomeric inversion catalyzed by a pure culture of environmental bacteria and emphasizes the
FEAST: a two-dimensional non-linear finite element code for calculating stresses
Tayal, M.
1986-06-01
The computer code FEAST calculates stresses, strains, and displacements. The code is two-dimensional; that is, either plane or axisymmetric calculations can be done. The code models elastic, plastic, creep, and thermal strains and stresses. Cracking can also be simulated. The finite element method is used to solve equations describing the following fundamental laws of mechanics: equilibrium; compatibility; constitutive relations; yield criterion; and flow rule. FEAST combines several unique features that permit large time-steps in even severely non-linear situations. The features include a special formulation permitting many finite elements to simultaneously cross the boundary from elastic to plastic behaviour; accommodation of large drops in yield strength due to changes in local temperature; and a three-step predictor-corrector method for plastic analyses. These features reduce computing costs. Comparisons against twenty analytical solutions and against experimental measurements show that predictions of FEAST are generally accurate to ± 5%.
Becker, R.L.; Svenne, J.P.
1975-12-01
Energy levels of states connected by a symmetry of the Hamiltonian normally should be degenerate. In self-consistent field theories, when only one of a pair of single-particle levels connected by a symmetry of the full Hamiltonian is occupied, the degeneracy is split and the unoccupied level often lies below the occupied one. Inversions of neutron-proton (charge) and time-reversal doublets in odd nuclei, charge doublets in even nuclei with a neutron excess, and spin-orbit doublets in spherical configurations with spin-unsaturated shells are examined. The origin of the level inversion is investigated, and the following explanation is offered. Unoccupied single-particle levels, from a calculation in an A-particle system, should be interpreted as levels of the (A + 1)-particle system. When the symmetry-related level, occupied in the A-particle system, is also calculated in the (A + 1)-particle system, it is degenerate with or lies lower than the other. That is, when both levels are calculated in the (A + 1)-particle system, they are not inverted. It is demonstrated that the usual prescription to occupy the lowest-lying orbitals should be modified to refer to the single-particle energies calculated in the (A + 1)- or the (A - 1)-particle system. This observation is shown to provide a justification for avoiding an oscillation of occupancy between symmetry-related partners in successive iterations leading to self-consistency. It is pointed out that two degenerate determinants arise from occupying one or the other partner of an initially degenerate pair of levels and then iterating to self-consistency. The existence of the degenerate determinants indicates the need for introducing correlations, either by mixing the two configurations or by allowing additional symmetry-breaking (resulting in a more highly deformed non-degenerate configuration). 2 figures, 3 tables, 43 references
Maia, Julio Daniel Carvalho; Urquiza Carvalho, Gabriel Aires; Mangueira, Carlos Peixoto; Santana, Sidney Ramos; Cabral, Lucidio Anjos Formiga; Rocha, Gerd B
2012-09-11
In this study, we present some modifications in the semiempirical quantum chemistry MOPAC2009 code that accelerate single-point energy calculations (1SCF) of medium-size (up to 2500 atoms) molecular systems using GPU coprocessors and multithreaded shared-memory CPUs. Our modifications consisted of using a combination of highly optimized linear algebra libraries for both CPU (LAPACK and BLAS from Intel MKL) and GPU (MAGMA and CUBLAS) to hasten time-consuming parts of MOPAC such as the pseudodiagonalization, full diagonalization, and density matrix assembling. We have shown that it is possible to obtain large speedups just by using CPU serial linear algebra libraries in the MOPAC code. As a special case, we show a speedup of up to 14 times for a methanol simulation box containing 2400 atoms and 4800 basis functions, with even greater gains in performance when using multithreaded CPUs (2.1 times in relation to the single-threaded CPU code using linear algebra libraries) and GPUs (3.8 times). This degree of acceleration opens new perspectives for modeling larger structures which appear in inorganic chemistry (such as zeolites and MOFs), biochemistry (such as polysaccharides, small proteins, and DNA fragments), and materials science (such as nanotubes and fullerenes). In addition, we believe that this parallel (GPU-GPU) MOPAC code will make it feasible to use semiempirical methods in lengthy molecular simulations using both hybrid QM/MM and QM/QM potentials.
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn
2017-01-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate eruption mass flow rate and total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses for 49 explosions from Sakurajima Volcano, Japan.
Monajemi, T. T.; Clements, Charles M.; Sloboda, Ron S.
2011-01-01
Purpose: The objectives of this study were (i) to develop a dose calculation method for permanent prostate implants that incorporates a clinically motivated model for edema and (ii) to illustrate the use of the method by calculating the preimplant dosimetry error for a reference configuration of 125I, 103Pd, and 131Cs seeds subject to edema-induced motions corresponding to a variety of model parameters. Methods: A model for spatially anisotropic edema that resolves linearly with time was developed based on serial magnetic resonance imaging measurements made previously at our center to characterize the edema for a group of n=40 prostate implant patients [R. S. Sloboda et al., ''Time course of prostatic edema post permanent seed implant determined by magnetic resonance imaging,'' Brachytherapy 9, 354-361 (2010)]. Model parameters consisted of edema magnitude, Δ, and period, T. The TG-43 dose calculation formalism for a point source was extended to incorporate the edema model, thus enabling calculation via numerical integration of the cumulative dose around an individual seed in the presence of edema. Using an even-power piecewise-continuous polynomial representation for the radial dose function, the cumulative dose was also expressed in closed analytical form. Application of the method was illustrated by calculating the preimplant dosimetry error, RE_preplan, in a 5x5x5 cm3 volume for 125I (Oncura 6711), 103Pd (Theragenics 200), and 131Cs (IsoRay CS-1) seeds arranged in the Radiological Physics Center test case 2 configuration for a range of edema relative magnitudes (Δ=[0.1,0.2,0.4,0.6,1.0]) and periods (T=[28,56,84] d). Results were compared to preimplant dosimetry errors calculated using a variation of the isotropic edema model developed by Chen et al. [''Dosimetric effects of edema in permanent prostate seed implants: A rigorous solution,'' Int. J. Radiat. Oncol., Biol., Phys. 47, 1405-1419 (2000)]. Results: As expected, RE_preplan for our edema model
Compressive Loads on the Lumbar Spine During Lifting: 4D WATBAK versus Inverse Dynamics Calculations
M. H. Cole
2005-01-01
Numerous two- and three-dimensional biomechanical models exist for assessing the stresses placed on the lumbar spine during the performance of a manual material handling task. More recently, researchers have developed specific computer-based models that can be applied in an occupational setting, an example of which is 4D WATBAK. The model used by 4D WATBAK bases its predictions on static calculations, and it is assumed that these static loads reasonably depict the actual dynamic loads acting on the lumbar spine. Consequently, the purpose of this research was to assess the agreement between the static predictions made by 4D WATBAK and those from a comparable dynamic model. Six individuals were asked to perform a series of five lifting tasks, which ranged from lifting 2.5 kg to 22.5 kg and were designed to replicate the lifting component of the Work Capacity Assessment Test used within Australia. A single perpendicularly placed video camera was used to film each performance in the sagittal plane. The resultant two-dimensional kinematic data were input into the 4D WATBAK software and a dynamic biomechanical model to quantify the compression forces acting at the L4/L5 intervertebral joint. Results of this study indicated that as the mass of the load increased from 2.5 kg to 22.5 kg, the static compression forces calculated by 4D WATBAK became increasingly smaller than those calculated using the dynamic model (mean difference ranged from 22.0% for 2.5 kg to 42.9% for 22.5 kg). This study suggested that, for research purposes, a validated three-dimensional dynamic model should be employed when a task becomes complex and when a more accurate indication of spinal compression or shear force is required. Additionally, although it is clear that 4D WATBAK is particularly suited to industrial applications, it is suggested that the limitations of such modelling tools be carefully considered when task-risk and employee
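The gap between static and dynamic estimates comes from the inertial term that a static model drops. A back-of-the-envelope sketch of that effect (hypothetical moment arm and acceleration, not the models used in the study):

```python
# Illustrative only (not 4D WATBAK or the study's dynamic model): compare the
# L4/L5 extensor-moment demand of a static hold versus a dynamic lift of the
# same load. All numbers below are assumed for illustration.
g = 9.81           # gravitational acceleration, m/s^2
load_mass = 22.5   # kg, the heaviest condition in the study
moment_arm = 0.40  # m, horizontal distance from L4/L5 to the load (assumed)
lift_accel = 2.0   # m/s^2, upward acceleration of the load (assumed)

static_moment = load_mass * g * moment_arm               # N*m, gravity only
dynamic_moment = load_mass * (g + lift_accel) * moment_arm  # adds m*a term
underestimate = 1.0 - static_moment / dynamic_moment
```

The static estimate underestimates by a/(g + a) regardless of load mass; the larger discrepancies reported for heavier loads reflect the higher accelerations and trunk dynamics of those lifts, which this one-line sketch does not model.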
LINPACK, Subroutine Library for Linear Equation System Solution and Matrix Calculation
Dongarra, J.J.
1979-01-01
1 - Description of problem or function: LINPACK is a collection of FORTRAN subroutines which analyze and solve various classes of systems of simultaneous linear algebraic equations. The collection deals with general, banded, symmetric indefinite, symmetric positive definite, triangular, and tridiagonal square matrices, as well as with least squares problems and the QR and singular value decompositions of rectangular matrices. A subroutine-naming convention is employed in which each subroutine name consists of five letters which represent a coded specification (TXXYY) of the computation done by that subroutine. The first letter, T, indicates the matrix data type. Standard FORTRAN allows the use of three such types: S REAL, D DOUBLE PRECISION, and C COMPLEX. In addition, some FORTRAN systems allow a double-precision complex type: Z COMPLEX*16. The second and third letters of the subroutine name, XX, indicate the form of the matrix or its decomposition: GE: General, GB: General band, PO: Positive definite, PP: Positive definite packed, PB: Positive definite band, SI: Symmetric indefinite, SP: Symmetric indefinite packed, HI: Hermitian indefinite, HP: Hermitian indefinite packed, TR: Triangular, GT: General tridiagonal, PT: Positive definite tridiagonal, CH: Cholesky decomposition, QR: Orthogonal-triangular decomposition, SV: Singular value decomposition. The final two letters, YY, indicate the computation done by the particular subroutine: FA: Factor, CO: Factor and estimate condition, SL: Solve, DI: Determinant and/or inverse and/or inertia, DC: Decompose, UD: Update, DD: Down-date, EX: Exchange. The following chart shows all the LINPACK subroutines. The initial 'S' in the names may be replaced by D, C or Z and the initial 'C' in the complex-only names may be replaced by a Z. SGE: FA, CO, SL, DI; SGB: FA, CO, SL, DI; SPO: FA, CO, SL, DI; SPP: FA, CO, SL, DI; SPB: FA, CO, SL, DI; SSI: FA, CO, SL, DI; SSP: FA, CO, SL, DI; CHI: FA, CO, SL, DI; CHP: FA, CO, SL, DI; STR
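The TXXYY convention can be decoded mechanically. A small illustrative parser (not part of LINPACK) built from the tables above:

```python
# Decode LINPACK subroutine names of the form TXXYY using the coded
# specification described in the abstract.
TYPES = {"S": "REAL", "D": "DOUBLE PRECISION",
         "C": "COMPLEX", "Z": "COMPLEX*16"}
FORMS = {"GE": "general", "GB": "general band",
         "PO": "positive definite", "PP": "positive definite packed",
         "PB": "positive definite band", "SI": "symmetric indefinite",
         "SP": "symmetric indefinite packed", "HI": "Hermitian indefinite",
         "HP": "Hermitian indefinite packed", "TR": "triangular",
         "GT": "general tridiagonal", "PT": "positive definite tridiagonal",
         "CH": "Cholesky decomposition",
         "QR": "orthogonal-triangular decomposition",
         "SV": "singular value decomposition"}
TASKS = {"FA": "factor", "CO": "factor and estimate condition",
         "SL": "solve", "DI": "determinant/inverse/inertia",
         "DC": "decompose", "UD": "update", "DD": "down-date",
         "EX": "exchange"}

def decode(name):
    """Split a five-letter LINPACK name TXXYY into its three coded fields."""
    t, xx, yy = name[0], name[1:3], name[3:5]
    return TYPES[t], FORMS[xx], TASKS[yy]
```

For example, `decode("SGEFA")` identifies the single-precision real, general-matrix factorization routine.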
Calculation of elastic-plastic strain ranges for fatigue analysis based on linear elastic stresses
Sauer, G.
1998-01-01
Fatigue analysis requires that the maximum strain ranges be known. These strain ranges are generally computed from linear elastic analysis. The elastic strain ranges are enhanced by a factor Ke to obtain the total elastic-plastic strain range. The reliability of the fatigue analysis depends on the quality of this factor. Formulae for calculating the Ke factor are proposed. A beam is introduced as a computational model for determining the elastic-plastic strains. The beam is loaded by the elastic stresses of the real structure. The elastic-plastic strains of the beam are compared with the beam's elastic strains. This comparison furnishes explicit expressions for the Ke factor. The Ke factor is tested by means of seven examples. (orig.)
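The paper derives its own Ke formulae from the beam model. As a generic point of comparison, one widely used way to obtain an elastic-plastic strain range from a linear elastic stress range is Neuber's rule combined with a Ramberg-Osgood cyclic curve; a sketch with hypothetical material constants (this is not the paper's method):

```python
# Illustrative Neuber-type strain-enhancement factor, NOT the Ke formulae
# proposed in the paper. Material constants below are hypothetical.
from scipy.optimize import brentq

E, K, n = 200e3, 1200.0, 0.2   # MPa: modulus, cyclic strength coeff., exponent

def neuber_strain(sigma_elastic):
    """Solve Neuber's rule sigma*eps = sigma_e^2/E together with the
    Ramberg-Osgood curve eps = sigma/E + (sigma/K)**(1/n)."""
    target = sigma_elastic**2 / E
    f = lambda s: s * (s / E + (s / K) ** (1.0 / n)) - target
    sigma = brentq(f, 1e-6, sigma_elastic)   # true stress <= elastic stress
    return sigma / E + (sigma / K) ** (1.0 / n)

sigma_e = 600.0                               # linear elastic stress range, MPa
ke = neuber_strain(sigma_e) / (sigma_e / E)   # strain-enhancement factor
```

By construction ke >= 1: the plastic redistribution lowers the stress but raises the strain relative to the purely elastic solution.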
Calculation model of non-linear dynamic deformation of composite multiphase rods
Mishchenko Andrey Viktorovich
2014-05-01
A method of formulating non-linear physical equations for multiphase rods is suggested. Composite multiphase rods possess various structures, including shear, polar, radial and axial inhomogeneity. Timoshenko's hypothesis with large rotation angles is used. The method is based on approximating the law of the longitudinal normal stress by expansions in basic functions with respect to the linear viscosity law. The shear stresses are calculated from the equilibrium equation using a subsidiary function of the longitudinal shear force. A system of differential equations connecting the internal forces and temperature with the generalized deformations is obtained in terms of the basic functions. The use of power functions with an arbitrary index allows the equations to be presented in compact form. The functional coefficients in this system are higher-order rigidity characteristics. The rigidity characteristics of the whole multiphase cross-section are the sums of the rigidity characteristics of the individual phases. The resulting system covers well-known particular cases, among them rigid-plastic and linear elastic deformation, different-modulus deformation, and quadratic Gerstner-law elastic deformation. A reduction of the differential equation system to quasilinear form is suggested. This system contains secant variable rigidity characteristics depending on the generalized deformations and consists of sums of uniform blocks of different order, the set of uniform blocks being defined by the phase materials of the rod. Integration of the dynamic, kinematic and physical equations, taking into account initial and boundary conditions, defines the full dynamic problem for multiphase rods. The quasilinear physical equations allow obtaining the variable flexibility matrix of a multiphase rod and of rod systems.
Tian, F.; Tian, H.; Whitmore, L.; Ye, L.Y.
2015-01-01
The dependence of energy on volume for hexagonal close-packed (hcp) nickel with different magnetic states is calculated by the full-potential linearized augmented plane wave method. Based on the calculation, the ferromagnetic state is found to be the most stable. The magnetic moment of hcp Ni is calculated and compared to values obtained by different pseudo-potential methods. It is also compared to that of the face-centered cubic (fcc) phase, and the reason for the difference is discussed
Chai, Xintao; Tang, Genyang; Peng, Ronghua; Liu, Shaoyong
2018-03-01
Full-waveform inversion (FWI) reconstructs the subsurface properties from acquired seismic data via minimization of the misfit between observed and simulated data. However, FWI suffers from considerable computational costs resulting from the numerical solution of the wave equation for each source at each iteration. To reduce the computational burden, constructing supershots by combining several sources (aka source encoding) mitigates the number of simulations at each iteration, but it gives rise to crosstalk artifacts because of interference between the individual sources of the supershot. A modified Gauss-Newton FWI (MGNFWI) approach showed that as long as the difference between the initial and true models permits a sparse representation, ℓ1-norm constrained model updates suppress subsampling-related artifacts. However, the spectral projected gradient ℓ1 (SPGℓ1) algorithm employed by MGNFWI is rather complicated, which makes its implementation difficult. To facilitate realistic applications, we adapt a linearized Bregman (LB) method to sparsity-promoting FWI (SPFWI) because of the efficiency and simplicity of LB in the framework of ℓ1-norm constrained optimization and compressive sensing. Numerical experiments performed with the BP Salt model, the Marmousi model and the BG Compass model verify the following points. The FWI result obtained with LB solving the ℓ1-norm sparsity-promoting problem for the model update outperforms that generated by solving the ℓ2-norm problem in terms of crosstalk elimination and high-fidelity results. The simpler LB method performs comparably and even superiorly to the complicated SPGℓ1 method in terms of computational efficiency and model quality, making the LB method a viable alternative for realistic implementations of SPFWI.
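The core LB iteration for an ℓ1-constrained linear problem min ||x||_1 s.t. Ax = b is only a few lines. The sketch below applies it to a toy compressive-sensing problem; sizes and parameters are illustrative, not the FWI implementation:

```python
# Minimal linearized Bregman sketch: recover a sparse vector from
# underdetermined linear measurements. Stability requires
# delta * ||A||_2^2 < 2; here A is scaled to unit spectral norm.
import numpy as np

def soft(v, mu):
    """Soft-thresholding (shrinkage) operator."""
    return np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)

def linearized_bregman(A, b, mu, delta, n_iter):
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = delta * soft(v, mu)      # primal update via shrinkage
        v += A.T @ (b - A @ x)       # accumulate the back-projected residual
    return x

rng = np.random.default_rng(1)
m, n, k = 120, 200, 5                # measurements, unknowns, sparsity
A = rng.standard_normal((m, n))
A /= np.linalg.norm(A, 2)            # unit spectral norm keeps delta=1 stable
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
b = A @ x_true
x_rec = linearized_bregman(A, b, mu=10.0, delta=1.0, n_iter=10000)
```

The iterate converges to the minimizer of mu*||x||_1 + ||x||^2/(2*delta) subject to Ax = b; for sufficiently large mu this coincides with the basis-pursuit (pure ℓ1) solution, which is why the method promotes sparse model updates.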
Kandel, Tanka P; Lærke, Poul Erik; Elsgaard, Lars
2016-01-01
One of the shortcomings of closed chamber methods for soil respiration (SR) measurements is the decreased CO2 diffusion rate from soil to chamber headspace that may occur due to increased chamber CO2 concentrations. This feedback on diffusion rate may lead to underestimation of pre-deployment fluxes. … was placed on fixed collars, and the CO2 concentration in the chamber headspace was recorded at 1-s intervals for 45 min. Fluxes were measured in different soil types (sandy, sandy loam and organic soils), and for various manipulations (tillage, rain and drought) and soil conditions (temperature and moisture) to obtain a range of fluxes with different shapes of flux curves. The linear method provided more stable flux results during short enclosure times (a few min) but underestimated initial fluxes by 15–300% after 45 min deployment time. Non-linear models reduced the underestimation …
Murata, M; Uchida, T; Yang, Y; Lezhava, A; Kinashi, H
2011-04-01
We have comprehensively analyzed the linear chromosomes of Streptomyces griseus mutants constructed and kept in our laboratory. During this study, macrorestriction analysis of AseI and DraI fragments of mutant 402-2 suggested a large chromosomal inversion. The junctions of the chromosomal inversion were cloned, sequenced, and compared with the corresponding target sequences in the parent strain 2247. Consequently, a transposon-mediated mechanism was revealed. Namely, a transposon originally located at the left target site was replicatively transposed to the right target site in an inverted orientation, which generated a second copy and at the same time caused a 2.5-Mb chromosomal inversion. The transposon involved, named TnSGR, was grouped into a new subfamily of the resolvase-encoding Tn3 family transposons based on its gene organization. Finally, the terminal diversity of S. griseus chromosomes is discussed by comparing the sequences of strains 2247 and IFO13350.
SU-E-T-270: Optimized Shielding Calculations for Medical Linear Accelerators (LINACs).
Muhammad, W; Lee, S; Hussain, A
2012-06-01
The purpose of radiation shielding is to reduce the effective equivalent dose from a medical linear accelerator (LINAC) at a point outside the room to a level determined by individual state/international regulations. The study was performed to design LINAC rooms for newly planned radiotherapy centers. Optimized shielding calculations were performed for LINACs with a maximum photon energy of 20 MV, based on NCRP 151. The maximum permissible dose limits were kept at 0.04 mSv/week and 0.002 mSv/week for controlled and uncontrolled areas, respectively, following the ALARA principle. The planned LINAC room was compared to an already constructed (non-optimized) LINAC room to evaluate the shielding costs and the other facilities directly related to the room design. In the evaluation it was noted that the non-optimized room size (610 × 610 cm2, or 20 feet × 20 feet) is not suitable for total body irradiation (TBI), although the machine installed inside offered TBI capability and the license had been acquired. With this point in view, the optimized LINAC room size was set to 762 × 762 cm2. Although the area of the optimized room was greater than that of the non-optimized room (762 × 762 cm2 instead of 610 × 610 cm2), the shielding cost for the optimized LINAC room was reduced by 15%. When optimized shielding calculations were re-performed for the non-optimized room (keeping room size, occupancy factors, workload, etc. the same), it was found that the shielding cost could be reduced by up to 41%. In conclusion, a non-optimized LINAC room not only puts an extra financial burden on the hospital but can also cause serious issues in providing health care facilities for patients. © 2012 American Association of Physicists in Medicine.
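For context, a minimal primary-barrier estimate in the spirit of NCRP 151: compute the required broad-beam transmission factor and convert it to a concrete thickness via tenth-value layers. This is the textbook formula with example numbers, not this study's full calculation, and the TVL values are typical handbook figures for a high-energy beam:

```python
# Illustrative NCRP 151-style primary-barrier thickness estimate.
# All input values are examples, not the study's data.
import math

def barrier_thickness(P, d, W, U, T, tvl1, tvle):
    """Thickness from the transmission factor B = P d^2 / (W U T),
    expressed as a number of tenth-value layers (TVLs)."""
    B = P * d**2 / (W * U * T)
    n = -math.log10(B)                  # required number of TVLs
    return tvl1 + (n - 1.0) * tvle      # first TVL differs from equilibrium TVL

# 0.04 mSv/wk controlled-area design goal quoted above (in Sv/wk), 6 m to the
# point of interest, 1000 Gy/wk workload at 1 m, use factor 0.25, occupancy 1,
# and example concrete TVLs (metres) for a high-energy photon beam.
t = barrier_thickness(P=4e-5, d=6.0, W=1000.0, U=0.25, T=1.0,
                      tvl1=0.45, tvle=0.43)
```

With these inputs the estimate comes to roughly 2.3 m of concrete, which illustrates why optimizing room geometry and occupancy assumptions has such a large effect on cost.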
Russenschuck, S.
1999-01-01
The Large Hadron Collider (LHC) will provide proton-proton collisions with a center-of-mass energy of 14 TeV, which requires high field superconducting magnets to guide the counter-rotating beams in the existing LEP tunnel with a circumference of about 27 km. The LHC magnet system consists of 1232 superconducting dipoles and 386 main quadrupoles together with about 20 different types of magnets for insertions and correction. The design and optimization of these magnets is dominated by the requirement of an extremely uniform field, which is mainly defined by the layout of the superconducting coils. The program package ROXIE (Routine for the Optimization of magnet X-sections, Inverse field calculation and coil End design) has been developed for the design and optimization of the coil geometries in two and three dimensions. Recently it has been extended, in a collaboration with the University of Graz, Austria, to the calculation of saturation induced effects using a reduced vector-potential FEM formulation. With the University of Stuttgart, Germany, a collaboration exists for the application of the BEM-FEM coupling method for the 2D and 3D field calculation. ROXIE now also features a TCL-TK user interface. The growing number of ROXIE users inside and outside CERN gave rise to the idea of organizing the 'First International ROXIE Users Meeting and Workshop' at CERN, March 16-18, 1998, which brought together about 50 researchers in the field. This report contains the contributions to the workshop and describes the features of the program, the mathematical optimization techniques applied, and gives examples of the recent design work carried out. It also gives the theoretical background for the field computation methods and serves as a handbook for the installation and application of the program. (orig.)
Faggiani Dias, D.; Subramanian, A. C.; Zanna, L.; Miller, A. J.
2017-12-01
Sea surface temperature (SST) in the Pacific sector is well known to vary on time scales from seasonal to decadal, and the ability to predict these SST fluctuations has many societal and economic benefits. We therefore use a suite of statistical linear inverse models (LIMs) to understand the remote and local SST variability that influences SST predictions over the North Pacific region, and to further improve our understanding of how the long observed SST record can help guide multi-model ensemble forecasts. Observed monthly SST anomalies in the Pacific sector (between 15°S and 60°N) are used to construct different regional LIMs for seasonal to decadal prediction. The forecast skills of the LIMs are compared to those of two operational forecast systems in the North American Multi-Model Ensemble (NMME), revealing that the LIM has better skill in the Northeastern Pacific than the NMME models. The LIM is also found to have forecast skill for SST in the Tropical Pacific comparable to the NMME models. This skill, however, is highly dependent on the initialization month, with forecasts initialized during the summer having better skill than those initialized during the winter. The forecast skill of the LIM is also influenced by the verification period utilized to make the predictions, likely due to the changing character of El Niño in the 20th century. The North Pacific seems to be a source of predictability for the Tropics on seasonal to interannual time scales, while the Tropics act to worsen the skill of the forecast in the North Pacific. The data were also bandpassed into seasonal, interannual and decadal time scales to identify the relationships between time scales using the structure of the propagator matrix. For the decadal component, this coupling occurs the other way around: the Tropics seem to be a source of predictability for the Extratropics, but the Extratropics do not improve the predictability for the Tropics. These results indicate the importance of temporal
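The LIM machinery itself is compact: the propagator is estimated from lagged covariances, G(τ) = C(τ) C(0)⁻¹, and the dynamics matrix follows from its matrix logarithm, L = log G(τ)/τ. A toy two-variable sketch on simulated data (illustrative, not the paper's Pacific SST setup):

```python
# Estimate a linear inverse model from a simulated stationary record:
# x_{t+1} = G x_t + noise, with G = expm(L) for a known stable L.
import numpy as np
from scipy.linalg import expm, logm

rng = np.random.default_rng(2)
L_true = np.array([[-0.5, 0.3],
                   [ 0.0, -0.2]])      # stable dynamics matrix (tau = 1)
G_true = expm(L_true)

T = 100_000
x = np.zeros((T, 2))
for t in range(T - 1):
    x[t + 1] = G_true @ x[t] + rng.standard_normal(2)
x = x[1000:]                            # drop burn-in

c0 = x[:-1].T @ x[:-1] / (len(x) - 1)   # lag-0 covariance
c1 = x[1:].T @ x[:-1] / (len(x) - 1)    # lag-1 covariance
G_est = c1 @ np.linalg.inv(c0)          # estimated propagator
L_est = np.real(logm(G_est))            # recovered dynamics matrix
```

Forecasts at any lead τ then follow by applying the propagator, x(t+τ) ≈ expm(L τ) x(t), which is what makes the LIM a cheap benchmark against initialized dynamical forecast systems.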
The linearly scaling 3D fragment method for large scale electronic structure calculations
Zhao Zhengji [National Energy Research Scientific Computing Center (NERSC) (United States); Meza, Juan; Shan Hongzhang; Strohmaier, Erich; Bailey, David; Wang Linwang [Computational Research Division, Lawrence Berkeley National Laboratory (United States); Lee, Byounghak, E-mail: ZZhao@lbl.go [Physics Department, Texas State University (United States)
2009-07-01
The linearly scaling three-dimensional fragment (LS3DF) method is an O(N) ab initio electronic structure method for large-scale nano material simulations. It is a divide-and-conquer approach with a novel patching scheme that effectively cancels out the artificial boundary effects, which exist in all divide-and-conquer schemes. This method has made ab initio simulations of thousand-atom nanosystems feasible in a couple of hours, while retaining essentially the same accuracy as the direct calculation methods. The LS3DF method won the 2008 ACM Gordon Bell Prize for algorithm innovation. Our code has reached 442 Tflop/s running on 147,456 processors on the Cray XT5 (Jaguar) at OLCF, and has been run on 163,840 processors on the Blue Gene/P (Intrepid) at ALCF, and has been applied to a system containing 36,000 atoms. In this paper, we will present the recent parallel performance results of this code, and will apply the method to asymmetric CdSe/CdS core/shell nanorods, which have potential applications in electronic devices and solar cells.
Extension of the linear nodal method to large concrete building calculations
Childs, R.L.; Rhoades, W.A.
1985-01-01
The implementation of the linear nodal method in the TORT code is described, and the results of a mesh refinement study to test the effectiveness of the linear nodal and weighted diamond difference methods available in TORT are presented
Ranaivo Nomenjanahary, F.; Rakoto, H.; Ratsimbazafy, J.B.
1994-08-01
This paper is concerned with resistivity sounding measurements performed at a single site (vertical sounding) or at several sites (profiles) within a bounded area. The objective is to present accurate information about the study area and to estimate the likelihood of the produced quantitative models. Achieving this objective obviously requires quite relevant data and processing methods. It also requires interpretation methods which take into account the probable effect of a heterogeneous structure. Faced with such difficulties, the interpretation of resistivity sounding data inevitably involves the use of inversion methods. We suggest starting the interpretation in a simple situation (1-D approximation) and using the rough but correct model obtained as an a-priori model for any more refined interpretation. In this respect, special attention should be paid to the inverse problem applied to resistivity sounding data. This inverse problem is nonlinear, even though linearity is inherent in the functional response used to describe the physical experiment. Two different approaches are used to build an approximate but higher-dimensional inversion of geoelectrical data: the linear approach and the Bayesian statistical approach. Illustrations of their application to resistivity sounding data acquired at Tritrivakely volcanic lake (single site) and in the Mahitsy area (several sites) are given. (author). 28 refs, 7 figs
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, A.
2016-01-01
Vol. 9, No. 11 (2016), pp. 4297-4311, ISSN 1991-959X. R&D Projects: GA MŠk(CZ) 7F14287. Institutional support: RVO:67985556. Keywords: Linear inverse problem * Bayesian regularization * Source-term determination * Variational Bayes method. Subject RIV: BB - Applied Statistics, Operational Research. Impact factor: 3.458, year: 2016. http://library.utia.cas.cz/separaty/2016/AS/tichy-0466029.pdf
Jamema, S.V.; Deshpande, D.D.; Kirisits, C.; Trnkova, P.; Poetter, R.; Mahantshetty, U.; Shrivastava, S.K.; Dinshaw, K.A.
2008-01-01
In the recent past, inverse planning algorithms were introduced for intracavitary brachytherapy (ICBT) planning for cervical cancer. The loading patterns produced by these algorithms may differ from those of traditional systems. The purpose of this study was to objectively compare the loading patterns of traditional systems with those of inverse optimization. Based on the outcome of the comparison, an attempt was made to obtain a loading pattern that takes into account the experience gained with inverse optimization
Liu, Yuanrong; Chen, Weimin; Zhong, Jing
2017-01-01
The previously developed numerical inverse method was applied to determine the composition-dependent interdiffusion coefficients in single-phase finite diffusion couples. The numerical inverse method was first validated in a fictitious binary finite diffusion couple by pre-assuming four standard sets of interdiffusion coefficients. After that, the numerical inverse method was adopted in a ternary Al-Cu-Ni finite diffusion couple. Based on the measured composition profiles, the ternary interdiffusion coefficients along the entire diffusion path of the target ternary diffusion couple were obtained by using the numerical inverse approach. The comprehensive comparisons between the computations and the experiments indicate that the numerical inverse method is also applicable to high-throughput determination of the composition-dependent interdiffusion coefficients in finite diffusion couples.
Da Silva Pinto, P.S.; Eustache, R.P.; Audenaert, M.; Bernassau, J.M.
1996-01-01
This work deals with empirical calculation of carbon-13 nuclear magnetic resonance chemical shifts by multiple linear regression and molecular modeling. Multiple linear regression is one way to obtain an equation able to describe the behaviour of the chemical shift for the molecules in the data base (rigid molecules with carbons). The methodology consists of defining parameters that describe the structures and can be related to the known carbon-13 chemical shifts of these molecules. Linear regression is then used to determine the significant parameters of the equation. The resulting equation can be extrapolated to molecules which present some resemblance to those of the data base. (O.L.). 20 refs., 4 figs., 1 tab
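The regression step can be illustrated with synthetic data: fit shifts to structural descriptor parameters by least squares, then extrapolate to a new carbon. The descriptor values and shifts below are invented, not data from the paper:

```python
# Toy multiple linear regression for chemical-shift prediction.
# Rows: carbons in the "data base"; columns: hypothetical descriptor
# parameters (e.g. substituent counts and an electronegativity term).
import numpy as np

X = np.array([[1, 0, 0.5], [2, 1, 0.3], [0, 2, 0.8],
              [1, 1, 0.1], [3, 0, 0.6], [2, 2, 0.4]], float)
true_coef = np.array([9.1, -2.5, 15.0])     # ppm per descriptor unit (invented)
shifts = X @ true_coef + 20.0               # invented 20 ppm intercept

A = np.column_stack([np.ones(len(X)), X])   # add an intercept column
coef, *_ = np.linalg.lstsq(A, shifts, rcond=None)

def predict(x):
    """Extrapolate the fitted equation to a new carbon's descriptors."""
    return coef[0] + x @ coef[1:]
```

With noise-free synthetic data the least-squares fit recovers the generating coefficients exactly; with real shifts the residuals would instead guide which descriptor parameters are significant.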
Edery, D.
1983-11-01
The reduced system of the non-linear resistive MHD equations is used in the 2-D one-helicity approximation in numerical computations of stationary tearing modes. The critical magnetic Reynolds number S (S = τ_R/τ_H, where τ_R and τ_H are respectively the characteristic resistive and hydromagnetic times) and the corresponding linear solution are computed as a starting approximation for the full non-linear equations. These equations are then treated numerically by an iterative procedure which is shown to be rapidly convergent. A numerical application is given in the last part of this paper
Larin, S.V.; Lyulin, S.V.; Lyulin, A.V.; Darinskii, A.A.
2009-01-01
Complexes of fully ionized third-generation dendrimers with oppositely charged linear polyelectrolyte chains are studied by the Brownian dynamics method. A freely jointed model of a dendrimer and a linear chain is used. Electrostatic interactions are considered within the Debye-Hückel approximation
Tonellot, Th.L.
2000-03-24
In this thesis, we propose a method which takes into account a priori information (geological, well-log and stratigraphic knowledge) in linearized pre-stack seismic data inversion. The approach is based on a formalism in which the a priori information is incorporated in an a priori model of elastic parameters - density, P and S impedances - and a model covariance operator which describes the uncertainties in the model. The first part of the thesis is dedicated to the study of this covariance operator and of the norm associated with its inverse. We have generalized the exponential covariance operator in order to describe the uncertainties in the a priori model elastic parameters and their correlations at each location. We give the analytical expression of the inverse of the covariance operator in 1-D, 2-D and 3-D, and discretize the associated norm with a finite element method. The second part is dedicated to synthetic and real examples. In a preliminary step, we have developed a pre-stack well calibration method which allows estimation of the source signal. The impact of different a priori information is then demonstrated on synthetic and real data. (author)
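A discretized 1-D version of such an exponential model covariance is easy to write down: correlated uncertainties on an elastic parameter along depth, with a correlation length controlling how far a priori information propagates. Values below are illustrative; the thesis works with the operator's inverse and the associated norm rather than the dense matrix:

```python
# Illustrative 1-D exponential covariance matrix for an a priori model:
# C_ij = sigma_i * sigma_j * exp(-|z_i - z_j| / lam).
import numpy as np

z = np.linspace(0.0, 500.0, 51)      # depth samples (m)
sigma = np.full_like(z, 300.0)       # a priori std dev. of the parameter
lam = 80.0                           # correlation length (m), assumed

C = np.outer(sigma, sigma) * np.exp(-np.abs(z[:, None] - z[None, :]) / lam)
```

The exponential kernel is symmetric and positive definite, so C is a valid covariance; its inverse (a sparse, tridiagonal-like operator in 1-D) is what enters the misfit norm, which is why the analytical inverse derived in the thesis matters computationally.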
The Computer Program LIAR for Beam Dynamics Calculations in Linear Accelerators
Assmann, R.W.; Adolphsen, C.; Bane, K.; Raubenheimer, T.O.; Siemann, R.H.; Thompson, K.
2011-01-01
Linear accelerators are the central components of the proposed next generation of linear colliders. They need to provide acceleration of up to 750 GeV per beam while maintaining very small normalized emittances. Standard simulation programs, mainly developed for storage rings, do not meet the specific requirements of high energy linear accelerators. We present a new program, LIAR ('LInear Accelerator Research code'), that includes wakefield effects, a 6D coupled beam description, specific optimization algorithms and other advanced features. Its modular structure allows it to be used and extended easily for different purposes. The program is available for UNIX workstations and Windows PCs. It can be applied to a broad range of accelerators. We present examples of simulations for the SLC and the NLC.
Dinh Nho Hao; Nguyen Trung Thanh; Sahli, Hichem
2008-01-01
In this paper we consider a multi-dimensional inverse heat conduction problem with time-dependent coefficients in a box, which is well known to be severely ill-posed, and solve it by a variational method. The gradient of the functional to be minimized is obtained with the aid of an adjoint problem, and the conjugate gradient method with a stopping rule is then applied to this ill-posed optimization problem. To enhance the stability and accuracy of the numerical solution we apply this scheme to the discretized inverse problem rather than to the continuous one. The difficulties with the large dimensions of the discretized problems are overcome by a splitting method which only requires the solution of easy-to-solve one-dimensional problems. The numerical results provided by our method are very good and the techniques seem to be very promising.
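The regularizing mechanism described above, conjugate-gradient iteration terminated by a stopping rule, can be sketched as follows. This is an illustrative CGLS loop with a discrepancy-principle stop on an assumed 1-D smoothing test problem, not the paper's heat-conduction setup:

```python
import numpy as np

# CGLS (conjugate gradients on the normal equations) with a
# discrepancy-principle stopping rule: iterate until the residual
# falls to (tau times) the noise level, then stop to avoid fitting noise.
def cgls(A, b, noise_level, tau=1.1, max_iter=200):
    x = np.zeros(A.shape[1])
    r = b - A @ x                       # data-space residual
    s = A.T @ r                         # gradient of 0.5*||A x - b||^2
    p = s.copy()
    for _ in range(max_iter):
        if np.linalg.norm(r) <= tau * noise_level:   # stopping rule
            break
        q = A @ p
        alpha = (s @ s) / (q @ q)
        x += alpha * p
        r -= alpha * q
        s_new = A.T @ r
        p = s_new + ((s_new @ s_new) / (s @ s)) * p
        s = s_new
    return x

# Mildly ill-posed example: a Gaussian smoothing operator (hypothetical).
n = 60
t = np.linspace(0.0, 1.0, n)
A = np.exp(-(t[:, None] - t[None, :]) ** 2 / 0.005)
A /= A.sum(axis=1, keepdims=True)
x_true = np.sin(2.0 * np.pi * t)
noise = 1e-3 * np.random.default_rng(0).standard_normal(n)
b = A @ x_true + noise
x_rec = cgls(A, b, np.linalg.norm(noise))
```

Stopping early exploits the semi-convergence of CG on ill-posed problems: the iterates approach the true solution before noise amplification sets in.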
Kissi, Philip Siaw; Opoku, Gyabaah; Boateng, Sampson Kwadwo
2016-01-01
The aim of the study was to investigate the effect of the Microsoft Math Tool (a graphical calculator) on students' achievement in linear functions. The study employed a quasi-experimental research design (pre-test/post-test two-group design). A total of ninety-eight (98) students were selected for the study from two different Senior High Schools…
Hikosaka Kenji
2012-11-01
Background: Mitochondrial (mt) genomes vary considerably in size, structure and gene content. The mt genomes of the phylum Apicomplexa, which includes important human pathogens such as the malaria parasite Plasmodium, also show marked diversity of structure. Plasmodium has a concatenated linear mt genome of the smallest size (6 kb); Babesia and Theileria have a linear monomeric mt genome (6.5 kb to 8.2 kb) with terminal inverted repeats; Eimeria, which is distantly related to Plasmodium and Babesia/Theileria, possesses a mt genome (6.2 kb) with a concatemeric form similar to that of Plasmodium; Cryptosporidium, the earliest branching lineage within the phylum Apicomplexa, has no mt genome. We are interested in the evolutionary origin of the linear mt genomes of Babesia/Theileria, and have investigated mt genome structures in members of the archaeopiroplasmids, a lineage that branched off earlier than Babesia/Theileria. Results: The complete mt genomes of the archaeopiroplasmid parasites Babesia microti and Babesia rodhaini were sequenced. The mt genomes of B. microti (11.1 kb) and B. rodhaini (6.9 kb) possess two pairs of unique inverted repeats, IR-A and IR-B. Flip-flop inversions between the two IR-As and between the two IR-Bs appear to generate four distinct genome structures that are present at an equimolar ratio. An individual parasite contained multiple mt genome structures, with 20 copies and 2-3 copies per haploid nuclear genome in B. microti and B. rodhaini, respectively. Conclusion: We found a novel linear monomeric mt genome structure of B. microti and B. rodhaini equipped with a dual flip-flop inversion system, by which four distinct genome structures are readily generated. To our knowledge, this study is the first to report the presence of two pairs of distinct IR sequences within a monomeric linear mt genome. The present finding provides insight into further understanding of the evolution of mt genome structure.
Inverse problems of geophysics
Yanovskaya, T.B.
2003-07-01
This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least-squares fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given.
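The pseudo-inverse solution mentioned above can be sketched concretely. A minimal, assumed toy example of a truncated-SVD pseudo-inverse for a linearized system G m = d (truncating small singular values is the standard stabilization in linearized inversion):

```python
import numpy as np

# Truncated-SVD pseudo-inverse: decompose G = U S V^T and drop singular
# values below a relative tolerance, so near-null directions do not
# amplify data noise into the model estimate.
def tsvd_solve(G, d, tol=1e-8):
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    keep = s > tol * s[0]                  # drop near-null directions
    return Vt[keep].T @ ((U[:, keep].T @ d) / s[keep])

G = np.array([[1.0, 2.0], [2.0, 4.0], [0.0, 1.0]])  # overdetermined 3x2
d = np.array([3.0, 6.0, 1.0])
m = tsvd_solve(G, d)
```

With no truncation active, the result coincides with the Moore-Penrose pseudo-inverse solution, i.e. the minimum-norm least-squares model.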
Bunch lengthening calculations for the SLC [Stanford Linear Collider] damping rings
Bane, K.L.F.; Ruth, R.D.
1989-03-01
The problem of bunch lengthening in electron storage rings has been treated by many people, and there have been many experiments. In the typical experiment, the theory is used to determine the impedance of the ring. What has been lacking thus far, however, is a calculation of bunch lengthening that uses a carefully calculated ring impedance (or wakefield). In this paper we begin by finding the potential well distortion due to some very simple impedance models, in order to illustrate different types of bunch lengthening behavior. We then give a prescription for extending potential well calculations into the turbulent regime once the threshold is known. Then finally, using the wakefield calculated for the SLC damping rings, combined with the measured value of the threshold, we calculate bunch lengthening for the damping rings, and compare the results with the measurements. 9 refs., 6 figs
Zhang, Y. C.; Zhang, J. Z. H.; Kouri, D. J.; Haug, K.; Schwenke, D. W.
1988-01-01
Numerically exact, fully three-dimensional quantum mechanical reactive scattering calculations are reported for the H2Br system. Both the exchange (H + H′Br → H′ + HBr) and abstraction (H + HBr → H2 + Br) reaction channels are included in the calculations. The present results are the first completely converged three-dimensional quantum calculations for a system involving a highly exoergic reaction channel (the abstraction process). It is found that the production of vibrationally hot H2 in the abstraction reaction, and hence the extent of population inversion in the products, is a sensitive function of the initial HBr rotational state and the collision energy.
Experimental validation of calculation methods for structures with shock non-linearity
Brochard, D.; Buland, P.
1987-01-01
For the seismic analysis of non-linear structures, numerical methods have been developed which need to be validated against experimental results. The aim of this paper is to present the design method of a test program whose results will be used for this purpose. Some applications to nuclear components illustrate this presentation [fr]
Calculation of the interfacial tension of the methane-water system with the linear gradient theory
Schmidt, Kurt A. G.; Folas, Georgios; Kvamme, Bjørn
2007-01-01
The linear gradient theory (LGT) combined with the Soave-Redlich-Kwong (SRK) and the Peng-Robinson (PR) equations of state has been used to correlate the interfacial tension data of the methane-water system. The pure component influence parameters and the binary interaction coefficient for the mixture influence parameter have been obtained for this system. The model was successfully applied to correlate the interfacial tension data set to within 2.3% for the linear gradient theory with the SRK EoS (LGT-SRK) and 2.5% for the linear gradient theory with the PR EoS (LGT-PR). A posteriori comparison with data not used in the parameterisation was to within 3.2% for the LGT-SRK model and 2.7% for the LGT-PR model. An exhaustive literature review resulted in a large database for the investigation, which covers a wide range of temperatures and pressures. The results support the success of the linear…
Fujimura, Kaoru
1980-11-01
The numerical treatment of the Orr-Sommerfeld equation, which is the fundamental equation of linear hydrodynamic stability theory, is described. The present calculation procedure is applied to the two-dimensional quasi-parallel flow for which the linearized disturbance equation (Orr-Sommerfeld equation) contains one simple turning point and αR >> 1. The numerical procedure for this problem and one numerical example for Jeffery-Hamel flow (J-H III 1) are presented. The treatment can be extended to other velocity profiles with slight modifications. (author)
V. Popov
2013-03-01
In the course of microeconomics it is convenient, for educational purposes, to use additive demand functions, in which the quantity demanded is a linear function of price, income and other factors. But when deriving the substitution effect, a number of problems arise in which impossible answers come out. The article describes a correction of the formula, derived by the author, which makes it possible to avoid these contradictions.
A Linear Gradient Theory Model for Calculating Interfacial Tensions of Mixtures
Zou, You-Xiang; Stenby, Erling Halfdan
1996-01-01
…excellent agreement between the predicted and experimental IFTs at high and moderate levels of IFTs, while the agreement is reasonably accurate in the near-critical region, as the equations of state used reveal classical scaling behavior. To predict accurately low IFTs (sigma … with proper scaling behavior at the critical point is at least required. Key words: linear gradient theory; interfacial tension; equation of state; influence parameter; density profile.
Tarasenko Alexandr
2016-01-01
The paper is aimed at determining the possibility of applying the simplified method proposed by the authors to calculate tank seismic resistance in compliance with current regulations and scientific provisions. The authors propose a highly detailed numerical model for a common oil storage tank, RVSPK-50000, that enables static operational loads and the dynamic action of earthquakes to be calculated. Within the modal analysis the natural oscillation frequencies in the range of 0-10 Hz were calculated; the results are given for the first ten modes. The model takes into account the effect of the impulsive and convective components of hydrodynamic pressure during earthquakes. Within the spectral analysis with generalized response spectra, the general stress-strain state of the structure was calculated for earthquakes of intensity degrees 7, 8 and 9 on the MSK-64 scale, for a completely filled tank, a tank half-filled to the mark of 8.5 m and an empty RVSPK-50000 tank. The developed finite element model can be used to perform calculations of seismic resistance by the direct dynamic method, which will allow further consideration of the impact of individual structures (floating roof, support posts, adjoined elements of added stiffness) on the general stress-strain state of a tank.
Inversed linear dichroism in F K-edge NEXAFS spectra of fluorinated planar aromatic molecules
de Oteyza, D. G.; Sakko, A.; El-Sayed, A.
2012-01-01
The symmetry and energy distribution of unoccupied molecular orbitals is addressed in this work by means of NEXAFS and density functional theory calculations for planar, fluorinated organic semiconductors (perfluorinated copper phthalocyanines and perfluoropentacene). We demonstrate how molecular…
Larriba-Andaluz, Carlos; Hogan, Christopher J.
2014-01-01
Structural characterization of ions in the gas phase is facilitated by measurement of ion collision cross sections (CCS) using techniques such as ion mobility spectrometry. Further information is gained from CCS measurement when comparison is made between measurements and accurately predicted CCSs for model ion structures and the gas in which measurements are made. While diatomic gases, namely molecular nitrogen and air, are being used in CCS measurement with increasing prevalence, the majority of studies in which measurements are compared to predictions use models in which gas molecules are spherical or non-rotating, which is not necessarily appropriate for diatomic gases. Here, we adapt a momentum transfer based CCS calculation approach to consider rotating, diatomic gas molecule collisions with polyatomic ions, and compare CCS predictions with a diatomic gas molecule to those made with a spherical gas molecule for model spherical ions, tetra-alkylammonium ions, and multiply charged polyethylene glycol ions. CCS calculations are performed using both specular-elastic and diffuse-inelastic collision rules, which mimic negligible internal energy exchange and complete thermal accommodation, respectively, between gas molecule and ion. The influence of the long range ion-induced dipole potential on calculations is also examined with both gas molecule models. In large part we find that CCSs calculated with specular-elastic collision rules decrease, while they increase with diffuse-inelastic collision rules when using diatomic gas molecules. Results clearly show the structural model of both the ion and gas molecule, the potential energy field between ion and gas molecule, and finally the modeled degree of kinetic energy exchange between gas molecule and ion internal energy are coupled to one another in CCS calculations, and must be considered carefully to obtain results which agree with measurements.
M. D. Corre
2010-08-01
Soil respiration is the second largest flux in the global carbon cycle, yet the underlying below-ground process, carbon dioxide (CO2) production, is not well understood because it cannot be measured in the field. CO2 production has frequently been calculated from the vertical CO2 diffusive flux divergence, known as the "soil-CO2 profile method". This relatively simple model requires knowledge of soil CO2 concentration profiles and soil diffusive properties. Application of the method to a tropical lowland forest soil in Panama gave inconsistent results when using diffusion coefficients (D) calculated from relationships with soil porosity and moisture ("physically modeled" D). Our objective was to investigate whether these inconsistencies were related to (1) the applied interpolation and solution methods and/or (2) uncertainties in the physically modeled profile of D. First, we show that the calculated CO2 production strongly depends on the function used to interpolate between measured CO2 concentrations. Secondly, using an inverse analysis of the soil-CO2 profile method, we deduce which D would be required to explain the observed CO2 concentrations, assuming the model perception is valid. In the top soil, this inversely modeled D closely resembled the physically modeled D. In the deep soil, however, the inversely modeled D increased sharply while the physically modeled D did not. When imposing a constraint during the fit parameter optimization, a solution could be found where this deviation between the physically and inversely modeled D disappeared. A radon (Rn) mass balance model, in which diffusion was calculated based on the physically modeled or constrained inversely modeled D, simulated observed Rn profiles reasonably well. However, the CO2 concentrations which corresponded to the constrained inversely modeled D were too small compared to the measurements. We suggest that, in well-structured soils, a missing description of steady-state CO2…
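The flux-divergence computation at the heart of the soil-CO2 profile method can be sketched in a few lines. This is an illustrative finite-difference version on a synthetic profile with hypothetical units, not the study's field data:

```python
import numpy as np

# Soil-CO2 profile method sketch: the diffusive flux follows Fick's law,
# F = -D dC/dz, and layer CO2 production is the flux divergence, -dF/dz.
# A linear concentration profile with constant D must yield zero
# production, which serves as a sanity check on the discretization.
z = np.linspace(0.0, 1.0, 11)                  # depth grid (m)
C = 400.0 + 4000.0 * z                         # CO2 concentration profile
D = 2e-6 * np.ones_like(z)                     # "physically modeled" diffusivity
flux = -D[:-1] * np.diff(C) / np.diff(z)       # diffusive flux at layer faces
production = -np.diff(flux) / np.diff(z[:-1])  # flux divergence per layer
```

The inverse analysis described in the abstract runs this calculation the other way: it asks which D profile would reproduce the observed concentrations for a given production.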
Linear cascade calculations of matrix damage due to neutron-induced nuclear reactions
Avila, Ricardo E
2000-01-01
A method is developed to calculate the total number of displacements created by energetic particles resulting from neutron-induced nuclear reactions. The method is specifically conceived to calculate the damage in lithium ceramics from the 6Li(n,α)T reaction. The damage created by any particle is related to that caused by atoms from the matrix recoiling after collision with the primary particle. An integral equation for that self-damage is solved by iterations, using the stopping powers of Ziegler, Biersack and Littmark. A projectile-substrate dependent Kinchin-Pease model is proposed, giving an analytic approximation to the total damage as a function of the initial particle energy (au)
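The Kinchin-Pease family of damage models referred to above can be illustrated with the standard NRT (Norgett-Robinson-Torrens) modified Kinchin-Pease formula; the paper's projectile-substrate dependent variant is not reproduced here, and the threshold value is a typical assumption:

```python
# NRT modified Kinchin-Pease estimate of the number of stable Frenkel
# pairs produced by a recoil of given damage energy, with displacement
# threshold E_d (25 eV is a common assumed value, material-dependent).
def nrt_displacements(damage_energy_eV, E_d_eV=25.0):
    if damage_energy_eV < E_d_eV:
        return 0.0                                   # below threshold
    if damage_energy_eV < 2.0 * E_d_eV / 0.8:
        return 1.0                                   # single-displacement regime
    return 0.8 * damage_energy_eV / (2.0 * E_d_eV)   # linear cascade regime
```

In the linear cascade regime the displacement count grows linearly with the damage energy, which is the behavior the abstract's analytic approximation refines for specific projectile-substrate pairs.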
Torres Pozas, S.; Monja Rey, P. de la; Sanchez Carrasca, M.; Yanez Lopez, D.; Macias Verde, D.; Martin Oliva, R.
2011-01-01
In recent years, progress in cancer treatment with ionizing radiation has made it possible to deliver higher doses to smaller and better-shaped volumes, making it necessary to take new aspects into account in the calculation of structural barriers. Furthermore, given that forecasts suggest that a large number of accelerators will be installed, or existing ones modified, in the near future, we believe a tool to estimate the thickness of the structural barriers of treatment rooms is useful. The shielding calculation methods are based on the standard DIN 6847-2 and the recommendations given in NCRP Report 151. In our experience we have found only estimates derived from the DIN standard. Therefore, we considered it interesting to develop an application that incorporates the formulation suggested by the NCRP which, together with previous work based on the DIN rules, allows us to establish a comparison between the results of both methods. (Author)
Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng
2013-04-01
This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a
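The L-BFGS inverse-Hessian approximation discussed above is built by the classic two-loop recursion. A minimal sketch (illustrative, not the 4D-Var system itself), where the initial diagonal h0 plays the role of the preconditioner whose choice the abstract examines:

```python
import numpy as np

# L-BFGS two-loop recursion: apply the inverse-Hessian approximation to a
# vector g using only the last m parameter/gradient difference pairs
# (s_k, y_k), with an initial diagonal guess h0 ("preconditioner").
def lbfgs_apply(g, s_list, y_list, h0):
    q = g.astype(float).copy()
    alphas = []
    for s, y in zip(reversed(s_list), reversed(y_list)):   # newest first
        rho = 1.0 / (y @ s)
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    r = h0 * q                                             # initial guess
    for (s, y), a in zip(zip(s_list, y_list), reversed(alphas)):  # oldest first
        rho = 1.0 / (y @ s)
        b = rho * (y @ r)
        r += (a - b) * s
    return r                                               # approx. H^{-1} g

# Quadratic toy problem: gradient differences satisfy y = A @ s exactly.
rng = np.random.default_rng(2)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4.0 * np.eye(4)            # SPD "Hessian"
s_list = [rng.standard_normal(4) for _ in range(3)]
y_list = [A @ s for s in s_list]
out = lbfgs_apply(y_list[-1], s_list, y_list, np.ones(4))
```

By construction the approximation satisfies the secant condition for the newest pair, i.e. applying it to y_m returns s_m exactly, regardless of h0.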
The calculated longitudinal impedance of the SLC [Stanford Linear Collider] damping rings
Bane, K.L.F.
1988-05-01
A high level of current-dependent bunch lengthening has been observed in the north damping ring of the Stanford Linear Collider (SLC), indicating that the ring's impedance is very inductive. This level of bunch lengthening will limit the performance of the SLC. In order to study the problem of bunch lengthening in the damping ring and the possibility of reducing its inductance we compute, in this report, the longitudinal impedance of the damping ring vacuum chamber. More specifically we find the response function of the ring to a short gaussian bunch. This function will later be used as a driving term in the longitudinal equation of motion. We also identify the important inductive elements of the vacuum chamber and estimate their contribution to the total ring inductance. This information will be useful in assessing the effect of vacuum chamber modifications. 7 refs., 8 figs., 1 tab
Electromagnetic Performance Calculation of HTS Linear Induction Motor for Rail Systems
Liu, Bin; Fang, Jin; Cao, Junci; Chen, Jie; Shu, Hang; Sheng, Long
2017-01-01
According to a high temperature superconducting (HTS) linear induction motor (LIM) designed for rail systems, the influence of electromagnetic parameters and mechanical structure parameters on the electromagnetic horizontal thrust, vertical force of HTS LIM and the maximum vertical magnetic field of HTS windings are analyzed. Through the research on the vertical field of HTS windings, the development regularity of the HTS LIM maximum input current with different stator frequency and different thickness value of the secondary conductive plate is obtained. The theoretical results are of great significance to analyze the stability of HTS LIM. Finally, based on theory analysis, HTS LIM test platform was built and the experiment was carried out with load. The experimental results show that the theoretical analysis is correct and reasonable. (paper)
Radiation calculations and shielding considerations for the design of the Next Linear Collider
Nelson, W.R.; Rokni, S.H.; Vylet, V.
1996-11-01
The authors describe some of the work that they have done as a contribution to the Next Linear Collider (NLC) Zeroth-Order Design Report (ZDR), with specific emphasis placed on radiation-protection issues. However, because of the very nature of this machine--namely, extremely small beam spots of high intensity--a new approach in accelerator radiation-protection philosophy appears to be warranted. Accordingly, the presentation will first take a look at recent design studies directed at protecting the machine itself, since this has resulted in a much better understanding of the very short exposure times involved whenever beam is lost and radiation sources are created. At the end of the paper, the authors suggest a Beam Containment System (BCS) that would provide an independent, redundant guarantee that exposure times are, indeed, kept very short. This, in turn, has guided them in the determination of the transverse shield thickness for the machine.
Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martì Molist, Joan
2017-06-01
We test an innovative inversion scheme using Green's functions from an array of pressure sources embedded in finite-element method (FEM) models to image, without assuming an a-priori geometry, the composite and complex shape of a volcano deformation source. We invert interferometric synthetic aperture radar (InSAR) data to estimate the pressurization and shape of the magma reservoir of Rabaul caldera, Papua New Guinea. The results image the extended shallow magmatic system responsible for a broad and long-term subsidence of the caldera between 2007 February and 2010 December. Elastic FEM solutions are integrated into the regularized linear inversion of InSAR data of volcano surface displacements in order to obtain a 3-D image of the source of deformation. The Green's function matrix is constructed from a library of forward line-of-sight displacement solutions for a grid of cubic elementary deformation sources. Each source is sequentially generated by removing the corresponding cubic elements from a common meshed domain and simulating the injection of a fluid mass flux into the cavity, which results in a pressurization and volumetric change of the fluid-filled cavity. The use of a single mesh for the generation of all FEM models avoids the computationally expensive process of non-linear inversion and remeshing a variable geometry domain. Without assuming an a-priori source geometry other than the configuration of the 3-D grid that generates the library of Green's functions, the geodetic data dictate the geometry of the magma reservoir as a 3-D distribution of pressure (or flux of magma) within the source array. The inversion of InSAR data of Rabaul caldera shows a distribution of interconnected sources forming an amorphous, shallow magmatic system elongated under two opposite sides of the caldera. The marginal areas at the sides of the imaged magmatic system are the possible feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products on the
Khan, S.H.; Ivanov, A.A.
1995-01-01
An analytical method for calculating the static characteristics of linear dc step motors (LSMs) is described. These multiphase passive-armature motors are now being developed for control rod drives (CRDs) in large nuclear reactors. The static characteristic of such an LSM is defined by the variation of electromagnetic force with armature displacement, and it determines motor performance in the standing and dynamic modes of operation. The proposed analytical technique for calculating this characteristic is based on the permeance analysis method applied to the phase magnetic circuits of the LSM. The reluctances of the various parts of the phase magnetic circuit are calculated analytically by assuming probable flux paths and by taking into account the complex nature of the magnetic field distribution. For given armature positions, stator and armature iron saturation is taken into account by an efficient iterative algorithm with fast convergence. The method is validated by comparing theoretical results with experimental ones, which shows satisfactory agreement for small stator currents and weak iron saturation.
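The permeance-analysis idea with an iterative saturation correction can be sketched on a one-loop magnetic circuit. All numbers and the permeability law below are hypothetical, chosen only to illustrate the kind of fixed-point iteration the abstract alludes to:

```python
import numpy as np

# One-loop magnetic circuit: an air gap in series with an iron path whose
# relative permeability may depend on the flux density B.  The flux is
# found by fixed-point iteration on Hopkinson's law, flux = MMF / R_total.
def solve_flux(mmf, l_iron, area, l_gap, mu_r_of_B, tol=1e-12, max_iter=100):
    mu0 = 4e-7 * np.pi
    R_gap = l_gap / (mu0 * area)            # air-gap reluctance
    flux = mmf / R_gap                      # initial guess: ignore iron
    for _ in range(max_iter):
        B = flux / area                     # flux density in the iron
        R_iron = l_iron / (mu0 * mu_r_of_B(B) * area)
        new_flux = mmf / (R_gap + R_iron)
        if abs(new_flux - flux) < tol:
            break
        flux = new_flux
    return flux

# Constant-permeability case (checkable against the closed form).
phi = solve_flux(1000.0, 0.2, 1e-4, 1e-3, lambda B: 1000.0)
```

A real LSM model would assemble many such reluctances for the assumed flux paths at each armature position; the saturation loop above is the scalar analogue of the iterative algorithm mentioned in the abstract.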
Eriksen, Troels K; Karlsen, Eva; Spanget-Larsen, Jens
2015-01-01
The title compounds were investigated by means of Linear Dichroism (LD) IR spectroscopy on samples partially aligned in uniaxially stretched low-density polyethylene and by density functional theory calculations. Satisfactory overall agreement between observed and calculated vibrational wavenumbers...
P.D.Gujrati
2002-01-01
Theoretical evidence is presented in this review that architectural aspects can play an important role, not only in the bulk but also in confined geometries, by using our recursive lattice theory, which is equally applicable to fixed architectures (regularly branched polymers, stars, dendrimers, brushes, linear chains, etc.) and variable architectures, i.e. randomly branched structures. Linear chains possess an inversion symmetry (IS) of a magnetic system (see text), whose presence or absence determines the bulk phase diagram. Fixed architectures possess the IS and yield a standard bulk phase diagram in which there exists a theta point at which two critical lines C and C' meet and the second virial coefficient A2 vanishes. The critical line C appears only for infinitely large polymers, and an order parameter is identified for this criticality. The critical line C' exists for polymers of all sizes and represents phase separation criticality. Variable architectures, which do not possess the IS, give rise to a topologically different phase diagram with no theta point in general. In confined regions next to surfaces, it is not the IS but branching and monodispersity that become important. We show that branching plays no important role for polydisperse systems, but becomes important for monodisperse systems. Stars and linear chains behave differently near a surface.
End Effects on the Linear Induction MHD Generator Calculated by Two-Sided Laplace Transform
Engeln, F.; Peschka, W. [Deutsche Versuchsanstalt fuer Luft- und Raumfahrt e.V., Institut fuer Energiewandlung und Elektrische Antriebe, Stuttgart, Federal Republic of Germany (Germany)
1966-11-15
In induction MHD systems special problems occur where the flow enters or leaves the magnetic field. These problems are generally described as end effects. Large gradients of the magnetic field are present at the inlet and also at the outlet of an MHD induction engine, generating electric current systems in the fluid which may spoil the performance characteristics of the generator due to the interaction with the primary field of the engine. The two-dimensional induction MHD generator of finite length, using a polyphase winding system to obtain a travelling magnetic field, is treated as a boundary value problem by the two-sided Laplace transform. For simplicity incompressibility is assumed. The two-dimensional boundary value problem of the induction engine is solved for -∞ ≤ x ≤ ∞, where x is parallel to the flow direction of the linear MHD generator. In the region 0 ≤ x ≤ L the magnetic travelling wave is sinusoidal with a cyclical frequency ω and a phase velocity v_s. At x = 0 the conducting incompressible working fluid enters the field region and leaves it at the point x = L. Two mathematical methods can be used to solve the boundary value problem, the Fourier transform or the two-sided Laplace transform. The latter offers the advantage of representing a complex analytical function in the image space. Moreover, it is possible to obtain the characteristics of the generator in the image space (e.g. field configuration, power flow function, etc.). That implies a large simplification of the mathematical treatment. The solution in the original space is then given by asymptotic expansion of the known image function. (author)
Reynolds, Jacob G.
2013-01-01
Partial molar properties are the changes in a property occurring when the mole fraction of one component is varied while the mole fractions of all other components change proportionally. They have many practical and theoretical applications in chemical thermodynamics. Partial molar properties of chemical mixtures are difficult to measure because the component mole fractions must sum to one, so a change in the fraction of one component must be offset by a change in one or more other components. Given that more than one component fraction changes at a time, it is difficult to assign a change in measured response to a change in a single component. In this study, the Component Slope Linear Model (CSLM), a model previously published in the statistics literature, is shown to have coefficients that correspond to the intensive partial molar properties. If a measured property is plotted against the mole fraction of a component while keeping the proportions of all other components constant, the slope at any given point on this curve is the partial molar property of that constituent. Plotting this graph has in fact been used to determine partial molar properties for many years. The CSLM directly includes this slope in a model that predicts properties as a function of the component mole fractions. The model is demonstrated by applying it to constant-pressure heat capacity data from the NaOH-NaAl(OH)4-H2O system, a system that simulates Hanford nuclear waste. The partial molar properties of H2O, NaOH and NaAl(OH)4 are determined. The equivalence of the CSLM and the graphical method is verified by comparing results determined by the two methods. The CSLM has previously been used to predict the liquidus temperature of spinel crystals precipitated from Hanford waste glass. Those model coefficients are re-interpreted here as the partial molar spinel liquidus temperatures of the glass components.
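The coefficient-equals-partial-molar-property idea can be demonstrated on synthetic data. This is an assumed ideal-mixture example with hypothetical coefficient values, not the Hanford measurements:

```python
import numpy as np

# CSLM-style fit: for an ideal mixture the property is y = sum_i b_i x_i
# with the mole fractions x_i summing to one, and the fitted coefficients
# b_i are the (intensive) partial molar properties of the components.
rng = np.random.default_rng(1)
b_true = np.array([75.3, 87.0, 120.5])     # hypothetical partial molar values
X = rng.dirichlet(np.ones(3), size=50)     # mole fractions, rows sum to 1
y = X @ b_true                             # "measured" mixture property
b_fit, *_ = np.linalg.lstsq(X, y, rcond=None)
```

Because the mole fractions sum to one, the model has no separate intercept; each coefficient is the property value extrapolated to the pure component, which is exactly the graphical-slope interpretation described in the abstract.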
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion fall easily into local minima. In addition, 3D inversion methods that are based on the acoustic approximation ignore the elastic effects present in real seismic fields, and make the inversion harder. As a result, the accuracy of the final inversion results relies heavily on the quality of the initial model. In order to improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. But the absence of very low frequencies (… time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques. There are two levels of parallelism. At the first level, the inversion tasks are decomposed and assigned to each computation node by shot number. At the second level, GPU multithreaded programming is used for the computation tasks in each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrated that the combined envelope inversion + hybrid-domain FWI could obtain a much more faithful and accurate result than conventional hybrid-domain FWI. The CPU/GPU heterogeneous parallel computation could improve the computational speed.
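The envelope-extraction step named among the GPU kernels above is conventionally computed from the analytic signal. A minimal sketch on a synthetic trace (illustrative; the paper's GPU implementation is not reproduced):

```python
import numpy as np

# Envelope via the analytic signal: zero out negative frequencies in the
# FFT (doubling positive ones), inverse-transform to get s + i*H[s], and
# take the magnitude.  The envelope of an oscillatory trace retains the
# ultra-low-frequency content that the trace itself lacks, which is what
# envelope inversion exploits.
def envelope(trace):
    n = len(trace)
    S = np.fft.fft(trace)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    analytic = np.fft.ifft(S * h)       # analytic signal s + i*H[s]
    return np.abs(analytic)

# Synthetic amplitude-modulated wavelet: a Gaussian envelope on a 60 Hz carrier.
t = np.linspace(0.0, 1.0, 1000, endpoint=False)
trace = np.exp(-20.0 * (t - 0.5) ** 2) * np.cos(2.0 * np.pi * 60.0 * t)
env = envelope(trace)
```

For a narrowband modulated signal the computed envelope closely tracks the modulating amplitude, recovering the smooth (low-frequency) shape from the oscillatory data.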
Saloranta, Tuomo M; Andersen, Tom; Naes, Kristoffer
2006-01-01
Rate constant bioaccumulation models are applied to simulate the flow of polychlorinated dibenzo-p-dioxins and dibenzofurans (PCDD/Fs) in the coastal marine food web of Frierfjorden, a contaminated fjord in southern Norway. We apply two different ways to parameterize the rate constants in the model, and use global sensitivity analysis of the models with the Extended Fourier Amplitude Sensitivity Test (Extended FAST) method, as well as results from general linear system theory, in order to obtain a more thorough insight into the system's behavior and the flow pathways of the PCDD/Fs. We calibrate our models against observed body concentrations of PCDD/Fs in the food web of Frierfjorden. Differences between the predictions from the two models (using the same forcing and parameter values) are of the same magnitude as their individual deviations from observations, and the models can be said to perform about equally well in our case. Sensitivity analysis indicates that the success or failure of the models in predicting the PCDD/F concentrations in the food web organisms depends highly on the adequate estimation of the truly dissolved concentrations in water and sediment pore water. We discuss the pros and cons of such models in understanding and estimating the present and future concentrations and bioaccumulation of persistent organic pollutants in aquatic food webs.
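A rate-constant bioaccumulation model of this general kind can be sketched as a one-compartment uptake/elimination balance; the rate constants and water concentration below are invented for illustration, not the Frierfjorden parameterization:

```python
import numpy as np

# One-compartment rate-constant model:
#   dC/dt = k_u * C_w - k_e * C
# C: body concentration, C_w: truly dissolved water concentration,
# k_u: uptake rate constant, k_e: elimination rate constant.
k_u, k_e, C_w = 500.0, 0.05, 1e-6   # assumed units: L/(kg day), 1/day, ng/L

dt, days = 0.1, 400.0
C = 0.0
for _ in range(int(days / dt)):     # simple explicit Euler integration
    C += dt * (k_u * C_w - k_e * C)

C_ss = k_u * C_w / k_e              # analytical steady state
print(C, C_ss)                      # the simulation approaches steady state
```

The sensitivity of the steady state to C_w (it is directly proportional) illustrates why the truly dissolved concentration dominates the model outcome, as the abstract notes.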
van der Heijden, R T; Heijnen, J J; Hellinga, C; Romein, B; Luyben, K C
1994-01-05
Measurements provide the basis for process monitoring and control as well as for model development and validation. Systematic approaches to increase the accuracy and credibility of the empirical data set are therefore of great value. In (bio)chemical conversions, linear conservation relations, such as the balance equations for charge, enthalpy, and/or chemical elements, can be employed to relate conversion rates. In a practical situation, some of these rates will be measured (in effect, be calculated directly from primary measurements of, e.g., concentrations and flow rates), while others may or may not be calculable from the measured ones. When certain measured rates can also be calculated from other measured rates, the set of equations is redundant, and the accuracy and credibility of the measured rates can indeed be improved by, respectively, balancing and gross error diagnosis. The balanced conversion rates are more accurate and form a consistent set of data, which is more suitable for further application (e.g., to calculate nonmeasured rates) than the raw measurements. Such an approach has drawn attention in previous studies. The current study deals mainly with the problem of mathematically classifying the conversion rates into balanceable and calculable rates, given the subset of measured rates. The significance of this problem is illustrated with some examples. It is shown that a simple matrix equation can be derived that contains the vector of measured conversion rates and the redundancy matrix R. Matrix R plays a predominant role in the classification problem. In supplementary articles, the significance of the redundancy matrix R for an improved gross error diagnosis approach will be shown. In addition, efficient equations have been derived to calculate the balanceable and/or calculable rates. The method is based entirely on matrix algebra (principally different from the graph-theoretical approach) and is easily implemented in a computer program. (c) 1994 John Wiley & Sons
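The role of the redundancy matrix can be illustrated on a tiny elemental-balance example. The system below (a glucose fermentation stoichiometry with an arbitrary choice of measured rates) is a hypothetical stand-in for the paper's general framework:

```python
import numpy as np

# Elemental balance E @ r = 0. Columns: glucose (C6H12O6), ethanol (C2H6O),
# CO2, H2O; rows: C, H, O balances.
E = np.array([[6.0, 2.0, 1.0, 0.0],
              [12.0, 6.0, 0.0, 2.0],
              [6.0, 1.0, 2.0, 1.0]])
measured, unmeasured = [0, 1], [2, 3]
Em, Eu = E[:, measured], E[:, unmeasured]

# True conversion rates (consumption negative) for glucose -> 2 ethanol + 2 CO2.
r_true = np.array([-1.0, 2.0, 2.0, 0.0])
rm = r_true[measured]

# Redundancy matrix: project Em onto the left null space of Eu.
R = (np.eye(3) - Eu @ np.linalg.pinv(Eu)) @ Em
print(np.linalg.matrix_rank(R))      # one redundant balance equation

# Consistent measurements satisfy R @ rm = 0 ...
print(np.abs(R @ rm).max())
# ... and the unmeasured rates are calculable from the measured ones:
ru = np.linalg.lstsq(Eu, -Em @ rm, rcond=None)[0]
print(ru)
```

The rank of R counts the redundant balances available for reconciliation and gross error diagnosis, while the least-squares step recovers the calculable unmeasured rates.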
Bahreyni Toossi, M.T.; Hashemi, S.M.; Momen Nezhad, M.
2008-01-01
In recent decades, cancer has been one of the main and ever-increasing causes of death in developed countries. Accurate dose delivery in radiation therapy is therefore essential, and different techniques have been used to achieve it, one of which is the Monte Carlo simulation technique; its high accuracy has been one of the main reasons for its widespread application. In this study, the MCNP-4C code was employed to simulate the electron mode of the Neptun 10 PC Linac, and dosimetric quantities for conventional fields were both measured and calculated. Although the Neptun 10 PC Linac is no longer licensed for installation in European and some other countries, nearly 10 of them have been installed in different centers around the country and are in operation. Under these circumstances, to improve the accuracy of treatment planning, Monte Carlo simulation of the Neptun 10 PC was recognized as a necessity. Simulated and measured values of depth dose curves and off-axis dose distributions were obtained for 6, 8 and 10 MeV electrons applied to four field sizes: 6 x 6 cm2, 10 x 10 cm2, 15 x 15 cm2 and 20 x 20 cm2. The measurements were carried out with a Welhofer-Scanditronix dose scanning system, a semiconductor detector and an ionization chamber. The results of this study revealed that the two main dosimetric quantities, depth dose curves and off-axis dose distributions, acquired by MCNP-4C simulation are in very good agreement (within 1% to 2% difference) with the corresponding values obtained by direct measurement. This very good consistency of simulated and measured results demonstrates that the goal of this work has been accomplished; in other words, where measurements of some parameters are not practically achievable, MCNP-4C simulation can be applied confidently. (author)
Khan, S.H.; Ivanov, A.A.
1993-01-01
This paper describes an approximate method for calculating the static characteristics of linear step motors (LSM) being developed for control rod drives (CRD) in large nuclear reactors. The static characteristic of such an LSM, given by the variation of electromagnetic force with armature displacement, determines the motor performance in its standing and dynamic modes. The approximate calculation of these characteristics is based on the permeance analysis method applied to the phase magnetic circuit of the LSM. This is a simple, fast and efficient analytical approach which gives satisfactory results for small stator currents and weak iron saturation, typical of the standing mode of operation of an LSM. The method is validated by comparing theoretical results with experimental ones. (Author)
Navarro, J. A.; Madariaga, J. A.; Santamaria, C. M.; Saviron, J. M.
1980-01-01
Flow pattern calculations in natural convection between two vertical coaxial cylinders are reported. It is assumed throughout the paper that the fluid properties (viscosity, thermal conductivity and density) depend non-linearly on temperature and that the aspect (height/radius) ratio of the cylinders is high. Velocity profiles are calculated through a perturbative scheme, and analytic results for the first three perturbation orders are presented. We also outline an iterative method to estimate the perturbations of the flow patterns which arise when a radial composition gradient is established by external forces in a two-component fluid. This procedure, based on a semiempirical footing, is applied to gaseous convection. The influence of the molecular properties of the gas on the flow is also discussed. (Author) 10 refs
Bonin, A.; Tsilanizara, A. [CEA Saclay, LIST, DENIDANSIDM2SISERMA, 91 - Gif-sur-Yvette (France)
2010-07-01
The authors present a method for the calculation of the dose equivalent rate which takes both main and minority isotopes into account. According to this method, they first calculate the initial composition (before ageing) from what can be observed at a certain time. Then, from this reconstructed initial composition, they complete the isotopic assessment, and thus the sources of emitted particles at the same time. The method is implemented in the MENDEL code. Validation is performed with data corresponding to a UOX fuel pin
Direct and inverse reactions of LiH+ with He(1S) from quantum calculations: mechanisms and rates.
Tacconi, M; Bovino, S; Gianturco, F A
2012-01-14
The gas-phase reaction of LiH(+) (X(2)Σ) with He((1)S) atoms, yielding Li(+)He with a small endothermicity for the rotovibrational ground state of the reagents, is analysed using the quantum reactive approach that employs the Negative Imaginary Potential (NIP) scheme discussed earlier in the literature. The dependence of low-T rates on the initial vibrational state of LiH(+) is analysed and the role of low-energy Feshbach resonances is also discussed. The inverse destruction reaction of LiHe(+), a markedly exothermic process, is also investigated and the rates are computed in the same range of temperatures. The possible roles of these reactions in early universe astrophysical networks, in He droplets environments or in cold traps are briefly discussed.
Hilton, P.R.; Nordholm, S.; Hush, N.S.
1980-01-01
The ground-state inversion method, which we have previously developed for the calculation of atomic cross-sections, is applied to the calculation of molecular photoionization cross-sections. These are obtained as a weighted sum of atomic subshell cross-sections plus multi-centre interference terms. The atomic cross-sections are calculated directly for the atomic functions which, when summed over centre and symmetry, yield the molecular orbital wave function. The use of the ground-state inversion method for this allows the effect of the molecular environment on the atomic cross-sections to be calculated. Multi-centre terms are estimated on the basis of an effective plane-wave expression for this contribution to the total cross-section. Finally the method is applied to the range of photon energies from 0 to 44 eV, where atomic extrapolation procedures have not previously been tested. Results obtained for H2, N2 and CO show good agreement with experiment, particularly when interference effects and effects of the molecular environment on the atomic cross-sections are included. The accuracy is very much better than that of previous plane-wave and orthogonalized plane-wave methods, and can stand comparison with that of recent more sophisticated approaches. It is a feature of the method that the calculation of cross-sections either of atoms or of large molecules requires very little computer time, provided that good quality wave functions are available, and it is then of considerable potential practical interest for photoelectron spectroscopy. (orig.)
Ab initio electronic structure calculations for Mn linear chains deposited on CuN/Cu(001) surfaces
Barral, Maria Andrea; Weht, Ruben; Lozano, Gustavo; Maria Llois, Ana
2007-01-01
In a recent experiment, scanning tunneling microscopy has been used to obtain a direct probe of the magnetic interaction in linear manganese chains arranged by atomic manipulation on thin insulating copper nitride islands grown on Cu(001). The local spin excitation spectra of these chains have been measured with inelastic electron tunneling spectroscopy. Analyzing the spectroscopic results with a Heisenberg Hamiltonian the interatomic coupling strength within the chains has been obtained. It has been found that the coupling strength depends on the deposition sites of the Mn atoms on the islands. In this contribution, we perform ab initio calculations for different arrangements of infinite Mn chains on CuN in order to understand the influence of the environment on the value of the magnetic interactions
Albayrak, Erhan; Keskin, Mustafa
2000-01-01
The linear chain approximation is used to study the temperature dependence of the order parameters and the phase diagrams of the Blume-Emery-Griffiths model on the simple cubic lattice with dipole-dipole, quadrupole-quadrupole coupling strengths and a crystal-field interaction. The problem is approached introducing first a trial one-dimensional Hamiltonian whose free energy can be calculated exactly by the transfer matrix method. Then using the Bogoliubov variational principle, the free energy of the model is determined. It is assumed that the dipolar and quadrupolar intrachain coupling constants are much stronger than the corresponding interchain constants and confined the attention to the case of nearest-neighbor interactions. The phase transitions are examined and the phase diagrams are obtained for several values of the coupling strengths in the three different planes. A comparison with other approximate techniques is also made
Albayrak, E
2000-01-01
The linear chain approximation is used to study the temperature dependence of the order parameters and the phase diagrams of the Blume-Emery-Griffiths model on the simple cubic lattice with dipole-dipole, quadrupole-quadrupole coupling strengths and a crystal-field interaction. The problem is approached introducing first a trial one-dimensional Hamiltonian whose free energy can be calculated exactly by the transfer matrix method. Then using the Bogoliubov variational principle, the free energy of the model is determined. It is assumed that the dipolar and quadrupolar intrachain coupling constants are much stronger than the corresponding interchain constants and confined the attention to the case of nearest-neighbor interactions. The phase transitions are examined and the phase diagrams are obtained for several values of the coupling strengths in the three different planes. A comparison with other approximate techniques is also made.
Amini Afshar, Mostafa; Bingham, Harry B.; Read, Robert
During recent years a computational strategy has been developed at the Technical University of Denmark for numerical simulation of water wave problems based on the high-order finite-difference method, [2],[4]. These methods exhibit a linear scaling of the computational effort as the number of grid points... increases. This understanding is being applied to develop a tool for predicting the added resistance (drift force) of ships in ocean waves. We expect that the optimal scaling properties of this solver will allow us to make a convincing demonstration of convergence of the added resistance calculations based... on both near-field and far-field methods. The solver has been written inside a C++ library known as Overture [3], which can be used to solve partial differential equations on overlapping grids based on the high-order finite-difference method. The resulting code is able to solve, in the time domain, the linearised...
Bayesian seismic AVO inversion
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
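The explicit Gaussian posterior described above can be sketched for a generic linear forward model d = Gm + e with Gaussian prior and noise. The matrix G, noise level, and prior below are illustrative stand-ins, not the convolutional AVO operator:

```python
import numpy as np

rng = np.random.default_rng(0)

# Linear model d = G m + e, prior m ~ N(mu0, Sm), noise e ~ N(0, Se):
# the posterior is Gaussian with explicit mean and covariance.
n_d, n_m = 30, 4
G = rng.standard_normal((n_d, n_m))
m_true = np.array([2.1, -0.7, 1.3, 0.4])
sigma_e = 1e-4                              # near-noise-free data
d = G @ m_true + sigma_e * rng.standard_normal(n_d)

mu0 = np.zeros(n_m)
Sm = np.eye(n_m)                            # assumed prior covariance
Se_inv = np.eye(n_d) / sigma_e**2

# Explicit posterior covariance and expectation (no sampling needed).
S_post = np.linalg.inv(G.T @ Se_inv @ G + np.linalg.inv(Sm))
mu_post = S_post @ (G.T @ Se_inv @ d + np.linalg.inv(Sm) @ mu0)

print(mu_post)                              # close to m_true for small noise
print(np.sqrt(np.diag(S_post)).max())       # posterior standard deviations
```

As in the abstract's synthetic test, the posterior mean recovers the true parameters almost perfectly when the noise approaches zero, and the posterior covariance gives exact prediction intervals under the model.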
Lin Lin; Chao Yang; Jiangfeng Lu; Lexing Ying; Weinan, E.
2009-01-01
We present an efficient parallel algorithm and its implementation for computing the diagonal of H^-1, where H is a 2D Kohn-Sham Hamiltonian discretized on a rectangular domain using a standard second-order finite difference scheme. This type of calculation can be used to obtain an accurate approximation to the diagonal of a Fermi-Dirac function of H through a recently developed pole-expansion technique [LinLuYingE2009]. The diagonal elements are needed in electronic structure calculations for quantum mechanical systems [HohenbergKohn1964, KohnSham1965, DreizlerGross1990]. We show how an elimination tree is used to organize the parallel computation and how synchronization overhead is reduced by passing data level by level along this tree using the technique of local buffers and relative indices. We analyze the performance of our implementation by examining its load balance and communication overhead. We show that our implementation exhibits an excellent weak scaling on a large-scale high performance distributed parallel machine. When compared with the standard approach for evaluating the diagonal of a Fermi-Dirac function of a Kohn-Sham Hamiltonian associated with a 2D electron quantum dot, the new pole-expansion technique that uses our algorithm to compute the diagonal of (H - z_i I)^-1 for a small number of poles z_i is much faster, especially when the quantum dot contains many electrons.
Lin, Lin; Yang, Chao; Lu, Jiangfeng; Ying, Lexing; E, Weinan
2009-09-25
We present an efficient parallel algorithm and its implementation for computing the diagonal of $H^{-1}$ where $H$ is a 2D Kohn-Sham Hamiltonian discretized on a rectangular domain using a standard second order finite difference scheme. This type of calculation can be used to obtain an accurate approximation to the diagonal of a Fermi-Dirac function of $H$ through a recently developed pole-expansion technique \cite{LinLuYingE2009}. The diagonal elements are needed in electronic structure calculations for quantum mechanical systems \cite{HohenbergKohn1964, KohnSham1965, DreizlerGross1990}. We show how an elimination tree is used to organize the parallel computation and how synchronization overhead is reduced by passing data level by level along this tree using the technique of local buffers and relative indices. We analyze the performance of our implementation by examining its load balance and communication overhead. We show that our implementation exhibits an excellent weak scaling on a large-scale high performance distributed parallel machine. When compared with the standard approach for evaluating the diagonal of a Fermi-Dirac function of a Kohn-Sham Hamiltonian associated with a 2D electron quantum dot, the new pole-expansion technique that uses our algorithm to compute the diagonal of $(H-z_i I)^{-1}$ for a small number of poles $z_i$ is much faster, especially when the quantum dot contains many electrons.
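As a point of reference for the "standard approach" these records compare against, the diagonal of a Fermi-Dirac function of a small model Hamiltonian can be computed by direct diagonalization. A toy 1-D discrete Laplacian stands in here for the 2D Kohn-Sham Hamiltonian; beta and mu are illustrative:

```python
import numpy as np

# Spectral evaluation: diag f(H)_ii = sum_k f(lam_k) |V_ik|^2, where
# H = V diag(lam) V^T and f is the Fermi-Dirac function.
n = 50
H = 2 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1-D discrete Laplacian
beta, mu = 10.0, 2.0                                   # inverse temperature, chemical potential

lam, V = np.linalg.eigh(H)
f = 1.0 / (1.0 + np.exp(beta * (lam - mu)))
diag_fH = (V**2) @ f          # diagonal of f(H) without forming f(H)

print(diag_fH.min(), diag_fH.max())   # occupations lie strictly in (0, 1)
print(abs(diag_fH.sum() - f.sum()))   # trace consistency check
```

This O(n^3) diagonalization is exactly what the pole-expansion approach avoids: it replaces f(H) by a short sum of resolvents (H - z_i I)^{-1}, whose diagonals the paper's parallel algorithm computes directly.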
O. Tichý
2016-11-01
Full Text Available Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model, that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
Tichý, Ondřej; Šmídl, Václav; Hofman, Radek; Stohl, Andreas
2016-11-01
Estimation of pollutant releases into the atmosphere is an important problem in the environmental sciences. It is typically formalized as an inverse problem using a linear model that can explain observable quantities (e.g., concentrations or deposition values) as a product of the source-receptor sensitivity (SRS) matrix obtained from an atmospheric transport model multiplied by the unknown source-term vector. Since this problem is typically ill-posed, current state-of-the-art methods are based on regularization of the problem and solution of a formulated optimization problem. This procedure depends on manual settings of uncertainties that are often very poorly quantified, effectively making them tuning parameters. We formulate a probabilistic model, that has the same maximum likelihood solution as the conventional method using pre-specified uncertainties. Replacement of the maximum likelihood solution by full Bayesian estimation also allows estimation of all tuning parameters from the measurements. The estimation procedure is based on the variational Bayes approximation which is evaluated by an iterative algorithm. The resulting method is thus very similar to the conventional approach, but with the possibility to also estimate all tuning parameters from the observations. The proposed algorithm is tested and compared with the standard methods on data from the European Tracer Experiment (ETEX) where advantages of the new method are demonstrated. A MATLAB implementation of the proposed algorithm is available for download.
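The regularized formulation the abstract contrasts with can be sketched as a Tikhonov-type solve of the linear source-term model y = Mx. The SRS matrix M, observations, and regularization weight alpha below are all illustrative stand-ins; alpha is exactly the kind of manually set tuning parameter the Bayesian treatment estimates from data:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hypothetical source-receptor sensitivity matrix and true release rates.
n_obs, n_src = 40, 8
M = rng.standard_normal((n_obs, n_src))
x_true = np.abs(rng.standard_normal(n_src))   # nonnegative release rates
y = M @ x_true                                 # noise-free observations for the check

# Tikhonov-regularized least squares: argmin ||y - Mx||^2 + alpha ||x||^2.
alpha = 1e-8
x_hat = np.linalg.solve(M.T @ M + alpha * np.eye(n_src), M.T @ y)
print(np.max(np.abs(x_hat - x_true)))          # tiny for noise-free data
```

With noisy, ill-posed data the choice of alpha becomes decisive, which motivates replacing the fixed weight by a hierarchical prior whose hyperparameters the variational Bayes iteration infers.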
Pavanello, Michele [Department of Chemistry, Rutgers University, Newark, New Jersey 07102-1811 (United States); Van Voorhis, Troy [Department of Chemistry, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139-4307 (United States); Visscher, Lucas [Amsterdam Center for Multiscale Modeling, VU University, De Boelelaan 1083, 1081 HV Amsterdam (Netherlands); Neugebauer, Johannes [Theoretische Organische Chemie, Organisch-Chemisches Institut der Westfaelischen Wilhelms-Universitaet Muenster, Corrensstrasse 40, 48149 Muenster (Germany)
2013-02-07
Quantum-mechanical methods that are both computationally fast and accurate are not yet available for electronic excitations having charge transfer character. In this work, we present a significant step forward towards this goal for those charge transfer excitations that take place between non-covalently bound molecules. In particular, we present a method that scales linearly with the number of non-covalently bound molecules in the system and is based on a two-pronged approach: The molecular electronic structure of broken-symmetry charge-localized states is obtained with the frozen density embedding formulation of subsystem density-functional theory; subsequently, in a post-SCF calculation, the full-electron Hamiltonian and overlap matrix elements among the charge-localized states are evaluated with an algorithm which takes full advantage of the subsystem DFT density partitioning technique. The method is benchmarked against coupled-cluster calculations and achieves chemical accuracy for the systems considered for intermolecular separations ranging from hydrogen-bond distances to tens of Angstroms. Numerical examples are provided for molecular clusters comprised of up to 56 non-covalently bound molecules.
Botto, D.; Zucca, S.; Gola, M.M.
2003-01-01
In the literature many works deal with the on-line calculation of temperature and thermal stress for machine components and structures, in order to evaluate fatigue damage accumulation and estimate residual life. One of the most widespread methodologies is the Green's function technique (GFT), by which machine parameters such as fluid temperatures, pressures and flow rates are converted into metal temperature transients and thermal stresses. However, since the GFT is based upon the linear superposition principle, it cannot be used directly in the case of varying heat transfer coefficients. In the present work, a different methodology is proposed, based upon component mode synthesis (CMS) for the temperature transient calculation and upon the GFT for the related thermal stress evaluation. This new approach allows variable heat transfer coefficients to be accounted for. The methodology is applied to two different case studies taken from the literature: a thick pipe and a nozzle connected to a spherical head, both subjected to multiple convective boundary conditions
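Under the linear superposition assumption, the GFT step reduces to a discrete Duhamel convolution of the input history's increments with a unit-step response. The first-order step response below is an assumed illustration, not a plant-specific Green's function:

```python
import numpy as np

dt, tau = 0.5, 20.0
t = np.arange(0.0, 200.0, dt)
s = 1.0 - np.exp(-t / tau)              # assumed first-order unit-step response

u = np.where(t >= 10.0, 35.0, 0.0)      # fluid temperature steps by 35 K at t = 10 s
du = np.diff(u, prepend=0.0)            # increments of the input history
resp = np.convolve(du, s)[: len(t)]     # Duhamel superposition sum

# For a single step, superposition must reproduce the shifted, scaled step response.
i0 = int(round(10.0 / dt))
expected = np.zeros_like(t)
expected[i0:] = 35.0 * s[: len(t) - i0]
print(np.max(np.abs(resp - expected)))  # essentially zero
```

This is precisely the step that breaks down when the heat transfer coefficient varies, since the response to a unit input then depends on the operating state, which is why the paper moves the temperature calculation to CMS.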
Chan, T.; Cook, N.G.W.
1979-12-01
Thermally induced displacements and stresses have been calculated by finite element analysis to guide the design, operation, and data interpretation of the in situ heating experiments in a granite formation at Stripa, Sweden. There are two full-scale tests with electrical heater canisters comparable in size and power to those envisaged for reprocessed high level waste canisters and a time-scaled test. To provide a simple theoretical basis for data analysis, linear thermoelasticity was assumed. Constant (temperature-independent) thermal and mechanical rock properties were used in the calculations. These properties were determined by conventional laboratory testing on small intact core specimens recovered from the Stripa test site. Two-dimensional axisymmetric models were used for the full-scale experiments, and three-dimensional models for the time-scaled experiment. Highest compressive axial and tangential stresses are expected at the wall of the heater borehole. For the 3.6 kW full-scale heated experiment, maximum compressive tangential stress was predicted to be below the unconfined compressive strength of Stripa granite, while for the 5 kW experiment, the maximum was approximately equal to the compressive strength before the concentric ring of eight 1 kW peripheral heaters was activated, but would exceed that soon afterwards. Three zones of tensile thermomechanical stresses will occur in each full-scale experiment. Maximum vertical displacements range from a fraction of a millimeter over most of the instrumented area of the time-scaled experiment to a few millimeters in the higher-power full-scale experiment. Radial displacements are typically half or less than vertical displacements. The predicted thermomechanical displacements and stresses have been stored in an on-site computer to facilitate instant graphic comparison with field data as the latter are collected
Tayal, M.
1987-01-01
Structures often operate at elevated temperatures. Temperature calculations are needed so that the design can accommodate thermally induced stresses and material changes. A finite element computer code called FEAT has been developed to calculate temperatures in solids of arbitrary shapes. FEAT solves the classical equation for steady-state conduction of heat. The solution is obtained for two-dimensional (plane or axisymmetric) or for three-dimensional problems. Gap elements are used to simulate interfaces between neighbouring surfaces. The code can model: conduction; internal generation of heat; prescribed convection to a heat sink; prescribed temperatures at boundaries; prescribed heat fluxes on some surfaces; and temperature dependence of material properties like thermal conductivity. The user has the option of specifying the detailed variation of thermal conductivity with temperature. For convenience to the nuclear fuel industry, the user can also opt for pre-coded values of thermal conductivity, which are obtained from the MATPRO data base (sponsored by the U.S. Nuclear Regulatory Commission). The finite element method makes FEAT versatile, and enables it to accommodate complex geometries accurately. The optional link to MATPRO makes it convenient for the nuclear fuel industry to use FEAT, without loss of generality. Special numerical techniques make the code inexpensive to run for the type of material non-linearities often encountered in the analysis of nuclear fuel. The code, however, is general, and can be used for other components of the reactor, or even for non-nuclear systems. The predictions of FEAT have been compared against several analytical solutions; the agreement is usually better than 5%. Thermocouple measurements show that the FEAT predictions are consistent with measured changes in temperatures in simulated pressure tubes. FEAT was also found to predict well the axial variations in temperatures in the end-pellets (UO2) of two fuel elements irradiated
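The kind of material non-linearity mentioned above (temperature-dependent conductivity) can be handled by lagging k(T) and re-solving the conduction system until it converges. The 1-D finite-difference sketch below is conceptually similar to, but much simpler than, what a code like FEAT does; the conductivity law is an assumed example, not MATPRO data:

```python
import numpy as np

# Steady 1-D conduction d/dx( k(T) dT/dx ) = 0 with fixed end temperatures,
# solved by Picard iteration: evaluate k from the current T, re-solve, repeat.
n = 51
x = np.linspace(0.0, 1.0, n)
T_left, T_right = 300.0, 700.0

def k_of_T(T):
    return 2.0 * (1.0 + 1e-3 * T)     # hypothetical conductivity law, W/(m K)

T = np.linspace(T_left, T_right, n)   # initial guess
for _ in range(50):
    k_face = 0.5 * (k_of_T(T[:-1]) + k_of_T(T[1:]))   # face conductivities
    A = np.zeros((n, n)); b = np.zeros(n)
    A[0, 0] = A[-1, -1] = 1.0
    b[0], b[-1] = T_left, T_right
    for i in range(1, n - 1):         # interior flux-balance equations
        A[i, i - 1] = k_face[i - 1]
        A[i, i] = -(k_face[i - 1] + k_face[i])
        A[i, i + 1] = k_face[i]
    T_new = np.linalg.solve(A, b)
    if np.max(np.abs(T_new - T)) < 1e-10:
        T = T_new
        break
    T = T_new

q = -k_face * np.diff(T) / np.diff(x)  # heat flux on each face
print(q.min(), q.max())                # nearly constant: flux is conserved
```

At convergence the flux is uniform along the rod, the discrete statement of steady conduction with no internal heat generation.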
Marsolat, F; De Marzi, L; Mazal, A; Pouzoulet, F
2016-01-01
In proton therapy, the relative biological effectiveness (RBE) depends on various types of parameters such as linear energy transfer (LET). An analytical model for LET calculation exists (Wilkens’ model), but secondary particles are not included in this model. In the present study, we propose a correction factor, L_sec, for Wilkens’ model in order to take into account the LET contributions of certain secondary particles. This study includes secondary protons and deuterons, since the effects of these two types of particles can be described by the same RBE-LET relationship. L_sec was evaluated by Monte Carlo (MC) simulations using the GATE/GEANT4 platform and was defined by the ratio of the LET_d distributions of all protons and deuterons and only primary protons. This method was applied to the innovative Pencil Beam Scanning (PBS) delivery systems and L_sec was evaluated along the beam axis. This correction factor indicates the high contribution of secondary particles in the entrance region, with L_sec values higher than 1.6 for a 220 MeV clinical pencil beam. MC simulations showed the impact of pencil beam parameters, such as mean initial energy, spot size, and depth in water, on L_sec. The variation of L_sec with these different parameters was integrated in a polynomial function of the L_sec factor in order to obtain a model universally applicable to all PBS delivery systems. The validity of this correction factor applied to Wilkens’ model was verified along the beam axis of various pencil beams in comparison with MC simulations. A good agreement was obtained between the corrected analytical model and the MC calculations, with mean-LET deviations along the beam axis less than 0.05 keV μm^-1. These results demonstrate the efficacy of our new correction of the existing LET model in order to take into account secondary protons and deuterons along the pencil beam axis. (paper)
Inverse Faraday Effect Revisited
Mendonça, J. T.; Ali, S.; Davies, J. R.
2010-11-01
The inverse Faraday effect is usually associated with circularly polarized laser beams. However, it was recently shown that it can also occur for linearly polarized radiation [1]. The quasi-static axial magnetic field generated by a laser beam propagating in plasma can be calculated by considering both the spin and the orbital angular momenta of the laser pulse. A net spin is present when the radiation is circularly polarized, and a net orbital angular momentum is present if there is any deviation from perfect rotational symmetry. This orbital angular momentum has recently been discussed in the plasma context [2], and can give an additional contribution to the axial magnetic field, thus enhancing or reducing the inverse Faraday effect. As a result, this effect, usually attributed to circular polarization, can also be excited by linearly polarized radiation if the incident laser propagates in a Laguerre-Gauss mode carrying a finite amount of orbital angular momentum. [1] S. Ali, J.R. Davies and J.T. Mendonca, Phys. Rev. Lett. 105, 035001 (2010). [2] J. T. Mendonca, B. Thidé, and H. Then, Phys. Rev. Lett. 102, 185005 (2009).
Joel Sereno
2010-01-01
Inverse kinematics is the process of converting a Cartesian point in space into a set of joint angles so as to move the end effector of a robot to a desired position and orientation more efficiently. This project investigates the inverse kinematics of a robotic hand with fingers under various scenarios. Assuming the parameters of a provided robot, a general equation for the end-effector point was calculated and used to plot the region of space that it can reach. Further, the benefits obtained from the addition of a prismatic joint versus an extra variable-angle joint were considered. The results confirmed that having more movable parts, such as prismatic joints and changing angles, increases the effective reach of a robotic hand.
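The closed-form inverse kinematics alluded to above can be illustrated for the simplest case. The following is a minimal sketch for a planar two-link arm, not the multi-finger hand studied in the project; the link lengths l1, l2 and the elbow branch chosen are illustrative assumptions.

```python
import numpy as np

def two_link_ik(x, y, l1=1.0, l2=1.0):
    """Inverse kinematics: joint angles reaching (x, y) for a planar 2-link arm."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)  # law of cosines
    if abs(c2) > 1:
        raise ValueError("target outside reachable workspace")
    t2 = np.arccos(c2)  # elbow angle (one of the two solution branches)
    t1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(t2), l1 + l2 * np.cos(t2))
    return t1, t2

def forward(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics, used here only to check the IK solution."""
    return (l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
            l1 * np.sin(t1) + l2 * np.sin(t1 + t2))

t1, t2 = two_link_ik(1.2, 0.5)  # illustrative reachable target
```

Substituting the angles back into the forward kinematics reproduces the target point, which is the standard consistency check for such solutions.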
Generalized inverses: theory and computations
Wang, Guorong; Qiao, Sanzheng
2018-01-01
This book begins with the fundamentals of the generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, imbedding method, finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of the generalized inverses with an undergraduate-level understanding of linear algebra.
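The defining property of the generalized inverses the book studies can be checked numerically. A small sketch (using NumPy's SVD-based pinv, not any algorithm from the book) verifies the four Penrose conditions on a singular matrix, for which the ordinary inverse does not exist:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])   # rank 2: row 2 is twice row 1, so A is singular
Ap = np.linalg.pinv(A)         # Moore-Penrose generalized inverse via SVD

penrose = [
    np.allclose(A @ Ap @ A, A),        # (1) A X A = A
    np.allclose(Ap @ A @ Ap, Ap),      # (2) X A X = X
    np.allclose((A @ Ap).T, A @ Ap),   # (3) (A X)^T = A X
    np.allclose((Ap @ A).T, Ap @ A),   # (4) (X A)^T = X A
]
print(all(penrose))  # → True
```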
Reactivity-induced time-dependencies of EBR-II linear and non-linear feedbacks
Grimm, K.N.; Meneghetti, D.
1988-01-01
Time-dependent linear feedback reactivities are calculated for stereotypical subassemblies in the EBR-II reactor. These quantities are calculated from nodal reactivities obtained from a kinetic-code analysis of an experiment in which the change in power resulted from the dropping of a control rod. Shown with these linear reactivities are the reactivity associated with the control-rod shaft contraction and also the time-dependent non-linear (mainly bowing) component deduced from the inverse kinetics of the experimentally measured fission power and the calculated linear reactivities. (author)
Optimized nonlinear inversion of surface-wave dispersion data
Raykova, Reneta B.
2014-01-01
A new code for the inversion of surface-wave dispersion data is developed to obtain Earth's crustal and upper-mantle velocity structure. The author developed the Optimized Non-Linear Inversion (ONLI) software, based on Monte-Carlo search. The values of S-wave velocity VS and thickness h for a number of horizontally homogeneous layers are parameterized. The P-wave velocity VP and density ρ of the relevant layers are calculated by empirical or theoretical relations. ONLI explores the parameter space in two modes, selective and full search, and the main innovation of the software is the evaluation of tested models. Theoretical dispersion curves are calculated only if the tested model satisfies specific conditions, reducing the computation time considerably. A number of tests explored the impact of parameterization and proved the ability of the ONLI approach to deal successfully with the non-uniqueness of the inversion problem. Key words: Earth's structure, surface-wave dispersion, non-linear inversion, software
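The Monte-Carlo search at the heart of such an inversion can be caricatured in a few lines. The sketch below uses a made-up two-parameter forward model in place of a real surface-wave dispersion computation; the parameter bounds and RMS misfit are illustrative assumptions, not ONLI's.

```python
import numpy as np

rng = np.random.default_rng(0)
periods = np.linspace(5.0, 50.0, 10)          # periods of the "dispersion curve"

def forward(model):
    """Stand-in forward model (toy, not real surface-wave physics)."""
    v0, grad = model
    return v0 + grad * np.sqrt(periods)

observed = forward(np.array([2.5, 0.3]))      # synthetic "observed" curve

best_model, best_misfit = None, np.inf
for _ in range(5000):                         # full-search mode: random draws in bounds
    trial = rng.uniform([1.0, 0.0], [5.0, 1.0])
    misfit = np.sqrt(np.mean((forward(trial) - observed) ** 2))
    if misfit < best_misfit:                  # keep the best-fitting model
        best_model, best_misfit = trial, misfit
```

The non-uniqueness the abstract mentions shows up here as a valley of near-equivalent models: many (v0, grad) pairs fit the curve almost equally well.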
Angle-domain inverse scattering migration/inversion in isotropic media
Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan
2018-07-01
The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wavefield by the Born approximation and recovered by applying an inverse GRT operator to the scattered wavefield data. A typical GRT-style true-amplitude inversion procedure contains an amplitude-compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. It is to some extent intuitive to perform the generalized linear inversion and the inversion of the GRT together through this process for direct inversion. However, this operation is imprecise when the illumination at the image point is limited, which easily makes the matrix inaccurate and unstable. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally removes the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop, which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate its effectiveness and practicability.
Parkhurst, David L.; Appelo, C.A.J.
2013-01-01
PHREEQC version 3 is a computer program written in the C and C++ programming languages that is designed to perform a wide variety of aqueous geochemical calculations. PHREEQC implements several types of aqueous models: two ion-association aqueous models (the Lawrence Livermore National Laboratory model and WATEQ4F), a Pitzer specific-ion-interaction aqueous model, and the SIT (Specific ion Interaction Theory) aqueous model. Using any of these aqueous models, PHREEQC has capabilities for (1) speciation and saturation-index calculations; (2) batch-reaction and one-dimensional (1D) transport calculations with reversible and irreversible reactions, which include aqueous, mineral, gas, solid-solution, surface-complexation, and ion-exchange equilibria, and specified mole transfers of reactants, kinetically controlled reactions, mixing of solutions, and pressure and temperature changes; and (3) inverse modeling, which finds sets of mineral and gas mole transfers that account for differences in composition between waters within specified compositional uncertainty limits. Many new modeling features were added to PHREEQC version 3 relative to version 2. The Pitzer aqueous model (pitzer.dat database, with keyword PITZER) can be used for high-salinity waters that are beyond the range of application for the Debye-Hückel theory. The Peng-Robinson equation of state has been implemented for calculating the solubility of gases at high pressure. Specific volumes of aqueous species are calculated as a function of the dielectric properties of water and the ionic strength of the solution, which allows calculation of pressure effects on chemical reactions and the density of a solution. The specific conductance and the density of a solution are calculated and printed in the output file. In addition to Runge-Kutta integration, a stiff ordinary differential equation solver (CVODE) has been included for kinetic calculations with multiple rates that occur at widely different time scales
Hung, Linda; Huang, Chen; Shin, Ilgyou; Ho, Gregory S.; Lignères, Vincent L.; Carter, Emily A.
2010-12-01
Computer: Intel with ifort; AMD Opteron with pathf90. Operating system: Linux. Has the code been vectorized or parallelized?: Yes. Parallelization is implemented through domain decomposition using MPI. RAM: Problem dependent, but 2 GB is sufficient for up to 10,000 ions. Classification: 7.3. External routines: FFTW 2.1.5 ( http://www.fftw.org). Catalogue identifier of previous version: AEBN_v1_0. Journal reference of previous version: Comput. Phys. Comm. 179 (2008) 839. Does the new version supersede the previous version?: Yes. Nature of problem: Given a set of coordinates describing the initial ion positions under periodic boundary conditions, recovers the ground state energy, electron density, ion positions, and cell lattice vectors predicted by orbital-free density functional theory. The computation of all terms is effectively linear scaling. Parallelization is implemented through domain decomposition, and up to ˜10,000 ions may be included in the calculation on just a single processor, limited by RAM. For example, when optimizing the geometry of ˜50,000 aluminum ions (plus vacuum) on 48 cores, a single iteration of conjugate gradient ion geometry optimization takes ˜40 minutes wall time. However, each CG geometry step requires two or more electron density optimizations, so step times will vary. Solution method: Computes energies as described in text; minimizes this energy with respect to the electron density, ion positions, and cell lattice vectors. Reasons for new version: To allow much larger systems to be simulated using PROFESS. Restrictions: PROFESS cannot use nonlocal (such as ultrasoft) pseudopotentials. A variety of local pseudopotential files are available at the Carter group website ( http://www.princeton.edu/mae/people/faculty/carter/homepage/research/localpseudopotentials/). Also, due to the current state of the kinetic energy functionals, PROFESS is only reliable for main group metals and some properties of semiconductors. Running time: Problem dependent: the test
Boer, Jan de; Peeters, Bas; Skenderis, Kostas; Nieuwenhuizen, Peter van
1995-01-01
We construct the path integral for one-dimensional non-linear sigma models, starting from a given Hamiltonian operator and states in a Hilbert space. By explicit evaluation of the discretized propagators and vertices we find the correct Feynman rules which differ from those often assumed. These
Bayesian inversion of refraction seismic traveltime data
Ryberg, T.; Haberland, Ch
2018-03-01
We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when using the far-offset observations, are known to have experimental geometries which are very poor, highly ill-posed and far from ideal. As a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of appropriate inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and only explore the model space locally. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge (or assumptions) about inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows one to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates of the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow one to derive a reference solution and error map by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution in the posterior values due to poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant-slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey from Northern Namibia and compared to conventional tomography. An inversion test
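The core of such a McMC inversion is a Metropolis accept/reject loop whose post-burn-in samples yield a mean model and a standard deviation. A toy sketch for a single model parameter, with a stand-in linear forward model rather than the eikonal solver used in the survey; all numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# toy forward problem: "traveltimes" d = m * x + Gaussian noise
x = np.linspace(1.0, 10.0, 20)
m_true, sigma = 0.5, 0.05
d = m_true * x + rng.normal(0.0, sigma, x.size)

def log_like(m):
    r = d - m * x
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis random walk over the single model parameter m
m_cur, ll_cur = 0.0, log_like(0.0)
samples = []
for _ in range(20000):
    m_prop = m_cur + rng.normal(0.0, 0.005)        # proposal step
    ll_prop = log_like(m_prop)
    if np.log(rng.uniform()) < ll_prop - ll_cur:   # accept/reject
        m_cur, ll_cur = m_prop, ll_prop
    samples.append(m_cur)

post = np.array(samples[5000:])                    # discard burn-in
mean, std = post.mean(), post.std()                # reference model and its uncertainty
```

The posterior standard deviation is exactly the per-parameter uncertainty estimate the abstract derives from its ensemble of sampled models.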
Kenzhebaev, Sh.K.; Djuraev, Sh.H.; Mannanov, D.E.; Khugaev, A.V.
1994-01-01
The investigation of the non-stationary Fermi-gas thermalization of nucleons in the residual excited nucleus, treated as an open non-linear system, and analytical methods of calculation are presented. (author). 9 refs
Xing, Qiang; Wu, Bingfang; Zhu, Weiwei
2014-01-01
The aerodynamic roughness is one of the major parameters describing the turbulent exchange process between the land surface and the atmosphere. Remote sensing is recognized as an effective way to invert this parameter at the regional scale. However, for a long time the inversion methods have depended either on lookup tables for different land covers or on the Normalized Difference Vegetation Index (NDVI) alone, which plays a very limited role in describing the spatial heterogeneity of this parameter and of the evapotranspiration (ET) for different land covers. In fact, the aerodynamic roughness is influenced by several factors at the same time, including the roughness units of hard surfaces, dynamic vegetation growth and undulating terrain. Therefore, this paper aims at developing an innovative aerodynamic roughness inversion method based on multi-source remote sensing data in a semiarid region within the upper and middle reaches of the Heihe River Basin. The radar backscattering coefficient was used to invert the micro-relief of the hard surface. The NDVI was utilized to reflect the dynamic change of the vegetated surface. Finally, the slope extracted from the SRTM DEM (Shuttle Radar Topography Mission Digital Elevation Model) was used to correct for terrain influence. The inverted aerodynamic roughness was imported into the ETWatch system to validate its availability. The inversion and test results show that it plays a significant role in improving the spatial heterogeneity of the aerodynamic roughness and the related ET for the experimental site
Anaf, J.; Chalhoub, E.S.
1991-01-01
The NJOY and LINEAR/RECENT/GROUPIE calculational procedures for the resolved and unresolved resonance contributions and background cross sections are evaluated. Elastic scattering, fission and capture multigroup cross sections generated by these codes and the previously validated ETOG-3Q, ETOG-3, FLANGE-II and XLACS are compared. Constant weighting function and zero Kelvin temperature are considered. Discrepancies are presented and analyzed. (author)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
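Stabilized linear inversion of the kind INVERT performs can be sketched with Tikhonov damping, minimizing ||Gm − d||² + α²||m||²; the smoothing kernel, damping value and noise level below are illustrative stand-ins, not those of the gravity code.

```python
import numpy as np

rng = np.random.default_rng(2)

# toy linear forward problem d = G m: a smoothing kernel blurs the model
n = 30
xs = np.linspace(0.0, 1.0, n)
G = np.exp(-((xs[:, None] - xs[None, :]) / 0.2) ** 2)   # ill-conditioned kernel
m_true = np.sin(2 * np.pi * xs)
d = G @ m_true + rng.normal(0.0, 0.01, n)               # noisy synthetic data

def damped_lsq(G, d, alpha):
    """Minimize ||G m - d||^2 + alpha^2 ||m||^2 (stabilized linear inversion)."""
    A = G.T @ G + alpha ** 2 * np.eye(G.shape[1])
    return np.linalg.solve(A, G.T @ d)

m_est = damped_lsq(G, d, alpha=0.1)   # damping keeps noise from dominating
```

The damping term trades a small bias in the recovered model for stability against the noise amplification an undamped inverse of an ill-conditioned kernel would produce.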
Garcia de Viedma Alonso, L.
1963-07-01
SYRIO is a code for the inversion of a non-singular square matrix whose order is not higher than 40 for the UNIVAC-UCT (SS-90). The treatment starts from the Sherman-Morrison inversion formula and, following Herbert S. Wilf's method for special matrices, generalizes the procedure to any kind of non-singular square matrix. The limitation on the matrix order is not inherent to the program itself but is imposed by the storage capacity of the computer for which it was coded. (Author)
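The Sherman-Morrison formula on which this treatment builds updates a known inverse after a rank-one change: (A + uvᵀ)⁻¹ = A⁻¹ − A⁻¹uvᵀA⁻¹ / (1 + vᵀA⁻¹u). A small numerical check, with modern NumPy standing in for the historical code and an arbitrary well-conditioned test matrix:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 5
A = rng.normal(size=(n, n)) + n * np.eye(n)   # diagonally dominated, non-singular
u = np.full(n, 0.5)
v = np.full(n, 0.5)

A_inv = np.linalg.inv(A)

# Sherman-Morrison: (A + u v^T)^{-1} = A^{-1} - A^{-1} u v^T A^{-1} / (1 + v^T A^{-1} u)
denom = 1.0 + v @ A_inv @ u
B_inv = A_inv - np.outer(A_inv @ u, v @ A_inv) / denom

print(np.allclose(B_inv, np.linalg.inv(A + np.outer(u, v))))  # → True
```

The update costs O(n²) once A⁻¹ is known, versus O(n³) for inverting the modified matrix from scratch, which is what made repeated application of the formula attractive on storage-limited machines.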
Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I
2017-01-01
This paper evaluates an efficient implementation to multiply the inverse of a numerator relationship matrix for genotyped animals by a vector. The computation is required for solving mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG) method. The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of a numerator relationship matrix including genotyped animals and their ancestors. These elements were rapidly calculated with Henderson's rule and stored as sparse matrices in memory. The implementation was a series of sparse matrix-vector multiplications. The diagonal elements, which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation was compared with explicit inversion on 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 s, 3 min, and 5 min, respectively, for setting up. Only <1 s was required for the multiplication in each PCG iteration for any of the data sets. When the equations in ssGBLUP are solved with the PCG algorithm, the matrix inversion is no longer a limiting factor in the computations.
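The key idea, that an iterative solver such as PCG never needs an explicit inverse but only matrix-vector products, can be sketched with a matrix-free conjugate-gradient loop over a matrix stored as sparse triples. This is a generic illustration on a toy SPD system, not the ssGBLUP implementation:

```python
import numpy as np

def cg(matvec, b, tol=1e-10, maxit=1000):
    """Conjugate gradients using only matrix-vector products (no explicit inverse)."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(maxit):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# sparse SPD matrix stored as (row, col, value) triples; only matvec is ever needed
n = 50
triples = [(i, i, 4.0) for i in range(n)] + \
          [(i, i + 1, -1.0) for i in range(n - 1)] + \
          [(i + 1, i, -1.0) for i in range(n - 1)]

def matvec(v):
    out = np.zeros_like(v)
    for i, j, a in triples:
        out[i] += a * v[j]
    return out

b = np.ones(n)
x = cg(matvec, b)
```

Because the matrix is touched only through matvec, memory scales with the number of nonzeros rather than with the dense inverse, which is the trade-off the paper's timings quantify.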
Petit, Andrew S.; Subotnik, Joseph E.
2014-01-01
In this paper, we develop a surface hopping approach for calculating linear absorption spectra using ensembles of classical trajectories propagated on both the ground and excited potential energy surfaces. We demonstrate that our method allows the dipole-dipole correlation function to be determined exactly for the model problem of two shifted, uncoupled harmonic potentials with the same harmonic frequency. For systems where nonadiabatic dynamics and electronic relaxation are present, preliminary results show that our method produces spectra in better agreement with the results of exact quantum dynamics calculations than spectra obtained using the standard ground-state Kubo formalism. As such, our proposed surface hopping approach should find immediate use for modeling condensed phase spectra, especially for expensive calculations using ab initio potential energy surfaces
Alsmiller, R.G. Jr.; Alsmiller, F.S.; Lewis, T.A.
1986-05-01
In a series of previous papers, calculated results obtained using a one-dimensional ballistic model were presented to aid in the design of a prebuncher for the Oak Ridge Electron Linear Accelerator. As part of this work, a model was developed to provide limits on the fraction of an incident current pulse that would be accelerated by the existing accelerator. In this paper experimental data on this fraction are presented and the validity of the model developed previously is tested by comparing calculated and experimental data. Part of the experimental data is used to fix the physical parameters in the model and then good agreement between the calculated results and the rest of the experimental data is obtained
Santos, Maira R.; Silveira, Thiago B.; Garcia, Paulo L.; Trindade, Cassia; Martins, Lais P.; Batista, Delano V.S.
2013-01-01
Given the new methodology introduced in shielding calculation due to recent modulated techniques in radiotherapy treatment, it became necessary to evaluate the impact of changes in the accelerator routine using such techniques. Based on a group of 30 patients from the National Cancer Institute (INCA), the workload multiplier factors for intensity-modulated radiotherapy (IMRT factor) and for RapidArc™ (RA factor) were established. Four different routines in a generic 6 MV accelerator were proposed to estimate the impact of these modified workloads on the building cost of the secondary barriers. The results indicate that if 50% of patients are treated with IMRT, the secondary barrier becomes 14.1% more expensive than the barrier calculated for conformal treatments exclusively, while RA, in the same proportion, leads to a barrier only 3.7% more expensive. This shows that RA can, while reducing treatment time, increase the proportion of patients treated with a modulation technique without increasing the cost of the barrier, when compared with IMRT. (author)
Maï, S El; Petit, J; Mercier, S; Molinari, A
2014-01-01
The fragmentation of structures subject to dynamic conditions is a matter of interest for civil industries as well as for Defence institutions. Dynamic expansions of structures, such as cylinders or rings, have been performed to obtain crucial information on fragment distributions. Many authors have proposed to capture by FEA the experimental distribution of fragment size by introducing in the FE model a perturbation. Stability and bifurcation analyses have also been proposed to describe the evolution of the perturbation growth rate. In the proposed contribution, the multiple necking of a round bar in dynamic tensile loading is analysed by the FE method. A perturbation on the initial flow stress is introduced in the numerical model to trigger instabilities. The onset time and the dominant mode of necking have been characterized precisely and showed power law evolutions, with the loading velocities and moderately with the amplitudes and the cell sizes of the perturbations. In the second part of the paper, the development of linear stability analysis and the use of salient criteria in terms of the growth rate of perturbations enabled comparisons with the numerical results. A good correlation in terms of onset time of instabilities and of number of necks is shown.
Pertsev, N. A.; Zembilgotov, A. G.; Waser, R.
1998-08-01
The effective dielectric, piezoelectric, and elastic constants of polycrystalline ferroelectric materials are calculated from single-crystal data by an advanced method of effective medium, which takes into account the piezoelectric interactions between grains in full measure. For bulk BaTiO3 and PbTiO3 polarized ceramics, the dependences of material constants on the remanent polarization are reported. Dielectric and elastic constants are computed also for unpolarized c- and a-textured ferroelectric thin films deposited on cubic or amorphous substrates. It is found that the dielectric properties of BaTiO3 and PbTiO3 polycrystalline thin films strongly depend on the type of crystal texture. The influence of two-dimensional clamping by the substrate on the dielectric and piezoelectric responses of polarized films is described quantitatively and shown to be especially important for the piezoelectric charge coefficient of BaTiO3 films.
Namatame, Hirofumi; Taniguchi, Masaki
1994-01-01
Photoelectron spectroscopy is regarded as a most powerful technique since it can measure the occupied electronic states almost completely. Inverse photoelectron spectroscopy, on the other hand, is a technique for measuring the unoccupied electronic states by using the inverse process of photoelectron spectroscopy, and in principle experiments similar to photoelectron spectroscopy become feasible. The development of experimental technology for inverse photoelectron spectroscopy has been pursued energetically by many research groups. At present, work is in progress on improving the resolution of inverse photoelectron spectroscopy and on developing inverse photoelectron spectrometers with variable photon energy; however, inverse photoelectron spectrometers for the vacuum ultraviolet region are not yet on the market. In this report, the principle of inverse photoelectron spectroscopy and the present state of the spectrometers are described, and the direction of future development is explored. As experimental equipment, electron guns, photon detectors and so on are explained. As examples of experiments, the inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)
Kuznetsov, N.; Maz'ya, V.; Vainberg, B.
2002-08-01
This book gives a self-contained and up-to-date account of mathematical results in the linear theory of water waves. The study of waves has many applications, including the prediction of the behavior of floating bodies (ships, submarines, tension-leg platforms, etc.), the calculation of wave-making resistance in naval architecture, and the description of wave patterns over bottom topography in geophysical hydrodynamics. The first section deals with time-harmonic waves. Three linear boundary value problems serve as the approximate mathematical models for these types of water waves. The next section uses a plethora of mathematical techniques in the investigation of these three problems. The techniques used in the book include integral equations based on Green's functions, various inequalities between the kinetic and potential energy, and integral identities which are indispensable for proving the uniqueness theorems. The so-called inverse procedure is applied to construct examples of non-uniqueness, usually referred to as 'trapped modes.'
Linearization Method and Linear Complexity
Tanaka, Hidema
We focus on the relationship between the linearization method and linear complexity and show that the linearization method is another effective technique for calculating linear complexity. We analyze its effectiveness by comparing it with the logic-circuit method. We compare the relevant conditions and necessary computational cost with those of the Berlekamp-Massey algorithm and the Games-Chan algorithm. The significant property of the linearization method is that it needs no output sequence from a pseudo-random number generator (PRNG), because it calculates linear complexity from the algebraic expression of the generator's algorithm. When a PRNG has n-bit stages (registers or internal states), the necessary computational cost is smaller than O(2^n). The Berlekamp-Massey algorithm, on the other hand, needs O(N^2), where N (≈ 2^n) denotes the period. Since existing methods calculate from the output sequence, the initial value of the PRNG influences the resulting value of the linear complexity; the linear complexity is therefore generally given only as an estimate. Since the linearization method calculates from the algorithm of the PRNG itself, it can determine the lower bound of the linear complexity.
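For contrast with the linearization method, the sequence-based approach the text compares against is the Berlekamp-Massey algorithm, which computes the linear complexity of an observed binary sequence in O(N²). A compact GF(2) version (a textbook sketch, not the paper's code):

```python
def berlekamp_massey(s):
    """Linear complexity of a binary sequence s (list of 0/1 values) over GF(2)."""
    C, B = [1], [1]        # current and previous connection polynomials
    L, m = 0, 1            # complexity so far; steps since B was last updated
    for n in range(len(s)):
        d = s[n]                               # discrepancy of the next bit
        for i in range(1, L + 1):
            d ^= C[i] & s[n - i]
        if d:
            T = C[:]
            C += [0] * (len(B) + m - len(C))   # C(x) += x^m * B(x)
            for i, b in enumerate(B):
                C[i + m] ^= b
            if 2 * L <= n:
                L, B, m = n + 1 - L, T, 1
            else:
                m += 1
        else:
            m += 1
    return L
```

As the abstract notes, the result depends on the observed output: for example the all-ones sequence has complexity 1, while a sequence of n−1 zeros followed by a one has complexity n.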
Green, David L.; Berry, Lee A.; Simpson, Adam B.; Younkin, Timothy R.
2018-04-01
We present the KINETIC-J code, a computational kernel for evaluating the linearized Vlasov equation with application to calculating the kinetic plasma response (current) to an applied time-harmonic wave electric field. This code addresses the need for a configuration-space evaluation of the plasma current to enable kinetic full-wave solvers for waves in hot plasmas to move beyond the limitations of the traditional Fourier spectral methods. We benchmark the kernel via comparison with the standard k-space forms of the hot plasma conductivity tensor.
Mosegaard, Klaus
2012-01-01
For non-linear inverse problems, the mathematical structure of the mapping from model parameters to data is usually unknown or partly unknown. Absence of information about the mathematical structure of this function prevents us from presenting an analytical solution, so our solution depends on our … -heuristics are inefficient for large-scale, non-linear inverse problems, and that the 'no-free-lunch' theorem holds. We discuss typical objections to the relevance of this theorem. A consequence of the no-free-lunch theorem is that algorithms adapted to the mathematical structure of the problem perform more efficiently than … pure meta-heuristics. We study problem-adapted inversion algorithms that exploit the knowledge of the smoothness of the misfit function of the problem. Optimal sampling strategies exist for such problems, but many of these problems remain hard.
Inversion assuming weak scattering
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2013-01-01
… due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed …
Smith, R.E.; Waisman, R.; Hu, M.H.; Frick, T.M.
1995-01-01
A non-linear analysis has been performed to determine relative motions between tubes and tube support plates (TSPs) during a steam line break (SLB) event for steam generators. The SLB event results in blowdown of steam and water out of the steam generator. The fluid blowdown generates pressure drops across the TSPs, resulting in out-of-plane motion. The SLB-induced pressure loads are calculated with a computer program that uses drift-flux modeling of the two-phase flow. In order to determine the relative tube/TSP motions, a non-linear dynamic time-history analysis is performed using a structural model that considers all of the significant component members of the tube support system. The dynamic response of the structure to the pressure loads is calculated using a special-purpose computer program. This program links the various substructures at common degrees of freedom into a combined mass and stiffness matrix. The program accounts for structural non-linearities, including potential tube and TSP interaction at any given tube position. The program also accounts for structural damping as part of the dynamic response. Incorporating all of the above effects, the equations of motion are solved to give TSP displacements at the reduced set of DOF. Using the displacement results from the dynamic analysis, plate stresses are then calculated using the detailed component models. Displacements from the dynamic analysis are imposed as boundary conditions at the DOF locations, and the finite element program then solves for the overall distorted geometry. Calculations are also performed to assure that assumptions regarding elastic response of the various structural members and support points are valid
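A dynamic time-history analysis of this kind rests on an implicit time integrator advancing the equations of motion step by step. As a hedged illustration, here is a single-degree-of-freedom Newmark average-acceleration scheme for m·u″ + c·u′ + k·u = f(t); it is a generic textbook sketch, not the special-purpose program described above.

```python
import numpy as np

def newmark_sdof(m, c, k, force, u0, v0, dt, nsteps, beta=0.25, gamma=0.5):
    """Implicit Newmark (average acceleration) history for m u'' + c u' + k u = f(t)."""
    u = np.zeros(nsteps); v = np.zeros(nsteps); a = np.zeros(nsteps)
    u[0], v[0] = u0, v0
    a[0] = (force(0.0) - c * v0 - k * u0) / m
    keff = k + gamma * c / (beta * dt) + m / (beta * dt ** 2)  # effective stiffness
    for i in range(nsteps - 1):
        t = (i + 1) * dt
        # effective load: external force plus terms carrying the previous state
        rhs = (force(t)
               + m * (u[i] / (beta * dt ** 2) + v[i] / (beta * dt)
                      + (0.5 / beta - 1) * a[i])
               + c * (gamma * u[i] / (beta * dt) + (gamma / beta - 1) * v[i]
                      + dt * (0.5 * gamma / beta - 1) * a[i]))
        u[i + 1] = rhs / keff
        a[i + 1] = ((u[i + 1] - u[i]) / (beta * dt ** 2)
                    - v[i] / (beta * dt) - (0.5 / beta - 1) * a[i])
        v[i + 1] = v[i] + dt * ((1 - gamma) * a[i] + gamma * a[i + 1])
    return u, v, a

# free vibration check: m=1, k=4 gives u(t) = cos(2t), period pi
u, v, a = newmark_sdof(1.0, 0.0, 4.0, lambda t: 0.0, 1.0, 0.0, 0.01, 316)
```

In the multi-degree-of-freedom case described in the abstract, the scalar effective stiffness becomes the combined mass/stiffness matrix assembled from the linked substructures, re-solved whenever contact non-linearities change the system.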
Holzwarth, N.A.; Matthews, G.E.; Dunning, R.B.; Tackett, A.R.; Zeng, Y.
1997-01-01
The projector augmented-wave (PAW) method was developed by Bloechl as a method to accurately and efficiently calculate the electronic structure of materials within the framework of density-functional theory. It contains the numerical advantages of pseudopotential calculations while retaining the physics of all-electron calculations, including the correct nodal behavior of the valence-electron wave functions and the ability to include upper core states in addition to valence states in the self-consistent iterations. It uses many of the same ideas developed by Vanderbilt in his "soft pseudopotential" formalism and in earlier work by Bloechl on his "generalized separable potentials," and has been successfully demonstrated for several interesting materials. We have developed a version of the PAW formalism for general use in structural and dynamical studies of materials. In the present paper, we investigate the accuracy of this implementation in comparison with corresponding results obtained using pseudopotential and linearized augmented-plane-wave (LAPW) codes. We present results of calculations for the cohesive energy, equilibrium lattice constant, and bulk modulus for several representative covalent, ionic, and metallic materials including diamond, silicon, SiC, CaF2, fcc Ca, and bcc V. With the exception of CaF2, for which core-electron polarization effects are important, the structural properties of these materials are represented equally well by the PAW, LAPW, and pseudopotential formalisms. copyright 1997 The American Physical Society
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters along with an appendix containing background material the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
Javier Hernádez Benítez
2012-12-01
In the design phase of bubble-column-type three-phase reactors (CBT), the distribution of solids within the reactor is required. This distribution satisfies a second-order ordinary differential equation (ODE) with boundary conditions, developed by D. R. Cova [2] and later by D. N. Smith and J. A. Ruether [8]. Some elements of this equation are given by correlations that depend on certain parameters that are unknown but may be obtained from experimental data. The methodology used to determine these parameters is the piecewise-linear underestimation developed by O. L. Mangasarian, J. B. Rosen and M. E. Thompson.
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
Yamamoto, Masahiro
2003-06-01
This book is a valuable introduction to inverse problems. In particular, from the educational point of view, the author addresses the questions of what constitutes an inverse problem and how and why we should study them. Such an approach has been eagerly awaited for a long time. Professor Groetsch, of the University of Cincinnati, is a world-renowned specialist in inverse problems, in particular the theory of regularization. Moreover, he has made a remarkable contribution to educational activities in the field of inverse problems, which was the subject of his previous book (Groetsch C W 1993 Inverse Problems in the Mathematical Sciences (Braunschweig: Vieweg)). For this reason, he is one of those best qualified to write an introductory book on inverse problems. Without question, inverse problems are important and necessary, and they appear in many contexts. So it is crucial to introduce students to exercises in inverse problems. However, there are not many introductory books which are directly accessible by students in the first two undergraduate years. As a consequence, students often encounter diverse concrete inverse problems before becoming aware of their general principles. The main purpose of this book is to present activities to allow first-year undergraduates to learn inverse theory. To my knowledge, this book is a rare attempt to do this and, in my opinion, a great success. The author emphasizes that it is very important to teach inverse theory in the early years. He writes: 'If students consider only the direct problem, they are not looking at the problem from all sides .... The habit of always looking at problems from the direct point of view is intellectually limiting ...' (page 21). The book is very carefully organized so that teachers will be able to use it as a textbook. After an introduction in chapter 1, successive chapters deal with inverse problems in precalculus, calculus, differential equations and linear algebra. In order to let one gain some insight
D. Galán Martínez
2000-07-01
One of the mathematical tools most used in engineering for the study of so-called sampled-data control systems is the Z transform. As an operational method, the Z transform can be used to solve the finite difference equations which describe the dynamics of sampled-data control systems; it plays a role similar to that of the Laplace transform in the analysis of continuous-time control systems. The objective of the present work is the construction of a digital computer program, using the mathematical assistant DERIVE, for determining the inverse Z transform of a rational algebraic function, which mathematically models the linear sampled-data control systems that appear very frequently in the study of engineering processes. Key words: algorithm, Z transform, DERIVE, rational algebraic function, mathematical model.
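The inverse Z transform of a rational function can be computed numerically by power-series long division, which is the idea behind such a program. A minimal sketch (not the DERIVE program described above; the coefficients are illustrative):

```python
def inverse_z(b, a, n_terms=8):
    """Power-series (long-division) inverse Z transform of B(z)/A(z).
    b, a: coefficient lists in ascending powers of z^-1, with a[0] != 0.
    Returns the first n_terms samples x[0], x[1], ... of the sequence."""
    x = []
    for n in range(n_terms):
        bn = b[n] if n < len(b) else 0.0
        # recursion from A(z) X(z) = B(z):  sum_k a[k] x[n-k] = b[n]
        acc = sum(a[k] * x[n - k] for k in range(1, min(n, len(a) - 1) + 1))
        x.append((bn - acc) / a[0])
    return x

# X(z) = 1 / (1 - 0.5 z^-1)  ->  x[n] = 0.5^n
print(inverse_z([1.0], [1.0, -0.5]))
```

The same recursion underlies the impulse response of an IIR digital filter, which is why the sequence matches the known closed form 0.5ⁿ here.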
Ronchin, Erika; Masterlark, Timothy; Dawson, John; Saunders, Steve; Martí Molist, Joan
2015-04-01
In this study, we present a method to fully integrate a family of finite element models (FEMs) into the regularized linear inversion of InSAR data collected at Rabaul caldera (PNG) between February 2007 and December 2010. During this period the caldera experienced a long-term steady subsidence that characterized surface movement both inside the caldera and outside, on its western side. The inversion is based on an array of FEM sources, in the sense that the Green's function matrix is a library of forward numerical displacement solutions generated by the sources of an array common to all FEMs. Each entry of the library is the LOS surface displacement generated by injecting a unit mass of fluid, of known density and bulk modulus, into a different source cavity of the array for each FEM. By using FEMs, we take advantage of their capability of including topography and heterogeneous distributions of elastic material properties. All FEMs of the family share the same mesh, in which only one source is activated at a time by removing the corresponding elements and applying the unit fluid flux. The domain therefore only needs to be discretized once. This avoids remeshing for each activated source, thus reducing computational requirements, often a downside of FEM-based inversions. Without imposing an a priori source, the method allows us to identify, from a least-squares standpoint, a complex distribution of fluid flux (or change in pressure) with a 3D free geometry within the source array, as dictated by the data. The results of applying the proposed inversion to Rabaul InSAR data show a shallow magmatic system under the caldera made of two interconnected lobes located at the two opposite sides of the caldera. These lobes could be consistent with feeding reservoirs of the ongoing Tavurvur volcano eruption of andesitic products, on the eastern side, and of the past Vulcan volcano eruptions of more evolved materials, on the western side. The interconnection and
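The regularized linear inversion step described above, with a precomputed library of unit-source responses forming the Green's function matrix, reduces to damped least squares. A minimal sketch with a random stand-in matrix (not the Rabaul FEM library; all sizes and values are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
n_data, n_src = 120, 30
G = rng.normal(size=(n_data, n_src))      # stand-in library: LOS response per unit flux
m_true = np.zeros(n_src)
m_true[[4, 17]] = [1.0, -0.5]             # two active sources
d = G @ m_true + 0.01 * rng.normal(size=n_data)   # noisy synthetic data

lam = 0.1                                 # Tikhonov regularization strength
# damped least squares: m = (G^T G + lam I)^-1 G^T d
m_est = np.linalg.solve(G.T @ G + lam * np.eye(n_src), G.T @ d)
print(np.abs(m_est - m_true).max())       # small recovery error
```

The regularization term stabilizes the solve when the source array is dense relative to the data, at the cost of a small bias in the recovered fluxes.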
中沢, 喜昌
1989-01-01
We gave linear algebra lessons to fifth-grade students as an elective subject and analyzed, judging from the results of questionnaires and tests, to what extent the students understood linear algebra. It showed that they are good at problems involving calculation, such as inverse matrices, simultaneous linear equations, and eigenvalue problems, and that, on the contrary, they find it difficult to understand abstract notions like linear spaces and linear maps.
G. Wollenberg
2004-01-01
An interconnection system whose loads are protected by a voltage suppressor and a low-pass filter against overvoltages caused by the coupling of pulse-shaped electromagnetic waves is analyzed. The external wave influencing the system is assumed to be a plane wave of HPM form. The computation is performed with a full-wave PEEC model of the interconnection structure incorporated in the SPICE code. Thus, nonlinear elements of the protection circuit can be included in the calculation. The analysis shows intermodulation distortions and penetration of low-frequency interference caused by intermodulation through the protection circuits. The example examined shows the necessity of using full-wave models for interconnections together with non-linear circuit solvers for simulating noise immunity in systems protected by nonlinear devices.
Support minimized inversion of acoustic and elastic wave scattering
Safaeinili, A.
1994-01-01
This report discusses the following topics on support minimized inversion of acoustic and elastic wave scattering: minimum support inversion; forward modelling of elastodynamic wave scattering; minimum support linearized acoustic inversion; support minimized nonlinear acoustic inversion without absolute phase; and support minimized nonlinear elastic inversion.
Libotte, Rafael Barbosa; Alves Filho, Hermes; Oliva, Amaury Muñoz
2017-01-01
The physical phenomenon of transport of neutral particles in a host medium is of interest in various scientific applications, e.g., nuclear reactors, shielding calculations, radiological protection, nuclear medicine, agronomy, materials science, oil prospecting, etc. In all these areas there is a need for an accurate description of the transport of the particles in the host medium. In this class of applications are the neutron shielding problems, also referred to as 'fixed-source' problems, where the interaction of the particles with the medium does not produce new neutrons, i.e., the medium is non-multiplicative. In this context, the development of tools that model these problems is relevant and of benefit to society. In this work, we propose the development of deterministic mathematical and computational modeling of neutron transport using the linearized Boltzmann equation applied to neutron shielding problems. We also present the development of a spectral nodal method (coarse mesh) that treats the scattering phenomenon as linearly anisotropic. We show results using a computational application developed in Java, version 1.8.0_91
Inverse scale space decomposition
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex, even, and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...
Le Thanh Xuan; Nguyen Thi Cam Thu; Tran Van Nghia; Truong Thi Hong Loan; Vo Thanh Nhon
2015-01-01
The dose distribution calculation is one of the major steps in radiotherapy. In this paper the Monte Carlo code MCNP5 has been applied to simulate 15 MV photon beams emitted from a linear accelerator for a lung cancer case at the General Hospital of Kien Giang. The settings for beam directions, field sizes and isocenter position used in MCNP5 must be the same as those in the treatment plan at the hospital to ensure that the results from MCNP5 are accurate. We also built a program, CODIM, using the MATLAB® programming software. This program was used to construct a patient model from lung CT images obtained from cancer treatment cases at the General Hospital of Kien Giang, and the MCNP5 code was then used to simulate the delivered dose in the patient. The results from MCNP5 show a difference of 5% in comparison with the Prowess Panther program, a semi-empirical simulation program which is being used for treatment planning at the General Hospital of Kien Giang. The success of the work will help the planners to verify the patient dose distribution calculated from the treatment planning program being used at the hospital. (author)
Queralt, R. (Junta de Saneamientos. Generalidad de Cataluña (Spain))
1993-03-01
The calculation of the investment involved in purifying industrial waste water poses certain problems, since it must be tackled either by employing complicated methods which require a great deal of data or, as the sole alternative, through subjective estimates. The present article proposes an intermediate system based on simplified formulas for which it is only necessary to know three parameters, namely (in the majority of cases) the industrial activity, the flow and the C.O.D. (Author)
Adaptive regularization of noisy linear inverse problems
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: the expectation of the regularization function takes the same value in the posterior and in the prior distribution. We present three examples: two simulations, and an application in fMRI neuroimaging.
Isomorphs in the phase diagram of a model liquid without inverse power law repulsion
Veldhorst, Arnold Adriaan; Bøhling, Lasse; Dyre, J. C.
2012-01-01
scattering function are calculated. The results are shown to reflect a hidden scale invariance; despite its exponential repulsion, the Buckingham potential is well approximated by an inverse power law plus a linear term in the region of the first peak of the radial distribution function. As a consequence, the dynamics of the viscous Buckingham liquid is mimicked by a corresponding model with purely repulsive inverse-power-law interactions. The results presented here closely resemble earlier results for Lennard-Jones-type liquids, demonstrating that the existence of strong correlations and isomorphs does not depend critically on the mathematical form of the repulsion being an inverse power law.
Inverse and Ill-posed Problems Theory and Applications
Kabanikhin, S I
2011-01-01
The text demonstrates methods for proving the existence (if at all) and for finding solutions of inverse and ill-posed problems in linear algebra, integral and operator equations, integral geometry, spectral inverse problems, and inverse scattering problems. Comprehensive background material is given for linear ill-posed problems and for coefficient inverse problems for hyperbolic, parabolic, and elliptic equations. Many examples of inverse problems from physics, geophysics, biology, medicine, and other areas of application of mathematics are included.
Subhash, P V; Madhavan, S; Chaturvedi, S
2008-01-01
Two-dimensional (2D) magneto-hydrodynamic (MHD) liner-on-plasma computations have been performed to study the growth of instabilities in a magnetized target fusion system involving the cylindrical compression of an inverse Z-pinch target plasma by a metallic liner. The growth of modes in the plasma can be divided into two phases. During the first phase, the plasma continues to be Kadomtsev stable. The dominant mode in the liner instability is imposed upon the plasma in the form of a growing perturbation. This mode further transfers part of its energy to its harmonics. During the second phase, however, non-uniform implosion of the liner leads to axial variations in plasma quantities near the liner-plasma interface, such that certain regions of the plasma locally violate the Kadomtsev criteria. Further growth ofthe plasma modes is then due to plasma instability. The above numerical study has been complemented with a linear stability analysis for the plasma, the boundary conditions for this analysis being obtained from the liner-on-plasma simulation. The stability of axisymmetric modes in the first phase is found to satisfy the Kadomtsev condition Q 0 1 modes, using equilibrium profiles from the 2D MHD study, shows that their growth rates can exceed those for m=0 by as much as an order of magnitude
Lamhasni, T; Ait Lyazidi, S; Hnach, M; Haddad, M; Desmaële, D; Spanget-Larsen, J; Nguyen, D D; Ducasse, L
2013-09-01
The photophysical properties of the antiviral 7-nicotinoyl-styrylquinoline (MB96) were investigated by means of UV-Vis linear dichroism (LD) spectroscopy on molecular samples aligned in stretched polyvinylalcohol (PVA), supported by time-dependent density functional theory (TD-DFT) calculations. Experimentally, the directions of the transition moments with respect to the long axis of the molecule were deduced from the orientation K factors, determined by means of a "trial-and-error" procedure. The absorption spectrum consists of two parts. The main transition in the lowest-energy part, observed around 365 nm and showing the highest K value of 0.8, is longitudinally in-plane polarized. The highest-energy part, which extends between 230 and 320 nm and is broad, diffuse, and of weak intensity, shows estimated K values between 0.2 and 0.5. This complex structure is transversally polarized, with some contamination by the longitudinal character of the first strong band. The TD-DFT results agree fairly well with the LD measurements. Copyright © 2013 Elsevier B.V. All rights reserved.
Caballero G, C. A.; Plascencia, J. C.; Vargas V, M. X.; Toledo J, P.
2010-09-01
Helical tomotherapy is an external, intensity-modulated, image-guided radiotherapy system in which the radiation is delivered to the patient using a narrow radiation beam in helical form, in a way similar to the scanning process of a computed tomography. The tomotherapy equipment (TomoTherapy Hi-Art) consists of an electron linear accelerator, with acceleration voltages of 6 MV for treatment and 3.5 MV for imaging, coupled to a ring that turns around the patient as the patient is translated through this ring perpendicularly to the radiation beam. The radiation beam is narrow, with a maximum size of 5 × 40 cm² at the isocenter. The intensity modulation of the beam is carried out with a binary dynamic collimator of 64 interleaved leaves, and image guidance is provided through a megavoltage computed tomography system. Opposite the radiation beam, also coupled to the rotating ring, is a set of lead plates with a total thickness of 13 cm that acts as a barrier for the primary radiation beam. The special configuration of the tomotherapy equipment gives it the following characteristics: 1) the lead barrier of the equipment considerably reduces the intensity of the primary beam that reaches the bunker walls; 2) the scattered and leakage radiation is increased relative to a conventional accelerator, due to the longer irradiation time needed to produce intensity-modulated fields with the narrow radiation beam. These special characteristics of the tomotherapy equipment mean that particularities arise in applying the structural shielding calculation formulations of reports NCRP 49, NCRP 151 and IAEA-SRS-47. For this reason, several researchers have developed analytic models, based on geometric considerations of continuous rotation of the equipment ring, to determine the shielding requirements for the primary beam and the scattered and leakage radiation in tomotherapy
Desesquelles, P.
1997-01-01
Computer Monte Carlo simulations occupy an increasingly important place between theory and experiment. This paper introduces a global protocol for the comparison of model simulations with experimental results. The correlated distributions of the model parameters are determined using an original recursive inversion procedure. Multivariate analysis techniques are used in order to optimally synthesize the experimental information with a minimum number of variables. This protocol is relevant in all fields of physics dealing with event generators and multi-parametric experiments. (authors)
Automatic differentiation in geophysical inverse problems
Sambridge, M.; Rickwood, P.; Rawlinson, N.; Sommacal, S.
2007-07-01
Automatic differentiation (AD) is the technique whereby output variables of a computer code evaluating any complicated function (e.g. the solution to a differential equation) can be differentiated with respect to the input variables. Often AD tools take the form of source-to-source translators and produce computer code without the need for deriving and hand-coding explicit mathematical formulae by the user. The power of AD lies in the fact that it combines the generality of finite difference techniques and the accuracy and efficiency of analytical derivatives, while at the same time eliminating 'human' coding errors. It also provides the possibility of accurate, efficient derivative calculation from complex 'forward' codes where no analytical derivatives are possible and finite difference techniques are too cumbersome. AD is already having a major impact in areas such as optimization, meteorology and oceanography. Similarly it has considerable potential for use in non-linear inverse problems in geophysics where linearization is desirable, or for sensitivity analysis of large numerical simulation codes, for example, wave propagation and geodynamic modelling. At present, however, AD tools appear to be little used in the geosciences. Here we report on experiments using a state-of-the-art AD tool to perform source-to-source code translation in a range of geoscience problems. These include calculating derivatives for Gibbs free energy minimization, seismic receiver function inversion, and seismic ray tracing. Issues of accuracy and efficiency are discussed.
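The idea behind forward-mode AD can be illustrated with dual numbers, which propagate exact derivatives through arithmetic operations. This is a toy sketch of the principle, not one of the source-to-source tools discussed above:

```python
class Dual:
    """Dual number (val, dot): carries a value and its derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        # product rule: (uv)' = u'v + uv'
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def f(x):
    # any composite of + and * is differentiated exactly, no finite differences
    return 3 * x * x + 2 * x + 1

d = f(Dual(2.0, 1.0))   # seed dx/dx = 1
print(d.val, d.dot)     # f(2) = 17.0, f'(2) = 6*2 + 2 = 14.0
```

Real AD tools extend this mechanically to every operation in a large code, which is what makes derivative generation from complex forward models feasible.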
Inverse kinematics of OWI-535 robotic arm
DEBENEC, PRIMOŽ
2015-01-01
The thesis aims to calculate the inverse kinematics of the OWI-535 robotic arm. The inverse kinematics calculation determines the joint parameters that produce the desired pose of the end effector. The pose consists of position and orientation; however, we focus only on the latter. Due to limitations of the arm, we have devised our own method for calculating the inverse kinematics. At first we derived it only theoretically, and then we transferred the derivation into...
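For a planar two-link arm, the inverse kinematics has a closed-form solution; the sketch below illustrates the general idea (the OWI-535 itself has more joints, and the thesis's own derivation is not reproduced here):

```python
import math

def ik_2link(x, y, l1=1.0, l2=1.0):
    """Analytic inverse kinematics of a planar 2-link arm.
    Returns joint angles (t1, t2) placing the end effector at (x, y)."""
    c2 = (x * x + y * y - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if abs(c2) > 1:
        raise ValueError("target out of reach")
    t2 = math.acos(c2)   # elbow-down branch; -t2 gives the elbow-up solution
    t1 = math.atan2(y, x) - math.atan2(l2 * math.sin(t2),
                                       l1 + l2 * math.cos(t2))
    return t1, t2

t1, t2 = ik_2link(1.2, 0.8)
# forward kinematics check: the angles reproduce the requested position
fx = math.cos(t1) + math.cos(t1 + t2)
fy = math.sin(t1) + math.sin(t1 + t2)
print(round(fx, 6), round(fy, 6))  # 1.2 0.8
```

The two branches (elbow-up/elbow-down) show why joint limits, like those of a hobby arm, force a choice of solution.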
Algebraic properties of generalized inverses
Cvetković‐Ilić, Dragana S
2017-01-01
This book addresses selected topics in the theory of generalized inverses. Following a discussion of the “reverse order law” problem and certain problems involving completions of operator matrices, it subsequently presents a specific approach to solving the problem of the reverse order law for {1} -generalized inverses. Particular emphasis is placed on the existence of Drazin invertible completions of an upper triangular operator matrix; on the invertibility and different types of generalized invertibility of a linear combination of operators on Hilbert spaces and Banach algebra elements; on the problem of finding representations of the Drazin inverse of a 2x2 block matrix; and on selected additive results and algebraic properties for the Drazin inverse. In addition to the clarity of its content, the book discusses the relevant open problems for each topic discussed. Comments on the latest references on generalized inverses are also included. Accordingly, the book will be useful for graduate students, Ph...
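The Moore-Penrose inverse, the most familiar generalized inverse, is characterized by the four Penrose conditions, which can be checked numerically (a small illustration with an arbitrary matrix, not an example from the book):

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])          # a non-square matrix with no ordinary inverse
Ap = np.linalg.pinv(A)              # Moore-Penrose pseudoinverse

checks = [
    np.allclose(A @ Ap @ A, A),     # (1) A A+ A = A
    np.allclose(Ap @ A @ Ap, Ap),   # (2) A+ A A+ = A+
    np.allclose((A @ Ap).T, A @ Ap),   # (3) A A+ is Hermitian
    np.allclose((Ap @ A).T, Ap @ A),   # (4) A+ A is Hermitian
]
print(checks)  # [True, True, True, True]
```

A {1}-generalized inverse, the object of the reverse order law discussed above, needs to satisfy only condition (1).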
Inverse Theory for Petroleum Reservoir Characterization and History Matching
Oliver, Dean S.; Reynolds, Albert C.; Liu, Ning
This book is a guide to the use of inverse theory for estimation and conditional simulation of flow and transport parameters in porous media. It describes the theory and practice of estimating properties of underground petroleum reservoirs from measurements of flow in wells, and it explains how to characterize the uncertainty in such estimates. Early chapters present the reader with the necessary background in inverse theory, probability and spatial statistics. The book demonstrates how to calculate sensitivity coefficients and the linearized relationship between models and production data. It also shows how to develop iterative methods for generating estimates and conditional realizations. The text is written for researchers and graduates in petroleum engineering and groundwater hydrology and can be used as a textbook for advanced courses on inverse theory in petroleum engineering. It includes many worked examples to demonstrate the methodologies and a selection of exercises.
Stroescu, Ionut Emanuel; Sørensen, Lasse; Frigaard, Peter Bak
2016-01-01
A non-linear stretching method was implemented for stream function theory to solve wave kinematics for physical conditions close to breaking waves in shallow waters, with wave heights limited by the water depth. The non-linear stretching method proves itself robust, efficient and fast, showing good...
3D CSEM inversion based on goal-oriented adaptive finite element method
Zhang, Y.; Key, K.
2016-12-01
We present a parallel 3D frequency-domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt a dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent of the fine meshing generated around the transmitters and receivers by the adaptive finite element method. Moreover, the unstructured inverse mesh efficiently handles multiple-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernels between the forward and inverse meshes is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels, which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with
Probabilistic inversion in priority setting of emerging zoonoses.
Kurowicka, D.; Bucura, C.; Cooke, R.; Havelaar, A.H.
2010-01-01
This article presents methodology of applying probabilistic inversion in combination with expert judgment in priority setting problem. Experts rank scenarios according to severity. A linear multi-criteria analysis model underlying the expert preferences is posited. Using probabilistic inversion, a
Statistical perspectives on inverse problems
Andersen, Kim Emil
Inverse problems arise in many scientific disciplines and pertain to situations where inference is to be made about a particular phenomenon from indirect measurements. A typical example, arising in diffusion tomography, is the inverse boundary value problem for non-invasive reconstruction of the interior of an object from electrical boundary measurements. One part of this thesis concerns statistical approaches for solving, possibly non-linear, inverse problems. Thus inverse problems are recast in a form suitable for statistical inference. In particular, a Bayesian approach for regularisation is developed, in which the solution of the inverse problem is given in terms of probability distributions. Posterior inference is obtained by Markov chain Monte Carlo methods, and new, powerful simulation techniques based on e.g. coupled Markov chains and simulated tempering are developed to improve the computational efficiency of the overall simulation.
Fournier, K.B.; Goldstein, W.H.; Stutman, D.; Finkenthal, M.; Soukhanovskii, V.; May, M.J.
1999-01-01
We present calculations of the quasi-steady-state gain coefficient for the 4p⁵4d ¹P – 4p⁵5p 1/2[1/2]₀ transition in Kr-like Y IV, Zr V, Nb VI and Mo VII ions. Gain coefficients which can lead to FUV-VUV (≈260 to 60 nm) lasing are found in all ions. Large gain coefficients are found for each ion at temperatures in excess of the ion's equilibrium temperature; realizing lasing in these systems will require a transient excitation mechanism. The density at which the maximal gain coefficient is obtained increases with increasing ionization state. The 4p⁵5s 1/2[1/2]₁ – 4p⁵5p 1/2[1/2]₀ and 4p⁵5s 3/2[3/2]₁ – 4p⁵5p 1/2[1/2]₀ transitions also show population inversion and modest gain coefficients. Attractive features of these ions as potential lasants are the large ratio between the energy of the lasing transition and the excitation energy of the upper level of the lasing transition, as well as the ease with which they are produced in a low-temperature, table-top scale plasma source. (orig.)
MODEL SELECTION FOR SPECTROPOLARIMETRIC INVERSIONS
Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.
2012-01-01
Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles) but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has, sometimes, led to opposing views of the solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
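As a toy illustration of evidence-based model comparison, the Bayesian Information Criterion (BIC) can serve as a cheap proxy for the evidence (an assumption made here for illustration; the paper computes evidence ratios directly, not BICs):

```python
import numpy as np

# Truly linear synthetic data: a more complex model should NOT be favored.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 50)
y = 1.0 + 2.0 * x + 0.05 * rng.normal(size=50)

def bic(degree):
    """BIC of a polynomial fit: n*log(mean squared residual) + k*log(n)."""
    coef = np.polyfit(x, y, degree)
    resid = y - np.polyval(coef, x)
    n, k = len(x), degree + 1
    return n * np.log(np.mean(resid**2)) + k * np.log(n)

print(bic(1), bic(5))  # the simpler (linear) model has the lower BIC
```

The penalty term plays the role of the Occam factor in the evidence: extra parameters must buy a real improvement in fit, mirroring the paper's finding that large signal-to-noise ratios are needed before complex atmospheres are justified.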
Finite-dimensional linear algebra
Gockenbach, Mark S
2010-01-01
Some Problems Posed on Vector Spaces: Linear equations; Best approximation; Diagonalization; Summary. Fields and Vector Spaces: Fields; Vector spaces; Subspaces; Linear combinations and spanning sets; Linear independence; Basis and dimension; Properties of bases; Polynomial interpolation and the Lagrange basis; Continuous piecewise polynomial functions. Linear Operators: Linear operators; More properties of linear operators; Isomorphic vector spaces; Linear operator equations; Existence and uniqueness of solutions; The fundamental theorem; Inverse operators; Gaussian elimination; Newton's method; Linear ordinary differential eq
Shieh, Gwowen; Jan, Show-Li
2015-01-01
The general formulation of a linear combination of population means permits a wide range of research questions to be tested within the context of ANOVA. However, it has been stressed in many research areas that the homogeneous variances assumption is frequently violated. To accommodate the heterogeneity of variance structure, the…
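As a sketch of the kind of heterogeneity-robust contrast test this abstract discusses, the following computes a Welch-type statistic for a linear combination of group means together with the Welch-Satterthwaite approximate degrees of freedom. The data and contrast weights are invented for illustration; they are not from the paper.

```python
import math

def welch_contrast(groups, c):
    # Welch-type statistic for the contrast sum(c_i * mu_i) under
    # heterogeneous variances, with Welch-Satterthwaite approximate df.
    means = [sum(g) / len(g) for g in groups]
    vars_ = [sum((x - m) ** 2 for x in g) / (len(g) - 1)
             for g, m in zip(groups, means)]
    ns = [len(g) for g in groups]
    est = sum(ci * mi for ci, mi in zip(c, means))
    terms = [ci * ci * vi / ni for ci, vi, ni in zip(c, vars_, ns)]
    se = math.sqrt(sum(terms))
    df = sum(terms) ** 2 / sum(t * t / (n - 1) for t, n in zip(terms, ns))
    return est / se, df

# Two groups with strongly unequal variances: the approximate df falls
# well below the classical pooled df of n1 + n2 - 2 = 8.
t_stat, df = welch_contrast([[1, 2, 3, 4, 5], [10, 20, 30, 40, 50]], [1, -1])
```

The downward-adjusted degrees of freedom are exactly the accommodation of variance heterogeneity that the abstract refers to.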
Stjernschantz, E.M.; Marelius, J.; Medina, C.; Jacobsson, M.; Vermeulen, N.P.E.; Oostenbrink, C.
2006-01-01
An extensive evaluation of the linear interaction energy (LIE) method for the prediction of binding affinity of docked compounds has been performed, with an emphasis on its applicability in lead optimization. An automated setup is presented, which allows for the use of the method in an industrial
Tanwiwat Jaikuna
2017-02-01
Purpose: To develop an in-house software program able to calculate and generate the biological dose distribution and biological dose volume histogram by physical dose conversion using the linear-quadratic-linear (LQL) model. Material and methods: The Isobio software was developed using MATLAB version 2014b to calculate and generate biological dose distributions and biological dose volume histograms. The physical dose from each voxel in the treatment plan was extracted through the Computational Environment for Radiotherapy Research (CERR), and the accuracy was verified by comparing the dose volume histogram from CERR with that of the treatment planning system. The equivalent dose in 2 Gy fractions (EQD2) was calculated from the biologically effective dose (BED) based on the LQL model. The software calculation and a manual calculation were compared for EQD2 verification with a paired t-test using IBM SPSS Statistics version 22 (64-bit). Results: Two- and three-dimensional biological dose distributions and biological dose volume histograms were displayed correctly by the Isobio software. Differences in physical dose between CERR and the treatment planning system (TPS) were 3.33% in the high-risk clinical target volume (HR-CTV) determined by D90%, 0.56% in the bladder and 1.74% in the rectum determined by D2cc for Oncentra, and less than 1% for Pinnacle. The difference in EQD2 between the software and manual calculations was not statistically significant (mean difference 0.00%; p = 0.820, 0.095 and 0.593 for external beam radiation therapy (EBRT) and 0.240, 0.320 and 0.849 for brachytherapy (BT) in the HR-CTV, bladder and rectum, respectively). Conclusions: The Isobio software is a feasible tool for generating biological dose distributions and biological dose volume histograms for treatment plan evaluation in both EBRT and BT.
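The BED-to-EQD2 conversion at the core of this abstract is a short formula. The sketch below shows only the standard linear-quadratic (LQ) part; the paper's LQL model additionally switches to a linear tail above a transition dose per fraction, which is omitted here. The alpha/beta values in the usage line are generic textbook choices, not taken from the paper.

```python
def bed(total_dose, dose_per_fraction, alpha_beta):
    # Biologically effective dose from the standard LQ model:
    # BED = D * (1 + d / (alpha/beta))
    return total_dose * (1.0 + dose_per_fraction / alpha_beta)

def eqd2(total_dose, dose_per_fraction, alpha_beta):
    # Equivalent dose in 2 Gy fractions: EQD2 = BED / (1 + 2 / (alpha/beta))
    return bed(total_dose, dose_per_fraction, alpha_beta) / (1.0 + 2.0 / alpha_beta)

# 50 Gy in 2 Gy fractions is its own EQD2; a hypofractionated 30 Gy in
# 3 Gy fractions (alpha/beta = 3) is biologically hotter than 30 Gy.
d1 = eqd2(50.0, 2.0, 10.0)
d2 = eqd2(30.0, 3.0, 3.0)
```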
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, the assumed fault geometry and velocity structure, and the chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Chi-Chang Wang
2013-09-01
This paper uses the proposed residual correction method in coordination with the monotone iterative technique to obtain upper and lower approximate solutions of singularly perturbed non-linear boundary value problems. First, the monotonicity of the non-linear differential equation is reinforced using the monotone iterative technique; then the cubic-spline method is applied to discretize and convert the differential equation into a mathematical programming problem with inequality constraints; finally, based on the residual correction concept, the complex constrained solution problem is transformed into a simpler iterative equation-solving problem. As verified by the four examples given in this paper, the proposed method can quickly obtain upper and lower solutions of problems of this kind and easily identify the error range between mean approximate solutions and exact solutions.
An accurate solver for forward and inverse transport
Monard, Francois; Bal, Guillaume
2010-01-01
This paper presents a robust and accurate way to solve steady-state linear transport (radiative transfer) equations numerically. Our main objective is to address the inverse transport problem, in which the optical parameters of a domain of interest are reconstructed from measurements performed at the domain's boundary. This inverse problem has important applications in medical and geophysical imaging, and more generally in any field involving high frequency waves or particles propagating in scattering environments. Stable solutions of the inverse transport problem require that the singularities of the measurement operator, which maps the optical parameters to the available measurements, be captured with sufficient accuracy. This in turn requires that the free propagation of particles be calculated with care, which is a difficult problem on a Cartesian grid. A standard discrete ordinates method is used for the direction of propagation of the particles. Our methodology to address spatial discretization is based on rotating the computational domain so that each direction of propagation is always aligned with one of the grid axes. Rotations are performed in the Fourier domain to achieve spectral accuracy. The numerical dispersion of the propagating particles is therefore minimal. As a result, the ballistic and single scattering components of the transport solution are calculated robustly and accurately. Physical blurring effects, such as small angular diffusion, are also incorporated into the numerical tool. Forward and inverse calculations performed in a two-dimensional setting exemplify the capabilities of the method. Although the methodology might not be the fastest way to solve transport equations, its physical accuracy provides us with a numerical tool to assess what can and cannot be reconstructed in inverse transport theory.
Cheng, Heming; Huang, Xieqing; Fan, Jiang; Wang, Honggang
1999-10-01
The calculation of the temperature field has a great influence on the analysis of thermal stresses and strains during quenching. In this paper, a 42CrMo steel cylinder is used as an example for investigation. From the TTT diagram of the 42CrMo steel, the CCT diagram was simulated by mathematical transformation, and the volume fractions of the phase constituents were calculated. The thermal physical properties were treated as functions of temperature and of the volume fractions of the phase constituents. A rational approximation was applied to the finite element method. The temperature field with phase transformation and non-linear surface heat-transfer coefficients was calculated using this technique, which effectively avoids oscillation in the numerical solution for small time steps. The experimental results of the temperature field coincide with the numerical solutions.
Yi, Jun; Yang, Wenhong; Sun, Wen-Hua; Nomura, Kotohiro; Hada, Masahiko
2017-11-30
The NMR chemical shifts of vanadium (⁵¹V) in (imido)vanadium(V) dichloride complexes with imidazolin-2-iminato and imidazolidin-2-iminato ligands were calculated by the density functional theory (DFT) method with GIAO. The calculated ⁵¹V NMR chemical shifts were analyzed by the multiple linear regression (MLR) analysis (MLRA) method with a series of calculated molecular properties. Some of the calculated NMR chemical shifts were incorrect when the optimized molecular geometries of the X-ray structures were used. After the global minimum geometries of all of the molecules were determined, the trend of the observed chemical shifts was well reproduced by the present DFT method. The MLRA method was applied to investigate the correlation between the ⁵¹V NMR chemical shift and the natural charge, band energy gap, and Wiberg bond index of the V=N bond. The ⁵¹V NMR chemical shifts obtained with the present MLR model reproduced the observations well, with a correlation coefficient of 0.97.
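The MLR step in this abstract is ordinary least squares of an observed quantity against several computed descriptors. A minimal self-contained sketch, solving the normal equations with Gaussian elimination, is shown below; the data are synthetic (an exact linear relation), not the paper's shifts or descriptors.

```python
def solve(A, b):
    # Gaussian elimination with partial pivoting for small dense systems
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def mlr_fit(X, y):
    # Ordinary least squares: minimize ||y - (b0 + X b)||^2 via the
    # normal equations (X^T X) b = X^T y, with an intercept column.
    Xa = [[1.0] + list(row) for row in X]
    p = len(Xa[0])
    XtX = [[sum(r[i] * r[j] for r in Xa) for j in range(p)] for i in range(p)]
    Xty = [sum(r[i] * yi for r, yi in zip(Xa, y)) for i in range(p)]
    return solve(XtX, Xty)

# Recover y = 2 + 3*x1 - x2 from exact synthetic data
X = [[0, 0], [1, 0], [0, 1], [1, 1], [2, 1]]
y = [2.0, 5.0, 1.0, 4.0, 7.0]
coeffs = mlr_fit(X, y)
```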
Numerical computation of FCT equilibria by inverse equilibrium method
Tokuda, Shinji; Tsunematsu, Toshihide; Takeda, Tatsuoki
1986-11-01
FCT (Flux Conserving Tokamak) equilibria were obtained numerically by the inverse equilibrium method. The high-beta tokamak ordering was used to get the explicit boundary conditions for FCT equilibria. The partial differential equation was reduced to the simultaneous quasi-linear ordinary differential equations by using the moment method. The regularity conditions for solutions at the singular point of the equations can be expressed correctly by this reduction and the problem to be solved becomes a tractable boundary value problem on the quasi-linear ordinary differential equations. This boundary value problem was solved by the method of quasi-linearization, one of the shooting methods. Test calculations show that this method provides high-beta tokamak equilibria with sufficiently high accuracy for MHD stability analysis. (author)
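The quasi-linearization used in this abstract is one member of the shooting family for two-point boundary value problems. As a generic illustration of the idea (not the authors' equilibrium equations), the sketch below solves the toy BVP y'' = y, y(0) = 0, y(1) = 1 by RK4 integration plus secant iteration on the unknown initial slope.

```python
import math

def integrate(s, n=1000):
    # RK4 for y'' = y written as (y, v)' = (v, y), from y(0)=0, y'(0)=s;
    # returns y(1).
    h = 1.0 / n
    y, v = 0.0, s
    for _ in range(n):
        k1y, k1v = v, y
        k2y, k2v = v + h / 2 * k1v, y + h / 2 * k1y
        k3y, k3v = v + h / 2 * k2v, y + h / 2 * k2y
        k4y, k4v = v + h * k3v, y + h * k3y
        y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
        v += h / 6 * (k1v + 2 * k2v + 2 * k3v + k4v)
    return y

def shoot(target=1.0):
    # Secant iteration on the initial slope s so that y(1) = target
    s0, s1 = 0.0, 1.0
    f0, f1 = integrate(s0) - target, integrate(s1) - target
    for _ in range(50):
        if f1 == f0 or abs(f1) < 1e-10:
            break
        s0, s1, f0 = s1, s1 - f1 * (s1 - s0) / (f1 - f0), f1
        f1 = integrate(s1) - target
    return s1
```

For this linear toy problem the exact slope is 1/sinh(1), and the secant step lands on it in one iteration; for the quasi-linear ODE systems of the paper the iteration would instead be driven by the linearized equations.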
de Assis, Thiago A.; Dall’Agnol, Fernando F.
2018-05-01
Numerical simulations are important when assessing the many characteristics of field emission related phenomena. In small simulation domains, the electrostatic effect from the boundaries is known to influence the calculated apex field enhancement factor (FEF) of the emitter, but no established dependence has been reported to date. In this work, we report the dependence of the apex-FEF of a single conducting ellipsoidal emitter on the lateral size, L, and the height, H, of the simulation domain. Firstly, we analyze the error, ε, in the calculation of the apex-FEF as a function of H and L. Importantly, our results show that the effects of H and L on ε are scale invariant, allowing one to predict ε for given ratios L/h and H/h, where h is the height of the emitter. Next, we analyze the fractional change of the apex-FEF, δ, between a single emitter and a pair. We show that small relative errors in the apex-FEF, due to the finite domain size, are sufficient to alter the functional dependence δ(c), where c is the distance between the emitters in the pair. We show that δ(c) obeys a recently proposed power law decay (Forbes 2016 J. Appl. Phys. 120 054302) at sufficiently large distances in the limit of infinite domain size, which is not observed when using the long-established exponential decay (Bonard et al 2001 Adv. Mater. 13 184) or a more sophisticated fitting formula proposed recently by Harris et al (2015 AIP Adv. 5 087182). We show that the inverse-third power law functional dependence is respected for various systems, such as infinite arrays and small clusters of emitters with different shapes. Thus, a power law δ ∝ c⁻ᵐ with m = 3 is suggested to be a universal signature of the charge-blunting effect in small clusters or arrays, at sufficiently large distances between emitters of any shape. These results improve the physical understanding of field electron emission theory, allowing emitters in small clusters or arrays to be accurately characterized.
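Distinguishing a power law from an exponential, as this abstract does, amounts to fitting the decay exponent in log-log coordinates. A minimal sketch with synthetic data following an exact inverse-third power law (invented numbers, not the paper's simulations):

```python
import math

def loglog_slope(cs, deltas):
    # Exponent m of a power law delta = A * c**(-m), from the least-squares
    # slope of log(delta) versus log(c): slope = -m.
    xs = [math.log(c) for c in cs]
    ys = [math.log(d) for d in deltas]
    n = len(xs)
    xbar, ybar = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys))
             / sum((x - xbar) ** 2 for x in xs))
    return -slope

# Synthetic delta(c) = c**-3 sampled at doubling distances
m = loglog_slope([1.0, 2.0, 4.0, 8.0], [1.0, 1 / 8, 1 / 64, 1 / 512])
```

An exponential decay would instead appear linear in semi-log coordinates, which is how the two candidate laws can be told apart on simulated data.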
Tsunami waveform inversion by numerical finite-elements Green’s functions
A. Piatanesi
2001-01-01
During the last few years, the steady increase in the quantity and quality of tsunami data has led to increasing interest in the inversion problem for tsunami data. This work addresses the usually ill-posed problem of the hydrodynamical inversion of tsunami tide-gage records to infer the initial sea perturbation. We use an inversion method for which the data space consists of a given number of waveforms and the model parameter space is represented by the values of the initial water elevation field at a given number of points. The forward model, i.e. the calculation of the synthetic tide-gage records from an initial water elevation field, is based on the linear shallow water equations and is solved simply by applying the appropriate Green's functions to the known initial state. The inversion of tide-gage records to determine the initial state then reduces to the least-squares inversion of a rectangular system of linear equations. When the inversions are unconstrained, we found that in order to attain good results, the dimension of the data space has to be much larger than that of the model parameter space. We also show that a large number of waveforms is not sufficient to ensure a good inversion if the corresponding stations do not have good azimuthal coverage with respect to source directivity. To improve the inversions we use the available a priori information on the source, generally coming from the inversion of seismological data. In this paper we show how to implement very common information about a tsunamigenic seismic source, i.e. the earthquake source region, as a set of spatial constraints. The results are very satisfactory, since even a rough localisation of the source enables us to invert the initial elevation field correctly.
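The core operation in this abstract, least-squares inversion of a rectangular linear system G m = d, can be sketched at toy scale. Here m holds just two "initial elevation" parameters and G is a made-up 4x2 matrix standing in for the Green's-function rows; a real application would have thousands of waveform samples and model points.

```python
def lstsq_2param(G, d):
    # Least-squares solution of an overdetermined system G m = d with two
    # unknowns, via the 2x2 normal equations (G^T G) m = G^T d.
    a11 = sum(g[0] * g[0] for g in G)
    a12 = sum(g[0] * g[1] for g in G)
    a22 = sum(g[1] * g[1] for g in G)
    b1 = sum(g[0] * di for g, di in zip(G, d))
    b2 = sum(g[1] * di for g, di in zip(G, d))
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

G = [(1.0, 0.0), (0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]
d = [2.0, -1.0, 1.0, 3.0]   # synthesized from the true model m = (2, -1)
m = lstsq_2param(G, d)
```

With noise-free data the true model is recovered exactly; the ill-posedness the authors describe appears when the data rows are nearly linearly dependent, which is where their spatial constraints help.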
Inverse M-matrices and ultrametric matrices
Dellacherie, Claude; San Martin, Jaime
2014-01-01
The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.
Source Estimation by Full Wave Form Inversion
Sjögreen, Björn [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Petersson, N. Anders [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing
2013-08-07
Given time-dependent ground motion recordings at a number of receiver stations, we solve the inverse problem of estimating the parameters of the seismic source. The source is modeled as a point moment tensor source, characterized by its location, moment tensor components, start time, and the frequency parameter (rise time) of its source time function. In total, there are 11 unknown parameters. We use a nonlinear conjugate gradient algorithm to minimize the full waveform misfit between observed and computed ground motions at the receiver stations. An important underlying assumption of the minimization problem is that the wave propagation is accurately described by the elastic wave equation in a heterogeneous isotropic material. We use a fourth order accurate finite difference method, developed in [12], to evolve the waves forwards in time. The adjoint wave equation corresponding to the discretized elastic wave equation is used to compute the gradient of the misfit, which is needed by the nonlinear conjugate gradient minimization algorithm. A new discretization of the point moment tensor source is derived that guarantees that the Hessian of the misfit is a continuous function of the source location. An efficient approach for calculating the Hessian is also presented. We show how the Hessian can be used to scale the problem to improve the convergence of the nonlinear conjugate gradient algorithm. Numerical experiments are presented for estimating the source parameters from synthetic data in a layer over half-space problem (LOH.1), illustrating rapid convergence of the proposed approach.
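The optimization machinery in this abstract, a nonlinear conjugate gradient method driven by a misfit and its gradient, can be sketched independently of the wave physics. Below is a Fletcher-Reeves variant with Armijo backtracking applied to a toy quadratic misfit; in the paper the misfit and gradient would come from the forward elastic solver and its adjoint.

```python
def fr_cg(f, grad, x, iters=500):
    # Fletcher-Reeves nonlinear conjugate gradient with Armijo backtracking
    g = grad(x)
    d = [-gi for gi in g]
    for _ in range(iters):
        slope = sum(gi * di for gi, di in zip(g, d))
        if slope >= 0:                       # safeguard: restart with steepest descent
            d = [-gi for gi in g]
            slope = -sum(gi * gi for gi in g)
        t, fx = 1.0, f(x)
        while f([xi + t * di for xi, di in zip(x, d)]) > fx + 1e-4 * t * slope:
            t *= 0.5                          # Armijo sufficient-decrease condition
        x = [xi + t * di for xi, di in zip(x, d)]
        g_new = grad(x)
        gg_new = sum(gi * gi for gi in g_new)
        if gg_new < 1e-20:
            break
        beta = gg_new / sum(gi * gi for gi in g)   # Fletcher-Reeves beta
        d = [-gn + beta * di for gn, di in zip(g_new, d)]
        g = g_new
    return x

# Toy quadratic "misfit" with minimum at (3, -1)
f = lambda x: (x[0] - 3) ** 2 + 10 * (x[1] + 1) ** 2
grad = lambda x: [2 * (x[0] - 3), 20 * (x[1] + 1)]
xmin = fr_cg(f, grad, [0.0, 0.0])
```

The Hessian-based scaling the authors describe corresponds to preconditioning this iteration, which mitigates exactly the anisotropy built into the toy misfit above.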
Optimization for nonlinear inverse problem
Boyadzhiev, G.; Brandmayr, E.; Pinat, T.; Panza, G.F.
2007-06-01
The nonlinear inversion of geophysical data in general does not yield a unique solution, but a single model representing the investigated field is preferred for an easy geological interpretation of the observations. The analyzed region is constituted by a number of sub-regions in which the multi-valued nonlinear inversion is applied, leading to a multi-valued solution. Therefore, by combining the values of the solution in each sub-region, many acceptable models are obtained for the entire region, and this complicates the geological interpretation of geophysical investigations. This paper presents new methodologies capable of selecting, among all acceptable models, the one that satisfies different criteria of smoothness in the explored space of solutions. In this work we focus on the non-linear inversion of surface-wave dispersion curves, which gives structural models of shear-wave velocity versus depth, but the basic concepts have general validity. (author)
Sumner, H.M.
1969-03-01
The KDF9/EGDON program ZIP MK 2 is the third of a series of programs for off-line digital computer analysis of dynamic systems: it has been designed specifically to cater for the needs of the design or control engineer in having an input scheme which is minimally computer-oriented. It uses numerical algorithms which are as near fool-proof as the author could discover or devise, and has comprehensive diagnostic sections to help the user in the event of faulty data or machine execution. ZIP MK 2 accepts mathematical models comprising first order linear differential and linear algebraic equations, and from these computes and factorises the transfer functions between specified pairs of output and input variables; if desired, the frequency response may be computed from the computed transfer function. The model input scheme is fully compatible with the frequency response programs FRP MK 1 and MK 2, except that, for ZIP MK 2, transport (time) delays must be converted by the user to Pade or Bode approximations prior to input. ZIP provides the pole-zero plot (or complex-plane analysis), while FRP provides the frequency response and FIFI the time domain analyses. The pole-zero method of analysis has been little used in the past for complex models, especially where transport delays occur, and one of its primary purposes is as a research tool to investigate the usefulness of this method for process plants, whether nuclear, chemical or other continuous processes. (author)
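For a model of first-order linear ODEs small enough to do by hand, the transfer-function poles that ZIP factorises are simply the eigenvalues of the state matrix. A minimal sketch for a two-state system follows; the matrix is an invented damped-oscillator example, not one of ZIP's test models.

```python
import cmath

def poles_2x2(A):
    # Poles of a two-state linear system dx/dt = A x + B u are the
    # eigenvalues of A, i.e. the roots of s^2 - tr(A) s + det(A) = 0.
    tr = A[0][0] + A[1][1]
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    disc = cmath.sqrt(tr * tr - 4 * det)
    return ((tr + disc) / 2, (tr - disc) / 2)

# Damped oscillator x'' + 2x' + 2x = u in companion form
p1, p2 = poles_2x2([[0.0, 1.0], [-2.0, -2.0]])
```

Both poles lie in the left half of the complex plane, which on ZIP's pole-zero plot indicates a stable response.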
Simon-Weidner, J.
1975-05-01
The digital program TIMTEM calculates two-dimensional, nonlinear temperature fields of reactor components of complex structure; inhomogeneity and anisotropy are taken into account. Systems consisting of different materials, and therefore having different temperature- and/or time-dependent material characteristics, are allowed. Various local, time- and/or temperature-dependent boundary conditions can be considered too, which may be locally different from each other or can be interconnected. (orig.)
Lucatero, M.A.; Hernandez L, H.
2003-01-01
The linear heat generation rates (LHGR) for a BWR type generic fuel rod, as function of the burnup that violate the thermomechanical limit of circumferential plastic deformation of the can (canning) in nominal operation in stationary state of the fuel rod are calculated. The evaluation of the LHGR in function of the burnt of the fuel, is carried out under the condition that the deformation values of the circumferential plastic deformation of the can exceeds in 0.1 the thermomechanical value operation limit of 1%. The results of the calculations are compared with the generation rates of linear operation heat in function of the burnt for this fuel rod type. The calculations are carried out with the FEMAXI-V and RODBURN codes. The results show that for exhibitions or burnt between 0 and 16,000 M Wd/tU a minimum margin of 160.8 W/cm exists among LHGR (439.6 W/cm) operation peak for the given fuel and maximum LHGR of the fuel (calculated) to reach 1.1% of circumferential plastic deformation of the can, for the peak factor of power of 1.40. For burnt of 20,000 MWd/tU and 60,000 MWd/tU exist a margin of 150.3 and 298.6 W/cm, respectively. (Author)
Rasouli, Zolaikha; Ghavami, Raouf
2016-08-01
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives as flavor enhancers. The current study for the first time is devote to the application of partial least square (PLS-1), partial robust M-regression (PRM) and feed forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV-spectra with overlapped peaks of individual analytes. Under the optimum experimental conditions, for each compound a linear calibration was obtained in the concentration range of 0.61-20.99 [LOD = 0.12], 0.67-23.19 [LOD = 0.13] and 0.73-25.12 [LOD = 0.15] μg mL- 1 for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combination of a full and fractional factorial designs with the use of the seven and three levels for each factor for binary and ternary mixtures, respectively. The results of this study reveal that both the methods of PLS-1 and PRM are similar in terms of predict ability each binary mixtures. The resolution of ternary mixture has been accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied for the description of spectra from the acid-base titration systems each individual compound, i.e. the resolution of the complex overlapping spectra as well as to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to distinguish the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs has been used to detection active compounds in real and spiked water samples.
Hansen, Bjarke Knud Vilster; Møller, Søren; Spanget-Larsen, Jens
2006-01-01
Vanillin (VA), vanillic acid (VAI) and syringaldehyde (SIA) are important food additives used as flavor enhancers. The current study is, for the first time, devoted to the application of partial least squares (PLS-1), partial robust M-regression (PRM) and feed-forward neural networks (FFNNs) as linear and nonlinear chemometric methods for the simultaneous detection of binary and ternary mixtures of VA, VAI and SIA using data extracted directly from UV spectra with overlapping peaks of the individual analytes. Under the optimum experimental conditions, a linear calibration was obtained for each compound in the concentration ranges of 0.61-20.99 (LOD = 0.12), 0.67-23.19 (LOD = 0.13) and 0.73-25.12 (LOD = 0.15) μg mL⁻¹ for VA, VAI and SIA, respectively. Four calibration sets of standard samples were designed by combining full and fractional factorial designs with seven and three levels for each factor for the binary and ternary mixtures, respectively. The results of this study reveal that the PLS-1 and PRM methods are similar in their ability to predict each binary mixture. The resolution of the ternary mixture was accomplished by FFNNs. Multivariate curve resolution-alternating least squares (MCR-ALS) was applied to describe the spectra from the acid-base titration systems of each individual compound, i.e. to resolve the complex overlapping spectra and to interpret the extracted spectral and concentration profiles of any pure chemical species identified. Evolving factor analysis (EFA) and singular value decomposition (SVD) were used to determine the number of chemical species. Subsequently, their corresponding dissociation constants were derived. Finally, FFNNs were used to detect the active compounds in real and spiked water samples.
Calculation of reactivity using a finite impulse response filter
Suescun Diaz, Daniel [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil); Senra Martinez, Aquilino [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil)], E-mail: aquilino@lmp.ufrj.br; Carvalho Da Silva, Fernando [COPPE/UFRJ, Programa de Engenharia Nuclear, Caixa Postal 68509, CEP 21941-914, RJ (Brazil)
2008-03-15
A new formulation is presented in this paper to solve the inverse kinetics equation. This method is based on the Laplace transform of the point kinetics equations, resulting in an expression equivalent to the inverse kinetics equation as a function of the power history. Reactivity can be written in terms of the summation of convolution with response to impulse, characteristic of a linear system. For its digital form the Z-transform is used, which is the discrete version of the Laplace transform. This new method of reactivity calculation has very special features, amongst which it can be pointed out that the linear part is characterized by a filter named finite impulse response (FIR). The FIR filter will always be, stable and non-varying in time, and, apart from this, it can be implemented in the non-recursive form. This type of implementation does not require feedback, allowing the calculation of reactivity in a continuous way.
Sarkar, P.; Bhattacharyya, S.P.
1995-01-01
The effects of quartic anharmonicity on the quantum dynamics of a linear oscillator with time-dependent force constant (K) or harmonic frequency (ω) are studied both perturbatively and numerically by the time-dependent Fourier grid Hamiltonian method. In the absence of anharmonicity, the ground-state population decreases and the population of an accessible excited state (k = 2.4, 6 ... ) increases with time. However, when anharmonicity is introduced, both the ground- and excited-state populations show typical oscillations. For weak coupling, the population of an accessible excited state at a certain instant of time (short) turns out to be a parabolic function of the anharmonic coupling constant (λ), when all other parameters of the system are kept fixed. This parabolic nature of the excited-state population vs. the λ profile is independent of the specific form of the time dependence of the force constant, K t . However, it depends upon the rate at which K t relaxes. For small anharmonic coupling strength and short time scales, the numerical results corroborate expectations based on the first-order time-dependent perturbative analysis, using a suitably repartitioned Hamiltonian that makes H 0 time-independent. Some of the possible experimental implications of our observations are analyzed, especially in relation to intensity oscillations observed in some charge-transfer spectra in systems in which the dephasing rates are comparable with the time scale of the electron transfer. 21 refs., 7 figs., 1 tab
Ammar Guellab
2018-01-01
Full Text Available We propose an efficient finite difference time domain (FDTD method based on the piecewise linear recursive convolution (PLRC technique to evaluate the human body exposure to electromagnetic (EM radiation. The source of radiation considered in this study is a high-power antenna, mounted on a military vehicle, covering a broad band of frequency (100 MHz–3 GHz. The simulation is carried out using a nonhomogeneous human body model which takes into consideration most of the internal body tissues. The human tissues are modeled by a four-pole Debye model which is derived from experimental data by using particle swarm optimization (PSO. The human exposure to EM radiation is evaluated by computing the local and whole-body average specific absorption rate (SAR for each occupant. The higher in-tissue electric field intensity points are localized, and the SAR values are compared with the crew safety standard recommendations. The accuracy of the proposed PLRC-FDTD approach and the matching of the Debye model with the experimental data are verified in this study.
Hopping absorption edge in silicon inversion layers
Kostadinov, I.Z.
1983-09-01
The low frequency gap observed in the absorption spectrum of silicon inversion layers is related to the AC variable range hopping. The frequency dependence of the absorption coefficient is calculated. (author)
Gesheva-Atanassova, N.; Balabanova, A.
2006-01-01
The purpose of the study is to check the long-term stability of the wedge angle and the wedge factor (WF) of Virtual Wedges for 6 and 18 MV photon beams, and the accuracy of the TPS HELAX-TMS, in order to accept the virtual wedge technique for patient treatment. All measurements (dose profiles, central axis dose distributions and applied dose for pre-calculated monitor units) have been performed in water, using a calibrated 0.3 cm³ ion chamber, a 47-ion-chamber array LA48 and the beam analyzing system MP3. The measured data have been compared with the corresponding planned data. Over a four-year period, the long-term stability checks revealed no changes in the central axis distributions, and variations of the wedge angles are within ±2 deg. The values of the WFs and the differences between calculated and measured dose values are within acceptable limits, except for the 6 MV beam with wedge angle 60 deg and field size 20×20 cm², where the deviation reaches 6.5%. The dose profiles for depths up to 10 cm showed good coincidence. Unacceptable deviations have been found for beam profiles at depth 20 cm and field size 20×20 cm² for both 6 and 18 MV. The Virtual Wedge option of the PRIMUS in combination with HELAX-TMS can be applied with confidence for radiotherapy with wedged beams, except for the combination of a 20×20 cm² field and a 60 deg wedge angle.
Seismic inverse scattering in the downward continuation approach
Stolk, C.C.; de Hoop, M.V.
Seismic data are commonly modeled by a linearization around a smooth background medium in combination with a high frequency approximation. The perturbation of the medium coefficient is assumed to contain the discontinuities. This leads to two inverse problems, first the linearized inverse problem
Sikora, M; Dohm, O; Alber, M
2007-08-07
A dedicated, efficient Monte Carlo (MC) accelerator head model for intensity modulated stereotactic radiosurgery treatment planning is needed to afford a highly accurate simulation of tiny IMRT fields. A virtual source model (VSM) of a mini multi-leaf collimator (MLC) (the Elekta Beam Modulator (EBM)) is presented, allowing efficient generation of particles even for small fields. The VSM of the EBM is based on a previously published virtual photon energy fluence model (VEF) (Fippel et al 2003 Med. Phys. 30 301) commissioned with large-field measurements in air and in water. The original commissioning procedure of the VEF, based on large-field measurements only, leads to inaccuracies for small fields. In order to improve the VSM, it was necessary to modify the VEF model by developing (1) a method to determine the primary photon source diameter, relevant for output factor calculations, (2) a model of the influence of the flattening filter on the secondary photon spectrum and (3) a more realistic primary photon spectrum. The VSM is used to generate the source phase space data above the mini-MLC. The particles are then transmitted through the mini-MLC by a passive filter function, which significantly speeds up the generation of the phase space data below the mini-MLC, used for calculation of the dose distribution in the patient. The improved VSM was commissioned for 6 and 15 MV beams. The results of the MC simulation are in very good agreement with measurements: a local difference of less than 2% between the MC simulation and diamond detector measurements of the output factors in water was achieved. The X, Y and Z profiles measured in water with an ion chamber (V = 0.125 cm³) and a diamond detector were used to validate the models. An overall agreement of 2%/2 mm in high dose regions and 3%/2 mm in low dose regions between measurement and MC simulation was achieved for field sizes from 0.8 × 0.8 cm² to 16 × 21 cm². An IMRT plan film verification
Dennis Raj, A.; Jeeva, M.; Shankar, M.; Venkatesa Prabhu, G.; Vimalan, M.; Vetha Potheher, I.
2017-11-01
2-Naphthol substituted Mannich base 1-(morpholino(phenyl)methyl)naphthalen-2-ol (MPMN), a potential NLO-active organic single crystal, was grown from acetonitrile solvent by the slow evaporation method. Experimental and theoretical analyses were made towards its exploitation in the field of electro-optic and NLO applications. The monoclinic structure with non-centrosymmetric space group Cc was confirmed, and the cell dimensions of the grown crystal were obtained, from a single-crystal X-ray diffraction (XRD) study. The C–N–C vibrational band at 1115 cm⁻¹ in the Fourier Transform Infra-Red (FTIR) analysis confirms the formation of the MPMN compound. The placement of the protons and carbons of MPMN was identified from Nuclear Magnetic Resonance (NMR) spectroscopy. The wide optical absorption window and the low cutoff wavelength of MPMN show the suitability of the material for various laser-related applications. The dislocations and growth pattern of the crystal were analyzed using a chemical etching technique. The Second Harmonic Generation (SHG) efficiency of MPMN was found to be 1.57 times greater than that of the standard KDP crystal. The laser damage threshold, measured by passing an Nd:YAG laser beam through the sample, was found to be 1.006 GW/cm². The electronic structure of the molecular system and the optical properties were also studied from quantum chemical calculations using Density Functional Theory (DFT) and are reported for the first time.
Humanoid Walking Robot: Modeling, Inverse Dynamics, and Gain Scheduling Control
Elvedin Kljuno
2010-01-01
This article presents reference-model-based control design for a 10-degree-of-freedom bipedal walking robot, using nonlinear gain scheduling. The main goal is to show that concentrated-mass models can be used for prediction of the required joint torques for a bipedal walking robot. The relatively complicated architecture, high DOF, and balancing requirements make the control task of these robots difficult. Although linear control techniques can be used to control bipedal robots, nonlinear control is necessary for better performance. The emphasis of this work is to show that the reference model can be a bipedal walking model with the mass concentrated at the center of gravity, which removes the problems related to the design of a pseudo-inverse system. Another significance of this approach is the reduced calculation requirement due to the simplified procedure of nominal joint-torque calculation. Kinematic and dynamic analysis is discussed, including results for the joint torques and ground force necessary to implement a prescribed walking motion; this analysis is accompanied by a comparison with experimental data. An inverse-plant and tracking-error linearization-based controller design approach is described. We propose a novel combination of nonlinear gain scheduling with a concentrated-mass model for the MIMO bipedal robot system.
Recurrent Neural Network for Computing Outer Inverse.
Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin
2016-05-01
Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
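The dynamic-equation idea above can be illustrated with a minimal sketch (our assumption for illustration, not the paper's exact network): the Euler-discretized matrix-valued flow dX/dt = -Aᵀ(AX - I), started from the zero initial state as in the paper, converges to the Moore-Penrose inverse when A has full column rank.

```python
import numpy as np

def gradient_flow_pinv(A, dt=1e-3, steps=50000):
    """Euler-discretized matrix-valued ODE dX/dt = -A^T (A X - I).

    For a full-column-rank A this flow converges to the Moore-Penrose
    inverse; it is a toy stand-in for the recurrent-network dynamics
    described in the abstract (stability requires dt < 2/lambda_max(A^T A)).
    """
    m, n = A.shape
    X = np.zeros((n, m))          # zero initial state, as in the paper
    I = np.eye(m)
    for _ in range(steps):
        X = X - dt * A.T @ (A @ X - I)
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
X = gradient_flow_pinv(A)
# X approximates np.linalg.pinv(A)
```

The stated spectrum condition corresponds here to the step-size bound: too large a dt (relative to the largest eigenvalue of AᵀA) makes the discretized flow diverge.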
2.5D Inversion Algorithm of Frequency-Domain Airborne Electromagnetics with Topography
Jianjun Xi
2016-01-01
We present a 2.5D inversion algorithm with topography for frequency-domain airborne electromagnetic data. The forward modeling is based on the edge finite element method and uses irregular hexahedra to adapt to the topography. The electric and magnetic fields are split into primary (background) and secondary (scattered) fields to eliminate the source singularity. For the multiple sources of the frequency-domain airborne electromagnetic method, we use the large-scale sparse-matrix parallel shared-memory direct solver PARDISO to solve the linear system of equations efficiently. The inversion algorithm is based on the Gauss-Newton method, which has an efficient convergence rate. The Jacobian matrix is calculated efficiently by “adjoint forward modelling”. The synthetic inversion examples indicate that our proposed method is correct and effective. Furthermore, ignoring the topography effect can lead to incorrect results and interpretations.
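The Gauss-Newton update at the heart of such inversions can be sketched generically; the toy forward model and damping constant below are our assumptions (in the paper, the forward operator is the 2.5D finite-element EM simulation and the Jacobian comes from adjoint forward modelling).

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, n_iter=30, mu=1e-6):
    """Generic Gauss-Newton iteration: solve
    (J^T J + mu I) dm = J^T (d_obs - F(m)) and update m <- m + dm.
    Toy sketch of the update rule only, not the paper's EM solver."""
    m = m0.astype(float).copy()
    for _ in range(n_iter):
        r = d_obs - forward(m)                 # data residual
        J = jacobian(m)                        # sensitivity matrix
        dm = np.linalg.solve(J.T @ J + mu * np.eye(m.size), J.T @ r)
        m = m + dm
    return m

# hypothetical exponential-decay forward model d_i = m2 * exp(-m1 * t_i)
t = np.linspace(0.0, 2.0, 30)

def forward(m):
    return m[1] * np.exp(-m[0] * t)

def jacobian(m):
    return np.column_stack([-m[1] * t * np.exp(-m[0] * t),
                            np.exp(-m[0] * t)])

m_true = np.array([1.5, 2.0])
d_obs = forward(m_true)
m_est = gauss_newton(forward, jacobian, d_obs, np.array([1.0, 1.0]))
# m_est converges to m_true for this noiseless toy problem
```

The small `mu` plays the role of a damping/regularization term stabilizing the normal equations; real EM inversions add a model-regularization operator instead of a plain identity.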
Born reflection kernel analysis and wave-equation reflection traveltime inversion in elastic media
Wang, Tengfei
2017-08-17
Elastic reflection waveform inversion (ERWI) utilizes reflections to update the low and intermediate wavenumbers in the deeper part of the model. However, ERWI suffers from the cycle-skipping problem due to its waveform-residual objective function. Since traveltime information relates to the background model more linearly, we use the traveltime residuals as the objective function to update the background velocity model using wave-equation reflection traveltime inversion (WERTI). The reflection kernel analysis shows that mode decomposition can suppress the artifacts in the gradient calculation. We design a two-step inversion strategy, in which PP reflections are first used to invert the P-wave velocity (Vp), followed by S-wave velocity (Vs) inversion with PS reflections. P/S separation of multi-component seismograms and spatial wave-mode decomposition can reduce the nonlinearity of the inversion effectively by selecting suitable P- or S-wave subsets for hierarchical inversion. A numerical example on the Sigsbee2A model validates the effectiveness of the algorithms and strategies for elastic WERTI (E-WERTI).
Chromosome Gene Orientation Inversion Networks (GOINs) of Plasmodium Proteome.
Quevedo-Tumailli, Viviana F; Ortega-Tenezaca, Bernabé; González-Díaz, Humbert
2018-03-02
The spatial distribution of genes in chromosomes seems not to be random. For instance, only 10% of genes are transcribed from bidirectional promoters in humans, and many more are organized into larger clusters. This raises intriguing questions previously asked by different authors. We would like to add a few more questions in this context, related to gene orientation inversions. Does gene orientation (inversion) follow a random pattern? Is it somehow relevant to biological activity? We define a new kind of network coined the gene orientation inversion network (GOIN). The GOIN complex network encodes short- and long-range patterns of inversion of the orientation of pairs of genes in the chromosome. We selected Plasmodium falciparum as a case study due to the high relevance of this parasite to public health (causal agent of malaria). We constructed here for the first time all of the GOINs for the genome of this parasite. These networks have an average of 383 nodes (genes in one chromosome) and 1314 links (pairs of genes with inverse orientation). We calculated node centralities and other parameters of these networks. These numerical parameters were used to study different properties of gene inversion patterns, for example, distribution, local communities, similarity to Erdős-Rényi random networks, randomness, and so on. We find clues that seem to indicate that gene orientation inversion does not follow a random pattern. We noted that some gene communities in the GOINs tend to group genes encoding RIFIN-related proteins in the proteome of the parasite. RIFIN-like proteins are a second family of clonally variant proteins expressed on the surface of red cells infected with Plasmodium falciparum. Consequently, we used these centralities as input of machine learning (ML) models to predict the RIFIN-like activity of 5365 proteins in the proteome of Plasmodium sp. The best linear ML model found discriminates RIFIN-like from other proteins with sensitivity and
Advanced linear algebra for engineers with Matlab
Dianat, Sohail A
2009-01-01
Matrices, Matrix Algebra, and Elementary Matrix Operations: Basic Concepts and Notation; Matrix Algebra; Elementary Row Operations; Solution of Systems of Linear Equations; Matrix Partitions; Block Multiplication; Inner, Outer, and Kronecker Products. Determinants, Matrix Inversion, and Solutions to Systems of Linear Equations: Determinant of a Matrix; Matrix Inversion; Solution of Simultaneous Linear Equations; Applications: Circuit Analysis; Homogeneous Coordinates System. Rank, Nu
Freethey, G.W.; Spangler, L.E.; Monheiser, W.J.
1994-01-01
A 48-square-mile area in the southeastern part of the Salt Lake Valley, Utah, was studied to determine if generalized information obtained from geologic maps, water-level maps, and drillers' logs could be used to estimate hydraulic conductivity, porosity, and slope of the potentiometric surface: the three properties needed to calculate average linear velocity of ground water. Estimated values of these properties could be used by water-management and regulatory agencies to compute values of average linear velocity, which could be further used to estimate travel time of ground water along selected flow lines, and thus to determine wellhead protection areas around public-supply wells. The methods used to estimate the three properties are based on assumptions about the drillers' descriptions, the depositional history of the sediments, and the boundary conditions of the hydrologic system. These assumptions were based on geologic and hydrologic information determined from previous investigations. The reliability of the estimated values for hydrologic properties and average linear velocity depends on the accuracy of these assumptions. Hydraulic conductivity of the principal aquifer was estimated by calculating the thickness-weighted average of values assigned to different drillers' descriptions of material penetrated during the construction of 98 wells. Using these 98 control points, the study area was divided into zones representing approximate hydraulic-conductivity values of 20, 60, 100, 140, 180, 220, and 250 feet per day. This range of values is about the same range of values used in developing a ground-water flow model of the principal aquifer in the early 1980s. Porosity of the principal aquifer was estimated by compiling the range of porosity values determined or estimated during previous investigations of basin-fill sediments, and then using five different values ranging from 15 to 35 percent to delineate zones in the study area that were assumed to
A general method for closed-loop inverse simulation of helicopter maneuver flight
Wei WU
2017-12-01
Maneuverability is a key factor in determining whether a helicopter can finish certain flight missions successfully. Inverse simulation is commonly used to calculate the pilot controls of a helicopter needed to complete a certain kind of maneuver flight and to assess its maneuverability. A general method for inverse simulation of maneuver flight for helicopters with the flight control system online is developed in this paper. A general mathematical describing function is established to provide mathematical descriptions of different kinds of maneuvers. A comprehensive control solver based on optimal linear quadratic regulator theory is developed to calculate the pilot controls for different maneuvers. The coupling problem between pilot controls and flight-control-system outputs is solved by taking the flight-control-system model into the control solver. Inverse simulation of three different kinds of maneuvers with different agility requirements defined in ADS-33E-PRF is implemented based on the developed method for a UH-60 helicopter. The results show that the method developed in this paper can solve the closed-loop inverse simulation problem of helicopter maneuver flight with high reliability as well as efficiency. Keywords: Closed-loop, Flying quality, Helicopters, Inverse simulation, Maneuver flight
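The linear quadratic regulator at the core of such a control solver can be sketched in a few lines; the double-integrator plant below is a hypothetical stand-in for the coupled helicopter/flight-control-system model used in the paper.

```python
import numpy as np

def dlqr_gain(A, B, Q, R, iters=500):
    """Discrete-time LQR gain via backward Riccati iteration:
    P <- Q + A^T P A - A^T P B (R + B^T P B)^{-1} B^T P A,
    K  = (R + B^T P B)^{-1} B^T P A.
    Toy sketch of the LQR machinery only, not the paper's solver."""
    P = Q.copy()
    for _ in range(iters):
        BtPA = B.T @ P @ A
        K = np.linalg.solve(R + B.T @ P @ B, BtPA)
        P = Q + A.T @ P @ A - BtPA.T @ K
    return K

# hypothetical double-integrator plant, sampled at dt = 0.1 s
dt = 0.1
A = np.array([[1.0, dt], [0.0, 1.0]])
B = np.array([[0.5 * dt**2], [dt]])
K = dlqr_gain(A, B, np.eye(2), np.array([[1.0]]))
# the closed-loop matrix A - B K is stable (all eigenvalue magnitudes < 1)
```

The control u = -K x then drives the tracking error of the reference maneuver toward zero, which is the role the LQR solver plays in the closed-loop inverse simulation.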
Magnetotelluric inversion via reverse time migration algorithm of seismic data
Ha, Taeyoung; Shin, Changsoo
2007-01-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nédélec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data
Shilov, Georgi E
1977-01-01
Covers determinants, linear spaces, systems of linear equations, linear functions of a vector argument, coordinate transformations, the canonical form of the matrix of a linear operator, bilinear and quadratic forms, Euclidean spaces, unitary spaces, quadratic forms in Euclidean and unitary spaces, finite-dimensional space. Problems with hints and answers.
Efficient Inversion of Multi-frequency and Multi-Source Electromagnetic Data
Gary D. Egbert
2007-03-22
The project covered by this report focused on the development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems, each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object-oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before
Frequency-wavenumber domain phase inversion along reflection wavepaths
Yu, Han
2014-12-01
A background velocity model containing the correct low-wavenumber information is desired for both the quality of the migration image and the success of waveform inversion. To achieve this goal, the velocity is updated along the reflection wavepaths, rather than along both the reflection ellipses and transmission wavepaths as in conventional FWI. This method allows for reconstructing the low-wavenumber part of the background velocity model, even in the absence of long offsets and low-frequency component of the data. Moreover, in gradient-based iterative updates, instead of forming the data error conventionally, we propose to exploit the phase mismatch between the observed and the calculated data. The phase mismatch emphasizes a kinematic error and varies quasi-linearly with respect to the velocity error. The phase mismatch is computed (1) in the frequency-wavenumber (f-k) domain replacing the magnitudes of the calculated common shot gather by those of the observed one, and (2) in the temporal-spatial domain to form the difference between the transformed calculated common-shot gather and the observed one. The background velocity model inverted according to the proposed methods can serve as an improved initial velocity model for conventional waveform inversion. Tests with synthetic and field data show both the benefits and limitations of this method.
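Step (1) of the phase-mismatch construction above can be sketched with numpy transforms; this is a minimal illustration under our assumptions (per-shot windowing and tapering details of the actual method are omitted).

```python
import numpy as np

def phase_mismatch(d_obs, d_calc):
    """Form the phase-mismatch residual described in the abstract:
    (1) in the f-k domain, keep the phases of the calculated
        common-shot gather but replace its magnitudes with those
        of the observed gather;
    (2) back in the t-x domain, subtract the observed gather.
    Toy sketch of the magnitude-replacement step only."""
    D_obs = np.fft.fft2(d_obs)
    D_calc = np.fft.fft2(d_calc)
    phase = np.exp(1j * np.angle(D_calc))
    D_mix = np.abs(D_obs) * phase          # observed magnitude, calculated phase
    d_mix = np.real(np.fft.ifft2(D_mix))
    return d_mix - d_obs

# sanity check: with identical gathers the kinematic residual vanishes
rng = np.random.default_rng(0)
d = rng.standard_normal((64, 32))
r = phase_mismatch(d, d)
# r is numerically zero
```

Because the magnitudes are equalized, the residual is sensitive only to the phase (kinematic) error, which is the quasi-linear behavior with respect to velocity error that the abstract exploits.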
Hatch, Andrew G; Smith, Ralph C; De, Tathagata; Salapaka, Murti V
2005-01-01
.... In this paper, we illustrate the construction of inverse filters, based on homogenized energy models, which can be used to approximately linearize the piezoceramic transducer behavior for linear...
A Method of Gravity and Seismic Sequential Inversion and Its GPU Implementation
Liu, G.; Meng, X.
2011-12-01
In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion we use an iterative method based on a correlation imaging algorithm; for the seismic inversion we use full waveform inversion. The link between density and velocity is an empirical relation, the Gardner equation, and for large volumes of data we use the GPU to accelerate the computation. In the gravity inversion we first calculate the correlation imaging of the observed gravity anomaly, which takes values between -1 and +1, and multiply it by a small density increment to form the initial density model. We then compute the forward response of this model, calculate the correlation imaging of the misfit between the observed and forward data, multiply it by a small density increment, and add it to the current model; repeating this procedure yields the inverted density model. For the seismic inversion we use a method based on the linearization of the acoustic wave equation written in the frequency domain; starting from an initial velocity model, we can obtain a good velocity result. The sequential inversion of gravity and seismic data needs a formula to convert between density and velocity; in our method we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of using traditional general-purpose GPU programming while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing
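The correlation-imaging iteration described in the abstract can be sketched with a toy linear forward operator (our assumption for illustration; the real kernel comes from 3D gravity forward modeling of prisms).

```python
import numpy as np

def correlation_imaging_inversion(G, d_obs, d_rho=0.01, n_iter=500):
    """Iterative density inversion in the spirit of the abstract:
    at each step, correlate the current data misfit with each cell's
    forward response (a value in [-1, 1]), scale it by a small density
    increment d_rho, and add it to the model. G is a toy linear gravity
    forward operator (rows = stations, columns = cells)."""
    m = np.zeros(G.shape[1])
    Gn = np.linalg.norm(G, axis=0)                    # column norms
    for _ in range(n_iter):
        r = d_obs - G @ m                             # data misfit
        c = (G.T @ r) / (Gn * np.linalg.norm(r) + 1e-30)  # correlation image
        m = m + d_rho * c                             # small density update
    return m

rng = np.random.default_rng(3)
G = rng.random((40, 10))                              # hypothetical kernel
m_true = rng.uniform(0.0, 0.5, 10)                    # hypothetical densities
d_obs = G @ m_true
m = correlation_imaging_inversion(G, d_obs)
# the forward response of m fits the observed data closely
```

Because the correlation values are bounded by 1, each update is a bounded density step; the method behaves like a normalized, diagonally scaled gradient descent on the data misfit.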
Recursive Algorithm For Linear Regression
Varanasi, S. V.
1988-01-01
The order of the model is determined easily. The linear-regression algorithm includes recursive equations for the coefficients of a model of increased order. The algorithm eliminates duplicative calculations and facilitates the search for the minimum order of the linear-regression model that fits a set of data satisfactorily.
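The flavor of such a recursion can be illustrated with a standard recursive least-squares sketch; note this is our illustration, recursing over samples, whereas the cited memo recurses over model order — the duplicate-work savings are analogous.

```python
import numpy as np

def rls_fit(X, y, lam=1e6):
    """Sample-recursive least squares: update the coefficient vector
    one observation at a time, without re-solving the normal equations
    from scratch at each step."""
    n = X.shape[1]
    w = np.zeros(n)
    P = lam * np.eye(n)               # (large) prior covariance
    for x, t in zip(X, y):
        Px = P @ x
        k = Px / (1.0 + x @ Px)       # gain vector
        w = w + k * (t - x @ w)       # coefficient update
        P = P - np.outer(k, Px)       # covariance downdate
    return w

rng = np.random.default_rng(1)
X = rng.standard_normal((200, 3))
w_true = np.array([0.5, -2.0, 1.0])
y = X @ w_true                        # noiseless toy data
w = rls_fit(X, y)
# w recovers w_true
```

Each update costs O(n²) instead of the O(n³) of refitting, which is the same kind of duplicated-calculation elimination the memo describes for order recursion.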
Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm
Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali
2013-04-01
The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually known to be non-linear and high-dimensional, with a complex search space which may be riddled with many local minima, and results in irregular objective functions. We investigate here the performance and application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, elitism strategy, uniform crossover and low mutation probability are examined. The optimum solution parameters and performance were decided as a function of the testing-error convergence with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The crossover probability is 0.9-0.95 and mutation was tested at a probability of 0.01. The application of this genetic algorithm to synthetic data shows that the inversion of the acoustic impedance section was effective. Keywords: Seismic, Inversion, acoustic impedance, genetic algorithm, fitness functions, cross-over, mutation.
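A minimal real-coded GA with the operator choices quoted above (elitism, uniform crossover at high probability, low mutation probability) can be sketched as follows; the toy misfit and parameter bounds are our assumptions, not the authors' impedance parameterization.

```python
import numpy as np

def ga_minimize(misfit, n_par, pop=80, gens=200,
                pc=0.9, pm=0.02, lo=-1.0, hi=1.0, seed=0):
    """Minimal real-coded GA with elitism, uniform crossover (pc)
    and low-probability mutation (pm). Toy sketch, not the authors' code."""
    rng = np.random.default_rng(seed)
    P = rng.uniform(lo, hi, (pop, n_par))
    for _ in range(gens):
        f = np.array([misfit(ind) for ind in P])
        P = P[np.argsort(f)]                        # best individuals first
        new = [P[0].copy()]                         # elitism: keep the best
        while len(new) < pop:
            i, j = rng.integers(0, pop // 2, size=2)  # parents from top half
            a, b = P[i], P[j]
            if rng.random() < pc:                   # uniform crossover
                child = np.where(rng.random(n_par) < 0.5, a, b).astype(float)
            else:
                child = a.copy()
            mask = rng.random(n_par) < pm           # low-probability mutation
            child[mask] += 0.1 * rng.standard_normal(int(mask.sum()))
            new.append(np.clip(child, lo, hi))
        P = np.asarray(new)
    f = np.array([misfit(ind) for ind in P])
    return P[np.argmin(f)]

target = np.array([0.3, -0.7, 0.5])                 # hypothetical "true" model
best = ga_minimize(lambda m: float(np.sum((m - target) ** 2)), 3)
```

In the paper's setting the misfit would be the L2 sample-to-sample difference between the reference and inverted traces; elitism guarantees the best solution never degrades across generations.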
Masuda, Yosuke; Yoshida, Tomoki; Yamaotsu, Noriyuki; Hirono, Shuichi
2018-01-01
We recently reported that the Gibbs free energy of hydrolytic water molecules (ΔG_wat) in acyl-trypsin intermediates calculated by hydration thermodynamics analysis could be a useful metric for estimating the catalytic rate constants (k_cat) of mechanism-based reversible covalent inhibitors. For thorough evaluation, the proposed method was tested with an increased number of covalent ligands that have no corresponding crystal structures. After modeling acyl-trypsin intermediate structures using flexible molecular superposition, ΔG_wat values were calculated according to the proposed method. The orbital energies of the antibonding π* molecular orbitals (MOs) of the carbonyl C=O in covalently modified catalytic serine (E_orb) were also calculated by semi-empirical MO calculations. Then, linear discriminant analysis (LDA) was performed to build a model that can discriminate covalent inhibitor candidates from substrate-like ligands using ΔG_wat and E_orb. The model was built using a training set (10 compounds) and then validated by a test set (4 compounds). As a result, the training set and test set ligands were perfectly discriminated by the model. Hydrolysis was slower when (1) the hydrolytic water molecule has a lower ΔG_wat, or (2) the covalent ligand presents a higher E_orb (higher reaction barrier). Results also showed that the entropic term of the hydrolytic water molecule (-TΔS_wat) could be used for estimating k_cat and for covalent inhibitor optimization; when the rotational freedom of the hydrolytic water molecule is limited, the chance for favorable interaction with the electrophilic acyl group would also be limited. The method proposed in this study would be useful for screening and optimizing mechanism-based reversible covalent inhibitors.
Population inversion in recombining hydrogen plasma
Furukane, Utaro; Yokota, Toshiaki; Oda, Toshiatsu.
1978-11-01
The collisional-radiative model is applied to a recombining hydrogen plasma in order to investigate the plasma condition in which the population inversion between the energy levels of hydrogen can be generated. The population inversion is expected in a plasma where the three body recombination has a large contribution to the recombining processes and the effective recombination rate is beyond a certain value for a given electron density and temperature. Calculated results are presented in figures and tables. (author)
Acute puerperal uterine inversion
Hussain, M.; Liaquat, N.; Noorani, K.; Bhutta, S.Z.; Jabeen, T.
2004-01-01
Objective: To determine the frequency, causes, clinical presentations, management and maternal mortality associated with acute puerperal inversion of the uterus. Materials and Methods: All the patients who developed acute puerperal inversion of the uterus either in or outside the JPMC were included in the study. Patients of chronic uterine inversion were not included in the present study. Abdominal and vaginal examination was done to confirm and classify inversion into first, second or third degrees. Results: 57036 deliveries and 36 acute uterine inversions occurred during the study period, so the frequency of uterine inversion was 1 in 1584 deliveries. Mismanagement of third stage of labour was responsible for uterine inversion in 75% of patients. Majority of the patients presented with shock, either hypovolemic (69%) or neurogenic (13%) in origin. Manual replacement of the uterus under general anaesthesia with 2% halothane was successfully done in 35 patients (97.5%). Abdominal hysterectomy was done in only one patient. There were three maternal deaths due to inversion. Conclusion: Proper education and training regarding placental delivery, diagnosis and management of uterine inversion must be imparted to the maternity care providers especially to traditional birth attendants and family physicians to prevent this potentially life-threatening condition. (author)
General inverse problems for regular variation
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Kimura, W.D.
1993-01-01
The final report describes work performed to investigate inverse Cherenkov acceleration (ICA) as a promising method for laser particle acceleration. In particular, an improved configuration of ICA is being tested in an experiment presently underway on the Accelerator Test Facility (ATF). In the experiment, the high-peak-power (∼ 10 GW) linearly polarized ATF CO₂ laser beam is converted to a radially polarized beam. This beam is focused with an axicon at the Cherenkov angle onto the ATF 50-MeV e-beam inside a hydrogen gas cell, where the gas acts as the phase-matching medium of the interaction. An energy gain of ∼ 12 MeV is predicted assuming a delivered laser peak power of 5 GW. The experiment is divided into two phases. The Phase I experiments, which were completed in the spring of 1992, were conducted before the ATF e-beam was available and involved several successful tests of the optical systems. The Phase II experiments are with the e-beam and laser beam, and are still in progress. The ATF demonstrated delivery of the e-beam to the experiment in Dec. 1992. A preliminary ''debugging'' run with the e-beam and laser beam occurred in May 1993. This revealed the need for some experimental modifications, which have been implemented. The second run is tentatively scheduled for October or November 1993. In parallel to the experimental efforts has been ongoing theoretical work to support the experiment and investigate improvements and/or offshoots. One exciting offshoot has been theoretical work showing that free-space laser acceleration of electrons is possible using a radially polarized, axicon-focused laser beam, but without any phase-matching gas. The Monte Carlo code used to model the ICA process has been upgraded and expanded to handle different types of laser beam input profiles
Inverse logarithmic potential problem
Cherednichenko, V G
1996-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Inverse Kinematics using Quaternions
Henriksen, Knud; Erleben, Kenny; Engell-Nørregård, Morten
In this project I describe the status of inverse kinematics research, with the focus firmly on the methods that solve the core problem. An overview of the different methods is presented. Three common methods used in inverse kinematics computation have been chosen as subjects for closer inspection.
Radionuclide release rate inversion of nuclear accidents in nuclear facility based on Kalman filter
Tang Xiuhuan; Bao Lihong; Li Hua; Wan Junsheng
2014-01-01
Rapid and continuous back-calculation of the source term is important for nuclear emergency response. The Gaussian multi-puff atmospheric dispersion model was used to produce regional environment monitoring data virtually, and then a Kalman filter was designed to invert the radionuclide release rate of nuclear accidents in a nuclear facility; real-time tracking of the release rate was achieved. The results show that the Kalman filter combined with the Gaussian multi-puff atmospheric dispersion model can successfully track a virtually stable, linear or nonlinear release rate after about 10 iterations. The standard error of the inversion results increases with the true value. Meanwhile, the extended Kalman filter cannot invert the height parameter of the accident release, as the linearization error is too large for convergence. A Kalman filter constructed from environment monitoring data and the Gaussian multi-puff atmospheric dispersion model can be applied to source inversion in nuclear accidents characterized by a static height and position and a short, continuous release in a nuclear facility. Hence it turns out to be an alternative source inversion method in nuclear emergency response. (authors)
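The release-rate tracking can be sketched with a scalar random-walk Kalman filter; the constant transfer coefficient H and the noise levels below are our assumptions (in the paper they come from the Gaussian multi-puff dispersion model).

```python
import numpy as np

def track_release_rate(y, H, q=1e-2, r=0.1):
    """Scalar Kalman filter tracking a release rate Q_t from monitoring
    data y_t = H_t * Q_t + noise, where H_t is the dispersion-model
    transfer coefficient. State model: random walk Q_t = Q_{t-1} + w_t.
    Toy sketch of the filtering step, not the paper's implementation."""
    Q_est, P = 0.0, 1e6                    # diffuse initial state
    out = []
    for yt, ht in zip(y, H):
        P = P + q                          # predict (random-walk growth)
        K = P * ht / (ht * P * ht + r)     # Kalman gain
        Q_est = Q_est + K * (yt - ht * Q_est)
        P = (1.0 - K * ht) * P
        out.append(Q_est)
    return np.array(out)

rng = np.random.default_rng(2)
true_rate = 5.0                            # hypothetical constant release rate
H = np.full(50, 0.2)                       # hypothetical transfer coefficients
y = H * true_rate + 0.01 * rng.standard_normal(50)
est = track_release_rate(y, H)
# est converges to the true release rate
```

The random-walk process noise q lets the same filter follow a slowly varying (linear or nonlinear) release rate rather than only a constant one, matching the tracking behavior reported in the abstract.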
Approximation of the inverse G-frame operator
... projection method for G-frames which works for all conditional G-Riesz frames. We also derive a method for approximation of the inverse G-frame operator which is efficient for all G-frames. We show how the inverse of the G-frame operator can be approximated as closely as we like using finite-dimensional linear algebra.
Absorption line profiles in a moving atmosphere - A single scattering linear perturbation theory
Hays, P. B.; Abreu, V. J.
1989-01-01
An integral equation is derived which linearly relates Doppler perturbations in the spectrum of atmospheric absorption features to the wind system which creates them. The perturbation theory is developed using a single scattering model, which is validated against a multiple scattering calculation. The nature and basic properties of the kernels in the integral equation are examined. It is concluded that the kernels are well behaved and that wind velocity profiles can be recovered using standard inversion techniques.
Minimal-Inversion Feedforward-And-Feedback Control System
Seraji, Homayoun
1990-01-01
Recent developments in the theory of control systems support the concept of a minimal-inversion feedforward-and-feedback control system consisting of three independently designable control subsystems. It is applicable to the control of linear, time-invariant plants.
Full Waveform Inversion Using Oriented Time Migration Method
Zhang, Zhendong
2016-01-01
Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate the resulting gradient can have
Inverse problem in hydrogeology
Carrera, Jesús; Alcolea, Andrés; Medina, Agustín; Hidalgo, Juan; Slooten, Luit J.
2005-03-01
This is a synthesis of the state of the inverse problem in groundwater hydrology. The emphasis is on aquifer characterization, where modelers have to contend with conceptual model uncertainty (mainly spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as is often done in other fields; instead, it should be viewed as one step in the process of understanding aquifer behavior. It is shown that current parameter-estimation methods do not differ from each other in essence, although they may differ in computational details. It is argued that there is ample room for improvement of the groundwater inverse problem: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and of different data types (temperature, occurrence, isotope concentrations, age, etc.), and quantification of uncertainty. Given these developments, automatic calibration greatly facilitates modeling, and it is desirable that its use become standard practice.
Suwono.
1978-01-01
A linear gate providing a variable gate duration from 0.40 μs to 4 μs was developed. The electronic circuitry consists of a linear circuit and an enable circuit. The input signal can be either unipolar or bipolar; if the input signal is bipolar, the negative portion is filtered out. The operation of the linear gate is controlled by the application of a positive enable pulse. (author)
Vretenar, M
2014-01-01
The main features of radio-frequency linear accelerators are introduced, reviewing the different types of accelerating structures and presenting the main characteristic aspects of linac beam dynamics.
Analog fault diagnosis by inverse problem technique
Ahmed, Rania F.
2011-12-01
A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thus to detect and diagnose a single fault in analog circuits. The algorithm is validated by applying it to a Sallen-Key second-order band-pass filter; the results show a fault-detection efficiency of 100% and a maximum error of 0.7% in the estimated parameter values. This technique can be applied to any other linear circuit and can also be extended to non-linear circuits. © 2011 IEEE.
A linear model for flow over complex terrain
Frank, H P [Risoe National Lab., Wind Energy and Atmospheric Physics Dept., Roskilde (Denmark)
1999-03-01
A linear flow model similar to WAsP or LINCOM has been developed. Major differences are an isentropic temperature equation, which allows internal gravity waves, and vertical advection of the shear of the mean flow. The importance of these effects is illustrated by examples. Resource maps are calculated from a distribution of geostrophic winds and stratification for Pyhaetunturi Fell in northern Finland and Acqua Spruzza in Italy. Stratification becomes important if the inverse Froude number, formulated with the width of the hill, becomes of order one or greater. (au) EU-JOULE-3. 16 refs.
Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.
2011-01-01
Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram. © 2011 The Authors, Geophysical Journal International. © 2011 RAS.
Linear algebra for dense matrices on a hypercube
Sears, M.P.
1990-01-01
A set of routines has been written for dense matrix operations optimized for the NCUBE/6400 parallel processor. This paper was motivated by a Sandia effort to parallelize certain electronic structure calculations. Routines are included for matrix transpose, multiply, Cholesky decomposition, triangular inversion, and Householder tridiagonalization. The library is written in C and is callable from Fortran. Matrices up to order 1600 can be handled on 128 processors. For each operation, the algorithm used is presented along with typical timings and estimates of performance. Performance for order 1600 on 128 processors varies from 42 MFLOPs (Householder tridiagonalization, triangular inverse) up to 126 MFLOPs (matrix multiply). The authors also present performance results for communications and basic linear algebra operations (saxpy and dot products)
Granato, Gregory E.
2006-01-01
The Kendall-Theil Robust Line software (KTRLine-version 1.0) is a Visual Basic program that may be used with the Microsoft Windows operating system to calculate parameters for robust, nonparametric estimates of linear-regression coefficients between two continuous variables. The KTRLine software was developed by the U.S. Geological Survey, in cooperation with the Federal Highway Administration, for use in stochastic data modeling with local, regional, and national hydrologic data sets to develop planning-level estimates of potential effects of highway runoff on the quality of receiving waters. The Kendall-Theil robust line was selected because this robust nonparametric method is resistant to the effects of outliers and nonnormality in residuals that commonly characterize hydrologic data sets. The slope of the line is calculated as the median of all possible pairwise slopes between points. The intercept is calculated so that the line will run through the median of input data. A single-line model or a multisegment model may be specified. The program was developed to provide regression equations with an error component for stochastic data generation because nonparametric multisegment regression tools are not available with the software that is commonly used to develop regression models. The Kendall-Theil robust line is a median line and, therefore, may underestimate total mass, volume, or loads unless the error component or a bias correction factor is incorporated into the estimate. Regression statistics such as the median error, the median absolute deviation, the prediction error sum of squares, the root mean square error, the confidence interval for the slope, and the bias correction factor for median estimates are calculated by use of nonparametric methods. These statistics, however, may be used to formulate estimates of mass, volume, or total loads. The program is used to read a two- or three-column tab-delimited input file with variable names in the first row and
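The estimator at the heart of KTRLine is compact enough to sketch directly. The following Python is a minimal illustration of the single-line case (an independent sketch, not the USGS program itself): the slope is the median of all pairwise slopes, and the intercept forces the line through the medians of the input data, as described above.

```python
import numpy as np

def kendall_theil_line(x, y):
    """Kendall-Theil (Theil-Sen) robust line: slope is the median of all
    pairwise slopes; the intercept runs the line through the data medians."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    slopes = []
    n = len(x)
    for i in range(n):
        for j in range(i + 1, n):
            if x[j] != x[i]:                       # skip vertical pairs
                slopes.append((y[j] - y[i]) / (x[j] - x[i]))
    slope = np.median(slopes)
    intercept = np.median(y) - slope * np.median(x)
    return slope, intercept

# A single outlier barely moves the robust fit:
x = np.array([1, 2, 3, 4, 5], float)
y = np.array([2.0, 4.1, 5.9, 8.0, 30.0])           # last point is an outlier
m, b = kendall_theil_line(x, y)                    # m = 2.1, b = -0.4
```

An ordinary least-squares fit of the same data would be dragged far upward by the outlier; the median-of-slopes estimator ignores it, which is exactly the resistance property the report cites for hydrologic data.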
Trimming and procrastination as inversion techniques
Backus, George E.
1996-12-01
By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model x_E can be either a collection of inequalities or a probability measure describing where x_E was likely to be in the model space X before the data vector y_0 was measured. The results of the inversion are (1) a vector z_0 that estimates some numerical properties z_E of x_E; (2) an estimate of the error δz = z_0 − z_E. As y_0 is finite dimensional, so is z_0, and hence in principle inversion cannot describe all of x_E. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on x_E. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.
Said-Houari, Belkacem
2017-01-01
This self-contained, clearly written textbook on linear algebra is easily accessible for students. It begins with the simple linear equation and generalizes several notions from this equation for the system of linear equations and introduces the main ideas using matrices. It then offers a detailed chapter on determinants and introduces the main ideas with detailed proofs. The third chapter introduces the Euclidean spaces using very simple geometric ideas and discusses various major inequalities and identities. These ideas offer a solid basis for understanding general Hilbert spaces in functional analysis. The following two chapters address general vector spaces, including some rigorous proofs to all the main results, and linear transformation: areas that are ignored or are poorly explained in many textbooks. Chapter 6 introduces the idea of matrices using linear transformation, which is easier to understand than the usual theory of matrices approach. The final two chapters are more advanced, introducing t...
Inverse scattering problems with multi-frequencies
Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi
2015-01-01
This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods. (topical review)
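The low-to-high frequency continuation behind recursive linearization can be illustrated on a deliberately simple toy problem (an illustrative construction, not any of the reviewed algorithms): recover a scatterer-like position c from oscillatory data cos(k(x − c_true)). At high k the least-squares misfit has many local minima; sweeping k from low to high and warm-starting each stage keeps Gauss-Newton in the correct basin, mimicking how low spatial frequencies stabilize the inversion.

```python
import numpy as np

x = np.linspace(0.0, 1.0, 200)
c_true = 0.62          # "unknown" parameter used only to synthesize data

def gauss_newton(c, k, iters=60):
    """Fit c in the model cos(k*(x - c)) to data at wavenumber k."""
    d = np.cos(k * (x - c_true))                 # synthetic data at this k
    for _ in range(iters):
        r = np.cos(k * (x - c)) - d              # residual
        J = k * np.sin(k * (x - c))              # d(residual)/dc
        c -= (J @ r) / (J @ J)                   # Gauss-Newton update
    return c

c = 0.1                                          # poor initial guess
for k in [1.0, 2.0, 4.0, 8.0, 16.0, 32.0]:       # low -> high frequency
    c = gauss_newton(c, k)                       # warm start each stage
# c is now essentially c_true = 0.62
```

Starting directly at k = 32 from c = 0.1 would converge to one of the many spurious minima spaced roughly π/k apart; the continuation over frequency is what makes the final high-resolution stage stable.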
Sharp spatially constrained inversion
Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest
2013-01-01
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes … inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user.
Rosenwald, J.-C.
2008-01-01
The lecture addressed the following topics: Optimizing radiotherapy dose distribution; IMRT contributes to optimization of energy deposition; Inverse vs direct planning; Main steps of IMRT; Background of inverse planning; General principle of inverse planning; The 3 main components of IMRT inverse planning; The simplest cost function (deviation from prescribed dose); The driving variable : the beamlet intensity; Minimizing a 'cost function' (or 'objective function') - the walker (or skier) analogy; Application to IMRT optimization (the gradient method); The gradient method - discussion; The simulated annealing method; The optimization criteria - discussion; Hard and soft constraints; Dose volume constraints; Typical user interface for definition of optimization criteria; Biological constraints (Equivalent Uniform Dose); The result of the optimization process; Semi-automatic solutions for IMRT; Generalisation of the optimization problem; Driving and driven variables used in RT optimization; Towards multi-criteria optimization; and Conclusions for the optimization phase. (P.A.)
Inverse Higgs effect in nonlinear realizations
Ivanov, E.A.; Ogievetskij, V.I.
1975-01-01
In theories with nonlinearly realized symmetry it is possible in a number of cases to eliminate some initial Goldstone and gauge fields by means of putting appropriate Cartan forms equal to zero. This is called the inverse Higgs phenomenon. We give a general treatment of the inverse Higgs phenomenon for gauge and space-time symmetries and consider four instructive examples which are the elimination of unessential gauge fields in chiral symmetry and in non-linearly realized supersymmetry and also the elimination of unessential Goldstone fields in the spontaneously broken conformal and projective symmetries
Brooke, D.; Vondrasek, D. V.
1978-01-01
The aerodynamic influence coefficients calculated using an existing linear theory program were used to modify the pressures calculated using impact theory. Application of the combined approach to several wing-alone configurations shows that the combined approach gives improved predictions of the local pressure and loadings over either linear theory alone or impact theory alone. The approach not only removes most of the shortcomings of the individual methods, as applied in the Mach 4 to 8 range, but also provides the basis for an inverse design procedure applicable to high speed configurations.
Towards the mechanical characterization of abdominal wall by inverse analysis.
Simón-Allué, R; Calvo, B; Oberai, A A; Barbone, P E
2017-02-01
The aim of this study is to characterize the passive mechanical behaviour of the abdominal wall in vivo in an animal model using only external cameras and numerical analysis. The main objective lies in defining a methodology that provides in vivo information on a specific patient without altering mechanical properties. It is demonstrated in the mechanical study of the abdomen for hernia purposes. Mechanical tests consisted of pneumoperitoneum tests performed on New Zealand rabbits, where inner pressure was varied from 0 mmHg to 12 mmHg. Changes in the external abdominal surface were recorded and several points were tracked. Based on their coordinates we reconstructed a 3D finite element model of the abdominal wall, considering an incompressible hyperelastic material model defined by two parameters. The spatial distributions of these parameters (shear modulus and nonlinear parameter) were calculated by inverse analysis, using two different types of regularization: Total Variation Diminishing (TVD) and Tikhonov (H1). After solving the inverse problem, the distributions of the material parameters were obtained along the abdominal surface. Accuracy of the results was evaluated for the last level of pressure. Results revealed a higher value of the shear modulus in a wide stripe along the cranio-caudal direction, associated with the presence of the linea alba in conjunction with fascias and rectus abdominis. The nonlinear parameter distribution was smoother and the location of higher values varied with the regularization type. Both regularizations proved to yield an accurate predicted displacement field, but H1 obtained a smoother material parameter distribution while TVD included some discontinuities. The methodology presented here was able to characterize in vivo the passive nonlinear mechanical response of the abdominal wall. Copyright © 2016 Elsevier Ltd. All rights reserved.
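The Tikhonov (H1) ingredient of such an inverse analysis can be sketched on a generic linear toy problem (a minimal sketch with invented numbers, not the paper's finite-element pipeline): minimize ||A m − d||² + λ||D m||², where D is a first-difference operator penalizing rough parameter fields, solved here in closed form via the normal equations.

```python
import numpy as np

def tikhonov_h1(A, d, lam):
    """Solve min ||A m - d||^2 + lam * ||D m||^2 with D = first differences
    (an H1-seminorm penalty); lam trades data fit against smoothness."""
    n = A.shape[1]
    D = np.diff(np.eye(n), axis=0)               # (n-1) x n difference matrix
    lhs = A.T @ A + lam * D.T @ D                # regularized normal equations
    return np.linalg.solve(lhs, A.T @ d)

# Toy forward model: a 3-point moving average blurs the true parameter field.
n = 40
m_true = np.where(np.arange(n) < n // 2, 1.0, 2.0)    # a sharp step
A = sum(np.eye(n, k=k) for k in (-1, 0, 1)) / 3.0     # tridiagonal blur
d = A @ m_true                                        # noise-free data
m_rec = tikhonov_h1(A, d, lam=1e-6)                   # close to m_true
```

With noisy data, a larger λ smooths the reconstruction and smears sharp boundaries, which is exactly the behaviour contrasted with TVD-type regularization above.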
Inversion of GPS meteorology data
K. Hocke
The GPS meteorology (GPS/MET) experiment, led by the University Corporation for Atmospheric Research (UCAR), consists of a GPS receiver aboard a low earth orbit (LEO) satellite which was launched on 3 April 1995. During a radio occultation the LEO satellite rises or sets relative to one of the 24 GPS satellites at the Earth's horizon. Thereby the atmospheric layers are successively sounded by radio waves which propagate from the GPS satellite to the LEO satellite. From the observed phase path increases, which are due to refraction of the radio waves by the ionosphere and the neutral atmosphere, the atmospheric parameters refractivity, density, pressure and temperature are calculated with high accuracy and resolution (0.5–1.5 km). In the present study, practical aspects of the GPS/MET data analysis are discussed. The retrieval is based on the Abelian integral inversion of the atmospheric bending angle profile into the refractivity profile. The problem of the upper boundary condition of the Abelian integral is described by examples. The statistical optimization approach which is applied to the data above 40 km and the use of topside bending angle profiles from model atmospheres stabilize the inversion. The retrieved temperature profiles are compared with corresponding profiles which have already been calculated by scientists of UCAR and the Jet Propulsion Laboratory (JPL), using Abelian integral inversion too. The comparison shows that in some cases large differences occur (5 K and more). This is probably due to different treatment of the upper boundary condition, data runaways and noise. Several temperature profiles with wavelike structures at tropospheric and stratospheric heights are shown. While the periodic structures at upper stratospheric heights could be caused by residual errors of the ionospheric correction method, the periodic temperature fluctuations at heights below 30 km are most likely caused by atmospheric waves (vertically
On the calibration process of film dosimetry: OLS inverse regression versus WLS inverse prediction
Crop, F; Thierens, H; Rompaye, B Van; Paelinck, L; Vakaet, L; Wagter, C De
2008-01-01
The purpose of this study was twofold: to put forward a statistically correct model for film calibration and to optimize this process. A reliable calibration is needed in order to perform accurate reference dosimetry with radiographic (Gafchromic) film. Sometimes an ordinary least squares simple linear (in the parameters) regression is applied to the dose-optical-density (OD) curve, with dose as a function of OD (inverse regression), or sometimes OD as a function of dose (inverse prediction). The application of a simple linear regression fit is an invalid method because the heteroscedasticity of the data is not taken into account. This could lead to erroneous results originating from the calibration process itself and thus to a lower accuracy. In this work, we compare the ordinary least squares (OLS) inverse regression method with the correct weighted least squares (WLS) inverse prediction method to create calibration curves. We found that the OLS inverse regression method could lead to a prediction bias of up to 7.3 cGy at 300 cGy and total prediction errors of 3% or more for Gafchromic EBT film. Application of the WLS inverse prediction method resulted in a maximum prediction bias of 1.4 cGy and total prediction errors below 2% in a 0-400 cGy range. We developed a Monte Carlo-based process to optimize calibrations, depending on the needs of the experiment. This type of thorough analysis can lead to a higher accuracy for film dosimetry
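The distinction can be sketched for a linear calibration with heteroscedastic noise (a minimal sketch with hypothetical numbers, not the paper's EBT data or its full Monte Carlo analysis). Inverse prediction fits OD = a + b·dose with weights reflecting the per-point uncertainty, then solves the fitted line for dose; inverse regression would instead regress dose directly on OD with equal weights, ignoring the heteroscedasticity.

```python
import numpy as np

# Hypothetical calibration points (cGy) and optical densities.
dose = np.array([0.0, 50.0, 100.0, 200.0, 300.0, 400.0])
od = np.array([0.05, 0.10, 0.15, 0.25, 0.35, 0.45])
sigma = 0.01 * (1.0 + od)          # assumed noise growing with OD

# WLS fit of OD on dose; np.polyfit's weights are 1/sigma (not 1/sigma^2).
b, a = np.polyfit(dose, od, 1, w=1.0 / sigma)

def dose_from_od(od_meas):
    """Inverse prediction: invert the fitted line, dose = (OD - a) / b."""
    return (od_meas - a) / b

d_pred = dose_from_od(0.25)        # recovers 200 cGy for this exact line
```

With real, noisy film data the WLS and OLS lines differ, and it is that difference which produces the prediction bias quantified in the abstract.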
Stoll, R R
1968-01-01
Linear Algebra is intended to be used as a text for a one-semester course in linear algebra at the undergraduate level. The treatment of the subject will be both useful to students of mathematics and those interested primarily in applications of the theory. The major prerequisite for mastering the material is the readiness of the student to reason abstractly. Specifically, this calls for an understanding of the fact that axioms are assumptions and that theorems are logical consequences of one or more axioms. Familiarity with calculus and linear differential equations is required for understand
Hartzell, S.; Liu, P.
1996-01-01
A method is presented for the simultaneous calculation of slip amplitudes and rupture times for a finite fault using a hybrid global search algorithm. The method we use combines simulated annealing with the downhill simplex method to produce a more efficient search algorithm than either of the two constituent parts. This formulation has advantages over traditional iterative or linearized approaches to the problem because it is able to escape local minima in its search through model space for the global optimum. We apply this global search method to the calculation of the rupture history for the Landers, California, earthquake. The rupture is modeled using three separate finite-fault planes to represent the three main fault segments that failed during this earthquake. Both the slip amplitude and the time of slip are calculated for a grid work of subfaults. The data used consist of digital, teleseismic P and SH body waves. Long-period, broadband, and short-period records are utilized to obtain a wideband characterization of the source. The results of the global search inversion are compared with a more traditional linear-least-squares inversion for only slip amplitudes. We use a multi-time-window linear analysis to relax the constraints on rupture time and rise time in the least-squares inversion. Both inversions produce similar slip distributions, although the linear-least-squares solution has a 10% larger moment (7.3 × 10^26 dyne-cm compared with 6.6 × 10^26 dyne-cm). Both inversions fit the data equally well and point out the importance of (1) using a parameterization with sufficient spatial and temporal flexibility to encompass likely complexities in the rupture process, (2) including suitable physically based constraints on the inversion to reduce instabilities in the solution, and (3) focusing on those robust rupture characteristics that rise above the details of the parameterization and data set.
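The hybrid idea, anneal globally, then polish with a downhill simplex, can be sketched on a standard multimodal test function (a minimal sketch of the general strategy, not the authors' finite-fault code; the temperatures, step sizes, and the Rastrigin objective are all assumptions for illustration).

```python
import numpy as np
from scipy.optimize import minimize

def rastrigin(m):
    """Standard multimodal test function; global minimum 0 at the origin."""
    m = np.asarray(m)
    return 10.0 * m.size + np.sum(m**2 - 10.0 * np.cos(2.0 * np.pi * m))

def sa_simplex(f, m0, T0=5.0, cooling=0.95, steps=400, seed=1):
    """Simulated annealing to escape local minima, then a downhill-simplex
    (Nelder-Mead) refinement of the best model found."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, float)
    fm, T = f(m), T0
    best, fbest = m.copy(), fm
    for _ in range(steps):
        trial = m + rng.normal(scale=0.5, size=m.size)   # random perturbation
        ft = f(trial)
        # Metropolis rule: always accept downhill, sometimes accept uphill
        if ft < fm or rng.random() < np.exp(-(ft - fm) / T):
            m, fm = trial, ft
            if fm < fbest:
                best, fbest = m.copy(), fm
        T *= cooling                                      # cool the schedule
    return minimize(f, best, method="Nelder-Mead").x      # simplex polish

m_opt = sa_simplex(rastrigin, [3.2, -2.7])
```

A purely local method started at the same point would stall in the nearest of Rastrigin's many local minima; the annealing stage supplies the basin, and the simplex stage supplies the precision, which is the division of labor the abstract describes.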
Govoni, Marco; Argonne National Lab., Argonne, IL; Galli, Giulia; Argonne National Lab., Argonne, IL
2015-01-01
We present GW calculations of molecules, ordered and disordered solids and interfaces, which employ an efficient contour deformation technique for frequency integration and do not require the explicit evaluation of virtual electronic states nor the inversion of dielectric matrices. We also present a parallel implementation of the algorithm, which takes advantage of separable expressions of both the single particle Green's function and the screened Coulomb interaction. The method can be used starting from density functional theory calculations performed with semilocal or hybrid functionals. The newly developed technique was applied to GW calculations of systems of unprecedented size, including water/semiconductor interfaces with thousands of electrons
Nezhad, Mohsen Motahari; Shojaeefard, Mohammad Hassan; Shahraki, Saeid
2016-02-01
In this study, experiments were carried out to thermally analyze the exhaust valve in an air-cooled internal combustion engine and to estimate the thermal contact conductance in fixed and periodic contacts. Due to the nature of internal combustion engines, the duration of contact between the valve and its seat is very short, and much time is needed to reach the quasi-steady state in the periodic contact between the exhaust valve and its seat. Using the methods of linear extrapolation and the inverse solution, the surface contact temperatures and the fixed and periodic thermal contact conductances were calculated. The results of the linear extrapolation and inverse methods have similar trends, and based on the error analysis, they are accurate enough to estimate the thermal contact conductance. Moreover, based on the error analysis, a linear extrapolation method using the inverse ratio is preferred. The effects of pressure, contact frequency, heat flux, and cooling air speed on thermal contact conductance have been investigated. The results show that by increasing the contact pressure the thermal contact conductance increases substantially. In addition, by increasing the engine speed the thermal contact conductance decreases. On the other hand, by increasing the air speed the thermal contact conductance increases, and by raising the heat flux the thermal contact conductance decreases. The average calculated error equals 12.9%.
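The linear-extrapolation step can be sketched as follows (illustrative numbers, not the paper's measurements): under steady one-dimensional conduction the temperature profile is linear, so thermocouple readings at known depths on each side are extrapolated to the contact face, and the contact conductance follows from h = q / ΔT.

```python
import numpy as np

# Hypothetical thermocouple depths below each contact face (m) and readings (K).
depth = np.array([2.0e-3, 4.0e-3, 6.0e-3])
T_valve = np.array([659.0, 668.0, 677.0])   # valve side: hotter away from seat
T_seat = np.array([575.0, 570.0, 565.0])    # seat side: hottest at the contact

# Fit T(depth) linearly and evaluate at depth = 0 (the contact surface).
Tv0 = np.polyval(np.polyfit(depth, T_valve, 1), 0.0)   # -> 650 K
Ts0 = np.polyval(np.polyfit(depth, T_seat, 1), 0.0)    # -> 580 K

q = 3.0e5                                   # assumed contact heat flux, W/m^2
h = q / (Tv0 - Ts0)                         # thermal contact conductance, W/(m^2 K)
```

In the periodic-contact case the same extrapolation is applied to the quasi-steady cycle-averaged profiles, which is why reaching the quasi-steady state matters in the experiment above.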
Ravenna, Matteo; Lebedev, Sergei; Celli, Nicolas
2017-04-01
We develop a Markov Chain Monte Carlo inversion of fundamental and higher mode phase-velocity curves for radially and azimuthally anisotropic structure of the crust and upper mantle. In the inversions of Rayleigh- and Love-wave dispersion curves for radially anisotropic structure, we obtain probabilistic 1D radially anisotropic shear-velocity profiles of the isotropic average Vs and anisotropy (or Vsv and Vsh) as functions of depth. In the inversions for azimuthal anisotropy, Rayleigh-wave dispersion curves at different azimuths are inverted for the vertically polarized shear-velocity structure (Vsv) and the 2φ component of azimuthal anisotropy. The strength and originality of the method lie in its fully non-linear approach. Each model realization is computed using exact forward calculations. The uncertainty of the models is a part of the output. In the inversions for azimuthal anisotropy, in particular, the computation of the forward problem is performed separately at different azimuths, with no linear approximations on the relation of the Earth's elastic parameters to surface wave phase velocities. The computations are performed in parallel in order to reduce the computing time. We compare inversions of the fundamental mode phase-velocity curves alone with inversions that also include overtones. The addition of higher modes enhances the resolving power of the anisotropic structure of the deep upper mantle. We apply the inversion method to phase-velocity curves in a few regions, including the Hangai dome region in Mongolia. Our models provide constraints on the Moho depth, the Lithosphere-Asthenosphere Boundary, and the alignment of the anisotropic fabric and the direction of current and past flow, from the crust down to the deep asthenosphere.
Solow, Daniel
2014-01-01
This text covers the basic theory and computation for a first course in linear programming, including substantial material on mathematical proof techniques and sophisticated computation methods. Includes Appendix on using Excel. 1984 edition.
Liesen, Jörg
2015-01-01
This self-contained textbook takes a matrix-oriented approach to linear algebra and presents a complete theory, including all details and proofs, culminating in the Jordan canonical form and its proof. Throughout the development, the applicability of the results is highlighted. Additionally, the book presents special topics from applied linear algebra including matrix functions, the singular value decomposition, the Kronecker product and linear matrix equations. The matrix-oriented approach to linear algebra leads to a better intuition and a deeper understanding of the abstract concepts, and therefore simplifies their use in real world applications. Some of these applications are presented in detailed examples. In several ‘MATLAB-Minutes’ students can comprehend the concepts and results using computational experiments. Necessary basics for the use of MATLAB are presented in a short introduction. Students can also actively work with the material and practice their mathematical skills in more than 300 exerc...
Berberian, Sterling K
2014-01-01
Introductory treatment covers basic theory of vector spaces and linear maps - dimension, determinants, eigenvalues, and eigenvectors - plus more advanced topics such as the study of canonical forms for matrices. 1992 edition.
Searle, Shayle R
2012-01-01
This 1971 classic on linear models is once again available--as a Wiley Classics Library Edition. It features material that can be understood by any statistician who understands matrix algebra and basic statistical methods.
Christofilos, N.C.; Polk, I.J.
1959-02-17
Improvements in linear particle accelerators are described. A drift tube system for a linear ion accelerator reduces gap capacity between adjacent drift tube ends. This is accomplished by reducing the ratio of the diameter of the drift tube to the diameter of the resonant cavity. Concentration of magnetic field intensity at the longitudinal midpoint of the external surface of each drift tube is reduced by increasing the external drift tube diameter at the longitudinal center region.
Point-source inversion techniques
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
Handling of impact forces in inverse dynamics
Bisseling, Rob W.; Hof, At L.
2006-01-01
In the standard inverse dynamic method, joint moments are assessed from ground reaction force data and position data, where segmental accelerations are calculated by numerical differentiation of position data after low-pass filtering. This method falls short in analyzing the impact phase, e.g.
Improving Inversions of the Overlap Operator
Krieg, S.; Cundy, N.; Eshof, J. van den; Frommer, A.; Lippert, Th.; Schaefer, K.
2005-01-01
We present relaxation and preconditioning techniques which accelerate the inversion of the overlap operator by a factor of four on small lattices, with larger gains as the lattice size increases. These improvements can be used in both propagator calculations and dynamical simulations
The seismic reflection inverse problem
Symes, W W
2009-01-01
The seismic reflection method seeks to extract maps of the Earth's sedimentary crust from transient near-surface recording of echoes, stimulated by explosions or other controlled sound sources positioned near the surface. Reasonably accurate models of seismic energy propagation take the form of hyperbolic systems of partial differential equations, in which the coefficients represent the spatial distribution of various mechanical characteristics of rock (density, stiffness, etc). Thus the fundamental problem of reflection seismology is an inverse problem in partial differential equations: to find the coefficients (or at least some of their properties) of a linear hyperbolic system, given the values of a family of solutions in some part of their domains. The exploration geophysics community has developed various methods for estimating the Earth's structure from seismic data and is also well aware of the inverse point of view. This article reviews mathematical developments in this subject over the last 25 years, to show how the mathematics has both illuminated innovations of practitioners and led to new directions in practice. Two themes naturally emerge: the importance of single scattering dominance and compensation for spectral incompleteness by spatial redundancy. (topical review)
Linearized inversion frameworks toward high-resolution seismic imaging
Aldawood, Ali
2016-01-01
installed along the earth surface or down boreholes. Seismic imaging is a powerful tool to map these reflected and scattered energy back to their subsurface scattering or reflection points. Seismic imaging is conventionally based on the single
The linearized inversion of the generalized interferometric multiple imaging
Aldawood, Ali; Hoteit, Ibrahim; Alkhalifah, Tariq Ali
2016-01-01
such as vertical and nearly vertical fault planes, and salt flanks. To image first-order internal multiple, the GIMI framework consists of three datuming steps, followed by applying the zero-lag cross-correlation imaging condition. However, the standard GIMI
Linearization of the Lorenz system
Li, Chunbiao; Sprott, Julien Clinton; Thio, Wesley
2015-01-01
Partial and complete piecewise linearized versions of the Lorenz system are proposed. The linearized versions have an independent total amplitude control parameter. Further linearization leads naturally to a piecewise linear version of the diffusionless Lorenz system. A chaotic circuit with a single amplitude controller is then implemented using a new switch element, producing a chaotic oscillation that agrees with the numerical calculation for the piecewise linear diffusionless Lorenz system. - Highlights: • Partial and complete piecewise linearized versions of the Lorenz system are addressed. • The linearized versions have an independent total amplitude control parameter. • A piecewise linear version of the diffusionless Lorenz system is derived by further linearization. • A corresponding chaotic circuit without any multiplier is implemented for the chaotic oscillation
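The piecewise-linear idea can be sketched numerically. The signum/absolute-value system below is an assumed stand-in patterned on the abstract's description (replacing smooth nonlinearities of the diffusionless Lorenz system with sgn and abs), not necessarily the authors' exact equations:

```python
# Assumed piecewise-linear diffusionless-Lorenz-type system (a sketch):
#   dx/dt = y - x,  dy/dt = -z*sgn(x),  dz/dt = |x| - 1
def sgn(v):
    return (v > 0) - (v < 0)

def deriv(s):
    x, y, z = s
    return (y - x, -z * sgn(x), abs(x) - 1.0)

def rk4_step(s, dt):
    # Classical 4th-order Runge-Kutta step.
    k1 = deriv(s)
    k2 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k1)))
    k3 = deriv(tuple(si + 0.5 * dt * ki for si, ki in zip(s, k2)))
    k4 = deriv(tuple(si + dt * ki for si, ki in zip(s, k3)))
    return tuple(si + dt / 6.0 * (a + 2 * b + 2 * c + d)
                 for si, a, b, c, d in zip(s, k1, k2, k3, k4))

state = (1.0, 0.5, 0.5)
peak = 0.0
for _ in range(20000):          # integrate to t = 200 with dt = 0.01
    state = rk4_step(state, 0.01)
    peak = max(peak, max(abs(v) for v in state))
print(peak)  # trajectory remains bounded, consistent with an attractor
```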
Linearization of the Lorenz system
Li, Chunbiao, E-mail: goontry@126.com [School of Electronic & Information Engineering, Nanjing University of Information Science & Technology, Nanjing 210044 (China); Engineering Technology Research and Development Center of Jiangsu Circulation Modernization Sensor Network, Jiangsu Institute of Commerce, Nanjing 211168 (China); Sprott, Julien Clinton [Department of Physics, University of Wisconsin–Madison, Madison, WI 53706 (United States); Thio, Wesley [Department of Electrical and Computer Engineering, The Ohio State University, Columbus, OH 43210 (United States)
2015-05-08
Partial and complete piecewise linearized versions of the Lorenz system are proposed. The linearized versions have an independent total amplitude control parameter. Further linearization leads naturally to a piecewise linear version of the diffusionless Lorenz system. A chaotic circuit with a single amplitude controller is then implemented using a new switch element, producing a chaotic oscillation that agrees with the numerical calculation for the piecewise linear diffusionless Lorenz system. - Highlights: • Partial and complete piecewise linearized versions of the Lorenz system are addressed. • The linearized versions have an independent total amplitude control parameter. • A piecewise linear version of the diffusionless Lorenz system is derived by further linearization. • A corresponding chaotic circuit without any multiplier is implemented for the chaotic oscillation.
Recursive Matrix Inverse Update On An Optical Processor
Casasent, David P.; Baranoski, Edward J.
1988-02-01
A high accuracy optical linear algebraic processor (OLAP) using the digital multiplication by analog convolution (DMAC) algorithm is described for use in an efficient matrix inverse update algorithm with speed and accuracy advantages. The solution for the parameters in the algorithm is addressed, and the advantages of optical over digital linear algebraic processors are advanced.
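The standard numerical identity behind recursive rank-one inverse updates is the Sherman-Morrison formula; the sketch below shows that identity in plain Python (the optical DMAC processor itself is not modeled here):

```python
# Sherman-Morrison rank-one inverse update:
# (A + u v^T)^-1 = A^-1 - (A^-1 u v^T A^-1) / (1 + v^T A^-1 u)
def mat_vec(A, v):
    return [sum(a * x for a, x in zip(row, v)) for row in A]

def sherman_morrison(A_inv, u, v):
    Au = mat_vec(A_inv, u)                          # A^-1 u
    vA = mat_vec(list(map(list, zip(*A_inv))), v)   # components of v^T A^-1
    denom = 1.0 + sum(x * y for x, y in zip(v, Au))
    n = len(A_inv)
    return [[A_inv[i][j] - Au[i] * vA[j] / denom for j in range(n)]
            for i in range(n)]

# Example: update the inverse of the 2x2 identity after adding u v^T.
A_inv = [[1.0, 0.0], [0.0, 1.0]]   # inverse of I
u, v = [1.0, 2.0], [0.5, 0.0]
B_inv = sherman_morrison(A_inv, u, v)
# Verify: (I + u v^T) * B_inv should be the identity.
B = [[1.0 + u[0] * v[0], u[0] * v[1]], [u[1] * v[0], 1.0 + u[1] * v[1]]]
check = [[sum(B[i][k] * B_inv[k][j] for k in range(2)) for j in range(2)]
         for i in range(2)]
print(check)  # numerically the 2x2 identity
```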
Olive, David J
2017-01-01
This text covers both multiple linear regression and some experimental design models. The text uses the response plot to visualize the model and to detect outliers, does not assume that the error distribution has a known parametric distribution, develops prediction intervals that work when the error distribution is unknown, suggests bootstrap hypothesis tests that may be useful for inference after variable selection, and develops prediction regions and large sample theory for the multivariate linear regression model that has m response variables. A relationship between multivariate prediction regions and confidence regions provides a simple way to bootstrap confidence regions. These confidence regions often provide a practical method for testing hypotheses. There is also a chapter on generalized linear models and generalized additive models. There are many R functions to produce response and residual plots, to simulate prediction intervals and hypothesis tests, to detect outliers, and to choose response trans...
Alcaraz, J.
2001-01-01
After several years of study, e+e- linear colliders in the TeV range have emerged as the major and optimal high-energy physics projects for the post-LHC era. These notes summarize the present status, from the main accelerator and detector features to their physics potential. The LHC is expected to provide first discoveries in the new energy domain, whereas an e+e- linear collider in the 500 GeV-1 TeV range will be able to complement it to an unprecedented level of precision in all possible areas: Higgs, signals beyond the SM and electroweak measurements. It is evident that the Linear Collider program will constitute a major step in the understanding of the nature of the new physics beyond the Standard Model. (Author) 22 refs
Edwards, Harold M
1995-01-01
In his new undergraduate textbook, Harold M. Edwards proposes a radically new and thoroughly algorithmic approach to linear algebra. Originally inspired by the constructive philosophy of mathematics championed in the 19th century by Leopold Kronecker, the approach is well suited to students in the computer-dominated late 20th century. Each proof is an algorithm described in English that can be translated into the computer language the class is using and put to work solving problems and generating new examples, making the study of linear algebra a truly interactive experience. Designed for a one-semester course, this text adopts an algorithmic approach to linear algebra, giving the student many examples to work through and copious exercises to test their skills and extend their knowledge of the subject. Students at all levels will find much interactive instruction in this text, while teachers will find stimulating examples and methods of approach to the subject.
Some results on inverse scattering
Ramm, A.G.
2008-01-01
A review of some of the author's results in the area of inverse scattering is given. The following topics are discussed: (1) Property C and applications, (2) Stable inversion of fixed-energy 3D scattering data and its error estimate, (3) Inverse scattering with 'incomplete' data, (4) Inverse scattering for inhomogeneous Schroedinger equation, (5) Krein's inverse scattering method, (6) Invertibility of the steps in Gel'fand-Levitan, Marchenko, and Krein inversion methods, (7) The Newton-Sabatier and Cox-Thompson procedures are not inversion methods, (8) Resonances: existence, location, perturbation theory, (9) Born inversion as an ill-posed problem, (10) Inverse obstacle scattering with fixed-frequency data, (11) Inverse scattering with data at a fixed energy and a fixed incident direction, (12) Creating materials with a desired refraction coefficient and wave-focusing properties. (author)
Alternating minimisation for glottal inverse filtering
Bleyer, Ismael Rodrigo; Lybeck, Lasse; Auvinen, Harri; Siltanen, Samuli; Airaksinen, Manu; Alku, Paavo
2017-01-01
A new method is proposed for solving the glottal inverse filtering (GIF) problem. The goal of GIF is to separate an acoustical speech signal into two parts: the glottal airflow excitation and the vocal tract filter. To recover such information one has to deal with a blind deconvolution problem. This ill-posed inverse problem is solved under a deterministic setting, considering unknowns on both sides of the underlying operator equation. A stable reconstruction is obtained using a double regularization strategy, alternating between fixing either the glottal source signal or the vocal tract filter. This enables not only splitting the nonlinear and nonconvex problem into two linear and convex problems, but also allows the use of the best parameters and constraints to recover each variable at a time. This new technique, called alternating minimization glottal inverse filtering (AM-GIF), is compared with two other approaches: Markov chain Monte Carlo glottal inverse filtering (MCMC-GIF), and iterative adaptive inverse filtering (IAIF), using synthetic speech signals. The recent MCMC-GIF has good reconstruction quality but high computational cost. The state-of-the-art IAIF method is computationally fast but its accuracy deteriorates, particularly for speech signals of high fundamental frequency (F0). The results show the competitive performance of the new method: With high F0, the reconstruction quality is better than that of IAIF and close to MCMC-GIF while reducing the computational complexity by two orders of magnitude. (paper)
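The alternating strategy (fix one factor of the convolution, update the other) can be sketched on a tiny blind deconvolution problem. Here each subproblem solve is replaced by plain gradient steps, an assumption made to keep the example dependency-free; it is not the AM-GIF algorithm itself:

```python
# Alternating minimization for s = g * h (1-D convolution), toy scale.
def conv(g, h):
    out = [0.0] * (len(g) + len(h) - 1)
    for i, gi in enumerate(g):
        for j, hj in enumerate(h):
            out[i + j] += gi * hj
    return out

def residual(s, g, h):
    return [si - yi for si, yi in zip(s, conv(g, h))]

def grad(fixed, r, m):
    # Gradient of 0.5*||s - fixed*x||^2 w.r.t. the length-m factor x
    # (a correlation of the fixed factor with the residual).
    return [-sum(fixed[i] * r[i + k] for i in range(len(fixed)))
            for k in range(m)]

s = conv([1.0, -0.5, 0.3, 0.1], [1.0, 0.6])   # "observed" signal
g = [0.5, 0.0, 0.0, 0.0]                       # initial excitation guess
h = [0.5, 0.0]                                 # initial filter guess
mis0 = sum(r * r for r in residual(s, g, h))
for _ in range(500):
    r = residual(s, g, h)
    h = [hk - 0.1 * dk for hk, dk in zip(h, grad(g, r, len(h)))]
    r = residual(s, g, h)
    g = [gk - 0.1 * dk for gk, dk in zip(g, grad(h, r, len(g)))]
mis1 = sum(r * r for r in residual(s, g, h))
print(mis0, mis1)  # misfit decreases under alternation
```

Note the inherent scale ambiguity of blind deconvolution: (g, h) and (a·g, h/a) give the same misfit, which is one reason the full method needs regularization on both unknowns.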
Wang, T.
2017-05-26
Elastic full waveform inversion (EFWI) provides high-resolution parameter estimation of the subsurface but requires a good initial guess of the true model. Traveltime inversion only minimizes traveltime misfits, which are more sensitive and linearly related to the low-wavenumber model perturbation. Therefore, building initial P- and S-wave velocity models for EFWI by using elastic wave-equation reflection traveltime inversion (WERTI) would be effective and robust, especially for the deeper part. In order to distinguish the reflection traveltimes of P- or S-waves in elastic media, we decompose the surface multicomponent data into vector P- and S-wave seismograms. We utilize dynamic image warping to extract the reflected P- or S-wave traveltimes. The P-wave velocity is first inverted using P-wave traveltimes, followed by the S-wave velocity inversion with S-wave traveltimes, during which the wave-mode decomposition is applied to the gradient calculation. A synthetic example on the Sigsbee2A model demonstrates the validity of our method for recovering the long-wavelength components of the model.
A linearization of quantum channels
Crowder, Tanner
2015-06-01
Because the quantum channels form a compact, convex set, we can express any quantum channel as a convex combination of extremal channels. We give a Euclidean representation for the channels whose inverses are also valid channels; these are a subset of the extreme points. They form a compact, connected Lie group, and we calculate its Lie algebra. Lastly, we calculate a maximal torus for the group and provide a constructive approach to decomposing any invertible channel into a product of elementary channels.
Inversion interpretation of the mise-a-la-masse data; Denryu den'i ho data no inversion kaiseki
Okuno, M; Hatanaka, H; Mizunaga, H; Ushijima, K [Kyushu University, Fukuoka (Japan). Faculty of Engineering
1996-05-01
A program was developed for the inversion interpretation of mise-a-la-masse data and was applied to a numerical model experiment and to data obtained by actual probing. For the development of this program, a program that calculates by finite difference approximation the potential produced by a linear current source was used, and studies were made of forward interpretation, inversion interpretation of the acquired apparent resistivity data, comparison with the true solution, accuracy and tendency, and the limitations. In the simulation of a horizontal 2-layer model, the parameter values converged after 20 iterations with a deviation of 1% or lower. This program was applied to the data from probing the Hatchobara district, Oita Prefecture, using a model wherein the target area was divided into 5 blocks from east to west and into 2 in the direction of depth. The result suggested that there was a large-scale low-resistivity body deep in the ground in the southeastern part of the investigated area. Furthermore, there was a spot detected in the direction of east-northeast that suggested an electric structure continuous in the direction of depth and a fault-like structure discontinuous in the transverse direction. 7 refs., 9 figs.
Kinetic equation solution by inverse kinetic method
Salas, G.
1983-01-01
We propose a computer program (CAMU) that solves the inverse kinetic equation. The CAMU code is written in HPL language for an HP 982 A microcomputer with a peripheral interface HP 9876 A ''thermal graphic printer''. The CAMU code solves the inverse kinetic equation by taking as input the output of the ionization chambers and integrating the equation with the help of the Simpson method. With this program we calculate the time evolution of the reactivity for a given disturbance.
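The inverse-kinetics-plus-Simpson combination can be sketched with the one-delayed-group form of the point kinetics equations. The kinetic constants below are illustrative, and equilibrium initial precursors are assumed; this is a sketch of the kind of computation a code like CAMU performs, not CAMU itself:

```python
import math

# One-delayed-group inverse point kinetics (assuming equilibrium initial
# precursors; beta, lam, LAM values are illustrative):
#   rho(t) = LAM*n'(t)/n(t)
#            + beta*(1 - (lam*I(t) + n(0)*exp(-lam*t)) / n(t)),
#   I(t) = integral_0^t exp(-lam*(t - tau)) n(tau) dtau  (Simpson's rule)
beta, lam, LAM = 0.0065, 0.08, 1e-4

def simpson(f, a, b, m=200):           # m must be even
    h = (b - a) / m
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, m // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, m // 2))
    return s * h / 3

def reactivity(n, dn, t):
    I = simpson(lambda tau: math.exp(-lam * (t - tau)) * n(tau), 0.0, t)
    return LAM * dn(t) / n(t) + beta * (
        1.0 - (lam * I + n(0.0) * math.exp(-lam * t)) / n(t))

# Sanity check: constant power with equilibrium precursors gives rho = 0.
rho = reactivity(lambda t: 1.0, lambda t: 0.0, 10.0)
print(abs(rho) < 1e-6)
```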
Introduction to the mathematics of inversion in remote sensing and indirect measurements
Twomey, S
2013-01-01
Developments in Geomathematics, 3: Introduction to the Mathematics of Inversion in Remote Sensing and Indirect Measurements focuses on the application of the mathematics of inversion in remote sensing and indirect measurements, including vectors and matrices, eigenvalues and eigenvectors, and integral equations. The publication first examines simple problems involving inversion, theory of large linear systems, and physical and geometric aspects of vectors and matrices. Discussions focus on geometrical view of matrix operations, eigenvalues and eigenvectors, matrix products, inverse of a matrix, transposition and rules for product inversion, and algebraic elimination. The manuscript then tackles the algebraic and geometric aspects of functions and function space and linear inversion methods, as well as the algebraic and geometric nature of constrained linear inversion, least squares solution, approximation by sums of functions, and integral equations. The text examines information content of indirect sensing m...
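The constrained linear inversion described above (least squares with a smoothness constraint, in the Twomey sense) can be sketched as a small regularized normal-equations solve. The tiny kernel G, data d, and weight gamma below are invented for illustration:

```python
# Constrained linear inversion: minimize ||G x - d||^2 + gamma*||H x||^2,
# with H a second-difference (smoothness) operator.
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

def transpose(A):
    return [list(c) for c in zip(*A)]

def solve(A, b):
    # Gaussian elimination with partial pivoting on a small dense system.
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k]
                              for k in range(r + 1, n))) / M[r][r]
    return x

G = [[1.0, 1.0, 0.0], [0.0, 1.0, 1.0], [1.0, 0.0, 1.0], [1.0, 1.0, 1.0]]
d = [2.0, 3.0, 3.0, 4.0]          # consistent with x_true = [1, 1, 2]
H = [[1.0, -2.0, 1.0]]            # second-difference constraint
gamma = 1e-3
Gt, Ht = transpose(G), transpose(H)
A = matmul(Gt, G)
P = matmul(Ht, H)
A = [[A[i][j] + gamma * P[i][j] for j in range(3)] for i in range(3)]
b = [sum(Gt[i][k] * d[k] for k in range(4)) for i in range(3)]
x = solve(A, b)
print([round(v, 2) for v in x])  # close to the truth [1, 1, 2]
```

With a small gamma the constraint barely perturbs the well-posed solution; for genuinely ill-posed kernels the same machinery stabilizes an otherwise wildly oscillating inverse.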
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander
2014-01-06
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
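The linear Bayesian update that this line of work generalizes is the familiar Kalman/conditional-expectation formula; a minimal scalar sketch (all numbers illustrative):

```python
# Linear Bayesian update: prior N(m, c), observation y = h*x + noise with
# noise variance r; the posterior follows from the Kalman gain.
def bayes_update(m, c, y, h, r):
    k = c * h / (h * h * c + r)        # Kalman gain
    return m + k * (y - h * m), (1.0 - k * h) * c

m_post, c_post = bayes_update(m=0.0, c=4.0, y=2.0, h=1.0, r=1.0)
print(m_post, c_post)  # -> 1.6 0.8: mean pulled toward the data,
                       # posterior variance below the prior's
```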
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander; Matthies, Hermann G.
2014-01-01
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Inverse problems and uncertainty quantification
Litvinenko, Alexander
2013-12-18
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) — the propagation of uncertainty through a computational (forward) model — are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as together with a functional or spectral approach for the forward UQ there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
Pericentric inversion of chromosome 12; a three family study
Haagerup, Annette; Hertz, Jens Michael
1992-01-01
A pericentric inversion of chromosome 12 has been followed in three large independently ascertained Danish families. Out of a total number of 52 persons examined, 25 were found to carry the inversion. The breakpoints in all three families were localized to p13 and q13, resulting in more than one… rate is calculated to be 0.58, which is not significantly different from an expected segregation rate of 0.5. In family 3, an additional inversion of a chromosome 9 has been found in 4 individuals. Our results are discussed in relation to previous findings and with respect to the genetic counselling of families with pericentric inversions.
Moisan, John R.; Moisan, Tiffany A. H.; Linkswiler, Matthew A.
2011-01-01
Phytoplankton absorption spectra and High-Performance Liquid Chromatography (HPLC) pigment observations from the Eastern U.S. and global observations from NASA's SeaBASS archive are used in a linear inverse calculation to extract pigment-specific absorption spectra. Using these pigment-specific absorption spectra to reconstruct the phytoplankton absorption spectra results in high correlations at all visible wavelengths (r² from 0.83 to 0.98), and linear regressions (slopes ranging from 0.8 to 1.1). Higher correlations (r² from 0.75 to 1.00) are obtained in the visible portion of the spectra when the total phytoplankton absorption spectra are unpackaged by multiplying the entire spectra by a factor that sets the total absorption at 675 nm to that expected from absorption spectra reconstruction using measured pigment concentrations and laboratory-derived pigment-specific absorption spectra. The derived pigment-specific absorption spectra were further used with the total phytoplankton absorption spectra in a second linear inverse calculation to estimate the various phytoplankton HPLC pigments. A comparison between the estimated and measured pigment concentrations for the 18 pigment fields showed good correlations (r² greater than 0.5) for 7 pigments and very good correlations (r² greater than 0.7) for chlorophyll a and fucoxanthin. Higher correlations result when the analysis is carried out at more local geographic scales. The ability to estimate phytoplankton pigments using pigment-specific absorption spectra is critical for using hyperspectral inverse models to retrieve phytoplankton pigment concentrations and other Inherent Optical Properties (IOPs) from passive remote sensing observations.
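The second linear inverse step (total absorption modeled as a concentration-weighted sum of pigment-specific spectra) can be sketched as a small least-squares unmixing. The spectra and concentrations below are invented for illustration, not measured values:

```python
# Linear unmixing: a_tot(l) = sum_j c_j * a_j(l), solved for c by least
# squares via the 2x2 normal equations A^T A c = A^T a.
A = [[0.9, 0.1],   # rows: wavelengths, cols: pigment-specific absorption
     [0.5, 0.4],
     [0.1, 0.8]]
c_true = [2.0, 1.0]
a_tot = [sum(A[i][j] * c_true[j] for j in range(2)) for i in range(3)]

AtA = [[sum(A[k][i] * A[k][j] for k in range(3)) for j in range(2)]
       for i in range(2)]
Atb = [sum(A[k][i] * a_tot[k] for k in range(3)) for i in range(2)]
det = AtA[0][0] * AtA[1][1] - AtA[0][1] * AtA[1][0]
c = [(AtA[1][1] * Atb[0] - AtA[0][1] * Atb[1]) / det,
     (AtA[0][0] * Atb[1] - AtA[1][0] * Atb[0]) / det]
print([round(v, 6) for v in c])  # recovers the true concentrations [2, 1]
```

In practice a non-negativity constraint on the concentrations is often added, which turns the solve into a constrained least-squares problem.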
Inverse Kinematic Analysis Of A Quadruped Robot
Muhammed Arif Sen
2017-09-01
This paper presents an inverse kinematics program for a quadruped robot. Kinematic analysis is a main problem for manipulators and robots. The dynamic and kinematic structures of quadruped robots are very complex compared to industrial and wheeled robots. In this study, inverse kinematics solutions for a quadruped robot with 3 degrees of freedom on each leg are presented. The Denavit-Hartenberg (D-H) method is used for the forward kinematics. The inverse kinematic equations obtained by geometrical and mathematical methods are coded in MATLAB, yielding a program that calculates the leg joint angles corresponding to various desired orientations of the robot and endpoints of the legs. The program also provides the body orientations of the robot in graphical form. The angular positions of the joints obtained for the desired orientations of the robot and endpoints of the legs are given in this study.
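The geometric approach can be sketched on a reduced planar 2-link leg (the paper's legs have 3 DOF; the link lengths here are illustrative). Forward kinematics provides the round-trip check:

```python
import math

# Geometric inverse kinematics for a planar 2-link leg (a reduced sketch).
L1, L2 = 0.30, 0.25   # illustrative link lengths (m)

def ik(x, y):
    """Return (hip, knee) joint angles reaching foot position (x, y)."""
    d2 = x * x + y * y
    c2 = (d2 - L1 * L1 - L2 * L2) / (2 * L1 * L2)   # law of cosines
    knee = math.acos(max(-1.0, min(1.0, c2)))       # one elbow branch
    hip = math.atan2(y, x) - math.atan2(L2 * math.sin(knee),
                                        L1 + L2 * math.cos(knee))
    return hip, knee

def fk(hip, knee):
    """Forward kinematics, used to verify the IK solution."""
    return (L1 * math.cos(hip) + L2 * math.cos(hip + knee),
            L1 * math.sin(hip) + L2 * math.sin(hip + knee))

x, y = fk(0.4, 0.7)       # pick a reachable target
hip, knee = ik(x, y)
print(round(hip, 6), round(knee, 6))  # round-trip recovers 0.4, 0.7
```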
Heitz, Eric
2017-01-01
We present a geometric method for computing an ellipse that subtends the same solid-angle domain as an arbitrarily positioned ellipsoid. With this method we can extend existing analytical solid-angle calculations of ellipses to ellipsoids. Our idea consists of applying a linear transformation on the ellipsoid such that it is transformed into a sphere from which a disk that covers the same solid-angle domain can be computed. We demonstrate that by applying the inverse linear transformation on this disk we obtain an ellipse that subtends the same solid-angle domain as the ellipsoid. We provide a MATLAB implementation of our algorithm and we validate it numerically.
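At the heart of the method is the sphere special case reached after the linear transformation: a sphere of radius r seen from distance d subtends the classical solid angle Omega = 2*pi*(1 - cos(theta)) with sin(theta) = r/d. The sketch below only validates that closed form by Monte Carlo ray sampling; the ellipsoid-to-sphere transform itself is not implemented:

```python
import math, random

r, d = 1.0, 2.0
omega_analytic = 2 * math.pi * (1 - math.sqrt(1 - (r / d) ** 2))

random.seed(0)
hits, n = 0, 200000
cos_half = math.sqrt(1 - (r / d) ** 2)   # cosine of silhouette half-angle
for _ in range(n):
    # For uniform directions on the unit sphere, the z-component is
    # uniform in [-1, 1]; a direction hits the sphere (centered on the
    # +z axis at distance d) iff its z-component exceeds cos(theta).
    z = random.uniform(-1.0, 1.0)
    if z > cos_half:
        hits += 1
omega_mc = 4 * math.pi * hits / n
print(omega_analytic, omega_mc)  # the two estimates agree closely
```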
Heitz, Eric, E-mail: eheitz.research@gmail.com
2017-04-21
We present a geometric method for computing an ellipse that subtends the same solid-angle domain as an arbitrarily positioned ellipsoid. With this method we can extend existing analytical solid-angle calculations of ellipses to ellipsoids. Our idea consists of applying a linear transformation on the ellipsoid such that it is transformed into a sphere from which a disk that covers the same solid-angle domain can be computed. We demonstrate that by applying the inverse linear transformation on this disk we obtain an ellipse that subtends the same solid-angle domain as the ellipsoid. We provide a MATLAB implementation of our algorithm and we validate it numerically.
Karloff, Howard
1991-01-01
To this reviewer’s knowledge, this is the first book accessible to the upper division undergraduate or beginning graduate student that surveys linear programming from the Simplex Method…via the Ellipsoid algorithm to Karmarkar’s algorithm. Moreover, its point of view is algorithmic and thus it provides both a history and a case history of work in complexity theory. The presentation is admirable; Karloff's style is informal (even humorous at times) without sacrificing anything necessary for understanding. Diagrams (including horizontal brackets that group terms) aid in providing clarity. The end-of-chapter notes are helpful...Recommended highly for acquisition, since it is not only a textbook, but can also be used for independent reading and study. —Choice Reviews The reader will be well served by reading the monograph from cover to cover. The author succeeds in providing a concise, readable, understandable introduction to modern linear programming. —Mathematics of Computing This is a textbook intend...
Preliminary isodose calculation for gynecological curietherapy
Bridier, A.; Dutreix, A.; Gerbaulet, A.; Chassagne, D.
1981-01-01
We present a preliminary method of calculating the dimensions of the reference isodose, based upon the geometrical distribution and length of the sources used, their linear activity and the length of treatment, that does not require use of a computer. Inversely, this method can be used to determine the factors necessary to produce a given shape of isodose, and also to predict the change in shape of the isodose that will be produced by altering the various factors. This method was derived from a systematic computer study of dose distribution in which each factor was varied independently of all others. The dimensions of the isodoses, calculated by this method, were found to be in agreement with those derived from computer calculation to within an error of about 2 mm. The method is only applicable for a limited range of positions of the vaginal sources. The influence of the positions of these sources along the line of the axis of uterine catheter and of their inclination to this line, are currently being studied. The results are presented as mathematical formulae relating each dimension of the isodose curves to the features of the application, but could equally well be expressed in tabular form that would be more convenient for everyday use. An example of the calculation used is given to facilitate understanding of the method
Unwrapped phase inversion with an exponential damping
Choi, Yun Seok
2015-07-28
Full-waveform inversion (FWI) suffers from the phase wrapping (cycle skipping) problem when the frequency of data is not low enough. Unless we obtain a good initial velocity model, the phase wrapping problem in FWI causes a result corresponding to a local minimum, usually far away from the true solution, especially at depth. Thus, we have developed an inversion algorithm based on a space-domain unwrapped phase, and we also used exponential damping to mitigate the nonlinearity associated with the reflections. We construct the 2D phase residual map, which usually contains the wrapping discontinuities, especially if the model is complex and the frequency is high. We then unwrap the phase map and remove these cycle-based jumps. However, if the phase map has several residues, the unwrapping process becomes very complicated. We apply a strong exponential damping to the wavefield to eliminate much of the residues in the phase map, thus making the unwrapping process simple. We finally invert the unwrapped phases using the back-propagation algorithm to calculate the gradient. We progressively reduce the damping factor to obtain a high-resolution image. Numerical examples determined that the unwrapped phase inversion with a strong exponential damping generated convergent long-wavelength updates without low-frequency information. This model can be used as a good starting model for a subsequent inversion with a reduced damping, eventually leading to conventional waveform inversion.
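The space-domain unwrapping step can be illustrated on a 1-D phase sequence (the damping and back-propagation machinery of the full method is not modeled here): remove the 2*pi jumps so the phase varies smoothly.

```python
import math

# Minimal 1-D phase unwrapping: accumulate principal-value differences.
def unwrap(phases):
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        d -= 2 * math.pi * round(d / (2 * math.pi))  # principal difference
        out.append(out[-1] + d)
    return out

# A linearly increasing "true" phase, observed wrapped into (-pi, pi].
true = [0.7 * k for k in range(12)]
wrapped = [math.atan2(math.sin(p), math.cos(p)) for p in true]
recovered = unwrap(wrapped)
err = max(abs(a - b) for a, b in zip(recovered, true))
print(err)  # essentially zero
```

This succeeds because adjacent samples differ by less than pi; residues in 2-D phase maps break exactly this assumption, which is what the strong exponential damping is used to suppress.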
Electrochemically driven emulsion inversion
Johans, Christoffer; Kontturi, Kyösti
2007-09-01
It is shown that emulsions stabilized by ionic surfactants can be inverted by controlling the electrical potential across the oil-water interface. The potential dependent partitioning of sodium dodecyl sulfate (SDS) was studied by cyclic voltammetry at the 1,2-dichlorobenzene|water interface. In the emulsion the potential control was achieved by using a potential-determining salt. The inversion of a 1,2-dichlorobenzene-in-water (O/W) emulsion stabilized by SDS was followed by conductometry as a function of added tetrapropylammonium chloride. A sudden drop in conductivity was observed, indicating the change of the continuous phase from water to 1,2-dichlorobenzene, i.e. a water-in-1,2-dichlorobenzene emulsion was formed. The inversion potential is well in accordance with that predicted by the hydrophilic-lipophilic deviation if the interfacial potential is appropriately accounted for.
Gale, A.S.; Surlyk, Finn; Anderskouv, Kresten
2013-01-01
Evidence from regional stratigraphical patterns in Santonian−Campanian chalk is used to infer the presence of a very broad channel system (5 km across) with a depth of at least 50 m, running NNW−SSE across the eastern Isle of Wight; only the western part of the channel wall and fill is exposed. … Santonian−Campanian chalks in the eastern Isle of Wight, involving penecontemporaneous tectonic inversion of the underlying basement structure, are rejected.
Reactivity in inverse micelles
Brochette, Pascal
1987-01-01
This research thesis reports on the use of water-in-oil microemulsions as a reaction medium. Only the 'inverse micelles' domain of the ternary mixture (water/AOT/isooctane) has been studied. The main issues addressed are: the perturbation of the microemulsion in the presence of reactants, the determination of reactant distribution and the resulting kinetic theory, the effect of the interface on electron transfer reactions, and finally protein solubilization.
Ensemble Kalman methods for inverse problems
Iglesias, Marco A; Law, Kody J H; Stuart, Andrew M
2013-01-01
The ensemble Kalman filter (EnKF) was introduced by Evensen in 1994 (Evensen 1994 J. Geophys. Res. 99 10143–62) as a novel method for data assimilation: state estimation for noisily observed time-dependent problems. Since that time it has had enormous impact in many application domains because of its robustness and ease of implementation, and numerical evidence of its accuracy. In this paper we propose the application of an iterative ensemble Kalman method for the solution of a wide class of inverse problems. In this context we show that the estimate of the unknown function that we obtain with the ensemble Kalman method lies in a subspace A spanned by the initial ensemble. Hence the resulting error may be bounded above by the error found from the best approximation in this subspace. We provide numerical experiments which compare the error incurred by the ensemble Kalman method for inverse problems with the error of the best approximation in A, and with variants on traditional least-squares approaches, restricted to the subspace A. In so doing we demonstrate that the ensemble Kalman method for inverse problems provides a derivative-free optimization method with comparable accuracy to that achieved by traditional least-squares approaches. Furthermore, we also demonstrate that the accuracy is of the same order of magnitude as that achieved by the best approximation. Three examples are used to demonstrate these assertions: inversion of a compact linear operator; inversion of piezometric head to determine hydraulic conductivity in a Darcy model of groundwater flow; and inversion of Eulerian velocity measurements at positive times to determine the initial condition in an incompressible fluid. (paper)
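The iterative ensemble Kalman update can be sketched on a one-parameter linear toy problem. The forward map, noise level, and ensemble size below are illustrative assumptions, not the paper's examples:

```python
import random

# Ensemble Kalman inversion, scalar linear toy: unknown u, forward map
# G(u) = 2*u, datum y with observation-noise variance gam.
random.seed(1)
g_op, gam = 2.0, 0.01
u_true = 3.0
y = g_op * u_true                       # noise-free datum for simplicity

ens = [random.gauss(0.0, 1.0) for _ in range(50)]   # prior ensemble
for _ in range(10):                     # iterated EnKF updates
    G = [g_op * u for u in ens]
    mu, mG = sum(ens) / len(ens), sum(G) / len(G)
    c_ug = sum((u - mu) * (g - mG) for u, g in zip(ens, G)) / (len(ens) - 1)
    c_gg = sum((g - mG) ** 2 for g in G) / (len(ens) - 1)
    k = c_ug / (c_gg + gam)             # Kalman gain from ensemble stats
    ens = [u + k * (y + random.gauss(0.0, gam ** 0.5) - g_op * u)
           for u in ens]                # perturbed-observation update
mean = sum(ens) / len(ens)
print(mean)  # ensemble mean close to u_true = 3
```

Consistent with the paper's subspace property, every updated member is a linear combination of the initial ensemble (trivially so in one dimension).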
Steinhauer, L.C.; Romea, R.D.; Kimura, W.D.
1997-01-01
A new method for laser acceleration is proposed based upon the inverse process of transition radiation. The laser beam intersects an electron beam traveling between two thin foils. The principle of this acceleration method is explored in terms of its classical and quantum bases and its inverse process. A closely related concept based on the inverse of diffraction radiation is also presented: this concept has the significant advantage that apertures are used to allow free passage of the electron beam. These concepts can produce net acceleration because they do not satisfy the conditions under which the Lawson-Woodward theorem applies (no net acceleration in an unbounded vacuum). Finally, practical constraints such as damage limits of the optics are used to find an optimized set of parameters. For reasonable assumptions an acceleration gradient of 200 MeV/m requiring a laser power of less than 1 GW is projected. An interesting approach to multi-staging the acceleration sections is also presented. copyright 1997 American Institute of Physics
Intersections, ideals, and inversion
Vasco, D.W.
1998-01-01
Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded above by 50. The best fitting structure is dominantly one-dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons.
Assigning uncertainties in the inversion of NMR relaxation data.
Parker, Robert L; Song, Yi-Qaio
2005-06-01
Recovering the relaxation-time density function (or distribution) from NMR decay records requires inverting a Laplace transform based on noisy data, an ill-posed inverse problem. An important objective in the face of the consequent ambiguity in the solutions is to establish what reliable information is contained in the measurements. To this end we describe how upper and lower bounds on linear functionals of the density function, and ratios of linear functionals, can be calculated using optimization theory. Those bounded quantities cover most of the quantities commonly used in geophysical NMR, such as porosity, T2 log-mean, and bound fluid volume fraction, and include averages over any finite interval of the density function itself. In the theory presented, statistical considerations account for the presence of significant noise in the signal, but not for a prior characterization of density models. Our characterization of the uncertainties is conservative and informative; it will have wide application in geophysical NMR and elsewhere.
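The bounding idea can be sketched as a pair of linear programs: among all nonnegative densities consistent with the decay data to within a tolerance, minimize and maximize a linear functional. The discretization, kernel, and tolerance below are illustrative assumptions, not the authors' formulation:

```python
import numpy as np
from scipy.optimize import linprog

# Discretized Laplace kernel: decays e^(-t/T2) sampled on a T2 grid.
T2 = np.logspace(-3, 1, 40)                 # relaxation-time grid, s
t = np.linspace(0.0, 2.0, 30)               # measurement times, s
K = np.exp(-t[:, None] / T2[None, :])

# Synthetic "true" density and its noise-free decay data.
f_true = np.exp(-0.5 * np.log(T2 / 0.1) ** 2)
data = K @ f_true
tol = 1e-3 * np.abs(data).max()             # assumed noise-level tolerance

# Functional to bound: the total (porosity-like) integral of the density.
c = np.ones(len(T2))
A_ub = np.vstack([K, -K])                   # data - tol <= K f <= data + tol
b_ub = np.concatenate([data + tol, -(data - tol)])

# linprog's default bounds (0, None) enforce a nonnegative density.
lower = linprog(c, A_ub=A_ub, b_ub=b_ub).fun
upper = -linprog(-c, A_ub=A_ub, b_ub=b_ub).fun
```

Any density consistent with the data, including the true one, has its functional value between `lower` and `upper`.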
Testing earthquake source inversion methodologies
Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data
Contributions to Large Covariance and Inverse Covariance Matrices Estimation
Kang, Xiaoning
2016-01-01
Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...
Observer-dependent sign inversions of polarization singularities.
Freund, Isaac
2014-10-15
We describe observer-dependent sign inversions of the topological charges of vector field polarization singularities: C points (points of circular polarization), L points (points of linear polarization), and two virtually unknown singularities we call γ(C) and α(L) points. In all cases, the sign of the charge seen by an observer can change as she changes the direction from which she views the singularity. Analytic formulas are given for all C and all L point sign inversions.
The shifting zoom: new possibilities for inverse scattering on electrically large domains
Persico, Raffaele; Ludeno, Giovanni; Soldovieri, Francesco; De Coster, Alberic; Lambot, Sebastien
2017-04-01
Inverse scattering is a subject of great interest in diagnostic problems, which are in turn of interest for many applied problems such as the investigation of cultural heritage, the characterization of foundations or subservices, the identification of unexploded ordnance, and so on [1-4]. In particular, GPR data are usually focused by means of migration algorithms, essentially based on a linear approximation of the scattering phenomenon. Migration algorithms are popular because they are computationally efficient and require neither the inversion of a matrix nor the calculation of the elements of a matrix. In fact, they are essentially based on the adjoint of the linearised scattering operator, which in the end allows the inversion formula to be written as a suitably weighted integral of the data [5]. In particular, this makes a migration algorithm more suitable than a linear microwave tomography inversion algorithm for the reconstruction of an electrically large investigation domain. However, this computational challenge can be overcome by making use of investigation domains joined side by side, as proposed e.g. in ref. [3]. This makes it possible to apply a microwave tomography algorithm even to large investigation domains. However, joining sequential investigation domains side by side introduces a problem of a limited (and asymmetric) maximum view angle for targets occurring close to the edges between two adjacent domains, or possibly crossing these edges. The shifting zoom is a method that overcomes this difficulty by means of overlapped investigation and observation domains [6-7]. It requires more sequential inversions than the adjacent-domain approach, but the extra time actually required is minimal because the matrix to be inverted, as well as its singular value decomposition, is calculated once and for all: what is repeated each time is only a fast matrix-vector multiplication. References [1] M. Pieraccini, L. Noferini, D. Mecatti, C
Saeki, Tazu; Patra, Prabir K.
2017-12-01
Measurement and modelling of regional or country-level carbon dioxide (CO2) fluxes are becoming critical for verification of greenhouse gas emission controls. One of the commonly adopted approaches is inverse modelling, where CO2 fluxes (emission: positive flux, sink: negative flux) from the terrestrial ecosystems are estimated by combining atmospheric CO2 measurements with atmospheric transport models. The inverse models assume anthropogenic emissions are known, and thus the uncertainties in the emissions introduce systematic bias in the estimation of the terrestrial (residual) fluxes by inverse modelling. Here we show that the CO2 sink increase, estimated by the inverse model, over East Asia (China, Japan, Korea and Mongolia), by about 0.26 PgC year^-1 (1 Pg = 10^15 g) during 2001-2010, is likely to be an artifact of the anthropogenic CO2 emissions increasing too quickly in China by 1.41 PgC year^-1. Independent results from methane (CH4) inversion suggested an about 41% lower rate of East Asian CH4 emission increase during 2002-2012. We apply a scaling factor of 0.59, based on the CH4 inversion, to the rate of anthropogenic CO2 emission increase, since the anthropogenic emissions of both CO2 and CH4 increase linearly in the emission inventory. We find no systematic increase in land CO2 uptake over East Asia during 1993-2010 or 2000-2009 when the scaled anthropogenic CO2 emissions are used, and that a higher emission increase rate is needed for 2010-2012 compared to those calculated by the inventory methods. A high bias in anthropogenic CO2 emissions leads to stronger land sinks in the global land-ocean flux partitioning in our inverse model. The corrected anthropogenic CO2 emissions also produce measurable reductions in the rate of global land CO2 sink increase post-2002, leading to a better agreement with terrestrial biospheric model simulations that include CO2-fertilization and climate effects.
Connection between Dirac and matrix Schroedinger inverse-scattering transforms
Jaulent, M.; Leon, J.J.P.
1978-01-01
The connection between two applications of the inverse scattering method for solving nonlinear equations is established. The inverse method associated with the massive Dirac system (D): (iσ₃ d/dx − iq₃σ₁ − q₁σ₂ + mσ₂)Y = εY is rediscovered from the inverse method associated with the 2×2 matrix Schroedinger equation (S): Y_xx + (k² − Q)Y = 0. Here Q obeys a nonlinear constraint equivalent to a linear constraint on the reflection coefficient for (S). (author)
Inverse radiative transfer problems in two-dimensional heterogeneous media
Tito, Mariella Janette Berrocal
2001-01-01
The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two-dimensional Cartesian geometry. The Levenberg-Marquardt method has been used for the solution of the inverse problem of internal source and absorption and scattering coefficient estimation. (author)
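A minimal Levenberg-Marquardt iteration of the kind used for such parameter estimation can be sketched as follows; the damping-update rule and the toy exponential-decay model are illustrative assumptions, not the authors' radiative-transfer setup:

```python
import numpy as np

def levenberg_marquardt(residual, jacobian, p0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop (illustrative; no line search)."""
    p = np.asarray(p0, float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        A = J.T @ J + lam * np.eye(p.size)    # damped normal equations
        step = np.linalg.solve(A, -J.T @ r)
        if np.sum(residual(p + step) ** 2) < np.sum(r ** 2):
            p, lam = p + step, lam * 0.5      # accept step: reduce damping
        else:
            lam *= 2.0                        # reject step: increase damping
    return p

# Toy estimation problem: recover amplitude and decay rate from noise-free data.
t = np.linspace(0.0, 4.0, 40)
a_true, k_true = 2.0, 1.3
data = a_true * np.exp(-k_true * t)

res = lambda p: p[0] * np.exp(-p[1] * t) - data
jac = lambda p: np.stack([np.exp(-p[1] * t), -p[0] * t * np.exp(-p[1] * t)], axis=1)

p_est = levenberg_marquardt(res, jac, [1.0, 0.5])
```

The damping parameter interpolates between gradient descent (large lam, robust far from the solution) and Gauss-Newton (small lam, fast near it).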
Inverse planning and optimization: a comparison of solutions
Ringor, Michael [School of Health Sciences, Purdue University, West Lafayette, IN (United States); Papiez, Lech [Department of Radiation Oncology, Indiana University, Indianapolis, IN (United States)
1998-09-01
The basic problem in radiation therapy treatment planning is to determine an appropriate set of treatment parameters that would induce an effective dose distribution inside a patient. One can approach this task as an inverse problem, or as an optimization problem. In this presentation, we compare both approaches. The inverse problem is presented as a dose reconstruction problem similar to tomography reconstruction. We formulate the optimization problem as linear and quadratic programs. Explicit comparisons are made between the solutions obtained by inversion and those obtained by optimization for the case in which scatter and attenuation are ignored (the NS-NA approximation)
Reduction of Linear Programming to Linear Approximation
Vaserstein, Leonid N.
2006-01-01
It is well known that every Chebyshev linear approximation problem can be reduced to a linear program. In this paper we show that conversely every linear program can be reduced to a Chebyshev linear approximation problem.
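The familiar forward direction of this equivalence is easy to sketch in code: a Chebyshev (minimax) line fit posed as a linear program with an auxiliary bound variable t. The data points are illustrative:

```python
import numpy as np
from scipy.optimize import linprog

# Minimax line fit: min t  s.t.  -t <= slope*s_i + intercept - b_i <= t.
s = np.array([0.0, 1.0, 2.0, 3.0])
b = np.array([0.0, 1.0, 2.0, 5.0])           # last point pulls the minimax fit

n = len(s)
# Variables: [slope, intercept, t]; two one-sided constraints per point.
A_ub = np.vstack([np.column_stack([s, np.ones(n), -np.ones(n)]),
                  np.column_stack([-s, -np.ones(n), -np.ones(n)])])
b_ub = np.concatenate([b, -b])
c = np.array([0.0, 0.0, 1.0])                # minimize the error bound t

res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=[(None, None)] * 3)
slope, intercept, t = res.x                  # t is the minimax error
```

For these points the optimum equioscillates on three of them, giving slope 5/3, intercept -2/3 and minimax error 2/3.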
Three-dimensional gravity modeling and focusing inversion using rectangular meshes.
Commer, M.
2011-03-01
Rectangular grid cells are commonly used for the geophysical modeling of gravity anomalies, owing to their flexibility in constructing complex models. The straightforward handling of cubic cells in gravity inversion algorithms allows for a flexible imposition of model regularization constraints, which are generally essential in the inversion of static potential field data. The first part of this paper provides a review of commonly used expressions for calculating the gravity of a right polygonal prism, both for gravity and gradiometry, where the formulas of Plouff and Forsberg are adapted. The formulas can be cast into general forms practical for implementation. In the second part, a weighting scheme for resolution enhancement at depth is presented. When the earth is modelled using highly discretized meshes, depth weighting schemes are typically applied to the model objective functional, subject to minimizing the data misfit. The scheme proposed here involves a non-linear conjugate gradient (NLCG) inversion scheme with a weighting function applied to the gradient vector of the objective functional. The low depth resolution due to the quick decay of the gravity kernel functions is counteracted by suppressing the search directions in the parameter space that would lead to near-surface concentrations of gravity anomalies. Further, a density parameter transformation function enabling the imposition of lower and upper bounding constraints is employed. Using synthetic data from models of varying complexity and a field data set, it is demonstrated that, given an adequate depth weighting function, the gravity inversion in the transform space can recover geologically meaningful models requiring a minimum of prior information and user interaction.
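The depth-weighting idea can be sketched as follows; the Li-Oldenburg-style weight w(z) = (z + z0)^(-beta/2) and all parameter values are assumptions for illustration, not the paper's exact function:

```python
import numpy as np

def depth_weights(z, z0=10.0, beta=2.0):
    """Weight that mimics the ~z^-beta decay of the gravity kernel (assumed form)."""
    return (z + z0) ** (-beta / 2.0)

z = np.linspace(5.0, 500.0, 50)              # cell-centre depths, m
w = depth_weights(z)

# Placeholder misfit gradient that, like the kernel, decays with depth.
grad = -np.exp(-z / 100.0)

# Rescaling the gradient cell-wise relatively boosts deep cells in the
# conjugate-gradient search, suppressing near-surface concentration.
weighted_grad = grad / w**2
```

The ratio weighted_grad/grad grows with depth, so search directions no longer favour piling all anomalous mass near the surface.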
Support Minimized Inversion of Acoustic and Elastic Wave Scattering
Safaeinili, Ali
Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum squared L2 norm without any physical justification. When it is a priori known that objects are compact, as with cracks and voids, choosing a "minimum support" functional instead of the minimum squared L2 norm yields an image that is equally in agreement with the available data, while being more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact, like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using a minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in the imaging process. The results indicate that by using a minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work
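One concrete realization of a minimum-support functional (the Portniaguine-Zhdanov-style stabilizer is assumed here for illustration) shows why it favours compact objects over spread-out ones of equal squared L2 norm:

```python
import numpy as np

def min_support(m, beta=1e-3):
    """Approximates the number of cells where the model m is non-zero."""
    return np.sum(m**2 / (m**2 + beta**2))

compact = np.zeros(100)
compact[48:52] = 1.0                 # 4 cells of amplitude 1   -> ||m||^2 = 4
spread = np.full(100, 0.2)           # 100 cells of amplitude 0.2 -> ||m||^2 = 4

# Equal L2 energy, very different support measure:
print(min_support(compact), min_support(spread))
```

An L2 penalty cannot distinguish the two models, whereas the support functional strongly prefers the compact one, matching the crack/void prior described above.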
Improved algorithm for three-dimensional inverse method
Qiu, Xuwen
An inverse method, which works for full 3D viscous applications in turbomachinery aerodynamic design, is developed. The method takes pressure loading and thickness distribution as inputs and computes the 3D-blade geometry. The core of the inverse method consists of two closely related steps, which are integrated into a time-marching procedure of a Navier-Stokes solver. First, the pressure loading condition is enforced while flow is allowed to cross the blade surfaces. A permeable blade boundary condition is developed here in order to be consistent with the propagation characteristics of the transient Navier-Stokes equations. In the second step, the blade geometry is adjusted so that the flow-tangency condition is satisfied for the new blade. A Non-Uniform Rational B-Spline (NURBS) model is used to represent the span-wise camber curves. The flow-tangency condition is then transformed into a general linear least squares fitting problem, which is solved by a robust Singular Value Decomposition (SVD) scheme. This blade geometry generation scheme allows the designer to have direct control over the smoothness of the calculated blade, and thus ensures the numerical stability during the iteration process. Numerical experiments show that this method is very accurate, efficient and robust. In target-shooting tests, the program was able to converge to the target blade accurately from a different initial blade. The speed of an inverse run is only about 15% slower than its analysis counterpart, which means a complete 3D viscous inverse design can be done in a matter of hours. The method is also proved to work well with the presence of clearance between the blade and the housing, a key factor to be considered in aerodynamic design. The method is first developed for blades without splitters, and is then extended to provide the capability of analyzing and designing machines with splitters. This gives designers an integrated environment where the aerodynamic design of both full
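The least-squares step can be illustrated with a generic truncated-SVD solve; the cubic polynomial below is a stand-in for the NURBS camber parameterization, which is not reproduced here:

```python
import numpy as np

def svd_lstsq(A, b, rcond=1e-10):
    """Least-squares solution of A x ~ b, dropping tiny singular values for robustness."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rcond * s[0]                  # truncate the ill-conditioned directions
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# Fit a cubic "camber-like" curve through sampled values (illustrative model).
x = np.linspace(0.0, 1.0, 30)
A = np.vander(x, 4)
coeff_true = np.array([0.5, -1.0, 0.3, 0.1])
b = A @ coeff_true
coeff = svd_lstsq(A, b)                      # recovers the generating coefficients
```

Truncating small singular values is what keeps the fit numerically stable when the design matrix is nearly rank-deficient, which is the robustness property the text attributes to the SVD scheme.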
Introduction to Schroedinger inverse scattering
Roberts, T.M.
1991-01-01
Schroedinger inverse scattering uses scattering coefficients and bound state data to compute underlying potentials. Inverse scattering has been studied extensively for isolated potentials q(x), which tend to zero as |x| → ∞. Inverse scattering for isolated impurities in backgrounds p(x) that are periodic, are Heaviside steps, are constant for x > 0 and periodic for x < 0, or that tend to zero as x → ∞ and tend to ∞ as x → -∞, has also been studied. This paper identifies literature for the five inverse problems just mentioned, and for four other inverse problems. Heaviside-step backgrounds are discussed at length. (orig.)
Mai, P. M.; Schorlemmer, D.; Page, M.
2012-04-01
Earthquake source inversions image the spatio-temporal rupture evolution on one or more fault planes using seismic and/or geodetic data. Such studies are critically important for earthquake seismology in general, and for advancing seismic hazard analysis in particular, as they reveal earthquake source complexity and help (i) to investigate earthquake mechanics; (ii) to develop spontaneous dynamic rupture models; (iii) to build models for generating rupture realizations for ground-motion simulations. In applications (i)-(iii), the underlying finite-fault source models are regarded as "data" (input information), but their uncertainties are essentially unknown. After all, source models are obtained by solving an inherently ill-posed inverse problem to which many a priori assumptions and uncertain observations are applied. The Source Inversion Validation (SIV) project is a collaborative effort to better understand the variability between rupture models for a single earthquake (as manifested in the finite-source rupture model database) and to develop robust uncertainty quantification for earthquake source inversions. The SIV project highlights the need to develop a long-standing and rigorous testing platform to examine the current state-of-the-art in earthquake source inversion, and to develop and test novel source inversion approaches. We will review the current status of the SIV project, and report the findings and conclusions of the recent workshops. We will briefly discuss several source-inversion methods, how they treat uncertainties in data, and assess the posterior model uncertainty. Case studies include initial forward-modeling tests on Green's function calculations, and inversion results for synthetic data from a spontaneous dynamic crack-like strike-slip earthquake on a steeply dipping fault, embedded in a layered crustal velocity-density structure.
Geostatistical regularization operators for geophysical inverse problems on irregular meshes
Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA
2018-05-01
Irregular meshes make it possible to include complicated subsurface structures in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are defined using only the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and makes it possible to incorporate information about geological structures. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D synthetic surface electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
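The eigendecomposition route can be sketched as follows; the exponential correlation model and its range are illustrative assumptions, not the authors' covariance:

```python
import numpy as np

def geostat_operator(centers, corr_len=50.0):
    """Build C^(-1/2) from a correlation matrix C on (irregular) cell centres,
    so that ||W m||^2 = m^T C^(-1) m acts as the regularization penalty."""
    d = np.linalg.norm(centers[:, None, :] - centers[None, :, :], axis=-1)
    C = np.exp(-d / corr_len)                # assumed a priori correlation model
    w, V = np.linalg.eigh(C)                 # eigendecomposition of the covariance
    w = np.clip(w, 1e-10, None)              # guard against round-off
    return (V * w**-0.5) @ V.T               # symmetric C^(-1/2)

# Irregularly scattered cell centres stand in for an unstructured mesh.
centers = np.random.default_rng(1).uniform(0.0, 200.0, size=(40, 2))
W = geostat_operator(centers)                # regularization operator for the inversion
```

Unlike nearest-neighbour smoothness stencils, every row of W couples a cell to its whole correlated neighbourhood, which is how the geological prior enters.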
Kalman, C.S.; Tran, B.; Hall, R.L.
1987-01-01
The hypothesis that the interquark potential in the baryon is the sum of a Coulomb and a linear potential is evaluated in terms of the model of Isgur and Karl as modified by Kalman, Hall and Misra. Six parameters are used to fit the eight ground-state baryon masses. The closeness of the predicted values to the experimental values verifies the hypothesis.
Source-independent elastic waveform inversion using a logarithmic wavefield
Choi, Yun Seok
2012-01-01
The logarithmic waveform inversion has been widely developed and applied to synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this inversion algorithm, we first normalize the wavefields with a reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions: a combination of amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for a modified version of the elastic Marmousi2 model, and compare the results with those of the source-estimation logarithmic waveform inversion. For the noise-free data, the source-independent algorithms yield velocity models close to the true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data, the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.
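The normalization trick is easy to verify numerically: dividing modeled and observed frequency-domain wavefields by reference traces cancels any common source wavelet before the logarithm is taken. The misfit definitions below are a sketch of the three types described, not the exact implementation:

```python
import numpy as np

def log_misfits(u, d, u_ref, d_ref):
    """Three logarithmic misfits on source-normalized wavefields (sketch)."""
    ru = np.log(u / u_ref)                 # log of normalized modeled field
    rd = np.log(d / d_ref)                 # log of normalized observed field
    full = np.sum(np.abs(ru - rd) ** 2)    # combined amplitude + phase
    amp = np.sum((np.log(np.abs(u / u_ref)) - np.log(np.abs(d / d_ref))) ** 2)
    phase = np.sum((np.angle(u / u_ref) - np.angle(d / d_ref)) ** 2)
    return full, amp, phase

rng = np.random.default_rng(2)
d = rng.standard_normal(8) + 1j * rng.standard_normal(8)   # "observed" spectra
w = 2.0 + 0.5j                             # unknown source wavelet
u = w * d                                  # modeled field carries the wavelet
u_ref, d_ref = w * d[0], d[0]              # reference traces from each dataset

# Normalization removes the wavelet, so all three misfits vanish here:
print(log_misfits(u, d, u_ref, d_ref))
```

Because u/u_ref = d/d_ref whenever the only difference is a multiplicative wavelet, no source estimation step is needed in the inversion.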
Elementary linear programming with applications
Kolman, Bernard
1995-01-01
Linear programming finds the least expensive way to meet given needs with available resources. Its results are used in every area of engineering and commerce: agriculture, oil refining, banking, and air transport. Authors Kolman and Beck present the basic notions of linear programming and illustrate how they are used to solve important common problems. The software on the included disk leads students step-by-step through the calculations. The Second Edition is completely revised and provides additional review material on linear algebra as well as complete coverage of elementary linear programming.
Inverse Free Electron Laser accelerator
Fisher, A.; Gallardo, J.; van Steenbergen, A.; Sandweiss, J.
1992-09-01
The study of the INVERSE FREE ELECTRON LASER, as a potential mode of electron acceleration, is being pursued at Brookhaven National Laboratory. Recent studies have focussed on the development of a low energy, high gradient, multi-stage linear accelerator. The elementary ingredients for the IFEL interaction are the 50 MeV Linac e- beam and the 10^11 W CO2 laser beam of BNL's Accelerator Test Facility (ATF), Center for Accelerator Physics (CAP), and a wiggler. The latter element is designed as a fast excitation unit making use of alternating stacks of Vanadium Permendur (VaP) ferromagnetic laminations, periodically interspersed with conductive, nonmagnetic laminations, which act as eddy-current-induced field reflectors. Wiggler parameters and field distribution data will be presented for a prototype wiggler in a constant period and in a ~1.5 %/cm tapered period configuration. The CO2 laser beam will be transported through the IFEL interaction region by means of a low loss, dielectric coated, rectangular waveguide. Short waveguide test sections have been constructed and tested using a low power cw CO2 laser. Preliminary results of guide attenuation and mode selectivity will be given, together with a discussion of the optical issues for the IFEL accelerator. The IFEL design is supported by the development and use of 1D and 3D simulation programs. The results of simulation computations, including wiggler errors, for a single-module accelerator and for a multi-module accelerator will be presented.
An alternative 3D inversion method for magnetic anomalies with depth resolution
M. Chiappini
2006-06-01
This paper presents a new method to invert magnetic anomaly data in a variety of non-complex contexts when a priori information about the sources is not available. The region containing magnetic sources is discretized into a set of homogeneously magnetized rectangular prisms, polarized along a common direction. The magnetization distribution is calculated by solving an underdetermined linear system, and is accomplished through the simultaneous minimization of the norm of the solution and the misfit between the observed and the calculated field. Our algorithm makes use of a dipolar approximation to compute the magnetic field of the rectangular blocks. We show how this approximation, in conjunction with other correction factors, presents numerous advantages in terms of computing speed and depth resolution, and does not significantly affect the success of the inversion. The algorithm is tested on both synthetic and real magnetic datasets.
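The minimum-norm inversion of an underdetermined system can be sketched with a damped least-squares solve; the random sensitivity matrix below stands in for the dipole-approximated prism responses, which are not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(3)

# Illustrative underdetermined system: 20 field measurements, 80 prisms.
G = rng.standard_normal((20, 80))            # stand-in sensitivity matrix
m_true = np.zeros(80)
m_true[30:35] = 1.0                          # a few magnetized prisms
d = G @ m_true                               # synthetic anomaly data

# Jointly minimize ||d - G m||^2 + mu ||m||^2 via the minimum-norm form
# m = G^T (G G^T + mu I)^(-1) d, which needs only a small 20x20 solve.
mu = 1e-6
m = G.T @ np.linalg.solve(G @ G.T + mu * np.eye(20), d)

misfit = np.linalg.norm(G @ m - d) / np.linalg.norm(d)
```

The solution fits the data essentially exactly but, as expected for a pure norm penalty, smears the magnetization over many cells rather than recovering the compact source.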
Djebbi, Ramzi; Alkhalifah, Tariq Ali
2015-01-01
The instantaneous traveltime is able to reduce the non-linearity of full waveform inversion (FWI) that originates from the wrapping of the phase. However, the adjoint state method in this case requires a total of 5 modeling calculations to compute the gradient. Also, considering the larger modeling cost for anisotropic wavefield extrapolation and the necessity to use a line-search algorithm to estimate a step length that depends on the parameters' scale, we propose to calculate the gradient based on the instantaneous traveltime sensitivity kernels. We, specifically, use the sensitivity kernels computed using dynamic ray-tracing to build the gradient. The resulting update is computed using a matrix decomposition and accordingly the computational cost is reduced. We consider a simple example where an anomaly is embedded in a constant background medium and we compute the update for the VTI wave equation parameterized using v_h, η and ε.
National Oceanic and Atmospheric Administration, Department of Commerce — Declination is calculated using the current International Geomagnetic Reference Field (IGRF) model. Declination is calculated using the current World Magnetic Model...
Interferogram analysis using the Abel inversion technique
Yusof Munajat; Mohamad Kadim Suaidi
2000-01-01
A high-speed, high-resolution optical detection system was used to capture images of acoustic wave propagation. The frozen image, in the form of an interferogram, was analysed to calculate the transient pressure profile of the acoustic waves. The interferogram analysis was based on the fringe shift and the application of the Abel inversion technique. The approach was made easier by using the MathCAD program as the programming tool, which is nevertheless powerful enough for such calculation, plotting and file transfer. (Author)
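A standard numerical route for the Abel inversion step is "onion peeling": discretize the axisymmetric profile into shells and solve the resulting triangular system. This sketch is illustrative and is not the thesis' MathCAD implementation:

```python
import numpy as np

def abel_matrix(r):
    """Projection matrix for P_i = sum_j A_ij f_j, where only shells
    j >= i contribute to the chord at lateral position y = r_i."""
    n = len(r)
    dr = r[1] - r[0]
    edges = np.append(r, r[-1] + dr)         # outer edges of each shell
    A = np.zeros((n, n))
    for i in range(n):
        for j in range(i, n):
            A[i, j] = 2.0 * (np.sqrt(edges[j + 1]**2 - r[i]**2)
                             - np.sqrt(max(edges[j]**2 - r[i]**2, 0.0)))
    return A

# Forward-project a known radial profile, then recover it by inversion.
r = np.linspace(0.0, 1.0, 60, endpoint=False)
f_true = 1.0 - r**2                          # radial profile (e.g. phase/pressure)
A = abel_matrix(r)
P = A @ f_true                               # synthetic lateral (projected) profile
f_rec = np.linalg.solve(A, P)                # onion-peeling Abel inversion
```

In the interferometric application, P would be the phase profile derived from the fringe shift, and f_rec the radial refractive-index (hence pressure) perturbation.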
Phase and amplitude inversion of crosswell radar data
Ellefsen, Karl J.; Mazzella, Aldo T.; Horton, Robert J.; McKenna, Jason R.
2011-01-01
Phase and amplitude inversion of crosswell radar data estimates the logarithm of complex slowness for a 2.5D heterogeneous model. The inversion is formulated in the frequency domain using the vector Helmholtz equation. The objective function is minimized using a back-propagation method that is suitable for a 2.5D model and that accounts for the near-, intermediate-, and far-field regions of the antennas. The inversion is tested with crosswell radar data collected in a laboratory tank. The model anomalies are consistent with the known heterogeneity in the tank; the model’s relative dielectric permittivity, which is calculated from the real part of the estimated complex slowness, is consistent with independent laboratory measurements. The methodologies developed for this inversion can be adapted readily to inversions of seismic data (e.g., crosswell seismic and vertical seismic profiling data).
A nanolens-type enhancement in the linear and second harmonic response of a metallic dimer
Pustovit, Vitaliy; Biswas, Sushmita; Vaia, Richard; Urbas, Augustine
2014-01-01
In this paper we explore the linear and second-order nonlinear response of gold nanoparticle pairs (dimers). Although even-order nonlinear processes are forbidden in bulk centrosymmetric media such as metals, the second-order nonlinear response exhibits a high degree of sensitivity for spherical nanoparticles, where inversion symmetry is broken at the surface. Recent experiments demonstrate that both the linear response and the second-harmonic surface nonlinear response depend significantly on the local fundamental field distribution in a dimer configuration. Our calculations take into account high-order multipolar interactions between metal nanoparticles and demonstrate that the linear and nonlinear optical responses of the dimer exhibit periodic behavior dependent on the separation distance between the nanoparticles. This response increases for dimers with a large difference between particle sizes. (paper)
Near constant-time optimal piecewise LDR to HDR inverse tone mapping
Chen, Qian; Su, Guan-Ming; Yin, Peng
2015-02-01
In backward-compatible HDR image/video compression, a common approach is to reconstruct HDR from compressed LDR as a prediction of the original HDR, which is referred to as inverse tone mapping. Experimental results show that a 2-piecewise 2nd-order polynomial has better mapping accuracy than a 1-piece high-order polynomial or a 2-piecewise linear mapping, but it is also the most time-consuming method, because finding the optimal pivot point that splits the LDR range into 2 pieces requires an exhaustive search. In this paper, we propose a fast algorithm that completes optimal 2-piecewise 2nd-order polynomial inverse tone mapping in near constant time without quality degradation. We observe that in the least-squares solution, each entry of the intermediate matrix can be written as the sum of some basic terms, which can be pre-calculated into look-up tables. Since solving the matrix becomes looking up values in tables, computation time barely differs regardless of the number of points searched. Hence, we can carry out the most thorough pivot point search and find the optimal pivot that minimizes MSE in near constant time. Experiments show that our proposed method achieves the same PSNR performance while reducing computation time by a factor of 60 compared to the traditional exhaustive search in 2-piecewise 2nd-order polynomial inverse tone mapping with a continuity constraint.
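The look-up-table idea can be sketched as follows. In this hedged reconstruction (not the authors' code), the points are sorted by LDR value and prefix sums of x^p and y·x^p are precomputed; the normal equations for any candidate pivot are then assembled in O(1) from differences of prefix sums. For simplicity the two quadratics are fitted independently, without the continuity constraint mentioned in the abstract.

```python
import numpy as np

def best_pivot_two_piece_quadratic(x, y):
    """Find the pivot splitting the LDR range so that independent
    2nd-order least-squares fits on the two sides minimize total SSE.
    Prefix sums of x**p and y*x**p make every candidate split O(1)."""
    order = np.argsort(x)
    x, y = x[order], y[order]
    P = np.cumsum(np.stack([x**p for p in range(5)]), axis=1)      # sums of x^p
    Q = np.cumsum(np.stack([y * x**p for p in range(3)]), axis=1)  # sums of y*x^p
    Y2 = np.cumsum(y * y)

    def seg_sse(lo, hi):
        """SSE of the best quadratic fit on x[lo:hi] from prefix sums."""
        S = P[:, hi - 1] - (P[:, lo - 1] if lo else 0.0)
        T = Q[:, hi - 1] - (Q[:, lo - 1] if lo else 0.0)
        yy = Y2[hi - 1] - (Y2[lo - 1] if lo else 0.0)
        M = np.array([[S[0], S[1], S[2]],
                      [S[1], S[2], S[3]],
                      [S[2], S[3], S[4]]])   # normal-equation matrix
        coef = np.linalg.solve(M, T)
        return yy - coef @ T                 # SSE = sum(y^2) - coef.T @ X'y

    n = len(x)
    best = min(range(3, n - 2), key=lambda k: seg_sse(0, k) + seg_sse(k, n))
    return x[best]
```

On noiseless data generated from a true 2-piece quadratic, the search recovers the generating pivot.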
Markus Spiliotis
Inverse fusion PCR cloning (IFPC) is an easy, PCR-based three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with a free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows fusion with the vector by an overlap-extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and the experimental steps are reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as templates for the insertion, and clean-up of the insert fragment is not strictly required. The whole cloning procedure can be performed with minimal hands-on time and results in the generation of hundreds to ten-thousands of positive colonies with minimal background.
Hicks, H.R.; Dory, R.A.; Holmes, J.A.
1983-01-01
We illustrate in some detail a 2D inverse-equilibrium solver that was constructed to analyze tokamak configurations and stellarators (the latter in the context of the average method). To ensure that the method is suitable not only to determine equilibria, but also to provide appropriately represented data for existing stability codes, it is important to be able to control the Jacobian, J̃ ≡ ∂(R,Z)/∂(ρ,θ). The form chosen is J̃ = J₀(ρ)R^l ρ, where ρ is a flux-surface label and l is an integer. The initial implementation is for a fixed conducting-wall boundary, but the technique can be extended to a free-boundary model.
Szabó, Norbert Péter
2018-03-01
An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.
Mantle conductivity obtained by 3-D inversion of magnetic satellite data
Kuvshinov, A.; Olsen, Nils
… distributed geomagnetic observatories. Due to the high computational load of a 3-D inversion (requiring thousands of forward calculations), a comprehensive numerical framework is developed to increase the efficiency of the inversion. In particular, we take advantage of specific features of the IE approach … and perform the most time-consuming part of the IE forward simulations (the calculation of electric and magnetic tensor Green's functions) only once. Approximate calculation of the data sensitivities also gives an essential speed-up of the inversion. We validate our inversion scheme using synthetic induction …
Blocky inversion of multichannel elastic impedance for elastic parameters
Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza
2018-04-01
Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EIs are linearly related to the elastic parameters in the logarithm domain. Thus a linear weighted least-squares inversion is employed to perform this step. The accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of the exact Zoeppritz elastic impedance, and the role of the low-frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.
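The second (log-linear) step can be sketched as follows, assuming Connolly's form of the elastic impedance, EI(θ) = Vp^(1+tan²θ) · Vs^(−8K sin²θ) · ρ^(1−4K sin²θ) with K an average (Vs/Vp)²; the exact parameterization used in the paper may differ.

```python
import numpy as np

def invert_ei_for_elastic(ei, angles_deg, K=0.25):
    """Second inversion step: in the log domain the elastic impedance is
    linear in ln(Vp), ln(Vs), ln(rho), so a least-squares solve over the
    angle stacks recovers the elastic parameters."""
    th = np.radians(angles_deg)
    # One row per angle: coefficients of [ln Vp, ln Vs, ln rho]
    A = np.column_stack([1 + np.tan(th)**2,
                         -8 * K * np.sin(th)**2,
                         1 - 4 * K * np.sin(th)**2])
    logs, *_ = np.linalg.lstsq(A, np.log(ei), rcond=None)
    return np.exp(logs)   # (Vp, Vs, rho)
```

With EI values synthesized from known Vp, Vs and ρ at several angles, the least-squares solve recovers the parameters exactly, since the system is consistent.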
Transmuted Generalized Inverse Weibull Distribution
Merovci, Faton; Elbatal, Ibrahim; Ahmed, Alaa
2013-01-01
A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We use the quadratic rank transmutation map (QRTM) to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base distribution and introducing a new parameter that offers more distributional flexibility. Various structural properties, including explicit expressions…
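The quadratic rank transmutation map itself is simple: F(x) = (1 + λ)G(x) − λG(x)² with |λ| ≤ 1, and λ = 0 recovers the base distribution. A minimal sketch with the two-parameter inverse Weibull CDF standing in for the base distribution (the paper's generalized form adds a further shape parameter):

```python
import math

def inv_weibull_cdf(x, alpha, beta):
    """Base CDF G(x) of the two-parameter inverse Weibull distribution."""
    return math.exp(-((beta / x) ** alpha))

def transmuted_cdf(x, lam, alpha, beta):
    """Quadratic rank transmutation map: F(x) = (1+lam)*G(x) - lam*G(x)^2,
    valid for |lam| <= 1; lam = 0 gives back the base distribution."""
    g = inv_weibull_cdf(x, alpha, beta)
    return (1 + lam) * g - lam * g * g
```

For |λ| ≤ 1 the map preserves monotonicity, so F is again a valid CDF.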
Atmospheric Inverse Estimates of Methane Emissions from Central California
Zhao, Chuanfeng; Andrews, Arlyn E.; Bianco, Laura; Eluszkiewicz, Janusz; Hirsch, Adam; MacDonald, Clinton; Nehrkorn, Thomas; Fischer, Marc L.
2008-11-21
Methane mixing ratios measured at a tall tower are compared to model predictions to estimate surface emissions of CH₄ in Central California for October–December 2007 using an inverse technique. Predicted CH₄ mixing ratios are calculated based on spatially resolved a priori CH₄ emissions and simulated atmospheric trajectories. The atmospheric trajectories, along with surface footprints, are computed using the Weather Research and Forecast (WRF) model coupled to the Stochastic Time-Inverted Lagrangian Transport (STILT) model. An uncertainty analysis is performed to provide quantitative uncertainties in estimated CH₄ emissions. Three inverse model estimates of CH₄ emissions are reported. First, linear regressions of modeled and measured CH₄ mixing ratios obtain slopes of 0.73 ± 0.11 and 1.09 ± 0.14 using California-specific and Edgar 3.2 emission maps respectively, suggesting that actual CH₄ emissions were about 37 ± 21% higher than California-specific inventory estimates. Second, a Bayesian 'source' analysis suggests that livestock emissions are 63 ± 22% higher than the a priori estimates. Third, a Bayesian 'region' analysis is carried out for CH₄ emissions from 13 sub-regions, which shows that inventory CH₄ emissions from the Central Valley are underestimated and uncertainties in CH₄ emissions are reduced for sub-regions near the tower site, yielding best estimates of flux from those regions consistent with the 'source' analysis results. The uncertainty reductions for regions near the tower indicate that a regional network of measurements will be necessary to provide accurate estimates of surface CH₄ emissions for multiple regions.
A study of block algorithms for fermion matrix inversion
Henty, D.
1990-01-01
We compare the convergence properties of Lanczos and Conjugate Gradient algorithms applied to the calculation of columns of the inverse fermion matrix for Kogut-Susskind and Wilson fermions in lattice QCD. When several columns of the inverse are required simultaneously, a block version of the Lanczos algorithm is most efficient at small mass, being over 5 times faster than the single algorithms. The block algorithm is also less susceptible to critical slowing down. (orig.)
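The block idea can be illustrated with a generic block conjugate gradient for several right-hand sides of a symmetric positive definite system (a sketch in the style of O'Leary's block CG, not the lattice Dirac operator itself): the scalar CG coefficients become small s×s matrices, so all columns share Krylov information.

```python
import numpy as np

def block_cg(A, B, tol=1e-8, max_iter=200):
    """Block conjugate gradient for A X = B with SPD A and several
    right-hand sides stacked as columns of B."""
    X = np.zeros_like(B)
    R = B - A @ X            # block residual
    P = R.copy()             # block search directions
    for _ in range(max_iter):
        AP = A @ P
        alpha = np.linalg.solve(P.T @ AP, R.T @ R)   # s-by-s step "length"
        X = X + P @ alpha
        R_new = R - AP @ alpha
        if np.linalg.norm(R_new) < tol:
            break
        beta = np.linalg.solve(R.T @ R, R_new.T @ R_new)
        P = R_new + P @ beta
        R = R_new
    return X
```

For a well-conditioned SPD matrix and a handful of right-hand sides, all columns converge together in far fewer iterations than the matrix dimension.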
The inverse problem of the magnetostatic nondestructive testing
Pechenkov, A.N.; Shcherbinin, V.E.
2006-01-01
The inverse problem of magnetostatic nondestructive testing consists in calculating the shape and magnetic characteristics of a flaw in a uniformly magnetized body from measurements of the static magnetic field outside the body. If the flaw does not contain any magnetic material, the inverse problem reduces to identifying the shape and magnetic susceptibility of the substance. This case is considered in the present study.
Three-dimensional inversion of multisource array electromagnetic data
Tartaras, Efthimios
Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM
Inversion based on computational simulations
Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.
1998-01-01
A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal
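Adjoint differentiation can be illustrated on a toy linear time-stepping simulation: one reverse sweep with the transposed step operator yields the gradient of the misfit with respect to every component of the initial state, at roughly the cost of one extra simulation. This is a minimal sketch, not the optical-tomography code described above.

```python
import numpy as np

def forward(x0, A, steps):
    """Run the toy simulation x_{t+1} = A x_t for the given steps."""
    x = x0
    for _ in range(steps):
        x = A @ x
    return x

def objective(x0, A, steps, d):
    """Misfit J = || x_T - d ||^2 between final state and data d."""
    r = forward(x0, A, steps) - d
    return float(r @ r)

def adjoint_gradient(x0, A, steps, d):
    """Gradient of J w.r.t. every component of x0 by one reverse sweep:
    lambda_T = 2 (x_T - d), then lambda_t = A^T lambda_{t+1}."""
    lam = 2.0 * (forward(x0, A, steps) - d)
    for _ in range(steps):
        lam = A.T @ lam
    return lam
```

The adjoint gradient agrees with central finite differences, component by component, without ever forming the full Jacobian.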
Inverse feasibility problems of the inverse maximum flow problems
Deaconu, Adrian; Ciurea, Eleonor
Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Iuliu Maniu st. 50, Brasov, Romania. Indian Academy of Sciences, pp. 199–209.
Andersen, Bjarne Stig; Gunnels, John A.; Gustavson, Fred
2002-01-01
A new Recursive Packed Inverse Calculation Algorithm for symmetric positive definite matrices has been developed. The new Recursive Inverse Calculation algorithm uses minimal storage, n(n+1)/2 words, and has nearly the same performance as the LAPACK full-storage algorithm, which uses n² memory words…
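Packed storage keeps only the lower triangle of a symmetric matrix in a length-n(n+1)/2 array; a minimal sketch of the row-major indexing:

```python
def packed_index(i, j):
    """Index into lower-triangular packed storage (row-major): row i
    starts after 1 + 2 + ... + i = i*(i+1)//2 earlier entries."""
    if j > i:
        i, j = j, i  # symmetric matrix: only the lower triangle is stored
    return i * (i + 1) // 2 + j

def pack_symmetric(A):
    """Flatten an n-by-n symmetric matrix into n*(n+1)//2 words."""
    n = len(A)
    return [A[i][j] for i in range(n) for j in range(i + 1)]
```

Every (i, j) access of the full matrix maps to the packed array, at roughly half the memory.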
Quantitative mapping of soil salinity using the DUALEM‐21S instrument and EM inversion software
Koganti, Triven; Narjary, Bhaskar; Zare, Ehsan
2018-01-01
… by establishing a linear relationship between calculated true electrical conductivity (σ) and laboratory-measured ECe at various depths (0–0.3, 0.3–0.6, 0.6–0.9, and 0.9–1.2 m). We estimate σ by inverting DUALEM‐21S apparent electrical conductivity (ECa) data using a quasi‐3‐dimensional inversion algorithm (EM4Soil‐V302). The best linear relationship (ECe = −11.814 + 0.043 × σ) was achieved using the full solution (FS), S1 inversion algorithm, and a damping factor (λ) of 0.6, with a large coefficient of determination (R² = 0.84). A cross‐validation technique was used to validate the model, and given the high accuracy (RMSE = 8.31 dS m⁻¹), small bias (mean error = −0.0628 dS m⁻¹), large R² = 0.82, and Lin's concordance (0.93) between measured and predicted ECe, we were able to predict the ECe distribution well at all four depths. However, the predictions made in the topsoil (0–0.3 m) at a few locations …
The inverse problem for Schwinger pair production
F. Hebenstreit
2016-02-01
The production of electron–positron pairs in time-dependent electric fields (the Schwinger mechanism) depends non-linearly on the applied field profile. Accordingly, the resulting momentum spectrum is extremely sensitive to small variations of the field parameters. Owing to this non-linear dependence it is so far unpredictable how to choose a field configuration such that a predetermined momentum distribution is generated. We show that quantum kinetic theory along with optimal control theory can be used to approximately solve this inverse problem for Schwinger pair production. We exemplify this by studying the superposition of a small number of harmonic components resulting in predetermined signatures in the asymptotic momentum spectrum. In the long run, our results could facilitate the observation of this yet unobserved pair production mechanism in quantum electrodynamics by providing suggestions for tailored field configurations.
Facies Constrained Elastic Full Waveform Inversion
Zhang, Z.
2017-05-26
Current efforts to utilize full waveform inversion (FWI) as a tool beyond acoustic imaging applications, for example for reservoir analysis, face inherent limitations on resolution and also on the potential trade-off between elastic model parameters. Adding rock physics constraints does help to mitigate these issues. However, current approaches to add such constraints are based on averaged type rock physics regularization terms. Since the true earth model consists of different facies, averaging over those facies naturally leads to smoothed models. To overcome this, we propose a novel way to utilize facies based constraints in elastic FWI. A so-called confidence map is calculated and updated at each iteration of the inversion using both the inverted models and the prior information. The numerical example shows that the proposed method can reduce the cross-talks and also can improve the resolution of inverted elastic properties.
Facies Constrained Elastic Full Waveform Inversion
Zhang, Z.; Zabihi Naeini, E.; Alkhalifah, Tariq Ali
2017-01-01
Current efforts to utilize full waveform inversion (FWI) as a tool beyond acoustic imaging applications, for example for reservoir analysis, face inherent limitations on resolution and also on the potential trade-off between elastic model parameters. Adding rock physics constraints does help to mitigate these issues. However, current approaches to add such constraints are based on averaged type rock physics regularization terms. Since the true earth model consists of different facies, averaging over those facies naturally leads to smoothed models. To overcome this, we propose a novel way to utilize facies based constraints in elastic FWI. A so-called confidence map is calculated and updated at each iteration of the inversion using both the inverted models and the prior information. The numerical example shows that the proposed method can reduce the cross-talks and also can improve the resolution of inverted elastic properties.
Computer-Aided Numerical Inversion of Laplace Transform
Umesh Kumar
2000-01-01
This paper explores a technique for the computer-aided numerical inversion of the Laplace transform. The inversion technique is based on the properties of a family of three-parameter exponential probability density functions. The only limitation of the technique is the word length of the computer being used. The Laplace transform has been used extensively in the frequency-domain solution of linear, lumped, time-invariant networks, but its application to the time domain has been limited, mainly because of the difficulty in finding the necessary poles and residues. The numerical inversion technique mentioned above does away with the poles and residues but uses precomputed numbers to find the time response. This technique is applicable to the solution of partial differential equations and certain classes of linear systems with time-varying components.
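As a point of comparison, one classical real-axis inversion scheme (the Gaver-Stehfest algorithm, not the three-parameter exponential PDF technique of this paper) also avoids poles and residues by evaluating F(s) at precomputed real abscissae with precomputed weights:

```python
import math

def stehfest_coefficients(N):
    """Gaver-Stehfest weights V_k for even N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (N // 2 + k) * s)
    return V

def invert_laplace(F, t, N=14):
    """Approximate f(t) from its Laplace transform F(s) by the
    Gaver-Stehfest sum f(t) ~ (ln 2 / t) * sum_k V_k * F(k ln 2 / t)."""
    ln2 = math.log(2.0)
    V = stehfest_coefficients(N)
    return (ln2 / t) * sum(V[k - 1] * F(k * ln2 / t) for k in range(1, N + 1))
```

The method works well for smooth transforms; like the paper's technique, its accuracy is ultimately limited by the machine word length, since the weights alternate in sign and grow large.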
Torres Pozas, S.; Monja Rey, P. de la; Sanchez Carrasca, M.; Yanez Lopez, D.; Macias Verde, D.; Martin Oliva, R.
2011-07-01
In recent years, advances in cancer treatment with ionizing radiation have made it possible to deliver higher doses to smaller and better-shaped volumes, making it necessary to take new aspects into account in the calculation of structural barriers. Furthermore, given that forecasts suggest that a large number of accelerators will be installed, or existing ones modified, in the near future, we consider a tool for estimating the thickness of the structural barriers of treatment rooms to be useful. The shielding calculation methods are based on the DIN 6847-2 standard and the recommendations given in NCRP Report 151. In our experience we have found only estimates originating from the DIN standard. We therefore considered it interesting to develop an application that incorporates the formulation suggested by the NCRP, which, together with previous work based on the DIN rules, allows us to compare the results of both methods. (Author)
Sorting signed permutations by inversions in O(nlogn) time.
Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E
2010-03-01
The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions--also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(nlogn) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(nlogn + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(nlogn) time.
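The elementary operation these algorithms compose is the signed inversion (reversal) itself: a segment of the permutation is reversed and every sign in it is flipped. A minimal sketch:

```python
def apply_inversion(perm, i, j):
    """Apply a signed inversion to perm[i..j] (inclusive): reverse the
    segment and negate each element in it."""
    return perm[:i] + [-x for x in reversed(perm[i:j + 1])] + perm[j + 1:]
```

An inversion is its own inverse, and a single inversion already sorts some permutations, e.g. [-2, -1].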
The inverse problem: Ocean tides derived from earth tide observations
Kuo, J. T.
1978-01-01
Indirect mapping of ocean tides by means of land- and island-based tidal gravity measurements is presented. The inverse scheme of linear programming is used for the indirect mapping of ocean tides. Open-ocean tides were computed by the numerical integration of Laplace's tidal equations.
Uniqueness in inverse elastic scattering with finitely many incident waves
Elschner, Johannes; Yamamoto, Masahiro
2009-01-01
We consider the third and fourth exterior boundary value problems of linear isotropic elasticity and present uniqueness results for the corresponding inverse scattering problems with polyhedral-type obstacles and a finite number of incident plane elastic waves. Our approach is based on a reflection principle for the Navier equation. (orig.)
Solving probabilistic inverse problems rapidly with prior samples
Käufl, Paul; Valentine, Andrew P.; de Wit, Ralph W.; Trampert, Jeannot
2016-01-01
Owing to the increasing availability of computational resources, in recent years the probabilistic solution of non-linear, geophysical inverse problems by means of sampling methods has become increasingly feasible. Nevertheless, we still face situations in which a Monte Carlo approach is not
A mathematical framework for inverse wave problems in heterogeneous media
Blazek, K.D.; Stolk, C.; Symes, W.W.
2013-01-01
This paper provides a theoretical foundation for some common formulations of inverse problems in wave propagation, based on hyperbolic systems of linear integro-differential equations with bounded and measurable coefficients. The coefficients of these time-dependent partial differential equations
Full Waveform Inversion Using Oriented Time Migration Method
Zhang, Zhendong
2016-04-12
Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge into what we refer to as local minima of the objective function. In this thesis, I first look into the subject of full model wavenumber to analyze the root of the local-minima problem and suggest possible ways to avoid it. I then analyze the possibility of recovering the corresponding wavenumber components through existing inversion and migration algorithms. Migration can be taken as a generalized inversion method which mainly retrieves the high-wavenumber part of the model. The conventional impedance inversion method gives a mapping relationship between the migration image (high wavenumber) and the model parameters (full wavenumber) and thus provides a possible cascaded inversion strategy to retrieve the full wavenumber components from seismic data. In the proposed approach, assuming a mild lateral variation in the model, I find an analytical Fréchet derivative corresponding to the new objective function, with the gradient given by the oriented time-domain imaging method. This is independent of the background velocity. Specifically, I apply the oriented time-domain imaging (which depends on the reflection slope instead of a background velocity) to the data residual to obtain the geometrical features of the velocity perturbation. Assuming that density is constant, the conventional 1D impedance inversion method is also applicable for 2D or 3D velocity inversion within the process of FWI. This method is not only capable of inverting for velocity, but is also capable of retrieving anisotropic parameters, relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, I
Wang, Yi; Park, Yang-Kyun; Doppke, Karen P. [Department of Radiation Oncology, Massachusetts General Hospital, Harvard Medical School, Boston, MA (United States)
2015-06-15
Purpose: This study evaluated the performance of the electron Monte Carlo dose calculation algorithm in RayStation v4.0 for an Elekta machine with an Agility™ treatment head. Methods: The machine has five electron energies (6–18 MeV) and five applicators (6×6 to 25×25 cm²). The dose (cGy/MU at d_max), depth dose and profiles were measured in water using an electron diode at 100 cm SSD for nine square fields ≥2×2 cm² and four complex fields at normal incidence, and a 14×14 cm² field at 15° and 30° incidence. The dose was also measured for three square fields ≥4×4 cm² at 98, 105 and 110 cm SSD. Using selected energies, EBT3 radiochromic film was used for dose measurements in slab-shaped inhomogeneous phantoms and a breast phantom with surface curvature. The measured and calculated doses were analyzed using a gamma criterion of 3%/3 mm. Results: The calculated and measured doses varied by <3% for 116 of the 120 points, and <5% for the 4×4 cm² field at 110 cm SSD at 9–18 MeV. The gamma analysis comparing the 105 pairs of in-water isodoses passed by >98.1%. The planar doses measured from films placed at 0.5 cm below a lung/tissue layer (12 MeV) and 1.0 cm below a bone/air layer (15 MeV) showed excellent agreement with calculations, with gamma passing by 99.9% and 98.5%, respectively. At the breast-tissue interface, the gamma passing rate is >98.8% at 12–18 MeV. The film results directly validated the accuracy of MU calculation and spatial dose distribution in the presence of tissue inhomogeneity and surface curvature, situations challenging for simpler pencil-beam algorithms. Conclusion: The electron Monte Carlo algorithm in RayStation v4.0 is fully validated for clinical use with the Elekta Agility™ machine. The comprehensive validation included small fields, complex fields, oblique beams, extended distance, tissue inhomogeneity and surface curvature.
Amin Asadi
2017-10-01
Purpose: To study the benefits of the Directional Bremsstrahlung Splitting (DBS) variance reduction technique in the BEAMnrc Monte Carlo (MC) code for an Oncor® linac at 6 MV and 18 MV energies. Materials and Methods: An MC model of the Oncor® linac was built using the BEAMnrc MC code and verified against measured data for 6 MV and 18 MV energies at various field sizes. The Oncor® machine was then modeled with the DBS technique, and the efficiency of the total and spatial fluence for electrons and photons, and the efficiency of variance reduction of the MC calculations for the PDD on the central beam axis and the lateral dose profile across the nominal field, were measured and compared. Results: With the DBS technique, the total fluence of electrons and photons increased by factors of 626.8 (6 MV) and 983.4 (6 MV), and 285.6 (18 MV) and 737.8 (18 MV), respectively; the spatial fluence of electrons and photons improved by 308.6±1.35% (6 MV) and 480.38±0.43% (6 MV), and 153±0.9% (18 MV) and 462.6±0.27% (18 MV), respectively. Moreover, with the DBS technique, the efficiency of variance reduction for the PDD MC dose calculations before and after the dose maximum improved by 187.8±0.68% (6 MV) and 184.6±0.65% (6 MV), and 156±0.43% (18 MV) and 153±0.37% (18 MV), respectively, and the efficiency of the MC calculations for the lateral dose profile on the central beam axis and across the treatment field rose by 197±0.66% (6 MV) and 214.6±0.73% (6 MV), and 175±0.36% (18 MV) and 181.4±0.45% (18 MV), respectively. Conclusion: Applying the DBS variance reduction technique when modeling the Oncor® linac with the BEAMnrc MC code markedly improved the electron and photon fluence and therefore enhanced the efficiency of variance reduction for the MC calculations. Running DBS in other kinds of MC simulation codes might thus be beneficial in reducing the uncertainty of MC calculations.
Modelling Loudspeaker Non-Linearities
Agerkvist, Finn T.
2007-01-01
This paper investigates different techniques for modelling the non-linear parameters of the electrodynamic loudspeaker. The methods are tested not only for their accuracy within the range of the original data, but also for their ability to work reasonably outside that range, and it is demonstrated...... that polynomial expansions are rather poor at this, whereas an inverse polynomial expansion or localized fitting functions such as the Gaussian are better suited for modelling the Bl-factor and compliance. For the inductance the sigmoid function is shown to give very good results. Finally the time varying......
Lucatero, M.A.; Hernandez L, H. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)]. e-mail: mal@nuclear.inin.mx
2003-07-01
The linear heat generation rates (LHGR) for a generic BWR-type fuel rod are calculated as a function of burnup at which the thermomechanical limit on circumferential plastic strain of the cladding is violated during nominal steady-state operation. The LHGR as a function of burnup is evaluated under the condition that the circumferential plastic strain of the cladding exceeds the 1% thermomechanical operating limit by 0.1 percentage points, i.e. reaches 1.1%. The results of the calculations are compared with the linear-heat-generation operating rates as a function of burnup for this type of fuel rod. The calculations are carried out with the FEMAXI-V and RODBURN codes. The results show that for burnups between 0 and 16,000 MWd/tU a minimum margin of 160.8 W/cm exists between the peak operating LHGR for the given fuel (439.6 W/cm) and the calculated maximum LHGR needed to reach 1.1% circumferential plastic strain of the cladding, for a power peaking factor of 1.40. For burnups of 20,000 MWd/tU and 60,000 MWd/tU the margins are 150.3 and 298.6 W/cm, respectively. (Author)
Face inversion increases attractiveness.
Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A
2017-07-01
Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it has highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode, and study how facial attractiveness is assessed. Faces, rotated at 90° (tilted to either side) and 180°, were rated on attractiveness and distinctiveness scales. For both orientations, we found that rotated faces were rated as more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Zhang, Dongliang
2013-01-01
To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source, compared with the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that MWI converges faster and achieves higher spatial resolution than FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared with conventional FWI. A possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.
An interpretation of signature inversion
Onishi, Naoki; Tajima, Naoki
1988-01-01
An interpretation in terms of the cranking model is presented to explain why signature inversion occurs for positive γ of the axially asymmetric deformation parameter and emerges in specific orbitals. By introducing a continuous variable, the eigenvalue equation can be reduced to a one-dimensional Schroedinger equation, by means of which one can easily understand the cause of signature inversion. (author)
Inverse problems for Maxwell's equations
Romanov, V G
1994-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Linear neoclassical tearing mode in tokamaks
Shaing, K. C.
2007-01-01
The growth rate of linear tearing modes in tokamaks is calculated including the neoclassical dissipation mechanism. It is found that when the growth rate is much smaller than the ion-ion collision frequency, the growth rate is reduced approximately by a factor of (B_p/B)^{2/5} from its standard value, and when the growth rate is much larger than the ion-ion collision frequency, the growth rate is reduced by a factor of [√ε/(1.6 q²)]^{1/5}. Here, B_p is the poloidal magnetic field strength, B is the magnetic field strength, ε is the inverse aspect ratio, and q is the safety factor. The width of the resistive layer is broadened when compared to that of the standard theory. In both limits, the growth rate and the resistive layer width only depend on B_p and are independent of B. The growth rates in the plateau regime and for the inertia-dominant modes are also presented.
Inverse Scattering Method and Soliton Solution Family for String Effective Action
Ya-Jun, Gao
2009-01-01
A modified Hauser–Ernst-type linear system is established and used to develop an inverse scattering method for solving the equations of motion of the string effective action describing the coupled gravity, dilaton and Kalb–Ramond fields. The reduction procedures in this inverse scattering method turn out to be fairly simple, which makes the proposed method easy to apply and effective. As an application, a concrete family of soliton solutions for the considered theory is obtained.
Parameterization analysis and inversion for orthorhombic media
Masmoudi, Nabil
2018-05-01
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most effective anisotropic model that approximates the azimuthal anisotropy we observe in seismic data. In the framework of full-waveform inversion (FWI), the large number of parameters describing orthorhombic media introduces considerable trade-offs and increases the non-linearity of the inversion problem. Choosing a suitable parameterization for the model, and identifying which parameters in that parameterization can be well resolved, are essential to a successful inversion. In this thesis, I derive the radiation patterns for different acoustic orthorhombic parameterizations. By analyzing the angular dependence of the scattering of the parameters of different parameterizations, starting with the conventionally used notation, I assess the potential trade-offs between the parameters and the resolution in describing the data and inverting for the parameters. In order to build practical inversion strategies, I suggest new parameters (called deviation parameters) for a new parameterization style in orthorhombic media. The novel parameters, denoted ε_d, η_d and δ_d, are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. The main feature of the deviation parameters is that they keep the scattering of the vertical transversely isotropic (VTI) parameters stationary with azimuth. Using these scattering features, we can condition FWI to invert for the parameters to which the data are sensitive, at different stages, scales, and locations in the model. With this parameterization, the data are mainly sensitive to the scattering of three parameters (out of the six that describe an acoustic orthorhombic medium): the horizontal velocity in the x1 direction, ε_1 which provides scattering mainly near
Linear Algebra and Smarandache Linear Algebra
Vasantha, Kandasamy
2003-01-01
The present book, on Smarandache linear algebra, not only studies the Smarandache analogues of linear algebra and its applications, it also aims to bridge the need for new research topics pertaining to linear algebra, purely in the algebraic sense. We have introduced Smarandache semilinear algebra, Smarandache bilinear algebra and Smarandache anti-linear algebra and their fuzzy equivalents. Moreover, in this book, we have brought out the study of linear algebra and vector spaces over finite p...
Scholtyssek, W.
1995-01-01
In the first phase of a benchmark comparison, the CONTAIN code was used to calculate an assumed EPR accident 'medium-sized leak in the cold leg', especially for the first two days after initiation of the accident. The results for global characteristics compare well with those of FIPLOC, MELCOR and WAVCO calculations, if the same materials data are used as input. However, significant differences show up for local quantities such as flows through leakages. (orig.)
Massively Parallel Geostatistical Inversion of Coupled Processes in Heterogeneous Porous Media
Ngo, A.; Schwede, R. L.; Li, W.; Bastian, P.; Ippisch, O.; Cirpka, O. A.
2012-04-01
The quasi-linear geostatistical approach is an inversion scheme that can be used to estimate the spatial distribution of a heterogeneous hydraulic conductivity field. The estimated parameter field is considered to be a random variable that varies continuously in space, meets the measurements of dependent quantities (such as the hydraulic head, the concentration of a transported solute or its arrival time) and shows the required spatial correlation (described by certain variogram models). This is a method of conditioning a parameter field to observations. Upon discretization, this results in as many parameters as elements of the computational grid. For a full three-dimensional representation of the heterogeneous subsurface, the resolutions of the model domain achievable on a serial computer (up to one million parameters) are hardly sufficient. The forward problems to be solved within the inversion procedure consist of the elliptic steady-state groundwater flow equation and the formally elliptic but nearly hyperbolic steady-state advection-dominated solute transport equation in a heterogeneous porous medium. Both equations are discretized by Finite Element Methods (FEM) using fully scalable domain decomposition techniques. Whereas standard conforming FEM is sufficient for the flow equation, for the advection-dominated transport equation, which raises well-known numerical difficulties at sharp fronts or boundary layers, we use the streamline diffusion approach. The arising linear systems are solved using efficient iterative solvers with an AMG (algebraic multigrid) pre-conditioner. During each iteration step of the inversion scheme one needs to solve a multitude of forward and adjoint problems in order to calculate the sensitivities of each measurement and the related cross-covariance matrix of the unknown parameters and the observations. In order to reduce interprocess communications and to improve the scalability of the code on larger clusters
Zhang, Zhendong; Alkhalifah, Tariq Ali
2017-01-01
Full waveform inversion for reflection events is limited by its linearized update requirements given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate
Full-waveform inversion with reflected waves for 2D VTI media
Pattnaik, Sonali; Tsvankin, Ilya; Wang, Hui; Alkhalifah, Tariq
2016-01-01
Full-waveform inversion in anisotropic media using reflected waves suffers from the strong non-linearity of the objective function and trade-offs between model parameters. Estimating long-wavelength model components by fixing parameter perturbations
Hartzell, S.
1989-01-01
The July 8, 1986, North Palm Springs earthquake is used as a basis for comparison of several different approaches to the solution for the rupture history of a finite fault. The inversion of different waveform data is considered: both teleseismic P waveforms and local strong ground motion records. Linear parametrizations for slip amplitude are compared with nonlinear parametrizations for both slip amplitude and rupture time. Inversions using both synthetic and empirical Green's functions are considered. In general, accurate Green's functions are more readily calculable for the teleseismic problem, where simple ray theory and flat-layered velocity structures are usually sufficient. However, uncertainties in the variation in t* with frequency most limit the resolution of teleseismic inversions. A set of empirical Green's functions that are well recorded at teleseismic distances could avoid the uncertainties in attenuation. In the inversion of strong motion data, the accurate calculation of propagation path effects other than attenuation effects is the limiting factor in the resolution of source parameters. -from Author
Complex nonlinear Fourier transform and its inverse
Saksida, Pavle
2015-01-01
We study the nonlinear Fourier transform associated to the integrable systems of AKNS-ZS type. Two versions of this transform appear in connection with the AKNS-ZS systems. These two versions can be considered as two real forms of a single complex transform F_c. We construct an explicit algorithm for the calculation of the inverse transform (F_c)^{-1}(h) for an arbitrary argument h. The result is given in the form of a convergent series of functions in the domain space, and the terms of this series can be computed explicitly by means of finitely many integrations. (paper)
Inverse Ising Inference Using All the Data
Aurell, Erik; Ekeberg, Magnus
2012-03-01
We show that a method based on logistic regression, using all the data, solves the inverse Ising problem far better than mean-field calculations relying only on sample pairwise correlation functions, while still computationally feasible for hundreds of nodes. The largest improvement in reconstruction occurs for strong interactions. Using two examples, a diluted Sherrington-Kirkpatrick model and a two-dimensional lattice, we also show that interaction topologies can be recovered from few samples with good accuracy and that the use of l1 regularization is beneficial in this process, pushing inference abilities further into low-temperature regimes.
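The pseudolikelihood idea behind the paper's logistic-regression approach can be sketched on a toy Ising model. Everything below is an illustrative assumption, not the paper's setup: a 3-spin model with hand-picked couplings, exact Boltzmann sampling by enumeration, and plain gradient ascent instead of a production solver. For zero fields, P(s0 = +1 | s1, s2) = sigmoid(2·J01·s1 + 2·J02·s2), so the fitted logistic weights estimate twice the couplings.

```python
import math
import random

random.seed(0)

# Couplings of a toy 3-spin Ising model (assumed values).
J = {(0, 1): 0.5, (0, 2): -0.3, (1, 2): 0.2}

def energy(s):
    return -sum(Jij * s[i] * s[j] for (i, j), Jij in J.items())

# Exact Boltzmann distribution over the 8 spin configurations.
states = [(a, b, c) for a in (-1, 1) for b in (-1, 1) for c in (-1, 1)]
weights = [math.exp(-energy(s)) for s in states]
samples = random.choices(states, weights=weights, k=5000)

# Logistic regression of spin 0 on spins 1 and 2 (pseudolikelihood),
# fitted by full-batch gradient ascent on the log-likelihood.
w = [0.0, 0.0]
lr = 0.5
for _ in range(200):
    grad = [0.0, 0.0]
    for s in samples:
        y = (s[0] + 1) / 2                       # spin 0 as a 0/1 label
        p = 1.0 / (1.0 + math.exp(-(w[0] * s[1] + w[1] * s[2])))
        grad[0] += (y - p) * s[1]
        grad[1] += (y - p) * s[2]
    w = [wi + lr * g / len(samples) for wi, g in zip(w, grad)]

# Fitted weights estimate 2*J, so halve them to recover the couplings.
J01_hat, J02_hat = w[0] / 2, w[1] / 2
print(J01_hat, J02_hat)  # estimates should be near 0.5 and -0.3
```

The same per-spin regression, repeated for every node, reconstructs the full coupling matrix; an l1 penalty on `w` (as the abstract notes) helps when samples are few.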
Stanford Linear Collider magnet positioning
Wand, B.T.
1991-08-01
For the installation of the Stanford Linear Collider (SLC) the positioning and alignment of the beam line components was performed in several individual steps. In the following the general procedures for each step are outlined. The calculation of ideal coordinates for the magnets in the entire SLC will be discussed in detail. Special emphasis was given to the mathematical algorithms and geometry used in the programs to calculate these ideal positions. 35 refs., 21 figs
Application of the kernel method to the inverse geosounding problem.
Hidalgo, Hugo; Sosa León, Sonia; Gómez-Treviño, Enrique
2003-01-01
Determining the layered structure of the earth demands the solution of a variety of inverse problems; in the case of electromagnetic soundings at low induction numbers, the problem is linear, for the measurements may be represented as a linear functional of the electrical conductivity distribution. In this paper, an application of the support vector (SV) regression technique to the inversion of electromagnetic data is presented. We take advantage of the regularizing properties of the SV learning algorithm and use it as a modeling technique with synthetic and field data. The SV method presents better recovery of synthetic models than Tikhonov's regularization. As the SV formulation is solved in the space of the data, which has a small dimension in this application, a smaller problem than that considered with Tikhonov's regularization is produced. For field data, the SV formulation develops models similar to those obtained via linear programming techniques, but with the added characteristic of robustness.
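Since the low-induction-number problem is linear, the Tikhonov baseline the authors compare against can be sketched directly: minimize ||Gm - d||² + λ²||m||² via the normal equations. The 3×3 kernel matrix and model below are invented toy values, and the dense solve stands in for whatever solver a real code would use.

```python
def solve(A, b):
    # Gauss-Jordan elimination with partial pivoting for a small dense system.
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [a - f * v for a, v in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

def tikhonov(G, d, lam):
    # Solve (G^T G + lam^2 I) m = G^T d.
    n = len(G[0])
    GtG = [[sum(G[k][i] * G[k][j] for k in range(len(G)))
            + (lam ** 2 if i == j else 0.0) for j in range(n)] for i in range(n)]
    Gtd = [sum(G[k][i] * d[k] for k in range(len(G))) for i in range(n)]
    return solve(GtG, Gtd)

# Hypothetical smoothing kernel linking layer conductivities to measurements.
G = [[0.6, 0.3, 0.1],
     [0.2, 0.5, 0.3],
     [0.1, 0.3, 0.6]]
m_true = [1.0, 2.0, 0.5]
d = [sum(g * m for g, m in zip(row, m_true)) for row in G]
m_est = tikhonov(G, d, 1e-4)   # noise-free data, light regularization
```

With noisy data, λ trades data fit against model norm; the SV regression of the paper replaces this quadratic penalty with an ε-insensitive loss.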
Inversion of Atmospheric Tracer Measurements, Localization of Sources
Issartel, J.-P.; Cabrit, B.; Hourdin, F.; Idelkadi, A.
When abnormal concentrations of a pollutant are observed in the atmosphere, the question of its origin arises immediately. The radioactivity from Chernobyl was detected in Sweden before the accident was announced. This situation emphasizes the psychological, political and medical stakes of a rapid identification of sources. In technical terms, most industrial sources can be modeled as a fixed point at ground level with undetermined duration. The classical method of identification involves the calculation of a backtrajectory departing from the detector with an upstream integration of the wind field. We were first involved in such questions as we evaluated the efficiency of the international monitoring network planned in the frame of the Comprehensive Test Ban Treaty. We propose a new approach to backtracking based upon the use of retroplumes associated with the available measurements. Firstly, the retroplume is related to inverse transport processes, describing quantitatively how the air in a sample originates from regions that are all the more extended and diffuse the further we go back into the past. Secondly, it clarifies the sensitivity of the measurement with respect to all potential sources. It is therefore calculated by adjoint equations, including of course diffusive processes. Thirdly, the statistical interpretation, valid as far as single particles are concerned, should not be used to investigate the position and date of a macroscopic source. In that case, the retroplume rather induces a straightforward constraint between the intensity of the source and its position. When more than one measurement is available, including zero-valued measurements, the source satisfies the same number of linear relations tightly related to the retroplumes. This system of linear relations can be handled through the simplex algorithm in order to make the above intensity-position correlation more restrictive. This method enables one to manage in a quantitative manner the

A Generalization of the Spherical Inversion
Ramírez, José L.; Rubiano, Gustavo N.
2017-01-01
In the present article, we introduce a generalization of the spherical inversion. In particular, we define an inversion with respect to an ellipsoid, and prove several properties of this new transformation. The inversion in an ellipsoid is the generalization of the elliptic inversion to the three-dimensional space. We also study the inverse images…
The attitude inversion method of geostationary satellites based on unscented particle filter
Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao
2018-04-01
The attitude information of geostationary satellites is difficult to obtain, since in space-object surveillance they appear as unresolved images on ground-based observation equipment. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to address the strongly non-linear character of photometric-data inversion for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the idea of the UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves on the limited applicability of UKF-based attitude inversion and mitigates the particle degradation and dilution of PF-based attitude inversion. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by a simulation experiment and a scaling experiment. The results show that the proposed method effectively solves the problem of particle degradation and depletion in PF-based attitude inversion, as well as the unsuitability of the UKF for strongly non-linear attitude inversion. The inversion accuracy is clearly superior to that of the UKF and PF; in addition, even in inversions with large attitude error, the method can recover the attitude with few particles and high precision.
Inverse Schroedinger equation and the exact wave function
Nakatsuji, Hiroshi
2002-01-01
Using the inverse of the Hamiltonian, we introduce the inverse Schroedinger equation (ISE) that is equivalent to the ordinary Schroedinger equation (SE). The ISE has the variational principle and the H-square group of equations as the SE has. When we use a positive Hamiltonian, shifting the energy origin, the inverse energy becomes monotonic and we further have the inverse Ritz variational principle and cross-H-square equations. The concepts of the SE and the ISE are combined to generalize the theory for calculating the exact wave function that is a common eigenfunction of the SE and ISE. The Krylov sequence is extended to include the inverse Hamiltonian, and the complete Krylov sequence is introduced. The iterative configuration interaction (ICI) theory is generalized to cover both the SE and ISE concepts and four different computational methods of calculating the exact wave function are presented in both analytical and matrix representations. The exact wave-function theory based on the inverse Hamiltonian can be applied to systems that have singularities in the Hamiltonian. The generalized ICI theory is applied to the hydrogen atom, giving the exact solution without any singularity problem
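The central idea of working with the inverse of a positive Hamiltonian can be illustrated by inverse power iteration on a toy 2×2 matrix (hypothetical values, not a physical system): repeatedly applying H⁻¹ and renormalizing converges to the lowest-energy eigenstate, since H⁻¹ makes the smallest eigenvalue of H the dominant one.

```python
import math

# Toy positive-definite 2x2 "Hamiltonian" (assumed values).
H = [[2.0, 0.5],
     [0.5, 1.0]]

def apply_H_inverse(H, v):
    # Apply H^{-1} to v for a 2x2 matrix via Cramer's rule.
    det = H[0][0] * H[1][1] - H[0][1] * H[1][0]
    return [( H[1][1] * v[0] - H[0][1] * v[1]) / det,
            (-H[1][0] * v[0] + H[0][0] * v[1]) / det]

psi = [1.0, 0.0]                      # arbitrary starting vector
for _ in range(50):
    psi = apply_H_inverse(H, psi)     # one step of H^{-1}|psi>
    norm = math.hypot(*psi)
    psi = [x / norm for x in psi]

# Rayleigh quotient <psi|H|psi> gives the ground-state energy estimate.
Hpsi = [H[0][0] * psi[0] + H[0][1] * psi[1],
        H[1][0] * psi[0] + H[1][1] * psi[1]]
E0 = psi[0] * Hpsi[0] + psi[1] * Hpsi[1]
```

For this matrix the exact lowest eigenvalue is (3 - √2)/2 ≈ 0.7929, which the iteration reaches to machine precision; the paper's Krylov-sequence construction generalizes this by mixing powers of H and H⁻¹.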
Stoner magnetism in an inversion layer
Golosov, D.I., E-mail: Denis.Golosov@biu.ac.il
2016-02-15
Motivated by recent experimental work on magnetic properties of Si-MOSFETs, we report a calculation of magnetisation and susceptibility of electrons in an inversion layer, taking into account the co-ordinate dependence of electron wave function in the direction perpendicular to the plane. It is assumed that the inversion-layer carriers interact via a contact repulsive potential, which is treated at a mean-field level, resulting in a self-consistent change of profile of the wave functions. We find that the results differ significantly from those obtained in the pure 2DEG case (where no provision is made for a quantum motion in the transverse direction). Specifically, the critical value of interaction needed to attain the ferromagnetic (Stoner) instability is decreased and the Stoner criterion is therefore relaxed. This leads to an increased susceptibility and ultimately to a ferromagnetic transition deep in the high-density metallic regime. In the opposite limit of low carrier densities, a phenomenological treatment of the in-plane correlation effects suggests a ferromagnetic instability above the metal–insulator transition. Results are discussed in the context of the available experimental data. - Highlights: • Stoner-type mean field theory for electrons in an inversion layer is constructed. • Wave function change under an in-plane magnetic field is taken into account. • Tendency toward ferromagnetism is strengthened in comparison with a usual Stoner theory. • In-plane correlations at low densities are taken into account phenomenologically.
Approximate 2D inversion of airborne TEM data
Christensen, N.B.; Wolfgram, Peter
2006-01-01
We propose an approximate two-dimensional inversion procedure for transient electromagnetic data. The method is a two-stage procedure, where data are first inverted with 1D multi-layer models. The 1D model section is then considered as data for the next inversion stage that produces the 2D model...... section. For moving platform data there is translational invariance and the second part of the inversion becomes a deconvolution. The convolution kernels are computed by perturbing one model element in an otherwise homogeneous 2D section and calculating full nonlinear responses. These responses...... are then inverted with 1D models to produce a 1D model section. This section is the convolution kernel for the deconvolution. Within its limitations, the approximate 2D inversion performs well. Theoretical modeling shows that it delivers model sections that are a definite improvement over 1D model sections...
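The second-stage deconvolution can be sketched in one dimension. The kernel and model section below are invented stand-ins for the 1D-inverted response of a single perturbed column, circular convolution plays the role of translational invariance, and a small spectral stabiliser replaces a formal regularization scheme.

```python
import cmath

def dft(x):
    # Naive O(n^2) discrete Fourier transform (adequate for a sketch).
    n = len(x)
    return [sum(x[k] * cmath.exp(-2j * cmath.pi * j * k / n) for k in range(n))
            for j in range(n)]

def idft(X):
    n = len(X)
    return [sum(X[j] * cmath.exp(2j * cmath.pi * j * k / n) for j in range(n)).real / n
            for k in range(n)]

# Hypothetical convolution kernel: the 1D-inverted response of one perturbed
# model element in an otherwise homogeneous section (symmetric smearing).
kernel = [0.0] * 16
kernel[0], kernel[1], kernel[15] = 1.0, 0.4, 0.4

# True 2D section column values, smeared by the kernel to mimic the
# 1D-model section produced by the first inversion stage.
true_section = [0.0] * 16
true_section[6], true_section[7] = 1.0, 2.0
K, M = dft(kernel), dft(true_section)
section_1d = idft([k * m for k, m in zip(K, M)])

# Deconvolution: spectral division with a small stabiliser for notches.
eps = 1e-6
S = dft(section_1d)
recovered = idft([s * k.conjugate() / (abs(k) ** 2 + eps) for s, k in zip(S, K)])
```

The stabiliser `eps` is the sketch's stand-in for the damping a real implementation needs where the kernel spectrum is small.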
AI-guided parameter optimization in inverse treatment planning
Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho
2003-01-01
An artificial intelligence (AI)-guided inverse planning system was developed to optimize the combination of parameters in the objective function for intensity-modulated radiation therapy (IMRT). In this system, the empirical knowledge of inverse planning was formulated as fuzzy if-then rules, which then guide the parameter modification based on the on-line calculated dose. Three kinds of parameters (weighting factor, dose specification, and dose prescription) were automatically modified using the fuzzy inference system (FIS). The performance of the AI-guided inverse planning system (AIGIPS) was examined using simulated and clinical examples. Preliminary results indicate that the expected dose distribution was automatically achieved using the AI-guided inverse planning system, with the complicated compromise between different parameters accomplished by the fuzzy inference technique. The AIGIPS provides a highly promising method to replace the current trial-and-error approach.
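A minimal sketch of how fuzzy if-then rules can drive a parameter update. The membership functions, rule gains, and the `dose_deficit` input below are hypothetical illustrations of the mechanism, not the actual rules of the AIGIPS.

```python
def tri(x, a, b, c):
    # Triangular membership function: 0 outside (a, c), peaking at b.
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def adjust_weight(weight, dose_deficit):
    """Fuzzy update of a target weighting factor from the relative
    target-dose deficit (0..1). Rules and gains are invented examples."""
    rules = [
        (tri(dose_deficit, -0.2, 0.0, 0.2), 1.0),  # deficit small  -> keep weight
        (tri(dose_deficit,  0.0, 0.2, 0.4), 1.3),  # deficit medium -> raise a little
        (tri(dose_deficit,  0.2, 0.4, 1.0), 2.0),  # deficit large  -> raise a lot
    ]
    # Weighted-average defuzzification of the multiplicative gain.
    num = sum(mu * gain for mu, gain in rules)
    den = sum(mu for mu, _ in rules)
    return weight * (num / den if den > 0 else 1.0)
```

Because the rule outputs blend smoothly, the update degrades gracefully between rule regions, which is the practical advantage of a fuzzy controller over hard thresholds in this kind of loop.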
Parallelized Three-Dimensional Resistivity Inversion Using Finite Elements And Adjoint State Methods
Schaa, Ralf; Gross, Lutz; Du Plessis, Jaco
2015-04-01
The resistivity method is one of the oldest geophysical exploration methods. It employs one pair of electrodes to inject current into the ground and one or more pairs of electrodes to measure the electrical potential difference. The potential difference is a non-linear function of the subsurface resistivity distribution, described by an elliptic partial differential equation (PDE) of the Poisson type. Inversion of the measured potentials solves for the subsurface resistivity represented by the PDE coefficients. With increasing advances in multichannel resistivity acquisition systems (systems with more than 60 channels and full waveform recording are now emerging), inversion software requires efficient storage and solver algorithms. We developed the finite element solver Escript, which provides a user-friendly programming environment in Python to solve large-scale PDE-based problems (see https://launchpad.net/escript-finley). Using finite elements, highly irregularly shaped geology and topography can readily be taken into account. For the 3D resistivity problem, we have implemented the secondary potential approach, where the PDE is decomposed into a primary potential caused by the source current and a secondary potential caused by changes in subsurface resistivity. The primary potential is calculated analytically, and the boundary value problem for the secondary potential is solved using nodal finite elements. This approach removes the singularity caused by the source currents and provides more accurate 3D resistivity models. To solve the inversion problem we apply a 'first optimize then discretize' approach using the quasi-Newton scheme in the form of the limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS) method (see Gross & Kemp 2013). The evaluation of the cost function requires the solution of the secondary potential PDE for each source current and the solution of the corresponding adjoint-state PDE for the cost function gradients with respect to the subsurface
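The adjoint-state gradient at the heart of this workflow can be sketched with a linear toy forward map standing in for the PDE solve; the residual is backprojected through the transpose (the discrete adjoint) to get the gradient of the misfit. The 3×3 map and model below are invented, and plain gradient descent stands in for the L-BFGS update to keep the sketch dependency-free.

```python
# Hypothetical linear forward map (rows = measurements, columns = model cells).
F = [[1.0, 0.5, 0.0],
     [0.0, 1.0, 0.5],
     [0.5, 0.0, 1.0]]
m_true = [1.0, -1.0, 2.0]
d_obs = [sum(f * m for f, m in zip(row, m_true)) for row in F]

def forward(m):
    # Stand-in for solving the (secondary) potential PDE for model m.
    return [sum(f * mi for f, mi in zip(row, m)) for row in F]

def gradient(m):
    # Adjoint-state gradient of J(m) = 0.5 * ||F m - d_obs||^2, i.e. F^T r.
    r = [dm - do for dm, do in zip(forward(m), d_obs)]
    return [sum(F[k][i] * r[k] for k in range(len(F))) for i in range(len(m))]

m = [0.0, 0.0, 0.0]
for _ in range(2000):
    g = gradient(m)
    m = [mi - 0.4 * gi for mi, gi in zip(m, g)]   # fixed-step descent
```

In the real problem each `forward` call is a PDE solve per source current and each `gradient` call a corresponding adjoint solve, which is exactly why the cost-function evaluation dominates the run time.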
Inverse Estimation of Heat Flux and Temperature Distribution in 3D Finite Domain
Muhammad, Nauman Malik
2009-02-01
Inverse heat conduction problems occur in many theoretical and practical applications where it is difficult or practically impossible to measure the input heat flux and the temperature of the layer conducting the heat flux to the body. It thus becomes imperative to devise some means to deal with such a problem and estimate the heat flux inversely. The Adaptive State Estimator is one such technique; it incorporates the semi-Markovian concept into a Bayesian estimation technique, thereby developing an inverse input and state estimator consisting of a bank of parallel, adaptively weighted Kalman filters. The problem presented in this study deals with a three-dimensional system: a cube with one face conducting heat flux while all the other faces are insulated, and temperatures measured on the accessible faces. The measurements taken on these accessible faces are fed into the estimation algorithm, and the input heat flux and the temperature distribution at each point in the system are calculated. A variety of input heat flux scenarios have been examined to demonstrate the robustness of the estimation algorithm and hence ensure its usability in practical applications. These include a sinusoidal input flux, a combination of rectangular, linearly changing and sinusoidal input fluxes, and finally a step-changing input flux. The estimator's performance limitations have been examined in these input set-ups, and the error associated with each set-up is compared to assess the realistic applicability of the estimation algorithm in such scenarios. Different sensor arrangements, that is, different numbers of sensors and their locations, are also examined to underscore the importance of the number of measurements and their location, i.e. close to or farther from the input area. Since it is both economically and physically tedious in practice to install a larger number of measurement sensors, an optimized number and location are very important to determine for making the study more
Dicanio, Denise; Sparacio, Rose; Declercq, Lieve; Corstjens, Hugo; Muizzuddin, Neelam; Hidalgo, Julie; Giacomoni, Paolo U; Jorgensen, Lise; Maes, Daniel
2009-12-01
The apparent age of the individuals in a cohort was estimated by a panel of trained experts, yielding the estimated apparent age (EAA). Twelve independent clinical, biophysical and biochemical parameters measured on facial skin, which influence the EAA of a person of chronological age (CA), were identified by multiple regression analysis (under-eye lines, clinically assessed crow's feet, age spots, clinically evaluated firmness, forehead lines, pores, lip lines, instrumentally evaluated firmness, instrumentally evaluated crow's feet, skin texture, and in vivo fluorescence related to proliferation and to glycation). An algorithm was devised to obtain the calculated age score (CAS) in a cohort of 452 female volunteers, as CAS(n) = Σ_i C_i P_i(n) (i = 1-13, n = 1-452 and P_13 = 1), where the coefficients C_i are obtained by minimizing the difference EAA - CAS, and P_i(n) are the experimental values of the i-th parameter for the n-th volunteer. The determination of CAS before and after a specific cosmetic or pharmacological anti-aging treatment can be used to objectively assess the efficacy of the treatment. The comparison of EAA(n) and of CAS(n) with CA(n) allows one to predict the susceptibility of an individual's face to aging. It has been observed that the biophysical and biochemical parameters play a relevant role in the assessment of the predisposition of skin to undergo accelerated aging.
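The CAS formula is a plain weighted sum with a constant term carried by the last parameter (P_13 = 1). A direct implementation, with invented coefficients and parameter values in place of the paper's fitted C_i:

```python
# Toy example with three skin parameters plus an intercept term;
# in the paper there are twelve parameters and C_i are fitted to the cohort.
C = [0.8, 1.2, 0.5, 30.0]          # hypothetical coefficients (last = intercept)
P = [2.0, 1.5, 4.0, 1.0]           # one volunteer's parameter values, P[-1] = 1

def calculated_age_score(C, P):
    # CAS(n) = sum_i C_i * P_i(n)
    return sum(c * p for c, p in zip(C, P))

cas = calculated_age_score(C, P)   # 0.8*2 + 1.2*1.5 + 0.5*4 + 30 = 35.4
```

Fitting the C_i amounts to a least-squares regression of EAA on the parameter vectors across the 452 volunteers, which is why the constant parameter P_13 = 1 is needed to absorb the intercept.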
Elastic versus acoustic inversion for marine surveys
Mora, Peter; Wu, Zedong
2018-04-01
Full Wavefield Inversion (FWI) is a powerful and elegant approach to seismic imaging that is on the way to becoming the method of choice for processing exploration or global seismic data. In the case of processing marine survey data, one may be tempted to assume that acoustic FWI is sufficient, given that only pressure waves exist in the water layer. In this paper, we pose the question of whether or not, in theory - at least for a hard water-bottom case - it should be possible to resolve the shear modulus or S-wave velocity in a marine setting using large-offset data. We therefore conduct numerical experiments with idealized marine data calculated with the elastic wave equation. We study two cases: FWI of data due to a diffractor model, and FWI of data due to a fault model. We find that, at least in this idealized situation, elastic FWI of hard water-bottom data is capable of resolving between the two Lamé parameters λ and μ. Another numerical experiment, with a soft water-bottom layer, gives the same result. In contrast, acoustic FWI of the synthetic elastic data results in a single image of the first Lamé parameter λ, which contains severe artefacts for diffraction data and noticeable artefacts for layer-reflection data. Based on these results, it would appear that inversions of large-offset marine data should be fully elastic rather than acoustic, unless it has been demonstrated, for the specific case in question (offsets, model and water depth, and practical issues such as soft-sediment attenuation of shear waves or computational time), that an acoustic-only inversion provides an image of quality comparable to that of an elastic inversion. Further research with real data is required to determine the degree to which practical issues such as shear-wave attenuation in soft sediments may affect this result.
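As a quick numerical aside, the two Lamé parameters the elastic inversion resolves follow directly from Vp, Vs and density; the values below are illustrative, not taken from the paper's models:

```python
# Elastic parameterization: Lamé parameters from velocities and density.
rho = 2400.0              # density, kg/m^3 (illustrative)
vp, vs = 3000.0, 1500.0   # P- and S-wave velocities, m/s (illustrative)

mu = rho * vs**2                 # shear modulus (zero in the water layer)
lam = rho * (vp**2 - 2 * vs**2)  # first Lamé parameter

# Acoustic FWI effectively sees only lam (vs = 0); elastic FWI can in
# principle resolve lam and mu separately from large-offset data.
```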
Elastic versus acoustic inversion for marine surveys
Mora, Peter
2018-04-24
Full Wavefield Inversion (FWI) is a powerful and elegant approach to seismic imaging that is on the way to becoming the method of choice for processing exploration or global seismic data. In the case of processing marine survey data, one may be tempted to assume that acoustic FWI is sufficient, given that only pressure waves exist in the water layer. In this paper, we pose the question of whether or not, in theory – at least for a hard water-bottom case – it should be possible to resolve the shear modulus or S-wave velocity in a marine setting using large-offset data. We therefore conduct numerical experiments with idealized marine data calculated with the elastic wave equation. We study two cases: FWI of data due to a diffractor model, and FWI of data due to a fault model. We find that, at least in this idealized situation, elastic FWI of hard water-bottom data is capable of resolving between the two Lamé parameters λ and μ. Another numerical experiment, with a soft water-bottom layer, gives the same result. In contrast, acoustic FWI of the synthetic elastic data results in a single image of the first Lamé parameter λ, which contains severe artefacts for diffraction data and noticeable artefacts for layer-reflection data. Based on these results, it would appear that inversions of large-offset marine data should be fully elastic rather than acoustic, unless it has been demonstrated, for the specific case in question (offsets, model and water depth, and practical issues such as soft-sediment attenuation of shear waves or computational time), that an acoustic-only inversion provides an image of quality comparable to that of an elastic inversion. Further research with real data is required to determine the degree to which practical issues such as shear-wave attenuation in soft sediments may affect this result.
Pelle, L.
2003-12-01
The removal of multiple reflections remains a real problem in seismic imaging. Many preprocessing methods have been developed to attenuate multiples in seismic data, but none of them is satisfactory in 3D. The objective of this thesis is to develop a new method to remove multiples that is extensible to 3D. Contrary to the existing methods, our approach is not a preprocessing step: we include multiple removal directly in the imaging process by means of a simultaneous inversion of primaries and multiples. We then propose to improve the standard linearized inversion so as to make it insensitive to the presence of multiples in the data. We exploit the kinematic differences between primaries and multiples, and propose to pick in the data the kinematics of the multiples we want to remove. The wave field is decomposed into primaries and multiples. Primaries are modeled by the Ray+Born operator from perturbations of the logarithm of impedance, given the velocity field. Multiples are modeled by the Transport operator from an initial trace, given the picking. The inverse problem simultaneously fits primaries and multiples to the data. To solve this problem with two unknowns, we take advantage of the isometric nature of the Transport operator, which allows us to drastically reduce the CPU time: this simultaneous inversion is thus almost as fast as the standard linearized inversion. This gain in time opens the way to various applications of multiple removal and, in particular, makes a straightforward 3D extension foreseeable. (author)
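The simultaneous-inversion idea can be sketched as a joint least-squares fit with two operators. In the toy below, B stands in for the Ray+Born operator acting on the model m, and T is an isometric operator (orthonormal columns) standing in for the Transport operator acting on an initial trace a; both matrices are random stand-ins, not the thesis' actual modeling operators.

```python
import numpy as np

rng = np.random.default_rng(2)
nd, nm, na = 120, 40, 20
B = rng.normal(size=(nd, nm))                   # stand-in primary operator
T, _ = np.linalg.qr(rng.normal(size=(nd, na)))  # T^T T = I (isometric)

m_true = rng.normal(size=nm)
a_true = rng.normal(size=na)
d = B @ m_true + T @ a_true                     # noiseless synthetic data

# Joint least squares over (m, a): minimize ||B m + T a - d||^2.
A = np.hstack([B, T])
x, *_ = np.linalg.lstsq(A, d, rcond=None)
m_hat, a_hat = x[:nm], x[nm:]
```

Because T has orthonormal columns, the trace can also be eliminated analytically as a = T^T (d - B m), leaving a reduced problem in m alone; this is the kind of saving that the isometry of the Transport operator makes possible.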
Inverse scattering and solitons in A_{n-1} affine Toda field theories
Beggs, E.J.; Johnson, P.R.
1997-01-01
We implement the inverse scattering method in the case of the A_{n-1} affine Toda field theories, by studying the space-time evolution of simple poles in the underlying loop group. We find the known single-soliton solutions, as well as additional solutions with non-linear modes of oscillation around the standard solution, by studying the particularly simple case where the residue at the pole is a rank-one projection. We show that these solutions with extra modes have the same mass and topological charges as the standard solutions, so we do not shed any light on the missing topological charge problem in these models. From the monodromy matrix it is shown that these solutions have the same higher conserved charges as the standard solutions. We also show that the integrated energy-momentum density can be calculated from the central extension of the loop group. (orig.)
Seismic signal simulation and study of underground nuclear sources by moment inversion
Crusem, R.
1986-09-01
Some problems of underground nuclear explosions are examined from the seismological point of view. In the first part, a model is developed for mean seismic propagation through the lagoon of Mururoa atoll and for the calculation of synthetic seismograms (at intermediate range: 5 to 20 km) by summation of discrete wave numbers. In the second part, this ground model is used with a linear inversion method for seismic moments to estimate the elastic source terms equivalent to the nuclear source. Only the isotropic part is investigated; solution stability is increased by using spectral smoothing and a minimum-phase hypothesis. Some examples of applications are presented: total-energy estimation of a nuclear explosion, and simulation of mechanical effects induced by an underground explosion [fr
Regularized inversion of controlled source and earthquake data
Ramachandran, Kumar
2012-01-01
Estimation of the seismic velocity structure of the Earth's crust and upper mantle from travel-time data has advanced greatly in recent years. Forward-modelling trial-and-error methods have been superseded by tomographic methods, which allow more objective analysis of large two-dimensional and three-dimensional refraction and/or reflection data sets. The fundamental purpose of travel-time tomography is to determine the velocity structure of a medium by analysing the time it takes for a wave generated at a source point within the medium to arrive at a distribution of receiver points. Tomographic inversion of first-arrival travel-time data is a nonlinear problem, since both the velocity of the medium and the ray paths in the medium are unknown. The solution for such a problem is typically obtained by repeated application of linearized inversion. Regularization of the nonlinear problem reduces the ill-posedness inherent in the tomographic inversion due to the under-determined nature of the problem and the inconsistencies in the observed data. This paper discusses the theory of regularized inversion for joint inversion of controlled source and earthquake data, and results from synthetic data testing and application to real data. The results obtained from tomographic inversion of synthetic data and real data from the northern Cascadia subduction zone show that the velocity model and hypocentral parameters can be efficiently estimated using this approach. (paper)
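One regularized linearized step of the kind described above can be sketched as a damped/smoothed least-squares problem: minimize ||G m - t||^2 + lam^2 ||L m||^2, where G holds ray-path lengths per cell and L is a roughening operator. The toy 1-D layered setup below is illustrative only, not the paper's model.

```python
import numpy as np

rng = np.random.default_rng(3)
n_rays, n_cells = 30, 10
G = np.abs(rng.normal(size=(n_rays, n_cells)))   # ray-path lengths per cell (m)
m_true = 1.0 / np.linspace(3000, 5000, n_cells)  # true slowness profile (s/m)
t = G @ m_true                                   # observed travel times (s)

L = np.diff(np.eye(n_cells), axis=0)             # first-difference roughening
lam = 1e-2                                       # regularization weight

# Normal equations of the regularized least-squares problem:
A = G.T @ G + lam**2 * (L.T @ L)
m_est = np.linalg.solve(A, G.T @ t)
```

With noiseless data and a small regularization weight, the recovered slownesses reproduce the travel times almost exactly; the LᵀL term mainly stabilizes the solve when the problem is under-determined or the data are inconsistent.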
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-01
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problems very useful in practical applications. When only finite numbers of measurements are available, we try to detect some information on the embedded
Wave-equation dispersion inversion
Li, Jing; Feng, Zongcai; Schuster, Gerard T.
2016-01-01
We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained
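The misfit described above, written out directly: a sum of squared wavenumber differences between predicted and observed dispersion curves sampled at the same frequencies. The curves below are made up for illustration; only the form of the misfit is taken from the abstract.

```python
import numpy as np

freqs = np.linspace(5.0, 50.0, 10)    # sampling frequencies (Hz)
c_obs = 1000.0 + 5.0 * freqs          # observed phase velocities (m/s)
c_pred = 1000.0 + 4.0 * freqs         # predicted phase velocities (m/s)

k_obs = 2 * np.pi * freqs / c_obs     # observed wavenumbers (rad/m)
k_pred = 2 * np.pi * freqs / c_pred   # predicted wavenumbers (rad/m)

# Misfit: sum of squared wavenumber differences along the curves.
misfit = float(np.sum((k_pred - k_obs) ** 2))
```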
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms
Ebert, M.
1997-01-01
This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy for searching for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered - those associated with mathematical programming, which employ specific search techniques, linear-programming-type searches or artificial intelligence - and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)
Gradient-type methods in inverse parabolic problems
Kabanikhin, Sergey; Penenko, Aleksey
2008-01-01
This article is devoted to gradient-based methods for inverse parabolic problems. In the first part, we present a priori convergence theorems based on the conditional stability estimates for linear inverse problems. These theorems are applied to the backward parabolic problem and the sideways parabolic problem. The convergence conditions obtained coincide with sourcewise representability in the self-adjoint backward parabolic case, but they differ in the sideways case. In the second part, a variational approach is formulated for a coefficient identification problem. Using adjoint equations, a formal gradient of an objective functional is constructed. A numerical test illustrates the performance of the conjugate gradient algorithm with the formal gradient.
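The adjoint-gradient idea can be illustrated on the simplest possible case: for a linear inverse problem J(m) = 0.5 ||A m - d||^2, the gradient is A^T (A m - d), with A^T playing the role of the adjoint equation. Below is a plain conjugate-gradient loop driven by that gradient (toy matrices, not a parabolic PDE).

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(50, 20))   # stand-in forward operator
m_true = rng.normal(size=20)
d = A @ m_true                  # noiseless synthetic data

m = np.zeros(20)
r = A.T @ (d - A @ m)           # negative gradient via the adjoint
p = r.copy()
for _ in range(40):
    if r @ r < 1e-20:           # converged
        break
    Ap = A.T @ (A @ p)          # normal-equations operator applied to p
    alpha = (r @ r) / (p @ Ap)
    m += alpha * p
    r_new = r - alpha * Ap
    beta = (r_new @ r_new) / (r @ r)
    p = r_new + beta * p
    r = r_new
```

For the self-adjoint PDE problems in the article the matrix products above would be replaced by forward and adjoint PDE solves, but the structure of the iteration is the same.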