WorldWideScience

Sample records for cgls inversion algorithm

  1. Nonlinear Microwave Imaging for Breast-Cancer Screening Using Gauss–Newton's Method and the CGLS Inversion Algorithm

    DEFF Research Database (Denmark)

    Rubæk, Tonny; Meaney, P. M.; Meincke, Peter

    2007-01-01

    is presented which is based on the conjugate gradient least squares (CGLS) algorithm. The iterative CGLS algorithm is capable of solving the update problem by operating on just the Jacobian and the regularizing effects of the algorithm can easily be controlled by adjusting the number of iterations. The new...
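    The record above describes controlling regularization simply by the number of CGLS iterations applied to the Jacobian-based update problem. The following is a minimal, hedged sketch of a standard CGLS loop for a linear update problem J dx ≈ r (the Jacobian J and residual r below are synthetic placeholders, not the microwave-imaging quantities from the paper); stopping after a few iterations yields a smoother, more regularized update.

```python
# Minimal sketch (not the authors' code): CGLS for a linear update problem
# J @ dx ≈ r, where stopping after a few iterations acts as regularization,
# as described in the record above. J and r here are synthetic placeholders.
import numpy as np

def cgls(J, r, n_iter):
    """Conjugate gradient least squares: iteratively minimizes ||J x - r||_2."""
    x = np.zeros(J.shape[1])
    s = r - J @ x          # residual in data space
    p = g = J.T @ s        # gradient / initial search direction
    gamma = g @ g
    for _ in range(n_iter):
        q = J @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        s -= alpha * q
        g = J.T @ s
        gamma_new = g @ g
        p = g + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    J = rng.standard_normal((200, 50))   # stand-in Jacobian
    r = rng.standard_normal(200)         # stand-in residual vector
    # Fewer iterations -> smoother (more regularized) update, per the record.
    for k in (3, 10, 30):
        print(k, np.linalg.norm(cgls(J, r, k)))
```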

  2. Electron dose map inversion based on several algorithms

    International Nuclear Information System (INIS)

    Li Gui; Zheng Huaqing; Wu Yican; Fds Team

    2010-01-01

The reconstruction of the electron dose map in radiation therapy was investigated by constructing inversion models of the dose map with different algorithms. An inversion model based on nonlinear programming was used, in which the penetration dose map is inverted to obtain the full spatial dose map, and the model was realized with several inversion algorithms. Test results with seven samples show that, except for the NMinimize algorithm, which worked for only one sample and with large error, all the inversion algorithms solved the model rapidly and accurately. The Levenberg-Marquardt algorithm, having the greatest accuracy and speed, can be considered the first choice for electron dose map inversion. Further tests show that larger errors arise when data close to the electron range are used (tail error). The tail error might be caused by the approximation of the mean energy spectrum, which should be considered to improve the method. The time-saving and accurate algorithms can be used to achieve real-time dose map inversion; by selecting the best inversion algorithm, the clinical need for real-time dose verification can be satisfied. (authors)
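    The record singles out the Levenberg-Marquardt algorithm as the most accurate and fastest of the tested inversion algorithms. The sketch below is only illustrative: it poses a toy parameter-fit inversion with SciPy's Levenberg-Marquardt implementation, using a hypothetical exponential penetration-dose model in place of the authors' actual dose model, which the record does not specify.

```python
# Hedged sketch only: the record does not give its forward dose model, so a toy
# exponential penetration model is used to show how a Levenberg-Marquardt
# inversion of measured data for model parameters can be set up with SciPy.
import numpy as np
from scipy.optimize import least_squares

def forward(params, depth):
    # Hypothetical penetration-dose model: amplitude * exp(-depth / range_par)
    amplitude, range_par = params
    return amplitude * np.exp(-depth / range_par)

def residuals(params, depth, measured):
    return forward(params, depth) - measured

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    depth = np.linspace(0.0, 5.0, 40)             # toy depth grid
    true_params = (100.0, 1.8)
    measured = forward(true_params, depth) + rng.normal(0, 1.0, depth.size)

    # method="lm" selects the Levenberg-Marquardt algorithm highlighted above.
    fit = least_squares(residuals, x0=[50.0, 1.0], args=(depth, measured), method="lm")
    print("recovered parameters:", fit.x)
```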

  3. Reverse Universal Resolving Algorithm and inverse driving

    DEFF Research Database (Denmark)

    Pécseli, Thomas

    2012-01-01

    Inverse interpretation is a semantics based, non-standard interpretation of programs. Given a program and a value, an inverse interpreter finds all or one of the inputs, that would yield the given value as output with normal forward evaluation. The Reverse Universal Resolving Algorithm is a new...... variant of the Universal Resolving Algorithm for inverse interpretation. The new variant outperforms the original algorithm in several cases, e.g., when unpacking a list using inverse interpretation of a pack program. It uses inverse driving as its main technique, which has not been described in detail...... before. Inverse driving may find application with, e.g., supercompilation, thus suggesting a new kind of program inverter....

  4. NLSE: Parameter-Based Inversion Algorithm

    Science.gov (United States)

    Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.

    Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
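    Since the NLSE algorithm described above rests on Gauss-Newton nonlinear least-squares estimation, a generic, hedged sketch of a Gauss-Newton iteration is given below; the toy exponential-fit problem and the helper names are illustrative and are not taken from the book.

```python
# Generic Gauss-Newton iteration for nonlinear least squares (a sketch of the
# idea behind NLSE as described above, not the book's implementation).
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=20):
    """Minimize ||residual(p)||_2 by repeatedly solving J dp = -r."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        dp, *_ = np.linalg.lstsq(J, -r, rcond=None)   # linearized step
        p = p + dp
        if np.linalg.norm(dp) < 1e-10:
            break
    return p

if __name__ == "__main__":
    # Toy problem: fit y = a * exp(b * t) to data.
    t = np.linspace(0, 1, 30)
    a_true, b_true = 2.0, -1.5
    y = a_true * np.exp(b_true * t)

    residual = lambda p: p[0] * np.exp(p[1] * t) - y
    jacobian = lambda p: np.column_stack([np.exp(p[1] * t),
                                          p[0] * t * np.exp(p[1] * t)])
    print(gauss_newton(residual, jacobian, [1.0, 0.0]))
```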

  5. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    International Nuclear Information System (INIS)

    Ha, Taeyoung; Shin, Changsoo

    2007-01-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion or reverse time migration, introduced in the middle 1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equation, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data

  6. Motion-compensated cone beam computed tomography using a conjugate gradient least-squares algorithm and electrical impedance tomography imaging motion data.

    Science.gov (United States)

    Pengpen, T; Soleimani, M

    2015-06-13

Cone beam computed tomography (CBCT) is an imaging modality that has been used in image-guided radiation therapy (IGRT). For applications such as lung radiation therapy, CBCT images are greatly affected by motion artefacts, mainly due to the low temporal resolution of CBCT. Recently, a dual modality of electrical impedance tomography (EIT) and CBCT has been proposed, in which the high temporal resolution EIT imaging system provides motion data to a motion-compensated algebraic reconstruction technique (ART)-based CBCT reconstruction software. The high computational time associated with ART, and indeed other variations of ART, makes it less practical for real applications. This paper develops a motion-compensated conjugate gradient least-squares (CGLS) algorithm for CBCT. A motion-compensated CGLS offers several advantages over ART-based methods, including possibilities for explicit regularization, rapid convergence and parallel computations. This paper demonstrates motion-compensated CBCT reconstruction using CGLS for the first time, and reconstruction results are shown for limited-data CBCT considering only a quarter of the full dataset. The proposed algorithm is tested using simulated motion data in generic motion-compensated CBCT as well as measured EIT data in dual EIT-CBCT imaging. © 2015 The Author(s) Published by the Royal Society. All rights reserved.

  7. A new hybrid-FBP inversion algorithm with inverse distance backprojection weight for CT reconstruction

    Energy Technology Data Exchange (ETDEWEB)

    Narasimhadhan, A.V.; Rajgopal, Kasi

    2011-07-01

This paper presents a new hybrid filtered backprojection (FBP) algorithm for fan-beam and cone-beam scans. The hybrid reconstruction kernel is the sum of the ramp and Hilbert filters. We modify the redundancy weighting function to reduce the inverse square distance weighting in the backprojection to an inverse distance weight. The modified weight also eliminates the derivative associated with the Hilbert filter kernel. Thus, the proposed reconstruction algorithm has the advantages of the inverse distance weight in the backprojection. We evaluate the performance of the new algorithm in terms of noise magnitude and uniformity for the fan-beam geometry. The computer simulations show that the spatial resolution is nearly identical to that of the standard fan-beam ramp-filtered algorithm, while the noise is spatially uniform and the noise variance is reduced. (orig.)

  8. A recursive algorithm for computing the inverse of the Vandermonde matrix

    Directory of Open Access Journals (Sweden)

    Youness Aliyari Ghassabeh

    2016-12-01

Full Text Available The inverse of a Vandermonde matrix has been used for signal processing, polynomial interpolation, curve fitting, wireless communication, and system identification. In this paper, we propose a novel fast recursive algorithm to compute the inverse of a Vandermonde matrix. The algorithm computes the inverse of a higher order Vandermonde matrix using the available lower order inverse matrix with a computational cost of $O(n^2)$. The proposed algorithm is given in a matrix form, which makes it appropriate for hardware implementation. The running time of the proposed algorithm to find the inverse of a Vandermonde matrix using a lower order Vandermonde matrix is compared with the running time of the matrix inversion function implemented in MATLAB.

  9. A study of block algorithms for fermion matrix inversion

    International Nuclear Information System (INIS)

    Henty, D.

    1990-01-01

    We compare the convergence properties of Lanczos and Conjugate Gradient algorithms applied to the calculation of columns of the inverse fermion matrix for Kogut-Susskind and Wilson fermions in lattice QCD. When several columns of the inverse are required simultaneously, a block version of the Lanczos algorithm is most efficient at small mass, being over 5 times faster than the single algorithms. The block algorithm is also less susceptible to critical slowing down. (orig.)

  10. Inversion algorithms for the spherical Radon and cosine transform

    International Nuclear Information System (INIS)

    Louis, A K; Riplinger, M; Spiess, M; Spodarev, E

    2011-01-01

    We consider two integral transforms which are frequently used in integral geometry and related fields, namely the spherical Radon and cosine transform. Fast algorithms are developed which invert the respective transforms in a numerically stable way. So far, only theoretical inversion formulae or algorithms for atomic measures have been derived, which are not so important for applications. We focus on two- and three-dimensional cases, where we also show that our method leads to a regularization. Numerical results are presented and show the validity of the resulting algorithms. First, we use synthetic data for the inversion of the Radon transform. Then we apply the algorithm for the inversion of the cosine transform to reconstruct the directional distribution of line processes from finitely many intersections of their lines with test lines (2D) or planes (3D), respectively. Finally we apply our method to analyse a series of microscopic two- and three-dimensional images of a fibre system

  11. The whole space three-dimensional magnetotelluric inversion algorithm with static shift correction

    Science.gov (United States)

    Zhang, K.

    2016-12-01

Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The correction method is built on 3D theory and real data: the static shift is detected by quantitative analysis of the MT apparent parameters (apparent resistivity and impedance phase) in the high-frequency range, and the correction is completed during inversion. The method is an automatic computer-based processing technique that adds no cost and avoids additional field work and indoor processing, with good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and included topographic and marine factors, so the 3D inversion can run on an ordinary PC with high efficiency and accuracy. MT data from surface stations, seabed stations and underground stations can all be used in the inversion algorithm. A verification and application example of the 3D inversion algorithm is shown in Figure 1. In that comparison, the inversion model reflects all the anomalous bodies and the terrain clearly regardless of the data type (impedance, tipper, or impedance and tipper), and the resolution of the bodies' boundaries is improved by using tipper data. The algorithm is very effective for inversion with topography, and is therefore useful for studies of the continental shelf with continuous exploration of land, marine and underground data. The three-dimensional electrical model of the ore zone reflects the basic information on strata, rocks and structure. Although it cannot indicate the ore body position directly, important clues for prospecting are provided by the delineation of the diorite pluton uplift range. The test results show that the high quality of

  12. ANNIT - An Efficient Inversion Algorithm based on Prediction Principles

    Science.gov (United States)

    Růžek, B.; Kolář, P.

    2009-04-01

The solution of inverse problems is a meaningful task in geophysics. The amount of data is continuously increasing, modeling methods are being improved, and computing facilities are making great technical progress. The development of new and efficient algorithms and computer codes for both forward and inverse modeling therefore remains topical. ANNIT contributes to this stream as a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach. The system is characterized by a vector of parameters p, and the response of the system is characterized by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex: the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted by using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made by using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archive models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is now implemented both in MATLAB and SCILAB. Numerical tests show good

  13. Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm

    Science.gov (United States)

    Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali

    2013-04-01

The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is typically non-linear and high-dimensional, with a complex search space that may be riddled with many local minima, resulting in irregular objective functions. We investigate here the performance and application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, an elitism strategy, uniform cross-over and a low mutation rate are examined. The optimum solution parameters and performance were decided as a function of the testing error convergence with respect to the generation number. The fitness function is the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The cross-over probability is 0.9-0.95 and the mutation probability was tested at 0.01. The application of such a genetic algorithm to synthetic data shows that the inversion of the acoustic impedance section is effective. Keywords: Seismic, inversion, acoustic impedance, genetic algorithm, fitness functions, cross-over, mutation.
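    The record specifies elitism, uniform cross-over with probability 0.9-0.95, a mutation probability of 0.01, and an L2 sample-to-sample misfit as the fitness. The sketch below wires those ingredients into a generic genetic algorithm; as a toy simplification the chromosome is taken to be the impedance trace itself (an identity forward model), which is an assumption and not the authors' seismic forward modelling.

```python
# Hedged sketch of the kind of genetic algorithm described above (elitism,
# uniform crossover with probability ~0.9, mutation probability 0.01, L2
# sample-to-sample misfit). The "forward model" here is the identity: the
# chromosome is the impedance trace itself, which is a toy simplification.
import numpy as np

rng = np.random.default_rng(42)
reference = np.cumsum(rng.normal(0, 1, 64))     # toy reference impedance trace

POP, GEN, PC, PM, ELITE = 80, 300, 0.9, 0.01, 2

def fitness(trace):
    return -np.linalg.norm(trace - reference)   # negative L2 misfit

def uniform_crossover(a, b):
    mask = rng.random(a.size) < 0.5
    return np.where(mask, a, b)

def mutate(trace):
    mask = rng.random(trace.size) < PM
    return trace + mask * rng.normal(0, 0.5, trace.size)

population = [rng.normal(0, 3, reference.size) for _ in range(POP)]
for _ in range(GEN):
    scores = np.array([fitness(ind) for ind in population])
    order = np.argsort(scores)[::-1]
    elite = [population[i].copy() for i in order[:ELITE]]   # elitism strategy
    children = []
    while len(children) < POP - ELITE:
        i, j = rng.choice(order[:POP // 2], size=2, replace=False)
        a, b = population[i], population[j]
        child = uniform_crossover(a, b) if rng.random() < PC else a.copy()
        children.append(mutate(child))
    population = elite + children

best = max(population, key=fitness)
print("final L2 misfit:", np.linalg.norm(best - reference))
```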

  14. Inverse Estimation of Surface Radiation Properties Using Repulsive Particle Swarm Optimization Algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Lee, Kyun Ho [Sejong University, Sejong (Korea, Republic of); Kim, Ki Wan [Agency for Defense Development, Daejeon (Korea, Republic of)

    2014-09-15

The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as an inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performance of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.

  15. Inverse Estimation of Surface Radiation Properties Using Repulsive Particle Swarm Optimization Algorithm

    International Nuclear Information System (INIS)

    Lee, Kyun Ho; Kim, Ki Wan

    2014-01-01

The heat transfer mechanism for radiation is directly related to the emission of photons and electromagnetic waves. Depending on the participation of the medium, the radiation can be classified into two forms: surface and gas radiation. In the present study, unknown radiation properties were estimated using an inverse boundary analysis of surface radiation in an axisymmetric cylindrical enclosure. For efficiency, a repulsive particle swarm optimization (RPSO) algorithm, which is a relatively recent heuristic search method, was used as an inverse solver. By comparing the convergence rates and accuracies with the results of a genetic algorithm (GA), the performance of the proposed RPSO algorithm as an inverse solver was verified when applied to the inverse analysis of the surface radiation problem.

  16. Design and implementation of adaptive inverse control algorithm for a micro-hand control system

    Directory of Open Access Journals (Sweden)

    Wan-Cheng Wang

    2014-01-01

    Full Text Available The Letter proposes an online tuned adaptive inverse position control algorithm for a micro-hand. First, the configuration of the micro-hand is discussed. Next, a kinematic analysis of the micro-hand is investigated and then the relationship between the rotor position of micro-permanent magnet synchronous motor and the tip of the micro-finger is derived. After that, an online tuned adaptive inverse control algorithm, which includes an adaptive inverse model and an adaptive inverse control, is designed. The online tuned adaptive inverse control algorithm has better performance than the proportional–integral control algorithm does. In addition, to avoid damaging the object during the grasping process, an online force control algorithm is proposed here as well. An embedded micro-computer, cRIO-9024, is used to realise the whole position control algorithm and the force control algorithm by using software. As a result, the hardware circuit is very simple. Experimental results show that the proposed system can provide fast transient responses, good load disturbance responses, good tracking responses and satisfactory grasping responses.

  17. Inverse Monte Carlo: a unified reconstruction algorithm for SPECT

    International Nuclear Information System (INIS)

    Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.

    1985-01-01

Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT) providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (Maximum Likelihood) algorithm although other solution algorithms, such as Least Squares, could be employed. While the present results specifically consider the reconstruction of camera-based Single Photon Emission Computed Tomographic (SPECT) images, the technique is equally valid for Positron Emission Tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented which demonstrate the quantitative compensation for scatter and attenuation for a two dimensional (single slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three dimensional reconstruction including compensation for out-of-plane scatter.
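    The record states that the linear system assembled from the Monte Carlo-modelled transport is solved with the EM (maximum likelihood) algorithm. Below is a hedged sketch of the standard ML-EM multiplicative update for a system matrix A and projection data y; the random placeholder matrix stands in for the Monte Carlo-derived system matrix, which is not reproduced here.

```python
# Standard ML-EM update for emission tomography, sketched with a random
# placeholder system matrix rather than the Monte Carlo-derived matrix the
# record describes.
import numpy as np

def mlem(A, y, n_iter=50):
    """Maximum-likelihood EM: x <- x / sens * A.T @ (y / (A @ x))."""
    x = np.ones(A.shape[1])
    sensitivity = A.sum(axis=0)               # column sums, sum_i A_ij
    for _ in range(n_iter):
        projection = A @ x
        projection[projection == 0] = 1e-12   # avoid division by zero
        x *= (A.T @ (y / projection)) / sensitivity
    return x

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    A = rng.random((120, 40))                 # placeholder system matrix
    x_true = rng.random(40)
    y = rng.poisson(A @ x_true * 50) / 50.0   # noisy "projections"
    x_hat = mlem(A, y)
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```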

  18. Inversion algorithms for large-scale geophysical electromagnetic measurements

    International Nuclear Information System (INIS)

    Abubakar, A; Habashy, T M; Li, M; Liu, J

    2009-01-01

Low-frequency surface electromagnetic prospecting methods have been gaining a lot of interest because of their capabilities to directly detect hydrocarbon reservoirs and to complement seismic measurements for geophysical exploration applications. There are two types of surface electromagnetic surveys. The first is an active measurement where we use an electric dipole source towed by a ship over an array of seafloor receivers. This measurement is called the controlled-source electromagnetic (CSEM) method. The second is the Magnetotelluric (MT) method driven by natural sources. This passive measurement also uses an array of seafloor receivers. Both surface electromagnetic methods measure electric and magnetic field vectors. In order to extract maximal information from these CSEM and MT data we employ a nonlinear inversion approach in their interpretation. We present two types of inversion approaches. The first approach is the so-called pixel-based inversion (PBI) algorithm. In this approach the investigation domain is subdivided into pixels, and by using an optimization process the conductivity distribution inside the domain is reconstructed. The optimization process uses the Gauss–Newton minimization scheme augmented with various forms of regularization. To automate the algorithm, the regularization term is incorporated using a multiplicative cost function. This PBI approach has demonstrated its ability to retrieve reasonably good conductivity images. However, the reconstructed boundaries and conductivity values of the imaged anomalies are usually not quantitatively resolved. Nevertheless, the PBI approach can provide useful information on the location, the shape and the conductivity of the hydrocarbon reservoir. The second method is the so-called model-based inversion (MBI) algorithm, which uses a priori information on the geometry to reduce the number of unknown parameters and to improve the quality of the reconstructed conductivity image. This MBI approach can

  19. Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms

    International Nuclear Information System (INIS)

    Ebert, M.

    1997-01-01

This is the second article in a three part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy, and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy, for searching for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered - those associated with mathematical programming which employ specific search techniques, linear programming type searches or artificial intelligence - and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)

  20. Inverse synthetic aperture radar imaging principles, algorithms and applications

    CERN Document Server

    Chen , Victor C

    2014-01-01

    Inverse Synthetic Aperture Radar Imaging: Principles, Algorithms and Applications is based on the latest research on ISAR imaging of moving targets and non-cooperative target recognition (NCTR). With a focus on the advances and applications, this book will provide readers with a working knowledge on various algorithms of ISAR imaging of targets and implementation with MATLAB. These MATLAB algorithms will prove useful in order to visualize and manipulate some simulated ISAR images.

  1. Inverse kinematics algorithm for a six-link manipulator using a polynomial expression

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-01-01

    This report is concerned with the forward and inverse kinematics problem relevant to a six-link robot manipulator. In order to derive the kinematic relationships between links, the vector rotation operator was applied instead of the conventional homogeneous transformation. The exact algorithm for solving the inverse problem was obtained by transforming kinematics equations into a polynomial. As shown in test calculations, the accuracies of numerical solutions obtained by means of the present approach are found to be quite high. The algorithm proposed permits to find out all feasible solutions for the given inverse problem. (author)

  2. An adaptive inverse kinematics algorithm for robot manipulators

    Science.gov (United States)

    Colbaugh, R.; Glass, K.; Seraji, H.

    1990-01-01

    An adaptive algorithm for solving the inverse kinematics problem for robot manipulators is presented. The algorithm is derived using model reference adaptive control (MRAC) theory and is computationally efficient for online applications. The scheme requires no a priori knowledge of the kinematics of the robot if Cartesian end-effector sensing is available, and it requires knowledge of only the forward kinematics if joint position sensing is used. Computer simulation results are given for the redundant seven-DOF robotics research arm, demonstrating that the proposed algorithm yields accurate joint angle trajectories for a given end-effector position/orientation trajectory.

  3. Caliko: An Inverse Kinematics Software Library Implementation of the FABRIK Algorithm

    OpenAIRE

    Lansley, Alastair; Vamplew, Peter; Smith, Philip; Foale, Cameron

    2016-01-01

    The Caliko library is an implementation of the FABRIK (Forward And Backward Reaching Inverse Kinematics) algorithm written in Java. The inverse kinematics (IK) algorithm is implemented in both 2D and 3D, and incorporates a variety of joint constraints as well as the ability to connect multiple IK chains together in a hierarchy. The library allows for the simple creation and solving of multiple IK chains as well as visualisation of these solutions. It is licensed under the MIT software license...

  4. Particle Swarm Optimization algorithms for geophysical inversion, practical hints

    Science.gov (United States)

    Garcia Gonzalo, E.; Fernandez Martinez, J.; Fernandez Alvarez, J.; Kuzma, H.; Menendez Perez, C.

    2008-12-01

PSO is a stochastic optimization technique that has been successfully used in many different engineering fields. The PSO algorithm can be physically interpreted as a stochastic damped mass-spring system (Fernandez Martinez and Garcia Gonzalo 2008). Based on this analogy we present a whole family of PSO algorithms and their respective first order and second order stability regions. Their performance is also checked using synthetic functions (Rosenbrock and Griewank) showing a degree of ill-posedness similar to that found in many geophysical inverse problems. Finally, we present the application of these algorithms to the analysis of a Vertical Electrical Sounding inverse problem associated with a seawater intrusion in a coastal aquifer in South Spain. We analyze the role of PSO parameters (inertia, local and global accelerations and discretization step), both in convergence curves and in the a posteriori sampling of the depth of an intrusion. Comparison is made with binary genetic algorithms and simulated annealing. As a result of this analysis, practical hints are given to select the correct algorithm and to tune the corresponding PSO parameters. Fernandez Martinez, J.L., Garcia Gonzalo, E., 2008a. The generalized PSO: a new door to PSO evolution. Journal of Artificial Evolution and Applications. DOI:10.1155/2008/861275.
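    As a companion to the parameter discussion above, here is a hedged sketch of a standard PSO loop on the Rosenbrock test function mentioned in the record; the inertia and acceleration values are typical textbook settings, not the tuned values or the generalized variants analysed by the authors.

```python
# Standard PSO loop on the Rosenbrock test function mentioned above. Inertia w
# and accelerations c1, c2 are typical textbook values, not the tuned values
# from the record.
import numpy as np

def rosenbrock(x):
    return np.sum(100.0 * (x[1:] - x[:-1] ** 2) ** 2 + (1 - x[:-1]) ** 2)

rng = np.random.default_rng(7)
n_particles, dim, n_iter = 40, 5, 500
w, c1, c2 = 0.72, 1.49, 1.49          # inertia, local and global accelerations

pos = rng.uniform(-2, 2, (n_particles, dim))
vel = np.zeros_like(pos)
pbest = pos.copy()
pbest_val = np.array([rosenbrock(p) for p in pos])
gbest = pbest[np.argmin(pbest_val)].copy()

for _ in range(n_iter):
    r1, r2 = rng.random((2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = pos + vel
    vals = np.array([rosenbrock(p) for p in pos])
    improved = vals < pbest_val
    pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
    gbest = pbest[np.argmin(pbest_val)].copy()

print("best Rosenbrock value found:", pbest_val.min())
```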

  5. Nonlinear inversion of potential-field data using a hybrid-encoding genetic algorithm

    Science.gov (United States)

    Chen, C.; Xia, J.; Liu, J.; Feng, G.

    2006-01-01

    Using a genetic algorithm to solve an inverse problem of complex nonlinear geophysical equations is advantageous because it does not require computer gradients of models or "good" initial models. The multi-point search of a genetic algorithm makes it easier to find the globally optimal solution while avoiding falling into a local extremum. As is the case in other optimization approaches, the search efficiency for a genetic algorithm is vital in finding desired solutions successfully in a multi-dimensional model space. A binary-encoding genetic algorithm is hardly ever used to resolve an optimization problem such as a simple geophysical inversion with only three unknowns. The encoding mechanism, genetic operators, and population size of the genetic algorithm greatly affect search processes in the evolution. It is clear that improved operators and proper population size promote the convergence. Nevertheless, not all genetic operations perform perfectly while searching under either a uniform binary or a decimal encoding system. With the binary encoding mechanism, the crossover scheme may produce more new individuals than with the decimal encoding. On the other hand, the mutation scheme in a decimal encoding system will create new genes larger in scope than those in the binary encoding. This paper discusses approaches of exploiting the search potential of genetic operations in the two encoding systems and presents an approach with a hybrid-encoding mechanism, multi-point crossover, and dynamic population size for geophysical inversion. We present a method that is based on the routine in which the mutation operation is conducted in the decimal code and multi-point crossover operation in the binary code. The mix-encoding algorithm is called the hybrid-encoding genetic algorithm (HEGA). HEGA provides better genes with a higher probability by a mutation operator and improves genetic algorithms in resolving complicated geophysical inverse problems. Another significant

  6. Fourier rebinning algorithm for inverse geometry CT.

    Science.gov (United States)

Mazin, Samuel R; Pelc, Norbert J

    2008-11-01

    Inverse geometry computed tomography (IGCT) is a new type of volumetric CT geometry that employs a large array of x-ray sources opposite a smaller detector array. Volumetric coverage and high isotropic resolution produce very large data sets and therefore require a computationally efficient three-dimensional reconstruction algorithm. The purpose of this work was to adapt and evaluate a fast algorithm based on Defrise's Fourier rebinning (FORE), originally developed for positron emission tomography. The results were compared with the average of FDK reconstructions from each source row. The FORE algorithm is an order of magnitude faster than the FDK-type method for the case of 11 source rows. In the center of the field-of-view both algorithms exhibited the same resolution and noise performance. FORE exhibited some resolution loss (and less noise) in the periphery of the field-of-view. FORE appears to be a fast and reasonably accurate reconstruction method for IGCT.

  7. Research on the Random Shock Vibration Test Based on the Filter-X LMS Adaptive Inverse Control Algorithm

    Directory of Open Access Journals (Sweden)

    Wang Wei

    2016-01-01

Full Text Available The theory and algorithms of adaptive inverse control are presented, showing that an adaptive inverse control strategy can effectively eliminate the influence of noise on system control. A frequency-domain filter-X LMS adaptive inverse control algorithm is proposed and applied to the random shock vibration control process of a two-exciter hydraulic vibration test system, and the procedure for realizing the random shock vibration test with adaptive inverse control strategies is summarized. Closed-loop and field tests show that the frequency-domain filter-X LMS adaptive inverse control algorithm can achieve high-precision control of the random shock vibration test.
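    The record applies a frequency-domain filter-X LMS algorithm to a two-exciter hydraulic rig. The sketch below shows only the core time-domain filtered-X LMS update with a toy FIR secondary path; the frequency-domain formulation and the hydraulic test system are not modelled, and all signals and path coefficients are assumptions for illustration.

```python
# Basic time-domain filtered-X LMS update (the record uses a frequency-domain
# variant; this is only the core idea). The secondary path S is a toy FIR
# model, not the hydraulic vibration rig described above.
import numpy as np

rng = np.random.default_rng(11)
n = 4000
x = rng.normal(size=n)                       # reference signal
primary = np.array([0.8, -0.3, 0.1])         # unknown primary path (toy)
S = np.array([0.6, 0.2])                     # secondary-path FIR model (toy)
d = np.convolve(x, primary)[:n]              # disturbance to be cancelled

L, mu = 8, 0.01
w = np.zeros(L)                              # adaptive controller taps
x_buf = np.zeros(L)                          # reference history
fx_buf = np.zeros(L)                         # filtered-reference history
y_buf = np.zeros(len(S))                     # controller-output history
err = np.zeros(n)

for k in range(n):
    x_buf = np.roll(x_buf, 1); x_buf[0] = x[k]
    y = w @ x_buf                            # controller output
    y_buf = np.roll(y_buf, 1); y_buf[0] = y
    err[k] = d[k] - S @ y_buf                # residual after secondary path
    xf = S @ x_buf[:len(S)]                  # reference filtered by S ("filtered X")
    fx_buf = np.roll(fx_buf, 1); fx_buf[0] = xf
    w += mu * err[k] * fx_buf                # LMS update on filtered reference

print("mean-square error, first vs last 500 samples:",
      np.mean(err[:500] ** 2), np.mean(err[-500:] ** 2))
```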

  8. Micro-seismic waveform matching inversion based on gravitational search algorithm and parallel computation

    Science.gov (United States)

    Jiang, Y.; Xing, H. L.

    2016-12-01

Micro-seismic events induced by water injection, mining activity or oil/gas extraction are quite informative, and their interpretation can be applied to the reconstruction of underground stress and the monitoring of hydraulic fracturing progress in oil/gas reservoirs. The source characteristics and locations are the crucial parameters required for these purposes, and they can be obtained through the waveform matching inversion (WMI) method. It is therefore imperative to develop a WMI algorithm with high accuracy and convergence speed. Heuristic algorithms, as a class of nonlinear methods, possess very high convergence speed and a good capacity to escape local minima, and have been applied successfully in many areas (e.g. image processing, artificial intelligence). However, their effectiveness for micro-seismic WMI is still poorly investigated; very few publications address this subject. In this research an advanced heuristic algorithm, the gravitational search algorithm (GSA), is proposed to estimate the focal mechanism (strike, dip and rake angles) and the source locations in three dimensions. Unlike traditional inversion methods, the heuristic inversion does not require an approximation of the Green's function. The method interacts directly with a CPU-parallelized finite difference forward modelling engine and updates the model parameters under GSA criteria. The effectiveness of this method is tested with synthetic data from a multi-layered elastic model; the results indicate that GSA can be applied successfully to WMI and has unique advantages. Keywords: Micro-seismicity, waveform matching inversion, gravitational search algorithm, parallel computation

  9. Caliko: An Inverse Kinematics Software Library Implementation of the FABRIK Algorithm

    Directory of Open Access Journals (Sweden)

    Alastair Lansley

    2016-09-01

Full Text Available The Caliko library is an implementation of the FABRIK (Forward And Backward Reaching Inverse Kinematics) algorithm written in Java. The inverse kinematics (IK) algorithm is implemented in both 2D and 3D, and incorporates a variety of joint constraints as well as the ability to connect multiple IK chains together in a hierarchy. The library allows for the simple creation and solving of multiple IK chains as well as visualisation of these solutions. It is licensed under the MIT software license and the source code is freely available for use and modification at: https://github.com/feduni/caliko
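    For readers unfamiliar with FABRIK, the following is a minimal, unconstrained 2D solver sketch written independently of the Caliko source; it omits the joint constraints, chain hierarchies and visualisation features the library provides.

```python
# Minimal, unconstrained 2D FABRIK solver (forward and backward reaching
# passes), sketched independently of the Caliko source code.
import numpy as np

def fabrik(joints, target, tol=1e-4, max_iter=100):
    joints = np.asarray(joints, dtype=float)
    lengths = np.linalg.norm(np.diff(joints, axis=0), axis=1)
    base = joints[0].copy()
    if np.linalg.norm(target - base) > lengths.sum():
        # Target unreachable: stretch the chain toward it.
        for i in range(len(lengths)):
            d = np.linalg.norm(target - joints[i])
            joints[i + 1] = joints[i] + (target - joints[i]) * lengths[i] / d
        return joints
    for _ in range(max_iter):
        # Backward pass: place the end effector on the target, work to the base.
        joints[-1] = target
        for i in range(len(joints) - 2, -1, -1):
            d = np.linalg.norm(joints[i] - joints[i + 1])
            joints[i] = joints[i + 1] + (joints[i] - joints[i + 1]) * lengths[i] / d
        # Forward pass: re-anchor the base, work toward the end effector.
        joints[0] = base
        for i in range(len(joints) - 1):
            d = np.linalg.norm(joints[i + 1] - joints[i])
            joints[i + 1] = joints[i] + (joints[i + 1] - joints[i]) * lengths[i] / d
        if np.linalg.norm(joints[-1] - target) < tol:
            break
    return joints

if __name__ == "__main__":
    chain = [[0, 0], [1, 0], [2, 0], [3, 0]]
    print(fabrik(chain, np.array([1.5, 2.0])))
```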

  10. A general rough-surface inversion algorithm: Theory and application to SAR data

    Science.gov (United States)

    Moghaddam, M.

    1993-01-01

    Rough-surface inversion has significant applications in interpretation of SAR data obtained over bare soil surfaces and agricultural lands. Due to the sparsity of data and the large pixel size in SAR applications, it is not feasible to carry out inversions based on numerical scattering models. The alternative is to use parameter estimation techniques based on approximate analytical or empirical models. Hence, there are two issues to be addressed, namely, what model to choose and what estimation algorithm to apply. Here, a small perturbation model (SPM) is used to express the backscattering coefficients of the rough surface in terms of three surface parameters. The algorithm used to estimate these parameters is based on a nonlinear least-squares criterion. The least-squares optimization methods are widely used in estimation theory, but the distinguishing factor for SAR applications is incorporating the stochastic nature of both the unknown parameters and the data into formulation, which will be discussed in detail. The algorithm is tested with synthetic data, and several Newton-type least-squares minimization methods are discussed to compare their convergence characteristics. Finally, the algorithm is applied to multifrequency polarimetric SAR data obtained over some bare soil and agricultural fields. Results will be shown and compared to ground-truth measurements obtained from these areas. The strength of this general approach to inversion of SAR data is that it can be easily modified for use with any scattering model without changing any of the inversion steps. Note also that, for the same reason it is not limited to inversion of rough surfaces, and can be applied to any parameterized scattering process.

  11. VES/TEM 1D joint inversion by using Controlled Random Search (CRS) algorithm

    Science.gov (United States)

    Bortolozo, Cassiano Antonio; Porsani, Jorge Luís; Santos, Fernando Acácio Monteiro dos; Almeida, Emerson Rodrigo

    2015-01-01

Electrical (DC) and Transient Electromagnetic (TEM) soundings are used in a great number of environmental, hydrological, and mining exploration studies. Usually, data interpretation is accomplished by individual 1D models, often resulting in ambiguous models. This can be explained by the way the two different methodologies sample the medium beneath the surface. Vertical Electrical Sounding (VES) is good at marking resistive structures, while Transient Electromagnetic sounding (TEM) is very sensitive to conductive structures. Another difference is that VES is better at detecting shallow structures, while TEM soundings can reach deeper layers. A Matlab program for 1D joint inversion of VES and TEM soundings was developed aiming at exploring the best of both methods. The program uses the CRS - Controlled Random Search - algorithm for both single and 1D joint inversions. Inversion programs usually use Marquardt-type algorithms, but for electrical and electromagnetic methods these algorithms may find a local minimum or fail to converge. Initially, the algorithm was tested with synthetic data, and then it was used to invert experimental data from two places in the Paraná sedimentary basin (Bebedouro and Pirassununga cities), both located in São Paulo State, Brazil. The geoelectric model obtained from the 1D joint inversion of VES and TEM data is consistent with the real geological conditions, and ambiguities were minimized. Results with synthetic and real data show that 1D VES/TEM joint inversion better recovers simulated models and shows a great potential in geological studies, especially in hydrogeological studies.
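    The record's inversions rely on a Controlled Random Search (CRS) algorithm. Below is a hedged sketch of one classic CRS move (Price-style reflection of a random point through the centroid of others, replacing the worst point when the trial improves); the VES/TEM forward models are not reproduced, and a toy misfit function stands in for the data misfit.

```python
# Classic Price-style Controlled Random Search move (reflect a random point
# through the centroid of n others; replace the population's worst point if
# the trial is better). The VES/TEM forward models from the record are not
# reproduced here; a toy sphere objective stands in for the data misfit.
import numpy as np

def crs(objective, bounds, pop_size=50, n_iter=5000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    dim = lo.size
    pop = rng.uniform(lo, hi, (pop_size, dim))
    vals = np.array([objective(p) for p in pop])
    for _ in range(n_iter):
        worst = np.argmax(vals)
        idx = rng.choice(pop_size, size=dim + 1, replace=False)
        centroid = pop[idx[:dim]].mean(axis=0)
        trial = 2.0 * centroid - pop[idx[dim]]        # reflection step
        if np.all(trial >= lo) and np.all(trial <= hi):
            f_trial = objective(trial)
            if f_trial < vals[worst]:
                pop[worst], vals[worst] = trial, f_trial
    best = np.argmin(vals)
    return pop[best], vals[best]

if __name__ == "__main__":
    misfit = lambda m: np.sum((m - np.array([1.0, -2.0, 0.5])) ** 2)
    bounds = (np.full(3, -5.0), np.full(3, 5.0))
    print(crs(misfit, bounds))
```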

  12. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques are needed to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem we use blockiness as a prior information of the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm as a combination of the Bregman iteration and the proximal forward backward operator splitting method is developed to solve the arranged problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allow efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data which is based on Born approximation, 2) computing interval velocities from RMS velocities via Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.

  13. A gradient based algorithm to solve inverse plane bimodular problems of identification

    Science.gov (United States)

    Ran, Chunjiang; Yang, Haitian; Zhang, Guoqing

    2018-02-01

This paper presents a gradient-based algorithm to solve inverse plane bimodular problems of identifying constitutive parameters, including tensile/compressive moduli and tensile/compressive Poisson's ratios. For the forward bimodular problem, an FE tangent stiffness matrix is derived, facilitating the implementation of gradient-based algorithms; for the inverse bimodular problem of identification, a two-level sensitivity-analysis-based strategy is proposed. Numerical verification in terms of accuracy and efficiency is provided, and the impacts of the initial guess, the number of measurement points, regional inhomogeneity, and noisy data on the identification are taken into account.

  14. A systematic approach to robust preconditioning for gradient-based inverse scattering algorithms

    International Nuclear Information System (INIS)

    Nordebo, Sven; Fhager, Andreas; Persson, Mikael; Gustafsson, Mats

    2008-01-01

This paper presents a systematic approach to robust preconditioning for gradient-based nonlinear inverse scattering algorithms. In particular, one- and two-dimensional inverse problems are considered where the permittivity and conductivity profiles are unknown and the input data consist of the scattered field over a certain bandwidth. A time-domain least-squares formulation is employed and the inversion algorithm is based on a conjugate gradient or quasi-Newton algorithm together with an FDTD-electromagnetic solver. A Fisher information analysis is used to estimate the Hessian of the error functional. A robust preconditioner is then obtained by incorporating a parameter scaling such that the scaled Fisher information has a unit diagonal. By improving the conditioning of the Hessian, the convergence rate of the conjugate gradient or quasi-Newton methods is improved. The preconditioner is robust in the sense that the scaling, i.e. the diagonal Fisher information, is virtually invariant to the numerical resolution and the discretization model that is employed. Numerical examples of image reconstruction are included to illustrate the efficiency of the proposed technique.

  15. Study on hybrid multi-objective optimization algorithm for inverse treatment planning of radiation therapy

    International Nuclear Information System (INIS)

    Li Guoli; Song Gang; Wu Yican

    2007-01-01

Inverse treatment planning for radiation therapy is a multi-objective optimization process. A hybrid multi-objective optimization algorithm is studied by combining simulated annealing (SA) and a genetic algorithm (GA). Test functions are used to analyze the efficiency of the algorithms. The hybrid multi-objective SA algorithm, whose displacement is based on the evolutionary strategy of the GA (crossover and mutation), is implemented in inverse planning of external beam radiation therapy by using two kinds of objective functions, namely an average-dose-distribution-based and a hybrid dose-volume-constraints-based objective function. The test calculations demonstrate that excellent convergence speed can be achieved. (authors)

  16. A Robust Inversion Algorithm for Surface Leaf and Soil Temperatures Using the Vegetation Clumping Index

    Directory of Open Access Journals (Sweden)

    Zunjian Bian

    2017-07-01

Full Text Available The inversion of land surface component temperatures is an essential source of information for mapping heat fluxes and the angular normalization of thermal infrared (TIR) observations. Leaf and soil temperatures can be retrieved using multiple-view-angle TIR observations. In a satellite-scale pixel, the clumping effect of vegetation is usually present, but it is not completely considered during the inversion process. Therefore, we introduced a simple inversion procedure that uses gap frequency with a clumping index (GCI) for leaf and soil temperatures over both crop and forest canopies. Simulated datasets corresponding to turbid vegetation, regularly planted crops and randomly distributed forest were generated using a radiosity model and were used to test the proposed inversion algorithm. The results indicated that the GCI algorithm performed well for both crop and forest canopies, with root mean squared errors of less than 1.0 °C against simulated values. The proposed inversion algorithm was also validated using measured datasets over orchard, maize and wheat canopies. Similar results were achieved, demonstrating that using the clumping index can improve inversion results. In all evaluations, we recommend using the GCI algorithm as a foundation for future satellite-based applications due to its straightforward form and robust performance for both crop and forest canopies using the vegetation clumping index.

  17. A compressed sensing based 3D resistivity inversion algorithm for hydrogeological applications

    Science.gov (United States)

    Ranjan, Shashi; Kambhammettu, B. V. N. P.; Peddinti, Srinivasa Rao; Adinarayana, J.

    2018-04-01

Image reconstruction from discrete electrical responses poses a number of computational and mathematical challenges. Application of smoothness-constrained regularized inversion from limited measurements may fail to detect resistivity anomalies and sharp interfaces separated by hydro-stratigraphic units. Under favourable conditions, compressed sensing (CS) can be thought of as an alternative for reconstructing image features by finding sparse solutions to highly underdetermined linear systems. This paper deals with the development of a CS-assisted, 3-D resistivity inversion algorithm for use by hydrogeologists and groundwater scientists. A CS-based l1-regularized least-squares algorithm was applied to solve the resistivity inversion problem. Sparseness in the model update vector is introduced through a block-oriented discrete cosine transformation, with recovery of the signal achieved through convex optimization. The equivalent quadratic program was solved using a primal-dual interior point method. The applicability of the proposed algorithm was demonstrated using synthetic and field examples drawn from hydrogeology. The proposed algorithm outperformed the conventional (smoothness-constrained) least-squares method in recovering the model parameters with much less data, yet preserving the sharp resistivity fronts separated by geologic layers. Resistivity anomalies represented by discrete homogeneous blocks embedded in contrasting geologic layers were better imaged using the proposed algorithm. In comparison to the conventional algorithm, CS resulted in an efficient (an increase in R2 from 0.62 to 0.78; a decrease in RMSE from 125.14 Ω-m to 72.46 Ω-m), reliable, and fast-converging (run time decreased by about 25%) solution.
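    The record solves the l1-regularized least-squares problem with a primal-dual interior point method and a block-oriented DCT sparsity basis. For illustration, the sketch below substitutes a simpler proximal method (ISTA) and a plain orthonormal DCT basis on a toy underdetermined system; it shows the structure of the optimization, not the authors' solver.

```python
# Hedged sketch: the record solves l1-regularized least squares with a
# primal-dual interior point method; here the simpler ISTA proximal iteration
# is shown instead, on a toy underdetermined system whose model is sparse in
# a discrete cosine basis.
import numpy as np
from scipy.fft import dct, idct

def soft_threshold(z, t):
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A, d, lam, n_iter=1500):
    """Minimize 0.5*||A idct(c) - d||^2 + lam*||c||_1 over DCT coefficients c."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2        # 1 / Lipschitz constant
    c = np.zeros(A.shape[1])
    for _ in range(n_iter):
        residual = A @ idct(c, norm="ortho") - d
        grad = dct(A.T @ residual, norm="ortho")  # chain rule through the basis
        c = soft_threshold(c - step * grad, step * lam)
    return idct(c, norm="ortho")

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    n, m = 200, 60                                # model size, number of data
    coeffs = np.zeros(n); coeffs[[3, 17, 40]] = [5.0, -3.0, 2.0]
    model_true = idct(coeffs, norm="ortho")
    A = rng.standard_normal((m, n))               # toy sensitivity matrix
    d = A @ model_true
    model_hat = ista(A, d, lam=0.05)
    print("relative error:",
          np.linalg.norm(model_hat - model_true) / np.linalg.norm(model_true))
```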

  18. Nonlinear Rayleigh wave inversion based on the shuffled frog-leaping algorithm

    Science.gov (United States)

    Sun, Cheng-Yu; Wang, Yan-Yan; Wu, Dun-Shi; Qin, Xiao-Jun

    2017-12-01

At present, near-surface shear wave velocities are mainly calculated through Rayleigh wave dispersion-curve inversions in engineering surface investigations, but the required calculations pose a highly nonlinear global optimization problem. In order to alleviate the risk of falling into a local optimal solution, this paper introduces a new global optimization method, the shuffled frog-leaping algorithm (SFLA), into the Rayleigh wave dispersion-curve inversion process. SFLA is a swarm-intelligence-based algorithm that simulates a group of frogs searching for food. It uses few parameters, achieves rapid convergence, and is capable of effective global searching. In order to test the reliability and calculation performance of SFLA, noise-free and noisy synthetic datasets were inverted. We conducted a comparative analysis with other established algorithms using the noise-free dataset, and then tested the ability of SFLA to cope with data noise. Finally, we inverted a real-world example to examine the applicability of SFLA. Results from both synthetic and field data demonstrated the effectiveness of SFLA in the interpretation of Rayleigh wave dispersion curves. We found that SFLA is superior to the established methods in terms of both reliability and computational efficiency, so it offers great potential to improve our ability to solve geophysical inversion problems.

  19. Multi-resolution inversion algorithm for the attenuated radon transform

    KAUST Repository

    Barbano, Paolo Emilio

    2011-09-01

We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed by combining a memory-efficient implementation of the analytical inversion formula (AIF [1], [2]) with a wavelet-based version of a recently discovered regularization technique [3]. The paper introduces all the main aspects of the new AIF, as well as numerical experiments on real and simulated data, which display a substantial improvement in reconstruction quality when compared to linear or iterative algorithms. © 2011 IEEE.

  20. 2.5D Inversion Algorithm of Frequency-Domain Airborne Electromagnetics with Topography

    Directory of Open Access Journals (Sweden)

    Jianjun Xi

    2016-01-01

Full Text Available We present a 2.5D inversion algorithm with topography for frequency-domain airborne electromagnetic data. The forward modeling is based on the edge finite element method and uses irregular hexahedra to adapt to the topography. The electric and magnetic fields are split into primary (background) and secondary (scattered) fields to eliminate the source singularity. For the multiple sources of the frequency-domain airborne electromagnetic method, we use the large-scale sparse-matrix parallel shared-memory direct solver PARDISO to solve the linear system of equations efficiently. The inversion algorithm is based on the Gauss-Newton method, which has an efficient convergence rate. The Jacobian matrix is calculated efficiently by "adjoint forward modelling". The synthetic inversion examples indicate that our proposed method is correct and effective. Furthermore, ignoring the topography effect can lead to incorrect results and interpretations.

  1. A new stochastic algorithm for inversion of dust aerosol size distribution

    Science.gov (United States)

    Wang, Li; Li, Feng; Yang, Ma-ying

    2015-08-01

The dust aerosol size distribution is an important source of information about atmospheric aerosols, and it can be determined from multiwavelength extinction measurements. This paper describes a stochastic inverse technique based on the artificial bee colony (ABC) algorithm to invert the dust aerosol size distribution by the light extinction method. The direct problems for the size distributions of water drops and dust particles, which are the main elements of atmospheric aerosols, are solved by Mie theory and the Lambert-Beer law in the multispectral region. The parameters of three widely used functions, i.e. the log-normal distribution (L-N), the Junge distribution (J-J), and the normal distribution (N-N), which can provide the most useful representation of aerosol size distributions, are then inverted by the ABC algorithm in the dependent model. Numerical results show that the ABC algorithm can be successfully applied to recover the aerosol size distribution with high feasibility and reliability even in the presence of random noise.
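    The record inverts size-distribution parameters with the artificial bee colony (ABC) algorithm. The following hedged sketch shows a basic ABC loop (employed, onlooker and scout phases) on a toy parameter-recovery objective; the Mie/Lambert-Beer forward model used in the paper is not reproduced.

```python
# Basic artificial bee colony (ABC) loop (employed, onlooker and scout phases),
# sketched on a toy parameter-recovery objective. The Mie / Lambert-Beer
# forward model used in the record is not reproduced here.
import numpy as np

def abc_minimize(objective, lo, hi, n_sources=20, limit=30, n_cycles=300, seed=0):
    rng = np.random.default_rng(seed)
    dim = lo.size
    foods = rng.uniform(lo, hi, (n_sources, dim))
    vals = np.array([objective(f) for f in foods])
    trials = np.zeros(n_sources, dtype=int)

    def try_neighbor(i):
        k = rng.integers(n_sources - 1)
        k = k if k < i else k + 1                 # partner different from i
        j = rng.integers(dim)
        cand = foods[i].copy()
        cand[j] += rng.uniform(-1, 1) * (foods[i, j] - foods[k, j])
        cand = np.clip(cand, lo, hi)
        f = objective(cand)
        if f < vals[i]:
            foods[i], vals[i], trials[i] = cand, f, 0
        else:
            trials[i] += 1

    for _ in range(n_cycles):
        for i in range(n_sources):                # employed bee phase
            try_neighbor(i)
        fit = 1.0 / (1.0 + vals)                  # fitness for roulette selection
        probs = fit / fit.sum()
        for _ in range(n_sources):                # onlooker bee phase
            try_neighbor(rng.choice(n_sources, p=probs))
        worn = np.argmax(trials)                  # scout bee phase
        if trials[worn] > limit:
            foods[worn] = rng.uniform(lo, hi, dim)
            vals[worn] = objective(foods[worn])
            trials[worn] = 0
    best = np.argmin(vals)
    return foods[best], vals[best]

if __name__ == "__main__":
    target = np.array([0.3, 1.5, 2.0])            # stand-in distribution parameters
    misfit = lambda p: np.sum((p - target) ** 2)
    print(abc_minimize(misfit, lo=np.zeros(3), hi=np.full(3, 5.0)))
```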

  2. A fast algorithm for sparse matrix computations related to inversion

    International Nuclear Information System (INIS)

    Li, S.; Wu, W.; Darve, E.

    2013-01-01

We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green’s functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors

  3. A fast algorithm for sparse matrix computations related to inversion

    Energy Technology Data Exchange (ETDEWEB)

    Li, S., E-mail: lisong@stanford.edu [Institute for Computational and Mathematical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Stanford, CA 94305 (United States); Wu, W. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Packard Building, Room 268, Stanford, CA 94305 (United States); Darve, E. [Institute for Computational and Mathematical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Stanford, CA 94305 (United States); Department of Mechanical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Room 209, Stanford, CA 94305 (United States)

    2013-06-01

We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green’s functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round

  4. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    Science.gov (United States)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    Ocean color remote sensing provides synoptic-scale, near-daily observations of marine inherent optical properties (IOPs). Whilst contemporary ocean color algorithms are known to perform well in deep oceanic waters, they have difficulty operating in optically clear, shallow marine environments where light reflected from the seafloor contributes to the water-leaving radiance. The effect of benthic reflectance in optically shallow waters is known to adversely affect algorithms developed for optically deep waters [1, 2]. Whilst adapted versions of optically deep ocean color algorithms have been applied to optically shallow regions with reasonable success [3], there is presently no approach that directly corrects for bottom reflectance using existing knowledge of bathymetry and benthic albedo. To address the issue of optically shallow waters, we have developed a semi-analytical ocean color inversion algorithm: the Shallow Water Inversion Model (SWIM). SWIM uses existing bathymetry and a derived benthic albedo map to correct for bottom reflectance using the semi-analytical model of Lee et al. [4]. The algorithm was incorporated into the NASA Ocean Biology Processing Group's L2GEN program and tested in optically shallow waters of the Great Barrier Reef, Australia. In lieu of readily available in situ matchup data, we present a comparison between SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Property Algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA).

  5. Identity of the conjugate gradient and Lanczos algorithms for matrix inversion in lattice fermion calculations

    International Nuclear Information System (INIS)

    Burkitt, A.N.; Irving, A.C.

    1988-01-01

    Two of the methods that are widely used in lattice gauge theory calculations requiring inversion of the fermion matrix are the Lanczos and the conjugate gradient algorithms. Those algorithms are already known to be closely related. In fact for matrix inversion, in exact arithmetic, they give identical results at each iteration and are just alternative formulations of a single algorithm. This equivalence survives rounding errors. We give the identities between the coefficients of the two formulations, enabling many of the best features of them to be combined. (orig.)
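
    As a point of reference for the equivalence discussed above, the following is a minimal sketch of the standard conjugate gradient iteration for solving Ax = b with a symmetric positive-definite matrix; it is not the authors' lattice-QCD implementation, and the matrix in the example is a random stand-in rather than a fermion matrix. The coefficients alpha and beta generated at each step are the quantities whose identities with the Lanczos coefficients are discussed in the record.

        import numpy as np

        def conjugate_gradient(A, b, tol=1e-10, max_iter=1000):
            """Minimal CG for a symmetric positive-definite A (illustrative sketch)."""
            x = np.zeros_like(b)
            r = b - A @ x            # residual
            p = r.copy()             # search direction
            rs_old = r @ r
            for k in range(max_iter):
                Ap = A @ p
                alpha = rs_old / (p @ Ap)      # step length (related to the Lanczos coefficients)
                x += alpha * p
                r -= alpha * Ap
                rs_new = r @ r
                if np.sqrt(rs_new) < tol:
                    break
                beta = rs_new / rs_old         # direction-update coefficient
                p = r + beta * p
                rs_old = rs_new
            return x

        # Example on a small random SPD matrix
        rng = np.random.default_rng(0)
        M = rng.standard_normal((50, 50))
        A = M @ M.T + 50 * np.eye(50)          # make it SPD and well conditioned
        b = rng.standard_normal(50)
        x = conjugate_gradient(A, b)
        print(np.linalg.norm(A @ x - b))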

  6. Amplitude inversion of the 2D analytic signal of magnetic anomalies through the differential evolution algorithm

    Science.gov (United States)

    Ekinci, Yunus Levent; Özyalın, Şenol; Sındırgı, Petek; Balkaya, Çağlayan; Göktürkler, Gökhan

    2017-12-01

    In this work, analytic signal amplitude (ASA) inversion of total field magnetic anomalies has been achieved by differential evolution (DE), which is a population-based evolutionary metaheuristic algorithm. Using an elitist strategy, the applicability and effectiveness of the proposed inversion algorithm have been evaluated through the anomalies due to both hypothetical model bodies and real isolated geological structures. Some parameter tuning studies relying mainly on choosing the optimum control parameters of the algorithm have also been performed to enhance the performance of the proposed metaheuristic. Since ASAs of magnetic anomalies are independent of both ambient field direction and the direction of magnetization of the causative sources in a two-dimensional (2D) case, inversions of synthetic noise-free and noisy single model anomalies have produced satisfactory solutions showing the practical applicability of the algorithm. Moreover, hypothetical studies using multiple model bodies have clearly shown that the DE algorithm is able to cope with complicated anomalies and some interferences from neighbouring sources. The proposed algorithm has then been used to invert small- (120 m) and large-scale (40 km) magnetic profile anomalies of an iron deposit (Kesikköprü-Bala, Turkey) and a deep-seated magnetized structure (Sea of Marmara, Turkey), respectively, to determine depths, geometries and exact origins of the source bodies. Inversion studies have yielded geologically reasonable solutions which are also in good accordance with the results of normalized full gradient and Euler deconvolution techniques. Thus, we propose the use of DE not only for the amplitude inversion of 2D analytical signals of magnetic profile anomalies having induced or remanent magnetization effects but also for low-dimensional data inversions in geophysics. A part of this paper was presented as an abstract at the 2nd International Conference on Civil and Environmental Engineering, 8
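
    For orientation, the sketch below shows a generic DE/rand/1/bin loop minimizing a user-supplied misfit function. It is only an illustration of the kind of population-based search described above, not the authors' elitist DE implementation, and the toy forward model, bounds and control parameters (F, CR, population size) are assumptions chosen for the example.

        import numpy as np

        def differential_evolution(misfit, bounds, pop_size=30, F=0.7, CR=0.9, n_gen=200, seed=0):
            """Generic DE/rand/1/bin minimizer; `misfit` maps a model vector to a scalar."""
            rng = np.random.default_rng(seed)
            lo, hi = bounds[:, 0], bounds[:, 1]
            dim = len(lo)
            pop = lo + rng.random((pop_size, dim)) * (hi - lo)
            cost = np.array([misfit(m) for m in pop])
            for _ in range(n_gen):
                for i in range(pop_size):
                    a, b, c = rng.choice([j for j in range(pop_size) if j != i], 3, replace=False)
                    mutant = np.clip(pop[a] + F * (pop[b] - pop[c]), lo, hi)
                    cross = rng.random(dim) < CR
                    cross[rng.integers(dim)] = True        # ensure at least one mutated gene
                    trial = np.where(cross, mutant, pop[i])
                    c_trial = misfit(trial)
                    if c_trial <= cost[i]:                 # greedy selection
                        pop[i], cost[i] = trial, c_trial
            best = np.argmin(cost)
            return pop[best], cost[best]

        # Toy usage: fit depth and amplitude of a synthetic bell-shaped anomaly profile
        x = np.linspace(-50, 50, 101)
        true_model = np.array([10.0, 2.0])                  # [depth, amplitude], hypothetical
        forward = lambda m: m[1] / (x**2 + m[0]**2)         # stand-in for an ASA forward model
        obs = forward(true_model)
        model, err = differential_evolution(lambda m: np.sum((forward(m) - obs)**2),
                                            np.array([[1.0, 30.0], [0.1, 10.0]]))
        print(model, err)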

  7. SWIM: A Semi-Analytical Ocean Color Inversion Algorithm for Optically Shallow Waters

    Science.gov (United States)

    McKinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C. S.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Bailey, Sean W.; Shea, Donald M.; Feldman, Gene C.

    2014-01-01

    In clear shallow waters, light that is transmitted downward through the water column can reflect off the sea floor and thereby influence the water-leaving radiance signal. This effect can confound contemporary ocean color algorithms designed for deep waters where the seafloor has little or no effect on the water-leaving radiance. Thus, inappropriate use of deep water ocean color algorithms in optically shallow regions can lead to inaccurate retrievals of inherent optical properties (IOPs) and therefore have a detrimental impact on IOP-based estimates of marine parameters, including chlorophyll-a and the diffuse attenuation coefficient. In order to improve IOP retrievals in optically shallow regions, a semi-analytical inversion algorithm, the Shallow Water Inversion Model (SWIM), has been developed. Unlike established ocean color algorithms, SWIM considers both the water column depth and the benthic albedo. A radiative transfer study was conducted that demonstrated how SWIM and two contemporary ocean color algorithms, the Generalized Inherent Optical Properties algorithm (GIOP) and Quasi-Analytical Algorithm (QAA), performed in optically deep and shallow scenarios. The results showed that SWIM performed well, whilst both GIOP and QAA showed distinct positive bias in IOP retrievals in optically shallow waters. The SWIM algorithm was also applied to a test region: the Great Barrier Reef, Australia. Using a single test scene and time series data collected by NASA's MODIS-Aqua sensor (2002-2013), a comparison of IOPs retrieved by SWIM, GIOP and QAA was conducted.

  8. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    Science.gov (United States)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical
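
    To make the bilinear structure y = DAx concrete, here is a small illustrative sketch that estimates the calibration gains d and the signal x by plain alternating least squares. This is not SparseLift or the SVD-based approach of the thesis (no sparsity prior, no recovery guarantees), and the problem dimensions are arbitrary choices for the example.

        import numpy as np

        # Toy self-calibration model: y = diag(d) @ A @ x with both the gains d and the
        # signal x unknown. A plain alternating least-squares sketch: without extra
        # structure, d and x are only identifiable up to a reciprocal scale factor.
        rng = np.random.default_rng(1)
        m, n = 60, 20
        A = rng.standard_normal((m, n))
        d_true = 1.0 + 0.2 * rng.standard_normal(m)        # unknown calibration gains
        x_true = rng.standard_normal(n)
        y = d_true * (A @ x_true)

        d = np.ones(m)                                      # initial guess: perfect calibration
        for _ in range(20):
            # Solve for x with d fixed: ordinary least squares on diag(d) A x = y
            x, *_ = np.linalg.lstsq(d[:, None] * A, y, rcond=None)
            # Solve for d with x fixed: each row of the system decouples
            Ax = A @ x
            d = np.where(np.abs(Ax) > 1e-12, y / Ax, d)

        print(np.linalg.norm(d * (A @ x) - y))              # data misfit: driven to ~0
        # Note: fitting the data is easy because the problem is underdetermined; recovering
        # the true (d, x) requires extra structure such as the sparsity exploited by SparseLift.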

  9. An Improved Genetic Algorithm for Single-Machine Inverse Scheduling Problem

    Directory of Open Access Journals (Sweden)

    Jianhui Mou

    2014-01-01

    Full Text Available The goal of scheduling is to arrange operations on suitable machines in an optimal sequence for corresponding objectives. In order to meet market requirements, scheduling systems must have enough flexibility to cope with uncertain events. These events can change production status or processing parameters, even causing the original schedule to no longer be optimal or even to be infeasible. Traditional scheduling strategies, however, cannot cope with these cases. Therefore, a new idea of scheduling called inverse scheduling has been proposed. In this paper, the inverse scheduling problem with weighted completion time (SMISP) is considered in a single-machine shop environment, and an improved genetic algorithm (IGA) with a local searching strategy is proposed. To improve the performance of the IGA, an efficient encoding scheme, a fitness evaluation mechanism, feasible initialization methods, and a local search procedure are employed. Because of the local improvement method, the proposed IGA can balance its exploration ability and exploitation ability. We adopt 27 instances to verify the effectiveness of the proposed algorithm. The experimental results illustrate that the proposed algorithm can generate satisfactory solutions. The approach has also been applied to solve a scheduling problem in a real Chinese shipyard and can bring practical benefits.
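
    As a baseline illustration of the kind of permutation-encoded genetic algorithm discussed above, the sketch below minimizes total weighted completion time on a single machine using order crossover, swap mutation and simple elitism. It is a generic GA skeleton, not the paper's IGA (there is no local search step and no inverse-scheduling adjustment of processing parameters), and the job data are made up for the example.

        import random

        def weighted_completion_time(order, p, w):
            """Sum of w_j * C_j for a job sequence on one machine."""
            t, total = 0.0, 0.0
            for j in order:
                t += p[j]
                total += w[j] * t
            return total

        def order_crossover(a, b):
            """OX crossover for permutation chromosomes."""
            n = len(a)
            i, j = sorted(random.sample(range(n), 2))
            child = [None] * n
            child[i:j] = a[i:j]
            fill = [g for g in b if g not in child]
            k = 0
            for idx in range(n):
                if child[idx] is None:
                    child[idx] = fill[k]
                    k += 1
            return child

        def ga_schedule(p, w, pop_size=40, n_gen=200):
            n = len(p)
            pop = [random.sample(range(n), n) for _ in range(pop_size)]
            for _ in range(n_gen):
                pop.sort(key=lambda s: weighted_completion_time(s, p, w))
                elite = pop[: pop_size // 4]                  # simple elitism
                children = []
                while len(elite) + len(children) < pop_size:
                    a, b = random.sample(elite, 2)
                    c = order_crossover(a, b)
                    if random.random() < 0.2:                 # swap mutation
                        i, j = random.sample(range(n), 2)
                        c[i], c[j] = c[j], c[i]
                    children.append(c)
                pop = elite + children
            return min(pop, key=lambda s: weighted_completion_time(s, p, w))

        random.seed(0)
        p = [4, 2, 7, 3, 5]; w = [1, 5, 2, 4, 3]              # toy processing times / weights
        best = ga_schedule(p, w)
        print(best, weighted_completion_time(best, p, w))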

  10. Modeling and inversion Matlab algorithms for resistivity, induced polarization and seismic data

    Science.gov (United States)

    Karaoulis, M.; Revil, A.; Minsley, B. J.; Werkema, D. D.

    2011-12-01

    M. Karaoulis (1), D.D. Werkema (3), A. Revil (1,2), B. Minsley (4), (1) Colorado School of Mines, Dept. of Geophysics, Golden, CO, USA. (2) ISTerre, CNRS, UMR 5559, Université de Savoie, Equipe Volcan, Le Bourget du Lac, France. (3) U.S. EPA, ORD, NERL, ESD, CMB, Las Vegas, Nevada, USA. (4) USGS, Federal Center, Lakewood, 10, 80225-0046, CO. Abstract: We propose a 2D and 3D forward modeling and inversion package for DC resistivity, time-domain induced polarization (IP), frequency-domain IP, and seismic refraction data. For the resistivity and IP cases, discretization is based on rectangular cells, where each cell has as unknowns the resistivity in the case of DC modelling, the resistivity and chargeability in time-domain IP modelling, and the complex resistivity in spectral IP modelling. The governing partial-differential equations are solved with the finite element method, which can be applied to both the real and complex variables that are solved for. For the seismic case, forward modeling is based on solving the eikonal equation using a second-order fast marching method. The wavepaths are materialized by Fresnel volumes rather than by conventional rays. This approach accounts for complicated velocity models and is advantageous because it considers frequency effects on the velocity resolution. The inversion can accommodate data at a single time step, or as a time-lapse dataset if the geophysical data are gathered for monitoring purposes. The aim of time-lapse inversion is to find the change in the velocities or resistivities of each model cell as a function of time. Different time-lapse algorithms can be applied such as independent inversion, difference inversion, 4D inversion, and 4D active time constraint inversion. The forward algorithms are benchmarked against analytical solutions and inversion results are compared with existing ones. The algorithms are packaged as Matlab codes with a simple Graphical User Interface. Although the code is parallelized for multi

  11. Improved genetic algorithms using inverse-elitism; Gyakuerito senryaku wo mochiita kairyo identeki algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Kawanishi, H.; Hagiwara, M. [Keio University, Tokyo (Japan)

    1998-05-01

    Improved Genetic Algorithms (GAs) have been proposed in this paper. We have directed our attention to `selection` and `crossover` in GAs. Novel strategies in selection and crossover are used in the proposed method. Various selecting strategies have been used in the conventional GAs such as Elitism, Tournament, Ranking, Roulette wheel, and Expected value model. These are not always effective, since these refer to only the fitness of each chromosome. We have developed the following techniques to improve the conventional GAs: `inverse-elitism` as a selecting strategy and variable crossover range as a crossover strategy. In the `inverse-elitism`, an inverse-elite whose gene values are reversed from those in the corresponding elite is produced. This strategy greatly contributes to diversification of chromosomes. As for the variable crossover range, we combine the following crossover techniques effectively: one is that range in crossover is varied from wide to narrow gradually to carry out global search in the beginning and local search in the ending; another is that range in crossover is varied from narrow to wide. We confirmed validity and superior performance of the proposed method by computer simulations. 18 refs., 9 figs., 3 tabs.
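
    The inverse-elitism idea itself is easy to state in code. The sketch below produces an inverse-elite by reversing the gene values of a binary elite, plus a real-coded analogue that reflects each gene within its bounds; this is only an illustration of the selection idea, not the authors' full GA with variable crossover range.

        import numpy as np

        def inverse_elite_binary(elite):
            """Inverse-elite of a binary chromosome: every gene value is reversed."""
            return 1 - np.asarray(elite)

        def inverse_elite_real(elite, lower, upper):
            """A real-coded analogue: reflect each gene within its [lower, upper] range."""
            elite = np.asarray(elite, dtype=float)
            return lower + upper - elite

        print(inverse_elite_binary([1, 0, 0, 1, 1]))          # -> [0 1 1 0 0]
        print(inverse_elite_real([0.9, 0.1, 0.5], 0.0, 1.0))  # -> [0.1 0.9 0.5]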

  12. Metropolis-Hastings Algorithms in Function Space for Bayesian Inverse Problems

    KAUST Repository

    Ernst, Oliver

    2015-01-07

    We consider Markov Chain Monte Carlo methods adapted to a Hilbert space setting. Such algorithms occur in Bayesian inverse problems where the solution is a probability measure on a function space according to which one would like to integrate or sample. We focus on Metropolis-Hastings algorithms and, in particular, we introduce and analyze a generalization of the existing pCN-proposal. This new proposal makes it possible to exploit the geometry or anisotropy of the target measure, which in turn might improve the statistical efficiency of the corresponding MCMC method. Numerical experiments for a real-world problem confirm the improvement.
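
    For readers unfamiliar with the pCN proposal mentioned above, the following is a minimal finite-dimensional sketch of the standard (non-generalized) pCN Metropolis-Hastings sampler for a zero-mean Gaussian prior. The generalized proposal analyzed in the record is not reproduced, and the toy linear forward model, noise level and step size beta are assumptions for the example.

        import numpy as np

        def pcn_mcmc(log_likelihood, prior_cov_sqrt, n_samples=5000, beta=0.2, seed=0):
            """Preconditioned Crank-Nicolson MH sampler for a zero-mean Gaussian prior.

            Proposal: u' = sqrt(1 - beta^2) * u + beta * xi,  xi ~ N(0, C).
            The acceptance ratio then involves only the likelihood (prior-invariant proposal).
            """
            rng = np.random.default_rng(seed)
            dim = prior_cov_sqrt.shape[0]
            u = prior_cov_sqrt @ rng.standard_normal(dim)       # start from a prior draw
            ll = log_likelihood(u)
            samples = []
            for _ in range(n_samples):
                xi = prior_cov_sqrt @ rng.standard_normal(dim)
                u_prop = np.sqrt(1.0 - beta**2) * u + beta * xi
                ll_prop = log_likelihood(u_prop)
                if np.log(rng.random()) < ll_prop - ll:          # MH accept/reject
                    u, ll = u_prop, ll_prop
                samples.append(u.copy())
            return np.array(samples)

        # Toy linear inverse problem: y = G u + noise, Gaussian prior N(0, I)
        rng = np.random.default_rng(1)
        G = rng.standard_normal((5, 3))
        u_true = np.array([1.0, -0.5, 0.3])
        y = G @ u_true + 0.1 * rng.standard_normal(5)
        loglike = lambda u: -0.5 * np.sum((G @ u - y) ** 2) / 0.1**2
        chain = pcn_mcmc(loglike, np.eye(3))
        print(chain[1000:].mean(axis=0))                         # posterior mean estimate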

  13. Metropolis-Hastings Algorithms in Function Space for Bayesian Inverse Problems

    KAUST Repository

    Ernst, Oliver

    2015-01-01

    We consider Markov Chain Monte Carlo methods adapted to a Hilbert space setting. Such algorithms occur in Bayesian inverse problems where the solution is a probability measure on a function space according to which one would like to integrate or sample. We focus on Metropolis-Hastings algorithms and, in particular, we introduce and analyze a generalization of the existing pCN-proposal. This new proposal makes it possible to exploit the geometry or anisotropy of the target measure, which in turn might improve the statistical efficiency of the corresponding MCMC method. Numerical experiments for a real-world problem confirm the improvement.

  14. The development of computational algorithms for manipulator inverse kinematics

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1989-10-01

    Solving the inverse kinematics of multi-joint robot manipulators has been considered one of the most cumbersome treatments due to non-linear properties involving trigonometric functions. The most traditional approach is to use the Jacobian matrix under linearization assumptions. This iterative technique, however, is beset with numerical problems that significantly influence the solution characteristics, such as initial-guess dependence and singularities. Taking these facts into consideration, new approaches have been proposed from different standpoints, which are based on polynomial transformation of the kinematic model, minimization techniques in mathematical programming, vector-geometrical concepts, and the separation of joint variables associated with the optimization problem. In terms of computer simulations, each approach was identified as a useful algorithm leading to theoretically accurate solutions to complicated inverse problems. In this way, the short-term goal of our studies on the manipulator inverse problem in the R and D project of remote handling technology was accomplished with success, and consequently the present report sums up the results of basic studies on this matter. (author)
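
    The traditional Jacobian-based approach mentioned above can be summarized in a few lines. Below is a damped least-squares iteration for a planar two-link arm, which also illustrates the initial-guess dependence noted in the abstract (different q0 values converge to different solution branches). It is a generic sketch, not one of the report's proposed algorithms, and the link lengths and damping factor are arbitrary.

        import numpy as np

        L1, L2 = 1.0, 0.8                       # link lengths (hypothetical)

        def fk(q):
            """Forward kinematics of a planar 2-link arm: joint angles -> end-effector xy."""
            return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                             L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

        def jacobian(q):
            s1, c1 = np.sin(q[0]), np.cos(q[0])
            s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
            return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                             [ L1 * c1 + L2 * c12,  L2 * c12]])

        def ik_damped(target, q0, lam=0.1, tol=1e-8, max_iter=200):
            """Damped least-squares IK: q <- q + (J^T J + lam^2 I)^-1 J^T (x* - f(q))."""
            q = np.array(q0, dtype=float)
            for _ in range(max_iter):
                e = target - fk(q)
                if np.linalg.norm(e) < tol:
                    break
                J = jacobian(q)
                dq = np.linalg.solve(J.T @ J + lam**2 * np.eye(2), J.T @ e)
                q += dq
            return q

        q = ik_damped(target=np.array([1.2, 0.9]), q0=[0.3, 0.3])
        print(q, fk(q))                          # the solution branch depends on the initial guess q0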

  15. Inverse problems with Poisson data: statistical regularization theory, applications and algorithms

    International Nuclear Information System (INIS)

    Hohage, Thorsten; Werner, Frank

    2016-01-01

    Inverse problems with Poisson data arise in many photonic imaging modalities in medicine, engineering and astronomy. The design of regularization methods and estimators for such problems has been studied intensively over the last two decades. In this review we give an overview of statistical regularization theory for such problems, the most important applications, and the most widely used algorithms. The focus is on variational regularization methods in the form of penalized maximum likelihood estimators, which can be analyzed in a general setup. Complementing a number of recent convergence rate results we will establish consistency results. Moreover, we discuss estimators based on a wavelet-vaguelette decomposition of the (necessarily linear) forward operator. As most prominent applications we briefly introduce Positron emission tomography, inverse problems in fluorescence microscopy, and phase retrieval problems. The computation of a penalized maximum likelihood estimator involves the solution of a (typically convex) minimization problem. We also review several efficient algorithms which have been proposed for such problems over the last five years. (topical review)

  16. Reconstruction of Single-Grain Orientation Distribution Functions for Crystalline Materials

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Sørensen, Henning Osholm; Sükösd, Zsuzsanna

    2009-01-01

    for individual grains of the material in consideration. We study two iterative large-scale reconstruction algorithms, the algebraic reconstruction technique (ART) and conjugate gradients for least squares (CGLS), and demonstrate that right preconditioning is necessary in both algorithms to provide satisfactory...
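
    Since CGLS is also the algorithm this collection of records is organized around, a minimal unpreconditioned version is sketched below for a generic least-squares problem min ||Ax - b||. Right preconditioning of the kind the record reports as necessary would replace A by A M^{-1} and map the solution back, as indicated in the docstring. This is an illustrative sketch, not the authors' reconstruction code.

        import numpy as np

        def cgls(A, b, n_iter=50):
            """Conjugate gradients for least squares: minimizes ||A x - b||_2.

            For right preconditioning with a matrix M, apply the same iteration to
            A @ Minv and return Minv @ y, where Minv applies M^{-1} (not shown here).
            """
            x = np.zeros(A.shape[1])
            r = b - A @ x
            s = A.T @ r                       # gradient of the least-squares functional
            p = s.copy()
            gamma = s @ s
            for _ in range(n_iter):
                q = A @ p
                alpha = gamma / (q @ q)
                x += alpha * p
                r -= alpha * q
                s = A.T @ r
                gamma_new = s @ s
                beta = gamma_new / gamma
                p = s + beta * p
                gamma = gamma_new
            return x

        # Small overdetermined test problem
        rng = np.random.default_rng(0)
        A = rng.standard_normal((100, 30))
        x_true = rng.standard_normal(30)
        b = A @ x_true + 0.01 * rng.standard_normal(100)
        x = cgls(A, b, n_iter=60)
        print(np.linalg.norm(x - x_true) / np.linalg.norm(x_true))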

  17. Performance Evaluation of Glottal Inverse Filtering Algorithms Using a Physiologically Based Articulatory Speech Synthesizer

    Science.gov (United States)

    2017-01-05

    Chien, Yu-Ren; Mehta, Daryush D.; Guðnason, Jón; Zañartu, Matías; Quatieri, Thomas F. Glottal inverse filtering aims to

  18. LAI inversion algorithm based on directional reflectance kernels.

    Science.gov (United States)

    Tang, S; Chen, J M; Zhu, Q; Li, X; Chen, M; Sun, R; Zhou, Y; Deng, F; Xie, D

    2007-11-01

    Leaf area index (LAI) is an important ecological and environmental parameter. A new LAI algorithm is developed using the principles of ground LAI measurements based on canopy gap fraction. First, the relationship between LAI and gap fraction at various zenith angles is derived from the definition of LAI. Then, the directional gap fraction is acquired from a remote sensing bidirectional reflectance distribution function (BRDF) product. This acquisition is obtained by using a kernel driven model and a large-scale directional gap fraction algorithm. The algorithm has been applied to estimate a LAI distribution in China in mid-July 2002. The ground data acquired from two field experiments in Changbai Mountain and Qilian Mountain were used to validate the algorithm. To resolve the scale discrepancy between high resolution ground observations and low resolution remote sensing data, two TM images with a resolution approaching the size of ground plots were used to relate the coarse resolution LAI map to ground measurements. First, an empirical relationship between the measured LAI and a vegetation index was established. Next, a high resolution LAI map was generated using the relationship. The LAI value of a low resolution pixel was calculated from the area-weighted sum of high resolution LAIs composing the low resolution pixel. The results of this comparison showed that the inversion algorithm has an accuracy of 82%. Factors that may influence the accuracy are also discussed in this paper.
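
    The core relation behind such gap-fraction approaches is a Beer-Lambert type expression, P(θ) = exp(-G(θ) Ω LAI / cos θ), which can be inverted for LAI at a given zenith angle. The snippet below does exactly that for a single angle; the projection function G and clumping index Ω values are generic assumptions, and this is not the paper's kernel-driven multi-angle retrieval.

        import numpy as np

        def lai_from_gap_fraction(gap_fraction, zenith_deg, G=0.5, omega=1.0):
            """Invert P(theta) = exp(-G * omega * LAI / cos(theta)) for LAI.

            gap_fraction : directional gap fraction P(theta)
            zenith_deg   : view zenith angle in degrees
            G            : leaf projection function (0.5 for a spherical leaf angle distribution)
            omega        : clumping index (1.0 = randomly distributed foliage)
            """
            theta = np.radians(zenith_deg)
            return -np.cos(theta) * np.log(gap_fraction) / (G * omega)

        # Example: a 30% directional gap fraction observed at 57.5 degrees zenith
        print(lai_from_gap_fraction(0.30, 57.5))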

  19. Development and investigation of an inverse problem solution algorithm for determination of Ap stars magnetic field geometry

    International Nuclear Information System (INIS)

    Piskunov, N.E.

    1985-01-01

    A mathematical formulation of the inverse problem of determining magnetic field geometry from the polarization profiles of spectral lines is given. A solution algorithm is proposed. A set of model calculations has shown the effectiveness of the algorithm, the high precision of the magnetic star model parameters obtained, and the advantages of the inverse problem method over the commonly used method of interpreting effective field curves.

  20. Iterative algorithms for the input and state recovery from the approximate inverse of strictly proper multivariable systems

    Science.gov (United States)

    Chen, Liwen; Xu, Qiang

    2018-02-01

    This paper proposes new iterative algorithms for unknown input and state recovery from the system outputs using an approximate inverse of the strictly proper linear time-invariant (LTI) multivariable system. A unique advantage over previous system-inverse algorithms is that output differentiation is not required. The approximate system inverse is stable due to the systematic optimal design of a dummy feedthrough D matrix in the state-space model via feedback stabilization. The optimal design procedure avoids trial and error in identifying such a D matrix, which saves a tremendous amount of effort. From the derived and proved convergence criteria, such an optimal D matrix also guarantees the convergence of the algorithms. Illustrative examples show significant improvement in reference input signal tracking by the algorithms with the optimal D design over non-iterative counterparts on controllable or stabilizable LTI systems, respectively. Case studies of two Boeing-767 aircraft aerodynamic models further demonstrate the capability of the proposed methods.

  1. Improved Inverse Kinematics Algorithm Using Screw Theory for a Six-DOF Robot Manipulator

    Directory of Open Access Journals (Sweden)

    Qingcheng Chen

    2015-10-01

    Full Text Available Based on screw theory, a novel improved inverse-kinematics approach for a type of six-DOF serial robot, “Qianjiang I”, is proposed in this paper. The common kinematics model of the robot is based on the Denavit-Hartenberg (D-H) notation method, while its inverse kinematics involves inefficient calculation and complicated solutions, which cannot meet the demands of online real-time applications. To solve this problem, this paper presents a new method to improve the efficiency of the inverse kinematics solution by introducing screw theory. Unlike other methods, the proposed method establishes only two coordinate frames, namely the inertial frame and the tool frame; the screw motion of each link is carried out based on the inertial frame, ensuring definite geometric meaning. Furthermore, we adopt a new inverse kinematics algorithm, developing an improved sub-problem method along with Paden-Kahan sub-problems. This method has high efficiency and can be applied in real-time industrial operation. It is convenient to select the desired solutions directly from among multiple solutions by examining their clear geometric meaning. Finally, the effectiveness and reliability of the new algorithm are analysed and verified in comparative experiments carried out on the six-DOF serial robot “Qianjiang I”.

  2. An improved fast and elitist multi-objective genetic algorithm-ANSGA-II for multi-objective optimization of inverse radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Cao Ruifen; Li Guoli; Song Gang; Zhao Pan; Lin Hui; Wu Aidong; Huang Chenyu; Wu Yican

    2007-01-01

    Objective: To provide a fast and effective multi-objective optimization algorithm for an inverse radiotherapy treatment planning system. Methods: The Non-dominated Sorting Genetic Algorithm NSGA-II is representative of multi-objective evolutionary optimization algorithms and outperforms the others. The paper develops ANSGA-II, which retains the advantages of NSGA-II and uses adaptive crossover and mutation to improve its flexibility; according to the characteristics of inverse radiotherapy treatment planning, prior knowledge is used to generate the individuals of every generation in the course of optimization, which enhances the convergence speed and improves efficiency. Results: An example of optimizing the average dose on a CT slice, including PTV, OAR and NT, shows that the algorithm can find satisfactory solutions in several minutes. Conclusions: The algorithm can provide a clinical inverse radiotherapy treatment planning system with a suitable choice of optimization algorithm. (authors)

  3. Numerical Laplace inversion in problems of elastodynamics: Comparison of four algorithms

    Czech Academy of Sciences Publication Activity Database

    Adámek, V.; Valeš, František; Červ, Jan

    2017-01-01

    Roč. 113, November (2017), s. 120-129 ISSN 0965-9978 R&D Projects: GA ČR(CZ) GAP101/12/2315 Institutional support: RVO:61388998 Keywords : inverse Laplace transform * numerical algorithm * wave propagation * multi-precision computation * Maple code Subject RIV: BI - Acoustics OBOR OECD: Acoustics Impact factor: 3.000, year: 2016

  4. A realistic inversion algorithm for magnetic anomaly data: the Mt. Amiata volcano test

    Directory of Open Access Journals (Sweden)

    C. Carmisciano

    2003-06-01

    Full Text Available The aim of this work is the formulation of a 3D model of the Mt. Amiata volcanic complex (Southern Tuscany) by means of geomagnetic data. This work is presented not only as a real test of the validity of the inversion algorithm, but also to add information about the structure of the volcanic complex. First, we outline briefly the theory of geomagnetic data inversion and we introduce the approach adopted. Then we show the 3D model of the Amiata volcano built from the inversion, and we compare it with the available geological information. The most important consideration regards the surface distribution of the magnetization, which is in good agreement with rock samples from this area. Moreover, the recovered model orientation recalls the extent of the lava flows, and as a last proof of validity, the source appears to be contained within the topographic contour. The credibility of the inversion procedure drives the interpretation even for the deepest part of the volcano. The geomagnetic signal appears suppressed at a depth of about 2 km, but the most striking consequence is that sub-vertical structures are found even in different positions from the conduits shown in the geologic sections. The results are thus in good agreement with the information obtained from other data, while showing features that had not previously been identified, stressing the informative power of the geomagnetic signal when a meaningful inversion algorithm is used.

  5. Methods and Algorithms for Solving Inverse Problems for Fractional Advection-Dispersion Equations

    KAUST Repository

    Aldoghaither, Abeer

    2015-11-12

    Fractional calculus has been introduced as an efficient tool for modeling physical phenomena, thanks to its memory and hereditary properties. For example, fractional models have been successfully used to describe anomalous diffusion processes such as contaminant transport in soil, oil flow in porous media, and groundwater flow. These models capture important features of particle transport such as particles with velocity variations and long-rest periods. Mathematical modeling of physical phenomena requires the identification of parameters and variables from available measurements. This is referred to as an inverse problem. In this work, we are interested in studying theoretically and numerically inverse problems for the space Fractional Advection-Dispersion Equation (FADE), which is used to model solute transport in porous media. Identifying parameters for such an equation is important to understand how chemical or biological contaminants are transported throughout surface aquifer systems. For instance, an estimate of the differentiation order in a groundwater contaminant transport model can provide information about soil properties, such as the heterogeneity of the medium. Our main contribution is to propose a novel efficient algorithm based on modulating functions to estimate the coefficients and the differentiation order for the space FADE, which can be extended to the general fractional Partial Differential Equation (PDE). We also show how the method can be applied to the source inverse problem. This work is divided into two parts: In part I, the proposed method is described and studied through an extensive numerical analysis. The local convergence of the proposed two-stage algorithm is proven for the 1D space FADE. The properties of this method are studied along with its limitations. Then, the algorithm is generalized to the 2D FADE. In part II, we analyze direct and inverse source problems for a space FADE. The problem consists of recovering the source term using final

  6. Multi-GPU parallel algorithm design and analysis for improved inversion of probability tomography with gravity gradiometry data

    Science.gov (United States)

    Hou, Zhenlong; Huang, Danian

    2017-09-01

    In this paper, we first study the inversion of probability tomography (IPT) with gravity gradiometry data. The spatial resolution of the results is improved by multi-tensor joint inversion, a depth weighting matrix and other methods. To address the problems posed by large data volumes in exploration, we present a parallel algorithm and its performance analysis, combining Compute Unified Device Architecture (CUDA) with Open Multi-Processing (OpenMP) based on Graphics Processing Unit (GPU) acceleration. In tests on a synthetic model and real data from the Vinton Dome, we obtain improved results. It is also proved that the improved inversion algorithm is effective and feasible. The performance of the parallel algorithm we designed is better than that of other CUDA implementations. The maximum speedup could be more than 200. In the performance analysis, multi-GPU speedup and multi-GPU efficiency are applied to analyze the scalability of the multi-GPU programs. The designed parallel algorithm is demonstrated to be able to process larger-scale data, and the new analysis method is practical.

  7. Inverse Problems in Geodynamics Using Machine Learning Algorithms

    Science.gov (United States)

    Shahnas, M. H.; Yuen, D. A.; Pysklywec, R. N.

    2018-01-01

    During the past few decades numerical studies have been widely employed to explore the style of circulation and mixing in the mantle of Earth and other planets. However, in geodynamical studies there are many properties from mineral physics, geochemistry, and petrology in these numerical models. Machine learning, as a computational statistic-related technique and a subfield of artificial intelligence, has rapidly emerged recently in many fields of sciences and engineering. We focus here on the application of supervised machine learning (SML) algorithms in predictions of mantle flow processes. Specifically, we emphasize on estimating mantle properties by employing machine learning techniques in solving an inverse problem. Using snapshots of numerical convection models as training samples, we enable machine learning models to determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at midmantle depths. Employing support vector machine algorithms, we show that SML techniques can successfully predict the magnitude of mantle density anomalies and can also be used in characterizing mantle flow patterns. The technique can be extended to more complex geodynamic problems in mantle dynamics by employing deep learning algorithms for putting constraints on properties such as viscosity, elastic parameters, and the nature of thermal and chemical anomalies.
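
    Schematically, the supervised-learning setup described above maps feature vectors extracted from convection snapshots to a target property, here sketched with scikit-learn's support vector regression. The snippet assumes scikit-learn is available, and the features and labels are random placeholders rather than the study's simulation outputs.

        import numpy as np
        from sklearn.svm import SVR
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler

        # Placeholder data: rows = convection-model snapshots reduced to feature vectors,
        # target = magnitude of the mid-mantle density anomaly (synthetic stand-ins).
        rng = np.random.default_rng(0)
        X = rng.standard_normal((200, 16))
        y = 0.8 * X[:, 0] - 0.3 * X[:, 3] + 0.05 * rng.standard_normal(200)

        model = make_pipeline(StandardScaler(), SVR(kernel="rbf", C=10.0, epsilon=0.01))
        model.fit(X[:150], y[:150])                       # train on a subset of snapshots
        print(model.score(X[150:], y[150:]))              # R^2 on held-out snapshots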

  8. 3D magnetization vector inversion based on fuzzy clustering: inversion algorithm, uncertainty analysis, and application to geology differentiation

    Science.gov (United States)

    Sun, J.; Li, Y.

    2017-12-01

    Magnetic data contain important information about the subsurface rocks that were magnetized in the geological history, which provides an important avenue to the study of the crustal heterogeneities associated with magmatic and hydrothermal activities. Interpretation of magnetic data has been widely used in mineral exploration, basement characterization and large scale crustal studies for several decades. However, interpreting magnetic data has been often complicated by the presence of remanent magnetizations with unknown magnetization directions. Researchers have developed different methods to deal with the challenges posed by remanence. We have developed a new and effective approach to inverting magnetic data for magnetization vector distributions characterized by region-wise consistency in the magnetization directions. This approach combines the classical Tikhonov inversion scheme with fuzzy C-means clustering algorithm, and constrains the estimated magnetization vectors to a specified small number of possible directions while fitting the observed magnetic data to within noise level. Our magnetization vector inversion recovers both the magnitudes and the directions of the magnetizations in the subsurface. Magnetization directions reflect the unique geological or hydrothermal processes applied to each geological unit, and therefore, can potentially be used for the purpose of differentiating various geological units. We have developed a practically convenient and effective way of assessing the uncertainty associated with the inverted magnetization directions (Figure 1), and investigated how geological differentiation results might be affected (Figure 2). The algorithm and procedures we have developed for magnetization vector inversion and uncertainty analysis open up new possibilities of extracting useful information from magnetic data affected by remanence. We will use a field data example from exploration of an iron-oxide-copper-gold (IOCG) deposit in Brazil to

  9. A hybrid algorithm for solving inverse problems in elasticity

    Directory of Open Access Journals (Sweden)

    Barabasz Barbara

    2014-12-01

    Full Text Available The paper offers a new approach to handling difficult parametric inverse problems in elasticity and thermo-elasticity, formulated as global optimization ones. The proposed strategy is composed of two phases. In the first, global phase, the stochastic hp-HGS algorithm recognizes the basins of attraction of various objective minima. In the second phase, the local objective minimizers are closer approached by steepest descent processes executed singly in each basin of attraction. The proposed complex strategy is especially dedicated to ill-posed problems with multimodal objective functionals. The strategy offers comparatively low computational and memory costs resulting from a double-adaptive technique in both forward and inverse problem domains. We provide a result on the Lipschitz continuity of the objective functional composed of the elastic energy and the boundary displacement misfits with respect to the unknown constitutive parameters. It allows common scaling of the accuracy of solving forward and inverse problems, which is the core of the introduced double-adaptive technique. The capability of the proposed method of finding multiple solutions is illustrated by a computational example which consists in restoring all feasible Young modulus distributions minimizing an objective functional in a 3D domain of a photo polymer template obtained during step and flash imprint lithography.

  10. Two-wavelength Lidar inversion algorithm for determining planetary boundary layer height

    Science.gov (United States)

    Liu, Boming; Ma, Yingying; Gong, Wei; Jian, Yang; Ming, Zhang

    2018-02-01

    This study proposes a two-wavelength Lidar inversion algorithm to determine the boundary layer height (BLH) based on the particles clustering. Color ratio and depolarization ratio are used to analyze the particle distribution, based on which the proposed algorithm can overcome the effects of complex aerosol layers to calculate the BLH. The algorithm is used to determine the top of the boundary layer under different mixing state. Experimental results demonstrate that the proposed algorithm can determine the top of the boundary layer even in a complex case. Moreover, it can better deal with the weak convection conditions. Finally, experimental data from June 2015 to December 2015 were used to verify the reliability of the proposed algorithm. The correlation between the results of the proposed algorithm and the manual method is R2 = 0.89 with a RMSE of 131 m and mean bias of 49 m; the correlation between the results of the ideal profile fitting method and the manual method is R2 = 0.64 with a RMSE of 270 m and a mean bias of 165 m; and the correlation between the results of the wavelet covariance transform method and manual method is R2 = 0.76, with a RMSE of 196 m and mean bias of 23 m. These findings indicate that the proposed algorithm has better reliability and stability than traditional algorithms.

  11. Picosecond scale experimental verification of a globally convergent algorithm for a coefficient inverse problem

    International Nuclear Information System (INIS)

    Klibanov, Michael V; Pantong, Natee; Fiddy, Michael A; Schenk, John; Beilina, Larisa

    2010-01-01

    A globally convergent algorithm by the first and third authors for a 3D hyperbolic coefficient inverse problem is verified on experimental data measured in the picosecond scale regime. Quantifiable images of dielectric abnormalities are obtained. The total measurement time of a 100 ps pulse for one detector location was 1.2 ns, with a 20 ps (=0.02 ns) time step between two consecutive readings. Blind tests have consistently demonstrated accurate imaging of the refractive indices of dielectric abnormalities. At the same time, it is shown that a modified gradient method is inapplicable to this kind of experimental data. This inverse algorithm is also applicable to other types of imaging modalities, e.g. acoustics. Potential applications are in airport security, imaging of land mines, imaging of defects in non-destructive testing, etc.

  12. Improved Genetic Algorithm Based on the Cooperation of Elite and Inverse-elite

    Science.gov (United States)

    Kanakubo, Masaaki; Hagiwara, Masafumi

    In this paper, we propose an improved genetic algorithm based on the combination of the Bee system and Inverse-elitism, both of which are effective strategies for improving GAs. In the Bee system, in the beginning, each chromosome tries to find a good solution individually as a global search. When some chromosome is regarded as a superior one, the other chromosomes try to find solutions around it. However, since chromosomes for global search are generated randomly, the Bee system lacks global search ability. On the other hand, in Inverse-elitism, an inverse-elite whose gene values are reversed from those of the corresponding elite is produced. This strategy greatly contributes to the diversification of chromosomes, but it lacks local search ability. In the proposed method, Inverse-elitism with the Pseudo-simplex method is employed for the global search of the Bee system in order to strengthen global search ability. In addition, it also has strong local search ability. The proposed method has synergistic effects of the three strategies. We confirmed the validity and superior performance of the proposed method by computer simulations.

  13. Volcanic source inversion using a genetic algorithm and an elastic-gravitational layered earth model for magmatic intrusions

    Science.gov (United States)

    Tiampo, K. F.; Fernández, J.; Jentzsch, G.; Charco, M.; Rundle, J. B.

    2004-11-01

    Here we present an inversion methodology using the combination of a genetic algorithm (GA) inversion program, and an elastic-gravitational earth model to determine the parameters of a volcanic intrusion. Results from the integration of the elastic-gravitational model, a suite of FORTRAN 77 programs developed to compute the displacements due to volcanic loading, with the GA inversion code, written in the C programming language, are presented. These codes allow for the calculation of displacements (horizontal and vertical), tilt, vertical strain and potential and gravity changes on the surface of an elastic-gravitational layered Earth model due to the magmatic intrusion. We detail the appropriate methodology for examining the sensitivity of the model to variation in the constituent parameters using the GA, and present, for the first time, a Monte Carlo technique for evaluating the propagation of error through the GA inversion process. One application example is given at Mayon volcano, Philippines, for the inversion program, the sensitivity analysis, and the error evaluation. The integration of the GA with the complex elastic-gravitational model is a blueprint for an efficient nonlinear inversion methodology and its implementation into an effective tool for the evaluation of parameter sensitivity. Finally, the extension of this inversion algorithm and the error assessment methodology has important implications to the modeling and data assimilation of a number of other nonlinear applications in the field of geosciences.

  14. A reversible-jump Markov chain Monte Carlo algorithm for 1D inversion of magnetotelluric data

    Science.gov (United States)

    Mandolesi, Eric; Ogaya, Xenia; Campanyà, Joan; Piana Agostinetti, Nicola

    2018-04-01

    This paper presents a new computer code developed to solve the 1D magnetotelluric (MT) inverse problem using a Bayesian trans-dimensional Markov chain Monte Carlo algorithm. MT data are sensitive to the depth-distribution of rock electric conductivity (or its reciprocal, resistivity). The solution provided is a probability distribution - the so-called posterior probability distribution (PPD) for the conductivity at depth, together with the PPD of the interface depths. The PPD is sampled via a reversible-jump Markov Chain Monte Carlo (rjMcMC) algorithm, using a modified Metropolis-Hastings (MH) rule to accept or discard candidate models along the chains. As the optimal parameterization for the inversion process is generally unknown, a trans-dimensional approach is used to allow the dataset itself to indicate the most probable number of parameters needed to sample the PPD. The algorithm is tested against two simulated datasets and a set of MT data acquired in the Clare Basin (County Clare, Ireland). For the simulated datasets the correct number of conductive layers at depth and the associated electrical conductivity values are retrieved, together with reasonable estimates of the uncertainties on the investigated parameters. Results from the inversion of field measurements are compared with results obtained using a deterministic method and with well-log data from a nearby borehole. The PPD is in good agreement with the well-log data, showing as its main structure a highly conductive layer associated with the Clare Shale formation. In this study, we demonstrate that our new code goes beyond algorithms developed using a linear inversion scheme, as it can be used: (1) to bypass the subjective choices in the 1D parameterization, i.e. the number of horizontal layers, and (2) to estimate realistic uncertainties on the retrieved parameters. The algorithm is implemented using a simple MPI approach, where independent chains run on isolated CPUs, to take

  15. A Numerical Approach to Solving an Inverse Heat Conduction Problem Using the Levenberg-Marquardt Algorithm

    Directory of Open Access Journals (Sweden)

    Tao Min

    2014-01-01

    Full Text Available This paper is intended to provide a numerical algorithm involving the combined use of the Levenberg-Marquardt algorithm and the Galerkin finite element method for estimating the diffusion coefficient in an inverse heat conduction problem (IHCP. In the present study, the functional form of the diffusion coefficient is unknown a priori. The unknown diffusion coefficient is approximated by the polynomial form and the present numerical algorithm is employed to find the solution. Numerical experiments are presented to show the efficiency of the proposed method.
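
    A minimal sketch of this estimation loop is given below: the unknown diffusion coefficient is parameterized by polynomial coefficients and fitted with SciPy's Levenberg-Marquardt solver, using a deliberately simple explicit finite-difference heat solver in place of the paper's Galerkin finite element forward model. Grid sizes, the initial condition and the noise level are illustrative assumptions.

        import numpy as np
        from scipy.optimize import least_squares

        # Simple explicit finite-difference solver for u_t = d(x) u_xx on [0, 1]
        # (a stand-in for the Galerkin FEM forward solver).
        nx, nt, dt = 41, 400, 2.5e-4
        x = np.linspace(0.0, 1.0, nx)
        dx = x[1] - x[0]

        def solve_heat(coeffs):
            # d(x) = c0 + c1*x + c2*x^2, clipped to keep the explicit scheme stable
            d = np.clip(np.polyval(coeffs[::-1], x), 1e-3, 1.0)
            u = np.sin(np.pi * x)                    # initial condition; u = 0 at both ends
            for _ in range(nt):
                u[1:-1] = u[1:-1] + dt * d[1:-1] * (u[2:] - 2 * u[1:-1] + u[:-2]) / dx**2
            return u

        # Synthetic "measurements" from a known quadratic diffusion coefficient
        c_true = np.array([0.5, 0.3, -0.2])
        u_obs = solve_heat(c_true) + 1e-4 * np.random.default_rng(0).standard_normal(nx)

        # Levenberg-Marquardt fit of the polynomial coefficients
        res = least_squares(lambda c: solve_heat(c) - u_obs,
                            x0=np.array([0.3, 0.0, 0.0]), method="lm")
        print(res.x)        # should be close to c_true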

  16. Investigation on the Inversion of the Atmospheric Duct Using the Artificial Bee Colony Algorithm Based on Opposition-Based Learning

    Directory of Open Access Journals (Sweden)

    Chao Yang

    2016-01-01

    Full Text Available The artificial bee colony (ABC) algorithm is a recently introduced optimization method in the research field of swarm intelligence. This paper presents an improved ABC algorithm named OGABC, based on opposition-based learning (OBL) and a global-best search equation, to overcome the shortcomings of slow convergence and entrapment in local optima in the inversion of the atmospheric duct. Taking the inversion of the surface duct using the refractivity-from-clutter (RFC) technique as an example to validate the performance of the proposed OGABC, the inversion results are compared with those of the modified invasive weed optimization (MIWO) and ABC. The radar sea clutter power calculated by the parabolic equation method using simulated and measured refractivity profiles, respectively, is utilized to carry out the inversion of the surface duct. The comparative investigation results indicate that the performance of OGABC is superior to that of MIWO and ABC in terms of stability, accuracy, and convergence rate during the process of inversion.
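
    The opposition-based learning ingredient can be illustrated compactly: for each candidate solution x in [a, b], the opposite point a + b - x is also evaluated and the fitter of the two is kept. The sketch below shows that step on a toy minimization problem; it is a generic OBL illustration, not the paper's full OGABC with its global-best search equation.

        import numpy as np

        def opposition_step(population, fitness_fn, lower, upper):
            """Opposition-based learning: keep the better of each point and its opposite."""
            opposite = lower + upper - population            # elementwise reflection in [lower, upper]
            merged = np.vstack([population, opposite])
            fitness = np.apply_along_axis(fitness_fn, 1, merged)
            best = np.argsort(fitness)[: len(population)]    # keep the fitter half (minimization)
            return merged[best]

        # Toy usage on a sphere function
        rng = np.random.default_rng(0)
        lower, upper = -5.0, 5.0
        pop = rng.uniform(lower, upper, size=(10, 4))
        pop = opposition_step(pop, lambda v: np.sum(v**2), lower, upper)
        print(pop.shape, np.sum(pop**2, axis=1).min())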

  17. Inversion of the fermion matrix and the equivalence of the conjugate gradient and Lanczos algorithms

    International Nuclear Information System (INIS)

    Burkitt, A.N.; Irving, A.C.

    1990-01-01

    The Lanczos and conjugate gradient algorithms are widely used in lattice QCD calculations. The previously known close relationship between the two methods is explored and two commonly used implementations are shown to give identically the same results at each iteration, in exact arithmetic, for matrix inversion. The identities between the coefficients of the two algorithms are given, and many of the features of the two algorithms can now be combined. The effects of finite arithmetic are investigated and the particular Lanczos formulation is found to be most stable with respect to rounding errors. (orig.)

  18. Thin-Sheet Inversion Modeling of Geomagnetic Deep Sounding Data Using MCMC Algorithm

    Directory of Open Access Journals (Sweden)

    Hendra Grandis

    2013-01-01

    Full Text Available The geomagnetic deep sounding (GDS method is one of electromagnetic (EM methods in geophysics that allows the estimation of the subsurface electrical conductivity distribution. This paper presents the inversion modeling of GDS data employing Markov Chain Monte Carlo (MCMC algorithm to evaluate the marginal posterior probability of the model parameters. We used thin-sheet model to represent quasi-3D conductivity variations in the heterogeneous subsurface. The algorithm was applied to invert field GDS data from the zone covering an area that spans from eastern margin of the Bohemian Massif to the West Carpathians in Europe. Conductivity anomalies obtained from this study confirm the well-known large-scale tectonic setting of the area.

  19. A Semianalytical Ocean Color Inversion Algorithm with Explicit Water Column Depth and Substrate Reflectance Parameterization

    Science.gov (United States)

    Mckinna, Lachlan I. W.; Werdell, P. Jeremy; Fearns, Peter R. C.; Weeks, Scarla J.; Reichstetter, Martina; Franz, Bryan A.; Shea, Donald M.; Feldman, Gene C.

    2015-01-01

    A semianalytical ocean color inversion algorithm was developed for improving retrievals of inherent optical properties (IOPs) in optically shallow waters. In clear, geometrically shallow waters, light reflected off the seafloor can contribute to the water-leaving radiance signal. This can have a confounding effect on ocean color algorithms developed for optically deep waters, leading to an overestimation of IOPs. The algorithm described here, the Shallow Water Inversion Model (SWIM), uses pre-existing knowledge of bathymetry and benthic substrate brightness to account for optically shallow effects. SWIM was incorporated into the NASA Ocean Biology Processing Group's L2GEN code and tested in waters of the Great Barrier Reef, Australia, using the Moderate Resolution Imaging Spectroradiometer (MODIS) Aqua time series (2002-2013). SWIM-derived values of the total non-water absorption coefficient at 443 nm, at(443), the particulate backscattering coefficient at 443 nm, bbp(443), and the diffuse attenuation coefficient at 488 nm, Kd(488), were compared with values derived using the Generalized Inherent Optical Properties algorithm (GIOP) and the Quasi-Analytical Algorithm (QAA). The results indicated that in clear, optically shallow waters SWIM-derived values of at(443), bbp(443), and Kd(443) were realistically lower than values derived using GIOP and QAA, in agreement with radiative transfer modeling. This signified that the benthic reflectance correction was performing as expected. However, in more optically complex waters, SWIM had difficulty converging to a solution, a likely consequence of internal IOP parameterizations. Whilst a comprehensive study of the SWIM algorithm's behavior was conducted, further work is needed to validate the algorithm using in situ data.

  20. Method of T2 spectrum inversion with conjugate gradient algorithm from NMR data

    International Nuclear Information System (INIS)

    Li Pengju; Shi Shangming; Song Yanjie

    2010-01-01

    Based on optimization techniques, a conjugate gradient method for T2 spectrum inversion that easily realizes the non-negativity constraint on the T2 spectrum is proposed. The method transforms the linear mixed-determined problem of T2 spectrum inversion into a typical optimization problem of searching for the minimum of an objective function, built according to the basic idea of geophysical modeling. The optimization problem above is solved with the conjugate gradient algorithm, which has a quick convergence rate and quadratic termination. The method has been applied to the inversion of a noise-free echo train generated from an artificial spectrum, an artificial echo train with signal-to-noise ratio (SNR)=25, and NMR experimental data from drilling cores. Comparison between the inversion results of this paper and the artificial spectrum, or the results of the imported software used in the NMR laboratory, shows that the method can correctly invert the T2 spectrum from artificial NMR relaxation data even when SNR=25, and that the T2 spectrum inverted from core NMR experimental data, with good continuity and smoothness, accords well with that of the laboratory software; moreover, the absolute error between the NMR porosity computed from the T2 spectrum and the helium (He) porosity measured in the laboratory is 0.65%. (authors)
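
    The forward problem behind this record is a discrete Fredholm sum, echo(t_i) = Σ_j f(T2_j) exp(-t_i / T2_j). The sketch below builds that kernel and recovers a non-negative T2 distribution with a Tikhonov-regularized non-negative least-squares solve; it uses SciPy's nnls for brevity rather than the paper's non-negativity-constrained conjugate gradient, and the spectrum shape, echo spacing and regularization weight are illustrative choices.

        import numpy as np
        from scipy.optimize import nnls

        # Discretize relaxation times logarithmically and echo times linearly
        T2 = np.logspace(-3, 1, 64)                  # seconds
        t = np.arange(1, 501) * 0.002                # echo times: 2 ms spacing, 500 echoes
        K = np.exp(-t[:, None] / T2[None, :])        # kernel K_ij = exp(-t_i / T2_j)

        # Synthetic bimodal T2 spectrum and a noisy echo train (SNR roughly 25)
        f_true = (np.exp(-0.5 * ((np.log10(T2) + 2.0) / 0.15) ** 2)
                  + 0.6 * np.exp(-0.5 * ((np.log10(T2) - 0.0) / 0.20) ** 2))
        echo = K @ f_true
        echo += (echo.max() / 25.0) * np.random.default_rng(0).standard_normal(len(t))

        # Tikhonov-regularized non-negative inversion:
        # minimize ||K f - echo||^2 + lam^2 ||f||^2  subject to f >= 0
        lam = 1.0
        K_aug = np.vstack([K, lam * np.eye(len(T2))])
        echo_aug = np.concatenate([echo, np.zeros(len(T2))])
        f_est, _ = nnls(K_aug, echo_aug)
        print(float(np.sum(f_est)), float(np.sum(f_true)))   # compare total amplitudes (porosity proxy)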

  1. An improved version of Inverse Distance Weighting metamodel assisted Harmony Search algorithm for truss design optimization

    Directory of Open Access Journals (Sweden)

    Y. Gholipour

    Full Text Available This paper focuses on a metamodel-based design optimization algorithm. The intention is to reduce its computational cost and improve its convergence rate. The metamodel-based optimization method introduced here provides the necessary means to reduce the computational cost and improve the convergence rate of the optimization through a surrogate. This algorithm is a combination of a high-quality approximation technique called Inverse Distance Weighting and a meta-heuristic algorithm called Harmony Search. The outcome is then polished by a semi-tabu search algorithm. The algorithm adopts a filtering system and determines the solution vectors where exact simulation should be applied. The performance of the algorithm is evaluated on standard truss design problems, showing a significant decrease in computational effort and an improvement in convergence rate.

  2. Inverse Kinematics of a Humanoid Robot with Non-Spherical Hip: A Hybrid Algorithm Approach

    Directory of Open Access Journals (Sweden)

    Rafael Cisneros Limón

    2013-04-01

    Full Text Available This paper describes an approach to solve the inverse kinematics problem of humanoid robots whose construction shows a small but non-negligible offset at the hip, which prevents any purely analytical solution from being developed. Knowing that a purely numerical solution is not feasible due to variable efficiency problems, the proposed approach first neglects the presence of the offset in order to obtain an approximate “solution” by means of an analytical algorithm based on screw theory, and then uses it as the initial condition of a numerical refining procedure based on the Levenberg-Marquardt algorithm. In this way, few iterations are needed for any specified attitude, making it possible to implement the algorithm for real-time applications. To illustrate the algorithm's implementation, one case study is considered throughout the paper, represented by the SILO2 humanoid robot.

  3. Algorithms and Architectures for Elastic-Wave Inversion Final Report CRADA No. TC02144.0

    Energy Technology Data Exchange (ETDEWEB)

    Larsen, S. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Lindtjorn, O. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)

    2017-08-15

    This was a collaborative effort between Lawrence Livermore National Security, LLC as manager and operator of Lawrence Livermore National Laboratory (LLNL) and Schlumberger Technology Corporation (STC), to perform a computational feasibility study that investigates hardware platforms and software algorithms applicable to STC for Reverse Time Migration (RTM) / Reverse Time Inversion (RTI) of 3-D seismic data.

  4. Comparative behaviour of the Dynamically Penalized Likelihood algorithm in inverse radiation therapy planning

    Energy Technology Data Exchange (ETDEWEB)

    Llacer, Jorge [EC Engineering Consultants, LLC, Los Gatos, CA (United States)]. E-mail: jllacer@home.com; Solberg, Timothy D. [Department of Radiation Oncology, University of California, Los Angeles, CA (United States)]. E-mail: Solberg@radonc.ucla.edu; Promberger, Claus [BrainLAB AG, Heimstetten (Germany)]. E-mail: promberg@brainlab.com

    2001-10-01

    This paper presents a description of tests carried out to compare the behaviour of five algorithms in inverse radiation therapy planning: (1) The Dynamically Penalized Likelihood (DPL), an algorithm based on statistical estimation theory; (2) an accelerated version of the same algorithm; (3) a new fast adaptive simulated annealing (ASA) algorithm; (4) a conjugate gradient method; and (5) a Newton gradient method. A three-dimensional mathematical phantom and two clinical cases have been studied in detail. The phantom consisted of a U-shaped tumour with a partially enclosed 'spinal cord'. The clinical examples were a cavernous sinus meningioma and a prostate case. The algorithms have been tested in carefully selected and controlled conditions so as to ensure fairness in the assessment of results. It has been found that all five methods can yield relatively similar optimizations, except when a very demanding optimization is carried out. For the easier cases, the differences are principally in robustness, ease of use and optimization speed. In the more demanding case, there are significant differences in the resulting dose distributions. The accelerated DPL emerges as possibly the algorithm of choice for clinical practice. An appendix describes the differences in behaviour between the new ASA method and the one based on a patent by the Nomos Corporation. (author)

  5. Multi-objective thermodynamic optimization of combined Brayton and inverse Brayton cycles using genetic algorithms

    International Nuclear Information System (INIS)

    Besarati, S.M.; Atashkari, K.; Jamali, A.; Hajiloo, A.; Nariman-zadeh, N.

    2010-01-01

    This paper presents a simultaneous optimization study of two output performance measures of previously proposed combined Brayton and inverse Brayton cycles. It has been carried out by varying the upper cycle pressure ratio and the expansion pressure of the bottom cycle, and by using a variable, above-atmospheric, bottom-cycle inlet pressure. Multi-objective genetic algorithms are used for Pareto-approach optimization of the cycle outputs. The two important conflicting thermodynamic objectives considered in this work are the net specific work (w_s) and the thermal efficiency (η_th). It is shown that some interesting features among the optimal objective functions and decision variables involved in the Brayton and inverse Brayton cycles can consequently be discovered.

  6. An algorithmic framework for Mumford–Shah regularization of inverse problems in imaging

    International Nuclear Information System (INIS)

    Hohm, Kilian; Weinmann, Andreas; Storath, Martin

    2015-01-01

    The Mumford–Shah model is a very powerful variational approach for edge preserving regularization of image reconstruction processes. However, it is algorithmically challenging because one has to deal with a non-smooth and non-convex functional. In this paper, we propose a new efficient algorithmic framework for Mumford–Shah regularization of inverse problems in imaging. It is based on a splitting into specific subproblems that can be solved exactly. We derive fast solvers for the subproblems which are key for an efficient overall algorithm. Our method neither requires a priori knowledge of the gray or color levels nor of the shape of the discontinuity set. We demonstrate the wide applicability of the method for different modalities. In particular, we consider the reconstruction from Radon data, inpainting, and deconvolution. Our method can be easily adapted to many further imaging setups. The relevant condition is that the proximal mapping of the data fidelity can be evaluated within a reasonable time. In other words, it can be used whenever classical Tikhonov regularization is possible. (paper)

  7. A hybrid artificial bee colony algorithm and pattern search method for inversion of particle size distribution from spectral extinction data

    Science.gov (United States)

    Wang, Li; Li, Feng; Xing, Jian

    2017-10-01

    In this paper, a hybrid artificial bee colony (ABC) algorithm and pattern search (PS) method is proposed and applied for the recovery of the particle size distribution (PSD) from spectral extinction data. To be more useful and practical, the size distribution function is modelled as the general Johnson's SB function, which overcomes the difficulty, encountered in many real circumstances, of not knowing the exact distribution type beforehand. The proposed hybrid algorithm is evaluated through simulated examples involving unimodal, bimodal and trimodal PSDs with different widths and mean particle diameters. For comparison, all examples are additionally validated by the single ABC algorithm. In addition, the performance of the proposed algorithm is further tested by actual extinction measurements with real standard polystyrene samples immersed in water. Simulation and experimental results illustrate that the hybrid algorithm can be used as an effective technique to retrieve PSDs with high reliability and accuracy. Compared with the single ABC algorithm, the proposed algorithm produces more accurate and robust inversion results while taking almost the same CPU time as the ABC algorithm alone. The superiority of the ABC and PS hybridization strategy, in terms of reaching a better balance of estimation accuracy and computational effort, increases its potential as an excellent inversion technique for reliable and efficient actual measurement of PSDs.

  8. Use of Genetic Algorithms to solve Inverse Problems in Relativistic Hydrodynamics

    Science.gov (United States)

    Guzmán, F. S.; González, J. A.

    2018-04-01

    We present the use of Genetic Algorithms (GAs) as a strategy to solve inverse problems associated with models of relativistic hydrodynamics. The signal we consider to emulate an observation is the density of a relativistic gas, measured at a point where a shock is traveling. This shock is generated numerically out of a Riemann problem with mildly relativistic conditions. The inverse problem we propose is the prediction of the initial conditions of density, velocity and pressure of the Riemann problem that gave origin to that signal. For this we use the density, velocity and pressure of the gas at both sides of the discontinuity as the six genes of an organism, initially with random values within a tolerance. We then prepare an initial population of N of these organisms and evolve them using methods based on GAs. In the end, the organism with the best fitness of each generation is compared to the signal, and the process ends when the set of initial conditions of the organisms of a later generation fits the signal within a tolerance.
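
    The sketch below illustrates, under simplifying assumptions, the genetic strategy described above: six genes (left/right density, velocity and pressure) are evolved to match an observed signal. The relativistic Riemann solver is replaced by an invented toy forward model, so only the structure of the search is meaningful.

        # Minimal GA sketch: evolve (rho_L, v_L, p_L, rho_R, v_R, p_R) to fit a signal.
        import numpy as np
        rng = np.random.default_rng(0)

        def forward(genes, t):
            # Toy stand-in for the density measured at the probe point: a smooth
            # step whose height and position depend on the six parameters.
            rho_l, v_l, p_l, rho_r, v_r, p_r = genes
            shock_pos = 1.0 + 0.5 * (v_l - v_r) + 0.1 * (p_l - p_r) / (p_l + p_r)
            return rho_r + (rho_l - rho_r) / (1.0 + np.exp(10.0 * (t - shock_pos)))

        t = np.linspace(0.0, 2.0, 50)
        true_genes = np.array([10.0, 0.0, 13.3, 1.0, 0.0, 0.66])
        signal = forward(true_genes, t)

        def fitness(genes):
            return -np.sum((forward(genes, t) - signal) ** 2)   # higher is better

        lo = np.array([0.5, -0.5, 0.1, 0.5, -0.5, 0.1])
        hi = np.array([20.0, 0.5, 20.0, 20.0, 0.5, 20.0])
        pop = rng.uniform(lo, hi, size=(60, 6))

        for generation in range(200):
            scores = np.array([fitness(ind) for ind in pop])
            parents = pop[np.argsort(scores)[::-1][:20]]       # keep the fittest
            children = []
            while len(children) < len(pop) - len(parents):
                a, b = parents[rng.integers(0, 20, 2)]
                w = rng.random(6)
                child = w * a + (1 - w) * b                    # blend crossover
                child += rng.normal(0.0, 0.05, 6) * (hi - lo)  # mutation
                children.append(np.clip(child, lo, hi))
            pop = np.vstack([parents, children])

        best = pop[np.argmax([fitness(ind) for ind in pop])]
        print("best-fit initial conditions:", best)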

  9. An implementation of differential evolution algorithm for inversion of geoelectrical data

    Science.gov (United States)

    Balkaya, Çağlayan

    2013-11-01

    Differential evolution (DE), a population-based evolutionary algorithm (EA), has been implemented to invert self-potential (SP) and vertical electrical sounding (VES) data sets. The algorithm uses three operators, namely mutation, crossover and selection, similar to the genetic algorithm (GA). Mutation is the most important operator for the success of DE. Three commonly used mutation strategies, DE/best/1 (strategy 1), DE/rand/1 (strategy 2) and DE/rand-to-best/1 (strategy 3), were applied together with a binomial-type crossover. The evolution cycle of DE was realized without boundary constraints. For the test studies performed with SP data, in addition to both noise-free and noisy synthetic data sets, two field data sets observed over the sulfide ore body in the Malachite mine (Colorado) and over the ore bodies in the Neem-Ka Thana copper belt (India) were considered. VES test studies were carried out using synthetically produced resistivity data representing a three-layered earth model and a field data set example from Gökçeada (Turkey), which displays a seawater infiltration problem. The mutation strategies mentioned above were also extensively tested on both the synthetic and field data sets under consideration. Of these, strategy 1 was found to be the most effective strategy for parameter estimation, providing lower computational cost together with good accuracy. The solutions obtained by DE for the synthetic cases of SP were quite consistent with particle swarm optimization (PSO), which is a more widely used population-based optimization algorithm than DE in geophysics. Estimated parameters of the SP and VES data were also compared with those obtained from the Metropolis-Hastings (M-H) sampling algorithm based on simulated annealing (SA) without cooling, to clarify uncertainties in the solutions. The comparison to the M-H algorithm shows that DE performs a fast approximate posterior sampling for the case of low-dimensional inverse geophysical problems.
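
    For readers unfamiliar with the mutation strategies named above, the following Python sketch shows generic DE/best/1, DE/rand/1 and DE/rand-to-best/1 mutations with binomial crossover applied to an invented misfit function; it is not the geoelectrical forward modelling used in the paper.

        # Generic DE sketch with the three mutation strategies and binomial crossover.
        import numpy as np
        rng = np.random.default_rng(1)

        def mutate(pop, best, i, F=0.8, strategy=1):
            idx = rng.choice([k for k in range(len(pop)) if k != i], 3, replace=False)
            r1, r2, r3 = pop[idx]
            if strategy == 1:       # DE/best/1
                return best + F * (r1 - r2)
            if strategy == 2:       # DE/rand/1
                return r1 + F * (r2 - r3)
            # strategy 3: DE/rand-to-best/1
            return pop[i] + F * (best - pop[i]) + F * (r1 - r2)

        def binomial_crossover(target, donor, CR=0.9):
            mask = rng.random(target.size) < CR
            mask[rng.integers(target.size)] = True   # ensure at least one donor gene
            return np.where(mask, donor, target)

        def de_minimize(misfit, bounds, n_pop=30, n_gen=300, strategy=1):
            lo, hi = bounds
            pop = rng.uniform(lo, hi, size=(n_pop, lo.size))
            cost = np.array([misfit(p) for p in pop])
            for _ in range(n_gen):
                best = pop[np.argmin(cost)]
                for i in range(n_pop):
                    trial = binomial_crossover(pop[i], mutate(pop, best, i, strategy=strategy))
                    trial = np.clip(trial, lo, hi)
                    c = misfit(trial)
                    if c < cost[i]:                  # greedy selection
                        pop[i], cost[i] = trial, c
            return pop[np.argmin(cost)], cost.min()

        # toy usage: recover five positive "model" parameters from synthetic data
        truth = np.array([100.0, 10.0, 500.0, 5.0, 50.0])
        data = np.log(truth)                          # stand-in forward response
        model, err = de_minimize(lambda m: np.sum((np.log(m) - data) ** 2),
                                 (np.full(5, 1.0), np.full(5, 1000.0)))
        print(model, err)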

  10. Sorting signed permutations by inversions in O(n log n) time.

    Science.gov (United States)

    Swenson, Krister M; Rajan, Vaibhav; Lin, Yu; Moret, Bernard M E

    2010-03-01

    The study of genomic inversions (or reversals) has been a mainstay of computational genomics for nearly 20 years. After the initial breakthrough of Hannenhalli and Pevzner, who gave the first polynomial-time algorithm for sorting signed permutations by inversions, improved algorithms have been designed, culminating with an optimal linear-time algorithm for computing the inversion distance and a subquadratic algorithm for providing a shortest sequence of inversions, also known as sorting by inversions. Remaining open was the question of whether sorting by inversions could be done in O(n log n) time. In this article, we present a qualified answer to this question, by providing two new sorting algorithms, a simple and fast randomized algorithm and a deterministic refinement. The deterministic algorithm runs in time O(n log n + kn), where k is a data-dependent parameter. We provide the results of extensive experiments showing that both the average and the standard deviation for k are small constants, independent of the size of the permutation. We conclude (but do not prove) that almost all signed permutations can be sorted by inversions in O(n log n) time.
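
    The elementary operation being counted is easy to state in code; the snippet below applies a single signed inversion (reversal) to a permutation. It illustrates the operation only, not the O(n log n) data structures of the paper.

        # Apply one signed inversion: reverse a segment and flip its signs.
        def apply_inversion(perm, i, j):
            """Reverse perm[i..j] inclusive and negate each element."""
            segment = [-x for x in reversed(perm[i:j + 1])]
            return perm[:i] + segment + perm[j + 1:]

        p = [+3, -1, +4, -2, +5]
        print(apply_inversion(p, 1, 3))   # [3, 2, -4, 1, 5]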

  11. Sensitivity of NMR spectra properties to different inversion algorithms. Abstract 97

    International Nuclear Information System (INIS)

    Bryan, J.; Wang, G.; Vargas, S.; Kantzas, A.

    2004-01-01

    Low field NMR technology has many applications in the petroleum industry. NMR spectra obtained from logging tools or laboratory instruments can be used to provide an incredible wealth of useful information for formation evaluation and reservoir fluid characterization purposes. In recent years, research performed at the University of Calgary has been instrumental in developing this technology for heavy oil and bitumen related problems. Specifically, low field NMR has been used in several niche applications: in-situ viscosity estimates of heavy oil and bitumen, water-in-oil emulsion and solvent-bitumen mixture viscosity, water cut in produced fluid streams, and oil-water-solids content in oil sands mining samples. The majority of all NMR analyses are based on the interpretation of NMR spectra. These spectra are inverted numerically from the measured NMR decay data. The mathematics of the inversion is generally assumed to be correct, and the analyses revolve around interpretations of how the spectra relate to physical properties of the samples. However, when measuring high viscosity fluids or clay-bound water, the NMR signal relaxes very quickly and it becomes extremely important to ensure that the spectrum obtained is accurate before relating its properties to the physical properties of the sample. This work investigates the effect of different inversion algorithms on the generated spectra, and attempts to quantify the magnitude of the errors that can be associated with the mathematics of the inversion. This leads to a better understanding of the accuracy of NMR estimates of rock and fluid properties. (author)
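
    As an illustration of the kind of numerical inversion discussed above, the sketch below maps a synthetic CPMG-style decay to a T2 spectrum with Tikhonov-regularized non-negative least squares; the kernel form and regularization weight are generic assumptions, not the specific algorithms compared in the abstract.

        # Regularized NNLS inversion of a synthetic NMR decay into a T2 spectrum.
        import numpy as np
        from scipy.optimize import nnls

        t = np.linspace(1e-4, 1.0, 400)                 # echo times, s
        T2 = np.logspace(-4, 0, 60)                     # candidate relaxation times, s
        K = np.exp(-t[:, None] / T2[None, :])           # kernel matrix

        # synthetic bimodal spectrum and noisy decay
        f_true = np.exp(-0.5 * ((np.log10(T2) + 2.5) / 0.2) ** 2) \
               + 0.5 * np.exp(-0.5 * ((np.log10(T2) + 0.7) / 0.2) ** 2)
        decay = K @ f_true + np.random.default_rng(2).normal(0, 0.01, t.size)

        # Tikhonov-regularized NNLS: minimize ||K f - d||^2 + alpha^2 ||f||^2, f >= 0
        alpha = 0.1
        A = np.vstack([K, alpha * np.eye(T2.size)])
        b = np.concatenate([decay, np.zeros(T2.size)])
        f_est, _ = nnls(A, b)
        print("largest recovered amplitude near T2 =", T2[f_est.argmax()], "s")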

  12. Identification of the Heat Transfer Coefficient in the Inverse Stefan Problem by Using the ABC Algorithm

    Directory of Open Access Journals (Sweden)

    E. Hetmaniok

    2012-12-01

    A procedure based on the Artificial Bee Colony algorithm for solving the two-phase axisymmetric one-dimensional inverse Stefan problem with a boundary condition of the third kind is presented in this paper. Solving the considered problem consists in reconstructing the function describing the heat transfer coefficient appearing in the third-kind boundary condition in such a way that the reconstructed values of temperature are as close as possible to the temperature measurements given at selected points of the solid. A crucial part of the solution method consists in minimizing a functional, which is executed with the aid of one of the swarm intelligence algorithms, the ABC algorithm.

  13. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    Science.gov (United States)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.

  14. A three-dimensional reconstruction algorithm for an inverse-geometry volumetric CT system

    International Nuclear Information System (INIS)

    Schmidt, Taly Gilat; Fahrig, Rebecca; Pelc, Norbert J.

    2005-01-01

    An inverse-geometry volumetric computed tomography (IGCT) system has been proposed capable of rapidly acquiring sufficient data to reconstruct a thick volume in one circular scan. The system uses a large-area scanned source opposite a smaller detector. The source and detector have the same extent in the axial, or slice, direction, thus providing sufficient volumetric sampling and avoiding cone-beam artifacts. This paper describes a reconstruction algorithm for the IGCT system. The algorithm first rebins the acquired data into two-dimensional (2D) parallel-ray projections at multiple tilt and azimuthal angles, followed by a 3D filtered backprojection. The rebinning step is performed by gridding the data onto a Cartesian grid in a 4D projection space. We present a new method for correcting the gridding error caused by the finite and asymmetric sampling in the neighborhood of each output grid point in the projection space. The reconstruction algorithm was implemented and tested on simulated IGCT data. Results show that the gridding correction reduces the gridding errors to below one Hounsfield unit. With this correction, the reconstruction algorithm does not introduce significant artifacts or blurring when compared to images reconstructed from simulated 2D parallel-ray projections. We also present an investigation of the noise behavior of the method which verifies that the proposed reconstruction algorithm utilizes cross-plane rays as efficiently as in-plane rays and can provide noise comparable to an in-plane parallel-ray geometry for the same number of photons. Simulations of a resolution test pattern and the modulation transfer function demonstrate that the IGCT system, using the proposed algorithm, is capable of 0.4 mm isotropic resolution. The successful implementation of the reconstruction algorithm is an important step in establishing feasibility of the IGCT system

  15. Inversion of Land Surface Temperature (LST) Using Terra ASTER Data: A Comparison of Three Algorithms

    Directory of Open Access Journals (Sweden)

    Milton Isaya Ndossi

    2016-12-01

    Land Surface Temperature (LST) is an important measurement in studies related to the Earth surface's processes. The Advanced Spaceborne Thermal Emission and Reflection Radiometer (ASTER) instrument onboard the Terra spacecraft is the currently available Thermal Infrared (TIR) imaging sensor with the highest spatial resolution. This study involves the comparison of LSTs inverted from the sensor using the Split Window Algorithm (SWA), the Single Channel Algorithm (SCA) and the Planck function. This study has used the National Oceanic and Atmospheric Administration's (NOAA) data to model and compare the results from the three algorithms. The data from the sensor have been processed with the Python programming language in a free and open source software package (QGIS) to enable users to make use of the algorithms. The study revealed that the three algorithms are suitable for LST inversion, whereby the Planck function showed the highest level of accuracy, the SWA had a moderate level of accuracy and the SCA had the least accuracy. The algorithms produced results with Root Mean Square Errors (RMSE) of 2.29 K, 3.77 K and 2.88 K for the Planck function, the SCA and the SWA, respectively.
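
    A minimal sketch of the Planck-function route mentioned above is given below: the Planck law is inverted for brightness temperature from spectral radiance and corrected for emissivity. The constants are the standard c1/c2 radiation constants; ASTER calibration and atmospheric correction are omitted, and the input radiance, wavelength and emissivity values are invented examples.

        # Invert the Planck law for temperature from thermal-infrared radiance.
        import numpy as np

        C1 = 1.19104e8    # first radiation constant, W um^4 m^-2 sr^-1
        C2 = 1.43877e4    # second radiation constant, um K

        def planck_inverse(radiance, wavelength_um, emissivity=1.0):
            """Temperature (K) from spectral radiance (W m^-2 sr^-1 um^-1)."""
            L = radiance / emissivity
            return C2 / (wavelength_um * np.log(C1 / (wavelength_um**5 * L) + 1.0))

        # e.g. a band near 10.66 um with an assumed surface-leaving radiance
        print(planck_inverse(9.5, 10.66, emissivity=0.97))   # roughly 300 K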

  16. Optimization of proton and heavy ion therapy using an adaptive inversion algorithm

    International Nuclear Information System (INIS)

    Brahme, A.; Kaellman, P.; Lind, B.K.

    1989-01-01

    From the examples presented it is clear that the clinical advantages of high energy proton beams are considerable when optimally employed. Protons can generate almost any desired dose distribution in an arbitrary shaped target volume. When only ordinary uniform proton beams of fixed range modulation are available, the clinical advantages compared for example to high quality high energy electrons are not so pronounced. The new iterative inversion algorithm presented here therefore opens the door for precise and efficient use of the dose distributional advantages of high energy protons, pions and heavy ions. (author). 22 refs.; 7 figs

  17. Solving inverse problem for Markov chain model of customer lifetime value using flower pollination algorithm

    Science.gov (United States)

    Al-Ma'shumah, Fathimah; Permana, Dony; Sidarto, Kuntjoro Adji

    2015-12-01

    Customer Lifetime Value (CLV) is an important and useful concept in marketing. One of its benefits is to help a company budget marketing expenditure for customer acquisition and customer retention. Many mathematical models have been introduced to calculate CLV considering the customer retention/migration classification scheme. A fairly new class of these models, which is described in this paper, uses Markov Chain Models (MCM). This class of models has the major advantage of being flexible enough to be modified to several different cases/classification schemes. In this model, the probabilities of customer retention and acquisition play an important role. Following Pfeifer and Carraway (2000), the final CLV formula obtained from an MCM usually contains a nonlinear form of the transition probability matrix. This nonlinearity makes the inverse problem of CLV difficult to solve. This paper aims to solve this inverse problem, yielding the approximate transition probabilities for the customers, by applying the Flower Pollination Algorithm, a metaheuristic optimization algorithm developed by Yang (2013). The main purpose of obtaining the transition probabilities is to set goals for marketing teams in keeping the relative frequencies of customer acquisition and customer retention.
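
    For orientation, the sketch below evaluates the Pfeifer-and-Carraway-style discounted Markov-chain CLV that the inverse problem is built on; the transition matrix, per-period margins, discount rate and horizon are invented example values, and the flower pollination search itself is not shown.

        # Forward CLV computation for a two-state retention/lapse Markov chain.
        import numpy as np

        P = np.array([[0.6, 0.4],      # state 0: active customer
                      [0.2, 0.8]])     # state 1: lapsed customer
        R = np.array([100.0, 0.0])     # expected net margin per period in each state
        d = 0.1                        # discount rate per period
        T = 20                         # planning horizon (periods)

        # CLV vector: sum_{t=0..T} [P / (1 + d)]^t R, one entry per starting state
        M = P / (1.0 + d)
        clv = sum(np.linalg.matrix_power(M, t) @ R for t in range(T + 1))
        print("CLV by starting state:", clv)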

  18. Design for a Crane Metallic Structure Based on Imperialist Competitive Algorithm and Inverse Reliability Strategy

    Science.gov (United States)

    Fan, Xiao-Ning; Zhi, Bo

    2017-07-01

    Uncertainties in parameters such as materials, loading, and geometry are inevitable in designing metallic structures for cranes. When considering these uncertainty factors, reliability-based design optimization (RBDO) offers a more reasonable design approach. However, existing RBDO methods for crane metallic structures are prone to low convergence speed and high computational cost. A unilevel RBDO method, combining a discrete imperialist competitive algorithm with an inverse reliability strategy based on the performance measure approach, is developed. Application of the imperialist competitive algorithm at the optimization level significantly improves the convergence speed of this RBDO method. At the reliability analysis level, the inverse reliability strategy is used to determine the feasibility of each probabilistic constraint at each design point by calculating its α-percentile performance, thereby avoiding convergence failure, calculation error, and disproportionate computational effort encountered using conventional moment and simulation methods. Application of the RBDO method to an actual crane structure shows that the developed RBDO realizes a design with the best tradeoff between economy and safety together with about one-third of the convergence speed and the computational cost of the existing method. This paper provides a scientific and effective design approach for the design of metallic structures of cranes.

  19. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Taehoon; Park, Won-Kwang

    2015-09-01

    Throughout various results of numerical simulations, it is well known that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied in limited-view inverse scattering problems. However, the application is somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for the imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments from noisy synthetic data support our investigation.

  20. A dynamical regularization algorithm for solving inverse source problems of elliptic partial differential equations

    Science.gov (United States)

    Zhang, Ye; Gong, Rongfang; Cheng, Xiaoliang; Gulliksson, Mårten

    2018-06-01

    This study considers the inverse source problem for elliptic partial differential equations with both Dirichlet and Neumann boundary data. The unknown source term is to be determined by additional boundary conditions. Unlike the existing methods found in the literature, which usually employ the first-order in time gradient-like system (such as the steepest descent methods) for numerically solving the regularized optimization problem with a fixed regularization parameter, we propose a novel method with a second-order in time dissipative gradient-like system and a dynamical selected regularization parameter. A damped symplectic scheme is proposed for the numerical solution. Theoretical analysis is given for both the continuous model and the numerical algorithm. Several numerical examples are provided to show the robustness of the proposed algorithm.
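
    A minimal sketch of the second-order dissipative dynamics underlying the method, x'' + ηx' = -∇J(x), integrated with a damped semi-implicit Euler step, is given below; J is a simple least-squares stand-in rather than the regularized PDE source functional of the paper, and the damping and step size are arbitrary example values.

        # Damped second-order gradient flow on a quadratic misfit.
        import numpy as np

        def grad_J(x, A, b):
            return A.T @ (A @ x - b)          # gradient of 0.5*||Ax - b||^2

        rng = np.random.default_rng(3)
        A = rng.normal(size=(20, 5))
        b = A @ np.array([1.0, -2.0, 0.5, 3.0, 0.0])

        x = np.zeros(5)
        v = np.zeros(5)
        eta, dt = 2.0, 0.05
        for _ in range(2000):
            v = (v - dt * grad_J(x, A, b)) / (1.0 + dt * eta)   # implicit damping
            x = x + dt * v                                      # position update
        print("recovered x:", np.round(x, 3))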

  1. Application of the inverse estimation method of current distribution from magnetic fields using genetic algorithm to beam profile measurement

    International Nuclear Information System (INIS)

    Kishimoto, M.; Sakasai, K.; Ara, K.

    1994-01-01

    In this paper, a new type of non-invasive beam profile monitor for intense ion accelerators using high-temperature superconductors is presented. We regard the inverse estimation problem of the beam profile as an optimum allocation problem of currents over the cross-section of the beam vacuum pipe, and apply a genetic algorithm to solve this optimization problem. We carried out computer simulations to verify the effectiveness of this inverse estimation method of the beam profile. (author)

  2. A necessary condition for applying MUSIC algorithm in limited-view inverse scattering problem

    International Nuclear Information System (INIS)

    Park, Taehoon; Park, Won-Kwang

    2015-01-01

    Throughout various results of numerical simulations, it is well known that the MUltiple SIgnal Classification (MUSIC) algorithm can be applied in limited-view inverse scattering problems. However, the application is somewhat heuristic. In this contribution, we identify a necessary condition of MUSIC for the imaging of a collection of small, perfectly conducting cracks. This is based on the fact that the MUSIC imaging functional can be represented as an infinite series of Bessel functions of integer order of the first kind. Numerical experiments from noisy synthetic data support our investigation. (paper)

  3. Time-reversal and Bayesian inversion

    Science.gov (United States)

    Debski, Wojciech

    2017-04-01

    The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in huge inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is an ongoing effort to make large inverse tasks such as those mentioned above manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting the internal symmetry of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.

  4. Crustal velocity structure of central Gansu Province from regional seismic waveform inversion using firework algorithm

    Science.gov (United States)

    Chen, Yanyang; Wang, Yanbin; Zhang, Yuansheng

    2017-04-01

    The firework algorithm (FWA) is a novel swarm intelligence-based method recently proposed for the optimization of multi-parameter, nonlinear functions. Numerical waveform inversion experiments using a synthetic model show that the FWA performs well in both solution quality and efficiency. We apply the FWA in this study to crustal velocity structure inversion using regional seismic waveform data of central Gansu on the northeastern margin of the Qinghai-Tibet plateau. Seismograms recorded from the moment magnitude (MW) 5.4 Minxian earthquake enable us to obtain an average crustal velocity model for this region. We initially carried out a series of FWA robustness tests in regional waveform inversion at the same earthquake and station positions across the study region, inverting two velocity structure models, with and without a low-velocity crustal layer; the accuracy of our average inversion results and their standard deviations reveal the advantages of the FWA for the inversion of regional seismic waveforms. We applied the FWA across our study area using three-component waveform data recorded by nine broadband permanent seismic stations with epicentral distances ranging between 146 and 437 km. These inversion results show that the average thickness of the crust in this region is 46.75 km, while the thicknesses of the sedimentary layer and the upper, middle, and lower crust are 3.15, 15.69, 13.08, and 14.83 km, respectively. The results also show that the P-wave velocities of these layers and the upper mantle are 4.47, 6.07, 6.12, 6.87, and 8.18 km/s, respectively.

  5. Frequency-domain waveform inversion using the unwrapped phase

    KAUST Repository

    Choi, Yun Seok

    2011-01-01

    Phase wrapping in the frequency domain (or cycle skipping in the time domain) is the major cause of the local minima problem in waveform inversion. The unwrapped phase has the potential to provide us with a robust and reliable waveform inversion, with reduced local minima. We propose a waveform inversion algorithm using the unwrapped-phase objective function in the frequency domain. The unwrapped phase, or what we call the instantaneous traveltime, is given by the imaginary part of dividing the derivative of the wavefield with respect to the angular frequency by the wavefield itself. As a result, the objective function is given by a traveltime-like function, which allows us to smooth it and reduce its nonlinearity. The gradient of the objective function is computed using the back-propagation algorithm based on the adjoint-state technique. We apply both our waveform inversion algorithm using the unwrapped phase and the conventional waveform inversion and show that our inversion algorithm gives better convergence to the true model than the conventional waveform inversion. © 2011 Society of Exploration Geophysicists.
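
    The instantaneous-traveltime attribute described above can be sketched numerically as follows; the wavefield is an idealized single-arrival trace and the frequency derivative is a finite difference, so this only illustrates the definition (the sign of the result depends on the Fourier convention used).

        # Instantaneous traveltime: imag[(dU/dw) / U] over discrete angular frequencies.
        import numpy as np

        omega = 2.0 * np.pi * np.linspace(2.0, 30.0, 200)   # angular frequencies
        t0 = 0.8                                             # true traveltime, s
        U = 1.0 * np.exp(-1j * omega * t0)                   # idealized wavefield

        dU_domega = np.gradient(U, omega)                    # finite-difference derivative
        instantaneous_traveltime = np.imag(dU_domega / U)
        print(instantaneous_traveltime[:5])                  # ~ -t0 with this sign convention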

  6. Inverse vs. forward breast IMRT planning

    International Nuclear Information System (INIS)

    Mihai, Alina; Rakovitch, Eileen; Sixel, Katharina; Woo, Tony; Cardoso, Marlene; Bell, Chris; Ruschin, Mark; Pignol, Jean-Philippe

    2005-01-01

    Breast intensity-modulated radiation therapy (IMRT) improves dose distribution homogeneity within the whole breast. Previous publications report the use of inverse or forward dose optimization algorithms. Because the inverse technique is not widely available in commercial treatment planning systems, it is important to compare the two algorithms. The goal of this work is to compare them on a prospective cohort of 30 patients. Dose distributions were evaluated on differential dose-volume histograms using the volumes receiving more than 105% (V105) and 110% (V110) of the prescribed dose, the maximum dose (Dmax) or hot spot, and the sagittal dose gradient (SDG), the gradient between the dose on the inframammary crease and the prescribed dose. The data were analyzed using the Wilcoxon signed rank test. The inverse planning significantly improves V105 (mean value 9.7% vs. 14.5%, p = 0.002) and V110 (mean value 1.4% vs. 3.2%, p = 0.006). However, the SDG is not statistically significantly different between the algorithms. Looking at the potential impact on acute skin reaction, although there is a significant reduction of V110 using an inverse algorithm, it is unlikely that this 1.6% volume reduction will present a significant clinical advantage over a forward algorithm. Both algorithms are equivalent in removing the hot spots on the inframammary fold, where acute skin reactions occur more frequently using a conventional wedge technique. Based on these results, we recommend that both forward and inverse algorithms be considered for breast IMRT planning.

  7. MUSIC algorithm for imaging of a sound-hard arc in limited-view inverse scattering problem

    Science.gov (United States)

    Park, Won-Kwang

    2017-07-01

    MUltiple SIgnal Classification (MUSIC) algorithm for a non-iterative imaging of sound-hard arc in limited-view inverse scattering problem is considered. In order to discover mathematical structure of MUSIC, we derive a relationship between MUSIC and an infinite series of Bessel functions of integer order. This structure enables us to examine some properties of MUSIC in limited-view problem. Numerical simulations are performed to support the identified structure of MUSIC.

  8. Feasibility of waveform inversion of Rayleigh waves for shallow shear-wave velocity using a genetic algorithm

    Science.gov (United States)

    Zeng, C.; Xia, J.; Miller, R.D.; Tsoflias, G.P.

    2011-01-01

    Conventional surface wave inversion for shallow shear (S)-wave velocity relies on the generation of dispersion curves of Rayleigh waves. This constrains the method to only laterally homogeneous (or very smooth laterally heterogeneous) earth models. Waveform inversion directly fits waveforms on seismograms and, hence, does not have such a limitation. Waveforms of Rayleigh waves are highly related to S-wave velocities. By inverting the waveforms of Rayleigh waves on a near-surface seismogram, shallow S-wave velocities can be estimated for earth models with strong lateral heterogeneity. We employ a genetic algorithm (GA) to perform waveform inversion of Rayleigh waves for S-wave velocities. The forward problem is solved by finite-difference modeling in the time domain. The model space is updated by generating offspring models using the GA. Final solutions can be found through an iterative waveform-fitting scheme. Inversions based on synthetic records show that the S-wave velocities can be recovered successfully, with errors no more than 10%, for several typical near-surface earth models. For layered earth models, the proposed method can generate one-dimensional S-wave velocity profiles without knowledge of initial models. For earth models containing lateral heterogeneity, for which conventional dispersion-curve-based inversion methods are challenging, it is feasible to produce high-resolution S-wave velocity sections by GA waveform inversion with appropriate a priori information. The synthetic tests indicate that GA waveform inversion of Rayleigh waves has great potential for shallow S-wave velocity imaging in the presence of strong lateral heterogeneity. © 2011 Elsevier B.V.

  9. Combinatorial Algorithms for Computing Column Space Bases That Have Sparse Inverses

    Energy Technology Data Exchange (ETDEWEB)

    Pinar, Ali; Chow, Edmond; Pothen, Alex

    2005-03-18

    This paper presents a combinatorial study on the problem of constructing a sparse basis for the null-space of a sparse, underdetermined, full rank matrix, A. Such a null-space is suitable for solving many saddle point problems. Our approach is to form a column space basis of A that has a sparse inverse, by selecting suitable columns of A. This basis is then used to form a sparse null-space basis in fundamental form. We investigate three different algorithms for computing the column space basis: two greedy approaches that rely on matching, and a third employing a divide and conquer strategy implemented with hypergraph partitioning followed by the greedy approach. We also discuss the complexity of selecting a column basis when it is known that a block diagonal basis exists with a small given block size.
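
    The "fundamental form" construction mentioned above can be illustrated densely as follows: pick a nonsingular column basis B of A = [B | N] (here via column-pivoted QR rather than the matching or hypergraph heuristics of the paper) and form the null-space basis [-B^{-1}N; I]. The matrix is a small random example.

        # Dense illustration of a fundamental-form null-space basis.
        import numpy as np
        from scipy.linalg import qr

        rng = np.random.default_rng(4)
        A = rng.normal(size=(3, 6))                 # underdetermined, full row rank

        # choose a well-conditioned column basis via column-pivoted QR
        _, _, piv = qr(A, pivoting=True)
        basis_cols, rest_cols = piv[:3], piv[3:]
        B, N = A[:, basis_cols], A[:, rest_cols]

        Z_top = -np.linalg.solve(B, N)              # -B^{-1} N
        Z = np.zeros((6, 3))
        Z[basis_cols, :] = Z_top
        Z[rest_cols, :] = np.eye(3)

        print(np.allclose(A @ Z, 0.0))              # columns of Z span null(A)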

  10. Inversion of particle size distribution by spectral extinction technique using the attractive and repulsive particle swarm optimization algorithm

    Directory of Open Access Journals (Sweden)

    Qi Hong

    2015-01-01

    The particle size distribution (PSD) plays an important role in environmental pollution detection and human health protection, for media such as fog, haze and soot. In this study, the Attractive and Repulsive Particle Swarm Optimization (ARPSO) algorithm and the basic PSO were applied to retrieve the PSD. The spectral extinction technique coupled with the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law was employed to investigate the retrieval of the PSD. Three commonly used monomodal PSDs, i.e. the Rosin-Rammler (R-R) distribution, the normal (N-N) distribution and the logarithmic normal (L-N) distribution, were studied in the dependent model. Then, an optimal wavelength selection algorithm was proposed. To study the accuracy and robustness of the inverse results, some characteristic parameters were employed. The research revealed that the ARPSO achieved more accurate results and a faster convergence rate than the basic PSO, even with random measurement error. Moreover, the investigation also demonstrated that the inverse results obtained with four incident laser wavelengths were more accurate and robust than those obtained with two wavelengths. The research also found that increasing the interval between the selected incident laser wavelengths made the inverse results more accurate, even in the presence of random error.

  11. Source-independent elastic waveform inversion using a logarithmic wavefield

    KAUST Repository

    Choi, Yun Seok

    2012-01-01

    The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this inversion algorithm, we first normalize the wavefields with the reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions using the following methods: combination of amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for the modified version of elastic Marmousi2 model, and compare the results with those of the source-estimation logarithmic waveform inversion. For the noise-free data, the source-independent algorithms yield velocity models close to true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data, the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.
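
    The normalization-then-logarithm idea can be sketched for a single frequency as below: dividing every trace by a reference trace cancels the common source factor, and the logarithm separates the amplitude and phase misfits. The wavefields are synthetic stand-ins, not modeled Marmousi2 data, and phase unwrapping issues are ignored.

        # Source-independent logarithmic misfits at one frequency.
        import numpy as np

        rng = np.random.default_rng(5)
        n_rec = 8
        source = 3.0 * np.exp(1j * 0.7)                     # unknown common source factor
        green_obs = rng.normal(size=n_rec) + 1j * rng.normal(size=n_rec)
        green_mod = green_obs + 0.1 * (rng.normal(size=n_rec) + 1j * rng.normal(size=n_rec))

        d_obs = source * green_obs                          # observed wavefield
        d_mod = 1.0 * green_mod                             # modeled with a wrong source

        ref = 0                                             # reference receiver index
        r_obs = np.log(d_obs / d_obs[ref])                  # source cancels in the ratio
        r_mod = np.log(d_mod / d_mod[ref])

        misfit_full  = 0.5 * np.sum(np.abs(r_mod - r_obs) ** 2)        # amplitude + phase
        misfit_amp   = 0.5 * np.sum((r_mod.real - r_obs.real) ** 2)    # amplitude-only
        misfit_phase = 0.5 * np.sum((r_mod.imag - r_obs.imag) ** 2)    # phase-only
        print(misfit_full, misfit_amp, misfit_phase)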

  12. New inverse synthetic aperture radar algorithm for translational motion compensation

    Science.gov (United States)

    Bocker, Richard P.; Henderson, Thomas B.; Jones, Scott A.; Frieden, B. R.

    1991-10-01

    Inverse synthetic aperture radar (ISAR) is an imaging technique that shows real promise in classifying airborne targets in real time under all weather conditions. Over the past few years a large body of ISAR data has been collected and considerable effort has been expended to develop algorithms to form high-resolution images from this data. One important goal of workers in this field is to develop software that will do the best job of imaging under the widest range of conditions. The success of classifying targets using ISAR is predicated upon forming highly focused radar images of these targets. Efforts to develop highly focused imaging computer software have been challenging, mainly because the imaging depends on and is affected by the motion of the target, which in general is not precisely known. Specifically, the target generally has both rotational motion about some axis and translational motion as a whole with respect to the radar. The slant-range translational motion kinematic quantities must be first accurately estimated from the data and compensated before the image can be focused. Following slant-range motion compensation, the image is further focused by determining and correcting for target rotation. The use of the burst derivative measure is proposed as a means to improve the computational efficiency of currently used ISAR algorithms. The use of this measure in motion compensation ISAR algorithms for estimating the slant-range translational motion kinematic quantities of an uncooperative target is described. Preliminary tests have been performed on simulated as well as actual ISAR data using both a Sun 4 workstation and a parallel processing transputer array. Results indicate that the burst derivative measure gives significant improvement in processing speed over the traditional entropy measure now employed.

  13. Inverse estimation of the spheroidal particle size distribution using Ant Colony Optimization algorithms in multispectral extinction technique

    Science.gov (United States)

    He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming

    2014-10-01

    Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when considering only the variation of the length of the rotational semi-axis.

  14. Sensitivity Analysis of a CPAM Inverse Algorithm for Composite Laminates Characterization

    Directory of Open Access Journals (Sweden)

    Farshid Masoumi

    2017-01-01

    Using experimental data and numerical simulations, a new combined technique is presented for the characterization of thin and thick orthotropic composite laminates. Four or five elastic constants, as well as the ply orientation angles, are considered as the unknown parameters. The material characterization is first examined for isotropic plates under different boundary conditions to evaluate the method's accuracy. The proposed algorithm, called CPAM (Combined Programs of ABAQUS and MATLAB), utilizes an optimization procedure and makes simultaneous use of vibration test data together with their corresponding numerical solutions. The numerical solutions are based on a commercial finite element package for efficiently identifying the material properties. An inverse method based on a particle swarm optimization algorithm is further provided using MATLAB software. The error function to be minimized is the sum of squared differences between experimental and simulated eigenfrequencies. To evaluate the robustness of the model's results in the presence of uncertainty and unwanted noise, a sensitivity analysis that employs a Gaussian disorder model is applied directly to the measured frequencies. The highly accurate results confirm the validity and capability of the present method in the simultaneous determination of mechanical constants and fiber orientation angles of composite laminates as compared to prior methods.

  15. Effective and accurate processing and inversion of airborne electromagnetic data

    DEFF Research Database (Denmark)

    Auken, Esben; Christiansen, Anders Vest; Andersen, Kristoffer Rønne

    Airborne electromagnetic (AEM) data is used throughout the world for mapping of mineral targets and groundwater resources. The development of technology and inversion algorithms has been tremendous over the last decade, and the results from these surveys are high-resolution images of the subsurface. In this keynote talk, we discuss an effective inversion algorithm, which is both subjected to intense research and development and used in production. This is the well-known Laterally Constrained Inversion (LCI) and Spatially Constrained Inversion algorithm. The same algorithm is also used in a voxel setup (3D model) and for sheet inversions. An integral part of these different model discretizations is an accurate modelling of the system transfer function and of auxiliary parameters like flight altitude, bird pitch, etc.

  16. Photoemission spectroscopy study on interfacial energy level alignments in tandem organic light-emitting diodes

    Energy Technology Data Exchange (ETDEWEB)

    Ou, Qing-Dong; Li, Chi; Li, Yan-Qing, E-mail: yqli@suda.edu.cn; Tang, Jian-Xin, E-mail: jxtang@suda.edu.cn

    2015-10-01

    Highlights: • The interface energetics of tandem OLEDs is overviewed. • Energy level alignment in CGLs is addressed via photoemission spectroscopy. • The n-type doping effect with cesium compounds is discussed. • Hole injection barrier is dependent on oxygen vacancies in transition metal oxides. • Device lifetime of tandem OLEDs is sensitive to interfacial stability of CGLs. - Abstract: Organic light-emitting diodes (OLEDs) using a tandem structure offer a highly attractive option for the applications of next-generation flat panel displays and solid-state lighting due to the extremely high brightness and efficiency along with the long operational lifetime. In general, reliable information about interface energetics of the charge generation layers (CGLs), which plays the central role in charge generation and carrier injection into the stacked emission units, is highly desirable and advantageous for interface engineering and the performance optimization of tandem OLEDs. In this review, our recent studies on tandem OLEDs are overviewed, especially from interface energetics perspective via photoemission spectroscopy. The electronic structures of various transition metal oxide (TMO)-based CGLs and their role in charge generation process are reviewed, addressing the n-type doping impact of organic layers in CGLs, thermal annealing-induced oxygen vacancy in TMOs, and the interfacial stability of CGLs on the device operational lifetime. The resulting energy level alignments are summarized in correspondence with tandem OLED performance.

  17. An effective medium inversion algorithm for gas hydrate quantification and its application to laboratory and borehole measurements of gas hydrate-bearing sediments

    Science.gov (United States)

    Chand, S.; Minshull, T.A.; Priest, J.A.; Best, A.I.; Clayton, C.R.I.; Waite, W.F.

    2006-01-01

    The presence of gas hydrate in marine sediments alters their physical properties. In some circumstances, gas hydrate may cement sediment grains together and dramatically increase the seismic P- and S-wave velocities of the composite medium. Hydrate may also form a load-bearing structure within the sediment microstructure, but with different seismic wave attenuation characteristics, changing the attenuation behaviour of the composite. Here we introduce an inversion algorithm based on effective medium modelling to infer hydrate saturations from velocity and attenuation measurements on hydrate-bearing sediments. The velocity increase is modelled as extra binding developed by gas hydrate that strengthens the sediment microstructure. The attenuation increase is modelled through a difference in fluid flow properties caused by different permeabilities in the sediment and hydrate microstructures. We relate velocity and attenuation increases in hydrate-bearing sediments to their hydrate content, using an effective medium inversion algorithm based on the self-consistent approximation (SCA), differential effective medium (DEM) theory, and Biot and squirt flow mechanisms of fluid flow. The inversion algorithm is able to convert observations in compressional and shear wave velocities and attenuations to hydrate saturation in the sediment pore space. We applied our algorithm to a data set from the Mallik 2L–38 well, Mackenzie delta, Canada, and to data from laboratory measurements on gas-rich and water-saturated sand samples. Predictions using our algorithm match the borehole data and water-saturated laboratory data if the proportion of hydrate contributing to the load-bearing structure increases with hydrate saturation. The predictions match the gas-rich laboratory data if that proportion decreases with hydrate saturation. We attribute this difference to differences in hydrate formation mechanisms between the two environments.

  18. Discrete inverse scattering theory and the continuum limit

    International Nuclear Information System (INIS)

    Berryman, J.G.; Greene, R.R.

    1978-01-01

    The class of satisfactory difference approximations for the Schroedinger equation in discrete inverse scattering theory is shown smaller than previously supposed. A fast algorithm (analogous to the Levinson algorithm for Toeplitz matrices) is found for solving the discrete inverse problem. (Auth.)

  19. Accelerating adaptive inverse distance weighting interpolation algorithm on a graphics processing unit.

    Science.gov (United States)

    Mei, Gang; Xu, Liangliang; Xu, Nengxiong

    2017-09-01

    This paper focuses on designing and implementing parallel adaptive inverse distance weighting (AIDW) interpolation algorithms by using the graphics processing unit (GPU). The AIDW is an improved version of the standard IDW, which can adaptively determine the power parameter according to the data points' spatial distribution pattern and achieve more accurate predictions than those predicted by IDW. In this paper, we first present two versions of the GPU-accelerated AIDW, i.e. the naive version without profiting from the shared memory and the tiled version taking advantage of the shared memory. We also implement the naive version and the tiled version using two data layouts, structure of arrays and array of aligned structures, on both single and double precision. We then evaluate the performance of parallel AIDW by comparing it with its corresponding serial algorithm on three different machines equipped with the GPUs GT730M, M5000 and K40c. The experimental results indicate that: (i) there is no significant difference in the computational efficiency when different data layouts are employed; (ii) the tiled version is always slightly faster than the naive version; and (iii) on single precision the achieved speed-up can be up to 763 (on the GPU M5000), while on double precision the obtained highest speed-up is 197 (on the GPU K40c). To benefit the community, all source code and testing data related to the presented parallel AIDW algorithm are publicly available.
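
    A simplified serial sketch of the interpolation being parallelized above is shown below: inverse distance weighting whose power parameter adapts to the local sampling density. The adaptation rule is a deliberately crude stand-in for the AIDW scheme described in the paper, and no GPU code is given.

        # Serial adaptive-power IDW interpolation (illustrative only).
        import numpy as np

        def aidw_interpolate(xy_known, z_known, xy_query, k=8):
            z_out = np.empty(len(xy_query))
            for i, q in enumerate(xy_query):
                d = np.linalg.norm(xy_known - q, axis=1)
                nearest = np.argsort(d)[:k]
                # crude density measure: mean distance to the k nearest samples,
                # mapped to a power between 1 (dense sampling) and 3 (sparse)
                density = d[nearest].mean()
                alpha = np.clip(1.0 + 2.0 * density / (density + 0.1), 1.0, 3.0)
                w = 1.0 / np.maximum(d[nearest], 1e-12) ** alpha
                z_out[i] = np.sum(w * z_known[nearest]) / np.sum(w)
            return z_out

        rng = np.random.default_rng(6)
        pts = rng.random((200, 2))
        vals = np.sin(4 * pts[:, 0]) + np.cos(3 * pts[:, 1])
        grid = rng.random((5, 2))
        print(aidw_interpolate(pts, vals, grid))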

  20. The Natural-CCD Algorithm, a Novel Method to Solve the Inverse Kinematics of Hyper-redundant and Soft Robots.

    Science.gov (United States)

    Martín, Andrés; Barrientos, Antonio; Del Cerro, Jaime

    2018-03-22

    This article presents a new method to solve the inverse kinematics problem of hyper-redundant and soft manipulators. From an engineering perspective, this kind of robot is an underdetermined system. Therefore, such robots exhibit an infinite number of solutions for the inverse kinematics problem, and choosing the best one can be a great challenge. A new algorithm based on cyclic coordinate descent (CCD), named natural-CCD, is proposed to solve this issue. It takes its name from the fact that it generates very harmonious robot movements and trajectories that also appear in nature, such as the golden spiral. In addition, it has been applied to perform continuous trajectories, to develop whole-body movements, to analyze motion planning in complex environments, and to study fault tolerance, even for both prismatic and rotational joints. The proposed algorithm is very simple, precise, and computationally efficient. It works for robots in either two or three spatial dimensions and handles a large number of degrees of freedom. Because of this, it aims to break down barriers between discrete hyper-redundant and continuum soft robots.
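
    As background for the natural-CCD variant, the following is a minimal sketch of the classic planar CCD loop it builds on: each joint in turn rotates so that the end effector swings toward the target. The chain geometry, tolerances and target are arbitrary example values, not the robots studied in the paper.

        # Classic CCD inverse kinematics for a planar serial chain.
        import numpy as np

        def fk_positions(angles, link_len=1.0):
            """Joint positions of a planar chain with equal link lengths."""
            pts = [np.zeros(2)]
            theta = 0.0
            for a in angles:
                theta += a
                pts.append(pts[-1] + link_len * np.array([np.cos(theta), np.sin(theta)]))
            return np.array(pts)

        def ccd_ik(angles, target, iters=200, tol=1e-4):
            angles = np.array(angles, dtype=float)
            for _ in range(iters):
                if np.linalg.norm(fk_positions(angles)[-1] - target) < tol:
                    break
                for j in reversed(range(len(angles))):      # from tip to base
                    pts = fk_positions(angles)
                    to_end = pts[-1] - pts[j]
                    to_tgt = target - pts[j]
                    delta = np.arctan2(to_tgt[1], to_tgt[0]) - np.arctan2(to_end[1], to_end[0])
                    angles[j] += delta                      # rotate joint j toward target
            return angles

        angles0 = np.zeros(10)                              # 10-link hyper-redundant chain
        target = np.array([4.0, 5.0])
        q = ccd_ik(angles0, target)
        print("end-effector error:", np.linalg.norm(fk_positions(q)[-1] - target))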

  1. Efficient Algorithms for Analyzing Segmental Duplications, Deletions, and Inversions in Genomes

    Science.gov (United States)

    Kahn, Crystal L.; Mozes, Shay; Raphael, Benjamin J.

    Segmental duplications, or low-copy repeats, are common in mammalian genomes. In the human genome, most segmental duplications are mosaics consisting of pieces of multiple other segmental duplications. This complex genomic organization complicates analysis of the evolutionary history of these sequences. Earlier, we introduced a genomic distance, called duplication distance, that computes the most parsimonious way to build a target string by repeatedly copying substrings of a source string. We also showed how to use this distance to describe the formation of segmental duplications according to a two-step model that has been proposed to explain human segmental duplications. Here we describe polynomial-time exact algorithms for several extensions of duplication distance including models that allow certain types of substring deletions and inversions. These extensions will permit more biologically realistic analyses of segmental duplications in genomes.

  2. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
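
    The closed-form Gaussian posterior that such a linearized Bayesian inversion relies on can be sketched generically as below for d = Gm + e with a Gaussian prior and Gaussian noise; the operator G, the prior and the noise level are invented and are not the Zoeppritz-based parameterization of the paper.

        # Explicit Gaussian posterior for a linear(ized) forward model.
        import numpy as np

        rng = np.random.default_rng(7)
        n_model, n_data = 6, 40
        G = rng.normal(size=(n_data, n_model))          # linearized forward operator
        m_true = rng.normal(size=n_model)
        Sigma_m = np.eye(n_model)                       # prior covariance
        Sigma_e = 0.1**2 * np.eye(n_data)               # noise covariance
        mu_m = np.zeros(n_model)                        # prior mean

        d = G @ m_true + rng.normal(0, 0.1, n_data)

        # posterior: mu + Sigma_m G^T (G Sigma_m G^T + Sigma_e)^{-1} (d - G mu)
        S = G @ Sigma_m @ G.T + Sigma_e
        K = Sigma_m @ G.T @ np.linalg.inv(S)
        mu_post = mu_m + K @ (d - G @ mu_m)
        Sigma_post = Sigma_m - K @ G @ Sigma_m
        print("posterior mean:", np.round(mu_post, 2))
        print("95% half-widths:", np.round(1.96 * np.sqrt(np.diag(Sigma_post)), 2))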

  3. Unwrapped phase inversion for near surface seismic data

    KAUST Repository

    Choi, Yun Seok

    2012-11-04

    Phase wrapping is one of the main obstacles of waveform inversion. We use an inversion algorithm based on the instantaneous traveltime that overcomes the phase-wrapping problem. With a high damping factor, the frequency-dependent instantaneous-traveltime inversion provides the stability of refraction tomography, with higher-resolution results and no arrival picking involved. We apply the instantaneous-traveltime inversion to synthetic data generated by elastic time-domain modeling. The synthetic data are representative of near-surface seismic data. Although the inversion algorithm is based on the acoustic wave equation, the numerical examples show that the instantaneous-traveltime inversion generates a convergent velocity model, very similar to what we see from traveltime tomography.

  4. Joint Inversion of 1-D Magnetotelluric and Surface-Wave Dispersion Data with an Improved Multi-Objective Genetic Algorithm and Application to the Data of the Longmenshan Fault Zone

    Science.gov (United States)

    Wu, Pingping; Tan, Handong; Peng, Miao; Ma, Huan; Wang, Mao

    2018-05-01

    Magnetotellurics and seismic surface waves are two prominent geophysical methods for deep underground exploration. Joint inversion of these two datasets can help enhance the accuracy of inversion. In this paper, we describe a method for developing an improved multi-objective genetic algorithm (NSGA-SBX) and applying it to two numerical tests to verify the advantages of the algorithm. Our findings show that joint inversion with the NSGA-SBX method can improve the inversion results by strengthening structural coupling when the discontinuities of the electrical and velocity models are consistent, and in case of inconsistent discontinuities between these models, joint inversion can retain the advantages of individual inversions. By applying the algorithm to four detection points along the Longmenshan fault zone, we observe several features. The Sichuan Basin demonstrates low S-wave velocity and high conductivity in the shallow crust probably due to thick sedimentary layers. The eastern margin of the Tibetan Plateau shows high velocity and high resistivity in the shallow crust, while two low velocity layers and a high conductivity layer are observed in the middle lower crust, probably indicating the mid-crustal channel flow. Along the Longmenshan fault zone, a high conductivity layer from 8 to 20 km is observed beneath the northern segment and decreases with depth beneath the middle segment, which might be caused by the elevated fluid content of the fault zone.

  5. An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation

    KAUST Repository

    Asiri, Sharefa M.; Zayane, Chadia; Laleg-Kirati, Taous-Meriem

    2015-01-01

    Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of some unknowns for systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and insight into the impact of the measurements’ size and location is provided.

  6. An Adaptive Observer-Based Algorithm for Solving Inverse Source Problem for the Wave Equation

    KAUST Repository

    Asiri, Sharefa M.

    2015-08-31

    Observers are well known in control theory. Originally designed to estimate the hidden states of dynamical systems given some measurements, the scope of observers has recently been extended to the estimation of some unknowns for systems governed by partial differential equations. In this paper, observers are used to solve an inverse source problem for a one-dimensional wave equation. An adaptive observer is designed to estimate the state and source components for a fully discretized system. The effectiveness of the algorithm is demonstrated in noise-free and noisy cases, and insight into the impact of the measurements’ size and location is provided.

  7. Generalized inverses theory and computations

    CERN Document Server

    Wang, Guorong; Qiao, Sanzheng

    2018-01-01

    This book begins with the fundamentals of the generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, imbedding method, finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of the generalized inverses with an undergraduate-level understanding of linear algebra.

  8. An overview of a highly versatile forward and stable inverse algorithm for airborne, ground-based and borehole electromagnetic and electric data

    DEFF Research Database (Denmark)

    Auken, Esben; Christiansen, Anders Vest; Kirkegaard, Casper

    2015-01-01

    We present an overview of a mature, robust and general algorithm providing a single framework for the inversion of most electromagnetic and electrical data types and instrument geometries. The implementation mainly uses a 1D earth formulation for electromagnetics and magnetic resonance sounding ... types of data. Our implementation is modular, meaning that the bulk of the algorithm is independent of data type, making it easy to add support for new types. Having implemented forward response routines and file I/O for a given data type provides access to a robust and general inversion engine. This engine includes support for mixed data types, arbitrary model parameter constraints, integration of prior information and calculation of both model parameter sensitivity analysis and depth of investigation. We present a review of our implementation and methodology and show four different examples ...

  9. An inverse model for locating skin tumours in 3D using the genetic algorithm with the Dual Reciprocity Boundary Element Method

    Directory of Open Access Journals (Sweden)

    Fabrício Ribeiro Bueno

    Full Text Available Here, the Dual Reciprocity Boundary Element Method is used to solve the 3D Pennes Bioheat Equation, which, together with a Genetic Algorithm, produces an inverse model capable of obtaining the location and the size of a tumour, taking as data input the temperature distribution measured on the skin surface. Given that the objective function, which is solved inversely, involves the DRBEM (Dual Reciprocity Boundary Element Method), the Genetic Algorithm in its usual form becomes slower, so it was necessary to develop functions based on the solution history so that the process becomes quicker and more accurate. Results for 8 examples are presented, including cases with convection and radiation boundary conditions. Cases involving noise in the readings of the equipment are also considered. This technique is intended to assist health workers in the diagnosis of tumours.

  10. Frequency-domain waveform inversion using the phase derivative

    KAUST Repository

    Choi, Yun Seok

    2013-09-26

    Phase wrapping in the frequency domain or cycle skipping in the time domain is the major cause of the local minima problem in the waveform inversion when the starting model is far from the true model. Since the phase derivative does not suffer from the wrapping effect, its inversion has the potential of providing a robust and reliable inversion result. We propose a new waveform inversion algorithm using the phase derivative in the frequency domain along with the exponential damping term to attenuate reflections. We estimate the phase derivative, or what we refer to as the instantaneous traveltime, by taking the derivative of the Fourier-transformed wavefield with respect to the angular frequency, dividing it by the wavefield itself and taking the imaginary part. The objective function is constructed using the phase derivative and the gradient of the objective function is computed using the back-propagation algorithm. Numerical examples show that our inversion algorithm with a strong damping generates a tomographic result even for a high ‘single’ frequency, which can be a good initial model for full waveform inversion and migration.
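
    The phase-derivative (instantaneous-traveltime) estimate described above can be sketched in a few lines of numpy; the damping factor, the toy Ricker-like trace and the finite-difference derivative over angular frequency below are illustrative choices, not the authors' implementation.

        import numpy as np

        def instantaneous_traveltime(trace, dt, sigma=0.0):
            # tau(omega) = -Im[ (dU/domega) / U ], the phase derivative of the damped,
            # Fourier-transformed trace (sign follows numpy's e^{-i omega t} convention);
            # sigma is the exponential damping factor exp(-sigma * t).
            nt = len(trace)
            t = np.arange(nt) * dt
            U = np.fft.rfft(trace * np.exp(-sigma * t))
            omega = 2.0 * np.pi * np.fft.rfftfreq(nt, d=dt)
            dU = np.gradient(U, omega)                   # numerical d/domega
            eps = 1e-12 * np.abs(U).max()                # guard against division by ~0
            return omega, -np.imag(dU / (U + eps))

        # toy trace: a 25 Hz Ricker-like pulse arriving at t0 = 0.4 s
        dt, nt, t0, f0 = 0.002, 1000, 0.4, 25.0
        t = np.arange(nt) * dt
        arg = (np.pi * f0 * (t - t0)) ** 2
        trace = (1.0 - 2.0 * arg) * np.exp(-arg)
        omega, tau = instantaneous_traveltime(trace, dt, sigma=2.0)
        # within the signal band, tau stays close to the arrival time t0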

  11. Limits to Nonlinear Inversion

    DEFF Research Database (Denmark)

    Mosegaard, Klaus

    2012-01-01

    For non-linear inverse problems, the mathematical structure of the mapping from model parameters to data is usually unknown or partly unknown. Absence of information about the mathematical structure of this function prevents us from presenting an analytical solution, so our solution depends on our ... meta-heuristics are inefficient for large-scale, non-linear inverse problems, and that the 'no-free-lunch' theorem holds. We discuss typical objections to the relevance of this theorem. A consequence of the no-free-lunch theorem is that algorithms adapted to the mathematical structure of the problem perform more efficiently than pure meta-heuristics. We study problem-adapted inversion algorithms that exploit the knowledge of the smoothness of the misfit function of the problem. Optimal sampling strategies exist for such problems, but many of these problems remain hard. © 2012 Springer-Verlag.

  12. Statistically Optimized Inversion Algorithm for Enhanced Retrieval of Aerosol Properties from Spectral Multi-Angle Polarimetric Satellite Observations

    Science.gov (United States)

    Dubovik, O.; Herman, M.; Holdak, A.; Lapyonok, T.; Tanre, D.; Deuze, J. L.; Ducos, F.; Sinyuk, A.

    2011-01-01

    The proposed development is an attempt to enhance aerosol retrieval by emphasizing statistical optimization in the inversion of advanced satellite observations. This optimization concept improves retrieval accuracy by relying on knowledge of the measurement error distribution. Efficient application of such optimization requires pronounced data redundancy (an excess of the number of measurements over the number of unknowns), which is not common in satellite observations. The POLDER imager on board the PARASOL microsatellite registers spectral polarimetric characteristics of the reflected atmospheric radiation at up to 16 viewing directions over each observed pixel. The completeness of such observations is notably higher than for most currently operating passive satellite aerosol sensors. This provides an opportunity for profound utilization of statistical optimization principles in satellite data inversion. The proposed retrieval scheme is designed as a statistically optimized multi-variable fitting of all available angular observations obtained by the POLDER sensor in the window spectral channels where absorption by gas is minimal. The total number of such observations by PARASOL always exceeds a hundred over each pixel, and the statistical optimization concept promises to be efficient even if the algorithm retrieves several tens of aerosol parameters. Based on this idea, the proposed algorithm uses a large number of unknowns and is aimed at retrieval of an extended set of parameters affecting the measured radiation.

  13. Inverse Analysis of Pavement Structural Properties Based on Dynamic Finite Element Modeling and Genetic Algorithm

    Directory of Open Access Journals (Sweden)

    Xiaochao Tang

    2013-03-01

    Full Text Available With the movement towards the implementation of mechanistic-empirical pavement design guide (MEPDG, an accurate determination of pavement layer moduli is vital for predicting pavement critical mechanistic responses. A backcalculation procedure is commonly used to estimate the pavement layer moduli based on the non-destructive falling weight deflectometer (FWD tests. Backcalculation of flexible pavement layer properties is an inverse problem with known input and output signals based upon which unknown parameters of the pavement system are evaluated. In this study, an inverse analysis procedure that combines the finite element analysis and a population-based optimization technique, Genetic Algorithm (GA has been developed to determine the pavement layer structural properties. A lightweight deflectometer (LWD was used to infer the moduli of instrumented three-layer scaled flexible pavement models. While the common practice in backcalculating pavement layer properties still assumes a static FWD load and uses only peak values of the load and deflections, dynamic analysis was conducted to simulate the impulse LWD load. The recorded time histories of the LWD load were used as the known inputs into the pavement system while the measured time-histories of surface central deflections and subgrade deflections measured with a linear variable differential transformers (LVDT were considered as the outputs. As a result, consistent pavement layer moduli can be obtained through this inverse analysis procedure.

  14. Top-of-atmosphere radiative fluxes - Validation of ERBE scanner inversion algorithm using Nimbus-7 ERB data

    Science.gov (United States)

    Suttles, John T.; Wielicki, Bruce A.; Vemury, Sastri

    1992-01-01

    The ERBE algorithm is applied to the Nimbus-7 earth radiation budget (ERB) scanner data for June 1979 to analyze the performance of an inversion method in deriving top-of-atmosphere albedos and longwave radiative fluxes. The performance is assessed by comparing ERBE algorithm results with appropriate results derived using the sorting-by-angular-bins (SAB) method, the ERB MATRIX algorithm, and the 'new-cloud ERB' (NCLE) algorithm. Comparisons are made for top-of-atmosphere albedos, longwave fluxes, viewing zenith-angle dependence of derived albedos and longwave fluxes, and cloud fractional coverage. Using the SAB method as a reference, the rms accuracy of monthly average ERBE-derived results are estimated to be 0.0165 (5.6 W/sq m) for albedos (shortwave fluxes) and 3.0 W/sq m for longwave fluxes. The ERBE-derived results were found to depend systematically on the viewing zenith angle, varying from near nadir to near the limb by about 10 percent for albedos and by 6-7 percent for longwave fluxes. Analyses indicated that the ERBE angular models are the most likely source of the systematic angular dependences. Comparison of the ERBE-derived cloud fractions, based on a maximum-likelihood estimation method, with results from the NCLE showed agreement within about 10 percent.

  15. Novel automated inversion algorithm for temperature reconstruction using gas isotopes from ice cores

    Directory of Open Access Journals (Sweden)

    M. Döring

    2018-06-01

    Full Text Available Greenland past temperature history can be reconstructed by forcing the output of a firn-densification and heat-diffusion model to fit multiple gas-isotope data (δ15N or δ40Ar or δ15Nexcess extracted from ancient air in Greenland ice cores using published accumulation-rate (Acc datasets. We present here a novel methodology to solve this inverse problem, by designing a fully automated algorithm. To demonstrate the performance of this novel approach, we begin by intentionally constructing synthetic temperature histories and associated δ15N datasets, mimicking real Holocene data that we use as true values (targets to be compared to the output of the algorithm. This allows us to quantify uncertainties originating from the algorithm itself. The presented approach is completely automated and therefore minimizes the subjective impact of manual parameter tuning, leading to reproducible temperature estimates. In contrast to many other ice-core-based temperature reconstruction methods, the presented approach is completely independent from ice-core stable-water isotopes, providing the opportunity to validate water-isotope-based reconstructions or reconstructions where water isotopes are used together with δ15N or δ40Ar. We solve the inverse problem T(δ15N, Acc by using a combination of a Monte Carlo based iterative approach and the analysis of remaining mismatches between modelled and target data, based on cubic-spline filtering of random numbers and the laboratory-determined temperature sensitivity for nitrogen isotopes. Additionally, the presented reconstruction approach was tested by fitting measured δ40Ar and δ15Nexcess data, which led as well to a robust agreement between modelled and measured data. The obtained final mismatches follow a symmetric standard-distribution function. For the study on synthetic data, 95 % of the mismatches compared to the synthetic target data are in an envelope between 3.0 to 6.3 permeg for δ15N and 0.23 to 0

  16. Improved algorithm for three-dimensional inverse method

    Science.gov (United States)

    Qiu, Xuwen

    An inverse method, which works for full 3D viscous applications in turbomachinery aerodynamic design, is developed. The method takes pressure loading and thickness distribution as inputs and computes the 3D-blade geometry. The core of the inverse method consists of two closely related steps, which are integrated into a time-marching procedure of a Navier-Stokes solver. First, the pressure loading condition is enforced while flow is allowed to cross the blade surfaces. A permeable blade boundary condition is developed here in order to be consistent with the propagation characteristics of the transient Navier-Stokes equations. In the second step, the blade geometry is adjusted so that the flow-tangency condition is satisfied for the new blade. A Non-Uniform Rational B-Spline (NURBS) model is used to represent the span-wise camber curves. The flow-tangency condition is then transformed into a general linear least squares fitting problem, which is solved by a robust Singular Value Decomposition (SVD) scheme. This blade geometry generation scheme allows the designer to have direct control over the smoothness of the calculated blade, and thus ensures the numerical stability during the iteration process. Numerical experiments show that this method is very accurate, efficient and robust. In target-shooting tests, the program was able to converge to the target blade accurately from a different initial blade. The speed of an inverse run is only about 15% slower than its analysis counterpart, which means a complete 3D viscous inverse design can be done in a matter of hours. The method is also proved to work well with the presence of clearance between the blade and the housing, a key factor to be considered in aerodynamic design. The method is first developed for blades without splitters, and is then extended to provide the capability of analyzing and designing machines with splitters. This gives designers an integrated environment where the aerodynamic design of both full

  17. Fast Simulation of 3-D Surface Flanging and Prediction of the Flanging Lines Based On One-Step Inverse Forming Algorithm

    International Nuclear Information System (INIS)

    Bao Yidong; Hu Sibo; Lang Zhikui; Hu Ping

    2005-01-01

    A fast simulation scheme for 3D curved binder flanging and blank shape prediction of sheet metal based on the one-step inverse finite element method is proposed, in which total plasticity theory and the proportional loading assumption are used. The scheme can actually be used to simulate 3D flanging with a complex curved binder shape, and it is suitable for simulating any type of flanging model by numerically determining the flanging height and flanging lines. Compared with other methods such as the analytic algorithm and the blank sheet-cut return method, the prominent advantage of the present scheme is that it can directly predict the location of the 3D flanging lines while simulating the flanging process. Therefore, the prediction time for flanging lines is markedly decreased. Two typical 3D curved binder flanging cases, including stretch and shrink characteristics, are simulated at the same time using the present scheme and an incremental FE non-inverse algorithm based on incremental plasticity theory, which shows the validity and high efficiency of the present scheme

  18. Refraction traveltime tomography with irregular topography using the unwrapped phase inversion

    KAUST Repository

    Choi, Yun Seok

    2013-01-01

    Traveltime tomography has long served as a stable and efficient tool for velocity estimation, especially for the near surface. It, however, suffers from some of the limitations associated with ray tracing and high-frequency traveltimes in velocity inversion zones and ray shadow regions. We develop a tomographic approach based on traveltime solutions obtained by tracking the phase (instantaneous traveltime) of the wavefield solution of the Helmholtz wave equation. Since the instantaneous traveltime does not suffer from phase wrapping, the inversion algorithm using the instantaneous traveltime has the potential to generate robust inversion results. With a high damping factor, the instantaneous-traveltime inversion provides results similar to refraction tomography, but from a single frequency. Despite the Helmholtz-based solver implementation, the tomographic inversion handles irregular topography. The numerical examples show that our inversion algorithm generates a convergent smooth velocity model, which looks very much like a tomographic result. Next, we plan to apply the instantaneous-traveltime inversion algorithm to real seismic data acquired from the near surface with irregular topography.

  19. Development of an inverse distance weighted active infrared stealth scheme using the repulsive particle swarm optimization algorithm.

    Science.gov (United States)

    Han, Kuk-Il; Kim, Do-Hwi; Choi, Jun-Hyuk; Kim, Tae-Kuk

    2018-04-20

    The need to counter detection by infrared (IR) signals is greater than for other signals such as radar or sonar, because an object detected by an IR sensor cannot easily recognize that it has been detected. Recently, research on actively reducing the IR signal has been conducted to control the IR signature by adjusting the surface temperature of the object. In this paper, we propose an active IR stealth algorithm to synchronize the IR signals from the object and the background around the object. The proposed method uses the repulsive particle swarm optimization statistical optimization algorithm to estimate the IR stealth surface temperature, which results in synchronization between the IR signals from the object and the surrounding background by setting the inverse distance weighted contrast radiant intensity (CRI) equal to zero. We tested the IR stealth performance in the mid-wavelength infrared (MWIR) and long-wavelength infrared (LWIR) bands for a test plate located at three different positions in a forest scene to verify the proposed method. Our results show that the inverse distance weighted active IR stealth technique proposed in this study is an effective method for reducing the contrast radiant intensity between the object and the background by up to 32% compared to the previous method, which used a CRI determined as the simple signal difference between the object and the background.

  20. New RADIOM algorithm using inverse EOS

    Science.gov (United States)

    Busquet, Michel; Sokolov, Igor; Klapisch, Marcel

    2012-10-01

    The RADIOM model, [1-2], allows one to implement non-LTE atomic physics with a very low extra CPU cost. Although originally heuristic, RADIOM has been physically justified [3] and some accounting for auto-ionization has been included [2]. RADIOM defines an ionization temperature Tz derived from electronic density and actual electronic temperature Te. LTE databases are then queried for properties at Tz and NLTE values are derived from them. Some hydro-codes (like FAST at NRL, Ramis' MULTI, or the CRASH code at U.Mich) use inverse EOS starting from the total internal energy Etot and returning the temperature. In the NLTE case, inverse EOS requires to solve implicit relations between Te, Tz, and Etot. We shall describe these relations and an efficient solver successively implemented in some of our codes. [4pt] [1] M. Busquet, Radiation dependent ionization model for laser-created plasmas, Ph. Fluids B 5, 4191 (1993).[0pt] [2] M. Busquet, D. Colombant, M. Klapisch, D. Fyfe, J. Gardner. Improvements to the RADIOM non-LTE model, HEDP 5, 270 (2009).[0pt] [3] M.Busquet, Onset of pseudo-thermal equilibrium within configurations and super-configurations, JQSRT 99, 131 (2006)

  1. Line-breaking algorithm enhancement in inverse typesetting paradigma

    Directory of Open Access Journals (Sweden)

    Jan Přichystal

    2007-01-01

    Full Text Available High-quality text preparation using desktop publishing systems usually relies on a line-breaking algorithm that cannot make provision for line heights, and therefore cannot typeset a paragraph accurately when a change of composition width, a page break, a line index or another object appears. This article deals with enhancing a line-breaking algorithm based on the optimum-fit algorithm. The algorithm is enhanced with a calculation of the immediate typesetting width and thus solves the problem of forced changes. This enhancement of the line-breaking algorithm expands the possibilities of high-quality typesetting to cases that have not yet been covered by present typesetting systems.

  2. Intelligent inversion method for pre-stack seismic big data based on MapReduce

    Science.gov (United States)

    Yan, Xuesong; Zhu, Zhixin; Wu, Qinghua

    2018-01-01

    Seismic exploration is a method of oil exploration that uses seismic information: by inverting the seismic data, useful information about the reservoir parameters can be obtained to carry out exploration effectively. Pre-stack data are characterised by a large data volume and abundant information, and their inversion yields rich information about the reservoir parameters. Owing to the size of pre-stack seismic datasets, existing single-machine environments cannot meet the computational needs, so an efficient and fast method for solving the pre-stack seismic inversion problem is urgently needed. Optimisation of the elastic parameters with a genetic algorithm easily falls into a local optimum, which leads to poor inversion results, especially for the density. Therefore, an intelligent optimisation algorithm is proposed in this paper and used for the elastic-parameter inversion of pre-stack seismic data. The algorithm improves the population initialisation strategy by using the Gardner formula, as well as the genetic operations of the algorithm, and the improved algorithm obtains better inversion results in a model test with logging data. All of the elastic parameters obtained by inversion fit the logging curves of the theoretical model well, which effectively improves the inversion precision of the density. The algorithm was implemented with a MapReduce model to solve the seismic big-data inversion problem. The experimental results show that the parallel model can effectively reduce the running time of the algorithm.
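
    As an illustration of the population-initialization idea mentioned above, the sketch below seeds a GA population of (Vp, Vs, density) models with densities tied to Gardner's empirical relation; the constants a = 0.31 and b = 0.25 and the velocity ranges are the commonly quoted textbook values and may differ from the authors' settings.

        import numpy as np

        def init_population(n_pop, vp_range=(2000.0, 4500.0), vpvs_range=(1.5, 2.2),
                            a=0.31, b=0.25, jitter=0.05, rng=None):
            # Seed (Vp, Vs, rho) candidates: Vp uniform, Vs from a Vp/Vs ratio, and
            # density from Gardner's relation rho = a * Vp**b (Vp in m/s, rho in g/cm^3)
            # plus a small jitter, so the density search starts in a physically
            # plausible region rather than a purely random one.
            rng = rng or np.random.default_rng()
            vp = rng.uniform(*vp_range, size=n_pop)
            vs = vp / rng.uniform(*vpvs_range, size=n_pop)
            rho = a * vp**b * (1.0 + jitter * rng.standard_normal(n_pop))
            return np.column_stack([vp, vs, rho])

        population = init_population(200)   # 200 candidate models for the GA to evolve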

  3. Presentation: 3D magnetic inversion by planting anomalous densities

    OpenAIRE

    Uieda, Leonardo; Barbosa, Valeria C. F.

    2013-01-01

    Slides for the presentation "3D magnetic inversion by planting anomalous densities" given at the 2013 AGU Meeting of the Americas in Cancun, Mexico.   Note: There was an error in the title of the talk. The correct title should be "3D magnetic inversion by planting anomalous magnetization"   Abstract: We present a new 3D magnetic inversion algorithm based on the computationally efficient method of planting anomalous densities. The algorithm consists of an iterative growth of the an...

  4. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    International Nuclear Information System (INIS)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo; Martinet, Philippe

    2008-01-01

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to let the robot pass through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity in the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which used to require ad hoc searching in other works. As a result, the end-effector tracking error, which damped least squares inherits by introducing damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots
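
    The singular-value-dependent damping described above can be sketched as follows: each singular value of the Jacobian receives its own damping factor drawn from a Gaussian function of that singular value, so damping acts only near singular configurations. The Gaussian width and maximum damping used below are illustrative; in the paper they are tuned by a genetic algorithm.

        import numpy as np

        def damped_pinv(J, lam_max=0.1, sigma0=0.05):
            # Damped least-squares inverse with a per-singular-value damping factor
            # drawn from a Gaussian of the singular value,
            #     lam_i = lam_max * exp(-(s_i / sigma0)**2),
            # so damping is strong only where s_i is small (near a singularity).
            U, s, Vt = np.linalg.svd(J, full_matrices=False)
            lam = lam_max * np.exp(-(s / sigma0) ** 2)
            return Vt.T @ np.diag(s / (s**2 + lam**2)) @ U.T

        def ik_step(J, dx):
            # One resolved-rate step: joint increment dq for a task-space error dx.
            return damped_pinv(J) @ dx

        # toy 2-link planar arm Jacobian near a stretched-out (singular) pose
        q1, q2, l1, l2 = 0.0, 1e-3, 1.0, 1.0
        J = np.array([
            [-l1 * np.sin(q1) - l2 * np.sin(q1 + q2), -l2 * np.sin(q1 + q2)],
            [ l1 * np.cos(q1) + l2 * np.cos(q1 + q2),  l2 * np.cos(q1 + q2)],
        ])
        dq = ik_step(J, np.array([0.0, 0.01]))   # stays bounded despite the near-singularity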

  5. Damped least square based genetic algorithm with Gaussian distribution of damping factor for singularity-robust inverse kinematics

    Energy Technology Data Exchange (ETDEWEB)

    Phuoc, Le Minh; Lee, Suk Han; Kim, Hun Mo [Sungkyunkwan University, Suwon (Korea, Republic of); Martinet, Philippe [Blaise Pascal University, Clermont-Ferrand Cedex (France)

    2008-07-15

    Robot inverse kinematics based on Jacobian inversion encounters critical issues at kinematic singularities. In this paper, several techniques based on damped least squares are proposed to let the robot pass through kinematic singularities without excessive joint velocities. Unlike other work in which the same damping factor is used for all singular vectors, this paper proposes a different damping coefficient for each singular vector based on the corresponding singular value of the Jacobian. Moreover, a continuous distribution of the damping factor following a Gaussian function guarantees continuity in the joint velocities. A genetic algorithm is utilized to search for the best maximum damping factor and singular region, which used to require ad hoc searching in other works. As a result, the end-effector tracking error, which damped least squares inherits by introducing damping factors, is minimized. The effectiveness of our approach is compared with other methods on both non-redundant and redundant robots

  6. Nonlinear inversion of resistivity sounding data for 1-D earth models using the Neighbourhood Algorithm

    Science.gov (United States)

    Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.

    2018-01-01

    To reduce ambiguity related to nonlinearities in the resistivity model-data relationships, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minimums, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, marginal probability density function and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and make assessing trade-offs between them possible, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods so as to provide a good base for comparative performance. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and remarkably correlate with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful whilst a good starting model is not available and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to the linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of the layered resistivity structure.

  7. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction

    International Nuclear Information System (INIS)

    Yang, C L; Wei, H Y; Soleimani, M; Adler, A

    2013-01-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current–voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. The 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this could cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed using thresholding to set very small values of the Jacobian matrix to zero. By adjusting the Jacobian matrix into a sparse format, the elements with zeros would be eliminated, which results in a saving of memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with a block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction in reconstruction results. (paper)
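
    The two ingredients described above, thresholding the Jacobian into a sparse format and solving the regularized update with conjugate gradient least squares, can be sketched as follows; the threshold, the matrix sizes and the plain (non-block) CGLS loop are illustrative simplifications of the paper's block-wise parallel scheme.

        import numpy as np
        from scipy.sparse import csr_matrix

        def sparsify(J, rel_tol=1e-3):
            # Zero out Jacobian entries below rel_tol * max|J| and store in CSR format.
            Jt = J.copy()
            Jt[np.abs(Jt) < rel_tol * np.abs(J).max()] = 0.0
            return csr_matrix(Jt)

        def cgls(A, b, n_iter=50):
            # Conjugate gradient least squares for min ||A x - b||_2; the iteration
            # count acts as the regularization, as usual for CGLS.
            x = np.zeros(A.shape[1])
            r = b - A @ x
            s = A.T @ r
            p = s.copy()
            gamma = s @ s
            for _ in range(n_iter):
                q = A @ p
                alpha = gamma / (q @ q)
                x += alpha * p
                r -= alpha * q
                s = A.T @ r
                gamma_new = s @ s
                p = s + (gamma_new / gamma) * p
                gamma = gamma_new
            return x

        # toy EIT-like update: mostly-negligible Jacobian -> sparse, then a CGLS step
        rng = np.random.default_rng(1)
        J = rng.normal(size=(400, 2000)) * (rng.random((400, 2000)) < 0.02)
        dv = rng.normal(size=400)                       # boundary voltage residuals
        dsigma = cgls(sparsify(J), dv, n_iter=30)       # conductivity update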

  8. Reducing computational costs in large scale 3D EIT by using a sparse Jacobian matrix with block-wise CGLS reconstruction.

    Science.gov (United States)

    Yang, C L; Wei, H Y; Adler, A; Soleimani, M

    2013-06-01

    Electrical impedance tomography (EIT) is a fast and cost-effective technique to provide a tomographic conductivity image of a subject from boundary current-voltage data. This paper proposes a time and memory efficient method for solving a large scale 3D EIT inverse problem using a parallel conjugate gradient (CG) algorithm. The 3D EIT system with a large number of measurement data can produce a large Jacobian matrix; this could cause difficulties in computer storage and in the inversion process. One of the challenges in 3D EIT is to decrease the reconstruction time and memory usage while retaining the image quality. Firstly, a sparse matrix reduction technique is proposed using thresholding to set very small values of the Jacobian matrix to zero. By adjusting the Jacobian matrix into a sparse format, the elements with zeros would be eliminated, which results in a saving of memory requirement. Secondly, a block-wise CG method for parallel reconstruction has been developed. The proposed method has been tested using simulated data as well as experimental test samples. A sparse Jacobian with a block-wise CG enables the large scale EIT problem to be solved efficiently. Image quality measures are presented to quantify the effect of sparse matrix reduction in reconstruction results.

  9. A New Formula for the Inverse Wavelet Transform

    OpenAIRE

    Sun, Wenchang

    2010-01-01

    Finding a computationally efficient algorithm for the inverse continuous wavelet transform is a fundamental topic in applications. In this paper, we show the convergence of the inverse wavelet transform.

  10. Inverse Kinematics of a Serial Robot

    Directory of Open Access Journals (Sweden)

    Amici Cinzia

    2016-01-01

    Full Text Available This work describes a technique for treating the inverse kinematics of a serial manipulator. The inverse kinematics is obtained through numerical inversion of the Jacobian matrix, which represents the equation of motion of the manipulator. The inversion is affected by numerical errors and, under certain conditions, due to the numerical nature of the solver, it does not converge to a reasonable solution. Thus a soft computing approach is adopted that mixes different traditional methods to improve algorithmic convergence.

  11. Numerical Inversion for the Multiple Fractional Orders in the Multiterm TFDE

    Directory of Open Access Journals (Sweden)

    Chunlong Sun

    2017-01-01

    Full Text Available The fractional order in a fractional diffusion model is a key parameter which characterizes anomalous diffusion behaviors. This paper deals with an inverse problem of numerically determining the multiple fractional orders in the multiterm time-fractional diffusion equation (TFDE for short). The homotopy regularization algorithm is applied to solve the inversion problem using finite data at one interior point in the space domain. The inverted fractional orders obtained from random noisy data give good approximations to the exact orders, demonstrating the efficiency of the inversion algorithm and the numerical stability of the inversion problem.

  12. Efficient combination of a 3D Quasi-Newton inversion algorithm and a vector dual-primal finite element tearing and interconnecting method

    International Nuclear Information System (INIS)

    Voznyuk, I; Litman, A; Tortel, H

    2015-01-01

    A Quasi-Newton method for reconstructing the constitutive parameters of three-dimensional (3D) penetrable scatterers from scattered field measurements is presented. This method is adapted for handling large-scale electromagnetic problems while keeping the memory requirement and the time flexibility as low as possible. The forward scattering problem is solved by applying the finite-element tearing and interconnecting full-dual-primal (FETI-FDP2) method, which shares the same spirit as domain decomposition methods for finite element methods. The idea is to split the computational domain into smaller non-overlapping sub-domains in order to simultaneously solve local sub-problems. Various strategies are proposed in order to efficiently couple the inversion algorithm with the FETI-FDP2 method: a separation into permanent and non-permanent subdomains is performed, iterative solvers are favored for solving the interface problem and a marching-on-in-anything initial guess selection further accelerates the process. The computational burden is also reduced by applying the adjoint state vector methodology. Finally, the inversion algorithm is confronted with measurements extracted from the 3D Fresnel database. (paper)

  13. A fast algorithm for parabolic PDE-based inverse problems based on Laplace transforms and flexible Krylov solvers

    International Nuclear Information System (INIS)

    Bakhos, Tania; Saibaba, Arvind K.; Kitanidis, Peter K.

    2015-01-01

    We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equations is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method

  14. A fast algorithm for parabolic PDE-based inverse problems based on Laplace transforms and flexible Krylov solvers

    Energy Technology Data Exchange (ETDEWEB)

    Bakhos, Tania, E-mail: taniab@stanford.edu [Institute for Computational and Mathematical Engineering, Stanford University (United States); Saibaba, Arvind K. [Department of Electrical and Computer Engineering, Tufts University (United States); Kitanidis, Peter K. [Institute for Computational and Mathematical Engineering, Stanford University (United States); Department of Civil and Environmental Engineering, Stanford University (United States)

    2015-10-15

    We consider the problem of estimating parameters in large-scale weakly nonlinear inverse problems for which the underlying governing equations is a linear, time-dependent, parabolic partial differential equation. A major challenge in solving these inverse problems using Newton-type methods is the computational cost associated with solving the forward problem and with repeated construction of the Jacobian, which represents the sensitivity of the measurements to the unknown parameters. Forming the Jacobian can be prohibitively expensive because it requires repeated solutions of the forward and adjoint time-dependent parabolic partial differential equations corresponding to multiple sources and receivers. We propose an efficient method based on a Laplace transform-based exponential time integrator combined with a flexible Krylov subspace approach to solve the resulting shifted systems of equations efficiently. Our proposed solver speeds up the computation of the forward and adjoint problems, thus yielding significant speedup in total inversion time. We consider an application from Transient Hydraulic Tomography (THT), which is an imaging technique to estimate hydraulic parameters related to the subsurface from pressure measurements obtained by a series of pumping tests. The algorithms discussed are applied to a synthetic example taken from THT to demonstrate the resulting computational gains of this proposed method.

  15. Inverse folding of RNA pseudoknot structures

    Directory of Open Access Journals (Sweden)

    Li Linda YM

    2010-06-01

    Full Text Available Abstract Background RNA exhibits a variety of structural configurations. Here we consider a structure to be tantamount to the noncrossing Watson-Crick and G-U-base pairings (secondary structure) and additional cross-serial base pairs. These interactions are called pseudoknots and are observed across the whole spectrum of RNA functionalities. In the context of studying natural RNA structures, searching for new ribozymes and designing artificial RNA, it is of interest to find RNA sequences folding into a specific structure and to analyze their induced neutral networks. Since the established inverse folding algorithms, RNAinverse, RNA-SSD as well as INFO-RNA, are limited to RNA secondary structures, we present in this paper the inverse folding algorithm Inv which can deal with 3-noncrossing, canonical pseudoknot structures. Results In this paper we present the inverse folding algorithm Inv. We give a detailed analysis of Inv, including pseudocodes. We show that Inv allows one to design, in particular, 3-noncrossing nonplanar RNA pseudoknot structures - a class which is difficult to construct via dynamic programming routines. Inv is freely available at http://www.combinatorics.cn/cbpc/inv.html. Conclusions The algorithm Inv extends inverse folding capabilities to RNA pseudoknot structures. In comparison with RNAinverse it uses new ideas, for instance by considering sets of competing structures. As a result, Inv is not only able to find novel sequences even for RNA secondary structures, it does so in the context of competing structures that potentially exhibit cross-serial interactions.

  16. Three dimensional intensity modulated brachytherapy (IMBT): Dosimetry algorithm and inverse treatment planning

    International Nuclear Information System (INIS)

    Shi Chengyu; Guo Bingqi; Cheng, Chih-Yao; Esquivel, Carlos; Eng, Tony; Papanikolaou, Niko

    2010-01-01

    Purpose: The feasibility of intensity modulated brachytherapy (IMBT) to improve dose conformity for irregularly shaped targets has been previously investigated by researchers by means of using partially shielded sources. However, partial shielding does not fully explore the potential of IMBT. The goal of this study is to introduce the concept of three dimensional (3D) intensity modulated brachytherapy and solve two fundamental issues regarding the application of 3D IMBT treatment planning: The dose calculation algorithm and the inverse treatment planning method. Methods: A 3D IMBT treatment planning system prototype was developed using the MATLAB platform. This system consists of three major components: (1) A comprehensive IMBT source calibration method with dosimetric inputs from Monte Carlo (EGSnrc) simulations; (2) a ''modified TG-43'' (mTG-43) dose calculation formalism for IMBT dosimetry; and (3) a physical constraint based inverse IMBT treatment planning platform utilizing a simulated annealing optimization algorithm. The model S700 Axxent electronic brachytherapy source developed by Xoft, Inc. (Fremont, CA), was simulated in this application. Ten intracavitary accelerated partial breast irradiation (APBI) cases were studied. For each case, an ''isotropic plan'' with only optimized source dwell time and a fully optimized IMBT plan were generated and compared to the original plan in various dosimetric aspects, such as the plan quality, planning, and delivery time. The issue of the mechanical complexity of the IMBT applicator is not addressed in this study. Results: IMBT approaches showed superior plan quality compared to the original plans and the isotropic plans to different extents in all studied cases. An extremely difficult case with a small breast and a small distance to the ribs and skin, the IMBT plan minimized the high dose volume V 200 by 16.1% and 4.8%, respectively, compared to the original and the isotropic plans. The conformity index for the

  17. Three dimensional intensity modulated brachytherapy (IMBT): Dosimetry algorithm and inverse treatment planning

    Energy Technology Data Exchange (ETDEWEB)

    Shi Chengyu; Guo Bingqi; Cheng, Chih-Yao; Esquivel, Carlos; Eng, Tony; Papanikolaou, Niko [Cancer Therapy and Research Center, University of Texas Health Science Center at San Antonio, San Antonio, Texas 78229 (United States); Department of Radiation Oncology, Oklahoma University Health Science Center, Oklahoma City, Oklahoma 73104 (United States); Cancer Therapy and Research Center, University of Texas Health Science Center at San Antonio, San Antonio, Texas 78229 (United States)

    2010-07-15

    Purpose: The feasibility of intensity modulated brachytherapy (IMBT) to improve dose conformity for irregularly shaped targets has been previously investigated by researchers by means of using partially shielded sources. However, partial shielding does not fully explore the potential of IMBT. The goal of this study is to introduce the concept of three dimensional (3D) intensity modulated brachytherapy and solve two fundamental issues regarding the application of 3D IMBT treatment planning: The dose calculation algorithm and the inverse treatment planning method. Methods: A 3D IMBT treatment planning system prototype was developed using the MATLAB platform. This system consists of three major components: (1) A comprehensive IMBT source calibration method with dosimetric inputs from Monte Carlo (EGSnrc) simulations; (2) a ''modified TG-43'' (mTG-43) dose calculation formalism for IMBT dosimetry; and (3) a physical constraint based inverse IMBT treatment planning platform utilizing a simulated annealing optimization algorithm. The model S700 Axxent electronic brachytherapy source developed by Xoft, Inc. (Fremont, CA), was simulated in this application. Ten intracavitary accelerated partial breast irradiation (APBI) cases were studied. For each case, an ''isotropic plan'' with only optimized source dwell time and a fully optimized IMBT plan were generated and compared to the original plan in various dosimetric aspects, such as the plan quality, planning, and delivery time. The issue of the mechanical complexity of the IMBT applicator is not addressed in this study. Results: IMBT approaches showed superior plan quality compared to the original plans and the isotropic plans to different extents in all studied cases. An extremely difficult case with a small breast and a small distance to the ribs and skin, the IMBT plan minimized the high dose volume V{sub 200} by 16.1% and 4.8%, respectively, compared to the original and the

  18. Three dimensional intensity modulated brachytherapy (IMBT): dosimetry algorithm and inverse treatment planning.

    Science.gov (United States)

    Shi, Chengyu; Guo, Bingqi; Cheng, Chih-Yao; Esquivel, Carlos; Eng, Tony; Papanikolaou, Niko

    2010-07-01

    The feasibility of intensity modulated brachytherapy (IMBT) to improve dose conformity for irregularly shaped targets has been previously investigated by researchers by means of using partially shielded sources. However, partial shielding does not fully explore the potential of IMBT. The goal of this study is to introduce the concept of three dimensional (3D) intensity modulated brachytherapy and solve two fundamental issues regarding the application of 3D IMBT treatment planning: The dose calculation algorithm and the inverse treatment planning method. A 3D IMBT treatment planning system prototype was developed using the MATLAB platform. This system consists of three major components: (1) A comprehensive IMBT source calibration method with dosimetric inputs from Monte Carlo (EGSnrc) simulations; (2) a "modified TG-43" (mTG-43) dose calculation formalism for IMBT dosimetry; and (3) a physical constraint based inverse IMBT treatment planning platform utilizing a simulated annealing optimization algorithm. The model S700 Axxent electronic brachytherapy source developed by Xoft, Inc. (Fremont, CA), was simulated in this application. Ten intracavitary accelerated partial breast irradiation (APBI) cases were studied. For each case, an "isotropic plan" with only optimized source dwell time and a fully optimized IMBT plan were generated and compared to the original plan in various dosimetric aspects, such as the plan quality, planning, and delivery time. The issue of the mechanical complexity of the IMBT applicator is not addressed in this study. IMBT approaches showed superior plan quality compared to the original plans and the isotropic plans to different extents in all studied cases. In an extremely difficult case with a small breast and a small distance to the ribs and skin, the IMBT plan reduced the high dose volume V200 by 16.1% and 4.8%, respectively, compared to the original and the isotropic plans. The conformity index for the target was increased by 0.13 and 0

  19. Bayesian approach to inverse statistical mechanics

    Science.gov (United States)

    Habeck, Michael

    2014-05-01

    Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.

  20. Large Airborne Full Tensor Gradient Data Inversion Based on a Non-Monotone Gradient Method

    Science.gov (United States)

    Sun, Yong; Meng, Zhaohai; Li, Fengting

    2018-03-01

    Following the development of gravity gradiometer instrument technology, full tensor gravity (FTG) data can be acquired on airborne and marine platforms. Large-scale geophysical data can be obtained using these methods, placing such data sets in the "big data" category. Therefore, a fast and effective inversion method is developed to solve the large-scale FTG data inversion problem. Many algorithms are available to accelerate FTG data inversion, such as the conjugate gradient method. However, the conventional conjugate gradient method takes a long time to complete the data processing. Thus, a fast and effective iterative algorithm is necessary to improve the utilization of FTG data. Here, the inversion is formulated by incorporating regularizing constraints, followed by the introduction of a non-monotone gradient-descent method to accelerate the convergence rate of the FTG data inversion. Compared with the conventional gradient method, the steepest-descent algorithm, and the conjugate gradient algorithm, the non-monotone iterative gradient-descent algorithm shows clear advantages. Simulated and field FTG data were used to demonstrate the value of this new fast inversion method.
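
    A common realization of a non-monotone gradient-descent scheme is a Barzilai-Borwein step accepted by a non-monotone (Grippo-style) line search; the sketch below shows that idea on a toy regularized least-squares inversion and is not necessarily the exact update used by the authors.

        import numpy as np

        def nonmonotone_gradient(f, grad, x0, n_iter=200, memory=10, delta=1e-4, alpha0=1.0):
            # Barzilai-Borwein steps accepted by a non-monotone Armijo test: a step only
            # has to improve on the worst of the last `memory` objective values, which
            # permits occasional increases and usually converges much faster than
            # strictly monotone descent.
            x, g, alpha = x0.copy(), grad(x0), alpha0
            history = [f(x)]
            for _ in range(n_iter):
                f_ref = max(history[-memory:])               # non-monotone reference value
                step = alpha
                while f(x - step * g) > f_ref - delta * step * (g @ g) and step > 1e-12:
                    step *= 0.5                              # backtracking
                x_new = x - step * g
                g_new = grad(x_new)
                s, y = x_new - x, g_new - g
                sy = s @ y
                alpha = (s @ s) / sy if sy > 1e-16 else alpha0   # BB1 step length
                x, g = x_new, g_new
                history.append(f(x))
            return x

        # toy regularized least-squares inversion: min ||G m - d||^2 + mu ||m||^2
        rng = np.random.default_rng(2)
        G = rng.normal(size=(300, 100))
        d = G @ rng.normal(size=100)
        mu = 1e-2
        f = lambda m: np.sum((G @ m - d) ** 2) + mu * (m @ m)
        grad = lambda m: 2.0 * G.T @ (G @ m - d) + 2.0 * mu * m
        m_hat = nonmonotone_gradient(f, grad, np.zeros(100))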

  1. Inverse planning anatomy-based dose optimization for HDR-brachytherapy of the prostate using fast simulated annealing algorithm and dedicated objective function

    International Nuclear Information System (INIS)

    Lessard, Etienne; Pouliot, Jean

    2001-01-01

    An anatomy-based dose optimization algorithm is developed to automatically and rapidly produce highly conformal dose coverage of the target volume while minimizing urethral, bladder, and rectal doses in the delivery of a high-dose-rate (HDR) brachytherapy boost for the treatment of prostate cancer. The dwell times are optimized using an inverse planning simulated annealing algorithm (IPSA) governed entirely by the anatomy extracted from a CT and by a dedicated objective function (cost function) reflecting the clinical prescription and constraints. With this inverse planning approach, the focus is on the physician's prescription and constraints instead of on the technical limitations. Consequently, the physician's control over the treatment is improved. The capacity of this algorithm to represent the physician's prescription is presented for a clinical prostate case. The computation time (CPU) for IPSA optimization is less than 1 min (41 s for 142 915 iterations) for a typical clinical case, allowing fast and practical dose optimization. The achievement of highly conformal dose coverage of the target volume opens the possibility of delivering a higher dose to the prostate without inducing overdosage of the urethra and the normal tissues surrounding the prostate. Moreover, using the same concept, it will be possible to deliver a boost dose to a delimited tumor volume within the prostate. Finally, this method can be easily extended to other anatomical sites
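
    The inverse-planning idea, simulated annealing over dwell times driven by an anatomy-based penalty, can be sketched as follows; the dose matrix, the penalty weights and the cooling schedule are toy placeholders, not IPSA's clinical objective function.

        import numpy as np

        def anneal_dwell_times(D, d_presc, w_under=10.0, w_over=1.0,
                               n_iter=20000, t_start=1.0, t_end=1e-3, rng=None):
            # Optimize dwell times t >= 0 so the delivered dose D @ t matches a
            # prescription: underdose at target points and overdose anywhere are
            # penalized, in the spirit of an anatomy-based objective function.
            rng = rng or np.random.default_rng()
            n_dwell = D.shape[1]
            t = np.full(n_dwell, 1.0)

            def cost(t):
                dose = D @ t
                under = np.clip(d_presc - dose, 0.0, None)
                over = np.clip(dose - d_presc, 0.0, None)
                return w_under * under.sum() + w_over * over.sum()

            c = cost(t)
            for k in range(n_iter):
                temp = t_start * (t_end / t_start) ** (k / n_iter)     # geometric cooling
                t_try = t.copy()
                i = rng.integers(n_dwell)
                t_try[i] = max(0.0, t_try[i] + rng.normal(scale=0.2))  # perturb one dwell time
                c_try = cost(t_try)
                if c_try < c or rng.random() < np.exp(-(c_try - c) / temp):
                    t, c = t_try, c_try                                # accept the move
            return t

        # toy usage: 5 dwell positions, 40 anatomical dose points
        rng = np.random.default_rng(3)
        D = rng.random((40, 5))                 # placeholder dose-rate contribution matrix
        t_opt = anneal_dwell_times(D, d_presc=np.full(40, 2.0), rng=rng)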

  2. Hydraulic tomography of discrete networks of conduits and fractures in a karstic aquifer by using a deterministic inversion algorithm

    Science.gov (United States)

    Fischer, P.; Jardani, A.; Lecoq, N.

    2018-02-01

    In this paper, we present a novel inverse modeling method called Discrete Network Deterministic Inversion (DNDI) for mapping the geometry and properties of the discrete network of conduits and fractures in karstified aquifers. The DNDI algorithm is based on a coupled discrete-continuum concept to numerically simulate water flow in a model and a deterministic optimization algorithm to invert a set of observed piezometric data recorded during multiple pumping tests. In this method, the model is partitioned into subspaces piloted by a set of parameters (matrix transmissivity, and geometry and equivalent transmissivity of the conduits) that are treated as unknowns. In this way, the deterministic optimization process can iteratively correct the geometry of the network and the values of the properties until it converges to a global network geometry in a solution model able to reproduce the data set. An uncertainty analysis of this result can be performed from the maps of posterior uncertainties on the network geometry or on the property values. The method has been successfully tested on three different theoretical and simplified study cases with hydraulic response data generated from hypothetical karstic models of increasing complexity in the network geometry and the matrix heterogeneity.

  3. (abstract) Using an Inversion Algorithm to Retrieve Parameters and Monitor Changes over Forested Areas from SAR Data

    Science.gov (United States)

    Moghaddam, Mahta

    1995-01-01

    In this work, the application of an inversion algorithm based on a nonlinear optimization technique to retrieve forest parameters from multifrequency polarimetric SAR data is discussed. The approach allows forest parameters and their changes to be retrieved and monitored in a quantitative and systematic fashion using SAR data. The parameters to be inverted directly from the data are the electromagnetic scattering properties of the forest components, such as their dielectric constants and size characteristics. Once these are known, attributes such as canopy moisture content can be obtained, which are useful in ecosystem models.

  4. A finite-difference contrast source inversion method

    International Nuclear Information System (INIS)

    Abubakar, A; Hu, W; Habashy, T M; Van den Berg, P M

    2008-01-01

    We present a contrast source inversion (CSI) algorithm using a finite-difference (FD) approach as its backbone for reconstructing the unknown material properties of inhomogeneous objects embedded in a known inhomogeneous background medium. Unlike the CSI method using the integral equation (IE) approach, the FD-CSI method can readily employ an arbitrary inhomogeneous medium as its background. The ability to use an inhomogeneous background medium has made this algorithm very suitable to be used in through-wall imaging and time-lapse inversion applications. Similar to the IE-CSI algorithm the unknown contrast sources and contrast function are updated alternately to reconstruct the unknown objects without requiring the solution of the full forward problem at each iteration step in the optimization process. The FD solver is formulated in the frequency domain and it is equipped with a perfectly matched layer (PML) absorbing boundary condition. The FD operator used in the FD-CSI method is only dependent on the background medium and the frequency of operation, thus it does not change throughout the inversion process. Therefore, at least for the two-dimensional (2D) configurations, where the size of the stiffness matrix is manageable, the FD stiffness matrix can be inverted using a non-iterative inversion matrix approach such as a Gauss elimination method for the sparse matrix. In this case, an LU decomposition needs to be done only once and can then be reused for multiple source positions and in successive iterations of the inversion. Numerical experiments show that this FD-CSI algorithm has an excellent performance for inverting inhomogeneous objects embedded in an inhomogeneous background medium

  5. Numerical investigation of the inverse blackbody radiation problem

    International Nuclear Information System (INIS)

    Xin Tan, Guo-zhen Yang, Ben-yuan Gu

    1994-01-01

    A numerical algorithm for the inverse blackbody radiation problem, which is the determination of the temperature distribution of a thermal radiator (TDTR) from its total radiated power spectrum (TRPS), is presented, based on the general theory of amplitude-phase retrieval. With application of this new algorithm, the ill-posed nature of the Fredholm equation of the first kind can be largely overcome and a convergent solution to high accuracy can be obtained. By incorporation of the hybrid input-output algorithm into our algorithm, the convergent process can be substantially expedited and the stagnation problem of the solution can be averted. From model calculations it is found that the new algorithm can also provide a robust reconstruction of the TDTR from the noise-corrupted data of the TRPS. Therefore the new algorithm may offer a useful approach to solving the ill-posed inverse problem. 18 refs., 9 figs

  6. Deformation of Copahue volcano: Inversion of InSAR data using a genetic algorithm

    Science.gov (United States)

    Velez, Maria Laura; Euillades, Pablo; Caselli, Alberto; Blanco, Mauro; Díaz, Jose Martínez

    2011-04-01

    The Copahue volcano is one of the most active volcanoes in Argentina, with eruptions having been reported as recently as 1992, 1995 and 2000. A deformation analysis using the Differential Synthetic Aperture Radar technique (DInSAR) was performed on the Copahue-Caviahue Volcanic Complex (CCVC) from Envisat radar images between 2002 and 2007. A deformation rate of approximately 2 cm/yr was calculated, located mostly on the north-eastern flank of Copahue volcano, and assumed to be constant during the period of the interferograms. The geometry of the source responsible for the deformation was evaluated from an inversion of the mean velocity deformation measurements using two different models based on pressure sources embedded in an elastic homogeneous half-space. A genetic algorithm was applied as an optimization tool to find the best-fit source. Results from inverse modelling indicate that a source located beneath the volcano edifice at a mean depth of 4 km is producing a volume change of approximately 0.0015 km³/yr. This source was analysed considering the available studies of the area, and a conceptual model of the volcanic-hydrothermal system was designed. The source of deformation is related to a depressurisation of the system that results from the release of magmatic fluids across the boundary between the brittle and plastic domains. These leakages are considered to be responsible for the weak phreatic eruptions recently registered at the Copahue volcano.

  7. A 3D inversion for all-space magnetotelluric data with static shift correction

    Science.gov (United States)

    Zhang, Kun

    2017-04-01

    Based on previous studies of static shift correction and 3D inversion algorithms, we improve the NLCG 3D inversion method and propose a new static shift correction method that works within the inversion. The static shift correction method is based on 3D theory and real data. The static shift can be detected by quantitative analysis of the apparent parameters (apparent resistivity and impedance phase) of MT in the high-frequency range, and the correction is completed during the inversion. The method is an automated computer processing technique with essentially no extra cost, and it avoids additional field work and manual processing while giving good results. The 3D inversion algorithm is improved (Zhang et al., 2013) based on the NLCG method of Newman & Alumbaugh (2000) and Rodi & Mackie (2001). We added a parallel structure, improved the computational efficiency, reduced the memory requirements, and included topographic and marine factors, so the 3D inversion can run on an ordinary PC with high efficiency and accuracy. All the MT data from surface stations, seabed stations and underground stations can be used in the inversion algorithm.

  8. Generalized Grover's Algorithm for Multiple Phase Inversion States

    Science.gov (United States)

    Byrnes, Tim; Forster, Gary; Tessler, Louis

    2018-02-01

    Grover's algorithm is a quantum search algorithm that proceeds by repeated applications of the Grover operator and the Oracle until the state evolves to one of the target states. In the standard version of the algorithm, the Grover operator inverts the sign on only one state. Here we provide an exact solution to the problem of performing Grover's search where the Grover operator inverts the sign on M states. We show the underlying structure in terms of the eigenspectrum of the generalized Hamiltonian, and derive an appropriate initial state to perform the Grover evolution. This allows us to use the quantum phase estimation algorithm to solve the search problem in this generalized case, completely bypassing the Grover algorithm altogether. We obtain a time complexity for this case of √(D/M^α), where D is the search space dimension, M is the number of target states, and α ≈ 1, which is close to the optimal scaling.
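    For small dimensions, the generalized search can be checked by direct simulation of the oracle-plus-diffusion iteration with M marked states; the sketch below does this with NumPy. The values of D and M are hypothetical, and the phase-estimation route described in the abstract is not reproduced, only the textbook Grover iteration.

```python
import numpy as np

D, M = 64, 4                       # search-space dimension and number of targets (hypothetical)
marked = np.arange(M)              # indices of the target states (hypothetical)

psi = np.full(D, 1.0 / np.sqrt(D))         # uniform superposition
oracle = np.ones(D)
oracle[marked] = -1.0                      # sign inversion on the M target states

n_iter = int(round(np.pi / 4 * np.sqrt(D / M)))
for _ in range(n_iter):
    psi = oracle * psi                     # oracle: flip the sign of the marked amplitudes
    psi = 2.0 * psi.mean() - psi           # diffusion: inversion about the mean

print("success probability after", n_iter, "iterations:", np.sum(psi[marked] ** 2))
```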

  9. Unwrapped phase inversion for near surface seismic data

    KAUST Repository

    Choi, Yun Seok; Alkhalifah, Tariq Ali

    2012-01-01

    Phase wrapping is one of the main obstacles in waveform inversion. We use an inversion algorithm based on the instantaneous traveltime that overcomes the phase-wrapping problem. With a high damping factor, the frequency-dependent instantaneous

  10. TaBoo SeArch Algorithm with a Modified Inverse Histogram for Reproducing Biologically Relevant Rare Events of Proteins.

    Science.gov (United States)

    Harada, Ryuhei; Takano, Yu; Shigeta, Yasuteru

    2016-05-10

    The TaBoo SeArch (TBSA) algorithm [Harada et al., J. Comput. Chem. 2015, 36, 763-772; Harada et al., Chem. Phys. Lett. 2015, 630, 68-75] was recently proposed as an enhanced conformational sampling method for reproducing biologically relevant rare events of a given protein. In TBSA, an inverse histogram of the original distribution, mapped onto a set of reaction coordinates, is constructed from trajectories obtained by multiple short-time molecular dynamics (MD) simulations. Rarely occurring states of a given protein are statistically selected as new initial states based on the inverse histogram, and resampling is performed by restarting the MD simulations from the new initial states to promote the conformational transition. In this process, the definition of the inverse histogram, which characterizes the rarely occurring states, is crucial for the efficiency of TBSA. In this study, we propose a simple modification of the inverse histogram to further accelerate the convergence of TBSA. As demonstrations of the modified TBSA, we applied it to (a) hydrogen bonding rearrangements of Met-enkephalin, (b) large-amplitude domain motions of Glutamine-Binding Protein, and (c) folding processes of the B domain of Staphylococcus aureus Protein A. All demonstrations numerically proved that the modified TBSA reproduced these biologically relevant rare events with nanosecond-order simulation times, although a set of microsecond-order, canonical MD simulations failed to reproduce the rare events, indicating the high efficiency of the modified TBSA.
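    The selection step at the heart of TBSA, restarting from states weighted by the reciprocal of their histogram population, can be sketched in a few lines of NumPy. The reaction-coordinate values, bin count and number of restarts below are hypothetical, and the paper's specific modification of the inverse histogram is not reproduced.

```python
import numpy as np

rng = np.random.default_rng(0)
# Snapshots mapped onto a 1-D reaction coordinate from many short MD runs (hypothetical values).
coords = rng.normal(loc=0.0, scale=1.0, size=5000)

hist, edges = np.histogram(coords, bins=50)
bin_of = np.digitize(coords, edges[1:-1])      # histogram bin index of each snapshot

# Inverse histogram: weight each snapshot by the reciprocal of its bin population,
# so rarely visited regions of the reaction coordinate are preferentially re-sampled.
weights = 1.0 / hist[bin_of]
weights /= weights.sum()

n_restarts = 10                                # number of new short MD runs (hypothetical)
restart_indices = rng.choice(coords.size, size=n_restarts, replace=False, p=weights)
print("reaction-coordinate values of the selected restart states:", coords[restart_indices])
```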

  11. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues or imperfectness of the forward model. These parameters result in ambiguities on the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negative constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positive restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negative constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)
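    The effect of a non-negativity constraint on a linear model-based reconstruction can be demonstrated with a small synthetic example. This is a sketch under stated assumptions: a random matrix stands in for the optoacoustic model matrix, and the constrained solver is SciPy's active-set NNLS rather than the conjugate-gradient-based scheme introduced in the paper.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(1)
A = rng.normal(size=(200, 100))                              # stand-in for the model matrix
x_true = np.clip(rng.normal(0.5, 0.3, size=100), 0, None)    # non-negative absorption image
p = A @ x_true + 0.01 * rng.normal(size=200)                 # noisy "pressure" measurements

# Unconstrained least squares may return negative (non-physical) absorption values.
x_ls, *_ = np.linalg.lstsq(A, p, rcond=None)

# Non-negative constrained reconstruction.
x_nn, _ = nnls(A, p)

print("negative pixels, unconstrained:", int(np.sum(x_ls < 0)))
print("negative pixels, constrained:  ", int(np.sum(x_nn < 0)))
```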

  12. 2D acoustic-elastic coupled waveform inversion in the Laplace domain

    KAUST Repository

    Bae, Hoseuk

    2010-04-01

    Although waveform inversion has been intensively studied in an effort to properly delineate the Earth's structures since the early 1980s, most of the time- and frequency-domain waveform inversion algorithms still have critical limitations in their applications to field data. This may be attributed to the highly non-linear objective function and the unreliable low-frequency components. To overcome the weaknesses of conventional waveform inversion algorithms, the acoustic Laplace-domain waveform inversion has been proposed. The Laplace-domain waveform inversion has been known to provide a long-wavelength velocity model even for field data, which may be because it employs the zero-frequency component of the damped wavefield and a well-behaved logarithmic objective function. However, its applications have been confined to 2D acoustic media. We extend the Laplace-domain waveform inversion algorithm to a 2D acoustic-elastic coupled medium, which is encountered in marine exploration environments. In 2D acoustic-elastic coupled media, the Laplace-domain pressures behave differently from those of 2D acoustic media, although the overall features are similar to each other. The main differences are that the pressure wavefields for acoustic-elastic coupled media show negative values even for simple geological structures unlike in acoustic media, when the Laplace damping constant is small and the water depth is shallow. The negative values may result from more complicated wave propagation in elastic media and at fluid-solid interfaces. Our Laplace-domain waveform inversion algorithm is also based on the finite-element method and logarithmic wavefields. To compute gradient direction, we apply the back-propagation technique. Under the assumption that density is fixed, P- and S-wave velocity models are inverted from the pressure data. We applied our inversion algorithm to the SEG/EAGE salt model and the numerical results showed that the Laplace-domain waveform inversion

  13. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    Energy Technology Data Exchange (ETDEWEB)

    Agaltsov, A. D., E-mail: agalets@gmail.com [Faculty of Computational Mathematics and Cybernetics, Lomonosov Moscow State University, 119991 Moscow (Russian Federation); Novikov, R. G., E-mail: novikov@cmap.polytechnique.fr [CNRS (UMR 7641), Centre de Mathématiques Appliquées, Ecole Polytechnique, 91128 Palaiseau (France); IEPT RAS, 117997 Moscow (Russian Federation); Moscow Institute of Physics and Technology, Dolgoprudny (Russian Federation)

    2014-10-15

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given.

  14. Riemann–Hilbert problem approach for two-dimensional flow inverse scattering

    International Nuclear Information System (INIS)

    Agaltsov, A. D.; Novikov, R. G.

    2014-01-01

    We consider inverse scattering for the time-harmonic wave equation with first-order perturbation in two dimensions. This problem arises in particular in the acoustic tomography of moving fluid. We consider linearized and nonlinearized reconstruction algorithms for this problem of inverse scattering. Our nonlinearized reconstruction algorithm is based on the non-local Riemann–Hilbert problem approach. Comparisons with preceding results are given

  15. Layered and Laterally Constrained 2D Inversion of Time Domain Induced Polarization Data

    DEFF Research Database (Denmark)

    Fiandaca, Gianluca; Ramm, James; Auken, Esben

    In a sedimentary environment, quasi-layered models often represent the actual geology more accurately than smooth minimum-structure models. We have developed a new layered and laterally constrained inversion algorithm for time domain induced polarization data. The algorithm is based on the time transform of a complex resistivity forward response, and the inversion extracts the spectral information of the time domain measurements in terms of the Cole-Cole parameters. The developed forward code and inversion algorithm use the full time decay of the induced polarization response, together with an accurate description of the transmitter waveform and of the receiver transfer function, allowing for a quantitative interpretation of the parameters. The code has been optimized for parallel computation and the inversion time is comparable to codes inverting just for direct current resistivity. The new inversion ...

  16. Resolving spectral information from time domain induced polarization data through 2-D inversion

    DEFF Research Database (Denmark)

    Fiandaca, Gianluca; Ramm, James; Binley, A.

    2013-01-01

    To overcome these limitations of conventional approaches, a new 2-D inversion algorithm has been developed using the full voltage decay of the IP response, together with an accurate description of the transmitter waveform and receiver transfer function. This allows reconstruction of the spectral information contained in the TD ... sampling necessary in the fast Hankel transform. These features, together with parallel computation, ensure inversion times comparable with those of direct current algorithms. The algorithm has been developed in a laterally constrained inversion scheme, and handles both smooth and layered inversions; the latter being helpful in sedimentary environments, where quasi-layered models often represent the actual geology more accurately than smooth minimum-structure models. In the layered inversion approach, a general method to derive the thickness derivative from the complex conductivity Jacobian is also ...

  17. Waveform inversion of lateral velocity variation from wavefield source location perturbation

    KAUST Repository

    Choi, Yun Seok

    2013-09-22

    It is a challenge in waveform inversion to define the deep part of the velocity model as precisely as the shallow part. The lateral velocity variation, that is, the derivative of velocity with respect to the horizontal distance, combined with well log data can be used to update the deep part of the velocity model more precisely. We develop a waveform inversion algorithm to obtain the lateral velocity variation by inverting the wavefield variation associated with the lateral shot location perturbation. The gradient of the new waveform inversion algorithm is obtained by the adjoint-state method. Our inversion algorithm focuses on resolving the lateral changes of the velocity model with respect to a fixed reference vertical velocity profile given by a well log. We apply the method to a simple dome model to highlight the method's potential.

  18. 3-D cross-gradient joint inversion of seismic refraction and DC resistivity data

    Science.gov (United States)

    Shi, Zhanjie; Hobbs, Richard W.; Moorkamp, Max; Tian, Gang; Jiang, Lu

    2017-06-01

    We present a 3-D cross-gradient joint inversion algorithm for seismic refraction and DC resistivity data. The structural similarity between seismic slowness and resistivity models is enforced by a cross-gradient term in the objective function that also includes misfit and regularization terms. A limited-memory quasi-Newton approach is used to perform the optimization of the objective function. To validate the proposed methodology and its implementation, tests were performed on a typical archaeological geophysical synthetic model. The results show that the inversion model and physical parameters estimated by our joint inversion method are more consistent with the true model than those from the single inversion algorithms. Moreover, our approach appears to be more robust under noisy conditions. Finally, the 3-D cross-gradient joint inversion algorithm was applied to field data from the Lin'an ancient city site in Hangzhou, China. The 3-D cross-gradient joint inversion models are consistent with the archaeological excavation results of the ancient city wall remains. In the single inversions, however, the seismic slowness model does not show the anomaly of the city wall remains and the resistivity model does not fit the archaeological excavation results well. Through these comparisons, we conclude that the proposed algorithm can be used to jointly invert 3-D seismic refraction and DC resistivity data and to reduce the uncertainty of a single inversion scheme.
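    The cross-gradient coupling term itself is straightforward to evaluate on a voxelized model pair. The sketch below computes t = grad(m1) x grad(m2) on a regular grid with NumPy, using hypothetical grid and model values; the full joint objective (data misfits, regularization and the quasi-Newton solver) is not reproduced.

```python
import numpy as np

def cross_gradient(m1, m2, spacing=1.0):
    """Cross-gradient magnitude between two 3-D models (e.g. slowness and resistivity).

    t = grad(m1) x grad(m2) vanishes wherever the two models share structure,
    i.e. wherever their gradients are parallel or one of them is zero.
    """
    g1 = np.array(np.gradient(m1, spacing))
    g2 = np.array(np.gradient(m2, spacing))
    t = np.cross(g1, g2, axis=0)               # 3-component cross product in every cell
    return np.linalg.norm(t, axis=0)

# Hypothetical models on a small regular grid.
x, y, z = np.meshgrid(*([np.linspace(0.0, 1.0, 20)] * 3), indexing="ij")
slowness = 1.0 + 0.5 * x                        # structure varies along x only
resistivity = 2.0 + 1.0 * x                     # same structure, different property
print("max cross-gradient (should be ~0):", cross_gradient(slowness, resistivity).max())
```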

  19. RNA inverse folding using Monte Carlo tree search.

    Science.gov (United States)

    Yang, Xiufeng; Yoshizoe, Kazuki; Taneda, Akito; Tsuda, Koji

    2017-11-06

    Artificially synthesized RNA molecules provide important ways of creating a variety of novel functional molecules. State-of-the-art RNA inverse folding algorithms can design simple and short RNA sequences of specific GC content that fold into the target RNA structure. However, their performance is not satisfactory in complicated cases. We present a new inverse folding algorithm called MCTS-RNA, which uses Monte Carlo tree search (MCTS), a technique that has recently shown exceptional performance in computer Go, to represent and discover the essential part of the sequence space. To obtain high accuracy, initial sequences generated by MCTS are further improved by a series of local updates. Our algorithm has the ability to control the GC content precisely and can deal with pseudoknot structures. Using common benchmark datasets for evaluation, MCTS-RNA shows considerable promise as a standard method of RNA inverse folding. MCTS-RNA is available at https://github.com/tsudalab/MCTS-RNA .

  20. Direct and inverse scattering for viscoelastic media

    International Nuclear Information System (INIS)

    Ammicht, E.; Corones, J.P.; Krueger, R.J.

    1987-01-01

    A time domain approach to direct and inverse scattering problems for one-dimensional viscoelastic media is presented. Such media can be characterized as having a constitutive relation between stress and strain which involves the past history of the strain through a memory function, the relaxation modulus. In the approach in this article, the relaxation modulus of a material is shown to be related to the reflection properties of the material. This relation provides a constructive algorithm for direct and inverse scattering problems. A numerical implementation of this algorithm is tested on several problems involving realistic relaxation moduli

  1. Source-independent elastic waveform inversion using a logarithmic wavefield

    KAUST Repository

    Choi, Yun Seok; Min, Dong Joon

    2012-01-01

    The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating

  2. Medium change based image estimation from application of inverse algorithms to coda wave measurements

    Science.gov (United States)

    Zhan, Hanyu; Jiang, Hanwan; Jiang, Ruinian

    2018-03-01

    Perturbations act as extra scatterers and cause coda waveform distortions; thus coda waves, with their long propagation times and travel paths, are sensitive to micro-defects in strongly heterogeneous media such as concrete. In this paper, we apply varied external loads to a life-size concrete slab containing multiple pre-existing micro-cracks, and several sources and receivers are installed to collect coda wave signals. The waveform decorrelation coefficients (DC) at different loads are calculated for all available source-receiver pair measurements. Inversion of the DC results is then applied to estimate the associated distribution density values in three-dimensional regions through a kernel sensitivity model and least-squares algorithms, which leads to images indicating the positions of the micro-cracks. This work provides an efficient, non-destructive approach to detect internal defects and damage in large concrete structures.
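    A decorrelation coefficient between a reference coda record and a record taken under load can be computed as one minus the zero-lag normalized cross-correlation in a chosen coda window, as in the sketch below. The traces, sampling rate and window parameters are synthetic and hypothetical; the kernel-sensitivity inversion of the DC values is not shown.

```python
import numpy as np

def decorrelation(u_ref, u_per, fs, t_center, half_win):
    """Decorrelation coefficient DC = 1 - CC in a coda window around t_center (seconds),
    where CC is the zero-lag normalized cross-correlation of the two records."""
    i0, i1 = int((t_center - half_win) * fs), int((t_center + half_win) * fs)
    a, b = u_ref[i0:i1], u_per[i0:i1]
    cc = np.dot(a, b) / np.sqrt(np.dot(a, a) * np.dot(b, b))
    return 1.0 - cc

# Hypothetical traces: the "loaded" trace is a slightly distorted copy of the reference.
fs = 1.0e5                                          # sampling rate in Hz (hypothetical)
t = np.arange(0.0, 0.02, 1.0 / fs)
ref = np.sin(2 * np.pi * 5.0e3 * t) * np.exp(-200.0 * t)
per = ref + 0.05 * np.random.default_rng(2).normal(size=t.size)
print(decorrelation(ref, per, fs, t_center=0.01, half_win=0.002))
```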

  3. Duality reconstruction algorithm for use in electrical impedance tomography

    International Nuclear Information System (INIS)

    Abdullah, M.Z.; Dickin, F.J.

    1996-01-01

    A duality reconstruction algorithm for solving the inverse problem in electrical impedance tomography (EIT) is described. In this method, an algorithm based on the Geselowitz compensation (GC) theorem is used first to reconstruct an approximate version of the image. This image is then fed as an initial guess to the modified Newton-Raphson (MNR) algorithm, which iteratively corrects the image until a final acceptable solution is reached. The implementation of the GC- and MNR-based algorithms using the finite element method will be discussed. Reconstructed images produced by the algorithm will also be presented. Consideration is also given to the most computationally intensive aspect of the algorithm, namely the inversion of the large and sparse matrices. The methods taken to approximately compute the inverse of those matrices will be outlined. (author)
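    The MNR correction stage amounts to repeatedly solving a regularized Gauss-Newton-type update for the conductivity vector, with the GC reconstruction serving as the initial guess. The sketch below is a generic formulation, not the authors' implementation: the identity regularizer and the parameter lam are assumptions, and no finite element forward solver is included.

```python
import numpy as np

def mnr_step(sigma, jacobian, v_measured, v_model, lam=1e-3):
    """One regularized Newton-Raphson-type update of the conductivity vector:
    solves (J^T J + lam I) d_sigma = J^T (v_measured - v_model), the large linear
    solve that dominates the cost of the iterative correction stage."""
    J = jacobian
    residual = v_measured - v_model
    lhs = J.T @ J + lam * np.eye(J.shape[1])
    d_sigma = np.linalg.solve(lhs, J.T @ residual)
    return sigma + d_sigma

# Toy usage with a hypothetical linear sensitivity matrix.
rng = np.random.default_rng(0)
J = rng.normal(size=(208, 64))          # boundary-voltage sensitivities (hypothetical)
sigma0 = np.ones(64)                    # e.g. the GC-based first guess
v_meas = rng.normal(size=208)
sigma1 = mnr_step(sigma0, J, v_meas, v_model=J @ sigma0)
```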

  4. Laterally constrained inversion for CSAMT data interpretation

    Science.gov (United States)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion, and for the noisy data the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and in a coal mine area in Northern China and compare our results with those from previous inversions. The comparison in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global search algorithm, simulated annealing (SA), in the watershed shows that although both methods deliver similarly good results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
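    Balancing parameter sensitivities by column-scaling the Jacobian is a small, self-contained step that can be sketched directly. The reciprocal-column-norm weights used below are one common choice and an assumption, not necessarily the weighting matrix used by the authors.

```python
import numpy as np

def precondition_jacobian(J):
    """Column-scale the Jacobian so that all model parameters have comparable
    sensitivity; the inversion then solves for the scaled parameters and maps
    the result back with the weighting matrix W."""
    col_norm = np.sqrt((J ** 2).sum(axis=0))
    col_norm[col_norm == 0.0] = 1.0
    W = np.diag(1.0 / col_norm)
    return J @ W, W

# Hypothetical Jacobian with badly unbalanced columns.
J = np.random.default_rng(0).normal(size=(120, 30)) * np.logspace(0, 3, 30)
J_bal, W = precondition_jacobian(J)
print("column norms after weighting:", np.linalg.norm(J_bal, axis=0).round(2))
```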

  5. A new algorithm for three-dimensional joint inversion of body wave and surface wave data and its application to the Southern California plate boundary region

    Science.gov (United States)

    Fang, Hongjian; Zhang, Haijiang; Yao, Huajian; Allam, Amir; Zigone, Dimitri; Ben-Zion, Yehuda; Thurber, Clifford; van der Hilst, Robert D.

    2016-05-01

    We introduce a new algorithm for the joint inversion of body wave and surface wave data to obtain better 3-D P wave (Vp) and S wave (Vs) velocity models by taking advantage of the complementary strengths of each data set. Our joint inversion algorithm uses a one-step inversion of surface wave traveltime measurements at different periods for 3-D Vs and Vp models without constructing the intermediate phase or group velocity maps. This allows a more straightforward modeling of surface wave traveltime data with the body wave arrival times. We take into consideration the sensitivity of surface wave data with respect to Vp in addition to its large sensitivity to Vs, which means both models are constrained by two different data types. The method is applied to determine 3-D crustal Vp and Vs models using body wave and Rayleigh wave data in the Southern California plate boundary region, which has previously been studied with both the double-difference tomography method using body wave arrival times and the ambient noise tomography method with Rayleigh and Love wave group velocity dispersion measurements. Our approach creates self-consistent and unique models with no prominent gaps, with Rayleigh wave data resolving shallow and large-scale features and body wave data constraining relatively deeper structures where their ray coverage is good. The velocity model from the joint inversion is consistent with local geological structures and produces better fits to observed seismic waveforms than the current Southern California Earthquake Center (SCEC) model.

  6. Monitoring carbon dioxide from space: Retrieval algorithm and flux inversion based on GOSAT data and using CarbonTracker-China

    Science.gov (United States)

    Yang, Dongxu; Zhang, Huifang; Liu, Yi; Chen, Baozhang; Cai, Zhaonan; Lü, Daren

    2017-08-01

    Monitoring atmospheric carbon dioxide (CO2) from space-borne state-of-the-art hyperspectral instruments can provide a high-precision global dataset to improve carbon flux estimation and reduce the uncertainty of climate projections. Here, we introduce a carbon flux inversion system for estimating carbon flux with satellite measurements, supported by "The Strategic Priority Research Program of the Chinese Academy of Sciences—Climate Change: Carbon Budget and Relevant Issues". The carbon flux inversion system is composed of two separate parts: the Institute of Atmospheric Physics Carbon Dioxide Retrieval Algorithm for Satellite Remote Sensing (IAPCAS), and CarbonTracker-China (CT-China), developed at the Chinese Academy of Sciences. The Greenhouse gases Observing SATellite (GOSAT) measurements are used in the carbon flux inversion experiment. To improve the quality of the IAPCAS-GOSAT retrieval, we have developed a post-screening and bias correction method, resulting in 25%-30% of the data remaining after quality control. Based on these data, the seasonal variation of XCO2 (column-averaged CO2 dry-air mole fraction) is studied, and a strong relation with vegetation cover and population is identified. Then, the IAPCAS-GOSAT XCO2 product is used in carbon flux estimation by CT-China. The net ecosystem CO2 exchange is -0.34 Pg C yr-1 (±0.08 Pg C yr-1), with a large error reduction of 84%, a significant improvement compared with the in situ-only inversion.

  7. Recursive Matrix Inverse Update On An Optical Processor

    Science.gov (United States)

    Casasent, David P.; Baranoski, Edward J.

    1988-02-01

    A high-accuracy optical linear algebraic processor (OLAP) using the digital multiplication by analog convolution (DMAC) algorithm is described for use in an efficient matrix inverse update algorithm with speed and accuracy advantages. The solution of the parameters in the algorithm is addressed, and the advantages of optical over digital linear algebraic processors are advanced.

  8. Modelling and genetic algorithm based optimisation of inverse supply chain

    Science.gov (United States)

    Bányai, T.

    2009-04-01

    (Recycling of household appliances with emphasis on reuse options). The purpose of this paper is to present a possible method for avoiding unnecessary environmental risk and land use caused by an unjustifiably large supply chain in the collection systems of recycling processes. In the first part of the paper the author presents the mathematical model of recycling-related collection systems (applied especially to wastes of electric and electronic products), and in the second part a genetic-algorithm-based optimisation method is demonstrated, with the aid of which it is possible to determine the optimal structure of the inverse supply chain from the point of view of economic, ecological and logistic objective functions. The model of the inverse supply chain is based on a multi-level, hierarchical collection system. In this static model it is assumed that technical conditions are permanent. The total costs consist of three parts: total infrastructure costs, total material handling costs and environmental risk costs. The infrastructure-related costs depend only on the specific fixed costs and the specific unit costs of the operation points (collection, pre-treatment, treatment, recycling and reuse plants). The costs of warehousing and transportation are represented by the material-handling-related costs. The most important factors determining the level of environmental risk cost are the number of products recycled (treated or reused) too late, the number of supply chain objects and the length of transportation routes. The objective function is the minimisation of the total cost taking the constraints into consideration. Although much research work has discussed the design of supply chains [8], most of it concentrates on linear cost functions. In this model non-linear cost functions were used. The non-linear cost functions and the possibly high number of objects in the inverse supply chain led to the problem of choosing a

  9. A Survey on Inverse Problems for Applied Sciences

    Directory of Open Access Journals (Sweden)

    Fatih Yaman

    2013-01-01

    Full Text Available The aim of this paper is to introduce inversion-based engineering applications and to investigate some of the important ones from a mathematical point of view. To do this we employ acoustic, electromagnetic, and elastic waves for presenting different types of inverse problems. More specifically, we first study location, shape, and boundary parameter reconstruction algorithms for inaccessible targets in acoustics. The inverse problems for the time-dependent differential equations of isotropic and anisotropic elasticity are reviewed in the following section of the paper. These problems have been the object of study by many authors over the last several decades. The physical interpretations of almost all of these problems are given, and the geophysical applications of some of them are described. In the last section, an introduction with many links into the literature is given for modern algorithms which combine techniques from classical inverse problems with stochastic tools into ensemble methods, both for data assimilation and for forecasting.

  10. A model reduction approach to numerical inversion for a parabolic partial differential equation

    International Nuclear Information System (INIS)

    Borcea, Liliana; Druskin, Vladimir; Zaslavsky, Mikhail; Mamonov, Alexander V

    2014-01-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss–Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments. (paper)

  11. A model reduction approach to numerical inversion for a parabolic partial differential equation

    Science.gov (United States)

    Borcea, Liliana; Druskin, Vladimir; Mamonov, Alexander V.; Zaslavsky, Mikhail

    2014-12-01

    We propose a novel numerical inversion algorithm for the coefficients of parabolic partial differential equations, based on model reduction. The study is motivated by the application of controlled source electromagnetic exploration, where the unknown is the subsurface electrical resistivity and the data are time resolved surface measurements of the magnetic field. The algorithm presented in this paper considers inversion in one and two dimensions. The reduced model is obtained with rational interpolation in the frequency (Laplace) domain and a rational Krylov subspace projection method. It amounts to a nonlinear mapping from the function space of the unknown resistivity to the small dimensional space of the parameters of the reduced model. We use this mapping as a nonlinear preconditioner for the Gauss-Newton iterative solution of the inverse problem. The advantage of the inversion algorithm is twofold. First, the nonlinear preconditioner resolves most of the nonlinearity of the problem. Thus the iterations are less likely to get stuck in local minima and the convergence is fast. Second, the inversion is computationally efficient because it avoids repeated accurate simulations of the time-domain response. We study the stability of the inversion algorithm for various rational Krylov subspaces, and assess its performance with numerical experiments.

  12. A recursive Formulation of the Inversion of symmetric positive definite matrices in packed storage data format

    DEFF Research Database (Denmark)

    Andersen, Bjarne Stig; Gunnels, John A.; Gustavson, Fred

    2002-01-01

    A new Recursive Packed Inverse Calculation Algorithm for symmetric positive definite matrices has been developed. The new Recursive Inverse Calculation algorithm uses minimal storage, n(n+1)/2, and has nearly the same performance as the LAPACK full-storage algorithm using n^2 memory words...
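    Packed storage itself, keeping only the n(n+1)/2 entries of one triangle, is easy to illustrate. The sketch below shows the storage layout and checks it against a full-storage inverse with NumPy; the recursive blocked algorithm of the paper is not reproduced, and the example matrix is hypothetical.

```python
import numpy as np

def pack_lower(A):
    """Store the lower triangle of a symmetric n x n matrix in a 1-D array of length n(n+1)/2."""
    return A[np.tril_indices(A.shape[0])]

def unpack_lower(ap, n):
    """Rebuild the full symmetric matrix from packed storage (for checking only)."""
    A = np.zeros((n, n))
    A[np.tril_indices(n)] = ap
    return A + np.tril(A, -1).T

# Hypothetical symmetric positive definite example.
A = np.array([[4.0, 1.0, 0.0, 0.0],
              [1.0, 4.0, 1.0, 0.0],
              [0.0, 1.0, 4.0, 1.0],
              [0.0, 0.0, 1.0, 4.0]])
ap = pack_lower(A)                           # 10 numbers instead of 16
A_inv = np.linalg.inv(unpack_lower(ap, 4))   # reference inverse; the paper's recursive
                                             # algorithm operates on the packed data itself
print(ap.size, np.allclose(unpack_lower(ap, 4) @ A_inv, np.eye(4)))
```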

  13. A review of ocean chlorophyll algorithms and primary production models

    Science.gov (United States)

    Li, Jingwen; Zhou, Song; Lv, Nan

    2015-12-01

    This paper mainly introduces five ocean chlorophyll concentration inversion algorithms and three main models for computing ocean primary production based on ocean chlorophyll concentration. Through a comparison of the five ocean chlorophyll inversion algorithms, it sums up their advantages and disadvantages, and briefly analyzes trends in ocean primary production modelling.

  14. Inverse Modeling of Soil Hydraulic Parameters Based on a Hybrid of Vector-Evaluated Genetic Algorithm and Particle Swarm Optimization

    Directory of Open Access Journals (Sweden)

    Yi-Bo Li

    2018-01-01

    Full Text Available The accurate estimation of the soil hydraulic parameters (θs, α, n, and Ks) of the van Genuchten–Mualem model has attracted considerable attention. In this study, we proposed a new two-step inversion method, which first estimates the hydraulic parameter θs using an objective function based on the final water content, and subsequently estimates the soil hydraulic parameters α, n, and Ks using a vector-evaluated genetic algorithm and particle swarm optimization (VEGA-PSO) method based on objective functions of cumulative infiltration and infiltration rate. The parameters were inversely estimated for four types of soils (sand, loam, silt, and clay) under an in silico experiment simulating tension disc infiltration at three initial water content levels. The results indicate that the method is excellent and robust. Because the objective function has multiple local minima in a tiny range near the true values, inverse estimation of the hydraulic parameters is difficult; however, the estimated soil water retention curves and hydraulic conductivity curves were nearly identical to the true curves. In addition, the proposed method was able to estimate the hydraulic parameters accurately despite substantial measurement errors in initial water content, final water content, and cumulative infiltration, proving that the method is feasible and practical for field application.
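    The particle swarm half of the VEGA-PSO optimizer can be sketched generically, as below. Everything in the sketch is illustrative: the inertia and acceleration coefficients, bounds and the toy objective (standing in for the misfit between simulated and observed infiltration) are assumptions, and the vector-evaluated genetic algorithm component is omitted.

```python
import numpy as np

def pso_minimize(f, bounds, n_particles=30, n_iter=200, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal particle swarm optimizer (global-best variant)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    x = rng.uniform(lo, hi, size=(n_particles, lo.size))
    v = np.zeros_like(x)
    pbest, pbest_val = x.copy(), np.array([f(p) for p in x])
    gbest = pbest[pbest_val.argmin()]
    for _ in range(n_iter):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)   # velocity update
        x = np.clip(x + v, lo, hi)                                  # position update within bounds
        val = np.array([f(p) for p in x])
        better = val < pbest_val
        pbest[better], pbest_val[better] = x[better], val[better]
        gbest = pbest[pbest_val.argmin()]
    return gbest, pbest_val.min()

# Toy objective standing in for the infiltration misfit as a function of (alpha, n, Ks).
target = np.array([0.02, 1.5, 1.0e-3])
objective = lambda p: float(((p - target) ** 2).sum())
best, best_val = pso_minimize(objective, bounds=[(0.001, 0.1), (1.1, 3.0), (1.0e-5, 1.0e-2)])
print(best, best_val)
```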

  15. An inverse problem approach to pattern recognition in industry

    Directory of Open Access Journals (Sweden)

    Ali Sever

    2015-01-01

    Full Text Available Many works have shown strong connections between learning and regularization techniques for ill-posed inverse problems. A careful analysis shows that a rigorous connection between learning and regularization for inverse problems is not straightforward. In this study, pattern recognition is viewed as an ill-posed inverse problem, and applications of methods from the theory of inverse problems to pattern recognition are studied. A new learning algorithm derived from a well-known regularization model is generated and applied to the task of reconstruction of an inhomogeneous object as pattern recognition. In particular, it is demonstrated that pattern recognition can be reformulated in terms of inverse problems defined by a Riesz-type kernel. This reformulation can be employed to design a learning algorithm based on a numerical solution of a system of linear equations. Finally, numerical experiments have been carried out with synthetic experimental data with a reasonable level of noise. Good recoveries have been achieved with this methodology, and the results of these simulations are compatible with existing methods. The comparison results show that the regularization-based learning algorithm (RBA) obtains promising performance on the majority of the test problems. In prospect, this method can be used for the creation of automated systems for diagnostics, testing, and control in various fields of scientific and applied research, as well as in industry.
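    Learning by solving a regularized linear system can be sketched with a kernel least-squares fit. In the sketch below a Gaussian RBF kernel is substituted for the Riesz-type kernel of the paper, and the data, bandwidth and regularization parameter are all hypothetical.

```python
import numpy as np

def gaussian_kernel(X, Y, sigma=0.5):
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def regularized_fit(X, y, lam=1e-2, sigma=0.5):
    """Learning as a regularized linear system: solve (K + lam I) c = y."""
    K = gaussian_kernel(X, X, sigma)
    return np.linalg.solve(K + lam * np.eye(len(X)), y)

def predict(X_train, c, X_new, sigma=0.5):
    return gaussian_kernel(X_new, X_train, sigma) @ c

# Hypothetical noisy training data.
rng = np.random.default_rng(3)
X = rng.uniform(-1.0, 1.0, size=(60, 1))
y = np.sin(3.0 * X[:, 0]) + 0.1 * rng.normal(size=60)
c = regularized_fit(X, y)
print(predict(X, c, np.array([[0.5]])))      # should be close to sin(1.5)
```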

  16. Workflows for Full Waveform Inversions

    Science.gov (United States)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  17. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    Science.gov (United States)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of the Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as the 3GPP physical layer for ultra mobile broadband permutation matrix design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.
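    The Kronecker-product construction of a higher-order transform from a lower-order base matrix, together with the element-wise-reciprocal inverse that characterizes Jacket matrices, can be illustrated with NumPy. The 2x2 Hadamard base used below is only a convenient special case; the basis matrices for orders 3^k, 5^k and 6^k in the paper are different.

```python
import numpy as np

# Lowest-order base matrix; the 2x2 Hadamard is a convenient Jacket-matrix example.
J2 = np.array([[1.0,  1.0],
               [1.0, -1.0]])

def order_2k(k):
    """Build the order-2^k matrix by repeated Kronecker products with the base matrix."""
    J = J2
    for _ in range(k - 1):
        J = np.kron(J, J2)
    return J

J8 = order_2k(3)
# Jacket property (for this example): the inverse is the transposed element-wise
# reciprocal divided by the order, so no explicit matrix inversion is needed.
J8_inv = (1.0 / J8).T / J8.shape[0]
print(np.allclose(J8 @ J8_inv, np.eye(8)))
```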

  18. Monte Carlo full-waveform inversion of crosshole GPR data using multiple-point geostatistical a priori information

    DEFF Research Database (Denmark)

    Cordua, Knud Skou; Hansen, Thomas Mejer; Mosegaard, Klaus

    2012-01-01

    We present a general Monte Carlo full-waveform inversion strategy that integrates a priori information described by geostatistical algorithms with Bayesian inverse problem theory. The extended Metropolis algorithm can be used to sample the a posteriori probability density of highly nonlinear inverse problems, such as full-waveform inversion. Sequential Gibbs sampling is a method that allows efficient sampling of a priori probability densities described by geostatistical algorithms based on either two-point (e.g., Gaussian) or multiple-point statistics. We outline the theoretical framework ... Based on a posteriori realizations, complicated statistical questions can be answered, such as the probability of connectivity across a layer. Complex a priori information can be included through geostatistical algorithms. These benefits, however, require more computing resources than traditional ...
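    The acceptance rule of the extended Metropolis algorithm, in which the proposal samples the prior so that only the likelihood enters the acceptance probability, can be sketched generically. The forward operator, proposal, noise level and chain length below are placeholders; the sequential Gibbs resimulation that generates prior-consistent proposals is represented only by the user-supplied propose callback.

```python
import numpy as np

def metropolis_waveform(m0, forward, d_obs, propose, sigma, n_steps=1000, seed=0):
    """Likelihood-only Metropolis sampling: when `propose` draws models from the
    a priori distribution (e.g. by sequential Gibbs resimulation), the prior
    cancels and only the data misfit enters the acceptance probability."""
    rng = np.random.default_rng(seed)
    m = np.asarray(m0, dtype=float)
    misfit = np.sum((forward(m) - d_obs) ** 2) / (2.0 * sigma ** 2)
    samples = []
    for _ in range(n_steps):
        m_new = propose(m, rng)
        misfit_new = np.sum((forward(m_new) - d_obs) ** 2) / (2.0 * sigma ** 2)
        if misfit_new <= misfit or rng.random() < np.exp(misfit - misfit_new):
            m, misfit = m_new, misfit_new
        samples.append(m.copy())
    return samples

# Toy usage: identity forward operator and a symmetric random-walk proposal,
# for which the likelihood-only acceptance rule is also valid.
forward = lambda m: m
propose = lambda m, rng: m + 0.1 * rng.normal(size=m.size)
chain = metropolis_waveform(np.zeros(3), forward, np.array([1.0, 2.0, 3.0]), propose, sigma=0.5)
print(np.mean(chain[-500:], axis=0))
```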

  19. Learning theory of distributed spectral algorithms

    International Nuclear Information System (INIS)

    Guo, Zheng-Chu; Lin, Shao-Bo; Zhou, Ding-Xuan

    2017-01-01

    Spectral algorithms have been widely used and studied in learning theory and inverse problems. This paper is concerned with distributed spectral algorithms, for handling big data, based on a divide-and-conquer approach. We present a learning theory for these distributed kernel-based learning algorithms in a regression framework including nice error bounds and optimal minimax learning rates achieved by means of a novel integral operator approach and a second order decomposition of inverse operators. Our quantitative estimates are given in terms of regularity of the regression function, effective dimension of the reproducing kernel Hilbert space, and qualification of the filter function of the spectral algorithm. They do not need any eigenfunction or noise conditions and are better than the existing results even for the classical family of spectral algorithms. (paper)

  20. The inverse spectral problem for pencils of differential operators

    International Nuclear Information System (INIS)

    Guseinov, I M; Nabiev, I M

    2007-01-01

    The inverse problem of spectral analysis for a quadratic pencil of Sturm-Liouville operators on a finite interval is considered. A uniqueness theorem is proved, a solution algorithm is presented, and sufficient conditions for the solubility of the inverse problem are obtained. Bibliography: 31 titles.

  1. Inverse problem of radiofrequency sounding of ionosphere

    Science.gov (United States)

    Velichko, E. N.; Grishentsev, A. Yu.; Korobeynikov, A. G.

    2016-01-01

    An algorithm for the solution of the inverse problem of vertical ionosphere sounding and a mathematical model of noise filtering are presented. An automated system for processing and analysis of spectrograms of vertical ionosphere sounding based on our algorithm is described. It is shown that the algorithm we suggest has a rather high efficiency. This is supported by the data obtained at the ionospheric stations of the so-called “AIS-M” type.

  2. Inverse Interval Matrix: A Survey

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Farhadsefat, R.

    2011-01-01

    Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords : interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf

  3. Cycle-Based Cluster Variational Method for Direct and Inverse Inference

    Science.gov (United States)

    Furtlehner, Cyril; Decelle, Aurélien

    2016-08-01

    Large-scale inference problems of practical interest can often be addressed with the help of Markov random fields. This requires solving, in principle, two related problems: the first is to find offline the parameters of the MRF from empirical data (inverse problem); the second (direct problem) is to set up the inference algorithm to make it as precise, robust and efficient as possible. In this work we address both the direct and inverse problem with mean-field methods of statistical physics, going beyond the Bethe approximation and the associated belief propagation algorithm. We elaborate on the idea that loop corrections to belief propagation can be dealt with in a systematic way on pairwise Markov random fields, by using the elements of a cycle basis to define regions in a generalized belief propagation setting. For the direct problem, the region graph is specified in such a way as to avoid feedback loops as much as possible by selecting a minimal cycle basis. Following this line we are led to propose a two-level algorithm, where a belief propagation algorithm is run alternately at the level of each cycle and at the inter-region level. Next we observe that the inverse problem can be addressed region by region independently, with one small inverse problem per region to be solved. It turns out that each elementary inverse problem on the loop geometry can be solved efficiently. In particular, in the random Ising context we propose two complementary methods based respectively on fixed-point equations and on a one-parameter log-likelihood function minimization. Numerical experiments confirm the effectiveness of this approach both for the direct and inverse MRF inference. Heterogeneous problems of size up to 10^5 are addressed in a reasonable computational time, notably with better convergence properties than ordinary belief propagation.
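    Choosing cycles of the pairwise graph as regions starts from a cycle basis, which standard graph libraries provide directly; the sketch below uses a small hypothetical grid graph and networkx. The paper calls for a minimal cycle basis, which would require an additional selection step not shown here.

```python
import networkx as nx

# Small pairwise MRF skeleton (hypothetical): a 3x3 grid graph.
G = nx.grid_2d_graph(3, 3)

# Each basis cycle defines one candidate region for the generalized belief
# propagation construction; networkx returns a cycle basis (not necessarily
# the minimal one that the method prefers).
for cycle in nx.cycle_basis(G):
    print(cycle)
```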

  4. Implementing a C++ Version of the Joint Seismic-Geodetic Algorithm for Finite-Fault Detection and Slip Inversion for Earthquake Early Warning

    Science.gov (United States)

    Smith, D. E.; Felizardo, C.; Minson, S. E.; Boese, M.; Langbein, J. O.; Guillemot, C.; Murray, J. R.

    2015-12-01

    The earthquake early warning (EEW) systems in California and elsewhere can greatly benefit from algorithms that generate estimates of finite-fault parameters. These estimates could significantly improve real-time shaking calculations and yield important information for immediate disaster response. Minson et al. (2015) determined that combining FinDer's seismic-based algorithm (Böse et al., 2012) with BEFORES' geodetic-based algorithm (Minson et al., 2014) yields a more robust and informative joint solution than using either algorithm alone. FinDer examines the distribution of peak ground accelerations from seismic stations and determines the best finite-fault extent and strike from template matching. BEFORES employs a Bayesian framework to search for the best slip inversion over all possible fault geometries in terms of strike and dip. Using FinDer and BEFORES together generates estimates of finite-fault extent, strike, dip, preferred slip, and magnitude. To yield the quickest, most flexible, and open-source version of the joint algorithm, we translated BEFORES and FinDer from Matlab into C++. We are now developing a C++ application programming interface (API) for these two algorithms to be connected to the seismic and geodetic data flowing from the EEW system. The interface being developed will also enable communication between the two algorithms to generate the joint solution of finite-fault parameters. Once this interface is developed and implemented, the next step will be to run test seismic and geodetic data through the system via the Earthworm module, Tank Player. This will allow us to examine algorithm performance on simulated data and past real events.

  5. 3D Multisource Full‐Waveform Inversion using Dynamic Random Phase Encoding

    KAUST Repository

    Boonyasiriwat, Chaiwoot

    2010-10-17

    We have developed a multisource full‐waveform inversion algorithm using a dynamic phase encoding strategy with dual‐randomization—both the position and polarity of simultaneous sources are randomized and changed every iteration. The dynamic dual‐randomization is used to promote the destructive interference of crosstalk noise resulting from blending a large number of common shot gathers into a supergather. We compare our multisource algorithm with various algorithms in a numerical experiment using the 3D SEG/EAGE overthrust model and show that our algorithm provides a higher‐quality velocity tomogram than the other methods that use only monorandomization. This suggests that increasing the degree of randomness in phase encoding should improve the quality of the inversion result.
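    Blending shot gathers into a supergather with a fresh random encoding at every iteration can be sketched with NumPy. The polarity flip follows the description above, while the circular time-shift below merely stands in for the random source-position change and is an assumption, as are the gather sizes.

```python
import numpy as np

def blend_supergather(shot_gathers, rng):
    """Blend individual shot gathers into one supergather: each gather gets a
    random polarity and a random circular time-shift before being summed."""
    nt, nrec = shot_gathers[0].shape
    supergather = np.zeros((nt, nrec))
    for gather in shot_gathers:
        polarity = rng.choice([-1.0, 1.0])
        shift = int(rng.integers(0, nt))
        supergather += polarity * np.roll(gather, shift, axis=0)
    return supergather

rng = np.random.default_rng(4)
gathers = [rng.normal(size=(500, 40)) for _ in range(16)]   # hypothetical shot gathers

# A new random encoding is drawn at every inversion iteration ("dynamic" encoding),
# so the crosstalk noise interferes destructively over the iterations.
supergather_iter1 = blend_supergather(gathers, rng)
supergather_iter2 = blend_supergather(gathers, rng)
```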

  6. Multi-resolution inversion algorithm for the attenuated radon transform

    KAUST Repository

    Barbano, Paolo Emilio; Fokas, Athanasios S.

    2011-01-01

    We present a FAST implementation of the Inverse Attenuated Radon Transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed

  7. Unwrapped phase inversion with an exponential damping

    KAUST Repository

    Choi, Yun Seok

    2015-07-28

    Full-waveform inversion (FWI) suffers from the phase wrapping (cycle skipping) problem when the frequency of data is not low enough. Unless we obtain a good initial velocity model, the phase wrapping problem in FWI causes a result corresponding to a local minimum, usually far away from the true solution, especially at depth. Thus, we have developed an inversion algorithm based on a space-domain unwrapped phase, and we also used exponential damping to mitigate the nonlinearity associated with the reflections. We construct the 2D phase residual map, which usually contains the wrapping discontinuities, especially if the model is complex and the frequency is high. We then unwrap the phase map and remove these cycle-based jumps. However, if the phase map has several residues, the unwrapping process becomes very complicated. We apply a strong exponential damping to the wavefield to eliminate much of the residues in the phase map, thus making the unwrapping process simple. We finally invert the unwrapped phases using the back-propagation algorithm to calculate the gradient. We progressively reduce the damping factor to obtain a high-resolution image. Numerical examples determined that the unwrapped phase inversion with a strong exponential damping generated convergent long-wavelength updates without low-frequency information. This model can be used as a good starting model for a subsequent inversion with a reduced damping, eventually leading to conventional waveform inversion.

  8. Inverse transformation algorithm of transient electromagnetic field and its high-resolution continuous imaging interpretation method

    International Nuclear Information System (INIS)

    Qi, Zhipeng; Li, Xiu; Lu, Xushan; Zhang, Yingying; Yao, Weihua

    2015-01-01

    We introduce a new and potentially useful method for wave field inverse transformation and its application in transient electromagnetic method (TEM) 3D interpretation. The diffusive EM field is known to have a unique integral representation in terms of a fictitious wave field that satisfies a wave equation. Continuous imaging of TEM data can therefore be accomplished using imaging methods from seismic interpretation after the diffusion equation is transformed into a fictitious wave equation. The interpretation method based on imaging of the fictitious wave field can be used as a fast 3D inversion method. Moreover, the fictitious wave field possesses wave-field features that allow a wave field interpretation method to be applied in TEM to improve the prospecting resolution. Wave field transformation is a key issue in the migration imaging of the fictitious wave field. The governing equation of the wave field transformation is a Fredholm integral equation of the first kind, which is a typical ill-posed equation. Additionally, TEM has a large dynamic time range, which further aggravates the ill-posedness of the problem. The wave field transformation is implemented using a preconditioned regularized conjugate gradient method, and the continuous imaging of the fictitious wave field is implemented using Kirchhoff integration. A synthetic aperture and deconvolution algorithm is also introduced to improve the interpretation resolution. We interpreted field data with the proposed method and obtained a satisfactory result. (paper)

  9. Inversion of the star transform

    International Nuclear Information System (INIS)

    Zhao, Fan; Schotland, John C; Markel, Vadim A

    2014-01-01

    We define the star transform as a generalization of the broken ray transform introduced by us in previous work. The advantages of using the star transform include the possibility to reconstruct the absorption and the scattering coefficients of the medium separately and simultaneously (from the same data) and the possibility to utilize scattered radiation which, in the case of conventional x-ray tomography, is discarded. In this paper, we derive the star transform from physical principles, discuss its mathematical properties and analyze the numerical stability of inversion. In particular, it is shown that stable inversion of the star transform can be obtained only for configurations involving an odd number of rays. Several computationally efficient inversion algorithms are derived and tested numerically. (paper)

  10. 2D acoustic-elastic coupled waveform inversion in the Laplace domain

    KAUST Repository

    Bae, Hoseuk; Shin, Changsoo; Cha, Youngho; Choi, Yun Seok; Min, Dongjoo

    2010-01-01

    Although waveform inversion has been intensively studied in an effort to properly delineate the Earth's structures since the early 1980s, most of the time- and frequency-domain waveform inversion algorithms still have critical limitations

  11. Voxel inversion of airborne EM data

    DEFF Research Database (Denmark)

    Fiandaca, Gianluca G.; Auken, Esben; Christiansen, Anders Vest C A.V.C.

    2013-01-01

    We present a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which allows for straightforward integration of different data types in joint inversion, for informing geological/hydrogeological models directly and for easier incorporation ... of prior information. Inversion of geophysical data usually refers to a model space being linked to the actual observation points. For airborne surveys the spatial discretization of the model space reflects the flight lines. Often airborne surveys are carried out in areas where other ground-based geophysical data are available. The model space of geophysical inversions is usually referred to the positions of the measurements, and ground-based model positions do not generally coincide with the airborne model positions. Consequently, a model space based on the measuring points is not well suited...

  12. REGULARIZED D-BAR METHOD FOR THE INVERSE CONDUCTIVITY PROBLEM

    DEFF Research Database (Denmark)

    Knudsen, Kim; Lassas, Matti; Mueller, Jennifer

    2009-01-01

    A strategy for regularizing the inversion procedure for the two-dimensional D-bar reconstruction algorithm based on the global uniqueness proof of Nachman [Ann. Math. 143 (1996)] for the ill-posed inverse conductivity problem is presented. The strategy utilizes truncation of the boundary integral...... the convergence of the reconstructed conductivity to the true conductivity as the noise level tends to zero. The results provide a link between two traditions of inverse problems research: theory of regularization and inversion methods based on complex geometrical optics. Also, the procedure is a novel...

  13. Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals

    Science.gov (United States)

    Loyola, D. G.

    2017-12-01

    The satellite remote sensing retrievals are usually ill-posed inverse problems that are typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. The classical inversion methods are very time-consuming as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, and subsequent inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of the SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors with their unprecedented spectral and spatial resolution and associated large increases in the amount of data.
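
    A hedged sketch of the two FP-ILM phases is given below; the toy forward_model function and the scikit-learn MLP regressor are placeholders for the real radiative-transfer model and the machine-learning method actually used, and the smart sampling step is omitted.

        import numpy as np
        from sklearn.neural_network import MLPRegressor

        rng = np.random.default_rng(0)

        def forward_model(states):
            """Placeholder for the radiative-transfer forward model: state -> radiances."""
            wl = np.linspace(0.0, 1.0, 20)
            return states[:, [0]] * np.exp(-wl * states[:, [1]])

        # training phase: synthetic radiances generated from sampled states
        train_states = rng.uniform([0.5, 1.0], [1.5, 10.0], size=(5000, 2))
        train_radiances = forward_model(train_states)
        inverse_operator = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000,
                                        random_state=0)
        inverse_operator.fit(train_radiances, train_states)    # learn radiances -> state

        # operational phase: apply the trained operator to "measurements"
        measured = forward_model(rng.uniform([0.5, 1.0], [1.5, 10.0], size=(5, 2)))
        retrieved_states = inverse_operator.predict(measured)  # fast, no iterative RT calls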

  14. 2D Inversion of Transient Electromagnetic Method (TEM)

    Science.gov (United States)

    Bortolozo, Cassiano Antonio; Luís Porsani, Jorge; Acácio Monteiro dos Santos, Fernando

    2017-04-01

    A new methodology was developed for the 2D inversion of Transient Electromagnetic Method (TEM) data. The methodology consists of a set of routines in Matlab code for the modeling and inversion of TEM data and the determination of the most efficient field array for the problem. In this research, the 2D TEM modeling uses a finite-difference discretization. To solve the inverse problem, an algorithm based on the Marquardt technique, also known as ridge regression, was applied. The algorithm is stable and efficient, and it is widely used in geoelectrical inversion problems. The main advantage of a 1D survey is the rapid data acquisition over a large area, but in regions with two-dimensional structures, or where more detail is needed, it is essential to use two-dimensional interpretation methodologies. For efficient field acquisition we used the fixed-loop array in an innovative way, with a square transmitter loop (200 m x 200 m) and 25 m spacing between the sounding points. The TEM soundings were acquired only inside the transmitter loop, in order to avoid negative apparent resistivity values. Although it is possible to model negative values, they make the inversion convergence more difficult. The methodology described above was therefore developed to optimize data acquisition, since only one transmitter loop layout on the surface is needed for each series of soundings inside the loop. The algorithms were tested with synthetic data, and the results were essential for the interpretation of the real data and will be useful in future situations. With the inversion of the real data acquired over the Paraná Sedimentary Basin (PSB), a 2D TEM inversion was successfully carried out. The results indicate a robust geoelectrical characterization of the sedimentary and crystalline aquifers in the PSB. Therefore, using a new and relevant approach for 2D TEM inversion, this research effectively contributed to map the most
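
    A minimal sketch of the Marquardt (ridge regression) update is shown below under simplifying assumptions: the placeholder forward function stands in for the 2D TEM finite-difference modelling, the Jacobian is obtained by finite differences, and the damping is reduced across a few iterations.

        import numpy as np

        def marquardt_step(forward, jacobian, model, data_obs, lam):
            """One Marquardt (ridge regression) update:
            dm = (J^T J + lambda*I)^{-1} J^T (d_obs - f(m))."""
            residual = data_obs - forward(model)
            J = jacobian(model)
            H = J.T @ J + lam * np.eye(model.size)       # damped normal matrix
            return model + np.linalg.solve(H, J.T @ residual)

        def forward(p):
            """Placeholder response standing in for the 2D TEM finite-difference
            modelling; p holds log10 layer resistivities."""
            return np.array([p[0] + 0.1 * p[0] ** 2,
                             0.5 * (p[0] + p[1]),
                             0.5 * (p[1] + p[2]),
                             p[2] + 0.1 * p[2] ** 2])

        def jacobian(p, eps=1e-6):
            """Finite-difference Jacobian of the placeholder response."""
            f0 = forward(p)
            J = np.zeros((f0.size, p.size))
            for j in range(p.size):
                dp = p.copy()
                dp[j] += eps
                J[:, j] = (forward(dp) - f0) / eps
            return J

        p_true = np.log10([50.0, 10.0, 200.0])           # three-layer model
        d_obs = forward(p_true)
        p = np.zeros(3)                                   # homogeneous starting model
        for lam in [1.0, 0.1, 0.01, 0.001, 0.0001]:       # damping reduced over iterations
            p = marquardt_step(forward, jacobian, p, d_obs, lam)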

  15. Implementation of Tuy's cone-beam inversion formula

    International Nuclear Information System (INIS)

    Zeng, G.L.; Clack, R.; Gullberg, G.T.

    1994-01-01

    Tuy's cone-beam inversion formula was modified to develop a cone-beam reconstruction algorithm. The algorithm was implemented for a cone-beam vertex orbit consisting of a circle and two orthogonal lines. This orbit geometry satisfies the cone-beam data sufficiency condition and is easy to implement on commercial single photon emission computed tomography (SPECT) systems. The algorithm, which consists of two derivative steps, one rebinning step, and one three-dimensional backprojection step, was verified by computer simulations and by reconstructing physical phantom data collected on a clinical SPECT system. The proposed algorithm gives equivalent results and is as efficient as other analytical cone-beam reconstruction algorithms. (Author)

  16. Approximation Of Multi-Valued Inverse Functions Using Clustering And Sugeno Fuzzy Inference

    Science.gov (United States)

    Walden, Maria A.; Bikdash, Marwan; Homaifar, Abdollah

    1998-01-01

    Finding the inverse of a continuous function can be challenging and computationally expensive when the inverse function is multi-valued. Difficulties may be compounded when the function itself is difficult to evaluate. We show that we can use fuzzy-logic approximators such as Sugeno inference systems to compute the inverse on-line. To do so, a fuzzy clustering algorithm can be used in conjunction with a discriminating function to split the function data into branches for the different values of the forward function. These data sets are then fed into a recursive least-squares learning algorithm that finds the proper coefficients of the Sugeno approximators; each Sugeno approximator finds one value of the inverse function. Discussions about the accuracy of the approximation will be included.

  17. Continuity of the direct and inverse problems in one-dimensional scattering theory and numerical solution of the inverse problem

    International Nuclear Information System (INIS)

    Moura, C.A. de.

    1976-09-01

    We propose an algorithm for computing the potential V(x) associated with the one-dimensional Schroedinger operator E ≡ -d²/dx² + V(x), -∞ < x < ∞, from knowledge of the S-matrix, more exactly, of one of the reflection coefficients. The convergence of the algorithm is guaranteed by the stability results obtained for both the direct and inverse problems

  18. A fast sparse reconstruction algorithm for electrical tomography

    International Nuclear Information System (INIS)

    Zhao, Jia; Xu, Yanbin; Tan, Chao; Dong, Feng

    2014-01-01

    Electrical tomography (ET) has been widely investigated due to its advantages of being non-radiative, low-cost and high-speed. However, the image reconstruction of ET is a nonlinear and ill-posed inverse problem and the imaging results are easily affected by measurement noise. A sparse reconstruction algorithm based on L1 regularization is robust to noise and consequently provides a high quality of reconstructed images. In this paper, a sparse reconstruction by separable approximation algorithm (SpaRSA) is extended to solve the ET inverse problem. The algorithm is competitive with the fastest state-of-the-art algorithms in solving the standard L2−L1 problem. However, it is computationally expensive when the dimension of the matrix is large. To further improve the calculation speed of solving inverse problems, a projection method based on the Krylov subspace is employed and combined with the SpaRSA algorithm. The proposed algorithm is tested with image reconstruction of electrical resistance tomography (ERT). Both simulation and experimental results demonstrate that the proposed method can reduce the computational time and improve the noise robustness for the image reconstruction. (paper)
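
    As a hedged illustration of the underlying L2−L1 problem, the sketch below uses plain iterative shrinkage (ISTA) rather than SpaRSA's adaptive Barzilai-Borwein steps, and a random matrix stands in for the linearized ERT sensitivity matrix; the Krylov projection step is not reproduced.

        import numpy as np

        def soft_threshold(x, tau):
            return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

        def ista(A, b, lam, n_iter=500):
            """Iterative shrinkage for min_x 0.5*||Ax - b||^2 + lam*||x||_1;
            SpaRSA solves the same L2-L1 problem with adaptive step sizes."""
            L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
            x = np.zeros(A.shape[1])
            for _ in range(n_iter):
                grad = A.T @ (A @ x - b)
                x = soft_threshold(x - grad / L, lam / L)
            return x

        # random matrix standing in for the linearized ERT sensitivity matrix
        rng = np.random.default_rng(0)
        A = rng.standard_normal((104, 400))               # 104 measurements, 400 pixels
        x_true = np.zeros(400)
        x_true[[50, 200, 333]] = [1.0, -0.5, 0.8]         # sparse conductivity change
        b = A @ x_true + 0.01 * rng.standard_normal(104)
        x_rec = ista(A, b, lam=0.1)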

  19. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Energy Technology Data Exchange (ETDEWEB)

    Razali, Azhani Mohd, E-mail: azhani@nuclearmalaysia.gov.my; Abdullah, Jaafar, E-mail: jaafar@nuclearmalaysia.gov.my [Plant Assessment Technology (PAT) Group, Industrial Technology Division, Malaysian Nuclear Agency, Bangi, 43000 Kajang (Malaysia)

    2015-04-29

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Many works have been carried out to adapt the same concept, using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  20. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    International Nuclear Information System (INIS)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-01-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Many works have been carried out to adapt the same concept, using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm

  1. Image reconstruction of single photon emission computed tomography (SPECT) on a pebble bed reactor (PBR) using expectation maximization and exact inversion algorithms: Comparison study by means of numerical phantom

    Science.gov (United States)

    Razali, Azhani Mohd; Abdullah, Jaafar

    2015-04-01

    Single Photon Emission Computed Tomography (SPECT) is a well-known imaging technique used in medical applications, and it is one of the medical imaging modalities that made the diagnosis and treatment of disease possible. However, the SPECT technique is not limited to the medical sector. Many works have been carried out to adapt the same concept, using high-energy photon emission to diagnose process malfunctions in critical industrial systems such as chemical reaction engineering research laboratories, as well as the oil and gas, petrochemical and refining industries. Motivated by the vast applications of the SPECT technique, this work attempts to study the application of SPECT to a Pebble Bed Reactor (PBR) using a numerical phantom of pebbles inside the PBR core. From the cross-sectional images obtained from SPECT, the behavior of pebbles inside the core can be analyzed for further improvement of the PBR design. As the quality of the reconstructed image is largely dependent on the algorithm used, this work aims to compare two image reconstruction algorithms for SPECT, namely the Expectation Maximization Algorithm and the Exact Inversion Formula. The results obtained from the Exact Inversion Formula showed better image contrast and sharpness, and shorter computational time compared to the Expectation Maximization Algorithm.

  2. Derivation and implementation of a cone-beam reconstruction algorithm for nonplanar orbits

    International Nuclear Information System (INIS)

    Kudo, Hiroyuki; Saito, Tsuneo

    1994-01-01

    Smith and Grangeat derived a cone-beam inversion formula that can be applied when a nonplanar orbit satisfying the completeness condition is used. Although Grangeat's inversion formula is mathematically different from Smith's, the two have similar overall structures. The contribution of this paper is two-fold. First, based on the derivation of Smith, the authors point out that Grangeat's inversion formula and Smith's can be conveniently described using a single formula (the Smith-Grangeat inversion formula) that is in the form of space-variant filtering followed by cone-beam backprojection. Furthermore, the resulting formula is reformulated for data acquisition systems with a planar detector to obtain a new reconstruction algorithm. Second, the authors make two significant modifications to the new algorithm to reduce artifacts and numerical errors encountered in its direct implementation. As for the exactness of the new algorithm, the following fact can be stated. The algorithm based on Grangeat's intermediate function is exact for any complete orbit, whereas that based on Smith's intermediate function should be considered an approximate inverse, except in the special case where almost every plane in 3-D space meets the orbit. The validity of the new algorithm is demonstrated by simulation studies

  3. Phase matching in quantum searching and the improved Grover algorithm

    International Nuclear Information System (INIS)

    Long Guilu; Li Yansong; Xiao Li; Tu Changcun; Sun Yang

    2004-01-01

    The authors briefly introduce some of their recent work related to the phase matching condition in quantum searching algorithms and the improved Grover algorithm. When one replaces the two phase inversions in the Grover algorithm with arbitrary phase rotations, the modified algorithm usually fails to find the marked state unless a phase matching condition is satisfied between the two phases. Since the Grover algorithm does not have a 100% success rate, an improved Grover algorithm with zero failure rate is given by replacing the phase inversions with angles that depend on the size of the database. Other aspects of the Grover algorithm, such as the SO(3) picture of quantum searching and the dominant gate imperfections in the Grover algorithm, are also mentioned. (author)

  4. Genetic algorithms and their use in Geophysical Problems

    Energy Technology Data Exchange (ETDEWEB)

    Parker, Paul B. [Univ. of California, Berkeley, CA (United States)

    1999-04-01

    Genetic algorithms (GAs), global optimization methods that mimic Darwinian evolution, are well suited to the nonlinear inverse problems of geophysics. A standard genetic algorithm selects the best or "fittest" models from a "population" and then applies operators such as crossover and mutation in order to combine the most successful characteristics of each model and produce fitter models. More sophisticated operators have been developed, but the standard GA usually provides a robust and efficient search. Although the choice of parameter settings such as crossover and mutation rate may depend largely on the type of problem being solved, numerous results show that certain parameter settings produce optimal performance for a wide range of problems and difficulties. In particular, a low (about half of the inverse of the population size) mutation rate is crucial for optimal results, but the choice of crossover method and rate do not seem to affect performance appreciably. Optimal efficiency is usually achieved with smaller (< 50) populations. Lastly, tournament selection appears to be the best choice of selection methods due to its simplicity and its autoscaling properties. However, if a proportional selection method is used such as roulette wheel selection, fitness scaling is a necessity, and a high scaling factor (> 2.0) should be used for the best performance. Three case studies are presented in which genetic algorithms are used to invert for crustal parameters. The first is an inversion for basement depth at Yucca mountain using gravity data, the second an inversion for velocity structure in the crust of the south island of New Zealand using receiver functions derived from teleseismic events, and the third is a similar receiver function inversion for crustal velocities beneath the Mendocino Triple Junction region of Northern California. The inversions demonstrate that genetic algorithms are effective in solving problems
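
    A minimal sketch of a standard GA with the settings recommended above (tournament selection, a small population, and a mutation rate of about half the inverse population size) is given below; the misfit function is a generic placeholder, not one of the geophysical forward problems from the case studies.

        import numpy as np

        rng = np.random.default_rng(0)

        def misfit(model):
            """Placeholder objective, e.g. data misfit of a crustal velocity model."""
            return np.sum((model - 0.7) ** 2)

        def tournament(pop, fitness, k=2):
            """Tournament selection: the fittest of k randomly drawn candidates wins."""
            idx = rng.integers(0, len(pop), size=k)
            return pop[idx[np.argmin(fitness[idx])]]

        def genetic_algorithm(n_params=10, pop_size=40, n_gen=200):
            mutation_rate = 0.5 / pop_size                 # ~half the inverse population size
            pop = rng.random((pop_size, n_params))
            for _ in range(n_gen):
                fit = np.array([misfit(m) for m in pop])
                children = []
                for _ in range(pop_size):
                    p1, p2 = tournament(pop, fit), tournament(pop, fit)
                    cut = rng.integers(1, n_params)        # single-point crossover
                    child = np.concatenate([p1[:cut], p2[cut:]])
                    mask = rng.random(n_params) < mutation_rate
                    child[mask] = rng.random(mask.sum())   # mutate selected genes
                    children.append(child)
                pop = np.array(children)
            fit = np.array([misfit(m) for m in pop])
            return pop[np.argmin(fit)]

        best_model = genetic_algorithm()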

  5. Black hole algorithm for determining model parameter in self-potential data

    Science.gov (United States)

    Sungkono; Warnana, Dwa Desa

    2018-01-01

    Analysis of self-potential (SP) data is increasingly popular in geophysics due to its relevance in many cases. However, the inversion of SP data is often highly nonlinear. Consequently, local search algorithms, commonly based on gradient approaches, have often failed to find the global optimum solution in nonlinear problems. The black hole algorithm (BHA) was proposed as a solution to such problems. As the name suggests, the algorithm was constructed based on the black hole phenomenon. This paper investigates the application of BHA to solve inversions of field and synthetic self-potential (SP) data. The inversion results show that BHA accurately determines model parameters and model uncertainty. This indicates that BHA has high potential as an innovative approach for SP data inversion.
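
    The record does not reproduce the implementation; the sketch below is a hedged rendering of the commonly described black hole algorithm (the best candidate acts as the black hole, the other stars drift toward it, and stars crossing the event horizon are re-initialized), applied to a placeholder SP misfit function.

        import numpy as np

        rng = np.random.default_rng(0)

        def sp_misfit(model):
            """Placeholder misfit between observed and modelled self-potential data."""
            return np.sum((model - np.array([1.5, 0.2, 40.0])) ** 2)

        def black_hole_algorithm(bounds, n_stars=30, n_iter=500):
            lo, hi = np.array(bounds).T
            stars = lo + rng.random((n_stars, lo.size)) * (hi - lo)
            for _ in range(n_iter):
                fit = np.array([sp_misfit(s) for s in stars])
                bh = stars[np.argmin(fit)].copy()                  # best star is the black hole
                stars += rng.random((n_stars, 1)) * (bh - stars)   # stars drift toward it
                fit = np.array([sp_misfit(s) for s in stars])
                radius = sp_misfit(bh) / np.sum(fit)               # event horizon radius
                absorbed = np.linalg.norm(stars - bh, axis=1) < radius
                n_new = absorbed.sum()
                if n_new:                                          # absorbed stars are re-born randomly
                    stars[absorbed] = lo + rng.random((n_new, lo.size)) * (hi - lo)
                stars[np.argmax(fit)] = bh                         # keep the black hole in the population
            return bh

        best_model = black_hole_algorithm(bounds=[(0.0, 5.0), (-1.0, 1.0), (1.0, 100.0)])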

  6. EDITORIAL: Inverse Problems in Engineering

    Science.gov (United States)

    West, Robert M.; Lesnic, Daniel

    2007-01-01

    Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.

  7. Analog fault diagnosis by inverse problem technique

    KAUST Repository

    Ahmed, Rania F.

    2011-12-01

    A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and thus to detect and diagnose a single fault in analog circuits. The validation of the algorithm is illustrated by applying it to a Sallen-Key second-order band-pass filter; the results show that the fault detection efficiency was 100% and that the maximum error in the estimated parameter values is 0.7%. This technique can be applied to any other linear circuit and can also be extended to non-linear circuits. © 2011 IEEE.

  8. A Single Software For Processing, Inversion, And Presentation Of Aem Data Of Different Systems

    DEFF Research Database (Denmark)

    Auken, Esben; Christiansen, Anders Vest; Viezzoli, Andrea

    2009-01-01

    modeling and Spatial Constrained inversion (SCI) for quasi 3-D inversion. The Workbench implements a user-friendly interface to these algorithms enabling non-geophysicists to carry out inversion of complicated airborne data sets without having in-depth knowledge about how the algorithm actually works. Just... to manage data and settings. The benefits of using a database compared to flat ASCII column files should not be underestimated. Firstly, user-handled input/output is nearly eliminated, thus minimizing the chance of human errors. Secondly, data are stored in a well described and documented format which...

  9. Three-Dimensional Induced Polarization Parallel Inversion Using Nonlinear Conjugate Gradients Method

    Directory of Open Access Journals (Sweden)

    Huan Ma

    2015-01-01

    Full Text Available Four kinds of induced polarization (IP) arrays (surface, borehole-surface, surface-borehole, and borehole-borehole) are widely used in resource exploration. However, because of the large number of sources, the inversion takes much time to complete. In this paper, a new parallel algorithm is described which uses the message passing interface (MPI) and graphics processing units (GPU) to accelerate the 3D inversion for these four methods. The forward finite-difference equation is solved with an ILU0 preconditioner and a conjugate gradient (CG) solver. The inverse problem is solved by nonlinear conjugate gradients (NLCG) iteration, which calculates one forward and two “pseudo-forward” modelings and updates the direction, step, and model in turn. Because each source is independent in the forward and “pseudo-forward” modelings, multiple processes are opened by calling the MPI library. The iterative matrix solver within CULA is called in each process. Some tables and synthetic data examples illustrate that this parallel inversion algorithm is effective. Furthermore, we demonstrate that the joint inversion of surface and borehole data produces resistivity and chargeability results that are superior to those obtained from inversions of individual surface data.
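
    A hedged sketch of the NLCG model update is given below with a fixed trial step and a toy quadratic misfit; the MPI/GPU forward and "pseudo-forward" modellings of the paper are abstracted into a single gradient call.

        import numpy as np

        def nlcg(gradient, m0, n_iter=200, step=1e-2):
            """Nonlinear conjugate gradients (Polak-Ribiere+) with a fixed trial step;
            a production code would add a line search. In the IP inversion the
            gradient is assembled from forward and "pseudo-forward" modellings
            distributed over MPI processes and GPUs."""
            m = m0.copy()
            g = gradient(m)
            d = -g
            for _ in range(n_iter):
                m = m + step * d                                  # model update
                g_new = gradient(m)
                beta = max(0.0, g_new @ (g_new - g) / (g @ g))    # Polak-Ribiere+ coefficient
                d = -g_new + beta * d                             # new search direction
                g = g_new
            return m

        # toy quadratic misfit standing in for the resistivity/chargeability objective
        A = np.diag([1.0, 5.0, 20.0])
        b = np.array([1.0, 2.0, 3.0])
        gradient = lambda m: A @ m - b                            # grad of 0.5 m^T A m - b^T m
        m_est = nlcg(gradient, np.zeros(3))                       # approaches A^{-1} b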

  10. Nonlinear regularization with applications in geophysics

    DEFF Research Database (Denmark)

    Berglund, Eva Ann-Charlotte

    2002-01-01

    -posed problems. We find that for a special class of discrete linear ill-posed problems, we can calculate approximations to the singular values as well as approximations to the Fourier coefficients directly from the CGLS iterations. This finding makes it possible to design a new stopping rule for the CGLS... iterations, based upon the fact that the ratio between the Fourier coefficients and the singular values decays as long as we can extract information about the solution from the right-hand side.
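
    Since this record concerns a stopping rule for CGLS, a compact CGLS iteration is sketched below so that the quantities involved (iterates and residuals) are concrete; the new stopping rule itself is not reproduced, and the test matrix is only a toy ill-conditioned example.

        import numpy as np

        def cgls(A, b, n_iter):
            """Conjugate gradient least squares: minimizes ||Ax - b||_2 by implicitly
            running CG on the normal equations A^T A x = A^T b without forming A^T A.
            For ill-posed problems the iteration count acts as the regularization
            parameter, which is why the stopping rule matters."""
            x = np.zeros(A.shape[1])
            r = b.copy()              # residual b - A x
            s = A.T @ r               # normal-equation residual
            p = s.copy()
            norm_s_old = s @ s
            for _ in range(n_iter):
                q = A @ p
                alpha = norm_s_old / (q @ q)
                x += alpha * p
                r -= alpha * q
                s = A.T @ r
                norm_s_new = s @ s
                p = s + (norm_s_new / norm_s_old) * p
                norm_s_old = norm_s_new
            return x

        # mildly ill-posed toy problem: early iterations capture the smooth part of x
        rng = np.random.default_rng(0)
        A = rng.standard_normal((80, 60)) @ np.diag(1.0 / np.arange(1, 61) ** 2)
        x_true = rng.standard_normal(60)
        b = A @ x_true + 1e-4 * rng.standard_normal(80)
        x10 = cgls(A, b, n_iter=10)   # stopping early regularizes the solution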

  11. A conditioning technique for matrix inversion for Wilson fermions

    International Nuclear Information System (INIS)

    DeGrand, T.A.

    1988-01-01

    I report a simple technique for conditioning conjugate gradient or conjugate residue matrix inversion as applied to the lattice gauge theory problem of computing the propagator of Wilson fermions. One form of the technique provides about a factor of three speedup over an unconditioned algorithm, while each iteration runs at essentially the same speed as in the unconditioned algorithm. I illustrate the method as it is applied to a conjugate residue algorithm. (orig.)

  12. Improved Inverse Kinematics Algorithm Using Screw Theory for a Six-DOF Robot Manipulator

    OpenAIRE

    Chen, Qingcheng; Zhu, Shiqiang; Zhang, Xuequn

    2015-01-01

    Based on screw theory, a novel improved inverse-kinematics approach for a type of six-DOF serial robot, “Qianjiang I”, is proposed in this paper. The common kinematics model of the robot is based on the Denavit-Hartenberg (D-H) notation method, while its inverse kinematics involves inefficient calculation and a complicated solution, which cannot meet the demands of online real-time applications. To solve this problem, this paper presents a new method to improve the efficiency of the inverse kinematics...

  13. Digital Hardware Realization of Forward and Inverse Kinematics for a Five-Axis Articulated Robot Arm

    Directory of Open Access Journals (Sweden)

    Bui Thi Hai Linh

    2015-01-01

    Full Text Available When a robot arm performs motion control, it needs to calculate a complicated forward and inverse kinematics algorithm, which consumes much CPU time and certainly slows down the motion speed of the robot arm. Therefore, to solve this issue, the development of a hardware realization of forward and inverse kinematics for an articulated robot arm is investigated. In this paper, the formulation of the forward and inverse kinematics for a five-axis articulated robot arm is derived first. Then, the computational algorithm and its hardware implementation are described. Further, very high speed integrated circuit hardware description language (VHDL) is applied to describe the overall hardware behavior of the forward and inverse kinematics. Additionally, a finite state machine (FSM) is applied to reduce the hardware resource usage. Finally, to verify the correctness of the forward and inverse kinematics for the five-axis articulated robot arm, a co-simulation is constructed with ModelSim and Simulink. The hardware of the forward and inverse kinematics is run by ModelSim, and a test bench which generates stimuli to ModelSim and displays the output response is implemented in Simulink. Under this design, the forward and inverse kinematics algorithms can be completed within one microsecond.

  14. Joint Bayesian Stochastic Inversion of Well Logs and Seismic Data for Volumetric Uncertainty Analysis

    Directory of Open Access Journals (Sweden)

    Moslem Moradi

    2015-06-01

    Full Text Available Herein, an application of a new seismic inversion algorithm in one of Iran’s oilfields is described. Stochastic (geostatistical) seismic inversion, as a complementary method to deterministic inversion, is perceived as a combination of geostatistics and a seismic inversion algorithm. This method integrates information from different data sources with different scales, as prior information in Bayesian statistics. Data integration leads to a probability density function (the a posteriori probability) that can yield a model of the subsurface. The Markov Chain Monte Carlo (MCMC) method is used to sample the posterior probability distribution, and the subsurface model characteristics can be extracted by analyzing a set of the samples. In this study, the theory of stochastic seismic inversion in a Bayesian framework was described and applied to infer P-impedance and porosity models. The comparison between the stochastic seismic inversion and the deterministic model-based seismic inversion indicates that the stochastic seismic inversion can provide more detailed information on the subsurface character. Since multiple realizations are extracted by this method, the pore volume and the uncertainty in its estimation were analyzed.

  15. Adaptive inversion algorithm for 1.5 μm visibility lidar incorporating in situ Angstrom wavelength exponent

    Science.gov (United States)

    Shang, Xiang; Xia, Haiyun; Dou, Xiankang; Shangguan, Mingjia; Li, Manyi; Wang, Chong

    2018-07-01

    An eye-safe 1.5 μm visibility lidar is presented in this work considering the in situ particle size distribution, which can be deployed in crowded places like airports. In such a case, the measured extinction coefficient at 1.5 μm should be converted to that at 0.55 μm for visibility retrieval. Although several models have been established since 1962, accurate wavelength conversion remains a challenge. An adaptive inversion algorithm for the 1.5 μm visibility lidar is proposed and demonstrated by using the in situ Angstrom wavelength exponent, which is derived from an aerosol spectrometer. The impact of the particle size distribution of atmospheric aerosols and the Rayleigh backscattering of atmospheric molecules are taken into account. Using the 1.5 μm visibility lidar, the visibility with a temporal resolution of 5 min is detected over 48 h in Hefei (31.83° N, 117.25° E). The average visibility error between the new method and a visibility sensor (Vaisala, PWD52) is 5.2% with an R-square value of 0.96, while the relative error between another reference visibility lidar at 532 nm and the visibility sensor is 6.7% with an R-square value of 0.91. All results agree with each other well, demonstrating the accuracy and stability of the algorithm.
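
    A hedged sketch of the wavelength conversion at the heart of the algorithm is given below: the 1.5 μm extinction coefficient is scaled to 0.55 μm with the in situ Angstrom exponent, and visibility then follows from the Koschmieder relation; the 5% contrast threshold and the function name are assumptions of this sketch, not taken from the paper.

        import numpy as np

        def extinction_to_visibility(ext_1500nm, angstrom_exponent,
                                     contrast_threshold=0.05):
            """Convert a 1.5 um extinction coefficient [1/km] to visibility [km].
            The spectral scaling assumes the Angstrom power law
                ext(lambda) ~ lambda**(-angstrom_exponent),
            with the exponent taken from the in situ aerosol spectrometer.
            Visibility uses the Koschmieder relation; the 5% contrast threshold
            (factor ln(20) ~ 3.0) is an assumption of this sketch."""
            ext_550nm = ext_1500nm * (1.5 / 0.55) ** angstrom_exponent
            return -np.log(contrast_threshold) / ext_550nm

        # example: 0.1 /km measured at 1.5 um, in situ Angstrom exponent of 1.2
        visibility_km = extinction_to_visibility(0.1, 1.2)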

  16. An inverse method for radiation transport

    Energy Technology Data Exchange (ETDEWEB)

    Favorite, J. A. (Jeffrey A.); Sanchez, R. (Richard)

    2004-01-01

    Adjoint functions have been used with forward functions to compute gradients in implicit (iterative) solution methods for inverse problems in optical tomography, geoscience, thermal science, and other fields, but only once has this approach been used for inverse solutions to the Boltzmann transport equation. In this paper, this approach is used to develop an inverse method that requires only angle-independent flux measurements, rather than angle-dependent measurements as was done previously. The method is applied to a simplified form of the transport equation that does not include scattering. The resulting procedure uses measured values of gamma-ray fluxes of discrete, characteristic energies to determine interface locations in a multilayer shield. The method was implemented with a Newton-Raphson optimization algorithm, and it worked very well in numerical one-dimensional spherical test cases. A more sophisticated optimization method would better exploit the potential of the inverse method.
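
    A one-dimensional hedged sketch of the idea follows: for an uncollided gamma line, the measured flux through a two-layer shield is a monotone function of the interface position, so Newton-Raphson can recover it; the attenuation coefficients, geometry, and function names are illustrative assumptions, not the paper's transport formulation.

        import numpy as np

        def predicted_flux(r_interface, r_outer, mu1, mu2, source=1.0):
            """Uncollided flux after attenuation through material 1 up to
            r_interface and material 2 from there to r_outer."""
            path1 = r_interface
            path2 = r_outer - r_interface
            return source * np.exp(-mu1 * path1 - mu2 * path2)

        def find_interface(flux_measured, r_outer, mu1, mu2, r0=1.0, n_iter=20):
            """Newton-Raphson on f(r) = predicted_flux(r) - flux_measured."""
            r = r0
            for _ in range(n_iter):
                f = predicted_flux(r, r_outer, mu1, mu2) - flux_measured
                dfdr = predicted_flux(r, r_outer, mu1, mu2) * (mu2 - mu1)   # analytic derivative
                r -= f / dfdr
            return r

        # synthetic test: true interface at 3.2 cm inside a 10 cm two-layer shield
        mu_dense, mu_light = 0.7, 0.1         # illustrative attenuation coefficients [1/cm]
        flux_obs = predicted_flux(3.2, 10.0, mu_dense, mu_light)
        r_est = find_interface(flux_obs, 10.0, mu_dense, mu_light)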

  17. Magnetotelluric inversion for depth-to-basement estimation

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Zhdanov, Michael

    2015-01-01

    The magnetotelluric (MT) method can be effectively applied for depth-to-basement estimation, because there exists a strong contrast in resistivity between a conductive sedimentary basin and a resistive crystalline basement. Conventional inversions of MT data are usually aimed at determining...... the volumetric distribution of the conductivity within the inversion domain. By the nature of the MT method, the recovered distribution of the subsurface conductivity is typically diffusive, which makes it difficult to select the sediment-basement interface. This paper develops a novel approach to 3D MT...... inversion for the depth-to-basement estimate. The key to this approach is selection of the model parameterization with the depth to basement being the major unknown parameter. In order to estimate the depth to the basement, the inversion algorithm recovers both the thickness and the conductivities...

  18. Inverse source problems in elastodynamics

    Science.gov (United States)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.

  19. Reconstructing the Hopfield network as an inverse Ising problem

    International Nuclear Information System (INIS)

    Huang Haiping

    2010-01-01

    We test four fast mean-field-type algorithms on Hopfield networks as an inverse Ising problem. The equilibrium behavior of Hopfield networks is simulated through Glauber dynamics. In the low-temperature regime, the simulated annealing technique is adopted. Although the performance of these network reconstruction algorithms on simulated networks of spiking neurons has been studied extensively in recent years, a corresponding analysis of Hopfield networks has been lacking so far. For the Hopfield network, we find that in the retrieval phase, favored when the network recalls one of the stored patterns, all the reconstruction algorithms fail to extract the interactions within the desired accuracy; the same failure occurs in the spin-glass phase, where spurious minima show up, while in the paramagnetic phase, albeit unfavored during the retrieval dynamics, the algorithms work well to reconstruct the network itself. This implies that, as an inverse problem, the paramagnetic phase is conversely useful for reconstructing the network, while the retrieval phase loses all the information about the interactions in the network except for the case where only one pattern is stored. The performance of the algorithms is studied with respect to the system size, memory load, and temperature; sample-to-sample fluctuations are also considered.
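
    As a hedged illustration of a mean-field-type reconstruction (the naive mean-field rule, not necessarily one of the four algorithms tested), couplings can be read off the inverse of the connected correlation matrix estimated from spin samples; the Glauber-dynamics sampling is replaced here by random placeholder data.

        import numpy as np

        def naive_mean_field_couplings(spins):
            """Reconstruct Ising couplings from +/-1 spin samples.
            spins: (n_samples, n_spins). In the naive mean-field approximation
            J_ij ~= -(C^{-1})_ij for i != j, with C the connected correlation matrix."""
            C = np.cov(spins, rowvar=False)               # connected correlations
            J = -np.linalg.inv(C)
            np.fill_diagonal(J, 0.0)                      # no self-couplings
            return J

        # random placeholder samples standing in for Glauber-dynamics output
        rng = np.random.default_rng(0)
        samples = np.where(rng.random((5000, 20)) < 0.5, -1, 1)
        J_est = naive_mean_field_couplings(samples)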

  20. Comparison of the inversion algorithms applied to the ozone vertical profile retrieval from SCIAMACHY limb measurements

    Directory of Open Access Journals (Sweden)

    A. Rozanov

    2007-09-01

    Full Text Available This paper is devoted to an intercomparison of ozone vertical profiles retrieved from the measurements of scattered solar radiation performed by the SCIAMACHY instrument in the limb viewing geometry. Three different inversion algorithms including the prototype of the operational Level 1 to 2 processor to be operated by the European Space Agency are considered. Unlike usual validation studies, this comparison removes the uncertainties arising when comparing measurements made by different instruments probing slightly different air masses and focuses on the uncertainties specific to the modeling-retrieval problem only. The intercomparison was performed for 5 selected orbits of SCIAMACHY showing a good overall agreement of the results in the middle stratosphere, whereas considerable discrepancies were identified in the lower stratosphere and upper troposphere altitude region. Additionally, comparisons with ground-based lidar measurements are shown for selected profiles demonstrating an overall correctness of the retrievals.

  1. Collision-free inverse kinematics of a 7 link cucumber picking robot

    NARCIS (Netherlands)

    Henten, van E.J.; Schenk, E.J.J.; Willigenburg, van L.G.; Meuleman, J.; Barreiro, P.

    2008-01-01

    The paper presents results of research on inverse kinematics algorithms to be used in a functional model of a cucumber harvesting robot consisting of a redundant manipulator with one prismatic and six rotational joints (P6R). Within a first generic approach, the inverse kinematics problem was

  2. Inverse correction of Fourier transforms for one-dimensional strongly ...

    African Journals Online (AJOL)

    Hsin Ying-Fei

    2016-05-01

    As it is widely used in periodic lattice design theory and is particularly useful in aperiodic lattice design [12,13], the accuracy of the FT algorithm under strong scattering conditions is the focus of this paper. We propose an inverse correction approach for the inaccurate FT algorithm in strongly scattering ...

  3. Inverse planning and class solutions for brachytherapy treatment planning

    International Nuclear Information System (INIS)

    Trnkova, P.

    2010-01-01

    Brachytherapy, or interventional radiooncology, is a method of radiation therapy in which a small encapsulated radioactive source is placed near to or in the tumour and therefore delivers high doses directly to the target volume. Organs at risk (OARs) are spared due to the inverse-square dose fall-off. In past years there was a slight stagnation in the development of techniques for brachytherapy treatment: while external beam radiotherapy became more and more sophisticated, traditional methods were still used in brachytherapy. Recently, 3D imaging has also been adopted as a modality for brachytherapy, allowing more precise brachytherapy to spread. Nowadays, image-guided brachytherapy is state of the art in many centres. The integration of imaging methods leads to dose distributions individually tailored for each patient. Treatment plan optimization is mostly performed manually as an adaptation of a standard loading pattern. Recently, inverse planning approaches have been introduced into brachytherapy. The aim of this doctoral thesis was to analyze inverse planning and to develop concepts for integrating inverse planning into cervical cancer brachytherapy. The first part of the thesis analyzes the Hybrid Inverse treatment Planning and Optimization (HIPO) algorithm and proposes a workflow for working safely with this algorithm. The general problem of inverse planning is that only dose and volume parameters are taken into account while the spatial dose distribution is neglected. This can lead to unwanted high-dose regions in normal tissue. A unique implementation of HIPO into the treatment planning system, using additional features, made it possible to create treatment plans similar to those resulting from manual optimization and to shape the high-dose regions inside the CTV. In the second part the HIPO algorithm is compared to the Inverse Planning Simulated Annealing (IPSA) algorithm. IPSA is implemented in the commercial treatment planning system. It

  4. Parallel inverse halftoning by look-up table (LUT) partitioning

    International Nuclear Information System (INIS)

    Siddiqui, Umair F.; Sait, Sadiq M.

    2008-01-01

    The Look-Up Table (LUT) method for inverse halftoning is not only computation-free and fast but also yields good results. The method employs a single LUT that is stored in a ROM and contains pre-computed contone (gray level) values for the inverse halftone operation. This paper proposes an algorithm that can perform parallel inverse halftone operations by partitioning the single LUT into N smaller look-up tables (s-LUTs). Therefore, up to k (k ≤ N) pixels can be fetched concurrently from the halftone image and their contone values fetched concurrently from separate s-LUTs. Obviously, this parallelization increases the speed of inverse halftoning by up to k times. In the proposed method, the total number of entries in all s-LUTs remains equal to the number of entries in the single LUT of the serial LUT method. Some degradation in image quality is also possible due to pixel loss during fetching, because a different contone value may then be fetched from the corresponding s-LUT. The complete implementation of the algorithm requires two CPLDs (Complex Programmable Logic Devices) for the computational portion, external content addressable memories (CAM) and static RAMs to store the s-LUTs. (author)

  5. The feasibility of retrieving vertical temperature profiles from satellite nadir UV observations: A sensitivity analysis and an inversion experiment with neural network algorithms

    International Nuclear Information System (INIS)

    Sellitto, P.; Del Frate, F.

    2014-01-01

    Atmospheric temperature profiles are inferred from passive satellite instruments, using thermal infrared or microwave observations. Here we investigate the feasibility of retrieving height-resolved temperature information in the ultraviolet spectral region. The temperature dependence of the absorption cross sections of ozone in the Huggins band, in particular in the interval 320–325 nm, is exploited. We carried out a sensitivity analysis and demonstrated that non-negligible information on the temperature profile can be extracted from this small band. Starting from these results, we developed a neural network inversion algorithm, trained and tested with simulated nadir EnviSat-SCIAMACHY ultraviolet observations. The algorithm is able to retrieve the temperature profile with root mean square errors and biases comparable to existing retrieval schemes that use thermal infrared or microwave observations. This demonstrates, for the first time, the feasibility of temperature profile retrieval from space-borne instruments operating in the ultraviolet. - Highlights: • A sensitivity analysis and an inversion scheme to retrieve temperature profiles from satellite UV observations (320–325 nm). • The exploitation of the temperature dependence of the absorption cross section of ozone in the Huggins band is proposed. • First demonstration of the feasibility of temperature profile retrieval from satellite UV observations. • RMSEs and biases comparable with more established techniques involving TIR and MW observations

  6. [Orthogonal Vector Projection Algorithm for Spectral Unmixing].

    Science.gov (United States)

    Song, Mei-ping; Xu, Xing-wei; Chang, Chein-I; An, Ju-bai; Yao, Li

    2015-12-01

    Spectral unmixing is an important part of hyperspectral technology and is essential for material quantity analysis in hyperspectral imagery. Most linear unmixing algorithms require matrix multiplication together with matrix inversion or determinant computation. These are difficult to program and especially hard to realize in hardware. At the same time, the computational cost of the algorithms increases significantly as the number of endmembers grows. Here, based on the traditional Orthogonal Subspace Projection algorithm, a new method called Orthogonal Vector Projection is proposed using the orthogonality principle. It simplifies the process by avoiding matrix multiplication and inversion. It first computes the final orthogonal vector for each endmember spectrum via the Gram-Schmidt process. These orthogonal vectors are then used as projection vectors for the pixel signature. The unconstrained abundance can be obtained directly by projecting the signature onto the projection vectors and computing the ratio of the projected vector length to the orthogonal vector length. Compared to the Orthogonal Subspace Projection and Least Squares Error algorithms, this method does not need matrix inversion, which is computationally costly and hard to implement in hardware. It completes the orthogonalization process with repeated vector operations, which are easy to apply in both parallel computation and hardware. The soundness of the algorithm is proved by its relationship with the Orthogonal Subspace Projection and Least Squares Error algorithms. Its computational complexity, the lowest of the three, is also compared with that of the other two algorithms. Finally, experimental results on a synthetic image and a real image are provided, giving further evidence of the effectiveness of the method.
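
    A hedged sketch of the projection step, as read from the abstract rather than taken from the authors' code, is given below: for each endmember a vector orthogonal to the span of the remaining endmembers is built, and the unconstrained abundance is the ratio of the pixel's and the endmember's projections onto it.

        import numpy as np

        def orthogonal_vector(endmembers, k):
            """Component of endmember k orthogonal to the span of all other endmembers."""
            others = np.delete(endmembers, k, axis=0).T          # columns span the other endmembers
            Q, _ = np.linalg.qr(others)                          # orthonormal basis of that span
            e_k = endmembers[k].astype(float)
            return e_k - Q @ (Q.T @ e_k)                         # remove the in-span component

        def ovp_abundances(pixel, endmembers):
            """Unconstrained abundances: a_k = (pixel . v_k) / (endmember_k . v_k);
            since v_k is orthogonal to the remaining endmembers, their contributions
            cancel in the numerator."""
            abund = np.zeros(endmembers.shape[0])
            for k in range(endmembers.shape[0]):
                v = orthogonal_vector(endmembers, k)
                abund[k] = (pixel @ v) / (endmembers[k] @ v)
            return abund

        # toy example: 3 endmembers in a 10-band space, pixel is a known mixture
        rng = np.random.default_rng(0)
        E = rng.random((3, 10))
        true_abund = np.array([0.5, 0.3, 0.2])
        pixel = true_abund @ E
        print(ovp_abundances(pixel, E))   # recovers ~[0.5, 0.3, 0.2] in the noise-free case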

  7. A LAI inversion algorithm based on the unified model of canopy bidirectional reflectance distribution function for the Heihe River Basin

    Science.gov (United States)

    Ma, B.; Li, J.; Fan, W.; Ren, H.; Xu, X.

    2017-12-01

    Leaf area index (LAI) is one of the important parameters of vegetation canopy structure and can effectively represent the growth condition of vegetation. Obtaining LAI by remote sensing greatly improves the accuracy, availability and timeliness of LAI data, which is of great importance to vegetation-related research, such as the study of atmospheric, land-surface and hydrological processes. The Heihe River Basin is an inland river basin in northwest China. There are various types of vegetation and all kinds of terrain conditions in the basin, so studying LAI in this area is helpful for testing the accuracy of the model over a complex surface and for evaluating the correctness of the model. On the other hand, located in the arid west of China, the ecological environment of the Heihe Basin is fragile; LAI is an important parameter representing the vegetation growth condition and can help us understand the status of vegetation in the Heihe River Basin. Unlike previous LAI inversion models, the unified BRDF (bidirectional reflectance distribution function) model can be applied to both continuous and discrete vegetation, so it is appropriate for complex vegetation distributions. LAI is the key input parameter of the model. We establish an inversion algorithm that can accurately retrieve LAI from remote sensing images based on the unified model. First, we determine the vegetation type from the vegetation classification map to obtain the corresponding G function and the leaf and surface reflectivities. Then, we set the range and sampling interval of the leaf area index (LAI), the aggregation index (ζ) and the sky scattered-light ratio (β), enter all the parameters into the model to calculate the corresponding reflectance ρ, and establish a lookup table for each vegetation type. Finally, we invert LAI on the basis of the established lookup table; the inversion principle is the least-squares method. We have produced 1 km

  8. Numerical approach to the inverse convection-diffusion problem

    International Nuclear Information System (INIS)

    Yang, X-H; She, D-X; Li, J-Q

    2008-01-01

    In this paper, the inverse problem of source-term identification in the convection-diffusion equation is transformed into an optimization problem. To reduce the computational cost and improve the computational accuracy of the optimization problem, a new algorithm, the chaos real-coded hybrid-accelerating evolution algorithm (CRHAEA), is proposed, in which the initial population is generated by a chaos mapping, and new chaos mutation and simplex evolution operations are used. As the search range shrinks, CRHAEA gradually converges to an optimal result using the excellent individuals obtained by the real-coded evolution algorithm. Its convergence is analyzed. Its efficiency is demonstrated on 15 test functions. Numerical simulation shows that CRHAEA has some advantages over the real-coded accelerated evolution algorithm, the chaos algorithm and the pure random search algorithm
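
    A hedged sketch of the chaos-mapping initialization mentioned above is given below, using the logistic map to spread an initial real-coded population over the search bounds; the remaining CRHAEA operators (chaos mutation, simplex evolution) are not reproduced.

        import numpy as np

        def logistic_chaos_population(pop_size, n_params, lower, upper, x0=0.7, mu=4.0):
            """Initial population from the logistic map x <- mu*x*(1-x), mu=4,
            which generates a deterministic but well-spread sequence in (0, 1)."""
            lower, upper = np.asarray(lower, float), np.asarray(upper, float)
            pop = np.empty((pop_size, n_params))
            x = x0
            for i in range(pop_size):
                for j in range(n_params):
                    x = mu * x * (1.0 - x)                       # logistic map iteration
                    pop[i, j] = lower[j] + x * (upper[j] - lower[j])
            return pop

        # example: 30 candidate source terms, each with two parameters
        initial_population = logistic_chaos_population(30, 2, lower=[0.0, 0.0],
                                                       upper=[10.0, 5.0])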

  9. Comparison of the genetic algorithm and incremental optimisation routines for a Bayesian inverse modelling based network design

    Science.gov (United States)

    Nickless, A.; Rayner, P. J.; Erni, B.; Scholes, R. J.

    2018-05-01

    The design of an optimal network of atmospheric monitoring stations for the observation of carbon dioxide (CO2) concentrations can be obtained by applying an optimisation algorithm to a cost function based on minimising posterior uncertainty in the CO2 fluxes obtained from a Bayesian inverse modelling solution. Two candidate optimisation methods assessed were the evolutionary algorithm: the genetic algorithm (GA), and the deterministic algorithm: the incremental optimisation (IO) routine. This paper assessed the ability of the IO routine in comparison to the more computationally demanding GA routine to optimise the placement of a five-member network of CO2 monitoring sites located in South Africa. The comparison considered the reduction in uncertainty of the overall flux estimate, the spatial similarity of solutions, and computational requirements. Although the IO routine failed to find the solution with the global maximum uncertainty reduction, the resulting solution had only fractionally lower uncertainty reduction compared with the GA, and at only a quarter of the computational resources used by the lowest specified GA algorithm. The GA solution set showed more inconsistency if the number of iterations or population size was small, and more so for a complex prior flux covariance matrix. If the GA completed with a sub-optimal solution, these solutions were similar in fitness to the best available solution. Two additional scenarios were considered, with the objective of creating circumstances where the GA may outperform the IO. The first scenario considered an established network, where the optimisation was required to add an additional five stations to an existing five-member network. In the second scenario the optimisation was based only on the uncertainty reduction within a subregion of the domain. The GA was able to find a better solution than the IO under both scenarios, but with only a marginal improvement in the uncertainty reduction. These results suggest

  10. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    Science.gov (United States)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in the survey. To avoid this problem, Romero (2000) proposed the phase encoding technique for prestack migration, and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data while changing the source assembling. Although several studies on simultaneous-source inversion tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate and the diagonal entries of the approximate Hessian matrix. As with the gradients, the crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed as the iterations proceed. Our 2-D frequency-domain elastic waveform inversion algorithm is composed using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with the conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated with the simultaneous-source technique. Comparing the inverted results using the pseudo-Hessian matrix with previous inversion results

  11. A Motion Estimation Algorithm Using DTCWT and ARPS

    Directory of Open Access Journals (Sweden)

    Unan Y. Oktiawati

    2013-09-01

    Full Text Available In this paper, a hybrid motion estimation algorithm utilizing the Dual Tree Complex Wavelet Transform (DTCWT) and the Adaptive Rood Pattern Search (ARPS) block search is presented. The proposed algorithm first transforms each video sequence with the DTCWT. Frame n of the video sequence is used as the reference input and frame n+2 is used to find the motion vector. Next, the ARPS block search algorithm is carried out, followed by an inverse DTCWT. The motion compensation is then carried out on each inverse-transformed frame n and motion vector. The results show that the PSNR can be improved for mobile devices without degrading quality. The proposed algorithm also requires less memory than the DCT-based algorithm. The main contribution of this work is a hybrid wavelet-based motion estimation algorithm for mobile devices. Another contribution is the visual quality scoring system used in Section 6.

  12. Germinal Center Optimization Applied to Neural Inverse Optimal Control for an All-Terrain Tracked Robot

    Directory of Open Access Journals (Sweden)

    Carlos Villaseñor

    2017-12-01

    Full Text Available Nowadays, there are several meta-heuristic algorithms which offer solutions for multi-variate optimization problems. These algorithms use a population of candidate solutions which explore the search space, where leadership plays a big role in the exploration-exploitation equilibrium. In this work, we propose to use a Germinal Center Optimization (GCO) algorithm, which implements temporal leadership through modeling a non-uniform competitive-based distribution for particle selection. GCO is used to find an optimal set of parameters for a neural inverse optimal control applied to an all-terrain tracked robot. In the Neural Inverse Optimal Control (NIOC) scheme, a neural identifier based on a Recurrent High Order Neural Network (RHONN), trained with an extended Kalman filter algorithm, is used to obtain a model of the system; then, a control law is designed using this model with the inverse optimal control approach. The RHONN identifier is developed without knowledge of the plant model or its parameters; the inverse optimal control, on the other hand, is designed for tracking velocity references. The applicability of the proposed scheme is illustrated using simulation results as well as real-time experimental results with an all-terrain tracked robot.

  13. Numerical Inversion for the Multiple Fractional Orders in the Multiterm TFDE

    OpenAIRE

    Sun, Chunlong; Li, Gongsheng; Jia, Xianzheng

    2017-01-01

    The fractional order in a fractional diffusion model is a key parameter which characterizes the anomalous diffusion behaviors. This paper deals with an inverse problem of determining the multiple fractional orders in the multiterm time-fractional diffusion equation (TFDE for short) from numerics. The homotopy regularization algorithm is applied to solve the inversion problem using the finite data at one interior point in the space domain. The inversion fractional orders with random noisy data...

  14. DenInv3D: a geophysical software for three-dimensional density inversion of gravity field data

    Science.gov (United States)

    Tian, Yu; Ke, Xiaoping; Wang, Yong

    2018-04-01

    This paper presents a three-dimensional density inversion software package called DenInv3D that operates on gravity and gravity gradient data. The software performs inversion modelling, kernel function calculation, and inversion calculations using an improved preconditioned conjugate gradient (PCG) algorithm. Because empirical parameters such as the Lagrange multiplier are uncertain in the PCG algorithm, we use the inflection point of the L-curve as the regularisation parameter. The software can construct unequally spaced grids and perform inversions on them, which makes it possible to change the resolution of the inversion results at different depths. Through inversion of airborne gradiometry data from the Australian Kauring test site, we discovered that anomalous blocks of different sizes are present within the study area in addition to the central anomalies. The DenInv3D software can be downloaded from http://159.226.162.30.
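
    The regularised conjugate-gradient machinery referred to above can be sketched, in much simplified form, as a CGLS solve of a Tikhonov-augmented linear system. This is not DenInv3D's implementation; the kernel, regularisation term and parameter values below are illustrative assumptions only.

```python
import numpy as np

def cgls(G, d, lam=1e-2, n_iter=50):
    """Minimal CGLS for min ||G m - d||^2 + lam ||m||^2 (zeroth-order Tikhonov).
    Both the iteration count and lam act as regularization; lam would be chosen
    from an L-curve in a DenInv3D-style workflow."""
    m = np.zeros(G.shape[1])
    r = d - G @ m                      # data-space residual
    s = G.T @ r - lam * m              # negative gradient of the regularized misfit
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = G @ p
        alpha = gamma / (q @ q + lam * (p @ p))
        m += alpha * p
        r -= alpha * q
        s = G.T @ r - lam * m
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

# Tiny synthetic example: smoothing kernel plus noisy data.
rng = np.random.default_rng(1)
x = np.linspace(0, 1, 80)
G = np.exp(-(x[:, None] - x[None, :])**2 / 0.01)   # smoothing kernel (illustrative)
m_true = np.sin(3 * np.pi * x)
d = G @ m_true + 0.01 * rng.standard_normal(x.size)
m_est = cgls(G, d, lam=1e-3, n_iter=100)
print("relative error:", np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true))
```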

  15. Inverse scattering problems with multi-frequencies

    International Nuclear Information System (INIS)

    Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi

    2015-01-01

    This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods. (topical review)
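
    The low-to-high frequency continuation can be caricatured with a linear toy problem in which the data at frequency omega only constrain spatial-frequency modes below a cutoff; the sketch below is an assumption-laden cartoon of that idea, not the recursive linearization algorithm itself, and each band is warm-started from the previous reconstruction.

```python
import numpy as np

rng = np.random.default_rng(2)
N = 256
x = np.linspace(0, 1, N, endpoint=False)
q_true = np.exp(-((x - 0.35) / 0.05)**2) + 0.5 * np.exp(-((x - 0.7) / 0.02)**2)
Q_true = np.fft.rfft(q_true)

def observable_band(omega):
    """Highest spatial-frequency mode visible at frequency omega
    (a cartoon of the diffraction limit k <= 2*omega)."""
    return min(int(2 * omega), N // 2)

Q_rec = np.zeros(N // 2 + 1, dtype=complex)
k_done = 0
for omega in [4, 8, 16, 32, 64]:                 # low -> high frequency sweep
    k_max = observable_band(omega)
    # "Measurement" at this frequency: noisy low-pass Fourier data of q_true.
    data = Q_true[:k_max + 1] + 0.5 * (rng.standard_normal(k_max + 1)
                                       + 1j * rng.standard_normal(k_max + 1))
    # Continuation step: keep the already-reconstructed band, add the new one.
    Q_rec[k_done:k_max + 1] = data[k_done:k_max + 1]
    k_done = k_max + 1
    q_rec = np.fft.irfft(Q_rec, n=N)
    err = np.linalg.norm(q_rec - q_true) / np.linalg.norm(q_true)
    print(f"omega = {omega:3d}, modes <= {k_max:3d}, relative error = {err:.3f}")
```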

  16. Comparison result of inversion of gravity data of a fault by particle swarm optimization and Levenberg-Marquardt methods.

    Science.gov (United States)

    Toushmalani, Reza

    2013-01-01

    The purpose of this study was to compare the performance of two methods for gravity inversion of a fault. The first method, particle swarm optimization (PSO), is a heuristic global optimization method based on swarm intelligence; it originates from research on the movement behaviour of bird flocks and fish schools. The second method, the Levenberg-Marquardt algorithm (LM), is an approximation to the Newton method that is also used for training ANNs. In this paper we first discuss the gravity field of a fault, then describe the PSO and LM algorithms, and present their application to solving the inverse problem of a fault. Most importantly, the parameters of the algorithms are given for the individual tests. The inverse solutions reveal that the fault model parameters agree quite well with the known results. Better agreement between the predicted model anomaly and the observed gravity anomaly was found with the PSO method than with the LM method.
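
    A minimal version of such a comparison can be sketched as follows. The forward model is a generic fault-like step anomaly used only as a stand-in (a thin-sheet style expression, not necessarily the paper's model), fitted once with a bare-bones PSO and once with Levenberg-Marquardt via scipy.

```python
import numpy as np
from scipy.optimize import least_squares

# Stand-in forward model: a fault-like step anomaly with amplitude and depth.
def forward(params, x):
    amp, depth = params
    return amp * (0.5 + np.arctan(x / depth) / np.pi)

rng = np.random.default_rng(3)
x = np.linspace(-50.0, 50.0, 101)              # profile coordinates (arbitrary units)
true = np.array([12.0, 8.0])                   # amplitude, depth
d_obs = forward(true, x) + 0.1 * rng.standard_normal(x.size)
misfit = lambda p: np.sum((forward(p, x) - d_obs)**2)

# --- Particle swarm optimization (global, derivative-free) ------------------
def pso(misfit, bounds, n_part=30, n_iter=200, w=0.7, c1=1.5, c2=1.5):
    lo, hi = np.array(bounds).T
    pos = rng.uniform(lo, hi, (n_part, lo.size))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([misfit(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()
    for _ in range(n_iter):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, lo, hi)
        val = np.array([misfit(p) for p in pos])
        better = val < pbest_val
        pbest[better], pbest_val[better] = pos[better], val[better]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest

print("PSO estimate:", pso(misfit, bounds=[(1.0, 50.0), (1.0, 50.0)]))

# --- Levenberg-Marquardt (local, derivative-based) ---------------------------
lm = least_squares(lambda p: forward(p, x) - d_obs, x0=[5.0, 20.0], method='lm')
print("LM estimate :", lm.x)
```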

  17. Automated gravity gradient tensor inversion for underwater object detection

    International Nuclear Information System (INIS)

    Wu, Lin; Tian, Jinwen

    2010-01-01

    Underwater abnormal object detection is a current need for the navigation security of autonomous underwater vehicles (AUVs). In this paper, an automated gravity gradient tensor inversion algorithm is proposed for the purpose of passive underwater object detection. Full-tensor gravity gradient anomalies induced by an object in the local area can be measured with the technique of gravity gradiometry on an AUV. The automated algorithm then uses these anomalies in an inverse method to estimate the mass and barycentre location of the arbitrarily shaped object. A few tests on simple synthetic models are illustrated in order to evaluate the feasibility and accuracy of the new algorithm. Moreover, the method is applied to a complicated model of an abnormal object with gradiometer and AUV noise, and with interference from a neighbouring illusive smaller object. In all cases tested, the estimated mass and barycentre location parameters are found to be in good agreement with the actual values.
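
    A simplified version of such an inversion, assuming the anomalous body is far enough away to be treated as a point mass located at its barycentre, can be sketched as a nonlinear least-squares fit of the point-mass gradient tensor. This is an illustrative stand-in with synthetic numbers, not the paper's automated algorithm.

```python
import numpy as np
from scipy.optimize import least_squares

G = 6.674e-11  # gravitational constant

def gradient_tensor(obs, src, mass):
    """Gravity gradient tensor of a point mass at 'src' observed at 'obs'
    (point-mass approximation of an extended body)."""
    r = src - obs                                  # vector observation -> mass
    d = np.linalg.norm(r, axis=-1, keepdims=True)
    eye = np.eye(3)
    return G * mass * (3 * r[..., :, None] * r[..., None, :]
                       - eye * d[..., None]**2) / d[..., None]**5

# Synthetic survey: AUV-like track of observation points above a buried mass.
rng = np.random.default_rng(4)
obs = np.column_stack([np.linspace(-50, 50, 40), np.zeros(40), np.zeros(40)])
true_src, true_mass = np.array([5.0, 10.0, -30.0]), 5.0e7   # metres, kg (made up)
data = gradient_tensor(obs, true_src, true_mass)
data += 0.02 * np.abs(data).max() * rng.standard_normal(data.shape)   # noise

def residual(p):
    src, mass = p[:3], p[3] * 1.0e7        # mass parameter in units of 1e7 kg
    return (gradient_tensor(obs, src, mass) - data).ravel()

fit = least_squares(residual, x0=[0.0, 0.0, -10.0, 1.0])
print("barycentre:", fit.x[:3], " mass (kg):", fit.x[3] * 1.0e7)
```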

  18. The attitude inversion method of geostationary satellites based on unscented particle filter

    Science.gov (United States)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain because they appear only as non-resolved images to ground-based observation equipment used in space object surveillance. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The UPF-based inversion algorithm is proposed to handle the strongly non-linear character of inverting photometric data for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves particle selection by using the UKF idea to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of the attitude inversion compared with the UKF and mitigates the particle degradation and dilution of attitude inversion based on the PF. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by simulation and scaling experiments. The results show that the proposed method can effectively solve the problems of particle degradation and depletion in PF-based attitude inversion and the unsuitability of the UKF for strongly non-linear attitude inversion. The inversion accuracy is clearly superior to that of the UKF and PF; in addition, in the case of large attitude errors, the method can invert the attitude with few particles and high precision.

  19. Simultaneous inversion for the space-dependent diffusion coefficient and the fractional order in the time-fractional diffusion equation

    International Nuclear Information System (INIS)

    Li, Gongsheng; Zhang, Dali; Jia, Xianzheng; Yamamoto, Masahiro

    2013-01-01

    This paper deals with an inverse problem of simultaneously identifying the space-dependent diffusion coefficient and the fractional order in the 1D time-fractional diffusion equation with smooth initial functions by using boundary measurements. The uniqueness results for the inverse problem are proved on the basis of the inverse eigenvalue problem, and the Lipschitz continuity of the solution operator is established. A modified optimal perturbation algorithm with a regularization parameter chosen by a sigmoid-type function is put forward for the discretization of the minimization problem. Numerical inversions are performed for the diffusion coefficient taking on different functional forms and the additional data having random noise. Several factors which have important influences on the realization of the algorithm are discussed, including the approximate space of the diffusion coefficient, the regularization parameter and the initial iteration. The inversion solutions are good approximations to the exact solutions with stability and adaptivity demonstrating that the optimal perturbation algorithm with the sigmoid-type regularization parameter is efficient for the simultaneous inversion. (paper)

  20. Adaptive Inverse Optimal Control for Rehabilitation Robot Systems Using Actor-Critic Algorithm

    Directory of Open Access Journals (Sweden)

    Fancheng Meng

    2014-01-01

    Full Text Available The higher goal of a rehabilitation robot is to aid a person in achieving a desired functional task (e.g., tracking a trajectory) based on the assisted-as-needed principle. To this end, a new adaptive inverse optimal hybrid control (AHC) combining inverse optimal control and actor-critic learning is proposed. Specifically, an uncertain nonlinear rehabilitation robot model that includes human motor behavior dynamics is first developed. Then, based on this model, an open-loop error system is formed; thereafter, an inverse optimal control input is designed to minimize the cost functional, and an NN-based actor-critic feedforward signal compensates for the nonlinear dynamic part contaminated by uncertainties. Finally, the AHC controller is proven, through a Lyapunov-based stability analysis, to yield a globally uniformly ultimately bounded stability result, and the resulting cost functional is meaningful. Simulations and experiments on a rehabilitation robot demonstrate the effectiveness of the proposed control scheme.

  1. Transitionless driving on adiabatic search algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Oh, Sangchul, E-mail: soh@qf.org.qa [Qatar Environment and Energy Research Institute, Qatar Foundation, Doha (Qatar); Kais, Sabre, E-mail: kais@purdue.edu [Qatar Environment and Energy Research Institute, Qatar Foundation, Doha (Qatar); Department of Chemistry, Department of Physics and Birck Nanotechnology Center, Purdue University, West Lafayette, Indiana 47907 (United States)

    2014-12-14

    We study the quantum dynamics of the adiabatic search algorithm with the equivalent two-level system. Its adiabatic and non-adiabatic evolution is studied and visualized as trajectories of Bloch vectors on a Bloch sphere. We find that the non-adiabatic transition probability changes from exponential decay for short running times to inverse-square decay for asymptotically long running times. The scaling of the critical running time is expressed in terms of the Lambert W function. We derive the transitionless driving Hamiltonian for the adiabatic search algorithm, which makes a quantum state follow the adiabatic path. We demonstrate that a uniform transitionless driving Hamiltonian, approximate to the exact time-dependent driving Hamiltonian, can alter the non-adiabatic transition probability from inverse-square decay to inverse fourth-power decay with the running time. This may open up a new but simple way of speeding up adiabatic quantum dynamics.
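
    For orientation, the generic transitionless (counterdiabatic) driving Hamiltonian of Demirplak-Rice and Berry, from which the driving term for a two-level search Hamiltonian H_0(t) with instantaneous eigenstates |n(t)> follows, reads (standard background, not a formula quoted from the paper):

```latex
H(t) = H_{0}(t) + H_{\mathrm{CD}}(t), \qquad
H_{\mathrm{CD}}(t) = i\hbar \sum_{n} \Bigl( |\partial_{t} n(t)\rangle\langle n(t)|
      - \langle n(t)|\partial_{t} n(t)\rangle\, |n(t)\rangle\langle n(t)| \Bigr).
```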

  2. Inverse kinematics problem in robotics using neural networks

    Science.gov (United States)

    Choi, Benjamin B.; Lawrence, Charles

    1992-01-01

    In this paper, Multilayer Feedforward Networks are applied to the robot inverse kinematics problem. The networks are trained with end-effector positions and joint angles. After training, performance is measured by having the network generate joint angles for arbitrary end-effector trajectories. A 3-degree-of-freedom (DOF) spatial manipulator is used for the study. It is found that neural networks provide a simple and effective way both to model the manipulator inverse kinematics and to circumvent the problems associated with algorithmic solution methods.
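
    A minimal modern rendition of this idea, using a planar 3-link arm as a stand-in for the paper's 3-DOF spatial manipulator and scikit-learn's MLPRegressor instead of the original network, might look like the sketch below. Because the inverse map of a redundant arm is one-to-many, the naive regression only approximates one consistent branch and is judged by the end-effector error after re-applying the forward kinematics.

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Planar 3-link arm used as a stand-in manipulator (link lengths are assumptions).
L = np.array([1.0, 0.8, 0.5])

def forward_kinematics(theta):
    """End-effector (x, y) for joint angles theta of shape (..., 3)."""
    cum = np.cumsum(theta, axis=-1)
    x = np.sum(L * np.cos(cum), axis=-1)
    y = np.sum(L * np.sin(cum), axis=-1)
    return np.stack([x, y], axis=-1)

# Training data: joint angles sampled in a restricted range (to keep the
# inverse map close to single-valued), with the corresponding positions.
rng = np.random.default_rng(5)
theta_train = rng.uniform([0.0, 0.2, 0.2], [np.pi / 2, 1.2, 1.2], (5000, 3))
pos_train = forward_kinematics(theta_train)

net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
net.fit(pos_train, theta_train)        # learn position -> joint angles

# Check: run the predicted angles back through the forward kinematics.
theta_test = rng.uniform([0.0, 0.2, 0.2], [np.pi / 2, 1.2, 1.2], (200, 3))
pos_test = forward_kinematics(theta_test)
pos_back = forward_kinematics(net.predict(pos_test))
print("mean end-effector error:",
      np.mean(np.linalg.norm(pos_back - pos_test, axis=1)))
```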

  3. Inverse planning in brachytherapy from radium to high rate 192 iridium afterloading

    International Nuclear Information System (INIS)

    Lahanas, M.; Mould, R.F.; Baltas, D.; Karauzakis, K.; Giannouli, S.; Baltas, D.

    2004-01-01

    We consider the inverse planning problem in brachytherapy, i.e. the problem of determining the optimal number of catheters, the number of sources for low-dose-rate (LDR) brachytherapy and the optimal dwell times for high-dose-rate (HDR) brachytherapy necessary to obtain as optimal a dose distribution as possible. Starting from the 1930s, inverse planning for LDR brachytherapy used geometrically derived rules to determine the optimal placement of sources in order to achieve a uniform dose distribution of a specific level in planes, spheres and cylinders. Rules and nomograms were derived which are still widely used. With the rapid development of 3D imaging technologies and the rapidly increasing computer power, we have now entered the new era of computer-based inverse planning in brachytherapy. Inverse planning is now an optimisation process adapted to the individual geometry of the patient. New inverse planning optimisation algorithms are anatomy-based and consider the real anatomy of the tumour and the organs at risk (OAR). Computer-based inverse planning considers effects, such as the stability of solutions under seed misplacement, which could never be handled analytically without gross simplifications. In the last few years multiobjective (MO) inverse planning algorithms have been developed which recognise the MO optimisation problem inherent in inverse planning in brachytherapy. Previous methods used trial and error to obtain a satisfactory solution. MO optimisation replaces this trial and error process by presenting a representative set of dose distributions that can be obtained. With MO optimisation it is possible to obtain information that can be used to determine the optimum number of catheters, their positions and the optimum distribution of dwell times for HDR brachytherapy. For LDR brachytherapy, the stability of solutions with respect to seed migration can also be improved. A spectrum of alternative solutions is available and the treatment planner

  4. Modular Regularization Algorithms

    DEFF Research Database (Denmark)

    Jacobsen, Michael

    2004-01-01

    The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into indepen...

  5. Robust inverse scattering full waveform seismic tomography for imaging complex structure

    International Nuclear Information System (INIS)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Wibowo, Satryo; Deny, Agus; Kurniadi, Rizal; Widowati, Sri; Mubarok, Syahrul; Susilowati; Kaswandhi

    2012-01-01

    Seismic tomography has recently become an important tool for imaging the complex subsurface. It is well known that imaging a complex, fault-rich zone is difficult. In this paper, the application of time-domain inverse scattering wave tomography to imaging a complex fault zone is shown, in particular an efficient time-domain inverse scattering tomography and its implementation on a parallel computing cluster. The algorithm is purely based on scattering theory, solving the Lippmann-Schwinger integral using the Born approximation. The robustness of this algorithm is shown, especially in avoiding an inversion trapped in a local minimum and reaching the global minimum. Large data sets are handled by windowing and blocking techniques for both memory and computation. The windowing parameter is based on the shot gather's aperture, and this windowing technique reduces memory usage as well as computation significantly. The parallel algorithm runs on a cluster of 120 processors in 20 AMD Phenom II nodes. The algorithm is benchmarked on the Marmousi model, which is representative of a complex, fault-rich area. It is shown that the proposed method can clearly image the fault-rich and complex zones of the Marmousi model even though the initial model is quite far from the true model. Therefore, this method can serve as one solution for imaging very complex models.

  6. Robust inverse scattering full waveform seismic tomography for imaging complex structure

    Energy Technology Data Exchange (ETDEWEB)

    Nurhandoko, Bagus Endar B.; Sukmana, Indriani; Wibowo, Satryo; Deny, Agus; Kurniadi, Rizal; Widowati, Sri; Mubarok, Syahrul; Susilowati; Kaswandhi [Wave Inversion and Subsurface Fluid Imaging Research (WISFIR) Lab., Complex System Research Division, Physics Department, Faculty of Mathematics and Natural Sciences, Institut Teknologi Bandung. and Rock Fluid Imaging Lab., Rock Physics and Cluster C (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung (Indonesia); Physics Department of Institut Teknologi Bandung (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung, Indonesia and Institut Teknologi Telkom, Bandung (Indonesia); Rock Fluid Imaging Lab., Rock Physics and Cluster Computing Center, Bandung (Indonesia)

    2012-06-20

    Seismic tomography has recently become an important tool for imaging the complex subsurface. It is well known that imaging a complex, fault-rich zone is difficult. In this paper, the application of time-domain inverse scattering wave tomography to imaging a complex fault zone is shown, in particular an efficient time-domain inverse scattering tomography and its implementation on a parallel computing cluster. The algorithm is purely based on scattering theory, solving the Lippmann-Schwinger integral using the Born approximation. The robustness of this algorithm is shown, especially in avoiding an inversion trapped in a local minimum and reaching the global minimum. Large data sets are handled by windowing and blocking techniques for both memory and computation. The windowing parameter is based on the shot gather's aperture, and this windowing technique reduces memory usage as well as computation significantly. The parallel algorithm runs on a cluster of 120 processors in 20 AMD Phenom II nodes. The algorithm is benchmarked on the Marmousi model, which is representative of a complex, fault-rich area. It is shown that the proposed method can clearly image the fault-rich and complex zones of the Marmousi model even though the initial model is quite far from the true model. Therefore, this method can serve as one solution for imaging very complex models.

  7. Multiobjective optimization with a modified simulated annealing algorithm for external beam radiotherapy treatment planning

    International Nuclear Information System (INIS)

    Aubry, Jean-Francois; Beaulieu, Frederic; Sevigny, Caroline; Beaulieu, Luc; Tremblay, Daniel

    2006-01-01

    Inverse planning in external beam radiotherapy often requires a scalar objective function that incorporates importance factors to mimic the planner's preferences between conflicting objectives. Defining those importance factors is not straightforward, and frequently leads to an iterative process in which the importance factors become variables of the optimization problem. In order to avoid this drawback of inverse planning, optimization using algorithms more suited to multiobjective optimization, such as evolutionary algorithms, has been suggested. However, much inverse planning software, including a simulated-annealing-based package developed at our institution, does not include multiobjective-oriented algorithms. This work investigates the performance of a modified simulated annealing algorithm used to drive aperture-based intensity-modulated radiotherapy inverse planning software in a multiobjective optimization framework. For a few test cases involving gastric cancer patients, the use of this new algorithm leads to an increase in optimization speed of a little more than a factor of 2 over a conventional simulated annealing algorithm, while giving a close approximation of the solutions produced by standard simulated annealing. A simple graphical user interface designed to facilitate the decision-making process that follows an optimization is also presented.

  8. Eikonal-Based Inversion of GPR Data from the Vaucluse Karst Aquifer

    Science.gov (United States)

    Yedlin, M. J.; van Vorst, D.; Guglielmi, Y.; Cappa, F.; Gaffet, S.

    2009-12-01

    In this paper, we present an easy-to-implement eikonal-based travel time inversion algorithm and apply it to borehole GPR measurement data obtained from a karst aquifer located in the Vaucluse in Provence. The boreholes are situated within a fault zone deep inside the aquifer, in the Laboratoire Souterrain à Bas Bruit (LSBB). The measurements were made using 250 MHz MALA RAMAC borehole GPR antennas. The inversion formulation is unique in its application of a fast-sweeping eikonal solver (Zhao [1]) to the minimization of an objective functional composed of a travel time misfit and a model-based regularization [2]. The solver is robust in the presence of large velocity contrasts, efficient, easy to implement, and does not require the use of a sorting algorithm. The computation of sensitivities, which are required for the inversion process, is achieved by tracing rays backward from receiver to source following the gradient of the travel time field [2]. A user wishing to implement this algorithm can opt to avoid the ray tracing step and simply perturb the model to obtain the required sensitivities. Despite the obvious computational inefficiency of such an approach, it is acceptable for 2D problems. The relationship between travel time and the velocity profile is non-linear, requiring an iterative approach. At each iteration, a set of matrix equations is solved to determine the model update. As the inversion continues, the weighting of the regularization parameter is adjusted until an appropriate data misfit is obtained. The inversion results, shown in the attached image, are consistent with previously obtained geological structure. Future work will look at improving inversion resolution and incorporating other measurement methodologies, with the goal of providing useful data for groundwater analysis. References: [1] H. Zhao, “A fast sweeping method for Eikonal equations,” Mathematics of Computation, vol. 74, no. 250, pp. 603-627, 2004. [2] D
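
    The fast-sweeping eikonal solver of [1] reduces, on a regular 2-D grid, to Gauss-Seidel passes with a Godunov upwind update in four sweep orderings. The sketch below is a minimal version of that solver only (not the authors' inversion code), checked against a homogeneous-velocity model with made-up grid values.

```python
import numpy as np

def fast_sweep_eikonal(slowness, src, h=1.0, n_sweeps=8):
    """Solve |grad u| = slowness on a regular grid with the fast-sweeping
    method (Godunov upwind update, four alternating sweep orderings)."""
    ny, nx = slowness.shape
    u = np.full((ny, nx), 1e10)
    u[src] = 0.0
    orders = [(range(ny), range(nx)),
              (range(ny), range(nx - 1, -1, -1)),
              (range(ny - 1, -1, -1), range(nx)),
              (range(ny - 1, -1, -1), range(nx - 1, -1, -1))]
    for sweep in range(n_sweeps):
        ii, jj = orders[sweep % 4]
        for i in ii:
            for j in jj:
                a = min(u[i - 1, j] if i > 0 else 1e10,
                        u[i + 1, j] if i < ny - 1 else 1e10)
                b = min(u[i, j - 1] if j > 0 else 1e10,
                        u[i, j + 1] if j < nx - 1 else 1e10)
                fh = slowness[i, j] * h
                if abs(a - b) >= fh:
                    cand = min(a, b) + fh
                else:
                    cand = 0.5 * (a + b + np.sqrt(2.0 * fh * fh - (a - b)**2))
                u[i, j] = min(u[i, j], cand)
    return u

# Homogeneous-velocity sanity check: times should approximate straight rays.
n = 51
slow = np.ones((n, n)) / 2.0          # velocity 2, slowness 0.5
t = fast_sweep_eikonal(slow, src=(25, 25))
print("computed vs exact corner time:", t[0, 0], np.hypot(25, 25) * 0.5)
```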

  9. Dirichlet Characters, Gauss Sums, and Inverse Z Transform

    OpenAIRE

    Gao, Jing; Liu, Huaning

    2012-01-01

    A generalized Möbius transform is presented. It is based on Dirichlet characters. A general algorithm is developed to compute the inverse $Z$ transform on the unit circle, and an error estimate is given for the truncated series representation.

  10. An Analytical Method for the Abel Inversion of Asymmetrical Gaussian Profiles

    International Nuclear Information System (INIS)

    Xu Guosheng; Wan Baonian

    2007-01-01

    An analytical algorithm for fast calculation of the Abel inversion for density profile measurements in tokamaks is developed. Based upon the assumptions that the particle source is negligibly small in the plasma core region, that density profiles can be approximated by an asymmetrical Gaussian distribution controlled by only one parameter V0/D, and that V0/D is constant along the radial direction, the analytical algorithm is presented and examined against a test profile. The validity is confirmed by benchmarking against the standard Abel inversion method and the theoretical profile. The scope of application as well as the error analysis is also discussed in detail
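
    For reference, the Abel transform pair that such algorithms specialise, for a cylindrically symmetric profile f(r) observed as chord-integrated data F(y) out to radius R (standard background, not an equation reproduced from the paper):

```latex
F(y) = 2\int_{y}^{R} \frac{f(r)\, r\, dr}{\sqrt{r^{2}-y^{2}}}, \qquad
f(r) = -\frac{1}{\pi}\int_{r}^{R} \frac{dF}{dy}\, \frac{dy}{\sqrt{y^{2}-r^{2}}}.
```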

  11. Application of genetic algorithms for parameter estimation in liquid chromatography

    International Nuclear Information System (INIS)

    Hernandez Torres, Reynier; Irizar Mesa, Mirtha; Tavares Camara, Leoncio Diogenes

    2012-01-01

    In chromatography, complex inverse problems related to parameter estimation and process optimization arise. Metaheuristic methods are known as general-purpose approximate algorithms which seek, and hopefully find, good solutions at a reasonable computational cost. These methods are iterative processes that perform a robust search of the solution space. Genetic algorithms are optimization techniques based on the principles of genetics and natural selection. They have demonstrated very good performance as global optimizers in many types of applications, including inverse problems. In this work, the effectiveness of genetic algorithms for estimating parameters in liquid chromatography is investigated

  12. Iterative Reconstruction Methods for Hybrid Inverse Problems in Impedance Tomography

    DEFF Research Database (Denmark)

    Hoffmann, Kristoffer; Knudsen, Kim

    2014-01-01

    For a general formulation of hybrid inverse problems in impedance tomography the Picard and Newton iterative schemes are adapted and four iterative reconstruction algorithms are developed. The general problem formulation includes several existing hybrid imaging modalities such as current density impedance imaging, magnetic resonance electrical impedance tomography, and ultrasound modulated electrical impedance tomography, and the unified approach to the reconstruction problem encompasses several algorithms suggested in the literature. The four proposed algorithms are implemented numerically in two

  13. Discriminating Phytoplankton Functional Types (PFTs) in the Coastal Ocean Using the Inversion Algorithm Phydotax and Airborne Imaging Spectrometer Data

    Science.gov (United States)

    Palacios, Sherry L.; Schafer, Chris; Broughton, Jennifer; Guild, Liane S.; Kudela, Raphael M.

    2013-01-01

    There is a need in the Biological Oceanography community to discriminate among phytoplankton groups within the bulk chlorophyll pool in order to understand energy flow through ecosystems, to track the fate of carbon in the ocean, and to detect and monitor harmful algal blooms (HABs). The ocean color community has responded to this demand with the development of phytoplankton functional type (PFT) discrimination algorithms. These PFT algorithms fall into one of three categories depending on the science application: size-based, biogeochemical function, and taxonomy. The new PFT algorithm Phytoplankton Detection with Optics (PHYDOTax) is an inversion algorithm that discriminates taxon-specific biomass to differentiate among six taxa found in the California Current System: diatoms, dinoflagellates, haptophytes, chlorophytes, cryptophytes, and cyanophytes. PHYDOTax was developed and validated in Monterey Bay, CA for the high-resolution imaging spectrometer Spectroscopic Aerial Mapping System with On-board Navigation (SAMSON - 3.5 nm resolution). PHYDOTax exploits the high spectral resolution of an imaging spectrometer and the improved spatial resolution that airborne data provide for coastal areas. The objective of this study was to apply PHYDOTax to a relatively lower-resolution imaging spectrometer to test the algorithm's sensitivity to atmospheric correction, to evaluate its capability with other sensors, and to determine if down-sampling spectral resolution would degrade its ability to discriminate among phytoplankton taxa. This study is a part of the larger Hyperspectral Infrared Imager (HyspIRI) airborne simulation campaign which is collecting Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) imagery aboard NASA's ER-2 aircraft during three seasons in each of two years over terrestrial and marine targets in California. Our aquatic component seeks to develop and test algorithms to retrieve water quality properties (e.g. HABs and river plumes) in both marine and in

  14. Inverse consistent non-rigid image registration based on robust point set matching

    Science.gov (United States)

    2014-01-01

    Background Robust point matching (RPM) has been extensively used in non-rigid registration of images to robustly register two sets of image points. However, except at the control points, RPM cannot estimate a consistent correspondence between two images because RPM is a unidirectional image matching approach. Therefore, improving image registration based on RPM is an important issue. Methods In our work, a consistent image registration approach based on point set matching is proposed to incorporate the property of inverse consistency and improve registration accuracy. Instead of only estimating the forward transformation between the source point set and the target point set, as in state-of-the-art RPM algorithms, the forward and backward transformations between two point sets are estimated concurrently in our algorithm. The inverse consistency constraints are introduced into the cost function of RPM, and the fuzzy correspondences between two point sets are estimated based on both the forward and backward transformations simultaneously. A modified consistent landmark thin-plate spline registration is discussed in detail to find the forward and backward transformations during the optimization of RPM. The similarity of image content is also incorporated into the point matching in order to improve image matching. Results Synthetic data sets and medical images are employed to demonstrate and validate the performance of our approach. The inverse consistent errors of our algorithm are smaller than those of RPM. In particular, the topology of the transformations is preserved well by our algorithm for large deformations between point sets. Moreover, the distance errors of our algorithm are similar to those of RPM and maintain a downward trend overall, which demonstrates the convergence of our algorithm. The registration errors for image registration are also evaluated. Again, our algorithm achieves lower registration errors in the same number of iterations

  15. Loading pattern calculated by inverse optimization vs traditional dosimetry systems of intracavitary brachytherapy of cervical cancer: a dosimetric study

    International Nuclear Information System (INIS)

    Jamema, S.V.; Deshpande, D.D.; Kirisits, C.; Trnkova, P.; Poetter, R.; Mahantshetty, U.; Shrivastava, S.K.; Dinshaw, K.A.

    2008-01-01

    In the recent past, inverse planning algorithms were introduced for intracavitary brachytherapy (ICBT) planning for cervical cancer. The loading patterns produced by these algorithms may not be similar to those of the traditional dosimetry systems. The purpose of this study was to objectively compare the loading patterns of the traditional systems with those of the inverse optimization. Based on the outcome of the comparison, an attempt was made to obtain a loading pattern that takes into account the experience gained with the inverse optimization

  16. A Stochastic Inversion Method for Potential Field Data: Ant Colony Optimization

    Science.gov (United States)

    Liu, Shuang; Hu, Xiangyun; Liu, Tianyou

    2014-07-01

    Simulating the foraging behavior of natural ants, the ant colony optimization (ACO) algorithm performs excellently in combinatorial optimization problems, for example the traveling salesman problem and the quadratic assignment problem. However, the ACO is seldom used to invert gravity and magnetic data. On the basis of a continuous, multi-dimensional objective function for potential field data inversion, we present the node partition strategy ACO (NP-ACO) algorithm for inversion of model variables of fixed shape and recovery of physical property distributions of complicated shape models. We divide the continuous variables into discrete nodes, and ants directionally tour the nodes by use of transition probabilities. We update the pheromone trails by use of a Gaussian mapping between the objective function value and the quantity of pheromone. The algorithm can analyze the search results in real time and improve the rate of convergence and precision of the inversion. Traditional mappings, including the ant-cycle system, weaken the differences between ant individuals and lead to premature convergence. We tested our method on synthetic data and real data from scenarios involving gravity and magnetic anomalies. The inverted model variables and recovered physical property distributions were in good agreement with the true values. The ACO algorithm for binary-representation imaging and full imaging can recover sharper physical property distributions than traditional linear inversion methods. The ACO has good optimization capability and several excellent characteristics, for example robustness, parallel implementation, and portability, compared with other stochastic metaheuristics.

  17. Rapid processing of data based on high-performance algorithms for solving inverse problems and 3D-simulation of the tsunami and earthquakes

    Science.gov (United States)

    Marinin, I. V.; Kabanikhin, S. I.; Krivorotko, O. I.; Karas, A.; Khidasheli, D. G.

    2012-04-01

    We consider new techniques and methods for earthquake- and tsunami-related problems, particularly inverse problems for the determination of tsunami source parameters, numerical simulation of long wave propagation in soil and water, and tsunami risk estimation. In addition, we touch upon the issues of database management and destruction scenario visualization. New approaches and strategies, as well as mathematical tools and software, are shown. The long joint investigations by researchers of the Institute of Mathematical Geophysics and Computational Mathematics SB RAS and specialists from WAPMERR and Informap have produced special theoretical approaches, numerical methods, and software for tsunami and earthquake modeling (modeling of propagation and run-up of tsunami waves on coastal areas), visualization, and risk estimation of tsunamis and earthquakes. Algorithms are developed for the operational determination of the origin and form of the tsunami source. The TSS system numerically simulates the tsunami and/or earthquake source and includes the possibility of solving both the direct and the inverse problem. It becomes possible to use advanced mathematical results to improve models and to increase the resolution of inverse problems. Via TSS one can construct risk maps, online disaster scenarios, and estimates of potential damage to buildings and roads. One of the main tools for the numerical modeling is the finite volume method (FVM), which allows us to achieve stability with respect to possible input errors as well as optimum computing speed. Our approach to the inverse problem of tsunami and earthquake determination is based on recent theoretical results concerning the Dirichlet problem for the wave equation. This problem is intrinsically ill-posed. We use the optimization approach to solve it and SVD analysis to estimate the degree of ill-posedness and to find the quasi-solution. The software system we developed is intended to

  18. One-dimensional nonlinear inverse heat conduction technique

    International Nuclear Information System (INIS)

    Hills, R.G.; Hensel, E.C. Jr.

    1986-01-01

    The one-dimensional nonlinear problem of heat conduction is considered. A noniterative space-marching finite-difference algorithm is developed to estimate the surface temperature and heat flux from temperature measurements at subsurface locations. The trade-off between resolution and variance of the estimates of the surface conditions is discussed quantitatively. The inverse algorithm is stabilized through the use of digital filters applied recursively. The effect of the filters on the resolution and variance of the surface estimates is quantified. Results are presented which indicate that the technique is capable of handling noisy measurement data

  19. 3D stochastic inversion and joint inversion of potential fields for multi scale parameters

    Science.gov (United States)

    Shamsipour, Pejman

    In this thesis we present the development of new techniques for the interpretation of potential field data (gravity and magnetic data), which are the most widespread economic geophysical methods used for oil and mineral exploration. These new techniques help to address the long-standing issue with the interpretation of potential fields, namely the intrinsic non-uniqueness of the inversion of these types of data. The thesis takes the form of three papers (four including the Appendix), which have been published, or will soon be published, in respected international journals. The purpose of the thesis is to introduce new methods based on 3D stochastic approaches for: 1) inversion of potential field data (magnetic), 2) multiscale inversion using surface and borehole data and 3) joint inversion of geophysical potential field data. We first present a stochastic inversion method based on a geostatistical approach to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. We evaluate the uncertainty on the parameter model by using geostatistical unconditional simulations. The realizations are post-conditioned by cokriging to observation data. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. Then, we introduce an algorithm for multiscale inversion; the presented algorithm has the capability of inverting data on multiple supports. The method involves four main steps: i. upscaling of borehole parameters (density or susceptibility) to block parameters, ii. selection of blocks to use as constraints based on a threshold on the kriging variance, iii. inversion of observation data with the selected block densities as constraints, and iv. downscaling of the inverted parameters to small prisms. Two modes of application are presented: estimation and simulation. Finally, a novel

  20. Inverse problem studies of biochemical systems with structure identification of S-systems by embedding training functions in a genetic algorithm.

    Science.gov (United States)

    Sarode, Ketan Dinkar; Kumar, V Ravi; Kulkarni, B D

    2016-05-01

    An efficient inverse problem approach for parameter estimation, state and structure identification from dynamic data by embedding training functions in a genetic algorithm methodology (ETFGA) is proposed for nonlinear dynamical biosystems using S-system canonical models. Use of multiple shooting and decomposition approach as training functions has been shown for handling of noisy datasets and computational efficiency in studying the inverse problem. The advantages of the methodology are brought out systematically by studying it for three biochemical model systems of interest. By studying a small-scale gene regulatory system described by a S-system model, the first example demonstrates the use of ETFGA for the multifold aims of the inverse problem. The estimation of a large number of parameters with simultaneous state and network identification is shown by training a generalized S-system canonical model with noisy datasets. The results of this study bring out the superior performance of ETFGA on comparison with other metaheuristic approaches. The second example studies the regulation of cAMP oscillations in Dictyostelium cells now assuming limited availability of noisy data. Here, flexibility of the approach to incorporate partial system information in the identification process is shown and its effect on accuracy and predictive ability of the estimated model are studied. The third example studies the phenomenological toy model of the regulation of circadian oscillations in Drosophila that follows rate laws different from S-system power-law. For the limited noisy data, using a priori information about properties of the system, we could estimate an alternate S-system model that showed robust oscillatory behavior with predictive abilities. Copyright © 2016 Elsevier Inc. All rights reserved.

  1. Inverse problem for particle size distributions of atmospheric aerosols using stochastic particle swarm optimization

    International Nuclear Information System (INIS)

    Yuan Yuan; Yi Hongliang; Shuai Yong; Wang Fuqiang; Tan Heping

    2010-01-01

    As a part of resolving optical properties in atmospheric radiative transfer calculations, this paper focuses on obtaining aerosol optical thicknesses (AOTs) in the visible and near-infrared wave bands through an indirect method, by retrieving the values of the aerosol particle size distribution parameters. Although various inverse techniques have been applied to obtain values for these parameters, we choose a stochastic particle swarm optimization (SPSO) algorithm to perform the inverse calculation. Computational performances of different inverse methods are investigated and the influence of the swarm size on the inverse computation is examined. Next, the computational efficiencies for various particle size distributions and the influences of the measurement errors on the computational accuracy are compared. Finally, we recover particle size distributions for atmospheric aerosols over Beijing using measured AOT data (at wavelengths λ=0.400, 0.690, 0.870, and 1.020 μm) obtained from AERONET at different times and then calculate other AOT values for this band based on the inversion results. With the calculations agreeing with the measured data, the SPSO algorithm shows good practicability.

  2. Key Generation for Fast Inversion of the Paillier Encryption Function

    Science.gov (United States)

    Hirano, Takato; Tanaka, Keisuke

    We study fast inversion of the Paillier encryption function. Especially, we focus only on key generation, and do not modify the Paillier encryption function. We propose three key generation algorithms based on the speeding-up techniques for the RSA encryption function. By using our algorithms, the size of the private CRT exponent is half of that of Paillier-CRT. The first algorithm employs the extended Euclidean algorithm. The second algorithm employs factoring algorithms, and can construct the private CRT exponent with low Hamming weight. The third algorithm is a variant of the second one, and has some advantage such as compression of the private CRT exponent and no requirement for factoring algorithms. We also propose the settings of the parameters for these algorithms and analyze the security of the Paillier encryption function by these algorithms against known attacks. Finally, we give experimental results of our algorithms.

  3. Solving Inverse Kinematics – A New Approach to the Extended Jacobian Technique

    Directory of Open Access Journals (Sweden)

    M. Šoch

    2005-01-01

    Full Text Available This paper presents a brief summary of current numerical algorithms for solving the Inverse Kinematics problem. Then a new approach based on the Extended Jacobian technique is compared with the current Jacobian Inversion method. The presented method is intended for use in the field of computer graphics for animation of articulated structures. 

  4. Embedding Term Similarity and Inverse Document Frequency into a Logical Model of Information Retrieval.

    Science.gov (United States)

    Losada, David E.; Barreiro, Alvaro

    2003-01-01

    Proposes an approach to incorporate term similarity and inverse document frequency into a logical model of information retrieval. Highlights include document representation and matching; incorporating term similarity into the measure of distance; new algorithms for implementation; inverse document frequency; and logical versus classical models of…

  5. LinvPy : a Python package for linear inverse problems

    OpenAIRE

    Beaud, Guillaume François Paul

    2016-01-01

    The goal of this project is to make a Python package including the tau-estimator algorithm to solve linear inverse problems. The package must be distributed, well documented, easy to use and easy to extend for future developers.

  6. Quantum algorithm for support matrix machines

    Science.gov (United States)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and p×q is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.

  7. Extension and optimization of the FIND algorithm: Computing Green’s and less-than Green’s functions

    International Nuclear Information System (INIS)

    Li, S.; Darve, E.

    2012-01-01

    Highlights: ► FIND is an algorithm for calculating entries of the inverse of a sparse matrix. ► We extend the algorithm to other matrix-inverse-related calculations. ► We exploit sparsity and symmetry to improve performance. - Abstract: The FIND algorithm is a fast algorithm designed to calculate certain entries of the inverse of a sparse matrix. Such calculations are critical in many applications, e.g., quantum transport in nano-devices. We extended the algorithm to other matrix-inverse-related calculations. Those are required, for example, to calculate the less-than Green's function and the current density through the device. For a 2D device discretized as an N_x × N_y mesh, the best known algorithms have a running time of O(N_x^3 N_y), whereas FIND only requires O(N_x^2 N_y). Even though this complexity has been reduced by an order of magnitude, the matrix inverse calculation is still the most time-consuming part in the simulation of transport problems. We could not reduce the order of complexity, but we were able to significantly reduce the constant factor involved in the computation cost. By exploiting the sparsity and symmetry, the size of the problem beyond which FIND is faster than other methods typically decreases from a 130 × 130 2D mesh down to a 40 × 40 mesh. These improvements make the optimized FIND algorithm even more competitive for real-life applications.

  8. Inverse problems with non-trivial priors: efficient solution through sequential Gibbs sampling

    DEFF Research Database (Denmark)

    Hansen, Thomas Mejer; Cordua, Knud Skou; Mosegaard, Klaus

    2012-01-01

    Markov chain Monte Carlo methods such as the Gibbs sampler and the Metropolis algorithm can be used to sample solutions to non-linear inverse problems. In principle, these methods allow incorporation of prior information of arbitrary complexity. If an analytical closed-form description of the prior is available, which is the case when the prior can be described by a multidimensional Gaussian distribution, such prior information can easily be considered. In reality, prior information is often more complex than can be described by the Gaussian model, and no closed-form expression of the prior can be given. We propose an algorithm, called sequential Gibbs sampling, allowing the Metropolis algorithm to efficiently incorporate complex priors into the solution of an inverse problem, also for the case where no closed-form description of the prior exists. First, we lay out the theoretical background...
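
    The baseline that sequential Gibbs sampling extends is the plain Metropolis random walk on the posterior. The sketch below shows only that baseline for a linear toy problem with a closed-form Gaussian prior (the case the paper goes beyond); the sequential Gibbs proposal itself, which resimulates part of the model conditionally from a complex prior, is not reproduced here.

```python
import numpy as np

rng = np.random.default_rng(6)

# Linear toy inverse problem d = G m + noise, with a Gaussian prior on m.
n_par, n_dat = 10, 20
G = rng.standard_normal((n_dat, n_par))
m_true = rng.standard_normal(n_par)
sigma = 0.5
d_obs = G @ m_true + sigma * rng.standard_normal(n_dat)

def log_likelihood(m):
    r = G @ m - d_obs
    return -0.5 * np.sum(r * r) / sigma**2

def log_prior(m):                      # standard normal prior (closed form here;
    return -0.5 * np.sum(m * m)        # sequential Gibbs targets the case without one)

# Plain Metropolis random walk sampling the posterior ~ prior * likelihood.
m = np.zeros(n_par)
logp = log_prior(m) + log_likelihood(m)
samples, accepted = [], 0
for it in range(20000):
    prop = m + 0.1 * rng.standard_normal(n_par)          # random-walk proposal
    logp_prop = log_prior(prop) + log_likelihood(prop)
    if np.log(rng.random()) < logp_prop - logp:          # Metropolis acceptance
        m, logp, accepted = prop, logp_prop, accepted + 1
    if it >= 5000:                                       # discard burn-in
        samples.append(m.copy())
samples = np.array(samples)
print("acceptance rate:", accepted / 20000)
print("posterior mean :", np.round(samples.mean(axis=0), 2))
print("true model     :", np.round(m_true, 2))
```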

  9. An inverse analysis of a transient 2-D conduction-radiation problem using the lattice Boltzmann method and the finite volume method coupled with the genetic algorithm

    International Nuclear Information System (INIS)

    Das, Ranjan; Mishra, Subhash C.; Ajith, M.; Uppaluri, R.

    2008-01-01

    This article deals with the simultaneous estimation of parameters in a 2-D transient conduction-radiation heat transfer problem. The homogeneous medium is assumed to be absorbing, emitting and scattering. The boundaries of the enclosure are diffuse gray. Three parameters, viz. the scattering albedo, the conduction-radiation parameter and the boundary emissivity, are simultaneously estimated by the inverse method involving the lattice Boltzmann method (LBM) and the finite volume method (FVM) in conjunction with the genetic algorithm (GA). In the direct method, the FVM is used for computing the radiative information while the LBM is used to solve the energy equation. The temperature field obtained in the direct method is used in the inverse method for simultaneous estimation of unknown parameters using the LBM-FVM and the GA. The LBM-FVM-GA combination has been found to accurately predict the unknown parameters

  10. On the Duality of Forward and Inverse Light Transport.

    Science.gov (United States)

    Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi

    2011-10-01

    Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.
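
    Schematically, with E the directly emitted (first-bounce) component, T the one-bounce transport operator and L the observed radiance, the forward and inverse series discussed above can be written as follows (a paraphrase of the standard Neumann-series argument, not equations copied from the paper):

```latex
L = E + T L \;\Longrightarrow\; L = (I - T)^{-1} E = \sum_{k \ge 0} T^{k} E,
\qquad E = (I - T)\,L .
```

    Each power T^k adds the k-th interreflection bounce in the forward series, and applying (I - T) removes them. When only the full transport operator S = (I - T)^{-1} is measured, its inverse can itself be expanded as S^{-1} = Σ_{k≥0} (I - S)^k, whose alternating-sign terms are one way to view the oscillatory convergence mentioned in the abstract.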

  11. Frequency-domain inversion using the amplitude of the derivative wavefield with respect to the angular frequency

    KAUST Repository

    Choi, Yun Seok

    2012-01-01

    The instantaneous traveltime based inversion was developed to solve the phase wrapping problem, thus generating long-wavelength structures even for a single high frequency. However, it required aggressive damping to ensure proper convergence. A reason for that is the potential for unstable division in the calculation of the instantaneous traveltime for low damping factors. Thus, we propose an inversion algorithm using the amplitude of the derivative wavefield to avoid the unstable division process. Since the amplitude of the derivative wavefield contains the unwrapped-phase information, its inversion has the potential to provide robust inversion results. On the other hand, the damping term rapidly diminishes the amplitude of the derivative wavefield at far source-receiver offsets. As an alternative, we suggest using the logarithmic amplitude of the derivative wavefield. The gradient of this inversion algorithm is obtained by the back-propagation approach, based on the adjoint-state technique. Numerical examples show that the logarithmic-amplitude approach yields better convergence than the instantaneous traveltime inversion, whereas the pure-amplitude approach does not show much convergence.

  12. Analytical inversion formula for uniformly attenuated fan-beam projections

    International Nuclear Information System (INIS)

    Weng, Y.; Zeng, G.L.; Gullberg, G.T.

    1997-01-01

    In deriving algorithms to reconstruct single photon emission computed tomography (SPECT) projection data, it is important that the algorithm compensates for photon attenuation in order to obtain quantitative reconstruction results. A convolution backprojection algorithm was derived by Tretiak and Metz to reconstruct two-dimensional (2-D) transaxial slices from uniformly attenuated parallel-beam projections. Using transformation of coordinates, this algorithm can be modified to obtain a formulation useful to reconstruct uniformly attenuated fan-beam projections. Unlike that for parallel-beam projections, this formulation does not produce a filtered backprojection reconstruction algorithm but instead has a formulation that is an inverse integral operator with a spatially varying kernel. This algorithm thus requires more computation time than does the filtered backprojection reconstruction algorithm for the uniformly attenuated parallel-beam case. However, the fan-beam reconstructions demonstrate the same image quality as that of parallel-beam reconstructions

  13. 3D elastic inversion of vertical seismic profiles in horizontally stratified media; Inversion elastique 3D de profils sismiques verticaux en milieux stratifies horizontalement

    Energy Technology Data Exchange (ETDEWEB)

    Petit, J.L.

    1997-07-21

    This thesis is devoted to the inversion of VSP (vertical seismic profile) seismic data in order to determine the elastic properties of horizontally stratified media. The VSP records are computed using full-wave elastic modelling in isotropic and transversely isotropic media by means of a Hankel transform, a finite-difference scheme and an inverse Hankel transform algorithm, and the propagation equations are determined and numerically solved; the importance of considering a 3D wave propagation model instead of a 1D one is emphasized. The theoretical VSP inverse problem is then considered, with the seismic waveform inversion set as a least-squares problem consisting in recovering the distribution of physical parameters which minimizes the misfit between calculated and observed VSP. The corresponding problem requires the knowledge of the source function

  14. Gradient-type methods in inverse parabolic problems

    International Nuclear Information System (INIS)

    Kabanikhin, Sergey; Penenko, Aleksey

    2008-01-01

    This article is devoted to gradient-based methods for inverse parabolic problems. In the first part, we present a priori convergence theorems based on the conditional stability estimates for linear inverse problems. These theorems are applied to backwards parabolic problem and sideways parabolic problem. The convergence conditions obtained coincide with sourcewise representability in the self-adjoint backwards parabolic case but they differ in the sideways case. In the second part, a variational approach is formulated for a coefficient identification problem. Using adjoint equations, a formal gradient of an objective functional is constructed. A numerical test illustrates the performance of conjugate gradient algorithm with the formal gradient.

  15. A variational Bayesian method to inverse problems with impulsive noise

    KAUST Repository

    Jin, Bangti

    2012-01-01

    We propose a novel numerical method for solving inverse problems subject to impulsive noises which possibly contain a large number of outliers. The approach is of Bayesian type, and it exploits a heavy-tailed t distribution for data noise to achieve robustness with respect to outliers. A hierarchical model with all hyper-parameters automatically determined from the given data is described. An algorithm of variational type by minimizing the Kullback-Leibler divergence between the true posteriori distribution and a separable approximation is developed. The numerical method is illustrated on several one- and two-dimensional linear and nonlinear inverse problems arising from heat conduction, including estimating boundary temperature, heat flux and heat transfer coefficient. The results show its robustness to outliers and the fast and steady convergence of the algorithm. © 2011 Elsevier Inc.

  16. Algorithms for orbit control on SPEAR

    International Nuclear Information System (INIS)

    Corbett, J.; Keeley, D.; Hettel, R.; Linscott, I.; Sebek, J.

    1994-06-01

    A global orbit feedback system has been installed on SPEAR to help stabilize the position of the photon beams. The orbit control algorithms depend on either harmonic reconstruction of the orbit or eigenvector decomposition. The orbit motion is corrected by dipole corrector kicks determined from the inverse corrector-to-BPM response matrix. This paper outlines features of these control algorithms as applied to SPEAR.
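
    As a rough illustration of the correction step described above, the sketch below computes corrector kicks from a truncated-SVD pseudo-inverse of a made-up corrector-to-BPM response matrix; the matrix dimensions, the number of retained singular values and the orbit data are all hypothetical.

```python
import numpy as np

# Hypothetical response matrix R (BPM reading per unit corrector kick) and a
# measured orbit error; corrector kicks come from a pseudo-inverse of R.
rng = np.random.default_rng(1)
R = rng.normal(size=(24, 12))          # 24 BPMs, 12 dipole correctors (illustrative)
orbit_error = rng.normal(size=24)      # measured orbit deviation at the BPMs

# Truncated SVD pseudo-inverse: keeping only the strongest singular values is
# one simple way to mimic the eigenvector-decomposition flavour of the correction.
U, s, Vt = np.linalg.svd(R, full_matrices=False)
n_keep = 8
kicks = -(Vt[:n_keep].T @ np.diag(1.0 / s[:n_keep]) @ U[:, :n_keep].T) @ orbit_error

residual = orbit_error + R @ kicks     # predicted orbit after applying the kicks
print(np.linalg.norm(orbit_error), np.linalg.norm(residual))
```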

  17. Pareto joint inversion of 2D magnetotelluric and gravity data

    Science.gov (United States)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

    In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of the development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description using a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate non-realistic solution proposals. Because PSO is a method of stochastic global optimization, it requires a lot of proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages of the proposed solution of joint inversion problems. First of all, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models, imitating real geological conditions, where

  18. Inverse Reliability Task: Artificial Neural Networks and Reliability-Based Optimization Approaches

    OpenAIRE

    Lehký, David; Slowik, Ondřej; Novák, Drahomír

    2014-01-01

    The paper presents two alternative approaches to solve the inverse reliability task – to determine the design parameters to achieve desired target reliabilities. The first approach is based on utilization of artificial neural networks and small-sample simulation Latin hypercube sampling. The second approach considers the inverse reliability task as a reliability-based optimization task using a double-loop method and also small-sample simulation. Efficie...

  19. Quantitative analysis of SMEX'02 AIRSAR data for soil moisture inversion

    Science.gov (United States)

    Zyl, J. J. van; Njoku, E.; Jackson, T.

    2003-01-01

    This paper discusses in detail the characteristics of the AIRSAR data acquired, and provides an initial quantitative assessment of the accuracy of the radar inversion algorithms under these vegetated conditions.

  20. Regularized inversion of controlled source audio-frequency magnetotelluric data in horizontally layered transversely isotropic media

    International Nuclear Information System (INIS)

    Zhou, Jianmei; Shang, Qinglong; Wang, Hongnian; Wang, Jianxun; Yin, Changchun

    2014-01-01

    We present an algorithm for inverting controlled source audio-frequency magnetotelluric (CSAMT) data in horizontally layered transversely isotropic (TI) media. The popular inversion method parameterizes the media into a large number of layers of fixed thickness and reconstructs only the conductivities (e.g. Occam's inversion), which does not enable the recovery of sharp interfaces between layers. In this paper, we simultaneously reconstruct all the model parameters, including both the horizontal and vertical conductivities and the layer depths. Applying the perturbation principle and the dyadic Green's function in TI media, we derive the analytic expression of the Fréchet derivatives of the CSAMT responses with respect to all the model parameters in the form of Sommerfeld integrals. A regularized iterative inversion method is established to simultaneously reconstruct all the model parameters. Numerical results show that including the depths of the layer interfaces in the inversion can significantly improve the results. The algorithm can not only reconstruct the sharp interfaces between layers, but also obtain conductivities close to the true values. (paper)
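
    A minimal sketch of a damped (Tikhonov-regularized) Gauss-Newton iteration of the kind referred to above is given below; the toy exponential forward model, its Jacobian and the damping value are stand-ins and have nothing to do with the actual CSAMT kernels or Sommerfeld integrals.

```python
import numpy as np

# Damped Gauss-Newton loop for a toy forward model d(t) = m0 * exp(-m1 * t).
t = np.linspace(0.0, 2.0, 20)

def forward(m):
    return m[0] * np.exp(-m[1] * t)

def jacobian(m):                       # analytic Frechet derivatives of the toy model
    e = np.exp(-m[1] * t)
    return np.column_stack((e, -m[0] * t * e))

m_true = np.array([3.0, 1.5])
d_obs = forward(m_true)

m = np.array([1.0, 0.5])               # starting model
lam = 1e-3                             # damping / regularization parameter
for _ in range(25):
    r = d_obs - forward(m)
    J = jacobian(m)
    dm = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
    m = m + dm
    if np.linalg.norm(dm) < 1e-12:
        break
print(m)                               # should be close to m_true
```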

  1. Anisotropic parameter inversion in VTI media using diffraction data

    KAUST Repository

    Waheed, Umair bin; Alkhalifah, Tariq Ali; Stovas, Alexey

    2013-01-01

    . Using this property of diffraction data to our advantage, we develop an algorithm to invert for the effective η model, assuming no prior knowledge of it. The obtained effective η model is then converted to an interval η model using a Dix-type inversion formula

  2. Plasma diagnostics by Abel inversion in hyperbolic geometry

    International Nuclear Information System (INIS)

    Alhasi, A.S.; Elliott, J.A.

    1992-01-01

    Plasma confined in the UMIST linear quadrupole adopts a configuration with approximately hyperbolic symmetry. The normal diagnostic is a Langmuir probe, but we have developed an alternative method using optical emission tomography based upon an analytic Abel inversion. Plasma radiance is obtained as a function of a parameter identifying magnetic flux surfaces. The inversion algorithm has been tested using artificial data. Experimentally, the results show that ionizing collisions cause the confined plasma distribution to broaden as the plasma travels through the confining field. This is shown to be a consequence of the approximate incompressibility of the E x B flow. (author)

  3. Subspace-based Inverse Uncertainty Quantification for Nuclear Data Assessment

    Energy Technology Data Exchange (ETDEWEB)

    Khuwaileh, B.A., E-mail: bakhuwai@ncsu.edu; Abdel-Khalik, H.S.

    2015-01-15

    Safety analysis and design optimization depend on the accurate prediction of various reactor attributes. Predictions can be enhanced by reducing the uncertainty associated with the attributes of interest. An inverse problem can be defined and solved to assess the sources of uncertainty, and experimental effort can be subsequently directed to further improve the uncertainty associated with these sources. In this work a subspace-based algorithm for inverse sensitivity/uncertainty quantification (IS/UQ) has been developed to enable analysts to account for all sources of nuclear data uncertainties in support of target accuracy assessment-type analysis. An approximate analytical solution of the optimization problem is used to guide the search for the dominant uncertainty subspace. By limiting the search to a subspace, the degrees of freedom available for the optimization search are significantly reduced. A quarter PWR fuel assembly is modeled and the accuracy of the multiplication factor and the fission reaction rate are used as reactor attributes whose uncertainties are to be reduced. Numerical experiments are used to demonstrate the computational efficiency of the proposed algorithm. Our ongoing work is focusing on extending the proposed algorithm to account for various forms of feedback, e.g., thermal-hydraulics and depletion effects.

  4. Calculation method of water injection forward modeling and inversion process in oilfield water injection network

    Science.gov (United States)

    Liu, Long; Liu, Wei

    2018-04-01

    A forward modeling and inversion algorithm is adopted in order to determine the water injection plan in an oilfield water injection network. The main idea of the algorithm is as follows: first, the oilfield water injection network is calculated inversely and the pumping station demand flow is obtained. Then, a forward modeling calculation is carried out to judge whether all water injection wells meet the injection allocation requirements. If they do, the calculation stops; otherwise, the demanded injection allocation flow rate is reduced by a certain step size for the wells that do not meet the requirements, and the next iteration is started. The algorithm does not need to be embedded into the overall water injection network system algorithm and can be realized easily. An iterative method is used, which is well suited to computer programming. Experimental results show that the algorithm is fast and accurate.

  5. New algorithms for parallel MRI

    International Nuclear Information System (INIS)

    Anzengruber, S; Ramlau, R; Bauer, F; Leitao, A

    2008-01-01

    Magnetic Resonance Imaging with parallel data acquisition requires algorithms for reconstructing the patient's image from a small number of measured lines of the Fourier domain (k-space). In contrast to well-known algorithms like SENSE and GRAPPA and their variants, we consider the problem as a non-linear inverse problem. However, in order to avoid cost-intensive derivatives we use the Landweber-Kaczmarz iteration and, in order to improve the overall results, additional sparsity constraints.
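
    To make the iteration concrete, the sketch below applies a plain Landweber iteration to a toy complex linear system; the Kaczmarz aspect (cycling over coil sub-operators) and the sparsity constraints mentioned above are omitted, and the operator is a random matrix rather than a sensitivity-encoded Fourier sampling operator.

```python
import numpy as np

# Plain Landweber iteration x_{k+1} = x_k + w * A^H (y - A x_k) for a toy
# underdetermined complex linear system.
rng = np.random.default_rng(2)
A = rng.normal(size=(60, 120)) + 1j * rng.normal(size=(60, 120))
x_true = np.zeros(120, dtype=complex); x_true[30:40] = 1.0
y = A @ x_true

w = 1.0 / np.linalg.norm(A, 2) ** 2   # step size well below 2/||A||^2 ensures convergence
x = np.zeros(120, dtype=complex)
for _ in range(500):
    x = x + w * A.conj().T @ (y - A @ x)
print(np.linalg.norm(A @ x - y))      # data residual should be small
```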

  6. Inverse problem for the mean-field monomer-dimer model with attractive interaction

    International Nuclear Information System (INIS)

    Contucci, Pierluigi; Luzi, Rachele; Vernia, Cecilia

    2017-01-01

    The inverse problem method is tested for a class of monomer-dimer statistical mechanics models that also contain an attractive potential and display a mean-field critical point at a boundary of a coexistence line. The inversion is obtained by analytically identifying the parameters in terms of the correlation functions and via the maximum-likelihood method. The precision is tested in the whole phase space and, when close to the coexistence line, the algorithm is used together with a clustering method to take care of the underlying possible ambiguity of the inversion. (paper)

  7. Seismic waveform inversion best practices: regional, global and exploration test cases

    Science.gov (United States)

    Modrak, Ryan; Tromp, Jeroen

    2016-09-01

    Reaching the global minimum of a waveform misfit function requires careful choices about the nonlinear optimization, preconditioning and regularization methods underlying an inversion. Because waveform inversion problems are susceptible to erratic convergence associated with strong nonlinearity, one or two test cases are not enough to reliably inform such decisions. We identify best practices, instead, using four seismic near-surface problems, one regional problem and two global problems. To make meaningful quantitative comparisons between methods, we carry out hundreds of inversions, varying one aspect of the implementation at a time. Comparing nonlinear optimization algorithms, we find that limited-memory BFGS provides computational savings over nonlinear conjugate gradient methods in a wide range of test cases. Comparing preconditioners, we show that a new diagonal scaling derived from the adjoint of the forward operator provides better performance than two conventional preconditioning schemes. Comparing regularization strategies, we find that projection, convolution, Tikhonov regularization and total variation regularization are effective in different contexts. Besides questions of one strategy or another, reliability and efficiency in waveform inversion depend on close numerical attention and care. Implementation details involving the line search and restart conditions have a strong effect on computational cost, regardless of the chosen nonlinear optimization algorithm.
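
    As a lightweight illustration of the optimizer comparison described above, the sketch below runs SciPy's L-BFGS and nonlinear conjugate gradient implementations on the Rosenbrock test function, which stands in for a waveform misfit; it only demonstrates the bookkeeping (function evaluations, iterations, final misfit), not any seismic result.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Compare a limited-memory quasi-Newton method with nonlinear CG on the same
# nonconvex test function, starting from the same point.
x0 = np.full(20, -1.2)
for method in ("L-BFGS-B", "CG"):
    res = minimize(rosen, x0, jac=rosen_der, method=method,
                   options={"maxiter": 10000})
    print(f"{method:9s}  fevals={res.nfev:5d}  iters={res.nit:5d}  misfit={res.fun:.3e}")
```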

  8. Nonlinear inversion of borehole-radar tomography data to reconstruct velocity and attenuation distribution in earth materials

    Science.gov (United States)

    Zhou, C.; Liu, L.; Lane, J.W.

    2001-01-01

    A nonlinear tomographic inversion method that uses first-arrival travel-time and amplitude-spectra information from cross-hole radar measurements was developed to simultaneously reconstruct electromagnetic velocity and attenuation distributions in earth materials. Inversion methods were developed to analyze single cross-hole tomography surveys and differential tomography surveys. Assuming the earth behaves as a linear system, the inversion methods do not require estimation of the source radiation pattern, receiver coupling, or geometrical spreading. The data analysis and tomographic inversion algorithm were applied to synthetic test data and to cross-hole radar field data provided by the US Geological Survey (USGS). The cross-hole radar field data were acquired at the USGS fractured-rock field research site at Mirror Lake near Thornton, New Hampshire, before and after injection of a saline tracer, to monitor the transport of electrically conductive fluids in the image plane. Results from the synthetic data test demonstrate the algorithm's computational efficiency and indicate that the method can robustly reconstruct electromagnetic (EM) wave velocity and attenuation distributions in earth materials. The field test results outline zones of velocity and attenuation anomalies consistent with the findings of previous investigators; however, the tomograms appear to be quite smooth. Further work is needed to effectively find the optimal smoothness criterion in applying the Tikhonov regularization in the nonlinear inversion algorithms for cross-hole radar tomography. © 2001 Elsevier Science B.V. All rights reserved.

  9. High-resolution Fracture Characterization Using Elastic Full-waveform Inversion

    KAUST Repository

    Zhang, Z.; Tsvankin, I.; Alkhalifah, Tariq Ali

    2017-01-01

    Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution. Here, we propose to estimate both the spatial distribution and physical properties of fractures using full waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. To better understand the inversion results, we analyze the FWI radiation patterns of the fracture weaknesses. A shape regularization term is added to the objective function to improve the inversion for the horizontal weakness, which is otherwise poorly constrained. Alternatively, a simplified model of penny-shaped cracks is used to reduce the nonuniqueness in the inverted weaknesses and achieve a faster convergence.

  10. High-resolution Fracture Characterization Using Elastic Full-waveform Inversion

    KAUST Repository

    Zhang, Z.

    2017-05-26

    Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution. Here, we propose to estimate both the spatial distribution and physical properties of fractures using full waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. To better understand the inversion results, we analyze the FWI radiation patterns of the fracture weaknesses. A shape regularization term is added to the objective function to improve the inversion for the horizontal weakness, which is otherwise poorly constrained. Alternatively, a simplified model of penny-shaped cracks is used to reduce the nonuniqueness in the inverted weaknesses and achieve a faster convergence.

  11. Expansion around half-integer values, binomial sums, and inverse binomial sums

    International Nuclear Information System (INIS)

    Weinzierl, Stefan

    2004-01-01

    I consider the expansion of transcendental functions in a small parameter around rational numbers. This includes in particular the expansion around half-integer values. I present algorithms which are suitable for an implementation within a symbolic computer algebra system. The method is an extension of the technique of nested sums. The algorithms additionally allow the evaluation of binomial sums, inverse binomial sums and generalizations thereof.

  12. Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations

    OpenAIRE

    Soleimani, Farahnaz; Stanimirović, Predrag; Soleymani, Fazlollah

    2015-01-01

    An application of iterative methods for computing the Moore–Penrose inverse in balancing chemical equations is considered. With the aim to illustrate proposed algorithms, an improved high order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility to accelerate the iterations in the initial phase of the convergence. Although the ...
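
    The sketch below shows the simplest member of the hyper-power family, the second-order Newton-Schulz iteration for the Moore-Penrose inverse, with a standard safe initialization; the factorization and acceleration refinements mentioned in the record, as well as the chemical-equation application, are not reproduced.

```python
import numpy as np

# Second-order hyper-power (Newton-Schulz) iteration X_{k+1} = X_k (2I - A X_k);
# higher-order hyper-power variants add further terms inside the bracket.
rng = np.random.default_rng(3)
A = rng.normal(size=(6, 4))                     # illustrative rectangular matrix

# Safe initial guess: X0 = A^T / (||A||_1 ||A||_inf) guarantees convergence.
X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))
I = np.eye(6)
for _ in range(60):
    X = X @ (2 * I - A @ X)

print(np.linalg.norm(X - np.linalg.pinv(A)))    # error vs the true pseudoinverse
```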

  13. Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization

    KAUST Repository

    Gower, Robert M.; Hanzely, Filip; Richtarik, Peter; Stich, Sebastian

    2018-01-01

    We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite

  14. Application of a two-and-a-half dimensional model-based algorithm to crosswell electromagnetic data inversion

    International Nuclear Information System (INIS)

    Li, Maokun; Abubakar, Aria; Habashy, Tarek M

    2010-01-01

    In this paper, we apply a model-based inversion scheme to the interpretation of crosswell electromagnetic data. In this approach, we use open and closed polygons to parameterize the unknown configuration. The parameters that define these polygons are then inverted for by minimizing the data misfit cost function. Compared with the pixel-based inversion approach, the model-based inversion uses only a small number of parameters; hence, it is more efficient. Furthermore, with sufficient sensitivity in the data, the model-based approach can provide quantitative estimates of the inverted parameters such as the conductivity. The model-based inversion also provides a convenient way to incorporate a priori information from other independent measurements such as seismic, gravity and well logs.

  15. hp-HGS strategy for inverse 3D DC resistivity logging measurement simulations

    KAUST Repository

    Gajda-Zagórska, Ewa; Paszyński, Maciej; Schaefer, Robert; Pardo, David; Calo, Victor M.

    2012-01-01

    In this paper we present a twin adaptive strategy, hp-HGS, for solving inverse problems related to 3D DC borehole resistivity measurement simulations. The term “simulation of measurements” is widely used by the geophysical community. A quantity of interest, the voltage, is measured at a receiver electrode located in the logging instrument. We use self-adaptive goal-oriented hp-Finite Element Method (hp-FEM) computer simulations of the measurement process in deviated wells (where the angle between the borehole and the formation layers is < 90 deg). We also employ the hierarchical genetic search (HGS) algorithm to solve the inverse problem. Each individual in the population represents a single configuration of the formation layers. The evaluation of an individual is performed by solving the direct problem by means of the hp-FEM algorithm and by comparison with the measured logging curve. We conclude the paper with some discussion on the parallelization of the algorithm.

  16. hp-HGS strategy for inverse 3D DC resistivity logging measurement simulations

    KAUST Repository

    Gajda-Zagórska, Ewa

    2012-06-02

    In this paper we present a twin adaptive strategy, hp-HGS, for solving inverse problems related to 3D DC borehole resistivity measurement simulations. The term “simulation of measurements” is widely used by the geophysical community. A quantity of interest, the voltage, is measured at a receiver electrode located in the logging instrument. We use self-adaptive goal-oriented hp-Finite Element Method (hp-FEM) computer simulations of the measurement process in deviated wells (where the angle between the borehole and the formation layers is < 90 deg). We also employ the hierarchical genetic search (HGS) algorithm to solve the inverse problem. Each individual in the population represents a single configuration of the formation layers. The evaluation of an individual is performed by solving the direct problem by means of the hp-FEM algorithm and by comparison with the measured logging curve. We conclude the paper with some discussion on the parallelization of the algorithm.

  17. 3D Multisource Full‐Waveform Inversion using Dynamic Random Phase Encoding

    KAUST Repository

    Boonyasiriwat, Chaiwoot; Schuster, Gerard T.

    2010-01-01

    We have developed a multisource full‐waveform inversion algorithm using a dynamic phase encoding strategy with dual‐randomization—both the position and polarity of simultaneous sources are randomized and changed every iteration. The dynamic dual

  18. Efficient Inversion of Mult-frequency and Multi-Source Electromagnetic Data

    Energy Technology Data Exchange (ETDEWEB)

    Gary D. Egbert

    2007-03-22

    The project covered by this report focused on the development of efficient but robust non-linear inversion algorithms for electromagnetic induction data, in particular for data collected with multiple receivers and multiple transmitters, a situation extremely common in geophysical EM subsurface imaging methods. A key observation is that for such multi-transmitter problems each step in commonly used linearized iterative limited-memory search schemes such as conjugate gradients (CG) requires solution of forward and adjoint EM problems for each of the N frequencies or sources, essentially generating data sensitivities for an N-dimensional data subspace. These multiple sensitivities allow a good approximation to the full Jacobian of the data mapping to be built up in many fewer search steps than would be required by application of textbook optimization methods, which take no account of the multiplicity of forward problems that must be solved for each search step. We have applied this idea to develop a hybrid inversion scheme that combines features of the iterative limited-memory type methods with a Newton-type approach using a partial calculation of the Jacobian. Initial tests on 2D problems show that the new approach produces results essentially identical to a Newton-type Occam minimum-structure inversion, while running more rapidly than an iterative (fixed regularization parameter) CG-style inversion. Memory requirements, while greater than for something like CG, are modest enough that even in 3D the scheme should allow inverse problems to be solved on a common desktop PC, at least for modest (~ 100 sites, 15-20 frequencies) data sets. A secondary focus of the research has been development of a modular system for EM inversion, using an object-oriented approach. This system has proven useful for more rapid prototyping of inversion algorithms, in particular allowing initial development and testing to be conducted with two-dimensional example problems, before

  19. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved. In this way, computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and examples of application to practical engineering problems are discussed to show the efficacy of the proposed neural network.

  20. Two-Dimensional Steady-State Boundary Shape Inversion of CGM-SPSO Algorithm on Temperature Information

    Directory of Open Access Journals (Sweden)

    Shoubin Wang

    2017-01-01

    Addressing the problem of two-dimensional steady-state thermal boundary recognition, a hybrid algorithm combining the conjugate gradient method and social particle swarm optimization (CGM-SPSO) is proposed. The global search ability of the particle swarm optimization algorithm and the local search ability of the gradient algorithm are effectively combined, which overcomes the tendency of the conjugate gradient method to converge to a local solution and its heavy reliance on the initial approximation of the iterative process. The hybrid algorithm also avoids the large number of iterative steps and the long run time required by the particle swarm optimization algorithm alone. The experimental results show that the proposed algorithm is feasible and effective in solving the two-dimensional steady-state thermal boundary shape problem.
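
    The sketch below illustrates the general shape of such a hybrid: a minimal particle swarm search followed by a conjugate-gradient refinement of the best particle. The Rosenbrock function stands in for the thermal boundary objective, and the swarm parameters and the plain (non-social) PSO variant are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import minimize, rosen, rosen_der

# Stage 1: a bare-bones PSO global search on a nonconvex test function.
rng = np.random.default_rng(4)
dim, n_particles = 5, 40
x = rng.uniform(-2, 2, size=(n_particles, dim))
v = np.zeros_like(x)
p_best = x.copy()
p_val = np.array([rosen(xi) for xi in x])
g_best = p_best[p_val.argmin()].copy()

for _ in range(200):
    r1, r2 = rng.random((2, n_particles, dim))
    v = 0.7 * v + 1.5 * r1 * (p_best - x) + 1.5 * r2 * (g_best - x)
    x = x + v
    vals = np.array([rosen(xi) for xi in x])
    improved = vals < p_val
    p_best[improved], p_val[improved] = x[improved], vals[improved]
    g_best = p_best[p_val.argmin()].copy()

# Stage 2: gradient-based (conjugate gradient) refinement from the PSO result.
res = minimize(rosen, g_best, jac=rosen_der, method="CG")
print(rosen(g_best), res.fun)          # objective before and after refinement
```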

  1. Three-dimensional induced polarization data inversion for complex resistivity

    Energy Technology Data Exchange (ETDEWEB)

    Commer, M.; Newman, G.A.; Williams, K.H.; Hubbard, S.S.

    2011-03-15

    The conductive and capacitive material properties of the subsurface can be quantified through the frequency-dependent complex resistivity. However, the routine three-dimensional (3D) interpretation of voluminous induced polarization (IP) data sets still poses a challenge due to large computational demands and solution nonuniqueness. We have developed a flexible methodology for 3D (spectral) IP data inversion. Our inversion algorithm is adapted from a frequency-domain electromagnetic (EM) inversion method primarily developed for large-scale hydrocarbon and geothermal energy exploration purposes. The method has proven to be efficient by implementing the nonlinear conjugate gradient method with hierarchical parallelism and by using an optimal finite-difference forward modeling mesh design scheme. The method allows for a large range of survey scales, providing a tool for both exploration and environmental applications. We experimented with an image focusing technique to improve the poor depth resolution of surface data sets with small survey spreads. The algorithm's underlying forward modeling operator properly accounts for EM coupling effects; thus, traditionally used EM coupling correction procedures are not needed. The methodology was applied to both synthetic and field data. We tested the benefit of directly inverting EM coupling contaminated data using a synthetic large-scale exploration data set. Afterward, we further tested the monitoring capability of our method by inverting time-lapse data from an environmental remediation experiment near Rifle, Colorado. Similar trends observed in both our solution and another 2D inversion were in accordance with previous findings about the IP effects due to subsurface microbial activity.

  2. Toward precise solution of one-dimensional velocity inverse problems

    International Nuclear Information System (INIS)

    Gray, S.; Hagin, F.

    1980-01-01

    A family of one-dimensional inverse problems is considered with the goal of reconstructing velocity profiles to reasonably high accuracy. The travel-time variable change is used together with an iteration scheme to produce an effective algorithm for computation. Under modest assumptions the scheme is shown to be convergent.

  3. Full Waveform Inversion for Reservoir Characterization - A Synthetic Study

    KAUST Repository

    Zabihi Naeini, E.

    2017-05-26

    Most current reservoir-characterization workflows are based on classic amplitude-variation-with-offset (AVO) inversion techniques. Although these methods have generally served us well over the years, here we examine full-waveform inversion (FWI) as an alternative tool for higher-resolution reservoir characterization. An important step in developing reservoir-oriented FWI is the implementation of facies-based rock physics constraints adapted from the classic methods. We show that such constraints can be incorporated into FWI by adding appropriately designed regularization terms to the objective function. The advantages of the proposed algorithm are demonstrated on both isotropic and VTI (transversely isotropic with a vertical symmetry axis) models with pronounced lateral and vertical heterogeneity. The inversion results are explained using the theoretical radiation patterns produced by perturbations in the medium parameters.

  4. Three-dimensional Gravity Inversion with a New Gradient Scheme on Unstructured Grids

    Science.gov (United States)

    Sun, S.; Yin, C.; Gao, X.; Liu, Y.; Zhang, B.

    2017-12-01

    Stabilized gradient-based methods have proved to be efficient for inverse problems. In these methods, driving the gradient close to zero effectively minimizes the objective function, so the gradient of the objective function determines the inversion results. By analyzing the cause of poor depth resolution in gradient-based gravity inversion methods, we find that imposing a depth-weighting functional on the conventional gradient can improve the depth resolution to some extent. However, the improvement is affected by the regularization parameter, and the effect of the regularization term becomes smaller with increasing depth (shown in Figure 1(a)). In this paper, we propose a new gradient scheme for gravity inversion by introducing a weighted model vector. The new gradient improves the depth resolution more efficiently, is independent of the regularization parameter, and the effect of the regularization term is not weakened as depth increases. In addition, a fuzzy c-means clustering method and a smoothing operator are both used as regularization terms to yield an internally consistent inverse model with sharp boundaries (Sun and Li, 2015). We have tested our new gradient scheme with unstructured grids on synthetic data to illustrate the effectiveness of the algorithm. Gravity forward modeling with unstructured grids is based on the algorithm proposed by Okabe (1979). We use a linear conjugate gradient inversion scheme to solve the inversion problem. The numerical experiments show a great improvement in depth resolution compared with the regular gradient scheme, and the inverse model is compact at all depths (shown in Figure 1(b)). Acknowledgement: This research is supported by the Key Program of the National Natural Science Foundation of China (41530320), the China Natural Science Foundation for Young Scientists (41404093), and the Key National Research Project of China (2016YFC0303100, 2017YFC0601900). References: Sun J, Li Y. 2015. Multidomain petrophysically constrained inversion and

  5. Jump-and-return sandwiches: A new family of binomial-like selective inversion sequences with improved performance

    Science.gov (United States)

    Brenner, Tom; Chen, Johnny; Stait-Gardner, Tim; Zheng, Gang; Matsukawa, Shingo; Price, William S.

    2018-03-01

    A new family of binomial-like inversion sequences, named jump-and-return sandwiches (JRS), has been developed by inserting a binomial-like sequence into a standard jump-and-return sequence, discovered through use of a stochastic Genetic Algorithm optimisation. Compared to currently used binomial-like inversion sequences (e.g., 3-9-19 and W5), the new sequences afford wider inversion bands and narrower non-inversion bands with an equal number of pulses. As an example, two jump-and-return sandwich 10-pulse sequences achieved 95% inversion at offsets corresponding to 9.4% and 10.3% of the non-inversion band spacing, compared to 14.7% for the binomial-like W5 inversion sequence, i.e., they afforded non-inversion bands about two thirds the width of the W5 non-inversion band.

  6. Sparse Nonlinear Electromagnetic Imaging Accelerated With Projected Steepest Descent Algorithm

    KAUST Repository

    Desmal, Abdulla

    2017-04-03

    An efficient electromagnetic inversion scheme for imaging sparse 3-D domains is proposed. The scheme achieves its efficiency and accuracy by integrating two concepts. First, the nonlinear optimization problem is constrained using L₀ or L₁-norm of the solution as the penalty term to alleviate the ill-posedness of the inverse problem. The resulting Tikhonov minimization problem is solved using nonlinear Landweber iterations (NLW). Second, the efficiency of the NLW is significantly increased using a steepest descent algorithm. The algorithm uses a projection operator to enforce the sparsity constraint by thresholding the solution at every iteration. Thresholding level and iteration step are selected carefully to increase the efficiency without sacrificing the convergence of the algorithm. Numerical results demonstrate the efficiency and accuracy of the proposed imaging scheme in reconstructing sparse 3-D dielectric profiles.
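
    The sketch below shows the bare skeleton of such a projected steepest-descent (iterative hard thresholding) loop on a toy sparse recovery problem; the random sensing matrix, sparsity level and step size are illustrative assumptions, and the nonlinear electromagnetic operator and Landweber machinery of the record are not reproduced.

```python
import numpy as np

# Steepest-descent step on ||y - A x||^2 followed by projection onto the set of
# s-sparse vectors ("threshold the solution at every iteration").
rng = np.random.default_rng(5)
m, n, s = 80, 200, 8
A = rng.normal(size=(m, n)) / np.sqrt(m)
x_true = np.zeros(n); x_true[rng.choice(n, s, replace=False)] = rng.normal(size=s)
y = A @ x_true

mu = 1.0 / np.linalg.norm(A, 2) ** 2   # conservative step size
x = np.zeros(n)
for _ in range(300):
    x = x + mu * A.T @ (y - A @ x)     # steepest-descent update
    keep = np.argsort(np.abs(x))[-s:]  # projection: keep only the s largest entries
    mask = np.zeros(n, dtype=bool); mask[keep] = True
    x[~mask] = 0.0
print(np.linalg.norm(x - x_true))      # typically recovers x_true in this easy regime
```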

  7. Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler

    Science.gov (United States)

    Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham

    2018-04-01

    We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.

  8. Application of Bayesian Inversion for Multilayer Reservoir Mapping while Drilling Measurements

    Science.gov (United States)

    Wang, J.; Chen, H.; Wang, X.

    2017-12-01

    Real-time geosteering technology plays a key role in horizontal well development, keeping wellbore trajectories within target zones to maximize reservoir contact. The new generation of logging while drilling (LWD) resistivity tools has longer spacings and deeper depths of investigation, but at the same time brings a new challenge to the inversion of logging data: the formation model should not be restricted to a small, fixed number of layers, such as the typical three-layer model. Adopting inappropriate starting models in deterministic, gradient-based methods may mislead geophysicists in the interpretation of the subsurface structure. For this purpose, to take advantage of the richness of the measurements and the deep depth of investigation across multiple formation boundaries, a trans-dimensional Markov chain Monte Carlo (MCMC) inversion algorithm has been developed that combines phase and attenuation measurements at various frequencies and spacings. Unlike conventional gradient-based inversion approaches, the MCMC algorithm does not introduce bias from prior information or require any subjective choice of regularization parameter. A synthetic three-layer model example demonstrates how the algorithm can be used to image the subsurface using the LWD data. When the tool is far from the top boundary, the inversion clearly resolves the boundary position; that is where the boundary histogram shows a large peak. But the measurements cannot resolve the bottom boundary; the large spread between quantiles reflects the uncertainty associated with the bed resolution. As the tool moves closer to the top boundary, the middle and bottom layers are resolved, the retained models become more similar, and the uncertainty associated with these two beds decreases. From the spread observed between models, we can evaluate the actual depth of investigation, uncertainty, and sensitivity, which is more useful than just a single best model.

  9. Collision-free inverse kinematics of the redundant seven link manipulator used in a cucumber harvesting robot

    NARCIS (Netherlands)

    Henten, van E.J.; Schenk, E.J.J.; Willigenburg, van L.G.; Meuleman, J.; Barreiro, P.

    2010-01-01

    The paper presents results of research on an inverse kinematics algorithm that has been used in a functional model of a cucumber-harvesting robot consisting of a redundant P6R manipulator. Within a first generic approach, the inverse kinematics problem was reformulated as a non-linear programming

  10. Exploring SWOT discharge algorithm accuracy on the Sacramento River

    Science.gov (United States)

    Durand, M. T.; Yoon, Y.; Rodriguez, E.; Minear, J. T.; Andreadis, K.; Pavelsky, T. M.; Alsdorf, D. E.; Smith, L. C.; Bales, J. D.

    2012-12-01

    Scheduled for launch in 2019, the Surface Water and Ocean Topography (SWOT) satellite mission will utilize a Ka-band radar interferometer to measure river heights, widths, and slopes globally, as well as characterize storage change in lakes and ocean surface dynamics, with a spatial resolution ranging from 10 - 70 m and temporal revisits on the order of a week. A discharge algorithm has been formulated to solve the inverse problem of characterizing river bathymetry and the roughness coefficient from SWOT observations. The algorithm uses a Bayesian Markov Chain estimation approach, treats rivers as sets of interconnected reaches (typically 5 km - 10 km in length), and produces best estimates of river bathymetry, roughness coefficient, and discharge, given SWOT observables. AirSWOT (the airborne version of SWOT) consists of a radar interferometer similar to SWOT, but mounted aboard an aircraft. AirSWOT spatial resolution will range from 1 - 35 m. In early 2013, AirSWOT will perform several flights over the Sacramento River, capturing river height, width, and slope at several different flow conditions. The Sacramento River presents an excellent target given that the river includes some stretches heavily affected by management (diversions, bypasses, etc.). AirSWOT measurements will be used to validate SWOT observation performance, but are also a unique opportunity for testing and demonstrating the capabilities and limitations of the discharge algorithm. This study uses HEC-RAS simulations of the Sacramento River, first, to characterize expected discharge algorithm accuracy on the Sacramento River, and second, to explore the AirSWOT measurements required to perform a successful inversion with the discharge algorithm. We focus on several specific research questions affecting algorithm performance: 1) To what extent do lateral inflows confound algorithm performance? We examine the ~100 km stretch of river from Colusa, CA to the Yolo Bypass, and investigate how the

  11. Acoustic 2D full waveform inversion to solve gas cloud challenges

    Directory of Open Access Journals (Sweden)

    Srichand Prajapati

    2015-09-01

    The existing conventional inversion algorithm does not provide satisfactory results due to the complexity of the wavefield propagated through the gas cloud. Acoustic full waveform inversion has been developed and applied to a realistic synthetic offshore shallow gas cloud feature with a Student-t approach, with and without simultaneous-source encoding. As the modeling operator, we implemented a grid-based finite-difference method in the frequency domain using the second-order elastic wave equation. The Jacobian operator and its adjoint provide the necessary platform for solving the full waveform inversion problem with a reduced Hessian matrix. We invert the gas cloud model in 5 frequency bands selected from 1 to 12 Hz, each containing 3 frequencies. The inversion results are highly sensitive to the misfit. The model allows better convergence and recovery of amplitude losses. This approach gives better resolution than the existing least-squares approach. In this paper, we implement full waveform inversion for a low-frequency model with a minimal number of iterations, providing better resolution of the inversion results.

  12. The shifting zoom: new possibilities for inverse scattering on electrically large domains

    Science.gov (United States)

    Persico, Raffaele; Ludeno, Giovanni; Soldovieri, Francesco; De Coster, Alberic; Lambot, Sebastien

    2017-04-01

    Inverse scattering is a subject of great interest in diagnostic problems, which are in turn of interest for many applications, such as the investigation of cultural heritage, the characterization of foundations or subservices, the identification of unexploded ordnance, and so on [1-4]. In particular, GPR data are usually focused by means of migration algorithms, essentially based on a linear approximation of the scattering phenomenon. Migration algorithms are popular because they are computationally efficient and do not require the inversion of a matrix, nor the calculation of the elements of a matrix. In fact, they are essentially based on the adjoint of the linearised scattering operator, which in the end allows the inversion formula to be written as a suitably weighted integral of the data [5]. In particular, this makes a migration algorithm more suitable than a linear microwave tomography inversion algorithm for the reconstruction of an electrically large investigation domain. However, this computational challenge can be overcome by making use of investigation domains joined side by side, as proposed e.g. in ref. [3]. This allows a microwave tomography algorithm to be applied even to large investigation domains. However, the joining side by side of sequential investigation domains introduces a problem of limited (and asymmetric) maximum view angle with regard to targets occurring close to the edges between two adjacent domains, or possibly crossing these edges. The shifting zoom is a method that overcomes this difficulty by means of overlapped investigation and observation domains [6-7]. It requires more sequential inversions than adjacent investigation domains do, but the extra time actually required is minimal because the matrix to be inverted, as well as its singular value decomposition, is calculated once and for all: what is repeated more times is only a fast matrix-vector multiplication. References [1] M. Pieraccini, L. Noferini, D. Mecatti, C

  13. High Performance Parallel Multigrid Algorithms for Unstructured Grids

    Science.gov (United States)

    Frederickson, Paul O.

    1996-01-01

    We describe a high performance parallel multigrid algorithm for a rather general class of unstructured grid problems in two and three dimensions. The algorithm PUMG, for parallel unstructured multigrid, is related in structure to the parallel multigrid algorithm PSMG introduced by McBryan and Frederickson, for they both obtain a higher convergence rate through the use of multiple coarse grids. Another reason for the high convergence rate of PUMG is its smoother, an approximate inverse developed by Baumgardner and Frederickson.

  14. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.; Wheeler, Mary Fanett; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines, Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using

  15. NDT applications of the 3D radon transform algorithm for cone beam reconstruction

    International Nuclear Information System (INIS)

    Sire, P.; Grangeat, P.; Lemasson, P.; Molennec, P.; Rizo, P.

    1990-01-01

    The paper describes the authors' 3D X-ray CT algorithm RADON, which uses attenuation measurements acquired with a two-dimensional detector. The authors' inversion scheme uses the synthesis of the first derivative of the Radon transform and then its inversion. The potential of this new method, particularly for large apertures, prompted us to develop optimized software offering convenience and high performance on a modern scientific computer. After a brief review of the basic principles of X-ray image processing, the authors introduce the theoretical developments resulting in the present inversion scheme. A general algorithm structure is then proposed. In conclusion, the authors present the performance and results obtained from the examination of ceramic rotors.

  16. 2.5D inversion of CSEM data in a vertically anisotropic earth

    International Nuclear Information System (INIS)

    Ramananjaona, Christophe; MacGregor, Lucy

    2010-01-01

    The marine Controlled-Source Electromagnetic (CSEM) method is a low frequency (diffusive) electromagnetic subsurface imaging technique aimed at mapping the electric resistivity of the earth by measuring the response to a source dipole emitting an electromagnetic field in a marine environment. Although assuming isotropy for the inversion is the most straightforward approach, in many situations horizontal layering of the earth strata and grain alignment within earth materials creates electric anisotropy. Ignoring this during interpretation may create artifacts in the inversion results. Accounting for this effect therefore requires adequate forward modelling and inversion procedures. We present here an inversion algorithm for vertically anisotropic media based on finite element modelling, the use of Frechet derivatives, and different types of regularisation. Comparisons between isotropic and anisotropic inversion results are given for the characterisation of an anisotropic earth from data measured in line with the source dipole for both synthetic and real data examples.

  17. Multisource waveform inversion of marine streamer data using normalized wavefield

    KAUST Repository

    Choi, Yun Seok

    2013-09-01

    Multisource full-waveform inversion based on the L1- and L2-norm objective functions cannot be applied to marine streamer data because it does not take into account the unmatched acquisition geometries between the observed and modeled data. To apply multisource full-waveform inversion to marine streamer data, we construct the L1- and L2-norm objective functions using the normalized wavefield. The new residual seismograms obtained from the L1- and L2-norms using the normalized wavefield mitigate the problem of unmatched acquisition geometries, which enables multisource full-waveform inversion to work with marine streamer data. In the new approaches using the normalized wavefield, we used the back-propagation algorithm based on the adjoint-state technique to efficiently calculate the gradients of the objective functions. Numerical examples showed that multisource full-waveform inversion using the normalized wavefield yields much better convergence for marine streamer data than conventional approaches. © 2013 Society of Exploration Geophysicists.
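
    The sketch below illustrates the basic device of comparing normalized wavefields: each gather is divided by its own L2 norm before the residual is formed, so an overall amplitude or geometry-related scale mismatch between observed and modeled data drops out. The arrays are random stand-ins for gathers, this is not necessarily the paper's exact normalization, and the adjoint-state gradient machinery of the record is not shown.

```python
import numpy as np

# Residual with and without normalizing each gather by its own L2 norm.
rng = np.random.default_rng(7)
d_obs = rng.normal(size=(64, 500))                          # receivers x time samples
d_mod = 4.0 * d_obs + 0.05 * rng.normal(size=(64, 500))     # scaled, slightly perturbed

def normalize(d):
    return d / np.linalg.norm(d)

raw_residual = d_mod - d_obs                 # dominated by the overall scale mismatch
norm_residual = normalize(d_mod) - normalize(d_obs)
print(np.linalg.norm(raw_residual), np.linalg.norm(norm_residual))
```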

  18. A comparative study of surface waves inversion techniques at strong motion recording sites in Greece

    Science.gov (United States)

    Pelekis, Panagiotis C.; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.

    2015-01-01

    The surface wave method was used for the estimation of the Vs vs depth profile at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, utilizing a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). In this study, three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information regarding the subsurface structure parameters, and c) Occam's inversion algorithm. For each site a constant value of Poisson's ratio was assumed (ν = 0.4), since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations of the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show that the existing variations are insignificant. The comparison results showed that the average variation of the SIM profiles is 9% and 4.9% compared with the NA and Occam's profiles, respectively, whilst the average difference of the Vs30 values obtained from SIM is 7.4% and 5.0% compared with NA and Occam's.

  19. Retrieving global aerosol sources from satellites using inverse modeling

    Directory of Open Access Journals (Sweden)

    O. Dubovik

    2008-01-01

    Understanding aerosol effects on global climate requires knowing the global distribution of tropospheric aerosols. By accounting for aerosol sources, transports, and removal processes, chemical transport models simulate the global aerosol distribution using archived meteorological fields. We develop an algorithm for retrieving global aerosol sources from satellite observations of aerosol distribution by inverting the GOCART aerosol transport model.

    The inversion is based on a generalized, multi-term least-squares-type fitting, allowing flexible selection and refinement of a priori algorithm constraints. For example, limitations can be placed on retrieved quantity partial derivatives, to constrain global aerosol emission space and time variability in the results. Similarities and differences between commonly used inverse modeling and remote sensing techniques are analyzed. To retain the high space and time resolution of long-period, global observational records, the algorithm is expressed using adjoint operators.

    Successful global aerosol emission retrievals at 2°×2.5° resolution were obtained by inverting GOCART aerosol transport model output, assuming constant emissions over the diurnal cycle and neglecting aerosol compositional differences. In addition, fine and coarse mode aerosol emission sources were inverted separately from MODIS fine and coarse mode aerosol optical thickness data, respectively. These assumptions are justified, based on observational coverage and accuracy limitations, and produce valuable aerosol source locations and emission strengths. From two weeks of daily MODIS observations during August 2000, the global placement of fine mode aerosol sources agreed with available independent knowledge, even though the inverse method did not use any a priori information about aerosol sources and was initialized with a "zero aerosol emission" assumption. Retrieving coarse mode aerosol emissions was less successful

  20. Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization

    KAUST Repository

    Gower, Robert M.

    2018-02-12

    We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and lead to speed-ups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate training of machine learning models.
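
    For context, the sketch below shows the classical (non-accelerated) BFGS update of an inverse-Hessian estimate from a step and the corresponding gradient change, applied repeatedly with random directions on a fixed quadratic; the printed error should shrink as updates accumulate. The accelerated and stochastic rules developed in the record are not reproduced here.

```python
import numpy as np

def bfgs_update(H, s, g_diff):
    """Classical BFGS update of an inverse-Hessian estimate H from a step s
    and the corresponding gradient change g_diff (a secant pair)."""
    rho = 1.0 / (g_diff @ s)
    I = np.eye(len(s))
    V = I - rho * np.outer(s, g_diff)
    return V @ H @ V.T + rho * np.outer(s, s)

# Sanity check on a fixed quadratic f(x) = 0.5 x^T A x: exact secant pairs
# (s, A s) along random directions drive H toward A^{-1}.
rng = np.random.default_rng(6)
M = rng.normal(size=(5, 5))
A = M @ M.T + 5 * np.eye(5)            # symmetric positive definite Hessian
H = np.eye(5)                          # initial inverse-Hessian estimate
for _ in range(300):
    s = rng.normal(size=5)             # random direction
    H = bfgs_update(H, s, A @ s)       # for a quadratic, g_diff = A s exactly
print(np.linalg.norm(H - np.linalg.inv(A)))   # should be small
```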

  1. Inverse kinematics research using obstacle avoidance geometry method for EAST Articulated Maintenance Arm (EAMA)

    International Nuclear Information System (INIS)

    Wang, Kun; Song, Yuntao; Wu, Huapeng; Wei, Xiaoyang; Khan, Shahab Ud-Din; Cheng, Yong

    2017-01-01

    Highlights: • An Obstacle Topology Partition Projection (OTPP) method for a tokamak-like vessel for collision detection. • A median-values-first depth-first search algorithm for solving redundant inverse kinematics based on OTPP. • Application of RIK to grasping target objects. - Abstract: This paper proposes a new method for solving the inverse kinematics (IK) of a redundant manipulator called the EAST Articulated Maintenance Arm (EAMA), which is applied in the fusion reactor EAST (Experimental Advanced Superconducting Tokamak) and used to complete maintenance tasks in complex areas. However, it is difficult to realize remote control due to its redundancy, coupled structure and the complex operational environment. The IK research of the robot plays a vital role in the manipulator’s motion control algorithm for remote handling (RH) technology. An Obstacle Topology Partition Projection (OTPP) approach integrated with a Modified Inverse Depth First Search (MIDFS) method is presented. This is a new kind of geometric algorithm for solving the IK problem of a highly redundant manipulator. It can also be used to find a solution satisfying collision avoidance with an optimal safety distance between the manipulator and obstacles. Simulations and experiments were conducted to demonstrate the efficiency and accuracy of the proposed method.

  2. Inverse kinematics research using obstacle avoidance geometry method for EAST Articulated Maintenance Arm (EAMA)

    Energy Technology Data Exchange (ETDEWEB)

    Wang, Kun, E-mail: wangkun@ipp.ac.cn [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei (China); Lappeenranta University of Technology, Lappeenranta (Finland); University of Science and Technology of China, Hefei (China); Song, Yuntao [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei (China); University of Science and Technology of China, Hefei (China); Wu, Huapeng [Lappeenranta University of Technology, Lappeenranta (Finland); Wei, Xiaoyang; Khan, Shahab Ud-Din; Cheng, Yong [Institute of Plasma Physics, Chinese Academy of Sciences, Hefei (China)

    2017-06-15

    Highlights: • An Obstacle Topology Partition Projection (OTPP) method for a tokamak-like vessel for collision detection. • A median-values-first depth-first search algorithm for solving redundant inverse kinematics based on OTPP. • Application of RIK to grasping target objects. - Abstract: This paper proposes a new method for solving the inverse kinematics (IK) of a redundant manipulator called the EAST Articulated Maintenance Arm (EAMA), which is applied in the fusion reactor EAST (Experimental Advanced Superconducting Tokamak) and used to complete maintenance tasks in complex areas. However, it is difficult to realize remote control due to its redundancy, coupled structure and the complex operational environment. The IK research of the robot plays a vital role in the manipulator’s motion control algorithm for remote handling (RH) technology. An Obstacle Topology Partition Projection (OTPP) approach integrated with a Modified Inverse Depth First Search (MIDFS) method is presented. This is a new kind of geometric algorithm for solving the IK problem of a highly redundant manipulator. It can also be used to find a solution satisfying collision avoidance with an optimal safety distance between the manipulator and obstacles. Simulations and experiments were conducted to demonstrate the efficiency and accuracy of the proposed method.

  3. Estimation of fracture parameters using elastic full-waveform inversion

    KAUST Repository

    Zhang, Zhendong

    2017-08-17

    Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution and suffer from uncertainties in the inverted parameters. Here, we propose to estimate the spatial distribution and physical properties of fractures using full-waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. A shape regularization term is added to the objective function to improve the estimation of the fracture azimuth, which is otherwise poorly constrained. The cracks are assumed to be penny-shaped to reduce the nonuniqueness in the inverted fracture weaknesses and achieve a faster convergence. To better understand the inversion results, we analyze the radiation patterns induced by the perturbations in the fracture weaknesses and orientation. Due to the high-resolution potential of elastic FWI, the developed algorithm can recover the spatial fracture distribution and identify localized “sweet spots” of intense fracturing. However, the fracture azimuth can be resolved only using long-offset data.

  4. Fixed-point image orthorectification algorithms for reduced computational cost

    Science.gov (United States)

    French, Joseph Clinton

    Imaging systems have been applied to many new applications in recent years. With the advent of low-cost, low-power focal planes and more powerful, lower cost computers, remote sensing applications have become more widespread. Many of these applications require some form of geolocation, especially when relative distances are desired. However, when greater global positional accuracy is needed, orthorectification becomes necessary. Orthorectification is the process of projecting an image onto a Digital Elevation Map (DEM), which removes terrain distortions and corrects the perspective distortion by changing the viewing angle to be perpendicular to the projection plane. Orthorectification is used in disaster tracking, landscape management, wildlife monitoring and many other applications. However, orthorectification is a computationally expensive process due to floating point operations and divisions in the algorithm. To reduce the computational cost of on-board processing, two novel algorithm modifications are proposed. One modification is projection utilizing fixed-point arithmetic. Fixed-point arithmetic removes the floating point operations and reduces the processing time by operating only on integers. The second modification is replacement of the division inherent in projection with a multiplication by the inverse. Computing the inverse exactly would itself require an iterative routine, so the inverse is instead replaced with a linear approximation. As a result of these modifications, the processing time of projection is reduced by a factor of 1.3x with an average pixel position error of 0.2% of a pixel size for 128-bit integer processing and over 4x with an average pixel position error of less than 13% of a pixel size for 64-bit integer processing. A secondary inverse function approximation is also developed that replaces the linear approximation with a quadratic. The quadratic approximation produces a more accurate approximation of the inverse, allowing for an integer multiplication calculation
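
    The division-free projection idea can be illustrated with a small numeric sketch. The Python snippet below is not the thesis code; it is a minimal illustration, under the assumption of a Q16.16 fixed-point format and illustrative fit coefficients, of how a division can be traded for an integer multiplication by a linearly approximated reciprocal.

```python
import numpy as np

FRAC_BITS = 16                      # assumed Q16.16 fixed-point format
ONE = 1 << FRAC_BITS

def to_fixed(x):
    """Convert a float to its Q16.16 integer representation."""
    return int(round(x * ONE))

def fixed_mul(a, b):
    """Multiply two Q16.16 numbers, keeping Q16.16 scaling."""
    return (a * b) >> FRAC_BITS

def approx_reciprocal(d_fixed):
    """Linear approximation of 1/d for d in [1, 2), in Q16.16.

    Uses the first-order fit 1/d ~= k0 - k1*(d - 1); the coefficients are
    illustrative, not taken from the cited work.
    """
    k0 = to_fixed(0.9404)
    k1 = to_fixed(0.4726)
    return k0 - fixed_mul(k1, d_fixed - ONE)

# Example: replace num / den by num * approx(1/den), entirely in integers.
num, den = 3.7, 1.43                # den assumed normalized into [1, 2)
num_f, den_f = to_fixed(num), to_fixed(den)
q_fixed = fixed_mul(num_f, approx_reciprocal(den_f))
print("fixed-point quotient   :", q_fixed / ONE)
print("floating-point quotient:", num / den)
```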

  5. A fast inverse consistent deformable image registration method based on symmetric optical flow computation

    International Nuclear Information System (INIS)

    Yang Deshan; Li Hua; Low, Daniel A; Deasy, Joseph O; Naqa, Issam El

    2008-01-01

    Deformable image registration is widely used in various radiation therapy applications including daily treatment planning adaptation to map planned tissue or dose to changing anatomy. In this work, a simple and efficient inverse consistency deformable registration method is proposed with aims of higher registration accuracy and faster convergence speed. Instead of registering image I to a second image J, the two images are symmetrically deformed toward one another in multiple passes, until both deformed images are matched and correct registration is therefore achieved. In each pass, a delta motion field is computed by minimizing a symmetric optical flow system cost function using modified optical flow algorithms. The images are then further deformed with the delta motion field in the positive and negative directions respectively, and then used for the next pass. The magnitude of the delta motion field is forced to be less than 0.4 voxel for every pass in order to guarantee smoothness and invertibility for the two overall motion fields that are accumulating the delta motion fields in both positive and negative directions, respectively. The final motion fields to register the original images I and J, in either direction, are calculated by inverting one overall motion field and combining the inversion result with the other overall motion field. The final motion fields are inversely consistent and this is ensured by the symmetric way that registration is carried out. The proposed method is demonstrated with phantom images, artificially deformed patient images and 4D-CT images. Our results suggest that the proposed method is able to improve the overall accuracy (reducing registration error by 30% or more, compared to the original and inversely inconsistent optical flow algorithms), reduce the inverse consistency error (by 95% or more) and increase the convergence rate (by 100% or more). The overall computation speed may slightly decrease, or increase in most cases

  6. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm.

    Science.gov (United States)

    Zhang, Zhihua; Sheng, Zheng; Shi, Hanqing; Fan, Zhiqiang

    2016-01-01

    Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS, yielding DACS-CO. Rechenberg's 1/5 criterion combined with a learning factor is used to control the dynamic adaptive parameter adjustment, and the crossover operation of the genetic algorithm is used to maintain population diversity. The new hybrid algorithm has better local search ability and delivers superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm provides an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.

  7. Prestack inversion based on anisotropic Markov random field-maximum posterior probability inversion and its application to identify shale gas sweet spots

    Science.gov (United States)

    Wang, Kang-Ning; Sun, Zan-Dong; Dong, Ning

    2015-12-01

    Economic shale gas production requires hydraulic fracture stimulation to increase the formation permeability. Hydraulic fracturing strongly depends on geomechanical parameters such as Young's modulus and Poisson's ratio. Fracture-prone sweet spots can be predicted by prestack inversion, which is an ill-posed problem; thus, regularization is needed to obtain unique and stable solutions. To characterize gas-bearing shale sedimentary bodies, elastic parameter variations are regarded as an anisotropic Markov random field. Bayesian statistics are adopted for transforming prestack inversion to the maximum posterior probability. Two energy functions for the lateral and vertical directions are used to describe the distribution, and the expectation-maximization algorithm is used to estimate the hyperparameters of the prior probability of elastic parameters. Finally, the inversion yields clear geological boundaries, high vertical resolution, and reasonable lateral continuity using the conjugate gradient method to minimize the objective function. The noise robustness and imaging ability of the method were tested using synthetic and real data.

  8. Pareto-Optimal Multi-objective Inversion of Geophysical Data

    Science.gov (United States)

    Schnaidt, Sebastian; Conway, Dennis; Krieger, Lars; Heinson, Graham

    2018-01-01

    In the process of modelling geophysical properties, jointly inverting different data sets can greatly improve model results, provided that the data sets are compatible, i.e., sensitive to similar features. Such a joint inversion requires a relationship between the different data sets, which can either be analytic or structural. Classically, the joint problem is expressed as a scalar objective function that combines the misfit functions of multiple data sets and a joint term which accounts for the assumed connection between the data sets. This approach suffers from two major disadvantages: first, it can be difficult to assess the compatibility of the data sets and second, the aggregation of misfit terms introduces a weighting of the data sets. We present a Pareto-optimal multi-objective joint inversion approach based on an existing genetic algorithm. The algorithm treats each data set as a separate objective, avoiding forced weighting and generating curves of the trade-off between the different objectives. These curves are analysed by their shape and evolution to evaluate data set compatibility. Furthermore, the statistical analysis of the generated solution population provides valuable estimates of model uncertainty.

  9. Robust inverse-consistent affine CT-MR registration in MRI-assisted and MRI-alone prostate radiation therapy.

    Science.gov (United States)

    Rivest-Hénault, David; Dowson, Nicholas; Greer, Peter B; Fripp, Jurgen; Dowling, Jason A

    2015-07-01

    CT-MR registration is a critical component of many radiation oncology protocols. In prostate external beam radiation therapy, it allows the propagation of MR-derived contours to reference CT images at the planning stage, and it enables dose mapping during dosimetry studies. The use of carefully registered CT-MR atlases allows the estimation of patient specific electron density maps from MRI scans, enabling MRI-alone radiation therapy planning and treatment adaptation. In all cases, the precision and accuracy achieved by registration influences the quality of the entire process. Most current registration algorithms do not robustly generalize and lack inverse-consistency, increasing the risk of human error and acting as a source of bias in studies where information is propagated in a particular direction, e.g. CT to MR or vice versa. In MRI-based treatment planning where both CT and MR scans serve as spatial references, inverse-consistency is critical, if under-acknowledged. A robust, inverse-consistent, rigid/affine registration algorithm that is well suited to CT-MR alignment in prostate radiation therapy is presented. The presented method is based on a robust block-matching optimization process that utilises a half-way space definition to maintain inverse-consistency. Inverse-consistency substantially reduces the influence of the order of input images, simplifying analysis, and increasing robustness. An open source implementation is available online at http://aehrc.github.io/Mirorr/. Experimental results on a challenging 35 CT-MR pelvis dataset demonstrate that the proposed method is more accurate than other popular registration packages and is at least as accurate as the state of the art, while being more robust and having an order of magnitude higher inverse-consistency than competing approaches. The presented results demonstrate that the proposed registration algorithm is readily applicable to prostate radiation therapy planning.

  10. Efficient Monte Carlo sampling of inverse problems using a neural network-based forward—applied to GPR crosshole traveltime inversion

    Science.gov (United States)

    Hansen, T. M.; Cordua, K. S.

    2017-12-01

    Probabilistically formulated inverse problems can be solved using Monte Carlo-based sampling methods. In principle, both advanced prior information, based on, for example, complex geostatistical models, and non-linear forward models can be considered using such methods. However, Monte Carlo methods may be associated with huge computational costs that, in practice, limit their application. This is not least due to the computational requirements related to solving the forward problem, where the physical forward response of some earth model has to be evaluated. Here, it is suggested to replace a numerical complex evaluation of the forward problem, with a trained neural network that can be evaluated very fast. This will introduce a modeling error that is quantified probabilistically such that it can be accounted for during inversion. This allows a very fast and efficient Monte Carlo sampling of the solution to an inverse problem. We demonstrate the methodology for first arrival traveltime inversion of crosshole ground penetrating radar data. An accurate forward model, based on 2-D full-waveform modeling followed by automatic traveltime picking, is replaced by a fast neural network. This provides a sampling algorithm three orders of magnitude faster than using the accurate and computationally expensive forward model, and also considerably faster and more accurate (i.e. with better resolution), than commonly used approximate forward models. The methodology has the potential to dramatically change the complexity of non-linear and non-Gaussian inverse problems that have to be solved using Monte Carlo sampling techniques.
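
    A minimal sketch of the idea, not the authors' GPR workflow: a toy nonlinear forward model stands in for the expensive simulator, a deliberately imperfect fast function stands in for the trained neural network, and the resulting modeling error is quantified from training runs and folded into the data covariance used by a plain Metropolis sampler. All model forms, noise levels and sample counts are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "accurate" forward model (stand-in for full-waveform modeling plus
# picking) and a cheap approximate forward (stand-in for the trained
# neural network).  Both forms are illustrative assumptions.
def forward_accurate(m):
    return np.sin(m) + 0.05 * m**2

def forward_fast(m):
    return np.sin(m)                 # deliberately imperfect surrogate

# Quantify the modeling error statistically from a set of training models.
m_train = rng.uniform(-2, 2, size=(2000, 3))
err = np.array([forward_accurate(m) - forward_fast(m) for m in m_train])
bias, C_model = err.mean(axis=0), np.cov(err.T)

# Synthetic observed data from a "true" model plus observation noise.
m_true = np.array([0.3, -1.1, 0.8])
sigma_obs = 0.02
d_obs = forward_accurate(m_true) + rng.normal(0, sigma_obs, size=3)

# Combined covariance: observation noise plus the quantified modeling error.
C_d = sigma_obs**2 * np.eye(3) + C_model
C_d_inv = np.linalg.inv(C_d)

def log_like(m):
    r = d_obs - (forward_fast(m) + bias)
    return -0.5 * r @ C_d_inv @ r

# Plain Metropolis sampler that only ever calls the fast surrogate.
m, ll = np.zeros(3), log_like(np.zeros(3))
samples = []
for _ in range(20000):
    m_prop = m + 0.1 * rng.normal(size=3)
    ll_prop = log_like(m_prop)
    if np.log(rng.uniform()) < ll_prop - ll:
        m, ll = m_prop, ll_prop
    samples.append(m)
print("posterior mean:", np.mean(samples, axis=0), " true:", m_true)
```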

  11. Born reflection kernel analysis and wave-equation reflection traveltime inversion in elastic media

    KAUST Repository

    Wang, Tengfei

    2017-08-17

    Elastic reflection waveform inversion (ERWI) utilizes reflections to update the low and intermediate wavenumbers in the deeper part of the model. However, ERWI suffers from the cycle-skipping problem due to its waveform-residual objective function. Since traveltime information relates to the background model more linearly, we use the traveltime residuals as the objective function to update the background velocity model using wave-equation reflection traveltime inversion (WERTI). The reflection kernel analysis shows that mode decomposition can suppress the artifacts in the gradient calculation. We design a two-step inversion strategy, in which PP reflections are first used to invert for the P-wave velocity (Vp), followed by S-wave velocity (Vs) inversion with PS reflections. P/S separation of multi-component seismograms and spatial wave-mode decomposition can reduce the nonlinearity of the inversion effectively by selecting suitable P- or S-wave subsets for hierarchical inversion. A numerical example on the Sigsbee2A model validates the effectiveness of the algorithms and strategies for elastic WERTI (E-WERTI).

  12. Inverse thermal analysis method to study solidification in cast iron

    DEFF Research Database (Denmark)

    Dioszegi, Atilla; Hattel, Jesper

    2004-01-01

    Solidification modelling of cast metals is widely used to predict final properties in cast components. Accurate models necessitate good knowledge of the solidification behaviour. The present study includes a re-examination of the Fourier thermal analysis method. This involves an inverse numerical...... solution of a 1-dimensional heat transfer problem connected to solidification of cast alloys. In the analysis, the relation between the thermal state and the fraction solid of the metal is evaluated by a numerical method. This method contains an iteration algorithm controlled by an under relaxation term...... inverse thermal analysis was tested on both experimental and simulated data....

  13. Full-Physics Inverse Learning Machine for Satellite Remote Sensing of Ozone Profile Shapes and Tropospheric Columns

    Science.gov (United States)

    Xu, J.; Heue, K.-P.; Coldewey-Egbers, M.; Romahn, F.; Doicu, A.; Loyola, D.

    2018-04-01

    Characterizing vertical distributions of ozone from nadir-viewing satellite measurements is known to be challenging, particularly the ozone information in the troposphere. A novel retrieval algorithm, called the Full-Physics Inverse Learning Machine (FP-ILM), has been developed at DLR to estimate ozone profile shapes based on machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase to derive an inverse function from synthetic measurements, and an operational phase in which the inverse function is applied to real measurements. This paper extends the ability of the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results of total and tropical tropospheric ozone columns are compared with those from the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors with their unprecedented spectral and spatial resolution and corresponding large increases in the amount of data.

  14. Calculation of total number of disintegrations after intake of radioactive nuclides using the pseudo inverse matrix

    International Nuclear Information System (INIS)

    Noh, Si Wan; Sol, Jeong; Lee, Jai Ki; Lee, Jong Il; Kim, Jang Lyul

    2012-01-01

    Calculation of the total number of disintegrations after intake of radioactive nuclides is indispensable for calculating a dose coefficient, i.e. the committed effective dose per unit activity (Sv/Bq). To calculate the total number of disintegrations analytically, Birchall's algorithm has been commonly used. As described below, an inverse matrix must be calculated in this algorithm. As biokinetic models have become more complicated, however, the inverse matrix sometimes does not exist and the total number of disintegrations cannot be calculated; a numerical method has therefore been applied in the DCAL code, used to calculate the dose coefficients in the ICRP publications, and in the IMBA code. In this study, however, we applied the pseudo-inverse matrix to handle the cases in which the inverse matrix does not exist. To validate our method, it was applied to two examples and the results were compared with the tabulated data in the ICRP publications. MATLAB 2012a was used to calculate the total number of disintegrations, and the expm and pinv MATLAB built-in functions were employed
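
    The computation described above can be sketched in a few lines (here with NumPy/SciPy rather than MATLAB, and with an invented two-compartment transfer matrix that is purely illustrative, not an ICRP model): the time-integrated compartment contents over [0, T] follow from the matrix exponential, with the Moore-Penrose pseudo-inverse standing in for the ordinary inverse when the latter fails.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import trapezoid

# Illustrative two-compartment biokinetic model (not an ICRP model):
#   dq/dt = M q,  q(0) = q0,
# with total removal rates on the diagonal and a transfer rate off-diagonal.
lam = 0.05                          # assumed physical decay constant [1/d]
M = np.array([[-(0.2 + lam), 0.0],
              [0.2,          -(0.01 + lam)]])
q0 = np.array([1.0, 0.0])           # unit intake into compartment 1
T = 100.0                           # integration period [d], illustrative

# Time-integrated compartment contents: int_0^T exp(M t) q0 dt.
# The analytic form is M^{-1} (exp(MT) - I) q0; when M is singular the
# remedy proposed in the paper is to use the Moore-Penrose pseudo-inverse.
U = np.linalg.pinv(M) @ (expm(M * T) - np.eye(2)) @ q0

# Sanity check against straightforward numerical quadrature.
t = np.linspace(0.0, T, 10001)
q = np.array([expm(M * ti) @ q0 for ti in t])
U_quad = trapezoid(q, t, axis=0)
print("pinv-based integral :", U)
print("quadrature integral :", U_quad)
```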

  15. Particle Swarm Optimization and Uncertainty Assessment in Inverse Problems

    Directory of Open Access Journals (Sweden)

    José L. G. Pallero

    2018-01-01

    Full Text Available Most inverse problems in industry (and particularly in geophysical exploration) are highly underdetermined, because the number of model parameters needed to achieve accurate data predictions is high and because the sampling of the data space is scarce, incomplete and always affected by different kinds of noise. Additionally, the physics of the forward problem is a simplification of reality. All these facts mean that the inverse-problem solution is not unique; that is, there are different inverse solutions (called equivalent) that are compatible with the prior information and fit the observed data within similar error bounds. In the case of nonlinear inverse problems, these equivalent models are located in disconnected, flat, curvilinear valleys of the cost-function topography. The uncertainty analysis consists of obtaining a representation of this complex topography via different sampling methodologies. In this paper, we focus on the use of a particle swarm optimization (PSO) algorithm to sample the region of equivalence in nonlinear inverse problems. Although this methodology has a general purpose, we show its application to the uncertainty assessment of the solution of a geophysical problem concerning gravity inversion in sedimentary basins, showing that it is possible to efficiently perform this task in a sampling-while-optimizing mode. In particular, we explain how to use and analyze the geophysical models sampled by exploratory PSO family members to infer different descriptors of nonlinear uncertainty.
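
    A minimal sketch of the sampling-while-optimizing idea on a toy two-parameter problem with a flat valley of equivalent models; the misfit function, swarm size, coefficients and equivalence threshold are all assumptions for illustration, not the gravity-inversion setup of the paper.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy nonlinear misfit with a curved valley of equivalent models (x*y = 1).
def misfit(m):
    x, y = m
    return (x * y - 1.0) ** 2

n_particles, n_iter, dim = 40, 200, 2
lo, hi = np.array([0.1, 0.1]), np.array([5.0, 5.0])

pos = rng.uniform(lo, hi, size=(n_particles, dim))
vel = np.zeros_like(pos)
pbest, pbest_f = pos.copy(), np.array([misfit(p) for p in pos])
gbest = pbest[np.argmin(pbest_f)].copy()

equivalent_models = []            # sampling-while-optimizing archive
tol = 1e-3                        # assumed equivalence threshold

w, c1, c2 = 0.72, 1.49, 1.49      # common PSO coefficients
for _ in range(n_iter):
    r1, r2 = rng.uniform(size=(2, n_particles, dim))
    vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
    pos = np.clip(pos + vel, lo, hi)
    f = np.array([misfit(p) for p in pos])
    better = f < pbest_f
    pbest[better], pbest_f[better] = pos[better], f[better]
    gbest = pbest[np.argmin(pbest_f)].copy()
    equivalent_models.extend(pos[f < tol])   # archive equivalent models

eq = np.array(equivalent_models)
print("best model:", gbest, " misfit: %.2e" % misfit(gbest))
if len(eq):
    print("archived equivalent models:", len(eq),
          " x-range: [%.2f, %.2f]" % (eq[:, 0].min(), eq[:, 0].max()))
```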

  16. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    International Nuclear Information System (INIS)

    Elsheikh, Ahmed H.; Wheeler, Mary F.; Hoteit, Ibrahim

    2014-01-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems

  17. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    Energy Technology Data Exchange (ETDEWEB)

    Elsheikh, Ahmed H., E-mail: aelsheikh@ices.utexas.edu [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Institute of Petroleum Engineering, Heriot-Watt University, Edinburgh EH14 4AS (United Kingdom); Wheeler, Mary F. [Institute for Computational Engineering and Sciences (ICES), University of Texas at Austin, TX (United States); Hoteit, Ibrahim [Department of Earth Sciences and Engineering, King Abdullah University of Science and Technology (KAUST), Thuwal (Saudi Arabia)

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems.

  18. Inversion for Refractivity Parameters Using a Dynamic Adaptive Cuckoo Search with Crossover Operator Algorithm

    Directory of Open Access Journals (Sweden)

    Zhihua Zhang

    2016-01-01

    Full Text Available Using the RFC technique to estimate refractivity parameters is a complex nonlinear optimization problem. In this paper, an improved cuckoo search (CS) algorithm is proposed to deal with this problem. To enhance the performance of the CS algorithm, a dynamic adaptive parameter operation and a crossover operation were integrated into the standard CS, yielding DACS-CO. Rechenberg’s 1/5 criterion combined with a learning factor is used to control the dynamic adaptive parameter adjustment, and the crossover operation of the genetic algorithm is used to maintain population diversity. The new hybrid algorithm has better local search ability and delivers superior performance. To verify the ability of the DACS-CO algorithm to estimate atmospheric refractivity parameters, both simulated data and real radar clutter data are used. The numerical experiments demonstrate that the DACS-CO algorithm provides an effective method for near-real-time estimation of the atmospheric refractivity profile from radar clutter.

  19. Hybrid nested sampling algorithm for Bayesian model selection applied to inverse subsurface flow problems

    KAUST Repository

    Elsheikh, Ahmed H.

    2014-02-01

    A Hybrid Nested Sampling (HNS) algorithm is proposed for efficient Bayesian model calibration and prior model selection. The proposed algorithm combines the Nested Sampling (NS) algorithm, Hybrid Monte Carlo (HMC) sampling and gradient estimation using the Stochastic Ensemble Method (SEM). NS is an efficient sampling algorithm that can be used for Bayesian calibration and estimating the Bayesian evidence for prior model selection. Nested sampling has the advantage of computational feasibility. Within the nested sampling algorithm, a constrained sampling step is performed. For this step, we utilize HMC to reduce the correlation between successive sampled states. HMC relies on the gradient of the logarithm of the posterior distribution, which we estimate using a stochastic ensemble method based on an ensemble of directional derivatives. SEM only requires forward model runs and the simulator is then used as a black box and no adjoint code is needed. The developed HNS algorithm is successfully applied for Bayesian calibration and prior model selection of several nonlinear subsurface flow problems. © 2013 Elsevier Inc.

  20. A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Universite catholique de Louvain

    2017-07-25

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function can be computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function (and does not require the computation of its inverse). Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithms.
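
    The following is one simple first-order prediction-correction heuristic on a time-varying quadratic, not the constrained algorithm of the preprint: the gradient is extrapolated forward in time by a finite difference (no Hessian inverse is formed) before a correction step is taken on the revealed cost. Step sizes and the drifting optimum are assumptions for illustration.

```python
import numpy as np

# Time-varying cost f(x, t) = 0.5*||x - x_star(t)||^2 with a drifting
# optimum; purely an illustrative test problem.
def x_star(t):
    return np.array([np.cos(t), np.sin(0.5 * t)])

def grad(x, t):
    return x - x_star(t)

dt = 0.1                     # sampling interval of the time-varying problem
alpha = 0.8                  # step size (assumed)
steps = 200

x = np.zeros(2)
g_prev = grad(x, 0.0)
err_track = []
for k in range(1, steps):
    t = k * dt
    # Prediction: extrapolate the most recent gradient observations forward
    # in time with a crude first-order finite difference, then take a step.
    g_now = grad(x, t - dt)
    g_pred = 2.0 * g_now - g_prev
    x = x - alpha * g_pred
    # Correction: once the cost at time t is revealed, take a gradient step.
    x = x - alpha * grad(x, t)
    g_prev = g_now
    err_track.append(np.linalg.norm(x - x_star(t)))

print("mean tracking error over the last 50 steps: %.4f"
      % np.mean(err_track[-50:]))
```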

  1. Genetic Algorithms Evolve Optimized Transforms for Signal Processing Applications

    National Research Council Canada - National Science Library

    Moore, Frank; Babb, Brendan; Becke, Steven; Koyuk, Heather; Lamson, Earl, III; Wedge, Christopher

    2005-01-01

    .... The primary goal of the research described in this final report was to establish a methodology for using genetic algorithms to evolve coefficient sets describing inverse transforms and matched...

  2. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    Science.gov (United States)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
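
    A toy version of the decomposition idea, assuming two synthetic linear data sets and a consensus constraint between their separate models; it is not the seismic implementation, only an augmented-Lagrangian (ADMM-style) sketch of alternating subproblem solves and multiplier updates.

```python
import numpy as np

rng = np.random.default_rng(2)

# Two synthetic "data sets" sensitive to the same 5-parameter model
# (stand-ins for body-wave traveltimes and surface-wave dispersion).
m_true = rng.normal(size=5)
A1, A2 = rng.normal(size=(30, 5)), rng.normal(size=(40, 5))
d1 = A1 @ m_true + 0.01 * rng.normal(size=30)
d2 = A2 @ m_true + 0.01 * rng.normal(size=40)

# Full problem: min ||A1 m - d1||^2 + ||A2 m - d2||^2.
# Decomposition: separate models m1, m2 for each data subset, constrained
# to agree (m1 = m2), handled with an augmented Lagrangian.
rho = 1.0                      # penalty parameter (assumed)
m1 = np.zeros(5)
m2 = np.zeros(5)
lam = np.zeros(5)              # multipliers for the constraint m1 - m2 = 0
I5 = np.eye(5)

for it in range(100):
    # Minimize the augmented Lagrangian over m1 with m2 and lam fixed
    # (a least-squares subproblem involving only data set 1).
    m1 = np.linalg.solve(2 * A1.T @ A1 + rho * I5,
                         2 * A1.T @ d1 - lam + rho * m2)
    # Minimize the augmented Lagrangian over m2 with m1 and lam fixed.
    m2 = np.linalg.solve(2 * A2.T @ A2 + rho * I5,
                         2 * A2.T @ d2 + lam + rho * m1)
    # Multiplier update steers the two subproblem models toward agreement.
    lam = lam + rho * (m1 - m2)

m_joint = np.linalg.solve(A1.T @ A1 + A2.T @ A2, A1.T @ d1 + A2.T @ d2)
print("consensus gap            :", np.linalg.norm(m1 - m2))
print("gap to the joint solution:", np.linalg.norm(0.5 * (m1 + m2) - m_joint))
```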

  3. Multiple estimation channel decoupling and optimization method based on inverse system

    Science.gov (United States)

    Wu, Peng; Mu, Rongjun; Zhang, Xin; Deng, Yanpeng

    2018-03-01

    This paper addresses the autonomous navigation requirements of an intelligent deformation missile. Starting from the missile's dynamics and kinematics models, the navigation subsystem solution method and the associated error models, it focuses on the corresponding data fusion and decision fusion techniques. The sensitive channels of the filter input are decoupled through an inverse system designed from the dynamics, which reduces the influence of sudden changes in the measurement information on the filter input. A series of simulation experiments verifies the feasibility and effectiveness of the inverse-system decoupling algorithm.

  4. Analysis of forward and inverse problems in chemical dynamics and spectroscopy

    Energy Technology Data Exchange (ETDEWEB)

    Rabitz, H. [Princeton Univ., NJ (United States)

    1993-12-01

    The overall scope of this research concerns the development and application of forward and inverse analysis tools for problems in chemical dynamics and chemical kinetics. The chemical dynamics work is specifically associated with relating features in potential surfaces and resultant dynamical behavior. The analogous inverse research aims to provide stable algorithms for extracting potential surfaces from laboratory data. In the case of chemical kinetics, the focus is on the development of systematic means to reduce the complexity of chemical kinetic models. Recent progress in these directions is summarized below.

  5. Direct integration of the inverse Radon equation for X-ray computed tomography.

    Science.gov (United States)

    Libin, E E; Chakhlov, S V; Trinca, D

    2016-11-22

    A new mathematical approach using the inverse Radon equation for the restoration of images in linear two-dimensional X-ray tomography is formulated. The approach does not use the Fourier transform, which makes it possible to build practical computing algorithms with a more reliable mathematical foundation. Results of a software implementation show that, especially for a low number of projections, the described approach performs better than standard X-ray tomographic reconstruction algorithms.

  6. Complexity analysis of accelerated MCMC methods for Bayesian inversion

    International Nuclear Information System (INIS)

    Hoang, Viet Ha; Schwab, Christoph; Stuart, Andrew M

    2013-01-01

    The Bayesian approach to inverse problems, in which the posterior probability distribution on an unknown field is sampled for the purposes of computing posterior expectations of quantities of interest, is starting to become computationally feasible for partial differential equation (PDE) inverse problems. Balancing the sources of error arising from finite-dimensional approximation of the unknown field, the PDE forward solution map and the sampling of the probability space under the posterior distribution are essential for the design of efficient computational Bayesian methods for PDE inverse problems. We study Bayesian inversion for a model elliptic PDE with an unknown diffusion coefficient. We provide complexity analyses of several Markov chain Monte Carlo (MCMC) methods for the efficient numerical evaluation of expectations under the Bayesian posterior distribution, given data δ. Particular attention is given to bounds on the overall work required to achieve a prescribed error level ε. Specifically, we first bound the computational complexity of ‘plain’ MCMC, based on combining MCMC sampling with linear complexity multi-level solvers for elliptic PDE. Our (new) work versus accuracy bounds show that the complexity of this approach can be quite prohibitive. Two strategies for reducing the computational complexity are then proposed and analyzed: first, a sparse, parametric and deterministic generalized polynomial chaos (gpc) ‘surrogate’ representation of the forward response map of the PDE over the entire parameter space, and, second, a novel multi-level Markov chain Monte Carlo strategy which utilizes sampling from a multi-level discretization of the posterior and the forward PDE. For both of these strategies, we derive asymptotic bounds on work versus accuracy, and hence asymptotic bounds on the computational complexity of the algorithms. In particular, we provide sufficient conditions on the regularity of the unknown coefficients of the PDE and on the

  7. Full-waveform inversion with reflected waves for 2D VTI media

    KAUST Repository

    Pattnaik, Sonali

    2016-09-06

    Full-waveform inversion in anisotropic media using reflected waves suffers from the strong non-linearity of the objective function and trade-offs between model parameters. Estimating long-wavelength model components by fixing parameter perturbations, referred to as reflection-waveform inversion (RWI), can mitigate nonlinearity-related inversion issues. Here, we extend RWI to acoustic VTI (transversely isotropic with a vertical symmetry axis) media. To minimize trade-offs between the model parameters, we employ a new hierarchical two-stage approach that operates with the P-wave normal-moveout velocity and the anisotropy coefficients ζ and η. First, the normal-moveout velocity is estimated using a fixed perturbation in ζ, and then we invert for η while fixing the updated perturbation in the normal-moveout velocity. The proposed 2D algorithm is tested on a horizontally layered VTI model.

  8. An inverse source problem of the Poisson equation with Cauchy data

    Directory of Open Access Journals (Sweden)

    Ji-Chuan Liu

    2017-05-01

    Full Text Available In this article, we study an inverse source problem of the Poisson equation with Cauchy data. We want to find iterative algorithms to detect the hidden source within a body from measurements on the boundary. Our goal is to reconstruct the location, the size and the shape of the hidden source. This problem is ill-posed, so regularization techniques should be employed to obtain the regularized solution. Numerical examples show that our proposed algorithms are valid and effective.

  9. Performance impact of mutation operators of a subpopulation-based genetic algorithm for multi-robot task allocation problems.

    Science.gov (United States)

    Liu, Chun; Kroll, Andreas

    2016-01-01

    Multi-robot task allocation determines the task sequence and distribution for a group of robots in multi-robot systems; it is a constrained combinatorial optimization problem and becomes more complex in the case of cooperative tasks, because they introduce additional spatial and temporal constraints. To solve multi-robot task allocation problems with cooperative tasks efficiently, a subpopulation-based genetic algorithm, a crossover-free genetic algorithm employing mutation operators and elitism selection in each subpopulation, is developed in this paper. Moreover, the impact of mutation operators (swap, insertion, inversion, displacement, and their various combinations) is analyzed when solving several industrial plant inspection problems. The experimental results show that: (1) the proposed genetic algorithm can obtain better solutions than the tested binary tournament genetic algorithm with partially mapped crossover; (2) inversion mutation performs better than other tested mutation operators when solving problems without cooperative tasks, and the swap-inversion combination performs better than other tested mutation operators/combinations when solving problems with cooperative tasks. As it is difficult to produce all desired effects with a single mutation operator, using multiple mutation operators (including both inversion and swap) is suggested when solving similar combinatorial optimization problems.
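
    For concreteness, the four mutation operators named above can be sketched on a permutation-coded task sequence as follows; this is a generic illustration, not the paper's subpopulation-based GA.

```python
import random

def swap(seq, rng):
    """Exchange the tasks at two randomly chosen positions."""
    s = seq[:]
    i, j = rng.sample(range(len(s)), 2)
    s[i], s[j] = s[j], s[i]
    return s

def insertion(seq, rng):
    """Remove one task and re-insert it at another position."""
    s = seq[:]
    i, j = rng.sample(range(len(s)), 2)
    s.insert(j, s.pop(i))
    return s

def inversion(seq, rng):
    """Reverse the order of the tasks inside a random segment."""
    s = seq[:]
    i, j = sorted(rng.sample(range(len(s)), 2))
    s[i:j + 1] = reversed(s[i:j + 1])
    return s

def displacement(seq, rng):
    """Cut out a random segment and splice it in at a new position."""
    s = seq[:]
    i, j = sorted(rng.sample(range(len(s)), 2))
    segment = s[i:j + 1]
    rest = s[:i] + s[j + 1:]
    k = rng.randint(0, len(rest))
    return rest[:k] + segment + rest[k:]

rng = random.Random(7)
tour = list(range(8))              # a permutation-coded task sequence
for op in (swap, insertion, inversion, displacement):
    print(op.__name__, op(tour, rng))
```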

  10. Forward and inverse solutions for Risley prism based on the Denavit-Hartenberg methodology

    Science.gov (United States)

    Beltran-Gonzalez, A.; Garcia-Torales, G.; Strojnik, M.; Flores, J. L.; Garcia-Luna, J. L.

    2017-08-01

    In this work, forward and inverse solutions for a two-element Risley prism used in beam pointing and scanning systems are developed. A more efficient and faster algorithm is proposed by drawing an analogy between the Risley prism system and a robotic system with two degrees of freedom. The resulting system of equations treats the two prisms as the links of a planar two-link manipulator arm and controls each Risley prism individually. To evaluate the algorithm, we implement it in a pointing system and perform popular routines such as linear, spiral and loop traces. Using the forward and inverse solutions for the two-element Risley prism it is also possible to point at coordinates specified by the user, provided they lie within the system's work area. Experimental results are shown to validate the proposal.

  11. Shrinkage-thresholding enhanced born iterative method for solving 2D inverse electromagnetic scattering problem

    KAUST Repository

    Desmal, Abdulla; Bagci, Hakan

    2014-01-01

    A numerical framework that incorporates recently developed iterative shrinkage thresholding (IST) algorithms within the Born iterative method (BIM) is proposed for solving the two-dimensional inverse electromagnetic scattering problem. IST

  12. Incremental inverse kinematics based vision servo for autonomous robotic capture of non-cooperative space debris

    Science.gov (United States)

    Dong, Gangqi; Zhu, Z. H.

    2016-04-01

    This paper proposes a new incremental inverse-kinematics-based vision servo approach for robotic manipulators to capture a non-cooperative target autonomously. The target's pose and motion are estimated by a vision system using integrated photogrammetry and an EKF algorithm. Based on the estimated pose and motion of the target, the instantaneous desired position of the end-effector is predicted by inverse kinematics and the robotic manipulator is moved incrementally from its current configuration subject to the joint speed limits. This approach effectively eliminates the multiple solutions of the inverse kinematics and increases the robustness of the control algorithm. The proposed approach is validated by a hardware-in-the-loop simulation, where the pose and motion of the non-cooperative target are estimated by a real vision system. The simulation results demonstrate the effectiveness and robustness of the proposed estimation approach for the target and the incremental control strategy for the robotic manipulator.
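
    A minimal planar sketch of the incremental idea, assuming a two-link arm, a circular target trajectory and arbitrary joint-speed limits (none of which come from the paper): the joint increment is obtained from the Jacobian pseudo-inverse and clipped to the speed limits, rather than solving the full inverse kinematics in closed form.

```python
import numpy as np

L1, L2 = 1.0, 0.8                 # link lengths of a planar 2-DOF arm (assumed)

def fk(q):
    """Forward kinematics: joint angles -> end-effector position."""
    return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                     L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

def jacobian(q):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                     [ L1 * c1 + L2 * c12,  L2 * c12]])

def target(t):
    """Predicted position of the (moving) target at time t."""
    return np.array([1.2 + 0.2 * np.cos(0.5 * t), 0.5 + 0.2 * np.sin(0.5 * t)])

dt, qdot_max = 0.02, 0.8          # control period and joint speed limit (assumed)
q = np.array([0.5, 0.8])

for k in range(500):
    t = k * dt
    # Incremental step: move toward the instantaneous desired position,
    # using the Jacobian pseudo-inverse instead of closed-form IK.
    dx = target(t) - fk(q)
    dq = np.linalg.pinv(jacobian(q)) @ dx
    dq = np.clip(dq, -qdot_max * dt, qdot_max * dt)   # joint speed limits
    q = q + dq

print("final tracking error:", np.linalg.norm(target(500 * dt) - fk(q)))
```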

  13. Weak unique continuation property and a related inverse source problem for time-fractional diffusion-advection equations

    Science.gov (United States)

    Jiang, Daijun; Li, Zhiyuan; Liu, Yikan; Yamamoto, Masahiro

    2017-05-01

    In this paper, we first establish a weak unique continuation property for time-fractional diffusion-advection equations. The proof is mainly based on the Laplace transform and the unique continuation properties for elliptic and parabolic equations. The result is weaker than its parabolic counterpart in the sense that we additionally impose the homogeneous boundary condition. As a direct application, we prove the uniqueness for an inverse problem on determining the spatial component in the source term by interior measurements. Numerically, we reformulate our inverse source problem as an optimization problem, and propose an iterative thresholding algorithm. Finally, several numerical experiments are presented to show the accuracy and efficiency of the algorithm.

  14. FOREWORD: 5th International Workshop on New Computational Methods for Inverse Problems

    Science.gov (United States)

    Vourc'h, Eric; Rodet, Thomas

    2015-11-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific research presented during the 5th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2015 (http://complement.farman.ens-cachan.fr/NCMIP_2015.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 29, 2015. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011, and secondly at the initiative of Institut Farman, in May 2012, May 2013 and May 2014. The New Computational Methods for Inverse Problems (NCMIP) workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the workshop were: algorithms and computational aspects of inversion, Bayesian estimation, Kernel methods, learning methods

  15. Inverse problems in classical and quantum physics

    International Nuclear Information System (INIS)

    Almasy, A.A.

    2007-01-01

    The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continues to appear which provides a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract within rather general assumptions phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT (electrical impedance tomography) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches of EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. A

  16. Inverse problems in classical and quantum physics

    Energy Technology Data Exchange (ETDEWEB)

    Almasy, A.A.

    2007-06-29

    The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continues to appear which provides a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract within rather general assumptions phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT (electrical impedance tomography) is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches of EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. A

  17. Speckle imaging algorithms for planetary imaging

    Energy Technology Data Exchange (ETDEWEB)

    Johansson, E. [Lawrence Livermore National Lab., CA (United States)

    1994-11-15

    I will discuss the speckle imaging algorithms used to process images of the impact sites of the collision of comet Shoemaker-Levy 9 with Jupiter. The algorithms use a phase retrieval process based on the average bispectrum of the speckle image data. High resolution images are produced by estimating the Fourier magnitude and Fourier phase of the image separately, then combining them and inverse transforming to achieve the final result. I will show raw speckle image data and high-resolution image reconstructions from our recent experiment at Lick Observatory.
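
    The final recombination step described above (separately estimated Fourier magnitude and phase, combined and inverse transformed) can be sketched as follows; the bispectrum phase recursion itself is not reproduced, and the "estimates" are simply the true spectra corrupted with small errors for illustration.

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "true" image; pretend its Fourier magnitude and phase were estimated
# separately (magnitude from average power spectra, phase from the average
# bispectrum).  Here we simply corrupt each with small errors.
truth = np.zeros((64, 64))
truth[20:28, 30:38] = 1.0
truth[40:44, 10:30] = 0.5

F = np.fft.fft2(truth)
mag_est = np.abs(F) * (1.0 + 0.02 * rng.normal(size=F.shape))   # magnitude estimate
phase_est = np.angle(F) + 0.05 * rng.normal(size=F.shape)       # phase estimate

# Recombine the two independent estimates and inverse transform.
recon = np.fft.ifft2(mag_est * np.exp(1j * phase_est)).real

err = np.linalg.norm(recon - truth) / np.linalg.norm(truth)
print("relative reconstruction error: %.3f" % err)
```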

  18. An inverse hyperbolic heat conduction problem in estimating surface heat flux by the conjugate gradient method

    International Nuclear Information System (INIS)

    Huang, C.-H.; Wu, H.-H.

    2006-01-01

    In the present study an inverse hyperbolic heat conduction problem is solved by the conjugate gradient method (CGM) in estimating the unknown boundary heat flux based on the boundary temperature measurements. Results obtained in this inverse problem will be justified based on the numerical experiments where three different heat flux distributions are to be determined. Results show that the inverse solutions can always be obtained with any arbitrary initial guesses of the boundary heat flux. Moreover, the drawbacks of the previous study for this similar inverse problem, such as (1) the inverse solution has phase error and (2) the inverse solution is sensitive to measurement error, can be avoided in the present algorithm. Finally, it is concluded that accurate boundary heat flux can be estimated in this study
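
    The conjugate-gradient machinery at the heart of such estimations can be illustrated with a generic conjugate gradient least squares (CGLS) sketch for a discretized linear(ized) problem; it is not the hyperbolic-conduction code of the paper, and the test operator and noise level are assumptions. Early stopping of the iterations plays the role of regularization.

```python
import numpy as np

def cgls(A, b, n_iter=50):
    """Conjugate Gradient Least Squares for min ||A x - b||_2.

    Only matrix-vector products with A and A.T are required; stopping
    after a modest number of iterations acts as regularization.
    """
    x = np.zeros(A.shape[1])
    r = b - A @ x                 # residual in data space
    s = A.T @ r                   # gradient-related vector in model space
    p = s.copy()
    norm_s_old = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = norm_s_old / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        norm_s_new = s @ s
        p = s + (norm_s_new / norm_s_old) * p
        norm_s_old = norm_s_new
    return x

# Small ill-conditioned test problem with noisy data (illustrative only).
rng = np.random.default_rng(4)
n = 40
A = np.array([[np.exp(-0.1 * (i - j) ** 2) for j in range(n)] for i in range(n)])
x_true = np.sin(np.linspace(0, 3 * np.pi, n))
b = A @ x_true + 1e-3 * rng.normal(size=n)

for k in (5, 20, 80):
    x_k = cgls(A, b, n_iter=k)
    print("iters=%3d  model error=%.3f" % (k, np.linalg.norm(x_k - x_true)))
```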

  19. Fast algorithms for transport models. Final report

    International Nuclear Information System (INIS)

    Manteuffel, T.A.

    1994-01-01

    This project has developed a multigrid in space algorithm for the solution of the S_N equations with isotropic scattering in slab geometry. The algorithm was developed for the Modified Linear Discontinuous (MLD) discretization in space which is accurate in the thick diffusion limit. It uses a red/black two-cell μ-line relaxation. This relaxation solves for all angles on two adjacent spatial cells simultaneously. It takes advantage of the rank-one property of the coupling between angles and can perform this inversion in O(N) operations. A version of the multigrid in space algorithm was programmed on the Thinking Machines Inc. CM-200 located at LANL. It was discovered that on the CM-200 a block Jacobi type iteration was more efficient than the block red/black iteration. Given sufficient processors, all two-cell block inversions can be carried out simultaneously with a small number of parallel steps. The bottleneck is the need for sums of N values, where N is the number of discrete angles, each from a different processor. These are carried out by machine intrinsic functions and are well optimized. The overall algorithm has computational complexity O(log(M)), where M is the number of spatial cells. The algorithm is very efficient and represents the state-of-the-art for isotropic problems in slab geometry. For anisotropic scattering in slab geometry, a multilevel in angle algorithm was developed. A parallel version of the multilevel in angle algorithm has also been developed. Upon first glance, the shifted transport sweep has limited parallelism. Once the right-hand-side has been computed, the sweep is completely parallel in angle, becoming N uncoupled initial value ODE's. The author has developed a cyclic reduction algorithm that renders it parallel with complexity O(log(M)). The multilevel in angle algorithm visits log(N) levels, where shifted transport sweeps are performed. The overall complexity is O(log(N)log(M))

  20. On multiple level-set regularization methods for inverse problems

    International Nuclear Information System (INIS)

    DeCezaro, A; Leitão, A; Tai, X-C

    2009-01-01

    We analyze a multiple level-set method for solving inverse problems with piecewise constant solutions. This method corresponds to an iterated Tikhonov method for a particular Tikhonov functional G_α based on TV–H¹ penalization. We define generalized minimizers for our Tikhonov functional and establish an existence result. Moreover, we prove convergence and stability results of the proposed Tikhonov method. A multiple level-set algorithm is derived from the first-order optimality conditions for the Tikhonov functional G_α, similarly to the iterated Tikhonov method. The proposed multiple level-set method is tested on an inverse potential problem. Numerical experiments show that the method is able to recover multiple objects as well as multiple contrast levels

  1. Sensitivity and inversion of full seismic waveforms in stratified porous medium; Sensibilite et inversion de formes d'ondes completes en milieu poreux stratifie

    Energy Technology Data Exchange (ETDEWEB)

    Barros, L. de

    2007-12-15

    Characterization of porous media parameters, particularly the porosity, permeability and fluid properties, is very useful in many applications (hydrology, natural hazards, the oil industry). The aim of my research is to evaluate the possibility of determining these properties from full seismic wave fields. First, I consider the useful parameters and the specific properties of seismic waves in the poro-elastic theory, often called the Biot (1956) theory. I then compute seismic wave propagation in fluid-saturated stratified porous media with a reflectivity method coupled with the discrete wavenumber integration method. I first used this modeling to study the possibility of determining the carbon dioxide concentration and localization from the reflected P-waves in the case of the deep geological storage at Sleipner (North Sea). The sensitivity of the seismic response to the poro-elastic parameters is then generalized by the analytical computation of the Frechet derivatives, which are expressed in terms of the Green's functions of the unperturbed medium. The numerical tests show that the porosity and the consolidation are the main parameters to invert for. The sensitivity operators are then introduced into an inversion algorithm based on iterative modeling of the full waveform. The classical generalized least-squares inverse problem is solved by the quasi-Newton technique (Tarantola, 1984). The inversion of synthetic data shows that the porosity can be inverted for, and that the fluid and solid parameters (densities and mechanical moduli, or volume fractions of fluid and mineral) can be correctly recovered if the other parameters are well known. However, the strong seismic coupling of the porous parameters makes it difficult to invert simultaneously for several parameters. One way to get round these difficulties is to use additional information and invert for a single parameter for the fluid properties (saturating rate) or for the lithology. Another

  2. Inverse determination of convective heat transfer between an impinging jet and a continuously moving flat surface

    International Nuclear Information System (INIS)

    Mobtil, Mohammed; Bougeard, Daniel; Solliec, Camille

    2014-01-01

    Highlights: • A new method for convective heat flux determination on a moving wall is proposed. • An inverse technique is used to retrieve the heat flux from IR measurements. • The heat flux distribution in the slot-jet impingement area is determined. • The accuracy of the method is examined using CFD-based simulated experiments. • The inversion quality is tested against several parameters of the experiments. - Abstract: In this study an inverse method is developed to determine the heat flux distribution on a moving plane wall. The method uses a thin layer of material (the measurement medium) glued onto the conveyor belt. The heat flux distribution on the moving wall is then determined by an inverse method based on temperature measurements by infrared thermography on the upper surface of the measurement medium. A finite-element-based inverse algorithm for steady-state heat conduction-advection in the Eulerian frame is developed. The algorithm entails the use of the Tikhonov regularization method, along with the L-curve method to select an optimal regularization parameter. Both the direct solution of the moving-boundary problem and the inverse design formulation are presented. The accuracy of the inverse method is examined by simulating exact and noisy data with four different values of the surface-to-jet velocity ratio, and two different materials (PVC and aluminum) for the measurement medium. The results show a greater sensitivity to the convective heat flux for the PVC layer, allowing a better estimation of the heat flux distribution. An alternative underdetermined inverse scheme is also studied; this configuration allows the heat-flux retrieval surface and the temperature measurement surface to have different extents
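
    The Tikhonov/L-curve combination mentioned in the abstract can be sketched for a generic discrete linear model as follows; the smoothing-kernel forward operator and the maximum-distance corner criterion are illustrative assumptions (published L-curve corner finders differ in detail).

```python
import numpy as np

rng = np.random.default_rng(5)

# Mildly ill-posed linear test problem G m = d (a smoothing kernel), a
# stand-in for the steady conduction-advection sensitivity matrix.
n = 60
x = np.linspace(0, 1, n)
G = np.exp(-((x[:, None] - x[None, :]) ** 2) / 0.005)
m_true = np.exp(-((x - 0.3) ** 2) / 0.01) + 0.5 * np.exp(-((x - 0.7) ** 2) / 0.02)
d = G @ m_true + 0.01 * rng.normal(size=n)

def tikhonov(alpha):
    """Zeroth-order Tikhonov solution for regularization parameter alpha."""
    return np.linalg.solve(G.T @ G + alpha * np.eye(n), G.T @ d)

alphas = np.logspace(-8, 2, 60)
res_norm = np.array([np.linalg.norm(G @ tikhonov(a) - d) for a in alphas])
sol_norm = np.array([np.linalg.norm(tikhonov(a)) for a in alphas])

# L-curve corner: point of the log-log curve farthest from the straight line
# joining its endpoints (one simple corner criterion among several).
P = np.column_stack([np.log(res_norm), np.log(sol_norm)])
v = P[-1] - P[0]
v /= np.linalg.norm(v)
dist = np.abs((P - P[0]) @ np.array([-v[1], v[0]]))
alpha_corner = alphas[np.argmax(dist)]

m_est = tikhonov(alpha_corner)
print("chosen alpha: %.2e   relative model error: %.3f"
      % (alpha_corner, np.linalg.norm(m_est - m_true) / np.linalg.norm(m_true)))
```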

  3. Time-lapse three-dimensional inversion of complex conductivity data using an active time constrained (ATC) approach

    Science.gov (United States)

    Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.

    2011-01-01

    Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram. © 2011 The Authors, Geophysical Journal International © 2011 RAS.

  4. A new accurate curvature matching and optimal tool based five-axis machining algorithm

    International Nuclear Information System (INIS)

    Lin, Than; Lee, Jae Woo; Bohez, Erik L. J.

    2009-01-01

    Free-form surfaces are widely used in CAD systems to describe the part surface. Today, the most advanced machining of free-form surfaces is done in five-axis machining using a flat end mill cutter. However, five-axis machining requires complex algorithms for gouging avoidance and collision detection, and powerful computer-aided manufacturing (CAM) systems to support various operations. An accurate and efficient method is proposed for five-axis CNC machining of free-form surfaces. The proposed algorithm selects the best tool and plans the tool path autonomously using curvature matching and integrated inverse kinematics of the machine tool. The new algorithm uses the real cutter-contact tool path generated by the inverse kinematics and not the linearized piecewise real cutter-location tool path.

  5. Blocky inversion of multichannel elastic impedance for elastic parameters

    Science.gov (United States)

    Mozayan, Davoud Karami; Gholami, Ali; Siahkoohi, Hamid Reza

    2018-04-01

    Petrophysical description of reservoirs requires proper knowledge of elastic parameters like P- and S-wave velocities (Vp and Vs) and density (ρ), which can be retrieved from pre-stack seismic data using the concept of elastic impedance (EI). We propose an inversion algorithm which recovers elastic parameters from pre-stack seismic data in two sequential steps. In the first step, using the multichannel blind seismic inversion method (exploited recently for recovering acoustic impedance from post-stack seismic data), high-resolution blocky EI models are obtained directly from partial angle-stacks. Using an efficient total-variation (TV) regularization, each angle-stack is inverted independently in a multichannel form without prior knowledge of the corresponding wavelet. The second step involves inversion of the resulting EI models for elastic parameters. Mathematically, under some assumptions, the EI's are linearly described by the elastic parameters in the logarithm domain. Thus a linear weighted least squares inversion is employed to perform this step. Accuracy of the concept of elastic impedance in predicting reflection coefficients at low and high angles of incidence is compared with that of exact Zoeppritz elastic impedance and the role of low frequency content in the problem is discussed. The performance of the proposed inversion method is tested using synthetic 2D data sets obtained from the Marmousi model and also 2D field data sets. The results confirm the efficiency and accuracy of the proposed method for inversion of pre-stack seismic data.

  6. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing its inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
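
    To make the computational motivation concrete, the sketch below illustrates only the assumed structure, not the authors' COP solver: if the inverse covariance is a diagonal plus a low-rank part, applying the covariance (its inverse) needs only a small k x k solve via the Woodbury identity. All names and sizes are hypothetical.

```python
# Minimal sketch of the low-rank-plus-diagonal assumption behind COP (not the
# COP greedy solver itself): for Theta = D + V V^T with V of shape (p, k),
# k << p, the covariance Theta^{-1} can be applied cheaply with the Woodbury
# identity instead of inverting a dense p x p matrix.
import numpy as np

def apply_covariance(diag, V, x):
    """Compute Theta^{-1} x for Theta = diag(diag) + V V^T via Woodbury."""
    k = V.shape[1]
    Dinv_x = x / diag
    Dinv_V = V / diag[:, None]
    small = np.eye(k) + V.T @ Dinv_V          # only a k x k system
    return Dinv_x - Dinv_V @ np.linalg.solve(small, V.T @ Dinv_x)

rng = np.random.default_rng(1)
p, k = 2000, 5
diag = 1.0 + rng.random(p)
V = rng.standard_normal((p, k)) / np.sqrt(p)
x = rng.standard_normal(p)

y = apply_covariance(diag, V, x)
# check against a dense solve with the explicitly formed Theta
Theta = np.diag(diag) + V @ V.T
print(np.allclose(y, np.linalg.solve(Theta, x)))   # True
```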

  7. Adding Image Constraints to Inverse Kinematics for Human Motion Capture

    Science.gov (United States)

    Jaume-i-Capó, Antoni; Varona, Javier; González-Hidalgo, Manuel; Perales, Francisco J.

    2009-12-01

    In order to study human motion in biomechanical applications, a critical component is to accurately obtain the 3D joint positions of the user's body. Computer vision and inverse kinematics are used to achieve this objective without markers or special devices attached to the body. The problem of these systems is that the inverse kinematics is "blinded" with respect to the projection of body segments into the images used by the computer vision algorithms. In this paper, we present how to add image constraints to inverse kinematics in order to estimate human motion. Specifically, we explain how to define a criterion to use images in order to guide the posture reconstruction of the articulated chain. Tests with synthetic images show how the scheme performs well in an ideal situation. In order to test its potential in real situations, more experiments with task specific image sequences are also presented. By means of a quantitative study of different sequences, the results obtained show how this approach improves the performance of inverse kinematics in this application.

  8. Pre-clinical evaluation of an inverse planning module for segmental MLC based IMRT delivery

    International Nuclear Information System (INIS)

    Georg, Dietmar; Kroupa, Bernhard

    2002-01-01

    Phantom tests are performed for pre-clinical evaluation of a commercial inverse planning system (HELAX TMS, V 6.0) for segmented multileaf collimator (MLC) intensity modulated radiotherapy (IMRT) delivery. The optimization module provides two optimization algorithms, the target primary feasibility algorithm and the weighted feasibility algorithm; only the latter allows the user to specify weights for structures. In the first series, single beam tests are performed to evaluate the outcome of inverse planning in terms of plausibility for the following situations: oblique incidence, presence of inhomogeneities, multiple targets at different depths and multiple targets with different desired doses. Additionally, for these tests a manual plan is made for comparison. In the absence of organs at risk, both optimization algorithms are found to assign the highest priority to low dose constraints for targets. In the second series, tests resembling clinically relevant configurations (simultaneous boost and concave target with critical organ) are performed with multiple beam arrangements in order to determine the impact of the system's configuration on inverse planning. It is found that the definition of certain segment number and segment size limitations does not largely compromise treatment plans when using multiple beams. On the other hand, these limitations are important for delivery efficiency and dosimetry. For the number of iterations and voxels per volume of interest, standard values in the system's configuration are considered to be sufficient. Additionally, it is demonstrated that precautions must be taken to precisely define treatment goals when using computerized treatment optimization. Similar phantom tests could be used for a direct dosimetric verification of all steps from inverse treatment planning to IMRT delivery. (note)

  9. An adaptive ANOVA-based PCKF for high-dimensional nonlinear inverse modeling

    Science.gov (United States)

    Li, Weixuan; Lin, Guang; Zhang, Dongxiao

    2014-02-01

    The probabilistic collocation-based Kalman filter (PCKF) is a recently developed approach for solving inverse problems. It resembles the ensemble Kalman filter (EnKF) in every aspect, except that it represents and propagates model uncertainty by polynomial chaos expansion (PCE) instead of an ensemble of model realizations. Previous studies have shown PCKF is a more efficient alternative to EnKF for many data assimilation problems. However, the accuracy and efficiency of PCKF depend on an appropriate truncation of the PCE series. Having more polynomial chaos basis functions in the expansion helps to capture uncertainty more accurately but increases computational cost. Selection of basis functions is particularly important for high-dimensional stochastic problems because the number of polynomial chaos basis functions required to represent model uncertainty grows dramatically as the number of input parameters (random dimensions) increases. In classic PCKF algorithms, the PCE basis functions are pre-set based on users' experience. Also, for sequential data assimilation problems, the basis functions kept in the PCE expression remain unchanged in different Kalman filter loops, which could limit the accuracy and computational efficiency of classic PCKF algorithms. To address this issue, we present a new algorithm that adaptively selects PCE basis functions for different problems and automatically adjusts the number of basis functions in different Kalman filter loops. The algorithm is based on adaptive functional ANOVA (analysis of variance) decomposition, which approximates a high-dimensional function with the summation of a set of low-dimensional functions. Thus, instead of expanding the original model into PCE, we implement the PCE expansion on these low-dimensional functions, which is much less costly. We also propose a new adaptive criterion for ANOVA that is more suited for solving inverse problems. The new algorithm was tested with different examples and demonstrated

  10. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    Science.gov (United States)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work

  11. VLSI IMPLEMENTATION OF NOVEL ROUND KEYS GENERATION SCHEME FOR CRYPTOGRAPHY APPLICATIONS BY ERROR CONTROL ALGORITHM

    Directory of Open Access Journals (Sweden)

    B. SENTHILKUMAR

    2015-05-01

    Full Text Available A novel implementation of code based cryptography (Cryptocoding technique for multi-layer key distribution scheme is presented. VLSI chip is designed for storing information on generation of round keys. New algorithm is developed for reduced key size with optimal performance. Error Control Algorithm is employed for both generation of round keys and diffusion of non-linearity among them. Two new functions for bit inversion and its reversal are developed for cryptocoding. Probability of retrieving original key from any other round keys is reduced by diffusing nonlinear selective bit inversions on round keys. Randomized selective bit inversions are done on equal length of key bits by Round Constant Feedback Shift Register within the error correction limits of chosen code. Complexity of retrieving the original key from any other round keys is increased by optimal hardware usage. Proposed design is simulated and synthesized using VHDL coding for Spartan3E FPGA and results are shown. Comparative analysis is done between 128 bit Advanced Encryption Standard round keys and proposed round keys for showing security strength of proposed algorithm. This paper concludes that chip based multi-layer key distribution of proposed algorithm is an enhanced solution to the existing threats on cryptography algorithms.

  12. [Research on respiration course of human at different postures by electrical impedance tomography].

    Science.gov (United States)

    Chen, Xiaoyan; Wu, Jun; Wang, Huaxiang; Li, Da

    2010-10-01

    In this paper, the respiration courses of a human at different postures are reconstructed by electrical impedance tomography (EIT). The conjugate gradient least squares (CGLS) algorithm is applied to reconstruct the resistivity distribution during the respiration courses, and the EIT images taken from a human at flat-lying, left-lying, right-lying, sitting and prone postures are reconstructed and compared. The relative changes of the resistivity in the region of interest (ROI) are analyzed to show the influence of the different postures. Results show that the changes in posture are the most influential factor for the reconstructions, and the EIT images vary with the postures. For a human in the flat-lying posture, the left and right lungs simultaneously have a larger pulmonary ventilation volume, and the EIT-measured data are of lower variability.
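
    A minimal sketch of the CGLS iteration used for this kind of reconstruction is given below: it minimizes ||Jm - d|| using only products with J and its transpose, and the iteration count acts as the regularization knob. The matrix J and data d are random stand-ins for an EIT sensitivity matrix and boundary-voltage data, not the authors' measurement setup.

```python
# Minimal sketch of the conjugate gradient least squares (CGLS) iteration,
# applied to a random stand-in for the EIT sensitivity matrix and data.
import numpy as np

def cgls(J, d, n_iter):
    m = np.zeros(J.shape[1])
    r = d - J @ m
    s = J.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = J @ p
        alpha = gamma / (q @ q)
        m += alpha * p
        r -= alpha * q
        s = J.T @ r
        gamma_new = s @ s
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return m

rng = np.random.default_rng(2)
J = rng.standard_normal((208, 576))       # stand-in sensitivity matrix
m_true = rng.standard_normal(576)
d = J @ m_true + 0.01 * rng.standard_normal(208)

m_rec = cgls(J, d, n_iter=30)             # early stopping regularizes the solution
print(np.linalg.norm(J @ m_rec - d))      # residual after 30 iterations
```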

  13. Visco-elastic controlled-source full waveform inversion without surface waves

    Science.gov (United States)

    Paschke, Marco; Krause, Martin; Bleibinhaus, Florian

    2016-04-01

    We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.

  14. FULL-PHYSICS INVERSE LEARNING MACHINE FOR SATELLITE REMOTE SENSING OF OZONE PROFILE SHAPES AND TROPOSPHERIC COLUMNS

    Directory of Open Access Journals (Sweden)

    J. Xu

    2018-04-01

    Full Text Available Characterizing vertical distributions of ozone from nadir-viewing satellite measurements is known to be challenging, particularly the ozone information in the troposphere. A novel retrieval algorithm called Full-Physics Inverse Learning Machine (FP-ILM), has been developed at DLR in order to estimate ozone profile shapes based on machine learning techniques. In contrast to traditional inversion methods, the FP-ILM algorithm formulates the profile shape retrieval as a classification problem. Its implementation comprises a training phase to derive an inverse function from synthetic measurements, and an operational phase in which the inverse function is applied to real measurements. This paper extends the ability of the FP-ILM retrieval to derive tropospheric ozone columns from GOME-2 measurements. Results of total and tropical tropospheric ozone columns are compared with the ones using the official GOME Data Processing (GDP) product and the convective-cloud-differential (CCD) method, respectively. Furthermore, the FP-ILM framework will be used for the near-real-time processing of the new European Sentinel sensors with their unprecedented spectral and spatial resolution and corresponding large increases in the amount of data.

  15. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    Directory of Open Access Journals (Sweden)

    Rohit Shukla

    2018-03-01

    Full Text Available Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.

  16. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    Science.gov (United States)

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483

  17. a method of gravity and seismic sequential inversion and its GPU implementation

    Science.gov (United States)

    Liu, G.; Meng, X.

    2011-12-01

    In this abstract, we introduce a gravity and seismic sequential inversion method to invert for density and velocity together. For the gravity inversion, we use an iterative method based on a correlation imaging algorithm; for the seismic inversion, we use full waveform inversion. The link between density and velocity is an empirical formula called the Gardner equation, and for large volumes of data we use the GPU to accelerate the computation. The gravity inversion method is based on a correlation imaging algorithm and is also iterative: first we calculate the correlation image of the observed gravity anomaly, which takes values between -1 and +1, and multiply it by a small density increment to obtain the initial density model. We then compute a forward result with this initial model, calculate the correlation image of the misfit between the observed and forward data, multiply this correlation image by a small density increment and add it to the current model, and repeat the procedure until we obtain the final inverted density model. For the seismic inversion method, we use an approach based on the linearity of the acoustic wave equation written in the frequency domain; starting from an initial velocity model, we can obtain a good velocity result. In the sequential inversion of gravity and seismic data, we need a link formula to convert between density and velocity; in our method, we use the Gardner equation. Driven by the insatiable market demand for real-time, high-definition 3D images, the programmable NVIDIA Graphics Processing Unit (GPU) has been developed as a co-processor of the CPU for high-performance computing. Compute Unified Device Architecture (CUDA) is a parallel programming model and software environment provided by NVIDIA, designed to overcome the challenge of using traditional general-purpose GPU computing while maintaining a low learning curve for programmers familiar with standard programming languages such as C. In our inversion processing
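
    The density-velocity link mentioned above is the Gardner relation; a minimal sketch follows, using the commonly quoted textbook coefficients (a = 0.31, b = 0.25 for Vp in m/s and density in g/cm3), which may differ from the coefficients actually used in this work.

```python
# Minimal sketch of the Gardner density-velocity link used to couple the two
# inversions: rho = a * Vp**b. Textbook constants a = 0.31, b = 0.25 are
# assumed here; the abstract does not state the coefficients it uses.
import numpy as np

A, B = 0.31, 0.25

def velocity_to_density(vp_ms):
    """Convert P-wave velocity (m/s) to density (g/cm^3)."""
    return A * vp_ms**B

def density_to_velocity(rho_gcc):
    """Invert the relation: density (g/cm^3) back to velocity (m/s)."""
    return (rho_gcc / A)**(1.0 / B)

vp = np.array([1500.0, 2500.0, 4000.0])
rho = velocity_to_density(vp)
print(rho)                       # approximately [1.93, 2.19, 2.46] g/cm^3
print(density_to_velocity(rho))  # recovers the input velocities
```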

  18. A Parameterized Inversion Model for Soil Moisture and Biomass from Polarimetric Backscattering Coefficients

    Science.gov (United States)

    Truong-Loi, My-Linh; Saatchi, Sassan; Jaruwatanadilok, Sermsak

    2012-01-01

    A semi-empirical algorithm for the retrieval of soil moisture, root mean square (RMS) height and biomass from polarimetric SAR data is explained and analyzed in this paper. The algorithm is a simplification of the distorted Born model. It takes into account the physical scattering phenomenon and has three major components: volume, double-bounce and surface. This simplified model uses the three backscattering coefficients (σHH, σHV and σVV) at low frequency (P-band). The inversion process uses the Levenberg-Marquardt non-linear least-squares method to estimate the structural parameters. The estimation process is entirely explained in this paper, from initialization of the unknowns to retrievals. A sensitivity analysis is also done in which the initial values in the inversion process are varied randomly. The results show that the inversion process is not really sensitive to initial values, and a major part of the retrievals has a root-mean-square error lower than 5% for soil moisture, 24 Mg/ha for biomass and 0.49 cm for roughness, considering a soil moisture of 40%, roughness equal to 3 cm and biomass varying from 0 to 500 Mg/ha with a mean of 161 Mg/ha.
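
    A sketch of the retrieval step is given below, using a purely hypothetical linearized three-channel forward model in place of the paper's simplified distorted Born model; the only point is to show a Levenberg-Marquardt least-squares fit of (soil moisture, RMS height, biomass) to the three backscattering coefficients. All coefficients and starting values are invented for illustration.

```python
# Minimal sketch of a Levenberg-Marquardt retrieval from (sigma_HH, sigma_HV,
# sigma_VV). The forward model is a made-up linear mapping, NOT the paper's
# simplified distorted Born model.
import numpy as np
from scipy.optimize import least_squares

def forward(params):
    """Hypothetical mapping from (soil moisture, RMS height, biomass)
    to the three backscattering coefficients in linear power units."""
    mv, h, w = params
    return np.array([0.060 * mv + 0.0040 * h + 0.00002 * w,
                     0.008 * mv + 0.0010 * h + 0.00012 * w,
                     0.090 * mv + 0.0025 * h + 0.00001 * w])

truth = np.array([0.40, 3.0, 161.0])        # mv (vol. fraction), h (cm), biomass (Mg/ha)
obs = forward(truth)                        # noise-free synthetic observations

fit = least_squares(lambda p: forward(p) - obs, x0=[0.20, 1.0, 50.0], method='lm')
print(fit.x)                                # recovers `truth` for this noise-free toy case
```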

  19. Implementation of a Monte Carlo based inverse planning model for clinical IMRT with MCNP code

    International Nuclear Information System (INIS)

    He, Tongming Tony

    2003-01-01

    Inaccurate dose calculations and limitations of optimization algorithms in inverse planning introduce systematic and convergence errors to treatment plans. This work was to implement a Monte Carlo based inverse planning model for clinical IMRT aiming to minimize the aforementioned errors. The strategy was to precalculate the dose matrices of beamlets in a Monte Carlo based method followed by the optimization of beamlet intensities. The MCNP 4B (Monte Carlo N-Particle version 4B) code was modified to implement selective particle transport and dose tallying in voxels and efficient estimation of statistical uncertainties. The resulting performance gain was over eleven thousand times. Due to concurrent calculation of multiple beamlets of individual ports, hundreds of beamlets in an IMRT plan could be calculated within a practical length of time. A finite-sized point source model provided a simple and accurate modeling of treatment beams. The dose matrix calculations were validated through measurements in phantoms. Agreements were better than 1.5% or 0.2 cm. The beamlet intensities were optimized using a parallel platform based optimization algorithm that was capable of escape from local minima and preventing premature convergence. The Monte Carlo based inverse planning model was applied to clinical cases. The feasibility and capability of Monte Carlo based inverse planning for clinical IMRT was demonstrated. Systematic errors in treatment plans of a commercial inverse planning system were assessed in comparison with the Monte Carlo based calculations. Discrepancies in tumor doses and critical structure doses were up to 12% and 17%, respectively. The clinical importance of Monte Carlo based inverse planning for IMRT was demonstrated

  20. Renormalized nonlinear sensitivity kernel and inverse thin-slab propagator in T-matrix formalism for wave-equation tomography

    International Nuclear Information System (INIS)

    Wu, Ru-Shan; Wang, Benfeng; Hu, Chunhua

    2015-01-01

    We derived the renormalized nonlinear sensitivity operator and the related inverse thin-slab propagator (ITSP) for nonlinear tomographic waveform inversion based on the theory of nonlinear partial derivative operator and its De Wolf approximation. The inverse propagator is based on a renormalization procedure to the forward and inverse transition matrix scattering series. The ITSP eliminates the divergence of the inverse Born series for strong perturbations by stepwise partial summation (renormalization). Numerical tests showed that the inverse Born T-series starts to diverge at moderate perturbation (20% for the given model of Gaussian ball with a radius of 5 wavelength), while the ITSP has no divergence problem for any strong perturbations (up to 100% perturbation for test model). In addition, the ITSP is a non-iterative, marching algorithm with only one sweep, and therefore very efficient in comparison with the iterative inversion based on the inverse-Born scattering series. This convergence and efficiency improvement has potential applications to the iterative procedure of waveform inversion. (paper)

  1. Review on solving the inverse problem in EEG source analysis

    Directory of Open Access Journals (Sweden)

    Fabri Simon G

    2008-11-01

    Full Text Available Abstract In this primer, we give a review of the inverse problem for EEG source localization. This is intended for the researchers new in the field to get insight in the state-of-the-art techniques used to find approximate solutions of the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these different inverse solutions. The authors also include the results of a Monte-Carlo analysis which they performed to compare four non parametric algorithms and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. This paper starts off with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods which were developed to solve the EEG inverse problem, mainly the non parametric and parametric methods. The main difference between the two is to whether a fixed number of dipoles is assumed a priori or not. Various techniques falling within these categories are described including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF, SSLOFO and ALF for non parametric methods and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for parametric methods. From a review of the performance of these techniques as documented in the literature, one could conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, higher resolution algorithms such as MUSIC or FINES are however preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA has given superior results. The Monte-Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF

  2. GBIS (Geodetic Bayesian Inversion Software): Rapid Inversion of InSAR and GNSS Data to Estimate Surface Deformation Source Parameters and Uncertainties

    Science.gov (United States)

    Bagnardi, M.; Hooper, A. J.

    2017-12-01

    Inversions of geodetic observational data, such as Interferometric Synthetic Aperture Radar (InSAR) and Global Navigation Satellite System (GNSS) measurements, are often performed to obtain information about the source of surface displacements. Inverse problem theory has been applied to study magmatic processes, the earthquake cycle, and other phenomena that cause deformation of the Earth's interior and of its surface. Together with increasing improvements in data resolution, both spatial and temporal, new satellite missions (e.g., European Commission's Sentinel-1 satellites) are providing the unprecedented opportunity to access space-geodetic data within hours from their acquisition. To truly take advantage of these opportunities we must become able to interpret geodetic data in a rapid and robust manner. Here we present the open-source Geodetic Bayesian Inversion Software (GBIS; available for download at http://comet.nerc.ac.uk/gbis). GBIS is written in Matlab and offers a series of user-friendly and interactive pre- and post-processing tools. For example, an interactive function has been developed to estimate the characteristics of noise in InSAR data by calculating the experimental semi-variogram. The inversion software uses a Markov-chain Monte Carlo algorithm, incorporating the Metropolis-Hastings algorithm with adaptive step size, to efficiently sample the posterior probability distribution of the different source parameters. The probabilistic Bayesian approach allows the user to retrieve estimates of the optimal (best-fitting) deformation source parameters together with the associated uncertainties produced by errors in the data (and by scaling, errors in the model). The current version of GBIS (V1.0) includes fast analytical forward models for magmatic sources of different geometry (e.g., point source, finite spherical source, prolate spheroid source, penny-shaped sill-like source, and dipping-dike with uniform opening) and for dipping faults with uniform

  3. An improved method of inverse kinematics calculation for a six-link manipulator

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-07-01

    As one method of solving the inverse problem related to a six-link manipulator, an improvement was made to the previously proposed calculation algorithm based on the solution of an algebraic equation of the 24th order. In this paper, the same type of polynomial is derived in the form of an equation of the 16th order, i.e., the order is reduced by 8 compared to the previous algorithm. The accuracy of the solutions was found to be much improved. (author)

  4. Optimal inverse magnetorheological damper modeling using shuffled frog-leaping algorithm–based adaptive neuro-fuzzy inference system approach

    Directory of Open Access Journals (Sweden)

    Xiufang Lin

    2016-08-01

    Full Text Available Magnetorheological dampers have become prominent semi-active control devices for vibration mitigation of structures which are subjected to severe loads. However, the damping force cannot be controlled directly due to the inherent nonlinear characteristics of the magnetorheological dampers. Therefore, for fully exploiting the capabilities of the magnetorheological dampers, one of the challenging aspects is to develop an accurate inverse model which can appropriately predict the input voltage to control the damping force. In this article, a hybrid modeling strategy combining shuffled frog-leaping algorithm and adaptive-network-based fuzzy inference system is proposed to model the inverse dynamic characteristics of the magnetorheological dampers for improving the modeling accuracy. The shuffled frog-leaping algorithm is employed to optimize the premise parameters of the adaptive-network-based fuzzy inference system while the consequent parameters are tuned by a least square estimation method, here known as shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system approach. To evaluate the effectiveness of the proposed approach, the inverse modeling results based on the shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system approach are compared with those based on the adaptive-network-based fuzzy inference system and genetic algorithm–based adaptive-network-based fuzzy inference system approaches. Analysis of variance test is carried out to statistically compare the performance of the proposed methods and the results demonstrate that the shuffled frog-leaping algorithm-based adaptive-network-based fuzzy inference system strategy outperforms the other two methods in terms of modeling (training accuracy and checking accuracy.

  5. Time-domain full waveform inversion using the gradient preconditioning based on transmitted waves energy

    KAUST Repository

    Zhang, Xiao-bo; Tan, Jun; Song, Peng; Li, Jin-shan; Xia, Dong-ming; Liu, Zhao-lun

    2017-01-01

    The gradient preconditioning approach based on seismic wave energy can effectively avoid the huge storage consumption in the gradient preconditioning algorithms based on Hessian matrices in time-domain full waveform inversion (FWI), but the accuracy

  6. Full Waveform Inversion Using Oriented Time Migration Method

    KAUST Repository

    Zhang, Zhendong

    2016-04-12

    Full waveform inversion (FWI) for reflection events is limited by its linearized update requirements, given by a process equivalent to migration. Unless the background velocity model is reasonably accurate, the resulting gradient can have an inaccurate update direction, leading the inversion to converge into what we refer to as local minima of the objective function. In this thesis, I first look into the subject of full model wavenumbers to analyze the root cause of local minima and suggest possible ways to avoid this problem. I then analyze the possibility of recovering the corresponding wavenumber components through existing inversion and migration algorithms. Migration can be taken as a generalized inversion method which mainly retrieves the high-wavenumber part of the model. The conventional impedance inversion method gives a mapping relationship between the migration image (high wavenumber) and the model parameters (full wavenumber) and thus provides a possible cascaded inversion strategy to retrieve the full wavenumber components from seismic data. In the proposed approach, considering a mild lateral variation in the model, I find an analytical Frechet derivative corresponding to the new objective function. In this approach, the gradient is given by the oriented time-domain imaging method, which is independent of the background velocity. Specifically, I apply the oriented time-domain imaging (which depends on the reflection slope instead of a background velocity) to the data residual to obtain the geometrical features of the velocity perturbation. Assuming that density is constant, the conventional 1D impedance inversion method is also applicable for 2D or 3D velocity inversion within the process of FWI. This method is not only capable of inverting for velocity, but it is also capable of retrieving anisotropic parameters relying on linearized representations of the reflection response. To eliminate the cross-talk artifacts between different parameters, I

  7. Three-dimensional inversion of multisource array electromagnetic data

    Science.gov (United States)

    Tartaras, Efthimios

    Three-dimensional (3-D) inversion is increasingly important for the correct interpretation of geophysical data sets in complex environments. To this effect, several approximate solutions have been developed that allow the construction of relatively fast inversion schemes. One such method that is fast and provides satisfactory accuracy is the quasi-linear (QL) approximation. It has, however, the drawback that it is source-dependent and, therefore, impractical in situations where multiple transmitters in different positions are employed. I have, therefore, developed a localized form of the QL approximation that is source-independent. This so-called localized quasi-linear (LQL) approximation can have a scalar, a diagonal, or a full tensor form. Numerical examples of its comparison with the full integral equation solution, the Born approximation, and the original QL approximation are given. The objective behind developing this approximation is to use it in a fast 3-D inversion scheme appropriate for multisource array data such as those collected in airborne surveys, cross-well logging, and other similar geophysical applications. I have developed such an inversion scheme using the scalar and diagonal LQL approximation. It reduces the original nonlinear inverse electromagnetic (EM) problem to three linear inverse problems. The first of these problems is solved using a weighted regularized linear conjugate gradient method, whereas the last two are solved in the least squares sense. The algorithm I developed provides the option of obtaining either smooth or focused inversion images. I have applied the 3-D LQL inversion to synthetic 3-D EM data that simulate a helicopter-borne survey over different earth models. The results demonstrate the stability and efficiency of the method and show that the LQL approximation can be a practical solution to the problem of 3-D inversion of multisource array frequency-domain EM data. I have also applied the method to helicopter-borne EM

  8. Inversion of time-domain induced polarization data based on time-lapse concept

    Science.gov (United States)

    Kim, Bitnarae; Nam, Myung Jin; Kim, Hee Joon

    2018-05-01

    Induced polarization (IP) surveys, measuring overvoltage phenomena of the medium, are widely and increasingly performed not only for exploration of mineral resources but also for engineering applications. Among several IP survey methods such as time-domain, frequency-domain and spectral IP surveys, this study introduces a novel inversion method for time-domain IP data to recover the chargeability structure of the target medium. The inversion method employs the concept of 4D inversion of time-lapse resistivity data sets, considering the fact that the measured voltage in a time-domain IP survey is distorted by IP effects to increase from the instantaneous voltage measured at the moment the source current injection starts. Even though the increase is saturated very fast, we can consider the saturated and instantaneous voltages as a time-lapse data set. The 4D inversion method is one of the most powerful methods for inverting time-lapse resistivity data sets. Using the developed IP inversion algorithm, we invert not only synthetic but also field IP data to show the effectiveness of the proposed method by comparing the recovered chargeability models with those from the linear inversion that was used for the inversion of the field data in a previous study. Numerical results confirm that the proposed inversion method generates reliable chargeability models even though the anomalous bodies have large IP effects.

  9. Efficient generalized Golub-Kahan based methods for dynamic inverse problems

    Science.gov (United States)

    Chung, Julianne; Saibaba, Arvind K.; Brown, Matthew; Westman, Erik

    2018-02-01

    We consider efficient methods for computing solutions to and estimating uncertainties in dynamic inverse problems, where the parameters of interest may change during the measurement procedure. Compared to static inverse problems, incorporating prior information in both space and time in a Bayesian framework can become computationally intensive, in part, due to the large number of unknown parameters. In these problems, explicit computation of the square root and/or inverse of the prior covariance matrix is not possible, so we consider efficient, iterative, matrix-free methods based on the generalized Golub-Kahan bidiagonalization that allow automatic regularization parameter and variance estimation. We demonstrate that these methods for dynamic inversion can be more flexible than standard methods and develop efficient implementations that can exploit structure in the prior, as well as possible structure in the forward model. Numerical examples from photoacoustic tomography, space-time deblurring, and passive seismic tomography demonstrate the range of applicability and effectiveness of the described approaches. Specifically, in passive seismic tomography, we demonstrate our approach on both synthetic and real data. To demonstrate the scalability of our algorithm, we solve a dynamic inverse problem with approximately 43 000 measurements and 7.8 million unknowns in under 40 s on a standard desktop.
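
    The core building block named here can be illustrated with the standard (non-generalized) Golub-Kahan bidiagonalization, which needs only matrix-vector products with A and its transpose; the generalized variant in the paper additionally incorporates the noise and prior covariances, which this sketch omits. The matrix sizes below are arbitrary stand-ins.

```python
# Minimal sketch of (standard, not generalized) Golub-Kahan bidiagonalization:
# after k steps, A @ V_k = U_{k+1} @ B_k with orthonormal-column U, V and a
# lower-bidiagonal B, built from matrix-vector products with A and A.T only.
import numpy as np

def golub_kahan(A, b, k):
    m, n = A.shape
    U = np.zeros((m, k + 1))
    V = np.zeros((n, k + 1))
    alphas = np.zeros(k + 1)
    betas = np.zeros(k)
    U[:, 0] = b / np.linalg.norm(b)
    r = A.T @ U[:, 0]
    alphas[0] = np.linalg.norm(r)
    V[:, 0] = r / alphas[0]
    for i in range(k):
        p = A @ V[:, i] - alphas[i] * U[:, i]
        betas[i] = np.linalg.norm(p)
        U[:, i + 1] = p / betas[i]
        r = A.T @ U[:, i + 1] - betas[i] * V[:, i]
        alphas[i + 1] = np.linalg.norm(r)
        V[:, i + 1] = r / alphas[i + 1]
    B = np.zeros((k + 1, k))
    B[np.arange(k), np.arange(k)] = alphas[:k]        # diagonal
    B[np.arange(1, k + 1), np.arange(k)] = betas      # subdiagonal
    return U, V[:, :k], B

rng = np.random.default_rng(4)
A = rng.standard_normal((300, 120))
b = rng.standard_normal(300)
U, V, B = golub_kahan(A, b, k=10)
print(np.allclose(A @ V, U @ B))    # True: the projected problem is only 11 x 10
```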

  10. A Monte Carlo algorithm for the Vavilov distribution

    International Nuclear Information System (INIS)

    Yi, Chul-Young; Han, Hyon-Soo

    1999-01-01

    Using the convolution property of the inverse Laplace transform, an improved Monte Carlo algorithm for the Vavilov energy-loss straggling distribution of the charged particle is developed, which is relatively simple and gives enough accuracy to be used for most Monte Carlo applications

  11. On the use of double differences in inversion of surface movement measurements

    NARCIS (Netherlands)

    Fokker, P.A.; Thienen-Visser, K. van

    2015-01-01

    Surface movement data can be used in data assimilation or inversion exercises to improve the level of knowledge of a compacting reservoir. We have designed, implemented and tested a new algorithm that uses measured optical height differences directly, without having to translate them to heights with

  12. A Fast DCT Algorithm for Watermarking in Digital Signal Processor

    Directory of Open Access Journals (Sweden)

    S. E. Tsai

    2017-01-01

    Full Text Available Discrete cosine transform (DCT has been an international standard in Joint Photographic Experts Group (JPEG format to reduce the blocking effect in digital image compression. This paper proposes a fast discrete cosine transform (FDCT algorithm that utilizes the energy compactness and matrix sparseness properties in frequency domain to achieve higher computation performance. For a JPEG image of 8×8 block size in spatial domain, the algorithm decomposes the two-dimensional (2D DCT into one pair of one-dimensional (1D DCTs with transform computation in only 24 multiplications. The 2D spatial data is a linear combination of the base image obtained by the outer product of the column and row vectors of cosine functions so that inverse DCT is as efficient. Implementation of the FDCT algorithm shows that embedding a watermark image of 32 × 32 block pixel size in a 256 × 256 digital image can be completed in only 0.24 seconds and the extraction of watermark by inverse transform is within 0.21 seconds. The proposed FDCT algorithm is shown more efficient than many previous works in computation.
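
    The separability that the FDCT exploits can be sketched with SciPy's reference DCT: the 2-D transform of an 8×8 block is a pair of 1-D DCT passes, and the inverse mirrors it. This is only the textbook separable transform, not the paper's 24-multiplication fast kernel.

```python
# Minimal sketch of a separable 2-D DCT / inverse DCT on an 8x8 block,
# using SciPy's reference 1-D DCT rather than the paper's fast kernel.
import numpy as np
from scipy.fft import dct, idct

def dct2(block):
    """2-D DCT as 1-D DCTs along columns, then rows."""
    return dct(dct(block, axis=0, norm='ortho'), axis=1, norm='ortho')

def idct2(coeff):
    """Inverse 2-D DCT, mirroring the forward pair of 1-D passes."""
    return idct(idct(coeff, axis=1, norm='ortho'), axis=0, norm='ortho')

block = np.random.default_rng(5).integers(0, 256, (8, 8)).astype(float)
coeff = dct2(block)
print(np.allclose(idct2(coeff), block))   # True: the transform pair is lossless
```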

  13. An inverse source location algorithm for radiation portal monitor applications

    International Nuclear Information System (INIS)

    Miller, Karen A.; Charlton, William S.

    2010-01-01

    Radiation portal monitors are being deployed at border crossings throughout the world to prevent the smuggling of nuclear and radiological materials; however, a tension exists between security and the free-flow of commerce. Delays at ports-of-entry have major economic implications, so it is imperative to minimize portal monitor screening time. We have developed an algorithm to locate a radioactive source using a distributed array of detectors, specifically for use at border crossings. To locate the source, we formulated an optimization problem where the objective function describes the least-squares difference between the actual and predicted detector measurements. The predicted measurements are calculated by solving the 3-D deterministic neutron transport equation given an estimated source position. The source position is updated using the steepest descent method, where the gradient of the objective function with respect to the source position is calculated using adjoint transport calculations. If the objective function is smaller than the convergence criterion, then the source position has been identified. This paper presents the derivation of the underlying equations in the algorithm as well as several computational test cases used to characterize its accuracy.
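
    A toy sketch of the outer search loop is given below: a 1/r^2 point-source response stands in for the deterministic transport solve, a central finite-difference gradient stands in for the adjoint calculation, and a simple backtracking line search (an addition for robustness, not described in the paper) keeps the steepest-descent update stable. The detector layout, source strength and step sizes are all hypothetical.

```python
# Minimal sketch of steepest-descent source localization with a toy forward
# model; the real algorithm uses a 3-D transport solve and adjoint gradients.
import numpy as np

detectors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])  # positions (m)

def predicted_counts(src, strength=1.0e4):
    """Toy 1/r^2 point-source response standing in for the transport solve."""
    r2 = np.sum((detectors - src) ** 2, axis=1)
    return strength / r2

def objective(src, measured):
    """Least-squares misfit between predicted and measured detector counts."""
    r = predicted_counts(src) - measured
    return 0.5 * r @ r

def gradient(src, measured, h=1e-5):
    """Central finite differences standing in for the adjoint gradient."""
    g = np.zeros(2)
    for i in range(2):
        e = np.zeros(2)
        e[i] = h
        g[i] = (objective(src + e, measured) - objective(src - e, measured)) / (2 * h)
    return g

true_src = np.array([6.5, 3.0])
measured = predicted_counts(true_src)

src = np.array([5.0, 5.0])                          # initial source-position guess
for _ in range(100):                                # steepest-descent iterations
    g = gradient(src, measured)
    step = 1e-4
    while objective(src - step * g, measured) >= objective(src, measured) and step > 1e-15:
        step *= 0.5                                 # simple backtracking line search
    src = src - step * g
print(src)                                          # approaches true_src
```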

  14. A Frequency Matching Method: Solving Inverse Problems by Use of Geologically Realistic Prior Information

    DEFF Research Database (Denmark)

    Lange, Katrine; Frydendall, Jan; Cordua, Knud Skou

    2012-01-01

    The frequency matching method defines a closed form expression for a complex prior that quantifies the higher order statistics of a proposed solution model to an inverse problem. While existing solution methods to inverse problems are capable of sampling the solution space while taking into account arbitrarily complex a priori information defined by sample algorithms, it is not possible to directly compute the maximum a posteriori model, as the prior probability of a solution model cannot be expressed. We demonstrate how the frequency matching method enables us to compute the maximum a posteriori solution model to an inverse problem by using a priori information based on multiple point statistics learned from training images. We demonstrate the applicability of the suggested method on a synthetic tomographic crosshole inverse problem.

  15. Inversion of Gravity Anomalies Using Primal-Dual Interior Point Methods

    Directory of Open Access Journals (Sweden)

    Aaron A. Velasco

    2016-06-01

    Full Text Available Structural inversion of gravity datasets based on the use of density anomalies to derive robust images of the subsurface (delineating lithologies and their boundaries) constitutes a fundamental non-invasive tool for geological exploration. The use of experimental techniques in geophysics to estimate and interpret differences in the substructure based on its density properties has proven efficient; however, the inherent non-uniqueness associated with most geophysical datasets makes this the ideal scenario for the use of recently developed robust constrained optimization techniques. We present a constrained optimization approach for a least squares inversion problem aimed to characterize 2-Dimensional Earth density structure models based on Bouguer gravity anomalies. The proposed formulation is solved with a Primal-Dual Interior-Point method including equality and inequality physical and structural constraints. We validate our results using synthetic density crustal structure models with varying complexity and illustrate the behavior of the algorithm using different initial density structure models and increasing noise levels in the observations. Based on these implementations, we conclude that the algorithm using Primal-Dual Interior-Point methods is robust, and its results always honor the geophysical constraints. Some of the advantages of using this approach for structural inversion of gravity data are the incorporation of a priori information related to the model parameters (coming from actual physical properties of the subsurface) and the reduction of the solution space contingent on these boundary conditions.
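
    As a toy illustration of imposing inequality (bound) constraints on the recovered density contrasts, the sketch below uses SciPy's bounded least-squares solver rather than the paper's Primal-Dual Interior-Point method; the forward matrix, bounds and noise level are arbitrary stand-ins for a discretized Bouguer-anomaly problem.

```python
# Minimal sketch of a bound-constrained least-squares gravity inversion.
# This is NOT the paper's Primal-Dual Interior-Point solver; it only shows how
# physical bounds on density contrasts can be honored in the inversion.
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(6)
G = rng.standard_normal((80, 200))                         # stand-in sensitivity matrix
rho_true = np.clip(rng.standard_normal(200), -0.3, 0.5)    # density contrasts (g/cm^3)
d = G @ rho_true + 0.01 * rng.standard_normal(80)          # noisy synthetic anomalies

# inequality constraints: density contrast limited to a plausible range
res = lsq_linear(G, d, bounds=(-0.3, 0.5))
print(res.status, res.x.min(), res.x.max())                # all values honor the bounds
```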

  16. Stabilization Algorithms for Large-Scale Problems

    DEFF Research Database (Denmark)

    Jensen, Toke Koldborg

    2006-01-01

    The focus of the project is on stabilization of large-scale inverse problems where structured models and iterative algorithms are necessary for computing approximate solutions. For this purpose, we study various iterative Krylov methods and their abilities to produce regularized solutions. Some......-curve. This heuristic is implemented as a part of a larger algorithm which is developed in collaboration with G. Rodriguez and P. C. Hansen. Last, but not least, a large part of the project has, in different ways, revolved around the object-oriented Matlab toolbox MOORe Tools developed by PhD Michael Jacobsen. New...

  17. Inverse photoemission

    International Nuclear Information System (INIS)

    Namatame, Hirofumi; Taniguchi, Masaki

    1994-01-01

    Photoelectron spectroscopy is regarded as one of the most powerful techniques since it can measure the occupied electron states almost completely. Inverse photoelectron spectroscopy, on the other hand, is a technique for measuring the unoccupied electron states by using the inverse process of photoelectron spectroscopy, and in principle an experiment similar to photoelectron spectroscopy becomes feasible. The experimental technology for inverse photoelectron spectroscopy has been developed energetically by many research groups so far. At present, work is being carried out on improving the resolution of inverse photoelectron spectroscopy and on developing inverse photoelectron spectrometers with tunable photon energy, but no inverse photoelectron spectrometer for the vacuum ultraviolet region is commercially available. In this report, the principle of inverse photoelectron spectroscopy and the present state of the spectrometers are described, and the direction of future development is explored. As experimental equipment, electron guns, light detectors and so on are explained. As examples of experiments, the inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)

  18. Full waveform inversion for time-distance helioseismology

    International Nuclear Information System (INIS)

    Hanasoge, Shravan M.; Tromp, Jeroen

    2014-01-01

    Inferring interior properties of the Sun from photospheric measurements of the seismic wavefield constitutes the helioseismic inverse problem. Deviations in seismic measurements (such as wave travel times) from their fiducial values estimated for a given model of the solar interior imply that the model is inaccurate. Contemporary inversions in local helioseismology assume that properties of the solar interior are linearly related to measured travel-time deviations. It is widely known, however, that this assumption is invalid for sunspots and active regions and is likely so for supergranular flows. Here, we introduce nonlinear optimization, executed iteratively, as a means of inverting for the subsurface structure of large-amplitude perturbations. Defining the penalty functional as the L2 norm of wave travel-time deviations, we compute the total misfit gradient of this functional with respect to the relevant model parameters at each iteration around the corresponding model. The model is successively improved using either steepest descent, conjugate gradient, or the quasi-Newton limited-memory Broyden-Fletcher-Goldfarb-Shanno algorithm. Performing nonlinear iterations requires privileging pixels (such as those in the near field of the scatterer), a practice that is not compliant with the standard assumption of translational invariance. Measurements for these inversions, although similar in principle to those used in time-distance helioseismology, require some retooling. For the sake of simplicity in illustrating the method, we consider a two-dimensional inverse problem with only a sound-speed perturbation.
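
    The outer optimization loop named here (steepest descent, conjugate gradient, or limited-memory BFGS) can be sketched with SciPy's L-BFGS-B driver on a toy misfit; the forward map and "travel-time" data below are hypothetical stand-ins, and in the actual inversion the gradient comes from adjoint kernel computations rather than finite differences.

```python
# Minimal sketch of minimizing an L2 travel-time misfit with limited-memory
# BFGS. The toy forward model and data are invented for illustration only.
import numpy as np
from scipy.optimize import minimize

observed = np.array([1.2, -0.4, 0.7, 0.1])           # hypothetical travel-time shifts

def forward(model):
    """Toy mildly nonlinear map from sound-speed parameters to travel-time shifts."""
    A = np.array([[1.0, 0.2, 0.0],
                  [0.1, 0.9, 0.3],
                  [0.0, 0.4, 1.1],
                  [0.5, 0.0, 0.6]])
    t = A @ model
    return t + 0.05 * t**2

def misfit(model):
    """Penalty functional: half the squared L2 norm of the travel-time residuals."""
    r = forward(model) - observed
    return 0.5 * r @ r

result = minimize(misfit, x0=np.zeros(3), method='L-BFGS-B')
print(result.x, result.fun)
```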

  19. Algorithm 865

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Reid, John K.; Wasniewski, Jerzy

    2007-01-01

    We present subroutines for the Cholesky factorization of a positive-definite symmetric matrix and for solving corresponding sets of linear equations. They exploit cache memory by using the block hybrid format proposed by the authors in a companion article. The matrix is packed into n(n + 1)/2 real variables, and the speed is usually better than that of the LAPACK algorithm that uses full storage (n^2 variables). Included are subroutines for rearranging a matrix whose upper or lower-triangular part is packed by columns to this format and for the inverse rearrangement. Also included is a kernel
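
    A minimal sketch of the column-packed triangular storage idea (n(n+1)/2 entries instead of n^2) is given below in Python; it unpacks to full storage and uses SciPy's standard Cholesky solve, so it illustrates only the packing convention, not the block hybrid format or the Fortran kernels of the actual subroutines.

```python
# Minimal sketch of column-packed lower-triangular storage and a Cholesky
# solve. Full-storage LAPACK (via SciPy) is used for the factorization, so
# only the packing/unpacking convention is illustrated here.
import numpy as np
from scipy.linalg import cho_factor, cho_solve

def pack_lower(A):
    """Pack the lower triangle of A column by column into n(n+1)/2 values."""
    n = A.shape[0]
    return np.concatenate([A[j:, j] for j in range(n)])

def unpack_lower(ap, n):
    """Rebuild a symmetric matrix from its packed lower triangle."""
    A = np.zeros((n, n))
    pos = 0
    for j in range(n):
        A[j:, j] = ap[pos:pos + n - j]
        pos += n - j
    return A + np.tril(A, -1).T

n = 5
rng = np.random.default_rng(7)
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)                  # positive-definite test matrix
ap = pack_lower(A)                           # 15 stored values instead of 25
b = rng.standard_normal(n)

x = cho_solve(cho_factor(unpack_lower(ap, n), lower=True), b)
print(np.allclose(A @ x, b))                 # True
```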

  20. Optimization of a seven-stage centrifugal compressor by using a quasi-3D inverse design method

    Energy Technology Data Exchange (ETDEWEB)

    Niliahmadabadi, Mahdi; Poursadegh, Farzad [Isfahan University of Technology, Isfahan (Iran, Islamic Republic of)

    2013-11-15

    This paper focuses on the performance improvement of a centrifugal compressor. An inverse design method based on a quasi-3D design approach is formulated to address this concern. The design procedure encompasses two major steps. First, by applying the ball-spine algorithm, which is an inverse design algorithm, on the meridional plane of the impeller, the hub and shroud of the impeller are computed from a modified pressure distribution along them. Second, an original and progressive algorithm is developed for the design of the blade camber line profile on the blade-to-blade planes of the impeller, based on improving the blade loading. A full 3D analysis of the current and the designed compressor is carried out using a Reynolds-averaged Navier-Stokes solver. A comparison between the analysis results of the current and the designed compressor shows that the total-to-total isentropic efficiency and the pressure ratio of the designed compressor under the same operating conditions are enhanced by more than 4.5% and 5%, respectively.

  1. Optimization of a seven-stage centrifugal compressor by using a quasi-3D inverse design method

    International Nuclear Information System (INIS)

    Niliahmadabadi, Mahdi; Poursadegh, Farzad

    2013-01-01

    This paper focuses on the performance improvement of a centrifugal compressor. An inverse design method based on a quasi-3D design approach is formulated to address this concern. The design procedure encompasses two major steps. First, by applying the ball-spine algorithm, which is an inverse design algorithm, on the meridional plane of the impeller, the hub and shroud of the impeller are computed from a modified pressure distribution along them. Second, an original and progressive algorithm is developed for the design of the blade camber line profile on the blade-to-blade planes of the impeller, based on improving the blade loading. A full 3D analysis of the current and the designed compressor is carried out using a Reynolds-averaged Navier-Stokes solver. A comparison between the analysis results of the current and the designed compressor shows that the total-to-total isentropic efficiency and the pressure ratio of the designed compressor under the same operating conditions are enhanced by more than 4.5% and 5%, respectively.

  2. Total variation regularization of the 3-D gravity inverse problem using a randomized generalized singular value decomposition

    Science.gov (United States)

    Vatankhah, Saeed; Renaut, Rosemary A.; Ardestani, Vahid E.

    2018-04-01

    We present a fast algorithm for the total variation regularization of the 3-D gravity inverse problem. Through the imposition of total variation regularization, subsurface structures with sharp discontinuities are preserved better than with a conventional minimum-structure inversion. The associated problem formulation for the regularization is nonlinear but can be solved using an iteratively reweighted least-squares algorithm. For small-scale problems the regularized least-squares problem at each iteration can be solved using the generalized singular value decomposition. This is not feasible for large-scale, or even moderate-scale, problems. Instead we introduce the use of a randomized generalized singular value decomposition in order to reduce the dimensions of the problem and provide an effective and efficient solution technique. For further efficiency an alternating direction algorithm is used to implement the total variation weighting operator within the iteratively reweighted least-squares algorithm. Presented results for synthetic examples demonstrate that the novel randomized decomposition provides good accuracy for reduced computational and memory demands as compared to use of classical approaches.
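
    A minimal sketch of the iteratively reweighted least-squares idea is given below, on a hypothetical 1-D toy problem; the randomized generalized singular value decomposition and the alternating-direction implementation of the paper are not reproduced, and the operators and parameters are illustrative assumptions only.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    n = 60
    m_true = np.zeros(n); m_true[20:40] = 1.0          # blocky subsurface model
    G = rng.normal(size=(100, n)) / np.sqrt(n)         # stand-in forward operator
    d = G @ m_true + 0.01 * rng.normal(size=100)       # noisy synthetic data

    L = np.diff(np.eye(n), axis=0)                     # first-difference (TV) operator
    alpha, eps = 1e-1, 1e-6
    m = np.zeros(n)
    for _ in range(50):
        w = 1.0 / np.sqrt((L @ m) ** 2 + eps)          # reweighting approximates the L1 (TV) norm
        A = G.T @ G + alpha * L.T @ (w[:, None] * L)   # regularized LS problem of this iteration
        m = np.linalg.solve(A, G.T @ d)
    print("relative model error:", np.linalg.norm(m - m_true) / np.linalg.norm(m_true))
    ```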

  3. Prototype Implementation of Two Efficient Low-Complexity Digital Predistortion Algorithms

    Directory of Open Access Journals (Sweden)

    Timo I. Laakso

    2008-01-01

    Full Text Available A predistortion (PD) lineariser for microwave power amplifiers (PAs) is an important topic of research. With the ever larger bandwidths appearing today in modern WiMax standards as well as in multichannel base stations for 3GPP standards, the relatively simple nonlinear effect of a PA becomes a complex memory-including function, severely distorting the output signal. In this contribution, two digital PD algorithms are investigated for the linearisation of microwave PAs in mobile communications. The first is an efficient and low-complexity algorithm based on a memoryless model, called the simplicial canonical piecewise linear (SCPWL) function, that describes the static nonlinear characteristic of the PA. The second algorithm is more general, approximating the pre-inverse filter of a nonlinear PA iteratively using a Volterra model. The first, simpler algorithm is suitable for compensating amplitude compression and amplitude-to-phase conversion, for example, in mobile units with relatively small bandwidths. The second algorithm can be used to linearise PAs operating with larger bandwidths, thus exhibiting memory effects, for example, in multichannel base stations. A measurement testbed which includes a transmitter-receiver chain with a microwave PA is built for testing and prototyping of the proposed PD algorithms. In the testing phase, the PD algorithms are implemented using MATLAB (floating-point representation) and tested in record-and-playback mode. The iterative PD algorithm is then implemented on a Field Programmable Gate Array (FPGA) using fixed-point representation. The FPGA implementation allows the pre-inverse filter to be tested in real-time mode. Measurement results show excellent linearisation capabilities of both proposed algorithms in terms of adjacent channel power suppression. It is also shown that the fixed-point FPGA implementation of the iterative algorithm performs as well as the floating-point implementation.

  4. Geoelectrical characterization by joint inversion of VES/TEM in Paraná basin, Brazil

    Science.gov (United States)

    Bortolozo, C. A.; Couto, M. A.; Almeida, E. R.; Porsani, J. L.; Santos, F. M.

    2012-12-01

    For many years electrical (DC) and transient electromagnetic (TEM) soundings have been used in a great number of environmental, hydrological and mining exploration studies. The data from both methods are usually interpreted with individual 1D models, which in many cases results in ambiguous models. This can be explained by how the two methodologies sample the subsurface. Vertical electrical sounding (VES) is good at marking very resistive structures, while transient electromagnetic sounding (TEM) is very sensitive to conductive structures. Another characteristic is that VES is more sensitive to shallow structures, while TEM soundings can reach deeper structures. A Matlab program for the joint inversion of VES and TEM soundings using the CRS algorithm was developed, aiming to exploit the best of both methods. The algorithm was initially tested with synthetic data and afterwards used to invert experimental data from the Paraná sedimentary basin. We present the results of a re-interpretation of a data set of 46 VES/TEM soundings acquired in the Bebedouro region, São Paulo State, Brazil. The previous interpretation was based on geoelectrical models obtained by individual inversion of the VES and TEM soundings. In this work we present the results of the individual inversion of the VES and TEM soundings with the Curupira program and a new interpretation based on the joint inversion of both methodologies. The goal is to increase the accuracy in determining the underground structures. As a result, a new geoelectrical model of the region is obtained.

  5. Large-scale inverse model analyses employing fast randomized data reduction

    Science.gov (United States)

    Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan

    2017-08-01

    When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
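
    The sketch below illustrates the "sketching" step on a toy linear problem: a random matrix compresses a tall system of observation equations before the least-squares solve. The operator, data and sketch size are assumptions for illustration; the actual RGA embeds such a reduction inside the PCGA machinery.

    ```python
    import numpy as np

    rng = np.random.default_rng(2)
    n_obs, n_par, k = 20_000, 50, 200               # many observations, small sketch size
    H = rng.normal(size=(n_obs, n_par))             # stand-in linearized forward operator
    m_true = rng.normal(size=n_par)
    d = H @ m_true + 0.01 * rng.normal(size=n_obs)  # large synthetic calibration data set

    S = rng.normal(size=(k, n_obs)) / np.sqrt(k)    # Gaussian "sketching" matrix
    m_sketch, *_ = np.linalg.lstsq(S @ H, S @ d, rcond=None)   # reduced k x n_par problem
    m_full, *_ = np.linalg.lstsq(H, d, rcond=None)             # full-size reference solve
    print("relative difference between sketched and full solutions:",
          np.linalg.norm(m_sketch - m_full) / np.linalg.norm(m_full))
    ```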

  6. A new algorithm to determine the total radiated power at ASDEX upgrade

    Energy Technology Data Exchange (ETDEWEB)

    Gloeggler, Stephan; Bernert, Matthias; Eich, Thomas [Max Planck Institute for Plasma Physics, Boltzmannstr. 2, 85748 Garching (Germany); Collaboration: The ASDEX Upgrade Team

    2016-07-01

    Radiation is an essential part of the power balance in a fusion plasma. In future fusion devices about 90% of the power will have to be dissipated, mainly by radiation. For the development of an appropriate operational scenario, information about the absolute level of plasma radiation (P{sub rad,tot}) is crucial. Bolometers are used to measure the radiated power, however, an algorithm is required to derive the absolute power out of many line-integrated measurements. The currently used algorithm (BPD) was developed for the main chamber radiation. It underestimates the divertor radiation as its basic assumptions are not satisfied in this region. Therefore, a new P{sub rad,tot} algorithm is presented. It applies an Abel inversion on the main chamber and uses empirically based assumptions for poloidal asymmetries and the divertor radiation. To benchmark the new algorithm, synthetic emissivity profiles are used. On average, the new Abel inversion based algorithm deviates by only 10% from the nominal synthetic value while BPD is about 25% too low. With both codes time traces of ASDEX Upgrade discharges are calculated. The analysis of these time traces shows that the underestimation of the divertor radiation can have significant consequences on the accuracy of BPD while the new algorithm is shown to be stable.
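
    As a rough illustration of the Abel-inversion idea underlying such algorithms, the sketch below recovers a radial emissivity profile from line-integrated chord measurements by onion peeling on a hypothetical circular geometry; the poloidal-asymmetry and divertor corrections of the new P_rad algorithm are not reproduced.

    ```python
    import numpy as np

    n = 20
    r = np.linspace(0.0, 1.0, n + 1)                   # shell boundaries
    p = 0.5 * (r[:-1] + r[1:])                         # chord impact parameters

    # Path length of chord i inside shell j (zero when the chord misses the shell).
    L = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            outer = max(r[j + 1] ** 2 - p[i] ** 2, 0.0)
            inner = max(r[j] ** 2 - p[i] ** 2, 0.0)
            L[i, j] = 2.0 * (np.sqrt(outer) - np.sqrt(inner))

    emiss_true = np.exp(-4.0 * (np.arange(n) / n) ** 2)    # synthetic emissivity profile
    brightness = L @ emiss_true                            # line-integrated "bolometer" signals
    emiss_rec = np.linalg.solve(L, brightness)             # peel the shells from the outside in
    assert np.allclose(emiss_rec, emiss_true)
    ```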

  7. Structural level set inversion for microwave breast screening

    International Nuclear Information System (INIS)

    Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel

    2010-01-01

    We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging taking into account the high complexity of the breast tissue. We use data at only a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and to have an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage more details about the anatomical structure of the breast interior are incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D

  8. Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations

    Directory of Open Access Journals (Sweden)

    Farahnaz Soleimani

    2015-11-01

    Full Text Available An application of iterative methods for computing the Moore–Penrose inverse in balancing chemical equations is considered. With the aim of illustrating the proposed algorithms, an improved high-order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility of accelerating the iterations in the initial phase of the convergence. Although the effectiveness of our approach is confirmed from the theoretical point of view, some numerical comparisons in balancing chemical equations, as well as on randomly generated matrices, are furnished.
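
    A minimal sketch of the simplest member of the hyper-power family, the second-order Newton-Schulz iteration X_{k+1} = X_k(2I - A X_k), is given below on a small hypothetical rectangular matrix; the higher-order factored scheme and the chemical-equation application of the paper are not reproduced.

    ```python
    import numpy as np

    def hyperpower_pinv(A, iters=60):
        """Approximate the Moore-Penrose inverse of A by the Newton-Schulz iteration."""
        X = A.T / (np.linalg.norm(A, 1) * np.linalg.norm(A, np.inf))   # safe starting guess
        I = np.eye(A.shape[0])
        for _ in range(iters):
            X = X @ (2.0 * I - A @ X)
        return X

    A = np.array([[2.0, 0.0, 1.0],
                  [1.0, 3.0, 0.0]])                    # small rectangular test matrix
    X = hyperpower_pinv(A)
    print("matches numpy.linalg.pinv:", np.allclose(X, np.linalg.pinv(A), atol=1e-8))
    ```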

  9. Inverse scattering problem for a magnetic field in the Glauber approximation

    International Nuclear Information System (INIS)

    Bogdanov, I.V.

    1985-01-01

    New results in the general theory of scattering are obtained. An inverse problem at fixed energy for an axisymmetric magnetic field is formulated and solved within the framework of the quantum-mechanical Glauber approximation. The solution is found in quadratures in the form of an explicit inversion algorithm that reproduces the vector potential from the angular dependence of the scattering amplitude. Limiting transitions from the eikonal inversion method to the classical and Born ones are investigated. Integral and differential equations are derived for the eikonal amplitude that ensure a real-valued vector potential and its energy independence. Magnetoelectric analogies, i.e. the existence of equivalent axisymmetric electric and magnetic fields that scatter charged particles in the same manner in both the Glauber and Born approximations, are established. These analogies permit magnetic (non-potential) scattering to be simulated by potential scattering, which is of interest from the practical viewpoint. Three-dimensional (excentral) eikonal inverse problems for the electric and magnetic fields are discussed. The results of the paper can be used in electron optics

  10. Inverse Estimation of Heat Flux and Temperature Distribution in 3D Finite Domain

    International Nuclear Information System (INIS)

    Muhammad, Nauman Malik

    2009-02-01

    Inverse heat conduction problems occur in many theoretical and practical applications where it is difficult or practically impossible to measure the input heat flux and the temperature of the layer conducting the heat flux to the body. It thus becomes imperative to devise some means to deal with such a problem and estimate the heat flux inversely. The Adaptive State Estimator is one such technique; it works by incorporating the semi-Markovian concept into a Bayesian estimation technique, thereby developing an inverse input and state estimator consisting of a bank of parallel, adaptively weighted Kalman filters. The problem presented in this study deals with a three-dimensional system, a cube with one face conducting heat flux while all the other sides are insulated, and with temperatures measured on the accessible faces of the cube. The measurements taken on these accessible faces are fed into the estimation algorithm, and the input heat flux and the temperature distribution at each point in the system are calculated. A variety of input heat flux scenarios have been examined to establish the robustness of the estimation algorithm and hence ensure its usability in practical applications. These include a sinusoidal input flux, a combination of rectangular, linearly changing and sinusoidal input fluxes, and finally a step-changing input flux. The estimator's performance limitations have been examined for these input set-ups, and the error associated with each set-up is compared to assess the realistic applicability of the estimation algorithm in such scenarios. Different sensor arrangements, that is, different numbers of sensors and their locations, are also examined to emphasize the importance of the number of measurements and of their location, i.e. closer to or farther from the input area. Since it is practically both economically and physically tedious to install a larger number of measurement sensors, an optimized number and location are very important to determine for making the study more

  11. Inverse kinematic solution for near-simple robots and its application to robot calibration

    Science.gov (United States)

    Hayati, Samad A.; Roston, Gerald P.

    1986-01-01

    This paper provides an inverse kinematic solution for a class of robot manipulators called near-simple manipulators. The kinematics of these manipulators differ from those of simple robots by small parameter variations. Although most robots are simple by design, in practice, due to manufacturing tolerances, every robot is near-simple. The method in this paper gives an approximate inverse kinematics solution for real-time applications based on the nominal solution for these robots. The validity of the results is tested both by a simulation study and by applying the algorithm to a PUMA robot.

  12. Methane combustion kinetic rate constants determination: an ill-posed inverse problem analysis

    Directory of Open Access Journals (Sweden)

    Bárbara D. L. Ferreira

    2013-01-01

    Full Text Available Methane combustion was studied by the Westbrook and Dryer model. This well-established simplified mechanism is very useful in combustion science, for computational effort can be notably reduced. In the inversion procedure to be studied, rate constants are obtained from [CO] concentration data. However, when inherent experimental errors in chemical concentrations are considered, an ill-conditioned inverse problem must be solved for which appropriate mathematical algorithms are needed. A recurrent neural network was chosen due to its numerical stability and robustness. The proposed methodology was compared against Simplex and Levenberg-Marquardt, the most used methods for optimization problems.

  13. On a quadratic inverse eigenvalue problem

    International Nuclear Information System (INIS)

    Cai, Yunfeng; Xu, Shufang

    2009-01-01

    This paper concerns the quadratic inverse eigenvalue problem (QIEP) of constructing real symmetric matrices M, C and K of size n × n, with M nonsingular, so that the quadratic matrix polynomial Q(λ) ≡ λ 2 M + λC + K has a completely prescribed set of eigenvalues and eigenvectors. It is shown via construction that the QIEP has a solution if and only if r 0, where r and δ are computable from the prescribed spectral data. A necessary and sufficient condition for the existence of a solution to the QIEP with M being positive definite is also established in a constructive way. Furthermore, two algorithms are developed: one is to solve the QIEP; another is to find a particular solution to the QIEP with the leading coefficient matrix being positive definite, which also provides us an approach to a simultaneous reduction of real symmetric matrix triple (M, C, K) by real congruence. Numerical results show that the two algorithms are feasible and numerically reliable

  14. Soft-sensing Modeling Based on MLS-SVM Inversion for L-lysine Fermentation Processes

    Directory of Open Access Journals (Sweden)

    Bo Wang

    2015-06-01

    Full Text Available A modeling approach based on multiple output variables least squares support vector machine (MLS-SVM) inversion is presented, combining inverse system theory and support vector machine theory. Firstly, a dynamic system model is developed based on the material balance relations of a fed-batch fermentation process; with this model it is analyzed whether an inverse system exists, and characteristic information of the fermentation process is introduced into it to set up an extended inversion model. Secondly, an initial extended inversion model is developed off-line by exploiting the fitting capacity of the MLS-SVM; on-line correction is made by means of a differential evolution (DE) algorithm on the basis of deviation information. Finally, a combined pseudo-linear system is formed by connecting the corrected extended inversion model in series after the L-lysine fermentation process; thereby, crucial biochemical parameters of the fermentation process can be predicted on-line. The simulation experiment shows that this soft-sensing modeling method features very high prediction precision and can predict the crucial biochemical parameters of the L-lysine fermentation process very well.

  15. Inverse problem of estimating transient heat transfer rate on external wall of forced convection pipe

    International Nuclear Information System (INIS)

    Chen, W.-L.; Yang, Y.-C.; Chang, W.-J.; Lee, H.-L.

    2008-01-01

    In this study, a conjugate gradient method based inverse algorithm is applied to estimate the unknown space and time dependent heat transfer rate on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown heat transfer rate; hence, the procedure is classified as function estimation in the inverse calculation. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the space and time dependent heat transfer rate can be obtained for the test case considered in this study

  16. A Closed Loop Inverse Kinematics Solver Intended for Offline Calculation Optimized with GA

    Directory of Open Access Journals (Sweden)

    Emil Dale Bjoerlykhaug

    2018-01-01

    Full Text Available This paper presents a simple approach to building a robotic control system. Instead of a conventional control system which solves the inverse kinematics in real time as the robot moves, an alternative approach is presented in which the inverse kinematics is calculated ahead of time. This approach reduces the complexity of, and the code necessary for, the control system. Robot control systems are usually implemented in a low-level programming language. The new approach enables the use of high-level programming for the complex inverse kinematics problem. For our approach, we implement a program to solve the inverse kinematics, called the Inverse Kinematics Solver (IKS), in Java, with a simple graphical user interface (GUI) to load a file with desired end-effector poses and to edit the configuration of the robot using the Denavit-Hartenberg (DH) convention. The program uses the closed-loop inverse kinematics (CLIK) algorithm to solve the inverse kinematics problem. As an example, the IKS was set up to solve the kinematics for a custom-built serial-link robot. The kinematics of the custom robot are presented, and an example of input and output files is also given. Additionally, the gain of the loop in the IKS is optimized using a GA, resulting in almost a 50% decrease in computational time.
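
    The sketch below illustrates a CLIK-style iteration on a hypothetical two-link planar arm: the joint angles are updated with the Jacobian pseudoinverse and a loop gain K until the task-space error vanishes. Link lengths, gain and target are assumptions for illustration; the custom serial-link robot and the Java IKS of the paper are not modeled here.

    ```python
    import numpy as np

    L1, L2 = 1.0, 0.8                                # link lengths (assumed)

    def forward(q):
        """End-effector position of the two-link planar arm."""
        return np.array([L1 * np.cos(q[0]) + L2 * np.cos(q[0] + q[1]),
                         L1 * np.sin(q[0]) + L2 * np.sin(q[0] + q[1])])

    def jacobian(q):
        s1, c1 = np.sin(q[0]), np.cos(q[0])
        s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
        return np.array([[-L1 * s1 - L2 * s12, -L2 * s12],
                         [ L1 * c1 + L2 * c12,  L2 * c12]])

    target = np.array([1.2, 0.6])                    # desired end-effector position (assumed)
    q = np.array([0.3, 0.3])                         # initial joint angles
    K, dt = 5.0, 0.01                                # loop gain and integration step
    for _ in range(2000):
        e = target - forward(q)                      # task-space error
        q = q + dt * (np.linalg.pinv(jacobian(q)) @ (K * e))   # CLIK update
    print("reached:", forward(q), "error:", np.linalg.norm(target - forward(q)))
    ```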

  17. Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA

    Energy Technology Data Exchange (ETDEWEB)

    Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division

    2017-10-18

    Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.

  18. A fast marching algorithm for the factored eikonal equation

    Energy Technology Data Exchange (ETDEWEB)

    Treister, Eran, E-mail: erantreister@gmail.com [Department of Earth and Ocean Sciences, The University of British Columbia, Vancouver, BC (Canada); Haber, Eldad, E-mail: haber@math.ubc.ca [Department of Earth and Ocean Sciences, The University of British Columbia, Vancouver, BC (Canada); Department of Mathematics, The University of British Columbia, Vancouver, BC (Canada)

    2016-11-01

    The eikonal equation is instrumental in many applications in several fields ranging from computer vision to geoscience. This equation can be efficiently solved using the iterative Fast Sweeping (FS) methods and the direct Fast Marching (FM) methods. However, when used for a point source, the original eikonal equation is known to yield inaccurate numerical solutions, because of a singularity at the source. In this case, the factored eikonal equation is often preferred, and is known to yield a more accurate numerical solution. One application that requires the solution of the eikonal equation for point sources is travel time tomography. This inverse problem may be formulated using the eikonal equation as a forward problem. While this problem has been solved using FS in the past, the more recent choice for applying it involves FM methods because of the efficiency in which sensitivities can be obtained using them. However, while several FS methods are available for solving the factored equation, the FM method is available only for the original eikonal equation. In this paper we develop a Fast Marching algorithm for the factored eikonal equation, using both first and second order finite-difference schemes. Our algorithm follows the same lines as the original FM algorithm and requires the same computational effort. In addition, we show how to obtain sensitivities using this FM method and apply travel time tomography, formulated as an inverse factored eikonal equation. Numerical results in two and three dimensions show that our algorithm solves the factored eikonal equation efficiently, and demonstrate the achieved accuracy for computing the travel time. We also demonstrate a recovery of a 2D and 3D heterogeneous medium by travel time tomography using the eikonal equation for forward modeling and inversion by Gauss–Newton.

  19. Inversion for atmosphere duct parameters using real radar sea clutter

    International Nuclear Information System (INIS)

    Sheng Zheng; Fang Han-Xian

    2012-01-01

    This paper addresses the problem of estimating the lower atmospheric refractivity (M profile) under nonstandard propagation conditions frequently encountered in low altitude maritime radar applications. The vertical structure of the refractive environment is modeled using five parameters and the horizontal structure is modeled using five parameters. The refractivity model is implemented with and without a priori constraint on the duct strength as might be derived from soundings or numerical weather-prediction models. An electromagnetic propagation model maps the refractivity structure into a replica field. Replica fields are compared with the observed clutter using a squared-error objective function. A global search for the 10 environmental parameters is performed using genetic algorithms. The inversion algorithm is implemented on the basis of S-band radar sea-clutter data from Wallops Island, Virginia (SPANDAR). Reference data are from range-dependent refractivity profiles obtained with a helicopter. The inversion is assessed (i) by comparing the propagation predicted from the radar-inferred refractivity profiles with that from the helicopter profiles, (ii) by comparing the refractivity parameters from the helicopter soundings with those estimated. This technique could provide near-real-time estimation of ducting effects. (geophysics, astronomy, and astrophysics)

  20. Sensitivity and inversion of full seismic waveforms in stratified porous medium

    International Nuclear Information System (INIS)

    Barros, L. de

    2007-12-01

    The characterization of porous media parameters, particularly the porosity, the permeability and the fluid properties, is very useful in many applications (hydrology, natural hazards, the oil industry). The aim of my research is to evaluate the possibility of determining these properties from full seismic wavefields. First, I consider the relevant parameters and the specific properties of seismic waves in the poro-elastic theory, often called the Biot (1956) theory. I then compute seismic wave propagation in fluid-saturated stratified porous media with a reflectivity method coupled with the discrete wavenumber integration method. I first used this modeling to study the possibility of determining the carbon dioxide concentration and location from the reflected P-waves in the case of the deep geological storage at Sleipner (North Sea). The sensitivity of the seismic response to the poro-elastic parameters is then generalized by the analytical computation of the Frechet derivatives, which are expressed in terms of the Green's functions of the unperturbed medium. The numerical tests show that the porosity and the consolidation are the main parameters to invert for. The sensitivity operators are then introduced into an inversion algorithm based on iterative modeling of the full waveform. The classical generalized least-squares inverse problem is solved by the quasi-Newton technique (Tarantola, 1984). The inversion of synthetic data shows that we can invert for the porosity, and that the fluid and solid parameters (densities and mechanical moduli, or volume fractions of fluid and mineral) can be correctly recovered if the other parameters are well known. However, the strong seismic coupling of the porous parameters makes it difficult to invert simultaneously for several parameters. One way to get around these difficulties is to use additional information and invert for a single parameter for the fluid properties (saturation rate) or for the lithology. Another way

  1. An Advanced Coupled Genetic Algorithm for Identifying Unknown Moving Loads on Bridge Decks

    Directory of Open Access Journals (Sweden)

    Sang-Youl Lee

    2014-01-01

    Full Text Available This study deals with an inverse method to identify moving loads on bridge decks using the finite element method (FEM) and a coupled genetic algorithm (c-GA). We developed the inverse technique using a coupled genetic algorithm that can make global solution searches possible, as opposed to classical gradient-based optimization techniques. The technique described in this paper allows us not only to detect the weight of moving vehicles but also to find their moving velocities. To demonstrate the feasibility of the method, the algorithm is applied to a bridge deck model with beam elements. In addition, 1D and 3D finite element models are simulated to study the influence of measurement errors and model uncertainty between numerical and real structures. The results demonstrate the excellence of the method from the standpoints of computation efficiency and avoidance of premature convergence.

  2. Calculation of the inverse data space via sparse inversion

    KAUST Repository

    Saragiotis, Christos

    2011-01-01

    The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function while constraining the ℓ1 norm of the solution, which is the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal.

  3. Inversion of multiwavelength Raman lidar data for retrieval of bimodal aerosol size distribution

    Science.gov (United States)

    Veselovskii, Igor; Kolgotin, Alexei; Griaznov, Vadim; Müller, Detlef; Franke, Kathleen; Whiteman, David N.

    2004-02-01

    We report on the feasibility of deriving microphysical parameters of bimodal particle size distributions from Mie-Raman lidar based on a triple Nd:YAG laser. Such an instrument provides backscatter coefficients at 355, 532, and 1064 nm and extinction coefficients at 355 and 532 nm. The inversion method employed is Tikhonov's inversion with regularization. Special attention has been paid to extend the particle size range for which this inversion scheme works to ~10 μm, which makes this algorithm applicable to large particles, e.g., investigations concerning the hygroscopic growth of aerosols. Simulations showed that surface area, volume concentration, and effective radius are derived to an accuracy of ~50% for a variety of bimodal particle size distributions. For particle size distributions with an effective radius of rims along which anthropogenic pollution mixes with marine aerosols. Measurement cases obtained from the Institute for Tropospheric Research six-wavelength aerosol lidar observations during the Indian Ocean Experiment were used to test the capabilities of the algorithm for experimental data sets. A benchmark test was attempted for the case representing anthropogenic aerosols between a broken cloud deck. A strong contribution of particle volume in the coarse mode of the particle size distribution was found.

  4. Using Poisson-regularized inversion of Bremsstrahlung emission to extract full electron energy distribution functions from x-ray pulse-height detector data

    Science.gov (United States)

    Swanson, C.; Jandovitz, P.; Cohen, S. A.

    2018-02-01

    We measured Electron Energy Distribution Functions (EEDFs) from below 200 eV to over 8 keV and spanning five orders of magnitude in intensity, produced in a low-power, RF-heated, tandem mirror discharge in the PFRC-II apparatus. The EEDF was obtained from the x-ray energy distribution function (XEDF) using a novel Poisson-regularized spectrum inversion algorithm applied to pulse-height spectra that included both Bremsstrahlung and line emissions. The XEDF was measured using a specially calibrated Amptek Silicon Drift Detector (SDD) pulse-height system with 125 eV FWHM at 5.9 keV. The algorithm is found to outperform current leading x-ray inversion algorithms when the error due to counting statistics is high.

  5. FOREWORD: 4th International Workshop on New Computational Methods for Inverse Problems (NCMIP2014)

    Science.gov (United States)

    2014-10-01

    This volume of Journal of Physics: Conference Series is dedicated to the scientific contributions presented during the 4th International Workshop on New Computational Methods for Inverse Problems, NCMIP 2014 (http://www.farman.ens-cachan.fr/NCMIP_2014.html). This workshop took place at Ecole Normale Supérieure de Cachan, on May 23, 2014. The prior editions of NCMIP also took place in Cachan, France, firstly within the scope of ValueTools Conference, in May 2011 (http://www.ncmip.org/2011/), and secondly at the initiative of Institut Farman, in May 2012 and May 2013, (http://www.farman.ens-cachan.fr/NCMIP_2012.html), (http://www.farman.ens-cachan.fr/NCMIP_2013.html). The New Computational Methods for Inverse Problems (NCMIP) Workshop focused on recent advances in the resolution of inverse problems. Indeed, inverse problems appear in numerous scientific areas such as geophysics, biological and medical imaging, material and structure characterization, electrical, mechanical and civil engineering, and finances. The resolution of inverse problems consists of estimating the parameters of the observed system or structure from data collected by an instrumental sensing or imaging device. Its success firstly requires the collection of relevant observation data. It also requires accurate models describing the physical interactions between the instrumental device and the observed system, as well as the intrinsic properties of the solution itself. Finally, it requires the design of robust, accurate and efficient inversion algorithms. Advanced sensor arrays and imaging devices provide high rate and high volume data; in this context, the efficient resolution of the inverse problem requires the joint development of new models and inversion methods, taking computational and implementation aspects into account. During this one-day workshop, researchers had the opportunity to bring to light and share new techniques and results in the field of inverse problems. The topics of the

  6. Effects of systematic phase errors on optimized quantum random-walk search algorithm

    International Nuclear Information System (INIS)

    Zhang Yu-Chao; Bao Wan-Su; Wang Xiang; Fu Xiang-Qun

    2015-01-01

    This study investigates the effects of systematic errors in phase inversions on the success rate and number of iterations in the optimized quantum random-walk search algorithm. Using the geometric description of this algorithm, a model of the algorithm with phase errors is established, and the relationship between the success rate of the algorithm, the database size, the number of iterations, and the phase error is determined. For a given database size, we obtain both the maximum success rate of the algorithm and the required number of iterations when phase errors are present in the algorithm. Analyses and numerical simulations show that the optimized quantum random-walk search algorithm is more robust against phase errors than Grover’s algorithm. (paper)

  7. Inverse kinetics equations for on line measurement of reactivity using personal computer

    International Nuclear Information System (INIS)

    Ratemi, Wajdi; El Gadamsi, Walied; Beleid, Abdul Kariem

    1993-01-01

    Computers, with their astonishing speed of calculation and their easy connection to real systems, are very appropriate for digital measurement of real system variables. In the nuclear industry, such computer applications will produce compact control rooms for real power plants, where information and results can be displayed at the push of a button. In our study, we use two personal computers, one for simulation and one for measurement. The first is used as a digital simulator of a real reactor, in which we simulate the reactor power, connected to the second through a cross-talk network. The computed power is passed at a chosen sampling time to the other computer, which uses the inverse kinetics equations to calculate the reactivity from the received power and then displays the power curve and the reactivity curve on line using color graphics. In this study, we use the one-group version of the inverse kinetics algorithm, which can easily be extended to a multi-group version. The programming language used is Turbo BASIC, which is comparable in efficiency to FORTRAN and offers effective graphics routines. With the extended version of the inverse kinetics algorithm, this measurement technique can be applied for the on-line display of the reactivity of the Tajoura Research Reactor. (author)
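
    A minimal sketch of the one-group inverse kinetics computation is given below: the delayed-neutron precursor equation is integrated from the sampled power trace and the point-kinetics equation is inverted for the reactivity at each sample. The kinetic constants and the power history are hypothetical illustration values, not Tajoura reactor data.

    ```python
    import numpy as np

    beta, lam, Lambda = 0.0065, 0.08, 1.0e-4      # delayed fraction, decay constant (1/s), generation time (s)
    dt = 0.01                                     # sampling interval (s)
    t = np.arange(0.0, 20.0, dt)
    P = 1.0 + 0.2 * (t > 5.0) * (1.0 - np.exp(-(t - 5.0) / 3.0))   # assumed measured power trace

    C = beta * P[0] / (Lambda * lam)              # precursors in equilibrium at t = 0
    rho = np.zeros_like(P)
    for k in range(1, len(P)):
        dPdt = (P[k] - P[k - 1]) / dt             # numerical derivative of the power
        # Advance the precursor equation dC/dt = (beta/Lambda) P - lam C (semi-implicit step).
        C = (C + dt * beta / Lambda * P[k]) / (1.0 + dt * lam)
        # Invert dP/dt = ((rho - beta)/Lambda) P + lam C for the reactivity.
        rho[k] = beta + Lambda * (dPdt - lam * C) / P[k]

    print("reactivity at end of transient (dollars):", rho[-1] / beta)
    ```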

  8. Quality measures for HRR alignment based ISAR imaging algorithms

    CSIR Research Space (South Africa)

    Janse van Rensburg, V

    2013-05-01

    Full Text Available Some Inverse Synthetic Aperture Radar (ISAR) algorithms form the image in a two-step process of range alignment and phase conjugation. This paper discusses a comprehensive set of measures used to quantify the quality of range alignment, with the aim...

  9. Hooke–Jeeves Method-used Local Search in a Hybrid Global Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    V. D. Sulimov

    2014-01-01

    Full Text Available Modern methods for the optimization investigation of complex systems are based on developing and updating mathematical models of the systems by solving appropriate inverse problems. The input data required for the solution are obtained from the analysis of experimentally determined characteristics of a system or a process. The sought causal characteristics include the coefficients of the equations of the mathematical model of the object, boundary conditions, etc. The optimization approach is one of the main approaches to solving inverse problems. In the general case it is necessary to find a global extremum of a criterion function that is not everywhere differentiable. Global optimization methods are widely used in problems of identification and computational diagnostics of systems as well as in optimal control, computed tomography, image restoration, training of neural networks, and other intelligent technologies. The increasingly complicated systems to be optimized observed during the last decades lead to more complicated mathematical models, thereby making the solution of the corresponding extreme problems significantly more difficult. In many practical applications the problem conditions can restrict modeling. As a consequence, in inverse problems the criterion functions can be not everywhere differentiable and noisy. The presence of noise means that calculating the derivatives is difficult and unreliable, which motivates the use of optimization methods that do not require derivatives. The efficiency of deterministic algorithms of global optimization is significantly restricted by their dependence on the dimension of the extreme problem. When the number of variables is large, stochastic global optimization algorithms are used. As stochastic algorithms yield too expensive solutions, this drawback restricts their applications. Hybrid algorithms are therefore developed that combine a stochastic algorithm for scanning the variable space with deterministic local search
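
    The sketch below shows the Hooke-Jeeves pattern search (exploratory plus pattern moves) as a derivative-free local method on the Rosenbrock function, a hypothetical test problem; the stochastic global scanning stage of the hybrid algorithm is not included.

    ```python
    import numpy as np

    def rosenbrock(x):
        return (1.0 - x[0]) ** 2 + 100.0 * (x[1] - x[0] ** 2) ** 2

    def explore(f, x, fx, step):
        """Exploratory move: probe +/- step along each coordinate, keeping improvements."""
        best, fbest = x.copy(), fx
        for i in range(len(best)):
            for delta in (step, -step):
                cand = best.copy()
                cand[i] += delta
                fc = f(cand)
                if fc < fbest:
                    best, fbest = cand, fc
                    break
        return best, fbest

    def hooke_jeeves(f, x0, step=0.5, shrink=0.5, tol=1e-8, max_iter=100_000):
        x_base = np.asarray(x0, dtype=float)
        f_base = f(x_base)
        for _ in range(max_iter):
            x_new, f_new = explore(f, x_base, f_base, step)
            if f_new < f_base:
                # Pattern move: extrapolate along the successful direction, then re-explore there.
                x_pat = x_new + (x_new - x_base)
                x_pat, f_pat = explore(f, x_pat, f(x_pat), step)
                x_base, f_base = (x_pat, f_pat) if f_pat < f_new else (x_new, f_new)
            else:
                step *= shrink                       # exploration failed: shrink the step size
                if step < tol:
                    break
        return x_base, f_base

    x_opt, f_opt = hooke_jeeves(rosenbrock, [-1.2, 1.0])
    print("minimizer:", x_opt, "value:", f_opt)
    ```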

  10. A study of inverse planning by simulated annealing for photon beams modulated by a multileaf collimator

    International Nuclear Information System (INIS)

    Grant, Walter; Carol, Mark; Geis, Paul; Boyer, Arthur L.

    1995-01-01

    Purpose/Objective: To demonstrate the feasibility of inverse planning for multiple fixed-field conformal therapy with a prototype simulated annealing technique and to deliver the treatment plan with an engineering prototype dynamic multileaf collimator. Methods and Materials: A version of the NOMOS inverse-planning algorithm was used to compute weighting distributions over the areas of multiple fixed-gantry fields. The algorithm uses simulated annealing and a cost function based on physical dose. The algorithm is a modification of a NOMOS Peacock planning implementation being used clinically. The computed weighting distributions represented the relative intensities over small 0.5 cm x 1.0 cm areas of the fields. The inverse planning was carried out on a Sun Model 20 computer using four processors. Between five and nine fixed-gantry beams were used in the plans. The weighting distributions were rendered into leaf-setting sequences using an algorithm developed for use with a Varian experimental dynamic-multileaf collimator. The sequences were saved as computer files in a format that was used to drive the Varian control system. X-ray fields having 6-MV and 18-MV energies were planned and delivered using tumor target and sensitive structure volumes segmented from clinical CT scans. Results: The resulting beam-modulation sequences could be loaded into the accelerator control systems and initiated. Each fixed-gantry angle beam was delivered in 30 s to 50 s. The resulting dose distributions were measured in quasi-anatomical phantoms using film. Dose distributions that could achieve significant tissue-sparing were demonstrated. There was good agreement between the delivered dose distributions and the planned distributions. Conclusion: The prototype inverse-planning system under development by NOMOS can be integrated with the prototype dynamic-delivery system being developed by Varian Associates. Should these commercial entities choose to offer compatible FDA

  11. Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains

    Science.gov (United States)

    Gao, C.; Lekic, V.

    2017-12-01

    Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of the uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such an inversion allows us to quantify the uncertainties of the results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion for different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data because of their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both the individual and the joint inversion of these two data types to quantify the benefit of the joint inversion. As an application, we infer the variation of Moho depth and crustal layering across the northern Great Plains.

  12. Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data

    Science.gov (United States)

    Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.

    2017-10-01

    The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least-squares problem with multi-parameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more depth the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
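
    A rough sketch of the SVD filtering (compression) step is given below: the 2-D data matrix is projected onto the leading singular subspaces of the two 1-D kernels, shrinking the weighted least-squares problem. Kernel shapes, grids and truncation ranks are illustrative assumptions; the UPEN-style multi-parameter regularization of I2DUPEN is not reproduced.

    ```python
    import numpy as np

    rng = np.random.default_rng(4)

    # Toy T1-T2 style kernels: K1 (inversion recovery), K2 (CPMG decay).
    t1 = np.linspace(0.01, 3.0, 80)[:, None]; T1 = np.logspace(-2, 1, 40)[None, :]
    t2 = np.linspace(0.01, 2.0, 120)[:, None]; T2 = np.logspace(-2, 1, 40)[None, :]
    K1 = 1.0 - 2.0 * np.exp(-t1 / T1)                # 80 x 40
    K2 = np.exp(-t2 / T2)                            # 120 x 40

    F_true = rng.random((40, 40))                    # stand-in 2-D relaxation distribution
    S = K1 @ F_true @ K2.T                           # synthetic 2-D NMR data (80 x 120)

    rank1, rank2 = 8, 8                              # truncation ranks (assumed)
    U1, s1, V1t = np.linalg.svd(K1, full_matrices=False)
    U2, s2, V2t = np.linalg.svd(K2, full_matrices=False)
    S_c = U1[:, :rank1].T @ S @ U2[:, :rank2]        # compressed data (8 x 8)
    K1_c = np.diag(s1[:rank1]) @ V1t[:rank1]         # compressed kernels (8 x 40)
    K2_c = np.diag(s2[:rank2]) @ V2t[:rank2]
    print(S_c.shape, K1_c.shape, K2_c.shape)         # reduced problem: S_c = K1_c @ F @ K2_c.T
    ```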

  13. Elastic full-waveform inversion of transmission data in 2D VTI media

    KAUST Repository

    Kamath, Nishant; Tsvankin, Ilya

    2014-01-01

    Full-waveform inversion (FWI) has been implemented mostly for isotropic media, with extensions to anisotropic models typically limited to acoustic approximations. Here, we develop elastic FWI for transmitted waves in 2D heterogeneous VTI (transversely isotropic with a vertical symmetry axis) media. The model is parameterized in terms of the P- and S-wave vertical velocities and the P-wave normal-moveout and horizontal velocities. To test the FWI algorithm, we introduce Gaussian anomalies in the Thomsen parameters of a homogeneous VTI medium and perform FWI of transmission data for different configurations of the source and receiver arrays. The inversion results strongly depend on the acquisition geometry and the aperture because of the parameter trade-offs. In contrast to acoustic FWI, the elastic inversion helps constrain the S-wave vertical velocity, which for our model is decoupled from the other parameters.

  14. Elastic full-waveform inversion of transmission data in 2D VTI media

    KAUST Repository

    Kamath, Nishant

    2014-08-05

    Full-waveform inversion (FWI) has been implemented mostly for isotropic media, with extensions to anisotropic models typically limited to acoustic approximations. Here, we develop elastic FWI for transmitted waves in 2D heterogeneous VTI (transversely isotropic with a vertical symmetry axis) media. The model is parameterized in terms of the P- and S-wave vertical velocities and the P-wave normal-moveout and horizontal velocities. To test the FWI algorithm, we introduce Gaussian anomalies in the Thomsen parameters of a homogeneous VTI medium and perform FWI of transmission data for different configurations of the source and receiver arrays. The inversion results strongly depend on the acquisition geometry and the aperture because of the parameter trade-offs. In contrast to acoustic FWI, the elastic inversion helps constrain the S-wave vertical velocity, which for our model is decoupled from the other parameters.

  15. PREFACE: First International Congress of the International Association of Inverse Problems (IPIA): Applied Inverse Problems 2007: Theoretical and Computational Aspects

    Science.gov (United States)

    Uhlmann, Gunther

    2008-07-01

    This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA) which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology

  16. Application of homotopy analysis method and inverse solution of a rectangular wet fin

    International Nuclear Information System (INIS)

    Panda, Srikumar; Bhowmik, Arka; Das, Ranjan; Repaka, Ramjee; Martha, Subash C.

    2014-01-01

    Highlights: • Solution of a wet fin is obtained by the homotopy analysis method (HAM). • Present HAM results have been well validated with literature results. • Inverse analysis is done using a genetic algorithm. • A measurement error of ±10–12% (approx.) is found to yield satisfactory reconstructions. - Abstract: This paper presents the analytical solution of a rectangular fin under simultaneous heat and mass transfer across the fin surface and the fin tip, and estimates the unknown thermal and geometrical configurations of the fin using inverse heat transfer analysis. The local temperature field is obtained by using the homotopy analysis method for insulated and convective fin tip boundary conditions. Using a genetic algorithm, the thermal and geometrical parameters, viz., the thermal conductivity of the material, the surface heat transfer coefficient and the dimensions of the fin, have been simultaneously estimated for the prescribed temperature field. Earlier inverse studies on wet fins have been restricted to the analysis of the nonlinear governing equation with either an insulated tip condition or a finite tip temperature only. The present study develops a closed-form solution with the consideration of nonlinearity effects in both the governing equation and the boundary condition. The study on inverse optimization leads to many feasible combinations of fin materials, thermal conditions and fin dimensions. This allows flexibility in designing a fin under wet conditions, based on multiple combinations of fin materials, fin dimensions and thermal configurations, to achieve the required heat transfer duty. It is further determined that the allowable measurement error should be limited to ±10–12% in order to achieve satisfactory reconstruction

  17. An efficient strategy for the inversion of bidirectional reflectance models with satellite remote sensing data

    Energy Technology Data Exchange (ETDEWEB)

    Privette, J.L.

    1994-12-31

    The angular distribution of radiation scattered by the earth surface contains information on the structural and optical properties of the surface. Potentially, this information may be retrieved through the inversion of surface bidirectional reflectance distribution function (BRDF) models. This report details the limitations and efficient application of BRDF model inversions using data from ground- and satellite-based sensors. A turbid medium BRDF model, based on the discrete ordinates solution to the transport equation, was used to quantify the sensitivity of top-of-canopy reflectance to vegetation and soil parameters. Results were used to define parameter sets for inversions. Using synthetic reflectance values, the invertibility of the model was investigated for different optimization algorithms, surface and sampling conditions. Inversions were also conducted with field data from a ground-based radiometer. First, a soil BRDF model was inverted for different soil and sampling conditions. A condition-invariant solution was determined and used as the lower boundary condition in canopy model inversions. Finally, a scheme was developed to improve the speed and accuracy of inversions.

  18. An investigation on the solutions for the linear inverse problem in gamma ray tomography

    International Nuclear Information System (INIS)

    Araujo, Bruna G.M.; Dantas, Carlos C.; Santos, Valdemir A. dos; Finkler, Christine L.L.; Oliveira, Eric F. de; Melo, Silvio B.; Santos, M. Graca dos

    2009-01-01

    In this paper, the results obtained in single-beam gamma-ray tomography are investigated with respect to the direct problem formulation and the solution applied to the linear system of equations. Algebraic computational algorithms are used for image reconstruction. The sparse under- and over-determined linear systems of equations were analyzed. Built-in functions of the Matlab software were applied and optimal solutions were investigated. Experimentally, a section of the tube is scanned from various positions and at different angles. The solution, finding the vector of coefficients μ from the vector of measured values p through inversion of the matrix W, constitutes an inverse problem. An industrial tomography process requires a numerical solution of the system of equations. The definition of an inverse problem according to Hadamard is considered, as well as the requirement of a well-posed problem in order to find stable solutions. The formulation of the basis functions and the computational algorithm used to structure the weight matrix W were analyzed. For a full-rank matrix W the obtained solution is unique, as expected. Total Least Squares was implemented, whose theory and computational algorithm give adequate treatment of the problems caused by non-unique solutions of the system of equations. Stability of the solution was investigated by means of a regularization technique, and the comparison shows that it improves the results. An optimal solution as a function of image quality, computation time and minimum residuals was quantified. The corresponding reconstructed images are shown in 3D graphics for comparison with the solution. (author)
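
    A minimal sketch of the kind of linear-system solution discussed above: the tomographic system W mu = p is solved by plain least squares and by Tikhonov regularization. The matrix W, the measurement vector p and the regularization weight are small synthetic stand-ins, not the paper's scanner geometry or its Total Least Squares implementation.

      import numpy as np

      rng = np.random.default_rng(0)
      n_rays, n_pixels = 40, 25                 # over-determined toy system (assumed sizes)
      W = rng.random((n_rays, n_pixels))        # stand-in weight (path-length) matrix
      mu_true = rng.random(n_pixels)            # attenuation coefficients
      p = W @ mu_true + 0.01 * rng.standard_normal(n_rays)   # noisy measurements

      # Plain least squares (can be unstable for ill-conditioned or rank-deficient W)
      mu_ls, *_ = np.linalg.lstsq(W, p, rcond=None)

      # Tikhonov regularization: minimize ||W mu - p||^2 + lam^2 ||mu||^2
      lam = 0.1
      W_aug = np.vstack([W, lam * np.eye(n_pixels)])
      p_aug = np.concatenate([p, np.zeros(n_pixels)])
      mu_tik, *_ = np.linalg.lstsq(W_aug, p_aug, rcond=None)

      print("LS error      :", np.linalg.norm(mu_ls - mu_true))
      print("Tikhonov error:", np.linalg.norm(mu_tik - mu_true))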

  19. An Image Filter Based on Shearlet Transformation and Particle Swarm Optimization Algorithm

    Directory of Open Access Journals (Sweden)

    Kai Hu

    2015-01-01

    Full Text Available Digital images are always polluted by noise, which makes data postprocessing difficult. To remove noise while preserving image detail as much as possible, this paper proposes an image filter algorithm that combines the merits of the Shearlet transformation and the particle swarm optimization (PSO) algorithm. Firstly, we use the classical Shearlet transform to decompose the noised image into many subwavelets over multiple scales and orientations. Secondly, we assign a weighting factor to each of the obtained subwavelets. Then, using the classical inverse Shearlet transform, we obtain a composite image composed of the weighted subwavelets. After that, we design a fast, rough evaluation method to estimate the noise level of the new image; using this measure as the fitness, we adopt PSO to find the optimal weighting factors; after many iterations, the optimal factors and the inverse Shearlet transform yield the best denoised image. Experimental results show that the proposed algorithm eliminates noise effectively and yields a good peak signal-to-noise ratio (PSNR).
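
    The sketch below shows a minimal particle swarm optimizer of the kind used to tune such subband weighting factors. The Shearlet decomposition and the rough noise-level estimate are not reproduced; the quadratic fitness and the weight bounds are stand-in assumptions.

      import numpy as np

      def pso(fitness, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, seed=0):
          rng = np.random.default_rng(seed)
          x = rng.uniform(0.0, 1.0, (n_particles, dim))     # weights assumed to lie in [0, 1]
          v = np.zeros_like(x)
          pbest, pbest_val = x.copy(), np.array([fitness(p) for p in x])
          g = pbest[np.argmin(pbest_val)].copy()
          for _ in range(iters):
              r1, r2 = rng.random(x.shape), rng.random(x.shape)
              v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)   # velocity update
              x = np.clip(x + v, 0.0, 1.0)
              val = np.array([fitness(p) for p in x])
              improved = val < pbest_val
              pbest[improved], pbest_val[improved] = x[improved], val[improved]
              g = pbest[np.argmin(pbest_val)].copy()
          return g, fitness(g)

      # Stand-in fitness: pretend the "ideal" subband weights are known
      target = np.array([0.9, 0.7, 0.5, 0.3, 0.1])
      best_w, best_f = pso(lambda p: float(np.sum((p - target) ** 2)), dim=target.size)
      print(best_w, best_f)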

  20. Building Generalized Inverses of Matrices Using Only Row and Column Operations

    Science.gov (United States)

    Stuart, Jeffrey

    2010-01-01

    Most students complete their first and only course in linear algebra with the understanding that a real, square matrix "A" has an inverse if and only if "rref"("A"), the reduced row echelon form of "A", is the identity matrix I_n. That is, if they apply elementary row operations via the Gauss-Jordan algorithm to the partitioned matrix…
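
    A small sketch of the criterion mentioned above: Gauss-Jordan elimination applied to the partitioned matrix [A | I] reduces the left block to the identity exactly when A is invertible, in which case the right block becomes the inverse. This is a plain illustration of the textbook procedure, not the article's construction of generalized inverses for singular matrices.

      import numpy as np

      def gauss_jordan_inverse(A, tol=1e-12):
          # Row-reduce [A | I]; if the left block becomes I, the right block is A^-1.
          n = A.shape[0]
          M = np.hstack([A.astype(float), np.eye(n)])
          for col in range(n):
              pivot = col + np.argmax(np.abs(M[col:, col]))      # partial pivoting
              if abs(M[pivot, col]) < tol:
                  raise ValueError("A is singular: rref(A) is not the identity")
              M[[col, pivot]] = M[[pivot, col]]
              M[col] /= M[col, col]
              for row in range(n):
                  if row != col:
                      M[row] -= M[row, col] * M[col]
          return M[:, n:]

      A = np.array([[2.0, 1.0], [5.0, 3.0]])
      print(gauss_jordan_inverse(A))          # should match np.linalg.inv(A)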

  1. Two-dimensional joint inversion of Magnetotelluric and local earthquake data: Discussion on the contribution to the solution of deep subsurface structures

    Science.gov (United States)

    Demirci, İsmail; Dikmen, Ünal; Candansayar, M. Emin

    2018-02-01

    Joint inversion of data sets collected using several geophysical exploration methods has gained importance, and associated algorithms have been developed. To explore deep subsurface structures, Magnetotelluric and local earthquake tomography algorithms are generally used individually. Because both methods rely on natural sources, it is not possible to increase the data quality and the resolution of the model parameters at will. For this reason, the solution of deep structures cannot be fully attained by using the methods individually. In this paper, we first focus on the effects of both Magnetotelluric and local earthquake data sets on the solution of deep structures and discuss the results on the basis of the resolving power of the methods. The presence of deep-focus seismic sources increases the resolution of deep structures. Moreover, the conductivity distribution of relatively shallow structures can be solved with high resolution by using the MT algorithm. Therefore, we developed a new joint inversion algorithm based on the cross-gradient function in order to jointly invert Magnetotelluric and local earthquake data sets. In this study, we added a new regularization parameter to the second term of the parameter correction vector of Gallardo and Meju (2003). The new regularization parameter enhances the stability of the algorithm and controls the contribution of the cross-gradient term in the solution. The results show that even in cases where the resistivity and velocity boundaries differ, both methods influence each other positively. In addition, the regions of common structural boundaries of the models are clearly mapped compared with the original models. Furthermore, deep structures are identified satisfactorily even with the minimum number of seismic sources. In this paper, as a basis for future studies, we discuss the joint inversion of Magnetotelluric and local earthquake data sets only in two-dimensional space. In the light of these
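
    For reference, the sketch below evaluates the two-dimensional cross-gradient function t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx) that couples the two models in such joint inversions; it vanishes where the models change in parallel directions. The grid, cell sizes and the two synthetic models are illustrative assumptions, and the regularized inversion itself is not reproduced.

      import numpy as np

      nz, nx = 60, 80
      dz, dx = 25.0, 25.0                       # cell sizes in metres (assumed)
      z = np.arange(nz)[:, None] * dz
      x = np.arange(nx)[None, :] * dx

      # Two synthetic models sharing the same dipping interface
      interface = 400.0 + 0.3 * x
      resistivity = np.where(z < interface, 100.0, 10.0)       # ohm.m
      velocity = np.where(z < interface, 2500.0, 4500.0)       # m/s

      dm1_dz, dm1_dx = np.gradient(resistivity, dz, dx)
      dm2_dz, dm2_dx = np.gradient(velocity, dz, dx)
      t = dm1_dx * dm2_dz - dm1_dz * dm2_dx                    # cross-gradient field

      print("max |t| =", np.abs(t).max())       # small, since the boundaries coincide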

  2. A Parallel Butterfly Algorithm

    KAUST Repository

    Poulson, Jack; Demanet, Laurent; Maxwell, Nicholas; Ying, Lexing

    2014-01-01

    The butterfly algorithm is a fast algorithm which approximately evaluates a discrete analogue of the integral transform (Equation Presented.) at large numbers of target points when the kernel, K(x, y), is approximately low-rank when restricted to subdomains satisfying a certain simple geometric condition. In d dimensions with O(N^d) quasi-uniformly distributed source and target points, when each appropriate submatrix of K is approximately rank-r, the running time of the algorithm is at most O(r^2 N^d log N). A parallelization of the butterfly algorithm is introduced which, assuming a message latency of α and per-process inverse bandwidth of β, executes in at most (Equation Presented.) time using p processes. This parallel algorithm was then instantiated in the form of the open-source DistButterfly library for the special case where K(x, y) = exp(iΦ(x, y)), where Φ(x, y) is a black-box, sufficiently smooth, real-valued phase function. Experiments on Blue Gene/Q demonstrate impressive strong-scaling results for important classes of phase functions. Using quasi-uniform sources, hyperbolic Radon transforms and an analogue of a three-dimensional generalized Radon transform were observed to strong-scale from 1 node/16 cores up to 1024 nodes/16,384 cores with greater than 90% and 82% efficiency, respectively. © 2014 Society for Industrial and Applied Mathematics.

  4. Inverse estimation for the unknown frost geometry on the external wall of a forced-convection pipe

    International Nuclear Information System (INIS)

    Chen, W.-L.; Yang, Y.-C.

    2009-01-01

    In this study, a conjugate-gradient-based inverse algorithm is applied to estimate the unknown frost-layer boundary profile on the external wall of a pipe system using temperature measurements. It is assumed that no prior information is available on the functional form of the unknown profile; hence the procedure is classified as function estimation in inverse calculation. The temperature data obtained from the direct problem are used to simulate the temperature measurements. The accuracy of the inverse analysis is examined by using simulated exact and inexact temperature measurements. Results show that an excellent estimation of the boundary profile can be obtained for the test case considered in this study.
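
    As a generic illustration of the conjugate-gradient machinery underlying such function-estimation schemes, the sketch below applies a conjugate-gradient least-squares (CGLS) iteration to a small linear system A x = b. The direct, sensitivity and adjoint problems of the actual frost-profile estimation are not reproduced; A and b are random stand-ins.

      import numpy as np

      def cgls(A, b, n_iter=50, tol=1e-10):
          # Conjugate gradients applied to the normal equations A^T A x = A^T b
          x = np.zeros(A.shape[1])
          r = b - A @ x
          s = A.T @ r
          p = s.copy()
          gamma = s @ s
          for _ in range(n_iter):
              q = A @ p
              alpha = gamma / (q @ q)
              x += alpha * p
              r -= alpha * q
              s = A.T @ r
              gamma_new = s @ s
              if np.sqrt(gamma_new) < tol:
                  break
              p = s + (gamma_new / gamma) * p
              gamma = gamma_new
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((60, 20))
      x_true = rng.standard_normal(20)
      b = A @ x_true + 0.01 * rng.standard_normal(60)
      print("CGLS error:", np.linalg.norm(cgls(A, b) - x_true))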

  5. Multirate-based fast parallel algorithms for 2-D DHT-based real-valued discrete Gabor transform.

    Science.gov (United States)

    Tao, Liang; Kwan, Hon Keung

    2012-07-01

    Novel algorithms for the multirate and fast parallel implementation of the 2-D discrete Hartley transform (DHT)-based real-valued discrete Gabor transform (RDGT) and its inverse transform are presented in this paper. A 2-D multirate-based analysis convolver bank is designed for the 2-D RDGT, and a 2-D multirate-based synthesis convolver bank is designed for the 2-D inverse RDGT. The parallel channels in each of the two convolver banks have a unified structure and can apply the 2-D fast DHT algorithm to speed up their computations. The computational complexity of each parallel channel is low and is independent of the Gabor oversampling rate. All the 2-D RDGT coefficients of an image are computed in parallel during the analysis process and can be reconstructed in parallel during the synthesis process. The computational complexity and time of the proposed parallel algorithms are analyzed and compared with those of the existing fastest algorithms for 2-D discrete Gabor transforms. The results indicate that the proposed algorithms are the fastest, which make them attractive for real-time image processing.
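
    The sketch below shows only the transform kernel underlying the RDGT: the one-dimensional DHT, H[k] = sum_n x[n] cas(2 pi n k / N) with cas = cos + sin, computed from the FFT as Re(F) - Im(F), together with its inverse (the DHT is its own inverse up to a factor 1/N). The paper's 2-D multirate convolver banks are not reproduced.

      import numpy as np

      def dht(x):
          # Discrete Hartley transform via the FFT: cas = cos + sin  =>  H = Re(F) - Im(F)
          F = np.fft.fft(x)
          return F.real - F.imag

      def idht(X):
          # The unnormalized DHT is an involution up to 1/N
          return dht(X) / X.size

      x = np.random.default_rng(0).standard_normal(64)
      assert np.allclose(idht(dht(x)), x)
      print("DHT round-trip OK")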

  6. Voxel inversion of airborne electromagnetic data for improved model integration

    Science.gov (United States)

    Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders

    2014-05-01

    Inversion of electromagnetic data has migrated from single-site interpretations to inversions of entire surveys using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points. For airborne electromagnetic (AEM) surveys, the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid that is not correlated to the geophysical model space, and the geophysical information has to be relocated for integration in (hydro)geological models. We have developed a new geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which allows geological/hydrogeological models to be informed directly. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the centers of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, in Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km X 16 km. The voxel inversion was carried out on a structured grid of 260 X 325 X 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054
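
    A minimal sketch of step 2 of the scheme above: interpolating resistivities defined on voxel nodes to the centres of the "virtual" layers below one sounding, here with simple inverse-distance weighting. The node layout, layer thicknesses and the weighting power are assumptions, not the survey's actual configuration.

      import numpy as np

      def idw(query_points, node_xyz, node_rho, power=2.0, eps=1e-6):
          # Inverse-distance-weighted average of node values at each query point
          d = np.linalg.norm(query_points[:, None, :] - node_xyz[None, :, :], axis=2)
          w = 1.0 / (d + eps) ** power
          return (w @ node_rho) / w.sum(axis=1)

      rng = np.random.default_rng(0)
      node_xyz = rng.uniform([0.0, 0.0, 0.0], [200.0, 200.0, 100.0], size=(300, 3))   # node positions (m)
      node_rho = rng.uniform(10.0, 100.0, size=300)                                   # node resistivities (ohm.m)

      sounding_xy = np.array([100.0, 100.0])
      layer_tops = np.array([0.0, 5.0, 12.0, 21.0, 33.0, 50.0])
      layer_centres = 0.5 * (layer_tops[:-1] + layer_tops[1:])
      query = np.column_stack([np.full_like(layer_centres, sounding_xy[0]),
                               np.full_like(layer_centres, sounding_xy[1]),
                               layer_centres])

      rho_virtual = idw(query, node_xyz, node_rho)
      print(rho_virtual)        # resistivities fed to the 1D forward response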

  7. Inverse Ising problem in continuous time: A latent variable approach

    Science.gov (United States)

    Donner, Christian; Opper, Manfred

    2017-12-01

    We consider the inverse Ising problem: the inference of network couplings from observed spin trajectories for a model with continuous time Glauber dynamics. By introducing two sets of auxiliary latent random variables we render the likelihood into a form which allows for simple iterative inference algorithms with analytical updates. The variables are (1) Poisson variables to linearize an exponential term which is typical for point process likelihoods and (2) Pólya-Gamma variables, which make the likelihood quadratic in the coupling parameters. Using the augmented likelihood, we derive an expectation-maximization (EM) algorithm to obtain the maximum likelihood estimate of network parameters. Using a third set of latent variables we extend the EM algorithm to sparse couplings via L1 regularization. Finally, we develop an efficient approximate Bayesian inference algorithm using a variational approach. We demonstrate the performance of our algorithms on data simulated from an Ising model. For data which are simulated from a more biologically plausible network with spiking neurons, we show that the Ising model captures well the low order statistics of the data and how the Ising couplings are related to the underlying synaptic structure of the simulated network.

  8. Galerkin algorithm for multidimensional plasma simulation codes. Informal report

    International Nuclear Information System (INIS)

    Godfrey, B.B.

    1979-03-01

    A Galerkin finite element differencing scheme has been developed for a computer simulation of plasmas. The new difference equations identically satisfy an equation of continuity. Thus, the usual current correction procedure, involving inversion of Poisson's equation, is unnecessary. The algorithm is free of many numerical Cherenkov instabilities. This differencing scheme has been implemented in CCUBE, an already existing relativistic, electromagnetic, two-dimensional PIC code in arbitrary separable, orthogonal coordinates. The separability constraint is eliminated by the new algorithm. The new version of CCUBE exhibits good stability and accuracy with reduced computer memory and time requirements. Details of the algorithm and its implementation are presented

  9. Computational study on full-wave inversion based on the elastic wave-equation; Dansei hado hoteishiki full wave inversion no model keisan ni yoru kento

    Energy Technology Data Exchange (ETDEWEB)

    Uesaka, S [Kyoto University, Kyoto (Japan). Faculty of Engineering; Watanabe, T; Sassa, K [Kyoto University, Kyoto (Japan)

    1997-05-27

    An algorithm is constructed and a program developed for a full-wave inversion (FWI) method utilizing the elastic wave equation in seismic exploration. The FWI method obtains a physical property distribution using the whole observed waveforms as data. It is capable of resolution several times finer than the wavelength, since it can handle such phenomena as wave reflection and dispersion. The method for determining the P-wave velocity structure by use of the acoustic wave equation does not provide information about the S-wave velocity, since it does not consider S-waves or converted waves. In an analysis using the elastic wave equation, on the other hand, not only P-wave data but also S-wave data can be utilized. In this report, under such circumstances, an inverse analysis algorithm is constructed on the basis of the elastic wave equation, and a basic program is developed. On the basis of the methods of Mora and of Luo and Schuster, the correction factors for the P-wave and S-wave velocities are formulated directly from the elastic wave equation. Computations are performed and the effects of the source frequency and the direction of wave propagation are examined. 6 refs., 8 figs.

  10. New algorithms and new results for strong coupling LQCD

    CERN Document Server

    Unger, Wolfgang

    2012-01-01

    We present and compare new types of algorithms for lattice QCD with staggered fermions in the limit of infinite gauge coupling. These algorithms are formulated on a discrete spatial lattice but with continuous Euclidean time. They make use of the exact Hamiltonian, with the inverse temperature beta as the only input parameter. This formulation turns out to be analogous to that of a quantum spin system. The sign problem is completely absent, at zero and non-zero baryon density. We compare the performance of a continuous-time worm algorithm and of a Stochastic Series Expansion algorithm (SSE), which operates on equivalence classes of time-ordered interactions. Finally, we apply the SSE algorithm to a first exploratory study of two-flavor strong coupling lattice QCD, which is manageable in the Hamiltonian formulation because the sign problem can be controlled.

  11. Tradeoffs Between Branch Mispredictions and Comparisons for Sorting Algorithms

    DEFF Research Database (Denmark)

    Brodal, Gerth Stølting; Moruz, Gabriel

    2005-01-01

    Branch mispredictions are an important factor affecting running time in practice. In this paper we consider tradeoffs between the number of branch mispredictions and the number of comparisons for sorting algorithms in the comparison model. We prove that a sorting algorithm using O(dn log n) comparisons performs Omega(n log_d n) branch mispredictions. We show that Multiway MergeSort achieves this tradeoff by adopting a multiway merger with a low number of branch mispredictions. For adaptive sorting algorithms we similarly obtain that an algorithm performing O(dn(1+log(1+Inv/n))) comparisons must perform Omega(n log_d(1+Inv/n)) branch mispredictions, where Inv is the number of inversions in the input. This tradeoff can be achieved by GenericSort by Estivill-Castro and Wood by adopting a multiway division protocol and a multiway merging algorithm with a low number of branch mispredictions.

  12. Micro-Doppler Signal Time-Frequency Algorithm Based on STFRFT

    Directory of Open Access Journals (Sweden)

    Cunsuo Pang

    2016-09-01

    Full Text Available This paper proposes a time-frequency algorithm based on the short-time fractional-order Fourier transformation (STFRFT) for the identification of complicated moving targets. The algorithm, consisting of an STFRFT order-changing and quick-selection method, is effective in reducing the computation load. A multi-order STFRFT time-frequency algorithm is also developed that makes use of the time-frequency feature of each micro-Doppler component signal. This algorithm improves the estimation accuracy of time-frequency curve fitting through multi-order matching. Finally, experimental data were used to demonstrate STFRFT's performance in micro-Doppler time-frequency analysis. The results validated the higher estimation accuracy of the proposed algorithm. It may be applied to LFM (linear frequency modulated) pulse radar, SAR (synthetic aperture radar), or ISAR (inverse synthetic aperture radar) to improve the probability of target recognition.

  14. A robust spatial filtering technique for multisource localization and geoacoustic inversion.

    Science.gov (United States)

    Stotts, S A

    2005-07-01

    Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.

  15. Inversion of self-potential anomalies caused by simple-geometry bodies using global optimization algorithms

    International Nuclear Information System (INIS)

    Göktürkler, G; Balkaya, Ç

    2012-01-01

    Three naturally inspired meta-heuristic algorithms—the genetic algorithm (GA), simulated annealing (SA) and particle swarm optimization (PSO)—were used to invert some of the self-potential (SP) anomalies originating from polarized bodies with simple geometries. Both synthetic and field data sets were considered. The tests with the synthetic data comprised solutions with both noise-free and noisy data; in the tests with the field data, SP anomalies observed over a copper belt (India), graphite deposits (Germany) and a metallic sulfide (Turkey) were inverted. The model parameters included the electric dipole moment, polarization angle, depth, shape factor and origin of the anomaly. The estimated parameters were compared with those from previous studies using various optimization algorithms, mainly least-squares approaches, on the same data sets. During the test studies the solutions by GA, PSO and SA were consistent with each other; a good starting model was not a requirement to reach the global minimum. It can be concluded that the global optimization algorithms considered in this study were able to yield solutions compatible with those from widely used local optimization algorithms. (paper)
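
    A rough sketch of the forward/inverse pairing for a single polarized body is given below, using the widely quoted simple-geometry SP expression V(x) = K((x - x0) cos(theta) + h sin(theta)) / ((x - x0)^2 + h^2)^q, with K the dipole moment, theta the polarization angle, h the depth, q the shape factor and x0 the anomaly origin. scipy's differential evolution stands in for the GA/SA/PSO solvers compared in the paper, and all numerical values are illustrative assumptions.

      import numpy as np
      from scipy.optimize import differential_evolution

      def sp_anomaly(x, K, theta, h, q, x0):
          return K * ((x - x0) * np.cos(theta) + h * np.sin(theta)) / ((x - x0) ** 2 + h ** 2) ** q

      x = np.linspace(-100.0, 100.0, 81)                       # profile positions (m)
      true_model = (-250.0, np.deg2rad(40.0), 20.0, 1.0, 5.0)  # K, theta, h, q, x0 (assumed)
      rng = np.random.default_rng(0)
      v_obs = sp_anomaly(x, *true_model) + 0.2 * rng.standard_normal(x.size)   # noisy "observations"

      bounds = [(-1000.0, 1000.0), (0.0, np.pi), (1.0, 60.0), (0.4, 1.6), (-50.0, 50.0)]
      res = differential_evolution(lambda m: float(np.sum((sp_anomaly(x, *m) - v_obs) ** 2)),
                                   bounds, seed=1, tol=1e-8)
      print("recovered (K, theta, h, q, x0):", res.x)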

  16. Three-dimensional magnetotelluric axial anisotropic forward modeling and inversion

    Science.gov (United States)

    Cao, Hui; Wang, Kunpeng; Wang, Tao; Hua, Boguang

    2018-06-01

    Magnetotelluric (MT) data have been widely used to image underground electrical structures. However, when significant axial resistivity anisotropy is present, how it influences three-dimensional MT data has not yet been clearly resolved. We here propose a scheme for three-dimensional modeling of MT data in the presence of axially anisotropic resistivity, where the electromagnetic fields are decomposed into primary and secondary components. A 3D staggered-grid finite difference method is then used to solve the resulting 3D governing equations. Numerical tests have been completed to validate the correctness and accuracy of the present algorithm. A limited-memory Broyden-Fletcher-Goldfarb-Shanno method is then utilized to realize the 3D MT axial anisotropic inversion. The test results show that, compared to the results of isotropic resistivity inversion, taking the axial anisotropy into account can much improve the inverted results.

  17. Multi-objective optimization of inverse planning for accurate radiotherapy

    International Nuclear Information System (INIS)

    Cao Ruifen; Pei Xi; Cheng Mengyun; Li Gui; Hu Liqin; Wu Yican; Jing Jia; Li Guoli

    2011-01-01

    The multi-objective optimization of inverse planning based on the Pareto solution set, motivated by the multi-objective character of inverse planning in accurate radiotherapy, was studied in this paper. Firstly, the clinical requirements of a treatment plan were transformed into a multi-objective optimization problem with multiple constraints. Then, the fast and elitist multi-objective Non-dominated Sorting Genetic Algorithm (NSGA-II) was introduced to optimize the problem. A clinical example was tested using this method. The results show that the obtained set of non-dominated solutions was uniformly distributed and that the corresponding dose distribution of each solution not only approached the expected dose distribution but also met the dose-volume constraints. This indicates that the clinical requirements were better satisfied using this method and that the planner could select the optimal treatment plan from the non-dominated solution set. (authors)
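
    The sketch below shows the Pareto-filtering idea at the core of NSGA-II: keeping the non-dominated candidate plans under several objectives that are minimized simultaneously. The two objective columns are random stand-ins for real dose-based scores; crowding distance, crossover and mutation are not shown.

      import numpy as np

      def non_dominated(F):
          # Boolean mask of Pareto-optimal rows of the objective matrix F (all objectives minimized)
          n = F.shape[0]
          mask = np.ones(n, dtype=bool)
          for i in range(n):
              if not mask[i]:
                  continue
              dominates_i = np.all(F <= F[i], axis=1) & np.any(F < F[i], axis=1)
              if dominates_i.any():
                  mask[i] = False
          return mask

      rng = np.random.default_rng(0)
      F = rng.random((200, 2))                 # 200 candidate plans, 2 stand-in objectives
      front = F[non_dominated(F)]
      print(f"{front.shape[0]} non-dominated plans out of {F.shape[0]}")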

  18. Cone-beam local reconstruction based on a Radon inversion transformation

    International Nuclear Information System (INIS)

    Wang Xian-Chao; Yan Bin; Li Lei; Hu Guo-En

    2012-01-01

    Local reconstruction from truncated projection data is one area of interest in image reconstruction for computed tomography (CT), which creates the possibility for dose reduction. In this paper, a filtered-backprojection (FBP) algorithm based on the Radon inversion transform is presented to deal with three-dimensional (3D) local reconstruction in the circular geometry. The algorithm achieves the data filtering in two steps. The first step is the derivative of the projections, which acts locally on the data and can thus be carried out accurately even in the presence of data truncation. The second step is the nonlocal Hilbert filtering. Numerical simulations and real-data reconstructions have been conducted to validate the new reconstruction algorithm. Compared with the approximate truncation-resistant algorithm for computed tomography (ATRACT), not only does it have a comparable ability to restrain truncation artifacts, but its reconstruction efficiency is also improved; it is about twice as fast as ATRACT. Therefore, this work provides a simple and efficient approach for approximate reconstruction from truncated projections in circular cone-beam CT.
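
    A minimal sketch of the two filtering steps on a single detector row: a local derivative, which remains well defined for truncated data, followed by the nonlocal Hilbert filter. The detector sampling and the synthetic projection are assumptions, and backprojection and the cone-beam geometry are omitted.

      import numpy as np
      from scipy.signal import hilbert

      du = 0.5                                   # detector sampling (assumed, mm)
      u = np.arange(-128, 128) * du
      projection = np.exp(-(u / 40.0) ** 2)      # synthetic (possibly truncated) projection row

      derivative = np.gradient(projection, du)               # step 1: local derivative
      hilbert_filtered = np.imag(hilbert(derivative))        # step 2: Hilbert transform of the derivative

      print(hilbert_filtered[:5])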

  19. A unified inversion scheme to process multifrequency measurements of various dispersive electromagnetic properties

    Science.gov (United States)

    Han, Y.; Misra, S.

    2018-04-01

    Multi-frequency measurement of a dispersive electromagnetic (EM) property, such as electrical conductivity, dielectric permittivity, or magnetic permeability, is commonly analyzed for purposes of material characterization. Such an analysis requires inversion of the multi-frequency measurement based on a specific relaxation model, such as the Cole-Cole model or Pelton's model. We develop a unified inversion scheme that can be coupled to various types of relaxation models to independently process multi-frequency measurements of varied EM properties for purposes of improved EM-based geomaterial characterization. The proposed inversion scheme is first tested on a few synthetic cases in which different relaxation models are coupled into the inversion scheme, and it is then applied to multi-frequency complex conductivity, complex resistivity, complex permittivity, and complex impedance measurements. The method estimates up to seven relaxation-model parameters, exhibiting convergence and accuracy for random initializations of the relaxation-model parameters within up to three orders of magnitude of the true parameter values. The proposed inversion method implements a bounded Levenberg algorithm with tuned initial values of the damping parameter and its iterative adjustment factor, which are fixed in all the cases shown in this paper, irrespective of the type of measured EM property and the type of relaxation model. Notably, a jump-out step and a jump-back-in step are implemented as automated methods in the inversion scheme to prevent the inversion from getting trapped around local minima and to honor the physical bounds of the model parameters. The proposed inversion scheme can be easily used to process various types of EM measurements without major changes to the inversion scheme.
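
    As a sketch of coupling one relaxation model to a bounded inversion, the example below fits Pelton's complex-resistivity model rho(w) = rho0 (1 - m (1 - 1/(1 + (i w tau)^c))) to noisy synthetic spectra. scipy's bounded trust-region least-squares solver stands in for the paper's bounded Levenberg algorithm with jump-out/jump-back-in steps, and all parameter values and bounds are illustrative assumptions.

      import numpy as np
      from scipy.optimize import least_squares

      def pelton(w, rho0, m, tau, c):
          return rho0 * (1.0 - m * (1.0 - 1.0 / (1.0 + (1j * w * tau) ** c)))

      freqs = np.logspace(-2, 4, 25)                    # Hz
      w = 2.0 * np.pi * freqs
      true_p = (100.0, 0.3, 0.01, 0.6)                  # rho0 (ohm.m), m, tau (s), c (assumed)
      rng = np.random.default_rng(0)
      data = pelton(w, *true_p)
      data = data + 0.01 * np.abs(data) * (rng.standard_normal(w.size) + 1j * rng.standard_normal(w.size))

      def residuals(p):
          r = pelton(w, *p) - data
          return np.concatenate([r.real, r.imag])       # fit real and imaginary parts jointly

      lb, ub = [1.0, 0.0, 1e-5, 0.1], [1e4, 1.0, 1e2, 1.0]
      fit = least_squares(residuals, x0=[10.0, 0.5, 0.1, 0.5], bounds=(lb, ub))
      print("recovered (rho0, m, tau, c):", fit.x)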

  20. An algorithm for the split-feasibility problems with application to the split-equality problem.

    Science.gov (United States)

    Chuang, Chih-Sheng; Chen, Chi-Ming

    2017-01-01

    In this paper, we study the split-feasibility problem in Hilbert spaces by using the projected reflected gradient algorithm. As applications, we study the convex linear inverse problem and the split-equality problem in Hilbert spaces, and we give new algorithms for these problems. Finally, numerical results are given for our main results.

  1. Inversion of Side Scan Sonar Motion and Posture in Seabed Geomorphology

    Directory of Open Access Journals (Sweden)

    Tao Weiliang

    2017-08-01

    Full Text Available The side scan sonar measurement platform, affected by the underwater environment and its own motion precision, inevitably suffers posture and motion disturbances, which greatly affect the accuracy of geomorphic image formation. It is difficult to capture these underwater disturbances sensitively and accurately by relying on auxiliary navigation devices. In this paper, we propose a method to invert the motion and posture information of the measurement platform by using the matching relation between the strip images. The inversion algorithm is the key link in the image mosaic framework of side scan sonar, and the acquired motion and posture information can effectively improve seabed topography mapping accuracy and stability. In this paper, we first analyze the influence of platform motion and posture on side scan sonar mapping, and establish the correlation model between the motion and posture information and the strip image matching information. Then, based on this model, a reverse neural network is established. Based on the input and output of the neural network and the designed training and test data sets, a motion and posture inversion mechanism based on strip image matching information is established. The accuracy and validity of the algorithm are verified by the experimental results.

  2. Inversion Method for Early Detection of ARES-1 Case Breach Failure

    Science.gov (United States)

    Mackey, Ryan M.; Kulikov, Igor K.; Bajwa, Anupa; Berg, Peter; Smelyanskiy, Vadim

    2010-01-01

    A document describes research into the problem of detecting case breach formation at an early stage of a rocket flight. An inversion algorithm for case breach localization is proposed and analyzed. It is shown how the case breach can be located at an early stage of its development by using the rocket sensor data and the output data from the control block of the rocket navigation system. The results are simulated with MATLAB/Simulink software. The efficiency of an inversion algorithm for case breach location is discussed. The research was devoted to the analysis of the ARES-1 flight during the first 120 seconds after launch and early prediction of case breach failure. During this time, the rocket is propelled by its first-stage Solid Rocket Booster (SRB). If a breach appears in the SRB case, the gases escaping through it will produce a (side) thrust directed perpendicular to the rocket axis. The side thrust creates a torque influencing the rocket attitude. The ARES-1 control system will compensate for the side thrust until it reaches some critical value, after which the flight will be uncontrollable. The objective of this work was to obtain the start time of case breach development and its location using the rocket inertial navigation sensors and GNC data. The algorithm was effective for the detection and location of a breach in an SRB field joint at an early stage of its development.

  3. Optimization of MIS/IL solar cells parameters using genetic algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Ahmed, K.A.; Mohamed, E.A.; Alaa, S.H. [Faculty of Engineering, Alexandria Univ. (Egypt); Motaz, M.S. [Institute of Graduate Studies and Research, Alexandria Univ. (Egypt)

    2004-07-01

    This paper presents a genetic algorithm optimization of MIS/IL solar cell parameters, including the doping concentration N_A, metal work function phi_m, oxide thickness d_ox, mobile charge density N_m, fixed oxide charge density N_ox and the external back bias V applied to the inversion grid. The optimization results are compared with a theoretical optimization and show that the genetic algorithm can be used for determining the optimum parameters of the cell. (orig.)

  4. Improved L-BFGS diagonal preconditioners for a large-scale 4D-Var inversion system: application to CO2 flux constraints and analysis error calculation

    Science.gov (United States)

    Bousserez, Nicolas; Henze, Daven; Bowman, Kevin; Liu, Junjie; Jones, Dylan; Keller, Martin; Deng, Feng

    2013-04-01

    This work presents improved analysis error estimates for 4D-Var systems. From operational NWP models to top-down constraints on trace gas emissions, many of today's data assimilation and inversion systems in atmospheric science rely on variational approaches. This success is due to both the mathematical clarity of these formulations and the availability of computationally efficient minimization algorithms. However, unlike Kalman Filter-based algorithms, these methods do not provide an estimate of the analysis or forecast error covariance matrices, these error statistics being propagated only implicitly by the system. From both a practical (cycling assimilation) and scientific perspective, assessing uncertainties in the solution of the variational problem is critical. For large-scale linear systems, deterministic or randomization approaches can be considered based on the equivalence between the inverse Hessian of the cost function and the covariance matrix of analysis error. For perfectly quadratic systems, like incremental 4D-Var, Lanczos/Conjugate-Gradient algorithms have proven to be most efficient in generating low-rank approximations of the Hessian matrix during the minimization. For weakly non-linear systems though, the Limited-memory Broyden-Fletcher-Goldfarb-Shanno (L-BFGS), a quasi-Newton descent algorithm, is usually considered the best method for the minimization. Suitable for large-scale optimization, this method allows one to generate an approximation to the inverse Hessian using the latest m vector/gradient pairs generated during the minimization, m depending upon the available core memory. At each iteration, an initial low-rank approximation to the inverse Hessian has to be provided, which is called preconditioning. The ability of the preconditioner to retain useful information from previous iterations largely determines the efficiency of the algorithm. Here we assess the performance of different preconditioners to estimate the inverse Hessian of a
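
    For reference, the sketch below shows the standard L-BFGS two-loop recursion and where a diagonal preconditioner enters as the initial inverse-Hessian approximation H0. The quadratic test problem and the choice of H0 are illustrative assumptions; the 4D-Var context and the randomization-based error estimates are not reproduced.

      import numpy as np

      def lbfgs_direction(grad, s_list, y_list, h0_diag):
          # Two-loop recursion: s_list/y_list hold the last m parameter/gradient increment pairs
          q = grad.copy()
          rhos = [1.0 / (y @ s) for s, y in zip(s_list, y_list)]
          alphas = []
          for s, y, rho in zip(reversed(s_list), reversed(y_list), reversed(rhos)):
              a = rho * (s @ q)                 # newest pair first
              q -= a * y
              alphas.append(a)
          r = h0_diag * q                       # apply the diagonal preconditioner H0
          for (s, y, rho), a in zip(zip(s_list, y_list, rhos), reversed(alphas)):
              b = rho * (y @ r)
              r += (a - b) * s                  # oldest pair first
          return -r                             # quasi-Newton descent direction

      # Tiny check on a quadratic f(x) = 0.5 x^T A x with exact curvature pairs
      rng = np.random.default_rng(0)
      A = np.diag([1.0, 10.0, 100.0])
      xs = [rng.standard_normal(3) for _ in range(4)]
      s_list = [xs[i + 1] - xs[i] for i in range(3)]
      y_list = [A @ s for s in s_list]          # gradient differences for a quadratic
      h0 = 1.0 / np.diag(A)                     # ideal diagonal preconditioner in this toy case
      d = lbfgs_direction(A @ xs[-1], s_list, y_list, h0)
      print(d)                                  # approximately -xs[-1], i.e. the Newton step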

  5. Optimization of importance factors in inverse planning

    International Nuclear Information System (INIS)

    Xing, L.

    1999-01-01

    Inverse treatment planning starts with a treatment objective and obtains the solution by optimizing an objective function. The clinical objectives are usually multifaceted and potentially incompatible with one another. A set of importance factors is often incorporated in the objective function to parametrize trade-off strategies and to prioritize the dose conformality in different anatomical structures. Whereas the general formalism remains the same, different sets of importance factors characterize plans of obviously different flavour and thus critically determine the final plan. Up to now, the determination of these parameters has been a 'guessing' game based on empirical knowledge because the final dose distribution depends on the parameters in a complex and implicit way. The influence of these parameters is not known until the plan optimization is completed. In order to compromise properly the conflicting requirements of the target and sensitive structures, the parameters are usually adjusted through a trial-and-error process. In this paper, a method to estimate these parameters computationally is proposed and an iterative computer algorithm is described to determine these parameters numerically. The treatment plan selection is done in two steps. First, a set of importance factors are chosen and the corresponding beam parameters (e.g. beam profiles) are optimized under the guidance of a quadratic objective function using an iterative algorithm reported earlier. The 'optimal' plan is then evaluated by an additional scoring function. The importance factors in the objective function are accordingly adjusted to improve the ranking of the plan. For every change in the importance factors, the beam parameters need to be re-optimized. This process continues in an iterative fashion until the scoring function is saturated. The algorithm was applied to two clinical cases and the results demonstrated that it has the potential to improve significantly the existing method of

  6. Data inversion in coupled subsurface flow and geomechanics models

    International Nuclear Information System (INIS)

    Iglesias, Marco A; McLaughlin, Dennis

    2012-01-01

    We present an inverse modeling approach to estimate petrophysical and elastic properties of the subsurface. The aim is to use the fully coupled geomechanics-flow model of Girault et al (2011 Math. Models Methods Appl. Sci. 21 169–213) to jointly invert surface deformation and pressure data from wells. We use a functional-analytic framework to construct a forward operator (parameter-to-output map) that arises from the geomechanics-flow model of Girault et al. Then, we follow a deterministic approach to pose the inverse problem of finding parameter estimates from measurements of the output of the forward operator. We prove that this inverse problem is ill-posed in the sense of stability. The inverse problem is then regularized with the implementation of the Newton-conjugate gradient (CG) algorithm of Hanke (1997 Numer. Funct. Anal. Optim. 18 18–971). For a consistent application of the Newton-CG scheme, we establish the differentiability of the forward map and characterize the adjoint of its linearization. We provide assumptions under which the theory of Hanke ensures convergence and regularizing properties of the Newton-CG scheme. These properties are verified in our numerical experiments. In addition, our synthetic experiments display the capabilities of the proposed inverse approach to estimate parameters of the subsurface by means of data inversion. In particular, the added value of measurements of surface deformation in the estimation of absolute permeability is quantified with respect to the standard history matching approach of inverting production data with flow models. The proposed methodology can be potentially used to invert satellite geodetic data (e.g. InSAR and GPS) in combination with production data for optimal monitoring and characterization of the subsurface. (paper)

  7. Study on the algorithm of computational ghost imaging based on discrete fourier transform measurement matrix

    Science.gov (United States)

    Zhang, Leihong; Liang, Dong; Li, Bei; Kang, Yi; Pan, Zilan; Zhang, Dawei; Gao, Xiumin; Ma, Xiuhua

    2016-07-01

    On the basis of an analysis of the cosine light field with a determined analytic expression and the pseudo-inverse method, the object is illuminated by a preset light field with a determined discrete Fourier transform measurement matrix, and the object image is reconstructed by the pseudo-inverse method. The analytic expression of the algorithm of computational ghost imaging based on a discrete Fourier transform measurement matrix is deduced theoretically and compared with the algorithm of compressive computational ghost imaging based on a random measurement matrix. The reconstruction process and the reconstruction error are analyzed. On this basis, simulations are carried out to verify the theoretical analysis. When the number of sampling measurements is similar to the number of object pixels, the rank of the discrete Fourier transform matrix is the same as that of the random measurement matrix, the PSNR of the reconstructed images of the FGI and PGI algorithms is similar, and the reconstruction error of the traditional CGI algorithm is lower than that of the reconstructed images based on the FGI and PGI algorithms. As the number of sampling measurements decreases, the PSNR of the reconstructed image based on the FGI algorithm decreases slowly, while the PSNR of the reconstructed images based on the PGI and CGI algorithms decreases sharply. The reconstruction time of the FGI algorithm is lower than that of the other algorithms and is not affected by the number of sampling measurements. The FGI algorithm can effectively filter out random white noise through a low-pass filter and realize denoising in the reconstruction, with a higher denoising capability than that of the CGI algorithm. The FGI algorithm can improve the reconstruction accuracy and the reconstruction speed of computational ghost imaging.
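
    A toy sketch of the pseudo-inverse reconstruction with a deterministic, DFT-derived measurement matrix is given below. Using the real part of the DFT matrix as the preset illumination patterns, the one-dimensional object and the noise level are assumptions of this sketch, and the optical ghost-imaging setup itself is not modelled.

      import numpy as np

      n = 64                                     # number of object pixels (1-D toy object)
      m = 48                                     # number of sampling measurements (m < n)
      k = np.arange(n)
      dft = np.exp(-2j * np.pi * np.outer(k, k) / n)
      phi = dft.real[:m, :]                      # preset measurement (illumination) matrix

      rng = np.random.default_rng(0)
      obj = np.zeros(n)
      obj[10:20] = 1.0                           # simple 1-D "object"
      y = phi @ obj + 0.01 * rng.standard_normal(m)   # bucket measurements with noise

      # m < n, so the pseudo-inverse gives the minimum-norm least-squares reconstruction
      obj_rec = np.linalg.pinv(phi) @ y
      print("relative reconstruction error:", np.linalg.norm(obj_rec - obj) / np.linalg.norm(obj))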

  8. Wavelet transform and Huffman coding based electrocardiogram compression algorithm: Application to telecardiology

    International Nuclear Information System (INIS)

    Chouakri, S A; Djaafri, O; Taleb-Ahmed, A

    2013-01-01

    We present in this work an algorithm for electrocardiogram (ECG) signal compression aimed at its transmission via a telecommunication channel. Basically, the proposed ECG compression algorithm is built on the use of the wavelet transform, leading to a separation of low- and high-frequency components, and high-order-statistics-based thresholding, using a level-adjusted kurtosis value, to denoise the ECG signal; next, a linear predictive coding filter is applied to the wavelet coefficients, producing a lower-variance signal. The latter is coded using Huffman encoding, yielding an optimal coding length in terms of the average number of bits per sample. At the receiver end, under the assumption of an ideal communication channel, the inverse processes are carried out, namely Huffman decoding, the inverse linear predictive coding filter and the inverse discrete wavelet transform, leading to the estimated version of the ECG signal. The proposed ECG compression algorithm is tested on a set of ECG records extracted from the MIT-BIH Arrhythmia Database, including different cardiac anomalies as well as normal ECG signals. The obtained results are evaluated in terms of compression ratio and mean square error, which are, respectively, around 1:8 and 7%. Besides the numerical evaluation, visual inspection demonstrates the high quality of the ECG signal restitution, where the different ECG waves are recovered correctly.
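
    A minimal sketch of the wavelet decomposition, thresholding and reconstruction stages is shown below using PyWavelets. The kurtosis-adjusted threshold, the linear predictive coding filter and the Huffman stage are omitted, a synthetic signal stands in for the MIT-BIH records, and the fixed threshold value is an assumption.

      import numpy as np
      import pywt

      fs = 360                                            # assumed sampling rate (Hz)
      t = np.arange(0, 4, 1.0 / fs)
      ecg_like = np.sin(2 * np.pi * 1.2 * t) + 0.3 * np.sin(2 * np.pi * 15 * t)
      noisy = ecg_like + 0.1 * np.random.default_rng(0).standard_normal(t.size)

      coeffs = pywt.wavedec(noisy, "db4", level=5)        # low/high frequency separation
      threshold = 0.2                                     # stand-in for the kurtosis-based rule
      coeffs = [coeffs[0]] + [pywt.threshold(c, threshold, mode="soft") for c in coeffs[1:]]
      reconstructed = pywt.waverec(coeffs, "db4")[: t.size]

      kept = sum(np.count_nonzero(c) for c in coeffs)
      total = sum(c.size for c in coeffs)
      print(f"nonzero coefficients after thresholding: {kept}/{total}")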

  9. Complex nonlinear Fourier transform and its inverse

    International Nuclear Information System (INIS)

    Saksida, Pavle

    2015-01-01

    We study the nonlinear Fourier transform associated to the integrable systems of AKNS-ZS type. Two versions of this transform appear in connection with the AKNS-ZS systems. These two versions can be considered as two real forms of a single complex transform F_c. We construct an explicit algorithm for the calculation of the inverse transform (F_c)^-1(h) for an arbitrary argument h. The result is given in the form of a convergent series of functions in the domain space and the terms of this series can be computed explicitly by means of finitely many integrations. (paper)

  10. Identification of Water Diffusivity of Inorganic Porous Materials Using Evolutionary Algorithms

    Czech Academy of Sciences Publication Activity Database

    Kočí, J.; Maděra, J.; Jerman, M.; Keppert, M.; Svora, Petr; Černý, R.

    2016-01-01

    Vol. 113, No. 1 (2016), pp. 51-66 ISSN 0169-3913 Institutional support: RVO:61388980 Keywords: Evolutionary algorithms * Water transport * Inorganic porous materials * Inverse analysis Subject RIV: CA - Inorganic Chemistry Impact factor: 2.205, year: 2016

  11. Inverse analysis of non-uniform temperature distributions using multispectral pyrometry

    Science.gov (United States)

    Fu, Tairan; Duan, Minghao; Tian, Jibin; Shi, Congling

    2016-05-01

    Optical diagnostics can be used to obtain sub-pixel temperature information in remote sensing. A multispectral pyrometry method was developed using multiple spectral radiation intensities to deduce the temperature area distribution in the measurement region. The method transforms a spot multispectral pyrometer with a fixed field of view into a pyrometer with enhanced spatial resolution that can give sub-pixel temperature information from a "one pixel" measurement region. A temperature area fraction function was defined to represent the spatial temperature distribution in the measurement region. The method is illustrated by simulations of a multispectral pyrometer with a spectral range of 8.0-13.0 μm measuring a non-isothermal region with a temperature range of 500-800 K in the spot pyrometer field of view. The inverse algorithm for the sub-pixel temperature distribution (temperature area fractions) in the "one pixel" verifies this multispectral pyrometry method. The results show that an improved Levenberg-Marquardt algorithm is effective for this ill-posed inverse problem with relative errors in the temperature area fractions of (-3%, 3%) for most of the temperatures. The analysis provides a valuable reference for the use of spot multispectral pyrometers for sub-pixel temperature distributions in remote sensing measurements.
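
    The sketch below illustrates the linear-mixture view of the problem: the radiance at each wavelength is modelled as a sum of Planck blackbody radiances weighted by the temperature area fractions, here with unit emissivity. Non-negative least squares stands in for the paper's improved Levenberg-Marquardt solver, and the wavelength grid, candidate temperatures and noise level are assumptions.

      import numpy as np
      from scipy.optimize import nnls

      h, c, kB = 6.626e-34, 2.998e8, 1.381e-23

      def planck(lam, T):
          # Blackbody spectral radiance for wavelength lam (m) and temperature T (K)
          return (2.0 * h * c ** 2 / lam ** 5) / np.expm1(h * c / (lam * kB * T))

      wavelengths = np.linspace(8e-6, 13e-6, 16)            # 8-13 um band
      temps = np.arange(500.0, 801.0, 50.0)                 # candidate temperatures (K)
      A = np.array([[planck(lam, T) for T in temps] for lam in wavelengths])

      f_true = np.array([0.0, 0.5, 0.0, 0.2, 0.0, 0.3, 0.0])    # assumed "true" area fractions
      rng = np.random.default_rng(0)
      signal = A @ f_true * (1.0 + 0.01 * rng.standard_normal(wavelengths.size))

      f_est, _ = nnls(A, signal)                            # non-negative area fractions
      print("estimated area fractions:", np.round(f_est / f_est.sum(), 3))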

  12. Use of apparent thickness for preprocessing of low-frequency electromagnetic data in inversion-based multibarrier evaluation workflow

    Science.gov (United States)

    Omar, Saad; Omeragic, Dzevat

    2018-04-01

    The concept of apparent thicknesses is introduced for the inversion-based, multicasing evaluation interpretation workflow using multifrequency and multispacing electromagnetic measurements. A thickness value is assigned to each measurement, enabling the development of two new preprocessing algorithms to remove casing collar artifacts. First, long-spacing apparent thicknesses are used to remove, from the pipe sections, artifacts ("ghosts") caused by the transmitter crossing a casing collar or corrosion. Second, a collar identification, localization, and assignment algorithm is developed to enable robust inversion in collar sections. Last, casing eccentering can also be identified on the basis of opposite deviation of short-spacing phase and magnitude apparent thicknesses from the nominal value. The proposed workflow can handle an arbitrary number of nested casings and has been validated on synthetic and field data.

  13. Simultaneous inversion of airborne electromagnetic data for resistivity and magnetic permeability

    International Nuclear Information System (INIS)

    Beard, L.P.; Nyquist, J.E.

    1998-01-01

    Where the magnetic permeability of rock or soil exceeds that of free space, the effect on airborne electromagnetic systems is to produce a frequency-independent shift in the in-phase response of the system while altering the quadrature response only slightly. The magnitude of the in-phase shift increases as (1) the relative magnetic permeability is increased, (2) the amount of magnetic material is increased, and (3) the airborne sensor gets nearer the earth's surface. Over resistive, magnetic ground, the shift may be evinced by negative in-phase measurements at low frequencies; but over more conductive ground, the same shift may go unnoticed because of the large positive in-phase response. If the airborne sensor is flown at low levels, the magnitude of the shift may be large enough to affect automatic inversion routines that do not take this shift into account, producing inaccurate estimated resistivities, usually overestimates. However, layered-earth inversion algorithms that incorporate magnetic permeability as an additional inversion parameter may improve the resistivity estimates. The authors demonstrate this improvement using data collected over hazardous waste sites near Oak Ridge, Tennessee, USA. Using resistivity inversion without magnetic permeability, the waste sites are almost invisible to the sensors. When magnetic permeability is included as an inversion parameter, the sites are detected, both by improved resistivity estimates and by estimated magnetic permeability

  14. Variational structure of inverse problems in wave propagation and vibration

    Energy Technology Data Exchange (ETDEWEB)

    Berryman, J.G.

    1995-03-01

    Practical algorithms for solving realistic inverse problems may often be viewed as problems in nonlinear programming with the data serving as constraints. Such problems are most easily analyzed when it is possible to segment the solution space into regions that are feasible (satisfying all the known constraints) and infeasible (violating some of the constraints). Then, if the feasible set is convex or at least compact, the solution to the problem will normally lie on the boundary of the feasible set. A nonlinear program may seek the solution by systematically exploring the boundary while satisfying progressively more constraints. Examples of inverse problems in wave propagation (traveltime tomography) and vibration (modal analysis) will be presented to illustrate how the variational structure of these problems may be used to create nonlinear programs using implicit variational constraints.

  15. Using machine learning to accelerate sampling-based inversion

    Science.gov (United States)

    Valentine, A. P.; Sambridge, M.

    2017-12-01

    In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods, such as the Neighbourhood Algorithm, and bridges the gap between prior- and posterior-sampling frameworks.
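
    A minimal sketch of the surrogate idea, under stated assumptions: a Gaussian Process is fit to a handful of expensive misfit evaluations and its cheap mean prediction then replaces the exact misfit inside a Metropolis sampler. The one-dimensional "forward model", the kernel and all tuning values are illustrative, and the refinement of the surrogate during sampling is not shown.

      import numpy as np
      from sklearn.gaussian_process import GaussianProcessRegressor
      from sklearn.gaussian_process.kernels import RBF

      def expensive_misfit(m):
          return (m - 1.3) ** 2 / 0.1          # stand-in for a synthetic-seismogram misfit

      rng = np.random.default_rng(0)
      m_train = np.linspace(-2.0, 4.0, 15)[:, None]
      y_train = np.array([expensive_misfit(m[0]) for m in m_train])
      gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True).fit(m_train, y_train)

      def surrogate_misfit(m):
          return float(gp.predict(np.array([[m]]))[0])

      # Metropolis sampling on the surrogate posterior ~ exp(-misfit / 2)
      samples, m_cur = [], 0.0
      f_cur = surrogate_misfit(m_cur)
      for _ in range(5000):
          m_prop = m_cur + 0.3 * rng.standard_normal()
          f_prop = surrogate_misfit(m_prop)
          if np.log(rng.random()) < 0.5 * (f_cur - f_prop):
              m_cur, f_cur = m_prop, f_prop
          samples.append(m_cur)
      print("posterior mean estimate:", np.mean(samples[1000:]))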

  16. Lax-pair operators for squared eigenfunctions in the inverse scattering transformations

    International Nuclear Information System (INIS)

    Iino, Kazuhiro; Ichikawa, Yoshihiko.

    1982-05-01

    Modification of the algorithm of Chen, Lee and Liu enables us to construct alternative Lax-pair operators for the Korteweg-de Vries equation and the modified Korteweg-de Vries equation. These Lax-pair operators stand as the Lax-pair operators for the squared eigenfunction and the sum of squared eigenfunctions of the Ablowitz-Kaup-Newell-Segur inverse scattering transformation for these celebrated nonlinear evolution equations. (author)

  17. A comparative analysis of particle swarm optimization and differential evolution algorithms for parameter estimation in nonlinear dynamic systems

    International Nuclear Information System (INIS)

    Banerjee, Amit; Abu-Mahfouz, Issam

    2014-01-01

    The use of evolutionary algorithms has been popular in recent years for solving the inverse problem of identifying system parameters given the chaotic response of a dynamical system. The inverse problem is reformulated as a minimization problem, and population-based optimizers such as evolutionary algorithms have been shown to be efficient solvers of the minimization problem. However, to the best of our knowledge, there has been no published work that evaluates the efficacy of the two most popular evolutionary techniques, particle swarm optimization and the differential evolution algorithm, on a wide range of parameter estimation problems. In this paper, the two methods along with their variants (for a total of seven algorithms) are applied to fifteen different parameter estimation problems of varying degrees of complexity. Estimation results are analyzed using nonparametric statistical methods to identify whether an algorithm is statistically superior to others over the class of problems analyzed. Results based on parameter estimation quality suggest that there are significant differences between the algorithms, with the newer, more sophisticated algorithms performing better than their canonical versions. More importantly, significant differences were also found among variants of the particle swarm optimizer and the best performing differential evolution algorithm

  18. Effect of training algorithms on neural networks aided pavement ...

    African Journals Online (AJOL)

    In particular, the use of Finite Element (FE) based pavement modeling results for training the NN-aided inverse analysis is considered to be accurate in realistically characterizing the non-linear, stress-sensitive response of underlying pavement layers in real time. Efficient NN learning algorithms have been developed and ...

  19. Inversion of seismic data: how to take the correlated nature of noise into account; Inversion de donnees sismiques: prise en compte de la nature correlee du bruit

    Energy Technology Data Exchange (ETDEWEB)

    Renard, F.

    2003-01-01

    The goal of seismic inversion is to recover an Earth model that best fits some observed data. To reach that goal, we have to minimize an objective function that measures the amplitude of the misfits according to a norm chosen in data space. In general, the norm used is the L2 norm. Unfortunately, such a norm is not adapted to data corrupted by correlated noise: in that case the noise is inverted as signal and the inversion results are unacceptable. The goal of this thesis is to obtain satisfactory solutions to the inverse problem in that situation. For this purpose, we study two inverse problems: reflection tomography and waveform inversion. In reflection tomography, we propose a new formulation of the continuum inverse problem which relies on an H1 norm in data space. This allows us to account for the correlated nature of the noise that corrupts the kinematic information. However, this norm does not give more satisfactory results than the ones obtained with the classical formalism. This is why, for the sake of simplicity, we recommend using the classical formalism. We then try to understand how to properly sample the kinematic information so as to obtain an accurate approximation of the continuum inverse problem. In waveform inversion, we propose to directly invert data corrupted by correlated noise. A first idea consists in rejecting the noise into the residuals. To that end, we can use a semi-norm to formulate the inverse problem. This technique gives very good results, except when the data are corrupted by random noise. We therefore propose a second method, which consists in retrieving, by solving an inverse problem, the signal and the noise whose sum best fits the data. This technique gives very satisfactory results, even if some random noise pollutes the data, and is moreover solved, thanks to an original algorithm, in a very efficient way. (author)

  20. Set-Membership Proportionate Affine Projection Algorithms

    Directory of Open Access Journals (Sweden)

    Stefan Werner

    2007-01-01

    Full Text Available Proportionate adaptive filters can improve the convergence speed for the identification of sparse systems as compared to their conventional counterparts. In this paper, the idea of proportionate adaptation is combined with the framework of set-membership filtering (SMF) in an attempt to derive novel computationally efficient algorithms. The resulting algorithms attain attractively faster convergence for both sparse and dispersive channels while decreasing the average computational complexity, thanks to the data-discerning feature of the SMF approach. In addition, we propose a rule that allows us to automatically adjust the number of past data pairs employed in the update. This leads to a set-membership proportionate affine projection algorithm (SM-PAPA) having a variable data-reuse factor, allowing a significant reduction in the overall complexity when compared with a fixed data-reuse factor. Reduced-complexity implementations of the proposed algorithms are also considered that reduce the dimensions of the matrix inversions involved in the update. Simulations show good results in terms of reduced number of updates, speed of convergence, and final mean-squared error.