A fast algorithm for sparse matrix computations related to inversion
International Nuclear Information System (INIS)
Li, S.; Wu, W.; Darve, E.
2013-01-01
We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green's functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices, up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that algorithms requiring back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. Compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors…
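The "certain entries of the inverse" computation described above can be illustrated with a brute-force sketch: factor the sparse matrix once, then solve against unit vectors to extract only the wanted entries (here, the diagonal). This is a minimal reference check, not the FIND algorithm itself; the matrix and sizes are illustrative.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

# Small sparse test matrix (1D Laplacian stencil).
n = 8
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csc")

# Factor once, then each solve against a unit vector e_j yields
# column j of inv(A); keep only the diagonal entry [j].
lu = spla.splu(A)
diag_inv = np.array([lu.solve(np.eye(n)[:, j])[j] for j in range(n)])

# Cross-check against the dense inverse (feasible only at toy sizes).
dense_diag = np.diag(np.linalg.inv(A.toarray()))
assert np.allclose(diag_inv, dense_diag)
```

Algorithms like FIND obtain the same selected entries without ever forming whole columns of the inverse, which is what makes them viable at device-simulation scale.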
Parallel Sparse Matrix - Vector Product
DEFF Research Database (Denmark)
Alexandersen, Joe; Lazarov, Boyan Stefanov; Dammann, Bernd
This technical report contains a case study of a sparse matrix-vector product routine, implemented for parallel execution on a compute cluster with both pure MPI and hybrid MPI-OpenMP solutions. C++ classes for sparse data types were developed and the report shows how these classes can be used...
Noniterative MAP reconstruction using sparse matrix representations.
Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J
2009-09-01
We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, compared to linear iterative reconstruction methods.
Sparse Matrix for ECG Identification with Two-Lead Features
Directory of Open Access Journals (Sweden)
Kuo-Kun Tseng
2015-01-01
Electrocardiograph (ECG) human identification has the potential to improve biometric security. However, improvements in ECG identification and feature extraction are required. Previous work has focused on single-lead ECG signals. Our work proposes a new algorithm for human identification by mapping two-lead ECG signals onto a two-dimensional matrix and then employing a sparse matrix method to process the matrix. This is the first application of sparse matrix techniques to ECG identification. Moreover, the results of our experiments demonstrate the benefits of our approach over existing methods.
Efficient implementations of block sparse matrix operations on shared memory vector machines
International Nuclear Information System (INIS)
Washio, T.; Maruyama, K.; Osoda, T.; Doi, S.; Shimizu, F.
2000-01-01
In this paper, we propose vectorization and shared memory-parallelization techniques for block-type random sparse matrix operations in finite element (FEM) applications. Here, a block corresponds to unknowns on one node in the FEM mesh and we assume that the block size is constant over the mesh. First, we discuss some basic vectorization ideas (the jagged diagonal (JAD) format and the segmented scan algorithm) for the sparse matrix-vector product. Then, we extend these ideas to the shared memory parallelization. After that, we show that the techniques can be applied not only to the sparse matrix-vector product but also to the sparse matrix-matrix product, the incomplete or complete sparse LU factorization and preconditioning. Finally, we report the performance evaluation results obtained on an NEC SX-4 shared memory vector machine for linear systems in some FEM applications. (author)
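The jagged diagonal (JAD) idea mentioned in the abstract can be sketched in plain NumPy/SciPy: rows are permuted by decreasing nonzero count, and the product is then computed one "jagged diagonal" at a time, each pass touching a contiguous prefix of the permuted rows. This is an illustrative reference implementation, not the vectorized SX-4 kernel.

```python
import numpy as np
import scipy.sparse as sp

def jad_spmv(A_csr, x):
    """Sparse matrix-vector product via the jagged diagonal (JAD) layout."""
    n = A_csr.shape[0]
    nnz_per_row = np.diff(A_csr.indptr)
    perm = np.argsort(-nnz_per_row, kind="stable")  # rows by decreasing nnz
    rows = [A_csr.indices[A_csr.indptr[i]:A_csr.indptr[i + 1]] for i in perm]
    vals = [A_csr.data[A_csr.indptr[i]:A_csr.indptr[i + 1]] for i in perm]
    y = np.zeros(n)
    for d in range(int(nnz_per_row.max())):
        # Rows owning a d-th nonzero form a prefix of the permutation,
        # so each pass is one long (vectorizable) gather-multiply-add.
        length = int(np.sum(nnz_per_row[perm] > d))
        cols = np.array([rows[i][d] for i in range(length)])
        v = np.array([vals[i][d] for i in range(length)])
        y[perm[:length]] += v * x[cols]
    return y

A = sp.random(6, 6, density=0.4, format="csr", random_state=0)
x = np.arange(6, dtype=float)
assert np.allclose(jad_spmv(A, x), A @ x)
```

The long uniform inner loops are exactly what made JAD attractive on vector machines, in contrast to CSR's short, variable-length row loops.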
Multi-threaded Sparse Matrix Sparse Matrix Multiplication for Many-Core and GPU Architectures.
Energy Technology Data Exchange (ETDEWEB)
Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2018-01-01
Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
Massive Asynchronous Parallelization of Sparse Matrix Factorizations
Energy Technology Data Exchange (ETDEWEB)
Chow, Edmond [Georgia Inst. of Technology, Atlanta, GA (United States)
2018-01-08
Solving sparse problems is at the core of many DOE computational science applications. We focus on the challenge of developing sparse algorithms that can fully exploit the parallelism in extreme-scale computing systems, in particular systems with massive numbers of cores per node. Our approach is to express a sparse matrix factorization as a large number of bilinear constraint equations, and then solve these equations via an asynchronous iterative method. The unknowns in these equations are the matrix entries of the desired factorization.
New sparse matrix solver in the KIKO3D 3-dimensional reactor dynamics code
International Nuclear Information System (INIS)
Panka, I.; Kereszturi, A.; Hegedus, C.
2005-01-01
The goal of this paper is to present a more effective method, Bi-CGSTAB, for accelerating the solution of the large sparse matrix equations in the KIKO3D code. This equation system is obtained by using the factorization of the improved quasi-static (IQS) method for the time-dependent nodal kinetic equations. In the old methodology, standard large sparse matrix techniques were considered, where Gauss-Seidel preconditioning and a GMRES-type solver were applied. The validation of KIKO3D using Bi-CGSTAB has been performed by solving a VVER-1000 kinetic benchmark problem. Additionally, the convergence characteristics were investigated at given macro time steps of control rod ejection transients. The results obtained with the old GMRES and the new Bi-CGSTAB methods are compared. (author)
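SciPy exposes both solver families compared in this abstract, so the GMRES-vs-Bi-CGSTAB setup can be sketched on a toy nonsymmetric system. The ILU preconditioner below is a convenient stand-in, not the paper's Gauss-Seidel preconditioning, and the matrix has nothing to do with reactor kinetics.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import bicgstab, gmres, spilu, LinearOperator

# Toy nonsymmetric sparse system (convection-diffusion-like stencil).
n = 100
A = sp.diags([-1.0, 2.5, -1.2], [-1, 0, 1], shape=(n, n), format="csc")
b = np.ones(n)

# ILU preconditioner wrapped as a LinearOperator for the Krylov solvers.
ilu = spilu(A)
M = LinearOperator((n, n), matvec=ilu.solve)

x_bicg, info_bicg = bicgstab(A, b, M=M)   # Bi-CGSTAB: short recurrences
x_gmres, info_gmres = gmres(A, b, M=M)    # GMRES: restarted long recurrences

assert info_bicg == 0 and info_gmres == 0   # both converged
assert np.linalg.norm(A @ x_bicg - b) < 1e-3
```

Bi-CGSTAB's appeal over restarted GMRES is its fixed per-iteration cost and memory footprint, which matters when the solve is repeated at every macro time step of a transient.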
Fast sparse matrix-vector multiplication by partitioning and reordering
Yzelman, A.N.
2011-01-01
The thesis introduces a cache-oblivious method for the sparse matrix-vector (SpMV) multiplication, which is an important computational kernel in many applications. The method works by permuting rows and columns of the input matrix so that the resulting reordered matrix induces cache-friendly
Thermal measurements and inverse techniques
Orlande, Helcio RB; Maillet, Denis; Cotta, Renato M
2011-01-01
With its uncommon presentation of instructional material regarding mathematical modeling, measurements, and solution of inverse problems, Thermal Measurements and Inverse Techniques is a one-stop reference for those dealing with various aspects of heat transfer. Progress in mathematical modeling of complex industrial and environmental systems has enabled numerical simulations of most physical phenomena. In addition, recent advances in thermal instrumentation and heat transfer modeling have improved experimental procedures and indirect measurements for heat transfer research of both natural phenomena
Ab initio nuclear structure - the large sparse matrix eigenvalue problem
Energy Technology Data Exchange (ETDEWEB)
Vary, James P; Maris, Pieter [Department of Physics, Iowa State University, Ames, IA, 50011 (United States); Ng, Esmond; Yang, Chao [Computational Research Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94720 (United States); Sosonkina, Masha, E-mail: jvary@iastate.ed [Scalable Computing Laboratory, Ames Laboratory, Iowa State University, Ames, IA, 50011 (United States)
2009-07-01
The structure and reactions of light nuclei represent fundamental and formidable challenges for microscopic theory based on realistic strong interaction potentials. Several ab initio methods have now emerged that provide nearly exact solutions for some nuclear properties. The ab initio no core shell model (NCSM) and the no core full configuration (NCFC) method frame this quantum many-particle problem as a large sparse matrix eigenvalue problem where one evaluates the Hamiltonian matrix in a basis space consisting of many-fermion Slater determinants and then solves for a set of the lowest eigenvalues and their associated eigenvectors. The resulting eigenvectors are employed to evaluate a set of experimental quantities to test the underlying potential. For fundamental problems of interest, the matrix dimension often exceeds 10{sup 10} and the number of nonzero matrix elements may saturate available storage on present-day leadership class facilities. We survey recent results and advances in solving this large sparse matrix eigenvalue problem. We also outline the challenges that lie ahead for achieving further breakthroughs in fundamental nuclear theory using these ab initio approaches.
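The "lowest eigenpairs of a large sparse Hamiltonian" formulation can be sketched with a Lanczos-based solver on a toy matrix. The tight-binding chain below is a stand-in for a real nuclear Hamiltonian, chosen only because its spectrum is known in closed form.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import eigsh

# Toy sparse symmetric "Hamiltonian": 1D tight-binding chain.
n = 200
H = sp.diags([-1.0, 0.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")

# Lowest few eigenpairs via the Lanczos algorithm ("SA" = smallest algebraic),
# without ever forming a dense matrix.
vals, vecs = eigsh(H, k=4, which="SA")

# Analytic spectrum of the chain: -2*cos(k*pi/(n+1)), k = 1..n.
exact = np.sort(-2.0 * np.cos(np.pi * np.arange(1, n + 1) / (n + 1)))[:4]
assert np.allclose(np.sort(vals), exact, atol=1e-6)
```

At NCSM/NCFC scale the same idea is carried out with distributed-memory Lanczos variants, since the matrix itself exceeds the memory of any single node.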
Point-source inversion techniques
Langston, Charles A.; Barker, Jeffrey S.; Pavlin, Gregory B.
1982-11-01
A variety of approaches for obtaining source parameters from waveform data using moment-tensor or dislocation point source models have been investigated and applied to long-period body and surface waves from several earthquakes. Generalized inversion techniques have been applied to data for long-period teleseismic body waves to obtain the orientation, time function and depth of the 1978 Thessaloniki, Greece, event, of the 1971 San Fernando event, and of several events associated with the 1963 induced seismicity sequence at Kariba, Africa. The generalized inversion technique and a systematic grid testing technique have also been used to place meaningful constraints on mechanisms determined from very sparse data sets; a single station with high-quality three-component waveform data is often sufficient to discriminate faulting type (e.g., strike-slip, etc.). Sparse data sets for several recent California earthquakes, for a small regional event associated with the Koyna, India, reservoir, and for several events at the Kariba reservoir have been investigated in this way. Although linearized inversion techniques using the moment-tensor model are often robust, even for sparse data sets, there are instances where the simplifying assumption of a single point source is inadequate to model the data successfully. Numerical experiments utilizing synthetic data and actual data for the 1971 San Fernando earthquake graphically demonstrate that severe problems may be encountered if source finiteness effects are ignored. These techniques are generally applicable to on-line processing of high-quality digital data, but source complexity and inadequacy of the assumed Green's functions are major problems which are yet to be fully addressed.
Sparse Matrix-Vector Multiplication on Multicore and Accelerators
Energy Technology Data Exchange (ETDEWEB)
Williams, Samuel W. [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Bell, Nathan [NVIDIA Research, Santa Clara, CA (United States); Choi, Jee Whan [Georgia Inst. of Technology, Atlanta, GA (United States); Garland, Michael [NVIDIA Research, Santa Clara, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Vuduc, Richard [Georgia Inst. of Technology, Atlanta, GA (United States)
2010-12-07
This chapter consolidates recent work on the development of high performance multicore and accelerator-based implementations of sparse matrix-vector multiplication (SpMV). As an object of study, SpMV is an interesting computation for two key reasons. First, it appears widely in applications in scientific and engineering computing, financial and economic modeling, and information retrieval, among others, and is therefore of great practical interest. Secondly, it is simple to describe but challenging to implement well, since its performance is limited by a variety of factors, including low computational intensity, potentially highly irregular memory access behavior, and a strong input dependence that may be known only at run time. Thus, we believe SpMV is both practically important and provides important insights for understanding the algorithmic and implementation principles necessary to make effective use of state-of-the-art systems.
Massively parallel sparse matrix function calculations with NTPoly
Dawson, William; Nakajima, Takahito
2018-04-01
We present NTPoly, a massively parallel library for computing the functions of sparse, symmetric matrices. The theory of matrix functions is a well-developed framework with a wide range of applications including differential equations, graph theory, and electronic structure calculations. One particularly important application area is diagonalization-free methods in quantum chemistry. When the input and output of the matrix function are sparse, methods based on polynomial expansions can be used to compute matrix functions in linear time. We present a library based on these methods that can compute a variety of matrix functions. Distributed memory parallelization is based on a communication-avoiding sparse matrix multiplication algorithm. OpenMP task parallelization is utilized to implement hybrid parallelization. We describe NTPoly's interface and show how it can be integrated with programs written in many different programming languages. We demonstrate the merits of NTPoly by performing large scale calculations on the K computer.
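The polynomial-expansion approach to matrix functions can be sketched with a truncated Taylor series for the matrix exponential, built only from sparse matrix-matrix products. This is illustrative only; NTPoly itself uses more sophisticated (e.g. Chebyshev and recursive) expansions, and the matrix here is an arbitrary small example.

```python
import numpy as np
import scipy.sparse as sp
from scipy.linalg import expm

def poly_expm(A, terms=20):
    """Matrix exponential via truncated Taylor series, using only the
    sparse matrix-matrix product as the computational primitive."""
    n = A.shape[0]
    result = sp.identity(n, format="csr")
    term = sp.identity(n, format="csr")
    for k in range(1, terms):
        term = (term @ A) / k          # accumulates A^k / k!
        result = result + term
    return result

# Small symmetric sparse matrix with small norm -> fast Taylor convergence.
A = sp.random(30, 30, density=0.1, random_state=1, format="csr")
A = 0.1 * (A + A.T)

approx = poly_expm(A).toarray()
dense_ref = expm(A.toarray())
assert np.allclose(approx, dense_ref, atol=1e-10)
```

When the input, output, and intermediates all stay sparse, each term costs a sparse product rather than a dense one, which is the source of the linear-scaling claim.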
Multi scales based sparse matrix spectral clustering image segmentation
Liu, Zhongmin; Chen, Zhicai; Li, Zhanming; Hu, Wenjin
2018-04-01
In image segmentation, spectral clustering algorithms must adopt an appropriate scaling parameter to calculate the similarity matrix between the pixels, which may have a great impact on the clustering result. Moreover, when the number of data instances is large, the computational complexity and memory use of the algorithm greatly increase. To solve these two problems, we propose a new spectral clustering image segmentation algorithm based on multiple scales and a sparse matrix. We first devise a new feature extraction method, then extract the features of the image at different scales, and finally use the feature information to construct a sparse similarity matrix, which improves the operation efficiency. Compared with the traditional spectral clustering algorithm, image segmentation experiments show that our algorithm has better accuracy and robustness.
Joint-2D-SL0 Algorithm for Joint Sparse Matrix Reconstruction
Directory of Open Access Journals (Sweden)
Dong Zhang
2017-01-01
Sparse matrix reconstruction has wide applications such as DOA estimation and STAP. However, its performance is usually restricted by the grid mismatch problem. In this paper, we revise the sparse matrix reconstruction model and propose a joint sparse matrix reconstruction model based on a first-order Taylor expansion, which can overcome the grid mismatch problem. We then put forward the Joint-2D-SL0 algorithm, which solves the joint sparse matrix reconstruction problem efficiently. Compared with the Kronecker compressive sensing method, our proposed method has higher computational efficiency and acceptable reconstruction accuracy. Finally, simulation results validate the superiority of the proposed method.
Sparse-matrix factorizations for fast symmetric Fourier transforms
International Nuclear Information System (INIS)
Sequel, J.
1987-01-01
This work proposes new fast algorithms for computing the discrete Fourier transform of certain families of symmetric sequences, such as those commonly found in problems of structure determination by X-ray crystallography and in numerical solutions of boundary-value problems in partial differential equations. In the algorithms presented, the redundancies in the input and output data, due to the presence of symmetries in the input data sequence, are eliminated. Using ring-theoretical methods, a matrix representation is obtained for the remaining calculations, which factors as the product of a complex block-diagonal matrix times an integral matrix. A basic two-step algorithm scheme arises from this factorization, with a first step consisting of pre-additions and a second step containing the calculations involved in computing with the blocks in the block-diagonal factor. These blocks are structured as block-Hankel matrices, and two sparse-matrix factoring formulas are developed in order to diminish their arithmetic complexity.
User's Manual for PCSMS (Parallel Complex Sparse Matrix Solver). Version 1.
Reddy, C. J.
2000-01-01
PCSMS (Parallel Complex Sparse Matrix Solver) is a computer code written to make use of existing real sparse direct solvers to solve complex, sparse matrix linear equations. PCSMS converts complex matrices into real matrices and uses real sparse direct matrix solvers to factor and solve the real matrices. The solution vector is then converted back to complex numbers. Though this utility is written for Silicon Graphics (SGI) real sparse matrix solution routines, it is general in nature and can be easily modified to work with any real sparse matrix solver. The User's Manual is written to acquaint the user with the installation and operation of the code. Driver routines are given to help users integrate PCSMS routines into their own codes.
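The complex-to-real conversion PCSMS performs can be sketched with the standard 2n x 2n real block embedding. This is the textbook construction under assumed illustrative data, not PCSMS's actual code.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

def solve_complex_via_real(A, b):
    """Solve the complex system A z = b with a real sparse solver, via
    the equivalent real block system
        [Re(A) -Im(A)] [Re(z)]   [Re(b)]
        [Im(A)  Re(A)] [Im(z)] = [Im(b)]
    """
    Ar, Ai = A.real, A.imag
    big = sp.bmat([[Ar, -Ai], [Ai, Ar]], format="csc")
    rhs = np.concatenate([b.real, b.imag])
    sol = splu(big).solve(rhs)       # real sparse direct factor-and-solve
    n = A.shape[0]
    return sol[:n] + 1j * sol[n:]    # reassemble the complex solution

rng = np.random.default_rng(0)
n = 20
A = sp.identity(n, format="csc") * (2 + 1j) \
    + 0.5j * sp.random(n, n, density=0.1, random_state=0, format="csc")
b = rng.standard_normal(n) + 1j * rng.standard_normal(n)
z = solve_complex_via_real(A, b)
assert np.allclose(A @ z, b)
```

The price of this embedding is a system of twice the dimension (and roughly four times the nonzeros), which is the usual trade-off against maintaining a native complex factorization.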
Pulse-Width-Modulation of Neutral-Point-Clamped Sparse Matrix Converter
DEFF Research Database (Denmark)
Loh, P.C.; Blaabjerg, Frede; Gao, F.
2007-01-01
input current and output voltage can be achieved with minimized rectification switching loss, rendering the sparse matrix converter as a competitive choice for interfacing the utility grid to (e.g.) defense facilities that require a different frequency supply. As an improvement, sparse matrix converter...... with improved waveform quality. Performances and practicalities of the designed schemes are verified in simulation and experimentally using an implemented laboratory prototype with some representative results captured and presented in the paper....
Nonlocal low-rank and sparse matrix decomposition for spectral CT reconstruction
Niu, Shanzhou; Yu, Gaohang; Ma, Jianhua; Wang, Jing
2018-02-01
Spectral computed tomography (CT) has been a promising technique in research and clinics because of its ability to produce improved energy resolution images with narrow energy bins. However, the narrow energy bin image is often affected by serious quantum noise because of the limited number of photons used in the corresponding energy bin. To address this problem, we present an iterative reconstruction method for spectral CT using nonlocal low-rank and sparse matrix decomposition (NLSMD), which exploits the self-similarity of patches that are collected in multi-energy images. Specifically, each set of patches can be decomposed into a low-rank component and a sparse component, and the low-rank component represents the stationary background over different energy bins, while the sparse component represents the rest of the different spectral features in individual energy bins. Subsequently, an effective alternating optimization algorithm was developed to minimize the associated objective function. To validate and evaluate the NLSMD method, qualitative and quantitative studies were conducted by using simulated and real spectral CT data. Experimental results show that the NLSMD method improves spectral CT images in terms of noise reduction, artifact suppression and resolution preservation.
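The low-rank-plus-sparse split at the heart of NLSMD can be sketched with a toy alternating scheme: a truncated SVD for the low-rank "background" and elementwise soft-thresholding for the sparse "features". This simplified stand-in is not the paper's optimization algorithm, and the synthetic data below is made up for illustration.

```python
import numpy as np

def lowrank_sparse_split(M, rank=1, lam=0.5, iters=50):
    """Toy alternating decomposition M ~ L + S with L low-rank and S sparse."""
    S = np.zeros_like(M)
    for _ in range(iters):
        # Low-rank update: best rank-`rank` approximation of M - S.
        U, s, Vt = np.linalg.svd(M - S, full_matrices=False)
        L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
        # Sparse update: soft-threshold the remaining residual.
        R = M - L
        S = np.sign(R) * np.maximum(np.abs(R) - lam, 0.0)
    return L, S

rng = np.random.default_rng(2)
base = np.outer(rng.standard_normal(40), rng.standard_normal(40))  # rank-1 "background"
spikes = np.zeros((40, 40))
spikes[rng.integers(0, 40, 30), rng.integers(0, 40, 30)] = 5.0     # sparse "features"
M = base + spikes

L, S = lowrank_sparse_split(M, rank=1, lam=0.5)
assert np.linalg.matrix_rank(L) <= 1                 # low-rank part
assert np.count_nonzero(S) < S.size // 2             # sparse part
assert np.max(np.abs(M - L - S)) <= 0.5 + 1e-9       # residual bounded by lam
```

In the CT setting the same split is applied patch-wise across energy bins, so the low-rank component models the shared anatomy while the sparse component keeps the bin-specific spectral detail.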
A framework for general sparse matrix-matrix multiplication on GPUs and heterogeneous processors
DEFF Research Database (Denmark)
Liu, Weifeng; Vinter, Brian
2015-01-01
General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method (AMG), breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM implementation has to handle...... extra irregularity from three aspects: (1) the number of nonzero entries in the resulting sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the resulting sparse matrix dominate the execution time, and (3) load balancing must account for sparse data...... memory space and efficiently utilizes the very limited on-chip scratchpad memory. Parallel insert operations of the nonzero entries are implemented through the GPU merge path algorithm that is experimentally found to be the fastest GPU merge approach. Load balancing builds on the number of necessary...
An Efficient GPU General Sparse Matrix-Matrix Multiplication for Irregular Data
DEFF Research Database (Denmark)
Liu, Weifeng; Vinter, Brian
2014-01-01
General sparse matrix-matrix multiplication (SpGEMM) is a fundamental building block for numerous applications such as algebraic multigrid method, breadth first search and shortest path problem. Compared to other sparse BLAS routines, an efficient parallel SpGEMM algorithm has to handle extra...... irregularity from three aspects: (1) the number of the nonzero entries in the result sparse matrix is unknown in advance, (2) very expensive parallel insert operations at random positions in the result sparse matrix dominate the execution time, and (3) load balancing must account for sparse data in both input....... Load balancing builds on the number of the necessary arithmetic operations on the nonzero entries and is guaranteed in all stages. Compared with the state-of-the-art GPU SpGEMM methods in the CUSPARSE library and the CUSP library and the latest CPU SpGEMM method in the Intel Math Kernel Library, our...
Multi-threaded Sparse Matrix-Matrix Multiplication for Many-Core and GPU Architectures.
Energy Technology Data Exchange (ETDEWEB)
Deveci, Mehmet [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Rajamanickam, Sivasankaran [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Trott, Christian Robert [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2017-12-01
Sparse matrix-matrix multiplication is a key kernel that has applications in several domains such as scientific computing and graph analysis. Several algorithms have been studied in the past for this foundational kernel. In this paper, we develop parallel algorithms for sparse matrix-matrix multiplication with a focus on performance portability across different high performance computing architectures. The performance of these algorithms depends on the data structures used in them. We compare different types of accumulators in these algorithms and demonstrate the performance difference between these data structures. Furthermore, we develop a meta-algorithm, kkSpGEMM, to choose the right algorithm and data structure based on the characteristics of the problem. We show performance comparisons on three architectures and demonstrate the need for the community to develop two-phase sparse matrix-matrix multiplication implementations for efficient reuse of the data structures involved.
Inverse Raman effect: applications and detection techniques
International Nuclear Information System (INIS)
Hughes, L.J. Jr.
1980-08-01
The processes underlying the inverse Raman effect are qualitatively described by comparing it to the more familiar phenomena of conventional and stimulated Raman scattering. An expression is derived for the inverse Raman absorption coefficient, and its relationship to the stimulated Raman gain is obtained. The power requirements of the two fields are examined qualitatively and quantitatively. The assumption that the inverse Raman absorption coefficient is constant over the interaction length is examined. Advantages of the technique are discussed and a brief survey of reported studies is presented.
Bustamam, A.; Ulul, E. D.; Hura, H. F. A.; Siswantining, T.
2017-07-01
Hierarchical clustering is one of the effective methods for creating a phylogenetic tree based on the distance matrix between DNA (deoxyribonucleic acid) sequences. One of the well-known methods to calculate the distance matrix is the k-mer method. Generally, k-mer is more efficient than some other distance matrix calculation techniques. The k-mer method starts with creating the k-mer sparse matrix, followed by creating the k-mer singular value vectors. The last step is computing the distance amongst the vectors. In this paper, we analyze the sequences of MERS-CoV (Middle East Respiratory Syndrome - Coronavirus) DNA by implementing hierarchical clustering using the k-mer sparse matrix in order to perform the phylogenetic analysis. Our results show that the ancestor of our MERS-CoV came from Egypt. Moreover, we found that a MERS-CoV infection that occurs in one country may not necessarily come from the same country of origin. This suggests that the process of MERS-CoV mutation might not only be influenced by geographical factors.
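The k-mer step described above can be sketched as follows (toy sequences; this is a generic illustration, not the authors' MERS-CoV pipeline):

```python
def kmer_counts(seq, k=3):
    """Sparse k-mer frequency vector of a DNA sequence, as a dict."""
    counts = {}
    for i in range(len(seq) - k + 1):
        kmer = seq[i:i + k]
        counts[kmer] = counts.get(kmer, 0) + 1
    return counts

def kmer_distance(u, v):
    """Euclidean distance between two sparse k-mer vectors."""
    keys = set(u) | set(v)
    return sum((u.get(x, 0) - v.get(x, 0)) ** 2 for x in keys) ** 0.5

def distance_matrix(seqs, k=3):
    """Pairwise distance matrix, the input to a hierarchical clustering step."""
    vecs = [kmer_counts(s, k) for s in seqs]
    return [[kmer_distance(a, b) for b in vecs] for a in vecs]
```

The resulting matrix would then feed a standard agglomerative clustering routine to produce the phylogenetic tree.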
Performance modeling and optimization of sparse matrix-vector multiplication on NVIDIA CUDA platform
Xu, S.; Xue, W.; Lin, H.X.
2011-01-01
In this article, we discuss the performance modeling and optimization of Sparse Matrix-Vector Multiplication (SpMV) on NVIDIA GPUs using CUDA. SpMV has a very low computation-data ratio and its performance is mainly bound by the memory bandwidth. We propose optimization of SpMV based on ELLPACK from
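ELLPACK pads every row to the width of the longest row so that values and column indices become dense rectangular arrays, which favors coalesced GPU access. A plain-Python sketch of the format and its SpMV (illustrative only; the paper's CUDA kernels are more involved):

```python
def ell_from_rows(rows, pad_col=0):
    """Build ELLPACK arrays (values, column indices) from per-row
    lists of (col, val) pairs, padding short rows with zeros."""
    width = max(len(r) for r in rows)
    vals = [[0.0] * width for _ in rows]
    cols = [[pad_col] * width for _ in rows]
    for i, r in enumerate(rows):
        for j, (c, v) in enumerate(r):
            cols[i][j] = c
            vals[i][j] = v
    return vals, cols

def ell_spmv(vals, cols, x):
    """y = A x for A in ELLPACK form; padded entries contribute zero."""
    return [sum(v * x[c] for v, c in zip(vrow, crow))
            for vrow, crow in zip(vals, cols)]
```

The padding cost is the format's weakness when row lengths vary widely, which is what variants such as ELLPACK-R address.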
Energy Technology Data Exchange (ETDEWEB)
1978-01-01
The program and abstracts of the SIAM 1978 fall meeting in Knoxville, Tennessee, are given, along with those of the associated symposium on sparse matrix computations. The papers dealt with both pure mathematics and mathematics applied to many different subject areas. (RWR)
Analog fault diagnosis by inverse problem technique
Ahmed, Rania F.
2011-12-01
A novel algorithm for detecting soft faults in linear analog circuits based on the inverse problem concept is proposed. The proposed approach utilizes optimization techniques with the aid of sensitivity analysis. The main contribution of this work is to apply the inverse problem technique to estimate the actual parameter values of the tested circuit and so to detect and diagnose a single fault in analog circuits. The algorithm is validated by applying it to a Sallen-Key second-order band-pass filter; the results show that the detection efficiency was 100% and that the maximum error in estimating the parameter values was 0.7%. This technique can be applied to any other linear circuit and can also be extended to non-linear circuits. © 2011 IEEE.
Design Patterns for Sparse-Matrix Computations on Hybrid CPU/GPU Platforms
Directory of Open Access Journals (Sweden)
Valeria Cardellini
2014-01-01
Full Text Available We apply object-oriented software design patterns to develop code for scientific software involving sparse matrices. Design patterns arise when multiple independent developments produce similar designs which converge onto a generic solution. We demonstrate how to use design patterns to implement an interface for sparse matrix computations on NVIDIA GPUs starting from PSBLAS, an existing sparse matrix library, and from existing sets of GPU kernels for sparse matrices. We also compare the throughput of the PSBLAS sparse matrix–vector multiplication on two platforms exploiting the GPU with that obtained by a CPU-only PSBLAS implementation. Our experiments exhibit encouraging results regarding the comparison between CPU and GPU executions in double precision, obtaining a speedup of up to 35.35 on NVIDIA GTX 285 with respect to AMD Athlon 7750, and up to 10.15 on NVIDIA Tesla C2050 with respect to Intel Xeon X5650.
Porting of the DBCSR library for Sparse Matrix-Matrix Multiplications to Intel Xeon Phi systems
Bethune, Iain; Gloess, Andeas; Hutter, Juerg; Lazzaro, Alfio; Pabst, Hans; Reid, Fiona
2017-01-01
Multiplication of two sparse matrices is a key operation in the simulation of the electronic structure of systems containing thousands of atoms and electrons. The highly optimized sparse linear algebra library DBCSR (Distributed Block Compressed Sparse Row) has been specifically designed to efficiently perform such sparse matrix-matrix multiplications. This library is the basic building block for linear scaling electronic structure theory and low scaling correlated methods in CP2K. It is para...
Directory of Open Access Journals (Sweden)
Anil S Thakur
2007-10-01
Full Text Available Crystallization is a major bottleneck in the process of macromolecular structure determination by X-ray crystallography. Successful crystallization requires the formation of nuclei and their subsequent growth to crystals of suitable size. Crystal growth generally occurs spontaneously in a supersaturated solution as a result of homogeneous nucleation. However, in a typical sparse matrix screening experiment, precipitant and protein concentration are not sampled extensively, and supersaturation conditions suitable for nucleation are often missed. We tested the effect of nine potential heterogeneous nucleating agents on crystallization of ten test proteins in a sparse matrix screen. Several nucleating agents induced crystal formation under conditions where no crystallization occurred in the absence of the nucleating agent. Four nucleating agents: dried seaweed, horse hair, cellulose and hydroxyapatite, had a considerable overall positive effect on crystallization success. This effect was further enhanced when these nucleating agents were used in combination with each other. Our results suggest that the addition of heterogeneous nucleating agents increases the chances of crystal formation when using sparse matrix screens.
Interferogram analysis using the Abel inversion technique
International Nuclear Information System (INIS)
Yusof Munajat; Mohamad Kadim Suaidi
2000-01-01
High-speed and high-resolution optical detection systems were used to capture the image of acoustic wave propagation. The frozen image, in the form of an interferogram, was analysed to calculate the transient pressure profile of the acoustic waves. The interferogram analysis was based on the fringe shift and the application of the Abel inversion technique. An easier approach was taken by means of the MathCAD program as a programming tool, yet one powerful enough for such calculation, plotting and transfer of files. (Author)
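One standard discretization of the Abel inversion is "onion peeling", which models the axisymmetric field as concentric uniform shells and inverts the resulting triangular system by back-substitution. This is a generic sketch of the technique, not necessarily the exact scheme used by the authors:

```python
import math

def onion_peel(P, dr):
    """Recover a radial profile f(r_j), r_j = j*dr, from line-integrated
    projections P(y_i), y_i = i*dr, by onion peeling: each ray i crosses
    shells j >= i, and the chord lengths form an upper-triangular system
    solved by back-substitution."""
    n = len(P)

    def w(i, j):
        # chord length of ray i through shell j (shell spans [j*dr, (j+1)*dr])
        a = math.sqrt((j + 1) ** 2 - i ** 2) * dr
        b = math.sqrt(j ** 2 - i ** 2) * dr
        return 2.0 * (a - b)

    f = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = sum(w(i, j) * f[j] for j in range(i + 1, n))
        f[i] = (P[i] - s) / w(i, i)
    return f
```

For a uniform disk the projections are P(y) = 2*sqrt(R^2 - y^2), and the peeling recovers the constant profile.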
Kreutzer, Moritz; Hager, Georg; Wellein, Gerhard; Fehske, Holger; Basermann, Achim; Bishop, Alan R.
2011-01-01
Sparse matrix-vector multiplication (spMVM) is the dominant operation in many sparse solvers. We investigate performance properties of spMVM with matrices of various sparsity patterns on the nVidia “Fermi” class of GPGPUs. A new “padded jagged diagonals storage” (pJDS) format is proposed which may substantially reduce the memory overhead intrinsic to the widespread ELLPACK-R scheme while making no assumptions about the matrix structure. In our test scenarios the pJDS format cuts the ...
Directory of Open Access Journals (Sweden)
Wang Wen-qin
2015-02-01
Full Text Available The waveforms used in Multiple-Input Multiple-Output (MIMO) Synthetic Aperture Radar (SAR) should have a large time-bandwidth product and good ambiguity function performance. A scheme to design multiple orthogonal MIMO SAR Orthogonal Frequency Division Multiplexing (OFDM) chirp waveforms by combinational sparse matrix and correlation optimization is proposed. First, the problem of MIMO SAR waveform design amounts to the joint design of hopping frequencies and amplitudes. Then an iterative exhaustive search algorithm is adopted to optimally design the code matrix, with the constraints of minimizing the block correlation coefficient of the sparse matrix and the sum of cross-correlation peaks. The amplitude matrix is adaptively designed by minimizing the cross-correlation peaks with a genetic algorithm. Additionally, the impacts of waveform number, hopping frequency interval and selectable frequency indices are also analyzed. The simulation results verify that the proposed scheme can design multiple orthogonal large time-bandwidth product OFDM chirp waveforms with low cross-correlation peaks and sidelobes, and that it improves ambiguity performance.
Abdelfattah, Ahmad
2016-05-23
Simulations of many multi-component PDE-based applications, such as petroleum reservoirs or reacting flows, are dominated by the solution, on each time step and within each Newton step, of large sparse linear systems. The standard solver is a preconditioned Krylov method. Along with application of the preconditioner, memory-bound Sparse Matrix-Vector Multiplication (SpMV) is the most time-consuming operation in such solvers. Multi-species models produce Jacobians with a dense block structure, where the block size can be as large as a few dozen. Failing to exploit this dense block structure vastly underutilizes hardware capable of delivering high performance on dense BLAS operations. This paper presents a GPU-accelerated SpMV kernel for block-sparse matrices. Dense matrix-vector multiplications within the sparse-block structure leverage optimization techniques from the KBLAS library, a high performance library for dense BLAS kernels. The design ideas of KBLAS can be applied to block-sparse matrices. Furthermore, a technique is proposed to balance the workload among thread blocks when there are large variations in the lengths of nonzero rows. Multi-GPU performance is highlighted. The proposed SpMV kernel outperforms existing state-of-the-art implementations using matrices with real structures from different applications. Copyright © 2016 John Wiley & Sons, Ltd.
Library designs for generic C++ sparse matrix computations of iterative methods
Energy Technology Data Exchange (ETDEWEB)
Pozo, R.
1996-12-31
A new library design is presented for generic sparse matrix C++ objects for use in iterative algorithms and preconditioners. This design extends previous work on C++ numerical libraries by providing a framework in which efficient algorithms can be written *independent* of the matrix layout or format. That is, rather than supporting different codes for each (element type) / (matrix format) combination, only one version of the algorithm need be maintained. This not only reduces the effort for library developers, but also simplifies the calling interface seen by library users. Furthermore, the underlying matrix library can be naturally extended to support user-defined objects, such as hierarchical block-structured matrices, or application-specific preconditioners. Utilizing optimized kernels whenever possible, the resulting performance of such a framework can be shown to be competitive with optimized Fortran programs.
Trimming and procrastination as inversion techniques
Backus, George E.
1996-12-01
By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.
Stoykov, S.; Atanassov, E.; Margenov, S.
2016-10-01
Many scientific applications involve sparse or dense matrix operations, such as solving linear systems, matrix-matrix products, eigensolvers, etc. In structural nonlinear dynamics, the computation of periodic responses and the determination of the stability of the solution are of primary interest. The shooting method is widely used for obtaining periodic responses of nonlinear systems. The method involves operations with both sparse and dense matrices. One of the computationally expensive operations in the method is the multiplication of sparse by dense matrices. In the current work, a new algorithm for sparse matrix by dense matrix products is presented. The algorithm takes into account the structure of the sparse matrix, which is obtained by space discretization of the nonlinear Mindlin plate equation of motion by the finite element method. The algorithm is developed to use the vector engine of Intel Xeon Phi coprocessors. It is compared with the standard sparse matrix by dense matrix algorithm and the one provided by Intel MKL, and it is shown that by considering the properties of the sparse matrix, better algorithms can be developed.
Graph Transformation and Designing Parallel Sparse Matrix Algorithms beyond Data Dependence Analysis
Directory of Open Access Journals (Sweden)
H.X. Lin
2004-01-01
Full Text Available Algorithms are often parallelized based on data dependence analysis, either manually or by means of parallel compilers. Some vector/matrix computations, such as matrix-vector products with simple data dependence structures (data parallelism), can be easily parallelized. For problems with more complicated data dependence structures, parallelization is less straightforward. The data dependence graph is a powerful means for designing and analyzing parallel algorithms. However, for sparse matrix computations, parallelization based solely on exploiting the existing parallelism in an algorithm does not always give satisfactory results. For example, the conventional Gaussian elimination algorithm for the solution of a tri-diagonal system is inherently sequential, so algorithms designed specifically for parallel computation are needed. After briefly reviewing different parallelization approaches, a powerful graph formalism for designing parallel algorithms is introduced. This formalism is discussed using a tri-diagonal system as an example. Its application to general matrix computations is also discussed. Its power in designing parallel algorithms beyond the ability of data dependence analysis is shown by means of a new algorithm called ACER (Alternating Cyclic Elimination and Reduction).
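The tri-diagonal example can be made concrete. Cyclic reduction eliminates the even-indexed unknowns level by level, and all eliminations within a level are independent, which is the parallelism that elimination-and-reduction designs such as ACER exploit. A serial Python sketch of the recursion (an illustration of classic cyclic reduction, not the ACER algorithm itself):

```python
def cyclic_reduction(a, b, c, d):
    """Solve the tridiagonal system a[i]*x[i-1] + b[i]*x[i] + c[i]*x[i+1] = d[i],
    with the convention a[0] == 0 and c[-1] == 0, by cyclic reduction.
    Each level eliminates the even-indexed unknowns; all rows within a
    level are independent, unlike the sequential Thomas sweep."""
    n = len(b)
    if n == 1:
        return [d[0] / b[0]]
    # Build the reduced tridiagonal system over the odd-indexed unknowns.
    ra, rb, rc, rd = [], [], [], []
    for i in range(1, n, 2):
        al = a[i] / b[i - 1]
        be = c[i] / b[i + 1] if i + 1 < n else 0.0
        ra.append(-al * a[i - 1])
        rb.append(b[i] - al * c[i - 1] - (be * a[i + 1] if i + 1 < n else 0.0))
        rc.append(-be * c[i + 1] if i + 1 < n else 0.0)
        rd.append(d[i] - al * d[i - 1] - (be * d[i + 1] if i + 1 < n else 0.0))
    ra[0], rc[-1] = 0.0, 0.0  # boundary convention for the recursion
    odd = cyclic_reduction(ra, rb, rc, rd)
    # Scatter the odd solutions, then back-substitute the even unknowns.
    x = [0.0] * n
    for j, i in enumerate(range(1, n, 2)):
        x[i] = odd[j]
    for i in range(0, n, 2):
        left = a[i] * x[i - 1] if i > 0 else 0.0
        right = c[i] * x[i + 1] if i + 1 < n else 0.0
        x[i] = (d[i] - left - right) / b[i]
    return x
```

Each level halves the problem size, giving O(log n) parallel steps at the cost of more total arithmetic than the sequential algorithm.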
Energy Technology Data Exchange (ETDEWEB)
Aktulga, Hasan Metin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Buluc, Aydin [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yang, Chao [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States)
2014-08-14
Obtaining highly accurate predictions of the properties of light atomic nuclei using the configuration interaction (CI) approach requires computing a few extremal eigenpairs of the many-body nuclear Hamiltonian matrix. In the Many-body Fermion Dynamics for nuclei (MFDn) code, a block eigensolver is used for this purpose. Due to the large size of the sparse matrices involved, a significant fraction of the time spent on the eigenvalue computations is associated with the multiplication of a sparse matrix (and the transpose of that matrix) with multiple vectors (SpMM and SpMM-T). Existing implementations of SpMM and SpMM-T significantly underperform expectations. Thus, in this paper, we present and analyze optimized implementations of SpMM and SpMM-T. We base our implementation on the compressed sparse blocks (CSB) matrix format and target systems with multi-core architectures. We develop a performance model that allows us to understand and estimate the performance characteristics of our SpMM kernel implementations, and demonstrate the efficiency of our implementation on a series of real-world matrices extracted from MFDn. In particular, we obtain a 3-4x speedup on the requisite operations over good implementations based on the commonly used compressed sparse row (CSR) matrix format. The improvements in the SpMM kernel suggest we may attain roughly a 40% speedup in the overall execution time of the block eigensolver used in MFDn.
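The core SpMM operation, multiplying one sparse matrix by a block of dense vectors, can be sketched in a few lines; CSR is used here for brevity rather than the paper's CSB format:

```python
def csr_spmm(indptr, indices, data, X):
    """Y = A @ X for A in CSR form and a dense block X of m vectors
    (X is n x m, stored row-wise).  Streaming A once across all m
    vectors is what makes SpMM more bandwidth-efficient than m
    separate SpMV calls."""
    m = len(X[0])
    Y = []
    for i in range(len(indptr) - 1):
        row = [0.0] * m
        for p in range(indptr[i], indptr[i + 1]):
            j, v = indices[p], data[p]
            for t in range(m):
                row[t] += v * X[j][t]
        Y.append(row)
    return Y
```

Each nonzero of A is loaded once and reused m times, which is the reuse the block eigensolver depends on.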
A Novel CSR-Based Sparse Matrix-Vector Multiplication on GPUs
Directory of Open Access Journals (Sweden)
Guixia He
2016-01-01
Full Text Available Sparse matrix-vector multiplication (SpMV) is an important operation in scientific computations. Compressed sparse row (CSR) is the most frequently used format to store sparse matrices. However, CSR-based SpMVs on graphics processing units (GPUs), for example, CSR-scalar and CSR-vector, usually have poor performance due to irregular memory access patterns. This motivates us to propose a perfect CSR-based SpMV on the GPU that is called PCSR. PCSR involves two kernels and accesses CSR arrays in a fully coalesced manner by introducing a middle array, which greatly alleviates the deficiencies of CSR-scalar (rare coalescing) and CSR-vector (partial coalescing). Test results on a single C2050 GPU show that PCSR fully outperforms CSR-scalar, CSR-vector, and the CSRMV and HYBMV routines in the vendor-tuned CUSPARSE library, and is comparable with a most recently proposed CSR-based algorithm, CSR-Adaptive. Furthermore, we extend PCSR on a single GPU to multiple GPUs. Experimental results on four C2050 GPUs show that, whether or not the communication between GPUs is considered, PCSR on multiple GPUs achieves good performance and has high parallel efficiency.
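For reference, CSR stores each row's nonzeros in the half-open slice indptr[i]:indptr[i+1]; the baseline "CSR-scalar" traversal that PCSR improves upon looks like this in serial Python (the GPU kernel assigns one thread per row to the same loop):

```python
def csr_spmv(indptr, indices, data, x):
    """y = A x for A in CSR form.  In CSR-scalar, each GPU thread runs
    one iteration of the outer loop; neighboring threads then read
    non-contiguous slices of data/indices, which is the uncoalesced
    access pattern the PCSR middle array is designed to avoid."""
    y = []
    for i in range(len(indptr) - 1):
        s = 0.0
        for p in range(indptr[i], indptr[i + 1]):
            s += data[p] * x[indices[p]]
        y.append(s)
    return y
```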
Optimization of sparse matrix-vector multiplication on emerging multicore platforms
Energy Technology Data Exchange (ETDEWEB)
Williams, Samuel [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Oliker, Leonid [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Vuduc, Richard [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Shalf, John [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Yelick, Katherine [Lawrence Berkeley National Lab. (LBNL), Berkeley, CA (United States); Univ. of California, Berkeley, CA (United States); Demmel, James [Univ. of California, Berkeley, CA (United States)
2007-01-01
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD dual-core and Intel quad-core designs, the heterogeneous STI Cell, as well as the first scientific study of the highly multithreaded Sun Niagara2. We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural tradeoffs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
Optimization of Sparse Matrix-Vector Multiplication on Emerging Multicore Platforms
Energy Technology Data Exchange (ETDEWEB)
Williams, Samuel; Oliker, Leonid; Vuduc, Richard; Shalf, John; Yelick, Katherine; Demmel, James
2008-10-16
We are witnessing a dramatic change in computer architecture due to the multicore paradigm shift, as every electronic device from cell phones to supercomputers confronts parallelism of unprecedented scale. To fully unleash the potential of these systems, the HPC community must develop multicore-specific optimization methodologies for important scientific computations. In this work, we examine sparse matrix-vector multiply (SpMV) - one of the most heavily used kernels in scientific computing - across a broad spectrum of multicore designs. Our experimental platform includes the homogeneous AMD quad-core, AMD dual-core, and Intel quad-core designs, the heterogeneous STI Cell, as well as one of the first scientific studies of the highly multithreaded Sun Victoria Falls (a Niagara2 SMP). We present several optimization strategies especially effective for the multicore environment, and demonstrate significant performance improvements compared to existing state-of-the-art serial and parallel SpMV implementations. Additionally, we present key insights into the architectural trade-offs of leading multicore design strategies, in the context of demanding memory-bound numerical algorithms.
A conditioning technique for matrix inversion for Wilson fermions
International Nuclear Information System (INIS)
DeGrand, T.A.
1988-01-01
I report a simple technique for conditioning conjugate gradient or conjugate residue matrix inversion as applied to the lattice gauge theory problem of computing the propagator of Wilson fermions. One form of the technique converges about a factor of three faster than an unconditioned algorithm while each iteration runs at essentially the same speed. I illustrate the method as it is applied to a conjugate residue algorithm. (orig.)
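In outline, a conditioned (preconditioned) Krylov iteration has the following shape. This is a generic Jacobi-preconditioned conjugate gradient sketch for a small symmetric positive-definite system; the paper's conditioning is specific to the Wilson fermion matrix, and this is not that transformation:

```python
def pcg(A, b, M_inv, tol=1e-10, maxit=200):
    """Preconditioned conjugate gradient for SPD A (dense list-of-lists).
    M_inv holds the inverse of a diagonal (Jacobi) preconditioner; a good
    preconditioner clusters the spectrum and cuts the iteration count."""
    n = len(b)
    mv = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = [0.0] * n
    r = b[:]
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = mv(p)
        alpha = rz / sum(pi * api for pi, api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * api for ri, api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x
```

The per-iteration cost is one matrix-vector product plus the (cheap) preconditioner application, which is why a conditioning transformation can speed convergence without slowing each iteration.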
Directory of Open Access Journals (Sweden)
Bérenger Bramas
2018-04-01
Full Text Available The sparse matrix-vector product (SpMV is a fundamental operation in many scientific applications from various fields. The High Performance Computing (HPC community has therefore continuously invested a lot of effort to provide an efficient SpMV kernel on modern CPU architectures. Although it has been shown that block-based kernels help to achieve high performance, they are difficult to use in practice because of the zero padding they require. In the current paper, we propose new kernels using the AVX-512 instruction set, which makes it possible to use a blocking scheme without any zero padding in the matrix memory storage. We describe mask-based sparse matrix formats and their corresponding SpMV kernels highly optimized in assembly language. Considering that the optimal blocking size depends on the matrix, we also provide a method to predict the best kernel to be used utilizing a simple interpolation of results from previous executions. We compare the performance of our approach to that of the Intel MKL CSR kernel and the CSR5 open-source package on a set of standard benchmark matrices. We show that we can achieve significant improvements in many cases, both for sequential and for parallel executions. Finally, we provide the corresponding code in an open source library, called SPC5.
One-dimensional nonlinear inverse heat conduction technique
International Nuclear Information System (INIS)
Hills, R.G.; Hensel, E.C. Jr.
1986-01-01
The one-dimensional nonlinear problem of heat conduction is considered. A noniterative space-marching finite-difference algorithm is developed to estimate the surface temperature and heat flux from temperature measurements at subsurface locations. The trade-off between resolution and variance of the estimates of the surface conditions is discussed quantitatively. The inverse algorithm is stabilized through the use of digital filters applied recursively. The effect of the filters on the resolution and variance of the surface estimates is quantified. Results are presented which indicate that the technique is capable of handling noisy measurement data
SQUIDs and inverse problem techniques in nondestructive evaluation of metals
Bruno, A C
2001-01-01
Superconducting Quantum Interference Devices coupled to gradiometers were used to detect flaws in metals. We detected flaws in aluminium samples carrying current, measuring fields at lift-off distances up to one order of magnitude larger than the size of the flaw. Configured as a susceptometer, we detected surface-breaking flaws in steel samples, measuring the distortion of the applied magnetic field. We also used spatial filtering techniques to enhance the visualization of the magnetic field due to the flaws. In order to assess their severity, we used the generalized inverse method and singular value decomposition to reconstruct small spherical inclusions in steel. In addition, finite elements and optimization techniques were used to image complex-shaped flaws.
Integrated intensities in inverse time-of-flight technique
International Nuclear Information System (INIS)
Dorner, Bruno
2006-01-01
In traditional data analysis a model function, convoluted with the resolution, is fitted to the measured data. In case the integrated intensities of signals are of main interest, one can use an approach which requires neither a model function for the signal nor detailed knowledge of the resolution. For the inverse TOF technique, this approach consists of two steps: (i) Normalisation of the measured spectrum with the help of a monitor, with 1/k sensitivity, which is positioned in front of the sample. This means at the same time a conversion of the data from time of flight to energy transfer. (ii) A Jacobian [I. Waller, P.O. Froeman, Ark. Phys. 4 (1952) 183] transforms data collected at constant scattering angle into data as if measured at constant momentum transfer Q. This Jacobian works correctly for signals which have a constant width at different Q along the trajectory of constant scattering angle. The approach has been tested on spectra of Compton scattering with neutrons of epithermal energies, obtained on the inverse TOF spectrometer VESUVIO/ISIS. In this case the width of the signal increases proportionally to Q and, in consequence, the application of the Jacobian leads to slightly too high integrated intensities. The resulting integrated intensities agree very well with results derived in the traditional way. Thus this completely different approach confirms the observation that signals from recoil by H-atoms at large momentum transfers are weaker than expected.
Effects of Ordering Strategies and Programming Paradigms on Sparse Matrix Computations
Oliker, Leonid; Li, Xiaoye; Husbands, Parry; Biswas, Rupak; Biegel, Bryan (Technical Monitor)
2002-01-01
The Conjugate Gradient (CG) algorithm is perhaps the best-known iterative technique to solve sparse linear systems that are symmetric and positive definite. For systems that are ill-conditioned, it is often necessary to use a preconditioning technique. In this paper, we investigate the effects of various ordering and partitioning strategies on the performance of parallel CG and ILU(0)-preconditioned CG (PCG) using different programming paradigms and architectures. Results show that, for this class of applications: ordering significantly improves overall performance on both distributed and distributed shared-memory systems; cache reuse may be more important than reducing communication; it is possible to achieve message-passing performance using shared-memory constructs through careful data ordering and distribution; and a hybrid MPI+OpenMP paradigm increases programming complexity with little performance gain. An implementation of CG on the Cray MTA does not require special ordering or partitioning to obtain high efficiency and scalability, giving it a distinct advantage for adaptive applications; however, it shows limited scalability for PCG due to a lack of thread-level parallelism.
Sparse matrix test collections
Energy Technology Data Exchange (ETDEWEB)
Duff, I.
1996-12-31
This workshop will discuss plans for coordinating and developing sets of test matrices for the comparison and testing of sparse linear algebra software. We will talk of plans for the next release (Release 2) of the Harwell-Boeing Collection and recent work on improving the accessibility of this Collection and others through the World Wide Web. There will only be three talks of about 15 to 20 minutes followed by a discussion from the floor.
Reconstruction of sound speed profile through natural generalized inverse technique
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Somayajulu, Y.K.; Murty, C.S.
An acoustic model has been developed for reconstruction of the vertical sound speed profile in a near-stable or stratified ocean. The generalized inverse method is utilised in the model development. Numerical experiments have been carried out to account...
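For a full-column-rank forward operator G, the natural generalized inverse reduces to the least-squares estimate m = (G^T G)^{-1} G^T d. A self-contained sketch with a toy G and d (the paper's operator relates sound-speed perturbations to acoustic travel times; the names and sizes here are illustrative):

```python
def generalized_inverse_solve(G, d):
    """Least-squares model estimate m = (G^T G)^{-1} G^T d, i.e. the
    natural generalized inverse when G^T G is nonsingular.  G is a
    small dense rows x cols matrix, d the data vector."""
    rows, cols = len(G), len(G[0])
    # normal equations A m = b with A = G^T G, b = G^T d
    A = [[sum(G[k][i] * G[k][j] for k in range(rows)) for j in range(cols)]
         for i in range(cols)]
    b = [sum(G[k][i] * d[k] for k in range(rows)) for i in range(cols)]
    # Gaussian elimination with partial pivoting
    for i in range(cols):
        piv = max(range(i, cols), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, cols):
            f = A[r][i] / A[i][i]
            for cc in range(i, cols):
                A[r][cc] -= f * A[i][cc]
            b[r] -= f * b[i]
    m = [0.0] * cols
    for i in range(cols - 1, -1, -1):
        m[i] = (b[i] - sum(A[i][j] * m[j] for j in range(i + 1, cols))) / A[i][i]
    return m
```

In practice an SVD-based pseudo-inverse is preferred for rank-deficient or ill-conditioned G; the normal-equations form above is only the simplest sketch.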
Brown, Malcolm
2009-01-01
Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…
Relevance vector machine technique for the inverse scattering problem
International Nuclear Information System (INIS)
Wang Fang-Fang; Zhang Ye-Rong
2012-01-01
A novel method based on the relevance vector machine (RVM) for the inverse scattering problem is presented in this paper. The nonlinearity and the ill-posedness inherent in this problem are simultaneously considered. The nonlinearity is embodied in the relation between the scattered field and the target property, which can be obtained through the RVM training process. Besides, rather than utilizing regularization, the ill-posed nature of the inversion is naturally accounted for because the RVM can produce a probabilistic output. Simulation results reveal that the proposed RVM-based approach can provide comparable performance in terms of accuracy, convergence, robustness, and generalization, and improved sparsity, in comparison with the support vector machine (SVM) based approach. (general)
Development of high-energy resolution inverse photoemission technique
International Nuclear Information System (INIS)
Asakura, D.; Fujii, Y.; Mizokawa, T.
2005-01-01
We developed a new inverse photoemission spectroscopy (IPES) machine based on a novel idea for improving the energy resolution: off-plane Eagle mounting of the optical system in combination with dispersion matching between the incoming electron and the outgoing photon. In order to achieve dispersion matching, we employed a parallel-plate electron source and investigated whether the electron beam behaves as expected. In this paper, we present the principle and design of the new IPES method and report the current status of the high-energy resolution IPES machine.
Solving Inverse Kinematics – A New Approach to the Extended Jacobian Technique
Directory of Open Access Journals (Sweden)
M. Šoch
2005-01-01
This paper presents a brief summary of current numerical algorithms for solving the inverse kinematics problem. A new approach based on the Extended Jacobian technique is then compared with the current Jacobian inversion method. The presented method is intended for use in the field of computer graphics for the animation of articulated structures.
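The Jacobian inversion baseline that the paper compares against can be sketched compactly: iterate q <- q + step * pinv(J(q)) * (target - f(q)), where pinv is the Moore-Penrose pseudoinverse. Below is a minimal sketch for a hypothetical planar 2-link arm; the geometry, link lengths, and step size are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fk(q, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm (illustrative geometry)."""
    x = l1 * np.cos(q[0]) + l2 * np.cos(q[0] + q[1])
    y = l1 * np.sin(q[0]) + l2 * np.sin(q[0] + q[1])
    return np.array([x, y])

def jacobian(q, l1=1.0, l2=1.0):
    """Analytic Jacobian d(end effector position)/d(joint angles)."""
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_pinv(target, q0, iters=200, step=0.5):
    """Iterative IK via the Moore-Penrose pseudoinverse of the Jacobian."""
    q = np.asarray(q0, dtype=float)
    for _ in range(iters):
        err = target - fk(q)
        q = q + step * np.linalg.pinv(jacobian(q)) @ err
    return q

q = ik_pinv(np.array([1.2, 0.8]), q0=[0.3, 0.3])
print(fk(q))  # close to [1.2, 0.8]
```

The pseudoinverse makes the update robust near singular configurations, which is one of the issues extended-Jacobian variants aim to handle more gracefully.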
Recovery of material parameters of soft hyperelastic tissue by an inverse spectral technique
Gou, Kun; Joshi, Sunnie; Walton, Jay R.
2012-01-01
An inverse spectral method is developed for recovering a spatially inhomogeneous shear modulus for soft tissue. The study is motivated by a novel use of the intravascular ultrasound technique to image arteries. The arterial wall is idealized as a
Magnetic resonance separation imaging using a divided inversion recovery technique (DIRT).
Goldfarb, James W
2010-04-01
The divided inversion recovery technique is an MRI separation method based on tissue T(1) relaxation differences. When tissue T(1) relaxation times are longer than the time between inversion pulses in a segmented inversion recovery pulse sequence, longitudinal magnetization does not pass through the null point. Prior to additional inversion pulses, longitudinal magnetization may have an opposite polarity. Spatial displacement of tissues in inversion recovery balanced steady-state free-precession imaging has been shown to be due to this magnetization phase change resulting from incomplete magnetization recovery. In this paper, it is shown how this phase change can be used to provide image separation. A pulse sequence parameter, the time between inversion pulses (T180), can be adjusted to provide water-fat or fluid separation. Example water-fat and fluid separation images of the head, heart, and abdomen are presented. The water-fat separation performance was investigated by comparing image intensities in short-axis divided inversion recovery technique images of the heart. Fat, blood, and fluid signal was suppressed to the background noise level. Additionally, the separation performance was not affected by main magnetic field inhomogeneities.
A Highly Efficient Shannon Wavelet Inverse Fourier Technique for Pricing European Options
Ortiz-Gracia, Luis; Oosterlee, C.W.
2016-01-01
In the search for robust, accurate, and highly efficient financial option valuation techniques, we here present the SWIFT method (Shannon wavelets inverse Fourier technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of sharp quantitative error
A highly efficient Shannon wavelet inverse Fourier technique for pricing European options
L. Ortiz Gracia (Luis); C.W. Oosterlee (Cornelis)
2016-01-01
htmlabstractIn the search for robust, accurate, and highly efficient financial option valuation techniques, we here present the SWIFT method (Shannon wavelets inverse Fourier technique), based on Shannon wavelets. SWIFT comes with control over approximation errors made by means of
Hine, N D M; Haynes, P D; Mostofi, A A; Payne, M C
2010-09-21
We present calculations of formation energies of defects in an ionic solid (Al(2)O(3)) extrapolated to the dilute limit, corresponding to a simulation cell of infinite size. The large-scale calculations required for this extrapolation are enabled by developments in the approach to parallel sparse matrix algebra operations, which are central to linear-scaling density-functional theory calculations. The computational cost of manipulating sparse matrices, whose sizes are determined by the large number of basis functions present, is greatly improved with this new approach. We present details of the sparse algebra scheme implemented in the ONETEP code using hierarchical sparsity patterns, and demonstrate its use in calculations on a wide range of systems, involving thousands of atoms on hundreds to thousands of parallel processes.
Berger, B. S.; Duangudom, S.
1973-01-01
A technique is introduced which extends the range of useful approximation of numerical inversion techniques to many cycles of an oscillatory function without requiring either the evaluation of the image function for many values of s or the computation of higher-order terms. The technique consists in reducing a given initial value problem defined over some interval into a sequence of initial value problems defined over a set of subintervals. Several numerical examples demonstrate the utility of the method.
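The paper's subinterval algorithm itself is not reproduced here, but the basic image-to-time-domain step it builds on can be illustrated with a standard Gaver-Stehfest inverter. Stehfest works well for smooth, non-oscillatory functions and degrades badly for oscillatory ones, which is precisely the limitation that techniques like the one above address. A sketch:

```python
import math

def stehfest_coeffs(n):
    """Gaver-Stehfest weights V_k (n must be even)."""
    v = []
    for k in range(1, n + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, n // 2) + 1):
            s += (j ** (n // 2) * math.factorial(2 * j)
                  / (math.factorial(n // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        v.append((-1) ** (k + n // 2) * s)
    return v

def invert(F, t, n=12):
    """Approximate f(t) from its Laplace image F(s) via Gaver-Stehfest."""
    ln2 = math.log(2.0)
    v = stehfest_coeffs(n)
    return ln2 / t * sum(vk * F((k + 1) * ln2 / t) for k, vk in enumerate(v))

# F(s) = 1/(s + 1) is the image of f(t) = exp(-t)
print(invert(lambda s: 1.0 / (s + 1.0), 1.0))  # ~ exp(-1) ~ 0.3679
```

Note that the inverter samples F(s) only on the real axis; methods that remain accurate over many cycles of an oscillatory original need a different strategy, such as the interval splitting described in the record.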
International Nuclear Information System (INIS)
Castaneda M, V. H.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Leon P, A. A.; Hernandez P, C. F.; Espinoza G, J. G.; Ortiz R, J. M.; Vega C, H. R.; Mendez, R.; Gallego, E.; Sousa L, M. A.
2016-10-01
The Taguchi methodology has proved highly efficient for solving inverse problems, in which the values of some model parameters must be obtained from observed data. Intrinsic mathematical characteristics make a problem an inverse one, and such problems appear in many branches of science, engineering and mathematics. Researchers have used a variety of techniques to solve them; recently, techniques based on Artificial Intelligence have been increasingly explored. This paper presents the use of a software tool based on generalized regression artificial neural networks for the solution of inverse problems in high energy physics, specifically the problem of neutron spectrometry. The tool, developed in the MATLAB programming environment, offers a friendly, intuitive and easy-to-use interface. It solves the inverse problem involved in reconstructing a neutron spectrum from measurements made with a Bonner sphere spectrometric system. Given this information, the neural network reconstructs the neutron spectrum with high performance and generalization capability. The end user does not require extensive training or technical knowledge in software development or use, which facilitates applying the program to inverse problems across several areas of knowledge. Artificial Intelligence techniques are particularly well suited to inverse problems, given the characteristics of artificial neural networks and their topology; the tool has proved very useful, since the Artificial Neural Network produces results in little time compared with other techniques, and the results agree with the actual experimental data. (Author)
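A generalized regression neural network of the kind the tool employs is, at its core, Nadaraya-Watson kernel regression: the prediction is a kernel-weighted average of training targets. A minimal sketch on synthetic data; the one-dimensional toy mapping below merely stands in for the counts-to-spectrum problem and is purely illustrative.

```python
import numpy as np

def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    """Generalized regression NN (Nadaraya-Watson):
    y(x) = sum_i y_i K(x, x_i) / sum_i K(x, x_i) with a Gaussian kernel."""
    d2 = np.sum((x_query[:, None, :] - x_train[None, :, :]) ** 2, axis=-1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return (w @ y_train) / np.sum(w, axis=1)

# Synthetic stand-in for the detector-counts -> spectrum mapping
rng = np.random.default_rng(0)
x = rng.uniform(0.0, 1.0, size=(200, 1))
y = np.sin(2.0 * np.pi * x[:, 0])
xq = np.array([[0.25], [0.75]])
pred = grnn_predict(x, y, xq, sigma=0.05)
print(pred)   # near [1, -1], mildly attenuated by the kernel smoothing
```

Unlike iteratively trained networks, a GRNN has no weights to fit beyond the single smoothing parameter sigma, which is one reason such tools produce results quickly.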
Justiniano, A.; Jaya, Y.; Diephuis, G.; Veenhof, R.; Pringle, T.
2015-01-01
The objective of the study is to characterise the Triassic massive stacked sandstone deposits of the Main Buntsandstein Subgroup at Block Q16 located in the West Netherlands Basin. The characterisation was carried out through combining rock-physics modelling and seismic inversion techniques. The
Evaluation of inverse modeling techniques for pinpointing water leakages at building constructions
Schijndel, van A.W.M.
2015-01-01
The location and nature of the moisture leakages are sometimes difficult to detect. Moreover, the relation between observed inside surface moisture patterns and where the moisture enters the construction is often not clear. The objective of this paper is to investigate inverse modeling techniques as
A comparative study of surface waves inversion techniques at strong motion recording sites in Greece
Panagiotis C. Pelekis,; Savvaidis, Alexandros; Kayen, Robert E.; Vlachakis, Vasileios S.; Athanasopoulos, George A.
2015-01-01
The surface wave method was used to estimate Vs versus depth profiles at 10 strong motion stations in Greece. The dispersion data were obtained by the SASW method, using a pair of electromechanical harmonic-wave sources (shakers) or a random source (drop weight). Three inversion techniques were used: a) a recently proposed Simplified Inversion Method (SIM), b) an inversion technique based on a neighborhood algorithm (NA), which allows the incorporation of a priori information on the subsurface structure parameters, and c) Occam's inversion algorithm. A constant Poisson's ratio (ν = 0.4) was assumed for each site, since the objective of the current study is the comparison of the three inversion schemes regardless of the uncertainties resulting from the lack of geotechnical data. A penalty function was introduced to quantify the deviations between the derived Vs profiles. The Vs models are compared in terms of Vs(z), Vs30 and EC8 soil category, in order to show the insignificance of the remaining variations. The comparison showed that the average deviation of the SIM profiles is 9% and 4.9% from the NA and Occam's profiles respectively, while the Vs30 values obtained from SIM differ on average by 7.4% and 5.0% from those of NA and Occam's.
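Vs30, used above to compare the inverted profiles, is the travel-time-averaged shear-wave velocity of the top 30 m: Vs30 = 30 / sum(h_i / Vs_i), with the profile truncated at 30 m depth. A sketch with an invented three-layer profile (not from the paper):

```python
import numpy as np

def vs30(thickness_m, vs_mps):
    """Time-averaged shear-wave velocity over the top 30 m."""
    depth_used, travel_time = 0.0, 0.0
    for hi, vi in zip(thickness_m, vs_mps):
        hi = min(float(hi), 30.0 - depth_used)   # truncate the profile at 30 m
        if hi <= 0.0:
            break
        travel_time += hi / vi
        depth_used += hi
    return 30.0 / travel_time

# Illustrative 3-layer profile: 5 m at 180 m/s, 10 m at 300 m/s, rest at 600 m/s
v = vs30([5.0, 10.0, 40.0], [180.0, 300.0, 600.0])
print(v)   # ~ 348 m/s, which falls in EC8 ground type C (180-360 m/s)
```

Because Vs30 is a harmonic (travel-time) average, slow shallow layers dominate it, which is why rather different inverted profiles can still land in the same EC8 category.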
Filtering techniques for efficient inversion of two-dimensional Nuclear Magnetic Resonance data
Bortolotti, V.; Brizi, L.; Fantazzini, P.; Landi, G.; Zama, F.
2017-10-01
The inversion of two-dimensional Nuclear Magnetic Resonance (NMR) data requires the solution of a first-kind Fredholm integral equation with a two-dimensional tensor product kernel and lower bound constraints. For the solution of this ill-posed inverse problem, the recently presented 2DUPEN algorithm [V. Bortolotti et al., Inverse Problems, 33(1), 2016] uses multiparameter Tikhonov regularization with automatic choice of the regularization parameters. In this work, I2DUPEN, an improved version of 2DUPEN that implements Mean Windowing and Singular Value Decomposition filters, is tested in depth. The reconstruction problem with filtered data is formulated as a compressed weighted least squares problem with multiparameter Tikhonov regularization. Results on synthetic and real 2D NMR data are presented, with the main purpose of analyzing in more detail the separate and combined effects of these filtering techniques on the reconstructed 2D distribution.
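The single-parameter core of Tikhonov regularization used by the 2DUPEN family can be sketched on a one-dimensional first-kind problem. The Gaussian-blur kernel below is a generic ill-conditioned stand-in, not the NMR tensor-product kernel:

```python
import numpy as np

def tikhonov_solve(K, y, lam):
    """Solve min ||K x - y||^2 + lam ||x||^2 via the normal equations."""
    n = K.shape[1]
    return np.linalg.solve(K.T @ K + lam * np.eye(n), K.T @ y)

# Discretized first-kind Fredholm problem with a Gaussian blur kernel
n = 50
t = np.linspace(0.0, 1.0, n)
dt = t[1] - t[0]
K = np.exp(-0.5 * ((t[:, None] - t[None, :]) / 0.05) ** 2) * dt
x_true = np.exp(-0.5 * ((t - 0.5) / 0.1) ** 2)
y = K @ x_true + 1e-4 * np.random.default_rng(1).standard_normal(n)

x_reg = tikhonov_solve(K, y, lam=1e-6)
x_naive = np.linalg.solve(K, y)   # unregularized: the noise is amplified enormously
print(np.linalg.norm(x_reg - x_true), np.linalg.norm(x_naive - x_true))
```

Even with measurement noise four orders of magnitude below the signal, the unregularized solve is useless, while the regularized one stays close to the true distribution; multiparameter schemes such as UPEN additionally vary lam locally.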
Recovery of material parameters of soft hyperelastic tissue by an inverse spectral technique
Gou, Kun
2012-07-01
An inverse spectral method is developed for recovering a spatially inhomogeneous shear modulus for soft tissue. The study is motivated by a novel use of the intravascular ultrasound technique to image arteries. The arterial wall is idealized as a nonlinear isotropic cylindrical hyperelastic body. A boundary value problem is formulated for the response of the arterial wall within a specific class of quasistatic deformations reflective of the response due to imposed blood pressure. Subsequently, a boundary value problem is developed via an asymptotic construction modeling intravascular ultrasound interrogation which generates small amplitude, high frequency time harmonic vibrations superimposed on the static finite deformation. This leads to a system of second order ordinary Sturm-Liouville boundary value problems that are then employed to reconstruct the shear modulus through a nonlinear inverse spectral technique. Numerical examples are demonstrated to show the viability of the method. © 2012 Elsevier Ltd. All rights reserved.
International Nuclear Information System (INIS)
Choi, C. Y.
1997-01-01
A geometrical inverse heat conduction problem is solved for infrared scanning cavity detection by the boundary element method using a minimal energy technique. By minimizing the kinetic energy of the temperature field, the boundary element equations are converted to a quadratic programming problem. A hypothetical inner boundary is defined such that the actual cavity is located interior to the domain. Temperatures at the hypothetical inner boundary are determined to meet the constraints of the measurement error of the surface temperature obtained by infrared scanning, and boundary element analysis is then performed for the position of the unknown boundary (cavity). A cavity detection algorithm is provided, and the effects of the minimal energy technique on the inverse solution method are investigated by means of numerical analysis.
Directory of Open Access Journals (Sweden)
Marcelo Ribeiro dos Santos
2014-01-01
During machining, energy is transformed into heat due to plastic deformation of the workpiece surface and friction between tool and workpiece. High temperatures are generated in the region of the cutting edge, which have a very important influence on the wear rate of the cutting tool and on tool life. This work proposes the estimation of the heat flux at the chip-tool interface using inverse techniques. Factors which influence the temperature distribution at the rake face of an AISI M32C high speed steel tool during machining of an ABNT 12L14 steel workpiece were also investigated. The temperature distribution was predicted using finite volume elements. A transient 3D numerical code using an irregular and nonstaggered mesh was developed to solve the nonlinear heat diffusion equation. Experimental tests were made to validate the software. The inverse problem was solved using the function specification method: heat fluxes at the tool-workpiece interface were estimated from the measured temperatures. Tests were performed to study the effect of cutting parameters on cutting edge temperature. The results were compared with those of the tool-work thermocouple technique, and fair agreement was obtained.
A gEUD-based inverse planning technique for HDR prostate brachytherapy: Feasibility study
Energy Technology Data Exchange (ETDEWEB)
Giantsoudi, D. [Department of Radiological Sciences, University of Texas Health Sciences Center, San Antonio, Texas 78229 (United States); Department of Radiation Oncology, Francis H. Burr Proton Therapy Center, Boston, Massachusetts 02114 (United States); Baltas, D. [Department of Medical Physics and Engineering, Strahlenklinik, Klinikum Offenbach GmbH, 63069 Offenbach (Germany); Nuclear and Particle Physics Section, Physics Department, University of Athens, 15701 Athens (Greece); Karabis, A. [Pi-Medical Ltd., Athens 10676 (Greece); Mavroidis, P. [Department of Radiological Sciences, University of Texas Health Sciences Center, San Antonio, Texas 78299 and Department of Medical Radiation Physics, Karolinska Institutet and Stockholm University, 17176 (Sweden); Zamboglou, N.; Tselis, N. [Strahlenklinik, Klinikum Offenbach GmbH, 63069 Offenbach (Germany); Shi, C. [St. Vincent's Medical Center, 2800 Main Street, Bridgeport, Connecticut 06606 (United States); Papanikolaou, N. [Department of Radiological Sciences, University of Texas Health Sciences Center, San Antonio, Texas 78299 (United States)
2013-04-15
Purpose: The purpose of this work was to study the feasibility of a new inverse planning technique based on the generalized equivalent uniform dose for image-guided high dose rate (HDR) prostate cancer brachytherapy, in comparison to conventional dose-volume based optimization. Methods: The quality of 12 clinical HDR brachytherapy implants for prostate utilizing HIPO (Hybrid Inverse Planning Optimization) is compared with alternative plans, which were produced through inverse planning using the generalized equivalent uniform dose (gEUD). All the common dose-volume indices for the prostate and the organs at risk were considered together with radiobiological measures. The clinical effectiveness of the different dose distributions was investigated by comparing dose volume histogram and gEUD evaluators. Results: Our results demonstrate the feasibility of gEUD-based inverse planning in HDR brachytherapy implants for prostate. A statistically significant decrease in D{sub 10} and/or final gEUD values for the organs at risk (urethra, bladder, and rectum) was found while improving dose homogeneity or dose conformity of the target volume. Conclusions: Following the promising results of gEUD-based optimization in intensity modulated radiation therapy treatment optimization, as reported in the literature, the implementation of a similar model in HDR brachytherapy treatment plan optimization is suggested by this study. The potential of improved sparing of organs at risk was shown for various gEUD-based optimization parameter protocols, which indicates the ability of this method to adapt to the user's preferences.
A robust spatial filtering technique for multisource localization and geoacoustic inversion.
Stotts, S A
2005-07-01
Geoacoustic inversion and source localization using beamformed data from a ship of opportunity has been demonstrated with a bottom-mounted array. An alternative approach, which lies within a class referred to as spatial filtering, transforms element level data into beam data, applies a bearing filter, and transforms back to element level data prior to performing inversions. Automation of this filtering approach is facilitated for broadband applications by restricting the inverse transform to the degrees of freedom of the array, i.e., the effective number of elements, for frequencies near or below the design frequency. A procedure is described for nonuniformly spaced elements that guarantees filter stability well above the design frequency. Monitoring energy conservation with respect to filter output confirms filter stability. Filter performance with both uniformly spaced and nonuniformly spaced array elements is discussed. Vertical (range and depth) and horizontal (range and bearing) ambiguity surfaces are constructed to examine filter performance. Examples that demonstrate this filtering technique with both synthetic data and real data are presented along with comparisons to inversion results using beamformed data. Examinations of cost functions calculated within a simulated annealing algorithm reveal the efficacy of the approach.
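The element-beam-element round trip described above can be sketched with a plane-wave steering matrix: transform element data to a fan of beams, zero the beams outside the bearing sector of interest, and transform back. The uniform array, beam fan, and pseudoinverse-based transform below are illustrative choices, not the paper's exact filter:

```python
import numpy as np

def bearing_filter(x, positions, wavelength, keep_deg, n_beams=64):
    """Element -> beam -> element spatial filter for a line array.
    Beams outside the bearing sector keep_deg = (lo, hi) are zeroed."""
    angles_deg = np.linspace(-90.0, 90.0, n_beams)
    angles = np.deg2rad(angles_deg)
    # Steering matrix: element response to a unit plane wave from each bearing
    A = np.exp(2j * np.pi / wavelength * np.outer(positions, np.sin(angles)))
    beams = np.linalg.pinv(A) @ x      # element -> beam (minimum-norm coefficients)
    mask = (angles_deg >= keep_deg[0]) & (angles_deg <= keep_deg[1])
    return A @ (beams * mask)          # filtered beams -> element data

# Uniform 16-element, half-wavelength-spaced array; arrivals at +30 and -40 deg
lam = 1.0
pos = np.arange(16) * lam / 2.0
wave = lambda deg: np.exp(2j * np.pi / lam * pos * np.sin(np.deg2rad(deg)))
x = wave(30.0) + wave(-40.0)

y = bearing_filter(x, pos, lam, keep_deg=(10.0, 50.0))
# The +30 deg arrival is retained; the -40 deg arrival is strongly attenuated
print(np.linalg.norm(y - wave(30.0)) / np.linalg.norm(wave(30.0)))
```

The pseudoinverse implicitly restricts the inversion to the rank (degrees of freedom) of the array, loosely mirroring the restriction the record describes for stability above the design frequency.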
Utility of natural generalised inverse technique in the interpretation of dyke structures
Digital Repository Service at National Institute of Oceanography (India)
Rao, M.M.M.; Murty, T.V.R.; Rao, P.R.; Lakshminarayana, S.; Subrahmanyam, A.S.; Murthy, K.S.R.
Inverse kinetics technique for reactor shutdown measurement: an experimental assessment. [AGR
Energy Technology Data Exchange (ETDEWEB)
Lewis, T. A.; McDonald, D.
1975-09-15
It is proposed to use the inverse kinetics technique to measure the subcritical reactivity as a function of time during the testing of the nitrogen injection systems on AGRs. A description is given of an experimental assessment of the technique, in which known transients created by control rod movements were investigated on a small experimental reactor (2 m high, 1 m radius). Spatial effects were observed close to the moving rods, but otherwise the derived reactivities were independent of detector position and agreed well with the existing calibrations. This prompted the suggestion that data from installed reactor instrumentation could be used to calibrate CAGR control rods.
Singh, Sarvesh Kumar; Kumar, Pramod; Rani, Raj; Turbelin, Grégory
2017-04-01
The study highlights a theoretical comparison and various interpretations of a recent inversion technique, called renormalization, developed for the reconstruction of unknown tracer emissions from their measured concentrations. The comparative interpretations are presented in relation to other inversion techniques based on the principles of regularization, Bayesian inference, minimum norm, maximum entropy on the mean, and model resolution optimization. It is shown that the renormalization technique can be interpreted in a similar manner to the other techniques, with a practical choice of a priori information and error statistics, while eliminating the need for additional constraints. The study shows that the proposed weight matrix and weighted Gram matrix offer a suitable deterministic choice for the background error and measurement covariance matrices, respectively, in the absence of statistical knowledge about background and measurement errors. The technique is advantageous since it (i) utilizes weights representing a priori information apparent to the monitoring network, (ii) avoids dependence on background source estimates, (iii) improves on alternative choices for the error statistics, (iv) overcomes the colocalization problem in a natural manner, and (v) provides an optimally resolved source reconstruction. A comparative illustration of source retrieval is made using real measurements from a continuous point release conducted in the Fusion Field Trials, Dugway Proving Ground, Utah.
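Of the techniques compared above, the minimum-norm solution is the easiest to sketch: among all source vectors reproducing the measurements y = H s exactly, pick the one of smallest norm, s = H^T (H H^T)^(-1) y. A toy underdetermined example; the sensitivity matrix and release scenario are invented.

```python
import numpy as np

rng = np.random.default_rng(42)

# Underdetermined source-receptor model: 8 receptors, 40 candidate source cells
H = rng.standard_normal((8, 40))    # illustrative source-receptor sensitivity matrix
s_true = np.zeros(40)
s_true[12] = 5.0                    # a single point release of strength 5
y = H @ s_true                      # noise-free synthetic measurements

# Minimum-norm retrieval: the shortest s satisfying H s = y exactly
s_mn = H.T @ np.linalg.solve(H @ H.T, y)

print(np.allclose(H @ s_mn, y))     # True: measurements reproduced exactly
print(np.linalg.norm(s_mn), np.linalg.norm(s_true))  # amplitude is underestimated
```

The norm comparison illustrates the known bias of minimum-norm retrievals: the recovered source is a row-space projection of the true one, which is what weighted variants such as the renormalization technique aim to correct.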
Technique detection software for Sparse Matrices
Directory of Open Access Journals (Sweden)
KHAN Muhammad Taimoor
2009-12-01
Sparse storage formats are techniques for storing and processing sparse matrix data efficiently. The performance of these storage formats depends upon the distribution of non-zeros within the matrix in different dimensions. In order to obtain the best results, we need a technique that best suits the organization of data in a particular matrix, so the selection of a suitable storage format is the main step towards improving performance; otherwise efficiency can decrease. The purpose of this research is to help identify the best storage format, in terms of reduced storage size and high processing efficiency, for a given sparse matrix.
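The trade-off between formats can be seen with SciPy's sparse types: COO (triplet) form is convenient for assembly, while CSR stores row pointers plus column indices and is efficient for row-wise products. A small sketch:

```python
import numpy as np
from scipy.sparse import coo_matrix

# Build a small sparse matrix in COO (triplet) form: easy to construct ...
rows = np.array([0, 0, 1, 3])
cols = np.array([0, 2, 2, 3])
vals = np.array([4.0, 1.0, 5.0, 2.0])
A_coo = coo_matrix((vals, (rows, cols)), shape=(4, 4))

# ... then convert to CSR, which is compact and fast for matrix-vector products
A_csr = A_coo.tocsr()
print(A_csr.indptr)        # row pointers: [0 2 3 3 4]
print(A_csr.indices)       # column indices, row by row: [0 2 2 3]
print(A_csr @ np.ones(4))  # [5. 5. 0. 2.]
```

Which format wins depends on the non-zero distribution, exactly the property the detection software in the record analyzes: CSR favours row-structured access, CSC column-structured access, and COO incremental construction.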
Determining the metallicity of the solar envelope using seismic inversion techniques
Buldgen, G.; Salmon, S. J. A. J.; Noels, A.; Scuflaire, R.; Dupret, M. A.; Reese, D. R.
2017-11-01
The solar metallicity issue is a long-standing problem of astrophysics, impacting multiple fields and still subject to debate and uncertainties. While spectroscopy has mostly been used to determine the solar heavy element abundance, helioseismologists have attempted to provide a seismic determination of the metallicity in the solar convective envelope. However, the puzzle remains, since two independent groups provided two radically different values for this crucial astrophysical parameter. We aim to provide an independent seismic measurement of the solar metallicity in the convective envelope. Our main goal is to help provide new information to break the current stalemate amongst seismic determinations of the solar heavy element abundance. We start by presenting the kernels, the inversion technique and the target function of the inversion we have developed. We then test our approach in multiple hare-and-hounds exercises to assess its reliability and accuracy, and apply it to solar data using calibrated solar models to determine an interval of seismic measurements for the solar metallicity. Our hare-and-hounds exercises show that the inversion can indeed be used to estimate the solar metallicity. However, we also show that further dependencies on the physical ingredients of solar models lead to a low accuracy. Nevertheless, using various physical ingredients for our solar models, we determine metallicity values between 0.008 and 0.014.
Directory of Open Access Journals (Sweden)
J. S. de Villiers
2014-10-01
This research focuses on the inversion of geomagnetic variation field measurements to obtain source currents in the ionosphere. During a geomagnetic disturbance, the ionospheric currents create magnetic field variations that induce geoelectric fields, which drive geomagnetically induced currents (GIC) in power systems. These GIC may disturb the operation of power systems and cause damage to grounded power transformers. The geoelectric fields at any location of interest can be determined from the source currents in the ionosphere through a solution of the forward problem. Line currents running east-west at a given surface position are postulated to exist at a certain height above the Earth's surface. This physical arrangement results in the fields on the ground having magnetic north and down components and an electric east component. Ionospheric currents are modelled by inverting Fourier integrals (over the wavenumber) of elementary geomagnetic fields using the Levenberg-Marquardt technique. The output parameters of the inversion model are the current strength, height and surface position of the ionospheric current system. A ground conductivity structure with five layers from Quebec, Canada, based on the Layered-Earth model, is used to obtain the complex skin depth at a given angular frequency. This paper presents preliminary inversion results based on these structures and simulated geomagnetic fields. The results show some interesting features in the frequency domain. Model parameters obtained through inversion are within 2% of simulated values. This technique has applications for modelling the currents of electrojets at the equator and auroral regions, as well as currents in the magnetosphere.
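The Levenberg-Marquardt fit described above can be sketched for the simplest source model, a single infinite line current, whose ground-level field components at horizontal offset d from the current and height h are proportional to I*h/(d^2 + h^2) (north) and I*d/(d^2 + h^2) (down). The parameter values and noise level below are illustrative, not those of the paper:

```python
import numpy as np
from scipy.optimize import least_squares

def field_nT(params, x_km):
    """Ground-level field (nT) of an infinite horizontal line current.
    params = (current in MA, height in km, surface position in km)."""
    i_ma, h_km, x0_km = params
    d = x_km - x0_km
    r2 = d ** 2 + h_km ** 2
    bx = 2.0e5 * i_ma * h_km / r2   # horizontal (north) component, nT
    bz = 2.0e5 * i_ma * d / r2      # vertical (down) component, nT
    return np.concatenate([bx, bz])

x_km = np.linspace(-300.0, 300.0, 41)          # ground observation points
true_params = (1.0, 110.0, 20.0)               # 1 MA electrojet at 110 km height
data = field_nT(true_params, x_km)
data = data + 0.5 * np.random.default_rng(7).standard_normal(data.size)  # 0.5 nT noise

fit = least_squares(lambda p: field_nT(p, x_km) - data,
                    x0=(0.5, 90.0, 0.0), method='lm')
print(fit.x)   # close to (1.0, 110.0, 20.0)
```

The factor 2e5 is mu0/(2*pi) expressed for currents in MA, distances in km and fields in nT; with both field components and many stations, the three-parameter fit is well constrained.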
Uncertainty estimates of a GRACE inversion modelling technique over Greenland using a simulation
Bonin, Jennifer; Chambers, Don
2013-07-01
The low spatial resolution of GRACE causes leakage, where signals in one location spread out into nearby regions. Because of this leakage, using simple techniques such as basin averages may result in an incorrect estimate of the true mass change in a region. A fairly simple least squares inversion technique can be used to localize mass changes more specifically into a pre-determined set of basins of uniform internal mass distribution. However, the accuracy of these higher resolution basin mass amplitudes has not been determined, nor is it known how the distribution of the chosen basins affects the results. We use a simple `truth' model over Greenland as an example case, to estimate the uncertainties of this inversion method and expose those design parameters which may result in an incorrect high-resolution mass distribution. We determine that an appropriate level of smoothing (300-400 km) and process noise (0.30 cm2 of water) yields the best results. The trends of the Greenland internal basins and Iceland can be reasonably estimated with this method, with average systematic errors of 3.5 cm yr-1 per basin. The largest mass losses found from GRACE RL04 occur in the coastal northwest (-19.9 and -33.0 cm yr-1) and southeast (-24.2 and -27.9 cm yr-1), with small mass gains (+1.4 to +7.7 cm yr-1) found across the northern interior. Acceleration of mass change is measurable at the 95 per cent confidence level in four northwestern basins, but not elsewhere in Greenland. Due to an insufficiently detailed distribution of basins across internal Canada, the trend estimates of Baffin and Ellesmere Islands are expected to be incorrect due to systematic errors caused by the inversion technique.
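The inversion step itself is ordinary least squares on basin patterns: smooth each candidate basin the same way the observations are smoothed, then fit the basin amplitudes. A one-dimensional toy analogue; the basins, smoother, and noise level are invented stand-ins for the spherical-harmonic setting.

```python
import numpy as np

rng = np.random.default_rng(3)
x = np.arange(100)
edges = [0, 25, 50, 75, 100]                  # four pre-determined basins
basins = np.array([(x >= a) & (x < b)
                   for a, b in zip(edges[:-1], edges[1:])], dtype=float)

def smooth(f, width=10.0):
    """Gaussian smoothing: a 1-D stand-in for GRACE's limited resolution."""
    g = np.exp(-0.5 * ((x[:, None] - x[None, :]) / width) ** 2)
    return (g / g.sum(axis=1, keepdims=True)) @ f

m_true = np.array([3.0, -1.0, 0.5, 2.0])      # basin mass amplitudes
obs = smooth(basins.T @ m_true) + 0.05 * rng.standard_normal(x.size)

# Least-squares inversion: fit amplitudes whose *smoothed* basin patterns
# best reproduce the (smoothed, noisy) observations
G = np.stack([smooth(b) for b in basins], axis=1)
m_est, *_ = np.linalg.lstsq(G, obs, rcond=None)
print(m_est)   # close to [3, -1, 0.5, 2]
```

Because the design matrix applies the same smoothing as the observations, leakage between neighbouring basins is accounted for in the fit rather than biasing the amplitudes, which is the essential advantage over plain basin averages.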
International Nuclear Information System (INIS)
Kravtsov, Y.A.; Kravtsov, Y.A.; Chrzanowski, J.; Mazon, D.
2011-01-01
A new procedure for plasma polarimetry data inversion is suggested, which fits a two-parameter knowledge-based plasma model to the measured parameters (azimuthal and ellipticity angles) of the polarization ellipse. The knowledge-based model is assumed to use the magnetic field and electron density profiles obtained from magnetic measurements and LIDAR data on Thomson scattering. In contrast to traditional polarimetry, polarization evolution along the ray is determined on the basis of the angular variables technique (AVT). The paper contains a few examples of numerical solutions of the AVT equations, which are applicable in conditions where the Faraday and Cotton-Mouton effects are simultaneously strong. (authors)
A new recoil distance technique using low energy coulomb excitation in inverse kinematics
Energy Technology Data Exchange (ETDEWEB)
Rother, W., E-mail: wolfram.rother@googlemail.com [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Dewald, A.; Pascovici, G.; Fransen, C.; Friessner, G.; Hackstein, M. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Ilie, G. [Wright Nuclear Structure Laboratory, Yale University, New Haven, CT 06520 (United States); National Institute of Physics and Nuclear Engineering, P.O. Box MG-6, Bucharest-Magurele (Romania); Iwasaki, H. [National Superconducting Cyclotron Laboratory, Michigan State University, East Lansing, MI 48824 (United States); Department of Physics and Astronomy, Michigan State University, East Lansing, MI 48824 (United States); Jolie, J. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Melon, B. [Dipartimento di Fisica, Universita di Firenze and INFN Sezione di Firenze, Sesto Fiorentino (Firenze) I-50019 (Italy); Petkov, P. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); INRNE-BAS, Sofia (Bulgaria); Pfeiffer, M. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Pissulla, Th. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Bundesumweltministerium, Robert-Schuman-Platz 3, D - 53175 Bonn (Germany); Zell, K.-O. [Institut fuer Kernphysik der Universitaet zu Koeln, Zuelpicher Str. 77, D-50937 Koeln (Germany); Jakobsson, U.; Julin, R.; Jones, P.; Ketelhut, S.; Nieminen, P.; Peura, P. [Department of Physics, University of Jyvaeskylae, P.O. Box 35, FI-40014 (Finland); and others
2011-10-21
We report on the first experiment combining the Recoil Distance Doppler Shift technique and multistep Coulomb excitation in inverse kinematics at beam energies of 3-10 A MeV. The setup involves a standard plunger device equipped with a degrader foil instead of the normally used stopper foil. An array of particle detectors is positioned at forward angles to detect target-like recoil nuclei which are used as a trigger to discriminate against excitations in the degrader foil. The method has been successfully applied to measure lifetimes in {sup 128}Xe and is suited to be a useful tool for experiments with radioactive ion beams.
International Nuclear Information System (INIS)
Ganapol, B.D.; Sumini, M.
1990-01-01
The time-dependent, space second-order discrete form of the monokinetic transport equation is given an analytical solution within the Laplace transform domain. The A{sub n} dynamic model is presented and the general resolution procedure is worked out. The solution in the time domain is then obtained through the application of a numerical transform inversion technique. The justification for this research lies in the need to produce reliable and physically meaningful transport benchmarks for dynamic calculations. The paper concludes with a few results followed by some physical comments.
Application of a numerical Laplace transform inversion technique to a problem in reactor dynamics
International Nuclear Information System (INIS)
Ganapol, B.D.; Sumini, M.
1990-01-01
A newly developed numerical technique for Laplace transform inversion is applied to a classical time-dependent problem of reactor physics. The dynamic behaviour of a multiplying system has been analyzed through a continuous slowing-down model, taking into account a finite slowing-down time and the presence of several groups of neutron precursors, and simplifying the spatial analysis using the space-asymptotic approximation. The results presented show complete agreement with analytical ones previously obtained and allow a deeper understanding of the model features. (author)
International Nuclear Information System (INIS)
Arnold, Alexander; Bruhns, Otto T; Reichling, Stefan; Mosler, Joern
2010-01-01
This paper is concerned with an efficient implementation suitable for the elastography inverse problem. More precisely, the novel algorithm allows us to compute the unknown stiffness distribution in soft tissue from the measured displacement field, while considerably reducing the numerical cost compared to previous approaches. This is realized by combining and further elaborating variational mesh adaption with a clustering technique similar to those known from digital image compression. Within the variational mesh adaption, the underlying finite element discretization is only locally refined if this leads to a considerable improvement of the numerical solution. Additionally, the numerical complexity is reduced by the aforementioned clustering technique, in which the parameters describing the stiffness of the respective soft tissue are sorted into a predefined number of intervals. By doing so, the number of unknowns associated with the elastography inverse problem can be chosen explicitly. A positive side effect of this method is the reduction of artificial noise in the data (smoothing of the solution). The performance and the rate of convergence of the resulting numerical formulation are critically analyzed by means of numerical examples.
Three-Dimensional Inverse Transport Solver Based on Compressive Sensing Technique
Cheng, Yuxiong; Wu, Hongchun; Cao, Liangzhi; Zheng, Youqi
2013-09-01
Based on direct exposure measurements from a flash radiographic image, a compressive sensing-based method for the three-dimensional inverse transport problem is presented. The linear absorption coefficients and interface locations of objects are reconstructed directly at the same time. It is always very expensive to obtain enough measurements. With limited measurements, the compressive sensing sparse-reconstruction technique orthogonal matching pursuit is applied to obtain the sparse coefficients by solving an optimization problem. A three-dimensional inverse transport solver is developed based on this compressive sensing technique. There are three features in this solver: (1) AutoCAD is employed as a geometry preprocessor due to its powerful graphics capability. (2) The forward projection matrix, rather than a Gauss matrix, is constructed by the visualization tool generator. (3) The Fourier transform and the Daubechies wavelet transform are adopted to convert an underdetermined system into a well-posed system in the algorithm. Simulations are performed, and the numerical results for the pseudo-sine absorption, two-cube and two-cylinder problems obtained with the compressive sensing-based solver agree well with the reference values.
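The abstract names orthogonal matching pursuit as the sparse-reconstruction step. A minimal sketch of generic OMP follows; the random dictionary, sparsity level, and test signal are illustrative stand-ins, not the paper's radiographic projection setup:

```python
import numpy as np

def omp(A, y, k, tol=1e-10):
    """Orthogonal matching pursuit: greedily add the column of A most
    correlated with the current residual, then re-fit by least squares."""
    residual = y.copy()
    support = []
    x = np.zeros(A.shape[1])
    coef = np.zeros(0)
    for _ in range(k):
        idx = int(np.argmax(np.abs(A.T @ residual)))
        if idx not in support:
            support.append(idx)
        coef, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coef
        if np.linalg.norm(residual) < tol:
            break
    x[support] = coef
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 40))
A /= np.linalg.norm(A, axis=0)            # unit-norm dictionary columns
x_true = np.zeros(40)
x_true[[3, 17, 29]] = [1.5, -2.0, 0.7]    # 3-sparse ground truth
y = A @ x_true                            # noiseless "limited measurements"
x_rec = omp(A, y, k=3)
```

With far fewer measurements (20) than unknowns (40), the greedy selection plus least-squares re-fit recovers the sparse vector exactly in the noiseless case, which is the property the solver above exploits.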
A forward model and conjugate gradient inversion technique for low-frequency ultrasonic imaging.
van Dongen, Koen W A; Wright, William M D
2006-10-01
Emerging methods of hyperthermia cancer treatment require noninvasive temperature monitoring, and ultrasonic techniques show promise in this regard. Various tomographic algorithms are available that reconstruct sound speed or contrast profiles, which can be related to temperature distribution. The requirement of a high enough frequency for adequate spatial resolution and a low enough frequency for adequate tissue penetration is a difficult compromise. In this study, the feasibility of using low frequency ultrasound for imaging and temperature monitoring was investigated. The transient probing wave field had a bandwidth spanning the frequency range 2.5-320.5 kHz. The results from a forward model which computed the propagation and scattering of low-frequency acoustic pressure and velocity wave fields were used to compare three imaging methods formulated within the Born approximation, representing two main types of reconstruction. The first uses Fourier techniques to reconstruct sound-speed profiles from projection or Radon data based on optical ray theory, seen as an asymptotical limit for comparison. The second uses backpropagation and conjugate gradient inversion methods based on acoustical wave theory. The results show that the accuracy in localization was 2.5 mm or better when using low frequencies and the conjugate gradient inversion scheme, which could be used for temperature monitoring.
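The conjugate-gradient inversion within the Born approximation amounts to iteratively solving a large linearized least-squares system without ever forming the normal equations explicitly. A hedged sketch using CGLS on a random stand-in operator (the matrix below is not an acoustic scattering kernel):

```python
import numpy as np

def cgls(A, b, n_iter=100):
    """Conjugate-gradient least squares: iteratively minimizes ||A x - b||
    using only products with A and A^T, as is typical for large
    linearized (Born-type) inversion problems."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < 1e-20:
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(4)
A = rng.standard_normal((80, 30))     # stand-in for a linearized scattering operator
x_true = rng.standard_normal(30)      # contrast (sound-speed perturbation) profile
b = A @ x_true                        # synthetic scattered-field data
x_est = cgls(A, b)
```

Because only matrix-vector products are needed, the same loop applies when A is available only as a forward-modelling routine and its adjoint, which is why this family of methods scales to imaging problems.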
Energy Technology Data Exchange (ETDEWEB)
Kawamura, S [Nippon Geophysical Prospecting Co. Ltd., Tokyo (Japan)
1996-10-01
The smoothness-constrained least-squares technique with ABIC minimization was applied to the inversion of the phase velocity of surface waves during geophysical exploration, to confirm its usefulness. Since this study aimed mainly at the applicability of the technique, the Love wave was used, which is easier to treat theoretically than the Rayleigh wave. Stable successive approximation solutions could be obtained by repeated improvement of the S-wave velocity model, and an objective model with high reliability could be determined. In contrast, for the inversion with simple minimization of the residual sum of squares, stable solutions could also be obtained by repeated improvement, but the judgment of convergence was very hard, and the obtained model might be in a state of over-fitting. In this study, the Love wave was used to examine the applicability of the smoothness-constrained least-squares technique with ABIC minimization; its applicability to the Rayleigh wave will be investigated. 8 refs.
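The smoothness-constrained least-squares step can be sketched as a stacked linear system with a second-difference roughness operator; the ABIC-based choice of the trade-off parameter is omitted here, and the kernel below is a random stand-in rather than a Love-wave dispersion kernel:

```python
import numpy as np

def smooth_lsq(G, d, lam):
    """Smoothness-constrained least squares:
    minimize ||G m - d||^2 + lam^2 ||L m||^2, L = second-difference operator."""
    n = G.shape[1]
    L = np.zeros((n - 2, n))
    for i in range(n - 2):
        L[i, i:i + 3] = [1.0, -2.0, 1.0]
    # Stack the data equations and the weighted roughness equations
    A = np.vstack([G, lam * L])
    b = np.concatenate([d, np.zeros(n - 2)])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

rng = np.random.default_rng(1)
n = 30
m_true = np.sin(np.linspace(0.0, np.pi, n))   # smooth S-wave-velocity-like profile
G = rng.standard_normal((50, n))              # stand-in for the sensitivity kernel
d = G @ m_true + 0.01 * rng.standard_normal(50)

m_smooth = smooth_lsq(G, d, lam=1.0)          # smoothness-constrained solution
m_rough = smooth_lsq(G, d, lam=1e-6)          # essentially unconstrained fit
```

Larger lam trades data fit for smoothness; in the abstract's approach, ABIC provides an objective rule for picking that trade-off instead of choosing it by hand.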
Source-jerk analysis using a semi-explicit inverse kinetic technique
International Nuclear Information System (INIS)
Spriggs, G.D.; Pederson, R.A.
1985-01-01
A method is proposed for measuring the effective reproduction factor, k, in subcritical systems. The method uses the transient response of a subcritical system to the sudden removal of an extraneous neutron source (i.e., a source jerk). The response is analyzed using an inverse kinetic technique that least-squares fits the exact analytical solution corresponding to a source-jerk transient as derived from the point-reactor model. It has been found that the technique can provide an accurate means of measuring k in systems that are close to critical (i.e., 0.95 < k < 1.0). As a system becomes more subcritical (i.e., k << 1.0), spatial effects can introduce significant biases depending on the source and detector positions. However, methods are available that can correct for these biases and, hence, can allow measuring subcriticality in systems with k as low as 0.5. 12 refs., 3 figs
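A simplified, hedged illustration of the source-jerk idea (the paper least-squares fits the full analytical point-reactor solution; here only the prompt-jump approximation is used, with one delayed group and made-up kinetics constants):

```python
import math

# One-delayed-group point-kinetics constants (illustrative values, not from the paper)
beta, lam, Lam = 0.0065, 0.08, 1.0e-4   # delayed fraction, precursor decay (1/s), generation time (s)
rho = -0.005                             # true (negative) reactivity; k = 1/(1 - rho)
S = 1.0e4                                # extraneous source strength

# Steady subcritical state with the source present: rho*n/Lam + S = 0
n = -S * Lam / rho
C = beta * n / (Lam * lam)
n0 = n

# Source jerk at t = 0: integrate point kinetics with S = 0 (forward Euler)
dt, t_end = 1.0e-5, 0.05
for _ in range(int(t_end / dt)):
    dn = ((rho - beta) / Lam * n + lam * C) * dt
    dC = (beta / Lam * n - lam * C) * dt
    n += dn
    C += dC

# Prompt-jump relation: n1/n0 = beta/(beta - rho)  =>  rho ~ beta*(n1 - n0)/n1
n1 = n
rho_est = beta * (n1 - n0) / n1
k_est = 1.0 / (1.0 - rho_est)
```

The level right after the prompt transient, relative to the pre-jerk equilibrium, fixes the reactivity and hence k; fitting the full transient, as in the abstract, uses the delayed-neutron decay as well and is less sensitive to where n1 is read off.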
The modified inverse hockey stick technique for adjuvant irradiation after mastectomy
International Nuclear Information System (INIS)
Kukolowicz, P.; Selerski, B.; Kuszewski, T.; Wieczorek, A.
2004-01-01
To present the technique of irradiation of post-mastectomy patients used in the Holycross Cancer Centre in Kielce. The paper presents a detailed description of the technique, which is referred to as the 'modified inverse hockey stick technique (MIHS)'. The dosimetric characteristics of the dose distribution for the MIHS technique are presented based on dose distributions calculated for 40 patients. The measures used to evaluate dose distribution included the standard deviation of the dose in the Planning Target Volume (PTV), the percentage of the PTV volume receiving a dose larger than 110% and smaller than 90%, the lung volume receiving at least 20 Gy (LV20) and the heart volume receiving at least 30 Gy (HV30). The distribution of the electron beam energy is also presented. The standard deviation of the dose in the PTV was approx. 10% in the majority of patients. About 12% of the PTV volume received a dose more than 10% smaller than intended and about 10% of the PTV volume received a dose more than 10% greater than intended. For patients irradiated on the left side of the chest wall the LV20 was always less than 25%, and for patients irradiated on the right side of the chest wall it was always less than 35%, except for one patient, in whom it reached 37%. The HV30 was always below 8%. The MIHS technique is a safe and reliable modality. The main advantages of the technique include very convenient and easily repeatable positioning of the patient and small doses delivered to the organs at risk. The individually calculated bolus plays an important role in diminishing the dose to the lung and heart. The disadvantages of the technique include poor dose homogeneity within the PTV and long matching lines of the electron and photon beams. (author)
Inverse Optimization and Forecasting Techniques Applied to Decision-making in Electricity Markets
DEFF Research Database (Denmark)
Saez Gallego, Javier
This thesis deals with the development of new mathematical models that support the decision-making processes of market players. It addresses the problems of demand-side bidding, price-responsive load forecasting and reserve determination. From a methodological point of view, we investigate a novel approach to model the response of aggregate price-responsive load as a constrained optimization model, whose parameters are estimated from data by using inverse optimization techniques. The problems tackled in this dissertation are motivated, on one hand, by the increasing penetration of renewable energy... patterns that the load traditionally exhibited. On the other hand, this thesis is motivated by the decision-making processes of market players. In response to these challenges, this thesis provides mathematical models for decision-making under uncertainty in electricity markets. Demand-side bidding refers...
Sodium ion conducting polymer electrolyte membrane prepared by phase inversion technique
Harshlata; Mishra, Kuldeep; Rai, D. K.
2018-04-01
A mechanically stable porous polymer membrane of Poly(vinylidene fluoride-hexafluoropropylene) has been prepared by the phase inversion technique using steam as a non-solvent. The membrane possesses a semicrystalline network with enhanced amorphicity as observed by X-ray diffraction. The membrane has been soaked in an electrolyte solution of 0.5M NaPF6 in Ethylene Carbonate/Propylene Carbonate (1:1) to obtain the gel polymer electrolyte. The porosity and electrolyte uptake of the membrane have been found to be 67% and 220% respectively. The room temperature ionic conductivity of the membrane has been obtained as ~0.3 mS cm-1. The conductivity follows Arrhenius behavior with temperature and yields an activation energy of 0.8 eV. The membrane has been found to possess a significantly large electrochemical stability window of 5.0 V.
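The Arrhenius analysis reported above can be reproduced in a few lines: the activation energy is the slope of ln(sigma) versus 1/T. The two temperatures below are illustrative assumptions, while Ea = 0.8 eV and ~0.3 mS/cm are the values quoted in the abstract:

```python
import math

kB = 8.617e-5                      # Boltzmann constant in eV/K
Ea_true = 0.8                      # activation energy in eV (value reported above)
sigma_rt = 0.3e-3                  # ~0.3 mS/cm at room temperature (value reported above)
T1, T2 = 298.0, 328.0              # two measurement temperatures in K (assumed)

# Conductivities generated from the Arrhenius law sigma(T) = sigma0*exp(-Ea/(kB*T))
sigma0 = sigma_rt * math.exp(Ea_true / (kB * T1))
s1 = sigma0 * math.exp(-Ea_true / (kB * T1))
s2 = sigma0 * math.exp(-Ea_true / (kB * T2))

# Slope of ln(sigma) vs 1/T is -Ea/kB, so:
Ea_est = -kB * (math.log(s2) - math.log(s1)) / (1.0 / T2 - 1.0 / T1)
```

In practice one fits the slope over many temperature points rather than two, but the extracted quantity is the same.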
Inverse Function: Pre-Service Teachers' Techniques and Meanings
Paoletti, Teo; Stevens, Irma E.; Hobson, Natalie L. F.; Moore, Kevin C.; LaForest, Kevin R.
2018-01-01
Researchers have argued teachers and students are not developing connected meanings for function inverse, thus calling for a closer examination of teachers' and students' inverse function meanings. Responding to this call, we characterize 25 pre-service teachers' inverse function meanings as inferred from our analysis of clinical interviews. After…
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Rao, M.M.M.; Sadhuram, Y.; Sridevi, B.; Maneesha, K.; SujithKumar, S.; Prasanna, P.L.; Murthy, K.S.R.
of Bengal during south-west monsoon season and explore possibility to reconstruct the acoustic profile of the eddy by Stochastic Inverse Technique. A simulation experiment on forward and inverse problems for observed sound velocity perturbation field has...
International Nuclear Information System (INIS)
Stieler, Florian; Yan, Hui; Lohr, Frank; Wenz, Frederik; Yin, Fang-Fang
2009-01-01
Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be 'translated' to a set of 'if-then rules' for driving the FISs. To minimize subjective error which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the 'behavior' of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. The study demonstrated a feasible way
Directory of Open Access Journals (Sweden)
Wenz Frederik
2009-09-01
Abstract. Background: Parameter optimization in the process of inverse treatment planning for intensity modulated radiation therapy (IMRT) is mainly conducted by human planners in order to create a plan with the desired dose distribution. To automate this tedious process, an artificial intelligence (AI) guided system was developed and examined. Methods: The AI system can automatically accomplish the optimization process based on prior knowledge operated by several fuzzy inference systems (FIS). Prior knowledge, which was collected from human planners during their routine trial-and-error process of inverse planning, has first to be "translated" to a set of "if-then rules" for driving the FISs. To minimize subjective error, which could be costly during this knowledge acquisition process, it is necessary to find a quantitative method to automatically accomplish this task. A well-developed machine learning technique, based on an adaptive neuro fuzzy inference system (ANFIS), was introduced in this study. Based on this approach, prior knowledge of a fuzzy inference system can be quickly collected from observation data (clinically used constraints). The learning capability and the accuracy of such a system were analyzed by generating multiple FIS from data collected from an AI system with known settings and rules. Results: Multiple analyses showed good agreements of FIS and ANFIS according to rules (error of the output values of ANFIS based on the training data from FIS of 7.77 ± 0.02%) and membership functions (3.9%), thus suggesting that the "behavior" of an FIS can be propagated to another, based on this process. The initial experimental results on a clinical case showed that ANFIS is an effective way to build FIS from practical data, and analysis of ANFIS and FIS with clinical cases showed good planning results provided by ANFIS. OAR volumes encompassed by characteristic percentages of isodoses were reduced by a mean of between 0 and 28%. Conclusion: The
Directory of Open Access Journals (Sweden)
K. Verbist
2009-10-01
In arid and semi-arid zones, runoff harvesting techniques are often applied to increase water retention and infiltration on steep slopes. Additionally, they act as an erosion control measure to reduce land degradation hazards. Nevertheless, few efforts have been made to quantify the water harvesting processes of these techniques and to evaluate their efficiency. In this study, a combination of detailed field measurements and modelling with the HYDRUS-2D software package was used to visualize the effect of an infiltration trench on the soil water content of a bare slope in northern Chile. Rainfall simulations were combined with high spatial and temporal resolution water content monitoring in order to construct a useful dataset for inverse modelling purposes. Initial estimates of model parameters were provided by detailed infiltration and soil water retention measurements. Four different measurement techniques were used to determine the saturated hydraulic conductivity (K_{sat}) independently. The tension infiltrometer measurements proved a good estimator of the K_{sat} value and a proxy for those measured under simulated rainfall, whereas the pressure and constant head well infiltrometer measurements showed larger variability. Six different parameter optimization functions were tested as combinations of soil-water content, water retention and cumulative infiltration data. Infiltration data alone proved insufficient to obtain high model accuracy, due to large scatter in the data set, and water content data were needed to obtain optimized effective parameter sets with small confidence intervals. Correlation between the observed soil water content and the simulated values was as high as R^{2}=0.93 for ten selected observation points used in the model calibration phase, with overall correlation for the 22 observation points equal to 0.85. The model results indicate that the infiltration trench has a
Inverse kinematics technique for the study of fission-fragment isotopic yields at GANIL energies
International Nuclear Information System (INIS)
Delaune, O.
2012-01-01
The characteristics of the fission-product distributions result from dynamical and quantum properties of the deformation process of the fissioning nucleus. These distributions are also of interest for the conception of new nuclear power plants and for the transmutation of nuclear waste. Up to now, our understanding of nuclear fission has remained restricted because of experimental limitations. In particular, yields of the heavy fission products are difficult to obtain with precision. In this work, an innovative experimental technique is presented. It is based on inverse kinematics coupled with a spectrometer, in which a 238 U beam at 6 or 24 A MeV impinges on light targets. Several actinides, from 238 U to 250 Cf, are produced by transfer or fusion reactions, with excitation energies ranging from about ten to a few hundred MeV depending on the reaction and the beam energy. The fission fragments of these actinides are detected by the VAMOS spectrometer or the LISE separator. The isotopic yields of fission products are fully measured for different fissioning systems. The neutron excess of the fragments is used to characterise the isotopic distributions. Its evolution with excitation energy gives important insights into the mechanisms of compound-nucleus formation and its deexcitation. The neutron excess is also used to determine the multiplicity of neutrons evaporated by the fragments. The role of proton and neutron shell effects in the formation of fission fragments is also discussed. (author)
Objective quantification of perturbations produced with a piecewise PV inversion technique
Directory of Open Access Journals (Sweden)
L. Fita
2007-11-01
PV inversion techniques have been widely used in numerical studies of severe weather cases. These techniques can be applied as a way to study the sensitivity of the responsible meteorological system to changes in the initial conditions of the simulations. Dynamical effects of a collection of atmospheric features involved in the evolution of the system can be isolated. However, aspects such as the definition of the atmospheric features or the amount of change in the initial conditions are largely case-dependent and/or subjectively defined. An objective way to calculate the modification of the initial fields is proposed to alleviate this problem. The perturbations are quantified as the mean absolute variations of the total energy between the original and modified fields, and a unique energy variation value is fixed for all the perturbations derived from different PV anomalies. Thus, PV features of different dimensions and characteristics introduce the same net modification of the initial conditions from an energetic point of view. The devised quantification method is applied to study the high impact weather case of 9-11 November 2001 in the Western Mediterranean basin, when a deep and strong cyclone was formed. On the Balearic Islands 4 people died, and sustained winds of 30 m s-1 and precipitation higher than 200 mm/24 h were recorded. Moreover, 700 people died in Algiers during the first phase of the event. The sensitivities to perturbations in the initial conditions of a deep upper level trough, the anticyclonic system related to the North Atlantic high, and the surface thermal anomaly related to the baroclinicity of the environment are determined. Results reveal a high influence of the upper level trough and the surface thermal anomaly and a minor role of the North Atlantic high during the genesis of the cyclone.
Tuning of Block Copolymer Membrane Morphology through Water Induced Phase Inversion Technique
Madhavan, Poornima
2016-06-01
surface and pore walls of PS-b-P4VP block copolymer membranes and then investigated the biocidal activity of the membranes with the grown silver nanoparticles. Finally, novel photoresponsive nanostructured triblock copolymer membranes were developed by the phase inversion technique. In addition, their photoresponsive behavior on irradiation with light and their membrane flux and retention properties were studied.
Energy Technology Data Exchange (ETDEWEB)
Gupta, S. C.P.; Khan, A. A.; Dass, L. L.; Sahay, P. N.; Jha, G. J.
1985-07-01
Single-layer end-to-end inverted and everted techniques of entero-anastomosis were evaluated in sixteen male buffalo calves using silk and catgut sutures. All the animals of the everting group showed areas of adhesion grossly, whereas adhesions were seen in only three animals of the inverting group. Histological evidence revealed a more uniform healing pattern in the inversion group, and radiography suggested a comparatively greater degree of stenosis, though without functional impairment of the intestinal lumen, than in the everting anastomosis. Connective tissue proliferation and mononuclear cell infiltration were very minimal with silk suture, whereas these were pronounced with catgut, irrespective of anastomotic technique. Thus the inversion technique of anastomosis accomplished by single-layer suturing with silk thread was ideal for entero-anastomosis in cattle.
The application of neural network techniques to magnetic and optical inverse problems
International Nuclear Information System (INIS)
Jones, H.V.
2000-12-01
The processing power of the computer has increased at unimaginable rates over the last few decades. However, even today's fastest computer can take several hours to find solutions to some mathematical problems; and there are instances where a high powered supercomputer may be impractical, with the need for near instant solutions just as important (such as in an on-line testing system). This led us to believe that such complex problems could be solved using a novel approach, whereby the system would have prior knowledge about the expected solutions through a process of learning. One method of approaching this kind of problem is through the use of machine learning. Just as a human can be trained and is able to learn from past experiences, a machine can do just the same. This is the concept of neural networks. The research which was conducted involves the investigation of various neural network techniques, and their applicability to solving some known complex inverse problems in the field of magnetic and optical recording. In some cases a comparison is also made to more conventional methods of solving the problems, from which it was possible to outline some key advantages of using a neural network approach. We initially investigated the application of neural networks to transverse susceptibility data in order to determine anisotropy distributions. This area of research is proving to be very important, as it gives us information about the switching field distribution, which then determines the minimum transition width achievable in a medium, and affects the overwrite characteristics of the media. Secondly, we investigated a similar situation, but applied to an optical problem. This involved the determination of important compact disc parameters from the diffraction pattern of a laser from a disc. This technique was then intended for use in an on-line testing system. Finally we investigated another area of neural networks with the analysis of magnetisation maps and
Energy Technology Data Exchange (ETDEWEB)
Parchevsky, K. V.; Zhao, J.; Hartlep, T.; Kosovichev, A. G., E-mail: akosovichev@solar.stanford.edu [Stanford University, HEPL, Stanford, CA 94305 (United States)
2014-04-10
We performed three-dimensional numerical simulations of the solar surface acoustic wave field for the quiet Sun and for three models with different localized sound-speed perturbations in the interior with deep, shallow, and two-layer structures. We used the simulated data generated by two solar acoustics codes that employ the same standard solar model as a background model, but utilize different integration techniques and different models of stochastic wave excitation. Acoustic travel times were measured using a time-distance helioseismology technique, and compared with predictions from ray theory frequently used for helioseismic travel-time inversions. It is found that the measured travel-time shifts agree well with the helioseismic theory for sound-speed perturbations, and for the measurement procedure with and without phase-speed filtering of the oscillation signals. This testing verifies the whole measuring-filtering-inversion procedure for static sound-speed anomalies with small amplitude inside the Sun outside regions of strong magnetic field. It is shown that the phase-speed filtering, frequently used to extract specific wave packets and improve the signal-to-noise ratio, does not introduce significant systematic errors. Results of the sound-speed inversion procedure show good agreement with the perturbation models in all cases. Due to its smoothing nature, the inversion procedure may overestimate sound-speed variations in regions with sharp gradients of the sound-speed profile.
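The travel-time measurement step of time-distance helioseismology is, at its core, locating the lag of a cross-correlation maximum between oscillation signals at two points. A hedged toy version with a synthetic wave packet (all parameters below are ours, not the simulation's):

```python
import numpy as np

# Synthetic oscillation signal: a wave packet, and a copy delayed by a known lag
dt = 0.1                                  # sampling interval (arbitrary time units)
t = np.arange(0.0, 200.0, dt)
rng = np.random.default_rng(2)
sig = np.sin(2 * np.pi * 0.05 * t) * np.exp(-((t - 100.0) / 30.0) ** 2)
shift_samples = 25                        # true delay: 25 samples = 2.5 time units
delayed = np.roll(sig, shift_samples) + 0.01 * rng.standard_normal(t.size)

# Cross-correlate and take the lag of the correlation maximum as the travel time
corr = np.correlate(delayed, sig, mode="full")
lag = int(np.argmax(corr)) - (sig.size - 1)
travel_time = lag * dt
```

Real measurements fit the correlation peak (e.g. with a Gabor wavelet) for sub-sample precision and apply phase-speed filtering beforehand, which is exactly the filtering step whose systematics the abstract examines.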
Wang, Feiyan; Morten, Jan Petter; Spitzer, Klaus
2018-05-01
In this paper, we present a recently developed anisotropic 3-D inversion framework for interpreting controlled-source electromagnetic (CSEM) data in the frequency domain. The framework integrates a high-order finite-element forward operator and a Gauss-Newton inversion algorithm. Conductivity constraints are applied using a parameter transformation. We discretize the continuous forward and inverse problems on unstructured grids for a flexible treatment of arbitrarily complex geometries. Moreover, an unstructured mesh is more desirable in comparison to a single rectilinear mesh for multisource problems because local grid refinement will not significantly influence the mesh density outside the region of interest. The non-uniform spatial discretization facilitates parametrization of the inversion domain at a suitable scale. For a rapid simulation of multisource EM data, we opt to use a parallel direct solver. We further accelerate the inversion process by decomposing the entire data set into subsets with respect to frequencies (and transmitters if memory requirement is affordable). The computational tasks associated with each data subset are distributed to different processes and run in parallel. We validate the scheme using a synthetic marine CSEM model with rough bathymetry, and finally, apply it to an industrial-size 3-D data set from the Troll field oil province in the North Sea acquired in 2008 to examine its robustness and practical applicability.
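The combination of a Gauss-Newton update with a conductivity-bounding parameter transformation can be sketched on a toy one-parameter problem; the decaying-amplitude forward model, the bounds, and the backtracking safeguard below are illustrative assumptions, not the paper's CSEM physics:

```python
import numpy as np

lo, hi = 0.01, 10.0                      # assumed conductivity bounds in S/m

def to_sigma(p):
    """Logistic transform mapping the unconstrained variable p into (lo, hi)."""
    return lo + (hi - lo) / (1.0 + np.exp(-p))

def dsigma_dp(p):
    s = 1.0 / (1.0 + np.exp(-p))
    return (hi - lo) * s * (1.0 - s)

# Toy forward model: field amplitudes decaying with conductivity at offsets x
x = np.linspace(0.5, 3.0, 12)

def forward(sigma):
    return np.exp(-sigma * x)

sigma_true = 1.7
d_obs = forward(sigma_true)

def misfit(p):
    return float(np.sum((d_obs - forward(to_sigma(p))) ** 2))

# Gauss-Newton iterations in the transformed variable p
p = 0.0                                  # start at mid-interval (sigma ~ 5)
for _ in range(30):
    sigma = to_sigma(p)
    r = d_obs - forward(sigma)                       # data residual
    J = (-x * np.exp(-sigma * x)) * dsigma_dp(p)     # d(forward)/dp by chain rule
    step = float(np.sum(J * r) / np.sum(J * J))      # 1-D Gauss-Newton step
    while misfit(p + step) > misfit(p) and abs(step) > 1e-12:
        step *= 0.5                                  # crude backtracking for robustness
    p += step
sigma_est = to_sigma(p)
```

The transform guarantees every iterate stays inside the physical bounds without explicit inequality constraints, which is the role the parameter transformation plays in the full 3-D framework.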
Revil, A.
2015-12-01
Geological expertise and petrophysical relationships can be brought together to provide prior information while inverting multiple geophysical datasets. The merging of such information can result in more realistic solutions for the distribution of the model parameters, reducing ipso facto the non-uniqueness of the inverse problem. We consider two levels of heterogeneity: facies, described by facies boundaries, and heterogeneities inside each facies, described by a correlogram. In this presentation, we pose the geophysical inverse problem in terms of Gaussian random fields with mean functions controlled by petrophysical relationships and covariance functions controlled by a prior geological cross-section, including the definition of spatial boundaries for the geological facies. The petrophysical relationship problem is formulated as a regression problem upon each facies. The inversion of the geophysical data is performed in a Bayesian framework. We demonstrate the usefulness of this strategy using a first synthetic case for which we perform a joint inversion of gravity and galvanometric resistivity data with the stations located at the ground surface. The joint inversion is used to recover the density and resistivity distributions of the subsurface. In a second step, we consider the possibility that the facies boundaries are deformable and their shapes are inverted as well. We use the level set approach to perform such deformation, preserving prior topological properties of the facies throughout the inversion. With the help of prior facies petrophysical relationships and topological characteristics of each facies, we make posterior inferences about multiple geophysical tomograms based on their corresponding geophysical data misfits. The method is applied to a second synthetic case showing that we can recover the heterogeneities inside the facies, the mean values for the petrophysical properties, and, to some extent, the facies boundaries using the 2D joint inversion of
Identifying Isotropic Events using an Improved Regional Moment Tensor Inversion Technique
Energy Technology Data Exchange (ETDEWEB)
Dreger, Douglas S. [Univ. of California, Berkeley, CA (United States); Ford, Sean R. [Univ. of California, Berkeley, CA (United States); Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States); Walter, William R. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States)
2016-12-08
Research was carried out investigating the feasibility of using a regional-distance seismic waveform moment tensor inversion procedure to estimate source parameters of nuclear explosions and to use the source inversion results to develop a source-type discrimination capability. The results of the research indicate that it is possible to robustly determine the seismic moment tensor of nuclear explosions, and when compared to natural seismicity on a Hudson et al. (1989) source-type diagram, they are found to separate from populations of earthquakes and underground cavity-collapse seismic sources.
Source Identification in Structural Acoustics with an Inverse Frequency Response Function Technique
Visser, Rene
2002-01-01
Inverse source identification based on acoustic measurements is essential for the investigation and understanding of sound fields generated by structural vibrations of various devices and machinery. Acoustic pressure measurements performed on a grid in the nearfield of a surface can be used to
Advanced Multivariate Inversion Techniques for High Resolution 3D Geophysical Modeling
2011-09-01
2005). We implemented a method to increase the usefulness of gravity data by filtering the Bouguer anomaly map. [...] To remove the long-wavelength components from the Bouguer gravity map we follow Tessema and Antoine (2004), who use an upward continuation method. [...] [Figure caption fragment: inversion of group velocities and gravity. (a) Top: group velocities from a representative cell in the model. Bottom: filtered Bouguer anomalies. (b
Parlangeau, Camille; Lacombe, Olivier; Daniel, Jean-Marc; Schueller, Sylvie
2015-04-01
Inversion of calcite twin data is known to be a powerful tool to reconstruct the past state of stress in carbonate rocks of the crust, especially in fold-and-thrust belts and sedimentary basins. This is of key importance to constrain results of geomechanical modelling. Without proposing a new inversion scheme, this contribution reports some recent improvements of the most efficient stress inversion technique to date (Etchecopar, 1984), which allows reconstruction of the 5 parameters of the deviatoric paleostress tensors (principal stress orientations and differential stress magnitudes) from monophase and polyphase twin data sets. The improvements concern the search for the possible tensors that account for the twin data (twinned and untwinned planes) and the aid given to the user to define the best stress tensor solution, among others. We perform a systematic exploration of a hypersphere in 4 dimensions by varying the different parameters, Euler's angles and the stress ratio. We first record all tensors with a minimum penalization function accounting for 20% of the twinned planes. We then define clusters of tensors following a dissimilarity criterion based on the stress distance between the 4 parameters of the reduced stress tensors and a degree of disjunction of the related sets of twinned planes. The percentage of twinned data to be explained by each tensor is then progressively increased and tested using the standard Etchecopar procedure until the best solution, which explains the maximum number of twinned planes and the whole set of untwinned planes, is reached. This new inversion procedure is tested on monophase and polyphase numerically-generated as well as natural calcite twin data in order to more accurately define the ability of the technique to separate more or less similar deviatoric stress tensors applied in sequence on the samples, to test the impact of strain hardening through the change of the critical resolved shear stress for twinning, as well as to evaluate the
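The core quantity in such twin inversions is the resolved shear stress on each potential twin plane. A minimal sketch (assumed conventions, not the Etchecopar implementation): build a reduced stress tensor with principal values (1, Φ, 0), where Φ = (σ2−σ3)/(σ1−σ3), from z-x-z Euler angles, then count the (plane normal, twin direction) pairs whose resolved shear stress reaches an assumed critical value.

```python
import numpy as np

def reduced_stress_tensor(euler_zxz, phi):
    """Reduced stress tensor with principal values (1, phi, 0),
    phi = (s2 - s3)/(s1 - s3), rotated by z-x-z Euler angles (radians)."""
    a, b, c = euler_zxz
    def rz(t):
        return np.array([[np.cos(t), -np.sin(t), 0.0],
                         [np.sin(t),  np.cos(t), 0.0],
                         [0.0, 0.0, 1.0]])
    def rx(t):
        return np.array([[1.0, 0.0, 0.0],
                         [0.0, np.cos(t), -np.sin(t)],
                         [0.0, np.sin(t),  np.cos(t)]])
    r = rz(a) @ rx(b) @ rz(c)
    return r @ np.diag([1.0, phi, 0.0]) @ r.T

def fraction_twinned(sigma, normals, directions, tau_c):
    """Fraction of (plane normal, twin direction) pairs whose resolved
    shear stress d . sigma . n reaches the critical value tau_c."""
    tau = np.einsum('ij,jk,ik->i', directions, sigma, normals)
    return np.mean(tau >= tau_c)

# example: sigma1 along x, a 45-degree plane has resolved shear stress 0.5
sigma = reduced_stress_tensor((0.0, 0.0, 0.0), 0.5)
n = np.array([[2**-0.5, 0.0,  2**-0.5]])
d = np.array([[2**-0.5, 0.0, -2**-0.5]])
print(fraction_twinned(sigma, n, d, 0.4))
```

A grid or random search over the Euler angles and Φ, scored by how well predicted twinned/untwinned sets match the data, is the brute-force analogue of the hypersphere exploration described above.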
Hierarchical probing for estimating the trace of the matrix inverse on toroidal lattices
Energy Technology Data Exchange (ETDEWEB)
Stathopoulos, Andreas [College of William and Mary, Williamsburg, VA; Laeuchli, Jesse [College of William and Mary, Williamsburg, VA; Orginos, Kostas [College of William and Mary, Williamsburg, VA; Jefferson Lab
2013-10-01
The standard approach for computing the trace of the inverse of a very large, sparse matrix $A$ is to view the trace as the mean value of matrix quadratures, and use the Monte Carlo algorithm to estimate it. This approach is heavily used in our motivating application of Lattice QCD. Often, the elements of $A^{-1}$ display certain decay properties away from the nonzero structure of $A$, but random vectors cannot exploit this induced structure of $A^{-1}$. Probing is a technique that, given a sparsity pattern of $A$, discovers elements of $A$ through matrix-vector multiplications with specially designed vectors. In the case of $A^{-1}$, the pattern is obtained by distance-$k$ coloring of the graph of $A$. For sufficiently large $k$, the method produces accurate trace estimates but the cost of producing the colorings becomes prohibitive. More importantly, it is difficult to search for an optimal $k$ value, since none of the work for prior choices of $k$ can be reused.
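The Monte Carlo baseline that probing improves upon can be sketched in a few lines. Below is a generic Hutchinson estimator of tr(A^{-1}) using Rademacher vectors and a sparse LU factorization on a toy matrix (not the hierarchical probing method itself, which replaces the random vectors with coloring-based probing vectors):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def hutchinson_trace_inv(A, n_samples, rng):
    """Monte Carlo (Hutchinson) estimate of tr(A^{-1}): average of
    z^T A^{-1} z over random +/-1 vectors z."""
    lu = spla.splu(A.tocsc())        # factor once, solve many right-hand sides
    n = A.shape[0]
    total = 0.0
    for _ in range(n_samples):
        z = rng.choice([-1.0, 1.0], size=n)   # Rademacher probe vector
        total += z @ lu.solve(z)
    return total / n_samples

# toy example: a diagonally dominant tridiagonal sparse matrix
n = 100
A = sp.diags([-1.0, 2.5, -1.0], [-1, 0, 1], shape=(n, n), format='csc')
rng = np.random.default_rng(1)
est = hutchinson_trace_inv(A, 200, rng)
exact = np.trace(np.linalg.inv(A.toarray()))
print(est, exact)
```

In Lattice QCD one cannot factor or invert A directly; the solves are done iteratively, which is why reducing the number and variance of the probe vectors matters so much.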
Note of non-destructive detection of voids by a high frequency inversion technique
International Nuclear Information System (INIS)
Cohen, J.K.; Bleistein, N.
1978-01-01
An inverse method for nondestructive detection of scatterers of high contrast, such as voids or strongly reflecting inclusions, is described. The phase and range normalized far field scattering amplitude is shown to be directly proportional to the Fourier transform of the characteristic function of the scatterer. The characteristic function is equal to unity inside the region occupied by the scatterer and is zero outside. Thus, knowledge of this function provides a description of the scatterer. The method is applied to flaws in a sphere
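The stated proportionality can be checked in one dimension, where the characteristic function of the interval [-a, a] (1 inside, 0 outside) has the closed-form transform 2 sin(ka)/k. The snippet below is a toy analogue of the 3-D void problem, comparing a numerical transform against that formula:

```python
import numpy as np

def char_fn_transform(k, a, n=20001):
    """Fourier transform of the characteristic function of [-a, a],
    integrated with the trapezoidal rule on a uniform grid."""
    x = np.linspace(-a, a, n)
    vals = np.exp(-1j * k * x)
    dx = x[1] - x[0]
    return dx * (vals.sum() - 0.5 * (vals[0] + vals[-1]))

a = 1.5  # half-width of the "scatterer" (arbitrary units)
for k in (0.5, 2.0, 5.0):
    print(k, char_fn_transform(k, a).real, 2.0 * np.sin(k * a) / k)
```

Inverting the relationship, i.e. recovering the support of the characteristic function from band-limited samples of its transform, is the essence of the detection method described above.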
128Xe Lifetime Measurement Using the Coulex-Plunger Technique in Inverse Kinematics
International Nuclear Information System (INIS)
Konstantinopoulos, T.; Lagoyannis, A.; Harissopulos, S.; Dewald, A.; Rother, W.; Ilie, G.; Jones, P.; Rakhila, P.; Greenlees, P.; Grahn, T.; Julin, R.; Balabanski, D. L.
2008-01-01
The lifetimes of the lowest collective yrast and non-yrast states in 128Xe were measured in a Coulomb excitation experiment using the recoil distance method (RDM) in inverse kinematics. For this purpose, the Cologne plunger apparatus was employed together with the JUROGAM spectrometer. Excited states in 128Xe were populated using a 128Xe beam impinging on a natFe target with E(128Xe)≅525 MeV. Recoils were detected by means of an array of solar cells placed at forward angles. Recoil-gated γ-spectra were measured at different plunger distances
128Xe Lifetime Measurement Using the Coulex-Plunger Technique in Inverse Kinematics
Konstantinopoulos, T.; Lagoyannis, A.; Harissopulos, S.; Dewald, A.; Rother, W.; Ilie, G.; Jones, P.; Rakhila, P.; Greenlees, P.; Grahn, T.; Julin, R.; Balabanski, D. L.
2008-05-01
The lifetimes of the lowest collective yrast and non-yrast states in 128Xe were measured in a Coulomb excitation experiment using the recoil distance method (RDM) in inverse kinematics. For this purpose, the Cologne plunger apparatus was employed together with the JUROGAM spectrometer. Excited states in 128Xe were populated using a 128Xe beam impinging on a natFe target with E(128Xe)~525 MeV. Recoils were detected by means of an array of solar cells placed at forward angles. Recoil-gated γ-spectra were measured at different plunger distances.
International Nuclear Information System (INIS)
Namatame, Hirofumi; Taniguchi, Masaki
1994-01-01
Photoelectron spectroscopy is regarded as a most powerful technique, since it can probe the occupied electron states almost completely. Inverse photoelectron spectroscopy, in turn, measures the unoccupied electron states by using the inverse process of photoemission, and in principle allows experiments similar to those of photoelectron spectroscopy. The experimental technology for inverse photoelectron spectroscopy has been developed energetically by many research groups. At present, work is under way on improving the resolution of inverse photoelectron spectroscopy and on developing spectrometers with tunable photon energy, but no inverse photoelectron spectrometer for the vacuum ultraviolet region is commercially available. In this report, the principle of inverse photoelectron spectroscopy and the present state of the instrumentation are described, and directions for future development are explored. As experimental equipment, electron guns, photon detectors and so on are explained. As examples of experiments, the inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)
International Nuclear Information System (INIS)
Serai, Suraj; Towbin, Alexander J.; Podberesky, Daniel J.
2012-01-01
Abdominal contrast-enhanced MR angiography (CE-MRA) is routinely performed in children. CE-MRA is challenging in children because of patient motion, difficulty in obtaining intravenous access, and the inability of young patients to perform a breath-hold during imaging. The combination of pediatric-specific difficulties in imaging and the safety concerns regarding the risk of gadolinium-based contrast agents in patients with impaired renal function has renewed interest in the use of non-contrast (NC) MRA techniques. At our institution, we have optimized 3-D NC-MRA techniques for abdominal imaging. The purpose of this work is to demonstrate the utility of an inflow-enhanced, inversion recovery balanced steady-state free precession-based (b-SSFP) NC-MRA technique. (orig.)
Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.
2012-12-01
The Balanço Atmosférico Regional de Carbono na Amazônia (BARCA) aircraft program spanned the dry-to-wet and wet-to-dry transition seasons in November 2008 and May 2009, respectively. It resulted in ~150 vertical profiles covering the Brazilian Amazon Basin (BAB). With the data we attempt to estimate a carbon budget for the BAB, to determine whether regional aircraft experiments can provide strong constraints for such a budget, and to compare inversion frameworks when optimizing flux estimates. We use a Lagrangian particle dispersion model (LPDM) to integrate satellite, aircraft and surface data with mesoscale meteorological fields to link bottom-up and top-down models and provide constraints and error bounds for regional fluxes. The Stochastic Time-Inverted Lagrangian Transport (STILT) model, driven by meteorological fields from BRAMS, ECMWF, and WRF, is coupled to a biosphere model, the Vegetation Photosynthesis Respiration Model (VPRM), to determine regional CO2 fluxes for the BAB. The VPRM is a prognostic biosphere model driven by MODIS 8-day EVI and LSWI indices along with shortwave radiation and temperature from tower measurements and mesoscale meteorological data. VPRM parameters are tuned using eddy flux tower data from the Large-Scale Biosphere-Atmosphere experiment. VPRM computes hourly CO2 fluxes by calculating Gross Ecosystem Exchange (GEE) and Respiration (R) for 8 different vegetation types. The VPRM fluxes are scaled up to the BAB by using time-averaged drivers (shortwave radiation and temperature) from high-temporal-resolution runs of BRAMS, ECMWF, and WRF and vegetation maps from SYNMAP and IGBP2007. Shortwave radiation from each mesoscale model is validated using surface data and output from GL 1.2, a global radiation model based on GOES 8 visible imagery. The vegetation maps are updated to 2008 and 2009 using land-use scenarios modeled by Sim Amazonia 2 and Sim Brazil. A priori fluxes modeled by STILT-VPRM are optimized using data from BARCA, eddy covariance sites, and flask measurements. The
Karunakaran, Madhavan; Shevate, Rahul; Peinemann, Klaus-Viktor
2016-01-01
In this paper, we demonstrate the formation of nanostructured double hydrophobic poly(styrene-b-methyl methacrylate) (PS-b-PMMA) block copolymer membranes via the state-of-the-art phase inversion technique. The nanostructured membrane morphologies are tuned by different solvent and block copolymer compositions. The membrane morphology has been investigated using FESEM, AFM and TEM. Morphological investigation shows the formation of both cylindrical and lamellar structures on the top surface of the block copolymer membranes. The PS-b-PMMA with equal block lengths (PS160K-b-PMMA160K) exhibits both cylindrical and lamellar structures on the top layer of the asymmetric membrane. All membranes fabricated from PS160K-b-PMMA160K show incomplete pore formation in both cylindrical and lamellar morphologies during the phase inversion process. However, the PS-b-PMMA (PS135K-b-PMMA19.5K) block copolymer with a short PMMA block allowed us to produce open pore structures with ordered hexagonal cylindrical pores during the phase inversion process. The resulting PS-b-PMMA nanostructured block copolymer membranes have pure water fluxes of 105-820 l/m2.h.bar and 95% retention of PEG50K.
International Nuclear Information System (INIS)
Lopez, C.; Koski, J.A.; Razani, A.
2000-01-01
A study of the errors introduced when one-dimensional inverse heat conduction techniques are applied to problems involving two-dimensional heat transfer effects was performed. The geometry used for the study was a cylinder with similar dimensions to a typical container used for the transportation of radioactive materials. The finite element analysis code MSC P/Thermal was used to generate synthetic test data that was then used as input for an inverse heat conduction code. Four different problems were considered, including one with uniform flux around the outer surface of the cylinder and three with non-uniform flux applied over 360°, 180°, and 90° sections of the outer surface of the cylinder. The Sandia One-Dimensional Direct and Inverse Thermal (SODDIT) code was used to estimate the surface heat flux in all four cases. The error analysis was performed by comparing the results from SODDIT with the heat flux calculated based on the temperature results obtained from P/Thermal. Results showed an increase in error of the surface heat flux estimates as the applied heat became more localized. For the uniform case, SODDIT provided heat flux estimates with a maximum error of 0.5%, whereas for the non-uniform cases, the maximum errors were found to be about 3%, 7%, and 18% for the 360°, 180°, and 90° cases, respectively
Karunakaran, Madhavan
2016-03-11
In this paper, we demonstrate the formation of nanostructured double hydrophobic poly(styrene-b-methyl methacrylate) (PS-b-PMMA) block copolymer membranes via the state-of-the-art phase inversion technique. The nanostructured membrane morphologies are tuned by different solvent and block copolymer compositions. The membrane morphology has been investigated using FESEM, AFM and TEM. Morphological investigation shows the formation of both cylindrical and lamellar structures on the top surface of the block copolymer membranes. The PS-b-PMMA with equal block lengths (PS160K-b-PMMA160K) exhibits both cylindrical and lamellar structures on the top layer of the asymmetric membrane. All membranes fabricated from PS160K-b-PMMA160K show incomplete pore formation in both cylindrical and lamellar morphologies during the phase inversion process. However, the PS-b-PMMA (PS135K-b-PMMA19.5K) block copolymer with a short PMMA block allowed us to produce open pore structures with ordered hexagonal cylindrical pores during the phase inversion process. The resulting PS-b-PMMA nanostructured block copolymer membranes have pure water fluxes of 105-820 l/m2.h.bar and 95% retention of PEG50K.
Directory of Open Access Journals (Sweden)
Qi Hong
2015-01-01
The particle size distribution (PSD) plays an important role in environmental pollution detection and human health protection, in such contexts as fog, haze and soot. In this study, the Attractive and Repulsive Particle Swarm Optimization (ARPSO) algorithm and the basic PSO were applied to retrieve the PSD. The spectral extinction technique coupled with the Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law was employed to investigate the retrieval of the PSD. Three commonly used monomodal PSDs, i.e. the Rosin-Rammler (R-R) distribution, the normal (N-N) distribution and the logarithmic normal (L-N) distribution, were studied in the dependent model. Then, an optimal wavelength selection algorithm was proposed. To study the accuracy and robustness of the inverse results, some characteristic parameters were employed. The research revealed that the ARPSO showed more accurate results and a faster convergence rate than the basic PSO, even with random measurement error. Moreover, the investigation also demonstrated that the inverse results obtained with four incident laser wavelengths were more accurate and robust than those obtained with two wavelengths. The research also found that increasing the interval between the selected incident laser wavelengths made the inverse results more accurate, even in the presence of random error.
Boonyasiriwat, Chaiwoot
2010-11-01
A recently developed time-domain multiscale waveform tomography (MWT) method is applied to synthetic and field marine data. Although the MWT method had already been applied to synthetic data, the synthetic data application led to the development of a hybrid method combining waveform tomography and the salt flooding technique commonly used in subsalt imaging. This hybrid method can overcome a convergence problem encountered by inversion with a traveltime velocity tomogram and successfully provides an accurate and highly resolved velocity tomogram for the 2D SEG/EAGE salt model. In the application of MWT to the field data, the inversion process is carried out using a multiscale method with a dynamic early-arrival muting window to mitigate the local minima problem of waveform tomography and elastic effects. With the modified MWT method, reasonably accurate results, as verified by comparison of migration images and common image gathers, were obtained. The hybrid method with the salt flooding technique is not used in this field data example because there is no salt in the subsurface according to our interpretation. However, we believe it is applicable to field data applications. © 2010 Society of Exploration Geophysicists.
Application of stepwise multiple regression techniques to inversion of Nimbus 'IRIS' observations.
Ohring, G.
1972-01-01
Exploratory studies with Nimbus-3 infrared interferometer-spectrometer (IRIS) data indicate that, in addition to temperature, such meteorological parameters as geopotential heights of pressure surfaces, tropopause pressure, and tropopause temperature can be inferred from the observed spectra with the use of simple regression equations. The technique of screening the IRIS spectral data by means of stepwise regression to obtain the best radiation predictors of meteorological parameters is validated. The simplicity of application of the technique and the simplicity of the derived linear regression equations - which contain only a few terms - suggest usefulness for this approach. Based upon the results obtained, suggestions are made for further development and exploitation of the stepwise regression analysis technique.
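The screening step described above can be illustrated with a generic forward stepwise selection. The example is illustrative only: the paper's predictors are IRIS spectral radiances, which are replaced here by random synthetic channels, two of which actually drive the response.

```python
import numpy as np

def forward_stepwise(X, y, max_terms):
    """Greedy forward selection: at each step add the predictor column that
    most reduces the residual sum of squares of a least-squares fit."""
    n, p = X.shape
    selected = []
    for _ in range(max_terms):
        best_j, best_rss = None, np.inf
        for j in range(p):
            if j in selected:
                continue
            cols = selected + [j]
            A = np.column_stack([np.ones(n), X[:, cols]])   # intercept + chosen terms
            coef, *_ = np.linalg.lstsq(A, y, rcond=None)
            rss = np.sum((y - A @ coef)**2)
            if rss < best_rss:
                best_j, best_rss = j, rss
        selected.append(best_j)
    return selected

# toy data: y depends on 2 of 10 simulated "spectral channels"
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10))
y = 3.0 * X[:, 4] - 2.0 * X[:, 7] + 0.1 * rng.standard_normal(200)
print(forward_stepwise(X, y, 2))   # picks channels 4 and 7
```

Stopping criteria based on an F-test or a validation set, as in standard stepwise regression packages, would replace the fixed `max_terms` in practice.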
On an asymptotic technique of solution of the inverse problem of helioseismology
International Nuclear Information System (INIS)
Brodskij, M.A.; Vorontsov, S.V.
1987-01-01
A technique for the solution of the inverse problem for the solar 5-min oscillations is proposed, which provides an independent determination of the sound speed as a function of depth in the solar interior and of the frequency dependence of the effective phase shift for the reflection of the trapped acoustic waves from the outer layers. Preliminary numerical results are presented
Inversion kinematics at deep-seated gravity slope deformations revealed by trenching techniques
Pasquaré Mariotto, Federico; Tibaldi, Alessandro
2016-01-01
We compare data from three deep-seated gravitational slope deformations (DSGSDs) where palaeoseismological techniques were applied in artificial trenches. At all trenches, located in metamorphic rocks of the Italian Alps, there is evidence of extensional deformation given by normal movements along slip planes dipping downhill or uphill, and/or fissures, as expected in gravitational failure. However, we document and illustrate – with the aid of trenching – evidenc...
International Nuclear Information System (INIS)
Ryu, Jeong Ah; Kim, Bohyun; Kim, Sooah; Yang, Soon Ha; Choi, Moon Hae; Ahn, Hyeong Sik
2003-01-01
To determine the usefulness of tissue harmonic imaging (THI) and pulse-inversion harmonic imaging (PIHI) in the evaluation of normal and abnormal fetuses. Forty-one pregnant women who bore a total of 31 normal and ten abnormal fetuses underwent conventional ultrasonography (CUS), and then THI and PIHI. US images of six organ systems, namely the brain, spine, heart, abdomen, extremities and face were compared between the three techniques in terms of overall conspicuity and the definition of borders and internal structures. For the brain, heart, abdomen and face, overall conspicuity at THI and PIHI was significantly better than at CUS (p < 0.05). There was, though, no significant difference between THI and PIHI. Affected organs in abnormal fetuses were more clearly depicted at THI and PIHI than at CUS. Both THI and PIHI appear to be superior to CUS for the evaluation of normal or abnormal structures, particularly the brain, heart, abdomen and face
International Nuclear Information System (INIS)
Popov, V.P.; Semenov, A.L.
1987-01-01
The calibration technique is described, and the metrological characteristics of a high-voltage generator of the inverse-quadratic function (HGF), a functional unit of the diagnostic system of an electrodynamic analyser of the ionic component of a laser plasma, are analysed. The results of HGF testing in the range of function time constants τ = (5-25) μs are given. Analysis of the metrological and experimental characteristics shows that the HGF with automatic calibration has quite accurate parameters. The high accuracy of function generation is ensured by the possibility of performing calibration and adjustment under experimental working conditions. An increase of the generated pulse amplitude to several tens of kilovolts is possible. Moreover, the possibility of timely adjustment of the function to the required parameter (τ) considerably extends the functional capabilities of the HGF
Simulation of sparse matrix array designs
Boehm, Rainer; Heckel, Thomas
2018-04-01
Matrix phased array probes are becoming more prominently used in industrial applications. The main drawbacks of probes incorporating a very large number of transducer elements are the need for appropriate cabling and for an ultrasonic device offering many parallel channels. Matrix arrays designed for extended functionality feature at least 64 or more elements. Typical arrangements are square matrices, e.g., 8 by 8 or 11 by 11, or rectangular matrices, e.g., 8 by 16 or 10 by 12, to fit a 128-channel phased array system. In some phased array systems, the number of simultaneously active elements is limited to a certain number, e.g., 32 or 64. Those setups do not allow running the probe with all elements active, which may cause a significant change in the directivity pattern of the resulting sound beam. When only a subset of elements can be used during a single acquisition, different strategies may be applied to collect enough data for rebuilding the missing information from the echo signal. Omission of certain elements may be one approach; overlay of subsequent shots with different active areas may be another one. This paper presents the influence of a decreased number of active elements on the sound field, and of their distribution on the array. Solutions using subsets with different element activity patterns on matrix arrays and their advantages and disadvantages concerning the sound field are evaluated using semi-analytical simulation tools. Sound field criteria are discussed which are significant for non-destructive testing results and for the system setup.
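The directivity effect described above can be sketched for a 1-D array of point-like, unfocused elements (a hypothetical 16-element geometry, far simpler than the semi-analytical tools of the paper): deactivating every other element doubles the effective pitch and raises grating lobes.

```python
import numpy as np

def array_pattern(element_positions, wavelength, theta):
    """Normalized far-field pressure magnitude of a linear array with the
    given active element positions, all driven in phase (point sources)."""
    k = 2.0 * np.pi / wavelength
    phase = np.outer(np.sin(theta), element_positions) * k
    p = np.abs(np.exp(1j * phase).sum(axis=1))
    return p / p.max()

theta = np.linspace(-np.pi / 2, np.pi / 2, 1801)
pitch, wavelength = 0.6, 1.0                 # mm; hypothetical geometry
full = np.arange(16) * pitch                 # all 16 elements active
every_other = full[::2]                      # only half the elements active
p_full = array_pattern(full, wavelength, theta)
p_sub = array_pattern(every_other, wavelength, theta)

def worst_sidelobe(p):
    """Highest level outside a small cone around the main lobe."""
    main = np.abs(theta) < 0.1
    return p[~main].max()

print(worst_sidelobe(p_full), worst_sidelobe(p_sub))
```

With pitch/λ = 0.6 the full array has no grating lobe in real angles, while the half-active array (effective pitch 1.2λ) develops a grating lobe near the main-lobe level, which is the directivity change the paper quantifies for 2-D matrix layouts.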
Better Size Estimation for Sparse Matrix Products
DEFF Research Database (Denmark)
Amossen, Rasmus Resen; Campagna, Andrea; Pagh, Rasmus
2010-01-01
We consider the problem of doing fast and reliable estimation of the number of non-zero entries in a sparse Boolean matrix product. Let n denote the total number of non-zero entries in the input matrices. We show how to compute a 1 ± ε approximation (with small probability of error) in expected t...
Smith, G. A.; Meyer, G.; Nordstrom, M.
1986-01-01
A new automatic flight control system concept suitable for aircraft with highly nonlinear aerodynamic and propulsion characteristics and which must operate over a wide flight envelope was investigated. This exact model follower inverts a complete nonlinear model of the aircraft as part of the feed-forward path. The inversion is accomplished by a Newton-Raphson trim of the model at each digital computer cycle time of 0.05 seconds. The combination of the inverse model and the actual aircraft in the feed-forward path allows the translational and rotational regulators in the feedback path to be easily designed by linear methods. An explanation of the model inversion procedure is presented. An extensive set of simulation data for essentially the full flight envelope for a vertical attitude takeoff and landing aircraft (VATOL) is presented. These data demonstrate the successful, smooth, and precise control that can be achieved with this concept. The trajectory includes conventional flight from 200 to 900 ft/sec with path accelerations and decelerations, altitude changes of over 6000 ft and 2g and 3g turns. Vertical attitude maneuvering as a tail sitter along all axes is demonstrated. A transition trajectory from 200 ft/sec in conventional flight to stationary hover in the vertical attitude includes satisfactory operation through lift-curve slope reversal as attitude goes from horizontal to vertical at constant altitude. A vertical attitude takeoff from stationary hover to conventional flight is also demonstrated.
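The Newton-Raphson model-inversion step can be illustrated generically: given a desired output, iterate on the control inputs until the nonlinear model reproduces it. The two-input model below is a hypothetical stand-in for the full aircraft model, and the finite-difference Jacobian replaces whatever analytic trim derivatives the real system used.

```python
import numpy as np

def invert_model(f, y_des, u0, n_iter=10, eps=1e-6):
    """Newton-Raphson 'trim' of a nonlinear model: find controls u with
    f(u) = y_des, using a central finite-difference Jacobian (a toy
    analogue of inverting the aircraft model once per 0.05 s cycle)."""
    u = np.asarray(u0, dtype=float)
    for _ in range(n_iter):
        r = f(u) - y_des
        J = np.empty((len(r), len(u)))
        for j in range(len(u)):
            du = np.zeros_like(u)
            du[j] = eps
            J[:, j] = (f(u + du) - f(u - du)) / (2.0 * eps)
        u = u - np.linalg.solve(J, r)   # Newton step toward f(u) = y_des
    return u

# hypothetical 2-input model: (thrust, angle) -> (forward accel, climb rate)
def model(u):
    t, a = u
    return np.array([t * np.cos(a) - 0.1 * a**2,
                     t * np.sin(a) + 0.5 * a])

u = invert_model(model, np.array([1.0, 0.8]), np.array([1.0, 0.1]))
print(u, model(u))
```

In the flight control system this inversion runs inside the feed-forward path, so the feedback regulators only have to correct the (nearly linear) residual error.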
Pajewski, Lara; Giannopoulos, Antonios; Sesnic, Silvestar; Randazzo, Andrea; Lambot, Sébastien; Benedetto, Francesco; Economou, Nikos
2017-04-01
opportunity of testing and validating, against reliable data, their electromagnetic-modelling, inversion, imaging and processing algorithms. One of the most interesting datasets comes from the IFSTTAR Geophysical Test Site in Nantes (France): this is an open-air laboratory including a large and deep area, filled with various materials arranged in horizontal compacted slices, separated by vertical interfaces and water-tight at the surface; several objects such as pipes, polystyrene hollows, boulders and masonry are embedded in the field. Data were collected by using nine different GPR systems and at different frequencies ranging from 200 MHz to 1 GHz. Moreover, some sections of this test site were modelled by using gprMax and the commercial software CST Microwave Studio. Hence, both experimental and synthetic data are available. Further interesting datasets were collected on roads, bridges, concrete cells, columns - and more. (v) WG3 contributed to the TU1208 Education Pack, an open educational package conceived to teach GPR in university courses. (vi) WG3 was very active in offering training activities. The following courses were successfully organised: Training School (TS) "Microwave Imaging and Diagnostics" (in cooperation with the European School of Antennas; 1st edition: Madonna di Campiglio, Italy, March 2014; 2nd edition: Taormina, Italy, October 2016); TS "Numerical modelling of Ground Penetrating Radar using gprMax" (Thessaloniki, Greece, November 2015); TS "Electromagnetic Modelling Techniques for Ground Penetrating Radar" (Split, Croatia, November 2016). Moreover, WG3 organized a workshop on "Electromagnetic modelling with the Finite-Difference Time-Domain technique" (Nantes, France, February 2014) and a workshop on "Electromagnetic modelling and inversion techniques for GPR" (Davos, Switzerland, April 2016) within the 2016 European Conference on Antennas and Propagation (EuCAP).
Acknowledgement: The Authors are deeply grateful to COST (European COoperation in Science and
TH-C-12A-06: Feasibility of a MLC-Based Inversely Optimized Multi-Field Grid Therapy Technique
Energy Technology Data Exchange (ETDEWEB)
Jin, J [Georgia Regents University, Augusta, GA (Georgia); Zhao, B; Huang, Y; Kim, J; Qin, Y; Wen, N; Ryu, S; Chetty, I [Henry Ford Health System, Detroit, MI (United States)
2014-06-15
Purpose: Grid therapy (GT), which generates highly spatially modulated dose distributions, can deliver single- or hypo-fractionated radiotherapy for large tumors without causing significant toxicities. GT may be applied in combination with immunotherapy, in light of recent preclinical data showing synergistic interaction between radiotherapy and immunotherapy. However, conventional GT uses only one field, and so lacks the advantage of multiple fields exploited in 3D conformal RT or IMRT. We have proposed a novel MLC-based, inversely planned multi-field 3D GT technique. This study aims to test its deliverability and dosimetric accuracy. Methods: A lattice of small spheres was created as the boost volume within a large target. A simultaneous-boost IMRT plan with 8 Gy to the target and 20 Gy to the boost volume was generated in the Eclipse treatment planning system (AAA v10) with a HD120 MLC. Nine beams were used, and the gantry and couch angles were selected so that the spheres were perfectly aligned in every beam's-eye view. The plan was mapped to a phantom with the dose scaled. EBT3 films were calibrated and used to measure the delivered dose. Results: The IMRT plan generated a highly spatially modulated dose distribution in the target. D95%, D50%, and D5% for the spheres and the target, in Gy, were 18.5, 20.0, 21.4 and 7.9, 9.8, 16.1, respectively. D50% for a 1-cm ring 1 cm outside the target was 2.9 Gy. Film dosimetry showed good agreement between calculated and delivered dose, with an overall gamma passing rate of 99.6% (3%/1mm). The point dose differences for the different spheres varied from 1-6%. Conclusion: We have demonstrated the deliverability and dose calculation accuracy of the MLC-based inversely optimized multi-field GT technique, which achieved a brachytherapy-like dose distribution. A single-fraction high dose can be delivered to the spheres in a large target with minimal dose to the surrounding normal tissue.
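The film-dosimetry comparison above uses the gamma index. A minimal 1-D version with global 3%/1 mm criteria (a sketch of the standard definition, not the clinical analysis software) looks like this:

```python
import numpy as np

def gamma_pass_rate(x, dose_ref, dose_eval, dd=0.03, dta=1.0):
    """1-D gamma analysis (3%/1 mm by default): for each reference point,
    minimize sqrt((dose diff / dd_norm)^2 + (distance / dta)^2) over all
    evaluated points; a point passes when the minimum is <= 1.
    Global dose-difference normalization to the reference maximum."""
    d_norm = dd * dose_ref.max()
    gam = np.empty_like(dose_ref)
    for i, (xi, di) in enumerate(zip(x, dose_ref)):
        term = ((dose_eval - di) / d_norm)**2 + ((x - xi) / dta)**2
        gam[i] = np.sqrt(term.min())
    return np.mean(gam <= 1.0)

# toy profiles: evaluated dose shifted by 0.4 mm and scaled by 1 percent,
# both within the 3%/1 mm tolerance, so every point should pass
x = np.linspace(0.0, 50.0, 501)                       # 0.1 mm grid
ref = 20.0 * np.exp(-((x - 25.0) / 8.0)**2)           # Gy, hypothetical profile
ev = 1.01 * 20.0 * np.exp(-((x - 25.4) / 8.0)**2)
print(gamma_pass_rate(x, ref, ev))
```

A 2-D film comparison applies the same minimization over a 2-D search neighborhood, usually with a low-dose threshold excluding points far outside the field.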
Bragov, A. M.; Balandin, Vl. V.; Kotov, V. L.; Balandin, Vl. Vl.
2018-04-01
We present new experimental results on the dynamic properties of sandy soil obtained with the inverse experiment technique using a measuring rod with a flat front-end face. The method based on correcting the shape of the deformation pulse for dispersion during its propagation in the measuring rod is shown to have limited applicability. Estimates of the pulse maximum have been obtained, and the results of comparison of numerical calculations with experimental data are given. Sufficient accuracy in determining the drag force during the quasi-stationary stage of penetration has been established. The parameters of dynamic compressibility and shear resistance of water-saturated sand have been determined in the course of an experimental-theoretical analysis of the maximum values of the drag force and its values at the quasi-stationary stage of penetration. It has been shown that with almost complete water saturation of sand its shear properties are reduced but remain significant in the practically important range of penetration rates.
International Nuclear Information System (INIS)
Pang, A.K.K.; Hughes, T.
2000-01-01
The present limited retrospective study was performed to assess MR imaging of lipomatous tumours of the musculoskeletal system and to evaluate the potential of the T2 short tau inversion-recovery (STIR) technique for differentiating lipomas from liposarcomas. Magnetic resonance images of 12 patients with lipomatous tumours of the musculoskeletal system (eight benign lipomas, three well differentiated liposarcomas and one myxoid liposarcoma) were reviewed. Benign lipomas were usually superficial and showed homogeneity on T1- and T2-weighted spin echo sequences. Full suppression at T2-STIR was readily demonstrated. In contrast, the liposarcomas in the present series were all deep-seated. Two well-differentiated liposarcomas showed homogeneity at long and short repetition times (TR) but failed to show complete suppression at T2-STIR. One case of well-differentiated (dedifferentiated) liposarcoma and one of myxoid liposarcoma showed mild and moderate heterogeneity at T1 and T2, respectively, and posed no difficulty in being diagnosed correctly. In conclusion, short and long TR sequences in combination with T2-STIR show promise in differentiating benign from malignant lipomatous tumours of the musculoskeletal system, when taken in combination with the position of the tumour. Copyright (1999) Blackwell Science Pty Ltd
Directory of Open Access Journals (Sweden)
Siti Khadijah Hubadillah
2016-06-01
In this study, low-cost ceramic supports were prepared from kaolin via a phase inversion technique with two kaolin particle sizes, 0.04–0.6 μm (denoted as type A) and 10–15 μm (denoted as type B), at kaolin contents ranging from 14 to 39 wt.%, sintered at 1200 °C. The effects of kaolin particle size and kaolin content on the membrane structure, pore size distribution, porosity, mechanical strength, surface roughness and gas permeation of the support were investigated. The support prepared from kaolin type A exhibited an asymmetric structure combining macroporous voids and a sponge-like region, with pore sizes of 0.38 μm and 1.05 μm, respectively, and showed ideal porosity (27.7%), great mechanical strength (98.9 MPa) and excellent gas permeation. A preliminary study shows that the kaolin ceramic support developed in this work has potential for gas separation applications at lower cost.
He, Zhenzong; Qi, Hong; Wang, Yuqing; Ruan, Liming
2014-10-01
Four improved Ant Colony Optimization (ACO) algorithms, i.e. the probability density function based ACO (PDF-ACO) algorithm, the Region ACO (RACO) algorithm, the Stochastic ACO (SACO) algorithm and the Homogeneous ACO (HACO) algorithm, are employed to estimate the particle size distribution (PSD) of spheroidal particles. The direct problems are solved by the extended Anomalous Diffraction Approximation (ADA) and the Lambert-Beer law. Three commonly used monomodal distribution functions, i.e. the Rosin-Rammler (R-R) distribution function, the normal (N-N) distribution function, and the logarithmic normal (L-N) distribution function, are estimated under the dependent model. The influence of random measurement errors on the inverse results is also investigated. All the results reveal that the PDF-ACO algorithm is more accurate than the other three ACO algorithms and can be used as an effective technique to investigate the PSD of spheroidal particles. Furthermore, the Johnson's SB (J-SB) function and the modified beta (M-β) function are employed as general distribution functions to retrieve the PSD of spheroidal particles using the PDF-ACO algorithm. The investigation shows a reasonable agreement between the original distribution function and the general distribution function when only the length of the rotational semi-axis is varied.
International Nuclear Information System (INIS)
Shimazu, Y.; Rooijen, W.F.G. van
2014-01-01
Highlights: • Estimation of the reactivity of a nuclear reactor based on neutron flux measurements. • Comparison of the traditional method and the new approach based on Extended Kalman Filtering (EKF). • Estimation accuracy depends on filter parameters, the selection of which is described in this paper. • The EKF algorithm is preferred if the signal-to-noise ratio is low (low flux situation). • The accuracy of the EKF depends on the ratio of the filter coefficients. - Abstract: The Extended Kalman Filtering (EKF) technique has been applied to the estimation of subcriticality with good noise filtering and accuracy. The Inverse Point Kinetics (IPK) method has also been widely used for reactivity estimation. The important parameters for the EKF estimation are the process noise covariance and the measurement noise covariance, but their optimal selection is quite difficult. On the other hand, there is only one parameter in the IPK method, namely the time constant of the first-order delay filter, whose selection is quite easy. Guidance is therefore needed on which method to choose and how to select the required parameters. From this point of view, a qualitative performance comparison is carried out
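The inverse point kinetics idea referred to above can be sketched compactly: precursor concentrations are rebuilt from the measured flux, and the kinetics equation is then solved for reactivity at each step. The sketch below uses a single delayed-neutron group with illustrative constants (not those of any specific reactor) and a simulated flux trace in place of real measurements; the EKF side of the paper's comparison is not reproduced.

```python
import numpy as np

# one delayed-neutron-group point kinetics; beta, decay constant and
# generation time below are assumed illustrative values
beta, dec, gen = 0.0065, 0.08, 1.0e-4
rho_true = 0.001                  # constant reactivity to be recovered
dt, n_steps = 1.0e-4, 50_000      # 5 s of simulated flux history

# forward point kinetics (explicit Euler) to create a "measured" flux trace
n = np.empty(n_steps + 1)
c = np.empty(n_steps + 1)
n[0] = 1.0
c[0] = beta * n[0] / (dec * gen)  # equilibrium precursor concentration
for k in range(n_steps):
    n[k + 1] = n[k] + dt * ((rho_true - beta) / gen * n[k] + dec * c[k])
    c[k + 1] = c[k] + dt * (beta / gen * n[k] - dec * c[k])

# inverse point kinetics: rebuild the precursor concentration from the
# flux alone, then solve the kinetics equation for reactivity
c_est = beta * n[0] / (dec * gen)
rho = np.empty(n_steps)
for k in range(n_steps):
    dndt = (n[k + 1] - n[k]) / dt
    rho[k] = beta + gen * (dndt - dec * c_est) / n[k]
    c_est += dt * (beta / gen * n[k] - dec * c_est)

rho_final = float(rho[-1])
```

With noise-free flux data the recovered reactivity matches the true value to machine precision; with real detector noise, the first-order delay filter mentioned in the abstract would be applied to the flux before this step.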
International Nuclear Information System (INIS)
Yusa, Noritaka; Machida, Eiji; Janousek, Ladislav; Rebican, Mihai; Chen, Zhenmao; Miya, Kenzo
2005-01-01
This paper evaluates the applicability of eddy current inversion techniques to the sizing of defects in Inconel welds with rough surfaces. For this purpose, a plate Inconel weld specimen, which models the welding of a stub tube in a boiling water nuclear reactor, is fabricated, and artificial notches are machined into the specimen. Eddy current inspections are conducted using six different eddy current probes, and their efficiencies for weld inspection are evaluated. It is revealed that if suitable probes are applied, an Inconel weld does not cause large noise levels during eddy current inspections even though the surface of the weld is rough. Finally, reconstruction of the notches is performed using eddy current signals measured with the uniform eddy current probe that showed the best results among the six probes in this study. A simplified configuration is proposed in order to account for the complicated configuration of the welded specimen in numerical simulations. While the reconstructed profiles of the notches are slightly larger than the true profiles, quite good agreement is obtained in spite of the simple approximation of the configuration, which reveals that eddy current testing would be an efficient non-destructive testing method for the sizing of defects in Inconel welds.
Richey, Lauren; Gardner, John; Standing, Michael; Jorgensen, Matthew; Bartl, Michael
2010-10-01
Photonic crystals (PCs) are periodic structures that manipulate electromagnetic waves by defining allowed and forbidden frequency bands known as photonic band gaps. Despite production of PC structures operating at infrared wavelengths, visible counterparts are difficult to fabricate because periodicities must satisfy the diffraction criteria. As part of an ongoing search for naturally occurring PCs [1], a three-dimensional array of nanoscopic spheres has been found in the iridescent scales of the Cerambycidae insects A. elegans and G. celestis. Such arrays are similar to opal gemstones and self-assembled colloidal spheres, which can be chemically inverted to create a lattice-like PC. Through a chemical replication process [2], scanning electron microscopy analysis, sequential focused ion beam slicing and three-dimensional modeling, we analyzed the structural arrangement of the nanoscopic spheres. The study of naturally occurring structures and of techniques for inverting them into PCs allows for diversity in optical PC fabrication. [1] J.W. Galusha et al., Phys. Rev. E 77 (2008) 050904. [2] J.W. Galusha et al., J. Mater. Chem. 20 (2010) 1277.
International Nuclear Information System (INIS)
Yusa, Noritaka; Janousek, Ladislav; Rebican, Mihai; Chen, Zhenmao; Miya, Kenzo; Machida, Eiji
2004-01-01
This paper evaluates the applicability of eddy current inversion techniques to the sizing of defects in Inconel welds with rough surfaces. For this purpose, a plate Inconel weld specimen, which models the welding of a stub tube in a boiling water nuclear reactor, is fabricated, and artificial notches are machined into the specimen. Eddy current inspections are conducted using six probes, and their efficiencies for weld inspection are evaluated. It is revealed that if suitable probes are applied, an Inconel weld does not produce large noise signals in eddy current inspections even though the surface of the weld is rough. Finally, reconstruction of the notches is performed using eddy current signals measured with the uniform eddy current probe that showed the best results among the six probes in the inspection. A simplified configuration is proposed in order to account for the complicated configuration of the welded specimen in numerical simulations. While the reconstructed profiles of the notches are slightly larger than the true profiles, quite good agreement is obtained in spite of the simple approximation of the configuration, which reveals that eddy current testing would be an efficient non-destructive testing method for the sizing of defects in Inconel welds. (author)
Directory of Open Access Journals (Sweden)
J Swain
2017-12-01
The Indian Space Research Organization launched Oceansat-2 on 23 September 2009; the scatterometer onboard was a space-borne sensor capable of providing ocean surface winds (both speed and direction) over the globe for a mission life of 5 years. Observations of ocean surface winds from such a space-borne sensor are a potential source of data covering the global oceans and are useful for driving state-of-the-art numerical models for simulating the ocean state if assimilated/blended with weather prediction model products. In this study, an efficient interpolation technique based on inverse distance and time is demonstrated using the Oceansat-2 wind measurements alone for June 2010 to generate gridded outputs. As the data are available only along the satellite tracks and there are obvious data gaps for various other reasons, the Oceansat-2 winds were subjected to spatio-temporal interpolation, and 6-hourly wind fields for the global oceans were generated on a 1 × 1 degree grid. Such interpolated wind fields can be used to drive state-of-the-art numerical models to predict/hindcast the ocean state, so as to test the utility/performance of satellite measurements alone in the absence of blended fields. The technique can be applied to other satellites that provide both wind speed and direction data. However, the accuracy of the input winds is obviously expected to have a perceptible influence on the predicted ocean-state parameters. Some attempts are also made here to compare the interpolated Oceansat-2 winds with available buoy measurements; they were found to be in reasonably good agreement, with a correlation coefficient R > 0.8 and mean deviations of 1.04 m/s and 25° for wind speed and direction, respectively.
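The inverse distance-and-time weighting used above can be sketched in a few lines. The weighting exponents, the regularising epsilon, and the synthetic along-track winds below are illustrative assumptions, not the paper's actual configuration.

```python
import numpy as np

def idw_space_time(obs_xy, obs_t, obs_val, grid_xy, grid_t,
                   p_space=2.0, p_time=1.0, eps=1e-6):
    """Inverse distance-and-time weighted average of scattered
    observations at one grid point and analysis time.  Weights fall
    off with both spatial distance and time separation."""
    d_s = np.linalg.norm(obs_xy - grid_xy, axis=1)   # degrees
    d_t = np.abs(obs_t - grid_t)                     # hours
    w = 1.0 / ((d_s + eps) ** p_space * (d_t + eps) ** p_time)
    return float(np.sum(w * obs_val) / np.sum(w))

# synthetic along-track winds in a 2 x 2 degree box over a 6-hour window;
# wind speed is a smooth (linear) function of longitude
rng = np.random.default_rng(0)
track = rng.uniform(0.0, 2.0, size=(200, 2))     # lon, lat of track points
times = rng.uniform(0.0, 6.0, size=200)          # observation times (h)
winds = 5.0 + 2.0 * track[:, 0]                  # m/s

est = idw_space_time(track, times, winds, np.array([1.0, 1.0]), 3.0)
```

A full gridded product would simply loop this estimator over every 1 × 1 degree cell and 6-hour analysis time.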
Directory of Open Access Journals (Sweden)
Shiann-Jong Lee
2010-01-01
Moment tensor inversion is a routine procedure for obtaining earthquake source information such as moment magnitude and focal mechanism. However, the inversion quality is usually controlled by factors such as knowledge of the earthquake location and the suitability of the 1-D velocity model used. Here we present an improved method to invert for the moment tensor solution of local earthquakes. The proposed method differs from the routine centroid-moment-tensor inversion of the Broadband Array in Taiwan for Seismology in three aspects. First, the inversion is repeated in the neighborhood of the earthquake's hypocenter on a grid basis. Second, it utilizes Green's functions based on a true three-dimensional velocity model. And third, it incorporates most of the input waveforms from strong-motion records. The proposed grid-based moment tensor inversion is applied to a local earthquake that occurred near the Taipei basin on 23 October 2004 to demonstrate its effectiveness and superiority over methods used in previous studies. By using the grid-based moment tensor inversion technique and 3-D Green's functions, the earthquake source parameters, including earthquake location, moment magnitude and focal mechanism, are accurately determined and are consistent with regional ground motion observations up to a frequency of 1.0 Hz. This approach can obtain more precise source parameters for other earthquakes in or near a well-modeled basin and crustal structure.
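For a fixed source position, moment tensor inversion is a linear least-squares problem: the observed waveforms are a linear combination of Green's-function responses weighted by the six independent tensor components. A minimal sketch follows, with random placeholders standing in for the 3-D-model Green's functions the paper actually computes.

```python
import numpy as np

rng = np.random.default_rng(1)

# Green's-function matrix: rows are waveform samples stacked over all
# stations, columns the 6 independent moment-tensor components.
# Random placeholders are used here; in the paper these come from a
# true three-dimensional velocity model.
n_samples, n_comp = 300, 6
G = rng.normal(size=(n_samples, n_comp))

m_true = np.array([1.0, -0.5, -0.5, 0.3, 0.0, 0.2])   # Mxx, Myy, Mzz, Mxy, Mxz, Myz
d = G @ m_true + 0.01 * rng.normal(size=n_samples)    # noisy synthetic records

# the grid-based scheme repeats this linear inversion for trial source
# positions around the hypocenter (each with its own G) and keeps the
# one with the smallest misfit; a single trial position is shown
m_est, *_ = np.linalg.lstsq(G, d, rcond=None)
```

Looping the last line over a 3-D grid of trial hypocenters and selecting the minimum-misfit solution is what upgrades the routine inversion to the grid-based version described above.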
Kruecken, R; Speidel, K; Voulot, D; Neyens, G; Gernhaeuser, R A; Fraile prieto, L M; Leske, J
We propose to measure the sign and magnitude of the g-factors of the first 2$^{+}$ states in radioactive neutron-rich $^{72,74}$Zn applying the transient field (TF) technique in inverse kinematics. The result of this experiment will allow us to probe the $\
International Nuclear Information System (INIS)
Desesquelles, P.
1997-01-01
Computer Monte Carlo simulations occupy an increasingly important place between theory and experiment. This paper introduces a global protocol for the comparison of model simulations with experimental results. The correlated distributions of the model parameters are determined using an original recursive inversion procedure. Multivariate analysis techniques are used in order to optimally synthesize the experimental information with a minimum number of variables. This protocol is relevant in all fields of physics dealing with event generators and multi-parametric experiments. (authors)
International Nuclear Information System (INIS)
Xiao Ying; Werner-Wasik, Maria; Michalski, D.; Houser, C.; Bednarz, G.; Curran, W.; Galvin, James
2004-01-01
The purpose of this study is to compare 3 intensity-modulated radiation therapy (IMRT) inverse treatment planning techniques as applied to locally-advanced lung cancer. This study evaluates whether sufficient radiotherapy (RT) dose is given for durable control of tumors while sparing a portion of the esophagus, and whether a large number of segments and monitor units is required. We selected 5 cases of locally-advanced lung cancer with a large central tumor abutting the esophagus. To ensure that no more than half of the esophagus circumference at any level received the specified dose limit, the esophagus was divided into disk-like sections and dose limits were imposed on each. Two sets of dose objectives were specified for the tumor and other critical structures, for standard-dose RT and for dose-escalation RT. Plans were generated using an aperture-based inverse planning (ABIP) technique with the Cimmino algorithm for optimization. Beamlet-based inverse treatment planning was carried out with a commercial simulated annealing package (CORVUS) and with an in-house system that used the Cimmino projection algorithm (CIMM). For 3 of the 5 cases, results from the 3 techniques met all of the constraints for the 2 sets of dose objectives. The CORVUS system, which does not consider delivery efficiency, required the most segments and monitor units. The CIMM system reduced these numbers, and the ABIP technique showed a further reduction, although for one of the cases a solution was not readily obtained using the ABIP technique for the dose-escalation objectives.
2.5D Inversion Algorithm of Frequency-Domain Airborne Electromagnetics with Topography
Directory of Open Access Journals (Sweden)
Jianjun Xi
2016-01-01
We presented a 2.5D inversion algorithm with topography for frequency-domain airborne electromagnetic data. The forward modeling is based on the edge finite element method and uses irregular hexahedra to adapt to the topography. The electric and magnetic fields are split into primary (background) and secondary (scattered) fields to eliminate the source singularity. For the multiple sources of the frequency-domain airborne electromagnetic method, we use the large-scale sparse-matrix parallel shared-memory direct solver PARDISO to solve the linear system of equations efficiently. The inversion algorithm is based on the Gauss-Newton method, which has an efficient convergence rate. The Jacobian matrix is calculated efficiently by “adjoint forward modelling”. The synthetic inversion examples indicate that our proposed method is correct and effective. Furthermore, ignoring the topography effect can lead to incorrect results and interpretations.
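The Gauss-Newton update at the heart of such an inversion can be sketched generically: linearize the forward model, solve the normal equations for a model update, and repeat. The toy exponential-decay forward model below is a hypothetical stand-in for the finite-element EM forward modelling, used only to show the iteration's shape.

```python
import numpy as np

def gauss_newton(f, jac, m0, d_obs, n_iter=20):
    """Plain Gauss-Newton: at each step solve the normal equations
    (J^T J) dm = J^T r, where r is the data residual."""
    m = np.array(m0, dtype=float)
    for _ in range(n_iter):
        r = d_obs - f(m)
        J = jac(m)
        m += np.linalg.solve(J.T @ J, J.T @ r)
    return m

# toy forward model standing in for the EM problem: d_i = a * exp(-b * x_i)
x = np.linspace(0.0, 2.0, 30)
f = lambda m: m[0] * np.exp(-m[1] * x)
jac = lambda m: np.column_stack([np.exp(-m[1] * x),
                                 -m[0] * x * np.exp(-m[1] * x)])

m_true = np.array([2.0, 1.5])
m_est = gauss_newton(f, jac, [1.5, 1.2], f(m_true))
```

In the actual algorithm the Jacobian columns are not formed analytically as here but obtained by the adjoint technique, and a regularization term is normally added to the normal equations.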
DEFF Research Database (Denmark)
Farahani, Saeed Davoudabadi; Andersen, Michael Skipper; de Zee, Mark
2012-01-01
… derived from the detailed musculoskeletal analysis. The technique is demonstrated on a human model pedaling a bicycle. We use a physiology-based cost function expressing the mean square of all muscle activities over the cycle to predict a realistic motion pattern. Posture and motion prediction … on a physics model including dynamic effects and a high level of anatomical realism. First, a musculoskeletal model comprising several hundred muscles is built in AMS. The movement is then parameterized by means of time functions controlling selected degrees of freedom of the model. Subsequently, the parameters of these functions are optimized to produce an optimum posture or movement according to a user-defined cost function and constraints. The cost function and the constraints typically express performance, comfort, injury risk, fatigue, muscle load, joint forces and other physiological properties …
Boonyasiriwat, Chaiwoot; Schuster, Gerard T.; Valasek, Paul A.; Cao, Weiping
2010-01-01
an accurate and highly resolved velocity tomogram for the 2D SEG/EAGE salt model. In the application of MWT to the field data, the inversion process is carried out using a multiscale method with a dynamic early-arrival muting window to mitigate the local
Pajewski, Lara; Giannopoulos, Antonis; van der Kruk, Jan
2015-04-01
This work aims at presenting the ongoing research activities carried out in Working Group 3 (WG3) 'EM methods for near-field scattering problems by buried structures; data processing techniques' of the COST (European COoperation in Science and Technology) Action TU1208 'Civil Engineering Applications of Ground Penetrating Radar' (www.GPRadar.eu). The principal goal of the COST Action TU1208 is to exchange and increase scientific-technical knowledge and experience of GPR techniques in civil engineering, simultaneously promoting throughout Europe the effective use of this safe and non-destructive technique in the monitoring of infrastructures and structures. WG3 is structured in four Projects. Project 3.1 deals with 'Electromagnetic modelling for GPR applications.' Project 3.2 is concerned with 'Inversion and imaging techniques for GPR applications.' The topic of Project 3.3 is the 'Development of intrinsic models for describing near-field antenna effects, including antenna-medium coupling, for improved radar data processing using full-wave inversion.' Project 3.4 focuses on 'Advanced GPR data-processing algorithms.' Electromagnetic modeling tools that are being developed and improved include the Finite-Difference Time-Domain (FDTD) technique and the spectral-domain Cylindrical-Wave Approach (CWA). One well-known freeware and versatile FDTD simulator is GprMax, which enables an improved realistic representation of the soil/material hosting the sought structures and of the GPR antennas. Here, input/output tools are being developed to ease the definition of scenarios and the visualisation of numerical results. The CWA expresses the field scattered by subsurface two-dimensional targets with arbitrary cross-section as a sum of cylindrical waves. In this way, the interaction of multiple scattered fields within the medium hosting the sought targets is taken into account. Recently, the method has been extended to deal with through-the-wall scenarios. One of the
Ars, Sébastien; Broquet, Grégoire; Yver Kwok, Camille; Roustan, Yelva; Wu, Lin; Arzoumanian, Emmanuel; Bousquet, Philippe
2017-12-01
This study presents a new concept for estimating the pollutant emission rates of a site and its main facilities using a series of atmospheric measurements across the pollutant plumes. This concept combines the tracer release method, local-scale atmospheric transport modelling and a statistical atmospheric inversion approach. The conversion between the controlled emission and the measured atmospheric concentrations of the released tracer across the plume places valuable constraints on the atmospheric transport. This is used to optimise the configuration of the transport model parameters and the model uncertainty statistics in the inversion system. The emission rates of all sources are then inverted to optimise the match between the concentrations simulated with the transport model and the pollutants' measured atmospheric concentrations, accounting for the transport model uncertainty. In principle, by using atmospheric transport modelling, this concept does not strongly rely on the good colocation between the tracer and pollutant sources and can be used to monitor multiple sources within a single site, unlike the classical tracer release technique. The statistical inversion framework and the use of the tracer data for the configuration of the transport and inversion modelling systems should ensure that the transport modelling errors are correctly handled in the source estimation. The potential of this new concept is evaluated with a relatively simple practical implementation based on a Gaussian plume model and a series of inversions of controlled methane point sources using acetylene as a tracer gas. The experimental conditions are chosen so that they are suitable for the use of a Gaussian plume model to simulate the atmospheric transport. In these experiments, different configurations of methane and acetylene point source locations are tested to assess the efficiency of the method in comparison to the classic tracer release technique in coping with the distances
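Because a Gaussian plume concentration is linear in the emission rate, the rate inversion for a single source reduces to a one-line least-squares projection onto the unit-rate plume shape. The sketch below illustrates this; the power-law dispersion coefficients, source height, wind speed and measurement transect are hypothetical illustrative values, and the full statistical inversion with transport-error covariances is not reproduced.

```python
import numpy as np

def plume(Q, x, y, z, u=3.0, H=2.0):
    """Gaussian plume concentration for a point source of rate Q at
    height H with wind speed u along +x.  The dispersion coefficients
    use a simple power law with assumed (illustrative) constants."""
    sig_y = 0.08 * x ** 0.9
    sig_z = 0.06 * x ** 0.9
    return (Q / (2.0 * np.pi * u * sig_y * sig_z)
            * np.exp(-y**2 / (2.0 * sig_y**2))
            * (np.exp(-(z - H)**2 / (2.0 * sig_z**2))
               + np.exp(-(z + H)**2 / (2.0 * sig_z**2))))   # ground reflection

# transect of measurement points across the plume, 100 m downwind
y_obs = np.linspace(-20.0, 20.0, 21)
x_obs, z_obs = 100.0, 1.5

rng = np.random.default_rng(2)
c_obs = plume(5.0, x_obs, y_obs, z_obs) * (1.0 + 0.05 * rng.normal(size=y_obs.size))

# concentration is linear in Q, so the best-fit rate is the projection
# of the observations onto the unit-rate plume shape
c_unit = plume(1.0, x_obs, y_obs, z_obs)
Q_est = float(c_obs @ c_unit / (c_unit @ c_unit))
```

In the concept described above, the tracer release with known rate is used first to tune the transport model parameters (here, the dispersion constants) before this projection is applied to the pollutant of interest, and several sources are inverted jointly rather than one at a time.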
Kaporin, I. E.
2012-02-01
In order to precondition a sparse symmetric positive definite matrix, its approximate inverse is examined, which is represented as the product of two sparse mutually adjoint triangular matrices. In this way, the solution of the corresponding system of linear algebraic equations (SLAE) by applying the preconditioned conjugate gradient method (CGM) is reduced to performing only elementary vector operations and calculating sparse matrix-vector products. A method for constructing the above preconditioner is described and analyzed. The triangular factor has a fixed sparsity pattern and is optimal in the sense that the preconditioned matrix has a minimum K-condition number. The use of polynomial preconditioning based on Chebyshev polynomials makes it possible to considerably reduce the amount of scalar product operations (at the cost of an insignificant increase in the total number of arithmetic operations). The possibility of an efficient massively parallel implementation of the resulting method for solving SLAEs is discussed. For a sequential version of this method, the results obtained by solving 56 test problems from the Florida sparse matrix collection (which are large-scale and ill-conditioned) are presented. These results show that the method is highly reliable and has low computational costs.
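Constructing the K-condition-optimal triangular factor is beyond a short sketch, but the overall structure, conjugate gradients preconditioned by a factored sparse approximate inverse M = G Gᵀ with G triangular, can be shown with the simplest possible sparsity pattern for G (diagonal only), for which the factor reduces to Jacobi scaling. The test matrix and tolerances below are illustrative assumptions.

```python
import numpy as np

def pcg(A, b, M_apply, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradients; M_apply(r) ~ A^{-1} r.
    Only matrix-vector products and vector operations are used."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = M_apply(r)
    p = z.copy()
    rz = r @ z
    for k in range(max_iter):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k + 1
        z = M_apply(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, max_iter

# SPD test matrix: 1-D discrete Laplacian
n = 100
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
b = np.ones(n)

# factored approximate inverse M = G G^T with the simplest (diagonal)
# sparsity pattern for G, i.e. G = diag(1/sqrt(a_ii)); this reduces to
# Jacobi scaling but keeps the G G^T structure of the method
g = 1.0 / np.sqrt(np.diag(A))
M_apply = lambda r: (g * g) * r

x, iters = pcg(A, b, M_apply)
```

With a richer (banded) sparsity pattern for G, the same `M_apply` hook would apply two sparse triangular matrix-vector products instead of a diagonal scaling, which is exactly what makes the method attractive for massively parallel hardware.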
International Nuclear Information System (INIS)
Kappadath, S. Cheenu; Shaw, Chris C.
2003-01-01
Breast cancer may manifest as microcalcifications in x-ray mammography. Small microcalcifications, essential to the early detection of breast cancer, are often obscured by overlapping tissue structures. Dual-energy imaging, where separate low- and high-energy images are acquired and synthesized to cancel the tissue structures, may improve the ability to detect and visualize microcalcifications. Transmission measurements at two different kVp values were made on breast-tissue-equivalent materials under narrow-beam geometry using an indirect flat-panel mammographic imager. The imaging scenario consisted of variable aluminum thickness (to simulate calcifications) and variable glandular ratio (defined as the ratio of the glandular-tissue thickness to the total tissue thickness) for a fixed total tissue thickness--the clinical situation of microcalcification imaging with varying tissue composition under breast compression. The coefficients of the inverse-mapping functions used to determine material composition from dual-energy measurements were calculated by a least-squares analysis. The linear function poorly modeled both the aluminum thickness and the glandular ratio. The inverse-mapping functions were found to vary as analytic functions of second (conic) or third (cubic) order. By comparing the model predictions with the calibration values, the root-mean-square residuals for both the cubic and the conic functions were ∼50 μm for the aluminum thickness and ∼0.05 for the glandular ratio
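Fitting the coefficients of a second-order (conic) inverse-mapping function by least squares, as described above, can be sketched as follows. The calibration signals and coefficient values below are synthetic placeholders, not the paper's measured transmission data.

```python
import numpy as np

def conic_design(l, h):
    """Second-order (conic) basis in the low/high-energy log-signals."""
    return np.column_stack([np.ones_like(l), l, h, l*l, l*h, h*h])

rng = np.random.default_rng(3)
l = rng.uniform(1.0, 3.0, 200)     # low-kVp log-signal (hypothetical)
h = rng.uniform(0.5, 2.0, 200)     # high-kVp log-signal (hypothetical)

# synthetic calibration: aluminum thickness (micrometres) as a conic
# function of the two signals, plus measurement scatter
true_coef = np.array([10.0, 150.0, -80.0, 5.0, -3.0, 2.0])
t_al = conic_design(l, h) @ true_coef + rng.normal(0.0, 5.0, 200)

# least-squares estimate of the inverse-mapping coefficients, then the
# root-mean-square residual against the calibration values
coef, *_ = np.linalg.lstsq(conic_design(l, h), t_al, rcond=None)
rms = float(np.sqrt(np.mean((conic_design(l, h) @ coef - t_al) ** 2)))
```

The cubic variant simply extends `conic_design` with the four third-order terms; the same least-squares fit and RMS-residual comparison then selects between the two models, mirroring the ~50 μm residuals reported above.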
Martins, Evandro; Poncelet, Denis; Rodrigues, Ramila Cristiane; Renard, Denis
2017-09-01
In the first part of this article, an innovative method of oil encapsulation by dripping-inverse gelation using water-in-oil (W/O) emulsions was described. It was noticed that the method of oil encapsulation differed considerably depending on the emulsion type (W/O or oil-in-water (O/W)) used, and that the emulsion structure had a high impact on the dripping technique and the capsule characteristics. The objective of this article was to elucidate the differences between the dripping techniques using both emulsions and to compare the capsule properties (mechanical resistance and release of actives). Oil encapsulation using O/W emulsions was easier to perform and did not require the use of emulsion destabilisers. However, capsules produced from W/O emulsions were more resistant to compression and showed a slower release of actives over time. The findings detailed here widen the knowledge of inverse gelation and open opportunities to develop new techniques of oil encapsulation.
Parlangeau, Camille; Lacombe, Olivier; Schueller, Sylvie; Daniel, Jean-Marc
2018-01-01
The inversion of calcite twin data is a powerful tool to reconstruct paleostresses sustained by carbonate rocks during their geological history. Following Etchecopar's (1984) pioneering work, this study presents a new technique for the inversion of calcite twin data that reconstructs the 5 parameters of the deviatoric stress tensors from both monophase and polyphase twin datasets. The uncertainties in the parameters of the stress tensors reconstructed by this new technique are evaluated on numerically-generated datasets. The technique not only reliably defines the 5 parameters of the deviatoric stress tensor, but also reliably separates very close superimposed stress tensors (30° of difference in maximum principal stress orientation or switch between σ3 and σ2 axes). The technique is further shown to be robust to sampling bias and to slight variability in the critical resolved shear stress. Due to our still incomplete knowledge of the evolution of the critical resolved shear stress with grain size, our results show that it is recommended to analyze twin data subsets of homogeneous grain size to minimize possible errors, mainly those concerning differential stress values. The methodological uncertainty in principal stress orientations is about ± 10°; it is about ± 0.1 for the stress ratio. For differential stresses, the uncertainty is lower than ± 30%. Applying the technique to vein samples within Mesozoic limestones from the Monte Nero anticline (northern Apennines, Italy) demonstrates its ability to reliably detect and separate tectonically significant paleostress orientations and magnitudes from naturally deformed polyphase samples, hence to fingerprint the regional paleostresses of interest in tectonic studies.
International Nuclear Information System (INIS)
Rey Silva, D.V.F.M.; Oliveira, A.P.; Macacini, J.F.; Da Silva, N.C.; Cipriani, M.; Quinelato, A.L.
2005-01-01
Full text of publication follows: The study of the dispersion of radioactive materials in soils and in engineering barriers plays an important role in the safety analysis of nuclear waste repositories. To proceed with such studies, the relevant physical properties must be determined with precision, including the apparent mass diffusion coefficient, which is defined as the ratio between the effective mass diffusion coefficient and the retardation factor. Many different experimental and estimation techniques are available in the literature for the identification of the diffusion coefficient, and this work describes the implementation of the technique developed by Pereira et al. [1]. This technique is based on non-intrusive radiation measurements, and the experimental setup consists of a cylindrical column filled with compacted media saturated with water. A radioactive contaminant is mixed with a portion of the media and then placed at the bottom of the column, so that the contaminant diffuses through the uncontaminated media due to the concentration gradient. A radiation detector is used to measure the number of counts, which is associated with the contaminant concentration, at several positions along the column during the experiment. Such measurements are then used to estimate the apparent diffusion coefficient of the contaminant in the porous media by inverse analysis. The inverse problem of parameter estimation is solved with the Levenberg-Marquardt method of minimization of the least-squares norm. The experiment was optimized with respect to the number of measurement locations, the frequency of measurements and the duration of the experiment through the analysis of the sensitivity coefficients and by using a D-optimum approach. This setup is suitable for studying a great number of combinations of diverse contaminants and porous media varying in composition and compaction, with considerable ease and reliable results, and it was chosen because that is the
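The Levenberg-Marquardt estimation step can be sketched with a simple analytic forward model: an instantaneous plane source diffusing into a semi-infinite column. The source strength, measurement geometry and diffusion coefficient below are illustrative assumptions, and the actual experiment's count-rate forward model and optimization details differ.

```python
import numpy as np

def conc(p, x, t):
    """Concentration profile from an instantaneous plane source at the
    column bottom (x = 0) diffusing into a semi-infinite medium;
    p = (source strength M, apparent diffusion coefficient D)."""
    M, D = p
    return M / np.sqrt(np.pi * D * t) * np.exp(-x**2 / (4.0 * D * t))

def levenberg_marquardt(x, t, c_obs, p0, n_iter=50, mu=1e-3):
    """Minimise the least-squares norm ||c_obs - conc(p)|| with a
    finite-difference Jacobian and Marquardt damping."""
    p = np.array(p0, dtype=float)
    for _ in range(n_iter):
        r = c_obs - conc(p, x, t)
        J = np.empty((x.size, p.size))
        for j in range(p.size):
            dp = np.zeros_like(p)
            dp[j] = 1e-6 * abs(p[j])
            J[:, j] = (conc(p + dp, x, t) - conc(p, x, t)) / dp[j]
        H = J.T @ J + mu * np.diag(np.diag(J.T @ J))
        step = np.linalg.solve(H, J.T @ r)
        if np.linalg.norm(c_obs - conc(p + step, x, t)) < np.linalg.norm(r):
            p, mu = p + step, mu * 0.5     # accept step, relax damping
        else:
            mu *= 2.0                      # reject step, increase damping
    return p

x = np.linspace(0.0, 0.1, 25)       # detector positions along the column (m)
t = 30.0 * 86400.0                  # a single late measurement time (30 days)
p_true = (1.0, 2.0e-9)              # M and apparent D (m^2/s), assumed values
rng = np.random.default_rng(4)
c_obs = conc(p_true, x, t) * (1.0 + 0.02 * rng.normal(size=x.size))

p_est = levenberg_marquardt(x, t, c_obs, p0=(0.5, 1.0e-9))
```

The sensitivity-coefficient analysis mentioned above corresponds to inspecting the columns of `J`: positions and times where the column for D is small contribute little to the estimate and can be dropped from the design.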
Energy Technology Data Exchange (ETDEWEB)
Sakurai, K; Shima, H [OYO Corp., Tokyo (Japan)
1996-10-01
This paper proposes a modeling method for one-dimensional complex resistivity using a linear filter technique that has been extended to complex resistivity. In addition, a numerical test of inversion was conducted using the monitoring results, to discuss the measured frequency band. The linear filter technique is a method by which the theoretical potential can be calculated for stratified structures, and it is widely used for the one-dimensional analysis of dc electrical exploration. The modeling can be carried out using only values of complex resistivity, without using values of potential. In this study, a bipolar method was employed as the electrode configuration. The numerical test of one-dimensional complex resistivity inversion was conducted using the formulated modeling. A three-layered structure model was used as a numerical model. A multi-layer structure with a thickness of 5 m was analyzed on the basis of the apparent complex resistivity calculated from the model. From the results of the numerical test, it was found that both the chargeability and the time constant agreed well with those of the original model. A trade-off was observed between the chargeability and the time constant at the stage of convergence. 3 refs., 9 figs., 1 tab.
International Nuclear Information System (INIS)
Alumbaugh, D.L.
1997-01-01
'It is the objective of this proposed study to develop and field test a new, integrated Hybrid Hydrologic-Geophysical Inverse Technique (HHGIT) for characterization of the vadose zone at contaminated sites. This fundamentally new approach to site characterization and monitoring will provide detailed knowledge about hydrological properties, geological heterogeneity and the extent and movement of contamination. HHGIT combines electrical resistivity tomography (ERT) to geophysically sense a 3D volume, statistical information about fabric of geological formations, and sparse data on moisture and contaminant distributions. Combining these three types of information into a single inversion process will provide much better estimates of spatially varied hydraulic properties and three-dimensional contaminant distributions than could be obtained from interpreting the data types individually. Furthermore, HHGIT will be a geostatistically based estimation technique; the estimates represent conditional mean hydraulic property fields and contaminant distributions. Thus, this method will also quantify the uncertainty of the estimates as well as the estimates themselves. The knowledge of this uncertainty is necessary to determine the likelihood of success of remediation efforts and the risk posed by hazardous materials. Controlled field experiments will be conducted to provide critical data sets for evaluation of these methodologies, for better understanding of mechanisms controlling contaminant movement in the vadose zone, and for evaluation of the HHGIT method as a long term monitoring strategy.'
Directory of Open Access Journals (Sweden)
Ardhia Wishnuprakasa
2016-12-01
Full Text Available In this study, the IEEE 519 standard is used as the benchmarking basis for voltage (THDv) and current (THDi) harmonic distortion performance. A comparative study is made of three 2-Level Converter (2LC) techniques using a star-connected induction motor (Y-CIM) in an Extra-Low Voltage (ELV) configuration. In detail, a primary inverter acts as a Direct-Inverter by direct PWM (PWM degrees) and a secondary inverter acts as an Inverse-Inverter by inverse PWM (PWM + PI degrees). A modified algorithm is applied for each of SPWM with six rules and FHIPWM with 5th-harmonic injection in standard modulation, extended for the open ends of the pre-dual inverter to Decoupled SPWM with twelve rules and Decoupled FHIPWM with 5th-harmonic injection in a combination of two standard modulations. These techniques serve the two-inverter combination, namely the Equal Direct-Inverse (EDI) algorithm, a prototyping product of the similarities. Observation is restricted to the voltage scope, comparing simulation using Power Simulator (PSIM) and application using a Microcontroller ARM STM32F4 Discovery.
Directory of Open Access Journals (Sweden)
M.A. Nwachukwu
2017-01-01
Full Text Available The use of trial pits as a first step in quarry site development causes land degradation and results in more failure than success for potential quarry investors in some parts of the world. In this paper, resistivity, depth and distance values derived from 26 Vertical Electric Soundings (VES) and 2 profiling inversion sections were successfully used to evaluate a quarry site prior to development. The target rock, diabase (dolerite), was observed with a resistivity range of 3.0 × 10⁴–7.8 × 10⁶ Ω·m, and was clearly distinguishable from associated rocks by its bright red color code in the AGI 1D inversion software. This target rock was overlain by quartzite, indurated shale and mudstone as overburden materials. The quartzite, with its off-red colour, has a resistivity range of 2.0 × 10³–2.9 × 10⁵ Ω·m, while the indurated shale, with a yellowish-brown colour, showed resistivity values ranging from 6.1 × 10²–2.8 × 10⁵ Ω·m. Topsoil was clayey, with a resistivity range of 8–8.6 × 10² Ω·m and depths of 0.3–1.8 m, often weathered and replaced by associated rock outcrops. The diabase rock, in the three prospective pits mapped, showed thicknesses of between 40 and 76 m across the site. The prospective pits were identified to accommodate an estimated 2,569,450 tonnes of diabase with an average quarry pit depth of 50 m. This figure was justified by physical observations made at a nearby quarry pit and from test holes. Communities were able to prepare a geophysical appraisal of the intrusive body in their domain for economic planning and sustainability of the natural resource.
International Nuclear Information System (INIS)
Hu, Chengyao; Huang, Pei
2011-01-01
The importance of sugar and sugar-containing materials is well recognized nowadays, owing to their application in industrial processes, particularly in the food, pharmaceutical and cosmetic industries. Because of the large number of such compounds and the relatively small amount of solubility and/or diffusion coefficient data available for each, it is highly desirable to measure the solubility and/or diffusion coefficient as efficiently as possible and to improve the accuracy of the methods used. In this work, a new technique was developed for the measurement of the diffusion coefficient of a stationary solid solute in a stagnant solvent which simultaneously measures solubility, based on an inverse measurement problem algorithm with the real-time dissolved-amount profile as a function of time. This study differs from established techniques in both the experimental method and the data analysis. An experimental method was developed in which the dissolved amount of solid solute in quiescent solvent was investigated using a continuous weighing technique. In the data analysis, a hybrid genetic algorithm is used to minimize an objective function containing the calculated and measured dissolved amounts over time. This is measured on a cylindrical sample of amorphous glucose in methanol or ethanol. The calculated dissolved amount, which is a function of the unknown physical properties of the solid solute in the solvent, is obtained from the solution of the two-dimensional nonlinear inverse natural convection problem. The estimated values of the solubility of amorphous glucose in methanol and ethanol at 293 K were 32.1 g/100 g methanol and 1.48 g/100 g ethanol, respectively, in agreement with literature values, supporting the validity of the simultaneously measured diffusion coefficient. These results show the efficiency and the stability of the developed technique to simultaneously estimate the solubility and diffusion coefficient. Also
Directory of Open Access Journals (Sweden)
C. B. Alden
2018-03-01
Full Text Available Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model–data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10−5 kg s−1 of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km2 in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min.
Alden, Caroline B.; Ghosh, Subhomoy; Coburn, Sean; Sweeney, Colm; Karion, Anna; Wright, Robert; Coddington, Ian; Rieker, Gregory B.; Prasad, Kuldeep
2018-03-01
Advances in natural gas extraction technology have led to increased activity in the production and transport sectors in the United States and, as a consequence, an increased need for reliable monitoring of methane leaks to the atmosphere. We present a statistical methodology in combination with an observing system for the detection and attribution of fugitive emissions of methane from distributed potential source location landscapes such as natural gas production sites. We measure long (> 500 m), integrated open-path concentrations of atmospheric methane using a dual frequency comb spectrometer and combine measurements with an atmospheric transport model to infer leak locations and strengths using a novel statistical method, the non-zero minimum bootstrap (NZMB). The new statistical method allows us to determine whether the empirical distribution of possible source strengths for a given location excludes zero. Using this information, we identify leaking source locations (i.e., natural gas wells) through rejection of the null hypothesis that the source is not leaking. The method is tested with a series of synthetic data inversions with varying measurement density and varying levels of model-data mismatch. It is also tested with field observations of (1) a non-leaking source location and (2) a source location where a controlled emission of 3.1 × 10-5 kg s-1 of methane gas is released over a period of several hours. This series of synthetic data tests and outdoor field observations using a controlled methane release demonstrates the viability of the approach for the detection and sizing of very small leaks of methane across large distances (4+ km2 in synthetic tests). The field tests demonstrate the ability to attribute small atmospheric enhancements of 17 ppb to the emitting source location against a background of combined atmospheric (e.g., background methane variability) and measurement uncertainty of 5 ppb (1σ), when measurements are averaged over 2 min. The
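The spirit of the NZMB test — asking whether the empirical distribution of estimated source strengths excludes zero — can be illustrated with a plain bootstrap. The linear transport footprint, noise level, and unit conversion below are invented for this sketch and are not the authors' implementation.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy setup: one candidate source with hypothetical transport sensitivities h;
# observed path-integrated enhancements y = h * q + noise (q in kg/s, y in ppb,
# with an assumed 1e6 conversion factor between the two).
h = rng.uniform(0.5, 2.0, size=200)
q_true = 3.1e-5
y = h * q_true * 1.0e6 + rng.normal(0.0, 5.0, size=h.size)   # 5 ppb (1σ) noise

# Least-squares source-strength estimate for a resampled data set
def estimate(hh, yy):
    return np.dot(hh, yy) / np.dot(hh, hh)

# Bootstrap the estimate and ask whether its empirical distribution excludes zero
boot = np.empty(2000)
for b in range(boot.size):
    idx = rng.integers(0, h.size, h.size)
    boot[b] = estimate(h[idx], y[idx])

lo, hi = np.percentile(boot, [2.5, 97.5])
leak_detected = lo > 0.0   # reject the "not leaking" null if the interval excludes 0
```

With a genuinely leaking source the bootstrap interval sits well above zero; for a non-leaking location the interval straddles zero and the null is retained.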
International Nuclear Information System (INIS)
Zimmerman, D.A.; Gallegos, D.P.
1993-10-01
The groundwater flow pathway in the Culebra Dolomite aquifer at the Waste Isolation Pilot Plant (WIPP) has been identified as a potentially important pathway for radionuclide migration to the accessible environment. Consequently, uncertainties in the models used to describe flow and transport in the Culebra need to be addressed. A "Geostatistics Test Problem" is being developed to evaluate a number of inverse techniques that may be used for flow calculations in the WIPP performance assessment (PA). The Test Problem is actually a series of test cases, each being developed as a highly complex synthetic data set; the intent is for the ensemble of these data sets to span the range of possible conceptual models of groundwater flow at the WIPP site. The Test Problem analysis approach is to use a comparison of the probabilistic groundwater travel time (GWTT) estimates produced by each technique as the basis for the evaluation. Participants are given observations of head and transmissivity (possibly including measurement error) or other information such as drawdowns from pumping wells, and are asked to develop stochastic models of groundwater flow for the synthetic system. Cumulative distribution functions (CDFs) of groundwater flow (computed via particle tracking) are constructed using the head and transmissivity data generated through the application of each technique; one semi-analytical method generates the CDFs of groundwater flow directly. This paper describes the results from Test Case No. 1.
International Nuclear Information System (INIS)
Samani, Abbas; Zubovits, Judit; Plewes, Donald
2007-01-01
Understanding and quantifying the mechanical properties of breast tissues has been a subject of interest for the past two decades. This has been motivated in part by interest in modelling soft tissue response for surgery planning and virtual-reality-based surgical training. Interpreting elastography images for diagnostic purposes also requires a sound understanding of normal and pathological tissue mechanical properties. Reliable data on tissue elastic properties are very limited and those which are available tend to be inconsistent, in part as a result of measurement methodology. We have developed specialized techniques to measure tissue elasticity of normal breast tissues and tumour specimens and applied them to 169 fresh ex vivo breast tissue samples including fat and fibroglandular tissue as well as a range of benign and malignant breast tumour types. Results show that, under small deformation conditions, the elastic modulus of normal breast fat and fibroglandular tissues are similar while fibroadenomas were approximately twice the stiffness. Fibrocystic disease and malignant tumours exhibited a 3-6-fold increased stiffness with high-grade invasive ductal carcinoma exhibiting up to a 13-fold increase in stiffness compared to fibroglandular tissue. A statistical analysis showed that differences between the elastic modulus of the majority of those tissues were statistically significant. Implications for the specificity advantages of elastography are reviewed
Energy Technology Data Exchange (ETDEWEB)
Samani, Abbas [Department of Medical Biophysics/Electrical and Computer Engineering, University of Western Ontario, Medical Sciences Building, London, Ontario, N6A 5C1 (Canada); Zubovits, Judit [Department of Anatomic Pathology, Sunnybrook Health Sciences Centre, 2075 Bayview Avenue, Toronto, Ontario, M4N 3M5 (Canada); Plewes, Donald [Department of Medical Biophysics, University of Toronto, 2075 Bayview Avenue, Toronto, Ontario, M4N 3M5 (Canada)
2007-03-21
Understanding and quantifying the mechanical properties of breast tissues has been a subject of interest for the past two decades. This has been motivated in part by interest in modelling soft tissue response for surgery planning and virtual-reality-based surgical training. Interpreting elastography images for diagnostic purposes also requires a sound understanding of normal and pathological tissue mechanical properties. Reliable data on tissue elastic properties are very limited and those which are available tend to be inconsistent, in part as a result of measurement methodology. We have developed specialized techniques to measure tissue elasticity of normal breast tissues and tumour specimens and applied them to 169 fresh ex vivo breast tissue samples including fat and fibroglandular tissue as well as a range of benign and malignant breast tumour types. Results show that, under small deformation conditions, the elastic modulus of normal breast fat and fibroglandular tissues are similar while fibroadenomas were approximately twice the stiffness. Fibrocystic disease and malignant tumours exhibited a 3-6-fold increased stiffness with high-grade invasive ductal carcinoma exhibiting up to a 13-fold increase in stiffness compared to fibroglandular tissue. A statistical analysis showed that differences between the elastic modulus of the majority of those tissues were statistically significant. Implications for the specificity advantages of elastography are reviewed.
A finite-difference contrast source inversion method
International Nuclear Information System (INIS)
Abubakar, A; Hu, W; Habashy, T M; Van den Berg, P M
2008-01-01
We present a contrast source inversion (CSI) algorithm using a finite-difference (FD) approach as its backbone for reconstructing the unknown material properties of inhomogeneous objects embedded in a known inhomogeneous background medium. Unlike the CSI method using the integral equation (IE) approach, the FD-CSI method can readily employ an arbitrary inhomogeneous medium as its background. The ability to use an inhomogeneous background medium has made this algorithm very suitable for through-wall imaging and time-lapse inversion applications. Similar to the IE-CSI algorithm, the unknown contrast sources and contrast function are updated alternately to reconstruct the unknown objects without requiring the solution of the full forward problem at each iteration step in the optimization process. The FD solver is formulated in the frequency domain and is equipped with a perfectly matched layer (PML) absorbing boundary condition. The FD operator used in the FD-CSI method depends only on the background medium and the frequency of operation, and thus does not change throughout the inversion process. Therefore, at least for two-dimensional (2D) configurations, where the size of the stiffness matrix is manageable, the FD stiffness matrix can be inverted using a non-iterative approach such as Gaussian elimination for sparse matrices. In this case, an LU decomposition needs to be done only once and can then be reused for multiple source positions and in successive iterations of the inversion. Numerical experiments show that this FD-CSI algorithm has an excellent performance for inverting inhomogeneous objects embedded in an inhomogeneous background medium.
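The factor-once-then-reuse pattern the abstract describes can be sketched with SciPy's sparse LU. The operator below is a generic 2D finite-difference Laplacian plus a constant shift, standing in (as an assumption for illustration) for the frequency-domain background operator; a real FD-CSI stiffness matrix would include the PML and the actual background medium.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# 2D finite-difference operator on an n x n grid (illustrative stand-in for the
# frequency-domain FD stiffness matrix; no PML, constant background shift k2).
n = 40
t = sp.diags([1, -2, 1], [-1, 0, 1], shape=(n, n))
lap = sp.kron(sp.eye(n), t) + sp.kron(t, sp.eye(n))
k2 = 0.5
a = (lap + k2 * sp.eye(n * n)).tocsc()

# Factor once: the operator depends only on background and frequency, so the
# LU factorization is reused for every source position and every CSI iteration.
lu = splu(a)

rhs1 = np.zeros(n * n); rhs1[n * n // 2] = 1.0   # point source 1
rhs2 = np.zeros(n * n); rhs2[5] = 1.0            # point source 2
u1 = lu.solve(rhs1)                              # cheap triangular solves
u2 = lu.solve(rhs2)
```

Each additional right-hand side costs only two triangular solves, which is exactly why the one-time LU pays off across sources and iterations.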
Directory of Open Access Journals (Sweden)
Seyyed Ghoreishi
2017-09-01
Full Text Available Objective(s): In this work, paclitaxel (PX), a promising anticancer drug, was loaded in basil seed mucilage (BSM) aerogels by means of supercritical carbon dioxide (SC-CO2) technology. The effects of operating conditions were then studied on the PX mean particle size (MPS), particle size distribution (PSD) and drug loading efficiency (DLE). Methods: The SC-CO2 process employed in this research is a combination of the phase-inversion technique and the gas antisolvent (GAS) process. The effects of DMSO/water ratio (4 and 6 v/v), pressure (10–20 MPa), CO2 addition rate (1–3 mL/min) and ethanol concentration (5–10%) were studied on MPS, PSD and DLE. Scanning electron microscopy (SEM) and a Zetasizer were used for particle analysis. DLE was determined by high-performance liquid chromatography (HPLC). Results: Nanoparticles of paclitaxel (MPS of 82–131 nm, depending on process variables) with narrow PSD were successfully loaded in BSM aerogel with DLE of 28–52%. Experimental results indicated that higher DMSO/water ratio, ethanol concentration, pressure and CO2 addition rate reduced MPS and DLE. Conclusions: A modified semi-batch SC-CO2 process based on the combination of the gas antisolvent process and phase-inversion methods, using DMSO as co-solvent and ethanol as a secondary solvent, was developed for the loading of an anticancer drug, PX, in Ocimum basilicum mucilage aerogel. The experimental results showed that the mean particle size, particle size distribution, and drug loading efficiency can be controlled through the operating conditions.
Energy Technology Data Exchange (ETDEWEB)
Castaneda M, V. H.; Martinez B, M. R.; Solis S, L. O.; Castaneda M, R.; Leon P, A. A.; Hernandez P, C. F.; Espinoza G, J. G.; Ortiz R, J. M.; Vega C, H. R. [Universidad Autonoma de Zacatecas, 98000 Zacatecas, Zac. (Mexico); Mendez, R. [CIEMAT, Departamento de Metrologia de Radiaciones Ionizantes, Laboratorio de Patrones Neutronicos, Av. Complutense 22, 28040 Madrid (Spain); Gallego, E. [Universidad Politecnica de Madrid, Departamento de Ingenieria Nuclear, C. Jose Gutierrez Abascal 2, 28006 Madrid (Spain); Sousa L, M. A. [Comision Nacional de Energia Nuclear, Centro de Investigacion de Tecnologia Nuclear, Av. Pte. Antonio Carlos 6627, Pampulha, 31270-901 Belo Horizonte, Minas Gerais (Brazil)
2016-10-15
The Taguchi methodology has proved to be highly efficient for solving inverse problems, in which the values of some parameters of the model must be obtained from the observed data. There are intrinsic mathematical characteristics that make a problem an inverse one. Inverse problems appear in many branches of science, engineering and mathematics. To solve this type of problem, researchers have used different techniques. Recently, the use of techniques based on Artificial Intelligence technology has been explored by researchers. This paper presents the use of a software tool based on generalized regression artificial neural networks for the solution of inverse problems with application in high energy physics, specifically the problem of neutron spectrometry. To solve this problem we use a software tool developed in the MATLAB programming environment, which provides a friendly, intuitive and easy-to-use interface. This computational tool solves the inverse problem involved in the reconstruction of the neutron spectrum based on measurements made with a Bonner spheres spectrometric system. Given this information, the neural network is able to reconstruct the neutron spectrum with high performance and generalization capability. The tool does not require great training or technical knowledge in the development and/or use of software, so it facilitates the use of the program for the resolution of inverse problems arising in several areas of knowledge. Artificial Intelligence techniques are particularly well suited to solving inverse problems, given the characteristics of artificial neural networks and their network topology; the tool developed has therefore been very useful, since the results generated by the artificial neural network require little time in comparison with other techniques and agree with the actual experimental data. (Author)
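A generalized regression neural network is, at its core, a Gaussian-kernel-weighted average of training targets. The toy below regresses a 1D function, standing in (as an admitted simplification) for the count-rate-to-spectrum mapping of the Bonner-sphere tool; the data, kernel width, and query point are all invented for illustration.

```python
import numpy as np

# Minimal generalized regression neural network (Nadaraya-Watson form):
# a prediction is the kernel-weighted average of the training targets.
def grnn_predict(x_train, y_train, x_query, sigma=0.1):
    d2 = np.sum((x_train - x_query) ** 2, axis=1)   # squared distances to patterns
    w = np.exp(-d2 / (2.0 * sigma ** 2))            # pattern-layer activations
    return w @ y_train / np.sum(w)                  # summation / division layers

# Toy "inverse mapping": recover y = sin(2*pi*x) from noisy samples
rng = np.random.default_rng(2)
x_train = rng.uniform(0.0, 1.0, size=(200, 1))
y_train = np.sin(2 * np.pi * x_train[:, 0]) + 0.05 * rng.standard_normal(200)

y_hat = grnn_predict(x_train, y_train, np.array([0.25]), sigma=0.05)
```

There is no iterative training: the network memorizes the patterns, and the only free parameter is the kernel width `sigma`, which is one reason GRNNs evaluate quickly compared with iteratively trained networks.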
Ingram, WT
2012-01-01
Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters along with an appendix containing background material the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen
LiFAP-based PVdF-HFP microporous membranes by phase-inversion technique with Li/LiFePO₄ cell
Energy Technology Data Exchange (ETDEWEB)
Aravindan, V.; Vickraman, P. [Gandhigram Rural University, Department of Physics, Gandhigram (India); Sivashanmugam, A.; Thirunakaran, R.; Gopukumar, S. [Central Electrochemical Research Institute, Electrochemical Energy Systems Division, Karaikudi (India)
2009-12-15
Polyvinylidenefluoride-hexafluoropropylene-based (PVdF-HFP-based) gel and composite microporous membranes (GPMs and CPMs) were prepared by the phase-inversion technique in the presence of 10 wt% of AlO(OH)ₙ nanoparticles. The prepared membranes were gelled with 0.5-M LiPF₃(CF₂CF₃)₃ (lithium fluoroalkylphosphate, LiFAP) in EC:DEC (1:1 v/v) and subjected to various characterizations; the AC impedance study shows that CPMs exhibit higher conductivity than GPMs. Mechanical stability measurements on these systems reveal that CPMs exhibit a Young's modulus higher than that of the bare membranes and GPMs, and a drastic improvement in elongation at break upon addition of the nanoparticles was also noted. Transition of the host from the α to the β phase after loading of the nanosized filler was confirmed by XRD and Raman studies. Physico-chemical properties of the membranes, such as liquid uptake, porosity, surface area, and activation energy, were calculated and the results are summarized. A Li/CPM/LiFePO₄ coin cell was fabricated and its cycling performance evaluated at C/10 rate; it delivered discharge capacities of 157 and 148 mAh g⁻¹ for the first and tenth cycles, respectively. (orig.)
International Nuclear Information System (INIS)
Zhang Lin; Wang Chaoyang; Luo Xuan; Du Kai; Tu Haiyan; Fan Hong; Luo Qing; Yuan Guanghui; Huang Lizhen
2003-01-01
By a thermally induced phase-inversion technique, poly(4-methyl-1-pentene) (PMP) foams were successfully prepared; the density and pore size are 3–80 mg/cm³ and 1–20 μm, respectively. Durene/naphthalene (60/40) is confirmed as a suitable solvent/nonsolvent binary system. The PMP thermal properties are characterized by a TG-DSC system. It is found that the foam thermal properties depend on the density. The thermal analysis method is used to measure the gelation of PMP in the binary solvent/nonsolvent system, and the range of gelation temperature is preliminarily determined. The influence of the mixture system composition and of the cooling rate during foam preparation is discussed. TG-DSC is applied to determine the thermal properties of low-density PMP foams prepared in the laboratory, and the effect of density change on the thermal stability of the foams is studied. The thermal analysis data play a great role in improving the foam quality. (authors)
International Nuclear Information System (INIS)
Chang, C.J.; Anghaie, S.
1998-01-01
A numerical experimental technique is presented to find an optimum solution to an undetermined inverse gamma-ray transport problem involving the nondestructive assay of radionuclide inventory in a nuclear waste drum. The method introduced is an optimization scheme based on performing a large number of numerical simulations that account for the counting statistics, the nonuniformity of source distribution, and the heterogeneous density of the self-absorbing medium inside the waste drum. The simulation model uses forward projection and backward reconstruction algorithms. The forward projection algorithm uses a randomly selected source distribution and a first-flight kernel method to calculate external detector responses. The backward reconstruction algorithm uses the conjugate gradient method with a nonnegativity constraint or the maximum-likelihood expectation-maximization (MLEM) method to reconstruct the source distribution based on calculated detector responses. Total source activity is determined by summing the reconstructed activity of each computational grid. By conducting 10,000 numerical simulations, the error bound and the associated confidence level for the prediction of total source activity are determined. The accuracy and reliability of the simulation model are verified by performing a series of experiments in a 208-l waste barrel. Density heterogeneity is simulated by using different materials distributed in 37 egg-crate-type compartments simulating a vertical segment of the barrel. Four orthogonal detector positions are used to measure the emerging radiation field from the distributed source. Results of the performed experiments are in full agreement with the estimated error and the confidence level, which are predicted by the simulation model
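The MLEM reconstruction option mentioned above can be sketched in a few lines. The random response kernel, noise-free counts, and detector count below are hypothetical stand-ins for the first-flight kernel and the 37-compartment barrel geometry; the multiplicative update itself is the standard MLEM iteration.

```python
import numpy as np

# Hypothetical setup: A maps compartment activity to detector responses
# (stand-in for the first-flight kernel), y are the measured responses.
rng = np.random.default_rng(3)
n_src, n_det = 37, 64                     # 37 compartments; 64 detector readings (made up)
A = rng.uniform(0.01, 1.0, size=(n_det, n_src))
x_true = rng.uniform(0.0, 10.0, size=n_src)
y = A @ x_true                            # noise-free responses for the sketch

# MLEM: multiplicative updates keep the activity estimate nonnegative
x = np.ones(n_src)                        # nonnegative starting guess
sens = A.sum(axis=0)                      # sensitivity image (column sums)
for _ in range(500):
    x *= (A.T @ (y / (A @ x))) / sens

total_activity = x.sum()                  # summed reconstructed activity
```

Because every factor in the update is nonnegative, the iterate never leaves the feasible region, which is why MLEM is a natural fit for activity reconstruction.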
Mishin, V. V.; Mishin, V. M.; Karavaev, Yu.; Han, J. P.; Wang, C.
2016-07-01
We report on novel features of the saturation process of the polar cap magnetic flux and Poynting flux into the magnetosphere from the solar wind during three superstorms. In addition to the well-known effect of the interplanetary electric (Esw) and southward magnetic (interplanetary magnetic field (IMF) Bz) fields, we found that the saturation depends also on the solar wind ram pressure Pd. By means of the magnetogram inversion technique and a global MHD numerical model Piecewise Parabolic Method with a Lagrangian Remap, we explore the dependence of the magnetopause standoff distance on ram pressure and the southward IMF. Unlike earlier studies, in the considered superstorms both Pd and Bz achieve extreme values. As a result, we show that the compression rate of the dayside magnetosphere decreases with increasing Pd and the southward Bz, approaching very small values for extreme Pd ≥ 15 nPa and Bz ≤ -40 nT. This dependence suggests that finite compressibility of the magnetosphere controls saturation of superstorms.
Directory of Open Access Journals (Sweden)
María Fernanda Garcés
2017-04-01
Conclusions: Inversions of intron 22 and 1 were found in half of this group of patients. These results are reproducible and useful to identify the two most frequent mutations in severe hemophilia A patients.
Thorne, Lawrence R.
2011-01-01
I propose a novel approach to balancing equations that is applicable to all chemical-reaction equations; it is readily accessible to students via scientific calculators and basic computer spreadsheets that have a matrix-inversion application. The new approach utilizes the familiar matrix-inversion operation in an unfamiliar and innovative way; its purpose is not to identify undetermined coefficients as usual, but, instead, to compute a matrix null space (or matrix kernel). The null space then...
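The null-space computation described above is easy to demonstrate with NumPy's SVD. The sketch below balances CH4 + O2 → CO2 + H2O; the final scaling step assumes, as holds in this example, that the smallest balanced coefficient is 1 (a general tool would reduce to lowest integer terms instead).

```python
import numpy as np

# Composition matrix: rows are elements (C, H, O), columns are species
# (CH4, O2 | CO2, H2O), with the product columns negated.
A = np.array([[1, 0, -1,  0],    # carbon
              [4, 0,  0, -2],    # hydrogen
              [0, 2, -2, -1]],   # oxygen
             dtype=float)

# The one-dimensional null space of A holds the stoichiometric coefficients.
_, _, vt = np.linalg.svd(A)
v = vt[-1]                        # right singular vector of the zero singular value
v = v / np.abs(v).min()           # scale the smallest coefficient to magnitude 1
if v[0] < 0:                      # fix the overall sign so coefficients are positive
    v = -v
coeffs = np.round(v).astype(int)  # CH4 + 2 O2 -> CO2 + 2 H2O
```

The same recipe works for any single-reaction system whose null space is one-dimensional; multiple independent null vectors signal a family of balanced reactions.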
Penerapan Sparse Matrix pada Rekomendasi Berita Personal untuk Pengguna Anonim
Mulki, Rizqi
2015-01-01
Online news links are spread through social media by news agencies in order to encourage people to read news from their sites. After users have logged in to a site, they will keep reading news that is relevant to them through personalized news recommendation. But nowadays a personalized recommendation can be provided only if the site has recorded much of the user's browsing history, and it's mandatory that users log in to the site. This could be problematic if the news r...
A sparse matrix based full-configuration interaction algorithm
International Nuclear Information System (INIS)
Rolik, Zoltan; Szabados, Agnes; Surjan, Peter R.
2008-01-01
We present an algorithm related to the full-configuration interaction (FCI) method that makes complete use of the sparse nature of the coefficient vector representing the many-electron wave function in a determinantal basis. The main achievements of the presented sparse FCI (SFCI) algorithm are (i) development of an iteration procedure that avoids the storage of FCI-size vectors; (ii) development of an efficient algorithm to evaluate the effect of the Hamiltonian when both the initial and the product vectors are sparse. As a result of point (i), large disk operations can be skipped which otherwise may be a bottleneck of the procedure. At point (ii) we progress by adapting the implementation of the linear transformation by Olsen et al. [J. Chem. Phys. 89, 2185 (1988)] to the sparse case, making the algorithm applicable to larger systems and faster at the same time. The error of a SFCI calculation depends only on the dropout thresholds for the sparse vectors, and can be tuned by controlling the amount of system memory passed to the procedure. The algorithm makes it possible to perform FCI calculations on single-node workstations for systems previously accessible only by supercomputers
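A toy sketch of the thresholding idea behind point (ii), with made-up matrices (this is not the authors' implementation): the Hamiltonian is applied to a sparse coefficient vector and entries below a dropout threshold are discarded, so only significant determinants are kept between iterations.

```python
import numpy as np

# Illustrative sketch of the sparse sigma-vector step: apply H to a sparse
# coefficient vector and drop entries below a threshold. The 2x2 "Hamiltonian"
# and vector are invented for demonstration.
def sparse_apply(H, c, drop=1e-8):
    sigma = H @ c                         # sigma vector: action of H on the wave function
    sigma[np.abs(sigma) < drop] = 0.0     # dropout threshold trades memory for accuracy
    return sigma

H = np.array([[1.0, 1e-9],
              [1e-9, 2.0]])              # toy symmetric matrix
c = np.array([1.0, 0.0])                 # sparse coefficient vector
sigma = sparse_apply(H, c, drop=1e-6)    # tiny contribution at index 1 is dropped
```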
SparseM: A Sparse Matrix Package for R
Directory of Open Access Journals (Sweden)
Roger Koenker
2003-02-01
SparseM provides some basic R functionality for linear algebra with sparse matrices. Use of the package is illustrated by a family of linear model fitting functions that implement least squares methods for problems with sparse design matrices. Significant performance improvements in memory utilization and computational speed are possible for applications involving large sparse matrices.
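A rough Python/SciPy analogue of the package's use case, least-squares fitting with a sparse design matrix (SparseM itself is an R package; the data below are invented):

```python
import numpy as np
from scipy.sparse import csr_matrix
from scipy.sparse.linalg import lsqr

# Sparse least squares: only nonzero design-matrix entries are stored, which
# is where the memory and speed benefits come from for large problems.
rows = np.repeat(np.arange(6), 2)
cols = np.array([0, 1] * 6)
vals = np.array([1.0, 0.0, 1.0, 1.0, 1.0, 2.0,
                 1.0, 3.0, 1.0, 4.0, 1.0, 5.0])
X = csr_matrix((vals, (rows, cols)), shape=(6, 2))   # intercept + slope design
y = np.array([0.1, 1.0, 2.1, 2.9, 4.2, 5.0])
beta = lsqr(X, y)[0]                                  # sparse least-squares fit
```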
A Computing Platform for Parallel Sparse Matrix Computations
2016-01-05
Sameh, Ahmed H.; Klinvex, Alicia; Zhu, Yao
Directory of Open Access Journals (Sweden)
Joel Sereno
2010-01-01
Inverse kinematics is the process of converting a Cartesian point in space into a set of joint angles to more efficiently move the end effector of a robot to a desired orientation. This project investigates the inverse kinematics of a robotic hand with fingers under various scenarios. Assuming the parameters of a provided robot, a general equation for the end effector point was calculated and used to plot the region of space that it can reach. Further, the benefits obtained from the addition of a prismatic joint versus an extra variable angle joint were considered. The results confirmed that having more movable parts, such as prismatic joints and variable angles, increases the effective reach of a robotic hand.
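For a single planar two-link "finger", the Cartesian-to-joint-angle conversion can be written in closed form via the law of cosines. The link lengths below are assumptions for illustration, not the parameters of the robot in the study.

```python
import numpy as np

# Analytic inverse kinematics of a planar two-link chain: given a target (x, y),
# recover joint angles (elbow-down branch). Link lengths l1, l2 are assumed.
def two_link_ik(x, y, l1=1.0, l2=1.0):
    r2 = x * x + y * y
    c2 = (r2 - l1**2 - l2**2) / (2 * l1 * l2)      # cos(theta2), law of cosines
    theta2 = np.arccos(np.clip(c2, -1.0, 1.0))
    theta1 = np.arctan2(y, x) - np.arctan2(l2 * np.sin(theta2),
                                           l1 + l2 * np.cos(theta2))
    return theta1, theta2

def fk(t1, t2, l1=1.0, l2=1.0):
    """Forward kinematics, used to verify the IK solution."""
    return (l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
            l1 * np.sin(t1) + l2 * np.sin(t1 + t2))
```

Plotting `fk` over all admissible angle pairs traces exactly the kind of reachable region discussed above.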
Elongation cutoff technique armed with quantum fast multipole method for linear scaling.
Korchowiec, Jacek; Lewandowski, Jakub; Makowski, Marcin; Gu, Feng Long; Aoki, Yuriko
2009-11-30
A linear-scaling implementation of the elongation cutoff technique (ELG/C) that speeds up Hartree-Fock (HF) self-consistent field calculations is presented. The cutoff method avoids the known bottleneck of the conventional HF scheme, that is, diagonalization, because it operates within the low dimension subspace of the whole atomic orbital space. The efficiency of ELG/C is illustrated for two model systems. The obtained results indicate that the ELG/C is a very efficient sparse matrix algebra scheme. Copyright 2009 Wiley Periodicals, Inc.
Cosburn, K.; Roy, M.; Rowe, C. A.; Guardincerri, E.
2017-12-01
Obtaining accurate static and time-dependent shallow subsurface density structure beneath volcanic, hydrogeologic, and tectonic targets can help illuminate active processes of fluid flow and magma transport. A limitation of using surface gravity measurements for such imaging is that these observations are vastly underdetermined and non-unique. In order to home in on a more accurate solution, other data sets are needed to provide constraints, typically seismic or borehole observations. The spatial resolution of these techniques, however, is relatively poor, and a novel solution to this problem in recent years has been to use attenuation of the cosmic-ray muon flux, which provides an independent constraint on density. In this study we present a joint inversion of gravity and cosmic-ray muon flux observations to infer the density structure of a target rock volume at a well-characterized site near Los Alamos, New Mexico, USA. We investigate the shallow structure of a mesa formed by the Quaternary ash-flow tuffs on the Pajarito Plateau, flanking the Jemez volcano in New Mexico. Gravity measurements were made using a LaCoste and Romberg D meter on the surface of the mesa and inside a tunnel beneath the mesa. Muon flux measurements were also made at the mesa surface and at various points within the same tunnel using a muon detector having an acceptance region of 45 degrees from the vertical and a track resolution of several milliradians. We expect the combination of muon and gravity data to provide us with enhanced resolution as well as the ability to sense deeper structures in our region of interest. We use Bayesian joint inversion techniques on the gravity-muon dataset to test these ideas, building upon previous work using gravity inversion alone to resolve density structure in our study area. Both the regional geology and the geometry of our study area are well known, and we assess the inferred density structure from our gravity-muon joint inversion within this known
Intersections, ideals, and inversion
International Nuclear Information System (INIS)
Vasco, D.W.
1998-01-01
Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above by 50. The best fitting structure is dominantly one-dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons
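A toy analogue of the algebraic approach, using SymPy (the polynomial system below is illustrative, not the magnetotelluric problem): a Groebner basis reveals the structure of the solution set, and for a zero-dimensional system the finitely many solutions can be enumerated and counted.

```python
import sympy as sp

# Two polynomial equations in two unknowns, standing in for a discretized
# inverse problem with undetermined coefficients.
x, y = sp.symbols('x y')
eqs = [x**2 + y**2 - 5, x*y - 2]

G = sp.groebner(eqs, x, y, order='lex')   # algebraic normal form of the ideal
sols = sp.solve(eqs, [x, y])              # finite solution set
print(len(sols))                          # 4 solutions (Bezout bound: 2*2 = 4)
```

Here the solution count attains the Bezout bound; in the Antarctica example the analogous bound on the number of solutions was 50.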
Bayesian seismic AVO inversion
Energy Technology Data Exchange (ETDEWEB)
Buland, Arild
2002-07-01
A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
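The linear-Gaussian core of such a method can be sketched as follows: with a linearized forward operator, Gaussian prior, and Gaussian noise, the posterior expectation and covariance have explicit closed forms. The operator, prior, and noise level below are made up for illustration, not the Zoeppritz-based AVO operator.

```python
import numpy as np

# Explicit Gaussian posterior for d = G m + e, with prior m ~ N(mu, Sm)
# and noise e ~ N(0, Se). All matrices here are illustrative.
def gaussian_posterior(G, d, mu, Sm, Se):
    Sm_inv, Se_inv = np.linalg.inv(Sm), np.linalg.inv(Se)
    Sp = np.linalg.inv(Sm_inv + G.T @ Se_inv @ G)      # posterior covariance
    mp = Sp @ (Sm_inv @ mu + G.T @ Se_inv @ d)         # posterior expectation
    return mp, Sp

G = np.array([[1.0, 0.5], [0.2, 1.0], [1.0, 1.0]])
m_true = np.array([2.0, -1.0])
d = G @ m_true                                          # noise-free data
mp, Sp = gaussian_posterior(G, d, np.zeros(2),
                            10.0 * np.eye(2), 1e-6 * np.eye(3))
```

Because the posterior is Gaussian with known covariance, exact prediction intervals follow directly, which is what makes this class of inversion computationally fast; the abstract's observation that parameters are perfectly retrieved as noise approaches zero is visible here.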
International Nuclear Information System (INIS)
Takeuchi, Mitsuo; Wada, Shigeru; Takahashi, Hiroyuki; Hayashi, Kazuhiko; Murayama, Yoji
2000-09-01
At research reactors such as JRR-3M, operational management is carried out to ensure safe operation; for example, the excess reactivity is measured regularly and confirmed to satisfy the safety condition. The excess reactivity is calculated using the control rod positions at criticality and the control rod worths measured by the positive period method (P.P method), the conventional inverse kinetic method (IK method) and so on. The neutron source, however, influences the measurement results and introduces a measurement error. A new IK method that accounts for the influence of steady neutron sources is proposed and applied to the JRR-3M. This report shows that the proposed IK method measures control rod worth more precisely than the conventional IK method. (author)
International Nuclear Information System (INIS)
Kaneko Mikami, Wakako; Kazama, Toshiki; Sato, Hirotaka
2013-01-01
The purpose of this study was to compare two fat suppression methods in contrast-enhanced MR imaging of breast cancer at 3.0 T: the two-point Dixon method and the frequency selective inversion method. Forty female patients with breast cancer underwent contrast-enhanced three-dimensional T1-weighted MR imaging at 3.0 T. Both the two-point Dixon method and the frequency selective inversion method were applied. Quantitative analyses of the residual fat signal-to-noise ratio and the contrast-to-noise ratio (CNR) of lesion-to-breast parenchyma, lesion-to-fat, and parenchyma-to-fat were performed. Qualitative analyses of the uniformity of fat suppression, image contrast, and the visibility of breast lesions and axillary metastatic adenopathy were performed. The residual fat signal-to-noise ratio was significantly lower with the two-point Dixon method (P<0.001). All CNR values were significantly higher with the two-point Dixon method (P<0.001 and P=0.001, respectively). According to the qualitative analysis, both the uniformity of fat suppression and image contrast with the two-point Dixon method were significantly higher (P<0.001 and P=0.002, respectively). Visibility of breast lesions and metastatic adenopathy was significantly better with the two-point Dixon method (P<0.001 and P=0.03, respectively). The two-point Dixon method suppressed the fat signal more potently and improved the contrast and visibility of the breast lesions and axillary adenopathy. (author)
Schenini, L.; Beslier, M. O.; Sage, F.; Badji, R.; Galibert, P. Y.; Lepretre, A.; Dessa, J. X.; Aidi, C.; Watremez, L.
2014-12-01
Recent studies on the Algerian and the North-Ligurian margins in the Western Mediterranean have evidenced inversion-related superficial structures, such as folds and asymmetric sedimentary perched basins whose geometry hints at deep compressive structures dipping towards the continent. Deep seismic imaging of these margins is difficult due to steep slope and superficial multiples, and, in the Mediterranean context, to the highly diffractive Messinian evaporitic series in the basin. During the Algerian-French SPIRAL survey (2009, R/V Atalante), 2D marine multi-channel seismic (MCS) reflection data were collected along the Algerian Margin using a 4.5 km, 360 channel digital streamer and a 3040 cu. in. air-gun array. An advanced processing workflow has been laid out using Geocluster CGG software, which includes noise attenuation, 2D SRME multiple attenuation, surface consistent deconvolution, Kirchhoff pre-stack time migration. This processing produces satisfactory seismic images of the whole sedimentary cover, and of southward dipping reflectors in the acoustic basement along the central part of the margin offshore Great Kabylia, that are interpreted as inversion-related blind thrusts as part of flat-ramp systems. We applied this successful processing workflow to old 2D marine MCS data acquired on the North-Ligurian Margin (Malis survey, 1995, R/V Le Nadir), using a 2.5 km, 96 channel streamer and a 1140 cu. in. air-gun array. Particular attention was paid to multiple attenuation in adapting our workflow. The resulting reprocessed seismic images, interpreted with a coincident velocity model obtained by wide-angle data tomography, provide (1) enhanced imaging of the sedimentary cover down to the top of the acoustic basement, including the base of the Messinian evaporites and the sub-salt Miocene series, which appear to be tectonized as far as in the mid-basin, and (2) new evidence of deep crustal structures in the margin which the initial processing had failed to
Statistical perspectives on inverse problems
DEFF Research Database (Denmark)
Andersen, Kim Emil
Inverse problems arise in many scientific disciplines and pertain to situations where inference is to be made about a particular phenomenon from indirect measurements. A typical example, arising in diffusion tomography, is the inverse boundary value problem for non-invasive reconstruction of the interior of an object from electrical boundary measurements. One part of this thesis concerns statistical approaches for solving, possibly non-linear, inverse problems. Thus inverse problems are recast in a form suitable for statistical inference. In particular, a Bayesian approach for regularisation ... problem is given in terms of probability distributions. Posterior inference is obtained by Markov chain Monte Carlo methods, and new, powerful simulation techniques based on e.g. coupled Markov chains and simulated tempering are developed to improve the computational efficiency of the overall simulation ...
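A minimal Metropolis sampler illustrates the posterior-exploration idea on a scalar toy problem; the forward model, noise level, and prior below are invented for illustration and have nothing to do with diffusion tomography specifically.

```python
import numpy as np

# Toy Bayesian inverse problem: scalar forward model d = G*m + noise.
# Sample the posterior p(m | d) with a random-walk Metropolis chain.
rng = np.random.default_rng(0)
G, d, s = 2.0, 4.2, 0.5                         # forward operator, datum, noise sd

def log_post(m):
    # Gaussian likelihood plus a wide Gaussian prior (variance 100).
    return -(G * m - d)**2 / (2 * s**2) - m**2 / (2 * 100.0)

m, chain = 0.0, []
for _ in range(20000):
    prop = m + 0.5 * rng.normal()               # random-walk proposal
    if np.log(rng.random()) < log_post(prop) - log_post(m):
        m = prop                                # accept
    chain.append(m)

post_mean = np.mean(chain[5000:])               # discard burn-in; approx d/G = 2.1
```

Coupled chains and tempering, as mentioned above, are refinements of exactly this kind of sampler aimed at faster mixing.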
Parameter estimation and inverse problems
Aster, Richard C; Thurber, Clifford H
2005-01-01
Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web to facilitate use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...
Luciani, S.; LeNiliot, C.
2008-11-01
Two-phase and boiling flow instabilities are complex, due to phase change and the existence of several interfaces. To fully understand the high heat transfer potential of boiling flows in microscale geometries, it is vital to quantify these transfers. To perform this task, an experimental device has been designed to observe flow patterns. Analysis is carried out using an inverse method that allows us to estimate the local heat transfer while boiling occurs inside a microchannel. In our configuration, direct measurement would impair the accuracy of the sought heat transfer coefficient, because thermocouples implanted on the minichannel surface would disturb the established flow. In this communication, we solve a 3D IHCP which consists in estimating, from experimental temperature measurements, the surface temperature and the surface heat flux in a minichannel during convective boiling under several gravity levels (g, 1g, 1.8g). The IHCP is formulated as a mathematical optimization problem and solved using the boundary element method (BEM).
DEFF Research Database (Denmark)
Pivnenko, Sergey; Nielsen, Jeppe Majlund; Breinbjerg, Olav
2011-01-01
correction of general high-order probes, including non-symmetric dual-polarized antennas with independent ports. The investigation was carried out by processing with each technique the same measurement data for a challenging case with an antenna under test significantly offset from the center of rotation...
Energy Technology Data Exchange (ETDEWEB)
Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy
2011-07-01
The Ground-Based Millimeter-wave Spectrometer (GBMS) was designed and built at the State University of New York at Stony Brook in the early 1990s and since then has carried out many measurement campaigns of stratospheric O3, HNO3, CO and N2O at polar and mid-latitudes. Its HNO3 data set shed light on HNO3 annual cycles over the Antarctic continent and contributed to the validation of both generations of the satellite-based JPL Microwave Limb Sounder (MLS). Following the increasing need for long-term data sets of stratospheric constituents, we resolved to establish a long-term GBMS observation site at the Arctic station of Thule (76.5 N, 68.8 W), Greenland, beginning in January 2009, in order to track the long- and short-term interactions between the changing climate and the seasonal processes tied to the ozone depletion phenomenon. Furthermore, we updated the retrieval algorithm, adapting the Optimal Estimation (OE) method to GBMS spectral data in order to conform to the standard of the Network for the Detection of Atmospheric Composition Change (NDACC) microwave group, and to provide our retrievals with a set of averaging kernels that allow more straightforward comparisons with other data sets. The new OE algorithm was applied to GBMS HNO3 data sets from 1993 South Pole observations to date, in order to produce HNO3 version 2 (v2) profiles. A sample of results obtained at Antarctic latitudes in fall and winter and at mid-latitudes is shown here. In most conditions, v2 inversions show a sensitivity (i.e., sum of column elements of the averaging kernel matrix) of 100±20% from 20 to 45 km altitude, with somewhat worse (better) sensitivity in the Antarctic winter lower (upper) stratosphere. The 1σ uncertainty on HNO3 v2 mixing ratio vertical profiles depends on altitude and is estimated at ≈15% or 0.3 ppbv, whichever is larger. Comparisons of v2 with former (v1) GBMS HNO3 vertical profiles
Vock, David M; Wolfson, Julian; Bandyopadhyay, Sunayan; Adomavicius, Gediminas; Johnson, Paul E; Vazquez-Benitez, Gabriela; O'Connor, Patrick J
2016-06-01
Models for predicting the probability of experiencing various health outcomes or adverse events over a certain time frame (e.g., having a heart attack in the next 5 years) based on individual patient characteristics are important tools for managing patient care. Electronic health data (EHD) are appealing sources of training data because they provide access to large amounts of rich individual-level data from present-day patient populations. However, because EHD are derived by extracting information from administrative and clinical databases, some fraction of subjects will not be under observation for the entire time frame over which one wants to make predictions; this loss to follow-up is often due to disenrollment from the health system. For subjects without complete follow-up, whether or not they experienced the adverse event is unknown, and in statistical terms the event time is said to be right-censored. Most machine learning approaches to the problem have been relatively ad hoc; for example, common approaches for handling observations in which the event status is unknown include (1) discarding those observations, (2) treating them as non-events, and (3) splitting those observations into two observations: one where the event occurs and one where the event does not. In this paper, we present a general-purpose approach to account for right-censored outcomes using inverse probability of censoring weighting (IPCW). We illustrate how IPCW can easily be incorporated into a number of existing machine learning algorithms used to mine big health care data including Bayesian networks, k-nearest neighbors, decision trees, and generalized additive models. We then show that our approach leads to better calibrated predictions than the three ad hoc approaches when applied to predicting the 5-year risk of experiencing a cardiovascular adverse event, using EHD from a large U.S. Midwestern healthcare system. Copyright © 2016 Elsevier Inc. All rights reserved.
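A minimal sketch of the IPCW weighting itself, assuming a known censoring survival curve (in practice it would be estimated, e.g. with a Kaplan-Meier estimator of the censoring distribution); the data and curve below are invented:

```python
import numpy as np

# IPCW sketch for a prediction horizon tau: subjects whose outcome status by
# tau is known (event observed, or followed past tau) are up-weighted by the
# inverse probability of remaining uncensored; the rest get weight 0.
def ipcw_weights(obs_time, event, tau, S_c):
    complete = (event == 1) | (obs_time >= tau)       # outcome known by tau
    t_eval = np.minimum(obs_time, tau)
    return np.where(complete, 1.0 / S_c(t_eval), 0.0)

S_c = lambda t: np.exp(-0.1 * t)       # assumed censoring survival curve
obs = np.array([2.0, 6.0, 3.0])        # observed times
evt = np.array([1, 0, 0])              # event / complete follow-up / censored
w = ipcw_weights(obs, evt, tau=5.0, S_c=S_c)
```

These weights can then multiply the loss (or observation weight) of any of the learners mentioned above, which is why the approach is general-purpose.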
Kim, Seong-Eun; Roberts, John A; Eisenmenger, Laura B; Aldred, Booth W; Jamil, Osama; Bolster, Bradley D; Bi, Xiaoming; Parker, Dennis L; Treiman, Gerald S; McNally, J Scott
2017-02-01
Carotid artery imaging is important in the clinical management of patients at risk for stroke. Carotid intraplaque hemorrhage (IPH) presents an important diagnostic challenge. 3D magnetization prepared rapid acquisition gradient echo (MPRAGE) has been shown to accurately image carotid IPH; however, this sequence can be limited due to motion- and flow-related artifact. The purpose of this work was to develop and evaluate an improved 3D carotid MPRAGE sequence for IPH detection. We hypothesized that a radial-based k-space trajectory sequence such as "Stack of Stars" (SOS) incorporated with inversion recovery preparation would offer reduced motion sensitivity and more robust flow suppression by oversampling of central k-space. A total of 31 patients with carotid disease (62 carotid arteries) were imaged at 3T magnetic resonance imaging (MRI) with 3D IR-prep Cartesian and SOS sequences. Image quality was compared between SOS and Cartesian MPRAGE in 62 carotid arteries using t-tests and multivariable linear regression. Kappa analysis was used to determine interrater reliability. In all, 25 of 62 carotid plaques had carotid IPH by consensus from the reviewers on SOS, compared to 24 on the Cartesian sequence. Image quality was significantly higher with SOS than with Cartesian (mean 3.74 vs. 3.11), and the SOS acquisition yielded sharper image features with less motion artifact (19.4% vs. 45.2%). Interrater reliability was high for SOS (kappa = 0.89), higher than that of Cartesian (kappa = 0.84). By minimizing flow and motion artifacts and retaining high interrater reliability, SOS MPRAGE has important advantages over Cartesian MPRAGE in carotid IPH detection. J. Magn. Reson. Imaging 2017;45:410-417. © 2016 International Society for Magnetic Resonance in Medicine.
Inverse problems of geophysics
International Nuclear Information System (INIS)
Yanovskaya, T.B.
2003-07-01
This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least square fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given
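The pseudo-inverse solution mentioned above can be sketched via truncated SVD; the matrix and data below are illustrative, not a geophysical operator:

```python
import numpy as np

# Minimum-norm least-squares solution of d = G m via truncated SVD,
# i.e. the pseudo-inverse solution discussed in linearized inversion.
G = np.array([[1.0, 2.0],
              [2.0, 3.9],
              [0.5, 1.0]])              # overdetermined, nearly rank-deficient
d = np.array([3.0, 5.9, 1.5])

U, s, Vt = np.linalg.svd(G, full_matrices=False)
k = np.sum(s > 1e-10 * s[0])            # effective rank (truncation level)
m_pinv = Vt[:k].T @ ((U[:, :k].T @ d) / s[:k])   # pseudo-inverse solution
```

Truncating small singular values is the standard way to stabilize such solutions; the properties of the resulting estimate (resolution, variance) are exactly what the report analyzes.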
EDITORIAL: Inverse Problems in Engineering
West, Robert M.; Lesnic, Daniel
2007-01-01
Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.
Full Waveform Inversion for Reservoir Characterization - A Synthetic Study
Zabihi Naeini, E.; Kamath, N.; Tsvankin, I.; Alkhalifah, Tariq Ali
2017-01-01
Most current reservoir-characterization workflows are based on classic amplitude-variation-with-offset (AVO) inversion techniques. Although these methods have generally served us well over the years, here we examine full-waveform inversion (FWI
Third Harmonic Imaging using a Pulse Inversion
DEFF Research Database (Denmark)
Rasmussen, Joachim; Du, Yigang; Jensen, Jørgen Arendt
2011-01-01
The pulse inversion (PI) technique can be utilized to separate and enhance harmonic components of a waveform for tissue harmonic imaging. While most ultrasound systems can perform pulse inversion, only a few image the 3rd harmonic component. PI pulse subtraction can isolate and enhance the 3rd
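A toy simulation of the separation idea, with an invented memoryless polynomial nonlinearity standing in for tissue propagation: summing the responses to a pulse and its inverted copy cancels the odd harmonics and keeps the even ones, while subtracting them keeps the odd harmonics (fundamental and 3rd), from which the 3rd can be band-pass filtered.

```python
import numpy as np

# Pulse inversion sketch: pass a 5 MHz tone and its inverse through a toy
# nonlinearity y = x + a*x^2 + b*x^3 (a, b are illustrative coefficients).
fs, f0 = 100e6, 5e6
t = np.arange(0, 4e-6, 1 / fs)            # 400 samples, 20 full periods
x = np.sin(2 * np.pi * f0 * t)
nl = lambda p, a=0.1, b=0.05: p + a * p**2 + b * p**3

y_sum = nl(x) + nl(-x)                    # 2a x^2 -> even harmonics only
y_diff = nl(x) - nl(-x)                   # 2x + 2b x^3 -> fundamental + 3rd

f = np.fft.rfftfreq(len(t), 1 / fs)
spec_diff = np.abs(np.fft.rfft(y_diff))
spec_sum = np.abs(np.fft.rfft(y_sum))     # no 3rd-harmonic content here
```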
Inverse Kinematics of a Serial Robot
Directory of Open Access Journals (Sweden)
Amici Cinzia
2016-01-01
This work describes a technique to treat the inverse kinematics of a serial manipulator. The inverse kinematics is obtained through the numerical inversion of the Jacobian matrix, which relates the joint velocities to the end-effector velocities of the manipulator. The inversion is affected by numerical errors and, in certain conditions, due to the numerical nature of the solver, it does not converge to a reasonable solution. Thus a soft computing approach is adopted that mixes different traditional methods to improve algorithmic convergence.
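A sketch of Jacobian-based numerical IK for a planar 2R arm, using a damped-least-squares inverse as one common way to keep the iteration stable near singularities (this is an illustration of the general technique, not the soft-computing scheme of the paper; link lengths and gains are assumptions):

```python
import numpy as np

# Forward kinematics and Jacobian of a planar two-revolute-joint arm.
def fk(q, l=(1.0, 1.0)):
    return np.array([l[0] * np.cos(q[0]) + l[1] * np.cos(q[0] + q[1]),
                     l[0] * np.sin(q[0]) + l[1] * np.sin(q[0] + q[1])])

def jacobian(q, l=(1.0, 1.0)):
    s1, c1 = np.sin(q[0]), np.cos(q[0])
    s12, c12 = np.sin(q[0] + q[1]), np.cos(q[0] + q[1])
    return np.array([[-l[0] * s1 - l[1] * s12, -l[1] * s12],
                     [ l[0] * c1 + l[1] * c12,  l[1] * c12]])

def ik(target, q0, lam=0.1, n_iter=200):
    """Iterate dq = (J^T J + lam^2 I)^-1 J^T err (damped least squares)."""
    q = np.array(q0, dtype=float)
    for _ in range(n_iter):
        J = jacobian(q)
        err = target - fk(q)
        dq = np.linalg.solve(J.T @ J + lam**2 * np.eye(2), J.T @ err)
        q += dq
    return q

q = ik(np.array([1.2, 0.6]), q0=[0.3, 0.3])
```

The damping term `lam` trades convergence speed for robustness when the Jacobian is near-singular, which is precisely the failure mode the abstract describes for a plain numerical inversion.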
International Nuclear Information System (INIS)
Hicks, H.R.; Dory, R.A.; Holmes, J.A.
1983-01-01
We illustrate in some detail a 2D inverse-equilibrium solver that was constructed to analyze tokamak configurations and stellarators (the latter in the context of the average method). To ensure that the method is suitable not only to determine equilibria, but also to provide appropriately represented data for existing stability codes, it is important to be able to control the Jacobian, J̃ ≡ ∂(R,Z)/∂(ρ,θ). The form chosen is J̃ = J_0(ρ) R^l ρ, where ρ is a flux surface label and l is an integer. The initial implementation is for a fixed conducting-wall boundary, but the technique can be extended to a free-boundary model
Inverse photoemission of uranium oxides
International Nuclear Information System (INIS)
Roussel, P.; Morrall, P.; Tull, S.J.
2009-01-01
Understanding the itinerant-localised bonding role of the 5f electrons in the light actinides will afford an insight into their unusual physical and chemical properties. In recent years, the combination of core and valence band electron spectroscopies with theoretical modelling has already made significant progress in this area. However, information on the unoccupied density of states is still scarce. Compared to the forward photoemission techniques, measurements of the unoccupied states suffer from significantly lower sensitivity and resolution. In this paper, we report on our experimental apparatus, which is designed to measure the inverse photoemission spectra of the light actinides. Inverse photoemission spectra of UO2 and UO2.2, along with the corresponding core and valence electron spectra, are presented in this paper. UO2 has been reported previously, although its inclusion here allows us to compare and contrast results from our experimental apparatus with the previous Bremsstrahlung Isochromat Spectroscopy and Inverse Photoemission Spectroscopy investigations
Reverse Universal Resolving Algorithm and inverse driving
DEFF Research Database (Denmark)
Pécseli, Thomas
2012-01-01
Inverse interpretation is a semantics-based, non-standard interpretation of programs. Given a program and a value, an inverse interpreter finds all or one of the inputs that would yield the given value as output under normal forward evaluation. The Reverse Universal Resolving Algorithm is a new variant of the Universal Resolving Algorithm for inverse interpretation. The new variant outperforms the original algorithm in several cases, e.g., when unpacking a list using inverse interpretation of a pack program. It uses inverse driving as its main technique, which has not been described in detail before. Inverse driving may find application with, e.g., supercompilation, thus suggesting a new kind of program inverter.
An inverse problem for evolution inclusions
Ton, Bui An
2002-01-01
An inverse problem, the determination of the shape and a convective coefficient on a part of the boundary from partial measurements of the solution, is studied using 2-person optimal control techniques.
Forward modeling. Route to electromagnetic inversion
Energy Technology Data Exchange (ETDEWEB)
Groom, R; Walker, P [PetRos EiKon Incorporated, Ontario (Canada)
1996-05-01
Inversion of electromagnetic data is a topical subject in the literature, and much time has been devoted to understanding the convergence properties of various inverse methods. The relative lack of success of electromagnetic inversion techniques is partly attributable to difficulties in the kernel forward modeling software. These difficulties come in two broad classes: (1) completeness and robustness, and (2) convergence, execution time and model simplicity. It was demonstrated that if such problems exist in the forward modeling kernel, inversion can fail to generate reasonable results. It was suggested that classical inversion techniques, which are based on minimizing a norm of the error between the observed and the simulated data, will only be successful when these difficulties in the forward modeling kernels are properly dealt with. 4 refs., 5 figs.
A Joint Method of Envelope Inversion Combined with Hybrid-domain Full Waveform Inversion
CUI, C.; Hou, W.
2017-12-01
Full waveform inversion (FWI) aims to construct high-precision subsurface models by fully using the information in seismic records, including amplitude, travel time, phase and so on. However, high non-linearity and the absence of low-frequency information in seismic data lead to the well-known cycle-skipping problem and make the inversion prone to falling into local minima. In addition, 3D inversion methods based on the acoustic approximation ignore the elastic effects present in the real seismic wavefield, which makes inversion harder. As a result, the accuracy of the final inversion result relies heavily on the quality of the initial model. To improve the stability and quality of inversion results, multi-scale inversion, which reconstructs the subsurface model from low to high frequencies, is applied. However, very low frequencies are generally missing from seismic records. We therefore propose a joint method combining envelope inversion with hybrid-domain FWI, with forward modeling in the time domain and inversion in the frequency domain. To accelerate the inversion, we adopt CPU/GPU heterogeneous computing techniques with two levels of parallelism. At the first level, the inversion tasks are decomposed and assigned to the computation nodes by shot number. At the second level, GPU multithreaded programming is used for the computation tasks within each node, including forward modeling, envelope extraction, DFT (discrete Fourier transform) calculation and gradient calculation. Numerical tests demonstrate that the combined envelope inversion + hybrid-domain FWI obtains a more faithful and accurate result than conventional hybrid-domain FWI, and that the CPU/GPU heterogeneous parallel computation substantially improves performance.
Time-reversal and Bayesian inversion
Debski, Wojciech
2017-04-01
The probabilistic inversion technique is superior to the classical optimization-based approach in all but one aspect: it requires quite exhaustive computations, which prohibits its use in very large inverse problems such as global seismic tomography or waveform inversion, to name a few. The advantages of the approach are, however, so appealing that there is a continuous ongoing effort to make large inverse tasks like those mentioned above manageable with the probabilistic inverse approach. One promising possibility for achieving this goal relies on exploiting an internal symmetry of the seismological modeling problems at hand: time-reversal and reciprocity invariance. These two basic properties of the elastic wave equation, when incorporated into the probabilistic inversion scheme, open new horizons for Bayesian inversion. In this presentation we discuss the time-reversal symmetry property and its mathematical aspects, and propose how to combine it with probabilistic inverse theory into a compact, fast inversion algorithm. We illustrate the proposed idea with the newly developed location algorithm TRMLOC and discuss its efficiency when applied to mining-induced seismic data.
Joint Inversion of Direct Current Resistivity and Seismic Refraction Data
International Nuclear Information System (INIS)
Kurt, B.B.
2007-01-01
In this study, I assumed that the subsurface consists of horizontal layers. I developed a one-dimensional (1D) Direct Current Resistivity (DCR) and seismic refraction inversion code in MATLAB to recover the velocity, resistivity and depth of the layers. The code uses the damped least-squares technique and can invert DCR and seismic data either individually or jointly. I tested the joint inversion code on synthetic data and found that the joint inversion result is better than the results of the individual inversions: the joint inversion recovered the depth of each layer, as well as the velocities and resistivities, closer to their true values.
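The damped least-squares update at the heart of such a code can be sketched as follows (a minimal illustration in Python rather than MATLAB; the two stacked Jacobian rows standing in for the resistivity and seismic data sets are hypothetical toy values, not the 1D layered forward problem itself):

```python
import numpy as np

def damped_least_squares_step(J, r, m, damping=0.1):
    """One damped least-squares (Marquardt-Levenberg) model update:
    solves (J^T J + damping * I) dm = J^T r for the model perturbation dm,
    where J is the Jacobian of the forward model and r the data residual."""
    n = J.shape[1]
    dm = np.linalg.solve(J.T @ J + damping * np.eye(n), J.T @ r)
    return m + dm

# Hypothetical toy joint problem: two one-row "data sets" (standing in for
# resistivity and traveltime data) sharing one two-parameter model, stacked.
J = np.array([[1.0, 2.0],    # sensitivity row of data set 1
              [3.0, 1.0]])   # sensitivity row of data set 2
m_true = np.array([2.0, -1.0])
d_obs = J @ m_true           # noise-free synthetic data
m = np.zeros(2)
for _ in range(50):          # iterate until the damped updates converge
    m = damped_least_squares_step(J, d_obs - J @ m, m, damping=0.01)
```

Stacking the Jacobians of both data sets into one system is what couples the two inversions; the damping term stabilises each step when the normal equations are poorly conditioned.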
INVERSE FILTERING TECHNIQUES IN SPEECH ANALYSIS
African Journals Online (AJOL)
Dr Obe
Fast wavelet based sparse approximate inverse preconditioner
Energy Technology Data Exchange (ETDEWEB)
Wan, W.L. [Univ. of California, Los Angeles, CA (United States)
1996-12-31
Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that sparse approximate inverses could be a potential alternative that is readily parallelizable. However, for the special class of matrices A arising from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. One reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the entries of the inverse typically exhibit piecewise smooth variation. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.
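The observation that the dense inverse compresses well in a wavelet basis can be illustrated with a small self-contained sketch (a 1D Laplacian and a Haar transform; the matrix, size and threshold are illustrative choices, not the preconditioner construction of the paper):

```python
import numpy as np

def haar_matrix(n):
    """Orthonormal Haar wavelet transform matrix (n a power of two)."""
    if n == 1:
        return np.array([[1.0]])
    h = haar_matrix(n // 2)
    top = np.kron(h, [1.0, 1.0])                 # recursively transformed averages
    bot = np.kron(np.eye(n // 2), [1.0, -1.0])   # finest-scale details
    return np.vstack([top, bot]) / np.sqrt(2.0)

n = 64
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)  # 1D Laplacian
Ainv = np.linalg.inv(A)          # dense, but its entries vary piecewise smoothly
W = haar_matrix(n)
C = W @ Ainv @ W.T               # the inverse expressed in the wavelet basis
C[np.abs(C) < 1e-3 * np.abs(C).max()] = 0.0   # drop small wavelet coefficients
M = W.T @ C @ W                  # approximate inverse, sparse in the wavelet basis
rel_err = np.linalg.norm(M - Ainv) / np.linalg.norm(Ainv)
```

Many wavelet coefficients of the smooth inverse fall below the threshold, so `C` is substantially sparse while `M` remains a close approximation of the true inverse.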
Acute puerperal uterine inversion
International Nuclear Information System (INIS)
Hussain, M.; Liaquat, N.; Noorani, K.; Bhutta, S.Z; Jabeen, T.
2004-01-01
Objective: To determine the frequency, causes, clinical presentations, management and maternal mortality associated with acute puerperal inversion of the uterus. Materials and Methods: All patients who developed acute puerperal inversion of the uterus either in or outside the JPMC were included in the study; patients with chronic uterine inversion were excluded. Abdominal and vaginal examinations were done to confirm inversion and to classify it into first, second or third degree. Results: 57,036 deliveries and 36 acute uterine inversions occurred during the study period, giving a frequency of uterine inversion of 1 in 1584 deliveries. Mismanagement of the third stage of labour was responsible for uterine inversion in 75% of patients. The majority of the patients presented with shock, either hypovolemic (69%) or neurogenic (13%) in origin. Manual replacement of the uterus under general anaesthesia with 2% halothane was successful in 35 patients (97.5%); abdominal hysterectomy was done in only one patient. There were three maternal deaths due to inversion. Conclusion: Proper education and training regarding placental delivery and the diagnosis and management of uterine inversion must be imparted to maternity care providers, especially traditional birth attendants and family physicians, to prevent this potentially life-threatening condition. (author)
Real Variable Inversion of Laplace Transforms: An Application in Plasma Physics.
Bohn, C. L.; Flynn, R. W.
1978-01-01
Discusses the nature of Laplace transform techniques and explains an alternative to them: Widder's real inversion. To illustrate the power of this technique, it is applied to a difficult inversion: the problem of Landau damping. (GA)
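The Post-Widder real-variable inversion formula, f(t) = lim_{n→∞} ((-1)^n/n!) (n/t)^{n+1} F^{(n)}(n/t), can be demonstrated on a transform whose derivatives are known in closed form (the choice F(s) = 1/(s+1) is an illustrative example, not the Landau-damping application of the paper):

```python
import math

def post_widder(F_deriv, t, n):
    """Post-Widder real inversion of a Laplace transform:
    f(t) ~ ((-1)^n / n!) * (n/t)^(n+1) * F^(n)(n/t), exact as n -> infinity.
    F_deriv(k, s) must return the k-th derivative of the transform F at s.
    Convergence is slow (error ~ 1/n), which is characteristic of
    real-variable inversion; n is kept moderate here because the factorials
    and powers overflow double precision for large n."""
    s = n / t
    return ((-1) ** n / math.factorial(n)) * (n / t) ** (n + 1) * F_deriv(n, s)

# Illustrative example: F(s) = 1/(s + 1), whose derivatives are known in
# closed form and whose inverse transform is f(t) = exp(-t).
def F_deriv(k, s):
    return (-1) ** k * math.factorial(k) / (s + 1) ** (k + 1)

approx = post_widder(F_deriv, 1.0, 100)   # slowly approaches exp(-1)
```

Only real values of the transform (and its derivatives) on the positive axis are needed, which is the appeal of the method over contour integration, at the price of slow convergence and numerical sensitivity.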
3rd Annual Workshop on Inverse Problem
2015-01-01
This proceedings volume is based on papers presented at the Third Annual Workshop on Inverse Problems, which was organized by the Department of Mathematical Sciences, Chalmers University of Technology and University of Gothenburg, and took place in May 2013 in Stockholm. The purpose of this workshop was to present new analytical developments and numerical techniques for the solution of inverse problems for a wide range of applications in acoustics, electromagnetics, optical fibers, medical imaging, geophysics, etc. The contributions in this volume reflect these themes and will be beneficial to researchers working in the area of applied inverse problems.
Inverse logarithmic potential problem
Cherednichenko, V G
1996-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Inverse Kinematics using Quaternions
DEFF Research Database (Denmark)
Henriksen, Knud; Erleben, Kenny; Engell-Nørregård, Morten
In this project I describe the status of inverse kinematics research, with the focus firmly on the methods that solve the core problem. An overview of the different methods is presented. Three common methods used in inverse kinematics computation have been chosen as subjects for closer inspection.
Magnetotelluric inversion via reverse time migration algorithm of seismic data
International Nuclear Information System (INIS)
Ha, Taeyoung; Shin, Changsoo
2007-01-01
We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion, or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently, without calculating the Jacobian matrix explicitly, by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm with three inversion results for synthetic data
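The essential point, that the steepest-descent direction needs only forward and adjoint operator applications rather than an explicit Jacobian, can be sketched generically (the 2x2 linear operator below is a hypothetical stand-in for the MT forward problem, not the finite-element machinery of the paper):

```python
import numpy as np

def steepest_descent(forward, adjoint, d_obs, m0, step=0.1, iters=200):
    """Minimise ||d_obs - forward(m)||^2 using only applications of the
    forward operator and its adjoint: the gradient of the misfit is
    -adjoint(residual), so the Jacobian is never formed explicitly --
    the essence of backpropagation / reverse-time-migration style inversion."""
    m = m0.copy()
    for _ in range(iters):
        r = d_obs - forward(m)
        m = m + step * adjoint(r)
    return m

# Hypothetical 2x2 linear operator standing in for the MT forward problem.
G = np.array([[2.0, 0.5],
              [0.5, 1.0]])
m_true = np.array([1.0, -2.0])
d_obs = G @ m_true
m_est = steepest_descent(lambda m: G @ m, lambda r: G.T @ r, d_obs, np.zeros(2))
```

For a realistic PDE-based forward problem the lambdas would be replaced by a forward solve and an adjoint (back-propagated) solve, which is exactly where the Green's function symmetry pays off.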
Bayesian inversion of refraction seismic traveltime data
Ryberg, T.; Haberland, Ch
2018-03-01
We apply a Bayesian Markov chain Monte Carlo (McMC) formalism to the inversion of refraction seismic traveltime data sets to derive 2-D velocity models below linear arrays (i.e. profiles) of sources and seismic receivers. Typical refraction data sets, especially when the far-offset observations are used, are known for experimental geometries that are very poor, highly ill-posed and far from ideal; as a consequence, the structural resolution quickly degrades with depth. Conventional inversion techniques, based on regularization, potentially suffer from the choice of inversion parameters (i.e. number and distribution of cells, starting velocity models, damping and smoothing constraints, data noise level, etc.) and explore the model space only locally. McMC techniques are used for exhaustive sampling of the model space without the need for prior knowledge (or assumptions) of inversion parameters, resulting in a large number of models fitting the observations. Statistical analysis of these models allows us to derive an average (reference) solution and its standard deviation, thus providing uncertainty estimates for the inversion result. The highly non-linear character of the inversion problem, mainly caused by the experiment geometry, does not allow a reference solution and error map to be derived by a simple averaging procedure. We present a modified averaging technique, which excludes parts of the prior distribution from the posterior values in regions of poor ray coverage, thus providing reliable estimates of inversion model properties even in those parts of the models. The model is discretized by a set of Voronoi polygons (with constant slowness cells) or a triangulated mesh (with interpolation within the triangles). Forward traveltime calculations are performed by a fast, finite-difference-based eikonal solver. The method is applied to a data set from a refraction seismic survey in northern Namibia and compared to conventional tomography.
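A minimal Metropolis sampler conveys the core of the McMC formalism: sample many models fitting the data, then characterise the solution by posterior statistics. The one-parameter homogeneous model, offsets and noise level below are illustrative toy choices, not the survey setup of the study:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setting: a single homogeneous layer with traveltime t = x / v.
offsets = np.array([100.0, 200.0, 300.0])      # source-receiver offsets (m)
v_true, sigma = 2000.0, 1e-3                   # true velocity (m/s), noise (s)
t_obs = offsets / v_true + rng.normal(0.0, sigma, offsets.size)

def log_likelihood(v):
    r = t_obs - offsets / v
    return -0.5 * np.sum((r / sigma) ** 2)

# Metropolis sampling with a uniform prior on [1000, 3000] m/s.
samples, v = [], 1500.0
for _ in range(20000):
    v_prop = v + rng.normal(0.0, 20.0)         # random-walk proposal
    if 1000.0 <= v_prop <= 3000.0 and \
       np.log(rng.uniform()) < log_likelihood(v_prop) - log_likelihood(v):
        v = v_prop
    samples.append(v)
post = np.array(samples[5000:])                # discard burn-in
# The posterior mean approximates the solution, and the posterior standard
# deviation provides the uncertainty estimate that regularised inversion lacks.
```

In the paper's setting the single parameter is replaced by a Voronoi or triangulated parameterisation and the likelihood by eikonal-solver traveltimes, but the sampling logic is the same.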
International Nuclear Information System (INIS)
Burkhard, N.R.
1979-01-01
The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables
Probabilistic inversion for chicken processing lines
International Nuclear Information System (INIS)
Cooke, Roger M.; Nauta, Maarten; Havelaar, Arie H.; Fels, Ine van der
2006-01-01
We discuss an application of probabilistic inversion techniques to a model of campylobacter transmission in chicken processing lines. Such techniques are indicated when we wish to quantify a model which is new and perhaps unfamiliar to the expert community. In this case there are no measurements for estimating model parameters, and experts are typically unable to give a considered judgment. In such cases, experts are asked to quantify their uncertainty regarding variables which can be predicted by the model. The experts' distributions (after combination) are then pulled back onto the parameter space of the model, a process termed 'probabilistic inversion'. This study illustrates two such techniques, iterative proportional fitting (IPF) and PARmeter fitting for uncertain models (PARFUM). In addition, we illustrate how expert judgement on predicted observable quantities in combination with probabilistic inversion may be used for model validation and/or model criticism
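Of the two techniques, iterative proportional fitting is the simpler to sketch: a joint distribution is repeatedly rescaled until its marginals match target (here expert-derived) marginals. The 2x2 table and target values below are hypothetical:

```python
import numpy as np

def ipf(table, row_targets, col_targets, iters=100):
    """Iterative proportional fitting: alternately rescale the rows and
    columns of a joint probability table until its marginals match the
    target marginals (e.g. marginals elicited from experts)."""
    t = table.astype(float).copy()
    for _ in range(iters):
        t *= (row_targets / t.sum(axis=1))[:, None]   # match the row sums
        t *= col_targets / t.sum(axis=0)              # match the column sums
    return t

# Hypothetical example: a uniform starting table pulled onto expert marginals.
start = np.array([[1.0, 1.0],
                  [1.0, 1.0]])
fitted = ipf(start, row_targets=np.array([0.3, 0.7]),
             col_targets=np.array([0.6, 0.4]))
```

The fitted table preserves as much of the starting table's dependence structure as is compatible with the imposed marginals, which is what makes IPF a natural "pull-back" operation in probabilistic inversion.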
Sharp spatially constrained inversion
DEFF Research Database (Denmark)
Vignoli, Giulio G.; Fiandaca, Gianluca G.; Christiansen, Anders Vest C A.V.C.
2013-01-01
We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization, and discuss in particular its application to airborne electromagnetic data. Airborne surveys produce extremely large datasets, traditionally inverted using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the ill-posedness of the inversion. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared; the sharp regularization overcomes this smearing. The sharp inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user.
International Nuclear Information System (INIS)
Rosenwald, J.-C.
2008-01-01
The lecture addressed the following topics: Optimizing radiotherapy dose distribution; IMRT contributes to optimization of energy deposition; Inverse vs direct planning; Main steps of IMRT; Background of inverse planning; General principle of inverse planning; The 3 main components of IMRT inverse planning; The simplest cost function (deviation from prescribed dose); The driving variable : the beamlet intensity; Minimizing a 'cost function' (or 'objective function') - the walker (or skier) analogy; Application to IMRT optimization (the gradient method); The gradient method - discussion; The simulated annealing method; The optimization criteria - discussion; Hard and soft constraints; Dose volume constraints; Typical user interface for definition of optimization criteria; Biological constraints (Equivalent Uniform Dose); The result of the optimization process; Semi-automatic solutions for IMRT; Generalisation of the optimization problem; Driving and driven variables used in RT optimization; Towards multi-criteria optimization; and Conclusions for the optimization phase. (P.A.)
Inverse problem in hydrogeology
Carrera, Jesús; Alcolea, Andrés; Medina, Agustín; Hidalgo, Juan; Slooten, Luit J.
2005-03-01
The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to face uncertainty in the conceptual model (mainly spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), non-linearity, and often low sensitivity of the state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as is often done in other fields; instead, it should be viewed as one step in the process of understanding aquifer behavior. It is shown that current parameter estimation methods do not differ from one another in essence, although they may differ in computational details. It is argued that a broad range of inversion techniques now exists: codes usable by non-specialists, accommodation of variability through geostatistics, incorporation of geological information and of different data types (temperature, occurrence, isotope concentrations, age, etc.), and quantification of uncertainty. Given these developments, automatic calibration greatly facilitates modeling, and it is desirable that its use become standard practice.
International Nuclear Information System (INIS)
Carver, M.B.; Hanley, D.V.; Chaplin, K.R.
1979-02-01
MAKSIMA-CHEMIST was written to compute the kinetics of simultaneous chemical reactions. The ordinary differential equations, which are automatically derived from the stated chemical equations, are difficult to integrate, as they are coupled in a highly nonlinear manner and frequently involve a large range in the magnitude of the reaction rates. They form a classic 'stiff' differential equation set which can be integrated efficiently only by recently developed advanced techniques. The new program also contains provision for higher-order chemical reactions, and has a dynamic storage and decision feature. This permits it to accept any number of chemical reactions and species, and to choose an integration scheme which will perform most efficiently within the available memory. Sparse matrix techniques are used when the size and structure of the equation set are suitable. Finally, a number of post-analysis options are available, including printer and Calcomp plots of the transient response of selected species, and graphical representation of the reaction matrix. (auth)
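Why stiffness demands implicit integrators can be shown with a two-species toy system (hypothetical rate constants; backward Euler stands in here for the more sophisticated stiff solvers such a program would use):

```python
import numpy as np

# Hypothetical stiff two-species system: y0 is consumed very fast (rate 1000)
# while y1 is produced from y0 and decays slowly (rate 1) -- a large spread
# in reaction rates, exactly what makes kinetics equations 'stiff'.
A = np.array([[-1000.0, 0.0],
              [1.0, -1.0]])

def backward_euler(A, y0, h, steps):
    """Implicit (backward) Euler for y' = A y: solve (I - h A) y_new = y_old
    at each step. Unconditionally stable, so the step size is not limited by
    the fastest rate; explicit Euler with the same h = 0.01 would diverge,
    since its amplification factor for the fast mode is |1 - 1000*h| = 9 > 1."""
    I = np.eye(A.shape[0])
    y = y0.copy()
    for _ in range(steps):
        y = np.linalg.solve(I - h * A, y)
    return y

y = backward_euler(A, np.array([1.0, 0.0]), h=0.01, steps=100)   # state at t = 1
```

The fast species is damped to essentially zero while the slow species is tracked accurately, at a step size set by the slow dynamics rather than the fast ones.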
Energy Technology Data Exchange (ETDEWEB)
Lebrun, D.
1997-05-22
The aim of the dissertation is the linearized inversion of multicomponent seismic data for 3D elastic, horizontally stratified media, using the Born approximation. A Jacobian matrix is constructed; it is used to model seismic data from the elastic parameters. The inversion technique, relying on the singular value decomposition (SVD) of the Jacobian matrix, is described. Next, the resolution of the inverted elastic parameters is studied quantitatively. A first use of the technique is shown in the framework of an evaluation of a sea-bottom acquisition (synthetic data). Finally, a real data set acquired with conventional marine techniques is inverted. (author) 70 refs.
Fuzzy logic guided inverse treatment planning
International Nuclear Information System (INIS)
Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho
2003-01-01
A fuzzy logic technique was applied to optimize the weighting factors in the objective function of an inverse treatment planning system for intensity-modulated radiation therapy (IMRT). Based on this technique, the optimization of weighting factors is guided by the fuzzy rules while the intensity spectrum is optimized by a fast-monotonic-descent method. The resultant fuzzy logic guided inverse planning system is capable of finding the optimal combination of weighting factors for different anatomical structures involved in treatment planning. This system was tested using one simulated (but clinically relevant) case and one clinical case. The results indicate that the optimal balance between the target dose and the critical organ dose is achieved by a refined combination of weighting factors. With the help of fuzzy inference, the efficiency and effectiveness of inverse planning for IMRT are substantially improved
Inverse problems in the Bayesian framework
International Nuclear Information System (INIS)
Calvetti, Daniela; Somersalo, Erkki; Kaipio, Jari P
2014-01-01
The history of Bayesian methods dates back to the original works of Reverend Thomas Bayes and Pierre-Simon Laplace: the former laid down some of the basic principles of inverse probability in his classic article ‘An essay towards solving a problem in the doctrine of chances’, read posthumously to the Royal Society in 1763. Laplace, on the other hand, in his ‘Memoirs on inverse probability’ of 1774, developed the idea of updating beliefs and wrote down the celebrated Bayes’ formula in the form we know today. Although it was not yet identified as a framework for investigating inverse problems, Laplace used the formalism very much in the spirit in which it is used today in the context of inverse problems, e.g., in his study of the distribution of comets. With the evolution of computational tools, Bayesian methods have become increasingly popular in all fields of human knowledge in which conclusions need to be drawn from incomplete and noisy data; needless to say, inverse problems, almost by definition, fall into this category. Systematic work on developing a Bayesian inverse problem framework can arguably be traced back to the 1980s (the original first edition being published by Elsevier in 1987), although articles on Bayesian methodology applied to inverse problems, in particular in geophysics, had appeared much earlier. Today, as testified by the articles in this special issue, the Bayesian methodology as a framework for considering inverse problems has gained considerable popularity, and it has integrated very successfully with many traditional inverse-problems ideas and techniques, providing novel ways to interpret and implement traditional procedures in numerical analysis, computational statistics, signal analysis and data assimilation. The range of applications where the Bayesian framework has been fundamental extends from geophysics, engineering and imaging to astronomy, the life sciences and economics, and continues to grow.
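The updating of beliefs that Laplace formalized is compactly stated as P(H_i | D) ∝ P(D | H_i) P(H_i); a two-hypothesis sketch with made-up numbers:

```python
# Bayes' formula in its simplest discrete form:
# P(H_i | D) = P(D | H_i) P(H_i) / sum_j P(D | H_j) P(H_j).
# The two hypotheses and their numbers below are made up for illustration.
priors = [0.5, 0.5]              # prior beliefs P(H_i)
likelihoods = [0.9, 0.2]         # P(D | H_i): how well each H_i explains D
unnorm = [p * l for p, l in zip(priors, likelihoods)]
posterior = [u / sum(unnorm) for u in unnorm]    # updated beliefs P(H_i | D)
```

In an inverse problem the hypotheses become a continuum of candidate models and the likelihood encodes the data misfit, but the update rule is unchanged.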
DEFF Research Database (Denmark)
Mosegaard, Klaus
2012-01-01
For non-linear inverse problems, the mathematical structure of the mapping from model parameters to data is usually unknown or partly unknown. Absence of information about the mathematical structure of this function prevents us from presenting an analytical solution, so our solution depends on our search strategy. We argue that pure meta-heuristics are inefficient for large-scale, non-linear inverse problems, and that the 'no-free-lunch' theorem holds. We discuss typical objections to the relevance of this theorem. A consequence of the no-free-lunch theorem is that algorithms adapted to the mathematical structure of the problem perform more efficiently than pure meta-heuristics. We study problem-adapted inversion algorithms that exploit knowledge of the smoothness of the misfit function of the problem. Optimal sampling strategies exist for such problems, but many of these problems remain hard. © 2012 Springer-Verlag.
Inverse scale space decomposition
DEFF Research Database (Denmark)
Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane
2018-01-01
We investigate the inverse scale space flow as a method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex, even, and positively one-homogeneous regularisation functionals, can decompose data represented by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that two additional conditions on the singular vectors are sufficient for this decomposition to hold: orthogonality in the data space, and inclusion of partial sums of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range of the forward operator).
Generalized inverses theory and computations
Wang, Guorong; Qiao, Sanzheng
2018-01-01
This book begins with the fundamentals of the generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, imbedding method, finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of the generalized inverses with an undergraduate-level understanding of linear algebra.
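One concrete computational thread running through such treatments is the construction of the Moore-Penrose inverse from the singular value decomposition, A^+ = V Σ^+ U^T; a minimal sketch (the 3x2 test matrix is an arbitrary choice):

```python
import numpy as np

def pinv_svd(A, tol=1e-12):
    """Moore-Penrose generalized inverse via the SVD: A^+ = V S^+ U^T,
    where S^+ inverts singular values above tol and zeroes the rest
    (which makes the construction safe for rank-deficient matrices)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    s_inv = np.zeros_like(s)
    s_inv[s > tol] = 1.0 / s[s > tol]
    return Vt.T @ (s_inv[:, None] * U.T)

A = np.array([[1.0, 2.0],
              [2.0, 4.0],
              [0.0, 1.0]])       # an arbitrary 3x2 test matrix
Ap = pinv_svd(A)
# The four Penrose conditions characterise A^+ uniquely; the first two
# (A Ap A = A and Ap A Ap = Ap) are the ones checked most often in practice.
```

The same decomposition underlies least-squares solutions of inconsistent systems: `Ap @ b` gives the minimum-norm least-squares solution of `A x = b`.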
Some results on inverse scattering
International Nuclear Information System (INIS)
Ramm, A.G.
2008-01-01
A review of some of the author's results in the area of inverse scattering is given. The following topics are discussed: (1) Property C and applications, (2) Stable inversion of fixed-energy 3D scattering data and its error estimate, (3) Inverse scattering with 'incomplete' data, (4) Inverse scattering for inhomogeneous Schroedinger equation, (5) Krein's inverse scattering method, (6) Invertibility of the steps in Gel'fand-Levitan, Marchenko, and Krein inversion methods, (7) The Newton-Sabatier and Cox-Thompson procedures are not inversion methods, (8) Resonances: existence, location, perturbation theory, (9) Born inversion as an ill-posed problem, (10) Inverse obstacle scattering with fixed-frequency data, (11) Inverse scattering with data at a fixed energy and a fixed incident direction, (12) Creating materials with a desired refraction coefficient and wave-focusing properties. (author)
Atmospheric inverse modeling via sparse reconstruction
Hase, Nils; Miller, Scot M.; Maaß, Peter; Notholt, Justus; Palm, Mathias; Warneke, Thorsten
2017-10-01
Many applications in atmospheric science involve ill-posed inverse problems. A crucial component of many inverse problems is the proper formulation of a priori knowledge about the unknown parameters. In most cases, this knowledge is expressed as a Gaussian prior. This formulation often performs well at capturing smoothed, large-scale processes but is often ill equipped to capture localized structures like large point sources or localized hot spots. Over the last decade, scientists from a diverse array of applied mathematics and engineering fields have developed sparse reconstruction techniques to identify localized structures. In this study, we present a new regularization approach for ill-posed inverse problems in atmospheric science. It is based on Tikhonov regularization with sparsity constraint and allows bounds on the parameters. We enforce sparsity using a dictionary representation system. We analyze its performance in an atmospheric inverse modeling scenario by estimating anthropogenic US methane (CH4) emissions from simulated atmospheric measurements. Different measures indicate that our sparse reconstruction approach is better able to capture large point sources or localized hot spots than other methods commonly used in atmospheric inversions. It captures the overall signal equally well but adds details on the grid scale. This feature can be of value for any inverse problem with point or spatially discrete sources. We show an example for source estimation of synthetic methane emissions from the Barnett shale formation.
Improving Inversions of the Overlap Operator
International Nuclear Information System (INIS)
Krieg, S.; Cundy, N.; Eshof, J. van den; Frommer, A.; Lippert, Th.; Schaefer, K.
2005-01-01
We present relaxation and preconditioning techniques which accelerate the inversion of the overlap operator by a factor of four on small lattices, with larger gains as the lattice size increases. These improvements can be used in both propagator calculations and dynamical simulations
BOOK REVIEW: Inverse Problems. Activities for Undergraduates
Yamamoto, Masahiro
2003-06-01
into the nature of inverse problems and the appropriate mode of thought, chapter 1 offers historical vignettes, most of which have played an essential role in the development of natural science. These vignettes cover the first successful application of `non-destructive testing' by Archimedes (page 4) via Newton's laws of motion up to literary tomography, and readers will be able to enjoy a wide overview of inverse problems. Therefore, as the author asks, the reader should not skip this chapter. This may not be hard to do, since the headings of the sections are quite intriguing (`Archimedes' Bath', `Another World', `Got the Time?', `Head Games', etc). The author embarks on the technical approach to inverse problems in chapter 2. He has elegantly designed each section with a guide specifying course level, objective, mathematical and scientific background and appropriate technology (e.g. types of calculators required). The guides are designed such that teachers may be able to construct effective and attractive courses by themselves. The book is not intended to offer one rigidly determined course, but should be used flexibly and independently according to the situation. Moreover, every section closes with activities which can be chosen according to the students' interests and levels of ability. Some of these exercises do not have ready solutions, but require long-term study, so readers are not required to solve all of them. After chapter 5, which contains discrete inverse problems such as the algebraic reconstruction technique and the Backus-Gilbert method, there are answers and commentaries to the activities. Finally, scripts in MATLAB are attached, although they can also be downloaded from the author's web page (http://math.uc.edu/~groetsch/). This book is aimed at students but it will be very valuable to researchers wishing to retain a wide overview of inverse problems in the midst of busy research activities. A Japanese version was published in 2002.
Inversion assuming weak scattering
DEFF Research Database (Denmark)
Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus
2013-01-01
due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...
Calculation of the inverse data space via sparse inversion
Saragiotis, Christos
2011-01-01
The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function while constraining the $\ell_1$ norm of the solution, the solution being the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal.
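The abstract describes minimizing a least-squares misfit under an ℓ1 constraint on the solution. As a hedged illustration (not the authors' production solver), iterative soft thresholding (ISTA) solves the closely related ℓ1-penalized form; the matrix `A`, data `b`, and parameter values below are hypothetical stand-ins:

```python
import numpy as np

def ista(A, b, lam=0.1, step=None, iters=200):
    """Minimise ||Ax - b||^2 + lam * ||x||_1 by iterative soft thresholding."""
    if step is None:
        # A safe step size is 1/L, where L is the Lipschitz constant of the gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)          # gradient of the least-squares misfit
        z = x - step * g               # gradient descent step
        # Soft-threshold: shrink toward zero, promoting a sparse solution.
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)
    return x
```

On a noise-free synthetic problem with a sparse true solution, the iteration recovers the nonzero entries while driving the rest to zero.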
Inverse transient thermoelastic deformations in thin circular plates
Indian Academy of Sciences (India)
Bessel's functions with the help of the integral transform technique. Thermoelastic deformations are discussed with the help of temperature and are illustrated numerically. Keywords. Inverse transient; thermoelastic deformation. 1. Introduction. The inverse thermoelastic problem consists of determination of the temperature, ...
Level set methods for inverse scattering—some recent developments
International Nuclear Information System (INIS)
Dorn, Oliver; Lesselier, Dominique
2009-01-01
We give an update on recent techniques which use a level set representation of shapes for solving inverse scattering problems, thereby completing the exposition made in (Dorn and Lesselier 2006 Inverse Problems 22 R67) and (Dorn and Lesselier 2007 Deformable Models (New York: Springer) pp 61–90), and bringing it closer to the current state of the art.
Inverse problems in systems biology
International Nuclear Information System (INIS)
Engl, Heinz W; Lu, James; Müller, Stefan; Flamm, Christoph; Schuster, Peter; Kügler, Philipp
2009-01-01
Systems biology is a new discipline built upon the premise that an understanding of how cells and organisms carry out their functions cannot be gained by looking at cellular components in isolation. Instead, consideration of the interplay between the parts of systems is indispensable for analyzing, modeling, and predicting systems' behavior. Studying biological processes under this premise, systems biology combines experimental techniques and computational methods in order to construct predictive models. Both in building and utilizing models of biological systems, inverse problems arise on several occasions, for example, (i) when experimental time series and steady state data are used to construct biochemical reaction networks, (ii) when model parameters are identified that capture underlying mechanisms or (iii) when desired qualitative behavior such as bistability or limit cycle oscillations is engineered by proper choices of parameter combinations. In this paper we review principles of the modeling process in systems biology and illustrate the ill-posedness and regularization of parameter identification problems in that context. Furthermore, we discuss the methodology of qualitative inverse problems and demonstrate how sparsity enforcing regularization allows the determination of key reaction mechanisms underlying the qualitative behavior. (topical review)
Nonlinear adaptive inverse control via the unified model neural network
Jeng, Jin-Tsong; Lee, Tsu-Tian
1999-03-01
In this paper, we propose a new nonlinear adaptive inverse control via a unified model neural network. In order to overcome nonsystematic design and long training time in nonlinear adaptive inverse control, we propose the approximate transformable technique to obtain a Chebyshev Polynomials Based Unified Model (CPBUM) neural network for the feedforward/recurrent neural networks. It turns out that the proposed method can use less training time to get an inverse model. Finally, we apply this proposed method to control a magnetic bearing system. The experimental results show that the proposed nonlinear adaptive inverse control architecture provides greater flexibility and better performance in controlling magnetic bearing systems.
Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I
2017-01-01
This paper evaluates an efficient implementation to multiply the inverse of a numerator relationship matrix for genotyped animals by a vector. The computation is required for solving mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG). The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of a numerator relationship matrix including genotyped animals and their ancestors. These elements were rapidly calculated with Henderson's rule and stored as sparse matrices in memory. The multiplication was implemented as a series of sparse matrix-vector multiplications. Diagonal elements of the inverse, which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation was compared with explicit inversion with 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 sec, 3 min, and 5 min, respectively, for setting up. Only <1 sec was required for the multiplication in each PCG iteration for any of the data sets. When the equations in ssGBLUP are solved with the PCG algorithm, inversion of the relationship matrix is no longer a limiting factor in the computations.
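The key idea above, applying the inverse of a sparse matrix to a vector without ever forming the dense inverse, can be sketched generically. This is not the paper's block decomposition or Henderson's rule; the matrix below is a small synthetic stand-in for a relationship matrix:

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def inverse_times_vector(A, q):
    """Compute A^{-1} q via a sparse factorization, never forming the dense inverse."""
    lu = spla.splu(sp.csc_matrix(A))  # factor once; reusable for many right-hand sides
    return lu.solve(q)

# Synthetic sparse symmetric positive definite matrix (illustrative only).
n = 2000
B = sp.random(n, n, density=0.002, format="csr", random_state=0)
A = B @ B.T + sp.identity(n) * 0.1
q = np.ones(n)
y = inverse_times_vector(A, q)  # memory stays O(nnz), not O(n^2)
```

Inside a PCG loop the factorization is computed once and each "inverse times vector" application is a pair of cheap triangular solves, which is the design point the abstract's timing comparison reflects.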
Electrochemically driven emulsion inversion
Johans, Christoffer; Kontturi, Kyösti
2007-09-01
It is shown that emulsions stabilized by ionic surfactants can be inverted by controlling the electrical potential across the oil-water interface. The potential dependent partitioning of sodium dodecyl sulfate (SDS) was studied by cyclic voltammetry at the 1,2-dichlorobenzene|water interface. In the emulsion the potential control was achieved by using a potential-determining salt. The inversion of a 1,2-dichlorobenzene-in-water (O/W) emulsion stabilized by SDS was followed by conductometry as a function of added tetrapropylammonium chloride. A sudden drop in conductivity was observed, indicating the change of the continuous phase from water to 1,2-dichlorobenzene, i.e. a water-in-1,2-dichlorobenzene emulsion was formed. The inversion potential is well in accordance with that predicted by the hydrophilic-lipophilic deviation if the interfacial potential is appropriately accounted for.
DEFF Research Database (Denmark)
Gale, A.S.; Surlyk, Finn; Anderskouv, Kresten
2013-01-01
Evidence from regional stratigraphical patterns in Santonian−Campanian chalk is used to infer the presence of a very broad channel system (5 km across) with a depth of at least 50 m, running NNW−SSE across the eastern Isle of Wight; only the western part of the channel wall and fill is exposed. W......−Campanian chalks in the eastern Isle of Wight, involving penecontemporaneous tectonic inversion of the underlying basement structure, are rejected....
Reactivity in inverse micelles
International Nuclear Information System (INIS)
Brochette, Pascal
1987-01-01
This research thesis reports the study of the use of micro-emulsions of water in oil as reaction support. Only the 'inverse micelles' domain of the ternary mixing (water/AOT/isooctane) has been studied. The main addressed issues have been: the micro-emulsion disturbance in presence of reactants, the determination of reactant distribution and the resulting kinetic theory, the effect of the interface on electron transfer reactions, and finally protein solubilization. (Original in French)
Kitchin, CR
2013-01-01
Detectors: Optical Detection; Radio and Microwave Detection; X-Ray and Gamma-Ray Detection; Cosmic Ray Detectors; Neutrino Detectors; Gravitational Radiation; Dark Matter and Dark Energy Detection. Imaging: The Inverse Problem; Photography; Electronic Imaging; Scanning; Interferometry; Speckle Interferometry; Occultations; Radar; Electronic Images. Photometry: Photometry; Photometers. Spectroscopy: Spectroscopy; Spectroscopes. Other Techniques: Astrometry; Polarimetry; Solar Studies; Magnetometry; Computers and the Internet.
International Nuclear Information System (INIS)
Steinhauer, L.C.; Romea, R.D.; Kimura, W.D.
1997-01-01
A new method for laser acceleration is proposed based upon the inverse process of transition radiation. The laser beam intersects an electron beam traveling between two thin foils. The principle of this acceleration method is explored in terms of its classical and quantum bases and its inverse process. A closely related concept based on the inverse of diffraction radiation is also presented: this concept has the significant advantage that apertures are used to allow free passage of the electron beam. These concepts can produce net acceleration because they do not satisfy the conditions in which the Lawson-Woodward theorem applies (no net acceleration in an unbounded vacuum). Finally, practical aspects such as damage limits at the optics are employed to find an optimized set of parameters. For reasonable assumptions an acceleration gradient of 200 MeV/m requiring a laser power of less than 1 GW is projected. An interesting approach to multi-staging the acceleration sections is also presented. Copyright 1997 American Institute of Physics.
A solution to the inverse problem in ocean acoustics
Digital Repository Service at National Institute of Oceanography (India)
Murty, T.V.R.; Somayajulu, Y.K.; Mahadevan, R.; Murty, C.S.; Sastry, J.S.
stratified ocean, considering the range independent nature of the medium, geophysical inverse techniques are employed to reconstruct the sound speed profile. The reconstructed profile for a six layer ocean, with five energetic modes, is in good agreement...
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang; Youssef, Marzouk; Willcox, Karen
2014-01-01
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection-based model order reduction technique to reduce
Inversion based on computational simulations
International Nuclear Information System (INIS)
Hanson, K.M.; Cunningham, G.S.; Saquib, S.S.
1998-01-01
A standard approach to solving inversion problems that involve many parameters uses gradient-based optimization to find the parameters that best match the data. The authors discuss enabling techniques that facilitate application of this approach to large-scale computational simulations, which are the only way to investigate many complex physical phenomena. Such simulations may not seem to lend themselves to calculation of the gradient with respect to numerous parameters. However, adjoint differentiation allows one to efficiently compute the gradient of an objective function with respect to all the variables of a simulation. When combined with advanced gradient-based optimization algorithms, adjoint differentiation permits one to solve very large problems of optimization or parameter estimation. These techniques will be illustrated through the simulation of the time-dependent diffusion of infrared light through tissue, which has been used to perform optical tomography. The techniques discussed have a wide range of applicability to modeling including the optimization of models to achieve a desired design goal
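Adjoint differentiation, as described above, yields the gradient with respect to all parameters at roughly the cost of one extra backward sweep. A minimal hand-coded sketch for a toy linear time-stepping model x_{k+1} = (A0 + θ·A1)·x_k with a scalar parameter θ (the model and matrices are hypothetical, not the infrared-diffusion simulation):

```python
import numpy as np

def forward(theta, A0, A1, x0, K):
    """Simulate x_{k+1} = (A0 + theta*A1) x_k and record the trajectory."""
    A = A0 + theta * A1
    xs = [x0]
    for _ in range(K):
        xs.append(A @ xs[-1])
    return xs

def objective_and_gradient(theta, A0, A1, x0, d, K):
    """Misfit J = ||x_K - d||^2 and dJ/dtheta via one adjoint (backward) sweep."""
    A = A0 + theta * A1
    xs = forward(theta, A0, A1, x0, K)
    J = float(np.sum((xs[-1] - d) ** 2))
    lam = 2.0 * (xs[-1] - d)           # adjoint state at the final time step
    grad = 0.0
    for k in range(K - 1, -1, -1):
        grad += lam @ (A1 @ xs[k])     # contribution of step k to dJ/dtheta
        lam = A.T @ lam                # propagate the adjoint backwards in time
    return J, grad
```

The backward sweep costs about the same as one forward simulation regardless of how many parameters enter A, which is what makes gradient-based optimization of large simulations feasible.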
Testing earthquake source inversion methodologies
Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data
Inverse kinematic-based robot control
Wolovich, W. A.; Flueckiger, K. F.
1987-01-01
A fundamental problem which must be resolved in virtually all non-trivial robotic operations is the well-known inverse kinematic question. More specifically, most of the tasks which robots are called upon to perform are specified in Cartesian (x,y,z) space, such as simple tracking along one or more straight line paths or following a specified surface with compliant force sensors and/or visual feedback. In all cases, control is actually implemented through coordinated motion of the various links which comprise the manipulator; i.e., in link space. As a consequence, the control computer of every sophisticated anthropomorphic robot must contain provisions for solving the inverse kinematic problem which, in the case of simple, non-redundant position control, involves the determination of the first three link angles, θ1, θ2, and θ3, which produce a desired wrist origin position P_xw, P_yw, and P_zw at the end of link 3 relative to some fixed base frame. Researchers outline a new inverse kinematic solution and demonstrate its potential via some recent computer simulations. They also compare it to current inverse kinematic methods and outline some of the remaining problems which will be addressed in order to render it fully operational. Also discussed are a number of practical consequences of this technique beyond its obvious use in solving the inverse kinematic question.
Alternating minimisation for glottal inverse filtering
International Nuclear Information System (INIS)
Bleyer, Ismael Rodrigo; Lybeck, Lasse; Auvinen, Harri; Siltanen, Samuli; Airaksinen, Manu; Alku, Paavo
2017-01-01
A new method is proposed for solving the glottal inverse filtering (GIF) problem. The goal of GIF is to separate an acoustical speech signal into two parts: the glottal airflow excitation and the vocal tract filter. To recover such information one has to deal with a blind deconvolution problem. This ill-posed inverse problem is solved under a deterministic setting, considering unknowns on both sides of the underlying operator equation. A stable reconstruction is obtained using a double regularization strategy, alternating between fixing either the glottal source signal or the vocal tract filter. This enables not only splitting the nonlinear and nonconvex problem into two linear and convex problems, but also allows the use of the best parameters and constraints to recover each variable at a time. This new technique, called alternating minimization glottal inverse filtering (AM-GIF), is compared with two other approaches: Markov chain Monte Carlo glottal inverse filtering (MCMC-GIF), and iterative adaptive inverse filtering (IAIF), using synthetic speech signals. The recent MCMC-GIF has good reconstruction quality but high computational cost. The state-of-the-art IAIF method is computationally fast but its accuracy deteriorates, particularly for speech signals of high fundamental frequency (F0). The results show the competitive performance of the new method: with high F0, the reconstruction quality is better than that of IAIF and close to MCMC-GIF while reducing the computational complexity by two orders of magnitude. (paper)
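The alternating strategy described above turns one bilinear blind-deconvolution problem into two regularized linear solves. The following toy sketch mirrors that structure with plain ridge regularization; the signal sizes, penalty, and initialization are hypothetical, and this is not the AM-GIF algorithm itself:

```python
import numpy as np
from scipy.linalg import toeplitz

def conv_matrix(h, n):
    """Full-convolution matrix C such that C @ x == np.convolve(h, x) for len(x) == n."""
    m = len(h) + n - 1
    col = np.r_[h, np.zeros(m - len(h))]
    row = np.zeros(n)
    row[0] = h[0]
    return toeplitz(col, row)

def alternating_deconv(y, n_g, n_v, alpha=1e-6, iters=200):
    """Recover source g and filter v from y = g * v by alternating ridge solves."""
    rng = np.random.default_rng(1)
    v = rng.standard_normal(n_v)
    g = rng.standard_normal(n_g)
    for _ in range(iters):
        Cv = conv_matrix(v, n_g)   # with v fixed, y ≈ Cv @ g is linear in g
        g = np.linalg.solve(Cv.T @ Cv + alpha * np.eye(n_g), Cv.T @ y)
        Cg = conv_matrix(g, n_v)   # with g fixed, y ≈ Cg @ v is linear in v
        v = np.linalg.solve(Cg.T @ Cg + alpha * np.eye(n_v), Cg.T @ y)
    return g, v
```

As in the abstract, each subproblem is linear and convex even though the joint problem is not; note that blind deconvolution only determines g and v up to a scale exchange, so quality is judged by how well the re-convolved signal matches y.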
Introduction to Schroedinger inverse scattering
International Nuclear Information System (INIS)
Roberts, T.M.
1991-01-01
Schroedinger inverse scattering uses scattering coefficients and bound state data to compute underlying potentials. Inverse scattering has been studied extensively for isolated potentials q(x), which tend to zero as |x| → ∞. Inverse scattering for isolated impurities in backgrounds p(x) that are periodic, are Heaviside steps, are constant for x>0 and periodic for x<0, or that tend to zero as x→∞ and tend to ∞ as x→-∞, have also been studied. This paper identifies literature for the five inverse problems just mentioned, and for four other inverse problems. Heaviside-step backgrounds are discussed at length. (orig.)
Turbo-SMT: Parallel Coupled Sparse Matrix-Tensor Factorizations and Applications
Papalexakis, Evangelos E.; Faloutsos, Christos; Mitchell, Tom M.; Talukdar, Partha Pratim; Sidiropoulos, Nicholas D.; Murphy, Brian
2016-01-01
How can we correlate the neural activity in the human brain as it responds to typed words, with properties of these terms (like 'edible', 'fits in hand')? In short, we want to find latent variables that jointly explain both the brain activity and the behavioral responses. This is one of many settings of the Coupled Matrix-Tensor Factorization (CMTF) problem. Can we enhance any CMTF solver, so that it can operate on potentially very large datasets that may not fit in main memory? We introduce Turbo-SMT, a meta-method capable of doing exactly that: it boosts the performance of any CMTF algorithm (with speedups of up to 65-fold), parallelizes it, and produces sparse and interpretable solutions. Additionally, we improve upon ALS, the work-horse algorithm for CMTF, with respect to efficiency and robustness to missing values. We apply Turbo-SMT to BrainQ, a dataset consisting of a (nouns, brain voxels, human subjects) tensor and a (nouns, properties) matrix, with coupling along the nouns dimension. Turbo-SMT is able to find meaningful latent variables, as well as to predict brain activity with competitive accuracy. Finally, we demonstrate the generality of Turbo-SMT by applying it to a Facebook dataset (users, 'friends', wall-postings); there, Turbo-SMT spots spammer-like anomalies. PMID:27672406
GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase I
National Aeronautics and Space Administration — Many large-scale numerical simulations can be broken down into common mathematical routines. While the applications may differ, the need to perform functions such as...
Optimal Sparse Matrix Dense Vector Multiplication in the I/O-Model
DEFF Research Database (Denmark)
Bender, Michael A.; Brodal, Gerth Stølting; Fagerberg, Rolf
2010-01-01
of nonzero entries is kN, i.e., where the average number of nonzero entries per column is k. We investigate what is the external worst-case complexity, i.e., the best possible upper bound on the number of I/Os, as a function of k and N. We determine this complexity up to a constant factor for all meaningful...
Speculative segmented sum for sparse matrix-vector multiplication on heterogeneous processors
DEFF Research Database (Denmark)
Liu, Weifeng; Vinter, Brian
2015-01-01
of the same chip is triggered to re-arrange the predicted partial sums for a correct resulting vector. On three heterogeneous processors from Intel, AMD and nVidia, using 20 sparse matrices as a benchmark suite, the experimental results show that our method obtains significant performance improvement over...
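The abstract above is truncated, but the primitive it names, segmented sum for sparse matrix-vector multiplication, is concrete: multiply every stored nonzero by its matching x entry, then sum within row segments. A sequential NumPy sketch over the CSR format (not the paper's speculative heterogeneous implementation) might look like:

```python
import numpy as np

def spmv_segmented(values, col_idx, row_ptr, x):
    """CSR SpMV computed as a segmented sum over the array of nonzeros.

    Assumes the last row is nonempty (np.add.reduceat cannot take an index
    equal to the array length).
    """
    products = values * x[col_idx]               # one multiply per stored nonzero
    # Row pointers mark the segment boundaries; sum each segment independently.
    y = np.add.reduceat(products, row_ptr[:-1])
    # reduceat returns a single element (not zero) for empty segments; patch them.
    empty = row_ptr[:-1] == row_ptr[1:]
    y[empty] = 0.0
    return y
```

The attraction of this formulation is that both the elementwise multiply and the segmented reduction are data-parallel, which is what makes it a good fit for heterogeneous processors.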
GPU-Accelerated Sparse Matrix Solvers for Large-Scale Simulations, Phase II
National Aeronautics and Space Administration — At the heart of scientific computing and numerical analysis are linear algebra solvers. In scientific computing, the focus is on the partial differential equations...
Developments in inverse photoemission spectroscopy
International Nuclear Information System (INIS)
Sheils, W.; Leckey, R.C.G.; Riley, J.D.
1996-01-01
In the 1950's and 1960's, Photoemission Spectroscopy (PES) established itself as the major technique for the study of the occupied electronic energy levels of solids. During this period the field divided into two branches: X-ray Photoemission Spectroscopy (XPS) for photon energies greater than ∼1000 eV, and Ultra-violet Photoemission Spectroscopy (UPS) for photon energies below ∼100 eV. By the 1970's XPS and UPS had become mature techniques. Like XPS, Bremsstrahlung Isochromat Spectroscopy (BIS, at x-ray energies) does not have the momentum-resolving ability of UPS that has contributed much to the understanding of the occupied band structures of solids. BIS moved into a new energy regime in 1977 when Dose employed a Geiger-Mueller tube to obtain density of unoccupied states data from a tantalum sample at a photon energy of ∼9.7 eV. At similar energies, the technique has since become known as Inverse Photoemission Spectroscopy (IPS), in acknowledgment of its complementary relationship to UPS and to distinguish it from the higher energy BIS. Drawing on decades of UPS expertise, IPS has quickly moved into areas of interest where UPS has been applied: metals, semiconductors, layer compounds, adsorbates, ferromagnets, and superconductors. At La Trobe University an IPS facility has been constructed. This presentation reports on developments in the experimental and analytical techniques of IPS that have been made there. The results of a study of the unoccupied bulk and surface bands of GaAs are presented.
International Nuclear Information System (INIS)
Sugimoto, Yoshihiro
2014-01-01
A restricted stripe-like zone suffered major damage due to the 1995 Hyogo-ken Nanbu earthquake, and ground motion of the south side of the Kashiwazaki NPP site was much greater than that of the north side in the 2007 Niigata-ken Chuetsu-oki earthquake. One reason for these phenomena is thought to be the focusing effect due to irregularly shaped sedimentary basins (e.g., basin-edge structure, fold structure, etc.). This indicates that precise evaluation of S-wave velocity structure is important. A calculation program that was developed to make S-wave velocity models using the joint inversion method was presented. This program unifies various geophysical and geological data and can make a complex structure model for evaluating strong ground motion with high precision. (author)
Inverse Faraday Effect Revisited
Mendonça, J. T.; Ali, S.; Davies, J. R.
2010-11-01
The inverse Faraday effect is usually associated with circularly polarized laser beams. However, it was recently shown that it can also occur for linearly polarized radiation [1]. The quasi-static axial magnetic field generated by a laser beam propagating in a plasma can be calculated by considering both the spin and the orbital angular momenta of the laser pulse. A net spin is present when the radiation is circularly polarized and a net orbital angular momentum is present if there is any deviation from perfect rotational symmetry. This orbital angular momentum has recently been discussed in the plasma context [2], and can give an additional contribution to the axial magnetic field, thus enhancing or reducing the inverse Faraday effect. As a result, this effect that is usually attributed to circular polarization can also be excited by linearly polarized radiation, if the incident laser propagates in a Laguerre-Gauss mode carrying a finite amount of orbital angular momentum. [1] S. Ali, J.R. Davies and J.T. Mendonca, Phys. Rev. Lett. 105, 035001 (2010). [2] J. T. Mendonca, B. Thidé, and H. Then, Phys. Rev. Lett. 102, 185005 (2009).
Inverse vs. forward breast IMRT planning
International Nuclear Information System (INIS)
Mihai, Alina; Rakovitch, Eileen; Sixel, Katharina; Woo, Tony; Cardoso, Marlene; Bell, Chris; Ruschin, Mark; Pignol, Jean-Philippe
2005-01-01
Breast intensity-modulated radiation therapy (IMRT) improves dose distribution homogeneity within the whole breast. Previous publications report the use of inverse or forward dose optimization algorithms. Because the inverse technique is not widely available in commercial treatment planning systems, it is important to compare the 2 algorithms. The goal of this work is to compare them on a prospective cohort of 30 patients. Dose distributions were evaluated on differential dose-volume histograms using the volumes receiving more than 105% (V105) and 110% (V110) of the prescribed dose, and on the maximum dose (Dmax), or hot spot, and the sagittal dose gradient (SDG), being the gradient between the dose on the inframammary crease and the dose prescribed. The data were analyzed using the Wilcoxon signed rank test. The inverse planning significantly improves the V105 (mean value 9.7% vs. 14.5%, p = 0.002), and the V110 (mean value 1.4% vs. 3.2%, p = 0.006). However, the SDG is not statistically significantly different for either algorithm. Looking at the potential impact on skin acute reaction, although there is a significant reduction of V110 using an inverse algorithm, it is unlikely this 1.6% volume reduction will present a significant clinical advantage over a forward algorithm. Both algorithms are equivalent in removing the hot spots on the inframammary fold, where acute skin reactions occur more frequently using a conventional wedge technique. Based on these results, we recommend that both forward and inverse algorithms should be considered for breast IMRT planning
Computer-Aided Numerical Inversion of Laplace Transform
Directory of Open Access Journals (Sweden)
Umesh Kumar
2000-01-01
This paper explores the technique for the computer aided numerical inversion of the Laplace transform. The inversion technique is based on the properties of a family of three parameter exponential probability density functions. The only limitation in the technique is the word length of the computer being used. The Laplace transform has been used extensively in the frequency domain solution of linear, lumped time invariant networks but its application to the time domain has been limited, mainly because of the difficulty in finding the necessary poles and residues. The numerical inversion technique mentioned above does away with the poles and residues but uses precomputed numbers to find the time response. This technique is applicable to the solution of partial differential equations and certain classes of linear systems with time varying components.
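The paper's own scheme, built on a family of three-parameter exponential probability density functions, is not reproduced here; as a hedged illustration of the same idea (precomputed weights instead of poles and residues), the sketch below uses the classical Gaver-Stehfest algorithm:

```python
import math

def stehfest_weights(N):
    """Gaver-Stehfest weights V_k for an even number of terms N."""
    V = []
    for k in range(1, N + 1):
        s = 0.0
        for j in range((k + 1) // 2, min(k, N // 2) + 1):
            s += (j ** (N // 2) * math.factorial(2 * j)
                  / (math.factorial(N // 2 - j) * math.factorial(j)
                     * math.factorial(j - 1) * math.factorial(k - j)
                     * math.factorial(2 * j - k)))
        V.append((-1) ** (k + N // 2) * s)
    return V

def invert_laplace(F, t, N=12):
    """Approximate f(t) from its Laplace transform F(s), evaluated on the real axis."""
    ln2t = math.log(2.0) / t
    V = stehfest_weights(N)
    return ln2t * sum(V[k - 1] * F(k * ln2t) for k in range(1, N + 1))
```

As the abstract notes for its own method, word length is the limiting factor: the weights alternate in sign and grow rapidly with N, so in double precision N beyond roughly 14-16 degrades rather than improves accuracy.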
Directory of Open Access Journals (Sweden)
Markus Spiliotis
Inverse fusion PCR cloning (IFPC) is an easy, PCR based three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with a free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows a fusion with the vector by an overlap extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and experimental steps are reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as templates for the insertion, and clean-up of the insert fragment is not urgently required. The whole cloning procedure can be performed within a minimal hands-on time and results in the generation of hundreds to tens of thousands of positive colonies, with a minimal background.
Transmuted Generalized Inverse Weibull Distribution
Merovci, Faton; Elbatal, Ibrahim; Ahmed, Alaa
2013-01-01
A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We will use the quadratic rank transmutation map (QRTM) in order to generate a flexible family of probability distributions taking the generalized inverse Weibull distribution as the base value distribution by introducing a new parameter that would offer more distributional flexibility. Various structural properties including explicit expression...
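The quadratic rank transmutation map referenced above turns a base CDF G into F(x) = (1 + λ)G(x) − λG(x)² with |λ| ≤ 1. A sketch using one common parameterization of the generalized inverse Weibull base, G(x) = exp(−γ(θ/x)^β); the parameter names follow common usage and are not necessarily the paper's:

```python
import math

def giw_cdf(x, theta, beta, gamma=1.0):
    """Generalized inverse Weibull CDF (an assumed, common parameterization)."""
    return math.exp(-gamma * (theta / x) ** beta)

def transmuted_cdf(x, lam, theta, beta, gamma=1.0):
    """Quadratic rank transmutation: F(x) = (1 + lam) G(x) - lam G(x)^2, |lam| <= 1."""
    G = giw_cdf(x, theta, beta, gamma)
    return (1.0 + lam) * G - lam * G * G
```

Setting λ = 0 recovers the base distribution, and since d F/d G = (1 + λ) − 2λG ≥ 0 on [0, 1] for |λ| ≤ 1, F remains a valid, monotone CDF for the whole parameter range.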
Inverse planning for x-ray rotation therapy: a general solution of the inverse problem
International Nuclear Information System (INIS)
Oelfke, U.; Bortfeld, T.
1999-01-01
Rotation therapy with photons is currently under investigation for the delivery of intensity modulated radiotherapy (IMRT). An analytical approach for inverse treatment planning of this radiotherapy technique is described. The inverse problem for the delivery of arbitrary 2D dose profiles is first formulated and then solved analytically. In contrast to previously applied strategies for solving the inverse problem, it is shown that the most general solution for the fluence profiles consists of two independent solutions of different parity. A first analytical expression for both fluence profiles is derived. The mathematical derivation includes two different strategies, an elementary expansion of fluence and dose into polynomials and a more practical approach in terms of Fourier transforms. The obtained results are discussed in the context of previous work on this problem. (author)
Calculation of the inverse data space via sparse inversion
Saragiotis, Christos; Doulgeris, Panagiotis C.; Verschuur, Dirk Jacob Eric
2011-01-01
The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from
Inverse feasibility problems of the inverse maximum flow problems
Indian Academy of Sciences (India)
Pages 199–209. © Indian Academy of Sciences. Inverse feasibility problems of the inverse maximum flow problems. Adrian Deaconu and Eleonor Ciurea, Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Brasov, Iuliu Maniu st. 50, Romania.
Inverse problem in radionuclide transport
International Nuclear Information System (INIS)
Yu, C.
1988-01-01
The disposal of radioactive waste must comply with the performance objectives set forth in 10 CFR 61 for low-level waste (LLW) and 10 CFR 60 for high-level waste (HLW). To determine probable compliance, the proposed disposal system can be modeled to predict its performance. One of the difficulties encountered in such a study is modeling the migration of radionuclides through a complex geologic medium for the long term. Although many radionuclide transport models exist in the literature, the accuracy of the model prediction is highly dependent on the model parameters used. The problem of using known parameters in a radionuclide transport model to predict radionuclide concentrations is a direct problem (DP); whereas the reverse of DP, i.e., the parameter identification problem of determining model parameters from known radionuclide concentrations, is called the inverse problem (IP). In this study, a procedure to solve IP is tested, using the regression technique. Several nonlinear regression programs are examined, and the best one is recommended. 13 refs., 1 tab
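The regression-based inverse procedure described above can be sketched with a deliberately toy transport model; the model form c(x) = c0·exp(−x/L), its parameters, and the synthetic data are all hypothetical, standing in for a real radionuclide transport code:

```python
import numpy as np
from scipy.optimize import curve_fit

# Hypothetical toy transport model: concentration decays exponentially with
# distance, c(x) = c0 * exp(-x / L), where L is an effective migration length.
def model(x, c0, L):
    return c0 * np.exp(-x / L)

# "Observed" concentrations: direct-problem (DP) output plus measurement noise.
x_obs = np.linspace(0.0, 10.0, 20)
rng = np.random.default_rng(42)
c_obs = model(x_obs, 5.0, 3.0) + rng.normal(0.0, 0.01, x_obs.size)

# Inverse problem (IP): recover the model parameters (c0, L) from concentrations.
params, cov = curve_fit(model, x_obs, c_obs, p0=[1.0, 1.0])
c0_hat, L_hat = params
```

The covariance matrix returned by the fit also gives a first look at parameter identifiability, which is the crux of such inverse problems when the transport model is nonlinear in its parameters.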
Displacement Parameter Inversion for a Novel Electromagnetic Underground Displacement Sensor
Directory of Open Access Journals (Sweden)
Nanying Shentu
2014-05-01
Underground displacement monitoring is an effective method to explore deep into rock and soil masses for execution of subsurface displacement measurements. It is not only an important means of geological hazards prediction and forecasting, but also a forefront, hot and sophisticated subject in current geological disaster monitoring. In previous research, the authors had designed a novel electromagnetic underground horizontal displacement sensor (called the H-type sensor) by combining basic electromagnetic induction principles with modern sensing techniques, and established a mutual voltage measurement theoretical model called the Equation-based Equivalent Loop Approach (EELA). Based on that work, this paper presents an underground displacement inversion approach named "EELA forward modeling-approximate inversion method". Combining the EELA forward simulation approach with the approximate optimization inversion theory, it can deduce the underground horizontal displacement through parameter inversion of the H-type sensor. Comprehensive and comparative studies have been conducted between the experimentally measured and theoretically inversed values of horizontal displacement under counterpart conditions. The results show that when the measured horizontal displacements are in the 0–100 mm range, the horizontal displacement inversion discrepancy is generally less than 3 mm under varied tilt angle and initial axial distance conditions, which indicates that the proposed parameter inversion method can predict underground horizontal displacement measurements effectively and robustly for the H-type sensor, and that the technique is applicable for practical geo-engineering applications.
Inverse radiative transfer problems in two-dimensional heterogeneous media
International Nuclear Information System (INIS)
Tito, Mariella Janette Berrocal
2001-01-01
The analysis of inverse problems in participating media where emission, absorption and scattering take place has several relevant applications in engineering and medicine. Some of the techniques developed for the solution of inverse problems have as a first step the solution of the direct problem. In this work the discrete ordinates method has been used for the solution of the linearized Boltzmann equation in two-dimensional Cartesian geometry. The Levenberg-Marquardt method has been used for the solution of the inverse problem of internal source and absorption and scattering coefficient estimation. (author)
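The Levenberg-Marquardt iteration mentioned above can be sketched in a few lines. This is a minimal, generic implementation fitting a hypothetical two-parameter decay model, not the radiative-transfer setup of the paper:

```python
import numpy as np

def levenberg_marquardt(model, jac, p0, t, y, lam=1e-3, iters=50):
    """Minimal Levenberg-Marquardt loop for nonlinear least squares."""
    p = np.asarray(p0, dtype=float)
    for _ in range(iters):
        r = y - model(t, p)                     # residuals
        J = jac(t, p)                           # model Jacobian wrt parameters
        step = np.linalg.solve(J.T @ J + lam * np.eye(len(p)), J.T @ r)
        p_new = p + step
        if np.sum((y - model(t, p_new)) ** 2) < np.sum(r ** 2):
            p, lam = p_new, lam * 0.5           # accept step, reduce damping
        else:
            lam *= 2.0                          # reject step, damp harder
    return p

# Hypothetical direct problem: attenuation-like decay y = a * exp(-b * t)
model = lambda t, p: p[0] * np.exp(-p[1] * t)
jac = lambda t, p: np.column_stack([np.exp(-p[1] * t),
                                    -p[0] * t * np.exp(-p[1] * t)])

t = np.linspace(0.0, 5.0, 40)
y = model(t, [2.0, 0.7])                        # noise-free synthetic data
p = levenberg_marquardt(model, jac, [1.0, 1.0], t, y)
print(np.round(p, 4))                           # ≈ [2.0, 0.7]
```

The damping parameter `lam` interpolates between Gauss-Newton (small `lam`) and gradient descent (large `lam`), which is what makes the method robust for inverse coefficient estimation.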
Unfolding in particle physics: A window on solving inverse problems
International Nuclear Information System (INIS)
Spano, F.
2013-01-01
Unfolding is the ensemble of techniques aimed at resolving inverse, ill-posed problems. A pedagogical introduction to the origin and main problems related to unfolding is presented and used as the stepping stone towards the illustration of some of the most common techniques that are currently used in particle physics experiments. (authors)
Face inversion increases attractiveness.
Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A
2017-07-01
Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it has highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode, and study how facial attractiveness is assessed. Faces, rotated at 90° (tilting to either side) and 180°, were rated on attractiveness and distinctiveness scales. For both orientations, we found that rotated faces were rated as more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.
Zhang, Dongliang
2013-01-01
To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source, compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate and the spatial resolution of MWI are, respectively, faster and higher than those of FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. A possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.
An interpretation of signature inversion
International Nuclear Information System (INIS)
Onishi, Naoki; Tajima, Naoki
1988-01-01
An interpretation in terms of the cranking model is presented to explain why signature inversion occurs for positive γ of the axially asymmetric deformation parameter and emerges in specific orbitals. By introducing a continuous variable, the eigenvalue equation can be reduced to a one-dimensional Schroedinger equation, by means of which one can easily understand the cause of signature inversion. (author)
Inverse problems for Maxwell's equations
Romanov, V G
1994-01-01
The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.
Moebius inverse problem for distorted black holes
International Nuclear Information System (INIS)
Rosu, H.
1993-01-01
Hawking "thermal" radiation could be a means to detect black holes of micron sizes, which may be hovering through the universe. We consider these micro black holes to be distorted by the presence of some distribution of matter representing a convolution factor for their Hawking radiation. One may hope to determine from their Hawking signals the temperature distribution of their material shells by the inverse black body problem. In 1990, Nan-xian Chen used a so-called modified Moebius transform to solve the inverse black body problem. We discuss and apply this technique to Hawking radiation. Some comments on supersymmetric applications of the Moebius function and transform are also added. (author). 22 refs
Algebraic properties of generalized inverses
Cvetković‐Ilić, Dragana S
2017-01-01
This book addresses selected topics in the theory of generalized inverses. Following a discussion of the “reverse order law” problem and certain problems involving completions of operator matrices, it subsequently presents a specific approach to solving the problem of the reverse order law for {1} -generalized inverses. Particular emphasis is placed on the existence of Drazin invertible completions of an upper triangular operator matrix; on the invertibility and different types of generalized invertibility of a linear combination of operators on Hilbert spaces and Banach algebra elements; on the problem of finding representations of the Drazin inverse of a 2x2 block matrix; and on selected additive results and algebraic properties for the Drazin inverse. In addition to the clarity of its content, the book discusses the relevant open problems for each topic discussed. Comments on the latest references on generalized inverses are also included. Accordingly, the book will be useful for graduate students, Ph...
Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing
Chu, W. P.
1985-01-01
The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
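Chahine's relaxation update is simple enough to demonstrate directly: each unknown is rescaled by the ratio of the observed to the predicted measurement it dominates. The sketch below assumes a positive lower-triangular kernel, as in the limb-viewing (onion-peeling) geometry described above; the matrix values are arbitrary:

```python
import numpy as np

def chahine(K, y, x0, iters=500):
    """Chahine's nonlinear relaxation for y = K x with positive entries:
    multiplicative updates keep the solution positive and need no matrix inverse."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        x = x * (y / (K @ x))
    return x

# Lower-triangular kernel with nonzero diagonal, as in the convergence proof
K = np.tril(np.random.default_rng(0).uniform(0.5, 1.5, (5, 5)))
x_true = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = K @ x_true
x = chahine(K, y, np.ones(5))
print(np.allclose(x, x_true))   # True
```

With a triangular kernel the first unknown converges in one step and the others follow row by row, which mirrors the paper's argument that convergence only requires nonzero diagonal elements.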
International Nuclear Information System (INIS)
Kimura, W.D.
1993-01-01
The final report describes work performed to investigate inverse Cherenkov acceleration (ICA) as a promising method for laser particle acceleration. In particular, an improved configuration of ICA is being tested in an experiment presently underway on the Accelerator Test Facility (ATF). In the experiment, the high peak power (∼10 GW) linearly polarized ATF CO2 laser beam is converted to a radially polarized beam. This beam is focused with an axicon at the Cherenkov angle onto the ATF 50-MeV e-beam inside a hydrogen gas cell, where the gas acts as the phase-matching medium of the interaction. An energy gain of ∼12 MeV is predicted assuming a delivered laser peak power of 5 GW. The experiment is divided into two phases. The Phase I experiments, which were completed in the spring of 1992, were conducted before the ATF e-beam was available and involved several successful tests of the optical systems. Phase II experiments are with the e-beam and laser beam, and are still in progress. The ATF demonstrated delivery of the e-beam to the experiment in Dec. 1992. A preliminary "debugging" run with the e-beam and laser beam occurred in May 1993. This revealed the need for some experimental modifications, which have been implemented. The second run is tentatively scheduled for October or November 1993. In parallel with the experimental efforts has been ongoing theoretical work to support the experiment and investigate improvements and/or offshoots. One exciting offshoot has been theoretical work showing that free-space laser acceleration of electrons is possible using a radially polarized, axicon-focused laser beam, but without any phase-matching gas. The Monte Carlo code used to model the ICA process has been upgraded and expanded to handle different types of laser beam input profiles
Proportional Derivative Control with Inverse Dead-Zone for Pendulum Systems
Directory of Open Access Journals (Sweden)
José de Jesús Rubio
2013-01-01
A proportional derivative controller with inverse dead-zone is proposed for the control of pendulum systems. The proposed method has the characteristic that the inverse dead-zone cancels the pendulum dead-zone. Asymptotic stability of the proposed technique is guaranteed by Lyapunov analysis. Simulations of two pendulum systems show the effectiveness of the proposed technique.
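The cancellation idea is easy to check in isolation. The sketch below uses an assumed symmetric dead-band width d = 0.4 (not a value from the paper) and shows that composing the inverse dead-zone pre-compensator with the actuator dead-zone recovers the commanded signal exactly:

```python
import numpy as np

def dead_zone(u, d=0.4):
    """Actuator dead-zone: inputs with |u| <= d produce no output."""
    return np.where(np.abs(u) > d, u - d * np.sign(u), 0.0)

def inverse_dead_zone(v, d=0.4):
    """Pre-compensator: shift any nonzero command past the dead band."""
    return np.where(v != 0.0, v + d * np.sign(v), 0.0)

v = np.linspace(-2.0, 2.0, 9)            # PD controller commands
u = dead_zone(inverse_dead_zone(v))      # what the plant actually receives
print(np.allclose(u, v))                 # True: dead-zone exactly cancelled
```

In practice the dead-band width is only known approximately, which is why the paper needs a Lyapunov argument rather than exact algebraic cancellation.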
A Generalization of the Spherical Inversion
Ramírez, José L.; Rubiano, Gustavo N.
2017-01-01
In the present article, we introduce a generalization of the spherical inversion. In particular, we define an inversion with respect to an ellipsoid, and prove several properties of this new transformation. The inversion in an ellipsoid is the generalization of the elliptic inversion to the three-dimensional space. We also study the inverse images…
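Classical inversion in a sphere, which the ellipsoidal construction above generalizes, can be verified numerically: the image point lies on the same ray from the center with |P′ − c|·|P − c| = r², and applying the map twice returns the original point:

```python
import numpy as np

def invert_in_sphere(p, center, r):
    """Inversion in the sphere of radius r about center:
    p maps to center + r^2 (p - center) / |p - center|^2."""
    v = p - center
    return center + (r ** 2 / np.dot(v, v)) * v

c = np.array([0.0, 0.0, 0.0])
p = np.array([3.0, 0.0, 4.0])            # |p - c| = 5
q = invert_in_sphere(p, c, r=2.0)
print(round(float(np.linalg.norm(q - c) * np.linalg.norm(p - c)), 6))  # 4.0 = r^2
print(np.allclose(invert_in_sphere(q, c, 2.0), p))                     # True (involution)
```

The ellipsoid version in the article replaces the single radius with direction-dependent scaling; the sphere case above is the special case where all semi-axes are equal.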
Neutron inverse kinetics via Gaussian Processes
International Nuclear Information System (INIS)
Picca, Paolo; Furfaro, Roberto
2012-01-01
Highlights: ► A novel technique for the interpretation of experiments in ADS is presented. ► The technique is based on Bayesian regression, implemented via Gaussian Processes. ► GPs overcome the limits of classical methods based on the PK approximation. ► Results compare GP and ANN performance, underlining similarities and differences. - Abstract: The paper introduces the application of Gaussian Processes (GPs) to determine the subcriticality level in accelerator-driven systems (ADSs) through the interpretation of pulsed experiment data. ADSs have peculiar kinetic properties due to their special core design. For this reason, classical inversion techniques based on point kinetics (PK) generally fail to generate an accurate estimate of reactor subcriticality. Similarly to Artificial Neural Networks (ANNs), Gaussian Processes can be successfully trained to learn the underlying inverse neutron kinetic model and, as such, they are not limited by the model choice. Importantly, GPs are strongly rooted in Bayes’ theorem, which makes them a powerful tool for statistical inference. Here, GPs have been designed and trained on a set of kinetics models (e.g. point kinetics and multi-point kinetics) for homogeneous and heterogeneous settings. The results presented in the paper show that GPs are very efficient and accurate in predicting the reactivity for ADS-like systems. The variance computed via GPs may provide an indication of how to generate additional data as a function of the desired accuracy.
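For the posterior mean, the GP machinery referred to above reduces to a single kernel solve. The sketch below is a generic zero-mean GP with an RBF kernel; a sine function stands in for the true inverse kinetic map, and the lengthscale and noise values are illustrative, not taken from the paper:

```python
import numpy as np

def rbf(a, b, ell=0.2):
    """Squared-exponential kernel on 1-D inputs."""
    d = a[:, None] - b[None, :]
    return np.exp(-0.5 * (d / ell) ** 2)

def gp_predict(x_train, y_train, x_test, noise=1e-6):
    """Posterior mean of a zero-mean GP: K(x*, X) K(X, X)^{-1} y."""
    K = rbf(x_train, x_train) + noise * np.eye(len(x_train))
    return rbf(x_test, x_train) @ np.linalg.solve(K, y_train)

# Hypothetical inverse map: pulsed-experiment observable -> reactivity
x = np.linspace(0.0, 1.0, 20)
y = np.sin(2 * np.pi * x)                # stand-in for the trained mapping
x_new = np.array([0.25, 0.5])
print(np.allclose(gp_predict(x, y, x_new), np.sin(2 * np.pi * x_new), atol=1e-2))  # True
```

The same kernel matrix also yields the posterior variance mentioned in the abstract, which is what lets a GP flag where additional training data would help most.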
Automatic differentiation in geophysical inverse problems
Sambridge, M.; Rickwood, P.; Rawlinson, N.; Sommacal, S.
2007-07-01
Automatic differentiation (AD) is the technique whereby output variables of a computer code evaluating any complicated function (e.g. the solution to a differential equation) can be differentiated with respect to the input variables. Often AD tools take the form of source to source translators and produce computer code without the need for deriving and hand coding of explicit mathematical formulae by the user. The power of AD lies in the fact that it combines the generality of finite difference techniques and the accuracy and efficiency of analytical derivatives, while at the same time eliminating `human' coding errors. It also provides the possibility of accurate, efficient derivative calculation from complex `forward' codes where no analytical derivatives are possible and finite difference techniques are too cumbersome. AD is already having a major impact in areas such as optimization, meteorology and oceanography. Similarly it has considerable potential for use in non-linear inverse problems in geophysics where linearization is desirable, or for sensitivity analysis of large numerical simulation codes, for example, wave propagation and geodynamic modelling. At present, however, AD tools appear to be little used in the geosciences. Here we report on experiments using a state of the art AD tool to perform source to source code translation in a range of geoscience problems. These include calculating derivatives for Gibbs free energy minimization, seismic receiver function inversion, and seismic ray tracing. Issues of accuracy and efficiency are discussed.
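The principle behind the source-to-source tools discussed above is easiest to see in forward-mode AD with dual numbers: each variable carries a value and a derivative, and the chain rule is applied operation by operation, with no hand-derived formula and no finite-difference step size:

```python
import math

class Dual:
    """Forward-mode AD: propagate (value, derivative) through arithmetic."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val + o.val, self.dot + o.dot)
    __radd__ = __add__
    def __mul__(self, o):
        o = o if isinstance(o, Dual) else Dual(o)
        return Dual(self.val * o.val, self.dot * o.val + self.val * o.dot)
    __rmul__ = __mul__

def sin(x):
    # chain rule: d/dx sin(u) = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

# d/dx [x sin(x) + 3x] at x = 2, exact to machine precision
x = Dual(2.0, 1.0)            # seed: derivative of the input wrt itself is 1
y = x * sin(x) + 3 * x
print(round(y.dot, 6))        # 3.077004  (= sin 2 + 2 cos 2 + 3)
```

Production AD tools generalize exactly this bookkeeping to whole codes, including loops and calls, which is why they combine the generality of finite differences with the accuracy of analytical derivatives.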
A compressive sensing approach to the calculation of the inverse data space
Khan, Babar Hasan
2012-01-01
Seismic processing in the Inverse Data Space (IDS) has its advantages: for example, the task of removing the multiples simply becomes muting the zero-offset and zero-time data in the inverse domain. Calculation of the Inverse Data Space by sparse inversion techniques has seen mitigation of some artifacts. We reformulate the problem by taking advantage of some of the developments from the field of Compressive Sensing. The seismic data are compressed at the sensor level by recording projections of the traces. We then process this compressed data directly to estimate the inverse data space. Due to the smaller data set, we also gain in terms of computational complexity.
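The compressive-sensing workflow (random projections at the sensor, sparse recovery afterwards) can be sketched with a generic iterative soft-thresholding (ISTA) solver. The dimensions, sparsity pattern, and regularization weight below are illustrative, not seismic data:

```python
import numpy as np

def ista(Phi, y, lam=0.05, iters=500):
    """ISTA for min_x 0.5 ||y - Phi x||^2 + lam ||x||_1."""
    L = np.linalg.norm(Phi, 2) ** 2        # Lipschitz constant of the gradient
    x = np.zeros(Phi.shape[1])
    for _ in range(iters):
        g = x - Phi.T @ (Phi @ x - y) / L  # gradient step on the data misfit
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)  # soft threshold
    return x

rng = np.random.default_rng(3)
n, m = 100, 40
Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random projections at the sensor
x_true = np.zeros(n)
x_true[[7, 30, 71]] = [1.0, -2.0, 1.5]           # sparse model to recover
y = Phi @ x_true                                  # compressed recording
x_hat = ista(Phi, y)
print(np.sort(np.argsort(np.abs(x_hat))[-3:]).tolist())   # [7, 30, 71]
```

Recovering 3 active coefficients from 40 random projections of a length-100 signal is the same undersampling principle the abstract exploits, just at toy scale.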
Inverse Opal Scaffolds and Their Biomedical Applications.
Zhang, Yu Shrike; Zhu, Chunlei; Xia, Younan
2017-09-01
Three-dimensional porous scaffolds play a pivotal role in tissue engineering and regenerative medicine by functioning as biomimetic substrates to manipulate cellular behaviors. While many techniques have been developed to fabricate porous scaffolds, most of them rely on stochastic processes that typically result in scaffolds with pores uncontrolled in terms of size, structure, and interconnectivity, greatly limiting their use in tissue regeneration. Inverse opal scaffolds, in contrast, possess uniform pores inheriting from the template comprised of a closely packed lattice of monodispersed microspheres. The key parameters of such scaffolds, including architecture, pore structure, porosity, and interconnectivity, can all be made uniform across the same sample and among different samples. In conjunction with a tight control over pore sizes, inverse opal scaffolds have found widespread use in biomedical applications. In this review, we provide a detailed discussion on this new class of advanced materials. After a brief introduction to their history and fabrication, we highlight the unique advantages of inverse opal scaffolds over their non-uniform counterparts. We then showcase their broad applications in tissue engineering and regenerative medicine, followed by a summary and perspective on future directions. © 2017 WILEY-VCH Verlag GmbH & Co. KGaA, Weinheim.
Inverse hydrochemical models of aqueous extracts tests
Energy Technology Data Exchange (ETDEWEB)
Zheng, L.; Samper, J.; Montenegro, L.
2008-10-10
An aqueous extract test is a laboratory technique commonly used to measure the amount of soluble salts of a soil sample after adding a known mass of distilled water. Measured aqueous extract data have to be re-interpreted in order to infer the porewater chemical composition of the sample, because porewater chemistry changes significantly due to dilution and the chemical reactions which take place during extraction. Here we present an inverse hydrochemical model to estimate porewater chemical composition from measured water content, aqueous extract, and mineralogical data. The model accounts for acid-base, redox, aqueous complexation, mineral dissolution/precipitation, gas dissolution/exsolution, cation exchange and surface complexation reactions, all of which are assumed to take place at local equilibrium. It has been solved with INVERSE-CORE2D and been tested with bentonite samples taken from the FEBEX (Full-scale Engineered Barrier EXperiment) in situ test. The inverse model reproduces most of the measured aqueous data except bicarbonate and provides an effective, flexible and comprehensive method to estimate the porewater chemical composition of clays. The main uncertainties are related to kinetic calcite dissolution and variations in CO2(g) pressure.
On the Duality of Forward and Inverse Light Transport.
Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi
2011-10-01
Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.
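The series-based inversion above rests on the Neumann expansion (I − T)⁻¹ = I + T + T² + …, which needs only matrix-vector products, never an explicit matrix inverse. A minimal numerical check, with a random contraction standing in for the light transport operator:

```python
import numpy as np

def neumann_solve(T, b, iters=60):
    """Approximate (I - T)^{-1} b as b + T b + T^2 b + ..., using only
    matrix-vector products (valid when the spectral radius of T is < 1)."""
    x, term = b.copy(), b.copy()
    for _ in range(iters):
        term = T @ term    # each product adds one more "bounce"
        x += term
    return x

rng = np.random.default_rng(1)
n = 50
T = rng.random((n, n))
T *= 0.5 / T.sum(axis=1).max()        # scale so ||T||_inf <= 0.5 (contraction)
e = rng.random(n)                     # stand-in for direct illumination
direct = np.linalg.solve(np.eye(n) - T, e)
print(np.allclose(neumann_solve(T, e), direct))   # True
```

Each term of the series corresponds to one interreflection bounce, which is exactly the physical reading the paper gives to both the forward and the inverse expansions; real light transport matrices are far too large to invert directly, so the matvec-only structure is what makes the approach practical.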
Frequency-domain waveform inversion using the unwrapped phase
Choi, Yun Seok
2011-01-01
Phase wrapping in the frequency domain (or cycle skipping in the time domain) is the major cause of the local minima problem in waveform inversion. The unwrapped phase has the potential to provide us with a robust and reliable waveform inversion, with reduced local minima. We propose a waveform inversion algorithm using the unwrapped phase objective function in the frequency domain. The unwrapped phase, or what we call the instantaneous traveltime, is given by the imaginary part of dividing the derivative of the wavefield with respect to the angular frequency by the wavefield itself. As a result, the objective function is given by a traveltime-like function, which allows us to smooth it and reduce its nonlinearity. The gradient of the objective function is computed using the back-propagation algorithm based on the adjoint-state technique. We apply both our waveform inversion algorithm using the unwrapped phase and the conventional waveform inversion, and show that our inversion algorithm gives better convergence to the true model than the conventional waveform inversion. © 2011 Society of Exploration Geophysicists.
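The instantaneous-traveltime definition quoted above, the imaginary part of (∂U/∂ω)/U, can be verified on a single-arrival wavefield: the ratio recovers the traveltime at every frequency with no explicit phase unwrapping. The e^{+iωt₀} sign convention below is an assumption; the opposite Fourier convention flips the sign:

```python
import numpy as np

omega = np.linspace(1.0, 100.0, 2000)     # angular frequencies (rad/s)
t0 = 0.35                                  # assumed true traveltime (s)
U = 2.0 * np.exp(1j * omega * t0)          # single-arrival wavefield; the
                                           # amplitude cancels in the ratio

dU = np.gradient(U, omega)                 # numerical d(U)/d(omega)
tau = np.imag(dU / U)                      # instantaneous traveltime
print(round(float(tau[1000]), 3))          # 0.35, flat across frequency
```

Because tau stays near t0 at all frequencies even though the raw phase ω·t0 wraps many times over this band, an objective built on it behaves like a traveltime misfit rather than an oscillatory waveform misfit.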
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-01
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only finite numbers of measurements are available, we try to detect some information on the embedded
Wave-equation dispersion inversion
Li, Jing; Feng, Zongcai; Schuster, Gerard T.
2016-01-01
We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained
Testing earthquake source inversion methodologies
Page, Morgan T.
2011-01-01
Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.
Inverse amplitude method and Adler zeros
International Nuclear Information System (INIS)
Gomez Nicola, A.; Pelaez, J. R.; Rios, G.
2008-01-01
The inverse amplitude method is a powerful unitarization technique to enlarge the energy applicability region of effective Lagrangians. It has been widely used to describe resonances in hadronic physics, combined with chiral perturbation theory, as well as in the strongly interacting symmetry breaking sector. In this work we show how it can be slightly modified to also account for the subthreshold region, incorporating correctly the Adler zeros required by chiral symmetry and eliminating spurious poles. These improvements produce negligible effects on the physical region.
Solving inversion problems with neural networks
Kamgar-Parsi, Behzad; Gualtieri, J. A.
1990-01-01
A class of inverse problems in remote sensing can be characterized by Q = F(x), where F is a nonlinear and noninvertible (or hard to invert) operator, and the objective is to infer the unknowns, x, from the observed quantities, Q. Since the number of observations is usually greater than the number of unknowns, these problems are formulated as optimization problems, which can be solved by a variety of techniques. The feasibility of neural networks for solving such problems is presently investigated. As an example, the problem of finding the atmospheric ozone profile from measured ultraviolet radiances is studied.
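The formulation Q = F(x), solved by minimizing the data misfit, can be illustrated with a tiny overdetermined example. Here Gauss-Newton serves as a stand-in optimizer (the paper itself explores neural networks as the solver); the operator F and its Jacobian below are hypothetical:

```python
import numpy as np

def F(x):
    """Hypothetical nonlinear forward operator: 3 observations, 2 unknowns."""
    return np.array([x[0] + x[1], x[0] * x[1], x[0] ** 2])

def J(x):
    """Jacobian of F, needed for the Gauss-Newton normal equations."""
    return np.array([[1.0, 1.0],
                     [x[1], x[0]],
                     [2.0 * x[0], 0.0]])

Q = F(np.array([1.5, -0.5]))          # observed quantities
x = np.array([1.0, 1.0])              # initial guess for the unknowns
for _ in range(20):                   # Gauss-Newton on 0.5 * ||F(x) - Q||^2
    Jx = J(x)
    x += np.linalg.solve(Jx.T @ Jx, Jx.T @ (Q - F(x)))
print(np.round(x, 4))                 # [ 1.5 -0.5]
```

Since there are more observations than unknowns, the system is solved in the least-squares sense, which is exactly the optimization framing the abstract describes; a neural network replaces the explicit Jacobian when F is hard to differentiate.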
Inversion Therapy: Can It Relieve Back Pain?
Inversion therapy: Can it relieve back pain? Is it safe? Answers from Edward R. Laskowski, M.D.: Inversion therapy doesn't provide lasting relief from back ...
Computation of inverse magnetic cascades
International Nuclear Information System (INIS)
Montgomery, D.
1981-10-01
Inverse cascades of magnetic quantities for turbulent incompressible magnetohydrodynamics are reviewed, for two and three dimensions. The theory is extended to the Strauss equations, a description intermediate between two and three dimensions appropriate to tokamak magnetofluids. Consideration of the absolute equilibrium Gibbs ensemble for the system leads to a prediction of an inverse cascade of magnetic helicity, which may manifest itself as a major disruption. An agenda for computational investigation of this conjecture is proposed
Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set
Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.
2017-12-01
In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, the elastic media can also be defined by the Lamé constants and density, or by the impedances PI and SI; consequently, research is being carried out to ascertain the optimal parameters. With results from advanced exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite difference method was applied to simulate the OBS survey. For the inversion, the l2-norm was set as the objective function. Further, the accurate computation of the gradient direction was performed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with the OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of
Point sources and multipoles in inverse scattering theory
Potthast, Roland
2001-01-01
Over the last twenty years, the growing availability of computing power has had an enormous impact on the classical fields of direct and inverse scattering. The study of inverse scattering, in particular, has developed rapidly with the ability to perform computational simulations of scattering processes and led to remarkable advances in a range of applications, from medical imaging and radar to remote sensing and seismic exploration. Point Sources and Multipoles in Inverse Scattering Theory provides a survey of recent developments in inverse acoustic and electromagnetic scattering theory. Focusing on methods developed over the last six years by Colton, Kirsch, and the author, this treatment uses point sources combined with several far-reaching techniques to obtain qualitative reconstruction methods. The author addresses questions of uniqueness, stability, and reconstructions for both two-and three-dimensional problems.With interest in extracting information about an object through scattered waves at an all-ti...
AI-guided parameter optimization in inverse treatment planning
International Nuclear Information System (INIS)
Yan Hui; Yin Fangfang; Guan Huaiqun; Kim, Jae Ho
2003-01-01
An artificial intelligence (AI)-guided inverse planning system was developed to optimize the combination of parameters in the objective function for intensity-modulated radiation therapy (IMRT). In this system, the empirical knowledge of inverse planning was formulated with fuzzy if-then rules, which then guide the parameter modification based on the on-line calculated dose. Three kinds of parameters (weighting factor, dose specification, and dose prescription) were automatically modified using the fuzzy inference system (FIS). The performance of the AI-guided inverse planning system (AIGIPS) was examined using simulated and clinical examples. Preliminary results indicate that the expected dose distribution was automatically achieved using the AI-guided inverse planning system, with the complicated compromise between different parameters accomplished by the fuzzy inference technique. The AIGIPS provides a highly promising method to replace the current trial-and-error approach
International Nuclear Information System (INIS)
Yan, W.; Dong, Q.Q.; Sun, L.N.; Deng, W.; Wu, Sh.
2013-01-01
Most primary cells use Zn or Li as the anode, a metallic oxide as the cathode, and an acidic or alkaline solution or moist paste as the electrolytic solution. In this paper, highly ordered polypyrrole (PPy) inverse opals have been successfully synthesized in an acetonitrile solution containing [bmim]PF6. PPy films were prepared under the same experimental conditions. Cyclic voltammograms of the PPy film and the PPy inverse opal in neutral phosphate buffer solution (PBS) were recorded. The X-ray photoelectron spectroscopy technique was used to investigate the surface structure of the PPy films and the PPy inverse opals. It is found that the PF6− anions kept de-doping from the PPy films during the potential scanning process, resulting in electrochemical inactivity. Although PF6− anions also kept de-doping from the PPy inverse opals, the PO43− anions from PBS could dope into the inverse opal, explaining why the PPy inverse opals kept their electrochemical activity. An environmentally friendly cell prototype was constructed, using the PPy inverse opal as the anode. The electrolytes in both the cathodic and anodic half-cells were neutral PBSs. The open-circuit potential of the cell prototype reached 0.487 V and showed a stable output over several hundred hours
Inverse and Predictive Modeling
Energy Technology Data Exchange (ETDEWEB)
Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-09-27
The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. While team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.
Global seismic inversion as the next standard step in the processing sequence
Energy Technology Data Exchange (ETDEWEB)
Maver, Kim G.; Hansen, Lars S.; Jepsen, Anne-Marie; Rasmussen, Klaus B.
1998-12-31
Seismic inversion of post-stack seismic data has until recently been regarded as a reservoir-oriented method, since the standard inversion techniques rely on extensive well control and a detailed user-derived input model. Most seismic inversion techniques further require a stable wavelet. As a consequence, seismic inversion is mainly utilised in mature areas, focusing on specific zones only after the seismic data has been interpreted and is well understood. By using an advanced 3-D global technique, seismic inversion is presented as the next standard step in the processing sequence. The technique is robust towards noise within the seismic data, utilises a time-variant wavelet, and derives a low-frequency model utilising the stacking velocities and only limited well control. 4 figs.
Ultra-high-speed inversion recovery echo planar MR imaging
International Nuclear Information System (INIS)
Stehling, M.K.; Ordidge, R.J.; Coxon, R.; Chapman, B.; Houseman, A.M.; Guifoyle, D.; Blamire, A.; Gibbs, P.; Mansfield, P.
1988-01-01
Fast two-dimensional FT MR imaging techniques such as fast low-angle shot do not allow inversion recovery (IR). Rapid repetition of low-angle pulses is incompatible with a 180° inversion pulse. Echo planar imaging (EPI) can be applied in conjunction with IR, because after preparation of the spin system a complete image is acquired. Data acquisition in less than 100 msec and real-time display allow interactive optimization of inversion time (4.0-9,000 msec) with little time penalty. The authors have applied IR EPI to the study of the brain, liver, and kidneys in normal volunteers and patients. Technical details are presented, and the applications of this first ultra-high-speed IR technique are shown.
Inverse problems in classical and quantum physics
International Nuclear Information System (INIS)
Almasy, A.A.
2007-01-01
The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. A
Inverse problems in classical and quantum physics
Energy Technology Data Exchange (ETDEWEB)
Almasy, A.A.
2007-06-29
The subject of this thesis is in the area of Applied Mathematics known as Inverse Problems. Inverse problems are those where a set of measured data is analysed in order to get as much information as possible on a model which is assumed to represent a system in the real world. We study two inverse problems in the fields of classical and quantum physics: QCD condensates from tau-decay data and the inverse conductivity problem. Despite a concentrated effort by physicists extending over many years, an understanding of QCD from first principles continues to be elusive. Fortunately, data continue to appear which provide a rather direct probe of the inner workings of the strong interactions. We use a functional method which allows us to extract, within rather general assumptions, phenomenological parameters of QCD (the condensates) from a comparison of the time-like experimental data with asymptotic space-like results from theory. The price to be paid for the generality of assumptions is relatively large errors in the values of the extracted parameters. Although we do not claim that our method is superior to other approaches, we hope that our results lend additional confidence to the numerical results obtained with the help of methods based on QCD sum rules. EIT is a technology developed to image the electrical conductivity distribution of a conductive medium. The technique works by performing simultaneous measurements of direct or alternating electric currents and voltages on the boundary of an object. These are the data used by an image reconstruction algorithm to determine the electrical conductivity distribution within the object. In this thesis, two approaches to EIT image reconstruction are proposed. The first is based on reformulating the inverse problem in terms of integral equations. This method uses only a single set of measurements for the reconstruction. The second approach is an algorithm based on linearisation which uses more than one set of measurements. A
Inverse source problems in elastodynamics
Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao
2018-04-01
We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.
Inversion of the star transform
International Nuclear Information System (INIS)
Zhao, Fan; Schotland, John C; Markel, Vadim A
2014-01-01
We define the star transform as a generalization of the broken ray transform introduced by us in previous work. The advantages of using the star transform include the possibility to reconstruct the absorption and the scattering coefficients of the medium separately and simultaneously (from the same data) and the possibility to utilize scattered radiation which, in the case of conventional x-ray tomography, is discarded. In this paper, we derive the star transform from physical principles, discuss its mathematical properties and analyze the numerical stability of its inversion. In particular, it is shown that stable inversion of the star transform can be obtained only for configurations involving an odd number of rays. Several computationally efficient inversion algorithms are derived and tested numerically. (paper)
Inverse comptonization vs. thermal synchrotron
International Nuclear Information System (INIS)
Fenimore, E.E.; Klebesadel, R.W.; Laros, J.G.
1983-01-01
There are currently two radiation mechanisms being considered for gamma-ray bursts: thermal synchrotron and inverse comptonization. They are mutually exclusive, since thermal synchrotron requires a magnetic field of approx. 10^12 Gauss, whereas inverse comptonization cannot produce a monotonic spectrum if the field is larger than 10^11 Gauss, and is too inefficient relative to thermal synchrotron unless the field is less than 10^9 Gauss. Neither mechanism can completely explain the observed characteristics of gamma-ray bursts. However, we conclude that thermal synchrotron is more consistent with the observations if the sources are approx. 40 kpc away, whereas inverse comptonization is more consistent if they are approx. 300 pc away. Unfortunately, the source distance is still not known and, thus, the radiation mechanism is still uncertain.
Optimization for nonlinear inverse problem
International Nuclear Information System (INIS)
Boyadzhiev, G.; Brandmayr, E.; Pinat, T.; Panza, G.F.
2007-06-01
The nonlinear inversion of geophysical data in general does not yield a unique solution, but a single model, representing the investigated field, is preferred for an easy geological interpretation of the observations. The analyzed region is constituted by a number of sub-regions where the multi-valued nonlinear inversion is applied, which leads to a multi-valued solution. Therefore, combining the values of the solution in each sub-region, many acceptable models are obtained for the entire region, and this complicates the geological interpretation of geophysical investigations. This paper presents new methodologies capable of selecting one model, among all acceptable ones, that satisfies different criteria of smoothness in the explored space of solutions. In this work we focus on the non-linear inversion of surface-wave dispersion curves, which gives structural models of shear-wave velocity versus depth, but the basic concepts have a general validity. (author)
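The selection of one model among all acceptable ones by a smoothness criterion can be sketched as follows; the roughness measure and the shear-wave velocity profiles below are hypothetical illustrations, not the authors' actual criteria:

```python
import numpy as np

def roughness(model):
    """Sum of squared first differences over a layered velocity profile:
    a simple smoothness criterion (smaller means smoother)."""
    return float(np.sum(np.diff(model) ** 2))

def select_smoothest(acceptable_models):
    """Pick, among all acceptable models, the one minimizing roughness."""
    return min(acceptable_models, key=roughness)

# Three hypothetical acceptable Vs-vs-depth models (km/s per layer)
models = [
    np.array([3.0, 3.6, 3.1, 4.2]),  # oscillatory
    np.array([3.0, 3.3, 3.6, 4.0]),  # smooth gradient
    np.array([3.0, 3.0, 4.5, 4.0]),  # sharp jump
]
best = select_smoothest(models)
```

Any other criterion (e.g. second differences, or distance to a reference model) slots into the same selection step.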
Some Phenomena on Negative Inversion Constructions
Sung, Tae-Soo
2013-01-01
We examine the characteristics of NDI (negative degree inversion) and its relation with other inversion phenomena such as SVI (subject-verb inversion) and SAI (subject-auxiliary inversion). The negative element in the NDI construction may be" not," a negative adverbial, or a negative verb. In this respect, NDI has similar licensing…
The Inverse of Banded Matrices
2013-01-01
In this paper, generalizing a method of Mallik (1999) [5], we give the LU factorization and the inverse of the banded matrix Br,n (if it exists), with the remaining un-indexed entries all zeros.
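As a numerical illustration of banded-matrix inversion (a generic LU route, not the paper's closed-form generalization of Mallik's method), one can factor a small tridiagonal matrix and recover its inverse column by column:

```python
import numpy as np
from scipy.linalg import lu_factor, lu_solve

# A small tridiagonal matrix (bandwidth r = 1), a special case of a banded B_{r,n}
n = 5
A = (np.diag(2.0 * np.ones(n))
     + np.diag(-1.0 * np.ones(n - 1), 1)
     + np.diag(-1.0 * np.ones(n - 1), -1))

# LU-factor once, then solve against the identity to obtain the inverse
lu, piv = lu_factor(A)
A_inv = lu_solve((lu, piv), np.eye(n))
```

Note that while A is banded, its inverse is generally dense, which is what makes explicit formulas for banded-matrix inverses interesting.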
Size Estimates in Inverse Problems
Di Cristo, Michele
2014-01-06
Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded object, such as its size. In this talk we review some recent results on several inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.
N-Dimensional Fractional Lagrange's Inversion Theorem
Directory of Open Access Journals (Sweden)
F. A. Abd El-Salam
2013-01-01
Using the Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas are developed. The required basic definitions, lemmas, and theorems in the fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion in more than one unknown function is generalized. For extending the treatment to higher dimensions, some relevant vector and tensor definitions and notations are presented. A fractional Taylor expansion of a function of N-dimensional polyadics is derived. A fractional N-dimensional Lagrange inversion theorem is proved.
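For context, the classical (integer-order) Lagrange inversion theorem that this fractional treatment extends can be stated as:

```latex
% Classical Lagrange inversion: if w = g(z) with g(0) = 0 and g'(0) \neq 0,
% the inverse power series z = f(w) has coefficients
[w^n]\, f(w) \;=\; \frac{1}{n}\,[z^{n-1}]\left(\frac{z}{g(z)}\right)^{\!n},
\qquad n \geq 1 .
```

The fractional versions in the paper replace the ordinary derivative implicit in the coefficient extraction with the Riemann-Liouville operator.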
Darwin's "strange inversion of reasoning".
Dennett, Daniel
2009-06-16
Darwin's theory of evolution by natural selection unifies the world of physics with the world of meaning and purpose by proposing a deeply counterintuitive "inversion of reasoning" (according to a 19th century critic): "to make a perfect and beautiful machine, it is not requisite to know how to make it" [MacKenzie RB (1868) (Nisbet & Co., London)]. Turing proposed a similar inversion: to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is. Together, these ideas help to explain how we human intelligences came to be able to discern the reasons for all of the adaptations of life, including our own.
Inverse transport theory and applications
International Nuclear Information System (INIS)
Bal, Guillaume
2009-01-01
Inverse transport consists of reconstructing the optical properties of a domain from measurements performed at the domain's boundary. This review concerns several types of measurements: time-dependent, time-independent, angularly resolved and angularly averaged measurements. We review recent results on the reconstruction of the optical parameters from such measurements and the stability of such reconstructions. Inverse transport finds applications e.g. in medical imaging (optical tomography, optical molecular imaging) and in geophysical imaging (remote sensing in the Earth's atmosphere). (topical review)
Inverse Interval Matrix: A Survey
Czech Academy of Sciences Publication Activity Database
Rohn, Jiří; Farhadsefat, R.
2011-01-01
Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf
Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.
Han, Lei; Zhang, Yu; Zhang, Tong
2016-08-01
The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with the ℓ 1 regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis; another is that such an assumption can reduce the computational complexity when computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets, including thousands of millions of variables, show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
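The claim that a diagonal-plus-low-rank inverse covariance is cheap to invert follows from the Woodbury identity; the sketch below (with arbitrary synthetic data, not the COP algorithm itself) verifies it:

```python
import numpy as np

rng = np.random.default_rng(0)
p, k = 200, 5                      # dimension p, rank k with k << p
U = rng.standard_normal((p, k))
d = 1.0 + rng.random(p)            # positive diagonal entries

# Inverse covariance assumed to be Omega = D + U U^T (diagonal + low rank)
Omega = np.diag(d) + U @ U.T

# Woodbury identity:
#   (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I_k + U^T D^{-1} U)^{-1} U^T D^{-1}
# Only a k x k system is solved, so the cost is O(p k^2) instead of O(p^3).
Dinv_U = U / d[:, None]                                   # D^{-1} U
core = np.linalg.solve(np.eye(k) + U.T @ Dinv_U, Dinv_U.T)
Sigma = np.diag(1.0 / d) - Dinv_U @ core                  # covariance Omega^{-1}
```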
Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters
2017-03-07
Final technical report (Dec 2012 - Dec 2016) for Dr. Erin E. Hackett's ONR grant entitled Physics-based Inverse Problem to Deduce Marine Atmospheric Boundary Layer Parameters. The research developed knowledge and capabilities in the use and development of inverse problem techniques to deduce atmospheric parameters.
Seismic Broadband Full Waveform Inversion by shot/receiver refocusing
Haffinger, P.R.
2013-01-01
Full waveform inversion is a tool to obtain high-resolution property models of the subsurface from seismic data. However, the technique is computationally expensive, and so far no multi-dimensional implementation exists that achieves a resolution that can directly be used for seismic interpretation.
Data-Driven Model Order Reduction for Bayesian Inverse Problems
Cui, Tiangang
2014-01-06
One of the major challenges in using MCMC for the solution of inverse problems is the repeated evaluation of computationally expensive numerical models. We develop a data-driven projection- based model order reduction technique to reduce the computational cost of numerical PDE evaluations in this context.
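A generic projection-based reduction step, sketched here with synthetic snapshots (this is plain POD, not the authors' specific data-driven technique), shows how PDE evaluations shrink from the full to the reduced dimension:

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_snap, r = 100, 30, 5          # full dim, number of snapshots, reduced dim

# Snapshots of a (hypothetical) PDE solution lying in a 5-dimensional subspace
basis_true = rng.standard_normal((n, r))
snapshots = basis_true @ rng.standard_normal((r, n_snap))

# POD: the truncated SVD of the snapshot matrix gives the reduced basis V
V = np.linalg.svd(snapshots, full_matrices=False)[0][:, :r]

# Project a full-order operator A onto the reduced space: A_r = V^T A V
A = rng.standard_normal((n, n))
A_r = V.T @ A @ V                  # r x r instead of n x n

# A state in the snapshot subspace is captured exactly by the projector V V^T
x = basis_true @ rng.standard_normal(r)
x_rec = V @ (V.T @ x)
```

In an MCMC setting, each proposal is then evaluated with the r x r reduced model rather than the full n x n one.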
Inverse boundary element calculations based on structural modes
DEFF Research Database (Denmark)
Juhl, Peter Møller
2007-01-01
The inverse problem of calculating the flexural velocity of a radiating structure of a general shape from measurements in the field is often solved by combining a Boundary Element Method with the Singular Value Decomposition and a regularization technique. In their standard form these methods sol...
Synthesis and inversion of Stokes spectral profiles. Thesis
International Nuclear Information System (INIS)
Murphy, G.A.
1990-01-01
Observations of Stokes spectral profiles enable the magnetic fields on the Sun's surface to be determined. Inversion is the process whereby the profiles are reduced to magnetic field vectors. One of the most robust, accurate and rapid methods available for inversion uses the least-squares fitting of analytical Stokes profiles. As this technique is suitable for the automated reduction of large sets of data, it has been adopted for use with the Advanced Stokes Polarimeter, presently under development. The limitations of inversion by analytical profile fitting have not been firmly established. Confident analysis of magnet field vectors depends upon the precise interpretation of reduced data. In this work, a framework is introduced which allows such an assessment to be made. The magnetofluid-static sunspot models presented here provide a self-consistent range of physical conditions similar to those in sunspots. Inversion can then be carried out on Stokes profiles synthesized from these known realistic conditions. The capabilities of an inversion technique can be evaluated by comparison between the models and the deduced values
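The least-squares fitting of analytical profiles described above can be sketched with a toy model; the Gaussian absorption line and its parameters below are stand-ins, not actual Stokes-profile physics:

```python
import numpy as np
from scipy.optimize import curve_fit

def stokes_i(wl, depth, center, width):
    """Toy analytical Stokes-I profile: continuum minus a Gaussian line."""
    return 1.0 - depth * np.exp(-((wl - center) / width) ** 2)

# Synthesize a profile from known "atmospheric" parameters, then invert:
# fitting the analytical form recovers those parameters from the data.
true = (0.6, 0.0, 0.2)
wl = np.linspace(-1.0, 1.0, 201)
observed = stokes_i(wl, *true)

fit, _ = curve_fit(stokes_i, wl, observed, p0=(0.5, 0.1, 0.3))
```

The assessment framework in the thesis works the same way, except that the synthetic profiles come from magnetofluid-static sunspot models rather than a known analytical form.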
Term frequency inverse document frequency (TF-IDF) technique and ...
African Journals Online (AJOL)
Journal of Computer Science and Its Application, Vol 22, No 1 (2015).
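The TF-IDF weighting named in the title can be sketched in a few lines; the toy corpus and the unsmoothed log(N/df) variant used here are illustrative assumptions:

```python
import math
from collections import Counter

def tf_idf(docs):
    """TF-IDF weights per document: tf(t, d) * log(N / df(t)),
    where tf is the relative term frequency within the document and
    df counts the documents containing the term."""
    n = len(docs)
    df = Counter(t for d in docs for t in set(d.split()))
    weights = []
    for d in docs:
        counts = Counter(d.split())
        total = sum(counts.values())
        weights.append({t: (c / total) * math.log(n / df[t])
                        for t, c in counts.items()})
    return weights

docs = ["seismic inversion", "seismic data", "matrix inversion data"]
w = tf_idf(docs)
```

Terms appearing in every document get weight log(1) = 0, while rare terms are boosted.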
Application of inversion techniques on marine magnetic data: Andaman shelf
Digital Repository Service at National Institute of Oceanography (India)
Sarma, K.V.L.N.S.; Ramana, M.V.; Murty, G.P.S.; Subrahmanyam, V.; Krishna, K.S.; Chaubey, A.K.; Rao, M.M.M.; Narayana, S.L.
with optimisation procedure of iteration modelling. The depths derived from these methods match well with the acoustic basement mapped by seismic reflection survey across the Andaman shelf. The interpretation by these methods demonstrates the rapid utility in virgin...
Superconductivity in Pb inverse opal
International Nuclear Information System (INIS)
Aliev, Ali E.; Lee, Sergey B.; Zakhidov, Anvar A.; Baughman, Ray H.
2007-01-01
Type-II superconducting behavior was observed in highly periodic three-dimensional lead inverse opal prepared by infiltration of melted Pb into blue (D = 160 nm), green (D = 220 nm) and red (D = 300 nm) opals, followed by the extraction of the SiO2 spheres by chemical etching. The onset of a broad phase transition (ΔT = 0.3 K) was shifted from Tc = 7.196 K for bulk Pb to Tc = 7.325 K. The upper critical field Hc2 (3150 Oe) measured from high-field hysteresis loops exceeds the critical field for bulk lead (803 Oe) fourfold. Two well-resolved peaks observed in the hysteresis loops were ascribed to flux penetration into the cylindrical void space that can be found in the inverse opal structure and into the periodic structure of Pb nanoparticles. The red inverse opal shows pronounced oscillations of magnetic moment in the mixed state at low temperatures, T 0.9Tc has been observed for all of the samples studied. The magnetic field periodicity of resistivity modulation is in good agreement with the lattice parameter of the inverse opal structure. We attribute the failure to observe pronounced modulation in magneto-resistive measurements to difficulties in the precise orientation of the sample along the magnetic field.
Inverse problem of solar oscillations
International Nuclear Information System (INIS)
Sekii, T.; Shibahashi, H.
1987-01-01
The authors present some preliminary results of numerical simulation to infer the sound velocity distribution in the solar interior from the oscillation data of the Sun, treated as an inverse problem. They analyze the acoustic potential itself by taking into account some factors other than the sound velocity, and infer the sound velocity distribution in the deep interior of the Sun.
Wave-equation dispersion inversion
Li, Jing
2016-12-08
We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained from Rayleigh waves recorded by vertical-component geophones. Similar to wave-equation traveltime tomography, the complicated surface wave arrivals in traces are skeletonized as simpler data, namely the picked dispersion curves in the phase-velocity and frequency domains. Solutions to the elastic wave equation and an iterative optimization method are then used to invert these curves for 2-D or 3-D S-wave velocity models. This procedure, denoted as wave-equation dispersion inversion (WD), does not require the assumption of a layered model and is significantly less prone to the cycle-skipping problems of full waveform inversion. The synthetic and field data examples demonstrate that WD can approximately reconstruct the S-wave velocity distributions in laterally heterogeneous media if the dispersion curves can be identified and picked. The WD method is easily extended to anisotropic data and the inversion of dispersion curves associated with Love waves.
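The skeletonized misfit described above (the sum of squared differences between predicted and observed wavenumbers along the dispersion curves) can be sketched as follows; the dispersion curves themselves are hypothetical:

```python
import numpy as np

def dispersion_misfit(k_pred, k_obs):
    """WD-style skeletonized misfit: squared wavenumber residuals between
    the predicted and the picked dispersion curves (same frequency grid)."""
    return 0.5 * float(np.sum((k_pred - k_obs) ** 2))

# Picked Rayleigh-wave dispersion curves: k(f) = 2*pi*f / c(f),
# with hypothetical observed and predicted phase velocities in km/s
freqs = np.linspace(5.0, 30.0, 26)            # Hz
c_obs = 2.0 + 0.010 * (30.0 - freqs)
c_pred = 2.0 + 0.012 * (30.0 - freqs)
k_obs = 2 * np.pi * freqs / c_obs
k_pred = 2 * np.pi * freqs / c_pred
m = dispersion_misfit(k_pred, k_obs)
```

In the full method, the gradient of this misfit with respect to the S-wave velocity model is computed with solutions of the elastic wave equation and fed to an iterative optimizer.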
Workflows for Full Waveform Inversions
Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas
2017-04-01
Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.
Solving inverse problems of mathematical physics by means of the PHOENICS software package
Energy Technology Data Exchange (ETDEWEB)
Matsevity, Y; Lushpenko, S [Institute for Problems in Machinery, National Academy of Sciences of Ukraine Pozharskogo, Kharkov (Ukraine)]
1998-12-31
Several approaches to organizing the solution of inverse problems by means of PHOENICS on the basis of the technique of automated fitting are proposed. A version of a `nondestructive` method of using PHOENICS in the inverse-problem-solution regime, and the ways of altering the program in the case of introducing optimization facilities into it, are under consideration. (author) 12 refs.
Nonlinear Stimulated Raman Exact Passage by Resonance-Locked Inverse Engineering
Dorier, V.; Gevorgyan, M.; Ishkhanyan, A.; Leroy, C.; Jauslin, H. R.; Guérin, S.
2017-12-01
We derive an exact and robust stimulated Raman process for nonlinear quantum systems driven by pulsed external fields. The external fields are designed with closed-form expressions from the inverse engineering of a given efficient and stable dynamics. This technique allows one to induce a controlled population inversion which surpasses the usual nonlinear stimulated Raman adiabatic passage efficiency.
Solving inverse problems of mathematical physics by means of the PHOENICS software package
Energy Technology Data Exchange (ETDEWEB)
Matsevity, Y.; Lushpenko, S. [Institute for Problems in Machinery, National Academy of Sciences of Ukraine Pozharskogo, Kharkov (Ukraine)]
1997-12-31
Several approaches to organizing the solution of inverse problems by means of PHOENICS on the basis of the technique of automated fitting are proposed. A version of a `nondestructive` method of using PHOENICS in the inverse-problem-solution regime, and the ways of altering the program in the case of introducing optimization facilities into it, are under consideration. (author) 12 refs.
Revealing stacking sequences in inverse opals by microradian X-ray diffraction
Sinitskii, A.; Abramova, V.; Grigorieva, N.; Grigoriev, S.; Snigirev, A.; Byelov, D.; Petukhov, A.V.
2010-01-01
We present the results of the structural analysis of inverse opal photonic crystals by microradian X-ray diffraction. Inverse opals based on different oxide materials (TiO2, SiO2 and Fe2O3) were fabricated by templating polystyrene colloidal crystal films grown by the vertical deposition technique.
Optimisation in radiotherapy II: Programmed and inversion optimisation algorithms
International Nuclear Information System (INIS)
Ebert, M.
1997-01-01
This is the second article in a three-part examination of optimisation in radiotherapy. The previous article established the bases of optimisation in radiotherapy and the formulation of the optimisation problem. This paper outlines several algorithms that have been used in radiotherapy to search for the best irradiation strategy within the full set of possible strategies. Two principal classes of algorithm are considered: those associated with mathematical programming, which employ specific search techniques, linear-programming-type searches, or artificial intelligence; and those which seek to perform a numerical inversion of the optimisation problem, finishing with deterministic iterative inversion. (author)
Multi-resolution inversion algorithm for the attenuated radon transform
Barbano, Paolo Emilio
2011-09-01
We present a fast implementation of the inverse attenuated Radon transform which incorporates accurate collimator response, as well as artifact rejection due to statistical noise and data corruption. This new reconstruction procedure is performed by combining a memory-efficient implementation of the analytical inversion formula (AIF [1], [2]) with a wavelet-based version of a recently discovered regularization technique [3]. The paper introduces all the main aspects of the new AIF, as well as numerical experiments on real and simulated data. These display a substantial improvement in reconstruction quality when compared to linear or iterative algorithms. © 2011 IEEE.
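Wavelet-based regularization of the kind mentioned above typically soft-thresholds detail coefficients; here is a minimal one-level Haar sketch (an illustration of the general idea, not the paper's regularization technique [3]):

```python
import numpy as np

def haar_denoise(x, thresh):
    """One-level Haar transform + soft-thresholding of detail coefficients:
    the basic building block of wavelet-based regularization.
    x must have even length."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2)   # approximation coefficients
    d = (x[0::2] - x[1::2]) / np.sqrt(2)   # detail coefficients
    d = np.sign(d) * np.maximum(np.abs(d) - thresh, 0.0)  # soft threshold
    y = np.empty_like(x)                   # inverse Haar transform
    y[0::2] = (a + d) / np.sqrt(2)
    y[1::2] = (a - d) / np.sqrt(2)
    return y

x = np.array([1.0, 1.1, 4.0, 4.2, 2.0, 1.9, 0.5, 0.4])
```

With a zero threshold the transform round-trips exactly; a positive threshold suppresses small, noise-like details while keeping the large-scale structure.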
Quantum Effects in Inverse Opal Structures
Bleiweiss, Michael; Datta, Timir; Lungu, Anca; Yin, Ming; Iqbal, Zafar; Palm, Eric; Brandt, Bruce
2002-03-01
Properties of bismuth inverse opals and carbon opal replicas were studied. The bismuth nanostructures were fabricated by pressure infiltration into porous artificial opal, while the carbon opal replicas were created via CVD. These structures form a regular three-dimensional network in which the bismuth and carbon regions percolate in all directions between the close-packed spheres of SiO2. The sizes of the conducting regions are of the order of tens of nanometers. Static susceptibility of the bismuth inverse opal showed clear de Haas-van Alphen oscillations. Transport measurements, including Hall, were done using standard ac four- and six-probe techniques in fields up to 17 T* and temperatures between 4.2 and 200 K. Observations of Shubnikov-de Haas oscillations in magnetoresistance, one-dimensional weak localization, quantum Hall and other effects will be discussed. *Performed at the National High Magnetic Field Lab (NHMFL), FSU, Tallahassee, FL. This work was partially supported by grants from DARPA-nanothermoelectrics, NASA-EPSCOR and the USC nanocenter.
Structural level set inversion for microwave breast screening
International Nuclear Information System (INIS)
Irishina, Natalia; Álvarez, Diego; Dorn, Oliver; Moscoso, Miguel
2010-01-01
We present a new inversion strategy for the early detection of breast cancer from microwave data which is based on a new multiphase level set technique. This novel structural inversion method uses a modification of the color level set technique adapted to the specific situation of structural breast imaging, taking into account the high complexity of the breast tissue. We only use data of a few microwave frequencies for detecting the tumors hidden in this complex structure. Three level set functions are employed for describing four different types of breast tissue, where each of these four regions is allowed to have a complicated topology and an interior structure which needs to be estimated from the data simultaneously with the region interfaces. The algorithm consists of several stages of increasing complexity. In each stage, more details about the anatomical structure of the breast interior are incorporated into the inversion model. The synthetic breast models which are used for creating simulated data are based on real MRI images of the breast and are therefore quite realistic. Our results demonstrate the potential and feasibility of the proposed level set technique for detecting, locating and characterizing a small tumor in its early stage of development embedded in such a realistic breast model. Both the data acquisition simulation and the inversion are carried out in 2D.
3D stochastic inversion and joint inversion of potential fields for multi scale parameters
Shamsipour, Pejman
In this thesis we present the development of new techniques for the interpretation of potential fields (gravity and magnetic data), which are the most widespread economic geophysical methods used for oil and mineral exploration. These new techniques help to address the long-standing issue with the interpretation of potential fields, namely the intrinsic non-uniqueness of the inversion of these types of data. The thesis takes the form of three papers (four including the Appendix), which have been published, or will soon be published, in respected international journals. The purpose of the thesis is to introduce new methods based on 3D stochastic approaches for: 1) inversion of potential field data (magnetic), 2) multiscale inversion using surface and borehole data, and 3) joint inversion of geophysical potential field data. We first present a stochastic inversion method based on a geostatistical approach to recover 3D susceptibility models from magnetic data. The aim of applying geostatistics is to provide quantitative descriptions of natural variables distributed in space or in time and space. We evaluate the uncertainty on the parameter model by using geostatistical unconditional simulations. The realizations are post-conditioned by cokriging to the observation data. In order to avoid the natural tendency of the estimated structure to lie near the surface, depth weighting is included in the cokriging system. Then, we introduce an algorithm for multiscale inversion; the presented algorithm has the capability of inverting data on multiple supports. The method involves four main steps: i. upscaling of borehole parameters (density or susceptibility) to block parameters, ii. selection of blocks to use as constraints based on a threshold on kriging variance, iii. inversion of observation data with the selected block densities as constraints, and iv. downscaling of inverted parameters to small prisms. Two modes of application are presented: estimation and simulation. Finally, a novel
Rapid fabrication of 2D and 3D photonic crystals and their inversed structures
International Nuclear Information System (INIS)
Huang, C-K; Chan, C-H; Chen, C-Y; Tsai, Y-L; Chen, C-C; Han, J-L; Hsieh, K-H
2007-01-01
In this paper a new technique is proposed for the fabrication of two-dimensional (2D) and three-dimensional (3D) photonic crystals using monodisperse polystyrene microspheres as the templates. In addition, the approaches toward the creation of their corresponding inversed structures are described. The inversed structures were prepared by subjecting an introduced silica source to a sol-gel process; programmed heating was then performed to remove the template without spoiling the inversed structures. Utilizing these approaches, 2D and 3D photonic crystals and their highly ordered inversed hexagonal multilayer or monolayer structures were obtained on the substrate
Application of the kernel method to the inverse geosounding problem.
Hidalgo, Hugo; Sosa León, Sonia; Gómez-Treviño, Enrique
2003-01-01
Determining the layered structure of the earth demands the solution of a variety of inverse problems; in the case of electromagnetic soundings at low induction numbers, the problem is linear, for the measurements may be represented as a linear functional of the electrical conductivity distribution. In this paper, an application of the support vector (SV) regression technique to the inversion of electromagnetic data is presented. We take advantage of the regularizing properties of the SV learning algorithm and use it as a modeling technique with synthetic and field data. The SV method presents better recovery of synthetic models than Tikhonov's regularization. As the SV formulation is solved in the space of the data, which has a small dimension in this application, a smaller problem than that considered with Tikhonov's regularization is produced. For field data, the SV formulation develops models similar to those obtained via linear programming techniques, but with the added characteristic of robustness.
Evaluation of Inversion Methods Applied to Ionospheric Radio Occultation Observations
Rios Caceres, Arq. Estela Alejandra; Rios, Victor Hugo; Guyot, Elia
The new technique of radio-occultation can be used to study the Earth's ionosphere. The retrieval processes of ionospheric profiling from radio occultation observations usually assume spherical symmetry of the electron density distribution at the locality of the occultation and use the Abel integral transform to invert the measured total electron content (TEC) values. This paper presents a set of ionospheric profiles obtained from the SAC-C satellite with the Abel inversion technique. The effects of the ionosphere on the GPS signal during occultation, such as bending and scintillation, are examined. Electron density profiles are obtained using the Abel inversion technique. Ionospheric radio occultations are validated using vertical profiles of electron concentration from inverted ionograms, obtained from ionosonde soundings in the vicinity of the occultation. Results indicate that the Abel transform works well in the mid-latitudes during the daytime, but is less accurate during the night-time.
Directed Neutron Beams From Inverse Kinematic Reactions
Vanhoy, J. R.; Guardala, N. A.; Glass, G. A.
2011-06-01
Kinematic focusing of an emitted, fairly mono-energetic neutron beam by the use of inverse-kinematic reactions, i.e. reactions in which the projectile mass is greater than the target atom's mass, can provide for the utilization of a significant fraction of the fast neutron yield and also provide a safer radiation environment. We examine the merits of various neutron production reactions and consider the practicalities of producing the primary beam using suitable accelerator technologies. Preliminary progress at the NSWC-Carderock Positive Ion Accelerator Facility is described. Possible applications of this type of neutron-based system include advanced medical imaging techniques and active "stand-off" interrogation of contraband items.
Inversion of a lateral log using neural networks
International Nuclear Information System (INIS)
Garcia, G.; Whitman, W.W.
1992-01-01
In this paper a technique using neural networks is demonstrated for the inversion of a lateral log. The lateral log is simulated by a finite difference method which in turn is used as an input to a backpropagation neural network. An initial guess earth model is generated from the neural network, which is then input to a Marquardt inversion. The neural network reacts to gross and subtle data features in actual logs and produces a response inferred from the knowledge stored in the network during a training process. The neural network inversion of lateral logs is tested on synthetic and field data. Tests using field data resulted in a final earth model whose simulated lateral is in good agreement with the actual log data
Indium oxide inverse opal films synthesized by structure replication method
Amrehn, Sabrina; Berghoff, Daniel; Nikitin, Andreas; Reichelt, Matthias; Wu, Xia; Meier, Torsten; Wagner, Thorsten
2016-04-01
We present the synthesis of indium oxide (In2O3) inverse opal films with photonic stop bands in the visible range by a structure replication method. Artificial opal films made of poly(methyl methacrylate) (PMMA) spheres are utilized as template. The opal films are deposited via sedimentation facilitated by ultrasonication, and then impregnated by indium nitrate solution, which is thermally converted to In2O3 after drying. The quality of the resulting inverse opal film depends on many parameters; in this study the water content of the indium nitrate/PMMA composite after drying is investigated. Comparison of the reflectance spectra recorded by vis-spectroscopy with simulated data shows a good agreement between the peak position and calculated stop band positions for the inverse opals. This synthesis is less complex and highly efficient compared to most other techniques and is suitable for use in many applications.
Mathematical properties of numerical inversion for jet calibrations
Energy Technology Data Exchange (ETDEWEB)
Cukierman, Aviv [Physics Department, Stanford University, Stanford, CA 94305 (United States); SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025 (United States); Nachman, Benjamin, E-mail: bnachman@cern.ch [Physics Department, Stanford University, Stanford, CA 94305 (United States); SLAC National Accelerator Laboratory, Stanford University, Menlo Park, CA 94025 (United States); Physics Division, Lawrence Berkeley National Laboratory, Berkeley, CA 94704 (United States)
2017-06-21
Numerical inversion is a general detector calibration technique that is independent of the underlying spectrum. This procedure is formalized and important statistical properties are presented, using high energy jets at the Large Hadron Collider as an example setting. In particular, numerical inversion is inherently biased and common approximations to the calibrated jet energy tend to over-estimate the resolution. Analytic approximations to the closure and calibrated resolutions are demonstrated to effectively predict the full forms under realistic conditions. Finally, extensions of numerical inversion are presented which can reduce the inherent biases. These methods will be increasingly important to consider with degraded resolution at low jet energies due to a much higher instantaneous luminosity in the near future.
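The numerical-inversion procedure described above can be sketched with a toy detector model (the response curve, smearing, and all numbers below are invented for illustration; this is not the actual LHC jet calibration):

```python
import numpy as np

rng = np.random.default_rng(0)

def detector_response(e_true):
    """Toy detector: non-linear mean response with Gaussian smearing.
    Both the response curve and the resolution model are hypothetical."""
    mean = e_true * (0.7 + 0.05 * np.log(e_true))
    return rng.normal(mean, 0.8 * np.sqrt(e_true))

# Step 1: map out the mean reconstructed energy as a function of truth energy.
e_grid = np.linspace(20.0, 200.0, 50)
mean_reco = np.array([detector_response(np.full(10000, e)).mean() for e in e_grid])

def calibrate(measured):
    """Numerical inversion: look up the truth energy whose mean response
    matches the measured value (interpolating the tabulated curve)."""
    return np.interp(measured, mean_reco, e_grid)

# Step 2: apply the calibration event-by-event and check closure.
cal = calibrate(detector_response(np.full(20000, 100.0)))
```

Applying the inverse map event-by-event, rather than to the mean, is exactly where the small inherent bias discussed in the abstract enters: the calibration function is non-linear, so the mean of the calibrated energies is not exactly the calibration of the mean.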
RNA inverse folding using Monte Carlo tree search.
Yang, Xiufeng; Yoshizoe, Kazuki; Taneda, Akito; Tsuda, Koji
2017-11-06
Artificially synthesized RNA molecules provide important ways of creating a variety of novel functional molecules. State-of-the-art RNA inverse folding algorithms can design simple and short RNA sequences with specific GC content that fold into the target RNA structure. However, their performance is not satisfactory in complicated cases. We present a new inverse folding algorithm called MCTS-RNA, which uses Monte Carlo tree search (MCTS), a technique that has recently shown exceptional performance in computer Go, to represent and discover the essential part of the sequence space. To obtain high accuracy, initial sequences generated by MCTS are further improved by a series of local updates. Our algorithm can control the GC content precisely and can deal with pseudoknot structures. Using common benchmark datasets for evaluation, MCTS-RNA shows a lot of promise as a standard method of RNA inverse folding. MCTS-RNA is available at https://github.com/tsudalab/MCTS-RNA .
Estimating surface acoustic impedance with the inverse method.
Piechowicz, Janusz
2011-01-01
Sound field parameters are predicted with numerical methods in sound control systems, in the acoustic design of buildings and in sound field simulations. Those methods define the acoustic properties of surfaces, such as sound absorption coefficients or acoustic impedance, to determine boundary conditions. Several in situ measurement techniques have been developed; one of them uses two microphones to measure direct and reflected sound over a planar test surface. Another approach is used in the inverse boundary element method, in which estimating the acoustic impedance of a surface is expressed as an inverse boundary problem. The boundary values can be found from multipoint sound pressure measurements in the interior of a room. This method can be applied to arbitrarily-shaped surfaces. This investigation is part of a research programme on using inverse methods in industrial room acoustics.
Full Waveform Inversion for Reservoir Characterization - A Synthetic Study
Zabihi Naeini, E.
2017-05-26
Most current reservoir-characterization workflows are based on classic amplitude-variation-with-offset (AVO) inversion techniques. Although these methods have generally served us well over the years, here we examine full-waveform inversion (FWI) as an alternative tool for higher-resolution reservoir characterization. An important step in developing reservoir-oriented FWI is the implementation of facies-based rock physics constraints adapted from the classic methods. We show that such constraints can be incorporated into FWI by adding appropriately designed regularization terms to the objective function. The advantages of the proposed algorithm are demonstrated on both isotropic and VTI (transversely isotropic with a vertical symmetry axis) models with pronounced lateral and vertical heterogeneity. The inversion results are explained using the theoretical radiation patterns produced by perturbations in the medium parameters.
Inverse photon-photon processes
International Nuclear Information System (INIS)
Carimalo, C.; Crozon, M.; Kesler, P.; Parisi, J.
1981-12-01
We here consider inverse photon-photon processes, i.e. AB → γγX (where A, B are hadrons, in particular protons or antiprotons), at high energies. As regards the production of a γγ continuum, we show that, under specific conditions, the study of such processes might provide some information on the subprocess gg → γγ, involving a quark box. It is also suggested to use those processes in order to systematically look for heavy C = + structures (quarkonium states, gluonia, etc.) showing up in the γγ channel. Inverse photon-photon processes might thus become a new and fertile area of investigation in high-energy physics, provided the difficult problem of discriminating between direct photons and indirect ones can be handled in a satisfactory way
Hedland, D. A.; Degonia, P. K.
1974-01-01
The RAE-1 spacecraft inversion performed October 31, 1972 is described based upon the in-orbit dynamical data in conjunction with results obtained from previously developed computer simulation models. The computer simulations used are predictive of the satellite dynamics, including boom flexing, and are applicable during boom deployment and retraction, inter-phase coast periods, and post-deployment operations. Attitude data, as well as boom tip data, were analyzed in order to obtain a detailed description of the dynamical behavior of the spacecraft during and after the inversion. Runs were made using the computer model and the results were analyzed and compared with the real time data. Close agreement between the actual recorded spacecraft attitude and the computer simulation results was obtained.
Inverse problem in nuclear physics
International Nuclear Information System (INIS)
Zakhariev, B.N.
1976-01-01
The method of reconstruction of the interaction from scattering data is formulated in the frame of the R-matrix theory, in which the potential is determined by the positions of the resonances Esub(lambda) and their reduced widths γ²sub(lambda). In the finite difference approximation for the Schroedinger equation this new approach makes the logic of the inverse problem (IP) clearer. A possibility of applications of the IP formalism to various nuclear systems is discussed. (author)
Digital Repository Service at National Institute of Oceanography (India)
PrasannaKumar, S.; Navelkar, G.S.; Murty, T.V.R.; Murty, C.S.
generalised inverse, based on singular value decomposition technique. The numerical experiment shows that 18 eigen rays with 9 layers enable reconstruction of the eddy profile adequately using 9 eigen modes...
CSIR Research Space (South Africa)
Cordeiro, N
2011-01-01
Inverse gas chromatography (IGC) is a suitable method to determine surface energy of natural fibres when compared to wetting techniques. In the present study, the surface properties of raw and modified lignocellulosic fibres have been investigated...
Elastic reflection waveform inversion with variable density
Li, Yuanyuan; Li, Zhenchun; Alkhalifah, Tariq Ali; Guo, Qiang
2017-01-01
Elastic full waveform inversion (FWI) provides a better description of the subsurface than that given by the acoustic assumption. However, it suffers from a more serious cycle-skipping problem than the latter. Reflection waveform inversion
An inverse problem approach to pattern recognition in industry
Directory of Open Access Journals (Sweden)
Ali Sever
2015-01-01
Many works have shown strong connections between learning and regularization techniques for ill-posed inverse problems. A careful analysis shows that a rigorous connection between learning and regularization for inverse problems is not straightforward. In this study, pattern recognition is viewed as an ill-posed inverse problem, and applications of methods from the theory of inverse problems to pattern recognition are studied. A new learning algorithm derived from a well-known regularization model is generated and applied to the task of reconstruction of an inhomogeneous object as pattern recognition. In particular, it is demonstrated that pattern recognition can be reformulated in terms of inverse problems defined by a Riesz-type kernel. This reformulation can be employed to design a learning algorithm based on the numerical solution of a system of linear equations. Finally, numerical experiments have been carried out with synthetic experimental data considering a reasonable level of noise. Good recoveries have been achieved with this methodology, and the results of these simulations are compatible with the existing methods. The comparison results show that the regularization-based learning algorithm (RBA) obtains promising performance on the majority of the test problems. In the future, this method can be used for the creation of automated systems for diagnostics, testing, and control in various fields of scientific and applied research, as well as in industry.
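The core of such a regularization-based learner is a kernel system of linear equations. A minimal sketch (a Gaussian kernel is used below as a stand-in for the paper's Riesz-type kernel, and the data are synthetic):

```python
import numpy as np

def gaussian_kernel(a, b, width=0.5):
    """Gram matrix of a Gaussian kernel between two 1D point sets."""
    return np.exp(-((a[:, None] - b[None, :]) ** 2) / (2.0 * width ** 2))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 2.0 * np.pi, 60)          # training inputs
y = np.sin(x) + 0.05 * rng.standard_normal(60)  # noisy observations

# Tikhonov-regularized kernel least squares: (K + lam*n*I) c = y
lam = 1e-3
K = gaussian_kernel(x, x)
coef = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)

# predictions at new points inside the sampled interval
x_new = np.linspace(0.5, 5.5, 40)
pred = gaussian_kernel(x_new, x) @ coef
```

The regularization parameter plays the same stabilizing role as in classical Tikhonov inversion: without it, the kernel matrix is near-singular and the noise is amplified.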
On a complete topological inverse polycyclic monoid
Directory of Open Access Journals (Sweden)
S. O. Bardyla
2016-12-01
We give sufficient conditions under which a topological inverse $\\lambda$-polycyclic monoid $P_{\\lambda}$ is absolutely $H$-closed in the class of topological inverse semigroups. For every infinite cardinal $\\lambda$ we construct the coarsest semigroup inverse topology $\\tau_{mi}$ on $P_\\lambda$ and give an example of a topological inverse monoid $S$ which contains the polycyclic monoid $P_2$ as a dense discrete subsemigroup.
Inversion of self-potential anomalies caused by 2D inclined sheets using neural networks
International Nuclear Information System (INIS)
El-Kaliouby, Hesham M; Al-Garni, Mansour A
2009-01-01
The modular neural network (MNN) inversion method has been used for the inversion of self-potential (SP) data anomalies caused by 2D inclined sheets of infinite horizontal extent. The analysed parameters are the depth (h), the half-width (a), the inclination (α), the zero distance from the origin (x₀) and the polarization amplitude (k). The MNN inversion was first tested on a synthetic example and then applied to two field examples, from the Surda area of the Rakha mines, India, and the Kalava fault zone, India. The effect of random noise has been studied, and the technique showed satisfactory results. The inversion results show good agreement with the measured field data compared with other inversion techniques in use
Energy Technology Data Exchange (ETDEWEB)
Krukovsky, P G [Institute of Engineering Thermophysics, National Academy of Sciences of Ukraine, Kiev (Ukraine)
1998-12-31
A description of the method and software FRIEND, which provide the possibility of solving inverse and inverse design problems on the basis of existing (base) CFD software for the solution of direct problems (in particular, heat-transfer and fluid-flow problems using the software PHOENICS), is presented. FRIEND is an independent additional module that widens the operational capacities of the base software unified with it. This unification does not require any change or addition to the base software. Interfacing of FRIEND and the base software takes place through the input and output files of the base software. A brief description of the computational technique applied for the inverse problem solution, some detailed information on the interfacing of FRIEND and the CFD software, and solution results for test inverse and inverse design problems, obtained using the tandem of the CFD software PHOENICS and FRIEND, are presented. (author) 9 refs.
Inversion: A Most Useful Kind of Transformation.
Dubrovsky, Vladimir
1992-01-01
The transformation assigning to every point its inverse with respect to a circle with given radius and center is called an inversion. Discusses inversion with respect to points, circles, angles, distances, space, and the parallel postulate. Exercises related to these topics are included. (MDH)
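The point inversion defined above is easy to state in code; a minimal sketch (not from the article) mapping a point to its inverse with respect to a circle:

```python
def invert(p, center=(0.0, 0.0), radius=1.0):
    """Inverse of point p with respect to a circle: the image P' lies on the
    ray from the center through p, with |OP| * |OP'| = radius**2."""
    dx, dy = p[0] - center[0], p[1] - center[1]
    d2 = dx * dx + dy * dy          # squared distance |OP|^2 (p must not be the center)
    s = radius ** 2 / d2            # scaling factor along the ray
    return (center[0] + s * dx, center[1] + s * dy)

# a point at distance 2 from the center of the unit circle maps to distance 1/2
q = invert((2.0, 0.0))
# inversion is an involution: applying it twice returns the original point
p2 = invert(invert((3.0, 4.0)))
```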
Probabilistic Geoacoustic Inversion in Complex Environments
2015-09-30
Jan Dettmer, School of Earth and Ocean Sciences, University of Victoria, Victoria BC ... long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must ... project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is
Energy Technology Data Exchange (ETDEWEB)
Renard, F.
2003-01-01
The goal of seismic inversion is to recover an Earth model that best fits some observed data. To reach that goal, we have to minimize an objective function that measures the amplitude of the misfits according to a norm chosen in the data space. In general, the norm used is the L2 norm. Unfortunately, such a norm is not adapted to data corrupted by correlated noise: in that case the noise is inverted as signal and the inversion results are unacceptable. The goal of this thesis is to obtain satisfactory results for the inverse problem in that situation. For this purpose, we study two inverse problems: reflection tomography and waveform inversion. In reflection tomography, we propose a new formulation of the continuum inverse problem which relies on an H1 norm in the data space. This allows us to account for the correlated nature of the noise that corrupts the kinematic information. However, this norm does not give more satisfactory results than those obtained with the classical formalism. This is why, for the sake of simplicity, we recommend using the classical formalism. We then try to understand how to properly sample the kinematic information so as to obtain an accurate approximation of the continuum inverse problem. In waveform inversion, we propose to directly invert data corrupted by correlated noise. A first idea consists in rejecting the noise into the residuals. To that end, we can use a semi-norm to formulate the inverse problem. This technique gives very good results, except when the data are corrupted by random noise. We therefore propose a second method, which consists in retrieving, by solving an inverse problem, the signal and the noise whose sum best fits the data. This technique gives very satisfactory results, even if some random noise pollutes the data, and is moreover solved, thanks to an original algorithm, in a very efficient way. (author)
Large-scale inverse model analyses employing fast randomized data reduction
Lin, Youzuo; Le, Ellen B.; O'Malley, Daniel; Vesselinov, Velimir V.; Bui-Thanh, Tan
2017-08-01
When the number of observations is large, it is computationally challenging to apply classical inverse modeling techniques. We have developed a new computationally efficient technique for solving inverse problems with a large number of observations (e.g., on the order of 10⁷ or greater). Our method, which we call the randomized geostatistical approach (RGA), is built upon the principal component geostatistical approach (PCGA). We employ a data reduction technique combined with the PCGA to improve the computational efficiency and reduce the memory usage. Specifically, we employ a randomized numerical linear algebra technique based on a so-called "sketching" matrix to effectively reduce the dimension of the observations without losing the information content needed for the inverse analysis. In this way, the computational and memory costs for RGA scale with the information content rather than the size of the calibration data. Our algorithm is coded in Julia and implemented in the MADS open-source high-performance computational framework (http://mads.lanl.gov). We apply our new inverse modeling method to invert for a synthetic transmissivity field. Compared to a standard geostatistical approach (GA), our method is more efficient when the number of observations is large. Most importantly, our method is capable of solving larger inverse problems than the standard GA and PCGA approaches. Therefore, our new model inversion method is a powerful tool for solving large-scale inverse problems. The method can be applied in any field and is not limited to hydrogeological applications such as the characterization of aquifer heterogeneity.
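The "sketching" idea can be illustrated on a small synthetic least-squares problem (Python here purely for illustration; the paper's implementation is in Julia, and the random matrices below are generic stand-ins, not the RGA/PCGA formulation):

```python
import numpy as np

rng = np.random.default_rng(1)

# overdetermined linear inverse problem: many observations, few parameters
n_obs, n_par = 20000, 50
G = rng.standard_normal((n_obs, n_par))          # forward operator
m_true = rng.standard_normal(n_par)              # "true" model
d = G @ m_true + 0.01 * rng.standard_normal(n_obs)  # noisy data

# Gaussian sketching matrix: compresses 20000 observations down to 400
# while approximately preserving the geometry of the problem
k = 400
S = rng.standard_normal((k, n_obs)) / np.sqrt(k)

# solve the much smaller sketched least-squares problem min ||S G m - S d||
m_sketch, *_ = np.linalg.lstsq(S @ G, S @ d, rcond=None)
```

The sketched problem has 400 rows instead of 20000, so its cost scales with the sketch dimension (the retained information content) rather than with the raw number of observations, which is the point made in the abstract.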
Inverse Compton gamma-rays from pulsars
International Nuclear Information System (INIS)
Morini, M.
1983-01-01
A model is proposed for pulsar optical and gamma-ray emission where relativistic electron beams: (i) scatter the blackbody photons from the polar cap surface giving inverse Compton gamma-rays and (ii) produce synchrotron optical photons in the light cylinder region which are then inverse Compton scattered giving other gamma-rays. The model is applied to the Vela pulsar, explaining the first gamma-ray pulse by inverse Compton scattering of synchrotron photons near the light cylinder and the second gamma-ray pulse partly by inverse Compton scattering of synchrotron photons and partly by inverse Compton scattering of the thermal blackbody photons near the star surface. (author)
Inversion of GPS meteorology data
Directory of Open Access Journals (Sweden)
K. Hocke
The GPS meteorology (GPS/MET) experiment, led by the University Corporation for Atmospheric Research (UCAR), consists of a GPS receiver aboard a low earth orbit (LEO) satellite, which was launched on 3 April 1995. During a radio occultation the LEO satellite rises or sets relative to one of the 24 GPS satellites at the Earth's horizon. Thereby the atmospheric layers are successively sounded by radio waves which propagate from the GPS satellite to the LEO satellite. From the observed phase path increases, which are due to refraction of the radio waves by the ionosphere and the neutral atmosphere, the atmospheric parameters refractivity, density, pressure and temperature are calculated with high accuracy and resolution (0.5–1.5 km). In the present study, practical aspects of the GPS/MET data analysis are discussed. The retrieval is based on the Abelian integral inversion of the atmospheric bending angle profile into the refractivity index profile. The problem of the upper boundary condition of the Abelian integral is described by examples. The statistical optimization approach which is applied to the data above 40 km and the use of topside bending angle profiles from model atmospheres stabilize the inversion. The retrieved temperature profiles are compared with corresponding profiles which have already been calculated by scientists of UCAR and the Jet Propulsion Laboratory (JPL), also using Abelian integral inversion. The comparison shows that in some cases large differences occur (5 K and more). This is probably due to different treatment of the upper boundary condition, data outliers and noise. Several temperature profiles with wavelike structures at tropospheric and stratospheric heights are shown. While the periodic structures at upper stratospheric heights could be caused by residual errors of the ionospheric correction method, the periodic temperature fluctuations at heights below 30 km are most likely caused by atmospheric waves (vertically
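The Abel integral inversion at the heart of this kind of retrieval can be sketched numerically. In the sketch below (a generic inverse Abel transform, not the GPS/MET processing chain), the substitution y = sqrt(r² + s²) removes the integrable singularity at the lower limit, and the routine is checked against the analytic Abel-transform pair of a Gaussian:

```python
import numpy as np

def abel_invert(F, r, ds=0.005, s_max=6.0, h=1e-4):
    """Inverse Abel transform  f(r) = -(1/pi) * int_r^inf F'(y)/sqrt(y^2-r^2) dy.
    The substitution y = sqrt(r^2 + s^2) turns the integrand into
    F'(y)/y ds, which is smooth, so a plain midpoint rule suffices."""
    s = np.arange(ds / 2.0, s_max, ds)        # midpoint nodes in s
    y = np.sqrt(r ** 2 + s ** 2)
    Fp = (F(y + h) - F(y - h)) / (2.0 * h)    # numerical derivative of F
    return -np.sum(Fp / y) * ds / np.pi

# known analytic pair: the Abel transform of f(r) = exp(-r^2)
# is F(y) = sqrt(pi) * exp(-y^2), so inverting F should recover f
F = lambda y: np.sqrt(np.pi) * np.exp(-y ** 2)
f_rec = abel_invert(F, 0.5)
```

In an actual retrieval, F would be the observed bending angle profile as a function of impact parameter, and the stabilization issues mentioned in the abstract (upper boundary condition, noise) enter through the derivative and the truncation of the semi-infinite integral.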
Topological inversion for solution of geodesy-constrained geophysical problems
Saltogianni, Vasso; Stiros, Stathis
2015-04-01
Geodetic data, mostly GPS observations, permit to measure displacements of selected points around activated faults and volcanoes, and on the basis of geophysical models, to model the underlying physical processes. This requires inversion of redundant systems of highly non-linear equations with >3 unknowns; a situation analogous to the adjustment of geodetic networks. However, in geophysical problems inversion cannot be based on conventional least-squares techniques, and is based on numerical inversion techniques (a priori fixing of some variables, optimization in steps with values of two variables each time to be regarded fixed, random search in the vicinity of approximate solutions). Still these techniques lead to solutions trapped in local minima, to correlated estimates and to solutions with poor error control (usually sampling-based approaches). To overcome these problems, a numerical-topological, grid-search based technique in the RN space is proposed (N the number of unknown variables). This technique is in fact a generalization and refinement of techniques used in lighthouse positioning and in some cases of low-accuracy 2-D positioning using Wi-Fi etc. The basic concept is to assume discrete possible ranges of each variable, and from these ranges to define a grid G in the RN space, with some of the gridpoints to approximate the true solutions of the system. Each point of hyper-grid G is then tested whether it satisfies the observations, given their uncertainty level, and successful grid points define a sub-space of G containing the true solutions. The optimal (minimal) space containing one or more solutions is obtained using a trial-and-error approach, and a single optimization factor. From this essentially deterministic identification of the set of gridpoints satisfying the system of equations, at a following step, a stochastic optimal solution is computed corresponding to the center of gravity of this set of gridpoints. This solution corresponds to a
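The grid-search concept described above can be sketched on a toy problem (the forward model, station layout, and all parameter ranges below are illustrative assumptions, not the authors' geophysical model): build a hyper-grid over the unknowns, keep the gridpoints whose predictions fit every observation within its uncertainty, and take the centre of gravity of that admissible set.

```python
import numpy as np

def forward(params, xs):
    """Hypothetical forward model: signal at stations xs from a buried
    source described by (x0, depth, strength). A stand-in only."""
    x0, depth, strength = params
    return strength * depth / ((xs - x0) ** 2 + depth ** 2)

# synthetic observations from a known source, plus noise
rng = np.random.default_rng(2)
xs = np.linspace(-10.0, 10.0, 25)
true = (1.5, 3.0, 40.0)
sigma = 0.05
obs = forward(true, xs) + rng.normal(0.0, sigma, xs.size)

# hyper-grid G over discrete ranges of each variable
grids = np.meshgrid(np.linspace(0.0, 3.0, 31),
                    np.linspace(1.0, 5.0, 41),
                    np.linspace(20.0, 60.0, 41), indexing="ij")
points = np.stack([g.ravel() for g in grids], axis=1)

# keep gridpoints whose predictions satisfy ALL observations within ~4 sigma
pred = forward((points[:, :1], points[:, 1:2], points[:, 2:3]), xs[None, :])
ok = (np.abs(pred - obs[None, :]) < 4.0 * sigma).all(axis=1)
admissible = points[ok]

# stochastic optimal solution: centre of gravity of the admissible set
solution = admissible.mean(axis=0)
```

Because every admissible gridpoint is retained rather than a single minimizer, the spread of the admissible set doubles as a direct, sampling-free picture of the solution uncertainty.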
Constraint treatment techniques and parallel algorithms for multibody dynamic analysis. Ph.D. Thesis
Chiou, Jin-Chern
1990-01-01
Computational procedures for kinematic and dynamic analysis of three-dimensional multibody dynamic (MBD) systems are developed from the differential-algebraic equations (DAE's) viewpoint. Constraint violations during the time integration process are minimized, and penalty constraint stabilization techniques and partitioning schemes are developed. The governing equations of motion are treated with a two-stage staggered explicit-implicit numerical algorithm that takes advantage of a partitioned solution procedure. A robust and parallelizable integration algorithm is developed. This algorithm uses a two-stage staggered central difference algorithm to integrate the translational coordinates and the angular velocities. The angular orientations of bodies in MBD systems are then obtained by using an implicit algorithm via the kinematic relationship between Euler parameters and angular velocities. It is shown that the combination of the present solution procedures yields a computationally more accurate solution. To speed up the computational procedures, parallel implementation of the present constraint treatment techniques and the two-stage staggered explicit-implicit numerical algorithm was efficiently carried out. The DAE's and the constraint treatment techniques were transformed into arrowhead matrices, from which the Schur complement form was derived. By fully exploiting sparse matrix structural analysis techniques, a parallel preconditioned conjugate gradient numerical algorithm is used to solve the system equations written in Schur complement form. A software testbed was designed and implemented on both sequential and parallel computers. This testbed was used to demonstrate the robustness and efficiency of the constraint treatment techniques, the accuracy of the two-stage staggered explicit-implicit numerical algorithm, and the speed-up of the Schur-complement-based parallel preconditioned conjugate gradient algorithm on a parallel computer.
Towards full waveform ambient noise inversion
Sager, Korbinian; Ermert, Laura; Boehm, Christian; Fichtner, Andreas
2018-01-01
In this work we investigate fundamentals of a method—referred to as full waveform ambient noise inversion—that improves the resolution of tomographic images by extracting waveform information from interstation correlation functions that cannot be used without knowing the distribution of noise sources. The fundamental idea is to drop the principle of Green function retrieval and to establish correlation functions as self-consistent observables in seismology. This involves the following steps: (1) We introduce an operator-based formulation of the forward problem of computing correlation functions. It is valid for arbitrary distributions of noise sources in both space and frequency, and for any type of medium, including 3-D elastic, heterogeneous and attenuating media. In addition, the formulation allows us to keep the derivations independent of time and frequency domain and it facilitates the application of adjoint techniques, which we use to derive efficient expressions to compute first and also second derivatives. The latter are essential for a resolution analysis that accounts for intra- and interparameter trade-offs. (2) In a forward modelling study we investigate the effect of noise sources and structure on different observables. Traveltimes are hardly affected by heterogeneous noise source distributions. On the other hand, the amplitude asymmetry of correlations is at least to first order insensitive to unmodelled Earth structure. Energy and waveform differences are sensitive to both structure and the distribution of noise sources. (3) We design and implement an appropriate inversion scheme, where the extraction of waveform information is successively increased. We demonstrate that full waveform ambient noise inversion has the potential to go beyond ambient noise tomography based on Green function retrieval and to refine noise source location, which is essential for a better understanding of noise generation. Inherent trade-offs between source and structure
International Nuclear Information System (INIS)
Azimi, A.; Hannani, S.K.; Farhanieh, B.
2005-01-01
In this article, a comparison between two iterative inverse techniques for simultaneously estimating two unknown functions in axisymmetric transient inverse heat conduction problems in semi-complex geometries is presented. A multi-block structured grid with blocked-interface nodes is implemented for geometric decomposition of the physical domain. The transient heat conduction equation is solved by the finite element method, with a frontal technique for the resulting algebraic system of discrete equations. The inverse heat conduction problem involves the simultaneous estimation of an unknown time-varying heat generation and a time- and space-varying boundary condition. Two parameter-estimation techniques are considered: the Levenberg-Marquardt scheme and the conjugate gradient method with an adjoint problem. Numerically computed exact and noisy data are used as the measured transient temperature data needed in the inverse solution. The results of the present study for a configuration of two joined disks with different heights are compared to those of the exact heat source and temperature boundary condition, and show good agreement. (author)
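The abstract above names the conjugate gradient method as one of the two estimators. As a hedged illustration, the CG kernel that such an outer iteration repeatedly calls might look like the following; the toy SPD system stands in for the paper's heat-conduction formulation, where A would play the role of J^T J and b of J^T r:

```python
import numpy as np

def conjugate_gradient(A, b, iters=100, tol=1e-10):
    """Plain conjugate gradient for a symmetric positive-definite system A x = b."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p                 # step along the search direction
        r -= alpha * Ap                # update the residual
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p      # conjugate the next direction
        rs = rs_new
    return x

# Toy well-conditioned SPD system (illustrative only)
rng = np.random.default_rng(0)
M = rng.standard_normal((8, 8))
A = M.T @ M + 8 * np.eye(8)
b = rng.standard_normal(8)
x = conjugate_gradient(A, b)
```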
The seismic reflection inverse problem
International Nuclear Information System (INIS)
Symes, W W
2009-01-01
The seismic reflection method seeks to extract maps of the Earth's sedimentary crust from transient near-surface recording of echoes, stimulated by explosions or other controlled sound sources positioned near the surface. Reasonably accurate models of seismic energy propagation take the form of hyperbolic systems of partial differential equations, in which the coefficients represent the spatial distribution of various mechanical characteristics of rock (density, stiffness, etc). Thus the fundamental problem of reflection seismology is an inverse problem in partial differential equations: to find the coefficients (or at least some of their properties) of a linear hyperbolic system, given the values of a family of solutions in some part of their domains. The exploration geophysics community has developed various methods for estimating the Earth's structure from seismic data and is also well aware of the inverse point of view. This article reviews mathematical developments in this subject over the last 25 years, to show how the mathematics has both illuminated innovations of practitioners and led to new directions in practice. Two themes naturally emerge: the importance of single scattering dominance and compensation for spectral incompleteness by spatial redundancy. (topical review)
Inversion theory and conformal mapping
Blair, David E
2000-01-01
It is rarely taught in an undergraduate or even graduate curriculum that the only conformal maps in Euclidean space of dimension greater than two are those generated by similarities and inversions in spheres. This is in stark contrast to the wealth of conformal maps in the plane. The principal aim of this text is to give a treatment of this paucity of conformal maps in higher dimensions. The exposition includes both an analytic proof in general dimension and a differential-geometric proof in dimension three. For completeness, enough complex analysis is developed to prove the abundance of conformal maps in the plane. In addition, the book develops inversion theory as a subject, along with the auxiliary theme of circle-preserving maps. A particular feature is the inclusion of a paper by Carathéodory with the remarkable result that any circle-preserving transformation is necessarily a Möbius transformation; not even the continuity of the transformation is assumed. The text is at the level of advanced undergr...
LHC Report: 2 inverse femtobarns!
Mike Lamont for the LHC Team
2011-01-01
The LHC is enjoying a confluence of twos. This morning (Friday 5 August) we passed 2 inverse femtobarns delivered in 2011; the peak luminosity is now just over 2 × 10³³ cm⁻²s⁻¹; and recently fill 2000 was in for nearly 22 hours and delivered around 90 inverse picobarns, almost twice 2010's total. In order to increase the luminosity we can increase the number of bunches, increase the number of particles per bunch, or decrease the transverse beam size at the interaction point. The beam size can be tackled in two ways: either reduce the size of the injected bunches or squeeze harder with the quadrupole magnets situated on either side of the experiments. Having increased the number of bunches to 1380, the maximum possible with a 50 ns bunch spacing, a one-day meeting in Crozet decided to explore the other possibilities. The size of the beams coming from the injectors has been reduced to the minimum possible. This has brought an increase in the peak luminosity of about 50% and the 2 × 10³³ cm...
Inverse design of multicomponent assemblies
Piñeros, William D.; Lindquist, Beth A.; Jadrich, Ryan B.; Truskett, Thomas M.
2018-03-01
Inverse design can be a useful strategy for discovering interactions that drive particles to spontaneously self-assemble into a desired structure. Here, we extend an inverse design methodology—relative entropy optimization—to determine isotropic interactions that promote assembly of targeted multicomponent phases, and we apply this extension to design interactions for a variety of binary crystals ranging from compact triangular and square architectures to highly open structures with dodecagonal and octadecagonal motifs. We compare the resulting optimized (self- and cross) interactions for the binary assemblies to those obtained from optimization of analogous single-component systems. This comparison reveals that self-interactions act as a "primer" to position particles at approximately correct coordination shell distances, while cross interactions act as the "binder" that refines and locks the system into the desired configuration. For simpler binary targets, it is possible to successfully design self-assembling systems while restricting one of these interaction types to be a hard-core-like potential. However, optimization of both self- and cross interaction types appears necessary to design for assembly of more complex or open structures.
Instrument developments for inverse photoemission
International Nuclear Information System (INIS)
Brenac, A.
1987-02-01
Experimental developments, principally concerning electron sources for inverse photoemission, are presented. The specifications of the electron beam are derived from experimental requirements, taking into account the limitations encountered (space-charge divergence). For a wave vector resolution of 0.2 Å⁻¹, the maximum current is 25 μA at 20 eV. The design of a gun providing such a beam in the range 5 to 50 eV is presented. Angle-resolved inverse photoemission experiments show angular effects at 30 eV. At an energy of 10 eV, angular effects should be stronger, but the low efficiency of the spectrometer in this range makes the experiments difficult. The total energy resolution of 0.3 eV is due mainly to the electron energy spread, as expected. The electron sources are based on field-effect electron emission from a cathode consisting of a large number of microtips. The emission arises from a few atomic cells for each tip. The ultimate theoretical energy spread is 0.1 eV. This value is not attained because of an interface resistance problem. A partial solution of this problem allows measurement of an energy spread of 0.9 eV for a current of 100 μA emitted at 60 eV. These cathodes have the further advantage that emission can occur at a low temperature.
The Earthquake Source Inversion Validation (SIV) - Project: Summary, Status, Outlook
Mai, P. M.
2017-12-01
Finite-fault earthquake source inversions infer the (time-dependent) displacement on the rupture surface from geophysical data. The resulting earthquake source models document the complexity of the rupture process. However, this kinematic source inversion is ill-posed and returns non-unique solutions, as seen for instance in multiple source models for the same earthquake, obtained by different research teams, that often exhibit remarkable dissimilarities. To address the uncertainties in earthquake-source inversions and to understand strengths and weaknesses of various methods, the Source Inversion Validation (SIV) project developed a set of forward-modeling exercises and inversion benchmarks. Several research teams then use these validation exercises to test their codes and methods, but also to develop and benchmark new approaches. In this presentation I will summarize the SIV strategy, the existing benchmark exercises and corresponding results. Using various waveform-misfit criteria and newly developed statistical comparison tools to quantify source-model (dis)similarities, the SIV platform is able to rank solutions and identify particularly promising source inversion approaches. Existing SIV exercises (with related data and descriptions) and all computational tools remain available via the open online collaboration platform; additional exercises and benchmark tests will be uploaded once they are fully developed. I encourage source modelers to use the SIV benchmarks for developing and testing new methods. The SIV efforts have already led to several promising new techniques for tackling the earthquake-source imaging problem. I expect that future SIV benchmarks will provide further innovations and insights into earthquake source kinematics that will ultimately help to better understand the dynamics of the rupture process.
Production of radioactive nuclides in inverse reaction kinematics
International Nuclear Information System (INIS)
Traykov, E.; Rogachevskiy, A.; Bosswell, M.; Dammalapati, U.; Dendooven, P.; Dermois, O.C.; Jungmann, K.; Onderwater, C.J.G.; Sohani, M.; Willmann, L.; Wilschut, H.W.; Young, A.R.
2007-01-01
Efficient production of short-lived radioactive isotopes in inverse reaction kinematics is an important technique for various applications. It is particularly relevant when the isotope of interest is only a few nucleons away from a stable isotope. In this article, production via charge-exchange and stripping reactions in combination with a magnetic separator is explored. The relation between the separator transmission efficiency, the production yield, and the choice of beam energy is discussed. The results of some exploratory experiments are presented.
Introduction to ground penetrating radar inverse scattering and data processing
Persico, Raffaele
2014-01-01
This book presents a comprehensive treatment of ground penetrating radar using both forward and inverse scattering mathematical techniques. Use of field data instead of laboratory data enables readers to envision real-life underground imaging; a full color insert further clarifies understanding. Along with considering the practical problem of achieving interpretable underground images, this book also features significant coverage of the problem's mathematical background. This twofold approach provides a resource that will appeal both to application oriented geologists and testing specialists,
Inverse problems and inverse scattering of plane waves
Ghosh Roy, Dilip N
2001-01-01
The purpose of this text is to present the theory and mathematics of inverse scattering, in a simple way, to the many researchers and professionals who use it in their everyday research. While applications range across a broad spectrum of disciplines, examples in this text will focus primarily, but not exclusively, on acoustics. The text will be especially valuable for those applied workers who would like to delve more deeply into the fundamentally mathematical character of the subject matter. Practitioners in this field comprise applied physicists, engineers, and technologists, whereas the theory is almost entirely in the domain of abstract mathematics. This gulf between the two, if bridged, can only lead to improvement in the level of scholarship in this highly important discipline. This is the book's primary focus.
The development of computational algorithms for manipulator inverse kinematics
International Nuclear Information System (INIS)
Sasaki, Shinobu
1989-10-01
Solving the inverse kinematics of multi-joint robot manipulators has long been considered one of the most cumbersome tasks, owing to nonlinearities involving trigonometric functions. The most traditional approach is to use the Jacobian matrix under linearization assumptions. This iterative technique, however, suffers from numerical problems that significantly influence the solution characteristics, such as dependence on the initial guess and singularities. Taking these facts into consideration, new approaches have been proposed from different standpoints, based on polynomial transformation of the kinematic model, minimization techniques from mathematical programming, a vector-geometrical concept, and the separation of joint variables in the associated optimization problem. In computer simulations, each approach was found to be a useful algorithm that leads to theoretically accurate solutions of complicated inverse problems. In this way, the short-term goal of our studies on the manipulator inverse problem in the R and D project on remote handling technology was accomplished with success, and consequently the present report sums up the results of basic studies on this matter. (author)
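The Jacobian-based iteration discussed above, with its initial-guess dependence and singularity problems, can be sketched for a planar two-link arm. The damping term below is one common guard against singular configurations; this is a toy illustration, not one of the report's algorithms:

```python
import numpy as np

def fk(theta, l1=1.0, l2=1.0):
    """Forward kinematics of a planar 2-link arm: joint angles -> tip (x, y)."""
    x = l1 * np.cos(theta[0]) + l2 * np.cos(theta[0] + theta[1])
    y = l1 * np.sin(theta[0]) + l2 * np.sin(theta[0] + theta[1])
    return np.array([x, y])

def jacobian(theta, l1=1.0, l2=1.0):
    """Analytic Jacobian of fk with respect to the joint angles."""
    s1, c1 = np.sin(theta[0]), np.cos(theta[0])
    s12, c12 = np.sin(theta[0] + theta[1]), np.cos(theta[0] + theta[1])
    return np.array([[-l1 * s1 - l2 * s12, -l2 * s12],
                     [ l1 * c1 + l2 * c12,  l2 * c12]])

def ik_newton(target, theta0, iters=100, damping=1e-6):
    """Iterative damped-Jacobian inverse kinematics; result depends on theta0."""
    theta = np.asarray(theta0, dtype=float).copy()
    for _ in range(iters):
        e = target - fk(theta)
        if np.linalg.norm(e) < 1e-10:
            break
        J = jacobian(theta)
        # Damped least-squares step guards against singular configurations
        dtheta = np.linalg.solve(J.T @ J + damping * np.eye(2), J.T @ e)
        theta += dtheta
    return theta

theta = ik_newton(np.array([1.2, 0.8]), theta0=[0.3, 0.5])
```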
Mohr, Stephan; Dawson, William; Wagner, Michael; Caliste, Damien; Nakajima, Takahito; Genovese, Luigi
2017-10-10
We present CheSS, the "Chebyshev Sparse Solvers" library, which has been designed to solve typical problems arising in large-scale electronic structure calculations using localized basis sets. The library is based on a flexible and efficient expansion in terms of Chebyshev polynomials and presently features the calculation of the density matrix, the calculation of matrix powers for arbitrary powers, and the extraction of eigenvalues in a selected interval. CheSS is able to exploit the sparsity of the matrices and scales linearly with respect to the number of nonzero entries, making it well-suited for large-scale calculations. The approach is particularly adapted for setups leading to small spectral widths of the involved matrices and outperforms alternative methods in this regime. By coupling CheSS to the DFT code BigDFT, we show that such a favorable setup is indeed possible in practice. In addition, the approach based on Chebyshev polynomials can be massively parallelized, and CheSS exhibits excellent scaling up to thousands of cores even for relatively small matrix sizes.
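A minimal sketch of the Chebyshev machinery such a library builds on, assuming a dense symmetric matrix with spectrum scaled into (-1, 1); CheSS itself works with sparse matrices and a mapped spectral interval, so everything below is illustrative:

```python
import numpy as np

def cheb_coeffs(f, n):
    """Chebyshev interpolation coefficients of f on [-1, 1] at Chebyshev nodes."""
    k = np.arange(n)
    x = np.cos(np.pi * (k + 0.5) / n)
    fx = f(x)
    c = np.array([2.0 / n * np.sum(fx * np.cos(np.pi * j * (k + 0.5) / n))
                  for j in range(n)])
    c[0] /= 2.0
    return c

def cheb_matrix_function(H, f, n=60):
    """Approximate f(H) for symmetric H with spectrum inside (-1, 1), using the
    three-term recurrence T_{j+1} = 2 H T_j - T_{j-1}. Each term costs one
    matrix multiply, which is where linear scaling in nonzeros would come from
    for sparse H."""
    c = cheb_coeffs(f, n)
    T_prev, T_cur = np.eye(H.shape[0]), H.copy()
    F = c[0] * T_prev + c[1] * T_cur
    for _ in range(2, n):
        T_prev, T_cur = T_cur, 2.0 * H @ T_cur - T_prev
        F += c[_] * T_cur
    return F

# Check against the exact matrix exponential of a small symmetric matrix
rng = np.random.default_rng(1)
A = rng.standard_normal((5, 5))
H = 0.2 * (A + A.T) / 2          # spectrum safely inside (-1, 1)
w, V = np.linalg.eigh(H)
exact = V @ np.diag(np.exp(w)) @ V.T
approx = cheb_matrix_function(H, np.exp)
```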
Learning Inverse Rig Mappings by Nonlinear Regression.
Holden, Daniel; Saito, Jun; Komura, Taku
2017-03-01
We present a framework to design inverse rig functions: functions that map low-level representations of a character's pose, such as joint positions or surface geometry, to the representation used by animators, called the animation rig. Animators design scenes using an animation rig, a framework widely adopted in animation production which allows animators to design character poses and geometry via intuitive parameters and interfaces. Yet most state-of-the-art computer animation techniques control characters through raw, low-level representations such as joint angles, joint positions, or vertex coordinates. This difference often stops the adoption of state-of-the-art techniques in animation production. Our framework solves this issue by learning a mapping between the low-level representations of the pose and the animation rig. We use nonlinear regression techniques, learning from example animation sequences designed by the animators. When new motions are provided in the skeleton space, the learned mapping is used to estimate the rig controls that reproduce such a motion. We introduce two nonlinear functions for producing such a mapping: Gaussian process regression and feedforward neural networks. The appropriate solution depends on the nature of the rig and the amount of data available for training. We show our framework applied to various examples including articulated biped characters, quadruped characters, facial animation rigs, and deformable characters. With our system, animators have the freedom to apply any motion synthesis algorithm to arbitrary rigging and animation pipelines for immediate editing. This greatly improves the productivity of 3D animation, while retaining the flexibility and creativity of artistic input.
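Of the two regressors mentioned, Gaussian process regression admits a compact sketch. The one-control rig below is entirely hypothetical; it only illustrates fitting a pose-to-control mapping from example (pose, control) pairs:

```python
import numpy as np

def gp_fit_predict(X_train, Y_train, X_test, length=0.5, noise=1e-6):
    """Gaussian process regression with an RBF kernel: fit on (pose, control)
    pairs, then predict rig controls for new poses."""
    def rbf(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-0.5 * d2 / length ** 2)
    K = rbf(X_train, X_train) + noise * np.eye(len(X_train))
    alpha = np.linalg.solve(K, Y_train)        # K^{-1} Y, reused for all queries
    return rbf(X_test, X_train) @ alpha

# Hypothetical toy rig: one control c drives a 3-dof pose nonlinearly
c_train = np.linspace(0.0, 1.0, 30)[:, None]
pose_train = np.column_stack([np.sin(c_train[:, 0]),
                              np.cos(c_train[:, 0]),
                              c_train[:, 0] ** 2])
# Learn the inverse mapping pose -> control, then query it
c_pred = gp_fit_predict(pose_train, c_train, pose_train[10:11])
```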
Wake Vortex Inverse Model User's Guide
Lai, David; Delisi, Donald
2008-01-01
NorthWest Research Associates (NWRA) has developed an inverse model for inverting landing aircraft vortex data. The data used for the inversion are the time evolution of the lateral transport position and vertical position of both the port and starboard vortices. The inverse model performs iterative forward model runs using various estimates of vortex parameters, vertical crosswind profiles, and vortex circulation as a function of wake age. Forward model predictions of lateral transport and altitude are then compared with the observed data. Differences between the data and model predictions guide the choice of vortex parameter values, crosswind profile and circulation evolution in the next iteration. Iterations are performed until a user-defined criterion is satisfied. Currently, the inverse model is set to stop when the improvement in the rms deviation between the data and model predictions is less than 1 percent for two consecutive iterations. The forward model used in this inverse model is a modified version of the Shear-APA model. A detailed description of this forward model, the inverse model, and its validation are presented in a different report (Lai, Mellman, Robins, and Delisi, 2007). This document is a User's Guide for the Wake Vortex Inverse Model. Section 2 presents an overview of the inverse model program. Execution of the inverse model is described in Section 3. When executing the inverse model, a user is requested to provide the name of an input file which contains the inverse model parameters, the various datasets, and directories needed for the inversion. A detailed description of the list of parameters in the inversion input file is presented in Section 4. A user has an option to save the inversion results of each lidar track in a mat-file (a condensed data file in Matlab format). These saved mat-files can be used for post-inversion analysis. A description of the contents of the saved files is given in Section 5. An example of an inversion input
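The iterate-and-compare structure with the 1%-for-two-consecutive-iterations stopping rule described above can be sketched generically; the forward model and update rule here are toy stand-ins, not the Shear-APA model or NWRA's parameter search:

```python
import numpy as np

def invert(forward, propose, p0, data, max_iter=200):
    """Generic inversion loop: run the forward model, compare rms deviation
    with the data, and stop when the improvement is below 1% for two
    consecutive iterations (the criterion described in the text)."""
    p = p0
    rms_prev = np.sqrt(np.mean((forward(p) - data) ** 2))
    stalled = 0
    for _ in range(max_iter):
        p_new = propose(p, data)               # next parameter estimate
        rms = np.sqrt(np.mean((forward(p_new) - data) ** 2))
        improvement = (rms_prev - rms) / max(rms_prev, 1e-30)
        stalled = stalled + 1 if improvement < 0.01 else 0
        p, rms_prev = p_new, rms
        if stalled >= 2:
            break
    return p

# Toy stand-ins: linear forward model, hypothetical relaxation update
t = np.linspace(0.0, 1.0, 50)
fwd = lambda p: p * t
step = lambda p, d: p - 0.5 * (p - 2.0)
p_est = invert(fwd, step, 10.0, fwd(2.0))
```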
International Nuclear Information System (INIS)
Kılıç, Emre; Eibert, Thomas F.
2015-01-01
An approach combining boundary integral and finite element methods is introduced for the solution of three-dimensional inverse electromagnetic medium scattering problems. Based on the equivalence principle, unknown equivalent electric and magnetic surface current densities on a closed surface are utilized to decompose the inverse medium problem into two parts: a linear radiation problem and a nonlinear cavity problem. The first problem is formulated by a boundary integral equation, the computational burden of which is reduced by employing the multilevel fast multipole method (MLFMM). Reconstructed Cauchy data on the surface allows the utilization of the Lorentz reciprocity and the Poynting's theorems. Exploiting these theorems, the noise level and an initial guess are estimated for the cavity problem. Moreover, it is possible to determine whether the material is lossy or not. In the second problem, the estimated surface currents form inhomogeneous boundary conditions of the cavity problem. The cavity problem is formulated by the finite element technique and solved iteratively by the Gauss–Newton method to reconstruct the properties of the object. Regularization for both the first and the second problems is achieved by a Krylov subspace method. The proposed method is tested against both synthetic and experimental data and promising reconstruction results are obtained
Inverse diffusion theory of photoacoustics
International Nuclear Information System (INIS)
Bal, Guillaume; Uhlmann, Gunther
2010-01-01
This paper analyzes the reconstruction of diffusion and absorption parameters in an elliptic equation from knowledge of internal data. In the application of photoacoustics, the internal data are the amount of thermal energy deposited by high frequency radiation propagating inside a domain of interest. These data are obtained by solving an inverse wave equation, which is well studied in the literature. We show that knowledge of two internal data based on well-chosen boundary conditions uniquely determines two constitutive parameters in diffusion and Schrödinger equations. Stability of the reconstruction is guaranteed under additional geometric constraints of strict convexity. No geometric constraints are necessary when 2n internal data for well-chosen boundary conditions are available, where n is the spatial dimension. The set of well-chosen boundary conditions is characterized in terms of appropriate complex geometrical optics solutions.
Action understanding as inverse planning.
Baker, Chris L; Saxe, Rebecca; Tenenbaum, Joshua B
2009-12-01
Humans are adept at inferring the mental states underlying other agents' actions, such as goals, beliefs, desires, emotions and other thoughts. We propose a computational framework based on Bayesian inverse planning for modeling human action understanding. The framework represents an intuitive theory of intentional agents' behavior based on the principle of rationality: the expectation that agents will plan approximately rationally to achieve their goals, given their beliefs about the world. The mental states that caused an agent's behavior are inferred by inverting this model of rational planning using Bayesian inference, integrating the likelihood of the observed actions with the prior over mental states. This approach formalizes in precise probabilistic terms the essence of previous qualitative approaches to action understanding based on an "intentional stance" [Dennett, D. C. (1987). The intentional stance. Cambridge, MA: MIT Press] or a "teleological stance" [Gergely, G., Nádasdy, Z., Csibra, G., & Biró, S. (1995). Taking the intentional stance at 12 months of age. Cognition, 56, 165-193]. In three psychophysical experiments using animated stimuli of agents moving in simple mazes, we assess how well different inverse planning models based on different goal priors can predict human goal inferences. The results provide quantitative evidence for an approximately rational inference mechanism in human goal inference within our simplified stimulus paradigm, and for the flexible nature of goal representations that human observers can adopt. We discuss the implications of our experimental results for human action understanding in real-world contexts, and suggest how our framework might be extended to capture other kinds of mental state inferences, such as inferences about beliefs, or inferring whether an entity is an intentional agent.
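The inversion of a rational-planning model via Bayes' rule can be illustrated on a one-dimensional toy version of the maze setting; softmax action choice stands in for approximate rationality, and all specifics (grid, goals, beta) are illustrative, not the paper's models:

```python
import numpy as np

def goal_posterior(path, goals, beta=2.0):
    """Bayesian inverse planning on a 1-D grid: assume the agent chooses
    moves softmax-rationally with respect to distance to its goal, then
    invert with Bayes' rule to infer the goal from an observed path.
    Uniform prior over goals."""
    actions = np.array([-1, +1])
    log_post = np.zeros(len(goals))
    for g_i, g in enumerate(goals):
        for s, s_next in zip(path[:-1], path[1:]):
            # utility of each action = negative resulting distance to the goal
            util = -np.abs(s + actions - g)
            probs = np.exp(beta * util) / np.exp(beta * util).sum()
            log_post[g_i] += np.log(probs[actions == (s_next - s)][0])
    post = np.exp(log_post - log_post.max())
    return post / post.sum()

# An agent walking right from 0 to 4 strongly implicates the goal at +5
post = goal_posterior(path=[0, 1, 2, 3, 4], goals=[-5, 5])
```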
2D Inversion of Transient Electromagnetic Method (TEM)
Bortolozo, Cassiano Antonio; Luís Porsani, Jorge; Acácio Monteiro dos Santos, Fernando
2017-04-01
A new methodology was developed for 2D inversion of the Transient Electromagnetic Method (TEM). The methodology consists of a set of routines in Matlab code for modeling and inversion of TEM data, and the determination of the most efficient field array for the problem. In this research, the 2D TEM modeling uses finite-difference discretization. To solve the inverse problem, an algorithm based on the Marquardt technique, also known as Ridge Regression, was applied. The algorithm is stable and efficient, and is widely used in geoelectrical inversion problems. The main advantage of a 1D survey is rapid data acquisition over a large area, but in regions with two-dimensional structures, or where more detail is needed, two-dimensional interpretation methodologies are essential. For efficient field acquisition we used the fixed-loop array in an innovative way, with a square transmitter loop (200 m x 200 m) and 25 m spacing between the sounding points. The TEM soundings were conducted only inside the transmitter loop, so as to avoid negative apparent-resistivity values. Although it is possible to model the negative values, they make the inversion convergence more difficult. The methodology described above was therefore developed to achieve maximum optimization of data acquisition, since only one transmitter-loop deployment on the surface is needed for each series of soundings inside the loop. The algorithms were tested with synthetic data, and the results were essential to the interpretation of the real data and will be useful in future situations. With the inversion of the real data acquired over the Paraná Sedimentary Basin (PSB), a 2D TEM inversion was successfully realized. The results indicate a robust geoelectrical characterization of the sedimentary and crystalline aquifers in the PSB. Therefore, using a new and relevant approach to 2D TEM inversion, this research effectively contributed to map the most
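The Marquardt (Ridge Regression) technique named above amounts to damped least squares with an adaptively controlled damping factor. A minimal sketch on a toy exponential model, not the TEM forward problem:

```python
import numpy as np

def levenberg_marquardt(forward, jacobian, p0, d_obs, lam=1e-2, iters=50):
    """Marquardt-style damped least squares: solve the normal equations with
    a ridge term lam on the diagonal, accepting steps that reduce the misfit
    and adapting lam multiplicatively."""
    p = p0.astype(float).copy()
    cost = np.sum((forward(p) - d_obs) ** 2)
    for _ in range(iters):
        r = forward(p) - d_obs
        J = jacobian(p)
        dp = np.linalg.solve(J.T @ J + lam * np.eye(p.size), -J.T @ r)
        new_cost = np.sum((forward(p + dp) - d_obs) ** 2)
        if new_cost < cost:                     # accept step, relax damping
            p, cost, lam = p + dp, new_cost, lam * 0.7
        else:                                   # reject step, increase damping
            lam *= 2.0
    return p

# Toy example: recover (a, b) in d(t) = a * exp(-b t) from noiseless data
t = np.linspace(0.0, 2.0, 20)
fwd = lambda p: p[0] * np.exp(-p[1] * t)
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p_true = np.array([3.0, 1.5])
p_est = levenberg_marquardt(fwd, jac, np.array([1.0, 1.0]), fwd(p_true))
```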
ANNIT - An Efficient Inversion Algorithm based on Prediction Principles
Růžek, B.; Kolář, P.
2009-04-01
The solution of inverse problems is a meaningful task in geophysics. The amount of data is continuously increasing, methods of modeling are being improved, and computer facilities are also making great technical progress. Therefore the development of new and efficient algorithms and computer codes for both forward and inverse modeling remains relevant. ANNIT contributes to this stream, since it is a tool for the efficient solution of sets of non-linear equations. Typical geophysical problems are based on a parametric approach: the system is characterized by a vector of parameters p, and its response by a vector of data d. The forward problem is usually represented by a unique mapping F(p)=d. The inverse problem is much more complex; the inverse mapping p=G(d) is available in an analytical or closed form only exceptionally, and in general it may not exist at all. Technically, both the forward and inverse mappings F and G are sets of non-linear equations. ANNIT handles this situation as follows: (i) joint subspaces {pD, pM} of the original data and model spaces D, M, respectively, are searched for, within which the forward mapping F is sufficiently smooth that the inverse mapping G exists; (ii) a numerical approximation of G in the subspaces {pD, pM} is found; (iii) a candidate solution is predicted using this numerical approximation. ANNIT works iteratively in cycles. The subspaces {pD, pM} are searched for by generating suitable populations of individuals (models) covering the data and model spaces. The approximation of the inverse mapping is made using three methods: (a) linear regression, (b) the Radial Basis Function Network technique, (c) linear prediction (also known as "Kriging"). The ANNIT algorithm also has a built-in archive of already evaluated models. Archived models are re-used in a suitable way, and thus the number of forward evaluations is minimized. ANNIT is now implemented in both MATLAB and SCILAB. Numerical tests show good
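Of ANNIT's three approximators, the Radial Basis Function Network is the easiest to sketch: interpolate the sampled (response, model) population in data space, then evaluate at the observed data to predict a candidate model. The forward mapping below is a toy stand-in:

```python
import numpy as np

def rbf_inverse_map(D_samples, P_samples, d_query, eps=10.0):
    """Approximate the inverse mapping p = G(d) from a population of evaluated
    models (P_samples) and their forward responses (D_samples) with a radial
    basis function network. A small ridge term keeps the system well behaved."""
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-eps * d2)
    K = kernel(D_samples, D_samples) + 1e-8 * np.eye(len(D_samples))
    W = np.linalg.solve(K, P_samples)              # RBF weights
    return kernel(np.atleast_2d(d_query), D_samples) @ W

# Hypothetical forward mapping F(p) = (p, p^2); learn G from a population
p_pop = np.linspace(0.0, 1.0, 11)[:, None]
d_pop = np.column_stack([p_pop[:, 0], p_pop[:, 0] ** 2])
# Predict the candidate model for the observed data d = (0.5, 0.25)
p_cand = rbf_inverse_map(d_pop, p_pop, np.array([0.5, 0.25]))
```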
Optimization and inverse problems in electromagnetism
Wiak, Sławomir
2003-01-01
From 12 to 14 September 2002, the Academy of Humanities and Economics (AHE) hosted the workshop "Optimization and Inverse Problems in Electromagnetism". After this bi-annual event, a large number of papers were assembled and combined in this book. During the workshop recent developments and applications in optimization and inverse methodologies for electromagnetic fields were discussed. The contributions selected for the present volume cover a wide spectrum of inverse and optimal electromagnetic methodologies, ranging from theoretical to practical applications. A number of new optimal and inverse methodologies were proposed. There are contributions related to dedicated software. Optimization and Inverse Problems in Electromagnetism consists of three thematic chapters, covering: -General papers (survey of specific aspects of optimization and inverse problems in electromagnetism), -Methodologies, -Industrial Applications. The book can be useful to students of electrical and electronics engineering, computer sci...
Inferior olive mirrors joint dynamics to implement an inverse controller.
Alvarez-Icaza, Rodrigo; Boahen, Kwabena
2012-10-01
To produce smooth and coordinated motion, our nervous systems need to generate precisely timed muscle activation patterns that, due to axonal conduction delay, must be generated in a predictive and feedforward manner. Kawato proposed that the cerebellum accomplishes this by acting as an inverse controller that modulates descending motor commands to predictively drive the spinal cord such that the musculoskeletal dynamics are canceled out. This and other cerebellar theories do not, however, account for the rich biophysical properties expressed by the olivocerebellar complex's various cell types, making these theories difficult to verify experimentally. Here we propose that a multizonal microcomplex's (MZMC) inferior olivary neurons use their subthreshold oscillations to mirror a musculoskeletal joint's underdamped dynamics, thereby achieving inverse control. We used control theory to map a joint's inverse model onto an MZMC's biophysics, and we used biophysical modeling to confirm that inferior olivary neurons can express the dynamics required to mirror biomechanical joints. We then combined both techniques to predict how experimentally injecting current into the inferior olive would affect overall motor output performance. We found that this experimental manipulation unmasked a joint's natural dynamics, as observed by motor output ringing at the joint's natural frequency, with amplitude proportional to the amount of current. These results support the proposal that the cerebellum, in particular an MZMC, is an inverse controller; the results also provide a biophysical implementation for this controller and allow one to make an experimentally testable prediction.
Multisource waveform inversion of marine streamer data using normalized wavefield
Choi, Yun Seok
2013-09-01
Multisource full-waveform inversion based on the L1- and L2-norm objective functions cannot be applied to marine streamer data because it does not take into account the unmatched acquisition geometries between the observed and modeled data. To apply multisource full-waveform inversion to marine streamer data, we construct the L1- and L2-norm objective functions using the normalized wavefield. The new residual seismograms obtained from the L1- and L2-norms using the normalized wavefield mitigate the problem of unmatched acquisition geometries, which enables multisource full-waveform inversion to work with marine streamer data. In the new approaches using the normalized wavefield, we used the back-propagation algorithm based on the adjoint-state technique to efficiently calculate the gradients of the objective functions. Numerical examples showed that multisource full-waveform inversion using the normalized wavefield yields much better convergence for marine streamer data than conventional approaches. © 2013 Society of Exploration Geophysicists.
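One simple realization of a normalized-wavefield L2 residual, assuming per-trace energy normalization (the paper's exact normalization and multisource encoding may differ):

```python
import numpy as np

def normalized_residual(d_obs, d_mod):
    """L2 residual between trace-normalized wavefields. Normalizing each
    trace by its own energy removes the overall amplitude/geometry mismatch,
    which is the device that lets a modeled wavefield be compared with
    streamer data recorded on a different (moving) acquisition spread."""
    n_obs = d_obs / np.linalg.norm(d_obs, axis=-1, keepdims=True)
    n_mod = d_mod / np.linalg.norm(d_mod, axis=-1, keepdims=True)
    r = n_mod - n_obs
    return r, 0.5 * np.sum(r ** 2)

# Two traces that differ only by an overall scale give zero misfit
t = np.linspace(0.0, 1.0, 100)
trace = np.sin(2 * np.pi * 5 * t)[None, :]
r, J = normalized_residual(trace, 3.0 * trace)
```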
A Survey on Inverse Problems for Applied Sciences
Directory of Open Access Journals (Sweden)
Fatih Yaman
2013-01-01
The aim of this paper is to introduce inversion-based engineering applications and to investigate some of the important ones from a mathematical point of view. To do this we employ acoustic, electromagnetic, and elastic waves for presenting different types of inverse problems. More specifically, we first study location, shape, and boundary parameter reconstruction algorithms for inaccessible targets in acoustics. The inverse problems for the time-dependent differential equations of isotropic and anisotropic elasticity are reviewed in the following section of the paper; these problems have been studied by many authors over the last several decades. The physical interpretations for almost all of these problems are given, and the geophysical applications for some of them are described. In our last section, an introduction with many links into the literature is given for modern algorithms which combine techniques from classical inverse problems with stochastic tools into ensemble methods, both for data assimilation and for forecasting.
Inverse treatment planning based on MRI for HDR prostate brachytherapy
International Nuclear Information System (INIS)
Citrin, Deborah; Ning, Holly; Guion, Peter; Li Guang; Susil, Robert C.; Miller, Robert W.; Lessard, Etienne; Pouliot, Jean; Xie Huchen; Capala, Jacek; Coleman, C. Norman; Camphausen, Kevin; Menard, Cynthia
2005-01-01
Purpose: To develop and optimize a technique for inverse treatment planning based solely on magnetic resonance imaging (MRI) during high-dose-rate brachytherapy for prostate cancer. Methods and materials: Phantom studies were performed to verify the spatial integrity of treatment planning based on MRI. Data were evaluated from 10 patients with clinically localized prostate cancer who had undergone two high-dose-rate prostate brachytherapy boosts under MRI guidance before and after pelvic radiotherapy. Treatment planning MRI scans were systematically evaluated to derive a class solution for inverse planning constraints that would reproducibly result in acceptable target and normal tissue dosimetry. Results: We verified the spatial integrity of MRI for treatment planning. MRI anatomic evaluation revealed no significant displacement of the prostate in the left lateral decubitus position, a mean distance of 14.47 mm from the prostatic apex to the penile bulb, and clear demarcation of the neurovascular bundles on postcontrast imaging. Derivation of a class solution for inverse planning constraints resulted in a mean target volume receiving 100% of the prescribed dose of 95.69%, while maintaining a rectal volume receiving 75% of the prescribed dose of <5% (mean 1.36%) and urethral volume receiving 125% of the prescribed dose of <2% (mean 0.54%). Conclusion: Systematic evaluation of image spatial integrity, delineation uncertainty, and inverse planning constraints in our procedure reduced uncertainty in planning and treatment
Inverse planning and class solutions for brachytherapy treatment planning
International Nuclear Information System (INIS)
Trnkova, P.
2010-01-01
Brachytherapy, or interventional radiooncology, is a method of radiation therapy in which a small encapsulated radioactive source is placed near to or in the tumour and therefore delivers high doses directly to the target volume. Organs at risk (OARs) are spared due to the inverse-square dose fall-off. In past years the development of brachytherapy techniques stagnated: while external beam radiotherapy became more and more sophisticated, brachytherapy continued to rely on traditional methods. Recently, 3D imaging has also been adopted for brachytherapy, allowing more precise treatment; image-guided brachytherapy is now state of the art in many centres. The integration of imaging methods leads to dose distributions individually tailored to each patient. Treatment plan optimization is mostly performed manually, as an adaptation of a standard loading pattern; recently, however, inverse planning approaches have been introduced into brachytherapy. The aim of this doctoral thesis was to analyze inverse planning and to develop concepts for integrating it into cervical cancer brachytherapy. The first part of the thesis analyzes the Hybrid Inverse treatment Planning and Optimization (HIPO) algorithm and proposes a workflow for working safely with this algorithm. A general problem of inverse planning is that only dose and volume parameters are taken into account while the spatial dose distribution is neglected, which can lead to unwanted high-dose regions in normal tissue. A unique implementation of HIPO into the treatment planning system, using additional features, made it possible to create treatment plans similar to those resulting from manual optimization and to confine the high-dose regions inside the CTV. In the second part the HIPO algorithm is compared to the Inverse Planning Simulated Annealing (IPSA) algorithm. IPSA is implemented in a commercial treatment planning system. It
Inverse kinematics of OWI-535 robotic arm
DEBENEC, PRIMOŽ
2015-01-01
The thesis aims to calculate the inverse kinematics of the OWI-535 robotic arm. The inverse kinematics calculation determines the joint parameters that produce the desired pose of the end effector. The pose consists of the position and the orientation; here we focus only on the orientation. Due to the arm's limitations, we have developed our own method for calculating the inverse kinematics. We first derived it theoretically, and then transferred the derivation into...
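For readers unfamiliar with the computation, a standard analytic inverse-kinematics example for a planar two-link arm is sketched below. This is a generic textbook derivation, not the OWI-535-specific method developed in the thesis (the OWI-535 has more joints, and the thesis targets orientation rather than position):

```python
import numpy as np

def two_link_ik(x, y, l1, l2):
    """Analytic inverse kinematics of a planar 2-link arm (elbow-down).

    Generic textbook example, not the OWI-535-specific derivation.
    Returns joint angles (theta1, theta2) that place the end effector
    at (x, y), or raises ValueError if the target is unreachable.
    """
    r2 = x * x + y * y
    c2 = (r2 - l1 * l1 - l2 * l2) / (2 * l1 * l2)
    if not -1.0 <= c2 <= 1.0:
        raise ValueError("target out of reach")
    theta2 = np.arccos(c2)                       # elbow-down branch
    k1 = l1 + l2 * np.cos(theta2)
    k2 = l2 * np.sin(theta2)
    theta1 = np.arctan2(y, x) - np.arctan2(k2, k1)
    return theta1, theta2

def two_link_fk(theta1, theta2, l1, l2):
    """Forward kinematics, used here to check the IK solution."""
    x = l1 * np.cos(theta1) + l2 * np.cos(theta1 + theta2)
    y = l1 * np.sin(theta1) + l2 * np.sin(theta1 + theta2)
    return x, y

t1, t2 = two_link_ik(1.2, 0.5, 1.0, 1.0)
print(two_link_fk(t1, t2, 1.0, 1.0))   # forward kinematics recovers the target
```

Feeding the IK answer back through the forward kinematics is the usual sanity check for any such derivation.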
Automatic Flight Controller With Model Inversion
Meyer, George; Smith, G. Allan
1992-01-01
Automatic digital electronic control system based on inverse-model-follower concept being developed for proposed vertical-attitude-takeoff-and-landing airplane. Inverse-model-follower control places inverse mathematical model of dynamics of controlled plant in series with control actuators of controlled plant so response of combination of model and plant to command is unity. System includes feedback to compensate for uncertainties in mathematical model and disturbances imposed from without.
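The inverse-model-follower idea can be illustrated with a minimal discrete-time sketch: placing the inverse of the plant model in series with the plant makes the cascade respond to commands with unity gain. The first-order plant and its coefficients below are hypothetical choices for illustration, and the feedback that compensates for model uncertainty and disturbances is omitted:

```python
import numpy as np

# First-order discrete plant: y[k+1] = a*y[k] + b*u[k].
# Hypothetical coefficients chosen purely for illustration.
a, b = 0.9, 0.5

def inverse_model_command(r_next, y_now):
    """Inverse model: solve the plant equation for the input u that
    makes the next output equal the commanded value r_next."""
    return (r_next - a * y_now) / b

r = np.sin(0.1 * np.arange(50))      # command trajectory
y = np.zeros(51)
for k in range(50):
    u = inverse_model_command(r[k], y[k])
    y[k + 1] = a * y[k] + b * u      # plant driven by the inverse model

# The model+plant cascade responds to the command with unity gain:
print(np.max(np.abs(y[1:] - r)))    # ~0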
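placeholder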
Lectures on the inverse scattering method
International Nuclear Information System (INIS)
Zakharov, V.E.
1983-06-01
In a series of six lectures an elementary introduction to the theory of inverse scattering is given. The first four lectures contain a detailed theory of solitons in the framework of the KdV equation, together with the inverse scattering theory of the one-dimensional Schroedinger equation. In the fifth lecture the dressing method is described, while the sixth lecture gives a brief review of the equations soluble by the inverse scattering method. (author)
2.5D inversion of CSEM data in a vertically anisotropic earth
International Nuclear Information System (INIS)
Ramananjaona, Christophe; MacGregor, Lucy
2010-01-01
The marine Controlled-Source Electromagnetic (CSEM) method is a low frequency (diffusive) electromagnetic subsurface imaging technique aimed at mapping the electric resistivity of the earth by measuring the response to a source dipole emitting an electromagnetic field in a marine environment. Although assuming isotropy for the inversion is the most straightforward approach, in many situations horizontal layering of the earth strata and grain alignment within earth materials create electrical anisotropy. Ignoring this during interpretation may create artifacts in the inversion results. Accounting for this effect therefore requires adequate forward modelling and inversion procedures. We present here an inversion algorithm for vertically anisotropic media based on finite element modelling, the use of Frechet derivatives, and different types of regularisation. Comparisons between isotropic and anisotropic inversion results are given for the characterisation of an anisotropic earth from data measured in line with the source dipole, for both synthetic and real data examples.
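The damped, derivative-based update at the heart of such gradient inversions can be sketched on a toy problem. The exponential forward model below is a stand-in chosen for illustration; a real CSEM code would use finite-element modelling, and the Frechet derivatives would come from adjoint computations rather than analytic formulas:

```python
import numpy as np

def forward(m, t):
    """Toy nonlinear forward model standing in for the (much heavier)
    finite-element CSEM modelling: an exponential decay with amplitude
    and rate as the two model parameters."""
    return m[0] * np.exp(-m[1] * t)

def jacobian(m, t):
    """Frechet derivatives of the toy model (analytic here; computed
    numerically or by adjoints in a real CSEM code)."""
    J = np.empty((t.size, 2))
    J[:, 0] = np.exp(-m[1] * t)
    J[:, 1] = -m[0] * t * np.exp(-m[1] * t)
    return J

def damped_gauss_newton(d_obs, t, m0, lam=1e-2, n_iter=50):
    """Damped Gauss-Newton (Levenberg-Marquardt-style) iterations:
    solve (J^T J + lam*I) dm = J^T r at each step."""
    m = np.asarray(m0, float)
    for _ in range(n_iter):
        r = d_obs - forward(m, t)
        J = jacobian(m, t)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(2), J.T @ r)
        m = m + dm
    return m

t = np.linspace(0.0, 2.0, 40)
m_true = np.array([2.0, 1.5])
m_est = damped_gauss_newton(forward(m_true, t), t, m0=[1.0, 1.0])
print(m_est)   # converges close to [2.0, 1.5]
```

The damping term `lam*I` plays the same stabilizing role as the regularisation mentioned in the abstract, at the cost of explicitly forming the sensitivity matrix.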
The α-chymotrypsin and its hydrophobic derivatives in inverse micelles
International Nuclear Information System (INIS)
Pitre, Franck
1993-01-01
α-chymotrypsin is among the most widely used enzymes, notably in medicine for therapeutic treatments as well as in biochemistry to determine the amino acid sequence of proteins. This research thesis addresses the study of interactions between a micro-emulsion system and an enzymatic system, and more particularly the behaviour of α-chymotrypsin in AOT inverse micelles. After a brief description of the inverse micellar system and of previously obtained results on the solubilisation of α-chymotrypsin in inverse micelles, the author reports the study of the inverse micellar phase in the presence of α-chymotrypsin in the vicinity of the maximum solubility. Various techniques are used for this purpose: UV-visible absorption spectrophotometry, conductometry, and X-ray scattering. The author then describes the chemical modification of α-chymotrypsin, and reports the study of the structural and reactivity modifications introduced by the solubilisation of the modified α-chymotrypsin in inverse micelles. [fr]
International Nuclear Information System (INIS)
Lasche, G.P.; Coldwell, R.L.
2001-01-01
and of the sources in the spectrum. (To quantify absolute activities, of course, detector sensitivity, time, and distance must be known.) However, with the inclusion of these additional parameters, the values of all the coefficients become strongly dependent on one another in a highly nonlinear way, and the solution becomes much more difficult. In the continuum fit, in which the knots and coefficients of the splines are optimized, the optimization problem is highly nonlinear from the start. The greatest challenge in this approach lies in finding the true minimum of chi-square on a multidimensional surface that may contain many local minima. Also, the problems with inversion of large sparse matrices must be overcome. These problems were solved by Coldwell with the development of the RobFit code. Efficient convergence to the vector for the true minimum on the multidimensional chi-square surface is accomplished with Newton-Raphson techniques used to estimate the best Marquardt parameter to add to the diagonal elements of the inversion matrix for the next step in the search. Stability with large, sparse matrix inversion is achieved with Cholesky minimization and numerical techniques to remedy apparent singularities resulting from numerical truncation. Although it requires knowledgeable interactive operation for best results and is computationally intensive, nuclear spectral analysis with nonlinear robust fitting has been shown to be capable of exceptional sensitivity in detecting weak radionuclides in the presence of strong interference and in noisy spectra, sparse spectra, and low-resolution spectra. This increased sensitivity is due to the simultaneous optimization of all the data for all the free variables of the analysis and the iterative construction of a well-determined continuum spanning the entire spectrum. (authors)
Bayesian approach to inverse statistical mechanics
Habeck, Michael
2014-05-01
Inverse statistical mechanics aims to determine particle interactions from ensemble properties. This article looks at this inverse problem from a Bayesian perspective and discusses several statistical estimators to solve it. In addition, a sequential Monte Carlo algorithm is proposed that draws the interaction parameters from their posterior probability distribution. The posterior probability involves an intractable partition function that is estimated along with the interactions. The method is illustrated for inverse problems of varying complexity, including the estimation of a temperature, the inverse Ising problem, maximum entropy fitting, and the reconstruction of molecular interaction potentials.
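A minimal illustration of the Bayesian viewpoint: sample the posterior of an interaction parameter by Metropolis Monte Carlo. The toy model below (independent spins in a known field, so the partition function is tractable in closed form) is an assumption made purely for illustration; the article's contribution concerns precisely the harder case where the partition function must be estimated alongside the parameters:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic "ensemble data": N independent spins in a field h at
# inverse temperature beta_true, with p(s) proportional to
# exp(beta*h*s). A toy stand-in for the harder models in the article.
h, beta_true, N = 1.0, 1.0, 2000
p_up = np.exp(beta_true * h) / (2 * np.cosh(beta_true * h))
spins = np.where(rng.random(N) < p_up, 1.0, -1.0)

def log_posterior(beta):
    """Log posterior of beta under a flat prior on [0, 5]. Here the
    partition function 2*cosh(beta*h) is known in closed form; in
    general it is intractable, which is the crux of the article."""
    if not 0.0 <= beta <= 5.0:
        return -np.inf
    return beta * h * spins.sum() - N * np.log(2 * np.cosh(beta * h))

# Random-walk Metropolis over the interaction parameter.
beta, chain = 1.5, []
lp = log_posterior(beta)
for _ in range(20000):
    prop = beta + 0.1 * rng.standard_normal()
    lp_prop = log_posterior(prop)
    if np.log(rng.random()) < lp_prop - lp:
        beta, lp = prop, lp_prop
    chain.append(beta)
post_mean = np.mean(chain[5000:])
print(post_mean)   # posterior mean near beta_true = 1.0
```

The sequential Monte Carlo sampler of the article serves the same purpose as this random-walk chain, but handles the intractable normalizer.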
Anisotropic magnetotelluric inversion using a mutual information constraint
Mandolesi, E.; Jones, A. G.
2012-12-01
In recent years, several authors have pointed out that the electrical conductivity of many subsurface structures cannot be described properly by a scalar field. With the development of field devices and techniques, data quality improved to the point that the anisotropy in conductivity of rocks (microscopic anisotropy) and tectonic structures (macroscopic anisotropy) cannot be neglected. Therefore a correct use of high quality data has to include electrical anisotropy, and a correct interpretation of anisotropic data directly characterizes a non-negligible part of the subsurface. In this work we test an inversion routine that takes advantage of the classic Levenberg-Marquardt (LM) algorithm to invert magnetotelluric (MT) data generated from a bi-dimensional (2D) anisotropic domain. The LM method is routinely used in inverse problems due to its performance and robustness. In non-linear inverse problems (such as the MT problem) the LM method provides a compromise between quick and secure convergence, at the price of the explicit computation and storage of the sensitivity matrix. Regularization in inverse MT problems has been used extensively, due to the necessity to constrain model space and to reduce the ill-posedness of the anisotropic MT problem, which makes MT inversions extremely challenging. In order to reduce non-uniqueness of the MT problem and to reach a model compatible with other tomographic results from the same target region, we used a mutual information (MI) based constraint. MI is a basic quantity in information theory that can be used to define a metric between images, and it is routinely used in fields such as computer vision, image registration and medical tomography, to cite some applications. We thus inverted for the model that best fits the anisotropic data and that is the closest, in an MI sense, to a tomographic model of the target area. The advantage of this technique is that the tomographic model of the studied region may be produced by any
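The mutual-information constraint can be sketched with the standard joint-histogram estimator used in image registration. The images and bin count below are arbitrary illustrative choices, not the authors' setup:

```python
import numpy as np

def mutual_information(img_a, img_b, bins=16):
    """Mutual information between two images from their joint histogram:
    MI(A;B) = sum p(a,b) * log( p(a,b) / (p(a)*p(b)) ).

    This is the standard image-registration estimator; the inversion
    routine in the abstract uses MI as a closeness constraint between
    the resistivity model and an existing tomographic model.
    """
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    p_ab = joint / joint.sum()
    p_a = p_ab.sum(axis=1, keepdims=True)
    p_b = p_ab.sum(axis=0, keepdims=True)
    mask = p_ab > 0
    return float(np.sum(p_ab[mask] * np.log(p_ab[mask] / (p_a @ p_b)[mask])))

rng = np.random.default_rng(1)
model = rng.random((32, 32))
related = 2.0 * model + 0.05 * rng.standard_normal((32, 32))
unrelated = rng.random((32, 32))

# A structurally related model shares far more information than an
# unrelated one, even though the value ranges differ:
print(mutual_information(model, related) > mutual_information(model, unrelated))
```

Because MI compares the statistical dependence of intensities rather than their values, it can tie a resistivity model to, say, a seismic velocity model without assuming any functional relation between the two.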
Uhlmann, Gunther
2008-07-01
This volume represents the proceedings of the fourth Applied Inverse Problems (AIP) international conference and the first congress of the Inverse Problems International Association (IPIA), which was held in Vancouver, Canada, June 25-29, 2007. The organizing committee was formed by Uri Ascher, University of British Columbia, Richard Froese, University of British Columbia, Gary Margrave, University of Calgary, and Gunther Uhlmann, University of Washington, chair. The conference was part of the activities of the Pacific Institute of Mathematical Sciences (PIMS) Collaborative Research Group on inverse problems (http://www.pims.math.ca/scientific/collaborative-research-groups/past-crgs). This event was also supported by grants from NSF and MITACS. Inverse Problems (IP) are problems where causes for a desired or an observed effect are to be determined. They lie at the heart of scientific inquiry and technological development. The enormous increase in computing power and the development of powerful algorithms have made it possible to apply the techniques of IP to real-world problems of growing complexity. Applications include a number of medical as well as other imaging techniques, location of oil and mineral deposits in the earth's substructure, creation of astrophysical images from telescope data, finding cracks and interfaces within materials, shape optimization, model identification in growth processes and, more recently, modelling in the life sciences. The series of Applied Inverse Problems (AIP) Conferences aims to provide a primary international forum for academic and industrial researchers working on all aspects of inverse problems, such as mathematical modelling, functional analytic methods, computational approaches, numerical algorithms etc. The steering committee of the AIP conferences consists of Heinz Engl (Johannes Kepler Universität, Austria), Joyce McLaughlin (RPI, USA), William Rundell (Texas A&M, USA), Erkki Somersalo (Helsinki University of Technology
Directory of Open Access Journals (Sweden)
Wei Yan
2013-01-01
Most primary cells use Zn or Li as the anode, a metallic oxide as the cathode, and an acidic or alkaline solution or moist paste as the electrolyte. In this paper, highly ordered polypyrrole (PPy) inverse opals have been successfully synthesized in an acetonitrile solution containing [bmim]PF6. PPy films were prepared under the same experimental conditions. Cyclic voltammograms of the PPy film and the PPy inverse opal in neutral phosphate buffer solution (PBS) were recorded. The X-ray photoelectron spectroscopy technique was used to investigate the surface structure of the PPy films and the PPy inverse opals. It is found that the PF6- anions kept dedoping from the PPy films during the potential scanning process, resulting in electrochemical inactivity. Although PF6- anions also kept dedoping from the PPy inverse opals, the PO4(3-) anions from the PBS could dope into the inverse opal, explaining why the PPy inverse opals kept their electrochemical activity. An environmentally friendly cell prototype was constructed, using the PPy inverse opal as the anode. The electrolytes in both the cathodic and anodic half-cells were neutral PBSs. The open-circuit potential of the cell prototype reached 0.487 V and showed a stable output over several hundred hours.
Testing the gravitational inverse-square law
International Nuclear Information System (INIS)
Adelberger, Eric; Heckel, B.; Hoyle, C.D.
2005-01-01
theorists seriously entertain the idea that there are actually six or seven additional spatial dimensions; these extra dimensions are needed to make the theory both mathematically consistent and capable of describing gravity. One of the big puzzles about gravity is the fact that it is so much weaker than the other forces: it is a factor of about 10^40 weaker than the electrostatic and magnetic forces. In 1998 three theorists - Nima Arkani-Hamed, Savas Dimopoulos and Gia Dvali - offered a bold explanation for this weakness (see further reading). Gravity appears weak, they said, because some of the extra dimensions predicted by string theory are surprisingly large compared with the Planck length. Even without large extra dimensions and fat gravitons, string theories contain many new and as yet unobserved particles. These include the dilaton (which is the partner of the graviton in string theory), the radion (which stabilizes the size of the extra dimensions) and various 'moduli' (particles that set the values of coupling strengths, particle masses and other parameters in the Standard Model). The quantum-mechanical exchange of these particles would lead to very strong, short-range forces that could show up in tests of the inverse-square law. Promising new techniques involving small oscillators and microcantilevers are also being introduced to search for new physics hidden in the behaviour of gravity over short distances. Although these devices have not yet achieved the sensitivity of torsion pendulums, modern fabrication techniques allow them to be much smaller and stiffer. This suppresses the problems associated with seismic noise and alignment, and allows much smaller separations of the test masses to be explored. (U.K.)
Laterally constrained inversion for CSAMT data interpretation
Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun
2015-10-01
Laterally constrained inversion (LCI) has been successfully applied to the inversion of dc resistivity, TEM and airborne EM data. However, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix. We apply a weighting matrix to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. Then, we re-invert two CSAMT datasets collected respectively in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison for the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global-search simulated annealing (SA) algorithm for the watershed shows that both methods deliver similarly good results but the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey show that a conductive water-bearing zone that was not revealed by the previous inversions has been identified by the LCI. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
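The effect of Jacobian preconditioning can be sketched directly: scaling each column of the Jacobian to unit norm balances the sensitivities of parameters that differ by orders of magnitude. The toy Jacobian below is made up for illustration, and the exact weighting used by the authors may differ:

```python
import numpy as np

# Toy Jacobian whose columns (sensitivities to, say, resistivities
# versus layer thicknesses) differ in scale by orders of magnitude.
rng = np.random.default_rng(2)
J = rng.standard_normal((20, 4)) * np.array([1.0, 1e-3, 1e2, 1e-5])

# Column-weighting matrix: scale each parameter's sensitivity column
# to unit norm, in the spirit of the preconditioning described in the
# abstract (an illustrative choice, not the authors' exact weights).
W = np.diag(1.0 / np.linalg.norm(J, axis=0))
Jw = J @ W

print(np.linalg.cond(J), np.linalg.cond(Jw))
# The weighted Jacobian is far better conditioned, so a least-squares
# step computed with Jw (then mapped back through W) treats all
# parameters more uniformly and converges faster.
```

The model update is computed in the scaled space and mapped back through `W`, which is what makes the resolution with respect to different parameter types more uniform.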
Full Waveform Inversion Using Nonlinearly Smoothed Wavefields
Li, Y.; Choi, Yun Seok; Alkhalifah, Tariq Ali; Li, Z.
2017-01-01
The lack of low frequency information in the acquired data means that full waveform inversion (FWI) converges to the accurate solution only conditionally: an initial velocity model that produces events within a half cycle of their location in the observed data is required for convergence. The multiplication of wavefields with slightly different frequencies generates artificial low frequency components. This can be exploited by multiplying the wavefield with itself, which is a nonlinear operation, followed by a smoothing operator to extract the artificially produced low frequency information. We construct the objective function using the nonlinearly smoothed wavefields with a global-correlation norm to properly handle the energy imbalance in the nonlinearly smoothed wavefield. Similar to the multi-scale strategy, we progressively reduce the smoothing width applied to the multiplied wavefield to recover higher resolution. We calculate the gradient of the objective function using the adjoint-state technique, which is similar to conventional FWI except for the adjoint source. Examples on the Marmousi 2 model demonstrate the feasibility of the proposed FWI method to mitigate the cycle-skipping problem in the case of a lack of low frequency information.
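The low-frequency extraction step can be reproduced on a synthetic trace: squaring a signal that contains two nearby frequencies creates a difference tone, which a smoothing operator then isolates. The frequencies and smoothing window below are arbitrary illustrative choices, not values from the paper:

```python
import numpy as np

fs, T = 1000.0, 2.0
t = np.arange(0.0, T, 1.0 / fs)

# A "wavefield" trace lacking low frequencies: two nearby components.
f1, f2 = 30.0, 26.0
w = np.cos(2 * np.pi * f1 * t) + np.cos(2 * np.pi * f2 * t)

# Multiplying the wavefield with itself (a nonlinear operation)
# creates sum and difference frequencies; smoothing then keeps only
# the artificial low-frequency part, here the 4 Hz difference tone.
w2 = w * w
kernel = np.ones(50) / 50.0                  # simple smoothing operator
smooth = np.convolve(w2, kernel, mode="same")
smooth -= smooth.mean()                      # drop the DC term

spec = np.abs(np.fft.rfft(smooth))
freqs = np.fft.rfftfreq(smooth.size, 1.0 / fs)
print(freqs[np.argmax(spec)])   # dominant frequency: f1 - f2 = 4.0 Hz
```

Shrinking the smoothing window progressively re-admits the higher-frequency terms, which is the multi-scale aspect described in the abstract.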
Inverse electronic scattering by Green's functions and singular values decomposition
International Nuclear Information System (INIS)
Mayer, A.; Vigneron, J.-P.
2000-01-01
An inverse scattering technique is developed to enable sample reconstruction from the diffraction figures obtained by electronic projection microscopy. In its Green's functions formulation, this technique takes account of all orders of diffraction by performing an iterative reconstruction of the wave function on the observation screen. This scattered wave function is then backpropagated to the sample to determine the potential-energy distribution, which is assumed to be real-valued. The method relies on the use of singular value decomposition techniques, thus providing the best least-squares solutions and enabling a reduction of noise. The technique is applied to the analysis of a two-dimensional nanometric sample observed in Fresnel conditions with an electronic energy of 25 eV. The algorithm turns out to provide results with a mean relative error of the order of 5% and to be very stable against random noise.
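The role of singular value decomposition in stabilizing such reconstructions can be sketched on a small linear system. The ill-conditioned operator below is synthetic; in the actual method the truncation acts on the backpropagation of the scattered wave function:

```python
import numpy as np

def truncated_svd_solve(A, y, tol=1e-6):
    """Least-squares solution of A x = y via SVD, discarding singular
    values below tol * s_max. Truncation suppresses the noise
    amplification that makes naive inversion of ill-conditioned
    operators unstable."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ y) / s[keep])

rng = np.random.default_rng(3)
# Ill-conditioned synthetic operator: one nearly-null direction.
U, _ = np.linalg.qr(rng.standard_normal((8, 8)))
V, _ = np.linalg.qr(rng.standard_normal((8, 8)))
A = U @ np.diag([1.0, 0.9, 0.8, 0.7, 0.6, 0.5, 0.4, 1e-10]) @ V.T

x_true = rng.standard_normal(8)
y = A @ x_true + 1e-6 * rng.standard_normal(8)   # noisy data

x_naive = np.linalg.solve(A, y)       # noise amplified by ~1/1e-10
x_trunc = truncated_svd_solve(A, y)   # small-singular-value modes cut
print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_trunc - x_true))
```

Dropping the nearly-null modes trades a small bias (the lost component of the true solution) for an enormous reduction in noise amplification, which is the "reduction of noise" the abstract refers to.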
Inverse Monte Carlo: a unified reconstruction algorithm for SPECT
International Nuclear Information System (INIS)
Floyd, C.E.; Coleman, R.E.; Jaszczak, R.J.
1985-01-01
Inverse Monte Carlo (IMOC) is presented as a unified reconstruction algorithm for Emission Computed Tomography (ECT), providing simultaneous compensation for scatter, attenuation, and the variation of collimator resolution with depth. The technique of inverse Monte Carlo is used to find an inverse solution to the photon transport equation (an integral equation for photon flux from a specified source) for a parameterized source and specific boundary conditions. The system of linear equations so formed is solved to yield the source activity distribution for a set of acquired projections. For the studies presented here, the equations are solved using the EM (Maximum Likelihood) algorithm, although other solution algorithms, such as Least Squares, could be employed. While the present results specifically consider the reconstruction of camera-based Single Photon Emission Computed Tomographic (SPECT) images, the technique is equally valid for Positron Emission Tomography (PET) if a Monte Carlo model of such a system is used. As a preliminary evaluation, experimentally acquired SPECT phantom studies for imaging Tc-99m (140 keV) are presented which demonstrate the quantitative compensation for scatter and attenuation for a two-dimensional (single slice) reconstruction. The algorithm may be expanded in a straightforward manner to full three-dimensional reconstruction, including compensation for out-of-plane scatter.
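The EM (maximum-likelihood) solution step mentioned in the abstract can be sketched for a tiny system. The system matrix below is made up for illustration; in IMOC it would be generated by the Monte Carlo model of the scanner, with scatter and attenuation baked into its entries:

```python
import numpy as np

def mlem(A, y, n_iter=2000):
    """Maximum-likelihood EM iterations for emission tomography:
    x <- x * A^T(y / Ax) / A^T 1.

    A is the system matrix (a made-up projector here; a Monte Carlo
    model would supply it in IMOC), y the measured projections.
    """
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])            # sensitivity image
    for _ in range(n_iter):
        x *= (A.T @ (y / (A @ x))) / sens
    return x

rng = np.random.default_rng(4)
A = rng.random((12, 4))            # nonnegative toy system matrix
x_true = np.array([1.0, 0.2, 3.0, 0.5])
y = A @ x_true                     # noise-free projections

x_rec = mlem(A, y)
print(np.max(np.abs(A @ x_rec - y)))   # forward projections match the data
```

The multiplicative update keeps the activity estimate nonnegative at every iteration, which is one reason EM is preferred over plain least squares for emission data.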
Strategies for source space limitation in tomographic inverse procedures
International Nuclear Information System (INIS)
George, J.S.; Lewis, P.S.; Schlitt, H.A.; Kaplan, L.; Gorodnitsky, I.; Wood, C.C.
1994-01-01
The use of magnetic recordings for localization of neural activity requires the solution of an ill-posed inverse problem: i.e. the determination of the spatial configuration, orientation, and timecourse of the currents that give rise to a particular observed field distribution. In its general form, this inverse problem has no unique solution; due to superposition and the existence of silent source configurations, a particular magnetic field distribution at the head surface could be produced by any number of possible source configurations. However, by making assumptions concerning the number and properties of neural sources, it is possible to use numerical minimization techniques to determine the source model parameters that best account for the experimental observations while satisfying numerical or physical criteria. In this paper the authors describe progress on the development and validation of inverse procedures that produce distributed estimates of neuronal currents. The goal is to produce a temporal sequence of 3-D tomographic reconstructions of the spatial patterns of neural activation. Such approaches have a number of advantages, in principle. Because they do not require estimates of model order and parameter values (beyond specification of the source space), they minimize the influence of investigator decisions and are suitable for automated analyses. These techniques also allow localization of sources that are not point-like; experimental studies of cognitive processes and of spontaneous brain activity are likely to require distributed source models
A hybrid algorithm for solving inverse problems in elasticity
Directory of Open Access Journals (Sweden)
Barabasz Barbara
2014-12-01
The paper offers a new approach to handling difficult parametric inverse problems in elasticity and thermo-elasticity, formulated as global optimization problems. The proposed strategy is composed of two phases. In the first, global phase, the stochastic hp-HGS algorithm recognizes the basins of attraction of the various objective minima. In the second phase, the local objective minimizers are approached more closely by steepest descent processes executed separately in each basin of attraction. The proposed complex strategy is especially dedicated to ill-posed problems with multimodal objective functionals. The strategy offers comparatively low computational and memory costs resulting from a double-adaptive technique in both the forward and inverse problem domains. We provide a result on the Lipschitz continuity of the objective functional composed of the elastic energy and the boundary displacement misfits with respect to the unknown constitutive parameters. It allows common scaling of the accuracy of solving the forward and inverse problems, which is the core of the introduced double-adaptive technique. The capability of the proposed method to find multiple solutions is illustrated by a computational example which consists of restoring all feasible Young's modulus distributions minimizing an objective functional in a 3D domain of a photopolymer template obtained during step-and-flash imprint lithography.
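The two-phase strategy (global basin recognition followed by local steepest descent started separately in each basin) can be sketched on a one-dimensional multimodal objective. The toy objective and the naive sampling below stand in for the stochastic hp-HGS phase, which is far more sophisticated:

```python
import numpy as np

def objective(x):
    """Multimodal toy objective with two basins (minima near x = +/-1),
    standing in for the misfit functional of the elasticity problem."""
    return (x**2 - 1.0)**2

def gradient(x):
    return 4.0 * x * (x**2 - 1.0)

# Phase 1 (global): random sampling recognizes the basins of
# attraction; here we simply group the samples by basin (sign of x)
# and keep the best sample found in each.
rng = np.random.default_rng(5)
samples = rng.uniform(-2.0, 2.0, 200)
neg, pos = samples[samples < 0], samples[samples > 0]
seeds = [neg[np.argmin(objective(neg))], pos[np.argmin(objective(pos))]]

# Phase 2 (local): steepest descent executed separately in each basin.
minima = []
for x in seeds:
    for _ in range(500):
        x -= 0.05 * gradient(x)
    minima.append(x)

print(np.sort(np.round(minima, 4)))   # both minima recovered, near -1 and +1
```

Running the local phase once per basin, rather than once globally, is what lets the method return all feasible solutions of a multimodal ill-posed problem instead of a single arbitrary one.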
A 2D nonlinear inversion of well-seismic data
International Nuclear Information System (INIS)
Métivier, Ludovic; Lailly, Patrick; Delprat-Jannaud, Florence; Halpern, Laurence
2011-01-01
Well-seismic data such as vertical seismic profiles are supposed to provide detailed information about the elastic properties of the subsurface in the vicinity of the well. Heterogeneity of sedimentary terrains can lead to far from negligible multiple scattering, one of the manifestations of the nonlinearity involved in the mapping between elastic parameters and seismic data. We present a 2D extension of an existing 1D nonlinear inversion technique in the context of acoustic wave propagation. In the case of a subsurface with gentle lateral variations, we propose a regularization technique which aims at ensuring the stability of the inversion in a context where the recorded seismic waves provide a very poor illumination of the subsurface. We deal with a very large inverse problem. Special care has been taken over its numerical solution, regarding both the choice of the algorithms and the implementation on a cluster-based supercomputer. Our tests on synthetic data show the effectiveness of our regularization. They also show that our efforts in accounting for the nonlinearities are rewarded by an exceptional seismic resolution at distances of about 100 m from the well. Finally, they show that the result is not very sensitive to errors in the estimation of the velocity distribution, as long as these errors remain realistic in the context of a medium with gentle lateral variations.
Full-Physics Inverse Learning Machine for Satellite Remote Sensing Retrievals
Loyola, D. G.
2017-12-01
Satellite remote sensing retrievals are usually ill-posed inverse problems, typically solved by finding a state vector that minimizes the residual between simulated data and real measurements. Classical inversion methods are very time-consuming, as they require iterative calls to complex radiative-transfer forward models to simulate radiances and Jacobians, followed by the inversion of relatively large matrices. In this work we present a novel and extremely fast algorithm for solving inverse problems called the full-physics inverse learning machine (FP-ILM). The FP-ILM algorithm consists of a training phase, in which machine learning techniques are used to derive an inversion operator based on synthetic data generated using a radiative transfer model (which expresses the "full-physics" component) and the smart sampling technique, and an operational phase, in which the inversion operator is applied to real measurements. FP-ILM has been successfully applied to the retrieval of SO2 plume height during volcanic eruptions and to the retrieval of ozone profile shapes from UV/VIS satellite sensors. Furthermore, FP-ILM will be used for the near-real-time processing of the upcoming generation of European Sentinel sensors, with their unprecedented spectral and spatial resolution and the associated large increases in the amount of data.
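The train-then-apply structure of FP-ILM can be sketched in a few lines. The snippet below is purely illustrative, not the real retrieval: a simple exponential map stands in for the radiative-transfer forward model, and a nearest-neighbour lookup stands in for the paper's machine-learning regressor; every name and number is invented for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the radiative-transfer forward model: maps a scalar
# state (think SO2 plume height) to a simulated "radiance" spectrum.
def forward(state):
    wl = np.linspace(0.0, 1.0, 16)            # pseudo wavelength grid
    return np.exp(-state * wl)

# Training phase: sample states, simulate spectra, build the inversion
# operator (here a nearest-neighbour lookup over the synthetic library).
train_states = rng.uniform(1.0, 10.0, 500)
train_spectra = np.array([forward(s) for s in train_states])

def inverse_operator(measurement):
    i = np.argmin(np.sum((train_spectra - measurement) ** 2, axis=1))
    return train_states[i]

# Operational phase: apply the trained operator to a noisy measurement;
# no iterative forward-model calls or Jacobians are needed here.
true_state = 4.2
measurement = forward(true_state) + rng.normal(0.0, 1e-3, 16)
estimate = inverse_operator(measurement)
print(estimate)  # close to 4.2
```

The expensive forward model is called only during training; the operational phase is a cheap lookup, which is what makes the approach fast enough for near-real-time processing.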
The inverse method parametric verification of real-time embedded systems
André, Etienne
2013-01-01
This book introduces state-of-the-art verification techniques for real-time embedded systems, based on the inverse method for parametric timed automata. It reviews popular formalisms for the specification and verification of timed concurrent systems and, in particular, timed automata as well as several extensions such as timed automata equipped with stopwatches, linear hybrid automata and affine hybrid automata. The inverse method is introduced, and its benefits for guaranteeing robustness in real-time systems are shown. Then, it is shown how an iteration of the inverse method can solv
Zhukovsky, K
2014-01-01
We present a general method of operational nature to analyze and obtain solutions for a variety of equations of mathematical physics and related mathematical problems. We construct inverse differential operators and produce operational identities involving inverse derivatives and families of generalised orthogonal polynomials, such as the Hermite and Laguerre polynomial families. We develop the methodology of inverse and exponential operators, employing them for the study of partial differential equations. Advantages of the operational technique, combined with the use of integral transforms, generating functions with exponentials and their integrals, for solving a wide class of partial differential equations related to heat, wave, and transport problems, are demonstrated.
Semi-local inversion of the geodesic ray transform in the hyperbolic plane
International Nuclear Information System (INIS)
Courdurier, Matias; Saez, Mariel
2013-01-01
The inversion of the ray transform on the hyperbolic plane has applications in geophysical exploration and in medical imaging techniques (such as electrical impedance tomography). The geodesic ray transform has been studied in more general geometries and including attenuation, but all of the available inversion formulas require knowledge of the ray transform for all the geodesics. In this paper we present a different inversion formula for the ray transform on the hyperbolic plane, which has the advantage of only requiring knowledge of the ray transform in a reduced family of geodesics. The required family of geodesics is directly related to the set where the original function is to be recovered. (paper)
Sensitivity study on hydraulic well testing inversion using simulated annealing
International Nuclear Information System (INIS)
Nakao, Shinsuke; Najita, J.; Karasaki, Kenzi
1997-11-01
For environmental remediation, management of nuclear waste disposal, or geothermal reservoir engineering, it is very important to evaluate the permeabilities, spacing, and sizes of the subsurface fractures which control ground water flow. Cluster variable aperture (CVA) simulated annealing has been used as an inversion technique to construct fluid flow models of fractured formations based on transient pressure data from hydraulic tests. A two-dimensional fracture network system is represented as a filled regular lattice of fracture elements. The algorithm iteratively changes the aperture of a cluster of fracture elements, chosen randomly from a list of discrete apertures, to improve the match to observed pressure transients. The size of the clusters is held constant throughout the iterations. Sensitivity studies using simple fracture models with eight wells show that, in general, it is necessary to conduct interference tests using at least three different wells as pumping wells in order to reconstruct a fracture network with a transmissivity contrast of one order of magnitude, particularly when the cluster size is not known a priori. Because hydraulic inversion is inherently non-unique, it is important to utilize additional information. The authors investigated the relationship between the scale of heterogeneity and the optimum cluster size (and its shape) to enhance the reliability and convergence of the inversion. It appears that a cluster size corresponding to about 20-40% of the practical range of the spatial correlation is optimal. Inversion results for the Raymond test site data are also presented, and the practical range of spatial correlation is evaluated to be about 5-10 m from the optimal cluster size in the inversion.
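The CVA annealing loop described above can be sketched as follows. This is a hedged toy: a fixed random linear map stands in for the transient-pressure forward simulation, and the 8x8 lattice, discrete aperture list, and cooling schedule are invented for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy stand-in for the pressure-transient forward model: a fixed random
# linear map from the aperture field to synthetic "observed" pressures.
# The real CVA inversion runs a transient flow simulation here.
n = 8                                        # 8x8 lattice of fracture elements
apertures = np.array([1e-4, 1e-3, 1e-2])     # discrete aperture list
G = rng.normal(size=(20, n * n))             # illustrative forward operator

truth = rng.choice(apertures, size=n * n)
p_obs = G @ truth

def misfit(field):
    return float(np.sum((G @ field.ravel() - p_obs) ** 2))

# Cluster-variable-aperture annealing: reassign the aperture of a random
# 2x2 cluster (cluster size held constant, as in the paper) and accept
# by the Metropolis rule under a geometrically cooling temperature.
model = rng.choice(apertures, size=(n, n))
m0 = misfit(model)
T = m0                               # start temperature at the misfit scale
for step in range(20000):
    i, j = rng.integers(0, n - 1, size=2)
    trial = model.copy()
    trial[i:i + 2, j:j + 2] = rng.choice(apertures)
    d = misfit(trial) - misfit(model)
    if d < 0 or rng.random() < np.exp(-d / T):
        model = trial
    T *= 0.999
print(misfit(model) / m0)  # large reduction of the data misfit
```

Because only a cluster (not a single element) is perturbed per step, the search explores aperture fields at the spatial-correlation scale discussed in the abstract.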
Model selection for spectropolarimetric inversions
International Nuclear Information System (INIS)
Asensio Ramos, A.; Manso Sainz, R.; Martínez González, M. J.; Socas-Navarro, H.; Viticchié, B.; Orozco Suárez, D.
2012-01-01
Inferring magnetic and thermodynamic information from spectropolarimetric observations relies on the assumption of a parameterized model atmosphere whose parameters are tuned by comparison with observations. Often, the choice of the underlying atmospheric model is based on subjective reasons. In other cases, complex models are chosen based on objective reasons (for instance, the necessity to explain asymmetries in the Stokes profiles), but it is not clear what degree of complexity is needed. The lack of an objective way of comparing models has sometimes led to opposing views of solar magnetism because the inferred physical scenarios are essentially different. We present the first quantitative model comparison based on the computation of the Bayesian evidence ratios for spectropolarimetric observations. Our results show that there is not a single model appropriate for all profiles simultaneously. Data with moderate signal-to-noise ratios (S/Ns) favor models without gradients along the line of sight. If the observations show clear circular and linear polarization signals above the noise level, models with gradients along the line are preferred. As a general rule, observations with large S/Ns favor more complex models. We demonstrate that the evidence ratios correlate well with simple proxies. Therefore, we propose to calculate these proxies when carrying out standard least-squares inversions to allow for model comparison in the future.
Inverse transport theory of photoacoustics
International Nuclear Information System (INIS)
Bal, Guillaume; Jollivet, Alexandre; Jugnon, Vincent
2010-01-01
We consider the reconstruction of optical parameters in a domain of interest from photoacoustic data. Photoacoustic tomography (PAT) radiates high-frequency electromagnetic waves into the domain and measures acoustic signals emitted by the resulting thermal expansion. Acoustic signals are then used to construct the deposited thermal energy map. The latter depends on the constitutive optical parameters in a nontrivial manner. In this paper, we develop and use an inverse transport theory with internal measurements to extract information on the optical coefficients from knowledge of the deposited thermal energy map. We consider the multi-measurement setting in which many electromagnetic radiation patterns are used to probe the domain of interest. By developing an expansion of the measurement operator into singular components, we show that the spatial variations of the intrinsic attenuation and the scattering coefficients may be reconstructed. We also reconstruct coefficients describing anisotropic scattering of photons, such as the anisotropy coefficient g(x) in a Henyey–Greenstein phase function model. Finally, we derive stability estimates for the reconstructions.
Inverse Problems and Uncertainty Quantification
Litvinenko, Alexander
2014-01-06
In a Bayesian setting, inverse problems and uncertainty quantification (UQ) - the propagation of uncertainty through a computational (forward) model - are strongly connected. In the form of conditional expectation the Bayesian update becomes computationally attractive. This is especially the case as, together with a functional or spectral approach for the forward UQ, there is no need for time-consuming and slowly convergent Monte Carlo sampling. The developed sampling-free non-linear Bayesian update is derived from the variational problem associated with conditional expectation. This formulation in general calls for further discretisation to make the computation possible, and we choose a polynomial approximation. After giving details on the actual computation in the framework of functional or spectral approximations, we demonstrate the workings of the algorithm on a number of examples of increasing complexity. At last, we compare the linear and quadratic Bayesian update on the small but taxing example of the chaotic Lorenz 84 model, where we experiment with the influence of different observation or measurement operators on the update.
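In the linear-Gaussian special case, the conditional-expectation update reduces to the familiar Kalman-type formula acting directly on means and covariances rather than on Monte Carlo samples. The sketch below illustrates this sampling-free idea on an invented 2-parameter, 3-measurement Gaussian toy problem; it is not the authors' functional-approximation machinery.

```python
import numpy as np

# Linear Bayesian update for a toy forward model y = H q + noise.
# The update acts directly on the Gaussian representation (mean and
# covariance) instead of on Monte Carlo samples.
H = np.array([[1.0, 0.5],
              [0.2, 1.0],
              [1.0, 1.0]])
Cq = np.eye(2)            # prior covariance of the parameters q
Ce = 0.01 * np.eye(3)     # measurement-noise covariance
q_prior = np.zeros(2)     # prior mean

q_true = np.array([0.7, -0.3])
rng = np.random.default_rng(2)
y = H @ q_true + rng.multivariate_normal(np.zeros(3), Ce)

# Kalman gain K = Cq H^T (H Cq H^T + Ce)^{-1}; the linear Bayesian
# update is then q_post = q_prior + K (y - H q_prior).
K = Cq @ H.T @ np.linalg.inv(H @ Cq @ H.T + Ce)
q_post = q_prior + K @ (y - H @ q_prior)
C_post = (np.eye(2) - K @ H) @ Cq
print(q_post)  # pulled from the prior mean toward q_true
```

The quadratic update mentioned in the abstract extends this same conditional-expectation construction beyond the linear map in the gain.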
Inverse Free Electron Laser accelerator
International Nuclear Information System (INIS)
Fisher, A.; Gallardo, J.; van Steenbergen, A.; Sandweiss, J.
1992-09-01
The study of the INVERSE FREE ELECTRON LASER, as a potential mode of electron acceleration, is being pursued at Brookhaven National Laboratory. Recent studies have focussed on the development of a low energy, high gradient, multi-stage linear accelerator. The elementary ingredients for the IFEL interaction are the 50 MeV Linac e⁻ beam and the 10¹¹ W CO₂ laser beam of BNL's Accelerator Test Facility (ATF), Center for Accelerator Physics (CAP), and a wiggler. The latter element is designed as a fast excitation unit making use of alternating stacks of Vanadium Permendur (VaP) ferromagnetic laminations, periodically interspersed with conductive, nonmagnetic laminations, which act as eddy current induced field reflectors. Wiggler parameters and field distribution data will be presented for a prototype wiggler in a constant period and in a ∼1.5%/cm tapered period configuration. The CO₂ laser beam will be transported through the IFEL interaction region by means of a low loss, dielectric coated, rectangular waveguide. Short waveguide test sections have been constructed and tested using a low power cw CO₂ laser. Preliminary results on guide attenuation and mode selectivity will be given, together with a discussion of the optical issues for the IFEL accelerator. The IFEL design is supported by the development and use of 1D and 3D simulation programs. The results of simulation computations, including wiggler errors, for a single-module accelerator and for a multi-module accelerator will be presented.
Machine Learning and Inverse Problem in Geodynamics
Shahnas, M. H.; Yuen, D. A.; Pysklywec, R.
2017-12-01
During the past few decades, numerical modeling and traditional HPC have been widely deployed in many diverse fields for problem solutions. In recent years, however, the rapid emergence of machine learning (ML), a subfield of artificial intelligence (AI), in many fields of science, engineering, and finance seems to mark a turning point in the replacement of traditional modeling procedures with artificial intelligence-based techniques. The study of the circulation in the interior of Earth relies on the study of high-pressure mineral physics, geochemistry, and petrology, where the number of mantle parameters is large and the thermoelastic parameters are highly pressure- and temperature-dependent. More complexity arises from the fact that many of these parameters, which are incorporated in the numerical models as input parameters, are not yet well established. In such complex systems the application of machine learning algorithms can play a valuable role. Our focus in this study is the application of supervised machine learning (SML) algorithms in predicting mantle properties, with an emphasis on SML techniques for solving the inverse problem. As a sample problem we focus on the spin transition in ferropericlase and perovskite that may cause slab and plume stagnation at mid-mantle depths. The degree of the stagnation depends on the degree of negative density anomaly at the spin transition zone. The training and testing samples for the machine learning models are produced by numerical convection models with known magnitudes of density anomaly (as the class labels of the samples). The volume fractions of the stagnated slabs and plumes, which can be considered as measures of the degree of stagnation, are assigned as sample features. The machine learning models can determine the magnitude of the spin transition-induced density anomalies that can cause flow stagnation at mid-mantle depths. Employing support vector machine (SVM) algorithms we show that SML techniques
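As an illustration of the SML classification step, the sketch below trains a minimal linear SVM (hand-rolled hinge-loss subgradient descent, standing in for an off-the-shelf SVM library) on invented stand-in features: stagnated-material volume fractions labelled by weak versus strong density anomaly.

```python
import numpy as np

rng = np.random.default_rng(3)

# Synthetic stand-in for the training data: each "convection run" gives
# a feature vector (volume fractions of stagnated slabs and plumes) and
# a class label for the spin-transition density anomaly
# (-1: weak anomaly / little stagnation, +1: strong anomaly).
n = 200
weak = rng.normal([0.10, 0.15], 0.05, size=(n, 2))
strong = rng.normal([0.40, 0.45], 0.05, size=(n, 2))
X = np.vstack([weak, strong])
y = np.r_[-np.ones(n), np.ones(n)]

# Minimal linear SVM trained by subgradient descent on the regularized
# hinge loss: (lam/2)||w||^2 + mean(max(0, 1 - y (Xw + b))).
w, b = np.zeros(2), 0.0
lam, lr = 1e-3, 0.1
for epoch in range(1000):
    margins = y * (X @ w + b)
    viol = margins < 1.0                     # margin violators
    gw = lam * w - (y[viol, None] * X[viol]).sum(axis=0) / len(X)
    gb = -y[viol].sum() / len(X)
    w -= lr * gw
    b -= lr * gb

accuracy = np.mean(np.sign(X @ w + b) == y)
print(accuracy)  # near-perfect separation of the two classes
```

In the study itself the labels come from convection models with known anomaly magnitudes; only the classifier mechanics are shown here.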
Magnetic topology of Co-based inverse opal-like structures
Grigoryeva, N.A.; Mistonov, A.A.; Napolskii, K.S.; Sapoletova, N.A.; Eliseev, A.A.; Bouwman, W.; Byelov, D.; Petukhov, A.V.; Chernyshov, D.Y.; Eckerlebe, H.; Vasilieva, A.V.; Grigoriev, S.V.
2011-01-01
The magnetic and structural properties of a cobalt inverse opal-like crystal have been studied by a combination of complementary techniques ranging from polarized neutron scattering and superconducting quantum interference device (SQUID) magnetometry to x-ray diffraction. Microradian small-angle x-ray diffraction shows that the inverse opal-like structure (OLS) synthesized by the electrochemical method fully duplicates the three-dimensional net of voids of the template artificial opal. The in...
Resolution analysis in full waveform inversion
Fichtner, A.; Trampert, J.
2011-01-01
We propose a new method for the quantitative resolution analysis in full seismic waveform inversion that overcomes the limitations of classical synthetic inversions while being computationally more efficient and applicable to any misfit measure. The method rests on (1) the local quadratic
Abel inverse transformation applied to plasma diagnostics
International Nuclear Information System (INIS)
Zhu Shiyao
1987-01-01
Two methods of Abel inverse transformation are applied to two different test profiles. The effects of random errors in the input data, position uncertainty, and the number of input data points on the accuracy of the inverse transformation have been studied. The two methods are compared with each other.
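A generic numerical Abel inversion can be written down directly from the inversion integral; the scheme below (central-difference derivative of the projected profile plus a substitution that removes the integrable singularity) is a textbook construction for illustration, not one of the two specific methods compared in the paper. It is checked against an analytic transform pair.

```python
import numpy as np

# Numerical Abel inversion
#   f(r) = -(1/pi) * int_r^R F'(y) / sqrt(y^2 - r^2) dy,
# using the substitution y = sqrt(r^2 + t^2), which removes the
# integrable singularity at y = r.
def abel_inverse(F, R=1.0, n=100, m=400, h=1e-5):
    r = np.linspace(0.05 * R, 0.95 * R, n)
    f = np.empty_like(r)
    for k, rk in enumerate(r):
        T = np.sqrt(R**2 - rk**2)
        t = np.linspace(0.0, T, m)
        y = np.sqrt(rk**2 + t**2)
        # central-difference derivative of the projected profile F
        dF = (F(np.minimum(y + h, R)) - F(y - h)) / (2.0 * h)
        g = dF / y
        # composite trapezoid rule on the regular t grid
        f[k] = -(g.sum() - 0.5 * (g[0] + g[-1])) * (t[1] - t[0]) / np.pi
    return r, f

# Analytic test pair: f(r) = 1 - r^2 has Abel transform
# F(y) = (4/3)(1 - y^2)^{3/2}.
F = lambda y: (4.0 / 3.0) * np.clip(1.0 - y * y, 0.0, None) ** 1.5
r, f = abel_inverse(F)
err = np.max(np.abs(f - (1.0 - r * r)))
print(err)  # small reconstruction error
```

Perturbing the input data `F` with random noise and re-running reproduces the kind of error-sensitivity study the abstract describes.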
Automated gravity gradient tensor inversion for underwater object detection
International Nuclear Information System (INIS)
Wu, Lin; Tian, Jinwen
2010-01-01
Underwater abnormal object detection is a current need for the navigation security of autonomous underwater vehicles (AUVs). In this paper, an automated gravity gradient tensor inversion algorithm is proposed for the purpose of passive underwater object detection. Full-tensor gravity gradient anomalies induced by an object in the partial area can be measured with the technique of gravity gradiometry on an AUV. The automated algorithm then utilizes the anomalies, using the inverse method to estimate the mass and barycentre location of the arbitrary-shaped object. A few tests on simple synthetic models are illustrated in order to evaluate the feasibility and accuracy of the new algorithm. Moreover, the method is applied to a complicated model of an abnormal object with gradiometer and AUV noise, and interference from a neighbouring illusive smaller object. In all cases tested, the estimated mass and barycentre location parameters are found to be in good agreement with the actual values.
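The mass-and-barycentre estimation step can be illustrated with a point-mass idealization: the gradient tensor is linear in the mass, so for each candidate barycentre the best-fitting mass is a one-dimensional least-squares solve, and the candidate with the smallest residual wins. The sketch below is an invented noise-free synthetic setup, not the paper's algorithm.

```python
import numpy as np

G = 6.674e-11  # gravitational constant

def tensor(p, m, x):
    """Gravity gradient tensor at measurement point x due to a point
    mass m located at p (point-mass idealization of the object)."""
    r = x - p
    d = np.linalg.norm(r)
    return G * m * (3.0 * np.outer(r, r) - d**2 * np.eye(3)) / d**5

# Synthetic "AUV survey": tensor measurements along a line above the object.
p_true, m_true = np.array([2.0, 1.0, -5.0]), 5e6   # barycentre, mass (kg)
xs = [np.array([x, 0.0, 0.0]) for x in np.linspace(-5, 5, 11)]
obs = np.array([tensor(p_true, m_true, x) for x in xs])

# Inversion: coarse grid search over barycentre candidates; for each,
# the mass follows from a 1D least-squares fit to the unit-mass response.
best = (np.inf, None, None)
for gx in np.linspace(-4, 4, 17):
    for gy in np.linspace(-4, 4, 17):
        for gz in np.linspace(-8, -2, 13):
            p = np.array([gx, gy, gz])
            u = np.array([tensor(p, 1.0, x) for x in xs])
            m = (u * obs).sum() / (u * u).sum()      # linear LS for mass
            res = ((m * u - obs) ** 2).sum()
            if res < best[0]:
                best = (res, p, m)
res, p_est, m_est = best
print(p_est, m_est)  # near the true barycentre and mass
```

A gradient-based refinement of the barycentre would replace the grid search in a realistic setting with noise and off-grid sources.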
Reiter, D. T.; Rodi, W. L.
2015-12-01
Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, provided one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
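For quadratic component objectives, the alternating scheme described above is exactly consensus ADMM. The sketch below demonstrates it on an invented two-subset linear least-squares problem (think body-wave versus surface-wave data constraining the same model) and checks that the consensus model matches the full joint solution.

```python
import numpy as np

rng = np.random.default_rng(4)

# Two synthetic "data subsets" constraining the same 5-parameter model.
x_true = rng.normal(size=5)
A1 = rng.normal(size=(30, 5)); b1 = A1 @ x_true + 0.01 * rng.normal(size=30)
A2 = rng.normal(size=(40, 5)); b2 = A2 @ x_true + 0.01 * rng.normal(size=40)

# Reference: solve the full problem min ||A1 x - b1||^2 + ||A2 x - b2||^2.
x_full, *_ = np.linalg.lstsq(np.vstack([A1, A2]), np.r_[b1, b2], rcond=None)

# Augmented-Lagrangian (consensus ADMM) decomposition: each component
# problem is solved separately, and the multipliers u_i steer the
# component models x_i toward a common model z.
rho = 1.0
z = np.zeros(5)
x = [np.zeros(5), np.zeros(5)]
u = [np.zeros(5), np.zeros(5)]
As, bs = [A1, A2], [b1, b2]
for it in range(500):
    for i in range(2):
        # closed-form minimizer of ||A_i x - b_i||^2 + (rho/2)||x - z + u_i||^2
        lhs = 2.0 * As[i].T @ As[i] + rho * np.eye(5)
        rhs = 2.0 * As[i].T @ bs[i] + rho * (z - u[i])
        x[i] = np.linalg.solve(lhs, rhs)
    z = sum(x[i] + u[i] for i in range(2)) / 2.0     # consensus update
    for i in range(2):
        u[i] = u[i] + x[i] - z                       # multiplier update
print(np.linalg.norm(z - x_full))  # consensus model matches the full solution
```

In the real application each inner solve is itself a large tomographic inversion, but the outer consensus logic is unchanged.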
Review on solving the inverse problem in EEG source analysis
Directory of Open Access Journals (Sweden)
Fabri Simon G
2008-11-01
Full Text Available Abstract In this primer, we give a review of the inverse problem for EEG source localization. This is intended for researchers new to the field, to give insight into the state-of-the-art techniques used to find approximate solutions for the brain sources giving rise to a scalp potential recording. Furthermore, a review of the performance results of the different techniques is provided to compare these different inverse solutions. The authors also include the results of a Monte Carlo analysis which they performed to compare four non-parametric algorithms and hence contribute to what is presently recorded in the literature. An extensive list of references to the work of other researchers is also provided. This paper starts off with a mathematical description of the inverse problem and proceeds to discuss the two main categories of methods which were developed to solve the EEG inverse problem: the non-parametric and parametric methods. The main difference between the two is whether a fixed number of dipoles is assumed a priori or not. Various techniques falling within these categories are described, including minimum norm estimates and their generalizations, LORETA, sLORETA, VARETA, S-MAP, ST-MAP, Backus-Gilbert, LAURA, Shrinking LORETA FOCUSS (SLF), SSLOFO and ALF for the non-parametric methods, and beamforming techniques, BESA, subspace techniques such as MUSIC and methods derived from it, FINES, simulated annealing and computational intelligence algorithms for the parametric methods. From a review of the performance of these techniques as documented in the literature, one could conclude that in most cases the LORETA solution gives satisfactory results. In situations involving clusters of dipoles, higher-resolution algorithms such as MUSIC or FINES are however preferred. Imposing reliable biophysical and psychological constraints, as done by LAURA, has given superior results. The Monte Carlo analysis performed, comparing WMN, LORETA, sLORETA and SLF
Stochastic Gabor reflectivity and acoustic impedance inversion
Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John
2018-02-01
To delineate subsurface lithology and estimate petrophysical properties of a reservoir, it is possible to use acoustic impedance (AI), which is the result of seismic inversion. To convert amplitude to AI, it is vital to remove wavelet effects from the seismic signal in order to obtain a reflectivity series, and subsequently to transform those reflections to AI. To carry out seismic inversion correctly, it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are inevitable. These are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods try to estimate the reflectivity series, because of their incorrect assumptions the estimates will not be correct, though they may be useful. Converting those reflection series to AI and merging them with a low-frequency initial model can help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI by obtaining a bias from well logs. To carry out this aim, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis, and wavelet properties were estimated from different windows. Dealing with different time windows gave the ability to create a time-variant kernel matrix, which was used to remove wavelet effects from the seismic data. The result was a reflection series that does not follow the stationary assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to show the ability of the introduced method. The results highlight that the time cost of the seismic inversion is negligible relative to general Gabor inversion in the frequency domain. Also
Inverse M-matrices and ultrametric matrices
Dellacherie, Claude; San Martin, Jaime
2014-01-01
The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.
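The flavour of the inverse M-matrix connection can be checked numerically: a small nonsingular symmetric ultrametric matrix should have an inverse with the Z-sign pattern of an M-matrix (nonpositive off-diagonal entries). This is a single numerical check for one example, not the monograph's general theory.

```python
import numpy as np

# A symmetric ultrametric matrix: U[i,j] >= min(U[i,k], U[k,j]) for all
# k, with a dominant diagonal.  Here U = I + J (J the all-ones matrix),
# whose inverse is I - J/4 in closed form.
U = np.array([[2.0, 1.0, 1.0],
              [1.0, 2.0, 1.0],
              [1.0, 1.0, 2.0]])
Uinv = np.linalg.inv(U)
print(Uinv)
# Off-diagonal entries of the inverse are <= 0 and the diagonal is
# positive: the sign pattern of an M-matrix, as the theory predicts.
```

Scaling this experiment to random ultrametric matrices built from trees gives a quick empirical tour of the book's main theme.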
Recurrent Neural Network for Computing Outer Inverse.
Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin
2016-05-01
Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
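The simplest member of this family of dynamic equations is the gradient network for the Moore-Penrose inverse, dV/dt = -γ Aᵀ(AV - I), integrated here by explicit Euler from the zero initial state; the paper's networks generalize this to outer inverses with prescribed range and null space, which this sketch does not cover. The step size is chosen to satisfy the usual Euler stability bound.

```python
import numpy as np

# Euler integration of the matrix-valued dynamic equation
#   dV/dt = -gamma * A^T (A V - I),
# whose zero-initial-state trajectory converges to the Moore-Penrose
# inverse when A has full column rank.
A = np.array([[1.0, 2.0],
              [3.0, 4.0],
              [5.0, 6.0]])
gamma, h = 1.0, 0.01      # gain and Euler step (h * gamma * ||A^T A|| < 2)
V = np.zeros((2, 3))      # zero initial state
I = np.eye(3)
for _ in range(10000):
    V = V - h * gamma * (A.T @ (A @ V - I))
print(np.max(np.abs(V - np.linalg.pinv(A))))  # ~0 after convergence
```

The stability condition on the step size mirrors the spectrum condition the abstract mentions for the first network: convergence is governed by the eigenvalues of AᵀA.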
Inverse analysis of turbidites by machine learning
Naruse, H.; Nakao, K.
2017-12-01
This study aims to propose a method to estimate the paleo-hydraulic conditions of turbidity currents from ancient turbidites using a machine-learning technique. In this method, numerical simulation is repeated under various initial conditions, producing a data set of characteristic features of turbidites. This data set is then used for supervised training of a deep-learning neural network (NN). Quantities of characteristic features of turbidites in the training data set are given to the input nodes of the NN, and the output nodes are expected to provide estimates of the initial conditions of the turbidity current. The weight coefficients of the NN are then optimized to reduce the root-mean-square difference between the true conditions and the output values of the NN. The empirical relationship between the numerical results and the initial conditions is explored in this method, and the discovered relationship is used for inversion of turbidity currents. This machine learning can potentially produce an NN that estimates paleo-hydraulic conditions from data of ancient turbidites. We produced a preliminary implementation of this methodology. A forward model based on 1D shallow-water equations with a correction for the density-stratification effect was employed. This model calculates the behavior of a surge-like turbidity current transporting mixed-size sediment, and outputs the spatial distribution of volume per unit area of each grain-size class on a uniform slope. The grain-size distribution was discretized into 3 classes. Numerical simulation was repeated 1000 times, and thus 1000 beds of turbidites were used as the training data for an NN that has 21000 input nodes and 5 output nodes with two hidden layers. After the machine learning finished, independent simulations were conducted 200 times in order to evaluate the performance of the NN. As a result of this test, the initial conditions of the validation data were successfully reconstructed by the NN. The estimated values show very small
International Nuclear Information System (INIS)
Tsuchihashi, Toshio; Yoshizawa, Satoshi; Maki, Toshio; Kitagawa, Matsuo; Suzuki, Ken; Fujita, Isao
1998-01-01
A technique that increases slice thickness so that it becomes wider than the excitation width of the 180° inversion pulse, and in which TR is partitioned twice, has been investigated with regard to fast FLAIR. This technique reduces the flow artifact of CSF. It is thought that the flow artifact is reduced because the CSF that flows onto the slice reaches the null point. The cross-talk effect of the 180° inversion pulse appears as a high CSF signal. As a result, the number of slices needs to be partitioned two or three times before imaging, so the imaging time is doubled or tripled. Considering the cross-talk effect of the 180° inversion pulse and the imaging time needed for this technique, the optimal imaging technique would be one that uses an inversion pulse that is four times the slice thickness plus the slice spacing, and for which the number of slices is partitioned twice. Furthermore, the null point of CSF was dependent on dividing TR in half. (author)
Digital Repository Service at National Institute of Oceanography (India)
Rao, M.M.M.; Murty, T.V.R.; SuryaPrakash, S.; Chandramouli, P.; Murthy, K.S.R.
New techniques in digital holography
Picart, Pascal
2015-01-01
A state-of-the-art presentation of important advances in the field of digital holography, detailing progress related to the fundamentals of digital holography, in-line holography applied to fluid mechanics, digital color holography, digital holographic microscopy, infrared holography, special techniques in full-field vibrometry, and inverse problems in digital holography.
Identification of polymorphic inversions from genotypes
Directory of Open Access Journals (Sweden)
Cáceres Alejandro
2012-02-01
Full Text Available Abstract Background Polymorphic inversions are a source of genetic variability with a direct impact on recombination frequencies. Given the difficulty of their experimental study, computational methods have been developed to infer their existence in a large number of individuals using genome-wide data of nucleotide variation. Methods based on haplotype tagging of known inversions attempt to classify individuals as having a normal or inverted allele. Other methods that measure differences in linkage disequilibrium attempt to identify regions with inversions but are unable to classify subjects accurately, an essential requirement for association studies. Results We present a novel method to both identify polymorphic inversions from genome-wide genotype data and classify individuals as carrying a normal or inverted allele. Our method, a generalization of a published method for haplotype data [1], utilizes linkage between groups of SNPs to partition a set of individuals into normal and inverted subpopulations. We employ a sliding-window scan to identify regions likely to harbor an inversion, and accumulation of evidence from neighboring SNPs is used to accurately determine the inversion status of each subject. Further, our approach detects inversions directly from genotype data, thus increasing its usability for current genome-wide association studies (GWAS). Conclusions We demonstrate the accuracy of our method to detect inversions and classify individuals on principled simulated genotypes, produced by the evolution of an inversion event within a coalescent model [2]. We applied our method to real genotype data from HapMap Phase III to characterize the inversion status of two known inversions within the regions 17q21 and 8p23 across 1184 individuals. Finally, we scan the full genomes of the European-origin (CEU) and Yoruba (YRI) HapMap samples. We find population-based evidence for 9 out of 15 well-established autosomal inversions, and for 52 regions
Convex blind image deconvolution with inverse filtering
Lv, Xiao-Guang; Li, Fang; Zeng, Tieyong
2018-03-01
Blind image deconvolution is the process of estimating both the original image and the blur kernel from the degraded image with only partial or no information about degradation and the imaging system. It is a bilinear ill-posed inverse problem corresponding to the direct problem of convolution. Regularization methods are used to handle the ill-posedness of blind deconvolution and get meaningful solutions. In this paper, we investigate a convex regularized inverse filtering method for blind deconvolution of images. We assume that the support region of the blur object is known, as has been done in a few existing works. By studying the inverse filters of signal and image restoration problems, we observe the oscillation structure of the inverse filters. Inspired by the oscillation structure of the inverse filters, we propose to use the star norm to regularize the inverse filter. Meanwhile, we use the total variation to regularize the resulting image obtained by convolving the inverse filter with the degraded image. The proposed minimization model is shown to be convex. We employ the first-order primal-dual method for the solution of the proposed minimization model. Numerical examples for blind image restoration are given to show that the proposed method outperforms some existing methods in terms of peak signal-to-noise ratio (PSNR), structural similarity (SSIM), visual quality and time consumption.
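The ill-posedness that the abstract's regularization addresses is easy to see in a much simpler scheme than the paper's convex star-norm/TV model: a Tikhonov-regularized inverse filter in the Fourier domain. The sketch below is a generic stand-in, not the authors' method; the quadratic penalty and all names are placeholders.

```python
import numpy as np

def regularized_inverse_filter(degraded, kernel, lam=1e-3):
    """Tikhonov-regularized inverse filtering in the Fourier domain.

    Naive inverse filtering divides by the kernel spectrum H and blows
    up where |H| is small (the ill-posed part); the lam term damps
    exactly those frequencies.
    """
    H = np.fft.fft2(kernel, s=degraded.shape)
    G = np.fft.fft2(degraded)
    X = np.conj(H) * G / (np.abs(H) ** 2 + lam)
    return np.real(np.fft.ifft2(X))

# Blur a test image with a 3x3 box kernel (circular convolution via
# FFT), then deconvolve with the same kernel.
rng = np.random.default_rng(1)
img = rng.random((32, 32))
kernel = np.ones((3, 3)) / 9.0
blurred = np.real(np.fft.ifft2(np.fft.fft2(img) *
                               np.fft.fft2(kernel, s=img.shape)))
restored = regularized_inverse_filter(blurred, kernel, lam=1e-3)
print(np.mean((restored - img) ** 2) < np.mean((blurred - img) ** 2))
```

With noise present, larger `lam` (or the structured regularizers of the paper) trades fidelity for stability.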
A pericentric inversion of chromosome X disrupting F8 and resulting in haemophilia A.
Xin, Yu; Zhou, Jingyi; Ding, Qiulan; Chen, Changming; Wu, Xi; Wang, Xuefeng; Wang, Hongli; Jiang, Xiaofeng
2017-08-01
The frequency of pericentric inversion of the X chromosome is much lower than that of the autosomes. We hereby characterise a pericentric inversion of the X chromosome associated with severe factor VIII (FVIII) deficiency in a sporadic haemophilia A (HA) pedigree. PCR primer walking and genome walking strategies were adopted to identify the exact breakpoints of the inversion. Copy number variations (CNVs) of F8 and the whole chromosomes were detected by AccuCopy and Affymetrix CytoScan High Definition (HD) assays, respectively. A karyotype analysis was performed by the cytogenetic G-banding technique. We identified a previously undescribed type of pericentric inversion of the X chromosome [inv(X)(p11.21q28)] in the proband with severe FVIII:C deficiency; the inverted segment was approximately 64.4% of the total chromosomal length. The karyotype analysis of the X chromosome confirmed the pericentric inversion in the proband and his mother. A haplotype analysis traced the inversion to his maternal grandfather, who was not a somatic mosaic of the inversion. This finding indicates that the causative mutation may originate from his germ cells or, less likely, from germ-cell mosaicism. The characterisation of this pericentric inversion involving F8 extends the molecular mechanisms causing HA. The pericentric inversion rearrangement, which disrupts F8 through non-homologous end joining, is responsible for the pathogenesis of severe HA. Published by the BMJ Publishing Group Limited. For permission to use (where not already granted under a licence) please go to http://www.bmj.com/company/products-services/rights-and-licensing/.
International Nuclear Information System (INIS)
Fee, David; Izbekov, Pavel; Kim, Keehoon; Yokoo, Akihiko; Lopez, Taryn
2017-01-01
Eruption mass and mass flow rate are critical parameters for determining the aerial extent and hazard of volcanic emissions. Infrasound waveform inversion is a promising technique to quantify volcanic emissions. Although topography may substantially alter the infrasound waveform as it propagates, advances in wave-propagation modeling and station coverage permit robust inversion of infrasound data from volcanic explosions. The inversion can estimate the eruption mass flow rate, and the total eruption mass if the flow density is known. However, infrasound-based eruption flow rates and mass estimates have yet to be validated against independent measurements, and numerical modeling has only recently been applied to the inversion technique. Here we present a robust full-waveform acoustic inversion method and use it to calculate eruption flow rates and masses for 49 explosions at Sakurajima Volcano, Japan.
Support minimized inversion of acoustic and elastic wave scattering
International Nuclear Information System (INIS)
Safaeinili, A.
1994-01-01
This report discusses the following topics on support-minimized inversion of acoustic and elastic wave scattering: minimum support inversion; forward modelling of elastodynamic wave scattering; minimum support linearized acoustic inversion; support-minimized nonlinear acoustic inversion without absolute phase; and support-minimized nonlinear elastic inversion.
NLSE: Parameter-Based Inversion Algorithm
Sabbagh, Harold A.; Murphy, R. Kim; Sabbagh, Elias H.; Aldrin, John C.; Knopp, Jeremy S.
Chapter 11 introduced us to the notion of an inverse problem and gave us some examples of the value of this idea to the solution of realistic industrial problems. The basic inversion algorithm described in Chap. 11 was based upon the Gauss-Newton theory of nonlinear least-squares estimation and is called NLSE in this book. In this chapter we will develop the mathematical background of this theory more fully, because this algorithm will be the foundation of inverse methods and their applications during the remainder of this book. We hope, thereby, to introduce the reader to the application of sophisticated mathematical concepts to engineering practice without introducing excessive mathematical sophistication.
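A minimal sketch of the Gauss-Newton iteration for nonlinear least squares that NLSE is built on, assuming a generic residual/Jacobian interface; the function names and the exponential test problem are illustrative, not from the book.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Gauss-Newton iteration for nonlinear least squares:
    minimize 0.5*||r(x)||^2 via x <- x - (J^T J)^{-1} J^T r."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        x = x - np.linalg.solve(J.T @ J, J.T @ r)
    return x

# Illustrative test problem: fit y = a*exp(b*t) to noiseless data
# generated with a = 2, b = -1.
t = np.linspace(0.0, 2.0, 25)
y = 2.0 * np.exp(-1.0 * t)

def residual(x):
    a, b = x
    return a * np.exp(b * t) - y

def jacobian(x):
    a, b = x
    # Columns: d r / d a and d r / d b.
    return np.column_stack([np.exp(b * t), a * t * np.exp(b * t)])

x_hat = gauss_newton(residual, jacobian, x0=[1.0, -0.5])
print(x_hat)
```

For zero-residual problems like this one the iteration converges quadratically near the solution; practical inversion codes add damping (Levenberg-Marquardt) for robustness.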
Inverse problems for the Boussinesq system
International Nuclear Information System (INIS)
Fan, Jishan; Jiang, Yu; Nakamura, Gen
2009-01-01
We obtain two results on inverse problems for a 2D Boussinesq system. First, we prove Lipschitz stability for the inverse source problem of identifying a time-independent external force in the system, with observation data of the velocity in an arbitrary sub-domain over a time interval and the data of velocity and temperature at a fixed positive time t0 > 0 over the whole spatial domain. Second, we prove a conditional stability estimate for an inverse problem of identifying the two initial conditions with a single observation on a sub-domain.
Population inversion in a stationary recombining plasma
International Nuclear Information System (INIS)
Otsuka, M.
1980-01-01
Population inversion, which occurs in a recombining plasma when a stationary He plasma is brought into contact with a neutral gas, is examined. With hydrogen as the contact gas, noticeable inversion between low-lying levels of H has been found. The overpopulation density is of the order of 10^8 cm^-3, which is much higher than that (approx. 10^5 cm^-3) obtained previously with He as the contact gas. Relations between these experimental results and the conditions for population inversion are discussed with the CR model.
Multiparameter Optimization for Electromagnetic Inversion Problem
Directory of Open Access Journals (Sweden)
M. Elkattan
2017-10-01
Full Text Available Electromagnetic (EM) methods have been extensively used in geophysical investigations such as mineral and hydrocarbon exploration as well as in geological mapping and structural studies. In this paper, we developed an inversion methodology for electromagnetic data to determine the physical parameters of a set of horizontal layers. The forward model was computed using the transmission-line method. In the inversion part, we solved a multiparameter optimization problem in which the parameters are the conductivity, dielectric constant, and permeability of each layer. The optimization problem was solved by a simulated annealing approach. The inversion methodology was tested using a set of models representing common geological formations.
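The simulated-annealing step can be sketched as follows. The forward model below is a smooth placeholder rather than the paper's transmission-line method, and all parameter values and names are hypothetical; the point is the accept/reject rule with a cooling temperature.

```python
import numpy as np

rng = np.random.default_rng(42)

def forward(params):
    # Placeholder forward model standing in for the transmission-line
    # response of a layered earth; any smooth nonlinear map will do here.
    return np.array([np.sum(params ** 2),
                     np.sum(np.sin(params)),
                     params[0] * params[-1]])

true_params = np.array([1.0, 0.5, 2.0])   # e.g. properties of 3 layers
observed = forward(true_params)

def misfit(params):
    return np.sum((forward(params) - observed) ** 2)

def simulated_annealing(x0, n_steps=20000, t0=1.0, t_end=1e-4):
    x, fx = x0.copy(), misfit(x0)
    best_x, best_f = x.copy(), fx
    for k in range(n_steps):
        temp = t0 * (t_end / t0) ** (k / n_steps)   # geometric cooling
        cand = x + rng.normal(0.0, 0.1, size=x.size)
        fc = misfit(cand)
        # Accept downhill moves always, uphill moves with Boltzmann prob.
        if fc < fx or rng.random() < np.exp(-(fc - fx) / temp):
            x, fx = cand, fc
            if fx < best_f:
                best_x, best_f = x.copy(), fx
    return best_x, best_f

x0 = np.array([0.5, 0.0, 1.0])
x_hat, f_hat = simulated_annealing(x0)
print(f_hat)
```

The occasional uphill acceptances let the search escape local minima, which is why annealing suits multiparameter EM misfit landscapes.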
Frequency-domain elastic full waveform inversion using encoded simultaneous sources
Jeong, W.; Son, W.; Pyun, S.; Min, D.
2011-12-01
Numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational cost because of the number of sources in the survey. To avoid this problem, Romero (2000) proposed the phase-encoding technique for prestack migration, and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembling. Although several studies on simultaneous-source inversion have tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase-encoding methods. In our algorithm, the random phase-encoding method is employed to calculate the gradients of the elastic parameters, the source-signature estimate, and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and the diagonal entries of the approximate Hessian matrix is suppressed with iteration, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is built using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained with simultaneous sources are comparable to those obtained with individual sources, and the source signature is successfully estimated in the simultaneous-source technique. Comparing the inverted results using the pseudo Hessian matrix with previous inversion results
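The random phase-encoding idea can be sketched in a few lines: each source spectrum is multiplied by a random phase term and the sources are stacked into one simultaneous "super-shot", and averaging decoded estimates over independent encodings suppresses the crosstalk between sources. This toy uses random traces in place of modeled wavefields; all sizes are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(7)
n_sources, n_t = 8, 256
sources = rng.normal(size=(n_sources, n_t))  # stand-in source traces
spectra = np.fft.rfft(sources, axis=1)

def encoded_estimate(k_realizations):
    # Average over K random-phase encodings; crosstalk from the other
    # sources carries random phase and cancels as K grows (~1/sqrt(K)).
    acc = np.zeros(spectra.shape[1], dtype=complex)
    for _ in range(k_realizations):
        phases = np.exp(1j * rng.uniform(0, 2 * np.pi, size=(n_sources, 1)))
        blended = (spectra * phases).sum(axis=0)   # one simultaneous shot
        acc += np.conj(phases[0, 0]) * blended     # decode source 0
    return np.fft.irfft(acc / k_realizations, n=n_t)

# Recovery error of source 0 after 1 vs. 100 encoded realizations.
err = [np.linalg.norm(encoded_estimate(k) - sources[0]) for k in (1, 100)]
print(err[0], err[1])
```

In an inversion loop the redrawing of encodings happens across iterations, which is why the crosstalk in the gradients (and here, in the decoded trace) decays as iterations proceed.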
Efficient Stochastic Inversion Using Adjoint Models and Kernel-PCA
Energy Technology Data Exchange (ETDEWEB)
Thimmisetty, Charanraj A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Zhao, Wenju [Florida State Univ., Tallahassee, FL (United States). Dept. of Scientific Computing; Chen, Xiao [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Tong, Charles H. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; White, Joshua A. [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Atmospheric, Earth and Energy Division
2017-10-18
Performing stochastic inversion on a computationally expensive forward simulation model with a high-dimensional uncertain parameter space (e.g. a spatial random field) is computationally prohibitive even when gradient information can be computed efficiently. Moreover, the ‘nonlinear’ mapping from parameters to observables generally gives rise to non-Gaussian posteriors even with Gaussian priors, thus hampering the use of efficient inversion algorithms designed for models with Gaussian assumptions. In this paper, we propose a novel Bayesian stochastic inversion methodology, which is characterized by a tight coupling between the gradient-based Langevin Markov Chain Monte Carlo (LMCMC) method and a kernel principal component analysis (KPCA). This approach addresses the ‘curse-of-dimensionality’ via KPCA to identify a low-dimensional feature space within the high-dimensional and nonlinearly correlated parameter space. In addition, non-Gaussian posterior distributions are estimated via an efficient LMCMC method on the projected low-dimensional feature space. We will demonstrate this computational framework by integrating and adapting our recent data-driven statistics-on-manifolds constructions and reduction-through-projection techniques to a linear elasticity model.
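The KPCA stage, which maps the high-dimensional parameter field to a low-dimensional feature space, can be sketched with a NumPy-only RBF kernel PCA. This is a generic textbook construction, not the authors' implementation; the sample sizes, kernel width, and data are arbitrary stand-ins.

```python
import numpy as np

def kernel_pca(X, n_components=2, gamma=0.01):
    """Minimal RBF-kernel PCA: project samples onto the leading
    eigenvectors of the centered kernel matrix."""
    # Pairwise squared distances and RBF (Gaussian) kernel matrix.
    sq = np.sum(X ** 2, axis=1)
    K = np.exp(-gamma * (sq[:, None] + sq[None, :] - 2.0 * X @ X.T))
    # Center the kernel matrix in feature space.
    n = K.shape[0]
    one = np.full((n, n), 1.0 / n)
    Kc = K - one @ K - K @ one + one @ K @ one
    # Leading eigenpairs give the low-dimensional coordinates.
    vals, vecs = np.linalg.eigh(Kc)           # ascending order
    idx = np.argsort(vals)[::-1][:n_components]
    return vecs[:, idx] * np.sqrt(np.maximum(vals[idx], 0.0))

rng = np.random.default_rng(3)
# 50 samples of a 200-dimensional "parameter field" lying near a
# nonlinearly embedded 2-D manifold, mimicking a correlated prior.
Z = rng.normal(size=(50, 2))
X = np.tanh(Z @ rng.normal(size=(2, 200)))
Y = kernel_pca(X, n_components=2)
print(Y.shape)
```

In the paper's framework, the Langevin MCMC then samples the posterior over these few feature-space coordinates instead of the full field.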
Parametric optimization of inverse trapezoid oleophobic surfaces
DEFF Research Database (Denmark)
Cavalli, Andrea; Bøggild, Peter; Okkels, Fridolin
2012-01-01
In this paper, we introduce a comprehensive and versatile approach to the parametric shape optimization of oleophobic surfaces. We evaluate the performance of inverse trapezoid microstructures in terms of three objective parameters: apparent contact angle, maximum sustainable hydrostatic pressure...
An inverse method for radiation transport
Energy Technology Data Exchange (ETDEWEB)
Favorite, J. A. (Jeffrey A.); Sanchez, R. (Richard)
2004-01-01
Adjoint functions have been used with forward functions to compute gradients in implicit (iterative) solution methods for inverse problems in optical tomography, geoscience, thermal science, and other fields, but only once has this approach been used for inverse solutions to the Boltzmann transport equation. In this paper, this approach is used to develop an inverse method that requires only angle-independent flux measurements, rather than angle-dependent measurements as was done previously. The method is applied to a simplified form of the transport equation that does not include scattering. The resulting procedure uses measured values of gamma-ray fluxes of discrete, characteristic energies to determine interface locations in a multilayer shield. The method was implemented with a Newton-Raphson optimization algorithm, and it worked very well in numerical one-dimensional spherical test cases. A more sophisticated optimization method would better exploit the potential of the inverse method.
The Transmuted Generalized Inverse Weibull Distribution
Directory of Open Access Journals (Sweden)
Faton Merovci
2014-05-01
Full Text Available A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We use the quadratic rank transmutation map (QRTM) in order to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base distribution and introducing a new parameter that offers more distributional flexibility. Various structural properties, including explicit expressions for the moments, quantiles, and moment generating function of the new distribution, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version versus the generalized inverse Weibull distribution.
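The QRTM construction is easy to state in code: the transmuted CDF is G(x) = (1 + lam)*F(x) - lam*F(x)^2 with -1 <= lam <= 1. The sketch below assumes one common parameterization of the generalized inverse Weibull CDF, F(x) = exp(-gamma*(alpha/x)^beta); the parameter names and values are illustrative.

```python
import math

def giw_cdf(x, alpha, beta, gamma_):
    """Generalized inverse Weibull CDF in one common parameterization:
    F(x) = exp(-gamma * (alpha / x)**beta), for x > 0."""
    return math.exp(-gamma_ * (alpha / x) ** beta)

def transmuted_cdf(x, alpha, beta, gamma_, lam):
    """Quadratic rank transmutation map (QRTM):
    G(x) = (1 + lam) * F(x) - lam * F(x)**2, with -1 <= lam <= 1."""
    F = giw_cdf(x, alpha, beta, gamma_)
    return (1 + lam) * F - lam * F ** 2

# Sanity check: a valid CDF is monotone and tends to 0 and 1.
xs = [0.1 * k for k in range(1, 200)]
vals = [transmuted_cdf(x, alpha=1.0, beta=2.0, gamma_=1.0, lam=0.5)
        for x in xs]
print(all(b >= a for a, b in zip(vals, vals[1:])), vals[-1])
```

Monotonicity holds because dG/dF = 1 + lam - 2*lam*F stays nonnegative on [0, 1] for |lam| <= 1, which is exactly the QRTM admissibility condition.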
Parameterization analysis and inversion for orthorhombic media
Masmoudi, Nabil
2018-01-01
Accounting for azimuthal anisotropy is necessary for the processing and inversion of wide-azimuth and wide-aperture seismic data because wave speeds naturally depend on the wave propagation direction. Orthorhombic anisotropy is considered the most
Wave-equation reflection traveltime inversion
Zhang, Sanzong; Schuster, Gerard T.; Luo, Yi
2011-01-01
The main difficulty with iterative waveform inversion using a gradient optimization method is that it tends to get stuck in local minima associated within the waveform misfit function. This is because the waveform misfit function is highly nonlinear
Voxel inversion of airborne EM data
DEFF Research Database (Denmark)
Fiandaca, Gianluca G.; Auken, Esben; Christiansen, Anders Vest C A.V.C.
2013-01-01
We present a geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which allows for straightforward integration of different data types in joint inversion, for informing geological/hydrogeological models directly and for easier incorporation of prior information. Inversion of geophysical data usually refers to a model space being linked to the actual observation points. For airborne surveys the spatial discretization of the model space reflects the flight lines. Often airborne surveys are carried out in areas where other ground-based geophysical data are available. The model space of geophysical inversions is usually referred to the positions of the measurements, and ground-based model positions do not generally coincide with the airborne model positions. Consequently, a model space based on the measuring points is not well suited...
Deep controls on intraplate basin inversion
DEFF Research Database (Denmark)
Nielsen, S.B.; Stephenson, Randell Alexander; Schiffer, Christian
2014-01-01
Basin inversion is an intermediate-scale manifestation of continental intraplate deformation, which produces earthquake activity in the interior of continents. The sedimentary basins of central Europe, inverted in the Late Cretaceous–Paleocene, represent a classic example of this phenomenon. It is known that inversion of these basins occurred in two phases: an initial one of transpressional shortening involving reverse activation of former normal faults, and a subsequent one of uplift of the earlier developed inversion axis and a shift of sedimentary depocentres, and that this is a response to changes in the regional intraplate stress field. This European intraplate deformation is considered in the context of a new model of the present-day stress field of Europe (and the North Atlantic) caused by lithospheric potential-energy variations. Stresses causing basin inversion of Europe must have been...
Multiscattering inversion for low-model wavenumbers
Alkhalifah, Tariq Ali; Wu, Zedong
2016-01-01
A successful full-waveform inversion implementation updates the low-wavenumber model components first for a proper description of the wavefield propagation and slowly adds the high wavenumber potentially scattering parts of the model. The low
Full traveltime inversion in source domain
Liu, Lu; Guo, Bowen; Luo, Yi
2017-01-01
This paper presents a new method of source-domain full traveltime inversion (FTI). The objective of this study is automatically building near-surface velocity using the early arrivals of seismic data. This method can generate the inverted velocity
On two-spectra inverse problems
Guliyev, Namig J.
2018-01-01
We consider a two-spectra inverse problem for the one-dimensional Schrödinger equation with boundary conditions containing rational Herglotz-Nevanlinna functions of the eigenvalue parameter and provide a complete solution of this problem.
Hopping absorption edge in silicon inversion layers
International Nuclear Information System (INIS)
Kostadinov, I.Z.
1983-09-01
The low frequency gap observed in the absorption spectrum of silicon inversion layers is related to the AC variable range hopping. The frequency dependence of the absorption coefficient is calculated. (author)
Full traveltime inversion in source domain
Liu, Lu
2017-06-01
This paper presents a new method of source-domain full traveltime inversion (FTI). The objective of this study is to automatically build near-surface velocity using the early arrivals of seismic data. The method generates an inverted velocity that kinematically best matches the reconstructed plane-wave source of early arrivals with the true source in the source domain. It does not require picking first arrivals for tomography, which is one of the most challenging aspects of ray-based tomographic inversion. Besides, this method does not need to estimate the source wavelet, which is a necessity for receiver-domain wave-equation velocity inversion. Furthermore, we applied our method to one synthetic dataset; the results show our method could generate a reasonable background velocity even when shingling first arrivals exist and could provide a good initial velocity for conventional full waveform inversion (FWI).
The inverse square law of gravitation
International Nuclear Information System (INIS)
Cook, A.H.
1987-01-01
The inverse square law of gravitation is very well established over the distances of celestial mechanics, while in electrostatics the law has been shown to be followed to very high precision. However, it is only within the last century that any laboratory experiments have been made to test the inverse square law for gravitation, and all but one have been carried out in the last ten years. At the same time, there has been considerable interest in the possibility of deviations from the inverse square law, either because of a possible bearing on unified theories of forces, including gravitation, or, most recently, because of a possible additional fifth force of nature. In this article the various lines of evidence for the inverse square law are summarized, with emphasis upon the recent laboratory experiments. (author)
Neglected puerperal inversion of the uterus
African Journals Online (AJOL)
abp
2012-07-27
Jul 27, 2012 ... managed surgically with Haultain's operation and discharged after 5 ... Acute inversion generally occurs during or immediately after childbirth, while chronic ... is difficult unless the fundal depression can be palpated on rectal.
n-Colour self-inverse compositions
Indian Academy of Sciences (India)
inverse composition. This introduces four new sequences which satisfy the same recurrence relation with different initial conditions like the famous Fibonacci and Lucas sequences. For these new sequences explicit formulas, recurrence relations ...
Invariant-Based Inverse Engineering of Crane Control Parameters
González-Resines, S.; Guéry-Odelin, D.; Tobalina, A.; Lizuain, I.; Torrontegui, E.; Muga, J. G.
2017-11-01
By applying invariant-based inverse engineering in the small-oscillation regime, we design the time dependence of the control parameters of an overhead crane (trolley displacement and rope length) to transport a load between two positions at different heights with minimal final-energy excitation for a microcanonical ensemble of initial conditions. The analogy between ion transport in multisegmented traps or neutral-atom transport in moving optical lattices and load manipulation by cranes opens a route for a useful transfer of techniques among very different fields.
Refinement monoids, equidecomposability types, and boolean inverse semigroups
Wehrung, Friedrich
2017-01-01
Adopting a new universal algebraic approach, this book explores and consolidates the link between Tarski's classical theory of equidecomposability types monoids, abstract measure theory (in the spirit of Hans Dobbertin's work on monoid-valued measures on Boolean algebras) and the nonstable K-theory of rings. This is done via the study of a monoid invariant, defined on Boolean inverse semigroups, called the type monoid. The new techniques contrast with the currently available topological approaches. Many positive results, but also many counterexamples, are provided.
Inverse scattering solution of the Chew-Low equation
International Nuclear Information System (INIS)
Nakano, K.
1985-01-01
Techniques for solving the inverse scattering problem are applied to the Chew-Low equation to obtain the nucleon form factor directly from the experimental phase shifts. A new dispersion relation is derived for the P11 wave because of its sign-changing phase shift. A self-consistent solution for each channel is obtained, but the universality of the form factor is not confirmed. Also, an iterative procedure based on Omnes' method is developed in order to solve coupled-channel, singular integral equations. (orig.)
Physicochemical characterization of some solid materials by inverse gas chromatography
International Nuclear Information System (INIS)
Hamieh, T.; Abdessater, S.
2004-01-01
Full text: New equations and models on the two-dimensional state of solid surfaces were elaborated in previous studies. The results obtained were used in this paper for the determination and quantification of some physicochemical properties of solid surfaces and, especially, to study the acid-base superficial characteristics of solid substrates such as oxides and/or polymers adsorbed on oxides, carbon fibers, cements, etc. The technique used was inverse gas chromatography (IGC) at infinite dilution. The acid-base constants were calculated for many solid surfaces: Al2O3, SiO2, MgO, ZnO, some cements, textiles and carbon fibers.
From inverse problems to learning: a Statistical Mechanics approach
Baldassi, Carlo; Gerace, Federica; Saglietti, Luca; Zecchina, Riccardo
2018-01-01
We present a brief introduction to the statistical mechanics approaches for the study of inverse problems in data science. We then provide concrete new results on inferring couplings from sampled configurations in systems characterized by an extensive number of stable attractors in the low temperature regime. We also show how these results are connected to the problem of learning with realistic weak signals in computational neuroscience. Our techniques and algorithms rely on advanced mean-field methods developed in the context of disordered systems.
Time-reversed absorbing condition: application to inverse problems
International Nuclear Information System (INIS)
Assous, F; Kray, M; Nataf, F; Turkel, E
2011-01-01
The aim of this paper is to introduce time-reversed absorbing conditions in time-reversal methods. They enable one to 'recreate the past' without knowing the source which has emitted the signals that are back-propagated. We present two applications in inverse problems: the reduction of the size of the computational domain and the determination, from boundary measurements, of the location and volume of an unknown inclusion. The method does not rely on any a priori knowledge of the physical properties of the inclusion. Numerical tests with the wave and Helmholtz equations illustrate the efficiency of the method. This technique is fairly insensitive to noise in the data
Tietze, Kristina; Ritter, Oliver
2013-10-01
3-D inversion techniques have become a widely used tool in magnetotelluric (MT) data interpretation. However, with real data sets, many of the controlling factors for the outcome of 3-D inversion are little explored, such as alignment of the coordinate system, handling and influence of data errors and model regularization. Here we present 3-D inversion results of 169 MT sites from the central San Andreas Fault in California. Previous extensive 2-D inversion and 3-D forward modelling of the data set revealed significant along-strike variation of the electrical conductivity structure. 3-D inversion can recover these features but only if the inversion parameters are tuned in accordance with the particularities of the data set. Based on synthetic 3-D data we explore the model space and test the impacts of a wide range of inversion settings. The tests showed that the recovery of a pronounced regional 2-D structure in inversion of the complete impedance tensor depends on the coordinate system. As interdependencies between data components are not considered in standard 3-D MT inversion codes, 2-D subsurface structures can vanish if data are not aligned with the regional strike direction. A priori models and data weighting, that is, how strongly individual components of the impedance tensor and/or vertical magnetic field transfer functions dominate the solution, are crucial controls for the outcome of 3-D inversion. If deviations from a prior model are heavily penalized, regularization is prone to result in erroneous and misleading 3-D inversion models, particularly in the presence of strong conductivity contrasts. A 'good' overall rms misfit is often meaningless or misleading as a huge range of 3-D inversion results exist, all with similarly 'acceptable' misfits but producing significantly differing images of the conductivity structures. Reliable and meaningful 3-D inversion models can only be recovered if data misfit is assessed systematically in the frequency
Variability in surface inversion characteristics over India in winter ...
Indian Academy of Sciences (India)
inversion depth at most of the other stations show that shallow and moderate inversions occur more frequently than deep ..... processed and several checks were applied to ensure homogeneity ... simply inversions) is defined as the layer from ...
Population inversion in recombining hydrogen plasma
International Nuclear Information System (INIS)
Furukane, Utaro; Yokota, Toshiaki; Oda, Toshiatsu.
1978-11-01
The collisional-radiative model is applied to a recombining hydrogen plasma in order to investigate the plasma conditions in which population inversion between the energy levels of hydrogen can be generated. The population inversion is expected in a plasma where three-body recombination makes a large contribution to the recombining processes and the effective recombination rate is beyond a certain value for a given electron density and temperature. Calculated results are presented in figures and tables. (author)
Inverse kinematic control of LDUA and TWRMS
International Nuclear Information System (INIS)
Yih, T.C.; Burks, B.L.; Kwon, Dong-Soo
1995-01-01
A general inverse kinematic analysis is formulated particularly for the redundant Light Duty Utility Arm (LDUA) and Tank Waste Retrieval Manipulator System (TWRMS). The developed approach is applicable to the inverse kinematic simulation and control of LDUA, TWRMS, and other general robot manipulators. The 4 x 4 homogeneous Cylindrical coordinates-Bryant angles (C-B) notation is adopted to model LDUA, TWRMS, and any robot composed of R (revolute), P (prismatic), and/or S (spherical) joints.
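The abstract above formulates inverse kinematics in the C-B notation. As a minimal illustration of the general idea (not the authors' formulation, and not the LDUA geometry), here is a sketch of iterative inverse kinematics for a hypothetical two-link planar arm using the Jacobian-transpose update rule; all link lengths and parameter values are assumptions.

```python
import math

def fk(theta1, theta2, l1=1.0, l2=1.0):
    """Forward kinematics of a 2-link planar arm: joint angles -> end-effector (x, y)."""
    x = l1 * math.cos(theta1) + l2 * math.cos(theta1 + theta2)
    y = l1 * math.sin(theta1) + l2 * math.sin(theta1 + theta2)
    return x, y

def ik_jacobian_transpose(target, theta=(0.3, 0.3), alpha=0.1, iters=2000, l1=1.0, l2=1.0):
    """Iterative inverse kinematics: repeatedly apply
    delta_theta = alpha * J^T * (target - current position)."""
    t1, t2 = theta
    for _ in range(iters):
        x, y = fk(t1, t2, l1, l2)
        ex, ey = target[0] - x, target[1] - y
        # Analytic Jacobian of the planar arm.
        j11 = -l1 * math.sin(t1) - l2 * math.sin(t1 + t2)
        j12 = -l2 * math.sin(t1 + t2)
        j21 = l1 * math.cos(t1) + l2 * math.cos(t1 + t2)
        j22 = l2 * math.cos(t1 + t2)
        # Gradient-descent step on the position error: theta += alpha * J^T e.
        t1 += alpha * (j11 * ex + j21 * ey)
        t2 += alpha * (j12 * ex + j22 * ey)
    return t1, t2
```

For a redundant manipulator like the LDUA, the Jacobian is non-square and a damped pseudoinverse with null-space optimization would replace the transpose step.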
Approximation of Bayesian Inverse Problems for PDEs
Cotter, S. L.; Dashti, M.; Stuart, A. M.
2010-01-01
Inverse problems are often ill posed, with solutions that depend sensitively on data. In any numerical approach to the solution of such problems, regularization of some form is needed to counteract the resulting instability. This paper is based on an approach to regularization, employing a Bayesian formulation of the problem, which leads to a notion of well posedness for inverse problems, at the level of probability measures. The stability which results from this well posedness may be used as t...
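The instability that regularization counteracts can be seen in a tiny example. The sketch below is a generic Tikhonov (ridge) regularization of a near-singular 2x2 least-squares problem, not the measure-theoretic Bayesian formulation of the paper; the matrix and the value of the regularization weight are illustrative assumptions. (Tikhonov regularization is the maximum-a-posteriori estimate under a Gaussian prior, which is the finite-dimensional shadow of the Bayesian approach described above.)

```python
def solve2(a, b, c, d, e, f):
    """Solve the 2x2 system [[a, b], [c, d]] x = [e, f] by Cramer's rule."""
    det = a * d - b * c
    return ((e * d - b * f) / det, (a * f - c * e) / det)

def tikhonov(A, y, lam):
    """Regularized least squares: minimize ||A x - y||^2 + lam * ||x||^2
    via the normal equations (A^T A + lam I) x = A^T y, for a 2x2 A."""
    (a, b), (c, d) = A
    # A^T A + lam I (symmetric, so only three entries needed).
    m11 = a * a + c * c + lam
    m12 = a * b + c * d
    m22 = b * b + d * d + lam
    # A^T y
    r1 = a * y[0] + c * y[1]
    r2 = b * y[0] + d * y[1]
    return solve2(m11, m12, m12, m22, r1, r2)

# For A = [[1, 1], [1, 1.0001]], the unregularized solutions for
# y = [2, 2] and y = [2, 2.0001] are (2, 0) and (1, 1): a huge jump
# from a 1e-4 data perturbation. The regularized solutions barely move.
```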
An Inversion Recovery NMR Kinetics Experiment
Williams, Travis J.; Kershaw, Allan D.; Li, Vincent; Wu, Xinping
2011-01-01
A convenient laboratory experiment is described in which NMR magnetization transfer by inversion recovery is used to measure the kinetics and thermochemistry of amide bond rotation. The experiment utilizes Varian spectrometers with the VNMRJ 2.3 software, but can be easily adapted to any NMR platform. The procedures and sample data sets in this article will enable instructors to use inversion recovery as a laboratory activity in applied NMR classes and provide research students with a conveni...
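The inversion-recovery signal follows M(t) = M0(1 - 2 exp(-t/T1)). As a hedged sketch of the data analysis side of such an experiment (not the Varian/VNMRJ workflow described above), the following recovers T1 from synthetic recovery data by linearizing the signal equation; all numerical values are illustrative.

```python
import math

def ir_signal(t, m0, t1):
    """Inversion-recovery signal: M(t) = M0 * (1 - 2*exp(-t/T1))."""
    return m0 * (1.0 - 2.0 * math.exp(-t / t1))

def estimate_t1(times, signals, m0):
    """Estimate T1 by linearizing: ln((M0 - M)/(2*M0)) = -t/T1,
    then fitting the least-squares slope through the origin."""
    num = den = 0.0
    for t, m in zip(times, signals):
        y = math.log((m0 - m) / (2.0 * m0))  # equals -t/T1 for noise-free data
        num += t * y
        den += t * t
    slope = num / den  # slope = -1/T1
    return -1.0 / slope
```

A quick sanity check is the null point: the signal crosses zero at t = T1 ln 2, which is also the basis of null-point T1 estimation.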
Inverse semigroups the theory of partial symmetries
Lawson, Mark V
1998-01-01
Symmetry is one of the most important organising principles in the natural sciences. The mathematical theory of symmetry has long been associated with group theory, but it is a basic premise of this book that there are aspects of symmetry which are more faithfully represented by a generalization of groups called inverse semigroups. The theory of inverse semigroups is described from its origins in the foundations of differential geometry through to its most recent applications in combinatorial group theory and the theory of tilings.
Runtime and Inversion Impacts on Estimation of Moisture Retention Relations by Centrifuge
Sigda, J. M.; Wilson, J. L.
2003-12-01
Standard laboratory methods in soil physics for measuring the moisture retention relation (drainage matric potential-volumetric moisture content relation) are each limited to only part of the moisture content range. Centrifuge systems allow intensive accurate measurements across much of the saturation range, and typically require much less time than traditional laboratory methods. An initially liquid-saturated sample is subjected to a stepwise-increasing series of angular velocities while carefully monitoring changes in liquid content. Angular velocity is held constant until the capillary and centrifugal forces equilibrate, forcing liquid flux to zero, and then a final average liquid content is noted. The procedure is repeated after increasing the angular velocity. Centrifuge measurement time is greatly reduced because the centrifugal body force gradient can far exceed the driving forces utilized in standard lab methods. Widely used in the petroleum industry for decades, centrifuge measurement of moisture retention relations is seldom encountered in the soil physics or vadose hydrology literatures. Yet there is a need to better understand and improve the experimental methodology given the increasing number of centrifuges employed in these fields. Errors in centrifuge measurement of moisture retention relations originate from both experimental protocol and from data inversion. Like standard methods, centrifuge methods assume equilibrium conditions, and so are sensitive to errors introduced by insufficient runtimes. Unlike standard methods, centrifuge experiments require inversion of the angular velocity and average sample moisture content data to a location-specific pair of matric potential and moisture content values. The force balance causes matric potential and moisture content to vary with sample length while the sample is spinning. Numerous data inversion techniques exist, each yielding different moisture retention relations. We present analyses demonstrating
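The force balance mentioned above can be made concrete. At hydrostatic equilibrium in a spinning sample with a free-liquid boundary at the outer radius, the matric potential at radius r is psi(r) = -rho * omega^2 * (r_out^2 - r^2) / 2. The sketch below is a minimal illustration of this standard relation, not any of the competing inversion techniques the abstract compares; geometry and rotation speed are assumed values.

```python
import math

def matric_potential(omega, r, r_out, rho=1000.0):
    """Matric potential (Pa, <= 0) at radius r in a spinning sample,
    assuming hydrostatic equilibrium with a free-liquid (zero-potential)
    boundary at r_out: psi(r) = -rho * omega^2 * (r_out^2 - r^2) / 2.

    omega: angular velocity in rad/s; r, r_out in metres; rho in kg/m^3.
    """
    return -rho * omega**2 * (r_out**2 - r**2) / 2.0

# Example: 1000 rpm, outflow boundary at 8 cm from the rotation axis.
omega = 2.0 * math.pi * 1000.0 / 60.0
```

The inversion problem arises because each measured average moisture content integrates over this radially varying potential; different assumptions about the profile yield different retention relations.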
Inverse scattering problems with multi-frequencies
International Nuclear Information System (INIS)
Bao, Gang; Li, Peijun; Lin, Junshan; Triki, Faouzi
2015-01-01
This paper is concerned with computational approaches and mathematical analysis for solving inverse scattering problems in the frequency domain. The problems arise in a diverse set of scientific areas with significant industrial, medical, and military applications. In addition to nonlinearity, there are two common difficulties associated with the inverse problems: ill-posedness and limited resolution (diffraction limit). Due to the diffraction limit, for a given frequency, only a low spatial frequency part of the desired parameter can be observed from measurements in the far field. The main idea developed here is that if the reconstruction is restricted to only the observable part, then the inversion will become stable. The challenging task is how to design stable numerical methods for solving these inverse scattering problems inspired by the diffraction limit. Recently, novel recursive linearization based algorithms have been presented in an attempt to answer the above question. These methods require multi-frequency scattering data and proceed via a continuation procedure with respect to the frequency from low to high. The objective of this paper is to give a brief review of these methods, their error estimates, and the related mathematical analysis. More attention is paid to the inverse medium and inverse source problems. Numerical experiments are included to illustrate the effectiveness of these methods. (topical review)
Three-dimensional induced polarization data inversion for complex resistivity
Energy Technology Data Exchange (ETDEWEB)
Commer, M.; Newman, G.A.; Williams, K.H.; Hubbard, S.S.
2011-03-15
The conductive and capacitive material properties of the subsurface can be quantified through the frequency-dependent complex resistivity. However, the routine three-dimensional (3D) interpretation of voluminous induced polarization (IP) data sets still poses a challenge due to large computational demands and solution nonuniqueness. We have developed a flexible methodology for 3D (spectral) IP data inversion. Our inversion algorithm is adapted from a frequency-domain electromagnetic (EM) inversion method primarily developed for large-scale hydrocarbon and geothermal energy exploration purposes. The method has proven to be efficient by implementing the nonlinear conjugate gradient method with hierarchical parallelism and by using an optimal finite-difference forward modeling mesh design scheme. The method allows for a large range of survey scales, providing a tool for both exploration and environmental applications. We experimented with an image focusing technique to improve the poor depth resolution of surface data sets with small survey spreads. The algorithm's underlying forward modeling operator properly accounts for EM coupling effects; thus, traditionally used EM coupling correction procedures are not needed. The methodology was applied to both synthetic and field data. We tested the benefit of directly inverting EM coupling contaminated data using a synthetic large-scale exploration data set. Afterward, we further tested the monitoring capability of our method by inverting time-lapse data from an environmental remediation experiment near Rifle, Colorado. Similar trends observed in both our solution and another 2D inversion were in accordance with previous findings about the IP effects due to subsurface microbial activity.
On the feasibility of inversion methods based on models of urban sky glow
International Nuclear Information System (INIS)
Kolláth, Z.; Kránicz, B.
2014-01-01
Multi-wavelength imaging luminance photometry of sky glow provides a huge amount of information on light pollution. However, the understanding of the measured data involves the combination of different processes and data of radiation transfer, atmospheric physics and atmospheric constitution. State-of-the-art numerical radiation transfer models provide the possibility to define an inverse problem to obtain information on the emission intensity distribution of a city and perhaps the physical properties of the atmosphere. We provide numerical tests on the solvability and feasibility of such procedures. - Highlights: • A method of urban sky glow inversion is introduced based on Monte-Carlo calculations. • Imaging photometry can provide enough information for basic inversions. • The inversion technique can be used to construct maps of light pollution. • The inclusion of multiple scattering in the models plays an important role
Simon, Martin
2015-01-01
This monograph is concerned with the analysis and numerical solution of a stochastic inverse anomaly detection problem in electrical impedance tomography (EIT). Martin Simon studies the problem of detecting a parameterized anomaly in an isotropic, stationary and ergodic conductivity random field whose realizations are rapidly oscillating. For this purpose, he derives Feynman-Kac formulae to rigorously justify stochastic homogenization in the case of the underlying stochastic boundary value problem. The author combines techniques from the theory of partial differential equations and functional analysis with probabilistic ideas, paving the way to new mathematical theorems which may be fruitfully used in the treatment of the problem at hand. Moreover, the author proposes an efficient numerical method in the framework of Bayesian inversion for the practical solution of the stochastic inverse anomaly detection problem. Contents Feynman-Kac formulae Stochastic homogenization Statistical inverse problems Targe...
An application of sparse inversion on the calculation of the inverse data space of geophysical data
Saragiotis, Christos
2011-07-01
Multiple reflections as observed in seismic reflection measurements often hide arrivals from the deeper target reflectors and need to be removed. The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function and by constraining the ℓ1 norm of the solution, i.e. the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal. © 2011 IEEE.
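A standard way to minimize a least-squares misfit under an ℓ1 constraint is iterative soft thresholding (ISTA). The sketch below illustrates that generic technique on a toy dense system; it is not the authors' inverse-data-space algorithm, and the matrix, weights, and step size are illustrative assumptions.

```python
import math

def soft_threshold(v, t):
    """Soft-thresholding: the proximal operator of the l1 norm."""
    return [math.copysign(max(abs(x) - t, 0.0), x) for x in v]

def ista(A, y, lam, step, iters=500):
    """Iterative Shrinkage-Thresholding Algorithm for
    min_x ||A x - y||^2 + lam * ||x||_1.
    A is a list of rows; step must be below 1 / L, where L is twice the
    largest eigenvalue of A^T A, for the iteration to converge."""
    m, n = len(A), len(A[0])
    x = [0.0] * n
    for _ in range(iters):
        # Residual r = A x - y.
        r = [sum(A[i][j] * x[j] for j in range(n)) - y[i] for i in range(m)]
        # Gradient of the quadratic term: 2 A^T r.
        g = [2.0 * sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        # Gradient step followed by shrinkage.
        x = soft_threshold([x[j] - step * g[j] for j in range(n)], step * lam)
    return x
```

The shrinkage step is what drives small components (here, the multiple-free parts of the inverse data space) exactly to zero, producing a sparse solution.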
Chromatid Painting for Chromosomal Inversion Detection, Phase I
National Aeronautics and Space Administration — We propose a novel approach to the detection of chromosomal inversions. Transmissible chromosome aberrations (translocations and inversions) have profound genetic...
Experimental techniques
Energy Technology Data Exchange (ETDEWEB)
Roussel-Chomaz, P. [GANIL CNRS/IN2P3, CEA/DSM, 14 - Caen (France)
2007-07-01
This lecture presents the experimental techniques developed over the last 10 or 15 years to perform a new class of experiments with exotic nuclei, in which the reactions induced by these nuclei provide information on their structure. A brief review of secondary-beam production methods is given, with some examples of facilities in operation or under construction. The important recent developments in cryogenic targets are presented. The different detection systems are reviewed: both the beam detectors placed before the target, and the many kinds of detectors needed to detect all outgoing particles after the reaction: magnetic spectrometers for the heavy fragments, detection systems for the recoiling target nucleus, and gamma detectors. Finally, several typical examples of experiments are detailed, in order to illustrate the use of each detector either alone or in coincidence with others. (author)
Probabilistic Magnetotelluric Inversion with Adaptive Regularisation Using the No-U-Turns Sampler
Conway, Dennis; Simpson, Janelle; Didana, Yohannes; Rugari, Joseph; Heinson, Graham
2018-04-01
We present the first inversion of magnetotelluric (MT) data using a Hamiltonian Monte Carlo algorithm. The inversion of MT data is an underdetermined problem which leads to an ensemble of feasible models for a given dataset. A standard approach in MT inversion is to perform a deterministic search for the single solution which is maximally smooth for a given data-fit threshold. An alternative approach is to use Markov Chain Monte Carlo (MCMC) methods, which have been used in MT inversion to explore the entire solution space and produce a suite of likely models. This approach has the advantage of assigning confidence to resistivity models, leading to better geological interpretations. Recent advances in MCMC techniques include the No-U-Turns Sampler (NUTS), an efficient and rapidly converging method which is based on Hamiltonian Monte Carlo. We have implemented a 1D MT inversion which uses the NUTS algorithm. Our model includes a fixed number of layers of variable thickness and resistivity, as well as probabilistic smoothing constraints which allow sharp and smooth transitions. We present the results of a synthetic study and show the accuracy of the technique, as well as the fast convergence, independence of starting models, and sampling efficiency. Finally, we test our technique on MT data collected from a site in Boulia, Queensland, Australia to show its utility in geological interpretation and ability to provide probabilistic estimates of features such as depth to basement.
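NUTS itself requires gradients of the posterior and a Hamiltonian integrator; as a much simpler illustration of the MCMC idea the abstract builds on, here is a random-walk Metropolis sampler for a one-parameter problem. This is a stand-in, not the authors' NUTS implementation, and the target density and tuning values are assumptions.

```python
import math
import random

def metropolis(log_post, x0, prop_sd=0.5, n=5000, seed=1):
    """Random-walk Metropolis sampling: propose a Gaussian perturbation,
    accept with probability min(1, posterior ratio). Returns the chain."""
    rng = random.Random(seed)
    x, lp = x0, log_post(x0)
    samples = []
    for _ in range(n):
        xp = x + rng.gauss(0.0, prop_sd)
        lpp = log_post(xp)
        # Accept/reject in log space for numerical stability.
        if math.log(rng.random()) < lpp - lp:
            x, lp = xp, lpp
        samples.append(x)
    return samples
```

The ensemble of accepted models plays the role of the "suite of likely models" described above; NUTS reaches the same target distribution far more efficiently by following Hamiltonian trajectories instead of blind random steps.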
Using machine learning to accelerate sampling-based inversion
Valentine, A. P.; Sambridge, M.
2017-12-01
In most cases, a complete solution to a geophysical inverse problem (including robust understanding of the uncertainties associated with the result) requires a sampling-based approach. However, the computational burden is high, and proves intractable for many problems of interest. There is therefore considerable value in developing techniques that can accelerate sampling procedures. The main computational cost lies in evaluation of the forward operator (e.g. calculation of synthetic seismograms) for each candidate model. Modern machine learning techniques, such as Gaussian Processes, offer a route for constructing a computationally cheap approximation to this calculation, which can replace the accurate solution during sampling. Importantly, the accuracy of the approximation can be refined as inversion proceeds, to ensure high-quality results. In this presentation, we describe and demonstrate this approach, which can be seen as an extension of popular current methods such as the Neighbourhood Algorithm, and which bridges the gap between prior- and posterior-sampling frameworks.
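The surrogate idea can be sketched in a few lines. Below, a crude nearest-neighbour cache stands in for the Gaussian-process approximation described above: when a candidate model is close enough to a previously solved one, the stored result is reused; otherwise the exact forward solver runs and the surrogate is refined with the new evaluation. The function names and the tolerance are assumptions for illustration.

```python
def make_surrogate(forward, tol):
    """Wrap an expensive forward operator with a refinable surrogate.
    A crude nearest-neighbour cache stands in for a Gaussian process:
    reuse the nearest exact evaluation when it lies within tol,
    otherwise run the exact solver and remember the result."""
    cache = []                 # list of (model, data) pairs already solved exactly
    calls = {"exact": 0}       # count of expensive evaluations

    def approx(m):
        if cache:
            mc, dc = min(cache, key=lambda pair: abs(pair[0] - m))
            if abs(mc - m) < tol:
                return dc      # cheap: reuse a nearby exact solution
        d = forward(m)         # expensive: exact forward modelling
        calls["exact"] += 1
        cache.append((m, d))   # refine the surrogate as sampling proceeds
        return d

    return approx, calls
```

A real implementation would use the Gaussian process's predictive variance, rather than a fixed distance tolerance, to decide when the approximation is trustworthy.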
NON-INVASIVE INVERSE PROBLEM IN CIVIL ENGINEERING
Directory of Open Access Journals (Sweden)
Jan Havelka
2017-11-01
In this contribution we focus on recovering the spatial distribution of material parameters using only non-invasive boundary measurements. Such methods have gained importance as imaging techniques in medicine, geophysics and archaeology. We apply similar principles to non-stationary heat transfer in civil engineering. In contrast to standard techniques, which rely on external loading devices, we assume that the natural fluctuation of temperature throughout the day and night can provide sufficient information to recover the underlying material parameters. The inverse problem is solved by a modified regularised Gauss-Newton iterative scheme, and the underlying forward problem is solved with a finite element space-time discretisation. We show a successful reconstruction of material parameters on a synthetic example with real measurements. The virtual experiment also reveals insensitivity to the practical precision of sensor measurements.
Multiscattering inversion for low-model wavenumbers
Alkhalifah, Tariq Ali
2016-09-21
A successful full-waveform inversion implementation updates the low-wavenumber model components first for a proper description of the wavefield propagation and slowly adds the high-wavenumber, potentially scattering parts of the model. The low-wavenumber components can be extracted from the transmission parts of the recorded wavefield emanating directly from the source, or from the transmission parts of the single- or double-scattered wavefield computed from a predicted scatter field acting as secondary sources. We use a combined inversion of data modeled from the source and those corresponding to single and double scattering to update the velocity model and the component of the velocity (perturbation) responsible for the single and double scattering. The combined inversion helps us access most of the potential model wavenumber information that may be embedded in the data. A scattering-angle filter is used to divide the gradient of the combined inversion, so initially the high-wavenumber (low-scattering-angle) components of the gradient are directed to the perturbation model and the low-wavenumber (high-scattering-angle) components are directed to the velocity model. As our background velocity matures, the scattering-angle divide is slowly lowered to allow more of the higher wavenumbers to contribute to the velocity model. Synthetic examples including the Marmousi model are used to demonstrate the additional illumination and improved velocity inversion obtained when including multiscattered energy. © 2016 Society of Exploration Geophysicists.
QCD-instantons and conformal inversion symmetry
International Nuclear Information System (INIS)
Klammer, D.
2006-07-01
Instantons are an essential and non-perturbative part of Quantum Chromodynamics, the theory of strong interactions. One of the most relevant quantities in the instanton calculus is the instanton-size distribution, which can be described on the one hand within the framework of instanton perturbation theory and on the other hand investigated numerically by means of lattice computations. A rapid onset of a drastic discrepancy between these respective results indicates that the underlying physics is not yet well understood. In this work we investigate the appealing possibility of a symmetry under conformal inversion of space-time leading to this deviation. The motivation being that the lattice data seem to be invariant under an inversion of the instanton size. Since the instanton solution of a given size turns into an anti-instanton solution having an inverted size under conformal inversion of space-time, we ask in a first investigation, whether this property is transferred to the quantum level. In order to introduce a new scale, which is indicated by the lattice data and corresponds to the average instanton size as inversion radius, we project the instanton calculus onto the four-dimensional surface of a five-dimensional sphere via stereographic projection. The radius of this sphere is associated with the average instanton size. The result for the instanton size-distribution projected onto the sphere agrees surprisingly well with the lattice data at qualitative level. The resulting symmetry under an inversion of the instanton size is almost perfect. (orig.)
Varying prior information in Bayesian inversion
International Nuclear Information System (INIS)
Walker, Matthew; Curtis, Andrew
2014-01-01
Bayes' rule is used to combine likelihood and prior probability distributions. The former represents knowledge derived from new data, the latter represents pre-existing knowledge; the Bayesian combination is the so-called posterior distribution, representing the resultant new state of knowledge. While varying the likelihood due to differing data observations is common, there are also situations where the prior distribution must be changed or replaced repeatedly. For example, in mixture density neural network (MDN) inversion, using current methods the neural network employed for inversion needs to be retrained every time prior information changes. We develop a method of prior replacement to vary the prior without re-training the network. Thus the efficiency of MDN inversions can be increased, typically by orders of magnitude when applied to geophysical problems. We demonstrate this for the inversion of seismic attributes in a synthetic subsurface geological reservoir model. We also present results which suggest that prior replacement can be used to control the statistical properties (such as variance) of the final estimate of the posterior in more general (e.g., Monte Carlo based) inverse problem solutions. (paper)
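On a discrete model grid, the prior-replacement idea reduces to reweighting an existing posterior by the ratio of the new prior to the old one, since the likelihood factor is unchanged. The sketch below illustrates that identity; it is a toy discrete analogue, not the authors' MDN machinery, and all probability values are made up for illustration.

```python
def normalize(p):
    """Scale a list of non-negative weights so they sum to 1."""
    s = sum(p)
    return [v / s for v in p]

def posterior(prior, likelihood):
    """Bayes' rule on a discrete grid: posterior ∝ likelihood * prior."""
    return normalize([l * p for l, p in zip(likelihood, prior)])

def replace_prior(post_old, prior_old, prior_new):
    """Prior replacement: reweight an existing posterior by
    prior_new / prior_old, avoiding a fresh inversion. Valid because
    post_old ∝ likelihood * prior_old, so multiplying by the prior ratio
    recovers likelihood * prior_new up to normalization."""
    return normalize([q * pn / po
                      for q, pn, po in zip(post_old, prior_new, prior_old)])
```

The efficiency gain reported in the abstract comes from exactly this structure: the expensive likelihood evaluation (the trained network) is computed once, and only the cheap prior ratio changes.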
Unwrapped phase inversion with an exponential damping
Choi, Yun Seok
2015-07-28
Full-waveform inversion (FWI) suffers from the phase wrapping (cycle skipping) problem when the frequency of the data is not low enough. Unless we obtain a good initial velocity model, the phase wrapping problem in FWI causes the result to correspond to a local minimum, usually far away from the true solution, especially at depth. Thus, we have developed an inversion algorithm based on a space-domain unwrapped phase, and we also use exponential damping to mitigate the nonlinearity associated with the reflections. We construct the 2D phase residual map, which usually contains wrapping discontinuities, especially if the model is complex and the frequency is high. We then unwrap the phase map and remove these cycle-based jumps. However, if the phase map has several residues, the unwrapping process becomes very complicated. We apply a strong exponential damping to the wavefield to eliminate most of the residues in the phase map, thus making the unwrapping process simple. We finally invert the unwrapped phases using the back-propagation algorithm to calculate the gradient. We progressively reduce the damping factor to obtain a high-resolution image. Numerical examples showed that unwrapped phase inversion with strong exponential damping generates convergent long-wavelength updates without low-frequency information. The resulting model can be used as a good starting model for a subsequent inversion with reduced damping, eventually leading to conventional waveform inversion.
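The basic unwrapping operation the abstract relies on is easy to state in 1D: whenever two consecutive phase samples jump by more than pi, a multiple of 2*pi is added or subtracted to remove the discontinuity. The sketch below shows that 1D operation only; the 2D residual-map unwrapping in the paper is substantially harder (residues make the result path-dependent), which is exactly why the authors damp the wavefield first.

```python
import math

def unwrap(phases):
    """1D phase unwrapping: remove 2*pi jumps between consecutive samples
    by folding each increment into (-pi, pi] before accumulating."""
    out = [phases[0]]
    for p in phases[1:]:
        d = p - out[-1]
        # Subtract the nearest integer multiple of 2*pi from the increment.
        d -= 2.0 * math.pi * round(d / (2.0 * math.pi))
        out.append(out[-1] + d)
    return out
```

This recovers the true phase exactly as long as the underlying phase changes by less than pi per sample, the 1D analogue of the residue-free condition the damping is designed to enforce.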
The inverse problems of reconstruction in the X-rays, gamma or positron tomographic imaging systems
International Nuclear Information System (INIS)
Grangeat, P.
1999-01-01
The revolution in imaging brought about by tomographic techniques in the 1970s allows the computation of maps of local values of attenuation or emission activity. Reconstruction techniques thus connect integral measurements to the distribution of characteristic information by inversion of the measurement equations. They are a major application of solution techniques for inverse problems. In the first part, the author recalls the physical principles of measurement in X-ray, gamma and positron imaging. He then presents the various problems with their associated inversion techniques. The third part is devoted to the field of application and examples, and the last part concludes with an outlook. (A.L.B.)
Tanabe, Koji; Nishikawa, Keiichi; Sano, Tsukasa; Sakai, Osamu; Jara, Hernán
2010-05-01
To test a newly developed fat-suppression magnetic resonance imaging (MRI) prepulse that synergistically uses the principles of fat suppression via inversion recovery (STIR) and spectral fat saturation (CHESS), relative to pure CHESS and STIR; this new technique is termed dual fat suppression (Dual-FS). To determine whether Dual-FS could be chemically specific for fat, a phantom consisting of a fat-mimicking NiCl2 aqueous solution, porcine fat, porcine muscle, and water was imaged with the three fat-suppression techniques. For Dual-FS and STIR, several inversion times were used. Signal intensities of each image obtained with each technique were compared. To determine whether Dual-FS could be robust to magnetic field inhomogeneities, a phantom consisting of different NiCl2 aqueous solutions, porcine fat, porcine muscle, and water was imaged with Dual-FS and CHESS at several off-resonance frequencies. To compare fat-suppression efficiency in vivo, 10 volunteer subjects were also imaged with the three fat-suppression techniques. Dual-FS could suppress fat sufficiently within an inversion time of 110-140 msec, thus enabling differentiation between fat and fat-mimicking aqueous structures. Dual-FS was as robust to magnetic field inhomogeneities as STIR and less vulnerable than CHESS. The same fat-suppression results were obtained in the volunteers. Dual-FS-STIR-CHESS is a promising alternative fat-suppression technique for turbo spin echo MRI. Copyright 2010 Wiley-Liss, Inc.
Ramig, Keith; Subramaniam, Gopal; Karimi, Sasan; Szalda, David J; Ko, Allen; Lam, Aaron; Li, Jeffrey; Coaderaj, Ani; Cavdar, Leyla; Bogdan, Lukasz; Kwon, Kitae; Greer, Edyta M
2016-04-15
A series of 2,4-disubstituted 1H-1-benzazepines, 2a-d, 4, and 6, were studied, varying both the substituents at C2 and C4 and at the nitrogen atom. The conformational inversion (ring-flip) and nitrogen-atom inversion (N-inversion) energetics were studied by variable-temperature NMR spectroscopy and computations. The steric bulk of the nitrogen-atom substituent was found to affect both the conformation of the azepine ring and the geometry around the nitrogen atom. Also affected were the Gibbs free energy barriers for the ring-flip and the N-inversion. When the nitrogen-atom substituent was alkyl, as in 2a-c, the geometry of the nitrogen atom was nearly planar and the azepine ring was highly puckered; the result was a relatively high-energy barrier to ring-flip and a low barrier to N-inversion. Conversely, when the nitrogen-atom substituent was a hydrogen atom, as in 2d, 4, and 6, the nitrogen atom was significantly pyramidalized and the azepine ring was less puckered; the result here was a relatively high energy barrier to N-inversion and a low barrier to ring-flip. In these N-unsubstituted compounds, it was found computationally that the lowest-energy stereodynamic process was ring-flip coupled with N-inversion, as N-inversion alone had a much higher energy barrier.