WorldWideScience

Sample records for constrained matrix inversion

  1. Sharp spatially constrained inversion

    DEFF Research Database (Denmark)

    Vignoli, Giulio; Fiandaca, Gianluca; Christiansen, Anders Vest

    2013-01-01

    We present sharp reconstruction of multi-layer models using a spatially constrained inversion with minimum gradient support regularization. In particular, its application to airborne electromagnetic data is discussed. Airborne surveys produce extremely large datasets, traditionally inverted by using smoothly varying 1D models. Smoothness is a result of the regularization constraints applied to address the inversion ill-posedness. The standard Occam-type regularized multi-layer inversion produces results where boundaries between layers are smeared. The sharp regularization overcomes ... inversions are compared against classical smooth results and available boreholes. With the focusing approach, the obtained blocky results agree with the underlying geology and allow for easier interpretation by the end-user.

  2. Laterally constrained inversion for CSAMT data interpretation

    Science.gov (United States)

    Wang, Ruo; Yin, Changchun; Wang, Miaoyue; Di, Qingyun

    2015-10-01

    Laterally constrained inversion (LCI) has been successfully applied to the inversion of DC resistivity, TEM and airborne EM data; however, it has not yet been applied to the interpretation of controlled-source audio-frequency magnetotelluric (CSAMT) data. In this paper, we apply the LCI method to CSAMT data inversion by preconditioning the Jacobian matrix: a weighting matrix is applied to the Jacobian to balance the sensitivity of the model parameters, so that the resolution with respect to different model parameters becomes more uniform. Numerical experiments confirm that this improves the convergence of the inversion. We first invert a synthetic dataset with and without noise to investigate the effect of applying LCI to CSAMT data. For the noise-free data, the results show that the LCI method recovers the true model better than the traditional single-station inversion; for the noisy data, the true model is recovered even with a noise level of 8%, indicating that LCI inversions are to some extent insensitive to noise. We then re-invert two CSAMT datasets collected, respectively, in a watershed and a coal mine area in Northern China and compare our results with those from previous inversions. The comparison in the coal mine shows that the LCI method delivers smoother layer interfaces that correlate well with seismic data, while the comparison with a global-search simulated annealing (SA) algorithm in the watershed shows that, although both methods deliver very similar results, the LCI algorithm presented in this paper runs much faster. The inversion results for the coal mine CSAMT survey also reveal a conductive water-bearing zone that was not identified by the previous inversions. This further demonstrates that the method presented in this paper works for CSAMT data inversion.
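
    The key numerical step described above is balancing the Jacobian columns with a weighting matrix before solving the normal equations. A minimal NumPy sketch of that idea follows; it illustrates column-norm balancing in a damped least-squares update and is not the authors' LCI code, and the damping value is an assumption.

      import numpy as np

      def column_balanced_step(J, residual, damping=1e-2):
          """One damped least-squares model update with a column-balanced Jacobian.

          J        : (n_data, n_param) Jacobian of the forward response
          residual : (n_data,) observed-minus-predicted data
          Returns the model update expressed in the original (unscaled) parameters.
          """
          # Weighting matrix: inverse column norms balance parameter sensitivities
          col_norms = np.linalg.norm(J, axis=0)
          col_norms[col_norms == 0.0] = 1.0
          W = np.diag(1.0 / col_norms)

          Jw = J @ W                           # balanced Jacobian
          A = Jw.T @ Jw + damping * np.eye(Jw.shape[1])
          dm_scaled = np.linalg.solve(A, Jw.T @ residual)
          return W @ dm_scaled                 # map back to physical parameters

      # Toy example: three parameters with wildly different sensitivities
      rng = np.random.default_rng(0)
      J = rng.normal(size=(50, 3)) * np.array([1.0, 1e-3, 1e3])
      residual = rng.normal(size=50)
      print(column_balanced_step(J, residual))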

  3. Optimization of radiotherapy to target volumes with concave outlines: target-dose homogenization and selective sparing of critical structures by constrained matrix inversion

    Energy Technology Data Exchange (ETDEWEB)

    Colle, C; Van den Berge, D; De Wagter, C; Fortan, L; Van Duyse, B; De Neve, W

    1995-12-01

    The design of 3D-conformal dose distributions for targets with concave outlines is a technical challenge in conformal radiotherapy. For these targets, it is impossible to find beam incidences for which the target volume can be isolated from the tissues at risk. Commonly occurring examples are most thyroid cancers and targets located at the lower neck and upper mediastinal levels related to some head and neck cancers. A solution to this problem was developed using beam intensity modulation executed with a multileaf collimator, applying a static beam-segmentation technique. The method includes the definition of beam incidences and beam segments of specific shape as well as the calculation of segment weights. Tests on Sherouse's GRATIS planning system allowed the dose to these targets to be escalated to 65-70 Gy without exceeding spinal cord tolerance. Further optimization by constrained matrix inversion was investigated to explore the possibility of further dose escalation.
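
    The "constrained matrix inversion" step amounts to solving for segment weights that reproduce a prescribed dose while remaining physically admissible (non-negative). A hedged toy sketch using non-negative least squares follows; the dose matrix, voxel counts and 65 Gy prescription are invented for illustration, and this is not the planning-system implementation.

      import numpy as np
      from scipy.optimize import nnls

      # Hypothetical toy problem: D[i, j] = dose to voxel i per unit weight of segment j
      rng = np.random.default_rng(1)
      n_voxels, n_segments = 200, 12
      D = rng.uniform(0.0, 1.0, size=(n_voxels, n_segments))
      prescribed = np.full(n_voxels, 65.0)      # target dose in Gy (illustrative)

      # Constrained "matrix inversion": least squares with non-negative segment weights
      weights, residual_norm = nnls(D, prescribed)
      delivered = D @ weights
      print("segment weights:", np.round(weights, 2))
      print("delivered dose range:", delivered.min(), delivered.max())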

  4. Revising the retrieval technique of a long-term stratospheric HNO₃ data set. From a constrained matrix inversion to the optimal estimation algorithm

    Energy Technology Data Exchange (ETDEWEB)

    Fiorucci, I.; Muscari, G. [Istituto Nazionale di Geofisica e Vulcanologia, Rome (Italy); De Zafra, R.L. [State Univ. of New York, Stony Brook, NY (United States). Dept. of Physics and Astronomy

    2011-07-01

    ... obtained employing the constrained matrix inversion method, show that the v1 and v2 profiles are overall consistent. The main difference is at the HNO₃ mixing ratio maximum in the 20-25 km altitude range, which is smaller in v2 than in v1 profiles by up to 2 ppbv at mid-latitudes and during the Antarctic fall. This difference suggests a better agreement of GBMS HNO₃ v2 profiles with both UARS/MLS and EOS Aura/MLS HNO₃ data than the previous v1 profiles. (orig.)

  5. Inverse Interval Matrix: A Survey

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří; Farhadsefat, R.

    2011-01-01

    Roč. 22, - (2011), s. 704-719 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval matrix * inverse interval matrix * NP-hardness * enclosure * unit midpoint * inverse sign stability * nonnegative invertibility * absolute value equation * algorithm Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp704-719.pdf

  6. Self-constrained inversion of potential fields

    Science.gov (United States)

    Paoletti, V.; Ialongo, S.; Florio, G.; Fedi, M.; Cella, F.

    2013-11-01

    We present a potential-field-constrained inversion procedure based on a priori information derived exclusively from the analysis of the gravity and magnetic data (self-constrained inversion). The procedure is designed to be applied to underdetermined problems and involves scenarios where the source distribution can be assumed to be of simple character. To set up effective constraints, we first estimate, through the analysis of the gravity or magnetic field, some or all of the following source parameters: the source depth-to-the-top, the structural index, the horizontal position of the source body edges and their dip. The second step incorporates the information related to these constraints into the objective function as depth and spatial weighting functions. We show, through 2-D and 3-D synthetic and real data examples, that potential-field-based constraints such as the structural index and the source boundaries are usually enough to obtain a substantial improvement in the density and magnetization models.
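
    One common way to build the depth weighting mentioned above is the power-law form used in potential-field inversion, with the exponent tied to the estimated structural index. The snippet below is a minimal sketch of such a weighting function; the exact form and parameter choices used in the self-constrained procedure may differ.

      import numpy as np

      def depth_weighting(z, z0=1.0, beta=2.0):
          """Power-law depth weighting w(z) = (z + z0)**(-beta/2).

          Counteracts the natural decay of potential-field kernels so that
          recovered sources are not pushed toward the surface; beta can be
          linked to the estimated structural index (assumption in this sketch).
          """
          return (z + z0) ** (-beta / 2.0)

      depths = np.linspace(0.0, 500.0, 6)       # metres, toy discretization
      print(depth_weighting(depths, z0=10.0, beta=3.0))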

  7. Facies Constrained Elastic Full Waveform Inversion

    KAUST Repository

    Zhang, Z.

    2017-05-26

    Current efforts to utilize full waveform inversion (FWI) as a tool beyond acoustic imaging applications, for example for reservoir analysis, face inherent limitations on resolution and on the potential trade-off between elastic model parameters. Adding rock-physics constraints does help to mitigate these issues. However, current approaches to adding such constraints are based on averaged rock-physics regularization terms. Since the true earth model consists of different facies, averaging over those facies naturally leads to smoothed models. To overcome this, we propose a novel way to utilize facies-based constraints in elastic FWI. A so-called confidence map is calculated and updated at each iteration of the inversion using both the inverted models and the prior information. The numerical example shows that the proposed method can reduce cross-talk and improve the resolution of the inverted elastic properties.

  8. Facies Constrained Elastic Full Waveform Inversion

    KAUST Repository

    Zhang, Z.; Zabihi Naeini, E.; Alkhalifah, Tariq Ali

    2017-01-01

    Current efforts to utilize full waveform inversion (FWI) as a tool beyond acoustic imaging applications, for example for reservoir analysis, face inherent limitations on resolution and on the potential trade-off between elastic model parameters. Adding rock-physics constraints does help to mitigate these issues. However, current approaches to adding such constraints are based on averaged rock-physics regularization terms. Since the true earth model consists of different facies, averaging over those facies naturally leads to smoothed models. To overcome this, we propose a novel way to utilize facies-based constraints in elastic FWI. A so-called confidence map is calculated and updated at each iteration of the inversion using both the inverted models and the prior information. The numerical example shows that the proposed method can reduce cross-talk and improve the resolution of the inverted elastic properties.

  9. Constraining inverse curvature gravity with supernovae

    Energy Technology Data Exchange (ETDEWEB)

    Mena, Olga (Fermilab); Santiago, Jose (Fermilab); Weller, Jochen (University College London; Fermilab)

    2005-10-01

    We show that the current accelerated expansion of the Universe can be explained without resorting to dark energy. Models of generalized modified gravity, with inverse powers of the curvature, can have late-time accelerating attractors without conflicting with solar system experiments. We have solved the Friedmann equations for the full dynamical range of the evolution of the Universe. This allows us to perform a detailed analysis of supernovae data in the context of such models that results in an excellent fit. Hence, inverse curvature gravity models represent an example of phenomenologically viable models in which the current acceleration of the Universe is driven by curvature instead of dark energy. If we further include constraints on the current expansion rate of the Universe from the Hubble Space Telescope and on the age of the Universe from globular clusters, we obtain that the matter content of the Universe is 0.07 ≤ Ω_m ≤ 0.21 (95% confidence). Hence the inverse curvature gravity models considered cannot explain the dynamics of the Universe with just a baryonic matter component.

  10. Constrained KP models as integrable matrix hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Ferreira, L.A.; Gomes, J.F.; Zimerman, A.H.

    1997-01-01

    We formulate the constrained KP hierarchy (denoted by cKP_(K+1,M)) as an affine ŝl(M+K+1) matrix integrable hierarchy generalizing the Drinfeld–Sokolov hierarchy. Using an algebraic approach, including the graded structure of the generalized Drinfeld–Sokolov hierarchy, we are able to find several new universal results valid for the cKP hierarchy. In particular, our method yields a closed expression for the second bracket obtained through Dirac reduction of any untwisted affine Kac–Moody current algebra. An explicit example is given for the case ŝl(M+K+1), for which a closed expression for the general recursion operator is also obtained. We show how isospectral flows are characterized and grouped according to the semisimple non-regular element E of sl(M+K+1) and the content of the center of the kernel of E. © 1997 American Institute of Physics.

  11. Constraining inverse-curvature gravity with supernovae.

    Science.gov (United States)

    Mena, Olga; Santiago, José; Weller, Jochen

    2006-02-03

    We show that models of generalized modified gravity, with inverse powers of the curvature, can explain the current accelerated expansion of the Universe without resorting to dark energy and without conflicting with solar system experiments. We have solved the Friedmann equations for the full dynamical range of the evolution of the Universe and performed a detailed analysis of supernovae data in the context of such models that results in an excellent fit. If we further include constraints on the current expansion of the Universe and on its age, we obtain that the matter content of the Universe is 0.07 ≤ Ω_m ≤ 0.21 (95% confidence), so the models considered cannot explain the dynamics of the Universe with just a baryonic matter component.

  12. Solving of L0 norm constrained EEG inverse problem.

    Science.gov (United States)

    Xu, Peng; Lei, Xu; Hu, Xiao; Yao, Dezhong

    2009-01-01

    The L0 norm is an effective constraint for obtaining a sparse solution to the EEG inverse problem. However, because it is discontinuous and non-differentiable, the L0-norm constrained problem is difficult to solve directly and is usually replaced by alternative functions, such as the L1 norm, that approximate the L0 norm. In this paper, a continuous and differentiable function with the same form as the transfer function of a Butterworth low-pass filter is introduced to approximate the L0-norm constraint involved in the EEG inverse problem. The new approximation-based approach was compared with L1-norm and LORETA solutions on a realistic head model using simulated sources. The preliminary results show that this alternative approximation to the L0 norm is promising for the estimation of EEG sources with sparse distribution.
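
    The smooth surrogate described above replaces the count of nonzero sources with a sum of terms shaped like a Butterworth low-pass magnitude response. The sketch below shows one plausible form of such a surrogate; the exact function, cutoff and order used in the paper are not reproduced here.

      import numpy as np

      def smoothed_l0(s, beta=1e-3, order=2):
          """Differentiable surrogate for the L0 norm of s.

          Each term, 1 / (1 + (beta/|s_i|)**(2*order)), has the form of a
          Butterworth low-pass magnitude response: ~1 for |s_i| >> beta and
          ~0 for |s_i| << beta, so the sum approximately counts nonzeros.
          """
          s = np.asarray(s, dtype=float)
          return np.sum(1.0 / (1.0 + (beta / (np.abs(s) + 1e-30)) ** (2 * order)))

      x = np.array([0.0, 1e-6, 0.5, -2.0, 0.0])
      print(smoothed_l0(x), "vs practical L0:", np.count_nonzero(np.abs(x) > 1e-4))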

  13. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    Science.gov (United States)

    Shukla, Rohit; Khoram, Soroosh; Jorgensen, Erik; Li, Jing; Lipasti, Mikko; Wright, Stephen

    2018-01-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines. PMID:29593483
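
    A Hopfield-style network computes the pseudoinverse by descending an energy function; in plain floating point this reduces to gradient descent on ||AX - I||_F^2, which converges to the Moore-Penrose inverse when A has full column rank. The sketch below illustrates only that mathematical idea, not the quantized TrueNorth mapping developed in the paper.

      import numpy as np

      def pinv_by_energy_descent(A, lr=None, iters=2000):
          """Moore-Penrose inverse of a full-column-rank A via gradient descent
          on the energy E(X) = ||A X - I||_F^2 (loosely Hopfield-like)."""
          m, n = A.shape
          if lr is None:
              lr = 1.0 / np.linalg.norm(A, 2) ** 2   # stable step size
          X = lr * A.T                               # initial guess
          I = np.eye(m)
          for _ in range(iters):
              X -= lr * A.T @ (A @ X - I)            # descend the energy
          return X

      A = np.random.default_rng(2).normal(size=(6, 3))   # full column rank (a.s.)
      print(np.allclose(pinv_by_energy_descent(A), np.linalg.pinv(A), atol=1e-6))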

  14. Computing Generalized Matrix Inverse on Spiking Neural Substrate

    Directory of Open Access Journals (Sweden)

    Rohit Shukla

    2018-03-01

    Emerging neural hardware substrates, such as IBM's TrueNorth Neurosynaptic System, can provide an appealing platform for deploying numerical algorithms. For example, a recurrent Hopfield neural network can be used to find the Moore-Penrose generalized inverse of a matrix, thus enabling a broad class of linear optimizations to be solved efficiently, at low energy cost. However, deploying numerical algorithms on hardware platforms that severely limit the range and precision of representation for numeric quantities can be quite challenging. This paper discusses these challenges and proposes a rigorous mathematical framework for reasoning about range and precision on such substrates. The paper derives techniques for normalizing inputs and properly quantizing synaptic weights originating from arbitrary systems of linear equations, so that solvers for those systems can be implemented in a provably correct manner on hardware-constrained neural substrates. The analytical model is empirically validated on the IBM TrueNorth platform, and results show that the guarantees provided by the framework for range and precision hold under experimental conditions. Experiments with optical flow demonstrate the energy benefits of deploying a reduced-precision and energy-efficient generalized matrix inverse engine on the IBM TrueNorth platform, reflecting 10× to 100× improvement over FPGA and ARM core baselines.

  15. Inverse Operation of Four-dimensional Vector Matrix

    OpenAIRE

    H J Bao; A J Sang; H X Chen

    2011-01-01

    This is a new study in a series that defines and proves multidimensional vector matrix mathematics, including the four-dimensional vector matrix determinant, the four-dimensional vector matrix inverse and related properties. The authors introduce innovative concepts of multidimensional vector matrix mathematics with numerous applications in engineering, mathematics, video conferencing, 3D TV, and other fields.

  16. A New Self-Constrained Inversion Method of Potential Fields Based on Probability Tomography

    Science.gov (United States)

    Sun, S.; Chen, C.; WANG, H.; Wang, Q.

    2014-12-01

    The self-constrained inversion method of potential fields uses a priori information self-extracted from the potential field data. Differing from external a priori information, the self-extracted information consists of parameters derived exclusively from the analysis of the gravity and magnetic data (Paoletti et al., 2013). Here we develop a new self-constrained inversion method based on probability tomography. Probability tomography requires neither a priori information nor large inversion matrix operations. Moreover, its result can describe the sources entirely and clearly, especially when their distribution is complex and irregular. We therefore use the a priori information extracted from the probability tomography results to constrain the inversion for physical properties, taking magnetic anomaly data as an example. The probability tomography result of the magnetic total field anomaly (ΔΤ) shows a smoother distribution than the anomalous source and cannot display the source edges exactly. However, the gradients of ΔΤ have higher resolution than ΔΤ in their own directions, and this characteristic is also present in their probability tomography results. We therefore combine the probability tomography results of ∂ΔΤ/∂x, ∂ΔΤ/∂y and ∂ΔΤ/∂z into a new result from which the a priori information is extracted, and then incorporate this information into the model objective function as spatial weighting functions to invert for the final magnetic susceptibility. Synthetic magnetic examples computed with and without the a priori information extracted from the probability tomography results show that the former are more concentrated and resolve the source body edges better. The method is finally applied to field ΔΤ data from an iron mine in China and performs well. Reference: Paoletti, V., Ialongo, S., Florio, G., Fedi, M., Cella, F., 2013. Self-constrained inversion of potential fields.

  17. Refractive index inversion based on Mueller matrix method

    Science.gov (United States)

    Fan, Huaxi; Wu, Wenyuan; Huang, Yanhua; Li, Zhaozhao

    2016-03-01

    Based on the Stokes vector and the Jones vector, the correlation between Mueller matrix elements and refractive index was studied and the result simplified, and an expression for refractive index inversion was deduced through the Mueller matrix approach. The Mueller matrix elements under different incidence angles are simulated through the expression for specular reflection, in order to analyze the influence of the angle of incidence and the refractive index; this is verified through measurement of the Mueller matrix elements of a polished metal surface. The research shows that, under the condition of specular reflection, the result of Mueller matrix inversion is consistent with experiment and can be used as a refractive index inversion method, providing a new way for target detection and recognition technology.

  18. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin

    2012-12-01

    Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive-only combinations. It has been widely adopted in areas like item recommending, text mining, data clustering, speech denoising, etc. In this paper, we provide an algorithm that allows the factorization to have linear or approximately linear constraints with respect to each factor. We prove that if the constraint function is linear, algorithms within our multiplicative framework will converge. This theory supports a large variety of equality and inequality constraints, and can facilitate application of NMF to a much larger domain. Taking the recommender system as an example, we demonstrate how a specialized weighted and constrained NMF algorithm can be developed to fit the problem exactly, and the tests justify that our constraints improve the performance for both weighted and unweighted NMF algorithms under several different metrics. In particular, on the Movielens data with 94% of items, the constrained NMF improves recall rate by 3% compared to SVD50 and by 45% compared to SVD150, which were reported as the best two in the top-N metric. © 2012 IEEE.
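
    For reference, the unconstrained multiplicative framework that the constrained algorithm extends is the classical Lee-Seung update pair; a minimal sketch is given below. The constrained variants developed in the paper modify these updates and are not reproduced here.

      import numpy as np

      def nmf_multiplicative(V, rank, iters=500, eps=1e-9):
          """Plain Lee-Seung multiplicative NMF updates for the Frobenius objective."""
          rng = np.random.default_rng(3)
          m, n = V.shape
          W = rng.uniform(0.1, 1.0, size=(m, rank))
          H = rng.uniform(0.1, 1.0, size=(rank, n))
          for _ in range(iters):
              H *= (W.T @ V) / (W.T @ W @ H + eps)   # update H, preserving non-negativity
              W *= (V @ H.T) / (W @ H @ H.T + eps)   # update W
          return W, H

      V = np.abs(np.random.default_rng(4).normal(size=(20, 15)))
      W, H = nmf_multiplicative(V, rank=4)
      print("relative reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))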

  19. A study of block algorithms for fermion matrix inversion

    International Nuclear Information System (INIS)

    Henty, D.

    1990-01-01

    We compare the convergence properties of Lanczos and Conjugate Gradient algorithms applied to the calculation of columns of the inverse fermion matrix for Kogut-Susskind and Wilson fermions in lattice QCD. When several columns of the inverse are required simultaneously, a block version of the Lanczos algorithm is most efficient at small mass, being over 5 times faster than the single algorithms. The block algorithm is also less susceptible to critical slowing down. (orig.)

  20. Connection between Dirac and matrix Schroedinger inverse-scattering transforms

    International Nuclear Information System (INIS)

    Jaulent, M.; Leon, J.J.P.

    1978-01-01

    The connection between two applications of the inverse scattering method for solving nonlinear equations is established. The inverse method associated with the massive Dirac system (D): (iσ₃ d/dx - iq₃σ₁ - q₁σ₂ + mσ₂)Y = εY is rediscovered from the inverse method associated with the 2 x 2 matrix Schroedinger equation (S): Y_xx + (k² - Q)Y = 0. Here Q obeys a nonlinear constraint equivalent to a linear constraint on the reflection coefficient for (S). (author)

  1. Topological inversion for solution of geodesy-constrained geophysical problems

    Science.gov (United States)

    Saltogianni, Vasso; Stiros, Stathis

    2015-04-01

    Geodetic data, mostly GPS observations, make it possible to measure displacements of selected points around activated faults and volcanoes and, on the basis of geophysical models, to model the underlying physical processes. This requires inversion of redundant systems of highly non-linear equations with >3 unknowns, a situation analogous to the adjustment of geodetic networks. However, in geophysical problems inversion cannot be based on conventional least-squares techniques and instead relies on numerical inversion techniques (a priori fixing of some variables, optimization in steps with the values of two variables at a time regarded as fixed, random search in the vicinity of approximate solutions). Still, these techniques lead to solutions trapped in local minima, to correlated estimates and to solutions with poor error control (usually sampling-based approaches). To overcome these problems, a numerical-topological, grid-search based technique in the R^N space is proposed (N being the number of unknown variables). This technique is in fact a generalization and refinement of techniques used in lighthouse positioning and in some cases of low-accuracy 2-D positioning using Wi-Fi etc. The basic concept is to assume discrete possible ranges of each variable, and from these ranges to define a grid G in the R^N space, with some of the gridpoints approximating the true solutions of the system. Each point of the hyper-grid G is then tested to determine whether it satisfies the observations, given their uncertainty level, and successful gridpoints define a sub-space of G containing the true solutions. The optimal (minimal) space containing one or more solutions is obtained using a trial-and-error approach and a single optimization factor. From this essentially deterministic identification of the set of gridpoints satisfying the system of equations, a stochastic optimal solution is then computed, corresponding to the center of gravity of this set of gridpoints.
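
    The grid-search idea above can be condensed into a few lines: generate the hyper-grid, keep every gridpoint whose predicted observations fit the data within their uncertainty, and take the centre of gravity of the accepted set. The toy model, grid ranges and the 3-sigma acceptance factor below are assumptions for illustration; this is not the authors' implementation.

      import numpy as np

      def grid_search_solutions(model, observed, sigma, grids, k=3.0):
          """Accept gridpoints fitting all observations within k*sigma and return
          the accepted set plus its centre of gravity (None if nothing fits)."""
          mesh = np.meshgrid(*grids, indexing="ij")
          points = np.stack([m.ravel() for m in mesh], axis=1)   # gridpoints in R^N
          accepted = np.array([p for p in points
                               if np.all(np.abs(model(p) - observed) <= k * sigma)])
          return accepted, (accepted.mean(axis=0) if len(accepted) else None)

      # Toy non-linear "geophysical" model with two unknown parameters
      def model(p):
          x, a = p
          return np.array([a * np.exp(-x), a * np.exp(-2.0 * x)])

      true = np.array([0.7, 2.0])
      observed = model(true) + np.array([0.01, -0.01])
      grids = [np.linspace(0.0, 2.0, 81), np.linspace(0.5, 4.0, 71)]
      pts, estimate = grid_search_solutions(model, observed, sigma=0.05, grids=grids)
      print("accepted gridpoints:", len(pts), "estimate:", estimate)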

  2. Supplementary Appendix for: Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Alnaffouri, Tareq Y.

    2016-01-01

    In this supplementary appendix we provide proofs and additional simulation results that complement the paper (constrained perturbation regularization approach for signal estimation using random matrix theory).

  3. Recursive Matrix Inverse Update On An Optical Processor

    Science.gov (United States)

    Casasent, David P.; Baranoski, Edward J.

    1988-02-01

    A high accuracy optical linear algebraic processor (OLAP) using the digital multiplication by analog convolution (DMAC) algorithm is described for use in an efficient matrix inverse update algorithm with speed and accuracy advantages. The solution of the parameters in the algorithm is addressed, and the advantages of optical over digital linear algebraic processors are advanced.
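
    The record gives no algorithmic detail, but the generic kind of recursive inverse update such processors target can be illustrated with the Sherman-Morrison rank-1 formula, which refreshes an existing inverse without a full re-inversion. This is only a stand-in example, not the specific OLAP/DMAC algorithm described above.

      import numpy as np

      def sherman_morrison_update(A_inv, u, v):
          """Update inv(A) to inv(A + u v^T) using only the existing inverse."""
          Au = A_inv @ u
          vA = v @ A_inv
          return A_inv - np.outer(Au, vA) / (1.0 + v @ Au)

      rng = np.random.default_rng(5)
      A = rng.normal(size=(4, 4)) + 4.0 * np.eye(4)   # comfortably non-singular
      A_inv = np.linalg.inv(A)
      u, v = rng.normal(size=4), rng.normal(size=4)
      updated = sherman_morrison_update(A_inv, u, v)
      print(np.allclose(updated, np.linalg.inv(A + np.outer(u, v))))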

  4. AMDLIBF, IBM 360 Subroutine Library, Eigenvalues, Eigenvectors, Matrix Inversion

    International Nuclear Information System (INIS)

    Wang, Jesse Y.

    1980-01-01

    Description of problem or function: AMDLIBF is a subset of the IBM 360 Subroutine Library at the Applied Mathematics Division at Argonne. This subset includes library category F (Identification/Description):
      F152S F SYMINV: Invert sym. matrices, solve lin. systems
      F154S A DOTP: Double plus precision accum. inner prod.
      F156S F RAYCOR: Rayleigh corrections for eigenvalues
      F161S F XTRADP: A fast extended precision inner product
      F162S A XTRADP: Inner product of two DP real vectors
      F202S F1 EIGEN: Eigen-system for real symmetric matrix
      F203S F: Driver for F202S
      F248S F RITZIT: Largest eigenvalue and vec. real sym. matrix
      F261S F EIGINV: Inverse eigenvalue problem
      F313S F CQZHES: Reduce cmplx matrices to upper Hess and tri
      F314S F CQZVAL: Reduce complex matrix to upper Hess. form
      F315S F CQZVEC: Eigenvectors of cmplx upper triang. syst.
      F316S F CGG: Driver for complex general Eigen-problem
      F402S F MATINV: Matrix inversion and sol. of linear eqns.
      F403S F: Driver for F402S
      F452S F CHOLLU, CHOLEQ: Sym. decomp. of pos. def. band matrices
      F453S F MATINC: Inversion of complex matrices
      F454S F CROUT: Solution of simultaneous linear equations
      F455S F CROUTC: Sol. of simultaneous complex linear eqns.
      F456S F1 DIAG: Integer preserving Gaussian elimination

  5. Explicit Inverse of an Interval Matrix with Unit Midpoint

    Czech Academy of Sciences Publication Activity Database

    Rohn, Jiří

    2011-01-01

    Roč. 22, - (2011), s. 138-150 E-ISSN 1081-3810 R&D Projects: GA ČR GA201/09/1957; GA ČR GC201/08/J020 Institutional research plan: CEZ:AV0Z10300504 Keywords: interval matrix * unit midpoint * inverse interval matrix * regularity Subject RIV: BA - General Mathematics Impact factor: 0.808, year: 2010 http://www.math.technion.ac.il/iic/ela/ela-articles/articles/vol22_pp138-150.pdf

  6. An algorithm for mass matrix calculation of internally constrained molecular geometries.

    Science.gov (United States)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-28

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial but need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.

  7. An algorithm for mass matrix calculation of internally constrained molecular geometries

    International Nuclear Information System (INIS)

    Aryanpour, Masoud; Dhanda, Abhishek; Pitsch, Heinz

    2008-01-01

    Dynamic models for molecular systems require the determination of the corresponding mass matrix. For constrained geometries, these computations are often not trivial but need special considerations. Here, assembling the mass matrix of internally constrained molecular structures is formulated as an optimization problem. Analytical expressions are derived for the solution of the different possible cases depending on the rank of the constraint matrix. Geometrical interpretations are further used to enhance the solution concept. As an application, we evaluate the mass matrix for a constrained molecule undergoing an electron-transfer reaction. The preexponential factor for this reaction is computed based on the harmonic model.

  8. Inverse mass matrix via the method of localized lagrange multipliers

    Czech Academy of Sciences Publication Activity Database

    González, José A.; Kolman, Radek; Cho, S.S.; Felippa, C.A.; Park, K.C.

    2018-01-01

    Roč. 113, č. 2 (2018), s. 277-295 ISSN 0029-5981 R&D Projects: GA MŠk(CZ) EF15_003/0000493; GA ČR GA17-22615S Institutional support: RVO:61388998 Keywords : explicit time integration * inverse mass matrix * localized Lagrange multipliers * partitioned analysis Subject RIV: BI - Acoustics OBOR OECD: Applied mechanics Impact factor: 2.162, year: 2016 https://onlinelibrary.wiley.com/doi/10.1002/nme.5613

  9. A conditioning technique for matrix inversion for Wilson fermions

    International Nuclear Information System (INIS)

    DeGrand, T.A.

    1988-01-01

    I report a simple technique for conditioning conjugate gradient or conjugate residue matrix inversion as applied to the lattice gauge theory problem of computing the propagator of Wilson fermions. One form of the technique provides about a factor of three speedup in convergence while each iteration runs at the same speed as in an unconditioned algorithm. I illustrate the method as it is applied to a conjugate residue algorithm. (orig.)

  10. Layered and Laterally Constrained 2D Inversion of Time Domain Induced Polarization Data

    DEFF Research Database (Denmark)

    Fiandaca, Gianluca; Ramm, James; Auken, Esben

    In a sedimentary environment, quasi-layered models often represent the actual geology more accurately than smooth minimum-structure models. We have developed a new layered and laterally constrained inversion algorithm for time domain induced polarization data. The algorithm is based on the time transform of a complex resistivity forward response, and the inversion extracts the spectral information of the time domain measures in terms of the Cole-Cole parameters. The developed forward code and inversion algorithm use the full time decay of the induced polarization response, together with an accurate description of the transmitter waveform and of the receiver transfer function, allowing for a quantitative interpretation of the parameters. The code has been optimized for parallel computation and the inversion time is comparable to codes inverting just for direct current resistivity. The new inversion ...

  11. Degenerated-Inverse-Matrix-Based Channel Estimation for OFDM Systems

    Directory of Open Access Journals (Sweden)

    Makoto Yoshida

    2009-01-01

    This paper addresses time-domain channel estimation for pilot-symbol-aided orthogonal frequency division multiplexing (OFDM) systems. By using a cyclic sinc-function matrix uniquely determined by the Nc transmitted subcarriers, the performance of our proposed scheme approaches perfect channel state information (CSI), within a maximum of 0.4 dB degradation, regardless of the delay spread of the channel, Doppler frequency, and subcarrier modulation. Furthermore, reducing the matrix size by splitting the dispersive channel impulse response into clusters means that the degenerated inverse matrix estimator (DIME) is feasible for broadband, high-quality OFDM transmission systems. In addition to a theoretical analysis of the normalized mean squared error (NMSE) performance of DIME, computer simulations over realistic non-sample-spaced channels also showed that the DIME is robust for intersymbol interference (ISI) channels and fast time-variant channels where a minimum mean squared error (MMSE) estimator does not work well.

  12. A penalty method for PDE-constrained optimization in inverse problems

    International Nuclear Information System (INIS)

    Leeuwen, T van; Herrmann, F J

    2016-01-01

    Many inverse and parameter estimation problems can be written as PDE-constrained optimization problems. The goal is to infer the parameters, typically coefficients of the PDE, from partial measurements of the solutions of the PDE for several right-hand sides. Such PDE-constrained problems can be solved by finding a stationary point of the Lagrangian, which entails simultaneously updating the parameters and the (adjoint) state variables. For large-scale problems, such an all-at-once approach is not feasible as it requires storing all the state variables. In this case one usually resorts to a reduced approach where the constraints are explicitly eliminated (at each iteration) by solving the PDEs. These two approaches, and variations thereof, are the main workhorses for solving PDE-constrained optimization problems arising from inverse problems. In this paper, we present an alternative method that aims to combine the advantages of both approaches. Our method is based on a quadratic penalty formulation of the constrained optimization problem. By eliminating the state variable, we develop an efficient algorithm that has roughly the same computational complexity as the conventional reduced approach while exploiting a larger search space. Numerical results show that this method indeed reduces some of the nonlinearity of the problem and is less sensitive to the initial iterate. (paper)
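
    The quadratic-penalty idea can be shown on a toy linear "PDE": for a trial model the state is eliminated by a single linear solve that balances data misfit against PDE residual, and the resulting reduced objective is minimized over the model. Everything below (the tridiagonal operator, measurement pattern and penalty weight) is an invented toy setup, not the paper's formulation or code.

      import numpy as np
      from scipy.optimize import minimize

      # Toy "PDE": A(m) u = q with A(m) = K + diag(m), K a fixed stiffness-like matrix
      n = 20
      K = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
      q = np.ones(n)
      P = np.eye(n)[::4]                        # partial measurements (every 4th node)
      m_true = 1.0 + 0.5 * np.sin(np.linspace(0.0, np.pi, n))
      d = P @ np.linalg.solve(K + np.diag(m_true), q)   # noiseless synthetic data
      lam = 1e2                                 # penalty weight (assumed)

      def penalty_objective(m):
          A = K + np.diag(m)
          # Eliminate the state: u minimizes ||P u - d||^2 + lam * ||A u - q||^2
          u = np.linalg.solve(P.T @ P + lam * A.T @ A, P.T @ d + lam * A.T @ q)
          return np.sum((P @ u - d) ** 2) + lam * np.sum((A @ u - q) ** 2)

      res = minimize(penalty_objective, np.ones(n), method="L-BFGS-B")
      print("final penalty objective:", res.fun)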

  13. A recursive algorithm for computing the inverse of the Vandermonde matrix

    Directory of Open Access Journals (Sweden)

    Youness Aliyari Ghassabeh

    2016-12-01

    The inverse of a Vandermonde matrix has been used for signal processing, polynomial interpolation, curve fitting, wireless communication, and system identification. In this paper, we propose a novel fast recursive algorithm to compute the inverse of a Vandermonde matrix. The algorithm computes the inverse of a higher order Vandermonde matrix using the available lower order inverse matrix with a computational cost of O(n^2). The proposed algorithm is given in a matrix form, which makes it appropriate for hardware implementation. The running time of the proposed algorithm to find the inverse of a Vandermonde matrix using a lower order Vandermonde matrix is compared with the running time of the matrix inversion function implemented in MATLAB.
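
    As a point of comparison for such algorithms, the inverse of a Vandermonde matrix can be written down directly from Lagrange interpolation: column i of the inverse holds the power-basis coefficients of the i-th Lagrange cardinal polynomial. The sketch below implements that direct construction for checking results; it is not the recursive lower-order-to-higher-order scheme proposed in the paper.

      import numpy as np
      from numpy.polynomial import polynomial as P

      def vandermonde_inverse(x):
          """Inverse of V[i, j] = x[i]**j built from Lagrange basis coefficients."""
          x = np.asarray(x, dtype=float)
          n = len(x)
          V_inv = np.zeros((n, n))
          for i in range(n):
              # L_i(t) = prod_{j != i} (t - x_j) / (x_i - x_j), coefficients ascending
              coeffs = np.array([1.0])
              for j in range(n):
                  if j != i:
                      coeffs = P.polymul(coeffs, [-x[j], 1.0]) / (x[i] - x[j])
              V_inv[:, i] = coeffs
          return V_inv

      x = np.array([0.5, 1.0, 2.0, 3.5])
      V = np.vander(x, increasing=True)
      print(np.allclose(vandermonde_inverse(x) @ V, np.eye(len(x))))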

  14. Multiplicative algorithms for constrained non-negative matrix factorization

    KAUST Repository

    Peng, Chengbin; Wong, Kachun; Rockwood, Alyn; Zhang, Xiangliang; Jiang, Jinling; Keyes, David E.

    2012-01-01

    Non-negative matrix factorization (NMF) provides the advantage of parts-based data representation through additive only combinations. It has been widely adopted in areas like item recommending, text mining, data clustering, speech denoising, etc

  15. Efficient non-negative constrained model-based inversion in optoacoustic tomography

    International Nuclear Information System (INIS)

    Ding, Lu; Luís Deán-Ben, X; Lutzweiler, Christian; Razansky, Daniel; Ntziachristos, Vasilis

    2015-01-01

    The inversion accuracy in optoacoustic tomography depends on a number of parameters, including the number of detectors employed, discrete sampling issues or imperfectness of the forward model. These parameters result in ambiguities on the reconstructed image. A common ambiguity is the appearance of negative values, which have no physical meaning since optical absorption can only be greater than or equal to zero. We investigate herein algorithms that impose non-negative constraints in model-based optoacoustic inversion. Several state-of-the-art non-negative constrained algorithms are analyzed. Furthermore, an algorithm based on the conjugate gradient method is introduced in this work. We are particularly interested in investigating whether positive restrictions lead to accurate solutions or drive the appearance of errors and artifacts. It is shown that the computational performance of non-negative constrained inversion is higher for the introduced algorithm than for the other algorithms, while yielding equivalent results. The experimental performance of this inversion procedure is then tested in phantoms and small animals, showing an improvement in image quality and quantitativeness with respect to the unconstrained approach. The study performed validates the use of non-negative constraints for improving image accuracy compared to unconstrained methods, while maintaining computational efficiency. (paper)

  16. A fast algorithm for sparse matrix computations related to inversion

    International Nuclear Information System (INIS)

    Li, S.; Wu, W.; Darve, E.

    2013-01-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green’s functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round-off errors

  17. A fast algorithm for sparse matrix computations related to inversion

    Energy Technology Data Exchange (ETDEWEB)

    Li, S., E-mail: lisong@stanford.edu [Institute for Computational and Mathematical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Stanford, CA 94305 (United States); Wu, W. [Department of Electrical Engineering, Stanford University, 350 Serra Mall, Packard Building, Room 268, Stanford, CA 94305 (United States); Darve, E. [Institute for Computational and Mathematical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Stanford, CA 94305 (United States); Department of Mechanical Engineering, Stanford University, 496 Lomita Mall, Durand Building, Room 209, Stanford, CA 94305 (United States)

    2013-06-01

    We have developed a fast algorithm for computing certain entries of the inverse of a sparse matrix. Such computations are critical to many applications, such as the calculation of non-equilibrium Green’s functions G^r and G^< for nano-devices. The FIND (Fast Inverse using Nested Dissection) algorithm is optimal in the big-O sense. However, in practice, FIND suffers from two problems due to the width-2 separators used by its partitioning scheme. One problem is the presence of a large constant factor in the computational cost of FIND. The other problem is that the partitioning scheme used by FIND is incompatible with most existing partitioning methods and libraries for nested dissection, which all use width-1 separators. Our new algorithm resolves these problems by thoroughly decomposing the computation process such that width-1 separators can be used, resulting in a significant speedup over FIND for realistic devices — up to twelve-fold in simulation. The new algorithm also has the added advantage that desired off-diagonal entries can be computed for free. Consequently, our algorithm is faster than the current state-of-the-art recursive methods for meshes of any size. Furthermore, the framework used in the analysis of our algorithm is the first attempt to explicitly apply the widely-used relationship between mesh nodes and matrix computations to the problem of multiple eliminations with reuse of intermediate results. This framework makes our algorithm easier to generalize, and also easier to compare against other methods related to elimination trees. Finally, our accuracy analysis shows that the algorithms that require back-substitution are subject to significant extra round-off errors, which become extremely large even for some well-conditioned matrices or matrices with only moderately large condition numbers. When compared to these back-substitution algorithms, our algorithm is generally a few orders of magnitude more accurate, and our produced round

  18. Mantle viscosity structure constrained by joint inversions of seismic velocities and density

    Science.gov (United States)

    Rudolph, M. L.; Moulik, P.; Lekic, V.

    2017-12-01

    The viscosity structure of Earth's deep mantle affects the thermal evolution of Earth, the ascent of mantle upwellings, sinking of subducted oceanic lithosphere, and the mixing of compositional heterogeneities in the mantle. Modeling the long-wavelength dynamic geoid allows us to constrain the radial viscosity profile of the mantle. Typically, in inversions for the mantle viscosity structure, wavespeed variations are mapped into density variations using a constant- or depth-dependent scaling factor. Here, we use a newly developed joint model of anisotropic Vs, Vp, density and transition zone topographies to generate a suite of solutions for the mantle viscosity structure directly from the seismologically constrained density structure. The density structure used to drive our forward models includes contributions from both thermal and compositional variations, including important contributions from compositionally dense material in the Large Low Velocity Provinces at the base of the mantle. These compositional variations have been neglected in the forward models used in most previous inversions and have the potential to significantly affect large-scale flow and thus the inferred viscosity structure. We use a transdimensional, hierarchical, Bayesian approach to solve the inverse problem, and our solutions for viscosity structure include an increase in viscosity below the base of the transition zone, in the shallow lower mantle. Using geoid dynamic response functions and an analysis of the correlation between the observed geoid and mantle structure, we demonstrate the underlying reason for this inference. Finally, we present a new family of solutions in which the data uncertainty is accounted for using covariance matrices associated with the mantle structure models.

  19. Mini-batch optimized full waveform inversion with geological constrained gradient filtering

    Science.gov (United States)

    Yang, Hui; Jia, Junxiong; Wu, Bangyu; Gao, Jinghuai

    2018-05-01

    High computation cost and solutions without geological sense have hindered the wide application of Full Waveform Inversion (FWI). The source encoding technique is a way to dramatically reduce the cost of FWI, but it is subject to a fixed-spread acquisition requirement and to slow convergence caused by the suppression of cross-talk. Traditionally, gradient regularization or preconditioning is applied to mitigate the ill-posedness. An isotropic smoothing filter applied to the gradients generally gives non-geological inversion results and can also introduce artifacts. In this work, we propose to address both the efficiency and the ill-posedness of FWI with a geologically constrained mini-batch gradient optimization method. Mini-batch gradient descent optimization is adopted to reduce the computation time by choosing a subset of all shots at each iteration. By jointly applying structure-oriented smoothing to the mini-batch gradient, the inversion converges faster and gives results with more geological meaning. A stylized Marmousi model is used to show the performance of the proposed method on a realistic synthetic model.

  20. A Globally Convergent Matrix-Free Method for Constrained Equations and Its Linear Convergence Rate

    Directory of Open Access Journals (Sweden)

    Min Sun

    2014-01-01

    A matrix-free method for constrained equations is proposed, which is a combination of the well-known PRP (Polak-Ribière-Polyak) conjugate gradient method and the famous hyperplane projection method. The new method is not only derivative-free, but also completely matrix-free, and consequently it can be applied to solve large-scale constrained equations. We obtain global convergence of the new method without any differentiability requirement on the constrained equations. Compared with the existing gradient methods for solving such problems, the new method possesses a linear convergence rate under standard conditions, and a relaxation factor γ is attached in the update step to accelerate convergence. Preliminary numerical results show that it is promising in practice.

  1. Self-constrained inversion of microgravity data along a segment of the Irpinia fault

    Science.gov (United States)

    Lo Re, Davide; Florio, Giovanni; Ferranti, Luigi; Ialongo, Simone; Castiello, Gabriella

    2016-01-01

    A microgravity survey was completed to precisely locate and better characterize the near-surface geometry of a recent fault with small throw in a mountainous area in the Southern Apennines (Italy). The site is on a segment of the Irpinia fault, which is the source of the M6.9 1980 earthquake. This fault cuts a few meters of Mesozoic carbonate bedrock and its younger, mostly Holocene, continental cover deposits. The amplitude of the complete Bouguer anomaly along two profiles across the fault is about 50 μGal. The data were analyzed and interpreted according to a self-constrained strategy, where rapid estimates of source parameters were later used as constraints for the inversion. The fault has been clearly identified and localized in its horizontal position and depth. Interesting features in the overburden have also been identified, and their interpretation has allowed us to estimate the fault slip-rate, which is consistent with independent geological estimates.

  2. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag

    2016-10-06

    In this work, we propose a new regularization approach for linear least-squares problems with random matrices. In the proposed constrained perturbation regularization approach, an artificial perturbation matrix with a bounded norm is forced into the system model matrix. This perturbation is introduced to improve the singular-value structure of the model matrix and, hence, the solution of the estimation problem. Relying on the randomness of the model matrix, a number of deterministic equivalents from random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various estimated signal characteristics. In addition, simulations show that our approach is robust in the presence of model uncertainty.

  3. Constraining the composition and thermal state of the moon from an inversion of electromagnetic lunar day-side transfer functions

    DEFF Research Database (Denmark)

    Khan, Amir; Connolly, J.A.D.; Olsen, Nils

    2006-01-01

    We present a general method to constrain planetary composition and thermal state from an inversion of long-period electromagnetic sounding data. As an example of our approach, we reexamine the problem of inverting lunar day-side transfer functions to constrain the internal structure of the Moon. We ... to significantly influence the inversion results. In order to improve future inferences about lunar composition and thermal state, more electrical conductivity measurements are needed, especially for minerals appropriate to the Moon, such as pyrope and almandine.

  4. Solving large-scale PDE-constrained Bayesian inverse problems with Riemann manifold Hamiltonian Monte Carlo

    Science.gov (United States)

    Bui-Thanh, T.; Girolami, M.

    2014-11-01

    We consider the Riemann manifold Hamiltonian Monte Carlo (RMHMC) method for solving statistical inverse problems governed by partial differential equations (PDEs). The Bayesian framework is employed to cast the inverse problem into the task of statistical inference whose solution is the posterior distribution in infinite dimensional parameter space conditional upon observation data and Gaussian prior measure. We discretize both the likelihood and the prior using the H1-conforming finite element method together with a matrix transfer technique. The power of the RMHMC method is that it exploits the geometric structure induced by the PDE constraints of the underlying inverse problem. Consequently, each RMHMC posterior sample is almost uncorrelated/independent from the others providing statistically efficient Markov chain simulation. However this statistical efficiency comes at a computational cost. This motivates us to consider computationally more efficient strategies for RMHMC. At the heart of our construction is the fact that for Gaussian error structures the Fisher information matrix coincides with the Gauss-Newton Hessian. We exploit this fact in considering a computationally simplified RMHMC method combining state-of-the-art adjoint techniques and the superiority of the RMHMC method. Specifically, we first form the Gauss-Newton Hessian at the maximum a posteriori point and then use it as a fixed constant metric tensor throughout RMHMC simulation. This eliminates the need for the computationally costly differential geometric Christoffel symbols, which in turn greatly reduces computational effort at a corresponding loss of sampling efficiency. We further reduce the cost of forming the Fisher information matrix by using a low rank approximation via a randomized singular value decomposition technique. This is efficient since a small number of Hessian-vector products are required. The Hessian-vector product in turn requires only two extra PDE solves using the adjoint
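
    The low-rank step mentioned above (a randomized SVD built from a handful of Hessian-vector products) is easy to sketch in isolation. The snippet below approximates the dominant eigenpairs of a symmetric positive semi-definite operator given only a matrix-vector product routine; the toy Gauss-Newton Hessian and the chosen rank are assumptions, and this is not the paper's adjoint-based implementation.

      import numpy as np

      def randomized_low_rank(hess_vec, n, rank, oversample=10, seed=0):
          """Dominant eigenpairs of a symmetric PSD operator from matrix-vector
          products only, via a randomized range finder (Halko et al. style)."""
          rng = np.random.default_rng(seed)
          Omega = rng.normal(size=(n, rank + oversample))
          Y = np.column_stack([hess_vec(Omega[:, i]) for i in range(Omega.shape[1])])
          Q, _ = np.linalg.qr(Y)                       # orthonormal basis for the range
          B = np.column_stack([hess_vec(Q[:, i]) for i in range(Q.shape[1])])
          T = Q.T @ B                                  # small projected matrix
          w, U = np.linalg.eigh(T)
          idx = np.argsort(w)[::-1][:rank]
          return Q @ U[:, idx], w[idx]                 # approximate eigenvectors, eigenvalues

      # Toy Gauss-Newton Hessian J^T J with a rapidly decaying spectrum
      J = np.random.default_rng(6).normal(size=(200, 100)) * (0.5 ** np.arange(100))
      H = J.T @ J
      vecs, vals = randomized_low_rank(lambda x: H @ x, n=100, rank=10)
      print("top eigenvalue error:", abs(vals[0] - np.linalg.eigvalsh(H)[-1]))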

  5. Constraining climate sensitivity and continental versus seafloor weathering using an inverse geological carbon cycle model.

    Science.gov (United States)

    Krissansen-Totton, Joshua; Catling, David C

    2017-05-22

    The relative influences of tectonics, continental weathering and seafloor weathering in controlling the geological carbon cycle are unknown. Here we develop a new carbon cycle model that explicitly captures the kinetics of seafloor weathering to investigate carbon fluxes and the evolution of atmospheric CO2 and ocean pH since 100 Myr ago. We compare model outputs to proxy data, and rigorously constrain model parameters using Bayesian inverse methods. Assuming our forward model is an accurate representation of the carbon cycle, to fit the proxies the temperature dependence of continental weathering must be weaker than commonly assumed. We find that 15-31 °C (1σ) surface warming is required to double the continental weathering flux, versus 3-10 °C in previous work. In addition, continental weatherability has increased 1.7-3.3 times since 100 Myr ago, demanding explanation by uplift and sea-level changes. The average Earth system climate sensitivity is ... K (1σ) per CO2 doubling, which is notably higher than fast-feedback estimates. These conclusions are robust to assumptions about outgassing, modern fluxes and seafloor weathering kinetics.

  6. Exact Inverse Matrices of Fermat and Mersenne Circulant Matrix

    Directory of Open Access Journals (Sweden)

    Yanpeng Zheng

    2015-01-01

    The well-known circulant matrices are applied to solve networked systems. In this paper, circulant and left circulant matrices with the Fermat and Mersenne numbers are considered. The nonsingularity of these special matrices is discussed. Meanwhile, the exact determinants and inverse matrices of these special matrices are presented.
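
    A property underlying results of this kind is that every circulant matrix is diagonalized by the discrete Fourier transform, so its inverse is again circulant with first column ifft(1/fft(c)). The sketch below demonstrates that generic property numerically; it does not reproduce the paper's closed-form determinants and inverses for the Fermat and Mersenne cases.

      import numpy as np
      from scipy.linalg import circulant

      def circulant_inverse(c):
          """Inverse of the circulant matrix with first column c, via the FFT."""
          eig = np.fft.fft(c)                   # eigenvalues of the circulant matrix
          if np.any(np.isclose(eig, 0.0)):
              raise ValueError("singular circulant matrix")
          return circulant(np.real(np.fft.ifft(1.0 / eig)))

      c = np.array([3.0, 7.0, 31.0, 127.0])     # Mersenne numbers as the first column
      C = circulant(c)
      print(np.allclose(C @ circulant_inverse(c), np.eye(len(c))))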

  7. Inversion of Love wave phase velocity using smoothness-constrained least-squares technique; Heikatsuka seiyakutsuki saisho jijoho ni yoru love ha iso sokudo no inversion

    Energy Technology Data Exchange (ETDEWEB)

    Kawamura, S [Nippon Geophysical Prospecting Co. Ltd., Tokyo (Japan)

    1996-10-01

    Smoothness-constrained least-squares inversion with ABIC minimization was applied to the inversion of the phase velocity of surface waves during geophysical exploration, to confirm its usefulness. Since this study aimed mainly at assessing the applicability of the technique, Love waves were used, which are easier to treat theoretically than Rayleigh waves. Stable successive approximation solutions could be obtained by repeated improvement of the S-wave velocity model, and an objective model with high reliability could be determined. In contrast, for the inversion with simple minimization of the sum of squared residuals, stable solutions could also be obtained by repeated improvement, but judging convergence was very hard because of the smoothness constraint, which might leave the obtained model in a state of over-fitting. In this study, Love waves were used to examine the applicability of the smoothness-constrained least-squares technique with ABIC minimization; its applicability to Rayleigh waves will be investigated. 8 refs.
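
    The core linear-algebra step of such an inversion is a least-squares solve with a smoothness (second-difference) penalty. A minimal sketch follows; here the constraint weight is supplied by hand, whereas in the study it is chosen objectively by minimizing ABIC, and the toy kernel and model are invented for illustration.

      import numpy as np

      def smoothness_constrained_lsq(G, d, alpha):
          """Solve min ||G m - d||^2 + alpha * ||D2 m||^2 with a second-difference
          smoothness constraint."""
          n = G.shape[1]
          D2 = np.diff(np.eye(n), n=2, axis=0)          # second-difference operator
          return np.linalg.solve(G.T @ G + alpha * D2.T @ D2, G.T @ d)

      # Toy example: recover a smooth velocity profile from noisy weighted averages
      rng = np.random.default_rng(7)
      n = 30
      m_true = 3.0 + 0.5 * np.sin(np.linspace(0.0, 2.0 * np.pi, n))
      G = rng.uniform(0.0, 1.0, size=(40, n))
      G /= G.sum(axis=1, keepdims=True)
      d = G @ m_true + 0.01 * rng.normal(size=40)
      m_est = smoothness_constrained_lsq(G, d, alpha=1.0)
      print("rms model error:", np.sqrt(np.mean((m_est - m_true) ** 2)))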

  8. Geodynamic inversion to constrain the non-linear rheology of the lithosphere

    Science.gov (United States)

    Baumann, T. S.; Kaus, Boris J. P.

    2015-08-01

    One of the main methods to determine the strength of the lithosphere is by estimating its effective elastic thickness. This method assumes that the lithosphere is a thin elastic plate that floats on the mantle and uses both topography and gravity anomalies to estimate the plate thickness. Whereas this seems to work well for oceanic plates, it has given controversial results in continental collision zones. For most of these locations, additional geophysical data sets such as receiver functions and seismic tomography exist that constrain the geometry of the lithosphere and often show that it is rather complex. Yet, lithospheric geometry by itself is insufficient to understand the dynamics of the lithosphere as this also requires knowledge of the rheology of the lithosphere. Laboratory experiments suggest that rocks deform in a viscous manner if temperatures are high and stresses low, or in a plastic/brittle manner if the yield stress is exceeded. Yet, the experimental results show significant variability between various rock types and there are large uncertainties in extrapolating laboratory values to nature, which leaves room for speculation. An independent method is thus required to better understand the rheology and dynamics of the lithosphere in collision zones. The goal of this paper is to discuss such an approach. Our method relies on performing numerical thermomechanical forward models of the present-day lithosphere with an initial geometry that is constructed from geophysical data sets. We employ experimentally determined creep-laws for the various parts of the lithosphere, but assume that the parameters of these creep-laws as well as the temperature structure of the lithosphere are uncertain. This is used as a priori information to formulate a Bayesian inverse problem that employs topography, gravity, horizontal and vertical surface velocities to invert for the unknown material parameters and temperature structure. In order to test the general methodology ...

  9. Matrix theory from generalized inverses to Jordan form

    CERN Document Server

    Piziak, Robert

    2007-01-01

    Each chapter ends with a list of references for further reading. Undoubtedly, these will be useful for anyone who wishes to pursue the topics deeper. … the book has many MATLAB examples and problems presented at appropriate places. … the book will become a widely used classroom text for a second course on linear algebra. It can be used profitably by graduate and advanced level undergraduate students. It can also serve as an intermediate course for more advanced texts in matrix theory. This is a lucidly written book by two authors who have made many contributions to linear and multilinear algebra. -K.C. Sivakumar, IMAGE, No. 47, Fall 2011. Always mathematically constructive, this book helps readers delve into elementary linear algebra ideas at a deeper level and prepare for further study in matrix theory and abstract algebra. -L'enseignement Mathématique, January-June 2007, Vol. 53, No. 1-2.

  10. Optimal control of large space structures via generalized inverse matrix

    Science.gov (United States)

    Nguyen, Charles C.; Fang, Xiaowen

    1987-01-01

    Independent Modal Space Control (IMSC) is a control scheme that decouples the space structure into n independent second-order subsystems according to n controlled modes and controls each mode independently. It is well-known that the IMSC eliminates control and observation spillover caused when the conventional coupled modal control scheme is employed. The independent control of each mode requires that the number of actuators be equal to the number of modelled modes, which is very high for a faithful modeling of large space structures. A control scheme is proposed that allows one to use a reduced number of actuators to control all modeled modes suboptimally. In particular, the method of generalized inverse matrices is employed to implement the actuators such that the eigenvalues of the closed-loop system are as close as possible to those specified by the optimal IMSC. Computer simulation of the proposed control scheme on a simply supported beam is given.

  11. A Rank-Constrained Matrix Representation for Hypergraph-Based Subspace Clustering

    Directory of Open Access Journals (Sweden)

    Yubao Sun

    2015-01-01

    Full Text Available This paper presents a novel, rank-constrained matrix representation combined with hypergraph spectral analysis to enable the recovery of the original subspace structures of corrupted data. Real-world data are frequently corrupted with both sparse error and noise. Our matrix decomposition model separates the low-rank, sparse error, and noise components from the data in order to enhance robustness to the corruption. In order to obtain the desired rank representation of the data within a dictionary, our model directly utilizes rank constraints by restricting the upper bound of the rank range. An alternative projection algorithm is proposed to estimate the low-rank representation and separate the sparse error from the data matrix. To further capture the complex relationship between data distributed in multiple subspaces, we use hypergraph to represent the data by encapsulating multiple related samples into one hyperedge. The final clustering result is obtained by spectral decomposition of the hypergraph Laplacian matrix. Validation experiments on the Extended Yale Face Database B, AR, and Hopkins 155 datasets show that the proposed method is a promising tool for subspace clustering.

  12. Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations

    OpenAIRE

    Soleimani, Farahnaz; Stanimirovi´c, Predrag; Soleymani, Fazlollah

    2015-01-01

    An application of iterative methods for computing the Moore–Penrose inverse in balancing chemical equations is considered. With the aim to illustrate proposed algorithms, an improved high order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility to accelerate the iterations in the initial phase of the convergence. Although the ...

  13. An Innovative Approach to Balancing Chemical-Reaction Equations: A Simplified Matrix-Inversion Technique for Determining The Matrix Null Space

    OpenAIRE

    Thorne, Lawrence R.

    2011-01-01

    I propose a novel approach to balancing equations that is applicable to all chemical-reaction equations; it is readily accessible to students via scientific calculators and basic computer spreadsheets that have a matrix-inversion application. The new approach utilizes the familiar matrix-inversion operation in an unfamiliar and innovative way; its purpose is not to identify undetermined coefficients as usual, but, instead, to compute a matrix null space (or matrix kernel). The null space then...
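
    Since the abstract is truncated, the sketch below only illustrates the underlying idea of balancing a reaction from the null space of the element-by-species composition matrix. It uses an SVD-based null-space computation in Python/NumPy rather than the author's matrix-inversion shortcut, and the methane-combustion example is chosen purely for illustration.

      import numpy as np

      # Balance a CH4 + b O2 -> c CO2 + d H2O from the null space of the
      # element-by-species composition matrix (product species entered with minus signs).
      A = np.array([[1, 0, -1,  0],   # carbon
                    [4, 0,  0, -2],   # hydrogen
                    [0, 2, -2, -1]],  # oxygen
                   dtype=float)

      _, _, Vt = np.linalg.svd(A)
      coeffs = Vt[-1]                  # the one-dimensional null space of A
      coeffs = coeffs / coeffs[0]      # normalize so the first coefficient is 1
      print(np.round(coeffs, 6))       # -> [1. 2. 1. 2.], i.e. CH4 + 2 O2 -> CO2 + 2 H2O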

  14. Identity of the conjugate gradient and Lanczos algorithms for matrix inversion in lattice fermion calculations

    International Nuclear Information System (INIS)

    Burkitt, A.N.; Irving, A.C.

    1988-01-01

    Two of the methods that are widely used in lattice gauge theory calculations requiring inversion of the fermion matrix are the Lanczos and the conjugate gradient algorithms. Those algorithms are already known to be closely related. In fact for matrix inversion, in exact arithmetic, they give identical results at each iteration and are just alternative formulations of a single algorithm. This equivalence survives rounding errors. We give the identities between the coefficients of the two formulations, enabling many of the best features of them to be combined. (orig.)

  15. Syrio. A program for the calculation of the inverse of a matrix

    International Nuclear Information System (INIS)

    Garcia de Viedma Alonso, L.

    1963-01-01

    SYRIO is a code for the inversion of a non-singular square matrix whose order is not higher than 40, for the UNIVAC-UCT (SS-90). The treatment starts from the inversion formula of Sherman and Morrison and, following Herbert S. Wilf's method for special matrices, generalizes the procedure to any kind of non-singular square matrix. The limitation on the matrix order is not inherent to the program itself but is imposed by the storage capacity of the computer for which it was coded. (Author)
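
    SYRIO itself is a 1960s code written for the UNIVAC-UCT; the NumPy sketch below only demonstrates the Sherman and Morrison rank-one update formula that the record says the code builds on, with a random test matrix as a stand-in.

      import numpy as np

      def sherman_morrison_update(A_inv, u, v):
          """Return (A + u v^T)^{-1} given A^{-1}, via the Sherman-Morrison formula."""
          Au = A_inv @ u
          vA = v @ A_inv
          denom = 1.0 + v @ Au
          return A_inv - np.outer(Au, vA) / denom

      # usage: update the inverse after a rank-one modification of A
      rng = np.random.default_rng(1)
      A = rng.normal(size=(5, 5)) + 5 * np.eye(5)
      A_inv = np.linalg.inv(A)
      u, v = rng.normal(size=5), rng.normal(size=5)
      updated = sherman_morrison_update(A_inv, u, v)
      assert np.allclose(updated, np.linalg.inv(A + np.outer(u, v)))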

  16. High performance matrix inversion based on LU factorization for multicore architectures

    KAUST Repository

    Dongarra, Jack

    2011-01-01

    The goal of this paper is to present an efficient implementation of an explicit matrix inversion of general square matrices on multicore computer architecture. The inversion procedure is split into four steps: 1) computing the LU factorization, 2) inverting the upper triangular U factor, 3) solving a linear system, whose solution yields the inverse of the original matrix, and 4) applying backward column pivoting on the inverted matrix. Using a tile data layout, which represents the matrix in the system memory with an optimized cache-aware format, the computation of the four steps is decomposed into computational tasks. A directed acyclic graph is generated on the fly which represents the program data flow. Its nodes represent tasks and edges the data dependencies between them. Previous implementations of matrix inversion, available in state-of-the-art numerical libraries, suffer from unnecessary synchronization points; these are removed in our implementation in order to fully exploit the parallelism of the underlying hardware. Our algorithmic approach allows us to remove these bottlenecks and to execute the tasks with loose synchronization. A runtime environment system called QUARK is necessary to dynamically schedule our numerical kernels on the available processing units. The reported results from our LU-based matrix inversion implementation significantly outperform the state-of-the-art numerical libraries such as LAPACK (5x), MKL (5x) and ScaLAPACK (2.5x) on a contemporary AMD platform with four sockets and a total of 48 cores for a matrix of size 24000. A power consumption analysis shows that our high performance implementation is also energy efficient and substantially consumes less power than its competitors. © 2011 ACM.
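
    A serial, single-core sketch of the same overall scheme is given below in Python/SciPy; it collapses the paper's steps 2-4 into one triangular solve against the identity, and none of the tile layout, DAG scheduling or QUARK runtime is reproduced.

      import numpy as np
      from scipy.linalg import lu_factor, lu_solve

      def invert_via_lu(A):
          """Explicit inverse of a general square matrix through its LU factorization.
          Only a serial sketch of the overall scheme; the paper's contribution is the
          tiled, task-parallel scheduling of these steps on multicore hardware."""
          lu, piv = lu_factor(A)                 # step 1: P A = L U
          n = A.shape[0]
          return lu_solve((lu, piv), np.eye(n))  # steps 2-4 folded into solving A X = I

      A = np.random.default_rng(2).normal(size=(4, 4)) + 4 * np.eye(4)
      assert np.allclose(invert_via_lu(A) @ A, np.eye(4))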

  17. Time-lapse three-dimensional inversion of complex conductivity data using an active time constrained (ATC) approach

    Science.gov (United States)

    Karaoulis, M.; Revil, A.; Werkema, D.D.; Minsley, B.J.; Woodruff, W.F.; Kemna, A.

    2011-01-01

    Induced polarization (more precisely the magnitude and phase of impedance of the subsurface) is measured using a network of electrodes located at the ground surface or in boreholes. This method yields important information related to the distribution of permeability and contaminants in the shallow subsurface. We propose a new time-lapse 3-D modelling and inversion algorithm to image the evolution of complex conductivity over time. We discretize the subsurface using hexahedron cells. Each cell is assigned a complex resistivity or conductivity value. Using the finite-element approach, we model the in-phase and out-of-phase (quadrature) electrical potentials on the 3-D grid, which are then transformed into apparent complex resistivity. Inhomogeneous Dirichlet boundary conditions are used at the boundary of the domain. The calculation of the Jacobian matrix is based on the principles of reciprocity. The goal of time-lapse inversion is to determine the change in the complex resistivity of each cell of the spatial grid as a function of time. Each model along the time axis is called a 'reference space model'. This approach can be simplified into an inverse problem looking for the optimum of several reference space models using the approximation that the material properties vary linearly in time between two subsequent reference models. Regularizations in both space domain and time domain reduce inversion artefacts and improve the stability of the inversion problem. In addition, the use of the time-lapse equations allows the simultaneous inversion of data obtained at different times in just one inversion step (4-D inversion). The advantages of this new inversion algorithm are demonstrated on synthetic time-lapse data resulting from the simulation of a salt tracer test in a heterogeneous random material described by an anisotropic semi-variogram. ?? 2011 The Authors Geophysical Journal International ?? 2011 RAS.

  18. Calculation of total number of disintegrations after intake of radioactive nuclides using the pseudo inverse matrix

    International Nuclear Information System (INIS)

    Noh, Si Wan; Sol, Jeong; Lee, Jai Ki; Lee, Jong Il; Kim, Jang Lyul

    2012-01-01

    Calculation of the total number of disintegrations after intake of radioactive nuclides is indispensable to calculate a dose coefficient, which means committed effective dose per unit activity (Sv/Bq). In order to calculate the total number of disintegrations analytically, Birchall's algorithm has been commonly used. As described below, an inverse matrix should be calculated in the algorithm. As biokinetic models have become complicated, however, the inverse matrix sometimes does not exist and the total number of disintegrations cannot be calculated. Thus, a numerical method has been applied to the DCAL code used to calculate dose coefficients in ICRP publications and to the IMBA code. In this study, however, we applied the pseudo-inverse matrix to solve the problem that arises when the inverse matrix does not exist. In order to validate our method, the method was applied to two examples and the results were compared to the tabulated data in ICRP publications. MATLAB 2012a was used to calculate the total number of disintegrations, and the expm and pinv MATLAB built-in functions were employed
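
    The NumPy/SciPy sketch below illustrates the quantity being computed: for a first-order compartment model dN/dt = A N, the time-integrated content follows from the matrix exponential and a (pseudo-)inverse, mirroring the expm and pinv calls mentioned in the abstract. The two-compartment model and all rate constants are invented for illustration and are not ICRP biokinetic data.

      import numpy as np
      from scipy.linalg import expm, pinv

      # Minimal compartment-model sketch (hypothetical rates, not ICRP data).
      # dN/dt = A N, N(0) = N0; the time-integrated content over (0, T) is
      # A^{-1} (expm(A T) - I) N0, and -A^{-1} N0 in the limit T -> infinity.
      # Total disintegrations per compartment follow by scaling with the decay constant.
      lam = np.log(2) / 8.02            # decay constant, 1/day (I-131-like half-life, illustrative)
      k12 = 0.3                          # transfer rate, compartment 1 -> 2, 1/day (assumed)
      A = np.array([[-(lam + k12), 0.0],
                    [k12,         -lam]])
      N0 = np.array([1.0, 0.0])          # unit intake into compartment 1

      T = 50 * 365.0                      # integration period in days
      U_finite = pinv(A) @ (expm(A * T) - np.eye(2)) @ N0   # pinv used as in the study
      U_infinite = -pinv(A) @ N0                             # limit for a stable, invertible A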

  19. High performance matrix inversion based on LU factorization for multicore architectures

    KAUST Repository

    Dongarra, Jack; Faverge, Mathieu; Ltaief, Hatem; Luszczek, Piotr R.

    2011-01-01

    on the available processing units. The reported results from our LU-based matrix inversion implementation significantly outperform the state-of-the-art numerical libraries such as LAPACK (5x), MKL (5x) and ScaLAPACK (2.5x) on a contemporary AMD platform with four

  20. Linear System of Equations, Matrix Inversion, and Linear Programming Using MS Excel

    Science.gov (United States)

    El-Gebeily, M.; Yushau, B.

    2008-01-01

    In this note, we demonstrate with illustrations two different ways that MS Excel can be used to solve Linear Systems of Equation, Linear Programming Problems, and Matrix Inversion Problems. The advantage of using MS Excel is its availability and transparency (the user is responsible for most of the details of how a problem is solved). Further, we…

  1. Solution of the inverse scattering problem at fixed energy with non-physical S matrix elements

    International Nuclear Information System (INIS)

    Eberspaecher, M.; Amos, K.; Apagyi, B.

    1999-12-01

    The quantum mechanical inverse elastic scattering problem is solved with the modified Newton-Sabatier method. A set of S matrix elements calculated from a realistic analytic optical model potential serves as input data. It is demonstrated that the quality of the inversion potential can be improved by including non-physical S matrix elements to half, quarter and eighth valued partial waves if the original set does not contain enough information to determine the interaction potential. We demonstrate that results can be very sensitive to the choice of those non-physical S matrix values both with the analytic potential model and in a real application in which the experimental cross section for the symmetrical scattering system of 12C+12C at E=7.998 MeV is analyzed

  2. MATLAB Simulation of Gradient-Based Neural Network for Online Matrix Inversion

    Science.gov (United States)

    Zhang, Yunong; Chen, Ke; Ma, Weimu; Li, Xiao-Dong

    This paper investigates the simulation of a gradient-based recurrent neural network for online solution of the matrix-inverse problem. Several important techniques are employed as follows to simulate such a neural system. 1) Kronecker product of matrices is introduced to transform a matrix-differential-equation (MDE) to a vector-differential-equation (VDE); i.e., finally, a standard ordinary-differential-equation (ODE) is obtained. 2) MATLAB routine "ode45" is introduced to solve the transformed initial-value ODE problem. 3) In addition to various implementation errors, different kinds of activation functions are simulated to show the characteristics of such a neural network. Simulation results substantiate the theoretical analysis and efficacy of the gradient-based neural network for online constant matrix inversion.
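
    A minimal simulation of the described dynamics is sketched below: it integrates dX/dt = -gamma * A^T (A X - I), whose equilibrium is A^{-1}, using SciPy's RK45 in place of MATLAB's ode45, with the identity (linear) activation and an arbitrary 2x2 test matrix.

      import numpy as np
      from scipy.integrate import solve_ivp

      # Gradient-based dynamics whose equilibrium is the matrix inverse.
      # The matrix ODE is flattened to a vector ODE, loosely mirroring the
      # Kronecker-product reformulation described in the abstract.
      A = np.array([[4.0, 1.0],
                    [2.0, 3.0]])
      gamma = 50.0
      I = np.eye(2)

      def rhs(_, x):
          X = x.reshape(2, 2)
          return (-gamma * A.T @ (A @ X - I)).ravel()

      sol = solve_ivp(rhs, (0.0, 1.0), np.zeros(4), rtol=1e-8, atol=1e-10)
      X_final = sol.y[:, -1].reshape(2, 2)
      print(np.allclose(X_final, np.linalg.inv(A), atol=1e-4))   # -> True for this test matrix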

  3. Constraining the composition and thermal state of the mantle beneath Europe from inversion of long-period electromagnetic sounding data

    DEFF Research Database (Denmark)

    Khan, Amir; Connolly, J.A.D.; Olsen, Nils

    2006-01-01

    We reexamine the problem of inverting C responses, covering periods between 1 month and 1 year collected from 42 European observatories, to constrain the internal structure of the Earth. Earlier studies used the C responses, which connect the magnetic vertical component and the horizontal gradient... of the horizontal components of electromagnetic variations, to obtain the conductivity profile of the Earth's mantle. Here, we go beyond this approach by inverting directly for chemical composition and thermal state of the Earth, rather than subsurface electrical conductivity structure. The primary inversion... of geophysical data for compositional parameters, planetary composition, and thermal state is feasible. The inversion indicates most probable lower mantle geothermal gradients of ~0.58 K/km, core mantle boundary temperatures of ~2900 degrees C, bulk Earth molar Mg/Si ratios of ~1...

  4. Constraining surface emissions of air pollutants using inverse modelling: method intercomparison and a new two-step two-scale regularization approach

    Energy Technology Data Exchange (ETDEWEB)

    Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))

    2011-07-15

    When constraining surface emissions of air pollutants using inverse modelling one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives into the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. Intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, detailed information can greatly change according to the method used ranging from smooth, isotropic and short range modifications to not so smooth, non-isotropic and long range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but for the Poisson case the emissions are naturally restricted to be positive and changes are given by means of multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution

  5. Constraining earthquake source inversions with GPS data: 1. Resolution-based removal of artifacts

    Science.gov (United States)

    Page, M.T.; Custodio, S.; Archuleta, R.J.; Carlson, J.M.

    2009-01-01

    We present a resolution analysis of an inversion of GPS data from the 2004 Mw 6.0 Parkfield earthquake. This earthquake was recorded at thirteen 1-Hz GPS receivers, which provides for a truly coseismic data set that can be used to infer the static slip field. We find that the resolution of our inverted slip model is poor at depth and near the edges of the modeled fault plane that are far from GPS receivers. The spatial heterogeneity of the model resolution in the static field inversion leads to artifacts in poorly resolved areas of the fault plane. These artifacts look qualitatively similar to asperities commonly seen in the final slip models of earthquake source inversions, but in this inversion they are caused by a surplus of free parameters. The location of the artifacts depends on the station geometry and the assumed velocity structure. We demonstrate that a nonuniform gridding of model parameters on the fault can remove these artifacts from the inversion. We generate a nonuniform grid with a grid spacing that matches the local resolution length on the fault and show that it outperforms uniform grids, which either generate spurious structure in poorly resolved regions or lose recoverable information in well-resolved areas of the fault. In a synthetic test, the nonuniform grid correctly averages slip in poorly resolved areas of the fault while recovering small-scale structure near the surface. Finally, we present an inversion of the Parkfield GPS data set on the nonuniform grid and analyze the errors in the final model. Copyright 2009 by the American Geophysical Union.

  6. Refining mortality estimates in shark demographic analyses: a Bayesian inverse matrix approach.

    Science.gov (United States)

    Smart, Jonathan J; Punt, André E; White, William T; Simpfendorfer, Colin A

    2018-01-18

    Leslie matrix models are an important analysis tool in conservation biology that are applied to a diversity of taxa. The standard approach estimates the finite rate of population growth (λ) from a set of vital rates. In some instances, an estimate of λ is available, but the vital rates are poorly understood and can be solved for using an inverse matrix approach. However, these approaches are rarely attempted due to prerequisites of information on the structure of age or stage classes. This study addressed this issue by using a combination of Monte Carlo simulations and the sample-importance-resampling (SIR) algorithm to solve the inverse matrix problem without data on population structure. This approach was applied to the grey reef shark (Carcharhinus amblyrhynchos) from the Great Barrier Reef (GBR) in Australia to determine the demography of this population. Additionally, these outputs were applied to another heavily fished population from Papua New Guinea (PNG) that requires estimates of λ for fisheries management. The SIR analysis determined that natural mortality (M) and total mortality (Z) based on indirect methods have previously been overestimated for C. amblyrhynchos, leading to an underestimated λ. The updated Z distributions determined using SIR provided λ estimates that matched an empirical λ for the GBR population and corrected obvious error in the demographic parameters for the PNG population. This approach provides opportunity for the inverse matrix approach to be applied more broadly to situations where information on population structure is lacking. © 2018 by the Ecological Society of America.
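
    A toy version of the sampling-importance-resampling idea is sketched below: vital rates are drawn from priors, the implied population growth rate lambda is computed as the dominant eigenvalue of the Leslie matrix, and draws are resampled according to their agreement with an observed lambda. All priors, the number of age classes and the observed lambda are invented for illustration and have nothing to do with the shark data of the study.

      import numpy as np

      rng = np.random.default_rng(3)
      n_age, n_draws = 10, 20000
      lambda_obs, lambda_sd = 1.05, 0.02     # hypothetical empirical growth rate and its spread

      def leslie_lambda(fecundity, survival):
          L = np.zeros((n_age, n_age))
          L[0, :] = fecundity                                       # top row: age-specific fecundity
          L[np.arange(1, n_age), np.arange(n_age - 1)] = survival   # sub-diagonal: survival rates
          return np.max(np.abs(np.linalg.eigvals(L)))               # dominant eigenvalue = lambda

      fec = rng.uniform(0.0, 2.0, size=(n_draws, n_age))            # prior draws on fecundity
      surv = rng.uniform(0.5, 0.95, size=(n_draws, n_age - 1))      # prior draws on survival
      lams = np.array([leslie_lambda(f, s) for f, s in zip(fec, surv)])

      weights = np.exp(-0.5 * ((lams - lambda_obs) / lambda_sd) ** 2)
      weights /= weights.sum()
      keep = rng.choice(n_draws, size=2000, replace=True, p=weights)  # resampling step
      posterior_surv = surv[keep]      # posterior draws for survival (hence mortality) rates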

  7. Some Matrix Iterations for Computing Generalized Inverses and Balancing Chemical Equations

    Directory of Open Access Journals (Sweden)

    Farahnaz Soleimani

    2015-11-01

    Full Text Available An application of iterative methods for computing the Moore–Penrose inverse in balancing chemical equations is considered. With the aim to illustrate proposed algorithms, an improved high order hyper-power matrix iterative method for computing generalized inverses is introduced and applied. The improvements of the hyper-power iterative scheme are based on its proper factorization, as well as on the possibility to accelerate the iterations in the initial phase of the convergence. Although the effectiveness of our approach is confirmed on the basis of the theoretical point of view, some numerical comparisons in balancing chemical equations, as well as on randomly-generated matrices are furnished.

  8. Constrained Inversion Of Aem Data For Mapping Of Bathymetry, Seabed Sediments And Aquifers

    DEFF Research Database (Denmark)

    Viezzoli, Andrea; Auken, Esben; Christiansen, Anders Vest

    A shallow (depth ... ) ... sediments and bedrock along the world's coastlines, rivers, lakes, and lagoons. These geological units are extremely important, both environmentally and economically. Airborne electromagnetic (AEM) data... along the Murray river in Australia. In both cases bird height was included as an inversion parameter, allowing compensation for errors in laser altimeter readings over water....

  9. General factorization relations and consistency conditions in the sudden approximation via infinite matrix inversion

    International Nuclear Information System (INIS)

    Chan, C.K.; Hoffman, D.K.; Evans, J.W.

    1985-01-01

    Local, i.e., multiplicative, operators satisfy well-known linear factorization relations wherein matrix elements (between states associated with a complete set of wave functions) can be obtained as a linear combination of those out of the ground state (the input data). Analytic derivation of factorization relations for general state input data results in singular integral expressions for the coefficients, which can, however, be regularized using consistency conditions between matrix elements out of a single (nonground) state. Similar results hold for suitable ''symmetry class'' averaged matrix elements where the symmetry class projection operators are ''complete.'' In several cases where the wave functions or projection operators incorporate orthogonal polynomial dependence, we show that the ground state factorization relations have a simplified structure allowing an alternative derivation of the general factorization relations via an infinite matrix inversion procedure. This form is shown to have some advantages over previous versions. In addition, this matrix inversion procedure obtains all consistency conditions (which is not always the case from regularization of singular integrals)

  10. Improving water content estimation on landslide-prone hillslopes using structurally-constrained inversion of electrical resistivity data

    Science.gov (United States)

    Heinze, Thomas; Möhring, Simon; Budler, Jasmin; Weigand, Maximilian; Kemna, Andreas

    2017-04-01

    Rainfall-triggered landslides are a latent danger in almost any place of the world. Due to climate change heavy rainfalls might occur more often, increasing the risk of landslides. With pore pressure as mechanical trigger, knowledge of water content distribution in the ground is essential for hazard analysis during monitoring of potentially dangerous rainfall events. Geophysical methods like electrical resistivity tomography (ERT) can be utilized to determine the spatial distribution of water content using established soil physical relationships between bulk electrical resistivity and water content. However, often more dominant electrical contrasts due to lithological structures outplay these hydraulic signatures and blur the results in the inversion process. Additionally, the inversion of ERT data requires further constraints. In the standard Occam inversion method, a smoothness constraint is used, assuming that soil properties change softly in space. This applies in many scenarios, as for example during infiltration of water without a clear saturation front. Sharp lithological layers with strongly divergent hydrological parameters, as often found in landslide prone hillslopes, on the other hand, are typically badly resolved by standard ERT. We use a structurally constrained ERT inversion approach for improving water content estimation in landslide prone hills by including a-priori information about lithological layers. Here the standard smoothness constraint is reduced along layer boundaries identified using seismic data or other additional sources. This approach significantly improves water content estimations, because in landslide prone hills often a layer of rather high hydraulic conductivity is followed by a hydraulic barrier like clay-rich soil, causing higher pore pressures. One saturated layer and one almost drained layer typically result also in a sharp contrast in electrical resistivity, assuming that surface conductivity of the soil does not change in

  11. A projected back-tracking line-search for constrained interactive inverse kinematics

    DEFF Research Database (Denmark)

    Engell-Nørregård, Morten Pol; Erleben, Kenny

    2011-01-01

    Inverse kinematics is the problem of manipulating the pose of an articulated figure in order to achieve a desired goal disregarding inertia and forces. One can approach the problem as a non-linear optimization problem or as non-linear equation solving. The former approach is superior in its...... of joint limits in an interactive solver. This makes it possible to compute the pose in each frame without the discontinuities exhibited by existing key frame animation techniques....

  12. A Structure-dependent matrix representation of manipulator kinematics and its inverse solution

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-03-01

    In this paper, derivation of kinematic equations for a six-link manipulator is presented using the homogeneous transformation (A_i matrix) based on the Denavit-Hartenberg method, and additionally a solution procedure of its inverse problem is outlined. In order to examine the validity of a system of equations, solutions were compared with the exact ones of the inverse kinematics (for the same type of a manipulator) expressed in arbitrarily given co-ordinate systems. Through complete agreement of joint solutions between the two, the present purpose was accomplished. As shown in this paper, an explicit description between adjacent links will give a possible clue to a systematic treatment of the inverse problem for a class of manipulators. (author)

  13. A matrix-inversion method for gamma-source mapping from gamma-count data - 59082

    International Nuclear Information System (INIS)

    Bull, Richard K.; Adsley, Ian; Burgess, Claire

    2012-01-01

    Gamma ray counting is often used to survey the distribution of active waste material in various locations. Ideally the output from such surveys would be a map of the activity of the waste. In this paper a simple matrix-inversion method is presented. This allows an array of gamma-count data to be converted to an array of source activities. For each survey area the response matrix is computed using the gamma-shielding code Microshield [1]. This matrix links the activity array to the count array. The activity array is then obtained via matrix inversion. The method was tested on artificially-created arrays of count-data onto which statistical noise had been added. The method was able to reproduce, quite faithfully, the original activity distribution used to generate the dataset. The method has been applied to a number of practical cases, including the distribution of activated objects in a hot cell and to activated Nimonic springs amongst fuel-element debris in vaults at a nuclear plant. (authors)
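
    The core conversion is linear, counts = R x activity, so the activity map follows by inverting (or least-squares solving) the response matrix. The sketch below uses a made-up 3x3 response matrix in place of one computed with a shielding code such as Microshield.

      import numpy as np

      R = np.array([[0.80, 0.15, 0.05],
                    [0.10, 0.75, 0.15],
                    [0.05, 0.20, 0.70]])   # counts per unit activity, detector position i vs source cell j
      counts = np.array([120.0, 95.0, 60.0])            # measured gamma counts at 3 positions

      activity_inv = np.linalg.solve(R, counts)          # direct matrix inversion, as in the paper
      activity_lsq, *_ = np.linalg.lstsq(R, counts, rcond=None)  # least-squares alternative for noisy data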

  14. Cohesive phase-field fracture and a PDE constrained optimization approach to fracture inverse problems

    Energy Technology Data Exchange (ETDEWEB)

    Tupek, Michael R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2016-06-30

    In recent years there has been a proliferation of modeling techniques for forward predictions of crack propagation in brittle materials, including: phase-field/gradient damage models, peridynamics, cohesive-zone models, and G/XFEM enrichment techniques. However, progress on the corresponding inverse problems has been relatively lacking. Taking advantage of key features of existing modeling approaches, we propose a parabolic regularization of Barenblatt cohesive models which borrows extensively from previous phase-field and gradient damage formulations. An efficient explicit time integration strategy for this type of nonlocal fracture model is then proposed and justified. In addition, we present a C++ computational framework for computing input parameter sensitivities efficiently for explicit dynamic problems using the adjoint method. This capability allows for solving inverse problems involving crack propagation to answer interesting engineering questions such as: 1) what is the optimal design topology and material placement for a heterogeneous structure to maximize fracture resistance, 2) what loads must have been applied to a structure for it to have failed in an observed way, 3) what are the existing cracks in a structure given various experimental observations, etc. In this work, we focus on the first of these engineering questions and demonstrate a capability to automatically and efficiently compute optimal designs intended to minimize crack propagation in structures.

  15. Comparing inversion techniques for constraining CO2 fluxes in the Brazilian Amazon Basin with aircraft observations

    Science.gov (United States)

    Chow, V. Y.; Gerbig, C.; Longo, M.; Koch, F.; Nehrkorn, T.; Eluszkiewicz, J.; Ceballos, J. C.; Longo, K.; Wofsy, S. C.

    2012-12-01

    The Balanço Atmosférico Regional de Carbono na Amazônia (BARCA) aircraft program spanned the dry to wet and wet to dry transition seasons in November 2008 & May 2009 respectively. It resulted in ~150 vertical profiles covering the Brazilian Amazon Basin (BAB). With the data we attempt to estimate a carbon budget for the BAB, to determine if regional aircraft experiments can provide strong constraints for a budget, and to compare inversion frameworks when optimizing flux estimates. We use a LPDM to integrate satellite-, aircraft-, & surface-data with mesoscale meteorological fields to link bottom-up and top-down models to provide constraints and error bounds for regional fluxes. The Stochastic Time-Inverted Lagrangian Transport (STILT) model driven by meteorological fields from BRAMS, ECMWF, and WRF are coupled to a biosphere model, the Vegetation Photosynthesis Respiration Model (VPRM), to determine regional CO2 fluxes for the BAB. The VPRM is a prognostic biosphere model driven by MODIS 8-day EVI and LSWI indices along with shortwave radiation and temperature from tower measurements and mesoscale meteorological data. VPRM parameters are tuned using eddy flux tower data from the Large-Scale Biosphere Atmosphere experiment. VPRM computes hourly CO2 fluxes by calculating Gross Ecosystem Exchange (GEE) and Respiration (R) for 8 different vegetation types. The VPRM fluxes are scaled up to the BAB by using time-averaged drivers (shortwave radiation & temperature) from high-temporal resolution runs of BRAMS, ECMWF, and WRF and vegetation maps from SYNMAP and IGBP2007. Shortwave radiation from each mesoscale model is validated using surface data and output from GL 1.2, a global radiation model based on GOES 8 visible imagery. The vegetation maps are updated to 2008 and 2009 using landuse scenarios modeled by Sim Amazonia 2 and Sim Brazil. A priori fluxes modeled by STILT-VPRM are optimized using data from BARCA, eddy covariance sites, and flask measurements. The

  16. Structure constrained semi-nonnegative matrix factorization for EEG-based motor imagery classification.

    Science.gov (United States)

    Lu, Na; Li, Tengfei; Pan, Jinjin; Ren, Xiaodong; Feng, Zuren; Miao, Hongyu

    2015-05-01

    Electroencephalogram (EEG) provides a non-invasive approach to measure the electrical activities of brain neurons and has long been employed for the development of brain-computer interface (BCI). For this purpose, various patterns/features of EEG data need to be extracted and associated with specific events like cue-paced motor imagery. However, this is a challenging task since EEG data are usually non-stationary time series with a low signal-to-noise ratio. In this study, we propose a novel method, called structure constrained semi-nonnegative matrix factorization (SCS-NMF), to extract the key patterns of EEG data in time domain by imposing the mean envelopes of event-related potentials (ERPs) as constraints on the semi-NMF procedure. The proposed method is applicable to general EEG time series, and the extracted temporal features by SCS-NMF can also be combined with other features in frequency domain to improve the performance of motor imagery classification. Real data experiments have been performed using the SCS-NMF approach for motor imagery classification, and the results clearly suggest the superiority of the proposed method. Comparison experiments have also been conducted. The compared methods include ICA, PCA, Semi-NMF, Wavelets, EMD and CSP, which further verified the effectivity of SCS-NMF. The SCS-NMF method could obtain better or competitive performance over the state of the art methods, which provides a novel solution for brain pattern analysis from the perspective of structure constraint. Copyright © 2015 Elsevier Ltd. All rights reserved.

  17. Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization

    KAUST Repository

    Gower, Robert M.; Hanzely, Filip; Richtarik, Peter; Stich, Sebastian

    2018-01-01

    We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite

  18. Analysis of smart beams with piezoelectric elements using impedance matrix and inverse Laplace transform

    International Nuclear Information System (INIS)

    Li, Guo-Qing; Miao, Xing-Yuan; Hu, Yuan-Tai; Wang, Ji

    2013-01-01

    A comprehensive study on smart beams with piezoelectric elements using an impedance matrix and the inverse Laplace transform is presented. Based on the authors’ previous work, the dynamics of some elements in beam-like smart structures are represented by impedance matrix equations, including a piezoelectric stack, a piezoelectric bimorph, an elastic straight beam or a circular curved beam. A further transform is applied to the impedance matrix to obtain a set of implicit transfer function matrices. Apart from the analytical solutions to the matrices of smart beams, one computation procedure is proposed to obtain the impedance matrices and transfer function matrices using FEA. By these means the dynamic solution of the elements in the frequency domain is transformed to that in the Laplacian s-domain and then inversely transformed to the time domain. The connections between the elements and boundary conditions of the smart structures are investigated in detail, and one integrated system equation is finally obtained using the symbolic operation of TF matrices. A procedure is proposed for dynamic analysis and control analysis of the smart beam system using mode superposition and a numerical inverse Laplace transform. The first example is given to demonstrate the building of transfer-function-associated impedance matrices using both FEA and analytical solutions. The second example is to verify the ability of control analysis using a suspended beam with PZT patches under closed-loop control. The third example is designed for dynamic analysis of beams with a piezoelectric stack and a piezoelectric bimorph under various excitations. The last example of one smart beam with a PPF controller shows the applicability to the control analysis of complex systems using the proposed method. All results show good agreement with results in the previous literature. The advantages of the proposed methods are also discussed at the end of this paper. (paper)

  19. S-matrix to potential inversion of low-energy α-12C phase shifts

    Energy Technology Data Exchange (ETDEWEB)

    Cooper, S.G.; Mackintosh, R.S. (Open Univ., Milton Keynes (UK). Dept. of Physics)

    1990-10-22

    The IP S-matrix to potential inversion procedure is applied to phase shifts for selected partial waves over a range of energies below the inelastic threshold for {alpha}-{sup 12}C scattering. The phase shifts were determined by Plaga et al. Potentials found by Buck and Rubio to fit the low-energy alpha cluster resonances need only an increased attraction in the surface to accurately reproduce the phase-shift behaviour. Substantial differences between the potentials for odd and even partial waves are necessary. The surface tail of the potential is postulated to be a threshold effect. (orig.).

  20. S-Matrix to potential inversion of low-energy α-12C phase shifts

    Science.gov (United States)

    Cooper, S. G.; Mackintosh, R. S.

    1990-10-01

    The IP S-matrix to potential inversion procedure is applied to phase shifts for selected partial waves over a range of energies below the inelastic threshold for α-12C scattering. The phase shifts were determined by Plaga et al. Potentials found by Buck and Rubio to fit the low-energy alpha cluster resonances need only an increased attraction in the surface to accurately reproduce the phase-shift behaviour. Substantial differences between the potentials for odd and even partial waves are necessary. The surface tail of the potential is postulated to be a threshold effect.

  1. Inversion of the fermion matrix and the equivalence of the conjugate gradient and Lanczos algorithms

    International Nuclear Information System (INIS)

    Burkitt, A.N.; Irving, A.C.

    1990-01-01

    The Lanczos and conjugate gradient algorithms are widely used in lattice QCD calculations. The previously known close relationship between the two methods is explored and two commonly used implementations are shown to give identically the same results at each iteration, in exact arithmetic, for matrix inversion. The identities between the coefficients of the two algorithms are given, and many of the features of the two algorithms can now be combined. The effects of finite arithmetic are investigated and the particular Lanczos formulation is found to be most stable with respect to rounding errors. (orig.)
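
    For reference, a plain conjugate gradient solver is sketched below in Python; it is the textbook method, not the particular lattice-QCD formulation discussed in the record, and the small symmetric positive-definite test matrix stands in for the (normal-equation form of the) fermion matrix.

      import numpy as np

      def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
          """Plain CG for a symmetric positive-definite matrix A."""
          n = len(b)
          max_iter = max_iter or 10 * n
          x = np.zeros(n)
          r = b - A @ x
          p = r.copy()
          rs_old = r @ r
          for _ in range(max_iter):
              Ap = A @ p
              alpha = rs_old / (p @ Ap)
              x += alpha * p
              r -= alpha * Ap
              rs_new = r @ r
              if np.sqrt(rs_new) < tol:
                  break
              p = r + (rs_new / rs_old) * p
              rs_old = rs_new
          return x

      # usage on a small SPD system
      M = np.random.default_rng(4).normal(size=(6, 6))
      A = M @ M.T + 6 * np.eye(6)
      b = np.ones(6)
      assert np.allclose(A @ conjugate_gradient(A, b), b)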

  2. IMPACT OF MATRIX INVERSION ON THE COMPLEXITY OF THE FINITE ELEMENT METHOD

    Directory of Open Access Journals (Sweden)

    M. Sybis

    2016-04-01

    Full Text Available Purpose. The development of a wide construction market and a desire to design innovative architectural building constructions have resulted in the need to create complex numerical models of objects with increasingly higher computational complexity. The purpose of this work is to show that choosing a proper method for solving the set of equations can improve the calculation time (reduce the complexity) by a few orders of magnitude. Methodology. The article presents an analysis of the impact of the matrix inversion algorithm on the deflection calculation in a beam, using the finite element method (FEM). Based on a literature analysis, common methods of solving sets of equations were determined. From the found solutions, the Gaussian elimination, LU and Cholesky decomposition methods have been implemented to determine the effect of the matrix inversion algorithm used for solving the equation set on the number of computational operations performed. In addition, each of the implemented methods has been further optimized, thereby reducing the number of necessary arithmetic operations. Findings. These optimizations rely on certain properties of the matrix, such as symmetry or a significant number of zero elements. The results of the analysis are presented for divisions of the beam into 5, 50, 100 and 200 nodes, for which the deflection has been calculated. Originality. The main achievement of this work is that it shows the impact of the chosen methodology on the complexity of solving the problem (or, equivalently, the time needed to obtain results). Practical value. The difference between the best (the least complex) and the worst (the most complex) method is of the order of a few orders of magnitude. This result shows that choosing the wrong methodology may significantly increase the time needed to perform the calculation.
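
    The practical point, that the chosen factorization changes the cost of solving the FEM system K u = f, can be seen with a rough timing comparison like the one below; the matrix size and timings are machine dependent, and this NumPy/SciPy sketch does not reproduce the authors' own implementations or operation counts.

      import time
      import numpy as np
      from scipy.linalg import cho_factor, cho_solve, lu_factor, lu_solve

      rng = np.random.default_rng(5)
      n = 800
      M = rng.normal(size=(n, n))
      K = M @ M.T + n * np.eye(n)      # symmetric positive definite, like an FEM stiffness matrix
      f = rng.normal(size=n)

      t0 = time.perf_counter(); u_inv = np.linalg.inv(K) @ f;        t_inv = time.perf_counter() - t0
      t0 = time.perf_counter(); u_lu = lu_solve(lu_factor(K), f);    t_lu = time.perf_counter() - t0
      t0 = time.perf_counter(); u_cho = cho_solve(cho_factor(K), f); t_cho = time.perf_counter() - t0

      print(f"explicit inverse: {t_inv:.4f}s, LU: {t_lu:.4f}s, Cholesky: {t_cho:.4f}s")
      assert np.allclose(u_inv, u_cho) and np.allclose(u_lu, u_cho)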

  3. Inverse problem to constrain the controlling parameters of large-scale heat transport processes: The Tiberias Basin example

    Science.gov (United States)

    Goretzki, Nora; Inbar, Nimrod; Siebert, Christian; Möller, Peter; Rosenthal, Eliyahu; Schneider, Michael; Magri, Fabien

    2015-04-01

    Salty and thermal springs exist along the lakeshore of the Sea of Galilee, which covers most of the Tiberias Basin (TB) in the northern Jordan- Dead Sea Transform, Israel/Jordan. As it is the only freshwater reservoir of the entire area, it is important to study the salinisation processes that pollute the lake. Simulations of thermohaline flow along a 35 km NW-SE profile show that meteoric and relic brines are flushed by the regional flow from the surrounding heights and thermally induced groundwater flow within the faults (Magri et al., 2015). Several model runs with trial and error were necessary to calibrate the hydraulic conductivity of both faults and major aquifers in order to fit temperature logs and spring salinity. It turned out that the hydraulic conductivity of the faults ranges between 30 and 140 m/yr whereas the hydraulic conductivity of the Upper Cenomanian aquifer is as high as 200 m/yr. However, large-scale transport processes are also dependent on other physical parameters such as thermal conductivity, porosity and fluid thermal expansion coefficient, which are hardly known. Here, inverse problems (IP) are solved along the NW-SE profile to better constrain the physical parameters (a) hydraulic conductivity, (b) thermal conductivity and (c) thermal expansion coefficient. The PEST code (Doherty, 2010) is applied via the graphical interface FePEST in FEFLOW (Diersch, 2014). The results show that both thermal and hydraulic conductivity are consistent with the values determined with the trial and error calibrations. Besides being an automatic approach that speeds up the calibration process, the IP allows to cover a wide range of parameter values, providing additional solutions not found with the trial and error method. Our study shows that geothermal systems like TB are more comprehensively understood when inverse models are applied to constrain coupled fluid flow processes over large spatial scales. References Diersch, H.-J.G., 2014. FEFLOW Finite

  4. An Improved TA-SVM Method Without Matrix Inversion and Its Fast Implementation for Nonstationary Datasets.

    Science.gov (United States)

    Shi, Yingzhong; Chung, Fu-Lai; Wang, Shitong

    2015-09-01

    Recently, a time-adaptive support vector machine (TA-SVM) is proposed for handling nonstationary datasets. While attractive performance has been reported and the new classifier is distinctive in simultaneously solving several SVM subclassifiers locally and globally by using an elegant SVM formulation in an alternative kernel space, the coupling of subclassifiers brings in the computation of matrix inversion, thus resulting to suffer from high computational burden in large nonstationary dataset applications. To overcome this shortcoming, an improved TA-SVM (ITA-SVM) is proposed using a common vector shared by all the SVM subclassifiers involved. ITA-SVM not only keeps an SVM formulation, but also avoids the computation of matrix inversion. Thus, we can realize its fast version, that is, improved time-adaptive core vector machine (ITA-CVM) for large nonstationary datasets by using the CVM technique. ITA-CVM has the merit of asymptotic linear time complexity for large nonstationary datasets as well as inherits the advantage of TA-SVM. The effectiveness of the proposed classifiers ITA-SVM and ITA-CVM is also experimentally confirmed.

  5. A method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix

    International Nuclear Information System (INIS)

    Godfrin, Elena

    1990-01-01

    This paper presents a method to compute the inverse of a complex n-block tridiagonal quasi-hermitian matrix using adequate partitions of the complete matrix. This type of matrix is very common in quantum mechanics and, more specifically, in solid state physics (e.g., interfaces and superlattices), when the tight-binding approximation is used. The efficiency of the method is analyzed by comparing the required CPU time and work area with those of other common techniques. (Author)
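
    A small NumPy sketch of the partitioning idea is given below: the diagonal blocks of the inverse of a block tridiagonal matrix are obtained from left-to-right and right-to-left Schur-complement sweeps. This is a generic textbook recursion for illustration, not the specific algorithm or CPU/work-area analysis of the paper.

      import numpy as np

      def blocktridiag_inverse_diagonals(D, A, B):
          """Diagonal blocks of the inverse of a block tridiagonal matrix with diagonal
          blocks D[i], sub-diagonal blocks A[i] and super-diagonal blocks B[i]."""
          n, m = len(D), D[0].shape[0]
          sig_L = [np.zeros((m, m)) for _ in range(n)]
          sig_R = [np.zeros((m, m)) for _ in range(n)]
          for i in range(1, n):            # left-to-right Schur-complement sweep
              sig_L[i] = A[i - 1] @ np.linalg.inv(D[i - 1] - sig_L[i - 1]) @ B[i - 1]
          for i in range(n - 2, -1, -1):   # right-to-left Schur-complement sweep
              sig_R[i] = B[i] @ np.linalg.inv(D[i + 1] - sig_R[i + 1]) @ A[i]
          return [np.linalg.inv(D[i] - sig_L[i] - sig_R[i]) for i in range(n)]

      # check against the dense inverse on a random 3-block example with 2x2 blocks
      rng = np.random.default_rng(6)
      n, m = 3, 2
      D = [rng.normal(size=(m, m)) + 4 * np.eye(m) for _ in range(n)]
      A = [rng.normal(size=(m, m)) for _ in range(n - 1)]
      B = [rng.normal(size=(m, m)) for _ in range(n - 1)]
      M = np.zeros((n * m, n * m))
      for i in range(n):
          M[i * m:(i + 1) * m, i * m:(i + 1) * m] = D[i]
      for i in range(n - 1):
          M[(i + 1) * m:(i + 2) * m, i * m:(i + 1) * m] = A[i]
          M[i * m:(i + 1) * m, (i + 1) * m:(i + 2) * m] = B[i]
      M_inv = np.linalg.inv(M)
      blocks = blocktridiag_inverse_diagonals(D, A, B)
      print(all(np.allclose(blocks[i], M_inv[i * m:(i + 1) * m, i * m:(i + 1) * m]) for i in range(n)))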

  6. Inversions

    Science.gov (United States)

    Brown, Malcolm

    2009-01-01

    Inversions are fascinating phenomena. They are reversals of the normal or expected order. They occur across a wide variety of contexts. What do inversions have to do with learning spaces? The author suggests that they are a useful metaphor for the process that is unfolding in higher education with respect to education. On the basis of…

  7. Estimation of Gas Hydrate Saturation Using Constrained Sparse Spike Inversion: Case Study from the Northern South China Sea

    Directory of Open Access Journals (Sweden)

    Xiujuan Wang

    2006-01-01

    Full Text Available Bottom-simulating reflectors (BSRs) were observed beneath the seafloor in the northern continental margin of the South China Sea (SCS). An acoustic impedance profile was derived by the Constrained Sparse Spike Inversion (CSSI) method to provide information on rock properties and to estimate gas hydrate or free gas saturations in the sediments where BSRs are present. In general, gas hydrate-bearing sediments have positive impedance anomalies and free gas-bearing sediments have negative impedance anomalies. Based on well log data and Archie's equation, gas hydrate saturation can be estimated. But in regions where well log data are not available, a quantitative estimate of gas hydrate or free gas saturation is inferred by fitting the theoretical acoustic impedance to the sediment impedance obtained by CSSI. Our study suggests that gas hydrate saturation in the Taixinan Basin is about 10 - 20% of the pore space, with the highest value of 50%, and free gas saturation below the BSR is about 2 - 3% of the pore space, which can rise to 8 - 10% at a topographic high. The free gas is non-continuous and has low content in the southeastern slope of the Dongsha Islands. Moreover, the BSR in the northern continental margin of the SCS is related to the presence of free gas.

  8. Hierarchical probing for estimating the trace of the matrix inverse on toroidal lattices

    Energy Technology Data Exchange (ETDEWEB)

    Stathopoulos, Andreas [College of William and Mary, Williamsburg, VA; Laeuchli, Jesse [College of William and Mary, Williamsburg, VA; Orginos, Kostas [College of William and Mary, Williamsburg, VA; Jefferson Lab

    2013-10-01

    The standard approach for computing the trace of the inverse of a very large, sparse matrix $A$ is to view the trace as the mean value of matrix quadratures, and use the Monte Carlo algorithm to estimate it. This approach is heavily used in our motivating application of Lattice QCD. Often, the elements of $A^{-1}$ display certain decay properties away from the non zero structure of $A$, but random vectors cannot exploit this induced structure of $A^{-1}$. Probing is a technique that, given a sparsity pattern of $A$, discovers elements of $A$ through matrix-vector multiplications with specially designed vectors. In the case of $A^{-1}$, the pattern is obtained by distance-$k$ coloring of the graph of $A$. For sufficiently large $k$, the method produces accurate trace estimates but the cost of producing the colorings becomes prohibitively expensive. More importantly, it is difficult to search for an optimal $k$ value, since none of the work for prior choices of $k$ can be reused.
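
    The baseline that probing improves on is the plain Monte Carlo (Hutchinson-type) estimator sketched below: tr(A^{-1}) is estimated as the mean of z^T A^{-1} z over random Rademacher vectors, each requiring one linear solve. The dense solve and the small test matrix are stand-ins; in Lattice QCD the solves would be iterative and A would be the Dirac operator.

      import numpy as np

      def hutchinson_trace_inverse(A, n_samples=200, rng=None):
          """Monte Carlo estimate of tr(A^{-1}) from matrix quadratures z^T A^{-1} z."""
          rng = rng or np.random.default_rng()
          n = A.shape[0]
          estimates = []
          for _ in range(n_samples):
              z = rng.choice([-1.0, 1.0], size=n)         # Rademacher probe vector
              estimates.append(z @ np.linalg.solve(A, z))  # one linear solve per quadrature
          return np.mean(estimates), np.std(estimates) / np.sqrt(n_samples)

      M = np.random.default_rng(7).normal(size=(50, 50))
      A = M @ M.T + 50 * np.eye(50)
      est, err = hutchinson_trace_inverse(A)
      exact = np.trace(np.linalg.inv(A))   # dense reference, feasible only for small matrices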

  9. A regularized matrix factorization approach to induce structured sparse-low-rank solutions in the EEG inverse problem

    DEFF Research Database (Denmark)

    Montoya-Martinez, Jair; Artes-Rodriguez, Antonio; Pontil, Massimiliano

    2014-01-01

    We consider the estimation of the Brain Electrical Sources (BES) matrix from noisy electroencephalographic (EEG) measurements, commonly named as the EEG inverse problem. We propose a new method to induce neurophysiological meaningful solutions, which takes into account the smoothness, structured...... sparsity, and low rank of the BES matrix. The method is based on the factorization of the BES matrix as a product of a sparse coding matrix and a dense latent source matrix. The structured sparse-low-rank structure is enforced by minimizing a regularized functional that includes the ℓ21-norm of the coding...... matrix and the squared Frobenius norm of the latent source matrix. We develop an alternating optimization algorithm to solve the resulting nonsmooth-nonconvex minimization problem. We analyze the convergence of the optimization procedure, and we compare, under different synthetic scenarios...

  10. Constrained Perturbation Regularization Approach for Signal Estimation Using Random Matrix Theory

    KAUST Repository

    Suliman, Mohamed Abdalla Elhag; Ballal, Tarig; Kammoun, Abla; Al-Naffouri, Tareq Y.

    2016-01-01

    random matrix theory are applied to derive the near-optimum regularizer that minimizes the mean-squared error of the estimator. Simulation results demonstrate that the proposed approach outperforms a set of benchmark regularization methods for various

  11. ENDMEMBER EXTRACTION OF HIGHLY MIXED DATA USING L1 SPARSITY-CONSTRAINED MULTILAYER NONNEGATIVE MATRIX FACTORIZATION

    Directory of Open Access Journals (Sweden)

    H. Fang

    2018-04-01

    Full Text Available Due to the limited spatial resolution of remote hyperspectral sensors, pixels are usually highly mixed in hyperspectral images. Endmember extraction refers to the process of identifying the pure endmember signatures from the mixture, which is an important step towards the utilization of hyperspectral data. Nonnegative matrix factorization (NMF) is a widely used method of endmember extraction due to its effectiveness and convenience. However, most NMF-based methods have single-layer structures, which may have difficulty in effectively learning the structures of highly mixed and complex data. On the other hand, multilayer algorithms have shown great advantages in learning data features and have been widely studied in many fields. In this paper, we present an L1 sparsity-constrained multilayer NMF method for endmember extraction from highly mixed data. Firstly, the multilayer NMF structure is obtained by unfolding NMF into a certain number of layers. In each layer, the abundance matrix is decomposed into the endmember matrix and the abundance matrix of the next layer. Besides, to improve the performance of NMF, we incorporate sparsity constraints into the multilayer NMF model by adding an L1 regularizer on the abundance matrix in each layer. Finally, a layer-wise optimization method based on NeNMF is proposed to train the multilayer NMF structure. Experiments were conducted on both synthetic data and real data. The results demonstrate that our proposed algorithm can achieve better results than several state-of-the-art approaches.
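
    As a baseline for the idea, the sketch below implements a single-layer NMF with an L1 penalty on the abundance matrix using standard multiplicative updates; the multilayer unfolding and the NeNMF-based layer-wise optimizer of the paper are not reproduced, and the toy endmember/abundance data are synthetic.

      import numpy as np

      def sparse_nmf(V, r, lam=0.1, n_iter=500, rng=None):
          """Single-layer NMF with an L1 penalty on H, via multiplicative updates:
          min ||V - W H||_F^2 + lam * sum(H), subject to W, H >= 0."""
          rng = rng or np.random.default_rng()
          m, n = V.shape
          W = rng.uniform(size=(m, r))
          H = rng.uniform(size=(r, n))
          eps = 1e-12
          for _ in range(n_iter):
              H *= (W.T @ V) / (W.T @ W @ H + lam + eps)   # L1 term enters the denominator
              W *= (V @ H.T) / (W @ H @ H.T + eps)
          return W, H

      # toy usage: 3 "endmembers" mixed into 100 "pixels" with 20 "bands"
      rng = np.random.default_rng(8)
      W_true = rng.uniform(size=(20, 3))
      H_true = rng.dirichlet(np.ones(3) * 0.3, size=100).T   # sparse-ish abundances
      V = W_true @ H_true
      W_est, H_est = sparse_nmf(V, r=3, lam=0.05, rng=rng)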

  12. Renormalized nonlinear sensitivity kernel and inverse thin-slab propagator in T-matrix formalism for wave-equation tomography

    International Nuclear Information System (INIS)

    Wu, Ru-Shan; Wang, Benfeng; Hu, Chunhua

    2015-01-01

    We derived the renormalized nonlinear sensitivity operator and the related inverse thin-slab propagator (ITSP) for nonlinear tomographic waveform inversion based on the theory of nonlinear partial derivative operator and its De Wolf approximation. The inverse propagator is based on a renormalization procedure to the forward and inverse transition matrix scattering series. The ITSP eliminates the divergence of the inverse Born series for strong perturbations by stepwise partial summation (renormalization). Numerical tests showed that the inverse Born T-series starts to diverge at moderate perturbation (20% for the given model of Gaussian ball with a radius of 5 wavelength), while the ITSP has no divergence problem for any strong perturbations (up to 100% perturbation for test model). In addition, the ITSP is a non-iterative, marching algorithm with only one sweep, and therefore very efficient in comparison with the iterative inversion based on the inverse-Born scattering series. This convergence and efficiency improvement has potential applications to the iterative procedure of waveform inversion. (paper)

  13. Solution of the nonlinear inverse scattering problem by T-matrix completion. I. Theory.

    Science.gov (United States)

    Levinson, Howard W; Markel, Vadim A

    2016-10-01

    We propose a conceptually different method for solving nonlinear inverse scattering problems (ISPs) such as are commonly encountered in tomographic ultrasound imaging, seismology, and other applications. The method is inspired by the theory of nonlocality of physical interactions and utilizes the relevant formalism. We formulate the ISP as a problem whose goal is to determine an unknown interaction potential V from external scattering data. Although we seek a local (diagonally dominated) V as the solution to the posed problem, we allow V to be nonlocal at the intermediate stages of iterations. This allows us to utilize the one-to-one correspondence between V and the T matrix of the problem. Here it is important to realize that not every T corresponds to a diagonal V and we, therefore, relax the usual condition of strict diagonality (locality) of V. An iterative algorithm is proposed in which we seek T that is (i) compatible with the measured scattering data and (ii) corresponds to an interaction potential V that is as diagonally dominated as possible. We refer to this algorithm as to the data-compatible T-matrix completion. This paper is Part I in a two-part series and contains theory only. Numerical examples of image reconstruction in a strongly nonlinear regime are given in Part II [H. W. Levinson and V. A. Markel, Phys. Rev. E 94, 043318 (2016)10.1103/PhysRevE.94.043318]. The method described in this paper is particularly well suited for very large data sets that become increasingly available with the use of modern measurement techniques and instrumentation.

  14. Charge-constrained auxiliary-density-matrix methods for the Hartree–Fock exchange contribution

    DEFF Research Database (Denmark)

    Merlot, Patrick; Izsak, Robert; Borgoo, Alex

    2014-01-01

    Three new variants of the auxiliary-density-matrix method (ADMM) of Guidon, Hutter, and VandeVondele [J. Chem. Theory Comput. 6, 2348 (2010)] are presented with the common feature that they have a simplified constraint compared with the full orthonormality requirement of the earlier ADMM1 method. ... All ADMM variants are tested for accuracy and performance in all-electron B3LYP calculations with several commonly used basis sets. The effect of the choice of the exchange functional for the ADMM exchange–correction term is also investigated....

  15. Efficient computation of the inverse of gametic relationship matrix for a marked QTL

    Directory of Open Access Journals (Sweden)

    Iwaisaki Hiroaki

    2006-04-01

    Full Text Available Abstract Best linear unbiased prediction of genetic merits for a marked quantitative trait locus (QTL) using mixed model methodology includes the inverse of the conditional gametic relationship matrix (G-1) for a marked QTL. When accounting for inbreeding, the conditional gametic relationships between the two parents of individuals for a marked QTL are necessary to build G-1 directly. Up to now, the tabular method and its adaptations have been used to compute these relationships. In the present paper, an indirect method was implemented at the gametic level to compute these few relationships. Simulation results showed that the indirect method can perform faster, with significantly lower storage requirements, than the adaptation of the tabular method. The efficiency of the indirect method was mainly due to the use of the sparseness of G-1. The indirect method can also be applied to construct an approximate G-1 for populations with incomplete marker data, providing approximate probabilities of descent for QTL alleles for individuals with incomplete marker data.

  16. The impact of initialization procedures on unsupervised unmixing of hyperspectral imagery using the constrained positive matrix factorization

    Science.gov (United States)

    Masalmah, Yahya M.; Vélez-Reyes, Miguel

    2007-04-01

    The authors proposed in previous papers the use of the constrained Positive Matrix Factorization (cPMF) to perform unsupervised unmixing of hyperspectral imagery. Two iterative algorithms were proposed to compute the cPMF, based on the Gauss-Seidel and penalty approaches to solving optimization problems. Results presented in previous papers have shown the potential of the proposed method to perform unsupervised unmixing in HYPERION and AVIRIS imagery. The performance of iterative methods is highly dependent on the initialization scheme: the choice of initialization affects convergence speed, whether or not a global minimum is found, and whether or not spectra with physical relevance are retrieved as endmembers. In this paper, different initializations using random selection, longest-norm pixels, and standard endmember selection routines are studied and compared using simulated and real data.

  17. Approximate L0 constrained Non-negative Matrix and Tensor Factorization

    DEFF Research Database (Denmark)

    Mørup, Morten; Madsen, Kristoffer Hougaard; Hansen, Lars Kai

    2008-01-01

    Non-negative matrix factorization (NMF), i.e. V = WH where V, W and H are all non-negative, has become a widely used blind source separation technique due to its part-based representation. The NMF decomposition is not in general unique, and a part-based representation is not guaranteed. However, ... constraint. In general, solving for a given L0 norm is an NP-hard problem, thus convex relaxation to regularization by the L1 norm is often considered, i.e., minimizing (1/2)||V - WH||^2 + lambda*||H||_1. An open problem is to control the degree of sparsity imposed. We here demonstrate that a full regularization ..., the L1 regularization strength lambda that best approximates a given L0 can be directly accessed and in effect used to control the sparsity of H. The MATLAB code for the NLARS algorithm is available for download....

  18. Matrix inversion tomosynthesis improvements in longitudinal x-ray slice imaging

    International Nuclear Information System (INIS)

    Dobbines, J.T. III.

    1990-01-01

    This patent describes a tomosynthesis apparatus. It comprises: an x-ray tomography machine for producing a plurality of x-ray projection images of a subject including an x-ray source, and detection means; and processing means, connected to receive the plurality of projection images, for: shifting and reconstructing the projection x-ray images to obtain a tomosynthesis matrix of images T; acquiring a blurring matrix F having components which represent out-of-focus and in-focus components of the matrix T; obtaining a matrix P representing only in-focus components of the imaged subject by solving a matrix equation including the matrix T and the matrix F; correcting the matrix P for low spatial frequency components; and displaying images indicative of contents of the matrix P
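
    A much-simplified numerical sketch of the deblurring step described in the claim is given below: the stack of conventionally backprojected planes T is related to the in-focus planes P through a blur matrix F, and P is recovered by a regularized solve of F P = T in the 2-D Fourier domain. The blur matrix here is a made-up constant coupling between neighbouring planes; in practice it is frequency dependent and derived from the imaging geometry.

```python
import numpy as np

def mits_deblur(T_planes, blur_matrix, rcond=1e-3):
    """Matrix inversion tomosynthesis, schematically.

    T_planes    : (n_planes, ny, nx) conventional backprojection reconstructions
    blur_matrix : (n_planes, n_planes) coupling of true planes into blurred planes
                  (assumed here to be the same for every spatial frequency)
    Solves T = F @ P per pixel of the 2-D Fourier domain, using a pseudo-inverse
    to tame ill-conditioning, and returns the deblurred plane stack P.
    """
    n, ny, nx = T_planes.shape
    T_hat = np.fft.fft2(T_planes, axes=(1, 2)).reshape(n, -1)   # plane spectra as rows
    F_pinv = np.linalg.pinv(blur_matrix, rcond=rcond)           # regularized inverse
    P_hat = F_pinv @ T_hat
    return np.real(np.fft.ifft2(P_hat.reshape(n, ny, nx), axes=(1, 2)))

# toy usage: 5 planes, each blurred plane mixes in its two neighbours
F = 0.6 * np.eye(5) + 0.2 * np.eye(5, k=1) + 0.2 * np.eye(5, k=-1)
T = np.random.default_rng(0).random((5, 64, 64))
P = mits_deblur(T, F)
```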

  19. A Lie-Theoretic Perspective on O(n) Mass Matrix Inversion for Serial Manipulators and Polypeptide Chains.

    Science.gov (United States)

    Lee, Kiju; Wang, Yunfeng; Chirikjian, Gregory S

    2007-11-01

    Over the past several decades a number of O(n) methods for forward and inverse dynamics computations have been developed in the multi-body dynamics and robotics literature. A method was developed in 1974 by Fixman for O(n) computation of the mass-matrix determinant for a serial polymer chain consisting of point masses. In other recent papers, we extended this method in order to compute the inverse of the mass matrix for serial chains consisting of point masses. In the present paper, we extend these ideas further and address the case of serial chains composed of rigid bodies. This requires the use of relatively deep mathematics associated with the rotation group, SO(3), and the special Euclidean group, SE(3), and specifically, it requires that one differentiate functions of Lie-group-valued arguments.

  20. Increased accuracy in mineral and hydrogeophysical modelling of HTEM data via detailed description of system transfer function and constrained inversion

    DEFF Research Database (Denmark)

    Viezzoli, Andrea; Christiansen, Anders Vest; Auken, Esben

    This paper aims to provide more insight into the parameters that need to be modelled during inversion of helicopter TEM data for accurate modelling, both for hydrogeophysical and for exploration applications. We use synthetic data to show in detail the effect, both in data and in model space...

  1. Fire emissions constrained by the synergistic use of formaldehyde and glyoxal SCIAMACHY columns in a two-compound inverse modelling framework

    Science.gov (United States)

    Stavrakou, T.; Muller, J.; de Smedt, I.; van Roozendael, M.; Vrekoussis, M.; Wittrock, F.; Richter, A.; Burrows, J.

    2008-12-01

    Formaldehyde (HCHO) and glyoxal (CHOCHO) are carbonyls formed in the oxidation of volatile organic compounds (VOCs) emitted by plants, anthropogenic activities, and biomass burning. They are also directly emitted by fires. Although this primary production represents only a small part of the global source for both species, it can be locally important during intense fire events. Simultaneous observations of formaldehyde and glyoxal retrieved from the SCIAMACHY satellite instrument in 2005, provided by BIRA/IASB and the Bremen group, respectively, are compared with the corresponding columns simulated with the IMAGESv2 global CTM. The chemical mechanism has been optimized with respect to HCHO and CHOCHO production from pyrogenically emitted NMVOCs, based on the Master Chemical Mechanism (MCM) and on an explicit profile for biomass burning emissions. Gas-to-particle conversion of glyoxal in clouds and in aqueous aerosols is considered in the model. In this study we provide top-down estimates for fire emissions of HCHO and CHOCHO precursors by performing a two-compound inversion of emissions using the adjoint of the IMAGES model. The pyrogenic fluxes are optimized at the model resolution. The two-compound inversion offers the advantage that the information gained from measurements of one species constrains the sources of both compounds, due to the existence of common precursors. In a first inversion, only the burnt biomass amounts are optimized. In subsequent simulations, the emission factors for key individual NMVOC compounds are also varied.

  2. Retrieving the correlation matrix from a truncated PCA solution : The inverse principal component problem

    NARCIS (Netherlands)

    ten Berge, Jos M.F.; Kiers, Henk A.L.

    When r Principal Components are available for k variables, the correlation matrix is approximated in the least squares sense by the loading matrix times its transpose. The approximation is generally not perfect unless r = k. In the present paper it is shown that, when r is at or above the Ledermann

  3. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    Science.gov (United States)

    Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T.

    2013-01-01

    Purpose: Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Methods: Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. Results: For scan angles of 20° and 5 mm plane separation, seven MITS planes must be

  4. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    Energy Technology Data Exchange (ETDEWEB)

    Godfrey, Devon J. [Department of Radiation Oncology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Page McAdams, H. [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Duke University Medical Center, Durham, North Carolina 27705 (United States); Dobbins, James T. III [Carl E. Ravin Advanced Imaging Laboratories, Department of Radiology, Department of Biomedical Engineering, Department of Physics, and Medical Physics Graduate Program, Duke University Medical Center, Durham, North Carolina 27705 (United States)

    2013-02-15

    Purpose: Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Methods: Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. Results: For scan angles of 20° and 5 mm plane separation, seven MITS

  5. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis.

    Science.gov (United States)

    Godfrey, Devon J; McAdams, H Page; Dobbins, James T

    2013-02-01

    Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. For scan angles of 20° and 5 mm plane separation, seven MITS planes must be averaged to sufficiently

  6. The effect of averaging adjacent planes for artifact reduction in matrix inversion tomosynthesis

    International Nuclear Information System (INIS)

    Godfrey, Devon J.; Page McAdams, H.; Dobbins, James T. III

    2013-01-01

    Purpose: Matrix inversion tomosynthesis (MITS) uses linear systems theory and knowledge of the imaging geometry to remove tomographic blur that is present in conventional backprojection tomosynthesis reconstructions, leaving in-plane detail rendered clearly. The use of partial-pixel interpolation during the backprojection process introduces imprecision in the MITS modeling of tomographic blur, and creates low-contrast artifacts in some MITS planes. This paper examines the use of MITS slabs, created by averaging several adjacent MITS planes, as a method for suppressing partial-pixel artifacts. Methods: Human chest tomosynthesis projection data, acquired as part of an IRB-approved pilot study, were used to generate MITS planes, three-plane MITS slabs (MITSa3), five-plane MITS slabs (MITSa5), and seven-plane MITS slabs (MITSa7). These were qualitatively examined for partial-pixel artifacts and the visibility of normal and abnormal anatomy. Additionally, small (5 mm) subtle pulmonary nodules were simulated and digitally superimposed upon human chest tomosynthesis projection images, and their visibility was qualitatively assessed in the different reconstruction techniques. Simulated images of a thin wire were used to generate modulation transfer function (MTF) and slice-sensitivity profile curves for the different MITS and MITS slab techniques, and these were examined for indications of partial-pixel artifacts and frequency response uniformity. Finally, mean-subtracted, exposure-normalized noise power spectra (ENNPS) estimates were computed and compared for MITS and MITS slab reconstructions, generated from 10 sets of tomosynthesis projection data of an acrylic slab. The simulated in-plane MTF response of each technique was also combined with the square root of the ENNPS estimate to yield stochastic signal-to-noise ratio (SNR) information about the different reconstruction techniques. Results: For scan angles of 20° and 5 mm plane separation, seven MITS planes must be

  7. Accelerated Stochastic Matrix Inversion: General Theory and Speeding up BFGS Rules for Faster Second-Order Optimization

    KAUST Repository

    Gower, Robert M.

    2018-02-12

    We present the first accelerated randomized algorithm for solving linear systems in Euclidean spaces. One essential problem of this type is the matrix inversion problem. In particular, our algorithm can be specialized to invert positive definite matrices in such a way that all iterates (approximate solutions) generated by the algorithm are positive definite matrices themselves. This opens the way for many applications in the field of optimization and machine learning. As an application of our general theory, we develop the first accelerated (deterministic and stochastic) quasi-Newton updates. Our updates lead to provably more aggressive approximations of the inverse Hessian, and lead to speed-ups over classical non-accelerated rules in numerical experiments. Experiments with empirical risk minimization show that our rules can accelerate training of machine learning models.
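
    For intuition, the sketch below implements a plain (non-accelerated) sketch-and-project iteration for approximating the inverse of a symmetric positive definite matrix; it only illustrates the family of randomized methods that the paper accelerates, and the sketch size and test matrix are arbitrary.

```python
import numpy as np

def sketch_and_project_inverse(A, tau=5, iters=400, seed=0):
    """Randomized iteration X <- X + S (S^T A S)^{-1} S^T (I - A X) for SPD A;
    each step enforces the sketched equation S^T A X = S^T."""
    n = A.shape[0]
    rng = np.random.default_rng(seed)
    X = np.zeros((n, n))
    I = np.eye(n)
    for _ in range(iters):
        S = rng.standard_normal((n, tau))                 # Gaussian sketch
        AS = A @ S
        M = np.linalg.solve(S.T @ AS, S.T @ (I - A @ X))  # small tau x tau solve
        X += S @ M
    return X

# toy usage: random SPD matrix
rng = np.random.default_rng(1)
B = rng.standard_normal((50, 50))
A = B @ B.T + 50 * np.eye(50)
X = sketch_and_project_inverse(A)
print(np.linalg.norm(A @ X - np.eye(50)))                 # residual shrinks with more iterations
```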

  8. Retrieval of the projected potential by inversion from the scattering matrix in electron-crystal scattering

    International Nuclear Information System (INIS)

    Allen, L.J.; Spargo, A.E.C.; Leeb, H.

    1998-01-01

    The retrieval of a unique crystal potential from the scattering matrix S in high energy transmission electron diffraction is discussed. It is shown that, in general, data taken at a single orientation are not sufficient to determine all the elements of S. Additional measurements with tilted incident beam are required for the determination of the whole S-matrix. An algorithm for the extraction of the crystal potential from the S-matrix measured at a single energy and thickness is presented. The limiting case of thin crystals is discussed. Several examples with simulated data are considered

  9. Monotone matrix transformations defined by the group inverse and simultaneous diagonalizability

    International Nuclear Information System (INIS)

    Bogdanov, I I; Guterman, A E

    2007-01-01

    Bijective linear transformations of the matrix algebra over an arbitrary field that preserve simultaneous diagonalizability are characterized. This result is used for the characterization of bijective linear monotone transformations. Bibliography: 28 titles.

  10. Using an Equity/Performance Matrix to Address Salary Compression/Inversion and Performance Pay Issues

    Science.gov (United States)

    Richardson, Peter; Thomas, Steven

    2013-01-01

    Pay compression and inversion are significant problems for many organizations and are often severe in schools of business in particular. At the same time, there is more insistence on showing accountability and paying employees based on performance. The authors explain and show a detailed example of how to use a Compensation Equity/Performance…

  11. Constraining the Magmatic System at Mount St. Helens (2004-2008) Using Bayesian Inversion With Physics-Based Models Including Gas Escape and Crystallization

    International Nuclear Information System (INIS)

    Wong, Ying-Qi; Segall, Paul; Bradley, Andrew; Anderson, Kyle

    2017-01-01

    Physics-based models of volcanic eruptions track conduit processes as functions of depth and time. When used in inversions, these models permit integration of diverse geological and geophysical data sets to constrain important parameters of magmatic systems. We develop a 1-D steady-state conduit model for effusive eruptions including equilibrium crystallization and gas transport through the conduit and compare with the quasi-steady dome growth phase of Mount St. Helens in 2005. Viscosity increase resulting from pressure-dependent crystallization leads to a natural transition from viscous flow to frictional sliding on the conduit margin. Erupted mass flux depends strongly on wall rock and magma permeabilities due to their impact on magma density. Including both lateral and vertical gas transport reveals competing effects that produce nonmonotonic behavior in the mass flux when increasing magma permeability. Using this physics-based model in a Bayesian inversion, we link data sets from Mount St. Helens such as extrusion flux and earthquake depths with petrological data to estimate unknown model parameters, including magma chamber pressure and water content, magma permeability constants, conduit radius, and friction along the conduit walls. Even with this relatively simple model and limited data, we obtain improved constraints on important model parameters. We find that the magma chamber had low (<5 wt %) total volatiles and that the magma permeability scale is well constrained at ~10^-11.4 m^2 to reproduce observed dome rock porosities. Compared with previous results, higher magma overpressure and lower wall friction are required to compensate for increased viscous resistance while keeping the extrusion rate at the observed value.

  12. Matrix-Inversion-Free Compressed Sensing With Variable Orthogonal Multi-Matching Pursuit Based on Prior Information for ECG Signals.

    Science.gov (United States)

    Cheng, Yih-Chun; Tsai, Pei-Yun; Huang, Ming-Hao

    2016-05-19

    Low-complexity compressed sensing (CS) techniques for monitoring electrocardiogram (ECG) signals in wireless body sensor networks (WBSNs) are presented. The prior probability of ECG sparsity in the wavelet domain is first exploited. Then, a variable orthogonal multi-matching pursuit (vOMMP) algorithm that consists of two phases is proposed. In the first phase, the orthogonal matching pursuit (OMP) algorithm is adopted to effectively augment the support set with reliable indices, and in the second phase, orthogonal multi-matching pursuit (OMMP) is employed to rescue the missing indices. The reconstruction performance is thus enhanced with the prior information and the vOMMP algorithm. Furthermore, the computation-intensive pseudo-inverse operation is simplified by the matrix-inversion-free (MIF) technique based on QR decomposition. The vOMMP-MIF CS decoder is then implemented in 90 nm CMOS technology. The QR decomposition is accomplished by two systolic arrays working in parallel. The implementation supports three settings for obtaining 40, 44, and 48 coefficients in the sparse vector. From the measurement results, the power consumption is 11.7 mW at 0.9 V and 12 MHz. Compared to prior chip implementations, our design shows good hardware efficiency and is suitable for low-energy applications.
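
    The matrix-inversion-free ingredient can be illustrated with a plain OMP loop in which the least-squares coefficients come from a QR factorization of the selected columns. This is a schematic single-phase sketch (no OMMP rescue phase, no prior-information initialization, no systolic-array QR); the dictionary, sparsity level and signal are invented for the example.

```python
import numpy as np

def omp_qr(Phi, y, k):
    """Orthogonal matching pursuit; the LS solve uses QR instead of an explicit
    (Phi_S^T Phi_S)^{-1}, which is the 'matrix-inversion-free' ingredient."""
    residual, support = y.copy(), []
    for _ in range(k):
        j = int(np.argmax(np.abs(Phi.T @ residual)))   # most correlated atom
        if j not in support:
            support.append(j)
        Q, R = np.linalg.qr(Phi[:, support])           # thin QR of selected columns
        coeffs = np.linalg.solve(R, Q.T @ y)           # triangular system, no explicit inverse
        residual = y - Phi[:, support] @ coeffs
    x = np.zeros(Phi.shape[1])
    x[support] = coeffs
    return x

# toy usage: recover a 5-sparse vector from 60 random projections
rng = np.random.default_rng(0)
Phi = rng.standard_normal((60, 200)) / np.sqrt(60)
x_true = np.zeros(200)
x_true[rng.choice(200, 5, replace=False)] = rng.standard_normal(5)
x_hat = omp_qr(Phi, Phi @ x_true, k=5)
```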

  13. Syrio. A program for the calculation of the inverse of a matrix; Syrio. Programa para el calculo de la inversa de una matriz

    Energy Technology Data Exchange (ETDEWEB)

    Garcia de Viedma Alonso, L.

    1963-07-01

    SYRIO is a code for the inversion of a non-singular square matrix whose order is not higher than 40, written for the UNIVAC-UCT (SS-90). The treatment starts from the inversion formula of Sherman and Morrison and, following Herbert S. Wilf's method for special matrices, generalizes the procedure to any kind of non-singular square matrix. The limitation on the matrix order is not inherent to the program itself but is imposed by the storage capacity of the computer for which it was coded. (Author)
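
    As a reminder of the underlying identity, the sketch below builds the inverse of a matrix by starting from the identity and applying the Sherman-Morrison rank-one update once per column, which is roughly the spirit of the approach described; the column ordering can fail if an intermediate matrix happens to be singular, and the test matrix is arbitrary.

```python
import numpy as np

def sherman_morrison(Binv, u, v):
    """Inverse of (B + u v^T) given B^{-1}, via the Sherman-Morrison formula."""
    Bu = Binv @ u
    vB = v @ Binv
    return Binv - np.outer(Bu, vB) / (1.0 + v @ Bu)

def invert_by_rank_one_updates(A):
    """Write A = I + sum_k (a_k - e_k) e_k^T and update the inverse column by column."""
    n = A.shape[0]
    Binv = np.eye(n)                      # inverse of the starting matrix (the identity)
    for k in range(n):
        u = A[:, k] - np.eye(n)[:, k]     # change needed in column k
        v = np.eye(n)[:, k]
        Binv = sherman_morrison(Binv, u, v)
    return Binv

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
print(np.allclose(invert_by_rank_one_updates(A), np.linalg.inv(A)))
```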

  15. A genetic meta-algorithm-assisted inversion approach: hydrogeological study for the determination of volumetric rock properties and matrix and fluid parameters in unsaturated formations

    Science.gov (United States)

    Szabó, Norbert Péter

    2018-03-01

    An evolutionary inversion approach is suggested for the interpretation of nuclear and resistivity logs measured by direct-push tools in shallow unsaturated sediments. The efficiency of formation evaluation is improved by estimating simultaneously (1) the petrophysical properties that vary rapidly along a drill hole with depth and (2) the zone parameters that can be treated as constant, in one inversion procedure. In the workflow, the fractional volumes of water, air, matrix and clay are estimated in adjacent depths by linearized inversion, whereas the clay and matrix properties are updated using a float-encoded genetic meta-algorithm. The proposed inversion method provides an objective estimate of the zone parameters that appear in the tool response equations applied to solve the forward problem, which can significantly increase the reliability of the petrophysical model as opposed to setting these parameters arbitrarily. The global optimization meta-algorithm not only assures the best fit between the measured and calculated data but also gives a reliable solution, practically independent of the initial model, as laboratory data are unnecessary in the inversion procedure. The feasibility test uses engineering geophysical sounding logs observed in an unsaturated loessy-sandy formation in Hungary. The multi-borehole extension of the inversion technique is developed to determine the petrophysical properties and their estimation errors along a profile of drill holes. The genetic meta-algorithmic inversion method is recommended for hydrogeophysical logging applications of various kinds to automatically extract the volumetric ratios of rock and fluid constituents as well as the most important zone parameters in a reliable inversion procedure.

  16. Modified Covariance Matrix Adaptation – Evolution Strategy algorithm for constrained optimization under uncertainty, application to rocket design

    Directory of Open Access Journals (Sweden)

    Chocat Rudy

    2015-01-01

    Full Text Available The design of complex systems often induces a constrained optimization problem under uncertainty. An adaptation of the CMA-ES(λ, μ) optimization algorithm is proposed in order to efficiently handle the constraints in the presence of noise. The update mechanisms of the parametrized distribution used to generate the candidate solutions are modified. The constraint-handling method reduces the semi-principal axes of the probable search ellipsoid in the directions violating the constraints. The proposed approach is compared with existing approaches on three analytic optimization problems to highlight the efficiency and robustness of the algorithm. The proposed method is then used to design a two-stage solid-propulsion launch vehicle.

  17. Inverse modeling of rainfall infiltration with a dual permeability approach using different matrix-fracture coupling variants.

    Science.gov (United States)

    Blöcher, Johanna; Kuraz, Michal

    2017-04-01

    In this contribution we propose implementations of the dual permeability model with different inter-domain exchange descriptions and metaheuristic optimization algorithms for parameter identification and mesh optimization. We compare variants of the coupling term with different numbers of parameters to test if a reduction of parameters is feasible. This can reduce parameter uncertainty in inverse modeling, but also allow for different conceptual models of the domain and matrix coupling. The different variants of the dual permeability model are implemented in the open-source objective library DRUtES written in FORTRAN 2003/2008 in 1D and 2D. For parameter identification we use adaptations of the particle swarm optimization (PSO) and Teaching-learning-based optimization (TLBO), which are population-based metaheuristics with different learning strategies. These are high-level stochastic-based search algorithms that don't require gradient information or a convex search space. Despite increasing computing power and parallel processing, an overly fine mesh is not feasible for parameter identification. This creates the need to find a mesh that optimizes both accuracy and simulation time. We use a bi-objective PSO algorithm to generate a Pareto front of optimal meshes to account for both objectives. The dual permeability model and the optimization algorithms were tested on virtual data and field TDR sensor readings. The TDR sensor readings showed a very steep increase during rapid rainfall events and a subsequent steep decrease. This was theorized to be an effect of artificial macroporous envelopes surrounding TDR sensors creating an anomalous region with distinct local soil hydraulic properties. One of our objectives is to test how well the dual permeability model can describe this infiltration behavior and what coupling term would be most suitable.

  18. Introduction to the mathematics of inversion in remote sensing and indirect measurements

    CERN Document Server

    Twomey, S

    2013-01-01

    Developments in Geomathematics, 3: Introduction to the Mathematics of Inversion in Remote Sensing and Indirect Measurements focuses on the application of the mathematics of inversion in remote sensing and indirect measurements, including vectors and matrices, eigenvalues and eigenvectors, and integral equations. The publication first examines simple problems involving inversion, theory of large linear systems, and physical and geometric aspects of vectors and matrices. Discussions focus on geometrical view of matrix operations, eigenvalues and eigenvectors, matrix products, inverse of a matrix, transposition and rules for product inversion, and algebraic elimination. The manuscript then tackles the algebraic and geometric aspects of functions and function space and linear inversion methods, as well as the algebraic and geometric nature of constrained linear inversion, least squares solution, approximation by sums of functions, and integral equations. The text examines information content of indirect sensing m...
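
    A standard construction from this body of work is constrained linear inversion, in which a smoothness constraint regularizes an otherwise ill-posed kernel inversion. The sketch below uses a second-difference constraint matrix and an invented smooth kernel purely for illustration.

```python
import numpy as np

def constrained_linear_inversion(A, y, gamma=1e-2):
    """Twomey-Phillips style solution x = (A^T A + gamma H)^{-1} A^T y,
    with H built from a second-difference smoothing operator."""
    n = A.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)      # (n-2) x n second-difference operator
    H = D.T @ D
    return np.linalg.solve(A.T @ A + gamma * H, A.T @ y)

# toy usage: smooth kernel rows, noisy indirect measurements of a smooth profile
n = 80
x_grid = np.linspace(0.0, 1.0, n)
centers = np.linspace(0.1, 0.9, 20)
A = np.exp(-((x_grid[None, :] - centers[:, None]) / 0.15) ** 2)
x_true = np.sin(2 * np.pi * x_grid) ** 2
y = A @ x_true + 0.01 * np.random.default_rng(0).standard_normal(20)
x_hat = constrained_linear_inversion(A, y, gamma=1e-1)
```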

  19. Incomplete Dirac reduction of constrained Hamiltonian systems

    Energy Technology Data Exchange (ETDEWEB)

    Chandre, C., E-mail: chandre@cpt.univ-mrs.fr

    2015-10-15

    First-class constraints constitute a potential obstacle to the computation of a Poisson bracket in Dirac’s theory of constrained Hamiltonian systems. Using the pseudoinverse instead of the inverse of the matrix defined by the Poisson brackets between the constraints, we show that a Dirac–Poisson bracket can be constructed, even if it corresponds to an incomplete reduction of the original Hamiltonian system. The uniqueness of Dirac brackets is discussed. The relevance of this procedure for infinite dimensional Hamiltonian systems is exemplified.
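
    To make the construction concrete, the sketch below evaluates a Dirac-type bracket using the Moore-Penrose pseudoinverse of the constraint matrix C_ab = {phi_a, phi_b}, so a bracket is returned even when C is singular because first-class constraints are present; the bracket values fed in are arbitrary test numbers, not a physical example from the paper.

```python
import numpy as np

def dirac_bracket(pb_FG, pb_F_phi, C, pb_phi_G):
    """{F, G}_D = {F, G} - {F, phi_a} (C^+)_{ab} {phi_b, G},
    with C^+ the pseudoinverse of C_ab = {phi_a, phi_b}."""
    return pb_FG - pb_F_phi @ np.linalg.pinv(C) @ pb_phi_G

# arbitrary test data: three constraints, one of them first class (C is singular)
C = np.array([[0.0, 1.0, 0.0],
              [-1.0, 0.0, 0.0],
              [0.0, 0.0, 0.0]])
pb_F_phi = np.array([0.5, -1.0, 2.0])   # {F, phi_a}
pb_phi_G = np.array([1.0, 0.0, -3.0])   # {phi_b, G}
print(dirac_bracket(0.7, pb_F_phi, C, pb_phi_G))
```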

  20. Viscoelastic material inversion using Sierra-SD and ROL

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Timothy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aquino, Wilkins [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ridzal, Denis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kouri, Drew Philip [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); van Bloemen Waanders, Bart Gustaaf [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Urbina, Angel [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2014-11-01

    In this report we derive frequency-domain methods for inverse characterization of the constitutive parameters of viscoelastic materials. The inverse problem is cast in a PDE-constrained optimization framework with efficient computation of gradients and Hessian vector products through matrix free operations. The abstract optimization operators for first and second derivatives are derived from first principles. Various methods from the Rapid Optimization Library (ROL) are tested on the viscoelastic inversion problem. The methods described herein are applied to compute the viscoelastic bulk and shear moduli of a foam block model, which was recently used in experimental testing for viscoelastic property characterization.

  1. A new strategy for weak events in sparse networks: the first-motion polarity solutions constrained by single-station waveform inversion

    Czech Academy of Sciences Publication Activity Database

    Fojtíková, Lucia; Zahradník, J.

    2014-01-01

    Roč. 85, č. 6 (2014), s. 1265-1274 ISSN 0895-0695 R&D Projects: GA ČR GAP210/12/2336 Institutional support: RVO:67985891 Keywords : weak events * sparse networks * focal mechanism * waveform inversion Subject RIV: DC - Siesmology, Volcanology, Earth Structure Impact factor: 2.156, year: 2014 http://srl.geoscienceworld.org/content/85/6/1265.full

  2. Inferring Aggregated Functional Traits from Metagenomic Data Using Constrained Non-negative Matrix Factorization: Application to Fiber Degradation in the Human Gut Microbiota.

    Science.gov (United States)

    Raguideau, Sébastien; Plancade, Sandra; Pons, Nicolas; Leclerc, Marion; Laroche, Béatrice

    2016-12-01

    Whole Genome Shotgun (WGS) metagenomics is increasingly used to study the structure and functions of complex microbial ecosystems, both from the taxonomic and functional point of view. Gene inventories of otherwise uncultured microbial communities make the direct functional profiling of microbial communities possible. The concept of community aggregated trait has been adapted from environmental and plant functional ecology to the framework of microbial ecology. Community aggregated traits are quantified from WGS data by computing the abundance of relevant marker genes. They can be used to study key processes at the ecosystem level and correlate environmental factors and ecosystem functions. In this paper we propose a novel model based approach to infer combinations of aggregated traits characterizing specific ecosystemic metabolic processes. We formulate a model of these Combined Aggregated Functional Traits (CAFTs) accounting for a hierarchical structure of genes, which are associated on microbial genomes, further linked at the ecosystem level by complex co-occurrences or interactions. The model is completed with constraints specifically designed to exploit available genomic information, in order to favor biologically relevant CAFTs. The CAFTs structure, as well as their intensity in the ecosystem, is obtained by solving a constrained Non-negative Matrix Factorization (NMF) problem. We developed a multicriteria selection procedure for the number of CAFTs. We illustrated our method on the modelling of ecosystemic functional traits of fiber degradation by the human gut microbiota. We used 1408 samples of gene abundances from several high-throughput sequencing projects and found that four CAFTs only were needed to represent the fiber degradation potential. This data reduction highlighted biologically consistent functional patterns while providing a high quality preservation of the original data. Our method is generic and can be applied to other metabolic processes in

  3. Inferring Aggregated Functional Traits from Metagenomic Data Using Constrained Non-negative Matrix Factorization: Application to Fiber Degradation in the Human Gut Microbiota.

    Directory of Open Access Journals (Sweden)

    Sébastien Raguideau

    2016-12-01

    Full Text Available Whole Genome Shotgun (WGS) metagenomics is increasingly used to study the structure and functions of complex microbial ecosystems, both from the taxonomic and functional point of view. Gene inventories of otherwise uncultured microbial communities make the direct functional profiling of microbial communities possible. The concept of community aggregated trait has been adapted from environmental and plant functional ecology to the framework of microbial ecology. Community aggregated traits are quantified from WGS data by computing the abundance of relevant marker genes. They can be used to study key processes at the ecosystem level and correlate environmental factors and ecosystem functions. In this paper we propose a novel model-based approach to infer combinations of aggregated traits characterizing specific ecosystemic metabolic processes. We formulate a model of these Combined Aggregated Functional Traits (CAFTs) accounting for a hierarchical structure of genes, which are associated on microbial genomes, further linked at the ecosystem level by complex co-occurrences or interactions. The model is completed with constraints specifically designed to exploit available genomic information, in order to favor biologically relevant CAFTs. The CAFTs structure, as well as their intensity in the ecosystem, is obtained by solving a constrained Non-negative Matrix Factorization (NMF) problem. We developed a multicriteria selection procedure for the number of CAFTs. We illustrated our method on the modelling of ecosystemic functional traits of fiber degradation by the human gut microbiota. We used 1408 samples of gene abundances from several high-throughput sequencing projects and found that four CAFTs only were needed to represent the fiber degradation potential. This data reduction highlighted biologically consistent functional patterns while providing a high quality preservation of the original data. Our method is generic and can be applied to other

  4. Inverse scattering transform and soliton solutions for square matrix nonlinear Schrödinger equations with non-zero boundary conditions

    Science.gov (United States)

    Prinari, Barbara; Demontis, Francesco; Li, Sitai; Horikis, Theodoros P.

    2018-04-01

    The inverse scattering transform (IST) with non-zero boundary conditions at infinity is developed for an m × m matrix nonlinear Schrödinger-type equation which, in the case m = 2, has been proposed as a model to describe hyperfine spin F = 1 spinor Bose-Einstein condensates with either repulsive interatomic interactions and anti-ferromagnetic spin-exchange interactions (self-defocusing case), or attractive interatomic interactions and ferromagnetic spin-exchange interactions (self-focusing case). The IST for this system was first presented by Ieda et al. (2007), using a different approach. In our formulation, both the direct and the inverse problems are posed in terms of a suitable uniformization variable which allows the IST to be developed on the standard complex plane, instead of a two-sheeted Riemann surface or the cut plane with discontinuities along the cuts. Analyticity of the scattering eigenfunctions and scattering data, symmetries, properties of the discrete spectrum, and asymptotics are derived. The inverse problem is posed as a Riemann-Hilbert problem for the eigenfunctions, and the reconstruction formula of the potential in terms of eigenfunctions and scattering data is provided. In addition, the general behavior of the soliton solutions is analyzed in detail in the 2 × 2 self-focusing case, including some special solutions not previously discussed in the literature.

  5. Joint inversion of satellite-detected tidal and magnetospheric signals constrains electrical conductivity and water content of the upper mantle and transition zone

    DEFF Research Database (Denmark)

    Grayver, Alexander V.; Munch, F. D.; Kuvshinov, Alexey V.

    2017-01-01

    We present a new global electrical conductivity model of Earth's mantle. The model was derived by using a novel methodology, which is based on inverting satellite magnetic field measurements from different sources simultaneously. Specifically, we estimated responses of magnetospheric origin and ocean tidal magnetic signals from the most recent Swarm and CHAMP data. The challenging task of properly accounting for the ocean effect in the data was addressed through full three-dimensional solution of Maxwell's equations. We show that simultaneous inversion of magnetospheric and tidal magnetic...

  6. The Relaxation Matrix for Symmetric Tops with Inversion Symmetry. I. Effects of Line Coupling on Self-Broadened ν1 and Pure Rotational Bands of NH3

    Science.gov (United States)

    Ma, Q.; Boulet, C.

    2016-01-01

    The Robert-Bonamy formalism has been commonly used to calculate half-widths and shifts of spectral lines for decades. This formalism is based on several approximations. Among them, two have not been fully addressed: the isolated line approximation and the neglect of coupling between the translational and internal motions. Recently, we have shown that the isolated line approximation is not necessary in developing semi-classical line shape theories. Based on this progress, we have been able to develop a new formalism that makes it possible not only to reduce uncertainties on calculated half-widths and shifts, but also to model line mixing effects on spectra starting from the knowledge of the intermolecular potential. In our previous studies, the new formalism had been applied to linear and asymmetric-top molecules. In the present study, the method has been extended to symmetric-top molecules with inversion symmetry. As expected, the inversion splitting induces a complete failure of the isolated line approximation. We have calculated the complex relaxation matrices of self-broadened NH3. The half-widths and shifts in the ν1 and the pure rotational bands are reported in the present paper. When compared with measurements, the calculated half-widths match the experimental data very well, since the inapplicable isolated line approximation has been removed. With respect to the shifts, only qualitative results are obtained and discussed. Calculated off-diagonal elements of the relaxation matrix and a comparison with the observed line mixing effects are reported in the companion paper (Paper II).

  7. The Relaxation Matrix for Symmetric Tops with Inversion Symmetry. II. Line Mixing Effects in the ν1 Band of NH3

    Science.gov (United States)

    Boulet, C.; Ma, Q.

    2016-01-01

    Line mixing effects have been calculated in the ν1 parallel band of self-broadened NH3. The theoretical approach is an extension of a semi-classical model to symmetric-top molecules with inversion symmetry developed in the companion paper [Q. Ma and C. Boulet, J. Chem. Phys. 144, 224303 (2016)]. This model takes into account line coupling effects and hence enables the calculation of the entire relaxation matrix. A detailed analysis of the various coupling mechanisms is carried out for Q and R inversion doublets. The model has been applied to the calculation of the shape of the Q branch and of some R manifolds for which an obvious signature of line mixing effects has been experimentally demonstrated. Comparisons with measurements show that the present formalism leads to an accurate prediction of the available experimental line shapes. Discrepancies between the experimental and theoretical sets of first order mixing parameters are discussed as well as some extensions of both theory and experiment.

  8. Comparison Between 2-D and 3-D Stiffness Matrix Model Simulation of Sasw Inversion for Pavement Structure

    Directory of Open Access Journals (Sweden)

    Sri Atmaja P. Rosidi

    2007-01-01

    Full Text Available The Spectral Analysis of Surface Wave (SASW) method is a non-destructive in situ seismic technique used to assess and evaluate the material stiffness (dynamic elastic modulus) and thickness of pavement layers at low strains. These values can be used analytically to calculate load capacities in order to predict the performance of the pavement system. The SASW method is based on the dispersion phenomenon of Rayleigh waves in layered media. In order to obtain the actual shear wave velocities, 2-D and 3-D models are used in the simulation of the inversion process for best fitting between theoretical and empirical dispersion curves. The objective of this study is to simulate and compare the 2-D and 3-D models of SASW analysis in the construction of the theoretical dispersion curve for pavement structure evaluation. The results showed that the dispersion curve from the 3-D model was closer to the dispersion curve of the actual pavement profile than that from the 2-D model. The wave velocity profiles also showed that the 3-D model used in the SASW analysis is able to detect all the distinct layers of flexible pavement units.

  9. 4-D imaging of seepage in earthen embankments with time-lapse inversion of self-potential data constrained by acoustic emissions localization

    Science.gov (United States)

    Rittgers, J. B.; Revil, A.; Planes, T.; Mooney, M. A.; Koelewijn, A. R.

    2015-02-01

    New methods are required to combine the information contained in passive electrical and seismic signals to detect, localize and monitor hydromechanical disturbances in porous media. We propose a field experiment showing how passive seismic and electrical data can be combined to detect a preferential flow path associated with internal erosion in an earth dam. Continuous passive seismic and electrical (self-potential) monitoring data were recorded during a 7-day full-scale levee (earthen embankment) failure test, conducted in Booneschans, Netherlands in 2012. Spatially coherent acoustic emission events and the development of a self-potential anomaly, associated with induced concentrated seepage and internal erosion phenomena, were identified and imaged near the downstream toe of the embankment, in an area that subsequently developed a series of concentrated water flows and sand boils, and where liquefaction of the embankment toe eventually developed. We present a new 4-D grid-search algorithm for acoustic emission localization in both time and space, and the application of the localization results to add spatially varying constraints to time-lapse 3-D modelling of self-potential data in terms of source current localization. Seismic signal localization results are utilized to build a set of time-invariant yet spatially varying model weights used for the inversion of the self-potential data. The combination of these two passive techniques yields results that are more consistent, in terms of focused groundwater flow, with visual observations on the embankment. This approach to geophysical monitoring of earthen embankments provides an improved approach for early detection and imaging of the development of embankment defects associated with concentrated seepage and internal erosion phenomena. The same approach can be used to detect various types of hydromechanical disturbances at larger scales.
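
    The 4-D grid-search localization can be illustrated with a brute-force search over candidate source positions and origin times that minimizes the misfit between observed and predicted arrival times. Sensor layout, wave speed and arrivals below are invented for the example; the actual algorithm operates on continuous monitoring records.

```python
import numpy as np

def locate_event(sensors, t_obs, velocity, grid, t0_grid):
    """Brute-force 4-D grid search over (x, y, z, t0) minimizing the L2 misfit
    between observed arrival times and t0 + distance / velocity."""
    best, best_misfit = None, np.inf
    for p in grid:                                   # candidate source positions
        travel = np.linalg.norm(sensors - p, axis=1) / velocity
        for t0 in t0_grid:                           # candidate origin times
            misfit = np.sum((t_obs - (t0 + travel)) ** 2)
            if misfit < best_misfit:
                best, best_misfit = (p, t0), misfit
    return best, best_misfit

# invented example: 6 sensors on an embankment, event at (12, 3, -2) m, t0 = 0.05 s
rng = np.random.default_rng(0)
sensors = rng.uniform([0, 0, -5], [30, 6, 0], size=(6, 3))
src, t0_true, v = np.array([12.0, 3.0, -2.0]), 0.05, 1500.0
t_obs = t0_true + np.linalg.norm(sensors - src, axis=1) / v
grid = np.stack(np.meshgrid(np.linspace(0, 30, 31), np.linspace(0, 6, 7),
                            np.linspace(-5, 0, 6), indexing="ij"), -1).reshape(-1, 3)
(best_p, best_t0), misfit = locate_event(sensors, t_obs, v, grid, np.linspace(0, 0.1, 21))
```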

  10. Constraining neutrinoless double beta decay

    International Nuclear Information System (INIS)

    Dorame, L.; Meloni, D.; Morisi, S.; Peinado, E.; Valle, J.W.F.

    2012-01-01

    A class of discrete flavor-symmetry-based models predicts constrained neutrino mass matrix schemes that lead to specific neutrino mass sum-rules (MSR). We show how these theories may constrain the absolute scale of neutrino mass, leading in most of the cases to a lower bound on the neutrinoless double beta decay effective amplitude.

  11. Inverse dose-rate-effects on the expressions of extra-cellular matrix-related genes in low-dose-rate γ-ray irradiated murine cells

    International Nuclear Information System (INIS)

    Sugihara, Takashi; Tanaka, Kimio; Oghiso, Yoichi; Murano, Hayato

    2008-01-01

    Based on the results of previous microarray analyses of murine NIH3T3/PG13Luc cells irradiated with continuous low-dose-rate (LDR) γ-rays or with high-dose-rate irradiation delivered at the end of the LDR-irradiation period (end-HDR), inverse dose-rate effects on gene expression levels were observed. To compare the effects of LDR- and HDR-irradiation, HDR-irradiations were performed at two different times: one (ini-HDR) at the start of the LDR-irradiation period and the other (end-HDR) at its end. The up-regulated genes were classified into two types: one type was up-regulated by LDR-, ini-HDR-, and end-HDR-irradiation, such as Cdkn1a and Ccng1, which have been reported as p53-dependent genes; the other was up-regulated by LDR- and ini-HDR-irradiation, such as pro-collagen TypeIa2/Colla2, TenascinC/Tnc, and Fibulin5/Fbln5, which have been reported as extra-cellular matrix-related (ECM) genes. The time-dependent gene expression patterns under LDR-irradiation were also classified into two types: an early response, as for Cdkn1a and Ccng1, and a delayed response, as for the ECM genes, which shows no linearity with total dose. The protein expression of Cdkn1a increased dose-dependently in LDR- and end-HDR-irradiations, but the patterns of p53Ser15/18 and MDM2 in LDR-irradiation differed from those in end-HDR-irradiation. Furthermore, the expression levels of the ECM genes in embryonic fibroblasts from p53-deficient mice were not increased by LDR- or end-HDR-irradiation, so the delayed expression of the ECM genes appears to be regulated by p53. Consequently, the inverse dose-rate effects on the expression levels of the ECM genes in LDR- and end-HDR-irradiation may be explained by differences in the time response of p53 status. (author)

  12. Calculation of the inverse data space via sparse inversion

    KAUST Repository

    Saragiotis, Christos

    2011-01-01

    The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function while constraining the L1 norm of the solution, being the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal.
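
    A common way to impose such an L1 constraint is iterative soft thresholding. The generic sketch below minimizes a least-squares misfit with an L1 penalty and merely stands in for the (much larger, data-space) inversion described; the operator and data are invented.

```python
import numpy as np

def ista(A, y, lam=0.1, iters=500):
    """Minimize 0.5*||A x - y||^2 + lam*||x||_1 by iterative soft thresholding."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - y)
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((100, 300))
x_true = np.zeros(300)
x_true[rng.choice(300, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true + 0.01 * rng.standard_normal(100), lam=0.05)
```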

  13. Technical note: Avoiding the direct inversion of the numerator relationship matrix for genotyped animals in single-step genomic best linear unbiased prediction solved with the preconditioned conjugate gradient.

    Science.gov (United States)

    Masuda, Y; Misztal, I; Legarra, A; Tsuruta, S; Lourenco, D A L; Fragomeni, B O; Aguilar, I

    2017-01-01

    This paper evaluates an efficient implementation for multiplying the inverse of the numerator relationship matrix for genotyped animals by a vector. The computation is required for solving mixed model equations in single-step genomic BLUP (ssGBLUP) with the preconditioned conjugate gradient (PCG). The inverse can be decomposed into sparse matrices that are blocks of the sparse inverse of a numerator relationship matrix including genotyped animals and their ancestors. The elements of this sparse inverse were rapidly calculated with Henderson's rule and stored as sparse matrices in memory. The multiplication was implemented as a series of sparse matrix-vector multiplications. Diagonal elements of the inverse, which were required as preconditioners in PCG, were approximated with a Monte Carlo method using 1,000 samples. The efficient implementation was compared with explicit inversion using 3 data sets including about 15,000, 81,000, and 570,000 genotyped animals selected from populations with 213,000, 8.2 million, and 10.7 million pedigree animals, respectively. The explicit inversion required 1.8 GB, 49 GB, and 2,415 GB (estimated) of memory, respectively, and 42 s, 56 min, and 13.5 d (estimated), respectively, for the computations. The efficient implementation required <1 MB, 2.9 GB, and 2.3 GB of memory, respectively, and <1 s, 3 min, and 5 min, respectively, for setting up. Only <1 s was required for the multiplication in each PCG iteration for any of the data sets. When the equations in ssGBLUP are solved with the PCG algorithm, this inverse is no longer a limiting factor in the computations.
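
    The computational pattern, applying a matrix implicitly inside a preconditioned conjugate gradient solver, can be sketched generically: the coefficient matrix is passed as a function (in the application above, a series of sparse matrix-vector products) and only a diagonal preconditioner is stored. The small dense test system is just for demonstration.

```python
import numpy as np

def pcg(apply_A, b, diag_precond, tol=1e-10, max_iter=1000):
    """Preconditioned conjugate gradient with a matrix-free operator apply_A(v)."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    z = r / diag_precond                 # Jacobi preconditioning
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Ap = apply_A(p)
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol:
            break
        z = r / diag_precond
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

# demonstration on a small SPD system; in ssGBLUP apply_A would chain sparse products
rng = np.random.default_rng(0)
M = rng.standard_normal((200, 200))
A = M @ M.T + 200 * np.eye(200)
b = rng.standard_normal(200)
x = pcg(lambda v: A @ v, b, np.diag(A))
print(np.linalg.norm(A @ x - b))
```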

  14. A cut-&-paste strategy for the 3-D inversion of helicopter-borne electromagnetic data - I. 3-D inversion using the explicit Jacobian and a tensor-based formulation

    Science.gov (United States)

    Scheunert, M.; Ullmann, A.; Afanasjew, M.; Börner, R.-U.; Siemon, B.; Spitzer, K.

    2016-06-01

    We present an inversion concept for helicopter-borne frequency-domain electromagnetic (HEM) data capable of reconstructing 3-D conductivity structures in the subsurface. Standard interpretation procedures often involve laterally constrained stitched 1-D inversion techniques to create pseudo-3-D models that are largely representative of smoothly varying conductivity distributions in the subsurface. Pronounced lateral conductivity changes may, however, produce significant artifacts that can lead to serious misinterpretation. Still, 3-D inversions of entire survey data sets are numerically very expensive. Our approach is therefore based on a cut-&-paste strategy whereby the full 3-D inversion needs to be applied only to those parts of the survey where the 1-D inversion actually fails. The introduced 3-D Gauss-Newton inversion scheme exploits information given by a state-of-the-art (laterally constrained) 1-D inversion. For a typical HEM measurement, an explicit representation of the Jacobian matrix is inevitable, owing to the unique transmitter-receiver relation. We introduce tensor quantities which facilitate the matrix assembly of the forward operator as well as the efficient calculation of the Jacobian. The finite difference forward operator incorporates the displacement currents because they may seriously affect the electromagnetic response at frequencies above 100. Finally, we deliver the proof of concept for the inversion using a synthetic data set with a noise level of up to 5%.
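
    The Gauss-Newton machinery with an explicit Jacobian reduces to a damped normal-equation solve per iteration, as in the generic sketch below; the forward model, Jacobian and data are placeholders, not the HEM finite-difference operator.

```python
import numpy as np

def gauss_newton(forward, jacobian, d_obs, m0, lam=1e-2, iters=10):
    """Damped Gauss-Newton: dm = (J^T J + lam I)^{-1} J^T (d_obs - forward(m))."""
    m = m0.copy()
    for _ in range(iters):
        r = d_obs - forward(m)
        J = jacobian(m)
        dm = np.linalg.solve(J.T @ J + lam * np.eye(m.size), J.T @ r)
        m += dm
    return m

# placeholder nonlinear forward model y_i = m_0 * exp(-m_1 * t_i)
t = np.linspace(0.1, 2.0, 20)
forward = lambda m: m[0] * np.exp(-m[1] * t)
jacobian = lambda m: np.column_stack([np.exp(-m[1] * t), -m[0] * t * np.exp(-m[1] * t)])
m_true = np.array([2.0, 1.5])
m_est = gauss_newton(forward, jacobian, forward(m_true), np.array([1.0, 1.0]))
```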

  15. Development of a focal-plane drift chamber for low-energetic pions and experimental determination of an inverse transfer matrix for the short-orbit spectrometer

    International Nuclear Information System (INIS)

    Ding, M.

    2004-10-01

    The three-spectrometer facility at the Mainz microtron MAMI was supplemented by an additional spectrometer, which is characterized by its short path length and is therefore called the Short Orbit Spectrometer (SOS). At the nominal distance from target to SOS (66 cm), the particles to be detected cover a mean path length of 165 cm between reaction point and detector. Thus, for pion electroproduction close to threshold, the survival probability of charged pions with momentum 100 MeV/c rises from 15% to 73% in comparison with the large spectrometers. Consequently, the systematic error ('muon contamination'), as for the proposed measurement of the weak form factors G_A(Q^2) and G_P(Q^2), is reduced significantly. The main subject of this thesis is the drift chamber for the SOS. Its small relative thickness (0.03% X_0), reducing multiple scattering, is optimized with regard to detecting low-energy pions. Owing to the innovative character of the drift-chamber geometry, dedicated software for track reconstruction, efficiency determination, etc. had to be developed. A convenient feature for calibrating the drift-path/drift-time relation, represented by cubic splines, was implemented. The resolution of the track detector in the dispersive plane is 76 μm for the spatial and 0.23 for the angular coordinate (most probable error) and, correspondingly, 110 μm and 0.29 in the non-dispersive plane. For backtracing the reaction quantities from the detector coordinates, the inverse transfer matrix of the spectrometer was determined. For this purpose electrons were scattered quasi-elastically from protons inside the 12C nucleus, the starting angles of the electrons being defined by the holes of a sieve collimator. The resulting experimental values for the angular resolution at the target amount to σ_φ = 1.3 mrad and σ_θ = 10.6 mrad, respectively. The momentum calibration of the SOS can only be achieved by quasi-elastic scattering (two-arm experiment). For this reason the contribution of the proton

  16. Constrained consequence

    CSIR Research Space (South Africa)

    Britz, K

    2011-09-01

    Full Text Available their basic properties and relationship. In Section 3 we present a modal instance of these constructions which also illustrates with an example how to reason abductively with constrained entailment in a causal or action-oriented context. In Section 4 we... of models with the former approach, whereas in Section 3.3 we give an example illustrating ways in which C can be defined with both. Here we employ the following versions of local consequence: Definition 3.4. Given a model M = ⟨W, R, V⟩ and formulas...

  17. Generalized inverses theory and computations

    CERN Document Server

    Wang, Guorong; Qiao, Sanzheng

    2018-01-01

    This book begins with the fundamentals of generalized inverses, then moves to more advanced topics. It presents a theoretical study of the generalization of Cramer's rule, determinant representations of the generalized inverses, the reverse order law of the generalized inverses of a matrix product, structures of the generalized inverses of structured matrices, parallel computation of the generalized inverses, perturbation analysis of the generalized inverses, an algorithmic study of the computational methods for the full-rank factorization of a generalized inverse, generalized singular value decomposition, the imbedding method, the finite method, generalized inverses of polynomial matrices, and generalized inverses of linear operators. This book is intended for researchers, postdocs, and graduate students in the area of generalized inverses with an undergraduate-level understanding of linear algebra.

  18. Inversion assuming weak scattering

    DEFF Research Database (Denmark)

    Xenaki, Angeliki; Gerstoft, Peter; Mosegaard, Klaus

    2013-01-01

    due to the complex nature of the field. A method based on linear inversion is employed to infer information about the statistical properties of the scattering field from the obtained cross-spectral matrix. A synthetic example based on an active high-frequency sonar demonstrates that the proposed...

  19. A new approach to the inverse kinematics of a multi-joint robot manipulator using a minimization method

    International Nuclear Information System (INIS)

    Sasaki, Shinobu

    1987-01-01

    This paper proposes a new approach to solving the inverse kinematics of a type of six-link manipulator. Directing our attention to features of the joint structure of the manipulator, the original problem is first formulated as a system of equations in four variables and solved by means of a minimization technique. The remaining two variables are determined from the constraint conditions involved. This is the basic idea of the present approach. The results of computer simulation of the present algorithm showed that the accuracy of the solutions and the convergence speed are much higher and quite satisfactory for practical purposes, as compared with the linearization-iteration method based on the conventional inverse Jacobian matrix. (author)
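
    A minimal sketch of solving inverse kinematics by minimization, written here for a hypothetical three-joint planar arm rather than the paper's six-link manipulator and four-variable formulation:

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical 3-joint planar arm with unit link lengths (not the paper's robot).
def end_effector(theta):
    angles = np.cumsum(theta)
    return np.array([np.sum(np.cos(angles)), np.sum(np.sin(angles))])

def ik_by_minimization(target, theta0):
    """Solve inverse kinematics by minimizing the squared end-effector position error."""
    cost = lambda th: np.sum((end_effector(th) - target) ** 2)
    res = minimize(cost, theta0, method="BFGS")
    return res.x

theta = ik_by_minimization(np.array([1.5, 1.0]), np.zeros(3))
```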

  20. Inverse modeling of the terrestrial carbon flux in China with flux covariance among inverted regions

    Science.gov (United States)

    Wang, H.; Jiang, F.; Chen, J. M.; Ju, W.; Wang, H.

    2011-12-01

    Quantitative understanding of the roles of the ocean and the terrestrial biosphere in the global carbon cycle, and of their responses and feedbacks to climate change, is required for future projections of the global climate. China has the largest anthropogenic CO2 emissions, diverse terrestrial ecosystems and an unprecedented rate of urbanization. Thus, information on the spatial and temporal distributions of the terrestrial carbon flux in China is of great importance for understanding the global carbon cycle. We developed a nested inversion with a focus on China. Based on the 22 TransCom regions for the globe, we divide China and its neighboring countries into 17 regions, making 39 regions in total for the globe. A Bayesian synthesis inversion is made to estimate the terrestrial carbon flux based on GlobalView CO2 data. In the inversion, GEOS-Chem is used as the transport model to develop the transport matrix. A terrestrial ecosystem model named BEPS is used to produce the prior surface flux to constrain the inversion. However, the sparseness of available observation stations in Asia poses a challenge to the inversion for the 17 small regions. To obtain an additional constraint on the inversion, a prior flux covariance matrix is constructed using the BEPS model by analyzing the correlation of the net carbon flux among regions under variable climate conditions. The use of the covariance among different regions in the inversion effectively extends the information content of CO2 observations to more regions. The carbon flux over the 39 land and ocean regions is inverted for the period from 2004 to 2009. In order to investigate the impact of introducing the covariance matrix with non-zero off-diagonal values into the inversion, the inverted terrestrial carbon flux over China is evaluated against ChinaFlux eddy-covariance observations after applying an upscaling methodology.
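
    The Bayesian synthesis step described here amounts to the standard Gaussian update; a compact sketch with hypothetical matrices (H for transport, B for the prior flux covariance including off-diagonal regional correlations, R for observation errors) is:

```python
import numpy as np

def bayesian_synthesis_inversion(H, c_obs, s_prior, B, R):
    """Posterior flux estimate for c = H s + noise (standard Bayesian synthesis inversion).

    H       : transport (response) matrix mapping regional fluxes to concentrations
    c_obs   : observed CO2 concentrations
    s_prior : prior regional fluxes (e.g., from a terrestrial ecosystem model)
    B       : prior flux error covariance; off-diagonal terms couple regions
    R       : observation error covariance
    """
    B_inv = np.linalg.inv(B)
    R_inv = np.linalg.inv(R)
    A = H.T @ R_inv @ H + B_inv                       # posterior precision
    s_post = s_prior + np.linalg.solve(A, H.T @ R_inv @ (c_obs - H @ s_prior))
    P_post = np.linalg.inv(A)                         # posterior flux covariance
    return s_post, P_post
```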

  1. Algebraic properties of generalized inverses

    CERN Document Server

    Cvetković‐Ilić, Dragana S

    2017-01-01

    This book addresses selected topics in the theory of generalized inverses. Following a discussion of the “reverse order law” problem and certain problems involving completions of operator matrices, it subsequently presents a specific approach to solving the problem of the reverse order law for {1}-generalized inverses. Particular emphasis is placed on the existence of Drazin invertible completions of an upper triangular operator matrix; on the invertibility and different types of generalized invertibility of a linear combination of operators on Hilbert spaces and Banach algebra elements; on the problem of finding representations of the Drazin inverse of a 2x2 block matrix; and on selected additive results and algebraic properties for the Drazin inverse. In addition, the book presents the relevant open problems for each topic and includes comments on the latest references on generalized inverses. Accordingly, the book will be useful for graduate students, Ph...

  2. Inverse photoemission

    International Nuclear Information System (INIS)

    Namatame, Hirofumi; Taniguchi, Masaki

    1994-01-01

    Photoelectron spectroscopy is regarded as one of the most powerful techniques, since it can measure the occupied electronic states almost completely. Inverse photoelectron spectroscopy, on the other hand, is the technique for measuring the unoccupied electronic states by using the inverse process of photoemission, and in principle experiments analogous to photoelectron spectroscopy become feasible. The development of experimental technology for inverse photoelectron spectroscopy has been pursued vigorously by many research groups. Current efforts include improving the resolution of inverse photoelectron spectroscopy and developing spectrometers with variable photon energy, but an inverse photoelectron spectrometer for the vacuum-ultraviolet region is not yet commercially available. In this report, the principle of inverse photoelectron spectroscopy and the present state of the spectrometers are described, and future directions of development are explored. As experimental equipment, electron guns, light detectors and so on are explained. As examples of experiments, the inverse photoelectron spectroscopy of semimagnetic semiconductors and resonance inverse photoelectron spectroscopy are reported. (K.I.)

  3. Inverse Limits

    CERN Document Server

    Ingram, WT

    2012-01-01

    Inverse limits provide a powerful tool for constructing complicated spaces from simple ones. They also turn the study of a dynamical system consisting of a space and a self-map into a study of a (likely more complicated) space and a self-homeomorphism. In four chapters along with an appendix containing background material the authors develop the theory of inverse limits. The book begins with an introduction through inverse limits on [0,1] before moving to a general treatment of the subject. Special topics in continuum theory complete the book. Although it is not a book on dynamics, the influen

  4. On the inversion of geodetic integrals defined over the sphere using 1-D FFT

    Science.gov (United States)

    García, R. V.; Alejo, C. A.

    2005-08-01

    An iterative method is presented which performs inversion of integrals defined over the sphere. The method is based on one-dimensional fast Fourier transform (1-D FFT) inversion and is implemented with the projected Landweber technique, which is used to solve constrained least-squares problems reducing the associated 1-D cyclic-convolution error. The results obtained are as precise as the direct matrix inversion approach, but with better computational efficiency. A case study uses the inversion of Hotine’s integral to obtain gravity disturbances from geoid undulations. Numerical convergence is also analyzed and comparisons with respect to the direct matrix inversion method using conjugate gradient (CG) iteration are presented. Like the CG method, the number of iterations needed to get the optimum (i.e., small) error decreases as the measurement noise increases. Nevertheless, for discrete data given over a whole parallel band, the method can be applied directly without implementing the projected Landweber method, since no cyclic convolution error exists.
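
    A minimal sketch of the projected Landweber iteration used for such constrained least-squares problems (the 1-D FFT implementation and the spherical geometry are omitted; nonnegativity is only an illustrative constraint):

```python
import numpy as np

def projected_landweber(A, b, n_iter=200, tau=None, project=lambda x: np.maximum(x, 0.0)):
    """Projected Landweber iteration for min ||A x - b|| subject to x in a convex set
    defined by `project`. The default projection (nonnegativity) is illustrative only."""
    if tau is None:
        tau = 1.0 / np.linalg.norm(A, 2) ** 2      # step size below 2/||A||^2 ensures convergence
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = project(x + tau * A.T @ (b - A @ x))   # gradient step followed by projection
    return x
```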

  5. Recurrent Neural Network for Computing Outer Inverse.

    Science.gov (United States)

    Živković, Ivan S; Stanimirović, Predrag S; Wei, Yimin

    2016-05-01

    Two linear recurrent neural networks for generating outer inverses with prescribed range and null space are defined. Each of the proposed recurrent neural networks is based on the matrix-valued differential equation, a generalization of dynamic equations proposed earlier for the nonsingular matrix inversion, the Moore-Penrose inversion, as well as the Drazin inversion, under the condition of zero initial state. The application of the first approach is conditioned by the properties of the spectrum of a certain matrix; the second approach eliminates this drawback, though at the cost of increasing the number of matrix operations. The cases corresponding to the most common generalized inverses are defined. The conditions that ensure stability of the proposed neural network are presented. Illustrative examples present the results of numerical simulations.
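
    For orientation only, the simplest gradient-flow analogue of such matrix-valued dynamic equations, specialized to the Moore-Penrose inverse of a full-column-rank matrix (the outer-inverse networks with prescribed range and null space are more involved):

```python
import numpy as np

def gnn_pseudoinverse(A, gamma=1.0, dt=1e-3, n_steps=20000):
    """Euler integration of the matrix-valued ODE dX/dt = -gamma * A.T @ (A @ X - I),
    whose equilibrium is the Moore-Penrose inverse when A has full column rank.
    Zero initial state, as in the recurrent-network formulations."""
    m, n = A.shape
    X = np.zeros((n, m))
    I = np.eye(m)
    for _ in range(n_steps):
        X -= dt * gamma * A.T @ (A @ X - I)
    return X

A = np.array([[1.0, 2.0], [3.0, 4.0], [5.0, 6.0]])
X = gnn_pseudoinverse(A)          # compare with np.linalg.pinv(A)
```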

  6. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data and the complex rupture process at depth. The resulting earthquake source models quantify the spatiotemporal evolution of ruptures. They are also used to provide a rapid assessment of the severity of an earthquake and to estimate losses. However, because of uncertainties in the data, assumed fault geometry and velocity structure, and chosen rupture parameterization, it is not clear which features of these source models are robust. Improved understanding of the uncertainty and reliability of earthquake source inversions will allow the scientific community to use the robust features of kinematic inversions to more thoroughly investigate the complexity of the rupture process and to better constrain other earthquake-related computations, such as ground motion simulations and static stress change calculations.

  7. Constraint on Parameters of Inverse Compton Scattering Model for ...

    Indian Academy of Sciences (India)

    B2319+60, two parameters of inverse Compton scattering model, the initial Lorentz factor and the factor of energy loss of relativistic particles are constrained. Key words. Pulsar—inverse Compton scattering—emission mechanism. 1. Introduction. Among various kinds of models for pulsar radio emission, the inverse ...

  8. Minimal solution for inconsistent singular fuzzy matrix equations

    Directory of Open Access Journals (Sweden)

    M. Nikuie

    2013-10-01

    Full Text Available The fuzzy matrix equation $A\tilde{X}=\tilde{Y}$ is called a singular fuzzy matrix equation when the coefficient matrix of its equivalent crisp matrix equation is singular. Singular fuzzy matrix equations are divided into two classes: consistent singular fuzzy matrix equations and inconsistent fuzzy matrix equations. In this paper, inconsistent singular fuzzy matrix equations are studied and the role of generalized inverses in finding the minimal solution of an inconsistent singular fuzzy matrix equation is investigated.
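
    As a crisp (non-fuzzy) illustration of the role of generalized inverses mentioned above, the sketch below uses the Moore-Penrose pseudoinverse to obtain the minimal-norm least-squares solution of an inconsistent singular system; the fuzzy arithmetic of the paper is not reproduced.

```python
import numpy as np

# Crisp analogue only: for an inconsistent singular system S x = y, the Moore-Penrose
# pseudoinverse yields the least-squares solution of minimal norm.
S = np.array([[1.0, 2.0], [2.0, 4.0]])     # singular coefficient matrix (rank 1)
y = np.array([1.0, 0.0])                   # right-hand side not in the column space of S
x_min = np.linalg.pinv(S) @ y              # minimal-norm least-squares solution
residual = S @ x_min - y                   # nonzero residual confirms inconsistency
```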

  9. Approximate inverse preconditioning of iterative methods for nonsymmetric linear systems

    Energy Technology Data Exchange (ETDEWEB)

    Benzi, M. [Universita di Bologna (Italy); Tuma, M. [Inst. of Computer Sciences, Prague (Czech Republic)

    1996-12-31

    A method for computing an incomplete factorization of the inverse of a nonsymmetric matrix A is presented. The resulting factorized sparse approximate inverse is used as a preconditioner in the iterative solution of Ax = b by Krylov subspace methods.
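
    A minimal sketch of the general sparse-approximate-inverse idea, in the simple Frobenius-norm, column-by-column variant with the sparsity pattern of A as an assumed pattern (this is not the factorized construction described in the record):

```python
import numpy as np

def spai_columns(A, tol=1e-12):
    """Right approximate inverse M minimizing ||A M - I||_F column by column,
    with the sparsity pattern of M taken from the pattern of A (a common heuristic)."""
    n = A.shape[0]
    M = np.zeros_like(A, dtype=float)
    I = np.eye(n)
    for j in range(n):
        J = np.nonzero(np.abs(A[:, j]) > tol)[0]   # allowed nonzero positions of column j
        sub = A[:, J]                               # n x |J| submatrix
        mj, *_ = np.linalg.lstsq(sub, I[:, j], rcond=None)
        M[J, j] = mj
    return M
```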

  10. Towards weakly constrained double field theory

    Directory of Open Access Journals (Sweden)

    Kanghoon Lee

    2016-08-01

    Full Text Available We show that it is possible to construct a well-defined effective field theory incorporating string winding modes without using the strong constraint in double field theory. We show that the X-ray (Radon) transform on a torus is well-suited for describing weakly constrained double fields, and any weakly constrained field is represented as a sum of strongly constrained fields. Using the inverse X-ray transform we define a novel binary operation which is compatible with the level matching constraint. Based on this formalism, we construct a consistent gauge transformation and a gauge-invariant action without using the strong constraint. We then discuss the relation of our result to closed string field theory. Our construction suggests that there exists an effective field theory description for the massless sector of closed string field theory on a torus in an associative truncation.

  11. Downregulation of reversion-inducing cysteine-rich protein with Kazal motifs in malignant melanoma: inverse correlation with membrane-type 1-matrix metalloproteinase and tissue inhibitor of metalloproteinase 2.

    Science.gov (United States)

    Jacomasso, Thiago; Trombetta-Lima, Marina; Sogayar, Mari C; Winnischofer, Sheila M B

    2014-02-01

    The invasive phenotype of many tumors is associated with an imbalance between the matrix metalloproteinases (MMPs) and their inhibitors, tissue inhibitors of metalloproteinases (TIMPs), and the membrane-anchored reversion-inducing cysteine-rich protein with Kazal motifs (RECK). RECK inhibits MMP-2, MMP-9, and MT1-MMP, and has been linked to patient survival and better prognosis in several types of tumors. However, despite the wide implication of these MMPs in melanoma establishment and progression, the role of RECK in this type of tumor is still unknown. Here, we analyzed the expression of RECK, TIMP1, TIMP2, TIMP3, MT1MMP, MMP2, and MMP9 in two publicly available melanoma microarray datasets and in a panel of human melanoma cell lines. We found that RECK is downregulated in malignant melanoma, accompanied by upregulation of MT1MMP and TIMP2. In both datasets, we observed that the group of samples displaying higher RECK levels show lower median expression levels of MT1MMP and TIMP2 and higher levels of TIMP3. When tested in a sample-wise manner, these correlations were statistically significant. Inverse correlations between RECK, MT1MMP, and TIMP2 were verified in a panel of human melanoma cell lines and in a further reduced model that includes a pair of matched primary tumor-derived and metastasis-derived cell lines. Taken together, our data indicate a consistent correlation between RECK, MT1MMP, and TIMP2 across different models of clinical samples and cell lines and suggest evidence of the potential use of this subset of genes as a gene signature for diagnosing melanoma.

  12. Inverse Kinematics

    Directory of Open Access Journals (Sweden)

    Joel Sereno

    2010-01-01

    Full Text Available Inverse kinematics is the process of converting a Cartesian point in space into a set of joint angles to more efficiently move the end effector of a robot to a desired orientation. This project investigates the inverse kinematics of a robotic hand with fingers under various scenarios. Assuming the parameters of a provided robot, a general equation for the end effector point was calculated and used to plot the region of space that it can reach. Further, the benefits obtained from the addition of a prismatic joint versus an extra variable angle joint were considered. The results confirmed that having more movable parts, such as prismatic points and changing angles, increases the effective reach of a robotic hand.

  13. Multidimensional inversion

    International Nuclear Information System (INIS)

    Desesquelles, P.

    1997-01-01

    Computer Monte Carlo simulations occupy an increasingly important place between theory and experiment. This paper introduces a global protocol for the comparison of model simulations with experimental results. The correlated distributions of the model parameters are determined using an original recursive inversion procedure. Multivariate analysis techniques are used in order to optimally synthesize the experimental information with a minimum number of variables. This protocol is relevant in all fields of physics dealing with event generators and multi-parametric experiments. (authors)

  14. On the regularity of the covariance matrix of a discretized scalar field on the sphere

    Energy Technology Data Exchange (ETDEWEB)

    Bilbao-Ahedo, J.D. [Departamento de Física Moderna, Universidad de Cantabria, Av. los Castros s/n, 39005 Santander (Spain); Barreiro, R.B.; Herranz, D.; Vielva, P.; Martínez-González, E., E-mail: bilbao@ifca.unican.es, E-mail: barreiro@ifca.unican.es, E-mail: herranz@ifca.unican.es, E-mail: vielva@ifca.unican.es, E-mail: martinez@ifca.unican.es [Instituto de Física de Cantabria (CSIC-UC), Av. los Castros s/n, 39005 Santander (Spain)

    2017-02-01

    We present a comprehensive study of the regularity of the covariance matrix of a discretized field on the sphere. In particular, the rank of the matrix depends on the number of pixels, the number of spherical harmonics, the symmetries of the pixelization scheme and the presence of a mask. Taking into account the above mentioned components, we provide analytical expressions that constrain the rank of the matrix. They are obtained by expanding the determinant of the covariance matrix as a sum of determinants of matrices made up of spherical harmonics. We investigate these constraints for five different pixelizations that have been used in the context of Cosmic Microwave Background (CMB) data analysis: Cube, Icosahedron, Igloo, GLESP and HEALPix, finding that, at least in the considered cases, the HEALPix pixelization tends to provide a covariance matrix with a rank closer to the maximum expected theoretical value than the other pixelizations. The effect of the propagation of numerical errors on the regularity of the covariance matrix is also studied for different computational precisions, as well as the effect of adding a certain level of noise in order to regularize the matrix. In addition, we investigate the application of the previous results to a particular example that requires the inversion of the covariance matrix: the estimation of the CMB temperature power spectrum through the Quadratic Maximum Likelihood algorithm. Finally, some general considerations in order to achieve a regular covariance matrix are also presented.
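
    A toy numerical illustration of the rank constraint, with cosine modes standing in for spherical harmonics (so the numbers are purely illustrative):

```python
import numpy as np

# The covariance of a band-limited field sampled at n_pix points has rank at most the
# number of basis functions; here simple cosines stand in for spherical harmonics.
n_pix, n_modes = 100, 20
x = np.linspace(0.0, np.pi, n_pix)
B = np.array([np.cos(k * x) for k in range(n_modes)]).T   # n_pix x n_modes design matrix
C_ell = 1.0 / (np.arange(n_modes) + 1.0) ** 2             # assumed power spectrum
cov = B @ np.diag(C_ell) @ B.T                            # pixel-space covariance
print(np.linalg.matrix_rank(cov))                         # <= n_modes, singular if n_pix > n_modes
```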

  15. Non-unitary neutrino mixing and CP violation in the minimal inverse seesaw model

    International Nuclear Information System (INIS)

    Malinsky, Michal; Ohlsson, Tommy; Xing, Zhi-zhong; Zhang He

    2009-01-01

    We propose a simplified version of the inverse seesaw model, in which only two pairs of the gauge-singlet neutrinos are introduced, to interpret the observed neutrino mass hierarchy and lepton flavor mixing at or below the TeV scale. This 'minimal' inverse seesaw scenario (MISS) is technically natural and experimentally testable. In particular, we show that the effective parameters describing the non-unitary neutrino mixing matrix are strongly correlated in the MISS, and thus, their upper bounds can be constrained by current experimental data in a more restrictive way. The Jarlskog invariants of non-unitary CP violation are calculated, and the discovery potential of such new CP-violating effects in the near detector of a neutrino factory is discussed.

  16. Evolutionary constrained optimization

    CERN Document Server

    Deb, Kalyanmoy

    2015-01-01

    This book makes available a self-contained collection of modern research addressing general constrained optimization problems using evolutionary algorithms. Broadly, the topics covered include constraint handling for single and multi-objective optimization; penalty function based methodology; multi-objective based methodology; new constraint handling mechanisms; hybrid methodology; scaling issues in constrained optimization; design of scalable test problems; parameter adaptation in constrained optimization; handling of integer, discrete and mixed variables in addition to continuous variables; application of constraint handling techniques to real-world problems; and constrained optimization in dynamic environments. There is also a separate chapter on hybrid optimization, which is gaining popularity due to its capability of bridging the gap between evolutionary and classical optimization. The material in the book is useful to researchers, novices, and experts alike. The book will also be useful...

  17. CLFs-based optimization control for a class of constrained visual servoing systems.

    Science.gov (United States)

    Song, Xiulan; Miaomiao, Fu

    2017-03-01

    In this paper, we use the control Lyapunov function (CLF) technique to present an optimized visual servo control method for constrained eye-in-hand robot visual servoing systems. With knowledge of the camera intrinsic parameters and the depth of target changes, visual servo control laws (i.e. translation speed) with adjustable parameters are derived from image point features and a known CLF of the visual servoing system. The Fibonacci method is employed to compute online the optimal value of those adjustable parameters, which yields an optimized control law satisfying the constraints of the visual servoing system. Lyapunov's theorem and the properties of the CLF are used to establish stability of the constrained visual servoing system in closed loop with the optimized control law. One merit of the presented method is that there is no requirement to calculate online the pseudo-inverse of the image Jacobian matrix or the homography matrix. Simulation and experimental results illustrate the effectiveness of the proposed method. Copyright © 2016 ISA. Published by Elsevier Ltd. All rights reserved.

  18. The Inverse of Banded Matrices

    Science.gov (United States)

    2013-01-01

    ... indexed entries all zeros. In this paper, generalizing a method of Mallik (1999) [5], we give the LU factorization and the inverse of the matrix B_{r,n} (if it exists).
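
    A numerical counterpart of such results (not the paper's closed-form expressions) is to invert a banded matrix from its LAPACK banded storage, e.g. with SciPy:

```python
import numpy as np
from scipy.linalg import solve_banded

# Invert a tridiagonal matrix (2 on the diagonal, -1 off-diagonal) by solving A X = I.
n = 5
ab = np.zeros((3, n))          # banded storage: 1 superdiagonal, diagonal, 1 subdiagonal
ab[0, 1:] = -1.0               # superdiagonal
ab[1, :] = 2.0                 # main diagonal
ab[2, :-1] = -1.0              # subdiagonal
A_inv = solve_banded((1, 1), ab, np.eye(n))    # columns of the inverse
```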

  19. Exploring Constrained Creative Communication

    DEFF Research Database (Denmark)

    Sørensen, Jannick Kirk

    2017-01-01

    Creative collaboration via online tools offers a less ‘media rich’ exchange of information between participants than face-to-face collaboration. The participants’ freedom to communicate is restricted in the means of communication, and rectified in terms of the possibilities offered in the interface. How do these constraints influence the creative process and the outcome? In order to isolate the communication problem from the interface and technology problems, we examine, via a design game, the creative communication on an open-ended task in a highly constrained setting. Via an experiment, the relation between communicative constraints and participants’ perception of dialogue and creativity is examined. Four sessions with students preparing to form semester project groups were conducted and documented. Students were asked to create an unspecified object without any exchange of communication except...

  20. Choosing health, constrained choices.

    Science.gov (United States)

    Chee Khoon Chan

    2009-12-01

    In parallel with the neo-liberal retrenchment of the welfarist state, an increasing emphasis on the responsibility of individuals in managing their own affairs and their well-being has been evident. In the health arena for instance, this was a major theme permeating the UK government's White Paper Choosing Health: Making Healthy Choices Easier (2004), which appealed to an ethos of autonomy and self-actualization through activity and consumption which merited esteem. As a counterpoint to this growing trend of informed responsibilization, constrained choices (constrained agency) provides a useful framework for a judicious balance and sense of proportion between an individual behavioural focus and a focus on societal, systemic, and structural determinants of health and well-being. Constrained choices is also a conceptual bridge between responsibilization and population health which could be further developed within an integrative biosocial perspective one might refer to as the social ecology of health and disease.

  1. Fast wavelet based sparse approximate inverse preconditioner

    Energy Technology Data Exchange (ETDEWEB)

    Wan, W.L. [Univ. of California, Los Angeles, CA (United States)

    1996-12-31

    Incomplete LU factorization is a robust preconditioner for both general and PDE problems but is unfortunately not easy to parallelize. Recent studies by Huckle and Grote and by Chow and Saad showed that a sparse approximate inverse could be a potential alternative while being readily parallelizable. However, for the special class of matrices A that come from elliptic PDE problems, their preconditioners are not optimal in the sense of being independent of the mesh size. A reason may be that no good sparse approximate inverse exists for the dense inverse matrix. Our observation is that for this kind of matrix, the inverse entries typically show piecewise smooth changes. We can take advantage of this fact and use wavelet compression techniques to construct a better sparse approximate inverse preconditioner. We show numerically that our approach is effective for this kind of matrix.

  2. Inverse Kinematics of a Serial Robot

    Directory of Open Access Journals (Sweden)

    Amici Cinzia

    2016-01-01

    Full Text Available This work describes a technique to treat the inverse kinematics of a serial manipulator. The inverse kinematics is obtained through the numerical inversion of the Jacobian matrix, which represents the equation of motion of the manipulator. The inversion is affected by numerical errors and, under certain conditions, it does not converge to a reasonable solution due to the numerical nature of the solver. Thus, a soft computing approach is adopted that mixes different traditional methods to improve the algorithmic convergence.
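
    A generic sketch of the underlying Jacobian-inversion iteration, written as a damped pseudo-inverse update for a hypothetical two-link planar arm (the soft-computing blend of methods described in the record is not reproduced):

```python
import numpy as np

# Hypothetical two-link planar arm with unit link lengths.
def fk(theta):
    t1, t2 = theta
    return np.array([np.cos(t1) + np.cos(t1 + t2),
                     np.sin(t1) + np.sin(t1 + t2)])

def jacobian(theta):
    t1, t2 = theta
    return np.array([[-np.sin(t1) - np.sin(t1 + t2), -np.sin(t1 + t2)],
                     [ np.cos(t1) + np.cos(t1 + t2),  np.cos(t1 + t2)]])

def ik_damped_pinv(target, theta, lam=0.1, n_iter=200):
    """Damped (Levenberg-style) pseudo-inverse iteration theta <- theta + J^+ * error;
    the damping keeps the update bounded near singular configurations."""
    for _ in range(n_iter):
        e = target - fk(theta)
        J = jacobian(theta)
        JtJ = J.T @ J + lam ** 2 * np.eye(2)
        theta = theta + np.linalg.solve(JtJ, J.T @ e)
    return theta

theta = ik_damped_pinv(np.array([1.2, 0.8]), np.array([0.3, 0.3]))
```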

  3. Group inverses of M-matrices and their applications

    CERN Document Server

    Kirkland, Stephen J

    2013-01-01

    Group inverses for singular M-matrices are useful tools not only in matrix analysis, but also in the analysis of stochastic processes, graph theory, electrical networks, and demographic models. Group Inverses of M-Matrices and Their Applications highlights the importance and utility of the group inverses of M-matrices in several application areas. After introducing sample problems associated with Leslie matrices and stochastic matrices, the authors develop the basic algebraic and spectral properties of the group inverse of a general matrix. They then derive formulas for derivatives of matrix f

  4. Constrained superfields in supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, Gianguido; Farakos, Fotis [Dipartimento di Fisica ed Astronomia “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-02-16

    We analyze constrained superfields in supergravity. We investigate the consistency and solve all known constraints, presenting a new class that may have interesting applications in the construction of inflationary models. We provide the superspace Lagrangians for minimal supergravity models based on them and write the corresponding theories in component form using a simplifying gauge for the goldstino couplings.

  5. Minimal constrained supergravity

    Energy Technology Data Exchange (ETDEWEB)

    Cribiori, N. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Dall' Agata, G., E-mail: dallagat@pd.infn.it [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Farakos, F. [Dipartimento di Fisica e Astronomia “Galileo Galilei”, Università di Padova, Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova, Via Marzolo 8, 35131 Padova (Italy); Porrati, M. [Center for Cosmology and Particle Physics, Department of Physics, New York University, 4 Washington Place, New York, NY 10003 (United States)

    2017-01-10

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  6. Minimal constrained supergravity

    Directory of Open Access Journals (Sweden)

    N. Cribiori

    2017-01-01

    Full Text Available We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  7. Minimal constrained supergravity

    International Nuclear Information System (INIS)

    Cribiori, N.; Dall'Agata, G.; Farakos, F.; Porrati, M.

    2017-01-01

    We describe minimal supergravity models where supersymmetry is non-linearly realized via constrained superfields. We show that the resulting actions differ from the so called “de Sitter” supergravities because we consider constraints eliminating directly the auxiliary fields of the gravity multiplet.

  8. 3D Frequency-Domain Seismic Inversion with Controlled Sloppiness

    NARCIS (Netherlands)

    Herrmann, F.; van Leeuwen, T.

    2014-01-01

    Seismic waveform inversion aims at obtaining detailed estimates of subsurface medium parameters, such as the spatial distribution of soundspeed, from multiexperiment seismic data. A formulation of this inverse problem in the frequency domain leads to an optimization problem constrained by a

  9. 3D Frequency-Domain Seismic Inversion with Controlled Sloppiness.

    NARCIS (Netherlands)

    T. van Leeuwen (Tristan); F.J. Herrmann

    2014-01-01

    Seismic waveform inversion aims at obtaining detailed estimates of subsurface medium parameters, such as the spatial distribution of soundspeed, from multiexperiment seismic data. A formulation of this inverse problem in the frequency domain leads to an optimization problem constrained

  10. Matrix theory selected topics and useful results

    CERN Document Server

    Mehta, Madan Lal

    1989-01-01

    Matrices and operations on matrices ; determinants ; elementary operations on matrices (continued) ; eigenvalues and eigenvectors, diagonalization of normal matrices ; functions of a matrix ; positive definiteness, various polar forms of a matrix ; special matrices ; matrices with quaternion elements ; inequalities ; generalised inverse of a matrix ; domain of values of a matrix, location and dispersion of eigenvalues ; symmetric functions ; integration over matrix variables ; permanents of doubly stochastic matrices ; infinite matrices ; Alexander matrices, knot polynomials, torsion numbers.

  11. An application of sparse inversion on the calculation of the inverse data space of geophysical data

    KAUST Repository

    Saragiotis, Christos

    2011-07-01

    Multiple reflections as observed in seismic reflection measurements often hide arrivals from the deeper target reflectors and need to be removed. The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from trivial, as theory requires infinite time and offset recording. Furthermore, regularization issues arise during inversion. We perform the inversion by minimizing the least-squares norm of the misfit function and by constraining the ℓ1 norm of the solution, the latter being the inverse data space. In this way a sparse inversion approach is obtained. We show results on field data with an application to surface multiple removal. © 2011 IEEE.
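
    A minimal sketch of the kind of ℓ1-regularized least-squares (sparse inversion) step involved, using plain iterative soft thresholding on a generic dense operator A (the actual inverse-data-space operator of the record is not reproduced):

```python
import numpy as np

def ista(A, b, lam=0.1, n_iter=500):
    """Iterative soft thresholding for min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = x - A.T @ (A @ x - b) / L        # gradient step
        x = np.sign(g) * np.maximum(np.abs(g) - lam / L, 0.0)   # soft threshold
    return x
```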

  12. Inverse problem in nuclear physics

    International Nuclear Information System (INIS)

    Zakhariev, B.N.

    1976-01-01

    The method of reconstructing the interaction from scattering data is formulated in the frame of the R-matrix theory, in which the potential is determined by the positions of the resonances E_λ and their reduced widths γ²_λ. In a finite-difference approximation for the Schroedinger equation, this new approach allows the logic of the inverse problem (IP) to be made clearer. The possibility of applying the IP formalism to various nuclear systems is discussed. (author)

  13. Total-variation based velocity inversion with Bregmanized operator splitting algorithm

    Science.gov (United States)

    Zand, Toktam; Gholami, Ali

    2018-04-01

    Many problems in applied geophysics can be formulated as a linear inverse problem. The associated problems, however, are large-scale and ill-conditioned. Therefore, regularization techniques need to be employed for solving them and generating a stable and acceptable solution. We consider numerical methods for solving such problems in this paper. In order to tackle the ill-conditioning of the problem, we use blockiness as prior information on the subsurface parameters and formulate the problem as a constrained total variation (TV) regularization. The Bregmanized operator splitting (BOS) algorithm, a combination of the Bregman iteration and the proximal forward-backward operator splitting method, is developed to solve the resulting problem. Two main advantages of this new algorithm are that no matrix inversion is required and that a discrepancy stopping criterion is used to stop the iterations, which allows efficient solution of large-scale problems. The high performance of the proposed TV regularization method is demonstrated using two different experiments: 1) velocity inversion from (synthetic) seismic data based on the Born approximation, and 2) computing interval velocities from RMS velocities via the Dix formula. Numerical examples are presented to verify the feasibility of the proposed method for high-resolution velocity inversion.
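
    A simplified sketch of the Bregman-splitting idea applied to the easiest TV problem, 1-D TV denoising (the record's BOS velocity inversion with a seismic forward operator is not reproduced, and the parameter choices are illustrative assumptions):

```python
import numpy as np

def soft_threshold(x, t):
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def split_bregman_tv_denoise(f, lam=1.0, mu=5.0, n_iter=100):
    """Split-Bregman solver for min_u 0.5*||u - f||^2 + lam*||D u||_1 (1-D TV)."""
    n = f.size
    D = np.diff(np.eye(n), axis=0)            # forward-difference operator, (n-1) x n
    A = np.eye(n) + mu * D.T @ D              # quadratic subproblem matrix (tridiagonal)
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        u = np.linalg.solve(A, f + mu * D.T @ (d - b))   # quadratic (smooth) update
        Du = D @ u
        d = soft_threshold(Du + b, lam / mu)             # shrinkage on the TV variable
        b = b + Du - d                                   # Bregman variable update
    return u
```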

  14. Identification of different geologic units using fuzzy constrained resistivity tomography

    Science.gov (United States)

    Singh, Anand; Sharma, S. P.

    2018-01-01

    Different geophysical inversion strategies are utilized as a component of an interpretation process that tries to separate geologic units based on the resistivity distribution. In the present study, we present the results of separating different geologic units using fuzzy constrained resistivity tomography. This was accomplished using fuzzy c-means, a clustering procedure, to improve the 2D resistivity image and the geologic separation within the iterative minimization of the inversion. First, we developed a Matlab-based inversion technique to obtain a reliable resistivity image using different geophysical data sets (electrical resistivity and electromagnetic data). Following this, the recovered resistivity model was converted into a fuzzy constrained resistivity model by assigning each model cell to the cluster with the highest membership value, obtained with the fuzzy c-means clustering procedure during the iterative process. The efficacy of the algorithm is demonstrated using three synthetic plane-wave electromagnetic data sets and one electrical resistivity field dataset. The presented approach improves on the conventional inversion approach in differentiating between geologic units, provided the correct number of geologic units is identified. Further, fuzzy constrained resistivity tomography was performed to examine the augmentation of uranium mineralization in the Beldih open cast mine as a case study. We also compared the geologic units identified by fuzzy constrained resistivity tomography with the geologic units interpreted from borehole information.
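
    A minimal sketch of the fuzzy c-means membership and centre updates on scalar model values, as one might use to cluster cell resistivities (the parameter choices are illustrative assumptions):

```python
import numpy as np

def fuzzy_c_means(values, n_clusters=3, m=2.0, n_iter=100, seed=0):
    """Plain fuzzy c-means on scalar model values (e.g., log-resistivities of cells).
    Returns cluster centres and the membership matrix; argmax over memberships gives
    the hard assignment used to constrain the model."""
    rng = np.random.default_rng(seed)
    x = np.asarray(values, dtype=float).ravel()
    centres = rng.choice(x, size=n_clusters, replace=False)
    for _ in range(n_iter):
        d = np.abs(x[:, None] - centres[None, :]) + 1e-12       # distances to centres
        inv = d ** (-2.0 / (m - 1.0))
        U = inv / inv.sum(axis=1, keepdims=True)                # membership degrees
        centres = (U ** m).T @ x / (U ** m).sum(axis=0)         # centre update
    return centres, U
```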

  15. Constrained Vapor Bubble Experiment

    Science.gov (United States)

    Gokhale, Shripad; Plawsky, Joel; Wayner, Peter C., Jr.; Zheng, Ling; Wang, Ying-Xi

    2002-11-01

    Microgravity experiments on the Constrained Vapor Bubble Heat Exchanger, CVB, are being developed for the International Space Station. In particular, we present results of a precursory experimental and theoretical study of the vertical Constrained Vapor Bubble in the Earth's environment. A novel non-isothermal experimental setup was designed and built to study the transport processes in an ethanol/quartz vertical CVB system. Temperature profiles were measured using an in situ PC (personal computer)-based LabView data acquisition system via thermocouples. Film thickness profiles were measured using interferometry. A theoretical model was developed to predict the curvature profile of the stable film in the evaporator. The concept of the total amount of evaporation, which can be obtained directly by integrating the experimental temperature profile, was introduced. Experimentally measured curvature profiles are in good agreement with modeling results. For microgravity conditions, an analytical expression, which reveals an inherent relation between temperature and curvature profiles, was derived.

  16. Anisotropic magnetotelluric inversion using a mutual information constraint

    Science.gov (United States)

    Mandolesi, E.; Jones, A. G.

    2012-12-01

    In recent years, several authors have pointed out that the electrical conductivity of many subsurface structures cannot be described properly by a scalar field. With the development of field devices and techniques, data quality has improved to the point that the anisotropy in conductivity of rocks (microscopic anisotropy) and tectonic structures (macroscopic anisotropy) cannot be neglected. Therefore a correct use of high-quality data has to include electrical anisotropy, and a correct interpretation of anisotropic data directly characterizes a non-negligible part of the subsurface. In this work we test an inversion routine that takes advantage of the classic Levenberg-Marquardt (LM) algorithm to invert magnetotelluric (MT) data generated from a two-dimensional (2D) anisotropic domain. The LM method is routinely used in inverse problems due to its performance and robustness. In non-linear inverse problems, such as the MT problem, the LM method provides a spectacular compromise between quick and secure convergence at the price of the explicit computation and storage of the sensitivity matrix. Regularization in inverse MT problems has been used extensively, due to the necessity to constrain the model space and to reduce the ill-posedness of the anisotropic MT problem, which makes MT inversions extremely challenging. In order to reduce the non-uniqueness of the MT problem and to reach a model compatible with other tomographic results from the same target region, we used a mutual information (MI) based constraint. MI is a basic quantity in information theory that can be used to define a metric between images, and it is routinely used in fields such as computer vision, image registration and medical tomography, to cite some applications. We thus inverted for the model that best fits the anisotropic data and that is the closest, in an MI sense, to a tomographic model of the target area. The advantage of this technique is that the tomographic model of the studied region may be produced by any
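
    A basic histogram estimate of the mutual information between two images, of the kind such an MI-based constraint relies on (the bin count is an illustrative assumption):

```python
import numpy as np

def mutual_information(img_a, img_b, bins=32):
    """Histogram-based estimate of the mutual information between two images."""
    joint, _, _ = np.histogram2d(img_a.ravel(), img_b.ravel(), bins=bins)
    pxy = joint / joint.sum()                       # joint probability table
    px = pxy.sum(axis=1, keepdims=True)             # marginal of image a
    py = pxy.sum(axis=0, keepdims=True)             # marginal of image b
    nz = pxy > 0
    return float(np.sum(pxy[nz] * np.log(pxy[nz] / (px @ py)[nz])))
```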

  17. Constrained noninformative priors

    International Nuclear Information System (INIS)

    Atwood, C.L.

    1994-10-01

    The Jeffreys noninformative prior distribution for a single unknown parameter is the distribution corresponding to a uniform distribution in the transformed model where the unknown parameter is approximately a location parameter. To obtain a prior distribution with a specified mean but with diffusion reflecting great uncertainty, a natural generalization of the noninformative prior is the distribution corresponding to the constrained maximum entropy distribution in the transformed model. Examples are given

  18. Ring-constrained Join

    DEFF Research Database (Denmark)

    Yiu, Man Lung; Karras, Panagiotis; Mamoulis, Nikos

    2008-01-01

    We introduce a novel spatial join operator, the ring-constrained join (RCJ). Given two sets P and Q of spatial points, the result of RCJ consists of pairs (p, q) (where p ∈ P, q ∈ Q) satisfying an intuitive geometric constraint: the smallest circle enclosing p and q contains no other points in P, Q... This new operation has important applications in decision support, e.g., placing recycling stations at fair locations between restaurants and residential complexes. Clearly, RCJ is defined based on a geometric constraint but not on distances between points. Thus, our operation is fundamentally different...

  19. Effective and accurate processing and inversion of airborne electromagnetic data

    DEFF Research Database (Denmark)

    Auken, Esben; Christiansen, Anders Vest; Andersen, Kristoffer Rønne

    Airborne electromagnetic (AEM) data are used throughout the world for mapping of mineral targets and groundwater resources. The development of technology and inversion algorithms has been tremendous over the last decade, and the results from these surveys are high-resolution images of the subsurface. In this keynote talk, we discuss an effective inversion algorithm which is subject both to intense research and development and to production use. This is the well-known Laterally Constrained Inversion (LCI) and Spatially Constrained Inversion algorithm. The same algorithm is also used in a voxel setup (3D model) and for sheet inversions. An integral part of these different model discretizations is an accurate modelling of the system transfer function and of auxiliary parameters like flight altitude, bird pitch, etc.

  20. Inverse problems of geophysics

    International Nuclear Information System (INIS)

    Yanovskaya, T.B.

    2003-07-01

    This report gives an overview and the mathematical formulation of geophysical inverse problems. General principles of statistical estimation are explained. The maximum likelihood and least square fit methods, the Backus-Gilbert method and general approaches for solving inverse problems are discussed. General formulations of linearized inverse problems, singular value decomposition and properties of pseudo-inverse solutions are given

  1. Inverse m-matrices and ultrametric matrices

    CERN Document Server

    Dellacherie, Claude; San Martin, Jaime

    2014-01-01

    The study of M-matrices, their inverses and discrete potential theory is now a well-established part of linear algebra and the theory of Markov chains. The main focus of this monograph is the so-called inverse M-matrix problem, which asks for a characterization of nonnegative matrices whose inverses are M-matrices. We present an answer in terms of discrete potential theory based on the Choquet-Deny Theorem. A distinguished subclass of inverse M-matrices is ultrametric matrices, which are important in applications such as taxonomy. Ultrametricity is revealed to be a relevant concept in linear algebra and discrete potential theory because of its relation with trees in graph theory and mean expected value matrices in probability theory. Remarkable properties of Hadamard functions and products for the class of inverse M-matrices are developed and probabilistic insights are provided throughout the monograph.

  2. Combinatorial matrix theory

    CERN Document Server

    Mitjana, Margarida

    2018-01-01

    This book contains the notes of the lectures delivered at an Advanced Course on Combinatorial Matrix Theory held at the Centre de Recerca Matemàtica (CRM) in Barcelona. These notes correspond to five series of lectures. The first series is dedicated to the study of several matrix classes defined combinatorially, and was delivered by Richard A. Brualdi. The second one, given by Pauline van den Driessche, is concerned with the study of spectral properties of matrices with a given sign pattern. Dragan Stevanović delivered the third one, devoted to describing the spectral radius of a graph as a tool to provide bounds on parameters related to properties of a graph. The fourth lecture was delivered by Stephen Kirkland and is dedicated to the applications of the group inverse of the Laplacian matrix. The last one, given by Ángeles Carmona, focuses on boundary value problems on finite networks, with an in-depth treatment of the M-matrix inverse problem.

  3. Constraining the roughness degree of slip heterogeneity

    KAUST Repository

    Causse, Mathieu

    2010-05-07

    This article investigates different approaches for assessing the degree of roughness of the slip distribution of future earthquakes. First, we analyze a database of slip images extracted from a suite of 152 finite-source rupture models from 80 events (Mw = 4.1–8.9). This results in an empirical model defining the distribution of the slip spectrum corner wave numbers (kc) as a function of moment magnitude. To reduce the “epistemic” uncertainty, we select a single slip model per event and screen out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of kc by kinematic inversion methods. We address this issue by performing tests on synthetic data with a frequency domain inversion method. These tests reveal that due to smoothing constraints used to stabilize the inversion process, kc tends to be underestimated. We then develop an alternative approach: (1) we establish a proportionality relationship between kc and the peak ground acceleration (PGA), using a k−2 kinematic source model, and (2) we analyze the PGA distribution, which is believed to be better constrained than slip images. These two methods reveal that kc follows a lognormal distribution, with similar standard deviations for both methods.

  4. Matrix theory

    CERN Document Server

    Franklin, Joel N

    2003-01-01

    Mathematically rigorous introduction covers vector and matrix norms, the condition-number of a matrix, positive and irreducible matrices, much more. Only elementary algebra and calculus required. Includes problem-solving exercises. 1968 edition.

  5. Convergence of Chahine's nonlinear relaxation inversion method used for limb viewing remote sensing

    Science.gov (United States)

    Chu, W. P.

    1985-01-01

    The application of Chahine's (1970) inversion technique to remote sensing problems utilizing the limb viewing geometry is discussed. The problem considered here involves occultation-type measurements and limb radiance-type measurements from either spacecraft or balloon platforms. The kernel matrix of the inversion problem is either an upper or lower triangular matrix. It is demonstrated that the Chahine inversion technique always converges, provided the diagonal elements of the kernel matrix are nonzero.
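
    A minimal sketch of Chahine's multiplicative relaxation for a discretized kernel, assuming each unknown is paired with one measurement, the data are positive and the kernel diagonal is nonzero (as required for the convergence result above):

```python
import numpy as np

def chahine_relaxation(K, y_obs, x0, n_iter=50):
    """Chahine-style relaxation for y = K x, with unknown j paired with measurement j
    (as in limb-viewing geometry, where K is triangular)."""
    x = x0.astype(float).copy()
    for _ in range(n_iter):
        y_calc = K @ x
        x *= y_obs / y_calc        # update each unknown by the ratio at its paired measurement
    return x
```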

  6. Nonlinear inversion of resistivity sounding data for 1-D earth models using the Neighbourhood Algorithm

    Science.gov (United States)

    Ojo, A. O.; Xie, Jun; Olorunfemi, M. O.

    2018-01-01

    To reduce ambiguity related to nonlinearities in the resistivity model-data relationship, an efficient direct-search scheme employing the Neighbourhood Algorithm (NA) was implemented to solve the 1-D resistivity problem. In addition to finding a range of best-fit models which are more likely to be global minima, this method investigates the entire multi-dimensional model space and provides additional information about the posterior model covariance matrix, the marginal probability density functions and an ensemble of acceptable models. This provides new insights into how well the model parameters are constrained and makes it possible to assess trade-offs between them, thus avoiding some common interpretation pitfalls. The efficacy of the newly developed program is tested by inverting both synthetic (noisy and noise-free) data and field data from other authors employing different inversion methods, so as to provide a good basis for performance comparison. In all cases, the inverted model parameters were in good agreement with the true and recovered model parameters from other methods and correlate remarkably well with the available borehole litho-log and known geology for the field dataset. The NA method has proven to be useful when a good starting model is not available, and the reduced number of unknowns in the 1-D resistivity inverse problem makes it an attractive alternative to the linearized methods. Hence, it is concluded that the newly developed program offers an excellent complementary tool for the global inversion of layered resistivity structures.

  7. Affine Lie algebraic origin of constrained KP hierarchies

    International Nuclear Information System (INIS)

    Aratyn, H.; Gomes, J.F.; Zimerman, A.H.

    1994-07-01

    An affine sl(n+1) algebraic construction of the basic constrained KP hierarchy is presented. This hierarchy is analyzed using two approaches, namely a linear matrix eigenvalue problem on a Hermitian symmetric space and a constrained KP Lax formulation, and we show that these approaches are equivalent. The model is recognized to be the generalized non-linear Schroedinger (GNLS) hierarchy, and it is used as a building block for a new class of constrained KP hierarchies. These constrained KP hierarchies are connected via similarity-Backlund transformations and interpolate between the GNLS and multi-boson KP-Toda hierarchies. The construction uncovers the origin of the Toda lattice structure behind the latter hierarchy. (author). 23 refs

  8. The revenge of the S-matrix

    CERN Multimedia

    CERN. Geneva

    2016-01-01

    In this talk I will describe recent work aiming to reinvigorate the 50 year old S-matrix program, which aims to constrain scattering of massive particles non-perturbatively. I will begin by considering quantum fields in anti-de Sitter space and show that one can extract information about the S-matrix by considering correlators in conformally invariant theories. The latter can be studied with "bootstrap" techniques, which allow us to constrain the S-matrix. In particular, in 1+1D one obtains bounds which are saturated by known integrable models. I will also show that it is also possible to directly constrain the S-matrix, without using the CFT crutch, by using crossing symmetry and unitarity. This alternative method is simpler and gives results in agreement with the previous approach. Both techniques are generalizable to higher dimensions.

  9. Polymer sol-gel composite inverse opal structures.

    Science.gov (United States)

    Zhang, Xiaoran; Blanchard, G J

    2015-03-25

    We report on the formation of composite inverse opal structures where the matrix used to form the inverse opal contains both silica, formed using sol-gel chemistry, and poly(ethylene glycol), PEG. We find that the morphology of the inverse opal structure depends on both the amount of PEG incorporated into the matrix and its molecular weight. The extent of organization in the inverse opal structure, which is characterized by scanning electron microscopy and optical reflectance data, is mediated by the chemical bonding interactions between the silica and PEG constituents in the hybrid matrix. Both polymer chain terminus Si-O-C bonding and hydrogen bonding between the polymer backbone oxygens and silanol functionalities can contribute, with the polymer mediating the extent to which Si-O-Si bonds can form within the silica regions of the matrix due to hydrogen-bonding interactions.

  10. Magnetotelluric inversion via reverse time migration algorithm of seismic data

    International Nuclear Information System (INIS)

    Ha, Taeyoung; Shin, Changsoo

    2007-01-01

    We propose a new algorithm for two-dimensional magnetotelluric (MT) inversion. Our algorithm is an MT inversion based on the steepest descent method, borrowed from the backpropagation technique of seismic inversion, or reverse time migration, introduced in the mid-1980s by Lailly and Tarantola. The steepest descent direction can be calculated efficiently by using the symmetry of the numerical Green's function derived from a mixed finite element method proposed by Nedelec for Maxwell's equations, without calculating the Jacobian matrix explicitly. We construct three different objective functions by taking the logarithm of the complex apparent resistivity, as introduced in the recent waveform inversion algorithm by Shin and Min. These objective functions can be naturally separated into amplitude inversion, phase inversion and simultaneous inversion. We demonstrate our algorithm by showing three inversion results for synthetic data

  11. Early cosmology constrained

    Energy Technology Data Exchange (ETDEWEB)

    Verde, Licia; Jimenez, Raul [Institute of Cosmos Sciences, University of Barcelona, IEEC-UB, Martí Franquès, 1, E08028 Barcelona (Spain); Bellini, Emilio [University of Oxford, Denys Wilkinson Building, Keble Road, Oxford, OX1 3RH (United Kingdom); Pigozzo, Cassio [Instituto de Física, Universidade Federal da Bahia, Salvador, BA (Brazil); Heavens, Alan F., E-mail: liciaverde@icc.ub.edu, E-mail: emilio.bellini@physics.ox.ac.uk, E-mail: cpigozzo@ufba.br, E-mail: a.heavens@imperial.ac.uk, E-mail: raul.jimenez@icc.ub.edu [Imperial Centre for Inference and Cosmology (ICIC), Imperial College, Blackett Laboratory, Prince Consort Road, London SW7 2AZ (United Kingdom)

    2017-04-01

    We investigate our knowledge of early universe cosmology by exploring how much additional energy density can be placed in different components beyond those in the ΛCDM model. To do this we use a method to separate early- and late-universe information enclosed in observational data, thus markedly reducing the model-dependency of the conclusions. We find that the 95% credibility regions for extra energy components of the early universe at recombination are: non-accelerating additional fluid density parameter Ω{sub MR} < 0.006 and extra radiation parameterised as extra effective neutrino species 2.3 < N {sub eff} < 3.2 when imposing flatness. Our constraints thus show that even when analyzing the data in this largely model-independent way, the possibility of hiding extra energy components beyond ΛCDM in the early universe is seriously constrained by current observations. We also find that the standard ruler, the sound horizon at radiation drag, can be well determined in a way that does not depend on late-time Universe assumptions, but depends strongly on early-time physics and in particular on additional components that behave like radiation. We find that the standard ruler length determined in this way is r {sub s} = 147.4 ± 0.7 Mpc if the radiation and neutrino components are standard, but the uncertainty increases by an order of magnitude when non-standard dark radiation components are allowed, to r {sub s} = 150 ± 5 Mpc.

  12. Development of a Java Package for Matrix Programming

    OpenAIRE

    Lim, Ngee-Peng; Ling, Maurice HT; Lim, Shawn YC; Choi, Ji-Hee; Teo, Henry BK

    2003-01-01

    We had assembled a Java package, known as MatrixPak, of four classes for the purpose of numerical matrix computation. The classes are matrix, matrix_operations, StrToMatrix, and MatrixToStr; all of which are inherited from java.lang.Object class. Class matrix defines a matrix as a two-dimensional array of float types, and contains the following mathematical methods: transpose, adjoint, determinant, inverse, minor and cofactor. Class matrix_operations contains the following mathematical method...
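
    As a rough illustration of the kind of interface such a package exposes (a Python sketch written for this summary, not the MatrixPak Java code; the class name and test values are chosen here for readability), the methods listed for class matrix can be implemented for small matrices as follows:

    ```python
    # Illustrative sketch of the methods listed for class "matrix"
    # (transpose, minor, cofactor, determinant, adjoint, inverse),
    # suitable only for small dense matrices.

    class SmallMatrix:
        def __init__(self, rows):
            self.rows = [list(map(float, r)) for r in rows]

        def transpose(self):
            return SmallMatrix(list(zip(*self.rows)))

        def minor(self, i, j):
            # Matrix with row i and column j removed.
            return SmallMatrix([r[:j] + r[j + 1:]
                                for k, r in enumerate(self.rows) if k != i])

        def determinant(self):
            n = len(self.rows)
            if n == 1:
                return self.rows[0][0]
            # Laplace expansion along the first row (fine for small matrices).
            return sum((-1) ** j * self.rows[0][j] * self.minor(0, j).determinant()
                       for j in range(n))

        def cofactor(self, i, j):
            return (-1) ** (i + j) * self.minor(i, j).determinant()

        def adjoint(self):
            n = len(self.rows)
            # Adjoint = transpose of the cofactor matrix.
            return SmallMatrix([[self.cofactor(j, i) for j in range(n)] for i in range(n)])

        def inverse(self):
            det = self.determinant()
            return SmallMatrix([[v / det for v in r] for r in self.adjoint().rows])


    if __name__ == "__main__":
        m = SmallMatrix([[4, 7], [2, 6]])
        print(m.inverse().rows)   # [[0.6, -0.7], [-0.2, 0.4]]
    ```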

  13. Modelling and inversion of local magnetic anomalies

    International Nuclear Information System (INIS)

    Quesnel, Y; Langlais, B; Sotin, C; Galdéano, A

    2008-01-01

    We present a method—named MILMA, for modelling and inversion of local magnetic anomalies—that combines forward and inverse modelling of aeromagnetic data to characterize both the magnetization properties and the location of unconstrained local sources. Parameters of simple-shape magnetized bodies (cylinder, prism or sphere) are first adjusted by trial and error to predict the signal. Their parameters provide a priori information for inversion of the measurements. Here, a generalized nonlinear approach with a least-squares criterion is adopted to seek the best parameters of the sphere (dipole). This inversion step allows the model to be adjusted more objectively to fit the magnetic signal. The validity of the MILMA method is demonstrated through synthetic and real cases using aeromagnetic measurements. Tests with synthetic data reveal accurate results in terms of source depth, whatever the number of sources. The MILMA method is then used with real measurements to constrain the properties of the magnetized units of the Champtoceaux complex (France). The resulting parameters correlate with the crustal structure and properties revealed by other geological and geophysical surveys in the same area. The MILMA method can therefore be used to investigate the properties of poorly constrained lithospheric magnetized sources

  14. Angle-domain inverse scattering migration/inversion in isotropic media

    Science.gov (United States)

    Li, Wuqun; Mao, Weijian; Li, Xuelei; Ouyang, Wei; Liang, Quan

    2018-07-01

    The classical seismic asymptotic inversion can be transformed into a problem of inversion of the generalized Radon transform (GRT). In such methods, the combined parameters are linearly attached to the scattered wave-field by the Born approximation and recovered by applying an inverse GRT operator to the scattered wave-field data. A typical GRT-style true-amplitude inversion procedure contains an amplitude compensation process after the weighted migration, via division by an illumination-associated matrix whose elements are integrals over scattering angles. To some extent it is intuitive to perform the generalized linear inversion and the inversion of the GRT together through this process for direct inversion. However, such an operation is imprecise when the illumination at the image point is limited, which easily leads to inaccuracy and instability of the matrix. This paper formulates the GRT true-amplitude inversion framework in an angle-domain version, which naturally degrades the external integral term related to the illumination in the conventional case. We solve the linearized integral equation for combined parameters at different fixed scattering-angle values. With this step, we obtain high-quality angle-domain common-image gathers (CIGs) in the migration loop which provide correct amplitude-versus-angle (AVA) behavior and a reasonable illumination range for subsurface image points. Then we deal with the over-determined problem to solve for each parameter in the combination by a standard optimization operation. The angle-domain GRT inversion method avoids calculating the inaccurate and unstable illumination matrix. Compared with the conventional method, the angle-domain method can obtain more accurate amplitude information and a wider amplitude-preserved range. Several model tests demonstrate the effectiveness and practicability.

  15. Linearized inversion of two components seismic data; Inversion linearisee de donnees sismiques a deux composantes

    Energy Technology Data Exchange (ETDEWEB)

    Lebrun, D.

    1997-05-22

    The aim of the dissertation is the linearized inversion of multicomponent seismic data for 3D elastic horizontally stratified media, using the Born approximation. A Jacobian matrix is constructed; it will be used to model seismic data from elastic parameters. The inversion technique, relying on singular value decomposition (SVD) of the Jacobian matrix, is described. Next, the resolution of the inverted elastic parameters is quantitatively studied. A first use of the technique is shown in the framework of an evaluation of a sea-bottom acquisition (synthetic data). Finally, a real data set acquired with conventional marine techniques is inverted. (author) 70 refs.

  16. The Transmuted Generalized Inverse Weibull Distribution

    Directory of Open Access Journals (Sweden)

    Faton Merovci

    2014-05-01

    Full Text Available A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We use the quadratic rank transmutation map (QRTM) in order to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base distribution, by introducing a new parameter that offers more distributional flexibility. Various structural properties, including explicit expressions for the moments, quantiles, and moment generating function of the new distribution, are derived. We propose the method of maximum likelihood for estimating the model parameters and obtain the observed information matrix. A real data set is used to compare the flexibility of the transmuted version versus the generalized inverse Weibull distribution.
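
    For reference, the quadratic rank transmutation map mentioned above relates the transmuted CDF F to a baseline CDF G as follows (a standard statement of the QRTM; the specific generalized inverse Weibull parameterization used by the authors is not reproduced here):

    ```latex
    F(x) = (1 + \lambda)\, G(x) - \lambda\, G(x)^{2}, \qquad |\lambda| \le 1,
    ```

    with the base distribution recovered at λ = 0.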

  17. Stochastic Gabor reflectivity and acoustic impedance inversion

    Science.gov (United States)

    Hariri Naghadeh, Diako; Morley, Christopher Keith; Ferguson, Angus John

    2018-02-01

    To delineate subsurface lithology and estimate petrophysical properties of a reservoir, it is possible to use acoustic impedance (AI), which is the result of seismic inversion. To convert amplitude to AI, removing wavelet effects from the seismic signal in order to obtain a reflection series, and subsequently transforming those reflections to AI, is vital. To carry out seismic inversion correctly it is important not to assume that the seismic signal is stationary; however, all stationary deconvolution methods are designed under that assumption. To increase temporal resolution and interpretability, amplitude compensation and phase correction are inevitable. These are pitfalls of stationary reflectivity inversion. Although stationary reflectivity inversion methods try to estimate the reflectivity series, their estimates will not be correct because of these incorrect assumptions, though they may still be useful. Converting those reflection series to AI and merging them with the low-frequency initial model can help. The aim of this study was to apply non-stationary deconvolution to eliminate time-variant wavelet effects from the signal and to convert the estimated reflection series to absolute AI using a bias obtained from well logs. To this end, stochastic Gabor inversion in the time domain was used. The Gabor transform provided the signal's time-frequency analysis and estimates of wavelet properties in different windows. Working with different time windows made it possible to create a time-variant kernel matrix, which was used to remove the wavelet effects from the seismic data. The result was a reflection series that does not follow the stationary assumption. The subsequent step was to convert those reflections to AI using well information. Synthetic and real data sets were used to show the ability of the introduced method. The results highlight that the time cost of the seismic inversion is negligible relative to general Gabor inversion in the frequency domain. Also

  18. Constrained evolution in numerical relativity

    Science.gov (United States)

    Anderson, Matthew William

    The strongest potential source of gravitational radiation for current and future detectors is the merger of binary black holes. Full numerical simulation of such mergers can provide realistic signal predictions and enhance the probability of detection. Numerical simulation of the Einstein equations, however, is fraught with difficulty. Stability even in static test cases of single black holes has proven elusive. Common to unstable simulations is the growth of constraint violations. This work examines the effect of controlling the growth of constraint violations by solving the constraints periodically during a simulation, an approach called constrained evolution. The effects of constrained evolution are contrasted with the results of unconstrained evolution, evolution where the constraints are not solved during the course of a simulation. Two different formulations of the Einstein equations are examined: the standard ADM formulation and the generalized Frittelli-Reula formulation. In most cases constrained evolution vastly improves the stability of a simulation at minimal computational cost when compared with unconstrained evolution. However, in the more demanding test cases examined, constrained evolution fails to produce simulations with long-term stability in spite of producing improvements in simulation lifetime when compared with unconstrained evolution. Constrained evolution is also examined in conjunction with a wide variety of promising numerical techniques, including mesh refinement and overlapping Cartesian and spherical computational grids. Constrained evolution in boosted black hole spacetimes is investigated using overlapping grids. Constrained evolution proves to be central to the host of innovations required in carrying out such intensive simulations.

  19. Matrix Encryption Scheme

    Directory of Open Access Journals (Sweden)

    Abdelhakim Chillali

    2017-05-01

    Full Text Available In classical cryptography, the Hill cipher is a polygraphic substitution cipher based on linear algebra. In this work, we propose a new problem applicable to public-key cryptography, based on matrices, called the "matrix discrete logarithm problem"; it uses certain elements formed by matrices whose coefficients are elements of a finite field. We have constructed an abelian group and, for the cryptographic part in this unreliable group, we then perform the computation corresponding to the algebraic equations, returning the encrypted result to a receiver. Upon receipt of the result, the receiver can retrieve the sender's clear message by performing the inverse calculation.
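
    As background, the classical Hill cipher cited in the abstract can be sketched as follows (an illustrative Python example of the textbook cipher, not the authors' matrix discrete logarithm scheme; the key matrix below is an arbitrary example):

    ```python
    # Hill cipher sketch: encryption multiplies plaintext vectors by a key
    # matrix mod 26; decryption uses the key's modular inverse.
    import numpy as np

    MOD = 26

    def mod_matrix_inverse(K, mod=MOD):
        """Modular inverse of a 2x2 integer matrix (requires gcd(det, mod) = 1)."""
        det = int(round(np.linalg.det(K))) % mod
        det_inv = pow(det, -1, mod)                 # modular inverse of the determinant (Python 3.8+)
        # Adjugate of [[a, b], [c, d]] is [[d, -b], [-c, a]].
        adj = np.array([[K[1, 1], -K[0, 1]], [-K[1, 0], K[0, 0]]])
        return (det_inv * adj) % mod

    def encrypt(block, K):
        return (K @ block) % MOD

    def decrypt(block, K_inv):
        return (K_inv @ block) % MOD

    if __name__ == "__main__":
        K = np.array([[3, 3], [2, 5]])              # invertible mod 26 (gcd(det, 26) = 1)
        K_inv = mod_matrix_inverse(K)
        plain = np.array([7, 8])                    # "HI" as 0-25 letter indices
        cipher = encrypt(plain, K)
        assert np.array_equal(decrypt(cipher, K_inv), plain)
        print(cipher, decrypt(cipher, K_inv))
    ```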

  20. Imaging the Flow Networks from a Harmonic Pumping in a Karstic Field with an Inversion Algorithm

    Science.gov (United States)

    Fischer, P.; Lecoq, N.; Jardani, A.; Jourde, H.; Wang, X.; Chedeville, S.; Cardiff, M. A.

    2017-12-01

    Identifying flow paths within karstic fields remains a complex task because the hydraulic responses depend strongly on the relative locations of the observation boreholes and the karstic conduits and interconnected fractures that control the main flows of the hydrosystem. In this context, harmonic pumping is a new investigation tool that provides information on the flow-path connectivity between boreholes. We have shown that the amplitude and phase offset values in the periodic responses of a hydrosystem to a harmonic pumping test characterize three different types of flow behavior between the measurement boreholes and the pumping borehole: a direct connectivity response (conduit flow), an indirect connectivity (conduit and short matrix flows), and an absence of connectivity (matrix). When the hydraulic responses to be studied are numerous and complex, interpreting the flow paths requires inverse modeling. Therefore, we have recently developed a Cellular Automata-based Deterministic Inversion (CADI) approach that infers the spatial distribution of field hydraulic conductivities in a structurally constrained model. This method distributes hydraulic conductivities along linear structures (i.e. karst conduits) and iteratively modifies the structural geometry of this conduit network to progressively match the observed responses to the modeled ones. As a result, this method produces a conductivity model composed of a discrete conduit network embedded in the background matrix, capable of producing the same flow behavior as the investigated hydrologic system. We applied the CADI approach in order to reproduce, in a model, the amplitude and phase offset values of a set of periodic responses generated from harmonic pumping tests conducted in different boreholes at the Terrieu karstic field site (Southern France). This association of oscillatory responses with the CADI method provides an interpretation of the flow paths within the

  1. Lightweight cryptography for constrained devices

    DEFF Research Database (Denmark)

    Alippi, Cesare; Bogdanov, Andrey; Regazzoni, Francesco

    2014-01-01

    Lightweight cryptography is a rapidly evolving research field that responds to the request for security in resource constrained devices. This need arises from crucial pervasive IT applications, such as those based on RFID tags where cost and energy constraints drastically limit the solution...... complexity, with the consequence that traditional cryptography solutions become too costly to be implemented. In this paper, we survey design strategies and techniques suitable for implementing security primitives in constrained devices....

  2. Inversion of electron-water elastic scattering data

    International Nuclear Information System (INIS)

    Lun, A.; Chen, X.J.; Allen, L.J.; Amos, K.

    1994-01-01

    Fixed energy inverse scattering theory has been used to analyse the differential cross-sections for the elastic scattering of electrons from water molecules. Both semiclassical (WKB) and fully quantal inversion methods have been used with data taken in the energy range 100 to 1000 eV. Constrained to be real, the local inversion potentials are found to be energy dependent; a dependence that can be interpreted as the local equivalence of true nonlocality in the actual interaction. 14 refs., 4 tabs., 8 figs

  3. Matrix calculus

    CERN Document Server

    Bodewig, E

    1959-01-01

    Matrix Calculus, Second Revised and Enlarged Edition focuses on systematic calculation with the building blocks of a matrix and rows and columns, shunning the use of individual elements. The publication first offers information on vectors, matrices, further applications, measures of the magnitude of a matrix, and forms. The text then examines eigenvalues and exact solutions, including the characteristic equation, eigenrows, extremum properties of the eigenvalues, bounds for the eigenvalues, elementary divisors, and bounds for the determinant. The text ponders on approximate solutions, as well

  4. Joint inversion of hydraulic head and self-potential data associated with harmonic pumping tests

    Science.gov (United States)

    Soueid Ahmed, A.; Jardani, A.; Revil, A.; Dupont, J. P.

    2016-09-01

    Harmonic pumping tests consist of stimulating an aquifer by means of hydraulic stimulations at discrete frequencies. The inverse problem of retrieving the hydraulic properties is inherently ill-posed and is usually underdetermined when considering the number of well head data available in field conditions. To better constrain this inverse problem, we add self-potential data recorded at the ground surface to the head data. The self-potential method is a passive geophysical method. Its signals are generated by groundwater flow through an electrokinetic coupling. We showed using a 3-D saturated unconfined synthetic aquifer that the self-potential method significantly improves the results of the harmonic hydraulic tomography. The hydroelectric forward problem is obtained by solving first the Richards equation, describing the groundwater flow, and then using the result in an electrical Poisson equation describing the self-potential problem. The joint inversion problem is solved using a reduction model based on the principal component geostatistical approach. In this method, the large prior covariance matrix is truncated and replaced by its low-rank approximation, thus allowing for notable savings in computational time and storage. Three test cases are studied to assess the validity of our approach. In the first test, we show that when the number of harmonic stimulations is low, combining the harmonic hydraulic and self-potential data does not improve the inversion results. In the second test, where enough harmonic stimulations are performed, a significant improvement of the hydraulic parameters is observed. In the last synthetic test, we show that the electrical conductivity field required to invert the self-potential data can be determined with enough accuracy using an electrical resistivity tomography survey with the same electrode configuration as used for the self-potential investigation.
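
    The covariance reduction described above can be illustrated with a generic sketch (not the authors' principal component geostatistical implementation; the covariance model and rank are arbitrary examples): a large prior covariance matrix is replaced by a truncated eigendecomposition.

    ```python
    # Generic low-rank approximation of a prior covariance matrix by truncated
    # eigendecomposition, as used in reduced-order geostatistical inversion.
    import numpy as np

    def exponential_covariance(x, variance=1.0, corr_len=10.0):
        """Exponential covariance between points with 1-D coordinates x (n,)."""
        d = np.abs(x[:, None] - x[None, :])
        return variance * np.exp(-d / corr_len)

    def low_rank_factor(Q, rank):
        """Return B (n, rank) such that B @ B.T approximates Q."""
        vals, vecs = np.linalg.eigh(Q)            # eigenvalues in ascending order
        idx = np.argsort(vals)[::-1][:rank]       # keep the largest `rank` modes
        return vecs[:, idx] * np.sqrt(vals[idx])

    if __name__ == "__main__":
        x = np.linspace(0.0, 100.0, 200)
        Q = exponential_covariance(x)
        B = low_rank_factor(Q, rank=20)
        rel_err = np.linalg.norm(Q - B @ B.T) / np.linalg.norm(Q)
        print(f"relative approximation error with 20 modes: {rel_err:.3f}")
    ```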

  5. Clustering Using Boosted Constrained k-Means Algorithm

    Directory of Open Access Journals (Sweden)

    Masayuki Okabe

    2018-03-01

    Full Text Available This article proposes a constrained clustering algorithm with performance competitive with, and computation time lower than, state-of-the-art methods; it consists of a constrained k-means algorithm enhanced by the boosting principle. Constrained k-means clustering, which uses constraints as background knowledge, although easy to implement and quick, has insufficient performance compared with metric learning-based methods. Since it simply adds a function to the data assignment process of the k-means algorithm to check for constraint violations, it often exploits only a small number of constraints. Metric learning-based methods, which exploit constraints to create a new metric for data similarity, have shown promising results, although the methods proposed so far are often slow depending on the amount of data or the number of feature dimensions. We present a method that exploits the advantages of both the constrained k-means and metric learning approaches. It incorporates a mechanism for accepting constraint priorities and a metric learning framework based on the boosting principle into a constrained k-means algorithm. In the framework, a metric is learned in the form of a kernel matrix that integrates weak cluster hypotheses produced by the constrained k-means algorithm, which works as a weak learner under the boosting principle. Experimental results for 12 data sets from 3 data sources demonstrated that our method has performance competitive with that of state-of-the-art constrained clustering methods for most data sets and that it takes much less computation time. Experimental evaluation demonstrated the effectiveness of controlling the constraint priorities by using the boosting principle and that our constrained k-means algorithm functions correctly as a weak learner of boosting.
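
    For orientation, the constraint-violation check that the constrained k-means weak learner adds to the assignment step can be sketched as follows (a COP-k-means-style illustration in Python; the boosting and kernel-learning layers of the proposed method are not reproduced here):

    ```python
    # Constrained k-means assignment: each point goes to the nearest centre
    # that does not violate any must-link or cannot-link constraint.
    import numpy as np

    def violates(i, cluster, assign, must_link, cannot_link):
        """Check whether putting point i into `cluster` breaks any constraint."""
        for a, b in must_link:
            j = b if a == i else a if b == i else None
            if j is not None and assign[j] not in (-1, cluster):
                return True
        for a, b in cannot_link:
            j = b if a == i else a if b == i else None
            if j is not None and assign[j] == cluster:
                return True
        return False

    def assign_points(X, centers, must_link, cannot_link):
        assign = -np.ones(len(X), dtype=int)
        for i, x in enumerate(X):
            order = np.argsort(np.linalg.norm(centers - x, axis=1))
            for c in order:                      # nearest feasible centre
                if not violates(i, c, assign, must_link, cannot_link):
                    assign[i] = c
                    break
        return assign

    if __name__ == "__main__":
        X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0], [5.1, 5.0]])
        centers = np.array([[0.0, 0.0], [5.0, 5.0]])
        print(assign_points(X, centers, must_link=[(0, 1)], cannot_link=[(1, 2)]))
    ```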

  6. An inverse dynamics model for the analysis, reconstruction and prediction of bipedal walking

    NARCIS (Netherlands)

    Koopman, Hubertus F.J.M.; Grootenboer, H.J.; de Jongh, Henk J.; Huijing, P.A.J.B.M.; de Vries, J.

    1995-01-01

    Walking is a constrained movement which may best be observed during the double stance phase, when both feet contact the floor. When analyzing a measured movement with an inverse dynamics model, a violation of these constraints will always occur due to measuring errors and deviations of the segments

  7. Acute puerperal uterine inversion

    International Nuclear Information System (INIS)

    Hussain, M.; Liaquat, N.; Noorani, K.; Bhutta, S.Z; Jabeen, T.

    2004-01-01

    Objective: To determine the frequency, causes, clinical presentations, management and maternal mortality associated with acute puerperal inversion of the uterus. Materials and Methods: All the patients who developed acute puerperal inversion of the uterus either in or outside the JPMC were included in the study. Patients of chronic uterine inversion were not included in the present study. Abdominal and vaginal examination was done to confirm and classify inversion into first, second or third degrees. Results: 57036 deliveries and 36 acute uterine inversions occurred during the study period, so the frequency of uterine inversion was 1 in 1584 deliveries. Mismanagement of third stage of labour was responsible for uterine inversion in 75% of patients. Majority of the patients presented with shock, either hypovolemic (69%) or neurogenic (13%) in origin. Manual replacement of the uterus under general anaesthesia with 2% halothane was successfully done in 35 patients (97.5%). Abdominal hysterectomy was done in only one patient. There were three maternal deaths due to inversion. Conclusion: Proper education and training regarding placental delivery, diagnosis and management of uterine inversion must be imparted to the maternity care providers especially to traditional birth attendants and family physicians to prevent this potentially life-threatening condition. (author)

  8. Pole shifting with constrained output feedback

    International Nuclear Information System (INIS)

    Hamel, D.; Mensah, S.; Boisvert, J.

    1984-03-01

    The concept of pole placement plays an important role in linear, multi-variable, control theory. It has received much attention since its introduction, and several pole shifting algorithms are now available. This work presents a new method which allows practical and engineering constraints such as gain limitation and controller structure to be introduced right into the pole shifting design strategy. This is achieved by formulating the pole placement problem as a constrained optimization problem. Explicit constraints (controller structure and gain limits) are defined to identify an admissible region for the feedback gain matrix. The desired pole configuration is translated into an appropriate cost function which must be closed-loop minimized. The resulting constrained optimization problem can thus be solved with optimization algorithms. The method has been implemented as an algorithmic interactive module in a computer-aided control system design package, MVPACK. The application of the method is illustrated to design controllers for an aircraft and an evaporator. The results illustrate the importance of controller structure on overall performance of a control system

  9. Parametrization of the Kobayashi-Maskawa matrix

    International Nuclear Information System (INIS)

    Wolfenstein, L.

    1983-01-01

    The quark mixing matrix (Kobayashi-Maskawa matrix) is expanded in powers of a small parameter λ equal to sin θ_c = 0.22. The term of order λ² is determined from the recently measured B lifetime. Two remaining parameters, including the CP-nonconservation effects, enter only the term of order λ³ and are poorly constrained. A significant reduction in the limit on ε′/ε possible in an ongoing experiment would tightly constrain the CP-nonconservation parameter and could rule out the hypothesis that the only source of CP nonconservation is the Kobayashi-Maskawa mechanism
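
    For reference, the standard form of this expansion to order λ³ (as commonly quoted, not copied from the paper itself) is:

    ```latex
    V_{\mathrm{CKM}} \approx
    \begin{pmatrix}
    1 - \tfrac{\lambda^{2}}{2} & \lambda & A\lambda^{3}(\rho - i\eta) \\
    -\lambda & 1 - \tfrac{\lambda^{2}}{2} & A\lambda^{2} \\
    A\lambda^{3}(1 - \rho - i\eta) & -A\lambda^{2} & 1
    \end{pmatrix}
    + \mathcal{O}(\lambda^{4})
    ```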

  10. Voxel inversion of airborne electromagnetic data for improved model integration

    Science.gov (United States)

    Fiandaca, Gianluca; Auken, Esben; Kirkegaard, Casper; Vest Christiansen, Anders

    2014-05-01

    Inversion of electromagnetic data has migrated from single-site interpretations to inversions covering entire surveys, using spatial constraints to obtain geologically reasonable results. However, the model space is usually linked to the actual observation points. For airborne electromagnetic (AEM) surveys the spatial discretization of the model space reflects the flight lines. On the contrary, geological and groundwater models most often refer to a regular voxel grid, not correlated with the geophysical model space, and the geophysical information has to be relocated for integration into (hydro)geological models. We have developed a new geophysical inversion algorithm working directly in a voxel grid disconnected from the actual measuring points, which then allows geological/hydrogeological models to be informed directly. The new voxel model space defines the soil properties (like resistivity) on a set of nodes, and the distribution of the soil properties is computed everywhere by means of an interpolation function (e.g. inverse distance or kriging). Given this definition of the voxel model space, the 1D forward responses of the AEM data are computed as follows: 1) a 1D model subdivision, in terms of model thicknesses, is defined for each 1D data set, creating "virtual" layers; 2) the "virtual" 1D models at the sounding positions are finalized by interpolating the soil properties (the resistivity) at the centers of the "virtual" layers; 3) the forward response is computed in 1D for each "virtual" model. We tested the new inversion scheme on an AEM survey carried out with the SkyTEM system close to Odder, in Denmark. The survey comprises 106054 dual-mode AEM soundings and covers an area of approximately 13 km x 16 km. The voxel inversion was carried out on a structured grid of 260 x 325 x 29 xyz nodes (50 m xy spacing), for a total of 2450500 inversion parameters. A classical spatially constrained inversion (SCI) was carried out on the same data set, using 106054
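
    Step 2 of the forward computation (interpolating node properties to the centers of the "virtual" layers) can be sketched with inverse-distance weighting as follows (an illustrative Python sketch; the node counts and values are arbitrary, and this is not the authors' implementation):

    ```python
    # Inverse-distance interpolation of voxel-node resistivities to the
    # centres of the "virtual" 1D layers at a sounding position.
    import numpy as np

    def inverse_distance(values, nodes, targets, power=2.0, eps=1e-12):
        """values: (n,) log-resistivities at node coordinates nodes (n, 3);
        targets: (m, 3) virtual-layer centres. Returns (m,) interpolated values."""
        d = np.linalg.norm(targets[:, None, :] - nodes[None, :, :], axis=2)
        w = 1.0 / (d ** power + eps)
        return (w @ values) / w.sum(axis=1)

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        nodes = rng.uniform(0, 1000, size=(500, 3))           # xyz node positions [m]
        log_rho = rng.normal(np.log10(50.0), 0.3, size=500)   # log10 resistivity at nodes
        layer_centres = np.column_stack([np.full(10, 500.0),
                                         np.full(10, 500.0),
                                         np.linspace(5, 95, 10)])
        print(10 ** inverse_distance(log_rho, nodes, layer_centres))
    ```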

  11. A Globally Convergent Parallel SSLE Algorithm for Inequality Constrained Optimization

    Directory of Open Access Journals (Sweden)

    Zhijun Luo

    2014-01-01

    Full Text Available A new parallel variable distribution algorithm based on an interior-point SSLE (sequential system of linear equations) algorithm is proposed for solving inequality-constrained optimization problems in which the constraints are block-separable. Each iteration of this algorithm only needs to solve three systems of linear equations with the same coefficient matrix to obtain the descent direction. Furthermore, under certain conditions, global convergence is achieved.

  12. Anti-B-B Mixing Constrains Topcolor-Assisted Technicolor

    International Nuclear Information System (INIS)

    Burdman, Gustavo; Lane, Kenneth; Rador, Tonguc

    2000-01-01

    We argue that extended technicolor augmented with topcolor requires that all mixing between the third and the first two quark generations resides in the mixing matrix of left-handed down quarks. Then, the anti-B_d - B_d mixing that occurs in topcolor models constrains the coloron and Z′ boson masses to be greater than about 5 TeV. This implies fine tuning of the topcolor couplings to better than 1 percent

  13. Inverse logarithmic potential problem

    CERN Document Server

    Cherednichenko, V G

    1996-01-01

    The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.

  14. Inverse Kinematics using Quaternions

    DEFF Research Database (Denmark)

    Henriksen, Knud; Erleben, Kenny; Engell-Nørregård, Morten

    In this project I describe the status of inverse kinematics research, with the focus firmly on the methods that solve the core problem. An overview of the different methods is presented. Three common methods used in inverse kinematics computation have been chosen as subjects for closer inspection....

  15. Inverse problems in ordinary differential equations and applications

    CERN Document Server

    Llibre, Jaume

    2016-01-01

    This book is dedicated to the study of the inverse problem of ordinary differential equations, that is, it focuses on finding all ordinary differential equations that satisfy a given set of properties. The Nambu bracket is the central tool in developing this approach. The authors start by characterizing the ordinary differential equations in R^N which have a given set of partial integrals or first integrals. The results obtained are applied first to planar polynomial differential systems with a given set of such integrals, second to solve the 16th Hilbert problem restricted to generic algebraic limit cycles, third to solving the inverse problem for constrained Lagrangian and Hamiltonian mechanical systems, and fourth to studying the integrability of a constrained rigid body. Finally the authors conclude with an analysis of nonholonomic mechanics, a generalization of the Hamiltonian principle, and the statement and solution of the inverse problem in vakonomic mechanics.

  16. Warhead verification as inverse problem: Applications of neutron spectrum unfolding from organic-scintillator measurements

    Science.gov (United States)

    Lawrence, Chris C.; Febbraro, Michael; Flaska, Marek; Pozzi, Sara A.; Becchetti, F. D.

    2016-08-01

    Verification of future warhead-dismantlement treaties will require detection of certain warhead attributes without the disclosure of sensitive design information, and this presents an unusual measurement challenge. Neutron spectroscopy—commonly eschewed as an ill-posed inverse problem—may hold special advantages for warhead verification by virtue of its insensitivity to certain neutron-source parameters like plutonium isotopics. In this article, we investigate the usefulness of unfolded neutron spectra obtained from organic-scintillator data for verifying a particular treaty-relevant warhead attribute: the presence of high-explosive and neutron-reflecting materials. Toward this end, several improvements on current unfolding capabilities are demonstrated: deuterated detectors are shown to have superior response-matrix conditioning to that of standard hydrogen-based scintillators; a novel data-discretization scheme is proposed which removes important detector nonlinearities; and a technique is described for re-parameterizing the unfolding problem in order to constrain the parameter space of solutions sought, sidestepping the inverse problem altogether. These improvements are demonstrated with trial measurements and verified using accelerator-based time-of-flight calculation of reference spectra. Then, a demonstration is presented in which the elemental compositions of low-Z neutron-attenuating materials are estimated to within 10%. These techniques could have direct application in verifying the presence of high-explosive materials in a neutron-emitting test item, as well as for other treaty-verification challenges.

  17. Improved extraction of hydrologic information from geophysical data through coupled hydrogeophysical inversion

    Energy Technology Data Exchange (ETDEWEB)

    Hinnell, A.C.; Ferre, T.P.A.; Vrugt, J.A.; Huisman, J.A.; Moysey, S.; Rings, J.; Kowalsky, M.B.

    2009-11-01

    There is increasing interest in the use of multiple measurement types, including indirect (geophysical) methods, to constrain hydrologic interpretations. To date, most examples integrating geophysical measurements in hydrology have followed a three-step, uncoupled inverse approach. This approach begins with independent geophysical inversion to infer the spatial and/or temporal distribution of a geophysical property (e.g. electrical conductivity). The geophysical property is then converted to a hydrologic property (e.g. water content) through a petrophysical relation. The inferred hydrologic property is then used either independently or together with direct hydrologic observations to constrain a hydrologic inversion. We present an alternative approach, coupled inversion, which relies on direct coupling of hydrologic models and geophysical models during inversion. We compare the abilities of coupled and uncoupled inversion using a synthetic example where surface-based electrical conductivity surveys are used to monitor one-dimensional infiltration and redistribution.

  18. A constrained robust least squares approach for contaminant release history identification

    Science.gov (United States)

    Sun, Alexander Y.; Painter, Scott L.; Wittmeyer, Gordon W.

    2006-04-01

    Contaminant source identification is an important type of inverse problem in groundwater modeling and is subject to both data and model uncertainty. Model uncertainty was rarely considered in the previous studies. In this work, a robust framework for solving contaminant source recovery problems is introduced. The contaminant source identification problem is first cast into one of solving uncertain linear equations, where the response matrix is constructed using a superposition technique. The formulation presented here is general and is applicable to any porous media flow and transport solvers. The robust least squares (RLS) estimator, which originated in the field of robust identification, directly accounts for errors arising from model uncertainty and has been shown to significantly reduce the sensitivity of the optimal solution to perturbations in model and data. In this work, a new variant of RLS, the constrained robust least squares (CRLS), is formulated for solving uncertain linear equations. CRLS allows for additional constraints, such as nonnegativity, to be imposed. The performance of CRLS is demonstrated through one- and two-dimensional test problems. When the system is ill-conditioned and uncertain, it is found that CRLS gave much better performance than its classical counterpart, the nonnegative least squares. The source identification framework developed in this work thus constitutes a reliable tool for recovering source release histories in real applications.
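
    For orientation, the classical nonnegative least squares baseline mentioned in the abstract can be sketched as follows (an illustrative Python example with a synthetic response matrix; the robust CRLS formulation, which additionally accounts for uncertainty in the response matrix, is not reproduced here):

    ```python
    # Nonnegative least squares recovery of a release history s from
    # observations d and a response matrix G built by superposition.
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(1)
    n_obs, n_steps = 60, 40
    # Toy response matrix: each observation responds to past release steps.
    G = np.exp(-0.1 * np.abs(np.arange(n_obs)[:, None] - np.arange(n_steps)[None, :]))
    s_true = np.zeros(n_steps)
    s_true[10:15] = 1.0                       # a short release pulse
    d = G @ s_true + 0.01 * rng.standard_normal(n_obs)

    s_est, residual_norm = nnls(G, d)         # min ||G s - d||_2  subject to  s >= 0
    print(residual_norm, np.round(s_est[8:17], 2))
    ```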

  19. A constrained regularization method for inverting data represented by linear algebraic or integral equations

    Science.gov (United States)

    Provencher, Stephen W.

    1982-09-01

    CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizor and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizors, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be automatically chosen on the basis of an F-test and confidence region. The interpretation of the latter and of error estimates based on the covariance matrix of the constrained regularized solution are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
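
    A minimal sketch of constrained regularization in the spirit of CONTIN (not the Fortran package itself; the kernel, regularizor and regularization parameter below are arbitrary examples) combines a smoothness regularizor with a nonnegativity constraint by augmenting the least-squares system:

    ```python
    # min ||A x - b||^2 + alpha^2 ||L x||^2  subject to  x >= 0,
    # with L the second-difference operator (smoothness regularizor).
    import numpy as np
    from scipy.optimize import nnls

    def constrained_regularized_solve(A, b, alpha):
        n = A.shape[1]
        L = np.diff(np.eye(n), 2, axis=0)                 # (n-2, n) second differences
        A_aug = np.vstack([A, alpha * L])
        b_aug = np.concatenate([b, np.zeros(L.shape[0])])
        x, _ = nnls(A_aug, b_aug)
        return x

    if __name__ == "__main__":
        # Toy Laplace-transform-like kernel: data are sums of decaying exponentials.
        t = np.linspace(0.01, 5.0, 80)
        s = np.linspace(0.1, 10.0, 60)
        A = np.exp(-np.outer(t, s))
        x_true = np.exp(-0.5 * ((s - 3.0) / 0.4) ** 2)    # a smooth peak
        b = A @ x_true + 1e-3 * np.random.default_rng(2).standard_normal(len(t))
        x = constrained_regularized_solve(A, b, alpha=0.1)
        print(np.round(x[:10], 3))
    ```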

  20. EISPACK, Subroutines for Eigenvalues, Eigenvectors, Matrix Operations

    International Nuclear Information System (INIS)

    Garbow, Burton S.; Cline, A.K.; Meyering, J.

    1993-01-01

    : Driver subroutine for a nonsym. tridiag. matrix; SVD: Singular value decomposition of rectangular matrix; TINVIT: Find some vectors of sym. tridiag. matrix; TQLRAT: Find all values of sym. tridiag. matrix; TQL1: Find all values of sym. tridiag. matrix; TQL2: Find all values/vectors of sym. tridiag. matrix; TRBAK1: Back transform vectors of matrix formed by TRED1; TRBAK3: Back transform vectors of matrix formed by TRED3; TRED1: Reduce sym. matrix to sym. tridiag. matrix; TRED2: Reduce sym. matrix to sym. tridiag. matrix; TRED3: Reduce sym. packed matrix to sym. tridiag. matrix; TRIDIB: Find some values of sym. tridiag. matrix; TSTURM: Find some values/vectors of sym. tridiag. matrix. 2 - Method of solution: Almost all the algorithms used in EISPACK are based on similarity transformations. Similarity transformations based on orthogonal and unitary matrices are particularly attractive from a numerical point of view because they do not magnify any errors present in the input data or introduced during the computation. Most of the techniques employed are constructive realizations of variants of Schur's theorem, 'Any matrix can be triangularized by a unitary similarity transformation'. It is usually not possible to compute Schur's transformation with a finite number of rational arithmetic operations. Instead, the algorithms employ a potentially infinite sequence of similarity transformations in which the resultant matrix approaches an upper triangular matrix. The sequence is terminated when all of the sub-diagonal elements of the resulting matrix are less than the roundoff errors involved in the computation. The diagonal elements are then the desired approximations to the eigenvalues of the original matrix and the corresponding eigenvectors can be calculated. Special algorithms deal with symmetric matrices. QR, LR, QL, rational QR, bisection QZ, and inverse iteration methods are used

  1. Matrix thermalization

    International Nuclear Information System (INIS)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-01-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  2. Matrix thermalization

    Science.gov (United States)

    Craps, Ben; Evnin, Oleg; Nguyen, Kévin

    2017-02-01

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  3. Matrix thermalization

    Energy Technology Data Exchange (ETDEWEB)

    Craps, Ben [Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Evnin, Oleg [Department of Physics, Faculty of Science, Chulalongkorn University, Thanon Phayathai, Pathumwan, Bangkok 10330 (Thailand); Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium); Nguyen, Kévin [Theoretische Natuurkunde, Vrije Universiteit Brussel (VUB), and International Solvay Institutes, Pleinlaan 2, B-1050 Brussels (Belgium)

    2017-02-08

    Matrix quantum mechanics offers an attractive environment for discussing gravitational holography, in which both sides of the holographic duality are well-defined. Similarly to higher-dimensional implementations of holography, collapsing shell solutions in the gravitational bulk correspond in this setting to thermalization processes in the dual quantum mechanical theory. We construct an explicit, fully nonlinear supergravity solution describing a generic collapsing dilaton shell, specify the holographic renormalization prescriptions necessary for computing the relevant boundary observables, and apply them to evaluating thermalizing two-point correlation functions in the dual matrix theory.

  4. A Projected Non-linear Conjugate Gradient Method for Interactive Inverse Kinematics

    DEFF Research Database (Denmark)

    Engell-Nørregård, Morten; Erleben, Kenny

    2009-01-01

    Inverse kinematics is the problem of posing an articulated figure to obtain a wanted goal, without regarding inertia and forces. Joint limits are modeled as bounds on individual degrees of freedom, leading to a box-constrained optimization problem. We present a projected non-linear conjugate... gradient optimization method suitable for box-constrained optimization problems for inverse kinematics. We show application on inverse kinematics positioning of a human figure. Performance is measured and compared to a traditional Jacobian Transpose method. Visual quality of the developed method...
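
    A simpler relative of the proposed solver, projected gradient descent on a box-constrained problem, can be sketched as follows (an illustrative Python example with an arbitrary quadratic stand-in for the IK goal error; the paper's projected non-linear conjugate gradient method is not reproduced here):

    ```python
    # Box-constrained minimization by projected gradient descent: after each
    # step, the joint angles are clamped back into their joint-limit bounds.
    import numpy as np

    def project(x, lower, upper):
        """Clamp each degree of freedom to its box (joint-limit) bounds."""
        return np.minimum(np.maximum(x, lower), upper)

    def projected_gradient(grad, x0, lower, upper, step=0.1, iters=200):
        x = project(x0, lower, upper)
        for _ in range(iters):
            x = project(x - step * grad(x), lower, upper)
        return x

    if __name__ == "__main__":
        target = np.array([1.5, -2.0, 0.3])
        grad = lambda x: x - target                  # gradient of 0.5 * ||x - target||^2
        lower, upper = np.array([-1.0, -1.0, -1.0]), np.array([1.0, 1.0, 1.0])
        x_star = projected_gradient(grad, np.zeros(3), lower, upper)
        print(x_star)    # components clamped at the joint limits: [1, -1, 0.3]
    ```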

  5. Gravity inversion code

    International Nuclear Information System (INIS)

    Burkhard, N.R.

    1979-01-01

    The gravity inversion code applies stabilized linear inverse theory to determine the topography of a subsurface density anomaly from Bouguer gravity data. The gravity inversion program consists of four source codes: SEARCH, TREND, INVERT, and AVERAGE. TREND and INVERT are used iteratively to converge on a solution. SEARCH forms the input gravity data files for Nevada Test Site data. AVERAGE performs a covariance analysis on the solution. This document describes the necessary input files and the proper operation of the code. 2 figures, 2 tables

  6. Constraining walking and custodial technicolor

    DEFF Research Database (Denmark)

    Foadi, Roshan; Frandsen, Mads Toudal; Sannino, Francesco

    2008-01-01

    We show how to constrain the physical spectrum of walking technicolor models via precision measurements and modified Weinberg sum rules. We also study models possessing a custodial symmetry for the S parameter at the effective Lagrangian level-custodial technicolor-and argue that these models...

  7. Inverse planning IMRT

    International Nuclear Information System (INIS)

    Rosenwald, J.-C.

    2008-01-01

    The lecture addressed the following topics: Optimizing radiotherapy dose distribution; IMRT contributes to optimization of energy deposition; Inverse vs direct planning; Main steps of IMRT; Background of inverse planning; General principle of inverse planning; The 3 main components of IMRT inverse planning; The simplest cost function (deviation from prescribed dose); The driving variable : the beamlet intensity; Minimizing a 'cost function' (or 'objective function') - the walker (or skier) analogy; Application to IMRT optimization (the gradient method); The gradient method - discussion; The simulated annealing method; The optimization criteria - discussion; Hard and soft constraints; Dose volume constraints; Typical user interface for definition of optimization criteria; Biological constraints (Equivalent Uniform Dose); The result of the optimization process; Semi-automatic solutions for IMRT; Generalisation of the optimization problem; Driving and driven variables used in RT optimization; Towards multi-criteria optimization; and Conclusions for the optimization phase. (P.A.)

  8. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells.

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-08-09

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs.

  9. Constrained Sintering in Fabrication of Solid Oxide Fuel Cells

    Science.gov (United States)

    Lee, Hae-Weon; Park, Mansoo; Hong, Jongsup; Kim, Hyoungchul; Yoon, Kyung Joong; Son, Ji-Won; Lee, Jong-Ho; Kim, Byung-Kook

    2016-01-01

    Solid oxide fuel cells (SOFCs) are inevitably affected by the tensile stress field imposed by the rigid substrate during constrained sintering, which strongly affects microstructural evolution and flaw generation in the fabrication process and subsequent operation. In the case of sintering a composite cathode, one component acts as a continuous matrix phase while the other acts as a dispersed phase depending upon the initial composition and packing structure. The clustering of dispersed particles in the matrix has significant effects on the final microstructure, and strong rigidity of the clusters covering the entire cathode volume is desirable to obtain stable pore structure. The local constraints developed around the dispersed particles and their clusters effectively suppress generation of major process flaws, and microstructural features such as triple phase boundary and porosity could be readily controlled by adjusting the content and size of the dispersed particles. However, in the fabrication of the dense electrolyte layer via the chemical solution deposition route using slow-sintering nanoparticles dispersed in a sol matrix, the rigidity of the cluster should be minimized for the fine matrix to continuously densify, and special care should be taken in selecting the size of the dispersed particles to optimize the thermodynamic stability criteria of the grain size and film thickness. The principles of constrained sintering presented in this paper could be used as basic guidelines for realizing the ideal microstructure of SOFCs. PMID:28773795

  10. On the quantum inverse scattering problem

    International Nuclear Information System (INIS)

    Maillet, J.M.; Terras, V.

    2000-01-01

    A general method for solving the so-called quantum inverse scattering problem (namely the reconstruction of local quantum (field) operators in terms of the quantum monodromy matrix satisfying a Yang-Baxter quadratic algebra governed by an R-matrix) for a large class of lattice quantum integrable models is given. The principal requirement being the initial condition (R(0)=P, the permutation operator) for the quantum R-matrix solving the Yang-Baxter equation, it applies not only to most known integrable fundamental lattice models (such as Heisenberg spin chains) but also to lattice models with an arbitrary number of impurities and to the so-called fused lattice models (including integrable higher-spin generalizations of Heisenberg chains). Our method is then applied to several important examples like the sl_n XXZ model, the XYZ spin-1/2 chain and also to the spin-s Heisenberg chains

  11. A Constrained Algorithm Based NMFα for Image Representation

    Directory of Open Access Journals (Sweden)

    Chenxue Yang

    2014-01-01

    Full Text Available Nonnegative matrix factorization (NMF) is a useful tool in learning a basic representation of image data. However, its performance and applicability in real scenarios are limited because of the lack of image information. In this paper, we propose a constrained matrix decomposition algorithm for image representation which contains parameters associated with the characteristics of image data sets. In particular, we impose label information as additional hard constraints on the α-divergence-NMF unsupervised learning algorithm. The resulting algorithm is derived by using the Karush-Kuhn-Tucker (KKT) conditions as well as the projected gradient, and its monotonic local convergence is proved by using auxiliary functions. In addition, we provide a method to select the parameters for our semisupervised matrix decomposition algorithm in the experiments. Compared with state-of-the-art approaches, our method with the selected parameters achieves the best classification accuracy on three image data sets.
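
    For orientation, plain (unconstrained, Euclidean) NMF with multiplicative updates, which the constrained α-divergence variant above builds on, can be sketched as follows (an illustrative Python example; the label constraints and KKT-derived updates of the paper are not reproduced here):

    ```python
    # Lee-Seung multiplicative updates for V ≈ W H with nonnegative factors.
    import numpy as np

    def nmf(V, rank, iters=500, eps=1e-9, seed=0):
        rng = np.random.default_rng(seed)
        n, m = V.shape
        W = rng.random((n, rank)) + eps
        H = rng.random((rank, m)) + eps
        for _ in range(iters):
            H *= (W.T @ V) / (W.T @ W @ H + eps)      # update for H
            W *= (V @ H.T) / (W @ H @ H.T + eps)      # update for W
        return W, H

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        V = rng.random((40, 3)) @ rng.random((3, 60))  # nonnegative rank-3 data
        W, H = nmf(V, rank=3)
        print("reconstruction error:", np.linalg.norm(V - W @ H) / np.linalg.norm(V))
    ```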

  12. Contributions to Large Covariance and Inverse Covariance Matrices Estimation

    OpenAIRE

    Kang, Xiaoning

    2016-01-01

    Estimation of covariance matrix and its inverse is of great importance in multivariate statistics with broad applications such as dimension reduction, portfolio optimization, linear discriminant analysis and gene expression analysis. However, accurate estimation of covariance or inverse covariance matrices is challenging due to the positive definiteness constraint and large number of parameters, especially in the high-dimensional cases. In this thesis, I develop several approaches for estimat...

  13. Matrix inequalities

    CERN Document Server

    Zhan, Xingzhi

    2002-01-01

    The main purpose of this monograph is to report on recent developments in the field of matrix inequalities, with emphasis on useful techniques and ingenious ideas. Among other results this book contains the affirmative solutions of eight conjectures. Many theorems unify or sharpen previous inequalities. The author's aim is to streamline the ideas in the literature. The book can be read by research workers, graduate students and advanced undergraduates.

  14. Fast convolutional sparse coding using matrix inversion lemma

    Czech Academy of Sciences Publication Activity Database

    Šorel, Michal; Šroubek, Filip

    2016-01-01

    Roč. 55, č. 1 (2016), s. 44-51 ISSN 1051-2004 R&D Projects: GA ČR GA13-29225S Institutional support: RVO:67985556 Keywords : Convolutional sparse coding * Feature learning * Deconvolution networks * Shift-invariant sparse coding Subject RIV: JD - Computer Applications, Robotics Impact factor: 2.337, year: 2016 http://library.utia.cas.cz/separaty/2016/ZOI/sorel-0459332.pdf
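
    For context, the matrix inversion lemma (Woodbury identity) referred to in the title is, in its standard form (not quoted from the paper):

    ```latex
    (A + UCV)^{-1} = A^{-1} - A^{-1} U \left(C^{-1} + V A^{-1} U\right)^{-1} V A^{-1}
    ```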

  15. Trends in PDE constrained optimization

    CERN Document Server

    Benner, Peter; Engell, Sebastian; Griewank, Andreas; Harbrecht, Helmut; Hinze, Michael; Rannacher, Rolf; Ulbrich, Stefan

    2014-01-01

    Optimization problems subject to constraints governed by partial differential equations (PDEs) are among the most challenging problems in the context of industrial, economical and medical applications. Almost the entire range of problems in this field of research was studied and further explored as part of the Deutsche Forschungsgemeinschaft (DFG) priority program 1253 on “Optimization with Partial Differential Equations” from 2006 to 2013. The investigations were motivated by the fascinating potential applications and challenging mathematical problems that arise in the field of PDE constrained optimization. New analytic and algorithmic paradigms have been developed, implemented and validated in the context of real-world applications. In this special volume, contributions from more than fifteen German universities combine the results of this interdisciplinary program with a focus on applied mathematics.   The book is divided into five sections on “Constrained Optimization, Identification and Control”...

  16. Embedded Lattice and Properties of Gram Matrix

    Directory of Open Access Journals (Sweden)

    Futa Yuichi

    2017-03-01

    Full Text Available In this article, we formalize in Mizar [14] the definition of the embedding of a lattice and its properties. We formally define an inner product on an embedded module. We also formalize properties of the Gram matrix. We formally prove that an inverse of the Gram matrix for a rational lattice exists. Lattices of Z-modules are necessary for lattice problems, the LLL (Lenstra, Lenstra and Lovász) base reduction algorithm [16] and cryptographic systems with lattices [17].

  17. Mini-lecture course: Introduction into hierarchical matrix technique

    KAUST Repository

    Litvinenko, Alexander

    2017-01-01

    allows us to work with a general class of matrices (not only structured, Toeplitz or sparse). H-matrices can keep the H-matrix data format during linear algebra operations (inverse, update, Schur complement).

  18. Limits to Nonlinear Inversion

    DEFF Research Database (Denmark)

    Mosegaard, Klaus

    2012-01-01

    For non-linear inverse problems, the mathematical structure of the mapping from model parameters to data is usually unknown or partly unknown. Absence of information about the mathematical structure of this function prevents us from presenting an analytical solution, so our solution depends on our......-heuristics are inefficient for large-scale, non-linear inverse problems, and that the 'no-free-lunch' theorem holds. We discuss typical objections to the relevance of this theorem. A consequence of the no-free-lunch theorem is that algorithms adapted to the mathematical structure of the problem perform more efficiently than...... pure meta-heuristics. We study problem-adapted inversion algorithms that exploit the knowledge of the smoothness of the misfit function of the problem. Optimal sampling strategies exist for such problems, but many of these problems remain hard. © 2012 Springer-Verlag....

  19. Inverse scale space decomposition

    DEFF Research Database (Denmark)

    Schmidt, Marie Foged; Benning, Martin; Schönlieb, Carola-Bibiane

    2018-01-01

    We investigate the inverse scale space flow as a decomposition method for decomposing data into generalised singular vectors. We show that the inverse scale space flow, based on convex and even and positively one-homogeneous regularisation functionals, can decompose data represented...... by the application of a forward operator to a linear combination of generalised singular vectors into its individual singular vectors. We verify that for this decomposition to hold true, two additional conditions on the singular vectors are sufficient: orthogonality in the data space and inclusion of partial sums...... of the subgradients of the singular vectors in the subdifferential of the regularisation functional at zero. We also address the converse question of when the inverse scale space flow returns a generalised singular vector given that the initial data is arbitrary (and therefore not necessarily in the range...

  20. How well do different tracers constrain the firn diffusivity profile?

    Directory of Open Access Journals (Sweden)

    C. M. Trudinger

    2013-02-01

    Full Text Available Firn air transport models are used to interpret measurements of the composition of air in firn and bubbles trapped in ice in order to reconstruct past atmospheric composition. The diffusivity profile in the firn is usually calibrated by comparing modelled and measured concentrations for tracers with known atmospheric history. However, in most cases this is an under-determined inverse problem, often with multiple solutions giving an adequate fit to the data (this is known as equifinality). Here we describe a method to estimate the firn diffusivity profile that allows multiple solutions to be identified, in order to quantify the uncertainty in diffusivity due to equifinality. We then look at how well different combinations of tracers constrain the firn diffusivity profile. Tracers with rapid atmospheric variations like CH3CCl3, HFCs and 14CO2 are most useful for constraining molecular diffusivity, while δ15N2 is useful for constraining parameters related to convective mixing near the surface. When errors in the observations are small and Gaussian, three carefully selected tracers are able to constrain the molecular diffusivity profile well with minimal equifinality. However, with realistic data errors or additional processes to constrain, there is benefit to including as many tracers as possible to reduce the uncertainties. We calculate CO2 age distributions and their spectral widths with uncertainties for five firn sites (NEEM, DE08-2, DSSW20K, South Pole 1995 and South Pole 2001) with quite different characteristics and tracers available for calibration. We recommend moving away from the use of a firn model with one calibrated parameter set to infer atmospheric histories, and instead suggest using multiple parameter sets, preferably with multiple representations of uncertain processes, to assist in quantification of the uncertainties.

  1. Source-independent elastic waveform inversion using a logarithmic wavefield

    KAUST Repository

    Choi, Yun Seok

    2012-01-01

    The logarithmic waveform inversion has been widely developed and applied to some synthetic and real data. In most logarithmic waveform inversion algorithms, the subsurface velocities are updated along with the source estimation. To avoid estimating the source wavelet in the logarithmic waveform inversion, we developed a source-independent logarithmic waveform inversion algorithm. In this inversion algorithm, we first normalize the wavefields with the reference wavefield to remove the source wavelet, and then take the logarithm of the normalized wavefields. Based on the properties of the logarithm, we define three types of misfit functions using the following methods: combination of amplitude and phase, amplitude-only, and phase-only. In the inversion, the gradient is computed using the back-propagation formula without directly calculating the Jacobian matrix. We apply our algorithm to noise-free and noise-added synthetic data generated for the modified version of elastic Marmousi2 model, and compare the results with those of the source-estimation logarithmic waveform inversion. For the noise-free data, the source-independent algorithms yield velocity models close to true velocity models. For random-noise data, the source-estimation logarithmic waveform inversion yields better results than the source-independent method, whereas for coherent-noise data, the results are reversed. Numerical results show that the source-independent and source-estimation logarithmic waveform inversion methods have their own merits for random- and coherent-noise data. © 2011.
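
    As a concrete illustration of the normalisation-and-logarithm step described above, here is a minimal frequency-domain sketch of the three source-independent misfits (combined, amplitude-only and phase-only). The array shapes, the choice of reference trace and the small stabiliser eps are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of a source-independent logarithmic misfit (frequency domain).
# The arrays, shapes and choice of reference trace are illustrative assumptions.
import numpy as np

def log_misfits(d_obs, d_syn, ref=0, eps=1e-12):
    """d_obs, d_syn: complex spectra at one frequency, shape (n_receivers,).
    Normalizing by the reference trace cancels the (unknown) source spectrum."""
    r_obs = d_obs / (d_obs[ref] + eps)
    r_syn = d_syn / (d_syn[ref] + eps)
    # the logarithm splits naturally into amplitude (real) and phase (imaginary) parts
    diff = np.log(r_syn + eps) - np.log(r_obs + eps)
    full = 0.5 * np.sum(np.abs(diff) ** 2)        # amplitude + phase
    amp_only = 0.5 * np.sum(diff.real ** 2)       # log-amplitude only
    phase_only = 0.5 * np.sum(diff.imag ** 2)     # phase only
    return full, amp_only, phase_only

# toy usage: a constant multiplicative "source" factor leaves the misfits near zero
rng = np.random.default_rng(0)
d = rng.normal(size=8) + 1j * rng.normal(size=8)
print(log_misfits(d, 3.7j * d))                   # ~ (0, 0, 0): source factor cancels
```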

  2. Information matrix estimation procedures for cognitive diagnostic models.

    Science.gov (United States)

    Liu, Yanlou; Xin, Tao; Andersson, Björn; Tian, Wei

    2018-03-06

    Two new methods to estimate the asymptotic covariance matrix for marginal maximum likelihood estimation of cognitive diagnosis models (CDMs), the inverse of the observed information matrix and the sandwich-type estimator, are introduced. Unlike several previous covariance matrix estimators, the new methods take into account both the item and structural parameters. The relationships between the observed information matrix, the empirical cross-product information matrix, the sandwich-type covariance matrix and the two approaches proposed by de la Torre (2009, J. Educ. Behav. Stat., 34, 115) are discussed. Simulation results show that, for a correctly specified CDM and Q-matrix or with a slightly misspecified probability model, the observed information matrix and the sandwich-type covariance matrix exhibit good performance with respect to providing consistent standard errors of item parameter estimates. However, with substantial model misspecification only the sandwich-type covariance matrix exhibits robust performance. © 2018 The British Psychological Society.
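
    The two estimators discussed above can be assembled generically once the per-respondent score vectors and the observed information are available; the sketch below shows that assembly for an arbitrary marginal log-likelihood. The function name and input shapes are assumptions for illustration, not the CDM-specific implementation of the paper.

```python
# Generic sketch of the inverse observed information and the sandwich-type estimator.
import numpy as np

def covariance_estimates(score_matrix, observed_information):
    """score_matrix: (n_persons, n_params) per-case gradients at the MLE.
    observed_information: (n_params, n_params) negative Hessian of the total
    log-likelihood at the MLE."""
    A = observed_information
    B = score_matrix.T @ score_matrix          # empirical cross-product information
    cov_obs = np.linalg.inv(A)                 # inverse observed information
    cov_sandwich = cov_obs @ B @ cov_obs       # sandwich (robust) estimator
    return cov_obs, cov_sandwich

# standard errors are the square roots of the diagonal entries:
# se = np.sqrt(np.diag(cov_sandwich))
```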

  3. Geostatistical regularization operators for geophysical inverse problems on irregular meshes

    Science.gov (United States)

    Jordi, C.; Doetsch, J.; Günther, T.; Schmelzbach, C.; Robertsson, J. OA

    2018-05-01

    Irregular meshes allow complicated subsurface structures to be included in geophysical modelling and inverse problems. The non-uniqueness of these inverse problems requires appropriate regularization that can incorporate a priori information. However, defining regularization operators for irregular discretizations is not trivial. Different schemes for calculating smoothness operators on irregular meshes have been proposed. In contrast to classical regularization constraints that are only defined using the nearest neighbours of a cell, geostatistical operators include a larger neighbourhood around a particular cell. A correlation model defines the extent of the neighbourhood and allows information about geological structures to be incorporated. We propose an approach to calculate geostatistical operators for inverse problems on irregular meshes by eigendecomposition of a covariance matrix that contains the a priori geological information. Using our approach, the calculation of the operator matrix becomes tractable for 3-D inverse problems on irregular meshes. We tested the performance of the geostatistical regularization operators and compared them against the results of anisotropic smoothing in inversions of 2-D surface synthetic electrical resistivity tomography (ERT) data as well as in the inversion of a realistic 3-D cross-well synthetic ERT scenario. The inversions of 2-D ERT and seismic traveltime field data with geostatistical regularization provide results that are in good accordance with the expected geology and thus facilitate their interpretation. In particular, for layered structures the geostatistical regularization provides geologically more plausible results compared to the anisotropic smoothness constraints.
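
    The construction can be sketched compactly: build a covariance matrix from cell-centroid distances using a correlation model, then turn it into a regularization operator by eigendecomposition. The exponential correlation model, range, variance and nugget below are illustrative assumptions, not the authors' settings.

```python
# Sketch of a geostatistical regularization operator for an irregular mesh:
# form C from centroid distances with an exponential correlation model, then take
# C^(-1/2) via eigendecomposition so that ||C^(-1/2) m|| penalises models that are
# inconsistent with the assumed correlation structure.
import numpy as np

def geostat_operator(centroids, corr_range=100.0, variance=1.0, nugget=1e-6):
    d = np.linalg.norm(centroids[:, None, :] - centroids[None, :, :], axis=-1)
    C = variance * np.exp(-d / corr_range) + nugget * np.eye(len(centroids))
    w, V = np.linalg.eigh(C)                  # C is symmetric positive definite
    return V @ np.diag(w ** -0.5) @ V.T       # C^(-1/2), the regularization operator

# usage on a small random "mesh" of cell centroids
rng = np.random.default_rng(1)
cells = rng.uniform(0, 500, size=(50, 2))
W = geostat_operator(cells)
print(W.shape)                                # (50, 50) operator matrix
```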

  4. On the Duality of Forward and Inverse Light Transport.

    Science.gov (United States)

    Chandraker, Manmohan; Bai, Jiamin; Ng, Tian-Tsong; Ramamoorthi, Ravi

    2011-10-01

    Inverse light transport seeks to undo global illumination effects, such as interreflections, that pervade images of most scenes. This paper presents the theoretical and computational foundations for inverse light transport as a dual of forward rendering. Mathematically, this duality is established through the existence of underlying Neumann series expansions. Physically, it can be shown that each term of our inverse series cancels an interreflection bounce, just as the forward series adds them. While the convergence properties of the forward series are well known, we show that the oscillatory convergence of the inverse series leads to more interesting conditions on material reflectance. Conceptually, the inverse problem requires the inversion of a large light transport matrix, which is impractical for realistic resolutions using standard techniques. A natural consequence of our theoretical framework is a suite of fast computational algorithms for light transport inversion--analogous to finite element radiosity, Monte Carlo and wavelet-based methods in forward rendering--that rely at most on matrix-vector multiplications. We demonstrate two practical applications, namely, separation of individual bounces of the light transport and fast projector radiometric compensation, to display images free of global illumination artifacts in real-world environments.
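
    A toy version of the matrix-vector-only inversion idea is sketched below: the forward model sums interreflection bounces, and the inverse fixed-point iteration removes one bounce per step using only products with the full transport operator. The operator construction and its spectral scaling are assumptions made purely to keep the example self-contained, and, as noted above, the inverse series converges under stricter reflectance conditions than the forward one.

```python
# Toy sketch of inverting light transport with only matrix-vector products.
# Forward model (illustrative): global image m = A d, where A = I + R + R^2 + ...
# sums interreflection bounces of the direct image d. The iteration
# d_{k+1} = m - (A - I) d_k cancels one bounce per step.
import numpy as np

rng = np.random.default_rng(2)
n = 200
R = np.abs(rng.normal(size=(n, n)))
R *= 0.3 / np.abs(np.linalg.eigvals(R)).max()   # one-bounce operator, modest albedo
A = np.linalg.inv(np.eye(n) - R)                # full transport matrix (toy only)

def invert_transport(matvec_A, m, n_iter=50):
    d = m.copy()                                # start from the global image
    for _ in range(n_iter):
        d = m - (matvec_A(d) - d)               # subtract the interreflected part
    return d

d_true = np.abs(rng.normal(size=n))             # "direct illumination" image
m = A @ d_true                                  # measured global-illumination image
d_rec = invert_transport(lambda x: A @ x, m)
print(np.max(np.abs(d_rec - d_true)))           # small residual
```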

  5. Some results on inverse scattering

    International Nuclear Information System (INIS)

    Ramm, A.G.

    2008-01-01

    A review of some of the author's results in the area of inverse scattering is given. The following topics are discussed: (1) Property C and applications, (2) Stable inversion of fixed-energy 3D scattering data and its error estimate, (3) Inverse scattering with 'incomplete' data, (4) Inverse scattering for inhomogeneous Schroedinger equation, (5) Krein's inverse scattering method, (6) Invertibility of the steps in Gel'fand-Levitan, Marchenko, and Krein inversion methods, (7) The Newton-Sabatier and Cox-Thompson procedures are not inversion methods, (8) Resonances: existence, location, perturbation theory, (9) Born inversion as an ill-posed problem, (10) Inverse obstacle scattering with fixed-frequency data, (11) Inverse scattering with data at a fixed energy and a fixed incident direction, (12) Creating materials with a desired refraction coefficient and wave-focusing properties. (author)

  6. 3-D minimum-structure inversion of magnetotelluric data using the finite-element method and tetrahedral grids

    Science.gov (United States)

    Jahandari, H.; Farquharson, C. G.

    2017-11-01

    Unstructured grids enable representing arbitrary structures more accurately and with fewer cells compared to regular structured grids. These grids also allow more efficient refinements compared to rectilinear meshes. In this study, tetrahedral grids are used for the inversion of magnetotelluric (MT) data, which allows for the direct inclusion of topography in the model, for constraining an inversion using a wireframe-based geological model and for local refinement at the observation stations. A minimum-structure method with an iterative model-space Gauss-Newton algorithm for optimization is used. An iterative solver is employed for solving the normal system of equations at each Gauss-Newton step and the sensitivity matrix-vector products that are required by this solver are calculated using pseudo-forward problems. This method alleviates the need to explicitly form the Hessian or Jacobian matrices which significantly reduces the required computation memory. Forward problems are formulated using an edge-based finite-element approach and a sparse direct solver is used for the solutions. This solver allows saving and re-using the factorization of matrices for similar pseudo-forward problems within a Gauss-Newton iteration which greatly minimizes the computation time. Two examples are presented to show the capability of the algorithm: the first example uses a benchmark model while the second example represents a realistic geological setting with topography and a sulphide deposit. The data that are inverted are the full-tensor impedance and the magnetic transfer function vector. The inversions sufficiently recovered the models and reproduced the data, which shows the effectiveness of unstructured grids for complex and realistic MT inversion scenarios. The first example is also used to demonstrate the computational efficiency of the presented model-space method by comparison with its data-space counterpart.
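
    The matrix-free character of the model-space Gauss-Newton scheme can be illustrated with a small sketch in which the normal equations are solved by conjugate gradients using only Jacobian-vector and adjoint products; in the paper these products come from pseudo-forward solves, whereas the toy linear forward problem and damping value below are assumptions.

```python
# Minimal model-space Gauss-Newton step that never forms the Jacobian explicitly:
# only products J v and J^T w are required.
import numpy as np

def cg(matvec, b, n_iter=100, tol=1e-10):
    """Conjugate gradients for the symmetric positive definite system matvec(x) = b."""
    x = np.zeros_like(b)
    r = b - matvec(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = matvec(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

def gauss_newton_step(jvp, jtvp, residual, damping=1e-2):
    """Solve (J^T J + damping I) dm = J^T residual with CG, matrix-free."""
    normal_matvec = lambda v: jtvp(jvp(v)) + damping * v
    return cg(normal_matvec, jtvp(residual))

# toy linear "forward problem" d = G m, so J = G; in a real MT inversion the two
# products below would instead be computed by pseudo-forward and adjoint solves
rng = np.random.default_rng(3)
G = rng.normal(size=(80, 40))
m_true = rng.normal(size=40)
d_obs = G @ m_true
m0 = np.zeros(40)
dm = gauss_newton_step(lambda v: G @ v, lambda w: G.T @ w, d_obs - G @ m0)
print(np.linalg.norm(G @ (m0 + dm) - d_obs))    # data misfit after one step
```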

  7. Enhanced lepton flavour violation in the supersymmetric inverse seesaw

    International Nuclear Information System (INIS)

    Weiland, C

    2013-01-01

    In minimal supersymmetric seesaw models, the contribution to lepton flavour violation from Z-penguins is usually negligible. In this study, we consider the supersymmetric inverse seesaw and show that, in this case, the Z-penguin contribution dominates in several lepton flavour violating observables due to the low scale of the inverse seesaw mechanism. Among the observables considered, we find that the most constraining one is the μ-e conversion rate which is already restricting the otherwise allowed parameter space of the model. Moreover, in this framework, the Z-penguins exhibit a non-decoupling behaviour, which has previously been noticed in lepton flavour violating Higgs decays

  8. Trimming and procrastination as inversion techniques

    Science.gov (United States)

    Backus, George E.

    1996-12-01

    By examining the processes of truncating and approximating the model space (trimming it), and by committing to neither the objectivist nor the subjectivist interpretation of probability (procrastinating), we construct a formal scheme for solving linear and non-linear geophysical inverse problems. The necessary prior information about the correct model xE can be either a collection of inequalities or a probability measure describing where xE was likely to be in the model space X before the data vector y0 was measured. The results of the inversion are (1) a vector z0 that estimates some numerical properties zE of xE; (2) an estimate of the error δz = z0 - zE. As y0 is finite dimensional, so is z0, and hence in principle inversion cannot describe all of xE. The error δz is studied under successively more specialized assumptions about the inverse problem, culminating in a complete analysis of the linear inverse problem with a prior quadratic bound on xE. Our formalism appears to encompass and provide error estimates for many of the inversion schemes current in geomagnetism, and would be equally applicable in geodesy and seismology if adequate prior information were available there. As an idealized example we study the magnetic field at the core-mantle boundary, using satellite measurements of field elements at sites assumed to be almost uniformly distributed on a single spherical surface. Magnetospheric currents are neglected and the crustal field is idealized as a random process with rotationally invariant statistics. We find that an appropriate data compression diagonalizes the variance matrix of the crustal signal and permits an analytic trimming of the idealized problem.

  9. Matrix analysis

    CERN Document Server

    Bhatia, Rajendra

    1997-01-01

    A good part of matrix theory is functional analytic in spirit. This statement can be turned around. There are many problems in operator theory, where most of the complexities and subtleties are present in the finite-dimensional case. My purpose in writing this book is to present a systematic treatment of methods that are useful in the study of such problems. This book is intended for use as a text for upper division and graduate courses. Courses based on parts of the material have been given by me at the Indian Statistical Institute and at the University of Toronto (in collaboration with Chandler Davis). The book should also be useful as a reference for research workers in linear algebra, operator theory, mathematical physics and numerical analysis. A possible subtitle of this book could be Matrix Inequalities. A reader who works through the book should expect to become proficient in the art of deriving such inequalities. Other authors have compared this art to that of cutting diamonds. One first has to...

  10. Nested Sampling with Constrained Hamiltonian Monte Carlo

    OpenAIRE

    Betancourt, M. J.

    2010-01-01

    Nested sampling is a powerful approach to Bayesian inference ultimately limited by the computationally demanding task of sampling from a heavily constrained probability distribution. An effective algorithm in its own right, Hamiltonian Monte Carlo is readily adapted to efficiently sample from any smooth, constrained distribution. Utilizing this constrained Hamiltonian Monte Carlo, I introduce a general implementation of the nested sampling algorithm.

  11. Sequential unconstrained minimization algorithms for constrained optimization

    International Nuclear Information System (INIS)

    Byrne, Charles

    2008-01-01

    The problem of minimizing a function f(x): R^J → R, subject to constraints on the vector variable x, occurs frequently in inverse problems. Even without constraints, finding a minimizer of f(x) may require iterative methods. We consider here a general class of iterative algorithms that find a solution to the constrained minimization problem as the limit of a sequence of vectors, each solving an unconstrained minimization problem. Our sequential unconstrained minimization algorithm (SUMMA) is an iterative procedure for constrained minimization. At the kth step we minimize the function G_k(x) = f(x) + g_k(x) to obtain x_k. The auxiliary functions g_k(x): D ⊂ R^J → R_+ are nonnegative on the set D, each x_k is assumed to lie within D, and the objective is to minimize the continuous function f: R^J → R over x in the set C = D̄, the closure of D. We assume that such minimizers exist, and denote one such by x̂. We assume that the functions g_k(x) satisfy the inequalities 0 ≤ g_k(x) ≤ G_{k-1}(x) - G_{k-1}(x_{k-1}), for k = 2, 3, .... Using this assumption, we show that the sequence {f(x_k)} is decreasing and converges to f(x̂). If the restriction of f(x) to D has bounded level sets, which happens if x̂ is unique and f(x) is closed, proper and convex, then the sequence {x_k} is bounded, and f(x*) = f(x̂) for any cluster point x*. Therefore, if x̂ is unique, x* = x̂ and {x_k} → x̂. When x̂ is not unique, convergence can still be obtained in particular cases. The SUMMA includes, as particular cases, the well-known barrier- and penalty-function methods, the simultaneous multiplicative algebraic reconstruction technique (SMART), the proximal minimization algorithm of Censor and Zenios, the entropic proximal methods of Teboulle, as well as certain cases of gradient descent and the Newton–Raphson method. The proof techniques used for SUMMA can be extended to obtain related results
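
    One of the particular cases listed above, the classical quadratic-penalty method, is easy to sketch: each unconstrained subproblem G_k is quadratic and is minimised by a small linear solve, and the iterates approach the constrained minimiser as the penalty weight grows. The toy problem data and penalty schedule are illustrative assumptions.

```python
# Sequential unconstrained minimization in the spirit of SUMMA, specialised to the
# quadratic-penalty method: minimise f(x) = ||x - x0||^2 subject to a.x = b through
# a sequence of unconstrained problems G_k(x) = f(x) + rho_k (a.x - b)^2.
import numpy as np

x0 = np.array([2.0, 1.0])
a = np.array([1.0, 1.0])
b = 1.0

x_exact = x0 - a * (a @ x0 - b) / (a @ a)         # projection onto the constraint

for k, rho in enumerate([1.0, 10.0, 100.0, 1000.0]):
    # each G_k is quadratic, so its minimiser solves a small linear system
    H = np.eye(2) + rho * np.outer(a, a)
    x_k = np.linalg.solve(H, x0 + rho * b * a)
    print(k, x_k, np.linalg.norm(x_k - x_exact))  # iterates approach the solution
```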

  12. Noniterative MAP reconstruction using sparse matrix representations.

    Science.gov (United States)

    Cao, Guangzhi; Bouman, Charles A; Webb, Kevin J

    2009-09-01

    We present a method for noniterative maximum a posteriori (MAP) tomographic reconstruction which is based on the use of sparse matrix representations. Our approach is to precompute and store the inverse matrix required for MAP reconstruction. This approach has generally not been used in the past because the inverse matrix is typically large and fully populated (i.e., not sparse). In order to overcome this problem, we introduce two new ideas. The first idea is a novel theory for the lossy source coding of matrix transformations which we refer to as matrix source coding. This theory is based on a distortion metric that reflects the distortions produced in the final matrix-vector product, rather than the distortions in the coded matrix itself. The resulting algorithms are shown to require orthonormal transformations of both the measurement data and the matrix rows and columns before quantization and coding. The second idea is a method for efficiently storing and computing the required orthonormal transformations, which we call a sparse-matrix transform (SMT). The SMT is a generalization of the classical FFT in that it uses butterflies to compute an orthonormal transform; but unlike an FFT, the SMT uses the butterflies in an irregular pattern, and is numerically designed to best approximate the desired transforms. We demonstrate the potential of the noniterative MAP reconstruction with examples from optical tomography. The method requires offline computation to encode the inverse transform. However, once these offline computations are completed, the noniterative MAP algorithm is shown to reduce both storage and computation by well over two orders of magnitude, as compared to linear iterative reconstruction methods.

  13. Inverse and Predictive Modeling

    Energy Technology Data Exchange (ETDEWEB)

    Syracuse, Ellen Marie [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)

    2017-09-27

    The LANL Seismo-Acoustic team has a strong capability in developing data-driven models that accurately predict a variety of observations. These models range from the simple – one-dimensional models that are constrained by a single dataset and can be used for quick and efficient predictions – to the complex – multidimensional models that are constrained by several types of data and result in more accurate predictions. Although team members typically build models of geophysical characteristics of Earth and source distributions at scales of 1 to 1000s of km, the techniques used are applicable to other types of physical characteristics at an even greater range of scales. The following cases provide a snapshot of some of the modeling work done by the Seismo-Acoustic team at LANL.

  14. Inverse scattering with supersymmetric quantum mechanics

    International Nuclear Information System (INIS)

    Baye, Daniel; Sparenberg, Jean-Marc

    2004-01-01

    The application of supersymmetric quantum mechanics to the inverse scattering problem is reviewed. The main difference with standard treatments of the inverse problem lies in the simple and natural extension to potentials with singularities at the origin and with a Coulomb behaviour at infinity. The most general form of potentials which are phase-equivalent to a given potential is discussed. The use of singular potentials allows adding or removing states from the bound spectrum without contradicting the Levinson theorem. Physical applications of phase-equivalent potentials in nuclear reactions and in three-body systems are described. Derivation of a potential from the phase shift at fixed orbital momentum can also be performed with the supersymmetric inversion by using a Bargmann-type approximation of the scattering matrix or phase shift. A unique singular potential without bound states can be obtained from any phase shift. A limited number of bound states depending on the singularity can then be added. This inversion procedure is illustrated with nucleon-nucleon scattering

  15. Bayesian seismic AVO inversion

    Energy Technology Data Exchange (ETDEWEB)

    Buland, Arild

    2002-07-01

    A new linearized AVO inversion technique is developed in a Bayesian framework. The objective is to obtain posterior distributions for P-wave velocity, S-wave velocity and density. Distributions for other elastic parameters can also be assessed, for example acoustic impedance, shear impedance and P-wave to S-wave velocity ratio. The inversion algorithm is based on the convolutional model and a linearized weak contrast approximation of the Zoeppritz equation. The solution is represented by a Gaussian posterior distribution with explicit expressions for the posterior expectation and covariance, hence exact prediction intervals for the inverted parameters can be computed under the specified model. The explicit analytical form of the posterior distribution provides a computationally fast inversion method. Tests on synthetic data show that all inverted parameters were almost perfectly retrieved when the noise approached zero. With realistic noise levels, acoustic impedance was the best determined parameter, while the inversion provided practically no information about the density. The inversion algorithm has also been tested on a real 3-D dataset from the Sleipner Field. The results show good agreement with well logs but the uncertainty is high. The stochastic model includes uncertainties of both the elastic parameters, the wavelet and the seismic and well log data. The posterior distribution is explored by Markov chain Monte Carlo simulation using the Gibbs sampler algorithm. The inversion algorithm has been tested on a seismic line from the Heidrun Field with two wells located on the line. The uncertainty of the estimated wavelet is low. In the Heidrun examples the effect of including uncertainty of the wavelet and the noise level was marginal with respect to the AVO inversion results. We have developed a 3-D linearized AVO inversion method with spatially coupled model parameters where the objective is to obtain posterior distributions for P-wave velocity, S
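
    For a linearised Gaussian model, the posterior expectation and covariance referred to above have the familiar closed form sketched below. The matrix G here is generic; in the AVO setting it would encode the convolutional model and the linearised Zoeppritz approximation, which this assumed toy example does not reproduce.

```python
# Sketch of the explicit Gaussian posterior for a linearised inversion d = G m + e,
# with prior m ~ N(mu, Sm) and noise e ~ N(0, Se).
import numpy as np

def gaussian_posterior(G, d, mu, Sm, Se):
    S_dd = G @ Sm @ G.T + Se                   # data covariance
    K = Sm @ G.T @ np.linalg.inv(S_dd)         # "gain" matrix
    mu_post = mu + K @ (d - G @ mu)            # posterior expectation
    S_post = Sm - K @ G @ Sm                   # posterior covariance
    return mu_post, S_post

# toy usage with a random linear operator; 95% prediction intervals follow from
# mu_post +/- 1.96 * sqrt(diag(S_post))
rng = np.random.default_rng(8)
G = rng.normal(size=(30, 3))
m_true = np.array([2.0, -1.0, 0.5])
d = G @ m_true + 0.1 * rng.normal(size=30)
mu_post, S_post = gaussian_posterior(G, d, np.zeros(3), np.eye(3), 0.01 * np.eye(30))
print(mu_post, np.sqrt(np.diag(S_post)))
```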

  16. Representations for the generalized Drazin inverse of the sum in a Banach algebra and its application for some operator matrices.

    Science.gov (United States)

    Liu, Xiaoji; Qin, Xiaolan

    2015-01-01

    We investigate additive properties of the generalized Drazin inverse in a Banach algebra A. We find explicit expressions for the generalized Drazin inverse of the sum a + b, under new conditions on a, b ∈ A. As an application we give some new representations for the generalized Drazin inverse of an operator matrix.

  17. Constrained minimization in C++ environment

    International Nuclear Information System (INIS)

    Dymov, S.N.; Kurbatov, V.S.; Silin, I.N.; Yashchenko, S.V.

    1998-01-01

    Based on the ideas proposed by one of the authors (I.N. Silin), suitable software was developed for constrained data fitting. Constraints may be of arbitrary type: equalities and inequalities. The simplest of possible ways was used. The widely known program FUMILI was reimplemented in C++. Constraints in the form of inequalities φ(θ_i) ≥ a were taken into account by converting them into equalities φ(θ_i) = t together with simple inequalities of the type t ≥ a. The equalities were taken into account by means of quadratic penalty functions. The software was tested on model data from the ANKE setup (COSY accelerator, Forschungszentrum Juelich, Germany)

  18. Coherent states in constrained systems

    International Nuclear Information System (INIS)

    Nakamura, M.; Kojima, K.

    2001-01-01

    When quantizing constrained systems, quantum corrections often arise from the non-commutativity involved in re-ordering constraint operators in products of operators. For bosonic second-class constraints, furthermore, the quantum corrections caused by the uncertainty principle should be taken into account. In order to treat these corrections simultaneously, an alternative projection technique for operators is proposed by introducing the available minimal-uncertainty states of the constraint operators. Using this projection technique together with the projection operator method (POM), these two kinds of quantum corrections were investigated

  19. The gravitational S-matrix

    CERN Document Server

    Giddings, Steven B

    2010-01-01

    We investigate the hypothesized existence of an S-matrix for gravity, and some of its expected general properties. We first discuss basic questions regarding existence of such a matrix, including those of infrared divergences and description of asymptotic states. Distinct scattering behavior occurs in the Born, eikonal, and strong gravity regimes, and we describe aspects of both the partial wave and momentum space amplitudes, and their analytic properties, from these regimes. Classically the strong gravity region would be dominated by formation of black holes, and we assume its unitary quantum dynamics is described by corresponding resonances. Masslessness limits some powerful methods and results that apply to massive theories, though a continuation path implying crossing symmetry plausibly still exists. Physical properties of gravity suggest nonpolynomial amplitudes, although crossing and causality constrain (with modest assumptions) this nonpolynomial behavior, particularly requiring a polynomial bound in c...

  20. Matrix pentagons

    Science.gov (United States)

    Belitsky, A. V.

    2017-10-01

    The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang-Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  1. Matrix pentagons

    Directory of Open Access Journals (Sweden)

    A.V. Belitsky

    2017-10-01

    Full Text Available The Operator Product Expansion for null polygonal Wilson loop in planar maximally supersymmetric Yang–Mills theory runs systematically in terms of multi-particle pentagon transitions which encode the physics of excitations propagating on the color flux tube ending on the sides of the four-dimensional contour. Their dynamics was unraveled in the past several years and culminated in a complete description of pentagons as an exact function of the 't Hooft coupling. In this paper we provide a solution for the last building block in this program, the SU(4) matrix structure arising from internal symmetry indices of scalars and fermions. This is achieved by a recursive solution of the Mirror and Watson equations obeyed by the so-called singlet pentagons and fixing the form of the twisted component in their tensor decomposition. The non-singlet, or charged, pentagons are deduced from these by a limiting procedure.

  2. Retrieving rupture history using waveform inversions in time sequence

    Science.gov (United States)

    Yi, L.; Xu, C.; Zhang, X.

    2017-12-01

    The rupture history of large earthquakes is generally retrieved by waveform inversion of seismological waveform records. In the waveform inversion, based on the superposition principle, the rupture process is linearly parameterized. After discretizing the fault plane into sub-faults, the local source time function of each sub-fault is usually parameterized using the multi-time-window method, e.g., mutually overlapping triangular functions. The forward waveform of each sub-fault is then synthesized by convolving its source time function with its Green function. According to the superposition principle, these forward waveforms generated across the fault plane are summed into the recorded waveforms after aligning the arrival times. The slip history is then retrieved with the waveform inversion method after superposing all forward waveforms for each corresponding seismological waveform record. Apart from the isolation of the forward waveforms generated by each sub-fault, we also note that these waveforms are gradually and sequentially superimposed in the recorded waveforms. Thus we propose the idea that the rupture model can be separated into sequential rupture times. According to the constrained waveform length method emphasized in our previous work, the length of the inverted waveforms used in the waveform inversion is objectively constrained by the rupture velocity and rise time. One essential prior condition is the predetermined fault plane, which limits the duration of the rupture time; this means the waveform inversion is restricted to a pre-set rupture duration. We therefore propose a strategy to invert the rupture process sequentially, using progressively shifted rupture times as the rupture front expands over the fault plane. We have designed a simulation inversion to test the feasibility of the method, and the test results show the promise of this idea, which requires further investigation.
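
    The linear forward model described above (triangular source time functions convolved with Green functions and summed over sub-faults after time alignment) can be sketched as follows; the triangle width, delays and Green functions are illustrative assumptions.

```python
# Sketch of the linear forward model: the synthetic at one station is the sum over
# sub-faults of (source time function * Green's function), each delayed by its
# rupture-front arrival time.
import numpy as np

def triangle(n, width):
    t = np.arange(n)
    stf = np.maximum(0.0, 1.0 - np.abs(t - width) / width)  # isosceles triangle
    return stf / stf.sum()

def synthetic(slips, delays, greens, width, n_out):
    """slips[i], delays[i] (samples), greens[i]: per-sub-fault parameters."""
    out = np.zeros(n_out)
    for slip, delay, g in zip(slips, delays, greens):
        contrib = slip * np.convolve(triangle(len(g), width), g)
        contrib = contrib[:max(0, n_out - delay)]            # truncate to the window
        out[delay:delay + len(contrib)] += contrib           # align to rupture time
    return out

rng = np.random.default_rng(4)
greens = [rng.normal(size=64) for _ in range(3)]             # toy Green's functions
u = synthetic(slips=[1.0, 0.5, 0.8], delays=[0, 10, 25],
              greens=greens, width=5, n_out=160)
print(u.shape)
```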

  3. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-01-01

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme is presented

  4. Wavefield reconstruction inversion with a multiplicative cost function

    Science.gov (United States)

    da Silva, Nuno V.; Yao, Gang

    2018-01-01

    We present a method for the automatic estimation of the trade-off parameter in the context of wavefield reconstruction inversion (WRI). WRI formulates the inverse problem as an optimisation problem, minimising the data misfit while penalising with a wave-equation constraining term. The trade-off between the two terms is set by a scaling factor that balances the contributions of the data-misfit term and the constraining term to the value of the objective function. If this parameter is too large, the wave-equation term dominates and effectively imposes a hard constraint in the inversion. If it is too small, the solution is poorly constrained, as the objective essentially penalises only the data misfit and does not take into account the physics that explains the data. This paper introduces a new approach to the formulation of WRI, recasting it as a multiplicative cost function. We demonstrate that the proposed method outperforms the additive cost function when the trade-off parameter in the latter is appropriately scaled, when it is adapted throughout the iterations, and when the data are contaminated with Gaussian random noise. This work thus contributes a framework for a more automated application of WRI.
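
    The difference between the additive and multiplicative formulations can be illustrated with a deliberately simple toy objective: the additive minimiser moves with the trade-off parameter, while the multiplicative form needs no such parameter. The quadratic stand-ins below are assumptions and are not the WRI functionals themselves.

```python
# Toy comparison of additive versus multiplicative penalty formulations.
# f_d plays the role of the data-misfit term and f_c of the constraint term.
import numpy as np

m = np.linspace(-2.0, 4.0, 2001)
f_d = (m - 1.0) ** 2 + 0.1          # "data misfit", minimised near m = 1
f_c = (m - 2.0) ** 2 + 0.1          # "constraint misfit", minimised near m = 2

for lam in (0.01, 1.0, 100.0):      # additive form: answer depends strongly on lambda
    print("additive, lambda=%g -> m=%.3f" % (lam, m[np.argmin(f_d + lam * f_c)]))

print("multiplicative        -> m=%.3f" % m[np.argmin(f_d * f_c)])  # nothing to tune
```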

  5. Some New Algebraic and Topological Properties of the Minkowski Inverse in the Minkowski Space

    Directory of Open Access Journals (Sweden)

    Hanifa Zekraoui

    2013-01-01

    Full Text Available We introduce some new algebraic and topological properties of the Minkowski inverse A⊕ of an arbitrary matrix A∈Mm,n (including singular and rectangular) in a Minkowski space μ. Furthermore, we show that the Minkowski inverse A⊕ in a Minkowski space and the Moore-Penrose inverse A+ in a Hilbert space are different in many properties such as the existence, continuity, norm, and SVD. New conditions of the Minkowski inverse are also given. These conditions are related to the existence, continuity, and reverse order law. Finally, a new representation of the Minkowski inverse A⊕ is also derived.

  6. Electrochemically driven emulsion inversion

    Science.gov (United States)

    Johans, Christoffer; Kontturi, Kyösti

    2007-09-01

    It is shown that emulsions stabilized by ionic surfactants can be inverted by controlling the electrical potential across the oil-water interface. The potential dependent partitioning of sodium dodecyl sulfate (SDS) was studied by cyclic voltammetry at the 1,2-dichlorobenzene|water interface. In the emulsion the potential control was achieved by using a potential-determining salt. The inversion of a 1,2-dichlorobenzene-in-water (O/W) emulsion stabilized by SDS was followed by conductometry as a function of added tetrapropylammonium chloride. A sudden drop in conductivity was observed, indicating the change of the continuous phase from water to 1,2-dichlorobenzene, i.e. a water-in-1,2-dichlorobenzene emulsion was formed. The inversion potential is well in accordance with that predicted by the hydrophilic-lipophilic deviation if the interfacial potential is appropriately accounted for.

  7. Data quality for the inverse Ising problem

    International Nuclear Information System (INIS)

    Decelle, Aurélien; Ricci-Tersenghi, Federico; Zhang, Pan

    2016-01-01

    There are many methods proposed for inferring parameters of the Ising model from given data, that is a set of configurations generated according to the model itself. However, little attention has been paid until now to the data, e.g. how the data is generated, whether the inference error using one set of data could be smaller than using another set of data, etc. In this paper we discuss the data quality problem in the inverse Ising problem, using as a benchmark the kinetic Ising model. We quantify the quality of data using the effective rank of the correlation matrix, and show that data gathered in an out-of-equilibrium regime has a better quality than data gathered in equilibrium for coupling reconstruction. We also propose a matrix-perturbation based method for tuning the quality of given data and for removing bad-quality (i.e. redundant) configurations from data. (paper)
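
    One common way to define the effective rank used as a quality measure above is the exponential of the Shannon entropy of the normalised eigenvalue spectrum; the paper's exact definition may differ, so the sketch below is an assumption in that respect.

```python
# Sketch of the entropy-based effective rank of a correlation matrix.
import numpy as np

def effective_rank(C, eps=1e-12):
    lam = np.linalg.eigvalsh(C)
    lam = np.clip(lam, eps, None)
    p = lam / lam.sum()
    return float(np.exp(-np.sum(p * np.log(p))))

# toy check: nearly redundant samples give a correlation matrix of low effective rank
rng = np.random.default_rng(5)
x = rng.normal(size=(1000, 1))
X = np.hstack([x + 0.01 * rng.normal(size=(1000, 1)) for _ in range(5)])
C = np.corrcoef(X, rowvar=False)
print(effective_rank(C))            # close to 1, despite the matrix being 5 x 5
```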

  8. Channelling versus inversion

    DEFF Research Database (Denmark)

    Gale, A.S.; Surlyk, Finn; Anderskouv, Kresten

    2013-01-01

    Evidence from regional stratigraphical patterns in Santonian−Campanian chalk is used to infer the presence of a very broad channel system (5 km across) with a depth of at least 50 m, running NNW−SSE across the eastern Isle of Wight; only the western part of the channel wall and fill is exposed. W......−Campanian chalks in the eastern Isle of Wight, involving penecontemporaneous tectonic inversion of the underlying basement structure, are rejected....

  9. Reactivity in inverse micelles

    International Nuclear Information System (INIS)

    Brochette, Pascal

    1987-01-01

    This research thesis reports the study of the use of micro-emulsions of water in oil as reaction support. Only the 'inverse micelles' domain of the ternary mixing (water/AOT/isooctane) has been studied. The main addressed issues have been: the micro-emulsion disturbance in presence of reactants, the determination of reactant distribution and the resulting kinetic theory, the effect of the interface on electron transfer reactions, and finally protein solubilization [fr

  10. Block-triangular preconditioners for PDE-constrained optimization

    KAUST Repository

    Rees, Tyrone; Stoll, Martin

    2010-01-01

    In this paper we investigate the possibility of using a block-triangular preconditioner for saddle point problems arising in PDE-constrained optimization. In particular, we focus on a conjugate gradient-type method introduced by Bramble and Pasciak that uses self-adjointness of the preconditioned system in a non-standard inner product. We show when the Chebyshev semi-iteration is used as a preconditioner for the relevant matrix blocks involving the finite element mass matrix that the main drawback of the Bramble-Pasciak method-the appropriate scaling of the preconditioners-is easily overcome. We present an eigenvalue analysis for the block-triangular preconditioners that gives convergence bounds in the non-standard inner product and illustrates their competitiveness on a number of computed examples. Copyright © 2010 John Wiley & Sons, Ltd.

  11. Block-triangular preconditioners for PDE-constrained optimization

    KAUST Repository

    Rees, Tyrone

    2010-11-26

    In this paper we investigate the possibility of using a block-triangular preconditioner for saddle point problems arising in PDE-constrained optimization. In particular, we focus on a conjugate gradient-type method introduced by Bramble and Pasciak that uses self-adjointness of the preconditioned system in a non-standard inner product. We show when the Chebyshev semi-iteration is used as a preconditioner for the relevant matrix blocks involving the finite element mass matrix that the main drawback of the Bramble-Pasciak method-the appropriate scaling of the preconditioners-is easily overcome. We present an eigenvalue analysis for the block-triangular preconditioners that gives convergence bounds in the non-standard inner product and illustrates their competitiveness on a number of computed examples. Copyright © 2010 John Wiley & Sons, Ltd.

  12. Inverse transition radiation

    International Nuclear Information System (INIS)

    Steinhauer, L.C.; Romea, R.D.; Kimura, W.D.

    1997-01-01

    A new method for laser acceleration is proposed based upon the inverse process of transition radiation. The laser beam intersects an electron-beam traveling between two thin foils. The principle of this acceleration method is explored in terms of its classical and quantum bases and its inverse process. A closely related concept based on the inverse of diffraction radiation is also presented: this concept has the significant advantage that apertures are used to allow free passage of the electron beam. These concepts can produce net acceleration because they do not satisfy the conditions in which the Lawson-Woodward theorem applies (no net acceleration in an unbounded vacuum). Finally, practical aspects such as damage limits at optics are employed to find an optimized set of parameters. For reasonable assumptions an acceleration gradient of 200 MeV/m requiring a laser power of less than 1 GW is projected. An interesting approach to multi-staging the acceleration sections is also presented. copyright 1997 American Institute of Physics

  13. Intersections, ideals, and inversion

    International Nuclear Information System (INIS)

    Vasco, D.W.

    1998-01-01

    Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above at 50. The best fitting structure is dominantly one dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons

  14. Intersections, ideals, and inversion

    Energy Technology Data Exchange (ETDEWEB)

    Vasco, D.W.

    1998-10-01

    Techniques from computational algebra provide a framework for treating large classes of inverse problems. In particular, the discretization of many types of integral equations and of partial differential equations with undetermined coefficients leads to systems of polynomial equations. The structure of the solution set of such equations may be examined using algebraic techniques. For example, the existence and dimensionality of the solution set may be determined. Furthermore, it is possible to bound the total number of solutions. The approach is illustrated by a numerical application to the inverse problem associated with the Helmholtz equation. The algebraic methods are used in the inversion of a set of transverse electric (TE) mode magnetotelluric data from Antarctica. The existence of solutions is demonstrated and the number of solutions is found to be finite, bounded from above at 50. The best fitting structure is dominantly one-dimensional with a low crustal resistivity of about 2 ohm-m. Such a low value is compatible with studies suggesting lower surface wave velocities than found in typical stable cratons.

  15. Testing earthquake source inversion methodologies

    KAUST Repository

    Page, Morgan T.; Mai, Paul Martin; Schorlemmer, Danijel

    2011-01-01

    Source Inversion Validation Workshop; Palm Springs, California, 11-12 September 2010; Nowadays earthquake source inversions are routinely performed after large earthquakes and represent a key connection between recorded seismic and geodetic data

  16. Formal language constrained path problems

    Energy Technology Data Exchange (ETDEWEB)

    Barrett, C.; Jacob, R.; Marathe, M.

    1997-07-08

    In many path finding problems arising in practice, certain patterns of edge/vertex labels in the labeled graph being traversed are allowed/preferred, while others are disallowed. Motivated by such applications as intermodal transportation planning, the authors investigate the complexity of finding feasible paths in a labeled network, where the mode choice for each traveler is specified by a formal language. The main contributions of this paper include the following: (1) the authors show that the problem of finding a shortest path between a source and destination for a traveler whose mode choice is specified as a context free language is solvable efficiently in polynomial time, when the mode choice is specified as a regular language they provide algorithms with improved space and time bounds; (2) in contrast, they show that the problem of finding simple paths between a source and a given destination is NP-hard, even when restricted to very simple regular expressions and/or very simple graphs; (3) for the class of treewidth bounded graphs, they show that (i) the problem of finding a regular language constrained simple path between source and a destination is solvable in polynomial time and (ii) the extension to finding context free language constrained simple paths is NP-complete. Several extensions of these results are presented in the context of finding shortest paths with additional constraints. These results significantly extend the results in [MW95]. As a corollary of the results, they obtain a polynomial time algorithm for the BEST k-SIMILAR PATH problem studied in [SJB97]. The previous best algorithm was given by [SJB97] and takes exponential time in the worst case.
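
    The polynomial-time regular-language case in contribution (1) is commonly solved by searching the product of the labelled graph with a finite automaton; a minimal sketch of that construction with Dijkstra's algorithm is given below. The tiny graph, labels and automaton are illustrative assumptions, not the paper's algorithms with improved bounds.

```python
# Shortest path whose edge-label sequence must be accepted by a regular language,
# found by running Dijkstra on the product of the graph with a DFA for the language.
import heapq

def constrained_shortest_path(edges, dfa, start, goal, q0, accepting):
    """edges: dict u -> list of (v, label, weight); dfa: dict (state, label) -> state."""
    dist = {(start, q0): 0.0}
    pq = [(0.0, start, q0)]
    while pq:
        d, u, q = heapq.heappop(pq)
        if u == goal and q in accepting:
            return d
        if d > dist.get((u, q), float("inf")):
            continue
        for v, label, w in edges.get(u, []):
            q2 = dfa.get((q, label))
            if q2 is None:
                continue                      # move not allowed by the language
            if d + w < dist.get((v, q2), float("inf")):
                dist[(v, q2)] = d + w
                heapq.heappush(pq, (d + w, v, q2))
    return None

# language: any number of 'road' edges followed by any number of 'rail' edges
dfa = {(0, "road"): 0, (0, "rail"): 1, (1, "rail"): 1}
edges = {
    "A": [("B", "road", 1.0), ("C", "rail", 1.0)],
    "B": [("C", "rail", 1.0), ("D", "road", 5.0)],
    "C": [("D", "rail", 1.0)],
}
print(constrained_shortest_path(edges, dfa, "A", "D", 0, {0, 1}))  # 2.0 via A-C-D
```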

  17. Recurrent Neural Network Approach Based on the Integral Representation of the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Živković, Ivan S; Wei, Yimin

    2015-10-01

    In this letter, we present the dynamical equation and corresponding artificial recurrent neural network for computing the Drazin inverse for arbitrary square real matrix, without any restriction on its eigenvalues. Conditions that ensure the stability of the defined recurrent neural network as well as its convergence toward the Drazin inverse are considered. Several illustrative examples present the results of computer simulations.
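
    A compact numerical reference is handy when checking the output of such a recurrent-network solver; the sketch below uses the classical closed form A^D = A^k (A^(2k+1))^+ A^k with k at least the index of A, which is a standard representation and not the letter's integral or neural-network construction.

```python
# Reference computation of the Drazin inverse and a check of its defining properties.
import numpy as np

def drazin_inverse(A, k=None):
    n = A.shape[0]
    k = n if k is None else k              # the index never exceeds the dimension
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
X = drazin_inverse(A)
k = 3
# defining properties: A^(k+1) X = A^k,  X A X = X,  A X = X A
print(np.allclose(np.linalg.matrix_power(A, k + 1) @ X, np.linalg.matrix_power(A, k)),
      np.allclose(X @ A @ X, X),
      np.allclose(A @ X, X @ A))
```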

  18. Inverse scattering transform for the time dependent Schroedinger equation with applications to the KPI equation

    Energy Technology Data Exchange (ETDEWEB)

    Xin, Zhou [Wisconsin Univ., Madison (USA). Dept. of Mathematics]

    1990-03-01

    For the direct-inverse scattering transform of the time dependent Schroedinger equation, rigorous results are obtained based on an operator-triangular-factorization approach. By viewing the equation as a first order operator equation, similar results as for the first order n x n matrix system are obtained. The nonlocal Riemann-Hilbert problem for inverse scattering is shown to have solution. (orig.).

  19. Inverse scattering transform for the time dependent Schroedinger equation with applications to the KPI equation

    International Nuclear Information System (INIS)

    Zhou Xin

    1990-01-01

    For the direct-inverse scattering transform of the time dependent Schroedinger equation, rigorous results are obtained based on an operator-triangular-factorization approach. By viewing the equation as a first order operator equation, similar results as for the first order n x n matrix system are obtained. The nonlocal Riemann-Hilbert problem for inverse scattering is shown to have solution. (orig.)

  20. Mini-lecture course: Introduction into hierarchical matrix technique

    KAUST Repository

    Litvinenko, Alexander

    2017-12-14

    The H-matrix format has a log-linear computational cost and storage O(kn log n), where the rank k is a small integer and n is the number of locations (mesh points). The H-matrix technique allows us to work with a general class of matrices (not only structured, Toeplitz or sparse). H-matrices can keep the H-matrix data format during linear algebra operations (inverse, update, Schur complement).
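
    The storage saving comes from replacing admissible (well-separated) blocks by low-rank factors; the sketch below compresses one such block of a smooth kernel with a truncated SVD. The kernel, cluster geometry and rank are illustrative assumptions rather than an actual H-matrix library.

```python
# Core idea behind the H-matrix format: an admissible off-diagonal block is replaced
# by a rank-k factorisation, reducing storage from n*m entries to k*(n+m).
import numpy as np

rng = np.random.default_rng(6)
x = np.sort(rng.uniform(0.0, 1.0, 400))        # source cluster
y = np.sort(rng.uniform(3.0, 4.0, 400))        # well-separated target cluster
block = 1.0 / np.abs(x[:, None] - y[None, :])  # smooth kernel on separated clusters

k = 8
U, s, Vt = np.linalg.svd(block, full_matrices=False)
A = U[:, :k] * s[:k]                           # n x k factor
B = Vt[:k, :]                                  # k x m factor

rel_err = np.linalg.norm(A @ B - block) / np.linalg.norm(block)
print(rel_err, block.size, A.size + B.size)    # small error, ~25x less storage
```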

  1. Signature of inverse Compton emission from blazars

    Science.gov (United States)

    Gaur, Haritma; Mohan, Prashanth; Wierzcholska, Alicja; Gu, Minfeng

    2018-01-01

    Blazars are classified into high-, intermediate- and low-energy-peaked sources based on the location of their synchrotron peak. This lies in infra-red/optical to ultra-violet bands for low- and intermediate-peaked blazars. The transition from synchrotron to inverse Compton emission falls in the X-ray bands for such sources. We present the spectral and timing analysis of 14 low- and intermediate-energy-peaked blazars observed with XMM-Newton spanning 31 epochs. Parametric fits to X-ray spectra help constrain the possible location of transition from the high-energy end of the synchrotron to the low-energy end of the inverse Compton emission. In seven sources in our sample, we infer such a transition and constrain the break energy in the range 0.6-10 keV. The Lomb-Scargle periodogram is used to estimate the power spectral density (PSD) shape. It is well described by a power law in a majority of light curves, the index being flatter compared to general expectation from active galactic nuclei, ranging here between 0.01 and 1.12, possibly due to short observation durations resulting in an absence of long-term trends. A toy model involving synchrotron self-Compton and external Compton (EC; disc, broad line region, torus) mechanisms is used to estimate magnetic field strength ≤0.03-0.88 G in sources displaying the energy break and infer a prominent EC contribution. The time-scale for variability being shorter than synchrotron cooling implies steeper PSD slopes which are inferred in these sources.

  2. Support Minimized Inversion of Acoustic and Elastic Wave Scattering

    Science.gov (United States)

    Safaeinili, Ali

    Inversion of limited data is common in many areas of NDE such as X-ray Computed Tomography (CT), Ultrasonic and eddy current flaw characterization and imaging. In many applications, it is common to have a bias toward a solution with minimum (L^2)^2 norm without any physical justification. When it is a priori known that objects are compact as, say, with cracks and voids, by choosing "Minimum Support" functional instead of the minimum (L^2)^2 norm, an image can be obtained that is equally in agreement with the available data, while it is more consistent with what is most probably seen in the real world. We have utilized a minimum support functional to find a solution with the smallest volume. This inversion algorithm is most successful in reconstructing objects that are compact like voids and cracks. To verify this idea, we first performed a variational nonlinear inversion of acoustic backscatter data using minimum support objective function. A full nonlinear forward model was used to accurately study the effectiveness of the minimized support inversion without error due to the linear (Born) approximation. After successful inversions using a full nonlinear forward model, a linearized acoustic inversion was developed to increase speed and efficiency in imaging process. The results indicate that by using minimum support functional, we can accurately size and characterize voids and/or cracks which otherwise might be uncharacterizable. An extremely important feature of support minimized inversion is its ability to compensate for unknown absolute phase (zero-of-time). Zero-of-time ambiguity is a serious problem in the inversion of the pulse-echo data. The minimum support inversion was successfully used for the inversion of acoustic backscatter data due to compact scatterers without the knowledge of the zero-of-time. The main drawback to this type of inversion is its computer intensiveness. In order to make this type of constrained inversion available for common use, work
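
    A common way to express the 'Minimum Support' idea mentioned above is the focusing stabiliser s(m) = sum_i m_i^2/(m_i^2 + beta^2), which smoothly counts the non-zero cells; whether this is the exact functional used in this work is an assumption. A small sketch of the functional and its gradient follows.

```python
# Minimum-support stabilising functional: penalising it favours compact scatterers;
# beta controls the transition between "zero" and "non-zero" cells.
import numpy as np

def min_support(m, beta=1e-2):
    value = np.sum(m ** 2 / (m ** 2 + beta ** 2))
    grad = 2.0 * m * beta ** 2 / (m ** 2 + beta ** 2) ** 2
    return value, grad

compact = np.zeros(100)
compact[40:45] = 1.0                                  # small void-like anomaly
smeared = np.full(100, 0.05)                          # same total "volume", spread out
print(min_support(compact)[0], min_support(smeared)[0])  # compact model scores lower
```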

  3. Solidification processing of monotectic alloy matrix composites

    Science.gov (United States)

    Frier, Nancy L.; Shiohara, Yuh; Russell, Kenneth C.

    1989-01-01

    Directionally solidified aluminum-indium alloys of the monotectic composition were found to form an in situ rod composite which obeys a λ²R = constant relation. The experimental data shows good agreement with previously reported results. A theoretical boundary between cellular and dendritic growth conditions was derived and compared with experiments. The unique wetting characteristics of the monotectic alloys can be utilized to tailor the interface structure in metal matrix composites. Metal matrix composites with monotectic and hypermonotectic Al-In matrices were made by pressure infiltration, remelted and directionally solidified to observe the wetting characteristics of the alloys as well as the effect on structure of solidification in the constrained field of the fiber interstices. Models for monotectic growth are modified to take into account solidification in these constrained fields.

  4. Sharp Boundary Inversion of 2D Magnetotelluric Data using Bayesian Method.

    Science.gov (United States)

    Zhou, S.; Huang, Q.

    2017-12-01

    Conventional magnetotelluric (MT) inversion methods usually cannot recover the distribution of underground resistivity with clear boundaries, even when obviously different blocks are present. To address this problem, we develop a Bayesian framework to invert 2D MT data for sharp-boundary models, using the boundary locations and the interior resistivities as the random variables. Firstly, we use other MT inversion results, such as ModEM, to analyze the resistivity distribution roughly. Then, we select suitable random variables and convert them to traditional staggered-grid parameters, which are used in the finite-difference forward modelling. Finally, we construct the posterior probability density (PPD), which contains all the prior information and the model-data correlation, by Markov chain Monte Carlo (MCMC) sampling from the prior distribution. The depth, the resistivity and their uncertainties can then be estimated, and the same framework also provides sensitivity estimates. We applied the method to a synthetic case consisting of two large anomalous blocks in a uniform background. When we impose boundary-smoothness and near-true-model weighting constraints that mimic joint or constrained inversion, the model yields a more precise and focused depth distribution. We also test the inversion without constraints and find that the boundary can still be resolved, though not as well. Both inversions recover the resistivity well. The constrained result has a lower root-mean-square misfit than the ModEM inversion result. The data sensitivity obtained from the PPD shows that the resistivity is the best resolved, the centre depth comes second and the boundary sides are the worst.

  5. Introduction to Schroedinger inverse scattering

    International Nuclear Information System (INIS)

    Roberts, T.M.

    1991-01-01

    Schroedinger inverse scattering uses scattering coefficients and bound state data to compute underlying potentials. Inverse scattering has been studied extensively for isolated potentials q(x), which tend to zero as |x|→∞. Inverse scattering for isolated impurities in backgrounds p(x) that are periodic, are Heaviside steps, are constant for x>0 and periodic for x<0, or that tend to zero as x→∞ and tend to ∞ as x→-∞, has also been studied. This paper identifies literature for the five inverse problems just mentioned, and for four other inverse problems. Heaviside-step backgrounds are discussed at length. (orig.)

  6. An Augmented Lagrangian Method for a Class of Inverse Quadratic Programming Problems

    International Nuclear Information System (INIS)

    Zhang Jianzhong; Zhang Liwei

    2010-01-01

    We consider an inverse quadratic programming (QP) problem in which the parameters in the objective function of a given QP problem are adjusted as little as possible so that a known feasible solution becomes the optimal one. We formulate this problem as a minimization problem with a positive semidefinite cone constraint and its dual is a linearly constrained semismoothly differentiable (SC^1) convex programming problem with fewer variables than the original one. We demonstrate the global convergence of the augmented Lagrangian method for the dual problem and prove that the convergence rate of primal iterates, generated by the augmented Lagrange method, is proportional to 1/r, and the rate of multiplier iterates is proportional to 1/√r, where r is the penalty parameter in the augmented Lagrangian. As the objective function of the dual problem is an SC^1 function involving the projection operator onto the cone of symmetrically semi-definite matrices, the analysis requires extensive tools such as the singular value decomposition of matrices, an implicit function theorem for semismooth functions, and properties of the projection operator in the symmetric-matrix space. Furthermore, the semismooth Newton method with Armijo line search is applied to solve the subproblems in the augmented Lagrange approach, which is proven to have global convergence and local quadratic rate. Finally numerical results, implemented by the augmented Lagrangian method, are reported.

  7. Two-dimensional inversion of MT (magnetotelluric) data; MT ho no nijigen inversion kaiseki

    Energy Technology Data Exchange (ETDEWEB)

    Ito, S; Okuno, M; Ushijima, K; Mizunaga, H [Kyushu University, Fukuoka (Japan). Faculty of Engineering

    1997-05-27

    A program has been developed to accurately conduct inversion analysis of two-dimensional models using MT data. In the developed program, the finite element method (FEM) was applied to the forward analysis part. A method in which the Jacobian matrix is calculated only once and kept fixed during the iterations was compared with a method in which the Jacobian matrix is updated at each iteration of the inversion analysis. The numerical simulation revealed that the Jacobian-updating method provides more stable convergence for a simple 2D model, and that its calculation time is almost the same as that of the fixed-Jacobian method. To confirm the applicability of this program to actually measured data, results obtained from this program were compared with those from a Schlumberger-method analysis using MT data obtained in the Hatchobara geothermal area. Consequently, it was demonstrated that the two agree well. 17 refs., 7 figs.

  8. Inverse Faraday Effect Revisited

    Science.gov (United States)

    Mendonça, J. T.; Ali, S.; Davies, J. R.

    2010-11-01

    The inverse Faraday effect is usually associated with circularly polarized laser beams. However, it was recently shown that it can also occur for linearly polarized radiation [1]. The quasi-static axial magnetic field generated by a laser beam propagating in a plasma can be calculated by considering both the spin and the orbital angular momenta of the laser pulse. A net spin is present when the radiation is circularly polarized, and a net orbital angular momentum is present if there is any deviation from perfect rotational symmetry. This orbital angular momentum has recently been discussed in the plasma context [2], and can give an additional contribution to the axial magnetic field, thus enhancing or reducing the inverse Faraday effect. As a result, this effect, which is usually attributed to circular polarization, can also be excited by linearly polarized radiation if the incident laser propagates in a Laguerre-Gauss mode carrying a finite amount of orbital angular momentum. [1] S. Ali, J.R. Davies and J.T. Mendonca, Phys. Rev. Lett. 105, 035001 (2010). [2] J. T. Mendonca, B. Thidé, and H. Then, Phys. Rev. Lett. 102, 185005 (2009).

  9. Wavelet library for constrained devices

    Science.gov (United States)

    Ehlers, Johan Hendrik; Jassim, Sabah A.

    2007-04-01

    The wavelet transform is a powerful tool for image and video processing, useful in a range of applications. This paper is concerned with the efficiency of a fast wavelet transform (FWT) implementation and several wavelet filters suitable for constrained devices. Such constraints are typically found on mobile (cell) phones and personal digital assistants (PDAs), and can be a combination of limited memory, slow floating-point operations (compared with integer operations, most often because hardware support is missing) and limited local storage. Yet these devices are burdened with demanding tasks such as processing a live video or audio signal through on-board capturing sensors. In this paper we present a new wavelet software library, HeatWave, that can be used efficiently for image/video processing and analysis tasks on mobile phones and PDAs. We demonstrate that HeatWave is suitable for real-time applications, with fine control and range to suit transform demands, and we present experimental results to substantiate these claims. Finally, since this library is intended for real use, we considered several well-known differences between common embedded operating system platforms, such as missing routines or functions and stack limitations. This makes HeatWave suitable for a range of applications and research projects.

  10. Inverse Ising inference with correlated samples

    International Nuclear Information System (INIS)

    Obermayer, Benedikt; Levine, Erel

    2014-01-01

    Correlations between two variables of a high-dimensional system can be indicative of an underlying interaction, but can also result from indirect effects. Inverse Ising inference is a method to distinguish one from the other. Essentially, the parameters of the least constrained statistical model are learned from the observed correlations such that direct interactions can be separated from indirect correlations. Among many other applications, this approach has been helpful for protein structure prediction, because residues which interact in the 3D structure often show correlated substitutions in a multiple sequence alignment. In this context, samples used for inference are not independent but share an evolutionary history on a phylogenetic tree. Here, we discuss the effects of correlations between samples on global inference. Such correlations could arise due to phylogeny but also via other slow dynamical processes. We present a simple analytical model to address the resulting inference biases, and develop an exact method accounting for background correlations in alignment data by combining phylogenetic modeling with an adaptive cluster expansion algorithm. We find that popular reweighting schemes are only marginally effective at removing phylogenetic bias, suggest a rescaling strategy that yields better results, and provide evidence that our conclusions carry over to the frequently used mean-field approach to the inverse Ising problem. (paper)
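
    As a small illustration of the mean-field route to the inverse Ising problem mentioned above, the sketch below generates spin configurations with a Gibbs sampler (successive sweeps are themselves correlated samples) and recovers couplings from the inverse of the sample covariance matrix via the standard naive mean-field relation J_ij ≈ -(C⁻¹)_ij for i ≠ j. The chain couplings, sample counts and random seed are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_spins, n_sweeps = 5, 20000

# planted couplings on a chain
J_true = np.zeros((n_spins, n_spins))
for i in range(n_spins - 1):
    J_true[i, i + 1] = J_true[i + 1, i] = 0.4

# heat-bath (Gibbs) sampling; consecutive sweeps are correlated samples
s = rng.choice([-1, 1], size=n_spins)
samples = np.empty((n_sweeps, n_spins))
for k in range(n_sweeps):
    for i in range(n_spins):
        h = J_true[i] @ s                      # local field on spin i
        p_up = 1.0 / (1.0 + np.exp(-2.0 * h))
        s[i] = 1 if rng.random() < p_up else -1
    samples[k] = s

# naive mean-field inversion: couplings from the inverse covariance matrix
C = np.cov(samples, rowvar=False)
J_mf = -np.linalg.inv(C)
np.fill_diagonal(J_mf, 0.0)
print(np.round(J_mf, 2))                       # compare with J_true
```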

  11. The effects of flavour symmetry breaking on hadron matrix elements

    International Nuclear Information System (INIS)

    Cooke, A.N.; Horsley, R.; Pleiter, D.; Zanotti, J.M.

    2012-12-01

    By considering a flavour expansion about the SU(3)-flavour symmetric point, we investigate how flavour-blindness constrains octet baryon matrix elements after SU(3) is broken by the mass difference between the strange and light quarks. We find the expansions to be highly constrained along a mass trajectory where the singlet quark mass is held constant, which proves beneficial for extrapolations of 2+1 flavour lattice data to the physical point. We investigate these effects numerically via a lattice calculation of the flavour-conserving and flavour-changing matrix elements of the vector and axial operators between octet baryon states.

  12. The effects of flavour symmetry breaking on hadron matrix elements

    Energy Technology Data Exchange (ETDEWEB)

    Cooke, A.N.; Horsley, R. [Edinburgh Univ. (United Kingdom). School of Physics and Astronomy; Nakamura, Y. [RIKEN Advanced Institute for Computational Science, Kobe (Japan); Pleiter, D. [Juelich Research Centre (Germany); Regensburg Univ. (Germany). Institut fuer Theoretische Physik; Rakow, P.E.L. [Liverpool Univ. (United Kingdom). Theoretical Physics Division; Schierholz, G. [Deutsches Elektronen-Synchrotron (DESY), Hamburg (Germany); Zanotti, J.M. [Adelaide Univ. (Australia). School of Chemistry and Physics

    2012-12-15

    By considering a flavour expansion about the SU(3)-flavour symmetric point, we investigate how flavour-blindness constrains octet baryon matrix elements after SU(3) is broken by the mass difference between the strange and light quarks. We find the expansions to be highly constrained along a mass trajectory where the singlet quark mass is held constant, which proves beneficial for extrapolations of 2+1 flavour lattice data to the physical point. We investigate these effects numerically via a lattice calculation of the flavour-conserving and flavour-changing matrix elements of the vector and axial operators between octet baryon states.

  13. A First-order Prediction-Correction Algorithm for Time-varying (Constrained) Optimization: Preprint

    Energy Technology Data Exchange (ETDEWEB)

    Dall-Anese, Emiliano [National Renewable Energy Laboratory (NREL), Golden, CO (United States); Simonetto, Andrea [Universite catholique de Louvain

    2017-07-25

    This paper focuses on the design of online algorithms based on prediction-correction steps to track the optimal solution of a time-varying constrained problem. Existing prediction-correction methods have been shown to work well for unconstrained convex problems and for settings where obtaining the inverse of the Hessian of the cost function is computationally affordable. The prediction-correction algorithm proposed in this paper addresses the limitations of existing methods by tackling constrained problems and by designing a first-order prediction step that relies on the Hessian of the cost function but does not require the computation of its inverse. Analytical results are established to quantify the tracking error. Numerical simulations corroborate the analytical results and showcase the performance and benefits of the algorithm.

  14. Recurrent Neural Network for Computing the Drazin Inverse.

    Science.gov (United States)

    Stanimirović, Predrag S; Zivković, Ivan S; Wei, Yimin

    2015-11-01

    This paper presents a recurrent neural network (RNN) for computing the Drazin inverse of a real matrix in real time. The RNN is composed of n independent parts (subnetworks), where n is the order of the input matrix. These subnetworks can operate concurrently, so parallel and distributed processing can be achieved, and computational advantages over the existing sequential algorithms can be attained in real-time applications. The RNN defined in this paper is convenient for implementation in an electronic circuit. The number of neurons in the neural network is the same as the number of elements in the output matrix, which represents the Drazin inverse. The difference between the proposed RNN and the existing ones for Drazin inverse computation lies in their network architecture and dynamics. The conditions that ensure the stability of the defined RNN, as well as its convergence toward the Drazin inverse, are considered. In addition, illustrative examples and applications to practical engineering problems are discussed to show the efficacy of the proposed neural network.

  15. Building Generalized Inverses of Matrices Using Only Row and Column Operations

    Science.gov (United States)

    Stuart, Jeffrey

    2010-01-01

    Most students complete their first and only course in linear algebra with the understanding that a real, square matrix "A" has an inverse if and only if "rref"("A"), the reduced row echelon form of "A", is the identity matrix I_n. That is, if they apply elementary row operations via the Gauss-Jordan algorithm to the partitioned matrix…
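
    A minimal sketch of the row-reduction construction the abstract starts from: Gauss-Jordan elimination applied to the partitioned matrix [A | I_n], with the inverse read off the right-hand block once the left block has been reduced to I_n. The example below uses SymPy and an invertible 2 × 2 matrix; extending the idea to generalized inverses of singular or rectangular matrices is the subject of the article itself.

```python
import sympy as sp

A = sp.Matrix([[2, 1], [5, 3]])
n = A.shape[0]

# Gauss-Jordan on [A | I]: rref turns the left block into I and the right block into A^{-1}
augmented = A.row_join(sp.eye(n))
rref_form, _ = augmented.rref()
A_inv = rref_form[:, n:]

print(A_inv)          # Matrix([[3, -1], [-5, 2]])
print(A * A_inv)      # identity matrix
```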

  16. A finite-difference contrast source inversion method

    International Nuclear Information System (INIS)

    Abubakar, A; Hu, W; Habashy, T M; Van den Berg, P M

    2008-01-01

    We present a contrast source inversion (CSI) algorithm using a finite-difference (FD) approach as its backbone for reconstructing the unknown material properties of inhomogeneous objects embedded in a known inhomogeneous background medium. Unlike the CSI method using the integral equation (IE) approach, the FD-CSI method can readily employ an arbitrary inhomogeneous medium as its background. The ability to use an inhomogeneous background medium makes this algorithm very suitable for through-wall imaging and time-lapse inversion applications. As in the IE-CSI algorithm, the unknown contrast sources and contrast function are updated alternately to reconstruct the unknown objects without requiring the solution of the full forward problem at each iteration step of the optimization process. The FD solver is formulated in the frequency domain and is equipped with a perfectly matched layer (PML) absorbing boundary condition. The FD operator used in the FD-CSI method depends only on the background medium and the frequency of operation, and thus does not change throughout the inversion process. Therefore, at least for two-dimensional (2D) configurations, where the size of the stiffness matrix is manageable, the FD stiffness matrix can be inverted using a non-iterative approach such as Gauss elimination for sparse matrices. In this case, an LU decomposition needs to be done only once and can then be reused for multiple source positions and in successive iterations of the inversion. Numerical experiments show that this FD-CSI algorithm has an excellent performance for inverting inhomogeneous objects embedded in an inhomogeneous background medium.
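
    The "factor once, reuse many times" strategy described above can be sketched with a sparse LU factorization computed a single time and reused for several right-hand sides. The tridiagonal operator below is a generic stand-in for the frequency-domain FD stiffness matrix, chosen only for illustration.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

n = 200
# generic sparse tridiagonal operator playing the role of the FD stiffness matrix,
# which depends only on the background medium and frequency and stays fixed
A = sp.diags([-np.ones(n - 1), 2.1 * np.ones(n), -np.ones(n - 1)],
             [-1, 0, 1], format="csc")

lu = spla.splu(A)                               # LU decomposition done only once

rng = np.random.default_rng(0)
sources = [rng.standard_normal(n) for _ in range(5)]
fields = [lu.solve(b) for b in sources]         # factorization reused for every source

print(np.allclose(A @ fields[0], sources[0]))   # True
```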

  17. Order-constrained linear optimization.

    Science.gov (United States)

    Tidwell, Joe W; Dougherty, Michael R; Chrabaszcz, Jeffrey S; Thomas, Rick P

    2017-11-01

    Despite the fact that data and theories in the social, behavioural, and health sciences are often represented on an ordinal scale, there has been relatively little emphasis on modelling ordinal properties. The most common analytic framework used in psychological science is the general linear model, whose variants include ANOVA, MANOVA, and ordinary linear regression. While these methods are designed to provide the best fit to the metric properties of the data, they are not designed to maximally model ordinal properties. In this paper, we develop an order-constrained linear least-squares (OCLO) optimization algorithm that maximizes the linear least-squares fit to the data conditional on maximizing the ordinal fit based on Kendall's τ. The algorithm builds on the maximum rank correlation estimator (Han, 1987, Journal of Econometrics, 35, 303) and the general monotone model (Dougherty & Thomas, 2012, Psychological Review, 119, 321). Analyses of simulated data indicate that when modelling data that adhere to the assumptions of ordinary least squares, OCLO shows minimal bias, little increase in variance, and almost no loss in out-of-sample predictive accuracy. In contrast, under conditions in which data include a small number of extreme scores (fat-tailed distributions), OCLO shows less bias and variance, and substantially better out-of-sample predictive accuracy, even when the outliers are removed. We show that the advantages of OCLO over ordinary least squares in predicting new observations hold across a variety of scenarios in which researchers must decide to retain or eliminate extreme scores when fitting data. © 2017 The British Psychological Society.

  18. Inverse fusion PCR cloning.

    Directory of Open Access Journals (Sweden)

    Markus Spiliotis

    Inverse fusion PCR cloning (IFPC) is an easy, PCR-based three-step cloning method that allows the seamless and directional insertion of PCR products into virtually all plasmids, with a free choice of the insertion site. The PCR-derived inserts contain a vector-complementary 5'-end that allows a fusion with the vector by an overlap extension PCR, and the resulting amplified insert-vector fusions are then circularized by ligation prior to transformation. A minimal amount of starting material is needed and experimental steps are reduced. Untreated circular plasmid, or alternatively bacteria containing the plasmid, can be used as templates for the insertion, and clean-up of the insert fragment is not strictly required. The whole cloning procedure can be performed within a minimal hands-on time and results in the generation of hundreds to tens of thousands of positive colonies, with a minimal background.

  19. Inverse plasma equilibria

    International Nuclear Information System (INIS)

    Hicks, H.R.; Dory, R.A.; Holmes, J.A.

    1983-01-01

    We illustrate in some detail a 2D inverse-equilibrium solver that was constructed to analyze tokamak configurations and stellarators (the latter in the context of the average method). To ensure that the method is suitable not only to determine equilibria, but also to provide appropriately represented data for existing stability codes, it is important to be able to control the Jacobian J̃ ≡ ∂(R,Z)/∂(ρ,θ). The form chosen is J̃ = J₀(ρ) R^l ρ, where ρ is a flux-surface label and l is an integer. The initial implementation is for a fixed conducting-wall boundary, but the technique can be extended to a free-boundary model.

  20. A Single Software For Processing, Inversion, And Presentation Of Aem Data Of Different Systems

    DEFF Research Database (Denmark)

    Auken, Esben; Christiansen, Anders Vest; Viezzoli, Andrea

    2009-01-01

    modeling and Spatially Constrained Inversion (SCI) for quasi-3D inversion. The Workbench implements a user-friendly interface to these algorithms, enabling non-geophysicists to carry out inversion of complicated airborne data sets without having in-depth knowledge of how the algorithm actually works. Just...... to manage data and settings. The benefits of using a database compared to flat ASCII column files should not be underestimated. Firstly, user-handled input/output is nearly eliminated, thus minimizing the chance of human errors. Secondly, data are stored in a well-described and documented format which...

  1. Shape-constrained regularization by statistical multiresolution for inverse problems: asymptotic analysis

    International Nuclear Information System (INIS)

    Frick, Klaus; Marnitz, Philipp; Munk, Axel

    2012-01-01

    This paper is concerned with a novel regularization technique for solving linear ill-posed operator equations in Hilbert spaces from data that are corrupted by white noise. We combine convex penalty functionals with extreme-value statistics of projections of the residuals on a given set of sub-spaces in the image space of the operator. We prove general consistency and convergence rate results in the framework of Bregman divergences which allows for a vast range of penalty functionals. Various examples that indicate the applicability of our approach will be discussed. We will illustrate in the context of signal and image processing that the presented method constitutes a locally adaptive reconstruction method. (paper)

  2. Pulsed laser deposition of the lysozyme protein: an unexpected “Inverse MAPLE” process

    DEFF Research Database (Denmark)

    Schou, Jørgen; Matei, Andreea; Constantinescu, Catalin

    2012-01-01

    Films of organic materials are commonly deposited by laser assisted methods, such as MAPLE (matrix-assisted pulsed laser evaporation), where a few percent of the film material in the target is protected by a light-absorbing volatile matrix. Another possibility is to irradiate the dry organic...... the ejection and deposition of lysozyme. This can be called an “inverse MAPLE” process, since the ratio of “matrix” to film material in the target is 10:90, which is inverse of the typical MAPLE process where the film material is dissolved in the matrix down to several wt.%. Lysozyme is a well-known protein...

  3. On a quadratic inverse eigenvalue problem

    International Nuclear Information System (INIS)

    Cai, Yunfeng; Xu, Shufang

    2009-01-01

    This paper concerns the quadratic inverse eigenvalue problem (QIEP) of constructing real symmetric matrices M, C and K of size n × n, with M nonsingular, so that the quadratic matrix polynomial Q(λ) ≡ λ²M + λC + K has a completely prescribed set of eigenvalues and eigenvectors. It is shown via construction that the QIEP has a solution if and only if r 0, where r and δ are computable from the prescribed spectral data. A necessary and sufficient condition for the existence of a solution to the QIEP with M being positive definite is also established in a constructive way. Furthermore, two algorithms are developed: one is to solve the QIEP; another is to find a particular solution to the QIEP with the leading coefficient matrix being positive definite, which also provides us an approach to a simultaneous reduction of the real symmetric matrix triple (M, C, K) by real congruence. Numerical results show that the two algorithms are feasible and numerically reliable.

  4. Transmuted Generalized Inverse Weibull Distribution

    OpenAIRE

    Merovci, Faton; Elbatal, Ibrahim; Ahmed, Alaa

    2013-01-01

    A generalization of the generalized inverse Weibull distribution, the so-called transmuted generalized inverse Weibull distribution, is proposed and studied. We use the quadratic rank transmutation map (QRTM) to generate a flexible family of probability distributions, taking the generalized inverse Weibull distribution as the base distribution and introducing a new parameter that offers more distributional flexibility. Various structural properties including explicit expression...

  5. Complementarity of flux- and biometric-based data to constrain parameters in a terrestrial carbon model

    Directory of Open Access Journals (Sweden)

    Zhenggang Du

    2015-03-01

    To improve models for accurate projections, data assimilation, an emerging statistical approach to combine models with data, has recently been developed to probe initial conditions, parameters, data content, response functions and model uncertainties. Quantifying how much information is contained in different data streams is essential to predict future states of ecosystems and the climate. This study uses a data assimilation approach to examine the information content of flux- and biometric-based data used to constrain parameters in a terrestrial carbon (C) model, which includes canopy photosynthesis and vegetation-soil C transfer submodels. Three assimilation experiments were constructed with either net ecosystem exchange (NEE) data only, biometric data only [including foliage and woody biomass, litterfall, soil organic C (SOC) and soil respiration], or both NEE and biometric data to constrain model parameters by a probabilistic inversion application. The results showed that NEE data mainly constrained parameters associated with gross primary production (GPP) and ecosystem respiration (RE) but were almost invalid for C transfer coefficients, while biometric data were more effective in constraining C transfer coefficients than other parameters. NEE and biometric data constrained about 26% (6) and 30% (7) of a total of 23 parameters, respectively, but their combined application constrained about 61% (14) of all parameters. The complementarity of NEE and biometric data was obvious in constraining most of the parameters. The poor constraint by NEE or biometric data alone was probably attributable to either the lack of long-term C dynamic data or errors in measurements. Overall, our results suggest that flux- and biometric-based data, reflecting different processes in ecosystem C dynamics, have different capacities to constrain parameters related to photosynthesis and C transfer coefficients, respectively. Multiple data sources could also

  6. Calculation of the inverse data space via sparse inversion

    KAUST Repository

    Saragiotis, Christos; Doulgeris, Panagiotis C.; Verschuur, Dirk Jacob Eric

    2011-01-01

    The inverse data space provides a natural separation of primaries and surface-related multiples, as the surface multiples map onto the area around the origin while the primaries map elsewhere. However, the calculation of the inverse data is far from

  7. Inverse feasibility problems of the inverse maximum flow problems

    Indian Academy of Sciences (India)

    Inverse feasibility problems of the inverse maximum flow problems. Adrian Deaconu and Eleonor Ciurea, Department of Mathematics and Computer Science, Faculty of Mathematics and Informatics, Transilvania University of Brasov, Iuliu Maniu St. 50, Brasov, Romania. Pages 199–209. © Indian Academy of Sciences.

  8. Technical Note: Variance-covariance matrix and averaging kernels for the Levenberg-Marquardt solution of the retrieval of atmospheric vertical profiles

    Directory of Open Access Journals (Sweden)

    S. Ceccherini

    2010-03-01

    The variance-covariance matrix (VCM) and the averaging kernel matrix (AKM) are widely used tools to characterize atmospheric vertical profiles retrieved from remote sensing measurements. Accurate estimation of these quantities is essential both for evaluating the quality of the retrieved profiles and for the correct use of the profiles themselves in subsequent applications such as data comparison, data assimilation and data fusion. We propose a new method to estimate the VCM and AKM of vertical profiles retrieved using the Levenberg-Marquardt iterative technique. We apply the new method to the inversion of simulated limb emission measurements. We then compare the obtained VCM and AKM with those resulting from other methods already published in the literature and with accurate estimates derived using statistical and numerical estimators. The proposed method accounts for all the iterations done in the inversion and provides the most accurate VCM and AKM. Furthermore, it correctly estimates the VCM and the AKM even if the retrieval iterations are stopped when a physically meaningful convergence criterion is fulfilled, i.e. before achievement of numerical convergence at machine precision. The method can be easily implemented in any Levenberg-Marquardt iterative retrieval scheme, either constrained or unconstrained, without significant computational overhead.
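
    For reference, the conventional single-step expressions for the retrieval covariance and averaging kernel at convergence are reproduced below in assumed optimal-estimation notation (Jacobian K, measurement-noise covariance S_y, a priori covariance S_a); these are the standard formulas the iteration-aware estimates of the paper are compared against, not the new method itself.

$$
\hat{S} \;=\; \left(K^{T} S_y^{-1} K + S_a^{-1}\right)^{-1},
\qquad
A \;=\; \hat{S}\, K^{T} S_y^{-1} K .
$$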

  9. Quantifying uncertainties of seismic Bayesian inversion of Northern Great Plains

    Science.gov (United States)

    Gao, C.; Lekic, V.

    2017-12-01

    Elastic waves excited by earthquakes are the fundamental observations of seismological studies. Seismologists measure information such as travel time, amplitude, and polarization to infer the properties of the earthquake source, seismic wave propagation, and subsurface structure. Across numerous applications, seismic imaging has been able to take advantage of complementary seismic observables to constrain profiles and lateral variations of Earth's elastic properties. Moreover, seismic imaging plays a unique role in multidisciplinary studies of geoscience by providing direct constraints on the unreachable interior of the Earth. Accurate quantification of the uncertainties of inferences made from seismic observations is of paramount importance for interpreting seismic images and testing geological hypotheses. However, such quantification remains challenging and subjective due to the non-linearity and non-uniqueness of the geophysical inverse problem. In this project, we apply a reversible jump Markov chain Monte Carlo (rjMcMC) algorithm for a transdimensional Bayesian inversion of continental lithosphere structure. Such inversion allows us to quantify the uncertainties of the inversion results by inverting for an ensemble solution. It also yields an adaptive parameterization that enables simultaneous inversion of different elastic properties without imposing strong prior information on the relationship between them. We present retrieved profiles of shear velocity (Vs) and radial anisotropy in the Northern Great Plains using measurements from USArray stations. We use both seismic surface wave dispersion and receiver function data because of their complementary constraints on lithosphere structure. Furthermore, we analyze the uncertainties of both individual and joint inversion of these two data types to quantify the benefit of joint inversion. As an application, we infer the variation of Moho depths and crustal layering across the Northern Great Plains.

  10. Modeling the microstructural evolution during constrained sintering

    DEFF Research Database (Denmark)

    Bjørk, Rasmus; Frandsen, Henrik Lund; Tikare, V.

    A numerical model able to simulate solid state constrained sintering of a powder compact is presented. The model couples an existing kinetic Monte Carlo (kMC) model for free sintering with a finite element (FE) method for calculating stresses on a microstructural level. The microstructural response...... to the stress field, as well as the FE calculation of the stress field from the microstructural evolution, is discussed. The sintering behavior of two powder compacts constrained by a rigid substrate is simulated and compared to free sintering of the same samples. Constrained sintering results in a larger number...

  11. Resolving spectral information from time domain induced polarization data through 2-D inversion

    DEFF Research Database (Denmark)

    Fiandaca, Gianluca; Ramm, James; Binley, A.

    2013-01-01

    these limitations of conventional approaches, a new 2-D inversion algorithm has been developed using the full voltage decay of the IP response, together with an accurate description of the transmitter waveform and receiver transfer function. This allows reconstruction of the spectral information contained in the TD...... sampling necessary in the fast Hankel transform. These features, together with parallel computation, ensure inversion times comparable with those of direct current algorithms. The algorithm has been developed in a laterally constrained inversion scheme, and handles both smooth and layered inversions......; the latter being helpful in sedimentary environments, where quasi-layered models often represent the actual geology more accurately than smooth minimum-structure models. In the layered inversion approach, a general method to derive the thickness derivative from the complex conductivity Jacobian is also...

  12. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-07

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design
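
    The low-rank structure exploited by H-matrix formats can be seen directly: for a smooth covariance kernel, the block coupling two well-separated clusters of points has rapidly decaying singular values and hence a small numerical rank, so it can be stored cheaply in factored form. The sketch below uses a squared-exponential kernel with parameter values chosen purely for illustration (the Matern class mentioned above behaves similarly).

```python
import numpy as np

x = np.linspace(0.0, 1.0, 400)
ell = 0.2
# dense covariance matrix from a smooth (squared-exponential) kernel
C = np.exp(-((x[:, None] - x[None, :]) ** 2) / (2.0 * ell ** 2))

# off-diagonal block coupling points in [0, 0.25) with points in (0.75, 1]
block = C[:100, 300:]
svals = np.linalg.svd(block, compute_uv=False)
numerical_rank = int(np.sum(svals > 1e-8 * svals[0]))

print(block.shape, numerical_rank)   # numerical rank is much smaller than the block size
```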

  13. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander

    2015-01-05

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.

  14. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul

    2015-01-01

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design

  15. Hierarchical matrix approximation of large covariance matrices

    KAUST Repository

    Litvinenko, Alexander; Genton, Marc G.; Sun, Ying; Tempone, Raul

    2015-01-01

    We approximate large non-structured covariance matrices in the H-matrix format with a log-linear computational cost and storage O(n log n). We compute inverse, Cholesky decomposition and determinant in H-format. As an example we consider the class of Matern covariance functions, which are very popular in spatial statistics, geostatistics, machine learning and image analysis. Applications are: kriging and optimal design.

  16. Fast Component Pursuit for Large-Scale Inverse Covariance Estimation.

    Science.gov (United States)

    Han, Lei; Zhang, Yu; Zhang, Tong

    2016-08-01

    The maximum likelihood estimation (MLE) for the Gaussian graphical model, which is also known as the inverse covariance estimation problem, has gained increasing interest recently. Most existing works assume that inverse covariance estimators contain sparse structure and then construct models with ℓ₁ regularization. In this paper, different from existing works, we study the inverse covariance estimation problem from another perspective, by efficiently modeling the low-rank structure in the inverse covariance, which is assumed to be a combination of a low-rank part and a diagonal matrix. One motivation for this assumption is that the low-rank structure is common in many applications, including climate and financial analysis, and another is that such an assumption can reduce the computational complexity when computing the inverse. Specifically, we propose an efficient COmponent Pursuit (COP) method to obtain the low-rank part, where each component can be sparse. For optimization, the COP method greedily learns a rank-one component in each iteration by maximizing the log-likelihood. Moreover, the COP algorithm enjoys several appealing properties, including the existence of an efficient solution in each iteration and a theoretical guarantee on the convergence of this greedy approach. Experiments on large-scale synthetic and real-world datasets including thousands of millions of variables show that the COP method is faster than the state-of-the-art techniques for the inverse covariance estimation problem while achieving comparable log-likelihood on test data.
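
    Why a "low rank plus diagonal" model of the inverse covariance is cheap to work with can be seen from the Woodbury identity, which reduces an n × n inversion to an r × r one. The sketch below verifies the identity numerically for Θ = D + UUᵀ; the sizes and random data are illustrative, and this shows the structural idea rather than the COP algorithm itself.

```python
import numpy as np

rng = np.random.default_rng(0)
n, r = 500, 5
U = rng.standard_normal((n, r))
d = 1.0 + rng.random(n)                       # positive diagonal entries

Theta = np.diag(d) + U @ U.T                  # diagonal plus low-rank matrix

# Woodbury: (D + U U^T)^{-1} = D^{-1} - D^{-1} U (I_r + U^T D^{-1} U)^{-1} U^T D^{-1}
Dinv = 1.0 / d
small = np.eye(r) + (U.T * Dinv) @ U          # only an r x r system to solve
Theta_inv = np.diag(Dinv) - (Dinv[:, None] * U) @ np.linalg.solve(small, U.T * Dinv)

print(np.allclose(Theta_inv, np.linalg.inv(Theta)))   # True
```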

  17. Inverse Schroedinger equation and the exact wave function

    International Nuclear Information System (INIS)

    Nakatsuji, Hiroshi

    2002-01-01

    Using the inverse of the Hamiltonian, we introduce the inverse Schroedinger equation (ISE) that is equivalent to the ordinary Schroedinger equation (SE). The ISE has the variational principle and the H-square group of equations as the SE has. When we use a positive Hamiltonian, shifting the energy origin, the inverse energy becomes monotonic and we further have the inverse Ritz variational principle and cross-H-square equations. The concepts of the SE and the ISE are combined to generalize the theory for calculating the exact wave function that is a common eigenfunction of the SE and ISE. The Krylov sequence is extended to include the inverse Hamiltonian, and the complete Krylov sequence is introduced. The iterative configuration interaction (ICI) theory is generalized to cover both the SE and ISE concepts and four different computational methods of calculating the exact wave function are presented in both analytical and matrix representations. The exact wave-function theory based on the inverse Hamiltonian can be applied to systems that have singularities in the Hamiltonian. The generalized ICI theory is applied to the hydrogen atom, giving the exact solution without any singularity problem

  18. Constraining global methane emissions and uptake by ecosystems

    International Nuclear Information System (INIS)

    Spahni, R.; Wania, R.; Neef, L.; Van Weele, M.; Van Velthoven, P.; Pison, I.; Bousquet, P.

    2011-01-01

    Natural methane (CH₄) emissions from wet ecosystems are an important part of today's global CH₄ budget. Climate affects the exchange of CH₄ between ecosystems and the atmosphere by influencing CH₄ production, oxidation, and transport in the soil. The net CH₄ exchange depends on ecosystem hydrology, soil and vegetation characteristics. Here, the LPJ-WHyMe global dynamical vegetation model is used to simulate global net CH₄ emissions for different ecosystems: northern peatlands (45°-90° N), naturally inundated wetlands (60° S-45° N), rice agriculture and wet mineral soils. Mineral soils are a potential CH₄ sink, but can also be a source, with the direction of the net exchange depending on soil moisture content. The geographical and seasonal distributions are evaluated against multi-dimensional atmospheric inversions for 2003-2005, using two independent four-dimensional variational assimilation systems. The atmospheric inversions are constrained by the atmospheric CH₄ observations of the SCIAMACHY satellite instrument and global surface networks. Compared to LPJ-WHyMe, the inversions result in a significant reduction in the emissions from northern peatlands and suggest that LPJ-WHyMe maximum annual emissions peak about one month late. The inversions do not put strong constraints on the division of sources between inundated wetlands and wet mineral soils in the tropics. Based on the inversion results we diagnose model parameters in LPJ-WHyMe and simulate the surface exchange of CH₄ over the period 1990-2008. Over the whole period we infer an increase of global ecosystem CH₄ emissions of +1.11 Tg CH₄ yr⁻¹, not considering potential additional changes in wetland extent. The increase in simulated CH₄ emissions is attributed to enhanced soil respiration resulting from the observed rise in land temperature and in atmospheric carbon dioxide that were used as input. The long-term decline of the atmospheric CH₄ growth rate from 1990

  19. Face inversion increases attractiveness.

    Science.gov (United States)

    Leder, Helmut; Goller, Juergen; Forster, Michael; Schlageter, Lena; Paul, Matthew A

    2017-07-01

    Assessing facial attractiveness is a ubiquitous, inherent, and hard-wired phenomenon in everyday interactions. As such, it has highly adapted to the default way that faces are typically processed: viewing faces in upright orientation. By inverting faces, we can disrupt this default mode and study how facial attractiveness is assessed. Faces, rotated by 90° (tilting to either side) and 180°, were rated on attractiveness and distinctiveness scales. For both orientations, we found that faces were rated more attractive and less distinctive than upright faces. Importantly, these effects were more pronounced for faces rated low in upright orientation, and smaller for highly attractive faces. In other words, the less attractive a face was, the more it gained in attractiveness by inversion or rotation. Based on these findings, we argue that facial attractiveness assessments might not rely on the presence of attractive facial characteristics, but on the absence of distinctive, unattractive characteristics. These unattractive characteristics are potentially weighed against an individual, attractive prototype in assessing facial attractiveness. Copyright © 2017 The Authors. Published by Elsevier B.V. All rights reserved.

  20. Inverse problem in hydrogeology

    Science.gov (United States)

    Carrera, Jesús; Alcolea, Andrés; Medina, Agustín; Hidalgo, Juan; Slooten, Luit J.

    2005-03-01

    The state of the groundwater inverse problem is synthesized. Emphasis is placed on aquifer characterization, where modelers have to deal with conceptual model uncertainty (notably spatial and temporal variability), scale dependence, many types of unknown parameters (transmissivity, recharge, boundary conditions, etc.), nonlinearity, and often low sensitivity of state variables (typically heads and concentrations) to aquifer properties. Because of these difficulties, calibration cannot be separated from the modeling process, as it is sometimes done in other fields. Instead, it should be viewed as one step in the process of understanding aquifer behavior. In fact, it is shown that actual parameter estimation methods do not differ from each other in the essence, though they may differ in the computational details. It is argued that there is ample room for improvement in groundwater inversion: development of user-friendly codes, accommodation of variability through geostatistics, incorporation of geological information and different types of data (temperature, occurrence and concentration of isotopes, age, etc.), proper accounting of uncertainty, etc. Despite this, even with existing codes, automatic calibration facilitates enormously the task of modeling. Therefore, it is contended that its use should become standard practice.

  1. Multiples waveform inversion

    KAUST Repository

    Zhang, Dongliang

    2013-01-01

    To increase the illumination of the subsurface and to eliminate the dependency of FWI on the source wavelet, we propose multiples waveform inversion (MWI), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. These virtual sources are used to numerically generate downgoing wavefields that are correlated with the backprojected surface-related multiples to give the migration image. Since the recorded data are treated as the virtual sources, knowledge of the source wavelet is not required, and the subsurface illumination is greatly enhanced because the entire free surface acts as an extended source, compared to the radiation pattern of a traditional point source. Numerical tests on the Marmousi2 model show that the convergence rate and the spatial resolution of MWI are, respectively, faster and more accurate than those of FWI. The potential pitfall with this method is that the multiples undergo more than one roundtrip to the surface, which increases attenuation and reduces spatial resolution. This can lead to less resolved tomograms compared to conventional FWI. A possible solution is to combine both FWI and MWI in inverting for the subsurface velocity distribution.

  2. An interpretation of signature inversion

    International Nuclear Information System (INIS)

    Onishi, Naoki; Tajima, Naoki

    1988-01-01

    An interpretation in terms of the cranking model is presented to explain why signature inversion occurs for positive values of the axial asymmetry deformation parameter γ and appears in specific orbitals. By introducing a continuous variable, the eigenvalue equation can be reduced to a one-dimensional Schroedinger equation, by means of which one can easily understand the cause of signature inversion. (author)

  3. Inverse problems for Maxwell's equations

    CERN Document Server

    Romanov, V G

    1994-01-01

    The Inverse and Ill-Posed Problems Series is a series of monographs publishing postgraduate level information on inverse and ill-posed problems for an international readership of professional scientists and researchers. The series aims to publish works which involve both theory and applications in, e.g., physics, medicine, geophysics, acoustics, electrodynamics, tomography, and ecology.

  4. Kinematic source inversions of teleseismic data based on the QUESO library for uncertainty quantification and prediction

    Science.gov (United States)

    Zielke, O.; McDougall, D.; Mai, P. M.; Babuska, I.

    2014-12-01

    One fundamental aspect of seismic hazard mitigation is gaining a better understanding of the rupture process. Because direct observation of the relevant parameters and properties is not possible, other means such as kinematic source inversions are used instead. By constraining the spatial and temporal evolution of fault slip during an earthquake, those inversion approaches may enable valuable insights into the physics of the rupture process. However, due to the underdetermined nature of this inversion problem (i.e., inverting a kinematic source model for an extended fault based on seismic data), the provided solutions are generally non-unique. Here we present a statistical (Bayesian) inversion approach based on an open-source library for uncertainty quantification (UQ) called QUESO that was developed at ICES (UT Austin). The approach has advantages with respect to deterministic inversion approaches as it provides not only a single (non-unique) solution but also uncertainty bounds with it. Those uncertainty bounds help to qualitatively and quantitatively judge how well constrained an inversion solution is and how much rupture complexity the data reliably resolve. The presented inversion scheme uses only teleseismically recorded body waves, but future developments may lead us towards joint inversion schemes. After giving an insight into the inversion scheme itself (based on delayed rejection adaptive metropolis, DRAM), we explore the method's resolution potential. For that, we synthetically generate teleseismic data, add for example different levels of noise and/or change the fault plane parameterization, and then apply our inversion scheme in an attempt to extract the (known) kinematic rupture model. We conclude by inverting, as an example, real teleseismic data from a recent large earthquake and comparing the results with deterministically derived kinematic source models provided by other research groups.

  5. Asymptotic Likelihood Distribution for Correlated & Constrained Systems

    CERN Document Server

    Agarwal, Ujjwal

    2016-01-01

    This report describes my work as a summer student at CERN. It discusses the asymptotic distribution of the likelihood ratio when the total number of parameters is h and two of them are constrained and correlated.

  6. Constrained bidirectional propagation and stroke segmentation

    Energy Technology Data Exchange (ETDEWEB)

    Mori, S; Gillespie, W; Suen, C Y

    1983-03-01

    A new method for decomposing a complex figure into its constituent strokes is described. This method, based on constrained bidirectional propagation, is suitable for parallel processing. Examples of its application to the segmentation of Chinese characters are presented. 9 references.

  7. Mathematical Modeling of Constrained Hamiltonian Systems

    NARCIS (Netherlands)

    Schaft, A.J. van der; Maschke, B.M.

    1995-01-01

    Network modelling of unconstrained energy conserving physical systems leads to an intrinsic generalized Hamiltonian formulation of the dynamics. Constrained energy conserving physical systems are directly modelled as implicit Hamiltonian systems with regard to a generalized Dirac structure on the

  8. Client's Constraining Factors to Construction Project Management

    African Journals Online (AJOL)

    factors as a significant system that constrains project management success of public and ... finance for the project and prompt payment for work executed; clients .... consideration of the loading patterns of these variables, the major factor is ...

  9. Inverse scattering scheme for the Dirac equation at fixed energy

    International Nuclear Information System (INIS)

    Leeb, H.; Lehninger, H.; Schilder, C.

    2001-01-01

    Full text: Based on the concept of generalized transformation operators, a new hierarchy of Dirac equations with spherically symmetric scalar and fourth-component vector potentials is presented. Within this hierarchy, closed-form expressions for the solutions, the potentials and the S-matrix can be given in terms of solutions of the original Dirac equation. Using these transformations, an inverse scattering scheme has been constructed for the Dirac equation which is the analog of the rational scheme in the non-relativistic case. The given method provides for the first time an inversion scheme with closed-form expressions for the S-matrix for non-relativistic scattering problems with central and spin-orbit potentials. (author)

  10. On the origin of constrained superfields

    Energy Technology Data Exchange (ETDEWEB)

    Dall’Agata, G. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy); Dudas, E. [Centre de Physique Théorique, École Polytechnique, CNRS, Université Paris-Saclay,F-91128 Palaiseau (France); Farakos, F. [Dipartimento di Fisica “Galileo Galilei”, Università di Padova,Via Marzolo 8, 35131 Padova (Italy); INFN, Sezione di Padova,Via Marzolo 8, 35131 Padova (Italy)

    2016-05-06

    In this work we analyze constrained superfields in supersymmetry and supergravity. We propose a constraint that, in combination with the constrained goldstino multiplet, consistently removes any selected component from a generic superfield. We also describe its origin, providing the operators whose equations of motion lead to the decoupling of such components. We illustrate our proposal by means of various examples and show how known constraints can be reproduced by our method.

  11. Optimal Inversion Parameters for Full Waveform Inversion using OBS Data Set

    Science.gov (United States)

    Kim, S.; Chung, W.; Shin, S.; Kim, D.; Lee, D.

    2017-12-01

    In recent years, Full Waveform Inversion (FWI) has been the most researched technique in seismic data processing. It uses the residuals between observed and modeled data as an objective function; thereafter, the final subsurface velocity model is generated through a series of iterations meant to minimize the residuals. Research on FWI has expanded from acoustic media to elastic media. In acoustic media, the subsurface property is defined by P-velocity; however, in elastic media, properties are defined by multiple parameters, such as P-velocity, S-velocity, and density. Further, the elastic media can also be defined by the Lamé constants and density, or by the impedances (PI, SI); consequently, research is being carried out to ascertain the optimal parameters. With advances in exploration equipment and Ocean Bottom Seismic (OBS) surveys, it is now possible to obtain multi-component seismic data. However, to perform FWI on these data and generate an accurate subsurface model, it is important to determine the optimal inversion parameters among (Vp, Vs, ρ), (λ, μ, ρ), and (PI, SI) in elastic media. In this study, a staggered-grid finite-difference method was applied to simulate an OBS survey. In the inversion, the ℓ₂-norm was set as the objective function. Further, the gradient direction was computed using the back-propagation technique, and its scaling was done using the pseudo-Hessian matrix. In acoustic media, only Vp is used as the inversion parameter. In contrast, various sets of parameters, such as (Vp, Vs, ρ) and (λ, μ, ρ), can be used to define the inversion in elastic media. Therefore, it is important to ascertain the parameter set that gives the most accurate result for inversion with an OBS data set. In this study, we generated Vp and Vs subsurface models by using (λ, μ, ρ) and (Vp, Vs, ρ) as inversion parameters in every iteration, and compared the two final FWI results. This research was supported by the Basic Research Project (17-3312) of the Korea Institute of

  12. Efficiency criterion for teleportation via channel matrix, measurement matrix and collapsed matrix

    Directory of Open Access Journals (Sweden)

    Xin-Wei Zha

    In this paper, three kinds of coefficient matrices (channel matrix, measurement matrix, collapsed matrix) associated with the pure state for teleportation are presented, and the general relation among channel matrix, measurement matrix and collapsed matrix is obtained. In addition, a criterion for judging whether a state can be teleported successfully is given, depending on the relation between the number of parameters of an unknown state and the rank of the collapsed matrix. Keywords: Channel matrix, Measurement matrix, Collapsed matrix, Teleportation

  13. Inverse Cerenkov experiment

    International Nuclear Information System (INIS)

    Kimura, W.D.

    1993-01-01

    The final report describes work performed to investigate inverse Cherenkov acceleration (ICA) as a promising method for laser particle acceleration. In particular, an improved configuration of ICA is being tested in an experiment presently underway at the Accelerator Test Facility (ATF). In the experiment, the high peak power (∼10 GW) linearly polarized ATF CO₂ laser beam is converted to a radially polarized beam. This beam is focused with an axicon at the Cherenkov angle onto the ATF 50-MeV e-beam inside a hydrogen gas cell, where the gas acts as the phase-matching medium of the interaction. An energy gain of ∼12 MeV is predicted assuming a delivered laser peak power of 5 GW. The experiment is divided into two phases. The Phase I experiments, which were completed in the spring of 1992, were conducted before the ATF e-beam was available and involved several successful tests of the optical systems. Phase II experiments are with the e-beam and laser beam, and are still in progress. The ATF demonstrated delivery of the e-beam to the experiment in Dec. 1992. A preliminary "debugging" run with the e-beam and laser beam occurred in May 1993. This revealed the need for some experimental modifications, which have been implemented. The second run is tentatively scheduled for October or November 1993. In parallel with the experimental efforts, theoretical work has been ongoing to support the experiment and to investigate improvements and/or offshoots. One exciting offshoot has been theoretical work showing that free-space laser acceleration of electrons is possible using a radially polarized, axicon-focused laser beam, but without any phase-matching gas. The Monte Carlo code used to model the ICA process has been upgraded and expanded to handle different types of laser beam input profiles

  14. Extended biorthogonal matrix polynomials

    Directory of Open Access Journals (Sweden)

    Ayman Shehata

    2017-01-01

    The pair of biorthogonal matrix polynomials for commutative matrices was first introduced by Varma and Tasdelen in [22]. The main aim of this paper is to extend the properties of the pair of biorthogonal matrix polynomials of Varma and Tasdelen: certain generating matrix functions, finite series, some matrix recurrence relations, several important properties of matrix differential recurrence relations, biorthogonality relations and the matrix differential equation for the pair of biorthogonal matrix polynomials J_n^(A,B)(x, k) and K_n^(A,B)(x, k) are discussed. For the matrix polynomials J_n^(A,B)(x, k), various families of bilinear and bilateral generating matrix functions are constructed in the sequel.

  15. Arikan and Alamouti matrices based on fast block-wise inverse Jacket transform

    Science.gov (United States)

    Lee, Moon Ho; Khan, Md Hashem Ali; Kim, Kyeong Jin

    2013-12-01

    Recently, Lee and Hou (IEEE Signal Process Lett 13: 461-464, 2006) proposed one-dimensional and two-dimensional fast algorithms for block-wise inverse Jacket transforms (BIJTs). Their BIJTs are not true inverse Jacket transforms from a mathematical point of view because their inverses do not satisfy the usual condition, i.e., the multiplication of a matrix with its inverse matrix is not equal to the identity matrix. Therefore, we mathematically propose a fast block-wise inverse Jacket transform of orders N = 2^k, 3^k, 5^k, and 6^k, where k is a positive integer. Based on the Kronecker product of the successive lower-order Jacket matrices and the basis matrix, the fast algorithms for realizing these transforms are obtained. Due to the simple inverses and fast algorithms of Arikan polar binary and Alamouti multiple-input multiple-output (MIMO) non-binary matrices, which are obtained from BIJTs, they can be applied in areas such as the 3GPP physical layer for ultra mobile broadband permutation matrix design, first-order q-ary Reed-Muller code design, diagonal channel design, diagonal subchannel decomposition for interference alignment, and 4G MIMO long-term evolution Alamouti precoding design.
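
    The Kronecker-product structure mentioned above is what makes such constructions easy to invert, because the inverse factorizes as (A ⊗ B)⁻¹ = A⁻¹ ⊗ B⁻¹. The sketch below checks this identity numerically on small random nonsingular factors; it illustrates the general property, not the specific Jacket, Arikan or Alamouti matrices of the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
# random factors shifted by a multiple of the identity so they are safely nonsingular
A = rng.standard_normal((3, 3)) + 3.0 * np.eye(3)
B = rng.standard_normal((4, 4)) + 3.0 * np.eye(4)

lhs = np.linalg.inv(np.kron(A, B))                   # invert the 12 x 12 product directly
rhs = np.kron(np.linalg.inv(A), np.linalg.inv(B))    # Kronecker product of the small inverses

print(np.allclose(lhs, rhs))   # True
```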

  16. Matrix completion by deep matrix factorization.

    Science.gov (United States)

    Fan, Jicong; Cheng, Jieyu

    2018-02-01

    Conventional methods of matrix completion are linear methods that are not effective in handling data of nonlinear structures. Recently a few researchers attempted to incorporate nonlinear techniques into matrix completion, but there still exist considerable limitations. In this paper, a novel method called deep matrix factorization (DMF) is proposed for nonlinear matrix completion. Different from conventional matrix completion methods that are based on linear latent variable models, DMF is based on a nonlinear latent variable model. DMF is formulated as a deep-structure neural network, in which the inputs are the low-dimensional unknown latent variables and the outputs are the partially observed variables. In DMF, the inputs and the parameters of the multilayer neural network are simultaneously optimized to minimize the reconstruction errors for the observed entries. Then the missing entries can be readily recovered by propagating the latent variables to the output layer. DMF is compared with state-of-the-art methods of linear and nonlinear matrix completion in the tasks of toy matrix completion, image inpainting and collaborative filtering. The experimental results verify that DMF is able to provide higher matrix completion accuracy than existing methods and that DMF is applicable to large matrices. Copyright © 2017 Elsevier Ltd. All rights reserved.
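
    For contrast with DMF, the linear matrix-factorization baseline it generalizes can be written in a few lines: gradient descent on two low-rank factors, fitting only the observed entries and then predicting the missing ones. The dimensions, rank, learning rate and iteration count below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m, r = 30, 20, 3
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, m))   # ground-truth low-rank matrix
mask = rng.random((n, m)) < 0.5                                  # which entries are observed

U = 0.1 * rng.standard_normal((n, r))
V = 0.1 * rng.standard_normal((r, m))
lr = 0.005
for _ in range(20000):
    R = (U @ V - M) * mask        # residual on the observed entries only
    U -= lr * R @ V.T
    V -= lr * U.T @ R

X = U @ V                          # completed matrix
err = np.linalg.norm((X - M)[~mask]) / np.linalg.norm(M[~mask])
print(err)                         # relative error on the entries that were never observed
```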

  17. Pareto joint inversion of 2D magnetotelluric and gravity data

    Science.gov (United States)

    Miernik, Katarzyna; Bogacz, Adrian; Kozubal, Adam; Danek, Tomasz; Wojdyła, Marek

    2015-04-01

    In this contribution, the first results of the "Innovative technology of petrophysical parameters estimation of geological media using joint inversion algorithms" project are described. At this stage of the development, a Pareto joint inversion scheme for 2D MT and gravity data was used. Additionally, seismic data were provided to set some constraints for the inversion. A Sharp Boundary Interface (SBI) approach and a model description based on a set of polygons were used to limit the dimensionality of the solution space. The main engine was based on a modified Particle Swarm Optimization (PSO). This algorithm was adapted to handle two or more target functions at once. An additional algorithm was used to eliminate non-realistic solution proposals. Because PSO is a stochastic global optimization method, it requires a lot of proposals to be evaluated to find a single Pareto solution and then compose a Pareto front. To optimize this stage, parallel computing was used for both the inversion engine and the 2D MT forward solver. There are many advantages of the proposed approach to joint inversion problems. First of all, the Pareto scheme eliminates the cumbersome rescaling of the target functions, which can strongly affect the final solution. Secondly, the whole set of solutions is created in one optimization run, providing a choice of the final solution. This choice can be based on qualitative data, which are usually very hard to incorporate into a regular inversion scheme. The SBI parameterisation not only limits the problem of dimensionality, but also makes constraining the solution easier. At this stage of the work, the decision was made to test the approach using MT and gravity data, because this combination is often used in practice. It is important to mention that the general solution is not limited to these two methods and is flexible enough to be used with more than two sources of data. The presented results were obtained for synthetic models imitating real geological conditions, where

  18. A Generalization of the Spherical Inversion

    Science.gov (United States)

    Ramírez, José L.; Rubiano, Gustavo N.

    2017-01-01

    In the present article, we introduce a generalization of the spherical inversion. In particular, we define an inversion with respect to an ellipsoid, and prove several properties of this new transformation. The inversion in an ellipsoid is the generalization of the elliptic inversion to the three-dimensional space. We also study the inverse images…
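
    A short numerical sketch of the classical inversion in a sphere, together with one natural way to extend it to an ellipsoid (map the ellipsoid to the unit sphere, invert, map back); the ellipsoidal construction here is an assumption for illustration and is not necessarily the exact definition introduced in the article.

      import numpy as np

      def sphere_inversion(p, center, radius):
          """Classical inversion: p -> center + r^2 (p - center) / |p - center|^2."""
          v = np.asarray(p, dtype=float) - center
          return center + radius**2 * v / np.dot(v, v)

      def ellipsoid_inversion(p, center, semi_axes):
          """Conjugate the unit-sphere inversion by the axis scaling of the ellipsoid (illustrative)."""
          a = np.asarray(semi_axes, dtype=float)
          q = (np.asarray(p, dtype=float) - center) / a      # ellipsoid -> unit sphere
          q_inv = q / np.dot(q, q)                           # invert in the unit sphere
          return center + q_inv * a                          # map back

      center = np.zeros(3)
      print(sphere_inversion([2.0, 0.0, 0.0], center, 1.0))                 # -> [0.5, 0, 0]
      print(ellipsoid_inversion([4.0, 0.0, 0.0], center, [2.0, 1.0, 1.0]))  # -> [1.0, 0, 0]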

  19. Continuity of the direct and inverse problems in one-dimensional scattering theory and numerical solution of the inverse problem

    International Nuclear Information System (INIS)

    Moura, C.A. de.

    1976-09-01

    We propose an algorithm for computing the potential V(x) associated to the one-dimensional Schroedinger operator E ≡ −d²/dx² + V(x), −∞ < x < ∞, from knowledge of the S-matrix, more exactly, of one of the reflection coefficients. The convergence of the algorithm is guaranteed by the stability results obtained for both the direct and inverse problems

  20. Comparison of inverse Laplace and numerical inversion methods for obtaining z-depth profiles of diffraction data

    International Nuclear Information System (INIS)

    Xiaojing Zhu; Predecki, P.; Ballard, B.

    1995-01-01

    Two different inversion methods, the inverse Laplace method and the linear constrained numerical method, for retrieving the z-profiles of diffraction data from experimentally obtained τ-profiles were compared using tests with a known function as the original z-profile. Two different real data situations were simulated to determine the effects of specimen thickness and missing τ-profile data at small τ-values on the retrieved z-profiles. The results indicate that although both methods are able to retrieve the z-profiles in the bulk specimens satisfactorily, the numerical method can be used for thin film samples as well. Missing τ-profile data at small τ values causes errors in the retrieved z-profiles with both methods, particularly when the trend of the τ-profile at small τ is significantly changed because of the missing data. 6 refs., 3 figs
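
    An illustrative sketch of a linear constrained numerical inversion of this general kind: a z-profile is retrieved from a Laplace-type τ-profile by non-negative least squares with mild damping. The exponential kernel, the damping and the non-negativity constraint are assumptions chosen for the illustration; the paper's exact kernel, grids and constraints are not reproduced.

      import numpy as np
      from scipy.optimize import nnls

      z = np.linspace(0.01, 5.0, 80)              # depth grid
      tau = np.linspace(0.05, 3.0, 40)            # effective penetration depths
      dz = z[1] - z[0]
      K = np.exp(-z[None, :] / tau[:, None]) * dz     # discretized kernel, shape (n_tau, n_z)

      f_true = np.exp(-((z - 2.0) / 0.5) ** 2)        # known test z-profile
      g = K @ f_true + 1e-3 * np.random.default_rng(0).normal(size=tau.size)  # noisy tau-profile

      # Mild Tikhonov damping stacked onto the system, solved with a non-negativity constraint.
      lam = 1e-2
      A = np.vstack([K, lam * np.eye(z.size)])
      b = np.concatenate([g, np.zeros(z.size)])
      f_rec, _ = nnls(A, b)

      print("relative error:", np.linalg.norm(f_rec - f_true) / np.linalg.norm(f_true))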

  1. The generalised Marchenko equation and the canonical structure of the A.K.N.S.-Z.S. inverse method

    International Nuclear Information System (INIS)

    Dodd, R.K.; Bullough, R.K.

    1979-01-01

    A generalised Marchenko equation is derived for a 2 × 2 matrix inverse method and it is used to show that, for the subset of equations solvable by the method which can be constructed as defining the flows of Hamiltonians, the inverse transform is a canonical (homogeneous contact) transformation. Baecklund transformations are re-examined from this point of view. (Auth.)

  2. The Matrix Cookbook

    DEFF Research Database (Denmark)

    Petersen, Kaare Brandt; Pedersen, Michael Syskind

    Matrix identities, relations and approximations. A desktop reference for quick overview of mathematics of matrices.

  3. On some Toeplitz matrices and their inversions

    Directory of Open Access Journals (Sweden)

    S. Dutta

    2014-10-01

    Full Text Available In this article, using the difference operator B(a[m]), we introduce a lower triangular Toeplitz matrix T which includes several difference matrices such as Δ(1), Δ(m), B(r,s), B(r,s,t), and B(r̃,s̃,t̃,ũ) in different special cases. For any x ∈ w and m ∈ N₀ = {0,1,2,…}, the difference operator B(a[m]) is defined by (B(a[m])x)_k = a_k^(0) x_k + a_{k−1}^(1) x_{k−1} + a_{k−2}^(2) x_{k−2} + ⋯ + a_{k−m}^(m) x_{k−m}, (k ∈ N₀), where a[m] = {a^(0), a^(1), …, a^(m)} and a^(i) = (a_k^(i)) for 0 ⩽ i ⩽ m are convergent sequences of real numbers. We use the convention that any term with negative subscript is equal to zero. The main results of this article relate to the determination and applications of the inverse of the Toeplitz matrix T.
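
    A small numerical sketch of a lower triangular Toeplitz difference matrix of the B(r,s) kind (r on the main diagonal, s on the first subdiagonal) and of its inverse, which is again lower triangular Toeplitz; the sizes and values are illustrative and the article's closed-form expressions are not reproduced.

      import numpy as np
      from scipy.linalg import toeplitz

      def band_toeplitz(first_col_entries, n):
          """Lower triangular Toeplitz matrix whose first column starts with the given entries."""
          col = np.zeros(n)
          col[:len(first_col_entries)] = first_col_entries
          return toeplitz(col, np.zeros(n))     # zero first row except the (0,0) entry

      n = 6
      B = band_toeplitz([2.0, -1.0], n)         # B(r, s) with r = 2, s = -1
      B_inv = np.linalg.inv(B)

      print(np.allclose(B @ B_inv, np.eye(n)))  # True
      print(np.round(B_inv[:, 0], 4))           # the inverse is Toeplitz: its first column generates it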

  4. Statistical perspectives on inverse problems

    DEFF Research Database (Denmark)

    Andersen, Kim Emil

    Inverse problems arise in many scientific disciplines and pertain to situations where inference is to be made about a particular phenomenon from indirect measurements. A typical example, arising in diffusion tomography, is the inverse boundary value problem for non-invasive reconstruction of the interior of an object from electrical boundary measurements. One part of this thesis concerns statistical approaches for solving, possibly non-linear, inverse problems. Thus inverse problems are recast in a form suitable for statistical inference. In particular, a Bayesian approach for regularisation... problem is given in terms of probability distributions. Posterior inference is obtained by Markov chain Monte Carlo methods and new, powerful simulation techniques based on e.g. coupled Markov chains and simulated tempering are developed to improve the computational efficiency of the overall simulation...

  5. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-01

    Detection of inclusions or obstacles inside a body by boundary measurements is an inverse problem very useful in practical applications. When only a finite number of measurements is available, we try to detect some information on the embedded

  6. Wave-equation dispersion inversion

    KAUST Repository

    Li, Jing; Feng, Zongcai; Schuster, Gerard T.

    2016-01-01

    We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained

  7. Parameter estimation and inverse problems

    CERN Document Server

    Aster, Richard C; Thurber, Clifford H

    2005-01-01

    Parameter Estimation and Inverse Problems primarily serves as a textbook for advanced undergraduate and introductory graduate courses. Class notes have been developed and reside on the World Wide Web for facilitating use and feedback by teaching colleagues. The authors' treatment promotes an understanding of fundamental and practical issues associated with parameter fitting and inverse problems, including basic theory of inverse problems, statistical issues, computational issues, and an understanding of how to analyze the success and limitations of solutions to these problems. The text is also a practical resource for general students and professional researchers, where techniques and concepts can be readily picked up on a chapter-by-chapter basis. Parameter Estimation and Inverse Problems is structured around a course at New Mexico Tech and is designed to be accessible to typical graduate students in the physical sciences who may not have an extensive mathematical background. It is accompanied by a Web site that...

  8. Carbonate fuel cell matrix

    Science.gov (United States)

    Farooque, Mohammad; Yuh, Chao-Yi

    1996-01-01

    A carbonate fuel cell matrix comprising support particles and crack attenuator particles which are made platelet in shape to increase the resistance of the matrix to through cracking. Also disclosed is a matrix having porous crack attenuator particles and a matrix whose crack attenuator particles have a thermal coefficient of expansion which is significantly different from that of the support particles, and a method of making platelet-shaped crack attenuator particles.

  9. Inversion Therapy: Can It Relieve Back Pain?

    Science.gov (United States)

    Inversion therapy: Can it relieve back pain? Does inversion therapy relieve back pain? Is it safe? Answers from Edward R. Laskowski, M.D. Inversion therapy doesn't provide lasting relief from back ...

  10. Constraining the roughness degree of slip heterogeneity

    KAUST Repository

    Causse, Mathieu; Cotton, Fabrice; Mai, Paul Martin

    2010-01-01

    out poorly resolved models. The number of remaining models (30) is thus rather small. In addition, the robustness of the empirical model rests on a reliable estimation of kc by kinematic inversion methods. We address this issue by performing tests

  11. Thermal measurements and inverse techniques

    CERN Document Server

    Orlande, Helcio RB; Maillet, Denis; Cotta, Renato M

    2011-01-01

    With its uncommon presentation of instructional material regarding mathematical modeling, measurements, and solution of inverse problems, Thermal Measurements and Inverse Techniques is a one-stop reference for those dealing with various aspects of heat transfer. Progress in mathematical modeling of complex industrial and environmental systems has enabled numerical simulations of most physical phenomena. In addition, recent advances in thermal instrumentation and heat transfer modeling have improved experimental procedures and indirect measurements for heat transfer research of both natural phe

  12. Computation of inverse magnetic cascades

    International Nuclear Information System (INIS)

    Montgomery, D.

    1981-10-01

    Inverse cascades of magnetic quantities for turbulent incompressible magnetohydrodynamics are reviewed, for two and three dimensions. The theory is extended to the Strauss equations, a description intermediate between two and three dimensions appropriate to tokamak magnetofluids. Consideration of the absolute equilibrium Gibbs ensemble for the system leads to a prediction of an inverse cascade of magnetic helicity, which may manifest itself as a major disruption. An agenda for computational investigation of this conjecture is proposed

  13. The attitude inversion method of geostationary satellites based on unscented particle filter

    Science.gov (United States)

    Du, Xiaoping; Wang, Yang; Hu, Heng; Gou, Ruixin; Liu, Hao

    2018-04-01

    The attitude information of geostationary satellites is difficult to obtain since they appear as non-resolved images on ground-based observation equipment used for space object surveillance. In this paper, an attitude inversion method for geostationary satellites based on the Unscented Particle Filter (UPF) and ground photometric data is presented. The inversion algorithm based on UPF is proposed to address the strongly non-linear character of the photometric data inversion for satellite attitude, and it combines the advantages of the Unscented Kalman Filter (UKF) and the Particle Filter (PF). The update method improves the particle selection based on the idea of UKF to redesign the importance density function. Moreover, it uses the RMS-UKF to partially correct the prediction covariance matrix, which improves the applicability of the attitude inversion method relative to UKF and mitigates the particle degradation and dilution of the attitude inversion method based on PF. This paper describes the main principles and steps of the algorithm in detail; the correctness, accuracy, stability and applicability of the method are verified by a simulation experiment and a scaling experiment in the end. The results show that the proposed method can effectively solve the problems of particle degradation and depletion in the PF-based attitude inversion method, and the problem that UKF is not suitable for strongly non-linear attitude inversion. Moreover, the inversion accuracy is clearly superior to UKF and PF; in addition, in the case of inversion with large attitude error, the method can invert the attitude with few particles and high precision.

  14. Matrix with Prescribed Eigenvectors

    Science.gov (United States)

    Ahmad, Faiz

    2011-01-01

    It is a routine matter for undergraduates to find eigenvalues and eigenvectors of a given matrix. But the converse problem of finding a matrix with prescribed eigenvalues and eigenvectors is rarely discussed in elementary texts on linear algebra. This problem is related to the "spectral" decomposition of a matrix and has important technical…
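
    A brief numerical illustration of the converse problem discussed here: building a matrix with prescribed eigenvalues and eigenvectors from the spectral-type decomposition A = P diag(λ) P⁻¹, where the prescribed eigenvectors (the columns of P) only need to be linearly independent; the numbers are arbitrary examples.

      import numpy as np

      eigvals = np.array([3.0, 1.0, -2.0])
      P = np.array([[1.0, 1.0, 0.0],      # prescribed eigenvectors as columns
                    [0.0, 1.0, 1.0],
                    [1.0, 0.0, 1.0]])

      A = P @ np.diag(eigvals) @ np.linalg.inv(P)

      # Verify: A maps each prescribed eigenvector to eigenvalue * eigenvector.
      for lam, v in zip(eigvals, P.T):
          print(np.allclose(A @ v, lam * v))   # True, True, True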

  15. Triangularization of a Matrix

    Indian Academy of Sciences (India)

    Much of linear algebra is devoted to reducing a matrix (via similarity or unitary similarity) to another that has lots of zeros. The simplest such theorem is the Schur triangularization theorem. This says that every matrix is unitarily similar to an upper triangular matrix. Our aim here is to show that though it is very easy to prove it ...
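
    A quick numerical illustration of the Schur triangularization theorem stated above: every square matrix (here a real random one) is unitarily similar to an upper triangular matrix. The complex Schur form is used so that the triangular factor is genuinely upper triangular even when eigenvalues are complex.

      import numpy as np
      from scipy.linalg import schur

      rng = np.random.default_rng(0)
      A = rng.normal(size=(4, 4))

      T, Z = schur(A, output='complex')   # A = Z T Z^H with Z unitary, T upper triangular

      print(np.allclose(Z @ T @ Z.conj().T, A))          # True: similarity holds
      print(np.allclose(Z.conj().T @ Z, np.eye(4)))      # True: Z is unitary
      print(np.allclose(np.tril(T, -1), 0))              # True: T is upper triangular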

  16. EDITORIAL: Inverse Problems in Engineering

    Science.gov (United States)

    West, Robert M.; Lesnic, Daniel

    2007-01-01

    Presented here are 11 noteworthy papers selected from the Fifth International Conference on Inverse Problems in Engineering: Theory and Practice held in Cambridge, UK during 11-15 July 2005. The papers have been peer-reviewed to the usual high standards of this journal and the contributions of reviewers are much appreciated. The conference featured a good balance of the fundamental mathematical concepts of inverse problems with a diverse range of important and interesting applications, which are represented here by the selected papers. Aspects of finite-element modelling and the performance of inverse algorithms are investigated by Autrique et al and Leduc et al. Statistical aspects are considered by Emery et al and Watzenig et al with regard to Bayesian parameter estimation and inversion using particle filters. Electrostatic applications are demonstrated by van Berkel and Lionheart and also Nakatani et al. Contributions to the applications of electrical techniques and specifically electrical tomographies are provided by Wakatsuki and Kagawa, Kim et al and Kortschak et al. Aspects of inversion in optical tomography are investigated by Wright et al and Douiri et al. The authors are representative of the worldwide interest in inverse problems relating to engineering applications and their efforts in producing these excellent papers will be appreciated by many readers of this journal.

  17. Nonlinear Spatial Inversion Without Monte Carlo Sampling

    Science.gov (United States)

    Curtis, A.; Nawaz, A.

    2017-12-01

    High-dimensional, nonlinear inverse or inference problems usually have non-unique solutions. The distribution of solutions is described by probability distributions, and these are usually found using Monte Carlo (MC) sampling methods. These take pseudo-random samples of models in parameter space, calculate the probability of each sample given available data and other information, and thus map out high or low probability values of model parameters. However, such methods converge to the solution only as the number of samples tends to infinity; in practice, MC is found to be slow to converge, convergence is not guaranteed to be achieved in finite time, and detection of convergence requires the use of subjective criteria. We propose a method for Bayesian inversion of categorical variables such as geological facies or rock types in spatial problems, which requires no sampling at all. The method uses a 2-D Hidden Markov Model over a grid of cells, where observations represent localized data constraining the model in each cell. The data in our example application are seismic properties such as P- and S-wave impedances or rock density; our model parameters are the hidden states and represent the geological rock types in each cell. The observations at each location are assumed to depend on the facies at that location only - an assumption referred to as 'localized likelihoods'. However, the facies at a location cannot be determined solely by the observation at that location as it also depends on prior information concerning its correlation with the spatial distribution of facies elsewhere. Such prior information is included in the inversion in the form of a training image which represents a conceptual depiction of the distribution of local geologies that might be expected, but other forms of prior information can be used in the method as desired. The method provides direct (pseudo-analytic) estimates of posterior marginal probability distributions over each variable

  18. Bayesian Markov Chain Monte Carlo inversion for weak anisotropy parameters and fracture weaknesses using azimuthal elastic impedance

    Science.gov (United States)

    Chen, Huaizhen; Pan, Xinpeng; Ji, Yuxin; Zhang, Guangzhi

    2017-08-01

    A system of aligned vertical fractures and fine horizontal shale layers combine to form equivalent orthorhombic media. Weak anisotropy parameters and fracture weaknesses play an important role in the description of orthorhombic anisotropy (OA). We propose a novel approach that utilizes seismic reflection amplitudes to estimate weak anisotropy parameters and fracture weaknesses from observed seismic data, based on azimuthal elastic impedance (EI). We first propose a perturbation of the stiffness matrix in terms of weak anisotropy parameters and fracture weaknesses, and using the perturbation and the scattering function, we derive the PP-wave reflection coefficient and azimuthal EI for the case of an interface separating two OA media. We then demonstrate an approach that first uses a model-constrained damped least-squares algorithm to estimate azimuthal EI from partial incidence-phase-angle-stack seismic reflection data at different azimuths, and then extracts weak anisotropy parameters and fracture weaknesses from the estimated azimuthal EI using a Bayesian Markov Chain Monte Carlo inversion method. In addition, a new procedure to construct a rock physics effective model is presented to estimate weak anisotropy parameters and fracture weaknesses from well log interpretation results (minerals and their volumes, porosity, saturation, fracture density, etc.). Tests on synthetic and real data indicate that the unknown parameters, including elastic properties (P- and S-wave impedances and density), weak anisotropy parameters and fracture weaknesses, can be estimated stably when the seismic data contain moderate noise, and that our approach provides a reasonable estimate of anisotropy in a fractured shale reservoir.
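
    A generic sketch of the Metropolis-type step behind a Bayesian Markov Chain Monte Carlo inversion of this kind, with a stand-in linear forward operator in place of the azimuthal elastic impedance modeling (which is not reproduced here); only the accept/reject logic and the misfit-based likelihood are illustrated.

      import numpy as np

      rng = np.random.default_rng(0)

      G = rng.normal(size=(40, 3))                   # stand-in forward operator
      m_true = np.array([1.0, -0.5, 0.25])
      d = G @ m_true + 0.05 * rng.normal(size=40)    # observed data with noise

      def log_post(m, sigma=0.05, prior_std=1.0):
          misfit = d - G @ m
          return -0.5 * np.sum(misfit**2) / sigma**2 - 0.5 * np.sum(m**2) / prior_std**2

      m = np.zeros(3)
      samples, step = [], 0.02
      for it in range(20000):
          prop = m + step * rng.normal(size=3)       # random-walk proposal
          if np.log(rng.random()) < log_post(prop) - log_post(m):
              m = prop                               # Metropolis accept
          if it > 5000:                              # discard burn-in
              samples.append(m.copy())

      print("posterior mean:", np.mean(samples, axis=0), " true:", m_true)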

  19. Iron control on global productivity: an efficient inverse model of the ocean's coupled phosphate and iron cycles.

    Science.gov (United States)

    Pasquier, B.; Holzer, M.; Frants, M.

    2016-02-01

    We construct a data-constrained mechanistic inverse model of the ocean's coupled phosphorus and iron cycles. The nutrient cycling is embedded in a data-assimilated steady global circulation. Biological nutrient uptake is parameterized in terms of nutrient, light, and temperature limitations on growth for two classes of phytoplankton that are not transported explicitly. A matrix formulation of the discretized nutrient tracer equations allows for efficient numerical solutions, which facilitates the objective optimization of the key biogeochemical parameters. The optimization minimizes the misfit between the modelled and observed nutrient fields of the current climate. We systematically assess the nonlinear response of the biological pump to changes in the aeolian iron supply for a variety of scenarios. Specifically, Green-function techniques are employed to quantify in detail the pathways and timescales with which those perturbations are propagated throughout the world oceans, determining the global teleconnections that mediate the response of the global ocean ecosystem. We confirm previous findings from idealized studies that increased iron fertilization decreases biological production in the subtropical gyres and we quantify the counterintuitive and asymmetric response of global productivity to increases and decreases in the aeolian iron supply.

  20. Continuation of Sets of Constrained Orbit Segments

    DEFF Research Database (Denmark)

    Schilder, Frank; Brøns, Morten; Chamoun, George Chaouki

    Sets of constrained orbit segments of time continuous flows are collections of trajectories that represent a whole or parts of an invariant set. A non-trivial but simple example is a homoclinic orbit. A typical representation of this set consists of an equilibrium point of the flow and a trajectory...... that starts close and returns close to this fixed point within finite time. More complicated examples are hybrid periodic orbits of piecewise smooth systems or quasi-periodic invariant tori. Even though it is possible to define generalised two-point boundary value problems for computing sets of constrained...... orbit segments, this is very disadvantageous in practice. In this talk we will present an algorithm that allows the efficient continuation of sets of constrained orbit segments together with the solution of the full variational problem....

  1. A constrained extended Kalman filter for the optimal estimate of kinematics and kinetics of a sagittal symmetric exercise.

    Science.gov (United States)

    Bonnet, V; Dumas, R; Cappozzo, A; Joukov, V; Daune, G; Kulić, D; Fraisse, P; Andary, S; Venture, G

    2017-09-06

    This paper presents a method for real-time estimation of the kinematics and kinetics of a human body performing a sagittal symmetric motor task, which would minimize the impact of the stereophotogrammetric soft tissue artefacts (STA). The method is based on a bi-dimensional mechanical model of the locomotor apparatus, the state variables of which (joint angles, velocities and accelerations, and the segment lengths and inertial parameters) are estimated by a constrained extended Kalman filter (CEKF) that fuses input information made of both stereophotogrammetric and dynamometric measurement data. Filter gains are made to saturate in order to obtain plausible state variables, and the measurement covariance matrix of the filter accounts for the expected STA maximal amplitudes. We hypothesised that the ensemble of constraints and redundant input information would allow the method to attenuate the STA propagation to the end results. The method was evaluated in ten human subjects performing a squat exercise. The CEKF estimated and measured skin marker trajectories exhibited a RMS difference lower than 4 mm, thus in the range of STAs. The RMS differences between the measured ground reaction force and moment and those estimated using the proposed method (9 N and 10 N·m) were much lower than those obtained using a classical inverse dynamics approach (22 N and 30 N·m). From the latter results it may be inferred that the presented method allows for a significant improvement of the accuracy with which kinematic variables and relevant time derivatives, model parameters and, therefore, intersegmental moments are estimated. Copyright © 2016 Elsevier Ltd. All rights reserved.

  2. Constrained principal component analysis and related techniques

    CERN Document Server

    Takane, Yoshio

    2013-01-01

    In multivariate data analysis, regression techniques predict one set of variables from another while principal component analysis (PCA) finds a subspace of minimal dimensionality that captures the largest variability in the data. How can regression analysis and PCA be combined in a beneficial way? Why and when is it a good idea to combine them? What kind of benefits are we getting from them? Addressing these questions, Constrained Principal Component Analysis and Related Techniques shows how constrained PCA (CPCA) offers a unified framework for these approaches.The book begins with four concre

  3. Exact closed-form expression for the inverse moments of one-sided correlated Gram matrices

    KAUST Repository

    Elkhalil, Khalil

    2016-08-15

    In this paper, we derive a closed-form expression for the inverse moments of one sided-correlated random Gram matrices. Such a question is mainly motivated by applications in signal processing and wireless communications for which evaluating this quantity is a question of major interest. This is for instance the case of the best linear unbiased estimator, in which the average estimation error corresponds to the first inverse moment of a random Gram matrix.

  4. Exact closed-form expression for the inverse moments of one-sided correlated Gram matrices

    KAUST Repository

    Elkhalil, Khalil; Kammoun, Abla; Al-Naffouri, Tareq Y.; Alouini, Mohamed-Slim

    2016-01-01

    In this paper, we derive a closed-form expression for the inverse moments of one sided-correlated random Gram matrices. Such a question is mainly motivated by applications in signal processing and wireless communications for which evaluating this quantity is a question of major interest. This is for instance the case of the best linear unbiased estimator, in which the average estimation error corresponds to the first inverse moment of a random Gram matrix.
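
    A quick Monte Carlo sanity check of the quantity these records discuss, the first inverse moment of a one-sided correlated Gram matrix, under an assumed exponential correlation profile and illustrative dimensions; the closed-form expression derived in the paper is not reproduced here.

      import numpy as np

      rng = np.random.default_rng(0)
      n, N = 16, 8                                   # rows (observations), columns (Gram size)
      rho = 0.4
      C = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))   # exponential correlation
      C_half = np.linalg.cholesky(C)

      trials, acc = 2000, 0.0
      for _ in range(trials):
          X = (rng.normal(size=(n, N)) + 1j * rng.normal(size=(n, N))) / np.sqrt(2)
          H = C_half @ X                             # one-sided correlated matrix
          G = H.conj().T @ H                         # N x N Gram matrix
          acc += np.trace(np.linalg.inv(G)).real / N

      print("empirical first inverse moment:", acc / trials)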

  5. Monofrequency waveform acquisition and inversion: A new paradigm

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-01-01

    In seismic inversion, we tend to use the geometrical behavior of the wavefield (the kinematics), extracted from the data, to constrain the long wavelength model components and use the recorded reflections to invert for the short wavelength features in a process referred to as full waveform inversion (FWI). For such a recipe, single frequency (the right frequency) data are capable of providing the ingredients for both model components. A frequency that provides model wavelengths (through the transmission components) low enough to update the background and high enough (reflections) to map the scattering may render the other frequencies almost obsolete, especially when large offset data are available to provide the transition from background to scattering components. Thus, I outline a scenario in which we acquire dedicated mono frequency data, allowing for more time to inject more of that single frequency energy at a reduced cost. The cost savings can be utilized to acquire larger offsets, which is important for constraining the background model. Combining this single frequency data with a hierarchical scattering angle filter strategy in FWI, and potentially reflection FWI, provides an opportunity to invert for complex models starting even with poor initial velocity models. The objective of this new paradigm is a high resolution model of the Earth to replace our focus on the image, which requires a band of frequencies.

  6. Monofrequency waveform acquisition and inversion: A new paradigm

    KAUST Repository

    Alkhalifah, Tariq Ali

    2014-08-05

    In seismic inversion, we tend to use the geometrical behavior of the wavefield (the kinematics), extracted from the data, to constrain the long wavelength model components and use the recorded reflections to invert for the short wavelength features in a process referred to as full waveform inversion (FWI). For such a recipe, single frequency (the right frequency) data are capable of providing the ingredients for both model components. A frequency that provides model wavelengths (through the transmission components) low enough to update the background and high enough (reflections) to map the scattering may render the other frequencies almost obsolete, especially when large offset data are available to provide the transition from background to scattering components. Thus, I outline a scenario in which we acquire dedicated mono frequency data, allowing for more time to inject more of that single frequency energy at a reduced cost. The cost savings can be utilized to acquire larger offsets, which is important for constraining the background model. Combining this single frequency data with a hierarchical scattering angle filter strategy in FWI, and potentially reflection FWI, provides an opportunity to invert for complex models starting even with poor initial velocity models. The objective of this new paradigm is a high resolution model of the Earth to replace our focus on the image, which requires a band of frequencies.

  7. The source parameters of 2013 Mw6.6 Lushan earthquake constrained with the restored local clipped seismic waveforms

    Science.gov (United States)

    Hao, J.; Zhang, J. H.; Yao, Z. X.

    2017-12-01

    We developed a method to restore clipped seismic waveforms recorded near the epicenter using the projection onto convex sets method (Zhang et al, 2016). This method was applied to restore the local clipped waveforms of the 2013 Mw 6.6 Lushan earthquake. We restored 88 out of 93 clipped waveforms from 38 broadband seismic stations of the China Earthquake Networks (CEN). The epicentral distance of the nearest station whose records we can faithfully restore is only about 32 km. In order to investigate whether the source parameters of the earthquake could be determined accurately with the restored data, the restored waveforms are utilized to obtain the mechanism of the Lushan earthquake. We apply the generalized reflection-transmission coefficient matrix method to calculate the synthetic seismic records and the simulated annealing method in the inversion (Yao and Harkrider, 1983; Hao et al., 2012). We select 5 stations of CEN with epicentral distances of about 200 km whose records aren't clipped, and their three-component velocity records are used. The result shows the strike, dip and rake angles of the Lushan earthquake are 200°, 51° and 87° respectively, hereinafter the "standard result". Then the clipped and restored seismic waveforms are applied respectively. The strike, dip and rake angles from the clipped seismic waveforms are 184°, 53° and 72° respectively. The largest angle misfit is 16°. In contrast, the strike, dip and rake angles from the restored seismic waveforms are 198°, 51° and 87° respectively, very close to the "standard result". We also study the rupture history of the Lushan earthquake constrained with the restored local broadband and teleseismic waves based on a finite fault method (Hao et al., 2013). The result is consistent with that constrained with the strong motion and teleseismic waves (Hao et al., 2013), especially the location of the patch with larger slip. In real-time seismology, determining the source parameters as soon as possible is important. This method will help us to determine the mechanism of earthquake

  8. The inverse problem of the calculus of variations for discrete systems

    Science.gov (United States)

    Barbero-Liñán, María; Farré Puiggalí, Marta; Ferraro, Sebastián; Martín de Diego, David

    2018-05-01

    We develop a geometric version of the inverse problem of the calculus of variations for discrete mechanics and constrained discrete mechanics. The geometric approach consists of using suitable Lagrangian and isotropic submanifolds. We also provide a transition between the discrete and the continuous problems and propose variationality as an interesting geometric property to take into account in the design and computer simulation of numerical integrators for constrained systems. For instance, nonholonomic mechanics is generally non variational but some special cases admit an alternative variational description. We apply some standard nonholonomic integrators to such an example to study which ones conserve this property.

  9. Visco-elastic controlled-source full waveform inversion without surface waves

    Science.gov (United States)

    Paschke, Marco; Krause, Martin; Bleibinhaus, Florian

    2016-04-01

    We developed a frequency-domain visco-elastic full waveform inversion for onshore seismic experiments with topography. The forward modeling is based on a finite-difference time-domain algorithm by Robertsson that uses the image-method to ensure a stress-free condition at the surface. The time-domain data is Fourier-transformed at every point in the model space during the forward modeling for a given set of frequencies. The motivation for this approach is the reduced amount of memory when computing kernels, and the straightforward implementation of the multiscale approach. For the inversion, we calculate the Frechet derivative matrix explicitly, and we implement a Levenberg-Marquardt scheme that allows for computing the resolution matrix. To reduce the size of the Frechet derivative matrix, and to stabilize the inversion, an adapted inverse mesh is used. The node spacing is controlled by the velocity distribution and the chosen frequencies. To focus the inversion on body waves (P, P-coda, and S) we mute the surface waves from the data. Consistent spatiotemporal weighting factors are applied to the wavefields during the Fourier transform to obtain the corresponding kernels. We test our code with a synthetic study using the Marmousi model with arbitrary topography. This study also demonstrates the importance of topography and muting surface waves in controlled-source full waveform inversion.
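
    A compact sketch of the Levenberg-Marquardt model update used in this style of inversion once the Frechet derivative (Jacobian) J has been computed explicitly: dm = (JᵀJ + μI)⁻¹ Jᵀ r. The Jacobian and residuals below are random stand-ins; the FWI forward modeling itself is not reproduced.

      import numpy as np

      def lm_update(J, r, mu):
          """One damped Gauss-Newton (Levenberg-Marquardt) step for residuals r = d_obs - d_pred."""
          n = J.shape[1]
          H = J.T @ J + mu * np.eye(n)          # damped approximate Hessian
          dm = np.linalg.solve(H, J.T @ r)      # model update
          # The resolution matrix follows from the same operators: R = (J^T J + mu I)^{-1} J^T J.
          R = np.linalg.solve(H, J.T @ J)
          return dm, R

      rng = np.random.default_rng(0)
      J = rng.normal(size=(200, 50))            # stand-in Frechet derivative matrix
      r = rng.normal(size=200)                  # stand-in data residual
      dm, R = lm_update(J, r, mu=1.0)
      print(dm.shape, np.trace(R))              # model update and "resolved" degrees of freedom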

  10. On Tree-Constrained Matchings and Generalizations

    NARCIS (Netherlands)

    S. Canzar (Stefan); K. Elbassioni; G.W. Klau (Gunnar); J. Mestre

    2011-01-01

    We consider the following Tree-Constrained Bipartite Matching problem: Given two rooted trees $T_1=(V_1,E_1)$, $T_2=(V_2,E_2)$ and a weight function $w: V_1\times V_2 \mapsto \mathbb{R}_+$, find a maximum weight matching $\mathcal{M}$ between nodes of the two trees, such that

  11. Constrained systems described by Nambu mechanics

    International Nuclear Information System (INIS)

    Lassig, C.C.; Joshi, G.C.

    1996-01-01

    Using the framework of Nambu's generalised mechanics, we obtain a new description of constrained Hamiltonian dynamics, involving the introduction of another degree of freedom in phase space, and the necessity of defining the action integral on a world sheet. We also discuss the problem of quantizing Nambu mechanics. (authors). 5 refs

  12. Client's constraining factors to construction project management ...

    African Journals Online (AJOL)

    This study analyzed client's related factors that constrain project management success of public and private sector construction in Nigeria. Issues that concern clients in any project can not be undermined as they are the owners and the initiators of project proposals. It is assumed that success, failure or abandonment of ...

  13. Hyperbolicity and constrained evolution in linearized gravity

    International Nuclear Information System (INIS)

    Matzner, Richard A.

    2005-01-01

    Solving the 4-d Einstein equations as evolution in time requires solving equations of two types: the four elliptic initial data (constraint) equations, followed by the six second order evolution equations. Analytically the constraint equations remain solved under the action of the evolution, and one approach is to simply monitor them (unconstrained evolution). Since computational solution of differential equations introduces almost inevitable errors, it is clearly 'more correct' to introduce a scheme which actively maintains the constraints by solution (constrained evolution). This has shown promise in computational settings, but the analysis of the resulting mixed elliptic hyperbolic method has not been completely carried out. We present such an analysis for one method of constrained evolution, applied to a simple vacuum system, linearized gravitational waves. We begin with a study of the hyperbolicity of the unconstrained Einstein equations. (Because the study of hyperbolicity deals only with the highest derivative order in the equations, linearization loses no essential details.) We then give explicit analytical construction of the effect of initial data setting and constrained evolution for linearized gravitational waves. While this is clearly a toy model with regard to constrained evolution, certain interesting features are found which have relevance to the full nonlinear Einstein equations

  14. A Dynamic Programming Approach to Constrained Portfolios

    DEFF Research Database (Denmark)

    Kraft, Holger; Steffensen, Mogens

    2013-01-01

    This paper studies constrained portfolio problems that may involve constraints on the probability or the expected size of a shortfall of wealth or consumption. Our first contribution is that we solve the problems by dynamic programming, which is in contrast to the existing literature that applies...

  15. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    2001-01-01

    A model for constrained computerized adaptive testing is proposed in which the information on the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  16. A model for optimal constrained adaptive testing

    NARCIS (Netherlands)

    van der Linden, Willem J.; Reese, Lynda M.

    1997-01-01

    A model for constrained computerized adaptive testing is proposed in which the information in the test at the ability estimate is maximized subject to a large variety of possible constraints on the contents of the test. At each item-selection step, a full test is first assembled to have maximum

  17. Neutron Powder Diffraction and Constrained Refinement

    DEFF Research Database (Denmark)

    Pawley, G. S.; Mackenzie, Gordon A.; Dietrich, O. W.

    1977-01-01

    The first use of a new program, EDINP, is reported. This program allows the constrained refinement of molecules in a crystal structure with neutron diffraction powder data. The structures of p-C6F4Br2 and p-C6F4I2 are determined by packing considerations and then refined with EDINP. Refinement is...

  18. Terrestrial Sagnac delay constraining modified gravity models

    Science.gov (United States)

    Karimov, R. Kh.; Izmailov, R. N.; Potapov, A. A.; Nandi, K. K.

    2018-04-01

    Modified gravity theories include f(R)-gravity models that are usually constrained by the cosmological evolutionary scenario. However, it has been recently shown that they can also be constrained by the signatures of accretion disks around constant Ricci curvature Kerr-f(R0) stellar-sized black holes. Our aim here is to use another experimental fact, viz., the terrestrial Sagnac delay, to constrain the parameters of specific f(R)-gravity prescriptions. We shall assume that a Kerr-f(R0) solution asymptotically describes Earth's weak gravity near its surface. In this spacetime, we shall study oppositely directed light beams from a source/observer moving on non-geodesic and geodesic circular trajectories and calculate the time gap when the beams re-unite. We obtain the exact time gap, called the Sagnac delay, in both cases and expand it to show how the flat space value is corrected by the Ricci curvature, the mass and the spin of the gravitating source. Under the assumption that the magnitudes of the corrections are of the order of the residual uncertainties in the delay measurement, we derive the allowed intervals for the Ricci curvature. We conclude that the terrestrial Sagnac delay can be used to constrain the parameters of specific f(R) prescriptions. Despite using the weak field gravity near Earth's surface, it turns out that the model parameter ranges still remain the same as those obtained from the strong field accretion disk phenomenon.

  19. Chance constrained uncertain classification via robust optimization

    NARCIS (Netherlands)

    Ben-Tal, A.; Bhadra, S.; Bhattacharayya, C.; Saketha Nat, J.

    2011-01-01

    This paper studies the problem of constructing robust classifiers when the training is plagued with uncertainty. The problem is posed as a Chance-Constrained Program (CCP) which ensures that the uncertain data points are classified correctly with high probability. Unfortunately such a CCP turns out

  20. Integrating job scheduling and constrained network routing

    DEFF Research Database (Denmark)

    Gamst, Mette

    2010-01-01

    This paper examines the NP-hard problem of scheduling jobs on resources such that the overall profit of executed jobs is maximized. Job demand must be sent through a constrained network to the resource before execution can begin. The problem has application in grid computing, where a number...

  1. Neuroevolutionary Constrained Optimization for Content Creation

    DEFF Research Database (Denmark)

    Liapis, Antonios; Yannakakis, Georgios N.; Togelius, Julian

    2011-01-01

    and thruster types and topologies) independently of game physics and steering strategies. According to the proposed framework, the designer picks a set of requirements for the spaceship that a constrained optimizer attempts to satisfy. The constraint satisfaction approach followed is based on neuroevolution...... and survival tasks and are also visually appealing....

  2. Models of Flux Tubes from Constrained Relaxation

    Indian Academy of Sciences (India)

    J. Astrophys. Astr. (2000) 21, 299-302. A. Mangalam & V. Krishan, Indian Institute of Astrophysics, Koramangala, Bangalore 560 034, India. Abstract: We study the relaxation of a compressible plasma to ...

  3. A new parameterization for waveform inversion in acoustic orthorhombic media

    KAUST Repository

    Masmoudi, Nabil

    2016-05-26

    Orthorhombic anisotropic model inversion is especially challenging because of the multi-parameter nature of the inversion problem. The high number of parameters required to describe the medium introduces considerable trade-offs and additional nonlinearity to a full-waveform inversion (FWI) application. Choosing a suitable set of parameters to describe the model and designing an effective inversion strategy can help in mitigating this problem. Using the Born approximation, which is the central ingredient of the FWI update process, we have derived radiation patterns for the different acoustic orthorhombic parameterizations. Analyzing the angular dependence of scattering (radiation patterns) of the parameters of different parameterizations, starting with the often used Thomsen-Tsvankin parameterization, we have assessed the potential trade-off between the parameters and the resolution in describing the data and inverting for the parameters. The analysis led us to introduce new parameters ϵd, δd, and ηd, which have azimuthally dependent radiation patterns but keep the scattering potential of the transversely isotropic parameters stationary with azimuth (azimuth independent). The novel parameters ϵd, δd, and ηd are dimensionless and represent a measure of deviation between the vertical planes in orthorhombic anisotropy. Therefore, these deviation parameters offer a new parameterization style for an acoustic orthorhombic medium described by six parameters: three vertical transversely isotropic (VTI) parameters, two deviation parameters, and one parameter describing the anisotropy in the horizontal symmetry plane. The main feature of any parameterization based on the deviation parameters is the azimuthal independence of the modeled data with respect to the VTI parameters, which allowed us to propose practical inversion strategies based on our experience with the VTI parameters. This feature of the new parameterization style holds for even the long-wavelength components of

  4. Thermal stress effects in intermetallic matrix composites

    Science.gov (United States)

    Wright, P. K.; Sensmeier, M. D.; Kupperman, D. S.; Wadley, H. N. G.

    1993-01-01

    Intermetallic matrix composites develop residual stresses from the large thermal expansion mismatch (delta-alpha) between the fibers and matrix. This work was undertaken to: establish improved techniques to measure these thermal stresses in IMCs; determine residual stresses in a variety of IMC systems by experiments and modeling; and determine the effect of residual stresses on selected mechanical properties of an IMC. X-ray diffraction (XRD), neutron diffraction (ND), synchrotron XRD (SXRD), and ultrasonic (US) techniques for measuring thermal stresses in IMCs were examined and ND was selected as the most promising technique. ND was demonstrated on a variety of IMC systems encompassing Ti- and Ni-base matrices, SiC, W, and Al2O3 fibers, and different fiber fractions (Vf). Experimental results on these systems agreed with predictions of a concentric cylinder model. In SiC/Ti-base systems, little yielding was found and stresses were controlled primarily by delta-alpha and Vf. In Ni-base matrix systems, the yield strength of the matrix and Vf controlled stress levels. The longitudinal residual stresses in SCS-6/Ti-24Al-11Nb composite were modified by thermomechanical processing. Increasing residual stress decreased ultimate tensile strength in agreement with model predictions. Fiber pushout strength showed an unexpected inverse correlation with residual stress. In-plane shear yield strength showed no dependence on residual stress. Higher levels of residual tension led to higher fatigue crack growth rates, as suggested by matrix mean stress effects.

  5. Inverse source problems in elastodynamics

    Science.gov (United States)

    Bao, Gang; Hu, Guanghui; Kian, Yavar; Yin, Tao

    2018-04-01

    We are concerned with time-dependent inverse source problems in elastodynamics. The source term is supposed to be the product of a spatial function and a temporal function with compact support. We present frequency-domain and time-domain approaches to show uniqueness in determining the spatial function from wave fields on a large sphere over a finite time interval. The stability estimate of the temporal function from the data of one receiver and the uniqueness result using partial boundary data are proved. Our arguments rely heavily on the use of the Fourier transform, which motivates inversion schemes that can be easily implemented. A Landweber iterative algorithm for recovering the spatial function and a non-iterative inversion scheme based on the uniqueness proof for recovering the temporal function are proposed. Numerical examples are demonstrated in both two and three dimensions.

  6. Inversion of the star transform

    International Nuclear Information System (INIS)

    Zhao, Fan; Schotland, John C; Markel, Vadim A

    2014-01-01

    We define the star transform as a generalization of the broken ray transform introduced by us in previous work. The advantages of using the star transform include the possibility to reconstruct the absorption and the scattering coefficients of the medium separately and simultaneously (from the same data) and the possibility to utilize scattered radiation which, in the case of conventional x-ray tomography, is discarded. In this paper, we derive the star transform from physical principles, discuss its mathematical properties and analyze the numerical stability of inversion. In particular, it is shown that stable inversion of the star transform can be obtained only for configurations involving an odd number of rays. Several computationally efficient inversion algorithms are derived and tested numerically. (paper)

  7. Inverse comptonization vs. thermal synchrotron

    International Nuclear Information System (INIS)

    Fenimore, E.E.; Klebesadel, R.W.; Laros, J.G.

    1983-01-01

    There are currently two radiation mechanisms being considered for gamma-ray bursts: thermal synchrotron and inverse comptonization. They are mutually exclusive since thermal synchrotron requires a magnetic field of approx. 10^12 Gauss whereas inverse comptonization cannot produce a monotonic spectrum if the field is larger than 10^11 and is too inefficient relative to thermal synchrotron unless the field is less than 10^9 Gauss. Neither mechanism can explain completely the observed characteristics of gamma-ray bursts. However, we conclude that thermal synchrotron is more consistent with the observations if the sources are approx. 40 kpc away whereas inverse comptonization is more consistent if they are approx. 300 pc away. Unfortunately, the source distance is still not known and, thus, the radiation mechanism is still uncertain

  8. Inverse photoemission of uranium oxides

    International Nuclear Information System (INIS)

    Roussel, P.; Morrall, P.; Tull, S.J.

    2009-01-01

    Understanding the itinerant-localised bonding role of the 5f electrons in the light actinides will afford an insight into their unusual physical and chemical properties. In recent years, the combination of core and valence band electron spectroscopies with theoretical modelling has already made significant progress in this area. However, information on the unoccupied density of states is still scarce. When compared to the forward photoemission techniques, measurements of the unoccupied states suffer from significantly less sensitivity and lower resolution. In this paper, we report on our experimental apparatus, which is designed to measure the inverse photoemission spectra of the light actinides. Inverse photoemission spectra of UO2 and UO2.2, along with the corresponding core and valence electron spectra, are presented in this paper. UO2 has been reported previously, although its inclusion here allows us to compare and contrast results from our experimental apparatus with those of previous Bremsstrahlung Isochromat Spectroscopy and Inverse Photoemission Spectroscopy investigations

  9. Optimization for nonlinear inverse problem

    International Nuclear Information System (INIS)

    Boyadzhiev, G.; Brandmayr, E.; Pinat, T.; Panza, G.F.

    2007-06-01

    The nonlinear inversion of geophysical data in general does not yield a unique solution, but a single model, representing the investigated field, is preferred for an easy geological interpretation of the observations. The analyzed region is constituted by a number of sub-regions where the multi-valued nonlinear inversion is applied, which leads to a multi-valued solution. Therefore, combining the values of the solution in each sub-region, many acceptable models are obtained for the entire region, and this complicates the geological interpretation of geophysical investigations. In this paper, new methodologies are presented that are capable of selecting, among all acceptable models, one that satisfies different criteria of smoothness in the explored space of solutions. In this work we focus on the non-linear inversion of surface wave dispersion curves, which gives structural models of shear-wave velocity versus depth, but the basic concepts have a general validity. (author)

  10. Some Phenomena on Negative Inversion Constructions

    Science.gov (United States)

    Sung, Tae-Soo

    2013-01-01

    We examine the characteristics of NDI (negative degree inversion) and its relation with other inversion phenomena such as SVI (subject-verb inversion) and SAI (subject-auxiliary inversion). The negative element in the NDI construction may be "not," a negative adverbial, or a negative verb. In this respect, NDI has similar licensing…

  11. Decomposing Large Inverse Problems with an Augmented Lagrangian Approach: Application to Joint Inversion of Body-Wave Travel Times and Surface-Wave Dispersion Measurements

    Science.gov (United States)

    Reiter, D. T.; Rodi, W. L.

    2015-12-01

    Constructing 3D Earth models through the joint inversion of large geophysical data sets presents numerous theoretical and practical challenges, especially when diverse types of data and model parameters are involved. Among the challenges are the computational complexity associated with large data and model vectors and the need to unify differing model parameterizations, forward modeling methods and regularization schemes within a common inversion framework. The challenges can be addressed in part by decomposing the inverse problem into smaller, simpler inverse problems that can be solved separately, providing one knows how to merge the separate inversion results into an optimal solution of the full problem. We have formulated an approach to the decomposition of large inverse problems based on the augmented Lagrangian technique from optimization theory. As commonly done, we define a solution to the full inverse problem as the Earth model minimizing an objective function motivated, for example, by a Bayesian inference formulation. Our decomposition approach recasts the minimization problem equivalently as the minimization of component objective functions, corresponding to specified data subsets, subject to the constraints that the minimizing models be equal. A standard optimization algorithm solves the resulting constrained minimization problems by alternating between the separate solution of the component problems and the updating of Lagrange multipliers that serve to steer the individual solution models toward a common model solving the full problem. We are applying our inversion method to the reconstruction of the crust and upper-mantle seismic velocity structure across Eurasia. Data for the inversion comprise a large set of P and S body-wave travel times and fundamental and first-higher mode Rayleigh-wave group velocities.
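
    A toy sketch of the alternating scheme described above, assuming two quadratic (least-squares) component misfits as stand-ins for the travel-time and dispersion subproblems: each data subset is inverted separately, and Lagrange multiplier updates steer the two component models toward a common consensus model.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 20
      A1, A2 = rng.normal(size=(60, n)), rng.normal(size=(40, n))
      m_true = rng.normal(size=n)
      d1, d2 = A1 @ m_true, A2 @ m_true

      rho = 10.0                                   # augmented Lagrangian penalty
      m_bar = np.zeros(n)                          # consensus model
      lam1, lam2 = np.zeros(n), np.zeros(n)        # Lagrange multipliers

      for it in range(100):
          # Separate solution of each component problem (regularized least squares).
          m1 = np.linalg.solve(A1.T @ A1 + rho * np.eye(n), A1.T @ d1 + rho * m_bar - lam1)
          m2 = np.linalg.solve(A2.T @ A2 + rho * np.eye(n), A2.T @ d2 + rho * m_bar - lam2)
          # Consensus update and multiplier (steering) updates.
          m_bar = 0.5 * (m1 + m2 + (lam1 + lam2) / rho)
          lam1 += rho * (m1 - m_bar)
          lam2 += rho * (m2 - m_bar)

      print("consensus misfit to true model:", np.linalg.norm(m_bar - m_true))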

  12. Two-Dimensional Linear Inversion of GPR Data with a Shifting Zoom along the Observation Line

    Directory of Open Access Journals (Sweden)

    Raffaele Persico

    2017-09-01

    Full Text Available Linear inverse scattering problems can be solved by regularized inversion of a matrix, whose calculation and inversion may require significant computing resources, in particular, a significant amount of RAM memory. This effort is dependent on the extent of the investigation domain, which drives a large amount of data to be gathered and a large number of unknowns to be looked for, when this domain becomes electrically large. This leads, in turn, to the problem of inversion of excessively large matrices. Here, we consider the problem of a ground-penetrating radar (GPR survey in two-dimensional (2D geometry, with antennas at an electrically short distance from the soil. In particular, we present a strategy to afford inversion of large investigation domains, based on a shifting zoom procedure. The proposed strategy was successfully validated using experimental radar data.
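
    A minimal sketch of the memory issue and of the shifting-window idea: a regularized linear inversion is solved over overlapping sub-domains along the observation line and the overlapping estimates are averaged. The banded stand-in operator (mimicking a limited antenna footprint), the regularization and the sizes are assumptions for illustration; the actual GPR scattering operator and the paper's zoom bookkeeping are not reproduced.

      import numpy as np

      rng = np.random.default_rng(0)
      n = 240                                  # unknowns along the observation line
      idx = np.arange(n)
      # Banded stand-in operator: each measurement is mostly sensitive to nearby cells.
      A = np.exp(-0.5 * ((idx[:, None] - idx[None, :]) / 5.0) ** 2)
      x_true = np.zeros(n); x_true[60] = 1.0; x_true[170] = -0.7
      d = A @ x_true + 1e-3 * rng.normal(size=n)

      alpha, win, step = 1e-3, 60, 30
      x_est, counts = np.zeros(n), np.zeros(n)
      for start in range(0, n - win + 1, step):
          sl = slice(start, start + win)
          A_w = A[sl, :][:, sl]                # zoomed operator: local data, local unknowns
          x_w = np.linalg.solve(A_w.T @ A_w + alpha * np.eye(win), A_w.T @ d[sl])
          x_est[sl] += x_w; counts[sl] += 1
      x_est /= counts                          # average the overlapping window estimates

      print("relative data misfit:", np.linalg.norm(A @ x_est - d) / np.linalg.norm(d))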

  13. Inverse Problems in Systems Biology: A Critical Review.

    Science.gov (United States)

    Guzzi, Rodolfo; Colombo, Teresa; Paci, Paola

    2018-01-01

    Systems Biology may be assimilated to a symbiotic cyclic interplay between the forward and inverse problems. Computational models need to be continuously refined through experiments, and in turn they help us to make limited experimental resources more efficient. Every time one does an experiment, we know that there will be some noise that can disrupt our measurements. Although noise is certainly a problem, inverse problems already involve the inference of missing information, even if the data are entirely reliable. So the addition of a certain limited amount of noise does not fundamentally change the situation but can be used in solving the so-called ill-posed problem, as defined by Hadamard; it can be seen as an extra source of information. Recent studies have shown that complex systems, among them those of systems biology, are poorly constrained and ill-conditioned because it is difficult to use experimental data to fully estimate their parameters. For these reasons the concept of sloppy models was born: a sequence of models of increasing complexity that become sloppy in the limit of microscopic accuracy. Furthermore, the concept of sloppy models also contains the concept of non-identifiability, because the models are characterized by many parameters that are poorly constrained by experimental data. A strategy therefore needs to be designed to infer, analyze, and understand biological systems. The aim of this work is to provide a critical review of the inverse problems in systems biology, defining a strategy to determine the minimal set of information needed to overcome the problems arising from dynamic biological models that generally may have many unknown, non-measurable parameters.

  14. MIMO Radar Transmit Beampattern Design Without Synthesising the Covariance Matrix

    KAUST Repository

    Ahmed, Sajid

    2013-10-28

    Compared to phased-array radars, multiple-input multiple-output (MIMO) radars provide more degrees-of-freedom (DOF) that can be exploited for improved spatial resolution, better parametric identifiability, lower side-lobe levels at the transmitter/receiver, and a greater variety of transmit beampattern designs. The design of the transmit beampattern generally requires the waveforms to have arbitrary auto- and cross-correlation properties. The generation of such waveforms is a complicated two-step process. In the first step a waveform covariance matrix is synthesised, which is a constrained optimisation problem. In the second step, to realise this covariance matrix, actual waveforms are designed, which is also a constrained optimisation problem. Our proposed scheme converts this two-step constrained optimisation problem into a one-step unconstrained optimisation problem. In the proposed scheme, in contrast to synthesising the covariance matrix for the desired beampattern, nT independent finite-alphabet constant-envelope waveforms are generated and pre-processed, with a weight matrix W, before transmitting from the antennas. In this work, two weight matrices are proposed that can be easily optimised for the desired symmetric and non-symmetric beampatterns and guarantee equal average power transmission from each antenna. Simulation results validate our claims.
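
    A back-of-the-envelope sketch of how a weight matrix W shapes the transmit beampattern of pre-processed independent waveforms on a uniform linear array, P(θ) ∝ ||Wᴴa(θ)||². The W below is a simple random illustration with row normalization for equal average power per antenna; it is not one of the two optimized weight matrices proposed in the paper.

      import numpy as np

      nT = 10                                         # transmit antennas, half-wavelength spacing
      angles = np.linspace(-90, 90, 361)

      def steering(theta_deg):
          k = np.pi * np.sin(np.deg2rad(theta_deg))   # 2*pi*d/lambda with d = lambda/2
          return np.exp(1j * k * np.arange(nT))

      rng = np.random.default_rng(0)
      W = rng.normal(size=(nT, nT)) + 1j * rng.normal(size=(nT, nT))
      W /= np.linalg.norm(W, axis=1, keepdims=True)   # each row unit-norm: equal average power per antenna

      # With unit-power independent waveforms, the covariance is W W^H and P(theta) = ||W^H a(theta)||^2.
      pattern = np.array([np.linalg.norm(W.conj().T @ steering(t)) ** 2 for t in angles])
      pattern /= pattern.max()
      print("beampattern peak (deg):", angles[np.argmax(pattern)])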

  15. Parallelism in matrix computations

    CERN Document Server

    Gallopoulos, Efstratios; Sameh, Ahmed H

    2016-01-01

    This book is primarily intended as a research monograph that could also be used in graduate courses for the design of parallel algorithms in matrix computations. It assumes general but not extensive knowledge of numerical linear algebra, parallel architectures, and parallel programming paradigms. The book consists of four parts: (I) Basics; (II) Dense and Special Matrix Computations; (III) Sparse Matrix Computations; and (IV) Matrix functions and characteristics. Part I deals with parallel programming paradigms and fundamental kernels, including reordering schemes for sparse matrices. Part II is devoted to dense matrix computations such as parallel algorithms for solving linear systems, linear least squares, the symmetric algebraic eigenvalue problem, and the singular-value decomposition. It also deals with the development of parallel algorithms for special linear systems such as banded, Vandermonde, Toeplitz, and block Toeplitz systems. Part III addresses sparse matrix computations: (a) the development of pa...

  16. Neutrino mass matrix

    International Nuclear Information System (INIS)

    Strobel, E.L.

    1985-01-01

    Given the many conflicting experimental results, examination is made of the neutrino mass matrix in order to determine possible masses and mixings. It is assumed that the Dirac mass matrix for the electron, muon, and tau neutrinos is similar in form to those of the quarks and charged leptons, and that the smallness of the observed neutrino masses results from the Gell-Mann-Ramond-Slansky mechanism. Analysis of masses and mixings for the neutrinos is performed using general structures for the Majorana mass matrix. It is shown that if certain tentative experimental results concerning the neutrino masses and mixing angles are confirmed, significant limitations may be placed on the Majorana mass matrix. The most satisfactory simple assumption concerning the Majorana mass matrix is that it is approximately proportional to the Dirac mass matrix. A very recent experimental neutrino mass result and its implications are discussed. Some general properties of matrices with structure similar to the Dirac mass matrices are discussed

  17. Size Estimates in Inverse Problems

    KAUST Repository

    Di Cristo, Michele

    2014-01-06

    Detection of inclusions or obstacles inside a body from boundary measurements is an inverse problem that is very useful in practical applications. When only a finite number of measurements is available, we try to recover some information about the embedded object, such as its size. In this talk we review some recent results on several such inverse problems. The idea is to provide constructive upper and lower estimates of the area/volume of the unknown defect in terms of a quantity related to the work that can be expressed with the available boundary data.

  18. -Dimensional Fractional Lagrange's Inversion Theorem

    Directory of Open Access Journals (Sweden)

    F. A. Abd El-Salam

    2013-01-01

    Full Text Available Using Riemann-Liouville fractional differential operator, a fractional extension of the Lagrange inversion theorem and related formulas are developed. The required basic definitions, lemmas, and theorems in the fractional calculus are presented. A fractional form of Lagrange's expansion for one implicitly defined independent variable is obtained. Then, a fractional version of Lagrange's expansion in more than one unknown function is generalized. For extending the treatment in higher dimensions, some relevant vectors and tensors definitions and notations are presented. A fractional Taylor expansion of a function of -dimensional polyadics is derived. A fractional -dimensional Lagrange inversion theorem is proved.

  19. Darwin's "strange inversion of reasoning".

    Science.gov (United States)

    Dennett, Daniel

    2009-06-16

    Darwin's theory of evolution by natural selection unifies the world of physics with the world of meaning and purpose by proposing a deeply counterintuitive "inversion of reasoning" (according to a 19th century critic): "to make a perfect and beautiful machine, it is not requisite to know how to make it" [MacKenzie RB (1868) (Nisbet & Co., London)]. Turing proposed a similar inversion: to be a perfect and beautiful computing machine, it is not requisite to know what arithmetic is. Together, these ideas help to explain how we human intelligences came to be able to discern the reasons for all of the adaptations of life, including our own.

  20. Inverse transport theory and applications

    International Nuclear Information System (INIS)

    Bal, Guillaume

    2009-01-01

    Inverse transport consists of reconstructing the optical properties of a domain from measurements performed at the domain's boundary. This review concerns several types of measurements: time-dependent, time-independent, angularly resolved and angularly averaged measurements. We review recent results on the reconstruction of the optical parameters from such measurements and the stability of such reconstructions. Inverse transport finds applications e.g. in medical imaging (optical tomography, optical molecular imaging) and in geophysical imaging (remote sensing in the Earth's atmosphere). (topical review)

  1. Full 3-D stratigraphic inversion with a priori information: a powerful way to optimize data integration

    Energy Technology Data Exchange (ETDEWEB)

    Grizon, L.; Leger, M.; Dequirez, P.Y.; Dumont, F.; Richard, V.

    1998-12-31

    Integration between seismic and geological data is crucial to ensure that a reservoir study is accurate and reliable. To reach this goal, a post-stack stratigraphic inversion with a priori information is used. The global cost function combines two types of constraints: one relevant to the seismic amplitudes, the other to an a priori impedance model. This paper presents this flexible and interpretative inversion to determine acoustic impedances constrained by seismic data, log data and geologic information. 5 refs., 8 figs.

  2. Constraining decaying dark matter with FERMI-LAT gamma rays

    International Nuclear Information System (INIS)

    Maccione, L.

    2011-01-01

    High energy electrons and positrons from decaying dark matter can produce a significant flux of gamma rays by inverse Compton scattering off low energy photons in the interstellar radiation field. This possibility is inevitably related to the dark matter interpretation of the observed PAMELA and FERMI excesses. We describe a simple and universal method to constrain dark matter models which produce electrons and positrons in their decay, using the FERMI-LAT gamma-ray observations in the energy range between 0.5 GeV and 300 GeV and exploiting universal response functions that, once convolved with a specific dark matter model, produce the desired constraint. The response functions contain all the astrophysical inputs. We discuss the uncertainties in the determination of the response functions and apply them to place constraints on some specific dark matter decay models that fit the positron and electron fluxes observed by PAMELA and FERMI LAT well, also taking into account prompt radiation from the dark matter decay. With the available data, decaying dark matter cannot be excluded as the source of the PAMELA positron excess.

  3. Constraining particle dark matter using local galaxy distribution

    International Nuclear Information System (INIS)

    Ando, Shin’ichiro; Ishiwata, Koji

    2016-01-01

    It has long been discussed that cosmic rays may contain signals of dark matter. In the last couple of years an anomaly in cosmic-ray positrons has drawn a lot of attention, and recently an excess in cosmic-ray anti-protons has been reported by the AMS-02 collaboration. Both excesses may point towards decaying or annihilating dark matter with a mass of around 1–10 TeV. In this article we study the gamma rays from dark matter and the constraints from cross-correlations with the distribution of galaxies, particularly in the local volume. We find that gamma rays due to the inverse-Compton process have a large intensity, and hence they give stringent constraints on dark matter scenarios in the TeV-scale mass regime. Taking into account recent developments in modeling astrophysical gamma-ray sources as well as the comprehensive possibilities for the final state products of dark matter decay or annihilation, we show that the parameter regions of decaying dark matter that are suggested to explain the excesses are excluded. We also discuss the constraints on annihilating scenarios.

  4. Constraining decaying dark matter with Fermi LAT gamma-rays

    International Nuclear Information System (INIS)

    Zhang, Le; Sigl, Günter; Weniger, Christoph; Maccione, Luca; Redondo, Javier

    2010-01-01

    High energy electrons and positrons from decaying dark matter can produce a significant flux of gamma rays by inverse Compton scattering off low energy photons in the interstellar radiation field. This possibility is inevitably related to the dark matter interpretation of the observed PAMELA and FERMI excesses. The aim of this paper is to provide a simple and universal method to constrain dark matter models which produce electrons and positrons in their decay, using the Fermi LAT gamma-ray observations in the energy range between 0.5 GeV and 300 GeV. We provide a set of universal response functions that, once convolved with a specific dark matter model, produce the desired constraints. Our response functions contain all the astrophysical inputs such as the electron propagation in the galaxy, the dark matter profile, the gamma-ray fluxes of known origin, and the Fermi LAT data. We study the uncertainties in the determination of the response functions and apply them to place constraints on some specific dark matter decay models that fit the positron and electron fluxes observed by PAMELA and Fermi LAT well. To this end we also take into account prompt radiation from the dark matter decay. We find that with the available data decaying dark matter cannot be excluded as the source of the PAMELA positron excess

  5. Efficient realization of 3D joint inversion of seismic and magnetotelluric data with cross gradient structure constraint

    Science.gov (United States)

    Luo, H.; Zhang, H.; Gao, J.

    2016-12-01

    Seismic and magnetotelluric (MT) imaging methods are generally used to characterize subsurface structures at various scales. The two methods are complementary, and integrating them helps to determine the resistivity and velocity models of the target region more reliably. Because of the difficulty of finding an empirical relationship between resistivity and velocity parameters, Gallardo and Meju [2003] proposed a joint inversion method that enforces structural consistency between the resistivity and velocity models, realized by minimizing the cross gradients between the two models. However, it is extremely challenging to combine two different inversion systems along with the cross-gradient constraints. For this reason, Gallardo [2007] proposed a joint inversion scheme that decouples the seismic and MT inversion systems by iteratively performing seismic and MT inversions as well as cross-gradient minimization separately. This scheme avoids the complexity of combining two different systems but suffers from the issue of balancing data fitting against the structure constraint. In this study, we have developed a new joint inversion scheme that avoids the problem encountered by the scheme of Gallardo [2007]. In the new scheme, seismic and MT inversions are still performed separately, but the cross-gradient minimization is also constrained by model perturbations from the separate inversions. In this way, the new scheme still avoids the complexity of combining two different systems, and at the same time the balance between data fitting and the structure-consistency constraint can be enforced. We have tested our joint inversion algorithm for both 2D and 3D cases. Synthetic tests show that joint inversion reconstructs the velocity and resistivity models better than separate inversions. Compared to separate inversions, joint inversion can remove artifacts in the resistivity model and can improve the resolution of deeper resistivity structures. We
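
    As a reminder of the structural constraint being minimized, the 2D cross-gradient of two models m1 and m2 is t = (dm1/dx)(dm2/dz) - (dm1/dz)(dm2/dx), which vanishes wherever the two models' gradients are parallel or one of them is zero. A minimal finite-difference sketch is given below, with toy block models standing in for real velocity and resistivity sections.

```python
import numpy as np

def cross_gradient(m1, m2, dx=1.0, dz=1.0):
    """2D cross-gradient t = dm1/dx * dm2/dz - dm1/dz * dm2/dx (per cell)."""
    dm1_dz, dm1_dx = np.gradient(m1, dz, dx)
    dm2_dz, dm2_dx = np.gradient(m2, dz, dx)
    return dm1_dx * dm2_dz - dm1_dz * dm2_dx

# Toy models: same block boundary -> cross-gradient ~ 0;
# a rotated boundary in m3 -> nonzero cross-gradient near the interfaces.
nz, nx = 60, 80
z, x = np.mgrid[0:nz, 0:nx]
m1 = 1.0 + (x > 40)                  # "velocity"
m2 = 10.0 + 90.0 * (x > 40)          # "resistivity", structurally consistent
m3 = 10.0 + 90.0 * (z > 30)          # structurally inconsistent model

print("consistent  :", float(np.sum(cross_gradient(m1, m2) ** 2)))
print("inconsistent:", float(np.sum(cross_gradient(m1, m3) ** 2)))
```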

  6. Inverse planning and class solutions for brachytherapy treatment planning

    International Nuclear Information System (INIS)

    Trnkova, P.

    2010-01-01

    has no additional features to control the spatial distribution of high dose regions. It is possible to create dosimetrically acceptable treatment plans with IPSA; nevertheless, the size of the high dose regions is not acceptable. In comparison to manual treatment planning as well as to HIPO optimization, IPSA led to conflicting results concerning high dose regions. To be able to use IPSA for inverse treatment planning in cervical cancer brachytherapy, additional tools have to be developed. The last part of the thesis deals with vaginal wall dosimetry. Dose volume constraints for the target volume as well as for the organs at risk (OARs) are an input for the inverse planning optimization calculation. The dose tolerance of each OAR has to be known in order to create appropriate constraints. Until now only the bladder, rectum and sigmoid were considered as OARs. Dose limits for the vagina do not yet exist because of several uncertainties in assessing the dose to the vagina and vaginal morbidity. To overcome contouring uncertainties, a simplified vagina contour was proposed and tested. The analysis showed that this contour is able to detect differences between different applicators as well as between different treatment plans. A prospective study comparing the dosimetric results of this model with side effects in the vagina has to be performed to prove whether this contour works. In conclusion, this thesis proved that inverse planning can be used for cervical cancer brachytherapy. An appropriate implementation of the inverse planning algorithm has to be chosen to avoid high dose regions. Before a dose volume constraint for the vagina can be included as another input parameter for the inverse planning calculation, vaginal dose reporting has to be solved. The feasibility of the proposed simplified vagina contour for dose reporting has to be proven with a prospective study. (author) [de

  7. A novel matrix approach for controlling the invariant densities of chaotic maps

    International Nuclear Information System (INIS)

    Rogers, Alan; Shorten, Robert; Heffernan, Daniel M.

    2008-01-01

    Recent work on positive matrices has resulted in a new matrix method for generating chaotic maps with arbitrary piecewise constant invariant densities, sometimes known as the inverse Frobenius-Perron problem (IFPP). In this paper, we give an extensive introduction to the IFPP, describing existing methods for solving it, and we describe our new matrix approach for solving the IFPP
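
    The paper solves the inverse problem (constructing a map with a prescribed piecewise constant invariant density). As a complementary forward check, the Frobenius-Perron operator of a given map can be approximated by an Ulam-type transition matrix whose leading eigenvector recovers the invariant density; this is a standard technique, not the paper's construction. A minimal sketch for the tent map, whose invariant density is known to be uniform:

```python
import numpy as np

def ulam_matrix(f, n_cells=100, samples_per_cell=200):
    """Row-stochastic Ulam approximation of the Frobenius-Perron operator:
    P[i, j] = fraction of points in cell i that f maps into cell j."""
    P = np.zeros((n_cells, n_cells))
    for i in range(n_cells):
        xs = (i + np.linspace(0.0, 1.0, samples_per_cell, endpoint=False)) / n_cells
        js = np.minimum((f(xs) * n_cells).astype(int), n_cells - 1)
        np.add.at(P[i], js, 1.0 / samples_per_cell)
    return P

tent = lambda x: 1.0 - np.abs(2.0 * x - 1.0)
P = ulam_matrix(tent)

# Invariant density = stationary distribution of P (left eigenvector, eigenvalue 1).
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi = pi / pi.sum()
density = pi * P.shape[0]                # probability per cell / cell width
print("max deviation from the uniform density:", float(np.abs(density - 1.0).max()))
```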

  8. Description of identical particles via gauged matrix models: a generalization of the Calogero-Sutherland system

    International Nuclear Information System (INIS)

    Park, Jeong-Hyuck

    2003-01-01

    We elaborate the idea that matrix models equipped with gauge symmetry provide a natural framework to describe identical particles. After demonstrating the general prescription, we study an exactly solvable harmonic oscillator type gauged matrix model. The model gives a generalization of the Calogero-Sutherland system where the strength of the inverse square potential is not fixed but dynamical, bounded from below

  9. Matrix-based image reconstruction methods for tomography

    International Nuclear Information System (INIS)

    Llacer, J.; Meng, J.D.

    1984-10-01

    Matrix methods of image reconstruction have not been used, in general, because of the large size of practical matrices, ill-conditioning upon inversion, and the success of Fourier-based techniques. An exception is the work that has been done at the Lawrence Berkeley Laboratory for imaging with accelerated radioactive ions. An extension of that work into more general imaging problems shows that, with a correct formulation of the problem, positron tomography with ring geometries results in well-behaved matrices which can be used for image reconstruction with no distortion of the point response in the field of view and with flexibility in the design of the instrument. Maximum Likelihood Estimator methods of reconstruction, which use system matrices tailored to specific instruments and do not need matrix inversion, are shown to result in good preliminary images. A parallel processing computer structure based on multiple inexpensive microprocessors is proposed as a system to implement the matrix-MLE methods. 14 references, 7 figures
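
    A standard maximum-likelihood reconstruction with the properties described here (explicit system matrix, no matrix inversion) is the ML-EM iteration. The sketch below is a generic version with a random stand-in system matrix, not the ring-geometry matrices discussed in the abstract.

```python
import numpy as np

def mlem(A, y, n_iter=50):
    """ML-EM for Poisson data y ~ Poisson(A x): multiplicative updates that
    use only products with A and A^T -- no matrix inversion is needed."""
    x = np.ones(A.shape[1])
    sens = A.T @ np.ones(A.shape[0])          # column sums (sensitivity image)
    for _ in range(n_iter):
        proj = np.maximum(A @ x, 1e-12)       # forward projection, avoid 0-division
        x *= (A.T @ (y / proj)) / np.maximum(sens, 1e-12)
    return x

# Toy problem: random nonnegative system matrix and a sparse "image".
rng = np.random.default_rng(2)
A = rng.random((120, 64))
x_true = np.zeros(64); x_true[[10, 30, 50]] = [5.0, 2.0, 8.0]
y = rng.poisson(A @ x_true).astype(float)
x_hat = mlem(A, y)
print("relative error:", float(np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)))
```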

  10. Acoustic Impedance Inversion of Seismic Data Using Genetic Algorithm

    Science.gov (United States)

    Eladj, Said; Djarfour, Noureddine; Ferahtia, Djalal; Ouadfeul, Sid-Ali

    2013-04-01

    The inversion of seismic data can be used to constrain estimates of the Earth's acoustic impedance structure. This kind of problem is usually known to be non-linear and high-dimensional, with a complex search space which may be riddled with many local minima, and results in irregular objective functions. We investigate here the performance and application of a genetic algorithm in the inversion of seismic data. The proposed algorithm has the advantage of being easily implemented without getting stuck in local minima. The effects of population size, elitism strategy, uniform cross-over and low mutation rate are examined. The optimum solution parameters and performance were decided as a function of the testing error convergence with respect to the generation number. To calculate the fitness function, we used the L2 norm of the sample-to-sample difference between the reference and the inverted trace. The cross-over probability is 0.9-0.95 and mutation has been tested at a probability of 0.01. The application of such a genetic algorithm to synthetic data shows that the acoustic impedance section is inverted efficiently. Keywords: Seismic, Inversion, acoustic impedance, genetic algorithm, fitness functions, cross-over, mutation.
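
    A compact, hypothetical version of such a genetic algorithm is sketched below: an L2 sample-to-sample misfit as fitness, uniform crossover with probability 0.9, a 0.01 mutation rate, and a simple elitism strategy. The convolutional forward model, wavelet, and population settings are placeholders, not those of the paper.

```python
import numpy as np

rng = np.random.default_rng(3)
nt = 60
wavelet = np.array([-0.1, 0.3, 1.0, 0.3, -0.1])              # placeholder wavelet

def synthetic(imp):
    refl = np.diff(imp) / (imp[1:] + imp[:-1])                # reflection coefficients
    return np.convolve(refl, wavelet, mode='same')

def misfit(imp, trace_obs):                                   # L2 sample-to-sample misfit
    return np.sum((synthetic(imp) - trace_obs) ** 2)

# "Observed" trace from a blocky true impedance model.
imp_true = np.full(nt, 2000.0); imp_true[25:] = 3000.0; imp_true[45:] = 2500.0
trace_obs = synthetic(imp_true)

# Genetic algorithm: uniform crossover (p ~ 0.9), mutation (p ~ 0.01), elitism.
pop_size, n_gen, lo, hi = 80, 300, 1500.0, 3500.0
pop = rng.uniform(lo, hi, size=(pop_size, nt))
for gen in range(n_gen):
    fit = np.array([misfit(ind, trace_obs) for ind in pop])
    order = np.argsort(fit)
    elite = pop[order[:2]].copy()                             # elitism strategy
    parents = pop[order[:pop_size // 2]]                      # truncation selection
    children = []
    while len(children) < pop_size - 2:
        a, b = parents[rng.integers(len(parents), size=2)]
        if rng.random() < 0.9:                                # uniform crossover
            mask = rng.random(nt) < 0.5
            child = np.where(mask, a, b)
        else:
            child = a.copy()
        mut = rng.random(nt) < 0.01                           # low mutation rate
        child[mut] = rng.uniform(lo, hi, size=mut.sum())
        children.append(child)
    pop = np.vstack([elite, np.array(children)])
print("best misfit:", float(misfit(pop[0], trace_obs)))
```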

  11. Can earthquake source inversion benefit from rotational ground motion observations?

    Science.gov (United States)

    Igel, H.; Donner, S.; Reinwald, M.; Bernauer, M.; Wassermann, J. M.; Fichtner, A.

    2015-12-01

    With the prospect of instruments that can observe rotational ground motions over a wide frequency and amplitude range in the near future, we address the question of how this type of ground motion observation can be used to solve seismic inverse problems. Here, we focus on whether point or finite source inversions can benefit from additional observations of rotational motions. For a fair comparison, we compare observations from a surface seismic network with N 3-component translational sensors (classic seismometers) with those obtained with N/2 6-component sensors (with additional colocated 3-component rotational motions), thus keeping the overall number of traces constant. Synthetic seismograms are calculated for known point- or finite-source properties. The corresponding inverse problem is posed in a probabilistic way using the Shannon information content as a measure of how well the observations constrain the seismic source properties. The results show that with the 6-C subnetworks the source properties are not only equally well recovered (which alone would be beneficial because of the substantially reduced logistics of installing N/2 sensors), but some source properties are almost always resolved better in a statistically significant way. We assume that this can be attributed to the fact that the (in particular vertical) gradient information is contained in the additional rotational motion components. We compare these effects for strike-slip and normal-faulting type sources. Thus the answer to the question raised is a definite "yes". The challenge now is to demonstrate these effects on real data.

  12. Inverse dynamic analysis of general n-link robot manipulators

    International Nuclear Information System (INIS)

    Yih, T.C.; Wang, T.Y.; Burks, B.L.; Babcock, S.M.

    1996-01-01

    In this paper, a generalized matrix approach is derived to analyze the dynamic forces and moments (torques) required by the joint actuators. This method is general enough to solve the problems of any n-link open-chain robot manipulator with joint combinations of R (revolute), P (prismatic), and S (spherical). Moreover, the proposed matrix solution is applicable to both nonredundant and redundant robotic systems. The matrix notation is formulated based on the Newton-Euler equations under the condition of quasi-static equilibrium. The 4 x 4 homogeneous cylindrical coordinates-Bryant angles (C-B) notation is applied to model the robotic systems. Displacements, velocities, and accelerations of each joint and link center of gravity (CG) are calculated through kinematic analysis. The resultant external forces and moments exerted on the CG of each link are considered as known inputs. Subsequently, a 6n x 6n displacement coefficient matrix and a 6n x 1 external force/moment vector can be established. Finally, the joint forces and moments needed for the joint actuators to control the robotic system are determined through matrix inversion. Numerical examples are illustrated for the nonredundant industrial robots: Bendix AA/CNC (RRP/RRR) and Unimate 2000 spherical (SP/RRR) robots; and the redundant light duty utility arm (LDUA), modified LDUA, and tank waste retrieval manipulator system
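
    The final step described here is a linear solve of the assembled 6n x 6n system. A generic sketch is shown below with random stand-ins for the displacement coefficient matrix and the external force/moment vector (not the actual C-B formulation); in practice the system is best solved directly rather than by forming an explicit inverse.

```python
import numpy as np

n_links = 6                              # n-link manipulator -> 6n unknowns
dim = 6 * n_links

rng = np.random.default_rng(4)
C = rng.standard_normal((dim, dim)) + dim * np.eye(dim)   # stand-in 6n x 6n coefficient matrix
f = rng.standard_normal(dim)                              # stand-in external force/moment vector

# "Matrix inversion" step: solve the system directly instead of forming C^{-1},
# which is cheaper and numerically better behaved.
tau = np.linalg.solve(C, f)
print(tau.shape, float(np.linalg.norm(C @ tau - f)))
```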

  13. Random Matrix Theory and Econophysics

    Science.gov (United States)

    Rosenow, Bernd

    2000-03-01

    Random Matrix Theory (RMT) [1] is used in many branches of physics as a ``zero information hypothesis''. It describes generic behavior of different classes of systems, while deviations from its universal predictions allow one to identify system specific properties. We use methods of RMT to analyze the cross-correlation matrix C of stock price changes [2] of the largest 1000 US companies. In addition to its scientific interest, the study of correlations between the returns of different stocks is also of practical relevance in quantifying the risk of a given stock portfolio. We find [3,4] that the statistics of most of the eigenvalues of the spectrum of C agree with the predictions of RMT, while there are deviations for some of the largest eigenvalues. We interpret these deviations as a system specific property, e.g. containing genuine information about correlations in the stock market. We demonstrate that C shares universal properties with the Gaussian orthogonal ensemble of random matrices. Furthermore, we analyze the eigenvectors of C through their inverse participation ratio and find eigenvectors with large ratios at both edges of the eigenvalue spectrum - a situation reminiscent of localization theory results. This work was done in collaboration with V. Plerou, P. Gopikrishnan, T. Guhr, L.A.N. Amaral, and H.E. Stanley and is related to recent work of Laloux et al. 1. T. Guhr, A. Müller Groeling, and H.A. Weidenmüller, ``Random Matrix Theories in Quantum Physics: Common Concepts'', Phys. Rep. 299, 190 (1998). 2. See, e.g. R.N. Mantegna and H.E. Stanley, Econophysics: Correlations and Complexity in Finance (Cambridge University Press, Cambridge, England, 1999). 3. V. Plerou, P. Gopikrishnan, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Universal and Nonuniversal Properties of Cross Correlations in Financial Time Series'', Phys. Rev. Lett. 83, 1471 (1999). 4. V. Plerou, P. Gopikrishnan, T. Guhr, B. Rosenow, L.A.N. Amaral, and H.E. Stanley, ``Random Matrix Theory
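
    The comparison described above can be reproduced in a few lines: eigenvalues of the cross-correlation matrix of synthetic i.i.d. returns versus the Marchenko-Pastur band predicted by RMT, plus the inverse participation ratio of the eigenvectors. The numbers of stocks and time samples below are arbitrary; real market data would show a few large eigenvalues outside the band.

```python
import numpy as np

N, T = 200, 1000                                 # stocks, time samples (T > N)
rng = np.random.default_rng(5)
returns = rng.standard_normal((T, N))            # i.i.d. returns -> pure RMT case

# Standardize and form the cross-correlation matrix C.
r = (returns - returns.mean(0)) / returns.std(0)
C = (r.T @ r) / T

eigvals, eigvecs = np.linalg.eigh(C)

# Marchenko-Pastur bounds for Q = T/N and unit variance.
Q = T / N
lam_minus, lam_plus = (1 - np.sqrt(1 / Q)) ** 2, (1 + np.sqrt(1 / Q)) ** 2
outside = int(np.sum((eigvals < lam_minus) | (eigvals > lam_plus)))
print(f"eigenvalues outside RMT band [{lam_minus:.2f}, {lam_plus:.2f}]: {outside}")

# Inverse participation ratio per eigenvector: ~1/N for extended eigenvectors,
# O(1) for localized ones.
ipr = np.sum(eigvecs ** 4, axis=0)
print("IPR range:", float(ipr.min()), float(ipr.max()))
```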

  14. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan; Motamed, Mohammad; Tempone, Raul

    2016-01-01

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.
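
    The role of the scaling matrix mentioned here can be illustrated in isolation: when parameters span several orders of magnitude, a diagonal scaling S brings the Hessian back to a well-conditioned form S H S. The Hessian below is a random symmetric positive-definite stand-in and the scales are hypothetical, not the actual seismic source parameters.

```python
import numpy as np

rng = np.random.default_rng(6)

# Hypothetical parameter scales spanning many magnitudes
# (e.g. location, moment tensor components, frequency).
scales = np.array([1e1, 1e1, 1e18, 1e18, 1e18, 1e0])

# Random SPD Hessian that is well conditioned in "natural" (order-one) units.
B = rng.standard_normal((6, 6))
H_unit = B @ B.T + 6 * np.eye(6)
D = np.diag(1.0 / scales)
H = D @ H_unit @ D                               # Hessian w.r.t. the raw parameters

S = np.diag(scales)                              # diagonal scaling matrix
H_scaled = S @ H @ S                             # back to order-one parameters

print("cond(H)     = %.2e" % np.linalg.cond(H))
print("cond(S H S) = %.2e" % np.linalg.cond(H_scaled))
```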

  15. Fast Bayesian optimal experimental design for seismic source inversion

    KAUST Repository

    Long, Quan

    2015-07-01

    We develop a fast method for optimally designing experiments in the context of statistical seismic source inversion. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by elastodynamic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the "true" parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem. © 2015 Elsevier B.V.

  16. Fast Bayesian Optimal Experimental Design for Seismic Source Inversion

    KAUST Repository

    Long, Quan

    2016-01-06

    We develop a fast method for optimally designing experiments [1] in the context of statistical seismic source inversion [2]. In particular, we efficiently compute the optimal number and locations of the receivers or seismographs. The seismic source is modeled by a point moment tensor multiplied by a time-dependent function. The parameters include the source location, moment tensor components, and start time and frequency in the time function. The forward problem is modeled by the elastic wave equations. We show that the Hessian of the cost functional, which is usually defined as the square of the weighted L2 norm of the difference between the experimental data and the simulated data, is proportional to the measurement time and the number of receivers. Consequently, the posterior distribution of the parameters, in a Bayesian setting, concentrates around the true parameters, and we can employ Laplace approximation and speed up the estimation of the expected Kullback-Leibler divergence (expected information gain), the optimality criterion in the experimental design procedure. Since the source parameters span several magnitudes, we use a scaling matrix for efficient control of the condition number of the original Hessian matrix. We use a second-order accurate finite difference method to compute the Hessian matrix and either sparse quadrature or Monte Carlo sampling to carry out numerical integration. We demonstrate the efficiency, accuracy, and applicability of our method on a two-dimensional seismic source inversion problem.

  17. Bilinear Inverse Problems: Theory, Algorithms, and Applications

    Science.gov (United States)

    Ling, Shuyang

    We will discuss how several important real-world signal processing problems, such as self-calibration and blind deconvolution, can be modeled as bilinear inverse problems and solved by convex and nonconvex optimization approaches. In Chapter 2, we bring together three seemingly unrelated concepts, self-calibration, compressive sensing and biconvex optimization. We show how several self-calibration problems can be treated efficiently within the framework of biconvex compressive sensing via a new method called SparseLift. More specifically, we consider a linear system of equations y = DAx, where the diagonal matrix D (which models the calibration error) is unknown and x is an unknown sparse signal. By "lifting" this biconvex inverse problem and exploiting sparsity in this model, we derive explicit theoretical guarantees under which both x and D can be recovered exactly, robustly, and numerically efficiently. In Chapter 3, we study the question of the joint blind deconvolution and blind demixing, i.e., extracting a sequence of functions [special characters omitted] from observing only the sum of their convolutions [special characters omitted]. In particular, for the special case s = 1, it becomes the well-known blind deconvolution problem. We present a non-convex algorithm which guarantees exact recovery under conditions that are competitive with convex optimization methods, with the additional advantage of being computationally much more efficient. We discuss several applications of the proposed framework in image processing and wireless communications in connection with the Internet-of-Things. In Chapter 4, we consider three different self-calibration models of practical relevance. We show how their corresponding bilinear inverse problems can be solved by both the simple linear least squares approach and the SVD-based approach. As a consequence, the proposed algorithms are numerically extremely efficient, thus allowing for real-time deployment. Explicit theoretical

  18. Superconductivity in Pb inverse opal

    International Nuclear Information System (INIS)

    Aliev, Ali E.; Lee, Sergey B.; Zakhidov, Anvar A.; Baughman, Ray H.

    2007-01-01

    Type-II superconducting behavior was observed in highly periodic three-dimensional lead inverse opal prepared by infiltration of melted Pb into blue (D = 160 nm), green (D = 220 nm) and red (D = 300 nm) opals, followed by the extraction of the SiO2 spheres by chemical etching. The onset of a broad phase transition (ΔT = 0.3 K) was shifted from Tc = 7.196 K for bulk Pb to Tc = 7.325 K. The upper critical field Hc2 (3150 Oe) measured from high-field hysteresis loops exceeds the critical field for bulk lead (803 Oe) fourfold. Two well resolved peaks observed in the hysteresis loops were ascribed to flux penetration into the cylindrical void space that can be found in the inverse opal structure and into the periodic structure of Pb nanoparticles. The red inverse opal shows pronounced oscillations of the magnetic moment in the mixed state at low temperatures, and a resistivity modulation at T > 0.9Tc has been observed for all of the samples studied. The magnetic field periodicity of the resistivity modulation is in good agreement with the lattice parameter of the inverse opal structure. We attribute the failure to observe pronounced modulation in the magneto-resistive measurement to difficulties in the precise orientation of the sample along the magnetic field

  19. Inverse problem of solar oscillations

    International Nuclear Information System (INIS)

    Sekii, T.; Shibahashi, H.

    1987-01-01

    The authors present some preliminary results of numerical simulation to infer the sound velocity distribution in the solar interior from the oscillation data of the Sun as the inverse problem. They analyze the acoustic potential itself by taking account of some factors other than the sound velocity, and infer the sound velocity distribution in the deep interior of the Sun

  20. Wave-equation dispersion inversion

    KAUST Repository

    Li, Jing

    2016-12-08

    We present the theory for wave-equation inversion of dispersion curves, where the misfit function is the sum of the squared differences between the wavenumbers along the predicted and observed dispersion curves. The dispersion curves are obtained from Rayleigh waves recorded by vertical-component geophones. Similar to wave-equation traveltime tomography, the complicated surface wave arrivals in traces are skeletonized as simpler data, namely the picked dispersion curves in the phase-velocity and frequency domains. Solutions to the elastic wave equation and an iterative optimization method are then used to invert these curves for 2-D or 3-D S-wave velocity models. This procedure, denoted as wave-equation dispersion inversion (WD), does not require the assumption of a layered model and is significantly less prone to the cycle-skipping problems of full waveform inversion. The synthetic and field data examples demonstrate that WD can approximately reconstruct the S-wave velocity distributions in laterally heterogeneous media if the dispersion curves can be identified and picked. The WD method is easily extended to anisotropic data and the inversion of dispersion curves associated with Love waves.

  1. From capture to simulation: connecting forward and inverse problems in fluids

    KAUST Repository

    Gregson, James; Ihrke, Ivo; Thuerey, Nils; Heidrich, Wolfgang

    2014-01-01

    We explore the connection between fluid capture, simulation and proximal methods, a class of algorithms commonly used for inverse problems in image processing and computer vision. Our key finding is that the proximal operator constraining fluid velocities to be divergence-free is directly equivalent to the pressure-projection methods commonly used in incompressible flow solvers. This observation lets us treat the inverse problem of fluid tracking as a constrained flow problem all while working in an efficient, modular framework. In addition it lets us tightly couple fluid simulation into flow tracking, providing a global prior that significantly increases tracking accuracy and temporal coherence as compared to previous techniques. We demonstrate how we can use these improved results for a variety of applications, such as re-simulation, detail enhancement, and domain modification. We furthermore give an outlook of the applications beyond fluid tracking that our proximal operator framework could enable by exploring the connection of deblurring and fluid guiding.
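
    The equivalence mentioned here can be checked directly on a periodic grid: the proximal operator of the divergence-free constraint is the classical pressure projection (solve a Poisson equation for the pressure and subtract its gradient). The FFT-based sketch below assumes a periodic 2D domain and is only one simple way to implement that projection, not the solver used in the paper.

```python
import numpy as np

def _spectral_freqs(shape, dx=1.0):
    ny, nx = shape
    kx = 2j * np.pi * np.fft.fftfreq(nx, d=dx)[None, :]
    ky = 2j * np.pi * np.fft.fftfreq(ny, d=dx)[:, None]
    return kx, ky

def divergence(u, v, dx=1.0):
    """Spectral divergence of a periodic 2D velocity field."""
    kx, ky = _spectral_freqs(u.shape, dx)
    return np.real(np.fft.ifft2(kx * np.fft.fft2(u) + ky * np.fft.fft2(v)))

def project_divergence_free(u, v, dx=1.0):
    """Pressure projection: solve lap(p) = div(u, v) in Fourier space and
    subtract grad(p). This is the proximal operator of the indicator
    function of the divergence-free constraint set."""
    kx, ky = _spectral_freqs(u.shape, dx)
    u_h, v_h = np.fft.fft2(u), np.fft.fft2(v)
    k2 = kx ** 2 + ky ** 2
    k2[0, 0] = 1.0                               # avoid 0/0 for the mean mode
    p_h = (kx * u_h + ky * v_h) / k2
    p_h[0, 0] = 0.0
    return np.real(np.fft.ifft2(u_h - kx * p_h)), np.real(np.fft.ifft2(v_h - ky * p_h))

rng = np.random.default_rng(7)
u, v = rng.standard_normal((2, 64, 64))
u_p, v_p = project_divergence_free(u, v)
print("max |div| before:", float(np.abs(divergence(u, v)).max()))
print("max |div| after :", float(np.abs(divergence(u_p, v_p)).max()))
```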

  2. From capture to simulation: connecting forward and inverse problems in fluids

    KAUST Repository

    Gregson, James

    2014-07-27

    We explore the connection between fluid capture, simulation and proximal methods, a class of algorithms commonly used for inverse problems in image processing and computer vision. Our key finding is that the proximal operator constraining fluid velocities to be divergence-free is directly equivalent to the pressure-projection methods commonly used in incompressible flow solvers. This observation lets us treat the inverse problem of fluid tracking as a constrained flow problem all while working in an efficient, modular framework. In addition it lets us tightly couple fluid simulation into flow tracking, providing a global prior that significantly increases tracking accuracy and temporal coherence as compared to previous techniques. We demonstrate how we can use these improved results for a variety of applications, such as re-simulation, detail enhancement, and domain modification. We furthermore give an outlook of the applications beyond fluid tracking that our proximal operator framework could enable by exploring the connection of deblurring and fluid guiding.

  3. Cosmogenic photons strongly constrain UHECR source models

    Directory of Open Access Journals (Sweden)

    van Vliet Arjen

    2017-01-01

    Full Text Available With the newest version of our Monte Carlo code for ultra-high-energy cosmic ray (UHECR) propagation, CRPropa 3, the flux of neutrinos and photons due to interactions of UHECRs with extragalactic background light can be predicted. Together with the recently updated data for the isotropic diffuse gamma-ray background (IGRB) by Fermi LAT, it is now possible to severely constrain UHECR source models. The evolution of the UHECR sources especially plays an important role in the determination of the expected secondary photon spectrum. Pure proton UHECR models are already strongly constrained, primarily by the highest energy bins of Fermi LAT’s IGRB, as long as their number density is not strongly peaked at recent times.

  4. A constrained supersymmetric left-right model

    Energy Technology Data Exchange (ETDEWEB)

    Hirsch, Martin [AHEP Group, Instituto de Física Corpuscular - C.S.I.C./Universitat de València, Edificio de Institutos de Paterna, Apartado 22085, E-46071 València (Spain); Krauss, Manuel E. [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Opferkuch, Toby [Bethe Center for Theoretical Physics & Physikalisches Institut der Universität Bonn, Nussallee 12, 53115 Bonn (Germany); Porod, Werner [Institut für Theoretische Physik und Astronomie, Universität Würzburg,Emil-Hilb-Weg 22, 97074 Wuerzburg (Germany); Staub, Florian [Theory Division, CERN,1211 Geneva 23 (Switzerland)

    2016-03-02

    We present a supersymmetric left-right model which predicts gauge coupling unification close to the string scale and extra vector bosons at the TeV scale. The subtleties in constructing a model which is in agreement with the measured quark masses and mixing for such a low left-right breaking scale are discussed. It is shown that in the constrained version of this model radiative breaking of the gauge symmetries is possible and a SM-like Higgs is obtained. Additional CP-even scalars of a similar mass or even much lighter are possible. The expected mass hierarchies for the supersymmetric states differ clearly from those of the constrained MSSM. In particular, the lightest down-type squark, which is a mixture of the sbottom and extra vector-like states, is always lighter than the stop. We also comment on the model’s capability to explain current anomalies observed at the LHC.

  5. Coding for Two Dimensional Constrained Fields

    DEFF Research Database (Denmark)

    Laursen, Torben Vaarbye

    2006-01-01

    a first order model to model higher order constraints by the use of an alphabet extension. We present an iterative method that based on a set of conditional probabilities can help in choosing the large numbers of parameters of the model in order to obtain a stationary model. Explicit results are given...... for the No Isolated Bits constraint. Finally we present a variation of the encoding scheme of bit-stuffing that is applicable to the class of checkerboard constrained fields. It is possible to calculate the entropy of the coding scheme thus obtaining lower bounds on the entropy of the fields considered. These lower...... bounds are very tight for the Run-Length limited fields. Explicit bounds are given for the diamond constrained field as well....

  6. Communication Schemes with Constrained Reordering of Resources

    DEFF Research Database (Denmark)

    Popovski, Petar; Utkovski, Zoran; Trillingsgaard, Kasper Fløe

    2013-01-01

    This paper introduces a communication model inspired by two practical scenarios. The first scenario is related to the concept of protocol coding, where information is encoded in the actions taken by an existing communication protocol. We investigate strategies for protocol coding via combinatorial...... reordering of the labelled user resources (packets, channels) in an existing, primary system. However, the degrees of freedom of the reordering are constrained by the operation of the primary system. The second scenario is related to communication systems with energy harvesting, where the transmitted signals...... are constrained by the energy that is available through the harvesting process. We have introduced a communication model that covers both scenarios and elicits their key feature, namely the constraints of the primary system or the harvesting process. We have shown how to compute the capacity of the channels...

  7. Q-deformed systems and constrained dynamics

    International Nuclear Information System (INIS)

    Shabanov, S.V.

    1993-01-01

    It is shown that quantum theories of the q-deformed harmonic oscillator and one-dimensional free q-particle (a free particle on the 'quantum' line) can be obtained by the canonical quantization of classical Hamiltonian systems with commutative phase-space variables and a non-trivial symplectic structure. In the framework of this approach, classical dynamics of a particle on the q-line coincides with the one of a free particle with friction. It is argued that q-deformed systems can be treated as ordinary mechanical systems with the second-class constraints. In particular, second-class constrained systems corresponding to the q-oscillator and q-particle are given. A possibility of formulating q-deformed systems via gauge theories (first-class constrained systems) is briefly discussed. (orig.)

  8. Fundamentals of matrix analysis with applications

    CERN Document Server

    Saff, Edward Barry

    2015-01-01

    This book provides comprehensive coverage of matrix theory from a geometric and physical perspective, and the authors address the functionality of matrices and their ability to illustrate and aid in many practical applications.  Readers are introduced to inverses and eigenvalues through physical examples such as rotations, reflections, and projections, and only then are computational details described and explored.  MATLAB is utilized to aid in reader comprehension, and the authors are careful to address the issue of rank fragility so readers are not flummoxed when MATLAB displays conflict wit

  9. Workflows for Full Waveform Inversions

    Science.gov (United States)

    Boehm, Christian; Krischer, Lion; Afanasiev, Michael; van Driel, Martin; May, Dave A.; Rietmann, Max; Fichtner, Andreas

    2017-04-01

    Despite many theoretical advances and the increasing availability of high-performance computing clusters, full seismic waveform inversions still face considerable challenges regarding data and workflow management. While the community has access to solvers which can harness modern heterogeneous computing architectures, the computational bottleneck has fallen to these often manpower-bounded issues that need to be overcome to facilitate further progress. Modern inversions involve huge amounts of data and require a tight integration between numerical PDE solvers, data acquisition and processing systems, nonlinear optimization libraries, and job orchestration frameworks. To this end we created a set of libraries and applications revolving around Salvus (http://salvus.io), a novel software package designed to solve large-scale full waveform inverse problems. This presentation focuses on solving passive source seismic full waveform inversions from local to global scales with Salvus. We discuss (i) design choices for the aforementioned components required for full waveform modeling and inversion, (ii) their implementation in the Salvus framework, and (iii) how it is all tied together by a usable workflow system. We combine state-of-the-art algorithms ranging from high-order finite-element solutions of the wave equation to quasi-Newton optimization algorithms using trust-region methods that can handle inexact derivatives. All is steered by an automated interactive graph-based workflow framework capable of orchestrating all necessary pieces. This naturally facilitates the creation of new Earth models and hopefully sparks new scientific insights. Additionally, and even more importantly, it enhances reproducibility and reliability of the final results.

  10. 3D CSEM inversion based on goal-oriented adaptive finite element method

    Science.gov (United States)

    Zhang, Y.; Key, K.

    2016-12-01

    We present a parallel 3D frequency domain controlled-source electromagnetic inversion code named MARE3DEM. Non-linear inversion of observed data is performed with the Occam variant of regularized Gauss-Newton optimization. The forward operator is based on the goal-oriented finite element method that efficiently calculates the responses and sensitivity kernels in parallel using a data decomposition scheme where independent modeling tasks contain different frequencies and subsets of the transmitters and receivers. To accommodate complex 3D conductivity variation with high flexibility and precision, we adopt the dual-grid approach where the forward mesh conforms to the inversion parameter grid and is adaptively refined until the forward solution converges to the desired accuracy. This dual-grid approach is memory efficient, since the inverse parameter grid remains independent from the fine meshing generated around the transmitters and receivers by the adaptive finite element method. In addition, the unstructured inverse mesh efficiently handles multi-scale structures and allows for fine-scale model parameters within the region of interest. Our mesh generation engine keeps track of the refinement hierarchy so that the map of conductivity and sensitivity kernel between the forward and inverse mesh is retained. We employ the adjoint-reciprocity method to calculate the sensitivity kernels which establish a linear relationship between changes in the conductivity model and changes in the modeled responses. Our code uses a direct solver for the linear systems, so the adjoint problem is efficiently computed by re-using the factorization from the primary problem. Further computational efficiency and scalability are obtained in the regularized Gauss-Newton portion of the inversion using parallel dense matrix-matrix multiplication and matrix factorization routines implemented with the ScaLAPACK library. We show the scalability, reliability and the potential of the algorithm to deal with
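
    The regularized Gauss-Newton (Occam-style) model update at the heart of such an inversion reduces to dense matrix products and one factorization per iteration, which is the part parallelized with ScaLAPACK in the abstract. The sketch below uses a toy exponential forward operator and a first-difference roughness matrix as placeholders for the CSEM forward problem and regularization, with a fixed trade-off parameter (in a true Occam scheme the parameter mu is swept at each step).

```python
import numpy as np

rng = np.random.default_rng(8)
n_data, n_model, mu = 40, 30, 1.0

G = rng.standard_normal((n_data, n_model)) / np.sqrt(n_model)
m_true = np.sin(np.linspace(0, 3 * np.pi, n_model))

def forward(m):                      # toy nonlinear forward operator (not CSEM)
    return G @ np.exp(m)

def jacobian(m):                     # sensitivity matrix dF/dm
    return G * np.exp(m)[None, :]

d_obs = forward(m_true) + 0.01 * rng.standard_normal(n_data)

# First-difference roughness operator used as the regularizer.
R = np.eye(n_model) - np.eye(n_model, k=1)

m = np.zeros(n_model)
for it in range(20):
    J = jacobian(m)
    d_hat = d_obs - forward(m) + J @ m
    # Occam-style update: dense matrix-matrix products plus one factorization.
    m = np.linalg.solve(mu * (R.T @ R) + J.T @ J, J.T @ d_hat)
print("rms misfit:", float(np.sqrt(np.mean((forward(m) - d_obs) ** 2))))
```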

  11. Online constrained model-based reinforcement learning

    CSIR Research Space (South Africa)

    Van Niekerk, B

    2017-08-01

    Full Text Available MULTIPLE SHOOTING: Using direct multiple shooting (Bock and Plitt, 1984), problem (1) can be transformed into a structured non-linear program (NLP). First, the time horizon [t0, t0 + T] is partitioned into N equal subintervals [tk, tk+1] for k = 0...

  12. Constraining supergravity models from gluino production

    International Nuclear Information System (INIS)

    Barbieri, R.; Gamberini, G.; Giudice, G.F.; Ridolfi, G.

    1988-01-01

    The branching ratios for gluino decays g tilde → q anti-q Χ, g tilde → g Χ into a stable undetected neutralino are computed as functions of the relevant parameters of the underlying supergravity theory. A simple way of constraining supergravity models from gluino production emerges. The effectiveness of hadronic versus e+e- colliders in the search for supersymmetry can be directly compared. (orig.)

  13. Cosmicflows Constrained Local UniversE Simulations

    Science.gov (United States)

    Sorce, Jenny G.; Gottlöber, Stefan; Yepes, Gustavo; Hoffman, Yehuda; Courtois, Helene M.; Steinmetz, Matthias; Tully, R. Brent; Pomarède, Daniel; Carlesi, Edoardo

    2016-01-01

    This paper combines observational data sets and cosmological simulations to generate realistic numerical replicas of the nearby Universe. The latter are excellent laboratories for studies of the non-linear process of structure formation in our neighbourhood. With measurements of radial peculiar velocities in the local Universe (cosmicflows-2) and a newly developed technique, we produce Constrained Local UniversE Simulations (CLUES). To assess the quality of these constrained simulations, we compare them with random simulations as well as with local observations. The cosmic variance, defined as the mean one-sigma scatter of cell-to-cell comparison between two fields, is significantly smaller for the constrained simulations than for the random simulations. Within the inner part of the box where most of the constraints are, the scatter is smaller by a factor of 2 to 3 on a 5 h⁻¹ Mpc scale with respect to that found for random simulations. This one-sigma scatter obtained when comparing the simulated and the observation-reconstructed velocity fields is only 104 ± 4 km s⁻¹, i.e. the linear theory threshold. These two results demonstrate that these simulations are in agreement with each other and with the observations of our neighbourhood. For the first time, simulations constrained with observational radial peculiar velocities resemble the local Universe up to a distance of 150 h⁻¹ Mpc on a scale of a few tens of megaparsecs. When focusing on the inner part of the box, the resemblance with our cosmic neighbourhood extends to a few megaparsecs (<5 h⁻¹ Mpc). The simulations provide a proper large-scale environment for studies of the formation of nearby objects.

  14. Dynamic Convex Duality in Constrained Utility Maximization

    OpenAIRE

    Li, Yusong; Zheng, Harry

    2016-01-01

    In this paper, we study a constrained utility maximization problem following the convex duality approach. After formulating the primal and dual problems, we construct the necessary and sufficient conditions for both the primal and dual problems in terms of FBSDEs plus additional conditions. Such formulation then allows us to explicitly characterize the primal optimal control as a function of the adjoint process coming from the dual FBSDEs in a dynamic fashion and vice versa. Moreover, we also...

  15. Statistical mechanics of budget-constrained auctions

    OpenAIRE

    Altarelli, F.; Braunstein, A.; Realpe-Gomez, J.; Zecchina, R.

    2009-01-01

    Finding the optimal assignment in budget-constrained auctions is a combinatorial optimization problem with many important applications, a notable example being the sale of advertisement space by search engines (in this context the problem is often referred to as the off-line AdWords problem). Based on the cavity method of statistical mechanics, we introduce a message passing algorithm that is capable of solving efficiently random instances of the problem extracted from a natural distribution,...

  16. Constraining neutron star matter with Quantum Chromodynamics

    CERN Document Server

    Kurkela, Aleksi; Schaffner-Bielich, Jurgen; Vuorinen, Aleksi

    2014-01-01

    In recent years, there have been several successful attempts to constrain the equation of state of neutron star matter using input from low-energy nuclear physics and observational data. We demonstrate that significant further restrictions can be placed by additionally requiring the pressure to approach that of deconfined quark matter at high densities. Remarkably, the new constraints turn out to be highly insensitive to the amount --- or even presence --- of quark matter inside the stars.

  17. Patience of matrix games

    DEFF Research Database (Denmark)

    Hansen, Kristoffer Arnsfelt; Ibsen-Jensen, Rasmus; Podolskii, Vladimir V.

    2013-01-01

    For matrix games we study how small the nonzero probabilities used in optimal strategies must be. We show that for win–lose–draw games (i.e. matrix games with payoffs in {−1, 0, 1}) nonzero probabilities smaller than a certain explicit bound are never needed. We also construct an explicit win–lose game such that the unique optimal

  18. Matrix comparison, Part 2

    DEFF Research Database (Denmark)

    Schneider, Jesper Wiborg; Borlund, Pia

    2007-01-01

    The present two-part article introduces matrix comparison as a formal means for evaluation purposes in informetric studies such as cocitation analysis. In the first part, the motivation behind introducing matrix comparison to informetric studies, as well as two important issues influencing such c...

  19. Unitarity of CKM Matrix

    CERN Document Server

    Saleem, M

    2002-01-01

    The Unitarity of the CKM matrix is examined in the light of the latest available accurate data. The analysis shows that a conclusive result cannot be derived at present. Only more precise data can determine whether the CKM matrix opens new vistas beyond the standard model or not.

  20. Uncertainties in constraining low-energy constants from ³H β decay

    Energy Technology Data Exchange (ETDEWEB)

    Klos, P.; Carbone, A.; Hebeler, K. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, ExtreMe Matter Institute EMMI, Darmstadt (Germany); Menendez, J. [University of Tokyo, Department of Physics, Tokyo (Japan); Schwenk, A. [Technische Universitaet Darmstadt, Institut fuer Kernphysik, Darmstadt (Germany); GSI Helmholtzzentrum fuer Schwerionenforschung GmbH, ExtreMe Matter Institute EMMI, Darmstadt (Germany); Max-Planck-Institut fuer Kernphysik, Heidelberg (Germany)

    2017-08-15

    We discuss the uncertainties in constraining low-energy constants of chiral effective field theory from ³H β decay. The half-life is very precisely known, so that the Gamow-Teller matrix element has been used to fit the coupling c_D of the axial-vector current to a short-range two-nucleon pair. Because the same coupling also describes the leading one-pion-exchange three-nucleon force, this in principle provides a very constraining fit, uncorrelated with the ³H binding energy fit used to constrain another low-energy coupling in three-nucleon forces. However, so far such ³H half-life fits have only been performed at a fixed cutoff value. We show that the cutoff dependence due to the regulators in the axial-vector two-body current can significantly affect the Gamow-Teller matrix elements and consequently also the extracted values for the c_D coupling constant. The degree of the cutoff dependence is correlated with the softness of the employed NN interaction. As a result, present three-nucleon forces based on a fit to ³H β decay underestimate the uncertainty in c_D. We explore a range of c_D values that is compatible within cutoff variation with the experimental ³H half-life and estimate the resulting uncertainties for many-body systems by performing calculations of symmetric nuclear matter. (orig.)

  1. Anatomy of Higgs mass in supersymmetric inverse seesaw models

    Energy Technology Data Exchange (ETDEWEB)

    Chun, Eung Jin, E-mail: ejchun@kias.re.kr [Korea Institute for Advanced Study, Seoul 130-722 (Korea, Republic of); Mummidi, V. Suryanarayana, E-mail: soori9@cts.iisc.ernet.in [Centre for High Energy Physics, Indian Institute of Science, Bangalore 560012 (India); Vempati, Sudhir K., E-mail: vempati@cts.iisc.ernet.in [Centre for High Energy Physics, Indian Institute of Science, Bangalore 560012 (India)

    2014-09-07

    We compute the one loop corrections to the CP-even Higgs mass matrix in the supersymmetric inverse seesaw model to single out the different cases where the radiative corrections from the neutrino sector could become important. It is found that there could be a significant enhancement in the Higgs mass even for Dirac neutrino masses of O(30) GeV if the left-handed sneutrino soft mass is comparable to or larger than the right-handed neutrino mass. In the case where the right-handed neutrino masses are significantly larger than the supersymmetry breaking scale, the corrections can at most account for an upward shift of 3 GeV. For very heavy multi-TeV sneutrinos, the corrections replicate the stop corrections at one loop. We further show that general gauge mediation with the inverse seesaw model naturally accommodates a 125 GeV Higgs with TeV-scale stops.

  2. Constraining the mass of the Local Group

    Science.gov (United States)

    Carlesi, Edoardo; Hoffman, Yehuda; Sorce, Jenny G.; Gottlöber, Stefan

    2017-03-01

    The mass of the Local Group (LG) is a crucial parameter for galaxy formation theories. However, its observational determination is challenging - its mass budget is dominated by dark matter that cannot be directly observed. To this end, the posterior distributions of the LG and its massive constituents have been constructed by means of constrained and random cosmological simulations. Two priors are assumed - the Λ cold dark matter model that is used to set up the simulations, and an LG model that encodes the observational knowledge of the LG and is used to select LG-like objects from the simulations. The constrained simulations are designed to reproduce the local cosmography as it is imprinted on to the Cosmicflows-2 data base of velocities. Several prescriptions are used to define the LG model, focusing in particular on different recent estimates of the tangential velocity v_tan of M31. It is found that (a) different v_tan choices affect the peak mass values by up to a factor of 2, and change the mass ratio of M_M31 to M_MW by up to 20 per cent; (b) constrained simulations yield more sharply peaked posterior distributions compared with the random ones; (c) LG mass estimates are found to be smaller than those found using the timing argument; (d) preferred Milky Way masses lie in the range of (0.6-0.8) × 10^12 M⊙; whereas (e) M_M31 is found to vary between (1.0-2.0) × 10^12 M⊙, with a strong dependence on the v_tan values used.

  3. Inversion for Eigenvalues and Modes Using Sierra-SD and ROL.

    Energy Technology Data Exchange (ETDEWEB)

    Walsh, Timothy [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Aquino, Wilkins [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Ridzal, Denis [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Kouri, Drew Philip [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-12-01

    In this report we formulate eigenvalue-based methods for model calibration using a PDE-constrained optimization framework. We derive the abstract optimization operators from first principles and implement these methods using Sierra-SD and the Rapid Optimization Library (ROL). To demonstrate this approach, we use experimental measurements and an inverse solution to compute the joint and elastic foam properties of a low-fidelity unit (LFU) model.

  4. Fuzzy risk matrix

    International Nuclear Information System (INIS)

    Markowski, Adam S.; Mannan, M. Sam

    2008-01-01

    A risk matrix is a mechanism to characterize and rank process risks that are typically identified through one or more multifunctional reviews (e.g., process hazard analysis, audits, or incident investigation). This paper describes a procedure for developing a fuzzy risk matrix that may be used for emerging fuzzy logic applications in different safety analyses (e.g., LOPA). The fuzzification of the frequency and severity of the consequences of the incident scenario, which are the basic inputs for the fuzzy risk matrix, is described. Subsequently, using different designs of the risk matrix, fuzzy rules are established, enabling the development of fuzzy risk matrices. Three types of fuzzy risk matrix have been developed (low-cost, standard, and high-cost), and using a distillation column case study, the effect of the design on the final defuzzified risk index is demonstrated.

  5. Fuzzy vulnerability matrix

    International Nuclear Information System (INIS)

    Baron, Jorge H.; Rivera, S.S.

    2000-01-01

    The so-called vulnerability matrix is used in the evaluation part of the probabilistic safety assessment for a nuclear power plant, during the containment event tree calculations. This matrix is established from what is known as Numerical Categories for Engineering Judgement. This matrix is usually established with numerical values obtained with traditional arithmetic based on set theory. The representation of this matrix with fuzzy numbers is much more appropriate, due to the fact that the Numerical Categories for Engineering Judgement are better represented with linguistic variables, such as 'highly probable', 'probable', 'impossible', etc. In the present paper a methodology to obtain a Fuzzy Vulnerability Matrix is presented, starting from the recommendations on the Numerical Categories for Engineering Judgement. (author)

  6. Quantum algorithm for support matrix machines

    Science.gov (United States)

    Duan, Bojia; Yuan, Jiabin; Liu, Ying; Li, Dan

    2017-09-01

    We propose a quantum algorithm for support matrix machines (SMMs) that efficiently addresses an image classification problem by introducing a least-squares reformulation. This algorithm consists of two core subroutines: a quantum matrix inversion (Harrow-Hassidim-Lloyd, HHL) algorithm and a quantum singular value thresholding (QSVT) algorithm. The two algorithms can be implemented on a universal quantum computer with complexity O[log(npq)] and O[log(pq)], respectively, where n is the number of training data and pq is the size of the feature space. By iterating the algorithms, we can find the parameters for the SMM classification model. Our analysis shows that both the HHL and QSVT algorithms achieve an exponential speedup over their classical counterparts.

  7. An efficient, block-by-block algorithm for inverting a block tridiagonal, nearly block Toeplitz matrix

    International Nuclear Information System (INIS)

    Reuter, Matthew G; Hill, Judith C

    2012-01-01

    We present an algorithm for computing any block of the inverse of a block tridiagonal, nearly block Toeplitz matrix (defined as a block tridiagonal matrix with a small number of deviations from the purely block Toeplitz structure). By exploiting both the block tridiagonal and the nearly block Toeplitz structures, this method scales independently of the total number of blocks in the matrix and linearly with the number of deviations. Numerical studies demonstrate this scaling and the advantages of our method over alternatives.
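
    The record above does not spell the algorithm out, so the sketch below shows only the standard forward/backward recursion for the diagonal blocks of a block tridiagonal inverse, which such methods typically build on; the nearly-block-Toeplitz shortcut described in the abstract is not reproduced here, and all names are illustrative.

```python
import numpy as np

def diag_blocks_of_inverse(D, U, L):
    """Diagonal blocks of inv(T) for a block tridiagonal matrix T.

    D[i] = T[i, i], U[i] = T[i, i+1], L[i] = T[i+1, i] (lists of square blocks).
    Generic recursion; it does not exploit the nearly block Toeplitz structure.
    """
    n = len(D)
    gL = [None] * n                      # "left-connected" partial inverses
    gL[0] = np.linalg.inv(D[0])
    for i in range(1, n):
        gL[i] = np.linalg.inv(D[i] - L[i - 1] @ gL[i - 1] @ U[i - 1])

    G = [None] * n                       # diagonal blocks of the full inverse
    G[n - 1] = gL[n - 1]
    for i in range(n - 2, -1, -1):
        G[i] = gL[i] + gL[i] @ U[i] @ G[i + 1] @ L[i] @ gL[i]
    return G
```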

  8. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao

    2016-12-07

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.
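
    Since the MSM above is presented as a generalization of Gauss-Seidel and SOR splittings, a minimal classical Gauss-Seidel splitting iteration for a linear system Ax = b is sketched below as a point of reference; it is not the authors' composite-function solver, and the example system is illustrative.

```python
import numpy as np

def gauss_seidel(A, b, x0=None, iters=100):
    """Gauss-Seidel splitting A = M - N with M = D + L (lower triangle).
    Each sweep solves M x_new = b + N x_old."""
    x = np.zeros(len(b)) if x0 is None else np.asarray(x0, dtype=float).copy()
    M = np.tril(A)                 # diagonal plus strictly lower part
    N = M - A                      # minus the strictly upper part
    for _ in range(iters):
        x = np.linalg.solve(M, b + N @ x)
    return x

# Small diagonally dominant system, for which the iteration converges.
A = np.array([[4.0, 1.0, 0.0], [1.0, 4.0, 1.0], [0.0, 1.0, 4.0]])
b = np.array([1.0, 2.0, 3.0])
print(gauss_seidel(A, b))
```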

  9. A Matrix Splitting Method for Composite Function Minimization

    KAUST Repository

    Yuan, Ganzhao; Zheng, Wei-Shi; Ghanem, Bernard

    2016-01-01

    Composite function minimization captures a wide spectrum of applications in both computer vision and machine learning. It includes bound constrained optimization and cardinality regularized optimization as special cases. This paper proposes and analyzes a new Matrix Splitting Method (MSM) for minimizing composite functions. It can be viewed as a generalization of the classical Gauss-Seidel method and the Successive Over-Relaxation method for solving linear systems in the literature. Incorporating a new Gaussian elimination procedure, the matrix splitting method achieves state-of-the-art performance. For convex problems, we establish the global convergence, convergence rate, and iteration complexity of MSM, while for non-convex problems, we prove its global convergence. Finally, we validate the performance of our matrix splitting method on two particular applications: nonnegative matrix factorization and cardinality regularized sparse coding. Extensive experiments show that our method outperforms existing composite function minimization techniques in terms of both efficiency and efficacy.

  10. Matrix Transfer Function Design for Flexible Structures: An Application

    Science.gov (United States)

    Brennan, T. J.; Compito, A. V.; Doran, A. L.; Gustafson, C. L.; Wong, C. L.

    1985-01-01

    The application of matrix transfer function design techniques to the problem of disturbance rejection on a flexible space structure is demonstrated. The design approach is based on parameterizing a class of stabilizing compensators for the plant and formulating the design specifications as a constrained minimization problem in terms of these parameters. The solution yields a matrix transfer function representation of the compensator. A state space realization of the compensator is constructed to investigate performance and stability on the nominal and perturbed models. The application is made to the ACOSSA (Active Control of Space Structures) optical structure.

  11. Sparse inverse covariance estimation with the graphical lasso.

    Science.gov (United States)

    Friedman, Jerome; Hastie, Trevor; Tibshirani, Robert

    2008-07-01

    We consider the problem of estimating sparse graphs by a lasso penalty applied to the inverse covariance matrix. Using a coordinate descent procedure for the lasso, we develop a simple algorithm--the graphical lasso--that is remarkably fast: It solves a 1000-node problem (approximately 500,000 parameters) in at most a minute and is 30-4000 times faster than competing methods. It also provides a conceptual link between the exact problem and the approximation suggested by Meinshausen and Bühlmann (2006). We illustrate the method on some cell-signaling data from proteomics.
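
    The graphical lasso is available as an off-the-shelf estimator in scikit-learn; a minimal usage sketch is given below (assuming scikit-learn is installed), rather than the authors' original implementation. The penalty strength alpha and the synthetic data are illustrative.

```python
import numpy as np
from sklearn.covariance import GraphicalLasso

rng = np.random.default_rng(0)
X = rng.standard_normal((200, 10))      # 200 samples, 10 variables

model = GraphicalLasso(alpha=0.1)       # alpha controls the l1 penalty
model.fit(X)

precision = model.precision_            # sparse inverse covariance estimate
covariance = model.covariance_          # corresponding covariance estimate
print(np.count_nonzero(np.abs(precision) > 1e-8), "nonzero precision entries")
```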

  12. On the Quantum Inverse problem for the continuous Heisenberg spin chain with axial anisotropy

    International Nuclear Information System (INIS)

    Roy Chowdhury, A.; Chanda, P.K.

    1986-06-01

    We have considered the Quantum Inverse problem for the continuous form of Heisenberg spin chain with anisotropy. The form of quantum R-matrix, the commutation rules for the scattering data, and the explicit structure of the excitation spectrum are obtained. (author)

  13. Three-Dimensional Inversion of Magnetotelluric Data for the Sediment–Basement Interface

    DEFF Research Database (Denmark)

    Cai, Hongzhu; Zhdanov, Michael

    2016-01-01

    the thickness and the conductivities of the sedimentary basin. The forward modeling is based on the integral equation approach. The inverse problem is solved using a regularized conjugate gradient method. The Fréchet derivative matrix is calculated based on quasi-Born approximation. The developed method...

  14. On the joint inversion of SGG and SST data from the GOCE mission

    Directory of Open Access Journals (Sweden)

    P. Ditmar

    2003-01-01

    Full Text Available The computation of spherical harmonic coefficients of the Earth’s gravity field from satellite-to-satellite tracking (SST) data and satellite gravity gradiometry (SGG) data is considered. As long as the functional model related to SST data contains nuisance parameters (e.g. unknown initial state vectors), assembling of the corresponding normal matrix must be supplied with the back-substitution operation, so that the nuisance parameters are excluded from consideration. The traditional back-substitution algorithm, however, may result in large round-off errors. Hence an alternative approach, back-substitution at the level of the design matrix, is implemented. Both a stand-alone inversion of either type of data and a joint inversion of both types are considered. The conclusion drawn is that the joint inversion results in a much better model of the Earth’s gravity field than a stand-alone inversion. Furthermore, two numerical techniques for solving the joint system of normal equations are compared: (i) the Cholesky method based on an explicit computation of the normal matrix, and (ii) the pre-conditioned conjugate gradient method (PCCG), for which an explicit computation of the entire normal matrix is not needed. The comparison shows that the PCCG method is much faster than the Cholesky method. Key words: Earth’s gravity field, GOCE, satellite-to-satellite tracking, satellite gravity gradiometry, back-substitution
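
    The two numerical strategies compared above can be illustrated on a generic least-squares problem: a Cholesky solve of the explicitly formed normal matrix versus a Jacobi-preconditioned conjugate gradient solver that only needs matrix-vector products. This is a schematic sketch with synthetic data, not the GOCE processing chain.

```python
import numpy as np
from scipy.linalg import cho_factor, cho_solve
from scipy.sparse.linalg import LinearOperator, cg

rng = np.random.default_rng(1)
A = rng.standard_normal((500, 80))       # design matrix (observations x unknowns)
y = rng.standard_normal(500)             # observations
rhs = A.T @ y

# (i) Cholesky on the explicitly assembled normal matrix N = A^T A
N = A.T @ A
x_chol = cho_solve(cho_factor(N), rhs)

# (ii) Preconditioned CG: only products with A and A^T are required
diag_N = np.einsum("ij,ij->j", A, A)     # diagonal of N, used as Jacobi preconditioner
normal_op = LinearOperator((80, 80), matvec=lambda v: A.T @ (A @ v))
precond = LinearOperator((80, 80), matvec=lambda v: v / diag_N)
x_pcg, info = cg(normal_op, rhs, M=precond)

print(info, np.linalg.norm(x_chol - x_pcg) / np.linalg.norm(x_chol))
```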

  15. The nuclear reaction matrix

    International Nuclear Information System (INIS)

    Krenciglowa, E.M.; Kung, C.L.; Kuo, T.T.S.; Osnes, E. (Department of Physics, State University of New York at Stony Brook, Stony Brook, New York 11794)

    1976-01-01

    Different definitions of the reaction matrix G appropriate to the calculation of nuclear structure are reviewed and discussed. Qualitative physical arguments are presented in support of a two-step calculation of the G-matrix for finite nuclei. In the first step the high-energy excitations are included using orthogonalized plane-wave intermediate states, and in the second step the low-energy excitations are added in, using harmonic oscillator intermediate states. Accurate calculations of G-matrix elements for nuclear structure calculations in the A ≈ 18 region are performed following this procedure and treating the Pauli exclusion operator Q{sub 2p} by the method of Tsai and Kuo. The treatment of Q{sub 2p}, the effect of the intermediate-state spectrum and the energy dependence of the reaction matrix are investigated in detail. The present matrix elements are compared with various matrix elements given in the literature. In particular, close agreement is obtained with the matrix elements calculated by Kuo and Brown using approximate methods.

  16. Inverse photon-photon processes

    International Nuclear Information System (INIS)

    Carimalo, C.; Crozon, M.; Kesler, P.; Parisi, J.

    1981-12-01

    We here consider inverse photon-photon processes, i.e. AB → γγX (where A, B are hadrons, in particular protons or antiprotons), at high energies. As regards the production of a γγ continuum, we show that, under specific conditions, the study of such processes might provide some information on the subprocess gg → γγ, involving a quark box. It is also suggested to use those processes in order to systematically look for heavy C = + structures (quarkonium states, gluonia, etc.) showing up in the γγ channel. Inverse photon-photon processes might thus become a new and fertile area of investigation in high-energy physics, provided the difficult problem of discriminating between direct photons and indirect ones can be handled in a satisfactory way.

  17. Analysis of RAE-1 inversion

    Science.gov (United States)

    Hedland, D. A.; Degonia, P. K.

    1974-01-01

    The RAE-1 spacecraft inversion performed October 31, 1972 is described based upon the in-orbit dynamical data in conjunction with results obtained from previously developed computer simulation models. The computer simulations used are predictive of the satellite dynamics, including boom flexing, and are applicable during boom deployment and retraction, inter-phase coast periods, and post-deployment operations. Attitude data, as well as boom tip data, were analyzed in order to obtain a detailed description of the dynamical behavior of the spacecraft during and after the inversion. Runs were made using the computer model and the results were analyzed and compared with the real time data. Close agreement between the actual recorded spacecraft attitude and the computer simulation results was obtained.

  18. Obtaining the crystal potential by inversion from electron scattering intensities

    International Nuclear Information System (INIS)

    Allen, L.T.; Josefsson, T.W.; Leeb, H.

    1998-01-01

    A method to obtain the crystal potential from the intensities of the diffracted beams in high energy electron diffraction is proposed. It is based on a series of measurements for specific well determined orientations of the incident beam which determine the moduli of all elements of the scattering matrix. Using unitarity and the specific form of the scattering matrix (including symmetries) an overdetermined set of non-linear equations is obtained from these data. Solution of these equations yields the required phase information and allows the determination of a (projected) crystal potential by inversion which is unique up to an arbitrary shift of the origin. The reconstruction of potentials from intensities is illustrated for two realistic examples, a [111] systematic row case in ZnS and a [110] zone axis orientation in GaAs (both noncentrosymmetric crystals)

  19. Matrix Metalloproteinase Enzyme Family

    Directory of Open Access Journals (Sweden)

    Ozlem Goruroglu Ozturk

    2013-04-01

    Full Text Available Matrix metalloproteinases play an important role in many biological processes such as embryogenesis, tissue remodeling, wound healing, and angiogenesis, and in some pathological conditions such as atherosclerosis, arthritis and cancer. Currently, 24 genes have been identified in humans that encode different groups of matrix metalloproteinase enzymes. This review discusses the members of the matrix metalloproteinase family and their substrate specificity, structure, function and the regulation of their enzyme activity by tissue inhibitors. [Archives Medical Review Journal 2013; 22(2): 209-220]

  20. Matrix groups for undergraduates

    CERN Document Server

    Tapp, Kristopher

    2005-01-01

    Matrix groups touch an enormous spectrum of the mathematical arena. This textbook brings them into the undergraduate curriculum. It makes an excellent one-semester course for students familiar with linear and abstract algebra and prepares them for a graduate course on Lie groups. Matrix Groups for Undergraduates is concrete and example-driven, with geometric motivation and rigorous proofs. The story begins and ends with the rotations of a globe. In between, the author combines rigor and intuition to describe basic objects of Lie theory: Lie algebras, matrix exponentiation, Lie brackets, and maximal tori.

  1. Elementary matrix theory

    CERN Document Server

    Eves, Howard

    1980-01-01

    The usefulness of matrix theory as a tool in disciplines ranging from quantum mechanics to psychometrics is widely recognized, and courses in matrix theory are increasingly a standard part of the undergraduate curriculum. This outstanding text offers an unusual introduction to matrix theory at the undergraduate level. Unlike most texts dealing with the topic, which tend to remain on an abstract level, Dr. Eves' book employs a concrete elementary approach, avoiding abstraction until the final chapter. This practical method renders the text especially accessible to students of physics, engineering

  2. Sparseness- and continuity-constrained seismic imaging

    Science.gov (United States)

    Herrmann, Felix J.

    2005-04-01

    Non-linear solution strategies to the least-squares seismic inverse-scattering problem with sparseness and continuity constraints are proposed. Our approach is designed to (i) deal with substantial amounts of additive noise (SNR formulating the solution of the seismic inverse problem in terms of an optimization problem. During the optimization, sparseness on the basis and continuity along the reflectors are imposed by jointly minimizing the l1- and anisotropic diffusion/total-variation norms on the coefficients and reflectivity, respectively. [Joint work with Peyman P. Moghaddam was carried out as part of the SINBAD project, with financial support secured through ITF (the Industry Technology Facilitator) from the following organizations: BG Group, BP, ExxonMobil, and SHELL. Additional funding came from the NSERC Discovery Grants 22R81254.
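
    The sparseness constraint mentioned above amounts to an l1 penalty on the basis coefficients; as a stand-alone illustration of that ingredient, a minimal iterative soft-thresholding (ISTA) sketch for a generic l1-regularized least-squares problem follows. It is not the authors' joint l1/anisotropic-diffusion solver.

```python
import numpy as np

def ista(A, b, lam, iters=200):
    """ISTA for min_x 0.5*||Ax - b||^2 + lam*||x||_1."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2          # 1 / Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        z = x - step * (A.T @ (A @ x - b))          # gradient step on the quadratic term
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)   # soft threshold
    return x
```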

  3. Minimal models from W-constrained hierarchies via the Kontsevich-Miwa transform

    CERN Document Server

    Gato-Rivera, Beatriz

    1992-01-01

    A direct relation between the conformal formalism for 2d-quantum gravity and the W-constrained KP hierarchy is found, without the need to invoke intermediate matrix model technology. The Kontsevich-Miwa transform of the KP hierarchy is used to establish an identification between W constraints on the KP tau function and decoupling equations corresponding to Virasoro null vectors. The Kontsevich-Miwa transform maps the $W^{(l)}$-constrained KP hierarchy to the $(p',p)$ minimal model, with the tau function being given by the correlator of a product of (dressed) $(l,1)$ (or $(1,l)$) operators, provided the Miwa parameter $n_i$ and the free parameter (an abstract $bc$ spin) present in the constraints are expressed through the ratio $p'/p$ and the level $l$.

  4. 2.5D Inversion Algorithm of Frequency-Domain Airborne Electromagnetics with Topography

    Directory of Open Access Journals (Sweden)

    Jianjun Xi

    2016-01-01

    Full Text Available We present a 2.5D inversion algorithm with topography for frequency-domain airborne electromagnetic data. The forward modeling is based on the edge finite element method and uses irregular hexahedra to adapt to the topography. The electric and magnetic fields are split into primary (background) and secondary (scattered) fields to eliminate the source singularity. To handle the multiple sources of the frequency-domain airborne electromagnetic method, we use the large-scale sparse matrix parallel shared memory direct solver PARDISO to solve the linear system of equations efficiently. The inversion algorithm is based on the Gauss-Newton method, which has a fast convergence rate. The Jacobian matrix is calculated efficiently by “adjoint forward modelling”. The synthetic inversion examples indicate that our proposed method is correct and effective. Furthermore, ignoring the topography effect can lead to incorrect results and interpretations.
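
    The Gauss-Newton scheme referred to above reduces, at each iteration, to a damped normal-equation solve for the model update; a minimal sketch of that single step is shown below. The Jacobian here stands in for one computed by adjoint forward modelling, and the damping parameter is illustrative.

```python
import numpy as np

def gauss_newton_step(J, residual, lam=1e-2):
    """One damped Gauss-Newton update: solve (J^T J + lam*I) dm = -J^T r,
    after which the model is updated as m <- m + dm."""
    n = J.shape[1]
    lhs = J.T @ J + lam * np.eye(n)
    return np.linalg.solve(lhs, -J.T @ residual)
```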

  5. A field theory description of constrained energy-dissipation processes

    International Nuclear Information System (INIS)

    Mandzhavidze, I.D.; Sisakyan, A.N.

    2002-01-01

    A field theory description of dissipation processes constrained by a high-symmetry group is given. The formalism is presented in the example of the multiple-hadron production processes, where the transition to the thermodynamic equilibrium results from the kinetic energy of colliding particles dissipating into hadron masses. The dynamics of these processes is restricted because the constraints responsible for the colour charge confinement must be taken into account. We develop a more general S-matrix formulation of the thermodynamics of nonequilibrium dissipative processes and find a necessary and sufficient condition for the validity of this description; this condition is similar to the correlation relaxation condition, which, according to Bogolyubov, must apply as the system approaches equilibrium. This situation must physically occur in processes with an extremely high multiplicity, at least if the hadron mass is nonzero. We also describe a new strong-coupling perturbation scheme, which is useful for taking symmetry restrictions on the dynamics of dissipation processes into account. We review the literature devoted to this problem

  6. Wavelet evolutionary network for complex-constrained portfolio rebalancing

    Science.gov (United States)

    Suganya, N. C.; Vijayalakshmi Pai, G. A.

    2012-07-01

    The portfolio rebalancing problem deals with resetting the proportion of different assets in a portfolio with respect to changing market conditions. The constraints included in the portfolio rebalancing problem are basic, cardinality, bounding, class and proportional transaction cost. In this study, a new heuristic algorithm named wavelet evolutionary network (WEN) is proposed for the solution of the complex-constrained portfolio rebalancing problem. Initially, the empirical covariance matrix, one of the key inputs to the problem, is estimated using the wavelet shrinkage denoising technique to obtain better optimal portfolios. Secondly, the complex cardinality constraint is eliminated using k-means cluster analysis. Finally, the WEN strategy with logical procedures is employed to find the initial proportion of investment in the portfolio of assets and also to rebalance it after a certain period. Experimental studies of WEN are undertaken on Bombay Stock Exchange, India (BSE200 index, period: July 2001-July 2006) and Tokyo Stock Exchange, Japan (Nikkei225 index, period: March 2002-March 2007) data sets. The results obtained using WEN are compared with those of its only existing counterpart, the Hopfield evolutionary network (HEN) strategy, and verify that WEN performs better than HEN. In addition, different performance metrics and data envelopment analysis are carried out to prove the robustness and efficiency of WEN over the HEN strategy.
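
    The cardinality-reduction step mentioned above (k-means cluster analysis) can be sketched as a pre-selection of one representative asset per cluster; the feature choice and selection rule below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from sklearn.cluster import KMeans

def select_assets_by_clustering(returns, k, seed=0):
    """Reduce the asset universe to k representatives via k-means.

    returns: (n_periods, n_assets) array of historical returns. Assets are
    clustered on simple per-asset features (mean return, volatility) and the
    asset closest to each cluster centre is kept.
    """
    features = np.column_stack([returns.mean(axis=0), returns.std(axis=0)])
    km = KMeans(n_clusters=k, n_init=10, random_state=seed).fit(features)
    selected = []
    for c in range(k):
        members = np.where(km.labels_ == c)[0]
        dists = np.linalg.norm(features[members] - km.cluster_centers_[c], axis=1)
        selected.append(int(members[np.argmin(dists)]))
    return sorted(selected)
```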

  7. Cascading Constrained 2-D Arrays using Periodic Merging Arrays

    DEFF Research Database (Denmark)

    Forchhammer, Søren; Laursen, Torben Vaarby

    2003-01-01

    We consider a method for designing 2-D constrained codes by cascading finite width arrays using predefined finite width periodic merging arrays. This provides a constructive lower bound on the capacity of the 2-D constrained code. Examples include symmetric RLL and density constrained codes...

  8. Operator approach to solutions of the constrained BKP hierarchy

    International Nuclear Information System (INIS)

    Shen, Hsin-Fu; Lee, Niann-Chern; Tu, Ming-Hsien

    2011-01-01

    The operator formalism to the vector k-constrained BKP hierarchy is presented. We solve the Hirota bilinear equations of the vector k-constrained BKP hierarchy via the method of neutral free fermion. In particular, by choosing suitable group element of O(∞), we construct rational and soliton solutions of the vector k-constrained BKP hierarchy.

  9. Frequency-domain elastic full waveform inversion using encoded simultaneous sources

    Science.gov (United States)

    Jeong, W.; Son, W.; Pyun, S.; Min, D.

    2011-12-01

    Currently, numerous studies have endeavored to develop robust full waveform inversion and migration algorithms. These processes require enormous computational costs because of the number of sources in the survey. To avoid this problem, the phase-encoding technique for prestack migration was proposed by Romero (2000), and Krebs et al. (2009) proposed the encoded simultaneous-source inversion technique in the time domain. On the other hand, Ben-Hadj-Ali et al. (2011) demonstrated the robustness of frequency-domain full waveform inversion with simultaneous sources for noisy data by changing the source assembling. Although several studies on simultaneous-source inversion tried to estimate P-wave velocity based on the acoustic wave equation, seismic migration and waveform inversion based on the elastic wave equations are required to obtain more reliable subsurface information. In this study, we propose a 2-D frequency-domain elastic full waveform inversion technique using phase encoding methods. In our algorithm, the random phase encoding method is employed to calculate the gradients of the elastic parameters, the source signature estimate, and the diagonal entries of the approximate Hessian matrix. The crosstalk in the estimated source signature and in the diagonal entries of the approximate Hessian matrix is suppressed with iteration, as it is for the gradients. Our 2-D frequency-domain elastic waveform inversion algorithm is composed using the back-propagation technique and the conjugate-gradient method. The source signature is estimated using the full Newton method. We compare the simultaneous-source inversion with the conventional waveform inversion for synthetic data sets of the Marmousi-2 model. The inverted results obtained by simultaneous sources are comparable to those obtained by individual sources, and the source signature is successfully estimated in the simultaneous-source technique. Comparing the inverted results using the pseudo Hessian matrix with previous inversion results
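
    The random phase encoding used above can be summarized in a few lines: each frequency-domain shot gather is multiplied by a random phase and the shots are summed into one supergather, with the same encoding applied to observed and modelled data so that crosstalk averages out over iterations. The sketch below is schematic; array shapes and names are illustrative.

```python
import numpy as np

def encode_shots(shot_data, rng):
    """Random phase encoding of frequency-domain shot gathers.

    shot_data: complex array of shape (n_shots, n_receivers) for one frequency.
    Returns the encoded supergather and the phases, so the identical encoding
    can be applied to the corresponding modelled data.
    """
    phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=shot_data.shape[0]))
    supergather = (phases[:, None] * shot_data).sum(axis=0)
    return supergather, phases
```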

  10. Spatio-Temporal Audio Enhancement Based on IAA Noise Covariance Matrix Estimates

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Jensen, Jesper Rindom; Christensen, Mads Græsbøll

    2014-01-01

    A method for estimating the noise covariance matrix in a multichannel setup is proposed. The method is based on the iterative adaptive approach (IAA), which only needs short segments of data to estimate the covariance matrix. Therefore, the method can be used for fast varying signals. The method is based on an assumption of the desired signal being harmonic, which is used for estimating the noise covariance matrix from the covariance matrix of the observed signal. The noise covariance estimate is used in the linearly constrained minimum variance (LCMV) filter and compared
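
    The LCMV filter mentioned above has a closed form given a noise covariance estimate R, a constraint matrix C and desired responses f: w = R^{-1} C (C^H R^{-1} C)^{-1} f. A minimal sketch follows; the IAA covariance estimation itself is not reproduced.

```python
import numpy as np

def lcmv_weights(R, C, f):
    """Linearly constrained minimum variance filter weights.

    R: (M, M) noise covariance estimate (e.g. from IAA),
    C: (M, K) constraint matrix, f: (K,) desired constraint responses.
    """
    RinvC = np.linalg.solve(R, C)
    return RinvC @ np.linalg.solve(C.conj().T @ RinvC, f)
```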

  11. Joint Inversion Modelling of Geophysical Data From Lough Neagh Basin

    Science.gov (United States)

    Vozar, J.; Moorkamp, M.; Jones, A. G.; Rath, V.; Muller, M. R.

    2015-12-01

    Multi-dimensional modelling of geophysical data collected in the Lough Neagh Basin is presented in the framework of the IRETHERM project. The Permo-Triassic Lough Neagh Basin, situated in the southeastern part of Northern Ireland, exhibits an elevated geothermal gradient (~30 °C/km) in the exploratory boreholes drilled there. This is taken to indicate good geothermal exploitation potential in the Sherwood Sandstone aquifer for heating, and possibly even electricity production, purposes. We have used a 3-D joint inversion framework for modelling the magnetotelluric (MT) and gravity data collected to the north of Lough Neagh to derive robust subsurface geological models. Comprehensive supporting geophysical and geological data (e.g. borehole logs and reflection seismic images) have been used in order to analyze and model the MT and gravity data. The geophysical data sets were provided by the Geological Survey of Northern Ireland (GSNI). Correct objective function weighting in favor of noise-free MT response functions is particularly important in joint inversion. There is no simple way to correct the distortion effects in the 3-D responses, as can be done in the 1-D or 2-D case. We have used the Tellus Project airborne EM data to constrain the magnetotelluric data and correct them for near-surface effects. The shallow models from the airborne data are used to constrain the uppermost part of the 3-D inversion model. Preliminary 3-D joint inversion modeling reveals that the Sherwood Sandstone Group and the Permian Sandstone Formation are imaged as a conductive zone in the depth range of 500 m to 2000 m with laterally varying thickness, depth, and conductance. The conductive target sediments become shallower and thinner to the north and they are laterally continuous. To obtain a better characterization of the thermal transport properties of the investigated area, we used porosity and resistivity data from the Annaghmore and Ballymacilroy boreholes to estimate the relations between porosity

  12. 3D magnetization vector inversion based on fuzzy clustering: inversion algorithm, uncertainty analysis, and application to geology differentiation

    Science.gov (United States)

    Sun, J.; Li, Y.

    2017-12-01

    Magnetic data contain important information about the subsurface rocks that were magnetized in the geological history, which provides an important avenue to the study of the crustal heterogeneities associated with magmatic and hydrothermal activities. Interpretation of magnetic data has been widely used in mineral exploration, basement characterization and large scale crustal studies for several decades. However, interpreting magnetic data has been often complicated by the presence of remanent magnetizations with unknown magnetization directions. Researchers have developed different methods to deal with the challenges posed by remanence. We have developed a new and effective approach to inverting magnetic data for magnetization vector distributions characterized by region-wise consistency in the magnetization directions. This approach combines the classical Tikhonov inversion scheme with fuzzy C-means clustering algorithm, and constrains the estimated magnetization vectors to a specified small number of possible directions while fitting the observed magnetic data to within noise level. Our magnetization vector inversion recovers both the magnitudes and the directions of the magnetizations in the subsurface. Magnetization directions reflect the unique geological or hydrothermal processes applied to each geological unit, and therefore, can potentially be used for the purpose of differentiating various geological units. We have developed a practically convenient and effective way of assessing the uncertainty associated with the inverted magnetization directions (Figure 1), and investigated how geological differentiation results might be affected (Figure 2). The algorithm and procedures we have developed for magnetization vector inversion and uncertainty analysis open up new possibilities of extracting useful information from magnetic data affected by remanence. We will use a field data example from exploration of an iron-oxide-copper-gold (IOCG) deposit in Brazil to
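
    The clustering component of the inversion described above is fuzzy C-means; a generic implementation is sketched below for grouping vectors (e.g. magnetization directions) around a small number of centres. It is a plain textbook version, not the authors' coupled Tikhonov/fuzzy-clustering code.

```python
import numpy as np

def fuzzy_c_means(X, n_clusters, m=2.0, iters=100, seed=0):
    """Plain fuzzy C-means: returns cluster centres and the membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((X.shape[0], n_clusters))
    U /= U.sum(axis=1, keepdims=True)
    for _ in range(iters):
        W = U ** m
        centres = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2) + 1e-12
        U = d ** (-2.0 / (m - 1.0))
        U /= U.sum(axis=1, keepdims=True)
    return centres, U
```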

  13. Elastic reflection waveform inversion with variable density

    KAUST Repository

    Li, Yuanyuan; Li, Zhenchun; Alkhalifah, Tariq Ali; Guo, Qiang

    2017-01-01

    Elastic full waveform inversion (FWI) provides a better description of the subsurface than that given by the acoustic assumption. However, it suffers from a more serious cycle skipping problem compared with the latter. Reflection waveform inversion

  14. Full waveform inversion using envelope-based global correlation norm

    KAUST Repository

    Oh, Juwon

    2018-01-28

    Various parameterizations have been suggested to simplify inversions of first arrivals, or P-waves, in orthorhombic anisotropic media, but the number and type of retrievable parameters have not been decisively determined. We show that only six parameters can be retrieved from the dynamic linearized inversion of P-waves. These parameters are different from the six parameters needed to describe the kinematics of P-waves. Reflection-based radiation patterns from the P-P scattered waves are remapped into the spectral domain to allow for our resolution analysis based on the effective angle of illumination concept. Singular value decomposition of the spectral sensitivities from various azimuths, offset coverage scenarios, and data bandwidths allows us to quantify the resolution of different parameterizations, taking into account the signal-to-noise ratio in a given experiment. According to our singular value analysis, when the primary goal of inversion is determining the velocity of the P-waves, gradually adding anisotropy of lower orders (isotropic, vertically transversally isotropic, orthorhombic) in hierarchical parameterization is the best choice. Hierarchical parametrization reduces the tradeoff between the parameters and makes gradual introduction of lower anisotropy orders straightforward. When all the anisotropic parameters affecting P-wave propagation need to be retrieved simultaneously, the classic parameterization of orthorhombic medium with elastic stiffness matrix coefficients and density is a better choice for inversion. We provide estimates of the number and set of parameters that can be retrieved from surface seismic data in different acquisition scenarios. To set up an inversion process, the singular values determine the number of parameters that can be inverted and the resolution matrices from the parameterizations can be used to ascertain the set of parameters that can be resolved.
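
    The singular value analysis described above can be mimicked on any sensitivity matrix: count the singular values above a noise-related threshold to get the number of resolvable parameter combinations and build the model resolution matrix from the retained right singular vectors. The threshold rule below is an illustrative choice, not the authors' exact criterion.

```python
import numpy as np

def resolution_from_sensitivities(G, rel_threshold):
    """Truncated-SVD resolution analysis of a sensitivity matrix G."""
    U, s, Vt = np.linalg.svd(G, full_matrices=False)
    k = int(np.sum(s > rel_threshold * s[0]))   # resolvable parameter combinations
    Vk = Vt[:k].T
    return k, Vk @ Vk.T                         # k and the model resolution matrix
```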

  15. Feature and Pose Constrained Visual Aided Inertial Navigation for Computationally Constrained Aerial Vehicles

    Science.gov (United States)

    Williams, Brian; Hudson, Nicolas; Tweddle, Brent; Brockers, Roland; Matthies, Larry

    2011-01-01

    A Feature and Pose Constrained Extended Kalman Filter (FPC-EKF) is developed for highly dynamic computationally constrained micro aerial vehicles. Vehicle localization is achieved using only a low performance inertial measurement unit and a single camera. The FPC-EKF framework augments the vehicle's state with both previous vehicle poses and critical environmental features, including vertical edges. This filter framework efficiently incorporates measurements from hundreds of opportunistic visual features to constrain the motion estimate, while allowing navigation and sustained tracking with respect to a few persistent features. In addition, vertical features in the environment are opportunistically used to provide global attitude references. Accurate pose estimation is demonstrated on a sequence including fast traversing, where visual features enter and exit the field-of-view quickly, as well as hover and ingress maneuvers where drift-free navigation is achieved with respect to the environment.

  16. Capturing Hotspots For Constrained Indoor Movement

    DEFF Research Database (Denmark)

    Ahmed, Tanvir; Pedersen, Torben Bach; Lu, Hua

    2013-01-01

    Finding the hotspots in large indoor spaces is very important for identifying overloaded locations and for security, crowd management, indoor navigation and guidance. The tracking data coming from indoor tracking are huge in volume and not readily available for finding hotspots. This paper presents a graph-based model for constrained indoor movement that can map the tracking records into mapping records which represent the entry and exit times of an object in a particular location. Then it discusses the hotspot extraction technique from the mapping records.

  17. Quantization of soluble classical constrained systems

    International Nuclear Information System (INIS)

    Belhadi, Z.; Menas, F.; Bérard, A.; Mohrbach, H.

    2014-01-01

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which does neither require Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way

  18. Quantization of soluble classical constrained systems

    Energy Technology Data Exchange (ETDEWEB)

    Belhadi, Z. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Laboratoire de physique théorique, Faculté des sciences exactes, Université de Bejaia, 06000 Bejaia (Algeria); Menas, F. [Laboratoire de physique et chimie quantique, Faculté des sciences, Université Mouloud Mammeri, BP 17, 15000 Tizi Ouzou (Algeria); Ecole Nationale Préparatoire aux Etudes d’ingéniorat, Laboratoire de physique, RN 5 Rouiba, Alger (Algeria); Bérard, A. [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France); Mohrbach, H., E-mail: herve.mohrbach@univ-lorraine.fr [Equipe BioPhysStat, Laboratoire LCP-A2MC, ICPMB, IF CNRS No 2843, Université de Lorraine, 1 Bd Arago, 57078 Metz Cedex (France)

    2014-12-15

    The derivation of the brackets among coordinates and momenta for classical constrained systems is a necessary step toward their quantization. Here we present a new approach for the determination of the classical brackets which does neither require Dirac’s formalism nor the symplectic method of Faddeev and Jackiw. This approach is based on the computation of the brackets between the constants of integration of the exact solutions of the equations of motion. From them all brackets of the dynamical variables of the system can be deduced in a straightforward way.

  19. On a complete topological inverse polycyclic monoid

    Directory of Open Access Journals (Sweden)

    S. O. Bardyla

    2016-12-01

    Full Text Available We give sufficient conditions when a topological inverse $\lambda$-polycyclic monoid $P_{\lambda}$ is absolutely $H$-closed in the class of topological inverse semigroups. For every infinite cardinal $\lambda$ we construct the coarsest semigroup inverse topology $\tau_{mi}$ on $P_\lambda$ and give an example of a topological inverse monoid $S$ which contains the polycyclic monoid $P_2$ as a dense discrete subsemigroup.

  20. Inverse problem in transformation optics

    DEFF Research Database (Denmark)

    Novitsky, Andrey

    2011-01-01

    The straightforward method of transformation optics implies that one starts from the coordinate transformation and determines the Jacobian matrix, the fields and material parameters of the cloak. However, the coordinate transformation appears as an optional function: it is not necessary to know it...

  1. Hacking the Matrix.

    Science.gov (United States)

    Czerwinski, Michael; Spence, Jason R

    2017-01-05

    Recently in Nature, Gjorevski et al. (2016) describe a fully defined synthetic hydrogel that mimics the extracellular matrix to support in vitro growth of intestinal stem cells and organoids. The hydrogel allows exquisite control over the chemical and physical in vitro niche and enables identification of regulatory properties of the matrix. Copyright © 2017 Elsevier Inc. All rights reserved.

  2. The Matrix Organization Revisited

    DEFF Research Database (Denmark)

    Gattiker, Urs E.; Ulhøi, John Parm

    1999-01-01

    This paper gives a short overview of matrix structure and technology management. It outlines some of the characteristics and also points out that many organizations may actually be hybrids (i.e. mix several ways of organizing to allocate resources effectively).

  3. Challenges of inversely estimating Jacobian from metabolomics data

    Directory of Open Access Journals (Sweden)

    Xiaoliang eSun

    2015-11-01

    Full Text Available Inferring dynamics of metabolic networks directly from metabolomics data provides a promising way to elucidate the underlying mechanisms of biological systems, as reported in our previous studies [1-3] by a differential Jacobian approach. The Jacobian is solved from an over-determined system of equations as JC + CJ^T = -2D, called the Lyapunov equation in its generic form, where J is the Jacobian, C is the covariance matrix of the metabolomics data and D is the fluctuation matrix. The Lyapunov equation can be further simplified into the linear form Ax = b. Frequently, this linear equation system is ill-conditioned, i.e., a small variation in the right-hand side b results in a big change in the solution x, thus making the solution unstable and error-prone. At the same time, inaccurate estimation of the covariance matrix and uncertainties in the fluctuation matrix bring biases to the solution x. Here, we first reviewed common approaches to circumvent the ill-conditioning problem, including total least squares, Tikhonov regularization and truncated singular value decomposition. Then we benchmarked these methods on several in-silico kinetic models with small to large perturbations on the covariance and fluctuation matrices. The results identified that the accuracy of the reverse Jacobian is mainly dependent on the condition number of A, the perturbation amplitude of C and the stiffness of the kinetic models. Our research contributes a systematic comparison of methods to inversely solve the Jacobian from metabolomics data.
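
    Two of the regularization schemes benchmarked above, Tikhonov regularization and truncated SVD, have compact closed forms for the ill-conditioned linear system Ax = b; a minimal sketch of both is given below (generic, with illustrative regularization parameters).

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov-regularized solution of Ax = b."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam ** 2 * np.eye(n), A.T @ b)

def tsvd(A, b, k):
    """Truncated-SVD solution keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])
```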

  4. Irrigation Requirement Estimation Using Vegetation Indices and Inverse Biophysical Modeling

    Science.gov (United States)

    Bounoua, Lahouari; Imhoff, Marc L.; Franks, Shannon

    2010-01-01

    We explore an inverse biophysical modeling process forced by satellite and climatological data to quantify irrigation requirements in semi-arid agricultural areas. We constrain the carbon and water cycles modeled under both equilibrium, balance between vegetation and climate, and non-equilibrium, water added through irrigation. We postulate that the degree to which irrigated dry lands vary from equilibrium climate conditions is related to the amount of irrigation. The amount of water required over and above precipitation is considered as an irrigation requirement. For July, results show that spray irrigation resulted in an additional amount of water of 1.3 mm per occurrence with a frequency of 24.6 hours. In contrast, the drip irrigation required only 0.6 mm every 45.6 hours or 46% of that simulated by the spray irrigation. The modeled estimates account for 87% of the total reported irrigation water use, when soil salinity is not important and 66% in saline lands.

  5. Image-domain full waveform inversion: Field data example

    KAUST Repository

    Zhang, Sanzong

    2014-08-05

    The main difficulty with the data-domain full waveform inversion (FWI) is that it tends to get stuck in the local minima associated with the waveform misfit function. This is the result of cycle skipping which degrades the low-wavenumber update in the absence of low-frequencies and long-offset data. An image-domain objective function is defined as the normed difference between the predicted and observed common image gathers (CIGs) in the subsurface offset domain. This new objective function is not constrained by cycle skipping at the far subsurface offsets. To test the effectiveness of this method, we apply it to marine data recorded in the Gulf of Mexico. Results show that image-domain FWI is less sensitive to the initial model and the absence of low-frequency data compared with conventional FWI. The liability, however, is that it is almost an order of magnitude more expensive than standard FWI.

  6. Image-domain full waveform inversion: Field data example

    KAUST Repository

    Zhang, Sanzong; Schuster, Gerard T.

    2014-01-01

    The main difficulty with the data-domain full waveform inversion (FWI) is that it tends to get stuck in the local minima associated with the waveform misfit function. This is the result of cycle skipping which degrades the low-wavenumber update in the absence of low-frequencies and long-offset data. An image-domain objective function is defined as the normed difference between the predicted and observed common image gathers (CIGs) in the subsurface offset domain. This new objective function is not constrained by cycle skipping at the far subsurface offsets. To test the effectiveness of this method, we apply it to marine data recorded in the Gulf of Mexico. Results show that image-domain FWI is less sensitive to the initial model and the absence of low-frequency data compared with conventional FWI. The liability, however, is that it is almost an order of magnitude more expensive than standard FWI.

  7. High-resolution Fracture Characterization Using Elastic Full-waveform Inversion

    KAUST Repository

    Zhang, Z.

    2017-05-26

    Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution. Here, we propose to estimate both the spatial distribution and physical properties of fractures using full waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. To better understand the inversion results, we analyze the FWI radiation patterns of the fracture weaknesses. A shape regularization term is added to the objective function to improve the inversion for the horizontal weakness, which is otherwise poorly constrained. Alternatively, a simplified model of penny-shaped cracks is used to reduce the nonuniqueness in the inverted weaknesses and achieve a faster convergence.

  8. High-resolution Fracture Characterization Using Elastic Full-waveform Inversion

    KAUST Repository

    Zhang, Z.; Tsvankin, I.; Alkhalifah, Tariq Ali

    2017-01-01

    Current methodologies to characterize fractures at the reservoir scale have serious limitations in spatial resolution. Here, we propose to estimate both the spatial distribution and physical properties of fractures using full waveform inversion (FWI) of multicomponent surface seismic data. An effective orthorhombic medium with five clusters of vertical fractures distributed in a checkerboard fashion is used to test the algorithm. To better understand the inversion results, we analyze the FWI radiation patterns of the fracture weaknesses. A shape regularization term is added to the objective function to improve the inversion for the horizontal weakness, which is otherwise poorly constrained. Alternatively, a simplified model of penny-shaped cracks is used to reduce the nonuniqueness in the inverted weaknesses and achieve a faster convergence.

  9. Elastic full-waveform inversion of transmission data in 2D VTI media

    KAUST Repository

    Kamath, Nishant; Tsvankin, Ilya

    2014-01-01

    Full-waveform inversion (FWI) has been implemented mostly for isotropic media, with extensions to anisotropic models typically limited to acoustic approximations. Here, we develop elastic FWI for transmitted waves in 2D heterogeneous VTI (transversely isotropic with a vertical symmetry axis) media. The model is parameterized in terms of the P- and S-wave vertical velocities and the P-wave normal-moveout and horizontal velocities. To test the FWI algorithm, we introduce Gaussian anomalies in the Thomsen parameters of a homogeneous VTI medium and perform FWI of transmission data for different configurations of the source and receiver arrays. The inversion results strongly depend on the acquisition geometry and the aperture because of the parameter trade-offs. In contrast to acoustic FWI, the elastic inversion helps constrain the S-wave vertical velocity, which for our model is decoupled from the other parameters.

  10. Elastic full-waveform inversion of transmission data in 2D VTI media

    KAUST Repository

    Kamath, Nishant

    2014-08-05

    Full-waveform inversion (FWI) has been implemented mostly for isotropic media, with extensions to anisotropic models typically limited to acoustic approximations. Here, we develop elastic FWI for transmitted waves in 2D heterogeneous VTI (transversely isotropic with a vertical symmetry axis) media. The model is parameterized in terms of the P- and S-wave vertical velocities and the P-wave normal-moveout and horizontal velocities. To test the FWI algorithm, we introduce Gaussian anomalies in the Thomsen parameters of a homogeneous VTI medium and perform FWI of transmission data for different configurations of the source and receiver arrays. The inversion results strongly depend on the acquisition geometry and the aperture because of the parameter trade-offs. In contrast to acoustic FWI, the elastic inversion helps constrain the S-wave vertical velocity, which for our model is decoupled from the other parameters.

  11. The Exopolysaccharide Matrix

    Science.gov (United States)

    Koo, H.; Falsetta, M.L.; Klein, M.I.

    2013-01-01

    Many infectious diseases in humans are caused or exacerbated by biofilms. Dental caries is a prime example of a biofilm-dependent disease, resulting from interactions of microorganisms, host factors, and diet (sugars), which modulate the dynamic formation of biofilms on tooth surfaces. All biofilms have a microbial-derived extracellular matrix as an essential constituent. The exopolysaccharides formed through interactions between sucrose- (and starch-) and Streptococcus mutans-derived exoenzymes present in the pellicle and on microbial surfaces (including non-mutans) provide binding sites for cariogenic and other organisms. The polymers formed in situ enmesh the microorganisms while forming a matrix facilitating the assembly of three-dimensional (3D) multicellular structures that encompass a series of microenvironments and are firmly attached to teeth. The metabolic activity of microbes embedded in this exopolysaccharide-rich and diffusion-limiting matrix leads to acidification of the milieu and, eventually, acid-dissolution of enamel. Here, we discuss recent advances concerning spatio-temporal development of the exopolysaccharide matrix and its essential role in the pathogenesis of dental caries. We focus on how the matrix serves as a 3D scaffold for biofilm assembly while creating spatial heterogeneities and low-pH microenvironments/niches. Further understanding on how the matrix modulates microbial activity and virulence expression could lead to new approaches to control cariogenic biofilms. PMID:24045647

  12. Inversion: A Most Useful Kind of Transformation.

    Science.gov (United States)

    Dubrovsky, Vladimir

    1992-01-01

    The transformation assigning to every point its inverse with respect to a circle with given radius and center is called an inversion. Discusses inversion with respect to points, circles, angles, distances, space, and the parallel postulate. Exercises related to these topics are included. (MDH)

  13. Probabilistic Geoacoustic Inversion in Complex Environments

    Science.gov (United States)

    2015-09-30

    Probabilistic Geoacoustic Inversion in Complex Environments Jan Dettmer School of Earth and Ocean Sciences, University of Victoria, Victoria BC...long-range inversion methods can fail to provide sufficient resolution. For proper quantitative examination of variability, parameter uncertainty must...project aims to advance probabilistic geoacoustic inversion methods for complex ocean environments for a range of geoacoustic data types. The work is

  14. Changes in epistemic frameworks: Random or constrained?

    Directory of Open Access Journals (Sweden)

    Ananka Loubser

    2012-11-01

    Full Text Available Since the emergence of a solid anti-positivist approach in the philosophy of science, an important question has been to understand how and why epistemic frameworks change in time, are modified or even substituted. In contemporary philosophy of science three main approaches to framework-change were detected in the humanist tradition: 1. In both the pre-theoretical and theoretical domains changes occur according to a rather constrained, predictable or even pre-determined pattern (e.g. Holton). 2. Changes occur in a way that is more random or unpredictable and free from constraints (e.g. Kuhn, Feyerabend, Rorty, Lyotard). 3. Between these approaches, a middle position can be found, attempting some kind of synthesis (e.g. Popper, Lakatos). Because this situation calls for clarification and systematisation, this article sought to clarify how changes in pre-scientific frameworks occur, and provided transcendental criticism of the above positions. This article suggested that the above-mentioned positions are not fully satisfactory, as change and constancy are not sufficiently integrated. An alternative model was suggested in which changes in epistemic frameworks occur according to a pattern, neither completely random nor rigidly constrained, which results in change being dynamic but not arbitrary. This alternative model is integral, rather than dialectical, and therefore does not correspond to position three.

  15. Fringe instability in constrained soft elastic layers.

    Science.gov (United States)

    Lin, Shaoting; Cohen, Tal; Zhang, Teng; Yuk, Hyunwoo; Abeyaratne, Rohan; Zhao, Xuanhe

    2016-11-04

    Soft elastic layers with top and bottom surfaces adhered to rigid bodies are abundant in biological organisms and engineering applications. As the rigid bodies are pulled apart, the stressed layer can exhibit various modes of mechanical instabilities. In cases where the layer's thickness is much smaller than its length and width, the dominant modes that have been studied are the cavitation, interfacial and fingering instabilities. Here we report a new mode of instability which emerges if the thickness of the constrained elastic layer is comparable to or smaller than its width. In this case, the middle portion along the layer's thickness elongates nearly uniformly while the constrained fringe portions of the layer deform nonuniformly. When the applied stretch reaches a critical value, the exposed free surfaces of the fringe portions begin to undulate periodically without debonding from the rigid bodies, giving the fringe instability. We use experiments, theory and numerical simulations to quantitatively explain the fringe instability and derive scaling laws for its critical stress, critical strain and wavelength. We show that in a force controlled setting the elastic fingering instability is associated with a snap-through buckling that does not exist for the fringe instability. The discovery of the fringe instability will not only advance the understanding of mechanical instabilities in soft materials but also have implications for biological and engineered adhesives and joints.

  16. Approximating the constellation constrained capacity of the MIMO channel with discrete input

    DEFF Research Database (Denmark)

    Yankov, Metodi Plamenov; Forchhammer, Søren; Larsen, Knud J.

    2015-01-01

    In this paper the capacity of a Multiple Input Multiple Output (MIMO) channel is considered, subject to an average power constraint, for multi-dimensional discrete input, in the case when no channel state information is available at the transmitter. We prove that when the constellation size grows, t...... for the equivalent orthogonal channel obtained by the singular value decomposition. Furthermore, lower bounds on the constrained capacity are derived for the cases of a square and a tall MIMO matrix, by optimizing the constellation for the equivalent channel obtained by QR decomposition.
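
    As a rough numerical illustration of the per-stream view taken above, the sketch below approximates the constellation-constrained capacity by applying the singular value decomposition to the channel matrix and summing scalar mutual informations over the resulting parallel sub-channels, each estimated by Monte Carlo for a QPSK alphabet. This is a hedged sketch, not the authors' implementation: the per-antenna QPSK input, the precoding assumption, and all function names are illustrative.

```python
import numpy as np

def awgn_mi_discrete(constellation, gain, noise_var, n_mc=20000, seed=0):
    """Monte Carlo estimate of I(X;Y) in bits for y = gain*x + n,
    with x uniform over `constellation` and n ~ CN(0, noise_var)."""
    rng = np.random.default_rng(seed)
    s = gain * np.asarray(constellation)
    M = len(s)
    x = rng.choice(s, size=n_mc)
    n = np.sqrt(noise_var / 2) * (rng.standard_normal(n_mc)
                                  + 1j * rng.standard_normal(n_mc))
    y = x + n
    # log-likelihood exponents for every candidate symbol
    d = -np.abs(y[:, None] - s[None, :]) ** 2 / noise_var
    dmax = d.max(axis=1)
    log2_mix = np.log2(np.exp(d - dmax[:, None]).sum(axis=1)) + dmax / np.log(2)
    log2_true = -np.abs(n) ** 2 / noise_var / np.log(2)
    return np.log2(M) - np.mean(log2_mix - log2_true)

def mimo_constrained_capacity_svd(H, constellation, noise_var):
    """Sum per-stream mutual informations of the SVD-equivalent channel."""
    return sum(awgn_mi_discrete(constellation, g, noise_var)
               for g in np.linalg.svd(H, compute_uv=False))

# Toy 2x2 Rayleigh channel with a unit-energy QPSK alphabet per stream
rng = np.random.default_rng(1)
H = (rng.standard_normal((2, 2)) + 1j * rng.standard_normal((2, 2))) / np.sqrt(2)
qpsk = np.array([1 + 1j, 1 - 1j, -1 + 1j, -1 - 1j]) / np.sqrt(2)
print(mimo_constrained_capacity_svd(H, qpsk, noise_var=0.5))
```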

  17. Grain Yield Observations Constrain Cropland CO2 Fluxes Over Europe

    Science.gov (United States)

    Combe, M.; de Wit, A. J. W.; Vilà-Guerau de Arellano, J.; van der Molen, M. K.; Magliulo, V.; Peters, W.

    2017-12-01

    Carbon exchange over croplands plays an important role in the European carbon cycle over daily to seasonal time scales. A better description of this exchange in terrestrial biosphere models—most of which currently treat crops as unmanaged grasslands—is needed to improve atmospheric CO2 simulations. In the framework we present here, we model gross European cropland CO2 fluxes with a crop growth model constrained by grain yield observations. Our approach follows a two-step procedure. In the first step, we calculate day-to-day crop carbon fluxes and pools with the WOrld FOod STudies (WOFOST) model. A scaling factor of crop growth is optimized regionally by minimizing the final grain carbon pool difference to crop yield observations from the Statistical Office of the European Union. In a second step, we re-run our WOFOST model for the full European 25 × 25 km gridded domain using the optimized scaling factors. We combine our optimized crop CO2 fluxes with a simple soil respiration model to obtain the net cropland CO2 exchange. We assess our model's ability to represent cropland CO2 exchange using 40 years of observations at seven European FluxNet sites and compare it with carbon fluxes produced by a typical terrestrial biosphere model. We conclude that our new model framework provides a more realistic and strongly observation-driven estimate of carbon exchange over European croplands. Its products will be made available to the scientific community through the ICOS Carbon Portal and serve as a new cropland component in the CarbonTracker Europe inverse model.
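
    The regional optimization in the first step is essentially a one-dimensional fit of a growth scaling factor per region. The sketch below shows that structure only; the toy growth function stands in for a WOFOST run, and the yield value, forcing series, and function names are hypothetical.

```python
import numpy as np
from scipy.optimize import minimize_scalar

def final_grain_carbon(scaling, forcing):
    """Toy stand-in for a WOFOST-like run: accumulate scaled daily growth
    into a final grain carbon pool (gC m-2)."""
    return (scaling * np.maximum(forcing, 0.0)).sum()

def fit_scaling_factor(observed_yield_carbon, forcing):
    """Step 1: pick the regional scaling factor that minimizes the mismatch
    between the modelled final grain carbon pool and the observed yield."""
    cost = lambda s: (final_grain_carbon(s, forcing) - observed_yield_carbon) ** 2
    return minimize_scalar(cost, bounds=(0.1, 3.0), method="bounded").x

# Hypothetical regional forcing and a yield statistic converted to carbon
forcing = np.random.default_rng(0).gamma(2.0, 2.0, size=200)
scale = fit_scaling_factor(observed_yield_carbon=450.0, forcing=forcing)
print(scale)  # step 2 would re-run the gridded model with this factor
```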

  18. A HARDCORE model for constraining an exoplanet's core size

    Science.gov (United States)

    Suissa, Gabrielle; Chen, Jingjing; Kipping, David

    2018-05-01

    The interior structure of an exoplanet is hidden from direct view yet likely plays a crucial role in influencing the habitability of Earth analogues. Inferences of the interior structure are impeded by a fundamental degeneracy that exists between any model comprising more than two layers and observations constraining just two bulk parameters: mass and radius. In this work, we show that although the inverse problem is indeed degenerate, there exist two boundary conditions that enable one to infer the minimum and maximum core radius fraction, CRFmin and CRFmax. These hold true even for planets with light volatile envelopes, but they require the planet to be fully differentiated and assume that layers denser than iron are forbidden. With both bounds in hand, a marginal CRF can also be inferred by sampling in-between. After validating on the Earth, we apply our method to Kepler-36b and measure CRFmin = (0.50 ± 0.07), CRFmax = (0.78 ± 0.02), and CRFmarg = (0.64 ± 0.11), broadly consistent with the Earth's true CRF value of 0.55. We apply our method to a suite of hypothetical measurements of synthetic planets to serve as a sensitivity analysis. We find that CRFmin and CRFmax have recovered uncertainties proportional to the relative error on the planetary density, but the uncertainty on CRFmarg saturates to between 0.03 and 0.16 once (Δρ/ρ) drops below 1-2 per cent. This implies that mass and radius alone cannot provide any better constraints on internal composition once bulk density constraints reach around one per cent, providing a clear target for observers.
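
    The "sampling in-between" step for the marginal CRF can be illustrated in a few lines of Monte Carlo using the Kepler-36b bounds quoted above. This is only an illustrative sketch, not the authors' code; it treats the two bounds as independent Gaussians and draws the marginal CRF uniformly between them.

```python
import numpy as np

rng = np.random.default_rng(42)
n = 100_000

# Kepler-36b bounds quoted in the abstract (mean, 1-sigma)
crf_min = rng.normal(0.50, 0.07, n)
crf_max = rng.normal(0.78, 0.02, n)

# Marginal CRF: for each draw, sample uniformly between the two bounds
crf_marg = rng.uniform(np.minimum(crf_min, crf_max), crf_max)

print(f"CRF_marg = {crf_marg.mean():.2f} +/- {crf_marg.std():.2f}")
```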

  19. A statistical kinematic source inversion approach based on the QUESO library for uncertainty quantification and prediction

    Science.gov (United States)

    Zielke, Olaf; McDougall, Damon; Mai, Martin; Babuska, Ivo

    2014-05-01

    Seismic data, often augmented with geodetic data, are frequently used to invert for the spatio-temporal evolution of slip along a rupture plane. The resulting images of the slip evolution for a single event, inferred by different research teams, often vary distinctly, depending on the adopted inversion approach and rupture model parameterization. This observation raises the question of which of the provided kinematic source inversion solutions is most reliable and most robust and, more generally, how accurate fault parameterization and solution predictions are. These issues are not addressed by "standard" source inversion approaches. Here, we present a statistical inversion approach to constrain kinematic rupture parameters from teleseismic body waves. The approach is based (a) on a forward-modeling scheme that computes synthetic body waves for a given kinematic rupture model, and (b) on the QUESO (Quantification of Uncertainty for Estimation, Simulation, and Optimization) library, which uses MCMC algorithms and Bayes' theorem for sample selection. We present Bayesian inversions for rupture parameters of synthetic earthquakes (i.e. earthquakes for which the exact rupture history is known) in an attempt to identify the cross-over beyond which further model discretization (spatial and temporal resolution of the parameter space) no longer yields a decreasing misfit. Identifying this cross-over is important because it reveals the resolution power of the studied data set (i.e. teleseismic body waves), enabling one to constrain kinematic rupture histories of real earthquakes at a resolution that is supported by the data. In addition, the Bayesian approach allows complete posterior probability density functions of the desired kinematic source parameters to be mapped, thus enabling us to rigorously assess the uncertainties in earthquake source inversions.
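
    At its core, such a statistical treatment wraps a Metropolis-Hastings loop around the forward model and maps out posterior samples of the rupture parameters. The sketch below uses a trivial linear forward operator in place of the teleseismic body-wave computation and does not use the QUESO library; the prior, step size, and variable names are illustrative.

```python
import numpy as np

def forward(slip, green):
    """Toy forward model: synthetic waveform = Green's-function matrix x slip."""
    return green @ slip

def log_posterior(slip, green, data, sigma):
    """Gaussian likelihood with a flat positivity prior on slip."""
    if np.any(slip < 0):
        return -np.inf
    resid = data - forward(slip, green)
    return -0.5 * np.sum(resid ** 2) / sigma ** 2

def metropolis(green, data, sigma, n_iter=20000, step=0.05, seed=0):
    rng = np.random.default_rng(seed)
    m = np.full(green.shape[1], 0.5)                 # starting model
    logp = log_posterior(m, green, data, sigma)
    samples = []
    for _ in range(n_iter):
        prop = m + step * rng.standard_normal(m.size)
        logp_prop = log_posterior(prop, green, data, sigma)
        if np.log(rng.uniform()) < logp_prop - logp:  # accept/reject
            m, logp = prop, logp_prop
        samples.append(m.copy())
    return np.array(samples)                          # posterior samples

# Synthetic test: known slip, noisy data, then recover the posterior
rng = np.random.default_rng(1)
G = rng.standard_normal((50, 4))
true_slip = np.array([0.2, 0.8, 0.5, 0.1])
d = G @ true_slip + 0.1 * rng.standard_normal(50)
post = metropolis(G, d, sigma=0.1)
print(post[5000:].mean(axis=0), post[5000:].std(axis=0))  # marginal summaries
```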

  20. Treatment of pauli exclusion operator in G-matrix calculations for hypernuclei

    International Nuclear Information System (INIS)

    Kuo, T.T.S.; Hao, Jifa

    1995-01-01

    We discuss a matrix-inversion method for treating the Pauli exclusion operator Q in the hyperon-nucleon G-matrix equation for hypernuclei such as Λ16O. A model space consisting of shell-model wave functions is employed. We argue that it is preferable to employ a free-particle spectrum for the intermediate states of the G matrix. This leads to the difficulty that the G-matrix intermediate states are plane waves, and in this representation the Pauli operator Q has a rather complicated structure. A matrix-inversion method for overcoming this difficulty is examined. To implement this method it is necessary to employ a so-called n 3Λ truncation approximation. Numerical calculations using the Jülich B tilde and A tilde potentials have been performed to study the accuracy of this approximation. (author)
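
    Schematically, once the intermediate states are discretized in some basis, a G-matrix equation of the Bethe-Goldstone type, G = V + V Q/(ω - H0) G, can be closed by a single matrix inversion, G = (I - V Q (ω - H0)^{-1})^{-1} V. The toy sketch below illustrates only this linear-algebra step with small random matrices; it does not reproduce the paper's plane-wave intermediate states, the structure of Q in that representation, or the n 3Λ truncation.

```python
import numpy as np

def g_matrix(V, Q, energies, omega):
    """Solve G = V + V Q (omega - H0)^{-1} G by direct matrix inversion,
    assuming H0 is diagonal in the chosen intermediate-state basis."""
    prop = np.diag(1.0 / (omega - energies))      # (omega - H0)^{-1}
    A = np.eye(len(energies)) - V @ Q @ prop
    return np.linalg.solve(A, V)                  # (I - V Q prop)^{-1} V

# Toy basis: symmetric interaction, projector Q removing Pauli-blocked states
rng = np.random.default_rng(0)
n = 6
V = rng.standard_normal((n, n))
V = 0.5 * (V + V.T)
Q = np.diag([0.0, 0.0, 1.0, 1.0, 1.0, 1.0])       # allowed intermediate states
e = np.linspace(10.0, 60.0, n)                    # intermediate-state energies
print(g_matrix(V, Q, e, omega=-5.0))
```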