REGULARIZED D-BAR METHOD FOR THE INVERSE CONDUCTIVITY PROBLEM
Knudsen, Kim; Lassas, Matti; Mueller, Jennifer;
2009-01-01
A strategy for regularizing the inversion procedure for the two-dimensional D-bar reconstruction algorithm based on the global uniqueness proof of Nachman [Ann. Math. 143 (1996)] for the ill-posed inverse conductivity problem is presented. The strategy utilizes truncation of the boundary integral...
D-bar method for electrical impedance tomography with discontinuous conductivities
Knudsen, Kim; Lassas, Matti; Mueller, Jennifer L.;
The effects of truncating the (approximate) scattering transform in the D-bar reconstruction method for 2-D electrical impedance tomography are studied. The method is based on Nachman's uniqueness proof [Ann. of Math. 143 (1996)] that applies to twice differentiable conductivities. However, the reconstruction algorithm has been successfully applied to experimental data, which can be characterized as piecewise smooth conductivities. The truncation is shown to stabilize the method against measurement noise and to have a smoothing effect on the reconstructed conductivity. Thus the truncation can be interpreted as regularization of the D-bar method. Numerical reconstructions are presented demonstrating that features of discontinuous high contrast conductivities can be recovered using the D-bar method. Further, a new connection between Calderón's linearization method and the D-bar method is established...
Anisotropic Total Variation Regularized L^1-Approximation and Denoising/Deblurring of 2D Bar Codes
Choksi, Rustum; Oberman, Adam
2010-01-01
We consider variations of the Rudin-Osher-Fatemi functional which are particularly well-suited to denoising and deblurring of 2D bar codes. These functionals consist of an anisotropic total variation favoring rectangles and a fidelity term which measures the L^1 distance to the signal, both with and without the presence of a deconvolution operator. Based upon the existence of a certain associated vector field, we find necessary and sufficient conditions for a function to be a minimizer. We apply these results to 2D bar codes to find explicit regimes, in terms of the fidelity parameter and smallest length scale of the bar codes, for which a perfect bar code is recoverable via minimization of the functionals. Via a discretization reformulated as a linear program, we perform numerical experiments for all functionals, demonstrating their denoising and deblurring capabilities.
Iterative sinc-convolution method for solving planar D-bar equation with application to EIT.
Abbasi, Mahdi; Naghsh-Nilchi, Ahmad-Reza
2012-08-01
The numerical solution of D-bar integral equations is the key to the inverse scattering solution of many complex problems in science and engineering, including conductivity imaging. Recently, a couple of methodologies were considered for the numerical solution of the D-bar integral equation, namely product integrals and multigrid. The first involves high computational complexity, while the second suffers from a low convergence rate. In this paper, a new and efficient sinc-convolution algorithm is introduced to solve the two-dimensional D-bar integral equation, overcoming both of these disadvantages and resolving the singularity problem, which had not been tackled effectively before. The sinc-convolution method is based on using collocation to replace multidimensional convolution-form integrals, including the two-dimensional D-bar integral equations, by a system of algebraic equations. Separation of variables in the proposed method eliminates the formulation of huge full matrices and therefore reduces the computational complexity drastically. In addition, the sinc-convolution method converges exponentially, with a convergence rate of O(e^{-cN}). Simulation results on solving a test electrical impedance tomography problem confirm the efficiency of the proposed sinc-convolution-based algorithm. Copyright © 2012 John Wiley & Sons, Ltd.
Imaging cardiac activity by the D-bar method for electrical impedance tomography
Isaacson, D; Mueller, J L; Newell, J C; Siltanen, S
2006-01-01
A practical D-bar algorithm for reconstructing conductivity changes from EIT data taken on electrodes in a 2D geometry is described. The algorithm is based on the global uniqueness proof of Nachman (1996 Ann. Math. 143 71–96) for the 2D inverse conductivity problem. Results are shown for reconstructions from data collected on electrodes placed around the circumference of a human chest to reconstruct a 2D cross-section of the torso. The images show changes in conductivity during a cardiac cycl...
Towards a d-bar reconstruction method for three-dimensional EIT
Cornean, Horia Decebal; Knudsen, Kim
Three-dimensional electrical impedance tomography (EIT) is considered. Both the uniqueness proofs and the theoretical reconstruction algorithms available for this problem rely on the use of exponentially growing solutions to the governing conductivity equation. The study of those solutions is continued here. It is shown that exponentially growing solutions exist for low complex frequencies without imposing any regularity assumption on the conductivity. Further, a reconstruction method for conductivities close to a constant is given. In this method the complex frequency is taken to zero instead...
Quality of regularization methods
Bouwman, J.
1998-01-01
The solution of ill-posed problems is non-trivial in the sense that frequently applied methods like least-squares fail. The ill-posedness of the problem is reflected in the fact that very small changes in the input data may result in very large changes in the output data. Hence, some sort of stabilization
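This amplification of small data errors is easy to demonstrate numerically. The sketch below (my illustration, not from Bouwman's paper) perturbs a classic ill-conditioned system, the Hilbert matrix, and shows how a small Tikhonov term stabilizes the solution; the matrix size, noise level, and regularization parameter are all arbitrary choices:

```python
import numpy as np

# Severely ill-conditioned test system: the 8 x 8 Hilbert matrix.
n = 8
A = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
b = A @ x_true

# A tiny perturbation of the right-hand side...
b_noisy = b + 1e-8 * np.random.default_rng(0).standard_normal(n)

# ...is hugely amplified by a direct solve,
x_naive = np.linalg.solve(A, b_noisy)

# while a small Tikhonov term stabilizes the solution.
lam = 1e-6
x_reg = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b_noisy)

err_naive = np.linalg.norm(x_naive - x_true)
err_reg = np.linalg.norm(x_reg - x_true)
```

The naive solve amplifies the 1e-8 perturbation by roughly the condition number of the matrix (about 1e10 here), while the regularized solve keeps the error modest.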
Fast regularized image interpolation method
Hongchen Liu; Yong Feng; Linjing Li
2007-01-01
The regularized image interpolation method is widely used, based on a vector interpolation model in which the down-sampling matrix has very large dimensions, which entails large storage consumption and high computational complexity. In this paper, a fast algorithm for image interpolation based on the tensor product of matrices is presented, which transforms the vector interpolation model into matrix form. The proposed algorithm greatly reduces the storage requirement and time consumption. The simulation results verify its validity.
Regularization methods in Banach spaces
Schuster, Thomas; Hofmann, Bernd; Kazimierski, Kamil S
2012-01-01
Regularization methods aimed at finding stable approximate solutions are a necessary tool for tackling inverse and ill-posed problems. Usually the mathematical model of an inverse problem consists of an operator equation of the first kind, and often the associated forward operator acts between Hilbert spaces. However, for numerous problems the reasons for using a Hilbert space setting seem to be based on convention rather than on an appropriate and realistic model choice, so often a Banach space setting would be closer to reality. Furthermore, sparsity constraints using general Lp-norms or the B...
A FAST CONVERGENT METHOD OF ITERATED REGULARIZATION
Huang Xiaowei; Wu Chuansheng; Wu Di
2009-01-01
This article presents a fast convergent method of iterated regularization based on the idea of Landweber iterated regularization, together with a method for the a-posteriori choice of the regularization parameter by the Morozov discrepancy principle; the optimal asymptotic convergence order of the regularized solution is obtained. Numerical tests show that the method of iterated regularization can quicken the convergence speed and reduce the computational burden efficiently.
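The underlying ingredients can be sketched in a few lines: classical Landweber iteration stopped by the Morozov discrepancy principle. This is plain Landweber on a toy deblurring problem, not the paper's accelerated variant; the operator, noise level, and the factor tau are my assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy ill-posed problem: a discretized Gaussian blurring operator.
n = 64
t = np.linspace(0, 1, n)
A = np.exp(-((t[:, None] - t[None, :]) ** 2) / (2 * 0.05 ** 2))
A /= A.sum(axis=1, keepdims=True)

x_true = np.sin(2 * np.pi * t)
delta = 1e-3                                # noise level (assumed known)
b = A @ x_true + delta * rng.standard_normal(n)

# Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k), stopped by
# the Morozov discrepancy principle  ||A x_k - b|| <= tau * ||noise||.
omega = 1.0 / np.linalg.norm(A, 2) ** 2
tau = 1.2
discrepancy_target = tau * delta * np.sqrt(n)

x = np.zeros(n)
for k in range(20000):
    r = b - A @ x
    if np.linalg.norm(r) <= discrepancy_target:
        break                               # stop before the noise is fitted
    x = x + omega * (A.T @ r)

rel_err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
```

Stopping when the residual reaches the noise level is what makes the iteration count act as the regularization parameter.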
New Regularization Method in Electrical Impedance Tomography
侯卫东; 莫玉龙
2002-01-01
Image reconstruction in electrical impedance tomography (EIT) is a highly ill-posed inverse problem, and regularization techniques must be used in order to solve it. In this paper, a new regularization method based on spatial filtering theory is proposed. The new regularized reconstruction for EIT is independent of the estimation of the impedance distribution, so it can be implemented more easily than the maximum a posteriori (MAP) method. The regularization level in the proposed method varies spatially so as to suit the correlation character of the object's impedance distribution. We implemented our regularization method with two-dimensional computer simulations. The experimental results indicate that the quality of the reconstructed impedance images with the described regularization method based on spatial filtering theory is better than with the Tikhonov method.
Iterative Regularization with Minimum-Residual Methods
Jensen, Toke Koldborg; Hansen, Per Christian
2007-01-01
We study the regularization properties of iterative minimum-residual methods applied to discrete ill-posed problems. In these methods, the projection onto the underlying Krylov subspace acts as a regularizer, and the emphasis of this work is on the role played by the basis vectors of these Krylov subspaces. We provide a combination of theory and numerical examples, and our analysis confirms the experience that MINRES and MR-II can work as general regularization methods. We also demonstrate theoretically and experimentally that the same is not true, in general, for GMRES and RRGMRES; their success as regularization methods is highly problem dependent.
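The semiconvergence that makes early-stopped MINRES act as a regularizer can be sketched as follows, on a toy symmetric ill-posed problem of my own construction (the spectrum, noise level, and iteration counts are arbitrary choices, and SciPy's minres stands in for the paper's analysis):

```python
import numpy as np
from scipy.sparse.linalg import minres

rng = np.random.default_rng(0)

# Symmetric ill-posed toy problem: orthogonal eigenvectors, decaying spectrum.
n = 100
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = Q @ np.diag(0.9 ** np.arange(n)) @ Q.T

x_true = Q[:, 0] + Q[:, 1]                  # lives in the dominant subspace
b = A @ x_true + 1e-4 * rng.standard_normal(n)

def minres_error(k):
    # Run (at most) k MINRES iterations and measure the solution error.
    x, _ = minres(A, b, maxiter=k)
    return np.linalg.norm(x - x_true)

err_early = minres_error(10)    # early stopping acts as regularization
err_late = minres_error(500)    # near-full convergence fits the noise
```

The first few Krylov basis vectors capture the well-conditioned part of the spectrum, so a handful of iterations recovers the solution; iterating much longer inverts the tiny eigenvalues and amplifies the noise.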
On Comparison of Adaptive Regularization Methods
Sigurdsson, Sigurdur; Larsen, Jan; Hansen, Lars Kai
2000-01-01
... a very flexible regularization may remove the need for selection procedures. This paper investigates recently suggested adaptive regularization schemes. Some methods focus directly on minimizing an estimate of the generalization error (either algebraic or empirical), whereas others start from different criteria, e.g., the Bayesian evidence. The evidence basically expresses the probability of the model, which is conceptually different from the generalization error; however, asymptotically, for large training data sets, they will converge. First the basic model definition, training and generalization...
Regularization and Iterative Methods for Monotone Variational Inequalities
Xiubin Xu
2010-01-01
We provide a general regularization method for monotone variational inequalities, where the regularizer is a Lipschitz continuous and strongly monotone operator. We also introduce an iterative method as a discretization of the regularization method. We prove that both the regularization and iterative methods converge in norm.
Z_c(3900) as a D\\bar{D}^* Molecule from Pole Counting Rule
Gong, Qin-Rong; Meng, Ce; Tang, Guang-Yi; Zheng, Han-Qing
2016-01-01
A careful study of the nature of the Z_c(3900) resonant structure is carried out in this work. By constructing the pertinent effective Lagrangians and considering the important final-state-interaction effects, we first give a unified description of all the relevant experimental data available, including the J/\psi\pi and \pi\pi invariant mass distributions from the e^+e^-\to J/\psi\pi\pi process, the h_c\pi distribution from e^+e^-\to h_c\pi\pi, and the D\bar{D}^{*} spectrum in the e^+e^-\to D\bar{D}^{*}\pi process. After fitting the unknown parameters to these data, we search for poles in the complex energy plane and find only one pole in the nearby energy region across the different Riemann sheets. We therefore conclude that Z_c(3900) is of D\bar{D}^* molecular nature, according to the pole counting rule. We emphasize that this conclusion is not trivial, since both the D\bar{D}^{*} contact interactions and explicit Z_c exchanges are introduced in our analyses and ...
Regularity properties of a class of hybrid methods
Daxue Chen; Aiguo Xiao
2000-01-01
The existence of spurious steady solutions and period-2 solutions under a constant timestep is studied. The concepts of Rill-regularity and R-regularity of a class of hybrid methods for dynamical systems of ordinary differential equations are introduced and studied. Some conditions guaranteeing Rill-regularity and R-regularity of such methods applied to dynamical systems of ordinary differential equations with certain important structures are given.
A Gradient Regularization Method in Crosswell Seismic Tomography
Wang Shoudong
2006-01-01
Crosswell seismic tomography can be used to study the lateral variation of reservoirs, reservoir properties and the dynamic movement of fluids. In view of the instability of crosswell seismic tomography, the gradient method was improved by introducing regularization, and a gradient regularization method is presented in this paper. This method was verified by processing numerical simulation data and physical model data.
Iterative regularization methods for nonlinear ill-posed problems
Scherzer, Otmar; Kaltenbacher, Barbara
2008-01-01
Nonlinear inverse problems appear in many applications, and typically they lead to mathematical models that are ill-posed, i.e., they are unstable under data perturbations. Those problems require a regularization, i.e., a special numerical treatment. This book presents regularization schemes which are based on iteration methods, e.g., nonlinear Landweber iteration, level set methods, multilevel methods and Newton type methods.
A regularized GMRES method for inverse blackbody radiation problem
Wu Jieer
2013-01-01
The inverse blackbody radiation problem is focused on determining the temperature distribution of a blackbody from its measured total radiated power spectrum. This problem requires solving a Fredholm integral equation of the first kind, and many numerical methods have been proposed. In this paper, a regularized GMRES method is presented to solve the linear ill-posed problem caused by the discretization of such an integral equation. This method projects the original problem onto a lower-dimensional subspace by the Arnoldi process. Tikhonov regularization combined with the GCV criterion is applied to stabilize the numerical iteration process. Three numerical examples indicate the effectiveness of the regularized GMRES method.
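The core pipeline described here, an Arnoldi projection followed by Tikhonov regularization of the small projected problem, can be sketched as follows. The smoothing-kernel test problem is my own, and the paper's GCV-based parameter choice is replaced by a fixed lambda for brevity:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy first-kind integral equation: smooth (Lorentzian-like) kernel.
n = 80
t = np.linspace(0, 1, n)
A = (0.05 / ((t[:, None] - t[None, :]) ** 2 + 0.05 ** 2)) / n
x_true = t * (1 - t)
b = A @ x_true + 1e-5 * rng.standard_normal(n)

def arnoldi(A, b, k):
    """Arnoldi process: A V_k = V_{k+1} H with orthonormal columns in V."""
    n = len(b)
    V = np.zeros((n, k + 1))
    H = np.zeros((k + 1, k))
    V[:, 0] = b / np.linalg.norm(b)
    for j in range(k):
        w = A @ V[:, j]
        for i in range(j + 1):          # modified Gram-Schmidt
            H[i, j] = V[:, i] @ w
            w -= H[i, j] * V[:, i]
        H[j + 1, j] = np.linalg.norm(w)
        V[:, j + 1] = w / H[j + 1, j]
    return V, H

# Project onto the k-dimensional Krylov subspace, then solve the *small*
# Tikhonov problem  min ||H y - beta e1||^2 + lam ||y||^2  and lift back.
k, lam = 15, 1e-6
V, H = arnoldi(A, b, k)
rhs = np.zeros(k + 1)
rhs[0] = np.linalg.norm(b)
y = np.linalg.solve(H.T @ H + lam * np.eye(k), H.T @ rhs)
x_rec = V[:, :k] @ y

rel_err = np.linalg.norm(x_rec - x_true) / np.linalg.norm(x_true)
```

The expensive n-dimensional Tikhonov solve is replaced by a (k+1) x k one, which is the point of the Arnoldi projection.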
Regularized Kernel Forms of Minimum Squared Error Method
XU Jian-hua; ZHANG Xue-gong; LI Yan-da
2006-01-01
The minimum squared error (MSE) algorithm is one of the classical pattern recognition and regression analysis methods, whose objective is to minimize the sum of squared errors between the output of a linear function and the desired output. In this paper, the MSE algorithm is modified by using kernel functions satisfying the Mercer condition and a regularization technique; the nonlinear MSE algorithms based on kernels and a regularization term, that is, the regularized kernel forms of the MSE algorithm, are proposed. Their objective functions include the sum of squared errors between the output of a nonlinear function based on kernels and the desired output, plus a proper regularization term. The regularization technique can handle ill-posed problems, reduce the solution space, and control generalization. Three squared regularization terms are utilized in this paper. In accordance with the probabilistic interpretation of regularization terms, the differences among the three regularization terms are given in detail. Synthetic and real data are used to analyze the algorithm performance.
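A linear-algebra flavor of this idea is kernel ridge regression, one of several regularized kernel MSE variants. The RBF kernel, gamma, and lambda below are my assumptions; with the squared-norm regularizer the minimizer has the closed form alpha = (K + lam I)^{-1} y:

```python
import numpy as np

rng = np.random.default_rng(3)

def rbf_kernel(X1, X2, gamma=10.0):
    # Gaussian (RBF) kernel, which satisfies the Mercer condition.
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-gamma * d2)

# Noisy 1-D regression data.
X = rng.uniform(0.0, 1.0, size=(40, 1))
y = np.sin(2 * np.pi * X[:, 0]) + 0.1 * rng.standard_normal(40)

# Regularized kernel MSE: squared-error fit plus a squared-norm penalty
# in the kernel feature space; the minimizer is alpha = (K + lam I)^{-1} y.
lam = 1e-2
K = rbf_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

# Predictions are kernel expansions f(x) = sum_i alpha_i k(x, x_i).
Xt = np.linspace(0.0, 1.0, 100)[:, None]
y_pred = rbf_kernel(Xt, X) @ alpha

rmse = np.sqrt(np.mean((y_pred - np.sin(2 * np.pi * Xt[:, 0])) ** 2))
```

Without the lam term the kernel matrix solve is ill-posed for nearly coincident data points, which is exactly the role of the regularizer discussed in the abstract.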
An adaptive Tikhonov regularization method for fluorescence molecular tomography.
Cao, Xu; Zhang, Bin; Wang, Xin; Liu, Fei; Liu, Ke; Luo, Jianwen; Bai, Jing
2013-08-01
The high degree of absorption and scattering of photons propagating through biological tissues makes fluorescence molecular tomography (FMT) reconstruction a severe ill-posed problem and the reconstructed result is susceptible to noise in the measurements. To obtain a reasonable solution, Tikhonov regularization (TR) is generally employed to solve the inverse problem of FMT. However, with a fixed regularization parameter, the Tikhonov solutions suffer from low resolution. In this work, an adaptive Tikhonov regularization (ATR) method is presented. Considering that large regularization parameters can smoothen the solution with low spatial resolution, while small regularization parameters can sharpen the solution with high level of noise, the ATR method adaptively updates the spatially varying regularization parameters during the iteration process and uses them to penalize the solutions. The ATR method can adequately sharpen the feasible region with fluorescent probes and smoothen the region without fluorescent probes resorting to no complementary priori information. Phantom experiments are performed to verify the feasibility of the proposed method. The results demonstrate that the proposed method can improve the spatial resolution and reduce the noise of FMT reconstruction at the same time.
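The idea of spatially varying regularization parameters can be caricatured on a generic linear inverse problem. This is a toy reweighting heuristic in the spirit of ATR, not the paper's FMT-specific update rule; the problem sizes and penalty values are invented:

```python
import numpy as np

rng = np.random.default_rng(4)

# Underdetermined toy problem with two point-like "fluorescent" targets.
m, n = 60, 100
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = np.zeros(n)
x_true[20], x_true[70] = 1.0, 0.8
b = A @ x_true + 1e-3 * rng.standard_normal(m)

# Fixed-parameter Tikhonov for comparison.
lam0 = 1e-1
x_fixed = np.linalg.solve(A.T @ A + lam0 * np.eye(n), A.T @ b)

# Adaptive variant: re-solve with spatially varying parameters, shrinking
# the penalty where the current solution is large (likely signal) and
# keeping it large where the solution is small (likely background).
lam = np.full(n, lam0)
x = np.zeros(n)
for _ in range(10):
    x = np.linalg.solve(A.T @ A + np.diag(lam), A.T @ b)
    w = np.abs(x) / (np.abs(x).max() + 1e-12)
    lam = lam0 * (1.0 - 0.9 * w)

err_fixed = np.linalg.norm(x_fixed - x_true)
err_adaptive = np.linalg.norm(x - x_true)
```

Lowering the penalty only where signal is detected sharpens the targets without letting background noise through, mirroring the trade-off the abstract describes.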
Lavrentiev regularization method for nonlinear ill-posed problems
Kinh, N V
2002-01-01
In this paper we are concerned with the Lavrentiev regularization method for reconstructing solutions x_0 of nonlinear ill-posed problems F(x) = y_0, where instead of y_0 noisy data y_delta in X with ||y_delta - y_0|| <= delta are given, and F: X -> X is an accretive nonlinear operator from a real reflexive Banach space X into itself. In this regularization method, regularized solutions x_alpha^delta are obtained by solving the singularly perturbed nonlinear operator equation F(x) + alpha(x - x*) = y_delta with some initial guess x*. Assuming certain conditions on the operator F and the smoothness of the element x* - x_0, we derive stability estimates which show that the accuracy of the regularized solutions is order optimal provided that the regularization parameter alpha is chosen properly.
Comparison of Regularized Regression Methods for ~Omics Data
Acharjee, A.; Finkers, H.J.; Visser, R.G.F.; Maliepaard, C.A.
2013-01-01
Background: In this study, we compare methods that can be used to relate a phenotypic trait of interest to an ~omics data set, where the number of variables outnumbers by far the number of samples. Methods: We apply univariate regression and different regularized multiple regression methods: ridge r...
An Improved Traffic Matrix Decomposition Method with Frequency Domain Regularization
Wang, Zhe; Yin, Baolin
2012-01-01
In this letter, we propose a novel network traffic matrix decomposition method named Stable Principal Component Pursuit with Frequency Domain Regularization (SPCP-FDR). SPCP-FDR improves the Stable Principal Component Pursuit (SPCP) method by using a new noise regularization function defined in the frequency domain. Compared with SPCP, SPCP-FDR is more adaptive to the empirical frequency properties of diverse traffic components. An Accelerated Proximal Gradient (APG) algorithm for SPCP-FDR is presented. Our experimental results demonstrate the rationality of this new method.
Nonmonotone Spectral Gradient Method for l_1-regularized Least Squares
Wanyou Cheng
2016-08-01
In this paper, we investigate a linear-constraint optimization reformulation of a more general form of the l_1 regularization problem and give some good properties of it. We first show the equivalence between the linear-constraint optimization problem and the l_1 regularization problem. Second, a KKT point of the linear-constraint problem always exists since the constraints are linear; we show that half of the constraints must be active at any KKT point. In addition, we show that the KKT points of the linear-constraint problem are the same as the stationary points of the l_1 regularization problem. Based on the linear-constraint optimization problem, we propose a nonmonotone spectral gradient method and establish its global convergence. Numerical experiments with compressive sensing problems show that our approach is competitive with several known methods for the standard l_2-l_1 problem.
A REGULARIZATION NEWTON METHOD FOR MIXED COMPLEMENTARITY PROBLEMS
王宜举; 周厚春; 王长钰
2004-01-01
In this paper, a regularization Newton method for the mixed complementarity problem (MCP) based on the reformulation of MCP in [1] is proposed. Its global convergence is proved under the assumption that F is a P_0-function. The main feature of our algorithm is that the existence of an accumulation point need not be assumed a priori for convergence.
Global Optimization methods for Gravitational Lens Systems with Regularized Sources
Rogers, Adam
2012-01-01
Several approaches exist to model gravitational lens systems. In this study, we apply global optimization methods to find the optimal set of lens parameters using a genetic algorithm. We treat the full optimization procedure as a two-step process: an analytical description of the source plane intensity distribution is used to find an initial approximation to the optimal lens parameters. The second stage of the optimization uses a pixelated source plane with the semilinear method to determine an optimal source. Regularization is handled by means of an iterative method and the generalized cross validation (GCV) and unbiased predictive risk estimator (UPRE) functions that are commonly used in standard image deconvolution problems. This approach simultaneously estimates the optimal regularization parameter and the number of degrees of freedom in the source. Using the GCV and UPRE functions we are able to justify an estimation of the number of source degrees of freedom found in previous work. We test our approach ...
A two-way regularization method for MEG source reconstruction
Tian, Tian Siva
2012-09-01
The MEG inverse problem refers to the reconstruction of the neural activity of the brain from magnetoencephalography (MEG) measurements. We propose a two-way regularization (TWR) method to solve the MEG inverse problem under the assumptions that only a small number of locations in space are responsible for the measured signals (focality), and each source time course is smooth in time (smoothness). The focality and smoothness of the reconstructed signals are ensured respectively by imposing a sparsity-inducing penalty and a roughness penalty in the data fitting criterion. A two-stage algorithm is developed for fast computation, where a raw estimate of the source time course is obtained in the first stage and then refined in the second stage by the two-way regularization. The proposed method is shown to be effective on both synthetic and real-world examples. © Institute of Mathematical Statistics, 2012.
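A toy analogue of the two-penalty criterion, sparsity across sources plus temporal smoothness, can be minimized by proximal gradient. This is my simplified single-stage sketch, not the paper's two-stage TWR algorithm; the lead field, source, and penalty values are invented:

```python
import numpy as np

rng = np.random.default_rng(7)

# Toy "MEG-like" setup: 20 sensors, 50 candidate sources, 40 time points.
n_sens, n_src, n_t = 20, 50, 40
G = rng.standard_normal((n_sens, n_src)) / np.sqrt(n_sens)   # toy lead field
S_true = np.zeros((n_src, n_t))
S_true[7] = np.sin(np.pi * np.linspace(0.0, 1.0, n_t))       # one smooth source
B = G @ S_true + 1e-3 * rng.standard_normal((n_sens, n_t))

# Two penalties: l1 across sources (focality) and squared first differences
# in time (smoothness), minimized jointly by proximal gradient.
D = np.diff(np.eye(n_t), axis=0)             # (n_t-1, n_t) difference matrix
lam1, lam2 = 1e-3, 1e-1
Lip = np.linalg.norm(G, 2) ** 2 + lam2 * np.linalg.norm(D, 2) ** 2

S = np.zeros((n_src, n_t))
for _ in range(3000):
    grad = G.T @ (G @ S - B) + lam2 * S @ (D.T @ D)
    Z = S - grad / Lip
    S = np.sign(Z) * np.maximum(np.abs(Z) - lam1 / Lip, 0.0)   # soft threshold

rel_err = np.linalg.norm(S - S_true) / np.linalg.norm(S_true)
```

The l1 term zeros out inactive sources while the roughness term keeps the surviving time course smooth, which is exactly the focality/smoothness pairing the abstract describes.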
A regularization method for extrapolation of solar potential magnetic fields
Gary, G. A.; Musielak, Z. E.
1992-01-01
The mathematical basis of a Tikhonov regularization method for extrapolating the chromospheric-coronal magnetic field using photospheric vector magnetograms is discussed. The basic techniques show that the Cauchy initial value problem can be formulated for potential magnetic fields. The potential field analysis considers a set of linear, elliptic partial differential equations. It is found that, by introducing an appropriate smoothing of the initial data of the Cauchy potential problem, an approximate Fourier integral solution is found, and an upper bound to the error in the solution is derived. This specific regularization technique, which is a function of magnetograph measurement sensitivities, provides a method to extrapolate the potential magnetic field above an active region into the chromosphere and low corona.
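The instability, and the effect of smoothing the Cauchy data, can be illustrated in one dimension: each Fourier mode is amplified by exp(|k| z) under continuation, so cutting off high wavenumbers regularizes the extrapolation. This is a schematic of my own; the cutoff kmax plays the role of the paper's measurement-sensitivity-dependent filter:

```python
import numpy as np

# 1-D caricature of potential-field continuation: mode k is amplified by
# exp(|k| z) in the unstable direction, so unfiltered noise explodes.
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
boundary = np.cos(3.0 * x)                           # "magnetogram" data
data = boundary + 1e-3 * np.random.default_rng(6).standard_normal(n)

k = np.abs(np.fft.fftfreq(n, d=x[1] - x[0])) * 2.0 * np.pi
z = 0.1

def continue_field(d, z, kmax=None):
    dk = np.fft.fft(d)
    growth = np.exp(k * z)                           # unstable amplification
    if kmax is not None:
        growth = np.where(k <= kmax, growth, 0.0)    # smooth the Cauchy data
    return np.real(np.fft.ifft(dk * growth))

exact = np.cos(3.0 * x) * np.exp(3.0 * z)            # continuation of the k=3 mode
naive = continue_field(data, z)                      # noise-dominated
regularized = continue_field(data, z, kmax=10.0)     # high-k cutoff

err_naive = np.linalg.norm(naive - exact)
err_reg = np.linalg.norm(regularized - exact)
```

Even 0.1% noise is blown up by a factor exp(k_max z) in the unfiltered extrapolation, while the filtered version stays close to the exact continuation.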
Regularization method for calibrated POD reduced-order models
El Majd Badr Abou
2014-01-01
In this work we present a regularization method to improve the accuracy of reduced-order models based on Proper Orthogonal Decomposition. The benchmark configuration retained corresponds to a case of relatively simple dynamics: a two-dimensional flow around a cylinder at a Reynolds number of 200. Finally, we show for this flow configuration that this procedure is efficient in terms of error reduction.
An inverse method with regularity condition for transonic airfoil design
Zhu, Ziqiang; Xia, Zhixun; Wu, Liyi
1991-01-01
It is known from Lighthill's exact solution of the incompressible inverse problem that in the inverse design problem, the surface pressure distribution and the free stream speed cannot both be prescribed independently. This implies the existence of a constraint on the prescribed pressure distribution. The same constraint exists at compressible speeds. Presented here is an inverse design method for transonic airfoils. In this method, the target pressure distribution contains a free parameter that is adjusted during the computation to satisfy the regularity condition. Some design results are presented in order to demonstrate the capabilities of the method.
Abbasi Mahdi; Naghsh-Nilchi Ahmad-Reza
2012-01-01
Background: Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods: At the first step, sy...
Singular and Regular Implementations of the Hybrid Boundary Node Method
Anonymous
2007-01-01
The hybrid boundary node method (HdBNM) combines a modified function with the moving least squares approximation to form a boundary-only truly meshless method. This paper describes two implementations of the HdBNM, the singular hybrid boundary node method (ShBNM) and the regular hybrid boundary node method (RhBNM). The ShBNM and RhBNM were compared with each other, and the parameters that influence their performance were studied in detail. The convergence rates and their applicability to thin structures were also investigated. The ShBNM and RhBNM are found to be very easy to implement and to efficiently obtain numerical solutions to computational mechanics problems.
A hybrid splitting method for smoothing Tikhonov regularization problem
Yu-Hua Zeng
2016-02-01
In this paper, a hybrid splitting method is proposed for solving a smoothing Tikhonov regularization problem. At each iteration, the proposed method solves three subproblems. First, two subproblems are solved in parallel, and the multiplier associated with these two block variables is updated in a rapid sequence. Then the third subproblem is solved in an alternating fashion with the former two subproblems. Finally, the multiplier associated with the last two block variables is updated. Global convergence of the proposed method is proven under suitable conditions. Numerical experiments on discrete ill-posed problems (DIPPs) show the validity and efficiency of the proposed hybrid splitting method.
The regularized monotonicity method: detecting irregular indefinite inclusions
Garde, Henrik; Staboulis, Stratos
2017-01-01
In inclusion detection in electrical impedance tomography, the support of perturbations (inclusions) from a known background conductivity is typically reconstructed from idealized continuum data modelled by a Neumann-to-Dirichlet map. Only few reconstruction methods apply when detecting indefinite... of approximative measurement models, including the Complete Electrode Model, hence making the method robust against modelling error and noise. In particular, we demonstrate that for a convergent family of approximative models there exists a sequence of regularization parameters such that the outer shape of the inclusions is asymptotically exactly characterized. Finally, a peeling-type reconstruction algorithm is presented and, for the first time in the literature, numerical examples of monotonicity reconstructions for indefinite inclusions are presented.
Smoothing-Norm Preconditioning for Regularizing Minimum-Residual Methods
Hansen, Per Christian; Jensen, Toke Koldborg
2006-01-01
When GMRES (or a similar minimum-residual algorithm such as RRGMRES, MINRES, or MR-II) is applied to a discrete ill-posed problem with a square matrix, in some cases the iterates can be considered as regularized solutions. We show how to precondition these methods in such a way that the iterations take into account a smoothing norm for the solution. This technique is well established for CGLS, but it does not immediately carry over to minimum-residual methods when the smoothing norm is a seminorm or a Sobolev norm. We develop a new technique which works for any smoothing norm of the form $\|L\,x\|_2$ and which preserves symmetry if the coefficient matrix is symmetric. We also discuss the efficient implementation of our preconditioning technique, and we demonstrate its performance with numerical examples in one and two dimensions.
Exclusive Initial-State-Radiation Production of the D\bar{D}, D^*\bar{D}, and D^*\bar{D}^* Systems
Aubert, B.; Karyotakis, Y.; Lees, J. P.; Poireau, V.; Prencipe, E.; Prudent, X.; Tisserand, V.; et al. (BABAR Collaboration)
2009-06-19
We perform a study of the exclusive production of D{bar D}, D*{bar D}, and D*{bar D}* in initial-state-radiation events, from e{sup +}e{sup -} annihilations at a center-of-mass energy near 10.58 GeV, to search for charmonium and possible new resonances. The data sample corresponds to an integrated luminosity of 384 fb{sup -1} and was recorded by the BABAR experiment at the PEP-II storage rings. The D{bar D}, D*{bar D}, and D*{bar D}* mass spectra show clear evidence of several {psi} resonances. However, there is no evidence for Y(4260) {yields} D*{bar D} or Y(4260) {yields} D*{bar D}*.
The d-bar Neumann problem and Schrödinger operators
Haslinger, Friedrich
2014-01-01
The topic of this book is located at the intersection of complex analysis, operator theory and partial differential equations. First we investigate the canonical solution operator to d-bar restricted to Bergman spaces of holomorphic L2 functions in one and several complex variables. These operators are Hankel operators of special type. In the following we consider the general d-bar-complex and derive properties of the complex Laplacian on L2 spaces of bounded pseudoconvex domains and on weighted L2 spaces. The main part is devoted to compactness of the d-bar-Neumann operator. The last part will
Mu-Synthesis robust control of 3D bar structure vibration using piezo-stack actuators
Mystkowski, Arkadiusz; Koszewnik, Andrzej Piotr
2016-10-01
This paper presents an approach to the Mu-Synthesis robust control of 3D bar structure vibration using piezo-stack actuators. A model of the 3D bar structure with uncertain parameters is presented as multi-input multi-output (MIMO) dynamics. Nominal stability and nominal performance of the open-loop 3D bar structure dynamic model are established. The uncertain model-based robust controller is derived subject to voltage control signal saturation and selected parameter perturbations. The robust control performance and the robustness of the system under the influence of uncertainties are evaluated using singular values and the small gain theorem. Finally, simulation investigations and experimental results show that the system response of the 3D bar structure dynamic model, with the perturbed parameters taken into account, meets the desired robust stability and system limits. The proposed robust controller ensures good dynamics of the closed-loop system, robustness, and vibration attenuation.
Online System Identification Method Using Modified Regularized Exponential Forgetting
Ján VACHÁLEK
2013-12-01
The paper deals with the use of regularized exponential forgetting (REF) in online system identification. This type of forgetting strategy is advantageous for very long runs with small changes in the identified input parameters (in the range of 100 000 steps). In these cases, the classical forgetting methods, such as exponential (EF) or directional forgetting (DF), lack the required quality and reach the limit of numerical stability of the calculation of system parameters, which may lead to early termination of the system identification procedure. To avoid this undesirable effect and maintain sufficient primary information about the identified system, a modified REF method is used that employs an alternative covariance matrix (ACM) formulation to store the primary information of the identified system (REFACM) and prevents numerical destabilization of the identification process. The quality of the modified REFACM forgetting method, along with its validation and comparison with REF to verify its properties, is assessed using standard tests.
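The exponential-forgetting baseline that the modified REFACM method builds on is ordinary recursive least squares with a forgetting factor. A minimal sketch, in Python with illustrative names (the regularized and alternative-covariance-matrix variants modify the covariance update and are not reproduced here):

```python
import numpy as np

def rls_exponential_forgetting(phi_stream, y_stream, n_params, lam=0.99, delta=1e3):
    """Recursive least squares with exponential forgetting factor lam.

    Returns the final parameter estimate theta. lam < 1 discounts old
    data; lam = 1 recovers ordinary RLS without forgetting.
    """
    theta = np.zeros(n_params)
    P = delta * np.eye(n_params)         # covariance matrix, large initial value
    for phi, y in zip(phi_stream, y_stream):
        phi = np.asarray(phi, dtype=float)
        denom = lam + phi @ P @ phi
        K = (P @ phi) / denom            # gain vector
        theta = theta + K * (y - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta
```

The numerical fragility discussed in the abstract arises in exactly this update: over very long runs with weakly exciting inputs, P can grow without bound or lose positive definiteness, which the ACM formulation is designed to prevent.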
Correlations between D and D-bar mesons in high energy photoproduction
Gottschalk, Erik E.; Link, J.; Reyes, M.; Yager, P.M.; Anjos, J.; Bediaga, I.; Gobel, C.; Magnin, J.; Massafferri, A.; Miranda, J.M. de; Pepe, I.M.; Reis, A.C. dos; Carrillo, S.; Casimiro, E.; Cuautle, E.; Sanchez-Hernandez, A.; Uribe, C.; Vasquez, F.; Agostino, L.; Cinquini, L.; Cumalat, J.P.; O' Reilly, B.; Ramirez, J.E.; Segoni, I.; Butler, J.N.; Cheung, H.W.K.; Chiodini, G.; Gaines, I.; Garbincius, P.H.; Garren, L.A.; Gottschalk, E.E.; Kasper, P.H.; Kreymer, A.E.; Kutschke, R.; Benussi, L.; Bianco, S.; Fabbri, F.L.; Zallo, A.; Cawlfield, C.; Kim, D.Y.; Park, K.S.; Rahimi, A.; Wiss, J.; Gardner, R.; Kryemadhi, A.; Chang, K.H.; Chung, Y.S.; Kang, J.S.; Ko, B.R.; Kwak, J.W.; Lee, K.B.; Cho, K.; Park, H.; Alimonti, G.; Barberis, S.; Cerutti, A.; Boschini, M.; D' Angelo, P.; DiCorato, M.; Dini, P.; Edera, L.; Erba, S.; Giammarchi, M.; Inzani, P.; Leveraro, F.; Malvezzi, S.; Menasce, D.; Mezzadri, M.; Moroni, L.; Pedrini, D.; Pontoglio, C.; Prelz, F.; Rovere, M.; Sala, S.; Davenport, T.F.; Arena, V.; Boca, G.; Bonomi, G.; Gianini, G.; Liguori, G.; Merlo, M.M.; Pantea, D.; Ratti, S.P.; Vitulo, P.; Hernandez, H.; Lopez, A.M.; Mendez, H.; Mendez, L.; Montiel, E.; Olaya, D.; Paris, A.; Quinones, J.; Rivera, C.; Xiong, W.; Zhang, Y.; Wilson, J.R.; Handler, T.; Mitchell, R.; Engh, D.; Hosack, M.; Johns, W.E.; Nehring, M.; Sheldon, P.D.; Stenson, K.; Vaadering, E.W.; Webster, M.; Sheaff, M
2003-04-01
Over 7000 events containing a fully reconstructed D D-bar pair have been extracted from data recorded by the FOCUS photoproduction experiment at Fermilab. Preliminary results from a study of correlations between D and D-bar mesons are presented. Correlations are used to study perturbative QCD predictions and investigate non-perturbative effects. We also present a preliminary result on the production of the {psi}(3770).
Total variation regularization for bioluminescence tomography with the split Bregman method.
Feng, Jinchao; Qin, Chenghu; Jia, Kebin; Zhu, Shouping; Liu, Kai; Han, Dong; Yang, Xin; Gao, Quansheng; Tian, Jie
2012-07-01
Regularization methods have been broadly applied to bioluminescence tomography (BLT) to obtain stable solutions, including l2 and l1 regularizations. However, l2 regularization can oversmooth reconstructed images and l1 regularization may oversparsify the source distribution, which degrades image quality. In this paper, the use of total variation (TV) regularization in BLT is investigated. Since a nonnegativity constraint can lead to improved image quality, it should be incorporated in BLT. However, TV regularization with a nonnegativity constraint is extremely difficult to solve due to its nondifferentiability and nonlinearity. The aim of this work is to validate the split Bregman method for minimizing the TV regularization problem with a nonnegativity constraint in BLT. The performance of the split Bregman-resolved TV (SBRTV) based BLT reconstruction algorithm was verified with numerical and in vivo experiments. Experimental results demonstrate that SBRTV regularization provides better reconstruction quality than l2 and l1 regularizations.
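As a rough illustration of the split Bregman idea used here, the following Python sketch applies it to plain 1-D TV denoising, without the nonnegativity constraint or the BLT forward model; parameter names are illustrative, not the authors':

```python
import numpy as np

def shrink(x, gamma):
    """Elementwise soft-thresholding, the closed-form l1 proximal step."""
    return np.sign(x) * np.maximum(np.abs(x) - gamma, 0.0)

def tv_denoise_split_bregman(f, mu=5.0, lam=1.0, n_iter=300):
    """1-D TV denoising, min_u (mu/2)*||u - f||^2 + ||Du||_1,
    via split Bregman with the splitting d = Du."""
    n = f.size
    # forward-difference operator D, shape (n-1, n)
    D = np.zeros((n - 1, n))
    idx = np.arange(n - 1)
    D[idx, idx] = -1.0
    D[idx, idx + 1] = 1.0
    A = mu * np.eye(n) + lam * D.T @ D     # SPD matrix of the u-subproblem
    d = np.zeros(n - 1)
    b = np.zeros(n - 1)
    u = f.copy()
    for _ in range(n_iter):
        # u-subproblem: a linear solve
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        # d-subproblem: elementwise shrinkage
        d = shrink(Du + b, 1.0 / lam)
        # Bregman update feeds back the constraint violation d = Du
        b = b + Du - d
    return u
```

The appeal of the scheme is visible in the loop: the nondifferentiable TV term is isolated in a cheap shrinkage step, while the remaining subproblem is a smooth linear solve.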
$\\bar d - \\bar u$ asymmetry in the proton in chiral effective theory
Salamu, Yusupujiang [Institute of High Energy Physics, CAS, Beijing (China); Ji, Chueng -Ryong [North Carolina State Univ., Raleigh, NC (United States); Melnitchouk, W. [Thomas Jefferson National Accelerator Facility (TJNAF), Newport News, VA (United States); Wang, P. [Institute of High Energy Physics, Beijing (China); Theoretical Physics Center for Science Facilities, CAS, Beijing (China)
2015-03-25
We compute the $\\bar d - \\bar u$ asymmetry in the proton in chiral effective theory, including both nucleon and Δ degrees of freedom, within both relativistic and heavy baryon frameworks. In addition to the distribution at $x>0$, we estimate the correction to the integrated asymmetry arising from zero momentum contributions from pion rainbow and bubble diagrams at $x=0$, which have not been accounted for in previous analyses. In conclusion, we find that the empirical $x$ dependence of $\\bar d - \\bar u$ as well as the integrated asymmetry can be well reproduced in terms of a transverse momentum cutoff parameter.
Possible $D\\bar{D}$ and $B\\bar{B}$ Molecular states in a chiral quark model
Li, M T; Dong, Y B; Zhang, Z Y
2012-01-01
We perform a systematic study of the bound-state problem of the $D\bar{D}$ and $B\bar{B}$ systems using the effective interaction of our chiral quark model. Our results show that the interactions of both the $D\bar{D}$ and $B\bar{B}$ systems are attractive, and consequently give rise to $I^G(J^{PC})=0^+(0^{++})$ $D\bar{D}$ and $B\bar{B}$ bound states.
Anomaly detection in homogenous populations: A sparse multiple kernel-based regularization method
Chen, Tianshi; Andersen, Martin S.; Chiuso, Alessandro;
2014-01-01
A problem of anomaly detection in homogenous populations consisting of linear stable systems is studied. The recently introduced sparse multiple kernel based regularization method is applied to solve the problem. A common problem with the existing regularization methods is that there is no efficient and systematic way to tune the involved regularization parameters. In contrast, the hyper-parameters (some of them can be interpreted as regularization parameters) involved in the proposed method are tuned in an automatic way, and in fact estimated by using the empirical Bayes method. What's more...
Study of the Exclusive Initial State RadiationProduction of the D \\bar D System
Aubert, B.
2006-09-07
A study of exclusive production of the D{bar D} system through initial-state radiation is performed in a search for charmonium states, where D = D{sup 0} or D{sup +}. The D{sup 0} mesons are reconstructed in the D{sup 0} {yields} K{sup -}{pi}{sup +}, D{sup 0} {yields} K{sup -}{pi}{sup +}{pi}{sup 0}, and D{sup 0} {yields} K{sup -}{pi}{sup +}{pi}{sup +}{pi}{sup -} decay modes. The D{sup +} is reconstructed through the D{sup +} {yields} K{sup -}{pi}{sup +}{pi}{sup +} decay mode. The analysis makes use of an integrated luminosity of 288.5 fb{sup -1} collected by the BABAR experiment. The D{bar D} mass spectrum shows a clear {psi}(3770) signal. Further structures appear in the 3.9 and 4.1 GeV/c{sup 2} regions. No evidence is found for Y(4260) decays to D{bar D}, implying an upper limit {Beta}(Y(4260) {yields} D{bar D})/{Beta}(Y(4260) {yields} J/{psi}{pi}{sup +}{pi}{sup -}) < 7.6 (95% confidence level).
Study of the Exclusive Initial-State Radiation Production of the $D \\bar D$ System
Aubert, B; Bóna, M; Boutigny, D; Couderc, F; Karyotakis, Yu; Lees, J P; Poireau, V; Tisserand, V; Zghiche, A; Graugès-Pous, E; Palano, A; Chen, J C; Qi, N D; Rong, G; Wang, P; Zhu, Y S; Eigen, G; Ofte, I; Stugu, B; Abrams, G S; Battaglia, M; Brown, D N; Button-Shafer, J; Cahn, R N; Charles, E; Gill, M S; Groysman, Y; Jacobsen, R G; Kadyk, J A; Kerth, L T; Kolomensky, Yu G; Kukartsev, G; Lynch, G; Mir, L M; Orimoto, T J; Pripstein, M; Roe, N A; Ronan, M T; Wenzel, W A; Del Amo-Sánchez, P; Barrett, M; Ford, K E; Hart, A J; Harrison, T J; Hawkes, C M; Morgan, S E; Watson, A T; Held, T; Koch, H; Lewandowski, B; Pelizaeus, M; Peters, K; Schröder, T; Steinke, M; Boyd, J T; Burke, J P; Cottingham, W N; Walker, D; Asgeirsson, D J; Çuhadar-Dönszelmann, T; Fulsom, B G; Hearty, C; Knecht, N S; Mattison, T S; McKenna, J A; Khan, A; Kyberd, P; Saleem, M; Sherwood, D J; Teodorescu, L; Blinov, V E; Bukin, A D; Druzhinin, V P; Golubev, V B; Onuchin, A P; Serednyakov, S I; Skovpen, Yu I; Solodov, E P; Todyshev, K Yu; Best, D S; Bondioli, M; Bruinsma, M; Chao, M; Curry, S; Eschrich, I; Kirkby, D; Lankford, A J; Lund, P; Mandelkern, M A; Mommsen, R K; Röthel, W; Stoker, D P; Abachi, S; Buchanan, C; Foulkes, S D; Gary, J W; Long, O; Shen, B C; Wang, K; Zhang, L; Hadavand, H K; Hill, E J; Paar, H P; Rahatlou, S; Sharma, V; Berryhill, J W; Campagnari, C; Cunha, A; Dahmes, B; Hong, T M; Kovalskyi, D; Richman, J D; Beck, T W; Eisner, A M; Flacco, C J; Heusch, C A; Kroseberg, J; Lockman, W S; Nesom, G; Schalk, T; Schumm, B A; Seiden, A; Spradlin, P; Williams, D C; Wilson, M G; Albert, J; Chen, E; Dvoretskii, A; Fang, F; Hitlin, D G; Narsky, I; Piatenko, T; Porter, F C; Ryd, A; Samuel, A; Mancinelli, G; Meadows, B T; Mishra, K; Sokoloff, M D; Blanc, F; Bloom, P C; Chen, S; Ford, W T; Hirschauer, J F; Kreisel, A; Nagel, M; Nauenberg, U; Olivas, A; Ruddick, W O; Smith, J G; Ulmer, K A; Wagner, S R; Zhang, J; Chen, A; Eckhart, E A; Soffer, A; Toki, W H; Wilson, R J; Winklmeier, F; Zeng, Q; 
Altenburg, D D; Feltresi, E; Hauke, A; Jasper, H; Merkel, J; Petzold, A; Spaan, B; Brandt, T; Klose, V; Lacker, H M; Mader, W F; Nogowski, R; Schubert, J; Schubert, K R; Schwierz, R; Sundermann, J E; Volk, A; Bernard, D; Bonneaud, G R; Latour, E; Thiebaux, C; Verderi, M; Clark, P J; Gradl, W; Muheim, F; Playfer, S; Robertson, A I; Xie, Y; Andreotti, M; Bettoni, D; Bozzi, C; Calabrese, R; Cibinetto, G; Luppi, E; Negrini, M; Petrella, A; Piemontese, L; Prencipe, E; Anulli, F; Baldini-Ferroli, R; Calcaterra, A; De Sangro, R; Finocchiaro, G; Pacetti, S; Patteri, P; Peruzzi, I M; Piccolo, M; Rama, M; Zallo, A; Buzzo, A; Capra, R; Contri, R; Lo Vetere, M; Macri, M M; Monge, M R; Passaggio, S; Patrignani, C; Robutti, E; Santroni, A; Tosi, S; Brandenburg, G; Chaisanguanthum, K S; Morii, M; Wu, J; Dubitzky, R S; Marks, J; Schenk, S; Uwer, U; Bard, D; Bhimji, W; Bowerman, D A; Dauncey, P D; Egede, U; Flack, R L; Nash, J A; Nikolich, M B; Panduro-Vazquez, W; Behera, P K; Chai, X; Charles, M J; Mallik, U; Meyer, N T; Ziegler, V; Cochran, J; Crawley, H B; Dong, L; Eyges, V; Meyer, W T; Prell, S; Rosenberg, E I; Rubin, A E; Gritsan, A V; Denig, A G; Fritsch, M; Schott, G; Arnaud, N; Davier, M; Grosdidier, G; Höcker, A; Le Diberder, F R; Lepeltier, V; Lutz, A M; Oyanguren, A; Pruvot, S; Rodier, S; Roudeau, P; Schune, M H; Stocchi, A; Wang, W F; Wormser, G; Cheng, C H; Lange, D J; Wright, D M; Chavez, C A; Forster, I J; Fry, J R; Gabathuler, E; Gamet, R; George, K A; Hutchcroft, D E; Payne, D J; Schofield, K C; Touramanis, C; Bevan, A J; Di Lodovico, F; Menges, W; Sacco, R; Cowan, G; Flächer, H U; Hopkins, D A; Jackson, P S; McMahon, T R; Ricciardi, S; Salvatore, F; Wren, A C; Davis, C L; Allison, J; Barlow, N R; Barlow, R J; Chia, Y M; Edgar, C L; Lafferty, G D; Naisbit, M T; Williams, J C; Yi, J I; Chen, C; Hulsbergen, W D; Jawahery, A; Lae, C K; Roberts, D A; Simi, G; Blaylock, G; Dallapiccola, C; Hertzbach, S S; Li, X; Moore, T B; Saremi, S; Stängle, H; Cowan, R; Sciolla, G; 
Sekula, S J; Spitznagel, M; Taylor, F; Yamamoto, R K; Kim, H; Mclachlin, S E; Patel, P M; Robertson, S H; Lazzaro, A; Lombardo, V; Palombo, F; Bauer, J M; Cremaldi, L; Eschenburg, V; Godang, R; Kroeger, R; Sanders, D A; Summers, D J; Zhao, H W; Brunet, S; Côté, D; Simard, M; Taras, P; Viaud, F B; Nicholson, H; Cavallo, N; De Nardo, Gallieno; Fabozzi, F; Gatto, C; Lista, L; Monorchio, D; Paolucci, P; Piccolo, D; Sciacca, C; Baak, M A; Raven, G; Snoek, H L; Jessop, C P; LoSecco, J M; Allmendinger, T; Benelli, G; Corwin, L A; Gan, K K; Honscheid, K; Hufnagel, D; Jackson, P D; Kagan, H; Kass, R; Rahimi, A M; Regensburger, J J; Ter-Antonian, R; Wong, Q K; Blount, N L; Brau, J E; Frey, R; Igonkina, O; Kolb, J A; Lu, M; Rahmat, R; Sinev, N B; Strom, D; Strube, J; Torrence, E; Gaz, A; Margoni, M; Morandin, M; Pompili, A; Posocco, M; Rotondo, M; Simonetto, F; Stroili, R; Voci, C; Benayoun, M; Briand, H; Chauveau, J; David, P; Del Buono, L; La Vaissière, C de; Hamon, O; Hartfiel, B L; John, M J J; Leruste, P; Malcles, J; Ocariz, J; Roos, L; Therin, G; Gladney, L; Panetta, J; Biasini, M; Covarelli, R; Angelini, C; Batignani, G; Bettarini, S; Bucci, F; Calderini, G; Carpinelli, M; Cenci, R; Forti, F; Giorgi, M A; Lusiani, A; Marchiori, G; Mazur, M A; Morganti, M; Neri, N; Rizzo, G; Walsh, J J; Haire, M; Judd, D; Wagoner, D E; Biesiada, J; Danielson, N; Elmer, P; Lau, Y P; Lü, C; Olsen, J; Smith, A J S; Telnov, A V; Bellini, F; Cavoto, G; D'Orazio, A; Del Re, D; Di Marco, E; Faccini, R; Ferrarotto, F; Ferroni, F; Gaspero, M; Li Gioi, L; Mazzoni, M A; Morganti, S; Piredda, G; Polci, F; Safai-Tehrani, F; Voena, C; Ebert, M; Schröder, H; Waldi, R; Adye, T; De Groot, N; Franek, B; Olaiya, E O; Wilson, F F; Aleksan, R; Emery, S; Gaidot, A; Ganzhur, S F; Hamel de Monchenault, G; Kozanecki, Witold; Legendre, M; Vasseur, G; Yéche, C; Zito, M; Chen, X R; Liu, H; Park, W; Purohit, M V; Wilson, J R; Allen, M T; Aston, D; Bartoldus, R; Bechtle, P; Berger, N; Claus, R; Coleman, J P; 
Convery, M R; Cristinziani, M; Dingfelder, J C; Dorfan, J; Dubois-Felsmann, G P; Dujmic, D; Dunwoodie, W M; Field, R C; Glanzman, T; Gowdy, S J; Graham, M T; Grenier, P; Halyo, V; Hast, C; Hrynóva, T; Innes, W R; Kelsey, M H; Kim, P; Leith, D W G S; Li, S; Luitz, S; Lüth, V; Lynch, H L; MacFarlane, D B; Marsiske, H; Messner, R; Müller, D R; O'Grady, C P; Ozcan, V E; Perazzo, A; Perl, M; Pulliam, T; Ratcliff, B N; Roodman, A; Salnikov, A A; Schindler, R H; Schwiening, J; Snyder, A; Stelzer, J; Su, D; Sullivan, M K; Suzuki, K; Swain, S K; Thompson, J M; Vavra, J; van, N; Bakel; Weaver, M; Weinstein, A J R; Wisniewski, W J; Wittgen, M; Wright, D H; Yarritu, A K; Yi, K; Young, C C; Burchat, P R; Edwards, A J; Majewski, S A; Petersen, B A; Roat, C; Wilden, L; Ahmed, S; Alam, M S; Bula, R; Ernst, J A; Jain, V; Pan, B; Saeed, M A; Wappler, F R; Zain, S B; Bugg, W; Krishnamurthy, M; Spanier, S M; Eckmann, R; Ritchie, J L; Satpathy, A; Schilling, C J; Schwitters, R F; Izen, J M; Lou, X C; Ye, S; Bianchi, F; Gallo, F; Gamba, D; Bomben, M; Bosisio, L; Cartaro, C; Cossutti, F; Della Ricca, G; Dittongo, S; Lanceri, L; Vitale, L; Azzolini, V; Lopez-March, N; Martínez-Vidal, F; Banerjee, Sw; Bhuyan, B; Brown, C M; Fortin, D; Hamano, K; Kowalewski, R V; Nugent, I M; Roney, J M; Sobie, R J; Back, J J; Harrison, P F; Latham, T E; Mohanty, G B; Pappagallo, M; Band, H R; Chen, X; Cheng, B; Dasu, S; Datta, M; Flood, K T; Hollar, J J; Kutter, P E; Mellado, B; Mihályi, A; Pan, Y; Pierini, M; Prepost, R; Wu, S L; Yu, Z; Neal, H
2006-01-01
A study of exclusive production of the $D \bar D$ system through initial-state radiation is performed in a search for charmonium states, where $D=D^0$ or $D^+$. The $D^0$ mesons are reconstructed in the $D^0 \to K^- \pi^+$, $D^0 \to K^- \pi^+ \pi^0$, and $D^0 \to K^- \pi^+ \pi^+ \pi^-$ decay modes. The $D^+$ is reconstructed through the $D^+ \to K^- \pi^+ \pi^+$ decay mode. The analysis makes use of an integrated luminosity of 288.5 fb$^{-1}$ collected by the BaBar experiment. The $D \bar D$ mass spectrum shows a clear $\psi(3770)$ signal. Further structures appear in the 3.9 and 4.1 GeV/$c^2$ regions. No evidence is found for Y(4260) decays to $D \bar D$, implying an upper limit $\mathcal{B}(Y(4260)\to D \bar D)/\mathcal{B}(Y(4260)\to J/\psi\,\pi^+\pi^-) < 7.6$ (95% confidence level).
Wang, Qi; Wang, Huaxiang; Zhang, Ronghua; Wang, Jinhai; Zheng, Yu; Cui, Ziqiang; Yang, Chengyi
2012-10-01
Electrical impedance tomography (EIT) is a technique for reconstructing the conductivity distribution by injecting currents at the boundary of a subject and measuring the resulting changes in voltage. Image reconstruction in EIT is a nonlinear and ill-posed inverse problem. The Tikhonov method with L(2) regularization is commonly used to solve the EIT problem. However, the L(2) method tends to smooth the sharp changes or discontinuous areas of the reconstruction. Image reconstruction using L(1) regularization allows addressing this difficulty. In this paper, a sum of absolute values is substituted for the sum of squares used in the L(2) regularization to form the L(1) regularization, and the solution is obtained by the barrier method. However, the L(1) method often involves repeatedly solving large-dimensional matrix equations, which is computationally expensive. In this paper, the projection method is combined with the L(1) regularization method to reduce the computational cost: the L(1) problem is mainly solved in the coarse subspace. This paper also discusses strategies for choosing the parameters. Both simulation and experimental results of the L(1) regularization method were compared with the L(2) regularization method, indicating that the L(1) regularization method can improve the quality of image reconstruction and tolerate a relatively high level of noise in the measured voltages. Furthermore, the projected L(1) method can also effectively reduce the computational time without affecting the quality of reconstructed images.
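The contrast between the two penalties can be sketched on a generic linear inverse problem. In this illustrative Python snippet, ISTA (proximal gradient) stands in for the paper's barrier method, and none of the names come from the paper:

```python
import numpy as np

def tikhonov(A, y, alpha):
    """L2 (Tikhonov): closed-form solution of min ||Ax - y||^2 + alpha*||x||_2^2."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ y)

def l1_ista(A, y, alpha, n_iter=1000):
    """L1 regularization: min ||Ax - y||^2 + alpha*||x||_1 via ISTA,
    a simple stand-in for the barrier method used in the paper."""
    L = 2.0 * np.linalg.norm(A, 2) ** 2      # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        g = 2.0 * A.T @ (A @ x - y)          # gradient of the data-fidelity term
        z = x - g / L                        # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - alpha / L, 0.0)  # soft threshold
    return x
```

The L(2) solution is a single linear solve but spreads energy over all coefficients, which is the smoothing effect the abstract describes; the L(1) iteration thresholds small coefficients to exactly zero, which is what preserves sharp transitions.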
Deconvolution methods based on φHL regularization for spectral recovery.
Zhu, Hu; Deng, Lizhen; Bai, Xiaodong; Li, Meng; Cheng, Zhao
2015-05-10
Recorded spectra often suffer from noise and band overlapping, and deconvolution methods are commonly used for spectral recovery. However, during spectral recovery the details cannot always be preserved. To solve this problem, two regularization terms are introduced and analyzed. First, the conditions on the regularization term for smoothing noise while preserving detail are analyzed, and according to these conditions, φHL regularization is introduced into the spectral deconvolution model. In view of the deficiency of φHL under noisy conditions, adaptive φHL regularization (φAHL) is proposed. Then semi-blind deconvolution methods based on φHL regularization (SBD-HL) and on adaptive φHL regularization (SBD-AHL) are proposed, respectively. The simulation results indicate that the proposed SBD-HL and SBD-AHL methods achieve better recovery, and that SBD-AHL is superior to SBD-HL, especially in the noisy case.
Manzini, Gianmarco [Los Alamos National Laboratory
2012-07-13
We develop and analyze a new family of virtual element methods on unstructured polygonal meshes for the diffusion problem in primal form, that use arbitrarily regular discrete spaces V{sub h} {contained_in} C{sup {alpha}}, {alpha} {element_of} N. The degrees of freedom are (a) solution and derivative values of various degree at suitable nodes and (b) solution moments inside polygons. The convergence of the method is proven theoretically and an optimal error estimate is derived. The connection with the Mimetic Finite Difference method is also discussed. Numerical experiments confirm the convergence rate that is expected from the theory.
Regularization method with two parameters for nonlinear ill-posed problems
2008-01-01
This paper is devoted to the regularization of a class of nonlinear ill-posed problems in Banach spaces. The operators involved are multi-valued and the data are assumed to be known approximately. Under the assumption that the original problem is solvable, a strongly convergent approximation procedure is designed by means of the Tikhonov regularization method with two parameters.
Regularization methods for a class of variational inequalities in banach spaces
Buong, Nguyen; Phuong, Nguyen Thi Hong
2012-11-01
In this paper, we introduce two regularization methods, based on the Browder-Tikhonov and iterative regularizations, for finding a solution of variational inequalities over the set of common fixed points of an infinite family of nonexpansive mappings on real reflexive and strictly convex Banach spaces with a uniformly Gateaux differentiable norm.
X(5568) as a {su}\\bar{d}\\bar{b} tetraquark in a simple quark model
Stancu, Fl
2016-10-01
The S-wave eigenstates of tetraquarks of type {su}\bar{d}\bar{b} with J^P = 0^+, 1^+ and 2^+ are studied within a simple quark model with chromomagnetic interaction and effective quark masses extracted from meson and baryon spectra. It is tempting to see whether this spectrum can accommodate the new narrow structure X(5568), observed by the DØ Collaboration but not confirmed by the LHCb Collaboration. If it exists, such a tetraquark is a system with four different flavors, and its study can improve our understanding of multiquark systems. The presently calculated mass of X(5568) agrees quite well with the experimental value of the DØ Collaboration. Predictions are also made for the spectrum of the charmed partner {su}\bar{d}\bar{c}. However, we are aware of the difficulty of extracting effective quark masses from mesons and baryons to be used in multiquark systems.
Assessment of Tikhonov-type regularization methods for solving atmospheric inverse problems
Xu, Jian; Schreier, Franz; Doicu, Adrian; Trautmann, Thomas
2016-11-01
Inverse problems occurring in atmospheric science aim to estimate state parameters (e.g. temperature or constituent concentration) from observations. To cope with nonlinear ill-posed problems, both direct and iterative Tikhonov-type regularization methods can be used. The major challenge in the framework of direct Tikhonov regularization (TR) concerns the choice of the regularization parameter λ, while iterative regularization methods require an appropriate stopping rule and a flexible λ-sequence. In the framework of TR, a suitable value of the regularization parameter can be generally determined based on a priori, a posteriori, and error-free selection rules. In this study, five practical regularization parameter selection methods, i.e. the expected error estimation (EEE), the discrepancy principle (DP), the generalized cross-validation (GCV), the maximum likelihood estimation (MLE), and the L-curve (LC), have been assessed. As a representative of iterative methods, the iteratively regularized Gauss-Newton (IRGN) algorithm has been compared with TR. This algorithm uses a monotonically decreasing λ-sequence and DP as an a posteriori stopping criterion. Practical implementations pertaining to retrievals of vertically distributed temperature and trace gas profiles from synthetic microwave emission measurements and from real far infrared data, respectively, have been conducted. Our numerical analysis demonstrates that none of the parameter selection methods dedicated to TR appear to be perfect and each has its own advantages and disadvantages. Alternatively, IRGN is capable of producing plausible retrieval results, allowing a more efficient manner for estimating λ.
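Of the selection rules compared above, the discrepancy principle is the simplest to state: among candidate values of λ, take the smallest one whose Tikhonov residual reaches the estimated noise level. A minimal SVD-based sketch in Python (illustrative, not the authors' implementation):

```python
import numpy as np

def discrepancy_lambda(A, y, delta, lambdas, tau=1.1):
    """Morozov discrepancy principle for Tikhonov regularization
    min ||Ax - y||^2 + lam*||x||^2: return the smallest candidate lam
    whose residual ||A x_lam - y|| first reaches tau * delta, where
    delta estimates the noise norm. Residuals are evaluated cheaply
    through the SVD filter factors f_i = s_i^2 / (s_i^2 + lam)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ y
    extra = max(y @ y - beta @ beta, 0.0)   # part of y outside range(A)
    for lam in sorted(lambdas):
        f = s**2 / (s**2 + lam)
        resid = np.sqrt(np.sum(((1.0 - f) * beta) ** 2) + extra)
        if resid >= tau * delta:
            return lam
    return max(lambdas)
```

Because the residual grows monotonically in λ, the rule is well defined; its practical weakness, noted in the study, is that it needs a reliable noise estimate delta.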
On the use of nonlinear regularization in inverse method for the tachocline profile determination
Corbard, T; Provost, J P; Blanc-Féraud, L
1998-01-01
Inversions of rotational splittings have shown that the surface layers and the so-called solar tachocline at the base of the convection zone are regions in which high radial gradients of the rotation rate occur. The usual regularization methods tend to smooth out any high gradient in the solution and may not be appropriate for the study of a zone like the tachocline. In this paper we use nonlinear regularization methods that were developed for edge-preserving regularization in computed imaging (e.g. Blanc-Féraud et al. 1995) and we apply them in the helioseismic context of rotational inversions.
Elasticity imaging for regularly spaced structures utilizing WT matched filtering method
(author not listed)
2002-01-01
Based on the wavelet transform in the time-scale domain, a new strain estimation method is presented to locate the regular scatterers, calculate the local scatterer spacing and its change, and estimate the internal strain distribution of a tissue-mimicking phantom. Simulation and experimental results for uniform and nonuniform phantoms show that the internal strain of regularly spaced structures can be estimated accurately using this method, and that the influence of the global boundary condition on the estimated strain distribution can be eliminated by reconstructing the real elasticity distribution. This approach has the potential to become a valuable tool for regularly spaced structures.
Jinping Tang
2017-01-01
Optical tomography is an emerging and important molecular imaging modality. The aim of optical tomography is to reconstruct the optical properties of human tissues. In this paper, we focus on reconstructing the absorption coefficient based on the radiative transfer equation (RTE). It is an ill-posed parameter identification problem. Regularization methods have been broadly applied to reconstruct the optical coefficients, such as the total variation (TV) regularization and the L1 regularization. In order to better reconstruct the piecewise constant and sparse coefficient distributions, TV and L1 norms are combined as the regularization. The forward problem is discretized with the discontinuous Galerkin method on the spatial space and the finite element method on the angular space. The minimization problem is solved by a Jacobian-based Levenberg-Marquardt type method equipped with a split Bregman algorithm for the L1 regularization. We use the adjoint method to compute the Jacobian matrix, which dramatically improves the computational efficiency. By comparison with other imaging reconstruction methods based on TV and L1 regularizations, the simulation results show the validity and efficiency of the proposed method.
Zhong-Zhi; Yu-Mei; K.
2010-01-01
Image restoration is often solved by minimizing an energy function consisting of a data-fidelity term and a regularization term. A convex regularization term can usually preserve the image edges well in the restored image. In this paper, we consider a class of convex and edge-preserving regularization functions, i.e., multiplicative half-quadratic regularizations, and we use the Newton method to solve the correspondingly reduced systems of nonlinear equations. At each Newton iterate, the preconditioned conjugate gradient method, incorporated with a constraint preconditioner, is employed to solve the structured Newton equation that has a symmetric positive definite coefficient matrix. The eigenvalue bounds of the preconditioned matrix are deliberately derived, which can be used to estimate the convergence speed of the preconditioned conjugate gradient method. We use experimental results to demonstrate that this new approach is efficient, and the effect of image restoration is reasonably good.
Ahmad, Munir; Shahzad, Tasawar; Masood, Khalid; Rashid, Khalid; Tanveer, Muhammad; Iqbal, Rabail; Hussain, Nasir; Shahid, Abubakar; Fazal-E-Aleem
2016-06-01
Emission tomographic image reconstruction is an ill-posed problem due to limited and noisy data and various image-degrading effects affecting the data, and it leads to noisy reconstructions. Explicit regularization, through iterative reconstruction methods, is considered better suited to compensate for reconstruction-based noise. Local smoothing and edge-preserving regularization methods can reduce reconstruction-based noise. However, these methods produce overly smoothed images or blocky artefacts in the final image because they can only exploit local image properties. Recently, non-local regularization techniques have been introduced to overcome these problems by incorporating the geometrical global continuity and connectivity present in the objective image. These techniques can overcome the drawbacks of local regularization methods; however, they also have certain limitations, such as the choice of the regularization function, the neighbourhood size, or the calibration of the several empirical parameters involved. This work compares different local and non-local regularization techniques used in emission tomographic imaging in general and emission computed tomography in particular for improved quality of the resultant images.
Regularizing the molecular potential in electronic structure calculations. II. Many-body methods
Bischoff, Florian A., E-mail: florian.bischoff@hu-berlin.de [Institut für Chemie, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin (Germany)
2014-11-14
In Paper I of this series [F. A. Bischoff, “Regularizing the molecular potential in electronic structure calculations. I. SCF methods,” J. Chem. Phys. 141, 184105 (2014)] a regularized molecular Hamilton operator for electronic structure calculations was derived and its properties in SCF calculations were studied. The regularization was achieved using a correlation factor that models the electron-nuclear cusp. In the present study we extend the regularization to correlated methods, in particular the exact solution of the two-electron problem, as well as second-order many body perturbation theory. The nuclear and electronic correlation factors lead to computations with a smaller memory footprint because the singularities are removed from the working equations, which allows coarser grid resolution while maintaining the precision. Numerical examples are given.
A Certain Regular Property of the Method I Construction and Packing Measure
Sheng You WEN
2007-01-01
Let τ be a premeasure on a complete separable metric space and let τ* be the Method I measure constructed from τ. We give conditions on τ under which τ* has the following regularity: the measure of every τ*-measurable set is the supremum of the premeasures of its compact subsets. We then prove that the packing measure has this regularity if and only if the corresponding packing premeasure is locally finite.
None
2007-01-01
In particle sizing by the light extinction method, the regularization parameter plays an important role in applying regularization to find the solution of the ill-posed inverse problem. We combine the generalized cross-validation (GCV) and L-curve criteria with the Twomey-NNLS algorithm for parameter optimization. Numerical simulation and experimental validation show that the newly developed algorithms are more resistant to measurement errors, leading to stable inversion results for unimodal particle size distributions.
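As an illustration of the kind of parameter-choice rule the entry above combines with Twomey-NNLS, here is a minimal sketch of Tikhonov regularization with the GCV criterion. This is not the paper's code; the SVD-based GCV formulas are standard, and the test matrix and noise level are made up.

```python
import numpy as np

def gcv_tikhonov(A, b, lambdas):
    """Pick the Tikhonov parameter minimizing the GCV function
    for min ||Ax - b||^2 + lam^2 ||x||^2, via the SVD of A."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    m = A.shape[0]
    best_lam, best_g = lambdas[0], np.inf
    for lam in lambdas:
        f = s**2 / (s**2 + lam**2)             # Tikhonov filter factors
        resid = np.sum(((1 - f) * beta) ** 2)  # squared residual norm
        trace = m - np.sum(f)                  # effective degrees of freedom
        g = resid / trace**2
        if g < best_g:
            best_lam, best_g = lam, g
    return best_lam

# made-up ill-conditioned test problem
n = 8
A = np.vander(np.linspace(0.0, 1.0, n), n)
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = A @ x_true + 1e-3 * rng.standard_normal(n)
lam = gcv_tikhonov(A, b, np.logspace(-8, 0, 50))
x_reg = np.linalg.solve(A.T @ A + lam**2 * np.eye(n), A.T @ b)
```

The L-curve criterion mentioned in the abstract chooses the parameter geometrically instead, from the corner of the residual-norm versus solution-norm curve.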
Constantin Bota
2014-01-01
The paper presents the optimal homotopy perturbation method, a new method for finding approximate analytical solutions of nonlinear partial differential equations. Based on the well-known homotopy perturbation method, the optimal homotopy perturbation method converges faster than the regular homotopy perturbation method. The applications presented emphasize the high accuracy of the method by means of a comparison with previous results.
A REGULARIZED CONJUGATE GRADIENT METHOD FOR SYMMETRIC POSITIVE DEFINITE SYSTEM OF LINEAR EQUATIONS
Zhong-zhi Bai; Shao-liang Zhang
2002-01-01
A class of regularized conjugate gradient methods is presented for solving large sparse systems of linear equations whose coefficient matrix is an ill-conditioned symmetric positive definite matrix. The convergence properties of these methods are discussed in depth, and the best possible choices of the parameters involved in the new methods are investigated in detail. Numerical computations show that the new methods are more efficient and robust than both classical relaxation methods and classical conjugate direction methods.
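A minimal sketch of one way to realize this idea, assuming a plain Tikhonov shift rather than the authors' parameterized class of methods: conjugate gradients applied to the shifted SPD system (A + μI)x = b, where the shift tames the ill-conditioning. The Hilbert-matrix test problem is illustrative.

```python
import numpy as np

def cg(A, b, tol=1e-10, maxiter=1000):
    """Plain conjugate gradients for a symmetric positive definite A."""
    x = np.zeros_like(b)
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    for _ in range(maxiter):
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Hilbert matrix: SPD and notoriously ill-conditioned (made-up example)
n = 50
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
b = H @ np.ones(n)
mu = 1e-6                       # illustrative regularization shift
x = cg(H + mu * np.eye(n), b)
```

The shift clusters the tiny eigenvalues near μ, so CG converges in a modest number of iterations despite the original matrix's enormous condition number.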
Wu, Wei; Fan, Qinwei; Zurada, Jacek M; Wang, Jian; Yang, Dakun; Liu, Yan
2014-02-01
The aim of this paper is to develop a novel method to prune feedforward neural networks by introducing an L1/2 regularization term into the error function. This procedure forces weights to become smaller during training so that they can eventually be removed after training. The usual L1/2 regularization term involves absolute values and is not differentiable at the origin, which typically causes oscillation of the gradient of the error function during training. A key point of this paper is to modify the usual L1/2 regularization term by smoothing it at the origin. This approach offers three advantages: first, it removes the oscillation of the gradient value; secondly, it gives better pruning, in that the final weights to be removed are smaller than those produced through the usual L1/2 regularization; thirdly, it makes it possible to prove the convergence of the training. Supporting numerical examples are also provided.
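The smoothing idea above can be sketched as follows. The matching constant a and the polynomial used near the origin are illustrative choices, not the authors' exact formula; the point is that |w|^(1/2) has an unbounded derivative at 0, and a smooth stand-in with matched value and slope at |w| = a keeps the penalty gradient bounded.

```python
import numpy as np

def smoothed_l12(w, a=0.1):
    """|w|^(1/2) away from 0; for |w| < a, a polynomial in t = |w| that
    matches value and slope at t = a, so the gradient stays bounded."""
    w = np.asarray(w, dtype=float)
    out = np.sqrt(np.abs(w))
    small = np.abs(w) < a
    t = np.abs(w[small])
    # p(t) = -t^2/(8 a^{3/2}) + 3t/(4 sqrt(a)) + 3 sqrt(a)/8
    # satisfies p(a) = sqrt(a), p'(a) = 1/(2 sqrt(a)), and p'(0) finite
    out[small] = -t**2 / (8 * a**1.5) + 3 * t / (4 * np.sqrt(a)) + 3 * np.sqrt(a) / 8
    return out
```

Outside the smoothing window the penalty is unchanged, so the sparsity-inducing behavior of L1/2 regularization is preserved while the gradient oscillation at the origin disappears.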
Chen, Xueli; Yang, Defu; Zhang, Qitan; Liang, Jimin
2014-05-01
Even though bioluminescence tomography (BLT) exhibits significant potential and wide applications in macroscopic in vivo imaging of small animals, the inverse reconstruction is still a difficult problem that has long challenged researchers in the area. The ill-posedness of the inverse reconstruction arises from insufficient measurements and modeling errors, so the reconstruction cannot be solved directly. In this study, an l1/2 regularization based numerical method was developed for effective reconstruction of BLT. In the method, the inverse reconstruction of BLT was cast as an l1/2 regularization problem, and the weighted interior-point algorithm (WIPA) was then applied to solve it by transforming it into a sequence of l1-regularized subproblems. The feasibility and effectiveness of the proposed method were demonstrated with numerical simulations on a digital mouse. Stability verification experiments further illustrated the robustness of the proposed method for different levels of Gaussian noise.
Limited-view ultrasonic guided wave tomography using an adaptive regularization method
Rao, Jing; Ratassepp, Madis; Fan, Zheng
2016-11-01
Ultrasonic guided waves are useful for assessing the integrity of a structure from a remote location. Recently, tomography techniques have been developed to quantitatively estimate the thickness map of plate-like structures based on the dispersion characteristics of guided waves. In many applications only limited locations are available for placing transducers, and the missing viewing angles lead to artifacts which can degrade the image quality. To address this problem, this paper applies a regularization method to synthesize the missing components; the regularization is performed via an adaptive thresholding approach for the limited-view reconstruction. The effectiveness of this method combined with the full waveform inversion method is demonstrated by numerical simulations as well as experiments on an irregularly shaped defect and two flat-bottom defects. The results indicate that the additional components obtained from the regularization method can significantly reduce the artifacts, leading to better reconstruction accuracy.
Fairouz Zouyed
2015-01-01
This paper discusses the inverse problem of determining an unknown source in a second-order differential equation from measured final data. This problem is ill-posed; that is, the solution (if it exists) does not depend continuously on the data. In order to solve the considered problem, an iterative method is proposed. Using this method, a regularized solution is constructed and an a priori error estimate between the exact solution and its regularized approximation is obtained. Moreover, numerical results are presented to illustrate the accuracy and efficiency of the method.
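A generic sketch of iterative regularization of this flavor (not the paper's specific scheme): Landweber iteration stopped early by the discrepancy principle, which halts once the residual reaches the noise level. The operator and data below are made up.

```python
import numpy as np

def landweber(A, b, delta, tau=1.1, maxiter=10000):
    """Landweber iteration x <- x + w A^T (b - A x), stopped once the
    residual drops below tau * delta (discrepancy principle)."""
    w = 1.0 / np.linalg.norm(A, 2) ** 2        # safe step size
    x = np.zeros(A.shape[1])
    for k in range(maxiter):
        r = b - A @ x
        if np.linalg.norm(r) <= tau * delta:   # stop: noise level reached
            break
        x += w * (A.T @ r)
    return x, k

# made-up mildly ill-posed example: discrete integration operator
n = 20
A = np.tril(np.ones((n, n))) / n
x_true = np.ones(n)
delta = 1e-2                                   # assumed noise level
rng = np.random.default_rng(0)
e = rng.standard_normal(n)
b = A @ x_true + delta * e / np.linalg.norm(e)
x_hat, k = landweber(A, b, delta)
```

Here the iteration index plays the role of the regularization parameter: stopping early prevents the later iterations from fitting the noise.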
An iterative regularization method for nonlinear problems based on Bregman projections
Maaß, Peter; Strehlow, Robin
2016-11-01
In this paper, we present an iterative method for the regularization of ill-posed nonlinear problems. The approach is based on the Bregman projection onto stripes whose width is controlled by both the noise level and the structure of the operator. In our investigation, we follow (Lorenz et al 2014 SIAM J. Imaging Sci. 7 1237-62) and extend the respective method to the setting of nonlinear operators. Furthermore, we present a proof of the regularizing properties of the method.
Parent, Maxim; Niezgoda, Helen; Keller, Heather H; Chambers, Larry W; Daly, Shauna
2012-10-01
A variety of methods are available for assessing diet; however, many are impractical for large research studies in an institutional environment. Technology, specifically digital imaging, can make diet estimations more feasible for research. Our goal was to compare a digital imaging method of estimating regular and modified-texture main plate food waste with traditional on-site visual estimations, in a continuing and long-term care setting using a meal-tray delivery service. Food waste was estimated for participants on regular (n=36) and modified-texture (n=42) diets. A tracking system to ensure collection and digital imaging of all main meal plates was developed. Four observers used a modified Comstock method to assess food waste for vegetables, starches, and main courses on 551 main meal plates. Intermodal, inter-rater, and intra-rater reliability were calculated using intraclass correlation for absolute agreement. Intermodal reliability was based on one rater's assessments. The digital imaging method results were in high agreement with the real-time visual method for both regular and modified-texture food (intraclass correlation=0.90 and 0.88, respectively). Agreements between observers for regular diets were higher than those for modified-texture food (range=0.91 to 0.94; 0.82 to 0.91, respectively). Intra-rater agreements were very high for both regular and modified-texture food (range=0.93 to 0.99; 0.91 to 0.98). The digital imaging method is a reliable alternative to estimating regular and modified-texture food waste for main meal plates when compared with real-time visual estimation. Color, shape, reheating, mixing, and use of sauces made modified-texture food waste slightly more difficult to estimate, regardless of estimation method.
Cubic Trigonometric B-spline Galerkin Methods for the Regularized Long Wave Equation
Irk, Dursun; Keskin, Pinar
2016-10-01
A numerical solution of the Regularized Long Wave (RLW) equation is obtained using the Galerkin finite element method, based on the Crank-Nicolson method for the time integration and cubic trigonometric B-spline functions for the space discretization. After two different linearization techniques are applied, the proposed algorithms are tested on the problems of propagation of a solitary wave and interaction of two solitary waves.
Chengcai Leng
2015-01-01
Optical molecular imaging is a promising technique that has been widely used in physiology and pathology at the cellular and molecular levels; its modalities include bioluminescence tomography, fluorescence molecular tomography and Cerenkov luminescence tomography. The inverse problem is ill-posed for these modalities, which causes a non-unique solution. In this paper, we propose an effective reconstruction method based on the linearized Bregman iterative algorithm with sparse regularization (LBSR). Considering the sparsity characteristics of the reconstructed sources, sparsity can be regarded as a kind of a priori information and sparse regularization is incorporated, which can accurately locate the position of the source. The linearized Bregman iteration method is exploited to minimize the sparse regularization problem so as to achieve fast and accurate reconstruction. Experimental results on a numerical simulation and an in vivo mouse demonstrate the effectiveness and potential of the proposed method.
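A compact sketch of linearized Bregman iteration for sparse recovery, the core ingredient the abstract names. The problem sizes and parameters below are illustrative, not the paper's reconstruction setup.

```python
import numpy as np

def linearized_bregman(A, b, mu=20.0, iters=5000):
    """Linearized Bregman iteration for min ||x||_1 s.t. Ax = b
    (Cai-Osher-Shen form: accumulate v, then soft-shrink)."""
    delta = 1.0 / np.linalg.norm(A, 2) ** 2            # safe step size
    v = np.zeros(A.shape[1])
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        v += A.T @ (b - A @ x)
        x = delta * np.sign(v) * np.maximum(np.abs(v) - mu, 0.0)  # shrink
    return x

# made-up sparse recovery problem: 3-sparse signal, 40 measurements
rng = np.random.default_rng(0)
A = rng.standard_normal((40, 80)) / np.sqrt(40)
x_true = np.zeros(80)
x_true[[5, 20, 60]] = [3.0, -2.0, 4.0]
b = A @ x_true
x = linearized_bregman(A, b)
```

The shrinkage step keeps the iterates sparse throughout, which is what makes the method attractive when the source to be reconstructed is known to be localized.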
Jozwiak, G; Masalska, A; Gotszalk, T; Ritz, I; Steigmann, H
2011-01-01
Accurate characterization of the tip radius and shape is very important for determining surface mechanical and chemical properties on the basis of scanning probe microscopy measurements. We believe the most suitable methods for this purpose are blind tip reconstruction methods, since they do not need any calibrated characterizers and can be performed on an ordinary SPM setup. As in many other inverse problems, the stability of the solution of these methods in the presence of vibrational and electronic noise requires the application of so-called regularization techniques. In this paper a novel regularization technique (Regularized Blind Tip Reconstruction, RBTR) for the blind tip reconstruction algorithm is presented. It improves the quality of the solution in the presence of isotropic and anisotropic noise. The superiority of our approach is demonstrated on the basis of computer simulations and analysis of images of the Budget Sensors TipCheck calibration standard. In case of characterization ...
A regularizing iterative ensemble Kalman method for PDE-constrained inverse problems
Iglesias, Marco A.
2016-02-01
We introduce a derivative-free computational framework for approximating solutions to nonlinear PDE-constrained inverse problems. The general aim is to merge ideas from iterative regularization with ensemble Kalman methods from Bayesian inference to develop a derivative-free, stable method that is easy to implement in applications where the PDE (forward) model is only accessible as a black box (e.g. with commercial software). The proposed regularizing ensemble Kalman method can be derived as an approximation of the regularizing Levenberg-Marquardt (LM) scheme (Hanke 1997 Inverse Problems 13 79-95) in which the derivative of the forward operator and its adjoint are replaced with empirical covariances from an ensemble of elements of the admissible space of solutions. The resulting ensemble method consists of an update formula that is applied to each ensemble member and that has a regularization parameter selected in a similar fashion to the one in the LM scheme. Moreover, an early termination of the scheme is proposed according to a discrepancy-principle-type criterion. The proposed method can also be viewed as a regularizing version of standard Kalman approaches, which are often unstable unless ad hoc fixes, such as covariance localization, are implemented. The aim of this paper is to provide a detailed numerical investigation of the regularizing and convergence properties of the proposed regularizing ensemble Kalman scheme; the proof of these properties is an open problem. By means of numerical experiments, we investigate the conditions under which the proposed method inherits the regularizing properties of the LM scheme and is thus stable and suitable for application in problems where the computation of the Fréchet derivative is not computationally feasible. More concretely, we study the effect of ensemble size, number of measurements, selection of the initial ensemble and tunable parameters on the performance of the method.
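One regularized ensemble-Kalman-style update can be sketched as below: the derivative of the forward map is replaced by ensemble cross-covariances, and the α-scaled noise covariance in the gain plays the role of the LM regularization term. The forward map, parameter choices and α schedule are illustrative, not the paper's scheme.

```python
import numpy as np

def eki_update(ensemble, G, y, gamma, alpha=1.0):
    """One regularized ensemble Kalman step. ensemble: (J, d) members;
    G maps a d-vector to a k-vector; y is the k-vector of data."""
    J = ensemble.shape[0]
    outputs = np.array([G(u) for u in ensemble])      # (J, k)
    du = ensemble - ensemble.mean(axis=0)
    dg = outputs - outputs.mean(axis=0)
    Cug = du.T @ dg / (J - 1)                         # cross-covariance
    Cgg = dg.T @ dg / (J - 1)                         # output covariance
    K = Cug @ np.linalg.inv(Cgg + alpha * gamma)      # regularized gain
    return ensemble + (y - outputs) @ K.T

# made-up linear forward map and exact data
rng = np.random.default_rng(0)
Aop = np.diag([1.0, 2.0, 3.0])
G = lambda u: Aop @ u
u_true = np.array([1.0, 1.0, 1.0])
y = G(u_true)
gamma = 0.01 * np.eye(3)                              # noise covariance
ens0 = rng.standard_normal((200, 3))                  # prior ensemble
ens1 = eki_update(ens0, G, y, gamma)
```

Note that G is only ever evaluated, never differentiated, which is exactly what makes the approach usable with black-box forward solvers.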
Augmented Arnoldi-Tikhonov Regularization Methods for Solving Large-Scale Linear Ill-Posed Systems
Yiqin Lin
2013-01-01
We propose an augmented Arnoldi-Tikhonov regularization method for the solution of large-scale linear ill-posed systems. This method augments the Krylov subspace with a user-supplied low-dimensional subspace that contains a rough approximation of the desired solution. The augmentation is implemented by a modified Arnoldi process. Some useful theoretical results are also presented. Numerical experiments illustrate that the augmented method outperforms the corresponding method without augmentation on some real-world examples.
Zhao-Qing Wang
2014-01-01
By embedding the irregular doubly connected domain in an annular regular region, the unknown functions can be approximated by barycentric Lagrange interpolation on the regular region. A highly accurate regular-domain collocation method is proposed for solving potential problems on irregular doubly connected domains in the polar coordinate system. The formulation is constructed using the barycentric Lagrange interpolation collocation method on the regular domain. The boundary conditions are discretized by barycentric Lagrange interpolation within the regular domain, and an additional method is used to impose them. The resulting overdetermined equations can be solved by the least-squares method. Function values at points of the irregular doubly connected domain are then computed by barycentric Lagrange interpolation within the regular domain. Numerical examples demonstrate the effectiveness and accuracy of the presented method.
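The building block of such a collocation method, barycentric Lagrange interpolation, can be sketched as follows. The generic-weight formula below is standard; the Chebyshev-Lobatto grid and test function are illustrative.

```python
import numpy as np

def barycentric_interp(xj, fj, x):
    """Evaluate the Lagrange interpolant of (xj, fj) at points x,
    using the second barycentric form with generic weights."""
    n = len(xj)
    # w_j = 1 / prod_{k != j} (x_j - x_k)
    w = np.array([1.0 / np.prod([xj[j] - xj[k] for k in range(n) if k != j])
                  for j in range(n)])
    x = np.atleast_1d(np.asarray(x, dtype=float))
    out = np.empty_like(x)
    for i, xi in enumerate(x):
        d = xi - xj
        if np.any(d == 0.0):                # xi coincides with a node
            out[i] = fj[np.argmin(np.abs(d))]
        else:
            t = w / d
            out[i] = (t @ fj) / t.sum()
    return out

# Chebyshev-Lobatto nodes keep the interpolation well conditioned
n = 20
xj = np.cos(np.pi * np.arange(n + 1) / n)
fj = np.exp(xj)
xs = np.linspace(-1.0, 1.0, 100)
err = np.max(np.abs(barycentric_interp(xj, fj, xs) - np.exp(xs)))
```

The barycentric form costs O(n) per evaluation point once the weights are computed and is numerically stable, which is why it underlies collocation schemes like the one in the abstract.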
SUI Da-shan; CUI Zhen-shan
2007-01-01
Accurate material physical properties and initial and boundary conditions are indispensable to numerical simulation of the casting process, and they directly affect the simulation accuracy. The inverse heat conduction method can be used to identify the above-mentioned parameters from temperature measurement data. This paper presents a new inverse method based on Tikhonov regularization theory. A regularization functional is established and the regularization parameter is deduced; the Newton-Raphson iteration method is used to solve the equations. One detailed case is solved to identify the thermal conductivity and specific heat of the sand mold and the interfacial heat transfer coefficient (IHTC) at the same time. This indicates that the regularization method is very efficient in decreasing the sensitivity to the temperature measurement data, overcoming the ill-posedness of the inverse heat conduction problem (IHCP) and improving the stability and accuracy of the results. As a general inverse method, it can be used to identify not only material physical properties but also initial and boundary condition parameters.
2003-01-01
An efficient numerical method is developed for the solution of non-linear wave equations typified by the regularized long wave equation (RLW) and its generalization (GRLW). The method uses a pseudo-spectral (Fourier transform) treatment of the space dependence together with a linearized implicit scheme in time. An important advantage gained from the use of this method is the ability to vary the mesh length, thereby reducing the computational time. Using a linearized...
29 CFR 778.209 - Method of inclusion of bonus in regular rate.
2010-07-01
... LABOR STATEMENTS OF GENERAL POLICY OR INTERPRETATION NOT DIRECTLY RELATED TO REGULATIONS OVERTIME COMPENSATION Payments That May Be Excluded From the "Regular Rate" Bonuses § 778.209 Method of inclusion of... overtime compensation. No difficulty arises in computing overtime compensation if the bonus covers only...
Implementation of an optimal first-order method for strongly convex total variation regularization
Jensen, Tobias Lindstrøm; Jørgensen, Jakob Heide; Hansen, Per Christian;
2012-01-01
We present a practical implementation of an optimal first-order method, due to Nesterov, for large-scale total variation regularization in tomographic reconstruction, image deblurring, etc. The algorithm applies to μ-strongly convex objective functions with L-Lipschitz continuous gradient...
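For reference, Nesterov's optimal first-order method for the problem class the abstract names (μ-strongly convex objective with L-Lipschitz gradient) can be sketched as follows. The constant-momentum variant and the quadratic test problem are illustrative, not the paper's TV-specific implementation.

```python
import numpy as np

def nesterov_strongly_convex(grad, x0, mu, L, iters=500):
    """Nesterov's accelerated gradient method with the constant momentum
    beta = (1 - sqrt(mu/L)) / (1 + sqrt(mu/L)) optimal for this class."""
    q = np.sqrt(mu / L)
    beta = (1 - q) / (1 + q)
    x = np.asarray(x0, dtype=float)
    y = x.copy()
    for _ in range(iters):
        x_new = y - grad(y) / L           # gradient step from extrapolation
        y = x_new + beta * (x_new - x)    # momentum
        x = x_new
    return x

# made-up quadratic with mu = 1, L = 100; the minimizer is x = 0
D = np.diag(np.linspace(1.0, 100.0, 10))
grad = lambda x: D @ x
x = nesterov_strongly_convex(grad, np.ones(10), mu=1.0, L=100.0)
```

The error contracts at the rate (1 - sqrt(mu/L)) per iteration, versus (1 - mu/L) for plain gradient descent, which is the "optimal first-order" speedup the abstract exploits.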
Stokes coupling method for the exterior flow. Part III: Regularity
None
2001-01-01
Based on the Stokes coupling method for solving the two-dimensional exterior unsteady Navier-Stokes equations, the existence of a strong solution of the Stokes coupling equations is proven, and the regularity of the solution of the reduced Stokes coupling equations is discussed.
Two-Level Bregman Method for MRI Reconstruction with Graph Regularized Sparse Coding
刘且根; 卢红阳; 张明辉
2016-01-01
In this paper, a two-level Bregman method with graph regularized sparse coding is presented for highly undersampled magnetic resonance image reconstruction. The graph regularized sparse coding is incorporated into a two-level Bregman iterative procedure that enforces the sampled data constraints in the outer level and updates the dictionary and sparse representation in the inner level. Graph regularized sparse coding and simple dictionary updating in the inner minimization make the proposed algorithm converge in a relatively small number of iterations. Experimental results demonstrate that the proposed algorithm can consistently reconstruct both simulated MR images and real MR data efficiently, and it outperforms current state-of-the-art approaches in terms of visual comparisons and quantitative measures.
Rezaie, Mohammad; Moradzadeh, Ali; Kalate, Ali Nejati; Aghajani, Hamid
2016-09-01
Inversion of gravity data is one of the important steps in the interpretation of practical data. One of the most interesting geological settings for gravity data inversion is the detection of sharp boundaries between an orebody and its host rocks. Focusing inversion is able to reconstruct a sharp image of the geological target and can be efficiently applied to the quantitative interpretation of gravity data. In this study, a new reweighted regularized method for the 3D focusing inversion technique, based on the Lanczos bidiagonalization method, is developed. The inversion results for synthetic data show that the new method is faster than the common reweighted regularized conjugate gradient method at producing an acceptable solution to the focusing inverse problem. The new inversion scheme is also applied to the gravity data collected over the San Nicolas Cu-Zn orebody in Zacatecas State, Mexico. The inversion results show a remarkable correlation with the true structure of the orebody obtained from drilling data.
Two regularization methods for solving a Riesz-Feller space-fractional backward diffusion problem
Zheng, G. H.; Wei, T.
2010-11-01
In this paper, a backward diffusion problem for a space-fractional diffusion equation (SFDE) in a strip is investigated. Such a problem is obtained from the classical diffusion equation by replacing the second-order space derivative with a Riesz-Feller derivative of order β in (0, 2]. We show that the problem is severely ill-posed, and we propose a new regularization method as well as a spectral regularization method to solve it, based on the solution given by the Fourier method. Convergence estimates are presented under a priori bound assumptions on the exact solution. Finally, numerical examples are given to show that the proposed numerical methods are effective.
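The spectral regularization idea can be sketched in the classical β = 2 case, where inverting the diffusion amplifies frequency ξ by exp(ξ²t) and regularization simply truncates the high frequencies whose amplified noise would dominate. All parameters below are made up for illustration.

```python
import numpy as np

# forward diffusion for time t, then noisy data, then regularized backward step
n, t = 256, 0.01
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
u0 = np.sin(x) + 0.5 * np.sin(3.0 * x)                   # unknown initial state
xi = np.fft.fftfreq(n, d=2.0 * np.pi / n) * 2.0 * np.pi  # integer wavenumbers
u_t = np.fft.ifft(np.fft.fft(u0) * np.exp(-xi**2 * t)).real
rng = np.random.default_rng(1)
u_t_noisy = u_t + 1e-4 * rng.standard_normal(n)          # measured final data

# the naive backward step multiplies by exp(xi^2 t); the spectral cutoff
# discards the frequencies where that amplification would blow up the noise
cutoff = np.abs(xi) <= 20
rec = np.fft.ifft(np.fft.fft(u_t_noisy) * np.exp(xi**2 * t) * cutoff).real
```

Without the cutoff, the highest retained mode would be amplified by exp(128² · 0.01), so even the tiny measurement noise would destroy the reconstruction; with it, the recoverable low-frequency content of u0 is restored accurately.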
On Landweber-Kaczmarz methods for regularizing systems of ill-posed equations in Banach spaces
Leitão, A.; Marques Alves, M.
2012-10-01
In this paper, iterative regularization methods of Landweber-Kaczmarz type are considered for solving systems of ill-posed equations modeled by (finitely many) operators acting between Banach spaces. Using assumptions of uniform convexity and smoothness on the parameter space, we are able to prove a monotonicity result for the proposed method, as well as to establish convergence (for exact data) and stability results (in the noisy data case).
Xu, Yanbin; Pei, Yang; Dong, Feng
2016-11-01
The L-curve method is a popular regularization parameter choice method for the ill-posed inverse problem of electrical resistance tomography (ERT). However, the method cannot determine a proper parameter in all situations. An investigation into the situations where the L-curve method fails shows that a new corner point appears on the L-curve, and the parameter corresponding to this new corner can yield a satisfactory reconstructed solution. Thus an extended L-curve method, which determines the regularization parameter associated with either the global corner or the new corner, is proposed. Furthermore, two strategies are provided to determine the new corner: one based on the second-order differential of the L-curve, and the other based on its curvature. The proposed method is examined by both numerical simulations and experimental tests, and the results indicate that the extended method can handle the parameter choice problem even in cases where the typical L-curve method fails. Finally, in order to reduce the running time, the extended method is combined with a projection method based on the Krylov subspace, which boosts the extended L-curve method; the results verify that its speed is distinctly improved. The proposed method extends the application of the L-curve to choosing the regularization parameter with an acceptable running time and can also be used in other kinds of tomography.
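The standard single-corner L-curve criterion that the paper extends can be sketched as follows: plot (log residual norm, log solution norm) as the parameter varies and pick the point of maximum curvature. The SVD-based implementation and the Hilbert-matrix test problem are illustrative.

```python
import numpy as np

def lcurve_corner(A, b, lambdas):
    """Return the lambda at the point of maximum curvature of the
    L-curve (log residual norm, log solution norm), via the SVD."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    beta = U.T @ b
    rho = np.empty(len(lambdas))
    eta = np.empty(len(lambdas))
    for i, lam in enumerate(lambdas):
        f = s**2 / (s**2 + lam**2)                       # filter factors
        rho[i] = np.log(np.linalg.norm((1 - f) * beta))  # residual norm
        eta[i] = np.log(np.linalg.norm(s * beta / (s**2 + lam**2)))  # solution norm
    # signed curvature of the parametric curve (rho, eta)
    d1r, d1e = np.gradient(rho), np.gradient(eta)
    d2r, d2e = np.gradient(d1r), np.gradient(d1e)
    kappa = (d1r * d2e - d2r * d1e) / (d1r**2 + d1e**2) ** 1.5
    i0 = 5 + np.argmax(kappa[5:-5])   # skip endpoints where differences are unreliable
    return lambdas[i0]

# made-up ill-posed test problem (Hilbert matrix, noisy data)
n = 12
H = np.array([[1.0 / (i + j + 1) for j in range(n)] for i in range(n)])
x_true = np.ones(n)
rng = np.random.default_rng(0)
b = H @ x_true + 1e-3 * rng.standard_normal(n)
lambdas = np.logspace(-8, 1, 100)
lam = lcurve_corner(H, b, lambdas)
U, s, Vt = np.linalg.svd(H, full_matrices=False)
x_reg = Vt.T @ (s * (U.T @ b) / (s**2 + lam**2))
```

The failure mode the paper addresses is precisely when this single global corner is absent or misleading, which motivates detecting the additional corner.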
A regularization method for the reconstruction of adsorption isotherms in liquid chromatography
Zhang, Ye; Lin, Guang-Liang; Forssén, Patrik; Gulliksson, Mårten; Fornstedt, Torgny; Cheng, Xiao-Liang
2016-10-01
Determining competitive adsorption isotherms is an open problem in liquid chromatography. Since traditional experimental trial-and-error approaches are too complex and expensive, a modern technique for obtaining adsorption isotherms is to solve the inverse problem so that the simulated batch separation coincides with actual experimental results. This is a typical ill-posed problem. Moreover, in almost all cases the observed concentration at the outlet is the total response of all components, which makes the problem more difficult. In this work, we tackle the ill-posedness with a new regularization method, which is based on the fact that the adsorption isotherms do not depend on the injection profile. The proposed method transforms the original problem into an optimization problem with a time-dependent convection-diffusion equation constraint. Iterative algorithms for solving the constrained optimization problems for both the equilibrium-dispersive and the transport-dispersive models are developed. The mass transfer resistance is also estimated by the proposed inverse method. A regularization parameter selection method and the convergence property of the proposed algorithm are discussed. Finally, numerical tests on both synthetic and real-world problems are given to show the efficiency and feasibility of the proposed regularization method.
Application of L1/2 regularization logistic method in heart disease diagnosis.
Zhang, Bowen; Chai, Hua; Yang, Ziyi; Liang, Yong; Chu, Gejin; Liu, Xiaoying
2014-01-01
Heart disease has become the number one killer threatening human health, and its diagnosis depends on many features, such as age, blood pressure, heart rate and dozens of other physiological indicators. Although there are many risk factors, doctors usually diagnose the disease depending on their intuition and experience, and correct determination requires a great deal of knowledge and experience. Mining the hidden medical information in existing clinical data is a noticeable and powerful approach in the study of heart disease diagnosis. In this paper, a sparse logistic regression method with L(1/2) regularization is introduced to detect the key risk factors on real heart disease data. Experimental results show that the sparse logistic L(1/2) regularization method selects fewer but more informative key features than the Lasso, SCAD, MCP and Elastic net regularization approaches. At the same time, the proposed method reduces computational complexity, saves the cost and time of medical tests and checkups, and reduces the number of attributes that need to be collected from patients.
A Distributed Learning Method for ℓ1-Regularized Kernel Machine over Wireless Sensor Networks
Xinrong Ji
2016-07-01
In wireless sensor networks, centralized learning methods have very high communication costs and energy consumption. These are caused by the need to transmit scattered training examples from the various sensor nodes to the central fusion center, where a classifier or a regression machine is trained. To reduce the communication cost, a distributed learning method for a kernel machine incorporating ℓ1 norm regularization (ℓ1-regularized) is investigated, and a novel distributed learning algorithm for the ℓ1-regularized kernel minimum mean squared error (KMSE) machine is proposed. The proposed algorithm relies on in-network processing and a collaboration that transmits the sparse model only between single-hop neighboring nodes. This paper evaluates the proposed algorithm with respect to prediction accuracy, the sparsity of the model, the communication cost and the number of iterations on synthetic and real datasets. The simulation results show that the proposed algorithm can obtain approximately the same prediction accuracy as the batch learning method, while being significantly superior in terms of model sparsity and communication cost and converging in fewer iterations. Finally, an experiment conducted on a wireless sensor network (WSN) test platform further shows the advantages of the proposed algorithm with respect to communication cost.
A PARALLEL NUMERICAL MODEL OF SOLVING N-S EQUATIONS BY USING SEQUENTIAL REGULARIZATION METHOD
[Anonymous]
2003-01-01
A parallel numerical model was established for solving the Navier-Stokes equations using the Sequential Regularization Method (SRM). The computational domain is decomposed into P sub-domains, in which the difference formulae are obtained from the governing equations, and data are exchanged at the virtual boundaries of the sub-domains during parallel computation. The closed-channel cavity flow was solved by the implicit method, and the driven square cavity flow by the explicit method. The results compare well with those given by Ghia.
Ma, Denglong; Tan, Wei; Zhang, Zaoxiao; Hu, Jun
2017-03-05
In order to identify the parameters of a hazardous gas emission source in the atmosphere with little prior information and a reliable probability estimate, a hybrid algorithm coupling Tikhonov regularization with particle swarm optimization (PSO) was proposed. When the source location is known, the source strength can be estimated successfully by the common Tikhonov regularization method, but the method fails when information about both source strength and location is absent. Therefore, a hybrid method combining linear Tikhonov regularization and the PSO algorithm was designed. With this method, the nonlinear inverse dispersion model is transformed to a linear form under some assumptions, and the source parameters, including source strength and location, are identified simultaneously by the linear Tikhonov-PSO regularization method. The regularization parameters were selected by the L-curve method. Estimation results with different regularization matrices showed that the confidence interval with a high-order regularization matrix is narrower than that with a zero-order regularization matrix, while the estimates of the individual source parameters are close to each other across regularization matrices. A nonlinear Tikhonov-PSO hybrid regularization was also designed, using the primary nonlinear dispersion model, to estimate the source parameters. Comparisons on simulated and experimental cases showed that the linear Tikhonov-PSO method with the transformed linear inverse model has higher computational efficiency than the nonlinear Tikhonov-PSO method, and its confidence intervals are more reasonable. The estimates from the linear Tikhonov-PSO method are similar to those from the single PSO algorithm, but the Tikhonov-PSO method additionally provides a reasonable confidence interval at given probability levels. Therefore, the presented linear Tikhonov-PSO regularization method is a promising method for hazardous emission source identification.
Hemagglutinin outer contour detection methods based on regular hexagon bar template
Tian, Miaomiao; Jing, Wenbo; Duan, Jin; Wang, Xiaoman
2014-11-01
To extract the hemagglutinin outer contour accurately from a hemagglutinin image and analyze the hemagglutinin protein content from the size of the detected contour, a circle detection algorithm based on a regular hexagon bar detection template is presented. First, the hemagglutinin image is thresholded using the OTSU adaptive thresholding method. Then the regular hexagon bar detection template is roughly aligned with the thresholded hemagglutinin, the intersection of the template and the hemagglutinin contour area is obtained, and noise near the hemagglutinin contour is reduced using the standardized relationship of the hexagon bars, so that the hemagglutinin pixels are accurately identified. Finally, the hemagglutinin outer contour is derived from the geometric relationship of the pixels, and the hemagglutinin position is located precisely. The experimental results show that contour detection errors caused by uneven density and unclear edges in the hemagglutinin image are reduced; detection accuracy is improved by a factor of 0.47 and detection speed by a factor of 0.56. The hemagglutinin contour can be detected stably, quickly and accurately, which is significant for the study of hemagglutinin protein content.
Deng, Liang-Jian; Huang, Ting-Zhu; Zhao, Xi-Le; Zhao, Liang; Wang, Si
2013-05-01
Singular value decomposition (SVD)-based approaches, e.g., truncated SVD and Tikhonov regularization methods, are effective ways to solve problems of small or moderate size. However, the SVD is computationally expensive when applied to large problems. A multilevel method (MLM) combining SVD-based methods with a thresholding technique for signal restoration is proposed in this paper. The MLM transfers large problems to small- or moderate-sized problems in order to keep the SVD-based methods applicable. The linear systems on the coarsest level of the multilevel process are solved by the Tikhonov regularization method. No presmoothers are implemented in the multilevel process, to avoid damaging the parameter choice on the coarsest level; instead, the soft-thresholding denoising technique is employed in the postsmoothers to eliminate the high-frequency information left over due to the lack of presmoothers. Finally, computational experiments show that the method outperforms other SVD-based methods in signal restoration ability while consuming less CPU time.
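As a minimal illustration of the SVD-based building block this entry starts from (Tikhonov regularization computed through the SVD, not the multilevel scheme itself), the sketch below uses a made-up ill-conditioned Hilbert-type matrix and an arbitrarily chosen regularization parameter:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized solution of min ||Ax - b||^2 + lam^2 ||x||^2
    via the SVD of A: filter factors s_i^2 / (s_i^2 + lam^2) damp the
    contributions of small singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)          # Tikhonov filter factors
    return Vt.T @ (f / s * (U.T @ b))   # x = V diag(s/(s^2+lam^2)) U^T b

# Illustrative ill-conditioned problem: an 8x8 Hilbert matrix with tiny noise
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
b = A @ x_true + 1e-6 * np.random.default_rng(0).standard_normal(n)
x_reg = tikhonov_svd(A, b, lam=1e-4)
```

The filter factors make explicit how the method interpolates between the unregularized least-squares solution (lam → 0) and heavy damping of the noise-dominated small singular directions.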
Shkvarko Yuriy
2006-01-01
We address a new approach to solve the ill-posed nonlinear inverse problem of high-resolution numerical reconstruction of the spatial spectrum pattern (SSP) of the backscattered wavefield sources distributed over the remotely sensed scene. An array or synthesized array radar (SAR) that employs digital data signal processing is considered. By exploiting the idea of combining the statistical minimum risk estimation paradigm with numerical descriptive regularization techniques, we address a new fused statistical descriptive regularization (SDR) strategy for enhanced radar imaging. Pursuing such an approach, we establish a family of SDR-related SSP estimators that encompass a manifold of existing beamforming techniques, ranging from the traditional matched filter to robust and adaptive spatial filtering and minimum variance methods.
Regularizing the molecular potential in electronic structure calculations. I. SCF methods
Bischoff, Florian A., E-mail: florian.bischoff@hu-berlin.de [Institut für Chemie, Humboldt-Universität zu Berlin, Unter den Linden 6, 10099 Berlin (Germany)
2014-11-14
We present a method to remove the singular nuclear potential in a molecule and replace it with a regularized potential that is more amenable to be represented numerically. The singular nuclear potential is canceled by the similarity-transformed kinetic energy operator giving rise to an effective nuclear potential that contains derivative operators acting on the wave function. The method is fully equivalent to the non-similarity-transformed version. We give numerical examples within the framework of multi-resolution analysis for medium-sized molecules.
[Health and economic benefits of compulsory regular vaccination in the Slovak Republic. I. Methods].
Hudecková, H; Straka, S
2000-02-01
The authors present, in a series of contributions under the common title "Health and Economic Benefits of Compulsory Regular Vaccination in the Slovak Republic", estimates of the benefits and effectiveness of particular vaccinations. This first contribution deals with the objectives and methods of the evaluation, which is essential for allocating funds to maintain existing preventive programmes and to implement new preventive measures. On the basis of literature data and their own experience, the authors formulate modified cost-effectiveness and cost-benefit methods and other parameters adjusted to the conditions of the vaccination programme in the Slovak Republic.
Fourier Moment Method with Regularization for the Cauchy Problem of Helmholtz Equation
MA YUN-YUN; MA FU-MING
2012-01-01
In this paper, we consider the reconstruction of the wave field in a bounded domain. By choosing a special family of functions, the Cauchy problem can be transformed into a Fourier moment problem. This problem is ill-posed. We propose a regularization method for obtaining an approximate solution to the wave field on the unspecified boundary. We also give the convergence analysis and error estimate of the numerical algorithm. Finally, we present some numerical examples to show the effectiveness of this method.
Wissocq, Gauthier; Gourdain, Nicolas; Malaspinas, Orestis; Eyssartier, Alexandre
2017-02-01
This paper reports the investigations done to adapt the Characteristic Boundary Conditions (CBC) to the Lattice-Boltzmann formalism for high Reynolds number applications. Three CBC formalisms are implemented and tested in an open source LBM code: the baseline local one-dimension inviscid (BL-LODI) approach, its extension including the effects of the transverse terms (CBC-2D) and a local streamline approach in which the problem is reformulated in the incident wave framework (LS-LODI). Then all implementations of the CBC methods are tested for a variety of test cases, ranging from canonical problems (such as 2D plane and spherical waves and 2D vortices) to a 2D NACA profile at high Reynolds number ($Re = 10^5$), representative of aeronautic applications. The LS-LODI approach provides the best results for pure acoustics waves (plane and spherical waves). However, it is not well suited to the outflow of a convected vortex, for which the CBC-2D associated with a relaxation on density and transverse waves provides the best results. As regards numerical stability, a regularized adaptation is necessary to simulate high Reynolds number flows. The so-called regularized FD (Finite Difference) adaptation, a modified regularized approach where the off-equilibrium part of the stress tensor is computed thanks to a finite difference scheme, is the only tested adaptation that can handle the high Reynolds computation.
Abbasi Mahdi
2012-06-01
Background: Electrical Impedance Tomography (EIT) is used as a fast clinical imaging technique for monitoring the health of human organs such as the lungs, heart, brain and breast. Each practical EIT reconstruction algorithm should be efficient enough in terms of convergence rate and accuracy. The main objective of this study is to investigate the feasibility of precise empirical conductivity imaging using a sinc-convolution algorithm in the D-bar framework. Methods: In the first step, synthetic and experimental data were used to compute an intermediate object named the scattering transform. Next, this object was used in a two-dimensional integral equation which was precisely and rapidly solved via the sinc-convolution algorithm to find the square root of the conductivity for each pixel of the image. For the purpose of comparison, multigrid and NOSER algorithms were implemented under a similar setting. The quality of reconstructions of synthetic models was tested against GREIT-approved quality measures. To validate the simulation results, reconstructions of a phantom chest and a human lung were used. Results: Evaluation of the synthetic reconstructions shows that the quality of the sinc-convolution reconstructions is considerably better than that of each of its competitors in terms of amplitude response, position error, ringing, resolution and shape deformation. In addition, the results confirm near-exponential and linear convergence rates for sinc-convolution and multigrid, respectively. Moreover, the smallest relative errors and the greatest degree of truth were found in sinc-convolution reconstructions from experimental phantom data. Reconstructions of clinical lung data show that the related physiological effect is well recovered by the sinc-convolution algorithm. Conclusions: Parametric evaluation demonstrates the efficiency of sinc-convolution in reconstructing accurate conductivity images from experimental data. Excellent results in phantom and clinical
Sander, Renan S; Ferreira, Silvio C
2016-01-01
A major hurdle in the simulation of the steady state of epidemic processes is that the system will unavoidably visit an absorbing, disease-free state at sufficiently long times due to the finite size of the networks on which epidemics evolve. In the present work, we compare different quasistationary (QS) simulation methods in which the absorbing states are suitably handled and the thermodynamic limit of the original dynamics can be achieved. We analyzed the standard QS (SQS) method, where the sampling is constrained to active configurations; the reflecting boundary condition (RBC), where the dynamics returns to the pre-absorbing configuration; and hub reactivation (HR), where the most connected vertex of the network is reactivated after a visit to an absorbing state. We applied the methods to the contact process (CP) and susceptible-infected-susceptible (SIS) models on regular and scale-free networks. The investigated methods yield the same epidemic threshold for both models. For CP, that undergoes a standard ...
Backtracking-Based Iterative Regularization Method for Image Compressive Sensing Recovery
Lingjun Liu
2017-01-01
This paper presents a variant of the iterative shrinkage-thresholding (IST) algorithm, called backtracking-based adaptive IST (BAIST), for image compressive sensing (CS) reconstruction. With increasing iterations, IST usually yields a smoothing of the solution and runs into prematurity. To add back more details, the BAIST method backtracks to the previous noisy image using L2-norm minimization, i.e., minimizing the Euclidean distance between the current solution and the previous ones. Through this modification, the BAIST method achieves superior performance while maintaining the low complexity of IST-type methods. BAIST also employs a nonlocal regularization with an adaptive regularizer to automatically detect the sparsity level of an image. Experimental results show that the algorithm outperforms the original IST method and several excellent CS techniques.
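For reference, the baseline IST iteration that BAIST modifies can be sketched in a few lines; the problem sizes, sparsity pattern, and regularization weight below are arbitrary illustrative choices, not the paper's imaging setup:

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1: shrink each entry toward zero by t."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ist(A, b, lam, n_iter=500):
    """Basic iterative shrinkage-thresholding for min 0.5||Ax-b||^2 + lam||x||_1."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        # gradient step on the quadratic term, then shrinkage on the l1 term
        x = soft_threshold(x - (A.T @ (A @ x - b)) / L, lam / L)
    return x

# Toy noiseless sparse-recovery instance
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 100))
x0 = np.zeros(100)
x0[[3, 17, 62]] = [2.0, -1.5, 1.0]
b = A @ x0
x_hat = ist(A, b, lam=0.05)
```

Each iteration is one gradient step followed by a soft-threshold, which is exactly the smoothing behavior the abstract says BAIST counteracts with its backtracking step.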
Nielsen, Allan Aasbjerg
2007-01-01
This paper describes new extensions to the previously published multivariate alteration detection (MAD) method for change detection in bi-temporal, multi- and hypervariate data such as remote sensing imagery. Much like boosting methods often applied in data mining work, the iteratively reweighted (IR) MAD method in a series of iterations places increasing focus on "difficult" observations, here observations whose change status over time is uncertain. The MAD method is based on the established technique of canonical correlation analysis for the multivariate data acquired at two points in time. Examples from an agricultural region in Kenya and from hyperspectral airborne HyMap data over a small rural area in southeastern Germany are given. The latter case demonstrates the need for regularization.
Zi-Luan Wei
2002-01-01
A regular splitting and potential reduction method is presented for solving a quadratic programming problem with box constraints (QPB) in this paper. A general algorithm is designed to solve the QPB problem and generate a sequence of iterative points. We show that the number of iterations to generate an $\epsilon$-minimum solution or an $\epsilon$-KKT solution by the algorithm is bounded by $O(\frac{n^2}{\epsilon}\log\frac{1}{\epsilon}+n\log(1+\sqrt{2n}))$, and the total running time is bounded by $O(n^2(n+\log n+\log\frac{1}{\epsilon})(\frac{n}{\epsilon}\log\frac{1}{\epsilon}+\log n))$ arithmetic operations.
Ni, G; Ni, Guang-jiong; Wang, Haibin
1997-01-01
A simple but effective method for regularization-renormalization (R-R) is proposed for handling the Feynman diagram integral (FDI) at one loop level in quantum electrodynamics (QED). The divergence is substituted by some constants to be fixed via experiments. So no counter term, no bare parameter and no arbitrary running mass scale is involved. Then the Lamb Shift in Hydrogen atom can be calculated qualitatively and simply as $\\Delta E(2S_{1/2})- \\Delta E(2P_{1/2})=996.7 MHz$ versus the experimental value $1057.85 MHz$.
S. J. Noh
2011-10-01
Data assimilation techniques have received growing attention due to their capability to improve prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", are a Bayesian learning process with the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that considers the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) methods is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, water and energy transfer processes (WEP), is implemented for the sequential data assimilation through the updating of state variables. The lagged regularized particle filter (LRPF) and the sequential importance resampling (SIR) particle filter are implemented for hindcasting of streamflow at the Katsura catchment, Japan. Control state variables for filtering are soil moisture content and overland flow, and streamflow measurements are used for data assimilation. LRPF shows consistent forecasts regardless of the process noise assumption, while SIR has different values of optimal process noise and shows sensitive variation of confidence intervals depending on the process noise. Improvement of LRPF forecasts compared to SIR is particularly found for rapidly varying high flows, due to preservation of sample diversity from the kernel, even when particle impoverishment takes place.
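The regularization idea mentioned here, perturbing resampled particles to preserve diversity, can be illustrated on a toy scalar state-space model. The random-walk dynamics, noise levels, and jitter size below are made-up illustrative choices, not the WEP hydrologic implementation or the paper's lagged/MCMC move step:

```python
import numpy as np

rng = np.random.default_rng(42)

def sir_step(particles, weights, y_obs, obs_std=0.5, proc_std=0.2, jitter=0.05):
    """One SIR particle-filter step with a small Gaussian 'regularization'
    jitter after resampling to fight particle impoverishment."""
    # propagate through a toy random-walk state model
    particles = particles + proc_std * rng.standard_normal(particles.shape)
    # reweight by the Gaussian likelihood of the observation
    w = weights * np.exp(-0.5 * ((y_obs - particles) / obs_std) ** 2)
    w /= w.sum()
    # systematic resampling (clip guards against floating-point cumsum < 1)
    u = (rng.random() + np.arange(len(w))) / len(w)
    idx = np.minimum(np.searchsorted(np.cumsum(w), u), len(w) - 1)
    particles = particles[idx]
    # regularization: jitter the resampled particles to keep them distinct
    particles = particles + jitter * rng.standard_normal(particles.shape)
    return particles, np.full(len(w), 1.0 / len(w))

# track a fixed true state from twenty noisy observations
particles = rng.standard_normal(500)
weights = np.full(500, 1.0 / 500)
truth = 1.0
for y in truth + 0.5 * rng.standard_normal(20):
    particles, weights = sir_step(particles, weights, y)
est = particles.mean()
```

Without the jitter line, repeated resampling of a nearly noiseless model collapses the ensemble onto a few duplicated particles, which is the impoverishment problem the regularized filter addresses.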
The method of intrinsic scaling a systematic approach to regularity for degenerate and singular PDEs
Urbano, José Miguel
2008-01-01
This set of lectures, which had its origin in a mini course delivered at the Summer Program of IMPA (Rio de Janeiro), is an introduction to intrinsic scaling, a powerful method in the analysis of degenerate and singular PDEs. In the first part, the theory is presented from scratch for the model case of the degenerate p-Laplace equation. This approach brings to light what is really essential in the method, leaving aside technical refinements needed to deal with more general equations, and is entirely self-contained. The second part deals with three applications of the theory to relevant models arising from flows in porous media and phase transitions. The aim is to convince the reader of the strength of the method as a systematic approach to regularity for this important class of equations.
The Projection Method for Reaching Consensus and the Regularized Power Limit of a Stochastic Matrix
Agaev, R P
2011-01-01
In the coordination/consensus problem for multi-agent systems, a well-known condition of achieving consensus is the presence of a spanning arborescence in the communication digraph. The paper deals with the discrete consensus problem in the case where this condition is not satisfied. A characterization of the subspace $T_P$ of initial opinions (where $P$ is the influence matrix) that \\emph{ensure} consensus in the DeGroot model is given. We propose a method of coordination that consists of: (1) the transformation of the vector of initial opinions into a vector belonging to $T_P$ by orthogonal projection and (2) subsequent iterations of the transformation $P.$ The properties of this method are studied. It is shown that for any non-periodic stochastic matrix $P,$ the resulting matrix of the orthogonal projection method can be treated as a regularized power limit of $P.$
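The power-limit behavior underlying the method can be seen on a toy DeGroot model; the 3x3 row-stochastic influence matrix below is a made-up example whose digraph contains a spanning arborescence, so plain iteration already reaches consensus (the paper's projection step concerns the case where it does not):

```python
import numpy as np

# Row-stochastic influence matrix P of a 3-agent DeGroot model
P = np.array([[0.5, 0.5, 0.0],
              [0.2, 0.6, 0.2],
              [0.0, 0.3, 0.7]])
x = np.array([1.0, 0.0, -1.0])          # vector of initial opinions

# Numerical power limit of P: P^k converges to a rank-one matrix whose
# rows all equal the stationary distribution of P
P_inf = np.linalg.matrix_power(P, 200)
consensus = P_inf @ x                   # all agents end at the same opinion
```

Iterating x_{k+1} = P x_k is equivalent to applying this power limit, which is the object the paper generalizes (via orthogonal projection onto $T_P$) to a regularized power limit when P is such that consensus fails.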
Fast ℓ1-regularized space-time adaptive processing using alternating direction method of multipliers
Qin, Lilong; Wu, Manqing; Wang, Xuan; Dong, Zhen
2017-04-01
Motivated by the sparsity of filter coefficients in full-dimension space-time adaptive processing (STAP) algorithms, this paper proposes a fast ℓ1-regularized STAP algorithm based on the alternating direction method of multipliers to accelerate the convergence and reduce the calculations. The proposed algorithm uses a splitting variable to obtain an equivalent optimization formulation, which is addressed with an augmented Lagrangian method. Using the alternating recursive algorithm, the method can rapidly result in a low minimum mean-square error without a large number of calculations. Through theoretical analysis and experimental verification, we demonstrate that the proposed algorithm provides a better output signal-to-clutter-noise ratio performance than other algorithms.
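The splitting-variable idea described here can be sketched for a generic ℓ1-regularized least-squares problem via the alternating direction method of multipliers. This is only the general ADMM/lasso template, not the STAP filter itself; the matrix sizes, penalty lam, and penalty parameter rho are illustrative assumptions:

```python
import numpy as np

def admm_l1(A, b, lam, rho=1.0, n_iter=100):
    """ADMM for min 0.5||Ax-b||^2 + lam||x||_1 with the splitting x = z."""
    m, n = A.shape
    AtA, Atb = A.T @ A, A.T @ b
    # factor the x-update system once and reuse it every iteration
    C = np.linalg.cholesky(AtA + rho * np.eye(n))
    x = np.zeros(n); z = np.zeros(n); u = np.zeros(n)
    for _ in range(n_iter):
        # x-update: solve (A^T A + rho I) x = A^T b + rho (z - u)
        x = np.linalg.solve(C.T, np.linalg.solve(C, Atb + rho * (z - u)))
        # z-update: soft-thresholding (proximal step for the l1 term)
        z = np.sign(x + u) * np.maximum(np.abs(x + u) - lam / rho, 0.0)
        # dual ascent on the scaled multiplier
        u = u + x - z
    return z

# Toy noiseless sparse problem
rng = np.random.default_rng(3)
A = rng.standard_normal((30, 60))
x0 = np.zeros(60)
x0[[5, 40]] = [1.5, -2.0]
b = A @ x0
x_hat = admm_l1(A, b, lam=0.1)
```

The one-time Cholesky factorization is what makes the per-iteration cost low, mirroring the reduced-calculation motivation of the STAP algorithm.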
S. J. Noh
2011-04-01
Applications of data assimilation techniques have been widely used to improve hydrologic prediction. Among various data assimilation techniques, sequential Monte Carlo (SMC) methods, known as "particle filters", provide the capability to handle non-linear and non-Gaussian state-space models. In this paper, we propose an improved particle filtering approach that considers the different response times of internal state variables in a hydrologic model. The proposed method adopts a lagged filtering approach to aggregate model response until the uncertainty of each hydrologic process is propagated. Regularization with an additional move step based on Markov chain Monte Carlo (MCMC) is also implemented to preserve sample diversity under the lagged filtering approach. A distributed hydrologic model, WEP, is implemented for the sequential data assimilation through the updating of state variables. Particle filtering is parallelized and implemented in a multi-core computing environment via open message passing interface (MPI). We compare the performance of the particle filters in terms of model efficiency, predictive QQ plots and particle diversity. Improvement of model efficiency and preservation of particle diversity are found for the lagged regularized particle filter.
OU Jikun; WANG Zhenjie
2004-01-01
A new approach is employed for GPS rapid positioning using several epochs of single-frequency phase data. Firstly, the structure of the normal matrix in GPS rapid positioning is analyzed. Then, in light of this structure and based on the Tikhonov regularization theorem, a new regularizer is designed to mitigate the ill-conditioning of the normal matrix. Accurate float ambiguity solutions and their mean squared error matrix (MSEM) are obtained using several epochs of single-frequency phase data. Combined with the LAMBDA method, the new approach fixes the integer ambiguities correctly and quickly using the MSEM instead of the cofactor matrix of the ambiguities. Finally, a baseline over 3 km is taken as an example. The integer ambiguities fixed by the new approach using five epochs of single-frequency phase data are the same as those fixed by the Bernese software using long-duration data, and the success rate of fixing the integer ambiguities is 100 percent over 197 groups of data. Compared with traditional methods, the new approach provides better accuracy and efficiency in GPS rapid positioning, so it has an extensive application outlook in deformation monitoring, pseudokinematic relative positioning, attitude determination, etc.
Min Sun
2017-01-01
The proximal alternating direction method of multipliers (P-ADMM) is an efficient first-order method for solving separable convex minimization problems. Recently, He et al. have further studied the P-ADMM and relaxed the proximal regularization matrix of its second subproblem to be indefinite. This is especially significant in practical applications, since an indefinite proximal matrix can result in a larger step size for the corresponding subproblem and thus can often accelerate the overall convergence speed of the P-ADMM. In this paper, without assuming that the feasible set of the studied problem is bounded or that the objective function's component $\theta_{i}(\cdot)$ is strongly convex, we prove the worst-case $\mathcal{O}(1/t)$ convergence rate in an ergodic sense of the P-ADMM with a general Glowinski relaxation factor $\gamma\in(0,\frac{1+\sqrt{5}}{2})$, which is a supplement to the previously known results in this area. Furthermore, some numerical results on compressive sensing are reported to illustrate the effectiveness of the P-ADMM with indefinite proximal regularization.
W. Holmes Finch
2016-05-01
Researchers and data analysts are sometimes faced with the problem of very small samples, where the number of variables approaches or exceeds the overall sample size, i.e., high-dimensional data. In such cases, standard statistical models such as regression or analysis of variance cannot be used, either because the resulting parameter estimates exhibit very high variance and therefore cannot be trusted, or because the statistical algorithm cannot converge on parameter estimates at all. There exists an alternative set of model estimation procedures, known collectively as regularization methods, which can be used in such circumstances and which have been shown through simulation research to yield accurate parameter estimates. The purpose of this paper is to describe, for those unfamiliar with them, the most popular of these regularization methods, the lasso, and to demonstrate its use on an actual high-dimensional dataset involving adults with autism, using the R software language. Results of analyses relating measures of executive functioning to a full-scale intelligence test score are presented, and implications of using these models are discussed.
Ablikim, M; Ai, X C; Albayrak, O; Albrecht, M; Ambrose, D J; Amoroso, A; An, F F; An, Q; Bai, J Z; Ferroli, R Baldini; Ban, Y; Bennett, D W; Bennett, J V; Bertani, M; Bettoni, D; Bian, J M; Bianchi, F; Boger, E; Boyko, I; Briere, R A; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, H Y; Chen, J C; Chen, M L; Chen, S J; Chen, X; Chen, X R; Chen, Y B; Cheng, H P; Chu, X K; Cibinetto, G; Dai, H L; Dai, J P; Dbeyssi, A; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; De Mori, F; Ding, Y; Dong, C; Dong, J; Dong, L Y; Dong, M Y; Du, S X; Duan, P F; Eren, E E; Fan, J Z; Fang, J; Fang, S S; Fang, X; Fang, Y; Fava, L; Feldbauer, F; Felici, G; Feng, C Q; Fioravanti, E; Fritsch, M; Fu, C D; Gao, Q; Gao, X Y; Gao, Y; Gao, Z; Garzia, I; Goetzen, K; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guan, Y H; Guo, A Q; Guo, L B; Guo, Y; Guo, Y P; Haddadi, Z; Hafner, A; Han, S; Hao, X Q; Harris, F A; He, K L; He, X Q; Held, T; Heng, Y K; Hou, Z L; Hu, C; Hu, H M; Hu, J F; Hu, T; Hu, Y; Huang, G M; Huang, G S; Huang, J S; Huang, X T; Huang, Y; Hussain, T; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, L L; Jiang, L W; Jiang, X S; Jiang, X Y; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Johansson, T; Julin, A; Kalantar-Nayestanaki, N; Kang, X L; Kang, X S; Kavatsyuk, M; Ke, B C; Kiese, P; Kliemt, R; Kloss, B; Kolcu, O B; Kopf, B; Kornicer, M; Kuehn, W; Kupsc, A; Lange, J S; Lara, M; Larin, P; Leng, C; Li, C; Li, Cheng; Li, D M; Li, F; Li, F Y; Li, G; Li, H B; Li, J C; Li, Jin; Li, K; Li, Lei; Li, P R; Li, T; Li, W D; Li, W G; Li, X L; Li, X M; Li, X N; Li, X Q; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Lin, D X; Liu, B J; Liu, C L; Liu, C X; Liu, F H; Liu, Fang; Liu, Feng; Liu, H B; Liu, H H; Liu, H M; Liu, J; Liu, J B; Liu, J P; Liu, J Y; Liu, K; Liu, K Y; Liu, L D; Liu, P L; Liu, Q; Liu, S B; Liu, X; Liu, Y B; Liu, Z A; Liu, Zhiqing; Loehner, H; Lou, X C; Lu, H J; Lu, J G; Lu, Y; Lu, Y P; Luo, C L; Luo, 
M X; Luo, T; Luo, X L; Lyu, X R; Ma, F C; Ma, H L; Ma, L L; Ma, Q M; Ma, T; Ma, X N; Ma, X Y; Maas, F E; Maggiora, M; Mao, Y J; Mao, Z P; Marcello, S; Messchendorp, J G; Min, J; Mitchell, R E; Mo, X H; Mo, Y J; Morales, C Morales; Moriya, K; Muchnoi, N Yu; Muramatsu, H; Nefedov, Y; Nerling, F; Nikolaev, I B; Ning, Z; Nisar, S; Niu, S L; Niu, X Y; Olsen, S L; Ouyang, Q; Pacetti, S; Patteri, P; Pelizaeus, M; Peng, H P; Peters, K; Pettersson, J; Ping, J L; Ping, R G; Poling, R; Prasad, V; Qi, M; Qian, S; Qiao, C F; Qin, L Q; Qin, N; Qin, X S; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Ripka, M; Rong, G; Rosner, Ch; Ruan, X D; Santoro, V; Sarantsev, A; Savrié, M; Schoenning, K; Schumann, S; Shan, W; Shao, M; Shen, C P; Shen, P X; Shen, X Y; Sheng, H Y; Song, W M; Song, X Y; Sosio, S; Spataro, S; Sun, G X; Sun, J F; Sun, S S; Sun, Y J; Sun, Y Z; Sun, Z J; Sun, Z T; Tang, C J; Tang, X; Tapan, I; Thorndike, E H; Tiemens, M; Ullrich, M; Uman, I; Varner, G S; Wang, B; Wang, D; Wang, D Y; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, P; Wang, P L; Wang, S G; Wang, W; Wang, X F; Wang, Y D; Wang, Y F; Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z H; Wang, Z Y; Weber, T; Wei, D H; Wei, J B; Weidenkaff, P; Wen, S P; Wiedner, U; Wolke, M; Wu, L H; Wu, Z; Xia, L G; Xia, Y; Xiao, D; Xiao, H; Xiao, Z J; Xie, Y G; Xiu, Q L; Xu, G F; Xu, L; Xu, Q J; Xu, X P; Yan, L; Yan, W B; Yan, W C; Yan, Y H; Yang, H J; Yang, H X; Yang, L; Yang, Y; Yang, Y X; Ye, M; Ye, M H; Yin, J H; Yu, B X; Yu, C X; Yu, J S; Yuan, C Z; Yuan, W L; Yuan, Y; Yuncu, A; Zafar, A A; Zallo, A; Zeng, Y; Zhang, B X; Zhang, B Y; Zhang, C; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J J; Zhang, J L; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, K; Zhang, L; Zhang, X Y; Zhang, Y; Zhang, Y N; Zhang, Y H; Zhang, Y T; Zhang, Yu; Zhang, Z H; Zhang, Z P; Zhang, Z Y; Zhao, G; Zhao, J W; Zhao, J Y; Zhao, J Z; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, Q W; Zhao, S J; Zhao, T C; Zhao, Y B; Zhao, Z G; 
Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, W J; Zheng, Y H; Zhong, B; Zhou, L; Zhou, X; Zhou, X K; Zhou, X R; Zhou, X Y; Zhu, K; Zhu, K J; Zhu, S; Zhu, S H; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuang, J; Zotti, L; Zou, B S; Zou, J H
2015-01-01
A neutral structure in the $D\\bar{D}^{*}$ system around the $D\\bar{D}^{*}$ mass threshold is observed with a statistical significance greater than 10$\\sigma$ in the processes $e^{+}e^{-}\\rightarrow D^{+}D^{*-}\\pi^{0}+c.c.$ and $e^{+}e^{-}\\rightarrow D^{0}\\bar{D}^{*0}\\pi^{0}+c.c.$ at $\\sqrt{s}$ = 4.226 and 4.257 GeV in the BESIII experiment. The structure is denoted as $Z_{c}(3885)^{0}$. Assuming the presence of a resonance, its pole mass and width are determined to be ($3885.7^{+4.3}_{-5.7}$(stat.)$\\pm 8.4$(syst.))~MeV/$c^{2}$ and ($35^{+11}_{-12}$(stat.)$ \\pm 15$(syst.))~MeV, respectively. The Born cross sections are measured to be $\\sigma(e^{+}e^{-}\\to Z_{c}(3885)^{0}\\pi^{0}, Z_{c}(3885)^{0} \\to D\\bar{D}^{*})=(77 \\pm 13$(stat.)$\\pm 17$(syst.)) pb at 4.226 GeV and ($47 \\pm 9$(stat.)$ \\pm 10$(syst.)) pb at 4.257 GeV. The ratio of decay rates $\\frac{\\mathcal{B}({Z_{c}(3885)^{0} \\to D^{+}D^{*-}+c.c.})}{\\mathcal{B}({Z_{c}(3885)^{0} \\to D^{0}\\bar{D}^{*0}+c.c.})}$ is determined to be $0.96 \\pm 0.18$(stat.)$\\pm 0.1...
Observation of $\psi(4415)\to D \bar D{}^{*}_2(2460)$ decay using initial-state radiation
Pakhlova, G; Aihara, H; Arinstein, K; Aulchenko, V; Aushev, T; Bakich, A M; Balagura, V; Barberio, E; Bedny, I; Belous, K S; Bitenc, U; Bondar, A; Bracko, M; Brodzicka, J; Browder, T E; Chen, A; Chen, W T; Cheon, B G; Chiang, C C; Chistov, R; Cho, I S; Choi, Y; Dalseno, J; Danilov, M; Dash, M; Drutskoy, A; Eidelman, S; Epifanov, D; Gabyshev, N; Golob, B; Ha, H; Haba, J; Hayasaka, K; Hazumi, M; Heffernan, D; Hoshi, Y; Hou, W S; Hsiung, Y B; Hyun, H J; Inami, K; Ishikawa, A; Ishino, H; Itoh, R; Iwasaki, M; Iwasaki, Y; Joshi, N J; Kah, D H; Kang, J H; Kawasaki, T; Kibayashi, A; Kichimi, H; Kim, H J; Kim, H O; Kim, Y J; Kinoshita, K; Korpar, S; Krizan, P; Krokovny, P; Kumar, R; Kuo, C C; Kuzmin, A; Kwon, Y J; Lange, J S; Lee, M J; Lee, S E; Lesiak, T; Lin, S W; Liventsev, D; Mandl, F; Marlow, D; McOnie, S; Medvedeva, T; Miyake, H; Mizuk, R; Mohapatra, D; Moloney, G R; Nagasaka, Y; Nakano, E; Nakao, M; Nakazawa, H; Nishida, S; Nitoh, O; Ogawa, S; Ohshima, T; Okuno, S; Olsen, S L; Ozaki, H; Pakhlov, P; Park, H; Park, K S; Peak, L S; Pestotnik, R; Piilonen, L E; Poluektov, A; Sakai, Y; Schneider, O; Schwanda, C; Senyo, K; Shapkin, M; Shen, C P; Shibuya, H; Shiu, J G; Shwartz, B; Singh, J B; Somov, A; Stanic, S; Sumiyoshi, T; Takasaki, F; Tamai, K; Tanaka, M; Taylor, G N; Teramoto, Y; Tikhomirov, I; Uehara, S; Ueno, K; Uglov, T; Unno, Y; Uno, S; Usov, Yu; Varner, G; Vinokurova, A; Wang, C H; Wang, M Z; Wang, P; Wang, X L; Watanabe, Y; Won, E; Yabsley, B D; Yamaguchi, A; Yamashita, Y; Yamauchi, M; Yuan, C Z; Zhang, C C; Zhang, L M; Zhang, Z P; Zhilich, V; Zhulanov, V; Zupanc, A
2007-01-01
We report the first observation of the $\psi(4415)$ resonance in the reaction $e^+e^-\to D^0 D^-\pi^+$ and a measurement of its cross section in the center-of-mass energy range $4.0\,\mathrm{GeV}$ to $5.0\,\mathrm{GeV}$ with initial-state radiation. From a study of resonant structure in $\psi(4415)$ decay we conclude that the $\psi(4415)\to D^0 D^-\pi^+$ decay is dominated by $\psi(4415)\to D \bar D{}^{*}_2(2460)$. We obtain $\mathcal{B}(\psi(4415)\to D^0 D^-\pi^+_{\mathrm{non-resonant}})/\mathcal{B}(\psi(4415)\to D \bar D{}^{*}_2(2460)\to D^0 D^-\pi^+)<0.22$ at 90% C.L. The analysis is based on a data sample collected with the Belle detector with an integrated luminosity of 673 $\mathrm{fb}^{-1}$.
Sander, Renan S.; Costa, Guilherme S.; Ferreira, Silvio C.
2016-10-01
A major hurdle in the simulation of the steady state of epidemic processes is that the system will unavoidably visit an absorbing, disease-free state at sufficiently long times due to the finite size of the networks on which epidemics evolve. In the present work, we compare different quasistationary (QS) simulation methods in which the absorbing states are suitably handled and the thermodynamic limit of the original dynamics can be achieved. We analyze the standard QS (SQS) method, where the sampling is constrained to active configurations; the reflecting boundary condition (RBC), where the dynamics returns to the pre-absorbing configuration; and hub reactivation (HR), where the most connected vertex of the network is reactivated after a visit to an absorbing state. We apply the methods to the contact process (CP) and susceptible-infected-susceptible (SIS) models on regular and scale-free networks. The investigated methods yield the same epidemic threshold for both models. For CP, which undergoes a standard collective phase transition, the methods are equivalent. For SIS, whose phase transition is ruled by hub mutual reactivation, the SQS and HR methods are able to capture localized epidemic phases while RBC is not. We also apply the autocorrelation time as a tool to characterize the phase transition and observe that this analysis provides the same finite-size scaling exponents for the critical relaxation time for the investigated methods. Finally, we verify the equivalence between the RBC method and a weak external field for epidemics on networks.
Hejlesen, Mads Mølholm; Spietz, Henrik J.; Walther, Jens Honore
2014-01-01
In recent work we have developed a new FFT-based Poisson solver, which uses regularized Green's functions to obtain arbitrarily high order convergence to the unbounded Poisson equation. The high order Poisson solver has been implemented in an unbounded particle-mesh based vortex method which uses a re-meshing of the vortex particles to ensure the convergence of the method. Furthermore, we use a re-projection of the vorticity field to include the constraint of a divergence-free stream function, which is essential for the underlying Helmholtz decomposition and ensures a divergence-free vorticity field. The high order ... with the principal axis of the strain rate tensor. We find that the dynamics of the enstrophy density is dominated by the local flow deformation and axis of rotation, which is used to infer some concrete tendencies related to the topology of the vorticity field.
Schöpfer, F.; Schuster, T.; Louis, A. K.
2008-10-01
The split feasibility problem (SFP) consists of finding a common point in the intersection of finitely many convex sets, where some of the sets arise by imposing convex constraints in the range of linear operators. We are concerned with its solution in Banach spaces. To this end we generalize the CQ algorithm of Byrne with Bregman and metric projections to obtain an iterative solution method. In case the sets projected onto are contaminated with noise, we show that a discrepancy principle renders this algorithm a regularization method. We measure the distance between convex sets by local versions of the Hausdorff distance, which, in contrast to the standard Hausdorff distance, allow us to measure the distance between unbounded sets. Hereby we prove a uniform continuity result for both kinds of projections. The performance of the algorithm is demonstrated with some numerical experiments.
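The projected-gradient structure of the CQ iteration is easy to sketch in the Euclidean (Hilbert-space) special case, where both Bregman and metric projections reduce to ordinary orthogonal projections. The sets (a box and a ball), the operator and the step size below are illustrative stand-ins, not the Banach-space machinery of the paper:

```python
import numpy as np

# Minimal Euclidean sketch of the CQ iteration for the split feasibility
# problem: find x in C with A x in Q. Here C is the box [0,1]^n and Q a
# ball, both with closed-form metric projections. Step size gamma < 2/||A||^2.

def project_box(x, lo=0.0, hi=1.0):
    return np.clip(x, lo, hi)

def project_ball(y, center, radius):
    d = y - center
    dist = np.linalg.norm(d)
    return y if dist <= radius else center + radius * d / dist

def cq_iterate(A, center, radius, x0, steps=1000):
    gamma = 1.0 / np.linalg.norm(A, 2) ** 2
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        y = A @ x
        x = project_box(x - gamma * A.T @ (y - project_ball(y, center, radius)))
    return x
```

When the problem is feasible, the iterates converge to a point of C whose image lies in Q; otherwise they converge to a minimizer of the proximity function.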
In Kang, Suk; Khambampati, Anil Kumar; Jeon, Min Ho; Kim, Bong Seok; Kim, Kyung Youn
2016-02-01
Electrical impedance tomography (EIT) is a non-invasive imaging technique that can be used as a bedside monitoring tool for human thorax imaging. EIT has high temporal resolution but at the same time suffers from poor spatial resolution due to the ill-posedness of the inverse problem. Often regularization methods are used as a penalty term in the cost function to stabilize sudden changes in resistivity. In human thorax monitoring with conventional Tikhonov-type regularization, the reconstructed image is smoothed between the heart and the lungs, making it difficult to distinguish the exact boundaries of the lungs and the heart. Structural information about the object, when available beforehand, can be incorporated into the regularization method to improve the spatial resolution and to help create clear and distinct boundaries between the objects. However, the boundary of the heart changes rapidly during the cardiac cycle, so no information concerning the exact boundary of the heart is available. Therefore, to improve the spatial resolution for human thorax monitoring during the cardiac cycle, a sub-domain based regularization method is proposed in this paper, assuming the lungs and part of the background region are known. In the proposed method, the regularization matrix is modified anisotropically to include sub-domains as prior information, and the regularization parameter is assigned different weights in each sub-domain. Numerical simulations and phantom experiments for 2D human thorax monitoring are performed to evaluate the performance of the proposed regularization method. The results show better reconstruction performance with the proposed regularization method.
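The idea of assigning a different regularization weight to each sub-domain can be sketched with a toy diagonal regularizer. The labels, weights and the linear sensitivity matrix `J` below are hypothetical illustrations, not the EIT forward model of the paper:

```python
import numpy as np

# Illustrative sketch (not the authors' exact formulation): a sub-domain
# weighted regularization matrix for an N-pixel domain. Labels and weights
# are hypothetical: 0 = background (weakly penalized), 1 = lungs (assumed
# known, so strongly penalized toward the prior).

def subdomain_matrix(labels, weights):
    """Diagonal regularization matrix with a different weight per sub-domain."""
    return np.diag([weights[l] for l in labels])

def reconstruct(J, v, labels, weights):
    """Weighted Tikhonov solution of min ||J x - v||^2 + ||R x||^2."""
    R = subdomain_matrix(labels, weights)
    return np.linalg.solve(J.T @ J + R.T @ R, J.T @ v)
```

Pixels in a strongly weighted sub-domain are pulled hard toward the prior (here zero), while weakly weighted pixels follow the data.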
Balima, O., E-mail: ofbalima@gmail.com [Département des Sciences Appliquées, Université du Québec à Chicoutimi, 555 bd de l’Université, Chicoutimi, QC, Canada G7H 2B1 (Canada); Favennec, Y. [LTN UMR CNRS 6607 – Polytech’ Nantes – La Chantrerie, Rue Christian Pauc, BP 50609 44 306 Nantes Cedex 3 (France); Rousse, D. [Chaire de recherche industrielle en technologies de l’énergie et en efficacité énergétique (t3e), École de technologie supérieure, 201 Boul. Mgr, Bourget Lévis, QC, Canada G6V 6Z3 (Canada)
2013-10-15
Highlights: •New strategies to improve the accuracy of the reconstruction through mesh and finite element parameterization. •Use of gradient filtering through an alternative inner product within the adjoint method. •An integral form of the cost function is used to make the reconstruction compatible with all finite element formulations, continuous and discontinuous. •A gradient-based algorithm with the adjoint method is used for the reconstruction. -- Abstract: Optical tomography is mathematically treated as a non-linear inverse problem where the optical properties of the probed medium are recovered through the minimization of the errors between the experimental measurements and their predictions with a numerical model at the locations of the detectors. Given the ill-posed behavior of the inverse problem, regularization must be applied, and Tikhonov-type penalization is the most commonly used in optical tomography applications. This paper introduces an optimized approach for optical tomography reconstruction with the finite element method. An integral form of the cost function is used to take into account the surfaces of the detectors and make the reconstruction compatible with all finite element formulations, continuous and discontinuous. Through a gradient-based algorithm, where the adjoint method is used to compute the gradient of the cost function, an alternative inner product is employed for preconditioning the reconstruction algorithm. Moreover, an appropriate re-parameterization of the optical properties is performed. These regularization strategies are compared with the classical Tikhonov penalization. It is shown that both the re-parameterization and the use of the Sobolev cost function gradient are efficient for solving such an ill-posed inverse problem.
Provencher, Stephen W.
1982-09-01
CONTIN is a portable Fortran IV package for inverting noisy linear operator equations. These problems occur in the analysis of data from a wide variety of experiments. They are generally ill-posed problems, which means that errors in an unregularized inversion are unbounded. Instead, CONTIN seeks the optimal solution by incorporating parsimony and any statistical prior knowledge into the regularizer, and absolute prior knowledge into equality and inequality constraints. This can greatly increase the resolution and accuracy of the solution. CONTIN is very flexible, consisting of a core of about 50 subprograms plus 13 small "USER" subprograms, which the user can easily modify to specify special-purpose constraints, regularizers, operator equations, simulations, statistical weighting, etc. Special collections of USER subprograms are available for photon correlation spectroscopy, multicomponent spectra, and Fourier-Bessel, Fourier and Laplace transforms. Numerically stable algorithms are used throughout CONTIN. A fairly precise definition of information content in terms of degrees of freedom is given. The regularization parameter can be chosen automatically on the basis of an F-test and confidence region. The interpretation of the latter, and of error estimates based on the covariance matrix of the constrained regularized solution, are discussed. The strategies, methods and options in CONTIN are outlined. The program itself is described in the following paper.
Regular pipeline maintenance of gas pipeline using technical operational diagnostics methods
Volentic, J. [Gas Transportation Department, Slovensky plynarensky priemysel, Slovak Gas Industry, Bratislava (Slovakia)
1997-12-31
Slovensky plynarensky priemysel (SPP) operated 17 487 km of gas pipelines in 1995. The length of the long-line pipelines reached 5 191 km, and the distribution network comprised 12 296 km. The international transit system of long-line gas pipelines comprised 1 939 km of pipelines of various dimensions. The described scale of the transport and distribution system represents a multibillion investment buried in the ground, exposed to environmental influences and to pipeline operational stresses. In spite of all technical and maintenance measures performed on operating gas pipelines, gradual ageing takes place anyway, expressed as degradation of both the steel tube and the anti-corrosion coating. Within a certain time horizon, a consistent and regular application of methods and means of in-service technical diagnostics and rehabilitation of existing pipeline systems makes it possible to save substantial investment funds by postponing the need for funds for a complete or partial reconstruction or a new construction of a specific gas section. The purpose of this presentation is to report on the implementation of the programme of in-service technical diagnostics of gas pipelines within the framework of regular maintenance of SPP s.p. Bratislava high pressure gas pipelines. (orig.) 6 refs.
Zhao, Lu; Zhu, Shi-Lin
2014-01-01
In the framework of the one-boson-exchange model, we have calculated the effective potentials between two heavy mesons $B\bar{B}^{*}$ and $D\bar{D}^{*}$ from the t- and u-channel $\pi$, $\eta$, $\rho$, $\omega$ and $\sigma$ meson exchange for four sets of quantum numbers: $I=0$, $J^{PC}=1^{++}$; $I=0$, $J^{PC}=1^{+-}$; $I=1$, $J^{PC}=1^{++}$; $I=1$, $J^{PC}=1^{+-}$. We keep the recoil corrections to the $B\bar{B}^{*}$ and $D\bar{D}^{*}$ systems up to $O(\frac{1}{M^2})$. The spin-orbit force appears at $O(\frac{1}{M})$, which turns out to be important for very loosely bound molecular states. Our numerical results show that the momentum-related corrections are unfavorable to the formation of molecular states in the $I=0$, $J^{PC}=1^{++}$ and $I=1$, $J^{PC}=1^{+-}$ channels of the $D\bar{D}^{*}$ system.
REGULAR METHOD FOR SYNTHESIS OF BASIC BENT-SQUARES OF RANDOM ORDER
A. V. Sokolov
2016-01-01
Full Text Available The paper is devoted to the construction of the class of maximally non-linear Boolean bent-functions of any length N = 2^k (k = 2, 4, 6, …), on the basis of their spectral representation, the Agievich bent squares. These perfect algebraic constructions are used as a basis for building many new cryptographic primitives, such as generators of pseudo-random key sequences, cryptographic S-boxes, etc. Bent-functions also find application in the construction of C-codes in systems with code division multiple access (CDMA) to provide the lowest possible value of the Peak-to-Average Power Ratio (PAPR = 1), as well as in the construction of error-correcting codes and systems of orthogonal biphasic signals. All the numerous applications of bent-functions rely on the theory of their synthesis. However, regular methods for complete class synthesis of bent-functions of any length N = 2^k are currently unknown. The paper proposes a regular synthesis method for the basic Agievich bent squares of any order n, based on a regular operator of dyadic shift. A classification of the complete set of spectral vectors of lengths l = 8, 16, …, based on the criterion of the maximum absolute value and the set of absolute values of spectral components, has been carried out. It has been shown that any spectral vector can serve as a basis for building bent squares. Results of the synthesis of the Agievich bent squares of order n = 8 have been generalized, revealing that there are only 3 basic bent squares for this order, while the other 5 can be obtained with the help of the step-cyclic shift operation. All the basic bent squares of order n = 16 have been synthesized, which allows constructing bent-functions of length N = 256. The obtained basic bent squares can be used either for direct synthesis of bent-functions and their practical application or for further research aimed at synthesizing new structures of bent squares of orders n = 16, 32, 64, …
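The defining property of a bent function, that its Walsh-Hadamard spectrum has constant magnitude 2^(k/2), can be verified directly on a small example. The sketch below checks the classic Maiorana-McFarland function f(x) = x1·x2 XOR x3·x4 (not Sokolov's dyadic-shift construction):

```python
# Walsh-Hadamard spectrum check for a known bent function of k = 4 variables
# (length N = 2^4 = 16): every spectral component must have magnitude 2^(k/2) = 4.

def walsh_hadamard_spectrum(f_vals):
    """W(w) = sum over x of (-1)^(f(x) XOR <w, x>)."""
    N = len(f_vals)
    return [sum((-1) ** (f_vals[x] ^ (bin(w & x).count("1") % 2)) for x in range(N))
            for w in range(N)]

# f(x) = b3*b2 XOR b1*b0, where b3..b0 are the bits of x (Maiorana-McFarland)
f = [((x >> 3) & (x >> 2) & 1) ^ ((x >> 1) & x & 1) for x in range(16)]
spectrum = walsh_hadamard_spectrum(f)
```

For a non-bent function (e.g. a linear one), the spectrum instead concentrates all its mass in a single component of magnitude N.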
A three-dimensional sound ray tracing method by deploying regular tetrahedrons
JIANG Wei; LI Taibao
2005-01-01
A sound ray tracing algorithm is presented, which helps to rapidly find sound ray trajectories in three-dimensional (3-D) space. At each step of ray tracing, a small regular tetrahedron is constructed in front of the ray, so that the sound speed field inside may be approximately regarded as linear. Since a ray trajectory in a linear sound speed field always lies on a plane, it may be obtained by the two-dimensional (2-D) sound ray tracing method of deploying triangles. The theoretical derivation is given and a numerical model is discussed. It is shown that the algorithm is fast and precise. It is also more concise and reliable than traditional 3-D algorithms, and may be used to avoid the loss of precision caused by acoustic refraction in 3-D ultrasound computerized tomography.
Comparison of two regularization methods for soft x-ray tomography at Tore Supra
Jardin, A.; Mazon, D.; Bielecki, J.
2016-04-01
Soft x-ray (SXR) emission in the range 0.1-20 keV is widely used to obtain valuable information on tokamak plasma physics, such as particle transport, magnetic configuration or magnetohydrodynamic activity. In particular, 2D tomography is the usual plasma diagnostic for accessing the local SXR emissivity. The tomographic inversion is traditionally performed from line-integrated measurements of two or more cameras viewing the plasma in a poloidal cross-section, as at Tore Supra (TS). Unfortunately, due to the limited number of measured projections and the presence of noise, the tomographic reconstruction of SXR emissivity is a mathematically ill-posed problem. Thus, obtaining reliable results from the tomographic inversion is a very challenging task. In order to perform the reconstruction, inversion algorithms implemented in present tokamaks use a priori information as additional constraints imposed on the plasma SXR emissivity. Among several potential inversion methods, some have been identified as well suited to tokamak plasmas. The purpose of this work is to compare two promising inversion methods: the minimum Fisher information method, already used at TS and planned for the WEST configuration, and the alternative 2nd order Phillips-Tikhonov regularization with a smoothness constraint imposed on the second derivative norm. The respective accuracy of both reconstruction methods, as well as their overall robustness and computational time, are studied using several synthetic SXR emissivity profiles. Finally, a real case is studied through tomographic reconstruction from the TS SXR database.
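The Phillips-Tikhonov idea of penalizing the discrete second derivative can be sketched on a toy 1-D problem. The cumulative "chord" operator `A`, the smooth emissivity profile and the value of `lam` are illustrative stand-ins, not the Tore Supra geometry:

```python
import numpy as np

# Toy 2nd-order (Phillips-Tikhonov) regularization: recover a smooth
# "emissivity" profile from noisy cumulative line integrals by penalizing
# the squared norm of the discrete second derivative.

n = 50
x_true = np.exp(-((np.arange(n) - 25) ** 2) / 40.0)   # smooth profile
A = np.tril(np.ones((n, n)))                          # toy line-integration operator
rng = np.random.default_rng(0)
b = A @ x_true + 0.05 * rng.standard_normal(n)        # noisy measurements

L2 = np.diff(np.eye(n), 2, axis=0)                    # second-difference operator
lam = 10.0                                            # illustrative choice
x_rec = np.linalg.solve(A.T @ A + lam * L2.T @ L2, A.T @ b)
```

The unregularized inverse amplifies the measurement noise (here A^-1 is a differencing operator), while the smoothness penalty suppresses it at the cost of a small bias on smooth profiles.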
Niu, Xiao-Dong; Hyodo, Shi-Aki; Munekata, Toshihisa; Suga, Kazuhiko
2007-09-01
It is well known that the Navier-Stokes equations cannot adequately describe gas flows in the transition and free-molecular regimes. In these regimes, the Boltzmann equation (BE) of kinetic theory is invoked to govern the flows. However, this equation cannot be solved easily, either by analytical techniques or by numerical methods. Hence, in order to efficiently maneuver around this equation for modeling microscale gas flows, a kinetic lattice Boltzmann method (LBM) has been introduced in recent years. This method is regarded as a numerical approach for solving the BE in discrete velocity space with Gauss-Hermite quadrature. In this paper, a systematic description of the kinetic LBM, including the lattice Boltzmann equation, the diffuse-scattering boundary condition for gas-surface interactions, and definition of the relaxation time, is provided. To capture the nonlinear effects due to the high-order moments and wall boundaries, an effective relaxation time and a modified regularization procedure of the nonequilibrium part of the distribution function are further presented based on previous work [Guo et al., J. Appl. Phys. 99, 074903 (2006); Shan et al., J. Fluid Mech. 550, 413 (2006)]. The capability of the kinetic LBM of simulating microscale gas flows is illustrated based on the numerical investigations of micro Couette and force-driven Poiseuille flows.
Regularized Newton Methods for X-ray Phase Contrast and General Imaging Problems
Maretzke, Simon; Krenkel, Martin; Salditt, Tim; Hohage, Thorsten
2015-01-01
Like many other advanced imaging methods, x-ray phase contrast imaging and tomography require mathematical inversion of the observed data to obtain real-space information. While an accurate forward model describing the generally nonlinear image formation from a given object to the observations is often available, explicit inversion formulas are typically not known. Moreover, the measured data might be insufficient for stable image reconstruction, in which case it has to be complemented by suitable a priori information. In this work, regularized Newton methods are presented as a general framework for the solution of such ill-posed nonlinear imaging problems. As a proof of principle, the approach is applied to x-ray phase contrast imaging in the near-field propagation regime. Simultaneous recovery of the phase and amplitude from a single near-field diffraction pattern is demonstrated for the first time. The presented methods further permit all-at-once phase contrast tomography, i.e. simultaneous phase retriev...
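A minimal instance of a regularized Newton (Levenberg-Marquardt type) iteration can be sketched on a toy nonlinear inverse problem. The exponential forward model, initial guess and damping schedule are illustrative assumptions, not the phase-contrast model of the paper:

```python
import numpy as np

# Regularized Gauss-Newton sketch: recover p = (a, b) in y = a*exp(-b*t)
# from noisy data by solving damped normal equations at each step,
# (J^T J + alpha I) dp = J^T r, with alpha reduced per iteration.

t = np.linspace(0, 1, 20)
p_true = np.array([2.0, 1.5])
rng = np.random.default_rng(1)
y = p_true[0] * np.exp(-p_true[1] * t) + 0.01 * rng.standard_normal(t.size)

def forward(p):
    return p[0] * np.exp(-p[1] * t)

def jacobian(p):
    e = np.exp(-p[1] * t)
    return np.column_stack([e, -p[0] * t * e])

p = np.array([1.0, 0.5])          # initial guess
alpha = 1.0                       # regularization strength
for _ in range(20):
    r = y - forward(p)
    J = jacobian(p)
    p = p + np.linalg.solve(J.T @ J + alpha * np.eye(2), J.T @ r)
    alpha *= 0.5
```

The damping term stabilizes the early, far-from-solution steps; as alpha shrinks, the iteration approaches plain Gauss-Newton near the solution.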
REGULARIZATION METHODS FOR THE NUMERICAL SOLUTION OF THE DIVERGENCE EQUATION ∇·u = f
Alexandre Caboussat; Roland Glowinski
2012-01-01
The problem of finding an L∞-bounded two-dimensional vector field whose divergence is given in L2 is discussed from the numerical viewpoint. A systematic way to find such a vector field is to introduce a non-smooth variational problem involving an L∞-norm. To solve this problem from the calculus of variations, we use a method relying on a well-chosen augmented Lagrangian functional and on a mixed finite element approximation. An Uzawa algorithm allows one to decouple the differential operators from the nonlinearities introduced by the L∞-norm, and leads to the solution of a sequence of Stokes-like systems and of an infinite family of local nonlinear problems. A simpler method, based on an L2-regularization, is also considered. Numerical experiments are performed, making use of appropriate numerical integration techniques when non-smooth data are considered; they allow us to compare the merits of the two approaches discussed in this article and to show the ability of the related methods at capturing L∞-bounded solutions.
GHARAKHANI,ADRIN; WOLFE,WALTER P.
1999-10-01
the collocation points. Unfortunately, the development of elements with C^1 continuity for the potential jumps is quite complicated in 3-D. To this end, the application of Galerkin ''smoothing'' to the boundary integral equations removes the singularity at the collocation points, thus allowing the use of C^0 elements and potential jump distributions [4]. Successful implementations of the Galerkin Boundary Element Method for 2-D conduction [4] and elastostatic [5] problems have been reported in the literature. Thus far, the singularity removal algorithms have been based on a posteriori and mathematically complex reasoning, which has required Taylor series expansion and limit processes. The application of these strategies to 3-D is expected to be significantly more complicated. In this report, we develop the formulation for a ''Regularized'' Galerkin Boundary Element Method (RGBEM). The regularization procedure involves simple manipulations using vector calculus to reduce the singularity of the hypersingular boundary integral equation by two orders for C^0 elements. For the case of linear potential jump distributions over plane triangles the regularized integral simplifies considerably to a double surface integral of the Green's function. This is the case implemented and tested in this report. Using the example problem of flow normal to a square flat plate, the linear RGBEM predictions are demonstrated here to be more accurate, to converge faster, and to be significantly less spiked than the solutions obtained by the vortex loop method.
Paynter, R.W., E-mail: Royston_Paynter@emt.inrs.ca [INRS Energie Materiaux Telecommunications, 1650 boul. Lionel-Boulet, Varennes, Quebec (Canada)
2012-01-15
Highlights: • Regularization improved the accuracy and reproducibility of ARXPS depth profiles. • The 'S-curve' and 'L-curve' regularization parameters were shown to be equivalent. • 'S-curve' parameterization was optimal in 50% of cases for the MaxEnt regularizer. - Abstract: Starting from posited input depth profiles of silicon oxide on silicon, 100 sets of noisy simulated ARXPS data were created for each oxide layer thickness of 3, 6, 9, 12, 15, 18, 21, 24 and 27 Å. Oxygen depth profiles were then recovered from the noisy simulated data using regularized inversion methods, including maximum entropy and Tikhonov regularization. Three regularization parameters were used: one determined by the S-curve method, one determined by the L-curve method and a third corresponding to the closest correspondence between the input and extracted profiles. The various regularization schemes evaluated were ranked with respect to their ability to reproduce the input profile.
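The L-curve criterion mentioned above can be sketched for ordinary Tikhonov inversion: scan the regularization parameter, trace (log residual norm, log solution norm), and take the point of maximum discrete curvature as the corner. The test matrix in the usage note is a generic ill-conditioned example, not an ARXPS model:

```python
import numpy as np

def tikhonov(A, b, lam):
    """Tikhonov solution of min ||A x - b||^2 + lam ||x||^2."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def l_curve_corner(A, b, lams):
    """Pick lambda at the maximum-curvature corner of the log-log L-curve."""
    pts = np.array([
        (np.log(np.linalg.norm(A @ x - b)), np.log(np.linalg.norm(x)))
        for x in (tikhonov(A, b, lam) for lam in lams)
    ])
    d1 = np.gradient(pts, axis=0)
    d2 = np.gradient(d1, axis=0)
    curv = np.abs(d1[:, 0] * d2[:, 1] - d1[:, 1] * d2[:, 0]) \
         / (d1[:, 0] ** 2 + d1[:, 1] ** 2) ** 1.5
    return lams[1:-1][int(np.argmax(curv[1:-1]))]  # ignore the endpoints
```

For a well-behaved L-curve, the corner lambda balances the fast-growing solution norm against the residual norm and typically lands close to the error-optimal value.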
Dashan SUI; Zhenshan CUI
2009-01-01
The inverse heat conduction method is one of the methods used to identify casting simulation parameters. A new inverse method is presented based on the Tikhonov regularization theory. An appropriate regularized functional is established, and the functional is solved using the sensitivity coefficient and the Newton-Raphson iteration method. Moreover, orthogonal experimental design is used to estimate the appropriate initial value and variation domain of each variable, to decrease the number of iterations and improve the identification accuracy and efficiency. A detailed case of AlSi7Mg sand mold casting is illustrated, and a temperature measurement experiment was carried out. The physical properties of the sand mold and the interfacial heat transfer coefficient were identified at the same time. The results indicate that the new regularization method is efficient in overcoming the ill-posedness of the inverse heat conduction problem and improving the stability and accuracy of the solutions.
Void Structures in Regularly Patterned ZnO Nanorods Grown with the Hydrothermal Method
Yu-Feng Yao
2014-01-01
The void structures and related optical properties after thermal annealing with ambient oxygen in regularly patterned ZnO nanorod (NR) arrays grown with the hydrothermal method are studied. As the thermal annealing temperature increases, void distribution starts from the bottom and extends to the top of an NR in the vertical (c-axis) growth region. When the annealing temperature is higher than 400°C, void distribution spreads into the lateral (m-axis) growth region. Photoluminescence measurement shows that the ZnO band-edge emission, in contrast to defect emission in the yellow-red range, is the strongest under the n-ZnO NR process conditions of 0.003 M Ga-doping concentration and 300°C thermal annealing temperature with ambient oxygen. Energy dispersive X-ray spectroscopy data indicate that the concentration of hydroxyl groups in the vertical growth region is significantly higher than that in the lateral growth region. During thermal annealing, hydroxyl groups are desorbed from the NR, leaving anion vacancies that react with cation vacancies to form voids.
3D DC Resistivity Inversion with Topography Based on Regularized Conjugate Gradient Method
Jian-ke Qiang
2013-01-01
During the past decades, we have observed a strong interest in 3D DC resistivity inversion and imaging with complex topography. In this paper, we implement 3D DC resistivity inversion based on the regularized conjugate gradient method with FEM. The Fréchet derivative is assembled with the electric potential in order to speed up the inversion process based on the reciprocity theorem. In this study, we also analyze the sensitivity of the electric potential on the earth's surface to the conductivity of each cell underground and introduce an optimized weighting function to produce a new sensitivity matrix. The synthetic model study shows that this optimized weighting function helps to improve the resolution of deep anomalies. By incorporating topography into the inversion, artificial anomalies actually caused by topography can be eliminated. As a result, this algorithm can potentially be applied to process DC resistivity data collected in mountainous areas. Our synthetic model study also shows that the convergence is stable and the computation fast.
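The regularized conjugate gradient step itself can be sketched as plain CG applied to the Tikhonov normal equations; `J` below is a generic sensitivity matrix, not the Fréchet derivative of the resistivity problem:

```python
import numpy as np

# CG applied to the regularized normal equations (J^T J + lam I) m = J^T d,
# without ever forming J^T J explicitly (only matrix-vector products).

def regularized_cg(J, d, lam, iters=200, tol=1e-10):
    n = J.shape[1]
    m = np.zeros(n)
    apply_M = lambda v: J.T @ (J @ v) + lam * v   # SPD operator
    r = J.T @ d - apply_M(m)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Mp = apply_M(p)
        a = rs / (p @ Mp)
        m += a * p
        r -= a * Mp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return m
```

Avoiding the explicit product J^T J is what makes this practical for large sensitivity matrices, where only the actions of J and J^T are available.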
A multiresolution method for solving the Poisson equation using high order regularization
Hejlesen, Mads Mølholm; Walther, Jens Honore
2016-01-01
We present a novel high order multiresolution Poisson solver based on regularized Green's function solutions to obtain exact free-space boundary conditions while using fast Fourier transforms for computational efficiency. Multiresolution is achieved through local refinement patches and regularized Green's functions corresponding to the difference in the spatial resolution between the patches. The full solution is obtained by utilizing the linearity of the Poisson equation, enabling superposition of solutions. We show that the multiresolution Poisson solver produces convergence rates that correspond to the regularization order of the derived Green's functions.
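A periodic toy version of a spectrally regularized Poisson solve illustrates the mechanism; the Gaussian spectral filter stands in for the regularized Green's function, and the free-space (domain-doubling) part of the actual solver is omitted:

```python
import numpy as np

# Toy spectral Poisson solve with a regularized kernel: on a periodic 2-D
# grid, damp the Green's function 1/k^2 by a Gaussian filter exp(-(sigma k)^2/2)
# of regularization width sigma. phi then satisfies -lap(phi) = filtered rho.

n, L = 64, 2 * np.pi
x = np.linspace(0, L, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
rho = np.sin(X) * np.cos(2 * Y)              # zero-mean source

k = np.fft.fftfreq(n, d=L / n) * 2 * np.pi   # angular wavenumbers
KX, KY = np.meshgrid(k, k, indexing="ij")
k2 = KX**2 + KY**2
sigma = 2 * (L / n)                          # regularization width ~ 2 cells

G = np.where(k2 > 0, np.exp(-0.5 * sigma**2 * k2) / np.maximum(k2, 1e-30), 0.0)
phi = np.real(np.fft.ifft2(G * np.fft.fft2(rho)))
```

The filter leaves well-resolved modes essentially untouched while smoothly suppressing modes near the grid scale, which is what yields the high formal convergence order of the regularized kernels.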
A fast and adaptive method for complex-valued SAR image denoising based on l_k norm regularization
WANG WeiWei; WANG ZhengMing; YUAN ZhenYu; LI MingShan
2009-01-01
This paper develops a fast and adaptive method for complex-valued SAR image denoising based on l_k norm regularization, viewed from the perspective of parameter estimation. We first establish the relationship between the denoising model and the ill-posed inverse problem via convex half-quadratic regularization, and compare the difference between the estimator variance obtained from the iterative formula and the biased Cramér-Rao bound, which exposes a theoretical flaw in existing methods of parameter selection. Then, the analytic expression of the model solution as a function of the regularization parameter is obtained. On this basis, we study the selection of the regularization parameter through minimizing the mean-square error of the estimators and obtain the final analytic expression, which results in direct calculation, high processing speed, and adaptability. Finally, the effect of regularization parameter selection on the resolution of point targets is analyzed. Experimental results on simulated and real complex-valued SAR images illustrate the validity of the proposed method.
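The half-quadratic scheme can be sketched as iteratively reweighted least squares for an l_p-penalized 1-D denoising problem; the real-valued signal and all parameters are illustrative, unlike the paper's complex-valued SAR data and adaptive parameter rule:

```python
import numpy as np

# Half-quadratic / IRLS sketch for l_p ("l_k norm") regularized denoising:
#   min_x ||x - y||^2 + lam * sum |D x|^p,  0 < p <= 2,
# with D the first-difference operator. Each pass solves a weighted
# least-squares problem with weights w = (|D x|^2 + eps)^(p/2 - 1).

def lp_denoise(y, lam=0.3, p=1.0, iters=30, eps=1e-6):
    n = y.size
    D = np.diff(np.eye(n), axis=0)      # first-difference operator
    x = y.copy()
    for _ in range(iters):
        w = (np.abs(D @ x) ** 2 + eps) ** (p / 2 - 1)
        x = np.linalg.solve(np.eye(n) + lam * D.T @ (w[:, None] * D), y)
    return x
```

With p = 1 the reweighting strongly flattens nearly constant regions while still allowing sharp jumps, the usual edge-preserving behavior of half-quadratic schemes.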
Babanov, Yu.A., E-mail: babanov@imp.uran.ru [M.N. Miheev Institute of Metal Physics, Ural Branch, Russian Academy of Sciences, Ekaterinburg 620990 (Russian Federation); Ponomarev, D.A.; Ustinov, V.V. [M.N. Miheev Institute of Metal Physics, Ural Branch, Russian Academy of Sciences, Ekaterinburg 620990 (Russian Federation); Baranov, A.N. [M.V. Lomonosov Moscow State University, Moscow 119991 (Russian Federation); Zubavichus, Ya.V. [Russian Research Centre “Kurchatov Institute”, 123182 Moscow (Russian Federation)
2016-08-15
Highlights: • A method for determining bond lengths from combined EXAFS spectra of solid oxide solutions is proposed. • We demonstrate high resolution in r-space of closely spaced atoms in the Periodic Table. • These results were obtained without any assumptions concerning interatomic distances for multi-component systems. • Coordinates of ions in the solid solution with rock salt structure are determined. - Abstract: The regularization method for solving ill-posed problems is used to determine five partial interatomic distances on the basis of two combined EXAFS spectra. The mathematical algorithm and experimental results of the EXAFS analysis for Ni_cZn_{1−c}O (c = 0.0, 0.3, 0.5, 0.7, 1.0) solid solutions with the rock salt (rs) crystal structure are discussed. Samples were synthesized from the binary oxide powders at a pressure of 7.7 GPa and temperatures of 1450–1650 K. The measurements were performed using synchrotron facilities (Russian Research Centre “Kurchatov Institute”, Moscow). The Ni and Zn K absorption spectra were recorded in transmission mode at room temperature. It is shown that the ideal rock salt lattice is distorted and long-range order exists only on average (Vegard law). In order to determine the coordinates of ions in the solid solution with rock salt structure, we used the Pauling model. The simulation is performed for a cluster of 343,000 oxide ions. The distribution functions for ion pairs (Ni−O, Ni−Ni, Ni−Zn, Zn−Zn, Zn−O, O−O) as functions of distance are obtained. The width of the Gaussian distribution function is determined by the difference of the radii of the metal ions. The results are consistent with both X-ray diffraction data and EXAFS spectroscopy.
J. Awrejcewicz; A.V. Krysko; J. Mrozowski; O.A. Saltykova; M.V. Zhigalov
2011-01-01
Chaotic vibrations of flexible non-linear Euler–Bernoulli beams subjected to harmonic load and with various boundary conditions (symmetric and non-symmetric) are studied in this work. Reliability of the obtained results is verified by the finite difference method (FDM) and the finite element method (FEM) with the Bubnov–Galerkin approximation for various boundary conditions and various dynamic regimes (regular and non-regular). The influence of boundary conditions on the dynamics of Euler–Bernoulli beams is the main focus of the study; dynamic behavior versus the control parameters {ωp, q0} is reported, and scenarios of the system's transition into chaos are illustrated.
Areej M. Abduldaim
2013-01-01
Full Text Available We introduced and studied -regular modules as a generalization of -regular rings to modules, as well as of regular modules (in the sense of Fieldhouse). An -module is called -regular if for each and , there exist and a positive integer such that . The notion of -pure submodules was introduced to generalize pure submodules, and it is proved that an -module is -regular if and only if every submodule of is -pure iff is a -regular -module for each maximal ideal of . Many characterizations and properties of -regular modules are given. An -module is -regular iff is a -regular ring for each iff is a -regular ring for every finitely generated module . If is a -regular module, then .
Solution of inverse heat conduction problem using the Tikhonov regularization method
Duda, Piotr
2017-02-01
Ill-posed problems are hard to solve because the calculated temperatures are very sensitive to errors made while computing the "measured" temperatures or performing real-time measurements. These errors can create temperature oscillations, which can cause an unstable solution. To overcome such difficulties, a variety of techniques have been proposed in the literature, including regularization, future time steps, and smoothing digital filters. In this paper, Tikhonov regularization is applied to stabilize the solution of the inverse heat conduction problem. The impact on the stability and accuracy of the inverse solution is demonstrated.
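The Tikhonov technique named in this abstract can be sketched in a generic linear setting (a minimal illustration, not the paper's heat-conduction implementation; the forward matrix below is an arbitrary ill-conditioned example):

```python
import numpy as np

def tikhonov_solve(A, b, lam):
    """Minimize ||A x - b||^2 + lam * ||x||^2 via the regularized normal equations."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

# Ill-conditioned toy forward model (illustrative only)
rng = np.random.default_rng(0)
A = np.vander(np.linspace(0, 1, 40), 8, increasing=True)  # nearly collinear columns
x_true = rng.standard_normal(8)
b = A @ x_true + 1e-3 * rng.standard_normal(40)  # noisy "measurements"

x_reg = tikhonov_solve(A, b, lam=1e-8)  # small lam damps noise amplification
```

Increasing `lam` trades fidelity to the data for stability, which is exactly the oscillation-suppression role the abstract describes.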
A Mixed L2 Norm Regularized HRF Estimation Method for Rapid Event-Related fMRI Experiments
Yu Lei
2013-01-01
Full Text Available Brain state decoding or “mind reading” via multivoxel pattern analysis (MVPA) has become a popular focus of functional magnetic resonance imaging (fMRI) studies. In brain decoding, the stimulus presentation rate is increased as much as possible to collect many training samples and obtain an effective and reliable classifier or computational model. However, for extremely rapid event-related experiments, the blood-oxygen-level-dependent (BOLD) signals evoked by adjacent trials overlap heavily in the time domain, so identifying trial-specific BOLD responses is difficult. In addition, the voxel-specific hemodynamic response function (HRF), which is useful in MVPA, should be used in estimation to decrease the loss of weak information across voxels and obtain fine-grained spatial information. Regularization methods have been widely used to increase the efficiency of HRF estimates. In this study, we propose a regularization framework called mixed L2 norm regularization. This framework combines Tikhonov regularization with an additional L2 norm regularization term to calculate reliable HRF estimates. The technique improves the accuracy of HRF estimates and significantly increases the classification accuracy of the brain decoding task when applied to a rapid event-related four-category object classification experiment. Finally, some essential issues such as the impact of low-frequency fluctuation (LFF) and the influence of smoothing are discussed for rapid event-related experiments.
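The "Tikhonov term plus an additional L2 term" structure can be sketched in closed form for a generic linear model; the design matrix `X` and the second-difference smoothness operator `D` below are illustrative stand-ins, not the paper's exact regularizers:

```python
import numpy as np

def mixed_l2_estimate(X, y, lam1, lam2):
    """argmin_h ||y - X h||^2 + lam1 * ||h||^2 + lam2 * ||D h||^2,
    where D is a second-difference operator favoring a smooth HRF shape."""
    n = X.shape[1]
    D = np.diff(np.eye(n), n=2, axis=0)  # (n-2) x n second-difference matrix
    return np.linalg.solve(X.T @ X + lam1 * np.eye(n) + lam2 * D.T @ D, X.T @ y)
```

With `lam1 = lam2 = 0` this reduces to ordinary least squares; the two penalties separately control amplitude shrinkage and smoothness of the estimated response.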
Ohsawa, Takeo
2015-01-01
The purpose of this monograph is to present the current status of a rapidly developing part of several complex variables, motivated by the applicability of effective results to algebraic geometry and differential geometry. Highlighted are the new precise results on the L² extension of holomorphic functions. In Chapter 1, the classical questions of several complex variables motivating the development of this field are reviewed after necessary preparations from the basic notions of those variables and of complex manifolds such as holomorphic functions, pseudoconvexity, differential forms, and cohomology. In Chapter 2, the L² method of solving the d-bar equation is presented emphasizing its differential geometric aspect. In Chapter 3, a refinement of the Oka–Cartan theory is given by this method. The L² extension theorem with an optimal constant is included, obtained recently by Z. Błocki and by Q.-A. Guan and X.-Y. Zhou separately. In Chapter 4, various results on the Bergman kernel are presented, includi...
A Projection free method for Generalized Eigenvalue Problem with a nonsmooth Regularizer.
Hwang, Seong Jae; Collins, Maxwell D; Ravi, Sathya N; Ithapu, Vamsi K; Adluru, Nagesh; Johnson, Sterling C; Singh, Vikas
2015-12-01
Eigenvalue problems are ubiquitous in computer vision, covering a very broad spectrum of applications ranging from estimation problems in multi-view geometry to image segmentation. Few other linear algebra problems have a more mature set of numerical routines available, and many computer vision libraries leverage such tools extensively. However, the ability to call the underlying solver only as a "black box" can often become restrictive. Many 'human-in-the-loop' settings in vision frequently exploit supervision from an expert, to the extent that the user can be considered a subroutine in the overall system. In other cases, there is additional domain knowledge, side information, or even partial information that one may want to incorporate within the formulation. In general, regularizing a (generalized) eigenvalue problem with such side information remains difficult. Motivated by these needs, this paper presents an optimization scheme to solve generalized eigenvalue problems (GEP) involving a (nonsmooth) regularizer. We start from an alternative formulation of GEP where the feasibility set of the model involves the Stiefel manifold. The core of this paper presents an end-to-end stochastic optimization scheme for the resultant problem. We show how this general algorithm enables improved statistical analysis of brain imaging data where the regularizer is derived from other 'views' of the disease pathology, involving clinical measurements and other image-derived representations.
New Regularization Method in Electrical Impedance Tomography
侯卫东; 莫玉龙
2002-01-01
Image reconstruction in electrical impedance tomography (EIT) is a highly ill-posed inverse problem, and regularization techniques must be used in order to solve it. In this paper, a new regularization method based on spatial filtering theory is proposed. The new regularized reconstruction for EIT is independent of the estimation of the impedance distribution, so it can be implemented more easily than the maximum a posteriori (MAP) method. The regularization level in the proposed method varies spatially so as to suit the correlation character of the object's impedance distribution. We implemented the regularization method in two-dimensional computer simulations. The experimental results indicate that the quality of the impedance images reconstructed with the described regularization method based on spatial filtering theory is better than that obtained with the Tikhonov method.
Zhong Jian; Huang Si-Xun; Du Hua-Dong; Zhang Liang
2011-01-01
The scatterometer is an instrument that provides all-day, large-scale wind field information, and its application, especially to wind retrieval, has always attracted meteorologists. Several factors cause a large direction error, so it is important to find where the error mainly comes from: does it mainly result from the background field, the normalized radar cross-section (NRCS), or the method of wind retrieval? First, based on SDP2.0, the simulated 'true' NRCS is calculated from the simulated 'true' wind through the geophysical model function NSCAT2. The simulated background field is configured by adding noise to the simulated 'true' wind with a non-divergence constraint, and the simulated 'measured' NRCS is formed by adding noise to the simulated 'true' NRCS. Then sensitivity experiments are performed, and a new regularization method is used to improve the ambiguity removal in simulation experiments. The results show that the accuracy of wind retrieval is more sensitive to noise in the background than in the measured NRCS; compared with the two-dimensional variational (2DVAR) ambiguity removal method, the accuracy of wind retrieval can be improved with the new Tikhonov regularization method by choosing an appropriate regularization parameter, especially in the case of large error in the background. This work provides important information and a new method for wind retrieval with real data.
A Noise-Robust Method with Smoothed \ell_1/\ell_2 Regularization for Sparse Moving-Source Mapping
Pham, Mai Quyen; Mars, Jérôme I; Nicolas, Barbara
2016-01-01
The method described here performs blind deconvolution of the beamforming output in the frequency domain. To provide accurate blind deconvolution, sparsity priors are introduced with a smoothed \ell_1/\ell_2 regularization term. As the mean of the noise in the power spectrum domain depends on its variance in the time domain, the proposed method includes a variance estimation step, which allows more robust blind deconvolution. The method is validated on both simulated and real data, and its performance is compared with two well-known methods from the literature: the deconvolution approach for the mapping of acoustic sources, and sound density modeling.
Chen, Jianlin; Wang, Linyuan; Yan, Bin; Zhang, Hanming; Cheng, Genyang
2015-01-01
Iterative reconstruction algorithms for computed tomography (CT) using total variation regularization, based on a piecewise constant assumption, can produce accurate, robust, and stable results. Nonetheless, this approach is often subject to staircase artefacts and the loss of fine details. To overcome these shortcomings, we introduce a family of image regularization penalties called total generalized variation (TGV) for the effective production of high-quality images from incomplete or noisy projection data in 3D reconstruction. We propose a new, fast alternating direction minimization algorithm to solve CT image reconstruction problems with TGV regularization. Based on the theory of sparse-view image reconstruction and the framework of the augmented Lagrangian method, the TGV regularization term is introduced into computed tomography, and the optimization problem is split into three independent subproblems by introducing auxiliary variables. The new algorithm applies a local linearization and proximity technique to make FFT-based calculation of the analytical solutions in the frequency domain feasible, thereby significantly reducing the complexity of the algorithm. Experiments with various 3D datasets corresponding to incomplete projection data demonstrate the advantage of the proposed algorithm in preserving fine details and overcoming the staircase effect. The computation cost also suggests that the proposed algorithm is applicable to and effective for CBCT imaging. Theoretical and technical optimization of the algorithm, in terms of both computational efficiency and resolution, should be investigated carefully in application-oriented research.
Prot, Olivier; SantolíK, OndřEj; Trotignon, Jean-Gabriel; Deferaudy, Hervé
2006-06-01
An entropy regularization algorithm (ERA) has been developed to compute the wave-energy density from electromagnetic field measurements. It is based on the wave distribution function (WDF) concept. To assess its suitability and efficiency, the algorithm is applied to experimental data that have already been analyzed using other inversion techniques. The FREJA satellite data used consist of six spectral matrices corresponding to six time-frequency points of an ELF hiss-event spectrogram. The WDF analysis is performed on these six points and the results are compared with those obtained previously. A statistical stability analysis confirms the stability of the solutions. The WDF computation is fast and requires no prespecified parameters. The regularization parameter has been chosen in accordance with Morozov's discrepancy principle. The Generalized Cross Validation and L-curve criteria are then tentatively used to provide a fully data-driven method. However, these criteria fail to determine a suitable value of the regularization parameter. Although the entropy regularization leads to solutions that agree fairly well with those already published, some differences are observed, and these are discussed in detail. The main advantage of the ERA is that it returns the WDF exhibiting the largest entropy and avoids the use of a priori models, which sometimes seem to be more accurate but without any justification.
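Morozov's discrepancy principle, used above to choose the regularization parameter, can be sketched for a generic Tikhonov problem: pick lam so that the residual norm matches the noise level delta, exploiting the fact that the residual grows monotonically with lam (a minimal illustration, not the entropy-regularization solver of the paper):

```python
import numpy as np

def discrepancy_lambda(A, b, delta, lo=1e-12, hi=1e3, iters=60):
    """Return lam with ||A x_lam - b|| ≈ delta, x_lam being the Tikhonov solution."""
    def residual(lam):
        x = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)
        return np.linalg.norm(A @ x - b)
    for _ in range(iters):
        mid = np.sqrt(lo * hi)          # geometric midpoint of the bracket
        if residual(mid) < delta:
            lo = mid                    # residual too small: regularize harder
        else:
            hi = mid
    return np.sqrt(lo * hi)
```

This is a data-driven rule in the same spirit as GCV or the L-curve, but it requires an estimate of the noise level, which is exactly the a priori information those alternative criteria try to avoid.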
Yin, Gang; Zhang, Yingtang; Mi, Songlin; Fan, Hongbo; Li, Zhining
2016-11-01
To obtain accurate magnetic gradient tensor data, a fast and robust calculation method based on a regularized method in the frequency domain is proposed. Using potential field theory, the transform formula in the frequency domain is deduced in order to calculate the magnetic gradient tensor from pre-existing total magnetic anomaly data. By analyzing the filter characteristics of the vertical vector transform operator (VVTO) and the gradient tensor transform operator (GTTO), we show that the conventional transform process is unstable because it amplifies the high-frequency part of the data, where measurement noise is located. Because this instability leads to a low signal-to-noise ratio (SNR) in the calculated result, we introduce a regularized method in this paper. By selecting the optimum regularization parameters of the different transform phases using the C-norm approach, the high-frequency noise is restrained and the SNR is improved effectively. Numerical analysis demonstrates that most values and characteristics of the data calculated by the proposed method compare favorably with reference magnetic gradient tensor data. In addition, the magnetic gradient tensor components calculated from a real aeromagnetic survey provided better resolution of the magnetic sources than the original profile.
An efficient regularization method for a large scale ill-posed geothermal problem
Berntsson, Fredrik; Lin, Chen; Xu, Tao; Wokiyi, Dennis
2017-08-01
The inverse geothermal problem consists of estimating the temperature distribution below the earth's surface using measurements on the surface. The problem is important since temperature governs a variety of geologic processes, including the generation of magmas and the deformation style of rocks. Since the thermal properties of rocks depend strongly on temperature, the problem is non-linear. The problem is formulated as an ill-posed operator equation, where the right-hand side is the heat flux at the surface level. Since the problem is ill-posed, regularization is needed. In this study we demonstrate that Tikhonov regularization can be implemented efficiently for solving the operator equation. The algorithm is based on having a code for solving a well-posed problem related to the above-mentioned operator. The algorithm is designed in such a way that it can deal with both 2D and 3D calculations. Numerical results for 2D domains show that the algorithm works well and that the inverse problem can be solved accurately with a realistic noise level in the surface data.
Sandhu, Ali Imran
2016-04-10
A sparsity-regularized Born iterative method (BIM) is proposed for efficiently reconstructing two-dimensional piecewise-continuous inhomogeneous dielectric profiles. Such profiles are typically not spatially sparse, which reduces the efficiency of the sparsity-promoting regularization. To overcome this problem, scattered fields are represented in terms of the spatial derivative of the dielectric profile and reconstruction is carried out over samples of the dielectric profile's derivative. Then, like the conventional BIM, the nonlinear problem is iteratively converted into a sequence of linear problems (in derivative samples) and the sparsity constraint is enforced on each linear problem using the thresholded Landweber iterations. Numerical results, which demonstrate the efficiency and accuracy of the proposed method in reconstructing piecewise-continuous dielectric profiles, are presented.
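The thresholded Landweber iteration mentioned above (essentially the ISTA scheme) can be sketched for a generic linear problem; the setup is illustrative and stands in for, not reproduces, the paper's electromagnetic scattering operator:

```python
import numpy as np

def thresholded_landweber(A, y, lam, iters=200):
    """Sparse recovery: x_{k+1} = soft(x_k + t A^T (y - A x_k), t*lam), t = 1/||A||_2^2."""
    t = 1.0 / np.linalg.norm(A, 2) ** 2          # step below the Lipschitz bound
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = x + t * A.T @ (y - A @ x)                          # Landweber (gradient) step
        x = np.sign(g) * np.maximum(np.abs(g) - t * lam, 0.0)  # soft threshold
    return x
```

Within the BIM, one such sparse solve would be performed for each linearized subproblem, with the solution updating the background profile for the next Born iteration.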
Tóth, L Fejes; Ulam, S; Stark, M
1964-01-01
Regular Figures concerns the systematology and genetics of regular figures. The first part of the book deals with the classical theory of regular figures, including the description of plane ornaments, spherical arrangements, hyperbolic tessellations, polyhedra, and regular polytopes. Problems in the geometry of the sphere and of the two-dimensional hyperbolic space are considered. The classical theory is presented as describing all possible symmetrical groupings in the different spaces of constant curvature. The second part deals with the genetics of the regular figures and the inequalities fo
Lestari, D.; Raharjo, D.; Bustamam, A.; Abdillah, B.; Widhianto, W.
2017-07-01
Dengue virus consists of 10 different constituent proteins and is classified into 4 major serotypes (DEN 1 - DEN 4). This study was designed to cluster 30 protein sequences of dengue virus taken from the Virus Pathogen Database and Analysis Resource (VIPR) using the Regularized Markov Clustering (R-MCL) algorithm and then analyze the result. Using a Python 3.4 program, the R-MCL algorithm produces 8 clusters, with more than one centroid in several clusters. The number of centroids shows the density level of interaction. Protein interactions connected in a tissue form a protein complex that serves as a specific biological process unit. The analysis shows that R-MCL clustering produces clusters of the dengue virus family based on the similar roles of their constituent proteins, regardless of serotype.
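For readers unfamiliar with this clustering family, a minimal sketch of plain MCL, the base algorithm that R-MCL extends (the regularization step specific to R-MCL is omitted here), operating on an adjacency matrix:

```python
import numpy as np

def mcl(adj, expand=2, inflate=2.0, iters=50):
    """Plain Markov Clustering: alternate expansion (matrix power) and
    inflation (elementwise power + column renormalization)."""
    M = adj + np.eye(adj.shape[0])          # add self-loops
    M = M / M.sum(axis=0, keepdims=True)    # make column-stochastic
    for _ in range(iters):
        M = np.linalg.matrix_power(M, expand)   # expansion: spread flow
        M = M ** inflate                        # inflation: sharpen strong flows
        M = M / M.sum(axis=0, keepdims=True)
    # group each column (node) under its dominant row (attractor)
    clusters = {}
    for j in range(M.shape[1]):
        clusters.setdefault(int(np.argmax(M[:, j])), set()).add(j)
    return list(clusters.values())
```

Multiple attractors surviving within one cluster corresponds to the "more than one centroid" observation reported in the abstract.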
Ablikim, M; Ai, X C; Albayrak, O; Albrecht, M; Ambrose, D J; Amoroso, A; An, F F; An, Q; Bai, J Z; Ferroli, R Baldini; Ban, Y; Bennett, D W; Bennett, J V; Bertani, M; Bettoni, D; Bian, J M; Bianchi, F; Boger, E; Boyko, I; Briere, R A; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, H Y; Chen, J C; Chen, M L; Chen, S J; Chen, X; Chen, X R; Chen, Y B; Cheng, H P; Chu, X K; Cibinetto, G; Dai, H L; Dai, J P; Dbeyssi, A; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; De Mori, F; Ding, Y; Dong, C; Dong, J; Dong, L Y; Dong, M Y; Du, S X; Duan, P F; Eren, E E; Fan, J Z; Fang, J; Fang, S S; Fang, X; Fang, Y; Fava, L; Feldbauer, F; Felici, G; Feng, C Q; Fioravanti, E; Fritsch, M; Fu, C D; Gao, Q; Gao, X Y; Gao, Y; Gao, Z; Garzia, I; Goetzen, K; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guan, Y H; Guo, A Q; Guo, L B; Guo, Y; Guo, Y P; Haddadi, Z; Hafner, A; Han, S; Hao, X Q; Harris, F A; He, K L; He, X Q; Held, T; Heng, Y K; Hou, Z L; Hu, C; Hu, H M; Hu, J F; Hu, T; Hu, Y; Huang, G M; Huang, G S; Huang, J S; Huang, X T; Huang, Y; Hussain, T; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, L L; Jiang, L W; Jiang, X S; Jiang, X Y; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Johansson, T; Julin, A; Kalantar-Nayestanaki, N; Kang, X L; Kang, X S; Kavatsyuk, M; Ke, B C; Kiese, P; Kliemt, R; Kloss, B; Kolcu, O B; Kopf, B; Kornicer, M; Kuehn, W; Kupsc, A; Lange, J S; Lara, M; Larin, P; Leng, C; Li, C; Li, Cheng; Li, D M; Li, F; Li, F Y; Li, G; Li, H B; Li, J C; Li, Jin; Li, K; Li, Lei; Li, P R; Li, T; Li, W D; Li, W G; Li, X L; Li, X M; Li, X N; Li, X Q; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Lin, D X; Liu, B J; Liu, C L; Liu, C X; Liu, F H; Liu, Fang; Liu, Feng; Liu, H B; Liu, H H; Liu, H M; Liu, J; Liu, J B; Liu, J P; Liu, J Y; Liu, K; Liu, K Y; Liu, L D; Liu, P L; Liu, Q; Liu, S B; Liu, X; Liu, Y B; Liu, Z A; Liu, Zhiqing; Loehner, H; Lou, X C; Lu, H J; Lu, J G; Lu, Y; Lu, Y P; Luo, C L; Luo, 
M X; Luo, T; Luo, X L; Lyu, X R; Ma, F C; Ma, H L; Ma, L L; Ma, Q M; Ma, T; Ma, X N; Ma, X Y; Maas, F E; Maggiora, M; Mao, Y J; Mao, Z P; Marcello, S; Messchendorp, J G; Min, J; Mitchell, R E; Mo, X H; Mo, Y J; Morales, C Morales; Moriya, K; Muchnoi, N Yu; Muramatsu, H; Nefedov, Y; Nerling, F; Nikolaev, I B; Ning, Z; Nisar, S; Niu, S L; Niu, X Y; Olsen, S L; Ouyang, Q; Pacetti, S; Patteri, P; Pelizaeus, M; Peng, H P; Peters, K; Pettersson, J; Ping, J L; Ping, R G; Poling, R; Prasad, V; Qi, M; Qian, S; Qiao, C F; Qin, L Q; Qin, N; Qin, X S; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Ripka, M; Rong, G; Rosner, Ch; Ruan, X D; Santoro, V; Sarantsev, A; Savrié, M; Schoenning, K; Schumann, S; Shan, W; Shao, M; Shen, C P; Shen, P X; Shen, X Y; Sheng, H Y; Song, W M; Song, X Y; Sosio, S; Spataro, S; Sun, G X; Sun, J F; Sun, S S; Sun, Y J; Sun, Y Z; Sun, Z J; Sun, Z T; Tang, C J; Tang, X; Tapan, I; Thorndike, E H; Tiemens, M; Ullrich, M; Uman, I; Varner, G S; Wang, B; Wang, D; Wang, D Y; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, P; Wang, P L; Wang, S G; Wang, W; Wang, X F; Wang, Y D; Wang, Y F; Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z H; Wang, Z Y; Weber, T; Wei, D H; Wei, J B; Weidenkaff, P; Wen, S P; Wiedner, U; Wolke, M; Wu, L H; Wu, Z; Xia, L G; Xia, Y; Xiao, D; Xiao, H; Xiao, Z J; Xie, Y G; Xiu, Q L; Xu, G F; Xu, L; Xu, Q J; Xu, X P; Yan, L; Yan, W B; Yan, W C; Yan, Y H; Yang, H J; Yang, H X; Yang, L; Yang, Y; Yang, Y X; Ye, M; Ye, M H; Yin, J H; Yu, B X; Yu, C X; Yu, J S; Yuan, C Z; Yuan, W L; Yuan, Y; Yuncu, A; Zafar, A A; Zallo, A; Zeng, Y; Zhang, B X; Zhang, B Y; Zhang, C; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J J; Zhang, J L; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, K; Zhang, L; Zhang, X Y; Zhang, Y; Zhang, Y N; Zhang, Y H; Zhang, Y T; Zhang, Yu; Zhang, Z H; Zhang, Z P; Zhang, Z Y; Zhao, G; Zhao, J W; Zhao, J Y; Zhao, J Z; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, Q W; Zhao, S J; Zhao, T C; Zhao, Y B; Zhao, Z G; 
Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, W J; Zheng, Y H; Zhong, B; Zhou, L; Zhou, X; Zhou, X K; Zhou, X R; Zhou, X Y; Zhu, K; Zhu, K J; Zhu, S; Zhu, S H; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuang, J; Zotti, L; Zou, B S; Zou, J H
2015-01-01
We present a study of the process $e^+e^-\to\pi^{\pm}(D\bar{D}^*)^{\mp}$ using data samples of 1092 pb$^{-1}$ at $\sqrt{s}=4.23$ GeV and 826 pb$^{-1}$ at $\sqrt{s}=4.26$ GeV collected with the BESIII detector at the BEPCII storage ring. With full reconstruction of the $D$ meson pair and the bachelor $\pi^{\pm}$ in the final state, we confirm the existence of the charged structure $Z_c(3885)^{\mp}$ in the $(D\bar{D}^*)^{\mp}$ system in the two isospin processes $e^+e^-\to\pi^+D^0D^{*-}$ and $e^+e^-\to\pi^+D^-D^{*0}$. By performing a simultaneous fit, the statistical significance of the $Z_c(3885)^{\mp}$ signal is determined to be greater than 10$\sigma$, and its pole mass and width are measured to be $M_{\rm{pole}}$=(3881.7$\pm$1.6(stat.)$\pm$2.1(syst.)) MeV/$c^2$ and $\Gamma_{\rm{pole}}$=(26.6$\pm$2.0(stat.)$\pm$2.3(syst.)) MeV, respectively. The Born cross section times the $(D\bar{D}^*)^{\mp}$ branching fraction ($\sigma(e^+e^-\to\pi^{\pm}Z_{c}(3885)^{\mp}) \times Br(Z_{c}(3885)^{\mp}\to(D\bar{D}^*)^{\mp})$) is ...
Niu, Xiaofeng; Yang, Yongyi; King, Michael A.
2012-09-01
Temporal regularization plays a critical role in cardiac gated dynamic SPECT reconstruction, of which the goal is to obtain an image sequence from a single acquisition which simultaneously shows both cardiac motion and tracer distribution change over the course of imaging (termed 5D). In our recent work, we explored two different approaches for temporal regularization of the dynamic activities in gated dynamic reconstruction without the use of fast camera rotation: one is the dynamic EM (dEM) approach which is imposed on the temporal trend of the time activity of each voxel, and the other is a B-spline modeling approach in which the time activity is regulated by a set of B-spline basis functions. In this work, we extend the B-spline approach to fully 5D reconstruction and conduct a thorough quantitative comparison with the dEM approach. In the evaluation of the reconstruction results, we apply a number of quantitative measures on two major aspects of the reconstructed dynamic images: (1) the accuracy of the reconstructed activity distribution in the myocardium and (2) the ability of the reconstructed dynamic activities to differentiate perfusion defects from normal myocardial wall uptake. These measures include the mean square error (MSE), bias-variance analysis, accuracy of time-activity curves (TAC), contrast-to-noise ratio of a defect, composite kinetic map of the left ventricle wall and perfusion defect detectability with channelized Hotelling observer. In experiments, we simulated cardiac gated imaging with the NURBS-based cardiac-torso phantom and Tc99m-Teboroxime as the imaging agent, where acquisition with the equivalent of only three full camera rotations was used during the imaging period. The results show that both dEM and B-spline 5D could achieve similar overall accuracy in the myocardium in terms of MSE. However, compared to dEM 5D, the B-spline approach could achieve a more accurate reconstruction of the voxel TACs; in particular, B-spline 5D could
3D Inversion of Magnetic Data through Wavelet based Regularization Method
Maysam Abedi
2015-06-01
Full Text Available This study deals with 3D recovery of a magnetic susceptibility model by incorporating sparsity-based constraints in the inversion algorithm. For this purpose, the area under prospect was divided into a large number of rectangular prisms in a mesh with unknown susceptibilities. Tikhonov cost functions with two sparsity functions were used to recover the smooth parts as well as the sharp boundaries of the model parameters. A pre-selected basis, namely wavelets, can recover the regions of smooth behaviour of the susceptibility distribution, while the Haar or finite-difference (FD) domains yield a solution with rough boundaries. Therefore, a regularizer function that combines the advantages of the wavelet and Haar/FD operators in representing the 3D magnetic susceptibility distribution was chosen as a candidate for modeling magnetic anomalies. The optimum wavelet and the parameter β, which controls the weight of the two sparsifying operators, were also considered. The algorithm assumed that there is no remanent magnetization and that the observed magnetometry data represent only the induced magnetization effect. The proposed approach is applied to noise-corrupted synthetic data in order to demonstrate its suitability for 3D inversion of magnetic data. After obtaining satisfactory results, a case study pertaining to ground-based measurement of the magnetic anomaly over a porphyry-Cu deposit (the Now Chun deposit) located in the Kerman province of Iran was presented and 3D inverted. The low susceptibility in the constructed model coincides with the known location of copper ore mineralization.
Liu, Chang; Xu, Lijun; Cao, Zhang
2013-07-10
Regularization methods were combined with line-of-sight tunable diode laser absorption spectroscopy (TDLAS) to measure nonuniform temperature and concentration distributions along the laser path when a priori information on the temperature distribution tendency is available. Relying on measurements of 12 absorption transitions of water vapor from 1300 to 1350 nm, the nonuniform temperature and concentration distributions were retrieved by making use of nonlinear and linear regularization methods, respectively. To examine the effectiveness of regularization methods, a simulated annealing algorithm for nonlinear regularization was implemented to reconstruct the temperature distribution, while three linear regularization methods, namely truncated singular value decomposition, Tikhonov regularization, and a revised Tikhonov regularization method, were implemented to retrieve the concentration distribution. The results show that regularization methods can not only retrieve temperature and concentration distributions closer to the original but are also less sensitive to measurement noise. When sufficient optical access is not available for TDLAS tomography, the methods proposed in the paper can be used to obtain more details of the combustion field with higher accuracy and robustness, which are expected to play a more important role in combustion diagnosis.
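Of the three linear methods listed, truncated singular value decomposition is the simplest to sketch; this generic version (an illustration, not the paper's spectroscopy code) keeps only the k largest singular components:

```python
import numpy as np

def tsvd_solve(A, b, k):
    """Least-squares solution restricted to the k largest singular triplets,
    discarding the small singular values that amplify measurement noise."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)  # s is sorted descending
    coef = (U.T @ b)[:k] / s[:k]
    return Vt[:k].T @ coef
```

The truncation index k plays the same role as the Tikhonov parameter: larger k fits the data more closely, smaller k suppresses noise more aggressively.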
New Observables in the Decay Mode $\bar B_d \to \bar K^{*0} \ell^+ \ell^-$
Egede, U.; /Imperial Coll., London; Hurth, T.; /CERN /SLAC; Matias, J.; Ramon, M.; /Barcelona, IFAE; Reece, W.; /Imperial Coll., London
2008-08-07
We discuss the large set of observables available from the angular distributions of the decay $\bar B_d \to \bar K^{*0}\ell^+\ell^-$. We present a NLO analysis of all observables based on the QCD factorization approach in the low-dilepton-mass region and an estimate of $\Lambda/m_b$ corrections. Moreover, we discuss their sensitivity to new physics. We explore the experimental sensitivities at LHCb (10 fb$^{-1}$) and SuperLHCb (100 fb$^{-1}$) based on a full-angular-fit method and explore the sensitivity to right-handed currents. We also show that the previously discussed transversity amplitude $A_T^{(1)}$ cannot be measured at the LHCb experiment or at future B-factory experiments, as it requires a measurement of the spin of the final-state particles.
Comparing parameter choice methods for the regularization in the SONAH algorithm
Gomes, Jesper Skovhus
2006-01-01
is needed. A parameter choice method based on a priori information about the signal-to-noise-ratio (SNR) in the measurement setup is often chosen. However, this parameter choice method may be undesirable since SNR is difficult to determine in practice. In this paper, data based parameter choice methods...
2007-01-01
In this paper, we are concerned with the partial regularity of weak solutions of energy-minimizing p-harmonic maps under the controllable growth condition. We obtain interior partial regularity by the p-harmonic approximation method, together with the technique used to obtain decay estimates for some degenerate elliptic equations and the obstacle problem by Tan and Yan. In particular, we directly obtain the optimal regularity.
Özkan Güner
2014-01-01
Full Text Available We apply the functional variable method, the exp-function method, and the (G′/G)-expansion method to establish exact solutions of nonlinear fractional partial differential equations (NLFPDEs) in the sense of the modified Riemann-Liouville derivative. As a result, some new exact solutions are obtained. The results show that these methods are very effective and powerful mathematical tools for solving nonlinear fractional equations arising in mathematical physics, and they can also be applied to other nonlinear fractional differential equations.
Tian, Wenyi; Yuan, Xiaoming
2016-11-01
Linear inverse problems with total variation regularization can be reformulated as saddle-point problems; the primal and dual variables of such a saddle-point reformulation can be discretized in piecewise affine and constant finite element spaces, respectively. Thus, the well-developed primal-dual approach (a.k.a. the inexact Uzawa method) is conceptually applicable to such a regularized and discretized model. When the primal-dual approach is applied, the resulting subproblems may be highly nontrivial and it is necessary to discuss how to tackle them and thus make the primal-dual approach implementable. In this paper, we suggest linearizing the data-fidelity quadratic term of the hard subproblems so as to obtain easier ones. A linearized primal-dual method is thus proposed. Inspired by the fact that the linearized primal-dual method can be explained as an application of the proximal point algorithm, a relaxed version of the linearized primal-dual method, which can often accelerate the convergence numerically with the same order of computation, is also proposed. The global convergence and worst-case convergence rate measured by the iteration complexity are established for the new algorithms. Their efficiency is verified by some numerical results.
Luo, X.; Ou, J.; Yuan, Y.; Gao, J.; Jin, X.; Zhang, K.; Xu, H.
2008-08-01
It is well known that the key problem associated with network-based real-time kinematic (RTK) positioning is the estimation of the systematic errors of GPS observations, such as residual ionospheric delays, tropospheric delays, and orbit errors, particularly for medium-long baselines. Existing methods dealing with these systematic errors either are not applicable for real-time estimation or require additional observations in the computation; in both cases, rapid positioning is difficult. We have developed a new strategy for estimating the systematic errors for near real-time applications. In this approach, only two epochs of observations are used each time to estimate the parameters. To overcome the severe ill-conditioning of the normal equation, the Tikhonov regularization method is used. We suggest that the regularization matrix be constructed by combining the a priori information of the known coordinates of the reference stations, followed by the determination of the corresponding regularization parameter. A series of systematic-error estimates can be obtained from a session of GPS observations, and the new process can assist in resolving the integer ambiguities of medium-long baselines and in constructing the virtual observations for the virtual reference station. A number of tests using three medium- to long-range baselines (from tens of kilometers to longer than 1000 kilometers) are used to validate the new approach. Test results indicate that the derived coordinates for all three baselines are accurate to within several centimeters once the systematic errors are successfully removed. Our results demonstrate that the proposed method can effectively estimate systematic errors in near real-time for medium-long GPS baseline solutions.
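The Tikhonov step above can be illustrated on a toy ill-conditioned system. This is a generic sketch under stated assumptions: the paper builds its regularization matrix from the known reference-station coordinates, whereas here that a priori information is modeled by a simple prior term `x_prior` and a scalar parameter `alpha`.

```python
import numpy as np

def tikhonov(A, b, alpha, x_prior=None):
    """Tikhonov-regularized least squares:
    minimize ||A x - b||^2 + alpha * ||x - x_prior||^2,
    solved via the regularized normal equations."""
    n = A.shape[1]
    if x_prior is None:
        x_prior = np.zeros(n)
    return np.linalg.solve(A.T @ A + alpha * np.eye(n),
                           A.T @ b + alpha * x_prior)

# severely ill-conditioned design matrix (nearly dependent columns)
A = np.array([[1.0, 1.0], [1.0, 1.0001]])
x_true = np.array([1.0, 2.0])
b = A @ x_true + np.array([1e-4, -1e-4])      # small measurement noise
x_ls = np.linalg.solve(A.T @ A, A.T @ b)      # unregularized: noise blows up
# a priori coordinates close to the truth, as with known reference stations
x_reg = tikhonov(A, b, alpha=1e-3, x_prior=np.array([1.0, 2.0]))
print(x_ls, x_reg)
```

The unregularized solution is thrown far off by the tiny noise, while the regularized one stays within the noise level of the truth.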
Mory, Cyril, E-mail: cyril.mory@philips.com [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Auvray, Vincent; Zhang, Bo [Philips Research Medisys, 33 rue de Verdun, 92156 Suresnes (France); Grass, Michael; Schäfer, Dirk [Philips Research, Röntgenstrasse 24–26, D-22335 Hamburg (Germany); Chen, S. James; Carroll, John D. [Department of Medicine, Division of Cardiology, University of Colorado Denver, 12605 East 16th Avenue, Aurora, Colorado 80045 (United States); Rit, Simon [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Centre Léon Bérard, 28 rue Laënnec, F-69373 Lyon (France); Peyrin, Françoise [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1, F-69621 Villeurbanne Cedex (France); X-ray Imaging Group, European Synchrotron, Radiation Facility, BP 220, F-38043 Grenoble Cedex (France); Douek, Philippe; Boussel, Loïc [Université de Lyon, CREATIS, CNRS UMR5220, Inserm U1044, INSA-Lyon, Université Lyon 1 (France); Hospices Civils de Lyon, 28 Avenue du Doyen Jean Lépine, 69500 Bron (France)
2014-02-15
Purpose: Reconstruction of the beating heart in 3D + time in the catheter laboratory using only the available C-arm system would improve diagnosis, guidance, device sizing, and outcome control for intracardiac interventions, e.g., electrophysiology, valvular disease treatment, structural or congenital heart disease. To obtain such a reconstruction, the patient's electrocardiogram (ECG) must be recorded during the acquisition and used in the reconstruction. In this paper, the authors present a 4D reconstruction method aiming to reconstruct the heart from a single sweep 10 s acquisition. Methods: The authors introduce the 4D RecOnstructiOn using Spatial and TEmporal Regularization (short 4D ROOSTER) method, which reconstructs all cardiac phases at once, as a 3D + time volume. The algorithm alternates between a reconstruction step based on conjugate gradient and four regularization steps: enforcing positivity, averaging along time outside a motion mask that contains the heart and vessels, 3D spatial total variation minimization, and 1D temporal total variation minimization. Results: 4D ROOSTER recovers the different temporal representations of a moving Shepp and Logan phantom, and outperforms both ECG-gated simultaneous algebraic reconstruction technique and prior image constrained compressed sensing on a clinical case. It generates 3D + time reconstructions with sharp edges which can be used, for example, to estimate the patient's left ventricular ejection fraction. Conclusions: 4D ROOSTER can be applied for human cardiac C-arm CT, and potentially in other dynamic tomography areas. It can easily be adapted to other problems as regularization is decoupled from projection and back projection.
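The four alternating regularization steps of 4D ROOSTER can be sketched on a toy 2D array indexed by (time, voxel). This is only a structural illustration: the conjugate-gradient data step is omitted, the TV sub-steps are replaced by a crude gradient-step stand-in, and all names and weights are assumptions, not the authors' implementation.

```python
import numpy as np

def tv_smooth_1d(x, weight, iters=50, step=0.1):
    """Crude gradient-step total-variation smoothing (a stand-in for the
    paper's spatial/temporal TV minimization sub-steps)."""
    u = x.copy()
    for _ in range(iters):
        g = np.sign(np.diff(u))
        u[:-1] += step * weight * g       # pull neighbors together
        u[1:] -= step * weight * g
        u += step * (x - u)               # stay close to the input
    return u

def rooster_regularize(vol, motion_mask):
    """One pass of the four 4D ROOSTER regularization steps on a toy
    array vol[time, voxel]; the conjugate-gradient data step is omitted."""
    vol = np.maximum(vol, 0.0)                        # 1. positivity
    static = ~motion_mask
    vol[:, static] = vol[:, static].mean(axis=0)      # 2. temporal averaging outside mask
    for t in range(vol.shape[0]):                     # 3. spatial TV per phase
        vol[t] = tv_smooth_1d(vol[t], 0.05)
    for v in range(vol.shape[1]):                     # 4. temporal TV per voxel
        vol[:, v] = tv_smooth_1d(vol[:, v], 0.05)
    return vol

rng = np.random.default_rng(1)
vol = rng.standard_normal((4, 10))
mask = np.zeros(10, dtype=bool)
mask[3:7] = True                                      # "heart" voxels may move
out = rooster_regularize(vol, mask)
print(out.shape)
```

The key design point mirrored here is that regularization is decoupled from projection/backprojection, so the four steps can be swapped or re-weighted independently of the data term.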
Numerical methods for estimating J integral in models with regular rectangular meshes
Kozłowiec, B.
2017-02-01
Cracks and delaminations are common structural degradation mechanisms that have recently been studied using numerous methods and techniques. Among them, numerical methods based on FEM analyses are in widespread commercial use. These methods have focused, inter alia, on the energetic approach to linear elastic fracture mechanics (LEFM), encompassing quantities such as the J-integral and the energy release rate G. This approach makes it possible to introduce damage criteria for the analyzed structures without dealing with the details of the physical singularities occurring at the crack tip. In this paper, two numerical methods based on LEFM are used to analyze both isotropic and orthotropic specimens, and the results are compared with well-known analytical solutions as well as (in some cases) VCCT results. These methods are optimized for industrial use with simple, rectangular meshes. The verification is based on two-dimensional mode partitioning.
Li, L; Tan, S [Huazhong University of Science and Technology, Wuhan, Hubei (China); Lu, W [University of Maryland School of Medicine, Baltimore, MD (United States)
2015-06-15
Purpose: To propose a new variational method which couples image restoration with tumor segmentation for PET images using multiple regularizations. Methods: Partial volume effect (PVE) is a major degrading factor impacting tumor segmentation accuracy in PET imaging. Existing segmentation methods usually require prior calibrations to compensate for PVE, and they are highly system-dependent. Since image restoration and segmentation are tightly coupled and can promote each other, we propose a variational method to solve the two problems together. Our method integrates total variation (TV) semi-blind deconvolution and Mumford-Shah (MS) segmentation. The TV norm is used on edges to protect the edge information, and the L{sub 2} norm is used to avoid a staircase effect in the no-edge area. The blur kernel is constrained to the Gaussian model parameterized by its variance, and we assume that the variances in the X-Y and Z directions are different. The energy functional is iteratively optimized by an alternate minimization algorithm. Segmentation performance was tested on eleven patients with non-Hodgkin's lymphoma, and evaluated by the Dice similarity index (DSI) and classification error (CE). For comparison, seven other widely used methods were also tested and evaluated. Results: The combination of the TV and L{sub 2} regularizations effectively improved the segmentation accuracy. The average DSI increased by around 0.1 compared with using either the TV or the L{sub 2} norm alone. The proposed method clearly outperformed the other tested methods. It has an average DSI and CE of 0.80 and 0.41, while the FCM method — the second best one — has only an average DSI and CE of 0.66 and 0.64. Conclusion: Coupling image restoration and segmentation can handle PVE and thus improves tumor segmentation accuracy in PET. Alternate use of TV and L2 regularizations can further improve the performance of the algorithm. This work was supported in part by National Natural
Shkvarko, Yuriy, IV; Butenko, Sergiy
2006-05-01
We address a new approach to the problem of improving the quality of multi-grade spatial-spectral images provided by several remote sensing (RS) systems, as required for environmental resource management with the use of multisource RS data. The problem of multi-spectral reconstructive imaging with multisource information fusion is stated and treated as an aggregated ill-conditioned inverse problem of reconstruction of a high-resolution image from the data provided by several sensor systems that employ the same or different image formation methods. The proposed fusion-optimization technique aggregates the experiment design regularization paradigm with a neural-network-based implementation of the multisource information fusion method. The maximum entropy (ME) requirement and projection regularization constraints are posed as prior knowledge for fused reconstruction, and the experiment-design regularization methodology is applied to perform the optimization of multisource information fusion. Computationally, the reconstruction and fusion are accomplished via minimization of the energy function of the proposed modified multistate Hopfield-type neural network (NN) that integrates the model parameters of all systems, incorporating a priori information, aggregate multisource measurements, and calibration data. The developed theory proves that the designed maximum entropy neural network (MENN) is able to solve the multisource fusion tasks without substantial complication of its computational structure, independently of the number of systems to be fused. For each particular case, only the proper adjustment of the MENN's parameters (i.e., interconnection strengths and bias inputs) is required. Simulation examples are presented to illustrate the good overall performance of the fused reconstruction achieved with the developed MENN algorithm applied to real-world multi-spectral environmental imagery.
Huang, Da
2011-01-01
The consistency of the loop regularization (LORE) method is explored in multiloop calculations. A key concept of the LORE method is the introduction of irreducible loop integrals (ILIs), which are evaluated from the Feynman diagrams by adopting the Feynman parametrization and ultraviolet-divergence-preserving (UVDP) parametrization. It is then inevitable for the ILIs to encounter divergences in the UVDP-parameter space due to the generic overlapping divergences in the 4-dimensional momentum space. By computing the so-called $\alpha\beta\gamma$ integrals arising from two-loop Feynman diagrams, we show how to deal with the divergences in the parameter space by applying the LORE method. By identifying the divergences in the UVDP-parameter space with those in the subdiagrams of two-loop diagrams, we arrive at Bjorken-Drell's analogy between Feynman diagrams and electrical circuits, where the UVDP parameters are associated with the conductance or resistance in the electrical circuits. In particular, the sets o...
Zhong, Peng; Que, Wenxiu; Zhang, Jin
2010-11-01
In this paper, honeycomb-like regular TiO2 nanoporous films deposited on different substrates, including ITO glass and silicon wafer, are fabricated by combining a nanoimprint technique with a sol-gel method. A novel soft polymer mold, consisting of a thin layer of polymethylmethacrylate and a thicker layer of polydimethylsiloxane and obtained from an anodic aluminum oxide template, is used for the nanoimprint process. A TiO2 precursor solution prepared by sol-gel processing is used as the imprinted material. After imprinting, the polydimethylsiloxane back layer is easily peeled off before the polymethylmethacrylate mold is chemically removed, avoiding any demolding problem. SEM images show that the honeycomb-like regular nanostructure of the initial anodic aluminum oxide template can be preserved completely on TiO2 via this method, and XRD results indicate a crystalline transition of TiO2 from amorphous to anatase after heat treatment at 450 °C.
Dmitriy Y. Anistratov; Adrian Constantinescu; Loren Roberts; William Wieselquist
2007-04-30
This is a project in the field of fundamental research on numerical methods for solving the particle transport equation. Numerous practical problems require the use of unstructured meshes, for example, detailed nuclear reactor assembly-level calculations, large-scale reactor core calculations, radiative hydrodynamics problems where the mesh is determined by hydrodynamic processes, and well-logging problems in which the media structure has very complicated geometry. Currently this is an area of very active research in numerical transport theory. The main issues in developing numerical methods for solving the transport equation are the accuracy of the numerical solution and the effectiveness of the iteration procedure. The difficulty with unstructured grids is that it is very hard to derive an iteration algorithm that is unconditionally stable.
Zhang, Xinhua; Vishwanathan, S V N
2010-01-01
Nesterov's accelerated gradient methods (AGM) have been successfully applied in many machine learning areas. However, their empirical performance on training max-margin models has been inferior to existing specialized solvers. In this paper, we first extend AGM to strongly convex and composite objective functions with Bregman-style prox-functions. Our unifying framework covers both the $\infty$-memory and 1-memory styles of AGM, tunes the Lipschitz constant adaptively, and bounds the duality gap. We then demonstrate various ways to apply this framework to a wide range of machine learning problems. Emphasis is given to their rate of convergence and how to efficiently compute the gradient and optimize the models. The experimental results show that with our extensions AGM outperforms state-of-the-art solvers on max-margin models.
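The basic AGM iteration that the paper extends can be sketched as follows. This is the standard 1-memory form for an L-smooth, optionally mu-strongly convex objective; the Bregman prox-functions, composite terms, and adaptive Lipschitz tuning of the paper are not reproduced here.

```python
import numpy as np

def nesterov_agm(grad, x0, L, mu=0.0, iters=200):
    """Nesterov's accelerated gradient method: a gradient step from a
    lookahead point, followed by momentum extrapolation."""
    x, y = x0.copy(), x0.copy()
    if mu > 0:
        # fixed momentum for the strongly convex case
        beta = (np.sqrt(L) - np.sqrt(mu)) / (np.sqrt(L) + np.sqrt(mu))
    t = 1.0
    for _ in range(iters):
        x_new = y - grad(y) / L              # gradient step at lookahead point
        if mu > 0:
            momentum = beta
        else:
            t_new = 0.5 * (1 + np.sqrt(1 + 4 * t * t))
            momentum = (t - 1) / t_new       # classical t_k schedule
            t = t_new
        y = x_new + momentum * (x_new - x)   # extrapolation
        x = x_new
    return x

# strongly convex quadratic: f(x) = 0.5 x^T Q x - b^T x
Q = np.diag([1.0, 10.0, 100.0])
b = np.array([1.0, 1.0, 1.0])
x_star = np.linalg.solve(Q, b)
x = nesterov_agm(lambda v: Q @ v - b, np.zeros(3), L=100.0, mu=1.0)
print(np.linalg.norm(x - x_star))
```

On this condition-number-100 quadratic, the accelerated rate (1 - sqrt(mu/L)) per iteration drives the error down far faster than plain gradient descent would.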
A wavelet regularization method for an inverse heat conduction problem with convection term
Wei Cheng
2013-05-01
In this article, we consider an inverse heat conduction problem with convection, which is ill-posed; i.e., the solution does not depend continuously on the given data. A special projection dual least squares method generated by the family of Shannon wavelets is applied to formulate an approximate solution. An optimal-order estimate for the error between the approximate solution and the exact solution is also obtained.
Determination of Methyltins by a Hydridization Solvent Extraction Method
HAMASAKI,TETSUO/SATO,TAKAHIKO/NAGASE,HISAMITSU/KITO,HIDEAKI
1994-01-01
Analytical methods for the determination of methyltins in aqueous solutions were investigated. Methyltins ((CH_3)_nSn^((4-n)+)) were converted to hydrides ((CH_3)_nSnH_(4-n)) using sodium borohydride and extracted with benzene. Various factors related to hydridization and extraction were studied, and the optimum analytical conditions were established. Each methyltin in 50 ml of aqueous solution could be detected in the range of 0.5-250 μg as Sn using a gas chromatography-flame photometric detector (tin sele...
Xu, Zhengwei
Modeling of induced polarization (IP) phenomena is important for developing effective methods for remote sensing of subsurface geology and is widely used in mineral exploration. However, the quantitative interpretation of IP data in a complex 3D environment is still a challenging problem in applied geophysics. In this dissertation I use the regularized conjugate gradient method to determine the 3D distribution of the four parameters of the Cole-Cole model based on surface induced polarization (IP) data. This method takes into account the nonlinear nature of both electromagnetic induction (EMI) and IP phenomena. The solution of the 3D IP inverse problem is based on regularized smooth inversion only. The method was tested on synthetic models with DC conductivity, intrinsic chargeability, time constant, and relaxation parameters, and it was also applied to practical 3D IP survey data. I demonstrate that the four parameters of the Cole-Cole model, the DC electrical resistivity rho_0, the chargeability eta, the time constant tau, and the relaxation parameter C, can be recovered from the observed IP data simultaneously. Four Cole-Cole parameters are involved in the inversion: within each cell there are the DC conductivity (sigma_0), chargeability (eta), time constant (tau), and relaxation parameter (C), compared with conductivity alone in an EM-only inversion. In addition to the larger number of inversion parameters, the IP survey uses a dipole-dipole configuration, which requires more sources and receivers. On the other hand, calculating the Green's tensor and the Frechet matrix is time consuming, and storing them requires a large amount of memory. I therefore developed a parallel implementation using the MATLAB parallel computing tools to speed up the calculation.
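The regularized conjugate gradient idea can be illustrated in its simplest linear form. This sketch applies CG to the Tikhonov-regularized normal equations of a linear system; the dissertation's version is nonlinear in the four Cole-Cole parameters and uses a far more elaborate forward operator, so everything below is an illustrative assumption.

```python
import numpy as np

def regularized_cg(A, b, alpha, iters=100, tol=1e-10):
    """Conjugate gradient on the regularized normal equations
    (A^T A + alpha I) m = A^T b. Linear sketch only."""
    n = A.shape[1]
    H = A.T @ A + alpha * np.eye(n)   # SPD regularized Hessian
    g = A.T @ b
    m = np.zeros(n)
    r = g - H @ m                     # initial residual
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = H @ p
        a = rs / (p @ Hp)             # exact line search for quadratics
        m += a * p
        r -= a * Hp
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p     # conjugate search direction
        rs = rs_new
    return m

rng = np.random.default_rng(3)
A = rng.standard_normal((30, 10))
m_true = rng.standard_normal(10)
b = A @ m_true                        # noiseless synthetic data
m = regularized_cg(A, b, alpha=1e-8)
print(np.linalg.norm(m - m_true))
```

For an n-dimensional SPD system CG terminates in at most n iterations, which is why it is attractive when the Frechet matrix is expensive to form and store.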
Bi, Hui; Zhang, Bingchen; Hong, Wen
2016-07-01
The elevation image quality of tomographic synthetic aperture radar (TomoSAR) data depends mainly on the elevation aperture size, the number of baselines, and the baseline distribution. In TomoSAR, due to the restricted number of baselines with irregular distributions, the elevation imaging quality is always unacceptable using the conventional spectral analysis approach. Therefore, for a given limited number of irregular baselines, the completion of data for the unobserved virtual uniform baseline distribution should be addressed to improve the spectral analysis-based TomoSAR reconstruction quality. We propose an Lq (0 < q ≤ 1) regularization-based optimization problem, before calculating the data for the virtual baseline distribution based on the acquisitions and the transformation matrix. Finally, the elevation reflectivity function is recovered using the spectral analysis method based on the estimated data. Compared with the reconstructed results based only on the limited irregular acquisitions, the image recovered using the dataset with a virtual uniform baseline distribution can improve the elevation image quality in an efficient manner.
Ablikim, M; Ai, X C; Albayrak, O; Albrecht, M; Ambrose, D J; Amoroso, A; An, F F; An, Q; Bai, J Z; Ferroli, R Baldini; Ban, Y; Bennett, D W; Bennett, J V; Bertani, M; Bettoni, D; Bian, J M; Bianchi, F; Boger, E; Boyko, I; Briere, R A; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, H Y; Chen, J C; Chen, M L; Chen, S J; Chen, X; Chen, X R; Chen, Y B; Cheng, H P; Chu, X K; Cibinetto, G; Dai, H L; Dai, J P; Dbeyssi, A; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; De Mori, F; Ding, Y; Dong, C; Dong, J; Dong, L Y; Dong, M Y; Du, S X; Duan, P F; Eren, E E; Fan, J Z; Fang, J; Fang, S S; Fang, X; Fang, Y; Fava, L; Feldbauer, F; Felici, G; Feng, C Q; Fioravanti, E; Fritsch, M; Fu, C D; Gao, Q; Gao, X Y; Gao, Y; Gao, Z; Garzia, I; Geng, C; Goetzen, K; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guan, Y H; Guo, A Q; Guo, L B; Guo, Y; Guo, Y P; Haddadi, Z; Hafner, A; Han, S; Han, Y L; Hao, X Q; Harris, F A; He, K L; He, Z Y; Held, T; Heng, Y K; Hou, Z L; Hu, C; Hu, H M; Hu, J F; Hu, T; Hu, Y; Huang, G M; Huang, G S; Huang, H P; Huang, J S; Huang, X T; Huang, Y; Hussain, T; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, L L; Jiang, L W; Jiang, X S; Jiang, X Y; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Johansson, T; Julin, A; Kalantar-Nayestanaki, N; Kang, X L; Kang, X S; Kavatsyuk, M; Ke, B C; Kiese, P; Kliemt, R; Kloss, B; Kolcu, O B; Kopf, B; Kornicer, M; Kühn, W; Kupsc, A; Lange, J S; Lara, M; Larin, P; Leng, C; Li, C; Li, C H; Li, Cheng; Li, D M; Li, F; Li, G; Li, H B; Li, J C; Li, Jin; Li, K; Li, Lei; Li, P R; Li, T; Li, W D; Li, W G; Li, X L; Li, X M; Li, X N; Li, X Q; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Lin, D X; Liu, B J; Liu, C X; Liu, F H; Liu, Fang; Liu, Feng; Liu, H B; Liu, H H; Liu, H M; Liu, J; Liu, J B; Liu, J P; Liu, J Y; Liu, K; Liu, K Y; Liu, L D; Liu, P L; Liu, Q; Liu, S B; Liu, X; Liu, X X; Liu, Y B; Liu, Z A; Liu, Zhiqiang; Liu, Zhiqing; Loehner, H; Lou, X C; Lu, 
H J; Lu, J G; Lu, R Q; Lu, Y; Lu, Y P; Luo, C L; Luo, M X; Luo, T; Luo, X L; Lv, M; Lyu, X R; Ma, F C; Ma, H L; Ma, L L; Ma, Q M; Ma, T; Ma, X N; Ma, X Y; Maas, F E; Maggiora, M; Mao, Y J; Mao, Z P; Marcello, S; Messchendorp, J G; Min, J; Min, T J; Mitchell, R E; Mo, X H; Mo, Y J; Morales, C Morales; Moriya, K; Muchnoi, N Yu; Muramatsu, H; Nefedov, Y; Nerling, F; Nikolaev, I B; Ning, Z; Nisar, S; Niu, S L; Niu, X Y; Olsen, S L; Ouyang, Q; Pacetti, S; Patteri, P; Pelizaeus, M; Peng, H P; Peters, K; Pettersson, J; Ping, J L; Ping, R G; Poling, R; Prasad, V; Pu, Y N; Qi, M; Qian, S; Qiao, C F; Qin, L Q; Qin, N; Qin, X S; Qin, Y; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Ren, H L; Ripka, M; Rong, G; Rosner, Ch; Ruan, X D; Santoro, V; Sarantsev, A; Savrié, M; Schoenning, K; Schumann, S; Shan, W; Shao, M; Shen, C P; Shen, P X; Shen, X Y; Sheng, H Y; Song, W M; Song, X Y; Sosio, S; Spataro, S; Sun, G X; Sun, J F; Sun, S S; Sun, Y J; Sun, Y Z; Sun, Z J; Sun, Z T; Tang, C J; Tang, X; Tapan, I; Thorndike, E H; Tiemens, M; Ullrich, M; Uman, I; Varner, G S; Wang, B; Wang, B L; Wang, D; Wang, D Y; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, P; Wang, P L; Wang, S G; Wang, W; Wang, X F; Wang, Y D; Wang, Y F; Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z H; Wang, Z Y; Weber, T; Wei, D H; Wei, J B; Weidenkaff, P; Wen, S P; Wiedner, U; Wolke, M; Wu, L H; Wu, Z; Xia, L G; Xia, Y; Xiao, D; Xiao, Z J; Xie, Y G; Xiu, Q L; Xu, G F; Xu, L; Xu, Q J; Xu, Q N; Xu, X P; Yan, L; Yan, W B; Yan, W C; Yan, Y H; Yang, H J; Yang, H X; Yang, L; Yang, Y; Yang, Y X; Ye, H; Ye, M; Ye, M H; Yin, J H; Yu, B X; Yu, C X; Yu, H W; Yu, J S; Yuan, C Z; Yuan, W L; Yuan, Y; Yuncu, A; Zafar, A A; Zallo, A; Zeng, Y; Zhang, B X; Zhang, B Y; Zhang, C; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J J; Zhang, J L; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, K; Zhang, L; Zhang, S H; Zhang, X Y; Zhang, Y; Zhang, Y N; Zhang, Y H; Zhang, Y T; Zhang, Yu; Zhang, Z H; Zhang, Z P; Zhang, Z Y; Zhao, G; 
Zhao, J W; Zhao, J Y; Zhao, J Z; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, Q W; Zhao, S J; Zhao, T C; Zhao, Y B; Zhao, Z G; Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, W J; Zheng, Y H; Zhong, B; Zhou, L; Zhou, Li; Zhou, X; Zhou, X K; Zhou, X R; Zhou, X Y; Zhu, K; Zhu, K J; Zhu, S; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuan, J; Zotti, L; Zou, B S; Zou, J H
2015-01-01
We report a study of the process $e^{+} e^{-} \\to (D^{*} \\bar{D}^{*})^{0} \\pi^0$ using $e^+e^-$ collision data samples with integrated luminosity of $1092 \\rm{pb}^{-1}$ at $\\sqrt{s}=4.23 \\rm{GeV}$ and $826 \\rm{pb}^{-1}$ at $\\sqrt{s}=4.26 \\rm{GeV}$ collected with the BESIII detector at the BEPCII storage ring. We observe a new neutral structure near the $(D^{*} \\bar{D}^{*})^{0}$ mass threshold in the $\\pi^0$ recoil mass spectrum, which we denote as $Z_{c}(4025)^{0}$. Assuming a Breit-Wigner line shape, its pole mass and pole width are determined to be $(4025.5^{+2.0}_{-4.7}\\pm3.1) \\rm{MeV}/c^2$ and $(23.0\\pm 6.0\\pm 1.0) \\rm{MeV}$, respectively. The statistical significance of the observation is $7.4\\sigma$. The Born cross sections of $e^{+}e^{-}\\to Z_{c}(4025)^{0} \\pi^0\\to (D^{*} \\bar{D}^{*})^{0}\\pi^0$ are measured to be $(61.6\\pm8.2\\pm9.0) \\rm{pb}$ at $\\sqrt{s}=4.23 \\rm{GeV}$ and $(43.4\\pm8.0\\pm5.4) \\rm{pb}$ at $\\sqrt{s}=4.26 \\rm{GeV}$. The first uncertainties are statistical and the second are systematic.
A recursive method to calculate UV-divergent parts at one-loop level in dimensional regularization
Feng, Feng
2012-07-01
A method is introduced to calculate the UV-divergent parts at one-loop level in dimensional regularization. The method is based on recursion, and the basic integrals are just the scaleless integrals after the recursive reduction, which involve no other momentum scales except the loop momentum itself. The method can easily be implemented in any symbolic computer language, and an implementation in Mathematica is ready to use. Catalogue identifier: AELY_v1_0 Program summary URL: http://cpc.cs.qub.ac.uk/summaries/AELY_v1_0.html Program obtainable from: CPC Program Library, Queen's University, Belfast, N. Ireland Licensing provisions: Standard CPC licence, http://cpc.cs.qub.ac.uk/licence/licence.html No. of lines in distributed program, including test data, etc.: 26 361 No. of bytes in distributed program, including test data, etc.: 412 084 Distribution format: tar.gz Programming language: Mathematica Computer: Any computer where Mathematica is running. Operating system: Any capable of running Mathematica. Classification: 11.1 External routines: FeynCalc (http://www.feyncalc.org/), FeynArts (http://www.feynarts.de/) Nature of problem: To get the UV-divergent part of any one-loop expression. Solution method: UVPart is a Mathematica package where the recursive method has been implemented. Running time: In general it is below one second.
Vested Madsen, Matias; Macario, Alex; Yamamoto, Satoshi
2016-01-01
In this study, we examined the regularly scheduled, formal teaching sessions in a single anesthesiology residency program to (1) map the most common primary instructional methods, (2) map the use of 10 known teaching techniques, and (3) assess if residents scored sessions that incorporated active learning as higher quality than sessions with little or no verbal interaction between teacher and learner. A modified Delphi process was used to identify useful teaching techniques. A representative sample of each of the formal teaching session types was mapped, and residents anonymously completed a 5......; range, 0-9). Clinical applicability (85%) and attention grabbers (85%) were the 2 most common teaching techniques. Thirty-eight percent of the sessions defined learning objectives, and one-third of sessions engaged in active learning. The overall survey response rate equaled 42%, and passive sessions......
Zhuo, Congshan; Sagaut, Pierre
2017-06-01
In this paper, a variant of the acoustic multipole source (AMS) method is proposed within the framework of the lattice Boltzmann method. A quadrupole term is directly included in the stress system (equilibrium momentum flux), and the dependency of the quadrupole source in the inviscid limit upon the fortuitous discretization error reported in the works of E. M. Viggen [Phys. Rev. E 87, 023306 (2013)PLEEE81539-375510.1103/PhysRevE.87.023306] is removed. The regularized lattice Boltzmann (RLB) method with this variant AMS method is presented for the 2D and 3D acoustic problems in the inviscid limit, and without loss of generality, the D3Q19 model is considered in this work. To assess the accuracy and the advantage of the RLB scheme with this AMS for acoustic point sources, the numerical investigations and comparisons with the multiple-relaxation-time (MRT) models and the analytical solutions are performed on the 2D and 3D acoustic multipole point sources in the inviscid limit, including monopoles, x dipoles, and xx quadrupoles. From the present results, the good precision of this AMS method is validated, and the RLB scheme exhibits some superconvergence properties for the monopole sources compared with the MRT models, and both the RLB and MRT models have the same accuracy for the simulations of acoustic dipole and quadrupole sources. To further validate the capability of the RLB scheme with AMS, another basic acoustic problem, the acoustic scattering from a solid cylinder presented at the Second Computational Aeroacoustics Workshop on Benchmark Problems, is numerically considered. The directivity pattern of the acoustic field is computed at r=7.5; the present results agree well with the exact solutions. Also, the effects of slip and no-slip wall treatments within the regularized boundary condition on this pure acoustic scattering problem are tested, and compared with the exact solution, the slip wall treatment can present a better result. All simulations demonstrate
2008-01-01
This paper presents a novel disturbance function method to avoid turning-point singularities for the semi-regular hexagonal 6-SPS Gough-Stewart manipulator. Through analysis of the configuration bifurcation characteristics of the manipulator at the type-II singular points, it is found that the type-II singularities under a single input parameter belong to turning-point bifurcation. The configuration patterns for the manipulator to pass through the turning points are divided into three types: persistent configuration, non-persistent configuration, and path configuration. Utilizing the universal unfolding approach, the configuration bifurcation characteristics under perturbation parameters applied to the extendable legs are analyzed. The investigation reveals that all configuration branches that converge to the same singular point in the unperturbed system are separated in the disturbed system. Based on this discovery, a novel method for the parallel manipulator to pass through the singular points with the desired configuration is presented. The disturbance functions for the manipulator to pass through the turning points with the persistent and non-persistent configurations are then constructed. The method presented in this paper can be applied to avoid singularities in cases where the path and orientation of the manipulator are strictly programmed.
Hansen, Lars Kai; Rasmussen, Carl Edward; Svarer, C.
1994-01-01
Regularization, e.g., in the form of weight decay, is important for training and optimization of neural network architectures. In this work the authors provide a tool based on asymptotic sampling theory, for iterative estimation of weight decay parameters. The basic idea is to do a gradient descent...... in the estimated generalization error with respect to the regularization parameters. The scheme is implemented in the authors' Designer Net framework for network training and pruning, i.e., is based on the diagonal Hessian approximation. The scheme does not require essential computational overhead in addition...... to what is needed for training and pruning. The viability of the approach is demonstrated in an experiment concerning prediction of the chaotic Mackey-Glass series. The authors find that the optimized weight decays are relatively large for densely connected networks in the initial pruning phase, while...
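The idea of gradient descent in an estimated generalization error with respect to the weight decay parameter can be sketched in a linear setting. This is an assumption-laden stand-in: ridge regression replaces the neural network, a held-out validation error replaces the asymptotic sampling-theory estimate, and the gradient is taken by finite differences rather than via the diagonal Hessian approximation of the Designer Net framework.

```python
import numpy as np

def val_error(alpha, Xtr, ytr, Xval, yval):
    """Held-out error of ridge regression trained with weight decay alpha
    (a stand-in for the paper's estimated generalization error)."""
    n = Xtr.shape[1]
    w = np.linalg.solve(Xtr.T @ Xtr + alpha * np.eye(n), Xtr.T @ ytr)
    r = Xval @ w - yval
    return (r @ r) / len(yval)

def tune_weight_decay(Xtr, ytr, Xval, yval, alpha0=1.0, lr=0.5, iters=100):
    """Gradient descent on log(alpha) using a finite-difference gradient
    of the estimated generalization error."""
    log_a = np.log(alpha0)
    eps = 1e-4
    for _ in range(iters):
        g = (val_error(np.exp(log_a + eps), Xtr, ytr, Xval, yval)
             - val_error(np.exp(log_a - eps), Xtr, ytr, Xval, yval)) / (2 * eps)
        log_a -= lr * g                  # descend in log-space (keeps alpha > 0)
    return np.exp(log_a)

rng = np.random.default_rng(4)
X = rng.standard_normal((60, 20))
w_true = rng.standard_normal(20)
y = X @ w_true + 0.5 * rng.standard_normal(60)
Xtr, ytr, Xval, yval = X[:40], y[:40], X[40:], y[40:]
alpha_opt = tune_weight_decay(Xtr, ytr, Xval, yval)
print(alpha_opt)
```

Working in log(alpha) mirrors the common practice of tuning decay parameters multiplicatively and keeps the iterate strictly positive.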
Lucido, Mario; Panariello, Gaetano; Schettino, Fulvio
2017-01-01
The aim of this paper is the introduction of a new analytically regularizing procedure, based on Helmholtz decomposition and the Galerkin method, successfully employed to analyze the electromagnetic scattering by a zero-thickness, perfectly electrically conducting circular disk. After expanding the fields in cylindrical harmonics, the problem is formulated as an electric field integral equation in the vector Hankel transform domain. Assuming as unknowns the surface curl-free and divergence-free contributions of the surface current density, a second-kind Fredholm infinite matrix-operator equation is obtained by means of the Galerkin method with expansion functions reconstructing the expected physical behavior of the surface current density and with closed-form spectral domain counterparts, which form a complete set of orthogonal eigenfunctions of the most singular part of the integral operator. The coefficients of the scattering matrix are single improper integrals which can be quickly computed by means of an analytical asymptotic acceleration technique. Comparisons with the literature are provided in order to show the accuracy and efficiency of the presented technique.
Huang, Da; Wu, Yue-Liang
2012-07-01
The consistency of loop regularization (LORE) method is explored in multiloop calculations. A key concept of the LORE method is the introduction of irreducible loop integrals (ILIs) which are evaluated from the Feynman diagrams by adopting the Feynman parametrization and ultraviolet-divergence-preserving (UVDP) parametrization. It is then inevitable for the ILIs to encounter the divergences in the UVDP parameter space due to the generic overlapping divergences in the four-dimensional momentum space. By computing the so-called αβγ integrals arising from two-loop Feynman diagrams, we show how to deal with the divergences in the parameter space with the LORE method. By identifying the divergences in the UVDP parameter space to those in the subdiagrams, we arrive at the Bjorken-Drell analogy between Feynman diagrams and electrical circuits. The UVDP parameters are shown to correspond to the conductance or resistance in the electrical circuits, and the divergence in Feynman diagrams is ascribed to the infinite conductance or zero resistance. In particular, the sets of conditions required to eliminate the overlapping momentum integrals for obtaining the ILIs are found to be associated with the conservations of electric voltages, and the momentum conservations correspond to the conservations of electrical currents, which are known as the Kirchhoff laws in the electrical circuits analogy. As a practical application, we carry out a detailed calculation for one-loop and two-loop Feynman diagrams in the massive scalar ϕ 4 theory, which enables us to obtain the well-known logarithmic running of the coupling constant and the consistent power-law running of the scalar mass at two-loop level. Especially, we present an explicit demonstration on the general procedure of applying the LORE method to the multiloop calculations of Feynman diagrams when merging with the advantage of Bjorken-Drell's circuit analogy.
Coxeter, H S M
1973-01-01
Polytopes are geometrical figures bounded by portions of lines, planes, or hyperplanes. In plane (two dimensional) geometry, they are known as polygons and comprise such figures as triangles, squares, pentagons, etc. In solid (three dimensional) geometry they are known as polyhedra and include such figures as tetrahedra (a type of pyramid), cubes, icosahedra, and many more; the possibilities, in fact, are infinite! H. S. M. Coxeter's book is the foremost book available on regular polyhedra, incorporating not only the ancient Greek work on the subject, but also the vast amount of information
Vested Madsen, Matias; Macario, Alex; Yamamoto, Satoshi; Tanaka, Pedro
2016-06-01
In this study, we examined the regularly scheduled, formal teaching sessions in a single anesthesiology residency program to (1) map the most common primary instructional methods, (2) map the use of 10 known teaching techniques, and (3) assess if residents scored sessions that incorporated active learning as higher quality than sessions with little or no verbal interaction between teacher and learner. A modified Delphi process was used to identify useful teaching techniques. A representative sample of each of the formal teaching session types was mapped, and residents anonymously completed a 5-question written survey rating the session. The most common primary instructional methods were computer slides-based classroom lectures (66%), workshops (15%), simulations (5%), and journal club (5%). The number of teaching techniques used per formal teaching session averaged 5.31 (SD, 1.92; median, 5; range, 0-9). Clinical applicability (85%) and attention grabbers (85%) were the 2 most common teaching techniques. Thirty-eight percent of the sessions defined learning objectives, and one-third of sessions engaged in active learning. The overall survey response rate equaled 42%, and passive sessions had a mean score of 8.44 (range, 5-10; median, 9; SD, 1.2) compared with a mean score of 8.63 (range, 5-10; median, 9; SD, 1.1) for active sessions (P = 0.63). Slides-based classroom lectures were the most common instructional method, and faculty used an average of 5 known teaching techniques per formal teaching session. The overall education scores of the sessions as rated by the residents were high.
A NEW FINITE DIFFERENCE METHOD FOR THE REGULARIZED LONG-WAVE EQUATION
张鲁明; 常谦顺
2000-01-01
In this paper, a finite difference method for an initial-boundary value problem of the regularized long-wave equation is considered. An energy-conservative three-level finite difference scheme is proposed. Convergence and stability of the difference solution are proved. The scheme needs no iteration and thus requires less CPU time. Numerical experiment results demonstrate that the method is efficient and reliable.
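As a minimal illustration of discretizing the regularized long-wave (RLW) equation u_t + u_x + u·u_x − δ·u_xxt = 0, the sketch below uses a forward-Euler spectral scheme on a periodic domain. It is not the three-level conservative scheme of the paper; the function name, pulse, and step sizes are illustrative assumptions. One conservative feature carries over for free: the flux has zero spatial mean in Fourier space, so the discrete mass Σu·Δx is preserved.

```python
import numpy as np

# RLW equation u_t + u_x + u*u_x - delta*u_xxt = 0, periodic domain.
# In Fourier space: (1 + delta*k^2) u_t^ = -i*k * (u + u^2/2)^.
def rlw_step(u, dx, dt, delta=1.0):
    n = u.size
    k = 2 * np.pi * np.fft.fftfreq(n, d=dx)
    flux_hat = 1j * k * np.fft.fft(u + 0.5 * u ** 2)   # transform of (u + u^2/2)_x
    ut = np.fft.ifft(-flux_hat / (1 + delta * k ** 2)).real
    return u + dt * ut                                  # forward-Euler time step

x = np.linspace(0.0, 40.0, 256, endpoint=False)
dx = x[1] - x[0]
u = 0.3 * np.exp(-((x - 10.0) / 2.0) ** 2)             # smooth initial pulse
mass0 = u.sum() * dx
for _ in range(200):
    u = rlw_step(u, dx, dt=0.01)
mass1 = u.sum() * dx                                    # conserved by construction
```

The k = 0 Fourier mode of the flux vanishes identically, which is why the mass integral is conserved to rounding error even though this simple time-stepper conserves no energy functional.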
Regularized Structural Equation Modeling.
Jacobucci, Ross; Grimm, Kevin J; McArdle, John J
A new method is proposed that extends the use of regularization in both lasso and ridge regression to structural equation models. The method is termed regularized structural equation modeling (RegSEM). RegSEM penalizes specific parameters in structural equation models, with the goal of creating simpler, easier-to-understand models. Although regularization has gained wide adoption in regression, very little has transferred to models with latent variables. By adding penalties to specific parameters in a structural equation model, researchers gain a high level of flexibility in reducing model complexity, overcoming poorly fitting models, and creating models that are more likely to generalize to new samples. The proposed method was evaluated through a simulation study, two illustrative examples involving a measurement model, and one empirical example involving the structural part of the model to demonstrate RegSEM's utility.
Regularized GMRES Method for Solving Symm Integral Equations
闵涛; 赵苗苗; 胡刚
2013-01-01
The Symm integral equation is typically ill-posed in the sense of Hadamard and has an important significance in potential theory. A regularized GMRES method is presented in this article to reconstruct the solution of the Symm integral equation. Firstly, we present the discrete form of the Symm integral equation. Then, we transform the discrete equation into a well-posed equation by using a regularization method, the parameterized trust region method. Finally, we obtain the numerical solution of the Symm integral equation by applying the GMRES method. Compared with general regularization methods, the regularized GMRES method overcomes the difficulty of selecting the regularization parameter. In the numerical simulations, different methods are compared with the regularized GMRES method; the latter can reconstruct the numerical solution of the Symm integral equation efficiently in the presence of noisy or corrupted data, and the results show that the method is feasible and effective.
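The discretize-regularize-solve workflow described above can be sketched as follows. The logarithmic kernel, grid, and fixed Tikhonov parameter are illustrative stand-ins (the paper selects the parameter via a parameterized trust-region method), and GMRES is applied matrix-free to the regularized normal equations.

```python
import numpy as np
from scipy.sparse.linalg import LinearOperator, gmres

rng = np.random.default_rng(0)
n = 60
t = (np.arange(n) + 0.5) / n
# Midpoint-rule discretization of a first-kind equation with a log kernel
# (Symm-type); the small offset avoids the diagonal singularity.
K = -np.log(np.abs(t[:, None] - t[None, :]) + 1.0 / n) / n
x_true = np.sin(2 * np.pi * t)
b = K @ x_true + 1e-5 * rng.standard_normal(n)

lam = 1e-2  # fixed Tikhonov parameter (illustrative, not the paper's choice)
# Regularized normal equations (K^T K + lam I) x = K^T b, applied matrix-free.
op = LinearOperator((n, n), matvec=lambda v: K.T @ (K @ v) + lam * v)
rhs = K.T @ b
x, info = gmres(op, rhs, restart=60, maxiter=50)
res = np.linalg.norm(op.matvec(x) - rhs) / np.linalg.norm(rhs)
```

With `restart` equal to the problem size, GMRES runs unrestarted and converges on this small, well-regularized system; for large problems a preconditioner would replace the brute-force setting.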
Lie-heng Wang
2000-01-01
The abstract L2-norm error estimate of the nonconforming finite element method is established. A uniform L2-norm error estimate is obtained for the nonconforming finite element method for the second-order elliptic problem with the lowest regularity, i.e., in the case that the solution u ∈ H1(Ω) only. It is also shown that the L2-norm error bound we obtained is one order higher than the energy-norm error bound.
Chen, Ying; Lei, Yu-Hong; Li, Ning; Liang, Jian; Liu, Chuan; Liu, Jin-Long; Liu, Yong-Fu; Liu, Yu-Bin; Liu, Zhaofeng; Ma, Jian-Ping; Wang, Zhan-Lin; Zhang, Jian-Bo
2015-01-01
In this paper, low-energy scattering of the $(D^{*}\bar{D}^{*})^\pm$ meson system is studied within Lüscher's finite-size formalism using $N_{f}=2$ twisted mass gauge field configurations. With three different pion mass values, the $s$-wave threshold scattering parameters, namely the scattering length $a_0$ and the effective range $r_0$, are extracted in the $J^P=1^+$ channel. Our results indicate that, in this particular channel, the interaction between the two vector charmed mesons is weakly repulsive in nature and hence does not support the possibility of a shallow bound state of the two mesons, at least for the pion mass values studied. This study provides useful information on the nature of the newly discovered resonance-like structure $Z_c(4025)$ observed in various experiments.
Manifold Regularized Reinforcement Learning.
Li, Hongliang; Liu, Derong; Wang, Ding
2017-01-27
This paper introduces a novel manifold regularized reinforcement learning scheme for continuous Markov decision processes. Smooth feature representations for value function approximation can be automatically learned using the unsupervised manifold regularization method. The learned features are data-driven, and can be adapted to the geometry of the state space. Furthermore, the scheme provides a direct basis representation extension for novel samples during policy learning and control. The performance of the proposed scheme is evaluated on two benchmark control tasks, i.e., the inverted pendulum and the energy storage problem. Simulation results illustrate the concepts of the proposed scheme and show that it can obtain excellent performance.
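A minimal linear instance of the manifold-regularization idea (Laplacian-regularized least squares), shown as a generic illustration rather than the paper's reinforcement-learning scheme: a graph Laplacian built over labeled plus unlabeled points penalizes functions that vary sharply along the data geometry. All data, sizes, and parameters below are invented for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 2))
y_full = X[:, 0] + 0.1 * rng.standard_normal(60)
labeled = np.arange(10)                       # only 10 labels available

# k-NN graph Laplacian L = D - W over all 60 points (labeled + unlabeled)
d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
W = np.zeros((60, 60))
for i in range(60):
    for j in np.argsort(d2[i])[1:6]:          # 5 nearest neighbours
        W[i, j] = W[j, i] = 1.0
L = np.diag(W.sum(1)) - W

Xl, yl = X[labeled], y_full[labeled]
lam, gam = 1e-3, 1e-2
# minimize ||Xl w - yl||^2 + lam ||w||^2 + gam (Xw)^T L (Xw); closed form:
w = np.linalg.solve(Xl.T @ Xl + lam * np.eye(2) + gam * X.T @ L @ X, Xl.T @ yl)
```

The term gam·wᵀXᵀLXw is the manifold penalty: it uses the unlabeled points' geometry even though only ten labels enter the data-fit term.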
Zhang Liang; Huang Si-Xun; Shen Chun; Shi Wei-Lai
2011-01-01
The sea level pressure field can be computed from sea surface winds retrieved from satellite microwave scatterometer measurements, based on variational assimilation in combination with a regularization method given in part I of this paper. First, the validity of the new method is proved with a simulation experiment. Then, a new processing procedure for sea level pressure retrieval is built by combining this method with the geostrophic wind, which is computed from the scatterometer 10-meter wind using the University of Washington planetary boundary layer model. Finally, the feasibility of the method is proved using an actual case study.
Chen, Maomao; Su, Han; Zhou, Yuan; Cai, Chuangjian; Zhang, Dong; Luo, Jianwen
2016-12-01
Dynamic fluorescence molecular tomography (FMT) is a promising technique for the study of the metabolic process of fluorescent agents in the biological body in vivo, and the quality of the parametric images relies heavily on the accuracy of the reconstructed FMT images. In typical dynamic FMT implementations, the imaged object is continuously monitored for more than 50 minutes. During each minute, a set of the fluorescent measurements is acquired and the corresponding FMT image is reconstructed. It is difficult to manually set the regularization parameter in the reconstruction of each FMT image. In this paper, the parametric images obtained with the L-curve and U-curve methods are quantitatively evaluated through numerical simulations, phantom experiments and in vivo experiments. The results illustrate that the U-curve method obtains better accuracy, stronger robustness and higher noise-resistance in parametric imaging. Therefore, it is a promising approach to automatic selection of the regularization parameters for dynamic FMT.
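A toy sketch of automatic regularization-parameter selection of the kind compared in the paper. Here the L-curve corner is located with a simple chord-distance ("triangle") detector on a synthetic ill-posed system; the U-curve method favored by the paper works with a different functional of the same residual and solution norms. Everything below (spectrum, noise level, grid of λ) is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = 10.0 ** np.linspace(0, -8, n)          # rapidly decaying spectrum
A = Q * s                                   # A = Q @ diag(s), ill-conditioned
x_true = rng.standard_normal(n)
b = A @ x_true + 1e-3 * rng.standard_normal(n)

def tikhonov(lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ b)

lams = 10.0 ** np.linspace(-14, 2, 60)
pts = np.array([[np.log(np.linalg.norm(A @ tikhonov(l) - b)),
                 np.log(np.linalg.norm(tikhonov(l)))] for l in lams])
# "Triangle" corner detector: point farthest from the chord between endpoints.
p0, p1 = pts[0], pts[-1]
c = (p1 - p0) / np.linalg.norm(p1 - p0)
dist = np.abs((pts[:, 0] - p0[0]) * c[1] - (pts[:, 1] - p0[1]) * c[0])
lam_star = lams[np.argmax(dist)]
x_star, x_unreg = tikhonov(lam_star), tikhonov(lams[0])
```

The corner balances data fit against solution size: the (essentially) unregularized solution amplifies the noise through the tiny singular values, while the corner choice suppresses it.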
Kabana, Sonja; Ambrosini, G.; Arsenescu, R.; Baglin, C.; Beringer, J.; Borer, K.; Bussiere, A.; Dittus, F.; Elsener, K.; Gorodetzky, Ph.; Guillaud, J.P.; Hess, P.; Klingenberg, R.; Linden, T.; Lohmann, K.D.; Mommsen, R.; Moser, U.; Pretzl, K.; Schacher, J.; Stoffel, F.; Tuominiemi, J.; Weber, M
1999-12-27
The impact parameter dependence of π±, K±, p, p̄, d and d̄ yields produced in fixed-target lead+lead collisions at 158 A GeV incident energy is presented. The particle yields are measured near zero transverse momentum and in the forward rapidity region.
Ablikim, M; Albayrak, O; Ambrose, D J; An, F F; An, Q; Bai, J Z; Ferroli, R Baldini; Ban, Y; Becker, J; Bennett, J V; Bertani, M; Bian, J M; Boger, E; Bondarenko, O; Boyko, I; Braun, S; Briere, R A; Bytev, V; Cai, H; Cai, X; Cakir, O; Calcaterra, A; Cao, G F; Cetin, S A; Chang, J F; Chelkov, G; Chen, G; Chen, H S; Chen, J C; Chen, M L; Chen, S J; Chen, X R; Chen, Y B; Cheng, H P; Chu, Y P; Cronin-Hennessy, D; Dai, H L; Dai, J P; Dedovich, D; Deng, Z Y; Denig, A; Denysenko, I; Destefanis, M; Ding, W M; Ding, Y; Dong, L Y; Dong, M Y; Du, S X; Fang, J; Fang, S S; Fava, L; Feng, C Q; Friedel, P; Fu, C D; Fu, J L; Fuks, O; Gao, Y; Geng, C; Goetzen, K; Gong, W X; Gradl, W; Greco, M; Gu, M H; Gu, Y T; Guan, Y H; Guo, A Q; Guo, L B; Guo, T; Guo, Y P; Han, Y L; Harris, F A; He, K L; He, M; He, Z Y; Held, T; Heng, Y K; Hou, Z L; Hu, C; Hu, H M; Hu, J F; Hu, T; Huang, G M; Huang, G S; Huang, J S; Huang, L; Huang, X T; Huang, Y; Hussain, T; Ji, C S; Ji, Q; Ji, Q P; Ji, X B; Ji, X L; Jiang, L L; Jiang, X S; Jiao, J B; Jiao, Z; Jin, D P; Jin, S; Jing, F F; Kalantar-Nayestanaki, N; Kavatsyuk, M; Kloss, B; Kopf, B; Kornicer, M; Kuehn, W; Lai, W; Lange, J S; Lara, M; Larin, P; Leyhe, M; Li, C H; Li, Cheng; Li, Cui; Li, D M; Li, F; Li, G; Li, H B; Li, J C; Li, K; Li, Lei; Li, P R; Li, Q J; Li, W D; Li, W G; Li, X L; Li, X N; Li, X Q; Li, X R; Li, Z B; Liang, H; Liang, Y F; Liang, Y T; Liao, G R; Liao, X T; Lin, D X; Liu, B J; Liu, C L; Liu, C X; Liu, F H; Liu, Fang; Liu, Feng; Liu, H; Liu, H B; Liu, H H; Liu, H M; Liu, H W; Liu, J P; Liu, K; Liu, K Y; Liu, L D; Liu, P L; Liu, Q; Liu, S B; Liu, X; Liu, Y B; Liu, Z A; Liu, Zhiqiang; Liu, Zhiqing; Loehner, H; Lou, X C; Lu, G R; Lu, H J; Lu, J G; Lu, X R; Lu, Y P; Luo, C L; Luo, M X; Luo, T; Luo, X L; Lv, M; Ma, F C; Ma, H L; Ma, Q M; Ma, S; Ma, T; Ma, X Y; Maas, F E; Maggiora, M; Malik, Q A; Mao, Y J; Mao, Z P; Messchendorp, J G; Min, J; Min, T J; Mitchell, R E; Mo, X H; Moeini, H; Morales, C Morales; Moriya, K; Muchnoi, N Yu; 
Muramatsu, H; Nefedov, Y; Nikolaev, I B; Ning, Z; Olsen, S L; Ouyang, Q; Pacetti, S; Park, J W; Pelizaeus, M; Peng, H P; Peters, K; Ping, J L; Ping, R G; Poling, R; Prencipe, E; Qi, M; Qian, S; Qiao, C F; Qin, L Q; Qin, X S; Qin, Y; Qin, Z H; Qiu, J F; Rashid, K H; Redmer, C F; Rong, G; Ruan, X D; Sarantsev, A; Shao, M; Shen, C P; Shen, X Y; Sheng, H Y; Shepherd, M R; Song, W M; Song, X Y; Spataro, S; Spruck, B; Sun, D H; Sun, G X; Sun, J F; Sun, S S; Sun, Y J; Sun, Y Z; Sun, Z J; Sun, Z T; Tang, C J; Tang, X; Tapan, I; Thorndike, E H; Toth, D; Ullrich, M; Uman, I; Varner, G S; Wang, B; Wang, D; Wang, D Y; Wang, K; Wang, L L; Wang, L S; Wang, M; Wang, P; Wang, P L; Wang, Q J; Wang, S G; Wang, X F; Wang, X L; Wang, Y D; Wang, Y F; Wang, Y Q; Wang, Z; Wang, Z G; Wang, Z Y; Wei, D H; Wei, J B; Weidenkaff, P; Wen, Q G; Wen, S P; Werner, M; Wiedner, U; Wu, L H; Wu, N; Wu, S X; Wu, W; Wu, Z; Xia, L G; Xia, Y X; Xiao, Z J; Xie, Y G; Xiu, Q L; Xu, G F; Xu, Q J; Xu, Q N; Xu, X P; Xu, Z R; Xue, Z; Yan, L; Yan, W B; Yan, Y H; Yang, H X; Yang, Y; Yang, Y X; Ye, H; Ye, M; Ye, M H; Yu, B X; Yu, C X; Yu, H W; Yu, J S; Yu, S P; Yuan, C Z; Yuan, Y; Zafar, A A; Zallo, A; Zang, S L; Zeng, Y; Zhang, B X; Zhang, B Y; Zhang, C; Zhang, C C; Zhang, D H; Zhang, H H; Zhang, H Y; Zhang, J Q; Zhang, J W; Zhang, J Y; Zhang, J Z; Zhang, LiLi; Zhang, R; Zhang, S H; Zhang, X J; Zhang, X Y; Zhang, Y; Zhang, Y H; Zhang, Z P; Zhang, Z Y; Zhang, Zhenghao; Zhao, G; Zhao, H S; Zhao, J W; Zhao, Lei; Zhao, Ling; Zhao, M G; Zhao, Q; Zhao, S J; Zhao, T C; Zhao, X H; Zhao, Y B; Zhao, Z G; Zhemchugov, A; Zheng, B; Zheng, J P; Zheng, Y H; Zhong, B; Zhou, L; Zhou, X; Zhou, X K; Zhou, X R; Zhu, C; Zhu, K; Zhu, K J; Zhu, S H; Zhu, X L; Zhu, Y C; Zhu, Y S; Zhu, Z A; Zhuang, J; Zou, B S; Zou, J H
2013-01-01
We study the process $e^+e^- \to (D^{*} \bar{D}^{*})^{\pm} \pi^\mp$ at a center-of-mass energy of 4.26 GeV using an 827 pb$^{-1}$ data sample obtained with the BESIII detector at the Beijing Electron Positron Collider. Based on a partial reconstruction technique, the Born cross section is measured to be $(137\pm9\pm15)$ pb. We observe a structure near the $(D^{*} \bar{D}^{*})^{\pm}$ threshold in the $\pi^\mp$ recoil mass spectrum, which we denote as the $Z^{\pm}_c(4025)$. The measured mass and width of the structure are $(4026.3\pm2.6\pm3.7)$ MeV/c$^2$ and $(24.8\pm5.6\pm7.7)$ MeV, respectively. Its production ratio $\frac{\sigma(e^+e^-\to Z^{\pm}_c(4025)\pi^\mp \to (D^{*} \bar{D}^{*})^{\pm} \pi^\mp)}{\sigma(e^+e^-\to (D^{*} \bar{D}^{*})^{\pm} \pi^\mp)}$ is determined to be $0.65\pm0.09\pm0.06$. The first uncertainties are statistical and the second systematic.
Online co-regularized algorithms
Ruijter, T. de; Tsivtsivadze, E.; Heskes, T.
2012-01-01
We propose an online co-regularized learning algorithm for classification and regression tasks. We demonstrate that, by sequentially co-regularizing prediction functions on unlabeled data points, our algorithm provides improved performance in comparison to supervised methods on several UCI benchmarks.
Rijssel, Jos van; Kuipers, Bonny W.M.; Erné, Ben H., E-mail: B.H.Erne@uu.nl
2014-03-15
A numerical inversion method known from the analysis of light scattering by colloidal dispersions is now applied to magnetization curves of ferrofluids. The distribution of magnetic particle sizes or dipole moments is determined without assuming that the distribution is unimodal or of a particular shape. The inversion method enforces positive number densities via a non-negative least squares procedure. It is tested successfully on experimental and simulated data for ferrofluid samples with known multimodal size distributions. The created computer program MINORIM is made available on the web.
Highlights:
• A method from light scattering is applied to analyze ferrofluid magnetization curves.
• A magnetic size distribution is obtained without prior assumption of its shape.
• The method is tested successfully on ferrofluids with a known size distribution.
• The practical limits of the method are explored with simulated data including noise.
• This method is implemented in the program MINORIM, freely available online.
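The non-negative least squares inversion described above can be sketched in a few lines. The kernel of Langevin responses and the bimodal test distribution are illustrative assumptions in the spirit of magnetization-curve analysis; this is not the MINORIM program itself.

```python
import numpy as np
from scipy.optimize import nnls

# Forward model: magnetization-like curve as a weighted sum of Langevin
# responses L(x) = coth(x) - 1/x over a grid of candidate moments.
def langevin(x):
    return 1.0 / np.tanh(x) - 1.0 / x      # arguments are strictly positive here

H = np.linspace(0.05, 5.0, 80)             # applied fields (arbitrary units)
m = np.linspace(0.5, 10.0, 40)             # candidate dipole moments
K = langevin(np.outer(H, m))               # kernel matrix, shape (80, 40)

w_true = np.zeros(40)
w_true[[8, 30]] = [1.0, 0.5]               # bimodal test distribution
rng = np.random.default_rng(5)
signal = K @ w_true + 1e-4 * rng.standard_normal(80)

w_fit, rnorm = nnls(K, signal)             # nonnegativity enforced by NNLS
```

The nonnegativity constraint is what stabilizes the inversion here: an unconstrained least-squares fit of this nearly collinear kernel would produce large oscillating positive and negative weights.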
Kropf, Pascal; Shmuel, Amir
2016-07-01
Estimation of current source density (CSD) from the low-frequency part of extracellular electric potential recordings is an unstable linear inverse problem. To make the estimation possible in an experimental setting where recordings are contaminated with noise, it is necessary to stabilize the inversion. Here we present a unified framework for zero- and higher-order singular-value-decomposition (SVD)-based spectral regularization of 1D (linear) CSD estimation from local field potentials. The framework is based on two general approaches commonly employed for solving inverse problems: quadrature and basis function expansion. We first show that both inverse CSD (iCSD) and kernel CSD (kCSD) fall into the category of basis function expansion methods. We then use these general categories to introduce two new estimation methods, quadrature CSD (qCSD), based on discretizing the CSD integral equation with a chosen quadrature rule, and representer CSD (rCSD), an even-determined basis function expansion method that uses the problem's data kernels (representers) as basis functions. To determine the best candidate methods to use in the analysis of experimental data, we compared the different methods on simulations under three regularization schemes (Tikhonov, tSVD, and dSVD), three regularization parameter selection methods (NCP, L-curve, and GCV), and seven different a priori spatial smoothness constraints on the CSD distribution. This resulted in a comparison of 531 estimation schemes. We evaluated the estimation schemes according to their source reconstruction accuracy by testing them using different simulated noise levels, lateral source diameters, and CSD depth profiles. We found that ranking schemes according to the average error over all tested conditions results in a reproducible ranking, where the top schemes are found to perform well in the majority of tested conditions. However, there is no single best estimation scheme that outperforms all others under all tested conditions.
Nonconvex Regularization in Remote Sensing
Tuia, Devis; Flamary, Remi; Barlaud, Michel
2016-11-01
In this paper, we study the effect of different regularizers and their implications in high-dimensional image classification and sparse linear unmixing. Although kernelization and sparse methods are globally accepted solutions for processing data in high dimensions, we present here a study on the impact of the form of regularization used and of its parametrization. We consider regularization via the traditional squared ℓ2 and sparsity-promoting ℓ1 norms, as well as more unconventional nonconvex regularizers (ℓp and the Log Sum Penalty). We compare their properties and advantages on several classification and linear unmixing tasks and provide advice on the choice of the best regularizer for the problem at hand. Finally, we also provide a fully functional toolbox for the community.
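The qualitative difference between these regularizers is easiest to see through their proximal (shrinkage) operators on a single coefficient: squared ℓ2 rescales, ℓ1 soft-thresholds (producing exact zeros), and a nonconvex log-sum penalty thresholds small values aggressively while shrinking large values less. The log-sum prox below is evaluated by a brute-force grid search for brevity; the epsilon parameter is an illustrative assumption.

```python
import numpy as np

def prox_l2(v, lam):          # argmin_x 0.5*(x - v)^2 + 0.5*lam*x^2
    return v / (1.0 + lam)

def prox_l1(v, lam):          # soft thresholding (exact zeros below lam)
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

def prox_logsum(v, lam, eps=0.1):
    # numeric argmin of 0.5*(x - v)^2 + lam*log(1 + |x|/eps)
    xs = np.linspace(-abs(v) - 1.0, abs(v) + 1.0, 20001)
    obj = 0.5 * (xs - v) ** 2 + lam * np.log1p(np.abs(xs) / eps)
    return xs[np.argmin(obj)]

v = np.array([0.05, 0.5, 3.0])
print(prox_l2(v, 0.2), prox_l1(v, 0.2), [prox_logsum(t, 0.2) for t in v])
```

For a large coefficient (v = 3) the log-sum prox shrinks less than the ℓ1 soft threshold, while for a small one (v = 0.05) it snaps to zero just as ℓ1 does; the ℓ2 prox never produces exact zeros. This is the bias/sparsity trade-off the paper examines.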
ZOU Zhi-Yun; MAO Bao-Hua; HAO Hai-Ming; GAO Jian-Zhi; YANG Jie-Jiao
2009-01-01
Addressing deficiencies in Watts and Strogatz's small-world network model, we present a new regular model for establishing small-world networks. Besides the small-world property, this model has other properties such as accurate control of the average shortest path length L and the average clustering coefficient C, a regular network topology, and enhanced network robustness. This method substantially improves the construction of small-world networks, so that the regular small-world network closely resembles actual networks. We also present detailed studies on the relationships among the number of edges, L, and C in the regular small-world network. This research lays the foundation for the establishment of the regular small-world network and serves as good guidance for further research on this model and its applications.
Regularization with a pruning prior
Goutte, Cyril; Hansen, Lars Kai
1997-01-01
We investigate the use of a regularization prior that we show has pruning properties. Analyses are conducted both within a Bayesian framework and with the generalization method, on a simple toy problem. Results are thoroughly compared with those obtained with a traditional weight decay.
Regularizing portfolio optimization
Still, Susanne; Kondor, Imre
2010-07-01
The optimization of large portfolios displays an inherent instability due to estimation error. This poses a fundamental problem, because solutions that are not stable under sample fluctuations may look optimal for a given sample, but are, in effect, very far from optimal with respect to the average risk. In this paper, we approach the problem from the point of view of statistical learning theory. The occurrence of the instability is intimately related to over-fitting, which can be avoided using known regularization methods. We show how regularized portfolio optimization with the expected shortfall as a risk measure is related to support vector regression. The budget constraint dictates a modification. We present the resulting optimization problem and discuss the solution. The L2 norm of the weight vector is used as a regularizer, which corresponds to a diversification 'pressure'. This means that diversification, besides counteracting downward fluctuations in some assets by upward fluctuations in others, is also crucial because it improves the stability of the solution. The approach we provide here allows for the simultaneous treatment of optimization and diversification in one framework that enables the investor to trade off between the two, depending on the size of the available dataset.
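The stabilizing role of the L2 regularizer can be shown on a minimum-variance toy problem with a budget constraint Σw = 1 (the paper itself regularizes an expected-shortfall objective via its connection to support vector regression; this sketch only illustrates the diversification-pressure effect, and all data are synthetic).

```python
import numpy as np

def reg_min_variance(C, lam):
    """argmin_w w^T C w + lam ||w||^2 subject to sum(w) = 1 (closed form)."""
    n = C.shape[0]
    w = np.linalg.solve(C + lam * np.eye(n), np.ones(n))
    return w / w.sum()

rng = np.random.default_rng(3)
R = rng.standard_normal((30, 10)) * 0.02    # 30 noisy return observations
C = np.cov(R, rowvar=False)                  # poorly estimated covariance
w_raw = reg_min_variance(C, 0.0)             # fits the estimation noise
w_reg = reg_min_variance(C, 0.1)             # pulled toward equal weights
```

With only 30 observations for 10 assets, the unregularized weights chase sampling noise in the covariance estimate; the L2 penalty pulls the solution toward the diversified equal-weight portfolio, which is exactly the "diversification pressure" discussed above.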
Yong Hua LI; Hai Bin KAN; Bing Jun YU
2004-01-01
In this paper, a special kind of partial algebra called a projective partial groupoid is defined. It is proved that the inverse image of all projections of a fundamental weak regular *-semigroup under the homomorphism induced by the maximum idempotent-separating congruence of a weak regular *-semigroup has a projective partial groupoid structure. Moreover, a weak regular *-product which connects a fundamental weak regular *-semigroup with the corresponding projective partial groupoid is defined and characterized. It is finally proved that every weak regular *-product is in fact a weak regular *-semigroup and that any weak regular *-semigroup can be constructed in this way.
2000-01-01
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University. This thesis examines the development of value for money (VFM) audit methods used by the National Audit Office (NAO) and considers what factors have influenced the identified changes. It also considers how developments compare with those elsewhere in Europe. The thesis is based on examination of more than 300 NAO reports, interviews with senior staff, focus groups, a thorough review of relev...
Travel time calculation in regular 3D grid in local and regional scale using fast marching method
Polkowski, M.
2015-12-01
Local and regional 3D seismic velocity models of the crust and sediments are very important for numerous techniques such as mantle and core tomography, localization of local and regional events, and others. Most of those techniques require calculation of wave travel time through the 3D model. This can be achieved using multiple approaches, from simple ray tracing to advanced full-waveform calculation. In this study a simple and efficient implementation of the fast marching method is presented. This method provides more information than ray tracing and is much less complicated than full-waveform methods, making it a good compromise. The presented code is written in C++, well commented, and easy to modify for different types of studies. Additionally, performance is discussed in detail, including possibilities of multithreading and massive parallelism such as GPU computing. The source code will be published in 2016 as part of the PhD thesis. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
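A compact 2D illustration of the fast marching idea (the study's code is C++ and 3D): a first-order upwind solve of the eikonal equation |∇T| = 1/F on a uniform grid with speed F = 1, using a binary heap as the narrow band. This simplified sketch skips exact source initialization, so values carry the usual O(h) point-source error; the function name and grid size are invented for the example.

```python
import heapq
import numpy as np

def fast_march(n, src, h=1.0):
    """First-order fast marching on an n x n grid, unit speed."""
    T = np.full((n, n), np.inf)
    done = np.zeros((n, n), dtype=bool)
    T[src] = 0.0
    heap = [(0.0, src)]
    while heap:
        t, (i, j) = heapq.heappop(heap)
        if done[i, j]:
            continue                          # stale heap entry
        done[i, j] = True
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < n and 0 <= b < n and not done[a, b]:
                tx = min(T[a - 1, b] if a > 0 else np.inf,
                         T[a + 1, b] if a < n - 1 else np.inf)
                ty = min(T[a, b - 1] if b > 0 else np.inf,
                         T[a, b + 1] if b < n - 1 else np.inf)
                lo, hi = sorted((tx, ty))
                if hi == np.inf or hi - lo >= h:   # one-sided update
                    tnew = lo + h
                else:                              # two-sided quadratic update
                    tnew = 0.5 * (lo + hi + np.sqrt(2 * h * h - (hi - lo) ** 2))
                if tnew < T[a, b]:
                    T[a, b] = tnew
                    heapq.heappush(heap, (tnew, (a, b)))
    return T

T = fast_march(101, (50, 50))
```

On a uniform-speed grid the result approximates Euclidean distance from the source, with the largest discretization error along diagonals; heterogeneous speed only changes the h/F term in the update.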
A New Method for Achieving an Initial Regular Solution of a Linear Programming Problem
梁平; 孙艳华; 魏德宾; 张相斌
2008-01-01
A method is provided in this paper for finding an initial regular solution of a linear programming problem. The key to this method is to solve an auxiliary linear programming problem instead of introducing any artificial variable or constraint. Compared with the traditional method of achieving a regular solution by introducing an artificial constraint, it has the advantages of saving memory and requiring little computational effort.
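For context, the classical device the paper improves upon is the artificial-variable phase-one problem: to find an initial feasible point of Ax = b, x ≥ 0, minimize the sum of artificial slacks s; the original problem is feasible exactly when the auxiliary optimum is zero. The sketch below (with an invented 2x3 system) uses SciPy's `linprog`; the paper's contribution is an auxiliary LP that avoids the artificial variables altogether.

```python
import numpy as np
from scipy.optimize import linprog

A = np.array([[1.0, 1.0, 1.0],
              [2.0, 1.0, 0.0]])
b = np.array([4.0, 5.0])          # nonnegative, so s = b is trivially feasible

m, n = A.shape
# variables [x, s]; minimize 1^T s subject to A x + I s = b, x >= 0, s >= 0
c = np.concatenate([np.zeros(n), np.ones(m)])
A_eq = np.hstack([A, np.eye(m)])
res = linprog(c, A_eq=A_eq, b_eq=b, bounds=[(0, None)] * (n + m))
x0 = res.x[:n]                    # feasible starting point when res.fun == 0
```

If some b_i were negative, the corresponding row would first be multiplied by -1 so the all-artificial starting basis remains feasible.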
von Larcher, Thomas; Blome, Therese; Klein, Rupert; Schneider, Reinhold; Wolf, Sebastian; Huber, Benjamin
2016-04-01
Handling high-dimensional data sets, such as those occurring in turbulent flows or in certain types of multiscale behaviour in the Geosciences, is one of the big challenges in numerical analysis and scientific computing. A suitable solution is to represent those large data sets in an appropriate compact form. In this context, tensor product decomposition methods currently emerge as an important tool. One reason is that these methods often enable one to attack high-dimensional problems successfully; another is that they allow for very compact representations of large data sets. We follow the novel Tensor-Train (TT) decomposition method to support the development of improved understanding of the multiscale behavior and the development of compact storage schemes for solutions of such problems. One long-term goal of the project is the construction of a self-consistent closure for Large Eddy Simulations (LES) of turbulent flows that explicitly exploits the tensor product approach's capability of capturing self-similar structures. Secondly, we focus on a mixed deterministic-stochastic subgrid scale modelling strategy currently under development for application in Finite Volume Large Eddy Simulation (LES) codes. Advanced methods of time series analysis for the data-based construction of stochastic models with inherently non-stationary statistical properties, and concepts of information theory based on a modified Akaike information criterion and on the Bayesian information criterion for model discrimination, are used to construct surrogate models for the non-resolved flux fluctuations. Vector-valued auto-regressive models with external influences form the basis for the modelling approach [1], [2], [4]. Here, we present the reconstruction capabilities of the two modeling approaches tested against 3D turbulent channel flow data computed by direct numerical simulation (DNS) for an incompressible, isothermal fluid at Reynolds number Reτ = 590 (computed by [3]). References [1] I
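The core of the Tensor-Train representation is the TT-SVD sweep: reshape, truncate an SVD, keep the left factor as a core, and carry the rest forward. The sketch below is a minimal generic implementation (function names and the rank-2 test tensor are invented for illustration), not the project's storage scheme.

```python
import numpy as np

def tt_svd(tensor, rmax=20, tol=1e-12):
    """Sweep of reshapes + truncated SVDs producing TT cores."""
    dims = tensor.shape
    cores, r = [], 1
    mat = tensor.reshape(r * dims[0], -1)
    for k in range(len(dims) - 1):
        U, s, Vt = np.linalg.svd(mat, full_matrices=False)
        rank = max(1, min(rmax, int(np.sum(s > tol * s[0]))))
        cores.append(U[:, :rank].reshape(r, dims[k], rank))
        r = rank
        mat = (s[:rank, None] * Vt[:rank]).reshape(r * dims[k + 1], -1)
    cores.append(mat.reshape(r, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    out = cores[0]
    for core in cores[1:]:
        out = np.tensordot(out, core, axes=([out.ndim - 1], [0]))
    return out.reshape([c.shape[1] for c in cores])

# rank-2 test tensor: sum of two separable terms
rng = np.random.default_rng(6)
a, b, c, d, e, f = rng.standard_normal((6, 8))
X = np.einsum('i,j,k->ijk', a, b, c) + np.einsum('i,j,k->ijk', d, e, f)
cores = tt_svd(X)
X_hat = tt_reconstruct(cores)
```

For this tensor the TT ranks found by the sweep are 2, so the 8x8x8 array is stored in three small cores; the compression is what makes the format attractive for the high-dimensional data sets discussed above.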
MAXIMAL POINTS OF A REGULAR TRUTH FUNCTION
Every canonical linearly separable truth function is a regular function, but not every regular truth function is linearly separable. The most promising method of determining which of the regular truth functions are linearly separable requires finding their maximal and minimal points. In this report a quick, systematic method is developed for finding the maximal points of any regular truth function in terms of its arithmetic invariants. (Author)
Solving the Inverse Couple Stress Problem via a Regularization Method
姚宇新; 薛齐文
2011-01-01
Tikhonov's regularization approach has been used to identify parameters in the inverse couple-stress problem, with Bregman distances and weighted Bregman distances used in the construction of the regularization terms of the Tikhonov function. The inverse problem is formulated implicitly as an optimization problem whose cost functional is the squared residuals between calculated and measured quantities. A FE model is given which takes account of inhomogeneity and facilitates sensitivity analysis for the direct and inverse problems. Satisfactory numerical validation is given, including a preliminary investigation of the effect of noisy data on the results and of the computational efficiency for different regularization terms. Results show that the proposed method can identify parameters of the inverse couple-stress problem with high computational precision and efficiency and with robustness to noisy data. Using the weighted Bregman distance function as the regularization term improves computational efficiency.
Renata Bujak
2016-07-01
Non-targeted metabolomics constitutes a part of systems biology and aims to determine many metabolites in complex biological samples. Datasets obtained in non-targeted metabolomics studies are multivariate and high-dimensional due to the sensitivity of mass spectrometry-based detection methods as well as the complexity of biological matrices. Proper selection of the variables which contribute to group classification is a crucial step, especially in metabolomics studies which are focused on searching for disease biomarker candidates. In the present study, three different statistical approaches were tested using two metabolomics datasets (the RH and PH studies). Orthogonal projections to latent structures-discriminant analysis (OPLS-DA), without and with multiple testing correction, as well as the least absolute shrinkage and selection operator (LASSO) were tested and compared. For the RH study, the OPLS-DA model built without multiple testing correction selected 46 and 218 variables based on VIP criteria using Pareto and UV scaling, respectively. In the case of the PH study, 217 and 320 variables were selected based on VIP criteria using Pareto and UV scaling, respectively. In the RH study, the OPLS-DA model built with multiple testing correction selected 4 and 19 variables as statistically significant with Pareto and UV scaling, respectively. For the PH study, 14 and 18 variables were selected based on VIP criteria with Pareto and UV scaling, respectively. Additionally, the concept and fundamentals of the least absolute shrinkage and selection operator (LASSO), with a bootstrap procedure evaluating the reproducibility of results, were demonstrated. In the RH and PH studies, the LASSO selected 14 and 4 variables with reproducibility between 99.3% and 100%. However, despite the popularity of the PLS-DA and OPLS-DA methods in metabolomics, it should be highlighted that they do not control type I or type II error, but only arbitrarily establish a cut-off value for PLS-DA loadings.
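The LASSO-with-bootstrap workflow described above can be sketched as follows. A tiny coordinate-descent LASSO stands in for a dedicated package, and the synthetic data (two informative variables among twenty) and penalty value are illustrative assumptions, not the paper's metabolomics data.

```python
import numpy as np

def lasso_cd(X, y, lam, n_sweeps=100):
    """Coordinate-descent LASSO: min 0.5||y - Xw||^2 + lam*||w||_1."""
    n, p = X.shape
    w = np.zeros(p)
    r = y.copy()                                # residual y - X @ w
    col_ss = (X ** 2).sum(axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            rho = X[:, j] @ r + col_ss[j] * w[j]        # partial correlation
            w_new = np.sign(rho) * max(abs(rho) - lam, 0.0) / col_ss[j]
            r += X[:, j] * (w[j] - w_new)               # maintain residual
            w[j] = w_new
    return w

rng = np.random.default_rng(4)
n, p = 100, 20
X = rng.standard_normal((n, p))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.5 * rng.standard_normal(n)

# bootstrap selection frequency: how reproducibly is each variable selected?
freq = np.zeros(p)
for _ in range(50):
    idx = rng.integers(0, n, n)                 # resample with replacement
    freq += (lasso_cd(X[idx], y[idx], lam=30.0) != 0)
freq /= 50
```

The selection frequency across bootstrap resamples is the reproducibility measure mentioned in the text: informative variables are selected in (nearly) every resample, noise variables only sporadically.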
Ranganathan, Sushilee; Izotov, Dmitry; Kraka, Elfi; Cremer, Dieter
2009-08-01
The Automated Protein Structure Analysis (APSA) method, which describes the protein backbone as a smooth line in three-dimensional space and characterizes it by curvature kappa and torsion tau as a function of arc length s, was applied to 77 proteins to determine all secondary structural units via specific kappa(s) and tau(s) patterns. A total of 533 alpha-helices and 644 beta-strands were recognized by APSA, whereas DSSP gives 536 and 651 units, respectively. Kinks and distortions were quantified, and the boundaries (entry and exit) of secondary structures were classified. Similarity between proteins can be easily quantified using APSA, as was demonstrated for the roll architecture of the proteins ubiquitin and spinach ferredoxin. A twenty-by-twenty comparison of all alpha domains showed that the curvature-torsion patterns generated by APSA provide an accurate and meaningful similarity measurement for secondary, super secondary, and tertiary protein structure. APSA is shown to accurately reflect the conformation of the backbone, effectively reducing three-dimensional structure information to two-dimensional representations that are easy to interpret and understand.
Mode-sum regularization of $\left\langle \phi^{2}\right\rangle$ in the angular-splitting method
Levi, Adam
2016-01-01
The computation of the renormalized stress-energy tensor or $\left\langle\phi^{2}\right\rangle_{ren}$ in curved spacetime is a challenging task, at both the conceptual and technical levels. Recently we developed a new approach to compute such renormalized quantities in asymptotically-flat curved spacetimes, based on the point-splitting procedure. Our approach requires the spacetime to admit some symmetry. We already implemented this approach to compute $\left\langle \phi^{2}\right\rangle_{ren}$ in a stationary spacetime using t-splitting, namely splitting in the time-translation direction. Here we present the angular-splitting version of this approach, aimed at computing renormalized quantities in a general (possibly dynamical) spherically-symmetric spacetime. To illustrate how the angular-splitting method works, we use it here to compute $\left\langle \phi^{2}\right\rangle_{ren}$ for a quantum massless scalar field in a Schwarzschild background, in various quantum states (Boulware, Unruh, and Hartle-Hawking...
M. Madheswaran
2012-06-01
Modern fighter aircraft, ships, missiles, etc. need very low radar cross section (RCS) designs to avoid detection by hostile radars. Hence accurate prediction of the RCS of complex objects like aircraft is essential to meet this requirement. A simple and efficient numerical procedure for treating problems of wide-band RCS prediction of perfect electric conductor (PEC) objects is developed using the Method of Moments (MoM). Implementation of MoM for prediction of RCS involves solving the electric field integral equation (EFIE) for the electric current using the vector and scalar potential solutions, which satisfy the boundary condition that the tangential electric field at the boundary of the PEC body is zero. For numerical purposes, the objects are modeled using planar triangular surface patches. A set of special sub-domain type basis functions is defined on pairs of adjacent triangular patches. These basis functions yield a current representation free of line or point charges at sub-domain boundaries. Once the current distribution is obtained, a dipole model is used to find the scattered field in free space. The RCS can be calculated from the scattered and incident fields. Numerical results for a square plate, a cube, and a sphere are presented over a bandwidth.
Effect of regularization parameters on geophysical reconstruction
Zhou Hui; Wang Zhaolei; Qiu Dongling; Li Guofa; Shen Jinsong
2009-01-01
In this paper we discuss the edge-preserving regularization method in the reconstruction of physical parameters from geophysical data such as seismic and ground-penetrating radar data. In the regularization method, a potential function of the model parameters and its corresponding functions are introduced. This method is stable and able to preserve boundaries and protect resolution. The effect of regularization depends to a great extent on the suitable choice of regularization parameters. The influence of the edge-preserving parameters on the reconstruction results is investigated, and the relationship between the regularization parameters and the error of the data is described.
NOETHERIAN GR-REGULAR RINGS ARE REGULAR
Li Huishi
1994-01-01
It is proved that for a left Noetherian Z-graded ring A, if every finitely generated graded A-module has finite projective dimension (i.e., A is gr-regular), then every finitely generated A-module has finite projective dimension (i.e., A is regular). Some applications of this result to filtered rings and some classical cases are also given.
Li, Hao; Li, Peng; Xie, Jing; Yi, Shengjie; Yang, Chaojie; Wang, Jian; Sun, Jichao; Liu, Nan; Wang, Xu; Wu, Zhihao; Wang, Ligui; Hao, Rongzhang; Wang, Yong; Jia, Leili; Li, Kaiqin; Qiu, Shaofu; Song, Hongbin
2014-08-01
A clustered regularly interspaced short palindromic repeat (CRISPR) typing method has recently been developed and used for typing and subtyping of Salmonella spp., but it is complicated and labor intensive because it has to analyze all spacers in two CRISPR loci. Here, we developed a more convenient and efficient method, namely, CRISPR locus spacer pair typing (CLSPT), which only needs to analyze the two newly incorporated spacers adjoining the leader array in the two CRISPR loci. We analyzed a CRISPR array of 82 strains belonging to 21 Salmonella serovars isolated from humans in different areas of China by using this new method. We also retrieved the newly incorporated spacers in each CRISPR locus of 537 Salmonella isolates which have definite serotypes in the Pasteur Institute's CRISPR Database to evaluate this method. Our findings showed that this new CLSPT method presents a high level of consistency (kappa = 0.9872, Matthews correlation coefficient = 0.9712) with the results of traditional serotyping, and thus, it can also be used to predict serotypes of Salmonella spp. Moreover, this new method has a considerable discriminatory power (discriminatory index [DI] = 0.8145), comparable to those of multilocus sequence typing (DI = 0.8088) and conventional CRISPR typing (DI = 0.8684). Because CLSPT only costs about $5 to $10 per isolate, it is a much cheaper and more attractive method for subtyping of Salmonella isolates. In conclusion, this new method will provide considerable advantages over other molecular subtyping methods, and it may become a valuable epidemiologic tool for the surveillance of Salmonella infections. Copyright © 2014, American Society for Microbiology. All Rights Reserved.
Regular Expression Pocket Reference
Stubblebine, Tony
2007-01-01
This handy little book offers programmers a complete overview of the syntax and semantics of regular expressions that are at the heart of every text-processing application. Ideal as a quick reference, Regular Expression Pocket Reference covers the regular expression APIs for Perl 5.8, Ruby (including some upcoming 1.9 features), Java, PHP, .NET and C#, Python, vi, JavaScript, and the PCRE regular expression libraries. This concise and easy-to-use reference puts a very powerful tool for manipulating text and data right at your fingertips. Composed of a mixture of symbols and text, regular exp
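As a small illustration of the kind of syntax such a reference covers, here is Python's `re` flavor (one of the several APIs the book compares); the pattern and sample strings are invented for the example.

```python
import re

# named groups, alternation, and a counted quantifier in one pattern
log_line = re.compile(r"(?P<level>ERROR|WARN)\s+(?P<code>\d{3}):\s*(?P<msg>.*)")

m = log_line.match("ERROR 404: page not found")
print(m.group("level"), m.group("code"))  # ERROR 404

# findall with a character class and repetition
words = re.findall(r"[A-Za-z]+", "regex-101, quick test!")
print(words)  # ['regex', 'quick', 'test']
```

The same pattern needs only minor syntactic changes in Perl, PCRE, or JavaScript, which is precisely the cross-flavor mapping a pocket reference is for.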
Modular Regularization Algorithms
Jacobsen, Michael
2004-01-01
The class of linear ill-posed problems is introduced along with a range of standard numerical tools and basic concepts from linear algebra, statistics and optimization. Known algorithms for solving linear inverse ill-posed problems are analyzed to determine how they can be decomposed into independent modules. These modules are then combined to form new regularization algorithms with other properties than those we started out with. Several variations are tested using the Matlab toolbox MOORe Tools created in connection with this thesis. Object-oriented programming techniques are explained and used to set up the ill-posed problems in the toolbox. Hereby, we are able to write regularization algorithms that automatically exploit structure in the ill-posed problem without being rewritten explicitly. We explain how to implement a stopping criterion for a parameter choice method based upon...
Dimensional regularization is generic
Fujikawa, Kazuo
2016-01-01
The absence of the quadratic divergence in the Higgs sector of the Standard Model in the dimensional regularization is usually regarded as an exceptional property of a specific regularization. To understand what is going on in the dimensional regularization, we illustrate how to reproduce the results of the dimensional regularization for the $\lambda\phi^{4}$ theory in more conventional regularizations such as the higher derivative regularization; the basic postulate involved is that the quadratically divergent induced mass, which is independent of the scale change of the physical mass, is kinematical and unphysical. This is consistent with the derivation of the Callan-Symanzik equation, which is a comparison of two theories with slightly different masses, for the $\lambda\phi^{4}$ theory without encountering the quadratic divergence. We thus suggest that the dimensional regularization is generic in a bottom-up approach starting with a successful low-energy theory. We also define a modified version of t...
Hamzawy, Ayman; Grozdanov, Dimitar N.; Badawi, Mohamed S.; Aliyev, Fuad A.; Thabet, Abouzeid A.; Abbas, Mahmoud I.; Ruskov, Ivan N.; El-Khatib, Ahmed M.; Kopatch, Yuri N.; Gouda, Mona M.
2016-11-01
Scintillation crystals are usually used for the detection of energetic photons at room temperature in high-energy and nuclear physics research, non-destructive materials testing, safeguards, nuclear treaty verification, geological exploration, and medical imaging. Therefore, new designs and constructions of radioactive beam facilities are coming on-line in these branches of science. A good number of researchers are investigating the efficiency of γ-ray detectors to improve the models and techniques used in order to deal with the most pressing problems in physics research today. In the present work, a new integrative and uncomplicated numerical simulation method (NSM) is used to compute the full-energy (photo) peak efficiency of a regular hexagonal prism NaI(Tl) gamma-ray detector using radioactive point sources situated non-axially within its front surface boundaries. This simulation method is based on the efficiency transfer method. Most of the mathematical formulas in this work are derived analytically and solved numerically. The main core of the NSM is the calculation of the effective solid angle for radioactive point sources situated non-axially at different distances from the front surface of the detector. The attenuation of the γ-rays through the detector's material and any other materials between the source and the detector is taken into account. A remarkable agreement between the experimental results and those calculated by the present formalism has been observed.
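The core quantity in such efficiency-transfer calculations, the solid angle subtended by the detector face, can be checked numerically. The sketch below is a generic Monte Carlo estimate for a bare circular face seen from an on-axis point, ignoring attenuation and the hexagonal geometry, so it is not the authors' NSM; it is validated against the analytic on-axis formula Ω = 2π(1 − d/√(d² + R²)). The radius and distance values are hypothetical.

```python
import numpy as np

def solid_angle_disk_mc(R, d, n=2_000_000, seed=0):
    """Monte Carlo solid angle of a disk of radius R at axial distance d:
    sample isotropic directions, count those that hit the disk."""
    rng = np.random.default_rng(seed)
    v = rng.normal(size=(n, 3))                  # isotropic directions
    v /= np.linalg.norm(v, axis=1, keepdims=True)
    fwd = v[:, 2] > 0                            # only rays toward the disk plane
    hits = np.zeros(n, dtype=bool)
    scale = d / v[fwd, 2]                        # project onto plane z = d
    rho = np.hypot(v[fwd, 0] * scale, v[fwd, 1] * scale)
    hits[fwd] = rho <= R
    return 4 * np.pi * hits.mean()

def solid_angle_disk_exact(R, d):
    """Analytic solid angle of a disk seen from an on-axis point."""
    return 2 * np.pi * (1 - d / np.hypot(R, d))

if __name__ == "__main__":
    R, d = 3.81, 5.0   # hypothetical crystal radius and source distance (cm)
    print(solid_angle_disk_mc(R, d), solid_angle_disk_exact(R, d))
```

For non-axial sources, where no closed form exists, the same ray-counting estimate still applies by shifting the source point off the axis.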
A Method of Fast On-Line Graphics Recognition and Regularization
孙建勇; 金翔宇; 彭彬彬; 孙正兴; 刘文印
2003-01-01
A novel and fast shape classification and regularization algorithm for on-line sketchy graphics recognition is proposed. We divide the on-line graphics recognition process into four stages: preprocessing, shape classification, shape fitting, and regularization. An Attraction Force Model is employed to progressively combine the vertices on the input sketchy stroke and reduce the total number of vertices before the type of shape can be determined. After that, the shape is fitted and gradually rectified to a regular one, so that the regularized shape fits the user-intended one precisely. Experimental results show that this algorithm can yield good recognition precision (above 90% on average) and a fine regularization effect at fast speed. Consequently, it is especially suitable for computationally constrained environments such as PDAs, which depend solely on a pen-based user interface.
Jirasek, A [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Matthews, Q [Department of Physics and Astronomy, University of Victoria, Victoria BC V8W 3P6 (Canada); Hilts, M [Medical Physics, BC Cancer Agency-Vancouver Island Centre, Victoria BC V8R 6V5 (Canada); Schulze, G [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Blades, M W [Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Turner, R F B [Michael Smith Laboratories, University of British Columbia, Vancouver BC V6T 1Z4 (Canada); Department of Chemistry, University of British Columbia, Vancouver BC V6T 1Z1 (Canada); Department of Electrical and Computer Engineering, University of British Columbia, Vancouver BC V6T 1Z4 (Canada)
2006-05-21
This study presents a new method of image signal-to-noise ratio (SNR) enhancement by utilizing a newly developed 2D two-point maximum entropy regularization method (TPMEM). When utilized as an image filter, it is shown that 2D TPMEM offers unsurpassed flexibility in its ability to balance the complementary requirements of image smoothness and fidelity. The technique is evaluated for use in the enhancement of x-ray computed tomography (CT) images of irradiated polymer gels used in radiation dosimetry. We utilize a range of statistical parameters (e.g. root-mean square error, correlation coefficient, error histograms, Fourier data) to characterize the performance of TPMEM applied to a series of synthetic images of varying initial SNR. These images are designed to mimic a range of dose intensity patterns that would occur in x-ray CT polymer gel radiation dosimetry. Analysis is extended to a CT image of a polymer gel dosimeter irradiated with a stereotactic radiation therapy dose distribution. Results indicate that TPMEM performs strikingly well on radiation dosimetry data, significantly enhancing the SNR of noise-corrupted images (SNR enhancement factors >15 are possible) while minimally distorting the original image detail (as shown by the error histograms and Fourier data). It is also noted that application of this new TPMEM filter is not restricted exclusively to x-ray CT polymer gel dosimetry image data but can in future be extended to a wide range of radiation dosimetry data.
Blocked-Regularized GMRES Method for Image Restoration
陈亚文; 闵涛
2013-01-01
The blocked GMRES algorithm has certain advantages in dealing with large systems of linear equations. By combining it with regularization techniques, a new method for image restoration is proposed. The method takes into account both the time complexity and the space complexity of image restoration. In numerical simulations, different methods are compared; the results show that the proposed method can significantly improve the quality of image restoration.
Robust Sparse Analysis Regularization
Vaiter, Samuel; Dossal, Charles; Fadili, Jalal
2011-01-01
This paper studies the properties of L1-analysis regularization for the resolution of linear inverse problems. Most previous works consider sparse synthesis priors where the sparsity is measured as the L1 norm of the coefficients that synthesize the signal in a given dictionary. In contrast, the more general analysis regularization minimizes the L1 norm of the correlations between the signal and the atoms in the dictionary. The corresponding variational problem includes several well-known regularizations such as the discrete total variation and the fused lasso. We first prove that a solution of analysis regularization is a piecewise affine function of the observations. Similarly, it is a piecewise affine function of the regularization parameter. This allows us to compute the degrees of freedom associated to sparse analysis estimators. Another contribution gives a sufficient condition to ensure that a signal is the unique solution of the analysis regularization when there is no noise in the observations. The s...
Huang, Da; Wu, Yue-Liang [Chinese Academy of Science, State Key Laboratory of Theoretical Physics (SKLTP), Kavli Institute for Theoretical Physics China (KITPC), Institute of Theoretical Physics, Beijing (China)
2012-07-15
The consistency of the loop regularization (LORE) method is explored in multiloop calculations. A key concept of the LORE method is the introduction of irreducible loop integrals (ILIs), which are evaluated from the Feynman diagrams by adopting the Feynman parametrization and ultraviolet-divergence-preserving (UVDP) parametrization. It is then inevitable for the ILIs to encounter the divergences in the UVDP parameter space due to the generic overlapping divergences in the four-dimensional momentum space. By computing the so-called $\alpha\beta\gamma$ integrals arising from two-loop Feynman diagrams, we show how to deal with the divergences in the parameter space with the LORE method. By identifying the divergences in the UVDP parameter space with those in the subdiagrams, we arrive at the Bjorken-Drell analogy between Feynman diagrams and electrical circuits. The UVDP parameters are shown to correspond to the conductance or resistance in the electrical circuits, and the divergence in Feynman diagrams is ascribed to the infinite conductance or zero resistance. In particular, the sets of conditions required to eliminate the overlapping momentum integrals for obtaining the ILIs are found to be associated with the conservation of electric voltages, and the momentum conservations correspond to the conservation of electrical currents, which are known as the Kirchhoff laws in the electrical-circuit analogy. As a practical application, we carry out a detailed calculation for one-loop and two-loop Feynman diagrams in the massive scalar $\phi^{4}$ theory, which enables us to obtain the well-known logarithmic running of the coupling constant and the consistent power-law running of the scalar mass at two-loop level. Especially, we present an explicit demonstration of the general procedure of applying the LORE method to the multiloop calculations of Feynman diagrams when merging with the advantage of Bjorken-Drell's circuit analogy. (orig.)
Goyvaerts, Jan
2009-01-01
This cookbook provides more than 100 recipes to help you crunch data and manipulate text with regular expressions. Every programmer can find uses for regular expressions, but their power doesn't come worry-free. Even seasoned users often suffer from poor performance, false positives, false negatives, or perplexing bugs. Regular Expressions Cookbook offers step-by-step instructions for some of the most common tasks involving this tool, with recipes for C#, Java, JavaScript, Perl, PHP, Python, Ruby, and VB.NET. With this book, you will: Understand the basics of regular expressions through a
Regularization algorithms based on total least squares
Hansen, Per Christian; O'Leary, Dianne P.
1996-01-01
Discretizations of inverse problems lead to systems of linear equations with a highly ill-conditioned coefficient matrix, and in order to compute stable solutions to these systems it is necessary to apply regularization methods. Classical regularization methods, such as Tikhonov's method or trunc...
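Classical Tikhonov regularization of the kind this abstract starts from can be written with SVD filter factors f_i = s_i²/(s_i² + λ²). The sketch below is a generic illustration on a small Hilbert-like ill-conditioned system, not the total-least-squares variant the paper develops; the matrix size, noise level, and λ are arbitrary.

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized solution x = sum_i f_i * (u_i^T b / s_i) * v_i
    with filter factors f_i = s_i^2 / (s_i^2 + lam^2)."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    f = s**2 / (s**2 + lam**2)
    return Vt.T @ (f * (U.T @ b) / s)

if __name__ == "__main__":
    # a small, severely ill-conditioned (Hilbert) system
    n = 8
    A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
    x_true = np.ones(n)
    b = A @ x_true + 1e-6 * np.random.default_rng(0).normal(size=n)
    x_naive = np.linalg.solve(A, b)      # noise blown up by small singular values
    x_reg = tikhonov_svd(A, b, lam=1e-4)
    print(np.linalg.norm(x_naive - x_true), np.linalg.norm(x_reg - x_true))
```

The filter factors damp exactly the components with s_i ≲ λ, which is where the noise amplification in the naive solve comes from.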
Polkowski, Marcin
2016-04-01
Seismic wave travel time calculation is the most common numerical operation in seismology. The most efficient approach is travel time calculation in a 1D velocity model: for given source depth, receiver depth and angular distance, the time is calculated within a fraction of a second. Unfortunately, in most cases 1D is not enough to capture differentiating local and regional structures. Whenever possible, travel time through a 3D velocity model has to be calculated. It can be achieved using ray calculation or time propagation in space. While a single ray path calculation is quick, it is complicated to find the ray path that connects the source with the receiver. Time propagation in space using the Fast Marching Method seems more efficient in most cases, especially when there are multiple receivers. In this presentation a Python module, pySeismicFMM, is presented: a simple and very efficient tool for calculating travel time from sources to receivers. The calculation requires a regular 2D or 3D velocity grid, either in Cartesian or geographic coordinates. On a desktop-class computer the calculation speed is 200k grid cells per second. The calculation has to be performed once for every source location and provides travel times to all receivers. pySeismicFMM is free and open source. Development of this tool is part of the author's PhD thesis. The National Science Centre Poland provided financial support for this work via NCN grant DEC-2011/02/A/ST10/00284.
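A minimal stand-in for such a solver illustrates the idea of propagating first-arrival times from a source through a velocity grid. This sketch uses Dijkstra on a 4-connected grid rather than true fast marching (so times are exact only along grid axes), and it is unrelated to pySeismicFMM's actual API; grid size and speeds are invented.

```python
import heapq

def travel_times(speed, src, dx=1.0):
    """First-arrival times on a 2D grid by Dijkstra with 4-connected edges.
    speed[i][j] is the local propagation speed; src = (i, j)."""
    ni, nj = len(speed), len(speed[0])
    t = [[float("inf")] * nj for _ in range(ni)]
    t[src[0]][src[1]] = 0.0
    heap = [(0.0, src)]
    while heap:
        ti, (i, j) = heapq.heappop(heap)
        if ti > t[i][j]:
            continue                       # stale heap entry
        for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            a, b = i + di, j + dj
            if 0 <= a < ni and 0 <= b < nj:
                # edge cost: distance / average speed of the two cells
                cost = dx / ((speed[i][j] + speed[a][b]) / 2)
                if ti + cost < t[a][b]:
                    t[a][b] = ti + cost
                    heapq.heappush(heap, (t[a][b], (a, b)))
    return t

if __name__ == "__main__":
    v = [[2.0] * 50 for _ in range(50)]    # uniform 2 km/s medium
    t = travel_times(v, (0, 0), dx=1.0)
    print(t[0][10], t[10][0])  # 5.0 along each axis (10 km at 2 km/s)
```

As the abstract notes, one such sweep from a single source yields arrival times at every receiver simultaneously; fast marching improves on this by solving the eikonal equation so that off-axis times are accurate too.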
Kostenko, I.F.
1983-01-01
The isotropy and regularity are measured as a complex characteristic of a structure which is then used as one of the basic parameters for classifying structures of the pore space of collector rocks for oil and gas.
Regularization in kernel learning
Mendelson, Shahar; 10.1214/09-AOS728
2010-01-01
Under mild assumptions on the kernel, we obtain the best known error rates in a regularized learning scenario taking place in the corresponding reproducing kernel Hilbert space (RKHS). The main novelty in the analysis is a proof that one can use a regularization term that grows significantly slower than the standard quadratic growth in the RKHS norm.
Regular database update logics
Spruit, Paul; Wieringa, Roel; Meyer, John-Jules
2001-01-01
We study regular first-order update logic (FUL), which is a variant of regular dynamic logic in which updates to function symbols as well as to predicate symbols are possible. We first study FUL without making assumptions about atomic updates. Second, we look at relational algebra update logic (RAU
Regularized Statistical Analysis of Anatomy
Sjöstrand, Karl
2007-01-01
This thesis presents the application and development of regularized methods for the statistical analysis of anatomical structures. Focus is on structure-function relationships in the human brain, such as the connection between early onset of Alzheimer’s disease and shape changes of the corpus cal...
Quotient Complexity of Regular Languages
Janusz Brzozowski
2009-07-01
The past research on the state complexity of operations on regular languages is examined, and a new approach based on an old method (derivatives of regular expressions) is presented. Since state complexity is a property of a language, it is appropriate to define it in formal-language terms as the number of distinct quotients of the language, and to call it "quotient complexity". The problem of finding the quotient complexity of a language f(K,L) is considered, where K and L are regular languages and f is a regular operation, for example, union or concatenation. Since quotients can be represented by derivatives, one can find a formula for the typical quotient of f(K,L) in terms of the quotients of K and L. To obtain an upper bound on the number of quotients of f(K,L), all one has to do is count how many such quotients are possible, and this makes automaton constructions unnecessary. The advantages of this point of view are illustrated by many examples. Moreover, new general observations are presented to help in the estimation of the upper bounds on quotient complexity of regular operations.
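The derivative view of quotients can be made concrete. In this minimal sketch (plain Python, illustrative constructors only, no simplification of expressions), the Brzozowski derivative of r with respect to a symbol a denotes the quotient a⁻¹L(r), and a string belongs to L(r) iff the derivative with respect to the whole string is nullable — so matching proceeds quotient by quotient with no automaton construction.

```python
from dataclasses import dataclass

class Re: pass

@dataclass(frozen=True)
class Empty(Re): pass          # empty language
@dataclass(frozen=True)
class Eps(Re): pass            # language {empty string}
@dataclass(frozen=True)
class Sym(Re): c: str
@dataclass(frozen=True)
class Alt(Re): l: Re; r: Re
@dataclass(frozen=True)
class Cat(Re): l: Re; r: Re
@dataclass(frozen=True)
class Star(Re): r: Re

def nullable(r):
    """Does L(r) contain the empty string?"""
    if isinstance(r, (Eps, Star)): return True
    if isinstance(r, Alt): return nullable(r.l) or nullable(r.r)
    if isinstance(r, Cat): return nullable(r.l) and nullable(r.r)
    return False               # Empty, Sym

def deriv(r, a):
    """Brzozowski derivative: a regular expression for the quotient a^{-1}L(r)."""
    if isinstance(r, Sym):
        return Eps() if r.c == a else Empty()
    if isinstance(r, Alt):
        return Alt(deriv(r.l, a), deriv(r.r, a))
    if isinstance(r, Cat):
        d = Cat(deriv(r.l, a), r.r)
        return Alt(d, deriv(r.r, a)) if nullable(r.l) else d
    if isinstance(r, Star):
        return Cat(deriv(r.r, a), r)
    return Empty()             # Empty, Eps

def matches(r, s):
    for a in s:
        r = deriv(r, a)        # step to the next quotient
    return nullable(r)

if __name__ == "__main__":
    r = Star(Cat(Sym("a"), Sym("b")))   # (ab)*
    print(matches(r, "abab"), matches(r, "aba"))  # True False
```

Counting the distinct quotients reachable this way (after simplifying derivatives to canonical form) yields exactly the quotient complexity the paper studies.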
A regularized stationary mean-field game
Yang, Xianjin
2016-04-19
In the thesis, we discuss the existence and numerical approximations of solutions of a regularized mean-field game with a low-order regularization. In the first part, we prove a priori estimates and use the continuation method to obtain the existence of a solution with a positive density. Finally, we introduce the monotone flow method and solve the system numerically.
Regularization by External Variables
Bossolini, Elena; Edwards, R.; Glendinning, P. A.
2016-01-01
Regularization was a big topic at the 2016 CRM Intensive Research Program on Advances in Nonsmooth Dynamics. There are many open questions concerning well known kinds of regularization (e.g., by smoothing or hysteresis). Here, we propose a framework for an alternative and important kind of regularization, by external variables that shadow either the state or the switch of the original system. The shadow systems are derived from and inspired by various applications in electronic control, predator-prey preference, time delay, and genetic regulation.
A multiplicative regularization for force reconstruction
Aucejo, M.; De Smet, O.
2017-02-01
Additive regularizations, such as Tikhonov-like approaches, are certainly the most popular methods for reconstructing forces acting on a structure. These approaches require, however, the knowledge of a regularization parameter, that can be numerically computed using specific procedures. Unfortunately, these procedures are generally computationally intensive. For this particular reason, it could be of primary interest to propose a method able to proceed without defining any regularization parameter beforehand. In this paper, a multiplicative regularization is introduced for this purpose. By construction, the regularized solution has to be calculated in an iterative manner. In doing so, the amount of regularization is automatically adjusted throughout the resolution process. Validations using synthetic and experimental data highlight the ability of the proposed approach in providing consistent reconstructions.
Regularized maximum correntropy machine
Wang, Jim Jing-Yan
2015-02-12
In this paper we investigate the use of the regularized correntropy framework for learning classifiers from noisy labels. Class label predictors learned by minimizing traditional loss functions are sensitive to noisy and outlying labels of training samples, because the traditional loss functions are applied equally to all the samples. To solve this problem, we propose to learn the class label predictors by maximizing the correntropy between the predicted labels and the true labels of the training samples, under the regularized Maximum Correntropy Criterion (MCC) framework. Moreover, we regularize the predictor parameter to control the complexity of the predictor. The learning problem is formulated by an objective function considering the parameter regularization and MCC simultaneously. By optimizing the objective function alternately, we develop a novel predictor learning algorithm. The experiments on two challenging pattern classification tasks show that it significantly outperforms machines with traditional loss functions.
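The robustness of the maximum correntropy criterion can be sketched with the standard half-quadratic solver: each iteration is a weighted least-squares fit in which samples with large residuals receive exponentially small weights. This is a generic 1D regression illustration, not the paper's classifier; the kernel width sigma and the outlier values are made up.

```python
import numpy as np

def correntropy_fit(x, y, sigma=2.0, n_iter=20):
    """Fit y ~ w*x by maximizing sum_i exp(-(y_i - w*x_i)^2 / (2*sigma^2))
    via half-quadratic (iteratively reweighted least squares) updates."""
    w = (x @ y) / (x @ x)                     # ordinary least-squares start
    for _ in range(n_iter):
        res = y - w * x
        u = np.exp(-res**2 / (2 * sigma**2))  # Gaussian-kernel sample weights
        w = (u * x) @ y / ((u * x) @ x)       # weighted least-squares update
    return w

if __name__ == "__main__":
    x = np.arange(1.0, 21.0)
    y = 2.0 * x                               # true slope is 2
    y[0] += 30.0                              # two corrupted labels
    y[1] += 25.0
    w_ls = (x @ y) / (x @ x)                  # pulled away by the outliers
    w_mcc = correntropy_fit(x, y)
    print(w_ls, w_mcc)
```

The corrupted samples end up with weights near zero, so the correntropy fit recovers the clean slope while plain least squares does not; this down-weighting is exactly the mechanism the abstract credits for label-noise robustness.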
Regular Expression Containment
Henglein, Fritz; Nielsen, Lasse
2011-01-01
We present a new sound and complete axiomatization of regular expression containment. It consists of the conventional axiomatization of concatenation, alternation, empty set and (the singleton set containing) the empty string as an idempotent semiring, the fixed-point rule E* = 1 + E × E* for Kleene-star, and a general coinduction rule as the only additional rule. Our axiomatization gives rise to a natural computational interpretation of regular expressions as simple types that represent parse trees, and of containment proofs as coercions. This gives the axiomatization a Curry-Howard-style constructive interpretation: Containment proofs do not only certify a language-theoretic containment, but, under our computational interpretation, constructively transform a membership proof of a string in one regular expression into a membership proof of the same string in another regular expression. We
Regularities of Multifractal Measures
Hun Ki Baek
2008-05-01
First, we prove the decomposition theorem for the regularities of multifractal Hausdorff measure and packing measure in $\\mathbb{R}^d$. This decomposition theorem enables us to split a set into regular and irregular parts, so that we can analyze each separately, and recombine them without affecting density properties. Next, we give some properties related to multifractal Hausdorff and packing densities. Finally, we extend the density theorem in [6] to any measurable set.
T. Álvarez
2012-01-01
For a closed linear relation in a Banach space the concept of regularity is introduced and studied. It is shown that many of the results of Mbekhta and other authors for operators remain valid in the context of multivalued linear operators. We also extend the punctured neighbourhood theorem for operators to linear relations, and as an application we obtain a characterization of semi-Fredholm linear relations which are regular.
Local and Nonlocal Regularization to Image Interpolation
Yi Zhan
2014-01-01
This paper presents an image interpolation model with local and nonlocal regularization. A nonlocal bounded variation (BV) regularizer is formulated by an exponential function including the gradient. It acts like the Perona-Malik equation. Thus our nonlocal BV regularizer possesses the properties of the anisotropic diffusion equation and the nonlocal functional. The local total variation (TV) regularizer dissipates image energy along the direction orthogonal to the gradient to avoid blurring image edges. The derived model efficiently reconstructs the real image, leading to a natural interpolation which reduces blurring and staircase artifacts. We present experimental results that prove the potential and efficacy of the method.
Hidden Regular Variation: Detection and Estimation
Mitra, Abhimanyu
2010-01-01
Hidden regular variation defines a subfamily of distributions satisfying multivariate regular variation on $\\mathbb{E} = [0, \\infty]^d \\backslash \\{(0,0, ..., 0) \\} $ and models another regular variation on the sub-cone $\\mathbb{E}^{(2)} = \\mathbb{E} \\backslash \\cup_{i=1}^d \\mathbb{L}_i$, where $\\mathbb{L}_i$ is the $i$-th axis. We extend the concept of hidden regular variation to sub-cones of $\\mathbb{E}^{(2)}$ as well. We suggest a procedure of detecting the presence of hidden regular variation, and if it exists, propose a method of estimating the limit measure exploiting its semi-parametric structure. We exhibit examples where hidden regular variation yields better estimates of probabilities of risk sets.
Faupin, Jeremy; Møller, Jacob Schach; Skibsted, Erik
2011-01-01
We study regularity of bound states pertaining to embedded eigenvalues of a self-adjoint operator H, with respect to an auxiliary operator A that is conjugate to H in the sense of Mourre. We work within the framework of singular Mourre theory, which enables us to deal with confined massless Pauli-Fierz models, our primary example, and many-body AC-Stark Hamiltonians. In the simpler context of regular Mourre theory, our results boil down to an improvement of results obtained recently in [8, 9].
Low power implementation of datapath using regularity
LAI Li-ya; LIU Peng
2005-01-01
Datapath accounts for a considerable part of power consumption in VLSI circuit design. This paper presents a method for physical implementation of datapath to achieve low power consumption. Regularity is a characteristic of datapath and the key to the proposed method, in which synthesis is tightly combined with placement to make full use of regularity, so that low power consumption is achieved. In this paper, a new concept of Synthesis In Relative Placement (SIRP) is given to deal with the semi-regularity in some datapaths. Experimental results for a sample circuit validate the proposed method.
Generalized variation-based regularization method for infrared image denoising
钱伟新; 王婉丽; 祁双喜; 程晋明; 刘冬兵
2014-01-01
A generalized variation (GV) regularization based infrared image denoising method is proposed. In the new method, a p-norm replaces the total variation (TV) norm, widely used in the image processing domain, as the regularization term. A smoothing functional is then constructed for noise removal, transforming the image denoising problem into a functional minimization problem. A nonlinear partial differential equation (PDE) is derived from the new denoising model; to solve it, a fixed point iteration (FPI) scheme is introduced to linearize the PDE, which ensures the stability and convergence of the regularized solution. Numerical experiments show that, compared with the TV regularized method, the GV regularized method preserves image edges, including narrow edges, more effectively while removing noise. The GV regularized method is thus an efficient image denoising method with good noise removal and edge preservation performance.
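The lagged-diffusivity fixed-point linearization described in this abstract can be illustrated on a 1-D signal. The sketch below is a minimal illustration under stated assumptions, not the authors' implementation: the parameter values (lam, p, eps, iteration count) are arbitrary choices, and the 2-D PDE is reduced to a 1-D discrete functional solved by freezing the p-norm weights at the previous iterate.

```python
import numpy as np

def pnorm_denoise(f, lam=0.3, p=1.3, eps=1e-3, iters=50):
    """Minimize 0.5*||u - f||^2 + lam * sum |grad u|^p on a 1-D signal
    via lagged (fixed point) diffusivity: each sweep freezes the p-norm
    weights at the previous iterate and solves a linear SPD system."""
    u = f.copy()
    n = f.size
    for _ in range(iters):
        g = np.diff(u)                          # forward differences
        c = p * (g**2 + eps**2) ** (p / 2 - 1)  # lagged diffusivity weights
        A = np.eye(n)                           # assemble (I + lam * L_c)
        idx = np.arange(n - 1)
        A[idx, idx] += lam * c
        A[idx + 1, idx + 1] += lam * c
        A[idx, idx + 1] -= lam * c
        A[idx + 1, idx] -= lam * c
        u = np.linalg.solve(A, f)
    return u

rng = np.random.default_rng(0)
clean = np.repeat([0.0, 1.0, 0.0], 30)          # piecewise-constant signal
noisy = clean + 0.1 * rng.standard_normal(clean.size)
denoised = pnorm_denoise(noisy)
```

With p between 1 (TV) and 2 (Tikhonov), the weights grow where the gradient is small and stay moderate across jumps, which is what gives the edge-preserving smoothing the abstract describes.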
Annotation of Regular Polysemy
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words...
Multiple graph regularized protein domain ranking
Wang, Jim Jing-Yan
2012-11-19
Background: Protein domain ranking is a fundamental task in structural biology. Most protein domain ranking methods rely on the pairwise comparison of protein domains while neglecting the global manifold structure of the protein domain database. Recently, graph regularized ranking that exploits the global structure of the graph defined by the pairwise similarities has been proposed. However, the existing graph regularized ranking methods are very sensitive to the choice of the graph model and parameters, and this remains a difficult problem for most protein domain ranking methods. Results: To tackle this problem, we have developed the Multiple Graph regularized Ranking algorithm, MultiG-Rank. Instead of using a single graph to regularize the ranking scores, MultiG-Rank approximates the intrinsic manifold of the protein domain distribution by combining multiple initial graphs for the regularization. Graph weights are learned jointly and automatically with the ranking scores by alternately minimizing an objective function in an iterative algorithm. Experimental results on a subset of the ASTRAL SCOP protein domain database demonstrate that MultiG-Rank achieves better ranking performance than single graph regularized ranking methods and pairwise similarity based ranking methods. Conclusion: The problem of graph model and parameter selection in graph regularized protein domain ranking can be solved effectively by combining multiple graphs. This aspect of generalization introduces a new frontier in applying multiple graphs to protein domain ranking applications.
Maximum mutual information regularized classification
Wang, Jim Jing-Yan
2014-09-07
In this paper, a novel pattern classification approach is proposed by regularizing the classifier learning to maximize the mutual information between the classification response and the true class label. We argue that, with the learned classifier, the uncertainty of the true class label of a data sample should be reduced as much as possible by knowing its classification response. The reduced uncertainty is measured by the mutual information between the classification response and the true class label. To this end, when learning a linear classifier, we propose to maximize the mutual information between classification responses and true class labels of training samples, besides minimizing the classification error and reducing the classifier complexity. An objective function is constructed by modeling mutual information with entropy estimation, and it is optimized by a gradient descent method in an iterative algorithm. Experiments on two real world pattern classification problems show the significant improvements achieved by maximum mutual information regularization.
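The regularizer's target quantity, the mutual information between discrete classification responses and true labels, can be estimated with a simple plug-in (empirical) formula. The snippet below is a generic sketch of that estimate, not the paper's entropy-estimation objective or its gradient:

```python
import math
from collections import Counter

def mutual_information(responses, labels):
    """Plug-in estimate (in nats) of I(R; Y) from paired discrete
    samples of classifier responses R and true class labels Y."""
    n = len(responses)
    p_r = Counter(responses)
    p_y = Counter(labels)
    p_ry = Counter(zip(responses, labels))
    mi = 0.0
    for (r, y), c in p_ry.items():
        # p(r,y) * log( p(r,y) / (p(r) * p(y)) ); the factors of n cancel
        mi += (c / n) * math.log(c * n / (p_r[r] * p_y[y]))
    return mi

# responses identical to labels: MI = H(Y) = log 2 for a balanced binary task
perfect = mutual_information([0, 0, 1, 1], [0, 0, 1, 1])
# responses independent of labels: MI = 0
useless = mutual_information([0, 1, 0, 1], [0, 0, 1, 1])
```

Maximizing this quantity over classifier parameters rewards responses that resolve the label uncertainty, which is the intuition behind the proposed regularization.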
On regularizations of the Dirac delta distribution
Hosseini, Bamdad; Nigam, Nilima; Stockie, John M.
2016-01-01
In this article we consider regularizations of the Dirac delta distribution with applications to prototypical elliptic and hyperbolic partial differential equations (PDEs). We study the convergence of a sequence of distributions S_H to a singular term S as a parameter H (associated with the support size of S_H) shrinks to zero. We characterize this convergence in both the weak-* topology of distributions and a weighted Sobolev norm. These notions motivate a framework for constructing regularizations of the delta distribution that includes a large class of existing methods in the literature. This framework allows different regularizations to be compared. The convergence of solutions of PDEs with these regularized source terms is then studied in various topologies such as pointwise convergence on a deleted neighborhood and weighted Sobolev norms. We also examine the lack of symmetry in tensor product regularizations and effects of dissipative error in hyperbolic problems.
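One concrete member of the class of delta regularizations this framework covers is a compactly supported cosine bump; the weak-* convergence the abstract mentions can be checked numerically. This is a generic textbook-style example, not a kernel taken from the paper:

```python
import numpy as np

def delta_h(x, h):
    """A common regularization of the Dirac delta: a cosine bump
    supported on [-h, h] with unit mass."""
    return np.where(np.abs(x) <= h,
                    (1.0 + np.cos(np.pi * x / h)) / (2.0 * h),
                    0.0)

# weak-* convergence: <delta_h, phi> -> phi(0) as h -> 0
phi = lambda x: np.exp(-x ** 2)      # smooth test function, phi(0) = 1
x = np.linspace(-1.0, 1.0, 20001)
dx = x[1] - x[0]
mass = np.sum(delta_h(x, 0.25)) * dx
errors = [abs(np.sum(phi(x) * delta_h(x, h)) * dx - 1.0)
          for h in (0.5, 0.25, 0.125)]
```

The pairing error shrinks as the support size h shrinks (here at second order in h for this symmetric kernel), which is exactly the convergence behavior the paper quantifies in distributional and weighted Sobolev topologies.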
Sparse structure regularized ranking
Wang, Jim Jing-Yan
2014-04-17
Learning ranking scores is critical for the multimedia database retrieval problem. In this paper, we propose a novel ranking score learning algorithm by exploring the sparse structure and using it to regularize ranking scores. To explore the sparse structure, we assume that each multimedia object can be represented as a sparse linear combination of all other objects, and the combination coefficients are regarded as a similarity measure between objects and used to regularize their ranking scores. Moreover, we propose to learn the sparse combination coefficients and the ranking scores simultaneously. A unified objective function is constructed with regard to both the combination coefficients and the ranking scores, and is optimized by an iterative algorithm. Experiments on two multimedia database retrieval data sets demonstrate the significant improvements of the proposed algorithm over state-of-the-art ranking score learning algorithms.
Regularized Reduced Order Models
Wells, David; Xie, Xuping; Iliescu, Traian
2015-01-01
This paper puts forth a regularization approach for the stabilization of proper orthogonal decomposition (POD) reduced order models (ROMs) for the numerical simulation of realistic flows. Two regularized ROMs (Reg-ROMs) are proposed: the Leray ROM (L-ROM) and the evolve-then-filter ROM (EF-ROM). These new Reg-ROMs use spatial filtering to smooth (regularize) various terms in the ROMs. Two spatial filters are used: a POD projection onto a POD subspace (Proj) and a new POD differential filter (DF). The four Reg-ROM/filter combinations are tested in the numerical simulation of the one-dimensional Burgers equation with a small diffusion coefficient and the three-dimensional flow past a circular cylinder at a low Reynolds number (Re = 100). Overall, the most accurate Reg-ROM/filter combination is EF-ROM-DF. Furthermore, the DF generally yields better results than Proj. Finally, the four Reg-ROM/filter combinations are computationally efficient and generally more accurate than the standard Galerkin ROM.
Continuum regularization of quantum field theory
Bern, Z.
1986-04-01
Possible nonperturbative continuum regularization schemes for quantum field theory are discussed which are based upon the Langevin equation of Parisi and Wu. Breit, Gupta and Zaks made the first proposal for a new gauge invariant nonperturbative regularization. The scheme is based on smearing in the ''fifth time'' of the Langevin equation. An analysis of their stochastic regularization scheme for the case of scalar electrodynamics with the standard covariant gauge fixing is given. Their scheme is shown to preserve the masslessness of the photon and the tensor structure of the photon vacuum polarization at the one-loop level. Although stochastic regularization is viable in one-loop electrodynamics, two difficulties arise which, in general, ruin the scheme. One problem is that the superficial quadratic divergences force a bottomless action for the noise. Another difficulty is that stochastic regularization by fifth-time smearing is incompatible with Zwanziger's gauge fixing, which is the only known nonperturbative covariant gauge fixing for nonabelian gauge theories. Finally, a successful covariant derivative scheme is discussed which avoids the difficulties encountered with the earlier stochastic regularization by fifth-time smearing. For QCD the regularized formulation is manifestly Lorentz invariant, gauge invariant, ghost free and finite to all orders. A vanishing gluon mass is explicitly verified at one loop. The method is designed to respect relevant symmetries, and is expected to provide suitable regularization for any theory of interest. Hopefully, the scheme will lend itself to nonperturbative analysis.
The Iterated Regularization With Perturbed Operators and Noisy Data
陈宏; 侯宗义
1994-01-01
The method of iterated Tikhonov regularization with perturbed operators and noisy data for solving operator equations of the first kind is investigated. Rates of convergence of the regularized approximations are obtained by using the generalized Arcangeli method for the choice of the regularization parameter.
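The standard (noise-free operator) form of the iteration this abstract builds on can be sketched in a few lines; the test problem below (a smoothing-kernel matrix and noise level) is an illustrative choice, not one from the paper:

```python
import numpy as np

def iterated_tikhonov(A, b, alpha=1e-2, iters=5):
    """Iterated Tikhonov regularization for A x = b:
       x_{k+1} = x_k + (A^T A + alpha I)^{-1} A^T (b - A x_k),  x_0 = 0.
    A single iteration is ordinary Tikhonov regularization."""
    n = A.shape[1]
    x = np.zeros(n)
    M = A.T @ A + alpha * np.eye(n)   # factor once, reuse every sweep
    for _ in range(iters):
        x = x + np.linalg.solve(M, A.T @ (b - A @ x))
    return x

# a smoothing-kernel (first-kind) test problem with noisy data
rng = np.random.default_rng(1)
n = 32
t = np.linspace(0.0, 1.0, n)
A = 1.0 / (1.0 + 50.0 * (t[:, None] - t[None, :]) ** 2) / n
x_true = np.sin(np.pi * t)
b = A @ x_true + 1e-4 * rng.standard_normal(n)
x_tik = iterated_tikhonov(A, b, iters=1)   # ordinary Tikhonov
x_it  = iterated_tikhonov(A, b, iters=5)   # iterated variant
```

Each extra sweep re-corrects the Tikhonov bias using the current residual, which is why the iterated method can reach better convergence rates than a single Tikhonov step under suitable parameter choice rules.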
Wen LIU; Jing LIN
2011-01-01
In this paper, we define a class of strongly connected digraphs, called k-walk-regular digraphs, study some of their properties, provide some algebraic characterizations, and point out that the 0-walk-regular digraph is the same as the walk-regular digraph discussed by Liu and Lin in 2010 and that the D-walk-regular digraph is identical with the weakly distance-regular digraph defined by Comellas et al. in 2004.
Regularized degenerate multi-solitons
Correa, Francisco
2016-01-01
We report complex PT-symmetric multi-soliton solutions to the Korteweg-de Vries equation that asymptotically contain one-soliton solutions, each possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate energy solutions of the Schrödinger equation. Technically this is achieved by the application of Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively they may be constructed from a limiting process within the context of Hirota's direct method, or from a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.
Liu, Jinzhen; Ling, Lin; Li, Gang
2013-07-01
A Tikhonov regularization method for the inverse problem of electrical impedance tomography (EIT) often results in a smooth reconstructed distribution, in which a clear separation between inclusions and background can barely be made. The recently popular total variation (TV) regularization method, including the lagged diffusivity (LD) method, can sharpen the edges and is robust to noise within a small convergence region. In this paper, we therefore propose a novel regularization method combining the Tikhonov and LD regularization methods. First, we clarify the implementation details of the Tikhonov, LD and combined methods in two-dimensional open EIT, performing current injection and voltage measurement on one boundary of the imaged object. Next, we introduce a weighting parameter into the Tikhonov regularization method to explore its effect on the resolution and quality of reconstructed images with inclusions at different depths. We then analyze the performance of these algorithms with noisy data. Finally, we evaluate the effect of the current injection pattern on reconstruction quality and propose a modified current injection pattern. The results indicate that the combined regularization algorithm, with stable convergence, improves the reconstruction quality with sharper contrast and is more robust to noise than either the Tikhonov or the LD regularization method alone. In addition, the results show that a current injection pattern with a larger drive angle leads to better reconstruction quality.
Limitations on Dimensional Regularization in Renyi Entropy
Bao, Ning
2016-01-01
Dimensional regularization is a common method used to regulate the UV divergence of field theoretic quantities. When it is used in the context of Renyi entropy, however, it is important to consider whether such a procedure eliminates the statistical interpretation thereof as a measure of entanglement of states living on a Hilbert space. We therefore examine the dimensionally regularized Renyi entropy of a 4d unitary CFT and show that it admits no underlying Hilbert space in the state-counting sense. This gives a concrete proof that dimensionally regularized Renyi entropy cannot always be obtained as a limit of the Renyi entropy of some finite-dimensional quantum system.
Annotation of Regular Polysemy
Martinez Alonso, Hector
Regular polysemy has received a lot of attention from the theory of lexical semantics and from computational linguistics. However, there is no consensus on how to represent the sense of underspecified examples at the token level, namely when annotating or disambiguating senses of metonymic words like “London” (Location/Organization) or “cup” (Container/Content). The goal of this dissertation is to assess whether metonymic sense underspecification justifies incorporating a third sense into our sense inventories, thereby treating the underspecified sense as independent from the literal...
Regularity of eigenstates in regular Mourre theory
Møller, Jacob Schach; Westrich, Matthias
2011-01-01
The present paper gives an abstract method to prove that possibly embedded eigenstates of a self-adjoint operator H lie in the domain of the k-th power of a conjugate operator A. Conjugate means here that H and A have a positive commutator locally near the relevant eigenvalue in the sense of Mourre with respect to A. Natural applications are ‘dilation analytic’ systems satisfying a Mourre estimate, where our result can be viewed as an abstract version of a theorem due to Balslev and Combes (1971) [3]. As a new application we consider the massive Spin-Boson Model.
Landweber iterative regularization for nearfield acoustic holography
BI Chuanxing; CHEN Xinzhao; ZHOU Rong; CHEN Jian
2006-01-01
On the basis of the distributed source boundary point method (DSBPM) based nearfield acoustic holography (NAH), the Landweber iterative regularization method is proposed to stabilize the NAH reconstruction process, control the influence of measurement errors on the reconstructed results and ensure the validity of the reconstruction. A new method, the auxiliary surface method, is proposed to determine the optimal iteration number for optimizing the regularization effect: the optimal number is determined by minimizing the relative error between the pressure calculated on the auxiliary surface at each iteration number and the measured pressure. An experiment on a speaker is investigated to demonstrate the high sensitivity of the reconstructed results to measurement errors and to validate both the chosen method for the optimal iteration number and the Landweber iterative regularization method for controlling the influence of measurement errors on the reconstructed results.
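The Landweber iteration itself is simple enough to sketch; here the iteration number plays the role of the regularization parameter, and every iterate is recorded so that a selection rule (such as the auxiliary-surface criterion described above) could pick one. The test problem is an arbitrary smoothing-kernel stand-in, not an acoustic holography operator:

```python
import numpy as np

def landweber(A, b, iters, omega=None):
    """Landweber iteration x_{k+1} = x_k + omega * A^T (b - A x_k).
    Records every iterate; the iteration number acts as the
    regularization parameter."""
    if omega is None:
        omega = 1.0 / np.linalg.norm(A, 2) ** 2   # step size ensuring stability
    x = np.zeros(A.shape[1])
    iterates = []
    for _ in range(iters):
        x = x + omega * A.T @ (b - A @ x)
        iterates.append(x.copy())
    return iterates

rng = np.random.default_rng(2)
n = 40
t = np.linspace(0.0, 1.0, n)
A = np.exp(-30.0 * (t[:, None] - t[None, :]) ** 2) / n   # smoothing kernel
b = A @ np.sin(2 * np.pi * t) + 1e-3 * rng.standard_normal(n)
iterates = landweber(A, b, iters=50)
residuals = [np.linalg.norm(A @ x - b) for x in iterates]
```

With this step size the data residual is non-increasing, while the error against the true solution typically exhibits semi-convergence, which is why a principled stopping rule like the auxiliary-surface comparison is needed.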
From regular modules to von Neumann regular rings via coordinatization
Leonard Daus
2014-07-01
In this paper we establish a very close link (in terms of von Neumann's coordinatization) between regular modules, introduced by Zelmanowitz, on one hand, and von Neumann regular rings, on the other hand: we prove that the lattice L^{fg}(M) of all finitely generated submodules of a finitely generated regular module M, over an arbitrary ring, can be coordinatized as the lattice of all principal right ideals of some von Neumann regular ring S.
Structure for Regular Inclusions
Pitts, David R
2012-01-01
We study pairs (C,D) of unital C*-algebras where D is an abelian C*-subalgebra of C which is regular in C. When D is a MASA in C, there exists a unique completely positive unital map E of C into the injective envelope I(D) of D whose restriction to D is the identity on D. We show that the left kernel of E is the unique closed two-sided ideal of C maximal with respect to having trivial intersection with D. We introduce a new class of well behaved state extensions, the compatible states; we identify compatible states when D is a MASA in C in terms of groups constructed from local dynamics near a pure state on D. When C is separable, D is a MASA in C, and the pair (C,D) is regular, the set of pure states on D with unique state extensions to C is dense in D. The map E can be used as a substitute for a conditional expectation in the construction of coordinates for C relative to D. We show that certain classes of compatible states have natural groupoid operations, and we show that constructions of Kumjian and Renau...
Exploration on Folk Dance Teaching Methods in Regular Colleges and Universities
刘芳; 曹丽坤
2015-01-01
The application of the "music-dance combination" teaching approach in the folk dance teaching of regular colleges and universities can bring students' subjectivity into play and establish their principal role in teaching while fully developing the artistic and cultural connotations of dance. It makes the mastery of styles far more effective, thus improving teaching quality and developing students' creativity and expressiveness.
Schyns, Emile
1997-01-01
Measurement of $\pi^{+/-}$, $K^{+/-}$, $p$ and $\bar{p}$ production in $Z^0 \to q\bar{q}$, $Z^0 \to b\bar{b}$ and $Z^0 \to u\bar{u}, d\bar{d}, s\bar{s}$ (Particle Identification with the DELPHI Barrel Ring Imaging Cherenkov Counter)
Iterative implementation of the adaptive regularization yields optimality
MA Qinghua; WANG Yanfei
2005-01-01
The adaptive regularization method was first proposed by Ryzhikov et al. for deconvolution in the elimination of multiples. This method is stronger than Tikhonov regularization in the sense that it is adaptive, i.e. it eliminates the small eigenvalues of the adjoint operator when it is nearly singular. We show in this paper that the adaptive regularization can be implemented iteratively. Some properties of the proposed non-stationary iterated adaptive regularization method are analyzed, and the rate of convergence for inexact data is proved. Therefore the iterative implementation of the adaptive regularization can yield optimality.
Evolutionary internalized regularities.
Schwartz, R
2001-08-01
Roger Shepard's proposals and supporting experiments concerning evolutionary internalized regularities have been very influential in the study of vision and in other areas of psychology and cognitive science. This paper examines issues concerning the need, nature, explanatory role, and justification for postulating such internalized constraints. In particular, I seek further clarification from Shepard on how best to understand his claim that principles of kinematic geometry underlie phenomena of motion perception. My primary focus is on the ecological validity of Shepard's kinematic constraint in the context of ordinary motion perception. First, I explore the analogy Shepard draws between internalized circadian rhythms and the supposed internalization of kinematic geometry. Next, questions are raised about how to interpret and justify applying results from his own and others' experimental studies of apparent motion to more everyday cases of motion perception in richer environments. Finally, some difficulties with Shepard's account of the evolutionary development of his kinematic constraint are considered.
Adaptive Regularization of Neural Classifiers
Andersen, Lars Nonboe; Larsen, Jan; Hansen, Lars Kai
1997-01-01
We present a regularization scheme which iteratively adapts the regularization parameters by minimizing the validation error. It is suggested to use the adaptive regularization scheme in conjunction with optimal brain damage pruning to optimize the architecture and to avoid overfitting. Furthermo...
Modeling polycrystals with regular polyhedra
Paulo Rangel Rios
2006-06-01
Polycrystalline structure is of paramount importance to materials science and engineering. It provides an important example of a space-filling irregular network structure that also occurs in foams as well as in certain biological tissues. Seeking an accurate description of the characteristics of polycrystals is therefore of fundamental importance. Recently, one of the authors (MEG) published a paper in which a method was devised for representing irregular networks by regular polyhedra with curved faces. In Glicksman's method a whole class of irregular polyhedra with a given number of faces, N, is represented by a single symmetrical polyhedron with N curved faces. This paper briefly describes the topological and metric properties of these special polyhedra. They are then applied to two important problems of irregular networks: the dimensionless energy 'cost' of irregular networks, and the derivation of a 3D analogue of the von Neumann-Mullins equation for the growth rate of grains in a polycrystal.
Rega, G.; Lenci, S.; Thompson, J. M. T.
In this chapter we review the development of control-of-chaos theory subsequent to the seminal paper by Ott, Grebogi and Yorke in 1990. After summarizing the main characteristics of the OGY method, we analyze and discuss various applications in several fields of mechanics. We then illustrate the main aspects of an alternative control method which aims at controlling the overall system dynamics instead of stabilizing a single periodic orbit, as the OGY method does. The two methods are both based on the modern idea of exploiting the chaotic properties of systems, instead of simply eliminating chaos. This paper is one of a collection written in honour of Celso Grebogi, on the occasion of his 60th birthday. So we have thought it appropriate to start with short personal reminiscences by two of the present authors.
Bambi, Cosimo
2013-01-01
The formation of spacetime singularities is a quite common phenomenon in General Relativity and it is regulated by specific theorems. It is widely believed that spacetime singularities do not exist in Nature, but that they represent a limitation of the classical theory. While we do not yet have any solid theory of quantum gravity, toy models of black hole solutions without singularities have been proposed. So far, there are only non-rotating regular black holes in the literature. These metrics can hardly be tested by astrophysical observations, as the black hole spin plays a fundamental role in any astrophysical process. In this letter, we apply the Newman-Janis algorithm to the Hayward and to the Bardeen black hole metrics. In both cases, we obtain a family of rotating solutions. Every solution corresponds to a different matter configuration. Each family has one solution with special properties, which can be written in Kerr-like form in Boyer-Lindquist coordinates. These special solutions are of Petrov type ...
Bambi, Cosimo, E-mail: bambi@fudan.edu.cn; Modesto, Leonardo, E-mail: lmodesto@fudan.edu.cn
2013-04-25
The formation of spacetime singularities is a quite common phenomenon in General Relativity and it is regulated by specific theorems. It is widely believed that spacetime singularities do not exist in Nature, but that they represent a limitation of the classical theory. While we do not yet have any solid theory of quantum gravity, toy models of black hole solutions without singularities have been proposed. So far, there are only non-rotating regular black holes in the literature. These metrics can hardly be tested by astrophysical observations, as the black hole spin plays a fundamental role in any astrophysical process. In this Letter, we apply the Newman–Janis algorithm to the Hayward and to the Bardeen black hole metrics. In both cases, we obtain a family of rotating solutions. Every solution corresponds to a different matter configuration. Each family has one solution with special properties, which can be written in Kerr-like form in Boyer–Lindquist coordinates. These special solutions are of Petrov type D, they are singularity free, but they violate the weak energy condition for a non-vanishing spin and their curvature invariants have different values at r=0 depending on the way one approaches the origin. We propose a natural prescription to have rotating solutions with a minimal violation of the weak energy condition and without the questionable property of the curvature invariants at the origin.
Ensemble manifold regularization.
Geng, Bo; Tao, Dacheng; Xu, Chao; Yang, Linjun; Hua, Xian-Sheng
2012-06-01
We propose an automatic approximation of the intrinsic manifold for general semi-supervised learning (SSL) problems. Unfortunately, it is not trivial to define an optimization function to obtain optimal hyperparameters. Usually, cross validation is applied, but it does not necessarily scale up. Other problems derive from the suboptimality incurred by discrete grid search and the overfitting. Therefore, we develop an ensemble manifold regularization (EMR) framework to approximate the intrinsic manifold by combining several initial guesses. Algorithmically, we designed EMR carefully so it 1) learns both the composite manifold and the semi-supervised learner jointly, 2) is fully automatic for learning the intrinsic manifold hyperparameters implicitly, 3) is conditionally optimal for intrinsic manifold approximation under a mild and reasonable assumption, and 4) is scalable for a large number of candidate manifold hyperparameters, from both time and space perspectives. Furthermore, we prove the convergence property of EMR to the deterministic matrix at rate root-n. Extensive experiments over both synthetic and real data sets demonstrate the effectiveness of the proposed framework.
Deconvolution and Regularization with Toeplitz Matrices
Hansen, Per Christian
2002-01-01
of these discretized deconvolution problems, with emphasis on methods that take the special structure of the matrix into account. Wherever possible, analogies to classical DFT-based deconvolution problems are drawn. Among other things, we present direct methods for regularization with Toeplitz matrices, and we show...
Regularized degenerate multi-solitons
Correa, Francisco; Fring, Andreas
2016-09-01
We report complex PT-symmetric multi-soliton solutions to the Korteweg-de Vries equation that asymptotically contain one-soliton solutions, each possessing the same amount of finite real energy. We demonstrate how these solutions originate from degenerate energy solutions of the Schrödinger equation. Technically this is achieved by the application of Darboux-Crum transformations involving Jordan states with suitable regularizing shifts. Alternatively they may be constructed from a limiting process within the context of Hirota's direct method, or from a nonlinear superposition obtained from multiple Bäcklund transformations. The proposed procedure is completely generic and also applicable to other types of nonlinear integrable systems.
Method of firewall configuration checking based on regular expressions
汤飞
2015-01-01
This paper proposes a method based on regular expressions to check firewall configurations, developed through a study of the firewalls in the TRS network of railway stations. The method adopts regular expression matching instead of manual judgment, making the checking process more efficient. In addition, the configuration checking lists are drawn up from national information security standards and the practical requirements of the TRS network, which makes the checking process less subjective.
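The matching-instead-of-manual-judgment idea can be sketched generically. The rule descriptions and configuration syntax below are hypothetical placeholders, not rules or syntax from the paper or from any real firewall product:

```python
import re

# Each checking-list entry pairs a description with a regular expression
# that at least one compliant configuration line must match.
RULES = [
    ("default deny policy present",
     re.compile(r"^policy\s+default\s+deny$")),
    ("ssh limited to management subnet",
     re.compile(r"^permit\s+tcp\s+10\.0\.0\.0/24\s+any\s+eq\s+22$")),
]

def check_config(lines):
    """Return descriptions of rules that no configuration line satisfies."""
    return [desc for desc, pat in RULES
            if not any(pat.match(ln.strip()) for ln in lines)]

good = ["policy default deny", "permit tcp 10.0.0.0/24 any eq 22"]
bad = ["permit ip any any"]
```

Because the compliance criteria live in a data-driven rule list rather than in an auditor's head, the check is repeatable and, as the abstract argues, less subjective.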
Saide, Pablo (CGRER, Center for Global and Regional Environmental Research, Univ. of Iowa, Iowa City, IA (United States)), e-mail: pablo-saide@uiowa.edu; Bocquet, Marc (Universite Paris-Est, CEREA Joint Laboratory Ecole des Ponts ParisTech and EDF RandD, Champs-sur-Marne (France); INRIA, Paris Rocquencourt Research Center (France)); Osses, Axel (Departamento de Ingeniera Matematica, Universidad de Chile, Santiago (Chile); Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile)); Gallardo, Laura (Centro de Modelamiento Matematico, UMI 2807/Universidad de Chile-CNRS, Santiago (Chile); Departamento de Geofisica, Universidad de Chile, Santiago (Chile))
2011-07-15
When constraining surface emissions of air pollutants using inverse modelling, one often encounters spurious corrections to the inventory at places where emissions and observations are colocated, referred to here as the colocalization problem. Several approaches have been used to deal with this problem: coarsening the spatial resolution of the emissions; adding spatial correlations to the covariance matrices; adding constraints on the spatial derivatives to the functional being minimized; and multiplying the emission error covariance matrix by weighting factors. An intercomparison of methods for a carbon monoxide inversion over a city shows that even though all methods diminish the colocalization problem and produce similar general patterns, the detailed information can change greatly depending on the method used, ranging from smooth, isotropic, short-range modifications to less smooth, non-isotropic, long-range modifications. Poisson (non-Gaussian) and Gaussian assumptions both show these patterns, but in the Poisson case the emissions are naturally restricted to be positive and changes are given by multiplicative correction factors, producing results closer to the true nature of emission errors. Finally, we propose and test a new two-step, two-scale, fully Bayesian approach that deals with the colocalization problem and can be implemented for any prior density distribution.
Regular Bisimple ω²-semigroups
汪立民; 商宇
2008-01-01
The regular semigroups S with idempotent set E_S = {e0, e1, …, en, …} such that e0 > e1 > … > en > … are called regular ω-semigroups. In [5], Reilly determined the structure of a regular bisimple ω-semigroup as BR(G, θ), the classical Bruck-Reilly extension of a group G.
Completely regular fuzzifying topological spaces
A. K. Katsaras
2005-12-01
Some of the properties of completely regular fuzzifying topological spaces are investigated. It is shown that a fuzzifying topology τ is completely regular if and only if it is induced by some fuzzy uniformity, or equivalently by some fuzzifying proximity. Also, τ is completely regular if and only if it is generated by a family of probabilistic pseudometrics.
UNI-VECTOR-SENSOR DIRECTION FINDING WITH REGULARIZED ESPRIT
Anonymous
2008-01-01
The regularized Least-Squares Estimation method of Signal Parameters via Rotational Invariance Techniques (LS-ESPRIT) is herein proposed for Direction-Of-Arrival (DOA) estimation of non-Gaussian sources with only one acoustic vector-sensor. The Second-Order Statistics (SOS) and Higher-Order Statistics (HOS) of data are fused within a regularization framework. The steering vectors can be blindly identified by the regularized ESPRIT, from which the aim of DOA estimation can be achieved. Several variants of the regularized ESPRIT are discussed. A suboptimal scheme for determination of the regularization parameters is also given.
On regular rotating black holes
Torres, R.; Fayos, F.
2017-01-01
Different proposals for regular rotating black hole spacetimes have appeared recently in the literature. However, a rigorous analysis and proof of the regularity of this kind of spacetimes is still lacking. In this note we analyze rotating Kerr-like black hole spacetimes and find the necessary and sufficient conditions for the regularity of all their second order scalar invariants polynomial in the Riemann tensor. We also show that the regularity is linked to a violation of the weak energy conditions around the core of the rotating black hole.
Constrained and regularized system identification
Tor A. Johansen
1998-04-01
Prior knowledge can be introduced into system identification problems in terms of constraints on the parameter space, or regularizing penalty functions in a prediction error criterion. The contribution of this work is mainly an extension of the well-known FPE (Final Prediction Error) statistic to the case when the system identification problem is constrained and contains a regularization penalty. The FPECR statistic (Final Prediction Error with Constraints and Regularization) is of potential interest as a criterion for selection of both regularization parameters and structural parameters such as order.
CHEN Huan Yin; LI Fu An
2002-01-01
In this paper, we investigate ideals of regular rings and give several characterizations for an ideal to satisfy the comparability. In addition, it is shown that, if I is a minimal two-sided ideal of a regular ring R, then I satisfies the comparability if and only if I is separative. Furthermore, we prove that, for ideals with stable range one, Roth's problem has an affirmative solution. These extend the corresponding results on unit-regularity and one-sided unit-regularity.
P Dutt; Akhlaq Husain; A S Vasudeva Murthy; C S Upadhyay
2015-05-01
This is the first of a series of papers devoted to the study of h-p spectral element methods for solving three dimensional elliptic boundary value problems on non-smooth domains using parallel computers. In three dimensions there are three different types of singularities, namely the vertex, the edge and the vertex-edge singularities. In addition, the solution is anisotropic in the neighbourhoods of the edges and vertex-edges. To overcome the singularities which arise in the neighbourhoods of vertices, vertex-edges and edges, we use local systems of coordinates. These local coordinates are modified versions of spherical and cylindrical coordinate systems in their respective neighbourhoods. Away from these neighbourhoods standard Cartesian coordinates are used. In each of these neighbourhoods we use a geometrical mesh which becomes finer near the corners and edges. The geometrical mesh becomes a quasi-uniform mesh in the new system of coordinates. We then derive differentiability estimates in this new set of variables and state our main stability estimate theorem using a non-conforming h-p spectral element method, whose proof is given in a separate paper.
Modified sparse regularization for electrical impedance tomography.
Fan, Wenru; Wang, Huaxiang; Xue, Qian; Cui, Ziqiang; Sun, Benyuan; Wang, Qi
2016-03-01
Electrical impedance tomography (EIT) aims to estimate the electrical properties at the interior of an object from current-voltage measurements on its boundary. It has been widely investigated due to its advantages of low cost, non-radiation, non-invasiveness, and high speed. Image reconstruction of EIT is a nonlinear and ill-posed inverse problem. Therefore, regularization techniques like Tikhonov regularization are used to solve the inverse problem. A sparse regularization based on L1 norm exhibits superiority in preserving boundary information at sharp changes or discontinuous areas in the image. However, the limitation of sparse regularization lies in the time consumption for solving the problem. In order to further improve the calculation speed of sparse regularization, a modified method based on separable approximation algorithm is proposed by using adaptive step-size and preconditioning technique. Both simulation and experimental results show the effectiveness of the proposed method in improving the image quality and real-time performance in the presence of different noise intensities and conductivity contrasts.
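The l1-regularized reconstruction step described in this abstract can be sketched with a plain iterative soft-thresholding (ISTA) loop on a generic linear model. This is a minimal illustration of l1 regularization only, not the authors' adaptive-step, preconditioned separable-approximation algorithm, and the matrix sizes below are arbitrary.

```python
import numpy as np

def soft_threshold(x, t):
    # Proximal operator of t*||.||_1: shrink every entry toward zero by t.
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def ista(A, b, lam, n_iter=500):
    # Minimize 0.5*||A x - b||^2 + lam*||x||_1 by iterative
    # soft-thresholding with fixed step 1/L.
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the smooth part
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x - A.T @ (A @ x - b) / L, lam / L)
    return x
```

On a well-conditioned overdetermined system with a sparse ground truth, the l1 penalty recovers the support while driving the remaining coefficients to zero.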
Bartlett, Yvonne Kiera; Webb, Thomas L; Hawley, Mark S
2017-04-20
People with chronic obstructive pulmonary disease (PwCOPD) often experience breathlessness and fatigue, making physical activity challenging. Although many persuasive technologies (such as mobile phone apps) have been designed to support physical activity among members of the general population, current technologies aimed at PwCOPD are underdeveloped and only use a limited range of persuasive technology design principles. The aim of this study was to explore how acceptable different persuasive technology design principles were considered to be in supporting and encouraging physical activity among PwCOPD. Three prototypes for mobile apps using different persuasive technology design principles as defined by the persuasive systems design (PSD) model, namely dialogue support, primary task support, and social support, were developed. Opinions of these prototypes were explored through 28 interviews with PwCOPD, carers, and the health care professionals (HCPs) involved in their care, and questionnaires completed by 87 PwCOPD. Participants also ranked how likely individual techniques (eg, competition) would be to convince them to use a technology designed to support physical activity. Data were analyzed using framework analysis, Friedman tests, and Wilcoxon signed rank tests, and a convergent mixed methods design was used to integrate findings. The prototypes for mobile apps were received positively by participants. The prototype that used a dialogue support approach was identified as the most likely to be used or recommended by those interviewed, and was perceived as more persuasive than both of the other prototypes (Z=-3.06, P=.002; Z=-5.50, P<.001) and was considered persuasive by PwCOPD, carers, and HCPs. In the future, these approaches should be considered when designing apps to encourage physical activity by PwCOPD.
Regularly timed events amid chaos
Blakely, Jonathan N.; Cooper, Roy M.; Corron, Ned J.
2015-11-01
We show rigorously that the solutions of a class of chaotic oscillators are characterized by regularly timed events in which the derivative of the solution is instantaneously zero. The perfect regularity of these events is in stark contrast with the well-known unpredictability of chaos. We explore some consequences of these regularly timed events through experiments using chaotic electronic circuits. First, we show that a feedback loop can be implemented to phase lock the regularly timed events to a periodic external signal. In this arrangement the external signal regulates the timing of the chaotic signal but does not strictly lock its phase. That is, phase slips of the chaotic oscillation persist without disturbing timing of the regular events. Second, we couple the regularly timed events of one chaotic oscillator to those of another. A state of synchronization is observed where the oscillators exhibit synchronized regular events while their chaotic amplitudes and phases evolve independently. Finally, we add further coupling to synchronize the amplitudes as well, though in the opposite direction, illustrating the independence of the amplitudes from the regularly timed events.
A short proof of increased parabolic regularity
Stephen Pankavich
2015-08-01
We present a short proof of the increased regularity obtained by solutions to uniformly parabolic partial differential equations. Though this setting is fairly introductory, our new method of proof, which uses a priori estimates and an inductive method, can be extended to prove analogous results for problems with time-dependent coefficients, advection-diffusion or reaction-diffusion equations, and nonlinear PDEs, even when other tools, such as semigroup methods or the use of explicit fundamental solutions, are unavailable.
Conservative regularization of compressible flow
Krishnaswami, Govind S; Thyagaraja, Anantanarayanan
2015-01-01
Ideal Eulerian flow may develop singularities in vorticity w. Navier-Stokes viscosity provides a dissipative regularization. We find a local, conservative regularization - lambda^2 w times curl(w) of compressible flow and compressible MHD: a three dimensional analogue of the KdV regularization of the one dimensional kinematic wave equation. The regulator lambda is a field subject to the constitutive relation lambda^2 rho = constant. Lambda is like a position-dependent mean-free path. Our regularization preserves Galilean, parity and time-reversal symmetries. We identify locally conserved energy, helicity, linear and angular momenta and boundary conditions ensuring their global conservation. Enstrophy is shown to remain bounded. A swirl velocity field is identified, which transports w/rho and B/rho generalizing the Kelvin-Helmholtz and Alfven theorems. A Hamiltonian and Poisson bracket formulation is given. The regularized equations are used to model a rotating vortex, channel flow, plane flow, a plane vortex ...
Approximate Sparse Regularized Hyperspectral Unmixing
Chengzhi Deng
2014-01-01
Sparse regression based unmixing has recently been proposed to estimate the abundance of materials present in hyperspectral image pixels. In this paper, a novel sparse unmixing optimization model based on approximate sparsity, namely approximate sparse unmixing (ASU), is first proposed to perform the unmixing task for hyperspectral remote sensing imagery. A variable splitting and augmented Lagrangian algorithm is then introduced to tackle the optimization problem. In ASU, approximate sparsity is used as a regularizer for sparse unmixing; it is sparser than the l1 regularizer and much easier to solve than the l0 regularizer. Three simulated and one real hyperspectral images were used to evaluate the performance of the proposed algorithm in comparison to the l1 regularizer. Experimental results demonstrate that the proposed algorithm is more effective and accurate for hyperspectral unmixing than the state-of-the-art l1 regularizer.
Neural Classifier Construction using Regularization, Pruning
Hintz-Madsen, Mads; Hansen, Lars Kai; Larsen, Jan;
1998-01-01
In this paper we propose a method for construction of feed-forward neural classifiers based on regularization and adaptive architectures. Using a penalized maximum likelihood scheme, we derive a modified form of the entropic error measure and an algebraic estimate of the test error. In conjunction...
The moduli space of regular stable maps
Robbin, Joel; Salamon, Dietmar; 10.1007/s00209-007-0237-x
2012-01-01
The moduli space of regular stable maps with values in a complex manifold admits naturally the structure of a complex orbifold. Our proof uses the methods of differential geometry rather than algebraic geometry. It is based on Hardy decompositions and Fredholm intersection theory in the loop space of the target manifold.
Regular conformal system for Einstein equations
Choquet-Bruhat, Y.; Novello, M.
1987-06-21
We give a system of partial differential equations satisfied by a metric g conformal to an Einstein metric and by the conformal factor ω, a regular system in the sense that it does not contain negative powers of ω. We use the ideas of Friedrich, but we obtain here a hyperbolic system in the sense of Leray by a different method.
Tikhonov Regularization and Total Least Squares
Golub, G. H.; Hansen, Per Christian; O'Leary, D. P.
2000-01-01
formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that...
Annotation of regular polysemy and underspecification
Martínez Alonso, Héctor; Pedersen, Bolette Sandford; Bel, Núria
2013-01-01
We present the result of an annotation task on regular polysemy for a series of semantic classes or dot types in English, Danish and Spanish. This article describes the annotation process, the results in terms of inter-encoder agreement, and the sense distributions obtained with two methods...
Learning regularized LDA by clustering.
Pang, Yanwei; Wang, Shuang; Yuan, Yuan
2014-12-01
As a supervised dimensionality reduction technique, linear discriminant analysis has a serious overfitting problem when the number of training samples per class is small. The main reason is that the between- and within-class scatter matrices computed from the limited number of training samples deviate greatly from the underlying ones. To overcome the problem without increasing the number of training samples, we propose making use of the structure of the given training data to regularize the between- and within-class scatter matrices by between- and within-cluster scatter matrices, respectively, and simultaneously. The within- and between-cluster matrices are computed from unsupervised clustered data. The within-cluster scatter matrix contributes to encoding the possible variations in intraclasses and the between-cluster scatter matrix is useful for separating extra classes. The contributions are inversely proportional to the number of training samples per class. The advantages of the proposed method become more remarkable as the number of training samples per class decreases. Experimental results on the AR and Feret face databases demonstrate the effectiveness of the proposed method.
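The idea of stabilizing the within-class scatter when training samples are few can be illustrated with the more common ridge regularizer S_w + λI; note the paper instead regularizes with between- and within-cluster scatter matrices computed from unsupervised clusters, so this is only a hypothetical two-class sketch.

```python
import numpy as np

def shrinkage_lda_direction(X, y, lam):
    # Two-class Fisher discriminant direction with a ridge-regularized
    # within-class scatter matrix (a simple fallback when n is small;
    # not the cluster-based regularizer of the paper).
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
    Sw += lam * np.eye(X.shape[1])     # regularization keeps Sw invertible
    return np.linalg.solve(Sw, m1 - m0)
```

On two Gaussian blobs separated along one coordinate, the regularized direction aligns with the separating axis even with few samples per class.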
何人杰; 樊养余; WANG Zhiyong; FENG David
2016-01-01
Based on the property that the scene radiance is of high contrast and the atmospheric veil is locally smooth, a novel single hazy image restoration method based on nonlocal total variation regularization optimization is proposed in this paper. In order to obtain the atmospheric veil of a hazy image, a constrained nonlocal total variation regularization is first applied. Then, the accurate atmospheric veil is estimated using a nonlocal Rudin-Osher-Fatemi model, which is solved by a modified split Bregman method. Experimental results demonstrate that the proposed approach is capable of recovering the scene radiance from a single hazy image effectively, especially for regions with multiple textures.
A Criterion for Regular Sequences
D P Patil; U Storch; J Stückrad
2004-05-01
Let R be a commutative noetherian ring and $f_1,\ldots,f_r \in R$. In this article we give (cf. the Theorem in §2) a criterion for $f_1,\ldots,f_r$ to be a regular sequence for a finitely generated module over R which strengthens and generalises a result in [2]. As an immediate consequence we deduce that if $V(g_1,\ldots,g_r) \subseteq V(f_1,\ldots,f_r)$ in Spec R and if $f_1,\ldots,f_r$ is a regular sequence, then $g_1,\ldots,g_r$ is also a regular sequence.
Huanyin CHEN
2009-01-01
The necessary and sufficient conditions under which a ring satisfies regular power-substitution are investigated. It is shown that a ring R satisfies regular power-substitution if and only if a ~ b in R implies that there exist n ∈ N and U ∈ GLn(R) such that aU = Ub, if and only if for any regular x ∈ R there exist m, n ∈ N and U ∈ GLn(R) such that x^m In = x^m U x^m, where a ~ b means that there exist x, y, z ∈ R such that a = ybx, b = xaz and x = xyx = xzx. It is proved that every directly finite simple ring satisfies regular power-substitution. Some applications for stably free R-modules are also obtained.
NONCONVEX REGULARIZATION FOR SHAPE PRESERVATION
CHARTRAND, RICK [Los Alamos National Laboratory
2007-01-16
The authors show that using a nonconvex penalty term to regularize image reconstruction can substantially improve the preservation of object shapes. The commonly-used total-variation regularization, ∫|∇u|, penalizes the length of the object edges. They show that ∫|∇u|^p, 0 < p < 1, only penalizes edges of dimension at least 2-p, and thus finite-length edges not at all. They give numerical examples showing the resulting improvement in shape preservation.
Regular and Periodic Tachyon Kinks
Bazeia, D.; Menezes, R.; Ramos, J. G.
2004-01-01
We search for regular tachyon kinks in an extended model, which includes the tachyon action recently proposed to describe the tachyon field. The extended model that we propose adds a new contribution to the tachyon action, and seems to enrich the present scenario for the tachyon field. We have found stable tachyon kinks of regular profile, which may appropriately lead to the singular kink found by Sen sometime ago. Also, under specific conditions we may find periodic array of kink-antikink co...
Shervin Sahebi
2014-05-01
A ring (resp. semigroup) $R$ is called commuting regular if for each $x,y\in R$ there exists $a\in R$ such that $xy=yxayx$. In this paper, we introduce the concept of commuting $\pi$-regular rings (resp. semigroups) and study various properties of them.
Condition Number Regularized Covariance Estimation.
Won, Joong-Ho; Lim, Johan; Kim, Seung-Jean; Rajaratnam, Bala
2013-06-01
Estimation of high-dimensional covariance matrices is known to be a difficult problem, has many applications, and is of current interest to the larger statistics community. In many applications, including the so-called "large p, small n" setting, the estimate of the covariance matrix is required to be not only invertible, but also well-conditioned. Although many regularization schemes attempt to do this, none of them address the ill-conditioning problem directly. In this paper, we propose a maximum likelihood approach, with the direct goal of obtaining a well-conditioned estimator. No sparsity assumption on either the covariance matrix or its inverse is imposed, thus making our procedure more widely applicable. We demonstrate that the proposed regularization scheme is computationally efficient, yields a type of Steinian shrinkage estimator, and has a natural Bayesian interpretation. We investigate the theoretical properties of the regularized covariance estimator comprehensively, including its regularization path, and proceed to develop an approach that adaptively determines the level of regularization that is required. Finally, we demonstrate the performance of the regularized estimator in decision-theoretic comparisons and in the financial portfolio optimization setting. The proposed approach has desirable properties, and can serve as a competitive procedure, especially when the sample size is small and when a well-conditioned estimator is required.
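A crude way to enforce a condition-number bound on a covariance estimate is to clip the sample eigenvalues into the interval [λ_max/κ, λ_max]; the paper's maximum likelihood estimator instead chooses the truncation interval optimally, so the sketch below is only a simplified illustration of the constraint itself.

```python
import numpy as np

def clip_condition_number(S, kappa_max):
    # Clip the eigenvalues of a symmetric sample covariance S so the
    # result has condition number at most kappa_max. (Simplified sketch;
    # the ML estimator of Won et al. selects the clipping interval
    # by maximizing the likelihood.)
    w, V = np.linalg.eigh(S)
    floor = w.max() / kappa_max        # smallest eigenvalue allowed
    w_clipped = np.clip(w, floor, None)
    return V @ np.diag(w_clipped) @ V.T
```

Even when n < p and the sample covariance is singular, the clipped estimator is symmetric, positive definite, and satisfies the requested bound.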
冯德山; 王珣
2013-01-01
Starting from the boundary value problem for the partial differential equations satisfied by two-dimensional magnetotelluric (MT) forward modeling, a detailed finite element algorithm is derived, using rectangular grid subdivision and biquadratic interpolation within each cell, to solve the electromagnetic problem in both the TE and TM polarization modes. Using the basic theory of inversion, a regularization method for solving ill-posed problems is applied to the least-squares optimization approach to obtain a smoothness-constrained least-squares regularized inversion objective function, and a complete two-dimensional magnetotelluric forward and inverse computational program is written in Matlab. The program is applied to high/low resistance geoelectric models and the Sasaki model, and inversion cross-sections are plotted for the TE mode, the TM mode, and joint TE&TM inversion. Comparing the inversion results with the original models shows that the TE-mode inversion profile has higher vertical resolution, the TM mode has higher lateral resolution, and joint TE&TM inversion is superior to inversion of a single polarization mode. At the same time, the MT biquadratic-interpolation finite element forward modeling and least-squares regularization inversion algorithm are shown to be effective and feasible.
Ratanpal B S; Sharma Jaita
2016-03-01
The charged anisotropic star on paraboloidal space-time is reported by choosing a particular form of the radial pressure and electric field intensity. The non-singular solution of the Einstein-Maxwell system of equations has been derived and it is shown that the model satisfies all the physical plausibility conditions. It is observed that in the absence of electric field intensity, the model reduces to a particular case of the uncharged Sharma and Ratanpal model. It is also observed that the parameter used in the electric field intensity directly affects the mass of the star.
A Splitting Algorithm for Directional Regularization and Sparsification
Rakêt, Lars Lau; Nielsen, Mads
2012-01-01
be computed pointwise and are easily implemented on massively parallel processors. Furthermore the splitting method allows for the computation of solutions to a large number of more advanced directional regularization problems. In particular we are able to handle robust, non-convex data terms, and to define a 0-harmonic regularization energy where we sparsify directions by means of an L0 norm...
Effort variation regularization in sound field reproduction
Stefanakis, Nick; Jacobsen, Finn; Sarris, Ioannis
2010-01-01
In this paper, active control is used in order to reproduce a given sound field in an extended spatial region. A method is proposed which minimizes the reproduction error at a number of control positions with the reproduction sources holding a certain relation within their complex strengths. Specifically, it is suggested that the phase differential of the source driving signals should be in agreement with the phase differential of the desired sound pressure field. The performance of the suggested method is compared with that of conventional effort regularization, wave field synthesis (WFS), and adaptive wave field synthesis (AWFS), both under free-field conditions and in reverberant rooms. It is shown that effort variation regularization overcomes the problems associated with small spaces and with a low ratio of direct to reverberant energy, thus improving the reproduction accuracy...
SPATIAL MODELING OF SOLID-STATE REGULAR POLYHEDRA (PLATONIC SOLIDS) IN THE AUTOCAD SYSTEM
P. V. Bezditko
2009-03-01
This article describes the technology of modeling regular polyhedra by graphic methods. The authors conclude that the extrusion method is best suited to creating solid models of regular polyhedra.
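As a minimal, AutoCAD-independent illustration of constructing a regular polyhedron, the tetrahedron can be taken as alternate vertices of a cube; regularity is then verifiable by checking that all six edges have equal length. The article's extrusion workflow is not reproduced here.

```python
import numpy as np
from itertools import combinations

def tetrahedron_vertices():
    # Regular tetrahedron as alternate vertices of the cube [-1, 1]^3:
    # a standard construction, every pair of vertices differs in
    # exactly two coordinates, so all edges are the same length.
    return np.array([[1, 1, 1],
                     [1, -1, -1],
                     [-1, 1, -1],
                     [-1, -1, 1]], dtype=float)
```

Each edge has length sqrt(8), since any two vertices differ by 2 in exactly two coordinates.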
Efficient Hyperelastic Regularization for Registration
Darkner, Sune; Hansen, Michael Sass; Larsen, Rasmus;
2011-01-01
For most image registration problems a smooth one-to-one mapping, a diffeomorphism, is desirable. This can be obtained using priors such as volume preservation, certain kinds of elasticity, or both. The key principle is to regularize the strain of the deformation, which can be done through penalization of the eigenvalues of the stress tensor. We present a computational framework for regularization of image registration for isotropic hyperelasticity. We formulate an efficient and parallel scheme for computing the principal strain for a given parameterization by decomposing the left Cauchy...
Regular algebra and finite machines
Conway, John Horton
2012-01-01
World-famous mathematician John H. Conway based this classic text on a 1966 course he taught at Cambridge University. Geared toward graduate students of mathematics, it will also prove a valuable guide to researchers and professional mathematicians. His topics cover Moore's theory of experiments, Kleene's theory of regular events and expressions, Kleene algebras, the differential calculus of events, factors and the factor matrix, and the theory of operators. Additional subjects include event classes and operator classes, some regulator algebras, context-free languages, and commutative regular algebra...
Keller, Kai Johannes
2010-01-01
The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmer...
Sparse regularization for force identification using dictionaries
Qiao, Baijie; Zhang, Xingwu; Wang, Chenxi; Zhang, Hang; Chen, Xuefeng
2016-04-01
The classical function expansion method based on minimizing the l2-norm of the response residual employs various basis functions to represent the unknown force. Its difficulty lies in determining the optimum number of basis functions. Considering the sparsity of force in the time domain or in another basis space, we develop a general sparse regularization method based on minimizing the l1-norm of the coefficient vector of basis functions. The number of basis functions is adaptively determined by minimizing the number of nonzero components in the coefficient vector during the sparse regularization process. First, according to the profile of the unknown force, the dictionary composed of basis functions is determined. Second, a sparse convex optimization model for force identification is constructed. Third, given the transfer function and the operational response, Sparse Reconstruction by Separable Approximation (SpaRSA) is developed to solve the sparse regularization problem of force identification. Finally, experiments including identification of impact and harmonic forces are conducted on a cantilever thin plate structure to illustrate the effectiveness and applicability of SpaRSA. Besides the Dirac dictionary, three other sparse dictionaries, including Db6 wavelets, Sym4 wavelets and cubic B-spline functions, can also accurately identify both single and double impact forces from highly noisy responses in a sparse representation frame. The discrete cosine functions can also successfully reconstruct harmonic forces, including sinusoidal, square and triangular forces. Conversely, the traditional Tikhonov regularization method with the L-curve criterion fails to identify both the impact and harmonic forces in these cases.
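The dictionary idea, a force that is sparse in a suitable basis, can be illustrated with a greedy matching pursuit over an orthonormal DCT dictionary. This is a much simpler stand-in for the SpaRSA solver used in the paper, and the signal length and atom indices below are arbitrary.

```python
import numpy as np

def matching_pursuit(D, y, n_atoms):
    # Greedy sparse coding of y over a dictionary D whose columns are
    # unit-norm atoms: repeatedly pick the atom best correlated with
    # the residual and subtract its contribution.
    r = y.copy()
    coef = np.zeros(D.shape[1])
    for _ in range(n_atoms):
        c = D.T @ r                        # correlations with the residual
        k = int(np.argmax(np.abs(c)))
        coef[k] += c[k]
        r = r - c[k] * D[:, k]
    return coef

# Orthonormal DCT-II dictionary: a "harmonic force" made of a few
# cosines is exactly sparse in it.
n = 64
t = np.arange(n)
D = np.cos(np.pi * (t[:, None] + 0.5) * np.arange(n)[None, :] / n)
D /= np.linalg.norm(D, axis=0)
```

Because the DCT atoms are orthonormal, a signal built from two atoms is recovered exactly in two greedy steps.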
Regularized Regression and Density Estimation based on Optimal Transport
Burger, M.
2012-03-11
The aim of this paper is to investigate a novel nonparametric approach for estimating and smoothing density functions as well as probability densities from discrete samples based on a variational regularization method with the Wasserstein metric as a data fidelity. The approach allows a unified treatment of discrete and continuous probability measures and is hence attractive for various tasks. In particular, the variational model for special regularization functionals yields a natural method for estimating densities and for preserving edges in the case of total variation regularization. In order to compute solutions of the variational problems, a regularized optimal transport problem needs to be solved, for which we discuss several formulations and provide a detailed analysis. Moreover, we compute special self-similar solutions for standard regularization functionals and we discuss several computational approaches and results. © 2012 The Author(s).
Yibin XIAO; Guoji TANG; Xianjun LONG; Nanjing HUANG
2015-01-01
This paper studies the Browder-Tikhonov regularization of a second-order evolution hemivariational inequality (SOEHVI) with non-coercive operators. With duality mapping, the regularized formulations and a derived first-order evolution hemivariational inequality (FOEHVI) for the problem considered are presented. By applying the Browder-Tikhonov regularization method to the derived FOEHVI, a sequence of regularized solutions to the regularized SOEHVI is constructed, and the strong convergence of the whole sequence of regularized solutions to a solution to the problem is proved.
Regularization in Matrix Relevance Learning
Schneider, Petra; Bunte, Kerstin; Stiekema, Han; Hammer, Barbara; Villmann, Thomas; Biehl, Michael
2010-01-01
In this paper, we present a regularization technique to extend recently proposed matrix learning schemes in learning vector quantization (LVQ). These learning algorithms extend the concept of adaptive distance measures in LVQ to the use of relevance matrices. In general, metric learning can display...
Singularities of slice regular functions
Stoppato, Caterina
2010-01-01
Beginning in 2006, G. Gentili and D.C. Struppa developed a theory of regular quaternionic functions with properties that recall classical results in complex analysis. For instance, in each Euclidean ball centered at 0 the set of regular functions coincides with that of quaternionic power series converging in the same ball. In 2009 the author proposed a classification of singularities of regular functions as removable, essential or as poles and studied poles by constructing the ring of quotients. In that article, not only the statements, but also the proving techniques were confined to the special case of balls centered at 0. In a subsequent paper, F. Colombo, G. Gentili, I. Sabadini and D.C. Struppa (2009) identified a larger class of domains, on which the theory of regular functions is natural and not limited to quaternionic power series. The present article studies singularities in this new context, beginning with the construction of the ring of quotients and of Laurent-type expansions at points other than ...
Regular inference as vertex coloring
Costa Florêncio, C.; Verwer, S.
2012-01-01
This paper is concerned with the problem of supervised learning of deterministic finite state automata, in the technical sense of identification in the limit from complete data, by finding a minimal DFA consistent with the data (regular inference). We solve this problem by translating it in its entirety…
Regularized Generalized Structured Component Analysis
Hwang, Heungsun
2009-01-01
Generalized structured component analysis (GSCA) has been proposed as a component-based approach to structural equation modeling. In practice, GSCA may suffer from multi-collinearity, i.e., high correlations among exogenous variables. GSCA has yet no remedy for this problem. Thus, a regularized extension of GSCA is proposed that integrates a ridge…
2011-01-20
... meeting of the Board will be held at the offices of the Farm Credit Administration in McLean, Virginia... McLean, Virginia 22102. SUPPLEMENTARY INFORMATION: This meeting of the Board will be open to the public... CORPORATION Farm Credit System Insurance Corporation Board Regular Meeting SUMMARY: Notice is hereby given of...
Power-law regularities in human language
Mehri, Ali; Lashkari, Sahar Mohammadpour
2016-11-01
Complex structure of human language enables us to exchange very complicated information. This communication system obeys some common nonlinear statistical regularities. We investigate four important long-range features of human language, performing our calculations for selected works of seven famous litterateurs. Zipf's law and Heaps' law, which imply well-known power-law behaviors, are established in human language, showing a qualitative inverse relation with each other. Furthermore, the informational content associated with the word ordering is measured by using an entropic metric. We also calculate the fractal dimension of words in the text by using the box-counting method. The fractal dimension of each word, a positive value less than or equal to one, reflects its spatial distribution in the text. Generally, we can claim that human language follows the mentioned power-law regularities. Power-law relations imply the existence of long-range correlations between the word types used to convey a particular idea.
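The two power laws named in this abstract are straightforward to check on any text. A minimal sketch (function names are illustrative, not from the paper): under Zipf's law the product frequency × rank stays roughly constant over the top ranks, and under Heaps' law the vocabulary grows sublinearly with text length.

```python
from collections import Counter

def zipf_table(text, n_ranks=10):
    """Zipf's law predicts frequency ~ C / rank, so freq * rank should
    be roughly constant over the top ranks."""
    counts = Counter(text.lower().split())
    return [(rank, word, freq, freq * rank)
            for rank, (word, freq) in enumerate(counts.most_common(n_ranks), 1)]

def heaps_curve(text):
    """Heaps' law: vocabulary size V(n) grows like K * n^beta with
    beta < 1; this collects the raw (length, vocabulary) curve."""
    seen, curve = set(), []
    for n, word in enumerate(text.lower().split(), 1):
        seen.add(word)
        curve.append((n, len(seen)))
    return curve
```

Estimating the Heaps exponent itself would take a log-log regression over the curve; the sketch only collects the raw counts.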
Chiral Perturbation Theory With Lattice Regularization
Ouimet, P P A
2005-01-01
In this work, alternative methods to regularize chiral perturbation theory are discussed. First, Long Distance Regularization will be considered in the presence of the decuplet of the lightest spin-3/2 baryons for several different observables. This serves as motivation and introduction to the use of the lattice regulator for chiral perturbation theory. The mesonic, baryonic and anomalous sectors of chiral perturbation theory will be formulated on a lattice of space-time points. The consistency of the lattice as a regulator will be discussed in the context of the meson and baryon masses. Order-a effects will also be discussed for the baryon masses, sigma terms and magnetic moments. The work will close with an attempt to derive an effective Wess-Zumino-Witten Lagrangian for Wilson fermions at non-zero a. Following this discussion, there will be a proposal for a phenomenologically useful WZW Lagrangian at non-zero a.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
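As a rough illustration of the sum-space idea, one can approximate it with kernel ridge regression using the sum of a wide and a narrow Gaussian kernel, so that the wide kernel absorbs the low-frequency component and the narrow one the high-frequency detail. This is a simplified sketch under our own parameter choices, not the paper's exact block linear system:

```python
import numpy as np

def gaussian_kernel(X, Y, sigma):
    # Pairwise squared distances between rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sum_space_ridge(X, y, sigmas=(2.0, 0.2), lam=1e-3):
    """Regression regularized in a sum of Gaussian RKHSs, approximated
    here by ridge regression with the summed kernel: the wide kernel
    fits the smooth trend, the narrow one the localized detail."""
    K = sum(gaussian_kernel(X, X, s) for s in sigmas)
    alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)
    return lambda Xnew: sum(gaussian_kernel(Xnew, X, s) for s in sigmas) @ alpha

# Nonflat target: smooth trend plus a localized bump.
X = np.linspace(-3, 3, 120)[:, None]
y = np.sin(X[:, 0]) + 0.8 * np.exp(-40.0 * (X[:, 0] - 1.0) ** 2)
predict = sum_space_ridge(X, y)
```

A single wide kernel would oversmooth the bump and a single narrow kernel would overfit the trend; the summed kernel handles both scales at once.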
Spatially varying regularization of deconvolution in 3D microscopy.
Seo, J; Hwang, S; Lee, J-M; Park, H
2014-08-01
Confocal microscopy has become an essential tool to explore biospecimens in 3D. Confocal microscopy images are still degraded by out-of-focus blur and Poisson noise. Many deconvolution methods, including the Richardson-Lucy (RL) method, the Tikhonov method and the split-gradient (SG) method, have been well received. The RL deconvolution method results in enhanced image quality, especially for Poisson noise. The Tikhonov deconvolution method improves on the RL method by imposing a prior model of spatial regularization, which encourages adjacent voxels to appear similar. The SG method also contains spatial regularization and is capable of incorporating many edge-preserving priors, resulting in improved image quality. For the Tikhonov and SG methods, the strength of spatial regularization is fixed regardless of spatial location. This study improves upon the Tikhonov and SG deconvolution methods by allowing the strength of spatial regularization to differ across spatial locations in a given image. The novel method shows improved image quality. The method was tested on phantom data for which the ground truth and the point spread function are known. A Kullback-Leibler (KL) divergence value of 0.097 is obtained when applying spatially variable regularization to the SG method, whereas a KL value of 0.409 is obtained with the Tikhonov method. In tests on real data, for which the ground truth is unknown, the reconstructed data show improved noise characteristics while maintaining important image features such as edges.
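The key idea, a per-location regularization weight, can be sketched in a linear 1D denoising toy (our own minimal example; the paper embeds the varying weights inside RL and SG iterations for Poisson noise, which this least-squares toy does not model):

```python
import numpy as np

def spatially_varying_smoothing(b, w):
    """Solve min_x ||x - b||^2 + sum_i w[i] * (x[i+1] - x[i])^2.
    The weight w[i] sets how strongly the jump between samples i and
    i+1 is smoothed; w[i] ~ 0 at detected edges preserves them."""
    n = len(b)
    D = np.diff(np.eye(n), axis=0)              # forward differences
    return np.linalg.solve(np.eye(n) + D.T @ np.diag(w) @ D, b)

step = np.array([0.0, 0.0, 1.0, 1.0])
edge_aware = spatially_varying_smoothing(step, np.array([10.0, 0.0, 10.0]))
uniform = spatially_varying_smoothing(step, np.array([10.0, 10.0, 10.0]))
```

With the weight switched off across the jump, the step survives exactly; uniform weighting blurs it.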
Recursively-regular subdivisions and applications
Rafel Jaume
2016-05-01
Full Text Available We generalize regular subdivisions (polyhedral complexes resulting from the projection of the lower faces of a polyhedron) by introducing the class of recursively-regular subdivisions. Informally speaking, a recursively-regular subdivision is a subdivision that can be obtained by splitting some faces of a regular subdivision by other regular subdivisions (and continuing recursively). We also define the finest regular coarsening and the regularity tree of a polyhedral complex. We prove that recursively-regular subdivisions are not necessarily connected by flips and that they are acyclic with respect to the in-front relation. We show that the finest regular coarsening of a subdivision can be efficiently computed, and that whether a subdivision is recursively regular can be efficiently decided. As an application, we also extend a theorem known since 1981 on illuminating space by cones and present connections of recursive regularity to tensegrity theory and graph-embedding problems.
Yoenia Virgen Barbán Sarduy
2012-12-01
Full Text Available This article presents the results obtained through the application of the case study method to a sample of four deafblind schoolchildren. The method allowed the determination of theoretical and practical regularities through the coherent application of tools and techniques in different periods for the gathering and evaluation of the investigation's results. The article begins with the theoretical assumptions reflecting the author's conception, which is then exemplified through the four cases studied and the theoretical and practical regularities determined for the social integration of these schoolchildren.
VIBRATING VELOCITY RECONSTRUCTION USING IBEM AND TIKHONOV REGULARIZATION
[No author listed]
2003-01-01
The inverse problem of determining the vibrating velocity from known exterior field pressure measurements involves the solution of a discrete ill-posed problem. To make the computation of a meaningful approximate solution possible, an indirect boundary element method (IBEM) code for investigating vibration velocity reconstruction and the Tikhonov regularization method based on singular value decomposition (SVD) are used. The amount of regularization is determined by a regularization parameter, whose optimal value is given by the L-curve approach. Numerical results indicate that the reconstructed normal surface velocity is a good approximation to the real source.
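The SVD-based Tikhonov solution and an L-curve-style parameter choice described here can be sketched in a few lines. Note the corner detection below is a crude proxy (minimizing the product of residual and solution norms), not the full curvature-based L-curve criterion:

```python
import numpy as np

def tikhonov_svd(A, b, lam):
    """Tikhonov-regularized solution via the SVD A = U S V^T:
    x = sum_i s_i / (s_i^2 + lam^2) * (u_i . b) * v_i."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s ** 2 + lam ** 2)
    return Vt.T @ (filt * (U.T @ b))

def l_curve_pick(A, b, lams):
    """Pick lambda near the L-curve corner. Crude proxy: minimize the
    product of the residual norm and the solution norm over a grid."""
    def score(lam):
        x = tikhonov_svd(A, b, lam)
        return np.linalg.norm(A @ x - b) * np.linalg.norm(x)
    return min(lams, key=score)
```

With lam = 0 the filter factors reduce to 1/s_i and the formula reproduces the (noise-amplifying) pseudoinverse solution; increasing lam damps the small singular values that make the problem ill-posed.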
Generalization Performance of Regularized Ranking With Multiscale Kernels.
Zhou, Yicong; Chen, Hong; Lan, Rushi; Pan, Zhibin
2016-05-01
The regularized kernel method for the ranking problem has attracted increasing attention in machine learning. Previous regularized ranking algorithms are usually based on reproducing kernel Hilbert spaces with a single kernel. In this paper, we go beyond this framework by investigating the generalization performance of regularized ranking with multiscale kernels. A novel ranking algorithm with multiscale kernels is proposed and its representer theorem is proved. We establish the upper bound of the generalization error in terms of the complexity of hypothesis spaces. It shows that the multiscale ranking algorithm can achieve satisfactory learning rates under mild conditions. Experiments demonstrate the effectiveness of the proposed method for drug discovery and recommendation tasks.
A Regularized Algorithm for the Proximal Split Feasibility Problem
Zhangsong Yao
2014-01-01
Full Text Available The proximal split feasibility problem has been studied. A regularized method is presented for solving the proximal split feasibility problem, and a strong convergence theorem is given.
The lattice generated by some subvarieties of completely regular semigroups
2008-01-01
Using a construction theorem of cryptogroups and congruence methods, we determine an 18-element lattice, generated by {NOBG, ROBG, OBG, NBA, RBA, BA}, of subvarieties of completely regular semigroups.
Full L1-regularized Traction Force Microscopy over whole cells.
Suñé-Auñón, Alejandro; Jorge-Peñas, Alvaro; Aguilar-Cuenca, Rocío; Vicente-Manzanares, Miguel; Van Oosterwyck, Hans; Muñoz-Barrutia, Arrate
2017-08-10
Traction Force Microscopy (TFM) is a widespread technique to estimate the tractions that cells exert on the surrounding substrate. To recover the tractions, it is necessary to solve an inverse problem, which is ill-posed and needs regularization to make the solution stable. The typical regularization scheme is given by the minimization of a cost functional, which is divided into two terms: the error present in the data, or data fidelity term; and the regularization, or penalty, term. The classical approach is to use zero-order Tikhonov or L2-regularization, which uses the L2-norm for both terms in the cost function. Recently, some studies have demonstrated an improved performance using L1-regularization (L1-norm in the penalty term), related to an increase in the spatial resolution and sensitivity of the recovered traction field. In this manuscript, we present a comparison between the previous two regularization schemes (relying on the L2-norm for the data fidelity term) and full L1-regularization (using the L1-norm for both terms in the cost function) for synthetic and real data. Our results reveal that L1-regularizations give an improved spatial resolution (most pronounced for full L1-regularization) and a reduction in the background noise with respect to classical zero-order Tikhonov regularization. In addition, we present an approximation which makes feasible the recovery of cellular tractions over whole cells on typical full-size microscope images when working in the spatial domain. The proposed full L1-regularization improves the sensitivity to recover small stress footprints. Moreover, the proposed method has been validated on full-field microscopy images of real cells, which demonstrates that it is a promising tool for biological applications.
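The contrast between the two classical schemes can be sketched as follows (a hedged toy, not the paper's TFM solver; the full L1 variant with an L1 data term needs a different solver, e.g. linear programming or ADMM, and is not shown): zero-order Tikhonov has a closed form, while the L2-data/L1-penalty variant can be solved by iterative soft-thresholding (ISTA).

```python
import numpy as np

def tikhonov(A, b, lam):
    """Zero-order Tikhonov (L2 data term, L2 penalty): closed form via
    the normal equations (A^T A + lam I) x = A^T b."""
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

def ista_l1(A, b, lam, n_iter=500):
    """L2 data term with L1 penalty, solved by iterative
    soft-thresholding (ISTA); promotes sparse solutions."""
    L = np.linalg.norm(A, 2) ** 2       # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = x - A.T @ (A @ x - b) / L   # gradient step on the data term
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrinkage
    return x
```

On an identity operator the difference is plain: Tikhonov shrinks every component uniformly, while ISTA zeroes out small components entirely, which is the sparsity effect behind the sharper recovered traction footprints.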
刘新庚; 朱新洲; 刘邦捷
2014-01-01
...the approaches and methods have been diversified with the expansion of people's ideas. Every link in the development of its evolution reflects something of society's objective requirements, as well as the inevitable regularity of "strategic guidance".
Катерина Ігорівна Сізова
2015-03-01
Full Text Available Large-scale sinter plants at metallurgical enterprises incorporate highly productive transport-and-handling complexes (THC) that receive and process mass iron-bearing raw materials. Such THCs as a rule include unloading facilities and a freight railway station. The central part of the THC is a technological line that carries out operations of reception and unloading of unit trains with raw materials. The technological line consists of transport and freight modules. The latter plays the leading role and, in its turn, consists of rotary car dumpers and conveyor belts. This module represents a determinate system that carries out preparation and unloading operations. Its processing capacity is set in accordance with the manufacturing capacity of the sinter plant. The research has shown that in existing operating conditions, which are characterized by "arrhythmia" in the interaction between external transport operation and production, the technological line of the THC functions inefficiently: it secures just 18-20 % of instances of processing of inbound unit trains within the set standard time. It was determined that the duration of the cycle of processing of an inbound unit train can play the role of a regulator, given the stochastic characteristics of intervals between inbound unit trains with raw materials on the one hand, and the determinate unloading system on the other. That is why evaluation of the interdependence between these factors allows determination of the duration of the cycle of processing of inbound unit trains. Based on the results of the study, a method of logistical management of the processing of inbound unit trains was offered, in which the real duration of processing of an inbound unit train is taken as the regulated value. The regulation process implies regular evaluation and comparison of these values and, taking into account different disturbances, decision-making concerning adaptation of the functioning of the technological line. According to the offered principles…
马吉祥; 孙华
2011-01-01
The single-phase AC asynchronous motor is a very widely used type of electrical motor. Design optimization of electrical motors is a complex, constrained, nonlinear, mixed-discrete multi-variable programming problem. This paper applies the Regular Polyhedron Method to the design optimization of a single-phase AC asynchronous motor. On the basis of the mathematical model, the Regular Polyhedron Method is analyzed and discussed in depth, the selected motor is optimized, and the obtained results are compared and analyzed. The results show that the performance of the algorithm is satisfactory and that it has practical value.
General inverse problems for regular variation
Damek, Ewa; Mikosch, Thomas Valentin; Rosinski, Jan
2014-01-01
Regular variation of distributional tails is known to be preserved by various linear transformations of some random structures. An inverse problem for regular variation aims at understanding whether the regular variation of a transformed random object is caused by regular variation of components ...
Regular Motions of Resonant Asteroids
Ferraz-Mello, S.
1990-11-01
RESUMEN. Se revisan resultados analíticos relativos a soluciones regulares del problema asteroidal elíptico promediado en la vecindad de una resonancia con Júpiter. Mencionamos la ley de estructura para libradores de alta excentricidad, la estabilidad de los centros de libración, las perturbaciones forzadas por la excentricidad de Júpiter y las órbitas de corotación. ABSTRACT. This paper reviews analytical results concerning the regular solutions of the elliptic asteroidal problem averaged in the neighbourhood of a resonance with Jupiter. We mention the law of structure for high-eccentricity librators, the stability of the libration centers, the perturbations forced by the eccentricity of Jupiter and the corotation orbits. Key words: ASTEROIDS
Energy functions for regularization algorithms
Delingette, H.; Hebert, M.; Ikeuchi, K.
1991-01-01
Regularization techniques are widely used for inverse problem solving in computer vision, such as surface reconstruction, edge detection, or optical flow estimation. Energy functions used for regularization algorithms measure how smooth a curve or surface is, and to render acceptable solutions these energies must satisfy certain properties such as invariance under Euclidean transformations or invariance under parameterization. The notion of smoothness energy is extended here to the notion of a differential stabilizer, and it is shown that to avoid the systematic underestimation of curvature in planar curve fitting, it is necessary that circles be the curves of maximum smoothness. A set of stabilizers is proposed that meets this condition as well as invariance under rotation and parameterization.
Physical model of dimensional regularization
Schonfeld, Jonathan F.
2016-12-15
We explicitly construct fractals of dimension 4-ε on which dimensional regularization approximates scalar-field-only quantum-field theory amplitudes. The construction does not require fractals to be Lorentz-invariant in any sense, and we argue that there probably is no Lorentz-invariant fractal of dimension greater than 2. We derive dimensional regularization's power-law screening first for fractals obtained by removing voids from 3-dimensional Euclidean space. The derivation applies techniques from elementary dielectric theory. Surprisingly, fractal geometry by itself does not guarantee the appropriate power-law behavior; boundary conditions at fractal voids also play an important role. We then extend the derivation to 4-dimensional Minkowski space. We comment on generalization to non-scalar fields, and speculate about implications for quantum gravity. (orig.)
Central charges in regular mechanics
Cabo-Montes de Oca, Alejandro; Villanueva, V M
1997-01-01
We consider the algebra associated to a group of transformations which are symmetries of a regular mechanical system (i.e. system free of constraints). For time dependent coordinate transformations we show that a central extension may appear at the classical level which is coordinate and momentum independent. A cochain formalism naturally arises in the argument and extends the usual configuration space cochain concepts to phase space.
Hyperspectral Image Recovery via Hybrid Regularization
Arablouei, Reza; de Hoog, Frank
2016-12-01
Natural images tend to mostly consist of smooth regions with individual pixels having highly correlated spectra. This information can be exploited to recover hyperspectral images of natural scenes from their incomplete and noisy measurements. To perform the recovery while taking full advantage of the prior knowledge, we formulate a composite cost function containing a square-error data-fitting term and two distinct regularization terms pertaining to the spatial and spectral domains. The regularization for the spatial domain is the sum of the total variation of the image frames corresponding to all spectral bands. The regularization for the spectral domain is the l1-norm of the coefficient matrix obtained by applying a suitable sparsifying transform to the spectra of the pixels. We use an accelerated proximal-subgradient method to minimize the formulated cost function. We analyze the performance of the proposed algorithm and prove its convergence. Numerical simulations using real hyperspectral images show that the proposed algorithm offers an excellent recovery performance with a number of measurements that is only a small fraction of the hyperspectral image data size. Simulation results also show that the proposed algorithm significantly outperforms an accelerated proximal-gradient algorithm that solves the classical basis-pursuit denoising problem to recover the hyperspectral image.
Charge-regularization effects on polyelectrolytes
Muthukumar, Murugappan
2012-02-01
When electrically charged macromolecules are dispersed in polar solvents, their effective net charge is generally different from their chemical charges, due to competition between counterion adsorption and the translational entropy of dissociated counterions. The effective charge changes significantly as the experimental conditions change, such as variations in solvent quality, temperature, and the concentration of added small electrolytes. This charge-regularization effect leads to major difficulties in interpreting experimental data on polyelectrolyte solutions and challenges in understanding the various polyelectrolyte phenomena. Even the most fundamental issue of experimental determination of the molar mass of charged macromolecules by the light scattering method has been difficult so far due to this feature. We will present a theory of charge-regularization of flexible polyelectrolytes in solutions and discuss the consequences of charge-regularization on (a) experimental determination of the molar mass of polyelectrolytes using scattering techniques, (b) the coil-globule transition, (c) macrophase separation in polyelectrolyte solutions, (d) phase behavior in coacervate formation, and (e) volume phase transitions in polyelectrolyte gels.
Efficient Hyperelastic Regularization for Registration
Darkner, Sune; Hansen, Michael S; Larsen, Rasmus;
2011-01-01
For most image registration problems a smooth one-to-one mapping, a diffeomorphism, is desirable. This can be obtained using priors such as volume preservation, certain kinds of elasticity, or both. The key principle is to regularize the strain of the deformation, which can be done through penalization of the eigenvalues of the stress tensor. We present a computational framework for regularization of image registration for isotropic hyperelasticity. We formulate an efficient and parallel scheme for computing the principal strain for a given parameterization by decomposing the left Cauchy… elastic priors such as the Saint Venant-Kirchhoff model, the Ogden material model or Riemannian elasticity. We exemplify the approach through synthetic registration and special tests as well as registration of different modalities: 2D cardiac MRI and 3D surfaces of the human ear. The artificial examples…
Regular aspirin use and lung cancer risk
Cummings K
2002-11-01
Full Text Available Abstract Background Although a large number of epidemiological studies have examined the role of aspirin in the chemoprevention of colon cancer and other solid tumors, there is a limited body of research focusing on the association between aspirin and lung cancer risk. Methods We conducted a hospital-based case-control study to evaluate the role of regular aspirin use in lung cancer etiology. Study participants included 868 cases with primary, incident lung cancer and 935 hospital controls with non-neoplastic conditions who completed a comprehensive epidemiological questionnaire. Participants were classified as regular aspirin users if they had taken the drug at least once a week for at least one year. Results Results indicated that lung cancer risk was significantly lower for aspirin users compared to non-users (adjusted OR = 0.57; 95% CI 0.41–0.78). Although there was no clear evidence of a dose-response relationship, we observed risk reductions associated with greater frequency of use. Similarly, prolonged duration of use and increasing tablet-years (tablets per day × years of use) was associated with reduced lung cancer risk. Risk reductions were observed in both sexes, but significant dose-response relationships were only seen among male participants. When the analyses were restricted to former and current smokers, participants with the lowest cigarette exposure tended to benefit most from the potential chemopreventive effect of aspirin. After stratification by histology, regular aspirin use was significantly associated with reduced risk of small cell lung cancer and non-small cell lung cancer. Conclusions Overall, results from this hospital-based case-control study suggest that regular aspirin use may be associated with reduced risk of lung cancer.
From Dimensional to Cut-Off Regularization
Dillig, M
2006-01-01
We extend the standard approach of dimensional regularization of Feynman diagrams: we replace the transition to lower dimensions by a 'natural' cut-off regulator. Introducing an external regulator of mass Lambda^(2e), we regain in the limit e -> 0 and for e > 0 the results of dimensional and cut-off regularization, respectively. We demonstrate the versatility and adequacy of the different regularization schemes for practical examples (such as non-covariant regularization, the axial anomaly or regularization in effective field theories).
Regularized multiple criteria linear programs for classification
SHI Yong; TIAN YingJie; CHEN XiaoJun; ZHANG Peng
2009-01-01
Although the multiple criteria mathematical program (MCMP), as an alternative method of classification, has been used in various real-life data mining problems, its mathematical structure of solvability remains challenging. This paper proposes a regularized multiple criteria linear program (RMCLP) for two-class classification problems. It first adds regularization terms to the objective function of the known multiple criteria linear program (MCLP) model to guarantee the existence of a solution. Then the paper describes the mathematical framework of the solvability. Finally, a series of experimental tests is conducted to compare the proposed RMCLP with existing methods: MCLP, the multiple criteria quadratic program (MCQP), and the support vector machine (SVM). The results on four publicly available datasets and a real-life credit dataset all show that RMCLP is a competitive method in classification. Furthermore, this paper explores an ordinal RMCLP (ORMCLP) model for ordinal multi-group problems. Comparing ORMCLP with traditional methods such as One-Against-One and One-Against-The-Rest on a large-scale credit card dataset, experimental results show that both ORMCLP and RMCLP perform well.
Kowalski, Karol; Valiev, Marat
2009-12-01
The recently introduced energy expansion based on the use of a generating functional (GF) [K. Kowalski and P. D. Fan, J. Chem. Phys. 130, 084112 (2009)] provides a way of constructing size-consistent noniterative coupled cluster (CC) corrections in terms of moments of the CC equations. To take advantage of this expansion in a strongly interacting regime, regularization of the cluster amplitudes is required in order to counteract the effect of excessive growth of the norm of the CC wave function. Although proven to be efficient, the previously discussed form of the regularization does not lead to rigorously size-consistent corrections. In this paper we address the issue of size-consistent regularization of the GF expansion by redefining the equations for the cluster amplitudes. The performance and basic features of the proposed methodology are illustrated on several gas-phase benchmark systems. Moreover, the regularized GF approaches are combined with a quantum mechanical/molecular mechanics module and applied to describe the SN2 reaction of CHCl3 and OH- in aqueous solution.
Learning regularization parameters for general-form Tikhonov
Chung, Julianne; Español, Malena I.
2017-07-01
Computing regularization parameters for general-form Tikhonov regularization can be an expensive and difficult task, especially if multiple parameters or many solutions need to be computed in real time. In this work, we assume training data is available and describe an efficient learning approach for computing regularization parameters that can be used for a large set of problems. We consider an empirical Bayes risk minimization framework for finding regularization parameters that minimize average errors for the training data. We first extend methods from Chung et al (2011 SIAM J. Sci. Comput. 33 3132-52) to the general-form Tikhonov problem. Then we develop a learning approach for multi-parameter Tikhonov problems, for the case where all involved matrices are simultaneously diagonalizable. For problems where this is not the case, we describe an approach to compute near-optimal regularization parameters by using operator approximations for the original problem. Finally, we propose a new class of regularizing filters, where solutions correspond to multi-parameter Tikhonov solutions, that requires less data than previously proposed optimal error filters, avoids the generalized SVD, and allows flexibility and novelty in the choice of regularization matrices. Numerical results for 1D and 2D examples using different norms on the errors show the effectiveness of our methods.
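The training-data idea above can be sketched as a grid search: given pairs of true solutions and their data, pick the Tikhonov parameter whose filtered SVD solutions minimize the average error. This is a toy stand-in for the paper's empirical Bayes risk minimization framework; all names and the grid are ours.

```python
import numpy as np

def learn_lambda(A, training_pairs, lam_grid):
    """Choose the Tikhonov parameter minimizing the total squared error
    of the filtered SVD solutions over known (x_true, b) pairs."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    def risk(lam):
        filt = s / (s ** 2 + lam ** 2)
        return sum(np.linalg.norm(Vt.T @ (filt * (U.T @ b)) - x_true) ** 2
                   for x_true, b in training_pairs)
    return min(lam_grid, key=risk)

# Ill-conditioned toy operator: the unregularized solution amplifies noise.
A = np.diag([1.0, 0.01])
x_true = np.array([1.0, 1.0])
b = A @ x_true + np.array([0.05, 0.05])       # noisy data
best = learn_lambda(A, [(x_true, b)], [0.0, 0.1, 1.0])
```

Here lam = 0 reproduces the noise-amplified least-squares solution and lam = 1 oversmooths, so the learned parameter is the intermediate 0.1.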
Invariant Regularization of Supersymmetric Chiral Gauge Theory
Suzuki, H
1999-01-01
We present a regularization scheme which respects the supersymmetry and the maximal background gauge covariance in supersymmetric chiral gauge theories. When the anomaly cancellation condition is satisfied, the effective action in the superfield background field method automatically restores the gauge invariance without counterterms. The scheme also provides a background gauge covariant definition of composite operators that is especially useful in analyzing anomalies. We present several applications: the minimal consistent gauge anomaly; the super-chiral anomaly and the superconformal anomaly; and, as the corresponding anomalous commutators, the Konishi anomaly and an anomalous supersymmetric transformation law of the supercurrent (the "central extension" of the N=1 supersymmetry algebra) and of the R-current.
Multichannel image regularization using anisotropic geodesic filtering
Grazzini, Jacopo A [Los Alamos National Laboratory
2010-01-01
This paper extends a recently introduced image-dependent regularization approach aimed at edge-preserving smoothing. For that purpose, geodesic distances equipped with a Riemannian metric need to be estimated in local neighbourhoods. By deriving an appropriate metric from the gradient structure tensor, the associated geodesic paths are constrained to follow salient features in images. Building on this, we design a generalized anisotropic geodesic filter, incorporating not only a measure of the edge strength, as in the original method, but also further directional information about the image structures. The proposed filter is particularly efficient at smoothing heterogeneous areas while preserving relevant structures in multichannel images.
Academic Training Lecture - Regular Programme
PH Department
2011-01-01
Regular Lecture Programme 9 May 2011 ACT Lectures on Detectors - Inner Tracking Detectors by Pippa Wells (CERN) 10 May 2011 ACT Lectures on Detectors - Calorimeters (2/5) by Philippe Bloch (CERN) 11 May 2011 ACT Lectures on Detectors - Muon systems (3/5) by Kerstin Hoepfner (RWTH Aachen) 12 May 2011 ACT Lectures on Detectors - Particle Identification and Forward Detectors by Peter Krizan (University of Ljubljana and J. Stefan Institute, Ljubljana, Slovenia) 13 May 2011 ACT Lectures on Detectors - Trigger and Data Acquisition (5/5) by Dr. Brian Petersen (CERN) from 11:00 to 12:00 at CERN ( Bldg. 222-R-001 - Filtration Plant )
Regularized Semiparametric Estimation for Ordinary Differential Equations.
Li, Yun; Zhu, Ji; Wang, Naisyin
2015-07-01
Ordinary differential equations (ODEs) are widely used in modeling dynamic systems and have ample applications in the fields of physics, engineering, economics and biological sciences. The ODE parameters often possess physiological meanings and can help scientists gain better understanding of the system. A key aim is thus to estimate these parameters well. Ideally, constant parameters are preferred due to their easy interpretation. In reality, however, constant parameters can be too restrictive, such that even after incorporating error terms, there could still be unknown sources of disturbance that lead to poor agreement between observed data and the estimated ODE system. In this paper, we address this issue and accommodate short-term interferences by allowing parameters to vary with time. We propose a new regularized estimation procedure for the time-varying parameters of an ODE system, so that these parameters can change with time during transitions but remain constant within stable stages. We found, through simulation studies, that the proposed method performs well and tends to have less variation in comparison to the non-regularized approach. On the theoretical front, we derive finite-sample estimation error bounds for the proposed method. Applications of the proposed method to modeling the hare-lynx relationship and the measles incidence dynamic in Ontario, Canada lead to satisfactory and meaningful results.
Lu, Yao; Chan, Heang-Ping; Wei, Jun; Hadjiiski, Lubomir; Zhou, Chuan
2012-03-01
Digital breast tomosynthesis (DBT) holds strong promise for improving the sensitivity of detecting subtle mass lesions. Detection of microcalcifications is more difficult because of high noise and subtle signals in the large DBT volume. It is important to enhance the contrast-to-noise ratio (CNR) of microcalcifications in DBT reconstruction. A major challenge of implementing microcalcification enhancement or noise regularization in DBT reconstruction is to preserve the image quality of masses, especially those with ill-defined margins and subtle spiculations. We are developing a new multiscale regularization (MSR) method for the simultaneous algebraic reconstruction technique (SART) to improve the CNR of microcalcifications without compromising the quality of masses. Each DBT slice is stratified into different frequency bands via wavelet decomposition, and the regularization method applies different degrees of regularization to different frequency bands to preserve features of interest and suppress noise. Regularization is constrained by a characteristic map to avoid smoothing subtle microcalcifications. The characteristic map is generated via image feature analysis to identify potential microcalcification locations in the DBT volume. The MSR method was compared to the non-convex total p-variation (TpV) method and SART with no regularization (NR) in terms of the CNR and the full width at half maximum of the line profiles intersecting calcifications and mass spiculations in DBT of human subjects. The results demonstrated that SART regularized by the MSR method was superior to the TpV method for subtle microcalcifications in terms of CNR enhancement. The MSR method preserved the quality of subtle spiculations better than the TpV method in comparison to NR.
Regularization of Instantaneous Frequency Attribute Computations
Yedlin, M. J.; Margrave, G. F.; Van Vorst, D. G.; Ben Horin, Y.
2014-12-01
We compare two different methods of computation of a temporally local frequency: 1) a stabilized instantaneous frequency using the theory of the analytic signal; 2) a temporally variant centroid (or dominant) frequency estimated from a time-frequency decomposition. The first method derives from Taner et al. (1979) as modified by Fomel (2007) and utilizes the derivative of the instantaneous phase of the analytic signal. The second method computes the power centroid (Cohen, 1995) of the time-frequency spectrum, obtained using either the Gabor or Stockwell transform. Common to both methods is the necessity of division by a diagonal matrix, which requires appropriate regularization. We modify Fomel's (2007) method by explicitly penalizing the roughness of the estimate. Following Farquharson and Oldenburg (2004), we employ both the L-curve and GCV methods to obtain the smoothest model that fits the data in the L2 norm. Using synthetic data, quarry blasts, earthquakes and the DPRK tests, our results suggest that the optimal method depends on the data. One of the main applications for this work is the discrimination between blast events and earthquakes. References: Fomel, Sergey. "Local seismic attributes." Geophysics 72.3 (2007): A29-A33. Cohen, Leon. "Time-Frequency Analysis: Theory and Applications." USA: Prentice Hall, 1995. Farquharson, Colin G., and Douglas W. Oldenburg. "A comparison of automatic techniques for estimating the regularization parameter in non-linear inverse problems." Geophysical Journal International 156.3 (2004): 411-425. Taner, M. Turhan, Fulton Koehler, and R. E. Sheriff. "Complex seismic trace analysis." Geophysics 44.6 (1979): 1041-1063.
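The first method above (instantaneous frequency as the phase derivative of the analytic signal) can be sketched in a few lines. This is a minimal NumPy illustration, not the authors' stabilized implementation; the FFT-based Hilbert transform and the test tone are standard textbook constructions.

```python
import numpy as np

def instantaneous_frequency(x, fs):
    """Temporally local frequency from the analytic signal.

    The analytic signal is built with an FFT-based Hilbert transform;
    the instantaneous frequency is the derivative of its unwrapped
    phase (Taner et al., 1979).  No roughness penalty is applied here.
    """
    n = len(x)
    X = np.fft.fft(x)
    h = np.zeros(n)                 # spectral mask: keep positive freqs
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    z = np.fft.ifft(X * h)          # analytic signal
    phase = np.unwrap(np.angle(z))
    return np.diff(phase) * fs / (2.0 * np.pi)

# A pure 5 Hz tone sampled at 100 Hz should give ~5 Hz everywhere.
fs = 100.0
t = np.arange(0, 2, 1 / fs)
x = np.sin(2 * np.pi * 5.0 * t)
f = instantaneous_frequency(x, fs)
```

For noisy data the phase derivative becomes unstable, which is exactly where the regularization discussed in the abstract enters.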
Discriminative Elastic-Net Regularized Linear Regression.
Zhang, Zheng; Lai, Zhihui; Xu, Yong; Shao, Ling; Wu, Jian; Xie, Guo-Sen
2017-03-01
In this paper, we aim at learning compact and discriminative linear regression models. Linear regression has been widely used in different problems. However, most of the existing linear regression methods exploit the conventional zero-one matrix as the regression targets, which greatly narrows the flexibility of the regression model. Another major limitation of these methods is that the learned projection matrix fails to precisely project the image features to the target space due to their weak discriminative capability. To this end, we present an elastic-net regularized linear regression (ENLR) framework, and develop two robust linear regression models which possess the following special characteristics. First, our methods exploit two particular strategies to enlarge the margins of different classes by relaxing the strict binary targets into a more feasible variable matrix. Second, a robust elastic-net regularization of singular values is introduced to enhance the compactness and effectiveness of the learned projection matrix. Third, the resulting optimization problem of ENLR has a closed-form solution in each iteration, which can be solved efficiently. Finally, rather than directly exploiting the projection matrix for recognition, our methods employ the transformed features as the new discriminate representations to make final image classification. Compared with the traditional linear regression model and some of its variants, our method is much more accurate in image classification. Extensive experiments conducted on publicly available data sets well demonstrate that the proposed framework can outperform the state-of-the-art methods. The MATLAB codes of our methods can be available at http://www.yongxu.org/lunwen.html.
Finite Deformations of Conformal Field Theories Using Analytically Regularized Connections
von Gussich, Alexander; Sundell, Per
1996-01-01
We study some natural connections on spaces of conformal field theories using an analytical regularization method. The connections are based on marginal conformal field theory deformations. We show that the analytical regularization preserves conformal invariance and leads to integrability of the marginal deformations. The connections are shown to be flat and to generate well-defined finite parallel transport. These finite parallel transports yield formulations of the deformed theories in the...
Estimation of the global regularity of a multifractional Brownian motion
Lebovits, Joachim; Podolskij, Mark
This paper presents a new estimator of the global regularity index of a multifractional Brownian motion. Our estimation method is based upon a ratio statistic, which compares the realized global quadratic variation of a multifractional Brownian motion at two different frequencies. We show that a logarithmic transformation of this statistic converges in probability to the minimum of the Hurst functional parameter, which is, under weak assumptions, identical to the global regularity index of the path.
Quantitative regularities in floodplain formation
Nevidimova, O.
2009-04-01
Modern methods of the theory of complex systems allow us to build mathematical models of complex systems where self-organizing processes are largely determined by nonlinear effects and feedback. However, there exist some factors that exert significant influence on the dynamics of geomorphosystems but can hardly be adequately expressed in the language of mathematical models. Conceptual modeling allows us to overcome this difficulty. It is based on the methods of synergetics, which, together with the theory of dynamic systems and classical geomorphology, enable us to display the dynamics of geomorphological systems. The most adequate concept for mathematical modeling of complex systems is that of model dynamics based on equilibrium. This concept rests on dynamic equilibrium, the tendency towards which is observed in the evolution of all geomorphosystems. As an objective law, it is revealed in the evolution of fluvial relief in general, and in river channel processes in particular, demonstrating the ability of these systems to self-organize. The channel process is expressed in the formation of river reaches, riffles, meanders and floodplain. As the floodplain is a surface periodically flooded during high waters, it naturally connects the river channel with slopes, being one of the boundary expressions of the water stream's activity. Floodplain dynamics is inseparable from channel dynamics. The floodplain is formed by simultaneous horizontal and vertical displacement of the river channel, that is, Y = Y(x, y), where x, y are the horizontal and vertical coordinates and Y is the floodplain height. When dy/dt = 0 (for a non-lowering river channel), the river, being displaced in the horizontal plane, leaves behind a low surface whose flooding during high waters (total duration of flooding) changes from a maximum at the initial moment t0 to zero at the moment tn. The total amount of material accumulated on the floodplain surface changes in a similar manner
Semisupervised Support Vector Machines With Tangent Space Intrinsic Manifold Regularization.
Sun, Shiliang; Xie, Xijiong
2016-09-01
Semisupervised learning has been an active research topic in machine learning and data mining. One main reason is that labeling examples is expensive and time-consuming, while there are large numbers of unlabeled examples available in many practical problems. So far, Laplacian regularization has been widely used in semisupervised learning. In this paper, we propose a new regularization method called tangent space intrinsic manifold regularization. It is intrinsic to data manifold and favors linear functions on the manifold. Fundamental elements involved in the formulation of the regularization are local tangent space representations, which are estimated by local principal component analysis, and the connections that relate adjacent tangent spaces. Simultaneously, we explore its application to semisupervised classification and propose two new learning algorithms called tangent space intrinsic manifold regularized support vector machines (TiSVMs) and tangent space intrinsic manifold regularized twin SVMs (TiTSVMs). They effectively integrate the tangent space intrinsic manifold regularization consideration. The optimization of TiSVMs can be solved by a standard quadratic programming, while the optimization of TiTSVMs can be solved by a pair of standard quadratic programmings. The experimental results of semisupervised classification problems show the effectiveness of the proposed semisupervised learning algorithms.
Keller, Kai Johannes
2010-04-15
The present work contains a consistent formulation of the methods of dimensional regularization (DimReg) and minimal subtraction (MS) in Minkowski position space. The methods are implemented into the framework of perturbative Algebraic Quantum Field Theory (pAQFT). The developed methods are used to solve the Epstein-Glaser recursion for the construction of time-ordered products in all orders of causal perturbation theory. A solution is given in terms of a forest formula in the sense of Zimmermann. A relation to the alternative approach to renormalization theory using Hopf algebras is established. (orig.)
陈海亮; 雷琳; 周石琳
2012-01-01
Ship formations are an important group target of ships that cruise and fight at sea. Focusing on the complex problem of changing formation composition and order, this paper summarizes the spatial regularity present in ship formations, establishes fuzzy reasoning rules based on it to quantify the spatial regularity, and then extracts formations with high spatial regularity using spectral graph partitioning. Simulated and real data show that the algorithm can extract group targets that have a formation structure in the presence of disturbances and moderate changes in the formation's spatial relationships.
Robust integral stabilization of regular linear systems
XU Chengzheng; FENG Dexing
2004-01-01
We consider regular systems with control and observation. We prove a necessary and sufficient condition for an exponentially stable regular system to admit an integral stabilizing controller. We also propose some robust integral controllers when they exist.
Generalization performance of regularized neural network models
Larsen, Jan; Hansen, Lars Kai
1994-01-01
Architecture optimization is a fundamental problem of neural network modeling. The optimal architecture is defined as the one which minimizes the generalization error. This paper addresses estimation of the generalization performance of regularized, complete neural network models. Regularization...
Weakly and Strongly Regular Near-rings
N.Argac; N.J.Groenewald
2005-01-01
In this paper, we prove some basic properties of left weakly regular near-rings. We give an affirmative answer to the question whether a left weakly regular near-ring with left unity and satisfying the IFP is also right weakly regular. In the last section, we use, among others, left 0-prime and left completely prime ideals to characterize strongly regular near-rings.
Image Super-Resolution via Adaptive Regularization and Sparse Representation.
Cao, Feilong; Cai, Miaomiao; Tan, Yuanpeng; Zhao, Jianwei
2016-07-01
Previous studies have shown that image patches can be well represented as a sparse linear combination of elements from an appropriately selected over-complete dictionary. Recently, single-image super-resolution (SISR) via sparse representation using blurred and downsampled low-resolution images has attracted increasing interest, where the aim is to obtain the coefficients for sparse representation by solving an l0 or l1 norm optimization problem. The l0 optimization is a nonconvex and NP-hard problem, while the l1 optimization usually requires many more measurements and presents new challenges even when the image is the usual size, so we propose a new approach for SISR recovery based on regularized nonconvex optimization. The proposed approach is potentially a powerful method for recovering SISR via sparse representations, and it can yield a sparser solution than the l1 regularization method. We also consider the best choice of lp regularization for all p in (0, 1), where we propose a scheme that adaptively selects the norm value for each image patch. In addition, we provide a method for estimating the best value of the regularization parameter λ adaptively, and we discuss an alternating iteration method for selecting p and λ. We perform experiments, which demonstrate that the proposed regularized nonconvex optimization method can outperform the convex optimization method and generate higher quality images.
Natural frequency of regular basins
Tjandra, Sugih S.; Pudjaprasetya, S. R.
2014-03-01
Similar to the vibration of a guitar string or an elastic membrane, water waves in an enclosed basin undergo standing oscillatory waves, also known as seiches. The resonant (eigen) periods of seiches are determined by water depth and the geometry of the basin. For regular basins, explicit formulas are available. Resonance occurs when the dominant frequency of the external force matches an eigenfrequency of the basin. In this paper, we implement a conservative finite volume scheme for the 2D shallow water equations to simulate resonance in closed basins. Further, we would like to use this scheme, utilizing energy spectra of the recorded signal, to extract resonant periods of arbitrary basins. Here we first test the procedure by computing the resonant periods of a square closed basin. The numerical resonant periods that we obtain are comparable with those from analytical formulas.
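For a closed rectangular basin of uniform depth, the explicit formula mentioned above is Merian's classical result T_n = 2L / (n * sqrt(g*h)). A short sketch (the basin dimensions are illustrative, not taken from the paper):

```python
import math

def seiche_periods(length_m, depth_m, n_modes=3, g=9.81):
    """Resonant (eigen) periods of a closed rectangular basin of
    uniform depth via Merian's formula T_n = 2L / (n * sqrt(g*h)).
    This is the closed-form benchmark a numerical shallow-water
    scheme can be validated against."""
    c = math.sqrt(g * depth_m)          # shallow-water wave speed
    return [2.0 * length_m / (n * c) for n in range(1, n_modes + 1)]

# A 1 km long, 10 m deep basin: fundamental period ~202 s.
periods = seiche_periods(length_m=1000.0, depth_m=10.0)
```

Note that the n-th mode period is exactly 1/n of the fundamental, which is what the peaks of the energy spectrum of a simulated signal should reproduce.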
REGULARITY FOR CERTAIN QUASILINEAR ELLIPTIC SYSTEMS OF DIVERGENCE STRUCTURE
周树清; 冉启康
2001-01-01
The regularity of the gradient of Hölder continuous solutions of quasilinear elliptic systems of the form -D_j(a_{ij}(x, u, Du) D_i u^k) = -D_i f_i^k + g^k is investigated. Partial regularity and ε-regularity are shown to hold under the structural assumption -D_j(a_{ij}(x, u, Du)) = h_i ∈ L^∞.
Technology Corner: A Regular Expression Training App
Nick Flor
2012-12-01
Regular expressions enable digital forensic analysts to find information in files. The best way for an analyst to become proficient in writing regular expressions is to practice. This paper presents the code for an app that allows an analyst to practice writing regular expressions.
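A minimal drill in the spirit of the app described above: given a forensic artifact string, write a pattern that extracts every IPv4-looking token. The sample log line is invented for illustration.

```python
import re

# Practice target: pull IPv4-shaped tokens out of a log fragment.
log = "conn from 192.168.1.7 to 10.0.0.255 port 443"

# Four dot-separated groups of 1-3 digits, bounded by word boundaries
# (a deliberately loose pattern -- it does not reject octets > 255).
ipv4 = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}\b")
hits = ipv4.findall(log)
```

The loose octet handling is the kind of subtlety such practice is meant to surface: tightening the pattern to reject values above 255 is a natural follow-up exercise.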
Counting Rooted Nearly 2-regular Planar Maps
郝荣霞; 蔡俊亮
2004-01-01
The number of rooted nearly 2-regular maps with the valency of the root vertex, the number of non-rooted vertices and the valency of the root face as three parameters is obtained. Furthermore, explicit expressions for the special cases, including loopless nearly 2-regular maps and simple nearly 2-regular maps, are derived in terms of the above three parameters.
On the Construction of Regular Orthocryptogroups
Xiang Zhi KONG
2002-01-01
The aim of this paper is to study regular orthocryptogroups. After obtaining some characterizations of such semigroups, we establish the construction theorem of regular orthocryptogroups. As an application, we give the construction theorem of right quasi-normal orthocryptogroups and study homomorphisms between two regular orthocryptogroups.
REGULAR RELATIONS AND MONOTONE NORMAL ORDERED SPACES
XU XIAOQUAN; LIU YINGMING
2004-01-01
In this paper the classical theorem of Zareckii about regular relations is generalized and an intrinsic characterization of regularity is obtained. Based on the generalized Zareckii theorem and the intrinsic characterization of regularity, the authors give a characterization of monotone normality of ordered spaces. A new proof of the Urysohn-Nachbin lemma is presented which is quite different from the classical one.
Regular Pentagons and the Fibonacci Sequence.
French, Doug
1989-01-01
Illustrates how to draw a regular pentagon. Shows the sequence of a succession of regular pentagons formed by extending the sides. Calculates the general formula of the Lucas and Fibonacci sequences. Presents a regular icosahedron as an example of the golden ratio. (YP)
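The Fibonacci and Lucas sequences mentioned above share the recurrence x[k] = x[k-1] + x[k-2] and differ only in their seeds; the ratio of consecutive terms of either converges to the golden ratio φ = (1 + √5)/2, the same constant that governs the diagonal-to-side ratio of a regular pentagon. A quick numerical check:

```python
import math

def sequence(a, b, n):
    """First n terms of the recurrence x[k] = x[k-1] + x[k-2]."""
    terms = [a, b]
    while len(terms) < n:
        terms.append(terms[-1] + terms[-2])
    return terms

fib = sequence(1, 1, 20)        # Fibonacci: 1, 1, 2, 3, 5, ...
lucas = sequence(2, 1, 20)      # Lucas:     2, 1, 3, 4, 7, ...
phi = (1 + math.sqrt(5)) / 2    # golden ratio, ~1.6180339887
ratio = fib[-1] / fib[-2]       # converges to phi
```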
ON A REGULARIZATION OF INDEX 2 DIFFERENTIAL-ALGEBRAIC EQUATIONS WITH PROPERLY STATED LEADING TERM
Liu Hong; Song Yongzhong
2011-01-01
In this article, linear regular index 2 DAEs A(t)[D(t)x(t)]' + B(t)x(t) = q(t) are considered. Using a decoupling technique, the initial condition and boundary condition are properly formulated. Regular index 1 DAEs are obtained by a regularization method. We study the behavior of the solution of the regularized system via asymptotic expansions. The error analysis between the solutions of the DAEs and the regularized system is given.
Estimating signal loss in regularized GRACE gravity field solutions
Swenson, S. C.; Wahr, J. M.
2011-05-01
Gravity field solutions produced using data from the Gravity Recovery and Climate Experiment (GRACE) satellite mission are subject to errors that increase as a function of increasing spatial resolution. Two commonly used techniques to improve the signal-to-noise ratio in the gravity field solutions are post-processing, via spectral filters, and regularization, which occurs within the least-squares inversion process used to create the solutions. One advantage of post-processing methods is the ability to easily estimate the signal loss resulting from the application of the spectral filter by applying the filter to synthetic gravity field coefficients derived from models of mass variation. This is a critical step in the construction of an accurate error budget. Estimating the amount of signal loss due to regularization, however, requires the execution of the full gravity field determination process to create synthetic instrument data; this leads to a significant cost in computation and expertise relative to post-processing techniques, and inhibits the rapid development of optimal regularization weighting schemes. Thus, while a number of studies have quantified the effects of spectral filtering, signal modification in regularized GRACE gravity field solutions has not yet been estimated. In this study, we examine the effect of one regularization method. First, we demonstrate that regularization can in fact be performed as a post-processing step if the solution covariance matrix is available. Regularization then is applied as a post-processing step to unconstrained solutions from the Center for Space Research (CSR), using weights reported by the Centre National d'Etudes Spatiales/Groupe de Recherches de geodesie spatiale (CNES/GRGS). After regularization, the power spectra of the CSR solutions agree well with those of the CNES/GRGS solutions. Finally, regularization is performed on synthetic gravity field solutions derived from a land surface model, revealing that in
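The claim that regularization can be performed as a post-processing step when the solution covariance is available has a simple linear-algebra core. The sketch below uses plain identity-matrix (Tikhonov) damping on a toy least-squares problem, not the GRACE weighting scheme: if x0 is the unconstrained solution with unscaled covariance C = (AᵀA)⁻¹, the regularized solution (AᵀA + λI)⁻¹Aᵀy is recoverable from x0 and C alone, without re-running the inversion.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((30, 5))    # toy design matrix
y = rng.standard_normal(30)         # toy observations
lam = 0.5                           # regularization weight

# Unconstrained solution and its (unscaled) covariance.
x0, *_ = np.linalg.lstsq(A, y, rcond=None)
C = np.linalg.inv(A.T @ A)

# Regularization inside the inversion ...
direct = np.linalg.solve(A.T @ A + lam * np.eye(5), A.T @ y)

# ... versus as a post-processing step on (x0, C):
# (C^-1 + lam*I)^-1 C^-1 x0 reproduces the same vector, since
# C^-1 x0 = A^T y.
Cinv = np.linalg.inv(C)
post = np.linalg.solve(Cinv + lam * np.eye(5), Cinv @ x0)
```

The two vectors agree to numerical precision, which is the mechanism that lets the study apply regularization to published unconstrained solutions after the fact.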
Automatic Constraint Detection for 2D Layout Regularization
Jiang, Haiyong
2015-09-18
In this paper, we address the problem of constraint detection for layout regularization. As layout we consider a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important for digitizing plans or images, such as floor plans and facade images, and for the improvement of user created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate the layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm to automatically detect constraints. In our results, we evaluate the proposed framework on a variety of input layouts from different applications, which demonstrates our method has superior performance to the state of the art.
Kaluza-Klein thresholds and regularization (in)dependence
Kubo, Jisuke; Terao, Haruhiko; Zoupanos, George
2000-05-15
We present a method to control the regularization scheme dependence in the running of couplings in Kaluza-Klein theories. Specifically, we consider the scalar theory in five dimensions, assuming that one dimension is compactified, and we study various regularization schemes in order to analyze concretely the regularization scheme dependence of the Kaluza-Klein threshold effects. We find that at one-loop order, although the β-functions are different for the different schemes, the net difference in the running of the coupling among the different schemes is very small for the entire range of energies. Our results have been extended to include more than one radius, and the gauge coupling unification is re-examined. Strings are also used as a regulator. We obtain a particular regularization scheme of the effective field theory which can accurately describe the string Kaluza-Klein threshold effects.
Lipschitz regularity results for nonlinear strictly elliptic equations and applications
Ley, Olivier; Nguyen, Vinh Duc
2017-10-01
Most Lipschitz regularity results for nonlinear strictly elliptic equations are obtained for a suitable growth power of the nonlinearity with respect to the gradient variable (subquadratic, for instance). For equations with superquadratic growth power in the gradient, one usually uses weak Bernstein-type arguments, which require regularity and/or convexity-type assumptions on the gradient nonlinearity. In this article, we obtain new Lipschitz regularity results for a large class of nonlinear strictly elliptic equations with possibly arbitrary growth power of the Hamiltonian with respect to the gradient variable, using ideas from Ishii and Lions' method. We use these bounds to solve an ergodic problem and to study the regularity and the large time behavior of the solution of the evolution equation.
Automatic Constraint Detection for 2D Layout Regularization.
Jiang, Haiyong; Nan, Liangliang; Yan, Dong-Ming; Dong, Weiming; Zhang, Xiaopeng; Wonka, Peter
2016-08-01
In this paper, we address the problem of constraint detection for layout regularization. The layout we consider is a set of two-dimensional elements where each element is represented by its bounding box. Layout regularization is important in digitizing plans or images, such as floor plans and facade images, and in the improvement of user-created contents, such as architectural drawings and slide layouts. To regularize a layout, we aim to improve the input by detecting and subsequently enforcing alignment, size, and distance constraints between layout elements. Similar to previous work, we formulate layout regularization as a quadratic programming problem. In addition, we propose a novel optimization algorithm that automatically detects constraints. We evaluate the proposed framework using a variety of input layouts from different applications. Our results demonstrate that our method has superior performance to the state of the art.
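A toy version of the detect-then-enforce pipeline described above, simplified to one dimension: cluster nearly equal left edges (the "detection" step) and snap each cluster to its mean, which is the closed-form solution of the QP min Σ(x_i − obs_i)² subject to equality within a cluster. The tolerance and coordinates are invented for illustration; the paper's formulation handles alignment, size, and distance constraints jointly.

```python
def regularize_left_edges(xs, tol=2.0):
    """Cluster nearly-equal left edges (within `tol`, after sorting)
    and snap each cluster to its mean -- the least-squares answer to
    the equality-constrained QP for this 1-D special case."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    clusters, current = [], [order[0]]
    for i in order[1:]:
        if xs[i] - xs[current[-1]] <= tol:      # same alignment group
            current.append(i)
        else:
            clusters.append(current)
            current = [i]
    clusters.append(current)
    out = list(xs)
    for c in clusters:
        mean = sum(xs[i] for i in c) / len(c)
        for i in c:
            out[i] = mean                        # enforce the constraint
    return out

# Three boxes nearly left-aligned near x=10-11, two near x=40.
snapped = regularize_left_edges([10.0, 10.8, 11.4, 40.2, 39.9])
```

Real layouts need the joint QP because constraints interact (snapping an edge moves a width), which is why the paper solves all detected constraints simultaneously.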
An adaptive regularization parameter choice strategy for multispectral bioluminescence tomography
Feng Jinchao; Qin Chenghu; Jia Kebin; Han Dong; Liu Kai; Zhu Shouping; Yang Xin; Tian Jie [Medical Image Processing Group, Institute of Automation, Chinese Academy of Sciences, P. O. Box 2728, Beijing 100190 (China); College of Electronic Information and Control Engineering, Beijing University of Technology, Beijing 100124 (China); School of Life Sciences and Technology, Xidian University, Xi'an 710071 (China)]
2011-11-15
Purpose: Bioluminescence tomography (BLT) provides an effective tool for monitoring physiological and pathological activities in vivo. However, the measured data in bioluminescence imaging are corrupted by noise. Therefore, regularization methods are commonly used to find a regularized solution. Nevertheless, for the quality of the reconstructed bioluminescent source obtained by regularization methods, the choice of the regularization parameters is crucial. To date, the selection of regularization parameters remains challenging. With regard to the above problems, the authors propose a BLT reconstruction algorithm with an adaptive parameter choice rule. Methods: The proposed reconstruction algorithm uses a diffusion equation for modeling the bioluminescent photon transport. The diffusion equation is solved with a finite element method. Computed tomography (CT) images provide anatomical information regarding the geometry of the small animal and its internal organs. To reduce the ill-posedness of BLT, spectral information and the optimal permissible source region are employed. Then, the relationship between the unknown source distribution and multiview and multispectral boundary measurements is established based on the finite element method and the optimal permissible source region. Since the measured data are noisy, the BLT reconstruction is formulated as an l2 data-fidelity term plus a general regularization term. When choosing the regularization parameters for BLT, an efficient model function approach is proposed, which does not require knowledge of the noise level. This approach only requires the computation of the residual and regularized solution norms. With this knowledge, we construct the model function to approximate the objective function, and the regularization parameter is updated iteratively. Results: First, the micro-CT based mouse phantom was used for simulation verification. Simulation experiments were used to illustrate why multispectral data were used
Regularization destriping of remote sensing imagery
Basnayake, Ranil; Bollt, Erik; Tufillaro, Nicholas; Sun, Jie; Gierach, Michelle
2017-07-01
We illustrate the utility of variational destriping for ocean color images from both multispectral and hyperspectral sensors. In particular, we examine data from a filter spectrometer, the Visible Infrared Imaging Radiometer Suite (VIIRS) on the Suomi National Polar Partnership (NPP) orbiter, and an airborne grating spectrometer, the Jet Propulsion Laboratory's (JPL) hyperspectral Portable Remote Imaging Spectrometer (PRISM) sensor. We solve the destriping problem using a variational regularization method by giving weights spatially to preserve the other features of the image during the destriping process. The target functional penalizes the neighborhood of stripes (strictly, directionally uniform features) while promoting data fidelity, and the functional is minimized by solving the Euler-Lagrange equations with an explicit finite-difference scheme. We show the accuracy of our method on a benchmark data set which represents the sea surface temperature off the coast of Oregon, USA. Technical details, such as how to impose continuity across data gaps using inpainting, are also described.
Fast multislice fluorescence molecular tomography using sparsity-inducing regularization.
Hejazi, Sedigheh Marjaneh; Sarkar, Saeed; Darezereshki, Ziba
2016-02-01
Fluorescence molecular tomography (FMT) is a rapidly growing imaging method that facilitates the recovery of small fluorescent targets within biological tissue. The major challenge facing the FMT reconstruction method is the ill-posed nature of the inverse problem. In order to overcome this problem, the acquisition of large FMT datasets and the utilization of a fast FMT reconstruction algorithm with sparsity regularization have been suggested recently. Therefore, the use of a joint L1/total-variation (TV) regularization as a means of solving the ill-posed FMT inverse problem is proposed. A comparative quantified analysis of regularization methods based on the L1-norm and TV is performed using simulated datasets, and the results show that the fast composite splitting algorithm regularization method can ensure the accuracy and robustness of the FMT reconstruction. The feasibility of the proposed method is evaluated in an in vivo scenario for the subcutaneous implantation of a fluorescent-dye-filled capillary tube in a mouse, and also using hybrid FMT and x-ray computed tomography data. The results show that the proposed regularization overcomes the difficulties created by the ill-posed inverse problem.
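The sparsity-inducing half of such a joint L1/TV functional is the classic l1-regularized least-squares problem min_x ½‖Ax − y‖² + λ‖x‖₁. A minimal ISTA (iterative soft-thresholding) sketch on synthetic data follows; the TV term and the composite-splitting machinery of the paper are omitted, and the matrix sizes and λ are invented for illustration.

```python
import numpy as np

def ista(A, y, lam, steps=500):
    """Iterative soft-thresholding for 0.5*||Ax - y||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz const of gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - y)              # gradient of the smooth part
        z = x - g / L                      # gradient step
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # shrink
    return x

# Underdetermined system with a 3-sparse ground truth.
rng = np.random.default_rng(1)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[3, 17, 42]] = [1.5, -2.0, 1.0]
y = A @ x_true
x_hat = ista(A, y, lam=0.05)
```

Even with half as many measurements as unknowns, the l1 penalty drives most coefficients to exactly zero, which is the behavior the FMT reconstruction exploits for small fluorescent targets.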
Symmetry-Preserving Loop Regularization and Renormalization of QFTs
Wu, Yue-Liang
A new symmetry-preserving loop regularization method proposed in Ref. 1 is further investigated. It is found that its prescription can be understood by introducing a regulating distribution function into the proper-time formalism of irreducible loop integrals. The method shares many interesting features with the momentum cutoff, Pauli-Villars, and dimensional regularization schemes. Loop regularization is also simple and general for practical calculations of higher-loop graphs and can be applied to both underlying and effective quantum field theories, including gauge, chiral, supersymmetric, and gravitational ones, since the new method modifies neither the Lagrangian formalism nor the spacetime dimension of the original theory. The appearance of a characteristic energy scale Mc and a sliding energy scale μs offers a systematic way to study the renormalization-group evolution of gauge theories in the spirit of Wilson-Kadanoff and to explore important effects of higher-dimensional interaction terms in the infrared regime.
Adiabatic Regularization for Gauge Field and the Conformal Anomaly
Chu, Chong-Sun
2016-01-01
We construct and provide the adiabatic regularization method for a U(1) gauge field in a conformally flat spacetime by quantizing, in the canonical formalism, the gauge-fixed U(1) theory with mass terms for the gauge fields and the ghost fields. We show that the adiabatic expansion for the mode functions and the adiabatic vacuum can be defined, in a way similar to that for scalar fields, using WKB-type solutions. As an application of the adiabatic method, we compute the trace of the energy-momentum tensor and reproduce the known result for the conformal anomaly obtained by other regularization methods. The availability of the adiabatic expansion scheme for the gauge field allows one to study the renormalization of the de Sitter space maximal superconformal Yang-Mills theory using the adiabatic regularization method.
Multiple graph regularized nonnegative matrix factorization
Wang, Jim Jing-Yan
2013-10-01
Non-negative matrix factorization (NMF) has been widely used as a component-based data representation method. To overcome the disadvantage of NMF in failing to consider the manifold structure of a data set, graph regularized NMF (GrNMF) was proposed by Cai et al., who constructed an affinity graph and searched for a matrix factorization that respects the graph structure. Selecting a graph model and its corresponding parameters is critical for this strategy; this selection is usually carried out by cross-validation or discrete grid search, which are time consuming and prone to overfitting. In this paper, we propose a variant of GrNMF, called MultiGrNMF, in which the intrinsic manifold is approximated by a linear combination of several graphs with different models and parameters, inspired by ensemble manifold regularization. The factorization matrices and the linear combination coefficients of the graphs are determined simultaneously within a unified objective function and alternately optimized in an iterative algorithm, resulting in a novel data representation algorithm. Extensive experiments on a protein subcellular localization task and an Alzheimer's disease diagnosis task demonstrate the effectiveness of the proposed algorithm.
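A single-graph version of this idea (the GrNMF of Cai et al. that MultiGrNMF extends) can be sketched with the usual multiplicative updates; the toy data and chain graph below are invented for illustration:

```python
import numpy as np

def gnmf(X, W, k=3, lam=1.0, n_iter=200, eps=1e-9):
    """Single-graph graph-regularized NMF: minimize
        ||X - U V^T||_F^2 + lam * tr(V^T L V),  L = D - W,
    with the standard multiplicative updates (nonnegativity is preserved).
    A minimal sketch of GrNMF, not the MultiGrNMF algorithm itself."""
    D = np.diag(W.sum(axis=1))                 # degree matrix of the graph
    rng = np.random.default_rng(0)
    U = rng.random((X.shape[0], k))
    V = rng.random((X.shape[1], k))
    for _ in range(n_iter):
        U *= (X @ V) / (U @ (V.T @ V) + eps)
        V *= (X.T @ U + lam * (W @ V)) / (V @ (U.T @ U) + lam * (D @ V) + eps)
    return U, V

# toy nonnegative data and a chain graph over the 10 samples (invented)
X = np.random.default_rng(2).random((20, 10))
W = np.zeros((10, 10))
for i in range(9):
    W[i, i + 1] = W[i + 1, i] = 1.0
U, V = gnmf(X, W, k=3)
recon = U @ V.T
```

The graph term pulls the rows of V for adjacent samples toward each other, so the learned representation respects the affinity graph; MultiGrNMF additionally learns weights over several candidate graphs.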
Supporting Regularized Logistic Regression Privately and Efficiently.
Li, Wenfa; Liu, Hongzhe; Yang, Peng; Xie, Wei
2016-01-01
As one of the most popular statistical and machine learning models, logistic regression with regularization has found wide adoption in biomedicine, the social sciences, information technology, and beyond. These domains often involve data on human subjects that are contingent upon strict privacy regulations. Concerns over data privacy make it increasingly difficult to coordinate and conduct large-scale collaborative studies, which typically rely on cross-institution data sharing and joint analysis. Our work focuses on safeguarding regularized logistic regression, a widely used statistical model that has nonetheless not been investigated from a data security and privacy perspective. We consider a common use scenario of multi-institution collaborative studies, such as research consortia or networks as widely seen in genetics, epidemiology, and the social sciences. To make our privacy-enhancing solution practical, we demonstrate a non-conventional and computationally efficient method leveraging distributed computing and strong cryptography to provide comprehensive protection over individual-level and summary data. Extensive empirical evaluations on several studies validate the privacy guarantee, efficiency, and scalability of our proposal. We also discuss the practical implications of our solution for large-scale studies and applications in various disciplines, including genetic and biomedical studies, smart grid, and network analysis. PMID: 27271738
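The underlying statistical model is plain L2-regularized logistic regression; a minimal gradient-descent sketch follows. The paper's actual contribution, secure multi-institution training of this model, is not reproduced here, and the synthetic data are invented:

```python
import numpy as np

def train_logreg_l2(X, y, lam=0.1, lr=0.1, n_iter=500):
    """Plain L2-regularized logistic regression by gradient descent
    (labels y in {0, 1}). This is just the statistical model discussed
    above; the secure multi-party protocol is out of scope."""
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-(X @ w)))        # predicted probabilities
        grad = X.T @ (p - y) / n + lam * w        # logistic loss + ridge term
        w -= lr * grad
    return w

# synthetic, nearly linearly separable data (invented)
rng = np.random.default_rng(3)
X = rng.standard_normal((200, 5))
w_true = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = (X @ w_true + 0.1 * rng.standard_normal(200) > 0).astype(float)
w = train_logreg_l2(X, y)
acc = float(np.mean(((X @ w) > 0) == (y > 0.5)))
```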
Regular Black Holes with Cosmological Constant
MO Wen-Juan; CAI Rong-Gen; SU Ru-Keng
2006-01-01
We present a class of regular black holes with cosmological constant Λ in nonlinear electrodynamics. Instead of the usual singularity behind the black hole horizon, all fields and curvature invariants are regular everywhere for these regular black holes. Through a gauge-invariant approach, the linear dynamical stability of the regular black hole is studied. In the odd-parity sector, we find that the Λ term does not appear in the master equations of perturbations, which shows that the regular black hole is stable under odd-parity perturbations. For the even-parity sector, the master equations are more complicated than in the case without a cosmological constant, and we obtain sufficient conditions for stability of the regular black hole. We also investigate the thermodynamic properties of the regular black hole and find that these quantities do not satisfy the differential form of the first law of black hole thermodynamics; the reason for this violation is revealed.
HE Chundong; ZHANG Yongbin; BI Chuanxing; CHEN Xinzhao
2012-01-01
The regularization technique for stabilizing reconstruction in nearfield acoustic holography (NAH) was investigated on the basis of the equivalent source method. To obtain a stronger regularization effect, a regularization method based on the idea of partial optimization was proposed, which inherits the advantages of Tikhonov regularization and of another regularization method, truncated singular value decomposition (TSVD). Numerical simulation shows that the proposed method is more stable than Tikhonov regularization and more precise than TSVD. Finally, the validity and feasibility of the proposed method are demonstrated by an experiment carried out in a semi-anechoic room with two speakers.
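The two building blocks the proposed method combines are standard and easy to compare on a synthetic ill-posed system: Tikhonov applies smooth filter factors s/(s² + α²) to the singular values, while TSVD applies a hard cutoff. The sketch below uses an invented singular spectrum rather than an actual NAH transfer matrix:

```python
import numpy as np

def tikhonov(A, b, alpha):
    """Tikhonov regularization via filtered SVD: smooth filter factors
    s/(s^2 + alpha^2) damp the small singular values gradually."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T @ ((s / (s ** 2 + alpha ** 2)) * (U.T @ b))

def tsvd(A, b, k):
    """Truncated SVD: hard cutoff keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt.T[:, :k] @ ((U[:, :k].T @ b) / s[:k])

# synthetic ill-posed problem with a known, invented singular spectrum
rng = np.random.default_rng(4)
Q1, _ = np.linalg.qr(rng.standard_normal((12, 12)))
Q2, _ = np.linalg.qr(rng.standard_normal((12, 12)))
s = np.logspace(0, -10, 12)
A = Q1 @ np.diag(s) @ Q2.T
x_true = Q2[:, 0] + 0.5 * Q2[:, 1]       # lives in the well-determined subspace
b = A @ x_true + 1e-6 * rng.standard_normal(12)
err_tik = np.linalg.norm(tikhonov(A, b, alpha=1e-4) - x_true)
err_tsvd = np.linalg.norm(tsvd(A, b, k=2) - x_true)
err_naive = np.linalg.norm(np.linalg.solve(A, b) - x_true)
```

With a spectrum spanning ten orders of magnitude, the naive solve amplifies the 1e-6 noise enormously, while both regularized solutions stay close to x_true; choosing α (or k) is the usual stability-resolution trade-off the abstract's hybrid method addresses.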
Regularized Adaptive Notch Filters for Acoustic Howling Suppression
Gil-Cacho, Pepe; van Waterschoot, Toon; Moonen, Marc;
2009-01-01
In this paper, a method for the suppression of acoustic howling is developed, based on adaptive notch filters (ANF) with regularization (RANF). The method features three RANFs working in parallel to achieve frequency tracking, howling detection and suppression. The ANF-based approach to howling...
Variational regularization of 3D data: experiments with MATLAB
Montegranario, Hebert
2014-01-01
Variational Regularization of 3D Data provides an introduction to variational methods for data modelling and its application in computer vision. In this book, the authors identify interpolation as an inverse problem that can be solved by Tikhonov regularization. The proposed solutions are generalizations of one-dimensional splines, applicable to n-dimensional data and the central idea is that these splines can be obtained by regularization theory using a trade-off between the fidelity of the data and smoothness properties.As a foundation, the authors present a comprehensive guide to the necessary fundamentals of functional analysis and variational calculus, as well as splines. The implementation and numerical experiments are illustrated using MATLAB®. The book also includes the necessary theoretical background for approximation methods and some details of the computer implementation of the algorithms. A working knowledge of multivariable calculus and basic vector and matrix methods should serve as an adequat...
Zhou, Guohua; Xiao, Changhan; Liu, Daming; Liu, Shengdao
2012-01-01
Due to the complexity and unpredictability of the magnetization history of a ship's ferromagnetic material, modeling the remanent magnetic field of ships has always been a technical challenge in ship magnetic silencing. A new reconstruction method for the remanent magnetic field of a ship was proposed, based on an integral method and the Tikhonov regularization method. First, the magnetic field below a ship was measured, and the magnetic field induced by the geomagnetic field at the measuring points was calculated. Then, a remanent magnetic field inverse model was built from the measured magnetic field data and the calculated induced magnetic field. The Tikhonov regularization method was adopted to solve this inverse model and thereby mitigate the influence of its ill-conditioning. A mockup experiment was designed to verify the proposed method. The results show that the calculation accuracy is satisfactory and that the remanent magnetic field can be reconstructed efficiently by the proposed method.
Ideal regularization for learning kernels from labels.
Pan, Binbin; Lai, Jianhuang; Shen, Lixin
2014-08-01
In this paper, we propose a new form of regularization that is able to utilize the label information of a data set for learning kernels. The proposed regularization, referred to as ideal regularization, is a linear function of the kernel matrix to be learned. The ideal regularization allows us to develop efficient algorithms to exploit labels. Three applications of the ideal regularization are considered. Firstly, we use the ideal regularization to incorporate the labels into a standard kernel, making the resulting kernel more appropriate for learning tasks. Next, we employ the ideal regularization to learn a data-dependent kernel matrix from an initial kernel matrix (which contains prior similarity information, geometric structures, and labels of the data). Finally, we incorporate the ideal regularization to some state-of-the-art kernel learning problems. With this regularization, these learning problems can be formulated as simpler ones which permit more efficient solvers. Empirical results show that the ideal regularization exploits the labels effectively and efficiently.
Regularity extraction from non-adjacent sounds
Alexandra Bendixen
2012-05-01
The regular behavior of sound sources helps us to make sense of the auditory environment. Regular patterns may, for instance, convey information on the identity of a sound source (such as the acoustic signature of a train moving on the rails). Yet typically, this signature overlaps in time with signals emitted from other sound sources. It is generally assumed that auditory regularity extraction cannot operate upon this mixture of signals because it only finds regularities between adjacent sounds. In this view, the auditory environment would be grouped into separate entities by means of readily available acoustic cues such as separation in frequency and location. Regularity extraction processes would then operate upon the resulting groups. Our new experimental evidence challenges this view. We presented two interleaved sound sequences which overlapped in frequency range and shared all acoustic parameters. The sequences differed only in their underlying regular patterns. We inserted deviants into one of the sequences to probe whether the regularity was extracted. In the first experiment, we found that these deviants elicited the mismatch negativity (MMN) component; thus the auditory system was able to find the regularity between the non-adjacent sounds. Regularity extraction was not influenced by sequence cohesiveness as manipulated by the relative duration of tones and silent inter-tone intervals. In the second experiment, we showed that a regularity connecting non-adjacent sounds was discovered only when the intervening sequence also contained a regular pattern, but not when the intervening sounds were randomly varying. This suggests that separate regular patterns are available to the auditory system as a cue for identifying signals coming from distinct sound sources. Thus auditory regularity extraction is not necessarily confined to a processing stage after initial sound grouping, but may precede grouping when other acoustic cues are unavailable.
Manifold Regularized Experimental Design for Active Learning.
Zhang, Lining; Shum, Hubert P H; Shao, Ling
2016-12-02
Various machine learning and data mining tasks in classification require abundant data samples to be labeled for training. Conventional active learning methods aim at labeling the most informative samples to alleviate the labor of the user. Many previous studies in active learning select one sample after another in a greedy manner. However, this is not very effective because the classification model has to be retrained for each newly labeled sample. Moreover, many popular active learning approaches utilize the most uncertain samples by leveraging the classification hyperplane of the classifier, which is not appropriate since the classification hyperplane is inaccurate when the training set is small. The problem of insufficient training data in real-world systems limits the potential applications of these approaches. This paper presents a novel active learning method called manifold regularized experimental design (MRED), which can label multiple informative samples at one time for training. In addition, MRED gives an explicit geometric explanation of the samples selected to be labeled by the user. Different from existing active learning methods, our method avoids the intrinsic problems caused by insufficiently labeled samples in real-world applications. Various experiments on synthetic datasets, the Yale face database and the Corel image database have been carried out to show how MRED outperforms existing methods.
Invariant Regularization of Supersymmetric Chiral Gauge Theory
Hayashi, T; Okuyama, K; Suzuki, H; Hayashi, Takuya; Ohshima, Yoshihisa; Okuyama, Kiyoshi; Suzuki, Hiroshi
1998-01-01
We formulate a manifestly supersymmetric gauge-covariant regularization of supersymmetric chiral gauge theories. In our scheme, the effective action in the superfield background-field method above one loop is always supersymmetric and gauge invariant. The gauge anomaly has the covariant form and can emerge only in one-loop diagrams in which all the external lines are the background gauge superfield. We also present several illustrative applications in the one-loop approximation: the self-energy parts of the chiral multiplet and the gauge multiplet; the super-chiral anomaly and the superconformal anomaly; and, as the corresponding anomalous commutators, the Konishi anomaly and the anomalous supersymmetric transformation law of the supercurrent (the "central extension" of the N=1 supersymmetry algebra) and of the R-current.
Sparse regularization in limited angle tomography
Frikel, Jürgen
2011-01-01
We investigate the reconstruction problem of limited angle tomography. Such problems arise naturally in applications like digital breast tomosynthesis, dental tomography, electron microscopy, etc. Since the acquired tomographic data are highly incomplete, the reconstruction problem is severely ill-posed, and traditional reconstruction methods, such as filtered backprojection (FBP), do not perform well in such situations. To stabilize the reconstruction procedure, additional prior knowledge about the unknown object has to be integrated into the reconstruction process. In this work, we propose the use of the sparse regularization technique in combination with curvelets. We argue that this technique gives rise to an edge-preserving reconstruction. Moreover, we show that the dimension of the problem can be significantly reduced in the curvelet domain. To this end, we give a characterization of the kernel of the limited angle Radon transform in terms of curvelets and derive a characterization of solutions obtained thr...
Local orientational mobility in regular hyperbranched polymers
Dolgushev, Maxim; Fürstenberg, Florian; Guérin, Thomas
2016-01-01
We study the dynamics of local bond orientation in regular hyperbranched polymers modeled by Vicsek fractals. The local dynamics is investigated through the temporal autocorrelation functions of single bonds and the corresponding relaxation forms of the complex dielectric susceptibility. We show that the dynamic behavior of single segments depends on their remoteness from the periphery rather than on the size of the whole macromolecule. Remarkably, the dynamics of the core segments (which are most remote from the periphery) shows a scaling behavior that differs from the dynamics obtained after structural average. We analyze the most relevant processes of single segment motion and provide an analytic approximation for the corresponding relaxation times. Furthermore, we describe an iterative method to calculate the orientational dynamics in the case of very large macromolecular sizes.
Regularized canonical correlation analysis with unlabeled data
Xi-chuan ZHOU; Hai-bin SHEN
2009-01-01
In standard canonical correlation analysis (CCA), data from known datasets are used to estimate their canonical correlation. In real applications, for example bilingual text retrieval, a large portion of the data may be of unknown set membership. Such data are called unlabeled data, while the rest, from known datasets, are called labeled data. We propose a novel method called regularized canonical correlation analysis (RCCA), which makes use of both labeled and unlabeled samples. Specifically, we learn to approximate the canonical correlation as if all data were labeled. Then, we describe a generalization of RCCA for the multi-set situation. Experiments on four real-world datasets (Yeast, Cloud, Iris, and Haberman) demonstrate that, by incorporating the unlabeled data points, the accuracy of correlation coefficients can be improved by over 30%.
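Classical CCA with ridge-type regularization of the covariance blocks (the supervised core that RCCA builds on; the semi-supervised use of unlabeled samples is not reproduced here) can be sketched as follows, with invented latent-variable data:

```python
import numpy as np

def rcca_first_corr(X, Y, lam=0.1):
    """First canonical correlation with ridge regularization added to the
    covariance blocks -- standard regularized CCA, not the paper's
    semi-supervised RCCA."""
    n = X.shape[0]
    Xc, Yc = X - X.mean(0), Y - Y.mean(0)
    Cxx = Xc.T @ Xc / n + lam * np.eye(X.shape[1])
    Cyy = Yc.T @ Yc / n + lam * np.eye(Y.shape[1])
    Cxy = Xc.T @ Yc / n
    # eigenvalues of Cxx^-1 Cxy Cyy^-1 Cyx are squared canonical correlations
    M = np.linalg.solve(Cxx, Cxy) @ np.linalg.solve(Cyy, Cxy.T)
    return float(np.sqrt(np.max(np.linalg.eigvals(M).real)))

# two views driven by a shared latent signal (all data invented)
rng = np.random.default_rng(5)
z = rng.standard_normal((500, 1))
X = z @ rng.standard_normal((1, 4)) + 0.1 * rng.standard_normal((500, 4))
Y = z @ rng.standard_normal((1, 3)) + 0.1 * rng.standard_normal((500, 3))
rho = rcca_first_corr(X, Y, lam=0.01)
```

Because both views share the latent z, the first canonical correlation is close to 1; the ridge term keeps the covariance blocks invertible when samples are scarce.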
SPECT reconstruction using DCT-induced tight framelet regularization
Zhang, Jiahan; Li, Si; Xu, Yuesheng; Schmidtlein, C. R.; Lipson, Edward D.; Feiglin, David H.; Krol, Andrzej
2015-03-01
Wavelet transforms have been successfully applied in many fields of image processing; yet, to our knowledge, they have never been directly incorporated into the objective function in emission computed tomography (ECT) image reconstruction. Our aim has been to investigate whether the ℓ1-norm of non-decimated discrete cosine transform (DCT) coefficients of the estimated radiotracer distribution could be effectively used as the regularization term for penalized-likelihood (PL) reconstruction, where a regularizer is used to enforce image smoothness in the reconstruction. In this study, the ℓ1-norm of the 2D DCT wavelet decomposition was used as the regularization term. The preconditioned alternating projection algorithm (PAPA), which we proposed in earlier work to solve PL reconstruction with non-differentiable regularizers, was used to solve this optimization problem. The DCT wavelet decompositions were performed on the transaxial reconstructed images. We reconstructed Monte Carlo simulated SPECT data obtained for a numerical phantom with Gaussian blobs as hot lesions and a warm random lumpy background. Images reconstructed using the proposed method exhibited better noise suppression and improved lesion conspicuity compared with images reconstructed using the expectation maximization (EM) algorithm with a Gaussian post filter (GPF), and the mean square error (MSE) was smaller. A critical and challenging aspect of this method was the selection of optimal parameters. In summary, our numerical experiments demonstrated that the ℓ1-norm DCT wavelet frame regularizer shows promise for SPECT image reconstruction using the PAPA method.
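For an orthonormal transform, the ℓ1-penalized denoising problem min_u 0.5*||u - f||^2 + lam*||DCT(u)||_1 is solved exactly by soft-thresholding the transform coefficients. The sketch below uses an orthonormal 2D DCT as a simplified stand-in for the paper's non-decimated DCT frame inside PAPA; the phantom and parameters are invented:

```python
import numpy as np

def dct_matrix(n):
    """Orthonormal DCT-II matrix (rows are the cosine basis vectors)."""
    k = np.arange(n)[:, None]
    m = np.arange(n)[None, :]
    D = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * m + 1) * k / (2 * n))
    D[0, :] /= np.sqrt(2.0)
    return D

def dct_l1_denoise(img, lam):
    """Exact minimizer of 0.5*||u - img||^2 + lam*||DCT(u)||_1 for an
    orthonormal 2D DCT: soft-threshold the coefficients. A simplified
    stand-in for the non-decimated DCT frame used in the paper."""
    D = dct_matrix(img.shape[0])
    c = D @ img @ D.T                                   # 2D DCT coefficients
    c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)   # soft thresholding
    return D.T @ c @ D                                  # inverse 2D DCT

# invented phantom: one smooth "hot" blob plus Gaussian noise
xx, yy = np.meshgrid(np.arange(32), np.arange(32))
clean = np.exp(-((xx - 16.0) ** 2 + (yy - 16.0) ** 2) / 40.0)
rng = np.random.default_rng(6)
noisy = clean + 0.2 * rng.standard_normal((32, 32))
den = dct_l1_denoise(noisy, lam=0.1)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_den = float(np.mean((den - clean) ** 2))
```

Because the smooth blob compresses into a few low-frequency DCT coefficients while the noise spreads evenly, thresholding removes most of the noise energy at little cost to the signal.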
Constructions of k-regular maps using finite local schemes
Buczyński, Jarosław; Januszkiewicz, Tadeusz; Jelisiejew, Joachim; Michałek, Mateusz
2015-01-01
A continuous map from R^m to R^N or from C^m to C^N is called k-regular if the images of any k points are linearly independent. Given integers m and k, a problem going back to Chebyshev and Borsuk is to determine the minimal value of N for which such maps exist. The methods of algebraic topology provide lower bounds for N; however, there are very few results on the existence of such maps for particular values of m and k. Using the methods of algebraic geometry, we construct k-regular maps. We rel...
Chen, De-Han; Hofmann, Bernd; Zou, Jun
2017-01-01
We consider the ill-posed operator equation Ax = y with an injective and bounded linear operator A mapping between ℓ2 and a Hilbert space Y, possessing the unique solution x† = {x†_k}, k = 1, 2, …. For the cases where sparsity x† ∈ ℓ0 is expected but often slightly violated in practice, we investigate, in comparison with ℓ1-regularization, the elastic-net regularization, where the penalty is a weighted superposition of the ℓ1-norm and the square of the ℓ2-norm, under the assumption that x† ∈ ℓ1. Two positive parameters occur in this approach: the weight parameter η, and the regularization parameter that multiplies the whole penalty in the Tikhonov functional, whereas only one regularization parameter arises in ℓ1-regularization. Based on the variational inequality approach for describing the solution smoothness with respect to the forward operator A, and exploiting the method of approximate source conditions, we present results estimating the rate of convergence for the elastic-net regularization. The occurring rate function contains the rate of decay x†_k → 0 as k → ∞ and the classical smoothness properties of x† as an element of ℓ2.
Regularized Laplacian Estimation and Fast Eigenvector Approximation
Perry, Patrick O
2011-01-01
Recently, Mahoney and Orecchia demonstrated that popular diffusion-based procedures to compute a quick approximation to the first nontrivial eigenvector of a data graph Laplacian exactly solve certain regularized semi-definite programs (SDPs). In this paper, we extend that result by providing a statistical interpretation of their approximation procedure. Our interpretation will be analogous to the manner in which ℓ2-regularized or ℓ1-regularized ℓ2-regression (often called ridge regression and lasso regression, respectively) can be interpreted in terms of a Gaussian prior or a Laplace prior, respectively, on the coefficient vector of the regression problem. Our framework will imply that the solutions to the Mahoney-Orecchia regularized SDP can be interpreted as regularized estimates of the pseudoinverse of the graph Laplacian. Conversely, it will imply that the solution to this regularized estimation problem can be computed very quickly by running, e.g., the fast diffusion-base...
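The ridge/Gaussian-prior correspondence invoked above is concrete: the MAP estimate under a Gaussian prior on the coefficients is the closed-form ridge solution. A small numerical check on invented data:

```python
import numpy as np

# Ridge regression argmin_w ||y - X w||^2 + lam * ||w||^2 has the closed form
#   w = (X^T X + lam I)^{-1} X^T y,
# which is exactly the MAP estimate under a Gaussian prior on w with Gaussian
# noise -- the analogy the paper extends to regularized SDPs.
rng = np.random.default_rng(7)
X = rng.standard_normal((100, 6))
w_true = rng.standard_normal(6)
y = X @ w_true + 0.1 * rng.standard_normal(100)
lam = 1.0
w_ridge = np.linalg.solve(X.T @ X + lam * np.eye(6), X.T @ y)

# sanity check: the closed form is a stationary point of the ridge objective
grad = -2.0 * X.T @ (y - X @ w_ridge) + 2.0 * lam * w_ridge
```

Replacing the squared ℓ2 penalty with an ℓ1 penalty (Laplace prior) gives the lasso, which has no closed form but the same Bayesian reading.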
Total variation regularization with bounded linear variations
Makovetskii, Artyom; Voronin, Sergei; Kober, Vitaly
2016-09-01
One of the best-known techniques for signal denoising is total variation (TV) regularization. A better understanding of TV regularization is necessary to provide a stronger mathematical justification for using TV minimization in signal processing. In this work, we deal with an intermediate case between the one- and two-dimensional cases: the discrete function to be processed is two-dimensional, radially symmetric, and piecewise constant. For this case, the exact solution to the problem can be obtained as follows: first, calculate the average values of the noisy function over rings; second, calculate the shift values and their directions using closed formulae depending on a regularization parameter and the structure of the rings. Although TV regularization is effective for noise removal, it often destroys fine details and thin structures of images. To overcome this drawback, we use TV regularization for signal denoising subject to the constraint that linear signal variations are bounded.
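A generic 1D TV denoiser illustrates the flattening behavior discussed above. This sketch does gradient descent on a smoothed TV surrogate rather than using the paper's closed-form construction over rings; the piecewise-constant test signal is invented:

```python
import numpy as np

def tv_denoise_1d(f, lam=0.5, eps=1e-2, n_iter=2000, tau=0.05):
    """Gradient descent on the smoothed-TV energy
        E(u) = 0.5*||u - f||^2 + lam * sum_i sqrt((u[i+1]-u[i])^2 + eps),
    a standard differentiable surrogate for 1D TV denoising; not the paper's
    exact ring-based solution."""
    u = f.copy()
    for _ in range(n_iter):
        d = np.diff(u)
        w = d / np.sqrt(d * d + eps)   # derivative of the smoothed |.|
        g = np.zeros_like(u)           # g = D^T w (transpose of differencing)
        g[1:] += w
        g[:-1] -= w
        u -= tau * ((u - f) + lam * g)
    return u

# invented piecewise-constant signal with additive Gaussian noise
rng = np.random.default_rng(8)
clean = np.concatenate([np.zeros(30), 2.0 * np.ones(40), 0.5 * np.ones(30)])
noisy = clean + 0.2 * rng.standard_normal(100)
den = tv_denoise_1d(noisy)
mse_noisy = float(np.mean((noisy - clean) ** 2))
mse_den = float(np.mean((den - clean) ** 2))
```

TV flattens each plateau while keeping the jumps, at the cost of a small shift of each plateau value, which is exactly the detail-destroying bias the constrained variant in the abstract is designed to control.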
Regular Disjunction-Free Default Theories
Xi-Shun Zhao
2004-01-01
In this paper, the class of regular disjunction-free default theories is introduced and investigated. A transformation from regular default theories to normal default theories is established; the initial theory and the transformed theory have the same extensions when restricted to old variables. Hence, regular default theories enjoy some properties similar to those of normal default theories (e.g., existence of extensions, semi-monotonicity). Then, a new algorithm for credulous reasoning over regular theories is developed. This algorithm runs in time O(1.45^n), where n is the number of defaults. In the case of regular prerequisite-free or semi-2CNF default theories, credulous reasoning can be solved in polynomial time. However, credulous reasoning for semi-Horn default theories is shown to be NP-complete, although it is tractable for Horn default theories. Moreover, skeptical reasoning for regular unary default theories is co-NP-complete.
Buong, Nguyen; Dung, Nguyen Dinh
2014-03-01
In this paper, we present a regularization parameter choice in a new regularization method of Browder-Tikhonov type for finding a common solution of a finite system of ill-posed operator equations involving Lipschitz-continuous and accretive mappings in a real reflexive and strictly convex Banach space with a uniformly Gâteaux differentiable norm. An estimate of the convergence rate of the regularized solutions is also established.
A model and regularization scheme for ultrasonic beamforming clutter reduction.
Byram, Brett; Dei, Kazuyuki; Tierney, Jaime; Dumont, Douglas
2015-11-01
Acoustic clutter produced by off-axis and multipath scattering is known to cause image degradation, and in some cases these sources may be the prime determinants of in vivo image quality. We have previously shown some success addressing these sources of image degradation by modeling the aperture-domain signal from different sources of clutter and then decomposing aperture-domain data using the modeled sources. Our previous model had some shortcomings, including model mismatch and failure to recover B-mode speckle statistics. These shortcomings are addressed here by developing a better model and by using a general regularization approach appropriate for the model and data. We present results with L1 (lasso), L2 (ridge), and combined L1/L2 (elastic-net) regularization methods. We call our new method aperture domain model image reconstruction (ADMIRE). Our results demonstrate that ADMIRE with L1 regularization, or weighted toward L1 in the case of elastic-net regularization, yields improved image quality. L1 by itself works well, but additional improvements are seen with elastic-net regularization over the pure L1 constraint. On in vivo example cases, L1 regularization showed mean contrast improvements of 4.6 and 6.8 dB on fundamental and harmonic images, respectively. Elastic-net regularization (α = 0.9) showed mean contrast improvements of 17.8 dB on fundamental images and 11.8 dB on harmonic images. We also demonstrate that in uncluttered Field II simulations the decluttering algorithm produces the same contrast, contrast-to-noise ratio, and speckle SNR as normal B-mode imaging, demonstrating that ADMIRE preserves typical image features.
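The L1/L2 trade-off described above is the elastic-net penalty. A minimal sketch of the objective being minimized, on an illustrative toy system rather than the ADMIRE aperture-domain model (`objective`, `A`, `b` are hypothetical names):

```python
# Elastic-net objective J(x) = ||Ax - b||^2 + lam * (alpha*||x||_1 + (1-alpha)*||x||_2^2).
# Illustrative only: ADMIRE fits modeled aperture-domain clutter sources, not this toy system.

def objective(A, b, x, lam, alpha):
    # Residual of the linear model Ax - b.
    residual = [sum(aij * xj for aij, xj in zip(row, x)) - bi
                for row, bi in zip(A, b)]
    data_term = sum(r * r for r in residual)
    l1 = sum(abs(xj) for xj in x)        # lasso part: promotes sparsity
    l2 = sum(xj * xj for xj in x)        # ridge part: shrinks smoothly
    return data_term + lam * (alpha * l1 + (1.0 - alpha) * l2)

A = [[1.0, 0.0], [0.0, 1.0]]
b = [1.0, 2.0]
# With alpha = 0.9 (as in the in vivo results) the penalty is weighted toward L1.
cost = objective(A, b, [1.0, 2.0], lam=1.0, alpha=0.9)
```

Setting alpha = 1 recovers pure L1 (lasso) and alpha = 0 pure L2 (ridge), matching the three regularization variants compared in the abstract.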
Regularity effect in prospective memory during aging
Geoffrey Blondelle
2016-10-01
Background: The regularity effect can affect performance in prospective memory (PM), but little is known about the cognitive processes linked to this effect. Moreover, its impact with regard to aging remains unknown. To our knowledge, this study is the first to examine the regularity effect in PM in a lifespan perspective, with a sample of young, intermediate, and older adults. Objective and design: Our study examined the regularity effect in PM in three groups of participants: 28 young adults (18–30), 16 intermediate adults (40–55), and 25 older adults (65–80). The task, adapted from the Virtual Week, was designed to manipulate the regularity of the various activities of daily life that were to be recalled (regular repeated activities vs. irregular non-repeated activities). We examined the role of several cognitive functions, including certain dimensions of executive functions (planning, inhibition, and shifting), as well as binding, short-term memory, and retrospective episodic memory, to identify those involved in PM according to regularity and age. Results: A mixed-design ANOVA showed a main effect of task regularity and an interaction between age and regularity: an age-related difference in PM performance was found for irregular activities (older < young), but not for regular activities. All participants recalled more regular activities than irregular ones, with no age effect. It appeared that recalling regular activities involved only planning for both intermediate and older adults, while recalling irregular ones was linked to planning, inhibition, short-term memory, binding, and retrospective episodic memory. Conclusion: Taken together, our data suggest that planning capacities play a major role in remembering to perform intended actions with advancing age. Furthermore, the age-PM paradox may be attenuated when the experimental design is adapted by implementing a familiar context through the use of activities of daily living. The clinical
Ambiguities in Pauli-Villars regularization
Kleiss, Ronald H P
2014-01-01
We investigate regularization of scalar one-loop integrals in the Pauli-Villars subtraction scheme. The results depend on the number of subtractions, in particular the finite terms that survive after the divergences have been absorbed by renormalization. Therefore the process of Pauli-Villars regularization is ambiguous. We discuss how these ambiguities may be resolved by applying an asymptotically large number of subtractions, which results in a regularization that is automatically valid in any number of dimensions.
Regularized brain reading with shrinkage and smoothing
Wehbe, Leila; Ramdas, Aaditya; Steorts, Rebecca C.; Shalizi, Cosma Rohilla
2014-01-01
Functional neuroimaging measures how the brain responds to complex stimuli. However, sample sizes are modest, noise is substantial, and stimuli are high dimensional. Hence, direct estimates are inherently imprecise and call for regularization. We compare a suite of approaches which regularize via shrinkage: ridge regression, the elastic net (a generalization of ridge regression and the lasso), and a hierarchical Bayesian model based on small area estimation (SAE). We contrast regularization w...
Low-Complexity Regularization Algorithms for Image Deblurring
Alanazi, Abdulrahman
2016-11-01
Image restoration problems deal with images in which information has been degraded by blur or noise. In practice, the blur is usually caused by atmospheric turbulence, motion, camera shake, and several other mechanical or physical processes. In this study, we present two regularization algorithms for the image deblurring problem. We first present a new method based on solving a regularized least-squares (RLS) problem. This method is proposed to find a near-optimal value of the regularization parameter in the RLS problems. Experimental results on the non-blind image deblurring problem are presented. In all experiments, comparisons are made with three benchmark methods. The results demonstrate that the proposed method clearly outperforms the other methods in terms of both the output PSNR and structural similarity, as well as the visual quality of the deblurred images. To reduce the complexity of the proposed algorithm, we propose a technique based on the bootstrap method to estimate the regularization parameter in low and high-resolution images. Numerical results show that the proposed technique can effectively reduce the computational complexity of the proposed algorithms. In addition, for some cases where the point spread function (PSF) is separable, we propose using a Kronecker product so as to reduce the computations. Furthermore, in the case where the image is smooth, it is always desirable to replace the regularization term in the RLS problems by a total variation term. Therefore, we propose a novel method for adaptively selecting the regularization parameter in a so-called square root regularized total variation (SRTV). Experimental results demonstrate that our proposed method outperforms the other benchmark methods when applied to smooth images in terms of PSNR, SSIM and the restored image quality. In this thesis, we focus on the non-blind image deblurring problem, where the blur kernel is assumed to be known. However, we developed algorithms that also work
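The RLS step at the core of the thesis solves min_x ||Ax − b||² + λ||x||², equivalently the normal equations (AᵀA + λI)x = Aᵀb. A minimal sketch on a 2×2 ill-conditioned system with illustrative values (this shows the RLS mechanics only, not the thesis's parameter-selection or bootstrap methods):

```python
# Regularized least squares on a tiny 2x2 system:
# solve (A^T A + lam*I) x = A^T b; larger lam stabilizes/shrinks the solution.

def rls_2x2(A, b, lam):
    # Normal-equations matrix M = A^T A + lam*I and right-hand side v = A^T b.
    m00 = A[0][0]**2 + A[1][0]**2 + lam
    m01 = A[0][0]*A[0][1] + A[1][0]*A[1][1]
    m11 = A[0][1]**2 + A[1][1]**2 + lam
    v0 = A[0][0]*b[0] + A[1][0]*b[1]
    v1 = A[0][1]*b[0] + A[1][1]*b[1]
    det = m00*m11 - m01*m01          # Cramer's rule for the symmetric 2x2 system
    return [(v0*m11 - v1*m01)/det, (m00*v1 - m01*v0)/det]

A = [[1.0, 1.0], [1.0, 1.001]]       # nearly singular: a toy stand-in for a blur operator
b = [2.0, 2.001]
x_reg = rls_2x2(A, b, lam=0.1)       # stabilized solution, close to [1, 1]
```

The choice of lam is exactly the regularization-parameter problem the thesis addresses; here it is simply fixed by hand.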
Breast ultrasound tomography with total-variation regularization
Huang, Lianjie [Los Alamos National Laboratory]; Li, Cuiping [Karmanos Cancer Institute]; Duric, Neb [Karmanos Cancer Institute]
2009-01-01
Breast ultrasound tomography is a rapidly developing imaging modality that has the potential to impact breast cancer screening and diagnosis. A new ultrasound breast imaging device (CURE) with a ring array of transducers has been designed and built at Karmanos Cancer Institute, which acquires both reflection and transmission ultrasound signals. To extract the sound-speed information from the breast data acquired by CURE, we have developed an iterative sound-speed image reconstruction algorithm for breast ultrasound transmission tomography based on total-variation (TV) minimization. We investigate applicability of the TV tomography algorithm using in vivo ultrasound breast data from 61 patients, and compare the results with those obtained using the Tikhonov regularization method. We demonstrate that, compared to the Tikhonov regularization scheme, the TV regularization method significantly improves image quality, resulting in sound-speed tomography images with sharp (preserved) edges of abnormalities and few artifacts.
Regularized quadratic cost function for oriented fringe-pattern filtering.
Villa, Jesús; Quiroga, Juan Antonio; De la Rosa, Ismael
2009-06-01
We use the regularization theory in a Bayesian framework to derive a quadratic cost function for denoising fringe patterns. As prior constraints for the regularization problem, we propose a Markov random field model that includes information about the fringe orientation. In our cost function the regularization term imposes constraints to the solution (i.e., the filtered image) to be smooth only along the fringe's tangent direction. In this way as the fringe information and noise are conveniently separated in the frequency space, our technique avoids blurring the fringes. The attractiveness of the proposed filtering method is that the minimization of the cost function can be easily implemented using iterative methods. To show the performance of the proposed technique we present some results obtained by processing simulated and real fringe patterns.
Branch Processes of Regular Magnetic Monopole
MO Shu-Fan; REN Ji-Rong; ZHU Tao
2009-01-01
In this paper, by making use of Duan's topological current theory, the branch processes of regular magnetic monopoles are discussed in detail. Regular magnetic monopoles are found to generate or annihilate at the limit points, and to encounter, split, or merge at the bifurcation points and the degenerate points of the vector order parameter field φ(x). Furthermore, it is also shown that when regular magnetic monopoles split or merge at a degenerate point of the field function φ, the total topological charge of the regular magnetic monopoles remains unchanged.
Ideal-comparability over Regular Rings
Huan Yin CHEN; Miao Sen CHEN
2006-01-01
We introduce the concept of the ideal-comparability condition for regular rings. Let I be an ideal of a regular ring R. If R satisfies the I-comparability condition, then R is one-sided unit-regular if and only if so is R/I. Also, we show that a regular ring R satisfies general comparability if and only if the following hold: (1) R/I satisfies general comparability; (2) R satisfies the general I-comparability condition; (3) the natural map B(R) → B(R/I) is surjective.
Regularization and error assignment to unfolded distributions
Zech, Gunter
2011-01-01
The commonly used approach of presenting unfolded data only in graphical form, with the diagonal error depending on the regularization strength, is unsatisfactory. It does not permit the adjustment of parameters of theories or the exclusion of theories that are admitted by the observed data, and it does not allow the combination of data from different experiments. We propose fixing the regularization strength by a p-value criterion, indicating the experimental uncertainties independent of the regularization, and publishing the unfolded data in addition without regularization. These considerations are illustrated with three different unfolding and smoothing approaches applied to a toy example.
Bit-coded regular expression parsing
Nielsen, Lasse; Henglein, Fritz
2011-01-01
Regular expression parsing is the problem of producing a parse tree of a string for a given regular expression. We show that a compact bit representation of a parse tree can be produced efficiently, in time linear in the product of input string size and regular expression size, by simplifying the DFA-based parsing algorithm due to Dubé and Feeley to emit the bits of the bit representation without explicitly materializing the parse tree itself. We furthermore show that Frisch and Cardelli's greedy regular expression parsing algorithm can be straightforwardly modified to produce bit codings.
The regularity of quotient paratopological groups
Banakh, Taras
2010-01-01
Let $H$ be a closed subgroup of a regular abelian paratopological group $G$. The group reflexion $G^\flat$ of $G$ is the group $G$ endowed with the strongest group topology weaker than the original topology of $G$. We show that the quotient $G/H$ is Hausdorff (and regular) if $H$ is closed (and locally compact) in $G^\flat$. On the other hand, we construct an example of a regular abelian paratopological group $G$ containing a closed discrete subgroup $H$ such that the quotient $G/H$ is Hausdorff but not regular.
Laplacian embedded regression for scalable manifold regularization.
Chen, Lin; Tsang, Ivor W; Xu, Dong
2012-06-01
Semi-supervised learning (SSL), as a powerful tool to learn from a limited number of labeled data and a large number of unlabeled data, has been attracting increasing attention in the machine learning community. In particular, the manifold regularization framework has laid solid theoretical foundations for a large family of SSL algorithms, such as Laplacian support vector machine (LapSVM) and Laplacian regularized least squares (LapRLS). However, most of these algorithms are limited to small scale problems due to the high computational cost of the matrix inversion operation involved in the optimization problem. In this paper, we propose a novel framework called Laplacian embedded regression by introducing an intermediate decision variable into the manifold regularization framework. By using an ε-insensitive loss, we obtain the Laplacian embedded support vector regression (LapESVR) algorithm, which inherits the sparse solution from SVR. Also, we derive Laplacian embedded RLS (LapERLS) corresponding to RLS under the proposed framework. Both LapESVR and LapERLS possess a simpler form of a transformed kernel, which is the summation of the original kernel and a graph kernel that captures the manifold structure. The benefits of the transformed kernel are two-fold: (1) we can deal with the original kernel matrix and the graph Laplacian matrix in the graph kernel separately, and (2) if the graph Laplacian matrix is sparse, we only need to perform the inverse operation for a sparse matrix, which is much more efficient when compared with that for a dense one. Inspired by kernel principal component analysis, we further propose to project the introduced decision variable into a subspace spanned by a few eigenvectors of the graph Laplacian matrix in order to better reflect the data manifold, as well as accelerate the calculation of the graph kernel, allowing our methods to efficiently and effectively cope with large scale SSL problems. Extensive experiments on both toy and real
Regularized friction and continuation: Comparison with Coulomb's law
Vigué, Pierre; Vergez, Christophe; Karkar, Sami; Cochelin, Bruno
2017-02-01
Periodic solutions of systems with friction are difficult to investigate because of the non-smooth nature of friction laws. This paper examines periodic solutions and most notably stick-slip, on a simple one-degree-of-freedom system (mass, spring, damper, and belt), with Coulomb's friction law, and with a regularized friction law (i.e. the friction coefficient becomes a function of relative speed, with a stiffness parameter). With Coulomb's law, the stick-slip solution is constructed step by step, which gives a usable existence condition. With the regularized law, the Asymptotic Numerical Method and the Harmonic Balance Method provide bifurcation diagrams with respect to the belt speed or normal force, and for several values of the regularization parameter. Formulations from the Coulomb case give the means of a comparison between regularized solutions and a standard reference. With an appropriate definition, regularized stick-slip motion exists, its amplitude increases with respect to the belt speed and its pulsation decreases with respect to the normal force.
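A typical regularization of this kind replaces the set-valued sign function in Coulomb's law with a smooth curve whose steepness is the stiffness parameter. A sketch assuming a tanh-type law with illustrative values (the paper's exact regularized law may differ):

```python
import math

MU = 0.3      # friction coefficient (illustrative value)

def coulomb(v_rel):
    # Coulomb's law: set-valued at v_rel = 0; here only the sliding branch,
    # with the stuck state represented crudely by zero force.
    return -MU * (1 if v_rel > 0 else -1 if v_rel < 0 else 0)

def regularized(v_rel, stiffness=100.0):
    # Smooth, single-valued approximation: differentiable at v_rel = 0,
    # and it recovers Coulomb's law as stiffness -> infinity.
    return -MU * math.tanh(stiffness * v_rel)
```

Smoothness is what makes continuation methods such as the Asymptotic Numerical Method applicable; the price is that true sticking (zero relative velocity over a time interval) only exists in the limit of infinite stiffness.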
Regularization for Atmospheric Temperature Retrieval Problems
Velez-Reyes, Miguel; Galarza-Galarza, Ruben
1997-01-01
Passive remote sensing of the atmosphere is used to determine the atmospheric state. A radiometer measures microwave emissions from earth's atmosphere and surface. The radiance measured by the radiometer is proportional to the brightness temperature. This brightness temperature can be used to estimate atmospheric parameters such as temperature and water vapor content. These quantities are of primary importance for different applications in meteorology, oceanography, and geophysical sciences. Depending on the range in the electromagnetic spectrum being measured by the radiometer and the atmospheric quantities to be estimated, the retrieval or inverse problem of determining atmospheric parameters from brightness temperature might be linear or nonlinear. In most applications, the retrieval problem requires the inversion of a Fredholm integral equation of the first kind making this an ill-posed problem. The numerical solution of the retrieval problem requires the transformation of the continuous problem into a discrete problem. The ill-posedness of the continuous problem translates into ill-conditioning or ill-posedness of the discrete problem. Regularization methods are used to convert the ill-posed problem into a well-posed one. In this paper, we present some results of our work in applying different regularization techniques to atmospheric temperature retrievals using brightness temperatures measured with the SSM/T-1 sensor. Simulation results are presented which show the potential of these techniques to improve temperature retrievals. In particular, no statistical assumptions are needed and the algorithms were capable of correctly estimating the temperature profile corner at the tropopause independent of the initial guess.
Nondissipative Velocity and Pressure Regularizations for the ICON Model
Restelli, M.; Giorgetta, M.; Hundertmark, T.; Korn, P.; Reich, S.
2009-04-01
A challenging aspect in the numerical simulation of atmospheric and oceanic flows is the multiscale character of the problem both in space and time. The small spatial scales are generated by the turbulent energy and enstrophy cascades, and are usually dealt with by means of turbulence parametrizations, while the small temporal scales are governed by the propagation of acoustic and gravity waves, which are of little importance for the large scale dynamics and are often eliminated by means of a semi-implicit time discretization. We propose to treat both phenomena of subgrid turbulence and temporal scale separation in a unified way by means of nondissipative regularizations of the underlying model equations. More precisely, we discuss the use of two regularized equation sets: the velocity regularization, also known as the Lagrangian averaged Navier-Stokes system, and the pressure regularization. Both regularizations are nondissipative since they do not enhance the dissipation of energy and enstrophy of the flow. The velocity regularization models the effects of the subgrid velocity fluctuations on the mean flow; it has thus been proposed as a turbulence parametrization and it has been found to yield promising results in ocean modeling [HHPW08]. In particular, the velocity regularization results in a higher variability of the numerical solution. The pressure regularization, discussed in [RWS07], modifies the propagation of acoustic and gravity waves so that the resulting system can be discretized explicitly in time with time steps analogous to those allowed by a semi-implicit method. Compared to semi-implicit time integrators, however, the pressure regularization takes fully into account the geostrophic balance of the flow. We discuss here the implementation of the velocity and pressure regularizations within the numerical framework of the ICON general circulation model (GCM) [BR05] for the case of the rotating shallow water system, showing how the original numerical
Regularization Techniques for Linear Least-Squares Problems
Suliman, Mohamed
2016-04-01
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed. These new criteria allow, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
Early family regularity protects against later disruptive behavior.
Rijlaarsdam, Jolien; Tiemeier, Henning; Ringoot, Ank P; Ivanova, Masha Y; Jaddoe, Vincent W V; Verhulst, Frank C; Roza, Sabine J
2016-07-01
Infants' temperamental anger or frustration reactions are highly stable, but are also influenced by maturation and experience. It is yet unclear why some infants high in anger or frustration reactions develop disruptive behavior problems whereas others do not. We examined family regularity, conceptualized as the consistency of mealtime and bedtime routines, as a protective factor against the development of oppositional and aggressive behavior. This study used prospectively collected data from 3136 families participating in the Generation R Study. Infant anger or frustration reactions and family regularity were reported by mothers when children were ages 6 months and 2-4 years, respectively. Multiple informants (parents, teachers, and children) and methods (questionnaire and interview) were used in the assessment of children's oppositional and aggressive behavior at age 6. Higher levels of family regularity were associated with lower levels of child aggression independent of temperamental anger or frustration reactions (β = -0.05, p = 0.003). The association between child oppositional behavior and temperamental anger or frustration reactions was moderated by family regularity and child gender (β = 0.11, p = 0.046): family regularity reduced the risk for oppositional behavior among those boys who showed anger or frustration reactions in infancy. In conclusion, family regularity reduced the risk for child aggression and showed a gender-specific protective effect against child oppositional behavior associated with anger or frustration reactions. Families that ensured regularity of mealtime and bedtime routines buffered their infant sons high in anger or frustration reactions from developing oppositional behavior.
Regular Decompositions for H(div) Spaces
Kolev, Tzanio [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing; Vassilevski, Panayot [Lawrence Livermore National Lab. (LLNL), Livermore, CA (United States). Center for Applied Scientific Computing
2012-01-01
We study regular decompositions for H(div) spaces. In particular, we show that such regular decompositions are closely related to a previously studied “inf-sup” condition for parameter-dependent Stokes problems, for which we provide an alternative, more direct, proof.
Adaptive regularization of noisy linear inverse problems
Hansen, Lars Kai; Madsen, Kristoffer Hougaard; Lehn-Schiøler, Tue
2006-01-01
In the Bayesian modeling framework there is a close relation between regularization and the prior distribution over parameters. For prior distributions in the exponential family, we show that the optimal hyper-parameter, i.e., the optimal strength of regularization, satisfies a simple relation: T...
12 CFR 725.3 - Regular membership.
2010-01-01
Title 12, Banks and Banking. NATIONAL CREDIT UNION ADMINISTRATION REGULATIONS AFFECTING CREDIT UNIONS; NATIONAL CREDIT UNION ADMINISTRATION CENTRAL LIQUIDITY FACILITY. § 725.3 Regular membership. (a) A natural person...
Fast and compact regular expression matching
Bille, Philip; Farach-Colton, Martin
2008-01-01
We study four problems in string matching, namely, regular expression matching, approximate regular expression matching, string edit distance, and subsequence indexing, on a standard word RAM model of computation that allows logarithmic-sized words to be manipulated in constant time. We show how...
Regularity of harmonic maps with the potential
CHU Yuming
2006-01-01
The aim of this work is to prove the partial regularity of the harmonic maps with potential. The main difficulty caused by the potential is how to find the equation satisfied by the scaling function. Under the assumption on the potential we can obtain the equation, however, for a general potential, even if it is smooth, the partial regularity is still open.
On the Equivalence of Regularization Schemes
YANG Ji-Feng
2002-01-01
We illustrate via the sunset diagram that dimensional regularization 'deforms' the nonlocal contents of multi-loop diagrams, with its equivalence to the cutoff regularization scheme recovered only after the sub-divergence is subtracted. We then employ a differential equation approach for calculating loop diagrams to verify this equivalence; the ambiguities are discussed especially from a nonperturbative perspective.
Regular Event Structures and Finite Petri Nets
Nielsen, M.; Thiagarajan, P.S.
2002-01-01
We present the notion of regular event structures and conjecture that they correspond exactly to finite 1-safe Petri nets. We show that the conjecture holds for the conflict-free case. Even in this restricted setting, the proof is non-trivial and involves a natural subclass of regular event structures.
Regularity Re-Revisited: Modality Matters
Tsapkini, Kyrana; Jarema, Gonia; Kehayia, Eva
2004-01-01
The issue of regular-irregular past tense formation was examined in a cross-modal lexical decision task in Modern Greek, a language where the orthographic and phonological overlap between present and past tense stems is the same for both regular and irregular verbs. The experiment described here is a follow-up study of previous visual lexical…
The growth regularity and detective technique of collapse column
Liu, Z. [Xingtai Coal Mining Bureau (China)
1997-12-01
The paper summarizes the growth regularity and the related factors of collapse columns in Duongpang Coal Mine, introduces the applicability of roadway exploration, drilling, and geophysical prospecting methods, and expounds how to select an economical and quick exploration method, according to the characteristics of each method and the differing geological conditions, for detecting the location, shape, size, and water-bearing properties of collapse columns. 4 figs.
Local regularization of linear inverse problems via variational filtering
Lamm, Patricia K.
2017-08-01
We develop local regularization methods for ill-posed linear inverse problems governed by general Fredholm integral operators. The methods are executed as filtering algorithms which are simple to implement and computationally efficient for a large class of problems. We establish a convergence theory and give convergence rates for such methods, and illustrate their computational speed in numerical tests for inverse problems in geomagnetic exploration and imaging.
Reducing errors in the GRACE gravity solutions using regularization
Save, Himanshu; Bettadpur, Srinivas; Tapley, Byron D.
2012-09-01
The nature of the gravity field inverse problem amplifies the noise in the GRACE data, which creeps into the mid and high degree and order harmonic coefficients of the Earth's monthly gravity fields provided by GRACE. Due to the use of imperfect background models and data noise, these errors are manifested as north-south striping in the monthly global maps of equivalent water heights. In order to reduce these errors, this study investigates the use of the L-curve method with Tikhonov regularization. L-curve is a popular aid for determining a suitable value of the regularization parameter when solving linear discrete ill-posed problems using Tikhonov regularization. However, the computational effort required to determine the L-curve is prohibitively high for a large-scale problem like GRACE. This study implements a parameter-choice method, using Lanczos bidiagonalization which is a computationally inexpensive approximation to L-curve. Lanczos bidiagonalization is implemented with orthogonal transformation in a parallel computing environment and projects a large estimation problem on a problem of the size of about 2 orders of magnitude smaller for computing the regularization parameter. Errors in the GRACE solution time series have certain characteristics that vary depending on the ground track coverage of the solutions. These errors increase with increasing degree and order. In addition, certain resonant and near-resonant harmonic coefficients have higher errors as compared with the other coefficients. Using the knowledge of these characteristics, this study designs a regularization matrix that provides a constraint on the geopotential coefficients as a function of its degree and order. This regularization matrix is then used to compute the appropriate regularization parameter for each monthly solution. A 7-year time-series of the candidate regularized solutions (Mar 2003-Feb 2010) show markedly reduced error stripes compared with the unconstrained GRACE release 4
钱爱林; 毛建峰; 桂咏新
2011-01-01
The Cauchy problem for the Helmholtz equation is severely ill-posed. In this paper, we consider the Cauchy problem for the Helmholtz equation in a strip domain, where the Cauchy data u(0, y) = g(y) is given at x = 0 and the solution is sought in the interval 0 < x < 1. A semi-discrete central difference scheme, together with a rule for choosing the regularization parameter, is presented, and an error estimate is obtained.
Helping tools for the regular expression author for test questions in LMS Moodle
O. A. Sychev
2016-01-01
Composing regular expressions for test questions is often difficult for teachers, so many teachers avoid using regular-expression questions. Similar problems hinder students learning regular expressions as part of computer science. Many programs have been developed to help with composing and learning regular expressions, but they use different forms of regular-expression visualization. The goal of this research was to compare the efficiency of different forms of regular-expression representation for learning and composing, and of methods for linking them together and with the regular-expression text. A set of helping tools for regular-expression authors (as a plugin for Moodle CMS) was developed, using three forms of regular-expression representation: a syntax tree (visualizing the expression's structure), an explanation graph (visualizing the paths of expression execution), and a text description, together with a testing tool showing regular-expression matches against test strings. The developed instruments were used by students learning regular expressions, who filled in a survey afterwards. Students were divided into four groups by their year of study and country. The survey shows that different groups of students prefer different instruments. The most popular overall were the explanation graph and the testing tool, but even the text description, a general outsider, was leading in the group of students from Africa learning in English. The survey also shows that the ability to select part of a regular-expression representation and see that part highlighted in the other representations and in the regular-expression text was very useful in linking the representations together and understanding complex expressions. About a quarter of the students had used other regular-expression construction tools before taking part in this experiment; most of them said that the developed tools were better than those they had used before. Several teachers, who had used regular expressions in their questions, have
Minimal regular 2-graphs and applications
FAN, Hongbing; LIU, Guizhen; LIU, Jiping
2006-01-01
A 2-graph is a hypergraph with edge sizes of at most two. A regular 2-graph is said to be minimal if it does not contain a proper regular factor. Let f2(n) be the maximum value of degrees over all minimal regular 2-graphs of n vertices. In this paper, we provide a structure property of minimal regular 2-graphs, and consequently prove that f2(n) = (n + 3 - i)/3, where 1 ≤ i ≤ 6, i ≡ n (mod 6) and n ≥ 7, which solves a conjecture posed by Fan, Liu, Wu and Wong. As applications in graph theory, we are able to characterize unfactorable regular graphs and provide the best possible factor existence theorem on degree conditions. Moreover, f2(n) and the minimal 2-graphs can be used in universal switch box designs, which originally motivated this study.
Regular Expression Matching and Operational Semantics
Rathnayake, Asiri; 10.4204/EPTCS.62.3
2011-01-01
Many programming languages and tools, ranging from grep to the Java String library, contain regular expression matchers. Rather than first translating a regular expression into a deterministic finite automaton, such implementations typically match the regular expression on the fly. Thus they can be seen as virtual machines interpreting the regular expression much as if it were a program with some non-deterministic constructs such as the Kleene star. We formalize this implementation technique for regular expression matching using operational semantics. Specifically, we derive a series of abstract machines, moving from the abstract definition of matching to increasingly realistic machines. First a continuation is added to the operational semantics to describe what remains to be matched after the current expression. Next, we represent the expression as a data structure using pointers, which enables redundant searches to be eliminated via testing for pointer equality. From there, we arrive both at Thompson's lock...
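The on-the-fly matching style described above can be illustrated with a tiny continuation-based matcher: the continuation records "what remains to be matched after the current expression". This is a toy sketch in Python; the expression encoding and function names are ours, not the paper's formalism.

```python
# Expressions are tuples: ('eps',), ('char', c), ('seq', e1, e2),
# ('alt', e1, e2), ('star', e).

def match(expr, s, i, k):
    """Try to match expr at position i of s; k is the continuation."""
    tag = expr[0]
    if tag == 'eps':
        return k(i)
    if tag == 'char':
        return i < len(s) and s[i] == expr[1] and k(i + 1)
    if tag == 'seq':
        # Match e1, then continue with e2 followed by the old continuation.
        return match(expr[1], s, i, lambda j: match(expr[2], s, j, k))
    if tag == 'alt':
        return match(expr[1], s, i, k) or match(expr[2], s, i, k)
    if tag == 'star':
        # Either stop, or match the body once (requiring progress) and recurse.
        return k(i) or match(expr[1], s, i,
                             lambda j: j > i and match(expr, s, j, k))
    raise ValueError(tag)

def full_match(expr, s):
    return bool(match(expr, s, 0, lambda i: i == len(s)))

# (ab)*a
e = ('seq', ('star', ('seq', ('char', 'a'), ('char', 'b'))), ('char', 'a'))
```

The progress check `j > i` in the star case is the usual guard against looping on an empty-matching body; realistic engines instead use the pointer-equality tests the abstract machines in the paper formalize.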
A linear functional strategy for regularized ranking.
Kriukova, Galyna; Panasiuk, Oleksandra; Pereverzyev, Sergei V; Tkachenko, Pavlo
2016-01-01
Regularization schemes are frequently used for performing ranking tasks, a topic that has been intensively studied in recent years. However, to be effective a regularization scheme should be equipped with a suitable strategy for choosing the regularization parameter. In the present study we discuss an approach based on the idea of a linear combination of regularized rankers corresponding to different values of the regularization parameter. The coefficients of the linear combination are estimated by means of the so-called linear functional strategy. We provide a theoretical justification of the proposed approach and illustrate it by numerical experiments, some of which are related to ranking the risk of nocturnal hypoglycemia in diabetes patients.
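A toy sketch of the aggregation idea above: ridge regressors stand in for the regularized rankers, and the combination weights are fitted by least squares on the data. This is only illustrative; the actual linear functional strategy estimates the coefficients differently, and all names below are ours.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(50, 10))
x_true = rng.normal(size=10)
y = A @ x_true + 0.05 * rng.normal(size=50)

lambdas = [1e-3, 1e-2, 1e-1, 1.0]
# Tikhonov/ridge solutions x_lam = (A^T A + lam I)^{-1} A^T y
sols = [np.linalg.solve(A.T @ A + lam * np.eye(10), A.T @ y) for lam in lambdas]

# Predictions of each regularized solution, stacked column-wise
P = np.column_stack([A @ x for x in sols])          # shape (50, 4)
# Weights of the linear combination, estimated from the data
c, *_ = np.linalg.lstsq(P, y, rcond=None)
x_comb = sum(ci * xi for ci, xi in zip(c, sols))
```

By construction the combined predictor fits the data at least as well as the best single regularized solution, which is the motivation for aggregating over the regularization parameter instead of picking one value.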
Regularization of subsolutions in discrete weak KAM theory
Bernard, Patrick
2012-01-01
We present different methods of regularization of subsolutions in the context of discrete weak KAM theory. They allow one to prove the existence and density of $C^{1,1}$ subsolutions. Moreover, these subsolutions can be made strict and smooth outside of the Aubry set.
A Unified Approach for Solving Nonlinear Regular Perturbation Problems
Khuri, S. A.
2008-01-01
This article describes a simple alternative unified method of solving nonlinear regular perturbation problems. The procedure is based upon the manipulation of Taylor's approximation for the expansion of the nonlinear term in the perturbed equation. An essential feature of this technique is the relative simplicity used and the associated unified…
The Student with Albinism in the Regular Classroom.
Ashley, Julia Robertson
This booklet, intended for regular education teachers who have children with albinism in their classes, begins with an explanation of albinism, then discusses the special needs of the student with albinism in the classroom, and presents information about adaptations and other methods for responding to these needs. Special social and emotional…
"Plug-and-play" edge-preserving regularization
Chen, Donghui; Kilmer, Misha E.; Hansen, Per Christian
2014-01-01
In many inverse problems it is essential to use regularization methods that preserve edges in the reconstructions, and many reconstruction models have been developed for this task, such as the Total Variation (TV) approach. The associated algorithms are complex and require a good knowledge of large...
RECONSTRUCTION OF SCATTERED FIELD FROM FAR-FIELD BY REGULARIZATION
Ji-jun Liu; Jin Cheng; G. Nakamura
2004-01-01
In this paper, we consider an inverse scattering problem for an obstacle D ⊂ R² with Robin boundary condition. By applying the point source, we give a regularizing method to recover the scattered field from the far-field pattern. Numerical implementations are also presented.
Nonlocal regularization of abelian models with spontaneous symmetry breaking
Clayton, M. A.
2001-01-01
We demonstrate how nonlocal regularization is applied to gauge invariant models with spontaneous symmetry breaking. Motivated by the ability to find a nonlocal BRST invariance that leads to the decoupling of longitudinal gauge bosons from physical amplitudes, we show that the original formulation of the method leads to a nontrivial relationship between the nonlocal form factors that can appear in the model.
Regularity of Solutions for a Class of Zakharov-Kuznetsov Equations
MA Yun-xin; GUO Tian-fen
2004-01-01
In this paper, we consider the regularity in S of solutions of the Zakharov-Kuznetsov equation in H^s (s > 2). Moreover, by the method of undetermined coefficients we prove that there exists no conservative integral involving derivatives of second or higher order.
Parameter optimization in the regularized kernel minimum noise fraction transformation
Nielsen, Allan Aasbjerg; Vestergaard, Jacob Schack
2012-01-01
Based on the original, linear minimum noise fraction (MNF) transformation and kernel principal component analysis, a kernel version of the MNF transformation was recently introduced. Inspired by earlier work, we here give a simple method for finding optimal parameters in a regularized version of kernel MNF analysis. We consider the model signal-to-noise ratio (SNR) as a function of the kernel parameters and the regularization parameter. In 2-4 steps of increasingly refined grid searches we find the parameters that maximize the model SNR. An example based on data from the DLR 3K camera system is given.
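The refined grid search described above can be sketched generically: evaluate the objective on a coarse grid, then repeatedly zoom into the cell around the current maximizer. The objective below is a stand-in, not the kernel-MNF model SNR, and the function names are ours.

```python
import numpy as np

def refined_grid_search(f, lo, hi, n=11, steps=3):
    """Maximize f over [lo, hi] by repeatedly zooming a 1-D grid."""
    best = None
    for _ in range(steps):
        grid = np.linspace(lo, hi, n)
        vals = [f(g) for g in grid]
        i = int(np.argmax(vals))
        best = grid[i]
        # Zoom into the interval around the current maximizer
        lo = grid[max(i - 1, 0)]
        hi = grid[min(i + 1, n - 1)]
    return best

# Stand-in objective with its maximum at 0.3
peak = refined_grid_search(lambda s: -(s - 0.3) ** 2, 0.0, 1.0)
```

In the paper's setting the scalar `s` would be replaced by a small grid over the kernel parameters and the regularization parameter jointly, refined in the same way.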
Models with hidden regular variation: Generation and detection
Bikramjit Das
2015-12-01
We review the notions of multivariate regular variation (MRV) and hidden regular variation (HRV) for distributions of random vectors and then discuss methods for generating models exhibiting both properties, concentrating on the non-negative orthant in dimension two. Furthermore, we suggest diagnostic techniques that detect these properties in multivariate data and indicate when models exhibiting both MRV and HRV are plausible fits for the data. We illustrate our techniques on simulated data, as well as two real Internet data sets.
Radial basis function networks and complexity regularization in function learning.
Krzyzak, A; Linder, T
1998-01-01
In this paper we apply the method of complexity regularization to derive estimation bounds for nonlinear function estimation using a single hidden layer radial basis function network. Our approach differs from previous complexity regularization neural-network function learning schemes in that we operate with random covering numbers and l(1) metric entropy, making it possible to consider much broader families of activation functions, namely functions of bounded variation. Some constraints previously imposed on the network parameters are also eliminated this way. The network is trained by means of complexity regularization involving empirical risk minimization. Bounds on the expected risk in terms of the sample size are obtained for a large class of loss functions. Rates of convergence to the optimal loss are also derived.
Manifold regularized multitask feature learning for multimodality disease classification.
Jie, Biao; Zhang, Daoqiang; Cheng, Bo; Shen, Dinggang
2015-02-01
Multimodality based methods have shown great advantages in classification of Alzheimer's disease (AD) and its prodromal stage, that is, mild cognitive impairment (MCI). Recently, multitask feature selection methods are typically used for joint selection of common features across multiple modalities. However, one disadvantage of existing multimodality based methods is that they ignore the useful data distribution information in each modality, which is essential for subsequent classification. Accordingly, in this paper we propose a manifold regularized multitask feature learning method to preserve both the intrinsic relatedness among multiple modalities of data and the data distribution information in each modality. Specifically, we denote the feature learning on each modality as a single task, and use a group-sparsity regularizer to capture the intrinsic relatedness among multiple tasks (i.e., modalities) and jointly select the common features from multiple tasks. Furthermore, we introduce a new manifold-based Laplacian regularizer to preserve the data distribution information from each task. Finally, we use the multikernel support vector machine method to fuse multimodality data for eventual classification. We also extend our method to the semisupervised setting, where only partial data are labeled. We evaluate our method using the baseline magnetic resonance imaging (MRI), fluorodeoxyglucose positron emission tomography (FDG-PET), and cerebrospinal fluid (CSF) data of subjects from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The experimental results demonstrate that our proposed method can not only achieve improved classification performance, but also help to discover the disease-related brain regions useful for disease diagnosis.
HU Chang-sheng; ZHAO Wei-min; MA Qiang
2009-01-01
To analyze the stress of the guiding and positioning board and the effectiveness of the guiding and positioning device, the board's motion regularity was analyzed by a diagrammatical method, according to the device's operational principle and structure and based on two postulated conditions. To account for changes in working conditions, simulations under five different working conditions were run to check the correctness of the motion regularities obtained by the diagrammatical method. The simulation results show that the motion regularities are correct and that the postulated conditions have no effect on them. The characteristics of the motion process were also derived from the simulation results.
李秀丽
2008-01-01
In this paper, we introduce the concept of a strongly regular (α,β)-family. It generalizes the concept of an SPG-family in [4] and [5]. We provide a method of constructing strongly regular (α,β)-geometries from strongly regular (α,β)-families. Furthermore, we prove that each strongly regular (α,β)-geometry constructed from a strongly regular (α,β)-regulus translation is isomorphic to a translation strongly regular (α,β)-geometry; when t - r > β, the converse is also true.
J-regular rings with injectivities
Shen, Liang
2010-01-01
A ring $R$ is called a J-regular ring if R/J(R) is von Neumann regular, where J(R) is the Jacobson radical of R. It is proved that if R is J-regular, then (i) R is right n-injective if and only if every homomorphism from an $n$-generated small right ideal of $R$ to $R_{R}$ can be extended to one from $R_{R}$ to $R_{R}$; (ii) R is right FP-injective if and only if R is right (J, R)-FP-injective. Some known results are improved.
REGULARIZATION OF SINGULAR SYSTEMS BY OUTPUT FEEDBACK
De-lin Chu; Da-yong Cai
2000-01-01
The problem of regularizing a singular system by derivative and proportional output feedback is studied. Necessary and sufficient conditions are obtained under which a singular system can be regularized into a closed-loop system that is regular and of index at most one. A reduced form is given from which the system properties, as well as the feedback to be determined, can easily be explored. The main results of the present paper are based on orthogonal transformations and can therefore be implemented in numerically stable ways.
Inverse problems: Fuzzy representation of uncertainty generates a regularization
Kreinovich, V.; Chang, Ching-Chuang; Reznik, L.; Solopchenko, G. N.
1992-01-01
In many applied problems (geophysics, medicine, and astronomy) we cannot directly measure the values x(t) of the desired physical quantity x at different moments of time, so we measure some related quantity y(t) and then try to reconstruct the desired values x(t). This problem is often ill-posed in the sense that two essentially different functions x(t) are consistent with the same measurement results. So, in order to get a reasonable reconstruction, we must have some additional prior information about the desired function x(t). Methods that use this information to choose x(t) from the set of all possible solutions are called regularization methods. In some cases, we know the statistical characteristics both of x(t) and of the measurement errors, so we can apply statistical filtering methods (well developed since the invention of the Wiener filter). In some situations, we know the properties of the desired process, e.g., we know that the derivative of x(t) is limited by some number delta, etc. In this case, we can apply standard regularization techniques (e.g., Tikhonov's regularization). In many cases, however, we have only uncertain knowledge about the values of x(t), about the rate with which the values of x(t) can change, and about the measurement errors. In these cases, usually one of the existing regularization methods is applied. There exist several heuristics for choosing such a method. The problem with these heuristics is that they often lead to choosing different methods, and these methods lead to different functions x(t). Therefore, the results x(t) of applying these heuristic methods are often unreliable. We show that if we use fuzzy logic to describe this uncertainty, then we automatically arrive at a unique regularization method, whose parameters are uniquely determined by the experts' knowledge. Although we start with a fuzzy description, the resulting regularization turns out to be quite crisp.
Path integral evaluation of non-abelian anomaly and Pauli-Villars-Gupta regularization
Okuyama, Kiyoshi; Suzuki, Hiroshi
1996-01-01
When the path integral method of anomaly evaluation is applied to chiral gauge theories, two different types of gauge anomaly, i.e., the consistent form and the covariant form, appear depending on the regularization scheme for the Jacobian factor. We clarify the relation between the regularization scheme and the Pauli--Villars--Gupta (PVG) type Lagrangian level regularization. The conventional PVG, being non-gauge invariant for chiral gauge theories, in general corresponds to the consistent regularization scheme. The covariant regularization scheme, on the other hand, is realized by the generalized PVG Lagrangian recently proposed by Frolov and Slavnov. These correspondences are clarified by reformulating the PVG method as a regularization of the composite gauge current operator.
Novel Harmonic Regularization Approach for Variable Selection in Cox’s Proportional Hazards Model
Ge-Jin Chu
2014-01-01
Variable selection is an important issue in regression, and a number of variable selection methods involving nonconvex penalty functions have been proposed. In this paper, we investigate a novel harmonic regularization method, which can approximate nonconvex Lq (1/2 < q < 1) regularizations, to select key risk factors in the Cox proportional hazards model using microarray gene expression data. The harmonic regularization method can be efficiently solved using our proposed direct path seeking approach, which can produce solutions that closely approximate those for the convex loss function and the nonconvex regularization. Simulation results based on artificial datasets and four real microarray gene expression datasets, such as the diffuse large B-cell lymphoma (DLBCL), lung cancer, and AML datasets, show that the harmonic regularization method can be more accurate for variable selection than existing Lasso-type methods.
A quadratic rate of asymptotic regularity for CAT(0)-spaces
Leustean, Laurentiu
2005-01-01
In this paper we obtain a quadratic bound on the rate of asymptotic regularity for the Krasnoselski-Mann iterations of nonexpansive mappings in CAT(0)-spaces, whereas previous results guarantee only exponential bounds. The method we use is to extend to the more general setting of uniformly convex hyperbolic spaces a quantitative version of a strengthening of Groetsch's theorem obtained by Kohlenbach using methods from mathematical logic (so-called ``proof mining'').
Information theoretic regularization in diffuse optical tomography.
Panagiotou, Christos; Somayajula, Sangeetha; Gibson, Adam P; Schweiger, Martin; Leahy, Richard M; Arridge, Simon R
2009-05-01
Diffuse optical tomography (DOT) retrieves the spatially distributed optical characteristics of a medium from external measurements. Recovering the parameters of interest involves solving a nonlinear and highly ill-posed inverse problem. This paper examines the possibility of regularizing DOT via the introduction of a priori information from alternative high-resolution anatomical modalities, using the information theory concepts of mutual information (MI) and joint entropy (JE). Such functionals evaluate the similarity between the reconstructed optical image and the prior image while bypassing the multimodality barrier manifested as the incommensurate relation between the gray value representations of corresponding anatomical features in the two modalities. By introducing structural information, we aim to improve the spatial resolution and quantitative accuracy of the solution. We provide a thorough explanation of the theory from an imaging perspective, accompanied by preliminary results using numerical simulations. In addition we compare the performance of MI and JE. Finally, we have adopted a method for fast marginal entropy evaluation and optimization by modifying the objective function and extending it to the JE case. We demonstrate its use on an image reconstruction framework and show significant computational savings.
Sparsity-regularized HMAX for visual recognition.
Xiaolin Hu
About ten years ago, HMAX was proposed as a simple and biologically feasible model for object recognition, based on how the visual cortex processes information. However, the model does not encompass sparse firing, which is a hallmark of neurons at all stages of the visual pathway. The current paper presents an improved model, called sparse HMAX, which integrates sparse firing. This model is able to learn higher-level features of objects on unlabeled training images. Unlike most other deep learning models that explicitly address the global structure of images in every layer, sparse HMAX addresses local-to-global structure gradually along the hierarchy by applying patch-based learning to the output of the previous layer. As a consequence, the learning method can be standard sparse coding (SSC) or independent component analysis (ICA), two techniques deeply rooted in neuroscience. What makes SSC and ICA applicable at higher levels is the introduction of linear higher-order statistical regularities by max pooling. After training, high-level units display sparse, invariant selectivity for particular individuals or for image categories like those observed in human inferior temporal cortex (ITC) and medial temporal lobe (MTL). Finally, on an image classification benchmark, sparse HMAX outperforms the original HMAX by a large margin, suggesting its great potential for computer vision.
WEAK REGULARIZATION FOR A CLASS OF ILL-POSED CAUCHY PROBLEMS
(no author listed)
2006-01-01
This article is concerned with the ill-posed Cauchy problem associated with a densely defined linear operator A in a Banach space. A family of weak regularizing operators is introduced. If the spectrum of A is contained in a sector of right-half complex plane and its resolvent is polynomially bounded, the weak regularization for such ill-posed Cauchy problem can be shown by using the quasi-reversibility method and regularized semigroups. Finally, an example is given.
Could Regular Pot Smoking Harm Vision?
Study suggests that it might slow signaling among ... may be linked to a limited degree of vision impairment, a new French study suggests. The finding ...
Regular-fat dairy and human health
Astrup, Arne; Bradley, Beth H Rice; Brenna, J Thomas
2016-01-01
In recent history, some dietary recommendations have treated dairy fat as an unnecessary source of calories and saturated fat in the human diet. These assumptions, however, have recently been brought into question by current research on regular-fat dairy products and human health. In an effort to disseminate, explore and discuss the state of the science on the relationship between regular-fat dairy products and health, symposia were programmed by dairy industry organizations in Europe and North America at The Eurofed Lipids Congress (2014) in France and The Dairy Nutrition Annual Symposium (2014), addressing the effects dairy foods have on human health. The emerging scientific evidence indicates that the consumption of regular-fat dairy foods is not associated with an increased risk of cardiovascular disease and is inversely associated with weight gain and the risk of obesity. Dairy foods, including regular-fat milk ...
The regularization of Old English weak verbs
Marta Tío Sáenz
2015-07-01
This article deals with the regularization of non-standard spellings of verbal forms extracted from a corpus. It addresses the question of what the limits of regularization are when lemmatizing Old English weak verbs. The purpose of such regularization, also known as normalization, is to enable lexicological analysis or lexicographical work. The analysis concentrates on weak verbs of the second class and draws on the lexical database of Old English Nerthus, which has incorporated the texts of the Dictionary of Old English Corpus. As regards the limits of normalization, the solutions adopted are, first, that when regularization is necessary it is restricted to correspondences based on dialectal and diachronic variation and, second, that normalization has to be unidirectional.
On π-regularity of General Rings
CHEN WEI-XING; CUI SHU-YING
2010-01-01
A general ring means an associative ring with or without identity. An idempotent e in a general ring I is called left (right) semicentral if for every x ∈ I, xe = exe (ex = exe). And I is called semiabelian if every idempotent in I is left or right semicentral. It is proved that a semiabelian general ring I is π-regular if and only if the set N(I) of nilpotent elements in I is an ideal of I and I/N(I) is regular. It follows that if I is a semiabelian general ring and K is an ideal of I, then I is π-regular if and only if both K and I/K are π-regular. Based on this we prove that every semiabelian GVNL-ring is an SGVNL-ring. These generalize several known results on the relevant subject. Furthermore we give a characterization of a semiabelian GVNL-ring.
A Biordered Set Representation of Regular Semigroups
Bing Jun YU; Mang XU
2005-01-01
In this paper, for an arbitrary regular biordered set E, by using biorder-isomorphisms between the ω-ideals of E, we construct a fundamental regular semigroup WE called NH-semigroup of E, whose idempotent biordered set is isomorphic to E. We prove further that WE can be used to give a new representation of general regular semigroups in the sense that, for any regular semigroup S with the idempotent biordered set isomorphic to E, there exists a homomorphism from S to WE whose kernel is the greatest idempotent-separating congruence on S and the image is a full symmetric subsemigroup of WE. Moreover, when E is a biordered set of a semilattice E0, WE is isomorphic to the Munn-semigroup TE0; and when E is the biordered set of a band B, WE is isomorphic to the Hall-semigroup WB.
Regularities and Radicals in Near-rings
N.J. Groenewald
2002-01-01
Let F be a regularity for near-rings and F(R) the largest F-regular ideal in R. In the first part of this paper, we introduce the concepts of maximal F-modular ideals and F-primitive near-rings to characterize F(R) for any near-ring regularity F. Under certain conditions, F(R) is equal to the intersection of all the maximal F-modular ideals of R. As examples, we apply this to the different analogues of the Brown-McCoy radicals and also the Behrens radicals. In the last part of this paper, we show that for certain regularities, the class of F-primitive near-rings forms a special class.
Spectral partitioning of random regular blockmodels
Barucca, Paolo
2016-01-01
Graph partitioning problems emerge in a wide variety of complex systems, ranging from biology to finance, but can be rigorously analyzed and solved only for a few graph ensembles. Here, an ensemble of random graphs with regular block structure is introduced, for which analytical results can be obtained. In particular, the spectral density of such random regular blockmodels is computed exactly for a modular, bipartite and core-periphery structure. McKay's law for random regular graphs is found analytically to apply also for regular modular and bipartite structures when blocks are homogeneous. In core-periphery structures, where blocks are intrinsically heterogeneous, a new law is found to apply for the spectral density. Exact solution to the inference problem is provided for the models discussed. All analytical results show perfect agreement with numerical experiments. Final discussion summarizes results and outlines the relevance of the results for the solution of graph partitioning problems in other graph en...
Comparability for ideals of regular rings
CHEN Huanyin
2005-01-01
In this paper we investigate necessary and sufficient conditions under which ideals possess a comparability structure. For regular rings, we prove that every square matrix over ideals satisfying general comparability admits a diagonal reduction by quasi-invertible matrices.
Regularity of optimal transport maps and applications
Philippis, Guido
2013-01-01
In this thesis, we study the regularity of optimal transport maps and its applications to the semi-geostrophic system. The first two chapters survey the known theory; in particular there is a self-contained proof of Brenier's theorem on the existence of optimal transport maps and of Caffarelli's theorem on the Hölder continuity of optimal maps. In the third and fourth chapters we start investigating the Sobolev regularity of optimal transport maps, while in Chapter 5 we show how the above-mentioned results allow us to prove the existence of Eulerian solutions to the semi-geostrophic equation. In Chapter 6 we prove partial regularity of optimal maps with respect to generic cost functions (it is well known that in this case global regularity cannot be expected). More precisely, we show that if the target and source measures have smooth densities, the optimal map is always smooth outside a closed set of measure zero.
Spatially varying regularization based on retrieved support in diffuse optical tomography
Sabir, Sohail; Cho, Sanghoon; Cho, Seunryong
2017-03-01
Diffuse optical tomography (DOT) is a promising noninvasive imaging modality capable of providing the functional characteristics (oxygen saturation and hemodynamic states) of thick biological tissue by quantifying its optical parameters. The parameter recovery problem in DOT is a nonlinear, ill-posed and ill-conditioned inverse problem. Non-linear iterative methods are usually employed for image reconstruction in DOT, utilizing a Tikhonov-based regularization approach. These methods employ l2-norm regularization, where a constant regularization parameter is determined either empirically or by generalized cross-validation or the L-curve method. The reconstructed images look smoother or noisier depending on the chosen value of the regularization constant. Moreover, the edge information of the inclusions appears blurred in such constant-regularization methods. In this study we propose a method that retrieves a non-zero support (possible tumor location) and uses it to generate a spatially varying regularization map. The inclusion locations are determined by treating the imaging problem as a multiple measurement vector (MMV) problem. Based on the recovered inclusion positions, a spatially varying regularization map is generated for use in a non-linear image reconstruction framework. The results obtained with such spatially varying priors show improved image reconstruction in terms of better contrast recovery, reduced background noise and preservation of the edge information of the inclusions, compared with the constant-regularization approach.
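The spatially varying regularization idea can be sketched on a toy linear 1-D problem: given a retrieved support, lower the penalty weight there and keep it high in the background. This is a simplification under assumed data (a random forward matrix standing in for the DOT model); all names are illustrative.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 60
A = rng.normal(size=(40, n))             # underdetermined toy forward model
x_true = np.zeros(n)
x_true[20:30] = 1.0                      # "inclusion"
y = A @ x_true + 0.01 * rng.normal(size=40)

support = np.zeros(n, dtype=bool)
support[18:32] = True                    # retrieved (approximate) support

# Spatially varying penalty: weak inside the support, strong outside
w = np.where(support, 1e-3, 1e2)
W = np.diag(w)
x_var = np.linalg.solve(A.T @ A + W, A.T @ y)

# Constant-regularization reconstruction for comparison
x_const = np.linalg.solve(A.T @ A + 1.0 * np.eye(n), A.T @ y)
```

Forcing the background toward zero while leaving the support nearly unpenalized recovers both the inclusion contrast and its edges better than one global regularization constant, mirroring the comparison reported above.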
*-Regular Leavitt Path Algebras of Arbitrary Graphs
Gonzalo ARANDA PINO; Kulumani RANGASWAMY; Lia VA(S)
2012-01-01
If K is a field with involution and E an arbitrary graph, the involution from K naturally induces an involution of the Leavitt path algebra LK(E). We show that the involution on LK(E) is proper if the involution on K is positive-definite, even in the case when the graph E is not necessarily finite or row-finite. It has been shown that the Leavitt path algebra LK(E) is regular if and only if E is acyclic. We give necessary and sufficient conditions for LK(E) to be *-regular (i.e., regular with proper involution). This characterization of *-regularity of a Leavitt path algebra is given in terms of an algebraic property of K, not just a graph-theoretic property of E. This differs from the known characterizations of various other algebraic properties of a Leavitt path algebra in terms of graph-theoretic properties of E alone. As a corollary, we show that Handelman's conjecture (stating that every *-regular ring is unit-regular) holds for Leavitt path algebras. Moreover, its generalized version for rings with local units also continues to hold for Leavitt path algebras over arbitrary graphs.
Vandenbrink, Stephan Christopher [Univ. of Pittsburgh, PA (United States)
1998-01-13
This thesis presents the results from the investigation of time-dependent B0d-B̄0d mixing in the B → lepton X, B0d → D*-, D*- → D̄0 π-, D̄0 → K+ π- channel in pp̄ collisions at √s = 1.8 TeV, using 110 pb^-1 of data collected with the CDF detector at the Fermilab Tevatron Collider. The D̄0 vertex is reconstructed. The B0d decay length is estimated using the distance from the primary vertex to the measured position of the D0 vertex. The B0 momentum is estimated using the D0 momentum and a kinematic correction factor from Monte Carlo. With the dilution floating, ΔM_d = 0.55 +0.15/-0.16 (stat) ± 0.06 (syst) ps^-1 is measured.
Henner, V K; Belozerova, T S
2015-01-01
The first part of our analysis uses the wavelet method to compare the Quantum Chromodynamic (QCD) prediction for the ratio of hadronic to muon cross sections in electron-positron collisions, $R$, with experimental data for $R$ over a center of mass energy range up to 7.5 GeV. A direct comparison of the raw experimental data and the QCD prediction is difficult because the data have a wide range of structures and large statistical errors and the QCD description contains sharp quark-antiquark thresholds. However, a meaningful comparison can be made if a type of "smearing" procedure is used to smooth out rapid variations in both the theoretical and experimental values of $R$. A wavelet analysis (WA) can be used to achieve this smearing effect. In the second part of the analysis we concentrate on the 3.0 - 6.0 GeV energy region containing the relatively wide charmonium resonances $\\psi(1^-)$. We use the wavelet methodology to distinguish these resonances from experimental noise, background and from each other, and...
Resolving intravoxel fiber architecture using nonconvex regularized blind compressed sensing
Chu, C. Y.; Huang, J. P.; Sun, C. Y.; Liu, W. Y.; Zhu, Y. M.
2015-03-01
In diffusion magnetic resonance imaging, accurate and reliable estimation of intravoxel fiber architectures is a major prerequisite for tractography algorithms or any other derived statistical analysis. Several methods have been proposed that estimate intravoxel fiber architectures using low angular resolution acquisitions owing to their shorter acquisition time and relatively low b-values. But these methods are highly sensitive to noise. In this work, we propose a nonconvex regularized blind compressed sensing approach to estimate intravoxel fiber architectures in low angular resolution acquisitions. The method models diffusion-weighted (DW) signals as a sparse linear combination of unfixed reconstruction basis functions and introduces a nonconvex regularizer to enhance the noise immunity. We present a general solving framework to simultaneously estimate the sparse coefficients and the reconstruction basis. Experiments on synthetic, phantom, and real human brain DW images demonstrate the superiority of the proposed approach.
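The sparse-recovery step the abstract describes can be illustrated with a minimal convex stand-in: ISTA with a fixed dictionary and an l1 penalty. The paper's method additionally learns the reconstruction basis and uses a nonconvex regularizer, so everything below is a synthetic sketch, not the authors' algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
D = rng.standard_normal((30, 60))
D /= np.linalg.norm(D, axis=0)           # unit-norm columns (dictionary atoms)
x_true = np.zeros(60)
x_true[[3, 17, 42]] = [2.0, -1.5, 1.0]   # a 3-sparse coefficient vector
y = D @ x_true                           # noiseless "measured signal"

def ista(D, y, lam, n_iter=2000):
    """ISTA for min 0.5*||y - D x||^2 + lam*||x||_1 (convex sparse coding)."""
    step = 1.0 / np.linalg.norm(D, 2) ** 2           # 1/L, L = Lipschitz constant
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        z = x - step * (D.T @ (D @ x - y))           # gradient step on the data fit
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x

x_hat = ista(D, y, lam=0.1)
print(sorted(np.argsort(np.abs(x_hat))[-3:].tolist()))  # indices of the 3 largest coefficients
```

With 30 measurements of a 3-sparse 60-dimensional vector, the l1 solution typically identifies the true support; the blind, nonconvex version of the paper replaces the fixed D with a learned basis and the l1 term with a nonconvex penalty.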
Random noise attenuation using an improved anisotropic total variation regularization
Gemechu, Diriba; Yuan, Huan; Ma, Jianwei
2017-09-01
In seismic data processing, attenuation of random noise in the observed data is a basic step that improves the signal-to-noise ratio (SNR) of seismic data. In this paper, we propose an anisotropic total bounded variation regularization approach to attenuate noise. An improved constrained convex optimization model is formulated for this approach, and the split Bregman algorithm is used to solve the optimization model. The generalized cross-validation (GCV) technique is used to estimate the regularization parameter. Synthetic and real seismic data are used to show that the proposed method outperforms FX deconvolution, shearlet hard thresholding, and anisotropic total variation methods in terms of event-preserving denoising. The numerical results indicate that the proposed method effectively attenuates random noise while preserving the structure and important features of seismic data.
The LPM effect in sequential bremsstrahlung: dimensional regularization
Arnold, Peter; Chang, Han-Chih [Department of Physics, University of Virginia,382 McCormick Road, Charlottesville, VA 22894-4714 (United States); Iqbal, Shahin [National Centre for Physics,Quaid-i-Azam University Campus, Islamabad, 45320 (Pakistan)
2016-10-19
The splitting processes of bremsstrahlung and pair production in a medium are coherent over large distances in the very high energy limit, which leads to a suppression known as the Landau-Pomeranchuk-Migdal (LPM) effect. Of recent interest is the case when the coherence lengths of two consecutive splitting processes overlap (which is important for understanding corrections to standard treatments of the LPM effect in QCD). In previous papers, we have developed methods for computing such corrections without making soft-gluon approximations. However, our methods require consistent treatment of canceling ultraviolet (UV) divergences associated with coincident emission times, even for processes with tree-level amplitudes. In this paper, we show how to use dimensional regularization to properly handle the UV contributions. We also present a simple diagnostic test that any consistent UV regularization method for this problem needs to pass.
SUI Da-shan; CUI Zhen-shan
2008-01-01
The interfacial heat transfer coefficient (IHTC) between the casting and the mould is essential to numerical simulation as one of the boundary conditions. A new inverse method was presented based on Tikhonov regularization theory. A regularized functional was established and the regularization parameter was deduced. The functional was solved to determine the interfacial heat transfer coefficient using the sensitivity coefficient and the Newton-Raphson iteration method. A temperature measurement experiment was performed on a ZL102 sand-mould casting, and an appropriate mathematical model of the IHTC was established. Moreover, the regularization method was used to determine the IHTC. The results indicate that the regularization method is very efficient in overcoming the ill-posedness of the inverse heat conduction problem (IHCP) and in ensuring the accuracy and stability of the solutions.
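The stabilizing role of the Tikhonov term can be seen on a minimal synthetic linear inverse problem (this is not the paper's IHCP model; the blurring operator, data, and the value of alpha below are all illustrative):

```python
import numpy as np

# Tikhonov regularization for min ||A x - b||^2 + alpha * ||x||^2,
# whose closed-form solution is x = (A^T A + alpha I)^{-1} A^T b.
def tikhonov(A, b, alpha):
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + alpha * np.eye(n), A.T @ b)

# Ill-conditioned forward operator: a Gaussian blurring matrix.
rng = np.random.default_rng(1)
n = 50
t = np.linspace(0.0, 1.0, n)
A = np.exp(-100.0 * (t[:, None] - t[None, :]) ** 2)
x_true = np.sin(2.0 * np.pi * t)
b = A @ x_true + 1e-3 * rng.standard_normal(n)   # blurred, slightly noisy data

x_naive = np.linalg.solve(A, b)      # unregularized inversion amplifies the noise
x_reg = tikhonov(A, b, alpha=1e-3)   # regularized inversion stays stable

print(np.linalg.norm(x_naive - x_true) > np.linalg.norm(x_reg - x_true))
```

Even with noise at the 1e-3 level, direct inversion of the smoothing operator produces a wildly oscillating solution, while the regularized solve recovers the smooth profile; this is the same ill-posedness-versus-stability trade-off the abstract describes for the IHCP.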
Performance Comparison of Total Variation based Image Regularization Algorithms
Kamalaveni Vanjigounder
2016-07-01
The mathematical approach of the calculus of variations is commonly used to find an unknown function that minimizes or maximizes a functional. Problems such as retrieving the original image from a degraded one are called inverse problems; the most basic example is image denoising. Variational methods are formulated as optimization problems and provide a good solution to image denoising. Three such variational methods for image denoising, the Tikhonov model, the ROF model, and the Total Variation-L1 model, are studied and implemented. The performance of these variational algorithms is analyzed for different values of the regularization parameter. It is found that a small value of the regularization parameter causes better noise removal, whereas a large value preserves sharp edges well. The Euler-Lagrange equation corresponding to the energy functional used in the variational methods is solved using the gradient descent method, and the resulting partial differential equation is solved using Euler's forward finite difference method. The quality metrics are computed and the results are compared in this paper.
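The gradient-descent solution of the Euler-Lagrange equation can be sketched for a smoothed ROF energy (the discretization, the periodic boundary handling, and the parameter values below are illustrative choices, not necessarily those of the paper):

```python
import numpy as np

def tv_denoise(f, lam=10.0, eps=0.1, dt=0.02, n_iter=300):
    """Explicit gradient descent on the smoothed ROF energy
         E(u) = sum sqrt(|grad u|^2 + eps^2) + (lam / 2) * sum (u - f)^2,
    using forward differences for the gradient and backward differences
    for the divergence (periodic boundaries via np.roll)."""
    u = f.copy()
    for _ in range(n_iter):
        ux = np.roll(u, -1, axis=1) - u                  # forward difference in x
        uy = np.roll(u, -1, axis=0) - u                  # forward difference in y
        mag = np.sqrt(ux ** 2 + uy ** 2 + eps ** 2)
        px, py = ux / mag, uy / mag                      # normalized gradient field
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u = u + dt * (div - lam * (u - f))               # descend the Euler-Lagrange flow
    return u

rng = np.random.default_rng(2)
clean = np.zeros((32, 32))
clean[8:24, 8:24] = 1.0                                  # piecewise-constant square
noisy = clean + 0.2 * rng.standard_normal(clean.shape)
denoised = tv_denoise(noisy)
print(np.mean((denoised - clean) ** 2) < np.mean((noisy - clean) ** 2))
```

The step size dt must respect the stability limit set by the smoothing parameter eps (roughly dt <= eps/4 for this explicit scheme), which is one reason implicit or dual methods are often preferred over plain gradient descent.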
Regularization dependence on phase diagram in Nambu-Jona-Lasinio model
Inagaki, T; Kohyama, H
2015-01-01
We study the regularization dependence of meson properties and the phase diagram of quark matter using the two-flavor Nambu-Jona-Lasinio model. We find that the meson properties and the phase structure do not differ drastically among the regularization procedures. We also find that the location, and even the existence, of the critical end point depends strongly on the regularization method and the model parameters. We therefore conclude that the regularization and parameters must be chosen carefully when investigating the QCD critical end point in effective model studies.
The Least Regular Order with Respect to a Regular Congruence on Ordered Γ-Semigroups
Manoj SIRIPITUKDET; Aiyared IAMPAN
2012-01-01
The motivation mainly comes from the conditions for congruences to be regular, which are of importance and interest in ordered semigroups. In 1981, Sen introduced the concept of Γ-semigroups; any semigroup can be considered as a Γ-semigroup. In this paper, we introduce and characterize the concept of regular congruences on ordered Γ-semigroups and prove the following statements for an ordered Γ-semigroup M: (1) Every ordered semilattice congruence is a regular congruence. (2) There exists a least regular order on the Γ-semigroup M/ρ with respect to a regular congruence ρ on M. (3) Regular congruences are not ordered semilattice congruences in general.
Spectral Regularization Algorithms for Learning Large Incomplete Matrices.
Mazumder, Rahul; Hastie, Trevor; Tibshirani, Robert
2010-03-01
We use convex relaxation techniques to provide a sequence of regularized low-rank solutions for large-scale matrix completion problems. Using the nuclear norm as a regularizer, we provide a simple and very efficient convex algorithm for minimizing the reconstruction error subject to a bound on the nuclear norm. Our algorithm Soft-Impute iteratively replaces the missing elements with those obtained from a soft-thresholded SVD. With warm starts this allows us to efficiently compute an entire regularization path of solutions on a grid of values of the regularization parameter. The computationally intensive part of our algorithm is in computing a low-rank SVD of a dense matrix. Exploiting the problem structure, we show that the task can be performed with a complexity linear in the matrix dimensions. Our semidefinite-programming algorithm is readily scalable to large matrices: for example it can obtain a rank-80 approximation of a 10^6 × 10^6 incomplete matrix with 10^5 observed entries in 2.5 hours, and can fit a rank-40 approximation to the full Netflix training set in 6.6 hours. Our methods show very good performance both in training and test error when compared to other competitive state-of-the-art techniques.
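The core Soft-Impute iteration, replacing the missing elements with those from a soft-thresholded SVD, can be sketched as follows (a bare-bones version, without the warm starts or the sparse-plus-low-rank SVD speedups the paper relies on for scalability):

```python
import numpy as np

def soft_impute(X, mask, lam, n_iter=200):
    """Minimal Soft-Impute sketch: alternately fill the missing entries
    from the current estimate, then soft-threshold the singular values."""
    Z = np.zeros_like(X)
    for _ in range(n_iter):
        filled = np.where(mask, X, Z)              # observed data + current guesses
        U, s, Vt = np.linalg.svd(filled, full_matrices=False)
        s = np.maximum(s - lam, 0.0)               # soft-threshold the spectrum
        Z = (U * s) @ Vt                           # low-rank update
    return Z

# Toy problem: a rank-2 matrix with roughly half its entries observed.
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 2)) @ rng.standard_normal((2, 30))
mask = rng.random(A.shape) < 0.5
Z = soft_impute(A, mask, lam=0.5)
err = np.linalg.norm((Z - A)[~mask]) / np.linalg.norm(A[~mask])
print(err)  # relative error on the unobserved entries
```

Each iteration is a proximal-gradient step on the nuclear-norm-regularized objective, so the sequence converges to the regularized completion; the paper's contribution is making the per-iteration SVD cheap enough for Netflix-scale matrices.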
Gutierrez T, C.; Flores Ll, H. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)
2004-07-01
The second derivative of the current-voltage (I-V) characteristic curve of a Langmuir probe is numerically calculated using the Tikhonov method to determine the electron energy distribution function (EEDF). A comparison of the EEDF obtained this way with one obtained by a least-squares (LS) fit is discussed. The experimental I-V curve is obtained with a cylindrical probe in an electron cyclotron resonance (ECR) plasma source. The plasma parameters are determined from the EEDF by means of the Laframboise theory. The results of the LS fit are similar to those obtained by the Tikhonov method, but in the LS case the procedure is slow to achieve the best fit. (Author)
A Construction for P-Regular Semigroups
(no author listed)
2000-01-01
A regular semigroup S with a special involution *, i.e., a unary operation on S satisfying (x*)* = x, xx*x = x, and (xy)* = y*x* for all x, y ∈ S, is called a regular *-semigroup [1]. It has been shown by Yamada [2] that a regular semigroup S is a regular *-semigroup if and only if it has a P-system, that is to say, there is a subset P of E(S) such that (c.1) (1) (∀p, q ∈ P) pq ∈ E(S), pqp ∈ P; (2) (∀a ∈ S)(∃! a⁺ ∈ V(a)) aP¹a⁺, a⁺P¹a ⊆ P. As a generalization of regular *-semigroups and orthodox semigroups, Yamada [3] defined P-regular semigroups. Let S be a regular semigroup. A subset P of E(S) is called a C-set in S if (c.2) (1) (∀p, q ∈ P) pq ∈ E(S), pqp ∈ P; (2) (∀a ∈ S)(∃a⁺ ∈ V(a)) aP¹a⁺, a⁺P¹a ⊆ P. In this case, (S, P) forms a P-regular semigroup, in notation S(P). The element a⁺ in (c.2)(2) is called a P-inverse of a, and the set of all P-inverses of a is denoted by V_P(a). S(P) is said to be strong, and P is called a strong C-set in S, if V_P(p) ⊆ P for all p ∈ P. A partial groupoid E together with a partial subgroupoid P forms a P-regular partial band, denoted E(P), if it is exactly the subalgebra of the idempotents in some P-regular semigroup S(P). In this case, S(P) is called an adjacent semigroup of E(P). All P-regular partial bands were obtained in Zhang and He [4].
Counting colorings of a regular graph
Galvin, David
2012-01-01
At most how many (proper) q-colorings does a regular graph admit? Galvin and Tetali conjectured that among all n-vertex, d-regular graphs with 2d|n, none admits more q-colorings than the disjoint union of n/2d copies of the complete bipartite graph K_{d,d}. In this note we give asymptotic evidence for this conjecture, giving an upper bound on the number of proper q-colorings admitted by an n-vertex, d-regular graph of the form a^n b^{n(1+o(1))/d} (where a and b depend on q and where o(1) goes to 0 as d goes to infinity) that agrees up to the o(1) term with the count of q-colorings of n/2d copies of K_{d,d}. An auxiliary result is an upper bound on the number of colorings of a regular graph in terms of its independence number. For example, we show that for all even q and fixed \\epsilon > 0 there is \\delta=\\delta(\\epsilon,q) such that the number of proper q-colorings admitted by an n-vertex, d-regular graph with no independent set of size n(1-\\epsilon)/2 is at most (a-\\delta)^n.
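For intuition (a toy check, not part of the paper's argument), one can brute-force count proper 3-colorings of two 2-regular graphs on 8 vertices: the cycle C_8 versus n/2d = 2 disjoint copies of K_{2,2} (which is C_4), and observe that the disjoint union of complete bipartite graphs admits more colorings, as the conjecture predicts:

```python
from itertools import product

def count_colorings(n, edges, q):
    """Brute-force count of proper q-colorings of a graph on n vertices."""
    return sum(
        all(c[u] != c[v] for u, v in edges)
        for c in product(range(q), repeat=n)
    )

# Two 2-regular graphs on 8 vertices:
c8 = [(i, (i + 1) % 8) for i in range(8)]                      # the cycle C_8
two_k22 = [(0, 1), (1, 2), (2, 3), (3, 0),                     # K_{2,2} = C_4 ...
           (4, 5), (5, 6), (6, 7), (7, 4)]                     # ... taken twice

print(count_colorings(8, c8, 3))       # 258 = 2^8 + 2 (chromatic polynomial of C_8)
print(count_colorings(8, two_k22, 3))  # 324 = (2^4 + 2)^2 -- the K_{d,d} union wins
```

The counts match the chromatic polynomial of a cycle, (q-1)^n + (-1)^n (q-1), evaluated at q = 3, so 324 > 258 is consistent with the Galvin-Tetali conjecture for this tiny instance.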