A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many of the popular algorithms currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
Robust Homography Estimation Based on Nonlinear Least Squares Optimization
Directory of Open Access Journals (Sweden)
Wei Mou
2014-01-01
The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but perfect matching is unlikely to always be achieved. To deal with this problem, we propose a nonlinear least squares optimization approach to compute the homography such that false matches have little or no effect on the computed homography. Unlike standard homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be identified while the homography is simultaneously computed. Experiments show that the proposed approach performs well even in the presence of a large number of outliers.
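The robust-cost idea in this entry can be sketched in a few lines. The sketch below is illustrative only: it fits a 1-D affine map instead of a full homography, and uses SciPy's generic `soft_l1` loss rather than the authors' descriptor-aware cost function; all data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(0)

# Toy stand-in for the paper's setting: estimate a 1-D affine map y = a*x + b
# from point "correspondences", 30% of which are gross outliers.
x = rng.uniform(0, 10, 100)
y = 2.0 * x + 1.0 + rng.normal(0, 0.05, 100)
outliers = rng.choice(100, 30, replace=False)
y[outliers] += rng.uniform(5, 20, 30)          # wrong "matches"

def residuals(p):
    a, b = p
    return a * x + b - y

# The soft_l1 loss caps the influence of bad correspondences,
# so they have little effect on the recovered parameters.
fit = least_squares(residuals, x0=[1.0, 0.0], loss="soft_l1", f_scale=0.1)
a_hat, b_hat = fit.x
```

With 30% gross outliers, the robust loss recovers parameters close to the generating values (a = 2, b = 1), whereas a plain quadratic loss would typically be pulled away by the bad matches.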
Liu, Jingwei; Liu, Yi; Xu, Meizhi
2015-01-01
A parameter estimation method for the Jelinski-Moranda (JM) model based on weighted nonlinear least squares (WNLS) is proposed. The formulae for resolving the WNLS parameter estimate (WNLSE) are derived, and the empirical weight function and the heteroscedasticity problem are discussed. The effects of optimal parameter estimation selection based on the maximum likelihood estimation (MLE) method, the least squares estimation (LSE) method and the weighted nonlinear least squares estimation (WNLSE) method are al...
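The weighting idea is easy to state in code: scaling each residual by the square root of its weight turns an ordinary nonlinear least squares solver into a WNLS solver. The model and weights below are hypothetical stand-ins (an exponential decay with heteroscedastic noise), not the JM formulae.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(1)

# Hypothetical toy model: lam(t) = a * exp(-b t), observed with variance
# growing in t, so later observations get smaller weights w_i = 1 / var_i.
t = np.linspace(0, 5, 60)
sigma = 0.05 * (1 + t)                   # heteroscedastic noise level
y = 4.0 * np.exp(-0.7 * t) + rng.normal(0, sigma)

w = 1.0 / sigma**2                       # empirical weight function

def weighted_residuals(p):
    a, b = p
    # Multiplying by sqrt(w) makes the solver minimize sum_i w_i * r_i^2.
    return np.sqrt(w) * (y - a * np.exp(-b * t))

fit = least_squares(weighted_residuals, x0=[1.0, 0.1])
a_hat, b_hat = fit.x
```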
Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao... estimation. Moreover, simulations on real-life data indicate that the NLS and aNLS methods are applicable even when reverberation is present and the noise is not white Gaussian.
Bootstrapping Nonlinear Least Squares Estimates in the Kalman Filter Model.
1986-01-01
[Table residue: bias values for the bootstrap, Newton-Raphson, and empirical estimates.] ...most cases, parameter estimation for the KF model has been accomplished by maximum likelihood techniques involving the use of scoring or Newton-Raphson... is well behaved, the Newton-Raphson and scoring procedures enjoy quadratic convergence in the neighborhood of the maximum and one has a ready-made
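The bootstrap idea in this entry can be sketched as follows: fit once, resample the residuals, and refit repeatedly to approximate the bias and standard error of the estimator. The sketch uses a generic exponential NLS model, not the paper's Kalman filter setup.

```python
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)

# Residual bootstrap for a generic NLS estimate (toy model and values).
def model(t, a, b):
    return a * np.exp(-b * t)

t = np.linspace(0, 4, 50)
y = model(t, 3.0, 0.8) + rng.normal(0, 0.1, 50)

p_hat, _ = curve_fit(model, t, y, p0=[1.0, 0.1])
resid = y - model(t, *p_hat)

boots = []
for _ in range(200):
    # Resample residuals with replacement and refit the model.
    y_star = model(t, *p_hat) + rng.choice(resid, size=resid.size, replace=True)
    p_star, _ = curve_fit(model, t, y_star, p0=p_hat)
    boots.append(p_star)
boots = np.array(boots)

bias_est = boots.mean(axis=0) - p_hat      # bootstrap bias estimate
se_est = boots.std(axis=0)                 # bootstrap standard errors
```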
Institute of Scientific and Technical Information of China (English)
陶华学; 郭金运
2002-01-01
Using difference quotients instead of derivatives, the paper presents a solution method and procedure for nonlinear least squares estimation containing different classes of measurements. The paper also presents several practical cases, which indicate that the method is valid and reliable.
Cao, Jiguo; Huang, Jianhua Z; Wu, Hulin
2012-01-01
Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are approximated also using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate a HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.
Institute of Scientific and Technical Information of China (English)
无
2002-01-01
A new robust on-line fault diagnosis method based on least squares estimation for nonlinear difference-algebraic systems (DAS) with uncertainties is proposed. Based on the known nominal model of the DAS, the method first constructs an auxiliary system consisting of a difference equation and an algebraic equation; then, based on the relationship between the state deviation and the faults in the difference equation and between the algebraic-variable deviation and the faults in the algebraic equation, it identifies the faults on-line through least squares estimation. The method can not only detect, isolate and identify faults for a DAS, but also give an upper bound on the fault-identification error. Simulation results indicate that it gives satisfactory diagnostic results for both abrupt and incipient faults.
Nonlinear Least-Squares Time-Difference Estimation from Sub-Nyquist-Rate Samples
Harada, Koji; Sakai, Hideaki
In this paper, time-difference estimation of filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random signal model, then use the IRS results to drive the nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even for cases with sub-Nyquist sampling, assuming the use of compactly supported sampling kernels that satisfy the recently developed nonaliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method can improve performance over the straightforward IRS method, and provides approximately the same performance as the NLS method with a reduced sampling rate, even for closely spaced time delays. This enables, for a fixed observation time, a significant reduction in the required number of samples while maintaining the same level of estimation performance.
Nonlinear Least Squares for Inverse Problems
Chavent, Guy
2009-01-01
Presents an introduction to the least squares resolution of nonlinear inverse problems. This title intends to develop a geometrical theory for analyzing nonlinear least squares (NLS) problems with respect to their quadratic wellposedness, that is, both wellposedness and optimizability.
DEFF Research Database (Denmark)
Nielsen, Jesper Kjær; Jensen, Tobias Lindstrøm; Jensen, Jesper Rindom;
2016-01-01
In many spectral estimation and array processing problems, the process of finding estimates of model parameters often involves the optimisation of a cost function containing multiple peaks and dips. Such non-convex problems are hard to solve using traditional optimisation algorithms developed... time. Additionally, we show via three common examples how the grid size depends on parameters such as the number of data points or the number of sensors in DOA estimation. We also demonstrate that the computation time can potentially be lowered by several orders of magnitude by combining a coarse grid...
On Perceptual Distortion Minimization and Nonlinear Least-Squares Frequency Estimation
DEFF Research Database (Denmark)
Christensen, Mads Græsbøll; Jensen, Søren Holdt
2006-01-01
In this paper, we present a framework for perceptual error minimization and sinusoidal frequency estimation based on a new perceptual distortion measure, and we state its optimal solution. Using this framework, we relate a number of well-known practical methods for perceptual sinusoidal parameter...
Angelis, Georgios I; Matthews, Julian C; Kotasidis, Fotis A; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J
2014-11-01
Estimation of nonlinear micro-parameters is a computationally demanding and fairly challenging process, since it involves the use of rather slow iterative nonlinear fitting algorithms and it often results in very noisy voxel-wise parametric maps. Direct reconstruction algorithms can provide parametric maps with reduced variance, but usually the overall reconstruction is impractically time-consuming with common nonlinear fitting algorithms. In this work we employed a recently proposed direct parametric image reconstruction algorithm to estimate the parametric maps of all micro-parameters of a two-tissue compartment model, used to describe the kinetics of [18F]FDG. The algorithm decouples the tomographic and the kinetic modelling problems, allowing the use of previously developed post-reconstruction methods, such as the generalised linear least squares (GLLS) algorithm. Results on both clinical and simulated data showed that the proposed direct reconstruction method provides considerable quantitative and qualitative improvements for all micro-parameters compared to the conventional post-reconstruction fitting method. Additionally, region-wise comparison of all parametric maps against the well-established filtered back projection followed by post-reconstruction nonlinear fitting, as well as the direct Patlak method, showed substantial quantitative agreement in all regions. The proposed direct parametric reconstruction algorithm is a promising approach towards the estimation of all individual micro-parameters of any compartment model. In addition, due to the linearised nature of the GLLS algorithm, the fitting step can be implemented very efficiently and, therefore, does not considerably affect the overall reconstruction time.
Diagonal loading least squares time delay estimation
Institute of Scientific and Technical Information of China (English)
LI Xuan; YAN Shefeng; MA Xiaochuan
2012-01-01
Least squares (LS) time delay estimation is a classical and effective method. However, its performance degrades severely at low signal-to-noise ratio (SNR) owing to the instability of matrix inversion. To solve this problem, diagonal loading least squares (DL-LS) is proposed, in which a positive definite matrix is added to the matrix being inverted. Furthermore, the shortcoming of fixed diagonal loading is analyzed from the standpoint of regularization: as the tolerance to low SNR is increased, accuracy is decreased. This problem is resolved by reloading. The reciprocal of the primary estimate is introduced as the diagonal loading, which yields a small loading at the time of arrival and a larger loading at other times. Simulations and a pool experiment show that the algorithm performs better.
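The core of diagonal loading is a one-line change to the normal equations: invert AᵀA + μI instead of AᵀA. A minimal sketch with a synthetic ill-conditioned system (μ is fixed here; the paper's reloading scheme chooses it adaptively, and the paper's A comes from a time delay model):

```python
import numpy as np

rng = np.random.default_rng(3)

# Generic least squares solve with near-collinear columns: the plain
# normal-equations solution is unstable, the diagonally loaded one is not.
A = rng.normal(size=(40, 5))
A[:, 4] = A[:, 3] + 1e-6 * rng.normal(size=40)   # near-collinear columns
x_true = np.ones(5)
b = A @ x_true + rng.normal(0, 0.1, 40)

mu = 0.1                                          # loading level (fixed)
x_plain = np.linalg.solve(A.T @ A, A.T @ b)       # unstable at low SNR
x_loaded = np.linalg.solve(A.T @ A + mu * np.eye(5), A.T @ b)

err_plain = np.linalg.norm(x_plain - x_true)
err_loaded = np.linalg.norm(x_loaded - x_true)
```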
A NEW SOLUTION MODEL OF NONLINEAR DYNAMIC LEAST SQUARE ADJUSTMENT
Institute of Scientific and Technical Information of China (English)
陶华学; 郭金运
2000-01-01
Nonlinear least squares adjustment is an important research topic in technical fields. The paper studies a non-derivative solution to the nonlinear dynamic least squares adjustment and puts forward a new algorithm model and its solution model. The method has a small computational load and is simple. This opens up a theoretical approach to solving the nonlinear dynamic least squares adjustment.
A Note on Separable Nonlinear Least Squares Problem
Gharibi, Wajeb
2011-01-01
The separable nonlinear least squares (SNLS) problem is a special class of nonlinear least squares (NLS) problems, whose objective function is a mixture of linear and nonlinear functions. It has applications in many different areas, especially in operations research and computer science. Such problems are difficult to solve with the infinite-norm metric. In this paper, we give a short note on the separable nonlinear least squares problem and the unseparated scheme for NLS, and propose an algorithm for solving the mixed linear-nonlinear minimization problem, which results in solving a series of separable least squares problems.
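The separable structure can be exploited with the classic variable-projection idea (not this note's specific algorithm): eliminate the linear parameter in closed form and search only over the nonlinear one. A sketch for the toy model y ≈ a·exp(−bt), with synthetic data:

```python
import numpy as np
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(4)

# Separable model y = a * exp(-b t): a enters linearly, b nonlinearly.
t = np.linspace(0, 5, 50)
y = 2.5 * np.exp(-0.9 * t) + rng.normal(0, 0.05, 50)

def projected_sse(b):
    phi = np.exp(-b * t)                 # basis column for this b
    a = (phi @ y) / (phi @ phi)          # closed-form LS solution for a
    r = y - a * phi
    return r @ r                         # SSE with a already eliminated

# Only a 1-D search over the nonlinear parameter b remains.
res = minimize_scalar(projected_sse, bounds=(0.01, 5.0), method="bounded")
b_hat = res.x
phi = np.exp(-b_hat * t)
a_hat = (phi @ y) / (phi @ phi)
```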
Hays, J. R.
1969-01-01
Lumped-parameter system models are simplified and computationally advantageous in the frequency domain of linear systems. A nonlinear least squares computer program finds the least squares best estimate for any number of parameters in an arbitrarily complicated model.
Simple procedures for imposing constraints for nonlinear least squares optimization
Energy Technology Data Exchange (ETDEWEB)
Carvalho, R. [Petrobras, Rio de Janeiro (Brazil); Thompson, L.G.; Redner, R.; Reynolds, A.C. [Univ. of Tulsa, OK (United States)
1995-12-31
Nonlinear regression methods (least squares, least absolute value, etc.) have gained acceptance as practical technology for analyzing well-test pressure data. Even for relatively simple problems, however, commonly used algorithms sometimes converge to nonfeasible parameter estimates (e.g., negative permeabilities), resulting in a failure of the method. The primary objective of this work is to present a new method for imaging the objective function across all boundaries imposed to satisfy physical constraints on the parameters. The algorithm is extremely simple and reliable. The method uses an equivalent unconstrained objective function to impose the physical constraints required in the original problem. Thus, it can be used with standard unconstrained least squares software without reprogramming and provides a viable alternative to penalty functions for imposing constraints when estimating well and reservoir parameters from pressure transient data. In this work, the authors also present two methods of implementing the penalty function approach for imposing parameter constraints in a general unconstrained least squares algorithm. Based on their experience, the new imaging method always converges to a feasible solution in less time than the penalty function methods.
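One simple way to impose such constraints in unconstrained software (a standard reparametrization trick, not the authors' imaging method) is to optimize the logarithm of a positive parameter, so a negative permeability can never occur. The pressure-decline model and values below are hypothetical:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(5)

# Toy decline model p(t) = p0 * exp(-k t) with a physically positive rate k.
t = np.linspace(0, 10, 40)
p0 = 100.0
p_obs = p0 * np.exp(-0.3 * t) + rng.normal(0, 0.5, 40)

def residuals(theta):
    k = np.exp(theta[0])          # k = exp(theta) > 0 by construction
    return p0 * np.exp(-k * t) - p_obs

# An ordinary unconstrained solver now respects the constraint implicitly.
fit = least_squares(residuals, x0=[np.log(1.0)])
k_hat = np.exp(fit.x[0])
```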
An Algorithm to Solve Separable Nonlinear Least Square Problem
Directory of Open Access Journals (Sweden)
Wajeb Gharibi
2013-07-01
The separable nonlinear least squares (SNLS) problem is a special class of nonlinear least squares (NLS) problems, whose objective function is a mixture of linear and nonlinear functions. SNLS has many applications in several areas, especially in the fields of operations research and computer science. Problems in the NLS class are hard to solve under the infinite-norm metric. This paper gives a brief explanation of the SNLS problem and offers a Lagrangian-based algorithm for solving the mixed linear-nonlinear minimization problem.
An Algorithm for Positive Definite Least Square Estimation of Parameters.
1986-05-01
This document presents an algorithm for positive definite least squares estimation of parameters. This estimation problem arises from the PILOT... dynamic macro-economic model and is equivalent to an infinite convex quadratic program. It differs from ordinary least squares estimation in that the
Abnormal behavior of the least squares estimate of multiple regression
Institute of Scientific and Technical Information of China (English)
陈希孺; 安鸿志
1997-01-01
An example is given to reveal the abnormal behavior of the least squares estimate in multiple regression. It is shown that the least squares estimate of the multiple linear regression may be "improved" in the sense of weak consistency when nuisance parameters are introduced into the model. A discussion of the implications of this finding is given.
A Hybrid Method for Nonlinear Least Squares Problems
Institute of Scientific and Technical Information of China (English)
Zhongyi Liu; Linping Sun
2007-01-01
A negative curvature method is applied to nonlinear least squares problems with indefinite Hessian approximation matrices. With the special structure of the method, a new switch is proposed to form a hybrid method. Numerical experiments show that this method is feasible and effective for zero-residual, small-residual and large-residual problems.
Liu, Jingwei
2011-01-01
A function-based nonlinear least squares estimation (FNLSE) method is proposed and investigated for parameter estimation of the Jelinski-Moranda software reliability model. FNLSE extends the potential fitting functions of traditional least squares estimation (LSE), and takes the logarithm-transformed nonlinear least squares estimation (LogLSE) as a special case. A novel power-transformation-based nonlinear least squares estimation (powLSE) is proposed and applied to the parameter estimation of the Jelinski-Moranda model. Solved with the Newton-Raphson method, both the LogLSE and powLSE of the Jelinski-Moranda model are applied to mean-time-between-failures (MTBF) predictions on six standard software failure-time data sets. The experimental results demonstrate the effectiveness of powLSE with the optimal power index compared to classical least squares estimation (LSE), maximum likelihood estimation (MLE) and LogLSE in terms of the recursive relative error (RE) index and the Braun statistic index.
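The power transformation can be illustrated directly on the Jelinski-Moranda mean time between failures, m_i = 1/(φ(N − i + 1)): raise both the data and the model to a power p and minimize the squared differences. The sketch below uses noise-free synthetic data and a coarse grid search in place of Newton-Raphson; the values of N, φ and p are illustrative only.

```python
import numpy as np

# Synthetic JM data: MTBF values for the first 30 failures.
N_true, phi_true = 60, 0.02
i = np.arange(1, 31)
x = 1.0 / (phi_true * (N_true - i + 1))            # observed MTBF values

p = 0.5                                            # power index (illustrative)

def pow_sse(N, phi):
    # Power-transformed least squares objective for the JM model.
    m = 1.0 / (phi * (N - i + 1))
    return np.sum((x**p - m**p) ** 2)

grid_N = np.arange(35, 101)
grid_phi = np.linspace(0.005, 0.05, 91)
best = min((pow_sse(N, ph), N, ph) for N in grid_N for ph in grid_phi)
_, N_hat, phi_hat = best
```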
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
DEFF Research Database (Denmark)
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
Multisplitting for linear, least squares and nonlinear problems
Energy Technology Data Exchange (ETDEWEB)
Renaut, R.
1996-12-31
In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and of nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
Ramoelo, A.; Skidmore, A. K.; Cho, M. A.; Mathieu, R.; Heitkönig, I. M. A.; Dudeni-Tlhone, N.; Schlerf, M.; Prins, H. H. T.
2013-08-01
Grass nitrogen (N) and phosphorus (P) concentrations are direct indicators of rangeland quality and provide imperative information for sound management of wildlife and livestock. It is challenging to estimate grass N and P concentrations using remote sensing in the savanna ecosystems. These areas are diverse and heterogeneous in soil and plant moisture, soil nutrients, grazing pressures, and human activities. The objective of the study is to test the performance of non-linear partial least squares regression (PLSR) for predicting grass N and P concentrations through integrating in situ hyperspectral remote sensing and environmental variables (climatic, edaphic and topographic). Data were collected along a land use gradient in the greater Kruger National Park region. The data consisted of: (i) in situ-measured hyperspectral spectra, (ii) environmental variables and measured grass N and P concentrations. The hyperspectral variables included published starch, N and protein spectral absorption features, red edge position, narrow-band indices such as simple ratio (SR) and normalized difference vegetation index (NDVI). The results of the non-linear PLSR were compared to those of conventional linear PLSR. Using non-linear PLSR, integrating in situ hyperspectral and environmental variables yielded the highest grass N and P estimation accuracy (R2 = 0.81, root mean square error (RMSE) = 0.08, and R2 = 0.80, RMSE = 0.03, respectively) as compared to using remote sensing variables only, and conventional PLSR. The study demonstrates the importance of an integrated modeling approach for estimating grass quality which is a crucial effort towards effective management and planning of protected and communal savanna ecosystems.
Performance analysis of the Least-Squares estimator in Astrometry
Lobos, Rodrigo A; Mendez, Rene A; Orchard, Marcos
2015-01-01
We characterize the performance of the widely used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not admit a closed-form expression, but a new result is presented (Theorem 1) in which both the bias and the mean-square error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum-variance Cramer-Rao bound. Based on our results, we show that, for the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated...
Nonparametric Least Squares Estimation of a Multivariate Convex Regression Function
Seijo, Emilio
2010-01-01
This paper deals with the consistency of the least squares estimator of a convex regression function when the predictor is multidimensional. We characterize and discuss the computation of such an estimator via the solution of certain quadratic and linear programs. Mild sufficient conditions for the consistency of this estimator and its subdifferentials in fixed and stochastic design regression settings are provided. We also consider a regression function which is known to be convex and componentwise nonincreasing and discuss the characterization, computation and consistency of its least squares estimator.
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known principle of least-squares. With this method the estimation of the (co)variance components is based on a linear model of observation equations. The method is flexible since it works with a user-defined we...
A least squares estimation method for the linear learning model
B. Wierenga (Berend)
1978-01-01
textabstractThe author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported, a
Institute of Scientific and Technical Information of China (English)
Xin LIU; Guo WEI; Jin-wei SUN; Dan LIU
2009-01-01
Least squares support vector machines (LS-SVMs) are modified support vector machines (SVMs) that involve equality constraints and work with a least squares cost function, which simplifies the optimization procedure. In this paper, a novel training algorithm based on total least squares (TLS) for an LS-SVM is presented and applied to multifunctional sensor signal reconstruction. For three different nonlinearities of a multifunctional sensor model, the reconstruction accuracies of the input signals are 0.00136%, 0.03184% and 0.50480%, respectively. The experimental results demonstrate the higher reliability and accuracy of the proposed method for multifunctional sensor signal reconstruction than the original LS-SVM training algorithm, and verify the feasibility and stability of the proposed method.
On derivative estimation and the solution of least squares problems
Belward, John A.; Turner, Ian W.; Ilic, Milos
2008-12-01
Surface interpolation finds application in many aspects of science and technology. Two specific areas of interest are surface reconstruction techniques for plant architecture and approximating cell face fluxes in the finite volume discretisation strategy for solving partial differential equations numerically. An important requirement of both applications is accurate local gradient estimation. In surface reconstruction this gradient information is used to increase the accuracy of the local interpolant, while in the finite volume framework accurate gradient information is essential to ensure second order spatial accuracy of the discretisation. In this work two different least squares strategies for approximating these local gradients are investigated and the errors associated with each analysed. It is shown that although the two strategies appear different, they produce the same least squares error. Some carefully chosen case studies are used to elucidate this finding.
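The local gradient estimation that both applications rely on reduces to a small least squares solve: stack the displacement vectors to the neighbouring points and regress the function differences on them. A minimal sketch (the geometry and test function are illustrative, not from the paper):

```python
import numpy as np

# Least squares local gradient estimation: solve dX @ g ~= df for g.
centre = np.array([0.5, 0.5])
neighbours = np.array([[0.6, 0.5], [0.5, 0.7], [0.3, 0.4], [0.55, 0.65]])

def f(p):
    return 2.0 * p[0] + 3.0 * p[1]      # linear test function, gradient (2, 3)

dX = neighbours - centre                 # displacement vectors (one per row)
df = np.array([f(q) - f(centre) for q in neighbours])

# Overdetermined 4x2 system solved in the least squares sense.
g, *_ = np.linalg.lstsq(dX, df, rcond=None)
```

For a linear function the least squares gradient is exact; for smooth nonlinear functions it is a first-order approximation whose error the paper analyses.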
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error estimator (LMMSE), when the elements of x are statistically white.
Nonlinear least-squares data fitting in Excel spreadsheets.
Kemmer, Gerdi; Keller, Sandro
2010-02-01
We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
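The same workflow can be reproduced outside Excel; the sketch below fits the Michaelis-Menten equation with SciPy's `curve_fit` in place of the Solver add-in. The rate data are made up for illustration.

```python
import numpy as np
from scipy.optimize import curve_fit

# Michaelis-Menten kinetics: v = Vmax * S / (Km + S).
def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

# Hypothetical substrate concentrations and measured rates.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
v = np.array([0.28, 0.46, 0.70, 0.95, 1.12, 1.25, 1.31])

# Minimize the sum of squared residuals, as the Solver protocol does.
popt, pcov = curve_fit(michaelis_menten, S, v, p0=[1.0, 1.0])
Vmax_hat, Km_hat = popt
perr = np.sqrt(np.diag(pcov))          # rough standard errors of the fit
```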
Non-linear Least Squares Fitting in IDL with MPFIT
Markwardt, Craig B
2009-01-01
MPFIT is a port to IDL of the non-linear least squares fitting program MINPACK-1. MPFIT inherits the robustness of the original FORTRAN version of MINPACK-1, but is optimized for performance and convenience in IDL. In addition to the main fitting engine, MPFIT, several specialized functions are provided to fit 1-D curves and 2-D images; 1-D and 2-D peaks; and interactive fitting from the IDL command line. Several constraints can be applied to model parameters, including fixed constraints, simple bounding constraints, and "tying" the value to another parameter. Several data weighting methods are allowed, and the parameter covariance matrix is computed. Extensive diagnostic capabilities are available during the fit, via a call-back subroutine, and after the fit is complete. Several different forms of documentation are provided, including a tutorial, reference pages, and frequently asked questions. The package has been translated to C and Python as well. The full IDL and C packages can be found at http://purl.co...
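A rough Python analogue of this workflow (using SciPy's trust-region least-squares driver rather than MPFIT itself) fits a 1-D Gaussian peak with the kind of simple bounding constraints MPFIT supports; the data are synthetic.

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(6)

# Synthetic 1-D Gaussian peak: amplitude 3.0, centre 0.5, width 1.2.
x = np.linspace(-5, 5, 200)
y = 3.0 * np.exp(-0.5 * ((x - 0.5) / 1.2) ** 2) + rng.normal(0, 0.05, 200)

def residuals(p):
    a, m, s = p
    return a * np.exp(-0.5 * ((x - m) / s) ** 2) - y

# Simple bounding constraints on (amplitude, centre, width),
# analogous to MPFIT's parameter-constraint mechanism.
fit = least_squares(residuals, x0=[1.0, 0.0, 1.0],
                    bounds=([0.0, -5.0, 0.1], [10.0, 5.0, 5.0]))
amp_hat, mu_hat, sig_hat = fit.x
```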
Institute of Scientific and Technical Information of China (English)
TAO Hua-xue; GUO Jin-yun
2005-01-01
The propagation and calculation of the unknown parameters' variance-covariance in generalized nonlinear least squares remain to be studied, and have not appeared in the domestic or foreign literature. The variance-covariance propagation formula for the unknown parameters, taking second-power terms into account, is derived and used to evaluate the accuracy of the unknown parameter estimators in the generalized nonlinear least squares problem. It is a new variance-covariance formula and opens up a new way to evaluate accuracy when processing data that are multi-source, multi-dimensional, multi-type, multi-time-state, of differing accuracy, and nonlinear.
Least square estimation of phase, frequency and PDEV
Danielson, Magnus; Rubiola, Enrico
2016-01-01
The Omega-preprocessing was introduced to improve phase-noise rejection by using a least squares algorithm. The associated variance, PVAR, is more efficient than MVAR at separating the different noise types. However, unlike AVAR and MVAR, decimation of PVAR estimates for multi-tau analysis is not possible if each counter measurement is a single scalar. This paper gives a decimation rule based on two scalars, the processing blocks, for each measurement. For the Omega-preprocessing, this implies the definition of an output standard as well as hardware requirements for performing high-speed computation of the blocks.
Estimating Military Aircraft Cost Using Least Squares Support Vector Machines
Institute of Scientific and Technical Information of China (English)
ZHU Jia-yuan; ZHANG Xi-bin; ZHANG Heng-xi; REN Bo
2004-01-01
A multi-layer adaptive parameter-optimization algorithm is developed to improve least squares support vector machines (LS-SVM), and a military aircraft life-cycle-cost (LCC) intelligent estimation model is proposed based on the improved LS-SVM. The intelligent cost estimation process is divided into three steps in the model. In the first step, a cost-driver factor is selected, which is significant for cost estimation. In the second step, military aircraft training samples over the costs and the cost-driver factor set are obtained by the LS-SVM. The model can then be used for cost estimation of new aircraft types. Chinese military aircraft costs are estimated in the paper. The results show that the costs estimated by the new model are closer to the true costs than those of traditionally used methods.
1985-05-01
first generated the errors and response variables. The errors, εi, were produced using the Marsaglia and Tsang pseudo-normal random number algorithm... "Asymptotic properties of non-linear least squares estimators," The Annals of Mathematical Statistics, 40(2), pp. 633-643. Marsaglia, G., Tsang, W.
Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2007-01-01
This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying...... and satellite positioning application examples. In these application areas we are typically interested in the parameters in the model typically 2- or 3-D positions and not in predictive modelling which is often the main concern in other regression analysis applications. Adjustment is often used to obtain...
Robust regularized least-squares beamforming approach to signal estimation
Suliman, Mohamed
2017-05-12
In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. First, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Second, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers that use standard regularization approaches.
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods, (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts), are established through their respective objective functions, and higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
Ramoelo, A.; Skidmore, A.K.; Cho, M.A.; Mathieu, R.; Heitkonig, I.M.A.; Dudeni-Tlhone, N.; Schlerf, M.; Prins, H.H.T.
2013-01-01
Grass nitrogen (N) and phosphorus (P) concentrations are direct indicators of rangeland quality and provide imperative information for sound management of wildlife and livestock. It is challenging to estimate grass N and P concentrations using remote sensing in the savanna ecosystems. These areas ar
LEAST-SQUARES MIXED FINITE ELEMENT METHODS FOR NONLINEAR PARABOLIC PROBLEMS
Institute of Scientific and Technical Information of China (English)
Dan-ping Yang
2002-01-01
Two least-squares mixed finite element schemes are formulated to solve the initial-boundary value problem of a nonlinear parabolic partial differential equation, and the convergence of these schemes is analyzed.
Institute of Scientific and Technical Information of China (English)
TAO Hua-xue (陶华学); GUO Jin-yun (郭金运)
2003-01-01
Data are very important for building the digital mine. They come from many sources, are of different types, and have different temporal states; the relations among classes of data, or between data and unknown parameters, are often nonlinear. The unknown parameters may be non-random or random, and the random parameters often vary dynamically with time. It is therefore neither accurate nor reliable to process such data with the classical least squares method or the ordinary nonlinear least squares method. A generalized nonlinear dynamic least squares method for processing data in building the digital mine is put forward, together with the corresponding mathematical model. The generalized nonlinear least squares problem is more complex than the ordinary nonlinear least squares problem, and its solution is harder to obtain because the dimensions of the data and parameters are larger. A new solution model and method are therefore put forward. The problem can be converted into two sub-problems, each with a single variable; that is, a complex problem can be separated and then solved. The dimension of the unknown parameters can thus be reduced by half, which simplifies the original high-dimensional equations. The method lessens the computational load and opens up a new way to process digital-mine data that have many sources, different types, and many temporal states.
Computational Issues in Linear Least-Squares Estimation and Control
1979-06-06
Algorithms for Parallel Processing in Optimal Estimation," to appear in Automatica, May 1979. Newton, Isaac, [1726], Philosophiæ Naturalis Principia Mathematica, H. Pemberton, Ed. (G. & J. Innys, London, ed. 3). [1934], Mathematical Principles of Natural Philosophy, A. Motte, Translation, F. Cajori, Ed.
Application of the Marquardt least-squares method to the estimation of pulse function parameters
Lundengård, Karl; Rančić, Milica; Javor, Vesna; Silvestrov, Sergei
2014-12-01
Application of the Marquardt least-squares method (MLSM) to the estimation of non-linear parameters of functions used for representing various lightning current waveshapes is presented in this paper. Parameters are determined for the Pulse, Heidler's and DEXP function representing the first positive, first and subsequent negative stroke currents as given in IEC 62305-1 Standard Ed.2, and also for some other fast- and slow-decaying lightning current waveshapes. The results prove the ability of the MLSM to be used for the estimation of parameters of the functions important in lightning discharge modeling.
Nonlinear decoupling controller design based on least squares support vector regression
Institute of Scientific and Technical Information of China (English)
WEN Xiang-jun; ZHANG Yu-nong; YAN Wei-wu; XU Xiao-ming
2006-01-01
Support Vector Machines (SVMs) have been widely used in pattern recognition and have also drawn considerable interest in control areas. Based on a method of least squares SVM (LS-SVM) for multivariate function estimation, a generalized inverse system is developed for the linearization and decoupling control of a general nonlinear continuous system. The approach of inverse modelling via LS-SVM and parameter optimization using the Bayesian evidence framework is discussed in detail. In this paper, a complex high-order nonlinear system is decoupled into a number of pseudo-linear Single Input Single Output (SISO) subsystems with linear dynamic components. The poles of the pseudo-linear subsystems can be configured to desired positions. The proposed method provides an effective alternative for the controller design of plants whose accurate mathematical model is unknown or whose state variables are difficult or impossible to measure. Simulation results showed the efficacy of the method.
A Simple Introduction to Moving Least Squares and Local Regression Estimation
Energy Technology Data Exchange (ETDEWEB)
Garimella, Rao Veerabhadra [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-22
In this brief note, a highly simplified introduction to estimating functions over a set of particles is presented. The note starts from Global Least Squares fitting, goes on to Moving Least Squares estimation (MLS), and finally, Local Regression Estimation (LRE).
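The progression described in the note can be condensed into a small example: a moving least squares estimate is just a weighted least squares fit recentered at each evaluation point. The data and bandwidth below are illustrative:

```python
import numpy as np

# Scattered "particle" data sampling a linear field (recovered exactly by a linear basis)
xs = np.linspace(0.0, 1.0, 21)
ys = 2.0 * xs + 1.0

def mls_estimate(x0, xs, ys, h=0.2):
    """Moving least squares estimate at x0 with a Gaussian weight of bandwidth h."""
    w = np.exp(-((xs - x0) / h) ** 2)        # weights decay away from x0
    A = np.vstack([np.ones_like(xs), xs]).T  # linear basis [1, x]
    # Solve the weighted normal equations (A^T W A) c = A^T W y
    W = np.diag(w)
    coeffs = np.linalg.solve(A.T @ W @ A, A.T @ W @ ys)
    return coeffs[0] + coeffs[1] * x0

value = mls_estimate(0.37, xs, ys)
```

Because the sampled field lies in the span of the basis, the MLS estimate reproduces it exactly; for general data it yields a smooth local fit.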
Acceleration Control in Nonlinear Vibrating Systems based on Damped Least Squares
Pilipchuk, V N
2011-01-01
A discrete-time control algorithm using damped least squares is introduced for acceleration and energy-exchange control in nonlinear vibrating systems. It is shown that the damping constant of the least squares and the sampling time step of the controller must be inversely related to ensure that shrinking the time step has little effect on the results. The algorithm is illustrated on two linearly coupled Duffing oscillators near the 1:1 internal resonance. In particular, it is shown that varying the dissipation ratio of one of the two oscillators can significantly suppress the nonlinear beat phenomenon.
Padovan, J.; Lackney, J.
1986-01-01
The current paper develops a constrained hierarchical least squares nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of degree-of-freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least squares methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2016-12-15
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the parameter vector of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. The results confirm that the proposed method has a fast convergence rate and robust characteristics, which increase the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% for a CSTR process in which about 400 data points are used.
Borodachev, S. M.
2016-06-01
A simple derivation of the recursive least squares (RLS) method equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
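The RLS recursion that such a derivation arrives at takes only a few lines. The sketch below (noise-free synthetic data; unit measurement variance assumed) shows the gain and covariance updates:

```python
import numpy as np

rng = np.random.default_rng(0)
theta_true = np.array([2.0, -1.0])

# Recursive least squares: estimate and covariance updated per observation
theta = np.zeros(2)
P = 1e6 * np.eye(2)  # large initial covariance acts as a diffuse prior
for _ in range(200):
    h = rng.normal(size=2)          # changing observation conditions (random regressor)
    y = h @ theta_true              # noise-free measurement, for clarity
    K = P @ h / (1.0 + h @ P @ h)   # gain (unit measurement variance assumed)
    theta = theta + K * (y - h @ theta)
    P = P - np.outer(K, h @ P)      # covariance downdate
```

These are exactly the Kalman filter equations specialized to a constant state, which is the point of the derivation above.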
Harmonic estimation in a power system using a novel hybrid Least Squares-Adaline algorithm
Energy Technology Data Exchange (ETDEWEB)
Joorabian, M.; Mortazavi, S.S.; Khayyami, A.A. [Electrical Engineering Department, Shahid Chamran University, Ahwaz, 61355 (Iran)
2009-01-15
Nowadays many algorithms have been proposed for harmonic estimation in a power system. Most of them treat the estimation as a totally nonlinear problem. Consequently, these methods either converge slowly, like the GA algorithm [U. Qidwai, M. Bettayeb, GA based nonlinear harmonic estimation, IEEE Trans. Power Delivery (December) 1998], or need accurate parameter adjustment to track dynamic and abrupt changes of harmonic amplitudes, like the adaptive Kalman filter (KF) [Steven Liu, An adaptive Kalman filter for dynamic estimation of harmonic signals, in: 8th International Conference on Harmonics and Quality of Power, ICHQP'98, Athens, Greece, October 14-16, 1998]. In this paper a novel hybrid approach, based on decomposition of the problem into a linear and a nonlinear part, is proposed. A linear estimator, i.e., Least Squares (LS), which is simple, fast, and does not need any parameter tuning to follow harmonic amplitude changes, is used for amplitude estimation, and an adaptive linear combiner called 'Adaline', which is very fast and very simple, is used to estimate the phases of the harmonics. An improvement in convergence and processing time is achieved using this algorithm. Moreover, better performance in online tracking of dynamic and abrupt signal changes results from applying this method. (author)
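The linear half of such a decomposition is straightforward: once the harmonic orders are known, amplitudes follow from an ordinary least squares fit to a sine/cosine basis. The frequencies and amplitudes below are illustrative, not from the paper:

```python
import numpy as np

fs, f0 = 1000.0, 50.0                  # sampling and fundamental frequency (hypothetical)
t = np.arange(0.0, 0.1, 1.0 / fs)
# Signal containing a fundamental and a 3rd harmonic with known amplitudes for checking
signal = (10.0 * np.sin(2 * np.pi * f0 * t + 0.3)
          + 2.0 * np.sin(2 * np.pi * 3 * f0 * t + 1.1))

# With the harmonic orders known, the model is linear in sin/cos coefficients
orders = [1, 3]
cols = []
for k in orders:
    cols += [np.sin(2 * np.pi * k * f0 * t), np.cos(2 * np.pi * k * f0 * t)]
A = np.column_stack(cols)
coef, *_ = np.linalg.lstsq(A, signal, rcond=None)

# Amplitude of order k is the norm of its (sin, cos) coefficient pair
amplitudes = [np.hypot(coef[2 * i], coef[2 * i + 1]) for i in range(len(orders))]
```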
Directory of Open Access Journals (Sweden)
Nenggen Ding
2010-01-01
Full Text Available A recursive least squares (RLS) algorithm for estimation of vehicle sideslip angle and road friction coefficient is proposed. The algorithm uses information from sensors onboard the vehicle and control inputs from the control logic, and is intended to provide the essential information for active safety systems such as active steering, direct yaw moment control, or their combination. Based on a simple two-degree-of-freedom (DOF) vehicle model, the algorithm minimizes the squared errors between the estimated lateral acceleration and yaw acceleration of the vehicle and their measured values. The algorithm also utilizes available control inputs such as active steering angle and wheel brake torques. The proposed algorithm is evaluated using an 8-DOF full-vehicle simulation model including all essential nonlinearities and an integrated active front steering and direct yaw moment control on dry and slippery roads.
Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine
Institute of Scientific and Technical Information of China (English)
XU Rui-Rui; BIAN Guo-Xing; GAO Chen-Feng; CHEN Tian-Lun
2005-01-01
The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then we employ a clustering method in the model to prune the number of support values. The learning rate and the noise-filtering capabilities of the LS-SVM are both greatly improved.
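A convenient property of LS-SVM regression, exploited in studies like this one, is that training reduces to a single linear system rather than a quadratic program. A minimal sketch with an RBF kernel and hypothetical hyperparameters:

```python
import numpy as np

# Training data from a smooth target (illustrative)
X = np.linspace(-3.0, 3.0, 30)
y = np.sin(X)

gamma, sigma = 100.0, 1.0  # regularization and RBF width (hypothetical choices)
K = np.exp(-(X[:, None] - X[None, :]) ** 2 / (2 * sigma ** 2))

# LS-SVM dual: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
n = len(X)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate([[0.0], y])
sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]

def predict(x_new):
    """Kernel expansion over all training points (every point is a 'support value')."""
    k = np.exp(-(X - x_new) ** 2 / (2 * sigma ** 2))
    return alpha @ k + b

pred = predict(0.5)
```

Note that every training point carries a nonzero α in LS-SVM, which is precisely why pruning the support values, as the abstract describes, is of interest.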
DEFF Research Database (Denmark)
Christensen, Bent Jesper; Varneskov, Rasmus T.
band least squares (MBLS) estimator uses sample dependent trimming of frequencies in the vicinity of the origin to account for such contamination. Consistency and asymptotic normality of the MBLS estimator are established, a feasible inference procedure is proposed, and rigorous tools for assessing...... the cointegration strength and testing MBLS against the existing narrow band least squares estimator are developed. Finally, the asymptotic framework for the MBLS estimator is used to provide new perspectives on volatility factors in an empirical application to long-span realized variance series for S&P 500...
Directory of Open Access Journals (Sweden)
Pudji Ismartini
2010-08-01
Full Text Available One of the major problems facing data modelling in the social area is multicollinearity. Multicollinearity can have a significant impact on the quality and stability of the fitted regression model. The common classical regression technique using the least squares estimate is highly sensitive to the multicollinearity problem. In such a problem area, Partial Least Squares Regression (PLSR) is a useful and flexible tool for statistical model building; however, PLSR can only yield point estimates. This paper constructs interval estimates for the PLSR regression parameters by applying the jackknife technique to poverty data. A SAS macro programme is developed to obtain the jackknife interval estimator for PLSR.
Institute of Scientific and Technical Information of China (English)
孙孝前; 尤进红
2003-01-01
In this paper we consider the estimation problem of a semiparametric regression model when the data are longitudinal. An iterative weighted partial spline least squares estimator (IWPSLSE) for the parametric component is proposed which is more efficient, in the sense of asymptotic variance, than the weighted partial spline least squares estimator (WPSLSE) with weights constructed from the within-group partial spline least squares residuals. The asymptotic normality of the IWPSLSE is established. An adaptive procedure is presented which ensures that the iterative process stops after a finite number of iterations and produces an estimator asymptotically equivalent to the best estimator obtainable by the iterative procedure. These results generalize those for the heteroscedastic linear model to the case of semiparametric regression.
Improvements to the Levenberg-Marquardt algorithm for nonlinear least-squares minimization
Transtrum, Mark K
2012-01-01
When minimizing a nonlinear least-squares function, the Levenberg-Marquardt algorithm can suffer from slow convergence, particularly when it must navigate a narrow canyon en route to a best fit. On the other hand, when the least-squares function is very flat, the algorithm may easily become lost in parameter space. We introduce several improvements to the Levenberg-Marquardt algorithm in order to improve both its convergence speed and its robustness to initial parameter guesses. We update the usual step to include a geodesic acceleration correction term, explore a systematic way of accepting uphill steps that may increase the residual sum of squares due to Umrigar and Nightingale, and employ the Broyden method to update the Jacobian matrix. We test these changes by comparing their performance on a number of test problems with standard implementations of the algorithm. We suggest that these two particular challenges, slow convergence and robustness to initial guesses, are complementary problems. Schemes that imp...
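For reference, the unmodified algorithm that such improvements start from fits in a dozen lines. This sketch (illustrative exponential-decay data; none of the paper's improvements) shows the damping update that governs convergence speed:

```python
import numpy as np

# Fit y = a * exp(-b t), a classic test problem for Levenberg-Marquardt (synthetic data)
t = np.linspace(0.0, 4.0, 40)
y = 2.5 * np.exp(-1.3 * t)

def resid(p):
    return y - p[0] * np.exp(-p[1] * t)

def jac(p):
    # Jacobian of the residuals with respect to (a, b)
    e = np.exp(-p[1] * t)
    return np.column_stack([-e, p[0] * t * e])

p, lam = np.array([1.0, 1.0]), 1e-3
for _ in range(100):
    J, r = jac(p), resid(p)
    # Damped normal equations: (J^T J + lam I) delta = -J^T r
    step = np.linalg.solve(J.T @ J + lam * np.eye(2), -J.T @ r)
    if np.sum(resid(p + step) ** 2) < np.sum(r ** 2):
        p, lam = p + step, lam * 0.5  # accept: trust the Gauss-Newton direction more
    else:
        lam *= 2.0                    # reject: increase damping toward gradient descent
```

Only downhill steps are accepted here; the uphill-step acceptance and geodesic acceleration studied in the paper modify exactly this accept/reject logic.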
On the equivalence of Kalman filtering and least-squares estimation
Mysen, E.
2016-07-01
The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.
Institute of Scientific and Technical Information of China (English)
Ge-mai Chen; Jin-hong You
2005-01-01
Consider a repeated measurement partially linear regression model with an unknown parameter vector β. Based on the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results generalize those in [2] to the case of semiparametric regressions.
On the least-square estimation of parameters for statistical diffusion weighted imaging model.
Yuan, Jing; Zhang, Qinwei
2013-01-01
A statistical model for diffusion-weighted imaging (DWI) has been proposed for better tissue characterization; it introduces a distribution function for apparent diffusion coefficients (ADC) to account for the restrictions and hindrances to water diffusion in biological tissues. This paper studies the precision and uncertainty in the estimation of parameters for the statistical DWI model with a Gaussian distribution, i.e., the position of the distribution maximum (Dm) and the distribution width (σ), using non-linear least-squares (NLLS) fitting. Numerical simulation shows that precise parameter estimation, particularly of σ, imposes critical requirements on the signal-to-noise ratio (SNR) of the DWI signal when NLLS fitting is used. Unfortunately, such an extremely high SNR may be difficult to achieve in normal clinical DWI scan settings. For Dm and σ parameter mapping of the in vivo human brain, multiple local minima are found and result in large uncertainties in the estimation of the distribution width σ. The estimation error of NLLS fitting originates primarily from the insensitivity of the DWI signal intensity to the distribution width σ, as given in the functional form of the Gaussian-type statistical DWI model.
Nonlinear Spline Kernel-based Partial Least Squares Regression Method and Its Application
Institute of Scientific and Technical Information of China (English)
JIA Jin-ming; WEN Xiang-jun
2008-01-01
Inspired by the traditional Wold's nonlinear PLS algorithm, which comprises the NIPALS approach and a spline inner-function model, a novel nonlinear partial least squares algorithm based on a spline kernel (named SK-PLS) is proposed for nonlinear modeling in the presence of multicollinearity. Based on the inner-product kernel spanned by spline basis functions with an infinite number of nodes, the method first maps the input data into a high-dimensional feature space, then calculates a linear PLS model with a reformed NIPALS procedure in the feature space, and in consequence gives a unified framework for traditional PLS "kernel" algorithms. The linear PLS in the feature space corresponds to a nonlinear PLS in the original input (primal) space. The good approximating property of the spline kernel function enhances the generalization ability of the novel model, and two numerical experiments are given to illustrate the feasibility of the proposed method.
Error Estimate and Adaptive Refinement in Mixed Discrete Least Squares Meshless Method
Directory of Open Access Journals (Sweden)
J. Amani
2014-01-01
Full Text Available The node-moving and multistage node-enrichment adaptive refinement procedures are extended to the mixed discrete least squares meshless (MDLSM) method for efficient analysis of elasticity problems. In the formulation of the MDLSM method, a mixed formulation is adopted to avoid second-order differentiation of shape functions and to obtain displacements and stresses simultaneously. In the refinement procedures, a robust error estimator based on the value of the least squares residual functional of the governing differential equations and their boundary conditions at nodal points is used; it is inherently available from the MDLSM formulation and can efficiently identify zones with higher numerical errors. The results are compared with the refinement procedures in the irreducible formulation of the discrete least squares meshless (DLSM) method and show the accuracy and efficiency of the proposed procedures. The comparison of error norms and convergence rates also shows the fidelity of the proposed adaptive refinement procedures in the MDLSM method.
Analysis of total least squares in estimating the parameters of a mortar trajectory
Energy Technology Data Exchange (ETDEWEB)
Lau, D.L.; Ng, L.C.
1994-12-01
Least Squares (LS) is a method of curve fitting used under the assumption that error exists only in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided slightly improved results, about 10%, over the LS method.
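The contrast between the two estimators can be seen in a few lines: ordinary LS attributes all error to the observations, while TLS takes the smallest singular direction of the augmented data matrix. The numbers below are illustrative, not the mortar data:

```python
import numpy as np

# Fit a line y ≈ a*x through the origin, with scatter in both x and y (synthetic data)
x = np.array([0.9, 2.1, 2.9, 4.2, 5.0])
y = np.array([2.0, 4.1, 6.2, 8.0, 9.9])  # roughly y = 2x

# Ordinary LS: assumes error only in y
a_ls = (x @ y) / (x @ x)

# TLS: null direction of the augmented matrix [x | y] from the SVD
C = np.column_stack([x, y])
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]            # right singular vector of the smallest singular value
a_tls = -v[0] / v[1]  # the line a*x - y = 0 lies along this direction
```

Both estimates land near the true slope here; the two diverge more visibly as the errors in the data matrix grow, which is the regime the paper studies.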
Calibration of Vector Magnetogram with the Nonlinear Least-squares Fitting Technique
Institute of Scientific and Technical Information of China (English)
Jiang-Tao Su; Hong-Qi Zhang
2004-01-01
To acquire Stokes profiles from observations of a simple sunspot with the Video Vector Magnetograph at Huairou Solar Observing Station (HSOS), we scanned the Fe I λ5324.19 Å line over the wavelength interval from 150 mÅ redward of the line center to 150 mÅ blueward, in steps of 10 mÅ. With the technique of analytic inversion of Stokes profiles via nonlinear least squares, we present the calibration coefficients for the HSOS vector magnetogram. We obtained the theoretical calibration error with linear expressions derived from the Unno-Becker equation under the weak-field approximation.
On-line Weighted Least Squares Kernel Method for Nonlinear Dynamic Modeling
Institute of Scientific and Technical Information of China (English)
(author not listed)
2006-01-01
Support vector machines (SVM) have been widely used in pattern recognition and have also drawn considerable interest in control areas. Based on a rolling optimization method and on-line learning strategies, a novel approach based on weighted least squares support vector machines (WLS-SVM) is proposed for nonlinear dynamic modeling. The good robustness of the novel approach enhances the generalization ability of kernel-method-based modeling, and some experimental results are presented to illustrate the feasibility of the proposed method.
Institute of Scientific and Technical Information of China (English)
Juan ZHAO; Yunmin ZHU
2009-01-01
The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors comprehensively discuss the relationship between the two estimates. First, the authors consider the classical linear model, in which the coefficient matrix is deterministic, and derive the necessary and sufficient condition for equivalence of the two estimates. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information about the parameter being estimated. Second, the authors consider the linear model with a random coefficient matrix, called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information about the parameter.
Fully Modified Narrow-Band Least Squares Estimation of Weak Fractional Cointegration
DEFF Research Database (Denmark)
Nielsen, Morten Ørregaard; Frederiksen, Per
application recently, especially in financial economics. Previous research on this model has considered a semiparametric narrow-band least squares (NBLS) estimator in the frequency domain, but in the stationary case its asymptotic distribution has been derived only under a condition of non-coherence between...
Numerical solution of a nonlinear least squares problem in digital breast tomosynthesis
Landi, G.; Loli Piccolomini, E.; Nagy, J. G.
2015-11-01
In digital tomosynthesis imaging, multiple projections of an object are obtained along a small range of incident angles in order to reconstruct a pseudo-3D representation (i.e., a set of 2D slices) of the object. In this paper we describe some mathematical models for polyenergetic digital breast tomosynthesis image reconstruction that explicitly take into account the various materials composing the object and the polyenergetic nature of the x-ray beam. A polyenergetic model helps to reduce beam-hardening artifacts, but the disadvantage is that it requires solving a large-scale nonlinear ill-posed inverse problem. We formulate the image reconstruction process (i.e., the method to solve the ill-posed inverse problem) in a nonlinear least squares framework, and use a Levenberg-Marquardt scheme to solve it. Some implementation details are discussed, and numerical experiments are provided to illustrate the performance of the methods.
Directory of Open Access Journals (Sweden)
Hui Cao
2014-01-01
Full Text Available Quantitative analysis of the flue gas of a natural-gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. For the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN are the extension terms of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to assess the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis for the flue gas of natural gas-fired generators is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. For the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden layer nodes of the RBFNN are the extension term of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used for estimating the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of prediction of the proposed method for methane, carbon monoxide, and carbon dioxide are, respectively, reduced by 4.74%, 21.76%, and 5.32% compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
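The extended-input idea can be sketched as follows: append the hidden-layer outputs of an RBF network to the original predictors, then regress on the augmented matrix. In this hedged sketch we replace the final PLS regression with an ordinary least squares fit (PLS retaining all components reduces to it), and the RBF centers are simply subsampled from the data rather than trained:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(200, 2))          # original independent input matrix
y = np.sin(X[:, 0]) + 0.5 * np.cos(X[:, 1])    # nonlinear target

def rbf_features(X, centers, gamma=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)                 # hidden-layer outputs of the RBFNN

centers = X[::20]                              # 10 centers subsampled from the data
X_ext = np.hstack([X, rbf_features(X, centers)])   # extended input matrix

def lstsq_rmse(A, y):
    A1 = np.hstack([A, np.ones((A.shape[0], 1))])  # add an intercept column
    w, *_ = np.linalg.lstsq(A1, y, rcond=None)
    return np.sqrt(np.mean((A1 @ w - y) ** 2))

rmse_lin = lstsq_rmse(X, y)
rmse_ext = lstsq_rmse(X_ext, y)
print(rmse_ext < rmse_lin)   # True: the extension captures the nonlinearity
```

Since the extended design contains the linear columns, the fit error can only decrease; the point of the paper's PLS step is to control the variance that such an extension would otherwise add.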
Research on Properties of Total Least Squares Estimation
Institute of Scientific and Technical Information of China (English)
王乐洋
2012-01-01
Through theoretical derivation and proof, some properties of the total least squares estimation are found. The total least squares estimation is a linear transformation of the least squares estimation. When the coefficient matrix contains errors, the least squares estimation is biased, while the total least squares estimation is unbiased. The condition number of the total least squares estimation is larger than that of the least squares estimation, so the total least squares estimation is more easily affected by data errors. Through further derivation, the relations between the total least squares and the least squares in terms of solutions, residuals and unit weight variance estimates are given.
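The relationship between the two estimators can be seen numerically: the least squares solution solves min ||Ax - b|| treating A as exact, while the total least squares solution comes from the smallest right singular vector of the augmented matrix [A | b]. A small sketch (our own synthetic example, with errors in both A and b):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 500
x_true = np.array([1.5, -0.7])
A0 = rng.normal(size=(n, 2))
A = A0 + 0.05 * rng.normal(size=A0.shape)      # the coefficient matrix has errors
b = A0 @ x_true + 0.05 * rng.normal(size=n)    # and so does the observation vector

x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # ordinary least squares

# total least squares: smallest right singular vector of [A | b]
_, _, Vt = np.linalg.svd(np.hstack([A, b[:, None]]))
v = Vt[-1]
x_tls = -v[:2] / v[2]
print(np.round(x_ls, 3), np.round(x_tls, 3))   # both near [1.5, -0.7]
```

With noise this small both estimates are close to the truth; the bias of least squares under coefficient-matrix errors only becomes pronounced as that noise grows.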
ON THE SINGULARITY OF LEAST SQUARES ESTIMATOR FOR MEAN-REVERTING Α-STABLE MOTIONS
Institute of Scientific and Technical Information of China (English)
Hu Yaozhong; Long Hongwei
2009-01-01
We study the problem of parameter estimation for the mean-reverting α-stable motion, dXt = (a0 - θ0Xt)dt + dZt, observed at discrete time instants. A least squares estimator is obtained and its asymptotics are discussed in the singular case (a0, θ0) = (0, 0). If a0 = 0, then the mean-reverting α-stable motion becomes an Ornstein-Uhlenbeck process, which is studied in [7] in the ergodic case θ0 > 0. For the Ornstein-Uhlenbeck process, the asymptotics of the least squares estimators for the singular case (θ0 = 0) and for the ergodic case (θ0 > 0) are completely different.
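For the Gaussian special case (α = 2, so that Zt is a Brownian motion), the least squares estimator from discrete observations amounts to regressing the increments on the drift regressors. A sketch in the ergodic case (our own simulation, not the singular case analyzed in the paper):

```python
import numpy as np

rng = np.random.default_rng(2)
a0, theta0, dt, n = 1.0, 0.8, 0.01, 100_000
X = np.empty(n + 1)
X[0] = 0.0
dW = rng.normal(scale=np.sqrt(dt), size=n)     # Brownian increments (alpha = 2 case)
for i in range(n):                             # Euler scheme for dX = (a0 - theta0*X)dt + dW
    X[i + 1] = X[i] + (a0 - theta0 * X[i]) * dt + dW[i]

# least squares: regress the increments dX on the drift regressors [dt, -X dt]
dX = np.diff(X)
D = np.column_stack([np.full(n, dt), -X[:-1] * dt])
est, *_ = np.linalg.lstsq(D, dX, rcond=None)
print(np.round(est, 2))                        # close to [1.0, 0.8]
```

In the singular case (a0, θ0) = (0, 0) the process does not mean-revert, the design matrix degenerates, and the limit theory differs, which is the subject of the paper.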
Least Square Regression Method for Estimating Gas Concentration in an Electronic Nose System
Directory of Open Access Journals (Sweden)
Walaa Khalaf
2009-03-01
Full Text Available We describe an Electronic Nose (ENose) system which is able to identify the type of analyte and to estimate its concentration. The system consists of seven sensors, five of them being gas sensors (supplied with different heater voltage values), the remainder being a temperature and a humidity sensor, respectively. To identify a new analyte sample and then to estimate its concentration, we use both some machine learning techniques and the least square regression principle. In fact, we apply two different training models; the first one is based on the Support Vector Machine (SVM) approach and is aimed at teaching the system how to discriminate among different gases, while the second one uses the least squares regression approach to predict the concentration of each type of analyte.
Least square regression method for estimating gas concentration in an electronic nose system.
Khalaf, Walaa; Pace, Calogero; Gaudioso, Manlio
2009-01-01
We describe an Electronic Nose (ENose) system which is able to identify the type of analyte and to estimate its concentration. The system consists of seven sensors, five of them being gas sensors (supplied with different heater voltage values), the remainder being a temperature and a humidity sensor, respectively. To identify a new analyte sample and then to estimate its concentration, we use both some machine learning techniques and the least square regression principle. In fact, we apply two different training models; the first one is based on the Support Vector Machine (SVM) approach and is aimed at teaching the system how to discriminate among different gases, while the second one uses the least squares regression approach to predict the concentration of each type of analyte.
SOM-based nonlinear least squares twin SVM via active contours for noisy image segmentation
Xie, Xiaomin; Wang, Tingting
2017-02-01
In this paper, a nonlinear least squares twin support vector machine (NLSTSVM) with the integration of an active contour model (ACM) is proposed for noisy image segmentation. Efforts have been made to seek kernel-generated surfaces instead of hyperplanes for the pixels belonging to the foreground and background, respectively, using the kernel trick to enhance performance. Concurrent self-organizing maps (SOMs) are applied to approximate the intensity distributions in a supervised way, so as to establish the original training sets for the NLSTSVM. Further, the two sets are updated by adding the global region average intensities at each iteration. Moreover, a local variable regional term rather than an edge stop function is adopted in the energy function to improve noise robustness. Experimental results demonstrate that our model achieves higher segmentation accuracy and greater noise robustness.
A weighted least-squares method for parameter estimation in structured models
Galrinho, Miguel; Rojas, Cristian R.; Hjalmarsson, Håkan
2014-01-01
Parameter estimation in structured models is generally considered a difficult problem. For example, the prediction error method (PEM) typically gives a non-convex optimization problem, while it is difficult to incorporate structural information in subspace identification. In this contribution, we revisit the idea of iteratively using the weighted least-squares method to cope with the problem of non-convex optimization. The method is, essentially, a three-step method. First, a high order least...
Least Orthogonal Distance Estimator and Total Least Square for Simultaneous Equation Models
Directory of Open Access Journals (Sweden)
Alessia Naccarato
2014-01-01
Full Text Available The Least Orthogonal Distance Estimator (LODE) of Simultaneous Equation Models' structural parameters is based on minimizing the orthogonal distance between the Reduced Form (RF) and the Structural Form (SF) parameters. In this work we propose a new version – with respect to Pieraccini and Naccarato (2008) – of Full Information (FI) LODE based on the decomposition of a new structure of the variance-covariance matrix using Singular Value Decomposition (SVD) instead of Spectral Decomposition (SD). In this context Total Least Squares is applied. A simulation experiment comparing the performance of the new version of FI LODE with Three Stage Least Squares (3SLS) and Full Information Maximum Likelihood (FIML) is presented. Finally, a comparison between the new and old versions of FI LODE, together with a few concluding remarks, closes the paper.
Thrust estimator design based on least squares support vector regression machine
Institute of Scientific and Technical Information of China (English)
ZHAO Yong-ping; SUN Jian-guo
2010-01-01
In order to realize direct thrust control instead of traditional sensor-based control for aero-engines, it is indispensable to design a thrust estimator with high accuracy, so a scheme for thrust estimator design based on the least squares support vector regression machine is proposed to solve this problem. Furthermore, numerical simulations confirm the effectiveness of the presented scheme. During the process of estimator design, a wrapper criterion that can not only reduce the computational complexity but also enhance the generalization performance is proposed to select the input variables for the estimator.
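A least squares support vector regression machine replaces the inequality constraints of standard SVR with equalities, so training reduces to solving one linear system in the dual variables. A minimal sketch (generic RBF kernel on toy data; the hyperparameter names gamma and sigma and all settings are our own, and this is not the engine model of the paper):

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvr_fit(X, y, gamma=1e4, sigma=1.0):
    """Solve the LS-SVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = X.shape[0]
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[1:], sol[0]                     # dual weights alpha, bias b

X = np.linspace(0.0, 3.0, 60)[:, None]
y = np.sin(2.0 * X[:, 0])                      # toy regression target
alpha, bias = lssvr_fit(X, y)
pred = rbf_kernel(X, X) @ alpha + bias
rmse = np.sqrt(np.mean((pred - y) ** 2))
print(round(rmse, 4))                          # small training error
```

The regularization constant gamma trades training accuracy against smoothness; the wrapper-based input selection described in the abstract would sit outside this fitting routine.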
Institute of Scientific and Technical Information of China (English)
陶华学; 郭金运
2003-01-01
Data coming from different sources have different types and temporal states. Relations between one type of data and another, or between data and unknown parameters, are almost always nonlinear. It is neither accurate nor reliable to process the data in building the digital earth with the classical least squares method or common nonlinear least squares methods. So a generalized nonlinear dynamic least squares method was put forward to process data in building the digital earth. A separating solution model and an iterative calculation method were used to solve the generalized nonlinear dynamic least squares problem. In fact, a complex problem can be separated and then solved by converting it into two sub-problems, each of which has a single variable. Therefore the dimension of the unknown parameters can be reduced to half, which simplifies the original high-dimensional equations.
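The separation idea can be illustrated on the simplest separable model y = a*exp(-b*t): for fixed b, the optimal a is a one-variable linear sub-problem in closed form, leaving only a one-dimensional search over b. A hedged sketch (our own toy example, not the digital-earth data model of the paper):

```python
import numpy as np

t = np.linspace(0.0, 4.0, 50)
y = 3.0 * np.exp(-1.2 * t)          # toy data with a=3.0, b=1.2

def best_a(b):                      # linear sub-problem: optimal a for a fixed b
    phi = np.exp(-b * t)
    return (phi @ y) / (phi @ phi)

def cost(b):                        # profiled sum of squares in the single variable b
    return np.sum((best_a(b) * np.exp(-b * t) - y) ** 2)

bs = np.linspace(0.1, 3.0, 2901)    # one-dimensional search over b
b_hat = bs[np.argmin([cost(b) for b in bs])]
a_hat = best_a(b_hat)
print(round(a_hat, 3), round(b_hat, 3))   # 3.0 1.2
```

Each sub-problem involves a single variable, so the dimension of the search is halved, which is the point made in the abstract.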
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies the multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memory-less block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous input (ARMAX) model, which can effectively describe the moving average noises as well as the autoregressive and exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies.
Energy Technology Data Exchange (ETDEWEB)
Griffin, P.J.
1998-05-01
This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with state-of-the-art analysis as detailed in community consensus ASTM standards.
Normalized least-squares estimation in time-varying ARCH models
Fryzlewicz, Piotr; Sapatinas, Theofanis; Subba Rao, Suhasini
2008-01-01
We investigate the time-varying ARCH (tvARCH) process. It is shown that it can be used to describe the slow decay of the sample autocorrelations of the squared returns often observed in financial time series, which warrants the further study of parameter estimation methods for the model. Since the parameters are changing over time, a successful estimator needs to perform well for small samples. We propose a kernel normalized least-squares (kernel-NLS) estimator which has a closed form...
Payette, G. S.; Reddy, J. N.
2011-05-01
In this paper we examine the roles of minimization and linearization in the least-squares finite element formulations of nonlinear boundary-value problems. The least-squares principle is based upon the minimization of the least-squares functional constructed via the sum of the squares of appropriate norms of the residuals of the partial differential equations (in the present case we consider L2 norms). Since the least-squares method is independent of the discretization procedure and the solution scheme, the least-squares principle suggests that minimization should be performed prior to linearization, where linearization is employed in the context of either the Picard or Newton iterative solution procedures. However, in the least-squares finite element analysis of nonlinear boundary-value problems, it has become common practice in the literature to exchange the sequence of application of the minimization and linearization operations. The main purpose of this study is to provide a detailed assessment on how the finite element solution is affected when the order of application of these operators is interchanged. The assessment is performed mathematically, through an examination of the variational setting for the least-squares formulation of an abstract nonlinear boundary-value problem, and also computationally, through the numerical simulation of the least-squares finite element solutions of both a nonlinear form of the Poisson equation and also the incompressible Navier-Stokes equations. The assessment suggests that although the least-squares principle indicates that minimization should be performed prior to linearization, such an approach is often impractical and not necessary.
Zimmer, Christoph; Sahle, Sven
2016-04-01
Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity and that the specific choice of this algorithm shows only minor performance differences.
Nakano, Takemi; Nagata, Kentaro; Yamada, Masafumi; Magatani, Kazushige
2009-01-01
In this study, we describe the application of the least squares method to muscular strength estimation in hand motion recognition based on surface electromyogram (SEMG). Although various methods can be considered for evaluating muscular strength, grasp force is applied here as the index. Today, SEMG, which is measured from the skin surface, is widely used as a control signal for many devices, because it is one of the most important biological signals in which the human motion intention is directly reflected, and various devices using SEMG have been reported by many researchers. We call devices that use SEMG as a control signal SEMG systems. In an SEMG system, achieving high-accuracy recognition is an important requirement, and conventional SEMG systems have mainly focused on this objective. Although it is also important to estimate the muscular strength of motions, most of these systems cannot detect the power of the muscle. The ability to estimate muscular strength is a very important factor in controlling SEMG systems. Thus, the objective of this study is to develop an estimation method for muscular strength by application of the least squares method, and to reflect the measured power in the controlled object. Since it is known that SEMG is formed by physiological variations in the state of muscle fiber membranes, it is thought that it can be related to grasp force. We applied the least squares method to construct a relationship between SEMG and grasp force. In order to construct an effective evaluation model, four SEMG measurement locations, chosen in consideration of individual differences, were decided by the Monte Carlo method.
Recursive Least Squares Estimator with Multiple Exponential Windows in Vector Autoregression
Institute of Scientific and Technical Information of China (English)
Hong-zhi An; Zhi-guo Li
2002-01-01
In the parameter tracking of time-varying systems, the ordinary method is weighted least squares with the rectangular window or the exponential window. In this paper we propose a new kind of sliding window called the multiple exponential window, and then use it to fit time-varying Gaussian vector autoregressive models. The asymptotic bias and covariance of the estimator of the parameters for time-invariant models are also derived. Simulation results show that multiple exponential windows provide better parameter tracking than rectangular and exponential ones.
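For reference, the single-exponential-window estimator that the proposed multiple-window scheme generalizes can be sketched as the standard recursive least squares update with forgetting factor λ (our own toy simulation with one parameter jump, not a vector autoregression):

```python
import numpy as np

def rls_exp_window(Phi, y, lam=0.98, delta=100.0):
    """Recursive least squares with a single exponential forgetting window."""
    n, d = Phi.shape
    w = np.zeros(d)
    P = delta * np.eye(d)
    for i in range(n):
        phi = Phi[i]
        k = P @ phi / (lam + phi @ P @ phi)   # gain
        w = w + k * (y[i] - phi @ w)          # parameter update
        P = (P - np.outer(k, phi @ P)) / lam  # inverse-covariance recursion
    return w

rng = np.random.default_rng(3)
n = 400
Phi = rng.normal(size=(n, 2))
# the true parameter jumps halfway through the record
w_true = np.where(np.arange(n)[:, None] < 200, [1.0, -1.0], [2.0, 0.5])
y = np.sum(Phi * w_true, axis=1) + 0.01 * rng.normal(size=n)
w_hat = rls_exp_window(Phi, y)
print(np.round(w_hat, 2))                     # tracks the new value, near [2.0, 0.5]
```

A smaller λ forgets faster and tracks jumps sooner at the price of noisier estimates; the multiple-window idea of the paper combines several such effective window lengths.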
A Weighted Least-Squares Approach to Parameter Estimation Problems Based on Binary Measurements
Colinet, Eric; Juillard, Jérôme
2010-01-01
We present a new approach to parameter estimation problems based on binary measurements, motivated by the need to add integrated low-cost self-test features to microfabricated devices. This approach is based on the use of original weighted least-squares criteria: as opposed to other existing methods, it requires no dithering signal and it does not rely on an approximation of the quantizer. In this paper, we focus on a simple choice for the weights and establish some asymptotic properties of...
Non-linear Least-squares Fitting in IDL with MPFIT
Markwardt, C. B.
2009-09-01
MPFIT is a port to IDL of the non-linear least squares fitting program MINPACK-1. MPFIT inherits the robustness of the original FORTRAN version of MINPACK-1, but is optimized for performance and convenience in IDL. In addition to the main fitting engine, MPFIT, several specialized functions are provided to fit 1-D curves and 2-D images, 1-D and 2-D peaks, and interactive fitting from the IDL command line. Several constraints can be applied to model parameters, including fixed constraints, simple bounding constraints, and "tying" the value to another parameter. Several data-weighting methods are allowed, and the parameter covariance matrix is computed. Extensive diagnostic capabilities are available during the fit, via a call-back subroutine, and after the fit is complete. Several different forms of documentation are provided, including a tutorial, reference pages, and frequently asked questions. The package has been translated to C and Python as well. The full IDL and C packages can be found at http://purl.com/net/mpfit.
Distributed weighted least-squares estimation with fast convergence for large-scale systems.
Marelli, Damián Edgardo; Fu, Minyue
2015-01-01
In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.
Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems
Abbasi-Yadkori, Yasin; Szepesvari, Csaba
2011-01-01
The analysis of online least squares estimation is at the heart of many stochastic sequential decision making problems. We employ tools from the self-normalized processes to provide a simple and self-contained proof of a tail bound of a vector-valued martingale. We use the bound to construct a new tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and the linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al. (2002) and show that its regret is with high-probability a problem dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem dependent bound in the dimension and number of time steps. Furthermore, as opposed to the previous result, we prove that our bou...
Error Estimates Derived from the Data for Least-Squares Spline Fitting
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2007-06-25
The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
Least Squares Estimate of the Initial Phases in STFT based Speech Enhancement
DEFF Research Database (Denmark)
Nørholm, Sidsel Marie; Krawczyk-Becker, Martin; Gerkmann, Timo;
2015-01-01
In this paper, we consider single-channel speech enhancement in the short time Fourier transform (STFT) domain. We suggest to improve an STFT phase estimate by estimating the initial phases. The method is based on the harmonic model and a model for the phase evolution over time. The initial phases are estimated by setting up a least squares problem between the noisy phase and the model for phase evolution. Simulations on synthetic and speech signals show a decreased error on the phase when an estimate of the initial phase is included, compared to using the noisy phase as an initialisation. The error on the phase is decreased at input SNRs from -10 to 10 dB. Reconstructing the signal using the clean amplitude, the mean squared error is decreased and the PESQ score is increased.
Directory of Open Access Journals (Sweden)
Cheng Wang
2014-01-01
Full Text Available The identification of a class of linear-in-parameters multiple-input single-output systems is considered. By using the iterative search, a least-squares based iterative algorithm and a gradient based iterative algorithm are proposed. A nonlinear example is used to verify the effectiveness of the algorithms, and the simulation results show that the least-squares based iterative algorithm can produce more accurate parameter estimates than the gradient based iterative algorithm.
Directory of Open Access Journals (Sweden)
Santosh Kumar Singh
2017-06-01
Full Text Available This paper presents a new hybrid method based on the Gravitational Search Algorithm (GSA) and Recursive Least Squares (RLS), known as GSA-RLS, to solve harmonic estimation problems in the case of time-varying power signals in the presence of different noises. GSA is based on Newton's law of gravity and mass interactions. In the proposed method, the searcher agents are a collection of masses that interact with each other using Newton's laws of gravity and motion. The basic GSA strategy is combined with the RLS algorithm sequentially in an adaptive way to update the unknown parameters (weights) of the harmonic signal. Simulation and practical validation are made with the experimentation of the proposed algorithm with real-time data obtained from a heavy paper industry. A comparative performance of the proposed algorithm is evaluated against other recently reported algorithms such as Differential Evolution (DE), Particle Swarm Optimization (PSO), Bacteria Foraging Optimization (BFO), Fuzzy-BFO (F-BFO) hybridized with Least Squares (LS), and BFO hybridized with the RLS algorithm, which reveals that the proposed GSA-RLS algorithm is the best in terms of accuracy, convergence and computational time.
Nonlinear partial least squares with Hellinger distance for nonlinear process monitoring
Harrou, Fouzi
2017-02-16
This paper proposes an efficient data-based anomaly detection method that can be used for monitoring nonlinear processes. The proposed method merges advantages of nonlinear projection to latent structures (NLPLS) modeling and those of the Hellinger distance (HD) metric to identify abnormal changes in highly correlated multivariate data. Specifically, the HD is used to quantify the dissimilarity between the current NLPLS-based residual and reference probability distributions. The performance of the developed NLPLS-based HD anomaly detection technique is illustrated using simulated plug flow reactor data.
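The HD-based monitoring statistic can be sketched by histogramming reference and current residuals and computing the Hellinger distance between the two empirical distributions (our own synthetic residuals, not the plug flow reactor data of the paper):

```python
import numpy as np

def hellinger(p, q):
    """Hellinger distance between two discrete (histogram) distributions."""
    p = np.asarray(p, float) / np.sum(p)
    q = np.asarray(q, float) / np.sum(q)
    return np.sqrt(0.5 * np.sum((np.sqrt(p) - np.sqrt(q)) ** 2))

rng = np.random.default_rng(4)
bins = np.linspace(-5.0, 6.0, 60)
ref, _ = np.histogram(rng.normal(0.0, 1.0, 5000), bins)     # reference residuals
normal, _ = np.histogram(rng.normal(0.0, 1.0, 5000), bins)  # normal operation
fault, _ = np.histogram(rng.normal(0.5, 1.0, 5000), bins)   # shifted (anomalous) residuals

d_normal = hellinger(ref, normal)
d_fault = hellinger(ref, fault)
print(d_fault > d_normal)   # the faulty batch is farther from the reference
```

In a monitoring setting, the distance for the current batch would be compared against a threshold calibrated on normal-operation data.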
A Least-Squares Solution to Nonlinear Steady-State Multi-Dimensional IHCP
Institute of Scientific and Technical Information of China (English)
Anonymous
1996-01-01
In this paper, the least-squares method is used to solve the Inverse Heat Conduction Problem (IHCP) to determine the space-wise variation of the unknown boundary condition on the inner surface of a helically coiled tube with fluid flow inside, electrical heating and insulation outside. The sensitivity coefficient is analyzed to give a rational distribution of the thermocouples. The results demonstrate that the method effectively extracts information about the unknown boundary condition of the heat conduction problem from the experimental measurements. The results also show that the least-squares method converges very quickly.
Institute of Scientific and Technical Information of China (English)
罗振东; 朱江; 王会军
2002-01-01
A nonlinear Galerkin/Petrov-least squares mixed element (NGPLSME) method for the stationary Navier-Stokes equations is presented and analyzed. The scheme adds Petrov-least squares forms of the residuals to the nonlinear Galerkin mixed element method, so that it is stable for any combination of discrete velocity and pressure spaces without requiring the Babuska-Brezzi stability condition. The existence, uniqueness and convergence (at optimal rate) of the NGPLSME solution is proved in the case of sufficient viscosity (or small data).
Nonlinear least squares estimation based on multiple genetic algorithms
Institute of Scientific and Technical Information of China (English)
刘德玲; 马志强
2011-01-01
Conventional Newton-like algorithms, widely used for parameter estimation of nonlinear models, are sensitive to initial values, while simple genetic algorithms are liable to fall into local optima. This paper proposes a multiple-population genetic algorithm. It searches for the solution with several genetic algorithms and can adjust the parameter domain dynamically according to the optimum solution found by each genetic algorithm within a few iterations, so it avoids running into local optima and increases the performance and the reliability that the solution found is the global optimum. Experimental results show that the proposed algorithm is an effective approach to parameter estimation of nonlinear systems.
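A hedged sketch of the idea: run an elitist genetic algorithm several times, shrinking the parameter domain around the best solution found so far (toy exponential model; the population size, mutation scheme and shrink factor are our own choices, not the paper's operators):

```python
import numpy as np

rng = np.random.default_rng(5)
t = np.linspace(0.0, 4.0, 40)
y = 2.5 * np.exp(-0.9 * t)                     # toy model with true parameters [2.5, 0.9]

def sse(P):                                    # sum of squared errors, one row per individual
    return np.sum((P[:, 0:1] * np.exp(-P[:, 1:2] * t) - y) ** 2, axis=1)

def run_ga(lo, hi, pop=40, gens=60, sigma=0.05):
    P = rng.uniform(lo, hi, size=(pop, 2))
    for _ in range(gens):
        elite = P[np.argsort(sse(P))[: pop // 2]]               # selection
        children = elite + rng.normal(0.0, sigma, elite.shape)  # mutation
        P = np.clip(np.vstack([elite, children]), lo, hi)
    return P[np.argmin(sse(P))]

# several populations, dynamically narrowing the parameter domain around the best solution
lo, hi = np.array([0.0, 0.0]), np.array([5.0, 3.0])
for _ in range(3):
    best = run_ga(lo, hi)
    lo = np.maximum(lo, best - 0.5)
    hi = np.minimum(hi, best + 0.5)
print(np.round(best, 2))                       # near [2.5, 0.9]
```

Because each restart searches a narrowed domain around the incumbent, the scheme needs no initial guess, which is the advantage claimed over Newton-like methods.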
Underwater terrain positioning method based on least squares estimation for AUV
Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing
2015-12-01
To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for underwater terrain-matching positioning is established using multi-beam data as the underwater terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed for the defects of normal terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for pseudo-localization appearing in flat areas of topographic features, effectively reducing the impact of pseudo-positioning points on matching accuracy and improving the positioning accuracy in flat terrain areas. Simulation experiments based on electronic chart and multi-beam sea trial data show that drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirement of accurate underwater positioning.
Least squares with non-normal data: estimating experimental variance functions.
Tellinghuisen, Joel
2008-02-01
Contrary to popular belief, the method of least squares (LS) does not require that the data have normally distributed (Gaussian) error for its validity. One practically important application of LS fitting that does not involve normal data is the estimation of data variance functions (VFE) from replicate statistics. If the raw data are normal, sampling estimates s² of the variance σ² are χ²-distributed. For small degrees of freedom, the χ² distribution is strongly asymmetrical -- exponential in the case of three replicates, for example. Monte Carlo computations for linear variance functions demonstrate that with proper weighting, the LS variance-function parameters remain unbiased, minimum-variance estimates of the true quantities. However, the parameters are strongly non-normal -- almost exponential for some parameters estimated from s² values derived from three replicates, for example. Similar LS estimates of standard deviation functions from estimated s values have a predictable and correctable bias stemming from the bias inherent in s as an estimator of σ. Because s² and s have uncertainties proportional to their magnitudes, the VFE and SDFE fits require weighting as s⁻⁴ and s⁻², respectively. However, these weights must be evaluated on the calculated functions rather than directly from the sampling estimates. The computation is thus iterative but usually converges in a few cycles, with the remaining 'weighting' bias sufficiently small as to be of no practical consequence.
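The iterative weighting described above can be sketched directly: fit a linear variance function to replicate variance estimates, with weights proportional to the inverse square of the *fitted* (not sampled) variances, and iterate (our own synthetic example with 2 degrees of freedom, i.e. three replicates):

```python
import numpy as np

rng = np.random.default_rng(6)
x = np.linspace(1.0, 10.0, 1000)
var_true = 0.5 + 0.3 * x                       # true linear variance function
nu = 2                                         # three replicates -> 2 degrees of freedom
s2 = var_true * rng.chisquare(nu, size=x.size) / nu   # chi-squared variance estimates

X = np.column_stack([np.ones_like(x), x])
c = np.array([1.0, 0.0])                       # first pass reduces to constant weights
for _ in range(10):                            # iterate: weights from the *fitted* function
    w = 1.0 / (X @ c) ** 2                     # s^-4-type weights on calculated variances
    WX = X * w[:, None]
    c = np.linalg.solve(X.T @ WX, WX.T @ s2)   # weighted normal equations
print(np.round(c, 2))                          # near [0.5, 0.3]
```

Using the fitted variances in the weights, rather than the strongly skewed sampling estimates s², is what keeps the parameter estimates unbiased; the loop typically settles within a few cycles, consistent with the abstract.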
Institute of Scientific and Technical Information of China (English)
Wu Fuxian; Wen Weidong
2016-01-01
The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable on small samples, but inaccurately on very small samples. To overcome this weakness, the least squares maximum entropy quantile function method (LSMEQFM) and a variant with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM can estimate the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with consideration of the constraint condition on the quantile function, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy for the confidence interval of the quantile function, with the LSMEQFMCC-based version being the most stable and accurate on very small samples (10 samples).
Vieira, Vasco M. N. C. S.; Engelen, Aschwin H.; Huanel, Oscar R.; Guillemin, Marie-Laure
2016-01-01
Survival is a fundamental demographic component and the importance of its accurate estimation goes beyond the traditional estimation of life expectancy. The evolutionary stability of isomorphic biphasic life-cycles and the occurrence of their different ploidy phases at uneven abundances are hypothesized to be driven by differences in survival rates between haploids and diploids. We monitored Gracilaria chilensis, a commercially exploited red alga with an isomorphic biphasic life-cycle, and found density-dependent survival with competition and Allee effects. While estimating the linear-in-the-parameters survival function, all model I regression methods (i.e., vertical least squares) provided biased line-fits, rendering them inappropriate for studies of ecology, evolution or population management. Hence, we developed an iterative two-step non-linear model II regression (i.e., oblique least squares), which provided improved line-fits and estimates of survival function parameters, while remaining robust to the data aspects that usually make regression methods numerically unstable. PMID:27936048
Nair, S P; Righetti, R
2015-05-07
Recent elastography techniques focus on imaging information on properties of materials that can be modeled as viscoelastic or poroelastic. These techniques often require fitting temporal strain data, acquired from either a creep or a stress-relaxation experiment, to a mathematical model using least square error (LSE) parameter estimation. It is known that the strain-versus-time relationships for tissues undergoing creep compression are non-linear. In non-linear cases, devising a measure of estimate reliability can be challenging. In this article, we have developed and tested a method to provide non-linear LSE parameter estimate reliability, which we call Resimulation of Noise (RoN). RoN provides a measure of reliability by estimating the spread of parameter estimates from a single experimental realization. We have tested RoN specifically for the case of axial strain time constant parameter estimation in poroelastic media. Our tests show that the RoN-estimated precision has a linear relationship to the actual precision of the LSE estimator. We have also compared results from the RoN-derived measure of reliability against a commonly used reliability measure: the correlation coefficient (CorrCoeff). Our results show that CorrCoeff is a poor measure of estimate reliability for non-linear LSE parameter estimation. While RoN is specifically tested only for axial strain time constant imaging, a general algorithm is provided for use in all LSE parameter estimation.
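The RoN idea, as described, amounts to a parametric re-simulation around a single fit: estimate the noise level from the one available realization, re-simulate many noisy datasets around the fitted curve, and take the spread of the refitted parameters. A hedged sketch follows, using a hypothetical creep model ε(t) = A·(1 − exp(−t/τ)) with illustrative parameter values, not the authors' code:

```python
import numpy as np

rng = np.random.default_rng(1)

def fit_tau(t, y, taus):
    """LSE fit of y ~ A*(1 - exp(-t/tau)): scan tau on a grid, solve A linearly."""
    best = None
    for tau in taus:
        m = 1.0 - np.exp(-t / tau)
        A = (m @ y) / (m @ m)
        sse = np.sum((y - A * m) ** 2)
        if best is None or sse < best[0]:
            best = (sse, tau, A)
    return best[1], best[2]

# One "experimental" realization (tau, A, noise level are illustrative assumptions)
t = np.linspace(0.1, 10.0, 100)
tau_true, A_true, noise = 2.0, 1.0, 0.03
y = A_true * (1.0 - np.exp(-t / tau_true)) + rng.normal(0.0, noise, t.size)

taus = np.linspace(0.5, 5.0, 200)
tau_hat, A_hat = fit_tau(t, y, taus)

# Resimulation of Noise: estimate the noise level from the single fit's residuals,
# then re-simulate many noisy realizations around the fitted curve and refit each.
resid = y - A_hat * (1.0 - np.exp(-t / tau_hat))
sigma_hat = resid.std(ddof=2)
tau_star = np.array([
    fit_tau(t, A_hat * (1.0 - np.exp(-t / tau_hat))
            + rng.normal(0.0, sigma_hat, t.size), taus)[0]
    for _ in range(200)
])
print(tau_hat, tau_star.std())   # point estimate and its RoN-style spread
```

The standard deviation of the re-fitted time constants serves as the reliability measure for the single original estimate.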
Least-squares reverse time migration with and without source wavelet estimation
Zhang, Qingchen; Zhou, Hui; Chen, Hanming; Wang, Jie
2016-11-01
Least-squares reverse time migration (LSRTM) attempts to find the best-fit reflectivity model by minimizing the mismatch between the observed and simulated seismic data, where source wavelet estimation is one of the crucial issues. We divide the frequency-domain observed seismic data by the numerical Green's function at the receiver nodes to estimate the source wavelet for the conventional LSRTM method, and propose a source-independent LSRTM based on a convolution-based objective function. The numerical Green's function can be simulated with a Dirac wavelet and the migration velocity in the frequency or time domain. Compared to the conventional method with its additional source estimation procedure, the source-independent LSRTM is insensitive to the source wavelet and retains its amplitude-preserving ability even with an incorrect wavelet and no source estimation. To improve the anti-noise ability, we apply a robust hybrid-norm objective function to both methods and use synthetic seismic data contaminated by random Gaussian and spike noise with a signal-to-noise ratio of 5 dB to verify their feasibility. The final migration images show that the source-independent algorithm is more robust and has a higher amplitude-preserving ability than the conventional source-estimated method.
Fu, Yuan-Yuan; Wang, Ji-Hua; Yang, Gui-Jun; Song, Xiao-Yu; Xu, Xin-Gang; Feng, Hai-Kuan
2013-05-01
The major limitation of using existing vegetation indices for crop biomass estimation is that they asymptotically approach a saturation level beyond a certain range of biomass. To resolve this problem, band depth analysis and partial least squares regression (PLSR) were combined to establish a winter wheat biomass estimation model in the present study. The models based on the combination of band depth analysis and PLSR were then compared with models based on common vegetation indices in terms of estimation accuracy. Band depth analysis was conducted in the visible spectral domain (550-750 nm). Band depth, band depth ratio (BDR), normalized band depth index, and band depth normalized to area were used to represent the band depth information. Among the calibrated estimation models, those based on the combination of band depth analysis and PLSR reached higher accuracy than those based on the vegetation indices. Among them, the combination of BDR and PLSR achieved the highest accuracy (R² = 0.792, RMSE = 0.164 kg·m⁻²). The results indicated that the combination of band depth analysis and PLSR can overcome the saturation problem and improve biomass estimation accuracy when winter wheat biomass is large.
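PLSR itself is straightforward to sketch. The following toy example, using synthetic low-rank "spectra" rather than the winter wheat data, implements PLS1 via the NIPALS algorithm in plain NumPy:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 regression via the NIPALS algorithm (caller centers X and y)."""
    Xr, yr = X.copy(), y.copy()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xr.T @ yr
        w /= np.linalg.norm(w)          # weight vector
        t = Xr @ w                      # score vector
        tt = t @ t
        p = Xr.T @ t / tt               # X loading
        q = (yr @ t) / tt               # y loading
        Xr = Xr - np.outer(t, p)        # deflate X and y
        yr = yr - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    # regression coefficients expressed in the original X space
    return W @ np.linalg.solve(P.T @ W, Q)

# Toy "band depth" matrix: 50 samples x 20 bands, response driven by 2 latent factors
rng = np.random.default_rng(2)
T = rng.normal(size=(50, 2))
X = T @ rng.normal(size=(2, 20)) + 0.05 * rng.normal(size=(50, 20))
y = T @ np.array([1.5, -0.7]) + 0.05 * rng.normal(size=50)

Xc, yc = X - X.mean(0), y - y.mean()
b = pls1_fit(Xc, yc, n_comp=2)
r2 = 1 - np.sum((yc - Xc @ b) ** 2) / np.sum(yc ** 2)
print(round(r2, 3))   # close to 1 for this low-rank toy problem
```

Because the latent components are extracted in directions of maximal covariance with the response, PLSR remains usable when predictors are many and collinear, which is the situation with contiguous band-depth features.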
Least-Squares Fitting Methods for Estimating the Winding Rate in Twisted Magnetic-Flux Tubes
Crouch, Ashley D
2012-01-01
We investigate least-squares fitting methods for estimating the winding rate of field lines about the axis of twisted magnetic-flux tubes. These methods estimate the winding rate by finding the values for a set of parameters that correspond to the minimum of the discrepancy between magnetic-field measurements and predictions from a twisted flux-tube model. For the flux-tube model used in the fitting, we assume that the magnetic field is static, axisymmetric, and does not vary in the vertical direction. Using error-free, synthetic vector magnetic-field data constructed with models for twisted magnetic-flux tubes, we test the efficacy of fitting methods at recovering the true winding rate. Furthermore, we demonstrate how assumptions built into the flux-tube models used for the fitting influence the accuracy of the winding-rate estimates. We identify the radial variation of the winding rate within the flux tube as one assumption that can have a significant impact on the winding-rate estimates. We show that the e...
Institute of Scientific and Technical Information of China (English)
Tang Wei; Shi Zhongke; Chen Jie
2008-01-01
Recently, frequency-based least-squares (LS) estimators have found wide application in identifying aircraft flutter parameters. However, frequency-domain methods are often known to suffer from numerical difficulties when identifying a continuous-time model, especially one of broader frequency range or higher order. In this article, a numerically robust LS estimator based on vector orthogonal polynomials is proposed to solve the numerical problem of multivariable systems and is applied to flutter testing. The key idea of this method is to represent the frequency response function (FRF) matrix by a right matrix fraction description (RMFD) model, and to expand the numerator and denominator polynomial matrices on a vector orthogonal basis. As a result, a perfect numerical condition (condition number equal to 1) can be obtained for the linear LS estimator. Finally, the method is verified by a flutter test of a wing model in a wind tunnel and a real flight flutter test of an aircraft. The results are compared to those of the well-known LMS PolyMAX method, which is not troubled by the numerical problem as it is established in the z domain (i.e. derived from a discrete-time model). The verification shows that this method, apart from overcoming the numerical problem, yields results comparable to those acquired with LMS PolyMAX, or even considerably better in some frequency bands.
Comparison of structural and least-squares lines for estimating geologic relations
Williams, G.P.; Troutman, B.M.
1990-01-01
Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. The number of data points, the slope and intercept of the true relation, and the variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
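As a rough illustration of the contrast between the two line-fitting goals, the sketch below (assumed parameter values, not the paper's Monte Carlo setup) compares the OLS slope with a structural, Deming-type slope computed under an assumed error-variance ratio of one. With comparable error in X and Y, OLS attenuates the true slope toward zero, while the structural estimate does not:

```python
import numpy as np

rng = np.random.default_rng(3)

# True relation y = 2 + 0.5 x, with comparable Gaussian error in both X and Y
n = 200
x_true = rng.uniform(0, 10, n)
x = x_true + rng.normal(0, 1.0, n)
y = 2.0 + 0.5 * x_true + rng.normal(0, 1.0, n)

# OLS slope (attenuated toward zero when X carries measurement error)
b_ols = np.cov(x, y, ddof=1)[0, 1] / np.var(x, ddof=1)

# Structural (Deming-type) slope, assuming equal error variances (lambda = 1)
sxx, syy = np.var(x, ddof=1), np.var(y, ddof=1)
sxy = np.cov(x, y, ddof=1)[0, 1]
b_sa = (syy - sxx + np.sqrt((syy - sxx) ** 2 + 4 * sxy ** 2)) / (2 * sxy)

print(b_ols, b_sa)   # OLS underestimates 0.5; the structural fit is closer
```

This mirrors the abstract's point: the structural fit recovers the "true" relation better, while OLS remains the right tool for minimum-error prediction of Y.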
Segmented targeted least squares estimator for material decomposition in multi-bin PCXDs
Rajbhandary, Paurakh L.; Hsieh, Scott S.; Pelc, Norbert J.
2014-03-01
We present a fast, noise-efficient, and accurate estimator for material separation using photon-counting x-ray detectors (PCXDs) with multiple energy bin capability. The proposed targeted least squares estimator (TLSE) improves on a previously proposed A-Table method by incorporating dynamic weighting that keeps the noise closer to the Cramér-Rao Lower Bound (CRLB) throughout the operating range. We explore Cartesian and average-energy segmentation of the basis material space for TLSE, and show that iso-average-energy contours require fewer segments than Cartesian segmentation to achieve similar performance. We compare the iso-average-energy TLSE to other proposed estimators, including the gold-standard maximum likelihood estimator (MLE) and the A-Table, in terms of variance, bias, and computational efficiency. The variance and bias of this estimator between 0 to 6 cm of aluminum and 0 to 50 cm of water are simulated with Monte Carlo methods. Iso-average-energy TLSE achieves an average variance within 2% of the CRLB, and a mean absolute error of (3.68 ± 0.06) × 10⁻⁶ cm. Using the same protocol, MLE showed a variance-to-CRLB ratio and average bias of 1.0186 ± 0.0002 and (3.10 ± 0.06) × 10⁻⁶ cm, respectively, but was 50 times slower in our simulation. Compared to the A-Table method, TLSE gives a more homogeneous variance-to-CRLB profile in the operating region. We show that the variance-to-CRLB for TLSE is lower than that of the A-Table method by as much as ~36% in the peripheral region of operation (thin or thick objects). The TLSE is a computationally efficient and fast method for implementing material separation in PCXDs, with performance comparable to the MLE.
Donato, David I.
2013-01-01
A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p² + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
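A minimal sketch of the single-pass idea, for a generic linear model with illustrative data rather than the NDMMF itself: each observation contributes a rank-one update to X'X and an increment to X'y, so X is never stored and only O(p²) memory is needed. (For a weighted fit, each row and response would simply be scaled by the square root of its weight before accumulation.)

```python
import numpy as np

def stream_normal_equations(rows, p):
    """Accumulate X'X and X'y in one pass without storing X (O(p^2) memory)."""
    XtX = np.zeros((p, p))
    Xty = np.zeros(p)
    for x_row, y_i in rows:
        XtX += np.outer(x_row, x_row)   # rank-one update
        Xty += x_row * y_i
    return np.linalg.solve(XtX, Xty)    # solve the normal equations

# Hypothetical streamed observations for a 3-parameter linear model
rng = np.random.default_rng(4)
beta_true = np.array([1.0, -2.0, 0.5])

def gen_rows(n):
    for _ in range(n):
        x = rng.normal(size=3)
        yield x, x @ beta_true + 0.01 * rng.normal()

beta = stream_normal_equations(gen_rows(10_000), p=3)
print(beta)   # close to (1.0, -2.0, 0.5)
```

The generator stands in for a pass through data on disk; memory use is independent of the number of observations.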
A Least Squares Collocation Approach with GOCE gravity gradients for regional Moho-estimation
Rieser, Daniel; Mayer-Guerr, Torsten
2014-05-01
The depth of the Moho discontinuity is commonly derived from either seismic observations, gravity measurements, or combinations of both. In this study, we aim to use the gravity gradient measurements of the GOCE satellite mission in a Least Squares Collocation (LSC) approach for estimating the Moho depth on a regional scale. Due to its mission configuration and measurement setup, GOCE is able to contribute valuable information in particular in the medium wavelengths of the gravity field spectrum, which is also of special interest for the crust-mantle boundary. In contrast to other studies, we use the full information of the gradient tensor in all three dimensions. The problem is formulated as isostatically compensated topography according to the Airy-Heiskanen model. By using a topography model in spherical harmonics representation, the topographic influences can be reduced from the gradient observations. Under the assumption of constant mantle and crustal densities, surface densities are derived directly by LSC on a regional scale, which in turn are converted into Moho depths. First investigations proved the ability of this method to resolve the gravity inversion problem with only a small amount of GOCE data, and comparisons with other seismic and gravimetric Moho models for the European region show promising results. With the recently reprocessed GOCE gradients, an improved data set shall be used for the derivation of the Moho depth. In this contribution the processing strategy will be introduced and the most recent developments and results using the currently available GOCE data shall be presented.
Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python
Newville, Matthew; Stensitzki, Till; Allen, Daniel B.; Rawlik, Michal; Ingargiola, Antonino; Nelson, Andrew
2016-06-01
Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
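Lmfit delegates the actual minimization largely to scipy. As a rough, self-contained sketch of the Levenberg-Marquardt machinery it builds on (not lmfit's own API), consider fitting an exponential decay with a hand-rolled damped Gauss-Newton loop; the model and starting values are illustrative:

```python
import numpy as np

def levenberg_marquardt(resid, jac, x0, n_iter=50, lam=1e-3):
    """Minimal Levenberg-Marquardt loop: damped Gauss-Newton steps on r(x)."""
    x = np.asarray(x0, dtype=float)
    cost = np.sum(resid(x) ** 2)
    for _ in range(n_iter):
        r, J = resid(x), jac(x)
        A = J.T @ J + lam * np.eye(x.size)   # damping blends GN with gradient descent
        step = np.linalg.solve(A, -J.T @ r)
        new_cost = np.sum(resid(x + step) ** 2)
        if new_cost < cost:                  # accept step, relax damping
            x, cost, lam = x + step, new_cost, lam * 0.3
        else:                                # reject step, increase damping
            lam *= 10.0
    return x

# Fit y = a * exp(-b * t) to noisy data (a, b, noise level are illustrative)
rng = np.random.default_rng(5)
t = np.linspace(0, 4, 60)
y = 2.5 * np.exp(-1.3 * t) + 0.02 * rng.normal(size=t.size)

resid = lambda p: p[0] * np.exp(-p[1] * t) - y
jac = lambda p: np.column_stack([np.exp(-p[1] * t),
                                 -p[0] * t * np.exp(-p[1] * t)])
p_hat = levenberg_marquardt(resid, jac, x0=[1.0, 1.0])
print(p_hat)   # close to (2.5, 1.3)
```

Lmfit's value-add over this bare loop is bookkeeping: named Parameters with bounds and constraints, interchangeable solvers, and confidence-interval estimation.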
Effectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation
Directory of Open Access Journals (Sweden)
Ahmad Bilfarsah
2005-04-01
Full Text Available The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity among predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, preserving the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, specifically tuna fish production.
Ning, Hanwen; Qing, Guangyan; Jing, Xingjian
2016-11-01
The identification of nonlinear spatiotemporal dynamical systems given by partial differential equations has attracted a lot of attention in the past decades. Several methods, such as searching principle-based algorithms, partially linear kernel methods, and coupled lattice methods, have been developed to address the identification problems. However, most existing methods have some restrictions on sampling processes in that the sampling intervals should usually be very small and uniformly distributed in spatiotemporal domains. These are actually not applicable for some practical applications. In this paper, to tackle this issue, a novel kernel-based learning algorithm named integral least square regularization regression (ILSRR) is proposed, which can be used to effectively achieve accurate derivative estimation for nonlinear functions in the time domain. With this technique, a discretization method named inverse meshless collocation is then developed to realize the dimensional reduction of the system to be identified. Thereafter, with this novel inverse meshless collocation model, the ILSRR, and a multiple-kernel-based learning algorithm, a multistep identification method is systematically proposed to address the identification problem of spatiotemporal systems with pointwise nonuniform observations. Numerical studies for benchmark systems with necessary discussions are presented to illustrate the effectiveness and the advantages of the proposed method.
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for the nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to the development of new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
A comparison of least-squares and Bayesian minimum risk edge parameter estimation
Mulder, Nanno J.; Abkar, Ali A.
1999-01-01
The problem considered here is to compare two methods for finding a common boundary between two objects with two unknown geometric parameters, such as edge position and edge orientation. We compare two model-based approaches: the least squares and the minimum Bayesian risk method. An expression is d
Memory and computation reduction for least-square channel estimation of mobile OFDM systems
Xu, T.; Tang, Z.; Lu, H.; Leuken, R van
2012-01-01
Mobile OFDM refers to OFDM systems with fast-moving transceivers, in contrast to traditional OFDM systems whose transceivers are stationary or have a low velocity. In this paper, we use Basis Expansion Models (BEM) to model the time variation of channels, based on which two least-squares (LS) channe
Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.
A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.
Conditional least squares estimation in nonstationary nonlinear stochastic regression models
Jacob, Christine
2010-01-01
Let $\\{Z_n\\}$ be a real nonstationary stochastic process such that $E(Z_n|{\\mathcaligr F}_{n-1})\\stackrel{\\mathrm{a.s.}}{<}\\infty$ and $E(Z^2_n|{\\mathcaligr F}_{n-1})\\stackrel{\\mathrm{a.s.}}{<}\\infty$, where $\\{{\\mathcaligr F}_n\\}$ is an increasing sequence of $\\sigma$-algebras. Assuming that $E(Z_n|{\\mathcaligr F}_{n-1})=g_n(\\theta_0,\
Neuenkirch, Andreas
2011-01-01
We study a least-squares-type estimator for an unknown parameter in the drift coefficient of a stochastic differential equation with additive fractional noise of Hurst parameter H>1/2. The estimator is based on discrete-time observations of the stochastic differential equation, and using tools from ergodic theory and stochastic analysis we derive its strong consistency.
Joint 2D-DOA and Frequency Estimation for L-Shaped Array Using Iterative Least Squares Method
Directory of Open Access Journals (Sweden)
Ling-yun Xu
2012-01-01
Full Text Available We introduce an iterative least squares (ILS) method for estimating the 2D-DOA and frequency based on an L-shaped array. The ILS method iteratively finds the direction matrix and the delay matrix; the 2D-DOA and frequency can then be obtained by the least squares method. Without spectral peak searching and pairing, this algorithm works well and pairs the parameters automatically. Moreover, our algorithm has better performance than the conventional ESPRIT algorithm and the propagator method. The useful behavior of the proposed algorithm is verified by simulations.
Adaptive Wavelet Methods for Linear and Nonlinear Least-Squares Problems
Stevenson, R.
2014-01-01
The adaptive wavelet Galerkin method for solving linear, elliptic operator equations introduced by Cohen et al. (Math Comp 70:27-75, 2001) is extended to nonlinear equations and is shown to converge with optimal rates without coarsening. Moreover, when an appropriate scheme is available for the appr
Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction
2009-08-20
Lopata, R.G.P.; Hansen, H.H.G.; Nillesen, M.M.; Thijssen, J.M.; Korte, C.L. de
2009-01-01
In this study, the performances of one-dimensional and two-dimensional least-squares strain estimators (LSQSE) are compared. Furthermore, the effects of kernel size are examined using simulated raw frequency data of a widely adopted hard-lesion/soft-tissue model. The performances of both methods are
Cho, M.A.; Skidmore, A.K.; Corsi, F.; Wieren, van S.E.; Sobhan, I.
2007-01-01
The main objective was to determine whether partial least squares (PLS) regression improves grass/herb biomass estimation when compared with hyperspectral indices, that is, the normalised difference vegetation index (NDVI) and the red-edge position (REP). To achieve this objective, fresh green grass/herb bio
Spatter Rate Estimation of GMAW-S based on Partial Least Square Regression
Institute of Scientific and Technical Information of China (English)
CAI Yan; WANG Guang-wei; YANG Hai-lan; HUA Xue-ming; WU Yi-xiong
2008-01-01
This paper analyzes the drop transfer process in gas metal arc welding in short-circuit transfer mode (GMAW-S) in order to develop an optimized spatter rate model that can be used online. According to thermodynamic characteristics and practical behavior, a complete arcing process is divided into three sub-processes: arc re-ignition, energy output, and shorting preparation. The shorting process is then divided into drop spread, bridge sustention, and bridge destabilization. Nine process variables and their distributions are analyzed based on welding experiments with high-speed photographs and synchronous current and voltage signals. The coefficient-of-variation method is used to reflect process consistency and to design characteristic parameters. Partial least squares regression (PLSR) is utilized to set up the spatter rate model because of the strong correlation among the above characteristic parameters. PLSR is a multivariate statistical analysis method that combines regression modeling, data reduction, and correlation analysis in a single algorithm. Experimental results show that the regression equation based on PLSR is effective for online prediction of the spatter rate under the corresponding welding conditions.
Musa, Rosliza; Ali, Zalila; Baharum, Adam; Nor, Norlida Mohd
2017-08-01
The linear regression model assumes that all random error components are identically and independently distributed with constant variance. Hence, each data point provides equally precise information about the deterministic part of the total variation. In other words, the standard deviations of the error terms are constant over all values of the predictor variables. When the assumption of constant variance is violated, the ordinary least squares estimator of the regression coefficients loses its property of minimum variance in the class of linear unbiased estimators. Weighted least squares estimation is often used to maximize the efficiency of parameter estimation. A procedure that treats all of the data equally would give less precisely measured points more influence than they should have and highly precise points too little influence. Optimizing the weighted fitting criterion to find the parameter estimates allows the weights to determine the contribution of each observation to the final parameter estimates. This study used a polynomial model with weighted least squares estimation to investigate the paddy production of different paddy lots based on paddy cultivation characteristics and environmental characteristics in the area of Kedah and Perlis. The results indicated that the factors affecting paddy production are the mixture fertilizer application cycle, average temperature, the squared effect of average rainfall, the squared effect of pest and disease, the interaction between acreage and amount of mixture fertilizer, the interaction between paddy variety and NPK fertilizer application cycle, and the interaction between pest and disease and NPK fertilizer application cycle.
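The weighted-fitting idea is easy to demonstrate. The sketch below uses synthetic heteroscedastic data with illustrative coefficients, not the paddy data: weighting each observation by its inverse error variance restores efficient estimation when the constant-variance assumption fails.

```python
import numpy as np

rng = np.random.default_rng(6)

# Heteroscedastic data: the noise standard deviation grows with x
n = 300
x = rng.uniform(1, 10, n)
sigma = 0.1 * x                       # non-constant error standard deviation
y = 3.0 + 1.2 * x + rng.normal(0, sigma)

X = np.column_stack([np.ones(n), x])
w = 1.0 / sigma ** 2                  # weights = inverse error variances

# WLS: beta = (X' W X)^{-1} X' W y, implemented via sqrt-weight rescaling
sw = np.sqrt(w)
beta_wls = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
print(beta_wls, beta_ols)   # both unbiased, but WLS has smaller variance
```

In practice the error variances are rarely known and the weights must themselves be estimated, e.g. from residuals of a preliminary unweighted fit.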
Energy Technology Data Exchange (ETDEWEB)
Ezure, Hideo
1988-09-01
Effective combination of measured data with theoretical analysis has permitted deriving a method for more accurately estimating the power distribution in BWRs. The least squares method is used to combine the relationship between the power distribution and the measured values with the model used in FLARE or in the three-dimensional two-group diffusion code. Trial application of the new method to estimating the power distribution in JPDR-1 has proved that the method provides reliable results.
Eberhard, Wynn L
2017-04-01
The maximum likelihood estimator (MLE) is derived for retrieving the extinction coefficient and zero-range intercept in the lidar slope method in the presence of random and independent Gaussian noise. Least-squares fitting, weighted by the inverse of the noise variance, is equivalent to the MLE. Monte Carlo simulations demonstrate that two traditional least-squares fitting schemes, which use different weights, are less accurate. Alternative fitting schemes that have some positive attributes are introduced and evaluated. The principal factors governing accuracy of all these schemes are elucidated. Applying these schemes to data with Poisson rather than Gaussian noise alters accuracy little, even when the signal-to-noise ratio is low. Methods to estimate optimum weighting factors in actual data are presented. Even when the weighting estimates are coarse, retrieval accuracy declines only modestly. Mathematical tools are described for predicting retrieval accuracy. Least-squares fitting with inverse variance weighting has optimum accuracy for retrieval of parameters from single-wavelength lidar measurements when noise, errors, and uncertainties are Gaussian distributed, or close to optimum when only approximately Gaussian.
Carlberg, Kevin
2010-10-28
A Petrov-Galerkin projection method is proposed for reducing the dimension of a discrete non-linear static or dynamic computational model in view of enabling its processing in real time. The right reduced-order basis is chosen to be invariant and is constructed using the Proper Orthogonal Decomposition method. The left reduced-order basis is selected to minimize the two-norm of the residual arising at each Newton iteration. Thus, this basis is iteration-dependent, enables capturing of non-linearities, and leads to the globally convergent Gauss-Newton method. To avoid the significant computational cost of assembling the reduced-order operators, the residual and action of the Jacobian on the right reduced-order basis are each approximated by the product of an invariant, large-scale matrix, and an iteration-dependent, smaller one. The invariant matrix is computed using a data compression procedure that meets proposed consistency requirements. The iteration-dependent matrix is computed to enable the least-squares reconstruction of some entries of the approximated quantities. The results obtained for the solution of a turbulent flow problem and several non-linear structural dynamics problems highlight the merit of the proposed consistency requirements. They also demonstrate the potential of this method to significantly reduce the computational cost associated with high-dimensional non-linear models while retaining their accuracy. © 2010 John Wiley & Sons, Ltd.
Spectral Estimation from Undersampled Data: Correlogram and Model-Based Least Squares
Shaghaghi, Mahdi
2012-01-01
This paper studies two spectrum estimation methods for the case in which samples are obtained at a rate lower than the Nyquist rate. The first method is the correlogram method for undersampled data. The algorithm partitions the spectrum into a number of segments and estimates the average power within each spectral segment. We derive the bias and the variance of the spectrum estimator, and show that there is a tradeoff between the accuracy of the estimation and the frequency resolution. The asymptotic behavior of the estimator is also investigated, and it is proved that this spectrum estimator is consistent. A new algorithm for reconstructing signals with sparse spectra from noisy compressive measurements is also introduced. Such a model-based algorithm takes the signal structure into account when estimating the unknown parameters, which are the frequencies and amplitudes of linearly combined sinusoidal signals. A high-resolution spectral estimation method is used to recover the frequencies of the signal elem...
Real-Time Blood Flow Estimation Using a Recursive Least-Squares Lattice Filter
DEFF Research Database (Denmark)
Stetson, Paul F.; Jensen, Jørgen Arendt
1997-01-01
Ultrasonic flow estimation involves Fourier-transforming data from successive pulses. The standard periodogram spectral estimate does not reflect the true velocity distribution in the blood and assumes quasi-stationarity in the data. Last year (see J.A. Jensen et al., IEEE Ultrasonics Symposium …) … a more realistic velocity distribution and can track rapid changes in the flow…
Hunter, A.J.; Drinkwater, B.W.; Wilcox, P.D.
2011-01-01
Ultrasonic array images are adversely affected by errors in the assumed or measured imaging parameters. For non-destructive testing and evaluation, this can result in reduced defect detection and characterization performance. In this paper, an autofocus algorithm is presented for estimating and corr
Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA
Directory of Open Access Journals (Sweden)
Jiri Kadlec
2008-08-01
Full Text Available A high-performance RLS lattice filter with estimation of the unknown order and forgetting factor of the identified system was developed and implemented as a PCORE coprocessor for Xilinx EDK. The coprocessor implemented in FPGA hardware can fully exploit parallelism in the algorithm and remove load from a microprocessor. The EDK integration allows effective programming and debugging of hardware-accelerated DSP applications. The RLS lattice core extended by the order and forgetting factor estimation was implemented using logarithmic number system (LNS) arithmetic. An optimal mapping of the RLS lattice onto the LNS arithmetic units, found by cyclic scheduling, was used. The schedule allows four independent filters to run in parallel on one arithmetic macro set. The coprocessor containing the RLS lattice core is highly configurable. It allows one to exploit the modular structure of the RLS lattice filter and construct a pipelined serial connection of filters for even higher performance. It also allows independent parallel filters to run on the same input with different forgetting factors, in order to estimate which order and exponential forgetting factor better describe the observed data. The FPGA coprocessor implementation presented in the paper is able to evaluate an RLS lattice filter of order 504 at a 12 kHz input data sampling rate. For filters of order up to 20, the probability of order and forgetting factor hypotheses can be continually estimated. It has been demonstrated that the implemented coprocessor accelerates the Microblaze solution up to 20 times. It has also been shown that the coprocessor performs up to 2.5 times faster than a highly optimized solution using a 50 MIPS SHARC DSP processor, while the Microblaze is capable of performing other tasks concurrently.
Indian Academy of Sciences (India)
G Sasibhushana Rao
2007-10-01
The positional accuracy of the Global Positioning System (GPS) is limited by several error sources, the major one being the ionosphere. By augmenting the GPS, the Category I (CAT I) Precision Approach (PA) requirements can be achieved. The Space-Based Augmentation System (SBAS) in India is known as GPS Aided Geo Augmented Navigation (GAGAN). One of the prominent errors in GAGAN that limits the positional accuracy is instrumental bias. Calibration of these biases is particularly important in achieving CAT I PA landings. In this paper, a new algorithm is proposed to estimate the instrumental biases by modelling the TEC using a 4th-order polynomial. The algorithm uses values corresponding to a single station over a one-month period, and the results confirm the validity of the algorithm. The experimental results indicate that the estimation precision of the satellite-plus-receiver instrumental bias is of the order of ±0.17 nsec. The observed mean bias errors are of the order of −3.638 nsec and −4.71 nsec for satellites 1 and 31, respectively. The results are found to be consistent over the period.
Institute of Scientific and Technical Information of China (English)
LIU Dan; WEI Guo; SUN Jin-wei; LIU Xin
2009-01-01
In the osmotic dehydration process of food, on-line estimation of the concentrations of two components in a ternary solution of NaCl and sucrose was performed based on a multi-functional sensing technique. Moving Least Squares were adopted in the approximation procedure to estimate the viscosity of the ternary solution of interest from the given data set. As a result, in one mode, using the total experimental data as both calibration and validation data, the relative deviations of the estimated viscosities are less than ~1.24%. In the other mode, taking all experimental data except the ones under estimation as calibration data, the relative deviations are less than ±3.47%. In the same way, the density of the ternary solution can also be estimated, with deviations less than ±0.11% and ±0.30% respectively in these two modes. The satisfactory and accurate results show the extraordinary efficiency of Moving Least Squares in signal approximation for multi-functional sensors.
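The moving least-squares idea used above, a locally weighted fit re-solved at every evaluation point, can be sketched in one dimension. The data, kernel, and bandwidth below are invented for illustration; they are not the NaCl/sucrose calibration set:

```python
import math

def mls_eval(x0, xs, ys, h=0.5):
    """Moving least-squares estimate at x0: a weighted linear fit with a
    Gaussian kernel of bandwidth h centred on x0 (illustrative sketch)."""
    w = [math.exp(-((xi - x0) / h) ** 2) for xi in xs]
    S = sum(w)
    Sx = sum(wi * xi for wi, xi in zip(w, xs))
    Sy = sum(wi * yi for wi, yi in zip(w, ys))
    Sxx = sum(wi * xi * xi for wi, xi in zip(w, xs))
    Sxy = sum(wi * xi * yi for wi, xi, yi in zip(w, xs, ys))
    b = (S * Sxy - Sx * Sy) / (S * Sxx - Sx * Sx)
    a = (Sy - b * Sx) / S
    return a + b * x0

# Hypothetical calibration data: a smooth viscosity-like curve on a grid.
xs = [0.1 * k for k in range(21)]                # e.g. concentration
ys = [1.0 + 0.5 * x + 0.2 * x * x for x in xs]   # e.g. measured viscosity
est = mls_eval(1.05, xs, ys)
true = 1.0 + 0.5 * 1.05 + 0.2 * 1.05 ** 2
print(abs(est - true) / true)  # small relative deviation
```

Because the weights move with the evaluation point, a simple local linear model tracks the curved response with a small relative deviation, which is the behaviour the abstract reports for viscosity and density.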
DEFF Research Database (Denmark)
Tscherning, Carl Christian
2015-01-01
The method of Least-Squares Collocation (LSC) may be used for the modeling of the anomalous gravity potential (T) and for the computation (prediction) of quantities related to T by a linear functional. Errors may also be estimated. However, when using an isotropic covariance function or equivalent … outside the data area. On the other hand, a comparison of predicted quantities with observed values shows that the error also varies depending on the local data standard deviation. This quantity may be (and has been) estimated using the GOCE second-order vertical derivative, Tzz, in the area covered … on gravity anomalies (at 10 km altitude) predicted from GOCE Tzz. This has given an improved agreement between errors based on the differences between values derived from EGM2008 (to degree 512) and predicted gravity anomalies.
Directory of Open Access Journals (Sweden)
Dilip C Nath
2011-07-01
Full Text Available Quasi-Least Squares (QLS) is useful for different correlation structures in conjunction with Generalized Estimating Equations (GEE). The purpose of this work is to compare the regression parameters in the presence of different correlation structures with respect to the GEE and QLS methods. The comparison of estimated regression parameters has been performed on a clinical trial data set studying the effect of drug treatment (metformin with pioglitazone vs. gliclazide with pioglitazone) in type 2 diabetes patients. In the case of QLS, the correlation coefficient of post-prandial blood sugar (PPBS) under a tridiagonal correlation structure is 0.008, while GEE failed to produce it. It has been found that the combination of metformin with pioglitazone is more effective than the combination of gliclazide with pioglitazone.
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
Detecting plant health conditions plays a key role in farm pest management and crop protection. In this study, measurement of hyperspectral leaf reflectance in rice (Oryza sativa L.) was conducted on groups of healthy leaves and leaves infected by the fungus Bipolaris oryzae (Helminthosporium oryzae Breda de Haan) over the wavelength range from 350 to 2 500 nm. The percentage of leaf surface lesions was estimated and defined as the disease severity. Statistical methods such as multiple stepwise regression, principal component analysis and partial least-squares regression were utilized to calculate and estimate the disease severity of rice brown spot at the leaf level. Our results revealed that multiple stepwise linear regression could efficiently estimate disease severity with three wavebands in seven steps. The root mean square errors (RMSEs) for the training (n=210) and testing (n=53) datasets were 6.5% and 5.8%, respectively. Principal component analysis showed that the first principal component could explain approximately 80% of the variance of the original hyperspectral reflectance. The regression model with the first two principal components predicted disease severity with RMSEs of 16.3% and 13.9% for the training and testing datasets, respectively. Partial least-squares regression with seven extracted factors could most effectively predict disease severity compared with the other statistical methods, with RMSEs of 4.1% and 2.0% for the training and testing datasets, respectively. Our research demonstrates that it is feasible to estimate the severity of rice brown spot using hyperspectral reflectance data at the leaf level.
Legaie, D.; Pron, H.; Bissieux, C.
2008-11-01
Integral transforms (Laplace, Fourier, Hankel) are widely used to solve the heat diffusion equation. Moreover, it often appears relevant to realize the estimation of thermophysical properties in the transformed space. Here, an analytical model has been developed, leading to a well-posed inverse problem of parameter identification. Two black coatings, a thin black paint layer and an amorphous carbon film, were studied by photothermal infrared thermography. A Hankel transform has been applied on both thermal model and data and the estimation of thermal diffusivity has been achieved in the Hankel space. The inverse problem is formulated as a non-linear least square problem and a Gauss-Newton algorithm is used for the parameter identification.
Abo-Ezz, E. R.; Essa, K. S.
2016-04-01
A new linear least-squares approach is proposed to interpret magnetic anomalies of buried structures by using a new magnetic anomaly formula. This approach depends on solving different sets of algebraic linear equations in order to invert the depth (z), amplitude coefficient (K), and magnetization angle (θ) of buried structures using magnetic data. The utility and validity of the new approach have been demonstrated on various reliable synthetic data sets with and without noise. In addition, the method has been applied to field data sets from the USA and India. The best-fitting anomaly has been delineated by estimating the root-mean-square (rms) error. The approach is judged by comparing the obtained results with other available geological or geophysical information.
Roy Choudhury, Kingshuk; O'Sullivan, Finbarr; Kasman, Ian; Plowman, Greg D
2012-12-20
Measurements in tumor growth experiments are stopped once the tumor volume exceeds a preset threshold: a mechanism we term volume endpoint censoring. We argue that this type of censoring is informative. Further, least squares (LS) parameter estimates are shown to suffer a bias in a general parametric model for tumor growth with an independent and identically distributed measurement error, both theoretically and in simulation experiments. In a linear growth model, the magnitude of bias in the LS growth rate estimate increases with the growth rate and the standard deviation of measurement error. We propose a conditional maximum likelihood estimation procedure, which is shown both theoretically and in simulation experiments to yield approximately unbiased parameter estimates in linear and quadratic growth models. Both LS and maximum likelihood estimators have similar variance characteristics. In simulation studies, these properties appear to extend to the case of moderately dependent measurement error. The methodology is illustrated by application to a tumor growth study for an ovarian cancer cell line.
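A minimal Monte Carlo makes the claimed bias visible: stopping measurement once the value crosses a threshold selects an unusually high final point, which inflates the least-squares slope. This is a toy linear-growth simulation with invented parameters, not the authors' conditional maximum likelihood procedure:

```python
import random

def ls_slope(ts, vs):
    """Ordinary least-squares slope of vs against ts."""
    n = len(ts)
    tb = sum(ts) / n
    vb = sum(vs) / n
    num = sum((t - tb) * (v - vb) for t, v in zip(ts, vs))
    den = sum((t - tb) ** 2 for t in ts)
    return num / den

random.seed(1)
b_true, sd, threshold, reps = 1.0, 0.5, 5.0, 4000
slopes = []
for _ in range(reps):
    ts, vs = [], []
    t = 0
    while True:
        t += 1
        ts.append(t)
        vs.append(b_true * t + random.gauss(0.0, sd))
        if vs[-1] > threshold:   # volume endpoint censoring: stop measuring
            break
    if len(ts) >= 3:             # need a few points for a slope
        slopes.append(ls_slope(ts, vs))
mean_slope = sum(slopes) / len(slopes)
print(mean_slope)  # noticeably above the true slope of 1.0
```

The last included observation is conditioned to exceed the threshold, so its measurement error is positive on average; being at the largest time, it carries the highest leverage on the slope, producing the upward bias the abstract describes.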
Es-Sebaiy, Khalifa
2012-01-01
Let $\theta>0$. We consider a one-dimensional fractional Ornstein-Uhlenbeck process defined as $dX_t= -\theta X_t dt+dB_t,\quad t\geq0,$ where $B$ is a fractional Brownian motion of Hurst parameter $H\in(1/2,1)$. We are interested in the problem of estimating the unknown parameter $\theta$. For that purpose, we dispose of a discretized trajectory, observed at $n$ equidistant times $t_i=i\Delta_{n}, i=0,...,n$, and $T_n=n\Delta_{n}$ denotes the length of the `observation window'. We assume that $\Delta_{n} \rightarrow 0$ and $T_n\rightarrow \infty$ as $n\rightarrow \infty$. As an estimator of $\theta$ we choose the least squares estimator (LSE) $\hat{\theta}_{n}$. The consistency of this estimator is established. Explicit bounds for the Kolmogorov distance, in the case when $H\in(1/2,3/4)$, in the central limit theorem for the LSE $\hat{\theta}_{n}$ are obtained. These results hold without any kind of ergodicity assumption on the process $X$.
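A common discretized least-squares estimator of the drift, $\hat{\theta}_n = -\sum_i X_{t_i}(X_{t_{i+1}}-X_{t_i}) / (\Delta_n \sum_i X_{t_i}^2)$, can be illustrated on simulated data. For simplicity the sketch below drives the process with standard Brownian motion (the boundary case H = 1/2) rather than fractional noise with H > 1/2:

```python
import math
import random

random.seed(2)
theta, dt, n = 2.0, 0.01, 100_000
# Euler simulation of dX = -theta*X dt + dB with standard Brownian motion
# (H = 1/2): an illustrative special case, not the fractional H > 1/2 setting.
x = [0.0]
for _ in range(n):
    x.append(x[-1] - theta * x[-1] * dt + math.sqrt(dt) * random.gauss(0.0, 1.0))

# LSE: theta_hat = -sum X_i (X_{i+1} - X_i) / (dt * sum X_i^2)
num = sum(x[i] * (x[i + 1] - x[i]) for i in range(n))
den = dt * sum(x[i] ** 2 for i in range(n))
theta_hat = -num / den
print(theta_hat)  # close to theta = 2.0 for a long observation window
```

Consistency requires exactly the regime the abstract assumes: the step size shrinks while the total observation window T_n = n * dt grows.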
Jarmołowski, Wojciech
2017-07-01
Maximum likelihood (ML) and restricted maximum likelihood (REML) are nowadays very popular in geophysics, geodesy and many other fields. There is also a growing number of investigations into how to calculate covariance parameters by ML/REML accurately and quickly, and how to assure the convergence of the iteration steps in derivative-based approaches. The latter condition is not satisfied in many solutions, as it requires elaborate procedures or takes an unacceptable amount of time. This article applies efficient Fisher scoring (FS) to covariance parameter estimation in least-squares collocation (LSC). FS is optimized through Levenberg-Marquardt (LM) optimization, which provides stability in convergence when estimating the two covariance parameters necessary for LSC. The motivation for this work was the very large number of non-optimized FS implementations in the literature, as well as a deficiency of scientific and engineering applications. The worked example adds practical value to maximum likelihood estimation and FS, and shows a new application: an alternative approach to LSC, i.e. a parametrization with no empirical covariance estimation. The results of LM damping applied to FS (FSLM) require some additional research related to the optimal LM parameter. However, the method appears to be a milestone relative to non-optimized FS in terms of convergence. FS with LM provides reliable convergence, whose speed can be adjusted by manipulating the LM parameter.
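The Levenberg-Marquardt damping idea (add a damping term λ to the normal matrix, grow λ on failed steps and shrink it on successful ones) can be sketched on a toy two-parameter exponential least-squares fit. This illustrates only the damping mechanism, not the article's Fisher-scoring covariance estimation:

```python
import math

# Toy model y = a*exp(-k*x) with noiseless synthetic data (a=2, k=0.5).
xs = [0.2 * i for i in range(30)]
ys = [2.0 * math.exp(-0.5 * x) for x in xs]

def cost(a, k):
    return sum((a * math.exp(-k * x) - y) ** 2 for x, y in zip(xs, ys))

a, k, lam = 1.0, 1.0, 1e-2   # deliberately wrong start, modest damping
for _ in range(100):
    r = [a * math.exp(-k * x) - y for x, y in zip(xs, ys)]
    J = [(math.exp(-k * x), -a * x * math.exp(-k * x)) for x in xs]
    # damped normal equations: (J'J + lam*I) delta = -J'r  (2x2 solve)
    A11 = sum(j[0] * j[0] for j in J) + lam
    A12 = sum(j[0] * j[1] for j in J)
    A22 = sum(j[1] * j[1] for j in J) + lam
    g1 = sum(j[0] * ri for j, ri in zip(J, r))
    g2 = sum(j[1] * ri for j, ri in zip(J, r))
    det = A11 * A22 - A12 * A12
    d1 = (-g1 * A22 + g2 * A12) / det
    d2 = (g1 * A12 - g2 * A11) / det
    if cost(a + d1, k + d2) < cost(a, k):
        a, k, lam = a + d1, k + d2, lam / 3.0   # accept step, reduce damping
    else:
        lam *= 3.0                              # reject step, increase damping
print(a, k)  # converges to the generating parameters (2.0, 0.5)
```

Large λ turns the update into a short gradient-descent step (safe far from the optimum); small λ recovers the fast Gauss-Newton (or Fisher-scoring) step near it, which is the stability/speed trade-off the abstract tunes via the LM parameter.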
Migliorati, Giovanni
2015-08-28
We study the accuracy of the discrete least-squares approximation on a finite dimensional space of a real-valued target function from noisy pointwise evaluations at independent random points distributed according to a given sampling probability measure. The convergence estimates are given in mean-square sense with respect to the sampling measure. The noise may be correlated with the location of the evaluation and may have nonzero mean (offset). We consider both cases of bounded or square-integrable noise / offset. We prove conditions between the number of sampling points and the dimension of the underlying approximation space that ensure a stable and accurate approximation. Particular focus is on deriving estimates in probability within a given confidence level. We analyze how the best approximation error and the noise terms affect the convergence rate and the overall confidence level achieved by the convergence estimate. The proofs of our convergence estimates in probability use arguments from the theory of large deviations to bound the noise term. Finally we address the particular case of multivariate polynomial approximation spaces with any density in the beta family, including uniform and Chebyshev.
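A minimal instance of discrete least squares at random points: evaluate a target function (here one lying in the approximation space, plus noise) at many independent uniform samples and solve the normal equations. The specific target, noise level, and sample count are invented; the sample count m = 200 greatly exceeds the space dimension 3, in the spirit of the stability conditions discussed:

```python
import random

def solve(A, b):
    """Tiny Gauss-Jordan elimination with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[p] = M[p], M[c]
        for r in range(n):
            if r != c:
                f = M[r][c] / M[c][c]
                M[r] = [x - f * y for x, y in zip(M[r], M[c])]
    return [M[i][n] / M[i][i] for i in range(n)]

random.seed(3)
target = lambda x: 1.0 + 2.0 * x - 0.5 * x * x   # lies in the quadratic space
m, deg = 200, 2                                  # m >> dimension (= deg + 1)
xs = [random.uniform(-1, 1) for _ in range(m)]   # uniform sampling measure
ys = [target(x) + random.gauss(0.0, 0.1) for x in xs]  # noisy evaluations
# normal equations for least squares in the monomial basis
G = [[sum(x ** (i + j) for x in xs) for j in range(deg + 1)]
     for i in range(deg + 1)]
rhs = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(deg + 1)]
coef = solve(G, rhs)
print(coef)  # near the generating coefficients [1.0, 2.0, -0.5]
```

With far fewer samples than this, the random Gram matrix G can become ill-conditioned, which is exactly the stability issue the sampling conditions in the abstract are designed to rule out.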
Directory of Open Access Journals (Sweden)
Jingyan Song
2011-07-01
Full Text Available The star centroid estimation is the most important operation, which directly affects the precision of attitude determination for star sensors. This paper presents a theoretical study of the systematic error introduced by the star centroid estimation algorithm. The systematic error is analyzed through a frequency domain approach and numerical simulations. It is shown that the systematic error consists of the approximation error and the truncation error, which result from the discretization approximation and sampling window limitations, respectively. A criterion for choosing the size of the sampling window to reduce the truncation error is given in this paper. The systematic error can be evaluated as a function of the actual star centroid positions under different Gaussian widths of the star intensity distribution. In order to eliminate the systematic error, a novel compensation algorithm based on least squares support vector regression (LSSVR) with a Radial Basis Function (RBF) kernel is proposed. Simulation results show that when the compensation algorithm is applied to the 5-pixel star sampling window, the accuracy of star centroid estimation is improved from 0.06 to 6 × 10−5 pixels.
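The truncation error discussed above can be reproduced with a simple centre-of-mass centroider on a synthetic Gaussian spot. The spot width and window sizes below are invented for illustration; this shows the window effect only, not the paper's LSSVR compensation:

```python
import math

def centroid(window, true_c, sigma=0.8):
    """Centre-of-mass centroid of a sampled Gaussian spot over a finite
    pixel window; the window truncates the tails asymmetrically."""
    num = den = 0.0
    half = window // 2
    for px in range(-half, half + 1):
        # pixel intensity: Gaussian spot centred at true_c (pixel units)
        I = math.exp(-((px - true_c) ** 2) / (2 * sigma ** 2))
        num += px * I
        den += I
    return num / den

true_c = 0.3                    # sub-pixel star position
err5 = abs(centroid(5, true_c) - true_c)
err11 = abs(centroid(11, true_c) - true_c)
print(err5, err11)  # the larger window reduces the truncation error
```

When the true centre sits off the window midpoint, more of one tail falls outside the window, pulling the estimate toward the centre of the window; enlarging the window shrinks this systematic error, consistent with the window-size criterion in the abstract.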
Miranian, A; Abdollahzade, M
2013-02-01
Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, seem appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on the least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition and therefore normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yield a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and predictions of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with recent and old studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
Directory of Open Access Journals (Sweden)
Meng-Li Cao
2014-06-01
Full Text Available This paper investigates the problem of locating a continuous chemical source using the concentration measurements provided by a wireless sensor network (WSN). Such a problem exists in various applications: eliminating explosives or drugs, detecting the leakage of noxious chemicals, etc. The limited power and bandwidth of WSNs have motivated collaborative in-network processing, which is the focus of this paper. We propose a novel distributed least-squares estimation (DLSE) method to solve the chemical source localization (CSL) problem using a WSN. The DLSE method is realized by iteratively conducting convex combination of the locally estimated chemical source locations in a distributed manner. Performance assessments of our method are conducted using both simulations and real experiments. In the experiments, we propose a fitting method to identify both the release rate and the eddy diffusivity. The results show that the proposed DLSE method can overcome the negative interference of local minima and saddle points of the objective function, which would hinder the convergence of local search methods, especially in the case of locating a remote chemical source.
Quasi-least squares regression
Shults, Justine
2014-01-01
Drawing on the authors' substantial expertise in modeling longitudinal and clustered data, Quasi-Least Squares Regression provides a thorough treatment of quasi-least squares (QLS) regression-a computational approach for the estimation of correlation parameters within the framework of generalized estimating equations (GEEs). The authors present a detailed evaluation of QLS methodology, demonstrating the advantages of QLS in comparison with alternative methods. They describe how QLS can be used to extend the application of the traditional GEE approach to the analysis of unequally spaced longitu
Khaskheli, Abdul Rauf; Sirajuddin; Sherazi, S T H; Mahesar, S A; Kandhro, Aftab A; Kalwar, Nazar Hussain; Mallah, Muhammad Ali
2013-02-01
A rapid, reliable and cost-effective analytical procedure for the estimation of ibuprofen in pharmaceutical formulations and human urine samples was developed using transmission Fourier Transform Infrared (FT-IR) spectroscopy. For the determination of ibuprofen, a KBr window with a 500 μm spacer was used to acquire the FT-IR spectra of standards, pharmaceuticals and urine samples. A partial least squares (PLS) calibration model was developed based on the region from 1,807 to 1,461 cm(-1) using ibuprofen standards ranging from 10 to 100 μg ml(-1). The developed model was evaluated by cross-validation to determine the standard errors of the model: the root mean square error of calibration (RMSEC), root mean square error of cross-validation (RMSECV) and root mean square error of prediction (RMSEP). The coefficient of determination (R(2)) achieved was 0.998, with minimum errors in RMSEC, RMSECV and RMSEP of 1.89%, 1.63% and 4.07%, respectively. The method was successfully applied to urine and pharmaceutical samples and obtained good recovery (98-102%).
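The PLS idea, regressing on a latent score built from a data-driven weight vector rather than on thousands of raw wavenumbers, can be sketched with a single-component PLS1 step. The rank-one "spectra" below are fabricated so one component fits exactly; real calibrations like the one above use several components and cross-validation:

```python
def pls1_one_component(X, y):
    """One NIPALS-style PLS1 component on centered data:
    weight w ∝ X'y, score t = Xw, regression coefficient b on t."""
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
    return w, b

def predict(x, w, b):
    return b * sum(xj * wj for xj, wj in zip(x, w))

# Fabricated "spectra" with a single latent direction (rank-one X),
# and a response proportional to the latent score.
scores = [-2.0, -1.0, 0.0, 1.0, 2.0]
loading = [0.5, 1.0, 0.25]
X = [[s * l for l in loading] for s in scores]
y = [3.0 * s for s in scores]
w, b = pls1_one_component(X, y)
rmsep = (sum((predict(xi, w, b) - yi) ** 2
             for xi, yi in zip(X, y)) / len(y)) ** 0.5
print(rmsep)  # essentially zero for this rank-one example
```

Errors such as RMSEC/RMSECV/RMSEP in the abstract are this same root-mean-square residual computed on calibration, cross-validation, and independent prediction samples, respectively.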
Skou, Peter B; Berg, Thilo A; Aunsbjerg, Stina D; Thaysen, Dorrit; Rasmussen, Morten A; van den Berg, Frans
2017-03-01
Reuse of process water in dairy ingredient production (and food processing in general) opens the possibility of sustainable water regimes. Membrane filtration processes are an attractive source of process water recovery since the technology is already utilized in the dairy industry and its use is expected to grow considerably. At Arla Foods Ingredients (AFI), permeate from a reverse osmosis polisher filtration unit is sought to be reused as process water, replacing the intake of potable water. However, as for all dairy and food producers, the process water quality must be monitored continuously to ensure food safety. In the present investigation we found urea to be the main organic compound, which could potentially represent a microbiological risk. Near-infrared spectroscopy (NIRS) in combination with multivariate modeling has a long-standing reputation as a real-time measurement technology in quality assurance. Urea was quantified using NIRS and partial least squares regression (PLS) in the concentration range 50-200 ppm (RMSEP = 12 ppm, R(2) = 0.88) in laboratory settings, with potential for on-line application. A drawback of using NIRS together with PLS is that uncertainty estimates are seldom reported, though they are essential for establishing real-time risk assessment. In a multivariate regression setting, sample-specific prediction errors are needed, which complicates the uncertainty estimation. We give a straightforward strategy for implementing an already developed, but seldom used, method for estimating sample-specific prediction uncertainty, and we also suggest an improvement. Comparing independent reference analyses with the sample-specific prediction error estimates showed that the method worked on industrial samples when the model was appropriate and unbiased, and was simple to implement.
Application of penalized least squares estimation in height anomaly
Institute of Scientific and Technical Information of China (English)
张春晓; 王天宝; 鲁学军; 姜娉
2011-01-01
Model errors inevitably exist in the conventional least-squares fitting model of height anomaly. This article proposes that the model error can be treated as nonparametric information using penalized least squares, and discusses the effect of the regularizer R and the smoothing parameter α on the fitting results. Through research on the solution of the smoothing parameter, a method based on the function Xu(α) is presented and tested on GPS leveling measurement data. The results show that penalized least squares is better than the ordinary least-squares method in determining height anomaly.
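The role of the smoothing parameter can be shown with the simplest penalized least-squares problem: a single ridge-penalized coefficient, for which the estimate has the closed form β(α) = Σxy / (Σx² + α). This is a deliberately reduced stand-in for the article's penalized treatment of model error, with invented data:

```python
# Minimize sum (y_i - beta*x_i)^2 + alpha*beta^2 over beta.
xs = [1.0, 2.0, 3.0, 4.0, 5.0]
ys = [2.1, 3.9, 6.2, 7.8, 10.1]   # roughly y = 2x

def ridge_beta(alpha):
    """Closed-form penalized least-squares solution for one coefficient."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + alpha)

betas = {a: ridge_beta(a) for a in (0.0, 1.0, 10.0, 100.0)}
print(betas)  # estimates shrink toward zero as alpha grows
```

At α = 0 the estimate is plain least squares (here ≈ 2.0); increasing α trades data fit for the penalty, which is the balance the smoothing-parameter selection method in the abstract is choosing automatically.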
A Modified Quasi-Newton Method for Nonlinear Least Squares Problems
Institute of Scientific and Technical Information of China (English)
吴淦洲
2011-01-01
A modified quasi-Newton method for nonlinear least squares problems is proposed. By combining a non-monotone line search technique with a structured quasi-Newton method, we establish a modified quasi-Newton method for nonlinear least squares problems, and the global convergence of the algorithm is proved.
Directory of Open Access Journals (Sweden)
Agnaldo Donizete Ferreira de Carvalho
2008-01-01
Full Text Available The aim of this study was to compare REML/BLUP and Least Squares procedures in the prediction and estimation of genetic parameters and breeding values in soybean progenies. F2:3 and F4:5 progenies were evaluated in the 2005/06 growing season and the F2:4 and F4:6 generations derived thereof were evaluated in 2006/07. These progenies were originated from two semi-early experimental lines that differ in grain yield. The experiments were conducted in a lattice design and plots consisted of a 2 m row, spaced 0.5 m apart. The trait grain yield per plot was evaluated. It was observed that early selection is more efficient for the discrimination of the best lines from the F4 generation onwards. No practical differences were observed between the least squares and REML/BLUP procedures in the case of the models and simplifications for REML/BLUP used here.
Bayesian least squares deconvolution
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Bayesian least squares deconvolution
Ramos, A Asensio
2015-01-01
Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
Non-parametric Least Squares Estimation of Distribution Function
Institute of Scientific and Technical Information of China (English)
柴根象; 花虹; 尚汉冀
2002-01-01
By using the non-parametric least squares method, strongly consistent estimators of the distribution function and failure function are established, where the distribution function F(x) after a logit transformation is assumed to be approximated by a polynomial. Simulation results show that the estimators are highly satisfactory.
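The construction, a least-squares polynomial fit to the logit-transformed distribution function, can be sketched on the empirical CDF of simulated data. The degree-1 polynomial and the Gaussian sample are illustrative choices only:

```python
import math
import random

random.seed(4)
data = sorted(random.gauss(0.0, 1.0) for _ in range(2000))
n = len(data)

# Logit-transform the empirical CDF at a grid of order statistics,
# staying away from the tails where the logit blows up.
xs, zs = [], []
for i in range(50, n - 50, 50):
    F = (i + 0.5) / n
    xs.append(data[i])
    zs.append(math.log(F / (1.0 - F)))     # logit of the ECDF

# Ordinary least-squares line z ≈ c0 + c1*x (degree-1 "polynomial").
m = len(xs)
xb = sum(xs) / m
zb = sum(zs) / m
c1 = (sum((x - xb) * (z - zb) for x, z in zip(xs, zs))
      / sum((x - xb) ** 2 for x in xs))
c0 = zb - c1 * xb

# Invert the logit to get a smooth estimate of F.
F_hat = lambda x: 1.0 / (1.0 + math.exp(-(c0 + c1 * x)))
print(F_hat(0.0))  # estimate of F(0), which is 0.5 for the standard normal
```

The inverse-logit guarantees the fitted F stays in (0, 1) and is monotone whenever the fitted polynomial is increasing, which is what makes the transformation attractive for distribution-function estimation.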
Udink ten Cate, A.J.
1985-01-01
Discrete-time least-squares algorithms for recursive parameter estimation have continuous-time counterparts, which minimize a quadratic functional. The continuous-time algorithms can also include (in)equality constraints. Asymptotic convergence is demonstrated by means of Lyapunov methods. The constrained algorithms are applied in a stabilized output error configuration for parameter estimation in stochastic linear systems.
Directory of Open Access Journals (Sweden)
Xiangwei Guo
2016-02-01
Full Text Available An estimation of the power battery state of charge (SOC is related to the energy management, the battery cycle life and the use cost of electric vehicles. When a lithium-ion power battery is used in an electric vehicle, the SOC displays a very strong time-dependent nonlinearity under the influence of random factors, such as the working conditions and the environment. Hence, research on estimating the SOC of a power battery for an electric vehicle is of great theoretical significance and application value. In this paper, according to the dynamic response of the power battery terminal voltage during a discharging process, the second-order RC circuit is first used as the equivalent model of the power battery. Subsequently, on the basis of this model, the least squares method (LS with a forgetting factor and the adaptive unscented Kalman filter (AUKF algorithm are used jointly in the estimation of the power battery SOC. Simulation experiments show that the joint estimation algorithm proposed in this paper has higher precision and convergence of the initial value error than a single AUKF algorithm.
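The forgetting-factor least squares used above for on-line parameter tracking can be sketched in the scalar case. The drifting gain, input range, and noise level below are invented; this illustrates only the recursion, not the paper's second-order RC battery model or the AUKF:

```python
import random

random.seed(5)
lam = 0.95          # forgetting factor < 1 discounts old data
theta_hat, P = 0.0, 100.0   # initial estimate and (large) initial covariance
for t in range(400):
    theta_true = 1.0 if t < 200 else 2.0   # parameter changes mid-stream
    x = random.uniform(0.5, 1.5)           # regressor (e.g. a measured input)
    y = theta_true * x + random.gauss(0.0, 0.05)
    # recursive least-squares update with forgetting:
    K = P * x / (lam + x * P * x)
    theta_hat += K * (y - x * theta_hat)
    P = (P - K * x * P) / lam
print(theta_hat)  # has re-converged near 2.0 after the change
```

With λ = 1 the recursion weights all history equally and would adapt to the jump only very slowly; λ < 1 gives an effective memory of roughly 1/(1 - λ) samples, which is why the combined LS-with-forgetting/AUKF scheme can track slowly varying battery parameters.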
Cao, Hui; Li, Yao-Jiang; Zhou, Yan; Wang, Yan-Xia
2014-11-01
To deal with the nonlinear characteristics of spectral data from the thermal power plant flue, a nonlinear partial least squares (PLS) method with a neural-network internal model is adopted in the paper. The latent variables of the independent and dependent variables are first extracted by PLS regression, and they are then used as the inputs and outputs, respectively, of a neural network that is trained to build the nonlinear internal model. For the flue gas spectra of the thermal power plant, PLS, nonlinear PLS with a back-propagation neural network internal model (BP-NPLS), nonlinear PLS with a radial basis function neural network internal model (RBF-NPLS) and nonlinear PLS with an adaptive fuzzy inference system internal model (ANFIS-NPLS) are compared. The root mean square error of prediction (RMSEP) for sulfur dioxide is reduced by 16.96%, 16.60% and 19.55% for BP-NPLS, RBF-NPLS and ANFIS-NPLS, respectively, relative to PLS. The RMSEP for nitric oxide is reduced by 8.60%, 8.47% and 10.09%, respectively, and the RMSEP for nitrogen dioxide by 2.11%, 3.91% and 3.97%, respectively. Experimental results show that nonlinear PLS is more suitable than PLS for the quantitative analysis of flue gas. Moreover, by using neural network functions that can closely approximate nonlinear characteristics, the nonlinear partial least squares methods with the internal models discussed here have good predictive capability and robustness, and to a certain extent overcome the limitations of nonlinear partial least squares with other internal models such as polynomial and spline functions. ANFIS-NPLS has the best performance, as the adaptive fuzzy inference system internal model is able to learn more and reduce the residuals effectively. Hence, ANFIS-NPLS is an
A novel extended kernel recursive least squares algorithm.
Zhu, Pingping; Chen, Badong; Príncipe, José C
2012-08-01
In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.
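The hidden-state estimation in the record above uses a standard Kalman filter; the measurement model is what KRLS learns. A generic linear predict/update cycle, applied to a toy constant-velocity track rather than the authors' vehicle or fading-channel setups (matrices and data are illustrative assumptions):

```python
import numpy as np

def kalman_step(x, P, F, Q, H, R, z):
    """One predict/update cycle of a linear Kalman filter."""
    x, P = F @ x, F @ P @ F.T + Q                 # predict
    S = H @ P @ H.T + R                           # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)                # Kalman gain
    x = x + K @ (z - H @ x)                       # correct with measurement
    P = (np.eye(len(x)) - K @ H) @ P
    return x, P

# constant-velocity model: state = [position, velocity], position measured
F = np.array([[1.0, 1.0], [0.0, 1.0]])
H = np.array([[1.0, 0.0]])
Q, R = 1e-4 * np.eye(2), np.array([[0.01]])

rng = np.random.default_rng(1)
x, P = np.zeros(2), np.eye(2)
for k in range(1, 51):                            # true trajectory: pos = k, vel = 1
    z = np.array([k + 0.1 * rng.standard_normal()])
    x, P = kalman_step(x, P, F, Q, H, R, z)
```

In the Ex-KRLS setting described above, the linear measurement matrix H would be replaced by a measurement function learned in the RKHS.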
Directory of Open Access Journals (Sweden)
Moehammad Awaluddin
2012-07-01
Full Text Available Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the Bengkulu earthquake of September 12, 2007. A maximum horizontal displacement of 2.11 m was observed at the PRKB station, while the vertical component at the BSAT station was uplifted by a maximum of 0.73 m and the vertical component at the LAIS station subsided by 0.97 m. Adding constraints to the inversion for the Bengkulu earthquake slip distribution from GPS observations helps solve a least squares inversion under an under-determined condition. Checkerboard tests were performed to guide the weighting of the constraints. The inversion of the Bengkulu earthquake slip distribution yielded an optimal slip distribution with a smoothing-constraint weight of 0.001 and a slip-value constraint weight of 0 at the edge of the earthquake rupture area. The maximum coseismic slip of the optimal inversion was 5.12 m, in the area below the PRKB and BSAT stations. The seismic moment calculated from the optimal slip distribution was 7.14 x 10^21 Nm, which is equivalent to a magnitude of 8.5.
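The weighted smoothing constraint described above can be imposed by augmenting the least squares system with extra rows. A toy sketch with a hypothetical 4-observation, 8-patch problem (the Green's-function matrix, slip values and weights are made up, not the paper's):

```python
import numpy as np

def constrained_inversion(G, d, L, w_smooth):
    """Solve min ||G m - d||^2 + w_smooth^2 ||L m||^2 by row stacking."""
    A = np.vstack([G, w_smooth * L])
    b = np.concatenate([d, np.zeros(L.shape[0])])
    m, *_ = np.linalg.lstsq(A, b, rcond=None)
    return m

rng = np.random.default_rng(2)
G = rng.standard_normal((4, 8))                  # under-determined: 4 obs, 8 patches
true_slip = np.array([0.0, 1.0, 2.0, 3.0, 3.0, 2.0, 1.0, 0.0])
d = G @ true_slip                                # synthetic surface displacements

L = np.diff(np.eye(8), n=2, axis=0)              # second-difference smoothing operator
m = constrained_inversion(G, d, L, w_smooth=0.1)
```

The smoothing rows make the otherwise rank-deficient system solvable, selecting the smoothest slip model among those that fit the data.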
Collinearity in Least-Squares Analysis
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
Total Least Squares Estimator of Principal Components under Multicollinearity
Institute of Scientific and Technical Information of China (English)
刘文丽; 吕书龙; 梁飞豹
2011-01-01
To address the limitations of least squares parameter estimation when multi-dimensional explanatory variables exhibit multicollinearity, a total least squares estimator of principal components is proposed, which avoids inverting a singular matrix. In extensive tests, the regression coefficients computed with this estimator show small and stable mean absolute deviations, clearly outperforming both the least squares estimator (LSE) and the total least squares estimator.
Institute of Scientific and Technical Information of China (English)
袁平; 丁峰
2008-01-01
Using the Kronecker product, an identification model for multivariable ARX-like stochastic systems is derived, and a hierarchical least squares parameter estimation algorithm is developed based on the hierarchical identification principle. The proposed hierarchical algorithm requires less computation than the existing recursive least squares algorithm. A simulation example is included.
Directory of Open Access Journals (Sweden)
W. Marzocchi
2007-06-01
Full Text Available We investigate conceptually, analytically, and numerically the biases in the estimation of the b-value of the Gutenberg-Richter law and of its uncertainty made through the least squares technique. The biases are introduced by the cumulation operation for the cumulative form of the Gutenberg-Richter law, by the logarithmic transformation, and by the measurement errors on the magnitude. We find that the least squares technique, applied to the cumulative and binned form of the Gutenberg-Richter law, produces strong bias in the b-value and its uncertainty, whose amplitudes depend on the size of the sample. Furthermore, the logarithmic transformation produces two different endemic bends in the Log(N versus M curve. This means that this plot might produce fake significant departures from the Gutenberg-Richter law. The effect of the measurement errors is negligible compared to those of cumulation operation and logarithmic transformation. The results obtained show that the least squares technique should never be used to determine the slope of the Gutenberg-Richter law and its uncertainty.
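The bias the authors describe can be seen by comparing the least squares slope of the cumulative Log(N) versus M curve with Aki's maximum-likelihood b-value estimator on synthetic Gutenberg-Richter magnitudes (a sketch; the sample size and binning step are arbitrary choices, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)
m_min, b_true = 2.0, 1.0
beta = b_true * np.log(10.0)
mags = m_min + rng.exponential(1.0 / beta, 5000)      # G-R distributed magnitudes

# Aki's maximum-likelihood estimator (the recommended alternative)
b_mle = np.log10(np.e) / (mags.mean() - m_min)

# least squares slope of log10(cumulative count) versus magnitude
edges = np.arange(m_min, mags.max(), 0.1)
n_cum = np.array([(mags >= m).sum() for m in edges])
keep = n_cum > 0
slope, _ = np.polyfit(edges[keep], np.log10(n_cum[keep]), 1)
b_ls = -slope
```

The MLE is nearly unbiased here, while the least squares slope on cumulative counts inherits the correlated-point and log-transformation biases discussed above.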
Online Nonlinear Process Monitoring Using Kernel Partial Least Squares
Institute of Scientific and Technical Information of China (English)
胡益; 王丽; 马贺贺; 侍洪波
2011-01-01
To handle the nonlinear characteristics of process monitoring data, a new technique based on kernel partial least squares (KPLS) is developed. KPLS is an improved partial least squares (PLS) method whose main idea is to map the input space into a high-dimensional feature space via a nonlinear kernel function and then apply standard PLS in that feature space. Compared to linear PLS, KPLS makes full use of the sample-space information and effectively captures the nonlinear relationship between input and output variables. Unlike other nonlinear PLS methods, KPLS requires only linear algebra and does not involve any nonlinear optimization. For process monitoring, a KPLS model is first built to obtain the score vectors, and the T2 and SPE statistics and their corresponding control limits are then calculated. A case study of the Tennessee Eastman (TE) process shows that the proposed approach delivers better process monitoring performance than linear PLS.
Mayotte, Jean-Marc; Grabs, Thomas; Sutliff-Johansson, Stacy; Bishop, Kevin
2017-06-01
This study examined how the inactivation of bacteriophage MS2 in water was affected by ionic strength (IS) and dissolved organic carbon (DOC) using static batch inactivation experiments at 4 °C conducted over a period of 2 months. Experimental conditions were characteristic of an operational managed aquifer recharge (MAR) scheme in Uppsala, Sweden. Experimental data were fit with constant and time-dependent inactivation models using two methods: (1) traditional linear and nonlinear least-squares techniques; and (2) a Monte-Carlo based parameter estimation technique called generalized likelihood uncertainty estimation (GLUE). The least-squares and GLUE methodologies gave very similar estimates of the model parameters and their uncertainty. This demonstrates that GLUE can be used as a viable alternative to traditional least-squares parameter estimation techniques for fitting of virus inactivation models. Results showed a slight increase in constant inactivation rates following an increase in the DOC concentrations, suggesting that the presence of organic carbon enhanced the inactivation of MS2. The experiment with a high IS and a low DOC was the only experiment which showed that MS2 inactivation may have been time-dependent. However, results from the GLUE methodology indicated that models of constant inactivation were able to describe all of the experiments. This suggested that inactivation time-series longer than 2 months were needed in order to provide concrete conclusions regarding the time-dependency of MS2 inactivation at 4 °C under these experimental conditions.
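A minimal illustration of the GLUE idea used above, on a synthetic first-order inactivation series (the rate, noise level and behavioral acceptance threshold are assumptions for the sketch, not the study's values):

```python
import numpy as np

rng = np.random.default_rng(4)
t = np.arange(0.0, 61.0, 10.0)                           # sampling times (days)
log_c = -0.05 * t + 0.02 * rng.standard_normal(t.size)   # log relative concentration

# GLUE: Monte-Carlo sample the inactivation rate, keep "behavioral" sets
k_samples = rng.uniform(0.0, 0.2, 10000)
sse = ((log_c[None, :] + k_samples[:, None] * t[None, :]) ** 2).sum(axis=1)
behavioral = k_samples[sse < 2.0 * sse.min()]            # informal acceptance rule
k_glue = behavioral.mean()

# classical least squares (slope through the origin) for comparison
k_ls = -(t @ log_c) / (t @ t)
```

As the study found, the two approaches give very similar point estimates; GLUE additionally returns the spread of the behavioral set as an uncertainty measure.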
Directory of Open Access Journals (Sweden)
Omholt Stig W
2011-06-01
Full Text Available Abstract Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of the mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback
Institute of Scientific and Technical Information of China (English)
赵旭; 薛留根; 李婧兰; 程维虎
2012-01-01
The generalized Pareto distribution (GPD) is one of the most important distributions in statistical analysis. This paper is based on sample quantiles of the GPD. First, a shape parameter estimator with high precision is obtained; then, approximate generalized least squares expressions for the location and scale parameters of the GPD are derived. The proposed method is simple, imposes no restriction on the shape parameter, and shows high estimation accuracy in Monte Carlo simulation tests.
A SUCCESSIVE LEAST SQUARES METHOD FOR STRUCTURED TOTAL LEAST SQUARES
Institute of Scientific and Technical Information of China (English)
Plamen Y. Yalamov; Jin-yun Yuan
2003-01-01
A new method for Total Least Squares (TLS) problems is presented. It differs from previous approaches and is based on the solution of successive Least Squares problems.The method is quite suitable for Structured TLS (STLS) problems. We study mostly the case of Toeplitz matrices in this paper. The numerical tests illustrate that the method converges to the solution fast for Toeplitz STLS problems. Since the method is designed for general TLS problems, other structured problems can be treated similarly.
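For the unstructured case, the classical TLS solution has a closed form via the SVD of the augmented matrix [A b]; the successive-LS method above targets the structured variants for which this closed form destroys the structure. A sketch of the classical solution on toy errors-in-variables data (data and noise levels are assumptions):

```python
import numpy as np

def tls(A, b):
    """Classical total least squares via the SVD of [A b]."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # right singular vector of smallest singular value
    return -v[:-1] / v[-1]           # x such that [x; -1] spans the null direction

rng = np.random.default_rng(5)
A = rng.standard_normal((50, 2))
x_true = np.array([1.5, -0.5])
b = A @ x_true
# perturb both A and b (errors-in-variables setting)
x_hat = tls(A + 0.01 * rng.standard_normal(A.shape),
            b + 0.01 * rng.standard_normal(b.shape))
```

For a Toeplitz STLS problem, the rank-one correction implied by this SVD step would not be Toeplitz, which is why structure-preserving iterations like the one above are needed.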
Chen, Hong-Yan; Zhao, Geng-Xing; Li, Xi-Can; Wang, Xiang-Feng; Li, Yu-Ling
2013-11-01
Taking Qihe County in Shandong Province of East China as the study area, soil samples were collected in the field. Based on hyperspectral reflectance measurements of the soil samples and a first-derivative transformation, the spectra were denoised and compressed by the discrete wavelet transform (DWT), the variables for the soil alkali-hydrolysable nitrogen estimation models were selected by genetic algorithms (GA), and the estimation models for soil alkali-hydrolysable nitrogen content were built using partial least squares (PLS) regression. The discrete wavelet transform and genetic algorithm combined with partial least squares (DWT-GA-PLS) not only compressed the spectrum variables and reduced the number of model variables, but also improved the estimation accuracy of soil alkali-hydrolysable nitrogen content. Based on the level 1-2 low-frequency coefficients of the discrete wavelet transform, and despite the large reduction in the number of spectral variables, the calibration models achieved prediction accuracy equal to or higher than models using the full soil spectra. The model based on the second-level low-frequency coefficients had the highest precision, with a prediction R2 of 0.85, an RMSE of 8.11 mg x kg(-1), and an RPD of 2.53, indicating the effectiveness of the DWT-GA-PLS method in estimating soil alkali-hydrolysable nitrogen content.
A Restricted Least Squares Estimation for Fuzzy Linear Regression Models
Institute of Scientific and Technical Information of China (English)
王宁; 张文修
2006-01-01
Fuzzy linear regression has been extensively studied since its inception, symbolized by the work of Tanaka et al. in 1982. As one of the main estimation methods, the fuzzy least squares approach is appealing because it corresponds, to some extent, to the well-known statistical regression analysis. In this article, based on a suitably defined distance between two fuzzy numbers, a restricted least squares method is proposed to fit fuzzy linear models with crisp inputs and symmetric fuzzy output. The method yields non-negative spreads for the estimated fuzzy parameters, and the center line of the fitted fuzzy output coincides with the traditional least squares estimate, which is of particular importance to a decision maker. Numerical examples demonstrate the practical application of the proposed method.
Feng, Jie; Wang, Zhe; Li, Lizhi; Li, Zheng; Ni, Weidou
2013-03-01
A nonlinearized multivariate dominant factor-based partial least-squares (PLS) model was applied to coal elemental concentration measurement. For C concentration determination in bituminous coal, the intensities of multiple characteristic lines of the main elements in coal were applied to construct a comprehensive dominant factor that would provide main concentration results. A secondary PLS thereafter applied would further correct the model results by using the entire spectral information. In the dominant factor extraction, nonlinear transformation of line intensities (based on physical mechanisms) was embedded in the linear PLS to describe nonlinear self-absorption and inter-element interference more effectively and accurately. According to the empirical expression of self-absorption and Taylor expansion, nonlinear transformations of atomic and ionic line intensities of C were utilized to model self-absorption. Then, the line intensities of other elements, O and N, were taken into account for inter-element interference, considering the possible recombination of C with O and N particles. The specialty of coal analysis by using laser-induced breakdown spectroscopy (LIBS) was also discussed and considered in the multivariate dominant factor construction. The proposed model achieved a much better prediction performance than conventional PLS. Compared with our previous, already improved dominant factor-based PLS model, the present PLS model obtained the same calibration quality while decreasing the root mean square error of prediction (RMSEP) from 4.47 to 3.77%. Furthermore, with the leave-one-out cross-validation and L-curve methods, which avoid the overfitting issue in determining the number of principal components instead of minimum RMSEP criteria, the present PLS model also showed better performance for different splits of calibration and prediction samples, proving the robustness of the present PLS model.
AKLSQF - LEAST SQUARES CURVE FITTING
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user to specify either the tolerable least squares error in the fitting or the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out in double precision format for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC XT/AT or compatible using Microsoft's QuickBasic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
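The degree-escalation strategy described for AKLSQF can be sketched in a few lines; here NumPy's `polyfit` stands in for the orthogonal-polynomial machinery, and the test function and tolerance are arbitrary choices:

```python
import numpy as np

def fit_to_tolerance(x, y, tol, max_degree=20):
    """Raise the polynomial degree until the least-squares RMS error meets tol."""
    for deg in range(1, max_degree + 1):
        coeffs = np.polyfit(x, y, deg)
        rms = np.sqrt(np.mean((np.polyval(coeffs, x) - y) ** 2))
        if rms <= tol:
            break
    return deg, coeffs, rms

x = np.linspace(0.0, 1.0, 21)        # uniformly spaced data, as AKLSQF expects
y = np.cos(2.0 * np.pi * x)
deg, coeffs, rms = fit_to_tolerance(x, y, tol=1e-3)
```

AKLSQF's two-step route (orthogonal polynomials, then conversion via Stirling numbers) achieves the same result with better conditioning at high degrees than a direct monomial fit.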
Directory of Open Access Journals (Sweden)
Fei Jin
2013-05-01
Full Text Available This paper studies the generalized spatial two stage least squares (GS2SLS estimation of spatial autoregressive models with autoregressive disturbances when there are endogenous regressors with many valid instruments. Using many instruments may improve the efficiency of estimators asymptotically, but the bias might be large in finite samples, making the inference inaccurate. We consider the case that the number of instruments K increases with, but at a rate slower than, the sample size, and derive the approximate mean square errors (MSE that account for the trade-offs between the bias and variance, for both the GS2SLS estimator and a bias-corrected GS2SLS estimator. A criterion function for the optimal K selection can be based on the approximate MSEs. Monte Carlo experiments are provided to show the performance of our procedure of choosing K.
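The basic two-stage least squares step underlying GS2SLS (without the spatial structure or the many-instrument bias correction studied in the paper) can be sketched on simulated data with one endogenous regressor (data-generating values are illustrative assumptions):

```python
import numpy as np

def tsls(y, X, Z):
    """Generic two-stage least squares: regress y on the projection of X onto Z."""
    X_hat = Z @ np.linalg.lstsq(Z, X, rcond=None)[0]   # first stage: fitted values
    beta, *_ = np.linalg.lstsq(X_hat, y, rcond=None)   # second stage
    return beta

rng = np.random.default_rng(6)
n = 5000
Z = rng.standard_normal((n, 2))                        # valid instruments
v = rng.standard_normal(n)
u = 0.8 * v + 0.6 * rng.standard_normal(n)             # error correlated with v
x = Z @ np.array([1.0, 0.5]) + v                       # endogenous regressor
y = 2.0 * x + u
beta = tsls(y, x[:, None], Z)
```

OLS of y on x would be biased upward here because x and u share the component v; projecting onto the instruments removes that correlation. The paper's concern is how this picture changes as the instrument count K grows with n.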
Least Squares Data Fitting with Applications
DEFF Research Database (Denmark)
Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela
As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data predictively. The main concern of Least Squares Data Fitting with Applications is how to do this on a computer with efficient and robust computational methods for linear and nonlinear relationships. The presentation also establishes a link between the statistical setting and the computational issues. Anyone working with problems of linear and nonlinear least squares fitting will find this book invaluable as a hands-on guide, with accessible text and carefully explained problems. Included are an overview of computational methods together with their properties and advantages, and topics from statistical regression analysis.
DEFF Research Database (Denmark)
Sadiq, Muhammad; Tscherning, Carl C.; Ahmad, Zulfiqar
2009-01-01
This paper deals with the analysis of gravity anomaly and precise levelling in conjunction with GPS-levelling data for the computation of a gravimetric geoid and an estimate of the height-system bias parameter N-o for the vertical datum in Pakistan by means of the least squares collocation technique. The long-term objective is to obtain a regional geoid (or quasi-geoid) model using a combination of local data with a high degree and order Earth gravity model (EGM) and to determine a bias (if there is one) with respect to a global mean sea surface. An application of collocation with the optimal covariance parameters has made it possible to obtain gravimetric height anomalies in a global geocentric datum. The residual terrain modelling (RTM) technique has been used in combination with EGM96 for the reduction and smoothing of the gravity data. A value for the bias parameter N-o has been estimated...
Bayesian Sparse Partial Least Squares
Vidaurre, D.; Gerven, M.A.J. van; Bielza, C.; Larrañaga, P.; Heskes, T.M.
2013-01-01
Partial least squares (PLS) is a class of methods that makes use of a set of latent or unobserved variables to model the relation between (typically) two sets of input and output variables, respectively. Several flavors, depending on how the latent variables or components are computed, have been dev
Wang, Yan-Cang; Yang, Gui-Jun; Zhu, Jin-Shan; Gu, Xiao-He; Xu, Peng; Liao, Qin-Hong
2014-07-01
For improving the estimation accuracy of the soil organic matter content of the northern fluvo-aquic soil, wavelet transform technology is introduced. The soil samples were collected from Tongzhou District and Shunyi District in Beijing, and the data source is soil hyperspectral data obtained under laboratory conditions. First, the discrete wavelet transform efficiently decomposes the hyperspectra into approximate coefficients and detail coefficients. Then, the correlation between the approximate coefficients, the detail coefficients and the organic matter content is analyzed, and the bands sensitive to organic matter are screened. Finally, models are established to estimate the soil organic content using partial least squares regression (PLSR). Results show that the NIR bands contribute more than the visible bands in the organic matter estimation models; the ability of the approximate coefficients to estimate organic matter content is better than that of the detail coefficients; the estimation precision of the detail coefficients for soil organic matter content decreases as the spectral resolution becomes lower; and, compared with the three commonly used soil spectral reflectance transforms, the wavelet transform improves the ability of soil spectra to estimate organic content. The best models established from the approximate and detail coefficients have high accuracy: the coefficient of determination (R2) and root mean square error (RMSE) of the best model for the approximate coefficients are 0.722 and 0.221, respectively, and the R2 and RMSE of the best model for the detail coefficients are 0.670 and 0.255, respectively.
Duong, Van-Huan; Bastawrous, Hany Ayad; Lim, KaiChin; See, Khay Wai; Zhang, Peng; Dou, Shi Xue
2015-11-01
This paper deals with the contradiction between simplicity and accuracy of the LiFePO4 battery states estimation in the electric vehicles (EVs) battery management system (BMS). State of charge (SOC) and state of health (SOH) are normally obtained from estimating the open circuit voltage (OCV) and the internal resistance of the equivalent electrical circuit model of the battery, respectively. The difficulties of the parameters estimation arise from their complicated variations and different dynamics which require sophisticated algorithms to simultaneously estimate multiple parameters. This, however, demands heavy computation resources. In this paper, we propose a novel technique which employs a simplified model and multiple adaptive forgetting factors recursive least-squares (MAFF-RLS) estimation to provide capability to accurately capture the real-time variations and the different dynamics of the parameters whilst the simplicity in computation is still retained. The validity of the proposed method is verified through two standard driving cycles, namely Urban Dynamometer Driving Schedule and the New European Driving Cycle. The proposed method yields experimental results that not only estimated the SOC with an absolute error of less than 2.8% but also characterized the battery model parameters accurately.
Least-squares fitting Gompertz curve
Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf
2004-08-01
In this paper we consider the least-squares (LS) fitting of the Gompertz curve to given nonconstant data (p_i, t_i, y_i), i = 1, ..., m, m ≥ 3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
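A least-squares Gompertz fit of the kind analyzed above can be sketched with a plain Gauss-Newton iteration; the noiseless toy data and nearby starting point are assumptions (the paper's existence conditions say nothing about global convergence, so a production fit would add step control):

```python
import numpy as np

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

def fit_gompertz(t, y, p0, n_iter=50):
    """Undamped Gauss-Newton for the Gompertz least-squares problem."""
    a, b, c = p0
    for _ in range(n_iter):
        e = np.exp(-c * t)
        f = a * np.exp(-b * e)
        r = y - f                                  # residuals
        # Jacobian of the model with respect to (a, b, c)
        J = np.column_stack([f / a, -f * e, f * b * t * e])
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        a, b, c = np.array([a, b, c]) + step
    return a, b, c

t = np.linspace(0.0, 10.0, 40)
y = gompertz(t, 100.0, 5.0, 0.6)
a, b, c = fit_gompertz(t, y, p0=(95.0, 4.5, 0.55))
```

A good initial approximation, as the paper emphasizes, is what makes this simple iteration reliable.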
Consistent Partial Least Squares Path Modeling
Dijkstra, Theo K.; Henseler, Jörg
2015-01-01
This paper resumes the discussion in information systems research on the use of partial least squares (PLS) path modeling and shows that the inconsistency of PLS path coefficient estimates in the case of reflective measurement can have adverse consequences for hypothesis testing. To remedy this, the
Penalized Least Squares Estimation of the Growth Curve Model
Institute of Scientific and Technical Information of China (English)
高采文; 朱晓琳; 曾林蕊
2014-01-01
This paper studies the estimation of the parameter matrix in the growth curve model. First, based on the Potthoff-Roy transformation of the growth curve model, penalized least squares estimators of the parameter matrix are given using different penalty functions: the hard thresholding function, LASSO, ENET, adaptive LASSO and SCAD. The penalized least squares estimator is then defined directly for the untransformed growth curve model, and a numerical solution algorithm based on the Nelder-Mead method is given. Finally, the proposed estimation methods are evaluated by simulation. The results show that the adaptive LASSO performs best.
Noorizadeh, H; Sobhan Ardakani, S; Ahmadi, T; Mortazavi, S S; Noorizadeh, M
2013-02-01
Genetic algorithm (GA), partial least squares (PLS) and kernel PLS (KPLS) techniques were used to investigate the correlation between immobilized liposome chromatography partitioning (log Ks) and descriptors for 65 drug compounds. The models were validated using leave-group-out cross-validation (LGO-CV). The results indicate that GA-KPLS can be used as an alternative modelling tool for quantitative structure-property relationship (QSPR) studies.
Anekawati, Anik; Widjanarko Otok, Bambang; Purhadi; Sutikno
2017-06-01
Research in education often involves latent variables. A statistical analysis technique with the ability to analyze the pattern of relationships among latent variables, as well as between latent variables and their indicators, is structural equation modeling (SEM). Partial least squares SEM (PLS-SEM) was developed as an alternative for when these conditions hold: the theory underlying the design of the model is weak, no particular measurement scale is assumed, the sample size need not be large, and the data need not follow a multivariate normal distribution. The purpose of this paper is to compare models of educational quality at the high school level (SMA/MA) in Sumenep Regency obtained with the partial least squares structural equation modeling approach under three factor-score estimation schemes. This paper results from explanatory research using secondary data from the Sumenep Education Department and Badan Pusat Statistik (BPS) Sumenep, namely Sumenep in Figures and the Districts of Sumenep in Figures for the year 2015. The units of observation were the districts of Sumenep, consisting of 18 districts on the mainland and 9 districts on the islands. There were two endogenous variables and one exogenous variable. The endogenous variables are the quality of education at the SMA/MA level (Y1) and school infrastructure (Y2), whereas the exogenous variable is socio-economic condition (X1). One improved model, given by the path scheme, is consistent, has all indicators valid, and has an increased R-square value: Y1 = 0.651 Y2. In this model, the quality of education is influenced only by school infrastructure (0.651); socio-economic condition affects neither school infrastructure nor the quality of education. If school infrastructure increases by 1 point, the quality of education increases by 0.651 points. The quality of education had an R2 of 0
Abdi, Hervé; Williams, Lynne J
2013-01-01
Partial least square (PLS) methods (also sometimes called projection to latent structures) relate the information present in two data tables that collect measurements on the same set of observations. PLS methods proceed by deriving latent variables which are (optimal) linear combinations of the variables of a data table. When the goal is to find the shared information between two tables, the approach is equivalent to a correlation problem and the technique is then called partial least square correlation (PLSC) (also sometimes called PLS-SVD). In this case there are two sets of latent variables (one set per table), and these latent variables are required to have maximal covariance. When the goal is to predict one data table from the other one, the technique is then called partial least square regression (PLSR). In this case there is one set of latent variables (derived from the predictor table) and these latent variables are required to give the best possible prediction. In this paper we present and illustrate PLSC and PLSR and show how these descriptive multivariate analysis techniques can be extended to deal with inferential questions by using cross-validation techniques such as the bootstrap and permutation tests.
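As an illustration of the regression variant (PLSR), the following sketch implements the classical NIPALS algorithm for a single response on synthetic data. The data, component count, and function name are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 regression via the NIPALS algorithm (single response).

    Returns the coefficient vector b and the centering means, so that
    predictions are yhat = (Xnew - x_mean) @ b + y_mean.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)           # weight vector
        t = Xc @ w                       # scores (latent variable)
        tt = t @ t
        p = Xc.T @ t / tt                # X loadings
        qk = yc @ t / tt                 # y loading
        Xc = Xc - np.outer(t, p)         # deflate X
        yc = yc - qk * t                 # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    b = W @ np.linalg.solve(P.T @ W, q)  # regression coefficients
    return b, x_mean, y_mean

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 10))
y = X @ rng.normal(size=10) + 0.01 * rng.normal(size=100)
b, xm, ym = pls1_fit(X, y, n_components=5)
yhat = (X - xm) @ b + ym
r2 = 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)
```

With five latent variables the fit should already capture most of the variance of this linear synthetic response, illustrating how PLSR trades a full regression for a small set of predictive latent variables.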
Energy Technology Data Exchange (ETDEWEB)
Takasu, Miyuki; Tani, Chihiro; Sakoda, Yasuko; Ishikawa, Miho; Tanitame, Keizo; Date, Shuji; Akiyama, Yuji; Awai, Kazuo [Hiroshima University, Department of Diagnostic Radiology, Graduate School of Biomedical Sciences, Hiroshimashi (Japan); Sakai, Akira [Hiroshima University, Department of Hematology and Oncology, Research Institute for Radiation Biology and Medicine, Hiroshimashi (Japan); Asaoku, Hideki [Hiroshima Red Cross Hospital and Atomic-bomb Survivors Hospital, Department of Hematology, Hiroshimashi (Japan); Kajima, Toshio [Kajima Clinic, Hiroshimaken (Japan)
2012-05-15
To evaluate the effectiveness of the iterative decomposition of water and fat with echo asymmetric and least-squares estimation (IDEAL) MRI to quantify tumour infiltration into the lumbar vertebrae in myeloma patients without visible focal lesions. The lumbar spine was examined with 3 T MRI in 24 patients with multiple myeloma and in 26 controls. The fat-signal fraction was calculated as the mean value from three vertebral bodies. A post hoc test was used to compare the fat-signal fraction in controls and patients with monoclonal gammopathy of undetermined significance (MGUS), asymptomatic myeloma or symptomatic myeloma. Differences were considered significant at P < 0.05. The fat-signal fraction and β2-microglobulin-to-albumin ratio were entered into the discriminant analysis. Fat-signal fractions were significantly lower in patients with symptomatic myelomas (43.9 ± 19.7%, P < 0.01) than in the other three groups. Discriminant analysis showed that 22 of the 24 patients (92%) were correctly classified into symptomatic or non-symptomatic myeloma groups. Fat quantification using the IDEAL sequence in MRI was significantly different when comparing patients with symptomatic myeloma and those with asymptomatic myeloma. The fat-signal fraction and β2-microglobulin-to-albumin ratio facilitated discrimination of symptomatic myeloma from non-symptomatic myeloma in patients without focal bone lesions. A new magnetic resonance technique (IDEAL) offers new insights in multiple myeloma. (orig.)
Combinatorics of least-squares trees.
Mihaescu, Radu; Pachter, Lior
2008-09-01
A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.
Time Scale in Least Square Method
Directory of Open Access Journals (Sweden)
Özgür Yeniay
2014-01-01
Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time scale theory builds a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider how to obtain the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Here there are two sets of coefficients for the same model, originating from the forward and backward jump operators, and they differ from each other. In this situation, the total vertical deviation between the regression equations and the observed values equals the sum of the deviations from the forward and backward jump operators divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We think that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.
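For reference, the ordinary least squares baseline that the abstract compares against can be sketched on integer-valued x as follows. The data are made up for illustration:

```python
import numpy as np

# Ordinary least squares for a simple linear model y = b0 + b1*x,
# fitted over integer-valued x as in the time-scale setting above.
x = np.arange(1, 11, dtype=float)
e = np.array([0.1, -0.2, 0.05, 0.0, -0.1, 0.2, -0.05, 0.1, 0.0, -0.1])
y = 2.0 + 0.5 * x + e                          # synthetic observations

A = np.column_stack([np.ones_like(x), x])      # design matrix [1, x]
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # minimizes ||A b - y||^2
b0, b1 = coef

# closed-form check via the normal equations A^T A b = A^T y
b_normal = np.linalg.solve(A.T @ A, A.T @ y)
```

The `lstsq` solution and the normal-equations solution coincide; the time-scale variants discussed above would replace the classical difference quotient in this fit with the delta or nabla derivative.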
Energy Technology Data Exchange (ETDEWEB)
Kato, Ayumi; Shinohara, Yuki; Fujii, Shinya; Miyoshi, Fuminori; Kuya, Keita; Ogawa, Toshihide [Tottori University, Division of Radiology, Department of Pathophysiological, and Therapeutic Science, Faculty of Medicine, Yonago (Japan); Yamashita, Eijiro [Tottori University Hospital, Division of Clinical Radiology, Yonago (Japan)
2015-09-15
Acute intramural hematoma resulting from cerebral artery dissection is usually visualized as a region of intermediate signal intensity on T1-weighted images (WI). This often causes problems with distinguishing acute atheromatous lesions from surrounding parenchyma and dissection. The present study aimed to determine whether or not R2* maps generated by the iterative decomposition of water and fat with echo asymmetry and least-squares estimation quantitation sequence (IDEAL IQ) can distinguish cerebral artery dissection more effectively than three-dimensional variable refocusing flip angle TSE T1WI (T1-CUBE) and T2*WI. We reviewed data from nine patients with arterial dissection who were assessed by MR images including R2* maps, T2*WI, T1-CUBE, and 3D time-of-flight (TOF)-MRA. We visually assessed intramural hematomas in each patient as positive (clearly visible susceptibility effect reflecting intramural hematoma as hyperintensity on the R2* map and hypointensity on T2*WI), negative (absent intramural hematoma), equivocal (difficult to distinguish between intramural hematoma and other paramagnetic substances such as veins, vessel wall calcification, or hemorrhage) and not evaluable (difficult to determine intramural hematoma due to susceptibility artifacts arising from the skull base). Eight of nine patients were assessed during the acute phase. Lesions in all eight patients were positive for intramural hematoma corresponding to dissection sites on R2* maps, while two lesions were positive on T2*WI and three lesions showed high intensity reflecting intramural hematoma on T1-CUBE during the acute phase. R2* maps generated using IDEAL IQ can detect acute intramural hematoma associated with cerebral artery dissection more effectively than T2*WI and earlier than T1-CUBE. (orig.)
Directory of Open Access Journals (Sweden)
B. de Foy
2015-03-01
Full Text Available Emission inventories of elemental carbon (EC) and organic carbon (OC) contain large uncertainties both in their spatial and temporal distributions for different source types. An inverse model was used to evaluate EC and OC emissions based on 1 year of hourly measurements from the St. Louis–Midwest supersite. The input to the model consisted of continuous measurements of EC and OC obtained for 2002 using two semicontinuous analyzers. High resolution meteorological simulations were performed for the entire time period using the Weather Research and Forecasting Model (WRF). These were used to simulate hourly back trajectories at the measurement site using a Lagrangian model (FLEXPART-WRF). In combination, an Eulerian model (CAMx: the Comprehensive Air Quality Model with Extensions) was used to simulate the impacts at the measurement site using known emissions inventories for point and area sources from the Lake Michigan Air Directors Consortium (LADCO) as well as for open burning from the Fire Inventory from NCAR (FINN). By considering only passive transport of pollutants, the Bayesian inversion simplifies to a single least squares inversion. The inverse model combines forward Eulerian simulations with backward Lagrangian simulations to yield estimates of emissions from sources in current inventories as well as from emissions that might be missing in the inventories. The CAMx impacts were disaggregated into separate time chunks in order to determine improved diurnal, weekday and monthly temporal patterns of emissions. Because EC is a primary species, the inverse model estimates can be interpreted directly as emissions. In contrast, OC is both a primary and a secondary species. As the inverse model does not differentiate between direct emissions and formation in the plume of those direct emissions, the estimates need to be interpreted as contributions to measured concentrations. Emissions of EC and OC in the St. Louis region from on-road, non-road, marine
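A minimal sketch of the passive-transport simplification described above, in which a single least-squares inversion recovers source scaling factors from modeled unit-emission impacts. All matrices and values here are synthetic stand-ins, not the LADCO or FINN inventories:

```python
import numpy as np

# Hypothetical setup: columns of S hold the modeled impact of each source
# category at the receptor for every hour; c holds measured concentrations.
# The inversion estimates scaling factors f minimizing ||S f - c||.
rng = np.random.default_rng(1)
n_hours, n_sources = 200, 3
S = rng.uniform(0.0, 1.0, size=(n_hours, n_sources))  # modeled impacts
f_true = np.array([1.5, 0.5, 2.0])                    # "true" scalings
c = S @ f_true + 0.05 * rng.normal(size=n_hours)      # synthetic measurements

f_hat, *_ = np.linalg.lstsq(S, c, rcond=None)
```

A factor above 1 would suggest the inventory underestimates that source; in practice one would also enforce nonnegativity and weight by measurement uncertainty.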
ROBUST PENALIZED LEAST SQUARES ESTIMATION FOR A SEMIPARAMETRIC REGRESSION MODEL
Institute of Scientific and Technical Information of China (English)
胡宏昌
2008-01-01
In this paper, we consider a semiparametric regression model. By the robust penalized least squares estimation method, the robust penalized least squares estimators are given, and their influence functions and asymptotic variance-covariance matrices are investigated. A simulated example shows that the method outperforms the penalized least squares method and is robust.
Xiangwei Guo; Longyun Kang; Yuan Yao; Zhizhen Huang; Wenbiao Li
2016-01-01
An estimation of the power battery state of charge (SOC) is related to the energy management, the battery cycle life and the use cost of electric vehicles. When a lithium-ion power battery is used in an electric vehicle, the SOC displays a very strong time-dependent nonlinearity under the influence of random factors, such as the working conditions and the environment. Hence, research on estimating the SOC of a power battery for an electric vehicle is of great theoretical significance a...
Zhang, Yongliang; Day-Uei Li, David
2017-02-01
This comment is to clarify that Poisson noise instead of Gaussian noise should be included to assess the performance of least-squares deconvolution with Laguerre expansion (LSD-LE) for analysing fluorescence lifetime imaging data obtained from time-resolved systems. Moreover, we also corrected an equation in the paper. As the LSD-LE method is rapid and has the potential to be widely applied not only for diagnostic but also for wider bioimaging applications, it is desirable to have precise noise models and equations.
Satija, A.; Caers, J.
2014-12-01
Hydrogeological forecasting problems, like many subsurface forecasting problems, often suffer from the scarcity of reliable data yet complex prior information about the underlying earth system. Assimilating and integrating this information into an earth model requires iterative parameter space exploration techniques or Markov chain Monte Carlo techniques. Since such an earth model needs to account for many large and small scale features of the underlying system, as the system gets larger, iterative modeling can become computationally prohibitive, in particular when the forward model allows for only a few hundred model evaluations. In addition, most modeling methods do not include the purpose for which inverse methods are built, namely the actual forecast, and usually focus only on data and model. In this study, we present a technique to extract features of the earth system informed by time-varying dynamic data (data features) and those that inform a time-varying forecasting variable (forecast features) using Functional Principal Component Analysis. Canonical Correlation Analysis is then used to examine the relationship between these features using a linear model. When this relationship suggests that the available data inform the required forecast, a simple linear regression on the linear model can directly estimate the posterior of the forecasting problem, without any iterative inversion of model parameters. This idea and method are illustrated using an example of contaminant flow in an aquifer with a complex prior, large dimension and a non-linear flow and transport model.
The least-square method in complex number domain
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
The classical least-square method was extended from the real number domain into the complex number domain, giving the complex least-square method. The mathematical derivation and its applications show that the complex least-square method differs from calculating the real and imaginary parts separately with the classical least-square method, which cannot yield the actual least-square estimate in practice. Applications of this new method to an arbitrarily given series and to the precipitation in the rainy season at 160 meteorological stations in mainland China show the advantages of this method over other conventional statistical models.
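The point about treating real and imaginary parts jointly can be illustrated with NumPy, which solves complex least-squares problems directly; the data below are synthetic:

```python
import numpy as np

# Complex least squares: solve min ||A x - b|| with complex A and b
# in one step, rather than fitting real and imaginary parts separately.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3)) + 1j * rng.normal(size=(50, 3))
x_true = np.array([1 + 2j, -0.5j, 3.0])
b = A @ x_true

x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)

# equivalent normal-equations form uses the conjugate transpose
x_ne = np.linalg.solve(A.conj().T @ A, A.conj().T @ b)
```

Note that the normal equations use the conjugate transpose `A.conj().T`, not the plain transpose; using the latter is exactly the kind of real/imaginary separation the abstract argues against.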
Meshfree First-order System Least Squares
Institute of Scientific and Technical Information of China (English)
Hugh R.MacMillan; Max D.Gunzburger; John V.Burkardt
2008-01-01
We prove convergence for a meshfree first-order system least squares (FOSLS) partition of unity finite element method (PUFEM). Essentially, by virtue of the partition of unity, local approximation gives rise to global approximation in H(div) ∩ H(curl). The FOSLS formulation yields local a posteriori error estimates to guide the judicious allotment of new degrees of freedom to enrich the initial point set in a meshfree discretization. Preliminary numerical results are provided and remaining challenges are discussed.
A Linear-correction Least-squares Approach for Geolocation Using FDOA Measurements Only
Institute of Scientific and Technical Information of China (English)
LI Jinzhou; GUO Fucheng; JIANG Wenli
2012-01-01
A linear-correction least-squares (LCLS) estimation procedure is proposed for geolocation using frequency difference of arrival (FDOA) measurements only. We first analyze the FDOA measurements, and further derive the Cramér-Rao lower bound (CRLB) of geolocation using FDOA measurements. Because the localization model is a nonlinear least squares (LS) estimator with a nonlinear constraint, a linearization method is used to convert the model to a linear least squares estimator with a nonlinear constraint. The Gauss-Newton iteration method is developed to solve the source localization problem. From the analysis of solving for the Lagrange multiplier, the algorithm is a generalization of the linear-correction least squares estimation procedure for geolocation using FDOA measurements only. The algorithm is compared with common least squares estimation. Comparisons of their estimation accuracy with the CRLB show that the proposed method attains the CRLB. Simulation results are included to corroborate the theoretical development.
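The Gauss-Newton iteration used by the authors can be sketched on a simpler localization problem (range measurements rather than FDOA, and without the nonlinear constraint); sensor positions and the source location are invented for illustration:

```python
import numpy as np

# Gauss-Newton for 2-D source localization from range measurements to
# known sensors. Residual r_i(x) = ||x - s_i|| - d_i.
sensors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
src = np.array([3.0, 7.0])
d = np.linalg.norm(sensors - src, axis=1)   # noiseless "measured" ranges

x = np.array([5.0, 5.0])                    # initial guess
for _ in range(20):
    diff = x - sensors                      # shape (4, 2)
    ranges = np.linalg.norm(diff, axis=1)
    r = ranges - d                          # residual vector
    J = diff / ranges[:, None]              # Jacobian of r with respect to x
    dx = np.linalg.lstsq(J, -r, rcond=None)[0]  # linearized LS step
    x = x + dx
```

Each step solves a linearized least-squares subproblem; the FDOA case differs in the residual and Jacobian expressions and in the extra nonlinear constraint handled by the Lagrange multiplier.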
Augmented Classical Least Squares Multivariate Spectral Analysis
Energy Technology Data Exchange (ETDEWEB)
Haaland, David M. (Albuquerque, NM); Melgaard, David K. (Albuquerque, NM)
2005-01-11
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
Augmented Classical Least Squares Multivariate Spectral Analysis
Energy Technology Data Exchange (ETDEWEB)
Haaland, David M. (Albuquerque, NM); Melgaard, David K. (Albuquerque, NM)
2005-07-26
Miller, Arthur L; Weakley, Andrew Todd; Griffiths, Peter R; Cauda, Emanuele G; Bayman, Sean
2016-09-19
In order to help reduce silicosis in miners, the National Institute for Occupational Health and Safety (NIOSH) is developing field-portable methods for measuring airborne respirable crystalline silica (RCS), specifically the polymorph α-quartz, in mine dusts. In this study we demonstrate the feasibility of end-of-shift measurement of α-quartz using a direct-on-filter (DoF) method to analyze coal mine dust samples deposited onto polyvinyl chloride filters. The DoF method is potentially amenable for on-site analyses, but deviates from the current regulatory determination of RCS for coal mines by eliminating two sample preparation steps: ashing the sampling filter and redepositing the ash prior to quantification by Fourier transform infrared (FT-IR) spectrometry. In this study, the FT-IR spectra of 66 coal dust samples from active mines were used, and the RCS was quantified by using: (1) an ordinary least squares (OLS) calibration approach that utilizes standard silica material as done in the Mine Safety and Health Administration's P7 method; and (2) a partial least squares (PLS) regression approach. Both were capable of accounting for kaolinite, which can confound the IR analysis of silica. The OLS method utilized analytical standards for silica calibration and kaolin correction, resulting in a good linear correlation with P7 results and minimal bias but with the accuracy limited by the presence of kaolinite. The PLS approach also produced predictions well-correlated to the P7 method, as well as better accuracy in RCS prediction, and no bias due to variable kaolinite mass. Besides decreased sensitivity to mineral or substrate confounders, PLS has the advantage that the analyst is not required to correct for the presence of kaolinite or background interferences related to the substrate, making the method potentially viable for automated RCS prediction in the field. This study demonstrated the efficacy of FT-IR transmission spectrometry for silica determination in
Directory of Open Access Journals (Sweden)
Mohamed Amine Bouhlel
2016-01-01
Full Text Available During the last years, kriging has become one of the most popular methods in computer simulation and machine learning. Kriging models have been successfully used in many engineering applications to approximate expensive simulation models. When many input variables are used, kriging is inefficient, mainly due to the exorbitant computational time required for its construction. To handle high-dimensional problems (100+), a method was recently proposed that combines kriging with the partial least squares technique, the so-called KPLS model. This method has shown interesting results in terms of saving the CPU time required to build the model while maintaining sufficient accuracy, on both academic and industrial problems. However, KPLS has provided poorer accuracy than conventional kriging on multimodal functions. To handle this issue, this paper proposes adding a new step during the construction of KPLS to improve its accuracy for multimodal functions. When exponential covariance functions are used, this step is based on a simple identification between the covariance functions of KPLS and kriging. The developed method is validated using a multimodal academic function, known in the literature as the Griewank function, and we show the gain in terms of accuracy and computing time by comparing with KPLS and kriging.
Sampson, Paul D; Richards, Mark; Szpiro, Adam A; Bergen, Silas; Sheppard, Lianne; Larson, Timothy V; Kaufman, Joel D
2013-08-01
Many cohort studies in environmental epidemiology require accurate modeling and prediction of fine scale spatial variation in ambient air quality across the U.S. This modeling requires the use of small spatial scale geographic or "land use" regression covariates and some degree of spatial smoothing. Furthermore, the details of the prediction of air quality by land use regression and the spatial variation in ambient air quality not explained by this regression should be allowed to vary across the continent due to the large scale heterogeneity in topography, climate, and sources of air pollution. This paper introduces a regionalized national universal kriging model for annual average fine particulate matter (PM2.5) monitoring data across the U.S. To take full advantage of an extensive database of land use covariates we chose to use the method of Partial Least Squares, rather than variable selection, for the regression component of the model (the "universal" in "universal kriging") with regression coefficients and residual variogram models allowed to vary across three regions defined as West Coast, Mountain West, and East. We demonstrate a very high level of cross-validated accuracy of prediction with an overall R2 of 0.88 and well-calibrated predictive intervals. In accord with the spatially varying characteristics of PM2.5 on a national scale and differing kriging smoothness parameters, the accuracy of the prediction varies by region with predictive intervals being notably wider in the West Coast and Mountain West in contrast to the East.
Least-squares Gaussian beam migration
Yuan, Maolin; Huang, Jianping; Liao, Wenyuan; Jiang, Fuyou
2017-02-01
A theory of least-squares Gaussian beam migration (LSGBM) is presented to optimally estimate a subsurface reflectivity. In the iterative inversion scheme, a Gaussian beam (GB) propagator is used as the kernel of linearized forward modeling (demigration) and its adjoint (migration). Born approximation based GB demigration relies on the calculation of Green’s function by a Gaussian-beam summation for the downward and upward wavefields. The adjoint operator of GB demigration accounts for GB prestack depth migration under the cross-correlation imaging condition, where seismic traces are processed one by one for each shot. A numerical test on the point diffractors model suggests that GB demigration can successfully simulate primary scattered data, while migration (adjoint) can yield a corresponding image. The GB demigration/migration algorithms are used for the least-squares migration scheme to deblur conventional migrated images. The proposed LSGBM is illustrated with two synthetic data for a four-layer model and the Marmousi2 model. Numerical results show that LSGBM, compared to migration (adjoint) with GBs, produces images with more balanced amplitude, higher resolution and even fewer artifacts. Additionally, the LSGBM shows a robust convergence rate.
Garrido, M; Larrechi, M S; Rius, F X
2006-02-01
This study describes the combination of multivariate curve resolution-alternating least squares with a kinetic modeling strategy for obtaining the kinetic rate constants of a curing reaction of epoxy resins. The reaction between phenyl glycidyl ether and aniline is monitored by near-infrared spectroscopy under isothermal conditions for several initial molar ratios of the reagents. The data for all experiments, arranged in a column-wise augmented data matrix, are analyzed using multivariate curve resolution-alternating least squares. The concentration profiles recovered are fitted to a chemical model proposed for the reaction. The selection of the kinetic model is assisted by the information contained in the recovered concentration profiles. The nonlinear fitting provides the kinetic rate constants. The optimized rate constants are in agreement with values reported in the literature.
Ma, Dinglong; Liu, Jing; Qi, Jinyi; Marcu, Laura
2017-02-01
In this response we underscore that the instrumentation described in the original publication (Liu et al 2012 Phys. Med. Biol. 57 843–65) was based on pulse-sampling technique, while the comment by Zhang et al is based on the assumption that a time-correlated single photon counting (TCSPC) instrumentation was used. Therefore the arguments made in the comment are not applicable to the noise model reported by Liu et al. As reported in the literature (Lakowicz 2006 Principles of Fluorescence Spectroscopy (New York: Springer)), while in the TCSPC the experimental noise can be estimated from Poisson statistics, such an assumption is not valid for pulse-sampling (transient recording) techniques. To further clarify this aspect, we present here a comprehensive noise model describing the signal and noise propagation of the pulse sampling time-resolved fluorescence detection. Experimental data recorded in various conditions are analyzed as a case study to demonstrate the noise model of our instrumental system. In addition, regarding the statement of correcting equation (3) in Liu et al (2012 Phys. Med. Biol. 57 843–65), the notation of discrete time Laguerre function in the original publication was clear and consistent with literature conventions (Marmarelis 1993 Ann. Biomed. Eng. 21 573–89, Westwick and Kearney 2003 Identification of Nonlinear Physiological Systems (Hoboken, NJ: Wiley)). Thus, it does not require revision.
Tikhonov Regularization and Total Least Squares
DEFF Research Database (Denmark)
Golub, G. H.; Hansen, Per Christian; O'Leary, D. P.
2000-01-01
formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that...
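A standard-form Tikhonov sketch (not the total least squares variant analyzed in the paper) showing the equivalence between the augmented least-squares system and the closed-form normal equations; the matrix and regularization parameter are illustrative:

```python
import numpy as np

# Standard-form Tikhonov regularization: minimize
#   ||A x - b||^2 + lam^2 ||x||^2,
# solved via the augmented least-squares system [A; lam*I] x ≈ [b; 0].
rng = np.random.default_rng(0)
A = rng.normal(size=(30, 10))
A[:, 1] = A[:, 0] + 1e-6 * rng.normal(size=30)   # nearly collinear columns
x_true = rng.normal(size=10)
b = A @ x_true + 0.01 * rng.normal(size=30)

lam = 0.1
A_aug = np.vstack([A, lam * np.eye(10)])         # stack [A; lam*I]
b_aug = np.concatenate([b, np.zeros(10)])
x_tik, *_ = np.linalg.lstsq(A_aug, b_aug, rcond=None)

# closed-form equivalent: (A^T A + lam^2 I) x = A^T b
x_cf = np.linalg.solve(A.T @ A + lam**2 * np.eye(10), A.T @ b)
```

In the total least squares recasting discussed in the paper, the penalty additionally accounts for errors in the coefficient matrix A, not just in the right-hand side b.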
Generalized Penalized Least Squares and Its Statistical Characteristics
Institute of Scientific and Technical Information of China (English)
DING Shijun; TAO Benzao
2006-01-01
The solution properties of the semiparametric model are analyzed; in particular, penalized least squares for the semiparametric model becomes invalid when the matrix B^T P B is ill-posed or singular. According to the principle of ridge estimation for the linear parametric model, generalized penalized least squares for the semiparametric model is put forward, and some formulae and statistical properties of the estimates are derived. Finally, some helpful conclusions are drawn from simulation examples.
Institute of Scientific and Technical Information of China (English)
戴长春; 王正风; 张兆阳; 毕天姝
2013-01-01
In parameter identification for transmission lines, gross errors often exist in the measured data, and the traditional least squares method lacks robustness. This paper applies IGG robust estimation to transmission line parameter identification, i.e., proposes a robust least squares estimation for transmission line parameters based on the IGG robust criterion. Specifically, it first presents the linear model for the transmission line based on PMU data from both ends and introduces the traditional least squares estimation method. Then, by modifying the objective function of the traditional least squares estimation, it introduces the IGG robust criterion, i.e., retaining the weight of effective observations, reducing the weight of merely usable observations, and rejecting harmful observations, so that the modified least squares method acquires strong robustness. Simulation results based on PSCAD demonstrate the effectiveness, noise immunity and robustness of the proposed method, and identification results based on field PMU data show its practicality.
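A hedged sketch of IGG-style iteratively reweighted least squares: observations keep full weight, are down-weighted, or are rejected according to the size of their standardized residuals. The thresholds, data, and function name are illustrative, not the paper's exact choices:

```python
import numpy as np

def igg_weights(res, scale, k0=1.5, k1=3.0):
    """Three-segment IGG-style weight function (illustrative thresholds)."""
    u = np.abs(res) / scale
    w = np.ones_like(u)                 # effective observations: full weight
    mid = (u > k0) & (u <= k1)
    w[mid] = k0 / u[mid]                # usable observations: reduced weight
    w[u > k1] = 0.0                     # harmful observations: rejected
    return w

rng = np.random.default_rng(2)
A = rng.normal(size=(100, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true + 0.05 * rng.normal(size=100)
b[:5] += 10.0                           # gross errors in first five observations

x = np.linalg.lstsq(A, b, rcond=None)[0]   # ordinary LS as starting point
for _ in range(10):
    res = A @ x - b
    scale = 1.4826 * np.median(np.abs(res - np.median(res)))  # robust MAD scale
    w = igg_weights(res, scale)
    sw = np.sqrt(w)
    x = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)[0]
```

After a few reweighting passes the gross errors receive zero weight and the estimate returns close to the outlier-free solution, which ordinary least squares cannot achieve.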
Regularization Techniques for Linear Least-Squares Problems
Suliman, Mohamed
2016-04-01
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion enjoyed wide popularity in many areas due to its attractive properties, it appears to suffer from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the desired problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that seek to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
Simultaneous least squares fitter based on the Lagrange multiplier method
Guan, Yinghui; Zheng, Yangheng; Zhu, Yong-Sheng
2013-01-01
We developed a least squares fitter used for extracting expected physics parameters from correlated experimental data in high energy physics. This fitter considers the correlations among the observables and handles the nonlinearity using linearization during the $\chi^2$ minimization. The method can naturally be extended to analyses with external inputs. By incorporating Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied this fitter to the study of the $D^{0}-\bar{D}^{0}$ mixing parameters as a test-bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties and that the approach is credible.
A Newton Algorithm for Multivariate Total Least Squares Problems
Directory of Open Access Journals (Sweden)
WANG Leyang
2016-04-01
Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems in which the observation matrix and the coefficient matrix are correlated, and it can also deal with their stochastic and deterministic elements using only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
Elastic least-squares reverse time migration
Feng, Zongcai
2017-03-08
We use elastic least-squares reverse time migration (LSRTM) to invert for the reflectivity images of P- and S-wave impedances. Elastic LSRTM solves the linearized elastic-wave equations for forward modeling and the adjoint equations for backpropagating the residual wavefield at each iteration. Numerical tests on synthetic data and field data reveal the advantages of elastic LSRTM over elastic reverse time migration (RTM) and acoustic LSRTM. For our examples, the elastic LSRTM images have better resolution and amplitude balancing, fewer artifacts, and less crosstalk compared with the elastic RTM images. The images are also better focused and have better reflector continuity for steeply dipping events compared to the acoustic LSRTM images. Similar to conventional least-squares migration, elastic LSRTM also requires an accurate estimation of the P- and S-wave migration velocity models. However, the problem remains that, when there are moderate errors in the velocity model and strong multiples, LSRTM will produce migration noise stronger than that seen in the RTM images.
Partial Least Squares tutorial for analyzing neuroimaging data
Directory of Open Access Journals (Sweden)
Patricia Van Roon
2014-09-01
Full Text Available Partial least squares (PLS) has become a respected and meaningful soft modeling analysis technique that can be applied to very large datasets where the number of factors or variables is greater than the number of observations. Current biometric studies (e.g., eye movements, EKG, body movements, EEG) are often of this nature. PLS eliminates the multiple linear regression issue of over-fitting data by finding a few underlying or latent variables (factors) that account for most of the variation in the data. In real-world applications, where linear models do not always apply, PLS can model the non-linear relationship well. This tutorial introduces two PLS methods, PLS Correlation (PLSC) and PLS Regression (PLSR), and their applications in data analysis, which are illustrated with neuroimaging examples. Both methods provide straightforward and comprehensible techniques for determining and modeling relationships between two multivariate data blocks by finding latent variables that best describe the relationships. In the examples, PLSC will analyze the relationship between neuroimaging data such as Event-Related Potential (ERP) amplitude averages from different locations on the scalp and their corresponding behavioural data. Using the same data, PLSR will be used to model the relationship between neuroimaging and behavioural data. This model will be able to predict future behaviour solely from available neuroimaging data. To find latent variables, Singular Value Decomposition (SVD) for PLSC and Non-linear Iterative PArtial Least Squares (NIPALS) for PLSR are implemented in this tutorial. SVD decomposes the large data block into three manageable matrices containing a diagonal set of singular values, as well as left and right singular vectors. For PLSR, the NIPALS algorithm is used because it provides a more precise estimation of the latent variables. Mathematica notebooks are provided for each PLS method with clearly labeled sections and subsections. The
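The PLSC step described above (SVD of the cross-block covariance) can be sketched as follows. The data shapes, centering convention, and random inputs are illustrative assumptions, not the tutorial's Mathematica implementation:

```python
import numpy as np

def plsc(X, Y):
    """PLS Correlation sketch: center both data blocks, form their
    cross-covariance, and decompose it with SVD into left/right
    saliences and singular values (shared variance per latent variable)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    R = Xc.T @ Yc                         # cross-covariance (up to 1/(n-1))
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    return U, s, Vt.T

rng = np.random.default_rng(1)
X = rng.standard_normal((30, 4))          # e.g. ERP amplitudes per electrode
Y = rng.standard_normal((30, 3))          # e.g. behavioural measures
U, s, V = plsc(X, Y)
```

Each column pair (U[:, k], V[:, k]) is a latent-variable pair, and the singular values s arrive sorted by the amount of cross-block covariance they explain.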
Experiments on Coordinate Transformation based on Least Squares and Total Least Squares Methods
Tunalioglu, Nursu; Mustafa Durdag, Utkan; Hasan Dogan, Ali; Erdogan, Bahattin; Ocalan, Taylan
2016-04-01
Coordinate transformation is an important problem in the geodesy discipline. Variations in the stochastic and functional models of the transformation problem cause different estimation results. The least-squares (LS) method is generally implemented to solve this problem. The LS method assumes that only one epoch's coordinate data group is erroneous in the stochastic model. However, all the data in the transformation problem are erroneous. In contrast to the traditional LS method, the Total Least Squares (TLS) method takes into account the errors in all the variables in the transformation; this is the so-called errors-in-variables (EIV) model. In the last decades, the TLS method has been implemented to solve the transformation problem. In this context, it is important to determine which method is more accurate. In this study, the LS and TLS methods have been implemented on different 2D and 3D geodetic networks with different simulation scenarios. The first results show that the translation parameters are affected more than the rotation and scale parameters. Although the TLS method considers the errors in both coordinate data groups, the estimated parameters for the two methods differ from the simulated values.
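The LS/TLS distinction above can be illustrated with the classical SVD solution of the errors-in-variables problem. This is a textbook sketch on a generic linear system, not the authors' Helmert-transformation setup:

```python
import numpy as np

def tls(A, b):
    """Total least squares for A x ~ b: take the SVD of the augmented
    matrix [A | b] and read the solution off the right singular vector
    belonging to the smallest singular value (errors in A and b alike)."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                        # smallest-singular-value direction
    return -v[:n] / v[n]

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 3))
x_true = np.array([1.0, 2.0, 3.0])
# perturb both sides, as in the errors-in-variables model
b = (A + 1e-3 * rng.standard_normal(A.shape)) @ x_true \
    + 1e-3 * rng.standard_normal(40)
x_tls = tls(A, b)
```

Ordinary LS would attribute all the misfit to b; TLS spreads it over the whole augmented matrix, which is exactly the EIV assumption described in the abstract.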
Steady and transient least square solvers for thermal problems
Padovan, Joe
1987-01-01
This paper develops a hierarchical least square solution algorithm for highly nonlinear heat transfer problems. The methodology's capability is such that both steady and transient implicit formulations can be handled. This includes problems arising from highly nonlinear heat transfer systems modeled by either finite-element or finite-difference schemes. The overall procedure developed enables localized updating, iteration, and convergence checking as well as constraint application. The localized updating can be performed at a variety of hierarchical levels, i.e., degree of freedom, substructural, material-nonlinear groups, and/or boundary groups. The choice of such partitions can be made via energy partitioning or nonlinearity levels as well as by user selection. Overall, this leads to extremely robust computational characteristics. To demonstrate the methodology, problems are drawn from nonlinear heat conduction. These are used to quantify the robust capabilities of the hierarchical least square scheme.
Least Square Methods for Solving Systems of Inequalities with Application to an Assignment Problem
1992-11-01
problem using continuous methods and (2) solving systems of inequalities (and equalities) in a least square sense. The specific assignment problem has...linear equations, in a least square sense are developed. Common algorithmic approaches to solve nonlinear least square problems are adapted to solve
Partial update least-square adaptive filtering
Xie, Bei
2014-01-01
Adaptive filters play an important role in the fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity in implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster a
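The O(N)-per-sample LMS update mentioned above can be sketched for a system-identification task. The filter length, step size, and test system below are illustrative assumptions:

```python
import numpy as np

def lms_identify(x, d, order, mu):
    """LMS adaptive filter: for each sample, form the a priori error
    e = d - w.u and nudge the weights along the input, w += mu * e * u.
    Cost is O(order) per sample."""
    w = np.zeros(order)
    for k in range(order, len(x)):
        u = x[k - order:k][::-1]      # most recent sample first
        e = d[k] - w @ u              # a priori error
        w = w + mu * e * u
    return w

rng = np.random.default_rng(3)
h = np.array([0.5, -0.3, 0.2])        # unknown FIR system to identify
x = rng.standard_normal(3000)
d = np.array([h @ x[k - 3:k][::-1] if k >= 3 else 0.0
              for k in range(len(x))])
w = lms_identify(x, d, order=3, mu=0.05)
```

The slow but cheap LMS recursion contrasts with RLS, CG, and EDS, which converge faster at higher per-sample cost.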
Distributed Recursive Least-Squares: Stability and Performance Analysis
Mateos, Gonzalo
2011-01-01
The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements, when it comes to online estimation of stationary signals as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost, using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally, and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted, by studying a stochastically-driven `averaged' system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, the simplifying...
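For reference, the single-node exponentially-weighted RLS recursion that D-RLS distributes across sensors can be sketched as follows. The forgetting factor, initialization, and test system are illustrative assumptions, not the paper's distributed protocol:

```python
import numpy as np

def rls_identify(x, d, order, lam=0.99, delta=100.0):
    """Exponentially-weighted recursive least squares: rank-one update
    of the inverse correlation matrix P via the matrix inversion lemma,
    giving fast convergence at O(order^2) cost per sample."""
    w = np.zeros(order)
    P = delta * np.eye(order)         # large P => weak initial prior
    for k in range(order, len(x)):
        u = x[k - order:k][::-1]
        g = P @ u / (lam + u @ P @ u)  # gain vector
        e = d[k] - w @ u               # a priori error
        w = w + g * e
        P = (P - np.outer(g, u @ P)) / lam
    return w

rng = np.random.default_rng(4)
h = np.array([0.5, -0.3, 0.2])
x = rng.standard_normal(500)
d = np.array([h @ x[k - 3:k][::-1] if k >= 3 else 0.0
              for k in range(len(x))])
w = rls_identify(x, d, order=3)
```

The forgetting factor lam < 1 is what lets the recursion track slowly-varying nonstationary parameters, at the cost of a small steady-state variance.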
Deformation analysis with Total Least Squares
Directory of Open Access Journals (Sweden)
M. Acar
2006-01-01
Full Text Available Deformation analysis is one of the main research fields in geodesy. Deformation analysis process comprises measurement and analysis phases. Measurements can be collected using several techniques. The output of the evaluation of the measurements is mainly point positions. In the deformation analysis phase, the coordinate changes in the point positions are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation where the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally a Least Squares (LS technique is used for the transformation procedure. Another approach that could be introduced as an alternative methodology is the Total Least Squares (TLS that is considerably a new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out individually by the Least Squares (LS and the Total Least Squares (TLS, respectively. The data used in this study was collected by GPS technique in a landslide area located nearby Istanbul. The results obtained from these two approaches have been compared.
Iterative methods for weighted least-squares
Energy Technology Data Exchange (ETDEWEB)
Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)
1996-12-31
A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
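A numerically safer baseline than forming the normal equations for the weighted problem described above is to row-scale and call a dense least-squares solver. This is only a sketch of the problem setup; the paper's contribution is an iterative method with reorthogonalization for weight matrices far worse conditioned than a dense solver can comfortably handle:

```python
import numpy as np

def weighted_ls(A, b, w):
    """Weighted least squares min_x sum_i w_i (a_i.x - b_i)^2, solved
    as the ordinary problem sqrt(W) A x = sqrt(W) b, which avoids
    squaring the condition number as A^T W A would."""
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(sw[:, None] * A, sw * b, rcond=None)
    return x

rng = np.random.default_rng(5)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                               # consistent system
w = 10.0 ** rng.uniform(-6, 6, size=20)      # badly scaled weights
x_hat = weighted_ls(A, b, w)
```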
Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang
2016-03-01
An analysis of binary mixtures of hydroxyl compounds by Attenuated Total Reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP is based on assuming that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparison and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS.
Algorithms for unweighted least-squares factor analysis
Krijnen, WP
Estimation of the factor model by unweighted least squares (ULS) is distribution free, yields consistent estimates, and is computationally fast if the Minimum Residuals (MinRes) algorithm is employed. MinRes algorithms produce a converging sequence of monotonically decreasing ULS function values.
Institute of Scientific and Technical Information of China (English)
祝乔; 程汉卿; 尹怡欣; 陈先中
2012-01-01
Burden distribution in a blast furnace was estimated based on least squares approximations and multi-radar data. Firstly, a three-segment curve, which includes two straight lines and a quadratic curve, was used to describe the burden distribution. Secondly, based on the burden distribution principles, some constraint equations were obtained to estimate the parameters in the three-segment curve, which makes the burden surface profile more reasonable. Then, the burden distribution was estimated by using least squares approximations and multi-radar data, and a real-time display of the burden surface profile could be achieved. A numerical example with real multi-radar data obtained from a steel plant shows the effectiveness of the estimation method.
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-01-01
Least squares regression is proposed as a moving-windows method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of the Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
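The LSMW idea, a least-squares generalization of Savitzky-Golay differentiation to nonuniform perturbation spacing, can be sketched with a first-order (slope) window. The window size and test signal below are illustrative assumptions:

```python
import numpy as np

def lsmw_slope(t, y, window):
    """Least squares moving window, first derivative: fit y ~ a + b*t
    by least squares in each window of the (possibly nonuniform)
    perturbation axis t and keep the slope b at the window centre."""
    half = window // 2
    slopes = []
    for i in range(half, len(t) - half):
        tw = t[i - half:i + half + 1]
        yw = y[i - half:i + half + 1]
        Aw = np.column_stack([np.ones_like(tw), tw])
        coef, *_ = np.linalg.lstsq(Aw, yw, rcond=None)
        slopes.append(coef[1])
    return np.array(slopes)

rng = np.random.default_rng(6)
t = np.sort(rng.uniform(0.0, 10.0, 101))   # nonuniform spacing
y = 2.0 * t + 1.0                          # exact line: slope 2 everywhere
slopes = lsmw_slope(t, y, window=5)
```

For uniformly spaced t this reduces to Savitzky-Golay differentiation; the least-squares fit per window is what removes the uniform-spacing requirement.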
Sparse least-squares reverse time migration using seislets
Dutta, Gaurav
2015-08-19
We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.
Estimation of Urban Water Demand Using a Partial Least Squares Regression Model
Institute of Scientific and Technical Information of China (English)
张立杰
2012-01-01
The urban water demand is estimated by a partial least squares regression model that takes six social and economic factors as input variables. The results show that there is high multicollinearity among the input variables; the partial least squares regression model can reduce the undesirable effects of this multicollinearity and yields a more accurate prediction (the relative error is only 2.7%). It was also found that the length of the data series and recent data have an important effect on the estimation accuracy.
Institute of Scientific and Technical Information of China (English)
刘海涛; 魏汝祥; 蒋国萍
2012-01-01
To address the excessive number of influencing factors and the multicollinearity among variables in software cost estimation, this paper proposes a software cost estimation method based on weighted Partial Least Squares Regression (PLSR). After the variables are weighted, an integrated index named analogue deviation is defined to describe the similarity of data samples. An adaptive weight is then assigned to each sample according to this similarity, and the optimal partial least squares latent variables and weight parameters are determined by traversal search. Experimental results show that the prediction error is reduced by 73.61% compared with multiple linear regression and by 32.34% compared with global PLSR.
Institute of Scientific and Technical Information of China (English)
王理同
2012-01-01
In a growth curve model, the generalized least squares estimator of the parameter matrix is a linear function of the response variables, while the maximum likelihood estimator is nonlinear, so statistical inference based on the maximum likelihood estimate can be rather complicated. To simplify its statistical inference, some authors have considered conditions under which the maximum likelihood estimator is completely equivalent to the generalized least squares estimator. Unfortunately, such conditions are rarely satisfied. Therefore, an asymptotic equivalence between the two is suggested: consider the ratio of their norms under the Euclidean norm. If this ratio lies within any given permitted error, the maximum likelihood estimator is regarded as approximately equivalent to the generalized least squares estimator, thereby simplifying the statistical inference of the maximum likelihood estimator.
Efficient least-squares basket-weaving
Winkel, B.; Flöer, L.; Kraus, A.
2012-11-01
We report on a novel method to solve the basket-weaving problem. Basket-weaving is a technique that is used to remove scan-line patterns from single-dish radio maps. The new approach applies linear least-squares and works on gridded maps from arbitrarily sampled data, which greatly improves computational efficiency and robustness. It also allows masking of bad data, which is useful for cases where radio frequency interference is present in the data. We evaluate the algorithms using simulations and real data obtained with the Effelsberg 100-m telescope.
Total least squares for anomalous change detection
Energy Technology Data Exchange (ETDEWEB)
Theiler, James P [Los Alamos National Laboratory; Matsekh, Anna M [Los Alamos National Laboratory
2010-01-01
A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting the derivations of two of the most popular anomalous change detection algorithms, chronochrome and covariance equalization, in a common language, is a generalization of these algorithms with the potential for better performance.
A unified approach for least-squares surface fitting
Institute of Scientific and Technical Information of China (English)
ZHU; Limin; DING; Han
2004-01-01
This paper presents a novel approach for least-squares fitting of a complex surface to measured 3D coordinate points by adjusting its location and/or shape. For a point expressed in the machine reference frame and a deformable smooth surface represented in its own model frame, a signed point-to-surface distance function is defined, and its increment with respect to the differential motion and differential deformation of the surface is derived. On this basis, localization, surface reconstruction and geometric variation characterization are formulated as a unified nonlinear least-squares problem defined on the product space SE(3)×m. By using the Levenberg-Marquardt method, a sequential approximation surface fitting algorithm is developed. It has the advantages of implementational simplicity, computational efficiency and robustness. Applications confirm the validity of the proposed approach.
Multiples least-squares reverse time migration
Zhang, D. L.
2013-01-01
To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from the Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least square regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For a sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
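The sum-space idea, large- and small-scale kernels for low- and high-frequency components, reduces in its simplest regularized least squares form to kernel ridge regression with a summed kernel, since the kernel of a sum of RKHSs is the sum of the kernels. The bandwidths, regularization, and test signal below are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def gauss_kernel(x, z, sigma):
    """Gaussian kernel matrix between 1-D sample vectors x and z."""
    d2 = (x[:, None] - z[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def sum_space_fit(x, y, sigmas, lam):
    """Regularized least squares in a sum of Gaussian RKHSs, solved as
    kernel ridge regression with the summed kernel: (K + lam*I) a = y."""
    K = sum(gauss_kernel(x, x, s) for s in sigmas)
    return np.linalg.solve(K + lam * np.eye(len(x)), y)

x = np.linspace(0.0, 1.0, 25)
y = np.sin(2 * np.pi * x) + 0.3 * np.sin(12 * np.pi * x)  # low + high freq
alpha = sum_space_fit(x, y, sigmas=(0.3, 0.02), lam=1e-6)
K = sum(gauss_kernel(x, x, s) for s in (0.3, 0.02))
f = K @ alpha                              # fitted values at the samples
```

The large-sigma kernel alone would oversmooth the 12π component, while the small-sigma kernel alone would generalize poorly on the slow trend; the sum captures both.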
Directory of Open Access Journals (Sweden)
Taimoor Zahid
2016-09-01
Full Text Available Battery energy storage management for electric vehicles (EVs) and hybrid EVs is the most critical and enabling technology since the dawn of electric vehicle commercialization. A battery system is a complex electrochemical phenomenon whose performance degrades with age and varies with material design. Moreover, it is tedious and computationally complex to monitor and control the internal state of a battery's electrochemical system. For the Thevenin battery model we established a state-space model, which has the advantage of simplicity and easy implementation, and then applied the least squares method to identify the battery model parameters. However, accurate state of charge (SoC) estimation of a battery, which depends not only on the battery model but also on highly accurate and efficient algorithms, is considered one of the most vital and critical issues for the energy management and power distribution control of EVs. In this paper three different estimation methods, i.e., the extended Kalman filter (EKF), particle filter (PF) and unscented Kalman filter (UKF), are presented to estimate the SoC of LiFePO4 batteries for an electric vehicle. The battery's experimental data, current and voltage, are analyzed to identify the Thevenin equivalent model parameters. Using different open circuit voltages the SoC is estimated and compared with respect to estimation accuracy and initialization error recovery. The experimental results showed that these online SoC estimation methods, in combination with different open circuit voltage-state of charge (OCV-SoC) curves, can effectively limit the error, thus guaranteeing accuracy and robustness.
Cichocki, A; Unbehauen, R
1994-01-01
In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection (Kaczmarz) algorithm and/or the LMS (Adaline) Widrow-Hoff algorithm. The algorithms can be applied to any problem that can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
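The row-action Kaczmarz iteration that these networks build on can be sketched as follows. This is the textbook digital version, not the analog-circuit implementation, and the well-conditioned test system is an illustrative assumption:

```python
import numpy as np

def kaczmarz(A, b, sweeps):
    """Kaczmarz row-action method for A x = b: cyclically project the
    current iterate onto the hyperplane a_i . x = b_i of each row,
    touching only one row of A at a time."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for a_i, b_i in zip(A, b):
            x = x + (b_i - a_i @ x) / (a_i @ a_i) * a_i
    return x

rng = np.random.default_rng(7)
A = 2.0 * np.eye(4) + 0.1 * rng.standard_normal((4, 4))  # well-conditioned
x_true = np.array([1.0, 2.0, 3.0, 4.0])
b = A @ x_true
x_hat = kaczmarz(A, b, sweeps=200)
```

Because each update uses a single row, the method maps naturally onto low-cost parallel or analog hardware; convergence speed depends on how close to orthogonal the rows are.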
Institute of Scientific and Technical Information of China (English)
王柱
2013-01-01
This paper points out again that the least square distance (LSD) method, also called the minimum-norm method, arises from the problem of fitting a hyperplane to n points in a multidimensional space. Given n groups of observations of p random variables, the LSD method is a good way to explore whether an implicit linear functional relationship exists among them, and it yields fairly good parameter estimates for such implicit linear functions.
Least-Square Prediction for Backward Adaptive Video Coding
2006-01-01
Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in im...
Multisource Least-squares Reverse Time Migration
Dai, Wei
2012-12-01
Least-squares migration has been shown to be able to produce high quality migration images, but its computational cost is considered to be too high for practical imaging. In this dissertation, a multisource least-squares reverse time migration algorithm (LSRTM) is proposed to increase by up to 10 times the computational efficiency by utilizing the blended sources processing technique. There are three main chapters in this dissertation. In Chapter 2, the multisource LSRTM algorithm is implemented with random time-shift and random source polarity encoding functions. Numerical tests on the 2D HESS VTI data show that the multisource LSRTM algorithm suppresses migration artifacts, balances the amplitudes, improves image resolution, and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution, and fewer migration artifacts compared to conventional RTM. The empirical results suggest that the multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with similar or less computational cost. The caveat is that LSRTM image is sensitive to large errors in the migration velocity model. In Chapter 3, the multisource LSRTM algorithm is implemented with frequency selection encoding strategy and applied to marine streamer data, for which traditional random encoding functions are not applicable. The frequency-selection encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content. Therefore, the receivers can distinguish the wavefield from each shot according to the frequencies. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is
Skeletonized Least Squares Wave Equation Migration
Zhan, Ge
2010-10-17
The theory for skeletonized least squares wave equation migration (LSM) is presented. The key idea is, for an assumed velocity model, the source-side Green's function and the geophone-side Green's function are computed by a numerical solution of the wave equation. Only the early arrivals of these Green's functions are saved and skeletonized to form the migration Green's function (MGF) by convolution. Then the migration image is obtained by a dot product between the recorded shot gathers and the MGF for every trial image point. The key to an efficient implementation of iterative LSM is that at each conjugate gradient iteration, the MGF is reused and no new finite-difference (FD) simulations are needed to get the updated migration image. It is believed that this procedure combined with phase-encoded multi-source technology will allow for the efficient computation of wave equation LSM images in less time than that of conventional reverse time migration (RTM).
The estimation of NQR parameters based on total least squares
Institute of Scientific and Technical Information of China (English)
朱凯然; 何学辉; 郑小保; 苏涛
2012-01-01
Nuclear quadrupole resonance (NQR) is a solid-state radio frequency (RF) spectroscopic technique that allows the detection of many high explosives. Unfortunately, NQR signals are inherently weak and vulnerable both to the thermal noise of the coil and to any external radio frequency interference (RFI), so the practical use of NQR is restricted by the low signal-to-noise ratio (SNR). On the basis of an investigation of the characteristics of the free induction decay (FID) signal, a linear prediction estimator based on total least squares is applied to estimate the NQR parameters. The effectiveness of this algorithm is demonstrated with results from both simulated and experimental data.
Wave-equation Q tomography and least-squares migration
Dutta, Gaurav
2016-03-01
This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic
MULTI-RESOLUTION LEAST SQUARES SUPPORT VECTOR MACHINES
Institute of Scientific and Technical Information of China (English)
无
2007-01-01
The Least Squares Support Vector Machine (LS-SVM) is an improvement on the SVM. Combining the LS-SVM with Multi-Resolution Analysis (MRA), this letter proposes the Multi-resolution LS-SVM (MLS-SVM). The proposed algorithm has the same theoretical framework as MRA but better approximation ability. At a fixed scale MLS-SVM is a classical LS-SVM, but MLS-SVM can gradually approximate the target function at different scales. In experiments, the MLS-SVM is used for nonlinear system identification and achieves better identification accuracy.
Neural Network Inverse Adaptive Controller Based on Davidon Least Square
Institute of Scientific and Technical Information of China (English)
无
2000-01-01
The general neural network inverse adaptive controller has two flaws: slow convergence speed and failure on non-minimum-phase systems. These defects limit the scope in which the neural network inverse adaptive controller can be used. We employ Davidon least squares in training the multi-layer feedforward neural network used to approximate the inverse model of the plant, which expedites the convergence, and then, by constructing a pseudo-plant, a neural network inverse adaptive controller is put forward which remains effective for nonlinear non-minimum-phase systems. The simulation results show the validity of this scheme.
Generalized total least squares prediction algorithm for universal 3D similarity transformation
Wang, Bin; Li, Jiancheng; Liu, Chao; Yu, Jie
2017-02-01
Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (probably with big rotation angles and an arbitrary scale ratio). However, their procedures of the parameter estimation and new point (non-common point) transformation were implemented separately, and the statistical correlation which often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation synthetically, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information will be considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model and the equation for new point transformation is incorporated into the functional model as well. Then the iterative solution is derived based on the Gauss-Newton approach of nonlinear least squares. The performance of GTLSP algorithm is verified in terms of a simulated experiment, and the results show that GTLSP algorithm can improve the statistical accuracy of the transformed coordinates compared with the existing TLS algorithms for 3D similarity transformation.
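For intuition, the 2D analogue of the similarity transformation becomes linear once reparametrized with a = s·cos θ and b = s·sin θ, so a plain least squares fit over common points suffices. This sketch omits the TLS error model and the new-point prediction that distinguish the GTLSP algorithm; all point values are illustrative:

```python
# 2D similarity (Helmert) transformation fitted by ordinary least squares.
# With a = s*cos(theta), b = s*sin(theta) the model
#   x' = a*x - b*y + tx,  y' = b*x + a*y + ty
# is linear in (a, b, tx, ty). Simplified 2D analogue of the abstract's
# universal 3D problem (no TLS error model, common points only).
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def fit_similarity(src, dst):
    A, y = [], []
    for (x, yy), (xp, yp) in zip(src, dst):
        A.append([x, -yy, 1.0, 0.0]); y.append(xp)
        A.append([yy, x, 0.0, 1.0]); y.append(yp)
    AtA = [[sum(row[r] * row[c] for row in A) for c in range(4)] for r in range(4)]
    Aty = [sum(A[i][r] * y[i] for i in range(len(A))) for r in range(4)]
    a, b, tx, ty = solve(AtA, Aty)
    return math.hypot(a, b), math.atan2(b, a), tx, ty  # scale, rotation, shifts

# synthetic common points: scale 2, rotation 30 degrees, shift (1, -1)
s0, th0 = 2.0, math.radians(30)
src = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (2.0, 3.0)]
dst = [(s0 * (math.cos(th0) * x - math.sin(th0) * y) + 1.0,
        s0 * (math.sin(th0) * x + math.cos(th0) * y) - 1.0) for x, y in src]
scale, theta, tx, ty = fit_similarity(src, dst)
```

The 3D case with big rotation angles loses this linearity, which is why the paper needs the generalized EIV model and a Gauss-Newton iteration.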
Multilevel first-order system least squares for PDEs
Energy Technology Data Exchange (ETDEWEB)
McCormick, S.
1994-12-31
The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as upwinding, Petrov-Galerkin, and streamline diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower-order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.
Temperature prediction control based on least squares support vector machines
Institute of Scientific and Technical Information of China (English)
Bin LIU; Hongye SU; Weihua HUANG; Jian CHU
2004-01-01
A prediction control algorithm is presented based on a least squares support vector machine (LS-SVM) model for a class of complex systems with strong nonlinearity. The nonlinear off-line model of the controlled plant is built by LS-SVM with a radial basis function (RBF) kernel. In the process of system running, the off-line model is linearized at each sampling instant, and the generalized prediction control (GPC) algorithm is employed to implement prediction control for the controlled plant. The obtained algorithm is applied to a boiler temperature control system with complicated nonlinearity and large time delay. The results of the experiment verify the effectiveness and merit of the algorithm.
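Building the LS-SVM regression model amounts to solving a single linear system in the dual variables. A minimal sketch with an RBF kernel follows; the data, gamma, and sigma below are illustrative, not values from the paper:

```python
# Least squares SVM regression: training reduces to one symmetric linear system
#   [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
# with RBF kernel K. Data, gamma and sigma are illustrative.
import math

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def rbf(u, v, sigma):
    return math.exp(-((u - v) ** 2) / (2.0 * sigma ** 2))

def lssvm_fit(xs, ys, gamma, sigma):
    n = len(xs)
    A = [[0.0] * (n + 1) for _ in range(n + 1)]
    rhs = [0.0] * (n + 1)
    for i in range(n):
        A[0][i + 1] = A[i + 1][0] = 1.0
        rhs[i + 1] = ys[i]
        for j in range(n):
            A[i + 1][j + 1] = rbf(xs[i], xs[j], sigma) + (1.0 / gamma if i == j else 0.0)
    sol = solve(A, rhs)
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xi, sigma) for a, xi in zip(alpha, xs))

xs = [0.0, 0.25, 0.5, 0.75, 1.0]
ys = [math.sin(2.0 * math.pi * x) for x in xs]
f = lssvm_fit(xs, ys, gamma=1e4, sigma=0.25)
```

Because the equality constraints replace the SVM's inequality constraints, no quadratic programming is needed, which is what makes repeated re-fitting and linearization at each sampling instant practical.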
Götterdämmerung over total least squares
Malissiovas, G.; Neitzel, F.; Petrovic, S.
2016-06-01
The traditional way of solving non-linear least squares (LS) problems in Geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have also been developed in the past by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. Therefore, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all these four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical with those resulting from the LS approach. As a by-product of this research two novel approaches are presented for the TLS solutions of fitting a straight line to 3D points and the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and also numerically compared to published iterative solutions.
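The direct, non-iterative flavor of solution discussed here can be illustrated with the TLS fit of a straight line to 2D points, which reduces to the principal axis of the centred second moments. This is the standard closed form, not the paper's derivation verbatim:

```python
# Direct (non-iterative) total least squares fit of a straight line in 2D:
# minimizing orthogonal distances gives the principal axis of the centred
# second moments, a closed-form eigenvalue-type solution.
import math

def tls_line(pts):
    n = len(pts)
    xb = sum(p[0] for p in pts) / n
    yb = sum(p[1] for p in pts) / n
    sxx = sum((p[0] - xb) ** 2 for p in pts)
    syy = sum((p[1] - yb) ** 2 for p in pts)
    sxy = sum((p[0] - xb) * (p[1] - yb) for p in pts)
    theta = 0.5 * math.atan2(2.0 * sxy, sxx - syy)  # direction of the fitted line
    slope = math.tan(theta)
    return slope, yb - slope * xb  # line passes through the centroid

slope, intercept = tls_line([(0, 1), (2, 2), (4, 3)])
```

Unlike ordinary LS regression of y on x, this fit treats both coordinates as erroneous, which is the defining feature of the TLS solutions compared in the paper.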
Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F
2014-08-01
Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
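One of the comparator methods named above, least squares on the logarithmic form of the Arrhenius equation, is a one-line linear fit in 1/T. A sketch with illustrative synthetic rate constants (A and Ea values are assumptions for the example, not data from the paper):

```python
# Least squares on the logarithmic Arrhenius form (a comparator method in the
# abstract): ln k = ln A - Ea/(R*T) is linear in 1/T, so ordinary least
# squares recovers A and Ea.
import math

R = 8.314  # gas constant, J/(mol K)

def arrhenius_fit(temps, rates):
    xs = [1.0 / t for t in temps]
    ys = [math.log(k) for k in rates]
    n = len(xs)
    xb, yb = sum(xs) / n, sum(ys) / n
    slope = (sum((x - xb) * (y - yb) for x, y in zip(xs, ys))
             / sum((x - xb) ** 2 for x in xs))
    return math.exp(yb - slope * xb), -slope * R  # pre-factor A, activation energy Ea

# exact synthetic rate constants from A = 1e10, Ea = 80 kJ/mol (illustrative)
A0, Ea0 = 1e10, 80e3
temps = [298.0, 308.0, 318.0]
rates = [A0 * math.exp(-Ea0 / (R * t)) for t in temps]
A, Ea = arrhenius_fit(temps, rates)
```

The ATS approach of the paper instead keeps the estimation nonlinear in a single covariate; the log-linear fit above is the baseline it is compared against.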
A least-squares computational "tool kit". Nuclear data and measurements series
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.
1993-04-01
The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.
A Generalized Autocovariance Least-Squares Method for Kalman Filter Tuning
DEFF Research Database (Denmark)
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2008-01-01
of the state estimates. There is a linear relationship between covariances and autocovariance. Therefore, the covariance estimation problem can be stated as a least-squares problem, which can be solved as a symmetric semidefinite least-squares problem. This problem is convex and can be solved efficiently...
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to the previous works on approximate RL methods, KLSPI makes two progresses to eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
Institute of Scientific and Technical Information of China (English)
柴华; 梁彦刚; 唐国金
2014-01-01
Under the early warning of two passive satellite-borne sensors, the exponentially weighted recursive least squares method is applied to estimating the burnout states of ballistic targets. Because the weighting factor characterizes the local quasi-linearity of the target's boost-phase trajectory, the difficulty that a generic polynomial cannot describe the whole boost phase is overcome to some extent. Through a dynamic analysis, the kinematic characteristics of the boost-phase target along the direction normal to the vertical launch plane are explored. On that basis, a novel boost-phase motion model is proposed, which is more accurate than the traditional planar motion model. Simulation results show that the proposed approach is superior to the traditional ones.
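The exponentially weighted RLS recursion itself is compact. The sketch below estimates a toy linear trend rather than the paper's boost-phase model; the forgetting factor and data are illustrative:

```python
# Exponentially weighted recursive least squares (EW-RLS), the estimator named
# in the abstract, shown on a toy linear model y = 3 + 2*t rather than the
# paper's boost-phase trajectory model (illustrative data and forgetting factor).
def ewrls(samples, lam=0.95, p0=1e6):
    theta = [0.0, 0.0]                     # parameter estimate [intercept, slope]
    P = [[p0, 0.0], [0.0, p0]]             # inverse-information matrix
    for t, y in samples:
        phi = [1.0, t]                     # regressor
        Pphi = [P[0][0] * phi[0] + P[0][1] * phi[1],
                P[1][0] * phi[0] + P[1][1] * phi[1]]
        denom = lam + phi[0] * Pphi[0] + phi[1] * Pphi[1]
        k = [Pphi[0] / denom, Pphi[1] / denom]          # gain
        err = y - (phi[0] * theta[0] + phi[1] * theta[1])
        theta = [theta[0] + k[0] * err, theta[1] + k[1] * err]
        # P <- (P - k (P phi)^T) / lam, using symmetry of P
        P = [[(P[i][j] - k[i] * Pphi[j]) / lam for j in range(2)] for i in range(2)]
    return theta

samples = [(t, 3.0 + 2.0 * t) for t in range(10)]
theta = ewrls(samples)
```

The forgetting factor lam < 1 down-weights old samples exponentially, which is what lets the fit track the locally quasi-linear segments the abstract exploits.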
Institute of Scientific and Technical Information of China (English)
于雷; 洪永胜; 耿雷; 周勇; 朱强; 曹隽隽; 聂艳
2015-01-01
Soil organic matter (SOM) plays an important role in soil fertility and the carbon (C) cycle. Soil spectral reflectance provides an alternative to classical laboratory physical and chemical analysis for estimating a wide range of key soil properties. In order to achieve rapid measurement of soil organic matter content (SOMC) based on hyperspectral analysis, 46 highly representative soil samples at 0-20 cm depth were collected from Gong'an County in the Jianghan Plain. The raw hyperspectral reflectance of the soil samples was measured by the standard procedure with an ASD FieldSpec3 instrument equipped with a high-intensity contact probe under laboratory conditions, and the physical and chemical properties of the samples were analyzed. Twenty-eight of the 46 samples were used for building hyperspectral estimation models of SOMC and the other 18 samples were used for model prediction. Next, the raw spectral reflectance (R) was transformed into 3 spectral indices, i.e., the logarithm of reciprocal reflectance (LR), first-order differential reflectance (FDR) and continuum removal reflectance (CR), and the correlation coefficients between the 4 spectral indices and SOMC were analyzed. The bands whose correlation coefficients passed the F significance test (P<0.01) were then extracted as significant bands. Finally, partial least squares regression (PLSR) was used to build quantitative inversion models of SOMC for this study area based on the full bands (400-2 400 nm) and on the significant bands, respectively. The prediction accuracies of these optimal models were assessed by comparing the determination coefficients (R2), root mean squared error (RMSE) and relative percent deviation (RPD) between the estimated and measured SOMC. The results showed that, after conducting the CR transformation on the raw soil spectral data, there were prominent differences among the
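The PLSR construction used above can be sketched with a single latent component on a toy matrix; real applications like this one use hundreds of spectral bands and several components, and the data below are purely illustrative:

```python
# One-component PLS1 regression: project X onto the direction most covariant
# with y, then regress y on that score. The same construction, with more
# components, underlies the PLSR models in the abstract (toy data only).
def pls1_one_component(X, y):
    n, p = len(X), len(X[0])
    xbar = [sum(row[j] for row in X) / n for j in range(p)]
    ybar = sum(y) / n
    Xc = [[row[j] - xbar[j] for j in range(p)] for row in X]   # centre X
    yc = [v - ybar for v in y]                                 # centre y
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]  # X^T y
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]                                  # loading weights
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]   # scores
    q = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    def predict(x):
        score = sum((x[j] - xbar[j]) * w[j] for j in range(p))
        return ybar + q * score
    return predict

X = [[1.0, 1.0], [2.0, 0.0], [3.0, 0.0], [4.0, 1.0]]
y = [2.0, 4.0, 6.0, 8.0]          # y = 2 * x1; x2 carries no signal here
predict = pls1_one_component(X, y)
```

Because the projection direction is driven by covariance with the response, PLSR handles many collinear predictors (such as adjacent spectral bands) where ordinary multiple regression breaks down.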
Michaelis-Menten kinetics, the operator-repressor system, and least squares approaches.
Hadeler, Karl Peter
2013-01-01
The Michaelis-Menten (MM) function is a fractional linear function depending on two positive parameters. These can be estimated by nonlinear or linear least squares methods. The non-linear methods, based directly on the defect of the MM function, can fail and not produce any minimizer. The linear methods always produce a unique minimizer which, however, may not be positive. Here we give sufficient conditions on the data such that the nonlinear problem has at least one positive minimizer and also conditions for the minimizer of the linear problem to be positive. We discuss in detail the models and equilibrium relations of a classical operator-repressor system, and we extend our approach to the MM problem with leakage and to reversible MM kinetics. The arrangement of the sufficient conditions exhibits the important role of data that have a concavity property (chemically feasible data).
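The linear least squares route discussed above rearranges the MM relation v = Vmax·s/(Km + s) into the defect Km·v − Vmax·s + s·v = 0, which is linear in (Km, Vmax). A sketch on exact synthetic data (parameter values are illustrative):

```python
# "Linear" least squares for Michaelis-Menten data: the defect
#   Km*v - Vmax*s + s*v = 0
# is linear in (Km, Vmax), giving a 2x2 normal-equation system solved by
# Cramer's rule. Whether the minimizer is positive depends on the data,
# which is the question the paper's sufficient conditions address.
def mm_linear_fit(s, v):
    Svv = sum(vi * vi for vi in v)
    Sss = sum(si * si for si in s)
    Ssv = sum(si * vi for si, vi in zip(s, v))
    Ssvv = sum(si * vi * vi for si, vi in zip(s, v))
    Sssv = sum(si * si * vi for si, vi in zip(s, v))
    # normal equations: Km*Svv - Vmax*Ssv = -Ssvv ; Km*Ssv - Vmax*Sss = -Sssv
    det = Svv * (-Sss) - (-Ssv) * Ssv
    km = ((-Ssvv) * (-Sss) - (-Ssv) * (-Sssv)) / det
    vmax = (Svv * (-Sssv) - (-Ssvv) * Ssv) / det
    return km, vmax

s = [0.5, 1.0, 2.0, 4.0]
v = [2.0 * si / (1.0 + si) for si in s]   # exact data with Km = 1, Vmax = 2
km, vmax = mm_linear_fit(s, v)
```

With noisy data this linear minimizer is unique but may fail to be positive, exactly the situation the abstract's sufficient conditions rule out.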
Multispectral colormapping using penalized least square regression
DEFF Research Database (Denmark)
Dissing, Bjørn Skovlund; Carstensen, Jens Michael; Larsen, Rasmus
2010-01-01
based multispectral system with a total of 11 channels in the visible area. To obtain interpretable models, the method estimates the projection coefficients with regard to their neighbors as well as the target. This results in relatively smooth coefficient curves which are correlated with the CIE...
On the interpretation of least squares collocation. [for geodetic data reduction
Tapley, B. D.
1976-01-01
A demonstration is given of the strict mathematical equivalence between least squares collocation and the classical minimum variance estimates. It is shown that the least squares collocation algorithms are a special case of the modified minimum variance estimates. The computational efficiency of several forms of the general minimum variance estimation algorithm is discussed. It is pointed out that for certain geodetic applications the least squares collocation algorithm may provide a more efficient formulation of the results from the point of view of the computations required.
Least Square Approximation by Linear Combinations of Multi(Poles).
1983-04-01
Least Square Approximation by Linear Combinations of (Multi)Poles. Willi Freeden, Department of Geodetic Science and Surveying, The Ohio State University, Columbus; Scientific Report No. 3.
Least-squares based iterative multipath super-resolution technique
Nam, Wooseok
2011-01-01
In this paper, we study the problem of multipath channel estimation for direct sequence spread spectrum signals. To resolve multipath components arriving within a short interval, we propose a new algorithm called the least-squares based iterative multipath super-resolution (LIMS). Compared to conventional super-resolution techniques, such as the multiple signal classification (MUSIC) and the estimation of signal parameters via rotation invariance techniques (ESPRIT), our algorithm has several appealing features. In particular, even in critical situations where the conventional super-resolution techniques are not very powerful due to limited data or the correlation between path coefficients, the LIMS algorithm can produce successful results. In addition, due to its iterative nature, the LIMS algorithm is suitable for recursive multipath tracking, whereas the conventional super-resolution techniques may not be. Through numerical simulations, we show that the LIMS algorithm can resolve the first arrival path amo...
Partial Least Squares Structural Equation Modeling with R
Directory of Open Access Journals (Sweden)
Hamdollah Ravand
2016-09-01
Full Text Available Structural equation modeling (SEM) has become widespread in educational and psychological research. Its flexibility in addressing complex theoretical models and its proper treatment of measurement error have made it the model of choice for many researchers in the social sciences. Nevertheless, the model imposes some daunting assumptions and restrictions (e.g., normality and relatively large sample sizes) that could discourage practitioners from applying it. Partial least squares SEM (PLS-SEM) is a nonparametric technique which makes no distributional assumptions and can be estimated with small sample sizes. In this paper a general introduction to PLS-SEM is given and it is compared with conventional SEM. Next, step-by-step procedures, along with R functions, are presented to estimate the model. A data set is analyzed and the outputs are interpreted.
semPLS: Structural Equation Modeling Using Partial Least Squares
Directory of Open Access Journals (Sweden)
Armin Monecke
2012-05-01
Full Text Available Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM which is especially suited for situations when data are not normally distributed. PLS path modelling is referred to as a soft modeling technique with minimum demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.
Institute of Scientific and Technical Information of China (English)
高宁; 崔希民; 王果; 张玲; 卢立托
2013-01-01
Taking the AR(p) model as an example and considering the influence of model errors on deformation prediction, the model errors are treated as a nonparametric signal and handled by a penalized (semi-parametric compensating) least squares method, in which the nonparametric component of the semi-parametric model expresses the model errors. There are two key steps in resolving the semi-parametric model: choosing the regularization matrix and determining the smoothing parameter; this paper focuses on the latter. To better control the balance between the residual part V^T P V and the smooth part S^T R S, a Xu-function method for determining the smoothing parameter α is proposed. Finally, on a building-subsidence prediction example, the results of the refined AR(p) model are compared with those of the grey model, the grey neural network model and the conventional AR model. The comparison demonstrates that the penalized least squares method can effectively compensate the model errors present in deformation modeling and yields better prediction results.
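The penalized least squares idea, balancing a residual term against a smoothness term weighted by a parameter α, can be sketched as a Whittaker-type smoother. The first-difference penalty and the α value below are illustrative, not the paper's regularization matrix or its Xu-function choice:

```python
# Penalized least squares in the spirit of the V^T P V vs. S^T R S balance:
#   min ||y - b||^2 + alpha * ||D b||^2,   D = first-difference matrix.
# The smoothing parameter alpha plays the role selected by the paper's
# Xu function; the penalty form and values here are illustrative.

def solve(A, b):
    """Gaussian elimination with partial pivoting for a small dense system."""
    n = len(A)
    M = [row[:] + [bv] for row, bv in zip(A, b)]
    for c in range(n):
        piv = max(range(c, n), key=lambda r: abs(M[r][c]))
        M[c], M[piv] = M[piv], M[c]
        for r in range(c + 1, n):
            f = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= f * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def penalized_ls(y, alpha):
    n = len(y)
    A = [[0.0] * n for _ in range(n)]
    for i in range(n):
        A[i][i] = 1.0
    for i in range(n - 1):          # add alpha * D^T D (tridiagonal)
        A[i][i] += alpha
        A[i + 1][i + 1] += alpha
        A[i][i + 1] -= alpha
        A[i + 1][i] -= alpha
    return solve(A, y)              # solves (I + alpha D^T D) b = y

y = [1.0, 3.0, 2.0, 5.0, 4.0]
smooth = penalized_ls(y, 1e6)       # heavy smoothing pulls b toward the mean
```

As α grows the solution is forced toward a constant (the sample mean), and as α → 0 it reproduces the data; the whole art, as the abstract notes, lies in picking α between these extremes.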
ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD
Institute of Scientific and Technical Information of China (English)
SONG Kaichen; NIE Xili
2006-01-01
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm in which the relationship between weight coefficients and measurement noise is established is proposed, giving attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm is presented which can adjust the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements. It is proved by simulation and experiment that the precision of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
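For the uncorrelated-noise case, the weighted least squares fusion of several sensors measuring the same quantity reduces to the inverse-variance weighted mean. A minimal sketch with illustrative noise variances:

```python
# Weighted least squares fusion of uncorrelated sensors measuring one quantity:
# the WLS estimate is the inverse-variance weighted mean, which makes the
# weight-vs-noise relationship of the abstract explicit (values illustrative).
def fuse(measurements, variances):
    weights = [1.0 / v for v in variances]
    wsum = sum(weights)
    est = sum(w * m for w, m in zip(weights, measurements)) / wsum
    return est, 1.0 / wsum          # fused value and its variance

est, var = fuse([10.2, 9.8, 10.0], [0.04, 0.01, 0.02])
```

Note that the fused variance 1/Σ(1/σᵢ²) is smaller than the best single sensor's variance, which is the precision gain the abstract reports; estimating the σᵢ² from the measurements themselves gives the adaptive variant.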
Solution of a Complex Least Squares Problem with Constrained Phase.
Bydder, Mark
2010-12-30
The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
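In the special case of a single complex column a, the phase-constrained solution x = r·e^(iφ) with r real has a simple closed form: φ = arg(aᴴb) and r = |aᴴb|/‖a‖². The general multi-column case requires the direct method the paper describes; this sketch covers only the one-column case with illustrative data:

```python
# Phase-constrained complex least squares, single-column special case:
# minimize ||a*x - b|| over x = r*exp(i*phi) with r real. Expanding the norm
# gives phi = arg(a^H b), r = |a^H b| / ||a||^2 (illustrative data below;
# the multi-column problem of the paper needs its general direct method).
import cmath

def phase_constrained_ls(a, b):
    ahb = sum(ai.conjugate() * bi for ai, bi in zip(a, b))  # a^H b
    aa = sum(abs(ai) ** 2 for ai in a)                      # ||a||^2
    return abs(ahb) / aa, cmath.phase(ahb)                  # r, phi

# consistent data: b = a * (2 * exp(i*0.7)), so the residual is zero
a = [1 + 1j, 2 - 1j, 0.5j]
x_true = 2.0 * cmath.exp(0.7j)
b = [ai * x_true for ai in a]
r, phi = phase_constrained_ls(a, b)
```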
note: The least square nucleolus is a general nucleolus
Elisenda Molina; Juan Tejada
2000-01-01
This short note proves that the least square nucleolus (Ruiz et al. (1996)) and the lexicographical solution (Sakawa and Nishizaki (1994)) select the same imputation in each game with nonempty imputation set. As a consequence the least square nucleolus is a general nucleolus (Maschler et al. (1992)).
Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants
One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
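A weighted least squares fit of the linearized Langmuir isotherm C/S = 1/(K·Smax) + C/Smax can be sketched as follows; the weights and parameter values are illustrative assumptions, not the specific weighting scheme of the paper:

```python
# Weighted least squares on the linearized Langmuir model: C/S is linear in C,
# and weighting is used to counter the error distortion the linearization
# introduces. Weights and parameters below are illustrative only.
def weighted_linfit(x, y, w):
    Sw = sum(w)
    Swx = sum(wi * xi for wi, xi in zip(w, x))
    Swy = sum(wi * yi for wi, yi in zip(w, y))
    Swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
    Swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
    slope = (Sw * Swxy - Swx * Swy) / (Sw * Swxx - Swx * Swx)
    return slope, (Swy - slope * Swx) / Sw

Smax0, K0 = 200.0, 0.05                        # mg/kg and L/mg (illustrative)
C = [5.0, 10.0, 20.0, 50.0, 100.0]             # solution concentrations
S = [Smax0 * K0 * c / (1.0 + K0 * c) for c in C]  # exact sorbed amounts
yy = [c / s for c, s in zip(C, S)]             # linearized response C/S
w = [s * s for s in S]                         # illustrative weights
slope, intercept = weighted_linfit(C, yy, w)
Smax, K = 1.0 / slope, slope / intercept       # back out Langmuir constants
```

With exact data any positive weights recover the constants; with real sorption data the choice of weights matters because ordinary least squares on the linearized form violates the homoscedasticity assumption the abstract alludes to.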
DEFF Research Database (Denmark)
Anders, Annett; Nishijima, Kazuyoshi
The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach takes its basis in the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option prices. ... The key to improving the computational efficiency is to "best utilize" the least squares method; i.e., the least squares method is applied to estimate, in a parametric way, the expected utility of terminal decisions conditional on realizations of the underlying random phenomena at the respective times. The implementation...
Least-squares methods involving the H^{-1} inner product
Energy Technology Data Exchange (ETDEWEB)
Pasciak, J.
1996-12-31
Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H^{-1} norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H^{-1} inner product.
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2016-06-20
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis, and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, as good as FE-MRA in all cases, and better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
Khawaja, Taimoor Saleem
and any abnormal or novel data during real-time operation. The results of the scheme are interpreted as a posterior probability of health (1 - probability of fault). As shown through two case studies in Chapter 3, the scheme is well suited for diagnosing imminent faults in dynamical non-linear systems. Finally, the failure prognosis scheme is based on an incremental weighted Bayesian LS-SVR machine. It is particularly suited for online deployment given the incremental nature of the algorithm and the quick optimization problem solved in the LS-SVR algorithm. By way of kernelization and a Gaussian Mixture Modeling (GMM) scheme, the algorithm can estimate "possibly" non-Gaussian posterior distributions for complex non-linear systems. An efficient regression scheme associated with the more rigorous core algorithm allows for long-term predictions, fault growth estimation with confidence bounds and remaining useful life (RUL) estimation after a fault is detected. The leading contributions of this thesis are (a) the development of a novel Bayesian Anomaly Detector for efficient and reliable Fault Detection and Identification (FDI) based on Least Squares Support Vector Machines, (b) the development of a data-driven real-time architecture for long-term Failure Prognosis using Least Squares Support Vector Machines, (c) Uncertainty representation and management using Bayesian Inference for posterior distribution estimation and hyper-parameter tuning, and finally (d) the statistical characterization of the performance of diagnosis and prognosis algorithms in order to relate the efficiency and reliability of the proposed schemes.
A Novel Kernel for Least Squares Support Vector Machine
Institute of Scientific and Technical Information of China (English)
FENG Wei; ZHAO Yong-ping; DU Zhong-hua; LI De-cai; WANG Li-feng
2012-01-01
Extreme learning machine (ELM) has attracted much attention in recent years due to its fast convergence and good performance. Merging ELM with the support vector machine is an important trend, yielding an ELM kernel. ELM-kernel-based methods can solve nonlinear problems by inducing an explicit mapping, in contrast to commonly used kernels such as the Gaussian kernel. In this paper, the ELM kernel is extended to least squares support vector regression (LSSVR), and ELM-LSSVR is proposed. ELM-LSSVR reduces training and test time simultaneously without extra techniques such as sequential minimal optimization or pruning mechanisms; moreover, the memory requirements for training and testing are also reduced. To confirm the efficacy and feasibility of the proposed ELM-LSSVR, experiments are reported demonstrating that ELM-LSSVR gains in training and test time with accuracy comparable to other algorithms.
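The core idea above — an explicit random ELM feature map followed by the regularized least-squares solve that LSSVR performs — can be sketched as follows. This is a simplified caricature with invented toy data and hyperparameters, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression data (invented for illustration).
X = np.linspace(-3, 3, 200).reshape(-1, 1)
y = np.sin(X).ravel() + 0.05 * rng.standard_normal(200)

# ELM-style explicit feature map: a random single hidden layer whose
# input weights and biases are drawn once and never trained.
L = 50                                  # number of hidden nodes (assumed)
Win = rng.standard_normal((1, L))
b = rng.standard_normal(L)
H = np.tanh(X @ Win + b)                # hidden-layer output matrix

# Regularized least-squares output weights: the same linear solve
# LSSVR performs, here in the explicit ELM feature space.
lam = 1e-3
beta = np.linalg.solve(H.T @ H + lam * np.eye(L), H.T @ y)

pred = H @ beta
print("train RMSE:", np.sqrt(np.mean((pred - y) ** 2)))
```

Because the mapping is explicit, training cost is a single L-by-L solve rather than an n-by-n kernel solve, which is the speed advantage the abstract points to.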
Parsimonious extreme learning machine using recursive orthogonal least squares.
Wang, Ning; Er, Meng Joo; Han, Min
2014-10-01
Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, a parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by an innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with a dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column, while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and are derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups, where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications, one per group, are run, each treating the other unknown parameters appearing in its regression equation as if they were known perfectly, with those values provided by the recursive least squares estimates from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to identification of mass and thruster properties for a thruster-controlled spacecraft.
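A batch caricature of the patented idea can make the group-wise structure concrete: the toy model below is not jointly linear in its two parameters, but is linear in each one once the other is held fixed, so alternating ordinary least-squares solves (standing in for the patent's concurrent recursive estimators) recover both. The model, data, and iteration count are all invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy model y = a*u + (a*b)*v: not jointly linear in (a, b), but
# linear in each parameter group once the other is held fixed.
a_true, b_true = 2.0, 0.5
u = rng.standard_normal(500)
v = rng.standard_normal(500)
y = a_true * u + a_true * b_true * v + 0.01 * rng.standard_normal(500)

# Alternate two scalar least-squares solves, each treating the other
# group's current estimate as if it were known perfectly.
a_hat, b_hat = 1.0, 0.0
for _ in range(20):
    phi_a = u + b_hat * v                       # regressor for a, given b
    a_hat = (phi_a @ y) / (phi_a @ phi_a)
    phi_b = a_hat * v                           # regressor for b, given a
    b_hat = (phi_b @ (y - a_hat * u)) / (phi_b @ phi_b)

print(a_hat, b_hat)
```

Each inner solve is a plain linear least-squares step, which is the point of the patent: no nonlinear optimizer is ever invoked.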
Global Convergence of Adaptive Generalized Predictive Controller Based on Least Squares Algorithm
Institute of Scientific and Technical Information of China (English)
张兴会; 陈增强; 袁著祉
2003-01-01
Some papers on stochastic adaptive control schemes have established convergence for algorithms using least-squares parameter estimation. With the widespread application of generalized predictive control (GPC), global convergence has become a key problem in automatic control theory. However, global convergence of GPC has not yet been established for algorithms that compute a least squares iteration. A generalized model of adaptive generalized predictive control is presented, and its global convergence is established on the basis of estimating the parameters of GPC by the least squares algorithm.
SUPERCONVERGENCE OF LEAST-SQUARES MIXED FINITE ELEMENTS FOR ELLIPTIC PROBLEMS ON TRIANGULATION
Institute of Scientific and Technical Information of China (English)
陈艳萍; 杨菊娥
2003-01-01
In this paper, we present the least-squares mixed finite element method and investigate superconvergence phenomena for second order elliptic boundary-value problems over triangulations. On the basis of the L2-projection and some mixed finite element projections, we obtain a superconvergence result for least-squares mixed finite element solutions. This error estimate indicates an accuracy of O(h^(3/2)) if the lowest order Raviart-Thomas elements are employed.
Visualizing Least-Square Lines of Best Fit.
Engebretsen, Arne
1997-01-01
Presents strategies that utilize graphing calculators and computer software to help students understand the concept of minimizing the squared residuals to find the line of best fit. Includes directions for least-squares drawings using a variety of technologies. (DDR)
Performance Evaluation of the Ordinary Least Square (OLS) and ...
African Journals Online (AJOL)
Nana Kwasi Peprah
Keywords: Differential Global Positioning, System, Total Least Square, Ordinary ... observation equations where only the observations are considered as ..... Dreiseitl, S., and Ohno-Machado, L. (2002), “Logistic Regression and Artificial Neural.
An application of least squares fit mapping to clinical classification.
Yang, Y.; Chute, C. G.
1992-01-01
This paper describes a unique approach, "Least Square Fit Mapping," to clinical data classification. We use large collections of human-assigned text-to-category matches as training sets to compute the correlations between physicians' terms and canonical concepts. A Linear Least Squares Fit (LLSF) technique is employed to obtain a mapping function which optimally fits the known matches given in a training set and probabilistically captures the unknown matches for arbitrary texts. We tested our...
Recursive least squares background prediction of univariate syndromic surveillance data
Burkom Howard; Najmi Amir-Homayoon
2009-01-01
Abstract Background Surveillance of univariate syndromic data as a means of potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of s...
Recursive least squares background prediction of univariate syndromic surveillance data
Directory of Open Access Journals (Sweden)
Burkom Howard
2009-01-01
Full Text Available Abstract Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the day-of-the-week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold
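The adaptive RLS background estimator at the heart of the abstract above can be sketched as a forgetting-factor recursive least squares filter producing one-step-ahead predictions. The day-of-week transformation, the seven-day horizon, and the detection threshold are omitted; the series, model order, and forgetting factor are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic daily count series with a slow trend (invented data).
n = 300
counts = 50 + 0.05 * np.arange(n) + rng.normal(0, 2, n)

# Adaptive RLS with a forgetting factor: regress today's count on a
# constant and yesterday's count, updating recursively so the
# background estimate tracks non-stationary data.
lam = 0.98                      # forgetting factor (assumed)
theta = np.zeros(2)             # [intercept, AR(1) coefficient]
P = np.eye(2) * 1e3             # inverse information matrix

preds = []
for t in range(1, n):
    phi = np.array([1.0, counts[t - 1]])
    preds.append(phi @ theta)                 # one-step-ahead background
    k = P @ phi / (lam + phi @ P @ phi)       # RLS gain vector
    theta = theta + k * (counts[t] - phi @ theta)
    P = (P - np.outer(k, phi @ P)) / lam

print("final parameters:", theta)
```

After a burn-in period the one-step predictions settle near the noise floor of the series, which is the background against which anomalies would be flagged.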
Institute of Scientific and Technical Information of China (English)
吴云; 郭际明
2008-01-01
To obtain more accurate position estimates, the stochastic model is estimated using the residuals of the observations; hence the stochastic model describes the noise and biases in the measurements more realistically. Numerical results using GPS data and the broadcast ephemeris indicate that accurate position estimates at the sub-meter level are obtainable.
Improving the gradient in least-squares reverse time migration
Liu, Qiancheng
2016-04-01
Least-squares reverse time migration (LSRTM) is a linearized inversion technique used for estimating high-wavenumber reflectivity. However, due to the redundant overlay of the band-limited source wavelet, the gradient based on the cross-correlated imaging principle suffers from a loss of wavenumber information. We first prepare the residuals between observed and demigrated data by deconvolving with the amplitude spectrum of the source wavelet, and then migrate the preprocessed residuals by using the cross-correlation imaging principle. In this way, a gradient that preserves the spectral signature of data residuals is obtained. The computational cost of source-wavelet removal is negligible compared to that of wavefield simulation. The two-dimensional Marmousi model containing complex geology structures is considered to test our scheme. Numerical examples show that our improved gradient in LSRTM has a better convergence behavior and promises inverted results of higher resolution. Finally, we attempt to update the background velocity with our inverted velocity perturbations to approach the true velocity.
Integer least-squares theory for the GNSS compass
Teunissen, P. J. G.
2010-07-01
Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategies. It extends current unconstrained ILS theory to the nonlinearly constrained case, an extension that is particularly suited for precise attitude determination. As opposed to current practice, our method does proper justice to the a priori given information. The nonlinear baseline constraint is fully integrated into the ambiguity objective function, thereby receiving a proper weighting in its minimization and providing guidance for the integer search. Different search strategies are developed to compute exact and approximate solutions of the nonlinear constrained ILS problem. Their applicability depends on the strength of the GNSS model and on the length of the baseline. Two of the presented search strategies, a global and a local one, are based on the use of an ellipsoidal search space. This has the advantage that standard methods can be applied. The global ellipsoidal search strategy is applicable to GNSS models of sufficient strength, while the local ellipsoidal search strategy is applicable to models for which the baseline lengths are not too small. We also develop search strategies for the most challenging case, namely when the curvature of the non-ellipsoidal ambiguity search space needs to be taken into account. Two such strategies are presented, an approximate one and a rigorous, somewhat more complex, one. The approximate one is applicable when the fixed baseline variance matrix is close to diagonal. Both methods make use of a search and shrink strategy. The rigorous solution is efficiently obtained by means of a search and shrink strategy that uses non-quadratic, but easy-to-evaluate, bounding functions of the ambiguity objective function. The theory
Least squares in calibration: dealing with uncertainty in x.
Tellinghuisen, Joel
2010-08-01
The least-squares (LS) analysis of data with error in x and y is generally thought to yield best results when carried out by minimizing the "total variance" (TV), defined as the sum of the properly weighted squared residuals in x and y. Alternative "effective variance" (EV) methods project the uncertainty in x into an effective contribution to that in y, and though easier to employ are considered to be less reliable. In the case of a linear response function with both sigma_x and sigma_y constant, the EV solutions are identically those from ordinary LS; and Monte Carlo (MC) simulations reveal that they can actually yield smaller root-mean-square errors than the TV method. Furthermore, the biases can be predicted from theory based on inverse regression (x upon y when x is error-free and y is uncertain), which yields a bias factor proportional to the ratio sigma_x^2/sigma_xm^2 of the random-error variance in x to the model variance. The MC simulations confirm that the biases are essentially independent of the error in y, hence correctable. With such bias corrections, the better performance of the EV method in estimating the parameters translates into better performance in estimating the unknown (x_0) from measurements (y_0) of its response. The predictability of the EV parameter biases extends also to heteroscedastic y data as long as sigma_x remains constant, but the estimation of x_0 is not as good in this case. When both x and y are heteroscedastic, there is no known way to predict the biases. However, the MC simulations suggest that for proportional error in x, a geometric x-structure leads to small bias and comparable performance for the EV and TV methods.
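The effective-variance iteration for a straight line can be sketched as below: the x uncertainty is projected into y through the current slope, giving weights w = 1/(sigma_y^2 + m^2 sigma_x^2), and the weighted fit is repeated to convergence. With constant sigmas the weights are uniform, so the result coincides with ordinary LS exactly as the abstract states. Data and sigma values are invented.

```python
import numpy as np

rng = np.random.default_rng(4)

# Straight-line data with error in both x and y (invented values).
sx, sy = 0.2, 0.1
x_true = np.linspace(0, 10, 50)
x = x_true + rng.normal(0, sx, 50)
y = 1.0 + 0.8 * x_true + rng.normal(0, sy, 50)

# Effective-variance fit: weight w = 1/(sy^2 + m^2*sx^2) using the
# current slope m, then redo the weighted linear fit; iterate.
m, c = 1.0, 0.0
for _ in range(10):
    W = np.full_like(x, 1.0 / (sy**2 + m**2 * sx**2))
    Sw, Sx, Sy_ = W.sum(), (W * x).sum(), (W * y).sum()
    Sxx, Sxy = (W * x * x).sum(), (W * x * y).sum()
    m = (Sw * Sxy - Sx * Sy_) / (Sw * Sxx - Sx * Sx)
    c = (Sy_ - m * Sx) / Sw

print("slope, intercept:", m, c)
```

Because the weights here are constant across points, one iteration already reproduces the ordinary LS slope; heteroscedastic sigmas would make the weights, and hence the iteration, nontrivial.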
CSIR Research Space (South Africa)
Ramoelo, Abel
2013-06-01
Full Text Available squares regression (PLSR) for predicting grass N and P concentrations through integrating in situ hyperspectral remote sensing and environmental variables (climatic, edaphic and topographic). Data were collected along a land use gradient in the greater...
Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods
Chamis, Christos C.; Coroneos, Rula M.
2007-01-01
Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models, based on fundamental considerations, that can be used to predict under what conditions voids form, the size of the voids, and the subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.
Constrained total least squares algorithm for passive location based on bearing-only measurements
Institute of Scientific and Technical Information of China (English)
WANG Ding; ZHANG Li; WU Ying
2007-01-01
The constrained total least squares algorithm for passive location based on bearing-only measurements is presented in this paper. In this algorithm, the nonlinear measurement equations are first transformed into linear equations and the effect of the measurement noise on the linear equation coefficients is analyzed; the passive location problem can therefore be treated as a constrained total least squares problem, which is then converted into an unconstrained optimization problem solvable by the Newton algorithm. Finally, an analysis of the location accuracy is given. The simulation results show that the new algorithm is effective and practicable.
A note on implementation of decaying product correlation structures for quasi-least squares.
Shults, Justine; Guerra, Matthew W
2014-08-30
This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable.
Estimation of the Seemingly Unrelated Regression (SUR) Model with the Generalized Least Squares (GLS) Method
Directory of Open Access Journals (Sweden)
Ade Widyaningsih
2014-06-01
Full Text Available Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares method. The least squares method estimates the parameters of one or more regressions, but relationships among the errors in the responses of the different equations are not accommodated. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with the GLS method to world gasoline demand data. The author finds that SUR estimated by GLS is better than OLS because SUR produces smaller errors than OLS.
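A minimal two-equation sketch of the SUR/GLS procedure described above: equation-by-equation OLS residuals estimate the cross-equation error covariance, and a feasible GLS solve on the stacked system then exploits that correlation. All data values and coefficients are invented; this is not the paper's gasoline-demand data.

```python
import numpy as np

rng = np.random.default_rng(3)
n = 400

# Two toy regression equations whose errors are correlated across
# equations (invented data; true coefficients shown below).
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
e = rng.multivariate_normal([0, 0], [[1.0, 0.8], [0.8, 1.0]], n)
y1 = 1.0 + 2.0 * x1 + e[:, 0]
y2 = -0.5 + 1.5 * x2 + e[:, 1]

X1 = np.column_stack([np.ones(n), x1])
X2 = np.column_stack([np.ones(n), x2])

# Step 1: equation-by-equation OLS to estimate the error covariance.
b1, *_ = np.linalg.lstsq(X1, y1, rcond=None)
b2, *_ = np.linalg.lstsq(X2, y2, rcond=None)
R = np.column_stack([y1 - X1 @ b1, y2 - X2 @ b2])
S = R.T @ R / n                        # estimated 2x2 error covariance

# Step 2: feasible GLS on the stacked system, Omega = S kron I_n.
X = np.block([[X1, np.zeros_like(X2)], [np.zeros_like(X1), X2]])
y = np.concatenate([y1, y2])
Oinv = np.kron(np.linalg.inv(S), np.eye(n))
beta = np.linalg.solve(X.T @ Oinv @ X, X.T @ Oinv @ y)
print(beta)  # [intercept1, slope1, intercept2, slope2]
```

The efficiency gain over OLS comes entirely from the off-diagonal entries of S; with uncorrelated errors the two steps collapse back to equation-by-equation OLS.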
Least-squares finite-element lattice Boltzmann method.
Li, Yusong; LeBoeuf, Eugene J; Basu, P K
2004-06-01
A new numerical model of the lattice Boltzmann method utilizing least-squares finite element in space and Crank-Nicolson method in time is presented. The new method is able to solve problem domains that contain complex or irregular geometric boundaries by using finite-element method's geometric flexibility and numerical stability, while employing efficient and accurate least-squares optimization. For the pure advection equation on a uniform mesh, the proposed method provides for fourth-order accuracy in space and second-order accuracy in time, with unconditional stability in the time domain. Accurate numerical results are presented through two-dimensional incompressible Poiseuille flow and Couette flow.
A note on the limitations of lattice least squares
Gillis, J. T.; Gustafson, C. L.; Mcgraw, G. A.
1988-01-01
This paper quantifies the known limitation of lattice least squares to ARX models in terms of the dynamic properties of the system being modeled. This allows determination of the applicability of lattice least squares in a given situation. The central result is that an equivalent ARX model exists for an ARMAX system if and only if the ARMAX system has no transmission zeros from the noise port to the output port. The technique used to prove this fact is a construction using the matrix fractional description of the system. The final section presents two computational examples.
Multi-source least-squares migration of marine data
Wang, Xin
2012-11-04
Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its IO cost is significantly decreased.
HERMITE SCATTERED DATA FITTING BY THE PENALIZED LEAST SQUARES METHOD
Institute of Scientific and Technical Information of China (English)
Tianhe Zhou; Danfu Han
2009-01-01
Given a set of scattered data with derivative values, if the data are noisy or there is an extremely large number of data points, we use an extension of the penalized least squares method of von Golitschek and Schumaker [Serdica, 18 (2002), pp. 1001-1020] to fit the data. We show that the extension of the penalized least squares method produces a unique spline fitting the data. We also give the error bound for the extended method. Some numerical examples are presented to demonstrate the effectiveness of the proposed method.
Fast Dating Using Least-Squares Criteria and Algorithms.
To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier
2016-01-01
Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that
Consistency of System Identification by Global Total Least Squares
C. Heij (Christiaan); W. Scherrer
1996-01-01
textabstractGlobal total least squares (GTLS) is a method for the identification of linear systems where no distinction between input and output variables is required. This method has been developed within the deterministic behavioural approach to systems. In this paper we analyse statistical proper
Consistency of global total least squares in stochastic system identification
C. Heij (Christiaan); W. Scherrer
1995-01-01
textabstractGlobal total least squares has been introduced as a method for the identification of deterministic system behaviours. We analyse this method within a stochastic framework, where the observed data are generated by a stationary stochastic process. Conditions are formulated so that the meth
Integer least-squares theory for the GNSS compass
Teunissen, P.J.G.
2010-01-01
Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to highprecision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategi
Risk and Management Control: A Partial Least Square Modelling Approach
DEFF Research Database (Denmark)
Nielsen, Steen; Pontoppidan, Iens Christian
and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct a valid feed forward but also predictions for decision making including risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data...
SELECTION OF REFERENCE PLANE BY THE LEAST SQUARES FITTING METHODS
Directory of Open Access Journals (Sweden)
Przemysław Podulka
2016-06-01
For least squares polynomial fittings it was found that the applied method usually gave better robustness to the occurrence of scratches, valleys and dimples on cylinder liner surfaces. For piston skirt surfaces, better edge-filtering results were obtained. It is also recommended to analyse the Sk parameters for proper selection of the reference plane in surface topography measurements.
Fuzzy modeling of friction by bacterial and least square optimization
Jastrzebski, Marcin
2006-03-01
In this paper a new method of tuning the parameters of Sugeno fuzzy models is presented. Because the modeled phenomenon is discontinuous, a new type of consequent function is introduced. The described algorithm (BA+LSQ) combines a bacterial algorithm (BA) for tuning the parameters of the membership functions with the least squares method (LSQ) for the parameters of the consequent functions.
Plane-wave Least-squares Reverse Time Migration
Dai, Wei
2012-11-04
Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computation cost, linear phase shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent through all the angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.
An Orthogonal Least Squares Based Approach to FIR Designs
Institute of Scientific and Technical Information of China (English)
Xiao-Feng Wu; Zi-Qiang Lang; Stephen A Billings
2005-01-01
This paper is concerned with the application of the forward Orthogonal Least Squares (OLS) algorithm to the design of Finite Impulse Response (FIR) filters. The focus of this study is a new FIR filter design procedure and its comparison with the traditional fir2() routine provided by MATLAB.
Weighted least squares stationary approximations to linear systems.
Bierman, G. J.
1972-01-01
Investigation of the problem of replacing a certain time-varying linear system by a stationary one. Several quadratic criteria are proposed to aid in determining suitable candidate systems. One criterion for choosing the matrix B (in the stationary system) is initial-condition dependent, and another bounds the 'worst case' homogeneous system performance. Both of these criteria produce weighted least square fits.
ON A FAMILY OF MULTIVARIATE LEAST-SQUARES ORTHOGONAL POLYNOMIALS
Institute of Scientific and Technical Information of China (English)
郑成德; 王仁宏
2003-01-01
In this paper the new notion of multivariate least-squares orthogonal polynomials in rectangular form is introduced. Their existence and uniqueness are studied and some methods for their recursive computation are given. As an application, ... is constructed.
On the Routh approximation technique and least squares errors
Aburdene, M. F.; Singh, R.-N. P.
1979-01-01
A new method for calculating the coefficients of the numerator polynomial of the direct Routh approximation method (DRAM) using the least square error criterion is formulated. The necessary conditions have been obtained in terms of algebraic equations. The method is useful for low frequency as well as high frequency reduced-order models.
Optimization of sequential decisions by least squares Monte Carlo method
DEFF Research Database (Denmark)
Nishijima, Kazuyoshi; Anders, Annett
change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which...
ON THE COMPARISION OF THE TOTAL LEAST SQUARES AND THE LEAST SQUARES PROBLEMS%TLS和LS问题的比较
Institute of Scientific and Technical Information of China (English)
刘永辉; 魏木生
2003-01-01
There are a number of articles discussing the total least squares (TLS) and least squares (LS) problems. M. Wei (Mathematica Numerica Sinica 20(3) (1998), 267-278) proposed a new orthogonal projection method to improve existing perturbation bounds for the TLS and LS problems. In this paper, we continue to improve existing bounds on the differences between the squared residuals, the weighted squared residuals and the minimum norm correction matrices of the TLS and LS problems.
Hierarchical Least Squares Identification and Its Convergence for Large Scale Multivariable Systems
Institute of Scientific and Technical Information of China (English)
丁锋; 丁韬
2002-01-01
The recursive least squares identification algorithm (RLS) for large scale multivariable systems requires a large amount of calculations, therefore, the RLS algorithm is difficult to implement on a computer. The computational load of estimation algorithms can be reduced using the hierarchical least squares identification algorithm (HLS) for large scale multivariable systems. The convergence analysis using the Martingale Convergence Theorem indicates that the parameter estimation error (PEE) given by the HLS algorithm is uniformly bounded without a persistent excitation signal and that the PEE consistently converges to zero for the persistent excitation condition. The HLS algorithm has a much lower computational load than the RLS algorithm.
Institute of Scientific and Technical Information of China (English)
LUO Zhen-dong; MAO Yun-kui; ZHU Jiang
2007-01-01
The Galerkin-Petrov least squares method is combined with the mixed finite element method to deal with the stationary, incompressible magnetohydrodynamics system of equations with viscosity. A Galerkin-Petrov least squares mixed finite element format for the stationary incompressible magnetohydrodynamics equations is presented, and the existence and error estimates of its solution are derived. Through this method, the combination among the mixed finite element spaces does not demand the discrete Babuška-Brezzi stability conditions, so that the mixed finite element spaces can be chosen arbitrarily and error estimates with optimal order can be obtained.
Partial least-squares: Theoretical issues and engineering applications in signal processing
Directory of Open Access Journals (Sweden)
Fredric M. Ham
1996-01-01
Full Text Available In this paper we present partial least-squares (PLS), which is a statistical modeling method used extensively in analytical chemistry for quantitatively analyzing spectroscopic data. Comparisons are made between classical least-squares (CLS) and PLS to show how PLS can be used in certain engineering signal processing applications. Moreover, it is shown that in certain situations when there exists a linear relationship between the independent and dependent variables, PLS can yield better predictive performance than CLS when it is not desirable to use all of the empirical data to develop a calibration model used for prediction. Specifically, because PLS is a factor analysis method, optimal selection of the number of PLS factors can result in a calibration model whose predictive performance is considerably better than that of CLS. That is, factor analysis (rank reduction) allows only those features of the data that are associated with information of interest to be retained for development of the calibration model, and the remaining data associated with noise are discarded. It is shown that PLS can yield physical insight into the system from which empirical data have been collected. Also, when there exists a nonlinear cause-and-effect relationship between the independent and dependent variables, the PLS calibration model can yield prediction errors that are much less than those for CLS. Three PLS application examples are given and the results are compared to CLS. In one example, a method is presented using PLS for parametric system identification. Using PLS for system identification allows simultaneous estimation of the system dimension and the system parameter vector associated with a minimal realization of the system.
Least squares estimation in a simple random coefficient autoregressive model
DEFF Research Database (Denmark)
Johansen, Søren; Lange, Theis
2013-01-01
The question we discuss is whether a simple random coefficient autoregressive model with infinite variance can create the long swings, or persistence, which are observed in many macroeconomic variables. The model is defined by yt=stρyt−1+εt,t=1,…,n, where st is an i.i.d. binary variable with p=P(...
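The model just described is easy to simulate. The sketch below is an illustration we add (not code from the paper): it generates y_t = s_t ρ y_{t−1} + ε_t with Gaussian noise and shows that the ordinary least squares regression of y_t on y_{t−1} estimates the mean coefficient pρ rather than ρ itself; the values of n, ρ, and p are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)
n, rho, p = 10_000, 0.9, 0.5

# Simulate y_t = s_t * rho * y_{t-1} + eps_t, with s_t ~ Bernoulli(p).
s = rng.binomial(1, p, size=n)
eps = rng.standard_normal(n)
y = np.zeros(n)
for t in range(1, n):
    y[t] = s[t] * rho * y[t - 1] + eps[t]

# Ordinary least squares regression of y_t on y_{t-1}:
# rho_hat converges to E[s_t] * rho = p * rho in this finite-variance setting.
rho_hat = np.sum(y[1:] * y[:-1]) / np.sum(y[:-1] ** 2)
print(rho_hat)  # close to p * rho = 0.45
```

The paper's focus is the infinite-variance case, where the limit theory differs; the finite-variance simulation above only illustrates what the least squares estimator targets.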
Moving least-squares corrections for smoothed particle hydrodynamics
Directory of Open Access Journals (Sweden)
Ciro Del Negro
2011-12-01
Full Text Available First-order moving least-squares are typically used in conjunction with smoothed particle hydrodynamics in the form of post-processing filters for density fields, to smooth out noise that develops in most applications of smoothed particle hydrodynamics. We show how an approach based on higher-order moving least-squares can be used to correct some of the main limitations in gradient and second-order derivative computation in classic smoothed particle hydrodynamics formulations. With a small increase in computational cost, we manage to achieve smooth density distributions without the need for post-processing and with higher accuracy in the computation of the viscous term of the Navier–Stokes equations, thereby reducing the formation of spurious shockwaves or other streaming effects in the evolution of fluid flow. Numerical tests on a classic two-dimensional dam-break problem confirm the improvement of the new approach.
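The moving least-squares idea underlying such corrections can be sketched in one dimension (a generic illustration under our own assumptions, not the authors' SPH implementation): at each evaluation point, a Gaussian-weighted linear fit is solved and evaluated there, which reproduces linear fields exactly.

```python
import numpy as np

# Generic 1-D moving least-squares (MLS) sketch, not the authors' SPH code:
# at each evaluation point xe, solve a locally weighted linear fit of the
# samples and read off its value at xe.
def mls(xs, ys, x_eval, h=0.2):
    out = []
    for xe in x_eval:
        w = np.exp(-((xs - xe) / h) ** 2)   # Gaussian weights centered at xe
        V = np.vander(xs - xe, 2)           # columns: (x - xe), 1
        sw = np.sqrt(w)
        c, *_ = np.linalg.lstsq(V * sw[:, None], ys * sw, rcond=None)
        out.append(c[-1])                   # constant term = fitted value at xe
    return np.array(out)

xs = np.linspace(0.0, 1.0, 50)
ys = 2.0 * xs + 1.0                         # a linear field
x_eval = np.linspace(0.1, 0.9, 5)
print(mls(xs, ys, x_eval))                  # reproduces 2*x + 1 exactly
```

First-order MLS reproduces constant and linear fields exactly; this consistency property is what makes it useful for correcting gradient estimates in particle methods.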
Least Squares Shadowing for Sensitivity Analysis of Turbulent Fluid Flows
Blonigan, Patrick; Wang, Qiqi
2014-01-01
Computational methods for sensitivity analysis are invaluable tools for aerodynamics research and engineering design. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in turbulent fluid flow fields, specifically those obtained using high-fidelity turbulence simulations. This is because of a number of dynamical properties of turbulent and chaotic fluid flows, most importantly the high sensitivity of the initial value problem, popularly known as the "butterfly effect". The recently developed least squares shadowing (LSS) method avoids the issues encountered by traditional sensitivity analysis methods by approximating the "shadow trajectory" in phase space, avoiding the high sensitivity of the initial value problem. The following paper discusses how the least squares problem associated with LSS is solved. Two methods are presented and are demonstrated on a simulation of homogeneous isotropic turbulence and the Kuramoto-Sivashinsky (KS) equation, a 4th order c...
Anisotropy minimization via least squares method for transformation optics.
Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H
2014-07-28
In this work the least squares method is used to reduce anisotropy in the transformation optics technique. To apply the least squares method, a power series is added to the coordinate transformation functions. The series coefficients were calculated to reduce the deviations in the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of transformation optics to design waveguides. To demonstrate the proposed technique, a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero.
Linearized least-square imaging of internally scattered data
Aldawood, Ali
2014-01-01
Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-squares inversion of double-scattered data helped delineate that reflector with a minimal acquisition fingerprint.
CONDITION NUMBER FOR WEIGHTED LINEAR LEAST SQUARES PROBLEM
Institute of Scientific and Technical Information of China (English)
Yimin Wei; Huaian Diao; Sanzheng Qiao
2007-01-01
In this paper, we investigate the condition numbers for the generalized matrix inversion and the rank-deficient linear least squares problem: min_x ||Ax − b||_2, where A is an m-by-n (m ≥ n) rank-deficient matrix. We first derive an explicit expression for the condition number in the weighted Frobenius norm ||[AT, βb]||_F of the data A and b, where T is a positive diagonal matrix and β is a positive scalar. We then discuss the sensitivity of the standard 2-norm condition numbers for the generalized matrix inversion and rank-deficient least squares, and establish relations between the condition numbers and their level-2 condition numbers.
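The standard 2-norm condition number mentioned in the abstract is easy to compute directly. The following is our own toy illustration (unrelated to the paper's weighted-norm analysis) of why conditioning matters for least squares:

```python
import numpy as np

# Toy illustration: kappa_2(A) = sigma_max / sigma_min governs how strongly
# the least squares problem min_x ||Ax - b||_2 amplifies perturbations.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 1.0002]])
b = A @ np.array([1.0, 1.0])      # consistent right-hand side

x, *_ = np.linalg.lstsq(A, b, rcond=None)
kappa = np.linalg.cond(A)         # ratio of extreme singular values
print(kappa)                      # large: the two columns are nearly dependent
```

Even though the residual here is essentially zero, a condition number this large means small perturbations of A or b can move the solution x substantially.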
Source allocation by least-squares hydrocarbon fingerprint matching
Energy Technology Data Exchange (ETDEWEB)
William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)
2006-11-01
There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.
Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression
Verdoolaege, G.; Shabbir, A.; Hornung, G.
2016-11-01
Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.
SUBSPACE SEARCH METHOD FOR A CLASS OF LEAST SQUARES PROBLEM
Institute of Scientific and Technical Information of China (English)
Zi-Luan Wei
2000-01-01
A subspace search method for solving a class of least squares problems is presented in the paper. The original problem is divided into many independent subproblems, a search direction is obtained by solving each of the subproblems, and a new iterate is determined by choosing a suitable step length such that the residual norm is decreasing. The convergence result is also given. A numerical test is also shown for a special problem.
Parallel Nonnegative Least Squares Solvers for Model Order Reduction
2016-03-01
Parallel nonnegative least squares (NNLS) solvers are developed specifically for model order reduction. For the PQN method, the size of the active set is controlled to promote sparse solutions; this is described in Section 3.2.1 of the report.
An iterative approach to a constrained least squares problem
Directory of Open Access Journals (Sweden)
Simeon Reich
2003-01-01
In the case where the set of constraints is the nonempty intersection of a finite collection of closed convex subsets of H, an iterative algorithm is designed. The resulting sequence is shown to converge strongly to the unique solution of the regularized problem. The net of solutions to the regularized problems converges strongly to the minimum norm solution of the least squares problem if its solution set is nonempty.
Online least-squares policy iteration for reinforcement learning control
2010-01-01
Reinforcement learning is a promising paradigm for learning optimal control. We consider policy iteration (PI) algorithms for reinforcement learning, which iteratively evaluate and improve control policies. State-of-the-art, least-squares techniques for policy evaluation are sample-efficient and have relaxed convergence requirements. However, they are typically used in offline PI, whereas a central goal of reinforcement learning is to develop online algorithms. Therefore, we propose an online...
MODIFIED LEAST SQUARE METHOD ON COMPUTING DIRICHLET PROBLEMS
Institute of Scientific and Technical Information of China (English)
[No author listed]
2006-01-01
The singularity theory of dynamical systems is linked to the numerical computation of boundary value problems of differential equations. It turns out to be a modified least squares method for the calculation of a variational problem defined on C^k(Ω), in which the basis functions are polynomials and the computation of the problem is transferred to computing the coefficients of the basis functions. The theoretical treatment and some simple examples are provided for understanding the modification procedure of the metho...
Penalized Weighted Least Squares for Outlier Detection and Robust Regression
Gao, Xiaoli; Fang, Yixin
2016-01-01
To conduct regression analysis for data contaminated with outliers, many approaches have been proposed for simultaneous outlier detection and robust regression, including the approach proposed in this manuscript. This new approach is called "penalized weighted least squares" (PWLS). By assigning each observation an individual weight and incorporating a lasso-type penalty on the log-transformation of the weight vector, the PWLS is able to perform outlier detection and robust regression simultaneou...
Least Squares Polynomial Chaos Expansion: A Review of Sampling Strategies
Hadigol, Mohammad; Doostan, Alireza
2017-01-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least squares based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE), and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling ...
AN ASSESSMENT OF THE MESHLESS WEIGHTED LEAST-SQUARE METHOD
Institute of Scientific and Technical Information of China (English)
Pan Xiaofei; Sze Kim Yim; Zhang Xiong
2004-01-01
The meshless weighted least-squares (MWLS) method was developed based on the weighted least-squares method. The method possesses several advantages, such as high accuracy, high stability and high efficiency. Moreover, the coefficient matrix obtained is symmetric and positive semi-definite. In this paper, the method is examined further and critically. The effects of several parameters on the results of MWLS are investigated systematically by using a cantilever beam and an infinite plate with a central circular hole. The numerical results are compared with those obtained by using the collocation-based meshless method (CBMM) and the Galerkin-based meshless method (GBMM). The investigated parameters include the type of approximation, the type of weight function, the number of neighbors of an evaluation point, as well as the manner in which the neighbors of an evaluation point are determined. This study shows that the displacement accuracy and convergence rate obtained by MWLS are comparable to those of the GBMM, while the stress accuracy and convergence rate yielded by MWLS are even higher than those of GBMM. Furthermore, MWLS is much more efficient than GBMM. This study also shows that the instability of CBMM is mainly due to the neglect of the equilibrium residuals at boundary nodes. In MWLS, the residuals of all the governing equations are minimized in a weighted least-squares sense.
Multi-source least-squares reverse time migration
Dai, Wei
2012-06-15
Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computational efficiency. By iterative migration of supergathers, each of which is a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.
Solving linear inequalities in a least squares sense
Energy Technology Data Exchange (ETDEWEB)
Bramley, R.; Winnicka, B. [Indiana Univ., Bloomington, IN (United States)
1994-12-31
Let A ∈ ℝ^(m×n) be an arbitrary real matrix, and let b ∈ ℝ^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ‖Ax − b‖, where ‖·‖ refers to the vector two-norm. Such an x* solves the normal equations Aᵀ(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ‖(Ax − b)₊‖, where the i-th component of the vector v₊ is the maximum of zero and the i-th component of v.
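The objective described in the abstract can be minimized with plain gradient descent (our own illustration, not the authors' algorithm): the gradient of ½‖(Ax − b)₊‖² is Aᵀ(Ax − b)₊, so only violated rows of Ax ≤ b contribute to the update.

```python
import numpy as np

# Sketch (assumption: simple gradient descent, not the paper's solver) of
# minimizing ||(Ax - b)_+||, where (v)_+ = max(v, 0) componentwise, so only
# violated inequalities of Ax <= b contribute to the residual.
def lsq_inequalities(A, b, steps=5000, lr=1e-2):
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        r = np.maximum(A @ x - b, 0.0)   # residuals of violated rows only
        x -= lr * (A.T @ r)              # gradient of 0.5 * ||(Ax - b)_+||^2
    return x

# Inconsistent system: x <= 1, y <= 1, x + y >= 3 has no feasible point.
A = np.array([[1.0, 0.0], [0.0, 1.0], [-1.0, -1.0]])
b = np.array([1.0, 1.0, -3.0])
x = lsq_inequalities(A, b)
print(x)  # compromise point near (4/3, 4/3), violating each bound by 1/3
```

For this symmetric example the minimizer balances the three violations equally, which is the analogue of the inconsistent-equations case where the residual norm is spread across observations.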
Adjoint sensitivity in PDE constrained least squares problems as a multiphysics problem
Lahaye, D.; Mulckhuyse, W.F.W.
2012-01-01
Purpose - The purpose of this paper is to provide a framework for the implementation of an adjoint sensitivity formulation for least-squares partial differential equations constrained optimization problems exploiting a multiphysics finite elements package. The estimation of the diffusion coefficient
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
LEAST-SQUARES MIXED FINITE ELEMENT METHOD FOR SADDLE-POINT PROBLEM
Institute of Scientific and Technical Information of China (English)
Lie-heng Wang; Huo-yuan Duan
2000-01-01
In this paper, a least-squares mixed finite element method for the solution of the primal saddle-point problem is developed. It is proved that the approximate problem is consistently elliptic in the conforming finite element spaces, with only the discrete BB-condition needed for a smaller auxiliary problem. The abstract error estimate is derived.
Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach
Ulbrich, N.; Volden, T.
2017-01-01
A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
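The mechanics of such a weighted fit can be sketched generically. In the code below, the weight rule w = 1/n_loaded is our own stand-in for the count-based scheme described above, and all data are synthetic; the point is that scaling rows by √w_i turns the weighted problem into an ordinary least squares problem.

```python
import numpy as np

def weighted_lstsq(X, y, w):
    """Solve min_beta sum_i w_i * (y_i - X_i . beta)^2 by sqrt-weight row scaling."""
    sw = np.sqrt(w)
    beta, *_ = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)
    return beta

rng = np.random.default_rng(1)
X = rng.standard_normal((50, 3))
beta_true = np.array([2.0, -1.0, 0.5])
y = X @ beta_true                        # noiseless synthetic "calibration" data
n_loaded = rng.integers(1, 4, size=50)   # hypothetical count of loaded components
w = 1.0 / n_loaded                       # fewer loaded components -> larger weight
beta_hat = weighted_lstsq(X, y, w)
print(beta_hat)                          # recovers [2, -1, 0.5] on noiseless data
```

On noiseless data any positive weights recover the true coefficients; the weights only matter once the observations disagree, which is exactly the calibration situation the abstract addresses.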
Chen, S; Wu, Y; Luk, B L
1999-01-01
The paper presents a two-level learning method for radial basis function (RBF) networks. A regularized orthogonal least squares (ROLS) algorithm is employed at the lower level to construct RBF networks while the two key learning parameters, the regularization parameter and the RBF width, are optimized using a genetic algorithm (GA) at the upper level. Nonlinear time series modeling and prediction is used as an example to demonstrate the effectiveness of this hierarchical learning approach.
Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes
Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)
2003-01-01
The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
Image denoising using least squares wavelet support vector machines
Institute of Scientific and Technical Information of China (English)
Guoping Zeng; Ruizhen Zhao
2007-01-01
We propose a new method for image denoising combining the wavelet transform and support vector machines (SVMs). A new image filter operator based on least squares wavelet support vector machines (LSWSVMs) is presented. A noisy image can be denoised through this filter operator and the wavelet thresholding technique. Experimental results show that the proposed method is better than the existing SVM regression with the Gaussian radial basis function (RBF) and polynomial RBF. Meanwhile, it can achieve better performance than other traditional methods such as the average filter and median filter.
Spectral feature matching based on partial least squares
Institute of Scientific and Technical Information of China (English)
Weidong Yan; Zheng Tian; Lulu Pan; Mingtao Ding
2009-01-01
We investigate spectral approaches to the problem of point pattern matching, and present a spectral feature descriptor based on partial least squares (PLS). Given the keypoints of two images, we define position similarity matrices and extract spectral features from the matrices by PLS, which indicate the geometric distribution and inner relationships of the keypoints. The keypoint matching is then done by bipartite graph matching. Experiments on both synthetic and real-world data corroborate the robustness and invariance of the algorithm.
Positive Scattering Cross Sections using Constrained Least Squares
Energy Technology Data Exchange (ETDEWEB)
Dahl, J.A.; Ganapol, B.D.; Morel, J.E.
1999-09-27
A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.
Classification using least squares support vector machine for reliability analysis
Institute of Scientific and Technical Information of China (English)
Zhi-wei GUO; Guang-chen BAI
2009-01-01
In order to improve the efficiency of the support vector machine (SVM) for classification when dealing with a large number of samples, the least squares support vector machine (LSSVM) for classification is introduced into reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic programming problem to a group of linear equations. The numerical results indicate that the reliability method based on the LSSVM for classification has higher accuracy and requires less computational cost than the SVM method.
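The computational shortcut described above can be sketched as follows. This is a Suykens-style LS-SVM classifier reconstructed from the literature as an assumption (not the paper's code): the SVM quadratic program is replaced by a single (N+1)-dimensional linear system.

```python
import numpy as np

# Hedged LS-SVM classification sketch: solve
#   [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]
# with Omega_ij = y_i y_j K(x_i, x_j), then predict sign(sum_i alpha_i y_i K + b).
def rbf(Xa, Xb, sigma=1.0):
    d2 = np.sum((Xa[:, None, :] - Xb[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    N = len(y)
    Omega = (y[:, None] * y[None, :]) * rbf(X, X, sigma)
    A = np.zeros((N + 1, N + 1))
    A[0, 1:] = y
    A[1:, 0] = y
    A[1:, 1:] = Omega + np.eye(N) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(N))))
    return sol[0], sol[1:]                      # bias b, multipliers alpha

def lssvm_predict(X_train, y, b, alpha, X_new, sigma=1.0):
    return np.sign(rbf(X_new, X_train, sigma) @ (alpha * y) + b)

# Toy two-class problem with well-separated clusters.
rng = np.random.default_rng(3)
X = np.vstack([rng.normal(-2.0, 0.5, (20, 2)), rng.normal(2.0, 0.5, (20, 2))])
y = np.concatenate([-np.ones(20), np.ones(20)])
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, y, b, alpha, X)
print(np.mean(pred == y))                       # training accuracy
```

The single `np.linalg.solve` call is the "group of linear equations" the abstract refers to; the values of `gamma` and `sigma` are arbitrary illustrative choices.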
Handbook of Partial Least Squares Concepts, Methods and Applications
Vinzi, Vincenzo Esposito; Henseler, Jörg
2010-01-01
This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.
On the stability and accuracy of least squares approximations
Cohen, Albert; Leviatan, Dany
2011-01-01
We consider the problem of reconstructing an unknown function $f$ on a domain $X$ from samples of $f$ at $n$ randomly chosen points with respect to a given measure $\rho_X$. Given a sequence of linear spaces $(V_m)_{m>0}$ with $\dim(V_m)=m\leq n$, we study the least squares approximations from the spaces $V_m$. It is well known that such approximations can be inaccurate when $m$ is too close to $n$, even when the samples are noiseless. Our main result provides a criterion on $m$ that describes the needed amount of regularization to ensure that the least squares method is stable and that its accuracy, measured in $L^2(X,\rho_X)$, is comparable to the best approximation error of $f$ by elements from $V_m$. We illustrate this criterion for various approximation schemes, such as trigonometric polynomials, with $\rho_X$ being the uniform measure, and algebraic polynomials, with $\rho_X$ being either the uniform or Chebyshev measure. For such examples we also prove similar stability results using deterministic...
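The m-versus-n tradeoff can be seen numerically. The snippet below is our own illustration (algebraic polynomials, uniform random samples, one of the settings mentioned above): least squares in a polynomial space of dimension m is accurate for m well below n, but can degrade badly as m approaches n.

```python
import numpy as np

# Fit f on [-1, 1] from n = 100 uniform random samples, using polynomial
# spaces of dimension m = 10 (stable) and m = 95 (m close to n, unstable).
rng = np.random.default_rng(2)
f = lambda x: np.cos(np.pi * x)
n = 100
x = rng.uniform(-1.0, 1.0, n)
y = f(x)
xx = np.linspace(-1.0, 1.0, 1000)

errs = {}
for m in (10, 95):
    coef = np.polynomial.polynomial.polyfit(x, y, deg=m - 1)
    fit = np.polynomial.polynomial.polyval(xx, coef)
    errs[m] = np.max(np.abs(fit - f(xx)))
print(errs)  # errs[10] is small; errs[95] is typically far larger
```

This is exactly the regime the criterion in the abstract quantifies: how large m may be, relative to n, while keeping the least squares projection stable.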
Orthogonal least squares learning algorithm for radial basis function networks
Energy Technology Data Exchange (ETDEWEB)
Chen, S.; Cowan, C.F.N.; Grant, P.M. (Dept. of Electrical Engineering, Univ. of Edinburgh, Mayfield Road, Edinburgh EH9 3JL, Scotland (GB))
1991-03-01
The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular value decomposition to solve for the weights of the network. Such a procedure has several drawbacks and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The paper proposes an alternative learning procedure based on the orthogonal least squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. The algorithm has the property that each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer from numerical ill-conditioning problems. The orthogonal least squares learning strategy provides a simple and efficient means for fitting radial basis function networks, and this is illustrated using examples taken from two different signal processing applications.
Orthogonal least squares learning algorithm for radial basis function networks.
Chen, S; Cowan, C N; Grant, P M
1991-01-01
The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
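The center-selection idea can be sketched greedily. This is a simplified illustration of the selection criterion only: at each step we pick the candidate center whose regressor best explains the current residual, then refit by ordinary least squares, rather than maintaining the Gram-Schmidt orthogonalisation of the full OLS algorithm.

```python
import numpy as np

def gaussian(r, width=1.0):
    return np.exp(-(r / width) ** 2)

# Greedy forward selection of RBF centers (simplified, not the exact OLS
# algorithm): each step adds the candidate maximizing the increment to the
# explained energy of the desired output, then refits all weights.
def select_centers(X, y, n_centers, width=1.0):
    # Candidate regressors: one Gaussian bump per data point.
    Phi = gaussian(np.abs(X[:, None] - X[None, :]), width)
    chosen, resid = [], y.copy()
    for _ in range(n_centers):
        scores = (Phi.T @ resid) ** 2 / np.sum(Phi ** 2, axis=0)
        scores[chosen] = -np.inf            # never re-pick a chosen center
        chosen.append(int(np.argmax(scores)))
        w, *_ = np.linalg.lstsq(Phi[:, chosen], y, rcond=None)
        resid = y - Phi[:, chosen] @ w      # residual after refitting
    return chosen, resid

X = np.linspace(-3.0, 3.0, 60)
y = np.sin(X)
chosen, resid = select_centers(X, y, n_centers=8)
print(len(chosen), np.linalg.norm(resid))   # residual shrinks as centers are added
```

The advantage over random center selection is visible in the monotone drop of the residual energy; the orthogonalised version in the paper computes the same increments more efficiently and more stably.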
Making the most out of the least (squares migration)
Dutta, Gaurav
2014-08-05
Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution. If the migration velocity model is sufficiently accurate, then LSM can mitigate many of the above problems and lead to a more resolved migration image, sometimes with twice the spatial resolution. However, there are two problems with LSM: the cost can be an order of magnitude more than standard migration and the quality of the LSM image is no better than the standard image for velocity errors of 5% or more. We now show how to get the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.
Plane-wave least-squares reverse-time migration
Dai, Wei
2013-06-03
A plane-wave least-squares reverse-time migration (LSRTM) is formulated with a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of plane-wave prestack LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer stable convergence even when the migration velocity has bulk errors of up to 5%; (2) to significantly reduce computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. Unlike phase-shift encoding with random time shifts applied to each shot gather, plane-wave encoding can be effectively applied to data with a marine streamer geometry. (3) Plane-wave prestack LSRTM can provide higher-quality images than standard reverse-time migration. Numerical tests on the Marmousi2 model and a marine field data set are performed to illustrate the benefits of plane-wave LSRTM. Empirical results show that LSRTM in the plane-wave domain, compared to standard reverse-time migration, produces images efficiently with fewer artifacts and better spatial resolution. Moreover, the prestack image ensemble accommodates more unknowns, which makes it more robust than conventional least-squares migration in the presence of migration velocity errors. © 2013 Society of Exploration Geophysicists.
Making the most out of least-squares migration
Huang, Yunsong
2014-09-01
Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution. Given a sufficiently accurate migration velocity model, LSM can mitigate many of the above problems and can produce more resolved migration images, sometimes with more than twice the spatial resolution of standard migration. However, LSM faces two challenges: The computational cost can be an order of magnitude higher than that of standard migration, and the resulting image quality can fail to improve for migration velocity errors of about 5% or more. It is possible to obtain the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.
Least squares weighted twin support vector machines with local information
Institute of Scientific and Technical Information of China (English)
花小朋; 徐森; 李先锋
2015-01-01
A least squares version of the recently proposed weighted twin support vector machine with local information (WLTSVM) for binary classification is formulated. This formulation leads to an extremely simple and fast algorithm, called least squares weighted twin support vector machine with local information (LSWLTSVM), for generating binary classifiers based on two non-parallel hyperplanes. Instead of the two dual problems usually solved, two modified primal problems of WLTSVM are solved, reducing the computation to just two systems of linear equations, as opposed to the two quadratic programming problems plus two systems of linear equations required by WLTSVM. Moreover, two extra modifications are proposed in LSWLTSVM to improve the generalization capability. One is that a heat kernel function, rather than the simple definition in WLTSVM, is used to define the weight matrix of the adjacency graph, which ensures that the underlying similarity information between any pair of data points in the same class is fully reflected. The other is that the weight of each point in the contrary class is considered in constructing the equality constraints, which makes LSWLTSVM less sensitive to noise points than WLTSVM. Experimental results indicate that LSWLTSVM has classification accuracy comparable to that of WLTSVM but with remarkably less computational time.
Kernel-Based Least Squares Temporal Difference With Gradient Correction.
Song, Tianheng; Li, Dazi; Cao, Liulin; Hirasawa, Kotaro
2016-04-01
A least squares temporal difference with gradient correction (LS-TDC) algorithm and its kernel-based version, kernel-based LS-TDC (KLS-TDC), are proposed as policy evaluation algorithms for reinforcement learning (RL). LS-TDC is derived from the TDC algorithm. Because TDC is derived by minimizing the mean-square projected Bellman error, LS-TDC inherits good convergence performance. The least squares technique is used to avoid the step-size tuning of the original TDC and to enhance robustness. For KLS-TDC, since the kernel method is used, feature vectors can be selected automatically. Approximate linear dependence analysis is performed to realize kernel sparsification. In addition, a policy iteration strategy motivated by KLS-TDC is constructed to solve control learning problems. The convergence and parameter sensitivities of both LS-TDC and KLS-TDC are tested through on-policy learning, off-policy learning, and control learning problems. Experimental results, as compared with a series of corresponding RL algorithms, demonstrate that both LS-TDC and KLS-TDC have better approximation and convergence performance, higher efficiency of sample usage, smaller burden of parameter tuning, and less sensitivity to parameters.
Point pattern matching based on kernel partial least squares
Institute of Scientific and Technical Information of China (English)
Weidong Yan; Zheng Tian; Lulu Pan; Jinhuan Wen
2011-01-01
Point pattern matching is an essential step in many image processing applications. This letter investigates spectral approaches to point pattern matching and presents a spectral feature matching algorithm based on kernel partial least squares (KPLS). Given the feature points of two images, we define position similarity matrices for the reference and sensed images and extract pattern vectors from the matrices using KPLS, which indicate the geometric distribution and the inner relationships of the feature points. Feature point matching is then performed using the bipartite graph matching method. Experiments conducted on both synthetic and real-world data demonstrate the robustness and invariance of the algorithm.
Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.
2016-08-01
Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower-upper-middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks
Directory of Open Access Journals (Sweden)
Baoguo Yu
2016-01-01
Full Text Available In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), it is usually required to determine the parameters of the radio signal propagation model before estimating the distance between an anchor node and an unknown node from their communication RSSI value; finally, a localization algorithm is used to estimate the location of the unknown node. However, this localization method, though high in accuracy, has weaknesses such as a complex working procedure and poor system versatility. Concerning these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least squares criterion to estimate the parameters of the radio signal propagation model and thereby reduces the amount of computation in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the high localization accuracy requirement. In conclusion, the proposed method is of definite practical value.
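The parameter-estimation step can be illustrated with the common log-distance path-loss model RSSI(d) = A - 10·n·log10(d); the model form and variable names are assumptions for the sketch, not taken from the paper.

```python
import numpy as np

def fit_path_loss(distances, rssi):
    """Least-squares fit of the log-distance path-loss model
    RSSI(d) = A - 10 * n * log10(d); returns (A, n)."""
    G = np.column_stack([np.ones_like(distances), -10.0 * np.log10(distances)])
    (A, n), *_ = np.linalg.lstsq(G, rssi, rcond=None)
    return A, n

def rssi_to_distance(rssi, A, n):
    # invert the fitted model to estimate the anchor-to-node distance
    return 10.0 ** ((A - rssi) / (10.0 * n))
```

Because the model is linear in A and n once distance is expressed in log scale, a plain linear least-squares solve suffices; no iterative calibration procedure is needed.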
Chkifa, Abdellah
2015-04-08
Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
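A univariate toy version of the method (uniform sampling measure on [-1, 1], Legendre basis) might look like this; all concrete choices here are illustrative.

```python
import numpy as np

def poly_least_squares(f, degree, n_samples, rng):
    """Discrete least-squares polynomial approximation of f from random
    samples drawn uniformly on [-1, 1]; returns Legendre coefficients."""
    x = rng.uniform(-1.0, 1.0, n_samples)
    V = np.polynomial.legendre.legvander(x, degree)  # well-conditioned basis
    coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    return coef
```

The quasi-optimality conditions in the abstract relate n_samples to the dimension of the polynomial space (degree + 1 here): with too few samples the random Gram matrix becomes ill-conditioned and stability is lost.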
A Coupled Finite Difference and Moving Least Squares Simulation of Violent Breaking Wave Impact
DEFF Research Database (Denmark)
Lindberg, Ole; Bingham, Harry B.; Engsig-Karup, Allan Peter
2012-01-01
Two models for simulation of free surface flow are presented. The first model is a finite difference based potential flow model with non-linear kinematic and dynamic free surface boundary conditions. The second model is a weighted least squares based incompressible and inviscid flow model. A special feature of this model is a generalized finite point set method which is applied to the solution of the Poisson equation on an unstructured point distribution. The presented finite point set method is generalized to arbitrary order of approximation. The two models are applied to simulation of steep and overturning wave impacts on a vertical breakwater. Wave groups with five different wave heights are propagated from offshore to the vicinity of the breakwater, where the waves are steep, but still smooth and non-overturning. These waves are used as initial condition for the weighted least squares based…
Least-squares reverse time migration of multiples
Zhang, Dongliang
2013-12-06
The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free-surface becomes an extended virtual source where the downgoing free-surface multiples more fully illuminate the subsurface compared to the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required and the ringy time series for each source is automatically deconvolved. If the multiples can be perfectly separated from the primaries, numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if there is imperfect separation and the multiples are strongly interfering with the primaries then LSRTMM images show no significant advantage over the primary migration images. In some cases, they can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows higher signal-to-noise imaging of the salt bottom and top compared to standard RTM images. This is likely attributed to the fact that the target body is just below the sea bed so that the deep water multiples do not have strong interference with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean bottom seismic data shows that LSM of primaries and LSRTMM provides significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. This can lead to lower
Least squares algorithm for region-of-interest evaluation in emission tomography
Energy Technology Data Exchange (ETDEWEB)
Formiconi, A.R. (Sezione di Medicina Nucleare, Firenze (Italy). Dipt. di Fisiopatologia Clinica)
1993-03-01
In a simulation study, the performance of the least squares algorithm applied to region-of-interest evaluation was studied. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme and also provides estimates of statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction; these included filtered back projection and conjugate gradient least squares with the model of nonstationary geometrical response. For noise-free data and for regions of accurate shape, least squares estimates were unbiased within roundoff errors. For noisy data, estimates were still unbiased but precision worsened for regions smaller than the resolution: simulating typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra-high-resolution collimator and 7% with a low-energy all-purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of nonstationary geometrical response, bias of estimates decreased with increasing number of iterations, but precision worsened, reaching an estimated standard deviation of more than 25% for the same 1 cm region.
Least-Squares Seismic Inversion with Stochastic Conjugate Gradient Method
Institute of Scientific and Technical Information of China (English)
Wei Huang; Hua-Wei Zhou
2015-01-01
With the development of computational power, there has been an increased focus on data-fitting related seismic inversion techniques for high-fidelity seismic velocity models and images, such as full-waveform inversion and least squares migration. However, though more advanced than conventional methods, these data-fitting methods can be very expensive in terms of computational cost. Recently, various techniques to optimize these data-fitting seismic inversion problems have been implemented to cater for the industrial need for much improved efficiency. In this study, we propose a general stochastic conjugate gradient method for these data-fitting related inverse problems. We first prescribe the basic theory of our method and then give synthetic examples. Our numerical experiments illustrate the potential of this method for large-scale seismic inversion applications.
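The general idea, conjugate-gradient-style updates computed from random subsets of the data, can be caricatured on a linear least-squares problem; this sketch is an illustrative variant, not the authors' algorithm.

```python
import numpy as np

def stochastic_cg(A, b, n_iter, batch, rng):
    """Illustrative stochastic conjugate-gradient iteration for min ||Ax - b||^2.
    Each step draws a random subset of rows and takes a CG-style step:
    Polak-Ribiere direction with exact line search on the subset objective."""
    x = np.zeros(A.shape[1])
    g_old, d = None, None
    for _ in range(n_iter):
        rows = rng.choice(len(b), size=batch, replace=False)
        As, bs = A[rows], b[rows]
        g = As.T @ (As @ x - bs)                  # gradient on the subset
        if g_old is None or g_old @ g_old < 1e-30:
            d = -g
        else:
            beta = max(0.0, g @ (g - g_old) / (g_old @ g_old))
            d = -g + beta * d                     # conjugate-style direction
        Ad = As @ d
        alpha = -(g @ d) / (Ad @ Ad + 1e-30)      # exact line search on subset
        x = x + alpha * d
        g_old = g
    return x
```

Each iteration touches only `batch` rows, which is the source of the cost savings the abstract targets when the full data-fitting objective is expensive to evaluate.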
Local validation of EU-DEM using Least Squares Collocation
Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios
2016-04-01
In the present study we evaluate the European Digital Elevation Model (EU-DEM) in a limited area, covering a few kilometers. We compare EU-DEM derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for predicting orthometric heights, applying Least Squares Collocation to the residuals remaining from the first step (after application of the fitted surface). Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and official heights which is better than 1.4 meters.
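The collocation step on the residuals can be sketched with an assumed Gaussian covariance model; the covariance family and its parameters are hypothetical, not those of the study.

```python
import numpy as np

def lsc_predict(obs_xy, obs_val, new_xy, c0, corr_len, noise_var):
    """Least Squares Collocation prediction of residual heights at new points,
    with an assumed Gaussian covariance C(d) = c0 * exp(-(d / corr_len)^2)."""
    def cov(P, Q):
        d = np.linalg.norm(P[:, None, :] - Q[None, :, :], axis=-1)
        return c0 * np.exp(-(d / corr_len) ** 2)
    # signal covariance among observations, plus uncorrelated noise
    Ctt = cov(obs_xy, obs_xy) + noise_var * np.eye(len(obs_xy))
    Cst = cov(new_xy, obs_xy)        # cross-covariance new points vs observations
    return Cst @ np.linalg.solve(Ctt, obs_val)
```

The prediction is the classical collocation formula s_hat = C_st (C_tt + D)^{-1} l; in practice c0 and corr_len would be estimated from an empirical covariance function of the residuals.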
A stochastic total least squares solution of adaptive filtering problem.
Javed, Shazia; Ahmad, Noor Atinah
2014-01-01
An efficient and computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs.
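For reference, the batch total least squares solution that a recursive TLS filter tracks can be computed directly from an SVD; this is the standard textbook construction, not code from the paper.

```python
import numpy as np

def total_least_squares(A, b):
    """Batch total least squares via SVD: both A and b are treated as noisy.
    The solution comes from the right singular vector of [A | b] associated
    with the smallest singular value."""
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # smallest-singular-value direction
    return -v[:-1] / v[-1]           # normalize so the b-component is -1
```

An adaptive algorithm such as TLMS avoids recomputing this SVD at every sample by updating an estimate of the minor eigenvector recursively.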
DIRECT ITERATIVE METHODS FOR RANK DEFICIENT GENERALIZED LEAST SQUARES PROBLEMS
Institute of Scientific and Technical Information of China (English)
Jin-yun Yuan; Xiao-qing Jin
2000-01-01
The generalized least squares (LS) problem min_x (Ax - b)^T W^{-1} (Ax - b) appears in many application areas, where W is an m × m symmetric positive definite matrix and A is an m × n matrix with m ≥ n. Since the problem has many solutions in the rank deficient case, some special preconditioned techniques are adapted to obtain the minimum 2-norm solution. A block SOR method and the preconditioned conjugate gradient (PCG) method are proposed here. Convergence and the optimal relaxation parameter for the block SOR method are studied. An error bound for the PCG method is given. A comparison of these methods is investigated, and some remarks on the implementation of the methods and the operation cost are given as well.
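For concreteness, here is a direct (non-iterative) computation of the minimum 2-norm solution via whitening with a Cholesky factor of W; the iterative block SOR and PCG schemes of the paper target the same solution.

```python
import numpy as np

def min_norm_gls(A, b, W):
    """Minimum 2-norm solution of min_x (Ax - b)^T W^{-1} (Ax - b).
    Whitening with W = L L^T reduces the problem to an ordinary LS problem,
    and the pseudoinverse then picks the minimum-norm solution even when
    A is rank deficient."""
    L = np.linalg.cholesky(W)            # W = L L^T
    Aw = np.linalg.solve(L, A)           # whitened system matrix
    bw = np.linalg.solve(L, b)
    return np.linalg.pinv(Aw) @ bw       # minimum 2-norm LS solution
```

This direct route costs a dense factorization and SVD, which motivates the iterative methods of the paper for large sparse systems.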
Regularized plane-wave least-squares Kirchhoff migration
Wang, Xin
2013-09-22
A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common for all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to velocity errors; 2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and 3) LSM achieves both computational and I/O savings by plane-wave encoding, compared to shot-domain LSM, for the models tested.
Partial least squares regression in the social sciences
Directory of Open Access Journals (Sweden)
Megan L. Sawatsky
2015-06-01
Full Text Available Partial least squares regression (PLSR) is a statistical modeling technique that extracts latent factors to explain both predictor and response variation. PLSR is particularly useful as a data exploration technique because it is highly flexible (e.g., there are few assumptions and variables can be highly collinear). While gaining importance across a diverse range of fields, its application in the social sciences has been limited. Here, we provide a brief introduction to PLSR, directed towards a novice audience with limited exposure to the technique; demonstrate its utility as an alternative to more classic approaches (multiple linear regression, principal component regression); and apply the technique to a hypothetical dataset using JMP statistical software (with references to SAS software).
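A minimal NIPALS-style PLS1 sketch (single response) shows the latent-factor extraction the abstract refers to; this is a generic textbook formulation, not the JMP/SAS implementation.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """Minimal NIPALS-style PLS1 (single response). Returns regression
    coefficients B plus centering terms, so predictions are
    (Xnew - x_mean) @ B + y_mean."""
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xk = X - x_mean
    yk = (y - y_mean).astype(float)
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xk.T @ yk                  # weight vector for this latent factor
        w /= np.linalg.norm(w)
        t = Xk @ w                     # scores
        p = Xk.T @ t / (t @ t)         # X loadings
        q = (yk @ t) / (t @ t)         # y loading
        Xk = Xk - np.outer(t, p)       # deflate X and y
        yk = yk - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    B = W @ np.linalg.solve(P.T @ W, Q)   # coefficients in the original X space
    return B, x_mean, y_mean
```

Because each factor is extracted from deflated data, collinear predictors pose no problem: with fewer components than predictors the method acts as a regularized regression, which is the flexibility the abstract highlights.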
Least-squares reverse time migration with Radon preconditioning
Dutta, Gaurav
2016-09-06
We present a least-squares reverse time migration (LSRTM) method using Radon preconditioning to regularize noisy or severely undersampled data. A high-resolution local Radon transform is used as a change of basis for the reflectivity, and sparseness constraints are applied to the inverted reflectivity in the transform domain. This reflects the prior that for each location of the subsurface the number of geological dips is limited. The forward and adjoint mappings of the reflectivity to the local Radon domain and back are done through 3D Fourier-based discrete Radon transform operators. The sparseness is enforced by applying weights to the Radon domain components which either vary with the amplitudes of the local dips or are thresholded at given quantiles. Numerical tests on synthetic and field data validate the effectiveness of the proposed approach in producing images with improved SNR and reduced aliasing artifacts when compared with standard RTM or LSRTM.
Cognitive assessment in mathematics with the least squares distance method.
Ma, Lin; Çetin, Emre; Green, Kathy E
2012-01-01
This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes. Students from the two eastern regions significantly underperformed on most attributes.
A Galerkin least squares approach to viscoelastic flow.
Energy Technology Data Exchange (ETDEWEB)
Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-10-01
A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are sought to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.
Least squares deconvolution of the stellar intensity and polarization spectra
Kochukhov, O; Piskunov, N
2010-01-01
Least squares deconvolution (LSD) is a powerful method of extracting high-precision average line profiles from the stellar intensity and polarization spectra. Despite its common usage, the LSD method is poorly documented and has never been tested using realistic synthetic spectra. In this study we revisit the key assumptions of the LSD technique, clarify its numerical implementation, discuss possible improvements and give recommendations on how to make LSD results understandable and reproducible. We also address the problem of interpretation of the moments and shapes of the LSD profiles in terms of physical parameters. We have developed an improved, multiprofile version of LSD and have extended the deconvolution procedure to linear polarization analysis, taking into account anomalous Zeeman splitting of spectral lines. This code is applied to theoretical Stokes parameter spectra. We test various methods of interpreting the mean profiles, investigating how coarse approximations of the multiline technique trans...
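The core LSD model, the observed spectrum as a superposition of one common profile shifted to each line and scaled by its weight, can be sketched as a linear least-squares problem; the line list, weights, and interpolation scheme here are illustrative assumptions.

```python
import numpy as np

def lsd_profile(vel_grid, wave, spectrum, line_pos, line_weight):
    """Least-squares deconvolution sketch: model the continuum-subtracted
    spectrum as a common profile Z, shifted to each line position and scaled
    by its weight, and recover Z by linear least squares."""
    c = 299792.458                        # speed of light, km/s
    m = len(vel_grid)
    dv = vel_grid[1] - vel_grid[0]
    M = np.zeros((len(wave), m))          # line-pattern (mask) matrix
    for w0, s in zip(line_pos, line_weight):
        v = c * (wave - w0) / w0          # pixel velocity relative to this line
        idx = (v - vel_grid[0]) / dv
        j = np.floor(idx).astype(int)
        f = idx - j
        ok = (j >= 0) & (j < m - 1)
        rows = np.where(ok)[0]
        M[rows, j[ok]] += s * (1.0 - f[ok])   # linear interpolation weights
        M[rows, j[ok] + 1] += s * f[ok]
    Z, *_ = np.linalg.lstsq(M, spectrum, rcond=None)
    return Z
```

A production implementation would additionally weight pixels by their inverse variances, which turns the normal equations into the familiar (M^T S^2 M)^{-1} M^T S^2 Y form.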
RNA structural motif recognition based on least-squares distance.
Shen, Ying; Wong, Hau-San; Zhang, Shaohong; Zhang, Lin
2013-09-01
RNA structural motifs are recurrent structural elements occurring in RNA molecules. RNA structural motif recognition aims to find RNA substructures that are similar to a query motif, and it is important for RNA structure analysis and RNA function prediction. In view of this, we propose a new method known as RNA Structural Motif Recognition based on Least-Squares distance (LS-RSMR) to effectively recognize RNA structural motifs. We compile a test set consisting of five types of RNA structural motifs occurring in Escherichia coli ribosomal RNA. Experiments are conducted for recognizing these five types of motifs. The experimental results fully reveal the superiority of the proposed LS-RSMR compared with four other state-of-the-art methods.
Risk and Management Control: A Partial Least Square Modelling Approach
DEFF Research Database (Denmark)
Nielsen, Steen; Pontoppidan, Iens Christian
…and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct a valid feed-forward but also predictions for decision making including risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data… and an external attitude dimension. The results have important implications both for management control research and for the design of management control systems, in the way accountants consider the element of risk in their different tasks, both operational and strategic. Specifically, it seems that different risk… Risk and economic theory go many years back (e.g., to Keynes & Knight 1921), and risk/uncertainty is one of the explanations for the existence of the firm (Coase, 1937). The financial crisis of the past years has re-accentuated risk and the need for coherence…
Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae
2017-06-01
The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Taking into account that this error is a strongly nonlinear function of the environmental temperature and of the acceleration exciting the sensor, it cannot be corrected off-line and requires the presence of an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer for different values of acceleration applied to its axes of sensitivity and for different operating temperatures. A final analysis of the error level after compensation highlights the best variant for the matrix in the error model. The paper shows the results of the experimental testing of the accelerometer on all three sensitivity axes, the identification of the error models on each axis by using the least squares method, and the validation of the obtained models with experimental values. For all three detection channels, the maximum absolute acceleration error due to environmental temperature variation was reduced by almost two orders of magnitude.
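The off-line identification and compensation idea can be sketched with a hypothetical polynomial error model in temperature and acceleration; the model form, degrees, and coefficient names are assumptions, not the paper's matrices.

```python
import numpy as np

def _model_columns(temps, accels, deg_t, deg_a):
    # monomials T^i * a^j spanning the assumed error model
    return np.column_stack([temps ** i * accels ** j
                            for i in range(deg_t + 1)
                            for j in range(deg_a + 1)])

def fit_thermal_error_model(temps, accels, errors, deg_t=2, deg_a=1):
    """Least-squares identification of a per-axis error model
    err(T, a) = sum_{i,j} c_ij * T^i * a^j from calibration data."""
    G = _model_columns(temps, accels, deg_t, deg_a)
    c, *_ = np.linalg.lstsq(G, errors, rcond=None)
    return c

def compensate(raw, temps, accels, c, deg_t=2, deg_a=1):
    # subtract the predicted temperature-induced error from raw readings
    return raw - _model_columns(temps, accels, deg_t, deg_a) @ c
```

In the paper the calibration data come from testing each sensitivity axis over a range of applied accelerations and temperatures; here synthetic data stand in for that test bench.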
Institute of Scientific and Technical Information of China (English)
刘国海; 张懿; 魏海峰; 赵文祥
2012-01-01
Considering the deficiencies of the neural network inverse control method, for a class of multi-input multi-output (MIMO) nonlinear systems with unknown models, where soft-sensing functions for immeasurable states are available, we propose a new identification and control strategy based on the generalized inverse control of least squares support vector machines (LSSVM). The generalized inverse converts the controlled nonlinear system into a pseudo-linear system with the expected pole placement. In place of a neural network, LSSVM is employed to fit the static nonlinear mapping of the generalized inverse system. The identification of state variables is combined with the identification of the LSSVM inverse model, and soft-sensing is implemented simultaneously through LSSVM training and fitting. Simulations on a two-motor variable-frequency speed-regulating system verify that the proposed control strategy is feasible and efficient.
SUPERCONVERGENCE OF LEAST-SQUARES MIXED FINITE ELEMENT FOR SECOND-ORDER ELLIPTIC PROBLEMS
Institute of Scientific and Technical Information of China (English)
Yan-ping Chen; De-hao Yu
2003-01-01
In this paper the least-squares mixed finite element method is considered for solving second-order elliptic problems in two-dimensional domains. The primary solution u and the flux σ are approximated using finite element spaces consisting of piecewise polynomials of degree k and r, respectively. Based on interpolation operators and an auxiliary projection, superconvergent H¹-error estimates of both the primary solution approximation u_h and the flux approximation σ_h are obtained under the standard quasi-uniform assumption on the finite element partition. The superconvergence indicates an accuracy of O(h^{r+2}) for the least-squares mixed finite element approximation if Raviart-Thomas or Brezzi-Douglas-Fortin-Marini elements of order r are employed, compared with the optimal error estimate of O(h^{r+1}).
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties) characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performance of the proposed LW-PLS-DA approach has proved to be comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks).
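A minimal sketch of the locally weighted idea, using a single PLS component computed NIPALS-style on distance-weighted neighbours. This is an illustration only, not the authors' algorithm: the two-class ±1 dummy coding, the inverse-distance weight function, and the synthetic data are all assumptions.

```python
import numpy as np

def lw_plsda_predict(X, y, x_new, k=20, p=1.0):
    """Classify x_new (two classes, y in {0, 1}) with a local one-component
    PLS-DA model built from the k nearest training samples, weighted so that
    distant calibration objects count less (weight ~ 1/distance**p)."""
    d = np.linalg.norm(X - x_new, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] ** p + 1e-8)
    Xl = X[idx]
    yl = np.where(y[idx] == 1, 1.0, -1.0)      # dummy coding of the classes
    mx = np.average(Xl, axis=0, weights=w)     # weighted centering
    my = np.average(yl, weights=w)
    sw = np.sqrt(w)
    Xc = (Xl - mx) * sw[:, None]
    yc = (yl - my) * sw
    wv = Xc.T @ yc                             # first PLS weight vector
    nrm = np.linalg.norm(wv)
    if nrm < 1e-12:                            # all neighbours in one class
        return int(my > 0)
    wv /= nrm
    t = Xc @ wv                                # scores
    b = (t @ yc) / (t @ t)                     # inner regression coefficient
    y_hat = my + b * ((x_new - mx) @ wv)
    return int(y_hat > 0)

rng = np.random.default_rng(1)
X = np.vstack([rng.normal(0.0, 1.0, (50, 5)), rng.normal(3.0, 1.0, (50, 5))])
y = np.array([0] * 50 + [1] * 50)
print(lw_plsda_predict(X, y, np.full(5, 3.0)), lw_plsda_predict(X, y, np.zeros(5)))
```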
Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions
Energy Technology Data Exchange (ETDEWEB)
Jerome Blair
2008-05-15
An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
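The core operation, a least-squares cubic spline fit to noisy samples, can be sketched with SciPy. Here the interior knots are simply uniform, whereas the paper selects them adaptively; the test signal and noise level are made up.

```python
import numpy as np
from scipy.interpolate import LSQUnivariateSpline

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 400)
signal = np.sin(2 * np.pi * 3 * t)               # unknown true signal
noisy = signal + rng.normal(0.0, 0.1, t.size)    # oscilloscope-like samples

# Least-squares cubic spline fit with a fixed set of interior knots;
# an adaptive method would place more knots where the signal varies fast.
knots = np.linspace(0.0, 1.0, 12)[1:-1]
spl = LSQUnivariateSpline(t, noisy, knots, k=3)
smoothed = spl(t)
rms_err = np.sqrt(np.mean((smoothed - signal) ** 2))
```

Because the spline has far fewer degrees of freedom than there are samples, the fit averages out most of the noise.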
Solving the Axisymmetric Inverse Heat Conduction Problem by a Wavelet Dual Least Squares Method
Directory of Open Access Journals (Sweden)
Fu Chu-Li
2009-01-01
Full Text Available We consider an axisymmetric inverse heat conduction problem of determining the surface temperature from a fixed location inside a cylinder. This problem is ill-posed; the solution (if it exists) does not depend continuously on the data. A special projection method, the dual least squares method generated by the family of Shannon wavelets, is applied to formulate a regularized solution. Meanwhile, an order-optimal error estimate between the approximate solution and the exact solution is proved.
Seismic time-lapse imaging using Interferometric least-squares migration
Sinha, Mrinal
2016-09-06
One of the problems with 4D surveys is that the environmental conditions change over time so that the experiment is insufficiently repeatable. To mitigate this problem, we propose the use of interferometric least-squares migration (ILSM) to estimate the migration image for the baseline and monitor surveys. Here, a known reflector is used as the reference reflector for ILSM. Results with synthetic and field data show that ILSM can eliminate artifacts caused by non-repeatability in time-lapse surveys.
Directory of Open Access Journals (Sweden)
Maria Eugênia Zerlotti Mercadante
2003-10-01
Full Text Available Records of weight at selection of Nelore cattle from the experimental herds of the Sertãozinho Experimental Station of Animal Science (SP, Brazil) were used to study the effect of incorporating additional information on the estimation of annual genetic change, using least squares and mixed model methodologies. Genetic change in the three herds (control NeC and selected NeS and NeT), or only in the selected ones (NeS and NeT), was estimated, for females only, by least squares, by a mixed model considering only female performance and incorporating the relationship matrix, and by a mixed model incorporating all available information (records of males and females and the relationship matrix). Incorporating all available information in the mixed model to estimate genetic change over the years yielded a smoother curve of annual means and, consequently, an estimate of the annual genetic trend with a smaller standard error. The least squares method produced highly variable estimates of genetic change over the years, evidencing a lower capacity to partition genetic change from environmental change. The mixed model incorporating only part of the performance information yielded very low (biased) estimates of annual genetic trend, evidencing confounding of environmental change with the portion of genetic change left unestimated due to the missing information.
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
Directory of Open Access Journals (Sweden)
Greenwood L.R.
2016-01-01
Full Text Available The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
Greenwood, L. R.; Johnson, C. D.
2016-02-01
The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator
Least-squares joint imaging of multiples and primaries
Brown, Morgan Parker
Current exploration geophysics practice still regards multiple reflections as noise, although multiples often contain considerable information about the earth's angle-dependent reflectivity that primary reflections do not. To exploit this information, multiples and primaries must be combined in a domain in which they are comparable, such as the prestack image domain. However, unless the multiples and primaries have been pre-separated from the data, crosstalk leakage between multiple and primary images will significantly degrade any gains in the signal fidelity, geologic interpretability, and signal-to-noise ratio of the combined image. I present a global linear least-squares algorithm, denoted LSJIMP (Least-squares Joint Imaging of Multiples and Primaries), which separates multiples from primaries while simultaneously combining their information. The novelty of the method lies in the three model regularization operators which discriminate between crosstalk and signal and extend information between multiple and primary images. The LSJIMP method exploits the hitherto ignored redundancy between primaries and multiples in the data. While many different types of multiple imaging operators are well suited for use with the LSJIMP method, in this thesis I utilize an efficient prestack time imaging strategy for multiples which sacrifices accuracy in a complex earth for computational speed and convenience. I derive a variant of the normal moveout (NMO) equation for multiples, called HEMNO, which can image "split" pegleg multiples which arise from a moderately heterogeneous earth. I also derive a series of prestack amplitude compensation operators which, when combined with HEMNO, transform pegleg multiples into events that are directly comparable, both kinematically and in terms of amplitudes, to the primary reflection. I test my implementation of LSJIMP on two datasets from the deepwater Gulf of Mexico. The first, a 2-D line in the Mississippi Canyon region, exhibits a variety of
Wavelet Neural Networks for Adaptive Equalization by Using the Orthogonal Least Square Algorithm
Institute of Scientific and Technical Information of China (English)
JIANG Minghu(江铭虎); DENG Beixing(邓北星); Georges Gielen
2004-01-01
Equalizers are widely used in digital communication systems with corrupted or time-varying channels. To overcome the performance decline on noisy and nonlinear channels, many kinds of neural network models have been used for nonlinear equalization. In this paper, we propose a new nonlinear channel equalizer structured on wavelet neural networks. The orthogonal least squares algorithm is applied to update the weighting matrix of the wavelet networks to form a more compact wavelet basis unit, thus obtaining good equalization performance. The experimental results show that the proposed wavelet-network equalizer can significantly improve the neural modeling accuracy and outperforms conventional neural network equalization in signal-to-noise ratio and channel non-linearity.
BER analysis of regularized least squares for BPSK recovery
Ben Atitallah, Ismail
2017-06-20
This paper investigates the problem of recovering an n-dimensional BPSK signal x
HASM-AD Algorithm Based on the Sequential Least Squares
Institute of Scientific and Technical Information of China (English)
WANG Shihai; YUE Tianxiang
2010-01-01
The HASM (high accuracy surface modeling) technique is based on the fundamental theory of surfaces, which has been proved to improve the interpolation accuracy in surface fitting. However, the integral iterative solution in previous studies resulted in high temporal complexity in computation and huge memory usage, so it was difficult to put the technique into application, especially for large-scale datasets. In this study, an innovative model (HASM-AD) is developed according to sequential least squares on the basis of data adjustment theory. Sequential division is adopted in the technique, so that the linear equations can be divided into groups to be processed in sequence, greatly reducing the temporal complexity of the computation. The experiment indicates that the HASM-AD technique surpasses traditional spatial interpolation methods in accuracy. The cross-validation result proves the same conclusion for the spatial interpolation of the soil pH property with data sampled in Jiangxi province. Moreover, the study demonstrates that the HASM-AD technique significantly reduces the computational complexity and lessens memory usage.
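The sequential idea, processing groups of equations one at a time while accumulating the normal equations, can be sketched as follows. This is a generic sketch, not the HASM-AD implementation; the block sizes and test problem are invented.

```python
import numpy as np

def sequential_lstsq(blocks):
    """Accumulate normal equations block by block, so that only one group
    of equations is held in memory at a time, then solve once at the end."""
    AtA, Atb = None, None
    for A, b in blocks:
        if AtA is None:
            AtA, Atb = A.T @ A, A.T @ b
        else:
            AtA += A.T @ A          # normal equations add across groups
            Atb += A.T @ b
    return np.linalg.solve(AtA, Atb)

rng = np.random.default_rng(0)
x_true = rng.normal(size=5)
blocks = []
for _ in range(10):                 # ten groups of 100 equations each
    A = rng.normal(size=(100, 5))
    blocks.append((A, A @ x_true + rng.normal(0.0, 0.01, 100)))
x_hat = sequential_lstsq(blocks)
```

Because the normal equations are additive, the sequential result is mathematically identical to solving all 1000 equations at once.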
3D plane-wave least-squares Kirchhoff migration
Wang, Xin
2014-08-05
A three-dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitation of current 3D marine acquisition geometries, a cylindrical-wave encoding is adopted for the narrow-azimuth streamer data. To account for the mispositioning of reflectors due to errors in the velocity model, a regularized LSM is devised so that each plane-wave or cylindrical-wave gather gives rise to an individual migration image, and a regularization term is included to encourage similarity between the migration images of similar encoding schemes. Both synthetic and field results show that: 1) plane-wave or cylindrical-wave encoding LSM can achieve both computational and I/O savings compared to shot-domain LSM, although plane-wave LSM is still about 5 times more expensive than plane-wave migration; 2) the regularized LSM is more robust than LSM with one reflectivity model common to all the plane-wave or cylindrical-wave gathers.
Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors
Energy Technology Data Exchange (ETDEWEB)
Gavel, D
2002-10-08
A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few ''high order'' modes, which can then be projected out of the reconstructor matrix. This technique can be adapted so as to remove any specific modes that are undesirable in the final reconstructor (such as piston, tip, and tilt for example) as well as suppress (the more nebulously defined) localized waffle behavior.
Efficient sparse kernel feature extraction based on partial least squares.
Dhanjal, Charanpal; Gunn, Steve R; Shawe-Taylor, John
2009-08-01
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on Partial Least Squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called Sparse Maximal Alignment (SMA) and Sparse Maximal Covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a Support Vector Machine.
Prediction of solubility parameters using partial least square regression.
Tantishaiyakul, Vimon; Worakul, Nimit; Wongpoowarak, Wibul
2006-11-15
The total solubility parameter (delta) values were effectively predicted by using computed molecular descriptors and multivariate partial least squares (PLS) statistics. The molecular descriptors in the derived models included heat of formation, dipole moment, molar refractivity, solvent-accessible surface area (SA), surface-bounded molecular volume (SV), unsaturated index (Ui), and hydrophilic index (Hy). The values of these descriptors were computed by the use of HyperChem 7.5, QSPR Properties module in HyperChem 7.5, and Dragon Web version. The other two descriptors, hydrogen bonding donor (HD), and hydrogen bond-forming ability (HB) were also included in the models. The final reduced model of the whole data set had R(2) of 0.853, Q(2) of 0.813, root mean squared error from the cross-validation of the training set (RMSEcv(tr)) of 2.096 and RMSE of calibration (RMSE(tr)) of 1.857. No outlier was observed from this data set of 51 diverse compounds. Additionally, the predictive power of the developed model was comparable to the well recognized systems of Hansen, van Krevelen and Hoftyzer, and Hoy.
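The multivariate PLS step can be illustrated with a minimal NIPALS PLS1 implementation. This is a generic sketch with synthetic stand-ins for the molecular descriptors, not the authors' HyperChem/Dragon workflow, and the data and component count are made up.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Minimal NIPALS PLS1: returns coefficients b such that
    (X - X.mean(0)) @ b + y.mean() approximates y."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)
        t = Xc @ w                    # scores for this component
        p = Xc.T @ t / (t @ t)        # X loadings
        qk = (yc @ t) / (t @ t)       # y loading
        Xc = Xc - np.outer(t, p)      # deflate X and y
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
X = rng.normal(size=(60, 8))          # stand-ins for computed descriptors
y = X @ np.array([1.0, -2.0, 0, 0, 0.5, 0, 0, 0]) + rng.normal(0, 0.01, 60)
b = pls1_fit(X, y, 5)
pred = (X - X.mean(axis=0)) @ b + y.mean()
```

With as many components as descriptors PLS1 reduces to ordinary least squares; using fewer components, as here, is what gives PLS its robustness to collinear descriptors.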
River flow time series using least squares support vector machines
Samsudin, R.; Saad, P.; Shabri, A.
2011-06-01
This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables which work as the time series forecasting for the LSSVM model. Monthly river flow data from two stations, the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia were taken into consideration in the development of this hybrid model. The performance of this model was compared with the conventional artificial neural network (ANN) models, Autoregressive Integrated Moving Average (ARIMA), GMDH and LSSVM models using the long term observations of monthly river flow discharge. The root mean square error (RMSE) and coefficient of correlation (R) are used to evaluate the models' performances. In both cases, the new hybrid model has been found to provide more accurate flow forecasts compared to the other models. The results of the comparison indicate that the new hybrid model is a useful tool and a promising new method for river flow forecasting.
Least-squares fit of a linear combination of functions
Directory of Open Access Journals (Sweden)
Niraj Upadhyay
2013-12-01
Full Text Available We propose that given a data set $S=\{(x_i,y_i)\mid i=1,2,\dots,n\}$ and real-valued functions $\{f_\alpha(x)\mid\alpha=1,2,\dots,m\}$, the least-squares fit vector $A=\{a_\alpha\}$ for $y=\sum_\alpha a_{\alpha}f_\alpha(x)$ is $A = (F^TF)^{-1}F^TY$ where $[F_{i\alpha}]=[f_\alpha(x_i)]$. We test this formalism by deriving the algebraic expressions of the regression coefficients in $y = ax + b$ and in $y = ax^2 + bx + c$. As a practical application, we successfully arrive at the coefficients in the semi-empirical mass formula of nuclear physics. The formalism is {\it generic}: it has the potential of being applicable to any {\it type} of $\{x_i\}$ as long as there exist appropriate $\{f_\alpha\}$. The method can be exploited with a CAS or an object-oriented language and is excellently suited for parallel processing.
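The closed-form fit $A = (F^TF)^{-1}F^TY$ translates directly into code. The example below reproduces the $y = ax + b$ case with basis functions $f_1(x)=x$, $f_2(x)=1$, solving the normal equations rather than forming the inverse explicitly, for numerical stability; the sample data are invented.

```python
import numpy as np

# Least-squares fit of y = a*x + b via A = (F^T F)^{-1} F^T Y
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0 + np.array([0.1, -0.1, 0.05, -0.05, 0.0])  # noisy line

F = np.column_stack([x, np.ones_like(x)])   # F[i, alpha] = f_alpha(x_i)
A = np.linalg.solve(F.T @ F, F.T @ y)       # [a, b]
print(A)
```

Adding a column `x**2` to `F` gives the quadratic case $y = ax^2 + bx + c$ with no other change.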
A pruning method for the recursive least squared algorithm.
Leung, C S; Wong, K W; Sum, P F; Chan, L W
2001-03-01
The recursive least squares (RLS) algorithm is an effective online training method for neural networks. However, its connection with weight decay and pruning has not been well studied. This paper elucidates how generalization ability can be improved by selecting an appropriate initial value of the error covariance matrix in the RLS algorithm. Moreover, how the pruning of neural networks can benefit from using the final value of the error covariance matrix is also investigated. Our study found that the RLS algorithm is implicitly a weight decay method, where the weight decay effect is controlled by the initial value of the error covariance matrix, and that the inverse of the error covariance matrix is approximately equal to the Hessian matrix of the network being trained. We propose that neural networks first be trained by the RLS algorithm and then some unimportant weights be removed based on the approximate Hessian matrix. Simulation results show that our approach is an effective training and pruning method for neural networks.
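A minimal RLS update of the weights and of the error covariance matrix P, whose initial value the paper links to weight decay, might look like this. This is a textbook sketch for a linear model, not the authors' network training code; the forgetting factor, initial P, and test data are assumptions.

```python
import numpy as np

def rls_step(w, P, x, d, lam=1.0):
    """One recursive least squares update of weights w and error
    covariance P, for input x and desired output d (forgetting factor lam)."""
    Px = P @ x
    k = Px / (lam + x @ Px)          # gain vector
    e = d - w @ x                    # a priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam  # covariance update; P^-1 ~ Hessian
    return w, P

rng = np.random.default_rng(0)
w_true = np.array([0.5, -1.0, 2.0])
w = np.zeros(3)
P = np.eye(3) * 100.0                # large initial P ~ weak implicit decay
for _ in range(500):
    x = rng.normal(size=3)
    d = w_true @ x + rng.normal(0.0, 0.01)
    w, P = rls_step(w, P, x, d)
```

After training, the final `np.linalg.inv(P)` approximates the Hessian, which is what the proposed pruning criterion inspects.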
Non-parametric and least squares Langley plot methods
Directory of Open Access Journals (Sweden)
P. W. Kiedron
2015-04-01
Full Text Available Langley plots are used to calibrate sun radiometers, primarily for measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·exp(−τ·m), where a plot of ln(V) (voltage) vs. m (air mass) yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0) values are smoothed and interpolated with median and mean moving window filters.
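The basic Langley calibration, a straight-line least-squares fit of ln(V) against air mass m, can be sketched as follows. The data here are synthetic, with invented τ and V0; the paper's non-parametric variants and filtering are not shown.

```python
import numpy as np

# Beer's law: V = V0 * exp(-tau * m), so ln(V) is linear in air mass m
# with slope -tau and intercept ln(V0).
tau_true, V0_true = 0.12, 1.8
m = np.linspace(1.0, 5.0, 40)                        # air mass over a morning
rng = np.random.default_rng(0)
V = V0_true * np.exp(-tau_true * m) * np.exp(rng.normal(0.0, 0.002, m.size))

slope, intercept = np.polyfit(m, np.log(V), 1)       # least-squares line
V0_est, tau_est = np.exp(intercept), -slope
```

Once ln(V0) is known, any later measurement of V at known m yields τ directly from τ = (ln(V0) − ln(V)) / m.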
Prediction of Survival Time with the Partial Least Squares Method
Directory of Open Access Journals (Sweden)
PANDE PUTU BUDI KUSUMA
2013-03-01
Full Text Available Coronary heart disease is caused by an accumulation of fat on the inside walls of the blood vessels of the heart (the coronary arteries). The factors leading to the occurrence of coronary heart disease are dominated by the unhealthy lifestyles of patients, and survival times differ from patient to patient. The objective of this research is to predict the survival time of patients with coronary heart disease, taking into account explanatory variables analyzed by the partial least squares (PLS) method. The PLS method is used for multiple regression analysis when specific problems such as multicollinearity and microarray data arise. The purpose of the PLS method is to relate the explanatory variables to multiple response variables so as to produce more accurate predicted values. The results of this research showed that the predicted survival time for the three samples of patients with coronary heart disease averaged 13 days, with an RMSEP (error) value of 1.526, which means that the results of this study are not much different from predictions in the field of medicine. This is consistent with the medical observation that the average survival for such patients is 13 days.
A cross-correlation objective function for least-squares migration and visco-acoustic imaging
Dutta, Gaurav
2014-08-05
Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.
Machado, A. E. de A.; da Gama, A. A. de S.; de Barros Neto, B.
2011-09-01
A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities ( β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the HOMO-LUMO energy gap, the ground-state dipole moment, the HOMO energy AM1 values and the number of π-electrons. The regression equation predicts quite well the static β values for the molecules investigated and can be used to model new organic-based materials with enhanced nonlinear responses.
Optimization of absorption placement using geometrical acoustic models and least squares.
Saksela, Kai; Botts, Jonathan; Savioja, Lauri
2015-04-01
Given a geometrical model of a space, the problem of optimally placing absorption in a space to match a desired impulse response is in general nonlinear. This has led some to use costly optimization procedures. This letter reformulates absorption assignment as a constrained linear least-squares problem. Regularized solutions result in direct distribution of absorption in the room and can accommodate multiple frequency bands, multiple sources and receivers, and constraints on geometrical placement of absorption. The method is demonstrated using a beam tracing model, resulting in the optimal absorption placement on the walls and ceiling of a classroom.
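Reformulating absorption assignment as a bound-constrained linear least-squares problem can be sketched with SciPy's `lsq_linear`. This is a toy system: in practice the matrix mapping patch absorption to the acoustic response would come from the geometrical acoustic (e.g. beam tracing) model, and the patch count and bounds here are invented.

```python
import numpy as np
from scipy.optimize import lsq_linear

# Hypothetical setup: 3 wall patches, absorption coefficients bounded in
# [0, 1]; A maps patch absorption to sampled decay values, b is the target.
rng = np.random.default_rng(0)
A = rng.uniform(0.1, 1.0, (20, 3))
alpha_true = np.array([0.3, 0.8, 0.5])
b = A @ alpha_true                       # desired response

res = lsq_linear(A, b, bounds=(0.0, 1.0))   # constrained linear least squares
```

The bounds keep every recovered coefficient physically meaningful; regularization terms (as mentioned in the abstract) can be appended as extra rows of A and b.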
Directory of Open Access Journals (Sweden)
F. S. Zhang
2016-01-01
Full Text Available The spatial mapping of losses attributable to disasters is now well established as a means of describing the spatial patterns of disaster risk, and it has been shown to be suitable for many types of major meteorological disasters. However, few studies have developed a regression model to estimate the effects of the spatial distribution of meteorological factors on the losses associated with meteorological disasters. In this study, the proposed approach is capable of the following: (a) estimating the spatial distributions of seven meteorological factors using Bayesian maximum entropy, (b) identifying which of the four mapping methods used in this research performs best, based on cross validation, and (c) establishing a fitted model between the PLS components and disaster loss information using partial least squares regression within a specific research area. The results showed the following: (a) the best mapping results were produced by multivariate Bayesian maximum entropy with probabilistic soft data; (b) the regression model using three PLS components, extracted from the seven meteorological factors by the PLS method, was the most predictive by means of the PRESS/SS test; (c) northern Hunan Province sustained the most damage, while southeastern Gansu Province and western Guizhou Province sustained the least.
Modelling and Estimation of Hammerstein System with Preload Nonlinearity
Directory of Open Access Journals (Sweden)
Khaled ELLEUCH
2010-12-01
Full Text Available This paper deals with modelling and parameter identification of nonlinear systems described by a Hammerstein model whose asymmetric static nonlinearity is a preload characteristic. The simultaneous use of an easy decomposition technique and generalized orthonormal bases leads to a particular form of the Hammerstein model containing a minimal number of parameters. Using orthonormal bases to describe the linear dynamic block yields a linear regressor model, so that least squares techniques can be used for parameter estimation. The singular value decomposition (SVD) technique is applied to separate the coupled parameters. To demonstrate the feasibility of the identification method, an illustrative example is included.
Least-squares reverse time migration in elastic media
Ren, Zhiming; Liu, Yang; Sen, Mrinal K.
2017-02-01
Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Besides, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results determine that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.
Topology testing of phylogenies using least squares methods
Directory of Open Access Journals (Sweden)
Wróbel Borys
2006-12-01
Full Text Available Abstract Background The least squares (LS) method for constructing confidence sets of trees is closely related to LS tree building methods, in which the goodness of fit of the distances measured on the tree (patristic distances) to the observed distances between taxa is the criterion used for selecting the best topology. The generalized LS (GLS) method for topology testing is often frustrated by the computational difficulties in calculating the covariance matrix and its inverse, which in practice requires approximations. Weighted LS (WLS) allows a more efficient, albeit approximate, calculation of the test statistic by ignoring the covariances between the distances. Results The goal of this paper is to assess the applicability of the LS approach for constructing confidence sets of trees. We show that the approximations inherent to the WLS method did not negatively affect the accuracy and reliability of the test, both in the analysis of biological sequences and of DNA-DNA hybridization data (for which character-based testing methods cannot be used). On the other hand, we report several problems for the GLS method, at least for the available implementation. For many data sets of biological sequences, the GLS statistic could not be calculated. For some data sets for which it could, the GLS method included all the possible trees in the confidence set despite a strong phylogenetic signal in the data. Finally, contrary to WLS, for simulated sequences GLS showed undercoverage (frequent non-inclusion of the true tree in the confidence set). Conclusion The WLS method provides a computationally efficient approximation to GLS, useful especially in exploratory analyses of confidence sets of trees, when assessing the phylogenetic signal in the data, and when other methods are not available.
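The WLS statistic discussed above amounts to a weighted sum of squared deviations between observed and patristic distances. A minimal sketch, assuming the common Fitch-Margoliash-style weights 1/d² (the paper's exact weighting scheme may differ):

```python
import numpy as np

def wls_statistic(d_obs, d_tree):
    """Weighted least-squares fit between observed pairwise distances
    and patristic (tree) distances, with 1/d_obs^2 weights so that
    relative rather than absolute deviations are penalized."""
    w = 1.0 / d_obs**2
    return float(np.sum(w * (d_obs - d_tree) ** 2))
```

Topologies whose statistic falls below an appropriate critical value would then be retained in the confidence set.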
The moving-least-squares-particle hydrodynamics method (MLSPH)
Energy Technology Data Exchange (ETDEWEB)
Dilts, G. [Los Alamos National Lab., NM (United States)]
1997-12-31
An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a collocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (collocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
Cross-correlation least-squares reverse time migration in the pseudo-time domain
Li, Qingyang; Huang, Jianping; Li, Zhenchun
2017-08-01
The least-squares reverse time migration (LSRTM) method with higher image resolution and amplitude is becoming increasingly popular. However, the LSRTM is not widely used in field land data processing because of its sensitivity to the initial migration velocity model, large computational cost and mismatch of amplitudes between the synthetic and observed data. To overcome the shortcomings of the conventional LSRTM, we propose a cross-correlation least-squares reverse time migration algorithm in pseudo-time domain (PTCLSRTM). Our algorithm not only reduces the depth/velocity ambiguities, but also reduces the effect of velocity error on the imaging results. It relieves the accuracy requirements on the migration velocity model of least-squares migration (LSM). The pseudo-time domain algorithm eliminates the irregular wavelength sampling in the vertical direction, thus it can reduce the vertical grid points and memory requirements used during computation, which makes our method more computationally efficient than the standard implementation. Besides, for field data applications, matching the recorded amplitudes is a very difficult task because of the viscoelastic nature of the Earth and inaccuracies in the estimation of the source wavelet. To relax the requirement for strong amplitude matching of LSM, we extend the normalized cross-correlation objective function to the pseudo-time domain. Our method is only sensitive to the similarity between the predicted and the observed data. Numerical tests on synthetic and land field data confirm the effectiveness of our method and its adaptability for complex models.
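The normalized cross-correlation objective used above to relax amplitude matching can be illustrated in a few lines. This sketch shows only the misfit itself, not the migration operators; the function name is an assumption:

```python
import numpy as np

def ncc_misfit(pred, obs):
    """Normalized cross-correlation misfit: invariant to amplitude
    scaling of either trace, sensitive only to waveform similarity.
    Returns 1 - NCC, so identical shapes give 0."""
    num = np.dot(pred, obs)
    den = np.linalg.norm(pred) * np.linalg.norm(obs)
    return 1.0 - num / den
```

Because scaling `pred` by any positive constant leaves the misfit unchanged, inaccuracies in absolute amplitude (viscoelastic losses, source-wavelet errors) do not penalize the inversion.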
Analyzing industrial energy use through ordinary least squares regression models
Golden, Allyson Katherine
Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone that possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and
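A change-point regression of the kind described above can be fit with nothing more than ordinary least squares plus a grid search over candidate balance temperatures. The following is a hypothetical three-parameter cooling-type sketch, not the thesis's exact models:

```python
import numpy as np

def changepoint_fit(T, E, candidates):
    """Three-parameter change-point model E = b0 + b1 * max(T - Tb, 0):
    for each candidate balance temperature Tb, solve OLS and keep the
    fit with the smallest residual sum of squares."""
    best = None
    for Tb in candidates:
        X = np.column_stack([np.ones_like(T), np.maximum(T - Tb, 0.0)])
        beta, *_ = np.linalg.lstsq(X, E, rcond=None)
        sse = float(np.sum((X @ beta - E) ** 2))
        if best is None or sse < best[0]:
            best = (sse, Tb, beta)
    return best  # (sse, Tb, [baseline b0, cooling slope b1])
```

Production could be added as a further regressor column, exactly as the thesis combines temperature and production terms.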
Institute of Scientific and Technical Information of China (English)
CHEN Nan-xiang; CAO Lian-hai; HUANG Qiang
2005-01-01
Scientific forecasting of mine water yield is of great significance to safe mine production and to the integrated use of water resources. This paper establishes a forecasting model for mine water yield by combining a neural network with the partial least squares method. Processing the independent variables with partial least squares not only resolves the correlations among them but also reduces the input dimension of the neural network model, after which the neural network, which better handles nonlinear problems, is applied. The result of an example shows that the prediction has higher precision in forecasting and fitting.
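The dimension-reduction step above can be sketched with a single PLS component extracted by the classical NIPALS-style weight vector; for brevity a linear regression on the score stands in for the neural network head, so this is a simplified illustration rather than the paper's hybrid model:

```python
import numpy as np

def pls1_component(X, y):
    """First PLS component of centered X against centered y:
    weight w proportional to X^T y, score t = X w."""
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc
    w = w / np.linalg.norm(w)
    return Xc @ w, w

def pls1_predict(X, y):
    """Regress y on the first PLS score (a linear stand-in for the
    neural network that the paper trains on the PLS components)."""
    t, _ = pls1_component(X, y)
    yc = y - y.mean()
    b = (t @ yc) / (t @ t)
    return y.mean() + b * t
```

Feeding scores `t` (rather than raw collinear inputs) into the network is what reduces the input dimension while preserving the covariance with the response.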
A negative-norm least-squares method for time-harmonic Maxwell equations
Copeland, Dylan M.
2012-04-01
This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.
FOSLS (first-order systems least squares): An overview
Energy Technology Data Exchange (ETDEWEB)
Manteuffel, T.A. [Univ. of Colorado, Boulder, CO (United States)
1996-12-31
The process of modeling a physical system involves creating a mathematical model, forming a discrete approximation, and solving the resulting linear or nonlinear system. The mathematical model may take many forms. The particular form chosen may greatly influence the ease and accuracy with which it may be discretized as well as the properties of the resulting linear or nonlinear system. If a model is chosen incorrectly it may yield linear systems with undesirable properties such as nonsymmetry or indefiniteness. On the other hand, if the model is designed with the discretization process and numerical solution in mind, it may be possible to avoid these undesirable properties.
Institute of Scientific and Technical Information of China (English)
Xudong Yu; Yu Wang; Guo Wei; Pengfei Zhang; Xingwu Long
2011-01-01
Bias of a ring laser gyroscope (RLG) changes with temperature in a nonlinear way. This is an important factor limiting improvement in the accuracy of the RLG. Considering the limitations of least-squares regression and neural networks, we propose a new method of temperature compensation of RLG bias: building a function regression model using a least-squares support vector machine (LS-SVM). Static and dynamic temperature experiments on RLG bias are carried out to validate the effectiveness of the proposed method. Moreover, the traditional least-squares regression method is compared with the LS-SVM-based method. The results show that the maximum error of RLG bias drops by almost two orders of magnitude after static temperature compensation, while the bias stability of the RLG improves by one order of magnitude after dynamic temperature compensation. Thus, the proposed method effectively reduces the influence of temperature variation on the bias of the RLG and considerably improves the accuracy of the gyroscope.
Billings, S. A.
1988-03-01
Time and frequency domain identification methods for nonlinear systems are reviewed. Parametric methods, prediction error methods, structure detection, model validation, and experiment design are discussed. Identification of a liquid level system, a heat exchanger, and a turbocharged automotive diesel engine is illustrated. Rational models are introduced. Spectral analysis for nonlinear systems is treated. Recursive estimation is mentioned.
Directory of Open Access Journals (Sweden)
I Gusti Ayu Made Srinadi
2017-06-01
Full Text Available Partial least squares regression (PLSR) is one of the methods applied to estimate multiple linear regression models when ordinary least squares (OLS) cannot be used. OLS produces an invalid model estimate when multicollinearity occurs or when the number of independent variables is greater than the number of observations. When OLS can be applied, it is of interest to compare its performance against the PLSR method. This study determines a PLSR model for the influence of literacy rate, average duration of schooling, school enrollment rate, per capita income, and open unemployment rate on the poverty level, measured as the percentage of poor people in Indonesia in 2015. In the model estimated with OLS, only the literacy rate is included, with a coefficient of determination R2 = 32.52%. The PLSR model estimated by leave-one-out cross-validation with one selected component has an R2 of 33.23%. Both models show a negative relationship between poverty and the literacy rate: a higher literacy rate reduces the poverty level, indicating that the success of the Indonesian government in developing education will support its success in reducing poverty.
Application of least-squares spectral element solver methods to incompressible flow problems
Proot, M.M.J.; Gerritsma, M.I.; Nool, M.
2003-01-01
Least-squares spectral element methods are based on two important and successful numerical methods: spectral/hp element methods and least-squares finite element methods. In this respect, least-squares spectral element methods are very powerful since they combine the generality of finite element methods with the accuracy of spectral methods.
Parallel Implementation of a Least-Squares Spectral Element Solver for Incomressible Flow Problems
Nool, M.; Proot, M.M.J.; Sloot, P.M.A.; Kenneth Tan, C.J.; Dongarra, J.J.; Hoekstra, A.G.
2002-01-01
Least-squares spectral element methods are based on two important and successful numerical methods: spectral/hp element methods and least-squares finite element methods. Least-squares methods lead to symmetric and positive definite algebraic systems which circumvent the Ladyzhenskaya-Babuška-Brezzi stability condition.
A Pascal program for the least-squares evaluation of standard RBS spectra
Hnatowicz, V.; Havránek, V.; Kvítek, J.
1992-11-01
A computer program for least-squares fitting of energy spectra obtained in common Rutherford backscattering (RBS) analyses is described. The samples analyzed by the RBS technique are considered to be made up of a finite number of layers, each with uniform composition. The RBS spectra are treated as a combination of a variable number of three different basic figures (strip, bulge and Gaussian), which are represented by ad hoc analytical expressions. The initial parameter estimates are entered by the operator (with the assistance of graphical support on a TV screen), and the result of the fit is displayed on the screen and stored as a table on hard disk.
Regularization Paths for Least Squares Problems with Generalized $\\ell_1$ Penalties
Tibshirani, Ryan J
2010-01-01
We present a path algorithm for least squares problems with generalized $\\ell_1$ penalties. This includes as a special case the lasso and fused lasso problems. The algorithm is based on solving the (equivalent) Lagrange dual problem, an approach which offers both a computational advantage and an interesting geometric interpretation of the solution path. Using insights gathered from the dual formulation, we study degrees of freedom for the generalized problem, and develop an unbiased estimate of the degrees of freedom of the fused lasso fit. Our approach bears similarities to least angle regression (LARS), and a simple modification to our method gives the LARS procedure exactly.
Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem
Energy Technology Data Exchange (ETDEWEB)
Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)
1996-12-31
Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method; it did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.
Defense of the Least Squares Solution to Peelle’s Pertinent Puzzle
Directory of Open Access Journals (Sweden)
Nicolas Hengartner
2011-02-01
Full Text Available Generalized least squares (GLS for model parameter estimation has a long and successful history dating to its development by Gauss in 1795. Alternatives can outperform GLS in some settings, and alternatives to GLS are sometimes sought when GLS exhibits curious behavior, such as in Peelle’s Pertinent Puzzle (PPP. PPP was described in 1987 in the context of estimating fundamental parameters that arise in nuclear interaction experiments. In PPP, GLS estimates fell outside the range of the data, eliciting concerns that GLS was somehow flawed. These concerns have led to suggested alternatives to GLS estimators. This paper defends GLS in the PPP context, investigates when PPP can occur, illustrates when PPP can be beneficial for parameter estimation, reviews optimality properties of GLS estimators, and gives an example in which PPP does occur.
Linear least squares compartmental-model-independent parameter identification in PET.
Thie, J A; Smith, G T; Hubner, K F
1997-02-01
A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of: its integrals, plasma activity and plasma integrals--all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed: both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoiding risks of convergence failures or false solutions in iterative least squares, and providing various visualizations of the uptake process by straight line graphical displays. Multiparameter model-independent analyses on lesser understood systems is also made possible.
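The linearized compartmental fit described above regresses measured activity on its own integral and the plasma integral, with macroparameters as the regression coefficients. A hypothetical sketch (the exact macroparameter expressions depend on the model topology, and the trapezoidal integration here is an assumption):

```python
import numpy as np

def macroparam_fit(t, ct, cp):
    """Fit C_t(t) ~ a * int(C_p) + b * int(C_t) by multiple linear
    regression, as in spreadsheet-style macroparameter estimation.
    t: sample times; ct: tissue activity; cp: plasma activity."""
    # running trapezoidal integrals of plasma and tissue curves
    int_cp = np.concatenate([[0.0], np.cumsum((cp[1:] + cp[:-1]) / 2 * np.diff(t))])
    int_ct = np.concatenate([[0.0], np.cumsum((ct[1:] + ct[:-1]) / 2 * np.diff(t))])
    X = np.column_stack([int_cp, int_ct])
    coef, *_ = np.linalg.lstsq(X, ct, rcond=None)
    return coef  # macroparameters (a, b)
```

Because the problem is linear in the macroparameters, no iteration is needed and there is no risk of convergence failure, which is the advantage the abstract emphasizes over iterative nonlinear least squares.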
Huang, Kang; Wang, Hui-jun; Xu, Hui-rong; Wang, Jian-ping; Ying, Yi-bin
2009-04-01
The application of the least squares support vector machine (LS-SVM) regression method, based on statistical learning theory, to the analysis of near infrared (NIR) spectra of tomato juice is introduced in the present paper. In this method, LS-SVM was used to establish the spectral analysis model and applied to predict the sugar content (SC) and available acid (VA) in tomato juice samples. NIR transmission spectra of tomato juice were measured in the spectral range of 800-2,500 nm using an InGaAs detector. The radial basis function (RBF) was adopted as the kernel function of the LS-SVM. Sixty-seven tomato juice samples were used as the calibration set, and thirty-three samples were used as the validation set. The results of the method for sugar content (SC) and available acid (VA) prediction were a high correlation coefficient of 0.9903 and 0.9675, and a low root mean square error of prediction (RMSEP) of 0.0056 degree Brix and 0.0245, respectively. Compared to the PLS and PCR methods, the performance of the LS-SVM method was better. The results indicated that it is possible to build statistical models to quantify some common components in tomato juice using near-infrared (NIR) spectroscopy and the LS-SVM regression method as a nonlinear multivariate calibration procedure, and that LS-SVM could be a rapid and accurate method for determining juice components from NIR spectra.
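Unlike a standard SVM, LS-SVM training reduces to one linear system rather than a quadratic program, which is what makes it attractive for calibration tasks like the one above. A minimal numpy sketch with an RBF kernel (hyperparameters and function names are illustrative, not the paper's):

```python
import numpy as np

def lssvm_train(X, y, gamma=1e6, sigma=1.0):
    """LS-SVM regression: solve the (n+1)x(n+1) KKT system
    [[0, 1^T], [1, K + I/gamma]] [b, alpha]^T = [0, y]^T
    for the bias b and dual weights alpha."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))  # RBF Gram matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]

def lssvm_predict(Xtr, b, alpha, Xte, sigma=1.0):
    """Evaluate the fitted model at new points."""
    d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2)) @ alpha + b
```

The regularization constant `gamma` plays the role of the usual SVM trade-off parameter; larger values push the fit toward interpolation of the calibration set.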
Iterative least square phase-measuring method that tolerates extended finite bandwidth illumination.
Munteanu, Florin; Schmit, Joanna
2009-02-20
Iterative least square phase-measuring techniques address the phase-shifting interferometry issue of sensitivity to vibrations and scanner nonlinearity. In these techniques the wavefront phase and phase steps are determined simultaneously from a single set of phase-shifted fringe frames where the phase shift does not need to have a nominal value or be a priori precisely known. This method is commonly used in laser interferometers in which the contrast of fringes is constant between frames and across the field. We present step-by-step modifications to the basic iterative least square method. These modifications allow for vibration insensitive measurements in an interferometric system in which fringe contrast varies across a single frame, as well as from frame to frame, due to the limited bandwidth light source and the nonzero numerical aperture of the objective. We demonstrate the efficiency of the new algorithm with experimental data, and we analyze theoretically the degree of contrast variation that this new algorithm can tolerate.
Bulcock, J. W.
The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
Concerning an application of the method of least squares with a variable weight matrix
Sukhanov, A. A.
1979-01-01
An estimate of a state vector for a physical system when the weight matrix in the method of least squares is a function of this vector is considered. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate which reduces to the solution of a system of algebraic equations is proposed. The question of applying Newton's method of tangents to solving the given system of algebraic equations is considered and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
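The first iterative procedure described above can be sketched as a fixed-point iteration: freeze the state-dependent weight matrix at the current estimate, solve the resulting ordinary weighted least-squares problem, and repeat. This is a generic illustration under an assumed weight function, not the paper's specific construction:

```python
import numpy as np

def variable_weight_lstsq(A, b, weight_fn, x0, iters=50):
    """Weighted least squares with a weight matrix W = weight_fn(x)
    that depends on the estimate itself, solved by fixed-point
    iteration on the normal equations."""
    x = np.asarray(x0, float)
    for _ in range(iters):
        W = weight_fn(x)  # freeze the weights at the current estimate
        x = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    return x
```

Existence and uniqueness of the limit depend on the weight function being well behaved near the solution, which is exactly the question the paper's convergence conditions address.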
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2016-10-26
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene mixtures gas spectra as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors
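The selection rule above can be condensed into a single weighted normal-equations solve in which high-absorbance channels get WLS weights and low-absorbance channels keep unit CLS weights. This is a schematic simplification of the SWLS procedure, with an assumed constant noise level:

```python
import numpy as np

def swls_solve(K, a, noise_std, threshold):
    """Selective weighted least squares: channels with absorbance above
    `threshold` (noise-dominated) are weighted by 1/sigma^2; channels
    below it (bias-dominated) keep unit CLS weights.
    K: pure-component spectra, (n_wavenumbers, n_components)."""
    w = np.where(a > threshold, 1.0 / noise_std**2, 1.0)
    KtW = K.T * w  # equivalent to K.T @ diag(w)
    return np.linalg.solve(KtW @ K, KtW @ a)
```

Choosing `threshold` corresponds to the paper's optimal threshold value, which it reports is governed mainly by the ratio between the bias error and the noise standard deviation.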
Outlier detection algorithms for least squares time series regression
DEFF Research Database (Denmark)
Johansen, Søren; Nielsen, Bent
We review recent asymptotic results on some robust methods for multiple regression. The regressors include stationary and non-stationary time series as well as polynomial terms. The methods include the Huber-skip M-estimator and 1-step Huber-skip M-estimators, in particular the Impulse Indicator… theory involves normal distribution results and Poisson distribution results. The theory is applied to a time series data set.
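The iterated Huber-skip idea is simple to state: fit by OLS, discard ("skip") observations whose standardized residuals exceed a cut-off, and refit on the retained set. A minimal sketch (cut-off constant and residual scale estimator are assumptions, not the authors' exact choices):

```python
import numpy as np

def huber_skip(X, y, c=2.5, steps=3):
    """Iterated Huber-skip estimator for least-squares time series
    regression: alternate OLS fits with deletion of observations whose
    residuals exceed c times the residual standard deviation."""
    keep = np.ones(len(y), bool)
    for _ in range(steps):
        beta, *_ = np.linalg.lstsq(X[keep], y[keep], rcond=None)
        r = y - X @ beta
        sigma = np.std(r[keep])
        keep = np.abs(r) <= c * sigma  # skip outlying observations
    return beta, keep
```

In the impulse-indicator interpretation, each skipped observation is equivalent to giving that time point its own dummy regressor.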
Efficent Estimation of the Non-linear Volatility and Growth Model
2009-01-01
Ramey and Ramey (1995) introduced a non-linear model relating volatility to growth. The solution of this model by generalised computer algorithms for non-linear maximum likelihood estimation encounters the usual difficulties and is, at best, tedious. We propose an algebraic solution for the model that provides fully efficient estimators and is elementary to implement as a standard ordinary least squares procedure. This eliminates issues such as the ‘guesstimation’ of initial values and mul...
A Coupled Finite Difference and Moving Least Squares Simulation of Violent Breaking Wave Impact
DEFF Research Database (Denmark)
Lindberg, Ole; Bingham, Harry B.; Engsig-Karup, Allan Peter
2012-01-01
…incompressible and inviscid model and the wave impacts on the vertical breakwater are simulated in this model. The resulting maximum pressures and forces on the breakwater are relatively high when compared with other studies, and this is due to the incompressible nature of the present model. … Two models for simulation of free surface flow are presented. The first model is a finite difference based potential flow model with non-linear kinematic and dynamic free surface boundary conditions. The second model is a weighted least squares based incompressible and inviscid flow model. A special feature of this model is a generalized finite point set method which is applied to the solution of the Poisson equation on an unstructured point distribution. The presented finite point set method is generalized to arbitrary order of approximation. The two models are applied to simulation of steep…
Xu, Lin; Feng, Yanqiu; Liu, Xiaoyun; Kang, Lili; Chen, Wufan
2014-01-01
Accuracy of interpolation coefficients fitting to the auto-calibrating signal data is crucial for k-space-based parallel reconstruction. Both conventional generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction that utilizes linear interpolation function and nonlinear GRAPPA (NLGRAPPA) reconstruction with polynomial kernel function are sensitive to interpolation window and often cannot consistently produce good results for overall acceleration factors. In this study, sparse multi-kernel learning is conducted within the framework of least squares support vector regression to fit interpolation coefficients as well as to reconstruct images robustly under different subsampling patterns and coil datasets. The kernel combination weights and interpolation coefficients are adaptively determined by efficient semi-infinite linear programming techniques. Experimental results on phantom and in vivo data indicate that the proposed method can automatically achieve an optimized compromise between noise suppression and residual artifacts for various sampling schemes. Compared with NLGRAPPA, our method is significantly less sensitive to the interpolation window and kernel parameters.
Baseline configuration for GNSS attitude determination with an analytical least-squares solution
Chang, Guobin; Xu, Tianhe; Wang, Qianxin
2016-12-01
The GNSS attitude determination using carrier phase measurements with 4 antennas is studied under the condition that the integer ambiguities have already been resolved. The solution to the nonlinear least-squares problem is usually obtained iteratively; however, an analytical solution exists for specific baseline configurations. The main aim of this work is to design this class of configurations. Both single and double difference measurements are treated, which refer to dedicated and non-dedicated receivers, respectively. More realistic error models are employed, in which the correlations between different measurements are given full consideration. The desired configurations are worked out. The configurations are rotation and scale equivariant and can be applied to both dedicated and non-dedicated receivers. For these configurations, the analytical and optimal solution for the attitude is also given together with its error variance-covariance matrix.
Song, Jun-Ling; Hong, Yan-Ji; Wang, Guang-Yu; Pan, Hu
2013-08-01
The measurement of nonuniform temperature and concentration distributions was investigated based on tunable diode laser absorption spectroscopy technology. By directly scanning multiple absorption lines of H2O, two-zone temperature and concentration distributions were obtained by solving nonlinear equations via least-squares fitting in numerical and experimental studies. The numerical results show that the calculated temperature and concentration have relative errors of 8.3% and 7.6%, respectively, compared to the model. The calculation accuracy can be improved by increasing the number of absorption lines and reducing the number of unknowns. Compared with the thermocouple readings, the high and low temperatures have relative errors of 13.8% and 3.5%, respectively. The numerical results are in agreement with the experimental results.
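The two-zone retrieval described above reduces to a small nonlinear least-squares problem. A minimal sketch with SciPy, assuming a made-up Boltzmann-type line-strength model; the energies, temperatures, and concentrations below are synthetic illustration values, not taken from the paper:

```python
import numpy as np
from scipy.optimize import least_squares

# Hypothetical two-zone model: the integrated absorbance of line i is the sum
# of contributions from two gas zones, each with its own temperature T and
# concentration c. The factor exp(-E_i/T) stands in for the true
# temperature-dependent line-strength function.
E = np.array([500.0, 1000.0, 2000.0, 3000.0, 4500.0, 6000.0])  # lower-state energies (K)

def absorbance(params):
    T1, T2, c1, c2 = params
    return c1 * np.exp(-E / T1) + c2 * np.exp(-E / T2)

truth = np.array([1000.0, 500.0, 0.10, 0.05])   # T1, T2, c1, c2
measured = absorbance(truth)                    # noise-free synthetic "measurements"

# Recover the two-zone parameters by nonlinear least-squares fitting.
fit = least_squares(lambda p: absorbance(p) - measured,
                    x0=[800.0, 600.0, 0.08, 0.06],
                    bounds=([300.0, 300.0, 0.0, 0.0], [3000.0, 3000.0, 1.0, 1.0]))
```

With six lines and four unknowns the system is overdetermined, which mirrors the paper's observation that adding absorption lines improves the retrieval.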
Natural gradient-based recursive least-squares algorithm for adaptive blind source separation
Institute of Scientific and Technical Information of China (English)
ZHU Xiaolong; ZHANG Xianda; YE Jimin
2004-01-01
This paper focuses on the problem of adaptive blind source separation (BSS). First, a recursive least-squares (RLS) whitening algorithm is proposed. By combining it with a natural gradient-based RLS algorithm for nonlinear principal component analysis (PCA), and using reasonable approximations, a novel RLS algorithm which can achieve BSS without additional pre-whitening of the observed mixtures is obtained. Analyses of the equilibrium points show that both the RLS whitening algorithm and the natural gradient-based RLS algorithm for BSS have the desired convergence properties. It is also proved that the combined new RLS algorithm for BSS is equivariant and keeps the separating matrix from becoming singular. Finally, the effectiveness of the proposed algorithm is verified by extensive simulation results.
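The recursive least-squares building block underlying such algorithms can be written in a few lines. A minimal sketch for a plain linear model, with the BSS-specific whitening and natural-gradient terms omitted and all data synthetic:

```python
import numpy as np

# Standard RLS update for y = w . x: maintain the parameter estimate w and
# the inverse correlation matrix P, and update both per sample.
rng = np.random.default_rng(0)
w_true = np.array([1.5, -2.0, 0.5])

P = 1e3 * np.eye(3)        # inverse correlation matrix estimate (large prior)
w = np.zeros(3)            # parameter estimate
lam = 1.0                  # forgetting factor (1.0 = ordinary least squares)

for _ in range(500):
    x = rng.standard_normal(3)
    y = w_true @ x                       # noise-free stream for illustration
    k = P @ x / (lam + x @ P @ x)        # gain vector
    w = w + k * (y - w @ x)              # innovation update
    P = (P - np.outer(k, x @ P)) / lam   # rank-one downdate of P
```

With a forgetting factor below 1 the same recursion tracks slowly varying parameters, which is what makes it attractive for adaptive separation.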
Hasegawa, K; Funatsu, K
2000-01-01
Quantitative structure-activity relationship (QSAR) studies based on chemometric techniques are reviewed. Partial least squares (PLS) is introduced as a robust method to replace classical methods such as multiple linear regression (MLR). Advantages of PLS compared to MLR are illustrated with typical applications. The genetic algorithm (GA) is a novel optimization technique which can be used as a search engine in variable selection. A hybrid approach comprising GA and PLS for variable selection developed in our group (GAPLS) is described. A more advanced method for comparative molecular field analysis (CoMFA) modeling, called GA-based region selection (GARGS), is described as well. Applications of GAPLS and GARGS to QSAR and 3D-QSAR problems are shown with some representative examples. GA can also be hybridized with nonlinear modeling methods such as artificial neural networks (ANN), providing useful tools in chemometrics and QSAR.
Directory of Open Access Journals (Sweden)
Kuosheng Jiang
2014-07-01
Full Text Available In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test.
New predictive control algorithms based on Least Squares Support Vector Machines
Institute of Scientific and Technical Information of China (English)
LIU Bin; SU Hong-ye; CHU Jian
2005-01-01
Used for industrial processes with different degrees of nonlinearity, the two predictive control algorithms presented in this paper are based on Least Squares Support Vector Machines (LS-SVM) models. For a weakly nonlinear system, the system model is built using LS-SVM with a linear kernel function, and the obtained linear LS-SVM model is then transformed into a linear input-output relation of the controlled system. For a strongly nonlinear system, however, the off-line model of the controlled system is built using LS-SVM with a Radial Basis Function (RBF) kernel. The obtained nonlinear LS-SVM model is linearized at each sampling instant while the system is running, after which the on-line linear input-output model of the system is built. Based on the obtained linear input-output model, the Generalized Predictive Control (GPC) algorithm is employed to implement predictive control of the controlled plant in both algorithms. Simulation results obtained by implementing the presented algorithms on two different industrial process models reveal the effectiveness and merits of both algorithms.
Robust Nonlinear Regression in Enzyme Kinetic Parameters Estimation
Directory of Open Access Journals (Sweden)
Maja Marasović
2017-01-01
Full Text Available Accurate estimation of essential enzyme kinetic parameters, such as Km and Vmax, is very important in modern biology. To this day, linearization of kinetic equations is still widely established practice for determining these parameters in chemical and enzyme catalysis. Although the simplicity of linear optimization is alluring, these methods have certain pitfalls due to which they more often than not result in misleading estimates of enzyme parameters. In order to obtain more accurate predictions of parameter values, the use of nonlinear least-squares fitting techniques is recommended. However, when there are outliers present in the data, these techniques become unreliable. This paper proposes the use of a robust nonlinear regression estimator based on a modified Tukey's biweight function that can provide more resilient results in the presence of outliers and/or influential observations. Real and synthetic kinetic data have been used to test our approach. Monte Carlo simulations are performed to illustrate the efficacy and the robustness of the biweight estimator in comparison with the standard linearization methods and ordinary least-squares nonlinear regression. We then apply this method to experimental data for the tyrosinase enzyme (EC 1.14.18.1) extracted from Solanum tuberosum, Agaricus bisporus, and Pleurotus ostreatus. The results on both artificial and experimental data clearly show that the proposed robust estimator can be successfully employed to determine accurate values of Km and Vmax.
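The robust-fitting idea can be sketched as iteratively reweighted least squares with the standard Tukey biweight; the paper's modified biweight and its exact tuning are not reproduced here, and all data values are synthetic:

```python
import numpy as np
from scipy.optimize import least_squares

# Michaelis-Menten model v = Vmax*S/(Km + S) with one injected gross outlier.
S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0, 32.0])
Vmax_true, Km_true = 10.0, 2.0
v = Vmax_true * S / (Km_true + S)
v[2] += 5.0                              # gross outlier

def model(p):
    return p[0] * S / (p[1] + S)

# Start from an ordinary (unweighted) nonlinear least-squares fit.
p = least_squares(lambda q: model(q) - v, x0=[8.0, 1.0]).x

c = 4.685                                # standard Tukey tuning constant
for _ in range(10):
    r = model(p) - v
    scale = 1.4826 * np.median(np.abs(r - np.median(r)))   # robust MAD scale
    scale = max(scale, 1e-8)             # guard against a degenerate scale
    u = r / (c * scale)
    w = np.where(np.abs(u) < 1.0, (1.0 - u**2) ** 2, 0.0)  # Tukey biweight weights
    p = least_squares(lambda q: np.sqrt(w) * (model(q) - v), x0=p).x
```

The outlier's weight is driven to zero across iterations, so the final fit is essentially the ordinary fit to the clean points.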
Directory of Open Access Journals (Sweden)
Byambaa Dorj
2016-01-01
Full Text Available The next promising key issue of automobile development is self-driving technology. One of the challenges for intelligent self-driving is a lane-detecting and lane-keeping capability for advanced driver assistance systems. This paper introduces an efficient lane detection method based on a top view image transformation that converts an image from a front view to a top view space. After the top view image transformation, a Hough transformation technique is integrated, using a parabolic model of a curved lane, in order to estimate a parametric model of the lane in the top view space. The parameters of the parabolic model are estimated by utilizing a least-squares approach. The experimental results show that the newly proposed lane detection method with the top view transformation is very effective in estimating a sharp and curved lane, leading to a precise self-driving capability.
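Estimating the parabolic lane model by least squares amounts to a degree-2 polynomial fit. A minimal sketch on synthetic top-view lane points; the coefficients and noise level are illustrative, not from the paper:

```python
import numpy as np

# Parabolic lane model in top-view coordinates: x = a*y**2 + b*y + c,
# where y is the image row and x the lane's column position.
a, b, c = 0.002, -0.3, 120.0
y = np.linspace(0.0, 200.0, 60)                            # rows in the top view
rng = np.random.default_rng(2)
x = a * y**2 + b * y + c + rng.normal(0.0, 0.5, y.size)    # noisy lane pixels

coeffs = np.polyfit(y, x, deg=2)                           # [a_hat, b_hat, c_hat]
```

In the actual pipeline the candidate lane pixels would come from the Hough step; the least-squares fit then smooths them into a single parametric curve.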
Directory of Open Access Journals (Sweden)
Yang Xu
2016-02-01
Full Text Available Many complex traits are highly correlated rather than independent. By taking the correlation structure of multiple traits into account, joint association analyses can achieve both higher statistical power and more accurate estimation. To develop a statistical approach to joint association analysis that includes allele detection and genetic effect estimation, we combined multivariate partial least squares regression with variable selection strategies and selected the optimal model using the Bayesian Information Criterion (BIC. We then performed extensive simulations under varying heritabilities and sample sizes to compare the performance achieved using our method with those obtained by single-trait multilocus methods. Joint association analysis has measurable advantages over single-trait methods, as it exhibits superior gene detection power, especially for pleiotropic genes. Sample size, heritability, polymorphic information content (PIC, and magnitude of gene effects influence the statistical power, accuracy and precision of effect estimation by the joint association analysis.
Institute of Scientific and Technical Information of China (English)
Yang Xu; Wenming Hu; Zefeng Yang; Chenwu Xu
2016-01-01
Many complex traits are highly correlated rather than independent. By taking the correlation structure of multiple traits into account, joint association analyses can achieve both higher statistical power and more accurate estimation. To develop a statistical approach to joint association analysis that includes allele detection and genetic effect estimation, we combined multivariate partial least squares regression with variable selection strategies and selected the optimal model using the Bayesian Information Criterion (BIC). We then performed extensive simulations under varying heritabilities and sample sizes to compare the performance achieved using our method with those obtained by single-trait multilocus methods. Joint association analysis has measurable advantages over single-trait methods, as it exhibits superior gene detection power, especially for pleiotropic genes. Sample size, heritability, polymorphic information content (PIC), and magnitude of gene effects influence the statistical power, accuracy and precision of effect estimation by the joint association analysis.
Directory of Open Access Journals (Sweden)
Mingjun Zhang
2015-12-01
Full Text Available A novel thruster fault identification method for autonomous underwater vehicles is presented in this article. It uses the proposed peak region energy method to extract the fault feature and the proposed least square grey relational grade method to estimate the fault degree. The peak region energy method is developed from the fusion feature modulus maximum method: it applies the fusion feature modulus maximum method to obtain the fusion feature and then regards the maximum of the peak region energy in the convolution results of the fusion feature as the fault feature. The least square grey relational grade method is developed from the grey relational analysis algorithm: it determines the fault degree interval by the grey relational analysis algorithm and then estimates the fault degree within that interval by a least squares algorithm. Pool experiments on the experimental prototype are conducted to verify the effectiveness of the proposed methods. The experimental results show that the fault feature extracted by the peak region energy method is monotonic in the fault degree while the one extracted by the fusion feature modulus maximum method is not. The least square grey relational grade method can further produce an estimate between adjacent standard fault degrees, whereas the estimate of the grey relational analysis algorithm is restricted to one of the standard fault degrees.
Confidence Region of Least Squares Solution for Single-Arc Observations
Principe, G.; Armellin, R.; Lewis, H.
2016-09-01
The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm, this rises to an estimated minimum of 500,000 objects. Latest-generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints, it is likely that long gaps between observations will occur for small objects. This requires determining the space object (SO) orbit and accurately describing the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method, taking advantage of the high order Taylor expansions enabled by differential algebra. In particular, the high order expansion of the residuals with respect to the state is used to implement an arbitrary order least squares solver, avoiding the typical approximations of differential correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performance of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.
Least-squares finite-element scheme for the lattice Boltzmann method on an unstructured mesh.
Li, Yusong; LeBoeuf, Eugene J; Basu, P K
2005-10-01
A numerical model of the lattice Boltzmann method (LBM) utilizing a least-squares finite-element method in space and the Crank-Nicolson method in time is developed. This method is able to solve fluid flow in domains that contain complex or irregular geometric boundaries by using the flexibility and numerical stability of a finite-element method, while employing accurate least-squares optimization. Fourth-order accuracy in space and second-order accuracy in time are derived for a pure advection equation on a uniform mesh, while a von Neumann linearized stability analysis implies high stability. Implemented on an unstructured mesh through an innovative element-by-element approach, the proposed method requires fewer grid points and less memory compared to traditional LBM. Accurate numerical results are presented for two-dimensional incompressible Poiseuille flow, Couette flow, and flow past a circular cylinder. Finally, the proposed method is applied to estimate the permeability of a randomly generated porous medium, which further demonstrates its inherent geometric flexibility.
Online segmentation of time series based on polynomial least-squares approximations.
Fuchs, Erich; Gruber, Thiemo; Nitschke, Jiri; Sick, Bernhard
2010-12-01
The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial, obtained by means of the update steps, can be interpreted as optimal (in the least-squares sense) estimators for average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of some artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg, which is suitable for many data streaming applications, offers high accuracy at very low computational costs.
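The core idea, using the residual of a windowed least-squares polynomial fit as a segmentation criterion, can be sketched without SwiftSeg's orthogonal-basis update machinery (a plain per-window polyfit on a synthetic signal stands in for the fast recursive updates):

```python
import numpy as np

# Synthetic piecewise-linear signal with a slope change at t = 50.
t = np.arange(100, dtype=float)
y = np.where(t < 50, 0.5 * t, 25.0 + 3.0 * (t - 50))

win = 11
errors = []
for s in range(len(t) - win + 1):
    ts, ys = t[s:s + win], y[s:s + win]
    coef = np.polyfit(ts, ys, deg=1)                  # first-degree window model
    errors.append(np.sum((np.polyval(coef, ts) - ys) ** 2))
errors = np.array(errors)

# Windows lying inside one segment fit almost perfectly; the approximation
# error peaks in windows that straddle the breakpoint.
break_window = int(np.argmax(errors))
```

SwiftSeg's contribution is making each window update O(degree) instead of refitting from scratch; the segmentation criterion itself is exactly this residual.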
Nobile, Fabio
2015-01-07
We consider a general problem F(u, y) = 0 where u is the unknown solution, possibly Hilbert space valued, and y a set of uncertain parameters. We specifically address the situation in which the parameter-to-solution map u(y) is smooth; however, y could be very high (or even infinite) dimensional. In particular, we are interested in cases in which F is a differential operator, u a Hilbert space valued function, and y a distributed, space and/or time varying, random field. We aim at reconstructing the parameter-to-solution map u(y) from random noise-free or noisy observations in random points by discrete least squares on polynomial spaces. The noise-free case is relevant whenever the technique is used to construct metamodels, based on polynomial expansions, for the output of computer experiments. In the case of PDEs with random parameters, the metamodel is then used to approximate statistics of the output quantity. We discuss the stability of discrete least squares on random points and show convergence estimates both in expectation and in probability. We also present possible strategies to select, either a-priori or by adaptive algorithms, sequences of approximating polynomial spaces that allow one to reduce, and in some cases break, the curse of dimensionality.
NEGATIVE NORM LEAST-SQUARES METHODS FOR THE INCOMPRESSIBLE MAGNETOHYDRODYNAMIC EQUATIONS
Institute of Scientific and Technical Information of China (English)
Gao Shaoqin; Duan Huoyuan
2008-01-01
The purpose of this article is to develop and analyze least-squares approximations for the incompressible magnetohydrodynamic equations. The major advantage of the least-squares finite element method is that it is not subject to the so-called Ladyzhenskaya-Babuska-Brezzi (LBB) condition. The authors employ least-squares functionals which involve a discrete inner product related to the inner product in H^{-1}(Ω).
Generalized total least squares to characterize biogeochemical processes of the ocean
Guglielmi, Véronique; Goyet, Catherine; Touratier, Franck; El Jai, Marie
2016-11-01
The chemical composition of the global ocean is governed by biological, chemical, and physical processes. These processes interact with each other so that the concentrations of carbon, oxygen, nitrogen (mainly from nitrate, nitrite, ammonium), and phosphorus (mainly from phosphate) vary in constant proportions, referred to as the Redfield ratios. We construct here the generalized total least squares estimator of these ratios. The significance of our approach is twofold: it respects the hydrological characteristics of the studied areas, and it can be applied identically in any area where enough data are available. The tests applied to Atlantic Ocean data highlight a variability of the Redfield ratios, both with geographical location and with depth. This variability emphasizes the importance of local and accurate estimates of Redfield ratios.
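The simplest member of this family of estimators, classical (unweighted) total least squares for a single ratio, can be computed from an SVD. This is a stand-in for the generalized, error-covariance-aware estimator of the paper, on synthetic data:

```python
import numpy as np

# Estimate a constant ratio y ≈ beta*x when BOTH variables carry measurement
# noise: the TLS line is the direction of the smallest singular value of the
# centered data matrix (orthogonal-distance regression).
rng = np.random.default_rng(3)
x_true = rng.uniform(0.0, 10.0, 300)
beta_true = 2.0
x = x_true + rng.normal(0.0, 0.05, x_true.size)          # noisy "nutrient 1"
y = beta_true * x_true + rng.normal(0.0, 0.05, x_true.size)  # noisy "nutrient 2"

D = np.column_stack([x - x.mean(), y - y.mean()])
_, _, vt = np.linalg.svd(D, full_matrices=False)
n = vt[-1]                        # normal vector of the best-fit line
beta_tls = -n[0] / n[1]           # slope recovered from the normal vector
```

Ordinary least squares would be biased low here because it attributes all noise to y; TLS treats both variables symmetrically, as is appropriate for Redfield-type ratios of two measured concentrations.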
Directory of Open Access Journals (Sweden)
Saïda Bedoui
2013-01-01
Full Text Available This paper addresses the problem of simultaneous identification of linear discrete-time delay multivariable systems. This problem involves both the estimation of the time delays and of the dynamic parameter matrices. We suggest a new formulation of this problem that defines the time delays and the dynamic parameters in the same estimated vector and builds the corresponding observation vector. We then use this formulation to propose a new method to identify the time delays and the parameters of these systems using the least-squares approach. Convergence conditions and statistical properties of the proposed method are also developed. Simulation results are presented to illustrate the performance of the proposed method. An application of the developed approach to a compact disc player arm is also suggested in order to validate the simulation results.
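The simultaneous delay/parameter idea can be illustrated on a scalar system: scan candidate delays, solve an ordinary least-squares problem for each, and keep the delay with the smallest residual. This is a simplified sketch, not the paper's multivariable joint formulation; all values are synthetic:

```python
import numpy as np

# Identify a, b and the delay d of y(k) = a*y(k-1) + b*u(k-d).
rng = np.random.default_rng(4)
a_true, b_true, d_true = 0.5, 2.0, 3
N = 200
u = rng.standard_normal(N)
y = np.zeros(N)
for k in range(1, N):
    y[k] = a_true * y[k - 1] + b_true * (u[k - d_true] if k >= d_true else 0.0)

best = None
for d in range(1, 7):
    k = np.arange(d, N)                        # rows where u[k-d] is defined
    Phi = np.column_stack([y[k - 1], u[k - d]])
    theta, *_ = np.linalg.lstsq(Phi, y[k], rcond=None)
    sse = float(np.sum((Phi @ theta - y[k]) ** 2))
    if best is None or sse < best[0]:
        best = (sse, d, theta)

sse_min, d_hat, theta_hat = best               # correct delay gives ~zero residual
```

The paper's contribution is to fold the delay into the estimated vector itself instead of this outer scan, but the residual-minimization principle is the same.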
CHEBYSHEV WEIGHTED NORM LEAST-SQUARES SPECTRAL METHODS FOR THE ELLIPTIC PROBLEM
Institute of Scientific and Technical Information of China (English)
Sang Dong Kim; Byeong Chun Shin
2006-01-01
We develop and analyze a first-order system least-squares spectral method for the second-order elliptic boundary value problem with variable coefficients. We first analyze the Chebyshev weighted norm least-squares functional defined by the sum of the L^2_w- and H^{-1}_w-norms of the residual equations, and then we replace the negative norm by the discrete negative norm and analyze the discrete Chebyshev weighted least-squares method. Spectral convergence is derived for the proposed method. We also present various numerical experiments. The Legendre weighted least-squares method can be easily developed by following this paper.
Directory of Open Access Journals (Sweden)
Zhan-bo Chen
2014-01-01
Full Text Available In order to improve the performance prediction accuracy of hydraulic excavators, the regression least squares support vector machine is applied. First, the mathematical model of the regression least squares support vector machine is studied, and then the algorithm of the regression least squares support vector machine is designed. Finally, a performance prediction simulation of a hydraulic excavator based on the regression least squares support vector machine is carried out, and the simulation results show that this method can correctly predict how the performance of the hydraulic excavator changes.
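The LS-SVM regression model referred to above is obtained by solving a single linear system rather than a quadratic program. A generic sketch on synthetic data; the kernel width and regularization values are illustrative, not the excavator model:

```python
import numpy as np

# LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
# with an RBF kernel, then predict with f(x) = sum_i alpha_i k(x, x_i) + b.
x = np.linspace(-3.0, 3.0, 30)
y = np.sin(x)                       # toy regression target
sigma, gamma = 1.0, 100.0           # kernel width, regularization

K = np.exp(-(x[:, None] - x[None, :]) ** 2 / (2.0 * sigma**2))
n = x.size
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
rhs = np.concatenate([[0.0], y])

sol = np.linalg.solve(A, rhs)
b, alpha = sol[0], sol[1:]
f = K @ alpha + b                   # in-sample predictions
```

Every training point becomes a support vector (the equality constraints are never slack), which is what makes LS-SVM training a linear-algebra problem.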
Radio astronomical image formation using constrained least squares and Krylov subspaces
Mouri Sardarabadi, Ahmad; Leshem, Amir; van der Veen, Alle-Jan
2016-04-01
Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that, in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources.
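The inequality-constrained least-squares formulation, a lower and an upper bound per pixel, can be sketched with SciPy's bounded solver. The active-set and Krylov machinery of the paper is not reproduced; the system below is a small random stand-in for interferometric data:

```python
import numpy as np
from scipy.optimize import lsq_linear

# Box-constrained least squares: every "pixel" must be nonnegative and
# below an upper bound (standing in for the data-derived bound of the paper).
rng = np.random.default_rng(5)
A = rng.standard_normal((60, 20))          # measurement operator
x_true = rng.uniform(0.0, 1.0, 20)         # nonnegative "pixel" intensities
b = A @ x_true                             # noise-free observations

upper = np.full(20, 2.0)                   # crude per-pixel upper bound
res = lsq_linear(A, b, bounds=(np.zeros(20), upper))
```

With a tight, data-driven upper bound the feasible set shrinks substantially, which is the mechanism the paper exploits to regularize the ill-posed imaging problem.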
Directory of Open Access Journals (Sweden)
Ibrahim Mohd Tarmizi
2017-01-01
Full Text Available Theories are developed to explain an observed phenomenon in an effort to understand why and how things happen. Theories thus use latent variables to estimate conceptual parameters. The level of abstraction depends partly on the complexity of the theoretical model explaining the phenomenon. The conjugation of directly-measured variables leads to the formation of a first-order factor. A combination of theoretical underpinnings supporting the existence of higher-order components and statistical evidence pointing to their presence gives researchers an advantage in investigating a phenomenon at both aggregated and disjointed dimensions. As partial least squares (PLS) gains traction in theory development, the behavioural accounting discipline in general should exploit the flexibility of PLS to work with higher-order factors. However, technical guides are scarcely available. Therefore, this article presents a PLS approach to validating a higher-order factor on statistical grounds using an accounting information system dataset.
DEFF Research Database (Denmark)
Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel;
2014-01-01
Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for 2 5-wk periods...... Eighty variables retrieved from AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3......
Spline based least squares integration for two-dimensional shape or wavefront reconstruction
Huang, Lei; Xue, Junpeng; Gao, Bo; Zuo, Chao; Idir, Mourad
2017-04-01
In this work, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes in a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as a final result. Numerical simulations verify that the proposed method has less algorithm errors than two other existing methods used for comparison. Especially at the boundaries, the proposed method has better performance. The noise influence is studied by adding white Gaussian noise to the slope data. Experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.
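A one-dimensional analogue of the spline-based integration idea can be sketched directly: fit a cubic spline to slope samples and integrate it analytically. This illustrates the principle only, not the paper's two-dimensional least-squares method, and uses exact synthetic slopes rather than deflectometry data:

```python
import numpy as np
from scipy.interpolate import CubicSpline

# Recover a profile h(x) from sampled slopes h'(x): fit a spline to the
# slopes and evaluate its exact piecewise-polynomial antiderivative.
x = np.linspace(0.0, 2.0 * np.pi, 50)
slope = np.cos(x)                          # slopes of the profile h(x) = sin(x)

spline = CubicSpline(x, slope)
height = spline.antiderivative()(x)        # analytic integration of the fit
height -= height[0] - np.sin(x[0])         # fix the free integration constant
```

Because the integration is performed on the analytic spline rather than by cumulative summation, the result is far less sensitive to the sample spacing, which is the advantage the paper demonstrates in two dimensions.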
Directory of Open Access Journals (Sweden)
Margaretha Ohyver
2014-12-01
Full Text Available Multicollinearity and outliers are common problems when estimating a regression model. Multicollinearity occurs when there are high correlations among predictor variables, leading to difficulties in separating the effects of each independent variable on the response variable. Meanwhile, if outliers are present in the data to be analyzed, the assumption of normality in the regression will be violated and the results of the analysis may be incorrect or misleading. Both of these cases occurred in the data on room occupancy rates of hotels in Kendari. The purpose of this study is to find a model for the data that is free of multicollinearity and outliers and to determine the factors that affect the room occupancy rates of hotels in Kendari. The methods used are the Continuous Wavelet Transformation and Partial Least Squares. The result of this research is a regression model that is free of multicollinearity and in which the influence of outliers is resolved.
Water Quantity Prediction Using Least Squares Support Vector Machines (LS-SVM) Method
Directory of Open Access Journals (Sweden)
Nian Zhang
2014-08-01
Full Text Available Reliable estimation of stream flows in highly urbanized areas and the associated receiving waters is very important for water resources analysis and design. We used a least squares support vector machine (LS-SVM) based algorithm to forecast future streamflow discharge. A Gaussian Radial Basis Function (RBF) kernel framework was built on the data set to optimize the tuning parameters and to obtain the moderated output. The training process of the LS-SVM was designed to select both kernel parameters and regularization constants. The USGS real-time water data were used as time series input; 50% of the data were used for training, and 50% were used for testing. The experimental results showed that the LS-SVM algorithm is a reliable and efficient method for streamflow prediction, which has an important impact on the water resource management field.
From least squares to multilevel modeling: A graphical introduction to Bayesian inference
Loredo, Thomas J.
2016-01-01
This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.
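One of the connections mentioned, between least squares and Bayesian inference, can be made concrete in a few lines: with a Gaussian likelihood and a zero-mean Gaussian prior on the weights, the posterior mean of a linear model equals a ridge (regularized least-squares) solution. The data below are synthetic:

```python
import numpy as np

# Gaussian likelihood with noise variance s2, Gaussian prior with variance t2:
# the posterior mean equals ridge regression with lambda = s2/t2.
rng = np.random.default_rng(6)
X = rng.standard_normal((40, 3))
w_true = np.array([1.0, -0.5, 2.0])
s2, t2 = 0.25, 4.0
y = X @ w_true + rng.normal(0.0, np.sqrt(s2), 40)

lam = s2 / t2
ridge = np.linalg.solve(X.T @ X + lam * np.eye(3), X.T @ y)
posterior_mean = np.linalg.solve(X.T @ X / s2 + np.eye(3) / t2, X.T @ y / s2)
```

Multiplying the posterior system by s2 recovers the ridge system exactly, so the two solutions coincide; ordinary least squares is the flat-prior limit t2 → ∞.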
Experiments using least square lattice filters for the identification of structural dynamics
Sundararajan, N.; Montgomery, R. C.
1983-01-01
An approach for identifying the dynamics of large space structures is applied to a free-free beam. In this approach the system's order is determined on-line, along with mode shapes, using recursive lattice filters which provide a least square estimate of the measurement data. The mode shapes determined are orthonormal in the space of the measurements and, hence, are not the natural modes of the structure. To determine the natural modes of the structure, a method based on the fast Fourier transform is used on the outputs of the lattice filter. These natural modes are used to obtain the modal amplitude time series which provide the input data for an output error parameter identification scheme that identifies the ARMA parameters of the difference equation model of the modes. The approach is applied to both simulated and experimental data.
Adaptive control of a flexible beam using least square lattice filters
Sundararajan, N.; Montgomery, R. C.
1983-01-01
This paper presents an indirect adaptive control scheme for the control of flexible structures using recursive least square lattice filters. The identification scheme uses lattice filters which provide an on-line estimate of the number of modes, mode shapes and modal amplitudes. These modes are coupled, and a transformation to decouple them in order to obtain the natural modes is presented. The decoupled modal amplitude time series are then used in an equation error identification scheme to identify the model parameters in an autoregressive moving average (ARMA) form. The control is based on a modal pole placement scheme with the objective of vibration suppression. The control gains are calculated based on the identified ARMA parameters. Before the identified parameters are used for control, detailed testing and validation procedures are carried out on them. The full adaptive control scheme is demonstrated using the simulation for the 12-foot free-free beam apparatus at NASA Langley Research Center.
Montgomery, R. C.; Sundararajan, N.
1984-01-01
The basic theory of least square lattice filters and their use in identification of structural dynamics systems is summarized. Thereafter, this theory is applied to a two-dimensional grid structure made of overlapping bars. Previously, this theory has been applied to an integral beam. System identification results are presented for both simulated and experimental tests and they are compared with those predicted using finite element modelling. The lattice filtering approach works well for simulated data based on finite element modelling. However, considerable discrepancy exists between estimates obtained from experimental data and the finite element analysis. It is believed that this discrepancy results from inadequacies of the finite element model in representing the damped motion of the laboratory apparatus.
A Selective Moving Window Partial Least Squares Method and Its Application in Process Modeling
Institute of Scientific and Technical Information of China (English)
Ouguan Xu; Yongfeng Fu; Hongye Su; Lijuan Li
2014-01-01
A selective moving window partial least squares (SMW-PLS) soft sensor was proposed in this paper and applied to a hydro-isomerization process for on-line estimation of para-xylene (PX) content. Aiming at the high frequency of model updating in previous recursive PLS methods, a selective updating strategy was developed. The model adaptation is activated once the prediction error is larger than a preset threshold; otherwise the model is kept unchanged. As a result, the frequency of model updating is reduced greatly, while the change in prediction accuracy is minor. The performance of the proposed model is better than that of other PLS-based models. A compromise between prediction accuracy and real-time performance can be obtained by regulating the threshold. Guidelines to determine the model parameters are illustrated. In summary, the proposed SMW-PLS method can deal with slow time-varying processes effectively.
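The selective-updating logic described above can be sketched in a few lines. In this sketch ordinary least squares stands in for the PLS sub-model, and the window size and threshold are arbitrary illustrative choices, not values from the paper.

```python
import numpy as np

def fit(X, y):
    # Ordinary least squares stands in for the PLS sub-model here.
    return np.linalg.lstsq(X, y, rcond=None)[0]

def selective_update(X, y, window=30, threshold=0.5):
    """Refit on the latest moving window only when the prediction
    error exceeds the threshold; otherwise keep the model unchanged."""
    coef = fit(X[:window], y[:window])
    updates = 0
    for k in range(window, len(y)):
        err = y[k] - X[k] @ coef
        if abs(err) > threshold:            # model no longer adequate
            coef = fit(X[k - window + 1:k + 1], y[k - window + 1:k + 1])
            updates += 1
    return coef, updates

rng = np.random.default_rng(1)
n = 200
X = np.column_stack([np.ones(n), rng.standard_normal(n)])
slope = np.where(np.arange(n) < 120, 2.0, 3.0)   # slow process drift
y = slope * X[:, 1] + 0.1 * rng.standard_normal(n)
coef, updates = selective_update(X, y)
print(coef, updates)   # the slope tracks the drift with only a few refits
```

The point of the strategy is visible in `updates`: the model is refit far fewer times than the number of incoming samples, yet it still follows the drifting process.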
A Collocation Method by Moving Least Squares Applicable to European Option Pricing
Directory of Open Access Journals (Sweden)
M. Amirfakhrian
2016-05-01
Full Text Available The subject matter of the present inquiry is the numerical pricing of European options. To assess the numerical prices of European options, a scheme independent of any kind of mesh, powered instead by moving least squares (MLS) estimation, is constructed. In practical terms, the time variable is first discretized, and then an MLS-powered method is applied for the spatial approximation. Since, unlike other methods, this course of action does not rely on a mesh, it can firmly be categorized among mesh-less methods. At the end of the paper, various experiments are offered to demonstrate the efficiency and power of the introduced approach.
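A minimal sketch of the moving least squares approximation that underlies such mesh-free schemes follows, for 1D function approximation rather than the option-pricing discretization itself; the weight function, support width and polynomial degree are illustrative choices.

```python
import numpy as np

def mls_eval(x_nodes, f_nodes, x_eval, h=0.2, degree=2):
    """Moving least squares: at each evaluation point solve a small
    weighted least-squares fit of a local polynomial (mesh-free)."""
    out = np.empty_like(x_eval)
    for i, x in enumerate(x_eval):
        w = np.exp(-((x_nodes - x) / h) ** 2)       # Gaussian weight
        # Polynomial basis centered at x for good conditioning.
        P = np.vander(x_nodes - x, degree + 1, increasing=True)
        W = np.diag(w)
        coef = np.linalg.solve(P.T @ W @ P, P.T @ W @ f_nodes)
        out[i] = coef[0]                             # value at the center
    return out

x_nodes = np.linspace(0, 1, 25)       # scattered data would work equally
f_nodes = np.exp(x_nodes)
x_eval = np.linspace(0.1, 0.9, 9)
approx = mls_eval(x_nodes, f_nodes, x_eval)
print(np.abs(approx - np.exp(x_eval)).max())   # small approximation error
```

Because each evaluation point gets its own small weighted fit, nothing in the construction requires the nodes to lie on a mesh, which is the property the abstract emphasizes.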
DEFF Research Database (Denmark)
Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel;
2014-01-01
Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for 2 5-wk periods....... Eighty variables retrieved from AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80......). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3...
Scaled first-order methods for a class of large-scale constrained least square problems
Coli, Vanna Lisa; Ruggiero, Valeria; Zanni, Luca
2016-10-01
Typical applications in signal and image processing often require the numerical solution of large-scale linear least squares problems with simple constraints, related to an m × n nonnegative matrix A, m « n. When the size of A is such that the matrix is not available in memory and only the operators of the matrix-vector products involving A and AT can be computed, forward-backward methods combined with suitable accelerating techniques are very effective; in particular, the gradient projection methods can be improved by suitable step-length rules or by an extrapolation/inertial step. In this work, we propose a further acceleration technique for both schemes, based on the use of variable metrics tailored for the considered problems. The numerical effectiveness of the proposed approach is evaluated on randomly generated test problems and real data arising from a problem of fibre orientation estimation in diffusion MRI.
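A bare-bones sketch of the gradient projection scheme that such accelerations build on, written so that the matrix is touched only through products with A and A^T, as the abstract emphasizes; the step-length rule (a fixed 1/L step) and iteration count are illustrative simplifications.

```python
import numpy as np

def grad_proj_nnls(matvec, rmatvec, b, n, iters=500):
    """Gradient projection for min (1/2)||Ax - b||^2 subject to x >= 0,
    using only matrix-vector products with A (matvec) and A^T (rmatvec)."""
    # Estimate the Lipschitz constant L = ||A^T A|| by power iteration.
    v = np.ones(n)
    for _ in range(50):
        v = rmatvec(matvec(v))
        v /= np.linalg.norm(v)
    L = np.linalg.norm(rmatvec(matvec(v)))
    x = np.zeros(n)
    for _ in range(iters):
        g = rmatvec(matvec(x) - b)          # gradient A^T (Ax - b)
        x = np.maximum(x - g / L, 0.0)      # step, then project onto x >= 0
    return x

rng = np.random.default_rng(2)
A = rng.standard_normal((40, 8))
x_true = np.array([1.0, 0.0, 2.0, 0.0, 0.5, 0.0, 1.5, 0.0])
b = A @ x_true
x = grad_proj_nnls(lambda v: A @ v, lambda v: A.T @ v, b, 8)
print(np.abs(x - x_true).max())   # recovers the nonnegative solution
```

The accelerations discussed in the abstract (step-length rules, extrapolation/inertial steps, variable metrics) all modify the inner loop above without changing its matrix-free character.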
Institute of Scientific and Technical Information of China (English)
陶莉莉; 钟伟民; 罗娜; 钱锋
2012-01-01
The presence of gross errors in soft-sensor modelling data can corrupt a model's performance and give undesirable results. A weighted least squares support vector machine regression (WLS-SVM) method based on gross error detection is proposed, which combines gross error screening with adaptive weighting of the training samples. First, the 3δ rule is applied to detect and remove significant errors from the samples. Then the weights are adjusted adaptively according to the size of each sample's error, so that the influence of non-significant errors on model performance is greatly reduced. In addition, because the regularization parameter and kernel width of the least squares support vector machine strongly affect the fitting accuracy and generalization ability of the model, and are usually estimated by experience and trial and error, which is time-consuming and inaccurate, parameter selection is formulated as an optimization problem and solved with an adaptive immune genetic algorithm (AIGA). Simulation experiments show that the method models nonlinear systems well. Furthermore, the AIGA-WLS-SVM method was applied to estimate the kinetic rate constants of an industrial PX (p-xylene) oxidation model, where it achieved high prediction accuracy and satisfactory results.
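The gross-error screening (the 3δ rule, i.e. discarding residuals beyond three standard deviations) and residual-based reweighting described above can be sketched as follows. Weighted linear least squares stands in for the LS-SVM, and the weight function is one simple illustrative choice, not the paper's.

```python
import numpy as np

def robust_weighted_fit(X, y, passes=3):
    """3-sigma gross-error screening followed by residual-based
    reweighting (weighted linear LS stands in for the LS-SVM here)."""
    w = np.ones(len(y))
    for _ in range(passes):
        W = np.diag(w)
        coef = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
        r = y - X @ coef
        s = r.std()
        w = np.where(np.abs(r) > 3 * s, 0.0,        # discard gross errors
                     1.0 / (1.0 + (r / s) ** 2))    # downweight the rest
    return coef

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = 1.0 + 2.0 * X[:, 1] + 0.1 * rng.standard_normal(100)
y[::20] += 8.0                     # inject gross errors into 5 samples
coef = robust_weighted_fit(X, y)
print(coef)   # close to the true (1.0, 2.0) despite the outliers
```

After the first pass the gross errors receive zero weight, so later fits behave as if they had been deleted, which is exactly the effect the abstract describes.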
A Least Square Finite Element Technique for Transonic Flow with Shock,
1977-08-22
dimensional form. A least square finite element technique was used with a linearly interpolating polynomial to reduce the governing equation to a...partial differential equations by a system of ordinary differential equations. Using the least square finite element technique a computer program was
Speckle evolution with multiple steps of least-squares phase removal
CSIR Research Space (South Africa)
Chen, M
2011-08-01
Full Text Available The authors study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both...
Least-Squares Mirrorsymmetric Solution for Matrix Equations (AX=B, XC=D)
Institute of Scientific and Technical Information of China (English)
Fanliang Li; Xiyan Hu; Lei Zhang
2006-01-01
In this paper, the least-squares mirrorsymmetric solution for the matrix equations (AX = B, XC = D) and its optimal approximation are considered. With a special expression of mirrorsymmetric matrices, a general representation of the solution of the least-squares problem is obtained. In addition, the optimal approximate solution and some algorithms to obtain the optimal approximation are provided.
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristic on modelling accuracy and retain the advantages of the recursive PLS algorithm. To reduce the high updating frequency of the model, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxy-benz-aldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately.
Energy Technology Data Exchange (ETDEWEB)
Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov
2016-02-25
Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with areas under the precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR), based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors with AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can be further improved by using the recalculated kernel. • Top predictions can be validated by experimental data.
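The regularized least squares core of such kernel methods is compact. The sketch below uses a naive average of two kernels in place of the paper's kernel fusion, with an entirely synthetic interaction matrix; none of the names or values come from the RLS-KF work.

```python
import numpy as np

def rls_kernel_scores(K, Y, lam=1.0):
    """Regularized least squares in kernel form:
    F = K (K + lam I)^{-1} Y  turns 0/1 interaction labels into
    smoothed, continuous interaction scores."""
    n = K.shape[0]
    return K @ np.linalg.solve(K + lam * np.eye(n), Y)

rng = np.random.default_rng(4)
Z = rng.standard_normal((12, 3))                # 12 toy "drugs", 3 features
K1 = Z @ Z.T                                    # a linear kernel
d2 = ((Z[:, None] - Z[None, :]) ** 2).sum(-1)
K2 = np.exp(-d2)                                # an RBF kernel
K = 0.5 * (K1 + K2)                             # naive stand-in for fusion
Y = (rng.random((12, 4)) < 0.3).astype(float)   # toy drug-target matrix
F = rls_kernel_scores(K, Y)
print(F.shape)   # (12, 4): continuous scores in place of 0/1 labels
```

Ranking the entries of F is then the prediction step; the paper's contribution lies in constructing K by nonlinear fusion rather than the simple average used here.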
Energy Technology Data Exchange (ETDEWEB)
Li, Chun-Hua; Zhu, Xin-Jian; Cao, Guang-Yi; Sui, Sheng; Hu, Ming-Ruo [Fuel Cell Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China)
2008-01-03
This paper reports a Hammerstein modeling study of a proton exchange membrane fuel cell (PEMFC) stack using least squares support vector machines (LS-SVM). A PEMFC is a complex nonlinear, multi-input and multi-output (MIMO) system that is hard to model by traditional methodologies. Because the generalization performance of LS-SVM is independent of the dimensionality of the input data, and because of the particularly simple structure of the Hammerstein model, a MIMO SVM-ARX (linear autoregression model with exogenous input) Hammerstein model is used to represent the PEMFC stack in this paper. The linear model parameters and the static nonlinearity can be obtained simultaneously by solving a set of linear equations followed by singular value decomposition (SVD). The simulation tests demonstrate that the obtained SVM-ARX Hammerstein model can efficiently approximate the dynamic behavior of a PEMFC stack. Furthermore, based on the proposed SVM-ARX Hammerstein model, control strategy studies such as predictive control and robust control can be developed. (author)
Energy Technology Data Exchange (ETDEWEB)
Machado, A.E. de A, E-mail: aeam@rpd.ufmg.br [Laboratorio de Quimica Computacional e Modelagem Molecular (LQC-MM), Departamento de Quimica, ICEx, Universidade Federal de Minas Gerais (UFMG), Campus Universitario, Pampulha, Belo Horizonte, MG 31270-90 (Brazil); Departamento de Quimica Fundamental, Universidade Federal de Pernambuco, Recife, PE 50740-540 (Brazil); Gama, A.A. de S da; Barros Neto, B. de [Departamento de Quimica Fundamental, Universidade Federal de Pernambuco, Recife, PE 50740-540 (Brazil)
2011-09-22
Graphical abstract: PLS regression equations predict static β values for a large set of donor-acceptor organic molecules quite well, in close agreement with the available experimental data. Highlights: → PLS regression predicts static β values of 35 push-pull organic molecules. → PLS equations show correlation of β with structural-electronic parameters. → PLS regression selects best components of push-bridge-pull nonlinear compounds. → PLS analyses can be routinely used to select novel second-order materials. - Abstract: A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the HOMO-LUMO energy gap, the ground-state dipole moment, the AM1 HOMO energy values and the number of π-electrons. The regression equation predicts the static β values for the molecules investigated quite well and can be used to model new organic-based materials with enhanced nonlinear responses.
Calculation of stratum surface principal curvature based on a moving least square method
Institute of Scientific and Technical Information of China (English)
LI Guo-qing; MENG Zhao-ping; MA Feng-shan; ZHAO Hai-jun; DING De-min; LIU Qin; WANG Cheng
2008-01-01
With the east section of the Changji sag, Zhunger Basin, as a case study, both a principal curvature method and a moving least square method are elaborated. The moving least square method is introduced, for the first time, to fit a stratum surface. The results show that, using the same-degree base function, the moving least square method produces lower fitting errors than a traditional least square method, the fitted surface describes the morphological characteristics of stratum surfaces more accurately, and the principal curvature values vary within a wider range, which may be more suitable for predicting the distribution of structural fractures. The moving least square method could be useful in curved surface fitting and stratum curvature analysis.
Li, Qing-Bo; Huang, Zheng-Wei
2014-02-01
In order to improve the prediction accuracy of quantitative analysis models in the near-infrared spectroscopy of blood glucose, this paper combines the net analyte preprocessing (NAP) algorithm with radial basis function partial least squares (RBFPLS) regression to build a nonlinear modelling method suitable for human glucose measurement, named NAP-RBFPLS. First, NAP is used to preprocess the near-infrared spectra of blood glucose in order to extract the information related only to the glucose signal from the original spectra. This effectively weakens the occasional correlation between glucose changes and interference factors caused by the absorption of water, albumin, hemoglobin, fat and other blood components, changes in body temperature, drift of the measuring instruments, and changes in the measurement environment and conditions. A nonlinear quantitative analysis model is then built from the NAP-processed spectra, in order to handle the nonlinear relationship between glucose concentration and the near-infrared spectra caused by strong scattering in the body. The new method is compared with three other quantitative analysis models built on partial least squares (PLS), net analyte preprocessing partial least squares (NAP-PLS) and RBFPLS, respectively. The experimental results show that the nonlinear calibration model combining the NAP algorithm and RBFPLS regression greatly improves the prediction accuracy on the prediction sets, demonstrating that this nonlinear modelling method has practical applications in research on non-invasive detection of human glucose concentrations.
Linear parameter estimation of rational biokinetic functions
Doeswijk, T.G.; Keesman, K.J.
2009-01-01
For rational biokinetic functions such as the Michaelis-Menten equation, in general, a nonlinear least-squares method is a good estimator. However, a major drawback of a nonlinear least-squares estimator is that it can end up in a local minimum. Rearranging and linearizing rational biokinetic
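The rearranging-and-linearizing route alluded to above can be illustrated with the Michaelis-Menten equation: the Lineweaver-Burk rearrangement 1/v = 1/Vmax + (Km/Vmax)(1/s) is linear in 1/s and therefore estimable by ordinary linear least squares, which has no local minima, at the cost of distorting the error structure. All values below are synthetic.

```python
import numpy as np

# Michaelis-Menten: v = Vmax * s / (Km + s).  The Lineweaver-Burk
# rearrangement 1/v = 1/Vmax + (Km/Vmax) * (1/s) is linear in 1/s.
Vmax, Km = 2.0, 0.5
s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0])
rng = np.random.default_rng(5)
v = Vmax * s / (Km + s) * (1 + 0.01 * rng.standard_normal(len(s)))

# Linear least squares on the transformed data.
A = np.column_stack([np.ones_like(s), 1.0 / s])
intercept, slope = np.linalg.lstsq(A, 1.0 / v, rcond=None)[0]
Vmax_hat = 1.0 / intercept
Km_hat = slope * Vmax_hat
print(Vmax_hat, Km_hat)   # close to the true (2.0, 0.5)
```

The transformation amplifies measurement noise at low substrate concentrations, which is the classical caveat behind preferring the nonlinear least-squares estimator when it converges.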
Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Almeida, Túlia de Souza Botelho; Lourenço, Felipe Rebello
2016-05-01
Microbiological assays are widely used to estimate the relative potencies of antibiotics in order to guarantee the efficacy, safety, and quality of drug products. Despite the advantages of turbidimetric bioassays compared with other methods, they have limitations concerning the linearity and range of the dose-response curve determination. Here, we propose using partial least squares (PLS) regression to overcome these limitations and to improve the prediction of the relative potencies of antibiotics. Kinetic-reading microplate turbidimetric bioassays for apramycin and vancomycin were performed using Escherichia coli (ATCC 8739) and Bacillus subtilis (ATCC 6633), respectively. Microbial growth was measured as absorbance for up to 180 and 300 min for the apramycin and vancomycin turbidimetric bioassays, respectively. Conventional dose-response curves (absorbance or area under the microbial growth curve vs. log of antibiotic concentration) showed significant regression; however, there were significant deviations from linearity, so they could not be used for relative potency estimation. PLS regression allowed us to construct a predictive model for estimating the relative potencies of apramycin and vancomycin without over-fitting, and it improved the linear range of the turbidimetric bioassay. In addition, PLS regression provided predictions of relative potencies equivalent to those obtained from official agar diffusion methods. Therefore, we conclude that PLS regression may be used to estimate the relative potencies of antibiotics with significant advantages over conventional dose-response curve determination.
An Effective Hybrid Artificial Bee Colony Algorithm for Nonnegative Linear Least Squares Problems
Directory of Open Access Journals (Sweden)
Xiangyu Kong
2014-07-01
Full Text Available An effective hybrid artificial bee colony algorithm is proposed in this paper for nonnegative linear least squares problems. To further improve the performance of the algorithm, an orthogonal initialization method is employed to generate the initial swarm. Furthermore, to balance the exploration and exploitation abilities, a new search mechanism is designed. The performance of this algorithm is verified using 27 benchmark functions and 5 nonnegative linear least squares test problems, and comparisons are given between the proposed algorithm and other swarm intelligence algorithms. Numerical results demonstrate that the proposed algorithm displays a high performance compared with other algorithms for global optimization problems and nonnegative linear least squares problems.
A least squares finite element scheme for transonic flow around harmonically oscillating airfoils
Cox, C. L.; Fix, G. J.; Gunzburger, M. D.
1983-01-01
The present investigation shows that a finite element scheme with a weighted least squares variational principle is applicable to the problem of transonic flow around a harmonically oscillating airfoil. For the flat plate case, numerical results compare favorably with the exact solution. The numerical results obtained for the transonic problem, for which an exact solution is not known, have the characteristics of known experimental results. It is demonstrated that the performance of the employed numerical method is independent of equation type (elliptic or hyperbolic) and frequency. The weighted least squares principle allows the appropriate modeling of singularities, which is not possible with ordinary least squares.
Multilevel solvers of first-order system least-squares for Stokes equations
Energy Technology Data Exchange (ETDEWEB)
Lai, Chen-Yao G. [National Chung Cheng Univ., Chia-Yi (Taiwan, Province of China)
1996-12-31
Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined as the sum of the L²-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.
Dutta, Gaurav
2013-08-20
Attenuation leads to distortion of amplitude and phase of seismic waves propagating inside the earth. Conventional acoustic and least-squares reverse time migration do not account for this distortion which leads to defocusing of migration images in highly attenuative geological environments. To account for this distortion, we propose to use the visco-acoustic wave equation for least-squares reverse time migration. Numerical tests on synthetic data show that least-squares reverse time migration with the visco-acoustic wave equation corrects for this distortion and produces images with better balanced amplitudes compared to the conventional approach. © 2013 SEG.
Hybrid partial least squares and neural network approach for short-term electrical load forecasting
Institute of Scientific and Technical Information of China (English)
Shukang YANG; Ming LU; Huifeng XUE
2008-01-01
Intelligent systems and methods such as the neural network (NN) are usually used in electric power systems for short-term electrical load forecasting. However, a vast amount of electrical load data is often redundant, and linearly or nonlinearly correlated with each other. Highly correlated input data can result in erroneous prediction results produced by an NN model. Besides this, the determination of the topological structure of an NN model has always been a problem for designers. This paper presents a new artificial intelligence hybrid procedure for next-day electric load forecasting based on partial least squares (PLS) and NN. PLS is used for the compression of the data input space, and helps to determine the structure of the NN model. The hybrid PLS-NN model can be used to predict hourly electric load on weekdays and weekends. The advantage of this methodology is that the hybrid model can provide faster convergence and more precise prediction results in comparison with the abductive networks algorithm. Extensive testing on the electrical load data of the Puget power utility in the USA confirms the validity of the proposed approach.
Fishery landing forecasting using EMD-based least square support vector machine models
Shabri, Ani
2015-05-01
In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and least squares support vector machines (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. The hybrid is formulated specifically for modeling fishery landings, whose time series are highly nonlinear, non-stationary and seasonal, and can hardly be properly modelled and accurately forecasted by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally the forecast of fishery landing is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than autoregressive integrated moving average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.
Institute of Scientific and Technical Information of China (English)
Fan Youping; Chen Yunping; Sun Wansheng; Li Yu
2005-01-01
As a new type of learning machine developed on the basis of statistical learning theory, the support vector machine (SVM) plays an important role in knowledge discovery and knowledge updating by constructing a nonlinear optimal classifier. However, training an SVM requires solving a quadratic program with inequality constraints, which becomes computationally difficult as the number of training samples grows. Besides this, the standard SVM is incapable of handling multi-classification. To overcome these bottlenecks, a training algorithm is presented that converts the quadratic programming problem into a linear system of equations composed of equality constraints, by adopting the least squares SVM (LS-SVM) and introducing a modifying variable which changes the inequality constraints into equality constraints, thereby simplifying the calculation. With regard to multi-classification, an LS-SVM applicable to multi-classification is derived. Finally, the efficiency of the algorithm is checked by using the standard circle-in-the-square and two-spirals benchmarks to measure the performance of the classifier.
Least-squares finite-element method for shallow-water equations with source terms
Institute of Scientific and Technical Information of China (English)
Shin-Jye Liang; Tai-Wen Hsu
2009-01-01
Numerical solution of the shallow-water equations (SWE) has been a challenging task because of their nonlinear hyperbolic nature, which admits discontinuous solutions, and the need to satisfy the C-property. The presence of source terms in the momentum equations, such as the bottom slope and bed friction, compounds the difficulties further. In this paper, a least-squares finite-element method for the space discretization and the θ-method for the time integration is developed for the 2D non-conservative SWE including the source terms. Advantages of the method include: the source terms can be approximated easily with interpolation functions; no upwind scheme is needed; and the resulting system of equations is symmetric and positive-definite and can therefore be solved efficiently with the conjugate gradient method. The method is applied to steady and unsteady flows, subcritical and transcritical flow over a bump, 1D and 2D circular dam-breaks, waves past a circular cylinder, and waves past a hump. Computed results show good C-property and conservation properties and compare well with exact solutions and other numerical results for flows with weak and mild gradient changes, but lead to inaccurate predictions for flows with strong gradient changes and discontinuities.
Partial least squares regression for predicting economic loss of vegetables caused by acid rain
Institute of Scientific and Technical Information of China (English)
WANG Ju; MENG He; DONG De-ming; LI Wei; FANG Chun-sheng
2009-01-01
To predict the economic loss of crops caused by acid rain, we used partial least squares (PLS) regression to build a model with a single dependent variable: the economic loss calculated from the decrease in yield, related to the pH value and the levels of Ca2+, NH4+, Na+, K+, Mg2+, SO42-, NO3-, and Cl- in acid rain. We selected vegetables that are sensitive to acid rain as the sample crops, and collected 12 groups of data, of which 8 groups were used for modeling and 4 groups for testing. Using the cross-validation method to evaluate the performance of this prediction model indicates that the optimum number of principal components is 3, determined by the minimum of the prediction residual error sum of squares, and the prediction error of the regression equation ranges from -2.25% to 4.32%. The model predicted that the economic loss of vegetables from acid rain is negatively correlated with pH and the concentrations of NH4+, SO42-, NO3-, and Cl- in the rain, and positively correlated with the concentrations of Ca2+, Na+, K+ and Mg2+. The precision of the model may be improved if the nonlinearity of the original data is addressed.
A modified Generalized Least Squares method for large scale nuclear data evaluation
Schnabel, Georg; Leeb, Helmut
2017-01-01
Nuclear data evaluation aims to provide estimates and uncertainties, in the form of covariance matrices, of cross sections and related quantities. Many practitioners use the Generalized Least Squares (GLS) formulas to combine experimental data and results of model calculations in order to determine reliable estimates and covariance matrices. A prerequisite for applying the GLS formulas is the construction of a prior covariance matrix for the observables from a set of model calculations. Modern nuclear model codes are able to provide predictions for a large number of observables. However, the inclusion of all observables may lead to a prior covariance matrix of intractable size. Therefore, we introduce mathematically equivalent versions of the GLS formulas that avoid the construction of the prior covariance matrix. Experimental data can be incrementally incorporated into the evaluation process, hence there is no upper limit on their amount. We demonstrate the modified GLS method in a tentative evaluation involving about three million observables using the code TALYS. The revised scheme is well suited as a building block of a database application providing evaluated nuclear data. Updating with new experimental data is feasible, and users can query estimates and correlations of arbitrary subsets of the observables stored in the database.
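The incremental idea, folding experimental data into the evaluation one block at a time instead of building one huge prior covariance matrix for everything at once, can be sketched with the standard sequential (Kalman-style) form of the GLS update. The dimensions and data below are toy values, not the TALYS setting of the abstract.

```python
import numpy as np

def gls_update(x, P, H, y, R):
    """One sequential GLS (Kalman-style) update: fold a new data block
    y = H x + noise (noise covariance R) into the current estimate x
    with covariance P."""
    S = H @ P @ H.T + R                    # innovation covariance
    K = P @ H.T @ np.linalg.inv(S)         # gain
    x = x + K @ (y - H @ x)
    P = P - K @ H @ P
    return x, P

rng = np.random.default_rng(6)
x_true = np.array([1.0, -2.0, 0.5])        # toy "model parameters"
x = np.zeros(3)
P = 100.0 * np.eye(3)                      # vague prior
for _ in range(40):                        # data arrive one block at a time
    H = rng.standard_normal((2, 3))
    y = H @ x_true + 0.1 * rng.standard_normal(2)
    x, P = gls_update(x, P, H, y, 0.01 * np.eye(2))
print(x)   # converges toward x_true; no batch covariance is ever built
```

Each update inverts only a matrix the size of the incoming data block, which is the property that removes the upper limit on the amount of experimental data.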
Noise and nonlinear estimation with optimal schemes in DTI.
Özcan, Alpay
2010-11-01
In general, the estimation of diffusion properties in diffusion tensor imaging (DTI) experiments is accomplished via least squares estimation (LSE). The technique requires applying the logarithm to the measurements, which causes poor propagation of errors. Moreover, the way noise enters the equations invalidates the least squares estimate as the best linear unbiased estimate. Nonlinear estimation (NE), despite its longer computation time, does not suffer from either of these problems. However, all of the conditions and optimization methods developed in the past are based on the coefficient matrix obtained in an LSE setup. In this article, NE for DTI is analyzed to demonstrate that any result obtained relatively easily in a linear-algebra setup about the coefficient matrix can be applied to the more complicated NE framework. The data, obtained using non-optimal and optimized diffusion gradient schemes, are processed with NE. In comparison with LSE, the results show significant improvements, especially for the optimization criterion. However, NE does not resolve the existing conflicts and ambiguities displayed with LSE methods.
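The contrast between log-linear LSE and direct nonlinear estimation can be shown on a simplified mono-exponential decay, a stand-in for the full tensor model (this is not the article's DTI pipeline, just the underlying fitting issue): taking the logarithm distorts the noise at low signal levels, which fitting the raw signal avoids.

```python
# Fit S = S0 * exp(-b * d) two ways: (a) log-linear LSE, (b) nonlinear
# estimation on the raw signal. The log transform amplifies noise where
# the signal is small.
import numpy as np
from scipy.optimize import curve_fit

rng = np.random.default_rng(2)
b = np.linspace(0, 3000, 12)              # b-values, s/mm^2
S0_true, d_true = 1.0, 1.0e-3             # true amplitude and diffusivity
S = S0_true * np.exp(-b * d_true) + 0.01 * rng.normal(size=b.size)
S = np.clip(S, 1e-6, None)                # keep the logarithm defined

# (a) log-linear LSE: log S = log S0 - b * d
A = np.column_stack([np.ones_like(b), -b])
coef, *_ = np.linalg.lstsq(A, np.log(S), rcond=None)
d_lin = coef[1]

# (b) nonlinear estimation on the untransformed measurements
popt, _ = curve_fit(lambda b, S0, d: S0 * np.exp(-b * d), b, S,
                    p0=(1.0, 1e-3))
d_nl = popt[1]
print(d_lin, d_nl)
```

In the tensor case the unknown `d` becomes the six independent entries of the diffusion tensor, but the structure of the comparison is the same.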
8th International Conference on Partial Least Squares and Related Methods
Vinzi, Vincenzo; Russolillo, Giorgio; Saporta, Gilbert; Trinchera, Laura
2016-01-01
This volume presents state-of-the-art theory, new developments, and important applications of Partial Least Squares (PLS) methods. The text begins with invited communications from current leaders in the field, who cover the history of PLS, an overview of methodological issues, and recent advances in regression and multi-block approaches. The rest of the volume comprises selected, reviewed contributions from the 8th International Conference on Partial Least Squares and Related Methods held in Paris, France, on 26-28 May 2014. They are organized in four coherent sections: 1) new developments in genomics and brain imaging, 2) new and alternative methods for multi-table and path analysis, 3) advances in partial least squares regression (PLSR), and 4) partial least squares path modeling (PLS-PM) breakthroughs and applications. PLS methods are very versatile and are now used in areas as diverse as engineering, life science, sociology, psychology, brain imaging, genomics, and business, among both academics ...
Iterative least-squares solvers for the Navier-Stokes equations
Energy Technology Data Exchange (ETDEWEB)
Bochev, P. [Univ. of Texas, Arlington, TX (United States)
1996-12-31
In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context, least-squares methods offer significant theoretical and practical advantages in algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.
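The symmetric-positive-definite property claimed above can be seen in miniature: a least-squares discretization leads to normal equations A^T A x = A^T b, which are SPD and therefore amenable to iterative solvers such as conjugate gradients. The sketch below uses a random full-column-rank matrix as a stand-in discrete operator, not an actual Navier-Stokes discretization.

```python
# Least-squares normal equations are SPD, so conjugate gradients applies.
import numpy as np
from scipy.sparse.linalg import cg

rng = np.random.default_rng(3)
A = rng.normal(size=(50, 20))     # stand-in for a discrete operator
b = rng.normal(size=50)

N = A.T @ A                       # normal-equations matrix
rhs = A.T @ b
assert np.allclose(N, N.T)                    # symmetric
assert np.all(np.linalg.eigvalsh(N) > 0)      # positive definite

x_cg, info = cg(N, rhs)           # conjugate gradient iteration
assert info == 0                  # converged
# x_cg solves the least-squares problem min ||A x - b||_2
residual = np.linalg.norm(N @ x_cg - rhs)
assert residual < 1e-3 * np.linalg.norm(rhs)
```

For a genuine least-squares finite element method, `A` would be the assembled discrete first-order operator and the SPD structure carries over unchanged.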
Least-squares finite element discretizations of neutron transport equations in 3 dimensions
Energy Technology Data Exchange (ETDEWEB)
Manteuffel, T.A. [Univ. of Colorado, Boulder, CO (United States); Ressel, K.J. [Interdisciplinary Project Center for Supercomputing, Zurich (Switzerland); Starkes, G. [Universitaet Karlsruhe (Germany)
1996-12-31
The least-squares finite element framework for the neutron transport equation introduced earlier is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional, augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.
ON STABLE PERTURBATIONS OF THE STIFFLY WEIGHTED PSEUDOINVERSE AND WEIGHTED LEAST SQUARES PROBLEM
Institute of Scientific and Technical Information of China (English)
Mu-sheng Wei
2005-01-01
In this paper we study perturbations of the stiffly weighted pseudoinverse (W^(1/2)A)^+ W^(1/2) and the related stiffly weighted least squares problem, where both the matrices A and W are given, with W positive diagonal and severely stiff. We show that the perturbations to the stiffly weighted pseudoinverse and the related stiffly weighted least squares problem are stable if and only if the perturbed matrices Â = A + δA satisfy several row-rank-preserving conditions.
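The object studied above can be computed numerically in a small sketch: the stiffly weighted pseudoinverse (W^(1/2)A)^+ W^(1/2) applied to a right-hand side gives the minimizer of the weighted residual ||W^(1/2)(Ax - b)||. The weight values below are arbitrary, chosen only to make W severely stiff.

```python
# Stiffly weighted least squares via the weighted pseudoinverse.
import numpy as np

rng = np.random.default_rng(4)
A = rng.normal(size=(6, 3))
w = np.array([1e8, 1e8, 1.0, 1.0, 1e-8, 1e-8])   # severely stiff weights
W = np.diag(w)
W_sqrt = np.diag(np.sqrt(w))

pinv_w = np.linalg.pinv(W_sqrt @ A) @ W_sqrt      # (W^(1/2) A)^+ W^(1/2)
b = rng.normal(size=6)
x = pinv_w @ b                                     # weighted LS solution

# x minimizes ||W^(1/2)(A x - b)||_2, so it satisfies the weighted
# normal equations A^T W (A x - b) = 0 (up to roundoff)
grad = A.T @ (W @ (A @ x - b))
assert np.linalg.norm(grad) < 1e-4 * np.linalg.norm(A.T @ (W @ b))
```

The paper's question is how `x` behaves when `A` is perturbed while the stiff weights stay fixed; this snippet only sets up the unperturbed problem.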
Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs
Energy Technology Data Exchange (ETDEWEB)
Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
2017-01-15
This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed derivatives for several chaotic ODEs and PDEs. The development in this paper aims to simplify the Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.
Institute of Scientific and Technical Information of China (English)
ZHANG Liqing; WU Xiaohua
2005-01-01
Computer-assisted partial least squares regression is introduced to simultaneously determine the contents of Deoxyschizandin, Schisandrin, and γ-Schisandrin in the extracted solution of wuweizi. Regression analysis of the experimental results shows that the average recoveries of the components all lie in the range from 98.9% to 110.3%, which means that partial least squares regression spectrophotometry can circumvent the overlapping of the absorption spectra of multiple components, so that satisfactory results can be obtained without any sample pre-separation.