Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis
DEFF Research Database (Denmark)
Nielsen, Allan Aasbjerg
2007-01-01
This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy, including land surveying and satellite positioning applications; in these fields regression is often termed adjustment. The note also contains a couple of typical land surveying and satellite positioning application examples. In these application areas we are typically interested in the parameters of the model, typically 2- or 3-D positions, and not in predictive modelling, which is often the main concern in other regression analysis applications. Adjustment is often used to obtain...
Hays, J. R.
1969-01-01
Lumped parametric system models are simplified and computationally advantageous in the frequency domain of linear systems. A nonlinear least squares computer program finds the least-squares best estimate for any number of parameters in an arbitrarily complicated model.
Nonlinear Least Squares for Inverse Problems
Chavent, Guy
2009-01-01
Presents an introduction to the least squares resolution of nonlinear inverse problems. The title develops a geometrical theory for analyzing nonlinear least squares (NLS) problems with respect to their quadratic wellposedness, that is, both wellposedness and optimizability.
Least-Squares, Continuous Sensitivity Analysis for Nonlinear Fluid-Structure Interaction
2009-08-20
Collinearity in Least-Squares Analysis
de Levie, Robert
2012-01-01
How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…
Augmented Classical Least Squares Multivariate Spectral Analysis
Energy Technology Data Exchange (ETDEWEB)
Haaland, David M. (Albuquerque, NM); Melgaard, David K. (Albuquerque, NM)
2005-01-11
A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.
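The CLS model at the core of ACLS can be sketched in a few lines. The sketch below is plain NumPy with function names of our own choosing, and the augmentation step itself is omitted: it only shows the underlying CLS calibration of pure-component spectra from known concentrations, and prediction of concentrations for a new spectrum.

```python
import numpy as np

def cls_calibrate(C, A):
    """Classical least squares calibration: spectra A (samples x channels)
    are modeled as A = C @ K, with known concentrations C (samples x
    components); K holds the estimated pure-component spectra."""
    K = np.linalg.lstsq(C, A, rcond=None)[0]
    return K

def cls_predict(K, a):
    """Predict component concentrations for one spectrum a by solving
    a = K.T @ c in the least squares sense."""
    c = np.linalg.lstsq(K.T, a, rcond=None)[0]
    return c
```

ACLS would augment the columns of K with spectral shapes derived from calibration residuals before prediction; the prediction step is unchanged.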
A NEW SOLUTION MODEL OF NONLINEAR DYNAMIC LEAST SQUARE ADJUSTMENT
Institute of Scientific and Technical Information of China (English)
TAO Hua-xue; GUO Jin-yun
2000-01-01
Nonlinear least squares adjustment is a key topic of study in many technical fields. This paper studies a derivative-free solution to the nonlinear dynamic least squares adjustment and puts forward a new algorithm model and its solution model. The method involves little computational load and is simple, and it opens up a theoretical approach to solving the nonlinear dynamic least squares adjustment.
A Note on Separable Nonlinear Least Squares Problem
Gharibi, Wajeb
2011-01-01
The separable nonlinear least squares (SNLS) problem is a special class of nonlinear least squares (NLS) problems whose objective function is a mixture of linear and nonlinear functions. It has applications in many different areas, especially in operations research and computer science, and such problems are difficult to solve under the infinity-norm metric. In this paper, we give a short note on the separable nonlinear least squares problem and the unseparated scheme for NLS, and propose an algorithm for solving the mixed linear-nonlinear minimization problem, which results in solving a series of separable least squares problems.
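A minimal illustration of the separable structure, assuming a single-exponential model y ≈ c·exp(-kt): for each trial value of the nonlinear parameter k, the optimal linear coefficient c has a closed form, so only k needs to be searched. This is a generic variable-projection-style sketch, not the algorithm proposed in the note.

```python
import numpy as np

def varpro_fit(t, y, k_grid):
    """Separable NLS for y ~ c * exp(-k t): eliminate the linear
    coefficient c in closed form, then search over the nonlinear
    parameter k only (here by a simple grid scan)."""
    t, y = np.asarray(t, float), np.asarray(y, float)
    best = None
    for k in k_grid:
        phi = np.exp(-k * t)                    # basis column for this k
        c = np.dot(phi, y) / np.dot(phi, phi)   # linear LS in closed form
        r = y - c * phi
        sse = np.dot(r, r)
        if best is None or sse < best[0]:
            best = (sse, k, c)
    return best[1], best[2]
```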
Deformation analysis with Total Least Squares
Directory of Open Access Journals (Sweden)
M. Acar
2006-01-01
Deformation analysis is one of the main research fields in geodesy. The deformation analysis process comprises measurement and analysis phases, and measurements can be collected using several techniques. The output of the evaluation of the measurements is mainly point positions. In the deformation analysis phase, the coordinate changes in the point positions are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation in which the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally a least squares (LS) technique is used for the transformation procedure. An alternative methodology is total least squares (TLS), a comparatively new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out by LS and TLS, respectively. The data used in this study were collected by GPS in a landslide area near Istanbul, and the results obtained from the two approaches are compared.
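For the 2-D case, classical LS estimation of a 4-parameter Helmert (similarity) transformation reduces to a single linear system. The sketch below is our own NumPy illustration of that reduction, not the study's software (the study itself uses 3-D transformations and TLS).

```python
import numpy as np

def helmert_2d(src, dst):
    """Estimate a 4-parameter 2-D Helmert (similarity) transform
    dst ~ [[a, -b], [b, a]] @ src + [tx, ty] by ordinary least squares.
    Each point contributes two observation equations:
        x' = a*x - b*y + tx,   y' = b*x + a*y + ty."""
    src, dst = np.asarray(src, float), np.asarray(dst, float)
    n = len(src)
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = src[:, 0]; A[0::2, 1] = -src[:, 1]; A[0::2, 2] = 1.0
    A[1::2, 0] = src[:, 1]; A[1::2, 1] =  src[:, 0]; A[1::2, 3] = 1.0
    obs = dst.reshape(-1)
    a, b, tx, ty = np.linalg.lstsq(A, obs, rcond=None)[0]
    return a, b, tx, ty
```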
Least Squares Moving-Window Spectral Analysis.
Lee, Young Jong
2017-01-01
Least squares regression is proposed as a moving-window method for analysis of a series of spectra acquired as a function of an external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high-frequency noise.
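The core of a moving-window least squares derivative is a straight-line fit over each window, which handles nonuniform spacing naturally. The function below is our own minimal sketch of that idea, not the paper's implementation.

```python
import numpy as np

def lsmw_derivative(x, y, half_width):
    """Estimate dy/dx at each interior point by fitting a straight line
    (ordinary least squares) over a moving window of neighbouring points.
    Works for nonuniformly spaced x, unlike classical Savitzky-Golay."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    n = len(x)
    slopes = np.full(n, np.nan)            # edges left undefined
    for i in range(half_width, n - half_width):
        xw = x[i - half_width : i + half_width + 1]
        yw = y[i - half_width : i + half_width + 1]
        xc = xw - xw.mean()
        # closed-form LS slope over the window
        slopes[i] = np.dot(xc, yw - yw.mean()) / np.dot(xc, xc)
    return slopes
```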
A Genetic Algorithm Approach to Nonlinear Least Squares Estimation
Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.
2004-01-01
A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…
An Algorithm to Solve Separable Nonlinear Least Square Problem
Directory of Open Access Journals (Sweden)
Wajeb Gharibi
2013-07-01
The separable nonlinear least squares (SNLS) problem is a special class of nonlinear least squares (NLS) problems whose objective function is a mixture of linear and nonlinear functions. SNLS has many applications in several areas, especially in operations research and computer science. Problems in the NLS class are hard to solve under the infinity-norm metric. This paper gives a brief explanation of the SNLS problem and offers a Lagrangian-based algorithm for solving the mixed linear-nonlinear minimization problem.
A Hybrid Method for Nonlinear Least Squares Problems
Institute of Scientific and Technical Information of China (English)
Zhongyi Liu; Linping Sun
2007-01-01
A negative curvature method is applied to nonlinear least squares problems with indefinite Hessian approximation matrices. With the special structure of the method, a new switch is proposed to form a hybrid method. Numerical experiments show that this method is feasible and effective for zero-residual, small-residual and large-residual problems.
Multisplitting for linear, least squares and nonlinear problems
Energy Technology Data Exchange (ETDEWEB)
Renaut, R.
1996-12-31
In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and of nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer, University of Wuppertal, on the linear problems and with Hans Mittelmann, Arizona State University, on the nonlinear problems.
Kernel Partial Least Squares for Nonlinear Regression and Discrimination
Rosipal, Roman; Clancy, Daniel (Technical Monitor)
2002-01-01
This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.
Cao, Hui; Li, Yao-Jiang; Zhou, Yan; Wang, Yan-Xia
2014-11-01
To deal with the nonlinear characteristics of spectral data from thermal power plant flue gas, a nonlinear partial least squares (PLS) analysis method with an internal model based on a neural network is adopted in this paper. The latent variables of the independent and dependent variables are first extracted by PLS regression, and they are then used as the inputs and outputs, respectively, of a neural network to build the nonlinear internal model through training. For flue gas spectra from a thermal power plant, PLS, nonlinear PLS with a back-propagation neural network internal model (BP-NPLS), nonlinear PLS with a radial basis function neural network internal model (RBF-NPLS), and nonlinear PLS with an adaptive fuzzy inference system internal model (ANFIS-NPLS) are compared. The root mean square error of prediction (RMSEP) for sulfur dioxide of BP-NPLS, RBF-NPLS and ANFIS-NPLS is reduced by 16.96%, 16.60% and 19.55%, respectively, relative to PLS. The RMSEP for nitric oxide is reduced by 8.60%, 8.47% and 10.09%, respectively, and the RMSEP for nitrogen dioxide by 2.11%, 3.91% and 3.97%, respectively. Experimental results show that nonlinear PLS is more suitable than PLS for the quantitative analysis of flue gas. Moreover, by using neural network functions that can closely approximate nonlinear characteristics, the nonlinear PLS methods with internal models presented in this paper have good predictive capability and robustness, and to a certain extent overcome the limitations of nonlinear PLS methods with other internal models, such as polynomial and spline functions. ANFIS-NPLS has the best performance, since the internal model of an adaptive fuzzy inference system has the ability to learn more and reduce the residuals effectively. Hence, ANFIS-NPLS is an...
Simple procedures for imposing constraints for nonlinear least squares optimization
Energy Technology Data Exchange (ETDEWEB)
Carvalho, R. [Petrobras, Rio de Janeiro (Brazil); Thompson, L.G.; Redner, R.; Reynolds, A.C. [Univ. of Tulsa, OK (United States)
1995-12-31
Nonlinear regression methods (least squares, least absolute value, etc.) have gained acceptance as practical technology for analyzing well-test pressure data. Even for relatively simple problems, however, commonly used algorithms sometimes converge to nonfeasible parameter estimates (e.g., negative permeabilities), resulting in a failure of the method. The primary objective of this work is to present a new method for imaging the objective function across all boundaries imposed to satisfy physical constraints on the parameters. The algorithm is extremely simple and reliable. The method uses an equivalent unconstrained objective function to impose the physical constraints required in the original problem. Thus, it can be used with standard unconstrained least squares software without reprogramming, and it provides a viable alternative to penalty functions for imposing constraints when estimating well and reservoir parameters from pressure transient data. The authors also present two methods of implementing the penalty function approach for imposing parameter constraints in a general unconstrained least squares algorithm. Based on their experience, the new imaging method always converges to a feasible solution in less time than the penalty function methods.
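The paper's imaging method is not reproduced here, but the general idea of enforcing a physical constraint through an equivalent unconstrained objective can be illustrated with the standard log-transform trick: a positivity-constrained rate k is optimized through the free variable u = log k, so every value the optimizer tries is feasible. Function name and the scalar search are ours.

```python
import math

def fit_positive_rate(t, y, iters=60):
    """Fit y ~ exp(-k t) with k constrained positive by optimizing the
    unconstrained variable u = log(k); any u maps to a feasible k > 0.
    A plain ternary search minimizes the sum of squared residuals in u,
    assuming the objective is unimodal on the bracket."""
    def sse(u):
        k = math.exp(u)
        return sum((yi - math.exp(-k * ti)) ** 2 for ti, yi in zip(t, y))
    lo, hi = -10.0, 5.0                    # brackets k in [e^-10, e^5]
    for _ in range(iters):
        m1 = lo + (hi - lo) / 3
        m2 = hi - (hi - lo) / 3
        if sse(m1) < sse(m2):
            hi = m2
        else:
            lo = m1
    return math.exp((lo + hi) / 2)
```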
Institute of Scientific and Technical Information of China (English)
Xin LIU; Guo WEI; Jin-wei SUN; Dan LIU
2009-01-01
Least squares support vector machines (LS-SVMs) are modified support vector machines (SVMs) that involve equality constraints and work with a least squares cost function, which simplifies the optimization procedure. In this paper, a novel training algorithm based on total least squares (TLS) for an LS-SVM is presented and applied to multifunctional sensor signal reconstruction. For three different nonlinearities of a multifunctional sensor model, the reconstruction accuracies of the input signals are 0.00136%, 0.03184% and 0.50480%, respectively. The experimental results demonstrate the higher reliability and accuracy of the proposed method for multifunctional sensor signal reconstruction compared with the original LS-SVM training algorithm, and verify the feasibility and stability of the proposed method.
Robust Homography Estimation Based on Nonlinear Least Squares Optimization
Directory of Open Access Journals (Sweden)
Wei Mou
2014-01-01
The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but it is unlikely that perfect matching will always be achieved. To deal with this problem, we propose a nonlinear least squares optimization approach to compute the homography such that false matches have little or no effect on the computed homography. Unlike standard homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be identified while the homography is computed. Experiments show that the proposed approach can perform well even in the presence of a large number of outliers.
Nonlinear least-squares data fitting in Excel spreadsheets.
Kemmer, Gerdi; Keller, Sandro
2010-02-01
We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
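The same Solver workflow (compute residuals, sum their squares, minimize over the parameters) can be reproduced outside a spreadsheet. Below is a hedged sketch for the Michaelis-Menten example, using Gauss-Newton with step halving in place of Solver's optimizer; the function name and starting-value heuristics are our own.

```python
import numpy as np

def fit_michaelis_menten(s, v, steps=50):
    """Minimize the sum of squared residuals of v = Vmax * s / (Km + s)
    by Gauss-Newton iteration with simple backtracking (step halving),
    keeping Km positive."""
    s, v = np.asarray(s, float), np.asarray(v, float)
    p = np.array([v.max(), np.median(s)])     # crude but serviceable start
    for _ in range(steps):
        r = v - p[0] * s / (p[1] + s)
        J = np.column_stack([s / (p[1] + s),                 # d model/d Vmax
                             -p[0] * s / (p[1] + s) ** 2])   # d model/d Km
        step = np.linalg.lstsq(J, r, rcond=None)[0]
        while True:                            # backtrack to keep SSE falling
            q = p + step
            if q[1] > 0:
                rq = v - q[0] * s / (q[1] + s)
                if rq @ rq <= r @ r:
                    p = q
                    break
            step = step / 2
            if np.abs(step).max() < 1e-12:
                break
    return p[0], p[1]
```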
Non-linear Least Squares Fitting in IDL with MPFIT
Markwardt, Craig B
2009-01-01
MPFIT is a port to IDL of the non-linear least squares fitting program MINPACK-1. MPFIT inherits the robustness of the original FORTRAN version of MINPACK-1, but is optimized for performance and convenience in IDL. In addition to the main fitting engine, MPFIT, several specialized functions are provided to fit 1-D curves and 2-D images; 1-D and 2-D peaks; and interactive fitting from the IDL command line. Several constraints can be applied to model parameters, including fixed constraints, simple bounding constraints, and "tying" the value to another parameter. Several data weighting methods are allowed, and the parameter covariance matrix is computed. Extensive diagnostic capabilities are available during the fit, via a call-back subroutine, and after the fit is complete. Several different forms of documentation are provided, including a tutorial, reference pages, and frequently asked questions. The package has been translated to C and Python as well. The full IDL and C packages can be found at http://purl.co...
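At the heart of MINPACK-1, and hence MPFIT, is the Levenberg-Marquardt algorithm. The loop below is a deliberately minimal version of the damped iteration, illustrated on the model y = a·exp(bx); it is not MPFIT's actual implementation, which adds parameter scaling, bounds, tying, and careful termination tests.

```python
import numpy as np

def lm_fit(x, y, p0, steps=100):
    """Minimal Levenberg-Marquardt fit of y = a * exp(b * x).
    The normal equations are damped by lam * diag(J^T J); lam shrinks
    after accepted steps and grows after rejected ones."""
    p = np.array(p0, float)
    lam = 1e-3
    def resid(p):
        return y - p[0] * np.exp(p[1] * x)
    r = resid(p)
    for _ in range(steps):
        e = np.exp(p[1] * x)
        J = np.column_stack([e, p[0] * x * e])   # d model / d(a, b)
        H = J.T @ J
        trial = p + np.linalg.solve(H + lam * np.diag(np.diag(H)), J.T @ r)
        rt = resid(trial)
        if rt @ rt < r @ r:        # accept: trust the linear model more
            p, r, lam = trial, rt, lam * 0.3
        else:                      # reject: damp harder
            lam *= 10.0
    return p
```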
Feng, Jie; Wang, Zhe; Li, Lizhi; Li, Zheng; Ni, Weidou
2013-03-01
A nonlinearized multivariate dominant factor-based partial least-squares (PLS) model was applied to coal elemental concentration measurement. For C concentration determination in bituminous coal, the intensities of multiple characteristic lines of the main elements in coal were applied to construct a comprehensive dominant factor that would provide main concentration results. A secondary PLS thereafter applied would further correct the model results by using the entire spectral information. In the dominant factor extraction, nonlinear transformation of line intensities (based on physical mechanisms) was embedded in the linear PLS to describe nonlinear self-absorption and inter-element interference more effectively and accurately. According to the empirical expression of self-absorption and Taylor expansion, nonlinear transformations of atomic and ionic line intensities of C were utilized to model self-absorption. Then, the line intensities of other elements, O and N, were taken into account for inter-element interference, considering the possible recombination of C with O and N particles. The specialty of coal analysis by using laser-induced breakdown spectroscopy (LIBS) was also discussed and considered in the multivariate dominant factor construction. The proposed model achieved a much better prediction performance than conventional PLS. Compared with our previous, already improved dominant factor-based PLS model, the present PLS model obtained the same calibration quality while decreasing the root mean square error of prediction (RMSEP) from 4.47 to 3.77%. Furthermore, with the leave-one-out cross-validation and L-curve methods, which avoid the overfitting issue in determining the number of principal components instead of minimum RMSEP criteria, the present PLS model also showed better performance for different splits of calibration and prediction samples, proving the robustness of the present PLS model.
Distributed Recursive Least-Squares: Stability and Performance Analysis
Mateos, Gonzalo
2011-01-01
The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements, when it comes to online estimation of stationary signals as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost, using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally, and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted, by studying a stochastically-driven `averaged' system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, the simplifying...
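The classical single-node recursion that D-RLS distributes can be stated compactly. The sketch below is generic textbook exponentially weighted RLS with the usual δI initialization of the inverse correlation matrix, not the distributed algorithm of the paper.

```python
import numpy as np

def rls(X, y, lam=0.98, delta=100.0):
    """Exponentially weighted recursive least squares: one O(p^2) update
    per sample, with no matrix inversion after P is initialized."""
    p = X.shape[1]
    w = np.zeros(p)
    P = delta * np.eye(p)                   # inverse correlation estimate
    for x, d in zip(X, y):
        g = P @ x / (lam + x @ P @ x)       # gain vector
        w = w + g * (d - x @ w)             # correct with a priori error
        P = (P - np.outer(g, x @ P)) / lam  # update inverse correlation
    return w
```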
Least Squares Shadowing for Sensitivity Analysis of Turbulent Fluid Flows
Blonigan, Patrick; Wang, Qiqi
2014-01-01
Computational methods for sensitivity analysis are invaluable tools for aerodynamics research and engineering design. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in turbulent fluid flow fields, specifically those obtained using high-fidelity turbulence simulations. This is because of a number of dynamical properties of turbulent and chaotic fluid flows, most importantly high sensitivity of the initial value problem, popularly known as the "butterfly effect". The recently developed least squares shadowing (LSS) method avoids the issues encountered by traditional sensitivity analysis methods by approximating the "shadow trajectory" in phase space, avoiding the high sensitivity of the initial value problem. The following paper discusses how the least squares problem associated with LSS is solved. Two methods are presented and are demonstrated on a simulation of homogeneous isotropic turbulence and the Kuramoto-Sivashinsky (KS) equation, a 4th order c...
Liu, Jingwei; Liu, Yi; Xu, Meizhi
2015-01-01
A parameter estimation method for the Jelinski-Moranda (JM) model based on weighted nonlinear least squares (WNLS) is proposed. The formulae for resolving the WNLS estimates (WNLSE) are derived, and the empirical weight function and the heteroscedasticity problem are discussed. The effects of optimization parameter estimation selection based on the maximum likelihood estimation (MLE) method, the least squares estimation (LSE) method and the weighted nonlinear least squares estimation (WNLSE) method are al...
Classification using least squares support vector machine for reliability analysis
Institute of Scientific and Technical Information of China (English)
Zhi-wei GUO; Guang-chen BAI
2009-01-01
In order to improve the efficiency of the support vector machine (SVM) for classification when dealing with a large number of samples, the least squares support vector machine (LSSVM) for classification is introduced into reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic programming problem to a group of linear equations. The numerical results indicate that the reliability method based on the LSSVM for classification has higher accuracy and requires less computational cost than the SVM method.
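The linear system that replaces the SVM's quadratic program can be written down directly. Below is our own toy sketch of LS-SVM training and prediction in the function-estimation form with an RBF kernel; it is not the paper's reliability-analysis code, and the hyperparameters are illustrative.

```python
import numpy as np

def lssvm_train(X, y, gamma=10.0, sigma=1.0):
    """LS-SVM: the SVM's QP is replaced by one linear system
        [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma ** 2))       # RBF kernel matrix
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                   # bias b, weights alpha

def lssvm_predict(X, alpha, b, Xq, sigma=1.0):
    """Classify query points by the sign of the kernel expansion."""
    d2 = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    return np.sign(np.exp(-d2 / (2 * sigma ** 2)) @ alpha + b)
```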
Liu, Jingwei
2011-01-01
A function-based nonlinear least squares estimation (FNLSE) method is proposed and investigated for parameter estimation of the Jelinski-Moranda software reliability model. FNLSE extends the potential fitting functions of traditional least squares estimation (LSE), and takes the logarithm-transformed nonlinear least squares estimation (LogLSE) as a special case. A novel power-transformation-based nonlinear least squares estimation (powLSE) is proposed and applied to the parameter estimation of the Jelinski-Moranda model. Solved with the Newton-Raphson method, both LogLSE and powLSE of the Jelinski-Moranda model are applied to mean-time-between-failures (MTBF) predictions on six standard software failure time data sets. The experimental results demonstrate the effectiveness of powLSE with an optimal power index compared to classical least squares estimation (LSE), maximum likelihood estimation (MLE) and LogLSE, in terms of the recursively relative error (RE) index and the Braun statistic.
LEAST-SQUARES MIXED FINITE ELEMENT METHODS FOR NONLINEAR PARABOLIC PROBLEMS
Institute of Scientific and Technical Information of China (English)
Dan-ping Yang
2002-01-01
Two least-squares mixed finite element schemes are formulated to solve the initial-boundary value problem of a nonlinear parabolic partial differential equation, and the convergence of these schemes is analyzed.
Institute of Scientific and Technical Information of China (English)
TAO Hua-xue (陶华学); GUO Jin-yun (郭金运)
2003-01-01
Data are very important for building the digital mine. The data come from many sources, and have different types and temporal states; relations between one class of data and another, or between data and unknown parameters, are often nonlinear. The unknown parameters may be non-random or random, and the random parameters often vary dynamically with time. It is therefore neither accurate nor reliable to process the data for building the digital mine with the classical least squares method or the common nonlinear least squares method. A generalized nonlinear dynamic least squares method for processing data in building the digital mine is therefore put forward, together with the corresponding mathematical model. The generalized nonlinear least squares problem is more complex than the common nonlinear least squares problem, and its solution is more difficult to obtain because the dimensions of the data and parameters are larger. A new solution model and method are therefore put forward to solve the generalized nonlinear dynamic least squares problem. In fact, the problem can be converted into two sub-problems, each with a single variable; that is, a complex problem can be separated and then solved. The dimension of the unknown parameters can thus be reduced by half, which simplifies the original high-dimensional equations. The method lessens the computational load and opens up a new way to process data in building the digital mine, where the data have many sources, different types and many temporal states.
Payette, G. S.; Reddy, J. N.
2011-05-01
In this paper we examine the roles of minimization and linearization in the least-squares finite element formulations of nonlinear boundary-value problems. The least-squares principle is based upon the minimization of the least-squares functional constructed via the sum of the squares of appropriate norms of the residuals of the partial differential equations (in the present case we consider L2 norms). Since the least-squares method is independent of the discretization procedure and the solution scheme, the least-squares principle suggests that minimization should be performed prior to linearization, where linearization is employed in the context of either the Picard or Newton iterative solution procedures. However, in the least-squares finite element analysis of nonlinear boundary-value problems, it has become common practice in the literature to exchange the sequence of application of the minimization and linearization operations. The main purpose of this study is to provide a detailed assessment of how the finite element solution is affected when the order of application of these operators is interchanged. The assessment is performed mathematically, through an examination of the variational setting for the least-squares formulation of an abstract nonlinear boundary-value problem, and also computationally, through the numerical simulation of the least-squares finite element solutions of both a nonlinear form of the Poisson equation and the incompressible Navier-Stokes equations. The assessment suggests that although the least-squares principle indicates that minimization should be performed prior to linearization, such an approach is often impractical and not necessary.
Performance analysis of the Least-Squares estimator in Astrometry
Lobos, Rodrigo A; Mendez, Rene A; Orchard, Marcos
2015-01-01
We characterize the performance of the widely used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not offer a closed-form expression, but a new result is presented (Theorem 1) where both the bias and the mean-square error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum variance Cramer-Rao bound. Based on our results, we show that, for the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated...
Algorithms for unweighted least-squares factor analysis
Krijnen, WP
Estimation of the factor model by unweighted least squares (ULS) is distribution free, yields consistent estimates, and is computationally fast if the Minimum Residuals (MinRes) algorithm is employed. MinRes algorithms produce a converging sequence of monotonically decreasing ULS function values.
Institute of Scientific and Technical Information of China (English)
TAO Hua-xue; GUO Jin-yun
2005-01-01
The propagation and calculation of the variance-covariance matrix of the unknown parameters in generalized nonlinear least squares remains an open problem, and has not been treated in the domestic or international literature. A variance-covariance propagation formula for the unknown parameters, retaining second-order terms, is derived to evaluate the accuracy of the unknown parameter estimators in the generalized nonlinear least squares problem. It is a new variance-covariance formula and opens up a new way to evaluate accuracy when processing data that are multi-source, multi-dimensional, multi-type, multi-temporal, of differing accuracy, and nonlinear.
Acceleration Control in Nonlinear Vibrating Systems based on Damped Least Squares
Pilipchuk, V N
2011-01-01
A discrete-time control algorithm using damped least squares is introduced for acceleration and energy exchange controls in nonlinear vibrating systems. It is shown that the damping constant of the least squares and the sampling time step of the controller must be inversely related to ensure that shrinking the time step has little effect on the results. The algorithm is illustrated on two linearly coupled Duffing oscillators near the 1:1 internal resonance. In particular, it is shown that varying the dissipation ratio of one of the two oscillators can significantly suppress the nonlinear beat phenomenon.
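The damped least-squares update itself can be sketched as follows; the Jacobian rows and residuals are made-up numbers, and this is not the paper's controller, only the (JᵀJ + λI)⁻¹Jᵀr step it builds on.

```python
def damped_ls_step(J, r, lam):
    # Solve (J^T J + lam*I) d = J^T r for a two-parameter problem
    # via the closed-form 2x2 inverse.
    a = sum(j0 * j0 for j0, _ in J) + lam
    b = sum(j0 * j1 for j0, j1 in J)
    c = sum(j1 * j1 for _, j1 in J) + lam
    g0 = sum(j0 * ri for (j0, _), ri in zip(J, r))
    g1 = sum(j1 * ri for (_, j1), ri in zip(J, r))
    det = a * c - b * b
    return ((c * g0 - b * g1) / det, (a * g1 - b * g0) / det)

J = [(1.0, 0.0), (1.0, 1.0), (1.0, 2.0)]   # made-up Jacobian rows
r = [0.1, -0.2, 0.3]                       # made-up residuals
small = damped_ls_step(J, r, 1e-8)         # near Gauss-Newton step
large = damped_ls_step(J, r, 1e3)          # heavy damping: short, gradient-like step
print(small, large)
```

Increasing the damping constant λ shortens the step and rotates it toward the gradient direction, which is the trade-off the abstract's λ-versus-time-step relation manages.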
Padovan, J.; Lackney, J.
1986-01-01
The current paper develops a constrained hierarchical least squares nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of degree-of-freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least squares methodology, benchmarking examples are presented which treat structures exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.
Institute of Scientific and Technical Information of China (English)
TAO Hua-xue; GUO Jin-yun
2002-01-01
Using difference quotients instead of derivatives, the paper presents a solution method and procedure for nonlinear least squares estimation involving different classes of measurements. The paper also presents several practical cases, which indicate that the method is valid and reliable.
1985-05-01
first generated the errors and response variables. The errors were produced using the Marsaglia and Tsang pseudo-normal random number algorithm... "Asymptotic properties of non-linear least squares estimators," The Annals of Mathematical Statistics, 40(2), pp. 633-643. Marsaglia, G., Tsang, W.
BER analysis of regularized least squares for BPSK recovery
Ben Atitallah, Ismail
2017-06-20
This paper investigates the problem of recovering an n-dimensional BPSK signal x
Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine
Institute of Scientific and Technical Information of China (English)
XU Rui-Rui; BIAN Guo-Xing; GAO Chen-Feng; CHEN Tian-Lun
2005-01-01
The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then we employ a clustering method in the model to prune the number of support values. The learning rate and the noise-filtering capabilities of the LS-SVM are thereby greatly improved.
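A minimal LS-SVM regression sketch follows (the standard dual system [[0, 1ᵀ], [1, K + I/γ]][b; α] = [0; y] with an RBF kernel). The γ value, kernel width, and sine target are illustrative assumptions, and the clustering-based pruning from the abstract is not included.

```python
import math

def rbf(x, z, s=1.0):
    return math.exp(-(x - z)**2 / (2.0 * s**2))

def gauss_solve(A, y):
    # Plain Gaussian elimination with partial pivoting.
    n = len(A)
    M = [row[:] + [y[i]] for i, row in enumerate(A)]
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(M[i][k]))
        M[k], M[p] = M[p], M[k]
        for i in range(k + 1, n):
            f = M[i][k] / M[k][k]
            for j in range(k, n + 1):
                M[i][j] -= f * M[k][j]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][j] * x[j] for j in range(i + 1, n))) / M[i][i]
    return x

def lssvm_fit(xs, ys, gamma=100.0):
    # LS-SVM dual: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
    n = len(xs)
    A = [[0.0] + [1.0] * n]
    for i in range(n):
        A.append([1.0] + [rbf(xs[i], xs[j]) + (1.0 / gamma if i == j else 0.0)
                          for j in range(n)])
    sol = gauss_solve(A, [0.0] + list(ys))
    b, alpha = sol[0], sol[1:]
    return lambda x: b + sum(a * rbf(x, xj) for a, xj in zip(alpha, xs))

xs = [0.5 * i for i in range(10)]
ys = [math.sin(x) for x in xs]
f = lssvm_fit(xs, ys)
print(f(1.3), math.sin(1.3))   # fitted curve tracks the target
```

Larger γ moves the fit toward interpolation; smaller γ regularizes harder, which is the tuning question the abstract addresses.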
Cao, Jiguo; Huang, Jianhua Z; Wu, Hulin
2012-01-01
Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are approximated also using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate a HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.
Institute of Scientific and Technical Information of China (English)
(none)
2002-01-01
A new robust on-line fault diagnosis method based on least squares estimation for nonlinear difference-algebraic systems (DAS) with uncertainties is proposed. Based on the known nominal model of the DAS, the method first constructs an auxiliary system consisting of a difference equation and an algebraic equation; then, based on the relationship between the state deviation and the faults in the difference equation, and between the algebraic-variable deviation and the faults in the algebraic equation, it identifies the faults on-line through least squares estimation. The method can not only detect, isolate and identify faults for DAS, but also give an upper bound on the fault identification error. Simulation results indicate that it gives satisfactory diagnostic results for both abrupt and incipient faults.
Improvements to the Levenberg-Marquardt algorithm for nonlinear least-squares minimization
Transtrum, Mark K
2012-01-01
When minimizing a nonlinear least-squares function, the Levenberg-Marquardt algorithm can suffer from slow convergence, particularly when it must navigate a narrow canyon en route to a best fit. On the other hand, when the least-squares function is very flat, the algorithm may easily become lost in parameter space. We introduce several improvements to the Levenberg-Marquardt algorithm in order to improve both its convergence speed and its robustness to initial parameter guesses. We update the usual step to include a geodesic acceleration correction term, explore a systematic way of accepting uphill steps that may increase the residual sum of squares due to Umrigar and Nightingale, and employ the Broyden method to update the Jacobian matrix. We test these changes by comparing their performance on a number of test problems with standard implementations of the algorithm. We suggest that these two particular challenges, slow convergence and robustness to initial guesses, are complementary problems. Schemes that imp...
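A bare-bones Levenberg-Marquardt loop on a toy exponential fit may clarify the baseline the paper improves on; it includes none of the geodesic acceleration, uphill-step acceptance, or Broyden updates, and the model y = a·e^(b·t) and all numbers are assumptions for illustration.

```python
import math

def lm_fit(ts, ys, a, b, lam=1e-3, iters=50):
    # Basic Levenberg-Marquardt for y ~ a*exp(b*t): Marquardt-scaled
    # normal equations; accept a step only if the cost decreases.
    def sse(a, b):
        return sum((y - a * math.exp(b * t))**2 for t, y in zip(ts, ys))
    cost = sse(a, b)
    for _ in range(iters):
        g0 = g1 = h00 = h01 = h11 = 0.0
        for t, y in zip(ts, ys):
            e = math.exp(b * t)
            res = y - a * e
            j0, j1 = -e, -a * t * e        # d res/da, d res/db
            g0 += j0 * res; g1 += j1 * res
            h00 += j0 * j0; h01 += j0 * j1; h11 += j1 * j1
        A00, A11 = h00 * (1 + lam), h11 * (1 + lam)   # Marquardt damping
        det = A00 * A11 - h01 * h01
        da = -(A11 * g0 - h01 * g1) / det
        db = -(A00 * g1 - h01 * g0) / det
        new = sse(a + da, b + db)
        if new < cost:                      # accept and relax damping
            a, b, cost, lam = a + da, b + db, new, lam * 0.5
        else:                               # reject and increase damping
            lam *= 4.0
    return a, b

ts = [0.1 * i for i in range(20)]
ys = [2.0 * math.exp(-1.0 * t) for t in ts]   # noiseless truth: a=2, b=-1
a_hat, b_hat = lm_fit(ts, ys, a=1.0, b=0.0)
print(a_hat, b_hat)   # should approach (2, -1)
```

The strict downhill acceptance rule here is exactly what the paper's Umrigar-Nightingale uphill-step criterion relaxes.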
Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).
Bevilacqua, Marta; Marini, Federico
2014-08-01
The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performance of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm has been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. The preliminary results presented in this paper show that the performance of the proposed LW-PLS-DA approach is comparable to, and in some cases better than, that obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks).
Nonlinear Spline Kernel-based Partial Least Squares Regression Method and Its Application
Institute of Scientific and Technical Information of China (English)
JIA Jin-ming; WEN Xiang-jun
2008-01-01
Inspired by the traditional Wold's nonlinear PLS algorithm, which comprises the NIPALS approach and a spline inner-function model, a novel nonlinear partial least squares algorithm based on a spline kernel (named SK-PLS) is proposed for nonlinear modeling in the presence of multicollinearity. Based on the inner-product kernel spanned by the spline basis functions with an infinite number of nodes, this method first maps the input data into a high-dimensional feature space, then calculates a linear PLS model with a reformed NIPALS procedure in the feature space, and in consequence gives a unified framework for traditional PLS "kernel" algorithms. The linear PLS in the feature space corresponds to a nonlinear PLS in the original input (primal) space. The good approximating property of the spline kernel function enhances the generalization ability of the novel model, and two numerical experiments are given to illustrate the feasibility of the proposed method.
Calibration of Vector Magnetogram with the Nonlinear Least-squares Fitting Technique
Institute of Scientific and Technical Information of China (English)
Jiang-Tao Su; Hong-Qi Zhang
2004-01-01
To acquire Stokes profiles from observations of a simple sunspot with the Video Vector Magnetograph at Huairou Solar Observing Station (HSOS), we scanned the Fe I λ5324.19 Å line over the wavelength interval from 150 mÅ redward of the line center to 150 mÅ blueward, in steps of 10 mÅ. With the technique of analytic inversion of Stokes profiles via nonlinear least squares, we present the calibration coefficients for the HSOS vector magnetogram. We obtained the theoretical calibration error with linear expressions derived from the Unno-Becker equation under the weak-field approximation.
On-line Weighted Least Squares Kernel Method for Nonlinear Dynamic Modeling
Institute of Scientific and Technical Information of China (English)
(none)
2006-01-01
Support vector machines (SVM) have been widely used in pattern recognition and have also drawn considerable interest in control areas. Based on a rolling optimization method and on-line learning strategies, a novel approach based on weighted least squares support vector machines (WLS-SVM) is proposed for nonlinear dynamic modeling. The good robustness of the novel approach enhances the generalization ability of kernel-method-based modeling, and some experimental results are presented to illustrate the feasibility of the proposed method.
Nonlinear decoupling controller design based on least squares support vector regression
Institute of Scientific and Technical Information of China (English)
WEN Xiang-jun; ZHANG Yu-nong; YAN Wei-wu; XU Xiao-ming
2006-01-01
Support Vector Machines (SVMs) have been widely used in pattern recognition and have also drawn considerable interest in control areas. Based on a method of least squares SVM (LS-SVM) for multivariate function estimation, a generalized inverse system is developed for the linearization and decoupling control of a general nonlinear continuous system. The approach of inverse modelling via LS-SVM and parameter optimization using the Bayesian evidence framework is discussed in detail. In this paper, a complex high-order nonlinear system is decoupled into a number of pseudo-linear Single Input Single Output (SISO) subsystems with linear dynamic components. The poles of the pseudo-linear subsystems can be configured to desired positions. The proposed method provides an effective alternative for the controller design of plants whose accurate mathematical model is unknown or whose state variables are difficult or impossible to measure. Simulation results showed the efficacy of the method.
Numerical solution of a nonlinear least squares problem in digital breast tomosynthesis
Landi, G.; Loli Piccolomini, E.; Nagy, J. G.
2015-11-01
In digital tomosynthesis imaging, multiple projections of an object are obtained along a small range of incident angles in order to reconstruct a pseudo-3D representation (i.e., a set of 2D slices) of the object. In this paper we describe some mathematical models for polyenergetic digital breast tomosynthesis image reconstruction that explicitly take into account the various materials composing the object and the polyenergetic nature of the x-ray beam. A polyenergetic model helps to reduce beam hardening artifacts, but the disadvantage is that it requires solving a large-scale nonlinear ill-posed inverse problem. We formulate the image reconstruction process (i.e., the method to solve the ill-posed inverse problem) in a nonlinear least squares framework, and use a Levenberg-Marquardt scheme to solve it. Some implementation details are discussed, and numerical experiments are provided to illustrate the performance of the methods.
Directory of Open Access Journals (Sweden)
Cao, Hui; Yan, Xingyu; Li, Yaojiang; Wang, Yanxia; Zhou, Yan; Yang, Sanchun
2014-01-01
Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for components prediction of flue gas. For the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden layer nodes of the RBFNN are the extension term of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the components prediction model of flue gas. A near-infrared spectral dataset of flue gas of natural gas combustion is used for estimating the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are, respectively, reduced by 4.74%, 21.76%, and 5.32% compared to those of PLS. Hence, the proposed method has higher predictive capabilities and better robustness.
Kazemi, Mahdi; Arefi, Mohammad Mehdi
2016-12-15
In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, a FIT criterion of 92% is achieved in a CSTR process where about 400 data points are used.
SOM-based nonlinear least squares twin SVM via active contours for noisy image segmentation
Xie, Xiaomin; Wang, Tingting
2017-02-01
In this paper, a nonlinear least squares twin support vector machine (NLSTSVM) with the integration of an active contour model (ACM) is proposed for noisy image segmentation. Efforts have been made to seek kernel-generated surfaces instead of hyper-planes for the pixels belonging to the foreground and background, respectively, using the kernel trick to enhance the performance. Concurrent self-organizing maps (SOMs) are applied to approximate the intensity distributions in a supervised way, so as to establish the original training sets for the NLSTSVM. Further, the two sets are updated by adding the global region average intensities at each iteration. Moreover, a local variable regional term rather than an edge stop function is adopted in the energy function to ameliorate the noise robustness. Experimental results demonstrate that our model achieves higher segmentation accuracy and greater noise robustness.
Nonlinear Least-Squares Time-Difference Estimation from Sub-Nyquist-Rate Samples
Harada, Koji; Sakai, Hideaki
In this paper, time-difference estimation of filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random signal model, then use the IRS results to drive the nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even for cases with sub-Nyquist sampling, assuming the use of compactly-supported sampling kernels that satisfy the recently-developed nonaliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method can improve performance over the straightforward IRS method, and provides approximately the same performance as the NLS method with a reduced sampling rate, even for closely-spaced time delays. This enables, given a fixed observation time, a significant reduction in the required number of samples while maintaining the same level of estimation performance.
Institute of Scientific and Technical Information of China (English)
TAO Hua-xue; GUO Jin-yun
2003-01-01
Data coming from different sources have different types and temporal states. Relations between one type of data and others, or between data and unknown parameters, are almost always nonlinear. It is neither accurate nor reliable to process such data in building the digital earth with the classical least squares method or the common nonlinear least squares method. So a generalized nonlinear dynamic least squares method was put forward to process data in building the digital earth. A separating solution model and an iterative calculation method were used to solve the generalized nonlinear dynamic least squares problem. In fact, a complex problem can be separated and then solved by converting it into two sub-problems, each of which has a single variable. The dimension of the unknown parameters can therefore be reduced by half, which simplifies the original high-dimensional equations.
Non-linear Least-squares Fitting in IDL with MPFIT
Markwardt, C. B.
2009-09-01
MPFIT is a port to IDL of the non-linear least squares fitting program MINPACK-1. MPFIT inherits the robustness of the original FORTRAN version of MINPACK-1, but is optimized for performance and convenience in IDL. In addition to the main fitting engine, MPFIT, several specialized functions are provided to fit 1-D curves and 2-D images, 1-D and 2-D peaks, and interactive fitting from the IDL command line. Several constraints can be applied to model parameters, including fixed constraints, simple bounding constraints, and "tying" the value to another parameter. Several data-weighting methods are allowed, and the parameter covariance matrix is computed. Extensive diagnostic capabilities are available during the fit, via a call-back subroutine, and after the fit is complete. Several different forms of documentation are provided, including a tutorial, reference pages, and frequently asked questions. The package has been translated to C and Python as well. The full IDL and C packages can be found at http://purl.com/net/mpfit.
ELASTO-PLASTICITY ANALYSIS BASED ON COLLOCATION WITH THE MOVING LEAST SQUARE METHOD
Institute of Scientific and Technical Information of China (English)
Song Kangzu; Zhang Xiong; Lu Miugwau
2003-01-01
A meshless approach based on the moving least squares method is developed for elasto-plasticity analysis, in which the incremental formulation is used. In this approach, the displacement shape functions are constructed by using the moving least squares approximation, and the discrete governing equations for elasto-plastic material are constructed with the direct collocation method. The boundary conditions are also imposed by collocation. The method established is a truly meshless one, as it does not need any mesh, either for the interpolation of the solution variables or for the construction of the discrete equations. It is simply formulated and very efficient, and no post-processing procedure is required to compute the derivatives of the unknown variables, since the solution from this method, based on the moving least squares approximation, is already smooth enough. Numerical examples are given to verify the accuracy of the meshless method proposed for elasto-plasticity analysis.
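A one-dimensional moving least squares approximation can be sketched as below (linear basis with a Gaussian weight). The node layout, weight radius, and target function are illustrative assumptions; no collocation of governing equations is attempted.

```python
import math

def mls_value(x, nodes, vals, h=0.3):
    # 1-D moving least squares: weighted linear fit c0 + c1*u around x,
    # with Gaussian weight w_i = exp(-((x - x_i)/h)^2), evaluated at x.
    a00 = a01 = a11 = b0 = b1 = 0.0
    for xi, fi in zip(nodes, vals):
        w = math.exp(-((x - xi) / h)**2)
        a00 += w; a01 += w * xi; a11 += w * xi * xi
        b0 += w * fi; b1 += w * xi * fi
    det = a00 * a11 - a01 * a01
    c0 = (a11 * b0 - a01 * b1) / det
    c1 = (a00 * b1 - a01 * b0) / det
    return c0 + c1 * x

nodes = [0.25 * i for i in range(9)]   # "meshless" nodes on [0, 2]
vals = [u * u for u in nodes]          # sampled f(u) = u^2
approx = mls_value(1.1, nodes, vals)
print(approx)   # smooth approximation near f(1.1) = 1.21
```

The weighted normal equations are re-solved at every evaluation point, which is what makes the approximation "moving" and keeps it smooth without any mesh.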
Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression
Verdoolaege, G.; Shabbir, A.; Hornung, G.
2016-11-01
Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.
Wind Tunnel Strain-Gage Balance Calibration Data Analysis Using a Weighted Least Squares Approach
Ulbrich, N.; Volden, T.
2017-01-01
A new approach is presented that uses a weighted least squares fit to analyze wind tunnel strain-gage balance calibration data. The weighted least squares fit is specifically designed to increase the influence of single-component loadings during the regression analysis. The weighted least squares fit also reduces the impact of calibration load schedule asymmetries on the predicted primary sensitivities of the balance gages. A weighting factor between zero and one is assigned to each calibration data point that depends on a simple count of its intentionally loaded load components or gages. The greater the number of a data point's intentionally loaded load components or gages is, the smaller its weighting factor becomes. The proposed approach is applicable to both the Iterative and Non-Iterative Methods that are used for the analysis of strain-gage balance calibration data in the aerospace testing community. The Iterative Method uses a reasonable estimate of the tare corrected load set as input for the determination of the weighting factors. The Non-Iterative Method, on the other hand, uses gage output differences relative to the natural zeros as input for the determination of the weighting factors. Machine calibration data of a six-component force balance is used to illustrate benefits of the proposed weighted least squares fit. In addition, a detailed derivation of the PRESS residuals associated with a weighted least squares fit is given in the appendices of the paper as this information could not be found in the literature. These PRESS residuals may be needed to evaluate the predictive capabilities of the final regression models that result from a weighted least squares fit of the balance calibration data.
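The effect of per-point weighting factors in a weighted least squares fit can be sketched on a toy straight-line regression. The data and weights below are invented; the paper's actual weights come from counting intentionally loaded components, which is not reproduced here.

```python
def wls_line(xs, ys, ws):
    # Weighted normal equations for y ~ c0 + c1*x.
    s0 = sum(ws)
    s1 = sum(w * x for w, x in zip(ws, xs))
    s2 = sum(w * x * x for w, x in zip(ws, xs))
    t0 = sum(w * y for w, y in zip(ws, ys))
    t1 = sum(w * x * y for w, x, y in zip(ws, xs, ys))
    det = s0 * s2 - s1 * s1
    return ((s2 * t0 - s1 * t1) / det, (s0 * t1 - s1 * t0) / det)

xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [0.1, 1.0, 2.0, 3.0, 10.0]                        # last point sits far off trend
c0, c1 = wls_line(xs, ys, [1.0] * 5)                   # ordinary least squares
d0, d1 = wls_line(xs, ys, [1.0, 1.0, 1.0, 1.0, 0.1])   # down-weight last point
print(c1, d1)   # slope moves back toward the trend when the point is down-weighted
```

Assigning a weight between zero and one simply scales that point's contribution to the normal equations, which is all the paper's single-component emphasis requires.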
Nonlinear partial least squares with Hellinger distance for nonlinear process monitoring
Harrou, Fouzi
2017-02-16
This paper proposes an efficient data-based anomaly detection method that can be used for monitoring nonlinear processes. The proposed method merges the advantages of nonlinear projection to latent structures (NLPLS) modeling with those of the Hellinger distance (HD) metric to identify abnormal changes in highly correlated multivariate data. Specifically, the HD is used to quantify the dissimilarity between the current NLPLS-based residual distribution and a reference probability distribution. The performance of the developed NLPLS-based HD anomaly detection technique is illustrated using simulated plug flow reactor data.
A Least-Squares Solution to Nonlinear Steady-State Multi-Dimensional IHCP
Institute of Scientific and Technical Information of China (English)
(none)
1996-01-01
In this paper, the least-squares method is used to solve the Inverse Heat Conduction Problem (IHCP) to determine the space-wise variation of the unknown boundary condition on the inner surface of a helically coiled tube with fluid flow inside, electrical heating, and insulation outside. The sensitivity coefficients are analyzed to give a rational distribution of the thermocouples. The results demonstrate that the method effectively extracts information about the unknown boundary condition of the heat conduction problem from the experimental measurements. The results also show that the least-squares method converges very quickly.
Rauk, Adam P; Guo, Kevin; Hu, Yanling; Cahya, Suntara; Weiss, William F
2014-08-01
Defining a suitable product presentation with an acceptable stability profile over its intended shelf-life is one of the principal challenges in bioproduct development. Accelerated stability studies are routinely used as a tool to better understand long-term stability. Data analysis often employs an overall mass action kinetics description for the degradation and the Arrhenius relationship to capture the temperature dependence of the observed rate constant. To improve predictive accuracy and precision, the current work proposes a least-squares estimation approach with a single nonlinear covariate and uses a polynomial to describe the change in a product attribute with respect to time. The approach, which will be referred to as Arrhenius time-scaled (ATS) least squares, enables accurate, precise predictions to be achieved for degradation profiles commonly encountered during bioproduct development. A Monte Carlo study is conducted to compare the proposed approach with the common method of least-squares estimation on the logarithmic form of the Arrhenius equation and nonlinear estimation of a first-order model. The ATS least squares method accommodates a range of degradation profiles, provides a simple and intuitive approach for data presentation, and can be implemented with ease. © 2014 Wiley Periodicals, Inc. and the American Pharmacists Association.
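A sketch of the Arrhenius time-scaling idea follows, under the simplifying assumptions that the activation energy is known and the degradation polynomial is linear (in the ATS approach Ea is the single nonlinear covariate to be estimated); all numbers are hypothetical.

```python
import math

R = 8.314  # gas constant, J/(mol*K)

def ats_scale(t_days, temp_K, Ea, Tref=298.15):
    # Arrhenius time scaling: equivalent time at the reference temperature.
    return t_days * math.exp(-(Ea / R) * (1.0 / temp_K - 1.0 / Tref))

# Hypothetical accelerated-stability data: the attribute decays linearly
# in Arrhenius-scaled time with rate k_ref at the reference temperature.
Ea, k_ref = 80e3, -0.02
data = [(t, T, 100.0 + k_ref * ats_scale(t, T, Ea))
        for T in (298.15, 310.15, 323.15) for t in (0.0, 30.0, 60.0, 90.0)]

# Ordinary least squares of attribute versus scaled time
s = [ats_scale(t, T, Ea) for t, T, _ in data]
y = [attr for _, _, attr in data]
n = len(s)
sbar, ybar = sum(s) / n, sum(y) / n
slope = (sum((si - sbar) * (yi - ybar) for si, yi in zip(s, y))
         / sum((si - sbar)**2 for si in s))
print(slope)   # recovers k_ref when scaled with the true Ea
```

Once time is rescaled, all temperature conditions collapse onto a single degradation curve and ordinary least squares applies; the nonlinear part of the estimation is confined to the one Ea covariate.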
Institute of Scientific and Technical Information of China (English)
LUO Zhen-dong; ZHU Jiang; WANG Hui-jun
2002-01-01
A nonlinear Galerkin/Petrov-least squares mixed element (NGPLSME) method for the stationary Navier-Stokes equations is presented and analyzed. The scheme adds Petrov-least squares forms of the residuals to the nonlinear Galerkin mixed element method, so that it is stable for any combination of discrete velocity and pressure spaces without requiring the Babuska-Brezzi stability condition. The existence, uniqueness and convergence (at optimal rate) of the NGPLSME solution are proved in the case of sufficient viscosity (or small data).
Harmonic tidal analysis at a few stations using the least squares method
Digital Repository Service at National Institute of Oceanography (India)
Fernandes, A.A; Das, V.K.; Bahulayan, N.
Using the least squares method, harmonic analysis has been performed on hourly water level records of 29 days at several stations depicting different types of non-tidal noise. For a tidal record at Mormugao, which was free from storm surges (low...
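Harmonic analysis by least squares reduces, for one known constituent, to fitting a cosine/sine pair at that frequency. The sketch below uses a synthetic noise-free M2 record, whereas the paper works with real 29-day station records containing non-tidal noise.

```python
import math

def fit_constituent(times, levels, period_h):
    # Least-squares fit of h(t) ~ A cos(wt) + B sin(wt) for one constituent.
    w = 2.0 * math.pi / period_h
    c = [math.cos(w * t) for t in times]
    s = [math.sin(w * t) for t in times]
    scc = sum(x * x for x in c)
    sss = sum(x * x for x in s)
    scs = sum(x * y for x, y in zip(c, s))
    tc = sum(x * h for x, h in zip(c, levels))
    ts = sum(x * h for x, h in zip(s, levels))
    det = scc * sss - scs * scs
    A = (sss * tc - scs * ts) / det
    B = (scc * ts - scs * tc) / det
    return math.hypot(A, B), math.atan2(B, A)   # amplitude, phase lag

M2 = 12.4206012   # hours: principal lunar semidiurnal period
hours = range(29 * 24)                          # 29 days of hourly levels
levels = [0.8 * math.cos(2.0 * math.pi / M2 * t - 0.5) for t in hours]
amp, phase = fit_constituent(hours, levels, M2)
print(amp, phase)   # recovers amplitude 0.8 and phase 0.5
```

With several constituents the same normal equations simply grow to one cosine/sine pair per frequency, which is the multi-constituent fit the paper applies station by station.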
Use of correspondence analysis partial least squares on linear and unimodal data
DEFF Research Database (Denmark)
Frisvad, Jens C.; Bergsøe, Merete Norsker
1996-01-01
Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating...
Least Squares Spectral Analysis and Its Application to Superconducting Gravimeter Data Analysis
Institute of Scientific and Technical Information of China (English)
YIN Hui; Spiros D. Pagiatakis
2004-01-01
Detection of a periodic signal hidden in noise is the goal of Superconducting Gravimeter (SG) data analysis. Due to spikes, gaps, datum shifts (offsets) and other disturbances, the traditional FFT method shows inherent limitations. Instead, least squares spectral analysis (LSSA) has shown itself more suitable than Fourier analysis for gappy, unequally spaced and unequally weighted data series in a variety of applications in geodesy and geophysics. This paper reviews the principle of LSSA and gives a possible strategy for the analysis of time series obtained from the Canadian Superconducting Gravimeter Installation (CGSI), with gaps, offsets, unequal sampling decimation of the data and unequally weighted data points.
Method for exploiting bias in factor analysis using constrained alternating least squares algorithms
Keenan, Michael R.
2008-12-30
Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
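A bare-bones alternating least squares factorization with non-negativity imposed by projection illustrates the standard baseline that the biased-ALS method builds on; this is a generic sketch with synthetic data, not the patented algorithm:

```python
import numpy as np

# Alternating least squares with nonnegativity imposed by projection:
# factor D ≈ C @ S, clipping negatives after each unconstrained solve.
rng = np.random.default_rng(2)
C_true = rng.uniform(0, 1, (50, 3))
S_true = rng.uniform(0, 1, (3, 40))
D = C_true @ S_true                             # exactly factorable test data

C = rng.uniform(0, 1, (50, 3))                  # random nonnegative start
for _ in range(200):
    S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0], 0, None)
    C = np.clip(np.linalg.lstsq(S.T, D.T, rcond=None)[0].T, 0, None)

rel_err = np.linalg.norm(D - C @ S) / np.linalg.norm(D)
print(rel_err)
```

The clipping step is exactly the kind of constraint-induced bias the abstract discusses: when the true components are nearly collinear, many clipped solutions fit the data about equally well.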
Directory of Open Access Journals (Sweden)
Xisheng Yu
2014-01-01
Full Text Available The paper by Liu (2010) introduces a method termed canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide the convergence results of CLM and numerically examine its convergence properties. Then, a comparative analysis is conducted empirically using a large sample of S&P 100 Index (OEX) puts and IBM puts. The results on convergence show that choosing the shifted Legendre polynomials with four regressors is the more appropriate choice considering pricing accuracy and computational cost. With this choice, the CLM method is empirically demonstrated to be superior to the benchmark methods of binomial tree and finite difference with historical volatilities.
Sun, Zhibin; Chang, Chein-I.; Ren, Hsuan; D'Amico, Francis M.; Jensen, James O.
2003-12-01
Fully constrained linear spectral mixture analysis (FCLSMA) has been used for material quantification in remotely sensed imagery. In order to implement FCLSMA, two constraints are imposed on abundance fractions, referred to as the Abundance Sum-to-one Constraint (ASC) and the Abundance Nonnegativity Constraint (ANC). While the ASC is a linear equality constraint, the ANC is a linear inequality constraint. A direct approach to imposing the ASC and ANC has been recently investigated and is called the fully constrained least-squares (FCLS) method. Since there is no analytical solution resulting from the ANC, a modified fully constrained least-squares (MFCLS) method, which replaces the ANC with an Absolute Abundance Sum-to-one Constraint (AASC), was proposed to convert the set of inequality constraints to an equality constraint. The results produced by these two approaches have been shown to be very close. In this paper, we take the opposite approach to the MFCLS method, called the least-squares with linear inequality constraints (LSLIC) method, which also solves FCLSMA but replaces the ASC with two linear inequalities. The proposed LSLIC transforms the FCLSMA to a linear distance programming problem which can be solved easily by a numerical algorithm. In order to demonstrate its utility in solving FCLSMA, the LSLIC method is compared to the FCLS and MFCLS methods. The experimental results show that these three methods perform very similarly, with only subtle differences resulting from their problem formulations.
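One common way to realize the ASC in practice is to fold it into the least squares solve as a heavily weighted extra equation, then handle the ANC separately. The sketch below uses a crude clip-and-renormalize step for the ANC instead of the active-set iteration of true FCLS; endmembers and abundances are synthetic:

```python
import numpy as np

# Sum-to-one (ASC) folded into the solve as a heavily weighted row of ones;
# nonnegativity (ANC) imposed afterwards by clip-and-renormalize (a crude
# stand-in for the active-set iteration used by true FCLS).
rng = np.random.default_rng(3)
E = rng.uniform(0, 1, (100, 3))                 # endmember spectra (bands x materials)
a_true = np.array([0.6, 0.3, 0.1])              # true abundances, sum to one
x = E @ a_true + 0.01 * rng.standard_normal(100)

delta = 1e3                                     # weight enforcing the ASC
E_aug = np.vstack([E, delta * np.ones((1, 3))])
x_aug = np.append(x, delta)
a = np.linalg.lstsq(E_aug, x_aug, rcond=None)[0]

a = np.clip(a, 0, None)
a = a / a.sum()                                 # re-impose the ASC after clipping
print(a)                                        # roughly [0.6, 0.3, 0.1]
```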
Energy Technology Data Exchange (ETDEWEB)
Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE
2008-01-01
Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown so that a series of calibration standards similar to the unknown can be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.
Bouchard, M
2001-01-01
In recent years, a few articles describing the use of neural networks for nonlinear active control of sound and vibration were published. Using a control structure with two multilayer feedforward neural networks (one as a nonlinear controller and one as a nonlinear plant model), steepest descent algorithms based on two distinct gradient approaches were introduced for the training of the controller network. The two gradient approaches were sometimes called the filtered-x approach and the adjoint approach. Some recursive-least-squares algorithms were also introduced, using the adjoint approach. In this paper, a heuristic procedure is introduced for the development of recursive-least-squares algorithms based on the filtered-x and the adjoint gradient approaches. This leads to new recursive-least-squares algorithms for the training of the controller neural network in the two-network structure. These new algorithms produce better convergence performance than previously published algorithms. Differences in the performance of algorithms using the filtered-x and the adjoint gradient approaches are discussed in the paper. The computational load of the algorithms discussed in the paper is evaluated for multichannel systems of nonlinear active control. Simulation results are presented to compare the convergence performance of the algorithms, showing the convergence gain provided by the new algorithms.
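The recursive-least-squares recursion underlying such algorithms can be illustrated on a plain linear model; the paper applies the same kind of update to controller network weights through filtered-x or adjoint gradients, which this generic sketch does not attempt:

```python
import numpy as np

# Recursive least squares (RLS): update the weight estimate sample by
# sample instead of re-solving the whole least squares problem each time.
rng = np.random.default_rng(4)
w_true = np.array([0.5, -1.0, 2.0])

lam = 0.999                                     # forgetting factor
w = np.zeros(3)                                 # weight estimate
P = 1e3 * np.eye(3)                             # inverse correlation matrix estimate
for _ in range(500):
    x = rng.standard_normal(3)                  # input (regressor) vector
    d = x @ w_true + 0.01 * rng.standard_normal()  # desired signal
    k = P @ x / (lam + x @ P @ x)               # gain vector
    w = w + k * (d - x @ w)                     # update with the a priori error
    P = (P - np.outer(k, x) @ P) / lam          # update the inverse correlation

print(w)                                        # converges near [0.5, -1.0, 2.0]
```

The per-sample cost is O(n²) rather than the O(n³) of a full re-solve, which is why RLS variants are attractive for online controller training.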
Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.
Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia
2016-10-26
Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effects of noise and bias error on CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data at different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra and methane/toluene gas mixture spectra measured using FT-IR spectrometry with CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors...
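The difference between CLS and WLS under heteroscedastic noise can be seen in a one-component toy problem, where WLS weights each wavenumber by its inverse noise variance (an illustrative sketch with synthetic data, not the paper's spectra):

```python
import numpy as np

# CLS vs WLS for a single-component "spectrum" with heteroscedastic noise:
# WLS down-weights the noisy wavenumbers via inverse noise variances.
rng = np.random.default_rng(5)
k = np.linspace(0, 1, 300)                      # wavenumber axis (arbitrary units)
s = np.exp(-(k - 0.5) ** 2 / 0.02)              # pure-component reference spectrum
c_true = 0.7                                    # true concentration
sigma = 0.01 + 0.2 * k                          # noise std grows across the axis
y = c_true * s + sigma * rng.standard_normal(k.size)

c_cls = (s @ y) / (s @ s)                       # classical (unweighted) estimate
wgt = 1.0 / sigma ** 2
c_wls = ((wgt * s) @ y) / ((wgt * s) @ s)       # weighted estimate

print(c_cls, c_wls)                             # both near 0.7
```

Adding a baseline bias error to `y` would reverse the comparison at low-absorbance wavenumbers, which is exactly the trade-off motivating the SWLS threshold.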
Analysis Linking the Tensor Structure to the Least-Squares Method.
1984-01-01
AD-A142 159: Analysis Linking the Tensor Structure to the Least-Squares Method (U). Nova Univ. Oceanographic Center, Dania, FL; G. Blaha; Jan 84; AFGL-TR-84-... Tienstra ([5], [6], [7]), Baarda ([8], [9], [10]), Kooimans ([11]) and a number of others (for example, the editors of [7]). It would probably be more... institute, Technische Hogeschool, Delft, 1967 & 1970. 11. A. H. Kooimans: "Principles of the Calculus of Observations". Rapport Special, Neuvième Congrès...
A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.
Directory of Open Access Journals (Sweden)
Dominique Van de Sompel
Full Text Available Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles.
A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.
Van de Sompel, Dominique; Garai, Ellis; Zavaleta, Cristina; Gambhir, Sanjiv Sam
2012-01-01
Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles.
Directory of Open Access Journals (Sweden)
Yan Zhou
2013-01-01
Full Text Available We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment determining analyte concentration using Raman spectroscopy. A 2-fold cross-validation with a Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. One-way analysis of variance (ANOVA) was used to assess the difference in predictive power between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss, and its predictive power is comparable to that of PLS or PCR.
Lima, Clodoaldo A M; Coelho, André L V; Eisencraft, Marcio
2010-08-01
The electroencephalogram (EEG) signal captures the electrical activity of the brain and is an important source of information for studying neurological disorders. The proper analysis of this biological signal plays an important role in the domain of brain-computer interface, which aims at the construction of communication channels between human brain and computers. In this paper, we investigate the application of least squares support vector machines (LS-SVM) to the task of epilepsy diagnosis through automatic EEG signal classification. More specifically, we present a sensitivity analysis study by means of which the performance levels exhibited by standard and least squares SVM classifiers are contrasted, taking into account the setting of the kernel function and of its parameter value. Results of experiments conducted over different types of features extracted from a benchmark EEG signal dataset evidence that the sensitivity profiles of the kernel machines are qualitatively similar, both showing notable performance in terms of accuracy and generalization. In addition, the performance accomplished by optimally configured LS-SVM models is also quantitatively contrasted with that obtained by related approaches for the same dataset. Copyright 2010 Elsevier Ltd. All rights reserved.
Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil
Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris
2016-01-01
Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.
Joint cluster and non-negative least squares analysis for aerosol mass spectrum data
Energy Technology Data Exchange (ETDEWEB)
Zhang, T; Zhu, W [Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY 11794-3600 (United States); McGraw, R [Environmental Sciences Department, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)], E-mail: zhu@ams.sunysb.edu
2008-07-15
Aerosol mass spectrum (AMS) data contain hundreds of mass-to-charge ratios and their corresponding intensities from air collected through the mass spectrometer. The observations are usually taken sequentially in time to monitor the air composition, quality and temporal change in an area of interest. An important goal of AMS data analysis is to reduce the dimensionality of the original data, yielding a small set of representative tracers for various atmospheric and climatic models. In this work, we present an approach that jointly applies cluster analysis and the non-negative least squares method towards this goal. Application to a relevant study demonstrates the effectiveness of this new approach. Comparisons are made to other relevant multivariate statistical techniques, including principal component analysis and the positive matrix factorization method, and guidelines are provided.
Directory of Open Access Journals (Sweden)
Yang Xu
2016-02-01
Full Text Available Many complex traits are highly correlated rather than independent. By taking the correlation structure of multiple traits into account, joint association analyses can achieve both higher statistical power and more accurate estimation. To develop a statistical approach to joint association analysis that includes allele detection and genetic effect estimation, we combined multivariate partial least squares regression with variable selection strategies and selected the optimal model using the Bayesian Information Criterion (BIC. We then performed extensive simulations under varying heritabilities and sample sizes to compare the performance achieved using our method with those obtained by single-trait multilocus methods. Joint association analysis has measurable advantages over single-trait methods, as it exhibits superior gene detection power, especially for pleiotropic genes. Sample size, heritability, polymorphic information content (PIC, and magnitude of gene effects influence the statistical power, accuracy and precision of effect estimation by the joint association analysis.
Institute of Scientific and Technical Information of China (English)
Yang Xu; Wenming Hu; Zefeng Yang; Chenwu Xu
2016-01-01
Many complex traits are highly correlated rather than independent. By taking the correlation structure of multiple traits into account, joint association analyses can achieve both higher statistical power and more accurate estimation. To develop a statistical approach to joint association analysis that includes allele detection and genetic effect estimation, we combined multivariate partial least squares regression with variable selection strategies and selected the optimal model using the Bayesian Information Criterion (BIC). We then performed extensive simulations under varying heritabilities and sample sizes to compare the performance achieved using our method with those obtained by single-trait multilocus methods. Joint association analysis has measurable advantages over single-trait methods, as it exhibits superior gene detection power, especially for pleiotropic genes. Sample size, heritability, polymorphic information content (PIC), and magnitude of gene effects influence the statistical power, accuracy and precision of effect estimation by the joint association analysis.
Analysis of Shift and Deformation of Planar Surfaces Using the Least Squares Plane
Directory of Open Access Journals (Sweden)
Hrvoje Matijević
2006-12-01
Full Text Available Modern methods of measurement developed on the basis of advanced reflectorless distance measurement have paved the way for easier detection and analysis of shift and deformation. A large quantity of collected data points will often require a mathematical model of the surface that best fits them. Although this can be a complex task, in the case of planar surfaces it is easily done, enabling further processing and analysis of measurement results. The paper describes the fitting of a plane to a set of collected points using least squares, with outliers previously excluded via the RANSAC algorithm. Based on that, a method for analysis of the deformation and shift of planar surfaces is also described.
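The described pipeline — RANSAC to exclude outliers, then a least squares plane fit — can be sketched as follows. The fit minimizes orthogonal distances via the centroid and smallest singular vector; all points are synthetic, and the threshold and iteration count are illustrative choices, not the paper's:

```python
import numpy as np

# RANSAC outlier rejection followed by a total least squares plane fit.
rng = np.random.default_rng(6)

# Synthetic points on the plane z = 0.2x + 0.1y + 1, plus gross outliers
xy = rng.uniform(-1, 1, (200, 2))
z = 0.2 * xy[:, 0] + 0.1 * xy[:, 1] + 1.0 + 0.005 * rng.standard_normal(200)
pts = np.column_stack([xy, z])
pts[:10, 2] += 5.0                              # 10 points far off the plane

def plane_fit(p):
    """Least squares plane: returns (point on plane, unit normal)."""
    c = p.mean(axis=0)
    _, _, vt = np.linalg.svd(p - c)             # smallest singular vector = normal
    return c, vt[-1]

best_inliers = None
for _ in range(100):                            # RANSAC iterations
    sample = pts[rng.choice(len(pts), 3, replace=False)]
    c, n = plane_fit(sample)
    inliers = np.abs((pts - c) @ n) < 0.05      # distance threshold
    if best_inliers is None or inliers.sum() > best_inliers.sum():
        best_inliers = inliers

c, n = plane_fit(pts[best_inliers])             # final fit on inliers only
print(best_inliers.sum())                       # the 10 outliers are excluded
```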
Jafari, Masoumeh; Salimifard, Maryam; Dehghani, Maryam
2014-07-01
This paper presents an efficient method for identification of nonlinear Multi-Input Multi-Output (MIMO) systems in the presence of colored noises. The method studies multivariable nonlinear Hammerstein and Wiener models, in which the nonlinear memoryless block is approximated based on arbitrary vector-based basis functions. The linear time-invariant (LTI) block is modeled by an autoregressive moving average with exogenous input (ARMAX) model, which can effectively describe the moving average noises as well as the autoregressive and the exogenous dynamics. Owing to the multivariable nature of the system, a pseudo-linear-in-the-parameters model is obtained which includes two different kinds of unknown parameters, a vector and a matrix. Therefore, the standard least squares algorithm cannot be applied directly. To overcome this problem, a Hierarchical Least Squares Iterative (HLSI) algorithm is used to simultaneously estimate the vector and the matrix of unknown parameters as well as the noises. The efficiency of the proposed identification approach is investigated through three nonlinear MIMO case studies.
Adaptive Wavelet Methods for Linear and Nonlinear Least-Squares Problems
Stevenson, R.
2014-01-01
The adaptive wavelet Galerkin method for solving linear, elliptic operator equations introduced by Cohen et al. (Math Comp 70:27-75, 2001) is extended to nonlinear equations and is shown to converge with optimal rates without coarsening. Moreover, when an appropriate scheme is available for the appr
DEFF Research Database (Denmark)
Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel;
2014-01-01
Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for 2 5-wk periods. Eighty variables retrieved from AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week-summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3...
Research on mine noise sources analysis based on least squares wave-let transform
Institute of Scientific and Technical Information of China (English)
CHENG Gen-yin; YU Sheng-chen; CHEN Shao-jie; WEI Zhi-yong; ZHANG Xiao-chen
2010-01-01
In order to determine the characteristics of noise sources accurately, the noise distribution at different frequencies was determined, taking into account the differences between aerodynamic noise, mechanical noise and electrical noise in frequency and intensity. A least squares wavelet with high precision and particular effectiveness for strong-interference zones (multi-source noise) was designed, which is applicable to the strong noise produced in underground mines; with it, the distribution of noise in different frequency bands was obtained with good results. From the decomposition results, the characteristics of the noise sources can be determined more accurately, which lays a good foundation for focused and targeted follow-up noise control, and provides a widely applicable new method for noise testing and analysis.
Lmfit: Non-Linear Least-Square Minimization and Curve-Fitting for Python
Newville, Matthew; Stensitzki, Till; Allen, Daniel B.; Rawlik, Michal; Ingargiola, Antonino; Nelson, Andrew
2016-06-01
Lmfit provides a high-level interface to non-linear optimization and curve fitting problems for Python. Lmfit builds on and extends many of the optimization algorithms of scipy.optimize, especially the Levenberg-Marquardt method from optimize.leastsq. Its enhancements to optimization and data fitting problems include using Parameter objects instead of plain floats as variables, the ability to easily change fitting algorithms, improved estimation of confidence intervals, and curve fitting with the Model class. Lmfit includes many pre-built models for common lineshapes.
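The kind of iteration lmfit ultimately drives can be sketched without the library: a plain Gauss-Newton fit of a Gaussian lineshape in NumPy. Levenberg-Marquardt adds adaptive damping on top of this step; the example is a generic sketch, not lmfit's implementation:

```python
import numpy as np

# Gauss-Newton fit of a Gaussian lineshape: linearize the model around the
# current parameters and solve a linear least squares problem for the step.
rng = np.random.default_rng(7)

def model(p, x):
    amp, cen, wid = p
    return amp * np.exp(-(x - cen) ** 2 / (2 * wid ** 2))

x = np.linspace(-5, 5, 200)
p_true = np.array([3.0, 0.5, 1.2])
y = model(p_true, x) + 0.02 * rng.standard_normal(x.size)

p = np.array([1.0, 0.0, 1.0])                   # initial guess
for _ in range(50):
    r = y - model(p, x)                         # current residuals
    # Jacobian of the model by forward differences
    J = np.column_stack([(model(p + dp, x) - model(p, x)) / 1e-6
                         for dp in 1e-6 * np.eye(3)])
    p = p + np.linalg.lstsq(J, r, rcond=None)[0]

print(p)                                        # close to [3.0, 0.5, 1.2]
```

With lmfit the same fit is expressed declaratively through its Model class and Parameter objects, which also bound parameters and report confidence intervals.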
Simplified Least Squares Shadowing sensitivity analysis for chaotic ODEs and PDEs
Energy Technology Data Exchange (ETDEWEB)
Chater, Mario, E-mail: chaterm@mit.edu; Ni, Angxiu, E-mail: niangxiu@mit.edu; Wang, Qiqi, E-mail: qiqi@mit.edu
2017-01-15
This paper develops a variant of the Least Squares Shadowing (LSS) method, which has successfully computed derivatives for several chaotic ODEs and PDEs. The development in this paper aims to simplify the Least Squares Shadowing method by improving how time dilation is treated. Instead of adding an explicit time dilation term as in the original method, the new variant uses windowing, which can be more efficient and simpler to implement, especially for PDEs.
Lascola, Robert; O'Rourke, Patrick E; Kyser, Edward A
2017-01-01
We have developed a piecewise local (PL) partial least squares (PLS) analysis method for total plutonium measurements by absorption spectroscopy in nitric acid-based nuclear material processing streams. Instead of using a single PLS model that covers all expected solution conditions, the method selects one of several local models based on an assessment of solution absorbance, acidity, and Pu oxidation state distribution. The local models match the global model for accuracy against the calibration set, but were observed in several instances to be more robust to variations associated with measurements in the process. The improvements are attributed to the relative parsimony of the local models. Not all of the sources of spectral variation are uniformly present at each part of the calibration range. Thus, the global model is locally overfitting and susceptible to increased variance when presented with new samples. A second set of models quantifies the relative concentrations of Pu(III), (IV), and (VI). Standards containing a mixture of these species were not at equilibrium due to a disproportionation reaction. Therefore, a separate principal component analysis is used to estimate the concentrations of the individual oxidation states in these standards in the absence of independent confirmatory analysis. The PL analysis approach is generalizable to other systems where the analysis of chemically complicated systems can be aided by rational division of the overall range of solution conditions into simpler sub-regions.
Fu, Yuan-Yuan; Wang, Ji-Hua; Yang, Gui-Jun; Song, Xiao-Yu; Xu, Xin-Gang; Feng, Hai-Kuan
2013-05-01
The major limitation of existing vegetation indices for crop biomass estimation is that they asymptotically approach a saturation level for a certain range of biomass. In order to resolve this problem, band depth analysis and partial least squares regression (PLSR) were combined to establish a winter wheat biomass estimation model in the present study. The models based on the combination of band depth analysis and PLSR were subsequently compared with models based on common vegetation indices in terms of estimation accuracy. Band depth analysis was conducted in the visible spectral domain (550-750 nm). Band depth, band depth ratio (BDR), normalized band depth index, and band depth normalized to area were utilized to represent band depth information. Among the calibrated estimation models, those based on the combination of band depth analysis and PLSR reached higher accuracy than those based on the vegetation indices. Among them, the combination of BDR and PLSR achieved the highest accuracy (R2 = 0.792, RMSE = 0.164 kg x m(-2)). The results indicated that the combination of band depth analysis and PLSR can overcome the saturation problem and improve biomass estimation accuracy when winter wheat biomass is large.
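A one-block NIPALS-style PLS1 regression, the kind of model such band depth features feed into, can be sketched as follows (synthetic data with a few informative predictors; not the study's code):

```python
import numpy as np

# NIPALS-style PLS1: extract latent variables that maximize covariance
# with the response, deflating X and y after each component.
rng = np.random.default_rng(8)
n, m = 60, 20
X = rng.standard_normal((n, m))
beta = np.zeros(m)
beta[:3] = [1.0, -0.5, 0.25]                    # only 3 informative "bands"
y = X @ beta + 0.1 * rng.standard_normal(n)

Xr = X - X.mean(axis=0)                         # centered predictors (deflated below)
yr = y - y.mean()                               # centered response
scores, y_loads = [], []
for _ in range(3):                              # extract 3 latent variables
    w = Xr.T @ yr
    w = w / np.linalg.norm(w)                   # weight vector
    t = Xr @ w                                  # score vector
    p_load = Xr.T @ t / (t @ t)                 # X loading
    q = (yr @ t) / (t @ t)                      # y loading
    Xr = Xr - np.outer(t, p_load)               # deflate X
    yr = yr - q * t                             # deflate y
    scores.append(t)
    y_loads.append(q)

y_hat = y.mean() + np.column_stack(scores) @ np.array(y_loads)
r2 = 1.0 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)                                       # most of the variance explained
```

Because the latent variables are built from covariance with the response, a handful of components can capture a saturating, correlated predictor set where a single vegetation index cannot.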
Cong, Zhi-Bo; Sun, Lan-Xiang; Xin, Yong; Li, Yang; Qi, Li-Feng; Yang, Zhi-Jia
2014-02-01
In the present paper, both the partial least squares (PLS) method and the calibration curve (CC) method are used to quantitatively analyze laser-induced breakdown spectroscopy data obtained from standard alloy steel samples. Both major and trace elements were quantitatively analyzed. By comparing the results of the two calibration methods, some useful conclusions were obtained: for major elements, the PLS method is better than the CC method in quantitative analysis; more importantly, for trace elements, the CC method cannot give quantitative results due to the extremely weak characteristic spectral lines, but the PLS method still has good quantitative ability. The regression coefficients of the PLS method are compared with the original spectral data, with background interference, to explain the advantage of the PLS method in LIBS quantitative analysis. Results proved that the PLS method applied to laser-induced breakdown spectroscopy is suitable for quantitative analysis of trace elements such as C in the metallurgical industry.
Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M
2013-08-29
A number of imaging studies have reported neuroanatomical correlates of human intelligence based on various morphological characteristics of the cerebral cortex. However, it is not yet clear how well these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively by considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and cortical measurements calculated in native space for each subject to determine how much the combination of various cortical measures explained human intelligence. Since the cortical measures are not independent but highly inter-related, we applied partial least squares (PLS) regression, one of the most promising multivariate analysis approaches, to overcome multicollinearity among the cortical measures. Our results showed that 30% of the variance in FSIQ was explained by the first latent variable extracted from the PLS regression analysis. Although it is difficult to relate the first latent variable to specific anatomy, we found that cortical thickness measures had a substantial impact on the PLS model, making them the most significant factor accounting for FSIQ. Our results strongly suggest that a new predictor combining different morphometric properties of the complex cortical structure is well suited for predicting human intelligence. Copyright © 2013 IBRO. Published by Elsevier Ltd. All rights reserved.
Directory of Open Access Journals (Sweden)
Xun Chen
2013-01-01
Full Text Available Corticomuscular activity modeling based on multiple data sets such as electroencephalography (EEG) and electromyography (EMG) signals provides a useful tool for understanding human motor control systems. In this paper, we propose modeling corticomuscular activity by combining partial least squares (PLS) and canonical correlation analysis (CCA). The proposed method takes advantage of both PLS and CCA to ensure that the extracted components are maximally correlated across the two data sets while also explaining the information within each data set. This complementary combination generalizes the statistical assumptions beyond both the PLS and CCA methods. Simulations were performed to illustrate the performance of the proposed method. We also applied the proposed method to concurrent EEG and EMG data collected in a Parkinson’s disease (PD) study. The results reveal several highly correlated temporal patterns between the EEG and EMG signals and indicate meaningful corresponding spatial activation patterns. In PD subjects, enhanced connections between the occipital region and other regions are noted, which is consistent with previous medical knowledge. The proposed framework is a promising technique for performing multisubject and bimodal data analysis.
Lo, Yen-Li; Pan, Wen-Harn; Hsu, Wan-Lun; Chien, Yin-Chu; Chen, Jen-Yang; Hsu, Mow-Ming; Lou, Pei-Jen; Chen, I-How; Hildesheim, Allan; Chen, Chien-Jen
2016-01-01
Evidence on the association between dietary components, dietary patterns and nasopharyngeal carcinoma (NPC) is scarce. A major challenge is the high degree of correlation among dietary constituents. We aimed to identify a dietary pattern associated with NPC and to illustrate the dose-response relationship between the identified dietary pattern scores and the risk of NPC. Taking advantage of a matched NPC case-control study, data from a total of 319 incident cases and 319 matched controls were analyzed. The dietary pattern was derived by partial least squares discriminant analysis (PLS-DA) performed on energy-adjusted food frequencies derived from a 66-item food-frequency questionnaire. Odds ratios (ORs) and 95% confidence intervals (CIs) were estimated with multiple conditional logistic regression models, linking pattern scores and NPC risk. A high score on the PLS-DA-derived pattern was characterized by high intakes of fruits, milk, fresh fish, vegetables, tea, and eggs, ordered by loading values. We observed that a one-unit increase in the score was associated with a significantly lower risk of NPC (ORadj = 0.73, 95% CI = 0.60-0.88) after controlling for potential confounders. Similar results were observed among Epstein-Barr virus seropositive subjects. An NPC-protective diet is indicated by more phytonutrient-rich plant foods (fruits, vegetables), milk, other protein-rich foods (in particular fresh fish and eggs), and tea. This information may be used to design potential dietary regimens for NPC prevention.
Wong, Ka H; Razmovski-Naumovski, Valentina; Li, Kong M; Li, George Q; Chan, Kelvin
2013-10-01
The aims of the study were to differentiate Pueraria lobata from its related species Pueraria thomsonii and to examine the raw herbal material used in manufacturing kudzu root granules using partial least squares discriminant analysis (PLS-DA). Sixty-four raw materials of P. lobata and P. thomsonii and kudzu root-labelled granules were analysed by ultra-performance liquid chromatography. To differentiate P. lobata from P. thomsonii, PLS-DA models were constructed using variables selected from the entire chromatograms, by genetic algorithm (GA), by successive projection algorithm (SPA), from puerarin alone, and from six selected peaks. The models constructed by GA and SPA demonstrated superior classification ability and lower model complexity compared with the model based on the entire chromatographic matrix, whilst the model constructed from the six selected peaks was comparable to the entire chromatographic model. The model established from puerarin alone showed inferior classification ability. In addition, the PLS-DA models constructed from the entire chromatographic matrix, GA, SPA and the six selected peaks showed that four brands out of seventeen granules were mislabelled as P. lobata. In conclusion, PLS-DA is a promising procedure for differentiating Pueraria species and verifying the raw material used in commercial products.
Sensitivity analysis on chaotic dynamical systems by Non-Intrusive Least Squares Shadowing (NILSS)
Ni, Angxiu
2016-01-01
This paper develops the tangent Non-Intrusive Least Squares Shadowing (NILSS) method, which computes sensitivities for chaotic dynamical systems. In NILSS, a tangent solution is represented as a linear combination of an inhomogeneous tangent solution and some homogeneous tangent solutions. We then solve a least squares problem under this new representation. As a result, this new variant is easier to implement with existing solvers. For chaotic systems with many degrees of freedom but low-dimensional attractors, NILSS has low computational cost. NILSS is applied to two chaotic systems: the Lorenz 63 system, and a CFD simulation of a backward-facing step. The results show that NILSS computes the correct derivative at a lower cost than the conventional Least Squares Shadowing method and the conventional finite difference method.
Analysis of total least squares in estimating the parameters of a mortar trajectory
Energy Technology Data Exchange (ETDEWEB)
Lau, D.L.; Ng, L.C.
1994-12-01
Least Squares (LS) is a method of curve fitting used under the assumption that error exists only in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided modestly improved results, on the order of 10%, over the LS method.
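The LS/TLS distinction above can be made concrete in a few lines: LS assumes error only in the observation vector b, while TLS, computed from the SVD of the augmented matrix [A | b], also admits error in the data matrix A. This is a generic errors-in-variables sketch, not the mortar-trajectory model itself:

```python
import numpy as np

def ols(A, b):
    # Ordinary least squares: error assumed only in b
    return np.linalg.lstsq(A, b, rcond=None)[0]

def tls(A, b):
    # Total least squares via SVD of the augmented matrix [A | b]:
    # the solution comes from the right singular vector belonging
    # to the smallest singular value (Golub & Van Loan formulation)
    n = A.shape[1]
    V = np.linalg.svd(np.column_stack([A, b]))[2].T
    return -V[:n, n:] / V[n, n]

rng = np.random.default_rng(2)
x_true = np.array([2.0, -1.0])
A_clean = rng.normal(size=(200, 2))
b = A_clean @ x_true + rng.normal(scale=0.05, size=200)
A_noisy = A_clean + rng.normal(scale=0.05, size=A_clean.shape)  # error in the data matrix too

x_ls = ols(A_noisy, b)
x_tls = tls(A_noisy, b).ravel()
print("LS :", x_ls)
print("TLS:", x_tls)
```

Both estimators recover parameters close to x_true here; TLS becomes clearly preferable as the noise in A grows relative to the noise in b.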
Directory of Open Access Journals (Sweden)
Pudji Ismartini
2010-08-01
Full Text Available One of the major problems facing data modelling in the social sciences is multicollinearity. Multicollinearity can have a significant impact on the quality and stability of the fitted regression model. The common classical regression technique using the least squares estimate is highly sensitive to the multicollinearity problem. In such problem areas, partial least squares regression (PLSR) is a useful and flexible tool for statistical model building; however, PLSR only yields point estimates. This paper constructs interval estimates for the PLSR regression parameters by applying the jackknife technique to poverty data. A SAS macro programme is developed to obtain the jackknife interval estimator for PLSR.
[MEG]PLS: A pipeline for MEG data analysis and partial least squares statistics.
Cheung, Michael J; Kovačević, Natasa; Fatima, Zainab; Mišić, Bratislav; McIntosh, Anthony R
2016-01-01
The emphasis of modern neurobiological theories has recently shifted from the independent function of brain areas to their interactions in the context of whole-brain networks. As a result, neuroimaging methods and analyses have also increasingly focused on network discovery. Magnetoencephalography (MEG) is a neuroimaging modality that captures neural activity with a high degree of temporal specificity, providing detailed, time-varying maps of neural activity. Partial least squares (PLS) analysis is a multivariate framework that can be used to isolate distributed spatiotemporal patterns of neural activity that differentiate groups or cognitive tasks, to relate neural activity to behavior, and to capture large-scale network interactions. Here we introduce [MEG]PLS, a MATLAB-based platform that streamlines MEG data preprocessing, source reconstruction and PLS analysis in a single unified framework. [MEG]PLS facilitates MRI preprocessing (segmentation and coregistration); MEG preprocessing (filtering, epoching and artifact correction); MEG sensor analysis in both time and frequency domains; MEG source analysis with multiple head models and beamforming algorithms; and combines these with a suite of PLS analyses. The pipeline is open-source and modular, utilizing functions from FieldTrip (Donders, NL), AFNI (NIMH, USA), SPM8 (UCL, UK) and PLScmd (Baycrest, CAN), which are extensively supported and continually developed by their respective communities. [MEG]PLS is flexible, providing both a graphical user interface and command-line options, depending on the needs of the user. A visualization suite allows multiple types of data and analyses to be displayed and includes 4-D montage functionality. [MEG]PLS is freely available under the GNU public license (http://meg-pls.weebly.com).
Carlberg, Kevin
2010-10-28
A Petrov-Galerkin projection method is proposed for reducing the dimension of a discrete non-linear static or dynamic computational model in view of enabling its processing in real time. The right reduced-order basis is chosen to be invariant and is constructed using the Proper Orthogonal Decomposition method. The left reduced-order basis is selected to minimize the two-norm of the residual arising at each Newton iteration. Thus, this basis is iteration-dependent, enables capturing of non-linearities, and leads to the globally convergent Gauss-Newton method. To avoid the significant computational cost of assembling the reduced-order operators, the residual and action of the Jacobian on the right reduced-order basis are each approximated by the product of an invariant, large-scale matrix, and an iteration-dependent, smaller one. The invariant matrix is computed using a data compression procedure that meets proposed consistency requirements. The iteration-dependent matrix is computed to enable the least-squares reconstruction of some entries of the approximated quantities. The results obtained for the solution of a turbulent flow problem and several non-linear structural dynamics problems highlight the merit of the proposed consistency requirements. They also demonstrate the potential of this method to significantly reduce the computational cost associated with high-dimensional non-linear models while retaining their accuracy. © 2010 John Wiley & Sons, Ltd.
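The Gauss-Newton iteration mentioned above solves a linear least squares problem with the residual Jacobian at each step. A generic sketch on a toy exponential-fitting problem (not the reduced-order model of the paper):

```python
import numpy as np

def gauss_newton(r, J, x0, tol=1e-10, max_iter=50):
    """Minimise ||r(x)||^2 by solving a linear least squares
    problem with the Jacobian J(x) at each iteration."""
    x = x0.astype(float)
    for _ in range(max_iter):
        step = np.linalg.lstsq(J(x), -r(x), rcond=None)[0]
        x = x + step
        if np.linalg.norm(step) < tol:
            break
    return x

# Toy nonlinear model: fit y = a * exp(b * t) to noiseless data
t = np.linspace(0.0, 1.0, 30)
a_true, b_true = 2.0, -1.5
y = a_true * np.exp(b_true * t)

r = lambda x: x[0] * np.exp(x[1] * t) - y          # residual vector
J = lambda x: np.column_stack([np.exp(x[1] * t),   # d r / d a
                               x[0] * t * np.exp(x[1] * t)])  # d r / d b
x_hat = gauss_newton(r, J, np.array([1.0, 0.0]))
print(x_hat)
```

In the paper's setting the "residual" is the discrete nonlinear equation residual and the left reduced-order basis is chosen per iteration; a globalization strategy (line search or trust region) is usually added for robustness.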
Huang, Jie-Tsuen; Hsieh, Hui-Hsien
2011-01-01
The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…
Sarstedt, Marko; Henseler, Jörg; Ringle, Christian M.
2011-01-01
Purpose – Partial least squares (PLS) path modeling has become a pivotal empirical research method in international marketing. Owing to group comparisons' important role in research on international marketing, we provide researchers with recommendations on how to conduct multigroup analyses in PLS p
Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.
Yuan, Ke-Hai; Bentler, Peter M.
2000-01-01
Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
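The iteratively reweighted least squares scheme can be sketched generically; the version below uses Huber weights and a MAD scale estimate, which is a simplification and not the first- and second-order moment distance weighting used in the paper:

```python
import numpy as np

def irls_huber(X, y, c=1.345, n_iter=30):
    """Robust regression via iteratively reweighted least squares
    with Huber weights: w = 1 for small residuals, c/|r| otherwise."""
    w = np.ones(len(y))
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
        resid = y - X @ beta
        scale = np.median(np.abs(resid)) / 0.6745 + 1e-12  # robust scale (MAD)
        u = np.abs(resid / scale)
        w = np.where(u <= c, 1.0, c / u)
    return beta

rng = np.random.default_rng(4)
X = np.column_stack([np.ones(100), rng.normal(size=100)])
y = X @ np.array([1.0, 3.0]) + rng.normal(scale=0.3, size=100)
y[:5] += 20.0                          # gross outliers
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]
beta_irls = irls_huber(X, y)
print("OLS :", beta_ols)    # intercept pulled up by the outliers
print("IRLS:", beta_irls)   # close to the true [1.0, 3.0]
```

In the structural equation modeling context of the abstract, the residual-based weight is replaced by a case weight derived from a Mahalanobis-type distance on first and second order moments, but the reweighted solve is the same idea.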
Boccard, Julien; Rudaz, Serge
2016-05-12
Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences remains often a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework, is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models.
Multilocus association testing of quantitative traits based on partial least-squares analysis.
Directory of Open Access Journals (Sweden)
Feng Zhang
Full Text Available Because it combines the genetic information of multiple loci, multilocus association studies (MLAS) are expected to be more powerful than single locus association studies (SLAS) in disease gene mapping. However, some researchers found that MLAS had similar or reduced power relative to SLAS, which was partly attributed to the increased degrees of freedom (dfs) in MLAS. Based on partial least-squares (PLS) analysis, we develop an MLAS approach that avoids large dfs in MLAS. In this approach, genotypes are first decomposed into PLS components that not only capture the majority of the genetic information of multiple loci, but are also relevant to the target traits. The extracted PLS components are then regressed on target traits to detect association under multilinear regression. Simulation studies based on real data from the HapMap project were used to assess the performance of our PLS-based MLAS as well as other popular multilinear regression-based MLAS approaches under various scenarios, considering genetic effects and the linkage disequilibrium structure of candidate genetic regions. Using the PLS-based MLAS approach, we conducted a genome-wide MLAS of lean body mass, and compared it with our previous genome-wide SLAS of lean body mass. Simulations and real data analyses support the improved power of our PLS-based MLAS in disease gene mapping relative to the three other MLAS approaches investigated in this study. We aim to provide an effective and powerful MLAS approach, which may help to overcome the limitations of SLAS in disease gene mapping.
Rouède, Denis; Bellanger, Jean-Jacques; Bomo, Jérémy; Baffet, Georges; Tiaho, François
2015-05-18
A linear least squares (LLS) method is proposed for processing polarization-dependent SHG intensity analysis at pixel-resolution level, providing an analytic solution for the nonlinear susceptibility χ(2) coefficients and the fibril orientation. The model is applicable to fibrils with identical orientation in the excitation volume. It has been validated on type I collagen fibrils from cell-free gels, tendon and the extracellular matrix of F1 biliary epithelial cells. LLS is fast (a few hundred milliseconds for a 512 × 512 pixel image) and very easy to perform for non-experts in numerical signal processing. Theoretical simulation highlights the importance of the signal-to-noise ratio for accurate determination of the nonlinear susceptibility χ(2) coefficients. The results also suggest that, in addition to the peptide group, a second molecular nonlinear optical hyperpolarizability β contributes to the SHG signal. Finally, fibril orientation analysis shows that F1 cells remodel extracellular matrix collagen fibrils by changing fibril orientation, which might have an important physiological function in cell migration and communication.
Ning, Hanwen; Qing, Guangyan; Jing, Xingjian
2016-11-01
The identification of nonlinear spatiotemporal dynamical systems given by partial differential equations has attracted a lot of attention in the past decades. Several methods, such as searching principle-based algorithms, partially linear kernel methods, and coupled lattice methods, have been developed to address the identification problems. However, most existing methods have some restrictions on sampling processes in that the sampling intervals should usually be very small and uniformly distributed in spatiotemporal domains. These are actually not applicable for some practical applications. In this paper, to tackle this issue, a novel kernel-based learning algorithm named integral least square regularization regression (ILSRR) is proposed, which can be used to effectively achieve accurate derivative estimation for nonlinear functions in the time domain. With this technique, a discretization method named inverse meshless collocation is then developed to realize the dimensional reduction of the system to be identified. Thereafter, with this novel inverse meshless collocation model, the ILSRR, and a multiple-kernel-based learning algorithm, a multistep identification method is systematically proposed to address the identification problem of spatiotemporal systems with pointwise nonuniform observations. Numerical studies for benchmark systems with necessary discussions are presented to illustrate the effectiveness and the advantages of the proposed method.
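The core of kernel-based regularized least squares regression (the broad family that ILSRR builds on; this is a generic sketch, not the paper's integral formulation or its meshless collocation step) is a single regularized linear solve:

```python
import numpy as np

def kernel_rls(x_train, y_train, x_test, gamma=50.0, lam=1e-4):
    """Kernel regularized least squares on 1-D inputs:
    solve (K + lam*I) alpha = y, then f(x) = sum_i alpha_i k(x, x_i)."""
    def k(a, b):
        # Gaussian kernel on scalar inputs
        return np.exp(-gamma * (a[:, None] - b[None, :]) ** 2)
    K = k(x_train, x_train)
    alpha = np.linalg.solve(K + lam * np.eye(len(x_train)), y_train)
    return k(x_test, x_train) @ alpha

x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x)            # smooth target function
x_new = np.array([0.25, 0.75])
print(kernel_rls(x, y, x_new))       # close to [1.0, -1.0]
```

The regularization parameter lam trades data fit against smoothness; the paper's contribution is, in part, choosing the loss so that accurate derivative estimates are obtained even from pointwise nonuniform samples.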
Energy Technology Data Exchange (ETDEWEB)
Blumberg, L.N. [Brookhaven National Lab., Upton, NY (United States). National Synchrotron Light Source
1992-03-01
The authors have analyzed simulated magnetic measurements data for the SXLS bending magnet in a plane perpendicular to the reference axis at the magnet midpoint by fitting the data to an expansion solution of the 3-dimensional Laplace equation in curvilinear coordinates, as proposed by Brown and Servranckx. The method of least squares is used to evaluate the expansion coefficients and their uncertainties, and compared to results from an FFT fit of 128 simulated data points on a 12-mm radius circle about the reference axis. They find that the FFT method gives smaller coefficient uncertainties than the Least Squares method when the data lie within similar areas. The Least Squares method compares more favorably when a larger number of data points are used within a rectangular area of 30-mm vertical by 60-mm horizontal, perhaps the largest area within the 35-mm x 75-mm vacuum chamber for which data could be obtained. For a grid with 0.5-mm spacing within the 30 x 60 mm area the Least Squares fit gives much smaller uncertainties than the FFT. They are therefore in the favorable position of having two methods which can determine the multipole coefficients to much better accuracy than the tolerances specified to General Dynamics. The FFT method may be preferable since it requires only one Hall probe rather than the four envisioned for the least squares grid data. However, least squares can attain better accuracy with fewer probe movements. The time factor in acquiring the data will likely be the determining factor in the choice of method. They should further explore least squares analysis of a Fourier expansion of data on a circle or arc of a circle, since that method gives coefficient uncertainties without the need for multiple independent sets of data as required by the FFT method.
Angelis, Georgios I; Matthews, Julian C; Kotasidis, Fotis A; Markiewicz, Pawel J; Lionheart, William R; Reader, Andrew J
2014-11-01
Estimation of nonlinear micro-parameters is a computationally demanding and fairly challenging process, since it involves the use of rather slow iterative nonlinear fitting algorithms and often results in very noisy voxel-wise parametric maps. Direct reconstruction algorithms can provide parametric maps with reduced variance, but usually the overall reconstruction is impractically time-consuming with common nonlinear fitting algorithms. In this work we employed a recently proposed direct parametric image reconstruction algorithm to estimate the parametric maps of all micro-parameters of a two-tissue compartment model, used to describe the kinetics of [18F]FDG. The algorithm decouples the tomographic and kinetic modelling problems, allowing the use of previously developed post-reconstruction methods, such as the generalised linear least squares (GLLS) algorithm. Results on both clinical and simulated data showed that the proposed direct reconstruction method provides considerable quantitative and qualitative improvements for all micro-parameters compared to the conventional post-reconstruction fitting method. Additionally, region-wise comparison of all parametric maps against the well-established filtered back projection followed by post-reconstruction nonlinear fitting, as well as the direct Patlak method, showed substantial quantitative agreement in all regions. The proposed direct parametric reconstruction algorithm is a promising approach towards the estimation of all individual micro-parameters of any compartment model. In addition, due to the linearised nature of the GLLS algorithm, the fitting step can be implemented very efficiently and therefore does not considerably affect the overall reconstruction time.
Sze, K. H.; Barsukov, I. L.; Roberts, G. C. K.
A procedure for quantitative evaluation of cross-peak volumes in spectra of any order of dimensions is described; this is based on a generalized algorithm for combining appropriate one-dimensional integrals obtained by nonlinear-least-squares curve-fitting techniques. This procedure is embodied in a program, NDVOL, which has three modes of operation: a fully automatic mode, a manual mode for interactive selection of fitting parameters, and a fast reintegration mode. The procedures used in the NDVOL program to obtain accurate volumes for overlapping cross peaks are illustrated using various simulated overlapping cross-peak patterns. The precision and accuracy of the estimates of cross-peak volumes obtained by application of the program to these simulated cross peaks and to a back-calculated 2D NOESY spectrum of dihydrofolate reductase are presented. Examples are shown of the use of the program with real 2D and 3D data. It is shown that the program is able to provide excellent estimates of volume even for seriously overlapping cross peaks with minimal intervention by the user.
Elkhoudary, Mahmoud M; Abdel Salam, Randa A; Hadad, Ghada M
2014-09-15
Metronidazole (MNZ) is a widely used antibacterial and amoebicide drug. It is therefore important to develop a rapid and specific analytical method for the determination of MNZ in mixtures with Spiramycin (SPY), Diloxanide (DIX) and Clioquinol (CLQ) in pharmaceutical preparations. This work describes six simple, sensitive and reliable multivariate calibration methods, namely linear and nonlinear artificial neural networks preceded by genetic algorithm (GA-ANN) and principal component analysis (PCA-ANN), as well as partial least squares (PLS) either alone or preceded by genetic algorithm (GA-PLS), for the UV spectrophotometric determination of MNZ, SPY, DIX and CLQ in pharmaceutical preparations with no interference from pharmaceutical additives. The results illustrate the problem of nonlinearity and how models such as ANN can handle it. The analytical performance of these methods was statistically validated with respect to linearity, accuracy, precision and specificity. The developed methods demonstrate the ability of the aforementioned multivariate calibration models to handle and resolve the UV spectra of the four-component mixtures using a simple and widely used UV spectrophotometer.
Institute of Scientific and Technical Information of China (English)
Chen Zhao; Weiguo Gao; Jungong Xue
2007-01-01
A structured perturbation analysis of the least squares problem is considered in this paper. The new error bound proves to be sharper than that for general perturbations. We apply the new error bound to study sensitivity of changing the knots for curve fitting of interest rate term structure by cubic spline. Numerical experiments are given to illustrate the sharpness of this bound.
Least Squares Data Fitting with Applications
DEFF Research Database (Denmark)
Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela
As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data predictively. The main concern of Least Squares Data Fitting with Applications is how to do this on a computer with efficient and robust computational methods for linear and nonlinear relationships. The presentation also establishes a link between the statistical setting and the computational issues...... with problems of linear and nonlinear least squares fitting will find this book invaluable as a hands-on guide, with accessible text and carefully explained problems. Included are • an overview of computational methods together with their properties and advantages • topics from statistical regression analysis......
Kang, Gumin; Lee, Kwangchil; Park, Haesung; Lee, Jinho; Jung, Youngjean; Kim, Kyoungsik; Son, Boongho; Park, Hyoungkuk
2010-06-15
Mixed hydrofluoric and nitric acids are widely used as a good etchant for the pickling process of stainless steels. Cost reduction and procedure optimization in the manufacturing process can be facilitated by optically detecting the concentration of the mixed acids. In this work, we developed a novel method which allows us to obtain the concentrations of mixed hydrofluoric acid (HF) and nitric acid (HNO3) samples with high accuracy. The experiments were carried out on mixed acids consisting of HF (0.5-3 wt%) and HNO3 (2-12 wt%) at room temperature. Fourier transform Raman spectroscopy was utilized to measure the concentrations of the mixed acids HF and HNO3, because the mixture sample has several strong Raman bands caused by the vibrational mode of each acid in this spectral region. The calibration of the spectral data was performed using the partial least squares regression method, which is well suited to local-range data treatment. Several figures of merit (FOM) were calculated using the concept of net analyte signal (NAS) to evaluate the performance of our methodology.
Institute of Scientific and Technical Information of China (English)
SHAO, Xueguang; CHEN, Da; XU, Heng; LIU, Zhichao; CAI, Wensheng
2009-01-01
Partial least-squares (PLS) regression has been presented as a powerful tool for spectral quantitative measurement. However, the robustness and stability of PLS models still need improvement, because it is difficult to build a stable model when complex samples are analyzed or when the calibration data set contains outliers. To this end, a robust ensemble PLS technique based on probability resampling, named RE-PLS, was proposed. In the proposed method, a probability is first obtained for each calibration sample from its residual in a robust regression. Then, multiple PLS models are constructed by probability resampling. Finally, the multiple PLS models are used to predict unknown samples, taking the average of their predictions as the final result. To validate the effectiveness and generality of the proposed method, it was applied to two different sets of NIR spectra. The results show that RE-PLS can not only effectively avoid the interference of outliers but also enhance the precision of prediction and the stability of PLS regression. It may thus provide a useful tool for multivariate calibration in the presence of multiple outliers.
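The RE-PLS recipe above (residual-based probabilities, probability resampling, averaging over refits) can be sketched with ordinary least squares standing in for the PLS base model, a deliberate simplification; the outlier, weighting function, and resample count below are invented for the demo:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50
x = np.linspace(0.0, 1.0, n)
X = np.column_stack([np.ones(n), x])
y = 1.0 + 2.0 * x + rng.normal(0, 0.05, n)
y[0] += 30.0                                # one gross outlier in the calibration set

# Plain OLS fit (pulled off course by the outlier)
beta_ols = np.linalg.lstsq(X, y, rcond=None)[0]

# Residual-based resampling probabilities: large residual -> low probability
r = y - X @ beta_ols
s = np.median(np.abs(r - np.median(r))) / 0.6745    # robust scale (MAD)
p = 1.0 / (1.0 + (r / s) ** 2)
p /= p.sum()

# Ensemble: refit on probability-weighted resamples and average the fits
betas = []
for _ in range(200):
    idx = rng.choice(n, size=n, replace=True, p=p)
    betas.append(np.linalg.lstsq(X[idx], y[idx], rcond=None)[0])
beta_ens = np.mean(betas, axis=0)
```

Because the outlier is almost never resampled, the averaged fit stays close to the clean line while the single OLS fit does not.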
A weighted least squares analysis of globalization and the Nigerian stock market performance
Directory of Open Access Journals (Sweden)
Alenoghena Osi Raymond
2013-12-01
Full Text Available The study empirically investigates the impact of globalization on the performance of the Nigerian stock market. It seeks to verify the existence of a linking mechanism between globalization (through trade openness, net inflow of capital, and participation in the international capital market), financial development, and stock market performance over the period 1981 to 2011. The methodology examines the stochastic characteristics of each time series by testing stationarity using the Im, Pesaran and Shin W-stat test. Weighted least squares regression was employed to ascertain the respective levels of impact. The findings were reinforced by the presence of a long-term equilibrium relationship, as evidenced by the cointegrating equation of the VECM. The model showed that the globalization variables had a positive impact on stock market performance, with net capital inflows and participation in the international capital market having the greater impact during the period under review. Accordingly, it is advised that in formulating foreign policy, policy makers should take strategic views on the international economy and craft creative policies that foster economic integration between Nigeria and its existing trade allies. Such policies would also help create avenues for new trade agreements with other nations that hitherto were not trade partners with Nigeria.
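Weighted least squares itself is a one-line closed form, beta = (X^T W X)^{-1} X^T W y. A minimal sketch with invented data (not the study's econometric model); in practice the weights would typically be set to the reciprocal error variances:

```python
import numpy as np

def wls(X, y, w):
    """Weighted least squares: minimize sum_i w_i * (y_i - x_i @ beta)^2."""
    Xw = X * w[:, None]                     # rows of X scaled by their weights
    return np.linalg.solve(X.T @ Xw, Xw.T @ y)

rng = np.random.default_rng(2)
n = 40
X = np.column_stack([np.ones(n), rng.normal(size=n)])
beta_true = np.array([0.5, -1.2])
y = X @ beta_true                           # exact linear data for the sanity check
w = rng.uniform(0.1, 2.0, n)                # arbitrary positive weights
beta_hat = wls(X, y, w)
```

On exact linear data any set of positive weights recovers the true coefficients, which is a convenient correctness check before applying the estimator to heteroscedastic data.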
Directory of Open Access Journals (Sweden)
Vasileios A. Tzanakakis
2014-12-01
Full Text Available Partial least squares regression (PLSR) can integrate a great number of variables and overcome collinearity problems, which makes it suitable for intensive agronomic practices such as land application. In the present study a PLSR model was developed to predict important management goals, including biomass production and nutrient recovery (i.e., nitrogen and phosphorus), associated with treatment potential, environmental impacts, and economic benefits. Effluent loading and a considerable number of soil parameters commonly monitored in effluent-irrigated lands were considered as potential predictor variables during model development. All data were derived from a three-year field trial including plantations of four different plant species (Acacia cyanophylla, Eucalyptus camaldulensis, Populus nigra, and Arundo donax) irrigated with pre-treated domestic effluent. The PLSR method was very effective despite the small sample size and the wide nature of the data set (with many highly correlated inputs and several highly correlated responses). Through the PLSR method the number of initial predictor variables was reduced, and only a few variables remained in the final PLSR model. The important input variables retained were: effluent loading, electrical conductivity (EC), available phosphorus (Olsen-P), Na+, Ca2+, Mg2+, K+, SAR, and NO3−-N. Among these, effluent loading, EC, and nitrates made the greatest contribution to the final PLSR model. PLSR is highly compatible with intensive agronomic practices such as land application, in which a large number of highly collinear and noisy input variables are monitored to assess plant species performance and to detect impacts on the environment.
Bellomarino, S A; Parker, R M; Conlan, X A; Barnett, N W; Adams, M J
2010-09-23
HPLC with acidic potassium permanganate chemiluminescence detection was employed to analyse 17 Cabernet Sauvignon wines across a range of vintages (1971-2003). Partial least squares regression analysis and principal components analysis were used to investigate the relationship between wine composition and vintage. Tartaric acid, vanillic acid, catechin, sinapic acid, ethyl gallate, myricetin, procyanidin B and resveratrol were found to be important components in terms of differences between the vintages.
Xing, Jie; Yuan, Shu-chun; Sun, Hui-min; Fan, Ma-li; Li, Zhen-yu; Qin, Xue-mei
2015-08-01
A 1H NMR metabonomics approach was used to reveal the chemical differences in urine between patients with Xiao-Chaihu-Tang syndrome (XCHTS) and healthy participants (HP). The partial least squares method was used to establish a model to distinguish the patients from the healthy controls. Thirty-four endogenous metabolites were identified in the 1H NMR spectrum, and orthogonal partial least squares discriminant analysis (OPLS-DA) showed that the urine of patients and of healthy participants could be separated clearly, indicating that the metabolic profile of patients with Xiao-Chaihu-Tang syndrome was changed markedly. Fifteen metabolites were found by the S-plot of the OPLS-DA model and VIP values. The contents of leucine, formic acid, glycine, hippuric acid and uracil increased in the urine of patients, while threonine, 2-hydroxyisobutyrate, acetamide, 2-oxoglutarate, citric acid, dimethylamine, malonic acid, betaine, trimethylamine oxide, phenylacetyl glycine, and uridine decreased. These metabolites are involved in intestinal microbial balance, energy metabolism, and amino acid metabolism pathways, which are related to the major symptoms of Xiao-Chaihu-Tang syndrome. Patients with Xiao-Chaihu-Tang syndrome could be identified and predicted correctly using the established partial least squares model. This study could serve as a basis for the accurate diagnosis and rational treatment of Xiao-Chaihu-Tang syndrome.
Miranian, A; Abdollahzade, M
2013-02-01
Local modeling approaches, owing to their ability to model different operating regimes of nonlinear systems and processes by independent local models, are appealing for modeling, identification, and prediction applications. In this paper, we propose a local neuro-fuzzy (LNF) approach based on least-squares support vector machines (LSSVMs). The proposed LNF approach employs LSSVMs, which are powerful in modeling and predicting time series, as local models and uses a hierarchical binary tree (HBT) learning algorithm for fast and efficient estimation of its parameters. The HBT algorithm heuristically partitions the input space into smaller subdomains by axis-orthogonal splits. In each partitioning, the validity functions automatically form a unity partition, so normalization side effects, e.g., reactivation, are prevented. Integration of LSSVMs into the LNF network as local models, along with the HBT learning algorithm, yields a high-performance approach for modeling and prediction of complex nonlinear time series. The proposed approach is applied to modeling and prediction of different nonlinear and chaotic real-world and hand-designed systems and time series. Analysis of the prediction results and comparisons with previous studies demonstrate the promising performance of the proposed LNF approach with the HBT learning algorithm for modeling and prediction of nonlinear and chaotic systems and time series.
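The LS-SVM local model at the heart of this approach reduces to a single linear solve. A hedged sketch of a plain global LS-SVM regressor with an RBF kernel (not the paper's hierarchical LNF network; all parameter values below are invented):

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    # Squared distances between every pair of rows, then Gaussian kernel
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma, sigma):
    """LS-SVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                  # bias b, dual coefficients alpha

def lssvm_predict(X_train, b, alpha, sigma, X_new):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# Toy regression: fit a sine curve
x = np.linspace(0.0, np.pi, 40)[:, None]
y = np.sin(x).ravel()
b, alpha = lssvm_fit(x, y, gamma=1e3, sigma=0.5)
y_fit = lssvm_predict(x, b, alpha, 0.5, x)
rmse = float(np.sqrt(np.mean((y_fit - y) ** 2)))
```

Unlike a standard SVM, every training point gets a (dense) dual coefficient, which is the trade-off LS-SVMs make for replacing the quadratic program with a linear system.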
van der Burg, Eeke; de Leeuw, Jan; Verdegaal, R.
1986-01-01
Homogeneity analysis, or multiple correspondence analysis, is usually applied to k separate variables. In this paper, it is applied to sets of variables by using sums within sets. The resulting technique is referred to as OVERALS. It uses the notion of optimal scaling, with transformations that can
Hecht, Jeffrey B.
The analysis of regression residuals and detection of outliers are discussed, with emphasis on determining how deviant an individual data point must be to be considered an outlier and the impact that multiple suspected outlier data points have on the process of outlier determination and treatment. Only bivariate (one dependent and one independent)…
Martin, Y. L.
The performance of quantitative analysis of 1D NMR spectra depends greatly on the choice of the NMR signal model. Complex least-squares analysis is well suited for optimizing the quantitative determination of spectra containing a limited number of signals (20). From a general point of view it is concluded, on the basis of mathematical considerations and numerical simulations, that, in the absence of truncation of the free-induction decay, complex least-squares curve fitting either in the time or in the frequency domain and linear-prediction methods are in fact nearly equivalent and give identical results. However, in the situation considered, complex least-squares analysis in the frequency domain is more flexible since it enables the quality of convergence to be appraised at every resonance position. An efficient data-processing strategy has been developed which makes use of an approximate conjugate-gradient algorithm. All spectral parameters (frequency, damping factors, amplitudes, phases, initial delay associated with intensity, and phase parameters of a baseline correction) are simultaneously managed in an integrated approach which is fully automatable. The behavior of the error as a function of the signal-to-noise ratio is theoretically estimated, and the influence of apodization is discussed. The least-squares curve fitting is theoretically proved to be the most accurate approach for quantitative analysis of 1D NMR data acquired with reasonable signal-to-noise ratio. The method enables complex spectral residuals to be sorted out. These residuals, which can be cumulated thanks to the possibility of correcting for frequency shifts and phase errors, extract systematic components, such as isotopic satellite lines, and characterize the shape and the intensity of the spectral distortion with respect to the Lorentzian model. This distortion is shown to be nearly independent of the chemical species, of the nature of the molecular site, and of the type of nucleus, but
Quasi-least squares regression
Shults, Justine
2014-01-01
Drawing on the authors' substantial expertise in modeling longitudinal and clustered data, Quasi-Least Squares Regression provides a thorough treatment of quasi-least squares (QLS) regression-a computational approach for the estimation of correlation parameters within the framework of generalized estimating equations (GEEs). The authors present a detailed evaluation of QLS methodology, demonstrating the advantages of QLS in comparison with alternative methods. They describe how QLS can be used to extend the application of the traditional GEE approach to the analysis of unequally spaced longitu
Pagiatakis, Spiros D.; Yin, Hui; El-Gelil, Mahmoud Abd
2007-02-01
We develop a new approach for the spectral analysis of superconducting gravimeter data to search for the spheroidal oscillation 1S1 of the Earth's solid inner core. The new method, which we call least-squares (LS) self-coherency analysis, is based on the product of the least-squares spectra of segments of the time series under consideration. The statistical foundation of this method is presented in the new least-squares product spectrum theorem, which rigorously establishes confidence levels for detecting significant peaks. We apply this approach, along with a number of other innovative ideas, to a 6-year gravity series collected at the Canadian Superconducting Gravimeter Installation (CSGI) in Cantley, Canada, by splitting it into 72 statistically independent monthly records. Each monthly record is analysed spectrally, and all monthly LS spectra are multiplied to construct the self-coherency spectrum of the 6-year gravity series. The self-coherency spectrum is then used to detect significant peaks in the band 3-7 h at various significance levels, with the aim of identifying a triplet of periods associated with the rotational/ellipsoidal splitting of 1S1 (Slichter triplet). Of all the Slichter periods predicted by various researchers so far, Smylie's triplet appears to be the best supported, albeit very weakly, both before and after the atmospheric pressure effect is removed from the series. Using the viscous splitting law [Smylie, D.E., 1992. The inner core translational triplet and the density near Earth's center. Science 255, 1678-1682] as a guide, we can also see one interesting and statistically significant triplet with periods A = {4.261 h, 4.516 h, 4.872 h}, which changes slightly to A' = {4.269 h, 4.516 h, 4.889 h} after the atmospheric pressure correction is applied to the gravity series.
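The core idea, multiplying the spectra of independent segments so that only persistent peaks survive, can be sketched with ordinary FFT periodograms standing in for the authors' least-squares spectra (which additionally handle unequally spaced data); the sampling rate, frequency, and noise level below are invented:

```python
import numpy as np

rng = np.random.default_rng(3)
fs = 1.0                                    # samples per hour (hypothetical)
n_seg, seg_len = 12, 256                    # 12 "monthly" segments
f0 = 0.125                                  # true oscillation frequency (cycles/hour)
t = np.arange(n_seg * seg_len) / fs
series = np.sin(2 * np.pi * f0 * t) + rng.normal(0, 2.0, t.size)

# Periodogram of each segment, then the product spectrum across segments.
# Work in log space to avoid underflow when many spectra are multiplied.
segments = series.reshape(n_seg, seg_len)
log_prod = np.zeros(seg_len // 2)
for seg in segments:
    p = np.abs(np.fft.rfft(seg - seg.mean()))[: seg_len // 2] ** 2
    log_prod += np.log(p + 1e-300)
freqs = np.fft.rfftfreq(seg_len, d=1 / fs)[: seg_len // 2]
peak_freq = freqs[np.argmax(log_prod)]
```

A peak must persist in every segment to survive the product; a large spike in one segment alone is suppressed, which is exactly the self-coherency rationale.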
Cervellino, A.; Giannini, C.; Guagliardi, A.; Zanchet, D.
2004-10-01
Powder samples of thiol-capped gold nanoparticles in the size range of 2-4 nm were quantitatively characterized by means of synchrotron X-ray diffraction data with respect to their structure, size and strain distributions. A novel Rietveld-like approach was applied, refining the domain size distribution, strain-size dependence and structure-type concentrations. Three structure types (cuboctahedron, icosahedron, decahedron) were considered in this analysis, and a detailed study of the strain content was performed by comparing different models. The results showed a strong influence of the strain model, and a careful analysis is presented. The final domain size and strain distributions agree well with the existence of both single-domain and imperfectly formed or multi-domain nanoparticles, but the final strain profiles seem to be mostly related to the different degrees of structural perfection at different sizes as a result of the synthesis process. The present work represents an important step towards the development of robust methods to determine strain profiles in nanosystems, aiming to complete the description of these important but complex systems.
Han, Bangxing; Peng, Huasheng; Yan, Hui
2016-01-01
Mugua is a common Chinese herbal medicine. There are three main medicinal origin places in China (Xuancheng City, Anhui Province; Qijiang District, Chongqing City; and Yichang City, Hubei Province) and a food-grade origin place, Linyi City, Shandong Province. The aim was to construct a qualitative analytical method to identify the origin of medicinal Mugua by near-infrared spectroscopy (NIRS). A partial least squares discriminant analysis (PLSDA) model was established after the spectra of Mugua derived from five different origins were preprocessed, and hierarchical cluster analysis was performed. According to the relationship between the origin-related important scores and wavenumber, together with K-means cluster analysis, the Muguas derived from different origins were effectively identified. NIRS technology can quickly and accurately identify the origin of Mugua, providing a new method and technology for the identification of Chinese medicinal materials. After preprocessing by D1+autoscale, more peaks appeared in the near-infrared spectrum of Mugua. Five latent variable scores could reflect the information related to the origin place of Mugua, and the origins of Mugua were well distinguished by K-means clustering analysis. Abbreviations used: TCM: Traditional Chinese Medicine; NIRS: near-infrared spectroscopy; SG: Savitzky-Golay smoothing; D1: first derivative; D2: second derivative; SNV: standard normal variate transformation; MSC: multiplicative scatter correction; PLSDA: partial least squares discriminant analysis; LV: latent variable; VIP scores: variable importance scores.
A Modified Quasi-Newton Method for Nonlinear Least Squares Problems
Institute of Scientific and Technical Information of China (English)
吴淦洲
2011-01-01
A modified quasi-Newton method for nonlinear least squares problems is proposed. By combining a non-monotone line search technique with a structured quasi-Newton method, we establish a modified quasi-Newton method for nonlinear least squares problems, and the global convergence of the algorithm is proved.
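For comparison, the classic Gauss-Newton iteration for nonlinear least squares (the baseline that such quasi-Newton methods refine) can be sketched with a simple backtracking line search; the exponential model and starting point below are invented:

```python
import numpy as np

def gauss_newton(residual, jacobian, p0, n_iter=100):
    """Gauss-Newton for min ||r(p)||^2 with backtracking to keep the cost decreasing."""
    p = np.asarray(p0, dtype=float)
    for _ in range(n_iter):
        r = residual(p)
        J = jacobian(p)
        step = np.linalg.solve(J.T @ J, J.T @ r)   # normal-equations step
        t = 1.0
        while np.sum(residual(p - t * step) ** 2) > np.sum(r ** 2) and t > 1e-8:
            t /= 2.0                               # halve the step until the cost drops
        p = p - t * step
    return p

# Fit y = a * exp(b * x) to exact synthetic data generated with a=2, b=-1
x = np.linspace(0.0, 2.0, 25)
y = 2.0 * np.exp(-1.0 * x)
residual = lambda p: p[0] * np.exp(p[1] * x) - y
jacobian = lambda p: np.column_stack([np.exp(p[1] * x),
                                      p[0] * x * np.exp(p[1] * x)])
p_hat = gauss_newton(residual, jacobian, p0=[1.0, -0.5])
```

On a zero-residual problem like this one, Gauss-Newton converges rapidly near the solution; structured quasi-Newton variants aim to keep that behavior when residuals at the solution are large.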
Bayesian least squares deconvolution
Asensio Ramos, A.; Petit, P.
2015-11-01
Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
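Setting the Bayesian machinery aside, plain LSD is a weighted linear least squares problem: the spectrum is modeled as a line mask convolved with a common profile, V = M Z, giving Z = (M^T S^-2 M)^-1 M^T S^-2 V. A toy sketch with an invented mask and a noiseless spectrum (not the authors' GP-prior code):

```python
import numpy as np

rng = np.random.default_rng(4)
m = 21                                      # velocity bins in the common profile
n = 400                                     # spectrum pixels
# Common absorption-like profile (a negative Gaussian)
z_true = -0.3 * np.exp(-0.5 * ((np.arange(m) - 10) / 3.0) ** 2)

# Line mask: positions and relative weights of the spectral lines (hypothetical)
positions = rng.choice(np.arange(0, n - m), size=25, replace=False)
weights = rng.uniform(0.2, 1.0, size=25)

# Design matrix M: the spectrum is a weighted sum of shifted copies of the profile
M = np.zeros((n, m))
for pos, w in zip(positions, weights):
    for j in range(m):
        M[pos + j, j] += w

spectrum = M @ z_true                       # noiseless toy spectrum
sigma = np.ones(n)                          # per-pixel noise std (all equal here)

# Weighted least squares solution: Z = (M^T S^-2 M)^-1 M^T S^-2 V
Mw = M / sigma[:, None] ** 2
z_lsd = np.linalg.solve(M.T @ Mw, Mw.T @ spectrum)
```

With a noiseless spectrum the mean profile is recovered exactly, so the interesting part in practice is purely how noise and the prior shape the estimate.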
Gazmeh, Meisam; Bahreini, Maryam; Tavassoli, Seyed Hassan
2015-01-01
In the laser drilling of teeth, a microplasma is generated which may be utilized for elemental analysis of the ablated tissue via laser-induced breakdown spectroscopy (LIBS). In this study, LIBS is used to investigate the possibility of discriminating healthy and carious tooth tissue. This possibility is examined using a multivariate statistical analysis, partial least squares discriminant analysis (PLS-DA), based on atomic and ionic emission lines in the teeth's LIBS spectra belonging to the elements P, Ca, Mg, Zn, K, Sr, C, Na, H, and O. Results show excellent discrimination and prediction of unknown tooth tissues. It is shown that, using the PLS-DA method, spectroscopic analysis of the plasma emission during laser drilling would be a promising technique for caries detection.
Kudo, Takamasa; Uda, Shinsuke; Tsuchiya, Takaho; Wada, Takumi; Karasawa, Yasuaki; Fujii, Masashi; Saito, Takeshi H; Kuroda, Shinya
2016-01-01
Signaling networks are made up of a limited number of molecules and yet can encode information that controls different cellular states through temporal patterns and combinations of signaling molecules. In this study, we used a data-driven modeling approach, the Laguerre filter with partial least squares regression, to describe how temporal and combinatorial patterns of signaling molecules are decoded by their downstream targets. The Laguerre filter is a time series model used to represent a nonlinear system based on a Volterra series expansion; with this approach, each component of the Volterra series expansion is expanded in Laguerre basis functions. We combined the two approaches, application of a Laguerre filter and partial least squares (PLS) regression, and applied the combined approach to the analysis of a signal transduction network. We applied the Laguerre filter with PLS regression to identify input-output (IO) relationships between MAP kinases and the products of immediate early genes (IEGs). We found that the Laguerre filter with PLS regression performs better than the Laguerre filter with ordinary regression for reproducing a time series of IEGs. Analysis of the nonlinear characteristics extracted using the Laguerre filter revealed a priming effect of ERK and CREB on c-FOS induction. Specifically, we found that a first pulse of ERK enhances the subsequent effects on c-FOS induction of a second pulse of ERK, a finding consistent with prior molecular biological knowledge. The variable importance in projection and output loadings of the PLS regression predicted the upstream dependency of each IEG. Thus, a Laguerre filter with a partial least squares regression approach appears to be a powerful method for uncovering how temporal patterns and combinations of signaling molecules are processed by downstream gene expression.
Liu, Song; Su, Bo-min; Li, Qing-hui; Gan, Fu-xi
2015-01-01
The authors sought a method for quantitative analysis using pXRF without solid bulk stone/jade reference samples. Twenty-four nephrite samples were selected: 17 calibration samples and 7 test samples. All nephrite samples were analyzed quantitatively by proton-induced X-ray emission spectroscopy (PIXE). Based on the PIXE results of the calibration samples, calibration curves were created for the components/elements of interest and used to analyze the test samples quantitatively; qualitative spectra of all nephrite samples were then obtained by pXRF. From the PIXE results and the qualitative spectra of the calibration samples, the partial least squares (PLS) method was used for quantitative analysis of the test samples. Finally, the results for the test samples obtained by the calibration curve method, the PLS method and PIXE were compared, and the accuracy of the calibration curve method and the PLS method was estimated. The results indicate that the PLS method is a viable alternative for quantitative analysis of stone/jade samples.
Institute of Scientific and Technical Information of China (English)
SHI Lin; LI Zhi-ling; YU Tao; LI Jiang-peng
2011-01-01
In the blast furnace (BF) iron-making process, the hot metal silicon content is usually used to measure the quality of hot metal and to reflect the thermal state of the BF. Principal component analysis (PCA) and partial least squares (PLS) regression methods were used to predict the hot metal silicon content. Under relatively stable BF conditions, PCA and PLS regression models of hot metal silicon content were established using data from Baotou Steel No. 6 BF, achieving accuracies of 88.4% and 89.2%, respectively. The PLS model used fewer variables and less time than the PCA model and was simple to calculate. It is shown that the model gives good results and is helpful for practical production.
Andrade, Jose Manuel; Cristoforetti, Gabriele; Legnaioli, Stefano; Lorenzetti, Giulia; Palleschi, Vincenzo; Shaltout, Abdallah A.
2010-08-01
In this work we compare the analytical results obtained by traditional calibration curves (CC) and the multivariate partial least squares (PLS) algorithm when applied to LIBS spectra obtained from ten brass samples (nine standards of known composition and one 'unknown'). Both major (Cu and Zn) and trace (Sn, Pb, Fe) elements in the sample matrix were analyzed. After the analysis, the composition of the 'unknown' sample, measured by the X-ray fluorescence (XRF) technique, was revealed. The concentrations of major elements predicted by the rapid PLS algorithm are in very good agreement with the nominal concentrations, as well as with those obtained by the more time-consuming CC approach. A discussion of the possible effects leading to discrepancies in the results is reported. The results of this study open encouraging perspectives towards the development of cheap LIBS instrumentation which would be capable, despite the limitations of the experimental apparatus, of performing fast and precise quantitative analysis on complex samples.
McEvoy, Fintan J; Amigo, José M
2013-01-01
As the number of images per study increases in the field of veterinary radiology, there is a growing need for computer-assisted diagnosis techniques. The purpose of this study was to evaluate two machine learning statistical models for automatically identifying image regions that contain the canine hip joint on ventrodorsal pelvis radiographs. A training set of images (120 of the hip and 80 from other regions) was used to train a linear partial least squares discriminant analysis (PLS-DA) model and a nonlinear artificial neural network (ANN) model to classify hip images. Performance of the models was assessed using a separate test image set (36 containing hips and 20 from other areas). The PLS-DA model achieved a classification error, sensitivity, and specificity of 6.7%, 100%, and 89%, respectively. The corresponding values for the ANN model were 8.9%, 86%, and 100%. Findings indicated that statistical classification of veterinary images is feasible and has the potential for grouping and classifying images or image features, especially when a large number of well-classified images are available for model training. © 2012 Veterinary Radiology & Ultrasound.
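PLS-DA amounts to regressing a class-membership code on the features and thresholding the prediction. A simplified sketch using plain least squares on ±1 labels in place of the latent-variable PLS step (the two "image feature" clusters are invented):

```python
import numpy as np

rng = np.random.default_rng(5)
# Two synthetic, well-separated feature clusters standing in for the two image classes
n_per = 60
X0 = rng.normal(loc=0.0, scale=0.7, size=(n_per, 2))
X1 = rng.normal(loc=3.0, scale=0.7, size=(n_per, 2))
X = np.vstack([X0, X1])
y = np.concatenate([-np.ones(n_per), np.ones(n_per)])    # dummy class code

# Discriminant by least squares on the class code; PLS-DA performs the same
# regression through a small number of latent variables instead.
Xd = np.column_stack([np.ones(len(X)), X])
beta = np.linalg.lstsq(Xd, y, rcond=None)[0]
pred = np.where(Xd @ beta > 0.0, 1.0, -1.0)              # threshold at 0
accuracy = float((pred == y).mean())
```

The latent-variable step matters when features are many and collinear (as with spectra or image descriptors); with two well-separated features the plain regression already classifies nearly perfectly.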
Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis
2017-01-01
A new Monte Carlo Library Least Squares (MCLLS) approach for treating non-linear radiation analysis problems in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the 3H(2H,n)4He reaction. The prompt gamma-ray spectra from bulk samples of seven different materials were measured by a bismuth germanate (BGO) gamma detection system. Polyethylene was used as a neutron moderator, with iron and lead as neutron and gamma-ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event by event. The GEANT4 simulation toolkit was used to generate the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Squares (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
Ruiz-Samblás, Cristina; Arrebola-Pascual, Cristina; Tres, Alba; van Ruth, Saskia; Cuadros-Rodríguez, Luis
2013-11-15
Main goals of the present work were to develop authentication models based on liquid and gas chromatographic fingerprinting of triacylglycerols (TAGs) from palm oil of different geographical origins in order to compare them. For this purpose, a set of palm oil samples were collected from different continents: South eastern Asia, Africa and South America. For the analysis of the information in these fingerprint profiles, a pattern recognition technique such as partial least square discriminant analysis (PLS-DA) was applied to discriminate the geographical origin of these oils, at continent level. The liquid chromatography, coupled to a charged aerosol detector, (HPLC-CAD) TAGs separation was optimized in terms of mobile phase composition and by means of a solid silica core column. The gas chromatographic method with a mass spectrometer was applied under high temperature (HTGC-MS) in order to analyze the intact TAGs. Satisfactory chromatographic resolution within a short total analysis time was achieved with both chromatographic approaches and without any prior sample treatment. The rates of successful in prediction of the geographical origin of the 85 samples varied between 70% and 100%.
Institute of Scientific and Technical Information of China (English)
李锐华; 高乃奎; 谢恒堃; 史维祥
2004-01-01
Objective To investigate the various data messages in the stator bar condition parameters when only a few samples are available, especially the correlation between the nondestructive parameters and the residual breakdown voltage of the stator bars. Methods Artificial stator bars were designed to simulate generator bars. Partial discharge (PD) and dielectric loss experiments were performed to obtain the nondestructive parameters, and the residual breakdown voltage was acquired by an AC damage experiment. To eliminate dimension effects on the measurement data, the raw data were preprocessed by centering and compression. Based on the idea of extracting principal components, a partial least squares (PLS) method was applied to screen and synthesize the correlation between the nondestructive parameters and the residual breakdown voltage; various data messages about the condition parameters are also discussed. Results The graphical analysis function of PLS makes the various data messages of the stator bar condition parameters easy to understand. The analysis results are consistent with the results of aging tests. Conclusion The method can select and extract PLS components of condition parameters from sample data, and the problems of small sample size and multicollinearity are solved effectively in the regression analysis.
Least Squares Method of Slope Stability Analysis
Institute of Scientific and Technical Information of China (English)
刘秀军
2012-01-01
In this article, the author shows that a more accurate safety factor can be obtained by using MATLAB and the principle of least squares to solve the resulting linear equations. First, classical earth pressure theory and the Mohr-Coulomb failure criterion are used to set a reasonable thrust line position for each soil slice. Then, according to the static equilibrium and torque equilibrium equations, an overdetermined system of linear equations is established. Setting a reasonable thrust line position avoids unreasonable assumptions about the interslice forces, and solving the linear system with MATLAB by least squares overcomes the convergence failures that iterative methods for nonlinear equations can suffer. A worked example shows that the assumption is reliable in terms of precision, and the study provides guidance for evaluating the stability of slopes.
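The computational core above is solving an overdetermined linear system in the least-squares sense, which MATLAB's backslash operator does directly; the NumPy equivalent is `lstsq`. A toy system (not the paper's equilibrium equations):

```python
import numpy as np

# Overdetermined system A x = b (6 equations, 2 unknowns), solved in the
# least-squares sense: x minimizes ||A x - b||_2, like MATLAB's A \ b.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0],
              [1.0, 4.0],
              [1.0, 5.0],
              [1.0, 6.0]])
b = np.array([2.1, 3.9, 6.1, 8.0, 9.9, 12.1])
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)

# The least-squares solution satisfies the normal equations A^T A x = A^T b
check = bool(np.allclose(A.T @ A @ x, A.T @ b))
```

`lstsq` uses an SVD internally, which is better conditioned than forming the normal equations explicitly; the normal-equations identity is only used here as a correctness check.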
Tan, Ailing; Zhao, Yong; Wang, Siyuan
2016-10-01
Quantitative analysis of simulated complex oil spills was investigated using a PSO-LS-SVR method. Forty simulated mixed oil-spill samples were made with different concentration proportions of gasoline, diesel and kerosene, and their near-infrared spectra were collected. The parameters of the least squares support vector machine were optimized by a particle swarm optimization algorithm, and optimal quantitative concentration models for the three-component oil spills were established. The best regularization parameter C and kernel parameter σ were 48.1418 and 0.1067 for the gasoline model, 53.2820 and 0.1095 for the diesel model, and 59.1689 and 0.1000 for the kerosene model. The decision coefficients R2 of the prediction models were 0.9983, 0.9907 and 0.9942, and the RMSEP values were 0.0753, 0.1539 and 0.0789, respectively. For the gasoline, diesel and kerosene models, the mean and variance of the prediction absolute error were -0.0176±0.0636 μL/mL, -0.0084±0.1941 μL/mL and 0.00338±0.0726 μL/mL, respectively. The results showed that the concentration of each component of the oil-spill samples could be detected by NIR technology combined with the PSO-LS-SVR regression method, and the predictions were accurate and reliable; this method can therefore provide an effective means for the quantitative detection and analysis of complex marine oil spills.
Ibrahim, George M; Morgan, Benjamin R; Fallah, Aria
2015-02-01
Previous studies aimed at identifying predictors of seizure outcomes following resective surgery for tuberous sclerosis complex (TSC) are limited by multicollinearity among predictors, whereby the high degree of correlation between covariates precludes detection of potentially significant findings. Here, we apply a data-driven method, partial least squares (PLS), to model multidimensional variance and study significant patterns in data that are associated with seizure outcomes. A post hoc analysis of 186 children with TSC who underwent resective epilepsy surgery, derived from an individual participant data meta-analysis, was performed. PLS was used to derive a latent variable (component) that relates clinical covariates with Engel classification. Permutation testing was performed to evaluate the significance of the component, and bootstrapping was used to identify significant contributors to the component. A significant component was identified, which represents the pattern of covariates related to Engel class. The strongest significant factors contributing to this component were focal ictal electroencephalogram and concordance of electroencephalography (EEG)-magnetic resonance imaging (MRI) abnormality. Interestingly, the covariates contributing least to the seizure-free patient phenotype were continent of treatment and age at the time of surgery. Using a data-driven, multivariate method, PLS, we describe patient phenotypes that are associated with seizure freedom following resective surgery for TSC.
Directory of Open Access Journals (Sweden)
A.V. Faria
2011-02-01
High resolution proton nuclear magnetic resonance spectroscopy (¹H MRS) can be used to detect biochemical changes in vitro caused by distinct pathologies. It can reveal distinct metabolic profiles of brain tumors, although the accurate analysis and classification of different spectra remains a challenge. In this study, the pattern recognition method partial least squares discriminant analysis (PLS-DA) was used to classify 11.7 T ¹H MRS spectra of brain tissue extracts from patients with brain tumors into four classes (high-grade neuroglial, low-grade neuroglial, non-neuroglial, and metastasis) and a group of control brain tissue. PLS-DA revealed 9 metabolites as the most important in group differentiation: γ-aminobutyric acid, acetoacetate, alanine, creatine, glutamate/glutamine, glycine, myo-inositol, N-acetylaspartate, and choline compounds. Leave-one-out cross-validation showed that PLS-DA was efficient in group characterization. The metabolic patterns detected can be explained on the basis of previous multimodal studies of tumor metabolism and are consistent with neoplastic cell abnormalities, possibly related to high turnover, resistance to apoptosis, osmotic stress and the tumor tendency to use alternative energetic pathways such as glycolysis and ketogenesis.
Abdi, Hervé; Williams, Lynne J
2013-01-01
Partial least squares (PLS) methods (also sometimes called projection to latent structures) relate the information present in two data tables that collect measurements on the same set of observations. PLS methods proceed by deriving latent variables which are (optimal) linear combinations of the variables of a data table. When the goal is to find the shared information between two tables, the approach is equivalent to a correlation problem and the technique is then called partial least squares correlation (PLSC) (also sometimes called PLS-SVD). In this case there are two sets of latent variables (one set per table), and these latent variables are required to have maximal covariance. When the goal is to predict one data table from the other, the technique is called partial least squares regression (PLSR). In this case there is one set of latent variables (derived from the predictor table) and these latent variables are required to give the best possible prediction. In this paper we present and illustrate PLSC and PLSR and show how these descriptive multivariate analysis techniques can be extended to deal with inferential questions by using cross-validation techniques such as the bootstrap and permutation tests.
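The single-component PLSR case described above can be sketched in a few lines of plain Python. This is an illustrative NIPALS-style sketch for one response variable, with all names ours; the inputs are assumed to be mean-centered already:

```python
def pls1_one_component(X, y):
    """One-component PLS1: latent direction of maximal covariance with y."""
    n, p = len(X), len(X[0])
    # weight vector w proportional to X^T y
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]
    # scores t = X w, then regress y on the latent score t
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(y[i] * t[i] for i in range(n)) / sum(v * v for v in t)
    return w, b

def predict(X, w, b):
    """Predict y from new rows of X using the fitted direction and slope."""
    return [b * sum(row[j] * w[j] for j in range(len(w))) for row in X]
```

On rank-one data (every predictor row a multiple of one direction), a single component already reproduces the response exactly; real applications deflate X and extract further components.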
Jamali, Ali; Anton, François; Rahman, Alias Abdul; Mioc, Darka
2016-10-01
Nowadays, municipalities want 3D city models for facility management, disaster management and architectural planning. Indoor models can be reconstructed from construction plans, but sometimes these are not available, and very often they differ from `as-built' plans. In this case, the buildings and their rooms must be surveyed. One of the most widely used methods of indoor surveying is laser scanning, which allows accurate and detailed measurements. However, terrestrial laser scanning is costly and time-consuming. In this paper, several techniques for indoor 3D building data acquisition have been investigated. To reduce the time and cost of the indoor building data acquisition process, the Trimble LaserAce 1000 range finder is used. The proposed approach uses relatively cheap equipment, a light laser rangefinder, which appears feasible but must be tested to see whether the observation accuracy is sufficient for 3D building modelling. The accuracy of the rangefinder is evaluated and a simple spatial model is reconstructed from real data. This technique is rapid (it requires a shorter time compared to others), but the results show inconsistencies in horizontal angles for short distances in indoor environments. The rangefinder's horizontal angle sensor was calibrated using a least squares adjustment algorithm, a polynomial kernel, interval analysis and homotopy continuation.
Monfared, Ali Momenpour T.; Tiwari, Vidhu S.; Tripathi, Markandey M.; Anis, Hanan
2013-02-01
Heparin is the most widely used anti-coagulant for the prevention of blood clots in patients undergoing certain types of surgeries, including open heart surgery and dialysis. Precise monitoring of the heparin level in patients' blood is crucial for reducing morbidity and mortality in surgical environments. Based on these considerations, we have used Raman spectroscopy in conjunction with partial least squares (PLS) analysis to measure heparin concentration in serum at the clinical level, which is less than 10 United States Pharmacopeia (USP) units. The PLS calibration model was constructed from the Raman spectra of different concentrations of heparin in serum. It showed a high coefficient of determination (R2>0.91) between the spectral data and the heparin level in serum, along with a low root mean square error of prediction of ~4 USP/ml. It enabled the detection of extremely low concentrations of heparin in serum (~8 USP/ml), as desirable in a clinical environment. The proposed optical method has the potential to be implemented as a point-of-care testing procedure during surgeries, where the interest is to rapidly monitor low concentrations of heparin in the patient's blood.
Directory of Open Access Journals (Sweden)
Margaret A. Ryan
2005-12-01
The Jet Propulsion Laboratory has recently developed and built an electronic nose (ENose) using a polymer-carbon composite sensing array. This ENose is designed for air quality monitoring in an enclosed space, and is designed to detect, identify and quantify common contaminants at concentrations in the parts-per-million range. Its capabilities were demonstrated in an experiment aboard the National Aeronautics and Space Administration's Space Shuttle Flight STS-95. This paper describes a modified nonlinear least-squares based algorithm developed to analyze data taken by the ENose, and its performance for the identification and quantification of single gases and binary mixtures of twelve target analytes in clean air. Results from laboratory-controlled events demonstrate the effectiveness of the algorithm in identifying and quantifying a gas event if its concentration exceeds the ENose detection threshold. Results from the flight test demonstrate that the algorithm correctly identifies and quantifies all registered events (planned or unplanned, as singles or mixtures) with no false positives and no inconsistencies with the logged events and the independent analysis of air samples.
Several partial least squares (PLS) models were created correlating various properties and chemical composition measurements with the 1H and 13C NMR spectra of 73 different pyrolysis bio-oil samples from various biomass sources (crude and intermediate products), finished oils and small molecule s...
Legaie, D.; Pron, H.; Bissieux, C.
2008-11-01
Integral transforms (Laplace, Fourier, Hankel) are widely used to solve the heat diffusion equation. Moreover, it often appears relevant to realize the estimation of thermophysical properties in the transformed space. Here, an analytical model has been developed, leading to a well-posed inverse problem of parameter identification. Two black coatings, a thin black paint layer and an amorphous carbon film, were studied by photothermal infrared thermography. A Hankel transform has been applied on both thermal model and data and the estimation of thermal diffusivity has been achieved in the Hankel space. The inverse problem is formulated as a non-linear least square problem and a Gauss-Newton algorithm is used for the parameter identification.
Konukoglu, Ender; Coutu, Jean-Philippe; Salat, David H; Fischl, Bruce
2016-07-01
Diffusion magnetic resonance imaging (dMRI) is a unique technology that allows the noninvasive quantification of microstructural tissue properties of the human brain in healthy subjects as well as the probing of disease-induced variations. Population studies of dMRI data have been essential in identifying pathological structural changes in various conditions, such as Alzheimer's and Huntington's diseases (Salat et al., 2010; Rosas et al., 2006). The most common form of dMRI involves fitting a tensor to the underlying imaging data (known as diffusion tensor imaging, or DTI), then deriving parametric maps, each quantifying a different aspect of the underlying microstructure, e.g. fractional anisotropy and mean diffusivity. To date, the statistical methods utilized in most DTI population studies either analyzed only one such map or analyzed several of them, each in isolation. However, it is most likely that variations in the microstructure due to pathology or normal variability would affect several parameters simultaneously, with differing variations modulating the various parameters to differing degrees. Therefore, joint analysis of the available diffusion maps can be more powerful in characterizing histopathology and distinguishing between conditions than the widely used univariate analysis. In this article, we propose a multivariate approach for statistical analysis of diffusion parameters that uses partial least squares correlation (PLSC) analysis and permutation testing as building blocks in a voxel-wise fashion. Stemming from the common formulation, we present three different multivariate procedures for group analysis, regressing-out nuisance parameters and comparing effects of different conditions. We used the proposed procedures to study the effects of non-demented aging, Alzheimer's disease and mild cognitive impairment on the white matter. Here, we present results demonstrating that the proposed PLSC-based approach can differentiate between effects of
Chang, Guobin; Xu, Tianhe; Wang, Qianxin; Zhang, Shubi; Chen, Guoliang
2017-05-01
The symmetric Helmert transformation model is widely used in geospatial science and engineering. Using an analytical least-squares solution to the problem, a simple and approximate error analysis is developed. This error analysis follows the Pope procedure for solving nonlinear problems, but no iteration is needed here. It is simple because it is not based on the direct and cumbersome error analysis of every single process involved in the analytical solution. It is approximate because it is valid only in the first-order approximation sense; in other words, the error analysis is performed approximately on the tangent hyperplane at the estimates instead of on the original nonlinear manifold of the observables. Though simple and approximate, the consistency of this error analysis is not sacrificed, as can be validated by Monte Carlo experiments. Thus the practically important variance-covariance matrix, a consistent accuracy measure of the parameter estimate, is provided by the developed error analysis. Further, the developed theory can be easily generalized to other cases with more general assumptions about the measurement errors.
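The paper treats the symmetric (errors-in-both-frames) 3D Helmert model; as a hedged illustration of the simpler classical case, the 2D four-parameter similarity (Helmert) transform has a closed-form ordinary least-squares solution. All variable names below are ours:

```python
def helmert_2d(src, dst):
    """Ordinary LS estimate of x' = a*x - b*y + tx, y' = b*x + a*y + ty,
    where a = s*cos(theta) and b = s*sin(theta)."""
    n = len(src)
    mx = sum(p[0] for p in src) / n
    my = sum(p[1] for p in src) / n
    mu = sum(p[0] for p in dst) / n
    mv = sum(p[1] for p in dst) / n
    # centered coordinates decouple rotation/scale from translation
    num_a = num_b = den = 0.0
    for (x, y), (u, v) in zip(src, dst):
        xc, yc, uc, vc = x - mx, y - my, u - mu, v - mv
        num_a += xc * uc + yc * vc
        num_b += xc * vc - yc * uc
        den += xc * xc + yc * yc
    a, b = num_a / den, num_b / den
    tx = mu - a * mx + b * my
    ty = mv - b * mx - a * my
    return a, b, tx, ty
```

For noiseless data the closed form recovers the transform exactly; with noisy coordinates in both frames the symmetric model of the paper is the appropriate generalization.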
Directory of Open Access Journals (Sweden)
Haifeng Gao
2015-04-01
This research article analyzes the resonant reliability of a low-pressure compressor rotor blade at a rotating speed of 6150.0 r/min, with the aim of improving the computational efficiency of reliability analysis. The study applies a least squares support vector machine (LS-SVM) to predict the natural frequencies of the rotor blade. To build a more stable and reliable LS-SVM model, leave-one-out cross-validation is introduced to search for the optimal LS-SVM parameters, and the resulting model is used to analyze the resonant reliability. Additionally, the modal analysis of the rotor blade at 6150.0 r/min is treated as a tandem system to simplify the analysis and design process, and the randomness of the factors influencing the frequencies, such as material properties, structural dimensions, and operating conditions, is taken into consideration. A back-propagation neural network, trained and tested on the same sets as the LS-SVM with leave-one-out cross-validation, is used for comparison to verify the proposed approach. Finally, the statistical results show that the proposed approach is effective and feasible and can be applied to structural reliability analysis.
Online nonlinear process monitoring using kernel partial least squares
Institute of Scientific and Technical Information of China (English)
胡益; 王丽; 马贺贺; 侍洪波
2011-01-01
To handle nonlinearity in process monitoring, a new technique based on kernel partial least squares (KPLS) is developed. KPLS is an improved partial least squares (PLS) method; its main idea is to first map the input space into a high-dimensional feature space via a nonlinear kernel function and then to apply standard PLS in that feature space. Compared to linear PLS, KPLS can make full use of the sample-space information and effectively capture the nonlinear relationship between input and output variables. Unlike other nonlinear PLS methods, KPLS requires only linear algebra and does not involve any nonlinear optimization. For process monitoring, KPLS is first used to derive the regression model and obtain the score vectors; the two statistics T2 and SPE and their corresponding control limits are then calculated. A case study on the Tennessee Eastman (TE) process illustrates that the proposed approach shows superior process monitoring performance compared to linear PLS.
Directory of Open Access Journals (Sweden)
Omholt Stig W
2011-06-01
Abstract Background Deterministic dynamic models of complex biological systems contain a large number of parameters and state variables, related through nonlinear differential equations with various types of feedback. A metamodel of such a dynamic model is a statistical approximation model that maps variation in parameters and initial conditions (inputs) to variation in features of the trajectories of the state variables (outputs) throughout the entire biologically relevant input space. A sufficiently accurate mapping can be exploited both instrumentally and epistemically. Multivariate regression methodology is a commonly used approach for emulating dynamic models. However, when the input-output relations are highly nonlinear or non-monotone, a standard linear regression approach is prone to give suboptimal results. We therefore hypothesised that a more accurate mapping can be obtained by locally linear or locally polynomial regression. We present here a new method for local regression modelling, Hierarchical Cluster-based PLS regression (HC-PLSR), where fuzzy C-means clustering is used to separate the data set into parts according to the structure of the response surface. We compare the metamodelling performance of HC-PLSR with polynomial partial least squares regression (PLSR) and ordinary least squares (OLS) regression on various systems: six different gene regulatory network models with various types of feedback, a deterministic mathematical model of the mammalian circadian clock and a model of mouse ventricular myocyte function. Results Our results indicate that multivariate regression is well suited for emulating dynamic models in systems biology. The hierarchical approach turned out to be superior to both polynomial PLSR and OLS regression in all three test cases. The advantage, in terms of explained variance and prediction accuracy, was largest in systems with highly nonlinear functional relationships and in systems with positive feedback
Application of the Least Square Fitting Based on Wavelet Analysis
Institute of Scientific and Technical Information of China (English)
王江荣
2012-01-01
Straight lines are very important descriptors in image analysis. In industrial control, the least squares method is commonly used for fitting straight lines in image processing, but when higher estimation accuracy is required, the traditional least squares method often cannot satisfy the requirement. Therefore, the discrete wavelet transform is combined with the traditional least squares method to establish a new least squares estimator based on wavelet preprocessing, which yields better estimates than the traditional least squares method. Experiments verify the effectiveness and high precision of this method.
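The wavelet preprocessing step is specific to the paper, but the ordinary least-squares line fit it builds on is standard and can be sketched directly from the normal equations (a minimal sketch; names are ours):

```python
def fit_line(points):
    """Least-squares fit of y = slope*x + intercept to (x, y) pairs."""
    n = len(points)
    sx = sum(x for x, _ in points)
    sy = sum(y for _, y in points)
    sxx = sum(x * x for x, _ in points)
    sxy = sum(x * y for x, y in points)
    # closed-form solution of the 2x2 normal equations
    slope = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    intercept = (sy - slope * sx) / n
    return slope, intercept
```

In the paper's setting, the same fit would be applied after denoising the edge points with a discrete wavelet transform, which is what improves the estimation accuracy.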
A SUCCESSIVE LEAST SQUARES METHOD FOR STRUCTURED TOTAL LEAST SQUARES
Institute of Scientific and Technical Information of China (English)
Plamen Y. Yalamov; Jin-yun Yuan
2003-01-01
A new method for Total Least Squares (TLS) problems is presented. It differs from previous approaches and is based on the solution of successive Least Squares problems. The method is quite suitable for Structured TLS (STLS) problems. We study mostly the case of Toeplitz matrices in this paper. The numerical tests illustrate that the method converges quickly to the solution for Toeplitz STLS problems. Since the method is designed for general TLS problems, other structured problems can be treated similarly.
Directory of Open Access Journals (Sweden)
Mohamed G. Egila
2016-12-01
This paper presents a proposed design for analyzing electrocardiography (ECG) signals. The methodology employs a high-pass least-squares linear-phase Finite Impulse Response (FIR) filter to remove the baseline wander noise embedded in the input ECG signal. The Discrete Wavelet Transform (DWT) is utilized for feature extraction, producing a reduced feature set from the input ECG signal. The design uses a back-propagation neural network classifier to classify the input ECG signal. The system is implemented on a Xilinx 3AN-XC3S700AN Field Programmable Gate Array (FPGA) board and a system simulation has been done. Compared with other designs, it achieves a total accuracy of 97.8% while reducing resource utilization in the FPGA implementation.
Directory of Open Access Journals (Sweden)
Sérgio Luiz do Amaral Moretti
2016-05-01
It is impossible to develop effective tourism marketing actions and communication planning without an understanding of tourists' motivations for travel. The aim of this paper is to deepen knowledge of the travel motivations of festival visitors. For this purpose we developed a survey instrument consisting of four constructs obtained from the literature, with data collected through surveys at Oktoberfest in Blumenau (Brazil), with 432 respondents, and in Munich (Germany), with 285 respondents. Most of the scales were confirmed, showing the validity of the instrument. Analysis by Partial Least Squares (PLS) revealed that both samples seek to experience different customs and cultures and to encounter new situations outside their everyday environment. Visitors also attend festivals to be with friends and to reduce stress, anxiety and frustration. This understanding of tourists' travel motivations provides new input for the development of public policies and for the tourist trade.
Margolis, H S
2015-01-01
A method is presented for analysing over-determined sets of clock frequency comparison data involving standards based on a number of different reference transitions. This least-squares adjustment procedure, which is based on the method used by CODATA to derive a self-consistent set of values for the fundamental physical constants, can be used to derive optimized values for the frequency ratios of all possible pairs of reference transitions. It is demonstrated to reproduce the frequency values recommended by the International Committee for Weights and Measures when using the same input data used to derive those values. The effects of including more recently published data in the evaluation are discussed and the importance of accounting for correlations between the input data is emphasised.
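A toy version of such an over-determined adjustment, three clocks and three measured log frequency ratios, with correlations ignored (which the paper warns against) and all numbers illustrative, reduces to a small weighted least-squares problem:

```python
def adjust_three_clocks(m_ab, m_ac, m_bc, w_ab, w_ac, w_bc):
    """Weighted LS for x_b = ln(f_B/f_A) and x_c = ln(f_C/f_A) from measured
    log-ratios m_ab ~ x_b, m_ac ~ x_c, m_bc ~ x_c - x_b (clock A is the reference)."""
    # normal equations of the 2-parameter weighted least-squares problem
    a11 = w_ab + w_bc
    a12 = -w_bc
    a22 = w_ac + w_bc
    b1 = w_ab * m_ab - w_bc * m_bc
    b2 = w_ac * m_ac + w_bc * m_bc
    det = a11 * a22 - a12 * a12
    x_b = (b1 * a22 - a12 * b2) / det
    x_c = (a11 * b2 - a12 * b1) / det
    return x_b, x_c
```

When the three measured ratios are mutually consistent, the adjusted values reproduce them exactly regardless of the weights; with inconsistent input, the weights determine how the closure error is distributed.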
AKLSQF - LEAST SQUARES CURVE FITTING
Kantak, A. V.
1994-01-01
The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least-squares fit uniformly spaced data easily and efficiently. The program allows the user either to specify the tolerable least squares error of the fit or to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fit of up to a 100 degree polynomial. All computations in the program are carried out in double precision for real numbers and in long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's QuickBasic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
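The degree-escalation loop AKLSQF is described as using can be sketched as follows. Note this sketch solves the plain normal equations rather than using the orthogonal factorial polynomials of the actual program, and all names are ours:

```python
def polyfit_ls(xs, ys, degree):
    """Least-squares polynomial fit via the normal equations (V^T V) c = V^T y."""
    m = degree + 1
    A = [[sum(x ** (i + j) for x in xs) for j in range(m)] for i in range(m)]
    b = [sum(y * x ** i for x, y in zip(xs, ys)) for i in range(m)]
    # Gaussian elimination with partial pivoting
    for col in range(m):
        piv = max(range(col, m), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        b[col], b[piv] = b[piv], b[col]
        for r in range(col + 1, m):
            f = A[r][col] / A[col][col]
            for c in range(col, m):
                A[r][c] -= f * A[col][c]
            b[r] -= f * b[col]
    coeffs = [0.0] * m
    for i in reversed(range(m)):
        coeffs[i] = (b[i] - sum(A[i][j] * coeffs[j]
                                for j in range(i + 1, m))) / A[i][i]
    return coeffs  # coeffs[i] multiplies x**i

def fit_until_tolerance(xs, ys, tol, max_degree=10):
    """Raise the degree until the RMS fit error drops below tol."""
    for d in range(1, max_degree + 1):
        c = polyfit_ls(xs, ys, d)
        rms = (sum((sum(ci * x ** i for i, ci in enumerate(c)) - y) ** 2
                   for x, y in zip(xs, ys)) / len(xs)) ** 0.5
        if rms <= tol:
            return d, c, rms
    return max_degree, c, rms
```

The orthogonal-polynomial route used by the real program is numerically better conditioned at high degree; the normal-equations version above is adequate only for low-degree illustration.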
Abd-El-Fattah, Sabry M.
2005-01-01
A Partial Least Squares Path Analysis technique was used to test the effect of students' prior experience with computers, statistical self-efficacy, and computer anxiety on their achievement in an introductory statistics course. Computer Anxiety Rating Scale and Current Statistics Self-Efficacy Scale were administered to a sample of 64 first-year…
DEFF Research Database (Denmark)
Shetty, Nisha; Olesen, Merete Halkjær; Gislum, René;
2012-01-01
Because of the difficulties in obtaining homogenous germination of spinach seeds for baby leaf production, the possibility of using partial least squares discriminant analysis (PLS-DA) on features extracted from multispectral images of spinach seeds was investigated. The objective has been...
Ramoelo, A.; Skidmore, A. K.; Cho, M. A.; Mathieu, R.; Heitkönig, I. M. A.; Dudeni-Tlhone, N.; Schlerf, M.; Prins, H. H. T.
2013-08-01
Grass nitrogen (N) and phosphorus (P) concentrations are direct indicators of rangeland quality and provide imperative information for sound management of wildlife and livestock. It is challenging to estimate grass N and P concentrations using remote sensing in the savanna ecosystems. These areas are diverse and heterogeneous in soil and plant moisture, soil nutrients, grazing pressures, and human activities. The objective of the study is to test the performance of non-linear partial least squares regression (PLSR) for predicting grass N and P concentrations through integrating in situ hyperspectral remote sensing and environmental variables (climatic, edaphic and topographic). Data were collected along a land use gradient in the greater Kruger National Park region. The data consisted of: (i) in situ-measured hyperspectral spectra, (ii) environmental variables and measured grass N and P concentrations. The hyperspectral variables included published starch, N and protein spectral absorption features, red edge position, narrow-band indices such as simple ratio (SR) and normalized difference vegetation index (NDVI). The results of the non-linear PLSR were compared to those of conventional linear PLSR. Using non-linear PLSR, integrating in situ hyperspectral and environmental variables yielded the highest grass N and P estimation accuracy (R2 = 0.81, root mean square error (RMSE) = 0.08, and R2 = 0.80, RMSE = 0.03, respectively) as compared to using remote sensing variables only, and conventional PLSR. The study demonstrates the importance of an integrated modeling approach for estimating grass quality which is a crucial effort towards effective management and planning of protected and communal savanna ecosystems.
Luo, Hongxia; Ye, Huanzhuo; Ke, Yinghai; Pan, Jianping; Gong, Jianya; Chen, Xiaoling
2005-01-01
In regions covered by variable amounts of vegetation, the pixel spectra received by a remote sensor of given spatial resolution are a mixture of soil and vegetation spectra, so vegetation cover influences the accuracy of soil surveying by remote sensing. In a linear mixture model, mixed pixel spectra are described as a linear combination of an endmember signature matrix with the corresponding abundance fractions. Following this model, the abundance fractions of the endmembers in every pixel were calculated using the unsupervised fully constrained least squares (UFCLS) algorithm. The vegetation signature corresponding to its abundance fraction was then eliminated, and the remaining endmember signatures covered by vegetation were restored by scaling their abundance fractions to sum to the original pixel total and recalculating the model. After this processing, de-vegetated reflectance images were produced for soil surveying. The classification accuracies for paddy soils using these characteristic images were better than those obtained from the raw images, although the accuracies for zonal soils were inferior. Compared to many other image processing methods, such as the K-T transformation and band ratios, linear spectral unmixing for vegetation removal produced slightly better overall soil classification accuracy, so it is useful for soil surveying by remote sensing.
Vio, R; Wamsteker, W
2004-01-01
It is well known that the noise associated with the collection of an astronomical image by a CCD camera is, in large part, Poissonian. One would expect, therefore, that computational approaches incorporating this a priori information will be more effective than those that do not. The Richardson-Lucy (RL) algorithm, for example, can be viewed as a maximum-likelihood (ML) method for image deblurring when the data noise is assumed to be Poissonian. Least-squares (LS) approaches, on the other hand, arise from the assumption that the noise is Gaussian with fixed variance across pixels, which is rarely accurate. Given this, it is surprising that in many cases results obtained using LS techniques are relatively insensitive to whether the noise is Poissonian or Gaussian. Furthermore, in the presence of Poisson noise, results obtained using LS techniques are often comparable with those obtained by the RL algorithm. We seek an explanation of these phenomena via an examination of the regularization properties of par...
Institute of Scientific and Technical Information of China (English)
Lin Li
2011-01-01
Partial least squares (PLS) regression was applied to the Lunar Soil Characterization Consortium (LSCC) dataset for spectral estimation of TiO2. The LSCC dataset was split into a number of subsets, including the low-Ti, high-Ti, and total mare soils, the total highland soils, and the Apollo 16 and Apollo 14 soils, to investigate the effects of interfering minerals and nonlinearity on the PLS performance. The PLS weight loading vectors were analyzed through stepwise multiple regression analysis (SMRA) to identify the mineral species driving and interfering with the PLS performance. PLS exhibits high performance for estimating TiO2 for the LSCC low-Ti and high-Ti mare samples, both separately and analyzed together. The results suggest that while the dominant TiO2-bearing minerals are few, additional PLS factors are required to compensate for the effects on the important PLS factors of minerals that are not highly correlated with TiO2, to accommodate nonlinear relationships between reflectance and TiO2, and to correct inconsistent mineral-TiO2 correlations between the high-Ti and low-Ti mare samples. Analysis of the LSCC highland soil samples indicates that the Apollo 16 soils are responsible for the large errors in TiO2 estimates when the soils are modeled with other subgroups. For the LSCC Apollo 16 samples, the dominant spectral effects of plagioclase over other dark minerals are primarily responsible for the large errors in estimated TiO2. For the Apollo 14 soils, the more accurate estimation of TiO2 is attributed to the positive correlation between a major TiO2-bearing component and TiO2, explaining why the Apollo 14 soils follow the regression trend when analyzed with other soil groups.
Bayesian Sparse Partial Least Squares
Vidaurre, D.; Gerven, M.A.J. van; Bielza, C.; Larrañaga, P.; Heskes, T.M.
2013-01-01
Partial least squares (PLS) is a class of methods that makes use of a set of latent or unobserved variables to model the relation between (typically) two sets of input and output variables, respectively. Several flavors, depending on how the latent variables or components are computed, have been dev
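The latent-variable construction that these PLS flavors share can be sketched with a minimal NIPALS-style PLS1 loop. This is an illustrative NumPy sketch on synthetic data, not any of the specific flavors discussed in the abstract; all variable names and data here are assumptions for demonstration only.

```python
import numpy as np

def pls1(X, y, n_components):
    """Minimal PLS1 (NIPALS with deflation): illustrative sketch only."""
    X = X - X.mean(axis=0)
    y = y - y.mean()
    Ws, Ps, qs = [], [], []
    Xk, yk = X.copy(), y.copy()
    for _ in range(n_components):
        w = Xk.T @ yk                 # weight: direction of max covariance with y
        w /= np.linalg.norm(w)
        t = Xk @ w                    # latent variable (score)
        p = Xk.T @ t / (t @ t)        # X loading
        q = (yk @ t) / (t @ t)        # y loading
        Xk = Xk - np.outer(t, p)      # deflate X and y before the next component
        yk = yk - q * t
        Ws.append(w); Ps.append(p); qs.append(q)
    W, P, q = np.array(Ws).T, np.array(Ps).T, np.array(qs)
    # regression coefficients for centered X: b = W (P^T W)^{-1} q
    return W @ np.linalg.solve(P.T @ W, q)

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 10))
y = X[:, 0] - 2.0 * X[:, 1] + 0.01 * rng.normal(size=50)
coef = pls1(X, y, n_components=4)
```

The deflation step is what distinguishes the successive latent variables: each new component is extracted from the part of X and y not yet explained.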
Shakib, Farzin; Hughes, Thomas J. R.
1991-01-01
A Fourier stability and accuracy analysis of the space-time Galerkin/least-squares method as applied to a time-dependent advective-diffusive model problem is presented. Two time discretizations are studied: a constant-in-time approximation and a linear-in-time approximation. Corresponding space-time predictor multi-corrector algorithms are also derived and studied. The behavior of the space-time algorithms is compared to algorithms based on semidiscrete formulations.
VOLMER, M; BOLCK, A; WOLTHERS, BG; DERUITER, AJ; DOORNBOS, DA; VANDERSLIK, W
1993-01-01
Quantitative assessment of urinary calculus (renal stone) constituents by infrared (IR) analysis is hampered by the need for expert knowledge in spectrum interpretation. Our laboratory performed a computerized search of several libraries, containing 235 reference spectra from various mixtures with d
Noorizadeh, H; Sobhan Ardakani, S; Ahmadi, T; Mortazavi, S S; Noorizadeh, M
2013-02-01
Genetic algorithm (GA), partial least squares (PLS) and kernel PLS (KPLS) techniques were used to investigate the correlation between immobilized liposome chromatography partitioning (log Ks) and descriptors for 65 drug compounds. The models were validated using leave-group-out cross-validation (LGO-CV). The results indicate that GA-KPLS can be used as an alternative modelling tool for quantitative structure-property relationship (QSPR) studies.
Angeyo, K H; Gari, S; Mustapha, A O; Mangala, J M
2012-11-01
The greatest challenge to material characterization by the XRF technique is encountered in direct trace analysis of complex matrices. We exploited partial least squares (PLS) in conjunction with energy dispersive X-ray fluorescence and scattering (EDXRFS) spectrometry to rapidly (200 s) analyze lubricating oils. The PLS-EDXRFS method affords non-invasive quality assurance (QA) analysis of complex-matrix liquids, as it gave promising results for both heavy- and low-Z metal additives. Scatter peaks may further be used for QA characterization via the light elements.
Migliorati, Giovanni
2016-01-05
We review the main results achieved in the analysis of the stability and accuracy of the discrete least-squares approximation on multivariate polynomial spaces, with noiseless evaluations at random points, noiseless evaluations at low-discrepancy point sets, and noisy evaluations at random points.
Kumar, Keshav; Mishra, Ashok Kumar
2015-07-01
The fluorescence characteristics of 8-anilinonaphthalene-1-sulfonic acid (ANS) in ethanol-water mixtures, in combination with partial least squares (PLS) analysis, were used to propose a simple and sensitive analytical procedure for monitoring the adulteration of ethanol with water. The proposed analytical procedure was found to be capable of detecting even small levels of adulteration of ethanol with water. The robustness of the procedure is evident from statistical parameters such as the square of the correlation coefficient (R(2)), the root mean square error of calibration (RMSEC) and the root mean square error of prediction (RMSEP), which were found to be well within the acceptable limits.
Institute of Scientific and Technical Information of China (English)
Anonymous
2009-01-01
The number of latent variables (LVs), or the factor number, is a key parameter in PLS modeling for obtaining correct predictions. Although much work has been done on this issue, determining a suitable LV number in practical use remains difficult. A method named independent factor diagnostics (IFD) is proposed to investigate the contribution of each LV to the predicted results, on the basis of a discussion of the determination of the LV number in PLS modeling for near-infrared (NIR) spectra of complex samples. The NIR spectra of three data sets of complex samples, including a public data set and two tobacco lamina ones, are investigated. It is shown that several high-order LVs make the main contributions to the predicted results, although the contributions of the low-order LVs should not be neglected in the PLS models. Therefore, in practical use of PLS for the analysis of complex samples, it may be better to use a slightly larger LV number for NIR spectral analysis.
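The LV-number question discussed above is commonly approached by tracking held-out prediction error as components are added. The sketch below illustrates the idea on synthetic low-rank data, using principal-component regression as a compact stand-in for PLS (an assumption for brevity; this is not the IFD method of the abstract):

```python
import numpy as np

rng = np.random.default_rng(1)
n, p, k_true = 120, 30, 3
W = rng.normal(size=(p, k_true))
T = rng.normal(size=(n, k_true))
X = T @ W.T + 0.1 * rng.normal(size=(n, p))       # low-rank "spectra"
y = T @ np.array([1.0, -1.0, 0.5]) + 0.05 * rng.normal(size=n)

X_tr, X_te, y_tr, y_te = X[:80], X[80:], y[:80], y[80:]
mX, my = X_tr.mean(axis=0), y_tr.mean()
_, _, Vt = np.linalg.svd(X_tr - mX, full_matrices=False)

rmse = []
for k in range(1, 11):
    scores = (X_tr - mX) @ Vt[:k].T               # first k component scores
    b, *_ = np.linalg.lstsq(scores, y_tr - my, rcond=None)
    pred = my + (X_te - mX) @ Vt[:k].T @ b
    rmse.append(np.sqrt(np.mean((pred - y_te) ** 2)))

best_k = int(np.argmin(rmse)) + 1                 # held-out optimum
```

On such data the held-out RMSE drops sharply up to the true number of underlying factors and then flattens, which is the behavior a practitioner inspects when choosing the LV number.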
Institute of Scientific and Technical Information of China (English)
LIU ZhiChao; MA Xiang; WEN YaDong; WANG Yi; CAI WenSheng; SHAO XueGuang
2009-01-01
The number of latent variables (LVs), or the factor number, is a key parameter in PLS modeling for obtaining correct predictions. Although much work has been done on this issue, determining a suitable LV number in practical use remains difficult. A method named independent factor diagnostics (IFD) is proposed to investigate the contribution of each LV to the predicted results, on the basis of a discussion of the determination of the LV number in PLS modeling for near-infrared (NIR) spectra of complex samples. The NIR spectra of three data sets of complex samples, including a public data set and two tobacco lamina ones, are investigated. It is shown that several high-order LVs make the main contributions to the predicted results, although the contributions of the low-order LVs should not be neglected in the PLS models. Therefore, in practical use of PLS for the analysis of complex samples, it may be better to use a slightly larger LV number for NIR spectral analysis.
Kasemsumran, Sumaporn; Suttiwijitpukdee, Nattaporn; Keeratinijakal, Vichein
2017-01-01
In this research, near-infrared (NIR) spectroscopy in combination with moving window partial least squares discriminant analysis (MWPLS-DA) was utilized to discriminate turmeric varieties defined by DNA markers, which correlate with the quantity of curcuminoid. Curcuminoid was used as the marker compound in variety identification because most of the pharmacological properties of turmeric derive from it. MWPLS-DA selected informative NIR spectral regions for fitting and predicting the {-1/1}-coded turmeric varieties, indicating which variables enter the latent variables of the discriminant analysis. Consequently, MWPLS-DA enabled the selection of the combined informative NIR spectral regions of 1100-1260, 1300-1500 and 1880-2500 nm for classification modeling of turmeric variety with 148 calibration samples, and yielded better results than a partial least squares discriminant analysis (PLS-DA) model built using the whole NIR spectral region. This effective and rapid strategy of using NIR in combination with MWPLS-DA provided variety identification results of 100% in both specificity and total accuracy for 48 test samples.
Ibañez, Gabriela A
2008-05-30
A simple and sensitive methodology to simultaneously quantify tetracycline and oxytetracycline in bovine serum samples is described. The method combines the advantages of lanthanide-sensitized luminescence (i.e., sensitivity and selectivity) with partial least squares (PLS) analysis, and requires no previous separation steps. Due to the strong overlap of the emission and excitation spectra of the analytes and their europium complexes, the luminescence decay curve (intensity of luminescence vs. time) of the analyte-Eu complex was selected to resolve mixtures of tetracycline and oxytetracycline. Partial least squares uses the luminescence decay as the discriminatory parameter and regresses the luminescence versus time onto the concentrations of the standards. A 16-sample aqueous calibration set was used, and 10 validation samples, 11 spiked bovine serum samples and a serum blank were studied. The analyte recoveries from serum samples ranged from 87 to 104% for tetracycline and from 94 to 106% for oxytetracycline. The results obtained by the developed method were statistically comparable to those obtained with high performance liquid chromatography.
Prazen, B J; Johnson, K J; Weber, A; Synovec, R E
2001-12-01
Quantitative analysis of naphtha samples is demonstrated using comprehensive two-dimensional gas chromatography (GC x GC) and chemometrics. This work is aimed at providing a GC system for the quantitative and qualitative analysis of complex process streams for process monitoring and control. The high-speed GC x GC analysis of naphtha is accomplished through short GC columns, high carrier gas velocities, and partial chromatographic peak resolution followed by multivariate quantitative analysis. Six min GC x GC separations are analyzed with trilinear partial least squares (tri-PLS) to predict the aromatic and naphthene (cycloalkanes) content of naphtha samples. The 6-min GC x GC separation time is over 16 times faster than a single-GC-column standard method in which a single-column separation resolves the aromatic and naphthene compounds in naphtha and predicts the aromatic and naphthene percent concentrations through addition of the resolved signals. Acceptable quantitative precision is provided by GC x GC/tri-PLS.
Sheta, B.; Elhabiby, M.; Sheimy, N.
2012-01-01
A robust scale and rotation invariant image matching algorithm is vital for the Visual Based Navigation (VBN) of aerial vehicles, where matches between an existing geo-referenced database images and the real-time captured images are used to georeference (i.e. six transformation parameters - three rotation and three translation) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter a...
Satija, A.; Caers, J.
2014-12-01
Hydrogeological forecasting problems, like many subsurface forecasting problems, often suffer from scarcity of reliable data combined with complex prior information about the underlying earth system. Assimilating and integrating this information into an earth model requires iterative parameter space exploration techniques or Markov chain Monte Carlo techniques. Since such an earth model needs to account for many large and small scale features of the underlying system, as the system gets larger, iterative modeling can become computationally prohibitive, in particular when the forward model allows for only a few hundred model evaluations. In addition, most modeling methods do not account for the purpose for which the inverse method is built, namely the actual forecast, and usually focus only on the data and the model. In this study, we present a technique to extract features of the earth system informed by time-varying dynamic data (data features) and those that inform a time-varying forecasting variable (forecast features) using Functional Principal Component Analysis. Canonical Coefficient Analysis is then used to examine the relationship between these features using a linear model. When this relationship suggests that the available data inform the required forecast, a simple linear regression on the linear model can directly estimate the posterior of the forecasting problem, without any iterative inversion of model parameters. This idea and method are illustrated using an example of contaminant flow in an aquifer with a complex prior, high dimensionality and a nonlinear flow and transport model.
Institute of Scientific and Technical Information of China (English)
Wei Liang; Jiancheng Li; Xinyu Xu; Yongqi Zhao
2016-01-01
The block-diagonal least squares method, which theoretically has specific requirements for the observation data and the spatial distribution of its precision, plays an important role in ultra-high degree gravity field determination. On the basis of the block-diagonal least squares method, three data processing strategies are employed in this paper to determine gravity field models using three kinds of simulated global grid data with different noise spatial distributions. The numerical results show that when the weight matrix corresponding to the noise of the observation data is employed, the model computed by least squares using the full normal matrix has much higher precision than the one estimated using only the block-diagonal part of the normal matrix. The model computed by the block-diagonal least squares method without the weight matrix has slightly lower precision than the model computed using rigorous least squares with the weight matrix. These results offer a valuable reference for the use of the block-diagonal least squares method in ultra-high degree gravity model determination.
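The role the weight matrix plays in the abstract above can be illustrated with a small synthetic comparison of unweighted and weighted least squares (a generic sketch, not the gravity-field computation itself; the design matrix and noise model are assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 200
A = np.column_stack([np.ones(n), rng.uniform(0.0, 1.0, n)])  # design matrix
x_true = np.array([2.0, -1.0])
sigma = np.where(np.arange(n) < 100, 0.01, 1.0)   # two noise regimes (heteroscedastic)
b = A @ x_true + sigma * rng.normal(size=n)

# unweighted LS: solve the normal equations A^T A x = A^T b
x_ols = np.linalg.solve(A.T @ A, A.T @ b)

# weighted LS with W = diag(1/sigma^2): solve A^T W A x = A^T W b
w = 1.0 / sigma**2
x_wls = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * b))
```

When the noise variance varies strongly across observations, the weighted solution concentrates on the precise observations and recovers the parameters far more accurately, which mirrors the precision gap between the weighted and unweighted strategies reported in the abstract.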
Darwish, Hany W; Naguib, Ibrahim A
2013-05-01
The performance of partial least squares regression (PLSR) is enhanced in the presented work by three multivariate models: weighted-regression PLSR (Weighted-PLSR), genetic-algorithm PLSR (GA-PLSR), and wavelet-transform PLSR (WT-PLSR). The proposed models were applied to the stability-indicating analysis of mixtures of mebeverine hydrochloride (meb) and sulpiride (sul) in the presence of their reported impurities and degradation products. This work compares these chemometric methods, outlining the underlying algorithm for each and comparing the analysis results. For proper analysis, a 6-factor, 5-level experimental design was established, resulting in a training set of 25 mixtures containing different ratios of the interfering species. A test set of 5 mixtures was used to validate the prediction ability of the suggested models. Leave-one-out (LOO) cross-validation and the bootstrap were applied to choose the number of PLS components. The proposed GA-PLSR method was successfully applied to the analysis of raw material (test set: 101.03% ± 1.068 and 101.47% ± 2.721 for meb and sul, respectively) and pharmaceutical tablets containing meb and sul mixtures (10.10% ± 0.566 and 98.16% ± 1.081 for meb and sul).
Hobro, Alison J; Kuligowski, Julia; Döll, Markus; Lendl, Bernhard
2010-11-01
Wood is a ubiquitous material used in everyday life. Accurate identification of species can be of importance in a historical context enabling appropriate conservation treatment and adequate choice of material to be applied to historic wooden objects, and in a more modern context, in the identification of forgeries. Wood is also often treated to improve certain physical characteristics, often strength and durability. However, determination of whether or not a piece of wood has been treated can be very difficult. Infrared spectroscopy has previously been applied to differentiate between different wood species or between treated and untreated wood, often in conjunction with chemometric analysis techniques. Here, we report the use of mid-IR spectroscopy, coupled with partial least squares discriminant analysis for the discrimination between two walnut wood species and to differentiate between steam-treated and untreated samples of each of these wood species. We show that the discrimination between species and between steam-treated and non-steam-treated wood from Juglans nigra is very clear and, while analysis of the quality of the discrimination between steam-treated and non-steam-treated J. regia samples is not as good, it is, nevertheless, sufficient for discrimination between the two groups with a statistical significance of P < 0.0001.
Mohammadi Moghaddam, Toktam; Razavi, Seyed M A; Taghizadeh, Masoud; Sazgarnia, Ameneh
2016-01-01
Roasting is an important step in the processing of pistachio nuts. The effects of hot air roasting temperature (90, 120 and 150 °C), time (20, 35 and 50 min) and air velocity (0.5, 1.5 and 2.5 m/s) on textural and sensory characteristics of pistachio nuts and kernels were investigated. The results showed that increasing the roasting temperature decreased the fracture force (82-25.54 N), instrumental hardness (82.76-37.59 N), apparent modulus of elasticity (47-21.22 N/s) and compressive energy (280.73-101.18 N.s), and increased the bitterness (1-2.5) and the hardness score (6-8.40) of pistachio kernels. Longer roasting times improved the flavor of the samples. The results of the consumer test showed that the roasted pistachio kernels have good acceptability for flavor (score 5.83-8.40), color (score 7.20-8.40) and hardness (score 6-8.40). Moreover, partial least squares (PLS) analysis of instrumental and sensory data provided important information on the correlation of objective and subjective properties. The univariate analysis showed that over 93.87% of the variation in sensory hardness and almost 87% of the variation in sensory acceptability could be explained by instrumental texture properties.
Melaku, Yohannes Adama; Gill, Tiffany K; Taylor, Anne W; Adams, Robert; Shi, Zumin
2017-06-12
The relative advantages of dietary analysis methods, particularly in identifying dietary patterns associated with bone mass, have not been investigated. We evaluated principal component analysis (PCA), partial least squares (PLS) and reduced-rank regression (RRR) in determining dietary patterns associated with bone mass. Data from 1182 study participants (45.9% males; aged 50 years and above) from the North West Adelaide Health Study (NWAHS) were used. Dietary data were collected using a food frequency questionnaire (FFQ). Dietary patterns were constructed using PCA, PLS and RRR and compared based on their performance in identifying plausible patterns associated with bone mineral density (BMD) and bone mineral content (BMC). PCA, PLS and RRR identified two, four and four dietary patterns, respectively. All methods identified similar patterns for the first two factors (factor 1, "prudent", and factor 2, "western"). Three, one and none of the patterns derived by RRR, PLS and PCA, respectively, were significantly associated with bone mass. The "prudent" and dairy (factor 3) patterns determined by RRR were positively and significantly associated with BMD and BMC. The vegetables and fruit pattern (factor 4) of PLS and RRR was negatively and significantly associated with BMD and BMC, respectively. RRR was found to be more appropriate than PCA and PLS in identifying more plausible dietary patterns associated with bone mass. Nevertheless, the advantage of RRR over the other two methods should be confirmed in future studies.
MULTI-RESOLUTION LEAST SQUARES SUPPORT VECTOR MACHINES
Institute of Scientific and Technical Information of China (English)
Anonymous
2007-01-01
The Least Squares Support Vector Machine (LS-SVM) is an improvement on the SVM. Combining the LS-SVM with Multi-Resolution Analysis (MRA), this letter proposes the Multi-Resolution LS-SVM (MLS-SVM). The proposed algorithm has the same theoretical framework as MRA but better approximation ability. At a fixed scale the MLS-SVM is a classical LS-SVM, but the MLS-SVM can gradually approximate the target function at different scales. In experiments, the MLS-SVM is used for nonlinear system identification and achieves better identification accuracy.
Yan, Si-Min; Liu, Jun-Ping; Xu, Lu; Fu, Xian-Shu; Cui, Hai-Feng; Yun, Zhen-Yu; Yu, Xiao-Ping; Ye, Zi-Hong
2014-01-01
This paper focuses on a rapid and nondestructive way to discriminate the geographical origin of Anxi-Tieguanyin tea by near-infrared (NIR) spectroscopy and chemometrics. 450 representative samples were collected from Anxi County, the original producing area of Tieguanyin tea, and another 120 Tieguanyin samples with similar appearance were collected from unprotected producing areas in China. All these samples were measured by NIR. The Stahel-Donoho estimates (SDE) outlyingness diagnosis was used to remove the outliers. Partial least squares discriminant analysis (PLSDA) was performed to develop a classification model and predict the authenticity of unknown objects. To improve the sensitivity and specificity of classification, the raw data was preprocessed to reduce unwanted spectral variations by standard normal variate (SNV) transformation, taking second-order derivatives (D2) spectra, and smoothing. As the best model, the sensitivity and specificity reached 0.931 and 1.000 with SNV spectra. Combination of NIR spectrometry and statistical model selection can provide an effective and rapid method to discriminate the geographical producing area of Anxi-Tieguanyin.
Liu, Xiu-ying; Wang, Li; Chang, Qing-rui; Wang, Xiao-xing; Shang, Yan
2015-07-01
Wuqi County of Shaanxi Province, where the vegetation recovering measures have been carried out for years, was taken as the study area. A total of 100 loess samples from 24 different profiles were collected. Total nitrogen (TN) and alkali hydrolysable nitrogen (AHN) contents of the soil samples were analyzed, and the soil samples were scanned in the visible/near-infrared (VNIR) region of 350-2500 nm in the laboratory. The calibration models were developed between TN and AHN contents and VNIR values based on correlation analysis (CA) and partial least squares regression (PLS). Independent samples validated the calibration models. The results indicated that the optimum model for predicting TN of loess was established by using first derivative of reflectance. The best model for predicting AHN of loess was established by using normal derivative spectra. The optimum TN model could effectively predict TN in loess from 0 to 40 cm, but the optimum AHN model could only roughly predict AHN at the same depth. This study provided a good method for rapidly predicting TN of loess where vegetation recovering measures have been adopted, but prediction of AHN needs to be further studied.
Shi, Xiaoxia; Zhang, Xiaoming; Song, Shiqing; Tan, Chen; Jia, Chengsheng; Xia, Shuqin
2013-01-15
The "enzymatic hydrolysis-mild thermal oxidation" method was employed to obtain oxidized tallow. Nine beeflike flavours (BFs) were prepared through Maillard reaction with oxidized tallow and other ingredients. Volatile compounds of oxidized tallow and beeflike flavours were analysed by SPME/GC-MS. Six sensory attributes (meaty, beefy, tallowy, simulate, burnt and off-flavour) were selected to assess BFs. Thirty four odour-active compounds were identified to represent beef odour through GC-O analysis based on detection frequency method. GC-MS profiles of oxidized tallow were correlated with GC-O responses and sensory attributes of BFs using partial least squares regression modelling (PLSR). Twenty nine compounds were considered as the potential precursors of oxidized tallow. Among them, tetradecanoic acid, d-limonene, 1,7-heptandiol, 2-butyltetrahydrofuran, (Z)-4-undecenal, (Z)-4-decenal, (E)-4-nonenal and 5-pentyl-2(3H)-furanone were unique products generated from enzymatic hydrolysis-mild thermal oxidation of tallow, while hexanal, heptanal, octanal, nonanal, decanal, pentanal, acetic acid, butanoic acid, hexanoic acid, 1-heptanol, 1-octanol, 3-methylbutanal, 2-pentylfuran, γ-nonalactone, 2-undecenal, (E,E)-2,4-decadienal, (E,E)-2,4-nonadienal, (E)-2-nonenal, (E)-2-octenal, (E)-2-decenal and (Z)-2-heptenal were common products generated from thermal oxidation of tallow.
Directory of Open Access Journals (Sweden)
Si-Min Yan
2014-01-01
This paper focuses on a rapid and nondestructive way to discriminate the geographical origin of Anxi-Tieguanyin tea by near-infrared (NIR) spectroscopy and chemometrics. 450 representative samples were collected from Anxi County, the original producing area of Tieguanyin tea, and another 120 Tieguanyin samples with similar appearance were collected from unprotected producing areas in China. All these samples were measured by NIR. The Stahel-Donoho estimates (SDE) outlyingness diagnosis was used to remove the outliers. Partial least squares discriminant analysis (PLSDA) was performed to develop a classification model and predict the authenticity of unknown objects. To improve the sensitivity and specificity of classification, the raw data was preprocessed to reduce unwanted spectral variations by standard normal variate (SNV) transformation, taking second-order derivative (D2) spectra, and smoothing. As the best model, the sensitivity and specificity reached 0.931 and 1.000 with SNV spectra. Combination of NIR spectrometry and statistical model selection can provide an effective and rapid method to discriminate the geographical producing area of Anxi-Tieguanyin.
Pilon, Alan Cesar; Carnevale Neto, Fausto; Freire, Rafael Teixeira; Cardoso, Patrícia; Carneiro, Renato Lajarim; Da Silva Bolzani, Vanderlan; Castro-Gamboa, Ian
2016-03-01
A major challenge in metabolomic studies is how to extract and analyze an entire metabolome. So far, no single method has been able to complete this task in an efficient and reproducible way. In this work we propose a sequential strategy for the extraction and chromatographic separation of metabolites from leaves of Jatropha gossypifolia using a design of experiments and a partial least squares model. The effect of 14 different solvents on the extraction process was evaluated, and an optimized separation condition for liquid chromatography was estimated considering mobile phase composition and analysis time. The initial conditions, using methanol for extraction and a 30 min separation between 5 and 100% water/methanol (1:1 v/v) with 0.1% acetic acid, 20 μL sample volume, 3.0 mL min(-1) flow rate and 25°C column temperature, led to 107 chromatographic peaks. After the optimization strategy, using i-propanol/chloroform (1:1 v/v) for extraction, a linear gradient elution of 60 min between 5 and 100% water/(acetonitrile/methanol 68:32 v/v with 0.1% acetic acid), 30 μL sample volume, 2.0 mL min(-1) flow rate and 30°C column temperature, we detected 140 chromatographic peaks, 30.84% more than with the initial method. This is a reliable strategy using a limited number of experiments for metabolomics protocols.
He, Wei; Zhou, Jian; Cheng, Hao; Wang, Liyuan; Wei, Kang; Wang, Weifeng; Li, Xinghui
2012-02-01
In today's global food markets, the ability to trace the origins of agricultural products is becoming increasingly important. We developed an efficient procedure for validating the authenticity and origin of tea samples in which partial least squares and Euclidean distance methods, based on near-infrared spectroscopy data, were combined to classify tea samples from different tea producing areas. Four models for identification of the authenticity of tea samples were constructed and utilized in our two-step procedure. High accuracy rates of 98.60%, 97.90%, 97.55%, and 99.83% for the calibration set, and 97.19%, 97.54%, 97.83%, and 100% for the test set, were achieved. After the first identification step, employing the four origin authenticity models, followed by the second step using the Euclidean distance method, accuracy rates for specific origin identification were 98.43% in the calibration set and 96.84% in the test set. This method, employing a two-step analysis with a multi-origin model, accurately identified the origin of tea samples collected in four different areas. This study provides a potential reference method for the detection of the "geographical indication" of agricultural products and can be used in origin traceability studies.
Freye, Chris E; Fitz, Brian D; Billingsley, Matthew C; Synovec, Robert E
2016-06-01
The chemical composition and several physical properties of RP-1 fuels were studied using comprehensive two-dimensional (2D) gas chromatography (GC×GC) coupled with flame ionization detection (FID). A "reversed column" GC×GC configuration was implemented with an RTX-wax column as the first dimension ((1)D) and an RTX-1 as the second dimension ((2)D). Modulation was achieved using a high temperature diaphragm valve mounted directly in the oven. Using leave-one-out cross-validation (LOOCV), the summed GC×GC-FID signal of three compound-class selective 2D regions (alkanes, cycloalkanes, and aromatics) was regressed against previously measured ASTM-derived values for these compound classes, yielding root mean square errors of cross-validation (RMSECV) of 0.855, 0.734, and 0.530 mass%, respectively. For comparison, using partial least squares (PLS) analysis with LOOCV, the GC×GC-FID signal of the entire 2D separations was regressed against the same ASTM values, yielding a linear trend for the three compound classes with RMSECV values of 1.52, 2.76, and 0.945 mass%, respectively. Additionally, a more detailed PLS analysis was undertaken of the compound classes (n-alkanes, iso-alkanes, mono-, di-, and tri-cycloalkanes, and aromatics) and of physical properties previously determined by ASTM methods (such as net heat of combustion, hydrogen content, density, kinematic viscosity, sustained boiling temperature and vapor rise temperature). Results from these PLS studies using the relatively simple and inexpensive GC×GC-FID instrumental platform are compared to previously reported results using the GC×GC-TOFMS instrumental platform.
Institute of Scientific and Technical Information of China (English)
Bang-hua YANG; Liang-fei HE; Lin LIN; Qian WANG
2015-01-01
Ocular artifacts (OAs) cause the main interfering signals within electroencephalogram (EEG) measurements. An adaptive filter based on reference signals from an electrooculogram (EOG) can reduce ocular interference, but collecting EOG signals during a long-term EEG recording is inconvenient and uncomfortable for the subject. To remove ocular artifacts from EEG in brain-computer interfaces (BCIs), a method named spatial constraint independent component analysis based recursive least squares (SCICA-RLS) is proposed. The method consists of two stages. In the first stage, independent component analysis (ICA) is used to decompose multiple EEG channels into an equal number of independent components (ICs). Ocular ICs are identified by an automatic artifact detection method based on kurtosis. Then empirical mode decomposition (EMD) is employed to remove any cerebral activity from the identified ocular ICs to obtain exact artifact ICs. In the second stage, SCICA first applies the exact artifact ICs obtained in the first stage as a constraint to extract artifact ICs from the given EEG signal. These extracted ICs are called spatial constraint ICs (SC-ICs). Then the RLS-based adaptive filter uses the SC-ICs as reference signals to reduce interference, which avoids the need for parallel EOG recordings. In addition, the proposed method computes quickly because SCICA does not need to identify all ICs as ICA does. Based on EEG data recorded from seven subjects, the new approach leads to average classification accuracies 3.3% and 12.6% higher than those of standard ICA and raw EEG, respectively. The proposed method also achieves 83.5% and 83.8% reductions in computation time compared with standard ICA and ICA-RLS, respectively, demonstrating better and faster OA reduction.
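The RLS adaptive-filter stage described above can be sketched generically: given a reference artifact channel, the filter learns the coupling into the recording and subtracts it. The signals below are synthetic stand-ins (a sine for cerebral activity, white noise for the artifact reference), not EEG/SC-IC data, and the filter parameters are illustrative assumptions:

```python
import numpy as np

def rls_filter(d, ref, order=3, lam=0.99, delta=100.0):
    """Recursive least squares: estimate the artifact in d from the
    reference channel ref and return the cleaned signal d - artifact."""
    w = np.zeros(order)
    P = np.eye(order) * delta
    clean = np.empty_like(d)
    for n in range(len(d)):
        # tap vector of recent reference samples (zero-padded at the start)
        u = np.array([ref[n - k] if n - k >= 0 else 0.0 for k in range(order)])
        k_gain = P @ u / (lam + u @ P @ u)
        e = d[n] - w @ u              # a-priori error = cleaned sample
        w = w + k_gain * e            # update artifact-coupling estimate
        P = (P - np.outer(k_gain, u @ P)) / lam
        clean[n] = e
    return clean

rng = np.random.default_rng(3)
t = np.arange(2000) / 250.0
brain = np.sin(2 * np.pi * 10 * t)    # wanted 10 Hz activity
ref = rng.normal(size=t.size)         # reference artifact channel
d = brain + 0.8 * ref                 # contaminated recording
clean = rls_filter(d, ref)
```

Because the reference is uncorrelated with the wanted signal, the filter converges to the artifact coupling and the a-priori error approaches the underlying activity, which is the role the SC-ICs play as reference signals in the abstract.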
Directory of Open Access Journals (Sweden)
Igseo Choi
2011-05-01
A three-generation resource population was constructed by crossing pigs from the Duroc and Pietrain breeds. In this study, 954 F2 animals were used to identify quantitative trait loci (QTL) affecting carcass and meat quality traits. Based on the results of the first scan, analyzed with a line-cross model using 124 microsatellite markers and 510 F2 animals, 9 chromosomes were selected for genotyping of additional markers. Twenty additional markers were genotyped for 954 F2 animals, and 20 markers used in the first scan were genotyped for 444 additional F2 animals. Three different Mendelian models using least squares for QTL analysis were applied for the second scan: a line-cross model, a half-sib model, and a combined line-cross and half-sib model. Significance thresholds were determined by false discovery rate (FDR). In total, 50 QTL using the line-cross model, 38 QTL using the half-sib model and 3 additional QTL using the combined line-cross and half-sib model were identified (q < 0.05). The line-cross and half-sib models revealed strong evidence for QTL regions on SSC6 for carcass traits (e.g., 10th-rib backfat; q < 0.0001) and on SSC15 for meat quality traits (e.g., tenderness, color, pH; q < 0.01), respectively. QTL for pH (SSC3), dressing percent (SSC7), marbling score and moisture percent (SSC12), CIE a* (SSC16) and carcass length and spareribs weight (SSC18) were also significant (q < 0.01). Additional marker and animal genotypes increased the statistical power for QTL detection, and applying different analysis models allowed confirmation of QTL and detection of new QTL.
Energy Technology Data Exchange (ETDEWEB)
Li, Xiongwei; Wang, Zhe, E-mail: zhewang@tsinghua.edu.cn; Fu, Yangting; Li, Zheng; Ni, Weidou
2014-09-01
Quantitative measurement of carbon content in coal is essential for coal property analysis. However, quantitative measurement of carbon content in coal using laser-induced breakdown spectroscopy (LIBS) suffers from low accuracy due to measurement uncertainty as well as matrix effects. In this study, our previously proposed spectrum standardization method and dominant factor based partial least square (PLS) method were combined to improve the measurement accuracy of carbon content in coal using LIBS. The combination model utilized the spectrum standardization method to accurately calculate the dominant carbon concentration as the dominant factor, and then applied PLS with full-spectrum information to correct residual errors. The combination model was applied to measure the carbon content in 24 bituminous coal samples. Results demonstrated that the combination model can further improve measurement accuracy compared with the spectrum standardization model and the dominant factor based PLS model, in which the dominant factor was calculated using the traditional univariate method. The coefficient of determination, root-mean-square error of prediction, and average relative error for the combination model were 0.99, 1.63%, and 1.82%, respectively. The values for the spectrum standardization model were 0.90, 2.24%, and 2.75%, respectively, whereas those for the dominant factor based PLS model were 0.99, 2.66%, and 3.64%, respectively. The results indicate that LIBS has great potential for coal analysis. - Highlights: • Spectrum standardization method is utilized to establish a more accurate dominant factor model. • PLS algorithm is applied to further compensate for residual errors using the entire spectrum information. • Measurement accuracy is improved.
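The dominant-factor-plus-residual-correction idea can be sketched in miniature: a univariate model supplies the dominant prediction, and a second regression corrects its residuals. All data and variable names below are synthetic stand-ins, with a single extra variable replacing the full-spectrum PLS step:

```python
def ols(x, y):
    """Ordinary least-squares slope and intercept for y ~ a*x + b."""
    n = len(x)
    sx, sy = sum(x), sum(y)
    sxx = sum(v * v for v in x)
    sxy = sum(u * v for u, v in zip(x, y))
    a = (n * sxy - sx * sy) / (n * sxx - sx * sx)
    return a, (sy - a * sx) / n

# synthetic calibration set: "carbon content" driven mostly by the
# carbon-line intensity i1, plus a smaller matrix-effect term from i2
i1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
i2 = [0.5, -0.2, 0.4, -0.5, 0.3, -0.1]
carbon = [2.0 * a + 0.6 * b for a, b in zip(i1, i2)]

a1, b1 = ols(i1, carbon)                          # step 1: dominant factor
resid = [c - (a1 * x + b1) for c, x in zip(carbon, i1)]
a2, b2 = ols(i2, resid)                           # step 2: residual correction
pred = [a1 * x + b1 + a2 * y + b2 for x, y in zip(i1, i2)]

rmse_dom = (sum(r * r for r in resid) / len(resid)) ** 0.5
rmse_comb = (sum((p - c) ** 2 for p, c in zip(pred, carbon)) / len(carbon)) ** 0.5
```

The combined prediction error is strictly smaller than the dominant-factor error whenever the residuals still correlate with the remaining variable, which mirrors the role of the PLS correction step.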
Peerbhay, Kabir Yunus; Mutanga, Onisimo; Ismail, Riyad
2013-05-01
Discriminating commercial tree species using hyperspectral remote sensing techniques is critical in monitoring the spatial distributions and compositions of commercial forests. However, issues related to data dimensionality and multicollinearity limit the successful application of the technology. The aim of this study was to examine the utility of the partial least squares discriminant analysis (PLS-DA) technique in accurately classifying six exotic commercial forest species (Eucalyptus grandis, Eucalyptus nitens, Eucalyptus smithii, Pinus patula, Pinus elliotii and Acacia mearnsii) using airborne AISA Eagle hyperspectral imagery (393-900 nm). Additionally, the variable importance in the projection (VIP) method was used to identify subsets of bands that could successfully discriminate the forest species. Results indicated that the PLS-DA model that used all the AISA Eagle bands (n = 230) produced an overall accuracy of 80.61% and a kappa value of 0.77, with user's and producer's accuracies ranging from 50% to 100%. In comparison, incorporating the optimal subset of VIP selected wavebands (n = 78) in the PLS-DA model resulted in an improved overall accuracy of 88.78% and a kappa value of 0.87, with user's and producer's accuracies ranging from 70% to 100%. Bands located predominantly within the visible region of the electromagnetic spectrum (393-723 nm) showed the most capability in terms of discriminating between the six commercial forest species. Overall, the research has demonstrated the potential of using PLS-DA for reducing the dimensionality of hyperspectral datasets as well as determining the optimal subset of bands to produce the highest classification accuracies.
Huo, R.; Wehrens, H.R.M.J.; Buydens, L.M.C.
2004-01-01
The quality of DOSY NMR data can be improved by careful pre-processing techniques. Baseline drift, peak shift, and phase shift commonly exist in real-world DOSY NMR data. These phenomena seriously hinder the data analysis and should be removed as much as possible. In this paper, a series of preprocessing...
Konukoglu, Ender; Coutu, Jean-Philippe; Salat, David H.; Fischl, Bruce
2016-01-01
Diffusion magnetic resonance imaging (dMRI) is a unique technology that allows the noninvasive quantification of microstructural tissue properties of the human brain in healthy subjects as well as the probing of disease-induced variations. Population studies of dMRI data have been essential in identifying pathological structural changes in various conditions, such as Alzheimer’s and Huntington’s diseases1,2. The most common form of dMRI involves fitting a tensor to the underlying imaging data (known as Diffusion Tensor Imaging, or DTI), then deriving parametric maps, each quantifying a different aspect of the underlying microstructure, e.g. fractional anisotropy and mean diffusivity. To date, the statistical methods utilized in most DTI population studies either analyzed only one such map or analyzed several of them, each in isolation. However, it is most likely that variations in the microstructure due to pathology or normal variability would affect several parameters simultaneously, with differing variations modulating the various parameters to differing degrees. Therefore, joint analysis of the available diffusion maps can be more powerful in characterizing histopathology and distinguishing between conditions than the widely used univariate analysis. In this article, we propose a multivariate approach for statistical analysis of diffusion parameters that uses partial least squares correlation (PLSC) analysis and permutation testing as building blocks in a voxel-wise fashion. Stemming from the common formulation, we present three different multivariate procedures for group analysis, regressing-out nuisance parameters and comparing effects of different conditions. We used the proposed procedures to study the effects of non-demented aging, Alzheimer’s disease and mild cognitive impairment on the white matter. Here, we present results demonstrating that the proposed PLSC-based approach can differentiate between effects of different conditions in the same
Tikhonov Regularization and Total Least Squares
DEFF Research Database (Denmark)
Golub, G. H.; Hansen, Per Christian; O'Leary, D. P.
2000-01-01
We show how Tikhonov's regularization method, which in its original formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that...
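For the standard Tikhonov side of this comparison, a minimal two-parameter example shows how the regularization term damps an ill-conditioned least squares solution (the total least squares recasting itself requires an SVD and is not shown); the matrix and data are invented for illustration:

```python
def tikhonov_2d(A, b, lam):
    """Solve min ||A x - b||^2 + lam^2 ||x||^2 for a 2-column A via the
    regularized normal equations (A^T A + lam^2 I) x = A^T b."""
    ata = [[sum(A[k][i] * A[k][j] for k in range(len(A))) for j in range(2)]
           for i in range(2)]
    atb = [sum(A[k][i] * b[k] for k in range(len(A))) for i in range(2)]
    m = [[ata[0][0] + lam ** 2, ata[0][1]],
         [ata[1][0], ata[1][1] + lam ** 2]]
    det = m[0][0] * m[1][1] - m[0][1] * m[1][0]
    return [(m[1][1] * atb[0] - m[0][1] * atb[1]) / det,
            (-m[1][0] * atb[0] + m[0][0] * atb[1]) / det]

# nearly collinear columns make the unregularized problem ill-conditioned
A = [[1.0, 1.0], [1.0, 1.001], [1.0, 0.999]]
b = [2.0, 2.001, 1.999]
x0 = tikhonov_2d(A, b, 0.0)    # ordinary least squares
x1 = tikhonov_2d(A, b, 0.1)    # Tikhonov-regularized, damped solution
```

The regularized solution has a smaller norm than the plain least squares solution, which is exactly the stabilizing effect Tikhonov regularization buys on ill-conditioned systems.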
Gross, Bernard
1996-01-01
Material characterization parameters obtained from naturally flawed specimens are necessary for reliability evaluation of non-deterministic advanced ceramic structural components. The least squares best fit method is applied to the three parameter uniaxial Weibull model to obtain the material parameters from experimental tests on volume or surface flawed specimens subjected to pure tension, pure bending, four point or three point loading. Several illustrative example problems are provided.
A novel extended kernel recursive least squares algorithm.
Zhu, Pingping; Chen, Badong; Príncipe, José C
2012-08-01
In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.
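As a rough sketch of what kernel recursive least squares machinery converges to, the batch RKHS regression problem (K + λI)α = y can be solved directly for a small dataset; the recursive algorithm builds this solution sample by sample. The kernel width, regularization value, and data below are illustrative assumptions, not the Ex-KRLS algorithm itself:

```python
import math

def solve(M, v):
    """Gaussian elimination with partial pivoting for small dense systems."""
    n = len(M)
    A = [row[:] + [v[i]] for i, row in enumerate(M)]
    for c in range(n):
        p = max(range(c, n), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(c + 1, n):
            f = A[r][c] / A[c][c]
            for k in range(c, n + 1):
                A[r][k] -= f * A[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (A[r][n] - sum(A[r][k] * x[k] for k in range(r + 1, n))) / A[r][r]
    return x

def gauss_kernel(a, b, width=1.0):
    return math.exp(-((a - b) ** 2) / (2 * width ** 2))

def krls_batch(xs, ys, reg=1e-3):
    """Batch solution (K + reg*I) alpha = y that KRLS builds recursively."""
    n = len(xs)
    K = [[gauss_kernel(xs[i], xs[j]) + (reg if i == j else 0.0)
          for j in range(n)] for i in range(n)]
    return solve(K, ys)

def predict(xs, alpha, x):
    return sum(a * gauss_kernel(xi, x) for a, xi in zip(alpha, xs))

xs = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
ys = [math.sin(x) for x in xs]
alpha = krls_batch(xs, ys)
```

The expansion coefficients α define a function in the reproducing kernel Hilbert space that interpolates the nonlinear target closely between the training points.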
Li, Meng; Wang, Jun; Du, Fu; Diallo, Boubacar; Xie, Guang Hui
2017-01-01
Due to its chemical composition and abundance, lignocellulosic biomass is an attractive feedstock source for global bioenergy production. However, chemical composition variations interfere with the success of any single methodology for efficient bioenergy extraction from diverse lignocellulosic biomass sources. Although chemical component distributions could guide process design, they are difficult to obtain and vary widely among lignocellulosic biomass types. Therefore, expensive and laborious "one-size-fits-all" processes are still widely used. Here, a non-destructive and rapid analytical technology, near-infrared spectroscopy (NIRS) coupled with multivariate calibration, shows promise for addressing these challenges. Recent advances in molecular spectroscopy analysis have led to methodologies for dual-optimized NIRS using sample subset partitioning and variable selection, which could significantly enhance the robustness and accuracy of partial least squares (PLS) calibration models. Using this methodology, chemical components and theoretical ethanol yield (TEY) values were determined for 70 sweet and 77 biomass sorghum samples from six sweet and six biomass sorghum varieties grown in 2013 and 2014 at two study sites in northern China. Chemical components and TEY of the 147 bioenergy sorghum samples were initially analyzed and compared using wet chemistry methods. Based on linear discriminant analysis, a correct classification assignment rate (either sweet or biomass type) of 99.3% was obtained using 20 principal components. Next, detailed statistical analysis demonstrated that partial optimization using sample set partitioning based on joint X-Y distances (SPXY) for sample subset partitioning enhanced the robustness and accuracy of PLS calibration models. Finally, comparisons between five dual-optimized strategies indicated that competitive adaptive reweighted sampling coupled with the SPXY (CARS-SPXY) was the most efficient and effective method for improving
Lao, Wan-li; He, Yu-chan; Li, Gai-yun; Zhou, Qun
2016-01-01
The biomass to plastic ratio in wood plastic composites (WPCs) greatly affects the physical and mechanical properties and price. Fast and accurate evaluation of the biomass to plastic ratio is important for the further development of WPCs. Quantitative analysis of the WPC main composition currently relies primarily on thermo-analytical methods. However, these methods have some inherent disadvantages, including long analysis times, high analytical error, and operational complexity, which severely limit their applications. Therefore, in this study, Fourier Transform Infrared (FTIR) spectroscopy in combination with partial least square (PLS) has been used for rapid prediction of bamboo and polypropylene (PP) content in bamboo/PP composites. The bamboo powders were used as filler after being dried at 105 degrees C for 24 h. PP was used as the matrix material, and some chemical reagents were used as additives. Then 42 WPC samples with different ratios of bamboo and PP were prepared by extrusion. FTIR spectral data of the 42 WPC samples were collected by means of the KBr pellet technique. The model for bamboo and PP content prediction was developed by PLS-2 and full cross validation. Results of internal cross validation showed that the first derivative spectra in the range of 1 800-800 cm(-1) corrected by standard normal variate (SNV) yielded the optimal model. For both bamboo and PP calibration, the coefficients of determination (R2) were 0.955. The standard errors of calibration (SEC) were 1.872 for bamboo content and 1.848 for PP content, respectively. For both bamboo and PP validation, the R2 values were 0.950. The standard errors of cross validation (SECV) were 1.927 for bamboo content and 1.950 for PP content, respectively. And the ratios of performance to deviation (RPD) were 4.45 for both biomass and PP examinations. The results of external validation showed that the relative prediction deviations for both biomass and PP contents were lower than ± 6
A Linear-correction Least-squares Approach for Geolocation Using FDOA Measurements Only
Institute of Scientific and Technical Information of China (English)
LI Jinzhou; GUO Fucheng; JIANG Wenli
2012-01-01
A linear-correction least-squares (LCLS) estimation procedure is proposed for geolocation using frequency difference of arrival (FDOA) measurements only. We first analyze the FDOA measurements and derive the Cramér-Rao lower bound (CRLB) of geolocation using FDOA measurements. Because the localization model is a nonlinear least-squares (LS) estimator with a nonlinear constraint, a linearization method is used to convert it to a linear least-squares estimator with a nonlinear constraint. The Gauss-Newton iteration method is developed to solve the source localization problem. From the analysis of the Lagrange-multiplier solution, the algorithm is a generalization of the linear-correction least-squares estimation procedure to the case of geolocation using FDOA measurements only. The algorithm is compared with common least-squares estimation. Comparisons of their estimation accuracy with the CRLB show that the proposed method attains the CRLB. Simulation results are included to corroborate the theoretical development.
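The Gauss-Newton iteration at the core of such localization methods can be illustrated on a simpler range-based (rather than FDOA) geometry; the sensors, source position, and noise-free measurements below are invented for the example:

```python
import math

def gauss_newton_locate(sensors, dists, p, iters=20):
    """Gauss-Newton for min sum_i (||p - s_i|| - d_i)^2 in 2-D."""
    x, y = p
    for _ in range(iters):
        jtj = [[0.0, 0.0], [0.0, 0.0]]
        jtr = [0.0, 0.0]
        for (sx, sy), d in zip(sensors, dists):
            rng = math.hypot(x - sx, y - sy)
            jx, jy = (x - sx) / rng, (y - sy) / rng   # Jacobian row
            r = rng - d                                # residual
            jtj[0][0] += jx * jx; jtj[0][1] += jx * jy
            jtj[1][0] += jy * jx; jtj[1][1] += jy * jy
            jtr[0] += jx * r; jtr[1] += jy * r
        det = jtj[0][0] * jtj[1][1] - jtj[0][1] * jtj[1][0]
        dx = (jtj[1][1] * jtr[0] - jtj[0][1] * jtr[1]) / det
        dy = (-jtj[1][0] * jtr[0] + jtj[0][0] * jtr[1]) / det
        x, y = x - dx, y - dy                          # Gauss-Newton step
    return x, y

sensors = [(0.0, 0.0), (10.0, 0.0), (0.0, 10.0), (10.0, 10.0)]
truth = (3.0, 4.0)
dists = [math.hypot(truth[0] - sx, truth[1] - sy) for sx, sy in sensors]
est = gauss_newton_locate(sensors, dists, (5.0, 5.0))
```

Each iteration linearizes the nonlinear range model around the current estimate and solves the resulting linear least-squares normal equations, converging rapidly here because the measurements are noise-free.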
Liu, Xue-song; Sun, Fen-fang; Jin, Ye; Wu, Yong-jiang; Gu, Zhi-xin; Zhu, Li; Yan, Dong-lan
2015-12-01
A novel method was developed for the rapid determination of multi-indicators in corni fructus by means of near infrared (NIR) spectroscopy. Particle swarm optimization (PSO) based least squares support vector machine was investigated to increase the levels of quality control. The calibration models of moisture, extractum, morroniside and loganin were established using the PSO-LS-SVM algorithm. The performance of PSO-LS-SVM models was compared with partial least squares regression (PLSR) and back propagation artificial neural network (BP-ANN). The calibration and validation results of PSO-LS-SVM were superior to both PLS and BP-ANN. For PSO-LS-SVM models, the correlation coefficients (r) of calibrations were all above 0.942. The optimal prediction results were also achieved by PSO-LS-SVM models with the RMSEP (root mean square error of prediction) and RSEP (relative standard errors of prediction) less than 1.176 and 15.5% respectively. The results suggest that PSO-LS-SVM algorithm has a good model performance and high prediction accuracy. NIR has a potential value for rapid determination of multi-indicators in Corni Fructus.
Recursive least squares background prediction of univariate syndromic surveillance data
Burkom Howard; Najmi Amir-Homayoon
2009-01-01
Abstract Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve outbreak detection performance by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of s...
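A stripped-down version of recursive least squares background forecasting for count data might look as follows; the forgetting factor, threshold, and synthetic counts are illustrative assumptions, and the paper's Day of the Week treatment is omitted:

```python
def rls_baseline_alarms(counts, lam=0.95, threshold=3.0):
    """Recursive least-squares fit of a constant background level with
    forgetting factor `lam`; a day is flagged when its count exceeds the
    one-step-ahead forecast by `threshold` residual standard deviations."""
    mean, var, n = float(counts[0]), 1.0, 1.0
    alarms = []
    for day, c in enumerate(counts[1:], start=1):
        resid = c - mean                        # one-step-ahead residual
        if day > 7 and resid > threshold * var ** 0.5:
            alarms.append(day)
        n = lam * n + 1.0                       # effective sample size
        mean += resid / n                       # recursive LS mean update
        var = lam * var + (1.0 - lam) * resid * resid
    return alarms

# flat background with a small periodic wiggle and one injected spike
counts = [[10, 11, 9][d % 3] for d in range(30)]
counts[20] += 15
alarms = rls_baseline_alarms(counts)
```

The forgetting factor lets the background estimate track slow drifts while the injected outbreak day stands out against the one-step-ahead forecast.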
Li, Xiongwei; Fu, Yangting; Li, Zheng; Ni, Weidou
2014-01-01
Successful quantitative measurement of carbon content in coal using laser-induced breakdown spectroscopy (LIBS) suffers from relatively low precision and accuracy. In the present work, the spectrum standardization method was combined with the dominant factor based partial least square (PLS) method to improve the measurement accuracy of carbon content in coal by LIBS. The combination model employed the spectrum standardization method to convert the carbon line intensity into a standard state for more accurate calculation of the dominant carbon concentration, and then applied PLS with full spectrum information to correct the residual errors. The combination model was applied to the measurement of carbon content for 24 bituminous coal samples. The results demonstrated that the combination model could further improve the measurement accuracy compared with both our previously established spectrum standardization model and dominant factor based PLS model using spectral area normalized intensity for the dominant fa...
DEFF Research Database (Denmark)
Nørrelykke, Simon F.; Flyvbjerg, Henrik
2010-01-01
Optical tweezers and atomic force microscope (AFM) cantilevers are often calibrated by fitting their experimental power spectra of Brownian motion. We demonstrate here that if this is done with typical weighted least-squares methods, the result is a bias of relative size between -2/n and + 1/n....... The fitted value for the characteristic frequency is not affected by this bias. For the AFM then, force measurements are not affected provided an independent length-scale calibration is available. For optical tweezers there is no such luck, since the spring constant is found as the ratio...... of the characteristic frequency and the diffusion coefficient. We give analytical results for the weight-dependent bias for the wide class of systems whose dynamics is described by a linear (integro)differential equation with additive noise, white or colored. Examples are optical tweezers with hydrodynamic self...
Energy Technology Data Exchange (ETDEWEB)
Tian, Ye; Wang, Zhennan [Optics and Optoelectronics Laboratory, Ocean University of China, Qingdao, Shandong 266100 (China); Han, Xiaoshuang [Optics and Optoelectronics Laboratory, Ocean University of China, Qingdao, Shandong 266100 (China); College of Electronic Information Engineering, Inner Mongolia University, Hohhot, Inner Mongolia 010021 (China); Hou, Huaming [Optics and Optoelectronics Laboratory, Ocean University of China, Qingdao, Shandong 266100 (China); Zheng, Ronger, E-mail: rzheng@ouc.edu.cn [Optics and Optoelectronics Laboratory, Ocean University of China, Qingdao, Shandong 266100 (China)
2014-12-01
With the hope of applying laser-induced breakdown spectroscopy (LIBS) to the geological logging field, a series of cutting samples were classified using LIBS coupled with chemometric methods. In this paper, we focused on a comparative investigation of the linear PLS-DA method and non-linear SVM method. Both the optimal PLS-DA model and SVM model were built by the leave-one-out cross-validation (LOOCV) approach with the calibration LIBS spectra, and then tested by validation spectra. We show that the performance of SVM is significantly better than PLS-DA because of its ability to address the non-linear relationships in LIBS spectra, with a correct classification rate of 91.67% instead of 68.34%, and an unclassification rate of 3.33% instead of 28.33%. To further improve the classification accuracy, we then designed a new classification approach by the joint analysis of PLS-DA and SVM models. With this method, 95% of the validation spectra are correctly classified and no unclassified spectra are observed. This work demonstrated that the coupling of LIBS with the non-linear SVM method has great potential to be used for on-line classification of geological cutting samples, and the combination of PLS-DA and SVM enables the cuttings identification with an excellent performance. - Highlights: • The geological cuttings were classified using LIBS coupled with chemometric methods. • The non-linear SVM showed significantly better performance than PLS-DA. • The joint analysis of PLS-DA and SVMs provided an excellent accuracy of 95%.
Zhang, Mengliang; Zhao, Yang; Harrington, Peter de B; Chen, Pei
2016-03-01
Two simple fingerprinting methods, flow-injection coupled to ultraviolet spectroscopy and proton nuclear magnetic resonance, were used for discriminating between Aurantii fructus immaturus and Fructus poniciri trifoliatae immaturus. Both methods were combined with partial least-squares discriminant analysis. In the flow-injection method, four data representations were evaluated: total ultraviolet absorbance chromatograms, averaged ultraviolet spectra, absorbance at 193, 205, 225, and 283 nm, and absorbance at 225 and 283 nm. Prediction rates of 100% were achieved for all data representations by partial least-squares discriminant analysis using leave-one-sample-out cross-validation. The prediction rate for the proton nuclear magnetic resonance data by partial least-squares discriminant analysis with leave-one-sample-out cross-validation was also 100%. A new validation set of data was collected by flow-injection with ultraviolet spectroscopic detection two weeks later and predicted by partial least-squares discriminant analysis models constructed by the initial data representations with no parameter changes. The classification rates were 95% with the total ultraviolet absorbance chromatograms datasets and 100% with the other three datasets. Flow-injection with ultraviolet detection and proton nuclear magnetic resonance are simple, high throughput, and low-cost methods for discrimination studies.
Two simple fingerprinting methods, flow-injection UV spectroscopy (FIUV) and 1H nuclear magnetic resonance (NMR), for discrimination of Aurantii Fructus Immaturus and Fructus Poniciri Trifoliatae Immaturus were described. Both methods were combined with partial least-squares discriminant analysis...
Multisource Least-squares Reverse Time Migration
Dai, Wei
2012-12-01
Least-squares migration has been shown to produce high-quality migration images, but its computational cost is considered too high for practical imaging. In this dissertation, a multisource least-squares reverse time migration (LSRTM) algorithm is proposed that increases computational efficiency by up to 10 times by utilizing the blended-sources processing technique. There are three main chapters in this dissertation. In Chapter 2, the multisource LSRTM algorithm is implemented with random time-shift and random source-polarity encoding functions. Numerical tests on the 2D HESS VTI data show that the multisource LSRTM algorithm suppresses migration artifacts, balances the amplitudes, improves image resolution, and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution, and fewer migration artifacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM at similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. In Chapter 3, the multisource LSRTM algorithm is implemented with a frequency-selection encoding strategy and applied to marine streamer data, for which traditional random encoding functions are not applicable. The frequency-selection encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content. Therefore, the receivers can distinguish the wavefield from each shot according to the frequencies. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is
Steady and transient least square solvers for thermal problems
Padovan, Joe
1987-01-01
This paper develops a hierarchical least square solution algorithm for highly nonlinear heat transfer problems. The methodology's capability is such that both steady and transient implicit formulations can be handled. This includes problems arising from highly nonlinear heat transfer systems modeled by either finite-element or finite-difference schemes. The overall procedure developed enables localized updating, iteration, and convergence checking as well as constraint application. The localized updating can be performed at a variety of hierarchical levels, i.e., degree of freedom, substructural, material-nonlinear groups, and/or boundary groups. The choice of such partitions can be made via energy partitioning or nonlinearity levels as well as by user selection. Overall, this leads to extremely robust computational characteristics. To demonstrate the methodology, problems are drawn from nonlinear heat conduction. These are used to quantify the robust capabilities of the hierarchical least square scheme.
Simultaneous least squares fitter based on the Lagrange multiplier method
Guan, Yinghui; Zheng, Yangheng; Zhu, Yong-Sheng
2013-01-01
We developed a least squares fitter for extracting expected physics parameters from correlated experimental data in high energy physics. The fitter considers the correlations among the observables and handles the nonlinearity using linearization during the $\chi^2$ minimization. The method can naturally be extended to analyses with external inputs. By incorporating Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied the fitter to the study of the $D^{0}-\bar{D}^{0}$ mixing parameters as a test-bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties and that the approach is credible.
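The Lagrange-multiplier mechanics behind such a fitter reduce, in the simplest equality-constrained case, to a closed-form correction of the measurement; the two-dimensional example below is illustrative, not the physics analysis itself:

```python
def constrained_ls(m, a, b):
    """Minimize ||x - m||^2 subject to a.x = b via a Lagrange multiplier:
    stationarity of ||x - m||^2 - lam*(a.x - b) gives x = m + (lam/2)*a,
    and substituting into the constraint fixes lam."""
    aa = sum(v * v for v in a)
    am = sum(u * v for u, v in zip(a, m))
    lam = 2.0 * (b - am) / aa
    x = [mi + 0.5 * lam * ai for mi, ai in zip(m, a)]
    return x, lam

m = [1.0, 2.0]           # unconstrained measurement
a, b = [1.0, 1.0], 4.0   # constraint: x0 + x1 = 4
x, lam = constrained_ls(m, a, b)
```

The corrected point is the orthogonal projection of the measurement onto the constraint surface, and the multiplier measures how strongly the constraint pulls on the data; real fitters like the one above iterate this idea with a full covariance matrix.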
Least Square Methods for Solving Systems of Inequalities with Application to an Assignment Problem
1992-11-01
...problem using continuous methods and (2) solving systems of inequalities (and equalities) in a least square sense. The specific assignment problem has ... linear equations; methods to solve these in a least square sense are developed. Common algorithmic approaches to solve nonlinear least square problems are adapted to solve...
Constrained total least squares algorithm for passive location based on bearing-only measurements
Institute of Scientific and Technical Information of China (English)
WANG Ding; ZHANG Li; WU Ying
2007-01-01
A constrained total least squares algorithm for passive location based on bearing-only measurements is presented in this paper. In this algorithm, the nonlinear measurement equations are first transformed into linear equations, and the effect of the measurement noise on the linear equation coefficients is analyzed; the passive location problem can therefore be treated as a constrained total least squares problem. The problem is then converted into an unconstrained optimization problem, which can be solved by the Newton algorithm, and finally an analysis of the location accuracy is given. The simulation results prove that the new algorithm is effective and practicable.
Partial Least Squares tutorial for analyzing neuroimaging data
Directory of Open Access Journals (Sweden)
Patricia Van Roon
2014-09-01
Partial least squares (PLS) has become a respected and meaningful soft modeling analysis technique that can be applied to very large datasets where the number of factors or variables is greater than the number of observations. Current biometric studies (e.g., eye movements, EKG, body movements, EEG) are often of this nature. PLS eliminates the multiple linear regression issues of over-fitting data by finding a few underlying or latent variables (factors) that account for most of the variation in the data. In real-world applications, where linear models do not always apply, PLS can model the non-linear relationship well. This tutorial introduces two PLS methods, PLS Correlation (PLSC) and PLS Regression (PLSR), and their applications in data analysis, which are illustrated with neuroimaging examples. Both methods provide straightforward and comprehensible techniques for determining and modeling relationships between two multivariate data blocks by finding latent variables that best describe the relationships. In the examples, PLSC will analyze the relationship between neuroimaging data such as Event-Related Potential (ERP) amplitude averages from different locations on the scalp and the corresponding behavioural data. Using the same data, PLSR will be used to model the relationship between neuroimaging and behavioural data. This model will be able to predict future behaviour solely from available neuroimaging data. To find latent variables, Singular Value Decomposition (SVD) for PLSC and Non-linear Iterative PArtial Least Squares (NIPALS) for PLSR are implemented in this tutorial. SVD decomposes the large data block into three manageable matrices containing a diagonal set of singular values, as well as left and right singular vectors. For PLSR, NIPALS algorithms are used because they provide a more precise estimation of the latent variables. Mathematica notebooks are provided for each PLS method with clearly labeled sections and subsections.
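The first NIPALS pass for a single-response PLS model (one latent variable) is short enough to write out; the toy collinear dataset below is an assumption for illustration, not the tutorial's neuroimaging data:

```python
def pls1_one_component(X, y):
    """First NIPALS pass of PLS1: weight vector w proportional to X^T y,
    scores t = X w, then an inner regression of y on t."""
    n, p = len(X), len(X[0])
    xm = [sum(row[j] for row in X) / n for j in range(p)]   # column means
    ym = sum(y) / n
    Xc = [[row[j] - xm[j] for j in range(p)] for row in X]  # centered X
    yc = [v - ym for v in y]
    w = [sum(Xc[i][j] * yc[i] for i in range(n)) for j in range(p)]
    norm = sum(v * v for v in w) ** 0.5
    w = [v / norm for v in w]                               # unit weights
    t = [sum(Xc[i][j] * w[j] for j in range(p)) for i in range(n)]
    b = sum(ti * yi for ti, yi in zip(t, yc)) / sum(ti * ti for ti in t)
    return w, b, xm, ym

def pls1_predict(x, w, b, xm, ym):
    t = sum((xi - mi) * wi for xi, mi, wi in zip(x, xm, w))
    return ym + b * t

# toy collinear predictors sharing one latent direction with y
X = [[1.0, 2.0], [2.0, 4.1], [3.0, 5.9], [4.0, 8.0], [5.0, 10.1]]
y = [3.0, 6.1, 8.9, 12.0, 15.1]
w, b, xm, ym = pls1_one_component(X, y)
pred = pls1_predict([3.0, 6.0], w, b, xm, ym)
```

Because both predictors carry the same latent direction, one component already predicts well; further components would be extracted from the deflated matrices in a full NIPALS implementation.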
Partial update least-square adaptive filtering
Xie, Bei
2014-01-01
Adaptive filters play an important role in the fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity in implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster a
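The O(N) LMS update mentioned above can be shown in a few lines, here identifying a hypothetical 2-tap FIR system; the step size and signals are illustrative:

```python
import math

def lms_identify(x, d, taps=2, mu=0.05):
    """LMS adaptive filter: O(taps) work per sample, stepping the weights
    along the negative instantaneous gradient of the squared error."""
    w = [0.0] * taps
    for n in range(taps - 1, len(x)):
        u = x[n - taps + 1:n + 1][::-1]     # most recent inputs first
        e = d[n] - sum(wi * ui for wi, ui in zip(w, u))
        w = [wi + mu * e * ui for wi, ui in zip(w, u)]
    return w

# identify an unknown 2-tap FIR system d[n] = 0.5*x[n] - 0.3*x[n-1]
x = [math.sin(0.7 * n) + math.sin(1.3 * n + 1.0) for n in range(500)]
d = [0.5 * x[n] - (0.3 * x[n - 1] if n > 0 else 0.0) for n in range(500)]
w = lms_identify(x, d)
```

The weights converge toward the true taps [0.5, -0.3]; least-squares algorithms such as RLS would converge in far fewer samples at the cost of O(N^2) work per update.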
Energy Technology Data Exchange (ETDEWEB)
Matsuo, Takasuke; Tanaka, Nobuki; Fukai, Mari; Yamamuro, Osamu; Inaba, Akira; Ichikawa, Mizuhiko
2003-06-26
A non-linear least-squares method of analysis has been developed for the heat capacities of solids undergoing phase transitions. It utilizes harmonic heat capacity functions corrected for thermal expansion. The unique feature of the method is that it incorporates the effect of a gradual phase transition in the fitting function for the low temperature phase. Compact expressions approximating the Debye function and the Ising model heat capacity function have been derived and presented in practical forms for use in the Kaleidagraph software. The method has been tested on the heat capacity of sodium chloride (which lacks a phase transition) and tri-rubidium deuterium disulfate (Rb{sub 3}D(SO{sub 4}){sub 2}, TRDS) which undergoes a phase transition at 78.5 K in the deuterated form but not in the normal hydrogenous form. The excess entropy based on the fitting was 5.27 J K{sup -1} mol{sup -1}, close enough to R ln 2=5.76 J K{sup -1} mol{sup -1} to suggest an order-disorder mechanism for the phase transition.
Duarte, Janaína; Pacheco, Marcos T. T.; Villaverde, Antonio Balbin; Machado, Rosangela Z.; Zângaro, Renato A.; Silveira, Landulfo
2010-07-01
Toxoplasmosis is an important zoonosis in public health because domestic cats are the main agents responsible for the transmission of this disease in Brazil. We investigate a method for diagnosing toxoplasmosis based on Raman spectroscopy. Dispersive near-infrared Raman spectra are used to quantify anti-Toxoplasma gondii (IgG) antibodies in blood sera from domestic cats. An 830-nm laser is used for sample excitation, and a dispersive spectrometer is used to detect the Raman scattering. A serological test is performed in all serum samples by the enzyme-linked immunosorbent assay (ELISA) for validation. Raman spectra are taken from 59 blood serum samples and a quantification model is implemented based on partial least squares (PLS) to quantify the sample's serology by Raman spectra compared to the results provided by the ELISA test. Based on the serological values provided by the Raman/PLS model, diagnostic parameters such as sensitivity, specificity, accuracy, positive prediction values, and negative prediction values are calculated to discriminate negative from positive samples, obtaining 100, 80, 90, 83.3, and 100%, respectively. Raman spectroscopy, associated with the PLS, is promising as a serological assay for toxoplasmosis, enabling fast and sensitive diagnosis.
Nakagawa, Hiroshi; Tajima, Takahiro; Kano, Manabu; Kim, Sanghong; Hasebe, Shinji; Suzuki, Tatsuya; Nakagami, Hiroaki
2012-04-17
The usefulness of infrared-reflection absorption spectroscopy (IR-RAS) for the rapid measurement of residual drug substances without sampling was evaluated. To realize highly accurate rapid measurement, locally weighted partial least-squares (LW-PLS) with a new weighting technique was developed. LW-PLS is an adaptive method that builds a calibration model on demand by using a database whenever prediction is required. By adding more weight to samples closer to a query, LW-PLS can achieve higher prediction accuracy than PLS. In this study, a new weighting technique is proposed to further improve the prediction accuracy of LW-PLS. The root-mean-square error of prediction (RMSEP) of the IR-RAS spectra analyzed by LW-PLS with the new weighting technique was compared with that analyzed by PLS and locally weighted regression (LWR). The RMSEP of LW-PLS with the proposed weighting technique was about 36% and 14% smaller than that of PLS and LWR, respectively, when ibuprofen was a residual drug substance. Similarly, LW-PLS with the weighting technique was about 39% and 24% better than PLS and LWR in RMSEP, respectively, when magnesium stearate was a residual excipient. The combination of IR-RAS and LW-PLS with the proposed weighting technique is a very useful technique for the rapid measurement of residual drug substances.
Brestrich, Nina; Briskot, Till; Osberghaus, Anna; Hubbuch, Jürgen
2014-07-01
Selective quantification of co-eluting proteins in chromatography is usually performed by offline analytics. This is time-consuming and can lead to late detection of irregularities in chromatography processes. To overcome this analytical bottleneck, a methodology for selective protein quantification in multicomponent mixtures by means of spectral data and partial least squares regression was presented in two previous studies. In this paper, a powerful integration of software and chromatography hardware is introduced that enables the applicability of this methodology for a selective inline quantification of co-eluting proteins in chromatography. A specific setup consisting of a conventional liquid chromatography system, a diode array detector, and a software interface to Matlab® was developed. The established tool for selective inline quantification was successfully applied for a peak deconvolution of a co-eluting ternary protein mixture consisting of lysozyme, ribonuclease A, and cytochrome c on SP Sepharose FF. Compared to common offline analytics based on collected fractions, no loss of information regarding the retention volumes and peak flanks was observed. A comparison between the mass balances of both analytical methods showed that the inline quantification tool can be applied for a rapid determination of pool yields. Finally, the achieved inline peak deconvolution was successfully applied to make product purity-based real-time pooling decisions. This makes the established tool for selective inline quantification a valuable approach for inline monitoring and control of chromatographic purification steps and just-in-time reaction to process irregularities.
Combinatorics of least-squares trees.
Mihaescu, Radu; Pachter, Lior
2008-09-01
A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.
Iterative methods for weighted least-squares
Energy Technology Data Exchange (ETDEWEB)
Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)
1996-12-31
A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.
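A minimal sketch of the numerical issue discussed above: with weights spanning many orders of magnitude, forming the normal equations squares the condition number, so a backward-stable solver applied to the row-scaled system is preferable. This is illustrative synthetic data, not the authors' reorthogonalized iterative algorithm:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true                       # consistent system: exact solution known
w = 10.0 ** rng.uniform(-8, 8, 20)   # weights spanning 16 orders of magnitude

# Row-scale by sqrt(w) and use a backward-stable SVD-based solver rather than
# forming the normal equations A^T W A, which squares the condition number.
sw = np.sqrt(w)
x_hat, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
print(x_hat)
```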
Least-squares fitting Gompertz curve
Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf
2004-08-01
In this paper we consider the least-squares (LS) fitting of the Gompertz curve to the given nonconstant data (pi,ti,yi), i=1,...,m, m≥3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
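A least-squares Gompertz fit of the kind analyzed above can be sketched with a general-purpose solver. The data here are synthetic and noise-free, and the initial approximation is an ad hoc guess, not the authors' recommended choice:

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    # a: upper asymptote, b: displacement, c: growth-rate parameter
    return a * np.exp(-b * np.exp(-c * t))

t = np.linspace(0.0, 10.0, 30)
y = gompertz(t, 5.0, 3.0, 0.8)       # noise-free synthetic data

# A simple initial approximation: asymptote ~ max(y), b and c of order one
p0 = [y.max(), 1.0, 1.0]
popt, _ = curve_fit(gompertz, t, y, p0=p0)
print(np.round(popt, 3))             # should recover a=5.0, b=3.0, c=0.8
```

With noisy or poorly spaced data the existence conditions discussed in the paper become relevant, and a good initial approximation matters considerably more.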
Consistent Partial Least Squares Path Modeling
Dijkstra, Theo K.; Henseler, Jörg
2015-01-01
This paper resumes the discussion in information systems research on the use of partial least squares (PLS) path modeling and shows that the inconsistency of PLS path coefficient estimates in the case of reflective measurement can have adverse consequences for hypothesis testing. To remedy this, the
Time Scale in Least Square Method
Directory of Open Access Journals (Sweden)
Özgür Yeniay
2014-01-01
Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time scales try to build a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives. The delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. Within the scope of this study, we consider obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of time scales and obtained the coefficients of the model. Two sets of coefficients arise for the same model, originating from the forward and backward jump operators, and they differ from each other, because the vertical deviations between the regression equations and the observed values differ under the forward and backward jump operators. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We think that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.
Energy Technology Data Exchange (ETDEWEB)
Marti-Aluja, Idoia; Ruisanchez, Itziar [Analytical and Organic Chemistry Department, Universitat Rovira i Virgili, Marcelli Domingo s/n, Campus Sescelades, 43007 Tarragona (Spain); Larrechi, M. Soledad, E-mail: mariasoledad.larrechi@urv.cat [Analytical and Organic Chemistry Department, Universitat Rovira i Virgili, Marcelli Domingo s/n, Campus Sescelades, 43007 Tarragona (Spain)
2013-01-14
Highlights:
- The structure of insulin can be changed via interaction with antiretroviral drugs.
- The chemical interaction promotes the formation of aggregates.
- This drug effect was evaluated by MCR-ALS coupled to IR spectroscopy.
- Formation of aggregates was favourable if drugs were able to form hydrogen bonds.
- Higher drug concentrations favoured formation of amorphous aggregates.
Abstract: Quantification of the effect of antiretroviral drugs on the insulin aggregation process is an important area of research due to the serious metabolic diseases observed in AIDS patients after prolonged treatment with these drugs. In this work, multivariate curve resolution alternating least squares (MCR-ALS) was applied to infrared monitoring of the insulin aggregation process in the presence of three antiretroviral drugs to quantify their effect. To evidence concentration dependence in this process, mixtures at two different insulin:drug molar ratios were used. The interaction between insulin and each drug was analysed by {sup 1}H NMR spectroscopy. In all cases, the aggregation process was monitored during 45 min by infrared spectroscopy. The aggregates were further characterised by scanning electron microscopy (SEM). MCR-ALS provided the spectral and concentration profiles of the different insulin-drug conformations that are involved in the process. Their feasible band boundaries were calculated using the MCR-BANDS methodology. The kinetic profiles describe the aggregation pathway and the spectral profiles characterise the conformations involved. The retrieved results show that each of the three drugs modifies insulin conformation in a different way, promoting the formation of aggregates. Ritonavir shows the strongest promotion of aggregation, followed by efavirenz and zidovudine. In the studied concentration range, concentration
Budevska, Boiana O
2009-09-01
Target partial least squares (PLS) is applied to Fourier transform infrared-attenuated total reflection (FT-IR-ATR) hyperspectral images of plant leaf surface treated with crop protection products. Detection of active ingredient is demonstrated at application rates of 50 g active ingredient per hectare. This sensitivity could not be achieved without the application of multivariate analysis. Quantitative information appears to be easily recovered through analysis of combined images with known and unknown amounts of active ingredient.
Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang
2016-03-01
An analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP to form two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparisons and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain a smaller root mean square error of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS.
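For reference, the baseline CLS model against which LSP and TLSP are compared solves the linear Beer-Lambert relation A = CS for the concentrations. A minimal sketch with made-up spectra (the component count and data are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
# Hypothetical pure-component spectra S (2 components, 100 wavelengths)
S = np.abs(rng.standard_normal((2, 100)))
C_true = np.array([[0.3, 0.7], [0.5, 0.5], [0.9, 0.1]])  # mixture compositions
A = C_true @ S   # Beer-Lambert: absorbance linear in concentration

# CLS step: solve A ~ C S for the concentrations C in the least-squares sense
C_hat = np.linalg.lstsq(S.T, A.T, rcond=None)[0].T
print(np.round(C_hat, 6))
```

The paper's point is precisely that this linear model breaks down when unmodeled (e.g. H-bonded) components make the absorbance-concentration relationship nonlinear.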
Diagonal loading least squares time delay estimation
Institute of Scientific and Technical Information of China (English)
LI Xuan; YAN Shefeng; MA Xiaochuan
2012-01-01
Least squares (LS) time delay estimation is a classical and effective method. However, its performance degrades severely in low signal-to-noise ratio (SNR) scenarios due to the instability of the matrix inversion. To solve this problem, diagonal loading least squares (DL-LS) is proposed, in which a positive definite matrix is added to the matrix being inverted. Furthermore, the shortcoming of fixed diagonal loading is analyzed from the point of view of regularization: as the tolerance to low SNR is increased, accuracy is decreased. This problem is resolved by reloading. The reciprocal of the primary estimate is introduced as the diagonal loading, which leads to small loading at the time of arrival and larger loading at other times. Simulations and a pool experiment show that the algorithm has better performance.
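The core diagonal-loading step can be sketched as follows. A fixed loading constant is used for illustration; the paper's reloading scheme, which derives the loading from the primary estimate, is not reproduced here:

```python
import numpy as np

def dl_ls(A, b, load=1e-2):
    """Diagonally loaded least squares: x = (A^T A + load*I)^(-1) A^T b.
    The loading stabilises the inversion when A^T A is ill-conditioned."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + load * np.eye(n), A.T @ b)

rng = np.random.default_rng(2)
A = rng.standard_normal((50, 5))
A[:, 4] = A[:, 3] + 1e-8 * rng.standard_normal(50)  # nearly collinear columns
b = rng.standard_normal(50)
x = dl_ls(A, b)
print(np.linalg.norm(x))   # remains bounded despite the near-singularity
```

Without the loading term, the near-singular normal matrix would make the solution blow up along the degenerate direction.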
Employee Income Statistics Analysis Based on Least Squares
Institute of Scientific and Technical Information of China (English)
XU Xianhong
2015-01-01
Based on the least squares model and relevant statistical data for 2001-2007, this paper analyzes the wage level index, growth rate, and development trend of Chinese employees and reports the current state of employee income in China. Using descriptive statistics and regression analysis, it surveys employee income by region for 2001-2007, identifies the problems in employee income, and reaches the basic conclusion that regional differences exist in average employee income: overall, the average wage level of employed workers in China is relatively low, and the wage gap between employees in different industries and sectors is relatively large. Finally, it gives suggestions and countermeasures for raising average employee income and eliminating regional income differences: accelerate industrial transformation and achieve leapfrog development; increase both the quantity and quality of employment and further consolidate the basis for income growth of urban and rural residents; open up inter-regional channels for capital and other factors so that capital, labor and other resources can transfer smoothly between regions; increase employment opportunities in economically less-developed regions and raise their human capital; and further deepen the reform of wage systems in enterprises and public institutions, break industry monopolies, and narrow the wage gap between industries.
Meshfree First-order System Least Squares
Institute of Scientific and Technical Information of China (English)
Hugh R.MacMillan; Max D.Gunzburger; John V.Burkardt
2008-01-01
We prove convergence for a meshfree first-order system least squares (FOSLS) partition of unity finite element method (PUFEM). Essentially, by virtue of the partition of unity, local approximation gives rise to global approximation in H(div) ∩ H(curl). The FOSLS formulation yields local a posteriori error estimates to guide the judicious allotment of new degrees of freedom to enrich the initial point set in a meshfree discretization. Preliminary numerical results are provided and remaining challenges are discussed.
Efficient least-squares basket-weaving
Winkel, B.; Flöer, L.; Kraus, A.
2012-11-01
We report on a novel method to solve the basket-weaving problem. Basket-weaving is a technique that is used to remove scan-line patterns from single-dish radio maps. The new approach applies linear least-squares and works on gridded maps from arbitrarily sampled data, which greatly improves computational efficiency and robustness. It also allows masking of bad data, which is useful for cases where radio frequency interference is present in the data. We evaluate the algorithms using simulations and real data obtained with the Effelsberg 100-m telescope.
Least-squares Gaussian beam migration
Yuan, Maolin; Huang, Jianping; Liao, Wenyuan; Jiang, Fuyou
2017-02-01
A theory of least-squares Gaussian beam migration (LSGBM) is presented to optimally estimate subsurface reflectivity. In the iterative inversion scheme, a Gaussian beam (GB) propagator is used as the kernel of linearized forward modeling (demigration) and its adjoint (migration). Born approximation based GB demigration relies on the calculation of Green’s function by a Gaussian-beam summation for the downward and upward wavefields. The adjoint operator of GB demigration accounts for GB prestack depth migration under the cross-correlation imaging condition, where seismic traces are processed one by one for each shot. A numerical test on a point-diffractor model suggests that GB demigration can successfully simulate primary scattered data, while migration (adjoint) can yield a corresponding image. The GB demigration/migration algorithms are used in the least-squares migration scheme to deblur conventional migrated images. The proposed LSGBM is illustrated with two synthetic datasets, for a four-layer model and the Marmousi2 model. Numerical results show that LSGBM, compared to migration (adjoint) with GBs, produces images with more balanced amplitude, higher resolution and even fewer artifacts. Additionally, the LSGBM shows a robust convergence rate.
Total least squares for anomalous change detection
Energy Technology Data Exchange (ETDEWEB)
Theiler, James P [Los Alamos National Laboratory; Matsekh, Anna M [Los Alamos National Laboratory
2010-01-01
A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.
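The TLS building block underlying the framework above can be sketched via the SVD of the augmented matrix. This is the generic regression form on synthetic data, not the chronochrome or whitening machinery of the paper:

```python
import numpy as np

def tls(A, b):
    """Total least squares: errors allowed in both A and b. The solution
    comes from the right singular vector of [A | b] associated with the
    smallest singular value."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]
    return -v[:n] / v[n]

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 2))
x_true = np.array([2.0, -1.0])
b = A @ x_true            # noise-free case: TLS coincides with OLS
print(tls(A, b))          # recovers x_true
```

In the anomalous-change setting, the residuals of such a regression between the two images (rather than the coefficients) are what flag the anomalous pixels.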
A unified approach for least-squares surface fitting
Institute of Scientific and Technical Information of China (English)
ZHU; Limin; DING; Han
2004-01-01
This paper presents a novel approach for the least-squares fitting of a complex surface to measured 3D coordinate points by adjusting its location and/or shape. For a point expressed in the machine reference frame and a deformable smooth surface represented in its own model frame, a signed point-to-surface distance function is defined, and its increment with respect to the differential motion and differential deformation of the surface is derived. On this basis, localization, surface reconstruction and geometric variation characterization are formulated as a unified nonlinear least-squares problem defined on the product space SE(3)×R^m. By using the Levenberg-Marquardt method, a sequential approximation surface fitting algorithm is developed. It has the advantages of simple implementation, computational efficiency and robustness. Applications confirm the validity of the proposed approach.
Multiples least-squares reverse time migration
Zhang, D. L.
2013-01-01
To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM), which transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from the Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.
Least square regularized regression in sum space.
Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu
2013-04-01
This paper proposes a least squares regularized regression algorithm in the sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency components of the target function with large- and small-scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of the basic RKHSs. For the sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we trade off the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.
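The flavor of the algorithm, regularized least squares with the sum of a large-scale and a small-scale Gaussian kernel, can be sketched as follows. The kernel scales, regularization constant and target function are illustrative assumptions, not the paper's tuned parameters:

```python
import numpy as np

def gauss_kernel(x, y, s):
    return np.exp(-(x[:, None] - y[None, :])**2 / (2.0 * s**2))

rng = np.random.default_rng(4)
x = np.sort(rng.uniform(0.0, 1.0, 80))
y = np.sin(2*np.pi*x) + 0.3*np.sin(40*np.pi*x)   # low- plus high-frequency target

# Regularized LS with the *sum* of a large-scale and a small-scale kernel,
# so both components of the target can be represented
K = gauss_kernel(x, x, 0.3) + gauss_kernel(x, x, 0.02)
lam = 1e-6
alpha = np.linalg.solve(K + lam * len(x) * np.eye(len(x)), y)
y_hat = K @ alpha
print(np.sqrt(np.mean((y_hat - y)**2)))   # small training RMSE
```

A single kernel with either scale alone would have to compromise between the smooth trend and the fine oscillation; the sum kernel accommodates both.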
Cichocki, A; Unbehauen, R
1994-01-01
In this paper a new class of simplified low-cost analog artificial neural networks with on-chip adaptive learning algorithms is proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well-known algorithms: the row-action projection Kaczmarz algorithm and/or the LMS (Adaline) Widrow-Hoff algorithms. The algorithms can be applied to any problem which can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
Karami, K; Soltanzadeh, M M
2008-01-01
Using measured radial velocity data of nine double lined spectroscopic binary systems NSV 223, AB And, V2082 Cyg, HS Her, V918 Her, BV Dra, BW Dra, V2357 Oph, and YZ Cas, we find corresponding orbital and spectroscopic elements via the method introduced by Karami & Mohebi (2007a) and Karami & Teimoorinia (2007). Our numerical results are in good agreement with those obtained by others using more traditional methods.
Stability Analysis for Regularized Least Squares Regression
Rudin, Cynthia
2005-01-01
We discuss stability for a class of learning algorithms with respect to noisy labels. The algorithms we consider are for regression, and they involve the minimization of regularized risk functionals, such as L(f) := 1/N sum_i (f(x_i)-y_i)^2+ lambda ||f||_H^2. We shall call the algorithm `stable' if, when y_i is a noisy version of f*(x_i) for some function f* in H, the output of the algorithm converges to f* as the regularization term and noise simultaneously vanish. We consider two flavors of...
Elastic least-squares reverse time migration
Feng, Zongcai
2017-03-08
We use elastic least-squares reverse time migration (LSRTM) to invert for the reflectivity images of P- and S-wave impedances. Elastic LSRTM solves the linearized elastic-wave equations for forward modeling and the adjoint equations for backpropagating the residual wavefield at each iteration. Numerical tests on synthetic data and field data reveal the advantages of elastic LSRTM over elastic reverse time migration (RTM) and acoustic LSRTM. For our examples, the elastic LSRTM images have better resolution and amplitude balancing, fewer artifacts, and less crosstalk compared with the elastic RTM images. The images are also better focused and have better reflector continuity for steeply dipping events compared to the acoustic LSRTM images. Similar to conventional least-squares migration, elastic LSRTM also requires an accurate estimation of the P- and S-wave migration velocity models. However, the problem remains that, when there are moderate errors in the velocity model and strong multiples, LSRTM will produce migration noise stronger than that seen in the RTM images.
Skeletonized Least Squares Wave Equation Migration
Zhan, Ge
2010-10-17
The theory for skeletonized least squares wave equation migration (LSM) is presented. The key idea is that, for an assumed velocity model, the source-side Green's function and the geophone-side Green's function are computed by a numerical solution of the wave equation. Only the early arrivals of these Green's functions are saved and skeletonized to form the migration Green's function (MGF) by convolution. Then the migration image is obtained by a dot product between the recorded shot gathers and the MGF for every trial image point. The key to an efficient implementation of iterative LSM is that at each conjugate gradient iteration, the MGF is reused and no new finite-difference (FD) simulations are needed to get the updated migration image. It is believed that this procedure combined with phase-encoded multi-source technology will allow for the efficient computation of wave equation LSM images in less time than that of conventional reverse time migration (RTM).
Neural Network Inverse Adaptive Controller Based on Davidon Least Square
Institute of Scientific and Technical Information of China (English)
(author not listed)
2000-01-01
The general neural network inverse adaptive controller has two flaws: the first is its slow convergence speed; the second is its invalidity for non-minimum phase systems. These defects limit the scope in which the neural network inverse adaptive controller can be used. We employ Davidon least squares in training the multi-layer feedforward neural network used to approximate the inverse model of the plant in order to expedite the convergence, and then, through constructing a pseudo-plant, a neural network inverse adaptive controller is put forward which is still effective for nonlinear non-minimum phase systems. The simulation results show the validity of this scheme.
Penalized Weighted Least Squares for Outlier Detection and Robust Regression
Gao, Xiaoli; Fang, Yixin
2016-01-01
To conduct regression analysis for data contaminated with outliers, many approaches have been proposed for simultaneous outlier detection and robust regression, including the approach proposed in this manuscript. This new approach is called "penalized weighted least squares" (PWLS). By assigning each observation an individual weight and incorporating a lasso-type penalty on the log-transformation of the weight vector, the PWLS is able to perform outlier detection and robust regression simultaneously.
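PWLS itself penalizes the log-weights with a lasso-type term; as a simpler, classical point of comparison for simultaneously downweighting outliers during regression, an iteratively reweighted least squares sketch with Huber weights (not the authors' method) looks like this:

```python
import numpy as np

def irls_huber(X, y, c=1.345, iters=20):
    """Iteratively reweighted least squares with Huber weights: a classical
    robust-regression sketch (NOT the paper's PWLS, which instead puts a
    lasso-type penalty on the log-weights)."""
    beta = np.linalg.lstsq(X, y, rcond=None)[0]
    for _ in range(iters):
        r = y - X @ beta
        s = np.median(np.abs(r - np.median(r))) / 0.6745 + 1e-12  # robust scale
        u = np.abs(r) / s
        w = np.where(u <= c, 1.0, c / u)     # downweight large residuals
        sw = np.sqrt(w)
        beta = np.linalg.lstsq(X * sw[:, None], y * sw, rcond=None)[0]
    return beta

rng = np.random.default_rng(5)
X = np.column_stack([np.ones(100), rng.uniform(0.0, 10.0, 100)])
y = X @ np.array([1.0, 2.0]) + 0.1 * rng.standard_normal(100)
y[:5] += 50.0                                # five gross outliers
print(np.round(irls_huber(X, y), 2))         # close to (1.0, 2.0)
```

Both ideas share the mechanism of per-observation weights; PWLS additionally makes the weights explicit optimization variables so that zero-ish weights identify outliers.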
Temperature prediction control based on least squares support vector machines
Institute of Scientific and Technical Information of China (English)
Bin LIU; Hongye SU; Weihua HUANG; Jian CHU
2004-01-01
A prediction control algorithm is presented based on a least squares support vector machines (LS-SVM) model for a class of complex systems with strong nonlinearity. The nonlinear off-line model of the controlled plant is built by LS-SVM with a radial basis function (RBF) kernel. In the process of system running, the off-line model is linearized at each sampling instant, and the generalized prediction control (GPC) algorithm is employed to implement the prediction control for the controlled plant. The obtained algorithm is applied to a boiler temperature control system with complicated nonlinearity and large time delay. The results of the experiment verify the effectiveness and merit of the algorithm.
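The LS-SVM model underlying such a controller reduces training to a single linear system (Suykens' KKT formulation). A minimal regression sketch with an RBF kernel and illustrative hyperparameters, not the boiler model of the paper:

```python
import numpy as np

def rbf(Xa, Xb, s):
    d2 = np.sum((Xa[:, None, :] - Xb[None, :, :])**2, axis=-1)
    return np.exp(-d2 / (2.0 * s**2))

def lssvm_fit(X, y, gamma=1e4, s=0.5):
    """Train an LS-SVM regressor by solving the (n+1)x(n+1) linear KKT
    system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(X)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, s) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                   # bias b, support values alpha

rng = np.random.default_rng(6)
X = rng.uniform(-3.0, 3.0, (60, 1))
y = np.sinc(X[:, 0])                         # smooth nonlinear toy target
b, alpha = lssvm_fit(X, y)
pred = rbf(X, X, 0.5) @ alpha + b
print(np.sqrt(np.mean((pred - y)**2)))       # small training RMSE
```

Unlike the standard SVM, there is no quadratic program: equality constraints and a squared loss turn training into plain linear algebra, which is what makes LS-SVM attractive for online and control applications.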
Institute of Scientific and Technical Information of China (English)
CAI Kai; XIANG Zhangmin; ZHOU Shuping; GENG Zhaoliang; GE Yonghui; ZHANG Jie
2013-01-01
Using multivariate statistical analysis, we conducted partial least squares regression analysis (PLSRA) of 8 kinds of alkaloids in 124 flue-cured tobacco samples collected in Guizhou in 2010 and 2011. The results show that nornicotine, nicotyrine, anatabine and cotinine have the greatest influence on nicotine content. The partial least squares regression equation was used to predict nicotine content: for the 119 samples of the training set, the relative standard deviation between predicted and measured values was less than 17.04%, and for 109 of these samples it was less than 10.00%. Finally, the 5 samples of the test set were used to further validate the equation; the relative standard deviation between predicted and measured values was less than 6.72%, and a good fit was obtained.
Institute of Scientific and Technical Information of China (English)
CHEN Ning; YU Dejie; LV Hui; XIA Baizhan
2014-01-01
In order to improve the accuracy of simulation analysis of plate structural-acoustic coupled systems, the finite element-least square point interpolation method (FE-LSPIM) was extended to solve plate structural-acoustic coupled problems, and a coupled FE-LSPIM for plate structural-acoustic coupled systems was proposed. With the proposed method, the shape functions of the finite element method and the least square point interpolation are used for local approximation, inheriting the element compatibility of the finite element method and the quadratic polynomial completeness of LSPIM. Thus, the accuracy of simulation analysis can be improved. A numerical example of a box structural-acoustic coupled model is presented. Its results show that FE-LSPIM achieves higher accuracy than FEM and smoothed FEM for the simulation of plate structural-acoustic coupled problems.
Götterdämmerung over total least squares
Malissiovas, G.; Neitzel, F.; Petrovic, S.
2016-06-01
The traditional way of solving non-linear least squares (LS) problems in Geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have been developed in the past also by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. Therefore, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all these four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical with those resulting from the LS approach. As a by-product of this research two novel approaches are presented for the TLS solutions of fitting a straight line to 3D and the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and also numerically compared to published iterative solutions.
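One of the four problems, fitting a straight line by TLS, indeed has a direct solution via a singular value (equivalently, eigenvalue) computation. A minimal 2D sketch on exact data:

```python
import numpy as np

def tls_line_2d(x, y):
    """Orthogonal-distance (TLS) line fit: the line direction is the leading
    right singular vector of the centred data matrix, i.e. an eigenvalue
    problem as in the characteristic-equation approach discussed above."""
    P = np.column_stack([x, y])
    c = P.mean(axis=0)
    _, _, Vt = np.linalg.svd(P - c)
    return c, Vt[0], Vt[1]     # centroid, line direction, line normal

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0              # points exactly on y = 2x + 1
c, d, n = tls_line_2d(x, y)
print(round(d[1] / d[0], 6))   # slope of the fitted line (2.0 here)
```

Because errors are minimized orthogonally to the line rather than in y only, the result is invariant under rotation of the coordinate frame, which is the property that motivates TLS in geodetic applications.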
Yang, Yuan-Gui; Zhang, Ji; Zhao, Yan-Li; Zhang, Jin-Yu; Wang, Yuan-Zhong
2017-07-01
A rapid method was developed and validated by ultra-performance liquid chromatography-triple quadrupole mass spectrometry with ultraviolet detection (UPLC-UV-MS) for simultaneous determination of Paris saponin I, Paris saponin II, Paris saponin VI and Paris saponin VII. Partial least squares discriminant analysis (PLS-DA) based on UPLC and Fourier transform infrared (FT-IR) spectroscopy was employed to evaluate Paris polyphylla var. yunnanensis (PPY) at different harvesting times. Quantitative determination implied that the varying contents of bioactive compounds at different harvesting times may lead to different pharmacological effects; the average content of total saponins for PPY harvested at 8 years was higher than that of the other samples. The PLS-DA of FT-IR spectra performed better than that of UPLC for discrimination of PPY from different harvesting times. Copyright © 2016 John Wiley & Sons, Ltd.
DEFF Research Database (Denmark)
Hedegaard, Martin; Krafft, Christoph; Ditzel, Henrik J
2010-01-01
discarded as they showed much smaller differences between the two cell lines compared to cytoplasm spectra. Partial least squares-discriminant analysis (PLS-DA) was applied to distinguish the two cell lines. A cross-validated PLS-DA resulted in 92% correctly classified samples. Spectral differences were...... assigned to a higher unsaturated fatty acid content in the metastatic vs nonmetastatic cell line. Our study demonstrates the unique ability of Raman spectroscopy to distinguish minute differences at the subcellular level and yield new biological information. Our study is the first to demonstrate...... the association between polyunsaturated fatty acid content and metastatic ability in this unique cell model system and is in agreement with previous studies on this topic....
Institute of Scientific and Technical Information of China (English)
李浩瑾; 李俊杰; 康飞
2013-01-01
A fragility analysis methodology based on the least squares support vector machine (LS-SVM) was presented. Using the results of finite element analysis as learning samples, an LS-SVM model was established. Integrating the learned model with Monte Carlo simulation, the fragility curve was obtained with both high accuracy and high efficiency. Using the proposed methodology, the fragility of the anti-sliding stability of a gravity dam section was analyzed and the sensitivity of the key parameters was studied. The results showed that the dam section can maintain dynamic anti-sliding stability under the design earthquake, with a considerable safety margin.
Kernel-based least squares policy iteration for reinforcement learning.
Xu, Xin; Hu, Dewen; Lu, Xicheng
2007-07-01
In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to previous works on approximate RL methods, KLSPI makes two advances to eliminate the main difficulties of existing methods. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing-up control of a double-link underactuated pendulum called the acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating
A least-squares computational "tool kit". Nuclear data and measurements series
Energy Technology Data Exchange (ETDEWEB)
Smith, D.L.
1993-04-01
The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.
Experiments on Coordinate Transformation based on Least Squares and Total Least Squares Methods
Tunalioglu, Nursu; Mustafa Durdag, Utkan; Hasan Dogan, Ali; Erdogan, Bahattin; Ocalan, Taylan
2016-04-01
Coordinate transformation is an important problem in the geodesy discipline. Variations in the stochastic and functional models of the transformation problem lead to different estimation results. The least-squares (LS) method is generally implemented to solve this problem. In its stochastic model, the LS method treats only one of the coordinate data groups as erroneous, although in the transformation problem all the data are erroneous. In contrast to the traditional LS method, the total least squares (TLS) method takes into account the errors in all the variables of the transformation; this is the so-called errors-in-variables (EIV) model. In the last decades, the TLS method has been implemented to solve the transformation problem. In this context, it is important to determine which method is more accurate. In this study, LS and TLS methods have been implemented on different 2D and 3D geodetic networks with different simulation scenarios. The first results show that the translation parameters are affected more than the rotation and scale parameters. Although the TLS method accounts for the errors in both coordinate groups, the parameters estimated by both methods differ from the simulated values.
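The classical LS estimate of the 2D similarity (Helmert) transformation discussed above can be sketched as follows; errors are assumed in the target coordinates only, i.e. this is the LS setting, not the EIV/TLS one, and the point set and parameters below are made up for illustration:

```python
import numpy as np

def similarity_2d(src, dst):
    """Classical LS estimate of the 4-parameter 2D similarity transform
    X = tx + a*x - b*y,  Y = ty + b*x + a*y,
    with a = s*cos(theta), b = s*sin(theta); linear in (tx, ty, a, b).
    """
    n = src.shape[0]
    A = np.zeros((2 * n, 4))
    A[0::2, 0] = 1.0                         # tx enters the X equations
    A[1::2, 1] = 1.0                         # ty enters the Y equations
    A[0::2, 2] = src[:, 0]; A[0::2, 3] = -src[:, 1]
    A[1::2, 2] = src[:, 1]; A[1::2, 3] = src[:, 0]
    obs = dst.reshape(-1)                    # interleaved [X0, Y0, X1, Y1, ...]
    tx, ty, a, b = np.linalg.lstsq(A, obs, rcond=None)[0]
    return tx, ty, np.hypot(a, b), np.arctan2(b, a)

# Synthetic check: transform known points, then recover the parameters.
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [2.0, 3.0]])
s, th = 2.0, np.deg2rad(30.0)
a, b = s * np.cos(th), s * np.sin(th)
dst = np.column_stack([1.0 + a * src[:, 0] - b * src[:, 1],
                       -1.0 + b * src[:, 0] + a * src[:, 1]])
tx, ty, scale, theta = similarity_2d(src, dst)
```

Because the model is linear in (tx, ty, a, b), no linearization or iteration is needed; this is what makes the direct comparisons with TLS variants in such studies straightforward.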
Dinç, Erdal; Ertekin, Zehra Ceren
2016-01-01
An application of parallel factor analysis (PARAFAC) and three-way partial least squares (3W-PLS1) regression models to ultra-performance liquid chromatography-photodiode array detection (UPLC-PDA) data with co-eluted peaks in the same wavelength and time regions was described for the multicomponent quantitation of hydrochlorothiazide (HCT) and olmesartan medoxomil (OLM) in tablets. Three-way dataset of HCT and OLM in their binary mixtures containing telmisartan (IS) as an internal standard was recorded with a UPLC-PDA instrument. Firstly, the PARAFAC algorithm was applied for the decomposition of three-way UPLC-PDA data into the chromatographic, spectral and concentration profiles to quantify the concerned compounds. Secondly, 3W-PLS1 approach was subjected to the decomposition of a tensor consisting of three-way UPLC-PDA data into a set of triads to build 3W-PLS1 regression for the analysis of the same compounds in samples. For the proposed three-way analysis methods in the regression and prediction steps, the applicability and validity of PARAFAC and 3W-PLS1 models were checked by analyzing the synthetic mixture samples, inter-day and intra-day samples, and standard addition samples containing HCT and OLM. Two different three-way analysis methods, PARAFAC and 3W-PLS1, were successfully applied to the quantitative estimation of the solid dosage form containing HCT and OLM. Regression and prediction results provided from three-way analysis were compared with those obtained by traditional UPLC method.
Least square estimation of phase, frequency and PDEV
Danielson, Magnus; Rubiola, Enrico
2016-01-01
The Omega-preprocessing was introduced to improve phase noise rejection by using a least square algorithm. The associated variance is the PVAR which is more efficient than MVAR to separate the different noise types. However, unlike AVAR and MVAR, the decimation of PVAR estimates for multi-tau analysis is not possible if each counter measurement is a single scalar. This paper gives a decimation rule based on two scalars, the processing blocks, for each measurement. For the Omega-preprocessing, this implies the definition of an output standard as well as hardware requirements for performing high-speed computations of the blocks.
Machado, A. E. de A.; da Gama, A. A. de S.; de Barros Neto, B.
2011-09-01
A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the AM1 values of the HOMO-LUMO energy gap, the ground-state dipole moment and the HOMO energy, together with the number of π-electrons. The regression equation predicts quite well the static β values for the molecules investigated and can be used to model new organic-based materials with enhanced nonlinear responses.
Nguyen, Huy Truong; Lee, Dong-Kyu; Lee, Won Jun; Lee, GwangJin; Yoon, Sang Jun; Shin, Byong-Kyu; Nguyen, Minh Duc; Park, Jeong Hill; Lee, Jeongmi; Kwon, Sung Won
2016-02-15
Phylogenetic and metabolomic approaches have long been employed to study evolutionary relationships among plants. Nonetheless, few studies have examined the difference in metabolites within a clade and between clades of the phylogenetic tree. We attempted to relate phylogenetic studies to metabolomics using stepwise partial least squares-discriminant analysis (PLS-DA) for the genus Panax. Samples were analyzed by ultra-performance liquid chromatography-quadrupole time of flight mass spectrometry (UPLC-QTOFMS) to obtain metabolite profiles. Initially, conventional principal component analysis was subsequently applied to the metabolomic data to show the limitations in relating the expression of metabolites to divisions in the phylogenetic tree. Thereafter, we introduced stepwise PLS-DA with optimized scaling methods, which were properly applied according to the branches of the phylogenetic tree of the four species. Our approach highlighted metabolites of interest by elucidating the directions and degrees of metabolic alterations in each clade of the phylogenetic tree. The results revealed the relationship between metabolic changes in the genus Panax and its species' evolutionary adaptations to different climates. We believe our method will be useful to help understand the metabolite-evolution relationship.
Kuligowski, Julia; Quintás, Guillermo; Herwig, Christoph; Lendl, Bernhard
2012-01-01
This paper shows the ease of application and usefulness of mid-IR measurements for the investigation of orthogonal cell states on the example of the analysis of Pichia pastoris cells. A rapid method for the discrimination of entire yeast cells grown under carbon and nitrogen-limited conditions based on the direct acquisition of mid-IR spectra and partial least squares discriminant analysis (PLS-DA) is described. The obtained PLS-DA model was extensively validated employing two different validation strategies: (i) statistical validation employing a method based on permutation testing and (ii) external validation splitting the available data into two independent sub-sets. The Variable Importance in Projection scores of the PLS-DA model provided deeper insight into the differences between the two investigated states. Hence, we demonstrate the feasibility of a method which uses IR spectra from intact cells that may be employed in a second step as an in-line tool in process development and process control along Quality by Design principles. PMID:22967595
Lozano, Valeria A; Ibañez, Gabriela A; Olivieri, Alejandro C
2008-03-10
Lanthanide-sensitized luminescence excitation-time decay matrices were employed for achieving the second-order advantage using as chemometric algorithms parallel factor analysis (PARAFAC) and multidimensional partial least-squares with residual bilinearization (N-PLS/RBL). The second-order data were measured for a calibration set of samples containing the analyte benzoic acid in the concentration range from 0.00 to 5.00 mg L(-1), for a validation set containing the analyte and the potential interferent saccharin (in the range 0.00-6.00 mg L(-1)), and for real samples of beverages containing benzoic acid as a preservative, saccharin, and other potentially interfering compounds. All samples were treated with terbium(III), trioctylphosphine oxide as a synergistic ligand, and contained a suitable imidazol buffer, in order to ensure maximum intensity of the luminescence signals. The results indicate a slightly better predictive ability of the newly introduced N-PLS/RBL procedure over standard PARAFAC, both in the comparison with nominal analyte concentrations in the validation sample set and in the comparison with results provided by the reference high-performance liquid chromatography technique for the real sample set.
Shahlaei, M; Saghaie, L
2014-01-01
A quantitative structure-activity relationship (QSAR) study is suggested for the prediction of biological activity (pIC50) of 3,4-dihydropyrido[3,2-d]pyrimidone derivatives as p38 inhibitors. Modeling of the biological activities of compounds of interest as a function of molecular structures was established by means of principal component analysis (PCA) and least square support vector machine (LS-SVM) methods. The results showed that the pIC50 values calculated by LS-SVM are in good agreement with the experimental data, and the performance of the LS-SVM regression model is superior to the PCA-based model. The developed LS-SVM model was applied for the prediction of the biological activities of pyrimidone derivatives, which were not in the modeling procedure. The resulting model showed high prediction ability with a root mean square error of prediction of 0.460 for LS-SVM. The study provided a novel and effective approach for predicting biological activities of 3,4-dihydropyrido[3,2-d]pyrimidone derivatives as p38 inhibitors and disclosed that LS-SVM can be used as a powerful chemometrics tool for QSAR studies.
Regularization Techniques for Linear Least-Squares Problems
Suliman, Mohamed
2016-04-01
Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion enjoyed wide popularity in many areas due to its attractive properties, it appeared to suffer from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that go in search of minimizing the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
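The RLS criterion that the thesis builds on can be illustrated with the plain Tikhonov/ridge solution. COPRA's parameter-selection rule itself is not reproduced here; the regularization parameter lam is chosen by hand in this sketch:

```python
import numpy as np

def ridge(A, y, lam):
    """Regularized least squares (Tikhonov/ridge):
    minimize ||A x - y||^2 + lam * ||x||^2,
    via the regularized normal equations (A^T A + lam*I) x = A^T y."""
    n = A.shape[1]
    return np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)

rng = np.random.default_rng(0)
A = rng.standard_normal((20, 5))
y = rng.standard_normal(20)
x_ls = ridge(A, y, 0.0)     # lam = 0 recovers ordinary least squares
x_rls = ridge(A, y, 10.0)   # a larger lam shrinks the solution norm
```

The regularization trades a small bias for a reduction in variance; algorithms such as COPRA aim to pick lam so that this trade-off approximately minimizes the MSE rather than the data-fit residual.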
Least Square Approximation by Linear Combinations of Multi(Poles).
1983-04-01
Scientific Report No. 3 by Willi Freeden, Department of Geodetic Science and Surveying, The Ohio State University, Columbus.
The least-square method in complex number domain
Institute of Scientific and Technical Information of China (English)
(author not listed)
2006-01-01
The classical least-square method was extended from the real number domain into the complex number domain, giving what is called the complex least-square method. The mathematical derivation and its applications show that the complex least-square method differs from separately applying the classical least-square method to the real and imaginary parts, an approach by which the actual least-square estimate cannot be obtained in practice. Applications of this new method to an arbitrarily given series and to the rainy-season precipitation at 160 meteorological stations in mainland China show the advantages of this method over other conventional statistical models.
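The distinction the abstract draws can be sketched in numpy, which handles complex least squares directly. For the scalar model y ~ c*x with complex c, the joint complex solution is the conjugate-inner-product ratio, not two separate real-number fits (the data below are synthetic):

```python
import numpy as np

# Synthetic complex regression: y = c_true * x + small complex noise.
rng = np.random.default_rng(1)
x = rng.standard_normal(50) + 1j * rng.standard_normal(50)
c_true = 2.0 - 0.5j
noise = 0.01 * (rng.standard_normal(50) + 1j * rng.standard_normal(50))
y = c_true * x + noise

# Complex least squares treats real and imaginary parts jointly.
c_hat = np.linalg.lstsq(x.reshape(-1, 1), y, rcond=None)[0][0]

# Closed form: c = <x, y> / <x, x> with the conjugate inner product.
c_closed = np.vdot(x, y) / np.vdot(x, x)
```

Fitting the real parts and the imaginary parts with two independent real regressions corresponds to a different (and generally wrong) model, which is the point the abstract makes.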
Kernel-Based Least Squares Temporal Difference With Gradient Correction.
Song, Tianheng; Li, Dazi; Cao, Liulin; Hirasawa, Kotaro
2016-04-01
A least squares temporal difference with gradient correction (LS-TDC) algorithm and its kernel-based version, kernel-based LS-TDC (KLS-TDC), are proposed as policy evaluation algorithms for reinforcement learning (RL). LS-TDC is derived from the TDC algorithm. Because TDC is derived by minimizing the mean-square projected Bellman error, LS-TDC has better convergence performance. The least squares technique is used to avoid the step-size tuning of the original TDC and enhance robustness. For KLS-TDC, since the kernel method is used, feature vectors can be selected automatically. The approximate linear dependence analysis is performed to realize kernel sparsification. In addition, a policy iteration strategy motivated by KLS-TDC is constructed to solve control learning problems. The convergence and parameter sensitivities of both LS-TDC and KLS-TDC are tested through on-policy learning, off-policy learning, and control learning problems. Experimental results, as compared with a series of corresponding RL algorithms, demonstrate that both LS-TDC and KLS-TDC have better approximation and convergence performance, higher efficiency for sample usage, smaller burden of parameter tuning, and less sensitivity to parameters.
Kehimkar, Benjamin; Hoggard, Jamin C; Marney, Luke C; Billingsley, Matthew C; Fraga, Carlos G; Bruno, Thomas J; Synovec, Robert E
2014-01-31
There is an increased need to more fully assess and control the composition of kerosene-based rocket propulsion fuels such as RP-1. In particular, it is critical to make better quantitative connections among the following three attributes: fuel performance (thermal stability, sooting propensity, engine specific impulse, etc.), fuel properties (such as flash point, density, kinematic viscosity, net heat of combustion, and hydrogen content), and the chemical composition of a given fuel, i.e., amounts of specific chemical compounds and compound classes present in a fuel as a result of feedstock blending and/or processing. Recent efforts in predicting fuel chemical and physical behavior through modeling put greater emphasis on attaining detailed and accurate fuel properties and fuel composition information. Often, one-dimensional gas chromatography (GC) combined with mass spectrometry (MS) is employed to provide chemical composition information. Building on approaches that used GC-MS, but to glean substantially more chemical information from these complex fuels, we recently studied the use of comprehensive two dimensional (2D) gas chromatography combined with time-of-flight mass spectrometry (GC×GC-TOFMS) using a "reversed column" format: RTX-wax column for the first dimension, and a RTX-1 column for the second dimension. In this report, by applying chemometric data analysis, specifically partial least-squares (PLS) regression analysis, we are able to readily model (and correlate) the chemical compositional information provided by use of GC×GC-TOFMS to RP-1 fuel property information such as density, kinematic viscosity, net heat of combustion, and so on. Furthermore, we readily identified compounds that contribute significantly to measured differences in fuel properties based on results from the PLS models. We anticipate this new chemical analysis strategy will have broad implications for the development of high fidelity composition-property models, leading to an
Soteriades, Andreas Diomedes; Stott, Alistair William; Moreau, Sindy; Charroin, Thierry; Blanchard, Melanie; Liu, Jiayi; Faverdin, Philippe
2016-01-01
We aimed at quantifying the extent to which agricultural management practices linked to animal production and land use affect environmental outcomes at a larger scale. Two practices closely linked to farm environmental performance at a larger scale are farming intensity, often resulting in greater off-farm environmental impacts (land, non-renewable energy use etc.) associated with the production of imported inputs (e.g. concentrates, fertilizer); and the degree of self-sufficiency, i.e. the farm's capacity to produce goods from its own resources, with higher control over nutrient recycling and thus minimization of losses to the environment, often resulting in greater on-farm impacts (eutrophication, acidification etc.). We explored the relationship of these practices with farm environmental performance for 185 French specialized dairy farms. We used Partial Least Squares Structural Equation Modelling to build, and relate, latent variables of environmental performance, intensification and self-sufficiency. Proxy indicators reflected the latent variables for intensification (milk yield/cow, use of maize silage etc.) and self-sufficiency (home-grown feed/total feed use, on-farm energy/total energy use etc.). Environmental performance was represented by an aggregate 'eco-efficiency' score per farm derived from a Data Envelopment Analysis model fed with LCA and farm output data. The dataset was split into two spatially heterogeneous (bio-physical conditions, production patterns) regions. For both regions, eco-efficiency was significantly negatively related with milk yield/cow and the use of maize silage and imported concentrates. However, these results might not necessarily hold for intensive yet more self-sufficient farms. This requires further investigation with latent variables for intensification and self-sufficiency that do not largely overlap, a modelling challenge that occurred here. We conclude that the environmental 'sustainability' of intensive dairy
Calculation of stratum surface principal curvature based on a moving least square method
Institute of Scientific and Technical Information of China (English)
LI Guo-qing; MENG Zhao-ping; MA Feng-shan; ZHAO Hai-jun; DING De-min; LIU Qin; WANG Cheng
2008-01-01
With the east section of the Changji sag Zhunger Basin as a case study, both a principal curvature method and a moving least square method are elaborated. The moving least square method is introduced, for the first time, to fit a stratum surface. The results show that, using the same-degree base function, compared with a traditional least square method, the moving least square method can produce lower fitting errors, the fitting surface can describe the morphological characteristics of stratum surfaces more accurately and the principal curvature values vary within a wide range and may be more suitable for the prediction of the distribution of structural fractures. The moving least square method could be useful in curved surface fitting and stratum curvature analysis.
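The moving least squares idea, local weighted fits that retain polynomial completeness, can be sketched in 1D. The Gaussian weight function and support width h below are illustrative choices, not those of the paper:

```python
import numpy as np

def mls_fit(x_data, z_data, x_eval, h):
    """Moving least squares with a quadratic basis and Gaussian weights.

    At each evaluation point a weighted quadratic is fitted locally;
    h controls the effective support width of the weight function.
    Quadratic completeness means exact quadratics are reproduced exactly.
    """
    z_eval = np.empty_like(x_eval)
    for i, x0 in enumerate(x_eval):
        w = np.exp(-((x_data - x0) / h) ** 2)           # Gaussian weights
        # Local basis centered at x0: [1, (x - x0), (x - x0)^2].
        B = np.column_stack([np.ones_like(x_data),
                             x_data - x0,
                             (x_data - x0) ** 2])
        Bw = B * w[:, None]
        # Weighted normal equations: (B^T W B) c = B^T W z.
        coef = np.linalg.solve(B.T @ Bw, Bw.T @ z_data)
        z_eval[i] = coef[0]                             # value of local fit at x0
    return z_eval

x = np.linspace(0.0, 1.0, 30)
z = 3.0 + 2.0 * x + x ** 2                              # an exact quadratic
ze = mls_fit(x, z, np.array([0.2, 0.5, 0.8]), h=0.3)
```

Surface fitting for stratum data works the same way with a bivariate quadratic basis; the local coefficients then also supply the second derivatives needed for principal curvature.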
Solution of a Complex Least Squares Problem with Constrained Phase.
Bydder, Mark
2010-12-30
The least squares solution of a complex linear equation is in general a complex vector with independent real and imaginary parts. In certain applications in magnetic resonance imaging, a solution is desired such that each element has the same phase. A direct method for obtaining the least squares solution to the phase constrained problem is described.
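The constrained model can be illustrated with a brute-force sketch: for each candidate common phase phi the problem reduces to an ordinary real least squares, so scanning phi on a grid finds the constrained optimum. Note that Bydder's paper gives a direct, non-iterative solution; this grid search only illustrates the model, and all data in the test are synthetic:

```python
import numpy as np

def phase_constrained_ls(A, b, n_phi=360):
    """Least squares with a common-phase constraint: find a real vector u
    and a single phase phi minimizing ||A (u * e^{i phi}) - b||.

    For fixed phi, stacking real and imaginary parts gives a real LS
    problem; phi is found by a grid scan over [0, pi) (u may be negative,
    so half a turn suffices).
    """
    best = (np.inf, None, None)
    for phi in np.linspace(0.0, np.pi, n_phi, endpoint=False):
        Ae = A * np.exp(1j * phi)
        M = np.vstack([Ae.real, Ae.imag])
        rhs = np.concatenate([b.real, b.imag])
        u = np.linalg.lstsq(M, rhs, rcond=None)[0]
        r = np.linalg.norm(M @ u - rhs)
        if r < best[0]:
            best = (r, u, phi)
    return best[1], best[2]
```

In the MRI application the abstract mentions, the same-phase constraint halves the number of real unknowns relative to an unconstrained complex solution, which is where the noise benefit comes from.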
An Algorithm for Positive Definite Least Square Estimation of Parameters.
1986-05-01
This document presents an algorithm for positive definite least square estimation of parameters. This estimation problem arises from the PILOT...dynamic macro-economic model and is equivalent to an infinite convex quadratic program. It differs from ordinary least square estimations in that the
Note: The least square nucleolus is a general nucleolus
Elisenda Molina; Juan Tejada
2000-01-01
This short note proves that the least square nucleolus (Ruiz et al. (1996)) and the lexicographical solution (Sakawa and Nishizaki (1994)) select the same imputation in each game with nonempty imputation set. As a consequence the least square nucleolus is a general nucleolus (Maschler et al. (1992)).
Abnormal behavior of the least squares estimate of multiple regression
Institute of Scientific and Technical Information of China (English)
陈希孺; 安鸿志
1997-01-01
An example is given to reveal the abnormal behavior of the least squares estimate of multiple regression. It is shown that the least squares estimate of the multiple linear regression may be "improved" in the sense of weak consistency when nuisance parameters are introduced into the model. A discussion on the implications of this finding is given.
Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants
One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
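A minimal numpy sketch of the idea: weighted linear least squares applied to a linearized Langmuir form C/S = 1/(K*Smax) + C/Smax. The parameter values and weighting choice below are made up for illustration, not taken from the article:

```python
import numpy as np

def weighted_linear_fit(x, y, w):
    """Weighted least squares fit of y = b0 + b1*x with weights w,
    via the weighted normal equations (X^T W X) beta = X^T W y."""
    X = np.column_stack([np.ones_like(x), x])
    W = np.diag(w)
    return np.linalg.solve(X.T @ W @ X, X.T @ W @ y)

# Linearized Langmuir: C/S = 1/(K*Smax) + (1/Smax)*C,
# so intercept b0 = 1/(K*Smax) and slope b1 = 1/Smax.
Smax, K = 25.0, 0.8                          # assumed "true" parameters
C = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # equilibrium concentrations
S = Smax * K * C / (1.0 + K * C)             # sorbed amounts (noise-free)
b0, b1 = weighted_linear_fit(C, C / S, w=S ** 2)  # illustrative weighting
Smax_hat, K_hat = 1.0 / b1, b1 / b0
```

The reason weighting matters in practice is that the C/S transformation distorts the error structure of the raw sorption measurements; weights chosen to undo that distortion restore the normality assumptions that ordinary least squares relies on.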
del Río, Vanessa; Callao, M Pilar; Larrechi, M Soledad; Montero de Espinosa, Lucas; Ronda, J Carles; Cádiz, Virginia
2009-05-29
The aza-Michael reaction, a variation of the Michael reaction in which an amine acts as the nucleophile, permits the synthesis of sophisticated macromolecular structures with potential use in many applications such as drug delivery systems, high performance composites and coatings. The aza-Michael product can be affected by a retro-Mannich-type fragmentation. A way of determining the reactions that are taking place and evaluate the quantitative evolution of the chemical species involved in the reactions is presented. The aza-Michael reaction between a modified fatty acid ester with alpha,beta-unsaturated ketone groups (enone containing methyl oleate (eno-MO)) and aniline (1:1) was studied isothermally at 95 degrees C and monitored in situ by near-infrared spectroscopy (NIR). The number of reactions involved in the system was determined analyzing the rank matrix of NIR spectra data recorded during the reaction. Singular value decomposition (SVD) and evolving factor analysis (EFA) adapted to analyze full rank augmented data matrices have been used. In the experimental conditions, we found that the resulting aza-Michael adduct undergoes a retro-Mannich-type fragmentation, but the final products of this reaction were present in negligible amounts. This was confirmed by recording the (1)H NMR spectra of the final product. Applying multivariate curve resolution-alternating least squares (MCR-ALS) to the NIR spectra data obtained during the reaction, it has been possible to obtain the concentration values of the species involved in the aza-Michael reaction. The performance of the model was evaluated by two parameters: ALS lack of fit (lof=1.31%) and explained variance (R(2)=99.92%). Also, the recovered spectra were compared with the experimentally recorded spectra for the reagents (aniline and eno-MO) and the correlation coefficients (r) were 0.9997 for the aniline and 0.9578 for the eno-MO.
Pierce, Karisa M; Schale, Stephen P
2011-01-30
The percent composition of blends of biodiesel and conventional diesel from a variety of retail sources were modeled and predicted using partial least squares (PLS) analysis applied to gas chromatography-total-ion-current mass spectrometry (GC-TIC), gas chromatography-mass spectrometry (GC-MS), comprehensive two-dimensional gas chromatography-total-ion-current mass spectrometry (GCxGC-TIC) and comprehensive two-dimensional gas chromatography-mass spectrometry (GCxGC-MS) separations of the blends. In all four cases, the PLS predictions for a test set of chromatograms were plotted versus the actual blend percent composition. The GC-TIC plot produced a best-fit line with slope=0.773 and y-intercept=2.89, and the average percent error of prediction was 12.0%. The GC-MS plot produced a best-fit line with slope=0.864 and y-intercept=1.72, and the average percent error of prediction was improved to 6.89%. The GCxGC-TIC plot produced a best-fit line with slope=0.983 and y-intercept=0.680, and the average percent error was slightly improved to 6.16%. The GCxGC-MS plot produced a best-fit line with slope=0.980 and y-intercept=0.620, and the average percent error was 6.12%. The GCxGC models performed best presumably due to the multidimensional advantage of higher dimensional instrumentation providing more chemical selectivity. All the PLS models used 3 latent variables. The chemical components that differentiate the blend percent compositions are reported. Copyright © 2010 Elsevier B.V. All rights reserved.
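The PLS regression underlying models like these can be sketched with a plain NIPALS implementation of PLS1, which is a common textbook form, not the specific software or data used in the study:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 regression via the NIPALS algorithm.

    Returns centering offsets and coefficients B so that predictions are
    yhat = y_mean + (Xnew - x_mean) @ B.
    """
    x_mean, y_mean = X.mean(axis=0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)       # weight vector
        t = Xc @ w                   # scores
        tt = t @ t
        p = Xc.T @ t / tt            # X loadings
        qk = (yc @ t) / tt           # y loading
        Xc = Xc - np.outer(t, p)     # deflate X and y
        yc = yc - qk * t
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)   # regression coefficients
    return x_mean, y_mean, B
```

With as many components as predictors, PLS1 reproduces the ordinary least squares fit; using fewer latent variables (three, in the study above) is what gives PLS its robustness to collinear chromatographic channels.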
A stochastic total least squares solution of adaptive filtering problem.
Javed, Shazia; Ahmad, Noor Atinah
2014-01-01
An efficient and computationally linear algorithm is derived for total least squares solution of adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of adaptive TLS problem by minimizing instantaneous value of weighted cost function. Convergence analysis of the algorithm is given to show the global convergence of the proposed algorithm, provided that the stepsize parameter is appropriately chosen. The TLMS algorithm is computationally simpler than the other TLS algorithms and demonstrates a better performance as compared with the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs.
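For reference, the baseline NLMS algorithm that TLMS is compared against can be sketched for system identification. This is a generic textbook form with synthetic data, not the paper's TLMS:

```python
import numpy as np

def nlms_identify(x, d, order, mu=0.5, eps=1e-8):
    """Normalized LMS adaptive filter for system identification:
    adapts weights w so that w . [x[n], x[n-1], ...] tracks d[n]."""
    w = np.zeros(order)
    for n in range(order - 1, len(x)):
        u = x[n - order + 1:n + 1][::-1]   # current input window, newest first
        e = d[n] - w @ u                   # a-priori error
        w += mu * e * u / (eps + u @ u)    # normalized update
    return w

# Identify a known FIR system from its noise-free input/output.
rng = np.random.default_rng(3)
h_true = np.array([0.7, -0.3, 0.2])        # unknown system to be identified
x = rng.standard_normal(4000)
d = np.convolve(x, h_true, mode="full")[:len(x)]
w = nlms_identify(x, d, order=3)
```

TLMS addresses the case this sketch ignores: when the input x itself is noisy, LMS/NLMS estimates are biased, and a total-least-squares formulation is needed to remove that bias.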
Least squares deconvolution of the stellar intensity and polarization spectra
Kochukhov, O; Piskunov, N
2010-01-01
Least squares deconvolution (LSD) is a powerful method of extracting high-precision average line profiles from the stellar intensity and polarization spectra. Despite its common usage, the LSD method is poorly documented and has never been tested using realistic synthetic spectra. In this study we revisit the key assumptions of the LSD technique, clarify its numerical implementation, discuss possible improvements and give recommendations on how to make LSD results understandable and reproducible. We also address the problem of interpretation of the moments and shapes of the LSD profiles in terms of physical parameters. We have developed an improved, multiprofile version of LSD and have extended the deconvolution procedure to linear polarization analysis taking into account anomalous Zeeman splitting of spectral lines. This code is applied to the theoretical Stokes parameter spectra. We test various methods of interpreting the mean profiles, investigating how coarse approximations of the multiline technique trans...
RNA structural motif recognition based on least-squares distance.
Shen, Ying; Wong, Hau-San; Zhang, Shaohong; Zhang, Lin
2013-09-01
RNA structural motifs are recurrent structural elements occurring in RNA molecules. RNA structural motif recognition aims to find RNA substructures that are similar to a query motif, and it is important for RNA structure analysis and RNA function prediction. In view of this, we propose a new method known as RNA Structural Motif Recognition based on Least-Squares distance (LS-RSMR) to effectively recognize RNA structural motifs. A test set consisting of five types of RNA structural motifs occurring in Escherichia coli ribosomal RNA is compiled by us. Experiments are conducted for recognizing these five types of motifs. The experimental results fully reveal the superiority of the proposed LS-RSMR compared with four other state-of-the-art methods.
Nonlinear least squares estimation based on multiple genetic algorithms
Institute of Scientific and Technical Information of China (English)
刘德玲; 马志强
2011-01-01
Conventional Newton-like algorithms, widely used for parameter estimation of nonlinear models, are sensitive to initial values, while simple genetic algorithms are liable to fall into local optima. This paper proposes a multiple-population genetic algorithm. It searches for the solution with several genetic algorithms and dynamically adjusts the parameter domain according to the optimum solution found by each genetic algorithm after a few iterations. This allows it to avoid local optima and increases both the performance and the reliability that the solution found is the global optimum. Experimental results show that the proposed algorithm is an effective approach to parameter estimation of nonlinear systems.
A Novel Kernel for Least Squares Support Vector Machine
Institute of Scientific and Technical Information of China (English)
FENG Wei; ZHAO Yong-ping; DU Zhong-hua; LI De-cai; WANG Li-feng
2012-01-01
Extreme learning machine (ELM) has attracted much attention in recent years due to its fast convergence and good performance. Merging ELM and the support vector machine is an important trend, yielding an ELM kernel. ELM-kernel-based methods are able to solve nonlinear problems by inducing an explicit mapping, in contrast to commonly used kernels such as the Gaussian kernel. In this paper, the ELM kernel is extended to least squares support vector regression (LSSVR), yielding ELM-LSSVR. ELM-LSSVR can reduce training and test time simultaneously without extra techniques such as sequential minimal optimization or a pruning mechanism. Moreover, the memory required for training and testing is reduced. To confirm the efficacy and feasibility of the proposed ELM-LSSVR, experiments are reported demonstrating that ELM-LSSVR gains in training and test time with accuracy comparable to other algorithms.
Parsimonious extreme learning machine using recursive orthogonal least squares.
Wang, Ning; Er, Meng Joo; Han, Min
2014-10-01
Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) Initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and are derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.
Huang, Kang; Wang, Hui-jun; Xu, Hui-rong; Wang, Jian-ping; Ying, Yi-bin
2009-04-01
The application of the least squares support vector machines (LS-SVM) regression method, based on statistical learning theory, to the analysis of near infrared (NIR) spectra of tomato juice is introduced in the present paper. In this method, LS-SVM was used to establish the spectral analysis model and was applied to predict the sugar contents (SC) and available acid (VA) in tomato juice samples. NIR transmission spectra of tomato juice were measured in the spectral range of 800-2,500 nm using an InGaAs detector. The radial basis function (RBF) was adopted as the kernel function of LS-SVM. Sixty-seven tomato juice samples were used as the calibration set, and thirty-three samples were used as the validation set. The results of the method for sugar content (SC) and available acid (VA) prediction were high correlation coefficients of 0.9903 and 0.9675, and low root mean square errors of prediction (RMSEP) of 0.0056 degree Brix and 0.0245, respectively. Compared with the PLS and PCR methods, the LS-SVM method performed better. The results indicated that it is possible to build statistical models to quantify some common components in tomato juice using near-infrared (NIR) spectroscopy and the LS-SVM regression method as a nonlinear multivariate calibration procedure, and that LS-SVM can be a rapid and accurate method for juice component determination based on NIR spectra.
Multilevel solvers of first-order system least-squares for Stokes equations
Energy Technology Data Exchange (ETDEWEB)
Lai, Chen-Yao G. [National Chung Cheng Univ., Chia-Yi (Taiwan, Province of China)
1996-12-31
Recently, the use of the first-order system least squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined as the sum of the L^2-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.
Visualizing Least-Square Lines of Best Fit.
Engebretsen, Arne
1997-01-01
Presents strategies that utilize graphing calculators and computer software to help students understand the concept of minimizing the squared residuals to find the line of best fit. Includes directions for least-squares drawings using a variety of technologies. (DDR)
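The minimization described above has a simple closed form; a short sketch (with made-up data points) of the normal-equation solution for the line that minimizes the sum of squared residuals:

```python
import numpy as np

# Closed-form least-squares line: minimize sum (y_i - (a*x_i + b))^2
# over slope a and intercept b, using the textbook normal equations.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

n = len(x)
a = (n * np.sum(x * y) - np.sum(x) * np.sum(y)) / (n * np.sum(x**2) - np.sum(x)**2)
b = (np.sum(y) - a * np.sum(x)) / n

residuals = y - (a * x + b)  # residuals of an OLS fit sum to zero
print(a, b, np.sum(residuals**2))
```

This is the same quantity students minimize graphically with the calculator activities the article describes; any other slope or intercept yields a larger sum of squared residuals.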
Performance Evaluation of the Ordinary Least Square (OLS) and ...
African Journals Online (AJOL)
Nana Kwasi Peprah
Keywords: Differential Global Positioning, System, Total Least Square, Ordinary ... observation equations where only the observations are considered as ..... Dreiseitl, S., and Ohno-Machado, L. (2002), “Logistic Regression and Artificial Neural.
A Newton Algorithm for Multivariate Total Least Squares Problems
Directory of Open Access Journals (Sweden)
WANG Leyang
2016-04-01
Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can deal with their stochastic and deterministic elements using only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
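For context, the classical single-right-hand-side total least squares solution is read off the SVD of the augmented matrix [A | b]; this is the textbook Golub-Van Loan construction, not the multivariate Newton algorithm derived in the paper, and the data below are synthetic:

```python
import numpy as np

# Classical TLS: the solution comes from the right singular vector
# associated with the smallest singular value of [A | b].
rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
A_clean = rng.normal(size=(50, 2))
b_clean = A_clean @ x_true
# Noise in BOTH A and b -- the errors-in-variables setting TLS targets.
A = A_clean + 0.01 * rng.normal(size=A_clean.shape)
b = b_clean + 0.01 * rng.normal(size=b_clean.shape)

Z = np.column_stack([A, b])
_, _, Vt = np.linalg.svd(Z)
v = Vt[-1]               # right singular vector of the smallest sigma
x_tls = -v[:-1] / v[-1]  # normalize so the last component equals -1
print(x_tls)
```

Weighted and multivariate variants, such as the Newton scheme above, generalize this construction to correlated observation and coefficient errors.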
Generalized Penalized Least Squares and Its Statistical Characteristics
Institute of Scientific and Technical Information of China (English)
DING Shijun; TAO Benzao
2006-01-01
The solution properties of the semiparametric model are analyzed; in particular, penalized least squares for the semiparametric model becomes invalid when the matrix B^TPB is ill-conditioned or singular. Following the principle of ridge estimation for the linear parametric model, generalized penalized least squares for the semiparametric model is put forward, and some formulae and statistical properties of the estimates are derived. Finally, some helpful conclusions are drawn from simulation examples.
An application of least squares fit mapping to clinical classification.
Yang, Y.; Chute, C. G.
1992-01-01
This paper describes a unique approach, "Least Square Fit Mapping," to clinical data classification. We use large collections of human-assigned text-to-category matches as training sets to compute the correlations between physicians' terms and canonical concepts. A Linear Least Squares Fit (LLSF) technique is employed to obtain a mapping function which optimally fits the known matches given in a training set and probabilistically captures the unknown matches for arbitrary texts. We tested our...
A Generalized Autocovariance Least-Squares Method for Covariance Estimation
DEFF Research Database (Denmark)
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;
2007-01-01
A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.
Institute of Scientific and Technical Information of China (English)
姚燕; 王常玥; 刘辉军; 汤建斌; 蔡晋辉; 汪静军
2015-01-01
Forest bio-fuel, a new type of renewable energy, has attracted increasing attention as a promising alternative. In this study, a new method called Sparse Partial Least Squares Regression (SPLS) is used to construct a proximate analysis model for the fuel characteristics of sawdust, in combination with near infrared spectroscopy. The moisture, ash, volatile and fixed carbon percentages of 80 samples were measured by traditional proximate analysis. Spectroscopic data were collected with a Nicolet NIR spectrometer. After wavelet-transform filtering, all samples were divided into a training set and a validation set according to sample category and producing area. SPLS, Principal Component Regression (PCR), Partial Least Squares Regression (PLS) and the Least Absolute Shrinkage and Selection Operator (LASSO) were applied to construct prediction models. The results show that SPLS can select grouped wavelengths and improve prediction performance. The absorption peaks of moisture are covered by the selected wavelengths, while those of the other compositions have not been confirmed yet. In short, SPLS can reduce the dimensionality of complex data sets and interpret the relationship between spectroscopic data and composition concentration, and it will play an increasingly important role in the field of NIR applications.
Institute of Scientific and Technical Information of China (English)
ZHANG Liqing; WU Xiaohua
2005-01-01
Computer-aided partial least squares is introduced to simultaneously determine the contents of Deoxyschizandin, Schisandrin, and γ-Schisandrin in the extracted solution of wuweizi. Regression analysis of the experimental results shows that the average recovery of each component is in the range of 98.9% to 110.3%, which means that partial least squares regression spectrophotometry can circumvent the overlapping absorption spectra of multiple components, so that satisfactory results can be obtained without any sample pre-separation.
Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation
DEFF Research Database (Denmark)
Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt
2013-01-01
In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao ... estimation. Moreover, simulations on real-life data indicate that the NLS and aNLS methods are applicable even when reverberation is present and the noise is not white Gaussian....
Conditional least squares estimation in nonstationary nonlinear stochastic regression models
Jacob, Christine
2010-01-01
Let $\{Z_n\}$ be a real nonstationary stochastic process such that $E(Z_n \mid \mathcal{F}_{n-1}) < \infty$ a.s. and $E(Z_n^2 \mid \mathcal{F}_{n-1}) < \infty$ a.s., where $\{\mathcal{F}_n\}$ is an increasing sequence of $\sigma$-algebras. Assuming that $E(Z_n \mid \mathcal{F}_{n-1}) = g_n(\theta_0, ...
Bootstrapping Nonlinear Least Squares Estimates in the Kalman Filter Model.
1986-01-01
(Garbled table residue: bias comparison of bootstrap, Newton-Raphson, and empirical estimates; numerical values unrecoverable.) In most cases, parameter estimation for the KF model has been accomplished by maximum likelihood techniques involving the use of scoring or Newton ... is well behaved, the Newton-Raphson and scoring procedures enjoy quadratic convergence in the neighborhood of the maximum and one has a ready-made
8th International Conference on Partial Least Squares and Related Methods
Vinzi, Vincenzo; Russolillo, Giorgio; Saporta, Gilbert; Trinchera, Laura
2016-01-01
This volume presents state-of-the-art theories, new developments, and important applications of Partial Least Square (PLS) methods. The text begins with invited communications from current leaders in the field, who cover the history of PLS, an overview of methodological issues, and recent advances in regression and multi-block approaches. The rest of the volume comprises selected, reviewed contributions from the 8th International Conference on Partial Least Squares and Related Methods held in Paris, France, on 26-28 May, 2014. They are organized in four coherent sections: 1) new developments in genomics and brain imaging, 2) new and alternative methods for multi-table and path analysis, 3) advances in partial least square regression (PLSR), and 4) partial least square path modeling (PLS-PM) breakthroughs and applications. PLS methods are very versatile and are now used in areas as diverse as engineering, life science, sociology, psychology, brain imaging, genomics, and business among both academics ...
Integer least-squares theory for the GNSS compass
Teunissen, P. J. G.
2010-07-01
Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategies. It extends current unconstrained ILS theory to the nonlinearly constrained case, an extension that is particularly suited for precise attitude determination. As opposed to current practice, our method does proper justice to the a priori given information. The nonlinear baseline constraint is fully integrated into the ambiguity objective function, thereby receiving a proper weighting in its minimization and providing guidance for the integer search. Different search strategies are developed to compute exact and approximate solutions of the nonlinear constrained ILS problem. Their applicability depends on the strength of the GNSS model and on the length of the baseline. Two of the presented search strategies, a global and a local one, are based on the use of an ellipsoidal search space. This has the advantage that standard methods can be applied. The global ellipsoidal search strategy is applicable to GNSS models of sufficient strength, while the local ellipsoidal search strategy is applicable to models for which the baseline lengths are not too small. We also develop search strategies for the most challenging case, namely when the curvature of the non-ellipsoidal ambiguity search space needs to be taken into account. Two such strategies are presented, an approximate one and a rigorous, somewhat more complex, one. The approximate one is applicable when the fixed baseline variance matrix is close to diagonal. Both methods make use of a search and shrink strategy. The rigorous solution is efficiently obtained by means of a search and shrink strategy that uses non-quadratic, but easy-to-evaluate, bounding functions of the ambiguity objective function. The theory
Least-squares joint imaging of multiples and primaries
Brown, Morgan Parker
Current exploration geophysics practice still regards multiple reflections as noise, although multiples often contain considerable information about the earth's angle-dependent reflectivity that primary reflections do not. To exploit this information, multiples and primaries must be combined in a domain in which they are comparable, such as the prestack image domain. However, unless the multiples and primaries have been pre-separated from the data, crosstalk leakage between multiple and primary images will significantly degrade any gains in the signal fidelity, geologic interpretability, and signal-to-noise ratio of the combined image. I present a global linear least-squares algorithm, denoted LSJIMP (Least-squares Joint Imaging of Multiples and Primaries), which separates multiples from primaries while simultaneously combining their information. The novelty of the method lies in the three model regularization operators which discriminate between crosstalk and signal and extend information between multiple and primary images. The LSJIMP method exploits the hitherto ignored redundancy between primaries and multiples in the data. While many different types of multiple imaging operators are well suited for use with the LSJIMP method, in this thesis I utilize an efficient prestack time imaging strategy for multiples which sacrifices accuracy in a complex earth for computational speed and convenience. I derive a variant of the normal moveout (NMO) equation for multiples, called HEMNO, which can image "split" pegleg multiples that arise from a moderately heterogeneous earth. I also derive a series of prestack amplitude compensation operators which, when combined with HEMNO, transform pegleg multiples into events that are directly comparable, kinematically and in terms of amplitudes, to the primary reflection. I test my implementation of LSJIMP on two datasets from the deepwater Gulf of Mexico. The first, a 2-D line in the Mississippi Canyon region, exhibits a variety of
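For reference, the standard single-layer NMO equation that HEMNO generalizes is t(x) = sqrt(t0^2 + x^2/v^2); a minimal numerical sketch, where the velocity and zero-offset time are assumed values rather than anything from the thesis:

```python
import numpy as np

# Hyperbolic NMO moveout for a primary reflection:
#   t(x) = sqrt(t0^2 + x^2 / v^2)
# (the thesis derives HEMNO, a variant for pegleg multiples; this is
# the textbook primary-reflection case shown for context).
t0 = 2.0       # zero-offset two-way traveltime, s (assumed)
v = 2000.0     # NMO velocity, m/s (assumed)
offsets = np.array([0.0, 500.0, 1000.0, 2000.0])
t = np.sqrt(t0**2 + (offsets / v)**2)
moveout = t - t0  # the correction removed before stacking/imaging
print(t)
```

The moveout grows with offset; multiple-specific variants like HEMNO adjust this hyperbola to account for the extra bounce paths of pegleg multiples.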
Recursive least squares background prediction of univariate syndromic surveillance data
Directory of Open Access Journals (Sweden)
Burkom Howard
2009-01-01
Full Text Available Abstract Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the day-of-the-week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal to noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold
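A minimal sketch of the adaptive RLS background-prediction idea, without the paper's day-of-week transformation or signal-injection evaluation; the synthetic data, filter order, and forgetting factor below are illustrative assumptions:

```python
import numpy as np

def rls_predict(y, p=3, lam=0.98, delta=100.0):
    """One-step-ahead RLS prediction of y[t] from the previous p values.

    lam is the forgetting factor (adaptivity to non-stationarity);
    delta initializes the inverse correlation matrix P.
    """
    w = np.zeros(p)
    P = delta * np.eye(p)
    preds = np.full(len(y), np.nan)
    for t in range(p, len(y)):
        x = y[t - p:t][::-1]           # most recent sample first
        preds[t] = w @ x
        e = y[t] - preds[t]            # a priori prediction error
        k = P @ x / (lam + x @ P @ x)  # gain vector
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
    return preds

# Synthetic slowly varying background with noise (illustrative only,
# not real syndromic counts).
rng = np.random.default_rng(1)
tt = np.arange(400)
y = 50 + 10 * np.sin(2 * np.pi * tt / 100) + rng.normal(0, 1, len(tt))
preds = rls_predict(y)
err = np.nanmean(np.abs(preds[100:] - y[100:]))
print(err)
```

After a burn-in period the filter tracks the drifting background, and residuals against the prediction are what an alerting threshold would then be applied to.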
Neither fixed nor random: weighted least squares meta-regression.
Stanley, T D; Doucouliagos, Hristos
2016-06-20
Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis, and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, as good as FE-MRA in all cases, and better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.
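A sketch of the unrestricted WLS estimator on hypothetical study data (not the authors' simulations): the WLS point estimate coincides with the fixed-effect estimate, while its standard error is scaled by a multiplicative overdispersion term estimated from the regression rather than fixed at 1:

```python
import numpy as np

# Hypothetical study effect sizes and standard errors (assumed data).
effects = np.array([0.30, 0.45, 0.25, 0.50, 0.35])
se = np.array([0.10, 0.20, 0.08, 0.25, 0.12])

w = 1.0 / se**2                           # inverse-variance weights
wmean = np.sum(w * effects) / np.sum(w)   # WLS point estimate
# Unrestricted WLS: estimate a multiplicative dispersion factor from
# the weighted residuals instead of assuming it equals 1 (FE) or
# adding a between-study variance tau^2 (RE).
n = len(effects)
phi = np.sum(w * (effects - wmean)**2) / (n - 1)
se_wls = np.sqrt(phi / np.sum(w))
print(wmean, se_wls)
```

Because the weights are only rescaled, point estimates match fixed effects exactly; the difference from random effects appears when heterogeneity or publication bias inflates the dispersion factor.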
Li, Qing-Bo; Huang, Zheng-Wei
2014-02-01
In order to improve the prediction accuracy of quantitative analysis models for near-infrared spectroscopy of blood glucose, this paper combines the net analyte preprocessing (NAP) algorithm with radial basis function partial least squares (RBFPLS) regression to build a nonlinear modeling method suitable for human glucose measurement, named NAP-RBFPLS. First, NAP is used to preprocess the near-infrared spectra of blood glucose, in order to extract from the original spectra only the information related to the glucose signal. This weakens the occasional correlation between glucose changes and interference factors caused by the absorption of water, albumin, hemoglobin, fat and other blood components, changes in body temperature, drift of the measuring instruments, and changes in the measurement environment and conditions. A nonlinear quantitative analysis model is then built from the preprocessed spectra, in order to address the nonlinear relationship between glucose concentration and the near-infrared spectra caused by strong scattering in the body. The new method is compared with three other quantitative analysis models built on partial least squares (PLS), net analyte preprocessing partial least squares (NAP-PLS) and RBFPLS, respectively. The experimental results show that the nonlinear calibration model combining the NAP algorithm and RBFPLS regression, put forward in this paper, greatly improves prediction accuracy on the prediction sets, which supports practical applications in research on non-invasive detection of human glucose concentration.
Natural gradient-based recursive least-squares algorithm for adaptive blind source separation
Institute of Scientific and Technical Information of China (English)
ZHU Xiaolong; ZHANG Xianda; YE Jimin
2004-01-01
This paper focuses on the problem of adaptive blind source separation (BSS). First, a recursive least-squares (RLS) whitening algorithm is proposed. By combining it with a natural-gradient-based RLS algorithm for nonlinear principal component analysis (PCA), and using reasonable approximations, a novel RLS algorithm which can achieve BSS without additional pre-whitening of the observed mixtures is obtained. Analyses of the equilibrium points show that both the RLS whitening algorithm and the natural-gradient-based RLS algorithm for BSS have the desired convergence properties. It is also proved that the combined new RLS algorithm for BSS is equivariant and has the property of keeping the separating matrix from becoming singular. Finally, the effectiveness of the proposed algorithm is verified by extensive simulation results.
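For context, the batch whitening operation that the recursive LS whitening algorithm approximates can be sketched as follows; the mixing matrix and Laplacian sources are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Batch whitening: find V such that Z = V @ X has identity covariance.
# (The paper derives a *recursive* update for this; here we show the
# one-shot eigendecomposition version it approximates.)
rng = np.random.default_rng(2)
S = rng.laplace(size=(2, 5000))          # independent non-Gaussian sources
A = np.array([[1.0, 0.6], [0.4, 1.0]])   # assumed mixing matrix
X = A @ S                                # observed mixtures

C = np.cov(X)
d, E = np.linalg.eigh(C)
V = E @ np.diag(d**-0.5) @ E.T           # symmetric whitening matrix
Z = V @ X
print(np.cov(Z))                         # approximately the identity
```

After whitening, the remaining BSS task reduces to finding an orthogonal rotation, which is what the nonlinear-PCA stage of the combined algorithm handles.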
Hasegawa, K; Funatsu, K
2000-01-01
Quantitative structure-activity relationship (QSAR) studies based on chemometric techniques are reviewed. Partial least squares (PLS) is introduced as a novel robust method to replace classical methods such as multiple linear regression (MLR). Advantages of PLS compared to MLR are illustrated with typical applications. Genetic algorithm (GA) is a novel optimization technique which can be used as a search engine in variable selection. A novel hybrid approach comprising GA and PLS for variable selection developed in our group (GAPLS) is described. The more advanced method for comparative molecular field analysis (CoMFA) modeling called GA-based region selection (GARGS) is described as well. Applications of GAPLS and GARGS to QSAR and 3D-QSAR problems are shown with some representative examples. GA can be hybridized with nonlinear modeling methods such as artificial neural networks (ANN) for providing useful tools in chemometric and QSAR.
Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods
Chamis, Christos C.; Coroneos, Rula M.
2007-01-01
Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.
Survival Time Prediction Using the Partial Least Square Method
Directory of Open Access Journals (Sweden)
PANDE PUTU BUDI KUSUMA
2013-03-01
Full Text Available Coronary heart disease is caused by an accumulation of fat on the inside walls of the blood vessels of the heart (coronary arteries). The factors leading to the occurrence of coronary heart disease are dominated by patients' unhealthy lifestyles, and survival times differ between patients. The objective of this research is to predict the survival time of patients with coronary heart disease, taking into account explanatory variables analyzed by the Partial Least Square (PLS) method. The PLS method is used to carry out multiple regression analysis in the presence of the specific problems of multicollinearity and microarray data. The purpose of the PLS method is to relate the explanatory variables to multiple response variables so as to produce more accurate predicted values. The results of this research show that the predicted survival time for the three samples of patients with coronary heart disease averaged 13 days, with an RMSEP (prediction error) of 1.526, which means that the results of this study are not much different from predictions in the field of medicine. This is consistent with medical experience suggesting an average survival of 13 days for patients with coronary heart disease.
Energy Technology Data Exchange (ETDEWEB)
Gutierrez T, C.; Flores Ll, H. [ININ, 52045 Ocoyoacac, Estado de Mexico (Mexico)
2004-07-01
The second derivative of the current-voltage (I-V) characteristic curve of a Langmuir probe is numerically calculated using the Tikhonov method to determine the electron energy distribution function (EEDF). A comparison of the obtained EEDF with a least squares (LS) fit is discussed. The experimental I-V curve is obtained with a cylindrical probe in an electron cyclotron resonance (ECR) plasma source. The plasma parameters are determined from the EEDF by means of the Laframboise theory. For the LS fit, the results obtained are similar to those of the Tikhonov method, but in the LS case the procedure is slow to achieve the best fit. (Author)
Least-squares finite-element lattice Boltzmann method.
Li, Yusong; LeBoeuf, Eugene J; Basu, P K
2004-06-01
A new numerical model of the lattice Boltzmann method utilizing least-squares finite elements in space and the Crank-Nicolson method in time is presented. The new method is able to solve problem domains that contain complex or irregular geometric boundaries by using the finite-element method's geometric flexibility and numerical stability, while employing efficient and accurate least-squares optimization. For the pure advection equation on a uniform mesh, the proposed method provides fourth-order accuracy in space and second-order accuracy in time, with unconditional stability in the time domain. Accurate numerical results are presented for two-dimensional incompressible Poiseuille flow and Couette flow.
A note on the limitations of lattice least squares
Gillis, J. T.; Gustafson, C. L.; Mcgraw, G. A.
1988-01-01
This paper quantifies the known limitation of lattice least squares to ARX models in terms of the dynamic properties of the system being modeled. This allows determination of the applicability of lattice least squares in a given situation. The central result is that an equivalent ARX model exists for an ARMAX system if and only if the ARMAX system has no transmission zeros from the noise port to the output port. The technique used to prove this fact is a construction using the matrix fractional description of the system. The final section presents two computational examples.
Multi-source least-squares migration of marine data
Wang, Xin
2012-11-04
Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its I/O cost is significantly decreased.
Sparse least-squares reverse time migration using seislets
Dutta, Gaurav
2015-08-19
We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.
HERMITE SCATTERED DATA FITTING BY THE PENALIZED LEAST SQUARES METHOD
Institute of Scientific and Technical Information of China (English)
Tianhe Zhou; Danfu Han
2009-01-01
Given a set of scattered data with derivative values, if the data are noisy or extremely numerous we use an extension of the penalized least squares method of von Golitschek and Schumaker [Serdica, 18 (2002), pp. 1001-1020] to fit the data. We show that the extended penalized least squares method produces a unique spline fit to the data, and we give an error bound for the extension. Some numerical examples are presented to demonstrate the effectiveness of the proposed method.
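A minimal sketch of the penalized least squares principle, here with a discrete second-difference roughness penalty on gridded data rather than the bivariate spline setting of the paper; the test signal, noise level, and penalty weight are illustrative assumptions.

```python
import numpy as np

def penalized_ls_smooth(y, lam):
    """Penalized least squares: min ||z - y||^2 + lam * ||D2 z||^2,
    where D2 is the second-difference matrix (a discrete roughness penalty)."""
    n = len(y)
    D = np.diff(np.eye(n), n=2, axis=0)   # (n-2) x n second-difference operator
    return np.linalg.solve(np.eye(n) + lam * D.T @ D, y)

def roughness(v):
    """Sum of squared second differences, the quantity the penalty controls."""
    return np.sum(np.diff(v, n=2) ** 2)

rng = np.random.default_rng(2)
x = np.linspace(0, 1, 200)
truth = np.sin(2 * np.pi * x)
y = truth + 0.1 * rng.normal(size=200)    # noisy observations
z = penalized_ls_smooth(y, lam=100.0)     # smoothed fit
```

Increasing the penalty weight pulls the fit toward a straight line; decreasing it interpolates the noise, the same trade-off the spline version resolves via its error bound.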
Least-squares variance component estimation: theory and GPS applications
Amiri-Simkooei, A.
2007-01-01
In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known principle of least-squares. With this method the estimation of the (co)variance components is based on a linear model of observation equations. The method is flexible since it works with a user-defined we...
Nonparametric Least Squares Estimation of a Multivariate Convex Regression Function
Seijo, Emilio
2010-01-01
This paper deals with the consistency of the least squares estimator of a convex regression function when the predictor is multidimensional. We characterize and discuss the computation of such an estimator via the solution of certain quadratic and linear programs. Mild sufficient conditions for the consistency of this estimator and its subdifferentials in fixed and stochastic design regression settings are provided. We also consider a regression function which is known to be convex and componentwise nonincreasing and discuss the characterization, computation and consistency of its least squares estimator.
Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J
2006-09-01
A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods, (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts), are established through their respective objective functions, and higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
Consistency of System Identification by Global Total Least Squares
C. Heij (Christiaan); W. Scherrer
1996-01-01
textabstractGlobal total least squares (GTLS) is a method for the identification of linear systems where no distinction between input and output variables is required. This method has been developed within the deterministic behavioural approach to systems. In this paper we analyse statistical proper
Consistency of global total least squares in stochastic system identification
C. Heij (Christiaan); W. Scherrer
1995-01-01
textabstractGlobal total least squares has been introduced as a method for the identification of deterministic system behaviours. We analyse this method within a stochastic framework, where the observed data are generated by a stationary stochastic process. Conditions are formulated so that the meth
Integer least-squares theory for the GNSS compass
Teunissen, P.J.G.
2010-01-01
Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategi
Risk and Management Control: A Partial Least Square Modelling Approach
DEFF Research Database (Denmark)
Nielsen, Steen; Pontoppidan, Iens Christian
and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct a valid feed forward but also predictions for decision making including risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data...
SELECTION OF REFERENCE PLANE BY THE LEAST SQUARES FITTING METHODS
Directory of Open Access Journals (Sweden)
Przemysław Podulka
2016-06-01
For least squares polynomial fitting it was found that the applied method usually gave better robustness to the occurrence of scratches, valleys and dimples for cylinder liners. For piston skirt surfaces, better edge-filtering results were obtained. It is also recommended to analyse the Sk parameters for proper selection of the reference plane in surface topography measurements.
Fuzzy modeling of friction by bacterial and least square optimization
Jastrzebski, Marcin
2006-03-01
In this paper a new method of tuning the parameters of Sugeno fuzzy models is presented. Because the modeled phenomenon is discontinuous, a new type of consequent function is introduced. The described algorithm (BA+LSQ) combines a bacterial algorithm (BA) for tuning the parameters of the membership functions with the least squares method (LSQ) for the parameters of the consequent functions.
Plane-wave Least-squares Reverse Time Migration
Dai, Wei
2012-11-04
Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent across all angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.
A least squares estimation method for the linear learning model
B. Wierenga (Berend)
1978-01-01
textabstractThe author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported, a
An Orthogonal Least Squares Based Approach to FIR Designs
Institute of Scientific and Technical Information of China (English)
Xiao-Feng Wu; Zi-Qiang Lang; Stephen A Billings
2005-01-01
This paper is concerned with the application of the forward Orthogonal Least Squares (OLS) algorithm to the design of Finite Impulse Response (FIR) filters. The focus of this study is a new FIR filter design procedure, which is compared with the traditional method implemented by MATLAB's fir2() routine.
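The forward-selection idea behind OLS can be sketched with a simplified greedy search that, unlike the true orthogonalized OLS algorithm, simply refits by least squares at each step; the candidate regressors and coefficients below are made-up test data, not the FIR design setting.

```python
import numpy as np

def forward_ls_select(candidates, y, n_terms):
    """Greedy forward selection: at each step add the candidate column that
    most reduces the least-squares residual (a simplified OLS-style search)."""
    n, p = candidates.shape
    chosen = []
    best_res = np.linalg.norm(y)
    for _ in range(n_terms):
        best_j = None
        best_res = np.inf
        for j in range(p):
            if j in chosen:
                continue
            cols = candidates[:, chosen + [j]]
            coef, *_ = np.linalg.lstsq(cols, y, rcond=None)
            r = np.linalg.norm(y - cols @ coef)
            if r < best_res:
                best_j, best_res = j, r
        chosen.append(best_j)
    coef, *_ = np.linalg.lstsq(candidates[:, chosen], y, rcond=None)
    return chosen, coef, best_res

rng = np.random.default_rng(3)
terms = rng.normal(size=(100, 6))            # candidate regressors
y = 2.0 * terms[:, 1] + 3.0 * terms[:, 4]    # true model uses terms 1 and 4
chosen, coef, res = forward_ls_select(terms, y, n_terms=2)
```

True forward OLS orthogonalizes each new term against those already selected, which makes the error-reduction ranking cheap to update; the greedy refit above gives the same selections on small problems.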
Weighted least squares stationary approximations to linear systems.
Bierman, G. J.
1972-01-01
Investigation of the problem of replacing a certain time-varying linear system by a stationary one. Several quadratic criteria are proposed to aid in determining suitable candidate systems. One criterion for choosing the matrix B (in the stationary system) is initial-condition dependent, and another bounds the 'worst case' homogeneous system performance. Both of these criteria produce weighted least square fits.
ON A FAMILY OF MULTIVARIATE LEAST-SQUARES ORTHOGONAL POLYNOMIALS
Institute of Scientific and Technical Information of China (English)
郑成德; 王仁宏
2003-01-01
In this paper the new notion of multivariate least-squares orthogonal polynomials in rectangular form is introduced. Their existence and uniqueness are studied and some methods for their recursive computation are given. As an application, … is constructed.
On the Routh approximation technique and least squares errors
Aburdene, M. F.; Singh, R.-N. P.
1979-01-01
A new method for calculating the coefficients of the numerator polynomial of the direct Routh approximation method (DRAM) using the least squares error criterion is formulated. The necessary conditions are obtained in terms of algebraic equations. The method is useful for low-frequency as well as high-frequency reduced-order models.
Optimization of sequential decisions by least squares Monte Carlo method
DEFF Research Database (Denmark)
Nishijima, Kazuyoshi; Anders, Annett
change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which...
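The least squares Monte Carlo scheme referred to above is in the spirit of Longstaff-Schwartz regression for optimal stopping: continuation values are estimated by regressing realized downstream values on the current state. The sketch below prices a Bermudan put under assumed GBM parameters, purely to illustrate the backward regression step; it is not the authors' adaptation model.

```python
import numpy as np

def lsmc_put(S0=1.0, K=1.0, r=0.05, sigma=0.3, T=1.0,
             steps=10, n_paths=20000, seed=4):
    """Least squares Monte Carlo for a Bermudan put: continuation values
    are estimated by polynomial least squares regression on the state."""
    rng = np.random.default_rng(seed)
    dt = T / steps
    z = rng.normal(size=(n_paths, steps))
    # Simulated asset paths at times dt, 2*dt, ..., T.
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    payoff = np.maximum(K - S[:, -1], 0.0)   # value if held to maturity
    for t in range(steps - 2, -1, -1):
        payoff *= np.exp(-r * dt)            # discount one step back
        itm = K - S[:, t] > 0                # regress on in-the-money paths only
        if itm.sum() > 10:
            A = np.vander(S[itm, t], 4)      # cubic polynomial basis in S_t
            cont = A @ np.linalg.lstsq(A, payoff[itm], rcond=None)[0]
            exercise = (K - S[itm, t]) > cont
            idx = np.where(itm)[0][exercise]
            payoff[idx] = K - S[idx, t]      # exercise beats continuing
    return np.exp(-r * dt) * payoff.mean()   # discount from first exercise date

price = lsmc_put()
```

The same backward regression logic carries over to evacuation-type decisions: "exercise" becomes "act now" and the regression predicts the value of waiting.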
ON THE COMPARISON OF THE TOTAL LEAST SQUARES AND THE LEAST SQUARES PROBLEMS
Institute of Scientific and Technical Information of China (English)
刘永辉; 魏木生
2003-01-01
There are a number of articles discussing the total least squares (TLS) and least squares (LS) problems. M. Wei (Mathematica Numerica Sinica 20(3) (1998), 267-278) proposed a new orthogonal projection method to improve existing perturbation bounds of the TLS and LS problems. In this paper, we continue to improve existing bounds on the differences between the squared residuals, the weighted squared residuals and the minimum norm correction matrices of the TLS and LS problems.
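The basic TLS/LS contrast can be illustrated for a straight-line fit: LS minimizes vertical residuals (errors in y only), while TLS minimizes orthogonal residuals via the SVD, allowing errors in both variables. The data below are a made-up noiseless line, so both estimators recover the same coefficients.

```python
import numpy as np

def ols_line(x, y):
    """Ordinary least squares line y = a*x + b (errors in y only)."""
    A = np.column_stack([x, np.ones_like(x)])
    a, b = np.linalg.lstsq(A, y, rcond=None)[0]
    return a, b

def tls_line(x, y):
    """Total least squares line via the SVD: residuals are measured
    orthogonally to the fitted line, so both x and y may carry error."""
    xm, ym = x.mean(), y.mean()
    M = np.column_stack([x - xm, y - ym])
    _, _, Vt = np.linalg.svd(M, full_matrices=False)
    vx, vy = Vt[-1]          # right singular vector of the smallest singular value
    a = -vx / vy             # (vx, vy) is normal to the line
    return a, ym - a * xm    # TLS line passes through the centroid

x = np.linspace(0.0, 1.0, 50)
y = 2.0 * x + 1.0            # noiseless data: both fits recover the line
a_ols, b_ols = ols_line(x, y)
a_tls, b_tls = tls_line(x, y)
```

With noise in x, the OLS slope is attenuated toward zero while the TLS slope stays consistent, which is the practical motivation for comparing the two problems.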
Least squares in calibration: dealing with uncertainty in x.
Tellinghuisen, Joel
2010-08-01
The least-squares (LS) analysis of data with error in x and y is generally thought to yield best results when carried out by minimizing the "total variance" (TV), defined as the sum of the properly weighted squared residuals in x and y. Alternative "effective variance" (EV) methods project the uncertainty in x into an effective contribution to that in y and, though easier to employ, are considered to be less reliable. In the case of a linear response function with both sigma_x and sigma_y constant, the EV solutions are identically those from ordinary LS; and Monte Carlo (MC) simulations reveal that they can actually yield smaller root-mean-square errors than the TV method. Furthermore, the biases can be predicted from theory based on inverse regression (x upon y when x is error-free and y is uncertain), which yields a bias factor proportional to the ratio sigma_x^2/sigma_xm^2 of the random-error variance in x to the model variance. The MC simulations confirm that the biases are essentially independent of the error in y, hence correctable. With such bias corrections, the better performance of the EV method in estimating the parameters translates into better performance in estimating the unknown (x_0) from measurements (y_0) of its response. The predictability of the EV parameter biases extends also to heteroscedastic y data as long as sigma_x remains constant, but the estimation of x_0 is not as good in this case. When both x and y are heteroscedastic, there is no known way to predict the biases. However, the MC simulations suggest that for proportional error in x, a geometric x-structure leads to small bias and comparable performance for the EV and TV methods.
Topology testing of phylogenies using least squares methods
Directory of Open Access Journals (Sweden)
Wróbel Borys
2006-12-01
Full Text Available Abstract Background The least squares (LS) method for constructing confidence sets of trees is closely related to LS tree building methods, in which the goodness of fit of the distances measured on the tree (patristic distances) to the observed distances between taxa is the criterion used for selecting the best topology. The generalized LS (GLS) method for topology testing is often frustrated by the computational difficulties in calculating the covariance matrix and its inverse, which in practice requires approximations. The weighted LS (WLS) method allows for a more efficient, albeit approximate, calculation of the test statistic by ignoring the covariances between the distances. Results The goal of this paper is to assess the applicability of the LS approach for constructing confidence sets of trees. We show that the approximations inherent to the WLS method did not negatively affect the accuracy and reliability of the test, both in the analysis of biological sequences and of DNA-DNA hybridization data (for which character-based testing methods cannot be used). On the other hand, we report several problems for the GLS method, at least for the available implementation. For many data sets of biological sequences, the GLS statistic could not be calculated. For some data sets for which it could, the GLS method included all the possible trees in the confidence set despite a strong phylogenetic signal in the data. Finally, contrary to WLS, for simulated sequences GLS showed undercoverage (frequent non-inclusion of the true tree in the confidence set). Conclusion The WLS method provides a computationally efficient approximation to the GLS, useful especially in exploratory analyses of confidence sets of trees, when assessing the phylogenetic signal in the data, and when other methods are not available.
Partial least-squares: Theoretical issues and engineering applications in signal processing
Directory of Open Access Journals (Sweden)
Fredric M. Ham
1996-01-01
Full Text Available In this paper we present partial least-squares (PLS, which is a statistical modeling method used extensively in analytical chemistry for quantitatively analyzing spectroscopic data. Comparisons are made between classical least-squares (CLS and PLS to show how PLS can be used in certain engineering signal processing applications. Moreover, it is shown that in certain situations when there exists a linear relationship between the independent and dependent variables, PLS can yield better predictive performance than CLS when it is not desirable to use all of the empirical data to develop a calibration model used for prediction. Specifically, because PLS is a factor analysis method, optimal selection of the number of PLS factors can result in a calibration model whose predictive performance is considerably better than CLS. That is, factor analysis (rank reduction allows only those features of the data that are associated with information of interest to be retained for development of the calibration model, and the remaining data associated with noise are discarded. It is shown that PLS can yield physical insight into the system from which empirical data has been collected. Also, when there exists a non-linear cause-and-effect relationship between the independent and dependent variables, the PLS calibration model can yield prediction errors that are much less than those for CLS. Three PLS application examples are given and the results are compared to CLS. In one example, a method is presented using PLS for parametric system identification. Using PLS for system identification allows simultaneous estimation of the system dimension and the system parameter vector associated with a minimal realization of the system.
Wave-equation Q tomography and least-squares migration
Dutta, Gaurav
2016-03-01
This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic
Moving least-squares corrections for smoothed particle hydrodynamics
Directory of Open Access Journals (Sweden)
Ciro Del Negro
2011-12-01
Full Text Available First-order moving least-squares are typically used in conjunction with smoothed particle hydrodynamics in the form of post-processing filters for density fields, to smooth out noise that develops in most applications of smoothed particle hydrodynamics. We show how an approach based on higher-order moving least-squares can be used to correct some of the main limitations in gradient and second-order derivative computation in classic smoothed particle hydrodynamics formulations. With a small increase in computational cost, we manage to achieve smooth density distributions without the need for post-processing and with higher accuracy in the computation of the viscous term of the Navier–Stokes equations, thereby reducing the formation of spurious shockwaves or other streaming effects in the evolution of fluid flow. Numerical tests on a classic two-dimensional dam-break problem confirm the improvement of the new approach.
On derivative estimation and the solution of least squares problems
Belward, John A.; Turner, Ian W.; Ilic, Milos
2008-12-01
Surface interpolation finds application in many aspects of science and technology. Two specific areas of interest are surface reconstruction techniques for plant architecture and approximating cell face fluxes in the finite volume discretisation strategy for solving partial differential equations numerically. An important requirement of both applications is accurate local gradient estimation. In surface reconstruction this gradient information is used to increase the accuracy of the local interpolant, while in the finite volume framework accurate gradient information is essential to ensure second order spatial accuracy of the discretisation. In this work two different least squares strategies for approximating these local gradients are investigated and the errors associated with each analysed. It is shown that although the two strategies appear different, they produce the same least squares error. Some carefully chosen case studies are used to elucidate this finding.
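One of the least squares gradient strategies alluded to above can be sketched as a local linear fit over neighbouring points; this is an assumed simple variant for illustration, not necessarily either of the two strategies analysed in the paper.

```python
import numpy as np

def ls_gradient(points, values, center, center_value):
    """Estimate the local gradient at `center` by a least squares fit of
    f(p) ~ f(c) + g . (p - c) over the neighbouring points."""
    d = points - center                      # displacement of each neighbour
    g, *_ = np.linalg.lstsq(d, values - center_value, rcond=None)
    return g

rng = np.random.default_rng(6)
pts = rng.normal(size=(12, 2))               # scattered neighbours in 2-D
f = lambda p: 3.0 * p[..., 0] + 2.0 * p[..., 1] + 1.0   # linear test field
c = np.array([0.5, -0.25])
g = ls_gradient(pts, f(pts), c, f(c))        # recovers the exact gradient (3, 2)
```

For a linear field the fit is exact regardless of the neighbour layout; for general fields the residual of this fit is exactly the least squares error the paper shows to be shared by the two strategies.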
Anisotropy minimization via least squares method for transformation optics.
Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H
2014-07-28
In this work the least squares method is used to reduce anisotropy in the transformation optics technique. To apply the least squares method, a power series is added to the coordinate transformation functions. The series coefficients are calculated to reduce the deviations from the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of transformation optics to design waveguides. To demonstrate the proposed technique, a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero.
Linearized least-square imaging of internally scattered data
Aldawood, Ali
2014-01-01
Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-squares inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-squares inversion of double-scattered data helped delineate that reflector with minimal acquisition fingerprint.
CONDITION NUMBER FOR WEIGHTED LINEAR LEAST SQUARES PROBLEM
Institute of Scientific and Technical Information of China (English)
Yimin Wei; Huaian Diao; Sanzheng Qiao
2007-01-01
In this paper, we investigate the condition numbers for the generalized matrix inversion and the rank-deficient linear least squares problem: min_x ||Ax - b||_2, where A is an m-by-n (m ≥ n) rank-deficient matrix. We first derive an explicit expression for the condition number in the weighted Frobenius norm ||[AT, βb]||_F of the data A and b, where T is a positive diagonal matrix and β is a positive scalar. We then discuss the sensitivity of the standard 2-norm condition numbers for the generalized matrix inversion and rank-deficient least squares, and establish relations between these condition numbers and their own condition numbers, called level-2 condition numbers.
Source allocation by least-squares hydrocarbon fingerprint matching
Energy Technology Data Exchange (ETDEWEB)
William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)
2006-11-01
There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.
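Constrained allocation problems of this kind are often posed as nonnegative least squares: source fractions cannot be negative. The sketch below solves NNLS with plain projected gradient descent on made-up mixing data; it is a generic stand-in for constrained fitting, not the CLS procedure of the study.

```python
import numpy as np

def nnls_pg(A, b, iters=5000):
    """Nonnegative least squares by projected gradient descent:
    min ||Ax - b||^2 subject to x >= 0."""
    L = np.linalg.norm(A, 2) ** 2            # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        # Gradient step on 0.5*||Ax - b||^2, then project onto x >= 0.
        x = np.maximum(0.0, x - (A.T @ (A @ x - b)) / L)
    return x

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 5))                 # "fingerprints" of 5 sources
x_true = np.array([0.7, 0.0, 0.2, 0.1, 0.0]) # nonnegative source mix
b = A @ x_true                               # observed composite signature
x = nnls_pg(A, b)                            # recovered nonnegative mix
```

Active-set solvers (Lawson-Hanson style) reach the same solution in far fewer iterations; projected gradient is used here only because it is a few lines long.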
SUBSPACE SEARCH METHOD FOR A CLASS OF LEAST SQUARES PROBLEM
Institute of Scientific and Technical Information of China (English)
Zi-Luan Wei
2000-01-01
A subspace search method for solving a class of least squares problems is presented in the paper. The original problem is divided into many independent subproblems, and a search direction is obtained by solving each of the subproblems; a new iterate is then determined by choosing a suitable steplength such that the residual norm decreases. A convergence result is given. A numerical test is also shown for a special problem.
Parallel Nonnegative Least Squares Solvers for Model Order Reduction
2016-03-01
not for the PQN method. For the latter method the size of the active set is controlled to promote sparse solutions. This is described in Section 3.2.1. Parallel nonnegative least squares (NNLS) solvers are developed specifically for
Least-Square Prediction for Backward Adaptive Video Coding
2006-01-01
Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in im...
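The least-square prediction idea can be sketched in one dimension: fit, by least squares, coefficients that predict each sample from its immediate predecessors, using only already-decoded data (hence "backward adaptive"). The 2-tap model and cosine test signal below are illustrative assumptions, not the paper's video-coding formulation.

```python
import numpy as np

def lsp_coefficients(signal, order):
    """Backward-adaptive least-square prediction: fit coefficients that
    predict each sample from its `order` predecessors (1-D sketch of LSP)."""
    rows = np.array([signal[i:i + order] for i in range(len(signal) - order)])
    targets = signal[order:]
    a, *_ = np.linalg.lstsq(rows, targets, rcond=None)
    return a

t = np.arange(200)
x = np.cos(0.2 * t)                # satisfies x[n] = 2*cos(0.2)*x[n-1] - x[n-2]
a = lsp_coefficients(x, order=2)   # least squares recovers that recurrence
residual = np.max(np.abs(x[2:] - np.array(
    [x[i:i + 2] for i in range(198)]) @ a))
```

Because encoder and decoder can both refit the coefficients from decoded samples, no predictor side information needs to be transmitted, which is the appeal of LSP over fixed block matching.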
An iterative approach to a constrained least squares problem
Directory of Open Access Journals (Sweden)
Simeon Reich
2003-01-01
In the case where the set of the constraints is the nonempty intersection of a finite collection of closed convex subsets of H, an iterative algorithm is designed. The resulting sequence is shown to converge strongly to the unique solution of the regularized problem. The net of the solutions to the regularized problems strongly converges to the minimum norm solution of the least squares problem if its solution set is nonempty.
Online least-squares policy iteration for reinforcement learning control
2010-01-01
Reinforcement learning is a promising paradigm for learning optimal control. We consider policy iteration (PI) algorithms for reinforcement learning, which iteratively evaluate and improve control policies. State-of-the-art, least-squares techniques for policy evaluation are sample-efficient and have relaxed convergence requirements. However, they are typically used in offline PI, whereas a central goal of reinforcement learning is to develop online algorithms. Therefore, we propose an online...
MODIFIED LEAST SQUARE METHOD ON COMPUTING DIRICHLET PROBLEMS
Institute of Scientific and Technical Information of China (English)
无
2006-01-01
The singularity theory of dynamical systems is linked to the numerical computation of boundary value problems of differential equations. The result is a modified least squares method for the calculation of a variational problem defined on C^k(Ω), in which the basis functions are polynomials and the computation is transferred to computing the coefficients of the basis functions. The theoretical treatment and some simple examples are provided for understanding the modification procedure of the metho...
Least Squares Polynomial Chaos Expansion: A Review of Sampling Strategies
Hadigol, Mohammad; Doostan, Alireza
2017-01-01
As non-intrusive polynomial chaos expansion (PCE) techniques have gained growing popularity among researchers, we here provide a comprehensive review of major sampling strategies for least-squares-based PCE. Traditional sampling methods, such as Monte Carlo, Latin hypercube, quasi-Monte Carlo, optimal design of experiments (ODE) and Gaussian quadratures, as well as more recent techniques, such as coherence-optimal and randomized quadratures, are discussed. We also propose a hybrid sampling ...
Least Squares Based Iterative Algorithm for the Coupled Sylvester Matrix Equations
Directory of Open Access Journals (Sweden)
Hongcai Yin
2014-01-01
By analyzing the eigenvalues of the related matrices, the convergence analysis of the least-squares-based iteration is given for solving the coupled Sylvester equations AX+YB=C and DX+YE=F in this paper. The analysis shows that the optimal convergence factor of this iterative algorithm is 1. In addition, the proposed iterative algorithm can solve the generalized Sylvester equation AXB+CXD=F. The analysis demonstrates that if the matrix equations have a unique solution, then the least-squares-based iterative solution converges to the exact solution for any initial values. A numerical example illustrates the effectiveness of the proposed algorithm.
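The coupled equations above can be cross-checked with a small direct least-squares solve: vectorizing with Kronecker products turns AX+YB=C, DX+YE=F into one linear system. This is a brute-force baseline sketch for small square matrices, not the paper's iterative algorithm:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 3
A, B, D, E = (rng.standard_normal((n, n)) for _ in range(4))
X_true, Y_true = rng.standard_normal((n, n)), rng.standard_normal((n, n))
C = A @ X_true + Y_true @ B
F = D @ X_true + Y_true @ E

# Column-major vec identities: vec(AX) = (I (x) A) vec(X), vec(YB) = (B^T (x) I) vec(Y)
I = np.eye(n)
M = np.block([[np.kron(I, A), np.kron(B.T, I)],
              [np.kron(I, D), np.kron(E.T, I)]])
rhs = np.concatenate([C.flatten(order="F"), F.flatten(order="F")])

z, *_ = np.linalg.lstsq(M, rhs, rcond=None)
X = z[:n * n].reshape(n, n, order="F")
Y = z[n * n:].reshape(n, n, order="F")
print(np.allclose(A @ X + Y @ B, C), np.allclose(D @ X + Y @ E, F))
```

The Kronecker system has size 2n² and is only practical for small n; the appeal of the iterative method in the abstract is precisely avoiding this blow-up.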
AN ASSESSMENT OF THE MESHLESS WEIGHTED LEAST-SQUARE METHOD
Institute of Scientific and Technical Information of China (English)
Pan Xiaofei; Sze Kim Yim; Zhang Xiong
2004-01-01
The meshless weighted least-squares (MWLS) method was developed from the weighted least-squares method. It possesses several advantages, such as high accuracy, high stability and high efficiency. Moreover, the resulting coefficient matrix is symmetric and positive semidefinite. In this paper, the method is examined critically. The effects of several parameters on the results of MWLS are investigated systematically using a cantilever beam and an infinite plate with a central circular hole. The numerical results are compared with those obtained with the collocation-based meshless method (CBMM) and the Galerkin-based meshless method (GBMM). The investigated parameters include the type of approximation, the type of weight function, the number of neighbors of an evaluation point, and the manner in which the neighbors of an evaluation point are determined. This study shows that the displacement accuracy and convergence rate obtained by MWLS are comparable to those of the GBMM, while the stress accuracy and convergence rate yielded by MWLS are even higher than those of GBMM. Furthermore, MWLS is much more efficient than GBMM. This study also shows that the instability of CBMM is mainly due to the neglect of the equilibrium residuals at boundary nodes. In MWLS, the residuals of all the governing equations are minimized in a weighted least-squares sense.
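The weighted least-squares idea underlying MWLS, minimizing residuals with per-point weights, can be sketched in a generic linear-algebra form. This is the generic WLS solve, not the meshless discretization itself:

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((20, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
w = rng.uniform(0.1, 2.0, size=20)        # per-residual weights

# Weighted least squares: minimize sum_i w_i * (a_i . x - b_i)^2
# via the weighted normal equations (A' W A) x = A' W b
W = np.diag(w)
x_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

# Equivalent "whitened" form: scale each row by sqrt(w_i), then ordinary lstsq
x_whiten, *_ = np.linalg.lstsq(np.sqrt(w)[:, None] * A, np.sqrt(w) * b, rcond=None)
print(np.allclose(x_wls, x_whiten))
```

In MWLS the rows of A are residuals of the governing equations at nodes and the weights come from the meshless weight functions; the algebra is the same.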
Multilevel first-order system least squares for PDEs
Energy Technology Data Exchange (ETDEWEB)
McCormick, S.
1994-12-31
The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as upwinding, Petrov-Galerkin, and streamline-diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower-order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.
Multi-source least-squares reverse time migration
Dai, Wei
2012-06-15
Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended-source processing technique to increase computational efficiency. By iterative migration of supergathers, each of which consists of a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts than conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM at a similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.
Solving linear inequalities in a least squares sense
Energy Technology Data Exchange (ETDEWEB)
Bramley, R.; Winnicka, B. [Indiana Univ., Bloomington, IN (United States)
1994-12-31
Let A ∈ ℝ^(m×n) be an arbitrary real matrix, and let b ∈ ℝ^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ‖Ax − b‖, where ‖·‖ refers to the vector two-norm. Such an x* solves the normal equations Aᵀ(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find an x* minimizing ‖(Ax − b)₊‖, where the i-th component of the vector v₊ is the maximum of zero and the i-th component of v.
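A minimal sketch of the inequality problem: minimizing ‖(Ax − b)₊‖ by gradient descent, using the fact that the gradient of ½‖(Ax − b)₊‖² is Aᵀ(Ax − b)₊. The solver choice (plain gradient descent with a fixed step) is an illustrative assumption, not the authors' method:

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 8, 3
A = rng.standard_normal((m, n))
x_feas = rng.standard_normal(n)
b = A @ x_feas + rng.uniform(0.1, 1.0, m)   # feasible by construction: A @ x_feas <= b

# Minimize 0.5 * ||(Ax - b)_+||^2; the gradient is A' (Ax - b)_+
x = np.zeros(n)
mu = 1.0 / np.linalg.norm(A, 2) ** 2        # step below 1/L, with L = ||A||_2^2
for _ in range(50000):
    viol = np.maximum(A @ x - b, 0.0)       # (Ax - b)_+ : only violated rows contribute
    x -= mu * (A.T @ viol)

print(np.max(A @ x - b))   # near or below zero: inequalities (approximately) satisfied
```

Since the system here is feasible, the minimum of the objective is zero and the iterates drive every violation toward zero; for infeasible systems the iteration instead settles at the least-squares compromise described in the abstract.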
Analyzing industrial energy use through ordinary least squares regression models
Golden, Allyson Katherine
Extensive research has been performed using regression analysis and calibrated simulations to create baseline energy consumption models for residential buildings and commercial institutions. However, few attempts have been made to discuss the applicability of these methodologies to establish baseline energy consumption models for industrial manufacturing facilities. In the few studies of industrial facilities, the presented linear change-point and degree-day regression analyses illustrate ideal cases. It follows that there is a need in the established literature to discuss the methodologies and to determine their applicability for establishing baseline energy consumption models of industrial manufacturing facilities. The thesis determines the effectiveness of simple inverse linear statistical regression models when establishing baseline energy consumption models for industrial manufacturing facilities. Ordinary least squares change-point and degree-day regression methods are used to create baseline energy consumption models for nine different case studies of industrial manufacturing facilities located in the southeastern United States. The influence of ambient dry-bulb temperature and production on total facility energy consumption is observed. The energy consumption behavior of industrial manufacturing facilities is only sometimes sufficiently explained by temperature, production, or a combination of the two variables. This thesis also provides methods for generating baseline energy models that are straightforward and accessible to anyone in the industrial manufacturing community. The methods outlined in this thesis may be easily replicated by anyone who possesses basic spreadsheet software and general knowledge of the relationship between energy consumption and weather, production, or other influential variables. With the help of simple inverse linear regression models, industrial manufacturing facilities may better understand their energy consumption and...
Estimation of the Seemingly Unrelated Regression (SUR) Model with the Generalized Least Squares (GLS) Method
Directory of Open Access Journals (Sweden)
Ade Widyaningsih
2014-06-01
Regression analysis is a statistical tool used to determine the relationship between two or more quantitative variables so that one variable can be predicted from the others. A method that can be used to obtain good estimates in regression analysis is the ordinary least squares (OLS) method. OLS, however, estimates each regression equation separately and does not account for correlation among the errors of different equations. One way to overcome this problem is the Seemingly Unrelated Regression (SUR) model, in which the parameters are estimated using Generalized Least Squares (GLS). In this study, the author applies the SUR model with GLS to world gasoline demand data, and finds that SUR estimated by GLS outperforms OLS because it produces smaller errors.
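The GLS estimator has the closed form β̂ = (XᵀΩ⁻¹X)⁻¹XᵀΩ⁻¹y, which is equivalent to OLS after whitening with a Cholesky factor of Ω. A small numerical sketch of generic GLS (not the paper's SUR system layout, and with made-up data):

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 30, 2
X = np.column_stack([np.ones(n), rng.standard_normal((n, k))])
beta_true = np.array([2.0, 1.0, -0.5])

# A non-diagonal error covariance: errors are correlated across observations
Q = rng.standard_normal((n, n))
Omega = Q @ Q.T + n * np.eye(n)
y = X @ beta_true + np.linalg.cholesky(Omega) @ rng.standard_normal(n) * 0.1

# GLS estimator: beta = (X' Omega^-1 X)^-1 X' Omega^-1 y
Oi = np.linalg.inv(Omega)
beta_gls = np.linalg.solve(X.T @ Oi @ X, X.T @ Oi @ y)

# Equivalent: whiten with the Cholesky factor L (Omega = L L') and run OLS
L = np.linalg.cholesky(Omega)
Xw = np.linalg.solve(L, X)
yw = np.linalg.solve(L, y)
beta_ols_w, *_ = np.linalg.lstsq(Xw, yw, rcond=None)

print(np.allclose(beta_gls, beta_ols_w))
```

In SUR, Ω is the estimated cross-equation error covariance stacked over all equations; the GLS algebra is identical.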
Institute of Scientific and Technical Information of China (English)
任洪娥; 沈雯雯; 白杰云; 官俊
2015-01-01
[Objective] This paper established a mathematical model relating the particle size of larch wood flour to its aspect ratio, obtained the optimum particle size corresponding to the maximum aspect ratio, revealed the trend of the aspect ratio as the particle size decreases, and explained the reasons for this trend by analyzing the mathematical model and its second-order derivative. [Method] We took microscopic images of the wood flour with an optical microscope, and obtained the average length and average width of mature tracheids and the particle size of the target wood flour by measurement and calculation. Using digital image processing, we extracted the length, width and rectangularity of each wood flour particle: we converted the color space of the original microscopic image from RGB to Lab and extracted the b component; the b component was then filtered with a 3 × 3 median filter. To obtain a binary image of the wood flour, we clustered the denoised image into two categories with the K-means algorithm. The binary images were processed by morphological closing followed by opening with 5 × 5 structuring elements. Each wood flour particle was then labeled by eight-connected region labeling. After that, we calculated the geometric area of each particle by counting its target pixels, and calculated the length, width and area of its minimum bounding rectangle by the principal-axis method on the labeled image, yielding the aspect ratio and rectangularity of each particle. With these data, we created a fitting curve between particle size and aspect ratio with the least squares method, selected the Gaussian function as the mathematical model after evaluating the polynomial, Fourier and Gaussian fitting functions, and calculated the second derivative of the fitted curve. Finally we analyze and discuss...
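A Gaussian least-squares fit of the kind selected above can be sketched in a simple form: since the logarithm of a Gaussian is a quadratic, a degree-2 polynomial least-squares fit on log-values recovers the parameters. The data here are synthetic; the size range and parameter values are made up for illustration:

```python
import numpy as np

# Fit a Gaussian a * exp(-(x - mu)^2 / (2 s^2)) by least squares on log-values:
# log y = c2 x^2 + c1 x + c0 is quadratic in x, so polyfit recovers (a, mu, s).
x = np.linspace(20, 80, 60)                       # hypothetical particle sizes
a_true, mu_true, s_true = 3.0, 45.0, 8.0
y = a_true * np.exp(-(x - mu_true) ** 2 / (2 * s_true ** 2))

c2, c1, c0 = np.polyfit(x, np.log(y), 2)          # least-squares quadratic fit
s = np.sqrt(-1.0 / (2.0 * c2))                    # from c2 = -1/(2 s^2)
mu = -c1 / (2.0 * c2)                             # from c1 = mu / s^2
a = np.exp(c0 - c2 * mu ** 2)                     # from c0 = log a + c2 mu^2
print(round(a, 3), round(mu, 3), round(s, 3))     # → 3.0 45.0 8.0
```

With noisy measurements a direct nonlinear least-squares fit is more robust than the log-linearization, since taking logs inflates the weight of small y values.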
Error Estimate and Adaptive Refinement in Mixed Discrete Least Squares Meshless Method
Directory of Open Access Journals (Sweden)
J. Amani
2014-01-01
The node-moving and multistage node enrichment adaptive refinement procedures are extended to the mixed discrete least squares meshless (MDLSM) method for efficient analysis of elasticity problems. In the MDLSM formulation, a mixed formulation is adopted to avoid second-order differentiation of the shape functions and to obtain displacements and stresses simultaneously. In the refinement procedures, a robust error estimator based on the value of the least-squares residual functional of the governing differential equations and their boundary conditions at nodal points is used; it is inherently available from the MDLSM formulation and can efficiently identify the zones with higher numerical errors. The results are compared with the refinement procedures in the irreducible formulation of the discrete least squares meshless (DLSM) method and show the accuracy and efficiency of the proposed procedures. The comparison of the error norms and convergence rates also shows the fidelity of the proposed adaptive refinement procedures in the MDLSM method.
Chen, S; Wu, Y; Luk, B L
1999-01-01
The paper presents a two-level learning method for radial basis function (RBF) networks. A regularized orthogonal least squares (ROLS) algorithm is employed at the lower level to construct RBF networks while the two key learning parameters, the regularization parameter and the RBF width, are optimized using a genetic algorithm (GA) at the upper level. Nonlinear time series modeling and prediction is used as an example to demonstrate the effectiveness of this hierarchical learning approach.
Hierarchical Least Squares Identification and Its Convergence for Large Scale Multivariable Systems
Institute of Scientific and Technical Information of China (English)
丁锋; 丁韬
2002-01-01
The recursive least squares (RLS) identification algorithm for large-scale multivariable systems requires a large amount of computation and is therefore difficult to implement on a computer. The computational load of estimation algorithms can be reduced using the hierarchical least squares identification algorithm (HLS) for large-scale multivariable systems. The convergence analysis, based on the Martingale Convergence Theorem, indicates that the parameter estimation error (PEE) given by the HLS algorithm is uniformly bounded without a persistent excitation signal, and that the PEE consistently converges to zero under persistent excitation. The HLS algorithm has a much lower computational load than the RLS algorithm.
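For contrast with the hierarchical variant, the standard RLS recursion mentioned above can be sketched in a few lines (scalar-output case, forgetting factor 1, noiseless data for illustration):

```python
import numpy as np

rng = np.random.default_rng(5)
theta_true = np.array([0.8, -0.4, 1.5])

# Recursive least squares: update the estimate one regressor/observation at a time
theta = np.zeros(3)
P = 1e6 * np.eye(3)                          # large initial covariance (weak prior)
for _ in range(200):
    phi = rng.standard_normal(3)             # regressor vector at this step
    yk = phi @ theta_true                    # observed output (noiseless here)
    K = P @ phi / (1.0 + phi @ P @ phi)      # gain vector
    theta = theta + K * (yk - phi @ theta)   # correct by the innovation
    P = P - np.outer(K, phi) @ P             # covariance update
print(np.round(theta, 6))
```

The per-step cost grows with the square of the parameter dimension through the P update, which is exactly the burden the hierarchical decomposition in the abstract aims to reduce.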
Energy Technology Data Exchange (ETDEWEB)
Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov
2016-02-25
Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results with areas under the precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR), based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, and other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning and polypharmacology, and may help accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can be further improved by using the recalculated kernel. • Top predictions can be validated by experimental data.
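The regularized least squares core of such methods is kernel ridge regression: α = (K + λI)⁻¹y. A generic sketch follows, using a Gaussian kernel on a toy 1-D problem; the paper's kernel-fusion step and DTI data are not reproduced, and the kernel width and λ are illustrative assumptions:

```python
import numpy as np

X = np.linspace(-3, 3, 40)[:, None]
y = np.sin(X).ravel()

def gauss_kernel(A, B, gamma=0.5):
    """Gaussian (RBF) kernel matrix between row-sets A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

# Regularized least squares in the kernel form: alpha = (K + lam I)^-1 y
lam = 1e-4
K = gauss_kernel(X, X)
alpha = np.linalg.solve(K + lam * np.eye(len(X)), y)

y_hat = K @ alpha                       # in-sample predictions
print(np.max(np.abs(y_hat - y)))        # small training error
```

In the DTI setting, K would be a fused combination of drug-drug and target-target similarity kernels rather than a single Gaussian kernel.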
Energy Technology Data Exchange (ETDEWEB)
Machado, A.E. de A, E-mail: aeam@rpd.ufmg.br [Laboratorio de Quimica Computacional e Modelagem Molecular (LQC-MM), Departamento de Quimica, ICEx, Universidade Federal de Minas Gerais (UFMG), Campus Universitario, Pampulha, Belo Horizonte, MG 31270-90 (Brazil); Departamento de Quimica Fundamental, Universidade Federal de Pernambuco, Recife, PE 50740-540 (Brazil); Gama, A.A. de S da; Barros Neto, B. de [Departamento de Quimica Fundamental, Universidade Federal de Pernambuco, Recife, PE 50740-540 (Brazil)
2011-09-22
Graphical abstract: PLS regression equations predict static β values for a large set of donor-acceptor organic molecules quite well, in close agreement with the available experimental data. Highlights: • PLS regression predicts static β values of 35 push-pull organic molecules. • PLS equations show the correlation of β with structural-electronic parameters. • PLS regression selects the best components of push-bridge-pull nonlinear compounds. • PLS analyses can be routinely used to select novel second-order materials. - Abstract: A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the HOMO-LUMO energy gap, the ground-state dipole moment, the AM1 HOMO energy values and the number of π-electrons. The regression equation predicts the static β values of the molecules investigated quite well and can be used to model new organic-based materials with enhanced nonlinear responses.
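A one-response PLS regression like the one described can be sketched with a NIPALS-style loop: extract score vectors with maximal covariance with y, deflate, and regress y on the scores. Synthetic data stand in for the molecular descriptors; with a full set of components the fit reproduces the OLS fit on the training data:

```python
import numpy as np

rng = np.random.default_rng(7)
n, p, ncomp = 50, 4, 4
X = rng.standard_normal((n, p))            # stand-ins for descriptors (gap, dipole, ...)
beta = np.array([1.0, -2.0, 0.0, 0.5])
y = X @ beta                               # noiseless response for illustration

Xr, yr = X.copy(), y.copy()
T = np.zeros((n, ncomp))
for k in range(ncomp):
    w = Xr.T @ yr
    w /= np.linalg.norm(w)                 # weight vector: direction of max covariance
    t = Xr @ w                             # score vector
    p_load = Xr.T @ t / (t @ t)            # X-loading
    Xr = Xr - np.outer(t, p_load)          # deflate X
    yr = yr - t * (yr @ t) / (t @ t)       # deflate y
    T[:, k] = t

# Regress the response on the extracted (mutually orthogonal) scores
c, *_ = np.linalg.lstsq(T, y, rcond=None)
y_hat = T @ c
print(np.max(np.abs(y_hat - y)))           # ~0 with a full set of components
```

In practice one keeps fewer components than regressors, chosen by cross-validation, which is what gives PLS its robustness to collinear descriptors.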
A Comparison of Mean Phase Difference and Generalized Least Squares for Analyzing Single-Case Data
Manolov, Rumen; Solanas, Antonio
2013-01-01
The present study focuses on single-case data analysis, specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least squares regression analysis and is compared to a proposed non-regression technique that provides similar information. The…
distance from the load to the tap points of the line. A general least squares method is developed to determine the parameters of a standing wave with...the general least squares analysis are shown to reduce to those of the least squares analysis with no attenuation and the three-pin analysis with attenuation. (Author)
Image denoising using least squares wavelet support vector machines
Institute of Scientific and Technical Information of China (English)
Guoping Zeng; Ruizhen Zhao
2007-01-01
We propose a new method for image denoising combining the wavelet transform and support vector machines (SVMs). A new image filter operator based on least squares wavelet support vector machines (LSWSVMs) is presented. Noisy images can be denoised through this filter operator together with a wavelet thresholding technique. Experimental results show that the proposed method is better than the existing SVM regression with the Gaussian radial basis function (RBF) and polynomial RBF. Meanwhile, it achieves better performance than traditional methods such as the average filter and the median filter.
Spectral feature matching based on partial least squares
Institute of Scientific and Technical Information of China (English)
Weidong Yan; Zheng Tian; Lulu Pan; Mingtao Ding
2009-01-01
We investigate spectral approaches to the problem of point pattern matching, and present a spectral feature descriptor based on partial least squares (PLS). Given the keypoints of two images, we define a position similarity matrix for each image and extract spectral features from the matrices by PLS; these features indicate the geometric distribution and inner relationships of the keypoints. Keypoint matching is then done by bipartite graph matching. Experiments on both synthetic and real-world data corroborate the robustness and invariance of the algorithm.
Positive Scattering Cross Sections using Constrained Least Squares
Energy Technology Data Exchange (ETDEWEB)
Dahl, J.A.; Ganapol, B.D.; Morel, J.E.
1999-09-27
A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant and that the standard discrete ordinates scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.
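Equality-constrained least squares of this kind (adjust coefficients while pinning selected moments) can be solved exactly through the KKT system of the Lagrangian. A generic sketch with random data; the positivity constraint on the scattering matrix is an inequality and would need e.g. an active-set method, so it is not shown:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((10, 4)); b = rng.standard_normal(10)
C = rng.standard_normal((2, 4));  d = rng.standard_normal(2)   # equality constraints Cx = d

# minimize ||Ax - b||^2 subject to Cx = d, via the KKT system:
#   [ 2A'A  C' ] [x  ]   [ 2A'b ]
#   [ C     0  ] [lam] = [ d    ]
n, m = A.shape[1], C.shape[0]
KKT = np.block([[2 * A.T @ A, C.T],
                [C, np.zeros((m, m))]])
rhs = np.concatenate([2 * A.T @ b, d])
sol = np.linalg.solve(KKT, rhs)
x, lam = sol[:n], sol[n:]

print(np.allclose(C @ x, d))    # the pinned constraints hold exactly
```

In the cross-section application, the rows of C would encode "zeroth and first moments unchanged" and x the adjustable higher-order moments.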
Handbook of Partial Least Squares Concepts, Methods and Applications
Vinzi, Vincenzo Esposito; Henseler, Jörg
2010-01-01
This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.
Directory of Open Access Journals (Sweden)
Jiao Long
2016-01-01
The application of interval partial least squares (IPLS) and moving window partial least squares (MWPLS) to the enantiomeric analysis of tryptophan (Trp) was investigated. A UV-Vis spectroscopy method for determining the enantiomeric composition of Trp was developed. Calibration models were built using partial least squares (PLS), IPLS and MWPLS, respectively. Leave-one-out cross-validation and external test validation were used to assess the prediction performance of the established models. The validation results demonstrate that the established full-spectrum PLS model is impractical for quantifying the relationship between the spectral data and the enantiomeric composition of L-Trp. In contrast, the developed IPLS and MWPLS models are both practicable for modeling this relationship. For the IPLS model, the root mean square relative errors (RMSRE) of external test validation and leave-one-out cross-validation are 4.03 and 6.50, respectively. For the MWPLS model, the corresponding values are 2.93 and 4.73. Clearly, the prediction accuracy of the MWPLS model is higher than that of the IPLS model. UV-Vis spectroscopy combined with MWPLS is thus demonstrated to be a commendable method for determining the enantiomeric composition of Trp, and MWPLS is superior to IPLS for selecting spectral regions in UV-Vis spectroscopic analysis.
On the stability and accuracy of least squares approximations
Cohen, Albert; Leviatan, Dany
2011-01-01
We consider the problem of reconstructing an unknown function $f$ on a domain $X$ from samples of $f$ at $n$ randomly chosen points with respect to a given measure $\\rho_X$. Given a sequence of linear spaces $(V_m)_{m>0}$ with ${\\rm dim}(V_m)=m\\leq n$, we study the least squares approximations from the spaces $V_m$. It is well known that such approximations can be inaccurate when $m$ is too close to $n$, even when the samples are noiseless. Our main result provides a criterion on $m$ that describes the needed amount of regularization to ensure that the least squares method is stable and that its accuracy, measured in $L^2(X,\\rho_X)$, is comparable to the best approximation error of $f$ by elements from $V_m$. We illustrate this criterion for various approximation schemes, such as trigonometric polynomials, with $\\rho_X$ being the uniform measure, and algebraic polynomials, with $\\rho_X$ being either the uniform or Chebyshev measure. For such examples we also prove similar stability results using deterministic...
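The paper's stability regime, with dim(V_m) = m well below the sample count n, is easy to observe numerically: a low-degree polynomial least-squares fit from many random samples is well conditioned and nearly as accurate as the best approximation. A small sketch with cos on [-1, 1] under the uniform measure:

```python
import numpy as np

rng = np.random.default_rng(9)
n, deg = 200, 5                       # n samples, m = deg + 1 = 6 coefficients << n
x = rng.uniform(-1.0, 1.0, n)
y = np.cos(x)

# Least squares polynomial fit from the random samples
coef = np.polyfit(x, y, deg)

# Measure the error on a dense grid, approximating the L2/sup error over [-1, 1]
grid = np.linspace(-1.0, 1.0, 1000)
err = np.max(np.abs(np.polyval(coef, grid) - np.cos(grid)))
print(err)   # small: on the order of the best degree-5 approximation error of cos
```

Raising deg toward n (so m ≈ n) makes the same computation ill conditioned and the fit inaccurate between samples, which is the instability the criterion in the abstract rules out.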
Orthogonal least squares learning algorithm for radial basis function networks
Energy Technology Data Exchange (ETDEWEB)
Chen, S.; Cowan, C.F.N.; Grant, P.M. (Dept. of Electrical Engineering, Univ. of Edinburgh, Mayfield Road, Edinburgh EH9 3JL, Scotland (GB))
1991-03-01
The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular value decomposition to solve for the weights of the network. Such a procedure has several drawbacks and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The paper proposes an alternative learning procedure based on the orthogonal least squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. The algorithm has the property that each selected center maximizes the increment to the explained variance or energy of the desired output, and it does not suffer from numerical ill-conditioning problems. The orthogonal least squares learning strategy provides a simple and efficient means for fitting radial basis function networks, and this is illustrated using examples taken from two different signal processing applications.
Orthogonal least squares learning algorithm for radial basis function networks.
Chen, S; Cowan, C N; Grant, P M
1991-01-01
The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
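The center-selection idea can be sketched with a greedy forward-selection loop: at each step, add the candidate center that most reduces the residual norm. This is a simplified stand-in for OLS learning, which achieves the same selection criterion efficiently via an orthogonal decomposition rather than refitting from scratch; the kernel width and center count are illustrative assumptions:

```python
import numpy as np

x = np.linspace(-3.0, 3.0, 80)
y = np.sin(x)

def design(centers, width=1.0):
    """Gaussian RBF design matrix for the chosen centers."""
    return np.exp(-(x[:, None] - np.asarray(centers)[None, :]) ** 2 / width ** 2)

# Greedy forward selection of RBF centers from the data points themselves
centers = []
for _ in range(8):
    best_c, best_res = None, np.inf
    for c in x:                                   # try every candidate center
        Phi = design(centers + [c])
        w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        res = np.linalg.norm(y - Phi @ w)
        if res < best_res:
            best_c, best_res = c, res
    centers.append(best_c)                        # keep the best one

Phi = design(centers)
w, *_ = np.linalg.lstsq(Phi, y, rcond=None)
rmse = np.sqrt(np.mean((y - Phi @ w) ** 2))
print(len(centers), rmse)
```

Each added center maximizes the reduction in residual energy, mirroring the "increment to the explained variance" criterion; true OLS orthogonalizes the regressors so the inner refit costs O(1) solves instead of a full lstsq per candidate.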
Making the most out of the least (squares migration)
Dutta, Gaurav
2014-08-05
Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution. If the migration velocity model is sufficiently accurate, then LSM can mitigate many of the above problems and lead to a more resolved migration image, sometimes with twice the spatial resolution. However, there are two problems with LSM: the cost can be an order of magnitude more than standard migration and the quality of the LSM image is no better than the standard image for velocity errors of 5% or more. We now show how to get the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.
Plane-wave least-squares reverse-time migration
Dai, Wei
2013-06-03
A plane-wave least-squares reverse-time migration (LSRTM) is formulated with a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of plane-wave prestack LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer stable convergence even when the migration velocity has bulk errors of up to 5%; (2) to significantly reduce computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. Unlike phase-shift encoding with random time shifts applied to each shot gather, plane-wave encoding can be effectively applied to data with a marine streamer geometry. (3) Plane-wave prestack LSRTM can provide higher-quality images than standard reverse-time migration. Numerical tests on the Marmousi2 model and a marine field data set are performed to illustrate the benefits of plane-wave LSRTM. Empirical results show that LSRTM in the plane-wave domain, compared to standard reverse-time migration, produces images efficiently with fewer artifacts and better spatial resolution. Moreover, the prestack image ensemble accommodates more unknowns, which makes it more robust than conventional least-squares migration in the presence of migration velocity errors. © 2013 Society of Exploration Geophysicists.
Making the most out of least-squares migration
Huang, Yunsong
2014-09-01
Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution. Given a sufficiently accurate migration velocity model, LSM can mitigate many of the above problems and can produce more resolved migration images, sometimes with more than twice the spatial resolution of standard migration. However, LSM faces two challenges: The computational cost can be an order of magnitude higher than that of standard migration, and the resulting image quality can fail to improve for migration velocity errors of about 5% or more. It is possible to obtain the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.
Least squares weighted twin support vector machines with local information
Institute of Scientific and Technical Information of China (English)
花小朋; 徐森; 李先锋
2015-01-01
A least squares version of the recently proposed weighted twin support vector machine with local information (WLTSVM) for binary classification is formulated. This formulation leads to an extremely simple and fast algorithm, called the least squares weighted twin support vector machine with local information (LSWLTSVM), for generating binary classifiers based on two non-parallel hyperplanes. Two modified primal problems of WLTSVM are solved, instead of the two dual problems usually solved. Solving the two modified problems reduces to solving just two systems of linear equations, as opposed to two quadratic programming problems along with two systems of linear equations in WLTSVM. Moreover, two extra modifications are proposed in LSWLTSVM to improve the generalization capability. One is that a heat kernel function, rather than the simple-minded definition in WLTSVM, is used to define the weight matrix of the adjacency graph, which ensures that the underlying similarity information between any pair of data points in the same class is fully reflected. The other is that the weight of each point in the contrary class is considered in constructing the equality constraints, which makes LSWLTSVM less sensitive to noise points than WLTSVM. Experimental results indicate that LSWLTSVM has comparable classification accuracy to WLTSVM but with remarkably less computational time.
Point pattern matching based on kernel partial least squares
Institute of Scientific and Technical Information of China (English)
Weidong Yan; Zheng Tian; Lulu Pan; Jinhuan Wen
2011-01-01
Point pattern matching is an essential step in many image processing applications. This letter investigates spectral approaches to point pattern matching and presents a spectral feature matching algorithm based on kernel partial least squares (KPLS). Given the feature points of two images, we define position similarity matrices for the reference and sensed images and extract pattern vectors from the matrices using KPLS, which indicate the geometric distribution and the inner relationships of the feature points. Feature point matching is then performed using the bipartite graph matching method. Experiments conducted on both synthetic and real-world data demonstrate the robustness and invariance of the algorithm.
Rocconi, Louis M.
2013-01-01
This study examined the differing conclusions one may come to depending upon the type of analysis chosen, hierarchical linear modeling or ordinary least squares (OLS) regression. To illustrate this point, this study examined the influences of seniors' self-reported critical thinking abilities in three ways: (1) an OLS regression with the student…
A Coupled Finite Difference and Moving Least Squares Simulation of Violent Breaking Wave Impact
DEFF Research Database (Denmark)
Lindberg, Ole; Bingham, Harry B.; Engsig-Karup, Allan Peter
2012-01-01
Two models for simulation of free surface flow are presented. The first model is a finite difference based potential flow model with non-linear kinematic and dynamic free surface boundary conditions. The second model is a weighted least squares based incompressible and inviscid flow model. A special...... feature of this model is a generalized finite point set method which is applied to the solution of the Poisson equation on an unstructured point distribution. The presented finite point set method is generalized to arbitrary order of approximation. The two models are applied to simulation of steep...... and overturning wave impacts on a vertical breakwater. Wave groups with five different wave heights are propagated from offshore to the vicinity of the breakwater, where the waves are steep, but still smooth and non-overturning. These waves are used as the initial condition for the weighted least squares based...
Least-squares reverse time migration of multiples
Zhang, Dongliang
2013-12-06
The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free-surface becomes an extended virtual source where the downgoing free-surface multiples more fully illuminate the subsurface compared to the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required and the ringy time series for each source is automatically deconvolved. If the multiples can be perfectly separated from the primaries, numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if there is imperfect separation and the multiples are strongly interfering with the primaries then LSRTMM images show no significant advantage over the primary migration images. In some cases, they can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows higher signal-to-noise imaging of the salt bottom and top compared to standard RTM images. This is likely attributed to the fact that the target body is just below the sea bed so that the deep water multiples do not have strong interference with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean bottom seismic data shows that LSM of primaries and LSRTMM provides significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. This can lead to lower
Least-Squares Seismic Inversion with Stochastic Conjugate Gradient Method
Institute of Scientific and Technical Information of China (English)
Wei Huang; Hua-Wei Zhou
2015-01-01
With the development of computational power, there has been an increased focus on data-fitting seismic inversion techniques for high-fidelity seismic velocity models and images, such as full-waveform inversion and least squares migration. However, though more advanced than conventional methods, these data-fitting methods can be very expensive in terms of computational cost. Recently, various techniques to optimize these data-fitting seismic inversion problems have been implemented to cater to the industrial need for much improved efficiency. In this study, we propose a general stochastic conjugate gradient method for these data-fitting inverse problems. We first describe the basic theory of our method and then give synthetic examples. Our numerical experiments illustrate the potential of this method for large-scale seismic inversion applications.
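A generic version of the idea, computing each descent step of a least-squares inversion from a random subset of the data, can be sketched as follows (a simplified stochastic descent with per-subset line search, not the authors' exact algorithm):

```python
import numpy as np

rng = np.random.default_rng(1)

# Overdetermined least-squares problem ||Ax - b||^2 standing in for a
# data-fitting inversion; rows of A play the role of individual shots.
A = rng.normal(size=(200, 20))
x_true = rng.normal(size=20)
b = A @ x_true                               # consistent (noise-free) data

def stochastic_descent(A, b, batch=40, n_iter=300):
    m, n = A.shape
    x = np.zeros(n)
    for _ in range(n_iter):
        idx = rng.choice(m, size=batch, replace=False)   # random shot subset
        Ab, bb = A[idx], b[idx]
        g = Ab.T @ (Ab @ x - bb)             # gradient from the subset only
        gg = g @ g
        if gg < 1e-28:
            break
        Ag = Ab @ g
        x -= gg / (Ag @ Ag) * g              # exact line search on the subset
    return x

x_est = stochastic_descent(A, b)
rel_err = np.linalg.norm(x_est - x_true) / np.linalg.norm(x_true)
```

Each iteration touches only a fraction of the data, which is the source of the efficiency gain; for a consistent system the iterates still converge to the full least-squares solution.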
Local validation of EU-DEM using Least Squares Collocation
Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios
2016-04-01
In the present study we evaluate the European Digital Elevation Model (EU-DEM) in a limited area covering a few kilometers. We compare EU-DEM derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for predicting orthometric heights, applying Least Squares Collocation to the residuals remaining after the surface fit. Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and official heights that is better than 1.4 meters.
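The two-step procedure (trend-surface fit, then Least Squares Collocation on the residuals) can be sketched on synthetic data; the covariance model and all numbers below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(2)

# Synthetic DEM-minus-leveling height differences at 60 control points.
pts = rng.uniform(0, 5000.0, size=(60, 2))              # east/north in metres
trend_true = 0.8 + 1e-4 * pts[:, 0] - 5e-5 * pts[:, 1]  # systematic bias
signal = 0.3 * np.sin(pts[:, 0] / 800.0)                # correlated residual field
diff = trend_true + signal + rng.normal(0, 0.02, 60)

# Step 1: fit and remove a planar trend surface a + b*E + c*N.
G = np.column_stack([np.ones(60), pts])
coef, *_ = np.linalg.lstsq(G, diff, rcond=None)
resid = diff - G @ coef

# Step 2: Least Squares Collocation on the residuals with an assumed
# Gaussian covariance model (variance and correlation length are guesses).
def cov(d, c0=0.09, corr_len=800.0):
    return c0 * np.exp(-(d / corr_len) ** 2)

d_nn = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
C_nn = cov(d_nn) + 0.02 ** 2 * np.eye(60)               # noise on the diagonal

new = np.array([[2500.0, 2500.0]])                      # prediction point
d_pn = np.linalg.norm(new[:, None] - pts[None, :], axis=-1)
pred = cov(d_pn) @ np.linalg.solve(C_nn, resid) \
     + np.concatenate([[1.0], new[0]]) @ coef           # trend + collocated signal
```

The collocation step interpolates the spatially correlated part of the residuals that the trend surface cannot capture, which is what tightens the agreement at the cross-validation points.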
ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD
Institute of Scientific and Technical Information of China (English)
SONG Kaichen; NIE Xili
2006-01-01
Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm, in which the relationship between weight coefficients and measurement noise is established, is proposed with attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm is presented that adjusts the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements themselves. Simulation and experiment show that the precision of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
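For the uncorrelated-noise case, the simplified weighted fusion reduces to inverse-variance weighting. A minimal sketch (the sensor readings and variances are made up for illustration):

```python
import numpy as np

# Weighted least-squares fusion of sensors measuring the same scalar.
# With uncorrelated noise, the optimal weights are the normalized inverse
# noise variances.
z = np.array([10.2, 9.8, 10.5])         # sensor readings
var = np.array([0.04, 0.01, 0.25])      # known noise variances

w = (1.0 / var) / np.sum(1.0 / var)     # inverse-variance weights, sum to 1
fused = np.sum(w * z)                   # fused estimate
fused_var = 1.0 / np.sum(1.0 / var)     # always <= the best single sensor
```

The fused variance is strictly smaller than the smallest individual sensor variance, which is why fusion improves the precision of the multi-sensor system.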
Least-squares based iterative multipath super-resolution technique
Nam, Wooseok
2011-01-01
In this paper, we study the problem of multipath channel estimation for direct sequence spread spectrum signals. To resolve multipath components arriving within a short interval, we propose a new algorithm called the least-squares based iterative multipath super-resolution (LIMS). Compared to conventional super-resolution techniques, such as multiple signal classification (MUSIC) and estimation of signal parameters via rotational invariance techniques (ESPRIT), our algorithm has several appealing features. In particular, even in critical situations where the conventional super-resolution techniques are not very powerful due to limited data or correlation between path coefficients, the LIMS algorithm can produce successful results. In addition, due to its iterative nature, the LIMS algorithm is suitable for recursive multipath tracking, whereas the conventional super-resolution techniques may not be. Through numerical simulations, we show that the LIMS algorithm can resolve the first arrival path amo...
Partial Least Squares Structural Equation Modeling with R
Directory of Open Access Journals (Sweden)
Hamdollah Ravand
2016-09-01
Full Text Available Structural equation modeling (SEM) has become widespread in educational and psychological research. Its flexibility in addressing complex theoretical models and its proper treatment of measurement error have made it the model of choice for many researchers in the social sciences. Nevertheless, the model imposes some daunting assumptions and restrictions (e.g., normality and relatively large sample sizes) that could discourage practitioners from applying it. Partial least squares SEM (PLS-SEM) is a nonparametric technique which makes no distributional assumptions and can be estimated with small sample sizes. In this paper a general introduction to PLS-SEM is given and it is compared with conventional SEM. Next, step-by-step procedures, along with R functions, are presented to estimate the model. A data set is analyzed and the outputs are interpreted.
DIRECT ITERATIVE METHODS FOR RANK DEFICIENT GENERALIZED LEAST SQUARES PROBLEMS
Institute of Scientific and Technical Information of China (English)
Jin-yun Yuan; Xiao-qing Jin
2000-01-01
The generalized least squares (LS) problem min_x (Ax − b)^T W^{−1} (Ax − b) appears in many application areas, where W is an m × m symmetric positive definite matrix and A is an m × n matrix with m ≥ n. Since the problem has many solutions in the rank-deficient case, special preconditioned techniques are adapted to obtain the minimum 2-norm solution. A block SOR method and the preconditioned conjugate gradient (PCG) method are proposed here. Convergence and the optimal relaxation parameter for the block SOR method are studied. An error bound for the PCG method is given. A comparison of these methods is investigated. Some remarks on the implementation of the methods and the operation cost are given as well.
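A direct (non-iterative) way to obtain the minimum 2-norm solution of a rank-deficient generalized LS problem is whitening followed by the pseudoinverse; the block SOR and PCG methods of the paper target the same solution at larger scale. A small sketch with a diagonal weight matrix:

```python
import numpy as np

rng = np.random.default_rng(3)

# Rank-deficient generalized LS: minimize (Ax - b)^T W^{-1} (Ax - b).
m, n = 8, 5
A = rng.normal(size=(m, n))
A[:, 4] = A[:, 0] + A[:, 1]             # force rank deficiency (rank 4)
w_diag = rng.uniform(0.5, 2.0, m)       # diagonal SPD weight matrix W

x_some = rng.normal(size=n)
b = A @ x_some                          # consistent right-hand side

# Whiten with W^{-1/2}; the pseudoinverse then returns the minimum
# 2-norm solution among all least-squares solutions.
Wih = np.diag(1.0 / np.sqrt(w_diag))
x_min = np.linalg.pinv(Wih @ A) @ (Wih @ b)
```

Because A is rank deficient, infinitely many x attain the minimum; the pseudoinverse picks the one of smallest Euclidean norm, which is the solution the preconditioned iterative methods are designed to reach.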
Regularized plane-wave least-squares Kirchhoff migration
Wang, Xin
2013-09-22
A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common to all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to velocity errors; 2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and 3) LSM achieves both computational and I/O savings through plane-wave encoding, compared to shot-domain LSM, for the models tested.
Partial least squares regression in the social sciences
Directory of Open Access Journals (Sweden)
Megan L. Sawatsky
2015-06-01
Full Text Available Partial least squares regression (PLSR) is a statistical modeling technique that extracts latent factors to explain both predictor and response variation. PLSR is particularly useful as a data exploration technique because it is highly flexible (e.g., there are few assumptions, and variables can be highly collinear). While gaining importance across diverse fields, its application in the social sciences has been limited. Here, we provide a brief introduction to PLSR, directed towards a novice audience with limited exposure to the technique; demonstrate its utility as an alternative to more classic approaches (multiple linear regression, principal component regression); and apply the technique to a hypothetical dataset using JMP statistical software (with references to SAS software).
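The latent-factor extraction behind PLSR can be sketched with the classical NIPALS deflation scheme for a single response (a textbook variant, not the JMP/SAS implementation referenced above); note how it handles predictors that are almost perfectly collinear:

```python
import numpy as np

rng = np.random.default_rng(4)

# PLS1 regression via NIPALS deflation: extract latent factors with high
# covariance with the response, then regress on their scores.
def pls1(X, y, n_comp):
    Xd = X - X.mean(axis=0)
    yd = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd
        w /= np.linalg.norm(w)            # weight vector
        t = Xd @ w                        # scores
        p = Xd.T @ t / (t @ t)            # X loadings
        qk = yd @ t / (t @ t)             # y loading
        Xd = Xd - np.outer(t, p)          # deflate X
        yd = yd - qk * t                  # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # coefficients (centered data)

# Highly collinear predictors: columns 0 and 1 are nearly identical.
X = rng.normal(size=(100, 4))
X[:, 1] = X[:, 0] + 1e-6 * rng.normal(size=100)
y = 2.0 * X[:, 0] - 1.0 * X[:, 2] + 0.01 * rng.normal(size=100)

B = pls1(X, y, n_comp=3)
y_hat = (X - X.mean(axis=0)) @ B + y.mean()
r2 = 1 - np.sum((y - y_hat) ** 2) / np.sum((y - y.mean()) ** 2)
```

Unlike ordinary multiple regression, the fit does not break down under the near-collinearity, because each latent factor is a well-conditioned combination of the predictors.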
Least-squares reverse time migration with Radon preconditioning
Dutta, Gaurav
2016-09-06
We present a least-squares reverse time migration (LSRTM) method using Radon preconditioning to regularize noisy or severely undersampled data. A high-resolution local Radon transform is used as a change of basis for the reflectivity, and sparseness constraints are applied to the inverted reflectivity in the transform domain. This reflects the prior that the number of geological dips at each subsurface location is limited. The forward and adjoint mappings of the reflectivity to the local Radon domain and back are done through 3D Fourier-based discrete Radon transform operators. The sparseness is enforced by applying weights to the Radon-domain components which either vary with the amplitudes of the local dips or are thresholded at given quantiles. Numerical tests on synthetic and field data validate the effectiveness of the proposed approach in producing images with improved SNR and reduced aliasing artifacts when compared with standard RTM or LSRTM.
Cognitive assessment in mathematics with the least squares distance method.
Ma, Lin; Çetin, Emre; Green, Kathy E
2012-01-01
This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on the attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of the cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes. Students from the two eastern regions significantly underperformed on most attributes.
A Galerkin least squares approach to viscoelastic flow.
Energy Technology Data Exchange (ETDEWEB)
Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)
2015-10-01
A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when decoupling the velocity and pressure, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.
semPLS: Structural Equation Modeling Using Partial Least Squares
Directory of Open Access Journals (Sweden)
Armin Monecke
2012-05-01
Full Text Available Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM, which is especially suited to situations where the data are not normally distributed. PLS path modelling is referred to as a soft-modeling technique with minimal demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.
Estimating Military Aircraft Cost Using Least Squares Support Vector Machines
Institute of Scientific and Technical Information of China (English)
ZHU Jia-yuan; ZHANG Xi-bin; ZHANG Heng-xi; REN Bo
2004-01-01
A multi-layer adaptive parameter-optimizing algorithm is developed for improving least squares support vector machines (LS-SVM), and a military aircraft life-cycle-cost (LCC) intelligent estimation model is proposed based on the improved LS-SVM. The intelligent cost estimation process is divided into three steps in the model. In the first step, a cost-drive factor needs to be selected, which is significant for cost estimation. In the second step, military aircraft training samples of costs and cost-drive factors are learned by the LS-SVM. The model can then be used for cost estimation of new aircraft types. Chinese military aircraft costs are estimated in the paper. The results show that the costs estimated by the new model are closer to the true costs than those of the traditionally used methods.
Improved linear least squares estimation using bounded data uncertainty
Ballal, Tarig
2015-04-01
This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. We then utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure for estimating the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and that it converges to the best linear estimator, the linear minimum-mean-squared-error (LMMSE) estimator, when the elements of x are statistically white.
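The general mechanism, trading a small bias for a large variance reduction at low SNR via regularization, can be demonstrated with a Monte Carlo comparison. The fixed ridge parameter below is an oracle-flavored stand-in for the data-driven BDU choice, which is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(5)

# Monte Carlo comparison of plain LS and regularized LS at low SNR.
n, p = 30, 10
A = rng.normal(size=(n, p)) / np.sqrt(n)   # columns of roughly unit norm
x = rng.normal(size=p)
sigma = 1.0                                # strong noise: low SNR regime
lam = p * sigma ** 2 / (x @ x)             # shrinkage strength (uses the true
                                           # x, i.e. an oracle choice)
mse_ls = mse_reg = 0.0
trials = 300
for _ in range(trials):
    b = A @ x + sigma * rng.normal(size=n)
    x_ls = np.linalg.lstsq(A, b, rcond=None)[0]
    x_reg = np.linalg.solve(A.T @ A + lam * np.eye(p), A.T @ b)
    mse_ls += np.sum((x_ls - x) ** 2) / trials
    mse_reg += np.sum((x_reg - x) ** 2) / trials
```

At this noise level the regularized estimator has markedly lower empirical MSE than plain LS; the contribution of the paper is a practical, data-driven way to pick the regularization strength without knowing x.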
Risk and Management Control: A Partial Least Square Modelling Approach
DEFF Research Database (Denmark)
Nielsen, Steen; Pontoppidan, Iens Christian
and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct valid feed-forward analyses as well as predictions for decision making, including risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data...... and an external attitude dimension. The results have important implications both for management control research and for the design of management control systems, i.e., for the way accountants consider the element of risk in their different tasks, both operational and strategic. Specifically, it seems that different risk......Risk and economic theory go many years back (e.g. to Keynes & Knight 1921), and risk/uncertainty is one of the explanations for the existence of the firm (Coase, 1937). The financial crisis of the past years has re-accentuated risk and the need for coherence......
Institute of Scientific and Technical Information of China (English)
刘国海; 张懿; 魏海峰; 赵文祥
2012-01-01
Considering the deficiencies of the neural network inverse control method, for a class of multi-input multi-output (MIMO) nonlinear systems with unknown models, when soft-sensing functions for immeasurable states are available, we propose a new identification and control strategy based on the generalized inverse control of least squares support vector machines (LSSVM). The generalized inverse converts the controlled nonlinear system into a pseudo-linear system with the desired pole placement. In place of a neural network, LSSVM is employed to fit the static nonlinear mapping of the generalized inverse system. The identification of state variables is combined with the identification of the LSSVM inverse model, and the soft-sensing is implemented through LSSVM training and fitting. Simulation is performed on a two-motor variable-frequency speed-regulating system. Results show that the proposed control strategy is feasible and efficient.
Chkifa, Abdellah
2015-04-08
Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
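The sampling scheme the analysis covers can be sketched in one dimension: draw random samples from the uniform measure, oversample relative to the dimension of the polynomial space, and solve the least-squares fit in an orthogonal basis (the target function and the oversampling factor below are illustrative):

```python
import numpy as np

rng = np.random.default_rng(6)

# Least-squares polynomial approximation from random samples: draw points
# from the uniform measure on [-1, 1] and fit in a Legendre basis, with
# more samples than coefficients so the fit stays stable.
deg = 10                                   # polynomial degree
n_samp = 5 * (deg + 1)                     # oversampling factor of 5
xs = rng.uniform(-1, 1, n_samp)
f = lambda t: np.exp(t) * np.cos(3 * t)    # smooth target function

V = np.polynomial.legendre.legvander(xs, deg)    # design matrix
coef, *_ = np.linalg.lstsq(V, f(xs), rcond=None)

# Uniform error of the random-sample least-squares fit on a dense grid.
grid = np.linspace(-1, 1, 1000)
err = np.max(np.abs(np.polynomial.legendre.legval(grid, coef) - f(grid)))
```

With enough oversampling, the random-sample fit is close to the best approximation in the polynomial space, which is the quasi-optimality property the paper quantifies.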
Generalized total least squares prediction algorithm for universal 3D similarity transformation
Wang, Bin; Li, Jiancheng; Liu, Chao; Yu, Jie
2017-02-01
Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from a GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (possibly with big rotation angles and an arbitrary scale ratio). However, their procedures for parameter estimation and new point (non-common point) transformation were implemented separately, and the statistical correlation that often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation synthetically, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information, are considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model, and the equation for new point transformation is incorporated into the functional model as well. The iterative solution is then derived based on the Gauss-Newton approach of nonlinear least squares. The performance of the GTLSP algorithm is verified through a simulated experiment, and the results show that the GTLSP algorithm can improve the statistical accuracy of the transformed coordinates compared with existing TLS algorithms for 3D similarity transformation.
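For contrast with the GTLSP approach, the classical closed-form least-squares similarity transform (Umeyama/Kabsch style), which estimates the parameters first and transforms new points in a separate step, can be sketched as follows:

```python
import numpy as np

rng = np.random.default_rng(7)

# Closed-form least-squares 3D similarity transform (scale, rotation,
# translation) from common points via SVD.
def similarity_fit(src, dst):
    mu_s, mu_d = src.mean(axis=0), dst.mean(axis=0)
    S, D = src - mu_s, dst - mu_d
    U, sv, Vt = np.linalg.svd(D.T @ S)
    sgn = np.sign(np.linalg.det(U @ Vt))
    R = U @ np.diag([1.0, 1.0, sgn]) @ Vt        # proper rotation
    scale = (sv @ [1.0, 1.0, sgn]) / np.sum(S ** 2)
    t = mu_d - scale * R @ mu_s
    return scale, R, t

# Big rotation angle and an arbitrary scale ratio are allowed.
theta = 2.0
R_true = np.array([[np.cos(theta), -np.sin(theta), 0.0],
                   [np.sin(theta),  np.cos(theta), 0.0],
                   [0.0, 0.0, 1.0]])
s_true, t_true = 3.7, np.array([100.0, -50.0, 10.0])

common = rng.normal(size=(10, 3))                # common points
target = s_true * common @ R_true.T + t_true

s, R, t = similarity_fit(common, target)
new_pt = np.array([1.0, 2.0, 3.0])               # a non-common point
pred = s * R @ new_pt + t
truth = s_true * R_true @ new_pt + t_true
```

This baseline treats the new point transformation as a deterministic follow-up step; the GTLSP algorithm differs precisely in folding that step, together with all coordinate errors and their correlations, into a single adjustment.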
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
Directory of Open Access Journals (Sweden)
Greenwood L.R.
2016-01-01
Full Text Available The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator.
Least-Squares Neutron Spectral Adjustment with STAYSL PNNL
Greenwood, L. R.; Johnson, C. D.
2016-02-01
Wavelet Neural Networks for Adaptive Equalization by Using the Orthogonal Least Square Algorithm
Institute of Scientific and Technical Information of China (English)
JIANG Minghu(江铭虎); DENG Beixing(邓北星); Georges Gielen
2004-01-01
Equalizers are widely used in digital communication systems for corrupted or time-varying channels. To overcome the performance decline on noisy and nonlinear channels, many kinds of neural network models have been used for nonlinear equalization. In this paper, we propose a new nonlinear channel equalizer structured with wavelet neural networks. The orthogonal least squares algorithm is applied to update the weighting matrix of the wavelet networks to form a more compact wavelet basis unit, thus obtaining good equalization performance. The experimental results show that the proposed equalizer based on wavelet networks can significantly improve the neural modeling accuracy and outperforms conventional neural network equalization in signal-to-noise ratio and channel non-linearity.
Study on Lagrange Analysis with Least Squares in Attenuating Waves
Institute of Scientific and Technical Information of China (English)
陶为俊; 浣石
2014-01-01
The existing reactive flow Lagrange analysis methods are still inadequate for solving the dynamical equations when only the particle velocity histories from a series of gauges embedded in the material are known. On this basis, a new Lagrange analysis method based on least squares, combining the inverse analysis with a self-consistency check, is presented. The theoretical accuracy of this method is such that the M-th order derivative of the stress along the path line (where M is the number of particle lines) is identically zero, and the self-consistency check is satisfied. The method is applied to process experimental data from a light gas gun. Comparing the results of this method with the experimental data and with the traditional inverse analysis shows that this method not only makes the path-line functions reflect well the behavior of the various physical quantities along the path line, but also reduces the random error of the path line.
Improving the gradient in least-squares reverse time migration
Liu, Qiancheng
2016-04-01
Least-squares reverse time migration (LSRTM) is a linearized inversion technique used for estimating high-wavenumber reflectivity. However, due to the redundant overlay of the band-limited source wavelet, the gradient based on the cross-correlation imaging principle suffers from a loss of wavenumber information. We first prepare the residuals between the observed and demigrated data by deconvolving with the amplitude spectrum of the source wavelet, and then migrate the preprocessed residuals using the cross-correlation imaging principle. In this way, a gradient that preserves the spectral signature of the data residuals is obtained. The computational cost of source-wavelet removal is negligible compared to that of wavefield simulation. The two-dimensional Marmousi model containing complex geologic structures is used to test our scheme. Numerical examples show that our improved gradient in LSRTM has better convergence behavior and yields inverted results of higher resolution. Finally, we attempt to update the background velocity with our inverted velocity perturbations to approach the true velocity.
HASM-AD Algorithm Based on the Sequential Least Squares
Institute of Scientific and Technical Information of China (English)
WANG Shihai; YUE Tianxiang
2010-01-01
The HASM (high accuracy surface modeling) technique is based on the fundamental theory of surfaces and has been proved to improve interpolation accuracy in surface fitting. However, the integral iterative solution used in previous studies resulted in high temporal complexity and huge memory usage, which made the technique difficult to put into application, especially for large-scale datasets. In this study, an innovative model (HASM-AD) is developed according to sequential least squares on the basis of data adjustment theory. Sequential division is adopted, so that the linear equations can be divided into groups and processed in sequence, greatly reducing the temporal complexity of the computation. The experiment indicates that the HASM-AD technique surpasses traditional spatial interpolation methods in accuracy. The cross-validation result for the spatial interpolation of soil pH, with data sampled in Jiangxi province, supports the same conclusion. Moreover, the study demonstrates that the HASM-AD technique significantly reduces computational complexity and lessens memory usage.
3D plane-wave least-squares Kirchhoff migration
Wang, Xin
2014-08-05
A three-dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitation of current 3D marine acquisition geometries, a cylindrical-wave encoding is adopted for the narrow-azimuth streamer data. To account for the mispositioning of reflectors due to errors in the velocity model, a regularized LSM is devised so that each plane-wave or cylindrical-wave gather gives rise to an individual migration image, and a regularization term is included to encourage similarity between the migration images of similar encoding schemes. Both synthetic and field results show that: 1) plane-wave or cylindrical-wave encoding LSM can achieve both computational and I/O savings compared to shot-domain LSM, although plane-wave LSM is still about 5 times more expensive than plane-wave migration; 2) the regularized LSM is more robust than LSM with one reflectivity model common to all the plane-wave or cylindrical-wave gathers.
Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors
Energy Technology Data Exchange (ETDEWEB)
Gavel, D
2002-10-08
A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few "high-order" modes, which can then be projected out of the reconstructor matrix. This technique can be adapted so as to remove any specific modes that are undesirable in the final reconstructor (such as piston, tip, and tilt) as well as to suppress the more nebulously defined localized waffle behavior.
Efficient sparse kernel feature extraction based on partial least squares.
Dhanjal, Charanpal; Gunn, Steve R; Shawe-Taylor, John
2009-08-01
The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on Partial Least Squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called Sparse Maximal Alignment (SMA) and Sparse Maximal Covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a Support Vector Machine.
Prediction of solubility parameters using partial least square regression.
Tantishaiyakul, Vimon; Worakul, Nimit; Wongpoowarak, Wibul
2006-11-15
The total solubility parameter (delta) values were effectively predicted by using computed molecular descriptors and multivariate partial least squares (PLS) statistics. The molecular descriptors in the derived models included heat of formation, dipole moment, molar refractivity, solvent-accessible surface area (SA), surface-bounded molecular volume (SV), unsaturated index (Ui), and hydrophilic index (Hy). The values of these descriptors were computed with HyperChem 7.5, the QSPR Properties module in HyperChem 7.5, and the Dragon Web version. Two further descriptors, hydrogen bonding donor (HD) and hydrogen bond-forming ability (HB), were also included in the models. The final reduced model of the whole data set had R² of 0.853, Q² of 0.813, a root mean squared error from cross-validation of the training set (RMSEcv(tr)) of 2.096, and an RMSE of calibration (RMSE(tr)) of 1.857. No outlier was observed in this data set of 51 diverse compounds. Additionally, the predictive power of the developed model was comparable to the well-recognized systems of Hansen, of van Krevelen and Hoftyzer, and of Hoy.
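To make the PLS mechanics concrete, the following is a minimal single-response PLS (NIPALS-style PLS1) regression sketch in NumPy; the data, dimensions, and names are synthetic illustrations of ours, not the paper's model:

```python
import numpy as np

# Minimal PLS1 (NIPALS-style) regression sketch on synthetic data.
def pls1_fit(X, y, n_comp):
    X, y = X.copy(), y.copy()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w /= np.linalg.norm(w)           # weight vector
        t = X @ w                        # score vector
        p = X.T @ t / (t @ t)            # X loading
        qk = y @ t / (t @ t)             # y loading
        X -= np.outer(t, p)              # deflate X
        y = y - qk * t                   # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # regression coefficients

rng = np.random.default_rng(2)
X = rng.normal(size=(60, 5))
beta = np.array([1.0, -2.0, 0.5, 0.3, -1.2])
y = X @ beta                             # noise-free synthetic response
B = pls1_fit(X, y, n_comp=5)
```

With as many components as predictors and noise-free data, the PLS coefficients coincide with the ordinary least-squares solution; fewer components give the shrunken estimates used in practice.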
River flow time series using least squares support vector machines
Samsudin, R.; Saad, P.; Shabri, A.
2011-06-01
This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables, which serve as inputs to the LSSVM time series forecasting model. Monthly river flow data from two stations, the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia, were taken into consideration in the development of this hybrid model. The performance of this model was compared with conventional artificial neural network (ANN) models, Autoregressive Integrated Moving Average (ARIMA), GMDH and LSSVM models using long-term observations of monthly river flow discharge. The root mean square error (RMSE) and coefficient of correlation (R) are used to evaluate the models' performances. In both cases, the new hybrid model has been found to provide more accurate flow forecasts compared to the other models. The results of the comparison indicate that the new hybrid model is a useful tool and a promising new method for river flow forecasting.
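For readers unfamiliar with the LSSVM component, training reduces to a single linear solve in the dual (Suykens' formulation). A minimal NumPy sketch with an RBF kernel on synthetic data, not river-flow records; the hyperparameter values are our own choices:

```python
import numpy as np

# LSSVM regression: solve [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
def rbf(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma**2))

def lssvm_fit(X, y, gamma=1e4, sigma=1.0):
    n = len(y)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = rbf(X, X, sigma) + np.eye(n) / gamma  # ridge-regularized kernel
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                             # bias b, dual weights alpha

def lssvm_predict(X_train, b, alpha, X_new, sigma=1.0):
    return rbf(X_new, X_train, sigma) @ alpha + b

X = np.linspace(0.0, 6.0, 40)[:, None]
y = np.sin(X[:, 0])                                    # synthetic smooth signal
b, alpha = lssvm_fit(X, y)
pred = lssvm_predict(X, b, alpha, X)
```

The regularization constant gamma trades data fit against smoothness, playing the role of the SVM box constraint.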
Least-squares fit of a linear combination of functions
Directory of Open Access Journals (Sweden)
Niraj Upadhyay
2013-12-01
Full Text Available We propose that given a data set $S=\{(x_i,y_i)\mid i=1,2,\dots,n\}$ and real-valued functions $\{f_\alpha(x)\mid \alpha=1,2,\dots,m\}$, the least-squares fit vector $A=\{a_\alpha\}$ for $y=\sum_\alpha a_{\alpha}f_\alpha(x)$ is $A = (F^TF)^{-1}F^TY$, where $[F_{i\alpha}]=[f_\alpha(x_i)]$. We test this formalism by deriving the algebraic expressions of the regression coefficients in $y = ax + b$ and in $y = ax^2 + bx + c$. As a practical application, we successfully arrive at the coefficients in the semi-empirical mass formula of nuclear physics. The formalism is {\it generic}: it has the potential of being applicable to any {\it type} of $\{x_i\}$ as long as there exist appropriate $\{f_\alpha\}$. The method can be exploited with a CAS or an object-oriented language and is excellently suited to parallel processing.
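The closed form $A = (F^TF)^{-1}F^TY$ translates directly into a few lines of NumPy; this sketch (ours, not the paper's code) checks it on the quadratic $y = ax^2 + bx + c$ discussed in the text:

```python
import numpy as np

# Least-squares fit of y = sum_alpha a_alpha * f_alpha(x) via the normal equations.
def lsq_fit(funcs, x, y):
    F = np.column_stack([f(x) for f in funcs])   # F[i, alpha] = f_alpha(x_i)
    return np.linalg.solve(F.T @ F, F.T @ y)     # A = (F^T F)^{-1} F^T Y

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x**2 - 3.0 * x + 1.0                   # exact quadratic data
a, b, c = lsq_fit([lambda u: u**2, lambda u: u, lambda u: np.ones_like(u)], x, y)
```

In production code `np.linalg.lstsq` (QR/SVD based) is preferred over forming $F^TF$ explicitly, which squares the condition number.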
Robust regularized least-squares beamforming approach to signal estimation
Suliman, Mohamed
2017-05-12
In this paper, we address the problem of robust adaptive beamforming of signals received by a linear array. The challenge associated with the beamforming problem is twofold. Firstly, the process requires the inversion of the usually ill-conditioned covariance matrix of the received signals. Secondly, the steering vector pertaining to the direction of arrival of the signal of interest is not known precisely. To tackle these two challenges, the standard Capon beamformer is manipulated into a form where the beamformer output is obtained as a scaled version of the inner product of two vectors. The two vectors are linearly related to the steering vector and the received signal snapshot, respectively. The linear operator, in both cases, is the square root of the covariance matrix. A regularized least-squares (RLS) approach is proposed to estimate these two vectors and to provide robustness without exploiting prior information. Simulation results show that the RLS beamformer using the proposed regularization algorithm outperforms state-of-the-art beamforming algorithms, as well as other RLS beamformers that use standard regularization approaches.
A pruning method for the recursive least squared algorithm.
Leung, C S; Wong, K W; Sum, P F; Chan, L W
2001-03-01
The recursive least squared (RLS) algorithm is an effective online training method for neural networks. However, its combination with weight decay and pruning has not been well studied. This paper elucidates how generalization ability can be improved by selecting an appropriate initial value of the error covariance matrix in the RLS algorithm. Moreover, how pruning of neural networks can benefit from using the final value of the error covariance matrix is also investigated. Our study found that the RLS algorithm is implicitly a weight decay method, where the weight decay effect is controlled by the initial value of the error covariance matrix, and that the inverse of the error covariance matrix is approximately equal to the Hessian matrix of the network being trained. We propose that neural networks are first trained by the RLS algorithm and then some unimportant weights are removed based on the approximate Hessian matrix. Simulation results show that our approach is an effective training and pruning method for neural networks.
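A minimal sketch of the RLS recursion for a linear model, with the initial covariance P0 = I/delta that the abstract identifies as the implicit weight-decay control; delta and the synthetic data are our own choices:

```python
import numpy as np

# Recursive least squares for y = w^T x; equivalent to ridge regression
# with penalty delta * ||w||^2 (the implicit weight decay noted above).
def rls_train(X, y, delta=1e-3, lam=1.0):
    n = X.shape[1]
    w = np.zeros(n)
    P = np.eye(n) / delta                  # large initial covariance -> weak decay
    for x, d in zip(X, y):
        k = P @ x / (lam + x @ P @ x)      # gain vector
        w = w + k * (d - w @ x)            # correct with the a-priori error
        P = (P - np.outer(k, x @ P)) / lam # covariance update
    return w, P                            # P^{-1} approximates the Hessian

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true                             # noise-free synthetic targets
w, P = rls_train(X, y)
```

Shrinking delta toward zero removes the decay; the inverse of the final P is the Gauss-Newton Hessian approximation the pruning step relies on.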
Non-parametric and least squares Langley plot methods
Directory of Open Access Journals (Sweden)
P. W. Kiedron
2015-04-01
Full Text Available Langley plots are used to calibrate sun radiometers, primarily for measuring the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer–Lambert–Beer law V = V0 e^(−τ·m), where a plot of ln(V) (voltage) versus m (air mass) yields a straight line with intercept ln(V0). This ln(V0) can subsequently be used to solve for τ from any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and with examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0) values are smoothed and interpolated with median and mean moving window filters.
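Taking logs of the Bouguer–Lambert–Beer law gives ln(V) = ln(V0) − τ·m, so the least-squares variant of the calibration reduces to an ordinary line fit in air mass. A sketch with synthetic values for V0 and τ (not from the paper's data):

```python
import numpy as np

# Langley-plot calibration: fit ln(V) vs. air mass m; the intercept gives
# ln(V0) and the (negated) slope gives the optical depth tau.
m = np.linspace(1.0, 5.0, 30)            # air masses over a clear morning
V0_true, tau_true = 1.35, 0.21           # synthetic calibration constant and optical depth
V = V0_true * np.exp(-tau_true * m)      # Bouguer-Lambert-Beer law

slope, intercept = np.polyfit(m, np.log(V), 1)
V0_est, tau_est = np.exp(intercept), -slope
```

With real data the fit is repeated over many mornings and the resulting ln(V0) series is then smoothed, as the abstract describes.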
Energy Technology Data Exchange (ETDEWEB)
Kehimkar, Benjamin; Hoggard, Jamin C.; Marney, Luke C.; Billingsley, Matthew; Fraga, Carlos G.; Bruno, Thomas J.; Synovec, Robert E.
2014-01-31
There is an increased need to more fully assess and control the composition of kerosene based rocket propulsion fuels, namely RP-1 and RP-2. In particular, it is crucial to be able to make better quantitative connections between the following three attributes: (a) fuel performance, (b) fuel properties (flash point, density, kinematic viscosity, net heat of combustion, hydrogen content, etc) and (c) the chemical composition of a given fuel (i.e., specific chemical compounds and compound classes present as a result of feedstock blending and processing). Indeed, recent efforts in predicting fuel performance through modeling put greater emphasis on detailed and accurate fuel properties and fuel compositional information. In this regard, advanced distillation curve (ADC) metrology provides improved data relative to classical boiling point and volatility curve techniques. Using ADC metrology, data obtained from RP-1 and RP-2 fuels provides compositional variation information that is directly relevant to predictive modeling of fuel performance. Often, in such studies, one-dimensional gas chromatography (GC) combined with mass spectrometry (MS) is typically employed to provide chemical composition information. Building on approaches using GC-MS, but to glean substantially more chemical composition information from these complex fuels, we have recently studied the use of comprehensive two dimensional gas chromatography combined with time-of-flight mass spectrometry (GC × GC - TOFMS) to provide chemical composition data that is significantly richer than that provided by GC-MS methods. In this report, by applying multivariate data analysis techniques, referred to as chemometrics, we are able to readily model (correlate) the chemical compositional information from RP-1 and RP-2 fuels provided using GC × GC - TOFMS, to the fuel property information such as that provided by the ADC method and other specification properties. We anticipate that this new chemical analysis
Institute of Scientific and Technical Information of China (English)
李雪竹; 陈国龙
2015-01-01
Big data analysis methods can discover the relationships and rules present in data and predict future trends, thereby improving the scientific basis of decision making. To address the low precision and poor generalization of traditional prediction methods, this paper presents a big data analysis and prediction method based on an intelligent support vector machine (SVM). A new criterion for selecting the SVM model parameters is designed: the probability density function (PDF) of the modeling error is made to approach a given Gaussian PDF. A chaotic contractive particle swarm optimization (PSO) algorithm is adopted to tune the parameters according to this criterion, improving the precision and generalization of data classification or regression. The approach is validated with field data from a mineral-processing production process; the results demonstrate its effectiveness and show that its accuracy is higher than that of the LSSVM method.
Grigorie, Teodor Lucian; Corcau, Ileana Jenica; Tudosie, Alexandru Nicolae
2017-06-01
The paper presents a way to obtain an intelligent miniaturized three-axial accelerometric sensor, based on on-line estimation and compensation of the sensor errors generated by environmental temperature variation. Because this error is a strongly nonlinear function of both the environmental temperature and the acceleration exciting the sensor, it cannot be corrected off-line and requires an additional temperature sensor. The proposed identification methodology for the error model is based on the least squares method, which processes off-line the numerical values obtained from experimental testing of the accelerometer at different values of acceleration applied to its axes of sensitivity and at different operating temperatures. A final analysis of the error level after compensation highlights the best variant of the matrix in the error model. The paper presents the results of experimental testing of the accelerometer on all three sensitivity axes, the identification of the error model on each axis by the least squares method, and the validation of the obtained models against experimental values. For all three detection channels, the absolute maximum acceleration error due to environmental temperature variation was reduced by almost two orders of magnitude.
Institute of Scientific and Technical Information of China (English)
岳利群; 夏青; 柳佳佳; 陈轲
2012-01-01
We obtained experimental data by flying over global multi-area, multi-level 3D terrain on computers with different hardware configurations. Using partial least-squares regression, we then analyzed the 11 hardware factors that affect the performance of the 3D visualization system. Because the data exhibit strong collinearity, ordinary least-squares regression is unreliable; partial least-squares regression effectively mitigates the collinearity and more clearly reveals how the hardware factors affect the running efficiency of the 3D visualization system. We conclude that the main hardware factors affecting the 3D visualization system are CPU clock frequency, L2 cache, and memory, with the L2 cache having a particularly significant effect on running speed.
Application of the Marquardt least-squares method to the estimation of pulse function parameters
Lundengârd, Karl; Rančić, Milica; Javor, Vesna; Silvestrov, Sergei
2014-12-01
Application of the Marquardt least-squares method (MLSM) to the estimation of non-linear parameters of functions used for representing various lightning current waveshapes is presented in this paper. Parameters are determined for the Pulse, Heidler's and DEXP function representing the first positive, first and subsequent negative stroke currents as given in IEC 62305-1 Standard Ed.2, and also for some other fast- and slow-decaying lightning current waveshapes. The results prove the ability of the MLSM to be used for the estimation of parameters of the functions important in lightning discharge modeling.
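A minimal Levenberg-Marquardt (Marquardt least-squares) sketch, here fitting a double-exponential (DEXP) waveshape of the kind mentioned above; the implementation, parameter values, and starting guess are our own illustration, not the paper's code or the IEC standard parameters:

```python
import numpy as np

# DEXP lightning-current waveshape i(t) = I0 * (exp(-t/tau1) - exp(-t/tau2)).
def dexp(p, t):
    I0, tau1, tau2 = p
    return I0 * (np.exp(-t / tau1) - np.exp(-t / tau2))

def jacobian(p, t):
    f0 = dexp(p, t)
    cols = []
    for j in range(len(p)):
        q = p.copy()
        h = 1e-6 * abs(p[j])               # relative forward-difference step
        q[j] += h
        cols.append((dexp(q, t) - f0) / h)
    return np.column_stack(cols)

def marquardt(p, t, y, iters=100, lam=1e-3):
    for _ in range(iters):
        r = dexp(p, t) - y
        J = jacobian(p, t)
        A = J.T @ J
        # Marquardt damping: scale the diagonal of J^T J, not the identity
        step = np.linalg.solve(A + lam * np.diag(np.diag(A)), -J.T @ r)
        if np.sum((dexp(p + step, t) - y) ** 2) < np.sum(r**2):
            p, lam = p + step, lam / 10    # accept: reduce damping
        else:
            lam *= 10                      # reject: increase damping
    return p

t = np.linspace(0.0, 100e-6, 400)
p_true = np.array([10e3, 50e-6, 5e-6])     # I0 = 10 kA, tau1 = 50 us, tau2 = 5 us
i_meas = dexp(p_true, t)                   # noise-free "measured" waveshape
p_fit = marquardt(np.array([8e3, 40e-6, 4e-6]), t, i_meas)
```

Fitting the Heidler function proceeds identically after swapping the model routine; diagonal scaling is what makes the damping insensitive to the very different magnitudes of I0 and the time constants.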
Optimization of absorption placement using geometrical acoustic models and least squares.
Saksela, Kai; Botts, Jonathan; Savioja, Lauri
2015-04-01
Given a geometrical model of a space, the problem of optimally placing absorption to match a desired impulse response is in general nonlinear. This has led some to use costly optimization procedures. This letter reformulates absorption assignment as a constrained linear least-squares problem. Regularized solutions result in a direct distribution of absorption in the room and can accommodate multiple frequency bands, multiple sources and receivers, and constraints on the geometrical placement of absorption. The method is demonstrated using a beam tracing model, resulting in optimal absorption placement on the walls and ceiling of a classroom.
Institute of Scientific and Technical Information of China (English)
Yong Nian Ni; Wei Lin
2011-01-01
Near-infrared spectroscopy (NIR), which is generally used for online monitoring of food analysis and production processes, was applied to determine the internal quality of toothpaste samples. It is acknowledged that the spectra can be significantly influenced by non-linearities introduced by light scatter; therefore, four data preprocessing methods, including offset correction, first derivative, standard normal variate (SNV) and multiplicative scatter correction (MSC), were employed before the data analysis. A multivariate calibration model based on partial least squares (PLS) was established and then used to predict the pH values of toothpaste samples of different brands. The results showed that the spectral data processed by MSC were the best for predicting the pH values of the toothpaste samples.
Fast Dating Using Least-Squares Criteria and Algorithms.
To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier
2016-01-01
Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that
Least-squares reverse time migration in elastic media
Ren, Zhiming; Liu, Yang; Sen, Mrinal K.
2017-02-01
Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Besides, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results determine that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.
The moving-least-squares-particle hydrodynamics method (MLSPH)
Energy Technology Data Exchange (ETDEWEB)
Dilts, G. [Los Alamos National Lab., NM (United States)
1997-12-31
An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.
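The core MLS idea, a locally weighted least-squares fit re-solved at each evaluation point, can be sketched in one dimension as follows; the node layout, weight kernel, and width are illustrative choices of ours, not MLSPH's:

```python
import numpy as np

# 1D moving-least-squares approximation: at each evaluation point x, fit a
# local linear polynomial to the nodes by weighted least squares and take
# its value at x. With a linear basis, linear fields are reproduced exactly,
# which is the consistency property the abstract credits to MLS interpolants.
def mls_eval(x_nodes, f_nodes, x_eval, h=0.3):
    out = np.empty_like(x_eval)
    for k, x in enumerate(x_eval):
        w = np.exp(-((x_nodes - x) / h) ** 2)                       # Gaussian weights
        B = np.column_stack([np.ones_like(x_nodes), x_nodes - x])   # basis: 1, (xi - x)
        A = B.T @ (w[:, None] * B)                                  # weighted moment matrix
        coef = np.linalg.solve(A, B.T @ (w * f_nodes))
        out[k] = coef[0]                                            # local fit evaluated at x
    return out

x_nodes = np.linspace(0.0, 1.0, 21)
f_nodes = 2.0 * x_nodes + 1.0                                       # linear field
approx = mls_eval(x_nodes, f_nodes, np.array([0.25, 0.5, 0.75]))
```

SPH kernel sums lack this exact linear reproduction near boundaries, which is precisely the deficiency the MLSPH substitution repairs.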
Learning rates of least-square regularized regression with polynomial kernels
Institute of Scientific and Technical Information of China (English)
(author not listed)
2009-01-01
This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of the polynomial space and on the polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish the direct approximation theorem by Bernstein-Durrmeyer operators in L^2_{ρ_X} with a Borel probability measure ρ_X.
A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem
Institute of Scientific and Technical Information of China (English)
徐洪国
2004-01-01
We present a numerical method for solving the indefinite least squares problem. We first normalize the coefficient matrix. Then we compute the hyperbolic QR factorization of the normalized matrix. Finally, we compute the solution by solving several triangular systems. We give a first-order error analysis to show that the method is backward stable. The method is more efficient than the backward stable method proposed by Chandrasekaran, Gu and Sayed.
A DYNAMICAL SYSTEM ALGORITHM FOR SOLVING A LEAST SQUARES PROBLEM WITH ORTHOGONALITY CONSTRAINTS
Institute of Scientific and Technical Information of China (English)
黄建国; 叶中行; 徐雷
2001-01-01
This paper introduces a dynamical system (neural network) algorithm for solving a least squares problem with orthogonality constraints, which has wide applications in computer vision and signal processing. A rigorous analysis of the convergence and stability of the algorithm is provided. Moreover, a so-called zero-extension technique is presented to keep the algorithm convergent to the needed result for any randomly chosen initial data. Numerical experiments illustrate the effectiveness and efficiency of the algorithm.
Michaelis-Menten kinetics, the operator-repressor system, and least squares approaches.
Hadeler, Karl Peter
2013-01-01
The Michaelis-Menten (MM) function is a fractional linear function depending on two positive parameters. These can be estimated by nonlinear or linear least squares methods. The non-linear methods, based directly on the defect of the MM function, can fail and not produce any minimizer. The linear methods always produce a unique minimizer which, however, may not be positive. Here we give sufficient conditions on the data such that the nonlinear problem has at least one positive minimizer and also conditions for the minimizer of the linear problem to be positive. We discuss in detail the models and equilibrium relations of a classical operator-repressor system, and we extend our approach to the MM problem with leakage and to reversible MM kinetics. The arrangement of the sufficient conditions exhibits the important role of data that have a concavity property (chemically feasible data).
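The linear least-squares route mentioned above can be sketched by clearing the denominator of the MM function v = Vmax·s/(Km + s), which gives Km·v − Vmax·s = −v·s, linear in (Km, Vmax); as the abstract warns, the resulting minimizer is unique but not guaranteed positive. The data below are synthetic:

```python
import numpy as np

# Linear least-squares estimate of Michaelis-Menten parameters from the
# denominator-cleared form Km*v - Vmax*s = -v*s (a single lstsq solve).
def mm_linear_fit(s, v):
    A = np.column_stack([v, -s])
    b = -v * s
    Km, Vmax = np.linalg.lstsq(A, b, rcond=None)[0]
    return Km, Vmax

s = np.array([0.5, 1.0, 2.0, 4.0, 8.0])      # substrate concentrations
Km_true, Vmax_true = 2.0, 5.0
v = Vmax_true * s / (Km_true + s)            # noise-free reaction rates
Km, Vmax = mm_linear_fit(s, v)
```

With noisy data this transformed fit reweights the errors, which is why the paper's sufficient conditions for positive minimizers matter in practice.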
Zheng, Jun; Shao, Xinyu; Gao, Liang; Jiang, Ping; Qiu, Haobo
2015-06-01
Engineering design, especially for complex engineering systems, is usually a time-consuming process involving computation-intensive computer-based simulation and analysis methods. A difference mapping method using least squares support vector regression is developed in this work as a special metamodelling methodology that includes variable-fidelity data, to replace computationally expensive computer codes. A general difference mapping framework is proposed in which a surrogate base is first created, and the approximation is then gained by mapping the difference between the base and the real high-fidelity response surface. Least squares support vector regression is adopted to accomplish the mapping. Two different sampling strategies, nested and non-nested design of experiments, are conducted to explore their respective effects on modelling accuracy. Different sample sizes and three approximation performance measures of accuracy are considered.
Institute of Scientific and Technical Information of China (English)
CHEN Nan-xiang; CAO Lian-hai; HUANG Qiang
2005-01-01
Scientific forecasting of mine water yield is of great significance to safe mine production and to the integrated use of water resources. This paper establishes a forecasting model for mine water yield that combines a neural network with the partial least squares method. Preprocessing the independent variables with partial least squares not only resolves the correlations among them but also reduces the input dimension of the neural network model, which can then better handle the nonlinear problem. The result of an example shows that the prediction achieves higher precision in both forecasting and fitting.
A least square extrapolation method for improving solution accuracy of PDE computations
Garbey, M
2003-01-01
Richardson extrapolation (RE) is based on a very simple and elegant mathematical idea that has been successful in several areas of numerical analysis such as quadrature or time integration of ODEs. In theory, RE can also be used on PDE approximations when the convergence order of a discrete solution is clearly known. But in practice, the order of a numerical method often depends on space location and is not accurately satisfied on the different levels of grids used in the extrapolation formula. We propose in this paper a more robust and numerically efficient method based on the idea of automatically finding the order of a method as the solution of a least square minimization problem on the residual. We introduce two-level and three-level least square extrapolation methods that work on nonmatching embedded grid solutions via spline interpolation. Our least square extrapolation method is a post-processing of data produced by existing PDE codes that is easy to implement and can be a better tool than RE for code v...
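The pointwise idea that this least square extrapolation method generalizes can be sketched as follows. The values are hypothetical and follow the model f(h) = u + C*h^p exactly, so the classical three-grid order estimate and the Richardson extrapolant recover u and p; in the paper's setting p varies in space and is instead found by least-squares minimization of the residual.

```python
import numpy as np

# Hypothetical limit u, error constant C, and convergence order p.
u, C, p = 1.234, 0.7, 2.0
f = lambda h: u + C * h**p          # exact error model
h = 0.1
f1, f2, f3 = f(h), f(h / 2), f(h / 4)

# Three-grid order estimate: (f1-f2)/(f2-f3) = 2**p for this model.
p_est = np.log2((f1 - f2) / (f2 - f3))
# Richardson extrapolant eliminates the leading C*h**p error term.
u_est = (2**p_est * f2 - f1) / (2**p_est - 1)
```

When the error model only holds approximately, the ratio (f1-f2)/(f2-f3) becomes grid-dependent, which is exactly the failure mode motivating the least-squares formulation above.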
Precision PEP-II optics measurement with an SVD-enhanced Least-Square fitting
Yan, Y. T.; Cai, Y.
2006-03-01
A singular value decomposition (SVD)-enhanced least-square fitting technique is discussed. By automatically identifying, ordering, and selecting the dominant SVD modes of the derivative matrix that relates the responses to variations of the variables, the convergence of the least-square fitting is significantly enhanced, making the fitting fast enough for a fairly large system. This technique has been successfully applied to precision PEP-II optics measurement in which we determine all quadrupole strengths (both normal and skew components) and sextupole feed-downs as well as all BPM gains and BPM cross-plane couplings through least-square fitting of the phase advances and the local Green's functions as well as the coupling ellipses among BPMs. The local Green's functions are specified by the 4 local transfer matrix components R12, R34, R32, R14. These measurable quantities (the Green's functions, the phase advances, and the coupling ellipse tilt angles and axis ratios) are obtained by analyzing turn-by-turn beam position monitor (BPM) data with a high-resolution model-independent analysis (MIA). Once all of the quadrupoles and sextupole feed-downs are determined, we obtain a computer virtual accelerator which matches the real accelerator in linear optics. Thus, beta functions, linear coupling parameters, and interaction point (IP) optics characteristics can be measured and displayed.
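A minimal sketch (not from the cited work) of the core idea: solve a least-squares problem keeping only the dominant SVD modes of the matrix. For a fixed relative threshold this coincides with the pseudoinverse solution; the matrix and threshold below are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical ill-conditioned "derivative matrix": one singular value is
# tiny, so an unfiltered solve would be dominated by that near-null mode.
U, _ = np.linalg.qr(rng.standard_normal((20, 3)))
V, _ = np.linalg.qr(rng.standard_normal((3, 3)))
A = U @ np.diag([1.0, 0.5, 1e-10]) @ V.T
b = rng.standard_normal(20)

# Keep only the dominant SVD modes (relative threshold 1e-6).
Us, sv, Vt = np.linalg.svd(A, full_matrices=False)
keep = sv > 1e-6 * sv[0]
x_trunc = Vt[keep].T @ ((Us[:, keep].T @ b) / sv[keep])
```

Discarding the weak modes trades a small bias for a large reduction in noise amplification, which is what stabilizes the fitting iteration for large systems.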
FOSLS (first-order systems least squares): An overview
Energy Technology Data Exchange (ETDEWEB)
Manteuffel, T.A. [Univ. of Colorado, Boulder, CO (United States)
1996-12-31
The process of modeling a physical system involves creating a mathematical model, forming a discrete approximation, and solving the resulting linear or nonlinear system. The mathematical model may take many forms. The particular form chosen may greatly influence the ease and accuracy with which it may be discretized as well as the properties of the resulting linear or nonlinear system. If a model is chosen incorrectly it may yield linear systems with undesirable properties such as nonsymmetry or indefiniteness. On the other hand, if the model is designed with the discretization process and numerical solution in mind, it may be possible to avoid these undesirable properties.
Institute of Scientific and Technical Information of China (English)
Xudong Yu; Yu Wang; Guo Wei; Pengfei Zhang; Xingwu Long
2011-01-01
Bias of a ring laser gyroscope (RLG) changes with temperature in a nonlinear way. This is an important factor restraining improvement of RLG accuracy. Considering the limitations of least-squares regression and neural networks, we propose a new method of temperature compensation of RLG bias: building a function regression model using the least-squares support vector machine (LS-SVM). Static and dynamic temperature experiments on RLG bias are carried out to validate the effectiveness of the proposed method, and the traditional least-squares regression method is compared with the LS-SVM-based method. The results show that the maximum error of RLG bias drops by almost two orders of magnitude after static temperature compensation, while the bias stability of the RLG improves by one order of magnitude after dynamic temperature compensation. Thus, the proposed method effectively reduces the influence of temperature variation on RLG bias and considerably improves the accuracy of the gyroscope.
Application of least-squares spectral element solver methods to incompressible flow problems
Proot, M.M.J.; Gerritsma, M.I.; Nool, M.
2003-01-01
Least-squares spectral element methods are based on two important and successful numerical methods: spectral/hp element methods and least-squares finite element methods. In this respect, least-squares spectral element methods are very powerful since they combine the generality of finite element methods ...
Parallel Implementation of a Least-Squares Spectral Element Solver for Incompressible Flow Problems
Nool, M.; Proot, M.M.J.; Sloot, P.M.A.; Kenneth Tan, C.J.; Dongarra, J.J.; Hoekstra, A.G.
2002-01-01
Least-squares spectral element methods are based on two important and successful numerical methods: spectral/hp element methods and least-squares finite element methods. Least-squares methods lead to symmetric and positive definite algebraic systems which circumvent the Ladyzhenskaya-Babuška-Brezzi ...
Linear least squares compartmental-model-independent parameter identification in PET.
Thie, J A; Smith, G T; Hubner, K F
1997-02-01
A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity, and plasma integrals, all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoidance of the risks of convergence failures or false solutions in iterative least squares, and provision of various visualizations of the uptake process by straight-line graphical displays. Multiparameter model-independent analyses of lesser understood systems are also made possible.
Iterative least square phase-measuring method that tolerates extended finite bandwidth illumination.
Munteanu, Florin; Schmit, Joanna
2009-02-20
Iterative least square phase-measuring techniques address the phase-shifting interferometry issue of sensitivity to vibrations and scanner nonlinearity. In these techniques the wavefront phase and phase steps are determined simultaneously from a single set of phase-shifted fringe frames where the phase shift does not need to have a nominal value or be a priori precisely known. This method is commonly used in laser interferometers in which the contrast of fringes is constant between frames and across the field. We present step-by-step modifications to the basic iterative least square method. These modifications allow for vibration insensitive measurements in an interferometric system in which fringe contrast varies across a single frame, as well as from frame to frame, due to the limited bandwidth light source and the nonzero numerical aperture of the objective. We demonstrate the efficiency of the new algorithm with experimental data, and we analyze theoretically the degree of contrast variation that this new algorithm can tolerate.
Extracting information from two-dimensional electrophoresis gels by partial least squares regression
DEFF Research Database (Denmark)
Jessen, Flemming; Lametsch, R.; Bendixen, E.;
2002-01-01
Two-dimensional gel electrophoresis (2-DE) produces large amounts of data, and extraction of relevant information from these data demands a cautious and time consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear ... of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary ...
Khawaja, Taimoor Saleem
A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution, and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on LS-SVMs. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior
Chen, Y. M.; Lin, P.; He, J. Q.; He, Y.; Li, X. L.
2016-01-01
This study was carried out for rapid and noninvasive determination of the class of sorghum species by using manifold dimensionality reduction (MDR) methods and the nonlinear regression method of least squares support vector machines (LS-SVM) combined with mid-infrared spectroscopy (MIRS) techniques. The Durbin and Run tests of the augmented partial residual plot (APaRP) were performed to diagnose the nonlinearity of the raw spectral data. The nonlinear MDR methods of isometric feature mapping (ISOMAP), local linear embedding, laplacian eigenmaps, and local tangent space alignment, as well as the linear MDR methods of principal component analysis and metric multidimensional scaling, were employed to extract the feature variables. The extracted characteristic variables were utilized as the input of LS-SVM to establish the relationship between the spectra and the target attributes. The mean average precision (MAP) scores and prediction accuracy were respectively used to evaluate the performance of the models. The prediction results showed that the ISOMAP-LS-SVM model obtained the best classification performance, with MAP scores and prediction accuracy of 0.947 and 92.86%, respectively. It can be concluded that the ISOMAP-LS-SVM model combined with the MIRS technique has the potential of classifying the species of sorghum with reasonable accuracy.
Energy Technology Data Exchange (ETDEWEB)
Griffin, P.J.
1998-05-01
This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with state-of-the-art analysis as detailed in community consensus ASTM standards.
A negative-norm least-squares method for time-harmonic Maxwell equations
Copeland, Dylan M.
2012-04-01
This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.
Energy Technology Data Exchange (ETDEWEB)
Aziz, A., E-mail: aziz@gonzaga.edu [Department of Mechanical Engineering, School of Engineering and Applied Science, Gonzaga University, Spokane, WA 99258 (United States); Bouaziz, M.N. [Department of Mechanical Engineering, University of Medea, BP 164, Medea 26000 (Algeria)
2011-08-15
Highlights: Analytical solutions for a rectangular fin with temperature dependent heat generation and thermal conductivity. Graphs give temperature distributions and fin efficiency. Comparison of analytical and numerical solutions. Method of least squares used for the analytical solutions. - Abstract: Approximate but highly accurate solutions for the temperature distribution, fin efficiency, and optimum fin parameter for a constant area longitudinal fin with temperature dependent internal heat generation and thermal conductivity are derived analytically. The method of least squares recently used by the authors is applied to treat the two nonlinearities, one associated with the temperature dependent internal heat generation and the other due to temperature dependent thermal conductivity. The solution is built from the classical solution for a fin with uniform internal heat generation and constant thermal conductivity. The results are presented graphically and compared with the direct numerical solutions. The analytical solutions retain their accuracy (within 1% of the numerical solution) even when there is a 60% increase in thermal conductivity and internal heat generation at the base temperature from their corresponding values at the sink temperature. The present solution is simple (involves hyperbolic functions only) compared with the fairly complex approximate solutions based on the homotopy perturbation method, variational iteration method, and the double series regular perturbation method and offers high accuracy. The simple analytical expressions for the temperature distribution, the fin efficiency and the optimum fin parameter are convenient for use by engineers dealing with the design and analysis of heat generating fins operating with a large temperature difference between the base and the environment.
A Coupled Finite Difference and Moving Least Squares Simulation of Violent Breaking Wave Impact
DEFF Research Database (Denmark)
Lindberg, Ole; Bingham, Harry B.; Engsig-Karup, Allan Peter
2012-01-01
Two models for simulation of free surface flow are presented. The first model is a finite difference based potential flow model with non-linear kinematic and dynamic free surface boundary conditions. The second model is a weighted least squares based incompressible and inviscid flow model. A special feature of this model is a generalized finite point set method which is applied to the solution of the Poisson equation on an unstructured point distribution; the presented finite point set method is generalized to arbitrary order of approximation. The two models are applied to simulation of steep ... incompressible and inviscid model, and the wave impacts on the vertical breakwater are simulated in this model. The resulting maximum pressures and forces on the breakwater are relatively high when compared with other studies, and this is due to the incompressible nature of the present model.
Xu, Lin; Feng, Yanqiu; Liu, Xiaoyun; Kang, Lili; Chen, Wufan
2014-01-01
Accuracy of interpolation coefficients fitting to the auto-calibrating signal data is crucial for k-space-based parallel reconstruction. Both conventional generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction that utilizes linear interpolation function and nonlinear GRAPPA (NLGRAPPA) reconstruction with polynomial kernel function are sensitive to interpolation window and often cannot consistently produce good results for overall acceleration factors. In this study, sparse multi-kernel learning is conducted within the framework of least squares support vector regression to fit interpolation coefficients as well as to reconstruct images robustly under different subsampling patterns and coil datasets. The kernel combination weights and interpolation coefficients are adaptively determined by efficient semi-infinite linear programming techniques. Experimental results on phantom and in vivo data indicate that the proposed method can automatically achieve an optimized compromise between noise suppression and residual artifacts for various sampling schemes. Compared with NLGRAPPA, our method is significantly less sensitive to the interpolation window and kernel parameters.
Baseline configuration for GNSS attitude determination with an analytical least-squares solution
Chang, Guobin; Xu, Tianhe; Wang, Qianxin
2016-12-01
The GNSS attitude determination using carrier phase measurements with 4 antennas is studied, on the condition that the integer ambiguities have been resolved. The solution to the nonlinear least-squares problem is usually obtained iteratively; however, an analytical solution can exist for specific baseline configurations. The main aim of this work is to design this class of configurations. Both single and double difference measurements are treated, referring to dedicated and non-dedicated receivers respectively. More realistic error models are employed, in which the correlations between different measurements are given full consideration. The desired configurations are worked out. The configurations are rotation and scale equivariant and can be applied to both dedicated and non-dedicated receivers. For these configurations, the analytical and optimal solution for the attitude is also given, together with its error variance-covariance matrix.
Song, Jun-Ling; Hong, Yan-Ji; Wang, Guang-Yu; Pan, Hu
2013-08-01
The measurement of nonuniform temperature and concentration distributions was investigated based on tunable diode laser absorption spectroscopy. By directly scanning multiple absorption lines of H2O, two-zone temperature and concentration distributions were retrieved by solving nonlinear equations via least-squares fitting in numerical and experimental studies. The numerical results show that the calculated temperature and concentration have relative errors of 8.3% and 7.6% compared to the model, respectively. The calculation accuracy can be improved by increasing the number of absorption lines and reducing the number of unknowns. Compared with the thermocouple readings, the high and low temperatures have relative errors of 13.8% and 3.5%, respectively. The numerical results are in agreement with the experimental results.
Directory of Open Access Journals (Sweden)
Nenggen Ding
2010-01-01
Full Text Available A recursive least squares (RLS) algorithm for estimation of vehicle sideslip angle and road friction coefficient is proposed. The algorithm uses the information from sensors onboard the vehicle and control inputs from the control logic and is intended to provide the essential information for active safety systems such as active steering, direct yaw moment control, or their combination. Based on a simple two-degree-of-freedom (DOF) vehicle model, the algorithm minimizes the squared errors between the estimated lateral acceleration and yaw acceleration of the vehicle and their measured values. The algorithm also utilizes available control inputs such as active steering angle and wheel brake torques. The proposed algorithm is evaluated using an 8-DOF full vehicle simulation model including all essential nonlinearities and an integrated active front steering and direct yaw moment control on dry and slippery roads.
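A minimal recursive least squares (RLS) sketch, unrelated to the vehicle model above, illustrating the kind of update such an estimator performs: with a forgetting factor of 1 and a large initial covariance, the recursion agrees with the batch least-squares fit. The data and parameter vector are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((200, 3))
y = A @ np.array([0.3, -1.2, 2.0]) + 0.01 * rng.standard_normal(200)

theta = np.zeros(3)
Pmat = 1e6 * np.eye(3)            # large initial covariance ~ weak prior
for a_k, y_k in zip(A, y):
    K = Pmat @ a_k / (1.0 + a_k @ Pmat @ a_k)    # gain vector
    theta = theta + K * (y_k - a_k @ theta)       # innovation update
    Pmat = Pmat - np.outer(K, a_k @ Pmat)         # covariance update

theta_batch = np.linalg.lstsq(A, y, rcond=None)[0]
```

In an online setting such as sideslip estimation, a forgetting factor below 1 would down-weight old samples so the estimate can track slowly varying road conditions.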
Directory of Open Access Journals (Sweden)
Kuosheng Jiang
2014-07-01
In this paper a stochastic resonance (SR) based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test.
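A moving least squares fit of the kind used here for signal reconstruction can be sketched as a locally weighted linear regression. The bandwidth and data are hypothetical; the check exploits the fact that MLS with a linear basis reproduces affine signals exactly.

```python
import numpy as np

def mls_smooth(x, y, h):
    """Moving least squares with a linear basis and Gaussian weights."""
    out = np.empty_like(y)
    for i, xc in enumerate(x):
        w = np.exp(-((x - xc) / h) ** 2)         # weights centered at xc
        B = np.column_stack([np.ones_like(x), x - xc])
        BtW = B.T * w
        # weighted normal equations of the local linear fit at xc
        coef = np.linalg.solve(BtW @ B, BtW @ y)
        out[i] = coef[0]                          # fitted value at xc
    return out

x = np.linspace(0.0, 1.0, 50)
smoothed = mls_smooth(x, 3.0 * x + 1.0, h=0.1)
```

Because the local basis can follow sharp slopes, MLS smooths noise without flattening impulsive features as much as a plain moving average would.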
New predictive control algorithms based on Least Squares Support Vector Machines
Institute of Scientific and Technical Information of China (English)
LIU Bin; SU Hong-ye; CHU Jian
2005-01-01
Intended for industrial processes with different degrees of nonlinearity, the two predictive control algorithms presented in this paper are based on Least Squares Support Vector Machines (LS-SVM) models. For a weakly nonlinear system, the system model is built by using LS-SVM with a linear kernel function, and the obtained linear LS-SVM model is then transformed into a linear input-output relation for the controlled system. For a strongly nonlinear system, the off-line model of the controlled system is built by using LS-SVM with a Radial Basis Function (RBF) kernel. The obtained nonlinear LS-SVM model is linearized at each sampling instant while the system is running, after which the on-line linear input-output model of the system is built. Based on the obtained linear input-output model, the Generalized Predictive Control (GPC) algorithm is employed to implement predictive control of the controlled plant in both algorithms. Simulation results obtained by implementing the presented algorithms on two different industrial process models reveal the effectiveness and merits of both algorithms.
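The LS-SVM model-building step common to both algorithms reduces to solving one linear system in the dual variables (the standard Suykens-style formulation); the RBF kernel width and regularization below are hypothetical, not values from the paper.

```python
import numpy as np

def lssvm_fit(X, y, gamma=1000.0, sigma=0.5):
    """LS-SVM regression: solve [[0, 1^T],[1, K + I/gamma]] [b; alpha] = [0; y].
    Returns the dual weights alpha, bias b, and the training kernel matrix."""
    n = len(y)
    d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-d2 / (2 * sigma**2))             # RBF (Gaussian) kernel
    top = np.concatenate([[0.0], np.ones(n)])
    bottom = np.column_stack([np.ones(n), K + np.eye(n) / gamma])
    M = np.vstack([top, bottom])
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    return sol[1:], sol[0], K

X = np.linspace(0, 2 * np.pi, 30)[:, None]
y = np.sin(X[:, 0])
alpha, b, K = lssvm_fit(X, y)
y_fit = K @ alpha + b
```

Unlike a standard SVM, every training point receives a nonzero alpha (equality constraints replace the epsilon-insensitive loss), which is what makes the training a single linear solve.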
Champagnat, Nicolas; Faou, Erwan
2010-01-01
We propose extensions and improvements of the statistical analysis of distributed multipoles (SADM) algorithm put forth by Chipot et al. in [6] for the derivation of distributed atomic multipoles from the quantum-mechanical electrostatic potential. The method is mathematically extended to general least-squares problems and provides an alternative approximation method in cases where the original least-squares problem is computationally not tractable, either because of its ill-posedness or its high-dimensionality. The solution is approximated employing a Monte Carlo method that takes the average of a random variable defined as the solutions of random small least-squares problems drawn as subsystems of the original problem. The conditions that ensure convergence and consistency of the method are discussed, along with an analysis of the computational cost in specific instances.
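The subsystem-averaging idea can be sketched on a consistent overdetermined system, where every full-rank random subsystem solves exactly and the Monte Carlo average therefore recovers the full least-squares solution; the sizes and number of draws are hypothetical, and the cited method's interest lies in the inconsistent, ill-posed case.

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((100, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
b = A @ x_true                      # consistent right-hand side

# Draw random small square subsystems and average their solutions.
draws = []
for _ in range(200):
    rows = rng.choice(100, size=4, replace=False)
    draws.append(np.linalg.solve(A[rows], b[rows]))
x_mc = np.mean(draws, axis=0)
```

Each draw touches only a 4x4 system, so the per-sample cost is independent of the full problem size, which is the appeal for high-dimensional least-squares problems.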
DEFF Research Database (Denmark)
Sørensen, Helle Aagaard; Petersen, Marianne Kjerstine; Jacobsen, Susanne;
2004-01-01
Rapid methods for the identification of wheat varieties and their end-use quality have been developed. The methods combine the analysis of wheat protein extracts by mass spectrometry with partial least-squares regression in order to predict the variety or end-use quality of unknown wheat samples ... The whole process takes ~30 min. Extracts of alcohol-soluble storage proteins (gliadins) from wheat were analysed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Partial least-squares regression was subsequently applied using these mass spectra for making models ... that could predict the wheat variety or end-use quality. Previously, an artificial neural network was used to identify wheat varieties based on their protein mass spectra profiles. The present study showed that partial least-squares regression is at least as useful as neural networks for this identification ...
Li, Chuang; Huang, Jian-Ping; Li, Zhen-Chun; Wang, Rong-Rong
2017-03-01
Least squares migration can eliminate the artifacts introduced by the direct imaging of irregular seismic data, but it is computationally costly and converges slowly. In order to suppress the migration noise, we propose a preconditioned prestack plane-wave least squares reverse time migration (PLSRTM) method with a singular spectrum constraint. Singular spectrum analysis (SSA) is used in the preconditioning of the take-off-angle-domain common-image gathers (TADCIGs). In addition, we adopt randomized singular value decomposition (RSVD) to calculate the singular values. RSVD reduces the computational cost of SSA by replacing the singular value decomposition (SVD) of one large matrix with the SVD of two small matrices. We incorporate into the preconditioned PLSRTM method a regularization term that penalizes misfits between the migration images from plane waves with adjacent angles, because the stacking of the migration results cannot effectively suppress the migration noise when the migration velocity contains errors. The regularization imposes smoothness constraints on the TADCIGs that favor differential semblance optimization constraints. Numerical analysis of synthetic data using the Marmousi model suggests that the proposed method can efficiently suppress the artifacts introduced by plane-wave gathers or irregular seismic data and improve the imaging quality of PLSRTM. Furthermore, it produces better images with less noise and more continuous structures even for inaccurate migration velocities.
Dimension reduction for p53 protein recognition by using incremental partial least squares.
Zeng, Xue-Qiang; Li, Guo-Zheng
2014-06-01
p53 is an important tumor suppressor protein: mutated p53 is found in many kinds of human cancers, and restoring active p53 can lead to tumor regression. In recent years, more and more data have been extracted from biophysical simulations, which leaves the modelling of mutant p53 transcriptional activity suffering from the problems of a huge number of instances and high feature dimension. Incremental feature extraction is effective in facilitating the analysis of large-scale data. However, most current incremental feature extraction methods are not suitable for processing big data with high feature dimension. Partial Least Squares (PLS) has been demonstrated to be an effective dimension reduction technique for classification. In this paper, we design a highly efficient and powerful algorithm named Incremental Partial Least Squares (IPLS), which conducts a two-stage extraction process. In the first stage, the PLS target function is adapted to be incremental, with the historical mean updated to extract the leading projection direction. In the last stage, the other projection directions are calculated through the equivalence between the PLS vectors and the Krylov sequence. We compare IPLS with state-of-the-art incremental feature extraction methods such as Incremental Principal Component Analysis, Incremental Maximum Margin Criterion, and Incremental Inter-class Scatter on real p53 protein data. Empirical results show that IPLS performs better than the other methods in terms of balanced classification accuracy.
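For reference, the batch PLS1 (NIPALS) recursion whose leading projection direction the incremental variant updates can be sketched as follows; the data are hypothetical, and with noise-free responses and as many components as features the recursion reproduces the ordinary least-squares coefficients.

```python
import numpy as np

def pls1(X, y, n_comp):
    """Batch PLS1 via NIPALS deflation; returns regression coefficients."""
    Xd, yd = X - X.mean(0), y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xd.T @ yd
        w /= np.linalg.norm(w)          # projection direction
        t = Xd @ w                      # score vector
        tt = t @ t
        p = Xd.T @ t / tt               # X loading
        c = (yd @ t) / tt               # y loading
        Xd = Xd - np.outer(t, p)        # deflate X and y
        yd = yd - c * t
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.solve(P.T @ W, q)   # coefficients in X space

rng = np.random.default_rng(4)
X = rng.standard_normal((50, 5))
beta_true = np.array([1.0, 0.0, -2.0, 0.5, 0.0])
y = X @ beta_true
beta = pls1(X, y, n_comp=5)
```

The successive directions span a Krylov subspace of (X^T X, X^T y), which is the equivalence the IPLS second stage exploits.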
Identifying differentially methylated genes using mixed effect and generalized least square models
Directory of Open Access Journals (Sweden)
Yan Pearlly S
2009-12-01
Background: DNA methylation plays an important role in the process of tumorigenesis. Identifying differentially methylated genes or CpG islands (CGIs) associated with genes between two tumor subtypes is thus an important biological question. The methylation status of all CGIs in the whole genome can be assayed with differential methylation hybridization (DMH) microarrays. However, patient samples or cell lines are heterogeneous, so their methylation patterns may be very different. In addition, neighboring probes at each CGI are correlated. How these factors affect the analysis of DMH data is unknown. Results: We propose a new method for identifying differentially methylated (DM) genes by identifying the associated DM CGI(s). At each CGI, we implement four different mixed effect and generalized least square models to identify DM genes between two groups. We compare the four models with a simple least square regression model to study the impact of incorporating random effects and correlations. Conclusions: We demonstrate that the inclusion (or exclusion) of random effects and the choice of correlation structures can significantly affect the results of the data analysis. We also assess the false discovery rate of different models using CGIs associated with housekeeping genes.
Partial Least Squares Structural Equation Modeling with R
Directory of Open Access Journals (Sweden)
Hamdollah Ravand
2016-09-01
The ability of regression discontinuity (RD) designs to provide an unbiased treatment effect while overcoming the ethical concerns that plague Random Control Trials (RCTs) makes RD a valuable and useful approach in education evaluation. RD is the only explicitly recognized quasi-experimental approach identified by the Institute of Education Statistics to meet the prerequisites of a causal relationship. Unfortunately, the statistical complexity of the RD design has limited its application in education research. This article provides a less technical introduction to RD for education researchers and practitioners. Using visual analysis to aid conceptual understanding, the article walks readers through the essential steps of a Sharp RD design using hypothetical, but realistic, district intervention data and provides additional resources for further exploration.
NEGATIVE NORM LEAST-SQUARES METHODS FOR THE INCOMPRESSIBLE MAGNETOHYDRODYNAMIC EQUATIONS
Institute of Scientific and Technical Information of China (English)
Gao Shaoqin; Duan Huoyuan
2008-01-01
The purpose of this article is to develop and analyze least-squares approximations for the incompressible magnetohydrodynamic equations. The major advantage of the least-squares finite element method is that it is not subject to the so-called Ladyzhenskaya-Babuska-Brezzi (LBB) condition. The authors employ least-squares functionals which involve a discrete inner product related to the inner product in H^{-1}(Ω).
Directory of Open Access Journals (Sweden)
Cheng Wang
2014-01-01
Full Text Available The identification of a class of linear-in-parameters multiple-input single-output systems is considered. By using the iterative search, a least-squares based iterative algorithm and a gradient based iterative algorithm are proposed. A nonlinear example is used to verify the effectiveness of the algorithms, and the simulation results show that the least-squares based iterative algorithm can produce more accurate parameter estimates than the gradient based iterative algorithm.
Nanda, Sudarsan
2013-01-01
"Nonlinear analysis" presents recent developments in calculus in Banach spaces, convex sets, convex functions, best approximation, fixed point theorems, nonlinear operators, variational inequalities, complementarity problems and semi-inner-product spaces. Nonlinear analysis has become important and useful today because many real-world problems are nonlinear, nonconvex and nonsmooth in nature. Although basic concepts are presented here, many of the results have not appeared in any book until now. The book can be used as a text for graduate students and will also be useful for researchers working in this field.
Harmonic estimation in a power system using a novel hybrid Least Squares-Adaline algorithm
Energy Technology Data Exchange (ETDEWEB)
Joorabian, M.; Mortazavi, S.S.; Khayyami, A.A. [Electrical Engineering Department, Shahid Chamran University, Ahwaz, 61355 (Iran)
2009-01-15
Nowadays many algorithms have been proposed for harmonic estimation in a power system. Most of them treat this estimation as a totally nonlinear problem. Consequently, these methods either converge slowly, like the GA algorithm [U. Qidwai, M. Bettayeb, GA based nonlinear harmonic estimation, IEEE Trans. Power Delivery (December) 1998], or need accurate parameter adjustment to track dynamic and abrupt changes of harmonic amplitudes, like the adaptive Kalman filter (KF) [Steven Liu, An adaptive Kalman filter for dynamic estimation of harmonic signals, in: 8th International Conference on Harmonics and Quality of Power, ICHQP'98, Athens, Greece, October 14-16, 1998]. In this paper a novel hybrid approach, based on the decomposition of the problem into a linear and a nonlinear part, is proposed. A linear estimator, i.e., Least Squares (LS), which is simple, fast and does not need any parameter tuning to follow harmonic amplitude changes, is used for amplitude estimation, and an adaptive linear combiner called 'Adaline', which is very fast and very simple, is used to estimate the phases of the harmonics. An improvement in convergence and processing time is achieved using this algorithm. Moreover, better performance in online tracking of dynamic and abrupt changes of signals is the result of applying this method.
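The linear amplitude-estimation stage described above can be sketched in a few lines: once the fundamental frequency is known, the signal model is linear in the sine and cosine coefficients of each harmonic, so ordinary least squares applies directly. The function and the synthetic test signal below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def estimate_harmonics(t, y, f0, n_harmonics):
    """Estimate amplitude and phase of each harmonic via linear least squares.

    Signal model: y(t) = sum_k A_k sin(k w0 t) + B_k cos(k w0 t),
    which is linear in the unknowns A_k, B_k.
    """
    w0 = 2 * np.pi * f0
    cols = []
    for k in range(1, n_harmonics + 1):
        cols.append(np.sin(k * w0 * t))
        cols.append(np.cos(k * w0 * t))
    H = np.column_stack(cols)                    # design matrix
    theta, *_ = np.linalg.lstsq(H, y, rcond=None)
    A, B = theta[0::2], theta[1::2]
    amp = np.hypot(A, B)                         # harmonic amplitudes
    phase = np.arctan2(B, A)                     # harmonic phases
    return amp, phase

# Synthetic signal: 50 Hz fundamental (amplitude 10) plus 3rd harmonic (amplitude 3).
t = np.linspace(0, 0.2, 2000)
y = 10 * np.sin(2 * np.pi * 50 * t) + 3 * np.sin(2 * np.pi * 150 * t)
amp, phase = estimate_harmonics(t, y, 50.0, 5)
print(np.round(amp, 3))  # close to [10, 0, 3, 0, 0]
```

Because the problem is linear, no iteration or parameter tuning is needed, which is the speed advantage the abstract attributes to the LS stage.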
CHEBYSHEV WEIGHTED NORM LEAST-SQUARES SPECTRAL METHODS FOR THE ELLIPTIC PROBLEM
Institute of Scientific and Technical Information of China (English)
Sang Dong Kim; Byeong Chun Shin
2006-01-01
We develop and analyze a first-order system least-squares spectral method for the second-order elliptic boundary value problem with variable coefficients. We first analyze the Chebyshev weighted norm least-squares functional defined by the sum of the L^2_w- and H^{-1}_w-norms of the residual equations, and then we replace the negative norm by the discrete negative norm and analyze the discrete Chebyshev weighted least-squares method. Spectral convergence is derived for the proposed method. We also present various numerical experiments. The Legendre weighted least-squares method can be easily developed by following this paper.
Directory of Open Access Journals (Sweden)
Zhan-bo Chen
2014-01-01
Full Text Available In order to improve the performance prediction accuracy of a hydraulic excavator, the regression least squares support vector machine is applied. First, the mathematical model of the regression least squares support vector machine is studied, and then the algorithm of the regression least squares support vector machine is designed. Finally, a performance prediction simulation of the hydraulic excavator based on the regression least squares support vector machine is carried out, and the simulation results show that this method can correctly predict how the performance of a hydraulic excavator changes.
Least-squares finite-element scheme for the lattice Boltzmann method on an unstructured mesh.
Li, Yusong; LeBoeuf, Eugene J; Basu, P K
2005-10-01
A numerical model of the lattice Boltzmann method (LBM) utilizing least-squares finite-element method in space and the Crank-Nicolson method in time is developed. This method is able to solve fluid flow in domains that contain complex or irregular geometric boundaries by using the flexibility and numerical stability of a finite-element method, while employing accurate least-squares optimization. Fourth-order accuracy in space and second-order accuracy in time are derived for a pure advection equation on a uniform mesh; while high stability is implied from a von Neumann linearized stability analysis. Implemented on unstructured mesh through an innovative element-by-element approach, the proposed method requires fewer grid points and less memory compared to traditional LBM. Accurate numerical results are presented through two-dimensional incompressible Poiseuille flow, Couette flow, and flow past a circular cylinder. Finally, the proposed method is applied to estimate the permeability of a randomly generated porous media, which further demonstrates its inherent geometric flexibility.
Least-squares migration of multisource data with a deblurring filter
Dai, Wei
2011-09-01
Least-squares migration (LSM) has been shown to be able to produce high-quality migration images, but its computational cost is considered to be too high for practical imaging. We have developed a multisource least-squares migration algorithm (MLSM) to increase the computational efficiency by using the blended sources processing technique. To expedite convergence, a multisource deblurring filter is used as a preconditioner to reduce the data residual. This MLSM algorithm is applicable with Kirchhoff migration, wave-equation migration, or reverse time migration, and the gain in computational efficiency depends on the choice of migration method. Numerical results with Kirchhoff LSM on the 2D SEG/EAGE salt model show that an accurate image is obtained by migrating a supergather of 320 phase-encoded shots. When the encoding functions are the same for every iteration, the input/output cost of MLSM is reduced by 320 times. Empirical results show that the crosstalk noise introduced by blended sources is more effectively reduced when the encoding functions are changed at every iteration. The analysis of signal-to-noise ratio (S/N) suggests that not too many iterations are needed to enhance the S/N to an acceptable level. Therefore, when implemented with wave-equation migration or reverse time migration methods, the MLSM algorithm can be more efficient than the conventional migration method. © 2011 Society of Exploration Geophysicists.
Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems
Abbasi-Yadkori, Yasin; Szepesvari, Csaba
2011-01-01
The analysis of online least squares estimation is at the heart of many stochastic sequential decision-making problems. We employ tools from self-normalized processes to provide a simple and self-contained proof of a tail bound of a vector-valued martingale. We use the bound to construct new, tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and the linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al. (2002) and show that its regret is with high probability a problem-dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem-dependent bound in the dimension and number of time steps. Furthermore, as opposed to the previous result, we prove that our bou...
Directory of Open Access Journals (Sweden)
Youxin Luo
2013-04-01
Full Text Available Machine tools based on a Stewart platform are considered the machine tools of the 21st century. Difficult problems exist in the design philosophy, of which forward displacement analysis is the most fundamental. Its mathematical model is a kind of strongly nonlinear multivariable equation set with unique characteristics and a high level of difficulty. Different variable numbers and different solving speeds can be obtained through using different methods to establish the model of forward displacement analysis. The damped least-square method based on chaos anti-control for solving displacement analysis of the general 6-6-type parallel mechanism was built up through the rotation transformation matrix R, translation vector P and the constraint conditions of the rod length. The Euler equations describing the rotational dynamics of a rigid body with principle axes at the centre of mass were converted to a chaotic system by using chaos anti-control, and chaotic sequences were produced using the chaos system. Combining the characteristics of the chaotic sequence with the damped least-square method, all real solutions of forward displacement in nonlinear equations were found. A numerical example shows that the new method has some interesting features, such as fast convergence and the capability of acquiring all real solutions, and comparisons with other methods prove its effectiveness and validity.
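The damped least-squares iteration at the core of such forward-displacement solvers can be sketched generically. The toy two-equation system and parameter values below are mine for illustration; the article's actual 6-6 platform equations and chaos-generated starting points are not reproduced.

```python
import numpy as np

def damped_least_squares(F, J, x0, mu=1e-2, iters=50):
    """Damped least-squares (Levenberg-Marquardt style) iteration for F(x) = 0.

    Each step solves the damped normal equations
        (J^T J + mu I) dx = -J^T F(x),
    where the damping term mu I keeps the step well defined even when
    J^T J is (near-)singular.
    """
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = F(x)
        Jx = J(x)
        dx = np.linalg.solve(Jx.T @ Jx + mu * np.eye(len(x)), -Jx.T @ r)
        x = x + dx
    return x

# Toy system: intersection of a circle (radius 2) with the line x = y.
F = lambda x: np.array([x[0]**2 + x[1]**2 - 4.0, x[0] - x[1]])
J = lambda x: np.array([[2 * x[0], 2 * x[1]], [1.0, -1.0]])
root = damped_least_squares(F, J, [1.0, 0.5])
print(np.round(root, 6))  # close to [sqrt(2), sqrt(2)]
```

In the article, many such local iterations are started from chaos-generated initial points so that all real solutions of the displacement equations can be collected.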
Neuenkirch, Andreas
2011-01-01
We study a least square-type estimator for an unknown parameter in the drift coefficient of a stochastic differential equation with additive fractional noise of Hurst parameter H>1/2. The estimator is based on discrete time observations of the stochastic differential equation, and using tools from ergodic theory and stochastic analysis we derive its strong consistency.
Directory of Open Access Journals (Sweden)
Ching-Yun Kao
2010-01-01
Full Text Available In engineering applications, the development of attenuation relationships in a seismic hazard analysis is a useful way to plan for earthquake hazard mitigation. However, finding an optimal solution is difficult using traditional mathematical methods because of the nonlinearity of many relationships. Furthermore, using unweighted regression analysis in which each recording carries an equal weight is often problematic because of the non-uniform distribution of the data with respect to distance. In this study, the least squares method (LSM) and a genetic algorithm (GA) were employed as optimization methods for an attenuation model to compare the robustness and prediction accuracy of the two methods. Different (equal and unequal) weights of each recording were used to compare the adaptability of the weighting for practical application. The unequal weights of each recording were defined as functions of the hypocentral distance or the shortest distance from a station to the fault on the surface. Finally, regression analysis of a horizontal peak ground acceleration (PGA) attenuation model in southwest Taiwan is shown.
A Least Square Finite Element Technique for Transonic Flow with Shock,
1977-08-22
dimensional form. A least square finite element technique was used with a linearly interpolating polynomial to reduce the governing partial differential equation to a system of ordinary differential equations. Using the least square finite element technique, a computer program was
Speckle evolution with multiple steps of least-squares phase removal
CSIR Research Space (South Africa)
Chen, M
2011-08-01
Full Text Available The authors study numerically the evolution of speckle fields due to the annihilation of optical vortices after the least-squares phase has been removed. A process with multiple steps of least-squares phase removal is carried out to minimize both...
Least-Squares Mirrorsymmetric Solution for Matrix Equations (AX=B, XC=D)
Institute of Scientific and Technical Information of China (English)
Fanliang Li; Xiyan Hu; Lei Zhang
2006-01-01
In this paper, the least-squares mirrorsymmetric solution for the matrix equations (AX=B, XC=D) and its optimal approximation are considered. With a special expression of mirrorsymmetric matrices, a general representation of the solution for the least-squares problem is obtained. In addition, the optimal approximate solution and some algorithms to obtain the optimal approximation are provided.
A Generalized Autocovariance Least-Squares Method for Kalman Filter Tuning
DEFF Research Database (Denmark)
Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad
2008-01-01
of the state estimates. There is a linear relationship between covariances and autocovariance. Therefore, the covariance estimation problem can be stated as a least-squares problem, which can be solved as a symmetric semidefinite least-squares problem. This problem is convex and can be solved efficiently...
A Simple Introduction to Moving Least Squares and Local Regression Estimation
Energy Technology Data Exchange (ETDEWEB)
Garimella, Rao Veerabhadra [Los Alamos National Lab. (LANL), Los Alamos, NM (United States)
2017-06-22
In this brief note, a highly simplified introduction to estimating functions over a set of particles is presented. The note starts from Global Least Squares fitting, going on to Moving Least Squares estimation (MLS) and finally, Local Regression Estimation (LRE).
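The progression the note describes, from a single global fit to locally weighted fits, can be illustrated with a small sketch. The function names and the Gaussian weight function are my choices, not the note's:

```python
import numpy as np

def global_ls(x, y, degree=1):
    """One polynomial fit shared by all the data (global least squares)."""
    return np.polyfit(x, y, degree)

def mls_estimate(x, y, x0, degree=1, h=0.1):
    """Moving least squares estimate at x0.

    Each evaluation point solves its own *weighted* least-squares problem,
    with Gaussian weights (bandwidth h) that decay with distance to x0.
    """
    w = np.exp(-((x - x0) / h) ** 2)          # distance-based weights
    V = np.vander(x, degree + 1)              # local polynomial basis
    W = np.diag(w)
    # Weighted normal equations: (V^T W V) c = V^T W y
    c = np.linalg.solve(V.T @ W @ V, V.T @ W @ y)
    return np.polyval(c, x0)

x = np.linspace(0, 1, 50)
y = np.sin(2 * np.pi * x)                     # a curve no single line can follow
line = np.polyval(global_ls(x, y), x)
mls = np.array([mls_estimate(x, y, xi) for xi in x])
# The locally weighted fit tracks the curve far better than the global line:
print(np.max(np.abs(mls - y)) < np.max(np.abs(line - y)))
```

Local regression estimation differs from MLS mainly in interpretation and weight choice; the weighted solve at each point has the same structure.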
Directory of Open Access Journals (Sweden)
Mingjun Zhang
2015-12-01
Full Text Available A novel thruster fault identification method for autonomous underwater vehicles is presented in this article. It uses the proposed peak region energy method to extract the fault feature and the proposed least square grey relational grade method to estimate the fault degree. The peak region energy method is developed from the fusion feature modulus maximum method. It applies the fusion feature modulus maximum method to obtain the fusion feature and then takes the maximum of the peak region energy in the convolution results of the fusion feature as the fault feature. The least square grey relational grade method is developed from the grey relational analysis algorithm. It determines the fault degree interval by the grey relational analysis algorithm and then estimates the fault degree within that interval by the least square algorithm. Pool experiments on the experimental prototype are conducted to verify the effectiveness of the proposed methods. The experimental results show that the fault feature extracted by the peak region energy method is monotonic with fault degree while the one extracted by the fusion feature modulus maximum method is not. The least square grey relational grade method can further produce an estimate between adjacent standard fault degrees, whereas the estimate from the grey relational analysis algorithm is limited to one of the standard fault degrees.
Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.
2017-04-01
To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of the input and output variables are used as training samples to construct the model, which can reduce the effect of nonlinear characteristics on modelling accuracy while retaining the advantages of the recursive PLS algorithm. To address the high updating frequency of the model, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately.
Energy Technology Data Exchange (ETDEWEB)
Li, Chun-Hua; Zhu, Xin-Jian; Cao, Guang-Yi; Sui, Sheng; Hu, Ming-Ruo [Fuel Cell Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China)
2008-01-03
This paper reports a Hammerstein modeling study of a proton exchange membrane fuel cell (PEMFC) stack using least squares support vector machines (LS-SVM). The PEMFC is a complex nonlinear, multi-input and multi-output (MIMO) system that is hard to model by traditional methodologies. Because the generalization performance of LS-SVM is independent of the dimensionality of the input data, and because of the particularly simple structure of the Hammerstein model, a MIMO SVM-ARX (linear autoregression model with exogenous input) Hammerstein model is used to represent the PEMFC stack in this paper. The linear model parameters and the static nonlinearity can be obtained simultaneously by solving a set of linear equations followed by singular value decomposition (SVD). The simulation tests demonstrate that the obtained SVM-ARX Hammerstein model can efficiently approximate the dynamic behavior of a PEMFC stack. Furthermore, based on the proposed SVM-ARX Hammerstein model, control strategy studies such as predictive control and robust control can be developed.
Wilson, Edward (Inventor)
2006-01-01
The present invention is a method for identifying unknown parameters in a system having a set of governing equations describing its behavior that cannot be put into regression form with the unknown parameters linearly represented. In this method, the vector of unknown parameters is segmented into a plurality of groups where each individual group of unknown parameters may be isolated linearly by manipulation of said equations. Multiple concurrent and independent recursive least squares identifications of each said group are run, treating other unknown parameters appearing in their regression equation as if they were known perfectly, with said values provided by recursive least squares estimation from the other groups, thereby enabling the use of fast, compact, efficient linear algorithms to solve problems that would otherwise require nonlinear solution approaches. This invention is presented with application to the identification of mass and thruster properties for a thruster-controlled spacecraft.
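The linear building block that the patent runs in multiple concurrent copies, a recursive least squares update, can be sketched as follows. This is the textbook RLS algorithm, not the patented multi-group procedure itself.

```python
import numpy as np

class RecursiveLeastSquares:
    """Textbook recursive least squares for scalar measurements y = phi . theta."""

    def __init__(self, n_params, p0=1e6):
        self.theta = np.zeros(n_params)     # parameter estimate
        self.P = np.eye(n_params) * p0      # estimate covariance (large = uninformative prior)

    def update(self, phi, y):
        """Incorporate one measurement; returns the updated estimate."""
        Pphi = self.P @ phi
        k = Pphi / (1.0 + phi @ Pphi)       # gain vector
        self.theta = self.theta + k * (y - phi @ self.theta)
        self.P = self.P - np.outer(k, Pphi)
        return self.theta

# Recover theta = [2, -1] from noiseless scalar observations.
rng = np.random.default_rng(0)
rls = RecursiveLeastSquares(2)
true_theta = np.array([2.0, -1.0])
for _ in range(100):
    phi = rng.standard_normal(2)
    rls.update(phi, phi @ true_theta)
print(np.round(rls.theta, 4))  # close to [2, -1]
```

In the patented scheme, several such estimators run concurrently, each treating the other groups' current estimates as known constants in its own regressor.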
Extension of least squares spectral resolution algorithm to high-resolution lipidomics data
Energy Technology Data Exchange (ETDEWEB)
Zeng, Ying-Xu [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); Mjøs, Svein Are, E-mail: svein.mjos@kj.uib.no [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); David, Fabrice P.A. [Bioinformatics and Biostatistics Core Facility, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL) and Swiss Institute of Bioinformatics (SIB), Lausanne (Switzerland); Schmid, Adrien W. [Proteomics Core Facility, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland)
2016-03-31
Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.
Directory of Open Access Journals (Sweden)
Margaretha Ohyver
2014-12-01
Full Text Available Multicollinearity and outliers are common problems when estimating a regression model. Multicollinearity occurs when there are high correlations among predictor variables, leading to difficulties in separating the effects of each independent variable on the response variable. Meanwhile, if outliers are present in the data to be analyzed, the assumption of normality in the regression will be violated and the results of the analysis may be incorrect or misleading. Both of these issues occurred in the data on room occupancy rates of hotels in Kendari. The purpose of this study is to find a model for the data that is free of multicollinearity and outliers and to determine the factors that affect the room occupancy rate of hotels in Kendari. The methods used are Continuous Wavelet Transformation and Partial Least Squares. The result of this research is a regression model that is free of multicollinearity and a pattern of data that resolves the presence of outliers.
Water Quantity Prediction Using Least Squares Support Vector Machines (LS-SVM) Method
Directory of Open Access Journals (Sweden)
Nian Zhang
2014-08-01
Full Text Available The impact of reliable estimation of stream flows at highly urbanized areas and the associated receiving waters is very important for water resources analysis and design. We used a least squares support vector machine (LS-SVM) based algorithm to forecast future streamflow discharge. A Gaussian Radial Basis Function (RBF) kernel framework was built on the data set to optimize the tuning parameters and to obtain the moderated output. The training process of the LS-SVM was designed to select both kernel parameters and regularization constants. The USGS real-time water data were used as time series input. 50% of the data were used for training, and 50% were used for testing. The experimental results showed that the LS-SVM algorithm is a reliable and efficient method for streamflow prediction, which has an important impact on the water resource management field.
Li, Wen-Tao; Huang, Lin-Fang; Du, Jing; Chen, Shi-Lin
2013-10-01
A total of eleven ecological factors values were obtained from the ecological suitability database of the geographic information system for traditional Chinese medicines production areas (TCM-GIS), and the relationships between the chemical components of Dendrobium and the ecological factors were analyzed by partial least square (PLS) regression. There existed significant differences in the chemical components contents of the same species of Dendrobium in different areas. The polysaccharides content of D. officinale had significant positive correlation with soil type, the accumulated dendrobine in D. nobile was significantly positively correlated with annual precipitation, and the erianin content of D. chrysotoxum was mainly affected by air temperature. The principal component analysis (PCA) showed that Zhejiang Province was the optimal production area for D. officinale, Guizhou Province was the most appropriate planting area for D. nobile, and Yunnan Province was the best production area of D. chrysotoxum.
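The PLS regression step used in studies like this one is commonly implemented with the NIPALS algorithm. Below is a minimal PLS1 (single response) sketch as described in chemometrics texts; the variable names and synthetic data are illustrative, not the study's eleven ecological factors.

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """NIPALS-style PLS1: returns regression coefficients for centered data."""
    Xc, yc = X - X.mean(axis=0), y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w /= np.linalg.norm(w)            # weight vector
        t = Xc @ w                        # scores
        p = Xc.T @ t / (t @ t)            # X loadings
        q = (yc @ t) / (t @ t)            # y loading
        Xc = Xc - np.outer(t, p)          # deflate X
        yc = yc - q * t                   # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W).T, np.array(P).T, np.array(Q)
    return W @ np.linalg.solve(P.T @ W, Q)

# PLS tolerates near-collinear predictors, which break ordinary inversion.
rng = np.random.default_rng(3)
X = rng.standard_normal((50, 5))
X[:, 4] = X[:, 0] + 0.01 * rng.standard_normal(50)   # nearly collinear column
beta = np.array([1.0, 0.5, 0.0, 0.0, 0.0])
y = X @ beta
B = pls1_fit(X, y, n_components=5)
pred = (X - X.mean(axis=0)) @ B + y.mean()
print(np.max(np.abs(pred - y)) < 1e-6)
```

With fewer components than predictors, PLS trades a small amount of fit for stability; with the full number of components (as here, on noiseless data) it reproduces the ordinary least squares fit.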
Montgomery, R. C.; Sundararajan, N.
1984-01-01
The basic theory of least square lattice filters and their use in identification of structural dynamics systems is summarized. Thereafter, this theory is applied to a two-dimensional grid structure made of overlapping bars. Previously, this theory has been applied to an integral beam. System identification results are presented for both simulated and experimental tests and they are compared with those predicted using finite element modelling. The lattice filtering approach works well for simulated data based on finite element modelling. However, considerable discrepancy exists between estimates obtained from experimental data and the finite element analysis. It is believed that this discrepancy is the result of inadequacies in the finite element modelling to represent the damped motion of the laboratory apparatus.
A Novel Soft Sensor Modeling Approach Based on Least Squares Support Vector Machines
Institute of Scientific and Technical Information of China (English)
Feng Rui(冯瑞); Song Chunlin; Zhang Yanzhu; Shao Huihe
2004-01-01
Artificial Neural Networks (ANNs) such as radial basis function neural networks (RBFNNs) have been successfully used in soft sensor modeling. However, the generalization ability of conventional ANNs is not very good. For this reason, we present a novel soft sensor modeling approach based on Support Vector Machines (SVMs). Since standard SVMs have limitations of speed and size when training on large data sets, we hereby propose Least Squares Support Vector Machines (LS-SVMs) and apply them to soft sensor modeling. Systematic analysis is performed and the result indicates that the proposed method provides satisfactory performance with excellent approximation and generalization properties. Monte Carlo simulations show that our soft sensor modeling approach achieves performance superior to the conventional method based on RBFNNs.
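The practical point behind LS-SVM's speed, that training reduces to solving one linear system rather than a quadratic program, can be shown in a compact sketch (a Suykens-style formulation; the kernel and parameter choices here are my assumptions):

```python
import numpy as np

def rbf_kernel(X1, X2, sigma):
    """Gaussian RBF kernel matrix between two sets of points."""
    d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=0.5):
    """LS-SVM regression: one linear solve instead of a quadratic program.

    Solves [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
    """
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    rhs = np.concatenate([[0.0], y])
    sol = np.linalg.solve(M, rhs)
    return sol[0], sol[1:]                 # bias b, dual weights alpha

def lssvm_predict(Xq, X, b, alpha, sigma=0.5):
    return rbf_kernel(Xq, X, sigma) @ alpha + b

# Fit a smooth 1-D target and check the training error.
X = np.linspace(-2, 2, 60)[:, None]
y = np.sinc(X[:, 0])
b, alpha = lssvm_fit(X, y)
err = np.max(np.abs(lssvm_predict(X, X, b, alpha) - y))
print(round(float(err), 4))
```

The cost of this simplicity is that every training point carries a nonzero dual weight, so LS-SVM gives up the sparsity of the standard SVM.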
On the Total Least Squares Problem
Institute of Scientific and Technical Information of China (English)
魏木生; 朱超
2002-01-01
The total least squares (TLS) is a method of solving an overdetermined system of linear equations AX = B that is appropriate when there are errors in both A and B. Golub and Van Loan (G. H. Golub and C. F. Van Loan, SIAM J. Numer. Anal. 17 (1980), 883-893) introduced this method into the field of numerical analysis and developed an algorithm based on the singular value decomposition. M. Wei (M. Wei, Numer. Math. 62 (1992), 123-148) proposed a new definition for the TLS problem. In this paper, we discuss the relations between the two definitions. As a result, one can see that the latter definition is a generalization of the former one.
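The SVD-based construction attributed to Golub and Van Loan can be sketched for the single right-hand-side case, assuming the smallest singular value of the augmented matrix [A b] is simple and the solution exists:

```python
import numpy as np

def tls(A, b):
    """Total least squares solution of Ax ~= b via the SVD of [A b]."""
    n = A.shape[1]
    C = np.column_stack([A, b])          # augmented matrix [A b]
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                           # right singular vector for sigma_min
    return -v[:n] / v[n]                 # x_tls from the partitioned vector

# With errors in both A and b, TLS and ordinary LS generally differ;
# on consistent, error-free data both recover the exact solution.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 3))
x_true = np.array([1.0, -2.0, 0.5])
b = A @ x_true
print(np.round(tls(A, b), 6))  # close to [1, -2, 0.5]
```

Geometrically, TLS finds the closest rank-deficient perturbation of [A b] in the Frobenius norm, whereas ordinary least squares perturbs b alone.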
Underwater terrain positioning method based on least squares estimation for AUV
Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing
2015-12-01
To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for underwater terrain-matching positioning is established using multi-beam data as underwater terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed for defects of normal terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for pseudo-localization appearing in flat areas of the topographic features, effectively reducing the impact of pseudo-positioning points on matching accuracy and improving the positioning accuracy in flat terrain areas. Simulation experiments based on electronic chart and multi-beam sea trial data show that drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirement of accurate underwater positioning.
Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model
Yu, Lean; Wang, Shouyang; Lai, K. K.
Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weight should be given to those classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors for important classes than to those for unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.
Abbasi Tarighat, Maryam; Nabavi, Masoume; Mohammadizadeh, Mohammad Reza
2015-06-01
A new multi-component analysis method based on zero-crossing point-continuous wavelet transformation (CWT) was developed for simultaneous spectrophotometric determination of Cu2+ and Pb2+ ions based on complex formation with 2-benzyl espiro[isoindoline-1,5oxasolidine]-2,3,4 trione (BSIIOT). The absorption spectra were evaluated with respect to synthetic ligand concentration, time of complexation and pH. Accordingly, 0.015 mmol L-1 BSIIOT, 10 min after mixing and pH 8.0 were used as optimum values. The complex formation between the BSIIOT ligand and the cations Cu2+ and Pb2+ was investigated by application of rank annihilation factor analysis (RAFA). Daubechies-4 (db4), discrete Meyer (dmey), Morlet (morl) and Symlet-8 (sym8) continuous wavelet transforms were found to be suitable among the wavelet families for signal treatment. The applicability of the new synthetic ligand and the selected mother wavelets was demonstrated for the simultaneous determination of strongly overlapped spectra of species without any chemical pre-treatment. CWT signals together with the zero-crossing technique were therefore directly applied to the overlapping absorption spectra of Cu2+ and Pb2+. The calibration graphs for estimation of Pb2+ and Cu2+ were obtained by measuring the CWT amplitudes at the zero-crossing points of Cu2+ and Pb2+, respectively, in the wavelet domain. The proposed method was validated by simultaneous determination of Cu2+ and Pb2+ ions in red beans, walnut, rice, tea and soil samples. The results obtained with the proposed method were compared with those predicted by partial least squares (PLS) and flame atomic absorption spectrophotometry (FAAS).
The use of least squares methods in functional optimization of energy use prediction models
Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.
2012-06-01
The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters such as the overall heat transfer coefficients of the wall, roof and glazing, the glazing percentage, and the building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base case mosque model in the VisualDOE software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated in order to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis. The objective is to minimize the difference between the correlation-predicted results and the VisualDOE-simulation results. It was found that the resulting method is able to come up with coefficients for the proposed correlation that reduce the difference between the simulated and predicted results to about 0.81%. In terms of the effects of the various parameters, the newly-defined weighted surface area parameter was found to have the greatest effect on the normalized annual energy use. Insulating the roofs and walls also had a major effect on the building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on the expected energy use. On the other hand, the method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.
An Effective Hybrid Artificial Bee Colony Algorithm for Nonnegative Linear Least Squares Problems
Directory of Open Access Journals (Sweden)
Xiangyu Kong
2014-07-01
An effective hybrid artificial bee colony algorithm is proposed in this paper for nonnegative linear least squares problems. To further improve the performance of the algorithm, an orthogonal initialization method is employed to generate the initial swarm, and a new search mechanism is designed to balance the exploration and exploitation abilities. The performance of the algorithm is verified on 27 benchmark functions and 5 nonnegative linear least squares test problems, with comparative analyses against other swarm intelligence algorithms. Numerical results demonstrate that the proposed algorithm performs well compared with other algorithms on both global optimization problems and nonnegative linear least squares problems.
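For context, the problem class itself (min ||Ax − b||₂ subject to x ≥ 0) also admits simple deterministic solvers that metaheuristics are typically benchmarked against. The projected-gradient sketch below is such a baseline, not the bee colony algorithm; the matrix and vector are illustrative.

```python
import numpy as np

def nnls_pg(A, b, iters=5000, tol=1e-10):
    """Nonnegative least squares min ||Ax - b||_2 s.t. x >= 0, solved by
    projected gradient descent (a simple deterministic baseline)."""
    L = np.linalg.norm(A, 2) ** 2           # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        g = A.T @ (A @ x - b)               # gradient of (1/2)||Ax - b||^2
        x_new = np.maximum(x - g / L, 0.0)  # gradient step, then project onto x >= 0
        if np.linalg.norm(x_new - x) < tol:
            break
        x = x_new
    return x

# Small illustrative problem: the unconstrained least squares solution
# has a negative entry, so the nonnegativity constraint is active.
A = np.array([[1.0, 0.5], [0.5, 1.0], [1.0, 1.0]])
b = np.array([1.0, -0.5, 0.5])
x = nnls_pg(A, b)
```

For this problem the constrained optimum is x = (5/9, 0): the second component is clamped to the boundary, while the first solves the reduced one-variable normal equation.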
On the interpretation of least squares collocation. [for geodetic data reduction
Tapley, B. D.
1976-01-01
A demonstration is given of the strict mathematical equivalence between least squares collocation and the classical minimum variance estimates. It is shown that the least squares collocation algorithms are a special case of the modified minimum variance estimates. The computational efficiency of several forms of the general minimum variance estimation algorithm is discussed. It is pointed out that for certain geodetic applications the least squares collocation algorithm may provide a computationally more efficient formulation of the results.
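The stated equivalence can be checked numerically on a toy problem: the collocation estimate C_st C_tt⁻¹ l coincides with the minimum-variance estimate whose weights solve the normal equations C_tt w = C_ts. The covariance function, observation points and data below are illustrative assumptions, not a geodetic model.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy 1-D covariance model (illustrative assumption)
t = np.array([0.0, 1.0, 2.5, 4.0])            # observation points
s = np.array([1.7])                            # prediction point
cov = lambda a, b: np.exp(-0.5 * np.subtract.outer(a, b) ** 2)
C_tt = cov(t, t) + 0.1 * np.eye(len(t))        # data covariance incl. noise
C_st = cov(s, t)                               # signal/data cross-covariance
l = rng.normal(size=len(t))                    # observed data vector

# Least squares collocation estimate: C_st C_tt^{-1} l
s_colloc = (C_st @ np.linalg.solve(C_tt, l))[0]

# Minimum-variance estimate: weights w minimizing the error variance
# Var = w^T C_tt w - 2 w^T C_ts + C_ss  =>  normal equations C_tt w = C_ts
w = np.linalg.solve(C_tt, C_st.ravel())
s_minvar = w @ l
```

Both routes evaluate the same linear estimator, which is the equivalence the abstract refers to; they differ only in which linear system is solved first.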
DEFF Research Database (Denmark)
Anders, Annett; Nishijima, Kazuyoshi
The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach is based on the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option...... the improvement of the computational efficiency is to “best utilize” the least squares method; i.e. the least squares method is applied for estimating the expected utility for terminal decisions, conditional on realizations of the underlying random phenomena at the respective times, in a parametric way. The implementation......
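The LSM regression step referred to above can be sketched on the classic example from Longstaff & Schwartz: pricing a Bermudan put by regressing discounted continuation values on the current state at each decision date. All market parameters below are illustrative, and the quadratic polynomial basis is an assumption.

```python
import numpy as np

rng = np.random.default_rng(2)

# Illustrative Bermudan put under geometric Brownian motion
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
steps, paths = 50, 20000
dt = T / steps

# Simulated price paths, shape (steps, paths); S[t] is the price at (t+1)*dt
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * rng.standard_normal((steps, paths)),
                          axis=0))
payoff = lambda s: np.maximum(K - s, 0.0)

cash = payoff(S[-1])                       # value if held to maturity
for t in range(steps - 2, -1, -1):
    cash *= np.exp(-r * dt)                # discount continuation value to date t
    itm = payoff(S[t]) > 0                 # regress on in-the-money paths only
    if itm.sum() > 10:
        # Least squares step: continuation value as a quadratic in the state
        coef = np.polyfit(S[t, itm], cash[itm], 2)
        cont = np.polyval(coef, S[t, itm])
        ex = payoff(S[t, itm]) > cont      # exercise where immediate value wins
        cash[itm] = np.where(ex, payoff(S[t, itm]), cash[itm])

price = np.exp(-r * dt) * cash.mean()
```

The parametric regression at each date is what makes the backward induction tractable, and it is this estimation-of-conditional-expectation step that the paper adapts to expected utilities in real-time civil engineering decisions.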
A least squares finite element scheme for transonic flow around harmonically oscillating airfoils
Cox, C. L.; Fix, G. J.; Gunzburger, M. D.
1983-01-01
The present investigation shows that a finite element scheme with a weighted least squares variational principle is applicable to the problem of transonic flow around a harmonically oscillating airfoil. For the flat-plate case, numerical results compare favorably with the exact solution. The numerical results obtained for the transonic problem, for which an exact solution is not known, have the characteristics of known experimental results. It is demonstrated that the performance of the method is independent of equation type (elliptic or hyperbolic) and frequency. The weighted least squares principle allows appropriate modeling of singularities, which is not possible with ordinary least squares.
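The idea of a weighted least squares variational principle can be illustrated on a minimal 1-D analogue (not the transonic finite element scheme): assemble discrete residual equations for u′ = cos(x), attach a heavily weighted row enforcing the boundary condition u(0) = 0, and minimize the weighted residual norm. The grid, weight and test equation are assumptions for illustration.

```python
import numpy as np

n = 101
x = np.linspace(0.0, np.pi, n)
h = x[1] - x[0]

# Residual rows for u'(x_i) = cos(x_i): centered differences inside,
# one-sided differences at the two ends.
rows, rhs = [], []
for i in range(n):
    r = np.zeros(n)
    if i == 0:
        r[0], r[1] = -1.0 / h, 1.0 / h
    elif i == n - 1:
        r[n - 2], r[n - 1] = -1.0 / h, 1.0 / h
    else:
        r[i - 1], r[i + 1] = -1.0 / (2 * h), 1.0 / (2 * h)
    rows.append(r)
    rhs.append(np.cos(x[i]))

# Weighted boundary-condition row: strongly enforce u(0) = 0
bc = np.zeros(n)
bc[0] = 1.0
weight = 100.0
rows.append(weight * bc)
rhs.append(0.0)

A, b = np.array(rows), np.array(rhs)
u, *_ = np.linalg.lstsq(A, b, rcond=None)   # minimize the weighted residual norm
err = np.max(np.abs(u - np.sin(x)))          # exact solution is sin(x)
```

The weights are the adjustable part of the principle: rows near a singularity (or a boundary) can be emphasized without changing the discretization itself, which is the flexibility the abstract credits to the weighted formulation.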
Least-squares methods involving the H{sup -1} inner product
Energy Technology Data Exchange (ETDEWEB)
Pasciak, J.
1996-12-31
Least-squares methods have been shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H{sup -1} norm. Such norms give rise to improved convergence estimates and better approximation of problems with low-regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H{sup -1} inner product.
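A discrete sketch of why an H⁻¹-type norm behaves differently from L2: weighting residuals by the inverse of a discrete Laplacian damps high-frequency residual components, which is what makes such functionals forgiving of low-regularity solutions. The 1-D Dirichlet Laplacian below is only a toy discretization, not the methods discussed in the talk.

```python
import numpy as np

# 1-D Dirichlet Laplacian on a uniform interior grid (toy discretization)
n = 200
h = 1.0 / (n + 1)
L = (np.diag(2.0 * np.ones(n))
     - np.diag(np.ones(n - 1), 1)
     - np.diag(np.ones(n - 1), -1)) / h**2

x = np.linspace(h, 1.0 - h, n)
r_smooth = np.sin(np.pi * x)         # low-frequency residual
r_rough = np.sin(40 * np.pi * x)     # high-frequency residual, same L2 size

# Discrete H^{-1}-type norm: ||r||_{H^-1} ~ (r, L^{-1} r)^{1/2}
hm1 = lambda r: np.sqrt(r @ np.linalg.solve(L, r))
```

Both residuals have the same L2 norm, yet the H⁻¹-type norm of the oscillatory one is smaller by roughly the frequency ratio, so a least-squares functional in this norm does not over-penalize rough residual components.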
Dutta, Gaurav
2013-08-20
Attenuation leads to distortion of amplitude and phase of seismic waves propagating inside the earth. Conventional acoustic and least-squares reverse time migration do not account for this distortion which leads to defocusing of migration images in highly attenuative geological environments. To account for this distortion, we propose to use the visco-acoustic wave equation for least-squares reverse time migration. Numerical tests on synthetic data show that least-squares reverse time migration with the visco-acoustic wave equation corrects for this distortion and produces images with better balanced amplitudes compared to the conventional approach. © 2013 SEG.