WorldWideScience

Sample records for carlo library least-squares

  1. A Monte Carlo Library Least Square approach in the Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) process in bulk coal samples

    Science.gov (United States)

    Reyhancan, Iskender Atilla; Ebrahimi, Alborz; Çolak, Üner; Erduran, M. Nizamettin; Angin, Nergis

    2017-01-01

    A new Monte Carlo Library Least Square (MCLLS) approach for treating the non-linear radiation analysis problem in Neutron Inelastic-scattering and Thermal-capture Analysis (NISTA) was developed. 14 MeV neutrons were produced by a neutron generator via the 3H(2H,n)4He reaction. The prompt gamma-ray spectra from bulk samples of seven different materials were measured by a Bismuth Germanate (BGO) gamma detection system. Polyethylene was used as a neutron moderator, along with iron and lead as neutron and gamma-ray shielding, respectively. The gamma detection system was equipped with a list-mode data acquisition system which streams spectroscopy data directly to the computer, event by event. The GEANT4 simulation toolkit was used for generating the single-element libraries of all the elements of interest. These libraries were then used in a Linear Library Least Square (LLLS) approach to fit an unknown experimental sample spectrum with the calculated elemental libraries. GEANT4 simulation results were also used for the selection of the neutron shielding material.
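
    As an illustration of the linear library least-squares step described above, the sketch below fits a measured spectrum as a non-negative combination of single-element library spectra. All arrays are synthetic stand-ins for the GEANT4-generated libraries and the measured BGO spectrum; this is a sketch under those assumptions, not the authors' code.

```python
import numpy as np
from scipy.optimize import nnls

# Columns of A play the role of simulated single-element library spectra
# (counts per channel); y plays the role of the measured bulk-sample spectrum.
n_channels, n_elements = 1024, 7
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(n_channels, n_elements)))      # stand-in libraries
true_w = np.array([5.0, 0.0, 2.0, 1.0, 0.0, 3.0, 0.5])     # "elemental" contributions
y = A @ true_w + rng.normal(scale=0.1, size=n_channels)    # stand-in measured spectrum

# Linear library least squares with non-negative elemental weights.
w, residual_norm = nnls(A, y)
print("fitted library weights:", np.round(w, 2))
```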

  2. Optimization of sequential decisions by least squares Monte Carlo method

    DEFF Research Database (Denmark)

    Nishijima, Kazuyoshi; Anders, Annett

    change adaptation measures, and evacuation of people and assets in the face of an emerging natural hazard event. Focusing on the last example, an efficient solution scheme is proposed by Anders and Nishijima (2011). The proposed solution scheme takes basis in the least squares Monte Carlo method, which...

  3. Enhanced least squares Monte Carlo method for real-time decision optimizations for evolving natural hazards

    DEFF Research Database (Denmark)

    Anders, Annett; Nishijima, Kazuyoshi

    The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach takes basis in the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option...... the improvement of the computational efficiency is to “best utilize” the least squares method; i.e. least squares method is applied for estimating the expected utility for terminal decisions, conditional on realizations of underlying random phenomena at respective times in a parametric way. The implementation...
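
    For context, the Longstaff & Schwartz (2001) least squares Monte Carlo idea that this work builds on can be sketched as backward induction with a regression-based continuation value. A minimal illustration for a Bermudan put under geometric Brownian motion; all parameters are illustrative, and the decision problem of the paper (evacuation under an emerging natural hazard) is not modeled here.

```python
import numpy as np

# Illustrative parameters, not taken from the papers.
S0, K, r, sigma, T = 100.0, 100.0, 0.05, 0.2, 1.0
n_steps, n_paths = 50, 20000
dt = T / n_steps
disc = np.exp(-r * dt)

rng = np.random.default_rng(1)
z = rng.standard_normal((n_paths, n_steps))
S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt + sigma * np.sqrt(dt) * z, axis=1))

payoff = np.maximum(K - S, 0.0)          # Bermudan put payoff at each exercise date
value = payoff[:, -1]                    # cash flow if never exercised early
for t in range(n_steps - 2, -1, -1):
    value = disc * value                 # discount future cash flows one step back
    itm = payoff[:, t] > 0.0             # regress only on in-the-money paths
    if itm.sum() < 10:
        continue
    coeffs = np.polyfit(S[itm, t], value[itm], deg=2)   # least squares continuation fit
    continuation = np.polyval(coeffs, S[itm, t])
    exercise = payoff[itm, t] > continuation
    value[itm] = np.where(exercise, payoff[itm, t], value[itm])

price = disc * value.mean()
print("LSM Bermudan put estimate:", round(price, 3))
```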

  4. Canonical Least-Squares Monte Carlo Valuation of American Options: Convergence and Empirical Pricing Analysis

    Directory of Open Access Journals (Sweden)

    Xisheng Yu

    2014-01-01

    Full Text Available The paper by Liu (2010) introduces a method termed canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide the convergence results of CLM and numerically examine the convergence properties. Then, a comparative analysis is conducted empirically using a large sample of S&P 100 Index (OEX) puts and IBM puts. The results on the convergence show that choosing the shifted Legendre polynomials with four regressors is more appropriate considering the pricing accuracy and the computational cost. With this choice, the CLM method is empirically demonstrated to be superior to the benchmark methods of binomial tree and finite difference with historical volatilities.

  5. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

    Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify clearly the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.
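
    The CVA aggregation step described above is commonly written as CVA ≈ (1 - R) * sum_i D(t_i) * EE(t_i) * ΔPD(t_i), with the expected exposure EE taken from simulated swap values. A minimal sketch of that step, with synthetic stand-ins for the exposure paths and a flat hazard rate; none of these inputs come from the paper.

```python
import numpy as np

rng = np.random.default_rng(2)
n_paths, n_dates = 5000, 40
times = np.linspace(0.25, 10.0, n_dates)       # quarterly grid out to 10 years
r, recovery, hazard = 0.03, 0.4, 0.02          # flat discount rate and hazard rate

# Placeholder for simulated swap values V(t_i) on each path
# (in practice these would come from the least squares Monte Carlo regressions).
V = rng.normal(loc=0.0, scale=2.0, size=(n_paths, n_dates))

EE = np.maximum(V, 0.0).mean(axis=0)           # expected positive exposure
D = np.exp(-r * times)                         # discount factors
surv = np.exp(-hazard * np.concatenate(([0.0], times)))
dPD = surv[:-1] - surv[1:]                     # default probability per interval

CVA = (1.0 - recovery) * np.sum(D * EE * dPD)
print("unilateral CVA estimate:", round(CVA, 4))
```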

  6. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
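
    For context, the classical (non-Bayesian) LSD profile is the weighted linear least squares solution Z = (M^T W M)^(-1) M^T W V, with M the line-pattern matrix and W the inverse noise variances. A minimal sketch with synthetic inputs; the Gaussian process prior introduced in the paper is not included.

```python
import numpy as np

rng = np.random.default_rng(3)
n_pixels, n_bins = 4000, 60

# M maps the common profile onto the observed spectrum through the line mask;
# here it is filled with random weights purely as a placeholder.
M = rng.uniform(0.0, 0.2, size=(n_pixels, n_bins))
sigma = rng.uniform(0.8e-4, 1.2e-4, size=n_pixels)          # per-pixel noise
Z_true = 1e-3 * np.exp(-np.linspace(-3, 3, n_bins) ** 2)    # synthetic common profile
V = M @ Z_true + sigma * rng.standard_normal(n_pixels)      # observed spectrum

W = 1.0 / sigma**2                    # inverse-variance weights
A = (M.T * W) @ M                     # M^T W M
b = M.T @ (W * V)                     # M^T W V
Z = np.linalg.solve(A, b)             # classical LSD profile
```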

  7. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  8. A SUCCESSIVE LEAST SQUARES METHOD FOR STRUCTURED TOTAL LEAST SQUARES

    Institute of Scientific and Technical Information of China (English)

    Plamen Y. Yalamov; Jin-yun Yuan

    2003-01-01

    A new method for Total Least Squares (TLS) problems is presented. It differs from previous approaches and is based on the solution of successive Least Squares problems. The method is quite suitable for Structured TLS (STLS) problems. We study mostly the case of Toeplitz matrices in this paper. The numerical tests illustrate that the method converges fast to the solution for Toeplitz STLS problems. Since the method is designed for general TLS problems, other structured problems can be treated similarly.

  9. AKLSQF - LEAST SQUARES CURVE FITTING

    Science.gov (United States)

    Kantak, A. V.

    1994-01-01

    The Least Squares Curve Fitting program, AKLSQF, computes the polynomial which will least square fit uniformly spaced data easily and efficiently. The program allows the user to specify the tolerable least squares error in the fitting or allows the user to specify the polynomial degree. In both cases AKLSQF returns the polynomial and the actual least squares fit error incurred in the operation. The data may be supplied to the routine either by direct keyboard entry or via a file. AKLSQF produces the least squares polynomial in two steps. First, the data points are least squares fitted using the orthogonal factorial polynomials. The result is then reduced to a regular polynomial using Stirling numbers of the first kind. If an error tolerance is specified, the program starts with a polynomial of degree 1 and computes the least squares fit error. The degree of the polynomial used for fitting is then increased successively until the error criterion specified by the user is met. At every step the polynomial as well as the least squares fitting error is printed to the screen. In general, the program can produce a curve fitting up to a 100 degree polynomial. All computations in the program are carried out under Double Precision format for real numbers and under long integer format for integers to provide the maximum accuracy possible. AKLSQF was written for an IBM PC X/AT or compatible using Microsoft's Quick Basic compiler. It has been implemented under DOS 3.2.1 using 23K of RAM. AKLSQF was developed in 1989.
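
    The degree-escalation loop described above can be mimicked with standard tools. A minimal sketch, assuming numpy in place of AKLSQF's orthogonal factorial polynomials and Stirling-number reduction:

```python
import numpy as np

def fit_until_tolerance(x, y, tol, max_degree=100):
    """Raise the polynomial degree until the least squares error meets tol."""
    for degree in range(1, max_degree + 1):
        coeffs = np.polynomial.polynomial.polyfit(x, y, degree)
        residual = y - np.polynomial.polynomial.polyval(x, coeffs)
        lsq_error = float(np.sum(residual**2))
        if lsq_error <= tol:
            break
    return degree, coeffs, lsq_error

x = np.linspace(0.0, 1.0, 50)                        # uniformly spaced data
y = np.sin(2 * np.pi * x) + 0.01 * np.random.default_rng(4).standard_normal(50)
degree, coeffs, err = fit_until_tolerance(x, y, tol=1e-2)
print("degree used:", degree, " least squares error:", err)
```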

  10. Quasi-least squares regression

    CERN Document Server

    Shults, Justine

    2014-01-01

    Drawing on the authors' substantial expertise in modeling longitudinal and clustered data, Quasi-Least Squares Regression provides a thorough treatment of quasi-least squares (QLS) regression-a computational approach for the estimation of correlation parameters within the framework of generalized estimating equations (GEEs). The authors present a detailed evaluation of QLS methodology, demonstrating the advantages of QLS in comparison with alternative methods. They describe how QLS can be used to extend the application of the traditional GEE approach to the analysis of unequally spaced longitu

  11. Bayesian Sparse Partial Least Squares

    NARCIS (Netherlands)

    Vidaurre, D.; Gerven, M.A.J. van; Bielza, C.; Larrañaga, P.; Heskes, T.M.

    2013-01-01

    Partial least squares (PLS) is a class of methods that makes use of a set of latent or unobserved variables to model the relation between (typically) two sets of input and output variables, respectively. Several flavors, depending on how the latent variables or components are computed, have been dev

  12. Monte Carlo method of least squares fitting of experimental data

    Institute of Scientific and Technical Information of China (English)

    颜清; 彭小平

    2011-01-01

    Fitting chemical engineering experimental data with the least squares method gives a correlation coefficient close to 1 and high precision, but the resulting fit can differ markedly from the empirical correlation. The Monte Carlo method is a non-deterministic numerical method based on a probabilistic model. Monte Carlo least squares fitting of chemical engineering experimental data is more flexible in application and has a wider scope. In an Excel spreadsheet, Monte Carlo least squares data fitting is easily accomplished by combining worksheet data with VBA programming: VBA handles the data communication with the Excel worksheet and the parallel processing of the experimental data, reads the experimental data from the worksheet, computes the approximate search range of the random points, performs the least squares statistical analysis, and writes the results back to the worksheet. The Monte Carlo least squares fitting method attains the same precision as the standard least squares method, in accordance with the law of large numbers, and its accuracy improves markedly as the number of random points increases. When the number of random search points is small the error is noticeable, but with 10,000 random search points its accuracy is almost the same as that of the least squares method. At the same time, the fitted equations can be brought very close to the empirical correlations, unifying theory and the experimental results.
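
    A hedged Python analogue of the workflow described above (the paper itself works in Excel with VBA): candidate parameters are drawn at random inside a search box and the draw with the smallest sum of squared residuals is kept. The model function and bounds are illustrative, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

def model(x, a, b):
    """Illustrative two-parameter correlation form (not from the paper)."""
    return a * np.exp(b * x)

x = np.linspace(0.0, 2.0, 30)
y = model(x, 3.0, 0.8) + 0.05 * rng.standard_normal(x.size)

def monte_carlo_least_squares(x, y, bounds, n_points=10_000):
    """Random search: keep the parameter draw with the smallest squared error."""
    best_params, best_sse = None, np.inf
    for _ in range(n_points):
        a = rng.uniform(*bounds[0])
        b = rng.uniform(*bounds[1])
        sse = np.sum((y - model(x, a, b)) ** 2)
        if sse < best_sse:
            best_params, best_sse = (a, b), sse
    return best_params, best_sse

params, sse = monte_carlo_least_squares(x, y, bounds=[(0.0, 10.0), (0.0, 2.0)])
print("best parameters:", params, " sum of squared residuals:", round(sse, 4))
```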

  13. Partial least squares methods: partial least squares correlation and partial least square regression.

    Science.gov (United States)

    Abdi, Hervé; Williams, Lynne J

    2013-01-01

    Partial least square (PLS) methods (also sometimes called projection to latent structures) relate the information present in two data tables that collect measurements on the same set of observations. PLS methods proceed by deriving latent variables which are (optimal) linear combinations of the variables of a data table. When the goal is to find the shared information between two tables, the approach is equivalent to a correlation problem and the technique is then called partial least square correlation (PLSC) (also sometimes called PLS-SVD). In this case there are two sets of latent variables (one set per table), and these latent variables are required to have maximal covariance. When the goal is to predict one data table from the other one, the technique is then called partial least square regression. In this case there is one set of latent variables (derived from the predictor table) and these latent variables are required to give the best possible prediction. In this paper we present and illustrate PLSC and PLSR and show how these descriptive multivariate analysis techniques can be extended to deal with inferential questions by using cross-validation techniques such as the bootstrap and permutation tests.
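
    A minimal PLS regression sketch for the prediction setting described above, using scikit-learn's PLSRegression on synthetic tables X and Y; the inferential bootstrap and permutation steps are not shown.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(6)
n_obs, n_x, n_y = 100, 20, 3
X = rng.standard_normal((n_obs, n_x))                 # predictor table
B = rng.standard_normal((n_x, n_y))
Y = X @ B + 0.1 * rng.standard_normal((n_obs, n_y))   # table to be predicted

pls = PLSRegression(n_components=5)   # 5 latent variables derived from X
pls.fit(X, Y)
Y_hat = pls.predict(X)
print("R^2 on the training tables:", round(pls.score(X, Y), 3))
```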

  14. Nonlinear Least Squares for Inverse Problems

    CERN Document Server

    Chavent, Guy

    2009-01-01

    Presents an introduction to the least squares resolution of nonlinear inverse problems. This title intends to develop a geometrical theory to analyze nonlinear least square (NLS) problems with respect to their quadratic wellposedness, that is, both wellposedness and optimizability

  15. Collinearity in Least-Squares Analysis

    Science.gov (United States)

    de Levie, Robert

    2012-01-01

    How useful are the standard deviations per se, and how reliable are results derived from several least-squares coefficients and their associated standard deviations? When the output parameters obtained from a least-squares analysis are mutually independent, as is often assumed, they are reliable estimators of imprecision and so are the functions…

  16. Tikhonov Regularization and Total Least Squares

    DEFF Research Database (Denmark)

    Golub, G. H.; Hansen, Per Christian; O'Leary, D. P.

    2000-01-01

    formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that...

  17. The Monte Carlo validation framework for the discriminant partial least squares model extended with variable selection methods applied to authenticity studies of Viagra® based on chromatographic impurity profiles.

    Science.gov (United States)

    Krakowska, B; Custers, D; Deconinck, E; Daszykowski, M

    2016-02-07

    The aim of this work was to develop a general framework for the validation of discriminant models based on the Monte Carlo approach that is used in the context of authenticity studies based on chromatographic impurity profiles. The performance of the validation approach was applied to evaluate the usefulness of the diagnostic logic rule obtained from the partial least squares discriminant model (PLS-DA) that was built to discriminate authentic Viagra® samples from counterfeits (a two-class problem). The major advantage of the proposed validation framework stems from the possibility of obtaining distributions for different figures of merit that describe the PLS-DA model such as, e.g., sensitivity, specificity, correct classification rate and area under the curve in a function of model complexity. Therefore, one can quickly evaluate their uncertainty estimates. Moreover, the Monte Carlo model validation allows balanced sets of training samples to be designed, which is required at the stage of the construction of PLS-DA and is recommended in order to obtain fair estimates that are based on an independent set of samples. In this study, as an illustrative example, 46 authentic Viagra® samples and 97 counterfeit samples were analyzed and described by their impurity profiles that were determined using high performance liquid chromatography with photodiode array detection and further discriminated using the PLS-DA approach. In addition, we demonstrated how to extend the Monte Carlo validation framework with four different variable selection schemes: the elimination of uninformative variables, the importance of a variable in projections, selectivity ratio and significance multivariate correlation. The best PLS-DA model was based on a subset of variables that were selected using the variable importance in the projection approach. For an independent test set, average estimates with the corresponding standard deviation (based on 1000 Monte Carlo runs) of the correct
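
    The Monte Carlo validation idea, repeated random splits yielding a distribution for each figure of merit, can be sketched as follows with scikit-learn's PLSRegression used as a simple PLS-DA classifier on synthetic two-class data; the chromatographic data, class balancing and variable selection schemes of the study are not reproduced.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(7)
X = np.vstack([rng.normal(0.0, 1.0, (46, 30)),        # stand-in "authentic" profiles
               rng.normal(0.7, 1.0, (97, 30))])       # stand-in "counterfeit" profiles
y = np.array([0] * 46 + [1] * 97)

ccr = []                                              # correct classification rate per run
for seed in range(200):                               # Monte Carlo resampling runs
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.3, stratify=y, random_state=seed)
    model = PLSRegression(n_components=3).fit(X_tr, y_tr)
    y_pred = (model.predict(X_te).ravel() > 0.5).astype(int)
    ccr.append(np.mean(y_pred == y_te))

print("CCR mean +/- sd:", round(np.mean(ccr), 3), round(np.std(ccr), 3))
```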

  18. Least Squares Data Fitting with Applications

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela

    predictively. The main concern of Least Squares Data Fitting with Applications is how to do this on a computer with efficient and robust computational methods for linear and nonlinear relationships. The presentation also establishes a link between the statistical setting and the computational issues...... with problems of linear and nonlinear least squares fitting will find this book invaluable as a hands-on guide, with accessible text and carefully explained problems. Included are • an overview of computational methods together with their properties and advantages • topics from statistical regression analysis......As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data...

  19. Partial update least-square adaptive filtering

    CERN Document Server

    Xie, Bei

    2014-01-01

    Adaptive filters play an important role in the fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity in implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster a

  20. Deformation analysis with Total Least Squares

    Directory of Open Access Journals (Sweden)

    M. Acar

    2006-01-01

    Full Text Available Deformation analysis is one of the main research fields in geodesy. The deformation analysis process comprises measurement and analysis phases. Measurements can be collected using several techniques. The output of the evaluation of the measurements is mainly point positions. In the deformation analysis phase, the coordinate changes in the point positions are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation where the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally a Least Squares (LS) technique is used for the transformation procedure. Another approach that could be introduced as an alternative methodology is the Total Least Squares (TLS), which is a relatively new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out individually by the Least Squares (LS) and the Total Least Squares (TLS) methods, respectively. The data used in this study were collected by the GPS technique in a landslide area located near Istanbul. The results obtained from these two approaches have been compared.

  1. Combinatorics of least-squares trees.

    Science.gov (United States)

    Mihaescu, Radu; Pachter, Lior

    2008-09-01

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  2. Consistent Partial Least Squares Path Modeling

    NARCIS (Netherlands)

    Dijkstra, Theo K.; Henseler, Jörg

    2015-01-01

    This paper resumes the discussion in information systems research on the use of partial least squares (PLS) path modeling and shows that the inconsistency of PLS path coefficient estimates in the case of reflective measurement can have adverse consequences for hypothesis testing. To remedy this, the

  3. Least-squares fitting Gompertz curve

    Science.gov (United States)

    Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf

    2004-08-01

    In this paper we consider the least-squares (LS) fitting of the Gompertz curve to the given nonconstant data (pi,ti,yi), i=1,...,m, m≥3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
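
    A minimal numerical counterpart, fitting a common three-parameter Gompertz form y = a*exp(-b*exp(-c*t)) by least squares with scipy; the weights p_i and the existence conditions analyzed in the paper are not treated here.

```python
import numpy as np
from scipy.optimize import curve_fit

def gompertz(t, a, b, c):
    return a * np.exp(-b * np.exp(-c * t))

rng = np.random.default_rng(8)
t = np.linspace(0.0, 10.0, 25)
y = gompertz(t, 5.0, 4.0, 0.6) + 0.05 * rng.standard_normal(t.size)

# A sensible initial approximation matters for convergence, as the paper stresses.
p0 = [y.max(), 2.0, 0.5]
params, cov = curve_fit(gompertz, t, y, p0=p0)
print("estimated (a, b, c):", np.round(params, 3))
```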

  4. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

    Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  5. Least square fitting with one parameter less

    CERN Document Server

    Berg, Bernd A

    2015-01-01

    It is shown that whenever the multiplicative normalization of a fitting function is not known, least square fitting by $\chi^2$ minimization can be performed with one parameter less than usual by converting the normalization parameter into a function of the remaining parameters and the data.

  6. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

    Full Text Available The study of dynamic equations on time scales is a new area of mathematics. Time scale calculus builds a bridge between the real numbers and the integers. Two derivatives have been introduced on time scales, called the delta and nabla derivatives. The delta derivative is defined in the forward direction and the nabla derivative in the backward direction. Within the scope of this study, we consider obtaining the parameters of a regression equation over integer values through time scales. We therefore implemented the least squares method according to the derivative definitions of the time scale and obtained the coefficients of the model. Here there are two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other. This situation corresponds to the total of the vertical deviations between the regression equations and the observation values of the forward and backward jump operators divided by two. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we give an introduction to the least squares method on time scales. We think that time scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.

  7. Least Squares Moving-Window Spectral Analysis.

    Science.gov (United States)

    Lee, Young Jong

    2017-01-01

    Least squares regression is proposed as a moving-windows method for analysis of a series of spectra acquired as a function of external perturbation. The least squares moving-window (LSMW) method can be considered an extended form of the Savitzky-Golay differentiation for nonuniform perturbation spacing. LSMW is characterized in terms of moving-window size, perturbation spacing type, and intensity noise. Simulation results from LSMW are compared with results from other numerical differentiation methods, such as single-interval differentiation, autocorrelation moving-window, and perturbation correlation moving-window methods. It is demonstrated that this simple LSMW method can be useful for quantitative analysis of nonuniformly spaced spectral data with high frequency noise.
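
    The core of LSMW is an ordinary least squares line fitted within each moving window along the perturbation axis, whose slope serves as the derivative estimate even for nonuniform spacing. A minimal sketch for a single spectral channel with illustrative data and window size:

```python
import numpy as np

def lsmw_derivative(perturbation, intensity, window=7):
    """Slope of a least squares line in each moving window (nonuniform spacing allowed)."""
    half = window // 2
    slopes = np.full(intensity.size, np.nan)
    for i in range(half, intensity.size - half):
        x = perturbation[i - half:i + half + 1]
        y = intensity[i - half:i + half + 1]
        slope, _ = np.polyfit(x, y, 1)     # first-order least squares fit
        slopes[i] = slope
    return slopes

rng = np.random.default_rng(9)
perturbation = np.sort(rng.uniform(0.0, 10.0, 80))     # nonuniformly spaced perturbation
intensity = np.sin(perturbation) + 0.05 * rng.standard_normal(80)
d_intensity = lsmw_derivative(perturbation, intensity)
```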

  8. Diagonal loading least squares time delay estimation

    Institute of Scientific and Technical Information of China (English)

    LI Xuan; YAN Shefeng; MA Xiaochuan

    2012-01-01

    Least squares (LS) time delay estimation is a classical and effective method. However, its performance degrades severely in low signal-to-noise ratio (SNR) scenarios owing to the instability of matrix inversion. In order to solve this problem, diagonal loading least squares (DL-LS) is proposed, in which a positive definite matrix is added to the matrix before inversion. Furthermore, the shortcoming of fixed diagonal loading is analyzed from the viewpoint of regularization: as the tolerance to low SNR is increased, accuracy is decreased. This problem is resolved by reloading. The reciprocal of the primary estimate is introduced as the diagonal loading, which leads to small loading at the time of arrival and larger loading at other times. Simulations and a pool experiment show that the algorithm has better performance.
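
    In matrix form the loading idea is the ridge-like solution x = (A^T A + lambda*I)^(-1) A^T b. A minimal sketch of that step with synthetic data; the time-delay signal model and the reloading rule of the paper are not reproduced.

```python
import numpy as np

def diagonally_loaded_ls(A, b, loading):
    """Least squares with diagonal loading: (A^T A + loading*I)^(-1) A^T b."""
    return np.linalg.solve(A.T @ A + loading * np.eye(A.shape[1]), A.T @ b)

rng = np.random.default_rng(10)
A = rng.standard_normal((200, 50))
A[:, 1] = A[:, 0] + 1e-6 * rng.standard_normal(200)    # nearly collinear columns
x_true = rng.standard_normal(50)
b = A @ x_true + 0.5 * rng.standard_normal(200)        # low-SNR observation

x_plain = np.linalg.lstsq(A, b, rcond=None)[0]         # unloaded least squares
x_loaded = diagonally_loaded_ls(A, b, loading=1.0)     # stabilized by loading
```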

  9. Efficient least-squares basket-weaving

    Science.gov (United States)

    Winkel, B.; Flöer, L.; Kraus, A.

    2012-11-01

    We report on a novel method to solve the basket-weaving problem. Basket-weaving is a technique that is used to remove scan-line patterns from single-dish radio maps. The new approach applies linear least-squares and works on gridded maps from arbitrarily sampled data, which greatly improves computational efficiency and robustness. It also allows masking of bad data, which is useful for cases where radio frequency interference is present in the data. We evaluate the algorithms using simulations and real data obtained with the Effelsberg 100-m telescope.

  10. Efficient least-squares basket-weaving

    CERN Document Server

    Winkel, B; Kraus, A

    2012-01-01

    We report on a novel method to solve the basket-weaving problem. Basket-weaving is a technique that is used to remove scan-line patterns from single-dish radio maps. The new approach applies linear least-squares and works on gridded maps from arbitrarily sampled data, which greatly improves computational efficiency and robustness. It also allows masking of bad data, which is useful for cases where radio frequency interference is present in the data. We evaluate the algorithms using simulations and real data obtained with the Effelsberg 100-m telescope.

  11. Meshfree First-order System Least Squares

    Institute of Scientific and Technical Information of China (English)

    Hugh R.MacMillan; Max D.Gunzburger; John V.Burkardt

    2008-01-01

    We prove convergence for a meshfree first-order system least squares (FOSLS) partition of unity finite element method (PUFEM). Essentially, by virtue of the partition of unity, local approximation gives rise to global approximation in H(div) ∩ H(curl). The FOSLS formulation yields local a posteriori error estimates to guide the judicious allotment of new degrees of freedom to enrich the initial point set in a meshfree discretization. Preliminary numerical results are provided and remaining challenges are discussed.

  12. Augmented Classical Least Squares Multivariate Spectral Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Haaland, David M. (Albuquerque, NM); Melgaard, David K. (Albuquerque, NM)

    2005-07-26

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  13. Augmented Classical Least Squares Multivariate Spectral Analysis

    Energy Technology Data Exchange (ETDEWEB)

    Haaland, David M. (Albuquerque, NM); Melgaard, David K. (Albuquerque, NM)

    2005-01-11

    A method of multivariate spectral analysis, termed augmented classical least squares (ACLS), provides an improved CLS calibration model when unmodeled sources of spectral variation are contained in a calibration sample set. The ACLS methods use information derived from component or spectral residuals during the CLS calibration to provide an improved calibration-augmented CLS model. The ACLS methods are based on CLS so that they retain the qualitative benefits of CLS, yet they have the flexibility of PLS and other hybrid techniques in that they can define a prediction model even with unmodeled sources of spectral variation that are not explicitly included in the calibration model. The unmodeled sources of spectral variation may be unknown constituents, constituents with unknown concentrations, nonlinear responses, non-uniform and correlated errors, or other sources of spectral variation that are present in the calibration sample spectra. Also, since the various ACLS methods are based on CLS, they can incorporate the new prediction-augmented CLS (PACLS) method of updating the prediction model for new sources of spectral variation contained in the prediction sample set without having to return to the calibration process. The ACLS methods can also be applied to alternating least squares models. The ACLS methods can be applied to all types of multivariate data.

  14. Least-squares Gaussian beam migration

    Science.gov (United States)

    Yuan, Maolin; Huang, Jianping; Liao, Wenyuan; Jiang, Fuyou

    2017-02-01

    A theory of least-squares Gaussian beam migration (LSGBM) is presented to optimally estimate a subsurface reflectivity. In the iterative inversion scheme, a Gaussian beam (GB) propagator is used as the kernel of linearized forward modeling (demigration) and its adjoint (migration). Born approximation based GB demigration relies on the calculation of Green’s function by a Gaussian-beam summation for the downward and upward wavefields. The adjoint operator of GB demigration accounts for GB prestack depth migration under the cross-correlation imaging condition, where seismic traces are processed one by one for each shot. A numerical test on the point diffractors model suggests that GB demigration can successfully simulate primary scattered data, while migration (adjoint) can yield a corresponding image. The GB demigration/migration algorithms are used for the least-squares migration scheme to deblur conventional migrated images. The proposed LSGBM is illustrated with two synthetic data for a four-layer model and the Marmousi2 model. Numerical results show that LSGBM, compared to migration (adjoint) with GBs, produces images with more balanced amplitude, higher resolution and even fewer artifacts. Additionally, the LSGBM shows a robust convergence rate.

  15. Total least squares for anomalous change detection

    Energy Technology Data Exchange (ETDEWEB)

    Theiler, James P [Los Alamos National Laboratory; Matsekh, Anna M [Los Alamos National Laboratory

    2010-01-01

    A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.

  16. Least square regularized regression in sum space.

    Science.gov (United States)

    Xu, Yong-Li; Chen, Di-Rong; Li, Han-Xiong; Liu, Lu

    2013-04-01

    This paper proposes a least square regularized regression algorithm in sum space of reproducing kernel Hilbert spaces (RKHSs) for nonflat function approximation, and obtains the solution of the algorithm by solving a system of linear equations. This algorithm can approximate the low- and high-frequency component of the target function with large and small scale kernels, respectively. The convergence and learning rate are analyzed. We measure the complexity of the sum space by its covering number and demonstrate that the covering number can be bounded by the product of the covering numbers of basic RKHSs. For sum space of RKHSs with Gaussian kernels, by choosing appropriate parameters, we tradeoff the sample error and regularization error, and obtain a polynomial learning rate, which is better than that in any single RKHS. The utility of this method is illustrated with two simulated data sets and five real-life databases.

  17. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, D. L.

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.

  18. Simplified neural networks for solving linear least squares and total least squares problems in real time.

    Science.gov (United States)

    Cichocki, A; Unbehauen, R

    1994-01-01

    In this paper a new class of simplified low-cost analog artificial neural networks with on chip adaptive learning algorithms are proposed for solving linear systems of algebraic equations in real time. The proposed learning algorithms for linear least squares (LS), total least squares (TLS) and data least squares (DLS) problems can be considered as modifications and extensions of well known algorithms: the row-action projection-Kaczmarz algorithm and/or the LMS (Adaline) Widrow-Hoff algorithms. The algorithms can be applied to any problem which can be formulated as a linear regression problem. The correctness and high performance of the proposed neural networks are illustrated by extensive computer simulation results.
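
    The Widrow-Hoff (LMS) update mentioned above, w <- w + mu*e*x, is easy to state in code. A minimal sketch for a linear system-identification problem; the analog network implementation described in the paper is not modeled.

```python
import numpy as np

rng = np.random.default_rng(11)
n_samples, n_taps, mu = 5000, 4, 0.01
w_true = np.array([0.5, -1.2, 0.3, 0.8])     # unknown system to identify

w = np.zeros(n_taps)
for _ in range(n_samples):
    x = rng.standard_normal(n_taps)                    # input (regressor) vector
    d = w_true @ x + 0.01 * rng.standard_normal()      # desired response
    e = d - w @ x                                      # a priori error
    w = w + mu * e * x                                 # LMS (Widrow-Hoff) update
print("estimated weights:", np.round(w, 3))
```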

  19. Positive Scattering Cross Sections using Constrained Least Squares

    Energy Technology Data Exchange (ETDEWEB)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-09-27

    A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section which reduces the error of these modified moments is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme is presented.

  20. Multisource Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-12-01

    Least-squares migration has been shown to be able to produce high quality migration images, but its computational cost is considered to be too high for practical imaging. In this dissertation, a multisource least-squares reverse time migration algorithm (LSRTM) is proposed to increase by up to 10 times the computational efficiency by utilizing the blended sources processing technique. There are three main chapters in this dissertation. In Chapter 2, the multisource LSRTM algorithm is implemented with random time-shift and random source polarity encoding functions. Numerical tests on the 2D HESS VTI data show that the multisource LSRTM algorithm suppresses migration artifacts, balances the amplitudes, improves image resolution, and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution, and fewer migration artifacts compared to conventional RTM. The empirical results suggest that the multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with similar or less computational cost. The caveat is that LSRTM image is sensitive to large errors in the migration velocity model. In Chapter 3, the multisource LSRTM algorithm is implemented with frequency selection encoding strategy and applied to marine streamer data, for which traditional random encoding functions are not applicable. The frequency-selection encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content. Therefore, the receivers can distinguish the wavefield from each shot according to the frequencies. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is

  1. Skeletonized Least Squares Wave Equation Migration

    KAUST Repository

    Zhan, Ge

    2010-10-17

    The theory for skeletonized least squares wave equation migration (LSM) is presented. The key idea is, for an assumed velocity model, the source-side Green's function and the geophone-side Green's function are computed by a numerical solution of the wave equation. Only the early arrivals of these Green's functions are saved and skeletonized to form the migration Green's function (MGF) by convolution. Then the migration image is obtained by a dot product between the recorded shot gathers and the MGF for every trial image point. The key to an efficient implementation of iterative LSM is that at each conjugate gradient iteration, the MGF is reused and no new finite-difference (FD) simulations are needed to get the updated migration image. It is believed that this procedure combined with phase-encoded multi-source technology will allow for the efficient computation of wave equation LSM images in less time than that of conventional reverse time migration (RTM).

  2. Elastic least-squares reverse time migration

    KAUST Repository

    Feng, Zongcai

    2017-03-08

    We use elastic least-squares reverse time migration (LSRTM) to invert for the reflectivity images of P- and S-wave impedances. Elastic LSRTM solves the linearized elastic-wave equations for forward modeling and the adjoint equations for backpropagating the residual wavefield at each iteration. Numerical tests on synthetic data and field data reveal the advantages of elastic LSRTM over elastic reverse time migration (RTM) and acoustic LSRTM. For our examples, the elastic LSRTM images have better resolution and amplitude balancing, fewer artifacts, and less crosstalk compared with the elastic RTM images. The images are also better focused and have better reflector continuity for steeply dipping events compared to the acoustic LSRTM images. Similar to conventional least-squares migration, elastic LSRTM also requires an accurate estimation of the P- and S-wave migration velocity models. However, the problem remains that, when there are moderate errors in the velocity model and strong multiples, LSRTM will produce migration noise stronger than that seen in the RTM images.

  3. A NEW SOLUTION MODEL OF NONLINEAR DYNAMIC LEAST SQUARE ADJUSTMENT

    Institute of Scientific and Technical Information of China (English)

    陶华学; 郭金运

    2000-01-01

    Nonlinear least squares adjustment is a major subject of study in technical fields. The paper studies a derivative-free solution to the nonlinear dynamic least squares adjustment and puts forward a new algorithm model and its solution model. The method has a small computational load and is simple. This opens up a theoretical method for solving the nonlinear dynamic least squares adjustment.

  4. Experiments on Coordinate Transformation based on Least Squares and Total Least Squares Methods

    Science.gov (United States)

    Tunalioglu, Nursu; Mustafa Durdag, Utkan; Hasan Dogan, Ali; Erdogan, Bahattin; Ocalan, Taylan

    2016-04-01

    Coordinate transformation is an important problem in the geodesy discipline. Variations in the stochastic and functional models of the transformation problem cause different estimation results. The least-squares (LS) method is generally implemented to solve this problem. The LS method assumes in its stochastic model that only one epoch's coordinate data group is erroneous. However, all of the data in the transformation problem are erroneous. In contrast to the traditional LS method, the Total Least Squares (TLS) method takes into account the errors in all the variables in the transformation. This is the so-called errors-in-variables (EIV) model. In the last decades, the TLS method has been implemented to solve the transformation problem. In this context, it is important to determine which method is more accurate. In this study, the LS and TLS methods have been implemented on different 2D and 3D geodetic networks with different simulation scenarios. The first results show that the translation parameters are affected more than the rotation and scale parameters. Although the TLS method considers the errors in both coordinate groups, the estimated parameters from both methods differ from the simulated values.

  5. Least Square Approximation by Linear Combinations of Multi(Poles).

    Science.gov (United States)

    1983-04-01

    Least square approximation by linear combinations of (multi)poles. Willi Freeden, Department of Geodetic Science and Surveying, The Ohio State University, Columbus. Scientific Report No. 3 (TR-83-0117).

  6. A Note on Separable Nonlinear Least Squares Problem

    CERN Document Server

    Gharibi, Wajeb

    2011-01-01

    Separable nonlinear least squares (SNLS) problems are a special class of nonlinear least squares (NLS) problems, whose objective function is a mixture of linear and nonlinear functions. They have applications in many different areas, especially in Operations Research and Computer Science. They are difficult to solve with the infinite-norm metric. In this paper, we give a short note on the separable nonlinear least squares problem and the unseparated scheme for NLS, and propose an algorithm for solving the mixed linear-nonlinear minimization problem, which results in solving a series of separable least squares problems.

  7. The least-square method in complex number domain

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The classical least-square method was extended from the real number domain into the complex number domain, giving what is called the complex least-square method. The mathematical derivation and its applications show that the complex least-square method differs from separately applying the classical least-square method to the real and imaginary parts, by which the actual least-square estimate cannot be obtained in practice. Applications of this new method to an arbitrarily given series and to the rainy-season precipitation at 160 meteorological stations in mainland China show the advantages of this method over other conventional statistical models.

  8. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    Science.gov (United States)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator

  9. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    Directory of Open Access Journals (Sweden)

    Greenwood L.R.

    2016-01-01

    Full Text Available The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the

  10. Using Weighted Least Squares Regression for Obtaining Langmuir Sorption Constants

    Science.gov (United States)

    One of the most commonly used models for describing phosphorus (P) sorption to soils is the Langmuir model. To obtain model parameters, the Langmuir model is fit to measured sorption data using least squares regression. Least squares regression is based on several assumptions including normally dist...
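
    A minimal sketch of fitting the Langmuir isotherm q = q_max*k*C/(1 + k*C) by weighted least squares with scipy, where per-point weights enter through the sigma argument; the data and the weighting scheme are illustrative, not from the study.

```python
import numpy as np
from scipy.optimize import curve_fit

def langmuir(C, q_max, k):
    return q_max * k * C / (1.0 + k * C)

C = np.array([0.5, 1.0, 2.0, 5.0, 10.0, 20.0, 40.0])      # equilibrium concentration
q = np.array([3.9, 7.0, 11.2, 18.1, 22.4, 25.8, 27.6])    # sorbed P (synthetic)
sigma = 0.05 * q                                          # assumed proportional error

params_ols, _ = curve_fit(langmuir, C, q, p0=[30.0, 0.2])
params_wls, _ = curve_fit(langmuir, C, q, p0=[30.0, 0.2], sigma=sigma, absolute_sigma=True)
print("OLS (q_max, k):", np.round(params_ols, 3), " WLS (q_max, k):", np.round(params_wls, 3))
```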

  11. An Algorithm for Positive Definite Least Square Estimation of Parameters.

    Science.gov (United States)

    1986-05-01

    This document presents an algorithm for positive definite least square estimation of parameters. This estimation problem arises from the PILOT...dynamic macro-economic model and is equivalent to an infinite convex quadratic program. It differs from ordinary least square estimations in that the

  12. note: The least square nucleolus is a general nucleolus

    OpenAIRE

    Elisenda Molina; Juan Tejada

    2000-01-01

    This short note proves that the least square nucleolus (Ruiz et al. (1996)) and the lexicographical solution (Sakawa and Nishizaki (1994)) select the same imputation in each game with nonempty imputation set. As a consequence the least square nucleolus is a general nucleolus (Maschler et al. (1992)).

  13. Abnormal behavior of the least squares estimate of multiple regression

    Institute of Scientific and Technical Information of China (English)

    陈希孺; 安鸿志

    1997-01-01

    An example is given to reveal the abnormal behavior of the least squares estimate of multiple regression. It is shown that the least squares estimate of the multiple linear regression may be "improved" in the sense of weak consistency when nuisance parameters are introduced into the model. A discussion on the implications of this finding is given.

  14. Visualizing Least-Square Lines of Best Fit.

    Science.gov (United States)

    Engebretsen, Arne

    1997-01-01

    Presents strategies that utilize graphing calculators and computer software to help students understand the concept of minimizing the squared residuals to find the line of best fit. Includes directions for least-squares drawings using a variety of technologies. (DDR)

  15. A novel extended kernel recursive least squares algorithm.

    Science.gov (United States)

    Zhu, Pingping; Chen, Badong; Príncipe, José C

    2012-08-01

    In this paper, a novel extended kernel recursive least squares algorithm is proposed combining the kernel recursive least squares algorithm and the Kalman filter or its extensions to estimate or predict signals. Unlike the extended kernel recursive least squares (Ex-KRLS) algorithm proposed by Liu, the state model of our algorithm is still constructed in the original state space and the hidden state is estimated using the Kalman filter. The measurement model used in hidden state estimation is learned by the kernel recursive least squares algorithm (KRLS) in reproducing kernel Hilbert space (RKHS). The novel algorithm has more flexible state and noise models. We apply this algorithm to vehicle tracking and the nonlinear Rayleigh fading channel tracking, and compare the tracking performances with other existing algorithms.

  16. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

    Full Text Available In order to improve the calculation efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can also deal with their stochastic elements and deterministic elements with only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.

  17. Recursive least squares background prediction of univariate syndromic surveillance data

    OpenAIRE

    Burkom Howard; Najmi Amir-Homayoon

    2009-01-01

    Background: Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods: Previous work by the first author has suggested that univariate recursive least squares analysis of s...
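
    For reference, the recursive least squares update that underlies such background forecasting can be sketched as below (forgetting factor lam; the Day of the Week treatment described in the paper is not included):

```python
import numpy as np

def rls_forecaster(series, order=7, lam=0.98, delta=100.0):
    """One-step-ahead background forecasts by recursive least squares."""
    w = np.zeros(order)
    P = delta * np.eye(order)                # inverse correlation matrix estimate
    forecasts = np.full(series.size, np.nan)
    for t in range(order, series.size):
        x = series[t - order:t][::-1]        # most recent samples as regressors
        forecasts[t] = w @ x                 # background prediction
        e = series[t] - forecasts[t]         # innovation
        k = P @ x / (lam + x @ P @ x)        # gain vector
        w = w + k * e
        P = (P - np.outer(k, x @ P)) / lam
    return forecasts

rng = np.random.default_rng(12)
counts = 50 + 10 * np.sin(np.arange(300) * 2 * np.pi / 7) + rng.normal(0, 3, 300)
pred = rls_forecaster(counts)
```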

  18. Generalized Penalized Least Squares and Its Statistical Characteristics

    Institute of Scientific and Technical Information of China (English)

    DING Shijun; TAO Benzao

    2006-01-01

    The solution properties of semiparametric model are analyzed, especially that penalized least squares for semiparametric model will be invalid when the matrix BTPB is ill-posed or singular. According to the principle of ridge estimate for linear parametric model, generalized penalized least squares for semiparametric model are put forward, and some formulae and statistical properties of estimates are derived. Finally according to simulation examples some helpful conclusions are drawn.

  19. An application of least squares fit mapping to clinical classification.

    OpenAIRE

    Yang, Y.; Chute, C. G.

    1992-01-01

    This paper describes a unique approach, "Least Square Fit Mapping," to clinical data classification. We use large collections of human-assigned text-to-category matches as training sets to compute the correlations between physicians' terms and canonical concepts. A Linear Least Squares Fit (LLSF) technique is employed to obtain a mapping function which optimally fits the known matches given in a training set and probabilistically captures the unknown matches for arbitrary texts. We tested our...

  20. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  1. Least squares in calibration: dealing with uncertainty in x.

    Science.gov (United States)

    Tellinghuisen, Joel

    2010-08-01

    The least-squares (LS) analysis of data with error in x and y is generally thought to yield best results when carried out by minimizing the "total variance" (TV), defined as the sum of the properly weighted squared residuals in x and y. Alternative "effective variance" (EV) methods project the uncertainty in x into an effective contribution to that in y, and though easier to employ are considered to be less reliable. In the case of a linear response function with both sigma(x) and sigma(y) constant, the EV solutions are identically those from ordinary LS; and Monte Carlo (MC) simulations reveal that they can actually yield smaller root-mean-square errors than the TV method. Furthermore, the biases can be predicted from theory based on inverse regression--x upon y when x is error-free and y is uncertain--which yields a bias factor proportional to the ratio sigma(x)^2/sigma(xm)^2 of the random-error variance in x to the model variance. The MC simulations confirm that the biases are essentially independent of the error in y, hence correctable. With such bias corrections, the better performance of the EV method in estimating the parameters translates into better performance in estimating the unknown (x_0) from measurements (y_0) of its response. The predictability of the EV parameter biases extends also to heteroscedastic y data as long as sigma(x) remains constant, but the estimation of x_0 is not as good in this case. When both x and y are heteroscedastic, there is no known way to predict the biases. However, the MC simulations suggest that for proportional error in x, a geometric x-structure leads to small bias and comparable performance for the EV and TV methods.
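
    As a rough sketch of the kind of Monte Carlo experiment described above (not the author's code; all true values and noise levels are invented), the following simulates straight-line data with constant errors in x and y, fits each replicate by ordinary least squares (which, as noted, coincides with the EV solution in this linear constant-sigma case), and reports the mean and root-mean-square error of the slope:

        import numpy as np

        rng = np.random.default_rng(0)

        # Invented true line, design points and noise levels.
        a_true, b_true = 2.0, 1.0
        sigma_x, sigma_y = 0.1, 0.2
        x_model = np.linspace(0.0, 5.0, 11)       # error-free model values of x

        slopes = []
        for _ in range(2000):
            x_obs = x_model + rng.normal(0.0, sigma_x, x_model.size)
            y_obs = a_true * x_model + b_true + rng.normal(0.0, sigma_y, x_model.size)
            A = np.column_stack([x_obs, np.ones_like(x_obs)])
            (a_hat, _), *_ = np.linalg.lstsq(A, y_obs, rcond=None)
            slopes.append(a_hat)

        slopes = np.asarray(slopes)
        print("mean slope (bias toward zero expected):", slopes.mean())
        print("rms error of the slope:", np.sqrt(np.mean((slopes - a_true) ** 2)))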

  2. Least-squares RTM with L1 norm regularisation

    Science.gov (United States)

    Wu, Di; Yao, Gang; Cao, Jingjie; Wang, Yanghua

    2016-10-01

    Reverse time migration (RTM), for imaging complex Earth models, is a reversal procedure of the forward modelling of seismic wavefields, and hence can be formulated as an inverse problem. The least-squares RTM method attempts to minimise the difference between the observed field data and the synthetic data generated by the migration image. It can reduce the artefacts in the images of a conventional RTM which uses an adjoint operator, instead of an inverse operator, for the migration. However, as the least-squares inversion provides an average solution with minimal variation, the resolution of the reflectivity image is compromised. This paper presents the least-squares RTM method with a model constraint defined by an L1-norm of the reflectivity image. For solving the least-squares RTM with L1 norm regularisation, the inversion is reformulated as a ‘basis pursuit de-noise (BPDN)’ problem, and is solved directly using an algorithm called ‘spectral projected gradient for L1 minimisation (SPGL1)’. Three numerical examples demonstrate the effectiveness of the method which can mitigate artefacts and produce clean images with significantly higher resolution than the least-squares RTM without such a constraint.

  3. Performance analysis of the Least-Squares estimator in Astrometry

    CERN Document Server

    Lobos, Rodrigo A; Mendez, Rene A; Orchard, Marcos

    2015-01-01

    We characterize the performance of the widely-used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not offer a closed-form expression, but a new result is presented (Theorem 1) where both the bias and the mean-square-error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient is the least-squares estimator in comparison with the minimum variance Cramer-Rao bound. Based on our results, we show that, for the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated...

  4. Distributed Recursive Least-Squares: Stability and Performance Analysis

    CERN Document Server

    Mateos, Gonzalo

    2011-01-01

    The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements, when it comes to online estimation of stationary signals as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost, using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally, and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted, by studying a stochastically-driven `averaged' system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, the simplifying...

  5. Nonparametric Least Squares Estimation of a Multivariate Convex Regression Function

    CERN Document Server

    Seijo, Emilio

    2010-01-01

    This paper deals with the consistency of the least squares estimator of a convex regression function when the predictor is multidimensional. We characterize and discuss the computation of such an estimator via the solution of certain quadratic and linear programs. Mild sufficient conditions for the consistency of this estimator and its subdifferentials in fixed and stochastic design regression settings are provided. We also consider a regression function which is known to be convex and componentwise nonincreasing and discuss the characterization, computation and consistency of its least squares estimator.

  6. Sparse least-squares reverse time migration using seislets

    KAUST Repository

    Dutta, Gaurav

    2015-08-19

    We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.

  7. A note on the limitations of lattice least squares

    Science.gov (United States)

    Gillis, J. T.; Gustafson, C. L.; Mcgraw, G. A.

    1988-01-01

    This paper quantifies the known limitation of lattice least squares to ARX models in terms of the dynamic properties of the system being modeled. This allows determination of the applicability of lattice least squares in a given situation. The central result is that an equivalent ARX model exists for an ARMAX system if and only if the ARMAX system has no transmission zeros from the noise port to the output port. The technique used to prove this fact is a construction using the matrix fractional description of the system. The final section presents two computational examples.

  8. An Algorithm to Solve Separable Nonlinear Least Square Problem

    Directory of Open Access Journals (Sweden)

    Wajeb Gharibi

    2013-07-01

    Full Text Available Separable Nonlinear Least Squares (SNLS) problem is a special class of Nonlinear Least Squares (NLS) problems, whose objective function is a mixture of linear and nonlinear functions. SNLS has many applications in several areas, especially in the field of Operations Research and Computer Science. Problems related to the class of NLS are hard to resolve under the infinity-norm metric. This paper gives a brief explanation of the SNLS problem and offers a Lagrangian-based algorithm for solving the mixed linear-nonlinear minimization problem.

  9. Least-squares finite-element lattice Boltzmann method.

    Science.gov (United States)

    Li, Yusong; LeBoeuf, Eugene J; Basu, P K

    2004-06-01

    A new numerical model of the lattice Boltzmann method utilizing least-squares finite element in space and Crank-Nicolson method in time is presented. The new method is able to solve problem domains that contain complex or irregular geometric boundaries by using finite-element method's geometric flexibility and numerical stability, while employing efficient and accurate least-squares optimization. For the pure advection equation on a uniform mesh, the proposed method provides for fourth-order accuracy in space and second-order accuracy in time, with unconditional stability in the time domain. Accurate numerical results are presented through two-dimensional incompressible Poiseuille flow and Couette flow.

  10. Multi-source least-squares migration of marine data

    KAUST Repository

    Wang, Xin

    2012-11-04

    Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its IO cost is significantly decreased.

  11. Moving least squares simulation of free surface flows

    DEFF Research Database (Denmark)

    Felter, C. L.; Walther, Jens Honore; Henriksen, Christian

    2014-01-01

    In this paper a Moving Least Squares method (MLS) for the simulation of 2D free surface flows is presented. The emphasis is on the governing equations, the boundary conditions, and the numerical implementation. The compressible viscous isothermal Navier–Stokes equations are taken as the starting point. Then a boundary condition for pressure (or density) is developed. This condition is applicable at interfaces between different media such as fluid–solid or fluid–void. The effect of surface tension is included. The equations are discretized by a moving least squares method for the spatial

  12. HERMITE SCATTERED DATA FITTING BY THE PENALIZED LEAST SQUARES METHOD

    Institute of Scientific and Technical Information of China (English)

    Tianhe Zhou; Danfu Han

    2009-01-01

    Given a set of scattered data with derivative values, where the data are noisy or extremely numerous, we use an extension of the penalized least squares method of von Golitschek and Schumaker [Serdica, 18 (2002), pp. 1001-1020] to fit the data. We show that the extension of the penalized least squares method produces a unique spline fitting the data. We also give the error bound for the extension method. Some numerical examples are presented to demonstrate the effectiveness of the proposed method.

  13. Nonlinear multifunctional sensor signal reconstruction based on least squares support vector machines and total least squares algorithm

    Institute of Scientific and Technical Information of China (English)

    Xin LIU; Guo WEI; Jin-wei SUN; Dan LIU

    2009-01-01

    Least squares support vector machines (LS-SVMs) are modified support vector machines (SVMs) that involve equality constraints and work with a least squares cost function, which simplifies the optimization procedure. In this paper, a novel training algorithm based on total least squares (TLS) for an LS-SVM is presented and applied to multifunctional sensor signal reconstruction. For three different nonlinearities of a multifunctional sensor model, the reconstruction accuracies of the input signals are 0.00136%, 0.03184% and 0.50480%, respectively. The experimental results demonstrate the higher reliability and accuracy of the proposed method for multifunctional sensor signal reconstruction compared with the original LS-SVM training algorithm, and verify the feasibility and stability of the proposed method.

  14. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-11-04

    Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce computation cost, linear phase shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent through all the angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.

  15. A least squares estimation method for the linear learning model

    NARCIS (Netherlands)

    B. Wierenga (Berend)

    1978-01-01

    textabstractThe author presents a new method for estimating the parameters of the linear learning model. The procedure, essentially a least squares method, is easy to carry out and avoids certain difficulties of earlier estimation procedures. Applications to three different data sets are reported, a

  16. SELECTION OF REFERENCE PLANE BY THE LEAST SQUARES FITTING METHODS

    Directory of Open Access Journals (Sweden)

    Przemysław Podulka

    2016-06-01

    For least-squares polynomial fitting it was found that the applied method usually gave better robustness to the occurrence of scratches, valleys and dimples for cylinder liners. For piston skirt surfaces better edge-filtering results were obtained. It was also recommended to analyse the Sk parameters for the proper selection of the reference plane in surface topography measurements.

  17. A Genetic Algorithm Approach to Nonlinear Least Squares Estimation

    Science.gov (United States)

    Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.

    2004-01-01

    A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…

  18. Consistency of System Identification by Global Total Least Squares

    NARCIS (Netherlands)

    C. Heij (Christiaan); W. Scherrer

    1996-01-01

    textabstractGlobal total least squares (GTLS) is a method for the identification of linear systems where no distinction between input and output variables is required. This method has been developed within the deterministic behavioural approach to systems. In this paper we analyse statistical proper

  19. Consistency of global total least squares in stochastic system identification

    NARCIS (Netherlands)

    C. Heij (Christiaan); W. Scherrer

    1995-01-01

    textabstractGlobal total least squares has been introduced as a method for the identification of deterministic system behaviours. We analyse this method within a stochastic framework, where the observed data are generated by a stationary stochastic process. Conditions are formulated so that the meth

  20. A Hybrid Method for Nonlinear Least Squares Problems

    Institute of Scientific and Technical Information of China (English)

    Zhongyi Liu; Linping Sun

    2007-01-01

    A negative curvature method is applied to nonlinear least squares problems with indefinite Hessian approximation matrices. With the special structure of the method, a new switch is proposed to form a hybrid method. Numerical experiments show that this method is feasible and effective for zero-residual, small-residual and large-residual problems.

  1. Integer least-squares theory for the GNSS compass

    NARCIS (Netherlands)

    Teunissen, P.J.G.

    2010-01-01

    Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to highprecision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategi

  2. An Orthogonal Least Squares Based Approach to FIR Designs

    Institute of Scientific and Technical Information of China (English)

    Xiao-Feng Wu; Zi-Qiang Lang; Stephen A Billings

    2005-01-01

    This paper is concerned with the application of the forward Orthogonal Least Squares (OLS) algorithm to the design of Finite Impulse Response (FIR) filters. The focus of this study is a new FIR filter design procedure, which is compared with traditional methods such as the fir2() routine provided by MATLAB.

  3. Weighted least squares stationary approximations to linear systems.

    Science.gov (United States)

    Bierman, G. J.

    1972-01-01

    Investigation of the problem of replacing a certain time-varying linear system by a stationary one. Several quadratic criteria are proposed to aid in determining suitable candidate systems. One criterion for choosing the matrix B (in the stationary system) is initial-condition dependent, and another bounds the 'worst case' homogeneous system performance. Both of these criteria produce weighted least square fits.

  4. ON A FAMILY OF MULTIVARIATE LEAST-SQUARES ORTHOGONAL POLYNOMIALS

    Institute of Scientific and Technical Information of China (English)

    郑成德; 王仁宏

    2003-01-01

    In this paper the new notion of multivariate least-squares orthogonal polynomials from the rectangular form is introduced. Their existence and uniqueness is studied and some methods for their recursive computation are given. As an application, … is constructed.

  5. On the Routh approximation technique and least squares errors

    Science.gov (United States)

    Aburdene, M. F.; Singh, R.-N. P.

    1979-01-01

    A new method for calculating the coefficients of the numerator polynomial of the direct Routh approximation method (DRAM) using the least square error criterion is formulated. The necessary conditions have been obtained in terms of algebraic equations. The method is useful for low frequency as well as high frequency reduced-order models.

  6. Fuzzy modeling of friction by bacterial and least square optimization

    Science.gov (United States)

    Jastrzebski, Marcin

    2006-03-01

    In this paper a new method of tuning the parameters of Sugeno fuzzy models is presented. Because the modeled phenomenon is discontinuous, a new type of consequent function was introduced. The described algorithm (BA+LSQ) combines a bacterial algorithm (BA) for tuning the parameters of the membership functions with the least square method (LSQ) for the parameters of the consequent functions.

  7. Risk and Management Control: A Partial Least Square Modelling Approach

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

    and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct a valid feed forward but also predictions for decision making including risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data...

  8. ON THE COMPARISON OF THE TOTAL LEAST SQUARES AND THE LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    刘永辉; 魏木生

    2003-01-01

    There are a number of articles discussing the total least squares (TLS) and the least squares (LS) problems. M. Wei (Mathematica Numerica Sinica, 20(3) (1998), 267-278) proposed a new orthogonal projection method to improve existing perturbation bounds of the TLS and LS problems. In this paper, we continue to improve existing bounds of the differences between the squared residuals, the weighted squared residuals and the minimum norm correction matrices of the TLS and LS problems.
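
    For readers unfamiliar with the two estimators being compared, a small numerical sketch (not taken from the paper) contrasts the ordinary LS solution of Ax ≈ b with the classical TLS solution obtained from the SVD of the augmented matrix [A, b]; the data are invented:

        import numpy as np

        rng = np.random.default_rng(1)

        # Small synthetic overdetermined system with noise in both A and b.
        A = rng.normal(size=(20, 2))
        x_true = np.array([1.0, -2.0])
        b = A @ x_true
        A_noisy = A + 0.05 * rng.normal(size=A.shape)
        b_noisy = b + 0.05 * rng.normal(size=b.shape)

        # Ordinary least squares: only b is treated as noisy.
        x_ls, *_ = np.linalg.lstsq(A_noisy, b_noisy, rcond=None)

        # Total least squares: both A and b are perturbed; use the SVD of [A, b].
        C = np.column_stack([A_noisy, b_noisy])
        _, _, Vt = np.linalg.svd(C)
        v = Vt[-1]              # right singular vector for the smallest singular value
        x_tls = -v[:-1] / v[-1]

        print("LS  estimate:", x_ls)
        print("TLS estimate:", x_tls)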

  9. A unifying theoretical and algorithmic framework for least squares methods of estimation in diffusion tensor imaging.

    Science.gov (United States)

    Koay, Cheng Guan; Chang, Lin-Ching; Carew, John D; Pierpaoli, Carlo; Basser, Peter J

    2006-09-01

    A unifying theoretical and algorithmic framework for diffusion tensor estimation is presented. Theoretical connections among the least squares (LS) methods, (linear least squares (LLS), weighted linear least squares (WLLS), nonlinear least squares (NLS) and their constrained counterparts), are established through their respective objective functions, and higher order derivatives of these objective functions, i.e., Hessian matrices. These theoretical connections provide new insights in designing efficient algorithms for NLS and constrained NLS (CNLS) estimation. Here, we propose novel algorithms of full Newton-type for the NLS and CNLS estimations, which are evaluated with Monte Carlo simulations and compared with the commonly used Levenberg-Marquardt method. The proposed methods have a lower percent of relative error in estimating the trace and lower reduced chi2 value than those of the Levenberg-Marquardt method. These results also demonstrate that the accuracy of an estimate, particularly in a nonlinear estimation problem, is greatly affected by the Hessian matrix. In other words, the accuracy of a nonlinear estimation is algorithm-dependent. Further, this study shows that the noise variance in diffusion weighted signals is orientation dependent when signal-to-noise ratio (SNR) is low (
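
    To make the simplest member of this family concrete, the sketch below shows a bare-bones linear least squares (LLS) tensor fit, in which the log-signal is regressed on a design matrix built from the b-value and gradient directions; the acquisition scheme, tensor and noise level are invented, and this is not the authors' Newton-type algorithm:

        import numpy as np

        rng = np.random.default_rng(2)

        # Invented acquisition: single b-value and a handful of unit gradient directions.
        b_value = 1000.0                                  # s/mm^2
        g = rng.normal(size=(12, 3))
        g /= np.linalg.norm(g, axis=1, keepdims=True)

        # Invented "true" diffusion tensor and non-diffusion-weighted signal S0.
        D_true = np.diag([1.7e-3, 0.4e-3, 0.3e-3])
        S0 = 1000.0
        signal = S0 * np.exp(-b_value * np.einsum('ij,jk,ik->i', g, D_true, g))
        signal *= 1.0 + 0.02 * rng.normal(size=signal.shape)   # mild noise

        # LLS step: regress the log-signal on the standard tensor design matrix.
        gx, gy, gz = g[:, 0], g[:, 1], g[:, 2]
        X = np.column_stack([
            np.ones_like(gx),
            -b_value * gx ** 2, -b_value * gy ** 2, -b_value * gz ** 2,
            -2.0 * b_value * gx * gy, -2.0 * b_value * gx * gz, -2.0 * b_value * gy * gz,
        ])
        coef, *_ = np.linalg.lstsq(X, np.log(signal), rcond=None)
        lnS0, Dxx, Dyy, Dzz, Dxy, Dxz, Dyz = coef
        print("estimated trace:", Dxx + Dyy + Dzz)        # true trace above is 2.4e-3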

  10. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-03-01

    This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic

  11. Source allocation by least-squares hydrocarbon fingerprint matching

    Energy Technology Data Exchange (ETDEWEB)

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.

  12. Least Squares Shadowing for Sensitivity Analysis of Turbulent Fluid Flows

    CERN Document Server

    Blonigan, Patrick; Wang, Qiqi

    2014-01-01

    Computational methods for sensitivity analysis are invaluable tools for aerodynamics research and engineering design. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in turbulent fluid flow fields, specifically those obtained using high-fidelity turbulence simulations. This is because of a number of dynamical properties of turbulent and chaotic fluid flows, most importantly high sensitivity of the initial value problem, popularly known as the "butterfly effect". The recently developed least squares shadowing (LSS) method avoids the issues encountered by traditional sensitivity analysis methods by approximating the "shadow trajectory" in phase space, avoiding the high sensitivity of the initial value problem. The following paper discusses how the least squares problem associated with LSS is solved. Two methods are presented and are demonstrated on a simulation of homogeneous isotropic turbulence and the Kuramoto-Sivashinsky (KS) equation, a 4th order c...

  13. Linearized least-square imaging of internally scattered data

    KAUST Repository

    Aldawood, Ali

    2014-01-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-square inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-square inversion of double-scattered data helped delineate that reflector with minimal acquisition fingerprint.

  14. Moving least-squares corrections for smoothed particle hydrodynamics

    Directory of Open Access Journals (Sweden)

    Ciro Del Negro

    2011-12-01

    Full Text Available First-order moving least-squares are typically used in conjunction with smoothed particle hydrodynamics in the form of post-processing filters for density fields, to smooth out noise that develops in most applications of smoothed particle hydrodynamics. We show how an approach based on higher-order moving least-squares can be used to correct some of the main limitations in gradient and second-order derivative computation in classic smoothed particle hydrodynamics formulations. With a small increase in computational cost, we manage to achieve smooth density distributions without the need for post-processing and with higher accuracy in the computation of the viscous term of the Navier–Stokes equations, thereby reducing the formation of spurious shockwaves or other streaming effects in the evolution of fluid flow. Numerical tests on a classic two-dimensional dam-break problem confirm the improvement of the new approach.

  15. Steady and transient least square solvers for thermal problems

    Science.gov (United States)

    Padovan, Joe

    1987-01-01

    This paper develops a hierarchical least square solution algorithm for highly nonlinear heat transfer problems. The methodology's capability is such that both steady and transient implicit formulations can be handled. This includes problems arising from highly nonlinear heat transfer systems modeled by either finite-element or finite-difference schemes. The overall procedure developed enables localized updating, iteration, and convergence checking as well as constraint application. The localized updating can be performed at a variety of hierarchical levels, i.e., degree of freedom, substructural, material-nonlinear groups, and/or boundary groups. The choice of such partitions can be made via energy partitioning or nonlinearity levels as well as by user selection. Overall, this leads to extremely robust computational characteristics. To demonstrate the methodology, problems are drawn from nonlinear heat conduction. These are used to quantify the robust capabilities of the hierarchical least square scheme.

  16. CONDITION NUMBER FOR WEIGHTED LINEAR LEAST SQUARES PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Yimin Wei; Huaian Diao; Sanzheng Qiao

    2007-01-01

    In this paper, we investigate the condition numbers for the generalized matrix inversion and the rank-deficient linear least squares problem min_x ||Ax - b||_2, where A is an m-by-n (m ≥ n) rank-deficient matrix. We first derive an explicit expression for the condition number in the weighted Frobenius norm ||[AT, βb]||_F of the data A and b, where T is a positive diagonal matrix and β is a positive scalar. We then discuss the sensitivity of the standard 2-norm condition numbers for the generalized matrix inversion and rank-deficient least squares, and establish relations between these condition numbers and their own condition numbers, the so-called level-2 condition numbers.
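
    As a small numerical aside (not drawn from the paper, which treats the weighted and rank-deficient cases), the ordinary 2-norm condition number of a least squares problem can be read off the singular values of A:

        import numpy as np

        rng = np.random.default_rng(3)

        # A nearly rank-deficient matrix: the third column almost repeats the first.
        A = rng.normal(size=(50, 3))
        A[:, 2] = A[:, 0] + 1e-6 * rng.normal(size=50)

        s = np.linalg.svd(A, compute_uv=False)
        print("singular values:", s)
        print("2-norm condition number:", s[0] / s[-1])   # equals np.linalg.cond(A)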

  17. A unified approach for least-squares surface fitting

    Institute of Scientific and Technical Information of China (English)

    ZHU; Limin; DING; Han

    2004-01-01

    This paper presents a novel approach for least-squares fitting of complex surface to measured 3D coordinate points by adjusting its location and/or shape. For a point expressed in the machine reference frame and a deformable smooth surface represented in its own model frame, a signed point-to-surface distance function is defined,and its increment with respect to the differential motion and differential deformation of the surface is derived. On this basis, localization, surface reconstruction and geometric variation characterization are formulated as a unified nonlinear least-squares problem defined on the product space SE(3)×m. By using Levenberg-Marquardt method, a sequential approximation surface fitting algorithm is developed. It has the advantages of implementational simplicity, computational efficiency and robustness. Applications confirm the validity of the proposed approach.

  18. Anisotropy minimization via least squares method for transformation optics.

    Science.gov (United States)

    Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H

    2014-07-28

    In this work the least squares method is used to reduce anisotropy in the transformation optics technique. To apply the least squares method, a power series is added to the coordinate transformation functions. The series coefficients were calculated to reduce the deviations from the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of transformation optics applied to waveguide design. To demonstrate the proposed technique a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero.

  19. MODIFIED LEAST SQUARE METHOD ON COMPUTING DIRICHLET PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The singularity theory of dynamical systems is linked to the numerical computation of boundary value problems of differential equations. It turns out to be a modified least squares method for the calculation of a variational problem defined on Ck(Ω), in which the basis functions are polynomials and the computation of the problem is transferred to computing the coefficients of the basis functions. The theoretical treatment and some simple examples are provided for understanding the modification procedure of the metho...

  20. Online least-squares policy iteration for reinforcement learning control

    OpenAIRE

    2010-01-01

    Reinforcement learning is a promising paradigm for learning optimal control. We consider policy iteration (PI) algorithms for reinforcement learning, which iteratively evaluate and improve control policies. State-of-the-art, least-squares techniques for policy evaluation are sample-efficient and have relaxed convergence requirements. However, they are typically used in offline PI, whereas a central goal of reinforcement learning is to develop online algorithms. Therefore, we propose an online...

  1. Multisplitting for linear, least squares and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Renaut, R.

    1996-12-31

    In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and of nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.

  2. Least-Square Prediction for Backward Adaptive Video Coding

    OpenAIRE

    2006-01-01

    Almost all existing approaches towards video coding exploit the temporal redundancy by block-matching-based motion estimation and compensation. Regardless of its popularity, block matching still reflects an ad hoc understanding of the relationship between motion and intensity uncertainty models. In this paper, we present a novel backward adaptive approach, named "least-square prediction" (LSP), and demonstrate its potential in video coding. Motivated by the duality between edge contour in im...

  3. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    Science.gov (United States)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  4. Penalized Weighted Least Squares for Outlier Detection and Robust Regression

    OpenAIRE

    Gao, Xiaoli; Fang, Yixin

    2016-01-01

    To conduct regression analysis for data contaminated with outliers, many approaches have been proposed for simultaneous outlier detection and robust regression, so is the approach proposed in this manuscript. This new approach is called "penalized weighted least squares" (PWLS). By assigning each observation an individual weight and incorporating a lasso-type penalty on the log-transformation of the weight vector, the PWLS is able to perform outlier detection and robust regression simultaneou...
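
    The weighting idea is related to classical iteratively reweighted least squares (IRLS) for robust regression; the sketch below shows that simpler scheme with Huber-type weights purely as background, using invented data, and is not the PWLS estimator of the manuscript:

        import numpy as np

        rng = np.random.default_rng(4)

        # Synthetic straight line with a few gross outliers.
        x = np.linspace(0.0, 10.0, 40)
        y = 3.0 * x + 1.0 + rng.normal(0.0, 0.5, x.size)
        y[::10] += 15.0                       # inject outliers

        A = np.column_stack([x, np.ones_like(x)])
        w = np.ones_like(y)                   # observation weights, updated iteratively
        c = 1.345                             # Huber tuning constant
        for _ in range(20):
            sw = np.sqrt(w)
            beta, *_ = np.linalg.lstsq(A * sw[:, None], y * sw, rcond=None)
            r = y - A @ beta
            scale = np.median(np.abs(r)) / 0.6745 + 1e-12
            u = np.abs(r) / scale
            w = np.where(u <= c, 1.0, c / u)  # Huber weights downweight large residuals

        print("robust fit (slope, intercept):", beta)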

  5. An iterative approach to a constrained least squares problem

    Directory of Open Access Journals (Sweden)

    Simeon Reich

    2003-01-01

    In the case where the set of the constraints is the nonempty intersection of a finite collection of closed convex subsets of H, an iterative algorithm is designed. The resulting sequence is shown to converge strongly to the unique solution of the regularized problem. The net of the solutions to the regularized problems strongly converges to the minimum norm solution of the least squares problem if its solution set is nonempty.
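
    A loose finite-dimensional illustration of combining regularized least squares with projections onto convex constraint sets is sketched below; the constraint sets, the cyclic-projection approximation of the projection onto their intersection, and all data are invented, and this is not the algorithm analyzed in the paper:

        import numpy as np

        rng = np.random.default_rng(5)
        A = rng.normal(size=(30, 5))
        b = rng.normal(size=30)
        eps = 1e-3                               # regularization parameter

        # Two closed convex sets chosen only for illustration:
        # C1 is the box 0 <= x <= 1, C2 the half-space sum(x) <= 2.
        def proj_C1(x):
            return np.clip(x, 0.0, 1.0)

        def proj_C2(x):
            s = x.sum()
            return x if s <= 2.0 else x - (s - 2.0) / x.size

        x = np.zeros(5)
        L = np.linalg.norm(A, 2) ** 2 + eps      # Lipschitz constant of the gradient
        for _ in range(500):
            grad = A.T @ (A @ x - b) + eps * x   # gradient of 0.5*||Ax-b||^2 + 0.5*eps*||x||^2
            x = x - grad / L
            # cyclic projections approximate the projection onto the intersection C1 ∩ C2
            x = proj_C2(proj_C1(x))

        print("approximate regularized constrained solution:", x)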

  6. SUBSPACE SEARCH METHOD FOR A CLASS OF LEAST SQUARES PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Zi-Luan Wei

    2000-01-01

    A subspace search method for solving a class of least squares problems is presented in the paper. The original problem is divided into many independent subproblems, a search direction is obtained by solving each of the subproblems, and a new iterate is determined by choosing a suitable step length such that the value of the residual norm is decreasing. The convergence result is also given. The numerical test is also shown for a special problem,

  7. Parallel Nonnegative Least Squares Solvers for Model Order Reduction

    Science.gov (United States)

    2016-03-01

    …not for the PQN method. For the latter method the size of the active set is controlled to promote sparse solutions. This is described in Section 3.2.1. […] Parallel nonnegative least squares (NNLS) solvers are developed specifically for

  8. AN ASSESSMENT OF THE MESHLESS WEIGHTED LEAST-SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    Pan Xiaofei; Sze Kim Yim; Zhang Xiong

    2004-01-01

    The meshless weighted least-square (MWLS) method was developed based on the weighted least-square method. The method possesses several advantages, such as high accuracy, high stability and high efficiency. Moreover, the coefficient matrix obtained is symmetric and semipositive definite. In this paper, the method is further examined critically. The effects of several parameters on the results of MWLS are investigated systematically by using a cantilever beam and an infinite plate with a central circular hole. The numerical results are compared with those obtained by using the collocation-based meshless method (CBMM) and Galerkin-based meshless method (GBMM). The investigated parameters include the type of approximations, the type of weight functions, the number of neighbors of an evaluation point, as well as the manner in which the neighbors of an evaluation point are determined. This study shows that the displacement accuracy and convergence rate obtained by MWLS is comparable to that of the GBMM while the stress accuracy and convergence rate yielded by MWLS is even higher than that of GBMM. Furthermore, MWLS is much more efficient than GBMM. This study also shows that the instability of CBMM is mainly due to the neglect of the equilibrium residuals at boundary nodes. In MWLS, the residuals of all the governing equations are minimized in a weighted least-square sense.
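
    To give a flavour of the weighted least-squares approximation that such meshless methods build around each evaluation point, here is a generic 1D sketch with an invented weight function and test data; it is not the MWLS formulation examined in the paper:

        import numpy as np

        # Scattered 1D nodes and nodal values of an invented test function.
        nodes = np.linspace(0.0, 1.0, 21)
        values = np.sin(2.0 * np.pi * nodes)

        def local_wls(x_eval, nodes, values, radius=0.15):
            """Fit a local quadratic by weighted least squares around x_eval."""
            d = np.abs(nodes - x_eval)
            mask = d < radius                         # neighbours of the evaluation point
            w = (1.0 - d[mask] / radius) ** 2         # simple distance-based weight function
            dx = nodes[mask] - x_eval
            P = np.column_stack([np.ones(dx.size), dx, dx ** 2])
            sw = np.sqrt(w)
            coef, *_ = np.linalg.lstsq(P * sw[:, None], values[mask] * sw, rcond=None)
            return coef[0]                            # approximate value at x_eval

        print("local WLS estimate at x = 0.33:", local_wls(0.33, nodes, values))
        print("exact value:", np.sin(2.0 * np.pi * 0.33))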

  9. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei

    2012-06-15

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist in a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  10. Solving linear inequalities in a least squares sense

    Energy Technology Data Exchange (ETDEWEB)

    Bramley, R.; Winnicka, B. [Indiana Univ., Bloomington, IN (United States)

    1994-12-31

    Let A ∈ R^(m×n) be an arbitrary real matrix, and let b ∈ R^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ||Ax - b||, where ||·|| refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax - b) = 0, and the optimal residual r* = b - Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ||(Ax - b)_+||, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
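
    Since the squared objective 0.5*||(Ax - b)_+||^2 has the simple gradient A^T(Ax - b)_+, a fixed-step gradient iteration already illustrates the problem; the sketch below is a toy example with random data, not the authors' solver:

        import numpy as np

        rng = np.random.default_rng(6)
        A = rng.normal(size=(40, 4))
        b = rng.normal(size=40)

        x = np.zeros(4)
        step = 1.0 / np.linalg.norm(A, 2) ** 2        # conservative fixed step size
        for _ in range(2000):
            v = np.maximum(A @ x - b, 0.0)            # (Ax - b)_+
            x = x - step * (A.T @ v)                  # gradient step on 0.5*||(Ax-b)_+||^2

        print("objective 0.5*||(Ax-b)_+||^2:", 0.5 * np.sum(np.maximum(A @ x - b, 0.0) ** 2))
        print("largest remaining violation:", np.max(A @ x - b))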

  11. Simple procedures for imposing constraints for nonlinear least squares optimization

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, R. [Petrobras, Rio de Janeiro (Brazil); Thompson, L.G.; Redner, R.; Reynolds, A.C. [Univ. of Tulsa, OK (United States)

    1995-12-31

    Nonlinear regression methods (least squares, least absolute value, etc.) have gained acceptance as a practical technology for analyzing well-test pressure data. Even for relatively simple problems, however, commonly used algorithms sometimes converge to nonfeasible parameter estimates (e.g., negative permeabilities), resulting in a failure of the method. The primary objective of this work is to present a new method for imaging the objective function across all boundaries imposed to satisfy physical constraints on the parameters. The algorithm is extremely simple and reliable. The method uses an equivalent unconstrained objective function to impose the physical constraints required in the original problem. Thus, it can be used with standard unconstrained least squares software without reprogramming and provides a viable alternative to penalty functions for imposing constraints when estimating well and reservoir parameters from pressure transient data. In this work, the authors also present two methods of implementing the penalty function approach for imposing parameter constraints in a general unconstrained least squares algorithm. Based on their experience, the new imaging method always converges to a feasible solution in less time than the penalty function methods.
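
    The general idea of converting a constrained problem into an unconstrained one by re-parameterization can be illustrated with the familiar log-transform trick; this is only an analogy with invented data and a hypothetical decay model, not the imaging method presented in the paper:

        import numpy as np
        from scipy.optimize import least_squares

        rng = np.random.default_rng(7)

        # Invented exponential-decay data; the decay rate k must stay positive.
        t = np.linspace(0.0, 5.0, 30)
        k_true, a_true = 0.8, 2.0
        y = a_true * np.exp(-k_true * t) + 0.05 * rng.normal(size=t.size)

        def residuals(p):
            a, theta = p
            k = np.exp(theta)              # theta = log(k) keeps k strictly positive
            return a * np.exp(-k * t) - y

        # Unconstrained solver applied to the transformed parameters.
        sol = least_squares(residuals, x0=[1.0, 0.0])
        a_hat, k_hat = sol.x[0], np.exp(sol.x[1])
        print("estimates: a =", a_hat, ", k =", k_hat)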

  12. Multilevel first-order system least squares for PDEs

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, S.

    1994-12-31

    The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as upwinding, Petrov-Galerkin, and streamline diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.

  13. Kernel-based least squares policy iteration for reinforcement learning.

    Science.gov (United States)

    Xu, Xin; Hu, Dewen; Lu, Xicheng

    2007-07-01

    In this paper, we present a kernel-based least squares policy iteration (KLSPI) algorithm for reinforcement learning (RL) in large or continuous state spaces, which can be used to realize adaptive feedback control of uncertain dynamic systems. By using KLSPI, near-optimal control policies can be obtained without much a priori knowledge on dynamic models of control plants. In KLSPI, Mercer kernels are used in the policy evaluation of a policy iteration process, where a new kernel-based least squares temporal-difference algorithm called KLSTD-Q is proposed for efficient policy evaluation. To keep the sparsity and improve the generalization ability of KLSTD-Q solutions, a kernel sparsification procedure based on approximate linear dependency (ALD) is performed. Compared to the previous works on approximate RL methods, KLSPI makes two progresses to eliminate the main difficulties of existing results. One is the better convergence and (near) optimality guarantee by using the KLSTD-Q algorithm for policy evaluation with high precision. The other is the automatic feature selection using the ALD-based kernel sparsification. Therefore, the KLSPI algorithm provides a general RL method with generalization performance and convergence guarantee for large-scale Markov decision problems (MDPs). Experimental results on a typical RL task for a stochastic chain problem demonstrate that KLSPI can consistently achieve better learning efficiency and policy quality than the previous least squares policy iteration (LSPI) algorithm. Furthermore, the KLSPI method was also evaluated on two nonlinear feedback control problems, including a ship heading control problem and the swing up control of a double-link underactuated pendulum called acrobot. Simulation results illustrate that the proposed method can optimize controller performance using little a priori information of uncertain dynamic systems. It is also demonstrated that KLSPI can be applied to online learning control by incorporating

  14. Classification using least squares support vector machine for reliability analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-wei GUO; Guang-chen BAI

    2009-01-01

    In order to improve the efficiency of the support vector machine (SVM) for classification when dealing with a large number of samples, the least squares support vector machine (LSSVM) for classification is introduced into reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic program into a group of linear equations. The numerical results indicate that the reliability method based on the LSSVM for classification has higher accuracy and requires less computational cost than the SVM method.
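
    The key computational point, that the LS-SVM classifier follows from a single linear system rather than a quadratic program, can be sketched as follows using the standard Suykens-type formulation; the data, kernel width and regularization value are invented:

        import numpy as np

        rng = np.random.default_rng(8)

        # Two invented Gaussian clusters with labels -1 / +1.
        X = np.vstack([rng.normal(-1.0, 0.5, size=(20, 2)),
                       rng.normal(+1.0, 0.5, size=(20, 2))])
        y = np.hstack([-np.ones(20), np.ones(20)])

        gamma = 10.0          # regularization parameter (assumed value)
        sigma = 1.0           # RBF kernel width (assumed value)

        def rbf(A, B):
            d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        # LS-SVM training: one (n+1)x(n+1) linear system instead of a quadratic program.
        n = X.shape[0]
        Omega = (y[:, None] * y[None, :]) * rbf(X, X)
        M = np.zeros((n + 1, n + 1))
        M[0, 1:] = y
        M[1:, 0] = y
        M[1:, 1:] = Omega + np.eye(n) / gamma
        rhs = np.hstack([0.0, np.ones(n)])
        sol = np.linalg.solve(M, rhs)
        b, alpha = sol[0], sol[1:]

        def predict(Xnew):
            return np.sign(rbf(Xnew, X) @ (alpha * y) + b)

        print("training accuracy:", np.mean(predict(X) == y))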

  15. Handbook of Partial Least Squares Concepts, Methods and Applications

    CERN Document Server

    Vinzi, Vincenzo Esposito; Henseler, Jörg

    2010-01-01

    This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.

  16. Least square estimation of phase, frequency and PDEV

    CERN Document Server

    Danielson, Magnus; Rubiola, Enrico

    2016-01-01

    The Omega-preprocessing was introduced to improve phase noise rejection by using a least squares algorithm. The associated variance is the PVAR, which is more efficient than MVAR at separating the different noise types. However, unlike AVAR and MVAR, the decimation of PVAR estimates for multi-tau analysis is not possible if each counter measurement is a single scalar. This paper gives a decimation rule based on two scalars, the processing blocks, for each measurement. For the Omega-preprocessing, this implies the definition of an output standard as well as hardware requirements for performing high-speed computations of the blocks.
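
    As background, estimating phase and fractional frequency from a block of phase samples is itself a linear least-squares fit; the toy sketch below (with invented values and noise levels) shows that step only and is not the decimation rule proposed in the paper:

        import numpy as np

        rng = np.random.default_rng(9)

        # Simulated phase samples x(t) (in seconds) with white phase noise.
        tau0 = 1e-3                               # sampling interval
        t = np.arange(1000) * tau0
        x0_true, y0_true = 2e-9, 1e-9             # initial phase offset and fractional frequency
        phase = x0_true + y0_true * t + 5e-11 * rng.normal(size=t.size)

        # Least-squares fit of phase(t) = x0 + y0*t gives the phase and frequency estimates.
        A = np.column_stack([np.ones_like(t), t])
        (x0_hat, y0_hat), *_ = np.linalg.lstsq(A, phase, rcond=None)
        print("phase estimate:", x0_hat)
        print("frequency estimate:", y0_hat)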

  17. MULTI-RESOLUTION LEAST SQUARES SUPPORT VECTOR MACHINES

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The Least Squares Support Vector Machine (LS-SVM) is an improvement to the SVM. Combining the LS-SVM with Multi-Resolution Analysis (MRA), this letter proposes the Multi-resolution LS-SVM (MLS-SVM). The proposed algorithm has the same theoretical framework as MRA but with better approximation ability. At a fixed scale MLS-SVM is a classical LS-SVM, but MLS-SVM can gradually approximate the target function at different scales. In experiments, the MLS-SVM is used for nonlinear system identification, and achieves better identification accuracy.

  18. Spectral feature matching based on partial least squares

    Institute of Scientific and Technical Information of China (English)

    Weidong Yan; Zheng Tian; Lulu Pan; Mingtao Ding

    2009-01-01

    We investigate spectral approaches to the problem of point pattern matching, and present a spectral feature descriptor based on partial least squares (PLS). Given the keypoints of two images, we define their position similarity matrices respectively, and extract spectral features from the matrices by PLS, which indicate the geometric distribution and inner relationships of the keypoints. Then keypoint matching is done by bipartite graph matching. Experiments on both synthetic and real-world data corroborate the robustness and invariance of the algorithm.

  19. Image denoising using least squares wavelet support vector machines

    Institute of Scientific and Technical Information of China (English)

    Guoping Zeng; Ruizhen Zhao

    2007-01-01

    We propose a new method for image denoising combining wavelet transform and support vector machines (SVMs). A new image filter operator based on the least squares wavelet support vector machines (LSWSVMs) is presented. Noisy image can be denoised through this filter operator and wavelet thresholding technique. Experimental results show that the proposed method is better than the existing SVM regression with the Gaussian radial basis function (RBF) and polynomial RBF. Meanwhile, it can achieve better performance than other traditional methods such as the average filter and median filter.

  20. Least Squares Adjustment: Linear and Nonlinear Weighted Regression Analysis

    DEFF Research Database (Denmark)

    Nielsen, Allan Aasbjerg

    2007-01-01

    This note primarily describes the mathematics of least squares regression analysis as it is often used in geodesy including land surveying and satellite positioning applications. In these fields regression is often termed adjustment. The note also contains a couple of typical land surveying and satellite positioning application examples. In these application areas we are typically interested in the parameters of the model, typically 2- or 3-D positions, and not in predictive modelling which is often the main concern in other regression analysis applications. Adjustment is often used to obtain

  1. Neural Network Inverse Adaptive Controller Based on Davidon Least Square

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The general neural network inverse adaptive controller has two flaws: the first is slow convergence speed; the second is that it fails on non-minimum phase systems. These defects limit the scope in which the neural network inverse adaptive controller can be used. We employ Davidon least squares in training the multi-layer feedforward neural network used to approximate the inverse model of the plant in order to expedite convergence, and then, through constructing a pseudo-plant, a neural network inverse adaptive controller is put forward which remains effective for nonlinear non-minimum phase systems. The simulation results show the validity of this scheme.

  2. Kernel-Based Least Squares Temporal Difference With Gradient Correction.

    Science.gov (United States)

    Song, Tianheng; Li, Dazi; Cao, Liulin; Hirasawa, Kotaro

    2016-04-01

    A least squares temporal difference with gradient correction (LS-TDC) algorithm and its kernel-based version, kernel-based LS-TDC (KLS-TDC), are proposed as policy evaluation algorithms for reinforcement learning (RL). LS-TDC is derived from the TDC algorithm. Because TDC is derived by minimizing the mean-square projected Bellman error, LS-TDC has better convergence performance. The least squares technique is used to omit the step-size tuning of the original TDC and enhance robustness. For KLS-TDC, since the kernel method is used, feature vectors can be selected automatically. Approximate linear dependence analysis is performed to realize kernel sparsification. In addition, a policy iteration strategy motivated by KLS-TDC is constructed to solve control learning problems. The convergence and parameter sensitivities of both LS-TDC and KLS-TDC are tested through on-policy learning, off-policy learning, and control learning problems. Experimental results, compared with a series of corresponding RL algorithms, demonstrate that both LS-TDC and KLS-TDC have better approximation and convergence performance, higher efficiency of sample usage, a smaller burden of parameter tuning, and less sensitivity to parameters.

  3. Least squares weighted twin support vector machines with local information

    Institute of Scientific and Technical Information of China (English)

    花小朋; 徐森; 李先锋

    2015-01-01

    A least squares version of the recently proposed weighted twin support vector machine with local information (WLTSVM) for binary classification is formulated. This formulation leads to an extremely simple and fast algorithm, called least squares weighted twin support vector machine with local information (LSWLTSVM), for generating binary classifiers based on two non-parallel hyperplanes. Two modified primal problems of WLTSVM are solved, instead of the two dual problems usually solved. The solution of the two modified problems reduces to solving just two systems of linear equations, as opposed to solving two quadratic programming problems along with two systems of linear equations in WLTSVM. Moreover, two extra modifications are proposed in LSWLTSVM to improve the generalization capability. One is that a heat kernel function, rather than the simple-minded definition in WLTSVM, is used to define the weight matrix of the adjacency graph, which ensures that the underlying similarity information between any pair of data points in the same class can be fully reflected. The other is that the weight of each point in the contrary class is considered when constructing the equality constraints, which makes LSWLTSVM less sensitive to noise points than WLTSVM. Experimental results indicate that LSWLTSVM has comparable classification accuracy to WLTSVM but with remarkably less computational time.

  4. Plane-wave least-squares reverse-time migration

    KAUST Repository

    Dai, Wei

    2013-06-03

    A plane-wave least-squares reverse-time migration (LSRTM) is formulated with a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of plane-wave prestack LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer stable convergence even when the migration velocity has bulk errors of up to 5%; (2) to significantly reduce computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. Unlike phase-shift encoding with random time shifts applied to each shot gather, plane-wave encoding can be effectively applied to data with a marine streamer geometry. (3) Plane-wave prestack LSRTM can provide higher-quality images than standard reverse-time migration. Numerical tests on the Marmousi2 model and a marine field data set are performed to illustrate the benefits of plane-wave LSRTM. Empirical results show that LSRTM in the plane-wave domain, compared to standard reverse-time migration, produces images efficiently with fewer artifacts and better spatial resolution. Moreover, the prestack image ensemble accommodates more unknowns, which makes it more robust than conventional least-squares migration in the presence of migration velocity errors. © 2013 Society of Exploration Geophysicists.

  5. Making the most out of the least (squares migration)

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution. If the migration velocity model is sufficiently accurate, then LSM can mitigate many of the above problems and lead to a more resolved migration image, sometimes with twice the spatial resolution. However, there are two problems with LSM: the cost can be an order of magnitude more than standard migration and the quality of the LSM image is no better than the standard image for velocity errors of 5% or more. We now show how to get the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.

  6. Making the most out of least-squares migration

    KAUST Repository

    Huang, Yunsong

    2014-09-01

    Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution. Given a sufficiently accurate migration velocity model, LSM can mitigate many of the above problems and can produce more resolved migration images, sometimes with more than twice the spatial resolution of standard migration. However, LSM faces two challenges: The computational cost can be an order of magnitude higher than that of standard migration, and the resulting image quality can fail to improve for migration velocity errors of about 5% or more. It is possible to obtain the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.

  7. Point pattern matching based on kernel partial least squares

    Institute of Scientific and Technical Information of China (English)

    Weidong Yan; Zheng Tian; Lulu Pan; Jinhuan Wen

    2011-01-01

    Point pattern matching is an essential step in many image processing applications. This letter investigates spectral approaches to point pattern matching and presents a spectral feature matching algorithm based on kernel partial least squares (KPLS). Given the feature points of two images, we define position similarity matrices for the reference and sensed images, and extract pattern vectors from the matrices using KPLS, which indicate the geometric distribution and the inner relationships of the feature points. Feature point matching is then performed with the bipartite graph matching method. Experiments conducted on both synthetic and real-world data demonstrate the robustness and invariance of the algorithm.

  8. On the stability and accuracy of least squares approximations

    CERN Document Server

    Cohen, Albert; Leviatan, Dany

    2011-01-01

    We consider the problem of reconstructing an unknown function $f$ on a domain $X$ from samples of $f$ at $n$ randomly chosen points with respect to a given measure $\rho_X$. Given a sequence of linear spaces $(V_m)_{m>0}$ with $\dim(V_m)=m\leq n$, we study the least squares approximations from the spaces $V_m$. It is well known that such approximations can be inaccurate when $m$ is too close to $n$, even when the samples are noiseless. Our main result provides a criterion on $m$ that describes the needed amount of regularization to ensure that the least squares method is stable and that its accuracy, measured in $L^2(X,\rho_X)$, is comparable to the best approximation error of $f$ by elements from $V_m$. We illustrate this criterion for various approximation schemes, such as trigonometric polynomials, with $\rho_X$ being the uniform measure, and algebraic polynomials, with $\rho_X$ being either the uniform or Chebyshev measure. For such examples we also prove similar stability results using deterministic...
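
    A small numerical illustration of the phenomenon the abstract describes, under assumed choices (Legendre polynomial spaces $V_m$ on $[-1,1]$, the uniform sampling measure, noiseless samples of a smooth target): the fit is accurate for moderate $m$ but degrades as $m$ approaches the number of samples $n$.

```python
# Numerical illustration (assumed setup: Legendre spaces V_m on [-1, 1], uniform
# sampling measure, noiseless samples of a smooth target): accuracy is good for
# moderate m but degrades as m approaches the number of samples n.
import numpy as np

rng = np.random.default_rng(0)
f = lambda x: np.exp(x) * np.sin(3.0 * x)

n = 200
x = rng.uniform(-1.0, 1.0, n)                 # n random samples
y = f(x)

for m in (5, 20, 190):                        # dim(V_m) = m <= n
    V = np.polynomial.legendre.legvander(x, m - 1)
    coef, *_ = np.linalg.lstsq(V, y, rcond=None)
    xt = np.linspace(-1.0, 1.0, 2000)
    err = np.max(np.abs(np.polynomial.legendre.legval(xt, coef) - f(xt)))
    print(f"m = {m:3d}   max error on [-1, 1] = {err:.2e}")
```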

  9. Orthogonal least squares learning algorithm for radial basis function networks

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S.; Cowan, C.F.N.; Grant, P.M. (Dept. of Electrical Engineering, Univ. of Edinburgh, Mayfield Road, Edinburgh EH9 3JL, Scotland (GB))

    1991-03-01

    The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular value decomposition to solve for the weights of the network. Such a procedure has several drawbacks and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The paper proposes an alternative learning procedure based on the orthogonal least squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. The algorithm has the property that each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least squares learning strategy provides a simple and efficient means for fitting radial basis function networks, and this is illustrated using examples taken from two different signal processing applications.

  10. Orthogonal least squares learning algorithm for radial basis function networks.

    Science.gov (United States)

    Chen, S; Cowan, C N; Grant, P M

    1991-01-01

    The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
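
    The two records above describe the same greedy selection idea: at each step, pick the candidate center whose regressor, once orthogonalized against the already chosen ones, maximizes the explained output energy (the error reduction ratio). The sketch below is a simplified classical Gram-Schmidt version of that selection; the Gaussian width, stopping tolerance, and toy data are illustrative assumptions, and regularized or fast recursive variants are not shown.

```python
# Simplified orthogonal-least-squares (OLS) forward selection of RBF centers via
# classical Gram-Schmidt. The Gaussian width, tolerance, and toy data are
# illustrative; regularized or fast recursive variants are not shown.
import numpy as np

def rbf_design(X, centers, width=1.0):
    d2 = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * width ** 2))

def ols_select_centers(X, y, max_centers=10, tol=1e-3):
    P = rbf_design(X, X)                    # every sample is a candidate center
    selected, Q = [], []
    for _ in range(max_centers):
        best_err, best_i, best_q = 0.0, None, None
        for i in range(P.shape[1]):
            if i in selected:
                continue
            q = P[:, i].copy()
            for qj in Q:                    # orthogonalize against chosen columns
                q -= (qj @ P[:, i]) / (qj @ qj) * qj
            if q @ q < 1e-12:
                continue
            err = (q @ y) ** 2 / (q @ q)    # explained output energy
            if err > best_err:
                best_err, best_i, best_q = err, i, q
        if best_i is None or best_err / (y @ y) < tol:   # error reduction ratio test
            break
        selected.append(best_i)
        Q.append(best_q)
    return X[selected]

rng = np.random.default_rng(0)
X = rng.uniform(0.0, 2.0 * np.pi, (100, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(100)
print("chosen centers:\n", ols_select_centers(X, y, max_centers=8))
```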

  11. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations.

    Science.gov (United States)

    Cao, Jiguo; Huang, Jianhua Z; Wu, Hulin

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are approximated also using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate a HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.

  12. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations

    KAUST Repository

    Cao, Jiguo

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are approximated also using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate a HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.
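
    A drastically simplified sketch of the underlying idea in the two records above, estimating ODE parameters by nonlinear least squares on noisy state observations: here the coefficients are constants rather than the time-varying spline-modeled functions of the article, and the logistic ODE, noise level, and initial guesses are illustrative assumptions.

```python
# Constant-parameter nonlinear least squares fit of an ODE to noisy observations.
# The ODE (logistic growth), noise level, and initial guesses are illustrative.
import numpy as np
from scipy.integrate import solve_ivp
from scipy.optimize import least_squares

def logistic(t, x, r, K):
    # simple illustrative ODE: dx/dt = r x (1 - x/K)
    return r * x * (1 - x / K)

def residuals(theta, t_obs, x_obs, x_init):
    r, K = theta
    sol = solve_ivp(logistic, (t_obs[0], t_obs[-1]), [x_init],
                    t_eval=t_obs, args=(r, K))
    return sol.y[0] - x_obs

# synthetic noisy observations generated with true parameters (r, K) = (0.8, 10)
t_obs = np.linspace(0.0, 10.0, 40)
truth = solve_ivp(logistic, (0.0, 10.0), [0.5], t_eval=t_obs, args=(0.8, 10.0)).y[0]
x_obs = truth + 0.2 * np.random.default_rng(0).standard_normal(t_obs.size)

fit = least_squares(residuals, [0.5, 5.0], args=(t_obs, x_obs, 0.5))
print("estimated (r, K):", fit.x)
```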

  13. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang

    2013-12-06

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free-surface becomes an extended virtual source where the downgoing free-surface multiples more fully illuminate the subsurface compared to the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required and the ringy time series for each source is automatically deconvolved. If the multiples can be perfectly separated from the primaries, numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if there is imperfect separation and the multiples are strongly interfering with the primaries then LSRTMM images show no significant advantage over the primary migration images. In some cases, they can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows higher signal-to-noise imaging of the salt bottom and top compared to standard RTM images. This is likely attributed to the fact that the target body is just below the sea bed so that the deep water multiples do not have strong interference with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean bottom seismic data shows that LSM of primaries and LSRTMM provides significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. This can lead to lower

  14. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most astonishing attempt among these is linear least-squares. Although this criterion enjoyed wide popularity in many areas due to its attractive properties, it appeared to suffer from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the desired problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that go in search of minimizing the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
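
    For context, the basic regularized least-squares estimate that such a regularization parameter controls is the Tikhonov/ridge solution $\hat{x} = (A^T A + \lambda I)^{-1} A^T b$. The sketch below simply sweeps $\lambda$ over a grid on synthetic data and reports the MSE against the known signal; it does not implement COPRA, and all sizes and noise levels are illustrative.

```python
# Tikhonov/ridge least squares, x_hat = (A^T A + lambda I)^{-1} A^T b, with lambda
# swept over a grid on synthetic data. COPRA itself is not implemented; sizes and
# noise level are illustrative.
import numpy as np

rng = np.random.default_rng(1)
m, n = 50, 40
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)
b = A @ x_true + 0.5 * rng.standard_normal(m)

def regularized_ls(A, b, lam):
    return np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ b)

for lam in (0.0, 0.1, 1.0, 10.0):
    x_hat = regularized_ls(A, b, lam)
    print(f"lambda = {lam:5.1f}   MSE vs. true signal = {np.mean((x_hat - x_true) ** 2):.4f}")
```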

  15. Robust Homography Estimation Based on Nonlinear Least Squares Optimization

    Directory of Open Access Journals (Sweden)

    Wei Mou

    2014-01-01

    The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but it is unlikely that perfect matching can always be achieved. To deal with this problem, we propose a nonlinear least squares optimization approach to compute the homography such that false matches have little or no effect on the computed homography. Unlike normal homography computation algorithms, our method formulates not only the keypoints' geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be identified while the homography is computed. Experiments show that the proposed approach performs well even in the presence of a large number of outliers.
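
    A hedged, much-simplified sketch of this kind of outlier-tolerant estimation: a homography is fit to putative correspondences by nonlinear least squares with a robust (Huber) loss so that gross mismatches have limited influence. The descriptor-similarity term of the paper is omitted, and the synthetic homography, noise, and outlier fraction are illustrative assumptions.

```python
# Robust homography fit by nonlinear least squares with a Huber loss, so gross
# mismatches have limited influence. The descriptor-similarity term of the paper is
# omitted; H_true, noise, outlier fraction, and f_scale are illustrative.
import numpy as np
from scipy.optimize import least_squares

def project(h8, pts):
    """Apply the homography parameterized by 8 values (h33 fixed to 1)."""
    H = np.append(h8, 1.0).reshape(3, 3)
    q = (H @ np.c_[pts, np.ones(len(pts))].T).T
    return q[:, :2] / q[:, 2:3]

def residuals(h8, src, dst):
    return (project(h8, src) - dst).ravel()

rng = np.random.default_rng(0)
H_true = np.array([[1.1, 0.02, 5.0],
                   [0.01, 0.95, -3.0],
                   [1e-4, 2e-4, 1.0]])            # h33 already normalized to 1
src = rng.uniform(0.0, 100.0, (40, 2))
dst = project(H_true.ravel()[:8], src) + 0.3 * rng.standard_normal((40, 2))
dst[:5] += 50.0                                    # five grossly wrong correspondences

init = np.array([1.0, 0.0, 0.0, 0.0, 1.0, 0.0, 0.0, 0.0])   # identity start
fit = least_squares(residuals, init, args=(src, dst), loss="huber", f_scale=1.0)
H_est = np.append(fit.x, 1.0).reshape(3, 3)
print(np.round(H_est, 3))
```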

  16. semPLS: Structural Equation Modeling Using Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Armin Monecke

    2012-05-01

    Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM, which is especially suited to situations when data are not normally distributed. PLS path modelling is referred to as a soft-modeling technique with minimal demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.

  17. RNA structural motif recognition based on least-squares distance.

    Science.gov (United States)

    Shen, Ying; Wong, Hau-San; Zhang, Shaohong; Zhang, Lin

    2013-09-01

    RNA structural motifs are recurrent structural elements occurring in RNA molecules. RNA structural motif recognition aims to find RNA substructures that are similar to a query motif, and it is important for RNA structure analysis and RNA function prediction. In view of this, we propose a new method known as RNA Structural Motif Recognition based on Least-Squares distance (LS-RSMR) to effectively recognize RNA structural motifs. A test set consisting of five types of RNA structural motifs occurring in Escherichia coli ribosomal RNA is compiled by us. Experiments are conducted for recognizing these five types of motifs. The experimental results fully reveal the superiority of the proposed LS-RSMR compared with four other state-of-the-art methods.

  18. Partial least squares regression in the social sciences

    Directory of Open Access Journals (Sweden)

    Megan L. Sawatsky

    2015-06-01

    Partial least squares regression (PLSR) is a statistical modeling technique that extracts latent factors to explain both predictor and response variation. PLSR is particularly useful as a data exploration technique because it is highly flexible (e.g., there are few assumptions, and variables can be highly collinear). While gaining importance across a diverse number of fields, its application in the social sciences has been limited. Here, we provide a brief introduction to PLSR, directed towards a novice audience with limited exposure to the technique; demonstrate its utility as an alternative to more classic approaches (multiple linear regression, principal component regression); and apply the technique to a hypothetical dataset using JMP statistical software (with references to SAS software).
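
    An equivalent exploratory workflow can be sketched in Python with scikit-learn's PLSRegression (rather than the JMP/SAS software the article uses); the synthetic, deliberately collinear predictors and the choice of two components are illustrative assumptions.

```python
# PLS regression on synthetic, deliberately collinear predictors using
# scikit-learn (the article itself uses JMP/SAS); the number of components and the
# data-generating model are illustrative assumptions.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n = 100
latent = rng.standard_normal((n, 2))                       # two true latent factors
X = np.hstack([latent + 0.1 * rng.standard_normal((n, 2)) for _ in range(5)])
y = latent @ np.array([2.0, -1.0]) + 0.2 * rng.standard_normal(n)

pls = PLSRegression(n_components=2)
print("5-fold CV R^2:", cross_val_score(pls, X, y, cv=5).mean().round(3))
pls.fit(X, y)
print("X loadings shape:", pls.x_loadings_.shape)          # (10 predictors, 2 factors)
```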

  19. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common for all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to the velocity errors, 2) the regularized plane-wave LSM is more robust in the presence of velocity errors, and 3) LSM achieves both computational and IO saving by plane-wave encoding compared to shot-domain LSM for the models tested.

  20. Partial Least Squares Structural Equation Modeling with R

    Directory of Open Access Journals (Sweden)

    Hamdollah Ravand

    2016-09-01

    Structural equation modeling (SEM) has become widespread in educational and psychological research. Its flexibility in addressing complex theoretical models and its proper treatment of measurement error has made it the model of choice for many researchers in the social sciences. Nevertheless, the model imposes some daunting assumptions and restrictions (e.g., normality and relatively large sample sizes) that could discourage practitioners from applying the model. Partial least squares SEM (PLS-SEM) is a nonparametric technique which makes no distributional assumptions and can be estimated with small sample sizes. In this paper a general introduction to PLS-SEM is given and it is compared with conventional SEM. Next, step-by-step procedures, along with R functions, are presented to estimate the model. A data set is analyzed and the outputs are interpreted.

  1. Estimating Military Aircraft Cost Using Least Squares Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    ZHU Jia-yuan; ZHANG Xi-bin; ZHANG Heng-xi; REN Bo

    2004-01-01

    A multi-layer adaptive parameter-optimization algorithm is developed for improving least squares support vector machines (LS-SVM), and a military aircraft life-cycle-cost (LCC) intelligent estimation model is proposed based on the improved LS-SVM. The intelligent cost estimation process is divided into three steps in the model. In the first step, a cost-drive factor, which is significant for cost estimation, needs to be selected. In the second step, military aircraft training samples within the cost and cost-drive-factor set are obtained by the LS-SVM. Then the model can be used for cost estimation of new aircraft types. Chinese military aircraft costs are estimated in the paper. The results show that the costs estimated by the new model are closer to the true costs than those of the traditionally used methods.

  2. A stochastic total least squares solution of adaptive filtering problem.

    Science.gov (United States)

    Javed, Shazia; Ahmad, Noor Atinah

    2014-01-01

    An efficient and computationally linear algorithm is derived for the total least squares solution of the adaptive filtering problem, when both input and output signals are contaminated by noise. The proposed total least mean squares (TLMS) algorithm is designed by recursively computing an optimal solution of the adaptive TLS problem by minimizing the instantaneous value of a weighted cost function. A convergence analysis of the algorithm is given to show its global convergence, provided that the step-size parameter is appropriately chosen. The TLMS algorithm is computationally simpler than other TLS algorithms and demonstrates better performance than the least mean square (LMS) and normalized least mean square (NLMS) algorithms. It provides minimum mean square deviation by exhibiting better convergence in misalignment for unknown system identification under noisy inputs.
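
    For reference, the batch total least squares solution that such recursive algorithms approximate online can be read off directly from the SVD of the augmented data matrix [X | y]. The sketch below compares it with ordinary least squares on a synthetic errors-in-variables problem; it is not the TLMS recursion itself, and the noise levels are illustrative.

```python
# Batch total least squares via the SVD of the augmented matrix [X | y], compared
# with ordinary LS on a synthetic errors-in-variables problem. This is the
# reference solution, not the TLMS recursion; noise levels are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n, p = 500, 4
w_true = np.array([0.5, -1.0, 2.0, 0.3])
X_clean = rng.standard_normal((n, p))
X = X_clean + 0.1 * rng.standard_normal((n, p))      # noisy input (errors in variables)
y = X_clean @ w_true + 0.1 * rng.standard_normal(n)  # noisy output

_, _, Vt = np.linalg.svd(np.c_[X, y], full_matrices=False)
v = Vt[-1]                                           # smallest right singular vector
w_tls = -v[:p] / v[p]

w_ls = np.linalg.lstsq(X, y, rcond=None)[0]
print("TLS parameter error:", np.linalg.norm(w_tls - w_true))
print(" LS parameter error:", np.linalg.norm(w_ls - w_true))
```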

  3. Least squares deconvolution of the stellar intensity and polarization spectra

    CERN Document Server

    Kochukhov, O; Piskunov, N

    2010-01-01

    Least squares deconvolution (LSD) is a powerful method of extracting high-precision average line profiles from stellar intensity and polarization spectra. Despite its common usage, the LSD method is poorly documented and has never been tested using realistic synthetic spectra. In this study we revisit the key assumptions of the LSD technique, clarify its numerical implementation, discuss possible improvements and give recommendations on how to make LSD results understandable and reproducible. We also address the problem of interpreting the moments and shapes of the LSD profiles in terms of physical parameters. We have developed an improved, multiprofile version of LSD and have extended the deconvolution procedure to linear polarization analysis, taking into account anomalous Zeeman splitting of spectral lines. This code is applied to theoretical Stokes parameter spectra. We test various methods of interpreting the mean profiles, investigating how coarse approximations of the multiline technique trans...

  4. Least-Squares Seismic Inversion with Stochastic Conjugate Gradient Method

    Institute of Scientific and Technical Information of China (English)

    Wei Huang; Hua-Wei Zhou

    2015-01-01

    With the development of computational power, there has been an increased focus on data-fitting related seismic inversion techniques for high fidelity seismic velocity model and image, such as full-waveform inversion and least squares migration. However, though more advanced than conventional methods, these data fitting methods can be very expensive in terms of computational cost. Recently, various techniques to optimize these data-fitting seismic inversion problems have been implemented to cater for the industrial need for much improved efficiency. In this study, we propose a general stochastic conjugate gradient method for these data-fitting related inverse problems. We first prescribe the basic theory of our method and then give synthetic examples. Our numerical experiments illustrate the potential of this method for large-size seismic inversion application.

  5. Least-squares based iterative multipath super-resolution technique

    CERN Document Server

    Nam, Wooseok

    2011-01-01

    In this paper, we study the problem of multipath channel estimation for direct sequence spread spectrum signals. To resolve multipath components arriving within a short interval, we propose a new algorithm called the least-squares based iterative multipath super-resolution (LIMS). Compared to conventional super-resolution techniques, such as the multiple signal classification (MUSIC) and the estimation of signal parameters via rotation invariance techniques (ESPRIT), our algorithm has several appealing features. In particular, even in critical situations where the conventional super-resolution techniques are not very powerful due to limited data or the correlation between path coefficients, the LIMS algorithm can produce successful results. In addition, due to its iterative nature, the LIMS algorithm is suitable for recursive multipath tracking, whereas the conventional super-resolution techniques may not be. Through numerical simulations, we show that the LIMS algorithm can resolve the first arrival path amo...

  6. A Galerkin least squares approach to viscoelastic flow.

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are sought to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems to be suitable as a general-use algorithm.

  7. Simultaneous least squares fitter based on the Lagrange multiplier method

    CERN Document Server

    Guan, Yinghui; Zheng, Yangheng; Zhu, Yong-Sheng

    2013-01-01

    We developed a least squares fitter used for extracting expected physics parameters from correlated experimental data in high energy physics. This fitter considers the correlations among the observables and handles the nonlinearity using linearization during the $\chi^2$ minimization. The method can naturally be extended to analyses with external inputs. By incorporating Lagrange multipliers, the fitter includes constraints among the measured observables and the parameters of interest. We applied this fitter to the study of the $D^{0}-\bar{D}^{0}$ mixing parameters as a test-bed based on MC simulation. The test results show that the fitter gives unbiased estimators with correct uncertainties and that the approach is credible.

  8. DIRECT ITERATIVE METHODS FOR RANK DEFICIENT GENERALIZED LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Jin-yun Yuan; Xiao-qing Jin

    2000-01-01

    The generalized least squares (LS) problem appears in many application areas. Here W is an m × m symmetric positive definite matrix and A is an m × n matrix with m ≥ n. Since the problem has many solutions in the rank-deficient case, some special preconditioned techniques are adapted to obtain the minimum 2-norm solution. A block SOR method and the preconditioned conjugate gradient (PCG) method are proposed here. Convergence and the optimal relaxation parameter for the block SOR method are studied. An error bound for the PCG method is given. A comparison of these methods is investigated. Some remarks on the implementation of the methods and the operation cost are given as well.
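
    Assuming the standard formulation min_x (Ax - b)^T W^{-1} (Ax - b) with full-rank A, the problem reduces to an ordinary least squares problem after whitening with a Cholesky factor of W. The sketch below shows only that full-rank reduction, not the paper's block SOR or PCG treatment of the rank-deficient case; the synthetic W, sizes, and noise are illustrative.

```python
# Full-rank generalized least squares by whitening: with W = C C^T (Cholesky),
# min_x (Ax - b)^T W^{-1} (Ax - b) becomes an ordinary LS problem in C^{-1}A and
# C^{-1}b. Synthetic W, sizes, and noise are illustrative; the rank-deficient
# block SOR / PCG treatment of the paper is not shown.
import numpy as np

rng = np.random.default_rng(3)
m, n = 30, 5
A = rng.standard_normal((m, n))
x_true = rng.standard_normal(n)

L = np.tril(rng.standard_normal((m, m))) + m * np.eye(m)
W = L @ L.T                                  # symmetric positive definite weight matrix
b = A @ x_true + 0.1 * np.linalg.cholesky(W) @ rng.standard_normal(m)

C = np.linalg.cholesky(W)
A_w = np.linalg.solve(C, A)                  # C^{-1} A
b_w = np.linalg.solve(C, b)                  # C^{-1} b
x_gls = np.linalg.lstsq(A_w, b_w, rcond=None)[0]
print("parameter error:", np.linalg.norm(x_gls - x_true))
```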

  9. Cognitive assessment in mathematics with the least squares distance method.

    Science.gov (United States)

    Ma, Lin; Çetin, Emre; Green, Kathy E

    2012-01-01

    This study investigated the validation of comprehensive cognitive attributes of an eighth-grade mathematics test using the least squares distance method and compared performance on the attributes by gender and region. A sample of 5,000 students was randomly selected from the data of the 2005 Turkish national mathematics assessment of eighth-grade students. Twenty-five math items were assessed for the presence or absence of 20 cognitive attributes (content, cognitive processes, and skill). Four attributes were found to be misspecified or nonpredictive. However, results demonstrated the validity of the cognitive attributes in terms of the revised set of 17 attributes. Girls performed similarly to boys on the attributes. Students from the two eastern regions significantly underperformed on most attributes.

  10. Temperature prediction control based on least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Bin LIU; Hongye SU; Weihua HUANG; Jian CHU

    2004-01-01

    A prediction control algorithm is presented based on a least squares support vector machines (LS-SVM) model for a class of complex systems with strong nonlinearity. The nonlinear off-line model of the controlled plant is built by an LS-SVM with a radial basis function (RBF) kernel. While the system is running, the off-line model is linearized at each sampling instant, and the generalized prediction control (GPC) algorithm is employed to implement the prediction control for the controlled plant. The obtained algorithm is applied to a boiler temperature control system with complicated nonlinearity and a large time delay. The results of the experiment verify the effectiveness and merit of the algorithm.

  11. ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    SONG Kaichen; NIE Xili

    2006-01-01

    Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm, in which the relationship between the weight coefficients and the measurement noise is established, is proposed by giving attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm that can adjust the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements is presented. It is proved by simulation and experiment that the precision performance of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
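
    In the uncorrelated-noise case described above, the weighted-least-squares optimum reduces to inverse-variance weighting of the individual sensor readings. A minimal sketch, assuming the noise variances are known (the paper also estimates them from the measurements); the numbers are made up.

```python
# Inverse-variance weighted fusion of several sensor readings of the same scalar
# quantity (the weighted-least-squares optimum when the noises are uncorrelated).
# The readings and variances below are made-up illustrative numbers.
import numpy as np

def fuse(readings, variances):
    inv_var = 1.0 / np.asarray(variances, dtype=float)
    weights = inv_var / inv_var.sum()          # weights proportional to 1/variance
    fused_value = np.dot(weights, readings)
    fused_variance = 1.0 / inv_var.sum()
    return fused_value, fused_variance

readings = [10.2, 9.8, 10.5]     # three sensors measuring the same quantity
variances = [0.04, 0.01, 0.09]   # assumed-known measurement-noise variances
value, var = fuse(readings, variances)
print(f"fused value = {value:.3f}, fused variance = {var:.4f}")
```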

  12. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear-minimum-mean-squared error estimator (LMMSE), when the elements of x are statistically white.

  13. Local validation of EU-DEM using Least Squares Collocation

    Science.gov (United States)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we deal with the evaluation of the European Digital Elevation Model (EU-DEM) in a limited area, covering a few kilometers. We compare EU-DEM derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for orthometric height prognosis, using Least Squares Collocation on the residuals remaining from the first step (after the fitted surface is applied). Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and official heights which is better than 1.4 meters.

  14. Partial Least Squares tutorial for analyzing neuroimaging data

    Directory of Open Access Journals (Sweden)

    Patricia Van Roon

    2014-09-01

    Full Text Available Partial least squares (PLS has become a respected and meaningful soft modeling analysis technique that can be applied to very large datasets where the number of factors or variables is greater than the number of observations. Current biometric studies (e.g., eye movements, EKG, body movements, EEG are often of this nature. PLS eliminates the multiple linear regression issues of over-fitting data by finding a few underlying or latent variables (factors that account for most of the variation in the data. In real-world applications, where linear models do not always apply, PLS can model the non-linear relationship well. This tutorial introduces two PLS methods, PLS Correlation (PLSC and PLS Regression (PLSR and their applications in data analysis which are illustrated with neuroimaging examples. Both methods provide straightforward and comprehensible techniques for determining and modeling relationships between two multivariate data blocks by finding latent variables that best describes the relationships. In the examples, the PLSC will analyze the relationship between neuroimaging data such as Event-Related Potential (ERP amplitude averages from different locations on the scalp with their corresponding behavioural data. Using the same data, the PLSR will be used to model the relationship between neuroimaging and behavioural data. This model will be able to predict future behaviour solely from available neuroimaging data. To find latent variables, Singular Value Decomposition (SVD for PLSC and Non-linear Iterative PArtial Least Squares (NIPALS for PLSR are implemented in this tutorial. SVD decomposes the large data block into three manageable matrices containing a diagonal set of singular values, as well as left and right singular vectors. For PLSR, NIPALS algorithms are used because it provides amore precise estimation of the latent variables. Mathematica notebooks are provided for each PLS method with clearly labeled sections and subsections. The

  15. Götterdämmerung over total least squares

    Science.gov (United States)

    Malissiovas, G.; Neitzel, F.; Petrovic, S.

    2016-06-01

    The traditional way of solving non-linear least squares (LS) problems in Geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have been developed in the past also by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. Therefore, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all these four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical with those resulting from the LS approach. As a by-product of this research two novel approaches are presented for the TLS solutions of fitting a straight line to 3D and the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and also numerically compared to published iterative solutions.
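
    The first of the four problems, fitting a straight line to 2D points, has exactly the kind of direct eigenvalue/SVD solution the abstract refers to: the TLS line passes through the centroid and points along the dominant singular vector of the centered coordinates, so orthogonal distances are minimized without iteration. A minimal sketch with made-up points:

```python
# Direct (non-iterative) TLS fit of a straight line to 2D points: the line passes
# through the centroid and points along the dominant right singular vector of the
# centered coordinates, minimizing orthogonal distances. Points are made up.
import numpy as np

def tls_line_2d(points):
    centroid = points.mean(axis=0)              # the TLS line passes through it
    _, _, Vt = np.linalg.svd(points - centroid)
    direction, normal = Vt[0], Vt[1]            # principal direction and its normal
    return centroid, direction, normal

pts = np.array([[0.0, 0.1], [1.0, 0.9], [2.0, 2.2], [3.0, 2.9], [4.0, 4.1]])
c, d, nrm = tls_line_2d(pts)
orth_dist = (pts - c) @ nrm                     # signed orthogonal residuals
print("point on line:", c, "direction:", d,
      "RMS orth. residual:", np.sqrt(np.mean(orth_dist ** 2)).round(4))
```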

  16. Recursive least squares background prediction of univariate syndromic surveillance data

    Directory of Open Access Journals (Sweden)

    Burkom Howard

    2009-01-01

    Abstract Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve the performance of detecting outbreaks by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the Day of the Week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which a prediction and detection component of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distribution of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of Receiver Operating Characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold
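
    The background predictor at the core of this approach is a standard recursive least squares filter with exponential forgetting. The sketch below shows only that generic RLS update applied to a toy daily-count series; the paper's day-of-week transformation, signal injection, and W2 comparison are not reproduced, and the forgetting factor, initialization, and regressor order are illustrative assumptions.

```python
# Generic recursive least squares (RLS) one-step-ahead predictor with exponential
# forgetting on a toy daily-count series. The forgetting factor, initialization,
# and regressor order are illustrative; the paper's day-of-week correction and
# detection machinery are not shown.
import numpy as np

class RLSPredictor:
    def __init__(self, order, lam=0.98, delta=100.0):
        self.w = np.zeros(order)            # filter coefficients
        self.P = delta * np.eye(order)      # inverse correlation matrix
        self.lam = lam                      # forgetting factor

    def update(self, u, d):
        """u: regressor (recent counts), d: newly observed count."""
        y_hat = self.w @ u                  # a priori (one-step-ahead) prediction
        Pu = self.P @ u
        k = Pu / (self.lam + u @ Pu)        # gain vector
        self.w += k * (d - y_hat)
        self.P = (self.P - np.outer(k, Pu)) / self.lam
        return y_hat

rng = np.random.default_rng(5)
days = np.arange(300)
series = 50 + 10 * np.sin(2 * np.pi * days / 7) + rng.poisson(5, days.size)

order = 7
rls = RLSPredictor(order)
preds = [rls.update(series[t - order:t], series[t]) for t in range(order, days.size)]
errs = series[order:] - np.array(preds)
print("background prediction RMSE:", np.sqrt(np.mean(errs[50:] ** 2)).round(2))
```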

  17. Least-squares joint imaging of multiples and primaries

    Science.gov (United States)

    Brown, Morgan Parker

    Current exploration geophysics practice still regards multiple reflections as noise, although multiples often contain considerable information about the earth's angle-dependent reflectivity that primary reflections do not. To exploit this information, multiples and primaries must be combined in a domain in which they are comparable, such as the prestack image domain. However, unless the multiples and primaries have been pre-separated from the data, crosstalk leakage between multiple and primary images will significantly degrade any gains in the signal fidelity, geologic interpretability, and signal-to-noise ratio of the combined image. I present a global linear least-squares algorithm, denoted LSJIMP (Least-squares Joint Imaging of Multiples and Primaries), which separates multiples from primaries while simultaneously combining their information. The novelty of the method lies in the three model regularization operators which discriminate between crosstalk and signal and extend information between multiple and primary images. The LSJIMP method exploits the hitherto ignored redundancy between primaries and multiples in the data. While many different types of multiple imaging operators are well suited for use with the LSJIMP method, in this thesis I utilize an efficient prestack time imaging strategy for multiples which sacrifices accuracy in a complex earth for computational speed and convenience. I derive a variant of the normal moveout (NMO) equation for multiples, called HEMNO, which can image "split" pegleg multiples which arise from a moderately heterogeneous earth. I also derive a series of prestack amplitude compensation operators which, when combined with HEMNO, transform pegleg multiples into events that are directly comparable, kinematically and in terms of amplitudes, to the primary reflection. I test my implementation of LSJIMP on two datasets from the deepwater Gulf of Mexico. The first, a 2-D line in the Mississippi Canyon region, exhibits a variety of

  18. River flow time series using least squares support vector machines

    Science.gov (United States)

    Samsudin, R.; Saad, P.; Shabri, A.

    2011-06-01

    This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables which work as the time series forecasting for the LSSVM model. Monthly river flow data from two stations, the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia were taken into consideration in the development of this hybrid model. The performance of this model was compared with the conventional artificial neural network (ANN) models, Autoregressive Integrated Moving Average (ARIMA), GMDH and LSSVM models using the long term observations of monthly river flow discharge. The root mean square error (RMSE) and coefficient of correlation (R) are used to evaluate the models' performances. In both cases, the new hybrid model has been found to provide more accurate flow forecasts compared to the other models. The results of the comparison indicate that the new hybrid model is a useful tool and a promising new method for river flow forecasting.

  19. 3D plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2014-08-05

    A three-dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitation of current 3D marine acquisition geometries, a cylindrical-wave encoding is adopted for the narrow-azimuth streamer data. To account for the mispositioning of reflectors due to errors in the velocity model, a regularized LSM is devised so that each plane-wave or cylindrical-wave gather gives rise to an individual migration image, and a regularization term is included to encourage the similarities between the migration images of similar encoding schemes. Both synthetic and field results show that: 1) plane-wave or cylindrical-wave encoding LSM can achieve both computational and IO savings compared to shot-domain LSM; however, plane-wave LSM is still about 5 times more expensive than plane-wave migration; 2) the regularized LSM is more robust compared to LSM with one reflectivity model common for all the plane-wave or cylindrical-wave gathers.

  20. Prediction of solubility parameters using partial least square regression.

    Science.gov (United States)

    Tantishaiyakul, Vimon; Worakul, Nimit; Wongpoowarak, Wibul

    2006-11-15

    The total solubility parameter (delta) values were effectively predicted by using computed molecular descriptors and multivariate partial least squares (PLS) statistics. The molecular descriptors in the derived models included heat of formation, dipole moment, molar refractivity, solvent-accessible surface area (SA), surface-bounded molecular volume (SV), unsaturated index (Ui), and hydrophilic index (Hy). The values of these descriptors were computed by the use of HyperChem 7.5, QSPR Properties module in HyperChem 7.5, and Dragon Web version. The other two descriptors, hydrogen bonding donor (HD), and hydrogen bond-forming ability (HB) were also included in the models. The final reduced model of the whole data set had R(2) of 0.853, Q(2) of 0.813, root mean squared error from the cross-validation of the training set (RMSEcv(tr)) of 2.096 and RMSE of calibration (RMSE(tr)) of 1.857. No outlier was observed from this data set of 51 diverse compounds. Additionally, the predictive power of the developed model was comparable to the well recognized systems of Hansen, van Krevelen and Hoftyzer, and Hoy.

  1. Efficient sparse kernel feature extraction based on partial least squares.

    Science.gov (United States)

    Dhanjal, Charanpal; Gunn, Steve R; Shawe-Taylor, John

    2009-08-01

    The presence of irrelevant features in training data is a significant obstacle for many machine learning tasks. One approach to this problem is to extract appropriate features and, often, one selects a feature extraction method based on the inference algorithm. Here, we formalize a general framework for feature extraction, based on Partial Least Squares, in which one can select a user-defined criterion to compute projection directions. The framework draws together a number of existing results and provides additional insights into several popular feature extraction methods. Two new sparse kernel feature extraction methods are derived under the framework, called Sparse Maximal Alignment (SMA) and Sparse Maximal Covariance (SMC), respectively. Key advantages of these approaches include simple implementation and a training time which scales linearly in the number of examples. Furthermore, one can project a new test example using only k kernel evaluations, where k is the output dimensionality. Computational results on several real-world data sets show that SMA and SMC extract features which are as predictive as those found using other popular feature extraction methods. Additionally, on large text retrieval and face detection data sets, they produce features which match the performance of the original ones in conjunction with a Support Vector Machine.

  2. A Novel Kernel for Least Squares Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    FENG Wei; ZHAO Yong-ping; DU Zhong-hua; LI De-cai; WANG Li-feng

    2012-01-01

    The extreme learning machine (ELM) has attracted much attention in recent years due to its fast convergence and good performance. Merging the ELM and the support vector machine is an important trend, thus yielding an ELM kernel. ELM kernel based methods are able to solve nonlinear problems by inducing an explicit mapping, compared with commonly-used kernels such as the Gaussian kernel. In this paper, the ELM kernel is extended to the least squares support vector regression (LSSVR), and ELM-LSSVR is proposed. ELM-LSSVR can be used to reduce the training and test time simultaneously without extra techniques such as sequential minimal optimization and pruning mechanisms. Moreover, the memory space required for training and testing is reduced. To confirm the efficacy and feasibility of the proposed ELM-LSSVR, experiments are reported to demonstrate that ELM-LSSVR gains an advantage in training and test time with comparable accuracy to other algorithms.

  3. Non-linear Least Squares Fitting in IDL with MPFIT

    CERN Document Server

    Markwardt, Craig B

    2009-01-01

    MPFIT is a port to IDL of the non-linear least squares fitting program MINPACK-1. MPFIT inherits the robustness of the original FORTRAN version of MINPACK-1, but is optimized for performance and convenience in IDL. In addition to the main fitting engine, MPFIT, several specialized functions are provided to fit 1-D curves and 2-D images; 1-D and 2-D peaks; and interactive fitting from the IDL command line. Several constraints can be applied to model parameters, including fixed constraints, simple bounding constraints, and "tying" the value to another parameter. Several data weighting methods are allowed, and the parameter covariance matrix is computed. Extensive diagnostic capabilities are available during the fit, via a call-back subroutine, and after the fit is complete. Several different forms of documentation are provided, including a tutorial, reference pages, and frequently asked questions. The package has been translated to C and Python as well. The full IDL and C packages can be found at http://purl.co...

  4. Non-parametric and least squares Langley plot methods

    Directory of Open Access Journals (Sweden)

    P. W. Kiedron

    2015-04-01

    Langley plots are used to calibrate sun radiometers, primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer–Lambert–Beer law $V = V_0 e^{-\tau \cdot m}$, where a plot of ln(V) (voltage) vs. m (air mass) yields a straight line with intercept ln(V_0). This ln(V_0) can subsequently be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V_0) values are smoothed and interpolated with median and mean moving-window filters.
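
    The basic least-squares Langley regression is a straight-line fit of ln(V) against air mass m, whose intercept is ln(V_0) and whose slope is -τ. A minimal sketch on synthetic data (the robust and non-parametric variants the paper compares are not shown, and the true values and noise level are made up):

```python
# Ordinary least-squares Langley regression, ln(V) = ln(V0) - tau * m, on synthetic
# morning data; the robust and non-parametric variants the paper compares are not
# shown, and the true values and noise level are made up.
import numpy as np

rng = np.random.default_rng(6)
m = np.linspace(2.0, 6.0, 60)                  # air-mass values during the morning
lnV0_true, tau_true = 1.20, 0.15
lnV = lnV0_true - tau_true * m + 0.01 * rng.standard_normal(m.size)

A = np.c_[np.ones_like(m), -m]                 # columns for [ln(V0), tau]
(lnV0_hat, tau_hat), *_ = np.linalg.lstsq(A, lnV, rcond=None)
print(f"ln(V0) = {lnV0_hat:.3f} (true {lnV0_true}), tau = {tau_hat:.3f} (true {tau_true})")
```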

  5. Prediction of survival time with the partial least squares method

    Directory of Open Access Journals (Sweden)

    PANDE PUTU BUDI KUSUMA

    2013-03-01

    Coronary heart disease is caused by an accumulation of fat on the inside walls of the blood vessels of the heart (the coronary arteries). The factors that lead to the occurrence of coronary heart disease are dominated by the unhealthy lifestyles of patients, and the survival times of patients differ. The objective of this research is to predict the survival time of patients with coronary heart disease by taking into account explanatory variables analyzed with the Partial Least Squares (PLS) method. The PLS method is used to resolve multiple regression analysis when specific problems such as multicollinearity and microarray data arise. The purpose of the PLS method is to predict the explanatory variables with multiple response variables so as to produce a more accurate predictive value. The results of this research showed that the predicted survival for the three samples of patients with coronary heart disease had an average of 13 days, with an RMSEP (error) value of 1.526, which means that the results of this study are not much different from the predictions in the field of medicine. This is consistent with the medical view that the average survival for patients with coronary heart disease is 13 days.

  6. Parsimonious extreme learning machine using recursive orthogonal least squares.

    Science.gov (United States)

    Wang, Ning; Er, Meng Joo; Han, Min

    2014-10-01

    Novel constructive and destructive parsimonious extreme learning machines (CP- and DP-ELM) are proposed in this paper. By virtue of the proposed ELMs, parsimonious structure and excellent generalization of multi-input multi-output single hidden-layer feedforward networks (SLFNs) are obtained. The proposed ELMs are developed by innovative decomposition of the recursive orthogonal least squares procedure into sequential partial orthogonalization (SPO). The salient features of the proposed approaches are as follows: 1) Initial hidden nodes are randomly generated by the ELM methodology and recursively orthogonalized into an upper triangular matrix with dramatic reduction in matrix size; 2) the constructive SPO in the CP-ELM focuses on the partial matrix with the subcolumn of the selected regressor including nonzeros as the first column while the destructive SPO in the DP-ELM operates on the partial matrix including elements determined by the removed regressor; 3) termination criteria for CP- and DP-ELM are simplified by the additional residual error reduction method; and 4) the output weights of the SLFN need not be solved in the model selection procedure and are instead derived from the final upper triangular equation by backward substitution. Both single- and multi-output real-world regression data sets are used to verify the effectiveness and superiority of the CP- and DP-ELM in terms of parsimonious architecture and generalization accuracy. Innovative applications to nonlinear time-series modeling demonstrate superior identification results.

  7. A pruning method for the recursive least squared algorithm.

    Science.gov (United States)

    Leung, C S; Wong, K W; Sum, P F; Chan, L W

    2001-03-01

    The recursive least squared (RLS) algorithm is an effective online training method for neural networks. However, its conjunction with weight decay and pruning has not been well studied. This paper elucidates how generalization ability can be improved by selecting an appropriate initial value of the error covariance matrix in the RLS algorithm. Moreover, how pruning of neural networks can benefit from the final value of the error covariance matrix is also investigated. Our study found that the RLS algorithm is implicitly a weight decay method, where the weight decay effect is controlled by the initial value of the error covariance matrix; and that the inverse of the error covariance matrix is approximately equal to the Hessian matrix of the network being trained. We propose that neural networks are first trained by the RLS algorithm and then some unimportant weights are removed based on the approximate Hessian matrix. Simulation results show that our approach is an effective training and pruning method for neural networks.

  8. HASM-AD Algorithm Based on the Sequential Least Squares

    Institute of Scientific and Technical Information of China (English)

    WANG Shihai; YUE Tianxiang

    2010-01-01

    The HASM (high accuracy surface modeling) technique is based on the fundamental theory of surfaces, which has been proved to improve the interpolation accuracy in surface fitting. However, the integral iterative solution in previous studies resulted in high temporal complexity in computation and huge memory usage, so that it became difficult to put the technique into application, especially for large-scale datasets. In this study, an innovative model (HASM-AD) is developed based on sequential least squares and data adjustment theory. Sequential division is adopted in the technique, so that linear equations can be divided into groups to be processed in sequence, with the temporal complexity of the computation greatly reduced. The experiment indicates that the HASM-AD technique surpasses the traditional spatial interpolation methods in accuracy. Also, the cross-validation result proves the same conclusion for the spatial interpolation of soil pH with data sampled in Jiangxi province. Moreover, it is demonstrated in the study that the HASM-AD technique significantly reduces the computational complexity and lessens memory usage in computation.

  9. Least-squares fit of a linear combination of functions

    Directory of Open Access Journals (Sweden)

    Niraj Upadhyay

    2013-12-01

    Full Text Available We propose that given a data-set $S=\{(x_i,y_i)\mid i=1,2,{\dots}n\}$ and real-valued functions $\{f_\alpha(x)\mid\alpha=1,2,{\dots}m\}$, the least-squares fit vector $A=\{a_\alpha\}$ for $y=\sum_\alpha a_{\alpha}f_\alpha(x)$ is $A = (F^TF)^{-1}F^TY$, where $[F_{i\alpha}]=[f_\alpha(x_i)]$. We test this formalism by deriving the algebraic expressions of the regression coefficients in $y = ax + b$ and in $y = ax^2 + bx + c$. As a practical application, we successfully arrive at the coefficients in the semi-empirical mass formula of nuclear physics. The formalism is {\it generic} - it has the potential of being applicable to any {\it type} of $\{x_i\}$ as long as there exist appropriate $\{f_\alpha\}$. The method can be exploited with a CAS or an object-oriented language and is excellently suitable for parallel-processing.
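
    A quick numerical check of the closed form $A = (F^TF)^{-1}F^TY$, applied to the quadratic example $y = ax^2 + bx + c$ mentioned in the abstract (the data below are made up for illustration):

      import numpy as np

      # Made-up data approximately following y = 2x^2 - 3x + 1 plus noise
      x = np.linspace(-2, 2, 21)
      rng = np.random.default_rng(1)
      y = 2 * x**2 - 3 * x + 1 + rng.normal(scale=0.1, size=x.size)

      # Basis functions f_alpha(x) for y = a*x^2 + b*x + c
      F = np.column_stack([x**2, x, np.ones_like(x)])   # F[i, alpha] = f_alpha(x_i)

      # Closed form A = (F^T F)^{-1} F^T Y; np.linalg.lstsq is the numerically safer route
      A_normal = np.linalg.solve(F.T @ F, F.T @ y)
      A_lstsq, *_ = np.linalg.lstsq(F, y, rcond=None)
      print(A_normal, A_lstsq)   # both approximately [ 2, -3, 1 ]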

  10. Suppressing Anomalous Localized Waffle Behavior in Least Squares Wavefront Reconstructors

    Energy Technology Data Exchange (ETDEWEB)

    Gavel, D

    2002-10-08

    A major difficulty with wavefront slope sensors is their insensitivity to certain phase aberration patterns, the classic example being the waffle pattern in the Fried sampling geometry. As the number of degrees of freedom in AO systems grows larger, the possibility of troublesome waffle-like behavior over localized portions of the aperture is becoming evident. Reconstructor matrices have associated with them, either explicitly or implicitly, an orthogonal mode space over which they operate, called the singular mode space. If not properly preconditioned, the reconstructor's mode set can consist almost entirely of modes that each have some localized waffle-like behavior. In this paper we analyze the behavior of least-squares reconstructors with regard to their mode spaces. We introduce a new technique that is successful in producing a mode space that segregates the waffle-like behavior into a few ''high order'' modes, which can then be projected out of the reconstructor matrix. This technique can be adapted so as to remove any specific modes that are undesirable in the final reconstructor (such as piston, tip, and tilt for example) as well as suppress (the more nebulously defined) localized waffle behavior.

  11. Neither fixed nor random: weighted least squares meta-regression.

    Science.gov (United States)

    Stanley, T D; Doucouliagos, Hristos

    2016-06-20

    Our study revisits and challenges two core conventional meta-regression estimators: the prevalent use of 'mixed-effects' or random-effects meta-regression analysis and the correction of standard errors that defines fixed-effects meta-regression analysis (FE-MRA). We show how and explain why an unrestricted weighted least squares MRA (WLS-MRA) estimator is superior to conventional random-effects (or mixed-effects) meta-regression when there is publication (or small-sample) bias, is as good as FE-MRA in all cases, and is better than fixed effects in most practical applications. Simulations and statistical theory show that WLS-MRA provides satisfactory estimates of meta-regression coefficients that are practically equivalent to mixed effects or random effects when there is no publication bias. When there is publication selection bias, WLS-MRA always has smaller bias than mixed effects or random effects. In practical applications, an unrestricted WLS meta-regression is likely to give practically equivalent or superior estimates to fixed-effects, random-effects, and mixed-effects meta-regression approaches. However, random-effects meta-regression remains viable and perhaps somewhat preferable if selection for statistical significance (publication bias) can be ruled out and when random, additive normal heterogeneity is known to directly affect the 'true' regression coefficient. Copyright © 2016 John Wiley & Sons, Ltd.

  12. Nonlinear least-squares data fitting in Excel spreadsheets.

    Science.gov (United States)

    Kemmer, Gerdi; Keller, Sandro

    2010-02-01

    We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
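
    The protocol above is spreadsheet-based; as a rough counterpart outside Excel (not the authors' protocol), the sketch below minimizes the same sum of squared residuals for the Michaelis-Menten model using SciPy. The substrate concentrations, rates and starting values are invented:

      import numpy as np
      from scipy.optimize import least_squares

      # Invented substrate concentrations S and measured initial rates v
      S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
      v = np.array([0.29, 0.48, 0.70, 0.92, 1.05, 1.14])

      def residuals(p, S, v):
          vmax, km = p
          return v - vmax * S / (km + S)   # Michaelis-Menten model

      # Same idea as the Solver step: adjust (Vmax, Km) to minimize the sum of squared residuals
      fit = least_squares(residuals, x0=[1.0, 1.0], args=(S, v))
      vmax, km = fit.x
      print(f"Vmax = {vmax:.3f}, Km = {km:.3f}, SSR = {np.sum(fit.fun**2):.4f}")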

  13. Reconciling alternate methods for the determination of charge distributions: A probabilistic approach to high-dimensional least-squares approximations

    CERN Document Server

    Champagnat, Nicolas; Faou, Erwan

    2010-01-01

    We propose extensions and improvements of the statistical analysis of distributed multipoles (SADM) algorithm put forth by Chipot et al. in [6] for the derivation of distributed atomic multipoles from the quantum-mechanical electrostatic potential. The method is mathematically extended to general least-squares problems and provides an alternative approximation method in cases where the original least-squares problem is computationally not tractable, either because of its ill-posedness or its high-dimensionality. The solution is approximated employing a Monte Carlo method that takes the average of a random variable defined as the solutions of random small least-squares problems drawn as subsystems of the original problem. The conditions that ensure convergence and consistency of the method are discussed, along with an analysis of the computational cost in specific instances.
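
    The core idea above is to approximate a large or ill-posed least-squares solution by averaging the solutions of many randomly drawn small subproblems. The toy sketch below illustrates that idea by subsampling rows of an overdetermined system; it is not the SADM algorithm of the paper, and the subset size and number of draws are arbitrary choices:

      import numpy as np

      rng = np.random.default_rng(42)
      n_rows, n_cols = 500, 8
      A = rng.normal(size=(n_rows, n_cols))
      x_true = rng.normal(size=n_cols)
      b = A @ x_true + rng.normal(scale=0.05, size=n_rows)

      # Monte Carlo estimate: average the solutions of random small least-squares subproblems
      n_draws, subset = 200, 40
      solutions = []
      for _ in range(n_draws):
          rows = rng.choice(n_rows, size=subset, replace=False)
          x_sub, *_ = np.linalg.lstsq(A[rows], b[rows], rcond=None)
          solutions.append(x_sub)
      x_mc = np.mean(solutions, axis=0)

      x_full, *_ = np.linalg.lstsq(A, b, rcond=None)
      print(np.linalg.norm(x_mc - x_full))   # the averaged estimate tracks the full solution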

  14. Fast Dating Using Least-Squares Criteria and Algorithms.

    Science.gov (United States)

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that

  15. The moving-least-squares-particle hydrodynamics method (MLSPH)

    Energy Technology Data Exchange (ETDEWEB)

    Dilts, G. [Los Alamos National Lab., NM (United States)]

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.

  16. Topology testing of phylogenies using least squares methods

    Directory of Open Access Journals (Sweden)

    Wróbel Borys

    2006-12-01

    Full Text Available Abstract Background The least squares (LS) method for constructing confidence sets of trees is closely related to LS tree building methods, in which the goodness of fit of the distances measured on the tree (patristic distances) to the observed distances between taxa is the criterion used for selecting the best topology. The generalized LS (GLS) method for topology testing is often frustrated by the computational difficulties in calculating the covariance matrix and its inverse, which in practice requires approximations. The weighted LS (WLS) allows for a more efficient albeit approximate calculation of the test statistic by ignoring the covariances between the distances. Results The goal of this paper is to assess the applicability of the LS approach for constructing confidence sets of trees. We show that the approximations inherent to the WLS method did not affect negatively the accuracy and reliability of the test both in the analysis of biological sequences and DNA-DNA hybridization data (for which character-based testing methods cannot be used). On the other hand, we report several problems for the GLS method, at least for the available implementation. For many data sets of biological sequences, the GLS statistic could not be calculated. For some data sets for which it could, the GLS method included all the possible trees in the confidence set despite a strong phylogenetic signal in the data. Finally, contrary to WLS, for simulated sequences GLS showed undercoverage (frequent non-inclusion of the true tree in the confidence set). Conclusion The WLS method provides a computationally efficient approximation to the GLS useful especially in exploratory analyses of confidence sets of trees, when assessing the phylogenetic signal in the data, and when other methods are not available.

  17. Least-squares reverse time migration in elastic media

    Science.gov (United States)

    Ren, Zhiming; Liu, Yang; Sen, Mrinal K.

    2017-02-01

    Elastic reverse time migration (RTM) can yield accurate subsurface information (e.g. PP and PS reflectivity) by imaging the multicomponent seismic data. However, the existing RTM methods are still insufficient to provide satisfactory results because of the finite recording aperture, limited bandwidth and imperfect illumination. Besides, the P- and S-wave separation and the polarity reversal correction are indispensable in conventional elastic RTM. Here, we propose an iterative elastic least-squares RTM (LSRTM) method, in which the imaging accuracy is improved gradually with iteration. We first use the Born approximation to formulate the elastic de-migration operator, and employ the Lagrange multiplier method to derive the adjoint equations and gradients with respect to reflectivity. Then, an efficient inversion workflow (only four forward computations needed in each iteration) is introduced to update the reflectivity. Synthetic and field data examples reveal that the proposed LSRTM method can obtain higher-quality images than the conventional elastic RTM. We also analyse the influence of model parametrizations and misfit functions in elastic LSRTM. We observe that Lamé parameters, velocity and impedance parametrizations have similar and plausible migration results when the structures of different models are correlated. For an uncorrelated subsurface model, velocity and impedance parametrizations produce fewer artefacts caused by parameter crosstalk than the Lamé coefficient parametrization. Correlation- and convolution-type misfit functions are effective when amplitude errors are involved and the source wavelet is unknown, respectively. Finally, we discuss the dependence of elastic LSRTM on migration velocities and its antinoise ability. Imaging results determine that the new elastic LSRTM method performs well as long as the low-frequency components of migration velocities are correct. The quality of images of elastic LSRTM degrades with increasing noise.

  18. Application of least square method for muscular strength estimation in hand motion recognition using surface EMG.

    Science.gov (United States)

    Nakano, Takemi; Nagata, Kentaro; Yamada, Masafumi; Magatani, Kazushige

    2009-01-01

    In this study, we describe the application of the least squares method to muscular strength estimation in hand motion recognition based on the surface electromyogram (SEMG). Although muscular strength could be evaluated in various ways, grasp force is used here as the index of muscular strength. SEMG, which is measured from the skin surface, is today widely used as a control signal for many devices, because it is one of the most important biological signals in which human motion intention is directly reflected, and various devices using SEMG have been reported by many researchers. We call devices that use SEMG as a control signal SEMG systems. In an SEMG system, achieving high recognition accuracy is an important requirement, and conventional SEMG systems have mainly focused on this objective. Although it is also important to estimate the muscular strength of motions, most systems cannot detect muscle power. The ability to estimate muscular strength is a very important factor in controlling SEMG systems. Thus, the objective of this study is to develop a muscular strength estimation method based on the least squares method, and to reflect the measured power in the controlled object. Since SEMG is known to be formed by physiological variations in the state of muscle fiber membranes, it should be related to grasp force. We applied the least-squares method to construct a relationship between SEMG and grasp force. In order to construct an effective evaluation model, four SEMG measurement locations, chosen in consideration of individual differences, were decided by the Monte Carlo method.

  19. Integer least-squares theory for the GNSS compass

    Science.gov (United States)

    Teunissen, P. J. G.

    2010-07-01

    Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategies. It extends current unconstrained ILS theory to the nonlinearly constrained case, an extension that is particularly suited for precise attitude determination. As opposed to current practice, our method does proper justice to the a priori given information. The nonlinear baseline constraint is fully integrated into the ambiguity objective function, thereby receiving a proper weighting in its minimization and providing guidance for the integer search. Different search strategies are developed to compute exact and approximate solutions of the nonlinear constrained ILS problem. Their applicability depends on the strength of the GNSS model and on the length of the baseline. Two of the presented search strategies, a global and a local one, are based on the use of an ellipsoidal search space. This has the advantage that standard methods can be applied. The global ellipsoidal search strategy is applicable to GNSS models of sufficient strength, while the local ellipsoidal search strategy is applicable to models for which the baseline lengths are not too small. We also develop search strategies for the most challenging case, namely when the curvature of the non-ellipsoidal ambiguity search space needs to be taken into account. Two such strategies are presented, an approximate one and a rigorous, somewhat more complex, one. The approximate one is applicable when the fixed baseline variance matrix is close to diagonal. Both methods make use of a search and shrink strategy. The rigorous solution is efficiently obtained by means of a search and shrink strategy that uses non-quadratic, but easy-to-evaluate, bounding functions of the ambiguity objective function. The theory

  20. Linear least squares compartmental-model-independent parameter identification in PET.

    Science.gov (United States)

    Thie, J A; Smith, G T; Hubner, K F

    1997-02-01

    A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity and plasma integrals, all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations, due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoiding risks of convergence failures or false solutions in iterative least squares, and providing various visualizations of the uptake process by straight line graphical displays. Multiparameter model-independent analyses of less well understood systems are also made possible.
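
    As a generic illustration of the idea (measured activity written as a linear combination of its own integral and the plasma integral, with macroparameters as regression coefficients), the sketch below fits such coefficients with ordinary linear least squares on a one-tissue-compartment example of my own construction; the input curve, rate constants and noise level are all hypothetical and are not the authors' model:

      import numpy as np

      def cumtrapz(y, t):
          # cumulative trapezoidal integral of y(t), same length as y
          return np.concatenate([[0.0], np.cumsum(0.5 * (y[1:] + y[:-1]) * np.diff(t))])

      t = np.linspace(0.0, 60.0, 241)                 # minutes, synthetic dynamic scan
      Cp = 6.0 * np.exp(-0.15 * t)                    # hypothetical plasma input curve
      K1, k2 = 0.10, 0.05                             # hypothetical rate constants

      # Tissue curve from dCt/dt = K1*Cp - k2*Ct (simple Euler integration of the toy model)
      Ct = np.zeros_like(t)
      dt = t[1] - t[0]
      for i in range(1, t.size):
          Ct[i] = Ct[i - 1] + dt * (K1 * Cp[i - 1] - k2 * Ct[i - 1])
      Ct_noisy = Ct + np.random.default_rng(3).normal(scale=0.01, size=t.size)

      # Integrated linear form: Ct(t) = K1*int(Cp) - k2*int(Ct), linear in the macroparameters
      X = np.column_stack([cumtrapz(Cp, t), cumtrapz(Ct_noisy, t)])
      coef, *_ = np.linalg.lstsq(X, Ct_noisy, rcond=None)
      print(f"K1 ~ {coef[0]:.3f}, k2 ~ {-coef[1]:.3f}")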

  1. Modeling geochemical datasets for source apportionment: Comparison of least square regression and inversion approaches.

    Digital Repository Service at National Institute of Oceanography (India)

    Tripathy, G.R.; Das, Anirban.


  2. Frequency domain analysis and synthesis of lumped parameter systems using nonlinear least squares techniques

    Science.gov (United States)

    Hays, J. R.

    1969-01-01

    Lumped-parameter system models are simplified and computationally advantageous in the frequency domain of linear systems. A nonlinear least squares computer program finds the least-squares best estimate for any number of parameters in an arbitrarily complicated model.

  3. Parallel Implementation of a Least-Squares Spectral Element Solver for Incomressible Flow Problems

    NARCIS (Netherlands)

    Nool, M.; Proot, M.M.J.; Sloot, P.M.A.; Kenneth Tan, C.J.; Dongarra, J.J.; Hoekstra, A.G.

    2002-01-01

    Least-squares spectral element methods are based on two important and successful numerical methods: spectral/hp element methods and least-squares finite element methods. Least-squares methods lead to symmetric and positive definite algebraic systems which circumvent the Ladyzhenskaya-Babuška-Brezzi (LBB) condition.

  4. Application of least-squares spectral element solver methods to incompressible flow problems

    NARCIS (Netherlands)

    Proot, M.M.J.; Gerritsma, M.I.; Nool, M.

    2003-01-01

    Least-squares spectral element methods are based on two important and successful numerical methods: spectral/hp element methods and least-squares finite element methods. In this respect, least-squares spectral element methods are very powerful since they combine the generality of finite element methods with the accuracy of spectral methods.

  5. Least Square Methods for Solving Systems of Inequalities with Application to an Assignment Problem

    Science.gov (United States)

    1992-11-01

    problem using continuous methods and (2) solving systems of inequalities (and equalities) in a least square sense. The specific assignment problem has...linear equations, in a least square sense are developed. Common algorithmic approaches to solve nonlinear least square problems are adapted to solve

  6. A Novel Soft Sensor Modeling Approach Based on Least Squares Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    Feng Rui(冯瑞); Song Chunlin; Zhang Yanzhu; Shao Huihe

    2004-01-01

    Artificial Neural Networks (ANNs) such as radial basis function neural networks (RBFNNs) have been successfully used in soft sensor modeling. However, the generalization ability of conventional ANNs is not very good. For this reason, we present a novel soft sensor modeling approach based on Support Vector Machines (SVMs). Since standard SVMs are limited in speed and size when training on large data sets, we propose Least Squares Support Vector Machines (LS_SVMs) and apply them to soft sensor modeling. Systematic analysis is performed and the result indicates that the proposed method provides satisfactory performance with excellent approximation and generalization properties. Monte Carlo simulations show that our soft sensor modeling approach achieves performance superior to the conventional method based on RBFNNs.

  7. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    Science.gov (United States)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  8. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.
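
    Purely as an illustration of the idea of probing the signal-dependent error with the difference between two least-squares spline fits of different mesh size (this is not the paper's estimator, and the knot counts, test signal and noise level are arbitrary choices), a SciPy sketch:

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      rng = np.random.default_rng(7)
      x = np.linspace(0, 1, 400)
      signal = np.sin(2 * np.pi * x) + 0.3 * np.sin(6 * np.pi * x)
      y = signal + rng.normal(scale=0.05, size=x.size)

      def lsq_spline(n_knots):
          # cubic least-squares spline with n_knots equally spaced interior knots
          t = np.linspace(x[0], x[-1], n_knots + 2)[1:-1]
          return LSQUnivariateSpline(x, y, t, k=3)

      coarse, fine = lsq_spline(8), lsq_spline(16)

      # The difference between the two fits indicates where the signal-dependent (F) error is large;
      # the noise-dependent (R) error would be estimated separately from the noise statistics.
      f_error_proxy = np.abs(coarse(x) - fine(x))
      print(f_error_proxy.max())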

  9. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data

    Energy Technology Data Exchange (ETDEWEB)

    Zeng, Ying-Xu [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); Mjøs, Svein Are, E-mail: svein.mjos@kj.uib.no [Department of Chemistry, University of Bergen, PO Box 7803, N-5020 Bergen (Norway); David, Fabrice P.A. [Bioinformatics and Biostatistics Core Facility, School of Life Sciences, Ecole Polytechnique Fédérale de Lausanne (EPFL) and Swiss Institute of Bioinformatics (SIB), Lausanne (Switzerland); Schmid, Adrien W. [Proteomics Core Facility, Ecole Polytechnique Fédérale de Lausanne (EPFL), 1015 Lausanne (Switzerland)

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis. - Highlights: • A flexible strategy for analyzing MS and LC-MS data of lipid molecules is proposed. • Isotope distribution spectra of theoretically possible compounds were generated. • High resolution MS and LC-MS data were resolved by least squares spectral resolution. • The method proposed compounds that are likely to occur in the analyzed samples. • The proposed compounds matched results from manual interpretation of fragment spectra.

  10. Comparison of structural and least-squares lines for estimating geologic relations

    Science.gov (United States)

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal-predicting the dependent variable-OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. ?? 1990 International Association for Mathematical Geology.
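
    A small Monte Carlo sketch in the spirit of this comparison: ordinary least squares versus a structural (errors-in-both-variables) slope estimate with a known error-variance ratio. The formula used is the standard Deming-regression slope and stands in for the SA estimator; it is not necessarily the exact estimator of the paper, and the true line, error standard deviations and sample sizes are invented:

      import numpy as np

      rng = np.random.default_rng(0)
      a_true, b_true = 2.0, 1.0            # true relation y = a + b*x
      sx_err, sy_err = 0.5, 0.5            # error standard deviations in X and Y
      delta = sy_err**2 / sx_err**2        # error-variance ratio, assumed known

      n_rep, n = 2000, 50
      ols_slopes, struct_slopes = [], []
      for _ in range(n_rep):
          x_true = rng.uniform(0, 10, n)
          x = x_true + rng.normal(scale=sx_err, size=n)
          y = a_true + b_true * x_true + rng.normal(scale=sy_err, size=n)
          sxx, syy = np.var(x), np.var(y)
          sxy = np.cov(x, y, bias=True)[0, 1]
          ols_slopes.append(sxy / sxx)     # OLS slope, attenuated when X carries error
          struct_slopes.append((syy - delta * sxx +
                                np.sqrt((syy - delta * sxx) ** 2 + 4 * delta * sxy**2)) / (2 * sxy))

      # OLS is biased toward zero for the "true relation" goal; the structural estimate is not
      print(np.mean(ols_slopes), np.mean(struct_slopes))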

  11. NEGATIVE NORM LEAST-SQUARES METHODS FOR THE INCOMPRESSIBLE MAGNETOHYDRODYNAMIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Gao Shaoqin; Duan Huoyuan

    2008-01-01

    The purpose of this article is to develop and analyze least-squares approximations for the incompressible magnetohydrodynamic equations. The major advantage of the least-squares finite element method is that it is not subject to the so-called Ladyzhenskaya-Babuska-Brezzi (LBB) condition. The authors employ least-squares functionals which involve a discrete inner product related to the inner product in H^{-1}(Ω).

  12. Parameter Estimation of Jelinski-Moranda Model Based on Weighted Nonlinear Least Squares and Heteroscedasticity

    OpenAIRE

    Liu, Jingwei; Liu, Yi; Xu, Meizhi

    2015-01-01

    Parameter estimation method of Jelinski-Moranda (JM) model based on weighted nonlinear least squares (WNLS) is proposed. The formulae of resolving the parameter WNLS estimation (WNLSE) are derived, and the empirical weight function and heteroscedasticity problem are discussed. The effects of optimization parameter estimation selection based on maximum likelihood estimation (MLE) method, least squares estimation (LSE) method and weighted nonlinear least squares estimation (WNLSE) method are al...

  13. Research on Application of Regression Least Squares Support Vector Machine on Performance Prediction of Hydraulic Excavator

    Directory of Open Access Journals (Sweden)

    Zhan-bo Chen

    2014-01-01

    Full Text Available In order to improve the accuracy of performance prediction for hydraulic excavators, regression least squares support vector machines are applied. First, the mathematical model of the regression least squares support vector machine is studied, and then its algorithm is designed. Finally, a performance prediction simulation of a hydraulic excavator based on the regression least squares support vector machine is carried out, and the simulation results show that this method correctly predicts how the performance of a hydraulic excavator changes.

  14. CHEBYSHEV WEIGHTED NORM LEAST-SQUARES SPECTRAL METHODS FOR THE ELLIPTIC PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Sang Dong Kim; Byeong Chun Shin

    2006-01-01

    We develop and analyze a first-order system least-squares spectral method for the second-order elliptic boundary value problem with variable coefficients. We first analyze the Chebyshev weighted norm least-squares functional defined by the sum of the L^2_w- and H^{-1}_w-norms of the residual equations, and then we replace the negative norm by the discrete negative norm and analyze the discrete Chebyshev weighted least-squares method. The spectral convergence is derived for the proposed method. We also present various numerical experiments. The Legendre weighted least-squares method can be easily developed by following this paper.

  15. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    Science.gov (United States)

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis.

  16. Least-Squares Mirrorsymmetric Solution for Matrix Equations (AX=B, XC=D)

    Institute of Scientific and Technical Information of China (English)

    Fanliang Li; Xiyan Hu; Lei Zhang

    2006-01-01

    In this paper, least-squares mirrorsymmetric solution for matrix equations (AX =B, XC=D) and its optimal approximation is considered. With special expression of mirrorsymmetric matrices, a general representation of solution for the least-squares problem is obtained. In addition, the optimal approximate solution and some algorithms to obtain the optimal approximation are provided.

  17. A Least Square Finite Element Technique for Transonic Flow with Shock,

    Science.gov (United States)

    1977-08-22

    dimensional form. A least square finite element technique was used with a linearly interpolating polynomial to reduce the governing equation to a...partial differential equations by a system of ordinary differential equations. Using the least square finite element technique a computer program was

  18. Function Based Nonlinear Least Squares and Application to Jelinski--Moranda Software Reliability Model

    CERN Document Server

    Liu, Jingwei

    2011-01-01

    A function based nonlinear least squares estimation (FNLSE) method is proposed and investigated for parameter estimation of the Jelinski-Moranda software reliability model. FNLSE extends the potential fitting functions of traditional least squares estimation (LSE), and takes the logarithm transformed nonlinear least squares estimation (LogLSE) as a special case. A novel power transformation function based nonlinear least squares estimation (powLSE) is proposed and applied to the parameter estimation of the Jelinski-Moranda model. Solved with the Newton-Raphson method, both LogLSE and powLSE of the Jelinski-Moranda model are applied to mean time between failures (MTBF) predictions on six standard software failure time data sets. The experimental results demonstrate the effectiveness of powLSE with an optimal power index compared to the classical least-squares estimation (LSE), maximum likelihood estimation (MLE) and LogLSE in terms of the recursive relative error (RE) index and the Braun statistic index.

  19. Calculation of stratum surface principal curvature based on a moving least square method

    Institute of Scientific and Technical Information of China (English)

    LI Guo-qing; MENG Zhao-ping; MA Feng-shan; ZHAO Hai-jun; DING De-min; LIU Qin; WANG Cheng

    2008-01-01

    With the east section of the Changji sag Zhunger Basin as a case study, both a principal curvature method and a moving least square method are elaborated. The moving least square method is introduced, for the first time, to fit a stratum surface. The results show that, using the same-degree base function, compared with a traditional least square method, the moving least square method can produce lower fitting errors, the fitting surface can describe the morphological characteristics of stratum surfaces more accurately and the principal curvature values vary within a wide range and may be more suitable for the prediction of the distribution of structural fractures. The moving least square method could be useful in curved surface fitting and stratum curvature analysis.

  20. Methods for Least Squares Data Smoothing by Adjustment of Divided Differences

    Science.gov (United States)

    Demetriou, I. C.

    2008-09-01

    A brief survey is presented for the main methods that are used in least squares data smoothing by adjusting the signs of divided differences of the smoothed values. The most distinctive feature of the smoothing approach is that it provides automatically a piecewise monotonic or a piecewise convex/concave fit to the data. The data are measured values of a function of one variable that contain random errors. As a consequence of the errors, the number of sign alterations in the sequence of mth divided differences is usually unacceptably large, where m is a prescribed positive integer. Therefore, we make the least sum of squares change to the measurements by requiring the sequence of the divided differences of order m to have at most k-1 sign changes, for some positive integer k. Although, it is a combinatorial problem, whose solution can require about O(nk) quadratic programming calculations in n variables and n-m constraints, where n is the number of data, very efficient algorithms have been developed for the cases when m = 1 or m = 2 and k is arbitrary, as well as when m>2 for small values of k. Attention is paid to the purpose of each method instead of to its details. Some software packages make the methods publicly accessible through library systems.

  1. An Effective Hybrid Artificial Bee Colony Algorithm for Nonnegative Linear Least Squares Problems

    Directory of Open Access Journals (Sweden)

    Xiangyu Kong

    2014-07-01

    Full Text Available An effective hybrid artificial bee colony algorithm is proposed in this paper for nonnegative linear least squares problems. To further improve the performance of the algorithm, an orthogonal initialization method is employed to generate the initial swarm. Furthermore, to balance the exploration and exploitation abilities, a new search mechanism is designed. The performance of the algorithm is verified using 27 benchmark functions and 5 nonnegative linear least squares test problems, and comparative analyses are given between the proposed algorithm and other swarm intelligence algorithms. Numerical results demonstrate that the proposed algorithm displays high performance compared with other algorithms on global optimization problems and nonnegative linear least squares problems.

  2. A least squares finite element scheme for transonic flow around harmonically oscillating airfoils

    Science.gov (United States)

    Cox, C. L.; Fix, G. J.; Gunzburger, M. D.

    1983-01-01

    The present investigation shows that a finite element scheme with a weighted least squares variational principle is applicable to the problem of transonic flow around a harmonically oscillating airfoil. For the flat plate case, numerical results compare favorably with the exact solution. The obtained numerical results for the transonic problem, for which an exact solution is not known, have the characteristics of known experimental results. It is demonstrated that the performance of the employed numerical method is independent of equation type (elliptic or hyperbolic) and frequency. The weighted least squares principle allows the appropriate modeling of singularities, which is not possible with ordinary least squares.

  3. Least-squares methods involving the H{sup -1} inner product

    Energy Technology Data Exchange (ETDEWEB)

    Pasciak, J.

    1996-12-31

    Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H{sup -1} norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H{sup -1} inner product.

  4. Multilevel solvers of first-order system least-squares for Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Chen-Yao G. [National Chung Cheng Univ., Chia-Yi (Taiwan, Province of China)

    1996-12-31

    Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined to be the sum of the L{sup 2}-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

  5. On the interpretation of least squares collocation. [for geodetic data reduction

    Science.gov (United States)

    Tapley, B. D.

    1976-01-01

    A demonstration is given of the strict mathematical equivalence between the least squares collocation and the classical minimum variance estimates. It is shown that the least squares collocation algorithms are a special case of the modified minimum variance estimates. The computational efficiency of several forms of the general minimum variance estimation algorithm is discussed. It is pointed out that for certain geodetic applications the least square collocation algorithm may provide a more efficient formulation of the results from the point of view of the computations required.

  6. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    Science.gov (United States)

    Borodachev, S. M.

    2016-06-01

    The simple derivation of recursive least squares (RLS) method equations is given as special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates application of RLS to multicollinearity problem.
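
    A minimal sketch of the resulting recursion, viewed exactly as described above: a Kalman filter for a constant state (the regression coefficients), with each new regressor row acting as the changing observation matrix. The data, the initial covariance and the unit measurement-noise variance below are invented for illustration:

      import numpy as np

      rng = np.random.default_rng(5)
      n, p = 200, 3
      X = rng.normal(size=(n, p))
      theta_true = np.array([1.0, -2.0, 0.5])
      y = X @ theta_true + rng.normal(scale=0.1, size=n)

      # Recursive least squares: Kalman filter for a constant state (the coefficients)
      theta = np.zeros(p)
      P = 1e3 * np.eye(p)          # large initial covariance = weak prior on the coefficients
      for xk, yk in zip(X, y):
          gain = P @ xk / (1.0 + xk @ P @ xk)       # Kalman gain (measurement noise variance taken as 1)
          theta = theta + gain * (yk - xk @ theta)  # state update with the innovation
          P = P - np.outer(gain, xk @ P)            # covariance update

      print(theta)   # converges to the batch least-squares estimate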

  7. Iterative least-squares solvers for the Navier-Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Bochev, P. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    In recent years finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context least-squares methods offer significant theoretical and practical advantages in the algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.

  8. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Manteuffel, T.A [Univ. of Colorado, Boulder, CO (United States); Ressel, K.J. [Interdisciplinary Project Center for Supercomputing, Zurich (Switzerland); Starkes, G. [Universtaet Karlsruhe (Germany)

    1996-12-31

    The least-squares finite element framework for the neutron transport equation introduced in earlier work is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term using trilinear finite elements on a uniform tessellation into cubes.

  9. LEAST-SQUARES MIXED FINITE ELEMENT METHODS FOR NONLINEAR PARABOLIC PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Dan-ping Yang

    2002-01-01

    Two least-squares mixed finite element schemes are formulated to solve the initial-boundary value problem of a nonlinear parabolic partial differential equation, and the convergence of these schemes is analyzed.

  10. 8th International Conference on Partial Least Squares and Related Methods

    CERN Document Server

    Vinzi, Vincenzo; Russolillo, Giorgio; Saporta, Gilbert; Trinchera, Laura

    2016-01-01

    This volume presents state of the art theories, new developments, and important applications of Partial Least Square (PLS) methods. The text begins with the invited communications of current leaders in the field who cover the history of PLS, an overview of methodological issues, and recent advances in regression and multi-block approaches. The rest of the volume comprises selected, reviewed contributions from the 8th International Conference on Partial Least Squares and Related Methods held in Paris, France, on 26-28 May, 2014. They are organized in four coherent sections: 1) new developments in genomics and brain imaging, 2) new and alternative methods for multi-table and path analysis, 3) advances in partial least square regression (PLSR), and 4) partial least square path modeling (PLS-PM) breakthroughs and applications. PLS methods are very versatile methods that are now used in areas as diverse as engineering, life science, sociology, psychology, brain imaging, genomics, and business among both academics ...

  11. Simultaneous Spectrophotometric Determination of Three Components Including Deoxyschizandrin by Partial Least Squares Regression

    Institute of Scientific and Technical Information of China (English)

    ZHANG Liqing; WU Xiaohua

    2005-01-01

    Computer-assisted partial least squares is introduced to simultaneously determine the contents of Deoxyschizandrin, Schisandrin and γ-Schisandrin in the extracted solution of wuweizi. Regression analysis of the experimental results shows that the average recovery of each component lies in the range from 98.9% to 110.3%, which means that partial least squares regression spectrophotometry can circumvent the overlap of the absorption spectra of multiple components, so that satisfactory results can be obtained without any sample pre-separation.

  12. Global Convergence of Adaptive Generalized Predictive Controller Based on Least Squares Algorithm

    Institute of Scientific and Technical Information of China (English)

    张兴会; 陈增强; 袁著祉

    2003-01-01

    Some papers on stochastic adaptive control schemes have established convergence of algorithms that use least-squares parameter estimates. With the widespread application of GPC, global convergence has become a key problem in automatic control theory. However, global convergence of GPC has not yet been established for algorithms that compute a least squares iteration. A generalized model of adaptive generalized predictive control is presented, and its global convergence is given on the basis of estimating the parameters of GPC by a least squares algorithm.

  13. ON STABLE PERTURBATIONS OF THE STIFFLY WEIGHTED PSEUDOINVERSE AND WEIGHTED LEAST SQUARES PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Mu-sheng Wei

    2005-01-01

    In this paper we study perturbations of the stiffly weighted pseudoinverse (W^{1/2}A)^+W^{1/2} and the related stiffly weighted least squares problem, where both the matrices A and W are given with W positive diagonal and severely stiff. We show that the perturbations to the stiffly weighted pseudoinverse and the related stiffly weighted least squares problem are stable, if and only if the perturbed matrices Â = A + δA satisfy several row rank preserving conditions.

  14. SUPERCONVERGENCE OF LEAST-SQUARES MIXED FINITE ELEMENTS FOR ELLIPTIC PROBLEMS ON TRIANGULATION

    Institute of Scientific and Technical Information of China (English)

    陈艳萍; 杨菊娥

    2003-01-01

    In this paper, we present the least-squares mixed finite element method and investigate superconvergence phenomena for second order elliptic boundary-value problems over triangulations. On the basis of the L2-projection and some mixed finite element projections, we obtain the superconvergence result of least-squares mixed finite element solutions. This error estimate indicates an accuracy of O(h^{3/2}) if the lowest order Raviart-Thomas elements are employed.

  15. Solving method of generalized nonlinear dynamic least squares for data processing in building of digital mine

    Institute of Scientific and Technical Information of China (English)

    TAO Hua-xue (陶华学); GUO Jin-yun (郭金运)

    2003-01-01

    Data are very important for building the digital mine. They come from many sources and have different types and temporal states. Relations between one class of data and another, or between data and unknown parameters, are often nonlinear. The unknown parameters may be non-random or random, and the random parameters often vary dynamically with time. Therefore it is neither accurate nor reliable to process the data for building the digital mine with the classical least squares method or with common nonlinear least squares. A generalized nonlinear dynamic least squares method for processing data in building the digital mine is therefore put forward, together with the corresponding mathematical model. The generalized nonlinear least squares problem is more complex than the common nonlinear least squares problem and its solution is more difficult to obtain because the dimensions of the data and parameters are larger. A new solution model and method are therefore put forward to solve the generalized nonlinear dynamic least squares problem. In fact, the problem can be converted into two sub-problems, each with a single variable; that is, a complex problem can be separated and then solved. The dimension of the unknown parameters can thus be reduced by half, which simplifies the original high dimensional equations. The method lessens the computational load and opens up a new way to process data, with its many sources, types and temporal states, in building the digital mine.

  16. Interval partial least squares and moving window partial least squares in determining the enantiomeric composition of tryptophan by using UV-Vis spectroscopy

    Directory of Open Access Journals (Sweden)

    Jiao Long

    2016-01-01

    Full Text Available The application of interval partial least squares (IPLS) and moving window partial least squares (MWPLS) to the enantiomeric analysis of tryptophan (Trp) was investigated. A UV-Vis spectroscopy method for determining the enantiomeric composition of Trp was developed. The calibration model was built by using partial least squares (PLS), IPLS and MWPLS respectively. Leave-one-out cross validation and external test validation were used to assess the prediction performance of the established models. The validation results demonstrate that the established full-spectrum PLS model is impractical for quantifying the relationship between the spectral data and the enantiomeric composition of L-Trp. On the contrary, the developed IPLS and MWPLS models are both practicable for modeling this relationship. For the IPLS model, the root mean square relative error (RMSRE) of external test validation and leave-one-out cross validation is 4.03 and 6.50 respectively. For the MWPLS model, the RMSRE of external test validation and leave-one-out cross validation is 2.93 and 4.73 respectively. Obviously, the prediction accuracy of the MWPLS model is higher than that of the IPLS model. It is demonstrated that UV-Vis spectroscopy combined with MWPLS is a commendable method for determining the enantiomeric composition of Trp, and that MWPLS is superior to IPLS for selecting the spectral region in UV-Vis spectroscopy analysis.
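
    As a rough illustration of the moving-window idea summarized in this record (not the authors' implementation), the following Python sketch scans fixed-width spectral windows with scikit-learn's PLSRegression and keeps the window with the best cross-validated error. The synthetic spectra, the window width, the step size and the number of latent variables are all assumptions made for the example.

```python
# Minimal moving-window PLS sketch (not the authors' implementation).
# Assumptions: a spectra matrix X (samples x wavelengths) and a target y; the
# window width, step and number of PLS components are arbitrary choices.
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
X = rng.normal(size=(40, 200))                               # synthetic "spectra"
y = X[:, 80:100].mean(axis=1) + 0.05 * rng.normal(size=40)   # signal lives in one region

window, n_comp = 20, 2
best = (None, -np.inf)
for start in range(0, X.shape[1] - window + 1, 5):
    Xw = X[:, start:start + window]                          # spectral sub-window
    score = cross_val_score(PLSRegression(n_components=n_comp), Xw, y,
                            cv=5, scoring="neg_root_mean_squared_error").mean()
    if score > best[1]:
        best = (start, score)

print(f"best window starts at variable {best[0]}, CV-RMSE = {-best[1]:.4f}")
```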

  17. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    Science.gov (United States)

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

    An analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, yielding two TLSP variants (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. The four models are compared and analyzed from two aspects: absorbance prediction and concentration prediction. The results for a water-ethanol solution and an ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance and concentration prediction, obtain a smaller root mean square error of prediction than CLS. They can also greatly enhance the accuracy of the estimated pure component spectra. However, from the view of concentration prediction, the Wilcoxon signed rank test shows no statistically significant difference between each nonlinear model and CLS.

  18. Data libraries as a collaborative tool across Monte Carlo codes

    CERN Document Server

    Augelli, Mauro; Han, Mincheol; Hauf, Steffen; Kim, Chan-Hyeung; Kuster, Markus; Pia, Maria Grazia; Quintieri, Lina; Saracco, Paolo; Seo, Hee; Sudhakar, Manju; Eidenspointner, Georg; Zoglauer, Andreas

    2010-01-01

    The role of data libraries in Monte Carlo simulation is discussed. A number of data libraries currently in preparation are reviewed; their data are critically examined with respect to the state-of-the-art in the respective fields. Extensive tests with respect to experimental data have been performed for the validation of their content.

  19. Segmented targeted least squares estimator for material decomposition in multi bin PCXDs

    Science.gov (United States)

    Rajbhandary, Paurakh L.; Hsieh, Scott S.; Pelc, Norbert J.

    2014-03-01

    We present a fast, noise-efficient, and accurate estimator for material separation using photon-counting x-ray detectors (PCXDs) with multiple energy bin capability. The proposed targeted least squares estimator (TLSE) improves a previously proposed A-Table method by incorporating dynamic weighting that allows noise to be closer to the Cramér-Rao lower bound (CRLB) throughout the operating range. We explore Cartesian and average-energy segmentation of the basis material space for TLSE, and show that iso-average-energy contours require fewer segments than Cartesian segmentation to achieve similar performance. We compare the iso-average-energy TLSE to other proposed estimators - including the gold standard maximum likelihood estimator (MLE) and the A-Table method - in terms of variance, bias and computational efficiency. The variance and bias of this estimator between 0 and 6 cm of aluminum and 0 and 50 cm of water are simulated with Monte Carlo methods. Iso-average-energy TLSE achieves an average variance within 2% of the CRLB, and a mean absolute error of (3.68 +/- 0.06) x 10^-6 cm. Using the same protocol, MLE showed a variance-to-CRLB ratio and average bias of 1.0186 +/- 0.0002 and (3.10 +/- 0.06) x 10^-6 cm, respectively, but was 50 times slower in our simulation. Compared to the A-Table method, TLSE gives a more homogeneous variance-to-CRLB profile in the operating region. We show that the variance-to-CRLB for TLSE is lower by as much as ~36% than the A-Table method in the peripheral region of operation (thin or thick objects). The TLSE is a computationally efficient and fast method for implementing material separation in PCXDs, with performance parameters comparable to the MLE.

  20. The possibilities of least-squares migration of internally scattered seismic energy

    KAUST Repository

    Aldawood, Ali

    2015-05-26

    Approximate images of the earth’s subsurface structures are usually obtained by migrating surface seismic data. Least-squares migration, under the single-scattering assumption, is used as an iterative linearized inversion scheme to suppress migration artifacts, deconvolve the source signature, mitigate the acquisition fingerprint, and enhance the spatial resolution of migrated images. The problem with least-squares migration of primaries, however, is that it may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults or salt flanks. To alleviate this problem, we adopted a linearized inversion framework to migrate internally scattered energy. We apply the least-squares migration of first-order internal multiples to image subsurface vertical fault planes. Tests on synthetic data demonstrated the ability of the proposed method to resolve vertical fault planes, which are poorly illuminated by the least-squares migration of primaries only. The proposed scheme is robust in the presence of white Gaussian observational noise and in the case of imaging the fault planes using inaccurate migration velocities. Our results suggested that the proposed least-squares imaging, under the double-scattering assumption, still retrieved the vertical fault planes when imaging the scattered data despite a slight defocusing of these events due to the presence of noise or velocity errors.

  1. A Linear-correction Least-squares Approach for Geolocation Using FDOA Measurements Only

    Institute of Scientific and Technical Information of China (English)

    LI Jinzhou; GUO Fucheng; JIANG Wenli

    2012-01-01

    A linear-correction least-squares (LCLS) estimation procedure is proposed for geolocation using frequency difference of arrival (FDOA) measurements only. We first analyze the FDOA measurements and derive the Cramér-Rao lower bound (CRLB) of geolocation using FDOA measurements. Because the localization model is a nonlinear least squares (LS) estimator with a nonlinear constraint, a linearization method is used to convert the model into a linear least squares estimator with a nonlinear constraint. The Gauss-Newton iteration method is developed to solve the source localization problem. From the analysis of the Lagrange multiplier, the algorithm is a generalization of the linear-correction least squares estimation procedure to the case of geolocation using FDOA measurements only. The algorithm is compared with common least squares estimation; comparisons of their estimation accuracy against the CRLB are made, and the proposed method attains the CRLB. Simulation results are included to corroborate the theoretical development.

  2. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from Multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver - called pARMS [new version is version 3]. As part of this we have tested the code in complex settings - including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the

  3. Sensitivity analysis on chaotic dynamical system by Non-Intrusive Least Square Shadowing (NILSS)

    CERN Document Server

    Ni, Angxiu

    2016-01-01

    This paper develops the tangent Non-Intrusive Least Square Shadowing (NILSS) method, which computes sensitivities for chaotic dynamical systems. In NILSS, a tangent solution is represented as a linear combination of an inhomogeneous tangent solution and some homogeneous tangent solutions, and a least squares problem is then solved under this new representation. As a result, this new variant is easier to implement with existing solvers. For chaotic systems with large degrees of freedom but low-dimensional attractors, NILSS has low computational cost. NILSS is applied to two chaotic systems: the Lorenz 63 system and a CFD simulation of a backward-facing step. The results show that NILSS computes the correct derivative at a lower cost than the conventional Least Square Shadowing method and the conventional finite difference method.

  4. On the equivalence of Kalman filtering and least-squares estimation

    Science.gov (United States)

    Mysen, E.

    2017-01-01

    The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.
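
    The algebraic link described in this record can be seen in miniature for the simplest case, a constant state observed in white noise, where the Kalman filter recursion reproduces the least-squares (sample-mean) estimate. The sketch below is only that special case, with arbitrary noise levels; it does not cover the paper's generalization to processes with time-variable memory.

```python
# Sketch: for a static state observed in white noise, the Kalman filter update
# reproduces the (recursive) least-squares estimate. The noise level, data and
# diffuse prior variance are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(1)
x_true, r = 3.0, 0.5**2                  # true state, measurement variance
z = x_true + 0.5 * rng.normal(size=50)   # measurements

x_kf, p = 0.0, 1e6                       # Kalman filter, identity dynamics, no process noise
for zk in z:
    k = p / (p + r)                      # Kalman gain
    x_kf += k * (zk - x_kf)
    p *= (1 - k)

x_ls = z.mean()                          # batch least-squares estimate for this model
print(x_kf, x_ls)                        # agree up to the effect of the diffuse prior
```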

  5. Error Estimate and Adaptive Refinement in Mixed Discrete Least Squares Meshless Method

    Directory of Open Access Journals (Sweden)

    J. Amani

    2014-01-01

    Full Text Available The node moving and multistage node enrichment adaptive refinement procedures are extended in mixed discrete least squares meshless (MDLSM method for efficient analysis of elasticity problems. In the formulation of MDLSM method, mixed formulation is accepted to avoid second-order differentiation of shape functions and to obtain displacements and stresses simultaneously. In the refinement procedures, a robust error estimator based on the value of the least square residuals functional of the governing differential equations and its boundaries at nodal points is used which is inherently available from the MDLSM formulation and can efficiently identify the zones with higher numerical errors. The results are compared with the refinement procedures in the irreducible formulation of discrete least squares meshless (DLSM method and show the accuracy and efficiency of the proposed procedures. Also, the comparison of the error norms and convergence rate show the fidelity of the proposed adaptive refinement procedures in the MDLSM method.

  6. On the equivalence of Kalman filtering and least-squares estimation

    Science.gov (United States)

    Mysen, E.

    2016-07-01

    The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.

  7. ON THE BREAKDOWNS OF THE GALERKIN AND LEAST-SQUARES METHODS

    Institute of Scientific and Technical Information of China (English)

    钟宝江

    2002-01-01

    The Galerkin and least-squares methods are two classes of the most popular Krylov subspace methods for solving large linear systems of equations. Unfortunately, both methods may suffer from serious breakdowns of the same type: in a breakdown situation the Galerkin method is unable to calculate an approximate solution, while the least-squares method, although it does not really break down, is unsuccessful in reducing the norm of its residual. In this paper we first establish a unified theorem which gives a relationship between breakdowns in the two methods. We further illustrate theoretically and experimentally that if the coefficient matrix of a linear system is highly defective with the associated eigenvalues less than 1, then the restarted Galerkin and least-squares methods will be at great risk of complete breakdowns. It appears that our findings may help to understand phenomena observed in practice and to derive treatments for breakdowns of this type.

  8. Iterative weighted partial spline least squares estimation in semiparametric modeling of longitudinal data

    Institute of Scientific and Technical Information of China (English)

    孙孝前; 尤进红

    2003-01-01

    In this paper we consider the estimation problem of a semiparametric regression model when the data are longitudinal. An iterative weighted partial spline least squares estimator (IWPSLSE) for the parametric component is proposed which is more efficient than the weighted partial spline least squares estimator (WPSLSE) with weights constructed by using the within-group partial spline least squares residuals, in the sense of asymptotic variance. The asymptotic normality of this IWPSLSE is established. An adaptive procedure is presented which ensures that the iterative process stops after a finite number of iterations and produces an estimator asymptotically equivalent to the best estimator that can be obtained by using the iterative procedure. These results are generalizations of those for the heteroscedastic linear model to the case of semiparametric regression.

  9. ELASTO-PLASTICITY ANALYSIS BASED ON COLLOCATION WITH THE MOVING LEAST SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    Song Kangzu; Zhang Xiong; Lu Mingwan

    2003-01-01

    A meshless approach based on the moving least square method is developed for elasto-plasticity analysis, in which an incremental formulation is used. In this approach, the displacement shape functions are constructed by using the moving least square approximation, and the discrete governing equations for the elasto-plastic material are constructed with the direct collocation method. The boundary conditions are also imposed by collocation. The method is a truly meshless one, as it does not need any mesh, either for interpolation of the solution variables or for construction of the discrete equations. It is simply formulated and very efficient, and no post-processing procedure is required to compute the derivatives of the unknown variables, since the solution from this method based on the moving least square approximation is already smooth enough. Numerical examples are given to verify the accuracy of the proposed meshless method for elasto-plasticity analysis.

  10. Meshless Least-Squares Method for Solving the Steady-State Heat Conduction Equation

    Institute of Scientific and Technical Information of China (English)

    LIU Yan; ZHANG Xiong; LU Mingwan

    2005-01-01

    The meshless weighted least-squares (MWLS) method is a pure meshless method that combines the moving least-squares approximation scheme and least-square discretization. Previous studies of the MWLS method for elastostatics and wave propagation problems have shown that the MWLS method possesses several advantages, such as high accuracy, high convergence rate, good stability, and high computational efficiency. In this paper, the MWLS method is extended to heat conduction problems. The MWLS computational parameters are chosen based on a thorough numerical study of 1-dimensional problems. Several 2-dimensional examples show that the MWLS method is much faster than the element free Galerkin method (EFGM), while the accuracy of the MWLS method is close to, or even better than the EFGM. These numerical results demonstrate that the MWLS method has good potential for numerical analyses of heat transfer problems.

  11. A note on implementation of decaying product correlation structures for quasi-least squares.

    Science.gov (United States)

    Shults, Justine; Guerra, Matthew W

    2014-08-30

    This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable.

  12. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

    Science.gov (United States)

    Kermarrec, Gaël; Schön, Steffen

    2016-09-01

    Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence for a certain class of polynomial regressions between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account thanks to a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the sums of the rows elements of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator in terms of estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences compared with the solutions computed with the commonly used diagonal elevation-dependent model was reached for the GPS relative positioning with double differences, single point positioning as well as precise point positioning cases. The estimate differences between the equivalent and classical model with fully populated covariance matrix were below the mm for all simulated GPS cases and below the sub-mm for the relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation
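
    The row-sum construction described in this record can be checked numerically for the simplest design, the mean estimator, where the diagonally weighted and fully populated solutions coincide exactly. The sketch below does that check; the AR(1)-type covariance and the sample size are illustrative assumptions, not the GPS noise models analyzed in the paper.

```python
# Numerical check of the "equivalent diagonal" (row-sum) weighting for the mean
# estimator. The AR(1)-type covariance is an illustrative assumption only.
import numpy as np

rng = np.random.default_rng(2)
n, rho = 50, 0.8
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))  # correlated noise
W = np.linalg.inv(Sigma)                       # fully populated weight matrix
d = W.sum(axis=1)                              # equivalent diagonal: row sums of W

y = 5.0 + np.linalg.cholesky(Sigma) @ rng.normal(size=n)
A = np.ones((n, 1))                            # design matrix of the mean estimator

x_gls = np.linalg.solve(A.T @ W @ A, A.T @ W @ y)                 # generalized LS
x_dwls = np.linalg.solve(A.T @ (d[:, None] * A), A.T @ (d * y))   # diagonally weighted LS
print(x_gls, x_dwls)                           # identical for this design
```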

  13. Analysis of total least squares in estimating the parameters of a mortar trajectory

    Energy Technology Data Exchange (ETDEWEB)

    Lau, D.L.; Ng, L.C.

    1994-12-01

    Least Squares (LS) is a method of curve fitting used under the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided slightly (about 10%) improved results over the LS method.
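
    For readers unfamiliar with the LS/TLS distinction used in this record, the sketch below contrasts the two on a generic errors-in-variables straight-line fit, with total least squares computed from the smallest right singular vector. The line model and noise levels are assumptions for illustration; the mortar-trajectory angular-observation model of the report is not reproduced.

```python
# Ordinary vs. total least squares on a simple errors-in-variables line fit.
# The straight-line model and noise levels are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(3)
slope = 2.0
t_true = np.linspace(0, 10, 100)
t = t_true + 0.3 * rng.normal(size=t_true.size)   # noisy "data matrix" column
y = slope * t_true + 1.0 + 0.3 * rng.normal(size=t_true.size)

# Ordinary LS: errors assumed only in the observation vector y
G = np.column_stack([t, np.ones_like(t)])
slope_ls, _ = np.linalg.lstsq(G, y, rcond=None)[0]

# Total LS: errors in both t and y, via the smallest right singular vector
A = np.column_stack([t, y])
_, _, Vt = np.linalg.svd(A - A.mean(axis=0), full_matrices=False)
nx, ny = Vt[-1]                                   # normal vector of the fitted line
slope_tls = -nx / ny

print(slope_ls, slope_tls)
```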

  14. Constrained total least squares algorithm for passive location based on bearing-only measurements

    Institute of Scientific and Technical Information of China (English)

    WANG Ding; ZHANG Li; WU Ying

    2007-01-01

    A constrained total least squares algorithm for passive location based on bearing-only measurements is presented in this paper. In this algorithm the nonlinear measurement equations are first transformed into linear equations and the effect of the measurement noise on the linear equation coefficients is analyzed, so the passive location problem can be treated as a constrained total least squares problem. The problem is then changed into an unconstrained optimization problem which can be solved by the Newton algorithm, and finally an analysis of the location accuracy is given. The simulation results prove that the new algorithm is effective and practicable.

  15. The structured total least squares algorithm research for passive location based on angle information

    Institute of Scientific and Technical Information of China (English)

    WANG Ding; ZHANG Li; WU Ying

    2009-01-01

    Based on the constrained total least squares (CTLS) passive location algorithm with bearing-only measurements, in this paper the same passive location problem is transformed into a structured total least squares (STLS) problem. The solution of the STLS problem for passive location can be obtained using the inverse iteration method. It is also shown that the STLS algorithm and the CTLS algorithm have the same location mean-square error under certain conditions. Finally, the article presents a location and tracking algorithm for a moving target by combining the STLS location algorithm with a Kalman filter (KF). The efficiency and superiority of the proposed algorithms are confirmed by computer simulation results.

  16. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    Science.gov (United States)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam, associated with the external tank of the U.S. space shuttle, has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both linear least squares method and the nonlinear neural network predicted identical results.

  17. Least square neural network model of the crude oil blending process.

    Science.gov (United States)

    Rubio, José de Jesús

    2016-06-01

    In this paper, a recursive least squares algorithm is designed for big data learning of a feedforward neural network. The proposed method, as the combination of recursive least squares and a feedforward neural network, obtains four advantages over either algorithm alone: it requires fewer regressors, it is fast, it has the learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of a crude oil blending process.

  18. Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation

    Directory of Open Access Journals (Sweden)

    Ahmad Bilfarsah

    2005-04-01

    Full Text Available The Additive Spline Partial Least Squares (ASPLS) method is a generalization of the Partial Least Squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity in the predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, in particular tuna fish production.

  19. The Jackknife Interval Estimation of Parametersin Partial Least Squares Regression Modelfor Poverty Data Analysis

    Directory of Open Access Journals (Sweden)

    Pudji Ismartini

    2010-08-01

    Full Text Available One of the major problems facing data modelling in the social sciences is multicollinearity. Multicollinearity can have a significant impact on the quality and stability of the fitted regression model. The common classical regression technique using least squares estimates is highly sensitive to the multicollinearity problem. In such a problem area, Partial Least Squares Regression (PLSR) is a useful and flexible tool for statistical model building; however, PLSR yields only point estimates. This paper constructs interval estimates for the PLSR regression parameters by applying the jackknife technique to poverty data. A SAS macro programme is developed to obtain the jackknife interval estimator for PLSR.

  20. Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression

    Science.gov (United States)

    Verdoolaege, G.; Shabbir, A.; Hornung, G.

    2016-11-01

    Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.

  1. Least Squares Based Iterative Algorithm for the Coupled Sylvester Matrix Equations

    Directory of Open Access Journals (Sweden)

    Hongcai Yin

    2014-01-01

    Full Text Available By analyzing the eigenvalues of the related matrices, the convergence analysis of the least squares based iteration is given for solving the coupled Sylvester equations AX+YB=C and DX+YE=F in this paper. The analysis shows that the optimal convergence factor of this iterative algorithm is 1. In addition, the proposed iterative algorithm can solve the generalized Sylvester equation AXB+CXD=F. The analysis demonstrates that if the matrix equation has a unique solution then the least squares based iterative solution converges to the exact solution for any initial values. A numerical example illustrates the effectiveness of the proposed algorithm.
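
    For small problem sizes, the generalized Sylvester equation mentioned at the end of this record can be cross-checked by a direct vectorized least-squares solve, since vec(AXB) = (B^T ⊗ A) vec(X). The sketch below is such a reference solution only; it is not the least-squares based iteration analyzed in the paper, and the random matrices are assumptions for illustration.

```python
# Direct (non-iterative) least-squares solve of A X B + C X D = F via
# vectorization: (kron(B^T, A) + kron(D^T, C)) vec(X) = vec(F).
# Reference cross-check for small sizes only, not the paper's iteration.
import numpy as np

rng = np.random.default_rng(4)
n = 4
A, B, C, D = (rng.normal(size=(n, n)) for _ in range(4))
X_true = rng.normal(size=(n, n))
F = A @ X_true @ B + C @ X_true @ D

K = np.kron(B.T, A) + np.kron(D.T, C)             # coefficient matrix for vec(X)
x = np.linalg.lstsq(K, F.flatten(order="F"), rcond=None)[0]
X = x.reshape((n, n), order="F")                  # undo the column-major vec()

print(np.allclose(X, X_true))                     # True when the solution is unique
```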

  2. A New Neural Network for Solving a Class of Constrained Least Square Problems

    Institute of Scientific and Technical Information of China (English)

    YE Dazhen; XIA Youshen; WU Xinyu

    2001-01-01

    A new neural network for solving a class of constrained least square problems is presented. The network is shown to be completely stable and globally convergent to the exact solutions of the constrained least square problem. In contrast to the neural network proposed in Ref. [1], our new neural network has advantages in two major aspects. First, the convergent region of this new network is the whole space Rn. Second, in hardware implementations this new network does not need the expensive analogue multiplier for variables.

  3. Hierarchical Least Squares Identification and Its Convergence for Large Scale Multivariable Systems

    Institute of Scientific and Technical Information of China (English)

    丁锋; 丁韬

    2002-01-01

    The recursive least squares identification algorithm (RLS) for large scale multivariable systems requires a large amount of calculation; therefore, the RLS algorithm is difficult to implement on a computer. The computational load of estimation algorithms can be reduced using the hierarchical least squares identification algorithm (HLS) for large scale multivariable systems. The convergence analysis using the Martingale Convergence Theorem indicates that the parameter estimation error (PEE) given by the HLS algorithm is uniformly bounded without a persistent excitation signal and that the PEE consistently converges to zero under the persistent excitation condition. The HLS algorithm has a much lower computational load than the RLS algorithm.

  4. Constrained hierarchical least square nonlinear equation solvers. [for indefinite stiffness and large structural deformations

    Science.gov (United States)

    Padovan, J.; Lackney, J.

    1986-01-01

    The current paper develops a constrained hierarchical least square nonlinear equation solver. The procedure can handle the response behavior of systems which possess indefinite tangent stiffness characteristics. Due to the generality of the scheme, this can be achieved at various hierarchical application levels. For instance, in the case of finite element simulations, various combinations of either degree of freedom, nodal, elemental, substructural, and global level iterations are possible. Overall, this enables a solution methodology which is highly stable and storage efficient. To demonstrate the capability of the constrained hierarchical least square methodology, benchmarking examples are presented which treat structure exhibiting highly nonlinear pre- and postbuckling behavior wherein several indefinite stiffness transitions occur.

  5. Medium Band Least Squares Estimation of Fractional Cointegration in the Presence of Low-Frequency Contamination

    DEFF Research Database (Denmark)

    Christensen, Bent Jesper; Varneskov, Rasmus T.

    band least squares (MBLS) estimator uses sample dependent trimming of frequencies in the vicinity of the origin to account for such contamination. Consistency and asymptotic normality of the MBLS estimator are established, a feasible inference procedure is proposed, and rigorous tools for assessing...... the cointegration strength and testing MBLS against the existing narrow band least squares estimator are developed. Finally, the asymptotic framework for the MBLS estimator is used to provide new perspectives on volatility factors in an empirical application to long-span realized variance series for S&P 500...

  6. Galerkin-Petrov least squares mixed element method for stationary incompressible magnetohydrodynamics

    Institute of Scientific and Technical Information of China (English)

    LUO Zhen-dong; MAO Yun-kui; ZHU Jiang

    2007-01-01

    The Galerkin-Petrov least squares method is combined with the mixed finite element method to deal with the stationary, incompressible magnetohydrodynamics system of equations with viscosity. A Galerkin-Petrov least squares mixed finite element formulation for the stationary incompressible magnetohydrodynamics equations is presented, and the existence and error estimates of its solution are derived. With this method, the combination of the mixed finite element spaces does not need to satisfy the discrete Babuška-Brezzi stability conditions, so that the mixed finite element spaces can be chosen arbitrarily and error estimates of optimal order can be obtained.

  7. A Least-Squares Solution to Nonlinear Steady-State Multi-Dimensional IHCP

    Institute of Scientific and Technical Information of China (English)

    1996-01-01

    In this paper, the least-squares method is used to solve the Inverse Heat Conduction Problem (IHCP) to determine the space-wise variation of the unknown boundary condition on the inner surface of a helically coiled tube with fluid flow inside, electrical heating and insulation outside. The sensitivity coefficients are analyzed to give a rational distribution of the thermocouples. The results demonstrate that the method effectively extracts information about the unknown boundary condition of the heat conduction problem from the experimental measurements. The results also show that the least-squares method converges very quickly.

  8. Modified Recursive Least Squares Algorithm with Variable Parameters and Resetting for Time—Varying System

    Institute of Scientific and Technical Information of China (English)

    YUE Yuncan; QIAN Jixin

    2002-01-01

    Based on the idea of set-membership identification, a modified recursive least squares algorithm with variable gain, variable forgetting factor and resetting is presented. The concept of the error tolerance level is proposed, and selection criteria for the error tolerance level are given according to the min-max principle. The algorithm is particularly suitable for tracking time-varying systems and is similar in computational complexity to the standard recursive least squares algorithm. The superior performance of the algorithm is verified via simulation studies on a dynamic fermentation process.
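
    As background to this record, the sketch below shows the baseline that such modifications start from: standard recursive least squares with a fixed forgetting factor tracking a slowly drifting parameter. The variable-gain, error-tolerance and resetting logic of the paper is not reproduced; the model and drift rate are assumptions for illustration.

```python
# Standard recursive least squares with a fixed forgetting factor tracking a
# slowly drifting parameter (the baseline the paper modifies). Model, drift and
# noise level are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(5)
n_steps, lam = 400, 0.97                 # forgetting factor < 1 discounts old data
theta_hat = np.zeros(2)
P = 1e3 * np.eye(2)

for k in range(n_steps):
    theta_true = np.array([1.0 + 0.002 * k, -0.5])   # time-varying parameters
    phi = rng.normal(size=2)                          # regressor vector
    y = phi @ theta_true + 0.05 * rng.normal()

    K = P @ phi / (lam + phi @ P @ phi)               # gain vector
    theta_hat = theta_hat + K * (y - phi @ theta_hat)
    P = (P - np.outer(K, phi) @ P) / lam              # covariance update

print(theta_hat)                         # close to the final theta_true despite the drift
```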

  9. Unknown parameter's variance-covariance propagation and calculation in generalized nonlinear least squares problem

    Institute of Scientific and Technical Information of China (English)

    TAO Hua-xue; GUO Jin-yun

    2005-01-01

    The propagation and calculation of the unknown parameters' variance-covariance in the generalized nonlinear least squares problem remained to be studied, and did not appear in the domestic or foreign literature. A variance-covariance propagation formula for the unknown parameters, taking the second-power terms into account, is derived and used to evaluate the accuracy of the unknown parameter estimators in the generalized nonlinear least squares problem. It is a new variance-covariance formula and opens up a new way to evaluate accuracy when processing data that are multi-source, multi-dimensional, multi-type, multi-time-state, of different accuracy and nonlinear.

  10. ALGEBRAIC OPERATION OF SPECIAL MATRICES RELATED TO METHOD OF LEAST SQUARES

    Institute of Scientific and Technical Information of China (English)

    Xu Fuhua

    2003-01-01

    The following situation often occurs when using the method of least squares to solve problems: after m experiments are completed and a least squares solution is obtained, an (m+1)-th experiment is made in order to improve the results. A method of algebraic operation on the special matrices involved in the problem is given in this paper for obtaining a new solution for the m+1 experiments based upon the old solution for the original m experiments. This method is valid for more general matrices.
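
    One common way to obtain the new solution from the old one when a single further experiment is added is a rank-one (Sherman-Morrison) update of the inverse normal matrix; the sketch below shows that idea and cross-checks it against a full refit. It is a generic illustration under assumed random data, not necessarily the special-matrix operations of this paper.

```python
# Rank-one (Sherman-Morrison) update of a least-squares solution when an
# (m+1)-th experiment is added; generic sketch, cross-checked against a refit.
import numpy as np

rng = np.random.default_rng(6)
m, p = 30, 3
A, y = rng.normal(size=(m, p)), rng.normal(size=m)

AtA_inv = np.linalg.inv(A.T @ A)
Aty = A.T @ y
x_old = AtA_inv @ Aty                              # solution for the first m experiments

a_new, y_new = rng.normal(size=p), rng.normal()    # the (m+1)-th experiment
v = AtA_inv @ a_new
AtA_inv -= np.outer(v, v) / (1.0 + a_new @ v)      # Sherman-Morrison update of (A^T A)^-1
Aty += y_new * a_new
x_new = AtA_inv @ Aty                              # updated solution, no refit needed

x_ref = np.linalg.lstsq(np.vstack([A, a_new]), np.append(y, y_new), rcond=None)[0]
print(np.allclose(x_new, x_ref))                   # True
```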

  11. Iterative Weighted Semiparametric Least Squares Estimation in Repeated Measurement Partially Linear Regression Models

    Institute of Scientific and Technical Information of China (English)

    Ge-mai Chen; Jin-hong You

    2005-01-01

    Consider a repeated measurement partially linear regression model with an unknown vector parameter β. Starting from the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results are generalizations of those in [2] to the case of semiparametric regressions.

  12. Acceleration Control in Nonlinear Vibrating Systems based on Damped Least Squares

    CERN Document Server

    Pilipchuk, V N

    2011-01-01

    A discrete-time control algorithm using damped least squares is introduced for acceleration and energy exchange controls in nonlinear vibrating systems. It is shown that the damping constant of the least squares and the sampling time step of the controller must be inversely related to ensure that letting the time step vanish has little effect on the results. The algorithm is illustrated on two linearly coupled Duffing oscillators near the 1:1 internal resonance. In particular, it is shown that varying the dissipation ratio of one of the two oscillators can significantly suppress the nonlinear beat phenomenon.

  13. Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions

    Institute of Scientific and Technical Information of China (English)

    ZHU Meng-Hua; LIU Liang-Gang; QI Dong-Xu; YOU Zhong; XU Ao-Ao

    2009-01-01

    In this paper, a least squares fitting method with cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in gamma ray spectra. The derived procedure is simple and automatic. The results show that this method is better than the convolution method, with a sufficient reduction of statistical fluctuation.
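
    A minimal sketch of least-squares fitting with a cubic B-spline basis is given below, here using SciPy's make_lsq_spline on a synthetic noisy spectrum; the peak shape, channel grid and knot spacing are assumptions for the example and not the paper's detector data or procedure.

```python
# Least-squares smoothing of a noisy spectrum with a cubic B-spline basis via
# scipy.interpolate.make_lsq_spline. Synthetic data and knot spacing are
# illustrative assumptions.
import numpy as np
from scipy.interpolate import make_lsq_spline

rng = np.random.default_rng(7)
channel = np.linspace(0.0, 1.0, 512)
spectrum = 100 * np.exp(-0.5 * ((channel - 0.4) / 0.03) ** 2) + 20   # peak + background
counts = rng.poisson(spectrum).astype(float)                         # statistical fluctuations

k = 3                                            # cubic B-splines
interior = np.linspace(0.0, 1.0, 40)[1:-1]       # interior knots
t = np.r_[[channel[0]] * (k + 1), interior, [channel[-1]] * (k + 1)]
smooth = make_lsq_spline(channel, counts, t, k=k)

rms = np.sqrt(np.mean((smooth(channel) - spectrum) ** 2))
print(rms)                                       # residual of the fit w.r.t. the true curve
```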

  14. Using AMMI, factorial regression and partial least squares regression models for interpreting genotype x environment interaction.

    NARCIS (Netherlands)

    Vargas, M.; Crossa, J.; Eeuwijk, van F.A.; Ramirez, M.E.; Sayre, K.

    1999-01-01

    Partial least squares (PLS) and factorial regression (FR) are statistical models that incorporate external environmental and/or cultivar variables for studying and interpreting genotype × environment interaction (GEI). The Additive Main effect and Multiplicative Interaction (AMMI) model uses only th

  15. Linking Socioeconomic Status to Social Cognitive Career Theory Factors: A Partial Least Squares Path Modeling Analysis

    Science.gov (United States)

    Huang, Jie-Tsuen; Hsieh, Hui-Hsien

    2011-01-01

    The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…

  16. Using Technology to Optimize and Generalize: The Least-Squares Line

    Science.gov (United States)

    Burke, Maurice J.; Hodgson, Ted R.

    2007-01-01

    With the help of technology and a basic high school algebra method for finding the vertex of a quadratic polynomial, students can develop and prove the formula for least-squares lines. Students are exposed to the power of a computer algebra system to generalize processes they understand and to see deeper patterns in those processes. (Contains 4…

  17. Multigroup Analysis in Partial Least Squares (PLS) Path Modeling: Alternative Methods and Empirical Results

    NARCIS (Netherlands)

    Sarstedt, Marko; Henseler, Jörg; Ringle, Christian M.

    2011-01-01

    Purpose – Partial least squares (PLS) path modeling has become a pivotal empirical research method in international marketing. Owing to group comparisons' important role in research on international marketing, we provide researchers with recommendations on how to conduct multigroup analyses in PLS p

  18. Risk Bounds for Regularized Least-Squares Algorithm with Operator-Valued Kernels

    Science.gov (United States)

    2005-05-16

    [Only title-page fragments were recovered for this record: authors Ernesto De Vito (Dipartimento di Matematica, Università ...) and Andrea Caponnetto, with National Science Foundation contract acknowledgments (IIS-0112991, IIS-0209289).]

  19. Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes

    Science.gov (United States)

    Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)

    2003-01-01

    The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
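
    To make the construction concrete, the sketch below reconstructs a gradient at a point from a scattered, highly stretched stencil by inverse-distance-weighted least squares. It only shows the mechanics of the (weighted) least-squares reconstruction on an assumed synthetic stencil and a linear field; it does not reproduce the curvature-induced accuracy loss analyzed in the paper.

```python
# Inverse-distance-weighted least-squares gradient reconstruction from a
# scattered stencil. Synthetic stencil and a linear test field (the exact
# gradient is recovered regardless of weighting for a linear field).
import numpy as np

rng = np.random.default_rng(8)
f = lambda p: 3.0 * p[:, 0] - 2.0 * p[:, 1]              # linear field, gradient (3, -2)

xc = np.array([0.0, 0.0])                                # point where the gradient is sought
nbrs = xc + rng.normal(scale=[1.0, 0.01], size=(8, 2))   # highly stretched stencil

dx = nbrs - xc                                           # displacement vectors
df = f(nbrs) - f(xc[None, :])                            # function differences
w = 1.0 / np.linalg.norm(dx, axis=1)                     # inverse-distance weights

grad, *_ = np.linalg.lstsq(w[:, None] * dx, w * df, rcond=None)
print(grad)                                              # approximately [3, -2]
```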

  20. How to handle colored observation noise in large least-squares problems

    NARCIS (Netherlands)

    Klees, R.; Ditmar, P.; Broersen, P.

    2003-01-01

    An approach to handling colored observation noise in large least-squares (LS) problems is presented. The handling of colored noise is reduced to the problem of solving a Toeplitz system of linear equations. The colored noise is represented as an auto regressive moving-average (ARMA) process. Stabili

  1. Least Squares Inference on Integrated Volatility and the Relationship between Efficient Prices and Noise

    DEFF Research Database (Denmark)

    Nolte, Ingmar; Voev, Valeri

    The expected value of sums of squared intraday returns (realized variance) gives rise to a least squares regression which adapts itself to the assumptions of the noise process and allows for a joint inference on integrated volatility (IV), noise moments and price-noise relations. In the iid noise...

  2. Least-Squares Approaches for the Time-Dependent Maxwell Equations

    Energy Technology Data Exchange (ETDEWEB)

    Zhiquiang, C; Jones, J

    2001-12-01

    When the author was at CASC in LLNL during the period between July and December of last year, he was working on two research topics: (1) least-squares approaches for elasticity and Maxwell equations and (2) high-accuracy approximations for non-smooth problems.

  3. A Progress Report on Numerical Solutions of Least Squares Adjustment in GNU Project Gama

    Directory of Open Access Journals (Sweden)

    A. Čepek

    2005-01-01

    Full Text Available GNU project Gama for adjustment of geodetic networks is presented. Numerical solution of Least Squares Adjustment in the project is based on Singular Value Decomposition (SVD and General Orthogonalization Algorithm (GSO. Both algorithms enable solution of singular systems resulting from adjustment of free geodetic networks. 
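
    The kind of singular system that arises from a free network can be handled by a minimum-norm least-squares solution, which is what an SVD-based solver delivers. The sketch below illustrates this on a toy rank-deficient design matrix; it is not GNU Gama code, and the matrix and cutoff are assumptions for the example.

```python
# Minimum-norm least-squares solution of a rank-deficient system via the SVD,
# the situation arising for free networks. Toy matrix and cutoff are assumptions.
import numpy as np

rng = np.random.default_rng(9)
A = rng.normal(size=(10, 4))
A[:, 3] = A[:, 0] + A[:, 1]                       # make the design matrix rank deficient
b = rng.normal(size=10)

U, s, Vt = np.linalg.svd(A, full_matrices=False)
tol = 1e-10 * s[0]                                # cutoff for "zero" singular values
s_inv = np.where(s > tol, 1.0 / s, 0.0)
x = Vt.T @ (s_inv * (U.T @ b))                    # minimum-norm least-squares solution

print(np.allclose(x, np.linalg.pinv(A, rcond=1e-10) @ b))   # matches the pseudoinverse
```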

  4. Least-squares spectral element method applied to the Euler equations

    NARCIS (Netherlands)

    Gerritsma, M.I.; Bas, R. van der; De Maerschalck, B.; Koren, B.; Deconinck, H.

    2008-01-01

    This paper describes the application of the least-squares spectral element method to compressible flow problems. Special attention is paid to the imposition of the weak boundary conditions along curved walls and the influence of the time step on the position and resolution of shocks. The method is d

  5. Superresolution of 3-D computational integral imaging based on moving least square method.

    Science.gov (United States)

    Kim, Hyein; Lee, Sukho; Ryu, Taekyung; Yoon, Jungho

    2014-11-17

    In this paper, we propose an edge directive moving least square (ED-MLS) based superresolution method for computational integral imaging reconstruction(CIIR). Due to the low resolution of the elemental images and the alignment error of the microlenses, it is not easy to obtain an accurate registration result in integral imaging, which makes it difficult to apply superresolution to the CIIR application. To overcome this problem, we propose the edge directive moving least square (ED-MLS) based superresolution method which utilizes the properties of the moving least square. The proposed ED-MLS based superresolution takes the direction of the edge into account in the moving least square reconstruction to deal with the abrupt brightness changes in the edge regions, and is less sensitive to the registration error. Furthermore, we propose a framework which shows how the data have to be collected for the superresolution problem in the CIIR application. Experimental results verify that the resolution of the elemental images is enhanced, and that a high resolution reconstructed 3-D image can be obtained with the proposed method.

  6. Use of correspondence analysis partial least squares on linear and unimodal data

    DEFF Research Database (Denmark)

    Frisvad, Jens C.; Bergsøe, Merete Norsker

    1996-01-01

    Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data and an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating...

  7. Nonlinear least square estimation using difference quotient instead of derivative containing different classes of measurements

    Institute of Scientific and Technical Information of China (English)

    陶华学; 郭金运

    2002-01-01

    Using difference quotients instead of derivatives, the paper presents the solution method and procedure for nonlinear least squares estimation containing different classes of measurements. The paper also shows several practical cases, which indicate that the method is valid and reliable.
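
    A generic way to realize the idea of replacing derivatives by difference quotients is a Gauss-Newton iteration whose Jacobian columns are forward difference quotients; the sketch below does this for an assumed exponential-decay model, which stands in for, but is not, the multi-class measurement model of the paper.

```python
# Gauss-Newton iteration with forward difference quotients in place of analytic
# derivatives. The exponential-decay model and data are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(10)
t = np.linspace(0, 4, 30)
y = 2.0 * np.exp(-1.3 * t) + 0.01 * rng.normal(size=t.size)

def residual(theta):
    return y - theta[0] * np.exp(-theta[1] * t)

theta, h = np.array([1.0, 1.0]), 1e-6
for _ in range(20):
    r = residual(theta)
    # Jacobian of the residual by difference quotients, one column per parameter
    J = np.column_stack([(residual(theta + h * e) - r) / h for e in np.eye(theta.size)])
    step, *_ = np.linalg.lstsq(J, -r, rcond=None)
    theta = theta + step

print(theta)                       # close to the generating values (2.0, 1.3)
```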

  8. A Comparison of Mean Phase Difference and Generalized Least Squares for Analyzing Single-Case Data

    Science.gov (United States)

    Manolov, Rumen; Solanas, Antonio

    2013-01-01

    The present study focuses on single-case data analysis specifically on two procedures for quantifying differences between baseline and treatment measurements. The first technique tested is based on generalized least square regression analysis and is compared to a proposed non-regression technique, which allows obtaining similar information. The…

  9. Harmonic tidal analysis at a few stations using the least squares method

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A; Das, V.K.; Bahulayan, N

    Using the least squares method, harmonic analysis has been performed on hourly water level records of 29 days at several stations depicting different types of non-tidal noise. For a tidal record at Mormugao, which was free from storm surges (low...

  10. A comparison of least-squares and Bayesian minimum risk edge parameter estimation

    NARCIS (Netherlands)

    Mulder, Nanno J.; Abkar, Ali A.

    1999-01-01

    The problem considered here is to compare two methods for finding a common boundary between two objects with two unknown geometric parameters, such as edge position and edge orientation. We compare two model-based approaches: the least squares and the minimum Bayesian risk method. An expression is d

  11. Robust Mean and Covariance Structure Analysis through Iteratively Reweighted Least Squares.

    Science.gov (United States)

    Yuan, Ke-Hai; Bentler, Peter M.

    2000-01-01

    Adapts robust schemes to mean and covariance structures, providing an iteratively reweighted least squares approach to robust structural equation modeling. Each case is weighted according to its distance, based on first and second order moments. Test statistics and standard error estimators are given. (SLD)
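
    The iteratively reweighted least squares idea referred to in this record is sketched below on a plain linear regression with outliers, using Huber-type weights; this is only the generic IRLS mechanism under assumed data, not the mean and covariance structure estimation developed in the article.

```python
# Generic iteratively reweighted least squares with Huber-type weights on a
# linear regression with outliers; illustrative data, not the article's SEM setup.
import numpy as np

rng = np.random.default_rng(11)
n = 100
x = rng.uniform(0, 10, n)
y = 1.5 * x + 2.0 + rng.normal(size=n)
y[:5] += 25.0                                   # a few gross outliers

X = np.column_stack([x, np.ones(n)])
beta = np.linalg.lstsq(X, y, rcond=None)[0]     # ordinary least-squares start
c = 1.345                                       # Huber tuning constant

for _ in range(20):
    r = y - X @ beta
    scale = np.median(np.abs(r)) / 0.6745       # robust scale (MAD)
    u = np.abs(r) / (c * scale)
    w = np.where(u <= 1.0, 1.0, 1.0 / u)        # Huber weights: downweight large residuals
    Wx = w[:, None] * X
    beta = np.linalg.solve(X.T @ Wx, Wx.T @ y)  # weighted normal equations

print(beta)                                     # near (1.5, 2.0) despite the outliers
```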

  12. Gauss’s, Cholesky’s and Banachiewicz’s Contributions to Least Squares

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Wasniewski, Jerzy

    This paper describes historically Gauss’s contributions to the area of Least Squares. Also mentioned are Cholesky’s and Banachiewicz’s contributions to linear algebra. The material given is backup information to a Tutorial given at PPAM 2011 to honor Cholesky on the hundredth anniversary of his...

  13. Noise suppression using preconditioned least-squares prestack time migration: application to the Mississippian limestone

    Science.gov (United States)

    Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.

    2016-08-01

    Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower-upper-middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.

  14. Efficient GOCE satellite gravity field recovery based on least-squares using QR decomposition

    NARCIS (Netherlands)

    Baur, O.; Austen, G.; Kusche, J.

    2007-01-01

    We develop and apply an efficient strategy for Earth gravity field recovery from satellite gravity gradiometry data. Our approach is based upon the Paige-Saunders iterative least-squares method using QR decomposition (LSQR). We modify the original algorithm for space-geodetic applications: firstly,
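
    The Paige-Saunders LSQR algorithm named in this record is available, for instance, in SciPy; the sketch below runs it on a random sparse system that merely stands in for a gradiometry design matrix, with dimensions, sparsity and tolerances chosen arbitrarily for illustration.

```python
# Iterative least-squares solution with LSQR (Paige-Saunders) from SciPy on a
# random sparse stand-in system; dimensions and tolerances are arbitrary choices.
import numpy as np
from scipy.sparse import random as sprandom
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(12)
A = sprandom(2000, 300, density=0.01, random_state=12, format="csr")
x_true = rng.normal(size=300)
b = A @ x_true + 1e-3 * rng.normal(size=2000)

x_est, istop, itn = lsqr(A, b, atol=1e-10, btol=1e-10, iter_lim=2000)[:3]
print(istop, itn, np.linalg.norm(x_est - x_true))
```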

  15. APPLICATION OF PARTIAL LEAST SQUARES REGRESSION FOR AUDIO-VISUAL SPEECH PROCESSING AND MODELING

    Directory of Open Access Journals (Sweden)

    A. L. Oleinik

    2015-09-01

    Full Text Available Subject of Research. The paper deals with the problem of lip region image reconstruction from speech signal by means of Partial Least Squares regression. Such problems arise in connection with development of audio-visual speech processing methods. Audio-visual speech consists of acoustic and visual components (called modalities. Applications of audio-visual speech processing methods include joint modeling of voice and lips’ movement dynamics, synchronization of audio and video streams, emotion recognition, liveness detection. Method. Partial Least Squares regression was applied to solve the posed problem. This method extracts components of initial data with high covariance. These components are used to build regression model. Advantage of this approach lies in the possibility of achieving two goals: identification of latent interrelations between initial data components (e.g. speech signal and lip region image and approximation of initial data component as a function of another one. Main Results. Experimental research on reconstruction of lip region images from speech signal was carried out on VidTIMIT audio-visual speech database. Results of the experiment showed that Partial Least Squares regression is capable of solving reconstruction problem. Practical Significance. Obtained findings give the possibility to assert that Partial Least Squares regression is successfully applicable for solution of vast variety of audio-visual speech processing problems: from synchronization of audio and video streams to liveness detection.

  16. An unstructured parallel least-squares spectral element solver for incompressible flow problems

    NARCIS (Netherlands)

    Nool, M.; Proot, M.M.J.

    2003-01-01

    The parallelization of the least-squares spectral element formulation of the Stokes problem has recently been discussed for incompressible flow problems on structured grids. In the present work, the extension to unstructured grids is discussed. It will be shown that, to obtain an efficient and scala

  17. Representing Topography with Second-Degree Bivariate Polynomial Functions Fitted by Least Squares.

    Science.gov (United States)

    Neuman, Arthur Edward

    1987-01-01

    There is a need for abstracting topography other than for mapping purposes. The method employed should be simple and available to non-specialists, thereby ruling out spline representations. Generalizing from univariate first-degree least squares and from multiple regression, this article introduces bivariate polynomial functions fitted by least…
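
    The fitting step itself reduces to ordinary least squares on a six-column design matrix (1, x, y, x^2, xy, y^2); a minimal sketch on synthetic scattered "terrain" samples is given below, with the surface coefficients and noise level being assumptions for the example.

```python
# Second-degree bivariate polynomial (trend surface) fitted to scattered samples
# by ordinary least squares; the synthetic surface is an illustrative assumption.
import numpy as np

rng = np.random.default_rng(13)
x, y = rng.uniform(-1, 1, 200), rng.uniform(-1, 1, 200)
z = (5.0 + 2.0 * x - 1.0 * y + 0.5 * x**2 - 0.8 * x * y + 0.3 * y**2
     + 0.05 * rng.normal(size=200))

# design matrix for z ~ a + b x + c y + d x^2 + e xy + f y^2
G = np.column_stack([np.ones_like(x), x, y, x**2, x * y, y**2])
coeff, *_ = np.linalg.lstsq(G, z, rcond=None)
print(np.round(coeff, 2))          # recovers roughly [5, 2, -1, 0.5, -0.8, 0.3]
```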

  18. LEAST-SQUARES MIXED FINITE ELEMENT METHOD FOR SADDLE-POINT PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Lie-heng Wang; Huo-yuan Duan

    2000-01-01

    In this paper, a least-squares mixed finite element method for the solution of the primal saddle-point problem is developed. It is proved that the approximate problem has consistent ellipticity in the conforming finite element spaces, with only the discrete BB-condition needed for a smaller auxiliary problem. The abstract error estimate is derived.

  19. Memory and computation reduction for least-square channel estimation of mobile OFDM systems

    NARCIS (Netherlands)

    Xu, T.; Tang, Z.; Lu, H.; Leuken, R van

    2012-01-01

    Mobile OFDM refers to OFDM systems with fast-moving transceivers, in contrast to traditional OFDM systems whose transceivers are stationary or have a low velocity. In this paper, we use Basis Expansion Models (BEM) to model the time-variation of channels, based on which two least-squares (LS) channe

  20. Computational Experience with Confidence Regions and Confidence Intervals for Nonlinear Least Squares.

    Science.gov (United States)

    1985-05-01

    first generated the errors and response variables. The errors were produced using the Marsaglia and Tsang pseudo-normal random number algorithm. (Cited: "Asymptotic properties of non-linear least squares estimators," The Annals of Mathematical Statistics, 40(2), pp. 633-643; Marsaglia, G. and Tsang, W.)

  1. On Fits of Seasonal Data by the Ordinary Least Square Method

    CERN Document Server

    Rotundo, G; Herteli, C; Ileanu, B V

    2016-01-01

    Following the pioneering paper by Shimura et al. (1981) on "Geographical and secular changes in the seasonal distribution of births", much data has been reported as seasonal-effect time series. We discuss how one can be misled when testing Linear Regression Models with the Ordinary Least Squares method.

  2. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
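
    A minimal univariate sketch of this setting, least-squares polynomial approximation from random samples of a given probability measure (here the uniform measure on [-1, 1] with a Legendre basis; the oversampling factor is an arbitrary illustrative choice), is:

      import numpy as np
      from numpy.polynomial import legendre

      rng = np.random.default_rng(0)
      f = lambda t: np.exp(t) * np.sin(3 * t)          # target function

      deg = 10                                         # polynomial space of dimension deg + 1
      n_samples = 5 * (deg + 1)                        # oversampling relative to the dimension
      x = rng.uniform(-1.0, 1.0, n_samples)            # random evaluations
      V = legendre.legvander(x, deg)                   # Legendre Vandermonde matrix
      c, *_ = np.linalg.lstsq(V, f(x), rcond=None)     # discrete least-squares projection

      xs = np.linspace(-1, 1, 1000)
      print(np.max(np.abs(legendre.legval(xs, c) - f(xs))))   # sup-norm error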

  3. Research on Properties of Total Least Squares Estimation

    Institute of Scientific and Technical Information of China (English)

    王乐洋

    2012-01-01

    Through theoretical derivation and proof, some properties of the total least squares estimation are found. The total least squares estimation is a linear transformation of the least squares estimation. When the coefficient matrix contains errors, the least squares estimation is biased, whereas the total least squares estimation is unbiased. The condition number of the total least squares estimation is larger than that of the least squares estimation, so the total least squares estimation is more easily affected by data errors than the least squares estimation. Through further derivation, the relations between the total least squares and the least squares estimations in terms of solutions, residuals and unit weight variance estimates are given.
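
    For reference, the classical SVD-based total least squares solution of an overdetermined system Ax ≈ b (errors in both A and b) can be computed as below. This is the textbook construction for the generic errors-in-variables model, shown only as a sketch, not the derivation of the record above.

      import numpy as np

      def tls_solve(A, b):
          """Total least squares solution of A x ~ b via SVD of the augmented matrix [A b]."""
          n = A.shape[1]
          C = np.column_stack([A, b])
          _, _, Vt = np.linalg.svd(C)
          v = Vt[-1]                       # right singular vector of the smallest singular value
          return -v[:n] / v[n]             # assumes v[n] != 0 (generic case)

      rng = np.random.default_rng(0)
      A_exact = rng.standard_normal((100, 3))
      x_true = np.array([1.0, -2.0, 0.5])
      A = A_exact + 0.01 * rng.standard_normal(A_exact.shape)       # noisy coefficient matrix
      b = A_exact @ x_true + 0.01 * rng.standard_normal(100)        # noisy observations
      print(tls_solve(A, b), np.linalg.lstsq(A, b, rcond=None)[0])  # TLS vs ordinary LS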

  4. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    Science.gov (United States)

    Demetriou, I. C.

    2006-04-01

    , biology and engineering. Distribution material that includes single and double precision versions of the code, driver programs, technical details of the implementation of the software package and test examples that demonstrate the use of the software is available in an accompanying ASCII file. Program summary: Title of program: L2CXCV. Catalogue identifier: ADXM_v1_0. Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXM_v1_0. Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland. Computer: PC Intel Pentium, Sun Sparc Ultra 5, Hewlett-Packard HP UX 11.0. Operating system: WINDOWS 98, 2000, Unix/Solaris 7, Unix/HP UX 11.0. Programming language used: FORTRAN 77. Memory required to execute with typical data: O(n), where n is the number of data. No. of bits in a byte: 8. No. of lines in distributed program, including test data, etc.: 29 349. No. of bytes in distributed program, including test data, etc.: 1 276 663. No. of processors used: 1. Has the code been vectorized or parallelized?: no. Distribution format: default tar.gz. Separate documentation available: Yes. Nature of physical problem: Analysis of processes that show initially increasing and then decreasing rates of change (sigmoid shape), as, for example, in heat curves, reactor stability conditions, evolution curves, photoemission yields, growth models, utility functions, etc. Identifying an unknown convex/concave (sigmoid) function from some measurements of its values that contain random errors. Also, identifying the inflection point of this sigmoid function. Method of solution: Univariate data smoothing by minimizing the sum of the squares of the residuals (least squares approximation) subject to the condition that the second order divided differences of the smoothed values change sign at most once. Ideally, this is the number of sign changes in the second derivative of the underlying function. The remarkable property of the smoothed values is that they consist of one separate section of optimal components

  5. Fractional Order Digital Differentiator Design Based on Power Function and Least squares

    Science.gov (United States)

    Kumar, Manjeet; Rawat, Tarun Kumar

    2016-10-01

    In this article, we propose the use of a power function and the least squares method for the design of a fractional order digital differentiator. The input signal is transformed into a power function by using Taylor series expansion, and its fractional derivative is computed using the Grunwald-Letnikov (G-L) definition. Next, the fractional order digital differentiator is modelled as a finite impulse response (FIR) system that yields a fractional order derivative of the G-L type for a power function. The FIR system coefficients are obtained by using the least squares method. Two examples are used to demonstrate how the fractional derivative of digital signals is computed by the proposed technique. The results of the third and fourth examples reveal that the proposed technique gives superior performance in comparison with the existing techniques.

  6. A NUMERICALLY STABLE BLOCK MODIFIED GRAM-SCHMIDT ALGORITHM FOR SOLVING STIFF WEIGHTED LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Musheng Wei; Qiaohua Liu

    2007-01-01

    Recently, Wei in [18] proved that perturbed stiff weighted pseudoinverses and stiff weighted least squares problems are stable if and only if the original coefficient matrix A and its perturbation satisfy several row rank preservation conditions. According to these conditions, in this paper we show that, in general, ordinary modified Gram-Schmidt with column pivoting is not numerically stable for solving the stiff weighted least squares problem. We then propose a row block modified Gram-Schmidt algorithm with column pivoting, and show that with an appropriately chosen tolerance, this algorithm can correctly determine the numerical ranks of the row partitioned sub-matrices, and the computed QR factor R contains small roundoff error which is row stable. Several numerical experiments are also provided to compare the results of the ordinary modified Gram-Schmidt algorithm with column pivoting and the row block modified Gram-Schmidt algorithm with column pivoting.
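
    For orientation, a plain (unpivoted, unweighted) modified Gram-Schmidt QR factorization and its use in a least squares solve are sketched below; the row block, column-pivoted variant proposed in the paper is more elaborate than this.

      import numpy as np
      from scipy.linalg import solve_triangular

      def mgs_qr(A):
          """Thin QR factorization by modified Gram-Schmidt (no pivoting)."""
          A = np.array(A, dtype=float)
          m, n = A.shape
          Q = np.zeros((m, n))
          R = np.zeros((n, n))
          for k in range(n):
              R[k, k] = np.linalg.norm(A[:, k])
              Q[:, k] = A[:, k] / R[k, k]
              for j in range(k + 1, n):
                  R[k, j] = Q[:, k] @ A[:, j]
                  A[:, j] -= R[k, j] * Q[:, k]
          return Q, R

      rng = np.random.default_rng(0)
      A = rng.standard_normal((50, 5))
      b = rng.standard_normal(50)
      Q, R = mgs_qr(A)
      x = solve_triangular(R, Q.T @ b)      # least squares solution via the QR factors
      print(np.allclose(x, np.linalg.lstsq(A, b, rcond=None)[0]))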

  7. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
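
    A minimal alternating least squares sketch with a plain non-negativity constraint (each subproblem solved by SciPy's NNLS) is shown below; it illustrates the standard constrained ALS that the biased variant builds on, not the patented algorithm itself.

      import numpy as np
      from scipy.optimize import nnls

      def als_nonneg(D, k, n_iter=50, seed=0):
          """Factor D ~ C @ S.T with C, S >= 0 by alternating NNLS (generic sketch)."""
          rng = np.random.default_rng(seed)
          m, p = D.shape
          C = rng.random((m, k))
          S = rng.random((p, k))
          for _ in range(n_iter):
              for i in range(m):                  # update the rows of C
                  C[i], _ = nnls(S, D[i])
              for j in range(p):                  # update the rows of S
                  S[j], _ = nnls(C, D[:, j])
          return C, S

      rng = np.random.default_rng(1)
      D = rng.random((60, 3)) @ rng.random((3, 80))        # synthetic non-negative data matrix
      C, S = als_nonneg(D, k=3)
      print(np.linalg.norm(D - C @ S.T) / np.linalg.norm(D))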

  8. Improvements to the Levenberg-Marquardt algorithm for nonlinear least-squares minimization

    CERN Document Server

    Transtrum, Mark K

    2012-01-01

    When minimizing a nonlinear least-squares function, the Levenberg-Marquardt algorithm can suffer from slow convergence, particularly when it must navigate a narrow canyon en route to a best fit. On the other hand, when the least-squares function is very flat, the algorithm may easily become lost in parameter space. We introduce several improvements to the Levenberg-Marquardt algorithm in order to improve both its convergence speed and its robustness to initial parameter guesses. We update the usual step to include a geodesic acceleration correction term, explore a systematic way of accepting uphill steps that may increase the residual sum of squares due to Umrigar and Nightingale, and employ the Broyden method to update the Jacobian matrix. We test these changes by comparing their performance on a number of test problems with standard implementations of the algorithm. We suggest that these two particular challenges, slow convergence and robustness to initial guesses, are complementary problems. Schemes that imp...
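
    As a point of reference for the baseline algorithm being improved, a standard Levenberg-Marquardt fit (here through SciPy's conventional implementation, not the geodesic-acceleration variant of the paper) looks like:

      import numpy as np
      from scipy.optimize import least_squares

      rng = np.random.default_rng(0)
      t = np.linspace(0, 4, 60)
      y = 2.5 * np.exp(-1.3 * t) + 0.5 + 0.02 * rng.standard_normal(t.size)   # noisy decay data

      def residuals(p):
          a, lam, c = p
          return a * np.exp(-lam * t) + c - y

      fit = least_squares(residuals, x0=[1.0, 1.0, 0.0], method='lm')   # classic LM step control
      print(fit.x, fit.cost)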

  9. A Weighed Least Square TDOA Location Algorithm for TDMA Multi-target

    Directory of Open Access Journals (Sweden)

    WANG XU

    2011-04-01

    Full Text Available In order to improve the location precision of multiple targets in a time division multiple address (TDMA) system, a new weighed least square algorithm is presented for multi-target ranging and locating. According to the time synchronization of the TDMA system, the range difference model between multiple targets is built using the time relations among the slot signals. Thus, the range of one target can be estimated from another target's, and a group of estimated values can be acquired for every target. Then, the weighed least square algorithm is used to estimate the range of every target. Because the time differences of arrival (TDOA) of all targets are used in locating each target, the location precision is improved. The ambiguity and non-solution problems of the traditional TDOA location algorithm are also avoided in the presented algorithm. At the end, the simulation results illustrate the validity of the proposed algorithm.
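
    The weighting idea can be illustrated with a generic weighted least squares solve, scaling each row by the square root of its weight; the TDOA-specific measurement model of the paper is not reproduced here.

      import numpy as np

      def weighted_lstsq(A, b, w):
          """Solve min_x sum_i w_i * (A_i x - b_i)^2 by row scaling."""
          sw = np.sqrt(np.asarray(w, dtype=float))
          x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
          return x

      rng = np.random.default_rng(0)
      A = rng.standard_normal((30, 2))
      x_true = np.array([3.0, -1.0])
      sigma = rng.uniform(0.01, 0.5, 30)                   # heteroscedastic noise levels
      b = A @ x_true + sigma * rng.standard_normal(30)
      print(weighted_lstsq(A, b, 1.0 / sigma**2))          # weights chosen as inverse variances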

  10. Least Orthogonal Distance Estimator and Total Least Square for Simultaneous Equation Models

    Directory of Open Access Journals (Sweden)

    Alessia Naccarato

    2014-01-01

    Full Text Available The Least Orthogonal Distance Estimator (LODE) of Simultaneous Equation Models' structural parameters is based on minimizing the orthogonal distance between the Reduced Form (RF) and the Structural Form (SF) parameters. In this work we propose a new version – with respect to Pieraccini and Naccarato (2008) – of Full Information (FI) LODE based on the decomposition of a new structure of the variance-covariance matrix using Singular Value Decomposition (SVD) instead of Spectral Decomposition (SD). In this context Total Least Squares is applied. A simulation experiment comparing the performance of the new version of FI LODE with Three Stage Least Squares (3SLS) and Full Information Maximum Likelihood (FIML) is presented. Finally, a comparison between the new and old versions of FI LODE, together with a few words of conclusion, concludes the paper.

  11. Geodesic least squares regression for scaling studies in magnetic confinement fusion

    Energy Technology Data Exchange (ETDEWEB)

    Verdoolaege, Geert [Department of Applied Physics, Ghent University, Ghent, Belgium and Laboratory for Plasma Physics, Royal Military Academy, Brussels (Belgium)

    2015-01-13

    In regression analyses for deriving scaling laws that occur in various scientific disciplines, usually standard regression methods have been applied, of which ordinary least squares (OLS) is the most popular. However, concerns have been raised with respect to several assumptions underlying OLS in its application to scaling laws. We here discuss a new regression method that is robust in the presence of significant uncertainty on both the data and the regression model. The method, which we call geodesic least squares regression (GLS), is based on minimization of the Rao geodesic distance on a probabilistic manifold. We demonstrate the superiority of the method using synthetic data and we present an application to the scaling law for the power threshold for the transition to the high confinement regime in magnetic confinement fusion devices.

  12. Least square regression method for estimating gas concentration in an electronic nose system.

    Science.gov (United States)

    Khalaf, Walaa; Pace, Calogero; Gaudioso, Manlio

    2009-01-01

    We describe an Electronic Nose (ENose) system which is able to identify the type of analyte and to estimate its concentration. The system consists of seven sensors, five of them being gas sensors (supplied with different heater voltage values), the remainder being a temperature and a humidity sensor, respectively. To identify a new analyte sample and then to estimate its concentration, we use both some machine learning techniques and the least square regression principle. In fact, we apply two different training models; the first one is based on the Support Vector Machine (SVM) approach and is aimed at teaching the system how to discriminate among different gases, while the second one uses the least squares regression approach to predict the concentration of each type of analyte.

  13. ON THE SINGULARITY OF LEAST SQUARES ESTIMATOR FOR MEAN-REVERTING Α-STABLE MOTIONS

    Institute of Scientific and Technical Information of China (English)

    Hu Yaozhong; Long Hongwei

    2009-01-01

    We study the problem of parameter estimation for mean-reverting α-stable motion, dXt = (a0 - θ0Xt)dt + dZt, observed at discrete time instants. A least squares estimator is obtained and its asymptotics is discussed in the singular case (a0, θ0) = (0, 0). If a0 = 0, then the mean-reverting α-stable motion becomes Ornstein-Uhlenbeck process and is studied in [7] in the ergodic case θ0 > 0. For the Ornstein-Uhlenbeck process, asymptotics of the least squares estimators for the singular case (θ0 = 0) and for the ergodic case (θ0 > 0) are completely different.

  14. Robust On-Line Fault Diagnosis for Nonlinear Difference-Algebraic Systems Using Least Squares Estimate

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A new robust on-line fault diagnosis method based on least squares estimate for nonlinear difference-algebraic systems (DAS) with uncertainties is proposed. Based on the known nominal model of the DAS, this method firstly constructs an auxiliary system consisting of a difference equation and an algebraic equation, then, based on the relationship between the state deviation and the faults in the difference equation and the relationship between the algebraic variable deviation and the faults in algebraic equation, it identifies the faults on-line through least squares estimate. This method can not only detect, isolate and identify faults for DAS, but also give the upper bound of the error of fault identification. The simulation results indicate that it can give satisfactory diagnostic results for both abrupt and incipient faults.

  15. Determination of glucose concentration from near-infrared spectra using locally weighted partial least square regression.

    Science.gov (United States)

    Malik, Bilal; Benaissa, Mohammed

    2012-01-01

    This paper proposes the use of locally weighted partial least squares regression (LW-PLSR) as an alternative multivariate calibration method for the prediction of glucose concentration from NIR spectra. The efficiency of the proposed model is validated in experiments carried out under non-controlled environment and sample conditions using mixtures composed of glucose, urea and triacetin. The collected data span the spectral region from 2100 nm to 2400 nm with a spectral resolution of 1 nm. The results show that the standard error of prediction (SEP) decreases to 23.85 mg/dL when using LW-PLSR, in comparison to SEP values of 49.40 mg/dL and 27.56 mg/dL using Principal Component Regression (PCR) and Partial Least Squares (PLS) regression, respectively.

  16. A Least Square-Based Self-Adaptive Localization Method for Wireless Sensor Networks

    Directory of Open Access Journals (Sweden)

    Baoguo Yu

    2016-01-01

    Full Text Available In wireless sensor network (WSN) localization methods based on the Received Signal Strength Indicator (RSSI), it is usually required to determine the parameters of the radio signal propagation model before estimating the distance between an anchor node and an unknown node from their communication RSSI value; a localization algorithm is then used to estimate the location of the unknown node. However, this localization method, though high in localization accuracy, has weaknesses such as a complex working procedure and poor system versatility. To address these defects, a self-adaptive WSN localization method based on least squares is proposed, which uses the least squares criterion to estimate the parameters of the radio signal propagation model and thereby reduces the amount of computation in the estimation process. The experimental results show that the proposed self-adaptive localization method achieves high processing efficiency while satisfying the high localization accuracy requirement. In conclusion, the proposed method is of definite practical value.
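
    A minimal sketch of the parameter-estimation step, assuming the common log-distance path-loss model RSSI(d) = A - 10*n*log10(d) (a standard choice for illustration, not necessarily the exact model of the paper), estimates the reference power A and path-loss exponent n by least squares from anchor measurements:

      import numpy as np

      rng = np.random.default_rng(0)
      d = rng.uniform(1.0, 50.0, 40)                         # anchor-to-node distances (m)
      A_true, n_true = -40.0, 2.7
      rssi = A_true - 10.0 * n_true * np.log10(d) + rng.normal(0.0, 2.0, d.size)

      M = np.column_stack([np.ones_like(d), -10.0 * np.log10(d)])   # model is linear in (A, n)
      (A_hat, n_hat), *_ = np.linalg.lstsq(M, rssi, rcond=None)
      print(A_hat, n_hat)

      d_est = 10 ** ((A_hat - rssi) / (10.0 * n_hat))        # invert the fitted model to get distances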

  17. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    Science.gov (United States)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.

  18. Least Square Regression Method for Estimating Gas Concentration in an Electronic Nose System

    Directory of Open Access Journals (Sweden)

    Walaa Khalaf

    2009-03-01

    Full Text Available We describe an Electronic Nose (ENose) system which is able to identify the type of analyte and to estimate its concentration. The system consists of seven sensors, five of them being gas sensors (supplied with different heater voltage values), the remainder being a temperature and a humidity sensor, respectively. To identify a new analyte sample and then to estimate its concentration, we use both some machine learning techniques and the least square regression principle. In fact, we apply two different training models; the first one is based on the Support Vector Machine (SVM) approach and is aimed at teaching the system how to discriminate among different gases, while the second one uses the least squares regression approach to predict the concentration of each type of analyte.

  19. Least Squares Ranking on Graphs, Hodge Laplacians, Time Optimality, and Iterative Methods

    CERN Document Server

    Hirani, Anil N; Watts, Seth

    2010-01-01

    Given a set of alternatives to be ranked and some pairwise comparison values, ranking can be posed as a least squares computation on a graph. This was first used by Leake for ranking football teams. The residual can be further analyzed to find inconsistencies in the given data, and this leads to a second least squares problem. This whole process was formulated recently by Jiang et al. as a Hodge decomposition of the edge values. Recently, Koutis et al., showed that linear systems involving symmetric diagonally dominant (SDD) matrices can be solved in time approaching optimality. By using Hodge 0-Laplacian and 2-Laplacian, we give various results on when the normal equations for ranking are SDD and when iterative Krylov methods should be used. We also give iteration bounds for conjugate gradient method for these problems.
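
    The basic least-squares ranking step can be written down directly: given pairwise score differences on the edges of a comparison graph, solve for node potentials in the least-squares sense using the edge-node incidence matrix. The toy example below is dense and small; the paper's contribution concerns solving such (SDD) systems at scale with iterative Krylov methods.

      import numpy as np

      # edges (i, j, y_ij) meaning "alternative j beats alternative i by y_ij points"
      edges = [(0, 1, 3.0), (1, 2, 1.0), (0, 2, 5.0), (2, 3, 2.0), (0, 3, 6.5)]
      n = 4

      B = np.zeros((len(edges), n))                # edge-node incidence matrix
      y = np.zeros(len(edges))
      for row, (i, j, yij) in enumerate(edges):
          B[row, i], B[row, j], y[row] = -1.0, 1.0, yij

      s, *_ = np.linalg.lstsq(B, y, rcond=None)    # least-squares ranking scores (potentials)
      s -= s.mean()                                # fix the additive gauge freedom
      residual = y - B @ s                         # inconsistency left over in the comparisons
      print(np.argsort(-s), residual)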

  20. Difference mapping method using least square support vector regression for variable-fidelity metamodelling

    Science.gov (United States)

    Zheng, Jun; Shao, Xinyu; Gao, Liang; Jiang, Ping; Qiu, Haobo

    2015-06-01

    Engineering design, especially for complex engineering systems, is usually a time-consuming process involving computation-intensive computer-based simulation and analysis methods. A difference mapping method using least square support vector regression is developed in this work, as a special metamodelling methodology that includes variable-fidelity data, to replace the computationally expensive computer codes. A general difference mapping framework is proposed where a surrogate base is first created, then the approximation is gained by a mapping the difference between the base and the real high-fidelity response surface. The least square support vector regression is adopted to accomplish the mapping. Two different sampling strategies, nested and non-nested design of experiments, are conducted to explore their respective effects on modelling accuracy. Different sample sizes and three approximation performance measures of accuracy are considered.

  1. SUPERCONVERGENCE OF LEAST-SQUARES MIXED FINITE ELEMENT FOR SECOND-ORDER ELLIPTIC PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Yan-ping Chen; De-hao Yu

    2003-01-01

    In this paper the least-squares mixed finite element is considered for solving second-order elliptic problems in two-dimensional domains. The primary solution u and the flux σ are approximated using finite element spaces consisting of piecewise polynomials of degree k and r, respectively. Based on interpolation operators and an auxiliary projection, superconvergent H1-error estimates of both the primary solution approximation uh and the flux approximation σh are obtained under the standard quasi-uniform assumption on the finite element partition. The superconvergence indicates an accuracy of O(h^(r+2)) for the least-squares mixed finite element approximation if Raviart-Thomas or Brezzi-Douglas-Fortin-Marini elements of order r are employed, with an optimal error estimate of O(h^(r+1)).

  2. Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    XU Rui-Rui; BIAN Guo-Xing; GAO Chen-Feng; CHEN Tian-Lun

    2005-01-01

    The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then we employ a clustering method in the model to prune the number of support values. The learning rate and the noise-filtering capabilities of the LS-SVM are both greatly improved.

  3. LEAST-SQUARES METHOD-BASED FEATURE FITTING AND EXTRACTION IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The main purpose of reverse engineering is to convert discrete data points into piecewise smooth, continuous surface models.Before carrying out model reconstruction it is significant to extract geometric features because the quality of modeling greatly depends on the representation of features.Some fitting techniques of natural quadric surfaces with least-squares method are described.And these techniques can be directly used to extract quadric surfaces features during the process of segmentation for point cloud.

  4. Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem

    Directory of Open Access Journals (Sweden)

    Baiyu Wang

    2014-01-01

    Full Text Available This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.

  5. Least squares algorithm for region-of-interest evaluation in emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Formiconi, A.R. (Sezione di Medicina Nucleare, Firenze (Italy). Dipt. di Fisiopatologia Clinica)

    1993-03-01

    In a simulation study, the performances of the least squares algorithm applied to region-of-interest evaluation were studied. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme and also provides estimates of statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction; these included filtered back projection and conjugate gradient least squares with the model of nonstationary geometrical response. For noise-free data and for regions of accurate shape least squares estimates were unbiased within roundoff errors. For noisy data, estimates were still unbiased but precision worsened for regions smaller than resolution: simulating typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra high-resolution collimator and 7% with a low energy all purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of nonstationary geometrical response, bias of estimates decreased on increasing the number of iterations, but precision worsened thus achieving an estimated standard deviation of more than 25% for the same 1 cm region.

  6. A Backward Stable Hyperbolic QR Factorization Method for Solving Indefinite Least Squares Problem

    Institute of Scientific and Technical Information of China (English)

    徐洪国

    2004-01-01

    We present a numerical method for solving the indefinite least squares problem. We first normalize the coefficient matrix. Then we compute the hyperbolic QR factorization of the normalized matrix. Finally we compute the solution by solving several triangular systems. We give the first order error analysis to show that the method is backward stable. The method is more efficient than the backward stable method proposed by Chandrasekaran, Gu and Sayed.

  7. A DYNAMICAL SYSTEM ALGORITHM FOR SOLVING A LEAST SQUARES PROBLEM WITH ORTHOGONALITY CONSTRAINTS

    Institute of Scientific and Technical Information of China (English)

    黄建国; 叶中行; 徐雷

    2001-01-01

    This paper introduces a dynamical system (neural network) algorithm for solving a least squares problem with orthogonality constraints, which has wide applications in computer vision and signal processing. A rigorous analysis of the convergence and stability of the algorithm is provided. Moreover, a so-called zero-extension technique is presented to keep the algorithm always convergent to the needed result for any randomly chosen initial data. Numerical experiments illustrate the effectiveness and efficiency of the algorithm.

  8. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.
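
    For a single observation vector, the kind of inequality-constrained problem addressed here reduces to ordinary non-negative least squares, which SciPy's active-set NNLS solves directly; the combinatorial speed-up of the patent concerns reorganizing many such solves, which this sketch does not attempt.

      import numpy as np
      from scipy.optimize import nnls

      rng = np.random.default_rng(0)
      A = rng.random((100, 5))
      x_true = np.array([0.0, 1.2, 0.0, 0.4, 2.0])       # sparse, non-negative ground truth
      b = A @ x_true + 0.01 * rng.standard_normal(100)

      x_nnls, rnorm = nnls(A, b)                         # min ||A x - b|| subject to x >= 0
      print(x_nnls, rnorm)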

  9. Least-Squares Solutions of the Equation AX = B Over Anti-Hermitian Generalized Hamiltonian Matrices

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Upon using the denotative theorem of anti-Hermitian generalized Hamiltonian matrices, we solve effectively the least-squares problem min ‖AX - B‖ over anti-Hermitian generalized Hamiltonian matrices. We derive some necessary and sufficient conditions for solvability of the problem and an expression for general solution of the matrix equation AX = B. In addition, we also obtain the expression for the solution of a relevant optimal approximate problem.

  10. Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks.

    Science.gov (United States)

    Chen, S; Wu, Y; Luk, B L

    1999-01-01

    The paper presents a two-level learning method for radial basis function (RBF) networks. A regularized orthogonal least squares (ROLS) algorithm is employed at the lower level to construct RBF networks while the two key learning parameters, the regularization parameter and the RBF width, are optimized using a genetic algorithm (GA) at the upper level. Nonlinear time series modeling and prediction is used as an example to demonstrate the effectiveness of this hierarchical learning approach.

  11. On the efficiency of the orthogonal least squares training method for radial basis function networks.

    Science.gov (United States)

    Sherstinsky, A; Picard, R W

    1996-01-01

    The efficiency of the orthogonal least squares (OLS) method for training approximation networks is examined using the criterion of energy compaction. We show that the selection of basis vectors produced by the procedure is not the most compact when the approximation is performed using a nonorthogonal basis. Hence, the algorithm does not produce the smallest possible networks for a given approximation error. Specific examples are given using the Gaussian radial basis functions type of approximation networks.

  12. Learning rates of least-square regularized regression with polynomial kernels

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of the polynomial space and the polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish the direct approximation theorem by Bernstein-Durrmeyer operators in L^2_{ρX} with Borel probability measure.

  13. LEAST-SQUARES SOLUTION OF AXB = D OVER SYMMETRIC POSITIVE SEMIDEFINITE MATRICES X

    Institute of Scientific and Technical Information of China (English)

    Anping Liao; Zhongzhi Bai

    2003-01-01

    Least-squares solution of AXB = D with respect to symmetric positive semidefinite matrix X is considered. By making use of the generalized singular value decomposition,we derive general analytic formulas, and present necessary and sufficient conditions for guaranteeing the existence of the solution. By applying MATLAB 5.2, we give some numerical examples to show the feasibility and accuracy of this construction technique in the finite precision arithmetic.

  14. Least squares adjustment of large-scale geodetic networks by orthogonal decomposition

    Energy Technology Data Exchange (ETDEWEB)

    George, J.A.; Golub, G.H.; Heath, M.T.; Plemmons, R.J.

    1981-11-01

    This article reviews some recent developments in the solution of large sparse least squares problems typical of those arising in geodetic adjustment problems. The new methods are distinguished by their use of orthogonal transformations which tend to improve numerical accuracy over the conventional approach based on the use of the normal equations. The adaptation of these new schemes to allow for the use of auxiliary storage and their extension to rank deficient problems are also described.

  15. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2008-05-15

    An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.
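
    A least-squares cubic-spline fit with a fixed set of interior knots can be reproduced with SciPy as below; the automatic knot selection and time-varying bandwidth of the algorithm described above are not part of this sketch.

      import numpy as np
      from scipy.interpolate import LSQUnivariateSpline

      rng = np.random.default_rng(0)
      t = np.linspace(0.0, 10.0, 400)                       # oscilloscope time base (arbitrary units)
      signal = np.sin(t) + 0.3 * np.sin(4.0 * t)
      noisy = signal + 0.1 * rng.standard_normal(t.size)

      knots = np.linspace(1.0, 9.0, 12)                     # interior knots, fixed here
      spline = LSQUnivariateSpline(t, noisy, knots, k=3)    # cubic least-squares spline fit
      print(np.sqrt(np.mean((spline(t) - signal) ** 2)))    # RMS error against the clean signal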

  16. Modelling of the Relaxation Least Squares-Based Neural Networks and Its Application

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A relaxation least squares-based learning algorithm for neural networks is proposed. Not only does it have a fast convergence rate, but it involves less computation quantity. Therefore, it is suitable to deal with the case when a network has a large scale but the number of training data is very limited. It has been used in converting furnace process modelling, and impressive result has been obtained.

  17. A weighted least-squares method for parameter estimation in structured models

    OpenAIRE

    Galrinho, Miguel; Rojas, Cristian R.; Hjalmarsson, Håkan

    2014-01-01

    Parameter estimation in structured models is generally considered a difficult problem. For example, the prediction error method (PEM) typically gives a non-convex optimization problem, while it is difficult to incorporate structural information in subspace identification. In this contribution, we revisit the idea of iteratively using the weighted least-squares method to cope with the problem of non-convex optimization. The method is, essentially, a three-step method. First, a high order least...

  18. Least-Squares Data Adjustment with Rank-Deficient Data Covariance Matrices

    Energy Technology Data Exchange (ETDEWEB)

    Williams, J.G. [The University of Arizona, Tucson, AZ 85721-0119 (United States)

    2011-07-01

    A derivation of the linear least-squares adjustment formulae is required that avoids the assumption that the covariance matrix of prior parameters can be inverted. Possible proofs are of several kinds, including: (i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. In this paper, the least-squares adjustment equations are derived in both these ways, while explicitly assuming that the covariance matrix of prior parameters is singular. It will be proved that the solutions are unique and that, contrary to statements that have appeared in the literature, the least-squares adjustment problem is not ill-posed. No modification is required to the adjustment formulae that have been used in the past in the case of a singular covariance matrix for the priors. In conclusion: The linear least-squares adjustment formula that has been used in the past is valid in the case of a singular covariance matrix for the covariance matrix of prior parameters. Furthermore, it provides a unique solution. Statements in the literature, to the effect that the problem is ill-posed are wrong. No regularization of the problem is required. This has been proved in the present paper by two methods, while explicitly assuming that the covariance matrix of prior parameters is singular: i) extension of standard results for the linear regression formulae, and (ii) minimization by differentiation of a quadratic form of the deviations in parameters and responses. No modification is needed to the adjustment formulae that have been used in the past. (author)

  19. INVESTIGATION OF TRACKING OF VOLTAGE SIGNAL CONTAINING HARMONICS AND SPIKE BY USING RECURSIVE LEAST SQUARES METHOD

    Directory of Open Access Journals (Sweden)

    H. Hüseyin SAYAN

    2009-01-01

    Full Text Available In this study, the recursive least squares method (RLSM), one of the classical adaptive methods, was used. First, a forgetting factor was incorporated into the RLSM. The phase information of a voltage signal belonging to an electric power network containing harmonics and spikes was obtained with the developed approach. Then the responses of the algorithm were investigated for voltage collapse, phase shift and spikes. The simulation was implemented in MATLAB® code. The simulation results were examined and the efficiency of the method is presented.
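
    A generic recursive least squares update with a forgetting factor is sketched below in its standard textbook form; the choice of regressors [cos(wt), sin(wt)] for extracting the phase of a power-frequency component is an assumption made for illustration, not the paper's exact formulation.

      import numpy as np

      class ForgettingRLS:
          """Recursive least squares with exponential forgetting (generic sketch)."""
          def __init__(self, n_params, lam=0.98, delta=1000.0):
              self.w = np.zeros(n_params)
              self.P = delta * np.eye(n_params)
              self.lam = lam

          def update(self, phi, d):
              Pphi = self.P @ phi
              k = Pphi / (self.lam + phi @ Pphi)             # gain vector
              e = d - self.w @ phi                           # a priori error
              self.w = self.w + k * e
              self.P = (self.P - np.outer(k, Pphi)) / self.lam
              return e

      # track amplitude and phase of a 50 Hz component from noisy samples
      fs, f0 = 5000.0, 50.0
      t = np.arange(0.0, 0.2, 1.0 / fs)
      v = 230.0 * np.sin(2 * np.pi * f0 * t + 0.3) \
          + 5.0 * np.random.default_rng(0).standard_normal(t.size)

      rls = ForgettingRLS(2, lam=0.99)
      for ti, vi in zip(t, v):
          rls.update(np.array([np.cos(2 * np.pi * f0 * ti), np.sin(2 * np.pi * f0 * ti)]), vi)

      a, b = rls.w
      print(np.hypot(a, b), np.arctan2(a, b))   # estimated amplitude and phase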

  20. Integrated application of uniform design and least-squares support vector machines to transfection optimization

    Directory of Open Access Journals (Sweden)

    Pan Jin-Shui

    2009-05-01

    Full Text Available Abstract Background Transfection in mammalian cells based on liposomes presents a great challenge for biological professionals. To protect themselves from exogenous insults, mammalian cells tend to manifest poor transfection efficiency. In order to gain high efficiency, we have to optimize several conditions of transfection, such as the amount of liposome, the amount of plasmid, and the cell density at transfection. However, this process may be time-consuming and energy-consuming. Fortunately, several mathematical methods developed in the past decades may facilitate the resolution of this issue. This study investigates the possibility of optimizing transfection efficiency by using a method referred to as least-squares support vector machine, which requires only a few experiments and maintains fairly high accuracy. Results A protocol consisting of 15 experiments was performed according to the principle of uniform design. In this protocol, the amount of liposome, the amount of plasmid, and the number of seeded cells 24 h before transfection were set as independent variables and transfection efficiency was set as the dependent variable. A model was deduced from the independent variables and their respective dependent variable. Another protocol made up of 10 experiments was performed to test the accuracy of the model. The model manifested a high accuracy. Compared to the traditional method, the integrated application of uniform design and least-squares support vector machine greatly reduced the number of required experiments. Moreover, higher transfection efficiency was achieved. Conclusion The integrated application of uniform design and least-squares support vector machine is a simple technique for obtaining high transfection efficiency. Using this novel method, the number of required experiments would be greatly cut down while higher efficiency would be gained. Least-squares support vector machine may be applicable to many other problems that need to be optimized.

  1. A stable least square algorithm based on predictors and its application to fast Newton transversal filters

    OpenAIRE

    Wang, Youhua; Nakayama, Kenji

    1995-01-01

    In this letter, we introduce a predictor based least square (PLS) algorithm. By involving both order- and time-update recursions, the PLS algorithm is found to have a more stable performance compared with the stable version (Version II) of the RLS algorithm shown in Ref. [1]. Nevertheless, the computational requirement is about 50% of that of the RLS algorithm. As an application, the PLS algorithm can be applied to the fast newton transversal filters (FNTF) [2]. The FNTF algorithms suffer fro...

  2. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    Science.gov (United States)

    Sen, Symal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that renders it unacceptable/unfit to be used in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm presented in Matlab reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.

  3. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.

  4. A least-squares computational "tool kit". Nuclear data and measurements series

    Energy Technology Data Exchange (ETDEWEB)

    Smith, D.L.

    1993-04-01

    The information assembled in this report is intended to offer a useful computational "tool kit" to individuals who are interested in a variety of practical applications for the least-squares method of parameter estimation. The fundamental principles of Bayesian analysis are outlined first and these are applied to development of both the simple and the generalized least-squares conditions. Formal solutions that satisfy these conditions are given subsequently. Their application to both linear and non-linear problems is described in detail. Numerical procedures required to implement these formal solutions are discussed and two utility computer algorithms are offered for this purpose (codes LSIOD and GLSIOD written in FORTRAN). Some simple, easily understood examples are included to illustrate the use of these algorithms. Several related topics are then addressed, including the generation of covariance matrices, the role of iteration in applications of least-squares procedures, the effects of numerical precision and an approach that can be pursued in developing data analysis packages that are directed toward special applications.

  5. Generalized total least squares prediction algorithm for universal 3D similarity transformation

    Science.gov (United States)

    Wang, Bin; Li, Jiancheng; Liu, Chao; Yu, Jie

    2017-02-01

    Three-dimensional (3D) similarity datum transformation is extensively applied to transform coordinates from GNSS-based datum to a local coordinate system. Recently, some total least squares (TLS) algorithms have been successfully developed to solve the universal 3D similarity transformation problem (probably with big rotation angles and an arbitrary scale ratio). However, their procedures of the parameter estimation and new point (non-common point) transformation were implemented separately, and the statistical correlation which often exists between the common and new points in the original coordinate system was not considered. In this contribution, a generalized total least squares prediction (GTLSP) algorithm, which implements the parameter estimation and new point transformation synthetically, is proposed. All of the random errors in the original and target coordinates, and their variance-covariance information will be considered. The 3D transformation model in this case is abstracted as a kind of generalized errors-in-variables (EIV) model and the equation for new point transformation is incorporated into the functional model as well. Then the iterative solution is derived based on the Gauss-Newton approach of nonlinear least squares. The performance of GTLSP algorithm is verified in terms of a simulated experiment, and the results show that GTLSP algorithm can improve the statistical accuracy of the transformed coordinates compared with the existing TLS algorithms for 3D similarity transformation.

  6. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

    Science.gov (United States)

    Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

    2016-06-01

    At the early stages of deep-water oil exploration and development, fewer and further apart wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms are used to predict the reservoir parameters but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. The iterative matrix inversion is also introduced to improve the training ability and training time of the model. We use the UCI data to test the generalization of a semi-supervised and a supervised LSSVM models. The test results suggest that the generalization performance of the LSSVM model greatly improves and with decreasing training samples the generalization performance is better. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semisupervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.

  7. A least square extrapolation method for improving solution accuracy of PDE computations

    CERN Document Server

    Garbey, M

    2003-01-01

    Richardson extrapolation (RE) is based on a very simple and elegant mathematical idea that has been successful in several areas of numerical analysis such as quadrature or time integration of ODEs. In theory, RE can be used also on PDE approximations when the convergence order of a discrete solution is clearly known. But in practice, the order of a numerical method often depends on space location and is not accurately satisfied on different levels of grids used in the extrapolation formula. We propose in this paper a more robust and numerically efficient method based on the idea of finding automatically the order of a method as the solution of a least square minimization problem on the residual. We introduce a two-level and three-level least square extrapolation method that works on nonmatching embedded grid solutions via spline interpolation. Our least square extrapolation method is a post-processing of data produced by existing PDE codes, that is easy to implement and can be a better tool than RE for code v...

  8. Limitation of the Least Square Method in the Evaluation of Dimension of Fractal Brownian Motions

    CERN Document Server

    Qiao, Bingqiang; Zeng, Houdun; Li, Xiang; Dai, Benzhong

    2015-01-01

    With the standard deviation for the logarithm of the re-scaled range ⟨|F(t+τ)-F(t)|⟩ of simulated fractal Brownian motions F(t) given in a previous paper [q14], the method of least squares is adopted to determine the slope, S, and intercept, I, of the log(⟨|F(t+τ)-F(t)|⟩) vs log(τ) plot to investigate the limitation of this procedure. It is found that the reduced χ² of the fitting decreases with the increase of the Hurst index, H (the expectation value of S), which may be attributed to the correlation among the re-scaled ranges. Similarly, it is found that the errors of the fitting parameters S and I are usually smaller than their corresponding standard deviations. These results show the limitation of using the simple least square method to determine the dimension of a fractal time series. Nevertheless, they may be used to reinterpret the fitting results of the least square method to determine the dimension of fractal Brownian motions more...

  9. A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.

    Directory of Open Access Journals (Sweden)

    Dominique Van de Sompel

    Full Text Available Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles.

  10. A hybrid least squares and principal component analysis algorithm for Raman spectroscopy.

    Science.gov (United States)

    Van de Sompel, Dominique; Garai, Ellis; Zavaleta, Cristina; Gambhir, Sanjiv Sam

    2012-01-01

    Raman spectroscopy is a powerful technique for detecting and quantifying analytes in chemical mixtures. A critical part of Raman spectroscopy is the use of a computer algorithm to analyze the measured Raman spectra. The most commonly used algorithm is the classical least squares method, which is popular due to its speed and ease of implementation. However, it is sensitive to inaccuracies or variations in the reference spectra of the analytes (compounds of interest) and the background. Many algorithms, primarily multivariate calibration methods, have been proposed that increase robustness to such variations. In this study, we propose a novel method that improves robustness even further by explicitly modeling variations in both the background and analyte signals. More specifically, it extends the classical least squares model by allowing the declared reference spectra to vary in accordance with the principal components obtained from training sets of spectra measured in prior characterization experiments. The amount of variation allowed is constrained by the eigenvalues of this principal component analysis. We compare the novel algorithm to the least squares method with a low-order polynomial residual model, as well as a state-of-the-art hybrid linear analysis method. The latter is a multivariate calibration method designed specifically to improve robustness to background variability in cases where training spectra of the background, as well as the mean spectrum of the analyte, are available. We demonstrate the novel algorithm's superior performance by comparing quantitative error metrics generated by each method. The experiments consider both simulated data and experimental data acquired from in vitro solutions of Raman-enhanced gold-silica nanoparticles.
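
    The classical least squares baseline that both compared methods build on can be sketched as follows: stack the declared reference spectra, optionally together with low-order polynomial background columns, into a design matrix and solve for the mixing weights. The spectra below are synthetic stand-ins, not the Raman data of the study.

      import numpy as np

      rng = np.random.default_rng(0)
      wavenumbers = np.linspace(0.0, 1.0, 300)

      def peak(center, width):
          return np.exp(-0.5 * ((wavenumbers - center) / width) ** 2)

      S = np.column_stack([peak(0.3, 0.02), peak(0.55, 0.03), peak(0.8, 0.025)])   # reference spectra
      c_true = np.array([1.0, 0.4, 0.7])
      background = 0.2 + 0.1 * wavenumbers                                         # slowly varying baseline
      measured = S @ c_true + background + 0.01 * rng.standard_normal(wavenumbers.size)

      # classical least squares with a low-order polynomial residual model for the background
      P = np.column_stack([wavenumbers ** k for k in range(3)])                    # 1, x, x^2 terms
      A = np.column_stack([S, P])
      coeffs, *_ = np.linalg.lstsq(A, measured, rcond=None)
      print(coeffs[:3])                                                            # estimated analyte weights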

  11. Clustering technique-based least square support vector machine for EEG signal classification.

    Science.gov (United States)

    Siuly; Li, Yan; Wen, Peng Paul

    2011-12-01

    This paper presents a new approach called clustering technique-based least square support vector machine (CT-LS-SVM) for the classification of EEG signals. Decision making is performed in two stages. In the first stage, a clustering technique (CT) is used to extract representative features of the EEG data. In the second stage, a least square support vector machine (LS-SVM) is applied to the extracted features to classify two-class EEG signals. To demonstrate the effectiveness of the proposed method, several experiments have been conducted on three publicly available benchmark databases, one for epileptic EEG data, one for mental imagery tasks EEG data and another one for motor imagery EEG data. Our proposed approach achieves an average sensitivity, specificity and classification accuracy of 94.92%, 93.44% and 94.18%, respectively, for the epileptic EEG data; 83.98%, 84.37% and 84.17%, respectively, for the motor imagery EEG data; and 64.61%, 58.77% and 61.69%, respectively, for the mental imagery tasks EEG data. The performance of the CT-LS-SVM algorithm is compared in terms of classification accuracy and execution (running) time with our previous study where simple random sampling with a least square support vector machine (SRS-LS-SVM) was employed for EEG signal classification. We also compare the proposed method with other existing methods in the literature for the three databases. The experimental results show that the proposed algorithm can produce a better classification rate than the previously reported methods and takes much less execution time compared to the SRS-LS-SVM technique. The research findings in this paper indicate that the proposed approach is very efficient for classification of two-class EEG signals.
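
    Training an LS-SVM reduces to solving a single linear system, which is what makes the second stage above fast. The sketch below shows a generic LS-SVM classifier in its function-estimation form with an RBF kernel; the clustering-based feature extraction of CT-LS-SVM is not reproduced, and all parameter values are illustrative.

      import numpy as np

      def rbf_kernel(A, B, sigma=1.0):
          sq = np.sum((A[:, None, :] - B[None, :, :]) ** 2, axis=-1)
          return np.exp(-sq / (2 * sigma ** 2))

      def lssvm_train(X, y, gamma=10.0, sigma=1.0):
          """Train an LS-SVM classifier (function-estimation form, labels in {-1, +1})."""
          n = X.shape[0]
          K = rbf_kernel(X, X, sigma)
          # The dual problem reduces to one linear system:
          # [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]
          A = np.zeros((n + 1, n + 1))
          A[0, 1:] = 1.0
          A[1:, 0] = 1.0
          A[1:, 1:] = K + np.eye(n) / gamma
          rhs = np.concatenate([[0.0], y.astype(float)])
          sol = np.linalg.solve(A, rhs)
          return sol[0], sol[1:]                   # bias b, multipliers alpha

      def lssvm_predict(X_train, alpha, b, X_test, sigma=1.0):
          return np.sign(rbf_kernel(X_test, X_train, sigma) @ alpha + b)

      # Tiny illustration on synthetic two-class data.
      rng = np.random.default_rng(1)
      X = np.vstack([rng.normal(-1, 0.5, (20, 2)), rng.normal(1, 0.5, (20, 2))])
      y = np.array([-1] * 20 + [1] * 20)
      b, alpha = lssvm_train(X, y)
      print("training accuracy:", np.mean(lssvm_predict(X, alpha, b, X) == y))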

  12. Multisource least-squares migration of marine streamer and land data with frequency-division encoding

    KAUST Repository

    Huang, Yunsong

    2012-05-22

    Multisource migration of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. The accompanying crosstalk noise, in addition to the migration footprint, can be reduced by least-squares inversion. But the application of this approach to marine streamer data is hampered by the mismatch between the limited number of live traces/shot recorded in the field and the pervasive number of traces generated by the finite-difference modelling method. This leads to a strong mismatch in the misfit function and results in strong artefacts (crosstalk) in the multisource least-squares migration image. To eliminate this noise, we present a frequency-division multiplexing (FDM) strategy with iterative least-squares migration (ILSM) of supergathers. The key idea is, at each ILSM iteration, to assign a unique frequency band to each shot gather. In this case there is no overlap in the crosstalk spectrum of each migrated shot gather m(x, ω_i), so the spectral crosstalk product m(x, ω_i)m(x, ω_j) behaves like δ_ij, i.e., it is zero unless i = j. Our results in applying this method to 2D marine data for a SEG/EAGE salt model show better resolved images than standard migration computed at about 1/10th of the cost. Similar results are achieved after applying this method to synthetic data for a 3D SEG/EAGE salt model, except the acquisition geometry is similar to that of a marine OBS survey. Here, the speedup of this method over conventional migration is more than 10. We conclude that multisource migration for a marine geometry can be successfully achieved by a frequency-division encoding strategy, as long as crosstalk-prone sources are segregated in their spectral content. This is both the strength and the potential limitation of this method. © 2012 European Association of Geoscientists & Engineers.

  13. Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).

    Science.gov (United States)

    Bevilacqua, Marta; Marini, Federico

    2014-08-01

    The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a not uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performances of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm have been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. By the preliminary results, showed in this paper, the performances of the proposed LW-PLS-DA approach have proved to be comparable and in some cases better than those obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks).
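
    A simplified sketch of the local modelling idea: for each unknown sample, the most similar training objects are selected and a PLS-DA model (PLS regression on dummy-coded classes) is fitted on them only. The distance-based weighting scheme of LW-PLS-DA is omitted here, and scikit-learn's PLSRegression is used as the PLS engine purely for illustration.

      import numpy as np
      from sklearn.cross_decomposition import PLSRegression

      def lw_plsda_predict(X_train, y_train, x_new, k=40, n_components=2):
          """Classify one sample with a locally fitted PLS-DA model (simplified)."""
          d = np.linalg.norm(X_train - x_new, axis=1)
          idx = np.argsort(d)[:k]                  # k most similar training objects
          classes = np.unique(y_train)
          Y_dummy = (y_train[idx, None] == classes[None, :]).astype(float)
          n_comp = min(n_components, X_train.shape[1], k - 1)
          pls = PLSRegression(n_components=n_comp).fit(X_train[idx], Y_dummy)
          scores = pls.predict(x_new[None, :])[0]  # predicted class memberships
          return classes[np.argmax(scores)]

      # Synthetic two-class data with a non-linear (circular) class boundary.
      rng = np.random.default_rng(2)
      X = rng.uniform(-2, 2, (300, 2))
      y = (X[:, 0] ** 2 + X[:, 1] ** 2 > 2).astype(int)
      x_test = np.array([1.0, 0.9])
      print("predicted class:", lw_plsda_predict(X, y, x_test))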

  14. Analysis Linking the Tensor Structure to the Least-Squares Method.

    Science.gov (United States)

    1984-01-01

    Report AD-A142 159 (AFGL-TR-84-...), G. Blaha, Nova University Oceanographic Center, Dania FL, January 1984. Only OCR fragments of the report's front matter and reference list (citing work by Tienstra, Baarda, Kooimans and others) are available in place of an abstract.

  15. Solving Time of Least Square Systems in Sigma-Pi Unit Networks

    CERN Document Server

    Courrieu, Pierre

    2008-01-01

    The solving of least square systems is a useful operation in neurocomputational modeling of learning, pattern matching, and pattern recognition. In these last two cases, the solution must be obtained on-line, thus the time required to solve a system in a plausible neural architecture is critical. This paper presents a recurrent network of Sigma-Pi neurons, whose solving time increases at most like the logarithm of the system size, and of its condition number, which provides plausible computation times for biological systems.

  16. Regularization Paths for Least Squares Problems with Generalized $\\ell_1$ Penalties

    CERN Document Server

    Tibshirani, Ryan J

    2010-01-01

    We present a path algorithm for least squares problems with generalized $\\ell_1$ penalties. This includes as a special case the lasso and fused lasso problems. The algorithm is based on solving the (equivalent) Lagrange dual problem, an approach which offers both a computational advantage and an interesting geometric interpretation of the solution path. Using insights gathered from the dual formulation, we study degrees of freedom for the generalized problem, and develop an unbiased estimate of the degrees of freedom of the fused lasso fit. Our approach bears similarities to least angle regression (LARS), and a simple modification to our method gives the LARS procedure exactly.

  17. Circular and linear regression fitting circles and lines by least squares

    CERN Document Server

    Chernov, Nikolai

    2010-01-01

    Find the right algorithm for your image processing application. Exploring the recent achievements that have occurred since the mid-1990s, Circular and Linear Regression: Fitting Circles and Lines by Least Squares explains how to use modern algorithms to fit geometric contours (circles and circular arcs) to observed data in image processing and computer vision. The author covers all facets (geometric, statistical, and computational) of the methods. He looks at how the numerical algorithms relate to one another through underlying ideas, compares the strengths and weaknesses of each algorithm, and ...
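
    One of the classical algebraic circle fits treated in this literature, the Kasa fit, reduces circle fitting to a linear least-squares problem. A minimal, illustrative sketch:

      import numpy as np

      def fit_circle_kasa(x, y):
          """Algebraic (Kasa) least-squares circle fit.

          Rewrites (x - a)^2 + (y - b)^2 = r^2 as x^2 + y^2 = 2*a*x + 2*b*y + c
          with c = r^2 - a^2 - b^2, and solves for (a, b, c) by linear least squares.
          """
          A = np.column_stack([2 * x, 2 * y, np.ones_like(x)])
          rhs = x ** 2 + y ** 2
          (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
          return a, b, np.sqrt(c + a ** 2 + b ** 2)

      # Noisy points on a circle of radius 3 centred at (1, -2).
      rng = np.random.default_rng(3)
      t = rng.uniform(0, 2 * np.pi, 100)
      x = 1 + 3 * np.cos(t) + 0.05 * rng.standard_normal(100)
      y = -2 + 3 * np.sin(t) + 0.05 * rng.standard_normal(100)
      print("centre and radius:", fit_circle_kasa(x, y))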

  18. Useful and little-known applications of the Least Square Method and some consequences of covariances

    Science.gov (United States)

    Helene, Otaviano; Mariano, Leandro; Guimarães-Filho, Zwinglio

    2016-10-01

    Covariances are as important as variances when dealing with experimental data and they must be considered in fitting procedures and adjustments in order to preserve the statistical properties of the adjusted quantities. In this paper, we apply the Least Square Method in matrix form to several simple problems in order to evaluate the consequences of covariances in the fitting procedure. Among the examples, we demonstrate how a measurement of a physical quantity can change the adopted value of all other covariant quantities and how a new single point (x, y) improves the parameters of a previously adjusted straight line.
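
    A generic sketch of the Least Square Method in matrix form with a covariance matrix of the data, illustrating how a single new point changes both parameters of a previously adjusted straight line; the numbers are made up for illustration.

      import numpy as np

      def lsm_matrix_form(A, y, C):
          """Least squares in matrix form with covariance matrix C of the data y.

          Returns the adjusted parameters and their covariance matrix:
              x = (A^T C^-1 A)^-1 A^T C^-1 y,   Cov(x) = (A^T C^-1 A)^-1
          """
          Ci = np.linalg.inv(C)
          cov_x = np.linalg.inv(A.T @ Ci @ A)
          return cov_x @ A.T @ Ci @ y, cov_x

      # Straight-line fit y = p0 + p1*x with independent measurement errors.
      x = np.array([0.0, 1.0, 2.0, 3.0])
      y = np.array([0.1, 1.9, 4.2, 5.8])
      A = np.column_stack([np.ones_like(x), x])
      C = np.diag([0.1 ** 2] * 4)
      p, cov_p = lsm_matrix_form(A, y, C)

      # A single new point (x=4, y=8.3, sigma=0.1) changes both parameters,
      # because the fitted intercept and slope are covariant (cov_p is not diagonal).
      x2, y2 = np.append(x, 4.0), np.append(y, 8.3)
      A2 = np.column_stack([np.ones_like(x2), x2])
      C2 = np.diag([0.1 ** 2] * 5)
      p_new, _ = lsm_matrix_form(A2, y2, C2)
      print("before:", p)
      print("after: ", p_new)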

  19. Framework for gradient integration by combining radial basis functions method and least-squares method.

    Science.gov (United States)

    Huang, Lei; Asundi, Anand Krishna

    2013-08-20

    A framework with a combination of the radial basis functions (RBFs) method and the least-squares integration method is proposed to improve the integration process from gradient to shape. The principle of the framework is described, and the performance of the proposed method is investigated by simulation. Improvement in accuracy is verified by comparing the result with the usual RBFs-based subset-by-subset stitching method. The proposed method is accurate, automatic, easily implemented, and robust and even works with incomplete data.

  20. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.;

    2002-01-01

    Two-dimensional gel electrophoresis (2-DE) produces large amounts of data and extraction of relevant information from these data demands a cautious and time consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear... of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary...

  1. A Pascal program for the least-squares evaluation of standard RBS spectra

    Science.gov (United States)

    Hnatowicz, V.; Havránek, V.; Kvítek, J.

    1992-11-01

    A computer program for least-squares fitting of energy spectra obtained in common Rutherford backscattering (RBS) analyses is described. The samples analyzed by the RBS technique are considered to be made up of a finite number of layers, each with uniform composition. The RBS spectra are treated as a combination of a variable number of three different basic figures (strip, bulge and Gaussian), which are represented by ad hoc analytical expressions. The initial parameter estimates are inserted by the operator (with the assistance of graphical support on a TV screen) and the result of the fit is displayed on the screen and stored as a table on hard disk.
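
    The strip and bulge expressions used by the program are not given in the abstract, so the sketch below fits a spectrum as a least-squares combination of a smoothed-step "strip" and a Gaussian peak as illustrative stand-ins, with operator-style initial estimates.

      import numpy as np
      from scipy.optimize import curve_fit
      from scipy.special import erf

      def strip(E, height, edge, width):
          """Illustrative 'strip': a smoothed step that is flat below an edge energy."""
          return height * 0.5 * (1 - erf((E - edge) / width))

      def gaussian(E, amplitude, centre, sigma):
          return amplitude * np.exp(-0.5 * ((E - centre) / sigma) ** 2)

      def model(E, h, e, w, a, c, s):
          return strip(E, h, e, w) + gaussian(E, a, c, s)

      # Synthetic RBS-like spectrum: a substrate strip plus one surface peak.
      E = np.linspace(0, 2000, 400)                  # channel / energy axis
      rng = np.random.default_rng(4)
      counts = rng.poisson(model(E, 500, 1200, 40, 300, 1500, 30) + 5).astype(float)

      p0 = [400, 1100, 50, 200, 1480, 40]            # operator-supplied initial estimates
      popt, pcov = curve_fit(model, E, counts, p0=p0)
      print("fitted parameters:", np.round(popt, 1))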

  2. Optimization of absorption placement using geometrical acoustic models and least squares.

    Science.gov (United States)

    Saksela, Kai; Botts, Jonathan; Savioja, Lauri

    2015-04-01

    Given a geometrical model of a space, the problem of optimally placing absorption in a space to match a desired impulse response is in general nonlinear. This has led some to use costly optimization procedures. This letter reformulates absorption assignment as a constrained linear least-squares problem. Regularized solutions result in direct distribution of absorption in the room and can accommodate multiple frequency bands, multiple sources and receivers, and constraints on geometrical placement of absorption. The method is demonstrated using a beam tracing model, resulting in the optimal absorption placement on the walls and ceiling of a classroom.
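
    A hedged sketch of the reformulation: assuming a geometrical-acoustics model supplies a linear influence matrix relating per-surface absorption coefficients to a target response, the assignment becomes a bounded, regularized linear least-squares problem. The matrix below is random, purely for illustration; the paper itself uses a beam tracing model.

      import numpy as np
      from scipy.optimize import lsq_linear

      # Hypothetical setup: a geometrical-acoustics model yields a linear influence
      # matrix G (n_observations x n_surfaces) relating the absorption coefficient
      # assigned to each surface to a target response quantity d.
      rng = np.random.default_rng(5)
      n_obs, n_surf = 40, 8
      G = rng.uniform(0.0, 1.0, (n_obs, n_surf))
      alpha_true = np.array([0.8, 0.1, 0.1, 0.6, 0.05, 0.05, 0.3, 0.2])
      d = G @ alpha_true + 0.01 * rng.standard_normal(n_obs)

      # Constrained, regularized linear least squares: absorption coefficients must
      # lie in [0, 1]; a small Tikhonov term plays the role of the regularization.
      lam = 0.05
      A = np.vstack([G, lam * np.eye(n_surf)])
      b = np.concatenate([d, np.zeros(n_surf)])
      res = lsq_linear(A, b, bounds=(0.0, 1.0))
      print("recovered absorption per surface:", np.round(res.x, 2))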

  3. A negative-norm least-squares method for time-harmonic Maxwell equations

    KAUST Repository

    Copeland, Dylan M.

    2012-04-01

    This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.

  4. Speed control of induction motor using fuzzy recursive least squares technique

    Directory of Open Access Journals (Sweden)

    Santiago Sánchez

    2008-12-01

    Full Text Available A simple adaptive controller design is presented in this paper. The control system uses adaptive fuzzy logic and sliding modes, and is trained with the recursive least squares technique. The problem of parameter variation is solved by the adaptive controller; the use of an internal PI regulator means that the speed of the induction motor is controlled through the stator currents rather than the input voltage. The rotor-flux-oriented coordinate system model is used to develop and test the control system.
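
    The training step mentioned above relies on recursive least squares. A generic RLS parameter-update sketch (not the fuzzy sliding-mode controller itself), applied to identifying a simple first-order model online:

      import numpy as np

      class RecursiveLeastSquares:
          """Exponentially weighted RLS estimator for a linear model y = phi^T theta."""

          def __init__(self, n_params, forgetting=0.99, delta=100.0):
              self.theta = np.zeros(n_params)
              self.P = delta * np.eye(n_params)    # large initial covariance
              self.lam = forgetting

          def update(self, phi, y):
              phi = np.asarray(phi, dtype=float)
              k = self.P @ phi / (self.lam + phi @ self.P @ phi)      # gain vector
              self.theta = self.theta + k * (y - phi @ self.theta)    # error correction
              self.P = (self.P - np.outer(k, phi) @ self.P) / self.lam
              return self.theta

      # Identify a first-order model y[t] = a*y[t-1] + b*u[t-1] online.
      rng = np.random.default_rng(6)
      a_true, b_true = 0.9, 0.5
      u = rng.standard_normal(500)
      rls, y = RecursiveLeastSquares(2), 0.0
      for t in range(1, 500):
          y_new = a_true * y + b_true * u[t - 1] + 0.01 * rng.standard_normal()
          rls.update([y, u[t - 1]], y_new)
          y = y_new
      print("estimated [a, b]:", np.round(rls.theta, 3))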

  5. Thrust estimator design based on least squares support vector regression machine

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-ping; SUN Jian-guo

    2010-01-01

    In order to realize direct thrust control instead of traditional sensor-based control for aero-engines, it is indispensable to design a thrust estimator with high accuracy, so a scheme for thrust estimator design based on the least square support vector regression machine is proposed to solve this problem. Furthermore, numerical simulations confirm the effectiveness of our presented scheme. During the process of estimator design, a wrapper criterion that can not only reduce the computational complexity but also enhance the generalization performance is proposed to select variables as input variables for the estimator.

  6. Least Squares Spectral Analysis and Its Application to Superconducting Gravimeter Data Analysis

    Institute of Scientific and Technical Information of China (English)

    YIN Hui; Spiros D. Pagiatakis

    2004-01-01

    Detection of a periodic signal hidden in noise is the goal of Superconducting Gravimeter (SG) data analysis. Due to spikes, gaps, datum shifts (offsets) and other disturbances, the traditional FFT method shows inherent limitations. Instead, the least squares spectral analysis (LSSA) has shown itself to be more suitable than Fourier analysis for gappy, unequally spaced and unequally weighted data series in a variety of applications in geodesy and geophysics. This paper reviews the principle of LSSA and gives a possible strategy for the analysis of time series obtained from the Canadian Superconducting Gravimeter Installation (CGSI), with gaps, offsets, unequal sampling decimation of the data and unequally weighted data points.
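
    The core of LSSA is fitting sinusoids at trial frequencies to an unequally spaced, unequally weighted series by least squares. A minimal sketch:

      import numpy as np

      def lssa_spectrum(t, y, freqs, weights=None):
          """Least-squares spectrum of an unequally spaced, weighted series.

          At each trial frequency an offset plus a*cos + b*sin is fitted by weighted
          least squares; the spectral value is the fraction of the (weighted)
          variance explained by that fit.
          """
          w = np.ones_like(y) if weights is None else weights
          total = np.sum(w * (y - np.average(y, weights=w)) ** 2)
          power = []
          for f in freqs:
              A = np.column_stack([np.ones_like(t),
                                   np.cos(2 * np.pi * f * t),
                                   np.sin(2 * np.pi * f * t)])
              coef = np.linalg.solve(A.T @ (w[:, None] * A), A.T @ (w * y))
              resid = y - A @ coef
              power.append(1.0 - np.sum(w * resid ** 2) / total)
          return np.array(power)

      # Gappy, unevenly sampled record containing a hidden 0.2 Hz signal.
      rng = np.random.default_rng(7)
      t = np.sort(rng.uniform(0, 100, 300))
      y = np.sin(2 * np.pi * 0.2 * t) + 0.5 * rng.standard_normal(300)
      freqs = np.linspace(0.01, 0.5, 200)
      spec = lssa_spectrum(t, y, freqs)
      print("peak at %.3f Hz" % freqs[np.argmax(spec)])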

  7. LEAST-SQUARES MIXED FINITE ELEMENT METHODS FOR THE INCOMPRESSIBLE MAGNETOHYDRODYNAMIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Shao-qin Gao

    2005-01-01

    Least-squares mixed finite element methods are proposed and analyzed for the incompressible magnetohydrodynamic equations, where the two vorticities are additionally introduced as independent variables so that the primal equations are transformed into first-order systems. We show that coerciveness and optimal error bounds hold in appropriate norms for all variables under consideration, which can be approximated by any kind of continuous element. Consequently, the Babuska-Brezzi (inf-sup) condition and the indefiniteness, which are essential features of the classical mixed methods, are avoided.

  8. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A;

    1997-01-01

    technique is preferable to sparse matrix technique when the matrices are not large, because the high computational speed compensates fully the disadvantages of using more arithmetic operations and more storage. For very large matrices the computations must be organized as a sequence of tasks in each...... the matrix so that dense blocks can be constructed and treated with some standard software, say LAPACK or NAG. These ideas are implemented for linear least-squares problems. The rectangular matrices (that appear in such problems) are decomposed by an orthogonal method. Results obtained on a CRAY C92A...

  9. Calibration of Vector Magnetogram with the Nonlinear Least-squares Fitting Technique

    Institute of Scientific and Technical Information of China (English)

    Jiang-Tao Su; Hong-Qi Zhang

    2004-01-01

    To acquire Stokes profiles from observations of a simple sunspot with the Video Vector Magnetograph at Huairou Solar Observing Station (HSOS), we scanned the Fe I λ5324.19 Å line over the wavelength interval from 150 mÅ redward of the line center to 150 mÅ blueward, in steps of 10 mÅ. With the technique of analytic inversion of Stokes profiles via nonlinear least-squares, we present the calibration coefficients for the HSOS vector magnetogram. We obtained the theoretical calibration error with linear expressions derived from the Unno-Becker equation under the weak-field approximation.

  10. A least squares procedure for calculating the calibration constants of a portable gamma-ray spectrometer.

    Science.gov (United States)

    Ribeiro, F B; Carlos, D U; Hiodo, F Y; Strobino, E F

    2005-01-01

    In this study, a least squares procedure for calculating the calibration constants of a portable gamma-ray spectrometer using the general inverse matrix method is presented. The procedure weights the fitting of the model equations to the calibration data, taking into account the variances in the counting rates and in the radioactive standard concentrations. The application of the described procedure is illustrated by calibrating the same gamma-ray spectrometer twice, with two independent data sets collected approximately 18 months apart in the same calibration facility.

  11. Globally Conservative, Hybrid Self-Adjoint Angular Flux and Least-Squares Method Compatible with Void

    OpenAIRE

    Laboure, Vincent M.; McClarren, Ryan G.; Wang, Yaqi

    2016-01-01

    In this paper, we derive a method for the second-order form of the transport equation that is both globally conservative and compatible with voids, using Continuous Finite Element Methods (CFEM). The main idea is to use the Least-Squares (LS) form of the transport equation in the void regions and the Self-Adjoint Angular Flux (SAAF) form elsewhere. While the SAAF formulation is globally conservative, the LS formulation needs a correction in void regions. The price to pay for this fix is the loss of sy...

  12. Recursive Least Squares Estimator with Multiple Exponential Windows in Vector Autoregression

    Institute of Scientific and Technical Information of China (English)

    Hong-zhi An; Zhi-guo Li

    2002-01-01

    In the parameter tracking of time-varying systems, the ordinary method is weighted least squares with the rectangular window or the exponential window. In this paper we propose a new kind of sliding window called the multiple exponential window, and then use it to fit time-varying Gaussian vector autoregressive models. The asymptotic bias and covariance of the estimator of the parameter for time-invariant models are also derived. Simulation results show that the multiple exponential windows have better parameter tracking effect than rectangular windows and exponential ones.

  13. An Improved Algorithm of Grounding Grids Corrosion Diagnosis Based on Total Least Square Method

    Institute of Scientific and Technical Information of China (English)

    ZHANG Ying-jiao; NIU Tao; WANG Sen

    2011-01-01

    A new model that accounts for corrosion properties in grounding grid diagnosis is proposed, which provides reference solutions for ambiguous branches. The constrained total least squares method based on singular value decomposition is adopted to improve the effectiveness of the grounding grid diagnosis algorithm. The improvement weakens the influence of model error, which results from differences between the design documents and the actual grid. The influence of the interior resistance of conductors on touch and step voltages is taken into account. Simulation results show the validity of this approach.
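
    The constrained formulation of the paper is specific to grounding-grid diagnosis; the sketch below shows plain total least squares via the singular value decomposition, which is the building block being adapted.

      import numpy as np

      def total_least_squares(A, b):
          """Classical TLS solution of A x ~ b via the SVD of the augmented matrix.

          Both A and b are treated as noisy; the solution comes from the right
          singular vector associated with the smallest singular value of [A | b].
          """
          n = A.shape[1]
          _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
          v = Vt[-1]
          return -v[:n] / v[n]

      # Example where both the regressors and the observations carry noise.
      rng = np.random.default_rng(8)
      x_true = np.array([2.0, -1.0])
      A_clean = rng.standard_normal((200, 2))
      A = A_clean + 0.05 * rng.standard_normal(A_clean.shape)
      b = A_clean @ x_true + 0.05 * rng.standard_normal(200)
      print("TLS estimate:", np.round(total_least_squares(A, b), 3))
      print("OLS estimate:", np.round(np.linalg.lstsq(A, b, rcond=None)[0], 3))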

  14. Normalized least-squares estimation in time-varying ARCH models

    OpenAIRE

    Fryzlewicz, Piotr; Sapatinas, Theofanis; Subba Rao, Suhasini

    2008-01-01

    We investigate the time-varying ARCH (tvARCH) process. It is shown that it can be used to describe the slow decay of the sample autocorrelations of the squared returns often observed in financial time series, which warrants the further study of parameter estimation methods for the model. Since the parameters are changing over time, a successful estimator needs to perform well for small samples. We propose a kernel normalized-least-squares (kernel-NLS) estimator which has a closed form...

  15. A Weighted Least-Squares Approach to Parameter Estimation Problems Based on Binary Measurements

    OpenAIRE

    Colinet, Eric; Juillard, Jérôme

    2010-01-01

    We present a new approach to parameter estimation problems based on binary measurements, motivated by the need to add integrated low-cost self-test features to microfabricated devices. This approach is based on the use of original weighted least-squares criteria: as opposed to other existing methods, it requires no dithering signal and it does not rely on an approximation of the quantizer. In this paper, we focus on a simple choice for the weights and establish some asymptotical properties of...

  16. Partial least squares prediction of the first hyperpolarizabilities of donor-acceptor polyenic derivatives

    Science.gov (United States)

    Machado, A. E. de A.; da Gama, A. A. de S.; de Barros Neto, B.

    2011-09-01

    A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the HOMO-LUMO energy gap, the ground-state dipole moment, the HOMO energy AM1 values and the number of π-electrons. The regression equation predicts quite well the static β values for the molecules investigated and can be used to model new organic-based materials with enhanced nonlinear responses.

  17. Selective Weighted Least Squares Method for Fourier Transform Infrared Quantitative Analysis.

    Science.gov (United States)

    Wang, Xin; Li, Yan; Wei, Haoyun; Chen, Xia

    2016-10-26

    Classical least squares (CLS) regression is a popular multivariate statistical method used frequently for quantitative analysis using Fourier transform infrared (FT-IR) spectrometry. Classical least squares provides the best unbiased estimator for uncorrelated residual errors with zero mean and equal variance. However, the noise in FT-IR spectra, which accounts for a large portion of the residual errors, is heteroscedastic. Thus, if this noise with zero mean dominates in the residual errors, the weighted least squares (WLS) regression method described in this paper is a better estimator than CLS. However, if bias errors, such as the residual baseline error, are significant, WLS may perform worse than CLS. In this paper, we compare the effect of noise and bias error in using CLS and WLS in quantitative analysis. Results indicated that for wavenumbers with low absorbance, the bias error significantly affected the error, such that the performance of CLS is better than that of WLS. However, for wavenumbers with high absorbance, the noise significantly affected the error, and WLS proves to be better than CLS. Thus, we propose a selective weighted least squares (SWLS) regression that processes data with different wavenumbers using either CLS or WLS based on a selection criterion, i.e., lower or higher than an absorbance threshold. The effects of various factors on the optimal threshold value (OTV) for SWLS have been studied through numerical simulations. These studies reported that: (1) the concentration and the analyte type had minimal effect on OTV; and (2) the major factor that influences OTV is the ratio between the bias error and the standard deviation of the noise. The last part of this paper is dedicated to quantitative analysis of methane gas spectra, and methane/toluene mixtures gas spectra as measured using FT-IR spectrometry and CLS, WLS, and SWLS. The standard error of prediction (SEP), bias of prediction (bias), and the residual sum of squares of the errors
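
    A sketch of the selection idea only (not the paper's exact recipe or threshold): wavenumbers below an absorbance threshold are treated with unit weights (CLS behaviour) and those above it with inverse-variance weights (WLS behaviour). The two-component spectra and the heteroscedastic noise model below are invented for illustration.

      import numpy as np

      def swls_concentrations(S, a, noise_var, threshold):
          """Selective weighted least squares: wavenumbers with absorbance below
          `threshold` get unit weights (CLS behaviour), the rest get inverse-variance
          weights (WLS behaviour).  S holds the pure-component spectra as columns."""
          w = np.where(a < threshold, 1.0, 1.0 / noise_var)
          return np.linalg.solve(S.T @ (w[:, None] * S), S.T @ (w * a))

      # Hypothetical two-component mixture spectrum.
      rng = np.random.default_rng(9)
      nu = np.linspace(0, 1, 300)
      S = np.column_stack([np.exp(-0.5 * ((nu - 0.3) / 0.05) ** 2),
                           np.exp(-0.5 * ((nu - 0.7) / 0.05) ** 2)])
      c_true = np.array([0.8, 0.4])
      clean = S @ c_true
      noise_var = (0.005 + 0.05 * clean) ** 2        # assumed heteroscedastic noise model
      a = clean + np.sqrt(noise_var) * rng.standard_normal(300) + 0.002  # small baseline bias
      print("SWLS estimate:", np.round(swls_concentrations(S, a, noise_var, 0.1), 3))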

  18. On-line Weighted Least Squares Kernel Method for Nonlinear Dynamic Modeling

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Support vector machines (SVM) have been widely used in pattern recognition and have also drawn considerable interest in control areas. Based on a rolling optimization method and on-line learning strategies, a novel approach based on weighted least squares support vector machines (WLS-SVM) is proposed for nonlinear dynamic modeling. The good robustness of the novel approach enhances the generalization ability of kernel method-based modeling, and some experimental results are presented to illustrate the feasibility of the proposed method.

  19. Mixed Least Square Method for Priority of Complementary Judgement Matrix and Its Algorithm

    Institute of Scientific and Technical Information of China (English)

    ZHOU Hong-an; LIU San-yang

    2007-01-01

    Based on the concept of multiplicative fuzzy consistent complementary judgement matrix, the mixed least square method (MLSM) for priority of complementary judgement matrix is proposed and proved. Then, the corresponding convergent iterative algorithm is given and its convergence is proved. Finally, some main properties of the developed priority method, such as rank preservation under strong condition, etc., are introduced. The theoretical analyses show that the MLSM can sufficiently reflect the preference information of the decision maker, and is easy to realize on a computer.

  20. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.

  1. SPARSE REPRESENTATIONS WITH DATA FIDELITY TERM VIA AN ITERATIVELY REWEIGHTED LEAST SQUARES ALGORITHM

    Energy Technology Data Exchange (ETDEWEB)

    Wohlberg, Brendt [Los Alamos National Laboratory]; Rodriguez, Paul [Los Alamos National Laboratory]

    2007-01-08

    Basis Pursuit and Basis Pursuit Denoising, well established techniques for computing sparse representations, minimize an ℓ² data fidelity term subject to an ℓ¹ sparsity constraint or regularization term on the solution by mapping the problem to a linear or quadratic program. Basis Pursuit Denoising with an ℓ¹ data fidelity term has recently been proposed, also implemented via a mapping to a linear program. They introduce an alternative approach via an Iteratively Reweighted Least Squares algorithm, providing greater flexibility in the choice of data fidelity term norm, and computational advantages in certain circumstances.
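
    A generic iteratively reweighted least squares sketch for an ℓ1 data fidelity term with an ℓ1 regularizer (the general idea, not necessarily the authors' exact algorithm): both terms are approximated by weighted ℓ2 terms whose weights are refreshed from the current magnitudes.

      import numpy as np

      def irls_l1(A, b, lam=0.5, n_iter=50, eps=1e-6):
          """IRLS for min ||A x - b||_1 + lam * ||x||_1.

          Both l1 terms are replaced by weighted l2 terms; the weights are refreshed
          from the current residual and coefficient magnitudes at every iteration.
          """
          x = np.linalg.lstsq(A, b, rcond=None)[0]
          for _ in range(n_iter):
              wr = 1.0 / np.maximum(np.abs(A @ x - b), eps)   # data-fidelity weights
              wx = 1.0 / np.maximum(np.abs(x), eps)           # sparsity weights
              H = A.T @ (wr[:, None] * A) + lam * np.diag(wx)
              x = np.linalg.solve(H, A.T @ (wr * b))
          return x

      # Sparse recovery with a few gross outliers, where an l1 data fidelity term
      # is preferable to the usual l2 term.
      rng = np.random.default_rng(10)
      A = rng.standard_normal((80, 200))
      x_true = np.zeros(200)
      x_true[[5, 50, 120]] = [1.0, -2.0, 1.5]
      b = A @ x_true
      b[::17] += 5.0                                          # impulsive noise
      x_hat = irls_l1(A, b)
      print("largest recovered coefficients at indices:",
            np.sort(np.argsort(np.abs(x_hat))[-3:]))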

  2. Application of the Marquardt least-squares method to the estimation of pulse function parameters

    Science.gov (United States)

    Lundengård, Karl; Rančić, Milica; Javor, Vesna; Silvestrov, Sergei

    2014-12-01

    Application of the Marquardt least-squares method (MLSM) to the estimation of non-linear parameters of functions used for representing various lightning current waveshapes is presented in this paper. Parameters are determined for the Pulse, Heidler's and DEXP function representing the first positive, first and subsequent negative stroke currents as given in IEC 62305-1 Standard Ed.2, and also for some other fast- and slow-decaying lightning current waveshapes. The results prove the ability of the MLSM to be used for the estimation of parameters of the functions important in lightning discharge modeling.
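
    A sketch of fitting the Heidler lightning-current function with a Marquardt (Levenberg-Marquardt) least-squares routine; the waveshape parameters below are illustrative, not the IEC 62305-1 values, and the Heidler correction factor is folded into the fitted amplitude.

      import numpy as np
      from scipy.optimize import least_squares

      def heidler(t, I0, tau1, tau2, n=10):
          """Heidler lightning-current function; the correction factor eta is folded
          into the fitted amplitude I0 for simplicity."""
          x = (t / tau1) ** n
          return I0 * x / (1 + x) * np.exp(-t / tau2)

      # Synthetic "measured" waveshape (time in microseconds, current in kA).
      rng = np.random.default_rng(11)
      t = np.linspace(0.01, 100, 500)
      i_meas = heidler(t, 30.0, 2.0, 50.0) + 0.2 * rng.standard_normal(t.size)

      def residuals(p):
          I0, tau1, tau2 = p
          return heidler(t, I0, tau1, tau2) - i_meas

      # Marquardt least-squares estimation of the non-linear parameters.
      fit = least_squares(residuals, x0=[20.0, 1.0, 30.0], method="lm")
      print("I0, tau1, tau2 =", np.round(fit.x, 2))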

  3. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.

  4. Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems

    CERN Document Server

    Abbasi-Yadkori, Yasin; Szepesvari, Csaba

    2011-01-01

    The analysis of online least squares estimation is at the heart of many stochastic sequential decision making problems. We employ tools from the self-normalized processes to provide a simple and self-contained proof of a tail bound of a vector-valued martingale. We use the bound to construct new, tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and the linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al. (2002) and show that its regret is with high-probability a problem dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem dependent bound in the dimension and number of time steps. Furthermore, as opposed to the previous result, we prove that our bou...

  5. Multi-loop adaptive internal model control based on a dynamic partial least squares model

    Institute of Scientific and Technical Information of China (English)

    Zhao ZHAO; Bin HU; Jun LIANG

    2011-01-01

    A multi-loop adaptive internal model control (IMC) strategy based on a dynamic partial least squares (PLS) framework is proposed to account for plant model errors caused by slow aging, drift in operational conditions, or environmental changes. Since PLS decomposition structure enables multi-loop controller design within latent spaces, a multivariable adaptive control scheme can be converted easily into several independent univariable control loops in the PLS space. In each latent subspace, once the model error exceeds a specific threshold, online adaptation rules are implemented separately to correct the plant model mismatch via a recursive least squares (RLS) algorithm. Because the IMC extracts the inverse of the minimum part of the internal model as its structure, the IMC controller is self-tuned by explicitly updating the parameters, which are parts of the internal model. Both parameter convergence and system stability are briefly analyzed, and proved to be effective. Finally, the proposed control scheme is tested and evaluated using a widely-used benchmark of a multi-input multi-output (MIMO) system with pure delay.

  6. An Augmented Classical Least Squares Method for Quantitative Raman Spectral Analysis against Component Information Loss

    Directory of Open Access Journals (Sweden)

    Yan Zhou

    2013-01-01

    Full Text Available We propose an augmented classical least squares (ACLS) calibration method for quantitative Raman spectral analysis against component information loss. The Raman spectral signals with low analyte concentration correlations were selected and used as the substitutes for unknown quantitative component information during the CLS calibration procedure. The number of selected signals was determined by using the leave-one-out root-mean-square error of cross-validation (RMSECV) curve. An ACLS model was built based on the augmented concentration matrix and the reference spectral signal matrix. The proposed method was compared with partial least squares (PLS) and principal component regression (PCR) using one example: a data set recorded from an experiment of analyte concentration determination using Raman spectroscopy. A 2-fold cross-validation with Venetian blinds strategy was exploited to evaluate the predictive power of the proposed method. The one-way variance analysis (ANOVA) was used to assess the predictive power difference between the proposed method and existing methods. Results indicated that the proposed method is effective at increasing the robust predictive power of the traditional CLS model against component information loss and its predictive power is comparable to that of PLS or PCR.

  7. Identifying differentially methylated genes using mixed effect and generalized least square models

    Directory of Open Access Journals (Sweden)

    Yan Pearlly S

    2009-12-01

    Full Text Available Abstract Background DNA methylation plays an important role in the process of tumorigenesis. Identifying differentially methylated genes or CpG islands (CGIs) associated with genes between two tumor subtypes is thus an important biological question. The methylation status of all CGIs in the whole genome can be assayed with differential methylation hybridization (DMH) microarrays. However, patient samples or cell lines are heterogeneous, so their methylation pattern may be very different. In addition, neighboring probes at each CGI are correlated. How these factors affect the analysis of DMH data is unknown. Results We propose a new method for identifying differentially methylated (DM) genes by identifying the associated DM CGI(s). At each CGI, we implement four different mixed effect and generalized least square models to identify DM genes between two groups. We compare the four models with a simple least square regression model to study the impact of incorporating random effects and correlations. Conclusions We demonstrate that the inclusion (or exclusion) of random effects and the choice of correlation structures can significantly affect the results of the data analysis. We also assess the false discovery rate of different models using CGIs associated with housekeeping genes.

  8. Equalization of Loudspeaker and Room Responses Using Kautz Filters: Direct Least Squares Design

    Directory of Open Access Journals (Sweden)

    Tuomas Paatero

    2007-01-01

    Full Text Available DSP-based correction of loudspeaker and room responses is becoming an important part of improving sound reproduction. Such response equalization (EQ) is based on using a digital filter in cascade with the reproduction channel to counteract the response errors introduced by loudspeakers and room acoustics. Several FIR and IIR filter design techniques have been proposed for equalization purposes. In this paper we investigate Kautz filters, an interesting class of IIR filters, from the point of view of direct least squares EQ design. Kautz filters can be seen as generalizations of FIR filters and their frequency-warped counterparts. They provide a flexible means to obtain desired frequency resolution behavior, which allows low filter orders even for complex corrections. Kautz filters also have the desirable property of avoiding the inversion of dips in the transfer function into sharp, long-ringing resonances in the equalizer. Furthermore, the direct least squares design is applicable to nonminimum-phase EQ design and allows using a desired target response. The proposed method is demonstrated by case examples with measured and synthetic loudspeaker and room responses.

  9. Step-heating infrared thermographic inspection of steel structures by applying least-squares regression.

    Science.gov (United States)

    Zhao, Hanxue; Zhou, Zhenggan; Fan, Jin; Li, Gen; Sun, Guangkai

    2017-02-01

    This paper reports the application of the least-squares regression method in the step-heating thermographic inspection of steel structures. The surface temperature variation of a slab with finite thickness during both the step-heating phase and the cooling-down phase is presented. A mild steel slab with holes of various depths and diameters is chosen as the specimen. The step-heating thermographic inspection experiments are carried out on the specimen with different heating times. The heating as well as the cooling-down phases are recorded with an infrared camera and are analyzed separately by linear regression of the double logarithmic temperature increase versus time plots. Three statistics of the linear regression, the slope, the coefficient of determination, and the F-test value, are used to create image maps according to the processing results. The signal-to-noise ratio of each map is calculated to evaluate the performance of the three imaging methods with different durations of heating time and cooling time. The results prove that the F-test value maps present a good performance for the sequences of the step-heating phase, while the slope maps present a good performance for the sequences of the cooling-down phase. The optimal heating time and cooling time for a steel structure are also concluded. The comparison with the results of the thermographic signal reconstruction (TSR) method proves that the least-squares regression method has better detectability and a higher inspection efficiency.

  10. Two-Stage Orthogonal Least Squares Methods for Neural Network Construction.

    Science.gov (United States)

    Zhang, Long; Li, Kang; Bai, Er-Wei; Irwin, George W

    2015-08-01

    A number of neural networks can be formulated as the linear-in-the-parameters models. Training such networks can be transformed to a model selection problem where a compact model is selected from all the candidates using subset selection algorithms. Forward selection methods are popular fast subset selection approaches. However, they may only produce suboptimal models and can be trapped into a local minimum. More recently, a two-stage fast recursive algorithm (TSFRA) combining forward selection and backward model refinement has been proposed to improve the compactness and generalization performance of the model. This paper proposes unified two-stage orthogonal least squares methods instead of the fast recursive-based methods. In contrast to the TSFRA, this paper derives a new simplified relationship between the forward and the backward stages to avoid repetitive computations using the inherent orthogonal properties of the least squares methods. Furthermore, a new term exchanging scheme for backward model refinement is introduced to reduce computational demand. Finally, given the error reduction ratio criterion, effective and efficient forward and backward subset selection procedures are proposed. Extensive examples are presented to demonstrate the improved model compactness constructed by the proposed technique in comparison with some popular methods.

  11. Using Perturbed QR Factorizations To Solve Linear Least-Squares Problems

    Energy Technology Data Exchange (ETDEWEB)

    Avron, Haim; Ng, Esmond G.; Toledo, Sivan

    2008-03-21

    We propose and analyze a new tool to help solve sparse linear least-squares problems min_x ||Ax - b||_2. Our method is based on a sparse QR factorization of a low-rank perturbation Â of A. More precisely, we show that the R factor of Â is an effective preconditioner for the least-squares problem min_x ||Ax - b||_2, when solved using LSQR. We propose applications for the new technique. When A is rank deficient we can add rows to ensure that the preconditioner is well-conditioned without column pivoting. When A is sparse except for a few dense rows we can drop these dense rows from A to obtain Â. Another application is solving an updated or downdated problem. If R is a good preconditioner for the original problem A, it is a good preconditioner for the updated/downdated problem Â. We can also solve what-if scenarios, where we want to find the solution if a column of the original matrix is changed/removed. We present a spectral theory that analyzes the generalized spectrum of the pencil (A*A, R*R) and analyze the applications.

  12. Weighted Least Squares Algorithm for Single-observer Passive Coherent Location Using DOA and TDOA Measurements

    Directory of Open Access Journals (Sweden)

    Zhao Yongsheng

    2016-06-01

    Full Text Available In order to determine single-observer passive coherent locations using illuminators of opportunity, we propose a joint angle and Time Difference Of Arrival (TDOA) Weighted Least Squares (WLS) location method. First, we linearize the DOA and TDOA measurement equations. We establish the localization problem as the WLS optimization model by considering the errors in the location equations. Then, we iteratively solve the WLS optimization. Finally, we conduct a performance analysis of the proposed method. Simulation results show that, unlike the TDOA-only method, which needs at least three illuminators to locate a target, the joint DOA and TDOA method requires only one illuminator. It also has a higher localization accuracy than the TDOA-only method when using the same number of illuminators. The proposed method yields a lower mean square error than the least squares algorithm, which makes it possible to approach the Cramér-Rao lower bound at a relatively high TDOA noise level. Moreover, on the basis of the geometric dilution of precision, we conclude that the positions of the target and illuminators are also important factors affecting the localization accuracy.

  13. Online segmentation of time series based on polynomial least-squares approximations.

    Science.gov (United States)

    Fuchs, Erich; Gruber, Thiemo; Nitschke, Jiri; Sick, Bernhard

    2010-12-01

    The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial-obtained by means of the update steps-can be interpreted as optimal (in the least-squares sense) estimators for average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of some artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg-which is suitable for many data streaming applications-offers high accuracy at very low computational costs.
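
    The basic ingredient is a least-squares polynomial fit over a sliding or growing window whose approximation error drives the segmentation. The sketch below uses a simple growing-window scheme and does not reproduce SwiftSeg's fast orthogonal-polynomial update steps.

      import numpy as np

      def window_rms(y, degree):
          """RMS error of a least-squares polynomial fit over one window."""
          x = np.arange(len(y))
          coef = np.polyfit(x, y, degree)
          return np.sqrt(np.mean((np.polyval(coef, x) - y) ** 2))

      def segment(stream, degree=2, max_rms=0.2, min_len=5):
          """Grow a window until the approximation error exceeds max_rms, then close
          the segment and start a new one at the current position."""
          segments, start = [], 0
          end = start + min_len
          while end <= len(stream):
              if window_rms(stream[start:end], degree) > max_rms:
                  segments.append((start, end - 1))
                  start = end - 1
                  end = start + min_len
              else:
                  end += 1
          segments.append((start, len(stream)))
          return segments

      # A signal whose shape changes abruptly in the middle.
      rng = np.random.default_rng(12)
      t = np.linspace(0, 4, 200)
      y = np.where(t < 2, 0.5 * t ** 2, 2 - 2 * (t - 2)) + 0.02 * rng.standard_normal(200)
      print("segments (start, end):", segment(y))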

  14. Least-squares finite-element scheme for the lattice Boltzmann method on an unstructured mesh.

    Science.gov (United States)

    Li, Yusong; LeBoeuf, Eugene J; Basu, P K

    2005-10-01

    A numerical model of the lattice Boltzmann method (LBM) utilizing least-squares finite-element method in space and the Crank-Nicolson method in time is developed. This method is able to solve fluid flow in domains that contain complex or irregular geometric boundaries by using the flexibility and numerical stability of a finite-element method, while employing accurate least-squares optimization. Fourth-order accuracy in space and second-order accuracy in time are derived for a pure advection equation on a uniform mesh; while high stability is implied from a von Neumann linearized stability analysis. Implemented on unstructured mesh through an innovative element-by-element approach, the proposed method requires fewer grid points and less memory compared to traditional LBM. Accurate numerical results are presented through two-dimensional incompressible Poiseuille flow, Couette flow, and flow past a circular cylinder. Finally, the proposed method is applied to estimate the permeability of a randomly generated porous media, which further demonstrates its inherent geometric flexibility.

  15. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equations based Bayesian approach and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch leading to symmetric and asymmetric switching behavior as well as an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity and that the specific choice of this algorithm shows only minor performance differences.

  16. [NIR spectroscopy based on least square support vector machines for quality prediction of tomato juice].

    Science.gov (United States)

    Huang, Kang; Wang, Hui-jun; Xu, Hui-rong; Wang, Jian-ping; Ying, Yi-bin

    2009-04-01

    The application of least square support vector machines (LS-SVM) regression method based on statistics study theory to the analysis with near infrared (NIR) spectra of tomato juice was introduced in the present paper. In this method, LS-SVM was used for establishing model of spectral analysis, and was applied to predict the sugar contents (SC) and available acid (VA) in tomato juice samples. NIR transmission spectra of tomato juice were measured in the spectral range of 800-2,500 nm using InGaAs detector. The radial basis function (RBF) was adopted as a kernel function of LS-SVM. Sixty seven tomato juice samples were used as calibration set, and thirty three samples were used as validation set. The results of the method for sugar contents (SC) and available acid (VA) prediction were: a high correlation coefficient of 0.9903 and 0.9675, and a low root mean square error of prediction (RMSEP) of 0.0056 degree Brix and 0.0245, respectively. And compared to PLS and PCR methods, the performance of the LSSVM method was better. The results indicated that it was possible to built statistic models to quantify some common components in tomato juice using near-infrared (NIR) spectroscopy and least square support vector machines (LS-SVM) regression method as a nonlinear multivariate calibration procedure, and LS-SVM could be a rapid and accurate method for juice components determination based on NIR spectra.

  17. Iterative least square phase-measuring method that tolerates extended finite bandwidth illumination.

    Science.gov (United States)

    Munteanu, Florin; Schmit, Joanna

    2009-02-20

    Iterative least square phase-measuring techniques address the phase-shifting interferometry issue of sensitivity to vibrations and scanner nonlinearity. In these techniques the wavefront phase and phase steps are determined simultaneously from a single set of phase-shifted fringe frames where the phase shift does not need to have a nominal value or be a priori precisely known. This method is commonly used in laser interferometers in which the contrast of fringes is constant between frames and across the field. We present step-by-step modifications to the basic iterative least square method. These modifications allow for vibration insensitive measurements in an interferometric system in which fringe contrast varies across a single frame, as well as from frame to frame, due to the limited bandwidth light source and the nonzero numerical aperture of the objective. We demonstrate the efficiency of the new algorithm with experimental data, and we analyze theoretically the degree of contrast variation that this new algorithm can tolerate.
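
    A sketch of the basic iterative least-squares phase-shifting idea that the paper extends: alternately solve for the pixel-wise terms with the phase steps fixed, then for the phase steps with the pixel terms fixed. Constant fringe contrast is assumed here, unlike the modified algorithm described above; the simulated fringe data are illustrative only.

      import numpy as np

      def iterative_lsq_phase(frames, n_iter=20):
          """Basic iterative least-squares phase-shifting algorithm.

          frames: (K, N) array of K phase-shifted fringe frames, each flattened to N
          pixels, modelled as I_k = a + b*cos(phi + delta_k) with constant contrast.
          Alternates between (1) solving for a, b*cos(phi), b*sin(phi) per pixel with
          the steps delta fixed, and (2) solving for cos(delta_k), sin(delta_k) per
          frame with the pixel terms fixed.
          """
          K, N = frames.shape
          delta = np.linspace(0, np.pi / 2 * (K - 1), K)          # rough initial steps
          for _ in range(n_iter):
              # Step 1: pixel-wise unknowns U = [a, b*cos(phi), b*sin(phi)].
              M = np.column_stack([np.ones(K), np.cos(delta), -np.sin(delta)])
              U = np.linalg.lstsq(M, frames, rcond=None)[0]       # shape (3, N)
              a, c, d = U[0], U[1], -U[2]                         # I_k - a = c*cos(dk) + d*sin(dk)
              # Step 2: frame-wise unknowns [cos(delta_k), sin(delta_k)].
              G = np.column_stack([c, d])                         # shape (N, 2)
              V = np.linalg.lstsq(G, (frames - a).T, rcond=None)[0]
              delta = np.arctan2(V[1], V[0])
              delta -= delta[0]                                   # remove the global offset
          phi = np.arctan2(U[2], U[1])
          return phi, delta

      # Five frames of a tilted fringe pattern with non-nominal phase steps.
      rng = np.random.default_rng(13)
      phi_true = np.linspace(-np.pi, np.pi, 2000)
      delta_true = np.array([0.0, 1.4, 3.2, 4.9, 6.1])
      frames = 1.0 + 0.7 * np.cos(phi_true[None, :] + delta_true[:, None])
      frames = frames + 0.01 * rng.standard_normal(frames.shape)
      phi_est, delta_est = iterative_lsq_phase(frames)
      print("recovered steps:", np.round(np.mod(delta_est, 2 * np.pi), 2))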

  18. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2013-12-01

    Full Text Available The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model through the remaining data. In the sparse online scheme, the model complexity is controlled by the coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by the covariance matrix descriptor encoding the moving information, then is classified into a normal or an abnormal frame. Experiments are conducted, on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset, to demonstrate the promising results of the proposed online LS-OC-SVM method.

  19. The least square particle finite element method for simulating large amplitude sloshing flows

    Institute of Scientific and Technical Information of China (English)

    Bo Tang; Junfeng Li; Tianshu Wang

    2008-01-01

    Large amplitude sloshing in tanks is simulated by the least square particle finite element method (LSPFEM) in this paper. The least square finite element method (LSFEM) is employed to spatially discretize the Navier-Stokes equations and to avoid the stabilization issues due to the incompressibility condition for equal-order interpolation of the velocity and the pressure, which in the Galerkin method must be addressed to satisfy the well-known LBB condition. The LSPFEM also uses the Lagrangian description to model the motion of nodes (particles). A mesh which connects these nodes is constructed by a triangulation algorithm to avoid mesh distortion. A quasi α-shapes algorithm is used to identify the free surface boundary. The nodes are viewed as particles which can freely move and even separate from the main fluid domain. Finally this method is used to study the large amplitude sloshing evolution in two-dimensional tanks. The results are compared with those obtained by Flow-3d with good agreement.

  20. Least-squares migration of multisource data with a deblurring filter

    KAUST Repository

    Dai, Wei

    2011-09-01

    Least-squares migration (LSM) has been shown to be able to produce high-quality migration images, but its computational cost is considered to be too high for practical imaging. We have developed a multisource least-squares migration algorithm (MLSM) to increase the computational efficiency by using the blended sources processing technique. To expedite convergence, a multisource deblurring filter is used as a preconditioner to reduce the data residual. This MLSM algorithm is applicable with Kirchhoff migration, wave-equation migration, or reverse time migration, and the gain in computational efficiency depends on the choice of migration method. Numerical results with Kirchhoff LSM on the 2D SEG/EAGE salt model show that an accurate image is obtained by migrating a supergather of 320 phase-encoded shots. When the encoding functions are the same for every iteration, the input/output cost of MLSM is reduced by 320 times. Empirical results show that the crosstalk noise introduced by blended sources is more effectively reduced when the encoding functions are changed at every iteration. The analysis of signal-to-noise ratio (S/N) suggests that not too many iterations are needed to enhance the S/N to an acceptable level. Therefore, when implemented with wave-equation migration or reverse time migration methods, the MLSM algorithm can be more efficient than the conventional migration method. © 2011 Society of Exploration Geophysicists.

  1. Fitting of dihedral terms in classical force fields as an analytic linear least-squares problem.

    Science.gov (United States)

    Hopkins, Chad W; Roitberg, Adrian E

    2014-07-28

    The derivation and optimization of most energy terms in modern force fields are aided by automated computational tools. It is therefore important to have algorithms to rapidly and precisely train large numbers of interconnected parameters to allow investigators to make better decisions about the content of molecular models. In particular, the traditional approach to deriving dihedral parameters has been a least-squares fit to target conformational energies through variational optimization strategies. We present a computational approach for simultaneously fitting force field dihedral amplitudes and phase constants which is analytic within the scope of the data set. This approach completes the optimal molecular mechanics representation of a quantum mechanical potential energy surface in a single linear least-squares fit by recasting the dihedral potential into a linear function in the parameters. We compare the resulting method to a genetic algorithm in terms of computational time and quality of fit for two simple molecules. As suggested in previous studies, arbitrary dihedral phases are only necessary when modeling chiral molecules, which include more than half of drugs currently in use, so we also examined a dihedral parametrization case for the drug amoxicillin and one of its stereoisomers where the target dihedral includes a chiral center. Asymmetric dihedral phases are needed in these types of cases to properly represent the quantum mechanical energy surface and to differentiate between stereoisomers about the chiral center.
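
    A sketch of the linear recasting: writing the dihedral term as a Fourier series in cos(n*phi) and sin(n*phi) makes the target energies linear in the parameters, so amplitudes and phases are recovered from one least-squares solve. The convention V = const + sum_n k_n*cos(n*phi - gamma_n) is an assumption for illustration and is not necessarily the authors' exact functional form.

      import numpy as np

      def fit_dihedral_fourier(phi, dE, n_max=4):
          """Fit target torsional energies with a form linear in its parameters:

              V(phi) = const + sum_n A_n*cos(n*phi) + B_n*sin(n*phi)

          For the convention V = const + sum_n k_n*cos(n*phi - gamma_n) this gives
          k_n = sqrt(A_n^2 + B_n^2) and gamma_n = atan2(B_n, A_n), so amplitudes and
          arbitrary phases follow from a single linear least-squares solve.
          """
          cols = [np.cos(n * phi) for n in range(1, n_max + 1)]
          cols += [np.sin(n * phi) for n in range(1, n_max + 1)]
          A = np.column_stack([np.ones_like(phi)] + cols)
          coef = np.linalg.lstsq(A, dE, rcond=None)[0]
          An, Bn = coef[1:n_max + 1], coef[n_max + 1:]
          return coef[0], np.hypot(An, Bn), np.arctan2(Bn, An)

      # Synthetic target profile along one dihedral (e.g. QM minus MM-without-torsion).
      phi = np.deg2rad(np.arange(0, 360, 10))
      dE = 2.0 * (1 + np.cos(phi - 0.3)) + 0.8 * (1 + np.cos(3 * phi - 1.0))
      offset, k, gamma = fit_dihedral_fourier(phi, dE)
      print("amplitudes:", np.round(k, 2))      # expect roughly [2, 0, 0.8, 0]
      print("phases    :", np.round(gamma, 2))  # gamma_1 ~ 0.3, gamma_3 ~ 1.0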

  2. Confidence Region of Least Squares Solution for Single-Arc Observations

    Science.gov (United States)

    Principe, G.; Armellin, R.; Lewis, H.

    2016-09-01

    The total number of active satellites, rocket bodies, and debris larger than 10 cm is currently about 20,000. Considering all resident space objects larger than 1 cm this rises to an estimated minimum of 500,000 objects. Latest generation sensor networks will be able to detect small-size objects, producing millions of observations per day. Due to observability constraints it is likely that long gaps between observations will occur for small objects. This requires determining the space object (SO) orbit and accurately describing the associated uncertainty when observations are acquired on a single arc. The aim of this work is to revisit the classical least squares method taking advantage of the high order Taylor expansions enabled by differential algebra. In particular, the high order expansion of the residuals with respect to the state is used to implement an arbitrary order least squares solver, avoiding the typical approximations of differential correction methods. In addition, the same expansions are used to accurately characterize the confidence region of the solution, going beyond the classical Gaussian distributions. The properties and performances of the proposed method are discussed using optical observations of objects in LEO, HEO, and GEO.

  3. A compact and accurate semi-global potential energy surface for malonaldehyde from constrained least squares regression

    Energy Technology Data Exchange (ETDEWEB)

    Mizukami, Wataru, E-mail: wataru.mizukami@bristol.ac.uk; Tew, David P., E-mail: david.tew@bristol.ac.uk [School of Chemistry, University of Bristol, Bristol BS8 1TS (United Kingdom); Habershon, Scott, E-mail: S.Habershon@warwick.ac.uk [Department of Chemistry and Centre for Scientific Computing, University of Warwick, Gibbet Hill Road, Coventry CV4 7AL (United Kingdom)

    2014-10-14

    We present a new approach to semi-global potential energy surface fitting that uses the least absolute shrinkage and selection operator (LASSO) constrained least squares procedure to exploit an extremely flexible form for the potential function, while at the same time controlling the risk of overfitting and avoiding the introduction of unphysical features such as divergences or high-frequency oscillations. Drawing from a massively redundant set of overlapping distributed multi-dimensional Gaussian functions of inter-atomic separations we build a compact full-dimensional surface for malonaldehyde, fit to explicitly correlated coupled cluster CCSD(T)(F12*) energies with a root mean square deviations accuracy of 0.3%–0.5% up to 25 000 cm-1 above equilibrium. Importance-sampled diffusion Monte Carlo calculations predict zero point energies for malonaldehyde and its deuterated isotopologue of 14 715.4(2) and 13 997.9(2) cm-1 and hydrogen transfer tunnelling splittings of 21.0(4) and 3.2(4) cm-1, respectively, which are in excellent agreement with the experimental values of 21.583 and 2.915(4) cm-1.
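
    As a loose one-dimensional analogue of the strategy described above (a massively redundant set of overlapping Gaussians pruned by an L1 constraint), the sketch below uses scikit-learn's Lasso. The basis layout, regularization strength and toy "potential" are assumptions for illustration, not the authors' surface or data.

```python
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.default_rng(1)

# 1-D stand-in for a potential curve sampled with small noise
r = np.linspace(0.5, 4.0, 200)
v = np.exp(-2.0 * (r - 1.2) ** 2) - 0.5 * np.exp(-(r - 2.5) ** 2)
v_noisy = v + 0.01 * rng.standard_normal(r.size)

# massively redundant basis: overlapping Gaussians of several widths
centers = np.linspace(0.5, 4.0, 80)
widths = [0.2, 0.5, 1.0]
design = np.hstack([np.exp(-((r[:, None] - centers) / w) ** 2) for w in widths])

# the L1 penalty keeps only a compact subset and discourages wild, unphysical terms
model = Lasso(alpha=1e-4, max_iter=100000).fit(design, v_noisy)
print("active basis functions:", np.count_nonzero(model.coef_), "of", design.shape[1])
```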

  4. A compact and accurate semi-global potential energy surface for malonaldehyde from constrained least squares regression

    Science.gov (United States)

    Mizukami, Wataru; Habershon, Scott; Tew, David P.

    2014-10-01

    We present a new approach to semi-global potential energy surface fitting that uses the least absolute shrinkage and selection operator (LASSO) constrained least squares procedure to exploit an extremely flexible form for the potential function, while at the same time controlling the risk of overfitting and avoiding the introduction of unphysical features such as divergences or high-frequency oscillations. Drawing from a massively redundant set of overlapping distributed multi-dimensional Gaussian functions of inter-atomic separations we build a compact full-dimensional surface for malonaldehyde, fit to explicitly correlated coupled cluster CCSD(T)(F12*) energies with a root mean square deviations accuracy of 0.3%-0.5% up to 25 000 cm-1 above equilibrium. Importance-sampled diffusion Monte Carlo calculations predict zero point energies for malonaldehyde and its deuterated isotopologue of 14 715.4(2) and 13 997.9(2) cm-1 and hydrogen transfer tunnelling splittings of 21.0(4) and 3.2(4) cm-1, respectively, which are in excellent agreement with the experimental values of 21.583 and 2.915(4) cm-1.

  5. A compact and accurate semi-global potential energy surface for malonaldehyde from constrained least squares regression.

    Science.gov (United States)

    Mizukami, Wataru; Habershon, Scott; Tew, David P

    2014-10-14

    We present a new approach to semi-global potential energy surface fitting that uses the least absolute shrinkage and selection operator (LASSO) constrained least squares procedure to exploit an extremely flexible form for the potential function, while at the same time controlling the risk of overfitting and avoiding the introduction of unphysical features such as divergences or high-frequency oscillations. Drawing from a massively redundant set of overlapping distributed multi-dimensional Gaussian functions of inter-atomic separations we build a compact full-dimensional surface for malonaldehyde, fit to explicitly correlated coupled cluster CCSD(T)(F12*) energies with a root mean square deviations accuracy of 0.3%-0.5% up to 25,000 cm(-1) above equilibrium. Importance-sampled diffusion Monte Carlo calculations predict zero point energies for malonaldehyde and its deuterated isotopologue of 14 715.4(2) and 13 997.9(2) cm(-1) and hydrogen transfer tunnelling splittings of 21.0(4) and 3.2(4) cm(-1), respectively, which are in excellent agreement with the experimental values of 21.583 and 2.915(4) cm(-1).

  6. Radio astronomical image formation using constrained least squares and Krylov subspaces

    Science.gov (United States)

    Mouri Sardarabadi, Ahmad; Leshem, Amir; van der Veen, Alle-Jan

    2016-04-01

    Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources
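
    A minimal sketch of the inequality-constrained least squares formulation discussed above: each pixel is constrained to lie between zero and a per-pixel upper bound (standing in for the dirty-image-derived bound), and SciPy's lsq_linear handles the box constraints. The measurement operator, source positions and bound values are synthetic assumptions, and the active-set / Krylov machinery of the paper is not reproduced.

```python
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(2)

n_vis, n_pix = 120, 40                       # data points and image pixels (toy sizes)
A = rng.standard_normal((n_vis, n_pix))      # hypothetical measurement operator
x_true = np.zeros(n_pix)
x_true[[5, 17, 29]] = [1.0, 0.6, 0.3]        # a few point sources
b = A @ x_true + 0.01 * rng.standard_normal(n_vis)

upper = np.full(n_pix, 1.5)                  # stand-in for a per-pixel upper bound
res = lsq_linear(A, b, bounds=(np.zeros(n_pix), upper))
print("brightest recovered pixels:", np.argsort(res.x)[-3:])
```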

  7. A Coupled Finite Difference and Moving Least Squares Simulation of Violent Breaking Wave Impact

    DEFF Research Database (Denmark)

    Lindberg, Ole; Bingham, Harry B.; Engsig-Karup, Allan Peter

    2012-01-01

    Two models for simulation of free surface flow are presented. The first model is a finite difference based potential flow model with non-linear kinematic and dynamic free surface boundary conditions. The second model is a weighted least squares based incompressible and inviscid flow model. A special feature of this model is a generalized finite point set method which is applied to the solution of the Poisson equation on an unstructured point distribution. The presented finite point set method is generalized to arbitrary order of approximation. The two models are applied to simulation of steep ... incompressible and inviscid model, and the wave impacts on the vertical breakwater are simulated in this model. The resulting maximum pressures and forces on the breakwater are relatively high when compared with other studies, and this is due to the incompressible nature of the present model.

  8. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    Science.gov (United States)

    Samsudin, R.; Saad, P.; Shabri, A.

    2010-06-01

    This paper proposes a novel hybrid forecasting model, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, which then performs the time series forecasting. In this study the application of GLSSVM to monthly river flow forecasting of the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, GMDH and LSSVM models using long term observations of monthly river flow discharge. The standard statistical measures, root mean square error (RMSE) and coefficient of correlation (R), are employed to evaluate the performance of the various models developed. The experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.
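
    For readers unfamiliar with the LSSVM component, the sketch below shows its core computation: training reduces to a single linear solve of the dual system with an RBF kernel, and prediction is a kernel expansion. The lag structure, kernel width, regularization value and synthetic "flow" series are illustrative assumptions; the GMDH input-selection stage is not reproduced.

```python
import numpy as np

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    """Minimal least squares SVM regression: one linear solve of the dual system
    [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y] with an RBF kernel K."""
    sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    K = np.exp(-sq / (2.0 * sigma ** 2))
    n = len(y)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:], X, sigma            # bias, dual weights, training data

def lssvm_predict(model, Xnew):
    b, alpha, Xtr, sigma = model
    sq = np.sum((Xnew[:, None, :] - Xtr[None, :, :]) ** 2, axis=-1)
    return np.exp(-sq / (2.0 * sigma ** 2)) @ alpha + b

# toy monthly-flow-like series predicted from three lagged values
rng = np.random.default_rng(3)
t = np.arange(240)
flow = 50 + 20 * np.sin(2 * np.pi * t / 12) + rng.normal(0, 3, t.size)
X = np.column_stack([flow[i:i - 3] for i in range(3)])
y = flow[3:]
model = lssvm_fit(X[:200], y[:200], gamma=50.0, sigma=20.0)
pred = lssvm_predict(model, X[200:])
print("test RMSE:", np.sqrt(np.mean((pred - y[200:]) ** 2)))
```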

  9. SOM-based nonlinear least squares twin SVM via active contours for noisy image segmentation

    Science.gov (United States)

    Xie, Xiaomin; Wang, Tingting

    2017-02-01

    In this paper, a nonlinear least square twin support vector machine (NLSTSVM) with the integration of an active contour model (ACM) is proposed for noisy image segmentation. Efforts have been made to seek kernel-generated surfaces instead of hyper-planes for the pixels belonging to the foreground and background, respectively, using the kernel trick to enhance the performance. The concurrent self organizing maps (SOMs) are applied to approximate the intensity distributions in a supervised way, so as to establish the original training sets for the NLSTSVM. Further, the two sets are updated by adding the global region average intensities at each iteration. Moreover, a local variable regional term rather than an edge stop function is adopted in the energy function to ameliorate the noise robustness. Experimental results demonstrate that our model achieves higher segmentation accuracy and greater noise robustness.

  10. Computational Experiments on the Tikhonov Regularization of the Total Least Squares Problem

    Directory of Open Access Journals (Sweden)

    Maziar Salahi

    2009-06-01

    In this paper we consider finding meaningful solutions of ill-conditioned overdetermined linear systems Ax≈b, where A and b are both contaminated by noise. This kind of problem frequently arises in the discretization of certain integral equations. One of the most popular approaches to finding meaningful solutions of such systems is the so-called total least squares problem. First we introduce this approach and then present three numerical algorithms to solve the resulting fractional minimization problem. Although the fractional minimization problem is not necessarily convex, the global optimal solution is obtained on all test problems. Extensive numerical experiments are reported to demonstrate the practical performance of the presented algorithms.
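
    The unregularized total least squares problem that underlies this work has a classical closed-form solution via the SVD of the augmented matrix [A b]; a minimal sketch on synthetic data is given below. The paper's Tikhonov-regularized fractional minimization algorithms are not reproduced here.

```python
import numpy as np

def total_least_squares(A, b):
    """Classical TLS solution via the SVD of the augmented matrix [A b].

    Both A and b are treated as noisy; the solution comes from the right
    singular vector associated with the smallest singular value.
    """
    n = A.shape[1]
    Z = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(Z, full_matrices=False)
    v = Vt[-1]                       # singular vector for the smallest singular value
    return -v[:n] / v[n]             # assumes v[n] != 0 (generic case)

rng = np.random.default_rng(4)
x_true = np.array([1.0, -2.0, 0.5])
A_clean = rng.standard_normal((100, 3))
A_noisy = A_clean + 0.05 * rng.standard_normal(A_clean.shape)
b_noisy = A_clean @ x_true + 0.05 * rng.standard_normal(100)
print("TLS estimate:", total_least_squares(A_noisy, b_noisy))
```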

  11. Modelling of chaotic systems based on modified weighted recurrent least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Sun Jian-Cheng; Zhang Tai-Yi; Liu Feng

    2004-01-01

    Positive Lyapunov exponents cause the errors in modelling of the chaotic time series to grow exponentially. In this paper, we propose a modified version of the support vector machines (SVM) to deal with this problem. Based on recurrent least squares support vector machines (RLS-SVM), we introduce a weighted term into the cost function to compensate for the prediction errors resulting from the positive global Lyapunov exponents. To demonstrate the effectiveness of our algorithm, we use the power spectrum and dynamic invariants involving the Lyapunov exponents and the correlation dimension as criteria, and then apply our method to the Santa Fe competition time series. The simulation results show that the proposed method can capture the dynamics of the chaotic time series effectively.

  12. CXFTV2: A Fortran subroutine for the discrete least squares convex approximation

    Science.gov (United States)

    Demetriou, I. C.

    1997-03-01

    A Fortran subroutine calculates the least squares approximation to n data values containing random errors subject to non-negative second divided differences (convexity). The method employs a dual active set quadratic programming technique that allows several concavities of an iterate to be corrected simultaneously, which is a distinctive feature of this calculation. A B-spline representation of the iterates reduces each active set calculation to an unconstrained minimization with fewer variables that requires only O(n) computer operations. Details of these techniques, including the data structure underlying the implementation of the method, are specified. Numerical testing on a variety of data sets indicates that the subroutine is particularly efficient, terminating after a small number of active set changes, making it suitable for large numbers of data. A numerical example and its output are provided to illustrate the use of the software.

  13. Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    CERN Document Server

    Zhu, Hao; Giannakis, Georgios B

    2010-01-01

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector as well as in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data, but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also all...

  14. Scaled first-order methods for a class of large-scale constrained least square problems

    Science.gov (United States)

    Coli, Vanna Lisa; Ruggiero, Valeria; Zanni, Luca

    2016-10-01

    Typical applications in signal and image processing often require the numerical solution of large-scale linear least squares problems with simple constraints, related to an m × n nonnegative matrix A, m ≪ n. When the size of A is such that the matrix is not available in memory and only the operators of the matrix-vector products involving A and AT can be computed, forward-backward methods combined with suitable accelerating techniques are very effective; in particular, the gradient projection methods can be improved by suitable step-length rules or by an extrapolation/inertial step. In this work, we propose a further acceleration technique for both schemes, based on the use of variable metrics tailored for the considered problems. The numerical effectiveness of the proposed approach is evaluated on randomly generated test problems and real data arising from a problem of fibre orientation estimation in diffusion MRI.
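
    A bare-bones gradient projection sketch for the box-constrained least squares problem described above, using a fixed 1/L step with L estimated by power iteration. The step-length rules, inertial steps and variable metrics proposed in the paper are not included, and the random test problem is an assumption.

```python
import numpy as np

def projected_gradient_ls(A, b, lower, upper, n_iter=1000, seed=0):
    """Gradient projection for min ||Ax - b||^2 subject to lower <= x <= upper."""
    rng = np.random.default_rng(seed)
    v = rng.standard_normal(A.shape[1])
    for _ in range(50):                      # crude power-iteration estimate of ||A^T A||_2
        v = A.T @ (A @ v)
        v /= np.linalg.norm(v)
    L = np.linalg.norm(A @ v) ** 2           # Lipschitz constant of the gradient
    x = np.clip(np.zeros(A.shape[1]), lower, upper)
    for _ in range(n_iter):
        grad = A.T @ (A @ x - b)
        x = np.clip(x - grad / L, lower, upper)   # project back onto the box
    return x

rng = np.random.default_rng(5)
A = np.abs(rng.standard_normal((80, 400)))        # m << n nonnegative operator (toy)
x_true = np.clip(rng.standard_normal(400), 0.0, None)
b = A @ x_true
x_hat = projected_gradient_ls(A, b, lower=0.0, upper=np.inf)
print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
```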

  15. [Measurement of nonuniform temperature and concentration distribution by absorption spectroscopy based on least-square fitting].

    Science.gov (United States)

    Song, Jun-Ling; Hong, Yan-Ji; Wang, Guang-Yu; Pan, Hu

    2013-08-01

    The measurement of nonuniform temperature and concentration distributions was investigated based on tunable diode laser absorption spectroscopy technology. By directly scanning multiple absorption lines of H2O, two-zone temperature and concentration distributions were retrieved by solving nonlinear equations with least-squares fitting in both numerical and experimental studies. The numerical results show that the calculated temperature and concentration have relative errors of 8.3% and 7.6% compared to the model, respectively. The accuracy of the calculation can be improved by increasing the number of absorption lines and reducing the number of unknowns. Compared with the thermocouple readings, the high and low temperatures have relative errors of 13.8% and 3.5%, respectively. The numerical results are in agreement with the experimental results.

  16. On-line least squares support vector machine algorithm in gas prediction

    Institute of Scientific and Technical Information of China (English)

    ZHAO Xiao-hu; WANG Gang; ZHAO Ke-ke; TAN De-jian

    2009-01-01

    Traditional coal mine safety prediction methods are off-line and do not have dynamic prediction functions. The Support Vector Machine (SVM) is a new machine learning algorithm that has excellent properties. The least squares support vector machine (LS-SVM) algorithm is an improved version of SVM. However, the common LS-SVM algorithm, used directly in safety predictions, has some problems. We first studied gas prediction problems and the basic theory of LS-SVM. Given these problems, we investigated the effect of the time factor on safety prediction and present an on-line prediction algorithm based on LS-SVM. Finally, given our observed data, we used the on-line algorithm to predict gas emissions and used other related algorithms to compare its performance. The simulation results have verified the validity of the new algorithm.

  17. A Selective Moving Window Partial Least Squares Method and Its Application in Process Modeling

    Institute of Scientific and Technical Information of China (English)

    Ouguan Xu; Yongfeng Fu; Hongye Su; Lijuan Li

    2014-01-01

    A selective moving window partial least squares (SMW-PLS) soft sensor was proposed in this paper and applied to a hydro-isomerization process for on-line estimation of para-xylene (PX) content. Aiming at the high frequency of model updating in previous recursive PLS methods, a selective updating strategy was developed. The model adaptation is activated once the prediction error is larger than a preset threshold; otherwise the model is kept unchanged. As a result, the frequency of model updating is reduced greatly, while the change in prediction accuracy is minor. The performance of the proposed model is better than that of other PLS-based models. A compromise between prediction accuracy and real-time performance can be obtained by regulating the threshold. Guidelines to determine the model parameters are illustrated. In summary, the proposed SMW-PLS method can deal with slowly time-varying processes effectively.
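
    A rough sketch of the selective updating idea: a PLS model is refit on the most recent window of samples only when its prediction error exceeds a threshold, otherwise it is left untouched. It uses scikit-learn's PLSRegression; the window length, threshold and drifting synthetic process are assumptions, not the paper's hydro-isomerization data.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def smw_pls_stream(X, y, window=100, n_components=3, threshold=0.5):
    """Selective moving-window PLS sketch: refit on the latest window only when
    the absolute prediction error exceeds `threshold`."""
    model = PLSRegression(n_components=n_components).fit(X[:window], y[:window])
    preds, n_updates = [], 0
    for t in range(window, len(y)):
        y_hat = float(model.predict(X[t:t + 1]).ravel()[0])
        preds.append(y_hat)
        if abs(y_hat - y[t]) > threshold:          # selective updating rule
            model = PLSRegression(n_components=n_components).fit(
                X[t - window + 1:t + 1], y[t - window + 1:t + 1])
            n_updates += 1
    return np.array(preds), n_updates

# synthetic slowly time-varying process
rng = np.random.default_rng(6)
n = 600
X = rng.standard_normal((n, 8))
drift = np.linspace(0.0, 2.0, n)                   # slow gain drift
y = X[:, 0] * (1.0 + drift) + 0.5 * X[:, 1] + 0.1 * rng.standard_normal(n)
preds, n_updates = smw_pls_stream(X, y, window=100, threshold=0.6)
print("model refits triggered:", n_updates)
```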

  18. Multivariate analysis of remote LIBS spectra using partial least squares, principal component analysis, and related techniques

    Energy Technology Data Exchange (ETDEWEB)

    Clegg, Samuel M [Los Alamos National Laboratory; Barefield, James E [Los Alamos National Laboratory; Wiens, Roger C [Los Alamos National Laboratory; Sklute, Elizabeth [MT HOLYOKE COLLEGE; Dyare, Melinda D [MT HOLYOKE COLLEGE

    2008-01-01

    Quantitative analysis with LIBS traditionally employs calibration curves that are complicated by the chemical matrix effects. These chemical matrix effects influence the LIBS plasma and the ratio of elemental composition to elemental emission line intensity. Consequently, LIBS calibration typically requires a priori knowledge of the unknown, in order for a series of calibration standards similar to the unknown to be employed. In this paper, three new Multivariate Analysis (MVA) techniques are employed to analyze the LIBS spectra of 18 disparate igneous and highly-metamorphosed rock samples. Partial Least Squares (PLS) analysis is used to generate a calibration model from which unknown samples can be analyzed. Principal Components Analysis (PCA) and Soft Independent Modeling of Class Analogy (SIMCA) are employed to generate a model and predict the rock type of the samples. These MVA techniques appear to exploit the matrix effects associated with the chemistries of these 18 samples.

  19. Underwater terrain positioning method based on least squares estimation for AUV

    Science.gov (United States)

    Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing

    2015-12-01

    To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for underwater terrain-matching positioning is established using multi-beam data as the underwater terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed to address the defects of conventional terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for pseudo localization appearing in flat areas of topographic features, effectively reducing the impact of pseudo positioning points on matching accuracy and improving the positioning accuracy in flat terrain areas. Simulation experiments based on electronic chart and multi-beam sea trial data show that drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirement of accurate underwater positioning.

  20. Analysis of Shift and Deformation of Planar Surfaces Using the Least Squares Plane

    Directory of Open Access Journals (Sweden)

    Hrvoje Matijević

    2006-12-01

    Modern methods of measurement developed on the basis of advanced reflectorless distance measurement have paved the way for easier detection and analysis of shift and deformation. A large quantity of collected data points will often require a mathematical model of the surface that best fits them. Although this can be a complex task, in the case of planar surfaces it is easily done, enabling further processing and analysis of measurement results. The paper describes the fitting of a plane to a set of collected points using the least squares distance, with outliers previously excluded via the RANSAC algorithm. Based on that, a method for the analysis of the deformation and shift of planar surfaces is also described.
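
    A compact illustration of the two ingredients named in this record: a least squares plane fit (centroid plus the smallest-variance direction from an SVD) and a crude RANSAC loop that excludes outliers before the final fit. The inlier tolerance, iteration count and synthetic points are arbitrary assumptions.

```python
import numpy as np

def fit_plane_lsq(points):
    """Least squares plane through 3-D points: centroid plus the direction of
    smallest variance (last right singular vector) as the plane normal."""
    centroid = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - centroid, full_matrices=False)
    return centroid, Vt[-1]

def fit_plane_ransac(points, n_iter=200, tol=0.01, seed=None):
    """Sample minimal 3-point planes, keep the largest consensus set, then
    refit the plane to those inliers by least squares."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    for _ in range(n_iter):
        idx = rng.choice(len(points), size=3, replace=False)
        c, n = fit_plane_lsq(points[idx])
        inliers = np.abs((points - c) @ n) < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_plane_lsq(points[best_inliers])

rng = np.random.default_rng(7)
plane_pts = np.column_stack([rng.uniform(0, 10, 300), rng.uniform(0, 10, 300), np.zeros(300)])
plane_pts[:, 2] += 0.002 * rng.standard_normal(300)    # measurement noise
outliers = rng.uniform(0, 10, (30, 3))                 # gross errors
centroid, normal = fit_plane_ransac(np.vstack([plane_pts, outliers]), tol=0.01)
print("estimated plane normal:", np.round(normal, 3))
```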

  1. A fast iterative recursive least squares algorithm for Wiener model identification of highly nonlinear systems.

    Science.gov (United States)

    Kazemi, Mahdi; Arefi, Mohammad Mehdi

    2016-12-15

    In this paper, an online identification algorithm is presented for nonlinear systems in the presence of output colored noise. The proposed method is based on the extended recursive least squares (ERLS) algorithm, where the identified system is in polynomial Wiener form. To this end, an unknown intermediate signal is estimated by using an inner iterative algorithm. The iterative recursive algorithm adaptively modifies the vector of parameters of the presented Wiener model when the system parameters vary. In addition, to increase the robustness of the proposed method against variations, a robust RLS algorithm is applied to the model. Simulation results are provided to show the effectiveness of the proposed approach. Results confirm that the proposed method has a fast convergence rate with robust characteristics, which increases the efficiency of the proposed model and identification approach. For instance, the FIT criterion reaches 92% in a CSTR process where about 400 data points are used.
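
    The recursive least squares core on which the ERLS scheme builds can be written in a few lines. The sketch below is the textbook RLS update with a forgetting factor applied to a toy FIR system; the paper's inner iteration for the unmeasured intermediate Wiener-model signal is not reproduced.

```python
import numpy as np

def rls_identify(phi, y, lam=0.99, delta=100.0):
    """Standard recursive least squares with forgetting factor `lam`.

    phi: (T, p) regressor matrix, y: (T,) outputs; returns the parameter
    trajectory so convergence can be inspected.
    """
    T, p = phi.shape
    theta = np.zeros(p)
    P = delta * np.eye(p)
    history = np.zeros((T, p))
    for t in range(T):
        x = phi[t]
        k = P @ x / (lam + x @ P @ x)            # gain vector
        theta = theta + k * (y[t] - x @ theta)   # parameter update
        P = (P - np.outer(k, x @ P)) / lam       # covariance update
        history[t] = theta
    return history

rng = np.random.default_rng(8)
u = rng.standard_normal(500)
phi = np.column_stack([u, np.concatenate([[0.0], u[:-1]])])   # regressors [u(t), u(t-1)]
y = phi @ np.array([0.8, -0.4]) + 0.05 * rng.standard_normal(500)
theta_traj = rls_identify(phi, y)
print("final estimate:", theta_traj[-1])
```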

  2. Michaelis-Menten kinetics, the operator-repressor system, and least squares approaches.

    Science.gov (United States)

    Hadeler, Karl Peter

    2013-01-01

    The Michaelis-Menten (MM) function is a fractional linear function depending on two positive parameters. These can be estimated by nonlinear or linear least squares methods. The non-linear methods, based directly on the defect of the MM function, can fail and not produce any minimizer. The linear methods always produce a unique minimizer which, however, may not be positive. Here we give sufficient conditions on the data such that the nonlinear problem has at least one positive minimizer and also conditions for the minimizer of the linear problem to be positive. We discuss in detail the models and equilibrium relations of a classical operator-repressor system, and we extend our approach to the MM problem with leakage and to reversible MM kinetics. The arrangement of the sufficient conditions exhibits the important role of data that have a concavity property (chemically feasible data).
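
    The two estimation routes contrasted in this record can be tried side by side on synthetic data: a nonlinear least squares fit of the MM function itself (which may fail to converge for poor starting values) and the linearized double-reciprocal fit, whose unique minimizer need not be positive. The rate constants, substrate grid and noise level below are arbitrary assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def mm(s, vmax, km):
    """Michaelis-Menten rate law v = vmax * s / (km + s)."""
    return vmax * s / (km + s)

rng = np.random.default_rng(9)
s = np.array([0.1, 0.2, 0.5, 1.0, 2.0, 5.0, 10.0])
v = mm(s, vmax=2.0, km=0.8) * (1 + 0.05 * rng.standard_normal(s.size))

# nonlinear least squares on the defect of the MM function
(vmax_nl, km_nl), _ = curve_fit(mm, s, v, p0=(1.0, 1.0))

# linearized (double-reciprocal) fit: 1/v = 1/vmax + (km/vmax) * (1/s)
A = np.column_stack([np.ones_like(s), 1.0 / s])
(intercept, slope), *_ = np.linalg.lstsq(A, 1.0 / v, rcond=None)
vmax_lin, km_lin = 1.0 / intercept, slope / intercept

print("nonlinear fit: ", vmax_nl, km_nl)
print("linearized fit:", vmax_lin, km_lin)
```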

  3. Comparison of SIRT and SQS for Regularized Weighted Least Squares Image Reconstruction.

    Science.gov (United States)

    Gregor, Jens; Fessler, Jeffrey A

    2015-03-01

    Tomographic image reconstruction is often formulated as a regularized weighted least squares (RWLS) problem optimized by iterative algorithms that are either inherently algebraic or derived from a statistical point of view. This paper compares a modified version of SIRT (Simultaneous Iterative Reconstruction Technique), which is of the former type, with a version of SQS (Separable Quadratic Surrogates), which is of the latter type. We show that the two algorithms minimize the same criterion function using similar forms of preconditioned gradient descent. We present near-optimal relaxation for both based on eigenvalue bounds and include a heuristic extension for use with ordered subsets. We provide empirical evidence that SIRT and SQS converge at the same rate for all intents and purposes. For context, we compare their performance with an implementation of preconditioned conjugate gradient. The illustrative application is X-ray CT of luggage for aviation security.

  4. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis

    DEFF Research Database (Denmark)

    Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel;

    2014-01-01

    Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for 2 5-wk periods. Eighty variables retrieved from AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 ...

  5. Distributed weighted least-squares estimation with fast convergence for large-scale systems.

    Science.gov (United States)

    Marelli, Damián Edgardo; Fu, Minyue

    2015-01-01

    In this paper we study a distributed weighted least-squares estimation problem for a large-scale system consisting of a network of interconnected sub-systems. Each sub-system is concerned with a subset of the unknown parameters and has a measurement linear in the unknown parameters with additive noise. The distributed estimation task is for each sub-system to compute the globally optimal estimate of its own parameters using its own measurement and information shared with the network through neighborhood communication. We first provide a fully distributed iterative algorithm to asymptotically compute the global optimal estimate. The convergence rate of the algorithm will be maximized using a scaling parameter and a preconditioning method. This algorithm works for a general network. For a network without loops, we also provide a different iterative algorithm to compute the global optimal estimate which converges in a finite number of steps. We include numerical experiments to illustrate the performances of the proposed methods.

  6. Joint cluster and non-negative least squares analysis for aerosol mass spectrum data

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, T; Zhu, W [Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY 11794-3600 (United States); McGraw, R [Environmental Sciences Department, Brookhaven National Laboratory, Upton, NY 11973-5000 (United States)], E-mail: zhu@ams.sunysb.edu

    2008-07-15

    Aerosol mass spectrum (AMS) data contain hundreds of mass to charge ratios and their corresponding intensities from air collected through the mass spectrometer. The observations are usually taken sequentially in time to monitor the air composition, quality and temporal change in an area of interest. An important goal of AMS data analysis is to reduce the dimensionality of the original data, yielding a small set of representative tracers for various atmospheric and climatic models. In this work, we present an approach that jointly applies cluster analysis and the non-negative least squares method towards this goal. Application to a relevant study demonstrates the effectiveness of this new approach. Comparisons are made to other relevant multivariate statistical techniques, including principal component analysis and the positive matrix factorization method, and guidelines are provided.
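
    A toy version of the non-negative least squares step: an observed spectrum is expressed as a non-negative combination of a handful of tracer spectra using scipy.optimize.nnls. The random surrogate tracers stand in for the cluster centroids of the paper and are assumptions for illustration only.

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(10)

n_mz, n_tracers = 300, 5
tracers = np.abs(rng.standard_normal((n_mz, n_tracers)))   # surrogate tracer spectra
weights_true = np.array([0.7, 0.0, 0.2, 0.0, 0.1])
spectrum = tracers @ weights_true + 0.01 * rng.standard_normal(n_mz)

weights, residual = nnls(tracers, spectrum)                 # non-negative least squares
print("recovered non-negative weights:", np.round(weights, 3))
```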

  7. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    Science.gov (United States)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and more weight should be given to those classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to the least squares classification errors of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  8. Nonlinear Least-Squares Time-Difference Estimation from Sub-Nyquist-Rate Samples

    Science.gov (United States)

    Harada, Koji; Sakai, Hideaki

    In this paper, time-difference estimation of filtered random signals passed through multipath channels is discussed. First, we reformulate the approach based on innovation-rate sampling (IRS) to fit our random signal model, then use the IRS results to drive the nonlinear least-squares (NLS) minimization algorithm. This hybrid approach (referred to as the IRS-NLS method) provides consistent estimates even for cases with sub-Nyquist sampling, assuming the use of compactly-supported sampling kernels that satisfy the recently-developed nonaliasing condition in the frequency domain. Numerical simulations show that the proposed IRS-NLS method can improve performance over the straightforward IRS method, and provides approximately the same performance as the NLS method with reduced sampling rate, even for closely-spaced time delays. This enables, given a fixed observation time, a significant reduction in the required number of samples, while maintaining the same level of estimation performance.

  9. Research on mine noise sources analysis based on least squares wave-let transform

    Institute of Scientific and Technical Information of China (English)

    CHENG Gen-yin; YU Sheng-chen; CHEN Shao-jie; WEI Zhi-yong; ZHANG Xiao-chen

    2010-01-01

    In order to determine the characteristics of noise sources accurately, the noise distribution at different frequencies was determined by taking into account the differences between aerodynamic noise, mechanical noise and electrical noise in terms of frequency and intensity. A least squares wavelet with high precision and special effectiveness for strong interference zones (multi-source noise) was designed, which is applicable to the analysis of strong noise produced in underground mines, and the distribution of noise at different frequencies was obtained with good results. According to the results of the decomposition, the characteristics of noise source production can be determined more accurately, which lays a good foundation for follow-up focused and targeted noise control, and provides a new method that is widely applicable for testing and analyzing noise control.

  10. Least squares support vector machine for short-term prediction of meteorological time series

    Science.gov (United States)

    Mellit, A.; Pavan, A. Massi; Benghanem, M.

    2013-01-01

    The prediction of meteorological time series plays a very important role in several fields. In this paper, an application of the least squares support vector machine (LS-SVM) for short-term prediction of meteorological time series (e.g. solar irradiation, air temperature, relative humidity, wind speed, wind direction and pressure) is presented. In order to check the generalization capability of the LS-SVM approach, a K-fold cross-validation and a Kolmogorov-Smirnov test have been carried out. A comparison between LS-SVM and different artificial neural network (ANN) architectures (recurrent neural network, multi-layered perceptron, radial basis function and probabilistic neural network) is presented and discussed. The comparison showed that the LS-SVM produced significantly better results than the ANN architectures. It also indicates that LS-SVM provides promising results for short-term prediction of meteorological data.

  11. Improved Computing-Efficiency Least-Squares Algorithm with Application to All-Pass Filter Design

    Directory of Open Access Journals (Sweden)

    Lo-Chyuan Su

    2013-01-01

    All-pass filter design can generally be achieved by solving a system of linear equations. The matrices involved in the set of linear equations can be further formulated in a Toeplitz-plus-Hankel form such that a matrix inversion is avoided. Consequently, the optimal filter coefficients can be obtained by using computationally efficient Levinson algorithms or the Cholesky decomposition technique. In this paper, based on trigonometric identities and uniform sampling of the frequency band of interest, the authors propose closed-form expressions for computing the elements of the Toeplitz-plus-Hankel matrix required in the least-squares design of IIR all-pass filters. Simulation results confirm that the proposed method is effective and achieves good performance.

  12. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    Directory of Open Access Journals (Sweden)

    Ma W-K

    2006-01-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all the above described measurement cases. The advantages of CWLS include performance optimality and capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and the Cramér-Rao lower bound approximately when measurement error variances are small. The asymptotic optimum performance is also confirmed by simulation results.
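
    As background for the CWLS estimator described above, the sketch below shows the simpler unconstrained linearized least squares step for TOA measurements: subtracting one anchor's squared-range equation from the others gives a linear system in the unknown position. The weighting matrix and quadratic constraint that the paper adds to reach (approximate) optimality are not included; the anchor layout and noise level are assumptions.

```python
import numpy as np

def toa_linear_ls(anchors, ranges):
    """Unconstrained linearized least squares for TOA positioning.

    From ||x - a_i||^2 = d_i^2, subtracting the first anchor's equation gives
    2 (a_i - a_0)^T x = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2, a linear system.
    """
    a0, d0 = anchors[0], ranges[0]
    A = 2.0 * (anchors[1:] - a0)
    b = (d0 ** 2 - ranges[1:] ** 2
         + np.sum(anchors[1:] ** 2, axis=1) - np.sum(a0 ** 2))
    pos, *_ = np.linalg.lstsq(A, b, rcond=None)
    return pos

anchors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
true_pos = np.array([37.0, 64.0])
rng = np.random.default_rng(11)
ranges = np.linalg.norm(anchors - true_pos, axis=1) + rng.normal(0, 0.5, 4)
print("estimated position:", toa_linear_ls(anchors, ranges))
```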

  13. Least Squares Estimate of the Initial Phases in STFT based Speech Enhancement

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Krawczyk-Becker, Martin; Gerkmann, Timo;

    2015-01-01

    In this paper, we consider single-channel speech enhancement in the short time Fourier transform (STFT) domain. We suggest to improve an STFT phase estimate by estimating the initial phases. The method is based on the harmonic model and a model for the phase evolution over time. The initial phases are estimated by setting up a least squares problem between the noisy phase and the model for phase evolution. Simulations on synthetic and speech signals show a decreased error on the phase when an estimate of the initial phase is included compared to using the noisy phase as an initialisation. The error on the phase is decreased at input SNRs from -10 to 10 dB. Reconstructing the signal using the clean amplitude, the mean squared error is decreased and the PESQ score is increased.

  14. Wavelet Neural Networks for Adaptive Equalization by Using the Orthogonal Least Square Algorithm

    Institute of Scientific and Technical Information of China (English)

    JIANG Minghu(江铭虎); DENG Beixing(邓北星); Georges Gielen

    2004-01-01

    Equalizers are widely used in digital communication systems for corrupted or time varying channels. To overcome performance decline for noisy and nonlinear channels, many kinds of neural network models have been used in nonlinear equalization. In this paper, we propose a new nonlinear channel equalization, which is structured by wavelet neural networks. The orthogonal least square algorithm is applied to update the weighting matrix of wavelet networks to form a more compact wavelet basis unit, thus obtaining good equalization performance. The experimental results show that performance of the proposed equalizer based on wavelet networks can significantly improve the neural modeling accuracy and outperform conventional neural network equalization in signal to noise ratio and channel non-linearity.

  15. First-order system least squares for the pure traction problem in planar linear elasticity

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

    1996-12-31

    This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L² norms to define the FOSLS functional, is shown under certain H² regularity assumptions to admit optimal H¹-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H⁻¹ norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L² norm and for displacement in an H¹ norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

  16. Two regularizers for recursive least squared algorithms in feedforward multilayered neural networks.

    Science.gov (United States)

    Leung, C S; Tsoi, A C; Chan, L W

    2001-01-01

    Recursive least squares (RLS)-based algorithms are a class of fast online training algorithms for feedforward multilayered neural networks (FMNNs). Though the standard RLS algorithm has an implicit weight decay term in its energy function, the weight decay effect decreases linearly as the number of learning epochs increases, thus rendering a diminishing weight decay effect as training progresses. In this paper, we derive two modified RLS algorithms to tackle this problem. In the first algorithm, namely, the true weight decay RLS (TWDRLS) algorithm, we consider a modified energy function whereby the weight decay effect remains constant, irrespective of the number of learning epochs. The second version, the input perturbation RLS (IPRLS) algorithm, is derived by requiring robustness in its prediction performance to input perturbations. Simulation results show that both algorithms improve the generalization capability of the trained network.

  17. An Adaptive Recursive Least Square Algorithm for Feed Forward Neural Network and Its Application

    Science.gov (United States)

    Qing, Xi-Hong; Xu, Jun-Yi; Guo, Fen-Hong; Feng, Ai-Mu; Nin, Wei; Tao, Hua-Xue

    In high-dimensional data fitting, it is a difficult task to insert new training samples and remove old-fashioned samples for a feed forward neural network (FFNN). This paper therefore studies dynamical learning algorithms with adaptive recursive regression (AR) and presents an advanced adaptive recursive (AAR) least square algorithm. This algorithm can efficiently handle the insertion of new samples and the removal of old ones. The AAR algorithm is applied to train an FFNN and makes the FFNN capable of simultaneously carrying out three processes: dynamical learning of new samples, removal of old-fashioned samples, and neural network (NN) synchronization computing. It efficiently solves the problem of dynamically training an FFNN. This FFNN algorithm is applied to compute residual oil distribution.

  18. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis

    DEFF Research Database (Denmark)

    Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel;

    2014-01-01

    Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for 2 5-wk periods. Eighty variables retrieved from AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3 ...

  19. MODELING OF HOTEL ROOM OCCUPANCY RATES IN KENDARI WITH CONTINUOUS WAVELET TRANSFORMATION AND PARTIAL LEAST SQUARES

    Directory of Open Access Journals (Sweden)

    Margaretha Ohyver

    2014-12-01

    Multicollinearity and outliers are common problems when estimating a regression model. Multicollinearity occurs when there are high correlations among predictor variables, leading to difficulties in separating the effects of each independent variable on the response variable. Meanwhile, if outliers are present in the data to be analyzed, the assumption of normality in the regression will be violated and the results of the analysis may be incorrect or misleading. Both of these issues occurred in the data on room occupancy rates of hotels in Kendari. The purpose of this study is to find a model for the data that is free of multicollinearity and outliers and to determine the factors that affect the room occupancy rate of hotels in Kendari. The method used is Continuous Wavelet Transformation and Partial Least Squares. The result of this research is a regression model that is free of multicollinearity and a pattern of data that resolves the presence of outliers.

  20. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  1. Baseline configuration for GNSS attitude determination with an analytical least-squares solution

    Science.gov (United States)

    Chang, Guobin; Xu, Tianhe; Wang, Qianxin

    2016-12-01

    The GNSS attitude determination using carrier phase measurements with 4 antennas is studied on the condition that the integer ambiguities have been resolved. The solution to the nonlinear least-squares problem is often obtained iteratively; however, an analytical solution can exist for specific baseline configurations. The main aim of this work is to design this class of configurations. Both single and double difference measurements are treated, which refer to dedicated and non-dedicated receivers, respectively. More realistic error models are employed, in which the correlations between different measurements are given full consideration. The desired configurations are worked out. The configurations are rotation and scale equivariant and can be applied to both dedicated and non-dedicated receivers. For these configurations, the analytical and optimal solution for the attitude is also given together with its error variance-covariance matrix.

  2. Numerical solution of a nonlinear least squares problem in digital breast tomosynthesis

    Science.gov (United States)

    Landi, G.; Loli Piccolomini, E.; Nagy, J. G.

    2015-11-01

    In digital tomosynthesis imaging, multiple projections of an object are obtained along a small range of different incident angles in order to reconstruct a pseudo-3D representation (i.e., a set of 2D slices) of the object. In this paper we describe some mathematical models for polyenergetic digital breast tomosynthesis image reconstruction that explicitly take into account the various materials composing the object and the polyenergetic nature of the x-ray beam. A polyenergetic model helps to reduce beam hardening artifacts, but the disadvantage is that it requires solving a large-scale nonlinear ill-posed inverse problem. We formulate the image reconstruction process (i.e., the method to solve the ill-posed inverse problem) in a nonlinear least squares framework, and use a Levenberg-Marquardt scheme to solve it. Some implementation details are discussed, and numerical experiments are provided to illustrate the performance of the methods.

  3. Nonlinear Spline Kernel-based Partial Least Squares Regression Method and Its Application

    Institute of Scientific and Technical Information of China (English)

    JIA Jin-ming; WEN Xiang-jun

    2008-01-01

    Inspired by the traditional Wold's nonlinear PLS algorithm, which comprises the NIPALS approach and a spline inner function model, a novel nonlinear partial least squares algorithm based on a spline kernel (named SK-PLS) is proposed for nonlinear modeling in the presence of multicollinearity. Based on the inner-product kernel spanned by the spline basis functions with an infinite number of nodes, this method first maps the input data into a high dimensional feature space, then calculates a linear PLS model with a reformed NIPALS procedure in the feature space, and consequently gives a unified framework for traditional PLS "kernel" algorithms. The linear PLS in the feature space corresponds to a nonlinear PLS in the original input (primal) space. The good approximating property of the spline kernel function enhances the generalization ability of the novel model, and two numerical experiments are given to illustrate the feasibility of the proposed method.

  4. Partial Least Squares Regression Model to Predict Water Quality in Urban Water Distribution Systems

    Institute of Scientific and Technical Information of China (English)

    LUO Bijun; ZHAO Yuan; CHEN Kai; ZHAO Xinhua

    2009-01-01

    The water distribution system of one residential district in Tianjin is taken as an example to analyze the changes in water quality. A partial least squares (PLS) regression model, in which turbidity and Fe are regarded as control objectives, is used to establish the statistical model. The experimental results indicate that the PLS regression model gives good predictions of water quality compared with the monitored data. The percentages of absolute relative error (below 15%, 20%, 30%) are 44.4%, 66.7%, 100% (turbidity) and 33.3%, 44.4%, 77.8% (Fe) at the 4th sampling point, and 77.8%, 88.9%, 88.9% (turbidity) and 44.4%, 55.6%, 66.7% (Fe) at the 5th sampling point.

  5. The Helmholtz equation least squares method for reconstructing and predicting acoustic radiation

    CERN Document Server

    Wu, Sean F

    2015-01-01

    This book gives a comprehensive introduction to the Helmholtz Equation Least Squares (HELS) method and its use in diagnosing noise and vibration problems. In contrast to the traditional NAH technologies, the HELS method does not seek an exact solution to the acoustic field produced by an arbitrarily shaped structure. Rather, it attempts to obtain the best approximation of an acoustic field through the expansion of certain basis functions. Therefore, it significantly simplifies the complexities of the reconstruction process, yet still enables one to acquire an understanding of the root causes of different noise and vibration problems that involve arbitrarily shaped surfaces in non-free space using far fewer measurement points than either Fourier acoustics or BEM based NAH. The examples given in this book illustrate that the HELS method may potentially become a practical and versatile tool for engineers to tackle a variety of complex noise and vibration issues in engineering applications.

  6. Least Squares Approach to the Alignment of the Generic High Precision Tracking System

    CERN Document Server

    Brückman de Renstrom, P

    2005-01-01

    A least squares method to solve a generic alignment problem of a high granularity tracking system is presented. The formalism takes advantage of the assumption that the derived corrections are small and consequently uses the first order linear expansion throughout. The algorithm consists of an analytical linear expansion allowing for multiple nested fits. For example, imposing a common vertex for groups of particle tracks is of particular interest. We present a consistent and complete recipe to impose constraints on any set of either implicit or explicit parameters. The baseline solution to the alignment problem is equivalent to the one described in [1]. The latter was derived using purely algebraic methods to reduce the initial large system of linear equations arising from separate fits of tracks and alignment parameters. The method presented here benefits from a wider range of applications including problems with implicit vertex fit, physics constraints on track parameters, use of external information to constrain the geo...

  7. On the Semivalues and the Least Square Values Average Per Capita Formulas and Relationships

    Institute of Scientific and Technical Information of China (English)

    Irinel DRAGAN

    2006-01-01

    In this paper, it is shown that both the Semivalues and the Least Square Values of cooperative transferable utilities games can be expressed in terms of n² averages of values of the characteristic function of the game, by means of what we call the Average per capita formulas. Moreover, as in the case of the Shapley value considered earlier, the terms of the formulas can be computed in parallel, and an algorithm is derived. From these results, it follows that each of the two values mentioned above is a Shapley value of a game easily obtained from the given game, and this fact gives another computational opportunity, as soon as the computation of the Shapley value is efficiently done.

  8. Correlation technique and least square support vector machine combine for frequency domain based ECG beat classification.

    Science.gov (United States)

    Dutta, Saibal; Chatterjee, Amitava; Munshi, Sugata

    2010-12-01

    The present work proposes the development of an automated medical diagnostic tool that can classify ECG beats. This is considered an important problem as accurate, timely detection of cardiac arrhythmia can help to provide proper medical attention to cure/reduce the ailment. The proposed scheme utilizes a cross-correlation based approach where the cross-spectral density information in frequency domain is used to extract suitable features. A least square support vector machine (LS-SVM) classifier is developed utilizing the features so that the ECG beats are classified into three categories: normal beats, PVC beats and other beats. This three-class classification scheme is developed utilizing a small training dataset and tested with an enormous testing dataset to show the generalization capability of the scheme. The scheme, when employed for 40 files in the MIT/BIH arrhythmia database, could produce high classification accuracy in the range 95.51-96.12% and could outperform several competing algorithms.

  9. [Relationships between Dendrobium quality and ecological factors based on partial least square regression].

    Science.gov (United States)

    Li, Wen-Tao; Huang, Lin-Fang; Du, Jing; Chen, Shi-Lin

    2013-10-01

    A total of eleven ecological factors values were obtained from the ecological suitability database of the geographic information system for traditional Chinese medicines production areas (TCM-GIS), and the relationships between the chemical components of Dendrobium and the ecological factors were analyzed by partial least square (PLS) regression. There existed significant differences in the chemical components contents of the same species of Dendrobium in different areas. The polysaccharides content of D. officinale had significant positive correlation with soil type, the accumulated dendrobine in D. nobile was significantly positively correlated with annual precipitation, and the erianin content of D. chrysotoxum was mainly affected by air temperature. The principal component analysis (PCA) showed that Zhejiang Province was the optimal production area for D. officinale, Guizhou Province was the most appropriate planting area for D. nobile, and Yunnan Province was the best production area of D. chrysotoxum.

  10. Experiments using least square lattice filters for the identification of structural dynamics

    Science.gov (United States)

    Sundararajan, N.; Montgomery, R. C.

    1983-01-01

    An approach for identifying the dynamics of large space structures is applied to a free-free beam. In this approach the system's order is determined on-line, along with mode shapes, using recursive lattice filters which provide a least square estimate of the measurement data. The mode shapes determined are orthonormal in the space of the measurements and, hence, are not the natural modes of the structure. To determine the natural modes of the structure, a method based on the fast Fourier transform is used on the outputs of the lattice filter. These natural modes are used to obtain the modal amplitude time series, which provides the input data for an output error parameter identification scheme that identifies the ARMA parameters of the difference equation model of the modes. The approach is applied to both simulated and experimental data.

  11. Operator functional state classification using least-square support vector machine based recursive feature elimination technique.

    Science.gov (United States)

    Yin, Zhong; Zhang, Jianhua

    2014-01-01

    This paper proposes two psychophysiological-data-driven classification frameworks for operator functional state (OFS) assessment in safety-critical human-machine systems with stable generalization ability. Recursive feature elimination (RFE) and the least square support vector machine (LSSVM) are combined and used for binary and multiclass feature selection. Besides typical binary LSSVM classifiers for two-class OFS assessment, two multiclass classifiers based on multiclass LSSVM-RFE and a decision directed acyclic graph (DDAG) scheme are developed, one used for recognizing the high mental workload and fatigued state while the other differentiates overloaded and baseline states from the normal states. Feature selection results reveal that different dimensions of OFS can be characterized by specific sets of psychophysiological features. Performance comparison studies show that reasonably high and stable classification accuracy can be achieved by both classification frameworks if the RFE procedure is properly implemented and utilized.

  12. Adaptive control of a flexible beam using least square lattice filters

    Science.gov (United States)

    Sundararajan, N.; Montgomery, R. C.

    1983-01-01

    This paper presents an indirect adaptive control scheme for the control of flexible structures using recursive least square lattice filters. The identification scheme uses lattice filters which provide an on-line estimate of the number of modes, mode shapes and modal amplitudes. These modes are coupled, and a transformation to decouple them in order to obtain the natural modes is presented. The decoupled modal amplitude time series are then used in an equation error identification scheme to identify the model parameters in an autoregressive moving average (ARMA) form. The control is based on a modal pole placement scheme with the objective of vibration suppression. The control gains are calculated based on the identified ARMA parameters. Before the identified parameters are used for control, detailed testing and validation procedures are carried out on them. The full adaptive control scheme is demonstrated using simulations of the 12-foot free-free beam apparatus at NASA Langley Research Center.

  13. Identification of the dynamics of a two-dimensional grid structure using least square lattice filters

    Science.gov (United States)

    Montgomery, R. C.; Sundararajan, N.

    1984-01-01

    The basic theory of least square lattice filters and their use in identification of structural dynamics systems is summarized. Thereafter, this theory is applied to a two-dimensional grid structure made of overlapping bars. Previously, this theory has been applied to an integral beam. System identification results are presented for both simulated and experimental tests and they are compared with those predicted using finite element modelling. The lattice filtering approach works well for simulated data based on finite element modelling. However, considerable discrepancy exists between estimates obtained from experimental data and the finite element analysis. It is believed that this discrepancy is the result of inadequacies in the finite element modelling to represent the damped motion of the laboratory apparatus.

  14. A Collocation Method by Moving Least Squares Applicable to European Option Pricing

    Directory of Open Access Journals (Sweden)

    M. Amirfakhrian

    2016-05-01

    Full Text Available This paper addresses the numerical pricing of European options. To obtain numerical option prices, a scheme that is independent of any kind of mesh and instead powered by moving least squares (MLS) approximation is constructed. In practical terms, the time variable is first discretized, and an MLS-based method is then applied for the spatial approximation. Since, unlike other methods, this course of action does not rely on a mesh, it can properly be categorized as a meshless method. Finally, several numerical experiments are presented to demonstrate the efficiency and power of the introduced approach.
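
    As a generic illustration of moving least squares approximation (a sketch under assumed Gaussian weights and a linear basis, not the paper's actual pricing scheme):

```python
import numpy as np

# Moving least squares (MLS) 1-D approximation sketch: at each evaluation
# point, fit a low-order polynomial by weighted least squares with weights
# centred on that point. Gaussian weight, linear basis and toy data are
# illustrative assumptions.

def mls_eval(x_nodes, f_nodes, x_eval, h=0.2):
    vals = []
    for x in x_eval:
        w = np.exp(-((x_nodes - x) / h) ** 2)                        # local weights
        B = np.column_stack([np.ones_like(x_nodes), x_nodes - x])    # shifted linear basis
        W = np.diag(w)
        coeff = np.linalg.solve(B.T @ W @ B, B.T @ W @ f_nodes)
        vals.append(coeff[0])             # shifted basis evaluated at x is [1, 0]
    return np.array(vals)

x_nodes = np.linspace(0.0, 1.0, 30)
f_nodes = np.sin(2 * np.pi * x_nodes)
x_eval = np.linspace(0.05, 0.95, 10)
print(np.max(np.abs(mls_eval(x_nodes, f_nodes, x_eval) - np.sin(2 * np.pi * x_eval))))
```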

  15. Natural gradient-based recursive least-squares algorithm for adaptive blind source separation

    Institute of Scientific and Technical Information of China (English)

    ZHU Xiaolong; ZHANG Xianda; YE Jimin

    2004-01-01

    This paper focuses on the problem of adaptive blind source separation (BSS). First, a recursive least-squares (RLS) whitening algorithm is proposed. By combining it with a natural gradient-based RLS algorithm for nonlinear principal component analysis (PCA), and using reasonable approximations, a novel RLS algorithm which can achieve BSS without additional pre-whitening of the observed mixtures is obtained. Analyses of the equilibrium points show that both the RLS whitening algorithm and the natural gradient-based RLS algorithm for BSS have the desired convergence properties. It is also proved that the combined new RLS algorithm for BSS is equivariant and has the property of keeping the separating matrix from becoming singular. Finally, the effectiveness of the proposed algorithm is verified by extensive simulation results.

  16. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    Science.gov (United States)

    Khawaja, Taimoor Saleem

    A high-belief low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear non-Gaussian systems. The methodology assumes the availability of real-time process measurements, definition of a set of fault indicators and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian Inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVM machines are founded on the principle of Structural Risk Minimization (SRM) which tends to find a good trade-off between low empirical risk and small capacity. The key features in SVM are the use of non-linear kernels, the absence of local minima, the sparseness of the solution and the capacity control obtained by optimizing the margin. The Bayesian Inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis. Additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel Anomaly Detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a 1-class LS-SVM machine which, when presented with online data is able to distinguish between normal behavior

  17. On the Total Least Squares Problem

    Institute of Scientific and Technical Information of China (English)

    魏木生; 朱超

    2002-01-01

    The total least squares (TLS) method is a method of solving an overdetermined system of linear equations AX = B that is appropriate when there are errors in both A and B. Golub and Van Loan (G. H. Golub and C. F. Van Loan, SIAM J. Numer. Anal. 17 (1980), 883-893) introduced this method into the field of numerical analysis and developed an algorithm based on the singular value decomposition, while M. Wei (M. Wei, Numer. Math. 62 (1992), 123-148) proposed a new definition of the TLS problem. In this paper, we discuss the relations between the two definitions. As a result, one can see that the latter definition is a generalization of the former one.
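
    A standard SVD-based TLS solution in the spirit of Golub and Van Loan can be sketched as follows (a generic illustration with toy data, not code from either reference):

```python
import numpy as np

# Total least squares for A x ≈ b via SVD of the augmented matrix [A | b]:
# the solution is read off the right singular vector associated with the
# smallest singular value. Toy data with noise in both A and b.

def tls(A, b):
    n = A.shape[1]
    C = np.column_stack([A, b])
    _, _, Vt = np.linalg.svd(C)
    v = Vt[-1]                       # right singular vector of smallest singular value
    return -v[:n] / v[n]

rng = np.random.default_rng(0)
x_true = np.array([2.0, -1.0])
A_clean = rng.normal(size=(100, 2))
b_clean = A_clean @ x_true
A_noisy = A_clean + 0.01 * rng.normal(size=A_clean.shape)
b_noisy = b_clean + 0.01 * rng.normal(size=b_clean.shape)
print(tls(A_noisy, b_noisy))         # close to [2, -1]
```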

  18. Spline based least squares integration for two-dimensional shape or wavefront reconstruction

    Science.gov (United States)

    Huang, Lei; Xue, Junpeng; Gao, Bo; Zuo, Chao; Idir, Mourad

    2017-04-01

    In this work, we present a novel method to handle two-dimensional shape or wavefront reconstruction from its slopes. The proposed integration method employs splines to fit the measured slope data with piecewise polynomials and uses the analytical polynomial functions to represent the height changes over a lateral spacing with the pre-determined spline coefficients. The linear least squares method is applied to estimate the height or wavefront as the final result. Numerical simulations verify that the proposed method has smaller algorithm errors than two other existing methods used for comparison; especially at the boundaries, the proposed method has better performance. The influence of noise is studied by adding white Gaussian noise to the slope data. Experimental data from phase measuring deflectometry are tested to demonstrate the feasibility of the new method in a practical measurement.

  19. NEW RESULTS ABOUT THE RELATIONSHIP BETWEEN OPTIMALLY WEIGHTED LEAST SQUARES ESTIMATE AND LINEAR MINIMUM VARIANCE ESTIMATE

    Institute of Scientific and Technical Information of China (English)

    Juan ZHAO; Yunmin ZHU

    2009-01-01

    The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors give a comprehensive discussion of the relationship between the two estimates. Firstly, the authors consider the classical linear model, in which the coefficient matrix of the linear model is deterministic, and the necessary and sufficient condition for equivalence of the two estimates is derived. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information about the parameter being estimated. Secondly, the authors consider the linear model with a random coefficient matrix, which is called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information about the parameter.

  20. Online Identification of Multivariable Discrete Time Delay Systems Using a Recursive Least Square Algorithm

    Directory of Open Access Journals (Sweden)

    Saïda Bedoui

    2013-01-01

    Full Text Available This paper addresses the problem of simultaneous identification of linear discrete-time delay multivariable systems. This problem involves estimating both the time delays and the dynamic parameter matrices. We suggest a new formulation of the problem that places the time delay and the dynamic parameters in the same estimated vector and builds the corresponding observation vector. This formulation is then used to propose a new method to identify the time delays and the parameters of these systems using the least square approach. Convergence conditions and statistical properties of the proposed method are also developed. Simulation results are presented to illustrate the performance of the proposed method. An application of the developed approach to a compact disc player arm is also presented in order to validate the simulation results.
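
    The recursive least squares machinery underlying such identification schemes can be sketched generically as follows (a standard RLS update with a forgetting factor; the ARX example data are illustrative assumptions, not the paper's system):

```python
import numpy as np

# Standard recursive least squares (RLS) with forgetting factor lam:
#   K = P phi / (lam + phi^T P phi)
#   theta <- theta + K (y - phi^T theta)
#   P <- (P - K phi^T P) / lam
# Used here to identify a toy ARX model y[k] = a*y[k-1] + b*u[k-1] + noise.

def rls_identify(u, y, n_params=2, lam=0.99):
    theta = np.zeros(n_params)
    P = 1e3 * np.eye(n_params)
    for k in range(1, len(y)):
        phi = np.array([y[k - 1], u[k - 1]])
        K = P @ phi / (lam + phi @ P @ phi)
        theta = theta + K * (y[k] - phi @ theta)
        P = (P - np.outer(K, phi @ P)) / lam
    return theta

rng = np.random.default_rng(1)
u = rng.normal(size=500)
y = np.zeros(500)
for k in range(1, 500):
    y[k] = 0.8 * y[k - 1] + 0.5 * u[k - 1] + 0.01 * rng.normal()
print(rls_identify(u, y))      # approximately [0.8, 0.5]
```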

  1. Recursive N-way partial least squares for brain-computer interface.

    Directory of Open Access Journals (Sweden)

    Andrey Eliseyev

    Full Text Available In this article, tensor-input/tensor-output blockwise Recursive N-way Partial Least Squares (RNPLS) regression is considered. It combines multi-way tensor decomposition with a consecutive calculation scheme and allows blockwise treatment of tensor data arrays with huge dimensions, as well as adaptive modeling of time-dependent processes with tensor variables. A numerical study of the algorithm is undertaken. The RNPLS algorithm demonstrates fast and stable convergence of the regression coefficients. Applied to brain-computer interface system calibration, the algorithm provides an efficient adjustment of the decoding model. Combining online adaptation with easy interpretation of results, the method can be effectively applied in a variety of multi-modal neural activity flow modeling tasks.

  2. Prediction of chaotic systems with multidimensional recurrent least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Sun Jian-Cheng; Zhou Ya-Tong; Luo Jian-Guo

    2006-01-01

    In this paper, we propose a multidimensional version of recurrent least squares support vector machines (MDRLS-SVM) to solve the problem of predicting chaotic systems. To acquire better prediction performance, the high-dimensional space, which provides more information on the system than the scalar time series, is first reconstructed utilizing Takens's embedding theorem. Then the MDRLS-SVM, instead of the traditional RLS-SVM, is used in the high-dimensional space, and the prediction performance can be improved from the point of view of the reconstructed embedding phase space. In addition, the MDRLS-SVM algorithm is analysed in the context of noise, and we find that the MDRLS-SVM has lower sensitivity to noise than the RLS-SVM.

  3. Least squares approach for initial data recovery in dynamic data-driven applications simulations

    KAUST Repository

    Douglas, C.

    2010-12-01

    In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.

  4. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To verify the performance of the presented method, local binary patterns (LBP) and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
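
    A minimal sparse-coding step of this kind can be sketched with SciPy's non-negative least squares solver (a generic illustration; the dictionary, data and the nearest-class rule below are assumptions, not the authors' pipeline):

```python
import numpy as np
from scipy.optimize import nnls

# Non-negative least-squares coding sketch: represent a test vector as a
# non-negative combination of training vectors (dictionary columns), then
# classify by which class's atoms reconstruct it with the smallest residual.
# The random dictionary and labels are toy assumptions.

rng = np.random.default_rng(2)
D = np.abs(rng.normal(size=(50, 30)))          # 30 training samples, 50 features
labels = np.repeat(np.arange(3), 10)           # 3 classes, 10 samples each
test = D[:, 4] + 0.05 * rng.normal(size=50)    # noisy copy of a class-0 sample

codes, _ = nnls(D, test)                       # non-negative coefficients

residuals = []
for c in range(3):
    mask = labels == c
    recon = D[:, mask] @ codes[mask]           # class-restricted reconstruction
    residuals.append(np.linalg.norm(test - recon))
print(int(np.argmin(residuals)))               # predicted class, expected 0
```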

  5. Defense of the Least Squares Solution to Peelle’s Pertinent Puzzle

    Directory of Open Access Journals (Sweden)

    Nicolas Hengartner

    2011-02-01

    Full Text Available Generalized least squares (GLS) for model parameter estimation has a long and successful history dating to its development by Gauss in 1795. Alternatives can outperform GLS in some settings, and alternatives to GLS are sometimes sought when GLS exhibits curious behavior, such as in Peelle's Pertinent Puzzle (PPP). PPP was described in 1987 in the context of estimating fundamental parameters that arise in nuclear interaction experiments. In PPP, GLS estimates fell outside the range of the data, eliciting concerns that GLS was somehow flawed. These concerns have led to suggested alternatives to GLS estimators. This paper defends GLS in the PPP context, investigates when PPP can occur, illustrates when PPP can be beneficial for parameter estimation, reviews optimality properties of GLS estimators, and gives an example in which PPP does occur.
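
    For reference, the GLS estimator under discussion has the closed form beta = (X' S^-1 X)^-1 X' S^-1 y, with S the covariance of the observation errors; a minimal numerical sketch with an assumed covariance matrix (not the PPP data) is:

```python
import numpy as np

# Generalized least squares sketch: beta = (X' S^-1 X)^-1 X' S^-1 y,
# where S is the covariance matrix of the observation errors.
# The design matrix, covariance and data below are toy assumptions.

def gls(X, y, S):
    S_inv = np.linalg.inv(S)
    return np.linalg.solve(X.T @ S_inv @ X, X.T @ S_inv @ y)

rng = np.random.default_rng(3)
X = np.column_stack([np.ones(50), rng.normal(size=50)])
S = np.diag(rng.uniform(0.5, 2.0, size=50))      # heteroscedastic, uncorrelated errors
beta_true = np.array([1.0, 3.0])
y = X @ beta_true + rng.multivariate_normal(np.zeros(50), S)
print(gls(X, y, S))                               # close to [1, 3]
```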

  6. A multivariate partial least squares approach to joint association analysis for multiple correlated traits

    Institute of Scientific and Technical Information of China (English)

    Yang Xu; Wenming Hu; Zefeng Yang; Chenwu Xu

    2016-01-01

    Many complex traits are highly correlated rather than independent. By taking the correlation structure of multiple traits into account, joint association analyses can achieve both higher statistical power and more accurate estimation. To develop a statistical approach to joint association analysis that includes allele detection and genetic effect estimation, we combined multivariate partial least squares regression with variable selection strategies and selected the optimal model using the Bayesian Information Criterion (BIC). We then performed extensive simulations under varying heritabilities and sample sizes to compare the performance achieved using our method with those obtained by single-trait multilocus methods. Joint association analysis has measurable advantages over single-trait methods, as it exhibits superior gene detection power, especially for pleiotropic genes. Sample size, heritability, polymorphic information content (PIC), and magnitude of gene effects influence the statistical power, accuracy and precision of effect estimation by the joint association analysis.

  8. Least Squares Temporal Difference Actor-Critic Methods with Applications to Robot Motion Control

    CERN Document Server

    Estanjini, Reza Moazzez; Lahijanian, Morteza; Wang, Jing; Belta, Calin A; Paschalidis, Ioannis Ch

    2011-01-01

    We consider the problem of finding a control policy for a Markov Decision Process (MDP) to maximize the probability of reaching some states while avoiding some other states. This problem is motivated by applications in robotics, where such problems naturally arise when probabilistic models of robot motion are required to satisfy temporal logic task specifications. We transform this problem into a Stochastic Shortest Path (SSP) problem and develop a new approximate dynamic programming algorithm to solve it. This algorithm is of the actor-critic type and uses a least-square temporal difference learning method. It operates on sample paths of the system and optimizes the policy within a pre-specified class parameterized by a parsimonious set of parameters. We show its convergence to a policy corresponding to a stationary point in the parameters' space. Simulation results confirm the effectiveness of the proposed solution.
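
    The least-squares temporal difference (LSTD) building block used by such actor-critic methods can be sketched as follows (generic LSTD(0) on toy transition samples; the features, discount factor and data are illustrative assumptions):

```python
import numpy as np

# LSTD(0) sketch for linear value-function approximation V(s) = phi(s)' w:
#   A = sum_t phi(s_t) (phi(s_t) - gamma * phi(s_{t+1}))'
#   b = sum_t phi(s_t) r_t
#   w = A^{-1} b
# The random-walk chain and one-hot features are toy assumptions.

def lstd(transitions, n_features, gamma=0.95):
    A = np.zeros((n_features, n_features))
    b = np.zeros(n_features)
    for phi_s, r, phi_next in transitions:
        A += np.outer(phi_s, phi_s - gamma * phi_next)
        b += phi_s * r
    return np.linalg.solve(A, b)

# Toy 5-state random walk with reward 1 on reaching the right end (terminal).
rng = np.random.default_rng(4)
eye = np.eye(5)
transitions = []
s = 2
for _ in range(5000):
    s_next = int(np.clip(s + rng.choice([-1, 1]), 0, 4))
    r = 1.0 if s_next == 4 else 0.0
    terminal = s_next == 4
    transitions.append((eye[s], r, np.zeros(5) if terminal else eye[s_next]))
    s = 2 if terminal else s_next       # restart after reaching the goal
print(lstd(transitions, 5))              # estimated state values
```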

  9. Influence and interaction indexes for pseudo-Boolean functions: a unified least squares approach

    CERN Document Server

    Marichal, Jean-Luc

    2012-01-01

    The Banzhaf power and interaction indexes for a pseudo-Boolean function (or a cooperative game) appear naturally as leading coefficients in the standard least squares approximation of the function by a pseudo-Boolean function of a specified degree. We first observe that this property still holds if we consider approximations by pseudo-Boolean functions depending only on specified variables. We then show that the Banzhaf influence index can also be obtained from the latter approximation problem. Considering certain weighted versions of this approximation problem, we introduce a class of weighted Banzhaf influence indexes, analyze their most important properties, and point out similarities between the weighted Banzhaf influence index and the corresponding weighted Banzhaf interaction index.

  10. Novel passive localization algorithm based on double side matrix-restricted total least squares

    Institute of Scientific and Technical Information of China (English)

    Xu Zheng; Qu Changwen; Wang Changhai

    2013-01-01

    In order to solve the bearings-only passive localization problem in the presence of erroneous observer position, a novel algorithm based on double side matrix-restricted total least squares (DSMRTLS) is proposed. First, the aforementioned passive localization problem is transferred to the DSMRTLS problem by deriving a multiplicative structure for both the observation matrix and the observation vector. Second, the corresponding optimization problem of the DSMRTLS problem without constraint is derived, which can be approximated as a generalized Rayleigh quotient minimization problem. Then, the localization solution, which is globally optimal and asymptotically unbiased, can be obtained by generalized eigenvalue decomposition. Simulation results verify the rationality of the approximation and the good performance of the proposed algorithm compared with several typical algorithms.

  11. Nonlinear decoupling controller design based on least squares support vector regression

    Institute of Scientific and Technical Information of China (English)

    WEN Xiang-jun; ZHANG Yu-nong; YAN Wei-wu; XU Xiao-ming

    2006-01-01

    Support Vector Machines (SVMs) have been widely used in pattern recognition and have also drawn considerable interest in control areas. Based on a method of least squares SVM (LS-SVM) for multivariate function estimation, a generalized inverse system is developed for the linearization and decoupling control of a general nonlinear continuous system. The approach of inverse modelling via LS-SVM and parameter optimization using the Bayesian evidence framework is discussed in detail. In this paper, a complex high-order nonlinear system is decoupled into a number of pseudo-linear Single Input Single Output (SISO) subsystems with linear dynamic components. The poles of the pseudo-linear subsystems can be configured to desired positions. The proposed method provides an effective alternative to controller design for plants whose accurate mathematical model is unknown or whose state variables are difficult or impossible to measure. Simulation results showed the efficacy of the method.

  12. Generalized total least squares to characterize biogeochemical processes of the ocean

    Science.gov (United States)

    Guglielmi, Véronique; Goyet, Catherine; Touratier, Franck; El Jai, Marie

    2016-11-01

    The chemical composition of the global ocean is governed by biological, chemical, and physical processes. These processes interact with each other so that the concentrations of carbon, oxygen, nitrogen (mainly from nitrate, nitrite, ammonium), and phosphorus (mainly from phosphate) vary in constant proportions, referred to as the Redfield ratios. We construct here the generalized total least squares estimator of these ratios. The significance of our approach is twofold: it respects the hydrological characteristics of the studied areas, and it can be applied identically in any area where enough data are available. The tests applied to Atlantic Ocean data highlight a variability of the Redfield ratios, both with geographical location and with depth. This variability emphasizes the importance of local and accurate estimates of Redfield ratios.

  13. Robust GRAPPA reconstruction using sparse multi-kernel learning with least squares support vector regression.

    Science.gov (United States)

    Xu, Lin; Feng, Yanqiu; Liu, Xiaoyun; Kang, Lili; Chen, Wufan

    2014-01-01

    Accuracy of interpolation coefficients fitting to the auto-calibrating signal data is crucial for k-space-based parallel reconstruction. Both conventional generalized autocalibrating partially parallel acquisitions (GRAPPA) reconstruction that utilizes linear interpolation function and nonlinear GRAPPA (NLGRAPPA) reconstruction with polynomial kernel function are sensitive to interpolation window and often cannot consistently produce good results for overall acceleration factors. In this study, sparse multi-kernel learning is conducted within the framework of least squares support vector regression to fit interpolation coefficients as well as to reconstruct images robustly under different subsampling patterns and coil datasets. The kernel combination weights and interpolation coefficients are adaptively determined by efficient semi-infinite linear programming techniques. Experimental results on phantom and in vivo data indicate that the proposed method can automatically achieve an optimized compromise between noise suppression and residual artifacts for various sampling schemes. Compared with NLGRAPPA, our method is significantly less sensitive to the interpolation window and kernel parameters.

  14. Least-Squares Solution of Inverse Problem for Hermitian Anti-reflexive Matrices and Its Approximation

    Institute of Scientific and Technical Information of China (English)

    Zhen Yun PENG; Yuan Bei DENG; Jin Wang LIU

    2006-01-01

    In this paper, we first consider the least-squares solution of the matrix inverse problem as follows: find a Hermitian anti-reflexive matrix A, corresponding to a given generalized reflection matrix J, such that for given matrices X and B we have min_A ‖AX - B‖. The existence theorems are obtained, and a general representation of such a matrix is presented. We denote the set of such matrices by SE. Then the matrix nearness problem for the matrix inverse problem is discussed. That is: given an arbitrary A*, find a matrix A ∈ SE which is nearest to A* in the Frobenius norm. We show that the nearest matrix is unique and provide an expression for this nearest matrix.

  15. Concerning an application of the method of least squares with a variable weight matrix

    Science.gov (United States)

    Sukhanov, A. A.

    1979-01-01

    The estimation of a state vector for a physical system is considered in the case where the weight matrix in the method of least squares is a function of this vector. An iterative procedure is proposed for calculating the desired estimate. Conditions for the existence and uniqueness of the limit of this procedure are obtained, and a domain is found which contains the limit estimate. A second method for calculating the desired estimate, which reduces to the solution of a system of algebraic equations, is proposed. The question of applying Newton's method of tangents to solve the given system of algebraic equations is considered, and conditions for the convergence of the modified Newton's method are obtained. Certain properties of the estimate obtained are presented together with an example.
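
    The iterative procedure described (re-estimate, then recompute the state-dependent weights) can be illustrated by a minimal sketch; the specific weight function and data here are assumptions for illustration only:

```python
import numpy as np

# Iterative least squares with a state-dependent weight matrix W(x):
# repeatedly solve the weighted normal equations with weights recomputed
# from the current estimate until the estimate stops changing.
# The residual-based weight function and toy data are illustrative assumptions.

def iterative_weighted_lsq(H, y, n_iter=20, tol=1e-10):
    x = np.linalg.lstsq(H, y, rcond=None)[0]      # unweighted start
    for _ in range(n_iter):
        r = y - H @ x
        w = 1.0 / (1.0 + r ** 2)                  # weights depend on current estimate
        W = np.diag(w)
        x_new = np.linalg.solve(H.T @ W @ H, H.T @ W @ y)
        if np.linalg.norm(x_new - x) < tol:
            return x_new
        x = x_new
    return x

rng = np.random.default_rng(5)
H = rng.normal(size=(60, 3))
x_true = np.array([1.0, -2.0, 0.5])
y = H @ x_true + 0.05 * rng.normal(size=60)
y[::15] += 3.0                                    # a few gross outliers
print(iterative_weighted_lsq(H, y))               # close to x_true
```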

  16. STUDY ON PARAMETERS FOR TOPOLOGICAL VARIABLES FIELD INTERPOLATED BY MOVING LEAST SQUARE APPROXIMATION

    Institute of Scientific and Technical Information of China (English)

    Kal Long; Zhengxing Zuo; Rehan H.Zuberi

    2009-01-01

    This paper presents a new approach to the structural topology optimization of continuum structures. Material-point independent variables are introduced to indicate the existence or non-existence of material points and their vicinity, instead of the elements or nodes used in popular topology optimization methods. The topological variables field is constructed by moving least square approximation, which is used as a shape function in the meshless method. Combined with finite element analyses, not only are checkerboard patterns and mesh-dependence phenomena overcome by this continuous and smooth topological variables field, but the locations and numbers of topological variables can also be arbitrary. The influence of parameters such as the number of quadrature points, the scaling parameter and the weight function on the optimum topological configurations is discussed. Two classic topology optimization problems are solved successfully by the proposed method. The method is found to be robust, and no numerical instabilities occur with proper parameters.

  17. The Recovery of Weak Impulsive Signals Based on Stochastic Resonance and Moving Least Squares Fitting

    Directory of Open Access Journals (Sweden)

    Kuosheng Jiang

    2014-07-01

    Full Text Available In this paper a stochastic resonance (SR) based method for recovering weak impulsive signals is developed for quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but the SR produces a nonlinear distortion of the shape of the impulsive signal. To eliminate the distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter based on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of impulsive signals are reconstructed with a good degree of accuracy, which leads to an accurate diagnosis of faults in roller bearings in a run-to-failure test.

  18. Least-Square Collaborative Beamforming Linear Array for Steering Capability in Green Wireless Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    NikNoordini NikAbdMalik; Mazlina Esa; Nurul Mu’azzah Abdul Latiff

    2016-01-01

    Abstract-This paper presents a collaborative beamforming (CB) technique to organize the sensor node’s location in a linear array for green wireless sensor network (WSN) applications. In this method, only selected clusters and active CB nodes are needed each time to perform CB in WSNs. The proposed least-square linear array (LSLA) manages to select nodes to perform as a linear antenna array (LAA), which is similar to and as outstanding as the conventional uniform linear array (ULA). The LSLA technique is also able to solve positioning error problems that exist in the random nodes deployment. The beampattern fluctuations have been analyzed due to the random positions of sensor nodes. Performances in terms of normalized power gains are given. It is demonstrated by a simulation that the proposed technique gives similar performances to the conventional ULA and at the same time exhibits lower complexity.

  19. The use of least squares methods in functional optimization of energy use prediction models

    Science.gov (United States)

    Bourisli, Raed I.; Al-Shammeri, Basma S.; AlAnzi, Adnan A.

    2012-06-01

    The least squares method (LSM) is used to optimize the coefficients of a closed-form correlation that predicts the annual energy use of buildings based on key envelope design and thermal parameters. Specifically, annual energy use is related to a number of parameters such as the overall heat transfer coefficients of the wall, roof and glazing, the glazing percentage, and the building surface area. The building used as a case study is a previously energy-audited mosque in a suburb of Kuwait City, Kuwait. Energy audit results are used to fine-tune the base case mosque model in the VisualDOE software. Subsequently, 1625 different cases of mosques with varying parameters were developed and simulated in order to provide the training data sets for the LSM optimizer. Coefficients of the proposed correlation are then optimized using multivariate least squares analysis. The objective is to minimize the difference between the correlation-predicted results and the VisualDOE simulation results. The optimization yields coefficients for the proposed correlation that reduce the difference between the simulated and predicted results to about 0.81%. In terms of the effects of the various parameters, the newly-defined weighted surface area parameter was found to have the greatest effect on the normalized annual energy use. Insulating the roofs and walls also had a major effect on the building energy use. The proposed correlation and methodology can be used during preliminary design stages to inexpensively assess the impacts of various design variables on the expected energy use. The method can also be used by municipality officials and planners as a tool for recommending energy conservation measures and fine-tuning energy codes.
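
    The coefficient-fitting step amounts to an ordinary multivariate least-squares fit of the correlation to the simulation results; a generic sketch (with made-up parameter names and synthetic training data, not the VisualDOE cases) is:

```python
import numpy as np

# Least-squares fit of correlation coefficients c in
#   E = c0 + c1*U_wall + c2*U_roof + c3*U_glass + c4*glazing_pct + c5*area
# to a table of simulated annual energy results. All training data here are
# synthetic stand-ins for the simulation cases.

rng = np.random.default_rng(6)
n_cases = 200
features = rng.uniform(0.2, 3.0, size=(n_cases, 5))        # toy design parameters
c_true = np.array([10.0, 4.0, 3.0, 6.0, 1.5, 0.8])
X = np.column_stack([np.ones(n_cases), features])
energy = X @ c_true + rng.normal(scale=0.5, size=n_cases)  # "simulated" results

coeffs, *_ = np.linalg.lstsq(X, energy, rcond=None)
pred = X @ coeffs
rel_err = np.mean(np.abs(pred - energy) / energy) * 100
print(coeffs, f"{rel_err:.2f}%")
```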

  20. Partial least-squares: Theoretical issues and engineering applications in signal processing

    Directory of Open Access Journals (Sweden)

    Fredric M. Ham

    1996-01-01

    Full Text Available In this paper we present partial least-squares (PLS), a statistical modeling method used extensively in analytical chemistry for quantitatively analyzing spectroscopic data. Comparisons are made between classical least-squares (CLS) and PLS to show how PLS can be used in certain engineering signal processing applications. Moreover, it is shown that in certain situations, when there exists a linear relationship between the independent and dependent variables, PLS can yield better predictive performance than CLS when it is not desirable to use all of the empirical data to develop a calibration model used for prediction. Specifically, because PLS is a factor analysis method, optimal selection of the number of PLS factors can result in a calibration model whose predictive performance is considerably better than that of CLS. That is, factor analysis (rank reduction) allows only those features of the data that are associated with information of interest to be retained for development of the calibration model, and the remaining data associated with noise are discarded. It is shown that PLS can yield physical insight into the system from which the empirical data have been collected. Also, when there exists a nonlinear cause-and-effect relationship between the independent and dependent variables, the PLS calibration model can yield prediction errors that are much smaller than those for CLS. Three PLS application examples are given and the results are compared to CLS. In one example, a method is presented using PLS for parametric system identification. Using PLS for system identification allows simultaneous estimation of the system dimension and the system parameter vector associated with a minimal realization of the system.
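
    A PLS calibration of the kind compared with CLS here can be sketched with scikit-learn's PLSRegression (assuming scikit-learn is available; the synthetic spectra and the choice of two factors are illustrative assumptions):

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

# PLS calibration sketch: spectra are generated as mixtures of two pure-
# component profiles plus noise, and PLS with a small number of factors is
# used to predict the concentrations. All data here are synthetic.

rng = np.random.default_rng(7)
wavelengths = np.linspace(0, 1, 120)
pure1 = np.exp(-((wavelengths - 0.3) / 0.05) ** 2)
pure2 = np.exp(-((wavelengths - 0.7) / 0.08) ** 2)

conc = rng.uniform(0.1, 1.0, size=(60, 2))                  # two analyte concentrations
spectra = conc @ np.vstack([pure1, pure2])
spectra += 0.01 * rng.normal(size=spectra.shape)            # measurement noise

pls = PLSRegression(n_components=2)
pls.fit(spectra[:40], conc[:40])                            # calibration set
pred = pls.predict(spectra[40:])                            # validation set
print(np.sqrt(np.mean((pred - conc[40:]) ** 2)))            # RMSE of prediction
```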

  1. HYDRA: a Java library for Markov Chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Gregory R. Warnes

    2002-03-01

    Full Text Available Hydra is an open-source, platform-neutral library for performing Markov Chain Monte Carlo. It implements the logic of standard MCMC samplers within a framework designed to be easy to use, extend, and integrate with other software tools. In this paper, we describe the problem that motivated our work, outline our goals for the Hydra project, and describe the current features of the Hydra library. We then provide a step-by-step example of using Hydra to simulate from a mixture model drawn from cancer genetics, first using a variable-at-a-time Metropolis sampler and then a Normal Kernel Coupler. We conclude with a discussion of future directions for Hydra.
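
    To illustrate the kind of sampler Hydra provides (a variable-at-a-time Metropolis step), here is a language-agnostic sketch in Python; it does not use Hydra's actual Java API, and the target distribution and proposal scale are assumed toy choices.

```python
import numpy as np

# Variable-at-a-time Metropolis sketch: update one coordinate at a time with
# a Gaussian proposal, accepting with the usual Metropolis ratio. The target
# (a correlated bivariate normal) and proposal scale are toy assumptions.

def log_target(x):
    cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    return -0.5 * x @ cov_inv @ x

def metropolis_one_at_a_time(n_samples, step=0.5, seed=8):
    rng = np.random.default_rng(seed)
    x = np.zeros(2)
    samples = np.empty((n_samples, 2))
    for i in range(n_samples):
        for j in range(2):                          # sweep over coordinates
            prop = x.copy()
            prop[j] += step * rng.normal()
            if np.log(rng.uniform()) < log_target(prop) - log_target(x):
                x = prop                            # accept
        samples[i] = x
    return samples

s = metropolis_one_at_a_time(20000)
print(np.corrcoef(s[5000:, 0], s[5000:, 1])[0, 1])   # roughly 0.8 after burn-in
```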

  2. On the roles of minimization and linearization in least-squares finite element models of nonlinear boundary-value problems

    Science.gov (United States)

    Payette, G. S.; Reddy, J. N.

    2011-05-01

    In this paper we examine the roles of minimization and linearization in the least-squares finite element formulations of nonlinear boundary-value problems. The least-squares principle is based upon the minimization of the least-squares functional constructed via the sum of the squares of appropriate norms of the residuals of the partial differential equations (in the present case we consider L2 norms). Since the least-squares method is independent of the discretization procedure and the solution scheme, the least-squares principle suggests that minimization should be performed prior to linearization, where linearization is employed in the context of either the Picard or Newton iterative solution procedures. However, in the least-squares finite element analysis of nonlinear boundary-value problems, it has become common practice in the literature to exchange the sequence of application of the minimization and linearization operations. The main purpose of this study is to provide a detailed assessment of how the finite element solution is affected when the order of application of these operators is interchanged. The assessment is performed mathematically, through an examination of the variational setting for the least-squares formulation of an abstract nonlinear boundary-value problem, and also computationally, through the numerical simulation of the least-squares finite element solutions of both a nonlinear form of the Poisson equation and also the incompressible Navier-Stokes equations. The assessment suggests that although the least-squares principle indicates that minimization should be performed prior to linearization, such an approach is often impractical and not necessary.

  3. Mass and Momentum Conservation of the Least-Squares Spectral Collocation Method for the Time-Dependent Stokes Equations

    Science.gov (United States)

    Kattelans, Thorsten; Heinrichs, Wilhelm

    2009-09-01

    For Stokes problems, least-squares schemes have the big advantage that they require no stabilization and equal-order interpolation can be used. The disadvantage of the Least-Squares Finite Element Method (LSFEM) and of the Least-Squares Spectral Element Method (LSSEM) is that they perform poorly with respect to conservation of mass for internal flow problems, where the LSSEM compensates for this by a superior conservation of momentum. In the literature it has been shown that the Least-Squares Spectral Collocation Method (LSSCM) leads to superior conservation of mass and momentum for the steady Stokes equations. Here, we extend the study to the time-dependent Stokes equations for an internal flow problem, where the domain is decomposed into different elements using the transfinite mapping of Gordon and Hall. To minimize the influence of round-off errors, we use QR decomposition for solving the resulting overdetermined algebraic systems instead of forming normal equations.

  4. The use of derivative and least-squares methods to analyse a polypharmaceutical product by UV spectrophotometry.

    Science.gov (United States)

    Jones, R; Orchard, M J; Hall, K

    1985-01-01

    Derivative UV spectrophotometry is well established for analysing pharmaceutical products containing more than one drug. By contrast, the least-squares method for over-determined systems is rarely used, because it is assumed that measurements at a large number of wavelengths are needed to obtain good results. Both methods have advantages, and their use in combination is useful for analysing polypharmaceuticals. A combination of derivative and least-squares methods was used to analyse tablets containing pseudoephedrine hydrochloride, triprolidine hydrochloride and dextromethorphan hydrobromide. Pseudoephedrine was determined by derivative spectrophotometry. The other drugs were determined by the least-squares method at higher wavelengths where pseudoephedrine does not absorb. Satisfactory precision for the least-squares method was obtained with a manual spectrometer measuring at six wavelengths and calculating the results with a microcomputer.
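
    The over-determined least-squares step (Beer's law at a handful of wavelengths) can be sketched as follows; the absorptivity values and measured absorbances are invented for illustration, not taken from the paper:

```python
import numpy as np

# Over-determined least-squares assay sketch: Beer's law A = E @ c, where E
# holds absorptivities of two drugs at six wavelengths and c holds their
# concentrations. All numbers are invented for illustration.

E = np.array([[0.90, 0.10],
              [0.75, 0.22],
              [0.55, 0.40],
              [0.38, 0.61],
              [0.20, 0.80],
              [0.08, 0.95]])                 # absorptivity matrix (6 wavelengths x 2 drugs)
c_true = np.array([0.4, 0.7])
A_meas = E @ c_true + 0.005 * np.random.default_rng(9).normal(size=6)

c_est, residual, *_ = np.linalg.lstsq(E, A_meas, rcond=None)
print(c_est)                                  # close to [0.4, 0.7]
```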

  5. Separating iterative solution model of generalized nonlinear dynamic least squares for data processing in building of digital earth

    Institute of Scientific and Technical Information of China (English)

    陶华学; 郭金运

    2003-01-01

    Data coming from different sources have different types and temporal states. Relations between one type of data and another, or between data and unknown parameters, are almost always nonlinear. Processing such data for building the digital earth with the classical least squares method or with ordinary nonlinear least squares is therefore neither accurate nor reliable. A generalized nonlinear dynamic least squares method was put forward to process data in building the digital earth. A separating solution model and an iterative calculation method were used to solve the generalized nonlinear dynamic least squares problem. In this way, a complex problem can be separated and then solved by converting it to two sub-problems, each of which has a single variable. Therefore the dimension of the unknown parameters can be reduced to half, which simplifies the original high-dimensional equations.

  6. Fishery landing forecasting using EMD-based least square support vector machine models

    Science.gov (United States)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and the least square support vector machine (LSSVM) is proposed to improve the accuracy of fishery landing forecasting. This hybrid is formulated specifically for modeling fishery landings, whose time series are highly nonlinear, non-stationary and seasonal and can hardly be properly modelled and accurately forecasted by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of fishery landing is obtained by aggregating all forecasting results of the sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.

  7. Identification of the Hammerstein model of a PEMFC stack based on least squares support vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Li, Chun-Hua; Zhu, Xin-Jian; Cao, Guang-Yi; Sui, Sheng; Hu, Ming-Ruo [Fuel Cell Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China)

    2008-01-03

    This paper reports a Hammerstein modeling study of a proton exchange membrane fuel cell (PEMFC) stack using least squares support vector machines (LS-SVM). A PEMFC is a complex nonlinear, multi-input and multi-output (MIMO) system that is hard to model by traditional methodologies. Because the generalization performance of LS-SVM is independent of the dimensionality of the input data, and the Hammerstein model has a particularly simple structure, a MIMO SVM-ARX (linear autoregression model with exogenous input) Hammerstein model is used to represent the PEMFC stack in this paper. The linear model parameters and the static nonlinearity can be obtained simultaneously by solving a set of linear equations followed by singular value decomposition (SVD). The simulation tests demonstrate that the obtained SVM-ARX Hammerstein model can efficiently approximate the dynamic behavior of a PEMFC stack. Furthermore, based on the proposed SVM-ARX Hammerstein model, valid control strategies such as predictive control and robust control can be developed. (author)

  8. An Emotion Detection System Based on Multi Least Squares Twin Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Divya Tomar

    2014-01-01

    Full Text Available Posttraumatic stress disorder (PTSD), bipolar manic disorder (BMD), obsessive compulsive disorder (OCD), depression, and suicide are some major problems existing in civilian and military life. Changes in emotion are responsible for such types of disease, so it is essential to develop a robust and reliable emotion detection system which is suitable for real-world applications. Apart from healthcare, the importance of automatically recognizing emotions from human speech has grown with the increasing role of spoken language interfaces in human-computer interaction applications. Detection of emotion in speech can be applied in a variety of situations to allocate limited human resources to clients with the highest levels of distress or need, such as in automated call centers or in a nursing home. In this paper, we use a novel multi least squares twin support vector machine classifier in order to detect seven different emotions: anger, happiness, sadness, anxiety, disgust, panic, and neutral. The experimental results indicate better performance of the proposed technique over other existing approaches and suggest that the proposed emotion detection system may be used for screening of mental status.

  9. Soft sensor modelling by time difference, recursive partial least squares and adaptive model updating

    Science.gov (United States)

    Fu, Y.; Yang, W.; Xu, O.; Zhou, L.; Wang, J.

    2017-04-01

    To investigate time-variant and nonlinear characteristics in industrial processes, a soft sensor modelling method based on time difference, moving-window recursive partial least squares (PLS) and adaptive model updating is proposed. In this method, time difference values of the input and output variables are used as training samples to construct the model, which can reduce the effects of the nonlinear characteristics on modelling accuracy and retain the advantages of the recursive PLS algorithm. To avoid excessively frequent model updating, a confidence value is introduced, which can be updated adaptively according to the results of the model performance assessment. Once the confidence value is updated, the model can be updated. The proposed method has been used to predict the 4-carboxybenzaldehyde (CBA) content in the purified terephthalic acid (PTA) oxidation reaction process. The results show that the proposed soft sensor modelling method can reduce computation effectively, improve prediction accuracy by making use of process information and reflect the process characteristics accurately.

  10. Spatter Rate Estimation of GMAW-S based on Partial Least Square Regression

    Institute of Scientific and Technical Information of China (English)

    CAI Yan; WANG Guang-wei; YANG Hai-lan; HUA Xue-ming; WU Yi-xiong

    2008-01-01

    This paper analyzes the drop transfer process in gas metal arc welding in short-circuit transfer mode (GMAW-S) in order to develop an optimized spatter rate model that can be used on line. According to thermodynamic characteristics and practical behavior, a complete arcing process is divided into three sub-processes: arc re-ignition, energy output and shorting preparation. The shorting process is then divided into drop spread, bridge sustention and bridge destabilization. Nine process variables and their distributions are analyzed based on welding experiments with high-speed photographs and synchronous current and voltage signals. The method of variation coefficient is used to reflect process consistency and to design characteristic parameters. Partial least square regression (PLSR) is utilized to set up the spatter rate model because of severe correlation among the above characteristic parameters. PLSR is a multivariate statistical analysis method in which regression modeling, data simplification and correlation analysis are included in a single algorithm. Experiment results show that the regression equation based on PLSR is effective for on-line prediction of the spatter rate under the corresponding welding conditions.

  11. Improving the Robustness and Stability of Partial Least Squares Regression for Near-infrared Spectral Analysis

    Institute of Scientific and Technical Information of China (English)

    SHAO, Xueguang; CHEN, Da; XU, Heng; LIU, Zhichao; CAI, Wensheng

    2009-01-01

    Partial least-squares (PLS) regression has been presented as a powerful tool for spectral quantitative measurement. However, the improvement of the robustness and stability of PLS models is still needed, because it is difficult to build a stable model when complex samples are analyzed or outliers are contained in the calibration data set. To achieve the purpose, a robust ensemble PLS technique based on probability resampling was proposed, which is named RE-PLS. In the proposed method, a probability is firstly obtained for each calibration sample from its residual in a robust regression. Then, multiple PLS models are constructed based on probability resampling. At last, the multiple PLS models are used to predict unknown samples by taking the average of the predictions from the multiple models as final prediction result. To validate the effectiveness and universality of the proposed method, it was applied to two different sets of NIR spectra. The results show that RE-PLS can not only effectively avoid the interference of outliers but also enhance the precision of prediction and the stability of PLS regression. Thus, it may provide a useful tool for multivariate calibration with multiple outliers.

  12. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p^2 + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
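
    The single-pass accumulation of the normal equations described here can be sketched generically (a streaming build of X'X and X'y followed by a solve; the data generator and weights are assumptions, not the NDMMF implementation):

```python
import numpy as np

# Single-pass weighted OLS sketch: accumulate X'WX and X'Wy one observation
# at a time, so the full design matrix never has to be stored, then solve the
# normal equations. Memory use is O(p^2), independent of the number of rows.
# The row generator and unit weights are toy assumptions.

def streaming_wols(rows, p):
    XtWX = np.zeros((p, p))
    XtWy = np.zeros(p)
    for x_row, y_val, w in rows:               # one observation at a time
        XtWX += w * np.outer(x_row, x_row)
        XtWy += w * x_row * y_val
    return np.linalg.solve(XtWX, XtWy)

def toy_rows(n=10000, p=4, seed=10):
    rng = np.random.default_rng(seed)
    beta = np.arange(1, p + 1, dtype=float)
    for _ in range(n):
        x = rng.normal(size=p)
        y = x @ beta + rng.normal(scale=0.1)
        yield x, y, 1.0                        # unit weights here

print(streaming_wols(toy_rows(), 4))           # close to [1, 2, 3, 4]
```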

  13. The LASSO and sparse least square regression methods for SNP selection in predicting quantitative traits.

    Science.gov (United States)

    Feng, Zeny Z; Yang, Xiaojian; Subedi, Sanjeena; McNicholas, Paul D

    2012-01-01

    Recent work concerning quantitative traits of interest has focused on selecting a small subset of single nucleotide polymorphisms (SNPs) from amongst the SNPs responsible for the phenotypic variation of the trait. When considered as covariates, the large number of variables (SNPs) and their association with those in close proximity pose challenges for variable selection. The features of sparsity and shrinkage of regression coefficients of the least absolute shrinkage and selection operator (LASSO) method appear attractive for SNP selection. Sparse partial least squares (SPLS) is also appealing as it combines the features of sparsity in subset selection and dimension reduction to handle correlations amongst SNPs. In this paper we investigate application of the LASSO and SPLS methods for selecting SNPs that predict quantitative traits. We evaluate the performance of both methods with different criteria and under different scenarios using simulation studies. Results indicate that these methods can be effective in selecting SNPs that predict quantitative traits but are limited by some conditions. Both methods perform similarly overall but each exhibit advantages over the other in given situations. Both methods are applied to Canadian Holstein cattle data to compare their performance.

  14. Least-squares finite-element method for shallow-water equations with source terms

    Institute of Scientific and Technical Information of China (English)

    Shin-Jye Liang; Tai-Wen Hsu

    2009-01-01

    Numerical solution of the shallow-water equations (SWE) has been a challenging task because of their nonlinear hyperbolic nature, which admits discontinuous solutions, and the need to satisfy the C-property. The presence of source terms in the momentum equations, such as the bottom slope and bed friction, compounds the difficulties further. In this paper, a least-squares finite-element method for the space discretization and a θ-method for the time integration is developed for the 2D non-conservative SWE including the source terms. Advantages of the method include: the source terms can be approximated easily with interpolation functions, no upwind scheme is needed, and the resulting system of equations is symmetric and positive-definite and can therefore be solved efficiently with the conjugate gradient method. The method is applied to steady and unsteady flows, subcritical and transcritical flow over a bump, 1D and 2D circular dam-breaks, waves past a circular cylinder, and waves past a hump. Computed results show good C-property and conservation properties and compare well with exact solutions and other numerical results for flows with weak and mild gradient changes, but lead to inaccurate predictions for flows with strong gradient changes and discontinuities.

  15. New predictive control algorithms based on Least Squares Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; SU Hong-ye; CHU Jian

    2005-01-01

    Intended for industrial processes with different degrees of nonlinearity, the two predictive control algorithms presented in this paper are based on Least Squares Support Vector Machine (LS-SVM) models. For a weakly nonlinear system, the system model is built by using LS-SVM with a linear kernel function, and then the obtained linear LS-SVM model is transformed into a linear input-output relation of the controlled system. For a strongly nonlinear system, however, the off-line model of the controlled system is built by using LS-SVM with a Radial Basis Function (RBF) kernel. The obtained nonlinear LS-SVM model is linearized at each sampling instant during system operation, after which an on-line linear input-output model of the system is built. Based on the obtained linear input-output model, the Generalized Predictive Control (GPC) algorithm is employed to implement predictive control of the controlled plant in both algorithms. Simulation results obtained by implementing the presented algorithms on two different industrial process models reveal the effectiveness and merit of both algorithms.

  16. Least squares evaluations for form and profile errors of ellipse using coordinate data

    Science.gov (United States)

    Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

    2016-09-01

    To improve the measurement and evaluation of the form error of an elliptic section, an evaluation method based on least squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is considered the main parameter for evaluating the machining quality of surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci, rather than the centre of the ellipse, are used as the evaluation benchmarks; this allows a tolerance range to be evaluated accurately with the form error and profile error of the workpiece treated separately. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, which is well suited for separating the two errors in a standard program. Finally, the evaluation method for the form and profile errors of the ellipse is applied to the measurement of the skirt line of a piston, and the results indicate the effectiveness of the evaluation. This approach provides new evaluation indicators for the measurement of form and profile errors of an ellipse, is found to have better accuracy, and can thus be used to address the difficulty of measuring and evaluating the piston in industrial production.
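
    As background for the LS fitting step above, the sketch below shows the most basic algebraic least-squares conic fit from coordinate data (minimize ||Da|| subject to ||a|| = 1 via the SVD). It is a generic illustration on simulated points, not the paper's form/profile-error evaluation program based on the major axis and foci.

```python
# Algebraic least-squares conic fit: D a ~ 0 with ||a|| = 1.
import numpy as np

def fit_conic(x, y):
    D = np.column_stack([x**2, x*y, y**2, x, y, np.ones_like(x)])
    # The right singular vector of the smallest singular value minimizes ||D a||.
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]                      # conic coefficients [A, B, C, D, E, F]

# noisy points on an ellipse centred at (1, 2) with semi-axes 3 and 2 (assumed)
rng = np.random.default_rng(2)
t = np.linspace(0.0, 2.0 * np.pi, 100)
x = 1.0 + 3.0 * np.cos(t) + 0.01 * rng.normal(size=t.size)
y = 2.0 + 2.0 * np.sin(t) + 0.01 * rng.normal(size=t.size)

A, B, C, D_, E, F = fit_conic(x, y)
print("discriminant B^2 - 4AC =", B**2 - 4*A*C)   # negative for an ellipse
```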

  17. Improved prediction of drug-target interactions using regularized least squares integrating with kernel fusion technique

    Energy Technology Data Exchange (ETDEWEB)

    Hao, Ming; Wang, Yanli, E-mail: ywang@ncbi.nlm.nih.gov; Bryant, Stephen H., E-mail: bryant@ncbi.nlm.nih.gov

    2016-02-25

    Identification of drug-target interactions (DTI) is a central task in drug discovery processes. In this work, a simple but effective regularized least squares algorithm integrating nonlinear kernel fusion (RLS-KF) is proposed to perform DTI predictions. Using benchmark DTI datasets, our proposed algorithm achieves state-of-the-art results, with areas under the precision–recall curve (AUPR) of 0.915, 0.925, 0.853 and 0.909 for enzymes, ion channels (IC), G protein-coupled receptors (GPCR) and nuclear receptors (NR) based on 10-fold cross-validation. The performance can be further improved by using a recalculated kernel matrix, especially for the small set of nuclear receptors, with an AUPR of 0.945. Importantly, most of the top-ranked interaction predictions can be validated by experimental data reported in the literature, bioassay results in the PubChem BioAssay database, as well as other previous studies. Our analysis suggests that the proposed RLS-KF is helpful for studying DTI, drug repositioning as well as polypharmacology, and may help to accelerate drug discovery by identifying novel drug targets. - Graphical abstract: Flowchart of the proposed RLS-KF algorithm for drug-target interaction predictions. - Highlights: • A nonlinear kernel fusion algorithm is proposed to perform drug-target interaction predictions. • Performance can be further improved by using the recalculated kernel. • Top predictions can be validated by experimental data.
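
    The regularized least squares core of such a pipeline has a closed form, alpha = (K + λI)⁻¹ y. The sketch below applies it to a fused kernel formed as a plain weighted sum of two stand-in similarity kernels; the linear weighting, random kernels and λ are assumptions and do not reproduce the paper's nonlinear kernel fusion or its benchmark data.

```python
# Regularized least squares on a (naively) fused kernel.
import numpy as np

def rls_fit(K, y, lam=0.5):
    # alpha = (K + lam*I)^{-1} y
    return np.linalg.solve(K + lam * np.eye(K.shape[0]), y)

rng = np.random.default_rng(3)
n = 30

def random_similarity(n):
    # stand-in symmetric, well-conditioned similarity matrix (assumption)
    M = np.abs(rng.normal(size=(n, n)))
    return (M + M.T) / 2.0 + n * np.eye(n)

K_drug = random_similarity(n)              # e.g. chemical-structure kernel
K_target = random_similarity(n)            # e.g. sequence-similarity kernel
K_fused = 0.6 * K_drug + 0.4 * K_target    # simple weighted fusion (assumed weights)

y = rng.integers(0, 2, n).astype(float)    # 1 = known interaction
alpha = rls_fit(K_fused, y)
scores = K_fused @ alpha                   # predicted interaction scores
print(np.round(scores[:5], 3))
```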

  18. Multi-classification algorithm and its realization based on least square support vector machine algorithm

    Institute of Scientific and Technical Information of China (English)

    Fan Youping; Chen Yunping; Sun Wansheng; Li Yu

    2005-01-01

    As a new type of learning machine developed on the basis of statistical learning theory, the support vector machine (SVM) plays an important role in knowledge discovery and knowledge updating by constructing a non-linear optimal classifier. However, realizing an SVM requires solving a quadratic programming problem under inequality constraints, which becomes computationally difficult as the number of training samples grows. Besides, the standard SVM is incapable of tackling multi-classification. To overcome these bottlenecks to the wider use of SVM, a training algorithm is presented that adopts the least squares SVM (LS-SVM) and introduces a modifying variable which changes the inequality constraints into equality constraints, so that the quadratic programming problem is converted into solving a linear system of equations and the calculation is simplified. With regard to multi-classification, an LS-SVM applicable to multi-classification is derived. Finally, the efficiency of the algorithm is checked using the classic Circle-in-square and Two-spirals benchmarks to measure the performance of the classifier.

  19. A modified Generalized Least Squares method for large scale nuclear data evaluation

    Science.gov (United States)

    Schnabel, Georg; Leeb, Helmut

    2017-01-01

    Nuclear data evaluation aims to provide estimates and uncertainties in the form of covariance matrices of cross sections and related quantities. Many practitioners use the Generalized Least Squares (GLS) formulas to combine experimental data and results of model calculations in order to determine reliable estimates and covariance matrices. A prerequisite to apply the GLS formulas is the construction of a prior covariance matrix for the observables from a set of model calculations. Modern nuclear model codes are able to provide predictions for a large number of observables. However, the inclusion of all observables may lead to a prior covariance matrix of intractable size. Therefore, we introduce mathematically equivalent versions of the GLS formulas to avoid the construction of the prior covariance matrix. Experimental data can be incrementally incorporated into the evaluation process, hence there is no upper limit on their amount. We demonstrate the modified GLS method in a tentative evaluation involving about three million observables using the code TALYS. The revised scheme is well suited as a building block of a database application providing evaluated nuclear data. Updating with new experimental data is feasible and users can query estimates and correlations of arbitrary subsets of the observables stored in the database.
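
    For reference, the standard GLS update that the paper reformulates combines a prior estimate x0 with covariance C and experimental data y with covariance B through a sensitivity matrix A. The toy dimensions below are assumptions chosen only to show the algebra; the paper's point is precisely to avoid forming the full prior covariance when there are millions of observables.

```python
# Standard GLS update on a toy problem.
import numpy as np

rng = np.random.default_rng(4)
n_obs, n_exp = 5, 3
x0 = rng.normal(size=n_obs)                 # prior estimate (e.g. model cross sections)
C = 0.10 * np.eye(n_obs)                    # prior covariance
A = rng.normal(size=(n_exp, n_obs))         # maps observables to measured quantities
B = 0.05 * np.eye(n_exp)                    # experimental covariance
y = A @ x0 + rng.normal(scale=0.05, size=n_exp)

G = C @ A.T @ np.linalg.inv(A @ C @ A.T + B)    # gain matrix
x1 = x0 + G @ (y - A @ x0)                      # updated estimate
C1 = C - G @ A @ C                              # updated covariance
print("posterior variances:", np.round(np.diag(C1), 4))
```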

  20. Plane-Wave Least-Squares Reverse Time Migration for Rugged Topography

    Institute of Scientific and Technical Information of China (English)

    Jianping Huang; Chuang Li; Rongrong Wang; Qingyang Li

    2015-01-01

    We present a method based on least-squares reverse time migration with plane-wave encoding (P-LSRTM) for rugged topography. Instead of modifying the wavefield before migration, we modify the plane-wave encoding function and fill the area above the rugged topography with a constant velocity in the model, so that P-LSRTM can be performed directly from the rugged surface in the same way as shot-domain reverse time migration. In order to improve efficiency and reduce I/O (input/output) cost, dynamic encoding and hybrid encoding strategies are implemented. Numerical tests on the SEG rugged topography model show that P-LSRTM can suppress migration artifacts in the migration image and efficiently compensate amplitudes in the middle-to-deep part. Without data correction, P-LSRTM can produce a satisfactory near-surface image if an accurate near-surface velocity model is available. Moreover, the pre-stack P-LSRTM is more robust than conventional RTM in the presence of migration velocity errors.

  1. Least-squares reverse time migration of marine data with frequency-selection encoding

    KAUST Repository

    Dai, Wei

    2013-06-24

    The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share the same receiver locations, thus limiting the usage to seismic surveys with a fixed spread geometry. We implement a frequency-selection encoding strategy that accommodates data with a marine streamer geometry. The encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique nonoverlapping frequency content, and the receivers can distinguish the wavefield from each shot with a unique frequency band. Because the encoding functions are orthogonal to each other, there will be no crosstalk between different shots during modeling and migration. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is comparable to conventional RTM for the Marmousi2 model and a marine data set recorded in the Gulf of Mexico. With more iterations, the LSRTM image quality is further improved by suppressing migration artifacts, balancing reflector amplitudes, and enhancing the spatial resolution. We conclude that LSRTM with frequency-selection is an efficient migration method that can sometimes produce more focused images than conventional RTM. © 2013 Society of Exploration Geophysicists.

  2. Attenuation compensation for least-squares reverse time migration using the viscoacoustic-wave equation

    KAUST Repository

    Dutta, Gaurav

    2014-10-01

    Strong subsurface attenuation leads to distortion of amplitudes and phases of seismic waves propagating inside the earth. Conventional acoustic reverse time migration (RTM) and least-squares reverse time migration (LSRTM) do not account for this distortion, which can lead to defocusing of migration images in highly attenuative geologic environments. To correct for this distortion, we used a linearized inversion method, denoted as Qp-LSRTM. During the least-squares iterations, we used a linearized viscoacoustic modeling operator for forward modeling. The adjoint equations were derived using the adjoint-state method for back propagating the residual wavefields. The merit of this approach compared with conventional RTM and LSRTM was that Qp-LSRTM compensated for the amplitude loss due to attenuation and could produce images with better balanced amplitudes and more resolution below highly attenuative layers. Numerical tests on synthetic and field data illustrated the advantages of Qp-LSRTM over RTM and LSRTM when the recorded data had strong attenuation effects. Similar to standard LSRTM, the sensitivity tests for background velocity and Qp errors revealed that the liability of this method is the requirement for smooth and accurate migration velocity and attenuation models.

  3. Multidimensional model of apathy in older adults using partial least squares--path modeling.

    Science.gov (United States)

    Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine

    2016-06-01

    Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine factors contributing to the three different dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contribution of these different factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. Regarding behavioral apathy, we again found similar latent variables, except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed.

  4. Least-squares reverse time migration with local Radon-based preconditioning

    KAUST Repository

    Dutta, Gaurav

    2017-03-08

    Least-squares migration (LSM) can produce images with better balanced amplitudes and fewer artifacts than standard migration. The conventional objective function used for LSM minimizes the L2-norm of the data residual between the predicted and the observed data. However, for field-data applications in which the recorded data are noisy and undersampled, the conventional formulation of LSM fails to provide the desired uplift in the quality of the inverted image. We have developed a least-squares reverse time migration (LSRTM) method using local Radon-based preconditioning to overcome the low signal-to-noise ratio (S/N) problem of noisy or severely undersampled data. A high-resolution local Radon transform of the reflectivity is used, and sparseness constraints are imposed on the inverted reflectivity in the local Radon domain. The sparseness constraint is that the inverted reflectivity is sparse in the Radon domain and each location of the subsurface is represented by a limited number of geologic dips. The forward and the inverse mapping of the reflectivity to the local Radon domain and vice versa is done through 3D Fourier-based discrete Radon transform operators. The weights for the preconditioning are chosen to be varying locally based on the relative amplitudes of the local dips or assigned using quantile measures. Numerical tests on synthetic and field data validate the effectiveness of our approach in producing images with good S/N and fewer aliasing artifacts when compared with standard RTM or standard LSRTM.

  5. Intelligent control of a sensor-actuator system via kernelized least-squares policy iteration.

    Science.gov (United States)

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces.
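
    The projection step is easy to illustrate on its own: draw a random matrix, map the high-dimensional features through it, and solve an ordinary least-squares problem in the reduced space. The regression target below merely stands in for the value-function fitting performed inside LSPI; dimensions and data are assumptions.

```python
# Random projection followed by least squares in the low-dimensional subspace.
import numpy as np

rng = np.random.default_rng(5)
n_samples, d_high, d_low = 500, 2000, 50
Phi = rng.normal(size=(n_samples, d_high))            # high-dimensional features
w_true = np.zeros(d_high)
w_true[:20] = 1.0
target = Phi @ w_true + 0.1 * rng.normal(size=n_samples)

R = rng.normal(size=(d_high, d_low)) / np.sqrt(d_low) # random projection matrix
Phi_low = Phi @ R                                     # projected features
w_low, *_ = np.linalg.lstsq(Phi_low, target, rcond=None)
print("residual norm:", np.linalg.norm(Phi_low @ w_low - target))
```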

  6. Scatter factor confidence interval estimate of least square maximum entropy quantile function for small samples

    Institute of Scientific and Technical Information of China (English)

    Wu Fuxian; Wen Weidong

    2016-01-01

    The classic maximum entropy quantile function method (CMEQFM) based on probability weighted moments (PWMs) can accurately estimate the quantile function of a random variable from small samples, but not from very small samples. To overcome this weakness, the least squares maximum entropy quantile function method (LSMEQFM) and a version with a constraint condition (LSMEQFMCC) are proposed. To improve the confidence level of the quantile function estimation, the scatter factor method is combined with the maximum entropy method to estimate the confidence interval of the quantile function. Comparisons of these methods on two common probability distributions and one engineering application show that CMEQFM estimates the quantile function accurately on small samples but inaccurately on very small samples (10 samples); LSMEQFM and LSMEQFMCC can be successfully applied to very small samples; with the constraint condition on the quantile function taken into account, LSMEQFMCC is more stable and computationally accurate than LSMEQFM; and the scatter factor confidence interval estimation method based on LSMEQFM or LSMEQFMCC has good estimation accuracy for the confidence interval of the quantile function, with the LSMEQFMCC-based version being the most stable and accurate method on very small samples (10 samples).

  7. Non-linear Least-squares Fitting in IDL with MPFIT

    Science.gov (United States)

    Markwardt, C. B.

    2009-09-01

    MPFIT is a port to IDL of the non-linear least squares fitting program MINPACK-1. MPFIT inherits the robustness of the original FORTRAN version of MINPACK-1, but is optimized for performance and convenience in IDL. In addition to the main fitting engine, MPFIT, several specialized functions are provided to fit 1-D curves and 2-D images, 1-D and 2-D peaks, and interactive fitting from the IDL command line. Several constraints can be applied to model parameters, including fixed constraints, simple bounding constraints, and ``tying'' the value to another parameter. Several data-weighting methods are allowed, and the parameter covariance matrix is computed. Extensive diagnostic capabilities are available during the fit, via a call-back subroutine, and after the fit is complete. Several different forms of documentation are provided, including a tutorial, reference pages, and frequently asked questions. The package has been translated to C and Python as well. The full IDL and C packages can be found at http://purl.com/net/mpfit.
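
    MPFIT itself is an IDL package (with C and Python ports); as a rough analogue rather than MPFIT's own API, the sketch below uses SciPy's least_squares, whose method="lm" wraps MINPACK while the default trust-region method adds bound support, to fit a 1-D Gaussian peak with weighted residuals and simple parameter bounds. The model, data and bounds are illustrative assumptions.

```python
# Weighted non-linear least-squares fit of a Gaussian peak (SciPy analogue).
import numpy as np
from scipy.optimize import least_squares

def gaussian(p, x):
    amp, centre, width = p
    return amp * np.exp(-0.5 * ((x - centre) / width) ** 2)

def residuals(p, x, y, sigma):
    return (gaussian(p, x) - y) / sigma        # weighted residuals

rng = np.random.default_rng(6)
x = np.linspace(-5.0, 5.0, 200)
y = gaussian([2.0, 0.5, 1.2], x) + rng.normal(scale=0.1, size=x.size)
sigma = np.full_like(x, 0.1)

# Bounds require the default trust-region method ("trf"); method="lm" would
# use the unconstrained MINPACK Levenberg-Marquardt instead.
fit = least_squares(residuals, x0=[1.0, 0.0, 1.0], args=(x, y, sigma),
                    bounds=([0.0, -5.0, 0.1], [10.0, 5.0, 5.0]))
print("best-fit parameters:", np.round(fit.x, 3))
```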

  8. Solution of shallow-water equations using least-squares finite-element method

    Institute of Scientific and Technical Information of China (English)

    Shin-Jye Liang; Jyh-Haw Tang; Ming-Shun Wu

    2008-01-01

    A least-squares finite-element method (LSFEM) for the non-conservative shallow-water equations is presented. The model is capable of handling complex topography, steady and unsteady flows, subcritical and supercritical flows, and flows with smooth and sharp gradient changes. Advantages of the model include: (1) source terms, such as the bottom slope, surface stresses and bed friction, can be treated easily without any special treatment; (2) no upwind scheme is needed; (3) a single approximating space can be used for all variables, and the choice of approximating space is not subject to the Ladyzhenskaya-Babuska-Brezzi (LBB) condition; and (4) the resulting system of equations is symmetric and positive-definite (SPD), and can be solved efficiently with the preconditioned conjugate gradient method. The model is verified with flow over a bump, tide-induced flow, and dam-break problems. Computed results are compared with analytic solutions or other numerical results, and show that the model is conservative and accurate. The model is then used to simulate flow past a circular cylinder. Important flow characteristics, such as the variation of the water surface around the cylinder and vortex shedding behind the cylinder, are investigated. Computed results compare well with experimental data and other numerical results.

  9. [Quantitative analysis of alloy steel based on laser induced breakdown spectroscopy with partial least squares method].

    Science.gov (United States)

    Cong, Zhi-Bo; Sun, Lan-Xiang; Xin, Yong; Li, Yang; Qi, Li-Feng; Yang, Zhi-Jia

    2014-02-01

    In the present paper both the partial least squares (PLS) method and the calibration curve (CC) method are used to quantitatively analyze laser-induced breakdown spectroscopy data obtained from standard alloy steel samples. Both major and trace elements were quantitatively analyzed. Comparing the results of the two calibration methods yields some useful conclusions: for major elements, the PLS method is better than the CC method in quantitative analysis; more importantly, for trace elements, the CC method cannot give quantitative results because of the extremely weak characteristic spectral lines, whereas the PLS method still has good quantitative ability. The regression coefficients of the PLS method are compared with the original spectral data containing background interference to explain the advantage of the PLS method in LIBS quantitative analysis. The results show that the PLS method applied to laser-induced breakdown spectroscopy is suitable for quantitative analysis of trace elements such as C in the metallurgical industry.
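
    A generic PLS calibration of this kind can be sketched with scikit-learn: rows of X are spectra and y is the element concentration. The synthetic single-line spectra, noise level and number of latent components below are assumptions, not the paper's LIBS data.

```python
# PLS regression calibration on simulated spectra.
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(7)
n_samples, n_channels = 60, 500
pure_line = np.exp(-0.5 * ((np.arange(n_channels) - 250) / 5.0) ** 2)  # one emission line
conc = rng.uniform(0.01, 1.0, n_samples)                               # concentrations (a.u.)
X = np.outer(conc, pure_line) + 0.02 * rng.normal(size=(n_samples, n_channels))

pls = PLSRegression(n_components=3).fit(X[:40], conc[:40])   # calibration set
pred = pls.predict(X[40:]).ravel()                           # validation set
rmse = np.sqrt(np.mean((pred - conc[40:]) ** 2))
print(f"held-out RMSE: {rmse:.4f}")
```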

  10. Identification Method by Least Squares Applied On a Level Didactic Plant Viafoundation Fieldbus Protocol

    Directory of Open Access Journals (Sweden)

    Murillo Ferreira Dos Santos

    2014-05-01

    Full Text Available The industrial field is a continually growing area, which drives improvements in the techniques used in manufacturing. As a consequence, level systems have become an important part of many plants and need to be studied in more detail to obtain the optimal controlled response. It is known that a good controlled response is obtained when the system is identified correctly. The objective of this paper is therefore to present a didactic project on modeling and identification applied to a level system, using a didactic plant with the Foundation Fieldbus protocol developed by the SMAR® enterprise, belonging to CEFET-MG, Campus III, Leopoldina, Brazil. The experiments used the least squares method to identify the system dynamics, with the results obtained through the OPC toolbox of MATLAB/Simulink® to establish communication between the computer and the plant. The modeling and identification results were satisfactory, showing that the applied technique can be used to approximate the level dynamics of the system by a second-order transfer function.
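
    The least-squares identification step itself amounts to stacking past inputs and outputs into a regressor matrix and solving the normal equations. For brevity the sketch below fits a first-order discrete model (the paper approximates the plant by a second-order transfer function); the simulated plant, noise level and coefficients are assumptions, and no Fieldbus/OPC communication is involved.

```python
# Least-squares identification of y[k] = a*y[k-1] + b*u[k-1] from input/output data.
import numpy as np

rng = np.random.default_rng(8)
N = 300
u = rng.uniform(0.0, 1.0, N)                  # valve command (input)
y = np.zeros(N)
for k in range(1, N):                         # simulated "true" plant: a=0.9, b=0.5
    y[k] = 0.9 * y[k-1] + 0.5 * u[k-1] + 0.01 * rng.normal()

Phi = np.column_stack([y[:-1], u[:-1]])       # regressor matrix
theta, *_ = np.linalg.lstsq(Phi, y[1:], rcond=None)
a_hat, b_hat = theta
print(f"identified model: y[k] = {a_hat:.3f} y[k-1] + {b_hat:.3f} u[k-1]")
```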

  11. The comparison of robust partial least squares regression with robust principal component regression on a real data set

    Science.gov (United States)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed, its efficiency, and the fact that its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR (RPLSR) method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set and to compare the two methods on an inflation model for Turkey. The methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-Validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  12. An efficient recursive least square-based condition monitoring approach for a rail vehicle suspension system

    Science.gov (United States)

    Liu, X. Y.; Alfi, S.; Bruni, S.

    2016-06-01

    A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. The approach is based on the recursive least squares (RLS) algorithm applied to a deterministic 'input-output' model. RLS has Kalman-filtering features and is able to identify unknown parameters from a noisy dynamic system by memorising the correlation properties of the variables. The identification of the suspension parameters is achieved by learning the relationship between excitation and response in the vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements', considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of this strategy for real applications. Results of the parameter identification indicate that the estimated suspension parameters are consistent with or close to the reference values. These results provide supporting evidence that this fault diagnosis technique is capable of paving the way for future vehicle condition monitoring systems.
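
    The RLS recursion at the heart of such a scheme updates the parameter estimate and its covariance with every new regressor/measurement pair instead of refitting from scratch. The sketch below shows the generic update with a forgetting factor; the regressors, tracked parameters and forgetting factor are assumptions, not the suspension model of the paper.

```python
# Generic recursive least squares with forgetting factor.
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One RLS update: theta is the parameter estimate, P its covariance."""
    phi = phi.reshape(-1, 1)
    K = P @ phi / (lam + phi.T @ P @ phi)          # gain vector
    theta = theta + (K * (y - phi.T @ theta)).ravel()
    P = (P - K @ phi.T @ P) / lam                  # covariance update
    return theta, P

# toy usage: track the parameters of y = 2*x1 - x2 from streaming data
rng = np.random.default_rng(9)
theta = np.zeros(2)
P = 1000.0 * np.eye(2)
for _ in range(200):
    phi = rng.normal(size=2)
    y = 2.0 * phi[0] - 1.0 * phi[1] + 0.01 * rng.normal()
    theta, P = rls_step(theta, P, phi, y)
print("estimated parameters:", np.round(theta, 3))
```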

  13. Quantitative analysis of mixed hydrofluoric and nitric acids using Raman spectroscopy with partial least squares regression.

    Science.gov (United States)

    Kang, Gumin; Lee, Kwangchil; Park, Haesung; Lee, Jinho; Jung, Youngjean; Kim, Kyoungsik; Son, Boongho; Park, Hyoungkuk

    2010-06-15

    Mixed hydrofluoric and nitric acids are widely used as a good etchant for the pickling process of stainless steels. The cost reduction and the procedure optimization in the manufacturing process can be facilitated by optically detecting the concentration of the mixed acids. In this work, we developed a novel method which allows us to obtain the concentrations of hydrofluoric acid (HF) and nitric acid (HNO3) mixture samples with high accuracy. The experiments were carried out for the mixed acids which consist of the HF (0.5-3 wt%) and the HNO3 (2-12 wt%) at room temperature. Fourier Transform Raman spectroscopy has been utilized to measure the concentration of the mixed acids HF and HNO3, because the mixture sample has several strong Raman bands caused by the vibrational mode of each acid in this spectrum. The calibration of spectral data has been performed using the partial least squares regression method which is ideal for local range data treatment. Several figures of merit (FOM) were calculated using the concept of net analyte signal (NAS) to evaluate performance of our methodology.

  14. A Novel Method for Flatness Pattern Recognition via Least Squares Support Vector Regression

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    To meet the new requirements of developing flatness control theory and technology, cubic patterns were introduced on the basis of the traditional linear, quadratic and quartic flatness basic patterns. Linear, quadratic, cubic and quartic Legendre orthogonal polynomials were adopted to express the flatness basic patterns. In order to overcome the defects of existing recognition methods based on fuzzy logic, neural networks and support vector regression (SVR), a novel flatness pattern recognition method based on least squares support vector regression (LS-SVR) was proposed. On this basis, to determine the hyper-parameters of LS-SVR effectively and to enhance the recognition accuracy and generalization performance of the model, a particle swarm optimization algorithm with the leave-one-out (LOO) error as the fitness function was adopted. To overcome the high computational complexity of the naive cross-validation algorithm, a fast cross-validation algorithm was introduced to calculate the LOO error of LS-SVR. Results of experiments on flatness data calculated from theory and on flatness signals measured in practice on a 900HC cold-rolling mill demonstrate that the proposed approach can distinguish the types and determine the magnitudes of flatness defects effectively, with high accuracy, high speed and strong generalization ability.

  15. Least-squares fitting of time-domain signals for Fourier transform mass spectrometry.

    Science.gov (United States)

    Aushev, Tagir; Kozhinov, Anton N; Tsybin, Yury O

    2014-07-01

    To advance Fourier transform mass spectrometry (FTMS)-based molecular structure analysis, corresponding development of the FTMS signal processing methods and instrumentation is required. Here, we demonstrate utility of a least-squares fitting (LSF) method for analysis of FTMS time-domain (transient) signals. We evaluate the LSF method in the analysis of single- and multiple-component experimental and simulated ion cyclotron resonance (ICR) and Orbitrap FTMS transient signals. Overall, the LSF method allows one to estimate the analytical limits of the conventional instrumentation and signal processing methods in FTMS. Particularly, LSF provides accurate information on initial phases of sinusoidal components in a given transient. For instance, the phase distribution obtained for a statistical set of experimental transients reveals the effect of the first data-point problem in FT-ICR MS. Additionally, LSF might be useful to improve the implementation of the absorption-mode FT spectral representation for FTMS applications. Finally, LSF can find utility in characterization and development of filter-diagonalization method (FDM) MS.
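
    One concrete piece of such an analysis is straightforward to show: for a component of known frequency, the amplitude and initial phase follow from a linear least-squares fit of y ≈ a·cos(2πft) + b·sin(2πft), with phase = atan2(-b, a). The single-component toy transient below is an assumption; real FTMS fits handle many components and must estimate the frequencies as well.

```python
# Linear least-squares estimate of amplitude and initial phase at a known frequency.
import numpy as np

fs, T, f0 = 1000.0, 1.0, 123.4                 # sampling rate (Hz), duration (s), frequency (Hz)
t = np.arange(0.0, T, 1.0 / fs)
rng = np.random.default_rng(10)
signal = 1.7 * np.cos(2 * np.pi * f0 * t + 0.6) + 0.2 * rng.normal(size=t.size)

G = np.column_stack([np.cos(2 * np.pi * f0 * t), np.sin(2 * np.pi * f0 * t)])
(a, b), *_ = np.linalg.lstsq(G, signal, rcond=None)
amplitude = np.hypot(a, b)
phase = np.arctan2(-b, a)                      # initial phase of the cosine component
print(f"amplitude ~ {amplitude:.3f}, phase ~ {phase:.3f} rad")   # expect ~1.7 and ~0.6
```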

  16. Prediction of olive oil sensory descriptors using instrumental data fusion and partial least squares (PLS) regression.

    Science.gov (United States)

    Borràs, Eva; Ferré, Joan; Boqué, Ricard; Mestres, Montserrat; Aceña, Laura; Calvo, Angels; Busto, Olga

    2016-08-01

    Headspace-Mass Spectrometry (HS-MS), Fourier Transform Mid-Infrared spectroscopy (FT-MIR) and UV-Visible spectrophotometry (UV-vis) instrumental responses have been combined to predict virgin olive oil sensory descriptors. 343 olive oil samples analyzed during four consecutive harvests (2010-2014) were used to build multivariate calibration models using partial least squares (PLS) regression. The reference values of the sensory attributes were provided by expert assessors from an official taste panel. The instrumental data were modeled individually and also using data fusion approaches. The use of fused data with both low- and mid-level of abstraction improved PLS predictions for all the olive oil descriptors. The best PLS models were obtained for two positive attributes (fruity and bitter) and two defective descriptors (fusty and musty), all of them using data fusion of MS and MIR spectral fingerprints. Although good predictions were not obtained for some sensory descriptors, the results are encouraging, specially considering that the legal categorization of virgin olive oils only requires the determination of fruity and defective descriptors.

  17. First-order system least-squares for the Helmholtz equation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.

    1996-12-31

    We apply the FOSLS methodology to the exterior Helmholtz equation Δp + k²p = 0. Several least-squares functionals, some of which include both H⁻¹(Ω) and L²(Ω) terms, are examined. We show that in a special subspace of [H(div; Ω) ∩ H(curl; Ω)] × H¹(Ω), each of these functionals is equivalent, independently of k, to a scaled H¹(Ω) norm of p and u = ∇p. This special subspace does not include the oscillatory near-nullspace components c·e^{ik(αx+βy)}, where c is a complex vector and α² + β² = 1. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair (α, β), and which approximate c·e^{ik(αx+βy)} well on the coarser levels where bilinears cannot. We use several pairs of these parameters on each of the coarser levels, so that several coarse-grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution of Maxwell's equations will also be presented.
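
    As an illustration of the kind of functional involved, one generic L²-only first-order system least-squares functional for u = ∇p can be written as below; this assumed form is for orientation only and is not necessarily one of the scaled or H⁻¹-weighted variants the abstract examines.

```latex
% Assumed generic FOSLS functional for the first-order Helmholtz system
%   u - \nabla p = 0, \qquad \nabla\cdot u + k^2 p = 0,
% measured in L^2(\Omega) norms only:
G(u, p) \;=\; \| u - \nabla p \|_{L^2(\Omega)}^2
        \;+\; \| \nabla\cdot u + k^2 p \|_{L^2(\Omega)}^2 .
```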

  18. Denoising spectroscopic data by means of the improved Least-Squares Deconvolution method

    CERN Document Server

    Tkachenko, A; Tsymbal, V; Aerts, C; Kochukhov, O; Debosscher, J

    2013-01-01

    The MOST, CoRoT, and Kepler space missions led to the discovery of a large number of intriguing, and in some cases unique, objects, among which are pulsating stars, stars hosting exoplanets, binaries, etc. Although the space missions deliver photometric data of unprecedented quality, these data lack any spectral information, and we are still in need of ground-based spectroscopic and/or multicolour photometric follow-up observations for a solid interpretation. Both the faintness of most of the observed stars and the required high S/N of the spectroscopic data imply the need for large telescopes, access to which is limited. In this paper, we look for an alternative and aim at developing a technique that allows denoising of originally low-S/N spectroscopic data, making observations of faint targets with small telescopes possible and effective. We present a generalization of the original Least-Squares Deconvolution (LSD) method by implementing a multicomponent average profile and a line strengths corre...

  19. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2012-02-01

    Full Text Available In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random basis vectors. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces.

  20. Penalized partial least squares

    Institute of Scientific and Technical Information of China (English)

    殷弘; 汪宝彬

    2013-01-01

    This paper studies two generalized penalized partial least squares models, in which penalized estimation algorithms are applied to the partial least squares estimator to obtain the final parameter estimates; applied to a real data set, the models give good predictive results. In this paper, the penalized partial least squares (PPLS) method is used in quantitative structure-activity relationship (QSAR) research. PPLS is in fact a combination of PLS and penalized regression, which was first proposed for classification problems in bioinformatics, but to our knowledge our application of PPLS to QSAR data is novel. Further, we consider three different penalized regressions, in contrast to the previous literature that uses only one penalty function. Using a real data set, we demonstrate the competitive performance of PPLS methods compared with four other methods widely used in QSAR research.