WorldWideScience

Sample records for carlo library least-squares

  1. Status of software for PGNAA bulk analysis by the Monte Carlo - Library Least-Squares (MCLLS) approach

    International Nuclear Information System (INIS)

The Center for Engineering Applications of Radioisotopes (CEAR) has been working for about ten years on the Monte Carlo - Library Least-Squares (MCLLS) approach for treating the nonlinear inverse analysis problem in PGNAA bulk analysis. This approach consists essentially of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed, plus any other required libraries. These libraries are then used in the linear Library Least-Squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. The other libraries cover all sources of background, which include: (1) gamma-rays emitted by the neutron source, (2) prompt gamma-rays produced in the analyzer construction materials, (3) natural gamma-rays from K-40 and the uranium and thorium decay chains, and (4) prompt and decay gamma-rays produced in the NaI detector by neutron activation. A number of unforeseen problems have arisen in pursuing this approach, including: (1) the neutron activation of the most common detector (NaI) used in bulk analysis PGNAA systems, (2) the nonlinearity of this detector, and (3) difficulties in obtaining detector response functions for this (and other) detectors. These problems have been addressed by CEAR recently and have either been solved or are nearly solved at present. Development of the Monte Carlo simulation for all of the libraries has been finished except for the prompt gamma-ray library from the activation of the NaI detector. Treatment of the coincidence schemes for Na, and particularly I, must first be determined to complete the Monte Carlo simulation of this last library. (author)
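The linear LLS step the abstract describes can be sketched as follows. This is a hypothetical illustration, not CEAR's code: the Gaussian "libraries", channel counts, and multipliers are all invented stand-ins for Monte Carlo-generated element and background spectra.

```python
import numpy as np

# Hypothetical sketch of the linear Library Least-Squares (LLS) step: the
# measured spectrum is modelled as a linear combination of per-element library
# spectra (columns of L), and the library multipliers come from one LS solve.
rng = np.random.default_rng(0)

n_channels = 256
x = np.linspace(0.0, 10.0, n_channels)
# Toy "libraries": Gaussian peaks standing in for simulated element spectra
# (all shapes and numbers here are made up for illustration).
L = np.column_stack([np.exp(-0.5 * ((x - c) / 0.4) ** 2) for c in (2.0, 5.0, 8.0)])

true_multipliers = np.array([1.5, 0.7, 2.2])
y = L @ true_multipliers + 0.01 * rng.standard_normal(n_channels)  # measured spectrum

multipliers, *_ = np.linalg.lstsq(L, y, rcond=None)  # the linear LLS solve
```

With well-separated peaks the recovered multipliers match the true ones closely; the hard part in practice, as the abstract notes, is generating realistic libraries in the first place.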

  2. On the treatment of ill-conditioned cases in the Monte Carlo library least-squares approach for inverse radiation analyzers

    International Nuclear Information System (INIS)

Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data can be performed either through single-peak analysis or through the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. This assumption leads, through minimization of the chi-square value, to a set of linear equations that must be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to ill-conditioning of the covariance matrix. The covariance matrix becomes ill-conditioned whenever two or more libraries are highly correlated, and the ill-conditioning is also unavoidable whenever the sample contains trace amounts of certain elements or elements with very low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach would be able to treat ill-conditioned cases appropriately. (paper)
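The truncated-SVD baseline mentioned in the abstract can be sketched briefly. This is not the paper's code: the nearly collinear "libraries" and all numbers below are invented to show why ill-conditioning arises and how truncation stabilizes the solve.

```python
import numpy as np

# Sketch of the truncated-SVD remedy for an ill-conditioned library LS fit:
# two nearly identical libraries make the plain solve unstable, while keeping
# only the well-determined singular directions gives a stable answer.
rng = np.random.default_rng(1)

x = np.linspace(0.0, 1.0, 100)
lib1 = np.exp(-x)
lib2 = np.exp(-1.0001 * x)        # almost identical library -> ill-conditioning
L = np.column_stack([lib1, lib2, x])

true = np.array([1.0, 1.0, 0.5])
y = L @ true + 1e-3 * rng.standard_normal(100)

def tsvd_solve(A, b, k):
    """Least-squares solution restricted to the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

stable = tsvd_solve(L, y, k=2)
# The two collinear multipliers are not separately identifiable, but their sum
# and the third multiplier are recovered, and the solution norm stays modest.
```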

  3. Canonical Least-Squares Monte Carlo Valuation of American Options: Convergence and Empirical Pricing Analysis

    Directory of Open Access Journals (Sweden)

    Xisheng Yu

    2014-01-01

Full Text Available The paper by Liu (2010) introduces a method termed canonical least-squares Monte Carlo (CLM), which combines a martingale-constrained entropy model and a least-squares Monte Carlo algorithm to price American options. In this paper, we first provide the convergence results of CLM and numerically examine the convergence properties. Then a comparative analysis is conducted empirically using a large sample of S&P 100 Index (OEX) puts and IBM puts. The results on convergence show that choosing the shifted Legendre polynomials with four regressors is more appropriate considering the pricing accuracy and the computational cost. With this choice, the CLM method is empirically demonstrated to be superior to the benchmark binomial-tree and finite-difference methods with historical volatilities.
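The least-squares Monte Carlo machinery that CLM builds on is the Longstaff-Schwartz regression. The sketch below is that standard benchmark, not the canonical (entropy-constrained) variant itself: paths are plain risk-neutral GBM, the basis is an assumed cubic polynomial, and all parameters are illustrative.

```python
import numpy as np

# Minimal Longstaff-Schwartz least-squares Monte Carlo for an American put:
# regress discounted future cashflows on the current stock price to estimate
# the continuation value, and exercise where intrinsic value exceeds it.
def lsm_american_put(S0, K, r, sigma, T, n_steps=50, n_paths=20000, seed=0):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # simulate GBM paths under the risk-neutral measure
    z = rng.standard_normal((n_paths, n_steps))
    S = S0 * np.exp(np.cumsum((r - 0.5 * sigma**2) * dt
                              + sigma * np.sqrt(dt) * z, axis=1))
    cash = np.maximum(K - S[:, -1], 0.0)          # payoff at maturity
    for t in range(n_steps - 2, -1, -1):
        cash *= np.exp(-r * dt)                   # discount one step back
        itm = K - S[:, t] > 0                     # regress on in-the-money paths only
        if itm.sum() < 4:
            continue
        x = S[itm, t]
        A = np.column_stack([np.ones_like(x), x, x**2, x**3])  # cubic basis
        cont = A @ np.linalg.lstsq(A, cash[itm], rcond=None)[0]
        exercise = (K - x) > cont
        cash[itm] = np.where(exercise, K - x, cash[itm])
    return np.exp(-r * dt) * cash.mean()

price = lsm_american_put(S0=36.0, K=40.0, r=0.06, sigma=0.2, T=1.0)
```

For these classic parameters the American put is worth roughly 4.5, noticeably above the European value, which is what the early-exercise regression is meant to capture.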

  4. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

Full Text Available Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA for short), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.

  5. A library least-squares approach for scatter correction in gamma-ray tomography

    Science.gov (United States)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.

  6. Elemental PGNAA analysis using gamma-gamma coincidence counting with the library least-squares approach

    Science.gov (United States)

    Metwally, Walid A.; Gardner, Robin P.; Mayo, Charles W.

    2004-01-01

An accurate method for elemental analysis using gamma-gamma coincidence counting is presented. To demonstrate the feasibility of this method for PGNAA, a system of three radioisotopes (Na-24, Co-60 and Cs-134) that emit coincident gamma rays was used. Two HPGe detectors were connected to a system that allowed singles and coincidences to be collected simultaneously. A known mixture of the three radioisotopes was used, and data were deliberately collected at relatively high counting rates to determine the effect of pulse pile-up distortion. The results obtained with library least-squares analysis of both the normal and the coincidence counting are presented and compared to the known amounts. The coincidence results are shown to give much better accuracy. It appears that, in addition to the expected advantage of reduced background, the coincidence approach is considerably more resistant to pulse pile-up distortion.

  7. Uncovering Time-Varying Parameters with the Kalman-Filter and the Flexible Least Squares: a Monte Carlo Study

    OpenAIRE

    Zsolt Darvas; Balázs Varga

    2012-01-01

Using Monte Carlo methods, we compare the ability of the Kalman-filter, the Kalman-smoother and the flexible least squares (FLS) to uncover the parameters of an autoregression. We find that the ordinary least squares (OLS) estimator performs much better than the time-varying coefficient methods when the parameters are in fact constant, but the OLS does very poorly when parameters change. Neither the FLS nor the Kalman-filter and Kalman-smoother can uncover sudden changes in parameters. But w...
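The trade-off the abstract describes can be reproduced in a few lines. This is an illustrative sketch, not the authors' Monte Carlo design: a single AR(1) with a structural break, pooled OLS, and a simple random-walk-coefficient Kalman filter with assumed noise variances.

```python
import numpy as np

# Pooled OLS fits one AR(1) coefficient and so averages across a structural
# break, while a random-walk-coefficient Kalman filter can track the change.
rng = np.random.default_rng(3)

n = 2000
beta = np.where(np.arange(n) < n // 2, 0.3, 0.8)   # coefficient breaks mid-sample
y = np.zeros(n)
for t in range(1, n):
    y[t] = beta[t] * y[t - 1] + rng.standard_normal()

x, target = y[:-1], y[1:]
ols = (x @ target) / (x @ x)                        # pooled OLS estimate

# Scalar Kalman filter, state beta_t ~ random walk, obs target_t = beta_t*x_t + e_t
b, P, q, r = 0.0, 1.0, 1e-4, 1.0                    # q, r are assumed variances
track = np.empty(n - 1)
for t in range(n - 1):
    P += q                                          # predict step
    K = P * x[t] / (x[t] ** 2 * P + r)              # Kalman gain
    b += K * (target[t] - b * x[t])                 # measurement update
    P *= (1.0 - K * x[t])
    track[t] = b
```

The pooled OLS estimate lands between the two regimes, while the filtered path sits near 0.3 before the break and drifts to about 0.8 after it.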

  8. A library least-squares approach for scatter correction in gamma-ray tomography

    International Nuclear Information System (INIS)

Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system. - Highlights: • A LLS approach is proposed for scatter correction in gamma-ray tomography. • The validity of the LLS approach is tested through experiments. • Gain shift and pulse pile-up affect the accuracy of the LLS approach. • The LLS approach successfully estimates scatter profiles

  9. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  10. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
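The core computation behind a GP-regularized least-squares deconvolution can be sketched as follows. This is a hedged toy model, not the authors' code: the pattern matrix `W`, line weights, kernel length scale, and noise level are all invented; the point is only that a Gaussian prior on the profile yields a closed-form posterior mean.

```python
import numpy as np

# LSD-style model: observed spectrum y = W z + noise, where z is a common line
# profile over velocity bins. A squared-exponential GP prior on z turns the LS
# solve into a regularized posterior mean with analytic form.
rng = np.random.default_rng(4)

n_vel, n_pix = 40, 400
v = np.linspace(-5.0, 5.0, n_vel)
z_true = np.exp(-0.5 * v**2)                     # the common line profile

# Toy pattern matrix: each pixel samples one velocity bin of one spectral line,
# scaled by a made-up line weight.
W = np.zeros((n_pix, n_vel))
for i in range(n_pix):
    W[i, rng.integers(n_vel)] = rng.uniform(0.2, 1.0)

sigma = 0.1
y = W @ z_true + sigma * rng.standard_normal(n_pix)

# Squared-exponential GP prior over velocity encourages smooth profiles.
K = np.exp(-0.5 * ((v[:, None] - v[None, :]) / 0.8) ** 2) + 1e-6 * np.eye(n_vel)

# Posterior mean of z: (W^T W / sigma^2 + K^{-1})^{-1} W^T y / sigma^2
A = W.T @ W / sigma**2 + np.linalg.inv(K)
z_post = np.linalg.solve(A, W.T @ y / sigma**2)
```

The inverse of `A` also gives per-bin posterior variances, which is the "uncertainty at each velocity bin" the abstract refers to.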

  11. A SUCCESSIVE LEAST SQUARES METHOD FOR STRUCTURED TOTAL LEAST SQUARES

    Institute of Scientific and Technical Information of China (English)

    Plamen Y. Yalamov; Jin-yun Yuan

    2003-01-01

A new method for Total Least Squares (TLS) problems is presented. It differs from previous approaches and is based on the solution of successive Least Squares problems. The method is well suited for Structured TLS (STLS) problems; we mostly study the case of Toeplitz matrices in this paper. Numerical tests illustrate that the method converges quickly to the solution for Toeplitz STLS problems. Since the method is designed for general TLS problems, other structured problems can be treated similarly.
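For contrast with the successive-LS idea, the classical unstructured TLS solution comes straight from the SVD of the augmented matrix [A | b]. The errors-in-variables data below are invented for illustration.

```python
import numpy as np

# Classical TLS: perturb both A and b minimally (in Frobenius norm) so the
# system becomes consistent. The solution is read off the right singular
# vector belonging to the smallest singular value of [A | b].
rng = np.random.default_rng(5)

n, p = 200, 3
x_true = np.array([1.0, -2.0, 0.5])
A_clean = rng.standard_normal((n, p))
A = A_clean + 0.05 * rng.standard_normal((n, p))      # noise in the matrix...
b = A_clean @ x_true + 0.05 * rng.standard_normal(n)  # ...and in the right side

U, s, Vt = np.linalg.svd(np.column_stack([A, b]), full_matrices=False)
v = Vt[-1]                 # right singular vector of the smallest singular value
x_tls = -v[:p] / v[p]      # classical TLS estimate
```

Note this SVD recipe ignores any structure in A; preserving, say, a Toeplitz structure in the correction is exactly what makes STLS harder and motivates methods like the one above.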

  12. Maximum likelihood, least squares and penalized least squares for PET

    International Nuclear Information System (INIS)

    The EM algorithm is the basic approach used to maximize the log likelihood objective function for the reconstruction problem in PET. The EM algorithm is a scaled steepest ascent algorithm that elegantly handles the nonnegativity constraints of the problem. The authors show that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach. The experiments suggest that one can cut the computation by about a factor of 3 by using this technique. The results also apply to various penalized least squares functions which might be used to produce a smoother image

  13. Monte Carlo method of least squares fitting of experimental data

    Institute of Scientific and Technical Information of China (English)

    颜清; 彭小平

    2011-01-01

Least-squares fitting of chemical-engineering experimental data gives correlation coefficients close to 1 and high precision, yet the fitted results can differ markedly from the published empirical correlations. The Monte Carlo method is a non-deterministic numerical method based on a probabilistic model, and a Monte Carlo least-squares fit of chemical experimental data is more flexible in application and wider in scope. In an Excel spreadsheet, the Monte Carlo least-squares fit is easily implemented by combining worksheet data with VBA code: VBA handles the data communication with the worksheet and the processing of the experimental data, reading the measurements, computing the approximate random-search range, carrying out the least-squares statistical analysis, and writing the results back to the worksheet. The Monte Carlo least-squares fit matches the precision of the standard least-squares method, in accordance with the law of large numbers on which its accuracy rests: with few random search points the error is large, but with 10 000 random points its accuracy is almost the same as that of ordinary least squares. At the same time, the fitted equations remain very close to the empirical correlations, unifying theory and experimental results.
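The Excel/VBA workflow the abstract describes can be re-sketched in plain Python. The model y = a*exp(b*x), the search box, and the data are made-up stand-ins for an empirical chemical-engineering correlation; only the random-search-for-minimum-SSE idea is taken from the abstract.

```python
import math
import random

# Monte Carlo least squares: sample random parameter points in an assumed
# search box and keep the point with the smallest sum of squared residuals.
random.seed(42)

xs = [0.0, 0.5, 1.0, 1.5, 2.0]
ys = [2.0 * math.exp(0.8 * x) for x in xs]          # noise-free toy data

def sse(a, b):
    """Sum of squared residuals for trial parameters (a, b)."""
    return sum((y - a * math.exp(b * x)) ** 2 for x, y in zip(xs, ys))

best_a, best_b, best_e = None, None, float("inf")
for _ in range(10_000):                             # 10 000 points, as in the abstract
    a = random.uniform(1.0, 3.0)                    # assumed search box
    b = random.uniform(0.0, 1.5)
    e = sse(a, b)
    if e < best_e:
        best_a, best_b, best_e = a, b, e
```

With 10 000 points the best sample lands close to the true (a, b) = (2.0, 0.8), echoing the abstract's observation that accuracy approaches ordinary least squares as the number of random points grows.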

  14. Nonlinear Least Squares for Inverse Problems

    CERN Document Server

    Chavent, Guy

    2009-01-01

Presents an introduction to the least squares resolution of nonlinear inverse problems. This title develops a geometrical theory to analyze nonlinear least squares (NLS) problems with respect to their quadratic well-posedness, that is, both well-posedness and optimizability.

  15. The Monte Carlo validation framework for the discriminant partial least squares model extended with variable selection methods applied to authenticity studies of Viagra® based on chromatographic impurity profiles.

    Science.gov (United States)

    Krakowska, B; Custers, D; Deconinck, E; Daszykowski, M

    2016-02-01

    The aim of this work was to develop a general framework for the validation of discriminant models based on the Monte Carlo approach that is used in the context of authenticity studies based on chromatographic impurity profiles. The performance of the validation approach was applied to evaluate the usefulness of the diagnostic logic rule obtained from the partial least squares discriminant model (PLS-DA) that was built to discriminate authentic Viagra® samples from counterfeits (a two-class problem). The major advantage of the proposed validation framework stems from the possibility of obtaining distributions for different figures of merit that describe the PLS-DA model such as, e.g., sensitivity, specificity, correct classification rate and area under the curve in a function of model complexity. Therefore, one can quickly evaluate their uncertainty estimates. Moreover, the Monte Carlo model validation allows balanced sets of training samples to be designed, which is required at the stage of the construction of PLS-DA and is recommended in order to obtain fair estimates that are based on an independent set of samples. In this study, as an illustrative example, 46 authentic Viagra® samples and 97 counterfeit samples were analyzed and described by their impurity profiles that were determined using high performance liquid chromatography with photodiode array detection and further discriminated using the PLS-DA approach. In addition, we demonstrated how to extend the Monte Carlo validation framework with four different variable selection schemes: the elimination of uninformative variables, the importance of a variable in projections, selectivity ratio and significance multivariate correlation. The best PLS-DA model was based on a subset of variables that were selected using the variable importance in the projection approach. For an independent test set, average estimates with the corresponding standard deviation (based on 1000 Monte Carlo runs) of the correct

  16. Tikhonov Regularization and Total Least Squares

    DEFF Research Database (Denmark)

    Golub, G. H.; Hansen, Per Christian; O'Leary, D. P.

    2000-01-01

Discretizations of inverse problems lead to systems of linear equations with a highly ill-conditioned coefficient matrix, and in order to compute stable solutions to these systems it is necessary to apply regularization methods. We show how Tikhonov's regularization method, which in its original formulation involves a least squares problem, can be recast in a total least squares formulation suited for problems in which both the coefficient matrix and the right-hand side are known only approximately. We analyze the regularizing properties of this method and demonstrate by a numerical example that, in certain cases with large perturbations, the new method is superior to standard regularization methods.

  17. Partial update least-square adaptive filtering

    CERN Document Server

    Xie, Bei

    2014-01-01

    Adaptive filters play an important role in the fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity in implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster a

  18. Least Squares Data Fitting with Applications

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela

    that help readers to understand and evaluate the computed solutions • many examples that illustrate the techniques and algorithms Least Squares Data Fitting with Applications can be used as a textbook for advanced undergraduate or graduate courses and professionals in the sciences and in engineering....

  19. Combinatorics of least-squares trees.

    Science.gov (United States)

    Mihaescu, Radu; Pachter, Lior

    2008-09-01

    A recurring theme in the least-squares approach to phylogenetics has been the discovery of elegant combinatorial formulas for the least-squares estimates of edge lengths. These formulas have proved useful for the development of efficient algorithms, and have also been important for understanding connections among popular phylogeny algorithms. For example, the selection criterion of the neighbor-joining algorithm is now understood in terms of the combinatorial formulas of Pauplin for estimating tree length. We highlight a phylogenetically desirable property that weighted least-squares methods should satisfy, and provide a complete characterization of methods that satisfy the property. The necessary and sufficient condition is a multiplicative four-point condition that the variance matrix needs to satisfy. The proof is based on the observation that the Lagrange multipliers in the proof of the Gauss-Markov theorem are tree-additive. Our results generalize and complete previous work on ordinary least squares, balanced minimum evolution, and the taxon-weighted variance model. They also provide a time-optimal algorithm for computation.

  20. Time Scale in Least Square Method

    Directory of Open Access Journals (Sweden)

    Özgür Yeniay

    2014-01-01

Full Text Available The study of dynamic equations on time scales is a new area in mathematics. Time-scale calculus builds a bridge between the real numbers and the integers. Two derivatives on time scales have been introduced, called the delta and nabla derivatives: the delta derivative is defined in the forward direction, and the nabla derivative in the backward direction. In this study, we consider obtaining the parameters of a regression equation on integer values through time scales. We therefore implemented the least squares method according to the time-scale derivative definitions and obtained the coefficients of the model. There exist two sets of coefficients for the same model, originating from the forward and backward jump operators, which differ from each other; such a situation amounts to halving the total vertical deviation between the observations and the regression equations of the forward and backward jump operators. We also estimated the coefficients of the model using the ordinary least squares method. As a result, we provide an introduction to the least squares method on time scales. We believe that time-scale theory offers a new perspective on least squares, especially when the assumptions of linear regression are violated.

  1. Deformation analysis with Total Least Squares

    Directory of Open Access Journals (Sweden)

    M. Acar

    2006-01-01

Full Text Available Deformation analysis is one of the main research fields in geodesy. The deformation analysis process comprises measurement and analysis phases. Measurements can be collected using several techniques; the output of their evaluation is mainly point positions. In the deformation analysis phase, the changes in the point coordinates are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation, in which the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally, a Least Squares (LS) technique is used for the transformation procedure. An alternative methodology is Total Least Squares (TLS), a relatively new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out individually by Least Squares (LS) and Total Least Squares (TLS). The data used in this study were collected by the GPS technique in a landslide area near Istanbul. The results obtained from the two approaches have been compared.

  2. Least-squares fitting Gompertz curve

    Science.gov (United States)

    Jukic, Dragan; Kralik, Gordana; Scitovski, Rudolf

    2004-08-01

    In this paper we consider the least-squares (LS) fitting of the Gompertz curve to the given nonconstant data (pi,ti,yi), i=1,...,m, m≥3. We give necessary and sufficient conditions which guarantee the existence of the LS estimate, suggest a choice of a good initial approximation and give some numerical examples.
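A least-squares Gompertz fit of the kind the abstract studies can be illustrated with a damped Gauss-Newton iteration. The data, the solver choice, and the starting point are all invented; only the model y = a·exp(-b·exp(-c·t)) and the emphasis on a good initial approximation come from the record.

```python
import numpy as np

# Damped Gauss-Newton least-squares fit of the Gompertz curve
# y = a * exp(-b * exp(-c * t)) to toy growth data.
t = np.linspace(0.0, 10.0, 30)
y = 100.0 * np.exp(-5.0 * np.exp(-0.6 * t))       # noise-free toy data

def gompertz(p, t):
    a, b, c = p
    return a * np.exp(-b * np.exp(-c * t))

def jacobian(p, t):
    a, b, c = p
    inner = np.exp(-c * t)
    f = np.exp(-b * inner)
    # columns: d/da, d/db, d/dc
    return np.column_stack([f, -a * inner * f, a * b * t * inner * f])

def sse(p):
    r = y - gompertz(p, t)
    return r @ r

p = np.array([90.0, 4.0, 0.5])                    # good initial approximation
for _ in range(100):
    step = np.linalg.lstsq(jacobian(p, t), y - gompertz(p, t), rcond=None)[0]
    lam = 1.0
    while sse(p + lam * step) > sse(p) and lam > 1e-8:
        lam *= 0.5                                # backtrack if the step overshoots
    p = p + lam * step
```

From this starting point the iteration converges to the generating parameters (100, 5, 0.6); a poor initial guess, as the abstract warns, can stall or diverge for this model.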

  3. Editorial: Perspectives on Partial Least Squares

    NARCIS (Netherlands)

    Vinzi, Vincenzo Esposito; Chin, Wynne W.; Henseler, Jörg; Wang, Huiwen

    2010-01-01

    This Handbook on Partial Least Squares (PLS) represents a comprehensive presentation of the current, original and most advanced research in the domain of PLS methods, with specific reference to their use in Marketing-related areas and with a discussion of the forthcoming and most challenging directions.

  4. Distributional aspects in partial least squares regression

    OpenAIRE

    Romera, Rosario

    1999-01-01

    This paper presents some results on the asymptotic behaviour of the estimate of a regression model obtained by Partial Least Squares (PLS) methods. Because of the nonlinearity of the regression estimator in the response variable, a local linear approximation of the PLS regression vector is carried out via the δ-method. A new implementation of the PLS algorithm is developed for this purpose.

  5. Least square fitting with one parameter less

    CERN Document Server

    Berg, Bernd A

    2015-01-01

    It is shown that whenever the multiplicative normalization of a fitting function is not known, least squares fitting by $\chi^2$ minimization can be performed with one parameter less than usual by converting the normalization parameter into a function of the remaining parameters and the data.
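The elimination the abstract describes is a two-line closed form: for a model A·f(x; θ), the χ² is quadratic in A, so the optimal A(θ) = Σ(y f/s²) / Σ(f²/s²), and the search runs over θ alone. The exponential model and all numbers below are an invented illustration of that trick.

```python
import numpy as np

# Chi^2 fit of y ~ A * exp(-theta * x) with the normalization A eliminated
# analytically, leaving a one-dimensional search over theta.
rng = np.random.default_rng(9)

x = np.linspace(0.0, 4.0, 40)
s = 0.05 * np.ones_like(x)                          # measurement errors
y = 3.0 * np.exp(-0.7 * x) + s * rng.standard_normal(x.size)

def chi2_reduced(theta):
    f = np.exp(-theta * x)
    A = np.sum(y * f / s**2) / np.sum(f**2 / s**2)  # optimal normalization
    return np.sum(((y - A * f) / s) ** 2), A

thetas = np.linspace(0.3, 1.1, 801)                 # 1-D scan instead of a 2-D fit
chis = np.array([chi2_reduced(th)[0] for th in thetas])
theta_hat = thetas[np.argmin(chis)]
A_hat = chi2_reduced(theta_hat)[1]
```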

  6. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

    Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  7. Discrete least squares approximation with polynomial vectors

    OpenAIRE

    Van Barel, Marc; Bultheel, Adhemar

    1993-01-01

    We give a solution of a discrete least squares approximation problem in terms of orthogonal polynomial vectors. The degrees of the polynomial elements of these vectors can be different. An algorithm is constructed computing the coefficients of recurrence relations for the orthogonal polynomial vectors. In case the function values are prescribed in points on the real line or on the unit circle variants of the original algorithm can be designed which are an order of magnitude more efficient. Al...

  8. Regularization by truncated total least squares

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Fierro, R. D.; Golub, G. H.

    1997-01-01

    of TLS for solving problems with very ill-conditioned coefficient matrices whose singular values decay gradually (so-called discrete ill-posed problems), where some regularization is necessary to stabilize the computed solution. We filter the solution by truncating the small singular values of the TLS matrix. We express our results in terms of the singular value decomposition (SVD) of the coefficient matrix rather than the augmented matrix. This leads to insight into the filtering properties of the truncated TLS method as compared to regularized least squares solutions. In addition, we propose ...

  9. Total least squares for anomalous change detection

    Energy Technology Data Exchange (ETDEWEB)

    Theiler, James P [Los Alamos National Laboratory; Matsekh, Anna M [Los Alamos National Laboratory

    2010-01-01

    A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.

  10. ON THE SEPARABLE NONLINEAR LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Xin Liu; Yaxiang Yuan

    2008-01-01

    Separable nonlinear least squares problems are a special class of nonlinear least squares problems, where the objective functions are linear in some of the variables and nonlinear in the others. Such problems have broad applications in practice. Most existing algorithms for this kind of problem are derived from the variable projection method proposed by Golub and Pereyra, which exploits the separability. However, methods based on the variable projection strategy become invalid when there are constraints on the variables, as real problems often have, even if the constraint is simply a ball constraint. We present a new algorithm based on a special approximation to the Hessian, exploiting the fact that certain terms of the Hessian can be derived from the gradient. Our method maintains all the advantages of variable-projection-based methods; moreover, it can easily be combined with trust-region methods and can be applied to general constrained separable nonlinear problems. Convergence analysis of our method is presented and numerical results are also reported.
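The separability being exploited is easy to show concretely. In the sketch below (an invented two-exponential example, not the paper's algorithm), the linear coefficients are recovered by one inner least-squares solve for each trial of the nonlinear parameters, which is the Golub-Pereyra variable-projection idea in its simplest form; the outer search here is a crude grid scan.

```python
import numpy as np

# Separable model y = c1*exp(-t1*x) + c2*exp(-t2*x): the c's are linear, so
# for any trial (t1, t2) they come from a linear LS solve, and the outer
# optimization is over the two nonlinear parameters only.
rng = np.random.default_rng(11)

x = np.linspace(0.0, 5.0, 60)
y = 2.0 * np.exp(-0.5 * x) + 1.0 * np.exp(-2.0 * x) + 0.01 * rng.standard_normal(60)

def projected_residual(t1, t2):
    Phi = np.column_stack([np.exp(-t1 * x), np.exp(-t2 * x)])
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)     # inner linear LS solve
    r = y - Phi @ c
    return r @ r, c

grid = np.linspace(0.1, 3.0, 59)                    # crude outer grid search
best = min(((projected_residual(t1, t2)[0], t1, t2)
            for t1 in grid for t2 in grid if t1 < t2),
           key=lambda z: z[0])
_, t1_hat, t2_hat = best
c_hat = projected_residual(t1_hat, t2_hat)[1]
```

Replacing the grid scan with a trust-region step on the projected objective, while keeping the inner solve feasible under constraints, is roughly where the paper's contribution enters.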

  11. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, D. L.

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.

  12. Multisource Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-12-01

    Least-squares migration has been shown to be able to produce high quality migration images, but its computational cost is considered to be too high for practical imaging. In this dissertation, a multisource least-squares reverse time migration algorithm (LSRTM) is proposed to increase by up to 10 times the computational efficiency by utilizing the blended sources processing technique. There are three main chapters in this dissertation. In Chapter 2, the multisource LSRTM algorithm is implemented with random time-shift and random source polarity encoding functions. Numerical tests on the 2D HESS VTI data show that the multisource LSRTM algorithm suppresses migration artifacts, balances the amplitudes, improves image resolution, and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution, and fewer migration artifacts compared to conventional RTM. The empirical results suggest that the multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with similar or less computational cost. The caveat is that LSRTM image is sensitive to large errors in the migration velocity model. In Chapter 3, the multisource LSRTM algorithm is implemented with frequency selection encoding strategy and applied to marine streamer data, for which traditional random encoding functions are not applicable. The frequency-selection encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content. Therefore, the receivers can distinguish the wavefield from each shot according to the frequencies. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is

  13. Skeletonized Least Squares Wave Equation Migration

    KAUST Repository

    Zhan, Ge

    2010-10-17

    The theory for skeletonized least squares wave equation migration (LSM) is presented. The key idea is, for an assumed velocity model, the source-side Green's function and the geophone-side Green's function are computed by a numerical solution of the wave equation. Only the early-arrivals of these Green's functions are saved and skeletonized to form the migration Green's function (MGF) by convolution. Then the migration image is obtained by a dot product between the recorded shot gathers and the MGF for every trial image point. The key to an efficient implementation of iterative LSM is that at each conjugate gradient iteration, the MGF is reused and no new finite-difference (FD) simulations are needed to get the updated migration image. It is believed that this procedure combined with phase-encoded multi-source technology will allow for the efficient computation of wave equation LSM images in less time than that of conventional reverse time migration (RTM).

  14. The least-square method in complex number domain

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The classical least-square method was extended from the real number domain into the complex number domain; the result is called the complex least-square method. The mathematical derivation and its applications show that the complex least-square method differs from calculating the real and imaginary parts separately with the classical least-square method, by which the true least-square estimate cannot be obtained in practice. Applications of this new method to an arbitrarily given series and to the precipitation in the rainy season at 160 meteorological stations in mainland China show the advantages of this method over other conventional statistical models.
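The distinction the abstract draws can be seen numerically: `numpy.linalg.lstsq` solves the complex-domain normal equations directly, which is not the same as fitting the real and imaginary parts separately. A minimal sketch (synthetic data, not the paper's meteorological application):

```python
import numpy as np

# Complex least squares: numpy handles complex design matrices natively.
rng = np.random.default_rng(0)
A = rng.standard_normal((20, 3)) + 1j * rng.standard_normal((20, 3))
x_true = np.array([1.0 + 2.0j, -0.5j, 3.0 + 0.0j])
b = A @ x_true

# Joint complex-domain solve (minimises sum of |residual|^2 over C).
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

Because the noiseless system is consistent, the complex least-squares solution recovers `x_true` exactly; with noisy data, the joint complex solve and a split real/imaginary treatment would generally disagree, which is the abstract's point.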

  15. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    Science.gov (United States)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator

  16. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    Directory of Open Access Journals (Sweden)

    Greenwood L.R.

    2016-01-01

    Full Text Available The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the

  17. Constrained least-squares methods for partial differential equations

    NARCIS (Netherlands)

    De Maerschalck, B.; Gerritsma, M.I.

    2006-01-01

    Least-squares methods for partial differential equations are based on a norm-equivalence between the error norm and the residual norm. The resulting algebraic system of equations, which is symmetric positive definite, can also be obtained by solving a weighted collocation scheme using least-squares

  18. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

    Full Text Available In order to improve the computational efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of the multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. According to cofactor propagation, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can also deal with their stochastic elements and deterministic elements with only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.
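For orientation, the basic (unweighted, single right-hand side) total least squares problem that the abstract generalizes has a classic closed-form SVD solution due to Golub and Van Loan. A textbook sketch, not the Newton algorithm of the paper:

```python
import numpy as np

# Classic TLS via SVD: append b to A and take the right singular vector
# belonging to the smallest singular value of the augmented matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((30, 2))
x_true = np.array([1.5, -2.0])
b = A @ x_true

C = np.column_stack([A, b])
_, _, Vt = np.linalg.svd(C)
v = Vt[-1]                   # right singular vector for the smallest sigma
x_tls = -v[:2] / v[2]        # TLS estimate of x
```

With error-free data the augmented matrix is rank-deficient and TLS reproduces the exact solution; with errors in both A and b, TLS and ordinary least squares diverge, which motivates the weighted and multivariate extensions in the abstract.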

  19. Bibliography on total least squares and related methods

    OpenAIRE

    Markovsky, Ivan

    2010-01-01

    The class of total least squares methods has been growing since the basic total least squares method was proposed by Golub and Van Loan in the 70's. Efficient and robust computational algorithms were developed and properties of the resulting estimators were established in the errors-in-variables setting. At the same time the developed methods were applied in diverse areas, leading to broad literature on the subject. This paper collects the main references and guides the reader in finding deta...

  20. Generalized Penalized Least Squares and Its Statistical Characteristics

    Institute of Scientific and Technical Information of China (English)

    DING Shijun; TAO Benzao

    2006-01-01

    The solution properties of the semiparametric model are analyzed; in particular, penalized least squares for the semiparametric model becomes invalid when the matrix B^TPB is ill-posed or singular. Following the principle of the ridge estimate for the linear parametric model, generalized penalized least squares for the semiparametric model is put forward, and some formulae and statistical properties of the estimates are derived. Finally, some helpful conclusions are drawn from simulation examples.
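The ridge idea the abstract builds on can be shown in its simplest linear-parametric form: adding k·I to the normal-equations matrix keeps the solve well-posed and shrinks the estimate. A generic sketch (not the paper's semiparametric formulation; all names are illustrative):

```python
import numpy as np

# Ridge-style penalized least squares: x = (A^T A + k I)^{-1} A^T b.
rng = np.random.default_rng(2)
A = rng.standard_normal((50, 4))
x_true = np.array([1.0, -1.0, 0.5, 2.0])
b = A @ x_true

def ridge(A, b, k):
    n = A.shape[1]
    # k = 0 recovers ordinary least squares; k > 0 regularizes.
    return np.linalg.solve(A.T @ A + k * np.eye(n), A.T @ b)

x_ls = ridge(A, b, 0.0)     # plain least squares
x_rr = ridge(A, b, 0.1)     # shrunk toward zero
```

The same device applied to the semiparametric normal matrix B^TPB is what rescues the estimate when that matrix is nearly singular.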

  1. Sparse Partial Least Squares Classification for High Dimensional Data*

    OpenAIRE

    Chung, Dongjun; Keles, Sunduz

    2010-01-01

    Partial least squares (PLS) is a well known dimension reduction method which has been recently adapted for high dimensional classification problems in genome biology. We develop sparse versions of the recently proposed two PLS-based classification methods using sparse partial least squares (SPLS). These sparse versions aim to achieve variable selection and dimension reduction simultaneously. We consider both binary and multicategory classification. We provide analytical and simulation-based i...

  2. A Generalized Autocovariance Least-Squares Method for Covariance Estimation

    DEFF Research Database (Denmark)

    Åkesson, Bernt Magnus; Jørgensen, John Bagterp; Poulsen, Niels Kjølstad;

    2007-01-01

    A generalization of the autocovariance least-squares method for estimating noise covariances is presented. The method can estimate mutually correlated system and sensor noise and can be used with both the predicting and the filtering form of the Kalman filter.

  3. Performance analysis of the Least-Squares estimator in Astrometry

    CERN Document Server

    Lobos, Rodrigo A; Mendez, Rene A; Orchard, Marcos

    2015-01-01

    We characterize the performance of the widely-used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not admit a closed-form expression, but a new result is presented (Theorem 1) where both the bias and the mean-square-error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum variance Cramer-Rao bound. Based on our results, we show that, for the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated...

  4. Least-squares RTM with L1 norm regularisation

    Science.gov (United States)

    Wu, Di; Yao, Gang; Cao, Jingjie; Wang, Yanghua

    2016-10-01

    Reverse time migration (RTM), for imaging complex Earth models, is a reversal procedure of the forward modelling of seismic wavefields, and hence can be formulated as an inverse problem. The least-squares RTM method attempts to minimise the difference between the observed field data and the synthetic data generated by the migration image. It can reduce the artefacts in the images of a conventional RTM which uses an adjoint operator, instead of an inverse operator, for the migration. However, as the least-squares inversion provides an average solution with minimal variation, the resolution of the reflectivity image is compromised. This paper presents the least-squares RTM method with a model constraint defined by an L1-norm of the reflectivity image. For solving the least-squares RTM with L1 norm regularisation, the inversion is reformulated as a ‘basis pursuit de-noise (BPDN)’ problem, and is solved directly using an algorithm called ‘spectral projected gradient for L1 minimisation (SPGL1)’. Three numerical examples demonstrate the effectiveness of the method which can mitigate artefacts and produce clean images with significantly higher resolution than the least-squares RTM without such a constraint.
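The L1-regularised least-squares problem at the heart of the abstract can be illustrated with a much simpler solver than SPGL1: ISTA (iterative shrinkage-thresholding), which alternates a gradient step on the data misfit with a soft-threshold step enforcing sparsity. A toy sketch in which a sparse vector stands in for the reflectivity image (synthetic data; not the authors' code):

```python
import numpy as np

# ISTA for min 0.5*||Ax - b||^2 + lam*||x||_1 (a BPDN-style problem).
rng = np.random.default_rng(3)
A = rng.standard_normal((40, 80))
x_true = np.zeros(80)
x_true[[5, 30, 61]] = [3.0, -2.0, 1.5]        # sparse "reflectivity"
b = A @ x_true

lam = 0.05
L = np.linalg.norm(A, 2) ** 2                 # Lipschitz constant of the gradient
x = np.zeros(80)
for _ in range(3000):
    g = A.T @ (A @ x - b)                     # gradient of the misfit term
    z = x - g / L
    x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)   # soft threshold
```

The soft-threshold step is what suppresses the small, noisy components and yields the "clean" high-resolution images the abstract reports, at the cost of a small shrinkage bias on the surviving coefficients.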

  5. Distributed Recursive Least-Squares: Stability and Performance Analysis

    CERN Document Server

    Mateos, Gonzalo

    2011-01-01

    The recursive least-squares (RLS) algorithm has well-documented merits for reducing complexity and storage requirements, when it comes to online estimation of stationary signals as well as for tracking slowly-varying nonstationary processes. In this paper, a distributed recursive least-squares (D-RLS) algorithm is developed for cooperative estimation using ad hoc wireless sensor networks. Distributed iterations are obtained by minimizing a separable reformulation of the exponentially-weighted least-squares cost, using the alternating-minimization algorithm. Sensors carry out reduced-complexity tasks locally, and exchange messages with one-hop neighbors to consent on the network-wide estimates adaptively. A steady-state mean-square error (MSE) performance analysis of D-RLS is conducted, by studying a stochastically-driven `averaged' system that approximates the D-RLS dynamics asymptotically in time. For sensor observations that are linearly related to the time-invariant parameter vector sought, the simplifying...
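The rank-one update that the abstract's D-RLS distributes across sensors is the textbook (centralized) RLS recursion with a forgetting factor. A minimal sketch with synthetic streaming data (illustrative only, not the distributed algorithm):

```python
import numpy as np

# Standard exponentially-weighted RLS for d_t = h_t^T w + noise.
rng = np.random.default_rng(4)
w_true = np.array([0.5, -1.0, 2.0])
lam = 0.99                     # forgetting factor
w = np.zeros(3)                # parameter estimate
P = 1e3 * np.eye(3)            # inverse (weighted) correlation matrix

for _ in range(2000):
    h = rng.standard_normal(3)                       # regressor
    d = h @ w_true + 0.01 * rng.standard_normal()    # noisy observation
    k = P @ h / (lam + h @ P @ h)                    # gain vector
    w = w + k * (d - h @ w)                          # estimate update
    P = (P - np.outer(k, h) @ P) / lam               # inverse-correlation update
```

Each iteration costs O(p^2) for p parameters instead of re-solving the full least-squares problem, which is the complexity merit the abstract refers to; the distributed variant replaces the global update with local updates plus one-hop message exchanges.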

  6. An Algorithm to Solve Separable Nonlinear Least Square Problem

    Directory of Open Access Journals (Sweden)

    Wajeb Gharibi

    2013-07-01

    Full Text Available Separable Nonlinear Least Squares (SNLS) problem is a special class of Nonlinear Least Squares (NLS) problems, whose objective function is a mixture of linear and nonlinear functions. SNLS has many applications in several areas, especially in the field of Operations Research and Computer Science. Problems related to the class of NLS are hard to resolve having infinite-norm metric. This paper gives a brief explanation about the SNLS problem and offers a Lagrangian-based algorithm for solving the mixed linear-nonlinear minimization problem.

  7. HERMITE SCATTERED DATA FITTING BY THE PENALIZED LEAST SQUARES METHOD

    Institute of Scientific and Technical Information of China (English)

    Tianhe Zhou; Danfu Han

    2009-01-01

    Given a set of scattered data with derivative values, if the data are noisy or there is an extremely large number of data points, we use an extension of the penalized least squares method of von Golitschek and Schumaker [Serdica, 18 (2002), pp. 1001-1020] to fit the data. We show that the extension of the penalized least squares method produces a unique spline fitting the data. We also give the error bound for the extension method. Some numerical examples are presented to demonstrate the effectiveness of the proposed method.

  8. Efficient Model Selection for Sparse Least-Square SVMs

    OpenAIRE

    Xiao-Lei Xia; Suxiang Qian; Xueqin Liu; Huanlai Xing

    2013-01-01

    The Forward Least-Squares Approximation (FLSA) SVM is a newly-emerged Least-Square SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independence of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely the Reduced FLSA-SVM, which has reduced computational complexity and memory requirements. The strategy of “contexts inheritance” is introduced to im...

  9. Sparse least-squares reverse time migration using seislets

    KAUST Repository

    Dutta, Gaurav

    2015-08-19

    We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.

  10. Multi-source least-squares migration of marine data

    KAUST Repository

    Wang, Xin

    2012-11-04

    Kirchhoff based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its IO cost is significantly decreased.

  11. Multivariate calibration with least-squares support vector machines.

    NARCIS (Netherlands)

    Thissen, U.M.J.; Ustun, B.; Melssen, W.J.; Buydens, L.M.C.

    2004-01-01

    This paper proposes the use of least-squares support vector machines (LS-SVMs) as a relatively new nonlinear multivariate calibration method, capable of dealing with ill-posed problems. LS-SVMs are an extension of "traditional" SVMs that have been introduced recently in the field of chemistry and ch

  12. Integer least-squares theory for the GNSS compass

    NARCIS (Netherlands)

    Teunissen, P.J.G.

    2010-01-01

    Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategi

  13. A Genetic Algorithm Approach to Nonlinear Least Squares Estimation

    Science.gov (United States)

    Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.

    2004-01-01

    A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…

  14. Least-squares variance component estimation: theory and GPS applications

    NARCIS (Netherlands)

    Amiri-Simkooei, A.

    2007-01-01

    In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known princip

  15. Parallel block schemes for large scale least squares computations

    Energy Technology Data Exchange (ETDEWEB)

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.

  16. SELECTION OF REFERENCE PLANE BY THE LEAST SQUARES FITTING METHODS

    Directory of Open Access Journals (Sweden)

    Przemysław Podulka

    2016-06-01

    For least squares polynomial fittings, it was found that the applied method usually gave better robustness to the occurrence of scratches, valleys and dimples for cylinder liners. For piston skirt surfaces, better edge-filtering results were obtained. It is also recommended to analyse the Sk parameters for proper selection of the reference plane in surface topography measurements.
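The first-order case of the fitting the abstract evaluates is a least-squares reference plane z = a·x + b·y + c removed from the measured heights before roughness parameters are computed. A minimal sketch on synthetic surface data (illustrative values only):

```python
import numpy as np

# Fit a reference plane to surface heights and remove the form (tilt).
rng = np.random.default_rng(5)
x, y = np.meshgrid(np.linspace(0, 1, 20), np.linspace(0, 1, 20))
x, y = x.ravel(), y.ravel()
# Tilted surface with offset plus fine "roughness".
z = 0.3 * x - 0.2 * y + 5.0 + 0.001 * rng.standard_normal(x.size)

G = np.column_stack([x, y, np.ones_like(x)])     # design matrix for a*x + b*y + c
coef, *_ = np.linalg.lstsq(G, z, rcond=None)     # [a, b, c]
residual = z - G @ coef                          # heights after form removal
```

The residual field is what the Sk-type roughness parameters are computed from; a poorly chosen reference (e.g., fitted across a deep valley) biases those parameters, which is the selection problem the abstract addresses.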

  17. NON-PARAMETRIC LEAST SQUARE ESTIMATION OF DISTRIBUTION FUNCTION

    Institute of Scientific and Technical Information of China (English)

    Chai Genxiang; Hua Hong; Shang Hanji

    2002-01-01

    By using the non-parametric least square method, strongly consistent estimations of the distribution function and the failure function are established, where the distribution function F(x) after a logit transformation is assumed to be approximated by a polynomial. Simulation results show that the estimations are highly satisfactory.

  18. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-11-04

    Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computation cost, linear phase shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent through all the angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.

  19. ON THE COMPARISON OF THE TOTAL LEAST SQUARES AND THE LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Liu Yonghui; Wei Musheng

    2003-01-01

    There are a number of articles discussing the total least squares (TLS) and the least squares (LS) problems. M. Wei (M. Wei, Mathematica Numerica Sinica 20(3) (1998), 267-278) proposed a new orthogonal projection method to improve existing perturbation bounds of the TLS and LS problems. In this paper, we continue to improve existing bounds of differences between the squared residuals, the weighted squared residuals and the minimum norm correction matrices of the TLS and LS problems.

  20. Least Squares Based and Two-Stage Least Squares Based Iterative Estimation Algorithms for H-FIR-MA Systems

    OpenAIRE

    Zhenwei Shi; Zhicheng Ji

    2015-01-01

    This paper studies the identification of Hammerstein finite impulse response moving average (H-FIR-MA for short) systems. A new two-stage least squares iterative algorithm is developed to identify the parameters of the H-FIR-MA systems. The simulation cases indicate the efficiency of the proposed algorithms.

  1. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-03-01

    This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic

  2. Moving least-squares corrections for smoothed particle hydrodynamics

    Directory of Open Access Journals (Sweden)

    Ciro Del Negro

    2011-12-01

    Full Text Available First-order moving least-squares are typically used in conjunction with smoothed particle hydrodynamics in the form of post-processing filters for density fields, to smooth out noise that develops in most applications of smoothed particle hydrodynamics. We show how an approach based on higher-order moving least-squares can be used to correct some of the main limitations in gradient and second-order derivative computation in classic smoothed particle hydrodynamics formulations. With a small increase in computational cost, we manage to achieve smooth density distributions without the need for post-processing and with higher accuracy in the computation of the viscous term of the Navier–Stokes equations, thereby reducing the formation of spurious shockwaves or other streaming effects in the evolution of fluid flow. Numerical tests on a classic two-dimensional dam-break problem confirm the improvement of the new approach.

  3. CONDITION NUMBER FOR WEIGHTED LINEAR LEAST SQUARES PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Yimin Wei; Huaian Diao; Sanzheng Qiao

    2007-01-01

    In this paper, we investigate the condition numbers for the generalized matrix inversion and the rank deficient linear least squares problem: min_x ||Ax - b||_2, where A is an m-by-n (m ≥ n) rank deficient matrix. We first derive an explicit expression for the condition number in the weighted Frobenius norm ||[A^T, βb]||_F of the data A and b, where T is a positive diagonal matrix and β is a positive scalar. We then discuss the sensitivity of the standard 2-norm condition numbers for the generalized matrix inversion and rank deficient least squares, and establish relations between these condition numbers and their level-2 condition numbers.
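For the full-rank case, the standard 2-norm condition number discussed above is simply the ratio of the extreme singular values, and it quantifies how strongly the least-squares solution can react to data perturbations. A quick numerical check on a nearly rank-deficient matrix (illustrative values):

```python
import numpy as np

# kappa_2(A) = sigma_max / sigma_min; large kappa means an ill-conditioned
# least-squares problem whose solution is sensitive to perturbations in A, b.
A = np.array([[1.0, 1.0],
              [1.0, 1.0001],
              [1.0, 0.9999]])
s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]
```

Here the two columns are nearly dependent, so kappa is on the order of 10^4 even though every entry of A is O(1); exact rank deficiency is the limit sigma_min → 0, where the generalized-inverse condition numbers of the abstract take over.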

  4. Source allocation by least-squares hydrocarbon fingerprint matching

    Energy Technology Data Exchange (ETDEWEB)

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.

  5. Robust structured total least squares algorithm for passive location

    Institute of Scientific and Technical Information of China (English)

    Hao Wu; Shuxin Chen; Yihang Zhang; Hengyang Zhang; Juan Ni

    2015-01-01

    A new approach called the robust structured total least squares (RSTLS) algorithm is described for solving location inaccuracy caused by outliers in single-observer passive location. It is built within the weighted structured total least squares (WSTLS) framework and improved based on robust estimation theory. Moreover, an improved Danish weight function is proposed according to the robust extremal function of the WSTLS, so that the new algorithm can detect outliers based on residuals and reduce the weights of outliers automatically. Finally, the inverse iteration method is discussed to deal with the RSTLS problem. Simulations show that when outliers appear, the result of the proposed algorithm is still accurate and robust, whereas that of the conventional algorithms is distorted seriously.

  6. Weighted discrete least-squares polynomial approximation using randomized quadratures

    Science.gov (United States)

    Zhou, Tao; Narayan, Akil; Xiu, Dongbin

    2015-10-01

    We discuss the problem of polynomial approximation of multivariate functions using discrete least squares collocation. The problem stems from uncertainty quantification (UQ), where the independent variables of the functions are random variables with specified probability measure. We propose to construct the least squares approximation on points randomly and uniformly sampled from tensor product Gaussian quadrature points. We analyze the stability properties of this method and prove that the method is asymptotically stable, provided that the number of points scales linearly (up to a logarithmic factor) with the cardinality of the polynomial space. Specific results in both bounded and unbounded domains are obtained, along with a convergence result for Chebyshev measure. Numerical examples are provided to verify the theoretical results.
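
    The construction described above, a least squares fit on points randomly sampled from Gauss quadrature nodes, can be sketched in one dimension with numpy; the target function, degree, and sample size are illustrative choices, not those of the paper.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # 1D stand-in for the tensor-product construction: draw M points
    # uniformly (without replacement) from a Gauss-Legendre grid.
    nodes, _ = np.polynomial.legendre.leggauss(50)
    deg, M = 10, 40                      # M scales linearly with deg + 1
    pts = rng.choice(nodes, size=M, replace=False)

    f = np.cos(np.pi * pts)              # illustrative target function
    V = np.polynomial.legendre.legvander(pts, deg)   # design matrix
    coef, *_ = np.linalg.lstsq(V, f, rcond=None)
    approx = np.polynomial.legendre.legval(pts, coef)
    ```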

  7. Least-squares finite element methods for compressible Euler equations

    Science.gov (United States)

    Jiang, Bo-Nan; Carey, G. F.

    1990-01-01

    A method based on backward finite differencing in time and a least-squares finite element scheme for first-order systems of partial differential equations in space is applied to the Euler equations for gas dynamics. The scheme minimizes the L2-norm of the residual within each time step. The method naturally generates numerical dissipation proportional to the time step size. An implicit method employing linear elements has been implemented and proves robust. For high-order elements, computed solutions based on the L2 method may have oscillations for calculations at similar time step sizes. To overcome this difficulty, a scheme which minimizes the weighted H1-norm of the residual is proposed and leads to a successful scheme with high-degree elements. Finally, a conservative least-squares finite element method is also developed. Numerical results for two-dimensional problems are given to demonstrate the shock resolution of the methods and compare different approaches.

  8. Speckle reduction by phase-based weighted least squares.

    Science.gov (United States)

    Zhu, Lei; Wang, Weiming; Qin, Jing; Heng, Pheng-Ann

    2014-01-01

    Although ultrasonography has been widely used in clinical applications, the doctor suffers great difficulties in diagnosis due to the artifacts of ultrasound images, especially the speckle noise. This paper proposes a novel framework for speckle reduction by using a phase-based weighted least squares optimization. The proposed approach can effectively smooth out speckle noise while preserving the features in the image, e.g., edges with different contrasts. To this end, we first employ a local phase-based measure, which is theoretically intensity-invariant, to extract the edge map from the input image. The edge map is then incorporated into the weighted least squares framework to supervise the optimization during despeckling, so that low contrast edges can be retained while the noise has been greatly removed. Experimental results in synthetic and clinical ultrasound images demonstrate that our approach performs better than state-of-the-art methods. PMID:25570846
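
    The abstract's phase-based measure is beyond a short sketch, but the weighted least squares smoothing step it feeds, solving a linear system whose smoothness penalty is down-weighted at detected edges so that jumps survive, can be illustrated in 1D. The signal, weights, and parameter values below are illustrative.

    ```python
    import numpy as np

    def wls_smooth_1d(y, w, lam=50.0):
        # Solve (I + lam * D^T W D) x = y: data fidelity plus a smoothness
        # penalty; small weights w at detected edges let jumps survive.
        n = len(y)
        D = np.diff(np.eye(n), axis=0)        # first-difference operator
        A = np.eye(n) + lam * (D.T @ (w[:, None] * D))
        return np.linalg.solve(A, y)

    rng = np.random.default_rng(0)
    step = np.where(np.arange(200) < 100, 0.0, 1.0)
    noisy = step + 0.1 * rng.standard_normal(200)

    w = np.ones(199)          # one weight per neighboring sample pair
    w[99] = 1e-4              # edge map says: jump between samples 99/100
    smoothed = wls_smooth_1d(noisy, w)
    ```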

  9. Anisotropy minimization via least squares method for transformation optics.

    Science.gov (United States)

    Junqueira, Mateus A F C; Gabrielli, Lucas H; Spadoti, Danilo H

    2014-07-28

    In this work the least squares method is used to reduce anisotropy in the transformation optics technique. To apply the least squares method, a power series is added to the coordinate transformation functions. The series coefficients are calculated to reduce the deviations in the Cauchy-Riemann equations, which, when satisfied, result in both conformal transformations and isotropic media. We also present a mathematical treatment for the special case of using transformation optics to design waveguides. To demonstrate the proposed technique, a waveguide with a 30° bend and a 50% increase in its output width was designed. The results show that our technique is simultaneously straightforward to implement and effective in reducing the anisotropy of the transformation to an extremely low value close to zero.

  10. Least Squares Shadowing for Sensitivity Analysis of Turbulent Fluid Flows

    CERN Document Server

    Blonigan, Patrick; Wang, Qiqi

    2014-01-01

    Computational methods for sensitivity analysis are invaluable tools for aerodynamics research and engineering design. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in turbulent fluid flow fields, specifically those obtained using high-fidelity turbulence simulations. This is because of a number of dynamical properties of turbulent and chaotic fluid flows, most importantly high sensitivity of the initial value problem, popularly known as the "butterfly effect". The recently developed least squares shadowing (LSS) method avoids the issues encountered by traditional sensitivity analysis methods by approximating the "shadow trajectory" in phase space, avoiding the high sensitivity of the initial value problem. The following paper discusses how the least squares problem associated with LSS is solved. Two methods are presented and are demonstrated on a simulation of homogeneous isotropic turbulence and the Kuramoto-Sivashinsky (KS) equation, a 4th order c...

  11. Linearized least-square imaging of internally scattered data

    KAUST Repository

    Aldawood, Ali

    2014-01-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-square inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-square inversion of double-scattered data helped delineate that reflector with minimal acquisition fingerprint.

  12. Multisplitting for linear, least squares and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Renaut, R.

    1996-12-31

    In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem and of nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work with Andreas Frommer, University of Wuppertal, for the linear problems and with Hans Mittelmann, Arizona State University, for the nonlinear problems.

  13. Single Directional SMO Algorithm for Least Squares Support Vector Machines

    OpenAIRE

    Xigao Shao; Kun Wu; Bifeng Liao

    2013-01-01

    Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of working set in sequential minimal optimization- (SMO-) type decomposition methods is proposed. By the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the c...

  14. An Efficient Inexact ABCD Method for Least Squares Semidefinite Programming

    OpenAIRE

    Sun, Defeng; Toh, Kim-Chuan; Yang, Liuqin

    2015-01-01

    We consider least squares semidefinite programming (LSSDP) where the primal matrix variable must satisfy given linear equality and inequality constraints, and must also lie in the intersection of the cone of symmetric positive semidefinite matrices and a simple polyhedral set. We propose an inexact accelerated block coordinate descent (ABCD) method for solving LSSDP via its dual, which can be reformulated as a convex composite minimization problem whose objective is the sum of a coupled quadr...

  15. MODIFIED LEAST SQUARE METHOD ON COMPUTING DIRICHLET PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The singularity theory of dynamical systems is linked to the numerical computation of boundary value problems of differential equations. It turns out to be a modified least square method for the calculation of variational problems defined on C^k(Ω), in which the basis functions are polynomials and the computation of problems is transferred to computing the coefficients of the basis functions. The theoretical treatment and some simple examples are provided for understanding the modification procedure of the metho...

  16. SUBSPACE SEARCH METHOD FOR A CLASS OF LEAST SQUARES PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Zi-Luan Wei

    2000-01-01

    A subspace search method for solving a class of least squares problems is presented in the paper. The original problem is divided into many independent subproblems, a search direction is obtained by solving each of the subproblems, and a new iterate is determined by choosing a suitable steplength such that the value of the residual norm is decreasing. The convergence result is also given. A numerical test is also shown for a special problem.

  17. Least-squares inversion for density-matrix reconstruction

    OpenAIRE

    Opatrny, T.; Welsch, D. -G.; Vogel, W.

    1997-01-01

    We propose a method for reconstruction of the density matrix from measurable time-dependent (probability) distributions of physical quantities. The applicability of the method based on least-squares inversion is - compared with other methods - very universal. It can be used to reconstruct quantum states of various systems, such as harmonic and anharmonic oscillators, including molecular vibrations in vibronic transitions and damped motion. It also enables one to take into account various s...

  18. An iterative approach to a constrained least squares problem

    Directory of Open Access Journals (Sweden)

    Simeon Reich

    2003-01-01

    In the case where the set of the constraints is the nonempty intersection of a finite collection of closed convex subsets of H, an iterative algorithm is designed. The resulting sequence is shown to converge strongly to the unique solution of the regularized problem. The net of the solutions to the regularized problems strongly converges to the minimum norm solution of the least squares problem if its solution set is nonempty.
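
    The abstract's iterative algorithm is not specified in detail; one standard scheme of this flavor, a projected gradient (Landweber) iteration that converges to a constrained least squares solution when the constraint set is closed and convex, can be sketched as follows. The matrix, data, and constraint set are illustrative.

    ```python
    import numpy as np

    def projected_landweber(A, b, proj, steps=200):
        # Gradient step on 0.5*||Ax - b||^2, then projection onto the
        # constraint set C; converges when C is closed and convex.
        t = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            x = proj(x - t * (A.T @ (A @ x - b)))
        return x

    A = np.array([[1.0, 1.0],
                  [1.0, -1.0]])
    b = np.array([1.0, 3.0])

    # C = nonnegative orthant; its projection is a componentwise clip.
    nonneg = lambda v: np.clip(v, 0.0, None)
    x = projected_landweber(A, b, nonneg)
    ```

    For this example the unconstrained least squares solution is (2, -1), and the iteration settles on the constrained minimizer (2, 0).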

  19. On the computation of the structured total least squares estimator

    OpenAIRE

    I. Markovsky; Van Huffel, S.; Kukush, A.

    2004-01-01

    A class of structured total least squares problems is considered, in which the extended data matrix is partitioned into blocks and each of the blocks is (block) Toeplitz/Hankel structured, unstructured, or noise free. We describe the implementation of two types of numerical solution methods for this problem: i) standard local optimization methods in combination with efficient evaluation of the cost function and its gradient, and ii) an iterative procedure proposed originally for the element-w...

  20. Block-Toeplitz/Hankel structured total least squares

    OpenAIRE

    I. Markovsky; Van Huffel, S.; Pintelon, R.

    2005-01-01

    A multivariate structured total least squares problem is considered, in which the extended data matrix is partitioned into blocks and each of the blocks is block-Toeplitz/Hankel structured, unstructured, or noise free. An equivalent optimization problem is derived and its properties are established. The special structure of the equivalent problem enables to improve the computational efficiency of the numerical solution via local optimization methods. By exploiting the structure, the computati...

  1. REGRESSION CURVE ESTIMATION FOR LONGITUDINAL DATA USING WEIGHTED LEAST SQUARES

    OpenAIRE

    Ragil P., Dian

    2014-01-01

    The varying-coefficient model for longitudinal data is examined in this proposal. The relationship between the response and predictor variables is assumed to be linear at any given time, but the coefficients change over time. A spline estimator based on weighted least squares (WLS) is used to estimate the regression curve of the varying-coefficient model. Generalized Cross-Validation (GCV) is used to select the optimal knot points. The application in this proposal uses the ACTG data, namely the relationship...

  2. Kernel Partial Least Squares for Nonlinear Regression and Discrimination

    Science.gov (United States)

    Rosipal, Roman; Clancy, Daniel (Technical Monitor)

    2002-01-01

    This paper summarizes recent results on applying the method of partial least squares (PLS) in a reproducing kernel Hilbert space (RKHS). A previously proposed kernel PLS regression model was proven to be competitive with other regularized regression methods in RKHS. The family of nonlinear kernel-based PLS models is extended by considering the kernel PLS method for discrimination. Theoretical and experimental results on a two-class discrimination problem indicate usefulness of the method.

  3. Multilevel first-order system least squares for PDEs

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, S.

    1994-12-31

    The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as up-winding, Petrov-Galerkin, and streamline diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.

  4. Solving linear inequalities in a least squares sense

    Energy Technology Data Exchange (ETDEWEB)

    Bramley, R.; Winnicka, B. [Indiana Univ., Bloomington, IN (United States)

    1994-12-31

    Let A ∈ ℝ^{m×n} be an arbitrary real matrix, and let b ∈ ℝ^m be a given vector. A familiar problem in computational linear algebra is to solve the system Ax = b in a least squares sense; that is, to find an x* minimizing ||Ax − b||, where ||·|| refers to the vector two-norm. Such an x* solves the normal equations A^T(Ax − b) = 0, and the optimal residual r* = b − Ax* is unique (although x* need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of A and b, on a vector of data x. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities Ax ≤ b in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find x* minimizing ||(Ax − b)_+||, where the i-th component of the vector v_+ is the maximum of zero and the i-th component of v.
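
    A minimal sketch of the problem described above, minimizing ||(Ax − b)_+|| by gradient descent, where only the violated inequalities contribute to the gradient; the small system below is illustrative.

    ```python
    import numpy as np

    def lsq_inequalities(A, b, steps=2000):
        # Minimize ||(Ax - b)_+||^2: only rows with a_i . x > b_i
        # (violated constraints of Ax <= b) contribute to the gradient.
        step = 1.0 / np.linalg.norm(A, 2) ** 2
        x = np.zeros(A.shape[1])
        for _ in range(steps):
            r = np.maximum(A @ x - b, 0.0)
            if np.all(r < 1e-12):
                break
            x -= step * (A.T @ r)
        return x

    # Constraints: x <= 1, y <= 1, x + y >= 1 (written as -x - y <= -1).
    A = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [-1.0, -1.0]])
    b = np.array([1.0, 1.0, -1.0])
    x = lsq_inequalities(A, b)
    ```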

  5. Simple procedures for imposing constraints for nonlinear least squares optimization

    Energy Technology Data Exchange (ETDEWEB)

    Carvalho, R. [Petrobras, Rio de Janeiro (Brazil); Thompson, L.G.; Redner, R.; Reynolds, A.C. [Univ. of Tulsa, OK (United States)

    1995-12-31

    Nonlinear regression methods (least squares, least absolute value, etc.) have gained acceptance as practical technology for analyzing well-test pressure data. Even for relatively simple problems, however, commonly used algorithms sometimes converge to nonfeasible parameter estimates (e.g., negative permeabilities), resulting in a failure of the method. The primary objective of this work is to present a new method for imaging the objective function across all boundaries imposed to satisfy physical constraints on the parameters. The algorithm is extremely simple and reliable. The method uses an equivalent unconstrained objective function to impose the physical constraints required in the original problem. Thus, it can be used with standard unconstrained least squares software without reprogramming and provides a viable alternative to penalty functions for imposing constraints when estimating well and reservoir parameters from pressure transient data. In this work, the authors also present two methods of implementing the penalty function approach for imposing parameter constraints in a general unconstrained least squares algorithm. Based on their experience, the new imaging method always converges to a feasible solution in less time than the penalty function methods.
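
    The paper's imaging method itself is not reproduced in the abstract; the general idea of imposing a physical constraint through an equivalent unconstrained objective can be illustrated with a common alternative, a log reparameterization that keeps a permeability-like parameter positive during Gauss-Newton iteration. The model and data below are synthetic.

    ```python
    import numpy as np

    # Synthetic model y = k * t with the physical constraint k > 0
    # (k plays the role of a permeability-like parameter).
    t = np.linspace(1.0, 5.0, 20)
    y = 2.5 * t + 0.01 * np.sin(t)        # data generated with true k = 2.5

    # Reparameterize k = exp(u): u is unconstrained, yet k remains
    # positive at every optimizer step -- no penalty function needed.
    u = 0.0
    for _ in range(50):                   # Gauss-Newton on u
        r = np.exp(u) * t - y             # residual
        J = np.exp(u) * t                 # d r / d u
        u -= (J @ r) / (J @ J)
    k = np.exp(u)
    ```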

  6. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei

    2012-06-15

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computation efficiency. By iterative migration of supergathers, which consist of a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or less computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  7. AN ASSESSMENT OF THE MESHLESS WEIGHTED LEAST-SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    Pan Xiaofei; Sze Kim Yim; Zhang Xiong

    2004-01-01

    The meshless weighted least-square (MWLS) method was developed based on the weighted least-square method. The method possesses several advantages, such as high accuracy, high stability and high efficiency. Moreover, the coefficient matrix obtained is symmetric and semipositive definite. In this paper, the method is further examined critically. The effects of several parameters on the results of MWLS are investigated systematically by using a cantilever beam and an infinite plate with a central circular hole. The numerical results are compared with those obtained by using the collocation-based meshless method (CBMM) and Galerkin-based meshless method (GBMM). The investigated parameters include the type of approximations, the type of weight functions, the number of neighbors of an evaluation point, as well as the manner in which the neighbors of an evaluation point are determined. This study shows that the displacement accuracy and convergence rate obtained by MWLS is comparable to that of the GBMM while the stress accuracy and convergence rate yielded by MWLS is even higher than that of GBMM. Furthermore, MWLS is much more efficient than GBMM. This study also shows that the instability of CBMM is mainly due to the neglect of the equilibrium residuals at boundary nodes. In MWLS, the residuals of all the governing equations are minimized in a weighted least-square sense.

  8. Classification using least squares support vector machine for reliability analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-wei GUO; Guang-chen BAI

    2009-01-01

    In order to improve the efficiency of the support vector machine (SVM) for classification when dealing with a large number of samples, the least squares support vector machine (LSSVM) for classification is introduced into reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic programming problem to a group of linear equations. The numerical results indicate that the reliability method based on the LSSVM for classification has higher accuracy and requires less computational cost than the SVM method.
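
    The transformation mentioned above, replacing the SVM quadratic program by a single linear system, is the core of the LS-SVM classifier (Suykens' formulation); a self-contained numpy sketch with an RBF kernel and illustrative data:

    ```python
    import numpy as np

    def lssvm_train(X, y, gamma=10.0, sigma=1.0):
        # LS-SVM classifier: equality constraints reduce the SVM QP to
        # the (n+1)x(n+1) linear system [[0, y^T], [y, Omega + I/gamma]].
        n = len(y)
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2.0 * sigma ** 2))          # RBF kernel
        M = np.zeros((n + 1, n + 1))
        M[0, 1:] = y
        M[1:, 0] = y
        M[1:, 1:] = (y[:, None] * y[None, :]) * K + np.eye(n) / gamma
        sol = np.linalg.solve(M, np.concatenate(([0.0], np.ones(n))))
        b, alpha = sol[0], sol[1:]

        def predict(Z):
            d2z = ((Z[:, None, :] - X[None, :, :]) ** 2).sum(-1)
            Kz = np.exp(-d2z / (2.0 * sigma ** 2))
            return np.sign(Kz @ (alpha * y) + b)

        return predict

    # Two small, well-separated classes (illustrative data).
    X = np.array([[0.0, 0.0], [0.1, 0.2], [1.0, 1.0], [0.9, 1.1]])
    y = np.array([-1.0, -1.0, 1.0, 1.0])
    predict = lssvm_train(X, y)
    ```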

  9. MULTI-RESOLUTION LEAST SQUARES SUPPORT VECTOR MACHINES

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The Least Squares Support Vector Machine (LS-SVM) is an improvement on the SVM. Combining the LS-SVM with Multi-Resolution Analysis (MRA), this letter proposes the Multi-resolution LS-SVM (MLS-SVM). The proposed algorithm has the same theoretical framework as MRA but with better approximation ability. At a fixed scale the MLS-SVM is a classical LS-SVM, but the MLS-SVM can gradually approximate the target function at different scales. In experiments, the MLS-SVM is used for nonlinear system identification and achieves better identification accuracy.

  10. Moving least squares simulation of free surface flows

    DEFF Research Database (Denmark)

    Felter, C. L.; Walther, Jens Honore; Henriksen, Christian

    2014-01-01

    In this paper a Moving Least Squares method (MLS) for the simulation of 2D free surface flows is presented. The emphasis is on the governing equations, the boundary conditions, and the numerical implementation. The compressible viscous isothermal Navier–Stokes equations are taken as the starting...... derivatives and a Runge–Kutta method for the time derivatives. The computational frame is Lagrangian, which means that the computational nodes are convected with the flow. The method proposed here is benchmarked using the standard lid driven cavity problem, a rotating free surface problem, and the simulation...

  11. Handbook of Partial Least Squares Concepts, Methods and Applications

    CERN Document Server

    Vinzi, Vincenzo Esposito; Henseler, Jörg

    2010-01-01

    This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.

  12. Least square estimation of phase, frequency and PDEV

    CERN Document Server

    Danielson, Magnus; Rubiola, Enrico

    2016-01-01

    The Omega-preprocessing was introduced to improve phase noise rejection by using a least squares algorithm. The associated variance is the PVAR, which is more efficient than MVAR at separating the different noise types. However, unlike AVAR and MVAR, the decimation of PVAR estimates for multi-tau analysis is not possible if each counter measurement is a single scalar. This paper gives a decimation rule based on two scalars, the processing blocks, for each measurement. For the Omega-preprocessing, this implies the definition of an output standard as well as hardware requirements for performing high-speed computations of the blocks.
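
    The least-squares core of such phase/frequency estimation, fitting phase samples x(t_k) ≈ x0 + ν·t_k, can be sketched with numpy; the sampling rate, noise level, and true values below are illustrative, not the paper's.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)

    # Phase samples x(t_k) = x0 + nu * t_k + noise: x0 is the initial
    # phase offset (seconds), nu the fractional frequency offset.
    t = np.arange(100) * 1e-3                       # 1 kHz sampling
    x = 2e-9 + 3e-7 * t + 1e-12 * rng.standard_normal(t.size)

    # Ordinary least squares fit of the linear phase model.
    V = np.vstack([np.ones_like(t), t]).T
    (x0_hat, nu_hat), *_ = np.linalg.lstsq(V, x, rcond=None)
    ```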

  13. Neural Network Inverse Adaptive Controller Based on Davidon Least Square

    Institute of Scientific and Technical Information of China (English)

    2000-01-01

    The general neural network inverse adaptive controller has two flaws: the first is slow convergence speed; the second is that it is invalid for non-minimum phase systems. These defects limit the scope in which the neural network inverse adaptive controller is used. We employ Davidon least squares in training the multi-layer feedforward neural network used to approximate the inverse model of the plant to expedite convergence, and then, through constructing a pseudo-plant, a neural network inverse adaptive controller is put forward which is still effective for nonlinear non-minimum phase systems. The simulation results show the validity of this scheme.

  14. Making the most out of least-squares migration

    KAUST Repository

    Huang, Yunsong

    2014-09-01

    Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution. Given a sufficiently accurate migration velocity model, LSM can mitigate many of the above problems and can produce more resolved migration images, sometimes with more than twice the spatial resolution of standard migration. However, LSM faces two challenges: The computational cost can be an order of magnitude higher than that of standard migration, and the resulting image quality can fail to improve for migration velocity errors of about 5% or more. It is possible to obtain the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.

  15. Point pattern matching based on kernel partial least squares

    Institute of Scientific and Technical Information of China (English)

    Weidong Yan; Zheng Tian; Lulu Pan; Jinhuan Wen

    2011-01-01

    Point pattern matching is an essential step in many image processing applications. This letter investigates spectral approaches to point pattern matching and presents a spectral feature matching algorithm based on kernel partial least squares (KPLS). Given the feature points of two images, we define position similarity matrices for the reference and sensed images, and extract the pattern vectors from the matrices using KPLS, which indicate the geometric distribution and the inner relationships of the feature points. Feature point matching is done using the bipartite graph matching method. Experiments conducted on both synthetic and real-world data demonstrate the robustness and invariance of the algorithm.

  16. Orthogonal least squares learning algorithm for radial basis function networks

    Energy Technology Data Exchange (ETDEWEB)

    Chen, S.; Cowan, C.F.N.; Grant, P.M. (Dept. of Electrical Engineering, Univ. of Edinburgh, Mayfield Road, Edinburgh EH9 3JL, Scotland (GB))

    1991-03-01

    The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular value decomposition to solve for the weights of the network. Such a procedure has several drawbacks and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The paper proposes an alternative learning procedure based on the orthogonal least squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. The algorithm has the property that each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least squares learning strategy provides a simple and efficient means for fitting radial basis function networks, and this is illustrated using examples taken from two different signal processing applications.

  17. Orthogonal least squares learning algorithm for radial basis function networks.

    Science.gov (United States)

    Chen, S; Cowan, C N; Grant, P M

    1991-01-01

    The radial basis function network offers a viable alternative to the two-layer neural network in many applications of signal processing. A common learning algorithm for radial basis function networks is based on first choosing randomly some data points as radial basis function centers and then using singular-value decomposition to solve for the weights of the network. Such a procedure has several drawbacks, and, in particular, an arbitrary selection of centers is clearly unsatisfactory. The authors propose an alternative learning procedure based on the orthogonal least-squares method. The procedure chooses radial basis function centers one by one in a rational way until an adequate network has been constructed. In the algorithm, each selected center maximizes the increment to the explained variance or energy of the desired output and does not suffer numerical ill-conditioning problems. The orthogonal least-squares learning strategy provides a simple and efficient means for fitting radial basis function networks. This is illustrated using examples taken from two different signal processing applications.
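
    The forward-selection procedure described above can be sketched as a greedy orthogonal least squares loop: at each step, the candidate center whose orthogonalized regressor explains the most remaining output energy is selected. This is a simplified illustration, not the authors' code; the data and candidate centers are synthetic.

    ```python
    import numpy as np

    def ols_select(Phi, d, n_centers):
        # Greedy orthogonal least squares: at each step pick the candidate
        # regressor whose component orthogonal to the already-selected set
        # explains the most of the remaining output energy.
        selected, Q = [], []
        r = d.astype(float).copy()
        for _ in range(n_centers):
            best, best_gain, best_w = None, -1.0, None
            for j in range(Phi.shape[1]):
                if j in selected:
                    continue
                w = Phi[:, j].copy()
                for q in Q:                      # Gram-Schmidt step
                    w -= (q @ Phi[:, j]) * q
                nw = np.linalg.norm(w)
                if nw < 1e-12:
                    continue                     # numerically dependent
                gain = (w @ r) ** 2 / nw ** 2    # explained energy
                if gain > best_gain:
                    best, best_gain, best_w = j, gain, w / nw
            selected.append(best)
            Q.append(best_w)
            r = r - (best_w @ r) * best_w        # deflate the target
        return selected

    # Toy 1D problem: the target is a sum of two Gaussian bumps; the
    # candidate RBF centers lie on a grid (all values illustrative).
    x = np.linspace(-1.0, 1.0, 40)
    centers = np.linspace(-1.0, 1.0, 15)
    Phi = np.exp(-(x[:, None] - centers[None, :]) ** 2 / 0.1)
    d = np.exp(-(x + 0.5) ** 2 / 0.1) + 0.5 * np.exp(-(x - 0.6) ** 2 / 0.1)
    chosen = ols_select(Phi, d, 2)
    ```

    With this data the two selected centers land near the two bumps, one per greedy step.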

  18. Least squares weighted twin support vector machines with local information

    Institute of Scientific and Technical Information of China (English)

    花小朋; 徐森; 李先锋

    2015-01-01

    A least squares version of the recently proposed weighted twin support vector machine with local information (WLTSVM) for binary classification is formulated. This formulation leads to an extremely simple and fast algorithm, called the least squares weighted twin support vector machine with local information (LSWLTSVM), for generating binary classifiers based on two non-parallel hyperplanes. Instead of the two dual problems usually solved, two modified primal problems of WLTSVM are solved, and their solution reduces to solving just two systems of linear equations, as opposed to the two quadratic programming problems plus two systems of linear equations required by WLTSVM. Moreover, two extra modifications are proposed in LSWLTSVM to improve the generalization capability. One is that a heat kernel function, rather than the simple definition used in WLTSVM, is used to define the weight matrix of the adjacency graph, which ensures that the underlying similarity information between any pair of data points in the same class is fully reflected. The other is that the weight of each point in the contrary class is considered in constructing the equality constraints, which makes LSWLTSVM less sensitive to noise points than WLTSVM. Experimental results indicate that LSWLTSVM has classification accuracy comparable to that of WLTSVM but with remarkably less computational time.

  19. Making the most out of the least (squares migration)

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution. If the migration velocity model is sufficiently accurate, then LSM can mitigate many of the above problems and lead to a more resolved migration image, sometimes with twice the spatial resolution. However, there are two problems with LSM: the cost can be an order of magnitude more than standard migration and the quality of the LSM image is no better than the standard image for velocity errors of 5% or more. We now show how to get the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.

  20. Plane-wave least-squares reverse-time migration

    KAUST Repository

    Dai, Wei

    2013-06-03

    A plane-wave least-squares reverse-time migration (LSRTM) is formulated with a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of plane-wave prestack LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer stable convergence even when the migration velocity has bulk errors of up to 5%; (2) to significantly reduce computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves; unlike phase-shift encoding with random time shifts applied to each shot gather, plane-wave encoding can be effectively applied to data with a marine streamer geometry; and (3) plane-wave prestack LSRTM can provide higher-quality images than standard reverse-time migration. Numerical tests on the Marmousi2 model and a marine field data set are performed to illustrate the benefits of plane-wave LSRTM. Empirical results show that LSRTM in the plane-wave domain, compared to standard reverse-time migration, produces images efficiently with fewer artifacts and better spatial resolution. Moreover, the prestack image ensemble accommodates more unknowns, making it more robust than conventional least-squares migration in the presence of migration velocity errors. © 2013 Society of Exploration Geophysicists.

  1. On the stability and accuracy of least squares approximations

    CERN Document Server

    Cohen, Albert; Leviatan, Dany

    2011-01-01

    We consider the problem of reconstructing an unknown function $f$ on a domain $X$ from samples of $f$ at $n$ randomly chosen points with respect to a given measure $\\rho_X$. Given a sequence of linear spaces $(V_m)_{m>0}$ with ${\\rm dim}(V_m)=m\\leq n$, we study the least squares approximations from the spaces $V_m$. It is well known that such approximations can be inaccurate when $m$ is too close to $n$, even when the samples are noiseless. Our main result provides a criterion on $m$ that describes the needed amount of regularization to ensure that the least squares method is stable and that its accuracy, measured in $L^2(X,\\rho_X)$, is comparable to the best approximation error of $f$ by elements from $V_m$. We illustrate this criterion for various approximation schemes, such as trigonometric polynomials, with $\\rho_X$ being the uniform measure, and algebraic polynomials, with $\\rho_X$ being either the uniform or Chebyshev measure. For such examples we also prove similar stability results using deterministic...
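
    The instability when $m$ approaches $n$ can be observed directly in the conditioning of the sampled basis. The sketch below uses the paper's setting of algebraic polynomials with the uniform measure on $[-1,1]$; the monomial basis and the specific dimensions are our choices for illustration, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50                                   # number of random sample points
x = rng.uniform(-1, 1, n)                # draws from the uniform measure on X = [-1, 1]

def ls_condition(m):
    """Condition number of the sampled basis matrix for an m-dimensional
    polynomial space V_m (monomial basis, for illustration)."""
    V = np.vander(x, m, increasing=True)
    return np.linalg.cond(V)

# The conditioning, and hence the stability of the least squares fit,
# deteriorates sharply as m approaches n, even for noiseless samples.
growth = ls_condition(45) / ls_condition(5)
```

    A criterion restricting $m$ relative to $n$, as in the paper, keeps the fit in the well-conditioned regime on the left end of this sweep.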

  2. Decision-Directed Recursive Least Squares MIMO Channels Tracking

    Directory of Open Access Journals (Sweden)

    2006-01-01

    Full Text Available A new approach to joint data estimation and channel tracking for multiple-input multiple-output (MIMO) channels is proposed based on the decision-directed recursive least squares (DD-RLS) algorithm. The RLS algorithm is commonly used for equalization, and its application to channel estimation is a novel idea. In this paper, the weighted least squares cost function is defined and minimized, and the RLS MIMO channel estimation algorithm is derived. The proposed algorithm, combined with the decision-directed algorithm (DDA), is then extended to blind-mode operation. The proposed algorithm is computationally very efficient, with complexity growing as O(N^3) in the number of transmitter and receiver antennas. Through various simulations, the mean square error (MSE) of the tracking of the proposed algorithm for different joint detection algorithms is compared with the Kalman filtering approach, which is one of the most well-known channel tracking algorithms. It is shown that the performance of the proposed algorithm is very close to that of the Kalman estimator, and that in blind-mode operation it presents better performance with much lower complexity and without the need to know the channel model.
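
    A single RLS update has a simple closed form. The sketch below tracks one slowly drifting scalar tap; the forgetting factor, drift model, and scalar (rather than MIMO) channel are illustrative assumptions, not the paper's setup.

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.98):
    """One recursive least squares step: update the weight vector w and the
    inverse correlation matrix P from regressor x and desired sample d.
    lam < 1 is the forgetting factor that enables tracking of time variation."""
    Px = P @ x
    k = Px / (lam + x @ Px)              # gain vector
    e = d - w @ x                        # a-priori estimation error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam      # update inverse correlation matrix
    return w, P

# Track a slowly drifting scalar channel tap from noisy observations
rng = np.random.default_rng(2)
w, P = np.zeros(1), np.eye(1) * 100.0    # large initial P: uninformative prior
h = 1.0
for t in range(500):
    h += 0.001                           # channel drift
    x = rng.standard_normal(1)           # transmitted symbol (regressor)
    d = h * x[0] + 0.01 * rng.standard_normal()
    w, P = rls_update(w, P, x, d)
```

    In the decision-directed mode described above, the detected symbols would take the place of the known regressor `x`.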

  3. Forecasting Istanbul monthly temperature by multivariate partial least square

    Science.gov (United States)

    Ertaç, Mefharet; Firuzan, Esin; Solum, Şenol

    2015-07-01

    Weather forecasting, especially for temperature, has always been a popular subject, since it affects our daily life and, as statistics does, always involves uncertainty. The goals of this study are (a) to forecast monthly mean temperature using meteorological variables such as temperature, humidity and rainfall; and (b) to improve forecasting ability by evaluating the forecasting errors depending on parameter changes and on local or global forecasting methods. Approximately 100 years of meteorological data from 54 automatic meteorology observation stations of Istanbul, the megacity of Turkey, are analyzed to infer the meteorological behaviour of the city. A new partial least squares (PLS) forecasting technique based on chaotic analysis is also developed, using nonlinear time series and variable selection methods. The proposed model is also compared with artificial neural networks (ANNs), which model the relation between inputs and outputs nonlinearly with neurons that work like the human brain. Ordinary least squares (OLS), PLS and ANN methods are used for nonlinear time series forecasting in this study. Major findings are the chaotic nature of the meteorological data of Istanbul and the best performance values of the proposed PLS model.

  4. Efficient Model Selection for Sparse Least-Square SVMs

    Directory of Open Access Journals (Sweden)

    Xiao-Lei Xia

    2013-01-01

    Full Text Available The Forward Least-Squares Approximation (FLSA) SVM is a newly emerged least-squares SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independence of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely the Reduced FLSA-SVM (RFLSA-SVM), which has reduced computational complexity and memory requirements. The strategy of “contexts inheritance” is introduced to improve the efficiency of tuning the regularization parameter for both the FLSA-SVM and the RFLSA-SVM algorithms. Experimental results on benchmark datasets show that, compared to the SVM and a number of its variants, the RFLSA-SVM solutions contain a reduced number of support vectors while maintaining competitive generalization abilities. With respect to the time cost of tuning the regularization parameter, the RFLSA-SVM algorithm was empirically demonstrated to be the fastest among the FLSA-SVM, LS-SVM, and SVM algorithms.

  5. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most prominent among these is linear least-squares. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria have therefore been proposed, which allow, in one way or another, the incorporation of further prior information into the problem. Among these alternative criteria is regularized least-squares (RLS). In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix, so that the modified model provides a better, more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that aim to minimize the estimated data error, the two proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard) and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
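
    The role of the regularization parameter is easiest to see through the SVD. The sketch below contrasts plain and regularized least squares on an ill-conditioned model matrix; the fixed gamma and the synthetic data are illustrative choices on our part, whereas COPRA selects the parameter to approximately minimize the MSE.

```python
import numpy as np

def regularized_ls(A, y, gamma):
    """Regularized least squares: x = argmin ||Ax - y||^2 + gamma * ||x||^2,
    solved through the SVD so that a sweep over gamma reuses one factorization."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    filt = s / (s**2 + gamma)            # filtered singular values damp small s_i
    return Vt.T @ (filt * (U.T @ y))

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 20))
A[:, -1] = A[:, 0] + 1e-6 * rng.standard_normal(50)   # nearly dependent column
x_true = rng.standard_normal(20)
y = A @ x_true + 0.1 * rng.standard_normal(50)

x_plain = np.linalg.lstsq(A, y, rcond=None)[0]        # unregularized solution
x_reg = regularized_ls(A, y, gamma=0.1)               # regularized solution
```

    The unregularized solution is blown up along the nearly singular direction, while the damped solution stays close to the true parameter vector.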

  6. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang

    2013-12-06

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free-surface becomes an extended virtual source where the downgoing free-surface multiples more fully illuminate the subsurface compared to the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required and the ringy time series for each source is automatically deconvolved. If the multiples can be perfectly separated from the primaries, numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if there is imperfect separation and the multiples are strongly interfering with the primaries then LSRTMM images show no significant advantage over the primary migration images. In some cases, they can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows higher signal-to-noise imaging of the salt bottom and top compared to standard RTM images. This is likely attributed to the fact that the target body is just below the sea bed so that the deep water multiples do not have strong interference with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean bottom seismic data shows that LSM of primaries and LSRTMM provides significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. This can lead to lower

  7. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations

    KAUST Repository

    Cao, Jiguo

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of nonlinear least squares in which penalized splines are used to model the functional parameters and the ODE solutions are also approximated using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function, which is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate an HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method, which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.

  8. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.

  9. Least squares deconvolution of the stellar intensity and polarization spectra

    CERN Document Server

    Kochukhov, O; Piskunov, N

    2010-01-01

    Least squares deconvolution (LSD) is a powerful method of extracting high-precision average line profiles from the stellar intensity and polarization spectra. Despite its common usage, the LSD method is poorly documented and has never been tested using realistic synthetic spectra. In this study we revisit the key assumptions of the LSD technique, clarify its numerical implementation, discuss possible improvements and give recommendations on how to make LSD results understandable and reproducible. We also address the problem of interpreting the moments and shapes of the LSD profiles in terms of physical parameters. We have developed an improved, multiprofile version of LSD and have extended the deconvolution procedure to linear polarization analysis, taking into account anomalous Zeeman splitting of spectral lines. This code is applied to theoretical Stokes parameter spectra. We test various methods of interpreting the mean profiles, investigating how coarse approximations of the multiline technique trans...
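
    The core LSD model, a line mask of known positions and depths convolved with one common mean profile and solved in the least squares sense, can be sketched on a pixel grid. This toy version is our simplification; the mask, the noiseless synthetic spectrum, and the pixel-domain (rather than velocity-domain) convolution are illustrative assumptions.

```python
import numpy as np

def lsd_profile(spec, line_pos, line_weight, nv):
    """Least squares deconvolution sketch: model the spectrum as a line mask
    (delta functions with known depths) convolved with a single common mean
    profile Z of length nv, and solve for Z in the least squares sense."""
    npix = spec.size
    M = np.zeros((npix, nv))
    for p, w in zip(line_pos, line_weight):
        for v in range(nv):
            i = p + v - nv // 2          # profile sample v shifted to line p
            if 0 <= i < npix:
                M[i, v] += w
    Z, *_ = np.linalg.lstsq(M, spec, rcond=None)
    return Z
```

    Stacking many weak lines this way is what gives the mean profile its high effective signal-to-noise ratio.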

  10. ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    SONG Kaichen; NIE Xili

    2006-01-01

    Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm, in which the relationship between the weight coefficients and the measurement noise is established, is proposed with attention to the correlation of the measurement noise. A simplified weighted fusion algorithm is then deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm which can adjust the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements is presented. It is shown by simulation and experiment that the precision of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
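
    For the uncorrelated-noise case, the simplified weighted fusion reduces to inverse-variance weighting. A minimal sketch follows; the sensor readings and noise variances are made up for illustration.

```python
import numpy as np

def fuse(measurements, variances):
    """Weighted least squares fusion of scalar sensor readings: with
    uncorrelated noise, the optimal weight of each sensor is inversely
    proportional to its measurement-noise variance."""
    w = 1.0 / np.asarray(variances, dtype=float)
    w /= w.sum()                          # normalize weights to sum to one
    return w @ np.asarray(measurements, dtype=float), w

# Three sensors measuring the same quantity with different noise levels
est, w = fuse([10.2, 9.8, 10.6], [0.01, 0.04, 0.25])
```

    The fused estimate has variance 1 / sum(1/var_i), which is smaller than that of even the most precise individual sensor.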

  11. Column Reordering for Box-Constrained Integer Least Squares Problems

    CERN Document Server

    Breen, Stephen

    2012-01-01

    The box-constrained integer least squares problem (BILS) arises in MIMO wireless communications applications. Typically a sphere decoding algorithm (a tree search algorithm) is used to solve the problem. To make the search algorithm more efficient, the columns of the channel matrix in the BILS problem have to be reordered. To our knowledge, there are currently two algorithms for column reordering that provide the best known results. Both use all available information, but they were derived from geometric and algebraic points of view, respectively, and look different. In this paper we modify one of them to make it more computationally efficient and easier to comprehend. We then prove that the modified algorithm and the other one actually give the same column reordering in theory. Finally we propose a new, mathematically equivalent algorithm which is more computationally efficient and still easy to understand.

  12. Least-squares based iterative multipath super-resolution technique

    CERN Document Server

    Nam, Wooseok

    2011-01-01

    In this paper, we study the problem of multipath channel estimation for direct sequence spread spectrum signals. To resolve multipath components arriving within a short interval, we propose a new algorithm called the least-squares based iterative multipath super-resolution (LIMS). Compared to conventional super-resolution techniques, such as the multiple signal classification (MUSIC) and the estimation of signal parameters via rotation invariance techniques (ESPRIT), our algorithm has several appealing features. In particular, even in critical situations where the conventional super-resolution techniques are not very powerful due to limited data or the correlation between path coefficients, the LIMS algorithm can produce successful results. In addition, due to its iterative nature, the LIMS algorithm is suitable for recursive multipath tracking, whereas the conventional super-resolution techniques may not be. Through numerical simulations, we show that the LIMS algorithm can resolve the first arrival path amo...

  13. Robust Homography Estimation Based on Nonlinear Least Squares Optimization

    Directory of Open Access Journals (Sweden)

    Wei Mou

    2014-01-01

    Full Text Available The homography between image pairs is normally estimated by minimizing a suitable cost function given 2D keypoint correspondences. The correspondences are typically established using the descriptor distance of keypoints. However, the correspondences are often incorrect due to ambiguous descriptors, which can introduce errors into the subsequent homography computation step. There have been numerous attempts to filter out these erroneous correspondences, but perfect matching cannot always be achieved. To deal with this problem, we propose a nonlinear least squares optimization approach that computes the homography such that false matches have little or no effect on the result. Unlike normal homography computation algorithms, our method formulates not only the keypoints’ geometric relationship but also their descriptor similarity into the cost function. Moreover, the cost function is parametrized in such a way that incorrect correspondences can be identified while the homography is computed. Experiments show that the proposed approach performs well even in the presence of a large number of outliers.

  14. Estimating Military Aircraft Cost Using Least Squares Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    ZHU Jia-yuan; ZHANG Xi-bin; ZHANG Heng-xi; REN Bo

    2004-01-01

    A multi-layer adaptive parameter-optimization algorithm is developed for improving least squares support vector machines (LS-SVM), and a military aircraft life-cycle-cost (LCC) intelligent estimation model is proposed based on the improved LS-SVM. The intelligent cost estimation process is divided into three steps in the model. In the first step, a cost-drive-factor needs to be selected, which is significant for cost estimation. In the second step, military aircraft training samples within the costs and cost-drive-factor set are obtained by the LS-SVM. Then the model can be used for cost estimation of new aircraft types. Chinese military aircraft costs are estimated in the paper. The results show that the costs estimated by the new model are closer to the true costs than those of the traditionally used methods.

  15. A Galerkin least squares approach to viscoelastic flow.

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.

  16. DIRECT ITERATIVE METHODS FOR RANK DEFICIENT GENERALIZED LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Jin-yun Yuan; Xiao-qing Jin

    2000-01-01

    The generalized least squares (LS) problem min_x (b - Ax)^T W^{-1} (b - Ax) appears in many application areas, where W is an m × m symmetric positive definite matrix and A is an m × n matrix with m ≥ n. Since the problem has many solutions in the rank-deficient case, some special preconditioned techniques are adapted to obtain the minimum 2-norm solution. A block SOR method and the preconditioned conjugate gradient (PCG) method are proposed here. Convergence and the optimal relaxation parameter for the block SOR method are studied. An error bound for the PCG method is given. A comparison of these methods is investigated. Some remarks on the implementation of the methods and the operation cost are given as well.
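
    For small problems, the minimum 2-norm generalized LS solution that the preconditioned iterations target can be written down directly. The sketch below uses whitening by the Cholesky factor of W followed by the pseudoinverse; this is our illustrative alternative, not the paper's block SOR or PCG iterations.

```python
import numpy as np

def gls(A, b, W):
    """Generalized least squares: minimize (b - Ax)^T W^{-1} (b - Ax) by
    whitening with the Cholesky factor of W; the pseudoinverse then returns
    the minimum 2-norm solution in the rank-deficient case."""
    L = np.linalg.cholesky(W)             # W = L L^T, L lower triangular
    Aw = np.linalg.solve(L, A)            # whitened model matrix L^{-1} A
    bw = np.linalg.solve(L, b)            # whitened data L^{-1} b
    return np.linalg.pinv(Aw) @ bw

# Rank-deficient example: duplicate columns, consistent right-hand side
A = np.array([[1.0, 1.0], [2.0, 2.0], [3.0, 3.0]])
b = A @ np.array([1.0, 1.0])
W = np.diag([1.0, 2.0, 4.0])
x = gls(A, b, W)    # minimum-norm point on the solution line x1 + x2 = 2
```

    Here the solution set is the whole line x1 + x2 = 2, and the pseudoinverse picks out its minimum 2-norm member.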

  17. Temperature prediction control based on least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Bin LIU; Hongye SU; Weihua HUANG; Jian CHU

    2004-01-01

    A prediction control algorithm is presented based on a least squares support vector machine (LS-SVM) model for a class of complex systems with strong nonlinearity. The nonlinear off-line model of the controlled plant is built by an LS-SVM with a radial basis function (RBF) kernel. While the system runs, the off-line model is linearized at each sampling instant, and the generalized prediction control (GPC) algorithm is employed to implement prediction control for the controlled plant. The obtained algorithm is applied to a boiler temperature control system with complicated nonlinearity and large time delay. The results of the experiment verify the effectiveness and merit of the algorithm.

  18. Local validation of EU-DEM using Least Squares Collocation

    Science.gov (United States)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we deal with the evaluation of the European Digital Elevation Model (EU-DEM) in a limited area covering a few kilometers. We compare EU-DEM-derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for orthometric height prediction, using Least Squares Collocation on the residuals that remain after the fitted surface is applied. Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and official heights that is better than 1.4 meters.

  19. Flow Applications of the Least Squares Finite Element Method

    Science.gov (United States)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  20. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. We then utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error (LMMSE) estimator, when the elements of x are statistically white.

  1. semPLS: Structural Equation Modeling Using Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Armin Monecke

    2012-05-01

    Full Text Available Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM, which is especially suited to situations when data are not normally distributed. PLS path modelling is referred to as a soft-modeling technique with minimal demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.

  2. Least-squares deconvolution based analysis of stellar spectra

    CERN Document Server

    Van Reeth, T; Tsymbal, V

    2013-01-01

    In recent years, astronomical photometry has been revolutionised by space missions such as MOST, CoRoT and Kepler. However, despite this progress, high-quality spectroscopy is still required as well. Unfortunately, high-resolution spectra can only be obtained using ground-based telescopes, and since many interesting targets are rather faint, the spectra often have a relatively low S/N. Consequently, we have developed an algorithm based on the least-squares deconvolution profile which allows one to reconstruct an observed spectrum, but with a higher S/N. We have successfully tested the method using both synthetic and observed data, and in combination with several common spectroscopic applications, such as the determination of atmospheric parameter values, and frequency analysis and mode identification of stellar pulsations.

  3. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for the mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common to all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to velocity errors; 2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and 3) LSM achieves both computational and I/O savings through plane-wave encoding compared to shot-domain LSM for the models tested.

  4. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    Full Text Available The discrete Fourier transform (DFT)-based maximum likelihood (ML) algorithm is an important tool for single-sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above a threshold value, its error lies very close to the Cramer-Rao lower bound (CRLB), which depends on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its computational cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) not only retains excellent generalization and fitting capabilities but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate the Fourier coefficients of received signals and attain high frequency-estimation accuracy. Our results show that the proposed algorithm strikes a good compromise between computational cost and MSE performance under the assumption that the sample size, the number of DFT points, and the resampling points are known in advance.
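A convenient property of LS-SVR is that training reduces to solving one linear (KKT) system rather than a quadratic program. The numpy sketch below fits a generic smooth 1-D function with an RBF kernel; the kernel choice and the values of the hyperparameters gamma and sigma are illustrative assumptions, not the paper's settings (the paper applies the regressor to DFT coefficients).

```python
import numpy as np

def rbf_kernel(A, B, sigma):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvr_fit(X, y, gamma, sigma):
    """Solve the LS-SVR KKT system
       [ 0   1^T          ] [b    ]   [0]
       [ 1   K + I/gamma  ] [alpha] = [y]."""
    n = y.size
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = rbf_kernel(X, X, sigma) + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]            # bias b, dual weights alpha

def lssvr_predict(Xq, Xtr, b, alpha, sigma):
    return rbf_kernel(Xq, Xtr, sigma) @ alpha + b

# Fit a smooth 1-D target function
Xtr = np.linspace(0.0, 3.0, 30).reshape(-1, 1)
ytr = np.sin(2.0 * Xtr).ravel()
b, alpha = lssvr_fit(Xtr, ytr, gamma=1e4, sigma=0.5)
yhat = lssvr_predict(Xtr, Xtr, b, alpha, sigma=0.5)
```

The 1/gamma term plays the role of a ridge penalty: larger gamma means a tighter fit to the training targets, at the cost of less smoothing.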

  5. Least-Squares Seismic Inversion with Stochastic Conjugate Gradient Method

    Institute of Scientific and Technical Information of China (English)

    Wei Huang; Hua-Wei Zhou

    2015-01-01

    With the growth of computational power, there has been an increased focus on data-fitting seismic inversion techniques, such as full-waveform inversion and least-squares migration, for high-fidelity seismic velocity models and images. Although more advanced than conventional methods, these data-fitting methods can be very expensive in terms of computational cost. Recently, various techniques have been implemented to optimize these data-fitting seismic inversion problems and meet the industrial need for much-improved efficiency. In this study, we propose a general stochastic conjugate gradient method for these data-fitting inverse problems. We first present the basic theory of our method and then give synthetic examples. Our numerical experiments illustrate the potential of this method for large-scale seismic inversion applications.
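The flavour of such stochastic schemes can be shown on a plain linear least-squares problem: each iteration computes a descent step from a random subset of the data rows only, so no pass ever touches the full dataset. The sketch below uses a steepest-descent step with an exact per-batch line search; the paper's method additionally maintains conjugate directions, which is omitted here for clarity, and the toy matrix stands in for a seismic modelling operator.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, batch = 200, 20, 50
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true                        # consistent "observed data"

x = np.zeros(n)
for _ in range(200):
    rows = rng.choice(m, size=batch, replace=False)
    As, bs = A[rows], b[rows]
    g = As.T @ (As @ x - bs)          # mini-batch gradient
    Ag = As @ g
    if Ag @ Ag < 1e-30:               # already converged on this batch
        continue
    step = (g @ g) / (Ag @ Ag)        # exact line search for the batch
    x -= step * g

resid = np.linalg.norm(A @ x - b)
```

For a consistent system every mini-batch shares the full solution, so each exact-line-search step is guaranteed not to increase the distance to it; with noisy data one would decay the step size instead.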

  6. Götterdämmerung over total least squares

    Science.gov (United States)

    Malissiovas, G.; Neitzel, F.; Petrovic, S.

    2016-06-01

    The traditional way of solving non-linear least squares (LS) problems in geodesy includes a linearization of the functional model and an iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have also been developed in the past by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. To this end, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D, and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology, by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS, all four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical to those resulting from the LS approach. As a by-product of this research, two novel approaches are presented for the TLS solutions of fitting a straight line to 3D points and of the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and are also numerically compared to published iterative solutions.
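For the simplest of these four problems, fitting a straight line to 2D points, the TLS solution minimizes the orthogonal (not vertical) distances and falls out of the SVD of the centered data matrix, matching the eigenvalue-equation view described above. A minimal numpy sketch:

```python
import numpy as np

def tls_line_2d(x, y):
    """Fit a*x + b*y + c = 0 minimizing orthogonal distances.
    The normal (a, b) is the right singular vector of the centered
    data matrix belonging to the smallest singular value."""
    P = np.column_stack([x - x.mean(), y - y.mean()])
    _, _, Vt = np.linalg.svd(P, full_matrices=False)
    a, b = Vt[-1]
    c = -(a * x.mean() + b * y.mean())    # line passes through the centroid
    return a, b, c

x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = 2.0 * x + 1.0                         # points exactly on y = 2x + 1
a, b, c = tls_line_2d(x, y)
```

Unlike ordinary LS regression of y on x, this fit is symmetric in the two coordinates, which is why it also handles near-vertical lines without trouble.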

  7. Recursive least square vehicle mass estimation based on acceleration partition

    Science.gov (United States)

    Feng, Yuan; Xiong, Lu; Yu, Zhuoping; Qu, Tong

    2014-05-01

    Vehicle mass is an important parameter in vehicle dynamics control systems. Although many algorithms have been developed for the estimation of mass, none of them have yet taken into account the different types of resistance that occur under different conditions. This paper proposes a vehicle mass estimator. The estimator incorporates road gradient information in the longitudinal accelerometer signal, and it removes the road grade from the longitudinal dynamics of the vehicle. Then, two different recursive least square method (RLSM) schemes are proposed to estimate the driving resistance and the mass independently, based on the acceleration partition under different conditions. A 6-DOF dynamic model of a four-in-wheel-motor vehicle is built to assist in the design of the algorithm and in the setting of the parameters. The acceleration limits are determined not only to reduce the estimation error but also to ensure enough data for the resistance estimation and mass estimation in some critical situations. A modification of the algorithm is also discussed to improve the result of the mass estimation. Experimental data on asphalt, plastic-runway, and gravel roads and on sloping roads are used to validate the estimation algorithm. The adaptability of the algorithm is improved by using data collected under several critical operating conditions. The experimental results show the error of the estimation process to be within 2.6%, which indicates that the algorithm can estimate mass with great accuracy regardless of road surface and gradient changes and that it may be valuable in engineering applications.
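The recursive-least-squares update at the core of such estimators has a compact closed form. The numpy sketch below runs it on a toy longitudinal model F = m·a + F_res; the two-parameter model, the forgetting factor, and all numbers are illustrative assumptions, not the paper's vehicle model.

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=0.99):
    """One RLS update with forgetting factor lam.
    theta: parameter estimate, P: covariance-like matrix,
    phi: regressor vector, y: new measurement."""
    phi = phi.reshape(-1, 1)
    err = float(y - phi.T @ theta)            # innovation
    K = (P @ phi) / float(lam + phi.T @ P @ phi)   # gain
    theta = theta + K.ravel() * err
    P = (P - K @ (phi.T @ P)) / lam
    return theta, P

# Toy example: F = m*a + F_res; estimate mass m and resistance F_res
m_true, f_res = 1500.0, 300.0
theta = np.zeros(2)
P = 1e6 * np.eye(2)                           # large P: weak prior
rng = np.random.default_rng(0)
for _ in range(100):
    a = rng.uniform(0.5, 3.0)                 # longitudinal acceleration
    F = m_true * a + f_res                    # measured drive force
    theta, P = rls_step(theta, P, np.array([a, 1.0]), F)
```

The forgetting factor lam < 1 down-weights old samples, which is what lets the estimator track slowly varying parameters such as a changing payload.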

  8. Partial Least Squares tutorial for analyzing neuroimaging data

    Directory of Open Access Journals (Sweden)

    Patricia Van Roon

    2014-09-01

    Full Text Available Partial least squares (PLS) has become a respected and meaningful soft-modeling analysis technique that can be applied to very large datasets where the number of factors or variables is greater than the number of observations. Current biometric studies (e.g., eye movements, EKG, body movements, EEG) are often of this nature. PLS eliminates the multiple linear regression issues of over-fitting data by finding a few underlying or latent variables (factors) that account for most of the variation in the data. In real-world applications, where linear models do not always apply, PLS can model the non-linear relationships well. This tutorial introduces two PLS methods, PLS Correlation (PLSC) and PLS Regression (PLSR), and their applications in data analysis, illustrated with neuroimaging examples. Both methods provide straightforward and comprehensible techniques for determining and modeling relationships between two multivariate data blocks by finding latent variables that best describe the relationships. In the examples, PLSC is used to analyze the relationship between neuroimaging data, such as Event-Related Potential (ERP) amplitude averages from different locations on the scalp, and the corresponding behavioural data. Using the same data, PLSR is used to model the relationship between neuroimaging and behavioural data; this model is able to predict future behaviour solely from available neuroimaging data. To find latent variables, Singular Value Decomposition (SVD) for PLSC and Non-linear Iterative PArtial Least Squares (NIPALS) for PLSR are implemented in this tutorial. SVD decomposes the large data block into three manageable matrices containing a diagonal set of singular values, as well as left and right singular vectors. For PLSR, the NIPALS algorithm is used because it provides a more precise estimation of the latent variables. Mathematica notebooks are provided for each PLS method with clearly labeled sections and subsections.
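For a single response variable, the NIPALS extraction loop needs no inner iteration: each component takes a weight vector from the covariance of X and y, computes scores and loadings, then deflates both blocks. The following compact numpy sketch of PLS1 regression is a generic illustration, not the tutorial's Mathematica code.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """NIPALS PLS1: extract n_comp latent variables for one response."""
    X0 = X - X.mean(axis=0)
    y0 = y - y.mean()
    W, Pl, q = [], [], []
    for _ in range(n_comp):
        w = X0.T @ y0                     # weight from X'y covariance
        nw = np.linalg.norm(w)
        if nw < 1e-12:                    # response fully explained
            break
        w /= nw
        t = X0 @ w                        # scores
        tt = t @ t
        p = X0.T @ t / tt                 # X loadings
        qk = (y0 @ t) / tt                # y loading
        X0 = X0 - np.outer(t, p)          # deflate both blocks
        y0 = y0 - qk * t
        W.append(w); Pl.append(p); q.append(qk)
    W, Pl, q = np.array(W).T, np.array(Pl).T, np.array(q)
    B = W @ np.linalg.solve(Pl.T @ W, q)  # regression coefficients
    return B, X.mean(axis=0), y.mean()

def pls1_predict(X, B, xm, ym):
    return (X - xm) @ B + ym

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, 2.0, 3.0])         # exact linear response
B, xm, ym = pls1_fit(X, y, n_comp=3)
yhat = pls1_predict(X, B, xm, ym)
```

With as many components as predictors, PLS1 reproduces the ordinary least-squares fit; the interesting regime is n_comp much smaller than the number of (collinear) predictors.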

  9. Recursive least squares background prediction of univariate syndromic surveillance data

    Directory of Open Access Journals (Sweden)

    Burkom Howard

    2009-01-01

    Full Text Available Abstract Background Surveillance of univariate syndromic data as a potential indicator of developing public health conditions has been used extensively. This paper aims to improve outbreak-detection performance by using a background forecasting algorithm based on the adaptive recursive least squares method combined with a novel treatment of the day-of-the-week effect. Methods Previous work by the first author has suggested that univariate recursive least squares analysis of syndromic data can be used to characterize the background upon which the prediction and detection components of a biosurveillance system may be built. An adaptive implementation is used to deal with data non-stationarity. In this paper we develop and implement the RLS method for background estimation of univariate data. The distinctly dissimilar distributions of data for different days of the week, however, can affect filter implementations adversely, and so a novel procedure based on linear transformations of the sorted values of the daily counts is introduced. Seven-day-ahead daily predicted counts are used as background estimates. A signal injection procedure is used to examine the integrated algorithm's ability to detect synthetic anomalies in real syndromic time series. We compare the method to a baseline CDC forecasting algorithm known as the W2 method. Results We present detection results in the form of receiver operating characteristic curve values for four different injected signal-to-noise ratios using 16 sets of syndromic data. We find improvements in the false alarm probabilities when compared to the baseline W2 background forecasts. Conclusion The current paper introduces a prediction approach for city-level biosurveillance data streams such as time series of outpatient clinic visits and sales of over-the-counter remedies. This approach uses RLS filters modified by a correction for the weekly patterns often seen in these data series, and a threshold

  10. Least-squares joint imaging of multiples and primaries

    Science.gov (United States)

    Brown, Morgan Parker

    Current exploration geophysics practice still regards multiple reflections as noise, although multiples often contain considerable information about the earth's angle-dependent reflectivity that primary reflections do not. To exploit this information, multiples and primaries must be combined in a domain in which they are comparable, such as the prestack image domain. However, unless the multiples and primaries have been pre-separated from the data, crosstalk leakage between multiple and primary images will significantly degrade any gains in the signal fidelity, geologic interpretability, and signal-to-noise ratio of the combined image. I present a global linear least-squares algorithm, denoted LSJIMP (Least-squares Joint Imaging of Multiples and Primaries), which separates multiples from primaries while simultaneously combining their information. The novelty of the method lies in the three model regularization operators which discriminate between crosstalk and signal and extend information between multiple and primary images. The LSJIMP method exploits the hitherto ignored redundancy between primaries and multiples in the data. While many different types of multiple imaging operators are well-suited for use with the LSJIMP method, in this thesis I utilize an efficient prestack time imaging strategy for multiples which sacrifices accuracy in a complex earth for computational speed and convenience. I derive a variant of the normal moveout (NMO) equation for multiples, called HEMNO, which can image "split" pegleg multiples arising from a moderately heterogeneous earth. I also derive a series of prestack amplitude compensation operators which, when combined with HEMNO, transform pegleg multiples into events that are directly comparable, kinematically and in terms of amplitudes, to the primary reflection. I test my implementation of LSJIMP on two datasets from the deepwater Gulf of Mexico. The first, a 2-D line in the Mississippi Canyon region, exhibits a variety of

  11. A least squares closure approximation for liquid crystalline polymers

    Science.gov (United States)

    Sievenpiper, Traci Ann

    2011-12-01

    An introduction to existing closure schemes for the Doi-Hess kinetic theory of liquid crystalline polymers is provided. A new closure scheme is devised based on a least squares fit of a linear combination of the Doi, Tsuji-Rey, Hinch-Leal I, and Hinch-Leal II closure schemes. The orientation tensor and rate-of-strain tensor are fit separately using data generated from the kinetic solution of the Smoluchowski equation. The known behavior of the kinetic solution and existing closure schemes at equilibrium is compared with that of the new closure scheme. The performance of the proposed closure scheme in simple shear flow for a variety of shear rates and nematic polymer concentrations is examined, along with that of the four selected existing closure schemes. The flow phase diagram for the proposed closure scheme under the conditions of shear flow is constructed and compared with that of the kinetic solution. The study of the closure scheme is extended to the simulation of nematic polymers in plane Couette cells. The results are compared with existing simulations for a Landau-de Gennes mesoscopic model with the application of a parameterized closure approximation. The proposed closure scheme is shown to produce a reasonable approximation to the kinetic results in the case of simple shear flow and plane Couette flow.
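The fitting step itself is ordinary linear least squares: given sample values predicted by each of the four base closures and the corresponding kinetic-theory values, the mixing coefficients minimize the squared misfit. The numpy sketch below uses random stand-in data purely to show the shape of the computation; the real scheme fits tensor components sampled from Smoluchowski solutions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_samples = 200
# Columns: predictions of the Doi, Tsuji-Rey, Hinch-Leal I and
# Hinch-Leal II closures at the same sample points (stand-ins here).
basis = rng.normal(size=(n_samples, 4))
c_true = np.array([0.5, 0.2, 0.2, 0.1])
kinetic = basis @ c_true                  # "kinetic solution" targets

# Least-squares fit of the mixing coefficients
coef, res, rank, sv = np.linalg.lstsq(basis, kinetic, rcond=None)
```

Since the combined closure is linear in its coefficients, the fit is a single `lstsq` call even though each base closure is itself a nonlinear function of the orientation tensor.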

  12. Non-parametric and least squares Langley plot methods

    Directory of Open Access Journals (Sweden)

    P. W. Kiedron

    2015-04-01

    Full Text Available Langley plots are used to calibrate sun radiometers, primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of ln(V) (voltage) vs. m (air mass) yields a straight line with intercept ln(V0). This ln(V0) can subsequently be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The eleven techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving-window filters.
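The basic least-squares Langley calibration is a straight-line fit in (m, ln V) space: the slope gives -τ and the intercept gives ln(V0). A minimal numpy sketch with synthetic data (the values of V0, τ and the noise level are made up):

```python
import numpy as np

# Synthetic Langley data: V = V0 * exp(-tau * m) with small noise
V0_true, tau_true = 2.5, 0.12
m = np.linspace(1.0, 6.0, 40)             # air mass values
rng = np.random.default_rng(1)
V = V0_true * np.exp(-tau_true * m) * np.exp(rng.normal(0, 1e-3, m.size))

# Least-squares line in (m, ln V): slope = -tau, intercept = ln V0
slope, intercept = np.polyfit(m, np.log(V), 1)
V0_est, tau_est = np.exp(intercept), -slope
```

The difficulty at "interesting" sites is not this fit but the data screening that precedes it, which is what the non-parametric alternatives in the paper address.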

  13. PREDIKSI WAKTU KETAHANAN HIDUP DENGAN METODE PARTIAL LEAST SQUARE

    Directory of Open Access Journals (Sweden)

    PANDE PUTU BUDI KUSUMA

    2013-03-01

    Full Text Available Coronary heart disease is caused by an accumulation of fat on the inside walls of the blood vessels of the heart (the coronary arteries). The factors leading to coronary heart disease are dominated by patients' unhealthy lifestyles, and survival times differ from patient to patient. The objective of this research is to predict the survival time of patients with coronary heart disease, taking into account explanatory variables analyzed by the method of Partial Least Squares (PLS). The PLS method is used in place of multiple regression analysis when the specific problems of multicollinearity and microarray data arise. The purpose of the PLS method is to predict the response variables from the explanatory variables so as to produce more accurate predictive values. The results of this research showed that the predicted survival times for the three samples of patients with coronary heart disease averaged 13 days, with an RMSEP (error) value of 1.526, which means that the results of this study are not much different from predictions in the field of medicine. This is consistent with the medical observation that the average survival time for patients with coronary heart disease is 13 days.

  14. Application of the Least Squares Method in Axisymmetric Biharmonic Problems

    Directory of Open Access Journals (Sweden)

    Vasyl Chekurin

    2016-01-01

    Full Text Available An approach for solving axisymmetric biharmonic boundary value problems for a semi-infinite cylindrical domain is developed in this paper. On the lateral surface of the domain, homogeneous Neumann boundary conditions are prescribed. On the remaining part of the domain's boundary, four different sets of biharmonic boundary data are considered. To solve the formulated biharmonic problems, the method of least squares on the boundary, combined with the method of homogeneous solutions, is used. This reduces the problems to infinite systems of linear algebraic equations that can be solved with the reduction method. Convergence of the solution obtained with the developed approach was studied numerically on some characteristic examples. The developed approach can be used, in particular, to solve axisymmetric elasticity problems for cylindrical bodies whose heights equal or exceed their diameters, when normal and tangential tractions are prescribed on the lateral surface and various types of boundary conditions (in stresses, in displacements, or mixed) are given on the cylinder's end faces.

  15. Nonlinear least-squares data fitting in Excel spreadsheets.

    Science.gov (United States)

    Kemmer, Gerdi; Keller, Sandro

    2010-02-01

    We describe an intuitive and rapid procedure for analyzing experimental data by nonlinear least-squares fitting (NLSF) in the most widely used spreadsheet program. Experimental data in x/y form and data calculated from a regression equation are inputted and plotted in a Microsoft Excel worksheet, and the sum of squared residuals is computed and minimized using the Solver add-in to obtain the set of parameter values that best describes the experimental data. The confidence of best-fit values is then visualized and assessed in a generally applicable and easily comprehensible way. Every user familiar with the most basic functions of Excel will be able to implement this protocol, without previous experience in data fitting or programming and without additional costs for specialist software. The application of this tool is exemplified using the well-known Michaelis-Menten equation characterizing simple enzyme kinetics. Only slight modifications are required to adapt the protocol to virtually any other kind of dataset or regression equation. The entire protocol takes approximately 1 h.
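Outside of Excel, the same minimization of the sum of squared residuals fits in a few lines of code. The sketch below fits the Michaelis-Menten equation v = Vmax·S/(Km + S) by damped Gauss-Newton with numpy; the substrate concentrations and parameter values are synthetic, and Solver performs an analogous minimization internally.

```python
import numpy as np

def michaelis_menten(S, Vmax, Km):
    return Vmax * S / (Km + S)

def fit_mm(S, v, Vmax, Km, n_iter=100):
    """Damped Gauss-Newton minimization of sum((v - model)^2)."""
    for _ in range(n_iter):
        r = v - michaelis_menten(S, Vmax, Km)
        # Jacobian of the model w.r.t. (Vmax, Km)
        J = np.column_stack([S / (Km + S),
                             -Vmax * S / (Km + S) ** 2])
        step, *_ = np.linalg.lstsq(J, r, rcond=None)
        ssr_old = float(r @ r)
        lam = 1.0
        while lam > 1e-8:                 # halve step until SSR decreases
            Vn, Kn = Vmax + lam * step[0], Km + lam * step[1]
            ssr_new = float(np.sum((v - michaelis_menten(S, Vn, Kn)) ** 2))
            if ssr_new <= ssr_old:
                break
            lam *= 0.5
        Vmax += lam * step[0]
        Km += lam * step[1]
    return Vmax, Km

S = np.array([0.5, 1.0, 2.0, 4.0, 8.0, 16.0])
v = michaelis_menten(S, 10.0, 2.0)        # noiseless synthetic data
Vmax, Km = fit_mm(S, v, Vmax=5.0, Km=1.0)
```

The halving line search mirrors what one does manually with Solver when a full step makes the fit worse: back off until the sum of squared residuals decreases.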

  16. River flow time series using least squares support vector machines

    Science.gov (United States)

    Samsudin, R.; Saad, P.; Shabri, A.

    2011-06-01

    This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables which work as the time series forecasting for the LSSVM model. Monthly river flow data from two stations, the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia were taken into consideration in the development of this hybrid model. The performance of this model was compared with the conventional artificial neural network (ANN) models, Autoregressive Integrated Moving Average (ARIMA), GMDH and LSSVM models using the long term observations of monthly river flow discharge. The root mean square error (RMSE) and coefficient of correlation (R) are used to evaluate the models' performances. In both cases, the new hybrid model has been found to provide more accurate flow forecasts compared to the other models. The results of the comparison indicate that the new hybrid model is a useful tool and a promising new method for river flow forecasting.

  17. 3D plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2014-08-05

    A three-dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitations of current 3D marine acquisition geometries, a cylindrical-wave encoding is adopted for the narrow-azimuth streamer data. To account for the mispositioning of reflectors due to errors in the velocity model, a regularized LSM is devised so that each plane-wave or cylindrical-wave gather gives rise to an individual migration image, and a regularization term is included to encourage similarity between the migration images of similar encoding schemes. Both synthetic and field results show that: 1) plane-wave or cylindrical-wave encoding LSM can achieve both computational and I/O savings compared to shot-domain LSM; however, plane-wave LSM is still about 5 times more expensive than plane-wave migration; 2) the regularized LSM is more robust than LSM with one reflectivity model common to all the plane-wave or cylindrical-wave gathers.

  18. Robustness of ordinary least squares in randomized clinical trials.

    Science.gov (United States)

    Judkins, David R; Porter, Kristin E

    2016-05-20

    There has been a series of occasional papers in this journal about semiparametric methods for robust covariate control in the analysis of clinical trials. These methods are fairly easy to apply on currently available computers, but standard software packages do not yet support these methods with easy option selections. Moreover, these methods can be difficult to explain to practitioners who have only a basic statistical education. There is also a somewhat neglected history demonstrating that ordinary least squares (OLS) is very robust to the types of outcome distribution features that have motivated the newer methods for robust covariate control. We review these two strands of literature and report on some new simulations that demonstrate the robustness of OLS to more extreme normality violations than previously explored. The new simulations involve two strongly leptokurtic outcomes: near-zero binary outcomes and zero-inflated gamma outcomes. Potential examples of such outcomes include, respectively, 5-year survival rates for stage IV cancer and healthcare claim amounts for rare conditions. We find that traditional OLS methods work very well down to very small sample sizes for such outcomes. Under some circumstances, OLS with robust standard errors works well with even smaller sample sizes. Given this literature review and our new simulations, we think that most researchers may comfortably continue using standard OLS software, preferably with the robust standard errors. Copyright © 2015 John Wiley & Sons, Ltd. PMID:26694758
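The robust ("sandwich") standard errors recommended here are easy to compute directly. The numpy sketch below pairs OLS point estimates with HC0 heteroskedasticity-robust standard errors on simulated data; in practice one would use a package's HC option (e.g., HC3 for small samples), and the simulated model is purely illustrative.

```python
import numpy as np

def ols_robust(X, y):
    """OLS point estimates with HC0 'sandwich' standard errors."""
    XtX_inv = np.linalg.inv(X.T @ X)
    beta = XtX_inv @ (X.T @ y)
    e = y - X @ beta                        # residuals
    meat = X.T @ (X * e[:, None] ** 2)      # sum_i e_i^2 * x_i x_i'
    cov = XtX_inv @ meat @ XtX_inv          # sandwich covariance
    return beta, np.sqrt(np.diag(cov))

rng = np.random.default_rng(0)
n = 500
x = rng.normal(size=n)
X = np.column_stack([np.ones(n), x])        # intercept + covariate
y = 1.0 + 2.0 * x + rng.normal(0, 0.1, n)
beta, se = ols_robust(X, y)
```

Under homoskedastic noise the sandwich estimate agrees with the classical one; the point is that it stays valid when the error variance depends on the covariates, which is the situation the paper's leptokurtic outcomes create.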

  19. Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao bounds for the joint estimation problem. Then, we propose a nonlinear least squares (NLS) and an approximate NLS (aNLS) estimator for joint DOA and fundamental frequency estimation. The proposed estimators are maximum likelihood estimators when: 1) the noise is white Gaussian, 2) the environment is anechoic, and 3) the source of interest is in the far-field. Otherwise, the methods still approximately yield maximum likelihood estimates. Simulations on synthetic data show that the proposed methods have similar or better performance than state-of-the-art methods for DOA and fundamental frequency estimation.

  20. Non-parametric and least squares Langley plot methods

    Science.gov (United States)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0 e^(-τ·m), where a plot of ln(V) (voltage) vs. m (air mass) yields a straight line with intercept ln(V0). This ln(V0) subsequently can be used to solve for τ for any measurement of V and calculation of m. This calibration works well on some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations with no significant differences among them when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.

  1. Optimization of Parameter Selection for Partial Least Squares Model Development

    Science.gov (United States)

    Zhao, Na; Wu, Zhi-Sheng; Zhang, Qiao; Shi, Xin-Yuan; Ma, Qun; Qiao, Yan-Jiang

    2015-07-01

    In multivariate calibration using a spectral dataset, it is difficult to optimize the nonsystematic parameters of a quantitative model, i.e., spectral pretreatment, latent factors and variable selection. In this study, we describe a novel and systematic approach that uses a processing trajectory to select three parameters: the spectral pretreatment, variable importance in the projection (VIP) for variable selection, and the number of latent factors in the Partial Least-Squares (PLS) model. The root mean square error of calibration (RMSEC), the root mean square error of prediction (RMSEP), the ratio of the standard error of prediction to the standard deviation (RPD), and the determination coefficients of calibration (R²cal) and validation (R²pre) were simultaneously assessed to select the best modeling path. We used three different near-infrared (NIR) datasets, which illustrated that there was more than one modeling path that ensures good modeling. The PLS model optimizes the modeling parameters step by step, and the robust model described here demonstrates better efficiency than approaches in previously published papers.
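Two of the criteria used here have simple closed-form definitions worth spelling out: RMSEP is the root mean squared prediction error, and RPD is the ratio of the reference values' standard deviation to RMSEP (so larger RPD means predictions that are tight relative to the natural spread of the data). A small numpy sketch; the interpretation thresholds for RPD vary by field.

```python
import numpy as np

def rmsep(y_true, y_pred):
    """Root mean square error of prediction."""
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    return float(np.sqrt(np.mean((y_true - y_pred) ** 2)))

def rpd(y_true, y_pred):
    """Ratio of the reference values' standard deviation to RMSEP."""
    return float(np.std(y_true, ddof=1) / rmsep(y_true, y_pred))

y_ref = np.array([1.0, 2.0, 3.0, 4.0])
y_hat = y_ref + 0.5                       # uniformly biased predictions
```

RMSEC is the same formula as RMSEP evaluated on the calibration set instead of the prediction set.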

  2. Non-linear Least Squares Fitting in IDL with MPFIT

    CERN Document Server

    Markwardt, Craig B

    2009-01-01

    MPFIT is a port to IDL of the non-linear least squares fitting program MINPACK-1. MPFIT inherits the robustness of the original FORTRAN version of MINPACK-1, but is optimized for performance and convenience in IDL. In addition to the main fitting engine, MPFIT, several specialized functions are provided to fit 1-D curves and 2-D images; 1-D and 2-D peaks; and interactive fitting from the IDL command line. Several constraints can be applied to model parameters, including fixed constraints, simple bounding constraints, and "tying" the value to another parameter. Several data weighting methods are allowed, and the parameter covariance matrix is computed. Extensive diagnostic capabilities are available during the fit, via a call-back subroutine, and after the fit is complete. Several different forms of documentation are provided, including a tutorial, reference pages, and frequently asked questions. The package has been translated to C and Python as well. The full IDL and C packages can be found at http://purl.co...

  3. Reconciling alternate methods for the determination of charge distributions: A probabilistic approach to high-dimensional least-squares approximations

    CERN Document Server

    Champagnat, Nicolas; Faou, Erwan

    2010-01-01

    We propose extensions and improvements of the statistical analysis of distributed multipoles (SADM) algorithm put forth by Chipot et al. in [6] for the derivation of distributed atomic multipoles from the quantum-mechanical electrostatic potential. The method is mathematically extended to general least-squares problems and provides an alternative approximation method in cases where the original least-squares problem is computationally not tractable, either because of its ill-posedness or its high-dimensionality. The solution is approximated employing a Monte Carlo method that takes the average of a random variable defined as the solutions of random small least-squares problems drawn as subsystems of the original problem. The conditions that ensure convergence and consistency of the method are discussed, along with an analysis of the computational cost in specific instances.
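The core idea, averaging the solutions of random small least-squares subsystems drawn from a problem too large or too ill-posed to solve directly, fits in a few lines of numpy. The sketch below is a toy stand-in: the dimensions, subsystem size, and noise level are arbitrary, and the paper's application is to electrostatic-potential fitting rather than a random matrix.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n, k, n_draws = 400, 5, 40, 200
A = rng.normal(size=(m, n))
x_true = rng.normal(size=n)
b = A @ x_true + rng.normal(0, 0.1, m)    # noisy observations

# Monte Carlo estimate: average the solutions of random
# row-subsampled least-squares problems.
sols = np.empty((n_draws, n))
for i in range(n_draws):
    rows = rng.choice(m, size=k, replace=False)
    x_s, *_ = np.linalg.lstsq(A[rows], b[rows], rcond=None)
    sols[i] = x_s
x_mc = sols.mean(axis=0)
```

Each subsystem must still be overdetermined (k > n) for its solution to be well defined; the averaging then damps the extra variance the subsampling introduces.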

  4. Fast Dating Using Least-Squares Criteria and Algorithms.

    Science.gov (United States)

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, the r8s version of the Langley-Fitch method, and BEAST).
Using simulated data, we show that their estimation accuracy is similar to that

  5. Fast Dating Using Least-Squares Criteria and Algorithms.

    Science.gov (United States)

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through time. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, r8s version of Langley-Fitch method, and BEAST). 
Using simulated data, we show that their estimation accuracy is similar to that
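
    The record above compares against root-to-tip regression as a baseline rooting and dating method. A minimal sketch of that baseline (all data here are illustrative, not from the paper): regressing each tip's root-to-tip distance on its sampling date gives the substitution rate as the slope and an implied root date as the x-intercept.

```python
# Root-to-tip regression: ordinary least-squares fit of
# distance = rate * (date - t_root) for serially sampled tips.

def root_to_tip_rate(dates, distances):
    """Return (rate, t_root) from an OLS straight-line fit."""
    n = len(dates)
    mx = sum(dates) / n
    my = sum(distances) / n
    sxx = sum((x - mx) ** 2 for x in dates)
    sxy = sum((x - mx) * (y - my) for x, y in zip(dates, distances))
    rate = sxy / sxx                # substitutions per site per year
    t_root = mx - my / rate         # date at which expected distance is zero
    return rate, t_root

# Tips sampled over 10 years under a strict clock of 0.002 subs/site/year,
# root placed at year 2000 (noise-free for clarity).
dates = [2005.0, 2008.0, 2010.0, 2012.0, 2015.0]
dists = [0.002 * (d - 2000.0) for d in dates]
rate, t_root = root_to_tip_rate(dates, dists)
# rate ≈ 0.002, t_root ≈ 2000.0
```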

  6. The moving-least-squares-particle hydrodynamics method (MLSPH)

    Energy Technology Data Exchange (ETDEWEB)

    Dilts, G. [Los Alamos National Lab., NM (United States)

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a colocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (colocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.

  7. Integer least-squares theory for the GNSS compass

    Science.gov (United States)

    Teunissen, P. J. G.

    2010-07-01

    Global navigation satellite system (GNSS) carrier phase integer ambiguity resolution is the key to high-precision positioning and attitude determination. In this contribution, we develop new integer least-squares (ILS) theory for the GNSS compass model, together with efficient integer search strategies. It extends current unconstrained ILS theory to the nonlinearly constrained case, an extension that is particularly suited for precise attitude determination. As opposed to current practice, our method does proper justice to the a priori given information. The nonlinear baseline constraint is fully integrated into the ambiguity objective function, thereby receiving a proper weighting in its minimization and providing guidance for the integer search. Different search strategies are developed to compute exact and approximate solutions of the nonlinear constrained ILS problem. Their applicability depends on the strength of the GNSS model and on the length of the baseline. Two of the presented search strategies, a global and a local one, are based on the use of an ellipsoidal search space. This has the advantage that standard methods can be applied. The global ellipsoidal search strategy is applicable to GNSS models of sufficient strength, while the local ellipsoidal search strategy is applicable to models for which the baseline lengths are not too small. We also develop search strategies for the most challenging case, namely when the curvature of the non-ellipsoidal ambiguity search space needs to be taken into account. Two such strategies are presented, an approximate one and a rigorous, somewhat more complex, one. The approximate one is applicable when the fixed baseline variance matrix is close to diagonal. Both methods make use of a search and shrink strategy. The rigorous solution is efficiently obtained by means of a search and shrink strategy that uses non-quadratic, but easy-to-evaluate, bounding functions of the ambiguity objective function. The theory

  8. Linear least squares compartmental-model-independent parameter identification in PET.

    Science.gov (United States)

    Thie, J A; Smith, G T; Hubner, K F

    1997-02-01

    A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity, and plasma integrals, all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines the parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. Regression and traditional methods are in expected agreement. Monte Carlo simulations evaluate parameter standard deviations due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoiding risks of convergence failures or false solutions in iterative least squares, and providing various visualizations of the uptake process by straight-line graphical displays. Multiparameter model-independent analyses of lesser-understood systems are also made possible.
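
    The linear-in-macroparameters fitting step described above can be sketched with ordinary multiple linear regression solved through the normal equations. This is a generic illustration with made-up regressors, not the paper's PET model:

```python
# Multiple linear regression via the normal equations (X^T X) b = X^T y,
# solved by Gaussian elimination with partial pivoting.

def normal_equations(X, y):
    """Return least-squares coefficients for y ~ X @ b."""
    p = len(X[0])
    A = [[sum(X[k][i] * X[k][j] for k in range(len(X))) for j in range(p)]
         for i in range(p)]
    b = [sum(X[k][i] * y[k] for k in range(len(X))) for i in range(p)]
    for i in range(p):                           # forward elimination
        piv = max(range(i, p), key=lambda r: abs(A[r][i]))
        A[i], A[piv] = A[piv], A[i]
        b[i], b[piv] = b[piv], b[i]
        for r in range(i + 1, p):
            f = A[r][i] / A[i][i]
            for c in range(i, p):
                A[r][c] -= f * A[i][c]
            b[r] -= f * b[i]
    coef = [0.0] * p                             # back substitution
    for i in range(p - 1, -1, -1):
        coef[i] = (b[i] - sum(A[i][c] * coef[c]
                              for c in range(i + 1, p))) / A[i][i]
    return coef

# y = 2*x1 + 3*x2 exactly, so the fit should recover [2, 3].
X = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0], [2.0, 1.0]]
y = [2.0, 3.0, 5.0, 7.0]
coef = normal_equations(X, y)
# coef ≈ [2.0, 3.0]
```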

  9. Frequency domain analysis and synthesis of lumped parameter systems using nonlinear least squares techniques

    Science.gov (United States)

    Hays, J. R.

    1969-01-01

    Lumped-parameter system models are simplified and computationally advantageous representations of linear systems in the frequency domain. A nonlinear least-squares computer program finds the least-squares best estimate for any number of parameters in an arbitrarily complicated model.
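
    As a hedged sketch of what such a nonlinear least-squares program does internally, the following Gauss-Newton loop fits a two-parameter exponential model; the model and data are illustrative, not taken from the report:

```python
# Gauss-Newton nonlinear least squares for y ~ a*exp(b*t):
# linearize around the current (a, b) and solve the 2x2 normal equations
# (J^T J) delta = -J^T r for the parameter step.
import math

def gauss_newton_exp(t, y, a, b, iters=50):
    for _ in range(iters):
        r  = [a * math.exp(b * ti) - yi for ti, yi in zip(t, y)]  # residuals
        Ja = [math.exp(b * ti) for ti in t]          # d model / d a
        Jb = [a * ti * math.exp(b * ti) for ti in t] # d model / d b
        g11 = sum(x * x for x in Ja)
        g12 = sum(x * z for x, z in zip(Ja, Jb))
        g22 = sum(z * z for z in Jb)
        r1 = sum(x * ri for x, ri in zip(Ja, r))
        r2 = sum(z * ri for z, ri in zip(Jb, r))
        det = g11 * g22 - g12 * g12
        a += (-r1 * g22 + r2 * g12) / det
        b += (-r2 * g11 + r1 * g12) / det
    return a, b

t = [0.0, 0.5, 1.0, 1.5, 2.0]
y = [2.0 * math.exp(0.7 * ti) for ti in t]   # noise-free target: a=2, b=0.7
a, b = gauss_newton_exp(t, y, a=1.5, b=0.6)
```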

  10. Error Estimates Derived from the Data for Least-Squares Spline Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2007-06-25

    The use of least-squares fitting by cubic splines for the purpose of noise reduction in measured data is studied. Splines with variable mesh size are considered. The error, the difference between the input signal and its estimate, is divided into two sources: the R-error, which depends only on the noise and increases with decreasing mesh size, and the F-error, which depends only on the signal and decreases with decreasing mesh size. The estimation of both errors as a function of time is demonstrated. The R-error estimation requires knowledge of the statistics of the noise and uses well-known methods. The primary contribution of the paper is a method for estimating the F-error that requires no prior knowledge of the signal except that it has four derivatives. It is calculated from the difference between two different spline fits to the data and is illustrated with Monte Carlo simulations and with an example.

  11. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    Science.gov (United States)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  12. Fitting of two and three variate polynomials from experimental data through the least squares method

    International Nuclear Information System (INIS)

    Obtaining polynomial fittings from observational data in two and three dimensions is an interesting and practical task. Such an arduous problem suggests the development of an automatic code. The main novelty we provide lies in the generalization of the classical least squares method in three FORTRAN 77 programs usable in any sampling problem. Furthermore, we introduce the orthogonal 2D-Legendre function in the fitting process. These FORTRAN 77 programs are equipped with the options to calculate the approximation quality standard indicators, obviously generalized to two and three dimensions (correlation nonlinear factor, confidence intervals, quadratic mean error, and so on). The aim of this paper is to rectify the absence of fitting algorithms for more than one independent variable in mathematical libraries

  13. LSFODF: a generalized nonlinear least-squares fitting program for use with ORELA ODF files

    International Nuclear Information System (INIS)

    The Fortran-10 program LSFODF has been written on the ORELA PDP-10 in order to perform non-linear least-squares curve fitting with user supplied functions and derivatives on data which can be read directly from ORELA-data-format (ODF) files. LSFODF can be used with any user supplied function and derivatives; has its storage requirements specified in this function; has P-search and eta-search capabilities; and can output the input data and fitted curve in an ODF file which then can be manipulated and plotted with the existing ORELA library of ODF programs. A description of the fitting formalism, input instructions, five test cases, and a program listing are given

  14. NEGATIVE NORM LEAST-SQUARES METHODS FOR THE INCOMPRESSIBLE MAGNETOHYDRODYNAMIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Gao Shaoqin; Duan Huoyuan

    2008-01-01

    The purpose of this article is to develop and analyze least-squares approximations for the incompressible magnetohydrodynamic equations. The major advantage of the least-squares finite element method is that it is not subject to the so-called Ladyzhenskaya-Babuska-Brezzi (LBB) condition. The authors employ least-squares functionals which involve a discrete inner product related to the inner product in H-1(Ω).

  15. Comparison of structural and least-squares lines for estimating geologic relations

    Science.gov (United States)

    Williams, G.P.; Troutman, B.M.

    1990-01-01

    Two different goals in fitting straight lines to data are to estimate a "true" linear relation (physical law) and to predict values of the dependent variable with the smallest possible error. Regarding the first goal, a Monte Carlo study indicated that the structural-analysis (SA) method of fitting straight lines to data is superior to the ordinary least-squares (OLS) method for estimating "true" straight-line relations. Number of data points, slope and intercept of the true relation, and variances of the errors associated with the independent (X) and dependent (Y) variables influence the degree of agreement. For example, differences between the two line-fitting methods decrease as error in X becomes small relative to error in Y. Regarding the second goal, predicting the dependent variable, OLS is better than SA. Again, the difference diminishes as X takes on less error relative to Y. With respect to estimation of slope and intercept and prediction of Y, agreement between Monte Carlo results and large-sample theory was very good for sample sizes of 100, and fair to good for sample sizes of 20. The procedures and error measures are illustrated with two geologic examples. © 1990 International Association for Mathematical Geology.
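
    The OLS-versus-structural contrast above can be reproduced in a few lines. Under equal error variances in X and Y, a common structural fit is major-axis (orthogonal) regression; the synthetic data below are illustrative:

```python
# Compare the OLS slope (errors assumed in Y only) with the major-axis
# (orthogonal) slope, which allows error in both variables. Error in X
# attenuates the OLS slope toward zero; the structural fit is designed
# to correct that attenuation.
import math
import random

random.seed(1)
true_slope, true_icpt = 2.0, 1.0
xs_true = [i / 10 for i in range(100)]
xs = [x + random.gauss(0, 0.5) for x in xs_true]   # error in X
ys = [true_icpt + true_slope * x + random.gauss(0, 0.5) for x in xs_true]

n = len(xs)
mx, my = sum(xs) / n, sum(ys) / n
sxx = sum((x - mx) ** 2 for x in xs) / n
syy = sum((y - my) ** 2 for y in ys) / n
sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n

ols_slope = sxy / sxx
# Major-axis slope: direction of the larger eigenvalue of the 2x2
# covariance matrix of (X, Y).
d = syy - sxx
ma_slope = (d + math.sqrt(d * d + 4 * sxy * sxy)) / (2 * sxy)
```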

  16. CHEBYSHEV WEIGHTED NORM LEAST-SQUARES SPECTRAL METHODS FOR THE ELLIPTIC PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Sang Dong Kim; Byeong Chun Shin

    2006-01-01

    We develop and analyze a first-order system least-squares spectral method for the second-order elliptic boundary value problem with variable coefficients. We first analyze the Chebyshev weighted norm least-squares functional defined by the sum of the L2w- and H-1w-norms of the residual equations, and then we replace the negative norm by the discrete negative norm and analyze the discrete Chebyshev weighted least-squares method. Spectral convergence is derived for the proposed method. We also present various numerical experiments. The Legendre weighted least-squares method can be easily developed by following this paper.

  17. Extension of least squares spectral resolution algorithm to high-resolution lipidomics data.

    Science.gov (United States)

    Zeng, Ying-Xu; Mjøs, Svein Are; David, Fabrice P A; Schmid, Adrien W

    2016-03-31

    Lipidomics, which focuses on the global study of molecular lipids in biological systems, has been driven tremendously by technical advances in mass spectrometry (MS) instrumentation, particularly high-resolution MS. This requires powerful computational tools that handle the high-throughput lipidomics data analysis. To address this issue, a novel computational tool has been developed for the analysis of high-resolution MS data, including the data pretreatment, visualization, automated identification, deconvolution and quantification of lipid species. The algorithm features the customized generation of a lipid compound library and mass spectral library, which covers the major lipid classes such as glycerolipids, glycerophospholipids and sphingolipids. Next, the algorithm performs least squares resolution of spectra and chromatograms based on the theoretical isotope distribution of molecular ions, which enables automated identification and quantification of molecular lipid species. Currently, this methodology supports analysis of both high and low resolution MS as well as liquid chromatography-MS (LC-MS) lipidomics data. The flexibility of the methodology allows it to be expanded to support more lipid classes and more data interpretation functions, making it a promising tool in lipidomic data analysis.
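
    The "least squares resolution of spectra" based on theoretical isotope distributions described above reduces, in the simplest case, to expressing an observed spectrum as a linear combination of component patterns and solving for the abundances by least squares. A two-component sketch with made-up isotope patterns (not from the tool itself):

```python
# Least-squares resolution of an observed spectrum into two known
# component patterns: solve the 2x2 normal equations by Cramer's rule.

def resolve_two_components(s1, s2, observed):
    """Return abundances (c1, c2) with observed ~ c1*s1 + c2*s2."""
    g11 = sum(a * a for a in s1)
    g12 = sum(a * b for a, b in zip(s1, s2))
    g22 = sum(b * b for b in s2)
    r1 = sum(a * o for a, o in zip(s1, observed))
    r2 = sum(b * o for b, o in zip(s2, observed))
    det = g11 * g22 - g12 * g12
    return (r1 * g22 - r2 * g12) / det, (r2 * g11 - r1 * g12) / det

# Hypothetical overlapping isotope patterns on a shared m/z grid.
lipid_a = [1.00, 0.55, 0.18, 0.04, 0.00]
lipid_b = [0.00, 1.00, 0.60, 0.20, 0.05]
mixture = [3 * a + 2 * b for a, b in zip(lipid_a, lipid_b)]
c1, c2 = resolve_two_components(lipid_a, lipid_b, mixture)
# recovers c1 ≈ 3, c2 ≈ 2
```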

  18. Multilevel solvers of first-order system least-squares for Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Chen-Yao G. [National Chung Cheng Univ., Chia-Yi (Taiwan, Province of China)

    1996-12-31

    Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined to be the sum of the L{sup 2}-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

  19. Least-squares methods involving the H{sup -1} inner product

    Energy Technology Data Exchange (ETDEWEB)

    Pasciak, J.

    1996-12-31

    Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H{sup -1} norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H{sup -1} inner product.

  20. An Effective Hybrid Artificial Bee Colony Algorithm for Nonnegative Linear Least Squares Problems

    Directory of Open Access Journals (Sweden)

    Xiangyu Kong

    2014-07-01

    An effective hybrid artificial bee colony algorithm is proposed in this paper for nonnegative linear least squares problems. To further improve the performance of the algorithm, an orthogonal initialization method is employed to generate the initial swarm. Furthermore, to balance the exploration and exploitation abilities, a new search mechanism is designed. The performance of this algorithm is verified on 27 benchmark functions and 5 nonnegative linear least squares test problems, and comparisons are made with other swarm intelligence algorithms. Numerical results demonstrate that the proposed algorithm displays high performance on both global optimization problems and nonnegative linear least squares problems.
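
    For reference, the nonnegative linear least squares problem min ||Ax - b|| subject to x >= 0 that the algorithm above targets can also be attacked with a plain projected-gradient iteration; this is a generic sketch on tiny illustrative data, not the paper's method:

```python
# Projected gradient for nonnegative least squares: take a gradient step
# on ||Ax - b||^2 and clip negative components back to zero.

def nnls_projected_gradient(A, b, iters=5000):
    m, n = len(A), len(A[0])
    # step size from a crude upper bound on the largest eigenvalue of A^T A
    lip = sum(A[i][j] ** 2 for i in range(m) for j in range(n))
    step = 1.0 / lip
    x = [0.0] * n
    for _ in range(iters):
        r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
        g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
        x = [max(0.0, x[j] - step * g[j]) for j in range(n)]  # project x >= 0
    return x

A = [[1.0, 1.0], [0.0, 1.0], [1.0, 0.0]]
b = [1.0, -1.0, 1.0]     # unconstrained solution has a negative component
x = nnls_projected_gradient(A, b)
# the nonnegativity constraint clips the second coefficient at zero
```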

  1. A least squares finite element scheme for transonic flow around harmonically oscillating airfoils

    Science.gov (United States)

    Cox, C. L.; Fix, G. J.; Gunzburger, M. D.

    1983-01-01

    The present investigation shows that a finite element scheme with a weighted least squares variational principle is applicable to the problem of transonic flow around a harmonically oscillating airfoil. For the flat plate case, numerical results compare favorably with the exact solution. The numerical results obtained for the transonic problem, for which an exact solution is not known, have the characteristics of known experimental results. It is demonstrated that the performance of the employed numerical method is independent of equation type (elliptic or hyperbolic) and frequency. The weighted least squares principle allows the appropriate modeling of singularities, which is not possible with normal least squares.

  2. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    Science.gov (United States)

    Borodachev, S. M.

    2016-06-01

    The simple derivation of recursive least squares (RLS) method equations is given as special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates application of RLS to multicollinearity problem.
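
    The scalar case of the RLS-as-Kalman-filter derivation mentioned above can be written in a few lines: the coefficient is a constant state, and each observation updates the estimate and its variance through a Kalman gain. A minimal sketch with illustrative data:

```python
# Recursive least squares for a single coefficient theta in y = theta*x + noise,
# written as a Kalman filter with no process noise: theta is the constant state,
# P its variance, and each (x, y) pair supplies one measurement update.

def rls_scalar(xs, ys, noise_var=1.0, prior_var=1e6):
    theta, P = 0.0, prior_var                  # vague prior on the coefficient
    for x, y in zip(xs, ys):
        k = P * x / (noise_var + x * x * P)    # Kalman gain
        theta += k * (y - x * theta)           # innovation update
        P *= 1.0 - k * x                       # variance update
    return theta

xs = [1.0, 2.0, 3.0, 4.0]
ys = [2.0, 4.0, 6.0, 8.0]    # y = 2x exactly
theta = rls_scalar(xs, ys)
# theta ≈ 2 (up to the vague prior)
```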

  3. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Manteuffel, T.A. [Univ. of Colorado, Boulder, CO (United States); Ressel, K.J. [Interdisciplinary Project Center for Supercomputing, Zurich (Switzerland); Starkes, G. [Universitaet Karlsruhe (Germany)

    1996-12-31

    The least-squares finite element framework for the neutron transport equation introduced in earlier work is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.

  4. Iterative least-squares solvers for the Navier-Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Bochev, P. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context, least-squares methods offer significant theoretical and practical advantages in algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.

  5. Methodology and theory for partial least squares applied to functional data

    CERN Document Server

    Delaigle, Aurore; 10.1214/11-AOS958

    2012-01-01

    The partial least squares procedure was originally developed to estimate the slope parameter in multivariate parametric models. More recently it has gained popularity in the functional data literature. There, the partial least squares estimator of slope is either used to construct linear predictive models, or as a tool to project the data onto a one-dimensional quantity that is employed for further statistical analysis. Although the partial least squares approach is often viewed as an attractive alternative to projections onto the principal component basis, its properties are less well known than those of the latter, mainly because of its iterative nature. We develop an explicit formulation of partial least squares for functional data, which leads to insightful results and motivates new theory, demonstrating consistency and establishing convergence rates.
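
    A minimal sketch of a single-component PLS1 fit, in the spirit of the explicit formulation the abstract mentions: the weight vector is the covariance direction X^T y, scores are projections onto it, and the response is regressed on the scores. Toy multivariate data (not functional data), assumed already centered:

```python
# One-component PLS1: weight vector from X^T y, scores t = X w,
# then a simple regression of y on the scores.
import math

def pls1_one_component(X, y):
    n, p = len(X), len(X[0])
    w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
    nw = math.sqrt(sum(v * v for v in w))
    w = [v / nw for v in w]                          # unit weight vector
    t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
    tt = sum(v * v for v in t)
    b = sum(ti * yi for ti, yi in zip(t, y)) / tt    # regress y on the score
    return [b * ti for ti in t]                      # fitted values

# Orthogonal columns with y carried entirely by the first one: a single
# PLS component already fits exactly.
X = [[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]]
y = [2.0, 2.0, -2.0, -2.0]
fitted = pls1_one_component(X, y)
# fitted ≈ y
```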

  6. A window least squares algorithm for statistical noise smoothing of 2D-ACAR data

    International Nuclear Information System (INIS)

    Taking into account a number of basic features of the histograms of two-dimensional angular correlation of the positron annihilation radiation (2D-ACAR), a window least squares technique for statistical noise smoothing is proposed. (author). 15 refs

  7. 8th International Conference on Partial Least Squares and Related Methods

    CERN Document Server

    Vinzi, Vincenzo; Russolillo, Giorgio; Saporta, Gilbert; Trinchera, Laura

    2016-01-01

    This volume presents state-of-the-art theories, new developments, and important applications of Partial Least Squares (PLS) methods. The text begins with the invited communications of current leaders in the field who cover the history of PLS, an overview of methodological issues, and recent advances in regression and multi-block approaches. The rest of the volume comprises selected, reviewed contributions from the 8th International Conference on Partial Least Squares and Related Methods held in Paris, France, on 26-28 May, 2014. They are organized in four coherent sections: 1) new developments in genomics and brain imaging, 2) new and alternative methods for multi-table and path analysis, 3) advances in partial least squares regression (PLSR), and 4) partial least squares path modeling (PLS-PM) breakthroughs and applications. PLS methods are very versatile methods that are now used in areas as diverse as engineering, life science, sociology, psychology, brain imaging, genomics, and business among both academics ...

  8. LEAST-SQUARES MIXED FINITE ELEMENT METHODS FOR NONLINEAR PARABOLIC PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Dan-ping Yang

    2002-01-01

    Two least-squares mixed finite element schemes are formulated to solve the initial-boundary value problem of a nonlinear parabolic partial differential equation, and the convergence of these schemes is analyzed.

  9. SUPERCONVERGENCE OF LEAST-SQUARES MIXED FINITE ELEMENTS FOR ELLIPTIC PROBLEMS ON TRIANGULATION

    Institute of Scientific and Technical Information of China (English)

    陈艳萍; 杨菊娥

    2003-01-01

    In this paper, we present the least-squares mixed finite element method and investigate superconvergence phenomena for second-order elliptic boundary-value problems over triangulations. On the basis of the L2-projection and some mixed finite element projections, we obtain the superconvergence result of least-squares mixed finite element solutions. This error estimate indicates an accuracy of O(h{sup 3/2}) if the lowest-order Raviart-Thomas elements are employed.

  10. Consistency of the structured total least squares estimator in a multivariate errors-in-variables model

    OpenAIRE

    Kukush, A.; I. Markovsky; Van Huffel, S.

    2005-01-01

    The structured total least squares estimator, defined via a constrained optimization problem, is a generalization of the total least squares estimator when the data matrix and the applied correction satisfy given structural constraints. In the paper, an affine structure with additional assumptions is considered. In particular, Toeplitz and Hankel structured, noise free and unstructured blocks are allowed simultaneously in the augmented data matrix. An equivalent optimization problem is derive...

  11. ON STABLE PERTURBATIONS OF THE STIFFLY WEIGHTED PSEUDOINVERSE AND WEIGHTED LEAST SQUARES PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Mu-sheng Wei

    2005-01-01

    In this paper we study perturbations of the stiffly weighted pseudoinverse (W^(1/2)A)^+ W^(1/2) and the related stiffly weighted least squares problem, where both the matrices A and W are given, with W positive diagonal and severely stiff. We show that the perturbations to the stiffly weighted pseudoinverse and the related stiffly weighted least squares problem are stable if and only if the perturbed matrices Â = A + δA satisfy several row-rank-preserving conditions.
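
    A small numeric illustration of the stiffly weighted least squares problem min ||W^(1/2)(Ax - b)||: rows with weights many orders of magnitude larger than the rest are enforced almost exactly. Two unknowns, solved via the weighted normal equations with Cramer's rule; the data are illustrative:

```python
# Weighted least squares for a two-column A: solve (A^T W A) x = A^T W b
# by Cramer's rule on the 2x2 weighted normal equations.

def weighted_lsq_2col(A, b, w):
    g11 = sum(wi * a[0] * a[0] for wi, a in zip(w, A))
    g12 = sum(wi * a[0] * a[1] for wi, a in zip(w, A))
    g22 = sum(wi * a[1] * a[1] for wi, a in zip(w, A))
    r1 = sum(wi * a[0] * bi for wi, a, bi in zip(w, A, b))
    r2 = sum(wi * a[1] * bi for wi, a, bi in zip(w, A, b))
    det = g11 * g22 - g12 * g12
    return [(r1 * g22 - r2 * g12) / det, (r2 * g11 - r1 * g12) / det]

A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
b = [1.0, 2.0, 10.0]         # third equation is inconsistent with the others
w = [1.0, 1.0, 1e8]          # ...but carries a stiff weight
x = weighted_lsq_2col(A, b, w)
# x[0] + x[1] ≈ 10: the stiffly weighted equation dominates the fit
```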

  12. Solving method of generalized nonlinear dynamic least squares for data processing in building of digital mine

    Institute of Scientific and Technical Information of China (English)

    TAO Hua-xue (陶华学); GUO Jin-yun (郭金运)

    2003-01-01

    Data are essential to building the digital mine. They come from many sources, are of different types, and have different temporal states. Relations between one class of data and another, or between data and unknown parameters, are often nonlinear. The unknown parameters may be non-random or random, and the random parameters often vary dynamically with time. It is therefore neither accurate nor reliable to process such data with the classical least squares method or the common nonlinear least squares method. A generalized nonlinear dynamic least squares method for processing data in building the digital mine is accordingly put forward, together with the corresponding mathematical model. The generalized nonlinear least squares problem is more complex than the common nonlinear least squares problem, and its solution is harder to obtain because the dimensions of the data and parameters are larger. A new solution model and method are therefore proposed: the problem can be converted into two sub-problems, each with a single variable, so that a complex problem can be separated and then solved. The dimension of the unknown parameters can thus be halved, which simplifies the original high-dimensional equations. The method lessens the computational load and opens up a new way to process data in building the digital mine, which have many sources, different types, and many temporal states.

  13. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    Science.gov (United States)

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-03-01

    Two accurate, sensitive, and selective stability-indicating methods are developed and validated for simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure forms or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models that are subjected to a comparative study through handling UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to the reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it keeps the qualitative spectral information of the classical least-squares algorithm for the analyzed components. PMID:26987554

  14. The possibilities of least-squares migration of internally scattered seismic energy

    KAUST Repository

    Aldawood, Ali

    2015-05-26

    Approximate images of the earth’s subsurface structures are usually obtained by migrating surface seismic data. Least-squares migration, under the single-scattering assumption, is used as an iterative linearized inversion scheme to suppress migration artifacts, deconvolve the source signature, mitigate the acquisition fingerprint, and enhance the spatial resolution of migrated images. The problem with least-squares migration of primaries, however, is that it may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults or salt flanks. To alleviate this problem, we adopted a linearized inversion framework to migrate internally scattered energy. We apply the least-squares migration of first-order internal multiples to image subsurface vertical fault planes. Tests on synthetic data demonstrated the ability of the proposed method to resolve vertical fault planes, which are poorly illuminated by the least-squares migration of primaries only. The proposed scheme is robust in the presence of white Gaussian observational noise and in the case of imaging the fault planes using inaccurate migration velocities. Our results suggested that the proposed least-squares imaging, under the double-scattering assumption, still retrieved the vertical fault planes when imaging the scattered data despite a slight defocusing of these events due to the presence of noise or velocity errors.
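The iterative linearized inversion idea behind least-squares migration can be sketched on a toy problem: a banded "blurring" matrix stands in for the Born modeling operator, and an iterative least-squares solver (LSQR) recovers a sharper model than the plain adjoint. The operator and reflectivity below are illustrative only, not a migration operator:

```python
import numpy as np
from scipy.sparse import diags
from scipy.sparse.linalg import lsqr

# Toy linearized forward operator A: a banded "blurring" matrix standing in for
# Born modeling. Least-squares inversion then deblurs, analogous to how
# least-squares migration deconvolves the source signature and suppresses artifacts.
n = 200
A = diags([0.5, 1.5, 0.5], offsets=[-1, 0, 1], shape=(n, n)).tocsr()

x_true = np.zeros(n)
x_true[60] = 1.0          # point "reflectors"
x_true[140] = -0.7
b = A @ x_true            # synthetic data

x_adj = A.T @ b           # adjoint ("migration") image: blurred and rescaled
x_inv = lsqr(A, b, atol=1e-12, btol=1e-12, iter_lim=1000)[0]   # iterative LS inversion

print(np.linalg.norm(x_adj - x_true), np.linalg.norm(x_inv - x_true))
```

The adjoint image carries a large imprint of the operator, while the iterative least-squares solution recovers the spikes almost exactly; this is the resolution gain the abstract refers to, here in one dimension.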

  15. Iterative weighted partial spline least squares estimation in semiparametric modeling of longitudinal data

    Institute of Scientific and Technical Information of China (English)

    孙孝前; 尤进红

    2003-01-01

    In this paper we consider the estimation problem for a semiparametric regression model when the data are longitudinal. An iterative weighted partial spline least squares estimator (IWPSLSE) for the parametric component is proposed, which is more efficient, in the sense of asymptotic variance, than the weighted partial spline least squares estimator (WPSLSE) with weights constructed from the within-group partial spline least squares residuals. The asymptotic normality of this IWPSLSE is established. An adaptive procedure is presented which ensures that the iterative process stops after a finite number of iterations and produces an estimator asymptotically equivalent to the best estimator that can be obtained by the iterative procedure. These results generalize those for the heteroscedastic linear model to the case of semiparametric regression.

  16. An Improved Moving Least Squares Method for Curve and Surface Fitting

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2013-01-01

    The moving least squares (MLS) method has been developed for the fitting of measured data contaminated with random error. The local approximants of the MLS method take only the error of the dependent variable into account, whereas the independent variables of measured data also contain random error. Considering the errors of all variables, this paper presents an improved moving least squares (IMLS) method to generate curves and surfaces for the measured data. In the IMLS method, total least squares (TLS) with a parameter λ based on singular value decomposition is introduced into the local approximants. A procedure is developed to determine the parameter λ. Numerical examples for curve and surface fitting are given to demonstrate the performance of the IMLS method.
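A minimal sketch of the classical MLS local approximant that the paper improves upon: at each evaluation point a local polynomial is fitted by weighted least squares with a compactly concentrated weight. The IMLS extension (total least squares with the parameter λ) is not reproduced here; the bandwidth and test function are arbitrary choices:

```python
import numpy as np

def mls_fit(x_data, y_data, x_eval, h=0.1, degree=2):
    """Classical moving least squares in 1-D: at each evaluation point x0, fit a
    local polynomial by weighted least squares with a Gaussian weight of width h."""
    y_fit = np.empty_like(x_eval)
    for k, x0 in enumerate(x_eval):
        w = np.exp(-((x_data - x0) / h) ** 2)        # Gaussian weights around x0
        V = np.vander(x_data - x0, degree + 1)       # local polynomial basis in (x - x0)
        sw = np.sqrt(w)
        coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * y_data, rcond=None)
        y_fit[k] = coef[-1]   # constant term = value of the local polynomial at x0
    return y_fit

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 50)
y = np.sin(2 * np.pi * x) + 0.05 * rng.standard_normal(50)   # noisy samples
xe = np.linspace(0.1, 0.9, 20)
ye = mls_fit(x, y, xe)
print(np.max(np.abs(ye - np.sin(2 * np.pi * xe))))   # error near the noise level
```

Note that only the y-errors enter the weighted normal equations here, which is exactly the limitation the IMLS variant addresses by treating errors in x as well.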

  17. Meshless Least-Squares Method for Solving the Steady-State Heat Conduction Equation

    Institute of Scientific and Technical Information of China (English)

    LIU Yan; ZHANG Xiong; LU Mingwan

    2005-01-01

    The meshless weighted least-squares (MWLS) method is a pure meshless method that combines the moving least-squares approximation scheme with least-squares discretization. Previous studies of the MWLS method for elastostatics and wave propagation problems have shown that it possesses several advantages, such as high accuracy, a high convergence rate, good stability, and high computational efficiency. In this paper, the MWLS method is extended to heat conduction problems. The MWLS computational parameters are chosen based on a thorough numerical study of one-dimensional problems. Several two-dimensional examples show that the MWLS method is much faster than the element-free Galerkin method (EFGM), while its accuracy is close to, or even better than, that of the EFGM. These numerical results demonstrate that the MWLS method has good potential for numerical analyses of heat transfer problems.

  18. On the equivalence of Kalman filtering and least-squares estimation

    Science.gov (United States)

    Mysen, E.

    2016-07-01

    The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.
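The algebraic equivalence can be checked in a few lines for the simplest case, a constant scalar state: a Kalman filter with a diffuse prior and no process noise reproduces the batch least-squares (sample-mean) estimate. This is an independent toy check, not the paper's generalized derivation for time-variable-memory processes:

```python
import numpy as np

rng = np.random.default_rng(2)
x_true, r = 3.0, 0.5 ** 2                     # constant state, measurement variance
y = x_true + 0.5 * rng.standard_normal(100)   # noisy measurements

# Scalar Kalman filter for a static state (no process noise), started from a
# diffuse prior: algebraically this is recursive least squares.
x_hat, P = 0.0, 1e12
for yk in y:
    K = P / (P + r)                 # Kalman gain
    x_hat = x_hat + K * (yk - x_hat)
    P = (1.0 - K) * P               # a posteriori variance

x_ls = y.mean()                     # batch least-squares estimate
print(x_hat, x_ls)                  # the two agree to high precision
```

The diffuse prior (large initial P) is what removes the influence of the arbitrary starting value, so the recursion ends at the same point as the one-shot normal-equation solution.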

  19. Least Squares Second Order Radiative Transfer Equation and Meshless Method Solution

    CERN Document Server

    Zhao, J M; Liu, L H

    2011-01-01

    To overcome the singularity problem of the SORTE [Numer. Heat Transfer B 51 (2007) 391-409] in dealing with inhomogeneous media where some positions have a very small or zero extinction coefficient, a new second order formulation of the radiative transfer equation with the characteristics of a least squares approach (termed here the Least squares Second Order Radiative Transfer Equation, LSORTE) is proposed. A diffusion (second order) term is naturally introduced in the LSORTE, which provides much better numerical properties than the classic first order radiative transfer equation (RTE). Discretization of the LSORTE by a weighted residual approach with a standard Galerkin scheme leads to a formulation exactly the same as the least squares discretization of the RTE. A problem with the second order form of the RTE in dealing with an inhomogeneous medium with a discontinuous extinction coefficient distribution is observed, and an amendment scheme is proposed. The collocation meshless methods based on the moving least sq...

  20. Sensitivity analysis on chaotic dynamical system by Non-Intrusive Least Square Shadowing (NILSS)

    CERN Document Server

    Ni, Angxiu

    2016-01-01

    This paper develops the tangent Non-Intrusive Least Squares Shadowing (NILSS) method, which computes sensitivities for chaotic dynamical systems. In NILSS, a tangent solution is represented as a linear combination of an inhomogeneous tangent solution and some homogeneous tangent solutions. A least squares problem is then solved under this new representation. As a result, this variant is easier to implement with existing solvers. For chaotic systems with many degrees of freedom but low dimensional attractors, NILSS has a low computational cost. NILSS is applied to two chaotic systems: the Lorenz 63 system and a CFD simulation of a backward-facing step. The results show that NILSS computes the correct derivative at a lower cost than the conventional Least Squares Shadowing method and the conventional finite difference method.

  1. A note on implementation of decaying product correlation structures for quasi-least squares.

    Science.gov (United States)

    Shults, Justine; Guerra, Matthew W

    2014-08-30

    This note implements an unstructured decaying product matrix via the quasi-least squares approach for estimation of the correlation parameters in the framework of generalized estimating equations. The structure we consider is fairly general without requiring the large number of parameters that are involved in a fully unstructured matrix. It is straightforward to show that the quasi-least squares estimators of the correlation parameters yield feasible values for the unstructured decaying product structure. Furthermore, subject to conditions that are easily checked, the quasi-least squares estimators are valid for longitudinal Bernoulli data. We demonstrate implementation of the structure in a longitudinal clinical trial with both a continuous and binary outcome variable.

  2. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithm development, robustness issues, and tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in the specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available; it was the first such library to offer complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver, called pARMS (version 3). As part of this we have tested the code in complex settings, including the solution of Maxwell and Helmholtz equations and a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the

  3. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

    Science.gov (United States)

    Kermarrec, Gaël; Schön, Steffen

    2016-09-01

    Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence, for a certain class of polynomial regressions, between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimator, an alternative way to take correlations into account via a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition, which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the row sums of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator in terms of the estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning, or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences, compared with the solutions computed with the commonly used diagonal elevation-dependent model, was reached for the GPS relative positioning with double differences, single point positioning, as well as precise point positioning cases. The estimate differences between the equivalent and classical model with fully populated covariance matrix were below the millimetre level for all simulated GPS cases, and at the sub-millimetre level for the relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation
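The core equivalence for the mean estimator is easy to verify numerically: with a symmetric weighting matrix W, the diagonal matrix whose entries are the row sums of W gives exactly the same weighted mean. A small sketch with an assumed AR(1)-type correlation model (the value 0.8 and series length are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(3)
n = 50

# Fully populated covariance with AR(1)-type temporal correlation,
# as often assumed for GPS observation time series.
rho = 0.8
Sigma = rho ** np.abs(np.subtract.outer(np.arange(n), np.arange(n)))
W = np.linalg.inv(Sigma)                                  # weighting matrix
y = 5.0 + np.linalg.cholesky(Sigma) @ rng.standard_normal(n)   # correlated noise around mean 5

one = np.ones(n)
# GLS mean estimator with the full weighting matrix:
mu_gls = (one @ W @ y) / (one @ W @ one)
# Equivalent diagonal weighting: each diagonal entry is the row sum of W.
d = W.sum(axis=1)
mu_diag = (d @ y) / d.sum()

print(mu_gls, mu_diag)   # identical for the mean estimator
```

The identity follows from symmetry of W: the quadratic forms 1ᵀWy and 1ᵀW1 only ever see the row sums of W, so replacing W by diag(row sums) changes nothing for this design matrix.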

  4. Efectivity of Additive Spline for Partial Least Square Method in Regression Model Estimation

    Directory of Open Access Journals (Sweden)

    Ahmad Bilfarsah

    2005-04-01

    Additive spline partial least squares (ASPLS) is a generalization of the partial least squares (PLS) method. The ASPLS method can accommodate nonlinearity and multicollinearity among the predictor variables. In principle, the ASPLS approach is characterized by two ideas: the first is to use parametric transformations of the predictors by spline functions; the second is to make the ASPLS components mutually uncorrelated, to preserve the properties of the linear PLS components. The performance of ASPLS compared with other PLS methods is illustrated with a fishery economics application, in particular tuna fish production.

  5. The Jackknife Interval Estimation of Parametersin Partial Least Squares Regression Modelfor Poverty Data Analysis

    Directory of Open Access Journals (Sweden)

    Pudji Ismartini

    2010-08-01

    One of the major problems facing data modelling in the social sciences is multicollinearity. Multicollinearity can have a significant impact on the quality and stability of the fitted regression model. Common classical regression techniques using the least squares estimate are highly sensitive to the multicollinearity problem. In such problem areas, Partial Least Squares Regression (PLSR) is a useful and flexible tool for statistical model building; however, PLSR yields only point estimates. This paper constructs interval estimates for the PLSR regression parameters by applying the jackknife technique to poverty data. A SAS macro program is developed to obtain the jackknife interval estimator for PLSR.
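The jackknife interval idea can be sketched with ordinary least squares standing in for PLSR (the paper applies it to PLSR via a SAS macro; the data, the slope-only focus, and the 1.96 normal quantile below are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 40
x = rng.uniform(0.0, 1.0, n)
y = 1.0 + 2.0 * x + 0.1 * rng.standard_normal(n)   # true intercept 1, slope 2

X = np.column_stack([np.ones(n), x])

def slope(X, y):
    """Slope coefficient from a least-squares fit (stand-in for a PLSR coefficient)."""
    return np.linalg.lstsq(X, y, rcond=None)[0][1]

beta = slope(X, y)

# Leave-one-out jackknife replicates of the estimate.
reps = np.array([slope(np.delete(X, i, axis=0), np.delete(y, i)) for i in range(n)])
se = np.sqrt((n - 1) / n * np.sum((reps - reps.mean()) ** 2))   # jackknife standard error
lo, hi = beta - 1.96 * se, beta + 1.96 * se
print(f"slope = {beta:.3f}, 95% jackknife interval = ({lo:.3f}, {hi:.3f})")
```

Replacing the `slope` helper with a PLSR coefficient estimator would give the paper's procedure in outline: the jackknife needs only repeated refits on leave-one-out samples, which is what makes it attractive when no closed-form variance is available.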

  6. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    OpenAIRE

    Tian Wang; Jie Chen; Yi Zhou; Hichem Snoussi

    2013-01-01

    The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samp...

  7. The structured total least squares algorithm research for passive location based on angle information

    Institute of Scientific and Technical Information of China (English)

    WANG Ding; ZHANG Li; WU Ying

    2009-01-01

    Based on the constrained total least squares (CTLS) passive location algorithm with bearing-only measurements, in this paper the same passive location problem is transformed into a structured total least squares (STLS) problem. The solution of the STLS problem for passive location can be obtained using the inverse iteration method. It is also shown that the STLS and CTLS algorithms have the same location mean squared error under certain conditions. Finally, the article presents a location and tracking algorithm for moving targets obtained by combining the STLS location algorithm with a Kalman filter (KF). The efficiency and superiority of the proposed algorithms are confirmed by computer simulation results.

  8. Constrained total least squares algorithm for passive location based on bearing-only measurements

    Institute of Scientific and Technical Information of China (English)

    WANG Ding; ZHANG Li; WU Ying

    2007-01-01

    The constrained total least squares algorithm for passive location based on bearing-only measurements is presented in this paper. In this algorithm, the nonlinear measurement equations are first transformed into linear equations, and the effect of the measurement noise on the linear equation coefficients is analyzed; the passive location problem can therefore be treated as a constrained total least squares problem. This problem is then converted into an unconstrained optimization problem, which can be solved by the Newton algorithm, and finally an analysis of the location accuracy is given. The simulation results show that the new algorithm is effective and practical.

  9. Galerkin-Petrov least squares mixed element method for stationary incompressible magnetohydrodynamics

    Institute of Scientific and Technical Information of China (English)

    LUO Zhen-dong; MAO Yun-kui; ZHU Jiang

    2007-01-01

    The Galerkin-Petrov least squares method is combined with the mixed finite element method to deal with the stationary, incompressible magnetohydrodynamics system of equations with viscosity. A Galerkin-Petrov least squares mixed finite element formulation for the stationary incompressible magnetohydrodynamics equations is presented, and the existence and error estimates of its solution are derived. With this method, the combination of the mixed finite element spaces does not need to satisfy the discrete Babuška-Brezzi stability conditions, so the mixed finite element spaces can be chosen arbitrarily and error estimates with optimal order can be obtained.

  10. Hierarchical Least Squares Identification and Its Convergence for Large Scale Multivariable Systems

    Institute of Scientific and Technical Information of China (English)

    丁锋; 丁韬

    2002-01-01

    The recursive least squares identification algorithm (RLS) for large scale multivariable systems requires a large amount of calculation; the RLS algorithm is therefore difficult to implement on a computer. The computational load of estimation algorithms can be reduced by using the hierarchical least squares identification algorithm (HLS) for large scale multivariable systems. The convergence analysis using the Martingale Convergence Theorem indicates that the parameter estimation error (PEE) given by the HLS algorithm is uniformly bounded without a persistent excitation signal, and that the PEE consistently converges to zero under the persistent excitation condition. The HLS algorithm has a much lower computational load than the RLS algorithm.

  11. A Least-Squares Solution to Nonlinear Steady-State Multi-Dimensional IHCP

    Institute of Scientific and Technical Information of China (English)

    1996-01-01

    In this paper, the least-squares method is used to solve the Inverse Heat Conduction Problem (IHCP): determining the space-wise variation of the unknown boundary condition on the inner surface of a helically coiled tube with fluid flow inside, electrical heating, and insulation outside. The sensitivity coefficients are analyzed to give a rational distribution of the thermocouples. The results demonstrate that the method effectively extracts information about the unknown boundary condition of the heat conduction problem from the experimental measurements. The results also show that the least-squares method converges very quickly.

  12. Analysis of total least squares in estimating the parameters of a mortar trajectory

    Energy Technology Data Exchange (ETDEWEB)

    Lau, D.L.; Ng, L.C.

    1994-12-01

    Least Squares (LS) is a method of curve fitting used with the assumption that error exists in the observation vector. The method of Total Least Squares (TLS) is more useful in cases where there is error in the data matrix as well as the observation vector. This paper describes work done in comparing the LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show that TLS provided modestly (about 10%) improved results over the LS method.
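The LS/TLS distinction can be sketched for a straight-line fit with errors in both the data matrix and the observation vector; the classical SVD construction of the TLS solution takes the right singular vector of the augmented matrix [A | b] associated with the smallest singular value. The data and noise levels below are invented for the example, not the mortar-trajectory data:

```python
import numpy as np

rng = np.random.default_rng(5)
n = 200
t = np.linspace(0.0, 10.0, n)
x_true = np.array([2.0, -1.0])                 # true slope and intercept

A_clean = np.column_stack([t, np.ones(n)])
# Errors in BOTH the data matrix and the observation vector (the TLS setting).
A = A_clean + 0.05 * rng.standard_normal((n, 2))
b = A_clean @ x_true + 0.05 * rng.standard_normal(n)

# Ordinary LS: errors assumed only in b.
x_ls = np.linalg.lstsq(A, b, rcond=None)[0]

# TLS via SVD of [A | b]: solution from the right singular vector
# belonging to the smallest singular value.
_, _, Vt = np.linalg.svd(np.column_stack([A, b]))
v = Vt[-1]
x_tls = -v[:-1] / v[-1]

print(x_ls, x_tls)   # both close to (2, -1) at this noise level
```

With small, equal noise in all entries the two estimates nearly coincide; the TLS advantage the report measures appears as the data-matrix noise grows relative to the observation noise.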

  13. Robust analysis of trends in noisy tokamak confinement data using geodesic least squares regression

    Science.gov (United States)

    Verdoolaege, G.; Shabbir, A.; Hornung, G.

    2016-11-01

    Regression analysis is a very common activity in fusion science for unveiling trends and parametric dependencies, but it can be a difficult matter. We have recently developed the method of geodesic least squares (GLS) regression that is able to handle errors in all variables, is robust against data outliers and uncertainty in the regression model, and can be used with arbitrary distribution models and regression functions. We here report on first results of application of GLS to estimation of the multi-machine scaling law for the energy confinement time in tokamaks, demonstrating improved consistency of the GLS results compared to standard least squares.

  14. Genfit: a general least squares curve fitting program for mini-computer

    International Nuclear Information System (INIS)

    Genfit is a basic data processing program suitable for small on-line computers. In essence, the program solves the curve fitting problem using the nonlinear least squares method. A data set consisting of a series of points in the X-Y plane is fitted to a selected function whose parameters are adjusted to give the best fit in the least squares sense. Convergence may be accelerated by modifying (or interchanging) the values of the constant parameters in accordance with the results of previous calculations
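A Genfit-style nonlinear least-squares fit can be sketched with SciPy's `curve_fit` (the model function, data, and starting guesses below are illustrative assumptions, not Genfit's actual interface):

```python
import numpy as np
from scipy.optimize import curve_fit

# Fit a selected function to X-Y data in the least-squares sense by adjusting
# its parameters; here a decaying exponential is the chosen function.
def model(x, amplitude, decay):
    return amplitude * np.exp(-decay * x)

rng = np.random.default_rng(6)
x = np.linspace(0.0, 5.0, 60)
y = model(x, 4.0, 1.3) + 0.02 * rng.standard_normal(60)   # synthetic noisy data

popt, pcov = curve_fit(model, x, y, p0=[1.0, 1.0])   # p0: starting guesses
print(popt)   # close to the true values (4.0, 1.3)
```

The starting guesses play the role of Genfit's adjustable constant parameters: a better initial point accelerates (or enables) convergence of the underlying iteration.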

  15. Iterative Weighted Semiparametric Least Squares Estimation in Repeated Measurement Partially Linear Regression Models

    Institute of Scientific and Technical Information of China (English)

    Ge-mai Chen; Jin-hong You

    2005-01-01

    Consider a repeated measurement partially linear regression model with an unknown parameter vector β. Based on the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results are generalizations of those in [2] to the case of semiparametric regressions.

  16. Unknown parameter's variance-covariance propagation and calculation in generalized nonlinear least squares problem

    Institute of Scientific and Technical Information of China (English)

    TAO Hua-xue; GUO Jin-yun

    2005-01-01

    The propagation and calculation of the variance-covariance of the unknown parameters in generalized nonlinear least squares has remained an open problem, not treated in the existing literature. A variance-covariance propagation formula for the unknown parameters, taking second-power terms into account, is derived and used to evaluate the accuracy of the unknown parameter estimators in the generalized nonlinear least squares problem. It is a new variance-covariance formula and opens up a new way to evaluate accuracy when processing data that are multi-source, multi-dimensional, multi-type, multi-time-state, of different accuracy, and nonlinear.

  17. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    Science.gov (United States)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models, based on fundamental considerations, that can be used to predict under what conditions voids form, the size of the voids, and the subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least squares method and the nonlinear neural network predicted identical results.

  18. Explicit least squares system parameter identification for exact differential input/output models

    Science.gov (United States)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.

  19. Seismic reliability assessment of RC structures including soil–structure interaction using wavelet weighted least squares support vector machine

    International Nuclear Information System (INIS)

    An efficient metamodeling framework in conjunction with Monte-Carlo Simulation (MCS) is introduced to reduce the computational cost in seismic reliability assessment of existing RC structures. To achieve this purpose, the metamodel is designed by combining the weighted least squares support vector machine (WLS-SVM) with a wavelet kernel function, called the wavelet weighted least squares support vector machine (WWLS-SVM). In this study, the seismic reliability assessment of existing RC structures with consideration of soil–structure interaction (SSI) effects is investigated in accordance with Performance-Based Design (PBD). This study aims to incorporate the acceptable performance levels of PBD into reliability theory, comparing the obtained annual probability of non-performance with the target values for each performance level. The MCS method, as the most reliable method, is utilized to estimate the annual probability of failure associated with a given performance level. In the WWLS-SVM-based MCS, the structural seismic responses are accurately predicted by WWLS-SVM to reduce the computational cost. To show the efficiency and robustness of the proposed metamodel, two RC structures are studied. Numerical results demonstrate the efficiency and computational advantages of the proposed metamodel for the seismic reliability assessment of structures. Furthermore, the consideration of the SSI effects in the seismic reliability assessment of existing RC structures is compared to the fixed base model. The results show that SSI has a significant influence on the seismic reliability assessment of structures.

  20. LEAST-SQUARES MIXED FINITE ELEMENT METHOD FOR SADDLE-POINT PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Lie-heng Wang; Huo-yuan Duan

    2000-01-01

    In this paper, a least-squares mixed finite element method for the solution of the primal saddle-point problem is developed. It is proved that the approximate problem satisfies consistent ellipticity in the conforming finite element spaces, with only the discrete BB-condition needed for a smaller auxiliary problem. The abstract error estimate is derived.

  1. Memory and computation reduction for least-square channel estimation of mobile OFDM systems

    NARCIS (Netherlands)

    Xu, T.; Tang, Z.; Lu, H.; Leuken, R van

    2012-01-01

    Mobile OFDM refers to OFDM systems with fast moving transceivers, in contrast to traditional OFDM systems whose transceivers are stationary or have a low velocity. In this paper, we use Basis Expansion Models (BEM) to model the time-variation of channels, based on which two least-squares (LS) channe

  2. A Progress Report on Numerical Solutions of Least Squares Adjustment in GNU Project Gama

    Directory of Open Access Journals (Sweden)

    A. Čepek

    2005-01-01

    The GNU project Gama for adjustment of geodetic networks is presented. Numerical solution of Least Squares Adjustment in the project is based on Singular Value Decomposition (SVD) and the General Orthogonalization Algorithm (GSO). Both algorithms enable the solution of singular systems resulting from the adjustment of free geodetic networks.
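The reason SVD-based solvers suit free-network adjustment can be sketched on a tiny rank-deficient system: observations of coordinate differences leave the datum (a common shift) undetermined, and the SVD pseudoinverse returns the minimum-norm solution. The 3-point "network" below is an invented example, not Gama's actual formulation:

```python
import numpy as np

# Rank-deficient least-squares system, as arises when adjusting a free geodetic
# network (datum defect): only differences between the 3 unknowns are observed.
A = np.array([[1.0, -1.0,  0.0],
              [0.0,  1.0, -1.0],
              [1.0,  0.0, -1.0]])     # rank 2: adding a constant to x changes nothing
b = np.array([0.5, 0.3, 0.8])

U, s, Vt = np.linalg.svd(A)
tol = s.max() * 1e-12
s_inv = np.array([1.0 / si if si > tol else 0.0 for si in s])  # truncate tiny singular values
x = Vt.T @ (s_inv * (U.T @ b))        # pseudoinverse (minimum-norm) solution

print(x, x.sum())   # residual-minimizing solution with zero mean (no datum shift)
```

Truncating the zero singular values is exactly how SVD "enables solution of singular systems": among all least-squares solutions, it picks the one with no component in the null space, here the one whose coordinates sum to zero.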

  3. On the convergence of the partial least squares path modeling algorithm

    NARCIS (Netherlands)

    Henseler, Jörg

    2010-01-01

    This paper adds to an important aspect of Partial Least Squares (PLS) path modeling, namely the convergence of the iterative PLS path modeling algorithm. Whilst conventional wisdom says that PLS always converges in practice, there is no formal proof for path models with more than two blocks of manif

  4. Solving sparse linear least squares problems on some supercomputers by using large dense blocks

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Ostromsky, T; Sameh, A;

    1997-01-01

    the matrix so that dense blocks can be constructed and treated with some standard software, say LAPACK or NAG. These ideas are implemented for linear least-squares problems. The rectangular matrices (that appear in such problems) are decomposed by an orthogonal method. Results obtained on a CRAY C92A...

  5. Representing Topography with Second-Degree Bivariate Polynomial Functions Fitted by Least Squares.

    Science.gov (United States)

    Neuman, Arthur Edward

    1987-01-01

    There is a need for abstracting topography other than for mapping purposes. The method employed should be simple and available to non-specialists, thereby ruling out spline representations. Generalizing from univariate first-degree least squares and from multiple regression, this article introduces bivariate polynomial functions fitted by least…

  6. Unbiased Invariant Least Squares Estimation in A Generalized Growth Curve Model

    OpenAIRE

    Wu, Xiaoyong; Liang, Hua; Zou, Guohua

    2009-01-01

    This paper is concerned with a generalized growth curve model. We derive the unbiased invariant least squares estimators of the linear functions of variance-covariance matrix of disturbances. Under the minimum variance criterion, we obtain the necessary and sufficient conditions of the proposed estimators to be optimal. Simulation studies show that the proposed estimators perform well.

  7. Mis-parametrization subsets for a penalized least squares model selection

    OpenAIRE

    Guyon, Xavier; Hardouin, Cécile

    2011-01-01

    When identifying a model by a penalized minimum contrast procedure, we give a description of the over- and under-fitting parametrization subsets for a least squares contrast. This allows one to determine an accurate sequence of penalization rates ensuring good identification. We present applications to the identification of the covariance of a general time series, and to the variogram identification of a geostatistical model.

  8. APPLICATION OF PARTIAL LEAST SQUARES REGRESSION FOR AUDIO-VISUAL SPEECH PROCESSING AND MODELING

    Directory of Open Access Journals (Sweden)

    A. L. Oleinik

    2015-09-01

    Full Text Available Subject of Research. The paper deals with the problem of lip region image reconstruction from the speech signal by means of Partial Least Squares regression. Such problems arise in connection with the development of audio-visual speech processing methods. Audio-visual speech consists of acoustic and visual components (called modalities). Applications of audio-visual speech processing methods include joint modeling of voice and lip movement dynamics, synchronization of audio and video streams, emotion recognition, and liveness detection. Method. Partial Least Squares regression was applied to solve the posed problem. This method extracts components of the initial data with high covariance. These components are used to build the regression model. The advantage of this approach lies in the possibility of achieving two goals: identification of latent interrelations between the initial data components (e.g. speech signal and lip region image) and approximation of one initial data component as a function of the other. Main Results. Experimental research on reconstruction of lip region images from the speech signal was carried out on the VidTIMIT audio-visual speech database. The results of the experiment showed that Partial Least Squares regression is capable of solving the reconstruction problem. Practical Significance. The obtained findings make it possible to assert that Partial Least Squares regression is successfully applicable to a wide variety of audio-visual speech processing problems: from synchronization of audio and video streams to liveness detection.
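    As a rough sketch of the underlying technique, here is a PLS1 (single-response) regression fitted by the classical NIPALS deflation scheme; this is a generic illustration, not the authors' audio-visual pipeline, and the data are invented:

```python
import numpy as np

def pls1_fit(X, y, n_components):
    """PLS1 regression via NIPALS deflation.

    Returns (coef, x_mean, y_mean) so predictions are (Xnew - x_mean) @ coef + y_mean.
    """
    x_mean, y_mean = X.mean(0), y.mean()
    Xc, yc = X - x_mean, y - y_mean
    W, P, c = [], [], []
    for _ in range(n_components):
        w = Xc.T @ yc
        w = w / np.linalg.norm(w)      # weight: direction of maximal covariance
        t = Xc @ w                     # score vector
        p = Xc.T @ t / (t @ t)         # X loading
        ck = yc @ t / (t @ t)          # y loading
        Xc = Xc - np.outer(t, p)       # deflate X and y
        yc = yc - ck * t
        W.append(w); P.append(p); c.append(ck)
    W, P, c = np.array(W).T, np.array(P).T, np.array(c)
    coef = W @ np.linalg.solve(P.T @ W, c)
    return coef, x_mean, y_mean

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 5))
beta = np.array([1.0, -2.0, 0.5, 0.0, 3.0])
y = X @ beta + 0.7
coef, xm, ym = pls1_fit(X, y, n_components=5)
pred = (X - xm) @ coef + ym
```

    With as many components as predictors, PLS1 reproduces the ordinary least-squares fit; using fewer components gives the low-dimensional latent model that abstracts like this one exploit.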

  9. Gauss’s, Cholesky’s and Banachiewicz’s Contributions to Least Squares

    DEFF Research Database (Denmark)

    Gustavson, Fred G.; Wasniewski, Jerzy

    This paper describes historically Gauss’s contributions to the area of Least Squares. Also mentioned are Cholesky’s and Banachiewicz’s contributions to linear algebra. The material given is backup information to a Tutorial given at PPAM 2011 to honor Cholesky on the hundredth anniversary of his...

  10. Linking Socioeconomic Status to Social Cognitive Career Theory Factors: A Partial Least Squares Path Modeling Analysis

    Science.gov (United States)

    Huang, Jie-Tsuen; Hsieh, Hui-Hsien

    2011-01-01

    The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…

  11. Adjoint sensitivity in PDE constrained least squares problems as a multiphysics problem

    NARCIS (Netherlands)

    Lahaye, D.; Mulckhuyse, W.F.W.

    2012-01-01

    Purpose - The purpose of this paper is to provide a framework for the implementation of an adjoint sensitivity formulation for least-squares partial differential equations constrained optimization problems exploiting a multiphysics finite elements package. The estimation of the diffusion coefficient

  12. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

  13. Least square fitting of low resolution gamma ray spectra with cubic B-spline basis functions

    Institute of Scientific and Technical Information of China (English)

    ZHU Meng-Hua; LIU Liang-Gang; QI Dong-Xu; YOU Zhong; XU Ao-Ao

    2009-01-01

    In this paper, the least squares fitting method with cubic B-spline basis functions is derived to reduce the influence of statistical fluctuations in gamma ray spectra. The derived procedure is simple and automatic. The results show that this method is better than the convolution method, with a sufficient reduction of statistical fluctuation.
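    A minimal sketch of least-squares fitting with a cubic B-spline basis (a generic Cox-de Boor implementation, not the authors' code; the uniform knot placement is an assumption):

```python
import numpy as np

def bspline_basis(x, t, i, k):
    """Cox-de Boor recursion: B-spline basis N_{i,k} evaluated at points x."""
    if k == 0:
        return ((t[i] <= x) & (x < t[i + 1])).astype(float)
    v = np.zeros_like(x, dtype=float)
    if t[i + k] > t[i]:
        v += (x - t[i]) / (t[i + k] - t[i]) * bspline_basis(x, t, i, k - 1)
    if t[i + k + 1] > t[i + 1]:
        v += (t[i + k + 1] - x) / (t[i + k + 1] - t[i + 1]) * bspline_basis(x, t, i + 1, k - 1)
    return v

def fit_cubic_bspline(x, y, n_interior=8):
    """Least-squares smoothing of samples (x, y) in a cubic B-spline basis
    on uniform knots extended past the data range."""
    h = (x.max() - x.min()) / (n_interior + 1)
    t = np.linspace(x.min() - 3 * h, x.max() + 4 * h, n_interior + 9)
    n_basis = len(t) - 4
    B = np.column_stack([bspline_basis(x, t, i, 3) for i in range(n_basis)])
    coef, *_ = np.linalg.lstsq(B, y, rcond=None)
    return B @ coef   # smoothed estimate at the sample points

x = np.linspace(0.0, 1.0, 60)
y = x**3 - 2 * x**2 + x        # a cubic lies exactly in the spline space
smooth = fit_cubic_bspline(x, y)
```

    Because the spline space contains cubics on the data interval, the fit reproduces a noise-free cubic exactly; on a noisy spectrum the same solve returns the smoothed estimate.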

  14. Using AMMI, factorial regression and partial least squares regression models for interpreting genotype x environment interaction.

    NARCIS (Netherlands)

    Vargas, M.; Crossa, J.; Eeuwijk, van F.A.; Ramirez, M.E.; Sayre, K.

    1999-01-01

    Partial least squares (PLS) and factorial regression (FR) are statistical models that incorporate external environmental and/or cultivar variables for studying and interpreting genotype × environment interaction (GEI). The Additive Main effect and Multiplicative Interaction (AMMI) model uses only th...

  15. Noise suppression using preconditioned least-squares prestack time migration: application to the Mississippian limestone

    Science.gov (United States)

    Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.

    2016-08-01

    Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower-upper-middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
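    The iterative conjugate gradient scheme mentioned in this abstract, stripped of the migration operators and the structure-oriented preconditioning, is the CGLS algorithm; a generic sketch with an ordinary matrix standing in for the demigration operator:

```python
import numpy as np

def cgls(A, b, n_iter=100, tol=1e-12):
    """Conjugate-gradient least squares: minimise ||A x - b||_2 by applying
    CG to the normal equations without ever forming A^T A explicitly."""
    x = np.zeros(A.shape[1])
    r = b - A @ x
    s = A.T @ r
    p = s.copy()
    gamma = s @ s
    for _ in range(n_iter):
        q = A @ p
        alpha = gamma / (q @ q)
        x += alpha * p
        r -= alpha * q
        s = A.T @ r
        gamma_new = s @ s
        if gamma_new < tol:            # ||A^T r|| ~ 0: normal equations solved
            break
        p = s + (gamma_new / gamma) * p
        gamma = gamma_new
    return x

rng = np.random.default_rng(7)
A = rng.normal(size=(50, 8))
b = rng.normal(size=50)
x_hat = cgls(A, b)
```

    In the paper's setting, each iterate would additionally be passed through the prestack structure-oriented filter; here the loop is left unpreconditioned for clarity.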

  16. A Coupled Finite Difference and Moving Least Squares Simulation of Violent Breaking Wave Impact

    DEFF Research Database (Denmark)

    Lindberg, Ole; Bingham, Harry B.; Engsig-Karup, Allan Peter

    2012-01-01

    Two models for simulation of free surface flow are presented. The first model is a finite difference based potential flow model with non-linear kinematic and dynamic free surface boundary conditions. The second model is a weighted least squares based incompressible and inviscid flow model. A specia...

  17. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
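    The random-sampling least-squares method under discussion can be illustrated in a few lines (uniform measure on [-1, 1], Legendre basis; the target function and sample sizes are invented):

```python
import numpy as np

def random_ls_poly(f, degree, n_samples, seed=0):
    """Discrete least-squares polynomial approximation of f on [-1, 1]
    from random samples of the uniform measure, in a Legendre basis."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1.0, 1.0, n_samples)
    V = np.polynomial.legendre.legvander(x, degree)  # n_samples x (degree+1)
    coef, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    return coef

# Approximate a degree-3 target; with the target inside the polynomial
# space, the least-squares fit reproduces it to machine precision.
coef = random_ls_poly(lambda x: 4 * x**3 - 3 * x, degree=3, n_samples=200)
grid = np.linspace(-1, 1, 101)
err = np.max(np.abs(np.polynomial.legendre.legval(grid, coef) - (4 * grid**3 - 3 * grid)))
```

    The quasi-optimality results of the paper concern exactly this construction: how large n_samples must be, relative to the dimension of the polynomial space, for the random design matrix V to be well conditioned with high probability.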

  18. The consistency of ordinary least-squares and generalized least-squares polynomial regression on characterizing the mechanomyographic amplitude versus torque relationship

    International Nuclear Information System (INIS)

    The primary purpose of this study was to examine the consistency of ordinary least-squares (OLS) and generalized least-squares (GLS) polynomial regression analyses utilizing linear, quadratic and cubic models on either five or ten data points that characterize the mechanomyographic amplitude (MMGRMS) versus isometric torque relationship. The secondary purpose was to examine the consistency of OLS and GLS polynomial regression utilizing only linear and quadratic models (excluding cubic responses) on either ten or five data points. Eighteen participants (mean ± SD age = 24 ± 4 yr) completed ten randomly ordered isometric step muscle actions from 5% to 95% of the maximal voluntary contraction (MVC) of the right leg extensors during three separate trials. MMGRMS was recorded from the vastus lateralis during the MVCs and each submaximal muscle action. MMGRMS versus torque relationships were analyzed on a subject-by-subject basis using OLS and GLS polynomial regression. When using ten data points, only 33% and 27% of the subjects were fitted with the same model (utilizing linear, quadratic and cubic models) across all three trials for OLS and GLS, respectively. After eliminating the cubic model, there was an increase to 55% of the subjects being fitted with the same model across all trials for both OLS and GLS regression. Using only five data points (instead of ten data points), 55% of the subjects were fitted with the same model across all trials for OLS and GLS regression. Overall, OLS and GLS polynomial regression models were only able to consistently describe the torque-related patterns of response for MMGRMS in 27–55% of the subjects across three trials. Future studies should examine alternative methods for improving the consistency and reliability of the patterns of response for the MMGRMS versus isometric torque relationship

  19. Least Square Regression Method for Estimating Gas Concentration in an Electronic Nose System

    Directory of Open Access Journals (Sweden)

    Walaa Khalaf

    2009-03-01

    Full Text Available We describe an Electronic Nose (ENose) system which is able to identify the type of analyte and to estimate its concentration. The system consists of seven sensors, five of them being gas sensors (supplied with different heater voltage values), the remainder being a temperature and a humidity sensor, respectively. To identify a new analyte sample and then to estimate its concentration, we use both some machine learning techniques and the least squares regression principle. In fact, we apply two different training models; the first one is based on the Support Vector Machine (SVM) approach and is aimed at teaching the system how to discriminate among different gases, while the second one uses the least squares regression approach to predict the concentration of each type of analyte.

  20. A NUMERICALLY STABLE BLOCK MODIFIED GRAM-SCHMIDT ALGORITHM FOR SOLVING STIFF WEIGHTED LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Musheng Wei; Qiaohua Liu

    2007-01-01

    Recently, Wei [18] proved that perturbed stiff weighted pseudoinverses and stiff weighted least squares problems are stable if and only if the original and perturbed coefficient matrices satisfy several row rank preservation conditions. According to these conditions, in this paper we show that, in general, ordinary modified Gram-Schmidt with column pivoting is not numerically stable for solving the stiff weighted least squares problem. We then propose a row block modified Gram-Schmidt algorithm with column pivoting, and show that with an appropriately chosen tolerance, this algorithm can correctly determine the numerical ranks of the row partitioned sub-matrices, and that the computed QR factor R contains only a small roundoff error that is row stable. Several numerical experiments are provided to compare the results of the ordinary modified Gram-Schmidt algorithm with column pivoting and the row block modified Gram-Schmidt algorithm with column pivoting.
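    For reference, ordinary modified Gram-Schmidt with column pivoting — the baseline algorithm the paper shows to be unstable for stiff weighted problems — looks like this (generic sketch, not the authors' row block variant):

```python
import numpy as np

def mgs_qr_pivot(A):
    """Modified Gram-Schmidt QR with column pivoting: A[:, perm] = Q @ R."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    Q = np.zeros((m, n))
    R = np.zeros((n, n))
    perm = np.arange(n)
    for k in range(n):
        # Pivot: move the remaining column of largest norm into position k.
        j = k + int(np.argmax(np.linalg.norm(A[:, k:], axis=0)))
        A[:, [k, j]] = A[:, [j, k]]
        R[:k, [k, j]] = R[:k, [j, k]]
        perm[[k, j]] = perm[[j, k]]
        # Orthogonalise column k, then deflate the remaining columns.
        R[k, k] = np.linalg.norm(A[:, k])
        Q[:, k] = A[:, k] / R[k, k]
        R[k, k + 1:] = Q[:, k] @ A[:, k + 1:]
        A[:, k + 1:] -= np.outer(Q[:, k], R[k, k + 1:])
    return Q, R, perm

rng = np.random.default_rng(2)
A0 = rng.normal(size=(10, 6))
Q, R, perm = mgs_qr_pivot(A0)
```

    The paper's point is that on stiff weighted problems (rows scaled by wildly different weights) this column-pivoted version can misjudge numerical ranks, which motivates pivoting within row blocks instead.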

  1. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
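    The standard constrained alternating least squares loop that this record builds on can be sketched as follows (a generic nonnegativity-by-clipping version, not the patented biased algorithm; shapes and iteration counts are invented):

```python
import numpy as np

def nonneg_als(D, n_factors, n_iter=300, seed=0):
    """Alternating least squares D ~ C @ S.T with nonnegativity imposed by
    clipping each unconstrained solve at zero -- the simple form of the
    constraint-induced bias the abstract discusses."""
    rng = np.random.default_rng(seed)
    C = rng.random((D.shape[0], n_factors))
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, D, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, D.T, rcond=None)[0].T, 0.0, None)
    return C, S

rng = np.random.default_rng(3)
Ctrue = rng.random((20, 3))
Strue = rng.random((15, 3))
D = Ctrue @ Strue.T                      # exactly low-rank, nonnegative data
C, S = nonneg_als(D, 3)
rel_err = np.linalg.norm(D - C @ S.T) / np.linalg.norm(D)
```

    When components are collinear, many (C, S) pairs fit D equally well; the invention concerns steering this loop toward the physically meaningful member of that solution set.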

  2. A new research of identification strategy based on particle swarm optimization and least square

    Institute of Scientific and Technical Information of China (English)

    Tong ZHANG; Yahui WANG; Anli YE; Jian WANG; Jianchao ZENG

    2009-01-01

    The heat and moisture system in the air-conditioning rooms of large-space buildings is complex, and the presence of delay reduces the stability margin, which makes parameter estimation more difficult. In this paper, particle swarm optimization (PSO) is integrated with least squares (LS) to improve the least squares estimate (PSOLS for short). The LS method, optimized by PSO, identifies the parameters of the delayed heat and moisture system in the air-conditioning rooms from sampled input and output data. For this delay system, the identification is an effective solution for nonlinear systems that LS cannot identify directly. The simulation results show that PSOLS is quite effective, and its global optimization has great potential.

  3. ON THE SINGULARITY OF LEAST SQUARES ESTIMATOR FOR MEAN-REVERTING Α-STABLE MOTIONS

    Institute of Scientific and Technical Information of China (English)

    Hu Yaozhong; Long Hongwei

    2009-01-01

    We study the problem of parameter estimation for the mean-reverting α-stable motion dX_t = (a_0 − θ_0 X_t) dt + dZ_t, observed at discrete time instants. A least squares estimator is obtained and its asymptotics are discussed in the singular case (a_0, θ_0) = (0, 0). If a_0 = 0, then the mean-reverting α-stable motion becomes an Ornstein-Uhlenbeck process, which is studied in [7] in the ergodic case θ_0 > 0. For the Ornstein-Uhlenbeck process, the asymptotics of the least squares estimators for the singular case (θ_0 = 0) and for the ergodic case (θ_0 > 0) are completely different.

  4. A Weighed Least Square TDOA Location Algorithm for TDMA Multi-target

    Directory of Open Access Journals (Sweden)

    WANG XU

    2011-04-01

    Full Text Available In order to improve the location precision of multiple targets in a time division multiple access (TDMA) system, a new weighted least squares algorithm is presented for multi-target ranging and locating. According to the time synchronization of the TDMA system, the range difference model between multiple targets is built using the time relations among the slot signals. Thus, the range of one target can be estimated from the others', and a group of estimated values can be acquired for every target. Then, the weighted least squares algorithm is used to estimate the range of every target. Because the time differences of arrival (TDOA) of all targets are used in locating each target, the location precision is improved. The ambiguity and no-solution problems of the traditional TDOA location algorithm are also avoided. Finally, the simulation results illustrate the validity of the proposed algorithm.
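    The core numerical step of such an algorithm is a generic weighted least squares solve (illustrative only; the TDOA range-difference model itself is not reproduced here):

```python
import numpy as np

def weighted_lstsq(A, b, w):
    """Minimise sum_i w_i * (A x - b)_i**2 by row-scaling with sqrt(w)
    and solving an ordinary least-squares problem."""
    sw = np.sqrt(w)
    x, *_ = np.linalg.lstsq(A * sw[:, None], b * sw, rcond=None)
    return x

rng = np.random.default_rng(4)
A = rng.normal(size=(30, 4))
x_true = np.array([1.0, -0.5, 2.0, 0.25])
b = A @ x_true                     # consistent system: any weights recover x_true
x_hat = weighted_lstsq(A, b, rng.uniform(0.5, 2.0, 30))
```

    In the paper's setting the rows of A would encode the range-difference equations and the weights the reliability of each slot-signal measurement.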

  5. Difference mapping method using least square support vector regression for variable-fidelity metamodelling

    Science.gov (United States)

    Zheng, Jun; Shao, Xinyu; Gao, Liang; Jiang, Ping; Qiu, Haobo

    2015-06-01

    Engineering design, especially for complex engineering systems, is usually a time-consuming process involving computation-intensive computer-based simulation and analysis methods. A difference mapping method using least squares support vector regression is developed in this work, as a special metamodelling methodology that includes variable-fidelity data, to replace the computationally expensive computer codes. A general difference mapping framework is proposed in which a surrogate base is first created; the approximation is then obtained by mapping the difference between the base and the real high-fidelity response surface. Least squares support vector regression is adopted to accomplish the mapping. Two different sampling strategies, nested and non-nested design of experiments, are conducted to explore their respective effects on modelling accuracy. Different sample sizes and three performance measures of approximation accuracy are considered.

  6. GLUCS: a generalized least-squares program for updating cross section evaluations with correlated data sets

    International Nuclear Information System (INIS)

    The PDP-10 FORTRAN IV computer programs INPUT.F4, GLUCS.F4, and OUTPUT.F4, which employ Bayes' theorem (or generalized least squares) for the simultaneous evaluation of reaction cross sections, are described. Evaluations of cross sections and covariances are used as input for incorporating correlated data sets, particularly ratios. These data are read from Evaluated Nuclear Data File (ENDF/B-V) formatted files. Measured data sets, including ratios and absolute and relative cross section data, are read and combined with the input evaluations by means of the least-squares technique. The resulting output evaluations contain not only updated cross sections and covariances, but also cross-reaction covariances. These output data are written into ENDF/B-V format.

  7. Real-Time Adaptive Least-Squares Drag Minimization for Performance Adaptive Aeroelastic Wing

    Science.gov (United States)

    Ferrier, Yvonne L.; Nguyen, Nhan T.; Ting, Eric

    2016-01-01

    This paper contains a simulation study of a real-time adaptive least-squares drag minimization algorithm for an aeroelastic model of a flexible wing aircraft. The aircraft model is based on the NASA Generic Transport Model (GTM). The wing structures incorporate a novel aerodynamic control surface known as the Variable Camber Continuous Trailing Edge Flap (VCCTEF). The drag minimization algorithm uses the Newton-Raphson method to find the optimal VCCTEF deflections for minimum drag in the context of an altitude-hold flight control mode at cruise conditions. The aerodynamic coefficient parameters used in this optimization method are identified in real-time using Recursive Least Squares (RLS). The results demonstrate the potential of the VCCTEF to improve aerodynamic efficiency for drag minimization for transport aircraft.
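    The Recursive Least Squares identification step mentioned in this record can be sketched generically (not the NASA implementation; the forgetting factor and initialisation are assumptions):

```python
import numpy as np

def rls(Phi, y, lam=1.0, delta=1e6):
    """Recursive least squares: process one regressor row at a time,
    updating the parameter estimate theta and the covariance matrix P."""
    n = Phi.shape[1]
    theta = np.zeros(n)
    P = delta * np.eye(n)                      # large P = weak prior on theta
    for phi, yi in zip(Phi, y):
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (yi - phi @ theta) # correct by the prediction error
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update
    return theta

rng = np.random.default_rng(5)
Phi = rng.normal(size=(200, 4))
theta_true = np.array([0.5, -1.0, 2.0, 0.1])
theta_hat = rls(Phi, Phi @ theta_true)
```

    With forgetting factor lam = 1 and noiseless data the recursion converges to the batch least-squares estimate; lam < 1 lets the estimate track the slowly varying aerodynamic coefficients described in the abstract.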

  8. Least Squares Ranking on Graphs, Hodge Laplacians, Time Optimality, and Iterative Methods

    CERN Document Server

    Hirani, Anil N; Watts, Seth

    2010-01-01

    Given a set of alternatives to be ranked and some pairwise comparison values, ranking can be posed as a least squares computation on a graph. This was first used by Leake for ranking football teams. The residual can be further analyzed to find inconsistencies in the given data, and this leads to a second least squares problem. This whole process was formulated recently by Jiang et al. as a Hodge decomposition of the edge values. Recently, Koutis et al., showed that linear systems involving symmetric diagonally dominant (SDD) matrices can be solved in time approaching optimality. By using Hodge 0-Laplacian and 2-Laplacian, we give various results on when the normal equations for ranking are SDD and when iterative Krylov methods should be used. We also give iteration bounds for conjugate gradient method for these problems.
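    The least-squares ranking formulation can be written down directly: build the edge-node incidence matrix and solve for node potentials (a minimal HodgeRank-style sketch, not the paper's code; the example scores are invented):

```python
import numpy as np

def ls_rank(n, edges, diffs):
    """Least-squares ranking on a graph: find node scores s minimising
    the sum over edges (i, j) of (s[j] - s[i] - diff_ij)**2."""
    B = np.zeros((len(edges), n))          # edge-node incidence matrix
    for e, (i, j) in enumerate(edges):
        B[e, i], B[e, j] = -1.0, 1.0
    s, *_ = np.linalg.lstsq(B, np.asarray(diffs, dtype=float), rcond=None)
    return s - s.min()                     # scores are defined up to a constant

# Consistent pairwise comparisons generated from true scores [0, 1, 3].
edges = [(0, 1), (1, 2), (0, 2)]
s = ls_rank(3, edges, [1.0, 2.0, 3.0])
```

    The normal-equations matrix B.T @ B is exactly the graph Laplacian (the Hodge 0-Laplacian), which is symmetric diagonally dominant — the structure the abstract's SDD-solver discussion is about.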

  9. Fractional Order Digital Differentiator Design Based on Power Function and Least squares

    Science.gov (United States)

    Kumar, Manjeet; Rawat, Tarun Kumar

    2016-10-01

    In this article, we propose the use of a power function and the least squares method for the design of a fractional order digital differentiator. The input signal is transformed into a power function by using Taylor series expansion, and its fractional derivative is computed using the Grunwald-Letnikov (G-L) definition. Next, the fractional order digital differentiator is modelled as a finite impulse response (FIR) system that yields a fractional order derivative of the G-L type for a power function. The FIR system coefficients are obtained by using the least squares method. Examples are used to demonstrate that the fractional derivative of digital signals can be computed by using the proposed technique. The results of the third and fourth examples reveal that the proposed technique gives superior performance in comparison with the existing techniques.
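    The Grunwald-Letnikov definition at the heart of the method can be sketched with its standard weight recurrence (illustrative only; the FIR least-squares design step is omitted, and the step size is an assumption):

```python
import math
import numpy as np

def gl_weights(alpha, n):
    """G-L weights w_k = (-1)**k * C(alpha, k) via the standard recurrence."""
    w = np.empty(n + 1)
    w[0] = 1.0
    for k in range(1, n + 1):
        w[k] = w[k - 1] * (k - 1 - alpha) / k
    return w

def gl_derivative(f, x, alpha, h=1e-3):
    """Grunwald-Letnikov fractional derivative of order alpha at x,
    with lower terminal 0 and step h."""
    n = int(x / h)
    k = np.arange(n + 1)
    return gl_weights(alpha, n) @ f(x - k * h) / h ** alpha

# Half-derivative of f(t) = t at t = 1; the closed form is 1 / Gamma(1.5).
approx = gl_derivative(lambda t: t, 1.0, 0.5)
exact = 1.0 / math.gamma(1.5)
```

    The approximation error shrinks with h; in the paper these G-L values of power functions serve as the target that the FIR coefficients are least-squares fitted to.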

  10. Modelling of the Relaxation Least Squares-Based Neural Networks and Its Application

    Institute of Scientific and Technical Information of China (English)

    2002-01-01

    A relaxation least squares-based learning algorithm for neural networks is proposed. Not only does it have a fast convergence rate, but it also requires less computation. It is therefore suitable for the case when a network has a large scale but the number of training data is very limited. It has been used in modelling a converting furnace process, and impressive results have been obtained.

  11. Optimal Knot Selection for Least-squares Fitting of Noisy Data with Spline Functions

    Energy Technology Data Exchange (ETDEWEB)

    Jerome Blair

    2008-05-15

    An automatic data-smoothing algorithm for data from digital oscilloscopes is described. The algorithm adjusts the bandwidth of the filtering as a function of time to provide minimum mean squared error at each time. It produces an estimate of the root-mean-square error as a function of time and does so without any statistical assumptions about the unknown signal. The algorithm is based on least-squares fitting to the data of cubic spline functions.

  12. Combined genetic algorithm optimization and regularized orthogonal least squares learning for radial basis function networks.

    Science.gov (United States)

    Chen, S; Wu, Y; Luk, B L

    1999-01-01

    The paper presents a two-level learning method for radial basis function (RBF) networks. A regularized orthogonal least squares (ROLS) algorithm is employed at the lower level to construct RBF networks while the two key learning parameters, the regularization parameter and the RBF width, are optimized using a genetic algorithm (GA) at the upper level. Nonlinear time series modeling and prediction is used as an example to demonstrate the effectiveness of this hierarchical learning approach.

  13. On the efficiency of the orthogonal least squares training method for radial basis function networks.

    Science.gov (United States)

    Sherstinsky, A; Picard, R W

    1996-01-01

    The efficiency of the orthogonal least squares (OLS) method for training approximation networks is examined using the criterion of energy compaction. We show that the selection of basis vectors produced by the procedure is not the most compact when the approximation is performed using a nonorthogonal basis. Hence, the algorithm does not produce the smallest possible networks for a given approximation error. Specific examples are given using approximation networks of the Gaussian radial basis function type.

  14. Integrated application of uniform design and least-squares support vector machines to transfection optimization

    Directory of Open Access Journals (Sweden)

    Pan Jin-Shui

    2009-05-01

    Full Text Available Abstract Background Liposome-based transfection in mammalian cells presents a great challenge for biological professionals. To protect themselves from exogenous insults, mammalian cells tend to manifest poor transfection efficiency. In order to gain high efficiency, we have to optimize several conditions of transfection, such as the amount of liposome, the amount of plasmid, and the cell density at transfection. However, this process may be time-consuming and energy-consuming. Fortunately, several mathematical methods developed in the past decades may facilitate the resolution of this issue. This study investigates the possibility of optimizing transfection efficiency by using a method referred to as the least-squares support vector machine, which requires only a few experiments and maintains fairly high accuracy. Results A protocol consisting of 15 experiments was performed according to the principle of uniform design. In this protocol, the amount of liposome, the amount of plasmid, and the number of cells seeded 24 h before transfection were set as independent variables, and transfection efficiency was set as the dependent variable. A model was deduced from the independent variables and their respective dependent variable. Another protocol, made up of 10 experiments, was performed to test the accuracy of the model. The model manifested high accuracy. Compared to the traditional method, the integrated application of uniform design and the least-squares support vector machine greatly reduced the number of required experiments, and higher transfection efficiency was achieved. Conclusion The integrated application of uniform design and the least-squares support vector machine is a simple technique for obtaining high transfection efficiency. Using this novel method, the number of required experiments is greatly cut down while higher efficiency is gained. The least-squares support vector machine may be applicable to many other problems that need to be optimized.
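    The least-squares support vector machine itself reduces training to a single linear system; a minimal RBF-kernel regression sketch (generic, not the study's transfection model; the kernel width, regularisation value, and data are invented):

```python
import numpy as np

def lssvm_train(X, y, gamma=1e4, sigma=0.5):
    """LS-SVM regression: the QP of a standard SVM collapses to the linear
    system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
    n = len(y)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
    K = np.exp(-sq / (2 * sigma ** 2))          # RBF kernel matrix
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
    b, alpha = sol[0], sol[1:]

    def predict(Xq):
        sq = ((Xq[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        return np.exp(-sq / (2 * sigma ** 2)) @ alpha + b
    return predict

X = np.linspace(0.0, 3.0, 30)[:, None]
y = np.sin(X[:, 0])
predict = lssvm_train(X, y)
```

    Replacing the inequality constraints of the classical SVM with equalities is what makes the fit a least-squares problem, which is why only a handful of experiments suffice to train such a model.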

  15. A mixed effects least squares support vector machine model for classification of longitudinal data

    OpenAIRE

    Luts, Jan; Molenberghs, Geert; Verbeke, Geert; Van Huffel, Sabine; Suykens, Johan A.K.

    2012-01-01

    A mixed effects least squares support vector machine (LS-SVM) classifier is introduced to extend the standard LS-SVM classifier for handling longitudinal data. The mixed effects LS-SVM model contains a random intercept and allows classification of highly unbalanced data, in the sense that there is an unequal number of observations for each case at non-fixed time points. The methodology consists of a regression modeling step and a classification step based on the obtained regression estimates. Regression...

  16. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  17. Least squares algorithm for region-of-interest evaluation in emission tomography

    International Nuclear Information System (INIS)

    In a simulation study, the performance of the least squares algorithm applied to region-of-interest evaluation was studied. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme, and it also provides estimates of the statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction: filtered back projection and conjugate gradient least squares with the model of non-stationary geometrical response. For noise-free data and for regions of accurate shape, least squares estimates were unbiased within roundoff errors. For noisy data, estimates were still unbiased but precision worsened for regions smaller than the resolution: simulating typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra-high-resolution collimator and 7% with a low energy all purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of non-stationary geometrical response, the bias of the estimates decreased on increasing the number of iterations, but precision worsened, reaching an estimated standard deviation of more than 25% for the same 1 cm region.

  18. LEAST-SQUARES METHOD-BASED FEATURE FITTING AND EXTRACTION IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The main purpose of reverse engineering is to convert discrete data points into piecewise smooth, continuous surface models. Before carrying out model reconstruction, it is important to extract geometric features, because the quality of modeling greatly depends on the representation of features. Some fitting techniques for natural quadric surfaces with the least-squares method are described. These techniques can be directly used to extract quadric surface features during the segmentation of a point cloud.
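    Among the natural quadrics, the sphere has the simplest least-squares fit, since its algebraic form is linear in the unknown parameters; a minimal sketch of such a fit (not the paper's code; the sample sphere is invented):

```python
import numpy as np

def fit_sphere(points):
    """Algebraic least-squares sphere fit: x^2 + y^2 + z^2 + d*x + e*y + f*z + g = 0
    is linear in (d, e, f, g), so one lstsq call recovers centre and radius."""
    P = np.asarray(points, dtype=float)
    A = np.column_stack([P, np.ones(len(P))])
    rhs = -(P ** 2).sum(axis=1)
    (d, e, f, g), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    centre = -0.5 * np.array([d, e, f])
    radius = np.sqrt(centre @ centre - g)
    return centre, radius

# Sample exact points on a sphere of centre (1, 2, 3) and radius 2.
rng = np.random.default_rng(6)
dirs = rng.normal(size=(100, 3))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
pts = np.array([1.0, 2.0, 3.0]) + 2.0 * dirs
centre, radius = fit_sphere(pts)
```

    Cylinders and cones need a nonlinear refinement on top of such an algebraic seed, but the same linearise-then-solve pattern applies during point cloud segmentation.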

  19. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    OpenAIRE

    Qingzhou Mao; Liang Zhang; Qingquan Li; Qingwu Hu; Jianwei Yu; Shaojun Feng; Washington Ochieng; Hanlu Gong

    2015-01-01

    In environments that are hostile to Global Navigation Satellites Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, ...

  20. Sparse partial least squares for on-line variable selection in multivariate data streams

    OpenAIRE

    McWilliams, Brian; Montana, Giovanni

    2009-01-01

    In this paper we propose a computationally efficient algorithm for on-line variable selection in multivariate regression problems involving high dimensional data streams. The algorithm recursively extracts all the latent factors of a partial least squares solution and selects the most important variables for each factor. This is achieved by means of only one sparse singular value decomposition which can be efficiently updated on-line and in an adaptive fashion. Simulation results based on art...
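
    For orientation, the latent-factor extraction that sparse PLS builds on can be sketched as a single NIPALS step. This is a generic illustration without the sparsity or on-line updating that the paper contributes; names are illustrative.

```python
import numpy as np

def pls1_one_factor(X, y):
    """Extract one PLS latent factor from (centered) data X, y.

    w: weight vector (direction of maximal covariance with y)
    t: scores (the latent factor), p/q: X and y loadings.
    Deflating X by outer(t, p) and y by q*t yields the next factor.
    """
    w = X.T @ y
    w /= np.linalg.norm(w)
    t = X @ w
    p = X.T @ t / (t @ t)
    q = (y @ t) / (t @ t)
    return w, t, p, q
```

    Sparse PLS variants additionally threshold or penalize w so that each factor involves only a few variables.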

  1. A PRESS statistic for two-block partial least squares regression

    OpenAIRE

    McWilliams, Brian; Montana, Giovanni

    2013-01-01

    Predictive modelling of multivariate data where both the covariates and responses are high-dimensional is becoming an increasingly popular task in many data mining applications. Partial Least Squares (PLS) regression often turns out to be a useful model in these situations since it performs dimensionality reduction by assuming the existence of a small number of latent factors that may explain the linear dependence between input and output. In practice, the number of latent factors to be retai...

  2. CONSUMER SATISFACTION ANALYSIS OF A FAST-FOOD RESTAURANT USING THE PARTIAL LEAST SQUARES METHOD (Case Study: Burger King Bali)

    OpenAIRE

    MADE SANJIWANI; KETUT JAYANEGARA; I PUTU EKA N. KENCANA

    2015-01-01

    There were two aims of this research. The first was to obtain a model of the relation between the latent variables service quality and product quality and customer satisfaction. The second was to determine the influence of service quality on customer satisfaction and the influence of product quality on consumer satisfaction at Burger King Bali. This research implemented the Partial Least Squares method with 3 second-order variables: service quality, product quality, and customer satisfaction. In this r...

  3. Kernelized partial least squares for feature reduction and classification of gene microarray data

    OpenAIRE

    Land Walker H; Qiao Xingye; Margolis Daniel E; Ford William S; Paquette Christopher T; Perez-Rogers Joseph F; Borgia Jeffrey A; Yang Jack Y; Deng Youping

    2011-01-01

    Background: The primary objectives of this paper are: 1.) to apply Statistical Learning Theory (SLT), specifically Partial Least Squares (PLS) and Kernelized PLS (K-PLS), to the universal "feature-rich/case-poor" (also known as "large p small n", or "high-dimension, low-sample size") microarray problem by eliminating those features (or probes) that do not contribute to the "best" chromosome bio-markers for lung cancer, and 2.) quantitatively measure and verify (by an independent means...

  4. Characterization of ocean biogeochemical processes: a generalized total least-squares estimator of the Redfield ratios

    OpenAIRE

    Guglielmi, V.; Goyet, C; Touratier, F.

    2015-01-01

    The chemical composition of the global ocean is governed by biological, chemical and physical processes. These processes interact with each other so that the concentrations of carbon dioxide, oxygen, nitrate and phosphate vary in constant proportions, referred to as the Redfield ratios. We build here the Generalized Total Least-Squares estimator of these ratios. The interest of our approach is twofold: it respects the hydrological characteristics of the studied areas, and it...

  5. Least squares algorithm for region-of-interest evaluation in emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Formiconi, A.R. (Sezione di Medicina Nucleare, Firenze (Italy). Dipt. di Fisiopatologia Clinica)

    1993-03-01

    In a simulation study, the performances of the least squares algorithm applied to region-of-interest evaluation were studied. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme and also provides estimates of the statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction: filtered back projection and conjugate gradient least squares with the model of non-stationary geometrical response. For noise-free data and for regions of accurate shape, least squares estimates were unbiased to within roundoff error. For noisy data, estimates were still unbiased but precision worsened for regions smaller than the resolution: simulating the typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra-high-resolution collimator and 7% with a low-energy all-purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of non-stationary geometrical response, the bias of the estimates decreased as the number of iterations increased, but precision worsened, reaching an estimated standard deviation of more than 25% for the same 1 cm square region.
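
    The covariance matrix mentioned in the abstract falls out of the normal equations of a weighted least-squares fit. A generic sketch (illustrative, not the paper's implementation) of a direct estimate with its covariance:

```python
import numpy as np

def wls(A, b, var):
    """Weighted least squares with per-measurement variances.

    Returns the estimate x minimizing sum((A x - b)^2 / var) and its
    covariance matrix, the inverse of the normal matrix A^T W A.
    """
    W = np.diag(1.0 / np.asarray(var, dtype=float))
    N = A.T @ W @ A                 # normal matrix
    cov = np.linalg.inv(N)          # covariance of the estimates
    x = cov @ (A.T @ W @ b)
    return x, cov
```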

  6. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    OpenAIRE

    Bo Liu; Sanfeng Chen; Shuai Li; Yongsheng Liang

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making with uncertainty is proposed via incorporating the non-adaptive data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensionality data is projected onto a random lower-dimension subspace via s...

  7. Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem

    Directory of Open Access Journals (Sweden)

    Baiyu Wang

    2014-01-01

    This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.
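
    The core of a moving least squares approximation is that a small weighted polynomial fit is re-solved at every evaluation point, with weights decaying with distance. A minimal 1-D sketch (generic, not the paper's scheme; bandwidth and weight are illustrative choices):

```python
import numpy as np

def mls_eval(x, xs, ys, h=0.5, degree=1):
    """Moving least squares value at x from samples (xs, ys).

    A degree-`degree` polynomial is fitted by weighted least squares
    with Gaussian weights centered at x (bandwidth h); the value of
    the local fit at its own center is coef[0].
    """
    w = np.exp(-((xs - x) / h) ** 2)
    A = np.vander(xs - x, degree + 1, increasing=True)  # local basis
    AW = A * w[:, None]
    coef = np.linalg.solve(A.T @ AW, AW.T @ ys)
    return coef[0]
```

    By construction a degree-1 MLS fit reproduces linear data exactly, which is a useful sanity check.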

  8. A DYNAMICAL SYSTEM ALGORITHM FOR SOLVING A LEAST SQUARES PROBLEM WITH ORTHOGONALITY CONSTRAINTS

    Institute of Scientific and Technical Information of China (English)

    黄建国; 叶中行; 徐雷

    2001-01-01

    This paper introduced a dynamical system (neural network) algorithm for solving a least squares problem with orthogonality constraints, which has wide applications in computer vision and signal processing. A rigorous analysis of the convergence and stability of the algorithm was provided. Moreover, a so-called zero-extension technique was presented to keep the algorithm convergent to the needed result for any randomly chosen initial data. Numerical experiments illustrate the effectiveness and efficiency of the algorithm.
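
    The dynamical-system solver is the paper's contribution, but the underlying constrained problem has a classical SVD baseline (the orthogonal Procrustes problem), sketched here for orientation:

```python
import numpy as np

def procrustes(A, B):
    """Minimize ||A Q - B||_F over orthogonal matrices Q.

    Classical closed form: if A^T B = U S V^T (SVD), the minimizer
    is Q = U V^T.
    """
    U, _, Vt = np.linalg.svd(A.T @ B)
    return U @ Vt
```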

  9. Learning rates of least-square regularized regression with polynomial kernels

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of polynomial space and polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish the direct approximation theorem by Bernstein-Durrmeyer operators in L^2_{ρ_X} with a Borel probability measure.

  10. Learning rates of least-square regularized regression with polynomial kernels

    Institute of Scientific and Technical Information of China (English)

    LI BingZheng; WANG GuoMao

    2009-01-01

    This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of polynomial space and polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish the direct approximation theorem by Bernstein-Durrmeyer operators in L^2_{ρ_X} with a Borel probability measure.

  11. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    OpenAIRE

    Ying Chen; Shiqing Zhang; Xiaoming Zhao

    2014-01-01

    Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments ...
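
    The NNLS step itself can be reproduced with SciPy's active-set solver. A generic sketch (not the authors' pipeline; the dictionary and signal here are toy data):

```python
import numpy as np
from scipy.optimize import nnls

# Toy dictionary whose columns are "atoms", and a signal built from it
# with non-negative coefficients [0.5, 0, 2.0].
D = np.array([[1.0, 0.0, 0.5],
              [0.0, 1.0, 0.5],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])
y = D @ np.array([0.5, 0.0, 2.0])

# Non-negative least-squares code: argmin_{c >= 0} ||D c - y||_2
code, resid = nnls(D, y)
```

    In the classification setting, the test face would play the role of y and the training faces the columns of D.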

  12. Prediction of ferric iron precipitation in bioleaching process using partial least squares and artificial neural network

    OpenAIRE

    Golmohammadi Hassan; Rashidi Abbas; Safdari Seyed Jaber

    2013-01-01

    A quantitative structure-property relationship (QSPR) study based on partial least squares (PLS) and artificial neural network (ANN) was developed for the prediction of ferric iron precipitation in bioleaching process. The leaching temperature, initial pH, oxidation/reduction potential (ORP), ferrous concentration and particle size of ore were used as inputs to the network. The output of the model was ferric iron precipitation. The optimal condition of the neural network was obtained by...

  13. High-performance numerical algorithms and software for structured total least squares

    OpenAIRE

    I. Markovsky; Van Huffel, S.

    2005-01-01

    We present a software package for structured total least squares approximation problems. The allowed structures in the data matrix are block-Toeplitz, block-Hankel, unstructured, and exact. Combination of blocks with these structures can be specified. The computational complexity of the algorithms is O(m), where m is the sample size. We show simulation examples with different approximation problems. Application of the method for multivariable system identification is illustrated on examples f...

  14. Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares

    OpenAIRE

    Rong Long; Qihong Chen; Liyan Zhang; Longhua Ma; Shuhai Quan

    2013-01-01

    Online monitoring humidity in the proton exchange membrane (PEM) fuel cell is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction i...

  15. Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    XU Rui-Rui; BIAN Guo-Xing; GAO Chen-Feng; CHEN Tian-Lun

    2005-01-01

    The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then we employ a clustering method in the model to prune the number of support values. The learning rate and the noise-filtering capability of the LS-SVM are both greatly improved.
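
    Unlike a standard SVM, training an LS-SVM reduces to one linear solve of its KKT system. A self-contained regression sketch with an RBF kernel (parameter names are illustrative; this is not the authors' pruned model):

```python
import numpy as np

def lssvm_train(X, y, gamma=1e6, sigma=0.5):
    """LS-SVM regression: solve the KKT linear system

        [ 0   1^T         ] [ b     ]   [ 0 ]
        [ 1   K + I/gamma ] [ alpha ] = [ y ]

    and predict with f(z) = sum_i alpha_i k(z, x_i) + b.
    """
    def kernel(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    n = len(X)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = kernel(X, X) + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate([[0.0], y]))
    b, alpha = sol[0], sol[1:]
    return lambda Z: kernel(Z, X) @ alpha + b
```

    Pruning, as in the abstract, would then discard the samples with the smallest |alpha_i| and retrain.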

  16. Regression Curve Estimation in a Varying Coefficient Model with Weighted Least Squares

    OpenAIRE

    Ragil P., Dian; Raupong; ISLAMIYATI, ANNA

    2014-01-01

    The varying-coefficient model for longitudinal data is studied in this proposal. The relationship between the response and predictor variables is assumed to be linear at a given time, but the coefficients change over time. A spline estimator based on weighted least squares (WLS) is used to estimate the regression curve of the varying-coefficient model. Generalized Cross-Validation (GCV) is used to select the optimal knot points. The application in this proposal is to the ACTG data, namely the relationship...

  17. Solving the Axisymmetric Inverse Heat Conduction Problem by a Wavelet Dual Least Squares Method

    Directory of Open Access Journals (Sweden)

    Fu Chu-Li

    2009-01-01

    We consider an axisymmetric inverse heat conduction problem of determining the surface temperature from a fixed location inside a cylinder. This problem is ill-posed; the solution (if it exists) does not depend continuously on the data. A special projection method, the dual least squares method generated by the family of Shannon wavelets, is applied to formulate the regularized solution. Meanwhile, an order-optimal error estimate between the approximate solution and the exact solution is proved.

  18. Least-Squares Solutions of the Equation AX = B Over Anti-Hermitian Generalized Hamiltonian Matrices

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Upon using the denotative theorem of anti-Hermitian generalized Hamiltonian matrices, we solve effectively the least-squares problem min ‖AX - B‖ over anti-Hermitian generalized Hamiltonian matrices. We derive some necessary and sufficient conditions for solvability of the problem and an expression for general solution of the matrix equation AX = B. In addition, we also obtain the expression for the solution of a relevant optimal approximate problem.
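
    For comparison with the structured problem above, the unconstrained version of min ‖AX - B‖ is solved column by column by ordinary least squares; a generic NumPy sketch (without the anti-Hermitian generalized Hamiltonian structure):

```python
import numpy as np

# Unstructured least-squares solution of A X = B for a matrix unknown X;
# each column of B defines one least-squares problem. The data here are
# a toy consistent system, so the exact X is recovered.
A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
X_true = np.array([[1.0, 2.0],
                   [3.0, -1.0]])
B = A @ X_true
X, residuals, rank, sv = np.linalg.lstsq(A, B, rcond=None)
```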

  19. A New Least Squares Support Vector Machines Ensemble Model for Aero Engine Performance Parameter Chaotic Prediction

    OpenAIRE

    Dangdang Du; Xiaoliang Jia; Chaobo Hao

    2016-01-01

    Aiming at the nonlinearity, chaos, and small sample size of aero engine performance parameter data, a new ensemble model, named the least squares support vector machine (LSSVM) ensemble model with phase space reconstruction (PSR) and particle swarm optimization (PSO), is presented. First, to guarantee the diversity of individual members, different single-kernel LSSVMs are selected as base predictors, and they also output the primary prediction results independently. Then, all the primary predicti...

  20. A Comparison of Recursive Least Squares Estimation and Kalman Filtering for Flow in Open Channels

    OpenAIRE

    DURDU, Ömer Faruk

    2005-01-01

    An integrated approach to the design of an automatic control system for canals using a Linear Quadratic Gaussian regulator based on recursive least squares estimation was developed. The one-dimensional partial differential equations describing open channel flow (the Saint-Venant equations) are linearized about an average operating condition of the canal. The concept of optimal control theory is applied to derive a feedback control algorithm for constant-level control of an irrigation cana...
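
    The recursive least squares estimator referred to above updates its parameter estimate one measurement at a time, without re-solving the full normal equations. A textbook sketch (generic, not the canal controller itself):

```python
import numpy as np

class RLS:
    """Recursive least squares with forgetting factor lam."""
    def __init__(self, n, lam=1.0, delta=1e3):
        self.w = np.zeros(n)           # parameter estimate
        self.P = delta * np.eye(n)     # inverse correlation matrix
        self.lam = lam
    def update(self, x, d):
        x = np.asarray(x, dtype=float)
        Px = self.P @ x
        k = Px / (self.lam + x @ Px)   # gain vector
        e = d - self.w @ x             # a priori error
        self.w = self.w + k * e
        self.P = (self.P - np.outer(k, Px)) / self.lam
        return e
```

    With lam = 1 this converges to the batch least-squares estimate; lam < 1 discounts old data, which is what makes RLS usable for slowly varying systems.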

  1. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.

  2. Limitation of the Least Square Method in the Evaluation of Dimension of Fractal Brownian Motions

    CERN Document Server

    Qiao, Bingqiang; Zeng, Houdun; Li, Xiang; Dai, Benzhong

    2015-01-01

    With the standard deviation for the logarithm of the re-scaled range $\langle |F(t+\tau)-F(t)|\rangle$ of simulated fractal Brownian motions $F(t)$ given in a previous paper \cite{q14}, the method of least squares is adopted to determine the slope, $S$, and intercept, $I$, of the $\log(\langle |F(t+\tau)-F(t)|\rangle)$ vs $\log(\tau)$ plot to investigate the limitation of this procedure. It is found that the reduced $\chi^2$ of the fitting decreases with the increase of the Hurst index, $H$ (the expectation value of $S$), which may be attributed to the correlation among the re-scaled ranges. Similarly, it is found that the errors of the fitting parameters $S$ and $I$ are usually smaller than their corresponding standard deviations. These results show the limitation of using the simple least square method to determine the dimension of a fractal time series. Nevertheless, they may be used to reinterpret the fitting results of the least square method to determine the dimension of fractal Brownian motions more...
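
    The fit under discussion is an ordinary least-squares line on log-log axes; a minimal sketch with synthetic noise-free power-law data (illustrative values, not the paper's simulations):

```python
import numpy as np

# Fit log10(R) = S * log10(tau) + I by least squares, as done for the
# re-scaled range of a fractal time series; here R = c * tau^H exactly.
tau = np.array([2.0, 4.0, 8.0, 16.0, 32.0, 64.0])
H_true, c = 0.7, 2.0
R = c * tau ** H_true
S, I = np.polyfit(np.log10(tau), np.log10(R), 1)
```

    The paper's point is precisely that this ordinary fit ignores the correlations among the re-scaled ranges, so its nominal parameter errors can be misleading.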

  3. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

    Science.gov (United States)

    Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

    2016-06-01

    At the early stages of deep-water oil exploration and development, fewer and more widely spaced wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms are used to predict the reservoir parameters but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. Iterative matrix inversion is also introduced to improve the training ability and training time of the model. We use the UCI data to test the generalization of the semi-supervised and supervised LSSVM models. The test results suggest that the generalization performance of the LSSVM model greatly improves, and with decreasing training samples the generalization performance is better. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semi-supervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.

  4. Constrained Balancing of Two Industrial Rotor Systems: Least Squares and Min-Max Approaches

    Directory of Open Access Journals (Sweden)

    Bin Huang

    2009-01-01

    Rotor vibrations caused by rotor mass unbalance distributions are a major source of maintenance problems in high-speed rotating machinery. Minimizing this vibration by balancing under practical constraints is quite important to industry. This paper considers balancing of two large industrial rotor systems by constrained least squares and min-max balancing methods. In current industrial practice, the weighted least squares method has been utilized to minimize rotor vibrations for many years. One of its disadvantages is that it cannot guarantee that the maximum value of vibration is below a specified value. To achieve better balancing performance, the min-max balancing method utilizing Second Order Cone Programming (SOCP), with the maximum correction weight constraint, the maximum residual response constraint, as well as the weight splitting constraint, has been utilized for effective balancing. The min-max balancing method can guarantee a maximum residual vibration value below an optimum value and is shown by simulation to significantly outperform the weighted least squares method.

  5. A least square extrapolation method for improving solution accuracy of PDE computations

    CERN Document Server

    Garbey, M

    2003-01-01

    Richardson extrapolation (RE) is based on a very simple and elegant mathematical idea that has been successful in several areas of numerical analysis such as quadrature or time integration of ODEs. In theory, RE can also be used on PDE approximations when the convergence order of a discrete solution is clearly known. But in practice, the order of a numerical method often depends on space location and is not accurately satisfied on the different levels of grids used in the extrapolation formula. We propose in this paper a more robust and numerically efficient method based on the idea of finding automatically the order of a method as the solution of a least square minimization problem on the residual. We introduce a two-level and three-level least square extrapolation method that works on nonmatching embedded grid solutions via spline interpolation. Our least square extrapolation method is a post-processing of data produced by existing PDE codes, that is easy to implement and can be a better tool than RE for code v...

  6. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    Science.gov (United States)

    Sen, Symal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum-norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine the reason why unacceptable contradiction has occurred, thus prompting us to make necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
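
    Step (iv), the minimum-norm least-squares solution, is available directly through the pseudoinverse. A deliberately contradictory toy system illustrates it (a sketch in Python rather than the authors' Matlab code):

```python
import numpy as np

# A contradictory system: x + y = 2 and x + y = 4 cannot both hold.
A = np.array([[1.0, 1.0],
              [1.0, 1.0]])
b = np.array([2.0, 4.0])

x = np.linalg.pinv(A) @ b             # minimum-norm least-squares solution
residual = np.linalg.norm(A @ x - b)  # nonzero: measures the contradiction
```

    Here the least-squares compromise sets x + y = 3, and among all such (x, y) the pseudoinverse picks the one of smallest norm, x = y = 1.5.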

  7. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    Science.gov (United States)

    Demetriou, I. C.

    2006-04-01

    , biology and engineering. Distribution material that includes single and double precision versions of the code, driver programs, technical details of the implementation of the software package and test examples that demonstrate the use of the software is available in an accompanying ASCII file.
    Program summary
    Title of program: L2CXCV
    Catalogue identifier: ADXM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXM_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: PC Intel Pentium, Sun Sparc Ultra 5, Hewlett-Packard HP UX 11.0
    Operating system: WINDOWS 98, 2000, Unix/Solaris 7, Unix/HP UX 11.0
    Programming language used: FORTRAN 77
    Memory required to execute with typical data: O(n), where n is the number of data
    No. of bits in a byte: 8
    No. of lines in distributed program, including test data, etc.: 29 349
    No. of bytes in distributed program, including test data, etc.: 1 276 663
    No. of processors used: 1
    Has the code been vectorized or parallelized?: no
    Distribution format: default tar.gz
    Separate documentation available: Yes
    Nature of physical problem: Analysis of processes that show initially increasing and then decreasing rates of change (sigmoid shape), as, for example, in heat curves, reactor stability conditions, evolution curves, photoemission yields, growth models, utility functions, etc. Identifying an unknown convex/concave (sigmoid) function from some measurements of its values that contain random errors. Also, identifying the inflection point of this sigmoid function.
    Method of solution: Univariate data smoothing by minimizing the sum of the squares of the residuals (least squares approximation) subject to the condition that the second order divided differences of the smoothed values change sign at most once. Ideally, this is the number of sign changes in the second derivative of the underlying function. The remarkable property of the smoothed values is that they consist of one separate section of optimal components

  8. Multisource least-squares migration of marine streamer and land data with frequency-division encoding

    KAUST Repository

    Huang, Yunsong

    2012-05-22

    Multisource migration of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. The accompanying crosstalk noise, in addition to the migration footprint, can be reduced by least-squares inversion. But the application of this approach to marine streamer data is hampered by the mismatch between the limited number of live traces/shot recorded in the field and the pervasive number of traces generated by the finite-difference modelling method. This leads to a strong mismatch in the misfit function and results in strong artefacts (crosstalk) in the multisource least-squares migration image. To eliminate this noise, we present a frequency-division multiplexing (FDM) strategy with iterative least-squares migration (ILSM) of supergathers. The key idea is, at each ILSM iteration, to assign a unique frequency band to each shot gather. In this case there is no overlap in the crosstalk spectrum of each migrated shot gather m(x, ω_i), so the spectral crosstalk product m(x, ω_i)m(x, ω_j) is zero unless i = j. Our results in applying this method to 2D marine data for a SEG/EAGE salt model show better resolved images than standard migration computed at about 1/10th of the cost. Similar results are achieved after applying this method to synthetic data for a 3D SEG/EAGE salt model, except the acquisition geometry is similar to that of a marine OBS survey. Here, the speedup of this method over conventional migration is more than 10. We conclude that multisource migration for a marine geometry can be successfully achieved by a frequency-division encoding strategy, as long as crosstalk-prone sources are segregated in their spectral content. This is both the strength and the potential limitation of this method. © 2012 European Association of Geoscientists & Engineers.

  9. Local classification: Locally weighted-partial least squares-discriminant analysis (LW-PLS-DA).

    Science.gov (United States)

    Bevilacqua, Marta; Marini, Federico

    2014-08-01

    The possibility of devising a simple, flexible and accurate non-linear classification method, by extending the locally weighted partial least squares (LW-PLS) approach to the cases where the algorithm is used in a discriminant way (partial least squares discriminant analysis, PLS-DA), is presented. In particular, to assess which category an unknown sample belongs to, the proposed algorithm operates by identifying which training objects are most similar to the one to be predicted and building a PLS-DA model using these calibration samples only. Moreover, the influence of the selected training samples on the local model can be further modulated by adopting a non-uniform distance-based weighting scheme which allows the farthest calibration objects to have less impact than the closest ones. The performances of the proposed locally weighted-partial least squares-discriminant analysis (LW-PLS-DA) algorithm have been tested on three simulated data sets characterized by a varying degree of non-linearity: in all cases, a classification accuracy higher than 99% on external validation samples was achieved. Moreover, when also applied to a real data set (classification of rice varieties), characterized by a high extent of non-linearity, the proposed method provided an average correct classification rate of about 93% on the test set. Based on the preliminary results shown in this paper, the performances of the proposed LW-PLS-DA approach have proved to be comparable to, and in some cases better than, those obtained by other non-linear methods (k nearest neighbors, kernel-PLS-DA and, in the case of rice, counterpropagation neural networks).

  10. Circular and linear regression fitting circles and lines by least squares

    CERN Document Server

    Chernov, Nikolai

    2010-01-01

    Find the right algorithm for your image processing application. Exploring the recent achievements that have occurred since the mid-1990s, Circular and Linear Regression: Fitting Circles and Lines by Least Squares explains how to use modern algorithms to fit geometric contours (circles and circular arcs) to observed data in image processing and computer vision. The author covers all facets - geometric, statistical, and computational - of the methods. He looks at how the numerical algorithms relate to one another through underlying ideas, compares the strengths and weaknesses of each algorithm, and il

  11. A Galerkin least-square stabilisation technique for hyperelastic biphasic soft tissue

    CERN Document Server

    Vignollet, Julien; Kaczmarczyk, Lukasz

    2011-01-01

    A hyperelastic biphasic model is presented. For slow-draining problems (permeability less than 1×10^-2 mm^4 N^-1 s^-1), numerical instabilities in the form of non-physical oscillations in the pressure field are observed in 3D problems using tetrahedral Taylor-Hood finite elements. As an alternative to considerable mesh refinement, a Galerkin least-square stabilization framework is proposed. This technique drastically reduces the pressure discrepancies and prevents these oscillations from propagating towards the centre of the medium. The performance and robustness of this technique are demonstrated on a 3D numerical example.

  12. A Least-Squares Method for Unfolding Convolution Products in X-ray Diffraction Line Profiles

    OpenAIRE

    Yokoyama, Fumiyoshi

    1982-01-01

    A deconvolution method for the X-ray diffraction line profile is proposed, which is based on the conventional least-squares method. The true profile is assumed to be a functional form. The numerical values of parameters of the function assumed are determined so that the calculated profile, which is a convolution of the function and the instrumental profile, has a minimum deviation from the observed one. The method is illustrated by analysis of the X-ray powder diffraction profile of sodium ch...

  13. Thrust estimator design based on least squares support vector regression machine

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-ping; SUN Jian-guo

    2010-01-01

    In order to realize direct thrust control instead of traditional sensor-based control for aero-engines, it is indispensable to design a thrust estimator with high accuracy, so a scheme for thrust estimator design based on the least squares support vector regression machine is proposed to solve this problem. Furthermore, numerical simulations confirm the effectiveness of the presented scheme. During the process of estimator design, a wrapper criterion that can not only reduce the computational complexity but also enhance the generalization performance is proposed to select input variables for the estimator.

  14. Fully Modified Narrow-Band Least Squares Estimation of Weak Fractional Cointegration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ørregaard; Frederiksen, Per

    application recently, especially in financial economics. Previous research on this model has considered a semiparametric narrow-band least squares (NBLS) estimator in the frequency domain, but in the stationary case its asymptotic distribution has been derived only under a condition of non-coherence between......-coherence. Furthermore, compared to much previous research, the development of the asymptotic distribution theory is based on a different spectral density representation, which is relevant for multivariate fractionally integrated processes, and the use of this representation is shown to result in lower asymptotic bias...

  15. A least squares procedure for calculating the calibration constants of a portable gamma-ray spectrometer.

    Science.gov (United States)

    Ribeiro, F B; Carlos, D U; Hiodo, F Y; Strobino, E F

    2005-01-01

    In this study, a least squares procedure for calculating the calibration constants of a portable gamma-ray spectrometer using the general inverse matrix method is presented. The procedure weights the model equations fitted to the calibration data, taking into account the variances in the counting rates and in the radioactive standard concentrations. The application of the described procedure is illustrated by calibrating the same gamma-ray spectrometer twice, with two independent data sets collected approximately 18 months apart in the same calibration facility.
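    As a hedged sketch of the weighting idea (not the paper's general-inverse formulation), the following fits a straight calibration line with each point weighted by the inverse of its counting variance. The concentrations and counting rates below are invented for illustration.

    ```python
    def weighted_line_fit(x, y, var):
        """Fit y = a + b*x, weighting each point by 1/variance."""
        w = [1.0 / v for v in var]
        sw = sum(w)
        swx = sum(wi * xi for wi, xi in zip(w, x))
        swy = sum(wi * yi for wi, yi in zip(w, y))
        swxx = sum(wi * xi * xi for wi, xi in zip(w, x))
        swxy = sum(wi * xi * yi for wi, xi, yi in zip(w, x, y))
        det = sw * swxx - swx * swx
        a = (swxx * swy - swx * swxy) / det
        b = (sw * swxy - swx * swy) / det
        # variances of the fitted constants follow from the inverse normal matrix
        var_a, var_b = swxx / det, sw / det
        return a, b, var_a, var_b

    # counting rate (cps) vs. standard concentration, with counting variances
    conc = [1.0, 2.0, 4.0, 8.0]
    rate = [10.2, 19.8, 40.5, 79.9]
    var = [r for r in rate]           # Poisson counting: variance ~ rate
    a, b, va, vb = weighted_line_fit(conc, rate, var)
    print(round(b, 2))                # fitted slope (calibration constant)
    ```

    The same normal-equation algebra generalizes to the full matrix form used in the paper when several calibration constants are fitted at once.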

  16. LEAST-SQUARES MIXED FINITE ELEMENT METHODS FOR THE INCOMPRESSIBLE MAGNETOHYDRODYNAMIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Shao-qin Gao

    2005-01-01

    Least-squares mixed finite element methods are proposed and analyzed for the incompressible magnetohydrodynamic equations, where two vorticities are additionally introduced as independent variables so that the primal equations are transformed into first-order systems. We show coerciveness and optimal error bounds in appropriate norms for all variables under consideration, which can be approximated by any kind of continuous element. Consequently, the Babuska-Brezzi (inf-sup) condition and the indefiniteness, which are essential features of the classical mixed methods, are avoided.

  17. A comparison of three additive tree algorithms that rely on a least-squares loss criterion.

    Science.gov (United States)

    Smith, T J

    1998-11-01

    The performances of three additive tree algorithms that seek to minimize a least-squares loss criterion were compared. The algorithms included the penalty-function approach of De Soete (1983), the iterative projection strategy of Hubert & Arabie (1995), and the two-stage ADDTREE algorithm (Corter, 1982; Sattath & Tversky, 1977). Model fit, comparability of structure, processing time and metric recovery were assessed. Results indicated that the iterative projection strategy consistently located the best-fitting tree, but also displayed a wider range and larger number of local optima. PMID:9854946

  18. Recursive Least Squares Estimator with Multiple Exponential Windows in Vector Autoregression

    Institute of Scientific and Technical Information of China (English)

    Hong-zhi An; Zhi-guo Li

    2002-01-01

    In the parameter tracking of time-varying systems, the ordinary method is weighted least squares with the rectangular window or the exponential window. In this paper we propose a new kind of sliding window called the multiple exponential window, and then use it to fit time-varying Gaussian vector autoregressive models. The asymptotic bias and covariance of the estimator of the parameter for time-invariant models are also derived. Simulation results show that the multiple exponential windows have better parameter tracking effect than rectangular windows and exponential ones.
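    For contrast with the multiple-window scheme proposed above, a minimal recursive least-squares tracker with a single exponential window (forgetting factor) can be sketched as follows. The scalar AR(1) formulation and the data are invented for illustration and are not the authors' estimator.

    ```python
    import random

    def rls_ar1(data, lam=0.99):
        """Track phi in x[t] = phi * x[t-1] + noise via exponentially weighted RLS."""
        phi_hat, p = 0.0, 1000.0          # estimate and (scalar) covariance
        for t in range(1, len(data)):
            x_prev, x_now = data[t - 1], data[t]
            k = p * x_prev / (lam + p * x_prev * x_prev)   # gain
            phi_hat += k * (x_now - phi_hat * x_prev)      # update estimate
            p = (p - k * x_prev * p) / lam                 # update covariance
        return phi_hat

    random.seed(0)
    x = [0.0]
    for _ in range(500):
        x.append(0.8 * x[-1] + random.gauss(0.0, 0.1))
    print(round(rls_ar1(x), 2))          # estimate, near the true coefficient 0.8
    ```

    The forgetting factor lam sets the effective window length (roughly 1/(1-lam) samples), which is the single-window analogue of the trade-off the multiple exponential window is designed to improve.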

  19. A negative-norm least-squares method for time-harmonic Maxwell equations

    KAUST Repository

    Copeland, Dylan M.

    2012-04-01

    This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.

  20. Retinal Oximetry with 510-600 nm Light Based on Partial Least-Squares Regression Technique

    Science.gov (United States)

    Arimoto, Hidenobu; Furukawa, Hiromitsu

    2010-11-01

    The oxygen saturation distribution in the retinal blood stream is estimated by measuring spectral images and adopting partial least-squares regression. The wavelength range used for the calculation is from 510 to 600 nm. The regression model for estimating the retinal oxygen saturation is built on the basis of the arterial and venous blood spectra. The experiment is performed using an originally designed spectral ophthalmoscope. The obtained two-dimensional (2D) oxygen saturation map indicates reasonable oxygen levels across the retina. The measurement quality is compared with that obtained using other wavelength sets and data processing methods.
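    A one-latent-variable PLS1 regression (NIPALS-style) can be sketched as follows. The two-wavelength "spectra" below are invented and constructed rank-one so that a single component fits exactly, which is far simpler than real retinal spectra.

    ```python
    def pls1_one_component(X, y):
        """One-component PLS1: weight vector, scores, and score regression."""
        n, p = len(X), len(X[0])
        # weight vector: covariance direction between X columns and y
        w = [sum(X[i][j] * y[i] for i in range(n)) for j in range(p)]
        norm = sum(wj * wj for wj in w) ** 0.5
        w = [wj / norm for wj in w]
        # scores, then regress y on the scores
        t = [sum(X[i][j] * w[j] for j in range(p)) for i in range(n)]
        b = sum(ti * yi for ti, yi in zip(t, y)) / sum(ti * ti for ti in t)
        return w, b

    def predict(x, w, b):
        return b * sum(xj * wj for xj, wj in zip(x, w))

    # training "spectra" (2 wavelengths) and responses, centered for simplicity
    X = [[1.0, 2.0], [-1.0, -2.0], [0.5, 1.0], [-0.5, -1.0]]
    y = [3.0, -3.0, 1.5, -1.5]
    w, b = pls1_one_component(X, y)
    print(round(predict([2.0, 4.0], w, b), 3))   # expect 6.0
    ```

    Real spectral calibrations use many wavelengths and several latent components, but the weight/score/regression structure is the same.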

  1. Least Square Method for Porous Fin in the Presence of Uniform Magnetic Field

    Directory of Open Access Journals (Sweden)

    H.A. Hoshyar

    2016-01-01

    Full Text Available In this study, the Least Square Method (LSM) is shown to be a powerful and easy-to-use analytic tool for predicting the temperature distribution in a porous fin exposed to a uniform magnetic field. The heat transfer through the porous medium is simulated using the passage velocity from Darcy's model. The capabilities and wide-range applicability of the LSM are shown by comparison with a numerical Boundary Value Problem (BVP) solver for this problem. The results reveal that the present method is very effective and convenient, and suggest that the LSM can find wide application in engineering and physics.

  2. Least Squares Spectral Analysis and Its Application to Superconducting Gravimeter Data Analysis

    Institute of Scientific and Technical Information of China (English)

    YIN Hui; Spiros D. Pagiatakis

    2004-01-01

    Detection of a periodic signal hidden in noise is the goal of Superconducting Gravimeter (SG) data analysis. Due to spikes, gaps, datum shifts (offsets) and other disturbances, the traditional FFT method shows inherent limitations. Instead, least squares spectral analysis (LSSA) has shown itself more suitable than Fourier analysis for gappy, unequally spaced and unequally weighted data series in a variety of applications in geodesy and geophysics. This paper reviews the principle of LSSA and gives a possible strategy for the analysis of time series obtained from the Canadian Superconducting Gravimeter Installation (CGSI), with gaps, offsets, unequal sampling decimation of the data and unequally weighted data points.

  3. A meshless Galerkin method with moving least square approximations for infinite elastic solids

    Institute of Scientific and Technical Information of China (English)

    Li Xiao-Lin; Li Shu-Ling

    2013-01-01

    Combining moving least square approximations and boundary integral equations, a meshless Galerkin method, the Galerkin boundary node method (GBNM), for two- and three-dimensional infinite elastic solid mechanics problems with traction boundary conditions is discussed. In this numerical method, the resulting formulation inherits the symmetry and positive definiteness of variational problems, and boundary conditions can be applied directly and easily. A rigorous error analysis and convergence study for both displacement and stress is presented in Sobolev spaces. The capability of this method is illustrated and assessed by some numerical examples.

  4. Solving Time of Least Square Systems in Sigma-Pi Unit Networks

    CERN Document Server

    Courrieu, Pierre

    2008-01-01

    The solving of least square systems is a useful operation in neurocomputational modeling of learning, pattern matching, and pattern recognition. In these last two cases, the solution must be obtained on-line, thus the time required to solve a system in a plausible neural architecture is critical. This paper presents a recurrent network of Sigma-Pi neurons, whose solving time increases at most like the logarithm of the system size, and of its condition number, which provides plausible computation times for biological systems.

  5. Distribution of error in least-squares solution of an overdetermined system of linear simultaneous equations

    Science.gov (United States)

    Miller, C. D.

    1972-01-01

    Probability density functions were derived for errors in the evaluation of unknowns by the least squares method in a system of nonhomogeneous linear equations. The coefficients of the unknowns were assumed correct, and adequate computational precision was also assumed. A vector space was used, with the number of dimensions equal to the number of equations. An error vector was defined and assumed to have a uniform distribution of orientation throughout the vector space. The density functions are shown to be insensitive to the biasing effects of the source of the system of equations.

  6. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.

  7. Useful and little-known applications of the Least Square Method and some consequences of covariances

    Science.gov (United States)

    Helene, Otaviano; Mariano, Leandro; Guimarães-Filho, Zwinglio

    2016-10-01

    Covariances are as important as variances when dealing with experimental data, and they must be considered in fitting procedures and adjustments in order to preserve the statistical properties of the adjusted quantities. In this paper, we apply the Least Square Method in matrix form to several simple problems in order to evaluate the consequences of covariances in the fitting procedure. Among the examples, we demonstrate how a measurement of a physical quantity can change the adopted value of all other covariant quantities and how a new single point (x, y) improves the parameters of a previously adjusted straight line.
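    The covariance effect described above can be sketched numerically: after a new measurement of one quantity alone, the least-squares (Gauss-Markov) update also shifts the covariant quantity. All numbers below are invented for illustration.

    ```python
    # Prior: a = 10.0, b = 5.0, with covariance matrix V (a and b correlated)
    V = [[0.04, 0.03],
         [0.03, 0.04]]
    x = [10.0, 5.0]

    # New direct measurement of a alone: y = 10.4 with variance r
    y, r = 10.4, 0.04

    # Least-squares update with design row H = [1, 0]:
    s = V[0][0] + r                     # innovation variance H V H^T + r
    gain = [V[0][0] / s, V[1][0] / s]   # K = V H^T / s
    innov = y - x[0]
    x = [x[0] + gain[0] * innov, x[1] + gain[1] * innov]
    print(x)  # both a and b move, although only a was measured
    ```

    Because V has a nonzero off-diagonal term, the update pulls b along with a; with a diagonal V the second component would not change at all.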

  8. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two observer, measurement error only problem.

  9. On-line Weighted Least Squares Kernel Method for Nonlinear Dynamic Modeling

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Support vector machines (SVM) have been widely used in pattern recognition and have also drawn considerable interest in control areas. Based on a rolling optimization method and on-line learning strategies, a novel approach based on weighted least squares support vector machines (WLS-SVM) is proposed for nonlinear dynamic modeling. The good robustness of the novel approach enhances the generalization ability of kernel-method-based modeling, and some experimental results are presented to illustrate the feasibility of the proposed method.

  10. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter, ν, goes to 1/2. Computational experiments are included.

  11. Fast algorithm for solving the Hankel/Toeplitz Structured Total Least Squares problem

    Science.gov (United States)

    Lemmerling, Philippe; Mastronardi, Nicola; van Huffel, Sabine

    2000-07-01

    The Structured Total Least Squares (STLS) problem is a natural extension of the Total Least Squares (TLS) problem when constraints on the matrix structure need to be imposed. Similar to the ordinary TLS approach, the STLS approach can be used to determine the parameter vector of a linear model, given some noisy measurements. In many signal processing applications, the imposition of this matrix structure constraint is necessary for obtaining Maximum Likelihood (ML) estimates of the parameter vector. In this paper we consider the Toeplitz (Hankel) STLS problem (i.e., an STLS problem in which the Toeplitz (Hankel) structure needs to be preserved). A fast implementation of an algorithm for solving this frequently occurring STLS problem is proposed. The increased efficiency is obtained by exploiting the low displacement rank of the involved matrices and the sparsity of the associated generators. The fast implementation is compared to two other implementations of algorithms for solving the Toeplitz (Hankel) STLS problem. The comparison is carried out on a recently proposed speech compression scheme. The numerical results confirm the high efficiency of the newly proposed fast implementation: the straightforward implementations have a complexity of O((m+n)^3) and O(m^3), whereas the proposed implementation has a complexity of O(mn+n^2).

  12. Multi-loop adaptive internal model control based on a dynamic partial least squares model

    Institute of Scientific and Technical Information of China (English)

    Zhao ZHAO; Bin HU; Jun LIANG

    2011-01-01

    A multi-loop adaptive internal model control (IMC) strategy based on a dynamic partial least squares (PLS) framework is proposed to account for plant model errors caused by slow aging, drift in operational conditions, or environmental changes. Since the PLS decomposition structure enables multi-loop controller design within latent spaces, a multivariable adaptive control scheme can be converted easily into several independent univariable control loops in the PLS space. In each latent subspace, once the model error exceeds a specific threshold, online adaptation rules are implemented separately to correct the plant model mismatch via a recursive least squares (RLS) algorithm. Because the IMC extracts the inverse of the minimum-phase part of the internal model as its structure, the IMC controller is self-tuned by explicitly updating the parameters, which are parts of the internal model. Both parameter convergence and system stability are briefly analyzed and proved. Finally, the proposed control scheme is tested and evaluated using a widely-used benchmark of a multi-input multi-output (MIMO) system with pure delay.

  13. Non-negative least-squares variance component estimation with application to GPS time series

    Science.gov (United States)

    Amiri-Simkooei, A. R.

    2016-05-01

    The problem of negative variance components is likely to occur in many geodetic applications. This problem can be avoided if non-negativity constraints on the variance components (VCs) are introduced into the stochastic model. Based on the standard non-negative least-squares (NNLS) theory, this contribution presents the method of non-negative least-squares variance component estimation (NNLS-VCE). The method is easy to understand, simple to implement, and efficient in practice. The NNLS-VCE is then applied to the coordinate time series of permanent GPS stations to simultaneously estimate the amplitudes of different noise components such as white noise, flicker noise, and random walk noise. If a noise model is unlikely to be present, its amplitude is automatically estimated to be zero. The results obtained from 350 permanent GPS stations indicate that the noise characteristics of the GPS time series are well described by a combination of white noise and flicker noise; all time series contain positive noise amplitudes for white and flicker noise. In addition, around two-thirds of the series contain random walk noise, with a (small) average amplitude of 0.16, 0.13, and 0.45 mm/year^{1/2} for the north, east, and up components, respectively. Also, about half of the positive estimated amplitudes of random walk noise are statistically significant, indicating that one-third of the total time series have significant random walk noise.
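    The non-negativity constraint can be sketched with simple projected gradient descent rather than the standard active-set NNLS solver the method builds on. The small design matrix and right-hand side below are invented; the unconstrained least-squares solution would make the second component negative, and the projection clips it to zero.

    ```python
    def nnls_pg(A, b, steps=5000, lr=0.01):
        """Minimize ||A x - b||^2 subject to x >= 0 by projected gradient."""
        m, n = len(A), len(A[0])
        x = [0.0] * n
        for _ in range(steps):
            # gradient of 0.5 ||A x - b||^2 is A^T (A x - b)
            r = [sum(A[i][j] * x[j] for j in range(n)) - b[i] for i in range(m)]
            g = [sum(A[i][j] * r[i] for i in range(m)) for j in range(n)]
            # gradient step, then project onto the non-negative orthant
            x = [max(0.0, xj - lr * gj) for xj, gj in zip(x, g)]
        return x

    # b pulls the second component negative in the unconstrained solution
    A = [[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
    b = [2.0, -0.5, 2.0]
    print([round(v, 3) for v in nnls_pg(A, b)])   # → [2.0, 0.0]
    ```

    In the VCE setting, clipping a component to zero corresponds to a noise model whose amplitude is automatically estimated as zero, as described in the abstract.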

  14. Online least squares one-class support vector machines-based abnormal visual event detection.

    Science.gov (United States)

    Wang, Tian; Chen, Jie; Zhou, Yi; Snoussi, Hichem

    2013-01-01

    The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of the training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by a coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method. PMID:24351629
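    The regularized least-squares one-class idea can be sketched with a linear kernel: fit w so that w·x ≈ 1 on normal training points (with a ridge penalty), then flag points whose score deviates far from 1. This is an invented toy stand-in for the paper's kernel formulation; the data and threshold are made up.

    ```python
    def ls_one_class(X, lam=0.1):
        """Solve (X^T X + lam I) w = X^T 1 for 2-D features (closed form)."""
        sxx = sum(x[0] * x[0] for x in X) + lam
        syy = sum(x[1] * x[1] for x in X) + lam
        sxy = sum(x[0] * x[1] for x in X)
        b1 = sum(x[0] for x in X)
        b2 = sum(x[1] for x in X)
        det = sxx * syy - sxy * sxy
        return [(syy * b1 - sxy * b2) / det, (sxx * b2 - sxy * b1) / det]

    def is_normal(x, w, tol=0.5):
        score = x[0] * w[0] + x[1] * w[1]
        return abs(score - 1.0) < tol

    # normal training cluster around (1, 1)
    X = [[1.0, 1.1], [0.9, 1.0], [1.1, 0.9], [1.0, 1.0]]
    w = ls_one_class(X)
    print(is_normal([1.0, 1.0], w), is_normal([3.0, 3.0], w))   # → True False
    ```

    The online variant in the paper updates w recursively as new frames arrive instead of re-solving the normal equations from scratch.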

  15. Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems

    CERN Document Server

    Abbasi-Yadkori, Yasin; Szepesvari, Csaba

    2011-01-01

    The analysis of online least squares estimation is at the heart of many stochastic sequential decision making problems. We employ tools from the theory of self-normalized processes to provide a simple and self-contained proof of a tail bound for a vector-valued martingale. We use the bound to construct new, tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and the linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al. (2002) and show that its regret is, with high probability, a problem-dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem-dependent bound in the dimension and number of time steps. Furthermore, as opposed to the previous result, we prove that our bou...

  16. Equalization of Loudspeaker and Room Responses Using Kautz Filters: Direct Least Squares Design

    Directory of Open Access Journals (Sweden)

    Tuomas Paatero

    2007-01-01

    Full Text Available DSP-based correction of loudspeaker and room responses is becoming an important part of improving sound reproduction. Such response equalization (EQ) is based on using a digital filter in cascade with the reproduction channel to counteract the response errors introduced by loudspeakers and room acoustics. Several FIR and IIR filter design techniques have been proposed for equalization purposes. In this paper we investigate Kautz filters, an interesting class of IIR filters, from the point of view of direct least squares EQ design. Kautz filters can be seen as generalizations of FIR filters and their frequency-warped counterparts. They provide a flexible means of obtaining the desired frequency-resolution behavior, which allows low filter orders even for complex corrections. Kautz filters also have the desirable property of not inverting dips in the transfer function into sharp, long-ringing resonances in the equalizer. Furthermore, the direct least squares design is applicable to nonminimum-phase EQ design and allows the use of a desired target response. The proposed method is demonstrated by case examples with measured and synthetic loudspeaker and room responses.

  17. Total Robustified Least Squares Estimation in Partial Errors-in-variables Model

    Directory of Open Access Journals (Sweden)

    ZHAO Jun

    2016-05-01

    Full Text Available The weighted total least-squares (WTLS) estimate for the partial errors-in-variables (EIV) model is very susceptible to outliers. Because the observations and the coefficient matrix in the partial EIV model may be contaminated with outliers simultaneously, a total robustified least squares (TRLS) estimation for the partial EIV model is proposed by combining a two-step iterated algorithm of the WTLS estimate with the equivalent weight method of robust M-estimation. Uniformly most powerful test statistics are constructed to determine the down-weighting factors. Owing to the characteristics of the two-step iterated method, two different down-weighting schemes are presented: in the first scheme, down-weighting is applied only to the coefficient matrix and not to the observations when some elements of the coefficient matrix are estimated, while the second scheme is the reverse. A simulated two-dimensional affine transformation and a linear fit with real data are analyzed. The results show that the TRLS with the first scheme is superior to the one with the second scheme, and that it outperforms existing robust methods based on residuals and the posterior estimate of the variance of unit weight, as well as existing robust methods for the general EIV model.

  18. Online segmentation of time series based on polynomial least-squares approximations.

    Science.gov (United States)

    Fuchs, Erich; Gruber, Thiemo; Nitschke, Jiri; Sick, Bernhard

    2010-12-01

    The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial-obtained by means of the update steps-can be interpreted as optimal (in the least-squares sense) estimators for average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of some artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg-which is suitable for many data streaming applications-offers high accuracy at very low computational costs. PMID:20975120
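    The error-driven segmentation idea can be sketched with a degree-1 stand-in for SwiftSeg's orthogonal-polynomial updates: grow a window, fit a least-squares line, and cut a segment when the maximum fit residual exceeds a bound. The piecewise-linear test signal is invented, and this naive version refits from scratch rather than using SwiftSeg's fast update steps.

    ```python
    def line_fit_err(y):
        """Max absolute residual of a least-squares line fit to y over 0..n-1."""
        n = len(y)
        xs = list(range(n))
        mx, my = sum(xs) / n, sum(y) / n
        sxx = sum((x - mx) ** 2 for x in xs)
        sxy = sum((x - mx) * (yi - my) for x, yi in zip(xs, y))
        b = sxy / sxx if sxx else 0.0
        a = my - b * mx
        return max(abs(a + b * x - yi) for x, yi in zip(xs, y))

    def segment(y, tol=0.5):
        """Grow the window; start a new segment when the fit error exceeds tol."""
        cuts, start = [], 0
        for end in range(2, len(y) + 1):
            if line_fit_err(y[start:end]) > tol:
                cuts.append(end - 1)   # close the segment before this sample
                start = end - 1
        return cuts

    signal = [0.0] * 10 + [float(i) for i in range(10)]   # flat, then a ramp
    print(segment(signal))   # → [11]
    ```

    SwiftSeg gets the same kind of decision from incrementally updated orthogonal-polynomial coefficients, so its cost per sample does not grow with the window length.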

  19. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    Science.gov (United States)

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

  20. Least-squares migration of multisource data with a deblurring filter

    KAUST Repository

    Dai, Wei

    2011-09-01

    Least-squares migration (LSM) has been shown to be able to produce high-quality migration images, but its computational cost is considered to be too high for practical imaging. We have developed a multisource least-squares migration algorithm (MLSM) to increase the computational efficiency by using the blended sources processing technique. To expedite convergence, a multisource deblurring filter is used as a preconditioner to reduce the data residual. This MLSM algorithm is applicable with Kirchhoff migration, wave-equation migration, or reverse time migration, and the gain in computational efficiency depends on the choice of migration method. Numerical results with Kirchhoff LSM on the 2D SEG/EAGE salt model show that an accurate image is obtained by migrating a supergather of 320 phase-encoded shots. When the encoding functions are the same for every iteration, the input/output cost of MLSM is reduced by 320 times. Empirical results show that the crosstalk noise introduced by blended sources is more effectively reduced when the encoding functions are changed at every iteration. The analysis of signal-to-noise ratio (S/N) suggests that not too many iterations are needed to enhance the S/N to an acceptable level. Therefore, when implemented with wave-equation migration or reverse time migration methods, the MLSM algorithm can be more efficient than the conventional migration method. © 2011 Society of Exploration Geophysicists.

  1. Weighted Least Squares Algorithm for Single-observer Passive Coherent Location Using DOA and TDOA Measurements

    Directory of Open Access Journals (Sweden)

    Zhao Yongsheng

    2016-06-01

    Full Text Available In order to determine single-observer passive coherent locations using illuminators of opportunity, we propose a joint angle and Time Difference Of Arrival (TDOA) Weighted Least Squares (WLS) location method. First, we linearize the DOA and TDOA measurement equations. We establish the localization problem as a WLS optimization model by considering the errors in the location equations. Then, we iteratively solve the WLS optimization. Finally, we conduct a performance analysis of the proposed method. Simulation results show that, unlike the TDOA-only method, which needs at least three illuminators to locate a target, the joint DOA and TDOA method requires only one illuminator. It also has a higher localization accuracy than the TDOA-only method when using the same number of illuminators. The proposed method yields a lower mean square error than the least squares algorithm, which makes it possible to approach the Cramér-Rao lower bound at a relatively high TDOA noise level. Moreover, on the basis of the geometric dilution of precision, we conclude that the positions of the target and illuminators are also important factors affecting the localization accuracy.

  2. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    Directory of Open Access Journals (Sweden)

    Tian Wang

    2013-12-01

    Full Text Available The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, the online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of the training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samples to provide a basic normal model, then updates the model with the remaining data. In the sparse online scheme, the model complexity is controlled by a coherence criterion. The online LS-OC-SVM is adopted to handle the abnormal event detection problem. Each frame of the video is characterized by a covariance matrix descriptor encoding the motion information, and is then classified as a normal or an abnormal frame. Experiments are conducted on a two-dimensional synthetic distribution dataset and a benchmark video surveillance dataset to demonstrate the promising results of the proposed online LS-OC-SVM method.

  3. Improving precision of X-ray fluorescence analysis of lanthanide mixtures using partial least squares regression

    Science.gov (United States)

    Kirsanov, Dmitry; Panchuk, Vitaly; Goydenko, Alexander; Khaydukova, Maria; Semenov, Valentin; Legin, Andrey

    2015-11-01

    This study addresses the problem of simultaneous quantitative analysis of six lanthanides (Ce, Pr, Nd, Sm, Eu, Gd) in mixed solutions by two different X-ray fluorescence techniques: energy-dispersive (EDX) and total reflection (TXRF). The concentration of each lanthanide was varied in the range 10^-6 to 10^-3 mol/L, the low values being around the detection limit of the method. This resulted in XRF spectra with a very poor signal-to-noise ratio and overlapping bands in the case of EDX, while only the latter problem was observed for TXRF. It was shown that an ordinary least squares approach to numerical calibration fails to provide reasonable precision in the quantification of individual lanthanides. Partial least squares (PLS) regression was able to circumvent these spectral shortcomings and yielded adequate calibration models for both techniques, with RMSEP (root mean squared error of prediction) values around 10^-5 mol/L. It was demonstrated that the comparatively simple and inexpensive EDX method is capable of ensuring precision similar to that of the more sophisticated TXRF when the spectra are treated by PLS.
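PLS regression of the kind used here can be sketched with the classical NIPALS algorithm. Below is a minimal single-response PLS1 on synthetic, deliberately simple data (not the authors' spectra); production code would typically use a library implementation such as scikit-learn's PLSRegression:

```python
import numpy as np

def pls1(X, y, n_comp):
    """Minimal single-response PLS (NIPALS); returns centered-space coefficients."""
    Xk = X - X.mean(0)
    yk = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)            # weight vector
        t = Xk @ w                        # scores
        tt = t @ t
        p = Xk.T @ t / tt                 # X loadings
        q = (yk @ t) / tt                 # y loading
        Xk = Xk - np.outer(t, p)          # deflate X
        yk = yk - q * t                   # deflate y
        W.append(w); P.append(p); Q.append(q)
    W, P = np.array(W).T, np.array(P).T
    return W @ np.linalg.solve(P.T @ W, np.array(Q))

rng = np.random.default_rng(1)
X = rng.normal(size=(50, 6))
y = 3 * X[:, 0] - 2 * X[:, 1] + 0.01 * rng.normal(size=50)
B = pls1(X, y, n_comp=3)
pred = (X - X.mean(0)) @ B + y.mean()
print(np.corrcoef(pred, y)[0, 1])   # near 1
```

Because each latent component is built from covariance with y, PLS tolerates collinear and noisy predictors where ordinary least squares becomes unstable.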

  4. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equation based Bayesian approach, and a chemical master equation based technique with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameters and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch, leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences. PMID:26826353

  5. Library Design in Combinatorial Chemistry by Monte Carlo Methods

    OpenAIRE

    Falcioni, Marco; Deem, Michael W.

    2000-01-01

    Strategies for searching the space of variables in combinatorial chemistry experiments are presented, and a random energy model of combinatorial chemistry experiments is introduced. The search strategies, derived by analogy with the computer modeling technique of Monte Carlo, effectively search the variable space even in combinatorial chemistry experiments of modest size. Efficient implementations of the library design and redesign strategies are feasible with current experimental capabilities.
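The Monte Carlo search strategy described here can be illustrated with a simple Metropolis walk over a discrete space of library variables. The figure of merit below is invented purely for illustration; in a real combinatorial chemistry experiment it would come from measurement:

```python
import math
import random

# Toy "library": each point is an integer pair (a, b); the invented
# figure of merit peaks at (7, 3).
def merit(a, b):
    return -((a - 7)**2 + (b - 3)**2)

random.seed(42)
state = (0, 0)
T = 2.0                      # assumed Metropolis "temperature"
best = state
for _ in range(2000):
    a, b = state
    cand = (a + random.choice([-1, 0, 1]), b + random.choice([-1, 0, 1]))
    d = merit(*cand) - merit(*state)
    if d >= 0 or random.random() < math.exp(d / T):   # Metropolis acceptance
        state = cand
    if merit(*state) > merit(*best):
        best = state
print(best)   # best library point found
```

Redesign rounds in the paper's sense correspond to proposing new candidate points informed by the current accepted state rather than sampling the space uniformly.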

  6. Radio astronomical image formation using constrained least squares and Krylov subspaces

    Science.gov (United States)

    Mouri Sardarabadi, Ahmad; Leshem, Amir; van der Veen, Alle-Jan

    2016-04-01

    Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources

  7. Genetic and least squares algorithms for estimating spectral EIS parameters of prostatic tissues

    International Nuclear Information System (INIS)

    We employed electrical impedance spectroscopy (EIS) to evaluate the electrical properties of prostatic tissues. We collected freshly excised prostates from 23 men immediately following radical prostatectomy. The prostates were sectioned into 3 mm slices and electrical property measurements of complex resistivity were recorded from each of the slices using an impedance probe over the frequency range of 100 Hz to 100 kHz. The area probed was marked so that following tissue fixation and slide preparation, histological assessment could be correlated directly with the recorded EIS spectra. Prostate cancer (CaP), benign prostatic hyperplasia (BPH), non-hyperplastic glandular tissue and stroma were the primary prostatic tissue types probed. Genetic and least squares parameter estimation algorithms were implemented for fitting a Cole-type resistivity model to the measured data. The four multi-frequency-based spectral parameters defining the recorded spectrum (ρ∞, Δρ, fc and α) were determined using these algorithms and statistically analyzed with respect to the tissue type. Both algorithms fit the measured data well, with the least squares algorithm having a better average goodness of fit (95.2 mΩ m versus 109.8 mΩ m) and a faster execution time (80.9 ms versus 13 637 ms) than the genetic algorithm. The mean parameters, from all tissue samples, estimated using the genetic algorithm ranged from 4.44 to 5.55 Ω m, 2.42 to 7.14 Ω m, 3.26 to 6.07 kHz and 0.565 to 0.654 for ρ∞, Δρ, fc and α, respectively. These same parameters estimated using the least squares algorithm ranged from 4.58 to 5.79 Ω m, 2.18 to 6.98 Ω m, 2.97 to 5.06 kHz and 0.621 to 0.742 for ρ∞, Δρ, fc and α, respectively. The ranges of these parameters were similar to those reported in the literature. Further, significant differences (p c; this is especially important since current prostate cancer screening methods do not reliably differentiate between these two tissue types
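The Cole-type fit described here mixes two linear parameters (ρ∞, Δρ) with two nonlinear ones (fc, α). One simple least-squares strategy, sketched below on synthetic data, grid-searches the nonlinear pair and solves for the linear pair by complex least squares at each grid point; the paper's genetic and least-squares fitters are more sophisticated than this:

```python
import numpy as np

f = np.logspace(2, 5, 40)     # 100 Hz .. 100 kHz, as in the measurements

def cole(f, rho_inf, d_rho, fc, alpha):
    """Cole-type complex resistivity model."""
    return rho_inf + d_rho / (1 + (1j * f / fc)**alpha)

z = cole(f, 5.0, 4.0, 4e3, 0.65)        # synthetic "measured" spectrum

best = None
for fc in np.logspace(3, 4, 30):         # grid over the nonlinear pair
    for alpha in np.linspace(0.5, 0.8, 30):
        A = np.column_stack([np.ones_like(f, dtype=complex),
                             1.0 / (1 + (1j * f / fc)**alpha)])
        coef, *_ = np.linalg.lstsq(A, z, rcond=None)   # rho_inf, d_rho
        r = np.linalg.norm(A @ coef - z)
        if best is None or r < best[0]:
            best = (r, fc, alpha, coef.real)
print(best[1:])    # fc, alpha, (rho_inf, d_rho) near the true values
```

Separating the linear and nonlinear parameters this way keeps each grid evaluation a cheap two-unknown least-squares solve.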

  8. Recursive N-way partial least squares for brain-computer interface.

    Directory of Open Access Journals (Sweden)

    Andrey Eliseyev

    Full Text Available In this article, tensor-input/tensor-output blockwise Recursive N-way Partial Least Squares (RNPLS) regression is considered. It combines multi-way tensor decomposition with a consecutive calculation scheme and allows blockwise treatment of tensor data arrays with huge dimensions, as well as adaptive modeling of time-dependent processes with tensor variables. A numerical study of the algorithm is undertaken. The RNPLS algorithm demonstrates fast and stable convergence of the regression coefficients. Applied to brain-computer interface system calibration, the algorithm provides an efficient adjustment of the decoding model. Combining online adaptation with easy interpretation of results, the method can be effectively applied in a variety of multi-modal neural activity flow modeling tasks.

  9. Quantification of anaesthetic effects on atrial fibrillation rate by partial least-squares

    International Nuclear Information System (INIS)

    The mechanism underlying atrial fibrillation (AF) remains poorly understood; whether multiple wandering propagation wavelets drift through both atria or hierarchical models apply is not settled. Some pharmacological drugs, known as antiarrhythmics, modify the cardiac ionic currents supporting the fibrillation process within the atria and may alter the AF propagation dynamics, terminating the fibrillation process. Other medications, theoretically non-antiarrhythmic, may slightly affect the fibrillation process through undefined mechanisms. We evaluated whether the most commonly used anaesthetic agent, propofol, affects AF patterns. Partial least-squares (PLS) analysis was performed to reduce the considerable noise into the main latent variables and find the differences between groups. The final results showed excellent discrimination between groups, with slower atrial activity during the propofol infusion. (paper)

  10. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    International Nuclear Information System (INIS)

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues
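The first technique, treating a voxel as a mixture of basis materials, is a per-voxel linear least-squares problem over the energy bins. A toy sketch with invented attenuation coefficients (not calibrated values) for the four materials and five bins:

```python
import numpy as np

# rows: 5 energy bins; cols: HA, iodine, glandular, adipose
# (attenuation values invented for illustration)
A = np.array([
    [4.0, 9.0, 0.30, 0.25],
    [3.0, 7.5, 0.28, 0.22],
    [2.2, 5.0, 0.25, 0.18],
    [1.6, 3.5, 0.23, 0.16],
    [1.2, 2.5, 0.21, 0.14],
])
true_frac = np.array([0.10, 0.05, 0.55, 0.30])   # simulated voxel content
rng = np.random.default_rng(0)
b = A @ true_frac + 1e-4 * rng.normal(size=5)    # noisy bin measurements

frac, *_ = np.linalg.lstsq(A, b, rcond=None)
print(frac)        # close to true_frac
```

The abstract's finding that this approach struggles for similar materials shows up here as ill-conditioning: the glandular and adipose columns are nearly parallel, so realistic noise levels amplify into large fraction errors.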

  11. Integrated Combination of the Multi Hydrological Models by Applying the Least Square Method

    Directory of Open Access Journals (Sweden)

    Muhammad Tayyab

    2015-05-01

    Full Text Available Different hydrological models produce different outputs for a specific catchment, so combining the models in a suitable way is very important for improving the forecast. To address this issue, researchers have applied techniques ranging from simple inter-comparison of different hydrological models to extended combinations of them. The aim of this research is to find a suitable and applicable combination technique by applying the least squares method, to obtain more valuable flood forecasting results for the Jinshajiang River basin. The combination forecast was compared with the results of the three models individually, based on comparison of the simulation outputs, the Nash-Sutcliffe efficiency, and the correlation coefficient. The results showed that the performance of the combined system of three conceptual hydrological models, comprising the Xin'anjiang model, the Antecedent Precipitation Index (API) model and the Tank model, is much more reliable than their individual performance.
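Least-squares combination of model outputs amounts to regressing the observations on the individual forecasts. A sketch with synthetic flows (all numbers invented), scored by the Nash-Sutcliffe efficiency:

```python
import numpy as np

obs = np.array([120., 150., 200., 180., 160., 210., 190., 170.])  # observed flow
m1 = obs + np.array([ 10., -15.,  20., -10.,  15., -20.,  10.,  -5.])
m2 = obs + np.array([ -8.,  12., -15.,  10., -12.,  18.,  -9.,   6.])
m3 = obs + np.array([  5.,   5.,  -5.,   5.,  -5.,   5.,  -5.,   5.])

M = np.column_stack([m1, m2, m3])
w, *_ = np.linalg.lstsq(M, obs, rcond=None)    # least-squares weights
combined = M @ w

def nse(sim, obs):
    """Nash-Sutcliffe efficiency: 1 is a perfect fit."""
    return 1 - np.sum((obs - sim)**2) / np.sum((obs - obs.mean())**2)

print([round(nse(m, obs), 3) for m in (m1, m2, m3)], round(nse(combined, obs), 3))
```

On the training period the least-squares combination can never score worse than any single member, since each member is itself one admissible weighting.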

  12. Least squares approach for initial data recovery in dynamic data-driven applications simulations

    KAUST Repository

    Douglas, C.

    2010-12-01

    In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.
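The penalized update described here has a closed form: minimizing ||Ax - b||^2 + lam*||x - x0||^2 gives x = (A'A + lam*I)^-1 (A'b + lam*x0). A toy sketch with a random forward operator and an invented penalization weight:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.normal(size=(5, 8))     # forward map: 8 initial-data unknowns, 5 readings
x_true = rng.normal(size=8)
b = A @ x_true                  # measured data
x0 = x_true + 0.1 * rng.normal(size=8)   # prior guess of the initial data
lam = 0.5                       # penalization weight (assumed)

# closed-form minimizer of ||A x - b||^2 + lam * ||x - x0||^2
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(8), A.T @ b + lam * x0)
print(np.linalg.norm(x_hat - x_true), np.linalg.norm(x0 - x_true))
```

The penalty keeps the underdetermined problem well-posed: components of the initial data that the measurements do not see stay at the prior, while the observed components are corrected.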

  13. MODELING HOTEL ROOM OCCUPANCY RATES IN KENDARI WITH CONTINUOUS WAVELET TRANSFORMATION AND PARTIAL LEAST SQUARES

    Directory of Open Access Journals (Sweden)

    Margaretha Ohyver

    2014-12-01

    Full Text Available Multicollinearity and outliers are common problems when estimating a regression model. Multicollinearity occurs when there are high correlations among predictor variables, leading to difficulties in separating the effects of each independent variable on the response variable. If outliers are present in the data to be analyzed, the normality assumption of the regression is violated and the results of the analysis may be incorrect or misleading. Both of these problems occurred in the data on room occupancy rates of hotels in Kendari. The purpose of this study is to find a model for these data that is free of multicollinearity and outliers, and to determine the factors that affect hotel room occupancy rates in Kendari. The methods used are Continuous Wavelet Transformation and Partial Least Squares. The result of this research is a regression model that is free of multicollinearity and a pattern of data that resolves the presence of outliers.

  14. A multivariate partial least squares approach to joint association analysis for multiple correlated traits

    Institute of Scientific and Technical Information of China (English)

    Yang Xu; Wenming Hu; Zefeng Yang; Chenwu Xu

    2016-01-01

    Many complex traits are highly correlated rather than independent. By taking the correlation structure of multiple traits into account, joint association analyses can achieve both higher statistical power and more accurate estimation. To develop a statistical approach to joint association analysis that includes allele detection and genetic effect estimation, we combined multivariate partial least squares regression with variable selection strategies and selected the optimal model using the Bayesian Information Criterion (BIC). We then performed extensive simulations under varying heritabilities and sample sizes to compare the performance achieved using our method with those obtained by single-trait multilocus methods. Joint association analysis has measurable advantages over single-trait methods, as it exhibits superior gene detection power, especially for pleiotropic genes. Sample size, heritability, polymorphic information content (PIC), and magnitude of gene effects influence the statistical power, accuracy and precision of effect estimation by the joint association analysis.
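The model-selection step can be illustrated with BIC-guided subset selection for an ordinary least-squares fit, a simplified stand-in for the authors' PLS-plus-variable-selection pipeline; all data below are synthetic:

```python
import numpy as np
from itertools import combinations

rng = np.random.default_rng(7)
n, p = 80, 5
X = rng.normal(size=(n, p))
y = 2.0 * X[:, 0] - 1.5 * X[:, 3] + 0.3 * rng.normal(size=n)  # vars 0, 3 matter

def bic(subset):
    """BIC of the OLS fit using only the given predictor subset."""
    A = X[:, list(subset)]
    beta, *_ = np.linalg.lstsq(A, y, rcond=None)
    rss = np.sum((y - A @ beta)**2)
    return n * np.log(rss / n) + len(subset) * np.log(n)

subsets = [c for k in range(1, p + 1) for c in combinations(range(p), k)]
best = min(subsets, key=bic)
print(best)     # the true predictors 0 and 3 should be selected
```

The log(n) complexity penalty is what lets BIC reject spurious predictors that reduce the residual sum of squares only marginally.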

  15. Lameness detection challenges in automated milking systems addressed with partial least squares discriminant analysis

    DEFF Research Database (Denmark)

    Garcia, Emanuel; Klaas, Ilka Christine; Amigo Rubio, Jose Manuel;

    2014-01-01

    Lameness is prevalent in dairy herds. It causes decreased animal welfare and leads to higher production costs. This study explored data from an automatic milking system (AMS) to model on-farm gait scoring from a commercial farm. A total of 88 cows were gait scored once per week, for 2 5-wk periods. Eighty variables retrieved from the AMS were summarized week-wise and used to predict 2 defined classes: nonlame and clinically lame cows. Variables were represented with 2 transformations of the week-summarized variables, using 2-wk data blocks before gait scoring, totaling 320 variables (2 × 2 × 80). The reference gait scoring error was estimated in the first week of the study and was, on average, 15%. Two partial least squares discriminant analysis models were fitted to parity 1 and parity 2 groups, respectively, to assign the lameness class according to the predicted probability of being lame (score 3...

  16. A Robust PCT Method Based on the Complex Least Squares Adjustment Method

    Science.gov (United States)

    Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

    2013-07-01

    The Polarization Coherence Tomography (PCT) method performs well in deriving vegetation vertical structure. However, errors caused by temporal decorrelation, vegetation height, and ground phase always propagate into the data analysis and contaminate the results. To overcome this disadvantage, we exploit the Complex Least Squares Adjustment Method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By fusing different polarimetric InSAR data, we can use more observations to obtain more robust estimates of temporal decorrelation and vegetation height, and then introduce them into PCT to acquire a more accurate vegetation vertical structure. Finally, the new approach is validated on E-SAR data from Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the accuracy of the retrieved vegetation vertical structure.

  17. Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    CERN Document Server

    Zhu, Hao; Giannakis, Georgios B

    2010-01-01

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector as well as in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data, but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also all...
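Plain (unstructured, non-sparse) TLS, the baseline that the S-TLS schemes extend, has a classical closed-form solution via the SVD of the augmented matrix [A b]:

```python
import numpy as np

def tls(A, b):
    """Classical total least squares via the SVD of the stacked matrix [A b]."""
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                    # right singular vector of the smallest sigma
    return -v[:n] / v[n]

# errors-in-variables setup: both A and b are noisy observations
rng = np.random.default_rng(0)
x_true = np.array([1.0, -2.0])
A0 = rng.normal(size=(100, 2))
A = A0 + 0.01 * rng.normal(size=A0.shape)    # perturbed regression matrix
b = A0 @ x_true + 0.01 * rng.normal(size=100)
x_hat = tls(A, b)
print(x_hat)    # close to [1, -2]
```

The sparsity constraints discussed in the abstract break this simple SVD structure, which is why the paper develops dedicated near-optimum and suboptimum algorithms.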

  18. Use of correspondence analysis partial least squares on linear and unimodal data

    DEFF Research Database (Denmark)

    Frisvad, Jens C.; Bergsøe, Merete Norsker

    1996-01-01

    Correspondence analysis partial least squares (CA-PLS) has been compared with PLS concerning classification and prediction of unimodal growth temperature data, and in an example using infrared (IR) spectroscopy for predicting amounts of chemicals in mixtures. CA-PLS was very effective for ordinating the unimodal temperature data, and the results indicated that CA-PLS is effective in treating the arch effect, thus avoiding the detrending procedure often used on ecological data sets, at least when one basic underlying gradient is present. PLS and PCR gave poor results, as the ordinations had a horseshoe form that could only be seen in two-dimensional plots, and also gave less effective predictions. PLS was the best method in the linear case treated, with fewer components and better prediction than CA-PLS.

  19. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  20. Prediction of chaotic systems with multidimensional recurrent least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Sun Jian-Cheng; Zhou Ya-Tong; Luo Jian-Guo

    2006-01-01

    In this paper, we propose a multidimensional version of recurrent least squares support vector machines (MDRLS-SVM) to solve the problem of predicting chaotic systems. To acquire better prediction performance, the high-dimensional space, which provides more information on the system than the scalar time series, is first reconstructed using Takens's embedding theorem. Then the MDRLS-SVM, instead of the traditional RLS-SVM, is used in the high-dimensional space, and the prediction performance can be improved from the point of view of the reconstructed embedding phase space. In addition, the MDRLS-SVM algorithm is analysed in the presence of noise, and we find that the MDRLS-SVM has lower sensitivity to noise than the RLS-SVM.

  1. First-order system least squares for the pure traction problem in planar linear elasticity

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

    1996-12-31

    This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L^2 norms to define the FOSLS functional, is shown under certain H^2 regularity assumptions to admit optimal H^1-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H^{-1} norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L^2 norm and for displacement in an H^1 norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

  2. Partial Least Squares Regression Model to Predict Water Quality in Urban Water Distribution Systems

    Institute of Scientific and Technical Information of China (English)

    LUO Bijun; ZHAO Yuan; CHEN Kai; ZHAO Xinhua

    2009-01-01

    The water distribution system of a residential district in Tianjin is taken as an example to analyze changes in water quality. A partial least squares (PLS) regression model, in which turbidity and Fe are regarded as control objectives, is used to establish the statistical model. The experimental results indicate that the PLS regression model predicts water quality well compared with the monitored data. The percentages of absolute relative error (below 15%, 20%, 30%) are 44.4%, 66.7%, 100% (turbidity) and 33.3%, 44.4%, 77.8% (Fe) at the 4th sampling point, and 77.8%, 88.9%, 88.9% (turbidity) and 44.4%, 55.6%, 66.7% (Fe) at the 5th sampling point.

  3. First-Order System Least Squares and the Energetic Variational Approach for Two-Phase Flow

    CERN Document Server

    Adler, J H; Liu, C; Manteuffel, T; Zikatanov, L

    2010-01-01

    This paper develops a first-order system least-squares (FOSLS) formulation for equations of two-phase flow. The main goal is to show that this discretization, along with numerical techniques such as nested iteration, algebraic multigrid, and adaptive local refinement, can be used to solve these types of complex fluid flow problems. In addition, from an energetic variational approach, it can be shown that an important quantity to preserve in a given simulation is the energy law. We discuss the energy law and inherent structure for two-phase flow using the Allen-Cahn interface model and indicate how it is related to other complex fluid models, such as magnetohydrodynamics. Finally, we show that, using the FOSLS framework, one can still satisfy the appropriate energy law globally while using well-known numerical techniques.

  4. The Helmholtz equation least squares method for reconstructing and predicting acoustic radiation

    CERN Document Server

    Wu, Sean F

    2015-01-01

    This book gives a comprehensive introduction to the Helmholtz Equation Least Squares (HELS) method and its use in diagnosing noise and vibration problems. In contrast to the traditional NAH technologies, the HELS method does not seek an exact solution to the acoustic field produced by an arbitrarily shaped structure. Rather, it attempts to obtain the best approximation of an acoustic field through the expansion of certain basis functions. Therefore, it significantly simplifies the complexities of the reconstruction process, yet still enables one to acquire an understanding of the root causes of different noise and vibration problems that involve arbitrarily shaped surfaces in non-free space using far fewer measurement points than either Fourier acoustics or BEM based NAH. The examples given in this book illustrate that the HELS method may potentially become a practical and versatile tool for engineers to tackle a variety of complex noise and vibration issues in engineering applications.

  5. Comparison between the basic least squares and the Bayesian approach for elastic constants identification

    Energy Technology Data Exchange (ETDEWEB)

    Gogu, C; Le Riche, R; Molimard, J; Vautrin, A [Ecole des Mines de Saint Etienne, 158 cours Fauriel, 42023 Saint Etienne (France); Haftka, R; Sankar, B [University of Florida, PO Box 116250, Gainesville, FL, 32611 (United States)], E-mail: gogu@emse.fr

    2008-11-01

    The basic formulation of the least squares method, based on the L_2 norm of the misfit, is still widely used today for identifying elastic material properties from experimental data. An alternative statistical approach is the Bayesian method. We seek here situations with significant differences between the material properties found by the two methods. For a simple three-bar truss example we illustrate three such situations in which the Bayesian approach leads to more accurate results: different magnitudes of the measurements, different uncertainties in the measurements, and correlation among the measurements. When all three effects add up, the Bayesian approach can have a large advantage. We then compared the two methods for identification of elastic constants from plate vibration natural frequencies.
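The flavor of the comparison can be reproduced in a few lines: for a linear model with one parameter, plain least squares ignores differing measurement uncertainties, while a Bayesian (Gaussian-prior, noise-weighted) estimate uses them. All numbers below are invented, not the paper's truss data:

```python
import numpy as np

rng = np.random.default_rng(5)
g = np.array([1.0, 1.0, 50.0])       # sensitivities; third response far larger
sigma = np.array([0.1, 0.1, 5.0])    # and proportionally noisier
k_true = 2.0
y = g * k_true + sigma * rng.normal(size=3)   # noisy measurements

# plain least squares ignores the differing noise levels
k_ls = (g @ y) / (g @ g)

# Bayesian estimate: noise-weighted data plus prior k ~ N(mu0, tau0^2)
mu0, tau0 = 1.5, 1.0
w = g / sigma**2
k_bayes = (w @ y + mu0 / tau0**2) / (w @ g + 1 / tau0**2)
print(k_ls, k_bayes)
```

The plain LS estimate is dominated by the large-magnitude (and noisy) third measurement, which is exactly the "different magnitude of the measurements" situation the abstract highlights.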

  6. Window least squares method applied to statistical noise smoothing of positron annihilation data

    International Nuclear Information System (INIS)

    The paper deals with the off-line processing of experimental data obtained by the two-dimensional angular correlation of electron-positron annihilation radiation (2D-ACAR) technique on high-temperature superconductors. A piecewise continuous window least squares (WLS) method, devoted to the statistical noise smoothing of 2D-ACAR data under close control of the crystal reciprocal lattice periodicity, is derived. Reliability evaluation of the constant local weight WLS smoothing formula (CW-WLSF) shows that consistent processing of 2D-ACAR data by CW-WLSF is possible. CW-WLSF analysis of 2D-ACAR data collected on untwinned YBa2Cu3O7-δ single crystals yields a significantly improved signature of the Fermi surface ridge at second Umklapp processes and resolves, for the first time, the ridge signature at third Umklapp processes. (author). 24 refs, 9 figs

  7. Comparison of the least squares and the maximum likelihood estimators for gamma-spectrometry

    International Nuclear Information System (INIS)

    A comparison of the characteristics of the maximum likelihood (ML) and the least squares (LS) estimators of nuclide activities for low-intensity scintillation γ-spectra has been carried out by computer simulation. It has been found that some of the LS estimators give biased activity estimates, and that the bias grows with the resolution of the multichannel analyzer (the number of spectrum channels). Such bias leads to significant deterioration of the estimation accuracy for low-intensity spectra; consequently, the detection threshold for nuclides rises by a factor of 2-10 in comparison with the ML estimator. It has been shown that the ML estimator and a special LS estimator provide unbiased estimates of nuclide activities. Thus, these estimators are optimal for practical application to low-intensity spectrometry. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
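The bias effect described here is easy to reproduce in a simplified setting: for counts y_i ~ Poisson(a·s_i), the Poisson ML estimate of the activity a is unbiased, whereas least squares weighted by the observed counts (a common practice for counting data) is biased low at low intensities. A small simulation, with an invented single-nuclide spectrum shape:

```python
import numpy as np

rng = np.random.default_rng(0)
s = np.full(50, 0.1)        # known unit-activity spectrum shape (one nuclide)
a_true = 10.0               # true activity; mean count per channel is 1.0

ml, wls = [], []
for _ in range(2000):
    y = rng.poisson(a_true * s)
    ml.append(y.sum() / s.sum())             # Poisson maximum likelihood
    w = 1.0 / np.maximum(y, 1)               # naive weights from observed counts
    wls.append((w * s) @ y / ((w * s) @ s))  # weighted least squares
print(np.mean(ml), np.mean(wls))             # ML near 10, WLS biased low
```

The bias comes from estimating the weights from the noisy counts themselves; channels that fluctuate low receive too much weight, pulling the activity estimate down, and the effect worsens as counts per channel shrink.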

  8. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    Science.gov (United States)

    Samsudin, R.; Saad, P.; Shabri, A.

    2010-06-01

    This paper proposes a novel hybrid forecasting model, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM model performs the time series forecasting. In this study, the application of GLSSVM to monthly river flow forecasting for the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models, using long-term observations of monthly river flow discharge. The standard statistical measures, root mean square error (RMSE) and coefficient of correlation (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.

  9. A Hybridization of Enhanced Artificial Bee Colony-Least Squares Support Vector Machines for Price Forecasting

    Directory of Open Access Journals (Sweden)

    Yuhanis Yusof

    2012-01-01

    Problem statement: As the performance of Least Squares Support Vector Machines (LSSVM) relies heavily on the values of its regularization parameter, γ, and kernel parameter, σ2, manual tuning is clearly not an appropriate solution, since to some extent it amounts to a blind search. In addition, this technique is time consuming and unsystematic, which consequently affects the generalization performance of LSSVM. Approach: This study presents an enhanced Artificial Bee Colony (ABC) algorithm to automatically optimize the hyperparameters of interest. The enhancement involves modifications that provide better exploitation by the bees during searching and prevent premature convergence. The prediction itself is then accomplished by LSSVM. Results and Conclusion: Empirical results indicate that the proposed technique performs satisfactorily, producing better prediction accuracy than the standard ABC-LSSVM and a Back Propagation Neural Network.

  10. Novel passive localization algorithm based on double side matrix-restricted total least squares

    Institute of Scientific and Technical Information of China (English)

    Xu Zheng; Qu Changwen; Wang Changhai

    2013-01-01

    In order to solve the bearings-only passive localization problem in the presence of erroneous observer positions, a novel algorithm based on double side matrix-restricted total least squares (DSMRTLS) is proposed. First, the passive localization problem is transformed into the DSMRTLS problem by deriving a multiplicative structure for both the observation matrix and the observation vector. Second, the corresponding unconstrained optimization problem of the DSMRTLS problem is derived, which can be approximated as a generalized Rayleigh quotient minimization problem. The localization solution, which is globally optimal and asymptotically unbiased, can then be obtained by generalized eigenvalue decomposition. Simulation results verify the rationality of the approximation and the good performance of the proposed algorithm compared with several typical algorithms.
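
    The final step mentioned above, minimizing a generalized Rayleigh quotient by generalized eigenvalue decomposition, can be illustrated generically; here random symmetric positive-definite matrices stand in for the DSMRTLS matrices:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((4, 4))
A = M @ M.T + np.eye(4)          # symmetric positive-definite stand-in
N = rng.standard_normal((4, 4))
B = N @ N.T + np.eye(4)

# The minimizer of the generalized Rayleigh quotient x^T A x / x^T B x
# is the eigenvector of B^{-1} A belonging to the smallest eigenvalue.
w, V = np.linalg.eig(np.linalg.solve(B, A))
i = int(np.argmin(w.real))
x = V[:, i].real
q_min = (x @ A @ x) / (x @ B @ x)

# sanity check: the quotient at random directions never drops below q_min
qs = [(z @ A @ z) / (z @ B @ z) for z in rng.standard_normal((200, 4))]
```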

  11. Least Squares Temporal Difference Actor-Critic Methods with Applications to Robot Motion Control

    CERN Document Server

    Estanjini, Reza Moazzez; Lahijanian, Morteza; Wang, Jing; Belta, Calin A; Paschalidis, Ioannis Ch

    2011-01-01

    We consider the problem of finding a control policy for a Markov Decision Process (MDP) to maximize the probability of reaching some states while avoiding some other states. This problem is motivated by applications in robotics, where such problems naturally arise when probabilistic models of robot motion are required to satisfy temporal logic task specifications. We transform this problem into a Stochastic Shortest Path (SSP) problem and develop a new approximate dynamic programming algorithm to solve it. This algorithm is of the actor-critic type and uses a least-squares temporal difference learning method. It operates on sample paths of the system and optimizes the policy within a pre-specified class parameterized by a parsimonious set of parameters. We show its convergence to a policy corresponding to a stationary point in the parameters' space. Simulation results confirm the effectiveness of the proposed solution.

  12. Online Identification of Multivariable Discrete Time Delay Systems Using a Recursive Least Square Algorithm

    Directory of Open Access Journals (Sweden)

    Saïda Bedoui

    2013-01-01

    This paper addresses the problem of simultaneous identification of linear discrete-time multivariable systems with delays. This problem involves estimating both the time delays and the dynamic parameter matrices. We suggest a new formulation that places the time delays and the dynamic parameters in the same estimated vector and builds the corresponding observation vector. This formulation is then used to propose a new method for identifying the time delays and parameters of these systems using the least squares approach. Convergence conditions and statistical properties of the proposed method are also developed. Simulation results are presented to illustrate the performance of the proposed method, and an application of the developed approach to a compact disc player arm is suggested in order to validate the simulation results.
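
    The recursive least squares core used in such identification schemes can be sketched as follows; the two-tap FIR system and noise level here are invented for illustration, and the delay-augmented regressor of the paper is not reproduced:

```python
import numpy as np

def rls_step(theta, P, phi, y, lam=1.0):
    """One recursive least-squares update for the model y ≈ phi @ theta."""
    k = P @ phi / (lam + phi @ P @ phi)      # gain vector
    e = y - phi @ theta                      # a priori prediction error
    theta = theta + k * e
    P = (P - np.outer(k, phi) @ P) / lam     # covariance update (lam = forgetting factor)
    return theta, P

rng = np.random.default_rng(0)
true_theta = np.array([0.7, -0.3])
theta = np.zeros(2)
P = 1e3 * np.eye(2)                          # large initial covariance: no prior knowledge
u = rng.standard_normal(300)
for t in range(2, 300):
    phi = np.array([u[t - 1], u[t - 2]])     # regressor of delayed inputs
    y = phi @ true_theta + 0.01 * rng.standard_normal()
    theta, P = rls_step(theta, P, phi, y)
```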

  13. Least-Square Collaborative Beamforming Linear Array for Steering Capability in Green Wireless Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    NikNoordini NikAbdMalik; Mazlina Esa; Nurul Mu’azzah Abdul Latiff

    2016-01-01

    This paper presents a collaborative beamforming (CB) technique that organizes sensor node locations into a linear array for green wireless sensor network (WSN) applications. In this method, only selected clusters and active CB nodes are needed each time to perform CB in WSNs. The proposed least-square linear array (LSLA) selects nodes to perform as a linear antenna array (LAA), with performance similar to that of the conventional uniform linear array (ULA). The LSLA technique is also able to solve the positioning error problems that arise from random node deployment. The beampattern fluctuations due to the random positions of sensor nodes have been analyzed, and performance in terms of normalized power gain is given. Simulations demonstrate that the proposed technique performs comparably to the conventional ULA while exhibiting lower complexity.

  14. Research on mine noise sources analysis based on least squares wavelet transform

    Institute of Scientific and Technical Information of China (English)

    CHENG Gen-yin; YU Sheng-chen; CHEN Shao-jie; WEI Zhi-yong; ZHANG Xiao-chen

    2010-01-01

    In order to determine the characteristics of noise sources accurately, the noise distribution at different frequencies was determined by taking into account the differences in frequency and intensity between aerodynamic, mechanical, and electrical noise. A least squares wavelet was designed with high precision and particular effectiveness in strong-interference (multi-source noise) zones, making it applicable to the analysis of the strong noise produced in underground mines; the distribution of noise across frequency bands was obtained with good results. From the decomposition results, the characteristics of the noise sources can be determined more accurately, which lays a good foundation for focused and targeted noise control and provides a new, widely applicable method for testing and analyzing noise control.

  15. Uncertainty evaluation for ordinary least-square fitting with arbitrary order polynomial in joule balance method

    International Nuclear Information System (INIS)

    Ordinary least-squares fitting with polynomials is used in both the dynamic phase of the watt balance method and the weighting phase of the joule balance method, but little research has been conducted to evaluate the uncertainty of the fitted data in these electrical balance methods. In this paper, a matrix-calculation method for evaluating the uncertainty of polynomial fitting data is derived, and the properties of this method are studied by simulation. Based on this, two further methods are proposed. One is used to find the optimal fitting order for the watt or joule balance methods; its accuracy and the factors affecting it are examined with simulations. The other is used to evaluate the uncertainty of the integral of the fitted data for the joule balance, which is demonstrated with an experiment on the NIM-1 joule balance. (paper)
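
    The matrix calculation behind such an uncertainty evaluation is standard ordinary least squares algebra: with design matrix X and known noise level σ, the coefficient covariance is cov(β̂) = σ²(XᵀX)⁻¹, and the pointwise uncertainty of the fitted curve follows by propagation. A sketch with an invented quadratic example, not the balance data:

```python
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0.0, 1.0, 50)
sigma = 0.02                                    # assumed known measurement noise
y = 1.5 + 0.8 * t - 0.4 * t**2 + sigma * rng.standard_normal(t.size)

order = 2
X = np.vander(t, order + 1, increasing=True)    # design matrix [1, t, t^2]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
cov = sigma**2 * np.linalg.inv(X.T @ X)         # covariance of the fitted coefficients
u_fit = np.sqrt(np.einsum('ij,jk,ik->i', X, cov, X))   # 1-sigma band of the fitted curve
```

Averaging over many points makes the fitted curve's pointwise uncertainty smaller than the raw noise level, which is the effect exploited in the fitting phase of the balance methods.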

  16. NEW RESULTS ABOUT THE RELATIONSHIP BETWEEN OPTIMALLY WEIGHTED LEAST SQUARES ESTIMATE AND LINEAR MINIMUM VARIANCE ESTIMATE

    Institute of Scientific and Technical Information of China (English)

    Juan ZHAO; Yunmin ZHU

    2009-01-01

    The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors make a comprehensive discussion about the relationship between the two estimates. Firstly, the authors consider the classical linear model, in which the coefficient matrix of the linear model is deterministic, and the necessary and sufficient condition for equivalence of the two estimates is derived. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information about the parameter being estimated. Secondly, the authors consider the linear model with a random coefficient matrix, called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information about the parameter.

  17. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    Science.gov (United States)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H(exp 1) product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity with estimates that are uniform in the Lame constants.

  18. A Note on the Nonparametric Least-squares Test for Checking a Polynomial Relationship

    Institute of Scientific and Technical Information of China (English)

    Chang-lin Mei; Shu-yuan He; Yan-hua Wang

    2003-01-01

    Recently, Gijbels and Rousson [6] suggested a new approach, called the nonparametric least-squares test, to check polynomial regression relationships. Although this test procedure is not only simple but also powerful in most cases, there are several other parameters to be chosen in addition to the kernel and bandwidth. As shown in their paper, the choice of these parameters is crucial but sometimes intractable. We propose in this paper a new statistic which is based on the sample variance of the locally estimated pth derivative of the regression function at each design point. The resulting test is still simple but includes no extra parameters to be determined besides the kernel and bandwidth that are necessary for nonparametric smoothing techniques. Comparison by simulations demonstrates that our test performs as well as or even better than Gijbels and Rousson's approach. Furthermore, a real-life data set is analyzed by our method and the results obtained are satisfactory.

  19. A Constrained Least Squares Approach to Mobile Positioning: Algorithms and Optimality

    Directory of Open Access Journals (Sweden)

    Ma W-K

    2006-01-01

    The problem of locating a mobile terminal has received significant attention in the field of wireless communications. Time-of-arrival (TOA), received signal strength (RSS), time-difference-of-arrival (TDOA), and angle-of-arrival (AOA) are commonly used measurements for estimating the position of the mobile station. In this paper, we present a constrained weighted least squares (CWLS) mobile positioning approach that encompasses all of the above measurement cases. The advantages of CWLS include performance optimality and the capability of extension to hybrid measurement cases (e.g., mobile positioning using TDOA and AOA measurements jointly). Assuming zero-mean uncorrelated measurement errors, we show by mean and variance analysis that all the developed CWLS location estimators achieve zero bias and attain the Cramér-Rao lower bound approximately when the measurement error variances are small. The asymptotically optimal performance is also confirmed by simulation results.
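
    A minimal unweighted version of least squares TOA positioning (not the constrained weighted formulation of the paper) squares the range equations and subtracts a reference anchor, which cancels the quadratic term and leaves a linear system. Anchor layout and target are invented; ranges are noise-free:

```python
import numpy as np

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
x_true = np.array([3.0, 6.0])
d = np.linalg.norm(anchors - x_true, axis=1)        # TOA range measurements

# ||x - a_i||^2 = d_i^2; subtracting the first equation removes ||x||^2:
#   2 (a_i - a_0)^T x = d_0^2 - d_i^2 + ||a_i||^2 - ||a_0||^2
A = 2.0 * (anchors[1:] - anchors[0])
b = (d[0] ** 2 - d[1:] ** 2
     + (anchors[1:] ** 2).sum(axis=1) - (anchors[0] ** 2).sum())
x_hat, *_ = np.linalg.lstsq(A, b, rcond=None)
```

With noisy ranges the same system would be solved with weights reflecting the error variances, which is where the weighted and constrained refinements enter.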

  20. Least Squares Estimate of the Initial Phases in STFT based Speech Enhancement

    DEFF Research Database (Denmark)

    Nørholm, Sidsel Marie; Krawczyk-Becker, Martin; Gerkmann, Timo;

    2015-01-01

    In this paper, we consider single-channel speech enhancement in the short time Fourier transform (STFT) domain. We suggest to improve an STFT phase estimate by estimating the initial phases. The method is based on the harmonic model and a model for the phase evolution over time. The initial phases are estimated by setting up a least squares problem between the noisy phase and the model for phase evolution. Simulations on synthetic and speech signals show a decreased error on the phase when an estimate of the initial phase is included, compared to using the noisy phase as an initialisation. The error on the phase is decreased at input SNRs from -10 to 10 dB. Reconstructing the signal using the clean amplitude, the mean squared error is decreased and the PESQ score is increased.

  1. Least Squares Inference on Integrated Volatility and the Relationship between Efficient Prices and Noise

    DEFF Research Database (Denmark)

    Nolte, Ingmar; Voev, Valeri

    The expected value of sums of squared intraday returns (realized variance) gives rise to a least squares regression which adapts itself to the assumptions of the noise process and allows for a joint inference on integrated volatility (IV), noise moments and price-noise relations. In the iid noise...... increasing" type of dependence and analyze its ability to cope with the empirically observed price-noise dependence in quote data. In the empirical section of the paper we apply the LS methodology to estimate the integrated volatility as well as the noise properties of 25 liquid stocks both with midquote and...... transaction price data. We find that while iid noise is an oversimplification, its non-iid characteristics have a decidedly negligible effect on volatility estimation within our framework, for which we provide a sound theoretical reason. In terms of noise-price endogeneity, we are not able to find empirical...

  2. Modelling of chaotic systems based on modified weighted recurrent least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Sun Jian-Cheng; Zhang Tai-Yi; Liu Feng

    2004-01-01

    Positive Lyapunov exponents cause the errors in modelling of a chaotic time series to grow exponentially. In this paper, we propose a modified version of support vector machines (SVM) to deal with this problem. Based on recurrent least squares support vector machines (RLS-SVM), we introduce a weighted term into the cost function to compensate for the prediction errors resulting from the positive global Lyapunov exponents. To demonstrate the effectiveness of our algorithm, we use the power spectrum and dynamic invariants involving the Lyapunov exponents and the correlation dimension as criteria, and then apply our method to the Santa Fe competition time series. The simulation results show that the proposed method can capture the dynamics of chaotic time series effectively.

  3. A least-squares finite element method for 3D incompressible Navier-Stokes equations

    Science.gov (United States)

    Jiang, Bo-Nan; Lin, T. L.; Hou, Lin-Jun; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to three-dimensional steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system. An additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain the discretized equations, and the system of algebraic equations is solved using the Jacobi preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. The flow in half of a 3D cubic cavity is calculated at Re = 100, 400, and 1,000 with 50 x 52 x 25 trilinear elements. The Taylor-Gortler-like vortices are observed at Re = 1,000.

  4. The least-squares finite element method for low-mach-number compressible viscous flows

    Science.gov (United States)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the least-squares finite element method (LSFEM) for simulating compressible viscous flows at low Mach numbers, of which incompressible flow is the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods rely on staggered grids or preconditioning techniques, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be solved effectively. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.

  5. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    Science.gov (United States)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates the two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement of the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified on several benchmark test problems.

  6. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.;

    2002-01-01

    Two-dimensional gel electrophoresis (2-DE) produces large amounts of data, and extraction of relevant information from these data demands a cautious and time consuming process of spot pattern matching between gels. The classical approach of data analysis is to detect protein markers that appear or disappear depending on the experimental conditions. Such biomarkers are found by comparing the relative volumes of individual spots in the individual gels. Multivariate statistical analysis and modelling of 2-DE data for comparison and classification is an alternative approach utilising the combination of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that, individually or in combination with other proteins, vary ...

  7. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better on facial expression recognition tasks.
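
    The NNLS coding step solves min over c ≥ 0 of ‖Dc − y‖² for each test sample against a dictionary D of training features. A projected-gradient sketch, with random data standing in for the LBP features of the paper:

```python
import numpy as np

def nnls_pg(D, y, iters=5000, tol=1e-12):
    """Solve min ||D c - y||^2 subject to c >= 0 by projected gradient descent."""
    L = np.linalg.norm(D, 2) ** 2                # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(iters):
        g = D.T @ (D @ c - y)                    # gradient of the squared residual / 2
        c_new = np.maximum(c - g / L, 0.0)       # gradient step, then project onto c >= 0
        if np.max(np.abs(c_new - c)) < tol:
            c = c_new
            break
        c = c_new
    return c

rng = np.random.default_rng(0)
D = rng.random((20, 5))                          # stand-in dictionary of training features
c_true = np.array([0.0, 2.0, 0.0, 1.0, 0.0])     # sparse non-negative code
y = D @ c_true
c = nnls_pg(D, y)
```

In a classifier, the recovered coefficients c would then be compared per class to pick the class with the smallest reconstruction residual.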

  8. Underwater terrain positioning method based on least squares estimation for AUV

    Science.gov (United States)

    Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing

    2015-12-01

    To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for underwater terrain-matching positioning is established, using multi-beam data as the underwater terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed to address the defects of normal terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for pseudo-localizations appearing in flat areas of the terrain, effectively reducing the impact of pseudo-positioning points on matching accuracy and improving the positioning accuracy in flat terrain areas. Simulation experiments based on electronic charts and multi-beam sea trial data show that the drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirements of accurate underwater positioning.

  9. Least Squares Approach to the Alignment of the Generic High Precision Tracking System

    CERN Document Server

    Brückman de Renstrom, P

    2005-01-01

    A least squares method to solve a generic alignment problem of a high-granularity tracking system is presented. The formalism takes advantage of the assumption that the derived corrections are small and consequently uses the first-order linear expansion throughout. The algorithm consists of an analytical linear expansion allowing for multiple nested fits; e.g., imposing a common vertex for groups of particle tracks is of particular interest. We present a consistent and complete recipe to impose constraints on any set of either implicit or explicit parameters. The baseline solution to the alignment problem is equivalent to the one described in [1]. The latter was derived using purely algebraic methods to reduce the initial large system of linear equations arising from separate fits of tracks and alignment parameters. The method presented here benefits from a wider range of applications, including problems with an implicit vertex fit, physics constraints on track parameters, and use of external information to constrain the geo...

  10. Natural gradient-based recursive least-squares algorithm for adaptive blind source separation

    Institute of Scientific and Technical Information of China (English)

    ZHU Xiaolong; ZHANG Xianda; YE Jimin

    2004-01-01

    This paper focuses on the problem of adaptive blind source separation (BSS). First, a recursive least-squares (RLS) whitening algorithm is proposed. By combining it with a natural gradient-based RLS algorithm for nonlinear principal component analysis (PCA), and using reasonable approximations, a novel RLS algorithm which can achieve BSS without additional pre-whitening of the observed mixtures is obtained. Analyses of the equilibrium points show that both the RLS whitening algorithm and the natural gradient-based RLS algorithm for BSS have the desired convergence properties. It is also proved that the combined new RLS algorithm for BSS is equivariant and has the property of keeping the separating matrix from becoming singular. Finally, the effectiveness of the proposed algorithm is verified by extensive simulation results.

  11. Improvement of high-order least-squares integration method for stereo deflectometry.

    Science.gov (United States)

    Ren, Hongyu; Gao, Feng; Jiang, Xiangqian

    2015-12-01

    Stereo deflectometry is defined as measurement of the local slope of specular surfaces by using two CCD cameras as detectors and one LCD screen as a light source. For obtaining 3D topography, integrating the calculated slope data is needed. Currently, a high-order finite-difference-based least-squares integration (HFLI) method is used to improve the integration accuracy. However, this method cannot be easily implemented in circular domain or when gradient data are incomplete. This paper proposes a modified easy-implementation integration method based on HFLI (EI-HFLI), which can work in arbitrary domains, and can directly and conveniently handle incomplete gradient data. To carry out the proposed algorithm in a practical stereo deflectometry measurement, gradients are calculated in both CCD frames, and then are mixed together as original data to be meshed into rectangular grids format. Simulation and experiments show this modified method is feasible and can work efficiently. PMID:26836684
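
    The integration step described above, recovering a surface from measured slopes, is itself a least squares problem: stack finite-difference equations relating neighboring heights to the slope data and solve, pinning one height to fix the free constant. A dense first-order sketch for small grids (not the high-order HFLI scheme of the paper):

```python
import numpy as np

def integrate_gradients(dx, dy):
    """Least-squares surface reconstruction from first differences:
    dx[i, j] ≈ z[i, j+1] - z[i, j],  dy[i, j] ≈ z[i+1, j] - z[i, j].
    Dense solve, intended only for small grids."""
    n, m = dx.shape[0], dx.shape[1] + 1
    rows, b = [], []
    def unit(i, j):
        v = np.zeros(n * m)
        v[i * m + j] = 1.0
        return v
    for i in range(n):
        for j in range(m - 1):                          # x-difference equations
            rows.append(unit(i, j + 1) - unit(i, j)); b.append(dx[i, j])
    for i in range(n - 1):
        for j in range(m):                              # y-difference equations
            rows.append(unit(i + 1, j) - unit(i, j)); b.append(dy[i, j])
    rows.append(unit(0, 0)); b.append(0.0)              # pin z[0, 0] = 0 (gauge freedom)
    z, *_ = np.linalg.lstsq(np.array(rows), np.array(b), rcond=None)
    return z.reshape(n, m)

# consistency check on a paraboloid sampled on a 6 x 6 grid
x = np.arange(6.0)
X, Y = np.meshgrid(x, x, indexing='ij')
Z = X**2 + Y**2
Zr = integrate_gradients(Z[:, 1:] - Z[:, :-1], Z[1:, :] - Z[:-1, :])
```

For realistic grids the same normal equations are solved with sparse matrices; the high-order and incomplete-data handling of the paper modify which equations are stacked, not the least squares machinery.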

  12. Rapid and accurate determination of tissue optical properties using least-squares support vector machines.

    Science.gov (United States)

    Barman, Ishan; Dingari, Narahara Chari; Rajaram, Narasimhan; Tunnell, James W; Dasari, Ramachandra R; Feld, Michael S

    2011-01-01

    Diffuse reflectance spectroscopy (DRS) has been extensively applied for the characterization of biological tissue, especially for dysplasia and cancer detection, by determination of the tissue optical properties. A major challenge in performing routine clinical diagnosis lies in the extraction of the relevant parameters, especially at high absorption levels typically observed in cancerous tissue. Here, we present a new least-squares support vector machine (LS-SVM) based regression algorithm for rapid and accurate determination of the absorption and scattering properties. Using physical tissue models, we demonstrate that the proposed method can be implemented more than two orders of magnitude faster than the state-of-the-art approaches while providing better prediction accuracy. Our results show that the proposed regression method has great potential for clinical applications including in tissue scanners for cancer margin assessment, where rapid quantification of optical properties is critical to the performance. PMID:21412464

  13. A Bayesian least squares support vector machines based framework for fault diagnosis and failure prognosis

    Science.gov (United States)

    Khawaja, Taimoor Saleem

    A high-belief, low-overhead Prognostics and Health Management (PHM) system is desired for online real-time monitoring of complex non-linear systems operating in a complex (possibly non-Gaussian) noise environment. This thesis presents a Bayesian Least Squares Support Vector Machine (LS-SVM) based framework for fault diagnosis and failure prognosis in nonlinear, non-Gaussian systems. The methodology assumes the availability of real-time process measurements, the definition of a set of fault indicators, and the existence of empirical knowledge (or historical data) to characterize both nominal and abnormal operating conditions. An efficient yet powerful Least Squares Support Vector Machine (LS-SVM) algorithm, set within a Bayesian inference framework, not only allows for the development of real-time algorithms for diagnosis and prognosis but also provides a solid theoretical framework to address key concepts related to classification for diagnosis and regression modeling for prognosis. SVMs are founded on the principle of Structural Risk Minimization (SRM), which tends to find a good trade-off between low empirical risk and small capacity. The key features of SVMs are the use of non-linear kernels, the absence of local minima, the sparseness of the solution, and the capacity control obtained by optimizing the margin. The Bayesian inference framework linked with LS-SVMs allows a probabilistic interpretation of the results for diagnosis and prognosis, and additional levels of inference provide the much coveted features of adaptability and tunability of the modeling parameters. The two main modules considered in this research are fault diagnosis and failure prognosis. With the goal of designing an efficient and reliable fault diagnosis scheme, a novel anomaly detector is suggested based on the LS-SVM machines. The proposed scheme uses only baseline data to construct a one-class LS-SVM machine which, when presented with online data, is able to distinguish between normal behavior

  14. STUDY ON PARAMETERS FOR TOPOLOGICAL VARIABLES FIELD INTERPOLATED BY MOVING LEAST SQUARE APPROXIMATION

    Institute of Scientific and Technical Information of China (English)

    Kal Long; Zhengxing Zuo; Rehan H.Zuberi

    2009-01-01

    This paper presents a new approach to the structural topology optimization of continuum structures. Material-point independent variables are introduced to indicate the existence or nonexistence of material points and their vicinity, instead of the elements or nodes used in popular topology optimization methods. The topological variables field is constructed by moving least square approximation, which is used as a shape function in the meshless method. Combined with finite element analyses, not only are checkerboard patterns and mesh-dependence phenomena overcome by this continuous and smooth topological variables field, but the locations and numbers of topological variables can also be arbitrary. The effects of parameters, including the number of quadrature points, the scaling parameter, and the weight function, on the optimum topological configurations are discussed. Two classic topology optimization problems are solved successfully by the proposed method. The method is found to be robust, and no numerical instabilities are found with proper parameters.
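
    A moving least square approximation, as used here for the topological variables field, fits a low-order polynomial locally under a weight function centered at each evaluation point; with a linear basis it reproduces linear fields exactly (the consistency property). A minimal one-dimensional sketch with an invented Gaussian weight:

```python
import numpy as np

def mls(xq, xi, yi, h=0.3):
    """1D moving least squares with a linear basis and Gaussian weights."""
    out = []
    for x in np.atleast_1d(xq):
        w = np.exp(-((xi - x) / h) ** 2)              # weight concentrated around x
        B = np.vstack([np.ones_like(xi), xi - x]).T   # shifted linear basis [1, xi - x]
        A = B.T @ (w[:, None] * B)                    # weighted normal-equation matrix
        coeff = np.linalg.solve(A, B.T @ (w * yi))
        out.append(coeff[0])                          # the basis at x itself is (1, 0)
    return np.array(out)

xi = np.linspace(0.0, 1.0, 30)
yi = 2 * xi + 1                                       # linear data: MLS must reproduce it
yq = mls(np.array([0.25, 0.5, 0.75]), xi, yi)
```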

  15. Least-Squares Solution of Inverse Problem for Hermitian Anti-reflexive Matrices and Its Approximation

    Institute of Scientific and Technical Information of China (English)

    Zhen Yun PENG; Yuan Bei DENG; Jin Wang LIU

    2006-01-01

    In this paper, we first consider the least-squares solution of the matrix inverse problem as follows: find a Hermitian anti-reflexive matrix A, corresponding to a given generalized reflection matrix J, such that for given matrices X and B the norm ‖AX − B‖ is minimized. Existence theorems are obtained, and a general representation of such a matrix is presented. We denote the set of such matrices by SE. Then the matrix nearness problem for the matrix inverse problem is discussed, that is: given an arbitrary A*, find a matrix A ∈ SE which is nearest to A* in the Frobenius norm. We show that the nearest matrix is unique and provide an expression for it.

  16. A Collocation Method by Moving Least Squares Applicable to European Option Pricing

    Directory of Open Access Journals (Sweden)

    M. Amirfakhrian

    2016-05-01

    Full Text Available This paper concerns the numerical pricing of European options. To assess the numerical prices of European options, a scheme independent of any kind of mesh and instead powered by moving least squares (MLS) estimation is constructed. In practical terms, the time variable is first discretized, and then an MLS-powered method is applied for the spatial approximation. As these steps do not rely on a mesh, the scheme can firmly be categorized as a meshless method. At the end of the paper, various experiments are offered to demonstrate the efficiency and power of the introduced approach.

  17. Analysis of Shift and Deformation of Planar Surfaces Using the Least Squares Plane

    Directory of Open Access Journals (Sweden)

    Hrvoje Matijević

    2006-12-01

    Full Text Available Modern measurement methods based on advanced reflectorless distance measurement have paved the way for easier detection and analysis of shift and deformation. A large quantity of collected data points will often require a mathematical model of the surface that best fits them. Although this can be a complex task, in the case of planar surfaces it is easily done, enabling further processing and analysis of the measurement results. The paper describes fitting a plane to a set of collected points by least-squares distances, with outliers previously excluded via the RANSAC algorithm. Based on that, a method for analyzing the deformation and shift of planar surfaces is also described.
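
    The pipeline the abstract describes, RANSAC outlier rejection followed by a least-squares plane fit, can be sketched as below. This is a generic sketch: the threshold, iteration count and synthetic data are illustrative, not the paper's values, and the plane fit minimizes orthogonal distances via the SVD.

```python
import numpy as np

def fit_plane_lsq(pts):
    """Best-fit plane through pts (N x 3): centroid plus the direction
    of least variance (smallest right singular vector) as the normal."""
    c = pts.mean(axis=0)
    _, _, vt = np.linalg.svd(pts - c)
    return c, vt[-1]

def ransac_plane(pts, n_iter=200, tol=0.05, seed=0):
    """Crude RANSAC: sample 3 points, keep the candidate plane with the
    most inliers, then refit by least squares on those inliers."""
    rng = np.random.default_rng(seed)
    best = None
    for _ in range(n_iter):
        p0, p1, p2 = pts[rng.choice(len(pts), 3, replace=False)]
        nvec = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(nvec)
        if norm < 1e-12:
            continue                      # degenerate (collinear) sample
        dist = np.abs((pts - p0) @ (nvec / norm))
        inliers = dist < tol
        if best is None or inliers.sum() > best.sum():
            best = inliers
    return fit_plane_lsq(pts[best])

# synthetic scan: noisy points near z = 0 plus a few gross outliers
rng = np.random.default_rng(1)
plane_pts = np.c_[rng.uniform(-1, 1, (100, 2)), 0.01 * rng.standard_normal(100)]
outliers = rng.uniform(2, 3, (10, 3))
c, n = ransac_plane(np.vstack([plane_pts, outliers]))
print(n)  # normal close to (0, 0, ±1)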

  18. Wavelet Neural Networks for Adaptive Equalization by Using the Orthogonal Least Square Algorithm

    Institute of Scientific and Technical Information of China (English)

    JIANG Minghu(江铭虎); DENG Beixing(邓北星); Georges Gielen

    2004-01-01

    Equalizers are widely used in digital communication systems for corrupted or time-varying channels. To overcome the performance decline in noisy and nonlinear channels, many kinds of neural network models have been used for nonlinear equalization. In this paper, we propose a new nonlinear channel equalizer structured on wavelet neural networks. The orthogonal least squares algorithm is applied to update the weighting matrix of the wavelet networks and form a more compact wavelet basis unit, thus obtaining good equalization performance. The experimental results show that the proposed wavelet-network equalizer can significantly improve neural modeling accuracy and outperforms conventional neural-network equalization in signal-to-noise ratio and channel nonlinearity.

  19. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    Science.gov (United States)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling, so more weight should be given to the classes with more importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  20. On the Semivalues and the Least Square Values Average Per Capita Formulas and Relationships

    Institute of Scientific and Technical Information of China (English)

    Irinel DRAGAN

    2006-01-01

    In this paper, it is shown that both the Semivalues and the Least Square Values of cooperative transferable utility games can be expressed in terms of n^2 averages of values of the characteristic function of the game, by means of what we call the average per capita formulas. Moreover, as in the case of the Shapley value considered earlier, the terms of the formulas can be computed in parallel, and an algorithm is derived. From these results, it follows that each of the two values mentioned above is the Shapley value of a game easily obtained from the given game, and this fact offers another computational opportunity as soon as the computation of the Shapley value can be done efficiently.

  1. The Recovery of Weak Impulsive Signals Based on Stochastic Resonance and Moving Least Squares Fitting

    Directory of Open Access Journals (Sweden)

    Kuosheng Jiang

    2014-07-01

    Full Text Available In this paper a stochastic resonance (SR)-based method for recovering weak impulsive signals is developed for the quantitative diagnosis of faults in rotating machinery. It was shown in theory that weak impulsive signals follow the mechanism of SR, but that SR produces a nonlinear distortion of the impulsive signal's shape. To eliminate this distortion, a moving least squares fitting method is introduced to reconstruct the signal from the output of the SR process. The proposed method is verified by comparing its detection results with those of a morphological filter on both simulated and experimental signals. The experimental results show that the background noise is suppressed effectively and the key features of the impulsive signals are reconstructed with a good degree of accuracy, leading to an accurate diagnosis of faults in roller bearings in a run-to-failure test.
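
    The moving least-squares reconstruction step can be sketched as a local weighted polynomial fit: at each sample, a low-degree polynomial is fitted by weighted least squares with a compact weight function and evaluated at that sample. This generic MLS smoother (bandwidth, degree and test signal are illustrative) is not the paper's SR pipeline, only the fitting idea:

```python
import numpy as np

def mls_smooth(t, y, h=0.05, degree=2):
    """Moving least-squares reconstruction of y(t): at each t_i, fit a
    local polynomial with Gaussian weights of bandwidth h and evaluate
    it at t_i (the constant term of the shifted basis)."""
    out = np.empty_like(y)
    for i, ti in enumerate(t):
        sw = np.sqrt(np.exp(-((t - ti) / h) ** 2))   # sqrt of weights
        V = np.vander(t - ti, degree + 1)            # [.., x^2, x, 1]
        coef, *_ = np.linalg.lstsq(sw[:, None] * V, sw * y, rcond=None)
        out[i] = coef[-1]                            # value at t = ti
    return out

t = np.linspace(0, 1, 400)
clean = np.exp(-200 * (t - 0.5) ** 2)                # impulse-like feature
rng = np.random.default_rng(2)
noisy = clean + 0.1 * rng.standard_normal(t.size)
rec = mls_smooth(t, noisy, h=0.02)
print(np.mean((rec - clean) ** 2), np.mean((noisy - clean) ** 2))
```

Because the local model is a quadratic, the smoother preserves the curvature of the impulse while averaging out the noise, which is the property the paper relies on to undo the SR distortion without flattening the peak.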

  2. Chaotic time series multi-step direct prediction with partial least squares regression

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    Considering chaotic time series multi-step prediction, a multi-step direct prediction model based on partial least squares (PLS) is proposed in this article, where PLS, a method for predicting a set of dependent variables from a large set of predictors, is used to model the dynamic evolution between points in the reconstructed state space and the corresponding future points. The model eliminates the error accumulation of the common single-step local model algorithm and avoids the severe multicollinearity that arises in the reconstructed state space as the embedding dimension increases. Simulation predictions on the Mackey-Glass chaotic time series attain satisfactory prediction accuracy and verify the model's efficiency. In the experiments, the number of components extracted in PLS is set by a cross-validation procedure.

  3. A Selective Moving Window Partial Least Squares Method and Its Application in Process Modeling

    Institute of Scientific and Technical Information of China (English)

    Ouguan Xu; Yongfeng Fu; Hongye Su; Lijuan Li

    2014-01-01

    A selective moving window partial least squares (SMW-PLS) soft sensor is proposed in this paper and applied to a hydro-isomerization process for on-line estimation of para-xylene (PX) content. To address the high frequency of model updating in previous recursive PLS methods, a selective updating strategy was developed: model adaptation is activated only when the prediction error exceeds a preset threshold; otherwise the model is kept unchanged. As a result, the frequency of model updating is reduced greatly, while the change in prediction accuracy is minor. The performance of the proposed model compares favorably with that of other PLS-based models, and the compromise between prediction accuracy and real-time performance can be tuned by regulating the threshold. Guidelines for determining the model parameters are illustrated. In summary, the proposed SMW-PLS method can deal effectively with slowly time-varying processes.

  4. Nonlinear Spline Kernel-based Partial Least Squares Regression Method and Its Application

    Institute of Scientific and Technical Information of China (English)

    JIA Jin-ming; WEN Xiang-jun

    2008-01-01

    Inspired by Wold's traditional nonlinear PLS algorithm, which comprises the NIPALS approach and a spline inner-function model, a novel nonlinear partial least squares algorithm based on a spline kernel (named SK-PLS) is proposed for nonlinear modeling in the presence of multicollinearity. Based on the inner-product kernel spanned by spline basis functions with an infinite number of nodes, the method first maps the input data into a high-dimensional feature space, then calculates a linear PLS model with a reformed NIPALS procedure in the feature space, and in consequence gives a unified framework for traditional "kernel" PLS algorithms. The linear PLS in the feature space corresponds to a nonlinear PLS in the original input (primal) space. The good approximating properties of the spline kernel enhance the generalization ability of the novel model, and two numerical experiments illustrate the feasibility of the proposed method.

  5. Influence and interaction indexes for pseudo-Boolean functions: a unified least squares approach

    CERN Document Server

    Marichal, Jean-Luc

    2012-01-01

    The Banzhaf power and interaction indexes for a pseudo-Boolean function (or a cooperative game) appear naturally as leading coefficients in the standard least squares approximation of the function by a pseudo-Boolean function of a specified degree. We first observe that this property still holds if we consider approximations by pseudo-Boolean functions depending only on specified variables. We then show that the Banzhaf influence index can also be obtained from the latter approximation problem. Considering certain weighted versions of this approximation problem, we introduce a class of weighted Banzhaf influence indexes, analyze their most important properties, and point out similarities between the weighted Banzhaf influence index and the corresponding weighted Banzhaf interaction index.

  6. Partial least-squares: Theoretical issues and engineering applications in signal processing

    Directory of Open Access Journals (Sweden)

    Fredric M. Ham

    1996-01-01

    Full Text Available In this paper we present partial least-squares (PLS, which is a statistical modeling method used extensively in analytical chemistry for quantitatively analyzing spectroscopic data. Comparisons are made between classical least-squares (CLS and PLS to show how PLS can be used in certain engineering signal processing applications. Moreover, it is shown that in certain situations when there exists a linear relationship between the independent and dependent variables, PLS can yield better predictive performance than CLS when it is not desirable to use all of the empirical data to develop a calibration model used for prediction. Specifically, because PLS is a factor analysis method, optimal selection of the number of PLS factors can result in a calibration model whose predictive performance is considerably better than CLS. That is, factor analysis (rank reduction allows only those features of the data that are associated with information of interest to be retained for development of the calibration model, and the remaining data associated with noise are discarded. It is shown that PLS can yield physical insight into the system from which empirical data has been collected. Also, when there exists a non-linear cause-and-effect relationship between the independent and dependent variables, the PLS calibration model can yield prediction errors that are much less than those for CLS. Three PLS application examples are given and the results are compared to CLS. In one example, a method is presented using PLS for parametric system identification. Using PLS for system identification allows simultaneous estimation of the system dimension and the system parameter vector associated with a minimal realization of the system.
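
    The factor-extraction idea discussed above can be made concrete with a minimal PLS1 (single response) fit. This NIPALS-style sketch is generic, assumes a linear relationship, and is not tied to the paper's spectroscopic data; function names are illustrative:

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """Basic PLS1: extract n_comp score directions that maximize
    covariance with y, deflating X and y at each step, then fold the
    loadings back into one regression vector B for centered data."""
    xm, ym = X.mean(axis=0), y.mean()
    Xk, yk = X - xm, y - ym
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = Xk.T @ yk
        w /= np.linalg.norm(w)
        t = Xk @ w
        tt = t @ t
        p = Xk.T @ t / tt
        qk = yk @ t / tt
        Xk = Xk - np.outer(t, p)      # deflate X
        yk = yk - qk * t              # deflate y
        W.append(w); P.append(p); q.append(qk)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    B = W @ np.linalg.solve(P.T @ W, q)
    return B, xm, ym

def pls1_predict(X, B, xm, ym):
    return (X - xm) @ B + ym

rng = np.random.default_rng(3)
X = rng.standard_normal((60, 10))
beta = np.zeros(10); beta[:3] = [1.0, -2.0, 0.5]   # low-rank signal
y = X @ beta + 0.01 * rng.standard_normal(60)
B, xm, ym = pls1_fit(X, y, n_comp=3)
pred = pls1_predict(X, B, xm, ym)
r2 = 1 - ((pred - y) ** 2).sum() / ((y - y.mean()) ** 2).sum()
print(r2)
```

Choosing `n_comp` below the full rank is exactly the rank reduction the abstract credits for discarding noise directions; with `n_comp` equal to the rank of X, the fit coincides with ordinary least squares.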

  7. An integrated approach to the simultaneous selection of variables, mathematical pre-processing and calibration samples in partial least-squares multivariate calibration.

    Science.gov (United States)

    Allegrini, Franco; Olivieri, Alejandro C

    2013-10-15

    A new optimization strategy for multivariate partial-least-squares (PLS) regression analysis is described. It was achieved by integrating three efficient strategies to improve PLS calibration models: (1) variable selection based on ant colony optimization, (2) mathematical pre-processing selection by a genetic algorithm, and (3) sample selection through a distance-based procedure. Outlier detection has also been included as part of the model optimization. All the above procedures have been combined into a single algorithm, whose aim is to find the best PLS calibration model within a Monte Carlo-type philosophy. Simulated and experimental examples are employed to illustrate the success of the proposed approach. PMID:24054659

  8. On the roles of minimization and linearization in least-squares finite element models of nonlinear boundary-value problems

    Science.gov (United States)

    Payette, G. S.; Reddy, J. N.

    2011-05-01

    In this paper we examine the roles of minimization and linearization in the least-squares finite element formulations of nonlinear boundary-value problems. The least-squares principle is based upon the minimization of the least-squares functional constructed via the sum of the squares of appropriate norms of the residuals of the partial differential equations (in the present case we consider L2 norms). Since the least-squares method is independent of the discretization procedure and the solution scheme, the least-squares principle suggests that minimization should be performed prior to linearization, where linearization is employed in the context of either the Picard or Newton iterative solution procedures. However, in the least-squares finite element analysis of nonlinear boundary-value problems, it has become common practice in the literature to exchange the sequence of application of the minimization and linearization operations. The main purpose of this study is to provide a detailed assessment of how the finite element solution is affected when the order of application of these operators is interchanged. The assessment is performed mathematically, through an examination of the variational setting for the least-squares formulation of an abstract nonlinear boundary-value problem, and also computationally, through the numerical simulation of the least-squares finite element solutions of both a nonlinear form of the Poisson equation and the incompressible Navier-Stokes equations. The assessment suggests that although the least-squares principle indicates that minimization should be performed prior to linearization, such an approach is often impractical and not necessary.

  9. [Main Components of Xinjiang Lavender Essential Oil Determined by Partial Least Squares and Near Infrared Spectroscopy].

    Science.gov (United States)

    Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun

    2015-09-01

    This work was undertaken to establish a quantitative analysis model for the rapid determination of the linalool and linalyl acetate content of Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured by near infrared (NIR) absorption spectroscopy. After analyzing the NIR absorption peaks of all samples, the spectral interval of 7100~4500 cm(-1) was found to carry abundant chemical information with relatively low interference from random noise, so the PLS models were constructed on this interval for further analysis. Eight abnormal samples were eliminated. Through a clustering method, the remaining 157 lavender essential oil samples were divided into 105 calibration samples and 52 validation samples. Gas chromatography-mass spectrometry (GC-MS) was used to determine the content of linalool and linalyl acetate in lavender essential oil, and the data matrix was established from the GC-MS reference values of the two compounds combined with the original NIR data. To optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and their filtering effects were compared; after analyzing the quantitative model results for linalool and linalyl acetate, orthogonal signal correction (OSC) gave root mean square errors of prediction (RMSEP) of 0.226 and 0.558, respectively, and was thus the optimal pretreatment method. In addition, the forward interval partial least squares (FiPLS) method was used to exclude wavelength points that are unrelated to the determined composition or show nonlinear correlation; finally 8 spectral intervals comprising 160 wavelength points were retained as the dataset. Combining the OSC-FiPLS-optimized data with partial least squares (PLS) established a rapid quantitative analysis model for determining the content of linalool and linalyl acetate in Xinjiang lavender essential oil, numbers of hidden variables of two

  10. Separating iterative solution model of generalized nonlinear dynamic least squares for data processing in building of digital earth

    Institute of Scientific and Technical Information of China (English)

    TAO Hua-xue; GUO Jin-yun

    2003-01-01

    Data coming from different sources have different types and temporal states. Relations between one type of data and another, or between data and unknown parameters, are almost always nonlinear. Processing such data in building the digital earth with the classical least squares method, or with common nonlinear least squares methods, is neither accurate nor reliable. Therefore a generalized nonlinear dynamic least squares method was put forward for data processing in building the digital earth. A separating solution model and an iterative calculation method were used to solve the generalized nonlinear dynamic least squares problem. In effect, a complex problem can be separated and then solved by converting it into two sub-problems, each with a single variable, so the dimension of the unknown parameters is halved, which simplifies the original high-dimensional equations.

  11. HYDRA: a Java library for Markov Chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Gregory R. Warnes

    2002-03-01

    Full Text Available Hydra is an open-source, platform-neutral library for performing Markov Chain Monte Carlo. It implements the logic of standard MCMC samplers within a framework designed to be easy to use, extend, and integrate with other software tools. In this paper, we describe the problem that motivated our work, outline our goals for the Hydra project, and describe the current features of the Hydra library. We then provide a step-by-step example of using Hydra to simulate from a mixture model drawn from cancer genetics, first using a variable-at-a-time Metropolis sampler and then a Normal Kernel Coupler. We conclude with a discussion of future directions for Hydra.

  12. Kinetic microplate bioassays for relative potency of antibiotics improved by partial least squares (PLS) regression.

    Science.gov (United States)

    Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Almeida, Túlia de Souza Botelho; Lourenço, Felipe Rebello

    2016-05-01

    Microbiological assays are widely used to estimate the relative potencies of antibiotics in order to guarantee the efficacy, safety, and quality of drug products. Despite the advantages of turbidimetric bioassays compared to other methods, they have limitations concerning the linearity and range of the dose-response curve determination. Here, we propose using partial least squares (PLS) regression to overcome these limitations and to improve the prediction of the relative potencies of antibiotics. Kinetic-reading microplate turbidimetric bioassays for apramycin and vancomycin were performed using Escherichia coli (ATCC 8739) and Bacillus subtilis (ATCC 6633), respectively. Microbial growth was measured as absorbance up to 180 and 300 min for the apramycin and vancomycin turbidimetric bioassays, respectively. Conventional dose-response curves (absorbance or area under the microbial growth curve vs. log of antibiotic concentration) showed significant regression, but also significant deviations from linearity; thus, they could not be used for relative potency estimation. PLS regression allowed us to construct a predictive model for estimating the relative potencies of apramycin and vancomycin without over-fitting, and it improved the linear range of the turbidimetric bioassay. In addition, PLS regression provided predictions of relative potencies equivalent to those obtained from official agar diffusion methods. Therefore, we conclude that PLS regression may be used to estimate the relative potencies of antibiotics with significant advantages over conventional dose-response curve determination. PMID:26971814

  13. Retrieve the evaporation duct height by least-squares support vector machine algorithm

    Science.gov (United States)

    Douvenot, Remi; Fabbro, Vincent; Bourlier, Christophe; Saillard, Joseph; Fuchs, Hans-Hellmuth; Essen, Helmut; Foerster, Joerg

    2009-01-01

    The detection and tracking of naval targets, including low Radar Cross Section (RCS) objects like inflatable boats or sea skimming missiles requires a thorough knowledge of the propagation properties of the maritime boundary layer. Models are in existence, which allow a prediction of the propagation factor using the parabolic equation algorithm. As a necessary input, the refractive index has to be known. This index, however, is strongly influenced by the actual atmospheric conditions, characterized mainly by temperature, humidity and air pressure. An approach is initiated to retrieve the vertical profile of the refractive index from the propagation factor measured on an onboard target. The method is based on the LS-SVM (Least-Squares Support Vector Machines) theory. The inversion method is here used to determine refractive index from data measured during the VAMPIRA campaign (Validation Measurement for Propagation in the Infrared and RAdar) conducted as a multinational approach over a transmission path across the Baltic Sea. As a propagation factor has been measured on two reference reflectors mounted onboard a naval vessel at different heights, the inversion method can be tested on both heights. The paper describes the experimental campaign and validates the LS-SVM inversion method for refractivity from propagation factor on simple measured data.

  14. Quantitative analysis of mixed hydrofluoric and nitric acids using Raman spectroscopy with partial least squares regression.

    Science.gov (United States)

    Kang, Gumin; Lee, Kwangchil; Park, Haesung; Lee, Jinho; Jung, Youngjean; Kim, Kyoungsik; Son, Boongho; Park, Hyoungkuk

    2010-06-15

    Mixed hydrofluoric and nitric acids are widely used as a good etchant for the pickling process of stainless steels. The cost reduction and the procedure optimization in the manufacturing process can be facilitated by optically detecting the concentration of the mixed acids. In this work, we developed a novel method which allows us to obtain the concentrations of hydrofluoric acid (HF) and nitric acid (HNO(3)) mixture samples with high accuracy. The experiments were carried out for the mixed acids which consist of the HF (0.5-3wt%) and the HNO(3) (2-12wt%) at room temperature. Fourier Transform Raman spectroscopy has been utilized to measure the concentration of the mixed acids HF and HNO(3), because the mixture sample has several strong Raman bands caused by the vibrational mode of each acid in this spectrum. The calibration of spectral data has been performed using the partial least squares regression method which is ideal for local range data treatment. Several figures of merit (FOM) were calculated using the concept of net analyte signal (NAS) to evaluate performance of our methodology.

  15. Identification Method by Least Squares Applied On a Level Didactic Plant Viafoundation Fieldbus Protocol

    Directory of Open Access Journals (Sweden)

    Murillo Ferreira Dos Santos

    2014-05-01

    Full Text Available The industrial field is a continually growing area, which drives improvements in the techniques used in manufacturing. As a consequence, level systems have become an important part of many processes and need to be studied more specifically to obtain an optimal controlled response. It is known that a good controlled response is obtained when the system is identified correctly. The objective of this paper is therefore to present a didactic project on modeling and identification applied to a level system, using a didactic rig with the Foundation Fieldbus protocol developed by the SMAR® enterprise, belonging to CEFET-MG Campus III, Leopoldina, Brazil. The experiments applied the least squares method to identify the system dynamics, and the results were obtained using the OPC toolbox from MATLAB/Simulink® to establish communication between the computer and the system. The modeling and identification results were satisfactory, showing that the applied technique can approximate the system's level dynamics by a second-order transfer function.
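
    Least-squares identification of a discrete-time second-order model, as described above, amounts to stacking past inputs and outputs into a regressor matrix and solving one linear least-squares problem. The sketch below (ARX form; the model orders and simulated plant are illustrative, not the didactic rig's actual dynamics) shows the idea:

```python
import numpy as np

def arx_lsq(u, y, na=2, nb=2):
    """Identify y[k] + a1*y[k-1] + ... + a_na*y[k-na]
                 = b1*u[k-1] + ... + b_nb*u[k-nb]
    by ordinary least squares; returns [a1..a_na, b1..b_nb]."""
    n0 = max(na, nb)
    phi = np.array([
        np.concatenate(([-y[k - i] for i in range(1, na + 1)],
                        [ u[k - i] for i in range(1, nb + 1)]))
        for k in range(n0, len(y))])
    theta, *_ = np.linalg.lstsq(phi, y[n0:], rcond=None)
    return theta

# simulate a stable second-order plant under PRBS-like excitation
rng = np.random.default_rng(4)
u = rng.choice([-1.0, 1.0], size=300)
y = np.zeros(300)
for k in range(2, 300):
    y[k] = 1.5 * y[k-1] - 0.7 * y[k-2] + 0.5 * u[k-1] + 0.25 * u[k-2]

theta = arx_lsq(u, y)
print(theta)  # ≈ [-1.5, 0.7, 0.5, 0.25] (noise-free data)
```

With noise-free data and persistently exciting input the estimate is exact; the recovered coefficients are precisely a discrete second-order transfer function, matching the paper's conclusion.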

  16. The comparison of robust partial least squares regression with robust principal component regression on a real

    Science.gov (United States)

    Polat, Esra; Gunay, Suleyman

    2013-10-01

    One of the problems encountered in Multiple Linear Regression (MLR) is multicollinearity, which causes the overestimation of the regression parameters and increases their variance. Hence, when multicollinearity is present, biased estimation procedures such as classical Principal Component Regression (CPCR) and Partial Least Squares Regression (PLSR) are performed. The SIMPLS algorithm is the leading PLSR algorithm because of its speed and efficiency, and because its results are easier to interpret. However, both CPCR and SIMPLS yield very unreliable results when the data set contains outlying observations. Therefore, Hubert and Vanden Branden (2003) presented a robust PCR (RPCR) method and a robust PLSR method called RSIMPLS. In RPCR, a robust Principal Component Analysis (PCA) method for high-dimensional data is first applied to the independent variables; the dependent variables are then regressed on the scores using a robust regression method. RSIMPLS is constructed from a robust covariance matrix for high-dimensional data and robust linear regression. The purpose of this study is to show the usage of the RPCR and RSIMPLS methods on an econometric data set, hence comparing the two methods on an inflation model of Turkey. The considered methods are compared in terms of predictive ability and goodness of fit by using a robust Root Mean Squared Error of Cross-validation (R-RMSECV), a robust R2 value and the Robust Component Selection (RCS) statistic.

  17. Penalized Partial Least Squares

    Institute of Scientific and Technical Information of China (English)

    殷弘; 汪宝彬

    2013-01-01

    In this paper, the penalized partial least squares (PPLS) method is applied to quantitative structure-activity relationship (QSAR) research. PPLS is a combination of PLS and penalized regression, first proposed for classification problems in biological informatics, but to our knowledge its application to QSAR data is novel. Further, we consider three different penalized regressions, in contrast to the previous literature, which uses only one penalty function. Using a real data set, we demonstrate the competitive performance of PPLS methods compared with four other methods widely used in QSAR research.

  18. New predictive control algorithms based on Least Squares Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    LIU Bin; SU Hong-ye; CHU Jian

    2005-01-01

    Intended for industrial processes with different degrees of nonlinearity, the two predictive control algorithms presented in this paper are based on Least Squares Support Vector Machine (LS-SVM) models. For a weakly nonlinear system, the model is built by using LS-SVM with a linear kernel function, and the obtained linear LS-SVM model is then transformed into a linear input-output relation of the controlled system. For a strongly nonlinear system, however, an off-line model of the controlled system is built by using LS-SVM with a Radial Basis Function (RBF) kernel; the obtained nonlinear LS-SVM model is linearized at each sampling instant while the system is running, after which an on-line linear input-output model of the system is built. Based on the obtained linear input-output model, the Generalized Predictive Control (GPC) algorithm is employed in both algorithms to implement predictive control of the controlled plant. Simulation results of the presented algorithms on two different industrial process models, respectively, reveal the effectiveness and merits of both algorithms.

  19. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

    Science.gov (United States)

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-01-01

    The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters a false-lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design scheme is proposed, utilizing the truncated singular value decomposition method. The algorithm was evaluated for the BPSK, BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals, and the approximation results for the CCRWs are presented. Furthermore, the performance of the approximation results is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some performance degradation in tracking jitter compared to the CCRW discriminator, but the improvement in the multipath error envelope for the BOC(1,1) and BPSK signals makes the discriminator attractive, and it can be applied to high-order BOC signals. PMID:27483275
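
    Taken in isolation, the truncated-SVD machinery the abstract relies on is ordinary TSVD-regularized least squares: discard the small singular values of the design matrix before inverting. The sketch below is generic (function name, tolerance and test matrices are illustrative and unrelated to CCRW waveform design):

```python
import numpy as np

def tsvd_lstsq(A, b, rel_tol=1e-3):
    """Least-squares solve of A x ≈ b via truncated SVD: singular
    values below rel_tol * s_max are dropped to regularize an
    ill-conditioned design."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    keep = s > rel_tol * s[0]
    return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

# ill-conditioned example: an 8x8 Hilbert matrix with slightly noisy data
n = 8
A = 1.0 / (np.arange(n)[:, None] + np.arange(n)[None, :] + 1.0)
x_true = np.ones(n)
rng = np.random.default_rng(5)
b = A @ x_true + 1e-6 * rng.standard_normal(n)
x_naive = np.linalg.solve(A, b)           # noise blown up by 1/s_min
x_tsvd = tsvd_lstsq(A, b, rel_tol=1e-6)   # truncation keeps x bounded
print(np.linalg.norm(x_naive), np.linalg.norm(x_tsvd))
```

On a well-conditioned matrix no singular values are dropped and the result coincides with the ordinary least-squares solution, so truncation only intervenes where the inversion is unstable.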

  20. First-order system least-squares for the Helmholtz equation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.

    1996-12-31

    We apply the FOSLS methodology to the exterior Helmholtz equation Δp + k²p = 0. Several least-squares functionals, some of which include both H⁻¹(Ω) and L²(Ω) terms, are examined. We show that in a special subspace of [H(div; Ω) ∩ H(curl; Ω)] × H¹(Ω), each of these functionals is equivalent, independent of k, to a scaled H¹(Ω) norm of p and u = ∇p. This special subspace does not include the oscillatory near-nullspace components c·e^{ik(αx+βy)}, where c is a complex vector and α² + β² = 1. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair (α, β), and which approximate c·e^{ik(αx+βy)} well on the coarser levels where bilinears cannot. We use several pairs of these parameters on each of these coarser levels, so that several coarse-grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution of Maxwell's equations will also be presented.

  1. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2011-09-01

    Full Text Available The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold, and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or simply imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on an optimal channel model. In particular, we propose two weighted least squares techniques, based on the standard hyperbolic and circular positioning algorithms, that specifically consider the accuracies of the different measurements to obtain a better estimate of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve greater robustness to inaccuracies in channel modeling.
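
A minimal sketch of the circular (range-based) positioning step via iteratively re-linearized weighted least squares may help make the idea concrete. Everything here is invented for illustration: the anchor layout, the noise-free ranges and the uniform weights stand in for the paper's RSS-derived distances and measurement-accuracy weights.

```python
import numpy as np

def wls_circular(anchors, ranges, weights, x0=None, iters=20):
    """Gauss-Newton weighted least squares for 2-D range-based localization."""
    x = np.mean(anchors, axis=0) if x0 is None else np.asarray(x0, float)
    W = np.diag(weights)                        # per-measurement confidence
    for _ in range(iters):
        diff = x - anchors                      # (n, 2) vectors to anchors
        pred = np.linalg.norm(diff, axis=1)     # predicted ranges
        J = diff / pred[:, None]                # Jacobian of ranges w.r.t. x
        r = ranges - pred                       # range residuals
        dx = np.linalg.solve(J.T @ W @ J, J.T @ W @ r)
        x = x + dx
        if np.linalg.norm(dx) < 1e-10:
            break
    return x

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true_pos = np.array([3.0, 4.0])
ranges = np.linalg.norm(anchors - true_pos, axis=1)   # noise-free demo ranges
est = wls_circular(anchors, ranges, weights=np.ones(4))
```

In a real RSS deployment the weights would come from the estimated variance of each converted distance, so unreliable links contribute less to the solution.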

  2. Time series online prediction algorithm based on least squares support vector machine

    Institute of Scientific and Technical Information of China (English)

    WU Qiong; LIU Wen-ying; YANG Yi-han

    2007-01-01

    Deficiencies in applying the traditional least squares support vector machine (LS-SVM) to time series online prediction were identified. According to a property of the kernel function matrix and using the recursive calculation of a block matrix, a new time series online prediction algorithm based on an improved LS-SVM was proposed. The historical training results are fully utilized and the computing speed of the LS-SVM is enhanced. The improved algorithm was then applied to time series online prediction. Based on the operational data provided by the Northwest Power Grid of China, the method was used in the transient stability prediction of an electric power system. The results show that, compared with the calculation time of the traditional LS-SVM (75-1600 ms), that of the proposed method in different time windows is 40-60 ms, and the prediction accuracy (normalized root mean squared error) of the proposed method is above 0.8. Thus the improved method is better than the traditional LS-SVM and more suitable for time series online prediction.
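
For reference, the batch form of LS-SVM regression that such recursive schemes accelerate reduces to a single linear system (the KKT conditions). The sketch below is a generic batch implementation on a toy sine dataset with illustrative hyper-parameters; it does not reproduce the paper's block-matrix recursion.

```python
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def lssvm_fit(X, y, gamma=100.0, sigma=1.0):
    """Solve the LS-SVM KKT system [[0, 1^T], [1, K + I/gamma]] [b; a] = [0; y]."""
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    A = np.zeros((n + 1, n + 1))
    A[0, 1:] = 1.0
    A[1:, 0] = 1.0
    A[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
    return sol[0], sol[1:]                      # bias b, dual weights alpha

def lssvm_predict(Xq, X, b, alpha, sigma=1.0):
    return rbf_kernel(Xq, X, sigma) @ alpha + b

X = np.linspace(0, 2 * np.pi, 40)[:, None]
y = np.sin(X).ravel()
b, alpha = lssvm_fit(X, y)
yhat = lssvm_predict(X, X, b, alpha)
```

The recursive variant described in the abstract avoids re-solving this full system for every new sample by updating the inverse of the bordered kernel matrix block by block.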

  3. Denoising spectroscopic data by means of the improved Least-Squares Deconvolution method

    CERN Document Server

    Tkachenko, A; Tsymbal, V; Aerts, C; Kochukhov, O; Debosscher, J

    2013-01-01

    The MOST, CoRoT, and Kepler space missions led to the discovery of a large number of intriguing, and in some cases unique, objects, among which are pulsating stars, stars hosting exoplanets, binaries, etc. Although the space missions deliver photometric data of unprecedented quality, these data lack any spectral information, and we are still in need of ground-based spectroscopic and/or multicolour photometric follow-up observations for a solid interpretation. Both the faintness of most of the observed stars and the required high S/N of spectroscopic data imply the need for large telescopes, access to which is limited. In this paper, we look for an alternative and aim to develop a technique for denoising originally low-S/N spectroscopic data, making observations of faint targets with small telescopes possible and effective. We present a generalization of the original Least-Squares Deconvolution (LSD) method by implementing a multicomponent average profile and a line strengths corre...

  4. Predicting Asthma Outcome Using Partial Least Square Regression and Artificial Neural Networks

    Directory of Open Access Journals (Sweden)

    E. Chatzimichail

    2013-01-01

    Full Text Available The long-term solution to the asthma epidemic is believed to be prevention and not treatment of the established disease. Most cases of asthma begin during the first years of life; thus the early determination of which young children will have asthma later in their life counts as an important priority. Artificial neural networks (ANN have been already utilized in medicine in order to improve the performance of the clinical decision-making tools. In this study, a new computational intelligence technique for the prediction of persistent asthma in children is presented. By employing partial least square regression, 9 out of 48 prognostic factors correlated to the persistent asthma have been chosen. Multilayer perceptron and probabilistic neural networks topologies have been investigated in order to obtain the best prediction accuracy. Based on the results, it is shown that the proposed system is able to predict the asthma outcome with a success of 96.77%. The ANN, with which these high rates of reliability were obtained, will help the doctors to identify which of the young patients are at a high risk of asthma disease progression. Moreover, this may lead to better treatment opportunities and hopefully better disease outcomes in adulthood.

  5. A Novel Method for Flatness Pattern Recognition via Least Squares Support Vector Regression

    Institute of Scientific and Technical Information of China (English)

    2012-01-01

    To adapt to the new requirements of developing flatness control theory and technology, cubic patterns were introduced on the basis of the traditional linear, quadratic and quartic flatness basic patterns. Linear, quadratic, cubic and quartic Legendre orthogonal polynomials were adopted to express the flatness basic patterns. In order to overcome the defects of existing recognition methods based on fuzzy, neural network and support vector regression (SVR) theory, a novel flatness pattern recognition method based on least squares support vector regression (LS-SVR) was proposed. On this basis, for the purpose of determining the hyper-parameters of LS-SVR effectively and enhancing the recognition accuracy and generalization performance of the model, a particle swarm optimization algorithm with the leave-one-out (LOO) error as the fitness function was adopted. To overcome the high computational complexity of the naive cross-validation algorithm, a novel fast cross-validation algorithm was introduced to calculate the LOO error of LS-SVR. Results of experiments on flatness data calculated by theory and on flatness signals practically measured on a 900HC cold-rolling mill demonstrate that the proposed approach can distinguish the types and define the magnitudes of flatness defects effectively, with high accuracy, high speed and strong generalization ability.

  6. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p² + 16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
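
The single-pass accumulation of X'X and X'y described above can be sketched as follows. The toy regression problem stands in for the NDMMF, but the memory pattern is the point: only a p x p matrix and a p-vector are ever stored, never the full design matrix X.

```python
import numpy as np

def ols_single_pass(rows, p):
    """rows yields (x_row, y, weight); only p x p storage is needed."""
    XtX = np.zeros((p, p))
    Xty = np.zeros(p)
    for x, y, w in rows:
        x = np.asarray(x, float)
        XtX += w * np.outer(x, x)       # accumulate X'WX one row at a time
        Xty += w * y * x                # accumulate X'Wy one row at a time
    return np.linalg.solve(XtX, Xty)    # solve the normal equations

rng = np.random.default_rng(0)
beta_true = np.array([2.0, -1.0, 0.5])
X = rng.normal(size=(1000, 3))
y = X @ beta_true                        # noise-free demo response
# Feed the rows through a generator, as if streaming observations from disk
beta = ols_single_pass(((X[i], y[i], 1.0) for i in range(1000)), p=3)
```

Because the generator could just as well read observations from a file, the N x p design matrix never has to fit in memory, which is exactly the advantage the abstract describes.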

  7. Prediction of Navigation Satellite Clock Bias Considering Clock's Stochastic Variation Behavior with Robust Least Square Collocation

    Directory of Open Access Journals (Sweden)

    WANG Yupu

    2016-06-01

    Full Text Available In order to better express the characteristics of satellite clock bias (SCB) and further improve its prediction precision, a new SCB prediction model is proposed, which takes the physical features, cyclic variation and stochastic variation behavior of the space-borne atomic clock into consideration by using a robust least squares collocation (LSC) method. The proposed model first uses a quadratic polynomial model with periodic terms to fit and extract the trend and cyclic terms of the SCB. Then, for the residual stochastic variation part and possible gross errors hidden in the SCB data, the model employs a robust LSC method. The covariance function of the LSC is determined by selecting an empirical function and combining SCB prediction tests. Using the final precise IGS SCB products to conduct prediction tests, the results show that the proposed model achieves better prediction performance. Specifically, the prediction accuracy improves by 0.457 ns and 0.948 ns, and the corresponding prediction stability improves by 0.445 ns and 1.233 ns, compared with the results of the quadratic polynomial model and the grey model, respectively. In addition, the results also show that the proposed covariance function corresponding to the new model is reasonable.

  8. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

    Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. This method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The results from experiments indicate that this method represents a significant improvement in terms of the accuracy of the MLS in environments that are essentially hostile to GNSS and is also effective regarding the calibration of misalignment.

  9. Least squares collocation applied to local gravimetric solutions from satellite gravity gradiometry data

    Science.gov (United States)

    Robbins, J. W.

    1985-01-01

    An autonomous spaceborne gravity gradiometer mission is being considered as a post-Geopotential Research Mission project. The introduction of satellite gradiometry data to geodesy is expected to improve solid earth gravity models. The possibility of utilizing gradiometer data for the determination of pertinent gravimetric quantities on a local basis is explored. The analytical technique of least squares collocation is investigated for its usefulness in local solutions of this type. It is assumed, in the error analysis, that the vertical gravity gradient component of the gradient tensor is used as the raw data signal from which the corresponding reference gradients are removed to create the centered observations required in the collocation solution. The reference gradients are computed from a high degree and order geopotential model. The solution can be made in terms of mean or point gravity anomalies, height anomalies, or other useful gravimetric quantities depending on the choice of covariance types. Selected for this study were 30′ x 30′ mean gravity and height anomalies. Existing software and new software are utilized to implement the collocation technique. It was determined that satellite gradiometry data at an altitude of 200 km can be used successfully for the determination of 30′ x 30′ mean gravity anomalies to an accuracy of 9.2 mgal from this algorithm. It is shown that the resulting accuracy estimates are sensitive to gravity model coefficient uncertainties, data reduction assumptions and satellite mission parameters.

  10. A Design Method of Code Correlation Reference Waveform in GNSS Based on Least-Squares Fitting.

    Science.gov (United States)

    Xu, Chengtao; Liu, Zhe; Tang, Xiaomei; Wang, Feixue

    2016-07-29

    The multipath effect is one of the main error sources in Global Navigation Satellite Systems (GNSSs). The code correlation reference waveform (CCRW) technique is an effective multipath mitigation algorithm for the binary phase shift keying (BPSK) signal. However, it encounters the false lock problem in code tracking when applied to binary offset carrier (BOC) signals. A least-squares approximation method for the CCRW design scheme is proposed, utilizing the truncated singular value decomposition method. This algorithm was applied to the BPSK, BOC(1,1), BOC(2,1), BOC(6,1) and BOC(7,1) signals, and the approximation results of the CCRWs are presented. Furthermore, the performance of the approximation results is analyzed in terms of the multipath error envelope and the tracking jitter. The results show that the proposed method can realize coherent and non-coherent CCRW discriminators without false lock points. Generally, there is some degradation in the tracking jitter compared to the original CCRW discriminator. However, the improvement in the multipath error envelope for the BOC(1,1) and BPSK signals makes the discriminator attractive, and it can be applied to high-order BOC signals.
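
The truncated-SVD least-squares solve named in the abstract can be sketched generically as below. The ill-conditioned demo matrix is invented and has nothing to do with CCRW design, but it shows why truncation keeps the solution bounded when the full solve blows up.

```python
import numpy as np

def tsvd_lstsq(A, b, k):
    """Least-squares solution of A x ~= b keeping the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    return Vt[:k].T @ ((U[:, :k].T @ b) / s[:k])

# Ill-conditioned demo: one column is a near-copy of another, so the smallest
# singular value is tiny and the full least-squares solution is enormous.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 10))
A[:, -1] = A[:, 0] + 1e-12 * rng.normal(size=50)
b = rng.normal(size=50)
x_full = tsvd_lstsq(A, b, k=10)    # keeps the tiny singular value
x_trunc = tsvd_lstsq(A, b, k=9)    # drops it: regularized solution
```

Dropping the smallest singular values trades a little residual for a well-behaved solution, which is the usual motivation for TSVD in discrete ill-posed design problems.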

  11. Depth estimation of face images using the nonlinear least-squares model.

    Science.gov (United States)

    Sun, Zhan-Li; Lam, Kin-Man; Gao, Qing-Wei

    2013-01-01

    In this paper, we propose an efficient algorithm to reconstruct the 3D structure of a human face from one or more of its 2D images with different poses. In our algorithm, the nonlinear least-squares model is first employed to estimate the depth values of facial feature points and the pose of the 2D face image concerned by means of the similarity transform. Furthermore, different optimization schemes are presented with regard to the accuracy levels and the training time required. Our algorithm also embeds the symmetrical property of the human face into the optimization procedure, in order to alleviate the sensitivities arising from changes in pose. In addition, the regularization term, based on linear correlation, is added in the objective function to improve the estimation accuracy of the 3D structure. Further, a model-integration method is proposed to improve the depth-estimation accuracy when multiple nonfrontal-view face images are available. Experimental results on the 2D and 3D databases demonstrate the feasibility and efficiency of the proposed methods. PMID:22711771

  12. Automatic retinal vessel classification using a Least Square-Support Vector Machine in VAMPIRE.

    Science.gov (United States)

    Relan, D; MacGillivray, T; Ballerini, L; Trucco, E

    2014-01-01

    It is important to classify retinal blood vessels into arterioles and venules for computerised analysis of the vasculature and to aid discovery of disease biomarkers. For instance, zone B is the standardised region of a retinal image utilised for the measurement of the arteriole to venule width ratio (AVR), a parameter indicative of microvascular health and systemic disease. We introduce a Least Square-Support Vector Machine (LS-SVM) classifier for the first time (to the best of our knowledge) to automatically label arterioles and venules. We use only 4 image features and consider vessels inside zone B (802 vessels from 70 fundus camera images) and in an extended zone (1,207 vessels, 70 fundus camera images). We achieve an accuracy of 94.88% and 93.96% in zone B and the extended zone, respectively, with a training set of 10 images and a testing set of 60 images. With a smaller training set of only 5 images and the same testing set we achieve an accuracy of 94.16% and 93.95%, respectively. This experiment was repeated five times by randomly choosing 10 and 5 images for the training set. Mean classification accuracies were close to the above-mentioned results. We conclude that the performance of our system is very promising and outperforms most recently reported systems. Our approach requires smaller training data sets compared to others but still results in a similar or higher classification rate. PMID:25569917

  13. Radial Basis Function-Sparse Partial Least Squares for Application to Brain Imaging Data

    Directory of Open Access Journals (Sweden)

    Hisako Yoshida

    2013-01-01

    Full Text Available Magnetic resonance imaging (MRI data is an invaluable tool in brain morphology research. Here, we propose a novel statistical method for investigating the relationship between clinical characteristics and brain morphology based on three-dimensional MRI data via radial basis function-sparse partial least squares (RBF-sPLS. Our data consisted of MRI image intensities for multimillion voxels in a 3D array along with 73 clinical variables. This dataset represents a suitable application of RBF-sPLS because of a potential correlation among voxels as well as among clinical characteristics. Additionally, this method can simultaneously select both effective brain regions and clinical characteristics based on sparse modeling. This is in contrast to existing methods, which consider prespecified brain regions because of the computational difficulties involved in processing high-dimensional data. RBF-sPLS employs dimensionality reduction in order to overcome this obstacle. We have applied RBF-sPLS to a real dataset composed of 102 chronic kidney disease patients, while a comparison study used a simulated dataset. RBF-sPLS identified two brain regions of interest from our patient data: the temporal lobe and the occipital lobe, which are associated with aging and anemia, respectively. Our simulation study suggested that such brain regions are extracted with excellent accuracy using our method.

  14. Fishery landing forecasting using EMD-based least square support vector machine models

    Science.gov (United States)

    Shabri, Ani

    2015-05-01

    In this paper, a novel hybrid ensemble learning paradigm integrating ensemble empirical mode decomposition (EMD) and least squares support vector machine (LSSVM) models is proposed to improve the accuracy of fishery landing forecasting. The hybrid is formulated specifically for modeling fishery landings, which form highly nonlinear, non-stationary and seasonal time series that can hardly be properly modelled and accurately forecasted by traditional statistical models. In the hybrid model, EMD is used to decompose the original data into a finite and often small number of sub-series. Each sub-series is modeled and forecasted by an LSSVM model. Finally, the forecast of the fishery landing is obtained by aggregating the forecasting results of all sub-series. To assess the effectiveness and predictability of EMD-LSSVM, monthly fishery landing records from East Johor of Peninsular Malaysia have been used as a case study. The results show that the proposed model yields better forecasts than the Autoregressive Integrated Moving Average (ARIMA), LSSVM and EMD-ARIMA models on several criteria.

  15. Identification of the Hammerstein model of a PEMFC stack based on least squares support vector machines

    Energy Technology Data Exchange (ETDEWEB)

    Li, Chun-Hua; Zhu, Xin-Jian; Cao, Guang-Yi; Sui, Sheng; Hu, Ming-Ruo [Fuel Cell Research Institute, Shanghai Jiao Tong University, 800 Dongchuan Road, Shanghai 200240 (China)

    2008-01-03

    This paper reports a Hammerstein modeling study of a proton exchange membrane fuel cell (PEMFC) stack using least squares support vector machines (LS-SVM). PEMFC is a complex nonlinear, multi-input and multi-output (MIMO) system that is hard to model by traditional methodologies. Due to the generalization performance of LS-SVM being independent of the dimensionality of the input data and the particularly simple structure of the Hammerstein model, a MIMO SVM-ARX (linear autoregression model with exogenous input) Hammerstein model is used to represent the PEMFC stack in this paper. The linear model parameters and the static nonlinearity can be obtained simultaneously by solving a set of linear equations followed by the singular value decomposition (SVD). The simulation tests demonstrate the obtained SVM-ARX Hammerstein model can efficiently approximate the dynamic behavior of a PEMFC stack. Furthermore, based on the proposed SVM-ARX Hammerstein model, valid control strategy studies such as predictive control, robust control can be developed. (author)

  16. A least-squares parameter estimation algorithm for switched hammerstein systems with applications to the VOR

    Science.gov (United States)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

    A "Multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result followed by a smooth evolution under the new regime. Characterizing the switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems, suspected of containing "hard" nonlinearities.

  17. An efficient recursive least square-based condition monitoring approach for a rail vehicle suspension system

    Science.gov (United States)

    Liu, X. Y.; Alfi, S.; Bruni, S.

    2016-06-01

    A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. This approach is based on the recursive least squares (RLS) algorithm, focusing on the deterministic 'input-output' model. RLS has a Kalman filtering feature and is able to identify unknown parameters from a noisy dynamic system by memorising the correlation properties of variables. The identification of the suspension parameters is achieved by machine learning of the relationship between excitation and response in a vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements', considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of this strategy for real applications. Results of the parameter identification indicate that the estimated suspension parameters are consistent with or close to the reference values. These results provide supporting evidence that this fault diagnosis technique is capable of paving the way for a future vehicle condition monitoring system.
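
A minimal recursive least squares identifier of the kind described can be sketched as follows. The toy first-order ARX system and its parameters are invented for the demo and replace the suspension model and the 'ADTreS' or field-test data.

```python
import numpy as np

class RLS:
    """Recursive least squares with exponential forgetting."""
    def __init__(self, n, lam=1.0, delta=1e4):
        self.theta = np.zeros(n)        # parameter estimate
        self.P = delta * np.eye(n)      # inverse correlation matrix
        self.lam = lam                  # forgetting factor (1.0 = no forgetting)

    def update(self, phi, y):
        Pphi = self.P @ phi
        k = Pphi / (self.lam + phi @ Pphi)          # gain vector
        self.theta += k * (y - phi @ self.theta)    # correct with the innovation
        self.P = (self.P - np.outer(k, Pphi)) / self.lam
        return self.theta

# Toy system to identify: y[t] = 0.8*y[t-1] + 0.5*u[t-1]
rng = np.random.default_rng(2)
u = rng.normal(size=500)
y = np.zeros(500)
for t in range(1, 500):
    y[t] = 0.8 * y[t - 1] + 0.5 * u[t - 1]

est = RLS(n=2)
for t in range(1, 500):
    theta = est.update(np.array([y[t - 1], u[t - 1]]), y[t])
```

Setting the forgetting factor below 1 lets the same recursion track slowly drifting parameters, which is what makes RLS attractive for online fault detection.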

  18. An Emotion Detection System Based on Multi Least Squares Twin Support Vector Machine

    Directory of Open Access Journals (Sweden)

    Divya Tomar

    2014-01-01

    Full Text Available Posttraumatic stress disorder (PTSD, bipolar manic disorder (BMD, obsessive compulsive disorder (OCD, depression, and suicide are some major problems existing in civilian and military life. The change in emotion is responsible for such type of diseases. So, it is essential to develop a robust and reliable emotion detection system which is suitable for real world applications. Apart from healthcare, importance of automatically recognizing emotions from human speech has grown with the increasing role of spoken language interfaces in human-computer interaction applications. Detection of emotion in speech can be applied in a variety of situations to allocate limited human resources to clients with the highest levels of distress or need, such as in automated call centers or in a nursing home. In this paper, we used a novel multi least squares twin support vector machine classifier in order to detect seven different emotions such as anger, happiness, sadness, anxiety, disgust, panic, and neutral emotions. The experimental result indicates better performance of the proposed technique over other existing approaches. The result suggests that the proposed emotion detection system may be used for screening of mental status.

  19. Modeling of a PEM Fuel Cell Stack using Partial Least Squares and Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Han, In-Su; Shin, Hyun Khil [GS Caltex Corp, Daejeon (Korea, Republic of)

    2015-04-15

    We present two data-driven modeling methods, partial least squares (PLS) and artificial neural networks (ANN), to predict the major operating and performance variables of a polymer electrolyte membrane (PEM) fuel cell stack. PLS and ANN models were constructed using the experimental data obtained from the testing of a 30 kW-class PEM fuel cell stack, and then were compared with each other in terms of their prediction and computational performances. To reduce the complexity of the models, we incorporated variable importance on PLS projection (VIP) as a variable selection method into the modeling procedure, in which the predictor variables are selected from a set of input operation variables. The modeling results showed that the ANN models outperformed the PLS models in predicting the average cell voltage and cathode outlet temperature of the fuel cell stack. However, the PLS models also offered satisfactory prediction performances although they can only capture linear correlations between the predictor and output variables. Depending on the required degree of modeling accuracy and speed, both ANN and PLS models can be employed for performance predictions, offline and online optimizations, controls, and fault diagnoses in the field of PEM fuel cell design and operation.
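
As background for the PLS side of the comparison, here is a compact single-response PLS (PLS1, NIPALS-style) sketch on invented data; the fuel-cell measurements and the VIP variable-selection step are not reproduced.

```python
import numpy as np

def pls1_fit(X, y, n_comp):
    """PLS1 via NIPALS-style deflation; returns regression coefficients B."""
    X = X - X.mean(0)
    y = y - y.mean()
    W, P, q = [], [], []
    for _ in range(n_comp):
        w = X.T @ y
        w = w / np.linalg.norm(w)       # weight: direction of max covariance
        t = X @ w                       # score
        p = X.T @ t / (t @ t)           # X loading
        c = (y @ t) / (t @ t)           # y loading
        X = X - np.outer(t, p)          # deflate X
        y = y - c * t                   # deflate y
        W.append(w); P.append(p); q.append(c)
    W, P, q = np.array(W).T, np.array(P).T, np.array(q)
    return W @ np.linalg.inv(P.T @ W) @ q

rng = np.random.default_rng(3)
X = rng.normal(size=(200, 5))
beta_true = np.array([1.0, 0.0, -2.0, 0.0, 0.5])
y = X @ beta_true                        # noise-free demo response
B = pls1_fit(X, y, n_comp=5)             # full rank: PLS coincides with OLS
yhat = (X - X.mean(0)) @ B + y.mean()
```

With fewer components than predictors, the same routine gives the low-dimensional, collinearity-tolerant fits that make PLS useful for correlated process variables.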

  20. Eddy current characterization of small cracks using least square support vector machine

    Science.gov (United States)

    Chelabi, M.; Hacib, T.; Le Bihan, Y.; Ikhlef, N.; Boughedda, H.; Mekideche, M. R.

    2016-04-01

    Eddy current (EC) sensors are used for non-destructive testing since they are able to probe conductive materials. Although this is a conventional technique for defect detection and localization, its main weakness is that defect characterization, i.e. the exact determination of the shape and dimensions, is still a question to be answered. In this work, we demonstrate the capability of small crack sizing using signals acquired from an EC sensor. We report our effort to develop a systematic approach to estimate the size of rectangular and thin defects (length and depth) in a conductive plate. The approach combines a finite element method (FEM) with a statistical learning method, least squares support vector machines (LS-SVM). First, we use the FEM to model the forward problem. Next, an algorithm is used to build an adaptive database. Finally, the LS-SVM is used to solve the inverse problem, creating polynomial functions able to approximate the correlation between the crack dimensions and the signal picked up from the EC sensor. Several methods are used to find the parameters of the LS-SVM; in this study, particle swarm optimization (PSO) and a genetic algorithm (GA) are proposed for tuning the LS-SVM. The results of the design and the inversions were compared to both simulated and experimental data, with the accuracy experimentally verified. These results prove the applicability of the presented approach.

  1. Partial least squares prediction of the first hyperpolarizabilities of donor-acceptor polyenic derivatives

    International Nuclear Information System (INIS)

    Graphical abstract: PLS regression equations predict static β values quite well for a large set of donor-acceptor organic molecules, in close agreement with the available experimental data. Highlights: → PLS regression predicts static β values of 35 push-pull organic molecules. → PLS equations show the correlation of β with structural-electronic parameters. → PLS regression selects the best components of push-bridge-pull nonlinear compounds. → PLS analyses can be routinely used to select novel second-order materials. - Abstract: A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the AM1 values of the HOMO-LUMO energy gap, the ground-state dipole moment and the HOMO energy, and the number of π-electrons. The regression equation predicts the static β values for the molecules investigated quite well and can be used to model new organic-based materials with enhanced nonlinear responses.

  2. Texture discrimination of green tea categories based on least squares support vector machine (LSSVM) classifier

    Science.gov (United States)

    Li, Xiaoli; He, Yong; Qiu, Zhengjun; Wu, Di

    2008-03-01

    This research aimed to develop a multi-spectral imaging technique for discriminating green tea categories based on texture analysis. Three key wavelengths of 550, 650 and 800 nm were implemented in a common-aperture multi-spectral charge-coupled device camera, and 190 unique images were acquired for a data set of four different kinds of green tea. An image data set consisting of 15 texture features for each image was generated based on texture analysis techniques, including the grey level co-occurrence method (GLCM) and texture filtering. To optimize the texture features, 5 features that were not correlated with the tea category were eliminated. Unsupervised cluster analysis was conducted using the optimized texture features based on principal component analysis. The cluster analysis showed that the four kinds of green tea could be separated in the space of the first two principal components; however, there was overlap among the different kinds of green tea. To enhance the discrimination performance, a least squares support vector machine (LSSVM) classifier was developed based on the optimized texture features. Excellent discrimination performance was obtained for samples in the prediction set, with 100%, 100%, 75% and 100% accuracy for the four kinds of green tea, respectively. It can be concluded that texture discrimination of green tea categories based on multi-spectral imaging technology is feasible.

  3. Multidimensional model of apathy in older adults using partial least squares--path modeling.

    Science.gov (United States)

    Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine

    2016-06-01

    Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine factors contributing to the three different dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contribution of these factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive apathy than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. Regarding behavioral apathy, we again found similar latent variables, except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account the various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed.

  4. Attenuation compensation for least-squares reverse time migration using the viscoacoustic-wave equation

    KAUST Repository

    Dutta, Gaurav

    2014-10-01

    Strong subsurface attenuation leads to distortion of amplitudes and phases of seismic waves propagating inside the earth. Conventional acoustic reverse time migration (RTM) and least-squares reverse time migration (LSRTM) do not account for this distortion, which can lead to defocusing of migration images in highly attenuative geologic environments. To correct for this distortion, we used a linearized inversion method, denoted as Qp-LSRTM. During the least-squares iterations, we used a linearized viscoacoustic modeling operator for forward modeling. The adjoint equations were derived using the adjoint-state method for back propagating the residual wavefields. The merit of this approach compared with conventional RTM and LSRTM was that Qp-LSRTM compensated for the amplitude loss due to attenuation and could produce images with better balanced amplitudes and higher resolution below highly attenuative layers. Numerical tests on synthetic and field data illustrated the advantages of Qp-LSRTM over RTM and LSRTM when the recorded data had strong attenuation effects. Similar to standard LSRTM, the sensitivity tests for background velocity and Qp errors revealed that the liability of this method is the requirement for smooth and accurate migration velocity and attenuation models.

  5. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.

    Science.gov (United States)

    Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio

    2015-12-01

    Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or non-bifurcation can be an important step in improving automated methods for atherosclerotic plaque quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction, and classification, providing a lumen area quantification, geometrical features of the cross-sectional lumen and labeled slices. This classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean difference in lumen area of 0.11 mm² compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features. PMID:26433615
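    Feature selection by orthogonal least squares, as used here, greedily picks the feature whose component orthogonal to the already-selected ones removes the most residual energy. A small sketch on synthetic data follows (the data and the choice of two selected features are illustrative, not the paper's 104-feature setup):

    ```python
    import numpy as np

    def ols_select(X, y, k):
        """Greedy orthogonal least squares: at each step pick the candidate
        column whose Gram-Schmidt-orthogonalized component explains the most
        residual energy (the error-reduction ratio criterion)."""
        n, p = X.shape
        chosen, Q = [], []
        r = y.astype(float).copy()
        for _ in range(k):
            best, best_gain, best_w = None, -np.inf, None
            for j in range(p):
                if j in chosen:
                    continue
                w = X[:, j].astype(float).copy()
                for q in Q:                 # orthogonalize against selected basis
                    w -= (q @ w) * q
                norm = np.linalg.norm(w)
                if norm < 1e-12:
                    continue
                w /= norm
                gain = (w @ r) ** 2         # residual energy this column removes
                if gain > best_gain:
                    best, best_gain, best_w = j, gain, w
            chosen.append(best)
            Q.append(best_w)
            r -= (best_w @ r) * best_w      # deflate the residual
        return chosen

    rng = np.random.default_rng(2)
    X = rng.normal(size=(200, 10))
    y = 3 * X[:, 2] - 2 * X[:, 7] + rng.normal(scale=0.1, size=200)
    selected = ols_select(X, y, 2)
    ```

    On this synthetic target, the two informative columns are recovered; in the paper the same criterion ranks texture and geometry features before the classifier is trained.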

  6. A weighted least squares analysis of globalization and the Nigerian stock market performance

    Directory of Open Access Journals (Sweden)

    Alenoghena Osi Raymond

    2013-12-01

    Full Text Available The study empirically investigates the impact of globalization on the performance of the Nigerian stock market. It seeks to verify the existence of a linking mechanism between globalization (through trade openness, net inflow of capital, and participation in the international capital market), financial development, and stock market performance over the period 1981 to 2011. The methodology examines the stochastic characteristics of each time series by testing stationarity with the Im, Pesaran and Shin W-stat test. The weighted least squares regression method was employed to ascertain the relative impact of each factor. The findings were reinforced by the presence of a long-term equilibrium relationship, as evidenced by the cointegrating equation of the VECM. The model established that the globalization variables positively impacted stock market performance, with net capital inflows and participation in the international capital market having the greatest impact during the period under review. Accordingly, it is advised that in formulating foreign policy, policy makers should take strategic views on the international economy and craft creative policies that foster economic integration between Nigeria and its existing trade allies. Such policies would also help create avenues for new trade agreements with nations that hitherto were not trade partners with Nigeria.
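    Weighted least squares itself reduces to solving the weighted normal equations (XᵀWX)b = XᵀWy, with weights typically set to inverse variances. A short sketch on synthetic heteroscedastic data (illustrative only; no connection to the Nigerian market series):

    ```python
    import numpy as np

    # Weighted least squares: minimize sum w_i (y_i - x_i.b)^2 by solving
    # the weighted normal equations (X' W X) b = X' W y.
    rng = np.random.default_rng(3)
    n = 200
    x = rng.uniform(0, 10, n)
    sigma = 0.5 + 0.3 * x                     # noise grows with x (heteroscedastic)
    y = 1.0 + 2.0 * x + rng.normal(scale=sigma)

    X = np.column_stack([np.ones(n), x])      # intercept + slope design
    W = np.diag(1.0 / sigma ** 2)             # weight = inverse variance
    beta = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)
    ```

    With these weights, observations with noisy responses count less, which is the standard remedy when ordinary least squares residuals show non-constant variance.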

  7. A least-squares finite-element method for the simulation of free-surface flows

    International Nuclear Information System (INIS)

    This paper presents the simulation of free-surface flows involving two fluids (air and water) by the least-squares finite-element method. The motion of both fluids is governed by the two-dimensional Navier-Stokes equations in velocity-pressure-vorticity form. The free surface, i.e. the moving interface, is treated as the surface of density discontinuity between gas and liquid. A field variable is used to represent the fractional volume of both fluids so that the profile and position of the interface can be calculated accurately. For the time-dependent nonlinear equations, iteration with linearization is performed within each time-step. An element-by-element conjugate gradient method is applied to solve the discretized systems. The model is validated against experimental measurements of the dam-break problem. The simulations of free-surface surges through a sluice gate and over a free fall show encouraging results for representing complicated free-surface profiles, especially the simulated vortices distributed in the circulation zone. The model also has a strong ability to simulate practical engineering problems with complex geometry. Refs. 3 (author)

  8. Plane-Wave Least-Squares Reverse Time Migration for Rugged Topography

    Institute of Scientific and Technical Information of China (English)

    Jianping Huang; Chuang Li; Rongrong Wang; Qingyang Li

    2015-01-01

    We present a method based on least-squares reverse time migration with plane-wave encoding (P-LSRTM) for rugged topography. Instead of modifying the wavefield before migration, we modify the plane-wave encoding function and fill the area above the rugged topography in the model with a constant velocity, so that P-LSRTM can be performed directly from the rugged surface in the same way as shot-domain reverse time migration. In order to improve efficiency and reduce I/O (input/output) cost, dynamic and hybrid encoding strategies are implemented. Numerical tests on the SEG rugged topography model show that P-LSRTM can suppress migration artifacts in the migration image and efficiently compensate amplitudes in the middle-deep part. Without data correction, P-LSRTM can produce a satisfying image of the near surface given an accurate near-surface velocity model. Moreover, pre-stack P-LSRTM is more robust than conventional RTM in the presence of migration velocity errors.

  9. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2012-02-01

    Full Text Available In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces.

  10. Intelligent control of a sensor-actuator system via kernelized least-squares policy iteration.

    Science.gov (United States)

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data are projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, at lower computational cost. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969

  11. On the Potential of Least Squares Response Method for the Calibration of Superconducting Gravimeters

    Directory of Open Access Journals (Sweden)

    Mahmoud Abd El-Gelil

    2012-01-01

    Full Text Available One of the most important operating procedures after the installation of a superconducting gravimeter (SG is its calibration. The calibration process can identify and evaluate possible time variability in the scale factor and in the hardware anti-aliasing filter response. The SG installed in Cantley, Canada is calibrated using two absolute gravimeters and the data are analysed in the time and frequency domains to estimate the SG scale factor. In the time domain, we use the weighted linear regression method whereas in the frequency domain we use the least squares response method. Rigorous statistical procedures are applied to define data disturbances, outliers, and realistic data noise levels. Using data from JILA-2 and FG5-236 separately, the scale factor is estimated in the time and frequency domains as −78.374±0.012 μGal/V and −78.403±0.075 μGal/V, respectively. The relative accuracy in the time domain is 0.015%. We cannot identify any significant periodicity in the scale factor. The hardware anti-aliasing filter response is tested by injecting known waves into the control electronics of the system. Results show that the anti-aliasing filter response is stable and conforms to the global geodynamics project standards.

  12. Globally Conservative, Hybrid Self-Adjoint Angular Flux and Least-Squares Method Compatible with Void

    CERN Document Server

    Laboure, Vincent M; Wang, Yaqi

    2016-01-01

    In this paper, we derive a method for the second-order form of the transport equation that is both globally conservative and compatible with voids, using Continuous Finite Element Methods (CFEM). The main idea is to use the Least-Squares (LS) form of the transport equation in the void regions and the Self-Adjoint Angular Flux (SAAF) form elsewhere. While the SAAF formulation is globally conservative, the LS formulation needs a correction in void. The price to pay for this fix is the loss of symmetry of the bilinear form. We first derive this Conservative LS (CLS) formulation in void. Second, we combine the SAAF and CLS forms and end up with a hybrid SAAF-CLS method having the desired properties. We show that extending the theory to near-void regions is a minor complication and can be done without affecting the global conservation of the scheme. Being angular-discretization agnostic, this method can be applied to both discrete ordinates (SN) and spherical harmonics (PN) methods. However, since a globally conse...

  13. Least-squares reverse time migration of marine data with frequency-selection encoding

    KAUST Repository

    Dai, Wei

    2013-06-24

    The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share the same receiver locations, thus limiting the usage to seismic surveys with a fixed spread geometry. We implement a frequency-selection encoding strategy that accommodates data with a marine streamer geometry. The encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique nonoverlapping frequency content, and the receivers can distinguish the wavefield from each shot with a unique frequency band. Because the encoding functions are orthogonal to each other, there will be no crosstalk between different shots during modeling and migration. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is comparable to conventional RTM for the Marmousi2 model and a marine data set recorded in the Gulf of Mexico. With more iterations, the LSRTM image quality is further improved by suppressing migration artifacts, balancing reflector amplitudes, and enhancing the spatial resolution. We conclude that LSRTM with frequency-selection is an efficient migration method that can sometimes produce more focused images than conventional RTM. © 2013 Society of Exploration Geophysicists.
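    The core idea of frequency-selection encoding, disjoint frequency bins per shot so that decoding at the receivers is crosstalk-free, can be illustrated on toy 1-D traces (a schematic sketch only, not the LSRTM implementation; the interleaved bin assignment is an illustrative choice):

    ```python
    import numpy as np

    # Frequency-selection encoding on toy traces: give every shot a disjoint
    # set of frequency bins, sum the encoded shots into one blended record,
    # then recover each shot by masking its bins.
    rng = np.random.default_rng(4)
    nt, nshots = 256, 4
    shots = rng.normal(size=(nshots, nt))              # toy shot records
    nbins = nt // 2 + 1                                # rfft bin count
    masks = [(np.arange(nbins) % nshots) == s for s in range(nshots)]

    # Encode: keep only each shot's own bins (bands are mutually orthogonal)
    encoded = [np.fft.irfft(np.fft.rfft(tr) * m, nt) for tr, m in zip(shots, masks)]
    blended = np.sum(encoded, axis=0)                  # single supergather

    # Decode shot 0: masking the blended spectrum leaves no crosstalk
    decoded0 = np.fft.irfft(np.fft.rfft(blended) * masks[0], nt)
    crosstalk = np.abs(decoded0 - encoded[0]).max()
    ```

    Because the masks do not overlap, the decoded trace matches the encoded one to floating-point precision, which is exactly why the abstract reports no crosstalk between shots.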

  14. Least-squares reverse time migration with and without source wavelet estimation

    Science.gov (United States)

    Zhang, Qingchen; Zhou, Hui; Chen, Hanming; Wang, Jie

    2016-11-01

    Least-squares reverse time migration (LSRTM) attempts to find the best-fit reflectivity model by minimizing the mismatch between the observed and simulated seismic data, where source wavelet estimation is one of the crucial issues. We divide the frequency-domain observed seismic data by the numerical Green's function at the receiver nodes to estimate the source wavelet for the conventional LSRTM method, and propose a source-independent LSRTM based on a convolution-based objective function. The numerical Green's function can be simulated with a Dirac wavelet and the migration velocity in the frequency or time domain. Compared to the conventional method with its additional source estimation procedure, the source-independent LSRTM is insensitive to the source wavelet and retains its amplitude-preserving ability even when using an incorrect wavelet without source estimation. In order to improve the anti-noise ability, we apply a robust hybrid-norm objective function to both methods and use synthetic seismic data contaminated by random Gaussian and spike noise with a signal-to-noise ratio of 5 dB to verify their feasibility. The final migration images show that the source-independent algorithm is more robust and has a higher amplitude-preserving ability than the conventional source-estimated method.

  15. Comparison of two terrain extraction algorithms: hierarchical relaxation correlation and global least squares matching

    Science.gov (United States)

    Hermanson, Greg A.; Hinchman, John H.; Rauhala, Urho A.; Mueller, Walter J.

    1993-09-01

    Automated extraction of elevation data from stereo images requires automated image registration followed by photogrammetric mapping into a Digital Elevation Model (DEM). The Digital Production System (DPS) Data Extraction Segment (DE/S) of the Defense Mapping Agency (DMA) currently uses an image pyramid registration technique known as Hierarchical Relaxation Correlation (HRC) to perform Automated Terrain Extraction (ATE). Under an internal research and development project, GDE Systems has developed the Global Least Squares Matching (GLSM) technique of nonlinear estimation, which requires a simultaneous array algebra solution of a dense DEM as part of the matching process. This paper focuses on traditional low-density DEM production, where the coarse-to-fine process of HRC and GLSM is stopped at lower image resolutions once the required DEM quality is reached. Tests were made comparing the HRC and GLSM results at various image resolutions against carefully edited and averaged check points of four cartographers from 1:40,000 and 1:80,000 softcopy stereo models. The results show that both HRC and GLSM far exceed the traditional mapping standard, allowing economic use of lower-resolution source images. GLSM allowed up to five times lower image resolution than HRC, producing acceptable contour plots with no manual edit from 1:40,000 - 800,000 softcopy stereo models vs. the traditional DEM collection from a 1:40,000 analytical stereo model.

  16. Least-squares fitting of time-domain signals for Fourier transform mass spectrometry.

    Science.gov (United States)

    Aushev, Tagir; Kozhinov, Anton N; Tsybin, Yury O

    2014-07-01

    To advance Fourier transform mass spectrometry (FTMS)-based molecular structure analysis, corresponding development of the FTMS signal processing methods and instrumentation is required. Here, we demonstrate utility of a least-squares fitting (LSF) method for analysis of FTMS time-domain (transient) signals. We evaluate the LSF method in the analysis of single- and multiple-component experimental and simulated ion cyclotron resonance (ICR) and Orbitrap FTMS transient signals. Overall, the LSF method allows one to estimate the analytical limits of the conventional instrumentation and signal processing methods in FTMS. Particularly, LSF provides accurate information on initial phases of sinusoidal components in a given transient. For instance, the phase distribution obtained for a statistical set of experimental transients reveals the effect of the first data-point problem in FT-ICR MS. Additionally, LSF might be useful to improve the implementation of the absorption-mode FT spectral representation for FTMS applications. Finally, LSF can find utility in characterization and development of filter-diagonalization method (FDM) MS.

  17. Solution of shallow-water equations using least-squares finite-element method

    Institute of Scientific and Technical Information of China (English)

    Shin-Jye Liang; Jyh-Haw Tang; Ming-Shun Wu

    2008-01-01

    A least-squares finite-element method (LSFEM) for the non-conservative shallow-water equations is presented. The model is capable of handling complex topography, steady and unsteady flows, subcritical and supercritical flows, and flows with smooth and sharp gradient changes. Advantages of the model include: (1) source terms, such as the bottom slope, surface stresses and bed friction, can be treated easily without any special treatment; (2) no upwind scheme is needed; (3) a single approximating space can be used for all variables, and its choice is not subject to the Ladyzhenskaya-Babuska-Brezzi (LBB) condition; and (4) the resulting system of equations is symmetric and positive-definite (SPD), which can be solved efficiently with the preconditioned conjugate gradient method. The model is verified with flow over a bump, tide-induced flow, and dam-break. Computed results are compared with analytic solutions or other numerical results, and show the model is conservative and accurate. The model is then used to simulate flow past a circular cylinder. Important flow characteristics, such as variation of the water surface around the cylinder and vortex shedding behind the cylinder, are investigated. Computed results compare well with experimental data and other numerical results.

  18. POSITIONING BASED ON INTEGRATION OF MUTI-SENSOR SYSTEMS USING KALMAN FILTER AND LEAST SQUARE ADJUSTMENT

    Directory of Open Access Journals (Sweden)

    M. Omidalizarandi

    2013-09-01

    Full Text Available Sensor fusion combines data from different sources in order to build a more accurate model. In this research, different sensors (Optical Speed Sensor, Bosch Sensor, Odometer, XSENS, Silicon and GPS receiver) were utilized to obtain different kinds of datasets, implementing the multi-sensor system and comparing the accuracy of each sensor with the others. The scope of this research is to estimate the current position and orientation of the van. The van's position can also be estimated by integrating its velocity and direction over time. To make these components work together, an interface is needed that can bridge them in a data acquisition module. The interface in this research was developed in the LabVIEW software environment. Data were transferred to a PC via an A/D convertor (LabJack). In order to synchronize all the sensors, the calibration parameters of each sensor were determined in a preparatory step. Each sensor delivers results in a sensor-specific coordinate system with a different location on the object, a different definition of the coordinate axes, and different dimensions and units. Different test scenarios (straight-line approach and circle approach) with different algorithms (Kalman filter, least squares adjustment) were examined, and the results of the different approaches are compared.
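    A minimal 1-D constant-velocity Kalman filter shows the predict/update cycle such fusion schemes rely on (toy GPS-like position fixes only; the noise levels and motion model are illustrative assumptions, not the paper's setup):

    ```python
    import numpy as np

    # 1-D constant-velocity Kalman filter: state is (position, velocity),
    # and only noisy position fixes are observed.
    dt = 1.0
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition
    H = np.array([[1.0, 0.0]])                # observe position only
    Q = 0.01 * np.eye(2)                      # process noise covariance
    R = np.array([[4.0]])                     # measurement noise covariance

    rng = np.random.default_rng(5)
    true_pos = np.arange(50) * 2.0            # ground truth: 2 m/s
    z = true_pos + rng.normal(scale=2.0, size=50)

    x = np.array([0.0, 0.0])                  # initial state guess
    P = 10 * np.eye(2)                        # initial uncertainty
    for zk in z:
        x, P = F @ x, F @ P @ F.T + Q         # predict
        S = H @ P @ H.T + R                   # innovation covariance
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        x = x + K @ (np.array([zk]) - H @ x)  # update with measurement
        P = (np.eye(2) - K @ H) @ P

    vel_est = x[1]                            # fused velocity estimate
    ```

    The same predict/update structure extends to more states and more sensors by stacking measurements into H, R and z.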

  19. Estimation of active pharmaceutical ingredients content using locally weighted partial least squares and statistical wavelength selection.

    Science.gov (United States)

    Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji

    2011-12-15

    Development of quality estimation models using near-infrared spectroscopy (NIRS) and multivariate analysis has accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because the physical and chemical properties of the measured object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS), which utilizes a newly defined similarity between samples, is proposed to estimate active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to conventional PLS using wavelengths selected on the basis of variable importance in projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. PMID:22001843
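    The LW-PLS idea, weighting calibration samples by their similarity to each query before fitting a local PLS model, can be sketched from scratch. Folding the weights in by weighted-mean centering plus sqrt-weight row scaling is one common formulation (not necessarily the paper's similarity definition), and the data below are synthetic, not the NIRS data:

    ```python
    import numpy as np

    def pls1_coef(X, y, a):
        """Plain PLS1 (NIPALS) regression coefficients; X, y pre-centred."""
        Xk, yk = X.copy(), y.copy()
        W, P, q = [], [], []
        for _ in range(a):
            w = Xk.T @ yk
            w /= np.linalg.norm(w)            # weight vector
            t = Xk @ w                        # score
            tt = t @ t
            p = Xk.T @ t / tt                 # loading
            qk = (yk @ t) / tt
            Xk -= np.outer(t, p)              # deflate
            yk -= qk * t
            W.append(w); P.append(p); q.append(qk)
        W, P, q = np.array(W).T, np.array(P).T, np.array(q)
        return W @ np.linalg.solve(P.T @ W, q)   # B = W (P'W)^{-1} q

    def lw_pls_predict(X, y, xq, a=2, phi=1.0):
        """Locally weighted PLS sketch for one query sample xq."""
        d = np.linalg.norm(X - xq, axis=1)
        w = np.exp(-d / (phi * d.std() + 1e-12))  # similarity weights
        w /= w.sum()
        xm, ym = w @ X, w @ y                     # weighted means
        sw = np.sqrt(w)
        B = pls1_coef(sw[:, None] * (X - xm), sw * (y - ym), a)
        return ym + (xq - xm) @ B

    rng = np.random.default_rng(7)
    X = rng.normal(size=(100, 5))
    y = X @ np.array([1.0, 2.0, 0.0, 0.0, -1.0]) + rng.normal(scale=0.05, size=100)
    pred = lw_pls_predict(X, y, X[0])
    ```

    A fresh local model is built per query, which is what lets the method track nonlinearity that a single global PLS model averages away.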

  20. Large-scale computation of incompressible viscous flow by least-squares finite element method

    Science.gov (United States)

    Jiang, Bo-Nan; Lin, T. L.; Povinelli, Louis A.

    1993-01-01

    The least-squares finite element method (LSFEM) based on the velocity-pressure-vorticity formulation is applied to large-scale, three-dimensional, steady incompressible Navier-Stokes problems. This method can accommodate equal-order interpolations and results in a symmetric, positive-definite algebraic system which can be solved effectively by simple iterative methods. The first-order velocity-Bernoulli function-vorticity formulation for incompressible viscous flows is also tested. For three-dimensional cases, an additional compatibility equation, i.e., that the divergence of the vorticity vector should be zero, is included to make the first-order system elliptic. Simple substitution of Newton's method is employed to linearize the partial differential equations, the LSFEM is used to obtain discretized equations, and the system of algebraic equations is solved using the Jacobi-preconditioned conjugate gradient method, which avoids formation of either element or global matrices (matrix-free) to achieve high efficiency. To show the validity of this scheme for large-scale computation, we give numerical results for the 2D driven cavity problem at Re = 10000 with 408 x 400 bilinear elements. The flow in a 3D cavity is calculated at Re = 100, 400, and 1,000 with 50 x 50 x 50 trilinear elements. The Taylor-Goertler-like vortices are observed for Re = 1,000.
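    The matrix-free Jacobi-preconditioned conjugate gradient solver mentioned above can be sketched in a few lines; the 1-D Laplacian test system below is an illustrative stand-in for the LSFEM system, which is likewise symmetric positive-definite:

    ```python
    import numpy as np

    def jacobi_pcg(apply_A, diag_A, b, tol=1e-10, maxit=500):
        """Jacobi-preconditioned conjugate gradients for an SPD system;
        A is only touched through apply_A(x), i.e. matrix-free."""
        x = np.zeros_like(b)
        r = b - apply_A(x)
        z = r / diag_A                    # Jacobi preconditioning step
        p = z.copy()
        rz = r @ z
        for _ in range(maxit):
            Ap = apply_A(p)
            alpha = rz / (p @ Ap)
            x += alpha * p
            r -= alpha * Ap
            if np.linalg.norm(r) < tol:
                break
            z = r / diag_A
            rz_new = r @ z
            p = z + (rz_new / rz) * p
            rz = rz_new
        return x

    # SPD test system: 1-D Laplacian tridiag(-1, 2, -1), applied matrix-free
    n = 100
    def apply_A(v):
        out = 2 * v.copy()
        out[:-1] -= v[1:]
        out[1:] -= v[:-1]
        return out

    b = np.ones(n)
    x = jacobi_pcg(apply_A, np.full(n, 2.0), b)
    residual = np.linalg.norm(b - apply_A(x))
    ```

    Only matrix-vector products and the diagonal are needed, which is what lets the LSFEM solver avoid assembling element or global matrices.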

  1. Partial least squares regression for predicting economic loss of vegetables caused by acid rain

    Institute of Scientific and Technical Information of China (English)

    WANG Ju; MENG He; DONG De-ming; LI Wei; FANG Chun-sheng

    2009-01-01

    To predict the economic loss of crops caused by acid rain, we used partial least squares (PLS) regression to build a model with a single dependent variable: the economic loss calculated from the decrease in yield, related to the pH value and the levels of Ca2+, NH4+, Na+, K+, Mg2+, SO42-, NO3-, and Cl- in acid rain. We selected vegetables which were sensitive to acid rain as the sample crops, and collected 12 groups of data, of which 8 groups were used for modeling and 4 groups for testing. Using the cross-validation method to evaluate the performance of this prediction model indicates that the optimum number of principal components was 3, determined by the minimum of the prediction residual error sum of squares, and that the prediction error of the regression equation ranges from -2.25% to 4.32%. The model predicted that the economic loss of vegetables from acid rain is negatively correlated with pH and the concentrations of NH4+, SO42-, NO3-, and Cl- in the rain, and positively correlated with the concentrations of Ca2+, Na+, K+ and Mg2+. The precision of the model may be improved if the non-linearity of the original data is addressed.
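    Choosing the number of PLS components by the minimum prediction residual error sum of squares (PRESS) under cross-validation, as described, can be sketched with scikit-learn. The data below are synthetic stand-ins with the same sample count of 12; the rain chemistry itself is not simulated:

    ```python
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.model_selection import LeaveOneOut

    rng = np.random.default_rng(6)
    X = rng.normal(size=(12, 9))              # 12 samples, 9 pH/ion predictors
    y = X[:, :3] @ np.array([1.0, -2.0, 0.5]) + rng.normal(scale=0.2, size=12)

    # PRESS for each candidate component count under leave-one-out CV
    press = {}
    for a in range(1, 6):
        resid = []
        for tr, te in LeaveOneOut().split(X):
            m = PLSRegression(n_components=a).fit(X[tr], y[tr])
            resid.append((y[te] - m.predict(X[te]).ravel()) ** 2)
        press[a] = float(np.sum(resid))

    best = min(press, key=press.get)          # component count minimizing PRESS
    ```

    With only 12 samples, leave-one-out is the natural cross-validation choice, and PRESS guards against the overfitting that adding components invites.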

  2. Multi-classification algorithm and its realization based on least square support vector machine algorithm

    Institute of Scientific and Technical Information of China (English)

    Fan Youping; Chen Yunping; Sun Wansheng; Li Yu

    2005-01-01

    As a new type of learning machine developed on the basis of statistical learning theory, the support vector machine (SVM) plays an important role in knowledge discovery and knowledge updating by constructing a non-linear optimal classifier. However, realizing an SVM requires solving a quadratic program under inequality constraints, which becomes computationally difficult as the training set grows. Besides, the standard SVM is incapable of tackling multi-classification. To overcome these bottlenecks, a training algorithm is presented in which the quadratic program is converted into a linear system of equations by adopting the least squares SVM (LS-SVM) and introducing a modifying variable that changes the inequality constraints into equality constraints, which simplifies the calculation. With regard to multi-classification, an LS-SVM applicable to multi-classification is deduced. Finally, the efficiency of the algorithm is checked using the universal circle-in-square and two-spirals problems to measure the performance of the classifier.

  3. Relationship of Fiber Properties to Vortex Yarn Quality via Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Calvin Price

    2009-12-01

    Full Text Available The Cotton Quality Research Station (CQRS) of the USDA-ARS recently completed a comprehensive study of the relationship of cotton fiber properties to the quality of spun yarn. The five-year study, begun in 2001, utilized commercial-variety cotton grown, harvested and ginned in each of three major growing regions in the US (Georgia, Mississippi, and Texas). CQRS made extensive measurements of the raw cotton properties (both physical and chemical) of 154 lots of blended cotton. These lots were then spun into yarn in the CQRS laboratory by vortex spinning, with several characteristics of the yarn and spinning efficiency measured for each lot. This study examines the use of a multivariate statistical method, partial least squares (PLS), to relate fiber properties to spun yarn quality for vortex spinning. Two different sets of predictors were used to forecast yarn quality response variables: one set being only HVI™ variables, and the second set consisting of both HVI™ and AFIS™ variables. The quality of predictions was not found to change significantly with the addition of AFIS™ variables.

  4. A New Least Squares Support Vector Machines Ensemble Model for Aero Engine Performance Parameter Chaotic Prediction

    Directory of Open Access Journals (Sweden)

    Dangdang Du

    2016-01-01

    Full Text Available Aiming at the nonlinearity, chaos, and small sample size of aero engine performance parameter data, a new ensemble model, named the least squares support vector machine (LSSVM) ensemble model with phase space reconstruction (PSR) and particle swarm optimization (PSO), is presented. First, to guarantee the diversity of the individual members, different single-kernel LSSVMs are selected as base predictors, and they output the primary prediction results independently. Then, all the primary prediction results are integrated to produce the most appropriate prediction results by another particular LSSVM, a multiple-kernel LSSVM, which reduces the dependence of modeling accuracy on the kernel function and parameters. Phase space reconstruction theory is applied to extract the chaotic characteristics of the input data source and reconstruct the data sample, and the particle swarm optimization algorithm is used to obtain the best LSSVM individual members. A case study with real operation data of an aero engine is employed to verify the effectiveness of the presented model. The results show that the prediction accuracy of the proposed model improves markedly compared with three other models.

  5. Prediction for human intelligence using morphometric characteristics of cortical surface: partial least square analysis.

    Science.gov (United States)

    Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M

    2013-08-29

    A number of imaging studies have reported neuroanatomical correlates of human intelligence with various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively by considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and the cortical measurements calculated in native space from each subject to determine how much the combination of various cortical measures explained human intelligence. Since the cortical measures are not independent but highly inter-related, we applied partial least squares (PLS) regression, one of the most promising multivariate analysis approaches, to overcome multicollinearity among the cortical measures. Our results showed that 30% of the variance in FSIQ was explained by the first latent variable extracted from the PLS regression analysis. Although it is difficult to relate the first derived latent variable to specific anatomy, we found that the cortical thickness measures had a substantial impact on the PLS model, supporting them as the most significant factor accounting for FSIQ. Our results strongly suggest that a new predictor combining different morphometric properties of the complex cortical structure is well suited for predicting human intelligence. PMID:23643979
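
    PLS regression of the kind used in studies like this one extracts a few latent variables that absorb the multicollinearity among predictors before regressing. A minimal one-response NIPALS sketch (illustrative code, not the study's pipeline) is:

```python
import numpy as np

def pls1_fit(X, y, n_components=2):
    """Minimal PLS1 (NIPALS) regression: extract latent variables that
    cope with multicollinearity, then return coefficients B such that
    y_hat = mean(y) + (X - mean(X)) @ B."""
    Xd = X - X.mean(axis=0)
    yd = y - y.mean()
    W, P, Q = [], [], []
    for _ in range(n_components):
        w = Xd.T @ yd
        w /= np.linalg.norm(w)          # weight vector
        t = Xd @ w                      # latent scores
        p = Xd.T @ t / (t @ t)          # X loadings
        q = (yd @ t) / (t @ t)          # y loading
        Xd = Xd - np.outer(t, p)        # deflate both blocks
        yd = yd - q * t
        W.append(w); P.append(p); Q.append(q)
    W, P, Q = np.array(W), np.array(P), np.array(Q)
    return W.T @ np.linalg.inv(P @ W.T) @ Q   # coefficients in X space

def pls1_predict(Xnew, Xtrain, ytrain, B):
    return ytrain.mean() + (Xnew - Xtrain.mean(axis=0)) @ B
```

    Because each latent variable is a single direction through the correlated predictors, even strongly inter-related measures (like the four cortical metrics here) yield a stable fit where ordinary least squares would not.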

  6. Improving the Robustness and Stability of Partial Least Squares Regression for Near-infrared Spectral Analysis

    Institute of Scientific and Technical Information of China (English)

    SHAO, Xueguang; CHEN, Da; XU, Heng; LIU, Zhichao; CAI, Wensheng

    2009-01-01

    Partial least-squares (PLS) regression has been presented as a powerful tool for spectral quantitative measurement. However, improvement of the robustness and stability of PLS models is still needed, because it is difficult to build a stable model when complex samples are analyzed or outliers are contained in the calibration data set. To this end, a robust ensemble PLS technique based on probability resampling, named RE-PLS, was proposed. In the proposed method, a probability is first obtained for each calibration sample from its residual in a robust regression. Then, multiple PLS models are constructed based on probability resampling. Finally, the multiple PLS models are used to predict unknown samples, taking the average of the predictions from the multiple models as the final prediction result. To validate the effectiveness and universality of the proposed method, it was applied to two different sets of NIR spectra. The results show that RE-PLS can not only effectively avoid the interference of outliers but also enhance the precision of prediction and the stability of PLS regression. Thus, it may provide a useful tool for multivariate calibration with multiple outliers.

  7. Prediction of ferric iron precipitation in bioleaching process using partial least squares and artificial neural network

    Directory of Open Access Journals (Sweden)

    Golmohammadi Hassan

    2013-01-01

    Full Text Available A quantitative structure-property relationship (QSPR) study based on partial least squares (PLS) and an artificial neural network (ANN) was developed for the prediction of ferric iron precipitation in the bioleaching process. The leaching temperature, initial pH, oxidation/reduction potential (ORP), ferrous concentration and particle size of the ore were used as inputs to the network. The output of the model was the ferric iron precipitation. The optimal condition of the neural network was obtained by adjusting various parameters by trial-and-error. After optimization and training of the network according to the back-propagation algorithm, a 5-5-1 neural network was generated for the prediction of ferric iron precipitation. The root mean square errors of the neural-network-calculated ferric iron precipitation for the training, prediction and validation sets are 32.860, 40.739 and 35.890, respectively, which are smaller than those obtained by the PLS model (180.972, 165.047 and 149.950, respectively). The results obtained reveal the reliability and good predictivity of the neural network model for the prediction of ferric iron precipitation in the bioleaching process.

  8. Amplitude differences least squares method applied to temporal cardiac beat alignment

    Science.gov (United States)

    Correa, R. O.; Laciar, E.; Valentinuzzi, M. E.

    2007-11-01

    High-resolution averaged ECG is an important diagnostic technique in post-infarction and/or chagasic patients with high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested on high-resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). Results point out that LSAD produced a lower alignment error in all contaminated records, while in those blurred by power-line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative.
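
    The LSAD idea, sliding the template over the beat and picking the offset with the smallest sum of squared amplitude differences rather than the largest correlation, can be sketched as follows (the synthetic pulse and function name are illustrative, not the authors' code):

```python
import numpy as np

def align_lsad(beat, template):
    """Return the offset of `template` inside `beat` that minimizes the
    sum of squared amplitude differences (the LSAD criterion)."""
    m = len(template)
    ssd = [np.sum((beat[s:s + m] - template) ** 2)
           for s in range(len(beat) - m + 1)]
    return int(np.argmin(ssd))
```

    The returned offset plays the role of the fiducial point; averaging the beats after shifting each by its offset yields the high-resolution averaged ECG.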

  9. Analysis and application of partial least square regression in arc welding process

    Institute of Scientific and Technical Information of China (English)

    YANG Hai-lan; CAI Yan; BAO Ye-feng; ZHOU Yun

    2005-01-01

    Because of the correlation among the process parameters, partial least squares regression (PLSR) was applied to build the model and obtain the regression equation. The improved algorithm greatly simplified the calculating process because of the reduction in computation. An orthogonal design was adopted in this experiment. Every sample had strong representativeness, which could reduce the experimental time and yield comprehensive test data. In connection with the weld formation problem in gas metal arc welding with high current, the auxiliary analysis technique of PLSR was discussed, and the regression equation relating the form factors (i.e., surface width, weld penetration, and weld reinforcement) to the process parameters (i.e., wire feed rate, wire extension, welding speed, gas flow, welding voltage, and welding current) was given. The correlation structure among the variables was analyzed, and there was a certain correlation between the independent-variables matrix X and the dependent-variables matrix Y. The regression analysis shows that the welding speed mainly influences the weld formation, while variation of the gas flow within a certain range has little influence on weld formation. The fitting plot of regression accuracy is given. The fitting quality of the regression equation is basically satisfactory.

  10. Hybrid partial least squares and neural network approach for short-term electrical load forecasting

    Institute of Scientific and Technical Information of China (English)

    Shukang YANG; Ming LU; Huifeng XUE

    2008-01-01

    Intelligent systems and methods such as the neural network (NN) are usually used in electric power systems for short-term electrical load forecasting. However, a vast amount of electrical load data is often redundant, and linearly or nonlinearly correlated with each other. Highly correlated input data can result in erroneous prediction results given by an NN model. Besides this, the determination of the topological structure of an NN model has always been a problem for designers. This paper presents a new artificial-intelligence hybrid procedure for next-day electric load forecasting based on partial least squares (PLS) and an NN. PLS is used for the compression of the data input space, and helps to determine the structure of the NN model. The hybrid PLS-NN model can be used to predict hourly electric load on weekdays and weekends. The advantage of this methodology is that the hybrid model provides faster convergence and more precise prediction results than the abductive networks algorithm. Extensive testing on the electrical load data of the Puget power utility in the USA confirms the validity of the proposed approach.

  11. SEMIAUTOMATIC BUILDING EXTRACTION FROM STEREOIMAGE PAIR BASED ON LINES GROUPING AND LEAST SQUARES MATCHING

    Institute of Scientific and Technical Information of China (English)

    2001-01-01

    The paper presents a general paradigm of semiautomatic building extraction from an aerial stereo image pair. In the semiautomatic extraction system, the building model is defined by selecting the roof type through a human-machine interface and inputting the approximate area where the building to be extracted exists. Then, under the knowledge of the roof type, low-level and mid-level processing, including edge detection, straight-line segment extraction, and line segment grouping, is used to establish the initial geometrical model of the roof-top. However, the initial geometrical model is not very accurate in geometry. To attain accurate results, a general least squares adjustment integrating the linear template matching model with geometrical constraints in object space is applied to refine the initial geometrical model. The adjustment model, integrating the straight-edge pattern and 3D constraints together, is a well-studied optimal and anti-noise method. After obtaining proper initial values, this adjustment model can flexibly process the extraction of various roof types by changing or assembling the geometrical constraints in object space.

  12. MANUFACTURING AND CONTINUOUS IMPROVEMENT AREAS USING PARTIAL LEAST SQUARE PATH MODELING WITH MULTIPLE REGRESSION COMPARISON

    Directory of Open Access Journals (Sweden)

    Carlos Monge Perry

    2014-07-01

    Full Text Available Structural equation modeling (SEM) has traditionally been deployed in areas of marketing, consumer satisfaction and preferences, human behavior, and recently in strategic planning. These areas are considered its niches; however, there is a remarkable tendency in empirical research studies indicating a more diversified use of the technique. This paper shows the application of structural equation modeling using partial least squares (PLS-SEM) in areas of manufacturing, quality, continuous improvement, operational efficiency, and environmental responsibility in Mexico's medium and large manufacturing plants, while using a small sample (n = 40). The results obtained from the PLS-SEM model application are highly positive, relevant, and statistically significant. Also shown in this paper, for purposes of confirming the validity, reliability, and statistical power of PLS-SEM, is a comparative analysis against multiple regression, showing very similar results to those obtained by PLS-SEM. This fact validates the use of PLS-SEM in areas of non-traditional scientific research, and suggests and invites the use of the technique in diversified fields of scientific research.

  13. Temporal parameter change of human postural control ability during upright swing using recursive least square method

    Science.gov (United States)

    Goto, Akifumi; Ishida, Mizuri; Sagawa, Koichi

    2010-01-01

    The purpose of this study is to derive quantitative assessment indicators of human postural control ability. An inverted pendulum model is applied to the standing human body and is controlled by ankle joint torque according to a PD control method in the sagittal plane. Torque control parameters (KP: proportional gain; KD: derivative gain) and the pole placements of the postural control system are estimated over time from the inclination angle variation using the fixed-trace method, a recursive least squares method. Eight young healthy volunteers participated in the experiment, in which they were asked to incline forward as far and as fast as possible 10 times, with 10 s stationary intervals, keeping their neck, hip, and knee joints fixed, and then return to the initial upright posture. The inclination angle was measured by an optical motion capture system. Three conditions were introduced to simulate unstable standing posture: 1) eyes-open posture for the healthy condition, 2) eyes-closed posture for visual impairment, and 3) one-legged posture for lower-extremity muscle weakness. The estimated parameters KP, KD, and the pole placements were subjected to a multiple comparison test among all stability conditions. The test results indicate that KP, KD, and the real pole reflect the effect of lower-extremity muscle weakness, and KD also represents the effect of visual impairment. It is suggested that the proposed method is valid for quantitative assessment of standing postural control ability.
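
    Recursive least squares of the kind used here updates the parameter estimate sample by sample as the motion unfolds. The generic forgetting-factor sketch below omits the fixed-trace rescaling of the covariance matrix that the paper's variant adds; all names and values are illustrative:

```python
import numpy as np

def rls(phis, ys, lam=0.98, delta=100.0):
    """Recursive least squares with forgetting factor `lam` for the
    model y_k = phi_k . theta + noise. A generic sketch: the fixed-trace
    variant additionally rescales P to keep trace(P) constant."""
    p = phis.shape[1]
    theta = np.zeros(p)            # parameter estimate
    P = delta * np.eye(p)          # inverse-information (covariance) matrix
    history = []
    for phi, y in zip(phis, ys):
        k = P @ phi / (lam + phi @ P @ phi)    # gain vector
        theta = theta + k * (y - phi @ theta)  # correct by prediction error
        P = (P - np.outer(k, phi @ P)) / lam   # covariance update
        history.append(theta.copy())
    return np.array(history)
```

    For the posture task, phi_k would hold the inclination angle and angular velocity, and theta the PD gains (KP, KD) tracked over time.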

  14. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure-activity relationship (QSAR) method is an effective assessment tool for studying the placental transfer of drugs, while in vitro human placental perfusion is the most widely used experimental method. In this study, a partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance index (CI). The model was subjected to internal validation by cross-validation and y-randomization, and to external validation by predicting the CI values of 19 compounds. It was shown that the model developed is robust and has good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). The mechanistic interpretation of the final model was given by the high variable-importance-in-projection values of the descriptors. Using the PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for high-throughput screening of the placental barrier permeability of drugs.

  15. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

    Science.gov (United States)

    Brooks, Wesley R.; Fienen, Michael N.; Corsi, Steven R.

    2013-01-01

    At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.
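
    The tuning step described, choosing the decision threshold that separates predicted exceedances from predicted non-exceedances, might look like the following sketch. The target-sensitivity rule used here is an illustrative choice, not the paper's criterion:

```python
import numpy as np

def pick_threshold(pred, exceed, target_sensitivity=0.9):
    """Largest decision threshold on predicted FIB levels that still
    flags at least `target_sensitivity` of the true exceedance days.
    `exceed` is a boolean array of observed standard exceedances."""
    for thr in np.sort(pred)[::-1]:        # scan thresholds high to low
        flagged = pred >= thr
        sensitivity = (flagged & exceed).sum() / exceed.sum()
        if sensitivity >= target_sensitivity:
            return thr
    return pred.min()                      # fall back: flag everything
```

    A higher threshold posts fewer advisories but misses more true exceedances; scanning from the top finds the least disruptive threshold that still meets the safety target.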

  16. Bias correction for the least squares estimator of Weibull shape parameter with complete and censored data

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L.F. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore); Xie, M. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Tang, L.C. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore)

    2006-08-15

    Estimation of the Weibull shape parameter is important in reliability engineering. However, commonly used methods such as the maximum likelihood estimation (MLE) and the least squares estimation (LSE) are known to be biased. Bias correction methods for MLE have been studied in the literature. This paper investigates methods for bias correction when the model parameters are estimated with LSE based on the probability plot. The Weibull probability plot is very simple and commonly used by practitioners, and hence such a study is useful. The bias of the LS shape parameter estimator for multiply censored data is also examined. It is found that the bias can be modeled as a function of the sample size and the censoring level, and is mainly dependent on the latter. A simple bias function is introduced and bias-correcting formulas are proposed for both complete and censored data. Simulation results are also presented. The bias correction methods proposed are very easy to use and they can typically reduce the bias of the LSE of the shape parameter to less than half a percent.
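
    The probability-plot LSE that the paper bias-corrects can be sketched as follows for complete data. Median-rank plotting positions are one common choice; the bias-correction formula itself is not reproduced here:

```python
import numpy as np

def weibull_lse_shape(x):
    """Least squares estimate of the Weibull shape parameter from the
    probability plot, for complete (uncensored) data; no bias correction."""
    x = np.sort(np.asarray(x, dtype=float))
    n = len(x)
    F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)  # median-rank CDF estimate
    u = np.log(x)                                # ln(t)
    v = np.log(-np.log(1.0 - F))                 # ln(-ln(1-F)), linear in ln(t)
    slope, _ = np.polyfit(u, v, 1)               # slope of the plot = shape
    return slope
```

    The plot is linear because the Weibull CDF satisfies ln(-ln(1-F(t))) = beta*ln(t) - beta*ln(eta), so the LS slope estimates the shape beta; at small sample sizes this slope is biased, which is what the correction formulas address.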

  17. Aircraft Flutter Modal Parameter Identification Using a Numerically Robust Least-squares Estimator in Frequency Domain

    Institute of Scientific and Technical Information of China (English)

    Tang Wei; Shi Zhongke; Chen Jie

    2008-01-01

    Recently, frequency-domain least-squares (LS) estimators have found wide application in identifying aircraft flutter parameters. However, frequency methods are often known to suffer from numerical difficulties when identifying a continuous-time model, especially one of broad frequency range or high order. In this article, a numerically robust LS estimator based on vector orthogonal polynomials is proposed to solve the numerical problem of multivariable systems and is applied to flutter testing. The key idea of this method is to represent the frequency response function (FRF) matrix by a right matrix fraction description (RMFD) model, and to expand the numerator and denominator polynomial matrices on a vector orthogonal basis. As a result, a perfect numerical condition (condition number equal to 1) can be obtained for the linear LS estimator. Finally, this method is verified by a flutter test of a wing model in a wind tunnel and a real flight flutter test of an aircraft. The results are compared to those of the well-known LMS PolyMAX, which is not troubled by the numerical problem as it is formulated in the z domain (i.e., derived from a discrete-time model). The verification shows that this method, apart from overcoming the numerical problem, yields results comparable to those acquired with LMS PolyMAX, or even considerably better in some frequency bands.

  18. A variant of sparse partial least squares for variable selection and data exploration

    Directory of Open Access Journals (Sweden)

    Megan Jodene Olson Hunt

    2014-03-01

    Full Text Available When data are sparse and/or predictors multicollinear, the current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed all-possible SPLS is proposed, which fits a SPLS model for all tuning parameter values across a set grid. Noted is the percentage of time a given predictor is chosen, as well as the average non-zero parameter estimate. Using a large number of multicollinear predictors, simulation confirmed that variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors.

  19. A variant of sparse partial least squares for variable selection and data exploration.

    Science.gov (United States)

    Olson Hunt, Megan J; Weissfeld, Lisa; Boudreau, Robert M; Aizenstein, Howard; Newman, Anne B; Simonsick, Eleanor M; Van Domelen, Dane R; Thomas, Fridtjof; Yaffe, Kristine; Rosano, Caterina

    2014-01-01

    When data are sparse and/or predictors multicollinear, current implementation of sparse partial least squares (SPLS) does not give estimates for non-selected predictors nor provide a measure of inference. In response, an approach termed "all-possible" SPLS is proposed, which fits a SPLS model for all tuning parameter values across a set grid. Noted is the percentage of time a given predictor is chosen, as well as the average non-zero parameter estimate. Using a "large" number of multicollinear predictors, simulation confirmed variables not associated with the outcome were least likely to be chosen as sparsity increased across the grid of tuning parameters, while the opposite was true for those strongly associated. Lastly, variables with a weak association were chosen more often than those with no association, but less often than those with a strong relationship to the outcome. Similarly, predictors most strongly related to the outcome had the largest average parameter estimate magnitude, followed by those with a weak relationship, followed by those with no relationship. Across two independent studies regarding the relationship between volumetric MRI measures and a cognitive test score, this method confirmed a priori hypotheses about which brain regions would be selected most often and have the largest average parameter estimates. In conclusion, the percentage of time a predictor is chosen is a useful measure for ordering the strength of the relationship between the independent and dependent variables, serving as a form of inference. The average parameter estimates give further insight regarding the direction and strength of association. As a result, all-possible SPLS gives more information than the dichotomous output of traditional SPLS, making it useful when undertaking data exploration and hypothesis generation for a large number of potential predictors.
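
    The "percentage of time a predictor is chosen" tally can be illustrated with a toy version in which the first PLS weight vector is thresholded at each sparsity level across a grid. This sketches the bookkeeping only, not the full SPLS estimator; the grid and threshold rule are illustrative:

```python
import numpy as np

def selection_percentage(X, y, etas=None):
    """Toy 'all-possible SPLS' tally: threshold the first PLS direction
    w = X'y at each sparsity level eta on a grid and record how often
    each predictor survives across the grid."""
    if etas is None:
        etas = np.linspace(0.0, 0.95, 20)    # grid of sparsity levels
    Xc = X - X.mean(axis=0)
    yc = y - y.mean()
    w = Xc.T @ yc                            # first PLS weight vector
    chosen = np.zeros(X.shape[1])
    for eta in etas:
        keep = np.abs(w) > eta * np.abs(w).max()   # sparsity threshold
        chosen += keep
    return chosen / len(etas)
```

    Predictors strongly related to the outcome survive most thresholds and score near 1, weakly related ones survive only the loose thresholds, and unrelated ones are rarely kept, which is the ordering the abstract describes.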

  20. Bayesian inference for data assimilation using Least-Squares Finite Element methods

    International Nuclear Information System (INIS)

    It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow. The approach was shown to be effective without regularization terms, and can handle substantial noise in the experimental data without filtering. Of great practical importance is that - unlike other data assimilation techniques - it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse problem framework, so that, for example, the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code does not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. The Bayesian interpretation also better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.

  1. Prediction of aged red wine aroma properties from aroma chemical composition. Partial least squares regression models.

    Science.gov (United States)

    Aznar, Margarita; López, Ricardo; Cacho, Juan; Ferreira, Vicente

    2003-04-23

    Partial least squares regression (PLSR) models able to predict some of the wine aroma nuances from its chemical composition have been developed. The aromatic sensory characteristics of 57 Spanish aged red wines were determined by 51 experts from the wine industry. The individual descriptions given by the experts were recorded, and the frequency with which a sensory term was used to define a given wine was taken as a measurement of its intensity. The aromatic chemical composition of the wines was determined by previously published gas chromatography (GC)-flame ionization detection and GC-mass spectrometry methods. In all, 69 odorants were analyzed. Both matrices, the sensory and chemical data, were simplified by grouping and rearranging correlated sensory terms or chemical compounds and by the exclusion of secondary aroma terms or of weak aroma chemicals. Finally, models were developed for 18 sensory terms and 27 chemicals or groups of chemicals. Satisfactory models, explaining more than 45% of the original variance, could be found for nine of the most important sensory terms (wood-vanillin-cinnamon, animal-leather-phenolic, toasted-coffee, old wood-reduction, vegetal-pepper, raisin-flowery, sweet-candy-cacao, fruity, and berry fruit). For this set of terms, the correlation coefficients between the measured and predicted Y (determined by cross-validation) ranged from 0.62 to 0.81. The models confirmed the existence of complex multivariate relationships between chemicals and odors. In general, pleasant descriptors were positively correlated with chemicals having pleasant aromas, such as vanillin, beta-damascenone, or (E)-beta-methyl-gamma-octalactone, and negatively correlated with compounds showing less favorable odor properties, such as 4-ethyl and vinyl phenols, 3-(methylthio)-1-propanol, or phenylacetaldehyde.

  2. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    Science.gov (United States)

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences remains often a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219

  3. Identifying grey matter changes in schizotypy using partial least squares correlation.

    Science.gov (United States)

    Wiebels, Kristina; Waldie, Karen E; Roberts, Reece P; Park, Haeme R P

    2016-08-01

    Neuroimaging research into the brain structure of schizophrenia patients has shown consistent reductions in grey matter volume relative to healthy controls. Examining structural differences in individuals with high levels of schizotypy may help elucidate the course of disorder progression, and provide further support for the schizotypy-schizophrenia continuum. Thus far, the few studies investigating grey matter differences in schizotypy have produced inconsistent results. In the current study, we used a multivariate partial least squares (PLS) approach to clarify the relationship between psychometric schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experiences) and grey matter volume in 49 healthy adults. We found a negative association between all schizotypy dimensions and grey matter volume in the frontal and temporal lobes, as well as the insula. We also found a positive association between all schizotypy dimensions and grey matter volume in the parietal and temporal lobes, and in subcortical regions. Further correlational analyses revealed that positive and disorganised schizotypy were strongly associated with key regions (left superior temporal gyrus and insula) most consistently reported to be affected in schizophrenia and schizotypy. We also compared PLS with the typically used General Linear Model (GLM) and demonstrate that PLS can be reliably used as an extension to voxel-based morphometry (VBM) data. This may be particularly valuable for schizotypy research due to PLS' ability to detect small, but reliable effects. Together, the findings indicate that healthy schizotypal individuals exhibit structural changes in regions associated with schizophrenia. This adds to the evidence of an overlap of phenotypic expression between schizotypy and schizophrenia, and may help establish biological endophenotypes for the disorder. PMID:27208815

  4. Using a partial least squares (PLS) method for estimating cyanobacterial pigments in eutrophic inland waters

    Science.gov (United States)

    Robertson, A. L.; Li, L.; Tedesco, L.; Wilson, J.; Soyeux, E.

    2009-08-01

    Midwestern lakes and reservoirs are commonly exposed to anthropogenic eutrophication. Cyanobacteria thrive in these nutrient-rich waters, and some species pose three threats: 1) taste and odor (drinking), 2) toxins (drinking + recreational) and 3) disturbance of the water treatment process. Managers of drinking water production are interested in the rapid identification of cyanobacterial blooms to minimize effects caused by harmful cyanobacteria. There is potential to monitor cyanobacteria through the remote sensing of two algal pigments: chlorophyll a (CHL) and phycocyanin (PC). Several empirical methods that develop spectral parameters (e.g., simple band ratios) sensitive to these two pigments and map reflectance to pigment concentration have been used in a number of investigations with field-based spectroradiometers. This study tests a multivariate analysis approach, partial least squares (PLS) regression, for the estimation of CHL and PC. PLS models were trained with 35 spectra collected from three central Indiana reservoirs during a 2007 field campaign with dual-headed Ocean Optics USB4000 field spectroradiometers (355-802 nm, nominal 1.0 nm intervals), with CHL and PC concentrations of the corresponding water samples analyzed at Indiana University-Purdue University Indianapolis. Validation of these models with the 19 remaining spectra showed that PLS (CHL: R2=0.90, slope=0.91, RMSE=20.61 μg/L; PC: R2=0.65, slope=1.15, RMSE=23.04 μg/L) performed as well as the band-tuning model based on Gitelson et al. 2005 (CHL: R2=0.75, slope=0.84, RMSE=40.16 μg/L; PC: R2=0.59, slope=1.14, RMSE=20.24 μg/L).

  5. Detection of epileptic seizure in EEG signals using linear least squares preprocessing.

    Science.gov (United States)

    Roshan Zamir, Z

    2016-09-01

    An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be obstructed by detection of electrical changes in the brain that happen before the seizure takes place. The automatic detection of seizures is necessary since the visual screening of EEG recordings is a time-consuming task and requires experts to improve the diagnosis. Much of the prior research in seizure detection has been based on artificial neural networks, genetic programming, and wavelet transforms. Although the highest achieved classification accuracy is 100%, there are drawbacks, such as the existence of unbalanced datasets and the lack of investigation into performance consistency. To address these, four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed. The original signal (EEG) is approximated by a sinusoidal curve. Its amplitude is formed by a polynomial function and compared with the predeveloped spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates and precision, are utilised to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods with a classification accuracy of 100%. Logistic, LazyIB1, LazyIB5, and J48 are the best classifiers. Their true positive and negative rates are 1 while false positive and negative rates are 0 and the corresponding precision values are 1. 
Numerical results suggest that these
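The core idea of the first model above, approximating the signal by a sinusoid whose amplitude is a polynomial, reduces to ordinary linear least squares, because the model is linear in its coefficients. A minimal sketch on a synthetic EEG-like segment (the 10 Hz frequency and polynomial degree are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy EEG-like segment: a sinusoid whose amplitude drifts over time, plus noise.
t = np.linspace(0.0, 1.0, 256)
omega = 2 * np.pi * 10.0  # assume a dominant 10 Hz rhythm
signal = (1.0 + 0.8 * t) * np.sin(omega * t) + rng.normal(scale=0.1, size=t.size)

# Linear least-squares design: polynomial-modulated sine/cosine pair, i.e.
# sum_k t^k (a_k sin wt + b_k cos wt) with polynomial degree 2. The fit is
# linear in the coefficients, so np.linalg.lstsq applies directly.
degree = 2
columns = []
for k in range(degree + 1):
    columns.append(t**k * np.sin(omega * t))
    columns.append(t**k * np.cos(omega * t))
A = np.column_stack(columns)

coef, *_ = np.linalg.lstsq(A, signal, rcond=None)
approx = A @ coef

# The handful of coefficients (here 6) can then serve as low-dimensional
# features for a classifier, replacing the 256-sample raw segment.
rel_err = np.linalg.norm(signal - approx) / np.linalg.norm(signal)
print(f"relative residual: {rel_err:.3f}")
```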

  7. Statistical CT noise reduction with multiscale decomposition and penalized weighted least squares in the projection domain

    Energy Technology Data Exchange (ETDEWEB)

    Tang Shaojie; Tang Xiangyang [Imaging and Medical Physics, Department of Radiology and Imaging Sciences, Emory University School of Medicine, 1701 Uppergate Dr., C-5018, Atlanta, Georgia 30322 (United States); School of Automation, Xi'an University of Posts and Telecommunications, Xi'an, Shaanxi 710121 (China)

    2012-09-15

    Purpose: The suppression of noise in x-ray computed tomography (CT) imaging is of clinical relevance for diagnostic image quality and the potential for radiation dose saving. Toward this purpose, statistical noise reduction methods in either the image or projection domain have been proposed, which employ a multiscale decomposition to enhance the performance of noise suppression while maintaining image sharpness. Recognizing the advantages of noise suppression in the projection domain, the authors propose a projection domain multiscale penalized weighted least squares (PWLS) method, in which the angular sampling rate is explicitly taken into consideration to account for the possible variation of inter-view sampling rate in advanced clinical or preclinical applications. Methods: The projection domain multiscale PWLS method is derived by converting an isotropic diffusion partial differential equation in the image domain into the projection domain, wherein a multiscale decomposition is carried out. With adoption of the Markov random field or soft thresholding objective function, the projection domain multiscale PWLS method deals with noise at each scale. To compensate for the degradation in image sharpness caused by the projection domain multiscale PWLS method, an edge enhancement is carried out following the noise reduction. The performance of the proposed method is experimentally evaluated and verified using projection data simulated by computer and acquired by a CT scanner. Results: The preliminary results show that the proposed projection domain multiscale PWLS method outperforms the projection domain single-scale PWLS method and the image domain multiscale anisotropic diffusion method in noise reduction. In addition, the proposed method can preserve image sharpness very well while the occurrence of 'salt-and-pepper' noise and mosaic artifacts can be avoided. Conclusions: Since the inter-view sampling rate is taken into account in the projection domain
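The PWLS principle described above can be illustrated on a toy 1D "projection" (single scale, quadratic roughness penalty, no edge enhancement; all parameters are illustrative, not the paper's):

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy 1D "projection" with signal-dependent (Poisson-like) noise.
true = np.concatenate([np.full(40, 10.0), np.full(40, 30.0), np.full(40, 10.0)])
noisy = true + rng.normal(scale=np.sqrt(true))

# Penalized weighted least squares with a quadratic roughness penalty:
#   minimize (y - x)' W (y - x) + beta * ||D x||^2
# where W = diag(1/variance) downweights noisier samples and D is the
# first-difference operator. For this quadratic case the minimizer solves
# the linear system (W + beta D'D) x = W y.
w = 1.0 / true  # ideal weights; in practice estimated from the data
W = np.diag(w)
n = true.size
D = np.eye(n, k=1)[: n - 1] - np.eye(n)[: n - 1]
beta = 2.0
x = np.linalg.solve(W + beta * D.T @ D, w * noisy)

print(np.abs(noisy - true).mean(), np.abs(x - true).mean())
```

With these settings the denoised estimate has a clearly lower mean absolute error than the raw noisy projection, at the cost of some blurring across the two step edges (which is what motivates the paper's edge-enhancement stage).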

  8. Tropospheric refractivity and zenith path delays from least-squares collocation of meteorological and GNSS data

    Science.gov (United States)

    Wilgan, Karina; Hurter, Fabian; Geiger, Alain; Rohm, Witold; Bosy, Jarosław

    2016-08-01

    Precise positioning requires an accurate a priori troposphere model to enhance the solution quality. Several empirical models are available, but they may not properly characterize the state of the troposphere, especially in severe weather conditions. Another possible solution is to use regional troposphere models based on real-time or near-real-time measurements. In this study, we present total refractivity and zenith total delay (ZTD) models based on a numerical weather prediction (NWP) model, Global Navigation Satellite System (GNSS) data and ground-based meteorological observations. We reconstruct the total refractivity profiles over the western part of Switzerland, and the total refractivity profiles as well as ZTDs over Poland, using the least-squares collocation software COMEDIE (Collocation of Meteorological Data for Interpretation and Estimation of Tropospheric Pathdelays) developed at ETH Zürich. In these two case studies, profiles of the total refractivity and ZTDs are calculated from different data sets. For Switzerland, the data set with the best agreement with the reference radiosonde (RS) measurements is the combination of ground-based meteorological observations and GNSS ZTDs. Introducing the horizontal gradients does not improve the vertical interpolation, and results in slightly larger biases and standard deviations. For Poland, the data sets based on meteorological parameters from the NWP Weather Research and Forecasting (WRF) model and on a combination of the NWP model and GNSS ZTDs show the best agreement with the reference RS data. In terms of ZTD, the combined NWP-GNSS and GNSS-only data sets exhibit the best accuracy, with an average bias (over all stations) of 3.7 mm and average standard deviations of 17.0 mm w.r.t. the reference GNSS stations.

  9. Partial least square modeling of hydrolysis: analyzing the impacts of pH and acetate

    Institute of Scientific and Technical Information of China (English)

    LÜ Fan; HE Pin-jing; SHAO Li-ming

    2006-01-01

    pH and volatile fatty acids may both affect the further hydrolysis of particulate solid waste, which is the rate-limiting step of anaerobic digestion. To clarify the individual effects of pH and volatile fatty acids, batch experiments were conducted at fixed pH values (pH 5-9) with or without acetate (20 g/L). The hydrolysis efficiencies of carbohydrate and protein were evaluated by the carbon and nitrogen content of the solids, amylase activity and proteinase activity. The trend of carbohydrate hydrolysis with pH was not affected by the addition of acetate, following the sequence pH 7>pH 8>pH 9>pH 6>pH 5, but the inhibition by acetate (20 g/L) was obvious, at 10%-60%. The evolution of residual nitrogen showed that the effect of pH on protein hydrolysis was minor, while acetate was seriously inhibitory, by 45%-100%, especially under alkaline conditions. The relationship between the factors (pH and acetate) and the response variables was evaluated by partial least squares (PLS) modeling. The PLS analysis demonstrated that carbohydrate hydrolysis was affected by both pH and acetate, with pH the more important factor. Therefore, the inhibition of carbohydrate hydrolysis by acetate was mainly due to the corresponding decline in pH rather than to the acetate species itself, whereas the acetate species was the dominant factor for protein hydrolysis.

  10. Prediction of Biomass Production and Nutrient Uptake in Land Application Using Partial Least Squares Regression Analysis

    Directory of Open Access Journals (Sweden)

    Vasileios A. Tzanakakis

    2014-12-01

    Full Text Available Partial Least Squares Regression (PLSR) can integrate a great number of variables and overcome collinearity problems, a fact that makes it suitable for intensive agronomical practices such as land application. In the present study a PLSR model was developed to predict important management goals, including biomass production and nutrient recovery (i.e., nitrogen and phosphorus), associated with treatment potential, environmental impacts, and economic benefits. Effluent loading and a considerable number of soil parameters commonly monitored in effluent-irrigated lands were considered as potential predictor variables during model development. All data were derived from a three-year field trial including plantations of four different plant species (Acacia cyanophylla, Eucalyptus camaldulensis, Populus nigra, and Arundo donax), irrigated with pre-treated domestic effluent. The PLSR method was very effective despite the small sample size and the wide nature of the data set (with many highly correlated inputs and several highly correlated responses). Through the PLSR method the number of initial predictor variables was reduced, and only a few variables remained and were included in the final PLSR model. The important input variables retained were: effluent loading, electrical conductivity (EC), available phosphorus (Olsen-P), Na+, Ca2+, Mg2+, K+, SAR, and NO3−-N. Among these variables, effluent loading, EC, and nitrates had the greatest contribution to the final PLSR model. PLSR is highly compatible with intensive agronomical practices such as land application, in which a large number of highly collinear and noisy input variables is monitored to assess plant species performance and to detect impacts on the environment.

  11. Multilocus association testing of quantitative traits based on partial least-squares analysis.

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    Full Text Available Because they combine the genetic information of multiple loci, multilocus association studies (MLAS) are expected to be more powerful than single-locus association studies (SLAS) in disease gene mapping. However, some researchers found that MLAS had similar or reduced power relative to SLAS, which was partly attributed to the increased degrees of freedom (dfs) in MLAS. Based on partial least-squares (PLS) analysis, we develop a MLAS approach that avoids large dfs in MLAS. In this approach, genotypes are first decomposed into PLS components that not only capture the majority of the genetic information of multiple loci, but are also relevant for target traits. The extracted PLS components are then regressed on target traits to detect association under multilinear regression. A simulation study based on real data from the HapMap project was used to assess the performance of our PLS-based MLAS as well as other popular multilinear regression-based MLAS approaches under various scenarios, considering genetic effects and the linkage disequilibrium structure of candidate genetic regions. Using the PLS-based MLAS approach, we conducted a genome-wide MLAS of lean body mass and compared it with our previous genome-wide SLAS of lean body mass. Simulation and real data analysis results support the improved power of our PLS-based MLAS in disease gene mapping relative to the other three MLAS approaches investigated in this study. We aim to provide an effective and powerful MLAS approach, which may help to overcome the limitations of SLAS in disease gene mapping.

  12. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    Science.gov (United States)

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e. design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models.
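The ANOVA partitioning step that underlies this family of methods (as in ASCA-type approaches) can be sketched in a few lines for a balanced two-factor design; the multiblock OPLS modelling itself is beyond this sketch, and all dimensions below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy two-factor full factorial design: factor A (2 levels) x factor B
# (3 levels), 4 replicates, 50 variables (e.g. metabolite features).
a_levels, b_levels, reps, n_vars = 2, 3, 4, 50
A = np.repeat(np.arange(a_levels), b_levels * reps)
B = np.tile(np.repeat(np.arange(b_levels), reps), a_levels)
X = rng.normal(size=(A.size, n_vars))

# ANOVA decomposition of X into effect submatrices:
# X = mean + X_A + X_B + X_AB + residual.
mean = X.mean(axis=0, keepdims=True)
Xc = X - mean

def effect(matrix, labels):
    """Replace each row by its group mean (one group per factor level)."""
    out = np.zeros_like(matrix)
    for lv in np.unique(labels):
        out[labels == lv] = matrix[labels == lv].mean(axis=0)
    return out

X_A = effect(Xc, A)
X_B = effect(Xc, B)
X_AB = effect(Xc, A * b_levels + B) - X_A - X_B   # cell means minus main effects
resid = Xc - X_A - X_B - X_AB

# The submatrices add back to the data exactly; each one (or, in a multiblock
# model, all of them jointly) is then analysed with a latent-variable method
# to assess the corresponding effect.
print(np.allclose(X_A + X_B + X_AB + resid + mean, X))
```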

  13. Multimodal Classification of Mild Cognitive Impairment Based on Partial Least Squares.

    Science.gov (United States)

    Wang, Pingyue; Chen, Kewei; Yao, Li; Hu, Bin; Wu, Xia; Zhang, Jiacai; Ye, Qing; Guo, Xiaojuan

    2016-08-10

    In recent years, increasing attention has been given to the identification of the conversion of mild cognitive impairment (MCI) to Alzheimer's disease (AD). Brain neuroimaging techniques have been widely used to support the classification or prediction of MCI. The present study combined magnetic resonance imaging (MRI), 18F-fluorodeoxyglucose PET (FDG-PET), and 18F-florbetapir PET (florbetapir-PET) to discriminate MCI converters (MCI-c, individuals with MCI who convert to AD) from MCI non-converters (MCI-nc, individuals with MCI who have not converted to AD in the follow-up period) based on the partial least squares (PLS) method. Two types of PLS models (informed PLS and agnostic PLS) were built based on 64 MCI-c and 65 MCI-nc from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The results showed that the three-modality informed PLS model achieved better classification accuracy of 81.40%, sensitivity of 79.69%, and specificity of 83.08% compared with the single-modality model, and the three-modality agnostic PLS model also achieved better classification compared with the two-modality model. Moreover, combining the three modalities with clinical test score (ADAS-cog), the agnostic PLS model (independent data: florbetapir-PET; dependent data: FDG-PET and MRI) achieved optimal accuracy of 86.05%, sensitivity of 81.25%, and specificity of 90.77%. In addition, the comparison of PLS, support vector machine (SVM), and random forest (RF) showed greater diagnostic power of PLS. These results suggested that our multimodal PLS model has the potential to discriminate MCI-c from the MCI-nc and may therefore be helpful in the early diagnosis of AD. PMID:27567818

  14. Stable least-squares matching for oblique images using bound constrained optimization and a robust loss function

    Science.gov (United States)

    Hu, Han; Ding, Yulin; Zhu, Qing; Wu, Bo; Xie, Linfu; Chen, Min

    2016-08-01

    Least-squares matching is a standard procedure in photogrammetric applications for obtaining sub-pixel accuracy of image correspondences. However, least-squares matching has also been criticized for its instability, which is primarily reflected in its requirements for good initial correspondences and favorable image quality. In matching between oblique images, blur, illumination differences and other effects make the image attributes of different views notably different, which results in a more severe convergence problem. Aiming to improve the convergence rate and robustness of least-squares matching of oblique images, we incorporated prior geometric knowledge into the optimization process, expressed as bound constraints on the optimized parameters that confine the search for a solution to a reasonable region. Furthermore, to be resilient to outliers, we substituted the square loss with a robust loss function. To solve the composite problem, we reformulated least-squares matching as a bound-constrained optimization problem, which can be solved with a bound-constrained Levenberg-Marquardt solver. Experimental results on images from two different penta-view oblique camera systems confirmed that the proposed method achieves guaranteed final convergence in various scenarios, compared with the approximately 20-50% convergence rate of classical least-squares matching.
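The two ingredients the abstract adds to classical least-squares matching, bound constraints from prior geometry and a robust loss, map directly onto `scipy.optimize.least_squares`. A 1D shift-and-gain toy problem (not the full geometric/radiometric LSM model) illustrates them; the bounds, loss scale and profile shape are all assumptions for illustration:

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(7)

# Toy matching problem: estimate a shift-and-gain mapping between two
# "image" profiles, g(x) ~ gain * f(x - shift), from noisy samples with a
# few gross outliers -- a scalar analogue of least-squares image matching.
x = np.linspace(0, 10, 200)
f = lambda t: np.exp(-0.5 * (t - 5.0) ** 2)
true_shift, true_gain = 0.8, 1.3
g = true_gain * f(x - true_shift) + rng.normal(scale=0.02, size=x.size)
g[::25] += 0.5  # gross outliers

def residuals(params):
    shift, gain = params
    return gain * f(x - shift) - g

# Bounds encode prior geometric knowledge (the shift cannot exceed 2, the
# gain stays near 1), and the soft-L1 loss makes the fit resilient to
# outliers -- the two ingredients the abstract adds to classical LSM.
fit = least_squares(residuals, x0=[0.0, 1.0],
                    bounds=([-2.0, 0.5], [2.0, 2.0]),
                    loss="soft_l1", f_scale=0.05)
print(fit.x)
```

With bounds given, SciPy automatically uses its trust-region-reflective solver; the recovered parameters should land close to the true (0.8, 1.3) despite the outliers.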

  15. WEIGHTED LEAST SQUARE CONVERGENCE OF LAGRANGE INTERPOLATION ON THE UNIT CIRCLE

    Institute of Scientific and Technical Information of China (English)

    Xie Siqing

    2001-01-01

    In the paper, a result of Walsh and Sharma on least-squares convergence of Lagrange interpolation polynomials based on the n-th roots of unity is extended to Lagrange interpolation on the sets obtained by projecting vertically the zeros of (1-x)^2 P_n^(α,β)(x), α>0, β>0; (1-x) P_n^(α,β)(x), α>0, β>-1; (1+x) P_n^(α,β)(x), α>-1, β>0; and P_n^(α,β)(x), α>-1, β>-1, respectively, onto the unit circle, where P_n^(α,β)(x), α>-1, β>-1, stands for the n-th Jacobi polynomial. Moreover, a result of Saff and Walsh is also extended. Foundation Item: Project supported by NSFC under grant 10071039, and by the Education Committee of Jiangsu Province under grant 00KJB110005. References: [1] Walsh, J.L. and Sharma, A., Least Square Approximation and Interpolation in Roots of Unity, Pacific J. Math., 14 (1964), 727-730. [2] Erdős, P. and Turán, P., On Interpolation I, Ann. Math., 38 (1937), 142-155. [3] Lozinski, S.M., Über Interpolation (in Russian), Mat. Sbornik (N.S.), 8 (1940), 57-68. [4] Saff, E.B. and Walsh, J.L., On the Convergence of Rational Functions which Interpolate in the Roots of Unity, Pacific J. Math., 45 (1973), 639-641. [5] Sharma, A. and Vértesi, P., Mean Convergence and Interpolation in Roots of Unity, SIAM J. Math. Anal., 14 (1983), 800-806. [6] Natanson, I.P., Constructive Theory of Functions, Gostekhizdat, Moscow, 1949. [7] Szegő, G., Orthogonal Polynomials, Amer. Math. Soc. Colloq. Publ., Vol. 23, 4th ed., Amer. Math. Soc., Providence, RI, 1975.
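Numerically, Lagrange interpolation at the n-th roots of unity is convenient to experiment with because the interpolant's coefficients are a scaled DFT of the nodal values. The sketch below checks least-squares (L2) convergence on the unit circle for a function analytic in a disc containing it (the test function 1/(2-z) is an arbitrary illustrative choice):

```python
import numpy as np

# Lagrange interpolation at the n-th roots of unity: the unique polynomial
# p(z) = sum_k c_k z^k of degree < n through the nodal values has
# coefficients c = DFT(values) / n, since p(e^{2*pi*i*j/n}) = f(e^{2*pi*i*j/n}).
def interp_roots_of_unity(f, n):
    nodes = np.exp(2j * np.pi * np.arange(n) / n)
    return np.fft.fft(f(nodes)) / n  # coefficients c_0 .. c_{n-1}

f = lambda z: 1.0 / (2.0 - z)  # analytic in |z| < 2, so on the unit circle

def l2_error(n, m=2048):
    """Discrete L2 error of the interpolant on a fine grid of the circle."""
    coef = interp_roots_of_unity(f, n)
    z = np.exp(2j * np.pi * np.arange(m) / m)
    p = np.polyval(coef[::-1], z)  # evaluate sum_k c_k z^k
    return np.sqrt(np.mean(np.abs(f(z) - p) ** 2))

errors = [l2_error(n) for n in (4, 8, 16, 32)]
print(errors)  # decreases rapidly (geometrically) with n
```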

  16. Current identification in vacuum circuit breakers as a least squares problem*

    Directory of Open Access Journals (Sweden)

    Ghezzi Luca

    2013-01-01

    Full Text Available In this work, a magnetostatic inverse problem is solved in order to reconstruct the electric current distribution inside high-voltage vacuum circuit breakers from measurements of the outside magnetic field. The final (rectangular) algebraic linear system is solved in the least-squares sense, using a regularized singular value decomposition of the system matrix. An approximate distribution of the electric current is thus returned, without the theoretical problem encountered with optical methods of matching light to temperature and finally to current density. The approach is justified from the computational point of view, as the (industrial) goal is to evaluate whether, or to what extent in terms of accuracy, a given experimental set-up (number and noise level of sensors) is adequate to work as a "magnetic camera" for a given circuit breaker.
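The regularized-SVD least-squares solve described above can be sketched with a generic smoothing-kernel forward operator standing in for the magnetostatic one (the kernel, dimensions and truncation threshold are all illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(8)

# Toy inverse problem: a smooth (ill-conditioned) forward operator maps an
# unknown current distribution to the readings of external field sensors.
n_sensors, n_currents = 40, 25
s = np.linspace(0, 1, n_sensors)[:, None]
t = np.linspace(0, 1, n_currents)[None, :]
K = 1.0 / (1.0 + 25.0 * (s - t) ** 2)  # smoothing kernel, nearly rank-deficient
current = np.sin(np.pi * np.linspace(0, 1, n_currents))
field = K @ current + rng.normal(scale=1e-3, size=n_sensors)

# Regularized least squares via truncated SVD: discard singular values
# below a (noise-dependent) threshold before forming the pseudo-inverse.
U, sv, Vt = np.linalg.svd(K, full_matrices=False)
kept = sv > 1e-2 * sv[0]
inv = Vt[kept].T @ np.diag(1.0 / sv[kept]) @ U[:, kept].T
estimate = inv @ field

rel_err = np.linalg.norm(estimate - current) / np.linalg.norm(current)
print(f"kept {kept.sum()} of {sv.size} singular values, rel. error {rel_err:.3f}")
```

Truncating the small singular values trades a little bias for stability: without truncation, the tiny singular values amplify the sensor noise and the reconstruction degrades.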

  17. Least-squares Migration and Full Waveform Inversion with Multisource Frequency Selection

    KAUST Repository

    Huang, Yunsong

    2013-09-01

    Multisource Least-Squares Migration (LSM) of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. But for the marine acquisition geometry this approach faces the challenge of erroneous misfit due to the mismatch between the limited number of live traces/shot recorded in the field and the pervasive number of traces generated by the finite-difference modeling method. To tackle this mismatch problem, I present a frequency selection strategy with LSM of supergathers. The key idea is, at each LSM iteration, to assign a unique frequency band to each shot gather, so that the spectral overlap among those shots, and therefore their crosstalk, is zero. Consequently, each receiver can unambiguously identify and then discount the superfluous sources, i.e., those that are not associated with the receiver in marine acquisition. Compared to standard migration, applying the proposed method to the 2D SEG/EAGE salt model yields better-resolved images computed at about 1/8 the cost; results for the 3D SEG/EAGE salt model, with an Ocean Bottom Seismometer (OBS) survey, show a speedup of 40×. This strategy is next extended to multisource Full Waveform Inversion (FWI) of supergathers for marine streamer data, with the same advantages of computational efficiency and storage savings. In the Finite-Difference Time-Domain (FDTD) method, to mitigate spectral leakage due to delayed onsets of sine waves detected at receivers, I double the simulation time and retain only the second half of the simulated records. Compared to standard FWI, applying the proposed method to the 2D SEG/EAGE salt velocity model and to Gulf of Mexico (GOM) field data yields speedups of about 4× and 8×, respectively. Formulas are then derived for the resolution limits of various constituent wavepaths pertaining to FWI: diving waves, primary reflections, diffractions, and multiple reflections. They suggest that inverting multiples can provide some low and intermediate
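The zero-overlap frequency assignment at the heart of the method can be sketched as a rotating partition of the frequency bins among shots (a schematic of the bookkeeping only, not the migration itself; the bin and shot counts are arbitrary):

```python
import numpy as np

# Frequency selection: at each iteration, give every shot gather its own
# disjoint frequency band so the supergather has zero spectral overlap
# (and hence zero crosstalk) between shots.
n_freqs, n_shots = 64, 8
freq_bins = np.arange(n_freqs)

def assign_bands(iteration):
    """Partition the frequency bins among shots, rotating the assignment
    each iteration so every shot eventually visits the whole spectrum."""
    return np.array_split(np.roll(freq_bins, iteration), n_shots)

bands = assign_bands(iteration=0)
# Disjointness check: no frequency bin is assigned to two shots, and the
# union of all bands covers the full spectrum exactly once.
all_bins = np.concatenate(bands)
print(len(np.unique(all_bins)) == n_freqs)  # True
```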

  18. Kernelized partial least squares for feature reduction and classification of gene microarray data

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-12-01

    Full Text Available Abstract Background The primary objectives of this paper are: 1. to apply Statistical Learning Theory (SLT), specifically Partial Least Squares (PLS) and Kernelized PLS (K-PLS), to the universal "feature-rich/case-poor" (also known as "large p, small n", or "high-dimension, low-sample size") microarray problem by eliminating those features (or probes) that do not contribute to the "best" chromosome bio-markers for lung cancer, and 2. to quantitatively measure and verify (by an independent means) the efficacy of this PLS process. A secondary objective is to integrate these significant improvements in diagnostic and prognostic biomedical applications into the clinical research arena. That is, to devise a framework for converting SLT results into direct, useful clinical information for patient care or pharmaceutical research. We therefore propose, and preliminarily evaluate, a process whereby PLS, K-PLS, and Support Vector Machines (SVM) may be integrated with the accepted and well-understood traditional biostatistical "gold standard": the Cox Proportional Hazard model and Kaplan-Meier survival analysis methods. Specifically, this new combination will be illustrated with both PLS and Kaplan-Meier followed by PLS and Cox Hazard Ratios (CHR), and can be easily extended to both the K-PLS and SVM paradigms. Finally, these previously described processes are contained in the Fine Feature Selection (FFS) component of our overall feature reduction/evaluation process, which consists of the following components: 1. coarse feature reduction, 2. fine feature selection and 3. classification (as described in this paper) and prediction. Results Our results for PLS and K-PLS showed that these techniques, as part of our overall feature reduction process, performed well on noisy microarray data. 
The best performance was a good 0.794 Area Under the Receiver Operating Characteristic (ROC) curve (AUC) for classification of recurrence prior to or after 36 months and a strong 0.869 AUC for

  19. Seasonal prediction of the East Asian summer monsoon with a partial-least square model

    Science.gov (United States)

    Wu, Zhiwei; Yu, Lulu

    2016-05-01

    Seasonal prediction of the East Asian summer monsoon (EASM) strength is probably one of the most challenging and crucial issues for climate prediction over East Asia. In this paper, a statistical method called partial least-squares (PLS) regression is utilized to uncover principal sea surface temperature (SST) modes in the winter preceding the EASM. Results show that the SST pattern of the first PLS mode is associated with the turnabout of the warming (or cooling) phase of a mega-El Niño/Southern Oscillation (mega-ENSO) (a leading mode of interannual-to-interdecadal variations of global SST), whereas that of the second PLS mode leads a warming/cooling mega-ENSO by about 1 year, signaling precursory conditions for mega-ENSO. These results indicate that mega-ENSO may provide a critical predictability source for the EASM strength. Based on a 40-year training period (1958-1997), a PLS prediction model is constructed using the two leading PLS modes, and 3-month-lead hindcasts are performed for the validation period 1998-2013. A promising skill is obtained, comparable to that of the ensemble mean of versions 3 and 4 of the Canadian Community Atmosphere Model (CanCM3/4) hindcasts from the newly developed North American Multi-model Ensemble Prediction System regarding the interannual variations of the EASM strength. How to improve dynamical model simulation of the EASM is also examined by comparing the CanCM3/4 hindcast (1982-2010) with the 106-year historical run (1900-2005) of the Second Generation Canadian Earth System Model (CanESM2). CanCM3/4 exhibits a high skill in the EASM hindcast period 1982-2010, during which it also has a better performance in capturing the relationship between the EASM and mega-ENSO. By contrast, the simulation skill of CanESM2 is quite low, and it is unable to reproduce the linkage between the EASM and mega-ENSO. All these results emphasize the importance of mega-ENSO in seasonal prediction and dynamical model simulation of the EASM.

  20. Sub-Model Partial Least Squares for Improved Accuracy in Quantitative Laser Induced Breakdown Spectroscopy

    Science.gov (United States)

    Anderson, R. B.; Clegg, S. M.; Frydenvang, J.

    2015-12-01

    One of the primary challenges faced by the ChemCam instrument on the Curiosity Mars rover is developing a regression model that can accurately predict the composition of the wide range of target types encountered (basalts, calcium sulfate, feldspar, oxides, etc.). The original calibration used 69 rock standards to train a partial least squares (PLS) model for each major element. By expanding the suite of calibration samples to >400 targets spanning a wider range of compositions, the accuracy of the model was improved, but some targets with "extreme" compositions (e.g. pure minerals) were still poorly predicted. We have therefore developed a simple method, referred to as "submodel PLS", to improve the performance of PLS across a wide range of target compositions. In addition to generating a "full" (0-100 wt.%) PLS model for the element of interest, we also generate several overlapping submodels (e.g. for SiO2, we generate "low" (0-50 wt.%), "mid" (30-70 wt.%), and "high" (60-100 wt.%) models). The submodels are generally more accurate than the "full" model for samples within their range because they are able to adjust for matrix effects that are specific to that range. To predict the composition of an unknown target, we first predict the composition with the submodels and the "full" model. Then, based on the predicted composition from the "full" model, the appropriate submodel prediction can be used (e.g. if the full model predicts a low composition, use the "low" model result, which is likely to be more accurate). For samples with "full" predictions that occur in a region of overlap between submodels, the submodel predictions are "blended" using a simple linear weighted sum. The submodel PLS method shows improvements in most of the major elements predicted by ChemCam and reduces the occurrence of negative predictions for low wt.% targets. Submodel PLS is currently being used in conjunction with ICA regression for the major element compositions of ChemCam data.
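    The overlap-blending rule described in this abstract can be sketched as a small selection function; the range boundaries below follow the SiO2 example given above, and the submodel predictions are placeholder values.

```python
def blend_submodels(full_pred, low_pred, mid_pred, high_pred,
                    low_rng=(0.0, 50.0), mid_rng=(30.0, 70.0), high_rng=(60.0, 100.0)):
    """Pick or blend overlapping submodel predictions based on the
    full-model estimate.  Range boundaries follow the SiO2 example in the
    abstract and are illustrative."""
    if full_pred < mid_rng[0]:
        return low_pred                                   # low-only region
    if full_pred <= low_rng[1]:                           # low/mid overlap
        w = (full_pred - mid_rng[0]) / (low_rng[1] - mid_rng[0])
        return (1.0 - w) * low_pred + w * mid_pred        # linear weighted sum
    if full_pred < high_rng[0]:
        return mid_pred                                   # mid-only region
    if full_pred <= mid_rng[1]:                           # mid/high overlap
        w = (full_pred - high_rng[0]) / (mid_rng[1] - high_rng[0])
        return (1.0 - w) * mid_pred + w * high_pred
    return high_pred                                      # high-only region
```

    For example, a full-model SiO2 prediction of 40 wt.% falls in the low/mid overlap, so the returned value is an even blend of the "low" and "mid" submodel predictions.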

  1. Neutron spectrum unfolding using artificial neural network and modified least square method

    Science.gov (United States)

    Hosseini, Seyed Abolfazl

    2016-09-01

    In the present paper, the neutron spectrum is reconstructed using the Artificial Neural Network (ANN) and Modified Least Square (MLSQR) methods. The detector's response (pulse height distribution), the data required for unfolding the energy spectrum, is calculated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Unlike the usual methods that apply inversion procedures to unfold the energy spectrum from the Fredholm integral equation, the MLSQR method uses a direct procedure. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry of neutron sources, the neutron pulse height distribution is simulated/measured in the NE-213 detector. The response matrix is calculated using the MCNPX-ESUT computational code through simulation of the NE-213 detector's response to monoenergetic neutron sources. For a known neutron pulse height distribution, the energy spectrum of the neutron source is unfolded using the MLSQR method. In the developed multilayer perceptron neural network for reconstruction of the energy spectrum of the neutron source, there is no need to form the response matrix. The multilayer perceptron neural network is developed based on logsig, tansig and purelin transfer functions. The developed artificial neural network consists of two hidden layers with hyperbolic tangent sigmoid transfer functions and a linear transfer function in the output layer. The motivation for applying the ANN method is that no matrix inversion is needed for energy spectrum unfolding. The simulated neutron pulse height distributions in each light bin due to randomly generated neutron spectra are considered as the input data of the ANN, and the randomly generated energy spectra as its output data. The energy spectrum of the neutron source is identified with high accuracy using both the MLSQR and ANN methods. The results obtained from
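    The least-squares half of the unfolding problem, recovering the energy spectrum from the pulse-height distribution through the response matrix, can be sketched with plain linear least squares standing in for the MLSQR method (the response matrix below is synthetic, not an MCNPX-ESUT output).

```python
import numpy as np

def unfold_spectrum(R, pulse_height):
    """Least-squares unfolding: solve R @ s ≈ d for the energy spectrum s,
    where R[i, j] is the detector response in light bin i to a monoenergetic
    source in energy bin j.  A sketch with a synthetic response matrix."""
    s, *_ = np.linalg.lstsq(R, pulse_height, rcond=None)
    return s
```

    With a well-conditioned response matrix and a consistent pulse-height vector, the least-squares solution reproduces the source spectrum exactly; real measured distributions would additionally need non-negativity or regularization.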

  2. Multiparameter linear least-squares fitting to Poisson data one count at a time

    Science.gov (United States)

    Wheaton, Wm. A.; Dunklee, Alfred L.; Jacobsen, Allan S.; Ling, James C.; Mahoney, William A.; Radocinski, Robert G.

    1995-01-01

    A standard problem in gamma-ray astronomy data analysis is the decomposition of a set of observed counts, described by Poisson statistics, according to a given multicomponent linear model, with underlying physical count rates or fluxes which are to be estimated from the data. Despite its conceptual simplicity, the linear least-squares (LLSQ) method for solving this problem has generally been limited to situations in which the number ni of counts in each bin i is not too small, conventionally more than 5-30. It seems to be widely believed that the failure of the LLSQ method for small counts is due to the failure of the Poisson distribution to be even approximately normal for small numbers. The cause is more accurately the strong anticorrelation between the data and the weights wi in the weighted LLSQ method when the square root of ni instead of the square root of bar-ni is used to approximate the uncertainties, sigmai, in the data, where bar-ni = E(ni), the expected value of ni. We show in an appendix that, avoiding this approximation, the correct equations for the Poisson LLSQ (PLLSQ) problem are actually identical to those for the maximum likelihood estimate using the exact Poisson distribution. We apply the method to solve a problem in high-resolution gamma-ray spectroscopy for the JPL High-Resolution Gamma-Ray Spectrometer flown on HEAO 3. Systematic error in subtracting the strong, highly variable background encountered in the low-energy gamma-ray region can be significantly reduced by closely pairing source and background data in short segments. Significant results can be built up by weighted averaging of the net fluxes obtained from the subtraction of many individual source/background pairs.
Extension of the approach to complex situations, with multiple cosmic sources and realistic background parameterizations, requires a means of efficiently fitting to data from single scans in the narrow (approximately = 1.2 keV, HEAO 3) energy channels of a Ge spectrometer, where
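    The central point of the abstract, weighting by the expected counts bar-ni from the model rather than the observed counts ni, can be sketched as an iteratively reweighted linear least-squares loop (an illustration of the idea, not the authors' PLLSQ code).

```python
import numpy as np

def poisson_llsq(A, n, n_iter=20):
    """Weighted linear least squares for Poisson counts n ≈ A @ x, with
    weights 1/E[n_i] taken from the current model prediction instead of
    the observed counts.  A sketch of the idea discussed above."""
    x = np.linalg.lstsq(A, n, rcond=None)[0]       # unweighted starting point
    for _ in range(n_iter):
        mu = np.clip(A @ x, 1e-9, None)            # expected counts bar-n
        w = 1.0 / mu                               # Poisson variance weights
        Aw = A * w[:, None]
        # weighted normal equations: (A' W A) x = A' W n
        x = np.linalg.solve(A.T @ Aw, Aw.T @ n)
    return x
```

    Using 1/mu from the fitted model breaks the data-weight anticorrelation described above, since bins that fluctuate low no longer receive inflated weights.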

  3. First-order system least-squares for second-order elliptic problems with discontinuous coefficients: Further results

    Energy Technology Data Exchange (ETDEWEB)

    Bloechle, B.; Manteuffel, T.; McCormick, S.; Starke, G.

    1996-12-31

    Many physical phenomena are modeled as scalar second-order elliptic boundary value problems with discontinuous coefficients. The first-order system least-squares (FOSLS) methodology is an alternative to standard mixed finite element methods for such problems. The occurrence of singularities at interface corners and cross-points requires that care be taken when implementing the least-squares finite element method in the FOSLS context. We introduce two methods of handling the challenges resulting from singularities. The first method is based on a weighted least-squares functional and results in non-conforming finite elements. The second method is based on the use of singular basis functions and results in conforming finite elements. We also share numerical results comparing the two approaches.

  4. An Iterative Method for the Least-Squares Problems of a General Matrix Equation Subjects to Submatrix Constraints

    Directory of Open Access Journals (Sweden)

    Li-fang Dai

    2013-01-01

    Full Text Available An iterative algorithm is proposed for solving the least-squares problem of the general matrix equation ∑_{i=1}^t M_i Z_i N_i = F, where the Z_i (i = 1, 2, …, t) are centro-symmetric matrices to be determined, with given central principal submatrices. For any initial iterative matrices, we show that the least-squares solution can be derived by this method within finitely many iteration steps in the absence of roundoff errors. Meanwhile, the unique optimal approximation solution pair for given matrices Z̃_i can also be obtained from the least-norm least-squares solution of the matrix equation ∑_{i=1}^t M_i Z̄_i N_i = F̄, in which Z̄_i = Z_i − Z̃_i and F̄ = F − ∑_{i=1}^t M_i Z̃_i N_i. The given numerical examples illustrate the efficiency of this algorithm.
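    Dropping the centro-symmetric and submatrix constraints of the paper, the unconstrained least-squares problem for this matrix equation can be solved directly by vectorization, using the identity vec(M Z N) = (N^T ⊗ M) vec(Z); a sketch:

```python
import numpy as np

def solve_sum_mzn(Ms, Ns, F):
    """Least-squares solution of sum_i M_i Z_i N_i = F via vectorization.
    The centro-symmetric / submatrix constraints of the paper are omitted
    in this sketch."""
    # One Kronecker block per unknown Z_i: vec(M Z N) = (N^T ⊗ M) vec(Z)
    K = np.hstack([np.kron(N.T, M) for M, N in zip(Ms, Ns)])
    z, *_ = np.linalg.lstsq(K, F.reshape(-1, order="F"), rcond=None)
    Zs, off = [], 0
    for M, N in zip(Ms, Ns):
        r, c = M.shape[1], N.shape[0]
        Zs.append(z[off:off + r * c].reshape(r, c, order="F"))
        off += r * c
    return Zs
```

    The iterative method of the paper avoids forming these Kronecker blocks, which grow quickly with matrix size; the direct form above is practical only for small problems.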

  5. Novel temperature modeling and compensation method for bias of ring laser gyroscope based on least-squares support vector machine

    Institute of Scientific and Technical Information of China (English)

    Xudong Yu; Yu Wang; Guo Wei; Pengfei Zhang; Xingwu Long

    2011-01-01

    Bias of the ring laser gyroscope (RLG) changes with temperature in a nonlinear way. This is an important factor restraining improvement of the accuracy of the RLG. Considering the limitations of least-squares regression and neural networks, we propose a new method of temperature compensation of RLG bias: building a function regression model using the least-squares support vector machine (LS-SVM). Static and dynamic temperature experiments on RLG bias are carried out to validate the effectiveness of the proposed method. Moreover, the traditional least-squares regression method is compared with the LS-SVM-based method. The results show the maximum error of RLG bias drops by almost two orders of magnitude after static temperature compensation, while the bias stability of the RLG improves by one order of magnitude after dynamic temperature compensation. Thus, the proposed method effectively reduces the influence of temperature variation on the bias of the RLG and considerably improves the accuracy of the gyroscope.

  6. Mass spectrometry and partial least-squares regression: a tool for identification of wheat variety and end-use quality

    DEFF Research Database (Denmark)

    Sørensen, Helle Aagaard; Petersen, Marianne Kjerstine; Jacobsen, Susanne;

    2004-01-01

    Rapid methods for the identification of wheat varieties and their end-use quality have been developed. The methods combine the analysis of wheat protein extracts by mass spectrometry with partial least-squares regression in order to predict the variety or end-use quality of unknown wheat samples. ... The whole process takes ~30 min. Extracts of alcohol-soluble storage proteins (gliadins) from wheat were analysed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Partial least-squares regression was subsequently applied using these mass spectra for making models...

  7. Mitigation of defocusing by statics and near-surface velocity errors by interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal

    2015-08-19

    We propose an interferometric least-squares migration method that can significantly reduce migration artifacts due to statics and errors in the near-surface velocity model. We first choose a reference reflector whose topography is well known from, e.g., well logs. Reflections from this reference layer are correlated with the traces associated with reflections from deeper interfaces to get crosscorrelograms. These crosscorrelograms are then migrated using interferometric least-squares migration (ILSM). In this way statics and velocity errors at the near surface are largely eliminated for the examples in our paper.

  8. DISCRETE MINUS ONE NORM LEAST-SQUARES FOR THE STRESS FORMULATION OF LINEAR ELASTICITY WITH NUMERICAL RESULTS

    Institute of Scientific and Technical Information of China (English)

    Sang Dong Kim; Byeong Chun Shin; Seokchan Kim; Gyungsoo Woo

    2003-01-01

    This paper studies the discrete minus one norm least-squares methods for the stress formulation of pure displacement linear elasticity in two dimensions. The proposed least-squares functional is defined as the sum of the L2- and H^{-1}-norms of the residual equations, weighted appropriately. The minus one norm in the functional is replaced by the discrete minus one norm, and the resulting discrete minus one norm least-squares methods are analyzed with various numerical results focusing on the finite element accuracy and multigrid convergence performance.

  9. Application of neural network model coupling with the partial least-squares method for forecasting water yield of mine

    Institute of Scientific and Technical Information of China (English)

    CHEN Nan-xiang; CAO Lian-hai; HUANG Qiang

    2005-01-01

    Scientific forecasting of the water yield of a mine is of great significance to safe mine production and the integrated use of water resources. This paper establishes a forecasting model for the water yield of a mine by combining a neural network with the partial least squares method. Treating the independent variables with partial least squares not only resolves the correlations among them but also reduces the input dimension of the neural network model, which can then better handle the nonlinear problem. The result of an example shows that the prediction has higher precision in both forecasting and fitting.

  10. Signs of divided differences yield least squares data fitting with constrained monotonicity or convexity

    Science.gov (United States)

    Demetriou, I. C.

    2002-09-01

    Methods are presented for least squares data smoothing by using the signs of divided differences of the smoothed values. Professor M.J.D. Powell initiated the subject in the early 1980s and since then, theory, algorithms and FORTRAN software make it applicable to several disciplines in various ways. Let us consider n data measurements of a univariate function which have been altered by random errors. Then it is usual for the divided differences of the measurements to show sign alterations, which are probably due to data errors. We make the least sum of squares change to the measurements, by requiring the sequence of divided differences of order m to have at most q sign changes for some prescribed integer q. The positions of the sign changes are integer variables of the optimization calculation, which implies a combinatorial problem whose solution can require about O(n^q) quadratic programming calculations in n variables and n-m constraints. Suitable methods have been developed for the following cases. It has been found that a dynamic programming procedure can calculate the global minimum for the important cases of piecewise monotonicity (m=1, q ≥ 1) and piecewise convexity/concavity (m=2, q ≥ 1) of the smoothed values. The complexity of the procedure in the case of m=1 is O(n^2 + qn log_2 n) computer operations, while it is reduced to only O(n) when q=0 (monotonicity) and q=1 (increasing/decreasing monotonicity). The case m=2, q ≥ 1 requires O(qn^2) computer operations and n^2 quadratic programming calculations, which is reduced to one and n-2 quadratic programming calculations when m=2, q=0, i.e. convexity, and m=2, q=1, i.e. convexity/concavity, respectively. Unfortunately, the technique that achieves this efficiency cannot be generalized to the highly nonlinear case m ≥ 3, q ≥ 2.
    However, the case m ≥ 3, q=0 is solved by a special strictly
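    For the simplest case above, m=1 with q=0 (monotonic smoothing in O(n) operations), the least-squares nondecreasing fit is given by the classical pool-adjacent-violators algorithm; a compact sketch:

```python
def pava(y):
    """Pool-adjacent-violators: the least-squares fit to y subject to the
    fitted values being nondecreasing (the m=1, q=0 case above)."""
    blocks = []                      # list of [mean, weight] blocks
    for v in y:
        blocks.append([float(v), 1])
        # merge backwards while the monotonicity constraint is violated
        while len(blocks) > 1 and blocks[-2][0] >= blocks[-1][0]:
            m2, w2 = blocks.pop()
            m1, w1 = blocks.pop()
            blocks.append([(m1 * w1 + m2 * w2) / (w1 + w2), w1 + w2])
    out = []
    for m, w in blocks:
        out.extend([m] * w)          # expand each pooled block
    return out
```

    Each merge replaces a violating pair of blocks by their weighted mean, which is exactly the least-squares-optimal pooled value.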

  11. Least Squares Pure Imaginary Solution and Real Solution of the Quaternion Matrix Equation AXB+CXD=E with the Least Norm

    Directory of Open Access Journals (Sweden)

    Shi-Fang Yuan

    2014-01-01

    Full Text Available Using the Kronecker product of matrices, the Moore-Penrose generalized inverse, and the complex representation of quaternion matrices, we derive the expressions of least squares solution with the least norm, least squares pure imaginary solution with the least norm, and least squares real solution with the least norm of the quaternion matrix equation AXB+CXD=E, respectively.

  12. Online Low-Rank Tensor Subspace Tracking from Incomplete Data by CP Decomposition using Recursive Least Squares

    OpenAIRE

    Kasai, Hiroyuki

    2016-01-01

    We propose an online tensor subspace tracking algorithm based on the CP decomposition exploiting recursive least squares (RLS), dubbed OnLine Low-rank Subspace tracking by TEnsor CP Decomposition (OLSTEC). Numerical evaluations show that the proposed OLSTEC algorithm gives faster convergence per iteration compared with state-of-the-art online algorithms.
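    The recursive least squares building block that such trackers exploit updates the parameter estimate one sample at a time; a generic RLS step with forgetting factor lam (a sketch of the standard recursion, not the OLSTEC code):

```python
import numpy as np

def rls_step(theta, P, x, y, lam=1.0):
    """One recursive-least-squares update for y ≈ x · theta.
    lam < 1 discounts old samples (useful for tracking); lam = 1 is
    ordinary RLS."""
    Px = P @ x
    k = Px / (lam + x @ Px)                 # gain vector
    theta = theta + k * (y - x @ theta)     # correct by the a-priori error
    P = (P - np.outer(k, Px)) / lam         # inverse-covariance update
    return theta, P
```

    OLSTEC applies this kind of recursion to the factor matrices of a CP decomposition so that the low-rank subspace can follow a data stream with missing entries.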

  13. Normalization Ridge Regression in Practice I: Comparisons Between Ordinary Least Squares, Ridge Regression and Normalization Ridge Regression.

    Science.gov (United States)

    Bulcock, J. W.

    The problem of model estimation when the data are collinear was examined. Though ridge regression (RR) outperforms ordinary least squares (OLS) regression in the presence of acute multicollinearity, it is not a problem-free technique for reducing the variance of the estimates. It is a stochastic procedure when it should be nonstochastic and it…
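    The variance-reduction behavior discussed above is easy to demonstrate: adding a ridge penalty lam to the normal equations shrinks the coefficient norm monotonically as lam grows, which stabilizes collinear designs. A minimal sketch with a synthetic nearly collinear design:

```python
import numpy as np

def ridge(X, y, lam):
    """Ridge estimate (X'X + lam*I)^{-1} X'y; lam = 0 reduces to OLS
    (when X'X is invertible)."""
    p = X.shape[1]
    return np.linalg.solve(X.T @ X + lam * np.eye(p), X.T @ y)
```

    The choice of lam is the crux the abstract alludes to: picking it from the data makes the procedure stochastic, which is the criticism raised above.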

  14. Application of penalized least squares estimation in height anomaly

    Institute of Scientific and Technical Information of China (English)

    张春晓; 王天宝; 鲁学军; 姜娉

    2011-01-01

    Model errors inevitably exist in the conventional least-squares fitting model of height anomaly. This article proposes treating the model error as nonparametric information using penalized least squares, and discusses the effect of the regularizer R and the smoothing parameter α on the fitting results. Through research on the determination of the smoothing parameter, a method based on the function Xu(α) is presented and tested on GPS leveling measurement data. The results show that penalized least squares is better than the ordinary least-squares method in determining height anomaly.
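    A minimal penalized least-squares smoother in the spirit of this abstract, with a second-difference regularizer and smoothing parameter alpha (the paper's specific regularizer R and the Xu(α) rule for choosing the parameter are not reproduced here):

```python
import numpy as np

def penalized_ls(y, alpha, order=2):
    """Penalized least squares: minimize ||y - z||^2 + alpha * ||D z||^2,
    where D is the order-th difference matrix.  A sketch with a generic
    difference-penalty regularizer, not the paper's exact R."""
    n = len(y)
    D = np.diff(np.eye(n), n=order, axis=0)     # (n-order, n) difference matrix
    return np.linalg.solve(np.eye(n) + alpha * D.T @ D, y)
```

    Larger alpha enforces smoother fitted values; alpha → 0 returns the data unchanged, which is the sense in which the penalty generalizes plain least-squares fitting.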

  15. Perturbation Analysis of Structured Least Squares Problems and Its Application in Calibration of Interest Rate Term Structure

    Institute of Scientific and Technical Information of China (English)

    Chen Zhao; Weiguo Gao; Jungong Xue

    2007-01-01

    A structured perturbation analysis of the least squares problem is considered in this paper. The new error bound proves to be sharper than that for general perturbations. We apply the new error bound to study sensitivity of changing the knots for curve fitting of interest rate term structure by cubic spline. Numerical experiments are given to illustrate the sharpness of this bound.

  16. SIMULATIONS OF 2D AND 3D THERMOCAPILLARY FLOWS BY A LEAST-SQUARES FINITE ELEMENT METHOD. (R825200)

    Science.gov (United States)

    Numerical results for time-dependent 2D and 3D thermocapillary flows are presented in this work. The numerical algorithm is based on the Crank-Nicolson scheme for time integration, Newton's method for linearization, and a least-squares finite element method, together with a matri...

  17. Thruster fault identification method for autonomous underwater vehicle using peak region energy and least square grey relational grade

    Directory of Open Access Journals (Sweden)

    Mingjun Zhang

    2015-12-01

    Full Text Available A novel thruster fault identification method for autonomous underwater vehicles is presented in this article. It uses the proposed peak region energy method to extract the fault feature and the proposed least square grey relational grade method to estimate the fault degree. The peak region energy method is developed from the fusion feature modulus maximum method: it applies that method to obtain the fusion feature and then takes the maximum of the peak region energy in the convolution results of the fusion feature as the fault feature. The least square grey relational grade method is developed from the grey relational analysis algorithm: it determines the fault degree interval by grey relational analysis and then estimates the fault degree within that interval by the least square algorithm. Pool experiments on the experimental prototype are conducted to verify the effectiveness of the proposed methods. The experimental results show that the fault feature extracted by the peak region energy method is monotonic in fault degree, while the one extracted by the fusion feature modulus maximum method is not. The least square grey relational grade method can further produce an estimate between adjacent standard fault degrees, whereas the estimate of the grey relational analysis algorithm is restricted to one of the standard fault degrees.

  18. Comparative evaluation of photon cross section libraries for materials of interest in PET Monte Carlo simulations

    CERN Document Server

    Zaidi, H

    1999-01-01

    The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...

  19. Assessment of the Influence of Thermal Scattering Library on Monte-Carlo Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Gwanyoung; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)

    2014-05-15

    Monte-Carlo neutron transport codes generally use continuous-energy neutron libraries. Thermal scattering libraries are also used to fully represent thermal neutron scattering by molecules and crystalline solids. Both neutron libraries and thermal scattering libraries are generated by NJOY based on ENDF data. While a neutron library can be generated for any specific temperature, a thermal scattering library can only be generated for a restricted set of temperatures when using ENDF data. However, a thermal scattering library can be generated for any specific temperature by using the LEAPR module of NJOY instead of the ENDF data. In this study, thermal scattering libraries of hydrogen bound in light water and carbon bound in graphite are generated using the LEAPR module and ENDF data, and the influence of each library on Monte-Carlo calculations is assessed, along with the influence of the library temperature. The NIM program was developed to carry out this work. The libraries generated with the LEAPR module are compared with those generated from ENDF thermal scattering data, for H in H{sub 2}O and C in graphite. Similar results were obtained from both sets of libraries. It is therefore concluded that generating thermal scattering libraries with the LEAPR module is appropriate and makes it possible to generate a library at a user-specified temperature. How much the temperature of a thermal scattering library influences Monte-Carlo calculations is also assessed.

  20. Based on Partial Least-squares Regression to Build up and Analyze the Model of Rice Evapotranspiration

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    When calculating rice evapotranspiration from weather factors, we often find that some of the independent variables exhibit multicollinearity. This phenomenon distorts the traditional multivariate regression model based on the least squares method, and the stability of the model is lost. In this paper, the model is built using partial least-squares regression: applying the ideas of principal component analysis and canonical correlation analysis, components are extracted from the original data. A model of rice evapotranspiration is thus built that resolves the multicollinearity among the independent variables (the weather factors). Finally, the model is analyzed in parts, and a satisfactory result is obtained.

  1. Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization

    CERN Document Server

    Slawski, Martin

    2012-01-01

    Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has coined a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such a setting. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be similarly effective as explicit regularization. For a broad range of designs with Gram matrix having non-negative entries, we establish bounds on the $\ell_2$-prediction error of non-negative least squares (NNLS) whose form qualitatively matches corresponding results for $\ell_1$-regularization. Under slightly stronger conditions, it is established that NNLS followed by hard thresholding performs excellently in terms of support recovery of an (approximately) sparse target, in some cases improving over $\ell_1$-regularization. A substantial advantage of NNLS over r...
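    The headline behavior, NNLS recovering a sparse non-negative target with no explicit regularization, is easy to reproduce on a synthetic design with non-negative entries (as in the class of designs considered above); a sketch using SciPy's NNLS solver:

```python
import numpy as np
from scipy.optimize import nnls

# Synthetic design whose Gram matrix has non-negative entries, and a
# sparse non-negative target (sizes and support are illustrative).
rng = np.random.default_rng(0)
A = rng.uniform(0.0, 1.0, size=(50, 20))
x_true = np.zeros(20)
x_true[[2, 7, 11]] = [1.0, 0.5, 2.0]
b = A @ x_true

# Non-negative least squares: min ||A x - b||_2 subject to x >= 0
x_hat, rnorm = nnls(A, b)
```

    On this noise-free, consistent system the unique least-squares solution is itself non-negative, so NNLS recovers it exactly; the paper's contribution is showing the constraint remains effective under noise and high dimension.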

  2. Finite element solution of multi-scale transport problems using the least squares based bubble function enrichment

    CERN Document Server

    Yazdani, A

    2011-01-01

    This paper presents an optimum technique based on the least squares method for the derivation of the bubble functions to enrich the standard linear finite elements employed in the formulation of Galerkin weighted-residual statements. The element-level linear shape functions are enhanced with supplementary polynomial bubble functions with undetermined coefficients. The best least squares minimization of the residual functional obtained from the insertion of these trial functions into model equations results in an algebraic system of equations whose solution provides the unknown coefficients in terms of element-level nodal values. The normal finite element procedures for the construction of stiffness matrices may then be followed with no extra degree of freedom incurred as a result of such enrichment. The performance of the proposed method has been tested on a number of benchmark linear transport equations with the results compared against the exact and standard linear element solutions. It has been observed th...

  3. A consensus least squares support vector regression (LS-SVR) for analysis of near-infrared spectra of plant samples.

    Science.gov (United States)

    Li, Yankun; Shao, Xueguang; Cai, Wensheng

    2007-04-15

    Consensus modeling, combining the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on this principle, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then, the consensus LS-SVR technique was used to build the calibration model. With optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods. PMID:19071605

  4. A least-squares finite-element S{sub n} method for solving first-order neutron transport equation

    Energy Technology Data Exchange (ETDEWEB)

    Ju Haitao [School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an 710049 (China)]. E-mail: jht0@hotmail.com; Wu Hongchun [School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an 710049 (China); Zhou Yongqiang [School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an 710049 (China); Cao Liangzhi [School of Energy and Power Engineering, Xi'an Jiaotong University, Xi'an 710049 (China); Yao Dong [Nuclear Power Institute of China, Chengdu 610041 (China); Xian, Chun-Yu [Nuclear Power Institute of China, Chengdu 610041 (China)

    2007-04-15

    A discrete ordinates finite-element method for solving the two-dimensional first-order neutron transport equation is derived using the least-squares variation. It avoids the singularity in void regions of the method derived from the second-order equation which contains the inversion of the cross-section. Different from using the standard Galerkin variation to the first-order equation, the least-squares variation results in a symmetric matrix, which can be solved easily and effectively. To eliminate the discontinuity of the angular flux on the vacuum boundary in the spherical harmonics method, the angle variable is discretized by the discrete ordinates method. A two-dimensional transport simulation code is developed and applied to some benchmark problems with unstructured geometry. The numerical results verified the validity of this method.

  5. Estimating Burnup for UMo Plate Type Fuel with Least Square Fitting

    Energy Technology Data Exchange (ETDEWEB)

    Alawneh, Luay M.; Jaradat, Mustafa K. [Univ. of Science and Technology, Daejeon (Korea, Republic of); Park, Chang Je; Lee, Byungchul [Korea Atomic Energy Research Institute, Daejeon (Korea, Republic of)

    2013-10-15

    The feasibility of this approach has been tested by comparing its results with those of a Monte Carlo code. UMo fuel is a promising candidate for a high performance research reactor and provides better fuel performance, including an extended burnup and swelling resistance. Additionally, its relatively high uranium content provides high power density. However, when irradiating UMo fuel in the core, many pores are produced due to an extensive interaction between the UMo and the Al matrix. The pores lead to an expansion of the fuel meat and may ultimately result in fuel failure. This problem has largely been solved by using an optimal Si additive to suppress the interaction layer. An international program has been carried out to manufacture a robust UMo fuel. However, in terms of neutronics, the absorption cross section of Mo is much higher than that of Si, and thus a slightly higher uranium density of UMo fuel is required to provide characteristics equivalent to U{sub 3}Si{sub 2} fuel. Recently, Korea has been considering U-Mo fuel for the KJRR design, which is under design stage. This work is focused on calculating burnup for plate type UMo fuel through a couple of code systems, namely TRITON/NEWT and ORIGEN-ARP. The estimated burnup is compared with that of an MCNPX calculation. It is found that the fitted burnup agrees well with the MCNPX results. This approach will be applicable to easily estimating discharge burnup in a research reactor without additional burden. However, some sensitivity tests are required for other parameters in order to obtain the burnup exactly.

  6. Partial least squares and principal components analysis of wine vintage by high performance liquid chromatography with chemiluminescence detection.

    Science.gov (United States)

    Bellomarino, S A; Parker, R M; Conlan, X A; Barnett, N W; Adams, M J

    2010-09-23

    HPLC with acidic potassium permanganate chemiluminescence detection was employed to analyse 17 Cabernet Sauvignon wines across a range of vintages (1971-2003). Partial least squares regression analysis and principal components analysis were used to investigate the relationship between wine composition and vintage. Tartaric acid, vanillic acid, catechin, sinapic acid, ethyl gallate, myricetin, procyanidin B and resveratrol were found to be important components in terms of differences between the vintages.
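
    The two chemometric steps can be sketched by hand on synthetic data (17 wines, 8 measured compounds, one compound assumed to trend with vintage; none of this is the study's actual data): a single PLS1 component supervised by vintage, and PCA scores via SVD for the unsupervised view.

```python
import numpy as np

# Synthetic stand-in for the peak-area matrix and vintage response.
rng = np.random.default_rng(1)
n_wines, n_peaks = 17, 8
vintage = np.linspace(1971.0, 2003.0, n_wines)
X = rng.standard_normal((n_wines, n_peaks))
X[:, 0] += 0.05 * (vintage - vintage.mean())     # assumed vintage-linked peak

Xc = X - X.mean(axis=0)
yc = vintage - vintage.mean()

# One PLS component: the weight vector maximizes covariance with the response.
w = Xc.T @ yc
w /= np.linalg.norm(w)
t = Xc @ w                        # latent scores
q = (yc @ t) / (t @ t)            # regression of the response on the scores
pred = vintage.mean() + q * t

# PCA scores via SVD give the complementary unsupervised view of the wines.
U, s, Vt = np.linalg.svd(Xc, full_matrices=False)
pca_scores = U[:, :2] * s[:2]
```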

  7. FTIR Spectroscopy Combined with Partial Least Square for Analysis of Red Fruit Oil in Ternary Mixture System

    OpenAIRE

    Rohman, A.; Dwi Larasati Setyaningrum; Sugeng Riyanto

    2014-01-01

    FTIR spectroscopy is a promising method for quantification of edible oils. Three edible oils, namely, red fruit oil (RFO), corn oil (CO), and soybean oil (SO), in ternary mixture system were quantitatively analyzed using FTIR spectroscopy in combination with partial least square (PLS). FTIR spectra of edible oils in ternary mixture were subjected to several treatments including normal spectra and their derivative. Using PLS calibration, the first derivative FTIR spectra can be exploited for d...

  8. Estimation of most probable power distribution in BWRs by least squares method using in-core measurements

    Energy Technology Data Exchange (ETDEWEB)

    Ezure, Hideo

    1988-09-01

    Effective combination of measured data with theoretical analysis has permitted deriving a method for more accurately estimating the power distribution in BWRs. The least squares method is used to combine the relationship between the power distribution and the measured values with the model used in FLARE or in the three-dimensional two-group diffusion code. Trial application of the new method to estimating the power distribution in JPDR-1 has proved that the method provides reliable results.
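
    The combination idea can be sketched as a regularized least-squares problem (all matrices below are random placeholders, not FLARE or diffusion-code output): the estimated power distribution trades off fidelity to in-core detector readings against fidelity to the model prediction.

```python
import numpy as np

# Placeholder problem: 12 nodal powers observed through 5 in-core detectors.
rng = np.random.default_rng(2)
n_nodes, n_det = 12, 5
p_true = 1.0 + 0.2 * rng.standard_normal(n_nodes)
H = rng.random((n_det, n_nodes))                        # detector response map
m = H @ p_true + 0.01 * rng.standard_normal(n_det)      # measurements
p_model = p_true + 0.1 * rng.standard_normal(n_nodes)   # imperfect model

# Minimize ||H p - m||^2 + lam * ||p - p_model||^2 via the normal equations.
lam = 1.0
A = H.T @ H + lam * np.eye(n_nodes)
b = H.T @ m + lam * p_model
p_est = np.linalg.solve(A, b)
```

    By construction the combined estimate fits the measurements at least as well as the model prediction alone, whatever the weighting.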

  9. Application of the European customer satisfaction index to postal services. Structural equation models versus partial least squares

    OpenAIRE

    O'Loughlin, Christina; Coenders, Germà

    2002-01-01

    Customer satisfaction and retention are key issues for organizations in today’s competitive market place. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the instigation of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI...

  10. On discrete least square projection in unbounded domain with random evaluations and its application to parametric uncertainty quantification

    OpenAIRE

    TANG, TAO; Zhou, Tao

    2014-01-01

    This work is concerned with approximating multivariate functions in unbounded domains by using discrete least-squares projection with random point evaluations. Particular attention is given to functions with random Gaussian or Gamma parameters. We first demonstrate that the traditional Hermite (Laguerre) polynomial chaos expansion suffers from instability in the sense that an unfeasible number of points, which is relevant to the dimension of the approximation space, is...

  11. Solve: a non linear least-squares code and its application to the optimal placement of torsatron vertical field coils

    International Nuclear Information System (INIS)

    A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least squares algorithm to find the optimal value of a parameter vector, where optimality is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is such a nonlinear problem.
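
    The class of algorithm such a code relies on can be sketched with a generic nonlinear least-squares fit (a two-parameter toy model, not the coil-placement utility function):

```python
import numpy as np
from scipy.optimize import least_squares

# Toy "utility": misfit of y = a * exp(b * x) to target data generated from
# known parameters (a = 2.0, b = -1.5); the solver should recover them.
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(-1.5 * x)

def residual(theta):
    a, b = theta
    return a * np.exp(b * x) - y

sol = least_squares(residual, x0=[1.0, -1.0])   # nonlinear least squares
a, b = sol.x
```

    For the coil-placement problem the residual vector would instead encode the user-prescribed utility evaluated over the design constraints; the iteration is the same.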

  12. Strain Rates in the Sichuan-Yunnan Region Based upon the Total Least Squares Heterogeneous Strain Model from GPS Data

    Directory of Open Access Journals (Sweden)

    Caijun Xu

    2011-01-01

    We present crustal strain and deformation models for the Sichuan-Yunnan region based on high-precision GPS measurements from 1998 - 2004 using the total least squares method (TLSM). Coordinate errors as well as GPS velocity errors recorded at GPS stations are considered, whereas only the latter are considered in the conventional least squares method (LSM). In addition, the spatial pattern of a given strain field is also likely to be heterogeneous. We investigate two models with a spatially variable strain, the least squares heterogeneous strain model (LS-HSM) and the total least squares heterogeneous strain model (TLS-HSM). Our results show that strain field parameters estimated with the TLS-HSM are more precise than those from the LS-HSM because the fit to the data is improved; hence the TLS-HSM is preferred. The principal dilation strain rate, principal contraction strain rate, maximum shearing strain rate and surface dilation rate estimated by TLS-HSM in the northwestern Sichuan-Yunnan sub-block are 13.2526 ± 1.2624, -10.8001 ± 2.9826, 24.0527 ± 3.2381, and 2.4525 ± 3.2393 × 10⁻⁹ yr⁻¹ (with a confidence probability of 95%), respectively, while those in the southeastern Sichuan-Yunnan sub-block are 18.8651 ± 1.8353, -12.0875 ± 1.3926, 30.9525 ± 2.2971 and 6.7776 ± 2.3105 × 10⁻⁹ yr⁻¹ (at the same confidence level), respectively. The results indicate that the sub-blocks play a key role in continental tectonic deformation in the Sichuan-Yunnan region, and that small errors in site coordinates can have a significant impact on strain estimates, especially where sites are close together.
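
    The distinction between ordinary and total least squares can be sketched on a straight-line fit with synthetic data (noise in both coordinates, as with GPS positions and velocities; this is not the strain model itself). The TLS slope comes from the smallest right-singular vector of the centered data matrix.

```python
import numpy as np

# Synthetic line y = 0.7 x observed with noise in BOTH variables.
rng = np.random.default_rng(3)
n = 200
x_true = np.linspace(-1.0, 1.0, n)
x = x_true + 0.05 * rng.standard_normal(n)   # noisy "coordinates"
y = 0.7 * x_true + 0.05 * rng.standard_normal(n)   # noisy "observations"

# TLS slope: direction of least variance of the centered data matrix.
D = np.column_stack([x - x.mean(), y - y.mean()])
_, _, Vt = np.linalg.svd(D, full_matrices=False)
v = Vt[-1]                      # normal vector of the best orthogonal-fit line
slope_tls = -v[0] / v[1]

slope_ols = np.polyfit(x, y, 1)[0]   # ordinary LS for comparison
```

    Ordinary least squares attenuates the slope when the x-values are noisy; the TLS estimate does not, which is the motivation for the TLS-HSM above.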

  13. Novel approach of crater detection by crater candidate region selection and matrix-pattern-oriented least squares support vector machine

    Institute of Scientific and Technical Information of China (English)

    Ding Meng; Cao Yunfeng; Wu Qingxian

    2013-01-01

    Impact craters are commonly found on the surfaces of planets, satellites, asteroids and other solar system bodies. To speed up the construction of crater databases, it is important to develop crater detection algorithms. This paper presents a novel approach to automatically detect craters on planetary surfaces. The approach contains two parts: crater candidate region selection and crater detection. In the first part, crater candidate region selection is achieved by the Kanade-Lucas-Tomasi (KLT) detector. The matrix-pattern-oriented least squares support vector machine (MatLSSVM), as the matrixized version of the least squares support vector machine (LSSVM), inherits the advantages of the LSSVM while greatly reducing storage space and preserving the spatial redundancies within each image matrix compared with the general LSSVM. The second part of the approach employs MatLSSVM to design a classifier for crater detection. Experimental results on a dataset comprising 160 preprocessed image patches from Google Mars demonstrate that the accuracy rate of crater detection can reach 88%. A notable feature of the approach is that it takes the resized crater candidate region directly as the input pattern for crater detection. The results of the last experiment demonstrate that the MatLSSVM-based classifier can detect crater regions effectively on the basis of KLT-based crater candidate region selection.
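
    The underlying LSSVM classifier is trained by solving a single linear system rather than a quadratic program. A minimal vector-input sketch on toy 2-D data follows (the paper's MatLSSVM operates on image matrices directly, which this version does not capture; γ and the kernel width are arbitrary choices here).

```python
import numpy as np

# Two toy Gaussian classes standing in for crater / non-crater patterns.
rng = np.random.default_rng(4)
X = np.vstack([rng.standard_normal((30, 2)) + [2.0, 2.0],
               rng.standard_normal((30, 2)) - [2.0, 2.0]])
y = np.array([1.0] * 30 + [-1.0] * 30)

gamma, sigma2 = 10.0, 1.0
sq = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
K = np.exp(-sq / (2.0 * sigma2))                   # RBF kernel matrix

# LSSVM dual: [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y].
n = len(y)
A = np.zeros((n + 1, n + 1))
A[0, 1:] = 1.0
A[1:, 0] = 1.0
A[1:, 1:] = K + np.eye(n) / gamma
sol = np.linalg.solve(A, np.concatenate([[0.0], y]))
b, alpha = sol[0], sol[1:]

pred = np.sign(K @ alpha + b)                      # training predictions
accuracy = (pred == y).mean()
```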

  14. An evaluation of least-squares fitting methods in XAFS spectroscopy: iron-based SBA-15 catalyst formulations.

    Science.gov (United States)

    Huggins, Frank E; Kim, Dae-Jung; Dunn, Brian C; Eyring, Edward M; Huffman, Gerald P

    2009-06-01

    A detailed comparison has been made of determinations by (57)Fe Mössbauer spectroscopy and four different XAFS spectroscopic methods of %Fe as hematite and ferrihydrite in 11 iron-based SBA-15 catalyst formulations. The four XAFS methods consisted of least-squares fitting of iron XANES, d(XANES)/dE, and EXAFS (k(3)chi and k(2)chi) spectra to the corresponding standard spectra of hematite and ferrihydrite. The comparison showed that, for this particular application, the EXAFS methods were superior to the XANES methods in reproducing the results of the benchmark Mössbauer method, in large part because the EXAFS spectra of the two iron-oxide standards were much less correlated than the corresponding XANES spectra. Furthermore, the EXAFS and Mössbauer results could be made completely consistent by inclusion of a factor of 1.3 ± 0.05 for the ratio of the Mössbauer recoilless fraction of hematite relative to that of ferrihydrite at room temperature (293 K). This difference in recoilless fraction is attributed to the nanoparticle nature of the ferrihydrite compared to the bulk nature of the hematite. Also discussed are possible alternative non-least-squares XAFS methods for determining the iron speciation in this application, as well as criteria for deciding whether or not least-squares XANES methods should be applied for the determination of element speciation in unknown materials. PMID:19185532
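
    The fitting step itself reduces to expressing an unknown spectrum as a least-squares combination of the two standard spectra. A sketch with synthetic Gaussians standing in for the hematite and ferrihydrite EXAFS standards:

```python
import numpy as np

# Synthetic "standard spectra" on a common abscissa (not real EXAFS data).
e = np.linspace(0.0, 10.0, 400)
hematite = np.exp(-(e - 4.0) ** 2)                 # assumed standard 1
ferrihydrite = np.exp(-(e - 6.5) ** 2 / 2.0)       # assumed standard 2

# Unknown spectrum: a 60/40 mixture of the standards plus a little noise.
noise = 0.005 * np.random.default_rng(5).standard_normal(e.size)
mixture = 0.6 * hematite + 0.4 * ferrihydrite + noise

# Least-squares fit of the mixture to the standards gives the fractions.
S = np.column_stack([hematite, ferrihydrite])
frac, *_ = np.linalg.lstsq(S, mixture, rcond=None)
```

    The fit is better conditioned the less correlated the standard spectra are, which is exactly the reason the EXAFS methods outperformed the XANES methods above.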

  15. A new formulation for total least square error method in d-dimensional space with mapping to a parametric line

    Science.gov (United States)

    Skala, Vaclav

    2016-06-01

    There are many practical applications based on the Least Square Error (LSE) or Total Least Square Error (TLSE) methods. Usually the standard least square error is used due to its simplicity, but it is not an optimal solution, as it optimizes not the distance but the square of a distance. The TLSE method, respecting the orthogonality of the distance measurement, is computed in d-dimensional space: for points given in E2 a line π in E2, or for points given in E3 a plane ρ in E3, fitting the TLSE criterion is found. However, some tasks in the physical sciences lead to a slightly different problem. In this paper, a new TLSE method is introduced for the problem in which data are given in E3 and a line π ∈ E3 is to be found fitting the TLSE criterion. The presented approach is applicable to the general d-dimensional case, i.e. when points are given in Ed and a line π ∈ Ed is to be found. This formulation is different from the standard TLSE formulation.
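
    One well-known route to an orthogonal-distance line fit in E3, sketched here on synthetic points, is that the best-fit line passes through the centroid along the first principal direction of the centered point cloud (this is the classical SVD construction, not the paper's new formulation):

```python
import numpy as np

# Synthetic points scattered around a known line in E3.
rng = np.random.default_rng(6)
t = np.linspace(0.0, 1.0, 100)
direction = np.array([1.0, 2.0, 2.0]) / 3.0        # unit direction vector
points = np.array([0.5, -1.0, 0.25]) + t[:, None] * direction
points += 0.01 * rng.standard_normal(points.shape)

# Best orthogonal-distance line: centroid + first right-singular vector.
centroid = points.mean(axis=0)
_, _, Vt = np.linalg.svd(points - centroid)
fit_dir = Vt[0]                                    # line direction (up to sign)

cosine = abs(fit_dir @ direction)                  # alignment with the truth
```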

  16. A Component Prediction Method for Flue Gas of Natural Gas Combustion Based on Nonlinear Partial Least Squares Method

    Directory of Open Access Journals (Sweden)

    Hui Cao

    2014-01-01

    Quantitative analysis of the flue gas of a natural gas-fired generator is significant for energy conservation and emission reduction. The traditional partial least squares method may not deal with nonlinear problems effectively. In this paper, a nonlinear partial least squares method with extended input based on a radial basis function neural network (RBFNN) is used for component prediction of flue gas. In the proposed method, the original independent input matrix is the input of the RBFNN, and the outputs of the hidden-layer nodes of the RBFNN form the extension term of the original independent input matrix. Then, partial least squares regression is performed on the extended input matrix and the output matrix to establish the component prediction model of the flue gas. A near-infrared spectral dataset of flue gas from natural gas combustion is used to evaluate the effectiveness of the proposed method compared with PLS. The experimental results show that the root-mean-square errors of the prediction values of the proposed method for methane, carbon monoxide, and carbon dioxide are reduced by 4.74%, 21.76%, and 5.32%, respectively, compared to those of PLS. Hence, the proposed method has higher predictive capability and better robustness.
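
    The extended-input idea can be sketched on a toy nonlinear target: augment the original inputs with RBF hidden-layer outputs, then regress on the extended matrix. Plain least squares stands in here for the paper's PLS step, and all data, centers, and widths are invented for illustration.

```python
import numpy as np

# Toy nonlinear relationship y = sin(2.5 x) that a linear model cannot capture.
rng = np.random.default_rng(10)
n = 150
X = rng.uniform(-1.0, 1.0, size=(n, 1))
y = np.sin(2.5 * X[:, 0])

# RBF "hidden layer": fixed centers and width (arbitrary choices).
centers = np.linspace(-1.0, 1.0, 7)
width = 0.3
Phi = np.exp(-(X - centers) ** 2 / (2.0 * width ** 2))   # hidden outputs
X_ext = np.hstack([np.ones((n, 1)), X, Phi])             # extended input matrix

beta, *_ = np.linalg.lstsq(X_ext, y, rcond=None)
pred = X_ext @ beta

# Compare against a purely linear fit on the original input.
lin = np.polyval(np.polyfit(X[:, 0], y, 1), X[:, 0])
rmse_linear = np.sqrt(np.mean((y - lin) ** 2))
rmse_ext = np.sqrt(np.mean((y - pred) ** 2))
```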

  17. [Discrimination of patients with Xiao-Chaihu Tang syndrome using 1H NMR metabonomics and partial least square analysis].

    Science.gov (United States)

    Xing, Jie; Yuan, Shu-chun; Sun, Hui-min; Fan, Ma-li; Li, Zhen-yu; Qin, Xue-mei

    2015-08-01

    A 1H NMR metabonomics approach was used to reveal the chemical differences in urine between patients with Xiao-Chaihu Tang syndrome (XCHTS) and healthy participants (HP). The partial least squares method was used to establish a model distinguishing patients with Xiao-Chaihu Tang syndrome from healthy controls. Thirty-four endogenous metabolites were identified in the 1H NMR spectrum, and orthogonal partial least squares discriminant analysis showed that the urine of patients with Xiao-Chaihu Tang syndrome and that of healthy participants could be separated clearly, indicating that the metabolic profile of patients with Xiao-Chaihu Tang syndrome was changed markedly. Fifteen metabolites were found by the S-plot of OPLS-DA and VIP values. The contents of leucine, formic acid, glycine, hippuric acid and uracil increased in the urine of patients, while threonine, 2-hydroxyisobutyrate, acetamide, 2-oxoglutarate, citric acid, dimethylamine, malonic acid, betaine, trimethylamine oxide, phenylacetyl glycine, and uridine decreased. These metabolites involve intestinal microbial balance, energy metabolism and amino acid metabolism pathways, which are related to the major symptoms of Xiao-Chaihu Tang syndrome. Patients with Xiao-Chaihu Tang syndrome could be identified and predicted correctly using the established partial least squares model. This study can serve as a basis for the accurate diagnosis and rational treatment of Xiao-Chaihu Tang syndrome.

  18. Hourly cooling load forecasting using time-indexed ARX models with two-stage weighted least squares regression

    International Nuclear Information System (INIS)

    Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trend of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set

  19. [The net analyte preprocessing combined with radial basis partial least squares regression applied in noninvasive measurement of blood glucose].

    Science.gov (United States)

    Li, Qing-Bo; Huang, Zheng-Wei

    2014-02-01

    In order to improve the prediction accuracy of quantitative analysis models for near-infrared spectroscopy of blood glucose, this paper combines the net analyte preprocessing (NAP) algorithm with radial basis function partial least squares (RBFPLS) regression to build a nonlinear modeling method suitable for human glucose measurement, named NAP-RBFPLS. First, NAP is used to preprocess the near-infrared spectra of blood glucose in order to extract only the information related to the glucose signal from the original spectra. This effectively weakens the chance-correlation problems between glucose changes and interference factors caused by the absorption of water, albumin, hemoglobin, fat and other blood components, changes in body temperature, drift of the measuring instruments, and changes in the measurement environment and conditions. A nonlinear quantitative analysis model is then built from the NAP-processed near-infrared spectroscopy data in order to capture the nonlinear relationship between glucose concentration and the near-infrared spectra caused by strong scattering in the body. The new method is compared with three other quantitative analysis models built on partial least squares (PLS), net analyte preprocessing partial least squares (NAP-PLS) and RBFPLS, respectively. The experimental results show that the nonlinear calibration model developed by combining the NAP algorithm and RBFPLS regression, as put forward in this paper, greatly improves the prediction accuracy on the prediction sets, demonstrating that the nonlinear modeling method has practical value for research on noninvasive detection of human glucose concentrations.
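
    The NAP step is, at heart, an orthogonal projection that removes the subspace spanned by the interferent spectra. A sketch with synthetic bands (a single "water" interferent standing in for the many interference factors listed above):

```python
import numpy as np

# Synthetic spectra: analyte ("glucose") band plus one interferent band.
rng = np.random.default_rng(8)
wl = np.linspace(0.0, 1.0, 120)
glucose_band = np.exp(-(wl - 0.3) ** 2 / 0.005)
water_band = np.exp(-(wl - 0.7) ** 2 / 0.02)        # assumed interferent

conc = rng.uniform(0.5, 1.5, size=40)               # analyte concentrations
interf = rng.uniform(0.0, 2.0, size=40)             # interferent levels
spectra = np.outer(conc, glucose_band) + np.outer(interf, water_band)

# NAP projector: I - B B^+, with B spanning the interferent spectra.
B = water_band[:, None]
P = np.eye(wl.size) - B @ np.linalg.pinv(B)
clean = spectra @ P.T                               # interferent removed

# After projection, one least-squares coefficient recovers the concentration.
g_clean = P @ glucose_band
coef = (clean @ g_clean) / (g_clean @ g_clean)
```

    In the paper this cleaned data then feeds the nonlinear RBFPLS regression; here the noiseless synthetic case is recovered exactly by a single coefficient.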

  20. NOISE REDUCTION FOR FAST FADING CHANNEL BY RECURRENT LEAST SQUARES SUPPORT VECTOR MACHINES IN EMBEDDING PHASE SPACES

    Institute of Scientific and Technical Information of China (English)

    Xiang Zheng; Zhang Taiyi; Sun Jiancheng

    2006-01-01

    A new strategy for noise reduction on fast fading channels is presented. First, more information is acquired by utilizing the reconstructed embedding phase space. Then, noise reduction of the fast fading channel is realized with Recurrent Least Squares Support Vector Machines (RLS-SVM). This filtering technique does not make use of the spectral contents of the signal. Based on the stability and fractal nature of the chaotic attractor, the RLS-SVM algorithm is a good candidate for nonlinear time-series noise reduction. The simulation results show that better noise-reduction performance is achieved when the signal-to-noise ratio is 12 dB.
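
    The embedding step can be sketched as follows: a scalar series is turned into state vectors by the method of delays (dimension m, lag tau), and a filter then operates in that reconstructed space. Here a simple SVD projection serves as a linear stand-in for the RLS-SVM regression, on a synthetic noisy sinusoid rather than a fading-channel signal.

```python
import numpy as np

# Noisy scalar series standing in for the received channel signal.
rng = np.random.default_rng(9)
n, m, tau = 1000, 3, 2
x = np.sin(0.1 * np.arange(n)) + 0.05 * rng.standard_normal(n)

# Delay embedding: rows are states (x[k], x[k+tau], x[k+2*tau]).
rows = n - (m - 1) * tau
states = np.column_stack([x[i * tau: i * tau + rows] for i in range(m)])

# Linear stand-in for the filter: keep the dominant directions of the
# embedded cloud and project back, discarding off-attractor noise.
mean = states.mean(axis=0)
U, s, Vt = np.linalg.svd(states - mean, full_matrices=False)
k = 2                                            # retained directions
denoised = mean + (U[:, :k] * s[:k]) @ Vt[:k]
```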