WorldWideScience

Sample records for carlo library least-squares

  1. Status of software for PGNAA bulk analysis by the Monte Carlo - Library Least-Squares (MCLLS) approach

    International Nuclear Information System (INIS)

The Center for Engineering Applications of Radioisotopes (CEAR) has been working for about ten years on the Monte Carlo - Library Least-Squares (MCLLS) approach for treating the nonlinear inverse analysis problem for PGNAA bulk analysis. This approach consists essentially of using Monte Carlo simulation to generate the libraries of all the elements to be analyzed plus any other required libraries. These libraries are then used in the linear Library Least-Squares (LLS) approach with unknown sample spectra to analyze for all elements in the sample. The other libraries include all sources of background, which include: (1) gamma-rays emitted by the neutron source, (2) prompt gamma-rays produced in the analyzer construction materials, (3) natural gamma-rays from K-40 and the uranium and thorium decay chains, and (4) prompt and decay gamma-rays produced in the NaI detector by neutron activation. A number of unforeseen problems have arisen in pursuing this approach, including: (1) the neutron activation of the most common detector (NaI) used in bulk analysis PGNAA systems, (2) the nonlinearity of this detector, and (3) difficulties in obtaining detector response functions for this (and other) detectors. These problems have recently been addressed by CEAR and are either solved or almost solved at the present time. Development of the Monte Carlo simulation for all of the libraries has been finished except for the prompt gamma-ray library from the activation of the NaI detector. The treatment of the coincidence schemes for Na and particularly I must first be determined to complete the Monte Carlo simulation of this last library. (author)
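
    The linear LLS step described above reduces, for a measured spectrum y and column-stacked library spectra A, to fitting nonnegative library multipliers x with A x ≈ y. Below is a minimal Python sketch of that step; the library shapes and noise level are hypothetical, and the channel-variance weighting used in practice is omitted.

```python
# Minimal sketch of the linear library least-squares (LLS) step: model the
# unknown spectrum as a nonnegative linear combination of library spectra.
import numpy as np
from scipy.optimize import nnls

def lls_unmix(sample_spectrum, libraries):
    """Fit multipliers x >= 0 such that libraries @ x ~ sample_spectrum.

    sample_spectrum : (n_channels,) measured counts
    libraries       : (n_channels, n_libraries) column-stacked library spectra
    """
    x, residual_norm = nnls(libraries, sample_spectrum)
    return x, residual_norm

# Toy usage with three hypothetical library spectra.
rng = np.random.default_rng(0)
A = np.abs(rng.normal(size=(512, 3)))      # fake element + background libraries
true_x = np.array([2.0, 0.5, 1.0])         # true multipliers
y = A @ true_x + rng.normal(scale=0.01, size=512)
x_hat, _ = lls_unmix(y, A)
print(x_hat)                               # close to true_x
```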

  2. On the treatment of ill-conditioned cases in the Monte Carlo library least-squares approach for inverse radiation analyzers

    International Nuclear Information System (INIS)

Prompt gamma-ray neutron activation analysis (PGNAA) has been and still is one of the major methods of choice for the elemental analysis of various bulk samples. This is mostly due to the fact that PGNAA offers a rapid, non-destructive and on-line means of sample interrogation. The quantitative analysis of the prompt gamma-ray data can, on the other hand, be performed either through single peak analysis or through the so-called Monte Carlo library least-squares (MCLLS) approach, of which the latter has been shown to be more sensitive and more accurate than the former. The MCLLS approach is based on the assumption that the total prompt gamma-ray spectrum of any sample is a linear combination of the contributions from the individual constituents or libraries. Through the minimization of the chi-square value, this assumption leads to a set of linear equations that must be solved to obtain the library multipliers, a process that involves the inversion of the covariance matrix. The least-squares solution may be extremely uncertain due to the ill-conditioning of the covariance matrix. The covariance matrix becomes ill-conditioned whenever, in the subsequent calculations, two or more libraries are highly correlated. The ill-conditioning is also unavoidable whenever the sample contains trace amounts of certain elements or elements with very low thermal neutron capture cross-sections. In this work, a new iterative approach, which can handle the ill-conditioning of the covariance matrix, is proposed and applied to a hydrocarbon multiphase flow problem in which the parameters of interest are the separate amounts of the oil, gas, water and salt phases. The results of the proposed method are also compared with the results obtained through the implementation of a well-known regularization method, the truncated singular value decomposition. Final calculations indicate that the proposed approach is able to treat ill-conditioned cases appropriately. (paper)
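
    Truncated singular value decomposition (TSVD), the regularization method this record compares against, is easy to sketch: drop the small singular values that let noise dominate the least-squares solution. A minimal Python version, with the truncation level k as a hypothetical tuning parameter:

```python
# Minimal TSVD sketch for an ill-conditioned least-squares system A x = b.
import numpy as np

def tsvd_solve(A, b, k):
    """Least-squares solution keeping only the k largest singular values."""
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    # Discard directions with tiny singular values that amplify noise.
    coeffs = (U[:, :k].T @ b) / s[:k]
    return Vt[:k].T @ coeffs
```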

  3. Enhanced least squares Monte Carlo method for real-time decision optimizations for evolving natural hazards

    DEFF Research Database (Denmark)

    Anders, Annett; Nishijima, Kazuyoshi

The present paper aims at enhancing a solution approach proposed by Anders & Nishijima (2011) to real-time decision problems in civil engineering. The approach builds on the Least Squares Monte Carlo method (LSM) originally proposed by Longstaff & Schwartz (2001) for computing American option prices. In Anders & Nishijima (2011) the LSM is adapted for a real-time operational decision problem; however, it is found that further improvement of the computational efficiency is required in order to make it practical. This is the focus of the present paper. The idea behind the...
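
    For context, the Longstaff & Schwartz LSM prices an American option by simulating paths, stepping backward in time, and regressing discounted future cashflows on the current state to estimate the continuation value. A minimal sketch for an American put under geometric Brownian motion, with all parameter values purely illustrative:

```python
# Minimal Longstaff-Schwartz least squares Monte Carlo sketch (American put).
import numpy as np

def lsm_american_put(S0=100.0, K=100.0, r=0.05, sigma=0.2, T=1.0,
                     n_steps=50, n_paths=100_000, seed=1):
    rng = np.random.default_rng(seed)
    dt = T / n_steps
    # Simulate geometric Brownian motion paths; S[:, j] is the price at (j+1)*dt.
    z = rng.standard_normal((n_paths, n_steps))
    log_paths = np.cumsum((r - 0.5 * sigma**2) * dt
                          + sigma * np.sqrt(dt) * z, axis=1)
    S = S0 * np.exp(log_paths)

    cashflow = np.maximum(K - S[:, -1], 0.0)       # payoff at maturity
    for t in range(n_steps - 2, -1, -1):
        cashflow *= np.exp(-r * dt)                # discount one step back
        itm = K - S[:, t] > 0.0                    # regress on in-the-money paths
        if itm.sum() > 3:
            x = S[itm, t]
            basis = np.column_stack([np.ones_like(x), x, x**2])
            beta, *_ = np.linalg.lstsq(basis, cashflow[itm], rcond=None)
            continuation = basis @ beta            # estimated continuation value
            exercise = K - x
            ex_now = exercise > continuation       # early-exercise decision rule
            idx = np.where(itm)[0][ex_now]
            cashflow[idx] = exercise[ex_now]
    return np.exp(-r * dt) * cashflow.mean()       # discount final step to t = 0

print(lsm_american_put())  # Monte Carlo estimate of the American put value
```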

  4. Elemental PGNAA analysis using gamma-gamma coincidence counting with the library least-squares approach

    Science.gov (United States)

    Metwally, Walid A.; Gardner, Robin P.; Mayo, Charles W.

    2004-01-01

An accurate method for elemental analysis using gamma-gamma coincidence counting is presented. To demonstrate the feasibility of this method for PGNAA, a system of three radioisotopes (Na-24, Co-60 and Cs-134) that emit coincident gamma rays was used. Two HPGe detectors were connected to a system that allowed both singles and coincidences to be collected simultaneously. A known mixture of the three radioisotopes was used and data were deliberately collected at relatively high counting rates to determine the effect of pulse pile-up distortion. The results obtained with library least-squares analysis for both normal and coincidence counting are presented and compared to the known amounts. The coincidence results are shown to give much better accuracy. It appears that in addition to the expected advantage of reduced background, the coincidence approach is considerably more resistant to pulse pile-up distortion.

  5. A library least-squares approach for scatter correction in gamma-ray tomography

    Science.gov (United States)

    Meric, Ilker; Anton Johansen, Geir; Valgueiro Malta Moreira, Icaro

    2015-03-01

Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system.

  6. Calculation of Credit Valuation Adjustment Based on Least Square Monte Carlo Methods

    Directory of Open Access Journals (Sweden)

    Qian Liu

    2015-01-01

Counterparty credit risk has become one of the highest-profile risks facing participants in the financial markets. Despite this, relatively little is known about how counterparty credit risk is actually priced mathematically. We examine this issue using interest rate swaps. This widely traded financial product allows us to identify well the risk profiles of both institutions and their counterparties. Concretely, the Hull-White model for the interest rate and a mean-reverting model for the default intensity have proven to correspond well with reality and to be well suited for financial institutions. Besides, we find that the least squares Monte Carlo method is quite efficient in the calculation of the credit valuation adjustment (CVA), as it avoids the redundant step of generating inner scenarios. As a result, it accelerates the convergence speed of the CVA estimators. In the second part, we propose a new method to calculate bilateral CVA that avoids the double counting found in the existing literature, where several copula functions are adopted to describe the dependence of the two first-to-default times.

  7. A library least-squares approach for scatter correction in gamma-ray tomography

    International Nuclear Information System (INIS)

Scattered radiation is known to lead to distortion in reconstructed images in Computed Tomography (CT). The effects of scattered radiation are especially pronounced in non-scanning, multiple-source systems, which are preferred for flow imaging where the instantaneous density distribution of the flow components is of interest. In this work, a new method based on a library least-squares (LLS) approach is proposed as a means of estimating the scatter contribution and correcting for it. The validity of the proposed method is tested using the 85-channel industrial gamma-ray tomograph previously developed at the University of Bergen (UoB). The results presented here confirm that the LLS approach can effectively estimate the amounts of transmission and scatter components in any given detector in the UoB gamma-ray tomography system. - Highlights: • A LLS approach is proposed for scatter correction in gamma-ray tomography. • The validity of the LLS approach is tested through experiments. • Gain shift and pulse pile-up affect the accuracy of the LLS approach. • The LLS approach successfully estimates scatter profiles.

  8. Uncovering Time-Varying Parameters with the Kalman-Filter and the Flexible Least Squares: a Monte Carlo Study

    OpenAIRE

    Zsolt Darvas; Balázs Varga

    2012-01-01

Using Monte Carlo methods, we compare the ability of the Kalman-filter, the Kalman-smoother and the flexible least squares (FLS) to uncover the parameters of an autoregression. We find that the ordinary least squares (OLS) estimator performs much better than the time-varying coefficient methods when the parameters are in fact constant, but the OLS does very poorly when parameters change. Neither the FLS, nor the Kalman-filter and Kalman-smoother can uncover sudden changes in parameters. But w...

  9. Unifying Least Squares, Total Least Squares and Data Least Squares

    Czech Academy of Sciences Publication Activity Database

    Paige, C. C.; Strakoš, Zdeněk

Dordrecht : Kluwer Academic Publishers, 2002 - (van Huffel, S.; Lemmerling, P.), s. 25-34 ISBN 1-4020-0476-1. [International Workshop on TLS and Errors-in-Variables Modelling. Leuven (BE), 27.08.2001-29.08.2001] R&D Projects: GA AV ČR IAA2030801 Other grants: NSERC(CA) OGP0009236 Institutional research plan: AV0Z1030915 Keywords: scaled total least squares * ordinary least squares * data least squares * core problem * orthogonal reduction * singular value decomposition Subject RIV: BA - General Mathematics
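
    As a hedged sketch of the formulations this entry unifies, for an overdetermined system A x ≈ b the three classical problems differ only in where the correction is allowed:

```latex
\begin{align*}
\text{LS:}\quad  &\min_{r}   \;\|r\|_2            &&\text{s.t. } Ax = b + r,\\
\text{DLS:}\quad &\min_{E}   \;\|E\|_F            &&\text{s.t. } (A+E)x = b,\\
\text{TLS:}\quad &\min_{E,r} \;\|[\,E \;\; r\,]\|_F &&\text{s.t. } (A+E)x = b + r.
\end{align*}
```

    A scaled variant that minimizes ‖[E, γr]‖_F subject to (A+E)x = b + r interpolates between them: as γ grows, the residual r is forced toward zero (the DLS end), while as γ shrinks, the correction moves entirely into r (the LS end). The paper's exact parameterization may differ in detail.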

  10. Bayesian least squares deconvolution

    Science.gov (United States)

    Asensio Ramos, A.; Petit, P.

    2015-11-01

    Aims: We develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods: We consider LSD under the Bayesian framework and we introduce a flexible Gaussian process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results: We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.
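
    For reference, the classical (non-Bayesian) LSD profile that this work generalizes is a weighted least-squares estimate: with a line-pattern matrix M built from the multiline mask, a diagonal weight matrix S = diag(1/σ_i) of inverse noise levels, and the observed polarized spectrum V, the common line profile Z is, in the standard textbook form (not necessarily the authors' notation),

```latex
Z = \left(M^{T} S^{2} M\right)^{-1} M^{T} S^{2}\, V .
```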

  11. Bayesian least squares deconvolution

    CERN Document Server

    Ramos, A Asensio

    2015-01-01

    Aims. To develop a fully Bayesian least squares deconvolution (LSD) that can be applied to the reliable detection of magnetic signals in noise-limited stellar spectropolarimetric observations using multiline techniques. Methods. We consider LSD under the Bayesian framework and we introduce a flexible Gaussian Process (GP) prior for the LSD profile. This prior allows the result to automatically adapt to the presence of signal. We exploit several linear algebra identities to accelerate the calculations. The final algorithm can deal with thousands of spectral lines in a few seconds. Results. We demonstrate the reliability of the method with synthetic experiments and we apply it to real spectropolarimetric observations of magnetic stars. We are able to recover the magnetic signals using a small number of spectral lines, together with the uncertainty at each velocity bin. This allows the user to consider if the detected signal is reliable. The code to compute the Bayesian LSD profile is freely available.

  12. A SUCCESSIVE LEAST SQUARES METHOD FOR STRUCTURED TOTAL LEAST SQUARES

    Institute of Scientific and Technical Information of China (English)

    Plamen Y. Yalamov; Jin-yun Yuan

    2003-01-01

A new method for Total Least Squares (TLS) problems is presented. It differs from previous approaches and is based on the solution of successive Least Squares problems. The method is quite suitable for Structured TLS (STLS) problems. We study mostly the case of Toeplitz matrices in this paper. The numerical tests illustrate that the method converges fast to the solution for Toeplitz STLS problems. Since the method is designed for general TLS problems, other structured problems can be treated similarly.

  13. Maximum likelihood, least squares and penalized least squares for PET

    International Nuclear Information System (INIS)

    The EM algorithm is the basic approach used to maximize the log likelihood objective function for the reconstruction problem in PET. The EM algorithm is a scaled steepest ascent algorithm that elegantly handles the nonnegativity constraints of the problem. The authors show that the same scaled steepest descent algorithm can be applied to the least squares merit function, and that it can be accelerated using the conjugate gradient approach. The experiments suggest that one can cut the computation by about a factor of 3 by using this technique. The results also apply to various penalized least squares functions which might be used to produce a smoother image

  14. Quasi-least squares regression

    CERN Document Server

    Shults, Justine

    2014-01-01

Drawing on the authors' substantial expertise in modeling longitudinal and clustered data, Quasi-Least Squares Regression provides a thorough treatment of quasi-least squares (QLS) regression - a computational approach for the estimation of correlation parameters within the framework of generalized estimating equations (GEEs). The authors present a detailed evaluation of QLS methodology, demonstrating the advantages of QLS in comparison with alternative methods. They describe how QLS can be used to extend the application of the traditional GEE approach to the analysis of unequally spaced longitudinal data.

  15. Least Squares Ranking on Graphs

    OpenAIRE

    Hirani, Anil N.; Kalyanaraman, Kaushik; Watts, Seth

    2010-01-01

    Given a set of alternatives to be ranked, and some pairwise comparison data, ranking is a least squares computation on a graph. The vertices are the alternatives, and the edge values comprise the comparison data. The basic idea is very simple and old: come up with values on vertices such that their differences match the given edge data. Since an exact match will usually be impossible, one settles for matching in a least squares sense. This formulation was first described by Leake in 1976 for ...
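
    The formulation described above fits in a few lines: build the edge-vertex incidence matrix, then solve for vertex values whose differences best match the edge data in the least squares sense. A minimal sketch with hypothetical comparison data:

```python
# Minimal least-squares ranking on a graph via the incidence matrix.
import numpy as np

def ls_rank(n_vertices, edges):
    """edges: list of (i, j, y) meaning alternative j beat i by margin y."""
    B = np.zeros((len(edges), n_vertices))    # incidence matrix
    y = np.zeros(len(edges))
    for row, (i, j, margin) in enumerate(edges):
        B[row, i], B[row, j] = -1.0, 1.0      # v[j] - v[i] should match margin
        y[row] = margin
    # Ranks are determined only up to an additive constant; lstsq returns the
    # minimum-norm least-squares solution, which fixes that constant.
    v, *_ = np.linalg.lstsq(B, y, rcond=None)
    return v

scores = ls_rank(3, [(0, 1, 1.0), (1, 2, 1.0), (0, 2, 2.5)])
print(scores)  # vertex 2 ranked highest
```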

  16. Monte Carlo method of least squares fitting of experimental data

    Institute of Scientific and Technical Information of China (English)

    颜清; 彭小平

    2011-01-01

Least squares fitting of chemical engineering experimental data yields correlation coefficients close to 1 and high precision, yet the fitted results can differ markedly from the empirical correlations. The Monte Carlo method is a non-deterministic numerical method based on a probabilistic model. Monte Carlo least-squares fitting of chemical experimental data is more flexible in application and has a broader scope. In an Excel spreadsheet, mixed programming with worksheet data and VBA makes it easy to implement Monte Carlo least-squares data fitting: VBA handles the data communication with the Excel worksheets and the parallel processing of the experimental data; it reads the experimental data from the worksheet, computes an approximate search range for the random points, performs the least-squares statistical analysis, and writes the results back to the worksheet. The Monte Carlo least-squares method achieves the same precision as the standard least-squares method, in line with the law of large numbers, on which its accuracy rests. When the number of random search points is small, the error is large; but with 10,000 random search points its accuracy is almost the same as that of the least-squares method. At the same time, the fitted equations remain very close to the established empirical correlations, unifying theory and the experimental results.
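
    A minimal Python sketch of the random-search idea the abstract describes (the original used Excel/VBA): draw random points in a parameter box, evaluate the sum of squared residuals at each, and keep the best. The model and bounds below are hypothetical.

```python
# Minimal Monte Carlo least-squares fitting by random search.
import numpy as np

def mc_least_squares(x, y, model, bounds, n_points=10_000, seed=0):
    rng = np.random.default_rng(seed)
    lo, hi = np.asarray(bounds[0]), np.asarray(bounds[1])
    best_params, best_sse = None, np.inf
    for _ in range(n_points):
        params = lo + (hi - lo) * rng.random(lo.size)   # random search point
        sse = np.sum((y - model(x, params)) ** 2)       # least-squares criterion
        if sse < best_sse:
            best_params, best_sse = params, sse
    return best_params, best_sse

# Toy usage: fit y = a * exp(b * x).
x = np.linspace(0.0, 1.0, 20)
y = 2.0 * np.exp(1.5 * x)
params, sse = mc_least_squares(x, y, lambda x, p: p[0] * np.exp(p[1] * x),
                               bounds=([0.0, 0.0], [5.0, 3.0]))
print(params, sse)   # near (2.0, 1.5) with small residual
```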

  17. The Monte Carlo validation framework for the discriminant partial least squares model extended with variable selection methods applied to authenticity studies of Viagra® based on chromatographic impurity profiles.

    Science.gov (United States)

    Krakowska, B; Custers, D; Deconinck, E; Daszykowski, M

    2016-02-01

The aim of this work was to develop a general framework for the validation of discriminant models based on the Monte Carlo approach that is used in the context of authenticity studies based on chromatographic impurity profiles. The validation approach was applied to evaluate the usefulness of the diagnostic logic rule obtained from the partial least squares discriminant model (PLS-DA) that was built to discriminate authentic Viagra® samples from counterfeits (a two-class problem). The major advantage of the proposed validation framework stems from the possibility of obtaining distributions for different figures of merit that describe the PLS-DA model, e.g., sensitivity, specificity, correct classification rate and area under the curve, as a function of model complexity. Therefore, one can quickly evaluate their uncertainty estimates. Moreover, the Monte Carlo model validation allows balanced sets of training samples to be designed, which is required at the stage of the construction of PLS-DA and is recommended in order to obtain fair estimates that are based on an independent set of samples. In this study, as an illustrative example, 46 authentic Viagra® samples and 97 counterfeit samples were analyzed and described by their impurity profiles that were determined using high performance liquid chromatography with photodiode array detection and further discriminated using the PLS-DA approach. In addition, we demonstrated how to extend the Monte Carlo validation framework with four different variable selection schemes: the elimination of uninformative variables, the importance of a variable in projections, selectivity ratio and significance multivariate correlation. The best PLS-DA model was based on a subset of variables that were selected using the variable importance in the projection approach. For an independent test set, average estimates with the corresponding standard deviation (based on 1000 Monte Carlo runs) of the correct

  18. Least Squares Data Fitting with Applications

    DEFF Research Database (Denmark)

    Hansen, Per Christian; Pereyra, Víctor; Scherer, Godela

As one of the classical statistical regression techniques, and often the first to be taught to new students, least squares fitting can be a very effective tool in data analysis. Given measured data, we establish a relationship between independent and dependent variables so that we can use the data predictively. The main concern of Least Squares Data Fitting with Applications is how to do this on a computer with efficient and robust computational methods for linear and nonlinear relationships. The presentation also establishes a link between the statistical setting and the computational issues. The book offers many examples that illustrate the techniques and algorithms, and material that helps readers to understand and evaluate the computed solutions. Least Squares Data Fitting with Applications can be used as a textbook for advanced undergraduate or graduate courses and by professionals in the sciences and in engineering.

  19. Least squares methods in physics and engineering

    International Nuclear Information System (INIS)

These lectures deal with numerical methods representing the state of the art, including the available computer software. First a brief background on basic matrix factorizations is given. Dense linear problems are then treated, including methods for updating the solution when rows and variables are added or deleted. Special consideration is given to weighted least squares problems and constrained problems. Regularization of ill-posed problems is briefly surveyed. Sparse linear least squares problems are treated in some detail. Different ordering schemes, including ordering to block-angular form and nested dissection, and iterative methods are discussed. Finally a survey of methods for nonlinear least squares problems is given. The Gauss-Newton method is analyzed and its local convergence derived. Levenberg-Marquardt methods are discussed and different methods using second derivative information are surveyed. Special methods are given for separable linear-nonlinear problems. (orig.)

  20. Partial update least-square adaptive filtering

    CERN Document Server

    Xie, Bei

    2014-01-01

Adaptive filters play an important role in the fields related to digital signal processing and communication, such as system identification, noise cancellation, channel equalization, and beamforming. In practical applications, the computational complexity of an adaptive filter is an important consideration. The Least Mean Square (LMS) algorithm is widely used because of its low computational complexity (O(N)) and simplicity in implementation. The least squares algorithms, such as Recursive Least Squares (RLS), Conjugate Gradient (CG), and Euclidean Direction Search (EDS), can converge faster...

  1. Deformation analysis with Total Least Squares

    Directory of Open Access Journals (Sweden)

    M. Acar

    2006-01-01

Deformation analysis is one of the main research fields in geodesy. The deformation analysis process comprises measurement and analysis phases. Measurements can be collected using several techniques. The output of the evaluation of the measurements is mainly point positions. In the deformation analysis phase, the coordinate changes in the point positions are investigated. Several models or approaches can be employed for the analysis. One approach is based on a Helmert or similarity coordinate transformation where the displacements and the respective covariance matrix are transformed into a unique datum. Traditionally, a Least Squares (LS) technique is used for the transformation procedure. Another approach that can be introduced as an alternative methodology is Total Least Squares (TLS), a relatively new approach in geodetic applications. In this study, in order to determine point displacements, 3-D coordinate transformations based on the Helmert transformation model were carried out individually by Least Squares (LS) and Total Least Squares (TLS), respectively. The data used in this study were collected by the GPS technique in a landslide area near Istanbul. The results obtained from these two approaches have been compared.

  2. Iterative methods for weighted least-squares

    Energy Technology Data Exchange (ETDEWEB)

Bobrovnikova, E.Y.; Vavasis, S.A. [Cornell Univ., Ithaca, NY (United States)]

    1996-12-31

    A weighted least-squares problem with a very ill-conditioned weight matrix arises in many applications. Because of round-off errors, the standard conjugate gradient method for solving this system does not give the correct answer even after n iterations. In this paper we propose an iterative algorithm based on a new type of reorthogonalization that converges to the solution.

  3. An algorithm for nonlinear least squares

    Czech Academy of Sciences Publication Activity Database

    Balda, Miroslav

    Praha : Humusoft, 2007, s. 1-8. ISBN 978-80-7080-658-6. [Technical Computing Prague 2007. Praha (CZ), 14.11.2007] R&D Projects: GA ČR GA101/05/0199 Institutional research plan: CEZ:AV0Z20760514 Keywords : optimization * least squares * MATLAB Subject RIV: JC - Computer Hardware ; Software

  4. Least square fitting with one parameter less

    CERN Document Server

    Berg, Bernd A

    2015-01-01

It is shown that whenever the multiplicative normalization of a fitting function is not known, least squares fitting by $\chi^2$ minimization can be performed with one parameter less than usual by converting the normalization parameter into a function of the remaining parameters and the data.
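
    The mechanism is a one-line calculation. As a hedged sketch in generic notation (not necessarily the paper's): for a fit function f(x) = a g(x; θ) with weights w_i, setting ∂χ²/∂a = 0 gives the optimal normalization in closed form,

```latex
\chi^2(a,\theta) = \sum_i w_i \left( y_i - a\, g(x_i;\theta) \right)^2,
\qquad
\hat a(\theta) = \frac{\sum_i w_i\, y_i\, g(x_i;\theta)}{\sum_i w_i\, g(x_i;\theta)^2},
```

    so the minimization proceeds over θ alone, with χ²(θ) = χ²(â(θ), θ).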

  5. Discrete least squares approximation with polynomial vectors

    OpenAIRE

    Van Barel, Marc; Bultheel, Adhemar

    1993-01-01

We give a solution of a discrete least squares approximation problem in terms of orthogonal polynomial vectors. The degrees of the polynomial elements of these vectors can be different. An algorithm is constructed to compute the coefficients of the recurrence relations for the orthogonal polynomial vectors. In case the function values are prescribed in points on the real line or on the unit circle, variants of the original algorithm can be designed which are an order of magnitude more efficient. Al...

  6. Least-squares finite element methods for quantum chromodynamics

    Energy Technology Data Exchange (ETDEWEB)

Ketelsen, Christian [Los Alamos National Laboratory]; Brannick, J [PENN STATE UNIV]; Manteuffel, T [UNIV OF CO.]; Mccormick, S [UNIV OF CO.]

    2008-01-01

A significant amount of the computational time in large Monte Carlo simulations of lattice quantum chromodynamics (QCD) is spent inverting the discrete Dirac operator. Unfortunately, traditional covariant finite difference discretizations of the Dirac operator present serious challenges for standard iterative methods. For interesting physical parameters, the discretized operator is large and ill-conditioned, and has random coefficients. More recently, adaptive algebraic multigrid (AMG) methods have been shown to be effective preconditioners for Wilson's discretization of the Dirac equation. This paper presents an alternate discretization of the Dirac operator based on least-squares finite elements. The discretization is systematically developed and physical properties of the resulting matrix system are discussed. Finally, numerical experiments are presented that demonstrate the effectiveness of adaptive smoothed aggregation (αSA) multigrid as a preconditioner for the discrete field equations resulting from applying the proposed least-squares FE formulation to a simplified test problem, the 2d Schwinger model of quantum electrodynamics.

  7. ON THE SEPARABLE NONLINEAR LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Xin Liu; Yaxiang Yuan

    2008-01-01

Separable nonlinear least squares problems are a special class of nonlinear least squares problems, whose objective functions are linear in one part of the variables and nonlinear in the other. Such problems have broad applications in practice. Most existing algorithms for this kind of problem are derived from the variable projection method proposed by Golub and Pereyra, which exploits the separability within a separate framework. However, methods based on the variable projection strategy would be invalid if there are constraints on the variables, as real problems often have, even if the constraint is simply a ball constraint. We present a new algorithm based on a special approximation to the Hessian, using the fact that certain terms of the Hessian can be derived from the gradient. Our method maintains all the advantages of variable-projection-based methods; moreover, it can easily be combined with trust region methods and can be applied to general constrained separable nonlinear problems. Convergence analysis of our method is presented and numerical results are also reported.
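
    The variable projection baseline referred to above is easy to sketch: for a model y ≈ A(α)c that is linear in c, solve the inner linear least squares problem exactly at each trial α and optimize the projected residual over α only. A minimal unconstrained Python sketch with an illustrative two-exponential model:

```python
# Minimal variable projection sketch for separable nonlinear least squares.
import numpy as np
from scipy.optimize import minimize

def design(alpha, x):
    # Columns are the nonlinear basis functions evaluated at x.
    return np.column_stack([np.exp(-a * x) for a in alpha])

def projected_residual(alpha, x, y):
    A = design(alpha, x)
    c, *_ = np.linalg.lstsq(A, y, rcond=None)   # inner linear solve for c
    r = y - A @ c
    return r @ r

# Toy data: y = 3 e^{-x} + 1.5 e^{-4x}.
x = np.linspace(0.0, 2.0, 50)
y = 3.0 * np.exp(-x) + 1.5 * np.exp(-4.0 * x)
res = minimize(projected_residual, x0=[0.5, 3.0], args=(x, y),
               method="Nelder-Mead")
alpha_hat = res.x
c_hat, *_ = np.linalg.lstsq(design(alpha_hat, x), y, rcond=None)
print(alpha_hat, c_hat)   # near (1.0, 4.0) and (3.0, 1.5)
```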

  8. Total least squares for anomalous change detection

    Energy Technology Data Exchange (ETDEWEB)

    Theiler, James P [Los Alamos National Laboratory; Matsekh, Anna M [Los Alamos National Laboratory

    2010-01-01

    A family of difference-based anomalous change detection algorithms is derived from a total least squares (TLSQ) framework. This provides an alternative to the well-known chronochrome algorithm, which is derived from ordinary least squares. In both cases, the most anomalous changes are identified with the pixels that exhibit the largest residuals with respect to the regression of the two images against each other. The family of TLSQ-based anomalous change detectors is shown to be equivalent to the subspace RX formulation for straight anomaly detection, but applied to the stacked space. However, this family is not invariant to linear coordinate transforms. On the other hand, whitened TLSQ is coordinate invariant, and furthermore it is shown to be equivalent to the optimized covariance equalization algorithm. What whitened TLSQ offers, in addition to connecting with a common language the derivations of two of the most popular anomalous change detection algorithms - chronochrome and covariance equalization - is a generalization of these algorithms with the potential for better performance.

  9. Multiples least-squares reverse time migration

    KAUST Repository

    Zhang, D. L.

    2013-01-01

    To enhance the image quality, we propose multiples least-squares reverse time migration (MLSRTM) that transforms each hydrophone into a virtual point source with a time history equal to that of the recorded data. Since each recorded trace is treated as a virtual source, knowledge of the source wavelet is not required. Numerical tests on synthetic data for the Sigsbee2B model and field data from Gulf of Mexico show that MLSRTM can improve the image quality by removing artifacts, balancing amplitudes, and suppressing crosstalk compared to standard migration of the free-surface multiples. The potential liability of this method is that multiples require several roundtrips between the reflector and the free surface, so that high frequencies in the multiples are attenuated compared to the primary reflections. This can lead to lower resolution in the migration image compared to that computed from primaries.

  10. Vehicle detection using partial least squares.

    Science.gov (United States)

    Kembhavi, Aniruddha; Harwood, David; Davis, Larry S

    2011-06-01

Detecting vehicles in aerial images has a wide range of applications, from urban planning to visual surveillance. We describe a vehicle detector that improves upon previous approaches by incorporating a very large and rich set of image descriptors. A new feature set called Color Probability Maps is used to capture the color statistics of vehicles and their surroundings, along with the Histograms of Oriented Gradients feature and a simple yet powerful image descriptor named Pairs of Pixels that captures the structural characteristics of objects. The combination of these features leads to an extremely high-dimensional feature set (approximately 70,000 elements). Partial Least Squares is first used to project the data onto a much lower dimensional sub-space. Then, a powerful feature selection analysis is employed to improve the performance while vastly reducing the number of features that must be calculated. We compare our system to previous approaches on two challenging data sets and show superior performance. PMID:20921579

  11. Multisource Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-12-01

    Least-squares migration has been shown to be able to produce high quality migration images, but its computational cost is considered to be too high for practical imaging. In this dissertation, a multisource least-squares reverse time migration algorithm (LSRTM) is proposed to increase by up to 10 times the computational efficiency by utilizing the blended sources processing technique. There are three main chapters in this dissertation. In Chapter 2, the multisource LSRTM algorithm is implemented with random time-shift and random source polarity encoding functions. Numerical tests on the 2D HESS VTI data show that the multisource LSRTM algorithm suppresses migration artifacts, balances the amplitudes, improves image resolution, and reduces crosstalk noise associated with the blended shot gathers. For this example, multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution, and fewer migration artifacts compared to conventional RTM. The empirical results suggest that the multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with similar or less computational cost. The caveat is that LSRTM image is sensitive to large errors in the migration velocity model. In Chapter 3, the multisource LSRTM algorithm is implemented with frequency selection encoding strategy and applied to marine streamer data, for which traditional random encoding functions are not applicable. The frequency-selection encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique non-overlapping frequency content. Therefore, the receivers can distinguish the wavefield from each shot according to the frequencies. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is

  12. Positive Scattering Cross Sections using Constrained Least Squares

    International Nuclear Information System (INIS)

A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.
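
    The core numerical step, an equality-constrained least squares adjustment, can be sketched via its KKT system: find moments x as close as possible to the truncated moments m while satisfying linear constraints C x = d. The positivity condition on the scattering matrix (an inequality constraint) is omitted here for brevity, and the toy constraint values are hypothetical.

```python
# Minimal equality-constrained least-squares sketch via the KKT system.
import numpy as np

def constrained_ls(m, C, d):
    """Minimize ||x - m||_2 subject to C x = d."""
    n, k = m.size, d.size
    # Stationarity (x + C^T lam = m) and feasibility (C x = d) in one solve.
    kkt = np.block([[np.eye(n), C.T],
                    [C, np.zeros((k, k))]])
    rhs = np.concatenate([m, d])
    sol = np.linalg.solve(kkt, rhs)
    return sol[:n]

# Toy usage: pin the zeroth moment and impose one illustrative linear condition.
m = np.array([1.0, 0.6, 0.45, 0.4])                  # hypothetical moments
C = np.array([[1.0, 0.0, 0.0, 0.0],
              [1.0, 1.0, 1.0, 1.0]])
d = np.array([1.0, 2.2])
print(constrained_ls(m, C, d))
```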

  13. Positive Scattering Cross Sections using Constrained Least Squares

    Energy Technology Data Exchange (ETDEWEB)

    Dahl, J.A.; Ganapol, B.D.; Morel, J.E.

    1999-09-27

A method which creates a positive Legendre expansion from truncated Legendre cross section libraries is presented. The cross section moments of order two and greater are modified by a constrained least squares algorithm, subject to the constraints that the zeroth and first moments remain constant, and that the standard discrete ordinate scattering matrix is positive. A method using the maximum entropy representation of the cross section, which reduces the error of these modified moments, is also presented. These methods are implemented in PARTISN, and numerical results from a transport calculation using highly anisotropic scattering cross sections with the exponential discontinuous spatial scheme are presented.

  14. Skeletonized Least Squares Wave Equation Migration

    KAUST Repository

    Zhan, Ge

    2010-10-17

The theory for skeletonized least squares wave equation migration (LSM) is presented. The key idea is, for an assumed velocity model, the source-side Green's function and the geophone-side Green's function are computed by a numerical solution of the wave equation. Only the early arrivals of these Green's functions are saved and skeletonized to form the migration Green's function (MGF) by convolution. Then the migration image is obtained by a dot product between the recorded shot gathers and the MGF for every trial image point. The key to an efficient implementation of iterative LSM is that at each conjugate gradient iteration, the MGF is reused and no new finite-difference (FD) simulations are needed to get the updated migration image. It is believed that this procedure combined with phase-encoded multi-source technology will allow for the efficient computation of wave equation LSM images in less time than that of conventional reverse time migration (RTM).

  15. Recursive total-least-squares adaptive filtering

    Science.gov (United States)

    Dowling, Eric M.; DeGroat, Ronald D.

    1991-12-01

    In this paper a recursive total least squares (RTLS) adaptive filter is introduced and studied. The TLS approach is more appropriate and provides more accurate results than the LS approach when there is error on both sides of the adaptive filter equation; for example, linear prediction, AR modeling, and direction finding. The RTLS filter weights are updated in time O(mr) where m is the filter order and r is the dimension of the tracked subspace. In conventional adaptive filtering problems, r equals 1, so that updates can be performed with complexity O(m). The updates are performed by tracking an orthonormal basis for the smaller of the signal or noise subspaces using a computationally efficient subspace tracking algorithm. The filter is shown to outperform both LMS and RLS in terms of tracking and steady state tap weight error norms. It is also more versatile in that it can adapt its weight in the absence of persistent excitation, i.e., when the input data correlation matrix is near rank deficient. Through simulation, the convergence and tracking properties of the filter are presented and compared with LMS and RLS.

  16. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    Directory of Open Access Journals (Sweden)

    Greenwood L.R.

    2016-01-01

The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the

  17. Least-Squares Neutron Spectral Adjustment with STAYSL PNNL

    Science.gov (United States)

    Greenwood, L. R.; Johnson, C. D.

    2016-02-01

    The STAYSL PNNL computer code, a descendant of the STAY'SL code [1], performs neutron spectral adjustment of a starting neutron spectrum, applying a least squares method to determine adjustments based on saturated activation rates, neutron cross sections from evaluated nuclear data libraries, and all associated covariances. STAYSL PNNL is provided as part of a comprehensive suite of programs [2], where additional tools in the suite are used for assembling a set of nuclear data libraries and determining all required corrections to the measured data to determine saturated activation rates. Neutron cross section and covariance data are taken from the International Reactor Dosimetry File (IRDF-2002) [3], which was sponsored by the International Atomic Energy Agency (IAEA), though work is planned to update to data from the IAEA's International Reactor Dosimetry and Fusion File (IRDFF) [4]. The nuclear data and associated covariances are extracted from IRDF-2002 using the third-party NJOY99 computer code [5]. The NJpp translation code converts the extracted data into a library data array format suitable for use as input to STAYSL PNNL. The software suite also includes three utilities to calculate corrections to measured activation rates. Neutron self-shielding corrections are calculated as a function of neutron energy with the SHIELD code and are applied to the group cross sections prior to spectral adjustment, thus making the corrections independent of the neutron spectrum. The SigPhi Calculator is a Microsoft Excel spreadsheet used for calculating saturated activation rates from raw gamma activities by applying corrections for gamma self-absorption, neutron burn-up, and the irradiation history. Gamma self-absorption and neutron burn-up corrections are calculated (iteratively in the case of the burn-up) within the SigPhi Calculator spreadsheet. The irradiation history corrections are calculated using the BCF computer code and are inserted into the SigPhi Calculator

  18. Least squares regression with errors in both variables: case studies

    Directory of Open Access Journals (Sweden)

    Elcio Cruz de Oliveira

    2013-01-01

Analytical curves are normally obtained from discrete data by least squares regression. The least squares regression of data involving significant error in both x and y values should not be implemented by ordinary least squares (OLS). In this work, the use of orthogonal distance regression (ODR) is discussed as an alternative approach in order to take into account the error in the x variable. Four examples are presented to illustrate the deviation between the results from both regression methods. The examples studied show that, in some situations, ODR coefficients must substitute for those of OLS, and, in other situations, the difference is not significant.
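
    A minimal sketch of the comparison using scipy.odr, which implements orthogonal distance regression; the straight-line model, data, and error levels below are illustrative.

```python
# OLS vs. ODR for a straight-line fit with errors in both x and y.
import numpy as np
from scipy import odr

rng = np.random.default_rng(42)
x_true = np.linspace(0.0, 10.0, 25)
y_true = 2.0 * x_true + 1.0
x = x_true + rng.normal(scale=0.3, size=x_true.size)   # error in x
y = y_true + rng.normal(scale=0.3, size=y_true.size)   # error in y

# OLS: errors assumed to lie only in y.
ols_slope, ols_intercept = np.polyfit(x, y, 1)

# ODR: errors in both variables, weighted by the stated uncertainties.
model = odr.Model(lambda beta, x: beta[0] * x + beta[1])
data = odr.RealData(x, y, sx=0.3, sy=0.3)
output = odr.ODR(data, model, beta0=[1.0, 0.0]).run()

print("OLS:", ols_slope, ols_intercept)
print("ODR:", output.beta)   # slope, intercept
```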

  19. Least Square Approximation by Linear Combination of Exponential Functions

    OpenAIRE

    Bahman Mehri; Dariush Shadman; Sadegh Jokar

    2006-01-01

Here we are concerned with least squares approximation by linear combinations of exponential functions for given data. In this manuscript, we approximate the given data such that the approximant satisfies a differential equation. The case of nonlinear differential equations is also considered.

  20. A Newton Algorithm for Multivariate Total Least Squares Problems

    Directory of Open Access Journals (Sweden)

    WANG Leyang

    2016-04-01

In order to improve the computational efficiency of parameter estimation, an algorithm for multivariate weighted total least squares adjustment based on the Newton method is derived. The relationship between the solution of this algorithm and that of multivariate weighted total least squares adjustment based on the Lagrange multipliers method is analyzed. Based on the propagation of cofactors, 16 computational formulae for the cofactor matrices of multivariate total least squares adjustment are also listed. The new algorithm can solve adjustment problems containing correlation between the observation matrix and the coefficient matrix, and it can also deal with their stochastic and deterministic elements using only one cofactor matrix. The results illustrate that the Newton algorithm for multivariate total least squares problems is practical and has a higher convergence rate.

  1. Bibliography on total least squares and related methods

    OpenAIRE

    Markovsky, Ivan

    2010-01-01

    The class of total least squares methods has been growing since the basic total least squares method was proposed by Golub and Van Loan in the 70's. Efficient and robust computational algorithms were developed and properties of the resulting estimators were established in the errors-in-variables setting. At the same time the developed methods were applied in diverse areas, leading to broad literature on the subject. This paper collects the main references and guides the reader in finding deta...

  2. Sparse Partial Least Squares Classification for High Dimensional Data*

    OpenAIRE

    Chung, Dongjun; Keles, Sunduz

    2010-01-01

Partial least squares (PLS) is a well known dimension reduction method which has been recently adapted for high dimensional classification problems in genome biology. We develop sparse versions of the two recently proposed PLS-based classification methods using sparse partial least squares (SPLS). These sparse versions aim to achieve variable selection and dimension reduction simultaneously. We consider both binary and multicategory classification. We provide analytical and simulation-based i...

  3. A Recursive Restricted Total Least-Squares Algorithm

    OpenAIRE

    Stephan Rhode; Konstantin Usevich; Ivan Markovsky; Frank Gauterin

    2014-01-01

    We show that the generalized total least squares (GTLS) problem with a singular noise covariance matrix is equivalent to the restricted total least squares (RTLS) problem and propose a recursive method for its numerical solution. The method is based on the generalized inverse iteration. The estimation error covariance matrix and the estimated augmented correction are also characterized and computed recursively. The algorithm is cheap to compute and is suitable for online implementation. Simul...

  4. Solving regularized total least squares problems based on eigenproblems

    OpenAIRE

    Lampe, Jörg

    2010-01-01

    In the first part of the thesis we review basic knowledge of regularized least squares problems and present a significant acceleration of an existing method for the solution of trust-region problems. In the second part we present the basic theory of total least squares (TLS) problems and give an overview of possible extensions. Regularization of TLS problems by truncation and bidiagonalization approaches are briefly covered. Several approaches for solving the Tikhonov TLS problem based on ...

  5. Performance analysis of the Least-Squares estimator in Astrometry

    CERN Document Server

    Lobos, Rodrigo A; Mendez, Rene A; Orchard, Marcos

    2015-01-01

We characterize the performance of the widely used least-squares estimator in astrometry in terms of a comparison with the Cramer-Rao lower variance bound. In this inference context the performance of the least-squares estimator does not admit a closed-form expression, but a new result is presented (Theorem 1) in which both the bias and the mean-square error of the least-squares estimator are bounded and approximated analytically, in the latter case in terms of a nominal value and an interval around it. From the predicted nominal value we analyze how efficient the least-squares estimator is in comparison with the minimum variance Cramer-Rao bound. Based on our results, we show that, for the high signal-to-noise ratio regime, the performance of the least-squares estimator is significantly poorer than the Cramer-Rao bound, and we characterize this gap analytically. On the positive side, we show that for the challenging low signal-to-noise regime (attributed to either a weak astronomical signal or a noise-dominated...

  6. Regularized total least squares approach for nonconvolutional linear inverse problems.

    Science.gov (United States)

    Zhu, W; Wang, Y; Galatsanos, N P; Zhang, J

    1999-01-01

    In this correspondence, a solution is developed for the regularized total least squares (RTLS) estimate in linear inverse problems where the linear operator is nonconvolutional. Our approach is based on a Rayleigh quotient (RQ) formulation of the TLS problem, and we accomplish regularization by modifying the RQ function to enforce a smooth solution. A conjugate gradient algorithm is used to minimize the modified RQ function. As an example, the proposed approach has been applied to the perturbation equation encountered in optical tomography. Simulation results show that this method provides more stable and accurate solutions than the regularized least squares and a previously reported total least squares approach, also based on the RQ formulation. PMID:18267442

  7. Efficient Model Selection for Sparse Least-Square SVMs

    OpenAIRE

    Xiao-Lei Xia; Suxiang Qian; Xueqin Liu; Huanlai Xing

    2013-01-01

    The Forward Least-Squares Approximation (FLSA) SVM is a newly-emerged Least-Square SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independency of the support vectors which span the solution. This paper proposed a variant of the FLSA-SVM, namely, Reduced FLSA-SVM which is of reduced computational complexity and memory requirements. The strategy of “contexts inheritance” is introduced to im...

  8. Multi-source least-squares migration of marine data

    KAUST Repository

    Wang, Xin

    2012-11-04

Kirchhoff-based multi-source least-squares migration (MSLSM) is applied to marine streamer data. To suppress the crosstalk noise from the excitation of multiple sources, a dynamic encoding function (including both time-shifts and polarity changes) is applied to the receiver-side traces. Results show that the MSLSM images are of better quality than the standard Kirchhoff migration and reverse time migration images; moreover, the migration artifacts are reduced and the image resolution is significantly improved. The computational cost of MSLSM is about the same as conventional least-squares migration, but its I/O cost is significantly decreased.

  9. LSL: a logarithmic least-squares adjustment method

    International Nuclear Information System (INIS)

To meet regulatory requirements, spectral unfolding codes must not only provide reliable estimates for spectral parameters, but must also be able to determine the uncertainties associated with these parameters. The newer codes, which are more appropriately called adjustment codes, use the least squares principle to determine estimates and uncertainties. The principle is simple and straightforward, but there are several different mathematical models to describe the unfolding problem. In addition to a sound mathematical model, ease of use and range of options are important considerations in the construction of adjustment codes. Based on these considerations, a least squares adjustment code for neutron spectrum unfolding was constructed some time ago and tentatively named LSL.

  10. An Algorithm to Solve Separable Nonlinear Least Square Problem

    Directory of Open Access Journals (Sweden)

    Wajeb Gharibi

    2013-07-01

Separable Nonlinear Least Squares (SNLS) problems are a special class of Nonlinear Least Squares (NLS) problems, whose objective function is a mixture of linear and nonlinear functions. SNLS has many applications in several areas, especially in the fields of Operations Research and Computer Science. Problems in the NLS class are hard to solve in the infinity-norm metric. This paper gives a brief explanation of the SNLS problem and offers a Lagrangian-based algorithm for solving the mixed linear-nonlinear minimization problem.

  11. Computing circles and spheres of arithmetic least squares

    Science.gov (United States)

    Nievergelt, Yves

    1994-07-01

    A proof of the existence and uniqueness of L. Moura and R. Kitney's circle of least squares leads to estimates of the accuracy with which a computer can determine that circle. The result shows that the accuracy deteriorates as the correlation between the coordinates of the data points increases in magnitude. Yet a numerically more stable computation of eigenvectors yields the limiting straight line, which a further analysis reveals to be the line of total least squares. The same analysis also provides generalizations to fitting spheres in higher dimensions.
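
    For concreteness, the simplest algebraic least-squares circle fit (not necessarily Moura and Kitney's exact formulation) rewrites the circle equation so the problem becomes linear: x² + y² + Dx + Ey + F = 0 is linear in (D, E, F). A minimal sketch with synthetic data:

```python
# Minimal algebraic least-squares circle fit.
import numpy as np

def fit_circle(x, y):
    # Solve x^2 + y^2 + D x + E y + F = 0 for (D, E, F) in the LS sense.
    A = np.column_stack([x, y, np.ones_like(x)])
    b = -(x**2 + y**2)
    (D, E, F), *_ = np.linalg.lstsq(A, b, rcond=None)
    cx, cy = -D / 2.0, -E / 2.0
    radius = np.sqrt(cx**2 + cy**2 - F)
    return cx, cy, radius

# Toy usage: noisy points on a circle of radius 2 centered at (1, -1).
rng = np.random.default_rng(7)
t = rng.uniform(0.0, 2.0 * np.pi, 100)
x = 1.0 + 2.0 * np.cos(t) + rng.normal(scale=0.05, size=t.size)
y = -1.0 + 2.0 * np.sin(t) + rng.normal(scale=0.05, size=t.size)
print(fit_circle(x, y))   # near (1.0, -1.0, 2.0)
```

    As the abstract notes, this linear formulation inherits the conditioning of the underlying system, so its accuracy degrades as the coordinates of the data points become strongly correlated.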

  12. Sparse least-squares reverse time migration using seislets

    KAUST Repository

    Dutta, Gaurav

    2015-08-19

    We propose sparse least-squares reverse time migration (LSRTM) using seislets as a basis for the reflectivity distribution. This basis is used along with a dip-constrained preconditioner that emphasizes image updates only along prominent dips during the iterations. These dips can be estimated from the standard migration image or from the gradient using plane-wave destruction filters or structural tensors. Numerical tests on synthetic datasets demonstrate the benefits of this method for mitigation of aliasing artifacts and crosstalk noise in multisource least-squares migration.

  13. Uniqueness of Minima of a Certain Least Squares Problem

    OpenAIRE

    Nohra, Jad

    2016-01-01

    This paper is essentially an exercise in studying the minima of a certain least squares optimization using the second partial derivative test. The motivation is to gain insight into an optimization-based solution to the problem of tracking human limbs using IMU sensors.

  14. Spectral Condition Numbers of Full Rank Linear Least Squares Solutions

    CERN Document Server

    Grcar, Joseph F

    2010-01-01

    The condition number of the linear least squares solution depends on three independent quantities each of which can cause ill-conditioning. The numerical linear algebra literature presents several derivations of condition numbers with varying results, even among popular textbooks. This paper explains the variations and shows how to determine condition numbers with certainty by directly evaluating norms for Jacobian matrices.

  15. Multivariate calibration with least-squares support vector machines.

    NARCIS (Netherlands)

    Thissen, U.M.J.; Ustun, B.; Melssen, W.J.; Buydens, L.M.C.

    2004-01-01

This paper proposes the use of least-squares support vector machines (LS-SVMs) as a relatively new nonlinear multivariate calibration method, capable of dealing with ill-posed problems. LS-SVMs are an extension of "traditional" SVMs that have been introduced recently in the field of chemistry and chemometrics.

  16. Plane-wave Least-squares Reverse Time Migration

    KAUST Repository

    Dai, Wei

    2012-11-04

Least-squares reverse time migration is formulated with a new parameterization, where the migration image of each shot is updated separately and a prestack image is produced with common image gathers. The advantage is that it can offer stable convergence for least-squares migration even when the migration velocity is not completely accurate. To significantly reduce the computation cost, linear phase shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. A regularization term which penalizes the image difference between nearby angles is used to keep the prestack image consistent through all the angles. Numerical tests on a marine dataset are performed to illustrate the advantages of least-squares reverse time migration in the plane-wave domain. Through iterations of least-squares migration, the migration artifacts are reduced and the image resolution is improved. Empirical results suggest that LSRTM in the plane-wave domain is an efficient method to improve the image quality and produce common image gathers.

  17. NON-PARAMETRIC LEAST SQUARE ESTIMATION OF DISTRIBUTION FUNCTION

    Institute of Scientific and Technical Information of China (English)

    Chai Genxiang; Hua Hong; Shang Hanji

    2002-01-01

    By using the non-parametric least squares method, strongly consistent estimations of the distribution function and failure function are established, where the distribution function F(x), after a logistic transformation, is assumed to be well approximated by a polynomial. Simulation results show that the estimations are highly satisfactory.
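
    As a minimal sketch of this idea (our illustration, not the authors' code: the plotting-position empirical CDF, cubic polynomial degree, and simulated Weibull lifetimes are all assumptions):

        import numpy as np

        # Simulated lifetimes and an empirical CDF from plotting positions.
        rng = np.random.default_rng(0)
        x = np.sort(rng.weibull(1.5, size=500))
        F_emp = (np.arange(1, x.size + 1) - 0.5) / x.size

        # Logit transform of the empirical CDF, then a least-squares polynomial fit.
        z = np.log(F_emp / (1.0 - F_emp))
        coef = np.polynomial.polynomial.polyfit(x, z, deg=3)
        z_hat = np.polynomial.polynomial.polyval(x, coef)

        # Map back through the logistic function to get the estimates.
        F_hat = 1.0 / (1.0 + np.exp(-z_hat))   # distribution function estimate
        S_hat = 1.0 - F_hat                    # complementary (survival) estimate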

  18. Preconditioned Iterative Methods for Solving Weighted Linear Least Squares Problems

    Czech Academy of Sciences Publication Activity Database

    Bru, R.; Marín, J.; Mas, J.; Tůma, Miroslav

    2014-01-01

    Roč. 36, č. 4 (2014), A2002-A2022. ISSN 1064-8275 Institutional support: RVO:67985807 Keywords : preconditioned iterative methods * incomplete decompositions * approximate inverses * linear least squares Subject RIV: BA - General Mathematics Impact factor: 1.854, year: 2014

  19. Least-squares variance component estimation: theory and GPS applications

    NARCIS (Netherlands)

    Amiri-Simkooei, A.

    2007-01-01

    In this thesis we study the method of least-squares variance component estimation (LS-VCE) and elaborate on theoretical and practical aspects of the method. We show that LS-VCE is a simple, flexible, and attractive VCE-method. The LS-VCE method is simple because it is based on the well-known princip

  20. Parallel block schemes for large scale least squares computations

    Energy Technology Data Exchange (ETDEWEB)

    Golub, G.H.; Plemmons, R.J.; Sameh, A.

    1986-04-01

    Large scale least squares computations arise in a variety of scientific and engineering problems, including geodetic adjustments and surveys, medical image analysis, molecular structures, partial differential equations and substructuring methods in structural engineering. In each of these problems, matrices often arise which possess a block structure which reflects the local connection nature of the underlying physical problem. For example, such super-large nonlinear least squares computations arise in geodesy. Here the coordinates of positions are calculated by iteratively solving overdetermined systems of nonlinear equations by the Gauss-Newton method. The US National Geodetic Survey will complete this year (1986) the readjustment of the North American Datum, a problem which involves over 540 thousand unknowns and over 6.5 million observations (equations). The observation matrix for these least squares computations has a block angular form with 161 diagonal blocks, each containing 3 to 4 thousand unknowns. In this paper parallel schemes are suggested for the orthogonal factorization of matrices in block angular form and for the associated backsubstitution phase of the least squares computations. In addition, a parallel scheme for the calculation of certain elements of the covariance matrix for such problems is described. It is shown that these algorithms are ideally suited for multiprocessors with three levels of parallelism such as the Cedar system at the University of Illinois. 20 refs., 7 figs.

  1. A Genetic Algorithm Approach to Nonlinear Least Squares Estimation

    Science.gov (United States)

    Olinsky, Alan D.; Quinn, John T.; Mangiameli, Paul M.; Chen, Shaw K.

    2004-01-01

    A common type of problem encountered in mathematics is optimizing nonlinear functions. Many popular algorithms that are currently available for finding nonlinear least squares estimators, a special class of nonlinear problems, are sometimes inadequate. They might not converge to an optimal value, or if they do, it could be to a local rather than…

  2. SAS Partial Least Squares (PLS) for Discriminant Analysis

    Science.gov (United States)

    The objective of this work was to implement discriminant analysis using SAS partial least squares (PLS) regression for analysis of spectral data. This was done in combination with previous efforts which implemented data pre-treatments including scatter correction, derivatives, mean centering, and v...

  3. Least Squares Based and Two-Stage Least Squares Based Iterative Estimation Algorithms for H-FIR-MA Systems

    OpenAIRE

    Zhenwei Shi; Zhicheng Ji

    2015-01-01

    This paper studies the identification of Hammerstein finite impulse response moving average (H-FIR-MA for short) systems. A new two-stage least squares iterative algorithm is developed to identify the parameters of the H-FIR-MA systems. The simulation cases indicate the efficiency of the proposed algorithms.

  4. Wave-equation Q tomography and least-squares migration

    KAUST Repository

    Dutta, Gaurav

    2016-03-01

    This thesis designs new methods for Q tomography and Q-compensated prestack depth migration when the recorded seismic data suffer from strong attenuation. A motivation of this work is that the presence of gas clouds or mud channels in overburden structures leads to the distortion of amplitudes and phases in seismic waves propagating inside the earth. If the attenuation parameter Q is very strong, i.e., Q<30, ignoring the anelastic effects in imaging can lead to dimming of migration amplitudes and loss of resolution. This, in turn, adversely affects the ability to accurately predict reservoir properties below such layers. To mitigate this problem, I first develop an anelastic least-squares reverse time migration (Q-LSRTM) technique. I reformulate the conventional acoustic least-squares migration problem as a viscoacoustic linearized inversion problem. Using linearized viscoacoustic modeling and adjoint operators during the least-squares iterations, I show with numerical tests that Q-LSRTM can compensate for the amplitude loss and produce images with better balanced amplitudes than conventional migration. To estimate the background Q model that can be used for any Q-compensating migration algorithm, I then develop a wave-equation based optimization method that inverts for the subsurface Q distribution by minimizing a skeletonized misfit function ε. Here, ε is the sum of the squared differences between the observed and the predicted peak/centroid-frequency shifts of the early-arrivals. Through numerical tests on synthetic and field data, I show that noticeable improvements in the migration image quality can be obtained from Q models inverted using wave-equation Q tomography. A key feature of skeletonized inversion is that it is much less likely to get stuck in a local minimum than a standard waveform inversion method. Finally, I develop a preconditioning technique for least-squares migration using a directional Gabor-based preconditioning approach for isotropic

  5. Weighted discrete least-squares polynomial approximation using randomized quadratures

    Science.gov (United States)

    Zhou, Tao; Narayan, Akil; Xiu, Dongbin

    2015-10-01

    We discuss the problem of polynomial approximation of multivariate functions using discrete least squares collocation. The problem stems from uncertainty quantification (UQ), where the independent variables of the functions are random variables with specified probability measure. We propose to construct the least squares approximation on points randomly and uniformly sampled from tensor product Gaussian quadrature points. We analyze the stability properties of this method and prove that the method is asymptotically stable, provided that the number of points scales linearly (up to a logarithmic factor) with the cardinality of the polynomial space. Specific results in both bounded and unbounded domains are obtained, along with a convergence result for Chebyshev measure. Numerical examples are provided to verify the theoretical results.
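
    A one-dimensional sketch of this sampling strategy, under our own reading of the abstract (Legendre basis on [-1, 1]; the oversampling factor, grid size, and test function are illustrative assumptions):

        import numpy as np

        rng = np.random.default_rng(1)
        f = lambda x: np.exp(x) * np.sin(4 * x)   # test function on [-1, 1]

        deg = 10                       # polynomial degree; cardinality = deg + 1
        n_quad = 200                   # size of the underlying quadrature grid
        n_pts = 4 * (deg + 1)          # points scale linearly with cardinality

        # Sample uniformly from the Gauss-Legendre quadrature points.
        grid, _ = np.polynomial.legendre.leggauss(n_quad)
        pts = rng.choice(grid, size=n_pts, replace=True)

        # Discrete least squares in the Legendre basis.
        V = np.polynomial.legendre.legvander(pts, deg)
        coef, *_ = np.linalg.lstsq(V, f(pts), rcond=None)

        # Error of the least-squares approximant on a dense grid.
        xs = np.linspace(-1, 1, 1000)
        print(np.max(np.abs(f(xs) - np.polynomial.legendre.legval(xs, coef))))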

  6. Moving least-squares corrections for smoothed particle hydrodynamics

    Directory of Open Access Journals (Sweden)

    Ciro Del Negro

    2011-12-01

    First-order moving least-squares are typically used in conjunction with smoothed particle hydrodynamics in the form of post-processing filters for density fields, to smooth out noise that develops in most applications of smoothed particle hydrodynamics. We show how an approach based on higher-order moving least-squares can be used to correct some of the main limitations in gradient and second-order derivative computation in classic smoothed particle hydrodynamics formulations. With a small increase in computational cost, we manage to achieve smooth density distributions without the need for post-processing and with higher accuracy in the computation of the viscous term of the Navier–Stokes equations, thereby reducing the formation of spurious shockwaves or other streaming effects in the evolution of fluid flow. Numerical tests on a classic two-dimensional dam-break problem confirm the improvement of the new approach.
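
    The generic moving least-squares idea behind such corrections can be sketched in one dimension: fit a local weighted quadratic at each evaluation point and read the value and derivatives off the Taylor-basis coefficients. This is a hedged illustration only; the kernel, bandwidth, and test data are assumptions, and the authors' SPH-specific correction is not reproduced:

        import numpy as np

        def mls_eval(x_nodes, f_nodes, x_eval, h=0.2):
            """Second-order MLS: returns [f, f', f''] at each evaluation point."""
            out = np.empty((x_eval.size, 3))
            for i, x0 in enumerate(x_eval):
                dx = x_nodes - x0
                w = np.exp(-(dx / h) ** 2)                 # Gaussian weight kernel
                B = np.column_stack([np.ones_like(dx), dx, 0.5 * dx ** 2])
                sw = np.sqrt(w)                            # weighted LS via row scaling
                out[i] = np.linalg.lstsq(sw[:, None] * B, sw * f_nodes, rcond=None)[0]
            return out

        rng = np.random.default_rng(9)
        x_nodes = np.sort(rng.uniform(-1, 1, 200))   # scattered "particle" positions
        f_nodes = np.sin(3 * x_nodes)
        print(mls_eval(x_nodes, f_nodes, np.array([0.0, 0.5])))
        # compare with sin(3x), 3 cos(3x), -9 sin(3x) at x = 0 and x = 0.5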

  7. Linearized least-square imaging of internally scattered data

    KAUST Repository

    Aldawood, Ali

    2014-01-01

    Internal multiples deteriorate the quality of the migrated image obtained conventionally by imaging single-scattering energy. However, imaging internal multiples properly has the potential to enhance the migrated image because they illuminate zones in the subsurface that are poorly illuminated by single-scattering energy, such as nearly vertical faults. Standard migration of these multiples provides subsurface reflectivity distributions with low spatial resolution and migration artifacts due to the limited recording aperture, coarse source and receiver sampling, and the band-limited nature of the source wavelet. Hence, we apply a linearized least-square inversion scheme to mitigate the effect of the migration artifacts, enhance the spatial resolution, and provide more accurate amplitude information when imaging internal multiples. Application to synthetic data demonstrated the effectiveness of the proposed inversion in imaging a reflector that is poorly illuminated by single-scattering energy. The least-square inversion of double-scattered data helped delineate that reflector with minimal acquisition fingerprint.

  8. Speckle reduction by phase-based weighted least squares.

    Science.gov (United States)

    Zhu, Lei; Wang, Weiming; Qin, Jing; Heng, Pheng-Ann

    2014-01-01

    Although ultrasonography has been widely used in clinical applications, clinicians face great difficulties in diagnosis due to the artifacts of ultrasound images, especially speckle noise. This paper proposes a novel framework for speckle reduction by using a phase-based weighted least squares optimization. The proposed approach can effectively smooth out speckle noise while preserving the features in the image, e.g., edges with different contrasts. To this end, we first employ a local phase-based measure, which is theoretically intensity-invariant, to extract the edge map from the input image. The edge map is then incorporated into the weighted least squares framework to supervise the optimization during despeckling, so that low-contrast edges are retained while the noise is greatly removed. Experimental results on synthetic and clinical ultrasound images demonstrate that our approach performs better than state-of-the-art methods. PMID:25570846

  9. Source allocation by least-squares hydrocarbon fingerprint matching

    Energy Technology Data Exchange (ETDEWEB)

    William A. Burns; Stephen M. Mudge; A. Edward Bence; Paul D. Boehm; John S. Brown; David S. Page; Keith R. Parker [W.A. Burns Consulting Services LLC, Houston, TX (United States)

    2006-11-01

    There has been much controversy regarding the origins of the natural polycyclic aromatic hydrocarbon (PAH) and chemical biomarker background in Prince William Sound (PWS), Alaska, site of the 1989 Exxon Valdez oil spill. Different authors have attributed the sources to various proportions of coal, natural seep oil, shales, and stream sediments. The different probable bioavailabilities of hydrocarbons from these various sources can affect environmental damage assessments from the spill. This study compares two different approaches to source apportionment with the same data (136 PAHs and biomarkers) and investigates whether increasing the number of coal source samples from one to six increases coal attributions. The constrained least-squares (CLS) source allocation method, which fits concentrations, meets geologic and chemical constraints better than partial least-squares (PLS), which predicts variance. The field data set was expanded to include coal samples reported by others, and CLS fits confirm earlier findings of low coal contributions to PWS. 15 refs., 5 figs.

  10. CONDITION NUMBER FOR WEIGHTED LINEAR LEAST SQUARES PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Yimin Wei; Huaian Diao; Sanzheng Qiao

    2007-01-01

    In this paper, we investigate the condition numbers for the generalized matrix inversion and the rank-deficient linear least squares problem $\min_x \|Ax-b\|_2$, where $A$ is an $m \times n$ ($m \geq n$) rank-deficient matrix. We first derive an explicit expression for the condition number in the weighted Frobenius norm $\|[AT, \beta b]\|_F$ of the data $A$ and $b$, where $T$ is a positive diagonal matrix and $\beta$ is a positive scalar. We then discuss the sensitivity of the standard 2-norm condition numbers for the generalized matrix inversion and rank-deficient least squares, and establish relations between these condition numbers and their so-called level-2 condition numbers.

  11. Least Squares Shadowing for Sensitivity Analysis of Turbulent Fluid Flows

    CERN Document Server

    Blonigan, Patrick; Wang, Qiqi

    2014-01-01

    Computational methods for sensitivity analysis are invaluable tools for aerodynamics research and engineering design. However, traditional sensitivity analysis methods break down when applied to long-time averaged quantities in turbulent fluid flow fields, specifically those obtained using high-fidelity turbulence simulations. This is because of a number of dynamical properties of turbulent and chaotic fluid flows, most importantly the high sensitivity of the initial value problem, popularly known as the "butterfly effect". The recently developed least squares shadowing (LSS) method avoids the issues encountered by traditional sensitivity analysis methods by approximating the "shadow trajectory" in phase space, avoiding the high sensitivity of the initial value problem. The following paper discusses how the least squares problem associated with LSS is solved. Two methods are presented and are demonstrated on a simulation of homogeneous isotropic turbulence and the Kuramoto-Sivashinsky (KS) equation, a 4th order c...

  12. On the computation of the structured total least squares estimator

    OpenAIRE

    I. Markovsky; Van Huffel, S.; Kukush, A.

    2004-01-01

    A class of structured total least squares problems is considered, in which the extended data matrix is partitioned into blocks and each of the blocks is (block) Toeplitz/Hankel structured, unstructured, or noise free. We describe the implementation of two types of numerical solution methods for this problem: i) standard local optimization methods in combination with efficient evaluation of the cost function and its gradient, and ii) an iterative procedure proposed originally for the element-w...

  13. Block-Toeplitz/Hankel structured total least squares

    OpenAIRE

    I. Markovsky; Van Huffel, S.; Pintelon, R.

    2005-01-01

    A multivariate structured total least squares problem is considered, in which the extended data matrix is partitioned into blocks and each of the blocks is block-Toeplitz/Hankel structured, unstructured, or noise free. An equivalent optimization problem is derived and its properties are established. The special structure of the equivalent problem makes it possible to improve the computational efficiency of the numerical solution via local optimization methods. By exploiting the structure, the computati...

  14. Least-squares inversion for density-matrix reconstruction

    OpenAIRE

    Opatrny, T.; Welsch, D. -G.; Vogel, W.

    1997-01-01

    We propose a method for reconstruction of the density matrix from measurable time-dependent (probability) distributions of physical quantities. The applicability of the method, which is based on least-squares inversion, is very universal compared with other methods. It can be used to reconstruct quantum states of various systems, such as harmonic and anharmonic oscillators, including molecular vibrations in vibronic transitions and damped motion. It also enables one to take into account various s...

  15. Single Directional SMO Algorithm for Least Squares Support Vector Machines

    OpenAIRE

    Xigao Shao; Kun Wu; Bifeng Liao

    2013-01-01

    Working set selection is a major step in decomposition methods for training least squares support vector machines (LS-SVMs). In this paper, a new technique for the selection of working set in sequential minimal optimization- (SMO-) type decomposition methods is proposed. By the new method, we can select a single direction to achieve the convergence of the optimality condition. A simple asymptotic convergence proof for the new algorithm is given. Experimental comparisons demonstrate that the c...

  16. Least-Square Conformal Brain Mapping with Spring Energy

    OpenAIRE

    Nie, Jingxin; Liu, Tianming; Li, Gang; Young, Geoffrey; Tarokh, Ashley; Guo, Lei; Wong, Stephen TC

    2007-01-01

    The human brain cortex is a highly convoluted sheet. Mapping of the cortical surface into a canonical coordinate space is an important tool for the study of the structure and function of the brain. Here, we present a technique based on least-square conformal mapping with spring energy for the mapping of the cortical surface. This method aims to reduce the metric and area distortion while maintaining the conformal map and computation efficiency. We demonstrate through numerical results that th...

  17. Multisplitting for linear, least squares and nonlinear problems

    Energy Technology Data Exchange (ETDEWEB)

    Renaut, R.

    1996-12-31

    In earlier work, presented at the 1994 Iterative Methods meeting, a multisplitting (MS) method of block relaxation type was utilized for the solution of the least squares problem, and nonlinear unconstrained problems. This talk will focus on recent developments of the general approach and represents joint work both with Andreas Frommer, University of Wuppertal for the linear problems and with Hans Mittelmann, Arizona State University for the nonlinear problems.

  18. MODIFIED LEAST SQUARE METHOD ON COMPUTING DIRICHLET PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    The singularity theory of dynamical systems is linked to the numerical computation of boundary value problems of differential equations. It turns out to be a modified least squares method for the calculation of a variational problem defined on $C^k(\Omega)$, in which the basis functions are polynomials and the computation of the problem is transferred to computing the coefficients of the basis functions. The theoretical treatment and some simple examples are provided for understanding the modification procedure of the metho...

  19. REGRESSION CURVE ESTIMATION FOR LONGITUDINAL DATA USING WEIGHTED LEAST SQUARES

    OpenAIRE

    Ragil P., Dian

    2014-01-01

    The varying-coefficient model for longitudinal data is studied in this proposal. The relationship between the response variable and the predictors is assumed to be linear at any given time, but the coefficients change over time. A spline estimator based on weighted least squares (WLS) is used to estimate the regression curve of the varying-coefficient model. Generalized Cross-Validation (GCV) is used to select the optimal knot points. The application in this proposal uses the ACTG data, namely the relationship...

  20. A Novel Fault Classification Scheme Based on Least Square SVM

    OpenAIRE

    Dubey, Harishchandra; Tiwari, A. K.; Nandita; Ray, P. K.; Mohanty, S. R.; Kishor, Nand

    2016-01-01

    This paper presents a novel approach for fault classification and section identification in a series-compensated transmission line based on the least squares support vector machine. The current signal corresponding to one-fourth of the post-fault cycle is used as input to the proposed modular LS-SVM classifier. The proposed scheme uses four binary classifiers: three for the selection of the three phases and a fourth for ground detection. The proposed classification scheme is found to be accurate and reliable in ...

  1. An Efficient Inexact ABCD Method for Least Squares Semidefinite Programming

    OpenAIRE

    Sun, Defeng; Toh, Kim-Chuan; Yang, Liuqin

    2015-01-01

    We consider least squares semidefinite programming (LSSDP) where the primal matrix variable must satisfy given linear equality and inequality constraints, and must also lie in the intersection of the cone of symmetric positive semidefinite matrices and a simple polyhedral set. We propose an inexact accelerated block coordinate descent (ABCD) method for solving LSSDP via its dual, which can be reformulated as a convex composite minimization problem whose objective is the sum of a coupled quadr...

  2. River flow time series using least squares support vector machines

    OpenAIRE

    R. Samsudin; P. Saad; A. Shabri

    2011-01-01

    This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables for the LSSVM time series forecasting model. Monthly river flow data from two stations, the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia, were taken into consideration in the development of this hybrid model. The perform...

  3. Multilevel first-order system least squares for PDEs

    Energy Technology Data Exchange (ETDEWEB)

    McCormick, S.

    1994-12-31

    The purpose of this talk is to analyze the least-squares finite element method for second-order convection-diffusion equations written as a first-order system. In general, standard Galerkin finite element methods applied to non-self-adjoint elliptic equations with significant convection terms exhibit a variety of deficiencies, including oscillations or nonmonotonicity of the solution and poor approximation of its derivatives. A variety of stabilization techniques, such as up-winding, Petrov-Galerkin, and stream-line diffusion approximations, have been introduced to eliminate these and other drawbacks of standard Galerkin methods. Yet, although significant progress has been made, convection-diffusion problems remain among the more difficult problems to solve numerically. The first-order system least-squares approach promises to overcome these deficiencies. This talk develops ellipticity estimates and discretization error bounds for elliptic equations (with lower-order terms) that are reformulated as a least-squares problem for an equivalent first-order system. The main results are the proofs of ellipticity and optimal convergence of multiplicative and additive solvers of the discrete systems.

  4. Partial least squares Cox regression for genome-wide data.

    Science.gov (United States)

    Nygård, Ståle; Borgan, Ornulf; Lingjaerde, Ole Christian; Størvold, Hege Leite

    2008-06-01

    Most methods for survival prediction from high-dimensional genomic data combine the Cox proportional hazards model with some technique of dimension reduction, such as partial least squares regression (PLS). Applying PLS to the Cox model is not entirely straightforward, and multiple approaches have been proposed. The method of Park et al. (Bioinformatics 18(Suppl. 1):S120-S127, 2002) uses a reformulation of the Cox likelihood to a Poisson type likelihood, thereby enabling estimation by iteratively reweighted partial least squares for generalized linear models. We propose a modification of the method of Park et al. (2002) such that estimates of the baseline hazard and the gene effects are obtained in separate steps. The resulting method has several advantages over the method of Park et al. (2002) and other existing Cox PLS approaches, as it allows for estimation of survival probabilities for new patients, enables a less memory-demanding estimation procedure, and allows for incorporation of lower-dimensional non-genomic variables like disease grade and tumor thickness. We also propose to combine our Cox PLS method with an initial gene selection step in which genes are ordered by their Cox score and only the highest-ranking k% of the genes are retained, obtaining a so-called supervised partial least squares regression method. In simulations, both the unsupervised and the supervised version outperform other Cox PLS methods. PMID:18188699

  5. Solving linear inequalities in a least squares sense

    Energy Technology Data Exchange (ETDEWEB)

    Bramley, R.; Winnicka, B. [Indiana Univ., Bloomington, IN (United States)

    1994-12-31

    Let $A \in \mathbb{R}^{m \times n}$ be an arbitrary real matrix, and let $b \in \mathbb{R}^m$ be a given vector. A familiar problem in computational linear algebra is to solve the system $Ax = b$ in a least squares sense; that is, to find an $x^*$ minimizing $\|Ax - b\|$, where $\|\cdot\|$ refers to the vector two-norm. Such an $x^*$ solves the normal equations $A^T(Ax - b) = 0$, and the optimal residual $r^* = b - Ax^*$ is unique (although $x^*$ need not be). The least squares problem is usually interpreted as corresponding to multiple observations, represented by the rows of $A$ and $b$, on a vector of data $x$. The observations may be inconsistent, and in this case a solution is sought that minimizes the norm of the residuals. A less familiar problem to numerical linear algebraists is the solution of systems of linear inequalities $Ax \le b$ in a least squares sense, but the motivation is similar: if a set of observations places upper or lower bounds on linear combinations of variables, the authors want to find $x^*$ minimizing $\|(Ax - b)_+\|$, where the $i$th component of the vector $v_+$ is the maximum of zero and the $i$th component of $v$.
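
    A minimal gradient-descent sketch of this formulation (the data and the solver are illustrative assumptions; the authors' actual method is not described in this excerpt):

        import numpy as np

        def lsq_inequalities(A, b, iters=5000):
            """Find x minimizing ||(Ax - b)_+||, i.e. satisfy Ax <= b as nearly as possible."""
            x = np.zeros(A.shape[1])
            step = 1.0 / np.linalg.norm(A, 2) ** 2   # safe step: 1 / ||A||_2^2
            for _ in range(iters):
                r = np.maximum(A @ x - b, 0.0)       # only violated inequalities contribute
                g = A.T @ r                          # gradient of 0.5 * ||(Ax - b)_+||^2
                if np.linalg.norm(g) < 1e-12:
                    break
                x -= step * g
            return x

        A = np.array([[1.0, 1.0], [-1.0, 2.0], [3.0, -1.0]])
        b = np.array([2.0, 1.0, -4.0])               # deliberately inconsistent bounds
        x_star = lsq_inequalities(A, b)
        print(x_star, np.maximum(A @ x_star - b, 0.0))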

  6. Least-squares framework for projection MRI reconstruction

    Science.gov (United States)

    Gregor, Jens; Rannou, Fernando

    2001-07-01

    Magnetic resonance signals that have very short relaxation times are conveniently sampled in a spherical fashion. We derive a least squares framework for reconstructing three-dimensional source distribution images from such data. Using a finite-series approach, the image is represented as a weighted sum of translated Kaiser-Bessel window functions. The Radon transform thereof establishes the connection with the projection data that one can obtain from the radial sampling trajectories. The resulting linear system of equations is sparse, but quite large. To reduce the size of the problem, we introduce focus of attention. Based on the theory of support functions, this data-driven preprocessing scheme eliminates equations and unknowns that merely represent the background. The image reconstruction and the focus of attention both require a least squares solution to be computed. We describe a projected gradient approach that facilitates a non-negativity constrained version of the powerful LSQR algorithm. In order to ensure reasonable execution times, the least squares computation can be distributed across a network of PCs and/or workstations. We discuss how to effectively parallelize the NN-LSQR algorithm. We close by presenting results from experimental work that addresses both computational issues and image quality using a mathematical phantom.

  7. Multi-source least-squares reverse time migration

    KAUST Repository

    Dai, Wei

    2012-06-15

    Least-squares migration has been shown to improve image quality compared to the conventional migration method, but its computational cost is often too high to be practical. In this paper, we develop two numerical schemes to implement least-squares migration with the reverse time migration method and the blended source processing technique to increase computational efficiency. By iterative migration of supergathers, which consist of a sum of many phase-encoded shots, the image quality is enhanced and the crosstalk noise associated with the encoded shots is reduced. Numerical tests on 2D HESS VTI data show that the multisource least-squares reverse time migration (LSRTM) algorithm suppresses migration artefacts, balances the amplitudes, improves image resolution and reduces crosstalk noise associated with the blended shot gathers. For this example, the multisource LSRTM is about three times faster than the conventional RTM method. For the 3D example of the SEG/EAGE salt model, with a comparable computational cost, multisource LSRTM produces images with more accurate amplitudes, better spatial resolution and fewer migration artefacts compared to conventional RTM. The empirical results suggest that multisource LSRTM can produce more accurate reflectivity images than conventional RTM does with a similar or lower computational cost. The caveat is that the LSRTM image is sensitive to large errors in the migration velocity model. © 2012 European Association of Geoscientists & Engineers.

  8. Risk and Management Control: A Partial Least Square Modelling Approach

    DEFF Research Database (Denmark)

    Nielsen, Steen; Pontoppidan, Iens Christian

    ... and interrelations between risk and areas within management accounting. The idea is that management accounting should be able to conduct a valid feed-forward analysis but also predictions for decision making including risk. This study reports the test of a theoretical model using partial least squares (PLS) on survey data ... and an external attitude dimension. The results have important implications both for management control research and for management control systems design, in the way accountants consider the element of risk in their different tasks, both operational and strategic. Specifically, it seems that different risk ...

  9. MULTI-RESOLUTION LEAST SQUARES SUPPORT VECTOR MACHINES

    Institute of Scientific and Technical Information of China (English)

    2007-01-01

    The Least Squares Support Vector Machine (LS-SVM) is an improvement on the SVM. Combining the LS-SVM with Multi-Resolution Analysis (MRA), this letter proposes the Multi-resolution LS-SVM (MLS-SVM). The proposed algorithm has the same theoretical framework as MRA but with better approximation ability. At a fixed scale the MLS-SVM is a classical LS-SVM, but the MLS-SVM can gradually approximate the target function at different scales. In experiments, the MLS-SVM is used for nonlinear system identification and achieves better identification accuracy.

  10. Least square estimation of phase, frequency and PDEV

    CERN Document Server

    Danielson, Magnus; Rubiola, Enrico

    2016-01-01

    The Omega-preprocessing was introduced to improve phase noise rejection by using a least squares algorithm. The associated variance is the PVAR, which is more efficient than MVAR at separating the different noise types. However, unlike AVAR and MVAR, the decimation of PVAR estimates for multi-tau analysis is not possible if each counter measurement is a single scalar. This paper gives a decimation rule based on two scalars, the processing blocks, for each measurement. For the Omega-preprocessing, this implies the definition of an output standard as well as hardware requirements for performing high-speed computations of the blocks.

  11. Least Squares Shadowing method for sensitivity analysis of differential equations

    OpenAIRE

    Chater, Mario; Ni, Angxiu; Blonigan, Patrick J.; Wang, Qiqi

    2015-01-01

    For a parameterized hyperbolic system $\frac{du}{dt}=f(u,s)$, the derivative of the ergodic average $\langle J \rangle = \lim_{T \to \infty}\frac{1}{T}\int_0^T J(u(t),s)\,dt$ with respect to the parameter $s$ can be computed via the Least Squares Shadowing algorithm (LSS). We assume that the system is ergodic, which means that $\langle J \rangle$ depends only on $s$ (not on the initial condition of the hyperbolic system). After discretizing this continuous system using a fixed timestep, the algorithm solves a co...

  12. Handbook of Partial Least Squares Concepts, Methods and Applications

    CERN Document Server

    Vinzi, Vincenzo Esposito; Henseler, Jörg

    2010-01-01

    This handbook provides a comprehensive overview of Partial Least Squares (PLS) methods with specific reference to their use in marketing and with a discussion of the directions of current research and perspectives. It covers the broad area of PLS methods, from regression to structural equation modeling applications, software and interpretation of results. The handbook serves both as an introduction for those without prior knowledge of PLS and as a comprehensive reference for researchers and practitioners interested in the most recent advances in PLS methodology.

  13. Classification using least squares support vector machine for reliability analysis

    Institute of Scientific and Technical Information of China (English)

    Zhi-wei GUO; Guang-chen BAI

    2009-01-01

    In order to improve the efficiency of the support vector machine (SVM) for classification when dealing with a large number of samples, the least squares support vector machine (LSSVM) for classification is introduced into the reliability analysis. To reduce the computational cost, the solution of the SVM is transformed from a quadratic programming problem to a group of linear equations. The numerical results indicate that the reliability method based on the LSSVM for classification has higher accuracy and requires less computational cost than the SVM method.
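
    The reduction from a quadratic program to a set of linear equations is the defining step of the LS-SVM; a compact sketch of the standard LS-SVM classifier (Suykens' formulation) with illustrative data and hyperparameters:

        import numpy as np

        def rbf_kernel(X1, X2, sigma=1.0):
            d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
            return np.exp(-d2 / (2.0 * sigma ** 2))

        def lssvm_train(X, y, gamma=10.0, sigma=1.0):
            n = X.shape[0]
            Omega = (y[:, None] * y[None, :]) * rbf_kernel(X, X, sigma)
            # One linear system replaces the SVM quadratic program:
            # [[0, y^T], [y, Omega + I/gamma]] [b; alpha] = [0; 1]
            A = np.zeros((n + 1, n + 1))
            A[0, 1:] = y
            A[1:, 0] = y
            A[1:, 1:] = Omega + np.eye(n) / gamma
            sol = np.linalg.solve(A, np.concatenate(([0.0], np.ones(n))))
            return sol[0], sol[1:]                    # bias b, multipliers alpha

        def lssvm_predict(X, y, alpha, b, Xnew, sigma=1.0):
            return np.sign(rbf_kernel(Xnew, X, sigma) @ (alpha * y) + b)

        rng = np.random.default_rng(2)
        X = rng.normal(size=(40, 2))
        y = np.where(X[:, 0] + X[:, 1] > 0, 1.0, -1.0)
        b, alpha = lssvm_train(X, y)
        print((lssvm_predict(X, y, alpha, b, X) == y).mean())  # training accuracy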

  14. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach

    Energy Technology Data Exchange (ETDEWEB)

    Haddad, Khaled [School of Computing, Engineering and Mathematics, University of Western Sydney, Building XB, Locked Bag 1797, Penrith, NSW 2751 (Australia); Egodawatta, Prasanna [Science and Engineering Faculty, Queensland University of Technology, GPO Box 2434, Brisbane 4001 (Australia); Rahman, Ataur [School of Computing, Engineering and Mathematics, University of Western Sydney, Building XB, Locked Bag 1797, Penrith, NSW 2751 (Australia); Goonetilleke, Ashantha, E-mail: a.goonetilleke@qut.edu.au [Science and Engineering Faculty, Queensland University of Technology, GPO Box 2434, Brisbane 4001 (Australia)

    2013-04-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. - Highlights: ► Water quality data spans short time scales, leading to significant model uncertainty. ► Assessment of uncertainty is essential for informed decision making in water...

  15. A least-squares framework for Component Analysis.

    Science.gov (United States)

    De la Torre, Fernando

    2012-06-01

    Over the last century, Component Analysis (CA) methods such as Principal Component Analysis (PCA), Linear Discriminant Analysis (LDA), Canonical Correlation Analysis (CCA), Locality Preserving Projections (LPP), and Spectral Clustering (SC) have been extensively used as a feature extraction step for modeling, classification, visualization, and clustering. CA techniques are appealing because many can be formulated as eigen-problems, offering great potential for learning linear and nonlinear representations of data in closed form. However, the eigen-formulation often conceals important analytic and computational drawbacks of CA techniques, such as solving generalized eigen-problems with rank-deficient matrices (e.g., the small sample size problem), lacking intuitive interpretation of normalization factors, and understanding commonalities and differences between CA methods. This paper proposes a unified least-squares framework to formulate many CA methods. We show how PCA, LDA, CCA, LPP, SC, and their kernel and regularized extensions correspond to a particular instance of least-squares weighted kernel reduced rank regression (LS-WKRRR). The LS-WKRRR formulation of CA methods has several benefits: 1) it provides a clean connection between many CA techniques and an intuitive framework to understand normalization factors; 2) it yields efficient numerical schemes to solve CA techniques; 3) it overcomes the small sample size problem; 4) it provides a framework to easily extend CA methods. We derive weighted generalizations of PCA, LDA, SC, and CCA, and several new CA techniques. PMID:21911913

  16. On the stability and accuracy of least squares approximations

    CERN Document Server

    Cohen, Albert; Leviatan, Dany

    2011-01-01

    We consider the problem of reconstructing an unknown function $f$ on a domain $X$ from samples of $f$ at $n$ randomly chosen points with respect to a given measure $\rho_X$. Given a sequence of linear spaces $(V_m)_{m>0}$ with ${\rm dim}(V_m)=m\leq n$, we study the least squares approximations from the spaces $V_m$. It is well known that such approximations can be inaccurate when $m$ is too close to $n$, even when the samples are noiseless. Our main result provides a criterion on $m$ that describes the needed amount of regularization to ensure that the least squares method is stable and that its accuracy, measured in $L^2(X,\rho_X)$, is comparable to the best approximation error of $f$ by elements from $V_m$. We illustrate this criterion for various approximation schemes, such as trigonometric polynomials, with $\rho_X$ being the uniform measure, and algebraic polynomials, with $\rho_X$ being either the uniform or Chebyshev measure. For such examples we also prove similar stability results using deterministic...

  17. Plane-wave least-squares reverse-time migration

    KAUST Repository

    Dai, Wei

    2013-06-03

    A plane-wave least-squares reverse-time migration (LSRTM) is formulated with a new parameterization, where the migration image of each shot gather is updated separately and an ensemble of prestack images is produced along with common image gathers. The merits of plane-wave prestack LSRTM are the following: (1) plane-wave prestack LSRTM can sometimes offer stable convergence even when the migration velocity has bulk errors of up to 5%; (2) to significantly reduce computation cost, linear phase-shift encoding is applied to hundreds of shot gathers to produce dozens of plane waves. Unlike phase-shift encoding with random time shifts applied to each shot gather, plane-wave encoding can be effectively applied to data with a marine streamer geometry. (3) Plane-wave prestack LSRTM can provide higher-quality images than standard reverse-time migration. Numerical tests on the Marmousi2 model and a marine field data set are performed to illustrate the benefits of plane-wave LSRTM. Empirical results show that LSRTM in the plane-wave domain, compared to standard reverse-time migration, produces images efficiently with fewer artifacts and better spatial resolution. Moreover, the prestack image ensemble accommodates more unknowns, making it more robust than conventional least-squares migration in the presence of migration velocity errors. © 2013 Society of Exploration Geophysicists.

  18. Decision-Directed Recursive Least Squares MIMO Channels Tracking

    Directory of Open Access Journals (Sweden)

    2006-01-01

    A new approach for joint data estimation and channel tracking for multiple-input multiple-output (MIMO) channels is proposed based on the decision-directed recursive least squares (DD-RLS) algorithm. The RLS algorithm is commonly used for equalization and its application to channel estimation is a novel idea. In this paper, after defining the weighted least squares cost function, it is minimized and eventually the RLS MIMO channel estimation algorithm is derived. The proposed algorithm combined with the decision-directed algorithm (DDA) is then extended for blind-mode operation. From the computational complexity point of view, being O3 versus the number of transmitter and receiver antennas, the proposed algorithm is very efficient. Through various simulations, the mean square error (MSE) of the tracking of the proposed algorithm for different joint detection algorithms is compared with the Kalman filtering approach, which is one of the most well-known channel tracking algorithms. It is shown that the performance of the proposed algorithm is very close to the Kalman estimator and that in blind-mode operation it presents better performance with much lower complexity, irrespective of the need to know the channel model.

  19. Faraday rotation data analysis with least-squares elliptical fitting

    International Nuclear Information System (INIS)

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.
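
    The flavor of direct least-squares conic fitting can be sketched as follows (hedged: the published method additionally enforces an ellipse-specific constraint, and the rotate/translate/rescale steps are omitted; the data below are synthetic):

        import numpy as np

        def fit_conic(x, y):
            """Least-squares conic a x^2 + b xy + c y^2 + d x + e y + f = 0, ||coef|| = 1."""
            D = np.column_stack([x**2, x * y, y**2, x, y, np.ones_like(x)])
            # The minimizer of ||D a|| over unit-norm a is the right singular
            # vector of D associated with the smallest singular value.
            return np.linalg.svd(D, full_matrices=False)[2][-1]

        # Noisy samples of an ellipse traced as a Lissajous-like figure.
        t = np.linspace(0, 2 * np.pi, 400)
        rng = np.random.default_rng(3)
        x = 3 * np.cos(t) + 1 + 0.02 * rng.normal(size=t.size)
        y = 2 * np.sin(t + 0.4) - 2 + 0.02 * rng.normal(size=t.size)

        a, b, c, d, e, f = fit_conic(x, y)
        print("b^2 - 4ac =", b**2 - 4 * a * c)   # negative confirms an ellipse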

  20. Making the most out of the least (squares migration)

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Standard migration images can suffer from migration artifacts due to 1) poor source-receiver sampling, 2) weak amplitudes caused by geometric spreading, 3) attenuation, 4) defocusing, 5) poor resolution due to limited source-receiver aperture, and 6) ringiness caused by a ringy source wavelet. To partly remedy these problems, least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), proposes to linearly invert seismic data for the reflectivity distribution. If the migration velocity model is sufficiently accurate, then LSM can mitigate many of the above problems and lead to a more resolved migration image, sometimes with twice the spatial resolution. However, there are two problems with LSM: the cost can be an order of magnitude more than standard migration and the quality of the LSM image is no better than the standard image for velocity errors of 5% or more. We now show how to get the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.
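
    Stripped of the wave-equation machinery, LSM is iterative linear inversion of the modeling operator: with a toy dense matrix L standing in for Born modeling and its transpose for migration (all sizes and data here are illustrative assumptions), the scheme looks like this:

        import numpy as np

        def lsm(L, d, iters=50):
            """Minimize ||L m - d||^2 by steepest descent with exact line search."""
            m = np.zeros(L.shape[1])
            for _ in range(iters):
                r = L @ m - d                    # data residual (forward modeling)
                g = L.T @ r                      # gradient = migration of the residual
                Lg = L @ g
                m -= (g @ g) / (Lg @ Lg) * g     # exact step for the quadratic objective
            return m

        rng = np.random.default_rng(6)
        L = rng.normal(size=(300, 100))          # stand-in "modeling" operator
        m_true = np.zeros(100)
        m_true[[20, 55, 80]] = [1.0, -0.7, 0.4]  # sparse "reflectivity"
        d = L @ m_true + 0.05 * rng.normal(size=300)

        m_mig = L.T @ d                          # standard "migration" (adjoint only)
        m_lsm = lsm(L, d)                        # least-squares "migration"
        print(np.linalg.norm(m_lsm - m_true))    # much closer than the raw adjoint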

  1. Faraday rotation data analysis with least-squares elliptical fitting

    Energy Technology Data Exchange (ETDEWEB)

    White, Adam D.; McHale, G. Brent; Goerz, David A.; Speer, Ron D. [Lawrence Livermore National Laboratory, Livermore, California 94550 (United States)

    2010-10-15

    A method of analyzing Faraday rotation data from pulsed magnetic field measurements is described. The method uses direct least-squares elliptical fitting to measured data. The least-squares fit conic parameters are used to rotate, translate, and rescale the measured data. Interpretation of the transformed data provides improved accuracy and time-resolution characteristics compared with many existing methods of analyzing Faraday rotation data. The method is especially useful when linear birefringence is present at the input or output of the sensing medium, or when the relative angle of the polarizers used in analysis is not aligned with precision; under these circumstances the method is shown to return the analytically correct input signal. The method may be pertinent to other applications where analysis of Lissajous figures is required, such as the velocity interferometer system for any reflector (VISAR) diagnostics. The entire algorithm is fully automated and requires no user interaction. An example of algorithm execution is shown, using data from a fiber-based Faraday rotation sensor on a capacitive discharge experiment.

  2. Making the most out of least-squares migration

    KAUST Repository

    Huang, Yunsong

    2014-09-01

    Standard migration images can suffer from (1) migration artifacts caused by an undersampled acquisition geometry, (2) poor resolution resulting from a limited recording aperture, (3) ringing artifacts caused by ripples in the source wavelet, and (4) weak amplitudes resulting from geometric spreading, attenuation, and defocusing. These problems can be remedied in part by least-squares migration (LSM), also known as linearized seismic inversion or migration deconvolution (MD), which aims to linearly invert seismic data for the reflectivity distribution. Given a sufficiently accurate migration velocity model, LSM can mitigate many of the above problems and can produce more resolved migration images, sometimes with more than twice the spatial resolution of standard migration. However, LSM faces two challenges: The computational cost can be an order of magnitude higher than that of standard migration, and the resulting image quality can fail to improve for migration velocity errors of about 5% or more. It is possible to obtain the most from least-squares migration by reducing the cost and velocity sensitivity of LSM.

  3. Efficient Model Selection for Sparse Least-Square SVMs

    Directory of Open Access Journals (Sweden)

    Xiao-Lei Xia

    2013-01-01

    The Forward Least-Squares Approximation (FLSA) SVM is a newly emerged Least-Squares SVM (LS-SVM) whose solution is extremely sparse. The algorithm uses the number of support vectors as the regularization parameter and ensures the linear independence of the support vectors which span the solution. This paper proposes a variant of the FLSA-SVM, namely, the Reduced FLSA-SVM, which is of reduced computational complexity and memory requirements. The strategy of "contexts inheritance" is introduced to improve the efficiency of tuning the regularization parameter for both the FLSA-SVM and the RFLSA-SVM algorithms. Experimental results on benchmark datasets showed that, compared to the SVM and a number of its variants, the RFLSA-SVM solutions contain a reduced number of support vectors, while maintaining competitive generalization abilities. With respect to the time cost for tuning the regularization parameter, the RFLSA-SVM algorithm was empirically demonstrated to be the fastest compared to the FLSA-SVM, LS-SVM, and SVM algorithms.

  4. Spatial autocorrelation approaches to testing residuals from least squares regression

    CERN Document Server

    Chen, Yanguang

    2015-01-01

    In statistics, the Durbin-Watson test is commonly employed to detect the presence of serial correlation in residuals from a least squares regression analysis. However, the Durbin-Watson statistic is only suitable for ordered time or spatial series. If the variables comprise cross-sectional data coming from spatial random sampling, the Durbin-Watson test will be ineffectual because the value of the statistic depends on the ordering of the data points. Based on ideas from spatial autocorrelation, this paper presents two new statistics for testing serial correlation of residuals from least squares regression based on spatial samples. By analogy with the new form of Moran's index, an autocorrelation coefficient is defined with a standardized residual vector and a normalized spatial weight matrix. Then, on the analogy of the Durbin-Watson statistic, a serial correlation index is constructed. As a case study, the two statistics are applied to the spatial sample of 29 of China's regions. These results show th...
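
    A sketch of such a residual autocorrelation index, assuming an inverse-distance weight matrix with row normalization (the paper's exact weighting scheme may differ):

        import numpy as np

        def residual_autocorrelation(X, y, coords):
            beta, *_ = np.linalg.lstsq(X, y, rcond=None)   # OLS fit
            e = y - X @ beta
            e = (e - e.mean()) / e.std()                   # standardized residuals
            d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
            W = np.where(d > 0, 1.0 / np.maximum(d, 1e-12), 0.0)  # inverse distance
            W /= W.sum(axis=1, keepdims=True)              # row-normalized weights
            return (e @ W @ e) / (e @ e)                   # Moran-type index

        rng = np.random.default_rng(5)
        coords = rng.uniform(size=(50, 2))
        X = np.column_stack([np.ones(50), rng.normal(size=50)])
        y = X @ np.array([1.0, 2.0]) + rng.normal(size=50)
        print(residual_autocorrelation(X, y, coords))      # near 0: spatially random noise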

  5. Implementation of the Least-Squares Lattice with Order and Forgetting Factor Estimation for FPGA

    Czech Academy of Sciences Publication Activity Database

    Pohl, Zdeněk; Tichý, Milan; Kadlec, Jiří

    2008-01-01

    Roč. 2008, č. 2008 (2008), s. 1-11. ISSN 1687-6172 R&D Projects: GA MŠk(CZ) 1M0567 EU Projects: European Commission(XE) 027611 - AETHER Institutional research plan: CEZ:AV0Z10750506 Keywords : DSP * Least-squares lattice * order estimation * exponential forgetting factor estimation * FPGA implementation * scheduling * dynamic reconfiguration * microblaze Subject RIV: IN - Informatics, Computer Science Impact factor: 1.055, year: 2008 http://library.utia.cas.cz/separaty/2008/ZS/pohl-tichy-kadlec-implementation%20of%20the%20least-squares%20lattice%20with%20order%20and%20forgetting%20factor%20estimation%20for%20fpga.pdf

  6. Regularization Techniques for Linear Least-Squares Problems

    KAUST Repository

    Suliman, Mohamed

    2016-04-01

    Linear estimation is a fundamental branch of signal processing that deals with estimating the values of parameters from corrupted measured data. Throughout the years, several optimization criteria have been used to achieve this task. The most celebrated among these is the linear least-squares criterion. Although this criterion has enjoyed wide popularity in many areas due to its attractive properties, it suffers from some shortcomings. Alternative optimization criteria, as a result, have been proposed. These new criteria allowed, in one way or another, the incorporation of further prior information into the problem at hand. Among these alternative criteria is the regularized least-squares (RLS) criterion. In this thesis, we propose two new algorithms to find the regularization parameter for linear least-squares problems. In the constrained perturbation regularization algorithm (COPRA) for random matrices and COPRA for linear discrete ill-posed problems, an artificial perturbation matrix with a bounded norm is forced into the model matrix. This perturbation is introduced to enhance the singular-value structure of the matrix. As a result, the new modified model is expected to provide a more stable solution when used to estimate the original signal through minimizing the worst-case residual error function. Unlike many other regularization algorithms that aim to minimize the estimated data error, the two new proposed algorithms are developed mainly to select the artificial perturbation bound and the regularization parameter in a way that approximately minimizes the mean-squared error (MSE) between the original signal and its estimate under various conditions. The first proposed COPRA method is developed mainly to estimate the regularization parameter when the measurement matrix is complex Gaussian, with centered unit variance (standard), and independent and identically distributed (i.i.d.) entries. Furthermore, the second proposed COPRA
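
    The regularized least-squares estimate that such parameter-selection rules build on can be written via the SVD; a generic Tikhonov sketch (this is the underlying RLS estimate only, not the COPRA selection rule; the test problem is an illustrative assumption):

        import numpy as np

        def rls_estimate(A, b, lam):
            """x(lam) = V diag(s / (s^2 + lam)) U^T b, the Tikhonov-regularized solution."""
            U, s, Vt = np.linalg.svd(A, full_matrices=False)
            return Vt.T @ ((s / (s ** 2 + lam)) * (U.T @ b))

        rng = np.random.default_rng(4)
        A = rng.normal(size=(50, 20))
        A[:, -1] = A[:, 0] + 1e-6 * rng.normal(size=50)   # make A ill-conditioned
        x_true = rng.normal(size=20)
        b = A @ x_true + 0.01 * rng.normal(size=50)

        for lam in [0.0, 1e-4, 1e-2, 1.0]:
            err = np.linalg.norm(rls_estimate(A, b, lam) - x_true)
            print(f"lambda = {lam:g}   error = {err:.3f}")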

  7. Least-squares reverse time migration of multiples

    KAUST Repository

    Zhang, Dongliang

    2013-12-06

    The theory of least-squares reverse time migration of multiples (RTMM) is presented. In this method, least squares migration (LSM) is used to image free-surface multiples where the recorded traces are used as the time histories of the virtual sources at the hydrophones and the surface-related multiples are the observed data. For a single source, the entire free-surface becomes an extended virtual source where the downgoing free-surface multiples more fully illuminate the subsurface compared to the primaries. Since each recorded trace is treated as the time history of a virtual source, knowledge of the source wavelet is not required and the ringy time series for each source is automatically deconvolved. If the multiples can be perfectly separated from the primaries, numerical tests on synthetic data for the Sigsbee2B and Marmousi2 models show that least-squares reverse time migration of multiples (LSRTMM) can significantly improve the image quality compared to RTMM or standard reverse time migration (RTM) of primaries. However, if there is imperfect separation and the multiples are strongly interfering with the primaries then LSRTMM images show no significant advantage over the primary migration images. In some cases, they can be of worse quality. Applying LSRTMM to Gulf of Mexico data shows higher signal-to-noise imaging of the salt bottom and top compared to standard RTM images. This is likely attributed to the fact that the target body is just below the sea bed so that the deep water multiples do not have strong interference with the primaries. Migrating a sparsely sampled version of the Marmousi2 ocean bottom seismic data shows that LSM of primaries and LSRTMM provides significantly better imaging than standard RTM. A potential liability of LSRTMM is that multiples require several round trips between the reflector and the free surface, so that high frequencies in the multiples suffer greater attenuation compared to the primary reflections. This can lead to lower

  8. Penalized Nonlinear Least Squares Estimation of Time-Varying Parameters in Ordinary Differential Equations

    KAUST Repository

    Cao, Jiguo

    2012-01-01

    Ordinary differential equations (ODEs) are widely used in biomedical research and other scientific areas to model complex dynamic systems. It is an important statistical problem to estimate parameters in ODEs from noisy observations. In this article we propose a method for estimating the time-varying coefficients in an ODE. Our method is a variation of the nonlinear least squares where penalized splines are used to model the functional parameters and the ODE solutions are approximated also using splines. We resort to the implicit function theorem to deal with the nonlinear least squares objective function that is only defined implicitly. The proposed penalized nonlinear least squares method is applied to estimate a HIV dynamic model from a real dataset. Monte Carlo simulations show that the new method can provide much more accurate estimates of functional parameters than the existing two-step local polynomial method which relies on estimation of the derivatives of the state function. Supplemental materials for the article are available online.

  9. ADAPTIVE FUSION ALGORITHMS BASED ON WEIGHTED LEAST SQUARE METHOD

    Institute of Scientific and Technical Information of China (English)

    SONG Kaichen; NIE Xili

    2006-01-01

    Weighted fusion algorithms, which can be applied in the area of multi-sensor data fusion, are developed based on the weighted least squares method. A weighted fusion algorithm, in which the relationship between the weight coefficients and the measurement noise is established, is proposed by taking into account the correlation of the measurement noise. Then a simplified weighted fusion algorithm is deduced on the assumption that the measurement noise is uncorrelated. In addition, an algorithm which can adjust the weight coefficients in the simplified algorithm by estimating the measurement noise from the measurements is presented. It is proved by simulation and experiment that the precision performance of a multi-sensor system based on these algorithms is better than that of a multi-sensor system based on other algorithms.
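
    In the uncorrelated-noise case, the simplified algorithm reduces to inverse-variance weighting, which is the weighted-least-squares estimate of a common quantity; a minimal sketch with illustrative readings:

        import numpy as np

        def fuse(measurements, noise_vars):
            """Inverse-variance weighted fusion of scalar sensor readings."""
            w = 1.0 / np.asarray(noise_vars)
            fused = np.dot(w, measurements) / w.sum()   # weighted mean
            fused_var = 1.0 / w.sum()                   # variance of the fused value
            return fused, fused_var

        z = [10.2, 9.8, 10.5]       # three sensors observing the same quantity
        var = [0.04, 0.01, 0.25]    # their (estimated) noise variances
        print(fuse(z, var))         # result is pulled toward the most precise sensor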

  10. Least squares deconvolution of the stellar intensity and polarization spectra

    CERN Document Server

    Kochukhov, O; Piskunov, N

    2010-01-01

    Least squares deconvolution (LSD) is a powerful method of extracting high-precision average line profiles from stellar intensity and polarization spectra. Despite its common usage, the LSD method is poorly documented and has never been tested using realistic synthetic spectra. In this study we revisit the key assumptions of the LSD technique, clarify its numerical implementation, discuss possible improvements and give recommendations on how to make LSD results understandable and reproducible. We also address the problem of interpreting the moments and shapes of the LSD profiles in terms of physical parameters. We have developed an improved, multiprofile version of LSD and have extended the deconvolution procedure to linear polarization analysis, taking into account anomalous Zeeman splitting of spectral lines. This code is applied to theoretical Stokes parameter spectra. We test various methods of interpreting the mean profiles, investigating how coarse approximations of the multiline technique trans...
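
    The core computation is linear: the spectrum is modeled as a line mask (known positions and weights) convolved with one common mean profile, which is then recovered by linear least squares. The toy below illustrates that step with hypothetical line positions and weights; real LSD works in velocity space with inverse-variance weighting.

    ```python
    # Toy least-squares deconvolution: spectrum = mask (x) mean profile Z,
    # with Z recovered by linear least squares. Line positions/weights are
    # hypothetical; real LSD adds noise weights and velocity binning.
    import numpy as np

    n_pix, n_prof = 400, 21
    line_pos = [60, 150, 270, 330]        # hypothetical line centres (pixels)
    line_wt = [0.9, 0.5, 0.7, 0.3]        # hypothetical line weights

    # Design matrix: each line contributes a shifted, weighted copy of Z.
    M = np.zeros((n_pix, n_prof))
    for c, w in zip(line_pos, line_wt):
        M[c:c + n_prof, :] += w * np.eye(n_prof)

    # Synthetic observation: Gaussian mean profile plus noise.
    z_true = np.exp(-0.5 * ((np.arange(n_prof) - 10) / 3.0) ** 2)
    rng = np.random.default_rng(1)
    spectrum = M @ z_true + 0.05 * rng.standard_normal(n_pix)

    # Least-squares estimate of the common profile.
    z_lsd, *_ = np.linalg.lstsq(M, spectrum, rcond=None)
    print("max profile error:", np.abs(z_lsd - z_true).max())
    ```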

  11. Parameter Uncertainty for Aircraft Aerodynamic Modeling using Recursive Least Squares

    Science.gov (United States)

    Grauer, Jared A.; Morelli, Eugene A.

    2016-01-01

    A real-time method was demonstrated for determining accurate uncertainty levels of stability and control derivatives estimated using recursive least squares and time-domain data. The method uses a recursive formulation of the residual autocorrelation to account for colored residuals, which are routinely encountered in aircraft parameter estimation and change the predicted uncertainties. Simulation data and flight test data for a subscale jet transport aircraft were used to demonstrate the approach. Results showed that the corrected uncertainties matched the observed scatter in the parameter estimates, and did so more accurately than conventional uncertainty estimates that assume white residuals. Only small differences were observed between batch estimates and recursive estimates at the end of the maneuver. It was also demonstrated that the autocorrelation could be reduced to a small number of lags to minimize computation and memory storage requirements without significantly degrading the accuracy of predicted uncertainty levels.
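
    For background, the standard recursive least-squares update that such estimators build on is sketched below; the paper's contribution, a recursive residual-autocorrelation correction of the predicted uncertainties, is not reproduced here, and this sketch assumes white residuals.

    ```python
    # Standard recursive least-squares (RLS) update for y = x^T theta.
    # Background sketch only; the autocorrelation-corrected uncertainties
    # described in the record above are not implemented here.
    import numpy as np

    def rls_update(theta, P, x, y, lam=1.0):
        """One RLS step: parameters theta, covariance P, regressor x, output y."""
        x = x.reshape(-1, 1)
        k = P @ x / (lam + (x.T @ P @ x).item())   # gain vector
        err = y - (x.T @ theta).item()             # prediction residual
        theta = theta + k * err
        P = (P - k @ x.T @ P) / lam
        return theta, P

    # Fit y = 2*x1 - 3*x2 from streaming data.
    rng = np.random.default_rng(2)
    theta = np.zeros((2, 1))
    P = 1e3 * np.eye(2)
    for _ in range(500):
        x = rng.standard_normal(2)
        y = 2 * x[0] - 3 * x[1] + 0.1 * rng.standard_normal()
        theta, P = rls_update(theta, P, x, y)
    print("estimates:", theta.ravel())
    ```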

  12. Regularized plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2013-09-22

    A Kirchhoff least-squares migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images. A regularization term is included that accounts for mispositioning of reflectors due to errors in the velocity model. Both synthetic and field results show that: 1) LSM with a reflectivity model common to all the plane-wave gathers provides the best image when the migration velocity model is accurate, but it is more sensitive to velocity errors; 2) the regularized plane-wave LSM is more robust in the presence of velocity errors; and 3) LSM achieves both computational and IO savings by plane-wave encoding compared to shot-domain LSM for the models tested.
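
    The underlying computation is a regularized least-squares inversion: minimize the data misfit plus a penalty on the model. As a generic, hypothetical stand-in for the migration problem (where A would be the Kirchhoff modeling operator), a damped least-squares solve looks like this:

    ```python
    # Generic damped least-squares analogue of the regularized LSM idea:
    # minimize ||A m - d||^2 + damp^2 ||m||^2. A toy stand-in; A plays the
    # role of the forward (modeling) operator, not the actual seismic code.
    import numpy as np
    from scipy.sparse.linalg import lsqr

    rng = np.random.default_rng(3)
    A = rng.standard_normal((200, 80))      # stand-in forward operator
    m_true = np.zeros(80)
    m_true[::10] = 1.0                      # sparse "reflectivity"
    d = A @ m_true + 0.5 * rng.standard_normal(200)

    m_plain = lsqr(A, d)[0]                 # unregularized LS
    m_reg = lsqr(A, d, damp=2.0)[0]         # damped (regularized) LS
    print("model error, plain :", np.linalg.norm(m_plain - m_true))
    print("model error, damped:", np.linalg.norm(m_reg - m_true))
    ```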

  13. Local validation of EU-DEM using Least Squares Collocation

    Science.gov (United States)

    Ampatzidis, Dimitrios; Mouratidis, Antonios; Gruber, Christian; Kampouris, Vassilios

    2016-04-01

    In the present study we deal with the evaluation of the European Digital Elevation Model (EU-DEM) in a limited area, covering a few kilometers. We compare EU-DEM-derived vertical information against orthometric heights obtained by classical trigonometric leveling for an area located in Northern Greece. We apply several statistical tests and initially fit a surface model in order to quantify the existing biases and outliers. Finally, we implement a methodology for predicting orthometric heights, using Least Squares Collocation on the residuals remaining after the first step (the fitted surface). Our results, taking into account cross-validation points, reveal a local consistency between EU-DEM and the official heights that is better than 1.4 meters.

  14. Flow Applications of the Least Squares Finite Element Method

    Science.gov (United States)

    Jiang, Bo-Nan

    1998-01-01

    The main thrust of the effort has been towards the development, analysis and implementation of the least-squares finite element method (LSFEM) for fluid dynamics and electromagnetics applications. In the past year, there were four major accomplishments: 1) special treatments in computational fluid dynamics and computational electromagnetics, such as upwinding, numerical dissipation, staggered grid, non-equal order elements, operator splitting and preconditioning, edge elements, and vector potential are unnecessary; 2) the analysis of the LSFEM for most partial differential equations can be based on the bounded inverse theorem; 3) the finite difference and finite volume algorithms solve only two Maxwell equations and ignore the divergence equations; and 4) the first numerical simulation of three-dimensional Marangoni-Benard convection was performed using the LSFEM.

  15. Nonlinear Least-squares Fitting for PIXE Spectra

    Directory of Open Access Journals (Sweden)

    A. Tchantchane

    2005-01-01

    An interactive computer program for the analysis of PIXE (Particle Induced X-ray Emission) spectra is described in this study. The fitting procedure consists of computing a function Y(I, a) which approximates the experimental data at each channel I, where a is the set of fitting parameters (energy and resolution calibration, X-ray intensities, absorption and background). The fit parameters are determined using nonlinear least-squares fitting based on Marquardt's algorithm. The program takes into account the low-energy tail and escape peaks. The program was employed for the analysis of PIXE spectra of geological and biological samples. The peak areas determined by this program are compared to those obtained with the AXIL code.
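
    A hedged sketch of this kind of fit, using a single Gaussian X-ray peak on a linear background and Levenberg-Marquardt nonlinear least squares (scipy's curve_fit); the tailing and escape-peak terms the program also models are omitted for brevity:

    ```python
    # Gaussian peak + linear background fitted by Levenberg-Marquardt
    # nonlinear least squares. Simplified stand-in for a PIXE peak fit.
    import numpy as np
    from scipy.optimize import curve_fit

    def model(ch, area, centre, sigma, b0, b1):
        gauss = area / (sigma * np.sqrt(2 * np.pi)) * np.exp(
            -0.5 * ((ch - centre) / sigma) ** 2)
        return gauss + b0 + b1 * ch        # peak plus linear background

    ch = np.arange(200.0)
    rng = np.random.default_rng(4)
    y_true = model(ch, 5000.0, 100.0, 4.0, 20.0, 0.05)
    y = rng.poisson(y_true).astype(float)  # counting noise

    p0 = [3000.0, 95.0, 5.0, 10.0, 0.0]    # rough initial guesses
    popt, pcov = curve_fit(model, ch, y, p0=p0)   # LM is the default method
    print("fitted peak area: %.0f +/- %.0f" % (popt[0], np.sqrt(pcov[0, 0])))
    ```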

  16. DIRECT ITERATIVE METHODS FOR RANK DEFICIENT GENERALIZED LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Jin-yun Yuan; Xiao-qing Jin

    2000-01-01

    The generalized least squares (LS) problem min_x (b − Ax)^T W^{-1} (b − Ax) appears in many application areas, where W is an m × m symmetric positive definite matrix and A is an m × n matrix with m ≥ n. Since the problem has many solutions in the rank-deficient case, some special preconditioned techniques are adapted to obtain the minimum 2-norm solution. A block SOR method and the preconditioned conjugate gradient (PCG) method are proposed here. Convergence and the optimal relaxation parameter for the block SOR method are studied. An error bound for the PCG method is given. A comparison of these methods is investigated. Some remarks on the implementation of the methods and the operation cost are given as well.
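
    For small problems, the target of such iterations can be computed directly: whiten the residual with a Cholesky factor of W and apply the pseudoinverse, which yields the minimum 2-norm solution. The sketch below shows that reference solution only; the block SOR and PCG iterations themselves are not reproduced.

    ```python
    # Direct minimum 2-norm solution of min_x (b-Ax)^T W^{-1} (b-Ax):
    # whiten with a Cholesky factor of W, then use the pseudoinverse.
    import numpy as np

    rng = np.random.default_rng(5)
    m, n = 12, 6
    A = rng.standard_normal((m, n))
    A[:, 5] = A[:, 0] + A[:, 1]            # make A rank deficient
    b = rng.standard_normal(m)
    Q = rng.standard_normal((m, m))
    W = Q @ Q.T + m * np.eye(m)            # symmetric positive definite

    L = np.linalg.cholesky(W)              # W = L L^T
    Aw = np.linalg.solve(L, A)             # whitened operator L^{-1} A
    bw = np.linalg.solve(L, b)
    x_min = np.linalg.pinv(Aw) @ bw        # minimum 2-norm LS solution
    print("solution norm:", np.linalg.norm(x_min))
    ```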

  17. semPLS: Structural Equation Modeling Using Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Armin Monecke

    2012-05-01

    Structural equation models (SEM) are very popular in many disciplines. The partial least squares (PLS) approach to SEM offers an alternative to covariance-based SEM which is especially suited for situations when data are not normally distributed. PLS path modelling is referred to as a soft modeling technique with minimal demands regarding measurement scales, sample sizes and residual distributions. The semPLS package provides the capability to estimate PLS path models within the R programming environment. Different setups for the estimation of factor scores can be used. Furthermore, it contains modular methods for the computation of bootstrap confidence intervals, model parameters and several quality indices. Various plot functions help to evaluate the model. The well-known mobile phone dataset from marketing research is used to demonstrate the features of the package.

  18. Improved linear least squares estimation using bounded data uncertainty

    KAUST Repository

    Ballal, Tarig

    2015-04-01

    This paper addresses the problem of linear least squares (LS) estimation of a vector x from linearly related observations. In spite of being unbiased, the original LS estimator suffers from high mean squared error, especially at low signal-to-noise ratios. The mean squared error (MSE) of the LS estimator can be improved by introducing some form of regularization based on certain constraints. We propose an improved LS (ILS) estimator that approximately minimizes the MSE, without imposing any constraints. To achieve this, we allow for perturbation in the measurement matrix. Then we utilize a bounded data uncertainty (BDU) framework to derive a simple iterative procedure to estimate the regularization parameter. Numerical results demonstrate that the proposed BDU-ILS estimator is superior to the original LS estimator, and it converges to the best linear estimator, the linear minimum-mean-squared-error estimator (LMMSE), when the elements of x are statistically white.

  19. Estimating Military Aircraft Cost Using Least Squares Support Vector Machines

    Institute of Scientific and Technical Information of China (English)

    ZHU Jia-yuan; ZHANG Xi-bin; ZHANG Heng-xi; REN Bo

    2004-01-01

    A multi-layer adaptive parameter-optimization algorithm is developed for improving least squares support vector machines (LS-SVM), and a military aircraft life-cycle cost (LCC) intelligent estimation model is proposed based on the improved LS-SVM. The intelligent cost estimation process is divided into three steps in the model. In the first step, a cost-drive-factor needs to be selected, which is significant for cost estimation. In the second step, military aircraft training samples of costs and the cost-drive-factor set are fitted by the LS-SVM. The model can then be used for cost estimation of new aircraft types. Chinese military aircraft costs are estimated in the paper. The results show that the costs estimated by the new model are closer to the true costs than those of the traditionally used methods.
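
    For reference, training a standard LS-SVM regressor (in the Suykens formulation) reduces to solving one linear system; a minimal sketch with an RBF kernel follows. The paper's multi-layer parameter-tuning layer on top of this is not reproduced, and gamma and the kernel width are arbitrary placeholders.

    ```python
    # Standard LS-SVM regression: training solves the single linear system
    #   [ 0   1^T         ] [b    ]   [0]
    #   [ 1   K + I/gamma ] [alpha] = [y]
    import numpy as np

    def rbf_kernel(X1, X2, width):
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))

    def lssvm_fit(X, y, gamma=10.0, width=1.0):
        n = len(y)
        K = rbf_kernel(X, X, width)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]                  # bias b, dual weights alpha

    def lssvm_predict(X_train, b, alpha, X_new, width=1.0):
        return rbf_kernel(X_new, X_train, width) @ alpha + b

    rng = np.random.default_rng(6)
    X = rng.uniform(-3, 3, (40, 1))
    y = np.sin(X[:, 0]) + 0.1 * rng.standard_normal(40)
    b, alpha = lssvm_fit(X, y)
    print("prediction at x=1:", lssvm_predict(X, b, alpha, np.array([[1.0]])))
    ```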

  20. A Galerkin least squares approach to viscoelastic flow.

    Energy Technology Data Exchange (ETDEWEB)

    Rao, Rekha R. [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States); Schunk, Peter Randall [Sandia National Lab. (SNL-NM), Albuquerque, NM (United States)

    2015-10-01

    A Galerkin/least-squares stabilization technique is applied to a discrete Elastic Viscous Stress Splitting formulation for viscoelastic flow. From this, a possible viscoelastic stabilization method is proposed. This method is tested with the flow of an Oldroyd-B fluid past a rigid cylinder, where it is found to produce inaccurate drag coefficients. Furthermore, it fails at relatively low Weissenberg numbers, indicating it is not suited for use as a general algorithm. In addition, a decoupled approach is used as a way of separating the constitutive equation from the rest of the system. A pressure Poisson equation is used when the velocity and pressure are to be decoupled, but this fails to produce a solution when inflow/outflow boundaries are considered. However, a coupled pressure-velocity equation with a decoupled constitutive equation is successful for the flow past a rigid cylinder and seems suitable as a general-use algorithm.

  1. Estimating Frequency by Interpolation Using Least Squares Support Vector Regression

    Directory of Open Access Journals (Sweden)

    Changwei Ma

    2015-01-01

    The discrete Fourier transform (DFT) based maximum likelihood (ML) algorithm is an important part of single-sinusoid frequency estimation. As the signal-to-noise ratio (SNR) increases above the threshold value, its error will lie very close to the Cramer-Rao lower bound (CRLB), which depends on the number of DFT points. However, its mean square error (MSE) performance is directly proportional to its calculation cost. As a modified version of support vector regression (SVR), least squares SVR (LS-SVR) not only keeps excellent capabilities for generalizing and fitting but also exhibits lower computational complexity. In this paper, therefore, LS-SVR is employed to interpolate the Fourier coefficients of received signals and attain high frequency-estimation accuracy. Our results show that the proposed algorithm can make a good compromise between calculation cost and MSE performance under the assumption that the sample size, number of DFT points, and resampling points are already known.

  2. Least-squares deconvolution based analysis of stellar spectra

    CERN Document Server

    Van Reeth, T; Tsymbal, V

    2013-01-01

    In recent years, astronomical photometry has been revolutionised by space missions such as MOST, CoRoT and Kepler. However, despite this progress, high-quality spectroscopy is still required as well. Unfortunately, high-resolution spectra can only be obtained using ground-based telescopes, and since many interesting targets are rather faint, the spectra often have a relatively low S/N. Consequently, we have developed an algorithm based on the least-squares deconvolution profile, which allows one to reconstruct an observed spectrum, but with a higher S/N. We have successfully tested the method using both synthetic and observed data, in combination with several common spectroscopic applications, such as the determination of atmospheric parameter values, and frequency analysis and mode identification of stellar pulsations.

  3. Partial Least Squares tutorial for analyzing neuroimaging data

    Directory of Open Access Journals (Sweden)

    Patricia Van Roon

    2014-09-01

    Partial least squares (PLS) has become a respected and meaningful soft modeling analysis technique that can be applied to very large datasets where the number of factors or variables is greater than the number of observations. Current biometric studies (e.g., eye movements, EKG, body movements, EEG) are often of this nature. PLS eliminates the multiple linear regression issues of over-fitting data by finding a few underlying or latent variables (factors) that account for most of the variation in the data. In real-world applications, where linear models do not always apply, PLS can model the non-linear relationships well. This tutorial introduces two PLS methods, PLS Correlation (PLSC) and PLS Regression (PLSR), and their applications in data analysis, illustrated with neuroimaging examples. Both methods provide straightforward and comprehensible techniques for determining and modeling relationships between two multivariate data blocks by finding latent variables that best describe the relationships. In the examples, PLSC will analyze the relationship between neuroimaging data, such as event-related potential (ERP) amplitude averages from different locations on the scalp, and the corresponding behavioural data. Using the same data, PLSR will be used to model the relationship between neuroimaging and behavioural data. This model will be able to predict future behaviour solely from available neuroimaging data. To find latent variables, Singular Value Decomposition (SVD) for PLSC and Non-linear Iterative PArtial Least Squares (NIPALS) for PLSR are implemented in this tutorial. SVD decomposes the large data block into three manageable matrices containing a diagonal set of singular values, as well as left and right singular vectors. For PLSR, NIPALS algorithms are used because they provide a more precise estimation of the latent variables. Mathematica notebooks are provided for each PLS method with clearly labeled sections and subsections. ...
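
    The tutorial's notebooks are in Mathematica; as a compact cross-language analogue, the sketch below implements single-response PLS regression with NIPALS-style deflation. The data and component count are arbitrary placeholders.

    ```python
    # Compact PLS1 regression with NIPALS-style deflation (one response).
    # A sketch of the technique, not the tutorial's Mathematica notebooks.
    import numpy as np

    def pls1(X, y, n_comp):
        X = X - X.mean(0)
        y = y - y.mean()
        W, P, q = [], [], []
        for _ in range(n_comp):
            w = X.T @ y
            w /= np.linalg.norm(w)          # weight vector
            t = X @ w                       # score vector
            tt = t @ t
            p = X.T @ t / tt                # loading vector
            c = y @ t / tt
            X = X - np.outer(t, p)          # deflate predictors
            y = y - c * t                   # deflate response
            W.append(w); P.append(p); q.append(c)
        W, P, q = np.array(W).T, np.array(P).T, np.array(q)
        return W @ np.linalg.solve(P.T @ W, q)   # regression coefficients

    rng = np.random.default_rng(7)
    X = rng.standard_normal((100, 8))
    beta_true = np.array([1.5, -2.0, 0, 0, 0.5, 0, 0, 0])
    y = X @ beta_true + 0.1 * rng.standard_normal(100)
    print("recovered coefficients:", np.round(pls1(X, y, n_comp=3), 2))
    ```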

  4. Recursive least square vehicle mass estimation based on acceleration partition

    Science.gov (United States)

    Feng, Yuan; Xiong, Lu; Yu, Zhuoping; Qu, Tong

    2014-05-01

    Vehicle mass is an important parameter in vehicle dynamics control systems. Although many algorithms have been developed for the estimation of mass, none of them have yet taken into account the different types of resistance that occur under different conditions. This paper proposes a vehicle mass estimator. The estimator incorporates road gradient information in the longitudinal accelerometer signal, and it removes the road grade from the longitudinal dynamics of the vehicle. Then, two different recursive least-squares method (RLSM) schemes are proposed to estimate the driving resistance and the mass independently, based on the acceleration partition under different conditions. A 6-DOF dynamic model of a four in-wheel-motor vehicle is built to assist in the design of the algorithm and in the setting of the parameters. The acceleration limits are determined so as to not only reduce the estimation error but also ensure enough data for the resistance estimation and mass estimation in some critical situations. A modification of the algorithm to improve the result of the mass estimation is also discussed. Experimental data on asphalt road, plastic runway, gravel road and sloping roads are used to validate the estimation algorithm. The adaptability of the algorithm is improved by using data collected under several critical operating conditions. The experimental results show the error of the estimation process to be within 2.6%, which indicates that the algorithm can estimate mass with great accuracy regardless of road surface and gradient changes and that it may be valuable in engineering applications.

  5. Götterdämmerung over total least squares

    Science.gov (United States)

    Malissiovas, G.; Neitzel, F.; Petrovic, S.

    2016-06-01

    The traditional way of solving non-linear least squares (LS) problems in Geodesy includes a linearization of the functional model and iterative solution of a nonlinear equation system. Direct solutions for a class of nonlinear adjustment problems have been presented by the mathematical community since the 1980s, based on total least squares (TLS) algorithms and involving the use of singular value decomposition (SVD). However, direct LS solutions for this class of problems have also been developed in the past by geodesists. In this contribution we attempt to establish a systematic approach for direct solutions of non-linear LS problems from a "geodetic" point of view. Therefore, four non-linear adjustment problems are investigated: the fit of a straight line to given points in 2D and in 3D, the fit of a plane in 3D and the 2D symmetric similarity transformation of coordinates. For all these problems a direct LS solution is derived using the same methodology, by transforming the problem to the solution of a quadratic or cubic algebraic equation. Furthermore, by applying TLS all four problems can be transformed to solving the respective characteristic eigenvalue equations. It is demonstrated that the algebraic equations obtained in this way are identical with those resulting from the LS approach. As a by-product of this research, two novel approaches are presented for the TLS solutions of fitting a straight line in 3D and of the 2D similarity transformation of coordinates. The derived direct solutions of the four considered problems are illustrated on examples from the literature and also numerically compared to published iterative solutions.
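
    The simplest member of this problem class has a well-known direct TLS solution: fit a straight line to 2D points with errors in both coordinates by centring the data and taking the right singular vector with the smallest singular value as the line normal. A minimal sketch, with synthetic data:

    ```python
    # Total least squares (orthogonal) straight-line fit via SVD:
    # the smallest right singular vector of the centred data is the
    # normal of the best-fitting line.
    import numpy as np

    rng = np.random.default_rng(8)
    t = np.linspace(0, 10, 50)
    pts = np.column_stack((t, 0.7 * t + 2.0))
    pts += 0.2 * rng.standard_normal(pts.shape)   # noise in both coordinates

    centroid = pts.mean(axis=0)
    _, _, Vt = np.linalg.svd(pts - centroid)
    normal = Vt[-1]                               # normal to the best line
    # Line: normal . (p - centroid) = 0  ->  slope and intercept
    slope = -normal[0] / normal[1]
    intercept = centroid[1] - slope * centroid[0]
    print(f"TLS fit: y = {slope:.3f} x + {intercept:.3f}")
    ```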

  6. Spreadsheet for designing valid least-squares calibrations: A tutorial.

    Science.gov (United States)

    Bettencourt da Silva, Ricardo J N

    2016-02-01

    Instrumental methods of analysis are used to define the price of goods, the compliance of products with a regulation, or the outcome of fundamental or applied research. These methods can only play their role properly if the reported information is objective and its quality is fit for the intended use. If measurement results are reported with an adequately small measurement uncertainty, both of these goals are achieved. The evaluation of the measurement uncertainty can be performed by the bottom-up approach, which involves a detailed description of the measurement process, or using a pragmatic top-down approach that quantifies major uncertainty components from global performance data. The bottom-up approach is not used so frequently due to the need to master the quantification of the individual components responsible for random and systematic effects that affect measurement results. This work presents a tutorial that can be easily used by non-experts for the accurate evaluation of the measurement uncertainty of instrumental methods of analysis calibrated using least-squares regressions. The tutorial includes the definition of the calibration interval, the assessment of instrumental response homoscedasticity, the definition of the calibrator preparation procedure required for applying the least-squares regression model, the assessment of instrumental response linearity and the evaluation of measurement uncertainty. The developed measurement model is only applicable in calibration ranges where signal precision is constant. An MS-Excel file is made available to allow easy application of the tutorial. This tool can be useful in cases where top-down approaches cannot produce results with adequately low measurement uncertainty. An example of the application of this tool to the determination of nitrate in water by ion chromatography is presented. PMID:26653439

  7. Uncertainty analysis of pollutant build-up modelling based on a Bayesian weighted least squares approach.

    Science.gov (United States)

    Haddad, Khaled; Egodawatta, Prasanna; Rahman, Ataur; Goonetilleke, Ashantha

    2013-04-01

    Reliable pollutant build-up prediction plays a critical role in the accuracy of urban stormwater quality modelling outcomes. However, water quality data collection is resource demanding compared to streamflow data monitoring, where a greater quantity of data is generally available. Consequently, available water quality datasets span only relatively short time scales, unlike water quantity data. Therefore, the ability to take due consideration of the variability associated with pollutant processes and natural phenomena is constrained. This in turn gives rise to uncertainty in the modelling outcomes, as research has shown that pollutant loadings on catchment surfaces and rainfall within an area can vary considerably over space and time scales. Therefore, the assessment of model uncertainty is an essential element of informed decision making in urban stormwater management. This paper presents the application of a range of regression approaches such as ordinary least squares regression, weighted least squares regression and Bayesian weighted least squares regression for the estimation of uncertainty associated with pollutant build-up prediction using limited datasets. The study outcomes confirmed that the use of ordinary least squares regression with fixed model inputs and limited observational data may not provide realistic estimates. The stochastic nature of the dependent and independent variables needs to be taken into consideration in pollutant build-up prediction. It was found that the use of the Bayesian approach along with the Monte Carlo simulation technique provides a powerful tool, which attempts to make the best use of the available knowledge in prediction and thereby presents a practical solution to counteract the limitations which are otherwise imposed on water quality modelling. PMID:23454702
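
    A minimal sketch of the combination described above, assuming a toy linear build-up model: a conjugate normal prior on the regression coefficients, heteroscedastic Gaussian noise with known per-point variances (the weighting), and Monte Carlo sampling of the posterior for uncertainty intervals. All numbers are illustrative, not the paper's model.

    ```python
    # Bayesian weighted least squares with Monte Carlo uncertainty:
    # conjugate normal prior, known per-point noise variances.
    import numpy as np

    rng = np.random.default_rng(9)
    n = 30
    X = np.column_stack((np.ones(n), rng.uniform(0, 10, n)))
    noise_var = rng.uniform(0.2, 1.0, n)          # assumed known variances
    beta_true = np.array([1.0, 0.4])
    y = X @ beta_true + np.sqrt(noise_var) * rng.standard_normal(n)

    W = np.diag(1.0 / noise_var)                  # weight matrix
    prior_prec = np.eye(2) / 100.0                # weak prior N(0, 100 I)

    post_prec = X.T @ W @ X + prior_prec          # posterior precision
    post_cov = np.linalg.inv(post_prec)
    post_mean = post_cov @ (X.T @ W @ y)

    samples = rng.multivariate_normal(post_mean, post_cov, size=5000)
    lo, hi = np.percentile(samples[:, 1], [2.5, 97.5])
    print(f"slope posterior mean {post_mean[1]:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]")
    ```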

  8. Application of a Bayesian/generalised least-squares method to generate correlations between independent neutron fission yield data

    International Nuclear Information System (INIS)

    Fission product yields are fundamental parameters for several nuclear engineering calculations and in particular for burn-up/activation problems. The impact of their uncertainties was widely studied in the past and evaluations were released, although still incomplete. Recently, the nuclear community expressed the need for full fission yield covariance matrices to produce inventory calculation results that take into account the complete uncertainty data. In this work, we studied and applied a Bayesian/generalised least-squares method for covariance generation, and compared the generated uncertainties to the original data stored in the JEFF-3.1.2 library. Then, we focused on the effect of fission yield covariance information on fission pulse decay heat results for thermal fission of 235U. Calculations were carried out using different codes (ACAB and ALEPH-2) after introducing the new covariance values. Results were compared with those obtained with the uncertainty data currently provided by the library. The uncertainty quantification was performed with the Monte Carlo sampling technique. Indeed, correlations between fission yields strongly affect the statistics of decay heat. (authors)
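
    The generalized least-squares update at the heart of such covariance generation can be written in a few lines: prior values x with covariance C are updated with a constraint y = Gx (with covariance V), which both shifts the values and introduces the off-diagonal correlations. The numbers below are purely illustrative, not evaluated yield data:

    ```python
    # Generic Bayesian / generalized least-squares update of independent
    # prior values with a correlated constraint. Illustrative numbers only.
    import numpy as np

    x = np.array([0.030, 0.045, 0.025])       # prior (independent) yields
    C = np.diag([0.003, 0.004, 0.002]) ** 2   # prior covariance, diagonal

    G = np.ones((1, 3))                        # constraint: yields sum to ~0.1
    y = np.array([0.100])
    V = np.array([[1e-6]])                     # constraint uncertainty

    S = G @ C @ G.T + V
    K = C @ G.T @ np.linalg.inv(S)             # gain matrix
    x_post = x + (K @ (y - G @ x)).ravel()
    C_post = C - K @ G @ C                     # now has off-diagonal terms
    corr = C_post / np.sqrt(np.outer(np.diag(C_post), np.diag(C_post)))
    print("updated yields:", np.round(x_post, 4))
    print("posterior correlations:\n", np.round(corr, 3))
    ```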

  9. Application of the Least Squares Method in Axisymmetric Biharmonic Problems

    Directory of Open Access Journals (Sweden)

    Vasyl Chekurin

    2016-01-01

    An approach for solving axisymmetric biharmonic boundary value problems for a semi-infinite cylindrical domain is developed in this paper. On the lateral surface of the domain, homogeneous Neumann boundary conditions are prescribed. On the remaining part of the domain's boundary, four different sets of biharmonic boundary data are considered. To solve the formulated biharmonic problems, the method of least squares on the boundary, combined with the method of homogeneous solutions, is used. This enables reducing the problems to infinite systems of linear algebraic equations which can be solved with the reduction method. Convergence of the solution obtained with the developed approach is studied numerically on some characteristic examples. The developed approach can be used, in particular, to solve axisymmetric elasticity problems for cylindrical bodies whose heights are equal to or exceed their diameters, when normal and tangential tractions are prescribed on their lateral surface and various types of boundary conditions in stresses, in displacements, or mixed ones are given on the cylinder's end faces.

  10. 3D plane-wave least-squares Kirchhoff migration

    KAUST Repository

    Wang, Xin

    2014-08-05

    A three-dimensional least-squares Kirchhoff migration (LSM) is developed in the prestack plane-wave domain to increase the quality of migration images and the computational efficiency. Due to the limitation of current 3D marine acquisition geometries, a cylindrical-wave encoding is adopted for the narrow-azimuth streamer data. To account for the mispositioning of reflectors due to errors in the velocity model, a regularized LSM is devised so that each plane-wave or cylindrical-wave gather gives rise to an individual migration image, and a regularization term is included to encourage similarity between the migration images of similar encoding schemes. Both synthetic and field results show that: 1) plane-wave or cylindrical-wave encoding LSM can achieve both computational and IO savings compared to shot-domain LSM; however, plane-wave LSM is still about 5 times more expensive than plane-wave migration; 2) the regularized LSM is more robust compared to LSM with one reflectivity model common to all the plane-wave or cylindrical-wave gathers.

  11. Robustness of ordinary least squares in randomized clinical trials.

    Science.gov (United States)

    Judkins, David R; Porter, Kristin E

    2016-05-20

    There has been a series of occasional papers in this journal about semiparametric methods for robust covariate control in the analysis of clinical trials. These methods are fairly easy to apply on currently available computers, but standard software packages do not yet support them with easy option selections. Moreover, these methods can be difficult to explain to practitioners who have only a basic statistical education. There is also a somewhat neglected history demonstrating that ordinary least squares (OLS) is very robust to the types of outcome distribution features that have motivated the newer methods for robust covariate control. We review these two strands of literature and report on some new simulations that demonstrate the robustness of OLS to more extreme normality violations than previously explored. The new simulations involve two strongly leptokurtic outcomes: near-zero binary outcomes and zero-inflated gamma outcomes. Potential examples of such outcomes include, respectively, 5-year survival rates for stage IV cancer and healthcare claim amounts for rare conditions. We find that traditional OLS methods work very well down to very small sample sizes for such outcomes. Under some circumstances, OLS with robust standard errors works well with even smaller sample sizes. Given this literature review and our new simulations, we think that most researchers may comfortably continue using standard OLS software, preferably with robust standard errors. PMID:26694758

  12. Nonlinear Least Squares Methods for Joint DOA and Pitch Estimation

    DEFF Research Database (Denmark)

    Jensen, Jesper Rindom; Christensen, Mads Græsbøll; Jensen, Søren Holdt

    2013-01-01

    In this paper, we consider the problem of joint direction-of-arrival (DOA) and fundamental frequency estimation. Joint estimation enables robust estimation of these parameters in multi-source scenarios where separate estimators may fail. First, we derive the exact and asymptotic Cramér-Rao bounds for the joint estimation problem. Then, we propose a nonlinear least squares (NLS) and an approximate NLS (aNLS) estimator for joint DOA and fundamental frequency estimation. The proposed estimators are maximum likelihood estimators when: 1) the noise is white Gaussian, 2) the environment is anechoic, and 3) the source of interest is in the far-field. Otherwise, the methods still approximately yield maximum likelihood estimates. Simulations on synthetic data show that the proposed methods have similar or better performance than state-of-the-art methods for DOA and fundamental frequency ...

  13. Non-parametric and least squares Langley plot methods

    Science.gov (United States)

    Kiedron, P. W.; Michalsky, J. J.

    2016-01-01

    Langley plots are used to calibrate sun radiometers, primarily for the measurement of the aerosol component of the atmosphere that attenuates (scatters and absorbs) incoming direct solar radiation. In principle, the calibration of a sun radiometer is a straightforward application of the Bouguer-Lambert-Beer law V = V0·exp(−τ·m), where a plot of the logarithm of voltage ln(V) vs. air mass m yields a straight line with intercept ln(V0). This ln(V0) can subsequently be used to solve for τ for any measurement of V and calculation of m. This calibration works well at some high mountain sites, but the application of the Langley plot calibration technique is more complicated at other, more interesting, locales. This paper is concerned with ferreting out calibrations at difficult sites and examining and comparing a number of conventional and non-conventional methods for obtaining successful Langley plots. The 11 techniques discussed indicate that both least squares and various non-parametric techniques produce satisfactory calibrations, with no significant differences among them, when the time series of ln(V0)'s are smoothed and interpolated with median and mean moving window filters.
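
    The basic Langley regression itself is a one-line least-squares fit: regress ln(V) on air mass m and extrapolate to m = 0. The synthetic sketch below is idealized; the paper's point is precisely that real sites need the more robust, filtered variants.

    ```python
    # Basic Langley-plot calibration on synthetic, idealized data:
    # fit ln(V) vs air mass m, extrapolate to m = 0 to recover ln(V0).
    import numpy as np

    rng = np.random.default_rng(10)
    tau, V0 = 0.12, 1.80                       # "true" optical depth and V0
    m = np.linspace(2, 6, 25)                  # morning air masses
    V = V0 * np.exp(-tau * m) * (1 + 0.005 * rng.standard_normal(m.size))

    slope, ln_V0 = np.polyfit(m, np.log(V), 1)
    print(f"estimated tau = {-slope:.4f}, estimated V0 = {np.exp(ln_V0):.4f}")
    ```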

  14. LS-CS: Compressive Sensing on Least Squares Residual

    CERN Document Server

    Vaswani, Namrata

    2009-01-01

    We consider the problem of recursively reconstructing time sequences of sparse signals (with unknown and time-varying sparsity patterns) from a limited number of linear incoherent measurements with additive noise. The signals are sparse in some transform domain referred to as the sparsity basis and the sparsity pattern is assumed to change slowly with time. The idea of our proposed solution, LS-CS-residual (LS-CS), is to replace compressed sensing (CS) on the observation by CS on the least squares (LS) observation residual computed using the previous estimate of the support. We bound the CS-residual error and show that when the number of available measurements is small, the bound is much smaller than that on CS error if the sparsity pattern changes slowly enough. We also obtain conditions for "stability" of LS-CS over time for a simple deterministic signal model of coefficient addition/removal and coefficient magnitude increase/decrease which has bounded signal power. By "stability", we mean that the number o...

  15. River flow time series using least squares support vector machines

    Directory of Open Access Journals (Sweden)

    R. Samsudin

    2011-06-01

    This paper proposes a novel hybrid forecasting model known as GLSSVM, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM). The GMDH is used to determine the useful input variables, which serve as inputs to the LSSVM time series forecasting model. Monthly river flow data from two stations, on the Selangor and Bernam rivers in Selangor state of Peninsular Malaysia, were considered in the development of this hybrid model. The performance of this model was compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA), GMDH and LSSVM models using long-term observations of monthly river flow discharge. The root mean square error (RMSE) and coefficient of correlation (R) are used to evaluate the models' performance. In both cases, the new hybrid model was found to provide more accurate flow forecasts than the other models. The results of the comparison indicate that the new hybrid model is a useful tool and a promising new method for river flow forecasting.

  16. A least squares closure approximation for liquid crystalline polymers

    Science.gov (United States)

    Sievenpiper, Traci Ann

    2011-12-01

    An introduction to existing closure schemes for the Doi-Hess kinetic theory of liquid crystalline polymers is provided. A new closure scheme is devised based on a least squares fit of a linear combination of the Doi, Tsuji-Rey, Hinch-Leal I, and Hinch-Leal II closure schemes. The orientation tensor and rate-of-strain tensor are fit separately using data generated from the kinetic solution of the Smoluchowski equation. The known behavior of the kinetic solution and existing closure schemes at equilibrium is compared with that of the new closure scheme. The performance of the proposed closure scheme in simple shear flow for a variety of shear rates and nematic polymer concentrations is examined, along with that of the four selected existing closure schemes. The flow phase diagram for the proposed closure scheme under the conditions of shear flow is constructed and compared with that of the kinetic solution. The study of the closure scheme is extended to the simulation of nematic polymers in plane Couette cells. The results are compared with existing kinetic simulations for a Landau-deGennes mesoscopic model with the application of a parameterized closure approximation. The proposed closure scheme is shown to produce a reasonable approximation to the kinetic results in the case of simple shear flow and plane Couette flow.

  17. Fast Dating Using Least-Squares Criteria and Algorithms.

    Science.gov (United States)

    To, Thu-Hien; Jung, Matthieu; Lycett, Samantha; Gascuel, Olivier

    2016-01-01

    Phylogenies provide a useful way to understand the evolutionary history of genetic samples, and data sets with more than a thousand taxa are becoming increasingly common, notably with viruses (e.g., human immunodeficiency virus (HIV)). Dating ancestral events is one of the first, essential goals with such data. However, current sophisticated probabilistic approaches struggle to handle data sets of this size. Here, we present very fast dating algorithms, based on a Gaussian model closely related to the Langley-Fitch molecular-clock model. We show that this model is robust to uncorrelated violations of the molecular clock. Our algorithms apply to serial data, where the tips of the tree have been sampled through times. They estimate the substitution rate and the dates of all ancestral nodes. When the input tree is unrooted, they can provide an estimate for the root position, thus representing a new, practical alternative to the standard rooting methods (e.g., midpoint). Our algorithms exploit the tree (recursive) structure of the problem at hand, and the close relationships between least-squares and linear algebra. We distinguish between an unconstrained setting and the case where the temporal precedence constraint (i.e., an ancestral node must be older than its daughter nodes) is accounted for. With rooted trees, the former is solved using linear algebra in linear computing time (i.e., proportional to the number of taxa), while the resolution of the latter, constrained setting, is based on an active-set method that runs in nearly linear time. With unrooted trees the computing time becomes (nearly) quadratic (i.e., proportional to the square of the number of taxa). In all cases, very large input trees (>10,000 taxa) can easily be processed and transformed into time-scaled trees. We compare these algorithms to standard methods (root-to-tip, the r8s version of the Langley-Fitch method, and BEAST). Using simulated data, we show that their estimation accuracy is similar to that ...

  18. Finding a Minimally Informative Dirichlet Prior Distribution Using Least Squares

    International Nuclear Information System (INIS)

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in closed form, and so an approximate beta distribution is used in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial aleatory model for common-cause failure, must be estimated from data that is often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.
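
    To illustrate the flavor of the approach (and only the flavor: the paper defines its own objective and constraints), one can choose Dirichlet parameters by least squares so that each marginal beta distribution matches target summaries. In the hypothetical sketch below, the targets, a mean and a 95th percentile per alpha-factor, are invented numbers:

    ```python
    # Illustrative least-squares choice of Dirichlet parameters: match the
    # marginal beta mean and 95th percentile of each component to targets.
    # The targets and the exact objective are hypothetical stand-ins.
    import numpy as np
    from scipy.optimize import minimize
    from scipy.stats import beta

    target_mean = np.array([0.90, 0.07, 0.03])   # hypothetical means
    target_p95 = np.array([0.98, 0.15, 0.08])    # hypothetical 95th pctiles

    def objective(alpha):
        a0 = alpha.sum()
        mean = alpha / a0                          # Dirichlet marginal means
        p95 = beta.ppf(0.95, alpha, a0 - alpha)    # marginal beta percentiles
        return (np.sum((mean - target_mean) ** 2)
                + np.sum((p95 - target_p95) ** 2))

    res = minimize(objective, x0=np.array([5.0, 1.0, 1.0]),
                   bounds=[(1e-3, None)] * 3, method="L-BFGS-B")
    print("fitted Dirichlet parameters:", np.round(res.x, 3))
    ```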

  19. Finding a minimally informative Dirichlet prior distribution using least squares

    International Nuclear Information System (INIS)

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  20. Finding A Minimally Informative Dirichlet Prior Using Least Squares

    International Nuclear Information System (INIS)

    In a Bayesian framework, the Dirichlet distribution is the conjugate distribution to the multinomial likelihood function, and so the analyst is required to develop a Dirichlet prior that incorporates available information. However, as it is a multiparameter distribution, choosing the Dirichlet parameters is less straightforward than choosing a prior distribution for a single parameter, such as p in the binomial distribution. In particular, one may wish to incorporate limited information into the prior, resulting in a minimally informative prior distribution that is responsive to updates with sparse data. In the case of binomial p or Poisson λ, the principle of maximum entropy can be employed to obtain a so-called constrained noninformative prior. However, even in the case of p, such a distribution cannot be written down in the form of a standard distribution (e.g., beta, gamma), and so a beta distribution is used as an approximation in the case of p. In the case of the multinomial model with parametric constraints, the approach of maximum entropy does not appear tractable. This paper presents an alternative approach, based on constrained minimization of a least-squares objective function, which leads to a minimally informative Dirichlet prior distribution. The alpha-factor model for common-cause failure, which is widely used in the United States, is the motivation for this approach, and is used to illustrate the method. In this approach to modeling common-cause failure, the alpha-factors, which are the parameters in the underlying multinomial model for common-cause failure, must be estimated from data that are often quite sparse, because common-cause failures tend to be rare, especially failures of more than two or three components, and so a prior distribution that is responsive to updates with sparse data is needed.

  1. The moving-least-squares-particle hydrodynamics method (MLSPH)

    Energy Technology Data Exchange (ETDEWEB)

    Dilts, G. [Los Alamos National Lab., NM (United States)

    1997-12-31

    An enhancement of the smooth-particle hydrodynamics (SPH) method has been developed using the moving-least-squares (MLS) interpolants of Lancaster and Salkauskas which simultaneously relieves the method of several well-known undesirable behaviors, including spurious boundary effects, inaccurate strain and rotation rates, pressure spikes at impact boundaries, and the infamous tension instability. The classical SPH method is derived in a novel manner by means of a Galerkin approximation applied to the Lagrangian equations of motion for continua using as basis functions the SPH kernel function multiplied by the particle volume. This derivation is then modified by simply substituting the MLS interpolants for the SPH Galerkin basis, taking care to redefine the particle volume and mass appropriately. The familiar SPH kernel approximation is now equivalent to a collocation-Galerkin method. Both classical conservative and recent non-conservative formulations of SPH can be derived and emulated. The non-conservative forms can be made conservative by adding terms that are zero within the approximation at the expense of boundary-value considerations. The familiar Monaghan viscosity is used. Test calculations of uniformly expanding fluids, the Swegle example, spinning solid disks, impacting bars, and spherically symmetric flow illustrate the superiority of the technique over SPH. In all cases it is seen that the marvelous ability of the MLS interpolants to add up correctly everywhere civilizes the noisy, unpredictable nature of SPH. Being a relatively minor perturbation of the SPH method, it is easily retrofitted into existing SPH codes. On the down side, computational expense at this point is significant, the Monaghan viscosity undoes the contribution of the MLS interpolants, and one-point quadrature (collocation) is not accurate enough. Solutions to these difficulties are being pursued vigorously.

  2. Comparing implementations of penalized weighted least-squares sinogram restoration

    Energy Technology Data Exchange (ETDEWEB)

    Forthmann, Peter; Koehler, Thomas; Defrise, Michel; La Riviere, Patrick [Philips Research Europe, Roentgenstrasse 24-26, 22315 Hamburg (Germany); Department of Nuclear Medicine, Vrije Universiteit Brussel, AZ-VUB, B-1090 Brussels (Belgium); Department of Radiology, University of Chicago, 5841 South Maryland Avenue, MC-2026, Chicago, Illinois 60637 (United States)]

    2010-11-15

    Purpose: A CT scanner measures the energy that is deposited in each channel of a detector array by x rays that have been partially absorbed on their way through the object. The measurement process is complex and quantitative measurements are always and inevitably associated with errors, so CT data must be preprocessed prior to reconstruction. In recent years, the authors have formulated CT sinogram preprocessing as a statistical restoration problem in which the goal is to obtain the best estimate of the line integrals needed for reconstruction from the set of noisy, degraded measurements. The authors have explored both penalized Poisson likelihood (PL) and penalized weighted least-squares (PWLS) objective functions. At low doses, the authors found that the PL approach outperforms PWLS in terms of resolution-noise tradeoffs, but at standard doses they perform similarly. The PWLS objective function, being quadratic, is more amenable to computational acceleration than the PL objective. In this work, the authors develop and compare two different methods for implementing PWLS sinogram restoration with the hope of improving computational performance relative to PL in the standard-dose regime. Sinogram restoration remains significant in the standard-dose regime since it can outperform standard approaches and allows for correction of effects that are not usually modeled in standard CT preprocessing. Methods: The authors have explored and compared two implementation strategies for PWLS sinogram restoration: (1) a direct matrix-inversion strategy based on the closed-form solution to the PWLS optimization problem and (2) an iterative approach based on the conjugate-gradient algorithm. Obtaining optimal performance from each strategy required modifying the naive off-the-shelf implementations of the algorithms to exploit the particular symmetry and sparseness of the sinogram-restoration problem. For the closed-form approach, the authors subdivided the large matrix ...
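
    Because the PWLS objective is quadratic, its minimizer solves a single linear system, which is what makes conjugate-gradient implementations attractive. The 1-D denoising analogue below, with assumed weights and a finite-difference roughness penalty, sketches that strategy; it is not the authors' CT code.

    ```python
    # 1-D analogue of PWLS restoration: minimizing
    #   (x - y)^T W (x - y) + beta * ||D x||^2
    # leads to the linear system (W + beta D^T D) x = W y, solved with CG.
    import numpy as np
    from scipy import sparse
    from scipy.sparse.linalg import cg

    rng = np.random.default_rng(11)
    n = 200
    truth = np.sin(np.linspace(0, 3 * np.pi, n))
    noise_var = 0.05 + 0.05 * rng.uniform(size=n)   # per-sample variances
    y = truth + np.sqrt(noise_var) * rng.standard_normal(n)

    W = sparse.diags(1.0 / noise_var)               # statistical weights
    D = sparse.diags([-1.0, 1.0], [0, 1], shape=(n - 1, n))  # differences
    beta_reg = 5.0

    A = (W + beta_reg * (D.T @ D)).tocsc()
    x_rest, info = cg(A, W @ y)                     # conjugate gradients
    assert info == 0, "CG did not converge"
    print("restored RMS error:", np.sqrt(np.mean((x_rest - truth) ** 2)))
    ```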

  3. Modeling geochemical datasets for source apportionment: Comparison of least square regression and inversion approaches.

    Digital Repository Service at National Institute of Oceanography (India)

    Tripathy, G.R.; Das, Anirban.


  4. Frequency domain analysis and synthesis of lumped parameter systems using nonlinear least squares techniques

    Science.gov (United States)

    Hays, J. R.

    1969-01-01

    Lumped-parameter system models are simplified and computationally advantageous representations of linear systems in the frequency domain. A nonlinear least-squares computer program finds the least-squares best estimate for any number of parameters in an arbitrarily complicated model.

  5. Linear least squares compartmental-model-independent parameter identification in PET

    International Nuclear Information System (INIS)

    A simplified approach involving linear-regression straight-line parameter fitting of dynamic scan data is developed for both specific and nonspecific models. Where compartmental-model topologies apply, the measured activity may be expressed in terms of its integrals, plasma activity and plasma integrals, all in a linear expression with macroparameters as coefficients. Multiple linear regression, as in spreadsheet software, determines the parameters for best data fits. Positron emission tomography (PET)-acquired gray-matter images in a dynamic scan are analyzed both by this method and by traditional iterative nonlinear least squares. Both patient and simulated data were used. The regression and traditional methods are in expected agreement. Monte-Carlo simulations evaluate parameter standard deviations due to data noise, and much smaller noise-induced biases. Unique straight-line graphical displays permit visualizing data influences on various macroparameters as changes in slopes. Advantages of regression fitting are: simplicity, speed, ease of implementation in spreadsheet software, avoiding risks of convergence failures or false solutions in iterative least squares, and providing various visualizations of the uptake process by straight-line graphical displays. Multiparameter model-independent analyses of less well understood systems are also made possible.

  6. From least squares to multilevel modeling: A graphical introduction to Bayesian inference

    Science.gov (United States)

    Loredo, Thomas J.

    2016-01-01

    This tutorial presentation will introduce some of the key ideas and techniques involved in applying Bayesian methods to problems in astrostatistics. The focus will be on the big picture: understanding the foundations (interpreting probability, Bayes's theorem, the law of total probability and marginalization), making connections to traditional methods (propagation of errors, least squares, chi-squared, maximum likelihood, Monte Carlo simulation), and highlighting problems where a Bayesian approach can be particularly powerful (Poisson processes, density estimation and curve fitting with measurement error). The "graphical" component of the title reflects an emphasis on pictorial representations of some of the math, but also on the use of graphical models (multilevel or hierarchical models) for analyzing complex data. Code for some examples from the talk will be available to participants, in Python and in the Stan probabilistic programming language.

  7. Fitting of two and three variate polynomials from experimental data through the least squares method

    International Nuclear Information System (INIS)

    Obtaining polynomial fits from observational data in two and three dimensions is an interesting and practical task. Such an arduous problem suggests the development of an automatic code. The main novelty we provide lies in the generalization of the classical least squares method in three FORTRAN 77 programs usable in any sampling problem. Furthermore, we introduce the orthogonal 2D-Legendre functions in the fitting process. These FORTRAN 77 programs are equipped with options to calculate the standard indicators of approximation quality, generalized to two and three dimensions (nonlinear correlation factor, confidence intervals, quadratic mean error, and so on). The aim of this paper is to rectify the absence of fitting algorithms for more than one independent variable in mathematical libraries.
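
    The core of such a fit is building a design matrix of two-variable monomials and solving the least-squares problem; the sketch below does this with ordinary monomials rather than the 2D Legendre basis the programs use, and with hypothetical data:

    ```python
    # Two-variable polynomial least-squares fit via an explicit design
    # matrix (ordinary monomials; a stand-in for the 2D Legendre basis).
    import numpy as np
    from itertools import combinations_with_replacement

    def poly2d_design(x, y, degree):
        """Columns: all monomials x^i * y^j with i + j <= degree."""
        cols = [np.ones_like(x)]
        for d in range(1, degree + 1):
            for terms in combinations_with_replacement("xy", d):
                i = terms.count("x")
                cols.append(x ** i * y ** (d - i))
        return np.column_stack(cols)

    rng = np.random.default_rng(12)
    x, y = rng.uniform(-1, 1, (2, 300))
    z = 1 + 2 * x - 3 * x * y + 0.5 * y**2 + 0.05 * rng.standard_normal(300)

    A = poly2d_design(x, y, degree=2)
    coef, *_ = np.linalg.lstsq(A, z, rcond=None)
    print("fitted coefficients:", np.round(coef, 3))  # order: 1, x, y, x^2, xy, y^2
    ```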

  8. LSFODF: a generalized nonlinear least-squares fitting program for use with ORELA ODF files

    International Nuclear Information System (INIS)

    The Fortran-10 program LSFODF has been written on the ORELA PDP-10 in order to perform non-linear least-squares curve fitting with user supplied functions and derivatives on data which can be read directly from ORELA-data-format (ODF) files. LSFODF can be used with any user supplied function and derivatives; has its storage requirements specified in this function; has P-search and eta-search capabilities; and can output the input data and fitted curve in an ODF file which then can be manipulated and plotted with the existing ORELA library of ODF programs. A description of the fitting formalism, input instructions, five test cases, and a program listing are given

  9. Least-squares dual characterization for ROI assessment in emission tomography

    International Nuclear Information System (INIS)

    Our aim is to describe an original method for estimating the statistical properties of regions of interest (ROIs) in emission tomography. Drawing upon the work of Louis on the approximate inverse, we propose a dual formulation of the ROI estimation problem to derive the ROI activity and variance directly from the measured data, without any image reconstruction. The method requires the definition of an ROI characteristic function that can be extracted from a co-registered morphological image. This characteristic function can be smoothed to optimize the resolution-variance tradeoff. An iterative procedure is detailed for the solution of the dual problem in the least-squares sense (least-squares dual (LSD) characterization), and a linear extrapolation scheme is described to compensate for the sampling partial volume effect and reduce the estimation bias (LSD-ex). LSD and LSD-ex are compared with classical ROI estimation using pixel summation after image reconstruction and with Huesman's method. For this comparison, we used Monte Carlo simulations (GATE simulation tool) of 2D PET data of a Hoffman brain phantom containing three small uniform high-contrast ROIs and a large non-uniform low-contrast ROI. Our results show that the performance of LSD characterization is at least as good as that of the classical methods in terms of root mean square (RMS) error. For the three small tumor regions, LSD-ex allows a reduction in the estimation bias of up to 14%, resulting in a reduction in the RMS error of up to 8.5%, compared with the optimal classical estimation. For the large non-specific region, LSD with appropriate smoothing could intuitively and efficiently handle the resolution-variance tradeoff. (paper)

  10. NEGATIVE NORM LEAST-SQUARES METHODS FOR THE INCOMPRESSIBLE MAGNETOHYDRODYNAMIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Gao Shaoqin; Duan Huoyuan

    2008-01-01

    The purpose of this article is to develop and analyze least-squares approximations for the incompressible magnetohydrodynamic equations. The major advantage of the least-squares finite element method is that it is not subject to the so-called Ladyzhenskaya-Babuska-Brezzi (LBB) condition. The authors employ least-squares functionals which involve a discrete inner product related to the inner product in H⁻¹(Ω).

  11. Application of Partial Least-Squares Regression Model on Temperature Analysis and Prediction of RCCD

    OpenAIRE

    Yuqing Zhao; Zhenxian Xing

    2013-01-01

    This study, based on the temperature monitoring data of the Jiangya RCCD, uses the principles and methods of partial least-squares regression to analyze and predict the temperature variation of RCCD. By building a partial least-squares regression model, the multicollinearity of the independent variables is overcome, and an organic combination of multiple linear regression, principal component analysis and canonical correlation analysis is achieved. Compared with the result of a general least-squares regression model, it is more ...

  12. Research on Application of Regression Least Squares Support Vector Machine on Performance Prediction of Hydraulic Excavator

    Directory of Open Access Journals (Sweden)

    Zhan-bo Chen

    2014-01-01

    Full Text Available In order to improve the performance prediction accuracy of the hydraulic excavator, the regression least squares support vector machine (LS-SVM) is applied. First, the mathematical model of the regression LS-SVM is studied; then its algorithm is designed. Finally, a performance prediction simulation of the hydraulic excavator based on the regression LS-SVM is carried out, and the simulation results show that this method correctly predicts the performance trends of the hydraulic excavator.
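    The LS-SVM regression these records rely on reduces to a single linear solve in the dual variables. A minimal sketch follows, assuming invented data and hyperparameters (gamma for regularization, sigma for the RBF kernel width); it is not the authors' excavator model.

    ```python
    # Sketch of least-squares SVM regression with an RBF kernel (invented data).
    import numpy as np

    def lssvm_fit(X, y, gamma=10.0, sigma=0.5):
        # Kernel matrix K_ij = exp(-||x_i - x_j||^2 / (2 sigma^2))
        d2 = ((X[:, None, :] - X[None, :, :]) ** 2).sum(-1)
        K = np.exp(-d2 / (2 * sigma**2))
        n = len(y)
        # LS-SVM dual system: [[0, 1^T], [1, K + I/gamma]] [b, alpha] = [0, y]
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = K + np.eye(n) / gamma
        rhs = np.concatenate([[0.0], y])
        sol = np.linalg.solve(A, rhs)
        return sol[0], sol[1:]  # bias b and dual weights alpha

    def lssvm_predict(Xtr, b, alpha, Xte, sigma=0.5):
        d2 = ((Xte[:, None, :] - Xtr[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * sigma**2)) @ alpha + b

    rng = np.random.default_rng(1)
    X = rng.uniform(-3, 3, (80, 1))
    y = np.sinc(X[:, 0]) + 0.05 * rng.standard_normal(80)
    b, alpha = lssvm_fit(X, y)
    print(lssvm_predict(X, b, alpha, np.array([[0.0]]))[0])  # ~ sinc(0) = 1
    ```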

  13. Function Based Nonlinear Least Squares and Application to Jelinski--Moranda Software Reliability Model

    CERN Document Server

    Liu, Jingwei

    2011-01-01

    A function-based nonlinear least squares estimation (FNLSE) method is proposed and investigated for parameter estimation of the Jelinski-Moranda software reliability model. FNLSE extends the potential fitting functions of traditional least squares estimation (LSE) and includes the logarithm-transformed nonlinear least squares estimation (LogLSE) as a special case. A novel power-transformation-based nonlinear least squares estimation (powLSE) is proposed and applied to the parameter estimation of the Jelinski-Moranda model. Solved with the Newton-Raphson method, both LogLSE and powLSE are applied to mean-time-between-failures (MTBF) prediction on six standard software failure-time data sets. The experimental results demonstrate the effectiveness of powLSE with an optimal power index compared with classical least-squares estimation (LSE), maximum likelihood estimation (MLE), and LogLSE, in terms of the recursive relative error (RE) index and the Braun statistic.
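    As a hedged illustration of the log-transformed variant (LogLSE) named above, the sketch below fits the Jelinski-Moranda MTBF curve by nonlinear least squares in log space; the failure times are invented, and the authors' power transformation (powLSE) is not reproduced.

    ```python
    # LogLSE-style fit of the Jelinski-Moranda model (invented failure data).
    import numpy as np
    from scipy.optimize import least_squares

    t = np.array([7., 11., 8., 10., 15., 22., 17., 33., 51., 48.])  # inter-failure times
    i = np.arange(1, len(t) + 1)

    def residuals(p):
        phi, N = p
        # JM model: expected MTBF before failure i is 1 / (phi * (N - i + 1)),
        # so in log space: ln t_i ~ -ln phi - ln(N - i + 1)
        return np.log(t) + np.log(phi) + np.log(N - i + 1)

    fit = least_squares(residuals, x0=[0.01, 20.0],
                        bounds=([1e-8, len(t)], [np.inf, np.inf]))
    phi, N = fit.x
    print(f"failure rate per fault phi = {phi:.4f}, estimated total faults N = {N:.1f}")
    ```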

  14. Least-squares methods involving the H{sup -1} inner product

    Energy Technology Data Exchange (ETDEWEB)

    Pasciak, J.

    1996-12-31

    Least-squares methods are being shown to be an effective technique for the solution of elliptic boundary value problems. However, the methods differ depending on the norms in which they are formulated. For certain problems, it is much more natural to consider least-squares functionals involving the H{sup -1} norm. Such norms give rise to improved convergence estimates and better approximation to problems with low regularity solutions. In addition, fewer new variables need to be added and less stringent boundary conditions need to be imposed. In this talk, I will describe some recent developments involving least-squares methods utilizing the H{sup -1} inner product.

  15. Multilevel solvers of first-order system least-squares for Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Lai, Chen-Yao G. [National Chung Cheng Univ., Chia-Yi (Taiwan, Province of China)

    1996-12-31

    Recently, the use of the first-order system least-squares principle for the approximate solution of Stokes problems has been extensively studied by Cai, Manteuffel, and McCormick. In this paper, we study multilevel solvers of the first-order system least-squares method for the generalized Stokes equations based on the velocity-vorticity-pressure formulation in three dimensions. The least-squares functional is defined as the sum of the L{sup 2}-norms of the residuals, weighted appropriately by the Reynolds number. We develop convergence analysis for additive and multiplicative multilevel methods applied to the resulting discrete equations.

  16. An Effective Hybrid Artificial Bee Colony Algorithm for Nonnegative Linear Least Squares Problems

    Directory of Open Access Journals (Sweden)

    Xiangyu Kong

    2014-07-01

    Full Text Available An effective hybrid artificial bee colony algorithm is proposed in this paper for nonnegative linear least squares problems. To further improve the performance of the algorithm, an orthogonal initialization method is employed to generate the initial swarm. Furthermore, to balance the exploration and exploitation abilities, a new search mechanism is designed. The performance of this algorithm is verified using 27 benchmark functions and 5 nonnegative linear least squares test problems, and comparative analyses are given between the proposed algorithm and other swarm intelligence algorithms. Numerical results demonstrate that the proposed algorithm displays high performance compared with other algorithms for global optimization problems and nonnegative linear least squares problems.
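    For comparison with metaheuristic solvers, a deterministic baseline for the same problem class is the active-set NNLS routine in SciPy; the toy system below is an invented example, not one of the paper's test problems.

    ```python
    # Deterministic NNLS baseline (invented toy system, assumed for illustration).
    import numpy as np
    from scipy.optimize import nnls

    A = np.array([[1.0, 0.5],
                  [0.2, 1.0],
                  [1.0, 1.0]])
    b = np.array([1.0, 0.8, 1.5])
    x, rnorm = nnls(A, b)   # solves min ||Ax - b|| subject to x >= 0
    print("x >= 0 solution:", x, "residual norm:", rnorm)
    ```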

  17. Recursive least squares method of regression coefficients estimation as a special case of Kalman filter

    Science.gov (United States)

    Borodachev, S. M.

    2016-06-01

    A simple derivation of the recursive least squares (RLS) equations is given as a special case of Kalman filter estimation of a constant system state under changing observation conditions. A numerical example illustrates the application of RLS to the multicollinearity problem.
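    The recursion itself is compact. The sketch below, on invented regression data, applies the standard RLS update in Kalman-gain form and checks it against the batch least-squares solution, mirroring the equivalence this record describes.

    ```python
    # Standard RLS recursion checked against batch least squares (invented data).
    import numpy as np

    rng = np.random.default_rng(2)
    n, p = 100, 3
    A = rng.standard_normal((n, p))
    y = A @ np.array([1.0, -2.0, 0.5]) + 0.1 * rng.standard_normal(n)

    theta = np.zeros(p)
    P = 1e6 * np.eye(p)          # large initial covariance ~ diffuse prior
    for a_k, y_k in zip(A, y):
        # Kalman-gain form of the RLS update for a constant state
        k = P @ a_k / (1.0 + a_k @ P @ a_k)
        theta = theta + k * (y_k - a_k @ theta)
        P = P - np.outer(k, a_k @ P)

    batch, *_ = np.linalg.lstsq(A, y, rcond=None)
    print(np.allclose(theta, batch, atol=1e-3))  # recursive ~ batch solution
    ```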

  18. Methodology and theory for partial least squares applied to functional data

    CERN Document Server

    Delaigle, Aurore; 10.1214/11-AOS958

    2012-01-01

    The partial least squares procedure was originally developed to estimate the slope parameter in multivariate parametric models. More recently it has gained popularity in the functional data literature. There, the partial least squares estimator of slope is either used to construct linear predictive models, or as a tool to project the data onto a one-dimensional quantity that is employed for further statistical analysis. Although the partial least squares approach is often viewed as an attractive alternative to projections onto the principal component basis, its properties are less well known than those of the latter, mainly because of its iterative nature. We develop an explicit formulation of partial least squares for functional data, which leads to insightful results and motivates new theory, demonstrating consistency and establishing convergence rates.

  19. Least-squares finite element discretizations of neutron transport equations in 3 dimensions

    Energy Technology Data Exchange (ETDEWEB)

    Manteuffel, T.A. [Univ. of Colorado, Boulder, CO (United States); Ressel, K.J. [Interdisciplinary Project Center for Supercomputing, Zurich (Switzerland); Starke, G. [Universitaet Karlsruhe (Germany)

    1996-12-31

    The least-squares finite element framework for the neutron transport equation introduced in earlier work is based on the minimization of a least-squares functional applied to the properly scaled neutron transport equation. Here we report on some practical aspects of this approach for neutron transport calculations in three space dimensions. The systems of partial differential equations resulting from a P{sub 1} and P{sub 2} approximation of the angular dependence are derived. In the diffusive limit, the system is essentially a Poisson equation for the zeroth moment and has a divergence structure for the set of moments of order 1. One of the key features of the least-squares approach is that it produces a posteriori error bounds. We report on the numerical results obtained for the minimum of the least-squares functional augmented by an additional boundary term, using trilinear finite elements on a uniform tessellation into cubes.

  20. Iterative least-squares solvers for the Navier-Stokes equations

    Energy Technology Data Exchange (ETDEWEB)

    Bochev, P. [Univ. of Texas, Arlington, TX (United States)

    1996-12-31

    In recent years, finite element methods of least-squares type have attracted considerable attention from both mathematicians and engineers. This interest has been motivated, to a large extent, by several valuable analytic and computational properties of least-squares variational principles. In particular, finite element methods based on such principles circumvent the Ladyzhenskaya-Babuska-Brezzi condition and lead to symmetric and positive definite algebraic systems. Thus, it is not surprising that the numerical solution of fluid flow problems has been among the most promising and successful applications of least-squares methods. In this context, least-squares methods offer significant theoretical and practical advantages in algorithmic design, which makes the resulting methods suitable, among other things, for large-scale numerical simulations.

  1. Adaptive slab laser beam quality improvement using a weighted least-squares reconstruction algorithm.

    Science.gov (United States)

    Chen, Shanqiu; Dong, LiZhi; Chen, XiaoJun; Tan, Yi; Liu, Wenjin; Wang, Shuai; Yang, Ping; Xu, Bing; Ye, YuTang

    2016-04-10

    Adaptive optics is an important technology for improving beam quality in solid-state slab lasers. However, there are uncorrectable aberrations in partial areas of the beam. The criterion of the conventional least-squares reconstruction method makes zones with small aberrations insensitive and hinders those zones from being further corrected. In this paper, a weighted least-squares reconstruction method is proposed to improve the relative sensitivity of zones with small aberrations and to further improve beam quality. Relatively small weights are applied to the zones with large residual aberrations. Comparison of the results shows that peak intensity in the far field improved from 1242 analog-digital units (ADU) to 2248 ADU, and beam quality β improved from 2.5 to 2.0. This indicates that the weighted least-squares method outperforms the conventional least-squares reconstruction method when there are large zonal uncorrectable aberrations in the slab laser system. PMID:27139877
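    In generic terms, a weighted least-squares reconstruction solves the normal equations (A^T W A)x = A^T W b. The sketch below, with an invented response matrix and weights, shows how down-weighting selected rows changes the solve; it is not the authors' wavefront reconstructor.

    ```python
    # Weighted least squares via the normal equations (invented A, b, weights).
    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((50, 4))      # assumed response/geometry matrix
    x_true = np.array([0.5, -1.0, 2.0, 0.1])
    b = A @ x_true + 0.05 * rng.standard_normal(50)

    w = np.ones(50)
    w[:10] = 0.1                          # down-weight rows with large residual aberrations
    W = np.diag(w)

    x_wls = np.linalg.solve(A.T @ W @ A, A.T @ W @ b)
    print(x_wls)
    ```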

  2. A window least squares algorithm for statistical noise smoothing of 2D-ACAR data

    International Nuclear Information System (INIS)

    Taking into account a number of basic features of the histograms of two-dimensional angular correlation of the positron annihilation radiation (2D-ACAR), a window least squares technique for statistical noise smoothing is proposed. (author). 15 refs

  3. Safety Monitoring of a Super-High Dam Using Optimal Kernel Partial Least Squares

    OpenAIRE

    Hao Huang; Bo Chen; Chungao Liu

    2015-01-01

    Considering the complex nonlinearity and multiple response variables characteristic of a super-high dam, the kernel partial least squares (KPLS) method, a strongly nonlinear multivariate analysis method, is introduced into the field of dam safety monitoring for the first time. A universal unified optimization algorithm is designed to select the key parameters of the KPLS method and obtain the optimal kernel partial least squares (OKPLS). Then, OKPLS is used to establish a strongly nonlinear m...

  4. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    Energy Technology Data Exchange (ETDEWEB)

    Jiang, Lijian, E-mail: ljjiang@hnu.edu.cn; Li, Xinping, E-mail: exping@126.com

    2015-08-01

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the existence of complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into some subdomains to obtain adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in

  5. Imposing Observation-Varying Equality Constraints Using Generalised Restricted Least Squares

    OpenAIRE

    Dr Alicia Rambaldi; Dr Chris O'Donnell; Doran, Howard E.

    2003-01-01

    Linear equality restrictions derived from economic theory are frequently observation-varying. Except in special cases, Restricted Least Squares (RLS) cannot be used to impose such restrictions without either underconstraining or overconstraining the parameter space. We solve the problem by developing a new estimator that collapses to RLS in cases where the restrictions are observation-invariant. We derive some theoretical properties of our so-called Generalised Restricted Least Squares (GRLS)...

  6. Consistency of the structured total least squares estimator in a multivariate errors-in-variables model

    OpenAIRE

    Kukush, A.; I. Markovsky; Van Huffel, S.

    2005-01-01

    The structured total least squares estimator, defined via a constrained optimization problem, is a generalization of the total least squares estimator when the data matrix and the applied correction satisfy given structural constraints. In the paper, an affine structure with additional assumptions is considered. In particular, Toeplitz and Hankel structured, noise free and unstructured blocks are allowed simultaneously in the augmented data matrix. An equivalent optimization problem is derive...

  7. ON STABLE PERTURBATIONS OF THE STIFFLY WEIGHTED PSEUDOINVERSE AND WEIGHTED LEAST SQUARES PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Mu-sheng Wei

    2005-01-01

    In this paper we study perturbations of the stiffly weighted pseudoinverse (W^{1/2}A)^+ W^{1/2} and the related stiffly weighted least squares problem, where both the matrices A and W are given, with W positive diagonal and severely stiff. We show that the perturbations to the stiffly weighted pseudoinverse and the related stiffly weighted least squares problem are stable if and only if the perturbed matrices Â = A + δA satisfy several row-rank-preserving conditions.

  8. SUPERCONVERGENCE OF LEAST-SQUARES MIXED FINITE ELEMENTS FOR ELLIPTIC PROBLEMS ON TRIANGULATION

    Institute of Scientific and Technical Information of China (English)

    陈艳萍; 杨菊娥

    2003-01-01

    In this paper, we present the least-squares mixed finite element method and investigate superconvergence phenomena for second order elliptic boundary-value problems over triangulations. On the basis of the L^2-projection and some mixed finite element projections, we obtain the superconvergence result of least-squares mixed finite element solutions. This error estimate indicates an accuracy of O(h^{3/2}) if the lowest order Raviart-Thomas elements are employed.

  9. Multi-element least square HDMR methods and their applications for stochastic multiscale model reduction

    International Nuclear Information System (INIS)

    Stochastic multiscale modeling has become a necessary approach to quantify uncertainty and characterize multiscale phenomena for many practical problems, such as flows in stochastic porous media. The numerical treatment of stochastic multiscale models can be very challenging owing to the existence of complex uncertainty and multiple physical scales in the models. To address this difficulty efficiently, we construct a computational reduced model. To this end, we propose a multi-element least square high-dimensional model representation (HDMR) method, through which the random domain is adaptively decomposed into a few subdomains, and a local least square HDMR is constructed in each subdomain. These local HDMRs are represented by a finite number of orthogonal basis functions defined in low-dimensional random spaces. The coefficients in the local HDMRs are determined using least square methods. We paste all the local HDMR approximations together to form a global HDMR approximation. To further reduce computational cost, we present a multi-element reduced least-square HDMR, which improves both efficiency and approximation accuracy under certain conditions. To effectively treat heterogeneity properties and multiscale features in the models, we integrate multiscale finite element methods with multi-element least-square HDMR for stochastic multiscale model reduction. This approach significantly reduces the original model's complexity in both the resolution of the physical space and the high-dimensional stochastic space. We analyze the proposed approach, and provide a set of numerical experiments to demonstrate the performance of the presented model reduction techniques. - Highlights: • Multi-element least square HDMR is proposed to treat stochastic models. • Random domain is adaptively decomposed into some subdomains to obtain adaptive multi-element HDMR. • Least-square reduced HDMR is proposed to enhance computation efficiency and approximation accuracy in

  10. Shape constrained kernel-weighted least squares: Application to production function estimation for Chilean manufacturing industries

    OpenAIRE

    Yagi, Daisuke; Johnson, Andrew L.; Kuosmanen, Timo

    2016-01-01

    Two approaches to nonparametric regression are local averaging and shape constrained regression. In this paper we examine a novel way to impose shape constraints on a local linear kernel estimator. The proposed approach is referred to as Shape Constrained Kernel-weighted Least Squares (SCKLS). We prove consistency of the SCKLS estimator and show that SCKLS is a generalization of Convex Nonparametric Least Squares (CNLS). We compare the performance of three estimators, SCKLS, CNLS, and Constra...

  11. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    OpenAIRE

    Spackman, K. A.

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates...

  12. Improvements to the Levenberg-Marquardt algorithm for nonlinear least-squares minimization

    OpenAIRE

    Transtrum, Mark K.; Sethna, James P.

    2012-01-01

    When minimizing a nonlinear least-squares function, the Levenberg-Marquardt algorithm can suffer from slow convergence, particularly when it must navigate a narrow canyon en route to a best fit. On the other hand, when the least-squares function is very flat, the algorithm may easily become lost in parameter space. We introduce several improvements to the Levenberg-Marquardt algorithm in order to improve both its convergence speed and its robustness to initial parameter guesses. We update the u...
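    A baseline (without the paper's proposed improvements) is the MINPACK Levenberg-Marquardt implementation exposed by scipy.optimize.least_squares; the exponential-decay data below are invented, not from the paper.

    ```python
    # Baseline Levenberg-Marquardt fit with SciPy (invented exponential-decay data).
    import numpy as np
    from scipy.optimize import least_squares

    rng = np.random.default_rng(4)
    t = np.linspace(0, 5, 40)
    y = 2.0 * np.exp(-1.3 * t) + 0.02 * rng.standard_normal(t.size)

    def residuals(p):
        a, k = p
        return a * np.exp(-k * t) - y

    fit = least_squares(residuals, x0=[1.0, 1.0], method="lm")
    print("a, k =", fit.x)    # should be near (2.0, 1.3)
    ```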

  13. Solving method of generalized nonlinear dynamic least squares for data processing in building of digital mine

    Institute of Scientific and Technical Information of China (English)

    TAO Hua-xue (陶华学); GUO Jin-yun (郭金运)

    2003-01-01

    Data are very important for building the digital mine. The data come from many sources and have different types and temporal states. Relations between one class of data and another, or between the data and unknown parameters, are often nonlinear. The unknown parameters are non-random or random, and the random parameters often vary dynamically with time. It is therefore neither accurate nor reliable to process the data for building the digital mine with the classical least squares method or the common nonlinear least squares method. A generalized nonlinear dynamic least squares method for processing data in building the digital mine is therefore put forward, and the corresponding mathematical model is given. The generalized nonlinear least squares problem is more complex than the common nonlinear least squares problem, and its solution is more difficult to obtain because the dimensions of the data and parameters are larger. So a new solution model and method are put forward to solve the generalized nonlinear dynamic least squares problem. In fact, the problem can be converted into two sub-problems, each of which has a single variable; that is, a complex problem can be separated and then solved. The dimension of the unknown parameters can thus be reduced by half, which simplifies the original high-dimensional equations. The method lessens the computational load and opens up a new way to process the data in building the digital mine, where the data have many sources, different types, and multiple temporal states.

  14. Application of the Polynomial-Based Least Squares and Total Least Squares Models for the Attenuated Total Reflection Fourier Transform Infrared Spectra of Binary Mixtures of Hydroxyl Compounds.

    Science.gov (United States)

    Shan, Peng; Peng, Silong; Zhao, Yuhui; Tang, Liang

    2016-03-01

    An analysis of binary mixtures of hydroxyl compounds by attenuated total reflection Fourier transform infrared spectroscopy (ATR FT-IR) and classical least squares (CLS) yields large model errors due to the presence of unmodeled components such as H-bonded components. To accommodate these spectral variations, polynomial-based least squares (LSP) and polynomial-based total least squares (TLSP) are proposed to capture the nonlinear absorbance-concentration relationship. LSP assumes that only absorbance noise exists, while TLSP takes both absorbance noise and concentration noise into consideration. In addition, based on different solving strategies, two optimization algorithms (the limited-memory Broyden-Fletcher-Goldfarb-Shanno (LBFGS) algorithm and the Levenberg-Marquardt (LM) algorithm) are combined with TLSP, forming two different TLSP versions (termed TLSP-LBFGS and TLSP-LM). The optimum order of each nonlinear model is determined by cross-validation. Comparisons and analyses of the four models are made from two aspects: absorbance prediction and concentration prediction. The results for water-ethanol solution and ethanol-ethyl lactate solution show that LSP, TLSP-LBFGS, and TLSP-LM can, for both absorbance prediction and concentration prediction, obtain smaller root mean square errors of prediction than CLS. Additionally, they can also greatly enhance the accuracy of the estimated pure component spectra. However, from the viewpoint of concentration prediction, the Wilcoxon signed rank test shows that there is no statistically significant difference between each nonlinear model and CLS. PMID:26810185

  15. Least-Squares Regression and Spectral Residual Augmented Classical Least-Squares Chemometric Models for Stability-Indicating Analysis of Agomelatine and Its Degradation Products: A Comparative Study.

    Science.gov (United States)

    Naguib, Ibrahim A; Abdelrahman, Maha M; El Ghobashy, Mohamed R; Ali, Nesma A

    2016-03-01

    Two accurate, sensitive, and selective stability-indicating methods are developed and validated for simultaneous quantitative determination of agomelatine (AGM) and its forced degradation products (Deg I and Deg II), whether in pure form or in pharmaceutical formulations. Partial least-squares regression (PLSR) and spectral residual augmented classical least-squares (SRACLS) are two chemometric models that are subjected to a comparative study through handling UV spectral data in the range 215-350 nm. For proper analysis, a three-factor, four-level experimental design was established, resulting in a training set of 16 mixtures containing different ratios of the interfering species. An independent test set of eight mixtures was used to validate the prediction ability of the suggested models. The results presented indicate the ability of the mentioned multivariate calibration models to analyze AGM, Deg I, and Deg II with high selectivity and accuracy. The analysis results for the pharmaceutical formulations were statistically compared to the reference HPLC method, with no significant differences observed regarding accuracy and precision. The SRACLS model gives results comparable to the PLSR model; however, it keeps the qualitative spectral information of the classical least-squares algorithm for the analyzed components. PMID:26987554

  16. The possibilities of least-squares migration of internally scattered seismic energy

    KAUST Repository

    Aldawood, Ali

    2015-05-26

    Approximate images of the earth’s subsurface structures are usually obtained by migrating surface seismic data. Least-squares migration, under the single-scattering assumption, is used as an iterative linearized inversion scheme to suppress migration artifacts, deconvolve the source signature, mitigate the acquisition fingerprint, and enhance the spatial resolution of migrated images. The problem with least-squares migration of primaries, however, is that it may not be able to enhance events that are mainly illuminated by internal multiples, such as vertical and nearly vertical faults or salt flanks. To alleviate this problem, we adopted a linearized inversion framework to migrate internally scattered energy. We apply the least-squares migration of first-order internal multiples to image subsurface vertical fault planes. Tests on synthetic data demonstrated the ability of the proposed method to resolve vertical fault planes, which are poorly illuminated by the least-squares migration of primaries only. The proposed scheme is robust in the presence of white Gaussian observational noise and in the case of imaging the fault planes using inaccurate migration velocities. Our results suggested that the proposed least-squares imaging, under the double-scattering assumption, still retrieved the vertical fault planes when imaging the scattered data despite a slight defocusing of these events due to the presence of noise or velocity errors.

  17. New Physics Data Libraries for Monte Carlo Transport

    CERN Document Server

    Augelli, M; Kuster, M; Han, M; Kim, C H; Pia, M G; Quintieri, L; Seo, H; Saracco, P; Weidenspointner, G; Zoglauer, A

    2010-01-01

    The role of data libraries as a collaborative tool across Monte Carlo codes is discussed. Some new contributions in this domain are presented; they concern a data library of proton and alpha ionization cross sections, the development in progress of a data library of electron ionization cross sections and proposed improvements to the EADL (Evaluated Atomic Data Library), the latter resulting from an extensive data validation process.

  18. On the equivalence of Kalman filtering and least-squares estimation

    Science.gov (United States)

    Mysen, E.

    2016-07-01

    The Kalman filter is derived directly from the least-squares estimator, and generalized to accommodate stochastic processes with time variable memory. To complete the link between least-squares estimation and Kalman filtering of first-order Markov processes, a recursive algorithm is presented for the computation of the off-diagonal elements of the a posteriori least-squares error covariance. As a result of the algebraic equivalence of the two estimators, both approaches can fully benefit from the advantages implied by their individual perspectives. In particular, it is shown how Kalman filter solutions can be integrated into the normal equation formalism that is used for intra- and inter-technique combination of space geodetic data.

  19. An Improved Moving Least Squares Method for Curve and Surface Fitting

    Directory of Open Access Journals (Sweden)

    Lei Zhang

    2013-01-01

    Full Text Available The moving least squares (MLS) method has been developed for fitting measured data contaminated with random error. The local approximants of the MLS method only take the error of the dependent variable into account, whereas the independent variables of measured data always contain random error as well. Considering the errors of all variables, this paper presents an improved moving least squares (IMLS) method to generate curves and surfaces for the measured data. In the IMLS method, total least squares (TLS) with a parameter λ based on singular value decomposition is introduced into the local approximants. A procedure is developed to determine the parameter λ. Numerical examples for curve and surface fitting are given to demonstrate the performance of the IMLS method.
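    A minimal sketch of plain MLS curve fitting, before the paper's TLS-based improvement, is given below: at each query point a locally weighted linear fit is solved by least squares. The data and the Gaussian weight width h are invented assumptions.

    ```python
    # Basic moving least squares curve fit: locally weighted linear fit per query
    # point (invented data; Gaussian weight width h is an assumed parameter).
    import numpy as np

    def mls_eval(x_query, x, y, h=0.3):
        out = []
        for xq in x_query:
            w = np.exp(-((x - xq) / h) ** 2)                 # moving Gaussian weights
            B = np.column_stack([np.ones_like(x), x - xq])   # local linear basis
            coef = np.linalg.solve(B.T @ (w[:, None] * B), B.T @ (w * y))
            out.append(coef[0])                              # local fit value at xq
        return np.array(out)

    rng = np.random.default_rng(5)
    x = np.sort(rng.uniform(0, 2 * np.pi, 60))
    y = np.sin(x) + 0.1 * rng.standard_normal(60)
    xq = np.linspace(0.5, 5.5, 5)
    print(mls_eval(xq, x, y))   # should track sin(xq)
    ```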

  20. Generation of optimal correlations by simulated annealing for ill-conditioned least-squares solution

    International Nuclear Information System (INIS)

    A typical process for determining the parameters of an empirical correlation is to collect experimental measurements and apply the least-squares method to an over-determined set of variable data. Least-squares problems occur frequently in the parameter identification of linear/nonlinear dynamic models, and in model fitting using dimensionless variables in interfacial flow treatment, heat transfer, and pressure drop models, etc. Given inevitable measurement noise and careless experimental design, ill-posedness of the least-squares problem can arise and limit the accuracy of the assumed correlation structures. In this paper, a simulated annealing method is proposed for estimating the power-law parameters of empirical correlations of experimental data. The method is applied to the determination of the hydrogen removal correlation used in reactor containment analysis. The analysis results show a remarkable improvement in accuracy and robustness for noisy measurement data. (author)
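    A generic version of this idea can be sketched with SciPy's dual_annealing global optimizer minimizing the sum of squared residuals of an assumed power-law correlation y = a·x^b; the data and bounds are invented, and this is not the authors' hydrogen-removal correlation.

    ```python
    # Simulated-annealing-style fit of a power-law correlation (invented data).
    import numpy as np
    from scipy.optimize import dual_annealing

    rng = np.random.default_rng(6)
    x = np.linspace(1.0, 10.0, 30)
    y = 2.5 * x**0.7 * (1 + 0.05 * rng.standard_normal(30))  # noisy y = a * x**b

    def sse(p):
        a, b = p
        return np.sum((y - a * x**b) ** 2)

    res = dual_annealing(sse, bounds=[(0.1, 10.0), (0.1, 2.0)], seed=7)
    print("a, b =", res.x)   # should be near (2.5, 0.7)
    ```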

  1. Maximum likelihood training of connectionist models: comparison with least squares back-propagation and logistic regression.

    Science.gov (United States)

    Spackman, K A

    1991-01-01

    This paper presents maximum likelihood back-propagation (ML-BP), an approach to training neural networks. The widely reported original approach uses least squares back-propagation (LS-BP), minimizing the sum of squared errors (SSE). Unfortunately, least squares estimation does not give a maximum likelihood (ML) estimate of the weights in the network. Logistic regression, on the other hand, gives ML estimates for single layer linear models only. This report describes how to obtain ML estimates of the weights in a multi-layer model, and compares LS-BP to ML-BP using several examples. It shows that in many neural networks, least squares estimation gives inferior results and should be abandoned in favor of maximum likelihood estimation. Questions remain about the potential uses of multi-level connectionist models in such areas as diagnostic systems and risk-stratification in outcomes research. PMID:1807606

  2. Meshless Least-Squares Method for Solving the Steady-State Heat Conduction Equation

    Institute of Scientific and Technical Information of China (English)

    LIU Yan; ZHANG Xiong; LU Mingwan

    2005-01-01

    The meshless weighted least-squares (MWLS) method is a pure meshless method that combines the moving least-squares approximation scheme and least-squares discretization. Previous studies of the MWLS method for elastostatics and wave propagation problems have shown that the MWLS method possesses several advantages, such as high accuracy, high convergence rate, good stability, and high computational efficiency. In this paper, the MWLS method is extended to heat conduction problems. The MWLS computational parameters are chosen based on a thorough numerical study of one-dimensional problems. Several two-dimensional examples show that the MWLS method is much faster than the element-free Galerkin method (EFGM), while the accuracy of the MWLS method is close to, or even better than, that of the EFGM. These numerical results demonstrate that the MWLS method has good potential for numerical analyses of heat transfer problems.

  3. Robust parallel iterative solvers for linear and least-squares problems, Final Technical Report

    Energy Technology Data Exchange (ETDEWEB)

    Saad, Yousef

    2014-01-16

    The primary goal of this project is to study and develop robust iterative methods for solving linear systems of equations and least squares systems. The focus of the Minnesota team is on algorithms development, robustness issues, and on tests and validation of the methods on realistic problems. 1. The project began with an investigation of how to practically update a preconditioner obtained from an ILU-type factorization, when the coefficient matrix changes. 2. We investigated strategies to improve robustness in parallel preconditioners in a specific case of a PDE with discontinuous coefficients. 3. We explored ways to adapt standard preconditioners for solving linear systems arising from the Helmholtz equation. These are often difficult linear systems to solve by iterative methods. 4. We have also worked on purely theoretical issues related to the analysis of Krylov subspace methods for linear systems. 5. We developed an effective strategy for performing ILU factorizations for the case when the matrix is highly indefinite. The strategy uses shifting in some optimal way. The method was extended to the solution of Helmholtz equations by using complex shifts, yielding very good results in many cases. 6. We addressed the difficult problem of preconditioning sparse systems of equations on GPUs. 7. A by-product of the above work is a software package consisting of an iterative solver library for GPUs based on CUDA. This was made publicly available. It was the first such library that offers complete iterative solvers for GPUs. 8. We considered another form of ILU which blends coarsening techniques from multigrid with algebraic multilevel methods. 9. We have released a new version of our parallel solver - called pARMS [new version is version 3]. As part of this we have tested the code in complex settings - including the solution of Maxwell and Helmholtz equations and for a problem of crystal growth. 10. As an application of polynomial preconditioning we considered the

  4. Taking correlations in GPS least squares adjustments into account with a diagonal covariance matrix

    Science.gov (United States)

    Kermarrec, Gaël; Schön, Steffen

    2016-05-01

    Based on the results of Luati and Proietti (Ann Inst Stat Math 63:673-686, 2011) on an equivalence, for a certain class of polynomial regressions, between the diagonally weighted least squares (DWLS) and the generalized least squares (GLS) estimators, an alternative way to take correlations into account via a diagonal covariance matrix is presented. The equivalent covariance matrix is much easier to compute than a diagonalization of the covariance matrix via eigenvalue decomposition, which also implies a change of the least squares equations. This condensed matrix, for use in the least squares adjustment, can be seen as a diagonal or reduced version of the original matrix, its elements being simply the sums of the row elements of the weighting matrix. The least squares results obtained with the equivalent diagonal matrices and those given by the fully populated covariance matrix are mathematically strictly equivalent for the mean estimator in terms of the estimate and its a priori cofactor matrix. It is shown that this equivalence can be empirically extended to further classes of design matrices such as those used in GPS positioning (single point positioning, precise point positioning, or relative positioning with double differences). Applying this new model to simulated time series of correlated observations, a significant reduction of the coordinate differences compared with the solutions computed with the commonly used diagonal elevation-dependent model was reached for the GPS relative positioning with double differences, single point positioning, as well as precise point positioning cases. The estimate differences between the equivalent and classical model with fully populated covariance matrix were below the millimetre level for all simulated GPS cases and at the sub-millimetre level for relative positioning with double differences. These results were confirmed by analyzing real data. Consequently, the equivalent diagonal covariance matrices, compared with the often used elevation
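    The row-sum condensation is easy to state concretely: for the mean estimator with a symmetric weight matrix W, replacing W by the diagonal of its row sums leaves the estimate unchanged. The sketch below verifies this on an invented example; it does not reproduce the GPS design matrices of the paper.

    ```python
    # Verify row-sum condensation of a weight matrix for the mean estimator
    # (invented symmetric positive definite weight matrix).
    import numpy as np

    rng = np.random.default_rng(8)
    n = 20
    M = rng.standard_normal((n, n))
    W = M @ M.T + n * np.eye(n)        # symmetric positive definite weights
    y = 5.0 + rng.standard_normal(n)
    ones = np.ones(n)

    gls_mean = (ones @ W @ y) / (ones @ W @ ones)   # full GLS mean
    d = W.sum(axis=1)                               # row sums -> diagonal weights
    dwls_mean = (d @ y) / d.sum()                   # condensed DWLS mean
    print(np.isclose(gls_mean, dwls_mean))          # True
    ```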

  5. Online Least Squares One-Class Support Vector Machines-Based Abnormal Visual Event Detection

    OpenAIRE

    Tian Wang; Jie Chen; Yi Zhou; Hichem Snoussi

    2013-01-01

    The abnormal event detection problem is an important subject in real-time video surveillance. In this paper, we propose a novel online one-class classification algorithm, online least squares one-class support vector machine (online LS-OC-SVM), combined with its sparsified version (sparse online LS-OC-SVM). LS-OC-SVM extracts a hyperplane as an optimal description of training objects in a regularized least squares sense. The online LS-OC-SVM learns a training set with a limited number of samp...

  6. Analysis of total least squares in estimating the parameters of a mortar trajectory

    Energy Technology Data Exchange (ETDEWEB)

    Lau, D.L.; Ng, L.C.

    1994-12-01

    Least squares (LS) is a method of curve fitting used with the assumption that error exists only in the observation vector. The method of total least squares (TLS) is more useful in cases where there is error in the data matrix as well as in the observation vector. This paper describes work done in comparing LS and TLS results for parameter estimation of a mortar trajectory based on a time series of angular observations. To improve the results, we investigated several derivations of the LS and TLS methods, and early findings show TLS provided modestly improved results, roughly 10%, over the LS method.
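    Classical TLS for an overdetermined system Ax ≈ b can be computed from the SVD of the augmented matrix [A b]. The sketch below compares it with ordinary LS on invented data with noise in both A and b; it is not the mortar-trajectory estimator itself.

    ```python
    # Total least squares via SVD of [A b], compared with ordinary LS
    # (invented data with noise in both the matrix and the observations).
    import numpy as np

    rng = np.random.default_rng(9)
    n, p = 200, 2
    A_true = rng.standard_normal((n, p))
    x_true = np.array([1.5, -0.7])
    A = A_true + 0.05 * rng.standard_normal((n, p))   # noisy data matrix
    b = A_true @ x_true + 0.05 * rng.standard_normal(n)

    x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)

    # TLS: last right-singular vector of [A b], scaled so its b-component is -1.
    _, _, Vt = np.linalg.svd(np.hstack([A, b[:, None]]))
    v = Vt[-1]
    x_tls = -v[:p] / v[p]

    print("LS :", x_ls)
    print("TLS:", x_tls)
    ```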

  7. Unknown parameter's variance-covariance propagation and calculation in generalized nonlinear least squares problem

    Institute of Scientific and Technical Information of China (English)

    TAO Hua-xue; GUO Jin-yun

    2005-01-01

    The variance-covariance propagation and calculation for the unknown parameters in generalized nonlinear least squares remain to be studied; they have not appeared in the domestic or international literature. A variance-covariance propagation formula for the unknown parameters, taking the second-power terms into account, is derived and used to evaluate the accuracy of the unknown parameter estimators in the generalized nonlinear least squares problem. It is a new variance-covariance formula and opens up a new way to evaluate the accuracy when processing data which are multi-source, multi-dimensional, multi-type, multi-temporal, of differing accuracy, and nonlinear.

  8. Least-squares streamline diffusion finite element approximations to singularly perturbed convection-diffusion problems

    Energy Technology Data Exchange (ETDEWEB)

    Lazarov, R D; Vassilevski, P S

    1999-05-06

    In this paper we introduce and study a least-squares finite element approximation for singularly perturbed convection-diffusion equations of second order. By introducing the flux (diffusive plus convective) as a new unknown, the problem is written in a mixed form as a first order system. Further, the flux is augmented by adding the lower order terms with a small parameter. The new first order system is approximated by the least-squares finite element method using the minus one norm approach of Bramble, Lazarov, and Pasciak [2]. Further, we estimate the error of the method and discuss its implementation and the numerical solution of some test problems.

  9. Iterative Weighted Semiparametric Least Squares Estimation in Repeated Measurement Partially Linear Regression Models

    Institute of Scientific and Technical Information of China (English)

    Ge-mai Chen; Jin-hong You

    2005-01-01

    Consider a repeated measurement partially linear regression model with an unknown parameter vector β. Based on the semiparametric generalized least squares estimator (SGLSE) of β, we propose an iterative weighted semiparametric least squares estimator (IWSLSE) and show that it improves upon the SGLSE in terms of asymptotic covariance matrix. An adaptive procedure is given to determine the number of iterations. We also show that when the number of replicates is less than or equal to two, the IWSLSE cannot improve upon the SGLSE. These results are generalizations of those in [2] to the case of semiparametric regressions.

  10. Least square based method for obtaining one-particle spectral functions from temperature Green functions

    Science.gov (United States)

    Liu, Jun

    2013-02-01

    A least-squares-based fitting scheme is proposed to extract an optimal one-particle spectral function from any one-particle temperature Green function. It uses the existing non-negative least squares (NNLS) fit algorithm to do the fit, and Tikhonov regularization to handle possible numerical singular behavior. By flexibly adding delta peaks to represent very sharp features of the target spectrum, this scheme guarantees a global minimization of the fitted residue. The performance of this scheme is demonstrated with diverse physical examples. The proposed scheme is shown to be comparable in performance to the standard Padé analytic continuation scheme.
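    One common way to combine NNLS with Tikhonov regularization, in the spirit of this record, is to augment the least-squares system with λI rows; the smooth kernel and data below are invented stand-ins for a temperature Green function inversion, not the author's scheme.

    ```python
    # NNLS with Tikhonov regularization via system augmentation
    # (invented ill-conditioned kernel; a stand-in for analytic continuation).
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(10)
    m, n = 40, 60
    K = np.exp(-np.outer(np.linspace(0, 4, m), np.linspace(0, 4, n)))  # smooth kernel
    s_true = np.exp(-0.5 * (np.linspace(0, 4, n) - 1.5) ** 2 / 0.1)    # peaked spectrum
    g = K @ s_true + 1e-4 * rng.standard_normal(m)

    lam = 1e-3
    K_aug = np.vstack([K, lam * np.eye(n)])   # min ||K s - g||^2 + lam^2 ||s||^2, s >= 0
    g_aug = np.concatenate([g, np.zeros(n)])
    s_fit, _ = nnls(K_aug, g_aug)
    print("recovered peak near index", s_fit.argmax(), "true peak at", s_true.argmax())
    ```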

  11. Least square neural network model of the crude oil blending process.

    Science.gov (United States)

    Rubio, José de Jesús

    2016-06-01

    In this paper, the recursive least squares algorithm is designed for big-data learning of a feedforward neural network. The proposed method, as the combination of recursive least squares and a feedforward neural network, obtains four advantages over the standalone algorithms: it requires fewer regressors, it is fast, it has learning ability, and it is more compact. Stability, convergence, boundedness of parameters, and local-minimum avoidance of the proposed technique are guaranteed. The introduced strategy is applied to the modeling of the crude oil blending process. PMID:26992706

  12. Explicit least squares system parameter identification for exact differential input/output models

    Science.gov (United States)

    Pearson, A. E.

    1993-01-01

    The equation error for a class of systems modeled by input/output differential operator equations has the potential to be integrated exactly, given the input/output data on a finite time interval, thereby opening up the possibility of using an explicit least squares estimation technique for system parameter identification. The paper delineates the class of models for which this is possible and shows how the explicit least squares cost function can be obtained in a way that obviates dealing with unknown initial and boundary conditions. The approach is illustrated by two examples: a second order chemical kinetics model and a third order system of Lorenz equations.

  13. Hierarchical Least Squares Identification and Its Convergence for Large Scale Multivariable Systems

    Institute of Scientific and Technical Information of China (English)

    丁锋; 丁韬

    2002-01-01

    The recursive least squares identification algorithm (RLS) for large scale multivariable systems requires a large amount of calculations, therefore, the RLS algorithm is difficult to implement on a computer. The computational load of estimation algorithms can be reduced using the hierarchical least squares identification algorithm (HLS) for large scale multivariable systems. The convergence analysis using the Martingale Convergence Theorem indicates that the parameter estimation error (PEE) given by the HLS algorithm is uniformly bounded without a persistent excitation signal and that the PEE consistently converges to zero for the persistent excitation condition. The HLS algorithm has a much lower computational load than the RLS algorithm.

  14. The structured total least squares algorithm research for passive location based on angle information

    Institute of Scientific and Technical Information of China (English)

    WANG Ding; ZHANG Li; WU Ying

    2009-01-01

    Based on the constrained total least squares (CTLS) passive location algorithm with bearing-only measurements, this paper transforms the same passive location problem into a structured total least squares (STLS) problem. The solution of the STLS problem for passive location can be obtained using the inverse iteration method. It is also shown that the STLS algorithm and the CTLS algorithm have the same location mean-square error under certain conditions. Finally, the article presents a location and tracking algorithm for a moving target by combining the STLS location algorithm with a Kalman filter (KF). The efficiency and superiority of the proposed algorithms are confirmed by computer simulation results.

  15. Constrained total least squares algorithm for passive location based on bearing-only measurements

    Institute of Scientific and Technical Information of China (English)

    WANG Ding; ZHANG Li; WU Ying

    2007-01-01

    The constrained total least squares algorithm for passive location based on bearing-only measurements is presented in this paper. In this algorithm the nonlinear measurement equations are first transformed into linear equations, and the effect of the measurement noise on the linear equation coefficients is analyzed; the passive location problem can therefore be considered a constrained total least squares problem. The problem is then changed into an unconstrained optimization problem, which can be solved by the Newton algorithm, and finally an analysis of the location accuracy is given. The simulation results prove that the new algorithm is effective and practicable.

  16. Simulation of Foam Divot Weight on External Tank Utilizing Least Squares and Neural Network Methods

    Science.gov (United States)

    Chamis, Christos C.; Coroneos, Rula M.

    2007-01-01

    Simulation of divot weight in the insulating foam associated with the external tank of the U.S. space shuttle has been evaluated using least squares and neural network concepts. The simulation required models based on fundamental considerations that can be used to predict under what conditions voids form, the size of the voids, and subsequent divot ejection mechanisms. The quadratic neural networks were found to be satisfactory for the simulation of foam divot weight in various tests associated with the external tank. Both the linear least-squares method and the nonlinear neural network predicted identical results.

  17. Galerkin-Petrov least squares mixed element method for stationary incompressible magnetohydrodynamics

    Institute of Scientific and Technical Information of China (English)

    LUO Zhen-dong; MAO Yun-kui; ZHU Jiang

    2007-01-01

    The Galerkin-Petrov least squares method is combined with the mixed finite element method to deal with the stationary, incompressible magnetohydrodynamics system of equations with viscosity. A Galerkin-Petrov least squares mixed finite element formulation for the stationary incompressible magnetohydrodynamics equations is presented, and the existence and error estimates of its solution are derived. Through this method, the combination of the mixed finite element spaces does not demand the discrete Babuška-Brezzi stability conditions, so that the mixed finite element spaces can be chosen arbitrarily and error estimates of optimal order can be obtained.

  18. An Algorithm For Interval Continuous –Time MIMO Systems Reduction Using Least Squares Method

    Directory of Open Access Journals (Sweden)

    K.Kiran Kumar, Dr.G.V.K.R.Sastry

    2013-05-01

    Full Text Available A new algorithm for the reduction of large-scale linear MIMO (multi-input multi-output) interval systems is proposed in this paper. The proposed method combines the least squares method shifting about a point 'a' with the moment matching technique. The denominator of the reduced interval model is found by the least squares method shifting about a point 'a', while the numerator of the reduced interval model is obtained by the moment matching technique. The reduced-order interval MIMO models retain the steady-state value and stability of the original interval MIMO system. The algorithm is illustrated by a numerical example.

  19. Genfit: a general least squares curve fitting program for mini-computer

    International Nuclear Information System (INIS)

    Genfit is a basic data processing program suitable for small on-line computers. In essence, the program solves the curve fitting problem using the non-linear least squares method. A data set consisting of a series of points in the X-Y plane is fitted to a selected function whose parameters are adjusted to give the best fit in the least squares sense. Convergence may be accelerated by modifying (or interchanging) the values of the constant parameters in accordance with the results of previous calculations.
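    The same workflow in modern terms: scipy.optimize.curve_fit adjusts the parameters of a user-selected function to minimize the sum of squared residuals. The Gaussian-plus-background model and the data here are invented for illustration; this is not the Genfit program itself.

    ```python
    # Nonlinear least-squares curve fit of a user-selected model (invented data).
    import numpy as np
    from scipy.optimize import curve_fit

    def model(x, amp, mu, sigma, bkg):
        return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2) + bkg

    rng = np.random.default_rng(11)
    x = np.linspace(-5, 5, 100)
    y = model(x, 3.0, 0.5, 0.8, 1.0) + 0.1 * rng.standard_normal(x.size)

    popt, pcov = curve_fit(model, x, y, p0=[1.0, 0.0, 1.0, 0.0])
    print("amp, mu, sigma, bkg =", popt)
    ```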

  20. Seismic reliability assessment of RC structures including soil–structure interaction using wavelet weighted least squares support vector machine

    International Nuclear Information System (INIS)

    An efficient metamodeling framework in conjunction with Monte-Carlo simulation (MCS) is introduced to reduce the computational cost of seismic reliability assessment of existing RC structures. To achieve this purpose, the metamodel is designed by combining the weighted least squares support vector machine (WLS-SVM) with a wavelet kernel function, called the wavelet weighted least squares support vector machine (WWLS-SVM). In this study, the seismic reliability assessment of existing RC structures with consideration of soil–structure interaction (SSI) effects is investigated in accordance with Performance-Based Design (PBD). The study incorporates the acceptable performance levels of PBD into reliability theory, comparing the obtained annual probability of non-performance with the target values for each performance level. The MCS method, as the most reliable method, is utilized to estimate the annual probability of failure associated with a given performance level. In the WWLS-SVM-based MCS, the structural seismic responses are accurately predicted by WWLS-SVM to reduce the computational cost. To show the efficiency and robustness of the proposed metamodel, two RC structures are studied. Numerical results demonstrate the efficiency and computational advantages of the proposed metamodel for the seismic reliability assessment of structures. Furthermore, the seismic reliability assessment of existing RC structures with SSI effects is compared to that of the fixed-base model, and it is shown that SSI has a significant influence on the seismic reliability assessment of structures.

  1. Fitting a linear regression model by combining least squares and least absolute value estimation

    OpenAIRE

    Allende, Sira; Bouza, Carlos; Romero, Isidro

    1995-01-01

    Robust estimation of multiple regression is modeled by using a convex combination of the least squares and least absolute value criteria. A bicriterion parametric algorithm is developed for computing the corresponding estimates. The proposed procedure should be especially useful when outliers are expected. Its behavior is analyzed using some examples.
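    A hedged sketch of the idea: minimize the convex combination λ·SSE + (1−λ)·SAD over the regression coefficients with a general-purpose optimizer. The data, the choice λ = 0.3, and the planted outlier are invented; the authors' bicriterion parametric algorithm is not reproduced.

    ```python
    # Convex combination of least-squares and least-absolute-value criteria
    # (invented data with one outlier; lam = 1 is pure LS, lam = 0 pure LAV).
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(12)
    x = np.linspace(0, 10, 40)
    y = 2.0 + 0.5 * x + 0.1 * rng.standard_normal(40)
    y[5] += 8.0                                   # gross outlier

    X = np.column_stack([np.ones_like(x), x])

    def objective(beta, lam=0.3):
        r = y - X @ beta
        return lam * np.sum(r**2) + (1 - lam) * np.sum(np.abs(r))

    res = minimize(objective, x0=np.zeros(2), method="Nelder-Mead")
    print("intercept, slope =", res.x)            # near (2.0, 0.5) despite outlier
    ```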

  2. On Solution of Total Least Squares Problems with Multiple Right-hand Sides

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, I.; Plešinger, Martin; Strakoš, Zdeněk

    2008-01-01

    Roč. 8, č. 1 (2008), s. 10815-10816. ISSN 1617-7061 R&D Projects: GA AV ČR IAA100300802 Institutional research plan: CEZ:AV0Z10300504 Keywords : total least squares problem * multiple right-hand sides * linear approximation problem Subject RIV: BA - General Mathematics

  3. Noise suppression using preconditioned least-squares prestack time migration: application to the Mississippian limestone

    Science.gov (United States)

    Guo, Shiguang; Zhang, Bo; Wang, Qing; Cabrales-Vargas, Alejandro; Marfurt, Kurt J.

    2016-08-01

    Conventional Kirchhoff migration often suffers from artifacts such as aliasing and acquisition footprint, which come from sub-optimal seismic acquisition. The footprint can mask faults and fractures, while aliased noise can focus into false coherent events which affect interpretation and contaminate amplitude variation with offset, amplitude variation with azimuth and elastic inversion. Preconditioned least-squares migration minimizes these artifacts. We implement least-squares migration by minimizing the difference between the original data and the modeled demigrated data using an iterative conjugate gradient scheme. Unpreconditioned least-squares migration better estimates the subsurface amplitude, but does not suppress aliasing. In this work, we precondition the results by applying a 3D prestack structure-oriented LUM (lower–upper–middle) filter to each common offset and common azimuth gather at each iteration. The preconditioning algorithm not only suppresses aliasing of both signal and noise, but also improves the convergence rate. We apply the new preconditioned least-squares migration to the Marmousi model and demonstrate how it can improve the seismic image compared with conventional migration, and then apply it to one survey acquired over a new resource play in the Mid-Continent, USA. The acquisition footprint from the targets is attenuated and the signal to noise ratio is enhanced. To demonstrate the impact on interpretation, we generate a suite of seismic attributes to image the Mississippian limestone, and show that the karst-enhanced fractures in the Mississippian limestone can be better illuminated.
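    The inversion engine inside such schemes is generic: iteratively minimize ||d − Lm||² with a conjugate-gradient-type solver. The sketch below uses SciPy's LSQR with a small damping term and an invented smoothing operator standing in for the demigration/migration pair; it is not the authors' preconditioned migration.

    ```python
    # Iterative least-squares inversion with LSQR (invented 1-D smoothing
    # operator standing in for the demigration operator; damp regularizes).
    import numpy as np
    from scipy.sparse import diags
    from scipy.sparse.linalg import lsqr

    n = 200
    L = diags([0.25, 0.5, 0.25], [-1, 0, 1], shape=(n, n)).tocsr()

    m_true = np.zeros(n)
    m_true[[50, 90, 140]] = [1.0, -0.7, 0.5]      # sparse reflectivity-like model
    d = L @ m_true + 0.01 * np.random.default_rng(13).standard_normal(n)

    m_est = lsqr(L, d, damp=0.01, iter_lim=100)[0]  # damped iterative LS solution
    print("max error:", np.abs(m_est - m_true).max())
    ```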

  4. Analysis of Total Least Squares Problem with Multiple Right-Hand Sides

    Czech Academy of Sciences Publication Activity Database

    Hnětynková, Iveta; Plešinger, Martin; Strakoš, Zdeněk

    Dundee : University of Dundee, 2007 - (Griffith, D.; Watson, G.). s. 22-22 [Biennial Conference on Numerical Analysis /22./. 26.06.2007-29.06.2007, University of Dundee] Institutional research plan: CEZ:AV0Z10300504 Keywords : total least squares * multiple right-hand sides * data reduction

  5. On the convergence of the partial least squares path modeling algorithm

    NARCIS (Netherlands)

    Henseler, Jörg

    2010-01-01

    This paper adds to an important aspect of Partial Least Squares (PLS) path modeling, namely the convergence of the iterative PLS path modeling algorithm. Whilst conventional wisdom says that PLS always converges in practice, there is no formal proof for path models with more than two blocks of manifest variables.

  6. LEAST-SQUARES MIXED FINITE ELEMENT METHOD FOR SADDLE-POINT PROBLEM

    Institute of Scientific and Technical Information of China (English)

    Lie-heng Wang; Huo-yuan Duan

    2000-01-01

    In this paper, a least-squares mixed finite element method for the solution of the primal saddle-point problem is developed. It is proved that the approximate problem is consistently elliptic in the conforming finite element spaces, with only the discrete BB-condition needed for a smaller auxiliary problem. The abstract error estimate is derived.

  7. Harmonic tidal analysis at a few stations using the least squares method

    Digital Repository Service at National Institute of Oceanography (India)

    Fernandes, A.A; Das, V.K.; Bahulayan, N.

    Using the least squares method, harmonic analysis has been performed on hourly water level records of 29 days at several stations depicting different types of non-tidal noise. For a tidal record at Mormugao, which was free from storm surges (low...
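    Once the constituent frequencies are fixed, harmonic tidal analysis reduces to ordinary least squares: regress the water level on a sine/cosine pair per constituent. The two constituents (M2, S2) and the synthetic hourly record below are invented for illustration and are not the Mormugao data.

    ```python
    # Harmonic tidal analysis by least squares (invented 29-day hourly record,
    # two assumed constituents: M2 and S2).
    import numpy as np

    t = np.arange(29 * 24)                        # hourly samples, 29 days
    omega = 2 * np.pi / np.array([12.42, 12.00])  # M2, S2 periods in hours

    rng = np.random.default_rng(14)
    h = (0.8 * np.cos(omega[0] * t - 1.0) +
         0.3 * np.cos(omega[1] * t - 0.4) + 0.05 * rng.standard_normal(t.size))

    # Design matrix: mean level plus a cos/sin pair per constituent.
    cols = [np.ones_like(t, dtype=float)]
    for w in omega:
        cols += [np.cos(w * t), np.sin(w * t)]
    A = np.column_stack(cols)

    coef, *_ = np.linalg.lstsq(A, h, rcond=None)
    for name, c, s in zip(["M2", "S2"], coef[1::2], coef[2::2]):
        print(name, "amplitude:", np.hypot(c, s))  # ~0.8 and ~0.3
    ```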

  8. Using AMMI, factorial regression and partial least squares regression models for interpreting genotype x environment interaction.

    NARCIS (Netherlands)

    Vargas, M.; Crossa, J.; Eeuwijk, van F.A.; Ramirez, M.E.; Sayre, K.

    1999-01-01

    Partial least squares (PLS) and factorial regression (FR) are statistical models that incorporate external environmental and/or cultivar variables for studying and interpreting genotype × environment interaction (GEI). The Additive Main effect and Multiplicative Interaction (AMMI) model uses only the…

  9. Bootstrap Confidence Intervals for Ordinary Least Squares Factor Loadings and Correlations in Exploratory Factor Analysis

    Science.gov (United States)

    Zhang, Guangjian; Preacher, Kristopher J.; Luo, Shanhong

    2010-01-01

    This article is concerned with using the bootstrap to assign confidence intervals for rotated factor loadings and factor correlations in ordinary least squares exploratory factor analysis. Coverage performances of "SE"-based intervals, percentile intervals, bias-corrected percentile intervals, bias-corrected accelerated percentile intervals, and…

  10. Revisiting the Least-squares Procedure for Gradient Reconstruction on Unstructured Meshes

    Science.gov (United States)

    Mavriplis, Dimitri J.; Thomas, James L. (Technical Monitor)

    2003-01-01

    The accuracy of the least-squares technique for gradient reconstruction on unstructured meshes is examined. While least-squares techniques produce accurate results on arbitrary isotropic unstructured meshes, serious difficulties exist for highly stretched meshes in the presence of surface curvature. In these situations, gradients are typically under-estimated by up to an order of magnitude. For vertex-based discretizations on triangular and quadrilateral meshes, and cell-centered discretizations on quadrilateral meshes, accuracy can be recovered using an inverse distance weighting in the least-squares construction. For cell-centered discretizations on triangles, both the unweighted and weighted least-squares constructions fail to provide suitable gradient estimates for highly stretched curved meshes. Good overall flow solution accuracy can be retained in spite of poor gradient estimates, due to the presence of flow alignment in exactly the same regions where the poor gradient accuracy is observed. However, the use of entropy fixes has the potential for generating large but subtle discretization errors.
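
    The weighted construction discussed above is compact in code. A minimal 2D sketch, with the stencil, the field and the weighting exponent chosen for illustration only:

```python
import numpy as np

def ls_gradient(xc, uc, x_nbrs, u_nbrs, p=1.0):
    """Weighted least-squares gradient at xc from neighboring values:
    rows are scaled by w_k = 1/|x_k - xc|^p (inverse distance weighting),
    so squared residuals carry weights w_k**2."""
    dx = x_nbrs - xc                               # (n, dim) offsets
    w = 1.0 / np.linalg.norm(dx, axis=1) ** p
    g, *_ = np.linalg.lstsq(dx * w[:, None], (u_nbrs - uc) * w, rcond=None)
    return g

# Highly stretched stencil; u = 2x + 3y is linear, so the LS gradient is exact.
xc = np.array([0.0, 0.0])
xn = np.array([[1.0, 0.0], [-1.0, 0.0], [0.0, 1e-3], [0.0, -1e-3]])
print(ls_gradient(xc, 0.0, xn, 2 * xn[:, 0] + 3 * xn[:, 1]))   # ~[2. 3.]
```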

  11. Convergence of Inner-Iteration GMRES Methods for Rank-Deficient Least Squares Problems

    Czech Academy of Sciences Publication Activity Database

    Morikuni, Keiichi; Hayami, K.

    2015-01-01

    Roč. 36, č. 1 (2015), s. 225-250. ISSN 0895-4798 Institutional support: RVO:67985807 Keywords : least squares problem * iterative methods * preconditioner * inner-outer iteration * GMRES method * stationary iterative method * rank-deficient problem Subject RIV: BA - General Mathematics Impact factor: 1.590, year: 2014

  12. Linking Socioeconomic Status to Social Cognitive Career Theory Factors: A Partial Least Squares Path Modeling Analysis

    Science.gov (United States)

    Huang, Jie-Tsuen; Hsieh, Hui-Hsien

    2011-01-01

    The purpose of this study was to investigate the contributions of socioeconomic status (SES) in predicting social cognitive career theory (SCCT) factors. Data were collected from 738 college students in Taiwan. The results of the partial least squares (PLS) analyses indicated that SES significantly predicted career decision self-efficacy (CDSE);…

  13. A Coupled Finite Difference and Moving Least Squares Simulation of Violent Breaking Wave Impact

    DEFF Research Database (Denmark)

    Lindberg, Ole; Bingham, Harry B.; Engsig-Karup, Allan Peter

    2012-01-01

    Two models for simulation of free surface flow are presented. The first model is a finite difference based potential flow model with non-linear kinematic and dynamic free surface boundary conditions. The second model is a weighted least squares based incompressible and inviscid flow model. A special...

  14. Explicit representation formulas for the minimum norm least squares solutions of some quaternion matrix equations

    CERN Document Server

    Kyrchei, Ivan

    2012-01-01

    Within the framework of the theory of the column and row determinants, we obtain explicit representation formulas (analogs of Cramer's rule) for the minimum norm least squares solutions of the quaternion matrix equations ${\bf A}{\bf X} = {\bf B}$, ${\bf X}{\bf A} = {\bf B}$ and ${\bf A}{\bf X}{\bf B} = {\bf D}$.

  15. Unbiased Invariant Least Squares Estimation in A Generalized Growth Curve Model

    OpenAIRE

    Wu, Xiaoyong; Liang, Hua; Zou, Guohua

    2009-01-01

    This paper is concerned with a generalized growth curve model. We derive the unbiased invariant least squares estimators of the linear functions of variance-covariance matrix of disturbances. Under the minimum variance criterion, we obtain the necessary and sufficient conditions of the proposed estimators to be optimal. Simulation studies show that the proposed estimators perform well.

  16. SAS MACRO LANGUAGE PROGRAM FOR PARTIAL LEAST SQUARES REGRESSION OF SPECTRAL DATA

    Science.gov (United States)

    A computer program was written in the SAS language for the purpose of examining the effect of spectral pretreatments on partial least squares regression of near-infrared (or similarly structured) data. The program operates in an unattended batch mode, in which the user may specify a number of common…

  17. Mis-parametrization subsets for a penalized least squares model selection

    OpenAIRE

    Guyon, Xavier; Hardouin, Cécile

    2011-01-01

    When identifying a model by a penalized minimum contrast procedure, we describe the over- and under-fitting parametrization subsets for a least squares contrast. This makes it possible to determine an accurate sequence of penalization rates ensuring good identification. We present applications to the identification of the covariance of a general time series, and to the variogram identification of a geostatistical model.

  18. Adjoint sensitivity in PDE constrained least squares problems as a multiphysics problem

    NARCIS (Netherlands)

    Lahaye, D.; Mulckhuyse, W.F.W.

    2012-01-01

    Purpose - The purpose of this paper is to provide a framework for the implementation of an adjoint sensitivity formulation for least-squares partial differential equations constrained optimization problems exploiting a multiphysics finite elements package. The estimation of the diffusion coefficient

  19. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed to test designs of neutron scattering instruments. A pair of programs (LQDGEOM and MCRUN) which use the library is shown as an example. (author) 7 figs., 9 refs

  20. Data libraries as a collaborative tool across Monte Carlo codes

    CERN Document Server

    Augelli, Mauro; Han, Mincheol; Hauf, Steffen; Kim, Chan-Hyeung; Kuster, Markus; Pia, Maria Grazia; Quintieri, Lina; Saracco, Paolo; Seo, Hee; Sudhakar, Manju; Eidenspointner, Georg; Zoglauer, Andreas

    2010-01-01

    The role of data libraries in Monte Carlo simulation is discussed. A number of data libraries currently in preparation are reviewed; their data are critically examined with respect to the state-of-the-art in the respective fields. Extensive tests with respect to experimental data have been performed for the validation of their content.

  1. Discrete least squares polynomial approximation with random evaluations − application to parametric and stochastic elliptic PDEs

    KAUST Repository

    Chkifa, Abdellah

    2015-04-08

    Motivated by the numerical treatment of parametric and stochastic PDEs, we analyze the least-squares method for polynomial approximation of multivariate functions based on random sampling according to a given probability measure. Recent work has shown that in the univariate case, the least-squares method is quasi-optimal in expectation in [A. Cohen, M A. Davenport and D. Leviatan. Found. Comput. Math. 13 (2013) 819–834] and in probability in [G. Migliorati, F. Nobile, E. von Schwerin, R. Tempone, Found. Comput. Math. 14 (2014) 419–456], under suitable conditions that relate the number of samples with respect to the dimension of the polynomial space. Here “quasi-optimal” means that the accuracy of the least-squares approximation is comparable with that of the best approximation in the given polynomial space. In this paper, we discuss the quasi-optimality of the polynomial least-squares method in arbitrary dimension. Our analysis applies to any arbitrary multivariate polynomial space (including tensor product, total degree or hyperbolic crosses), under the minimal requirement that its associated index set is downward closed. The optimality criterion only involves the relation between the number of samples and the dimension of the polynomial space, independently of the anisotropic shape and of the number of variables. We extend our results to the approximation of Hilbert space-valued functions in order to apply them to the approximation of parametric and stochastic elliptic PDEs. As a particular case, we discuss “inclusion type” elliptic PDE models, and derive an exponential convergence estimate for the least-squares method. Numerical results confirm our estimate, yet pointing out a gap between the condition necessary to achieve optimality in the theory, and the condition that in practice yields the optimal convergence rate.
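
    In one dimension the construction is a few lines. A sketch, assuming a Legendre basis on [-1, 1] and sampling from the uniform measure (the paper's general multivariate setting is not reproduced):

```python
import numpy as np
from numpy.polynomial import legendre

def random_ls_fit(f, degree, n_samples, seed=0):
    """Least-squares polynomial fit in the Legendre basis from random samples."""
    x = np.random.default_rng(seed).uniform(-1.0, 1.0, n_samples)
    V = legendre.legvander(x, degree)       # basis evaluated at the samples
    c, *_ = np.linalg.lstsq(V, f(x), rcond=None)
    return c                                # evaluate with legendre.legval

# Stability requires the sample count to grow suitably with the space dimension.
c = random_ls_fit(np.exp, degree=10, n_samples=200)
print(abs(legendre.legval(0.3, c) - np.exp(0.3)))   # small approximation error
```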

  2. The consistency of ordinary least-squares and generalized least-squares polynomial regression on characterizing the mechanomyographic amplitude versus torque relationship

    International Nuclear Information System (INIS)

    The primary purpose of this study was to examine the consistency of ordinary least-squares (OLS) and generalized least-squares (GLS) polynomial regression analyses utilizing linear, quadratic and cubic models on either five or ten data points that characterize the mechanomyographic amplitude (MMG_RMS) versus isometric torque relationship. The secondary purpose was to examine the consistency of OLS and GLS polynomial regression utilizing only linear and quadratic models (excluding cubic responses) on either ten or five data points. Eighteen participants (mean ± SD age = 24 ± 4 yr) completed ten randomly ordered isometric step muscle actions from 5% to 95% of the maximal voluntary contraction (MVC) of the right leg extensors during three separate trials. MMG_RMS was recorded from the vastus lateralis during the MVCs and each submaximal muscle action. MMG_RMS versus torque relationships were analyzed on a subject-by-subject basis using OLS and GLS polynomial regression. When using ten data points, only 33% and 27% of the subjects were fitted with the same model (utilizing linear, quadratic and cubic models) across all three trials for OLS and GLS, respectively. After eliminating the cubic model, there was an increase to 55% of the subjects being fitted with the same model across all trials for both OLS and GLS regression. Using only five data points (instead of ten data points), 55% of the subjects were fitted with the same model across all trials for OLS and GLS regression. Overall, OLS and GLS polynomial regression models were only able to consistently describe the torque-related patterns of response for MMG_RMS in 27–55% of the subjects across three trials. Future studies should examine alternative methods for improving the consistency and reliability of the patterns of response for the MMG_RMS versus isometric torque relationship

  3. L2CXCV: A Fortran 77 package for least squares convex/concave data smoothing

    Science.gov (United States)

    Demetriou, I. C.

    2006-04-01

    …, biology and engineering. Distribution material that includes single and double precision versions of the code, driver programs, technical details of the implementation of the software package and test examples that demonstrate the use of the software is available in an accompanying ASCII file.
    Program summary
    Title of program: L2CXCV
    Catalogue identifier: ADXM_v1_0
    Program summary URL: http://cpc.cs.qub.ac.uk/summaries/ADXM_v1_0
    Program obtainable from: CPC Program Library, Queen's University of Belfast, N. Ireland
    Computer: PC Intel Pentium, Sun Sparc Ultra 5, Hewlett-Packard HP UX 11.0
    Operating system: WINDOWS 98, 2000, Unix/Solaris 7, Unix/HP UX 11.0
    Programming language used: FORTRAN 77
    Memory required to execute with typical data: O(n), where n is the number of data
    No. of bits in a byte: 8
    No. of lines in distributed program, including test data, etc.: 29 349
    No. of bytes in distributed program, including test data, etc.: 1 276 663
    No. of processors used: 1
    Has the code been vectorized or parallelized?: no
    Distribution format: default tar.gz
    Separate documentation available: Yes
    Nature of physical problem: Analysis of processes that show initially increasing and then decreasing rates of change (sigmoid shape), as, for example, in heat curves, reactor stability conditions, evolution curves, photoemission yields, growth models, utility functions, etc. Identifying an unknown convex/concave (sigmoid) function from some measurements of its values that contain random errors. Also, identifying the inflection point of this sigmoid function.
    Method of solution: Univariate data smoothing by minimizing the sum of the squares of the residuals (least squares approximation) subject to the condition that the second order divided differences of the smoothed values change sign at most once. Ideally, this is the number of sign changes in the second derivative of the underlying function. The remarkable property of the smoothed values is that they consist of one separate section of optimal components…

  4. Doppler-shift estimation of flat underwater channel using data-aided least-square approach

    Science.gov (United States)

    Pan, Weiqiang; Liu, Ping; Chen, Fangjiong; Ji, Fei; Feng, Jing

    2015-06-01

    In this paper we propose a data-aided Doppler estimation method for underwater acoustic communication. The training sequence is non-dedicated, hence it can be designed for Doppler estimation as well as channel equalization. We assume the channel has been equalized and consider only the flat-fading channel. First, the theoretical received sequence is composed from the training symbols. Next, the least squares principle is applied to build the objective function, which minimizes the error between the composed and the actual received signal. Then an iterative approach is applied to solve the least squares problem. The proposed approach involves an outer loop and an inner loop, which resolve the channel gain and the Doppler coefficient, respectively. The theoretical performance bound, i.e. the Cramer-Rao Lower Bound (CRLB) of the estimation, is also derived. Computer simulation results show that the proposed algorithm achieves the CRLB in medium to high SNR cases.
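
    A simplified narrowband sketch of the idea: for each candidate Doppler shift, the flat-channel gain has a closed form, so the residual can be scanned over a grid. The authors iterate an inner and outer loop instead; the grid search, sampling rate and training sequence below are substitutes.

```python
import numpy as np

def ls_doppler(r, s, fs, f_grid):
    """Data-aided LS Doppler estimation over a flat channel: for each candidate
    fd, the gain minimizing ||r - a*x||^2 is a = <x, r>/<x, x>."""
    n = np.arange(len(s))
    best = (0.0, 0.0, np.inf)
    for fd in f_grid:
        x = s * np.exp(2j * np.pi * fd * n / fs)    # shifted training replica
        a = np.vdot(x, r) / np.vdot(x, x)           # closed-form LS gain
        err = np.linalg.norm(r - a * x) ** 2
        if err < best[2]:
            best = (fd, a, err)
    return best                                     # (fd_hat, gain, residual)

s = np.exp(1j * np.pi * np.arange(256) / 4)         # toy training sequence
r = 0.8 * s * np.exp(2j * np.pi * 12.5 * np.arange(256) / 8000)
print(ls_doppler(r, s, 8000.0, np.linspace(-50, 50, 401))[0])   # ~12.5 Hz
```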

  5. The Work of Exchange and Least Square Algorithms to Approximating Univariate Functions

    International Nuclear Information System (INIS)

    This paper discusses the work of the exchange and least squares algorithms for minimax and least squares approximations of univariate functions. Evaluation of the work of the two algorithms is directed to the parameters of interval length, arc length, curvature and degree of the approximating polynomial, so that the work of each algorithm can hopefully be optimized. Both algorithms are implemented in MATLAB. Several statistical analyses are used to measure the indicators of the work mentioned above. Numerical results show that there is a significant difference in the process durations of the two algorithms, but no significant difference in the accuracy of the approximating functions. In general, the parameters mentioned above can affect the work of both algorithms

  6. Least Squares Ranking on Graphs, Hodge Laplacians, Time Optimality, and Iterative Methods

    CERN Document Server

    Hirani, Anil N; Watts, Seth

    2010-01-01

    Given a set of alternatives to be ranked and some pairwise comparison values, ranking can be posed as a least squares computation on a graph. This was first used by Leake for ranking football teams. The residual can be further analyzed to find inconsistencies in the given data, and this leads to a second least squares problem. This whole process was formulated recently by Jiang et al. as a Hodge decomposition of the edge values. Recently, Koutis et al. showed that linear systems involving symmetric diagonally dominant (SDD) matrices can be solved in time approaching optimality. By using the Hodge 0-Laplacian and 2-Laplacian, we give various results on when the normal equations for ranking are SDD and when iterative Krylov methods should be used. We also give iteration bounds for the conjugate gradient method for these problems.
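
    The basic least-squares ranking step is a small incidence-matrix solve whose normal equations involve the graph 0-Laplacian. A minimal sketch with invented comparison data:

```python
import numpy as np

def ls_rank(n_items, comparisons):
    """Rank from pairwise comparisons (i, j, y), read as 'j beats i by y'.
    Solves min_r sum (r_j - r_i - y)^2 via the edge incidence matrix."""
    B = np.zeros((len(comparisons), n_items))
    y = np.zeros(len(comparisons))
    for row, (i, j, val) in enumerate(comparisons):
        B[row, j], B[row, i], y[row] = 1.0, -1.0, val
    r, *_ = np.linalg.lstsq(B, y, rcond=None)   # min-norm LS solution
    return r - r.mean()                         # scores defined up to a constant

print(ls_rank(3, [(0, 1, 2.0), (1, 2, 1.0), (0, 2, 3.5)]))
```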

  7. A NUMERICALLY STABLE BLOCK MODIFIED GRAM-SCHMIDT ALGORITHM FOR SOLVING STIFF WEIGHTED LEAST SQUARES PROBLEMS

    Institute of Scientific and Technical Information of China (English)

    Musheng Wei; Qiaohua Liu

    2007-01-01

    Recently, Wei [18] proved that perturbed stiff weighted pseudoinverses and stiff weighted least squares problems are stable if and only if the original and perturbed coefficient matrices satisfy several row rank preservation conditions. According to these conditions, in this paper we show that, in general, ordinary modified Gram-Schmidt with column pivoting is not numerically stable for solving the stiff weighted least squares problem. We then propose a row block modified Gram-Schmidt algorithm with column pivoting, and show that, with an appropriately chosen tolerance, this algorithm can correctly determine the numerical ranks of the row partitioned sub-matrices, and the computed QR factor R contains a small roundoff error which is row stable. Several numerical experiments are also provided to compare the results of the ordinary modified Gram-Schmidt algorithm with column pivoting and the row block modified Gram-Schmidt algorithm with column pivoting.

  8. GLUCS: a generalized least-squares program for updating cross section evaluations with correlated data sets

    International Nuclear Information System (INIS)

    The PDP-10 FORTRAN IV computer programs INPUT.F4, GLUCS.F4, and OUTPUT.F4, which employ Bayes' theorem (or generalized least-squares) for simultaneous evaluation of reaction cross sections, are described. Evaluations of cross sections and covariances are used as input for incorporating correlated data sets, particularly ratios. These data are read from Evaluated Nuclear Data File (ENDF/B-V) formatted files. Measured data sets, including ratios and absolute and relative cross section data, are read and combined with the input evaluations by means of the least-squares technique. The resulting output evaluations contain not only updated cross sections and covariances, but also cross-reaction covariances. These output data are written into ENDF/B-V format

  9. Method for exploiting bias in factor analysis using constrained alternating least squares algorithms

    Science.gov (United States)

    Keenan, Michael R.

    2008-12-30

    Bias plays an important role in factor analysis and is often implicitly made use of, for example, to constrain solutions to factors that conform to physical reality. However, when components are collinear, a large range of solutions may exist that satisfy the basic constraints and fit the data equally well. In such cases, the introduction of mathematical bias through the application of constraints may select solutions that are less than optimal. The biased alternating least squares algorithm of the present invention can offset mathematical bias introduced by constraints in the standard alternating least squares analysis to achieve factor solutions that are most consistent with physical reality. In addition, these methods can be used to explicitly exploit bias to provide alternative views and provide additional insights into spectral data sets.
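
    A toy sketch of how a constraint injects bias into alternating least squares: here non-negativity is imposed by clipping after each unconstrained solve. This is generic ALS-style code for illustration, not the patented biased algorithm.

```python
import numpy as np

def constrained_als(X, k, n_iter=200, seed=1):
    """Factor X ~ C @ S.T by alternating LS, clipping negatives after each
    unconstrained solve (a simple, biased way of enforcing non-negativity)."""
    C = np.random.default_rng(seed).random((X.shape[0], k))
    for _ in range(n_iter):
        S = np.clip(np.linalg.lstsq(C, X, rcond=None)[0].T, 0.0, None)
        C = np.clip(np.linalg.lstsq(S, X.T, rcond=None)[0].T, 0.0, None)
    return C, S

rng = np.random.default_rng(0)
X = rng.random((30, 2)) @ rng.random((2, 40))   # non-negative rank-2 data
C, S = constrained_als(X, k=2)
print(np.linalg.norm(X - C @ S.T))              # small reconstruction residual
```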

  10. Low-rank matrix recovery via iteratively reweighted least squares minimization

    CERN Document Server

    Fornasier, Massimo; Ward, Rachel

    2010-01-01

    We present and analyze an efficient implementation of an iteratively reweighted least squares algorithm for recovering a matrix from a small number of linear measurements. The algorithm is designed for the simultaneous promotion of both a minimal nuclear norm and an approximately low-rank solution. Under the assumption that the linear measurements fulfill a suitable generalization of the Null Space Property known in the context of compressed sensing, the algorithm is guaranteed to recover iteratively any matrix with an error of the order of the best k-rank approximation. In certain relevant cases, for instance for the matrix completion problem, our version of this algorithm can take advantage of the Woodbury matrix identity, which expedites the solution of the least squares problems required at each iteration. We present numerical experiments that confirm the robustness of the algorithm for the solution of matrix completion problems, and demonstrate its competitiveness with respect to other techniq...

  11. ON THE SINGULARITY OF LEAST SQUARES ESTIMATOR FOR MEAN-REVERTING Α-STABLE MOTIONS

    Institute of Scientific and Technical Information of China (English)

    Hu Yaozhong; Long Hongwei

    2009-01-01

    We study the problem of parameter estimation for the mean-reverting α-stable motion, dX_t = (a_0 − θ_0 X_t) dt + dZ_t, observed at discrete time instants. A least squares estimator is obtained and its asymptotics are discussed in the singular case (a_0, θ_0) = (0, 0). If a_0 = 0, then the mean-reverting α-stable motion becomes an Ornstein-Uhlenbeck process, which is studied in [7] in the ergodic case θ_0 > 0. For the Ornstein-Uhlenbeck process, the asymptotics of the least squares estimators for the singular case (θ_0 = 0) and for the ergodic case (θ_0 > 0) are completely different.

  12. Semiparametric Regression of Multidimensional Genetic Pathway Data: Least-Squares Kernel Machines and Linear Mixed Models

    OpenAIRE

    Liu, Dawei; Lin, Xihong; Ghosh, Debashis

    2007-01-01

    We consider a semiparametric regression model that relates a normal outcome to covariates and a genetic pathway, where the covariate effects are modeled parametrically and the pathway effect of multiple gene expressions is modeled parametrically or nonparametrically using least-squares kernel machines (LSKMs). This unified framework allows a flexible function for the joint effect of multiple genes within a pathway by specifying a kernel function and allows for the possibility that each gene e...

  13. Moving Least Squares Method for a One-Dimensional Parabolic Inverse Problem

    Directory of Open Access Journals (Sweden)

    Baiyu Wang

    2014-01-01

    This paper investigates the numerical solution of a class of one-dimensional inverse parabolic problems using the moving least squares approximation; the inverse problem is the determination of an unknown source term depending on time. The collocation method is used for solving the equation; some numerical experiments are presented and discussed to illustrate the stability and high efficiency of the method.
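
    The moving least squares approximation itself is compact. A 1D sketch with a Gaussian weight function; the paper's collocation treatment of the inverse source problem is not reproduced, and the bandwidth and degree are illustrative.

```python
import numpy as np

def mls(x_eval, x_data, y_data, h=0.2, degree=1):
    """Moving least squares: at each evaluation point, fit a local polynomial
    by least squares weighted with a Gaussian centered at that point."""
    out = np.empty_like(x_eval)
    for i, x0 in enumerate(x_eval):
        g = np.exp(-((x_data - x0) / h) ** 2)        # Gaussian weights
        sw = np.sqrt(g)                              # row scaling = weighted LS
        V = np.vander(x_data - x0, degree + 1)       # local basis in (x - x0)
        c, *_ = np.linalg.lstsq(V * sw[:, None], y_data * sw, rcond=None)
        out[i] = c[-1]                               # constant term = u(x0)
    return out

x = np.linspace(0.0, 1.0, 50)
print(mls(np.array([0.5]), x, np.sin(2 * np.pi * x)))   # ~sin(pi) = 0
```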

  14. Solving the Axisymmetric Inverse Heat Conduction Problem by a Wavelet Dual Least Squares Method

    Directory of Open Access Journals (Sweden)

    Fu Chu-Li

    2009-01-01

    We consider an axisymmetric inverse heat conduction problem of determining the surface temperature from a fixed location inside a cylinder. This problem is ill-posed; the solution (if it exists) does not depend continuously on the data. A special projection method, the dual least squares method generated by the family of Shannon wavelets, is applied to formulate the regularized solution. Meanwhile, an order optimal error estimate between the approximate solution and the exact solution is proved.

  15. Patent value models: partial least squares path modelling with mode C and few indicators

    OpenAIRE

    Martínez Ruiz, Alba

    2011-01-01

    Two general goals were raised in this thesis: first, to establish a PLS model for patent value and to investigate causality relationships among variables that determine the patent value; second, to investigate the performance of Partial Least Squares (PLS) Path Modelling with Mode C in the context of patent value models. This thesis is organized in 10 chapters. Chapter 1 presents an introduction to the thesis that includes the objectives, research scope and the document's structure. C...

  16. High-performance numerical algorithms and software for structured total least squares

    OpenAIRE

    Markovsky, I.; Van Huffel, S.

    2005-01-01

    We present a software package for structured total least squares approximation problems. The allowed structures in the data matrix are block-Toeplitz, block-Hankel, unstructured, and exact. Combination of blocks with these structures can be specified. The computational complexity of the algorithms is O(m), where m is the sample size. We show simulation examples with different approximation problems. Application of the method for multivariable system identification is illustrated on examples f...

  17. Online Soft Sensor of Humidity in PEM Fuel Cell Based on Dynamic Partial Least Squares

    OpenAIRE

    Rong Long; Qihong Chen; Liyan Zhang; Longhua Ma; Shuhai Quan

    2013-01-01

    Online monitoring of humidity in proton exchange membrane (PEM) fuel cells is an important issue in maintaining proper membrane humidity. The cost and size of existing sensors for monitoring humidity are prohibitive for online measurements. Online prediction of humidity using readily available measured data would be beneficial to water management. In this paper, a novel soft sensor method based on dynamic partial least squares (DPLS) regression is proposed and applied to humidity prediction i...

  18. Prediction of ferric iron precipitation in bioleaching process using partial least squares and artificial neural network

    OpenAIRE

    Golmohammadi Hassan; Rashidi Abbas; Safdari Seyed Jaber

    2013-01-01

    A quantitative structure-property relationship (QSPR) study based on partial least squares (PLS) and artificial neural network (ANN) was developed for the prediction of ferric iron precipitation in bioleaching process. The leaching temperature, initial pH, oxidation/reduction potential (ORP), ferrous concentration and particle size of ore were used as inputs to the network. The output of the model was ferric iron precipitation. The optimal condition of the neural network was obtained by...

  19. Characterization of ocean biogeochemical processes: a generalized total least-squares estimator of the Redfield ratios

    OpenAIRE

    Guglielmi, V.; Goyet, C; Touratier, F.

    2015-01-01

    The chemical composition of the global ocean is governed by biological, chemical and physical processes. These processes interact with each other so that the concentrations of carbon dioxide, oxygen, nitrate and phosphate vary in constant proportions, referred to as the Redfield ratios. We build here the Generalized Total Least-Squares estimator of these ratios. The interest of our approach is twofold: it respects the hydrological characteristics of the studied areas, and it...

  20. Least squares algorithm for region-of-interest evaluation in emission tomography

    Energy Technology Data Exchange (ETDEWEB)

    Formiconi, A.R. (Sezione di Medicina Nucleare, Firenze (Italy). Dipt. di Fisiopatologia Clinica)

    1993-03-01

    In a simulation study, the performance of the least squares algorithm applied to region-of-interest evaluation was studied. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme and also provides estimates of the statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction; these included filtered back projection and conjugate gradient least squares with the model of non-stationary geometrical response. For noise-free data and for regions of accurate shape, least squares estimates were unbiased within roundoff errors. For noisy data, estimates were still unbiased but precision worsened for regions smaller than the resolution: simulating typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra-high-resolution collimator and 7% with a low-energy all-purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of non-stationary geometrical response, bias of the estimates decreased on increasing the number of iterations, but precision worsened, reaching an estimated standard deviation of more than 25% for the same 1 cm region.
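
    In matrix form the direct estimator and its covariance are two lines. A sketch assuming a linearized system matrix A mapping ROI values to projection bins and uncorrelated noise of variance sigma2, which is a drastic simplification of the physical model described above:

```python
import numpy as np

def roi_ls(A, proj, sigma2=1.0):
    """Direct LS estimate of ROI values q from projections p ~ A q, plus the
    covariance of the estimate; no iterative reconstruction is needed."""
    AtA_inv = np.linalg.inv(A.T @ A)
    q_hat = AtA_inv @ A.T @ proj
    return q_hat, sigma2 * AtA_inv        # estimate and its covariance matrix

rng = np.random.default_rng(2)
A = rng.random((200, 4))                  # 200 projection bins, 4 ROIs (toy)
q_true = np.array([3.0, 1.0, 2.0, 4.0])
q_hat, cov = roi_ls(A, A @ q_true + 0.01 * rng.standard_normal(200), 1e-4)
print(q_hat, np.sqrt(np.diag(cov)))       # estimates and their uncertainties
```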

  1. Least-Squares Solutions of the Equation AX = B Over Anti-Hermitian Generalized Hamiltonian Matrices

    Institute of Scientific and Technical Information of China (English)

    2006-01-01

    Using the denotative theorem of anti-Hermitian generalized Hamiltonian matrices, we effectively solve the least-squares problem min ‖AX - B‖ over anti-Hermitian generalized Hamiltonian matrices. We derive some necessary and sufficient conditions for solvability of the problem and an expression for the general solution of the matrix equation AX = B. In addition, we also obtain an expression for the solution of a relevant optimal approximation problem.

  2. Discussion About Nonlinear Time Series Prediction Using Least Squares Support Vector Machine

    Institute of Scientific and Technical Information of China (English)

    XU Rui-Rui; BIAN Guo-Xing; GAO Chen-Feng; CHEN Tian-Lun

    2005-01-01

    The least squares support vector machine (LS-SVM) is used to study nonlinear time series prediction. First, the parameter γ and the multi-step prediction capabilities of the LS-SVM network are discussed. Then we employ a clustering method in the model to prune the number of support values. The learning rate and the noise-filtering capability of the LS-SVM are both greatly improved.

  3. A mixed effects least squares support vector machine model for classification of longitudinal data

    OpenAIRE

    Luts, Jan; Molenberghs, Geert; Verbeke, Geert; Van Huffel, Sabine; Suykens, Johan A.K.

    2012-01-01

    A mixed effects least squares support vector machine (LS-SVM) classifier is introduced to extend the standard LS-SVM classifier for handling longitudinal data. The mixed effects LS-SVM model contains a random intercept and allows classification of highly unbalanced data, in the sense that there is an unequal number of observations for each case at non-fixed time points. The methodology consists of a regression modeling and a classification step based on the obtained regression estimates. Regression...

  4. Fast Combinatorial Algorithm for the Solution of Linearly Constrained Least Squares Problems

    Science.gov (United States)

    Van Benthem, Mark H.; Keenan, Michael R.

    2008-11-11

    A fast combinatorial algorithm can significantly reduce the computational burden when solving general equality and inequality constrained least squares problems with large numbers of observation vectors. The combinatorial algorithm provides a mathematically rigorous solution and operates at great speed by reorganizing the calculations to take advantage of the combinatorial nature of the problems to be solved. The combinatorial algorithm exploits the structure that exists in large-scale problems in order to minimize the number of arithmetic operations required to obtain a solution.

  5. Least squares algorithm for region-of-interest evaluation in emission tomography

    International Nuclear Information System (INIS)

    In a simulation study, the performance of the least squares algorithm applied to region-of-interest evaluation was studied. The least squares algorithm is a direct algorithm which does not require any iterative computation scheme and also provides estimates of the statistical uncertainties of the region-of-interest values (covariance matrix). A model of physical factors, such as system resolution, attenuation and scatter, can be specified in the algorithm. In this paper an accurate model of the non-stationary geometrical response of a camera-collimator system was considered. The algorithm was compared with three others which are specialized for region-of-interest evaluation, as well as with the conventional method of summing the reconstructed quantity over the regions of interest. For the latter method, two algorithms were used for image reconstruction; these included filtered back projection and conjugate gradient least squares with the model of non-stationary geometrical response. For noise-free data and for regions of accurate shape, least squares estimates were unbiased within roundoff errors. For noisy data, estimates were still unbiased but precision worsened for regions smaller than the resolution: simulating typical statistics of brain perfusion studies performed with a collimated camera, the estimated standard deviation for a 1 cm square region was 10% with an ultra-high-resolution collimator and 7% with a low-energy all-purpose collimator. Conventional region-of-interest estimates showed comparable precision but were heavily biased if filtered back projection was employed for image reconstruction. Using the conjugate gradient iterative algorithm and the model of non-stationary geometrical response, bias of the estimates decreased on increasing the number of iterations, but precision worsened, reaching an estimated standard deviation of more than 25% for the same 1 cm region.

  6. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    OpenAIRE

    Qingzhou Mao; Liang Zhang; Qingquan Li; Qingwu Hu; Jianwei Yu; Shaojun Feng; Washington Ochieng; Hanlu Gong

    2015-01-01

    In environments that are hostile to Global Navigation Satellites Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, ...

  7. Sparse partial least squares for on-line variable selection in multivariate data streams

    OpenAIRE

    McWilliams, Brian; Montana, Giovanni

    2009-01-01

    In this paper we propose a computationally efficient algorithm for on-line variable selection in multivariate regression problems involving high dimensional data streams. The algorithm recursively extracts all the latent factors of a partial least squares solution and selects the most important variables for each factor. This is achieved by means of only one sparse singular value decomposition which can be efficiently updated on-line and in an adaptive fashion. Simulation results based on art...

  8. A PRESS statistic for two-block partial least squares regression

    OpenAIRE

    McWilliams, Brian; Montana, Giovanni

    2013-01-01

    Predictive modelling of multivariate data where both the covariates and responses are high-dimensional is becoming an increasingly popular task in many data mining applications. Partial Least Squares (PLS) regression often turns out to be a useful model in these situations since it performs dimensionality reduction by assuming the existence of a small number of latent factors that may explain the linear dependence between input and output. In practice, the number of latent factors to be retai...

  9. Gemini Planet Imager Observational Calibrations IX: Least-Squares Inversion Flux Extraction

    OpenAIRE

    Draper, Zachary H.; Marois, Christian; Wolff, Schuyler; Perrin, Marshall; Ingraham, Patrick; Ruffio, Jean-Baptiste; Rantakyrö, Fredrik T.; Hartung, Markus; Goodsell, Stephen J.; with the GPI team

    2014-01-01

    The Gemini Planet Imager (GPI) is an instrument designed to directly image planets and circumstellar disks from 0.9 to 2.5 microns (the $YJHK$ infrared bands) using high contrast adaptive optics with a lenslet-based integral field spectrograph. We develop an extraction algorithm based on a least-squares method to disentangle the spectra and systematic noise contributions simultaneously. We utilize two approaches to adjust for the effect of flexure of the GPI optics which move the position of ...

  10. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    OpenAIRE

    Ying Chen; Shiqing Zhang; Xiaoming Zhao

    2014-01-01

    Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments ...

  11. Kernelized partial least squares for feature reduction and classification of gene microarray data

    OpenAIRE

    Land Walker H; Qiao Xingye; Margolis Daniel E; Ford William S; Paquette Christopher T; Perez-Rogers Joseph F; Borgia Jeffrey A; Yang Jack Y; Deng Youping

    2011-01-01

    Background: The primary objectives of this paper are: 1.) to apply Statistical Learning Theory (SLT), specifically Partial Least Squares (PLS) and Kernelized PLS (K-PLS), to the universal "feature-rich/case-poor" (also known as "large p small n", or "high-dimension, low-sample size") microarray problem by eliminating those features (or probes) that do not contribute to the "best" chromosome bio-markers for lung cancer, and 2.) quantitatively measure and verify (by an independent means...

  12. POWLS-60 Program for refinement of powder diffraction data (powder least squares)

    International Nuclear Information System (INIS)

    A computer program, POWLS-60, based on least-squares calculations, for crystallographic and magnetic structure refinements from X-ray or neutron diffraction powder data is reported. Great flexibility is achieved by a special problem-oriented subroutine, supplied by the user, defining any functions in terms of any parameters. Three examples are given together with the source listing of the whole program. (orig.)

  13. Regression Curve Estimation in the Varying Coefficient Model Using Weighted Least Squares

    OpenAIRE

    Ragil P., Dian; Raupong; ISLAMIYATI, ANNA

    2014-01-01

    The varying-coefficient model for longitudinal data is studied in this proposal. The relationship between the response and predictor variables is assumed to be linear at any given time, but the coefficients change over time. A spline estimator based on weighted least squares (WLS) is used to estimate the regression curve of the varying coefficient model. Generalized Cross-Validation (GCV) is used to select the optimal knots. The application in this proposal is to the ACTG data, namely the relationship...

  14. Integrated application of uniform design and least-squares support vector machines to transfection optimization

    Directory of Open Access Journals (Sweden)

    Pan Jin-Shui

    2009-05-01

    Background: Transfection in mammalian cells based on liposome presents a great challenge for biological professionals. To protect themselves from exogenous insults, mammalian cells tend to manifest poor transfection efficiency. In order to gain high efficiency, we have to optimize several conditions of transfection, such as the amount of liposome, the amount of plasmid, and the cell density at transfection. However, this process may be time-consuming and energy-consuming. Fortunately, several mathematical methods developed in the past decades may facilitate the resolution of this issue. This study investigates the possibility of optimizing transfection efficiency by using a method referred to as the least-squares support vector machine, which requires only a few experiments and maintains fairly high accuracy. Results: A protocol consisting of 15 experiments was performed according to the principle of uniform design. In this protocol, the amount of liposome, the amount of plasmid, and the number of seeded cells 24 h before transfection were set as independent variables and transfection efficiency was set as the dependent variable. A model was deduced from the independent variables and their respective dependent variable. Another protocol made up of 10 experiments was performed to test the accuracy of the model. The model manifested high accuracy. Compared to the traditional method, the integrated application of uniform design and the least-squares support vector machine greatly reduced the number of required experiments, and higher transfection efficiency was achieved. Conclusion: The integrated application of uniform design and the least-squares support vector machine is a simple technique for obtaining high transfection efficiency. Using this novel method, the number of required experiments is greatly cut down while higher efficiency is gained. The least-squares support vector machine may be applicable to many other problems that need to be optimized.

  15. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    OpenAIRE

    Bo Liu; Sanfeng Chen; Shuai Li; Yongsheng Liang

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed by incorporating non-adaptive data-independent Random Projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data is projected onto a random lower-dimensional subspace via s...

  16. LEAST-SQUARES METHOD-BASED FEATURE FITTING AND EXTRACTION IN REVERSE ENGINEERING

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    The main purpose of reverse engineering is to convert discrete data points into piecewise smooth, continuous surface models. Before carrying out model reconstruction it is important to extract geometric features, because the quality of modeling greatly depends on the representation of features. Some fitting techniques for natural quadric surfaces with the least-squares method are described. These techniques can be directly used to extract quadric surface features during the segmentation of point clouds.

  17. Learning rates of least-square regularized regression with polynomial kernels

    Institute of Scientific and Technical Information of China (English)

    2009-01-01

    This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of polynomial space and polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish the direct approximation theorem by Bernstein-Durrmeyer operators in $L^2_{\rho_X}$ with Borel probability measure.

  18. Learning rates of least-square regularized regression with polynomial kernels

    Institute of Scientific and Technical Information of China (English)

    LI BingZheng; WANG GuoMao

    2009-01-01

    This paper presents learning rates for the least-square regularized regression algorithms with polynomial kernels. The target is the error analysis for the regression problem in learning theory. A regularization scheme is given, which yields sharp learning rates. The rates depend on the dimension of polynomial space and polynomial reproducing kernel Hilbert space measured by covering numbers. Meanwhile, we also establish the direct approximation theorem by Bernstein-Durrmeyer operators in $L^2_{\rho_X}$ with Borel probability measure.

  19. Unfolding of nuclear radiation spectra by means of the least square method

    International Nuclear Information System (INIS)

    Using least squares methods, one may easily obtain solutions of the convolution integral by iteration, satisfactory for practical purposes. For continuum radiation the method takes into account the condition that the radiation intensities must be nonnegative. For line radiation of known structure, the parameters characterising that structure are determined. The sum of the squares of the residuals gives the accuracy of the deconvolution procedure. The programs for the described methods need little space; they may be implemented even in small computers. (orig.)
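
    With the non-negativity condition, the deconvolution becomes a non-negative least squares problem for which standard solvers exist. A sketch using SciPy's NNLS; the Gaussian response matrix is an invented stand-in for a real detector response.

```python
import numpy as np
from scipy.optimize import nnls

def unfold(R, measured):
    """Unfold a measured spectrum m ~ R s with s >= 0; rnorm = ||R s - m||
    plays the role of the residual sum of squares as an accuracy measure."""
    s, rnorm = nnls(R, measured)
    return s, rnorm

E = np.arange(50, dtype=float)
R = np.exp(-0.5 * ((E[:, None] - E[None, :]) / 2.0) ** 2)   # Gaussian response
s_true = np.zeros(50); s_true[[10, 30]] = [5.0, 3.0]        # two "lines"
s_hat, resid = unfold(R, R @ s_true)
print(np.flatnonzero(s_hat > 0.5), resid)                   # recovers the lines
```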

  20. Total Least Squares Problem in Linear Algebraic Systems with Multiple Right-Hand Side

    Czech Academy of Sciences Publication Activity Database

    Plešinger, Martin; Hnětynková, Iveta; Sima, D.M.; Strakoš, Zdeněk

    Ostrava : ÚGN AV ČR, 2007 - (Blaheta, R.; Starý, J.), s. 81-84 ISBN 978-80-86407-12-8. [SNA '07. Seminar on Numerical Analysis. Ostrava (CZ), 22.01.2007-26.01.2007] R&D Projects: GA AV ČR 1ET400300415 Institutional research plan: CEZ:AV0Z10300504 Keywords : total least squares * core problem * multiple right-hand sides

  1. A Comparison of Recursive Least Squares Estimation and Kalman Filtering for Flow in Open Channels

    OpenAIRE

    DURDU, Ömer Faruk

    2005-01-01

    An integrated approach to the design of an automatic control system for canals, using a Linear Quadratic Gaussian regulator based on recursive least squares estimation, was developed. The one-dimensional partial differential equations describing open channel flow (the Saint-Venant equations) are linearized about an average operating condition of the canal. The concept of optimal control theory is applied to derive a feedback control algorithm for constant-level control of an irrigation cana...

  2. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    OpenAIRE

    R. Samsudin; P. Saad; A. Shabri

    2010-01-01

    This paper proposes a novel hybrid forecasting model, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM model performs the time series forecasting. In this study the application of GLSSVM to monthly river flow forecasting for the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with...

  3. ANALYSIS OF FAST FOOD RESTAURANT CUSTOMER SATISFACTION USING THE PARTIAL LEAST SQUARES METHOD (Case Study: Burger King Bali)

    OpenAIRE

    MADE SANJIWANI; KETUT JAYANEGARA; I PUTU EKA N. KENCANA

    2015-01-01

    There were two aims of this research. The first was to obtain a model of the relationship between the latent variables service quality and product quality and customer satisfaction. The second was to determine the influence of service quality on customer satisfaction and the influence of product quality on consumer satisfaction at Burger King Bali. This research implemented the Partial Least Squares method with three second-order variables: service quality, product quality, and customer satisfaction. In this r...

  4. A New Least Squares Support Vector Machines Ensemble Model for Aero Engine Performance Parameter Chaotic Prediction

    OpenAIRE

    Dangdang Du; Xiaoliang Jia; Chaobo Hao

    2016-01-01

    To address the nonlinearity, chaos, and small sample size of aero engine performance parameter data, a new ensemble model, named the least squares support vector machine (LSSVM) ensemble model with phase space reconstruction (PSR) and particle swarm optimization (PSO), is presented. First, to guarantee the diversity of individual members, different single-kernel LSSVMs are selected as base predictors, and they output the primary prediction results independently. Then, all the primary predicti...

  5. Safety Monitoring of a Super-High Dam Using Optimal Kernel Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Hao Huang

    2015-01-01

    Considering the complex nonlinear characteristics and multiple response variables of a super-high dam, the kernel partial least squares (KPLS) method, a strongly nonlinear multivariate analysis method, is introduced into the field of dam safety monitoring for the first time. A universal unified optimization algorithm is designed to select the key parameters of the KPLS method and obtain the optimal kernel partial least squares (OKPLS). Then, OKPLS is used to establish a strongly nonlinear multivariate safety monitoring model to identify abnormal behavior of a super-high dam via model multivariate fusion diagnosis. An analysis of deformation monitoring data of a super-high arch dam was undertaken as a case study. Compared to the multiple linear regression (MLR), partial least squares (PLS), and KPLS models, the OKPLS model displayed the best fitting accuracy and forecast precision, and the model multivariate fusion diagnosis reduced the number of false alarms compared to the traditional univariate diagnosis. Thus, OKPLS is a promising method for the application of super-high dam safety monitoring.

  6. Gas transport networks: Entry–exit tariffs via least squares methodology

    International Nuclear Information System (INIS)

    Following some of the directives and regulations in the 3rd EU Energy Package, many of the EU members are reconsidering their methodologies to derive the tariffs charged for access and usage of their gas transport systems. Among these methodologies, the use of entry–exit tariffs computed via least squares has received the most attention over the last few years and there is a wide consensus towards the application of this approach. The main contribution of this paper is to raise awareness on the fact that, even after a given methodology has been chosen, there are still important details to be fixed before the final tariffs are computed. Within the context of the least squares methodology we argue that, although many of these details may seem minor, they can have a big impact on the final outcome. The paper also presents proposals on how these details can be handled while still pursuing the goals set by the EU; goals such as being transparent, cost-reflective, and non-discriminatory. Finally, the paper concludes with an illustration of the discussed proposals, applying them to the Spanish gas transport network. - Highlights: • We present a methodological discussion of entry–exit tariffs via least squares. • We discuss some implementation aspects that have to be handled carefully. • We present a series of proposals to handle these aspects. • Illustration with the Spanish Gas Transmission Network

  7. A cross-correlation objective function for least-squares migration and visco-acoustic imaging

    KAUST Repository

    Dutta, Gaurav

    2014-08-05

    Conventional acoustic least-squares migration inverts for a reflectivity image that best matches the amplitudes of the observed data. However, for field data applications, it is not easy to match the recorded amplitudes because of the visco-elastic nature of the earth and inaccuracies in the estimation of source signature and strength at different shot locations. To relax the requirement for strong amplitude matching of least-squares migration, we use a normalized cross-correlation objective function that is only sensitive to the similarity between the predicted and the observed data. Such a normalized cross-correlation objective function is also equivalent to a time-domain phase inversion method where the main emphasis is only on matching the phase of the data rather than the amplitude. Numerical tests on synthetic and field data show that such an objective function can be used as an alternative to visco-acoustic least-squares reverse time migration (Qp-LSRTM) when there is strong attenuation in the subsurface and the estimation of the attenuation parameter Qp is insufficiently accurate.
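
    The objective itself is easy to state for a single pair of traces. A minimal sketch; the field operators, gradients and the migration itself are omitted:

```python
import numpy as np

def ncc_misfit(predicted, observed, eps=1e-12):
    """Normalized cross-correlation misfit: sensitive only to the similarity
    of the two traces, not to their absolute amplitudes."""
    p = predicted / (np.linalg.norm(predicted) + eps)
    d = observed / (np.linalg.norm(observed) + eps)
    return -float(np.dot(p, d))             # minimized when the shapes match

t = np.linspace(0.0, 1.0, 500)
trace = np.sin(40.0 * t) * np.exp(-3.0 * t)
print(ncc_misfit(5.0 * trace, trace))       # -1.0: rescaling changes nothing
```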

  8. Semi-supervised least squares support vector machine algorithm: application to offshore oil reservoir

    Science.gov (United States)

    Luo, Wei-Ping; Li, Hong-Qi; Shi, Ning

    2016-06-01

    At the early stages of deep-water oil exploration and development, fewer and more widely spaced wells are drilled than in onshore oilfields. Supervised least squares support vector machine algorithms are used to predict the reservoir parameters, but the prediction accuracy is low. We combined the least squares support vector machine (LSSVM) algorithm with semi-supervised learning and established a semi-supervised regression model, which we call the semi-supervised least squares support vector machine (SLSSVM) model. Iterative matrix inversion is also introduced to improve the training ability and training time of the model. We use UCI data to test the generalization of the semi-supervised and supervised LSSVM models. The test results suggest that the generalization performance of the LSSVM model greatly improves, and that with decreasing training samples the generalization performance is better. Moreover, for small-sample models, the SLSSVM method has higher precision than the semi-supervised K-nearest neighbor (SKNN) method. The new semi-supervised LSSVM algorithm was used to predict the distribution of porosity and sandstone in the Jingzhou study area.

  9. Limitation of the Least Square Method in the Evaluation of Dimension of Fractal Brownian Motions

    CERN Document Server

    Qiao, Bingqiang; Zeng, Houdun; Li, Xiang; Dai, Benzhong

    2015-01-01

    With the standard deviation for the logarithm of the re-scaled range $\langle |F(t+\tau)-F(t)|\rangle$ of simulated fractal Brownian motions $F(t)$ given in a previous paper [q14], the method of least squares is adopted to determine the slope, $S$, and intercept, $I$, of the $\log(\langle |F(t+\tau)-F(t)|\rangle)$ vs $\log(\tau)$ plot to investigate the limitation of this procedure. It is found that the reduced $\chi^2$ of the fitting decreases with the increase of the Hurst index, $H$ (the expectation value of $S$), which may be attributed to the correlation among the re-scaled ranges. Similarly, it is found that the errors of the fitting parameters $S$ and $I$ are usually smaller than their corresponding standard deviations. These results show the limitation of using the simple least square method to determine the dimension of a fractal time series. Nevertheless, they may be used to reinterpret the fitting results of the least square method to determine the dimension of fractal Brownian motions more...
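
    The fit under scrutiny is an ordinary least-squares line on the log-log plot. A minimal sketch; generating true fractal Brownian motion is out of scope here, so a plain Brownian path (H = 0.5) stands in:

```python
import numpy as np

def loglog_fit(F, taus):
    """OLS slope S and intercept I of log<|F(t+tau)-F(t)|> versus log(tau)."""
    means = [np.mean(np.abs(F[tau:] - F[:-tau])) for tau in taus]
    A = np.column_stack([np.log(taus), np.ones(len(taus))])
    (S, I), *_ = np.linalg.lstsq(A, np.log(means), rcond=None)
    return S, I

B = np.cumsum(np.random.default_rng(3).standard_normal(100_000))  # H = 0.5
print(loglog_fit(B, np.arange(1, 200)))   # slope close to 0.5
```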

  10. Partial Least Squares (PLS) methods for neuroimaging: a tutorial and review.

    Science.gov (United States)

    Krishnan, Anjali; Williams, Lynne J; McIntosh, Anthony Randal; Abdi, Hervé

    2011-05-15

    Partial Least Squares (PLS) methods are particularly suited to the analysis of relationships between measures of brain activity and of behavior or experimental design. In neuroimaging, PLS refers to two related methods: (1) symmetric PLS or Partial Least Squares Correlation (PLSC), and (2) asymmetric PLS or Partial Least Squares Regression (PLSR). The most popular (by far) version of PLS for neuroimaging is PLSC. It exists in several varieties based on the type of data that are related to brain activity: behavior PLSC analyzes the relationship between brain activity and behavioral data, task PLSC analyzes how brain activity relates to pre-defined categories or experimental design, seed PLSC analyzes the pattern of connectivity between brain regions, and multi-block or multi-table PLSC integrates one or more of these varieties in a common analysis. PLSR, in contrast to PLSC, is a predictive technique which, typically, predicts behavior (or design) from brain activity. For both PLS methods, statistical inferences are implemented using cross-validation techniques to identify significant patterns of voxel activation. This paper presents both PLS methods and illustrates them with small numerical examples and typical applications in neuroimaging. PMID:20656037
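
    For the PLSR flavor (predicting behavior from brain activity), a toy sketch with scikit-learn; the dimensions and data are invented stand-ins for voxels and behavioral measures:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 100))    # 20 "subjects" x 100 "voxels"
Y = X[:, :3] @ rng.standard_normal((3, 3)) + 0.1 * rng.standard_normal((20, 3))

pls = PLSRegression(n_components=2)   # small number of latent variables
pls.fit(X, Y)                         # PLSR: behavior predicted from activity
print(pls.score(X, Y))                # in-sample R^2; cross-validate in practice
```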

  11. Preprocessing in Matlab Inconsistent Linear System for a Meaningful Least Squares Solution

    Science.gov (United States)

    Sen, Symal K.; Shaykhian, Gholam Ali

    2011-01-01

    Mathematical models of many physical/statistical problems are systems of linear equations. Due to measurement and possible human errors/mistakes in modeling/data, as well as due to certain assumptions to reduce complexity, inconsistency (contradiction) is injected into the model, viz. the linear system. While any inconsistent system, irrespective of the degree of inconsistency, always has a least-squares solution, one needs to check whether an equation is too inconsistent or, equivalently, too contradictory. Such an equation will affect/distort the least-squares solution to such an extent that it is rendered unacceptable/unfit for use in a real-world application. We propose an algorithm which (i) prunes numerically redundant linear equations from the system, as these do not add any new information to the model, (ii) detects contradictory linear equations along with their degree of contradiction (inconsistency index), (iii) removes those equations presumed to be too contradictory, and then (iv) obtains the minimum norm least-squares solution of the acceptably inconsistent reduced linear system. The algorithm, presented in Matlab, reduces the computational and storage complexities and also improves the accuracy of the solution. It also provides the necessary warning about the existence of too much contradiction in the model. In addition, we suggest a thorough relook into the mathematical modeling to determine why unacceptable contradiction has occurred, thus prompting us to make the necessary corrections/modifications to the models - both mathematical and, if necessary, physical.
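
    The minimum norm least-squares solution that step (iv) produces can be obtained with standard numerical tools; the pruning and inconsistency-index logic of the proposed algorithm is not reproduced here. A minimal sketch, assuming a small synthetic system with one numerically redundant row and a mild contradiction:

```python
# Minimum norm least-squares solution of an inconsistent system (illustration).
import numpy as np

A = np.array([[1.0, 2.0],
              [2.0, 4.0],      # numerically redundant: a multiple of row 1
              [1.0, 1.0]])
b = np.array([3.0, 6.1, 2.0])  # slight contradiction between rows 1 and 2

# The rank reveals the redundancy; lstsq returns the minimum norm LS solution
# even when A is rank deficient.
rank = np.linalg.matrix_rank(A)
x, res, rank_, svals = np.linalg.lstsq(A, b, rcond=None)
print(rank, x)
```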

  12. Weighted least-squares algorithm for phase unwrapping based on confidence level in frequency domain

    Science.gov (United States)

    Wang, Shaohua; Yu, Jie; Yang, Cankun; Jiao, Shuai; Fan, Jun; Wan, Yanyan

    2015-12-01

    Phase unwrapping is a key step in InSAR (synthetic aperture radar interferometry) processing, and its result may directly affect the accuracy of the DEM (digital elevation model) and of ground deformation estimates. However, decoherence phenomena such as shadow and layover, in areas of severe land subsidence where the terrain is steep and the slope changes greatly, cause errors to propagate through the differential wrapped phase, leading to inaccurate unwrapped phase. In order to eliminate the effect of noise and reduce the effect of undersampling caused by topographic factors, a weighted least-squares method based on a confidence level in the frequency domain is used in this study. The method expresses the terrain slope in the interferogram as the partial phase frequency in the range and azimuth directions and integrates these into a confidence level. This parameter is then used as a constraint in the nonlinear least squares phase unwrapping algorithm, to smooth the unwanted unwrapped phase gradients and improve the accuracy of phase unwrapping. Finally, a comparison with interferometric data of the Beijing subsidence area obtained from TerraSAR verifies that the algorithm has higher accuracy and stability than standard weighted least-squares phase unwrapping algorithms, and that it accounts for terrain factors.

  13. Comparison of Response Surface Construction Methods for Derivative Estimation Using Moving Least Squares, Kriging and Radial Basis Functions

    Science.gov (United States)

    Krishnamurthy, Thiagarajan

    2005-01-01

    Response surface construction methods using Moving Least Squares (MLS), Kriging and Radial Basis Functions (RBF) are compared with the Global Least Squares (GLS) method in three numerical examples for derivative generation capability. Also, a new Interpolating Moving Least Squares (IMLS) method adopted from the meshless method is presented. It is found that the response surface construction methods using Kriging and RBF interpolation yield more accurate results than the MLS and GLS methods. Several computational aspects of the response surface construction methods are also discussed.

  14. Multisource least-squares migration of marine streamer and land data with frequency-division encoding

    KAUST Repository

    Huang, Yunsong

    2012-05-22

    Multisource migration of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. The accompanying crosstalk noise, in addition to the migration footprint, can be reduced by least-squares inversion. But the application of this approach to marine streamer data is hampered by the mismatch between the limited number of live traces/shot recorded in the field and the pervasive number of traces generated by the finite-difference modelling method. This leads to a strong mismatch in the misfit function and results in strong artefacts (crosstalk) in the multisource least-squares migration image. To eliminate this noise, we present a frequency-division multiplexing (FDM) strategy with iterative least-squares migration (ILSM) of supergathers. The key idea is, at each ILSM iteration, to assign a unique frequency band to each shot gather. In this case there is no overlap in the crosstalk spectrum of each migrated shot gather m(x, ω_i), so the spectral crosstalk product satisfies m(x, ω_i)m(x, ω_j) = 0 for i ≠ j. Our results in applying this method to 2D marine data for a SEG/EAGE salt model show better resolved images than standard migration computed at about 1/10th of the cost. Similar results are achieved after applying this method to synthetic data for a 3D SEG/EAGE salt model, except the acquisition geometry is similar to that of a marine OBS survey. Here, the speedup of this method over conventional migration is more than 10. We conclude that multisource migration for a marine geometry can be successfully achieved by a frequency-division encoding strategy, as long as crosstalk-prone sources are segregated in their spectral content. This is both the strength and the potential limitation of this method. © 2012 European Association of Geoscientists & Engineers.

  15. A Galerkin least-square stabilisation technique for hyperelastic biphasic soft tissue

    CERN Document Server

    Vignollet, Julien; Kaczmarczyk, Lukasz

    2011-01-01

    A hyperelastic biphasic model is presented. For slow-draining problems (permeability less than 1×10⁻² mm⁴ N⁻¹ s⁻¹), numerical instabilities in the form of non-physical oscillations in the pressure field are observed in 3D problems using tetrahedral Taylor-Hood finite elements. As an alternative to considerable mesh refinement, a Galerkin least-square stabilisation framework is proposed. This technique drastically reduces the pressure discrepancies and prevents these oscillations from propagating towards the centre of the medium. The performance and robustness of this technique are demonstrated on a 3D numerical example.

  16. A comparison of three additive tree algorithms that rely on a least-squares loss criterion.

    Science.gov (United States)

    Smith, T J

    1998-11-01

    The performances of three additive tree algorithms which seek to minimize a least-squares loss criterion were compared. The algorithms included the penalty-function approach of De Soete (1983), the iterative projection strategy of Hubert & Arabie (1995) and the two-stage ADDTREE algorithm (Corter, 1982; Sattath & Tversky, 1977). Model fit, comparability of structure, processing time and metric recovery were assessed. Results indicated that the iterative projection strategy consistently located the best-fitting tree, but also displayed a wider range and larger number of local optima. PMID:9854946

  17. Distribution of error in least-squares solution of an overdetermined system of linear simultaneous equations

    Science.gov (United States)

    Miller, C. D.

    1972-01-01

    Probability density functions were derived for errors in the evaluation of unknowns by the least squares method in a system of nonhomogeneous linear equations. The coefficients of the unknowns were assumed correct, and computational precision was also assumed. A vector space was used, with the number of dimensions equal to the number of equations. An error vector was defined and assumed to have a uniform distribution of orientation throughout the vector space. The density functions are shown to be insensitive to the biasing effects of the source of the system of equations.

  18. And still, a new beginning: the Galerkin least-squares gradient method

    International Nuclear Information System (INIS)

    A finite element method is proposed to solve a scalar singular diffusion problem. The method is constructed by adding to the standard Galerkin formulation a mesh-dependent term obtained by taking the gradient of the Euler-Lagrange equation and weighting it in a least-squares sense. For the one-dimensional homogeneous problem the method is designed to produce nodally exact solutions. An error estimate shows that the method converges optimally for any value of the singular parameter. Numerical results demonstrate the good stability and accuracy properties of the method. (author)

  19. Multigrid for the Galerkin least squares method in linear elasticity: The pure displacement problem

    Energy Technology Data Exchange (ETDEWEB)

    Yoo, Jaechil [Univ. of Wisconsin, Madison, WI (United States)

    1996-12-31

    Franca and Stenberg developed several Galerkin least squares methods for the solution of the problem of linear elasticity. That work concerned itself only with the error estimates of the method. It did not address the related problem of finding effective methods for the solution of the associated linear systems. In this work, we prove the convergence of a multigrid (W-cycle) method. This multigrid is robust in that the convergence is uniform as the parameter ν goes to 1/2. Computational experiments are included.

  20. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    International Nuclear Information System (INIS)

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.

  1. Review of the Palisades pressure vessel accumulated fluence estimate and of the least squares methodology employed

    Energy Technology Data Exchange (ETDEWEB)

    Griffin, P.J.

    1998-05-01

    This report provides a review of the Palisades submittal to the Nuclear Regulatory Commission requesting endorsement of their accumulated neutron fluence estimates based on a least squares adjustment methodology. This review highlights some minor issues in the applied methodology and provides some recommendations for future work. The overall conclusion is that the Palisades fluence estimation methodology provides a reasonable approach to a "best estimate" of the accumulated pressure vessel neutron fluence and is consistent with the state-of-the-art analysis as detailed in community consensus ASTM standards.

  2. Least Squares Spectral Analysis and Its Application to Superconducting Gravimeter Data Analysis

    Institute of Scientific and Technical Information of China (English)

    YIN Hui; Spiros D. Pagiatakis

    2004-01-01

    Detection of a periodic signal hidden in noise is the goal of Superconducting Gravimeter (SG) data analysis. Due to spikes, gaps, datum shifts (offsets) and other disturbances, the traditional FFT method shows inherent limitations. Instead, least squares spectral analysis (LSSA) has shown itself more suitable than Fourier analysis for gappy, unequally spaced and unequally weighted data series in a variety of applications in geodesy and geophysics. This paper reviews the principle of LSSA and gives a possible strategy for the analysis of time series obtained from the Canadian Superconducting Gravimeter Installation (CGSI), with gaps, offsets, unequal sampling, decimation of the data and unequally weighted data points.
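
    A closely related least-squares spectral estimator, the Lomb-Scargle periodogram, is available in SciPy and copes with exactly the kind of gappy, unevenly sampled series described above. A minimal sketch on synthetic data (the sampling pattern, frequency grid, and signal are illustrative assumptions):

```python
# Least-squares spectral estimate of an unevenly sampled series using SciPy's
# Lomb-Scargle periodogram (synthetic data; illustrative only).
import numpy as np
from scipy.signal import lombscargle

rng = np.random.default_rng(2)
t = np.sort(rng.uniform(0.0, 100.0, 300))            # gappy, unequal sampling
y = np.sin(2 * np.pi * 0.2 * t) + 0.3 * rng.normal(size=t.size)

w = 2 * np.pi * np.linspace(0.01, 1.0, 500)          # angular frequency grid
power = lombscargle(t, y - y.mean(), w)
print(w[power.argmax()] / (2 * np.pi))               # ~0.2, the true frequency
```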

  3. Application of Least-Squares Adjustment Technique to Geometric Camera Calibration and Photogrammetric Flow Visualization

    Science.gov (United States)

    Chen, Fang-Jenq

    1997-01-01

    Flow visualization produces data in the form of two-dimensional images. If the optical components of a camera system are perfect, the transformation equations between the two-dimensional image and the three-dimensional object space are linear and easy to solve. However, real camera lenses introduce nonlinear distortions that affect the accuracy of the transformation unless proper corrections are applied. An iterative least-squares adjustment algorithm is developed to solve the nonlinear transformation equations incorporating distortion corrections. Experimental applications demonstrate that a relative precision on the order of one part in 40,000 is achievable without tedious laboratory calibrations of the camera.
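
    Nonlinear transformation equations with distortion terms are a natural fit for an iterative least-squares solver. The sketch below uses SciPy's least_squares on a deliberately simplified planar model; the scale/translation/single-radial-distortion parameterization and the synthetic points are assumptions for illustration, not the paper's full camera model.

```python
# Iterative least-squares adjustment of a simplified distortion model
# (hypothetical parameterization; not the paper's full camera model).
import numpy as np
from scipy.optimize import least_squares

def residuals(p, xy_obj, xy_img):
    # p = (scale, tx, ty, k1): similarity transform plus one radial term.
    s, tx, ty, k1 = p
    r2 = (xy_obj ** 2).sum(axis=1)
    pred = s * xy_obj * (1.0 + k1 * r2)[:, None] + np.array([tx, ty])
    return (pred - xy_img).ravel()

rng = np.random.default_rng(3)
xy_obj = rng.uniform(-1, 1, size=(20, 2))            # known target points
true = np.array([1.05, 0.10, -0.20, 0.03])
xy_img = residuals(true, xy_obj, np.zeros((20, 2))).reshape(20, 2)
xy_img += 1e-3 * rng.normal(size=xy_img.shape)       # measurement noise

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0, 0.0], args=(xy_obj, xy_img))
print(fit.x)                                          # ~ true parameters
```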

  4. Calibration data processing of streak camera with nonlinear-least-squares method

    International Nuclear Information System (INIS)

    The result of full-screen sweeping rates of a streak camera (SC) is obtained using a nonlinear least-squares method. The uncertainty of this method is about 0.04%, far below the SC's systematic uncertainty. The full-screen result eliminates nonlinearity and space distortion of the sweeping rates and reduces the error of the SC's measurement to about 1.5%. The robustness and time-efficiency of this method make full-screen calibration in the time domain and space domain feasible. (authors)

  5. Fully Modified Narrow-Band Least Squares Estimation of Weak Fractional Cointegration

    DEFF Research Database (Denmark)

    Nielsen, Morten Ørregaard; Frederiksen, Per

    We consider estimation of the cointegrating relation in the weak fractional cointegration model, where the strength of the cointegrating relation (difference in memory parameters) is less than one-half. A special case is the stationary fractional cointegration model, which has found important application recently, especially in financial economics. Previous research on this model has considered a semiparametric narrow-band least squares (NBLS) estimator in the frequency domain, but in the stationary case its asymptotic distribution has been derived only under a condition of non-coherence between...

  6. A Least-Squares Method for Unfolding Convolution Products in X-ray Diffraction Line Profiles

    OpenAIRE

    Yokoyama, Fumiyoshi

    1982-01-01

    A deconvolution method for the X-ray diffraction line profile is proposed, which is based on the conventional least-squares method. The true profile is assumed to be a functional form. The numerical values of parameters of the function assumed are determined so that the calculated profile, which is a convolution of the function and the instrumental profile, has a minimum deviation from the observed one. The method is illustrated by analysis of the X-ray powder diffraction profile of sodium ch...

  7. Least Square Method for Porous Fin in the Presence of Uniform Magnetic Field

    Directory of Open Access Journals (Sweden)

    H.A. Hoshyar

    2016-01-01

    Full Text Available In this study, the Least Square Method (LSM), a powerful and easy-to-use analytic tool, is used to predict the temperature distribution in a porous fin exposed to a uniform magnetic field. The heat transfer through the porous medium is simulated using the passage velocity from Darcy's model. It has been attempted to show the capabilities and wide-range applications of the LSM in comparison with a numerical analysis, the Boundary Value Problem (BVP) solver, in solving this problem. The results reveal that the present method is very effective and convenient, and it is suggested that the LSM can find wide application in engineering and physics.

  8. A negative-norm least-squares method for time-harmonic Maxwell equations

    KAUST Repository

    Copeland, Dylan M.

    2012-04-01

    This paper presents and analyzes a negative-norm least-squares finite element discretization method for the dimension-reduced time-harmonic Maxwell equations in the case of axial symmetry. The reduced equations are expressed in cylindrical coordinates, and the analysis consequently involves weighted Sobolev spaces based on the degenerate radial weighting. The main theoretical results established in this work include existence and uniqueness of the continuous and discrete formulations and error estimates for simple finite element functions. Numerical experiments confirm the error estimates and efficiency of the method for piecewise constant coefficients. © 2011 Elsevier Inc.

  9. Extracting information from two-dimensional electrophoresis gels by partial least squares regression

    DEFF Research Database (Denmark)

    Jessen, Flemming; Lametsch, R.; Bendixen, E.;

    2002-01-01

    ...disappear depending on the experimental conditions. Such biomarkers are found by comparing the relative volumes of individual spots in the individual gels. Multivariate statistical analysis and modelling of 2-DE data for comparison and classification is an alternative approach utilising the combination of all proteins/spots in the gels. In the present study it is demonstrated how information can be extracted by multivariate data analysis. The strategy is based on partial least squares regression followed by variable selection to find proteins that individually or in combination with other proteins vary...

  10. Least-squares dosimetry unfolding: the program STAY'SL. [STAY'SL Code

    Energy Technology Data Exchange (ETDEWEB)

    Perey, F. G.

    1977-10-01

    A PDP-10 FORTRAN IV computer program, STAY'SL, which solves the dosimetry unfolding problem by the method of least squares, is described. The solution (the output spectrum and its covariance matrix) is calculated by minimizing chi-square based on the input data (the activation data, the input spectrum, the dosimetry cross sections, and their uncertainties given by covariance matrices). The solution therefore reflects the uncertainties in all of the input data and their correlations. The correlations among the various dosimetry cross sections are taken into account; however, the activation data, input spectrum and cross sections as classes are assumed to be uncorrelated with each other.
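
    A least-squares adjustment of this kind has a compact closed form when everything is kept linear. The sketch below is a generic illustration of that update, not the STAY'SL code itself; the variable names (prior spectrum x0 with covariance C, response matrix A, measured activities d with covariance V) and the toy data are assumptions.

```python
# Generic linear least-squares adjustment with full covariances, in the
# spirit of dosimetry unfolding (all names and data here are assumptions).
import numpy as np

def adjust(x0, C, A, d, V):
    """Update prior spectrum x0 (cov C) with activities d (cov V), d ~ A x."""
    S = A @ C @ A.T + V               # covariance of the predicted activities
    K = C @ A.T @ np.linalg.inv(S)    # least-squares gain
    x = x0 + K @ (d - A @ x0)         # adjusted (output) spectrum
    Cx = C - K @ A @ C                # adjusted output covariance
    return x, Cx

# Toy usage: 5-group spectrum, 3 dosimetry reactions.
rng = np.random.default_rng(4)
x0 = np.ones(5)
C = 0.05 * np.eye(5)
A = rng.uniform(0.1, 1.0, size=(3, 5))     # stand-in dosimetry cross sections
d = A @ (x0 * 1.1)                         # activities from a 10% higher truth
V = 0.01 * np.eye(3)
x, Cx = adjust(x0, C, A, d, V)
```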

  11. Automatic classification of harmonic data using $k$-means and least square support vector machine

    OpenAIRE

    ERİŞTİ, HÜSEYİN; TÜMEN, VEDAT; YILDIRIM, ÖZAL; ERİŞTİ, BELKIS; DEMİR, Yakup

    2015-01-01

    In this paper, an effective classification approach to classify harmonic data has been proposed. In the proposed classifier approach, harmonic data obtained through a 3-phase system have been classified by using $k$-means and least square support vector machine (LS-SVM) models. In order to obtain class details regarding harmonic data, a $k$-means clustering algorithm has been applied to these data first. The training of the LS-SVM model has been realized with the class details obtained throug...

  12. Pressurized water reactor monitoring. Study of detection, diagnostic and estimation (least squares and filtering) methods

    International Nuclear Information System (INIS)

    This thesis presents a study of the surveillance of the primary circuit water inventory of a pressurized water reactor. A reference model is developed for an automatic system ensuring detection and real-time diagnosis. The methods applied are statistical tests and an adapted pattern recognition method. The estimation of the detected anomalies is treated by the least squares fit method and by filtering. A new projected optimization method with superlinear convergence is developed in this framework, and a segmented linearization of the model is introduced, with a view to multiple filtering. 46 refs

  13. Retinal Oximetry with 510-600 nm Light Based on Partial Least-Squares Regression Technique

    Science.gov (United States)

    Arimoto, Hidenobu; Furukawa, Hiromitsu

    2010-11-01

    The oxygen saturation distribution in the retinal blood stream is estimated by measuring spectral images and applying partial least-squares regression. The wavelength range used for the calculation is from 510 to 600 nm. The regression model for estimating the retinal oxygen saturation is built on the basis of arterial and venous blood spectra. The experiment is performed using an originally designed spectral ophthalmoscope. The obtained two-dimensional (2D) oxygen saturation map indicates reasonable oxygen levels across the retina. The measurement quality is compared with those obtained using other wavelength sets and data processing methods.

  14. Thrust estimator design based on least squares support vector regression machine

    Institute of Scientific and Technical Information of China (English)

    ZHAO Yong-ping; SUN Jian-guo

    2010-01-01

    In order to realize direct thrust control instead of traditional sensor-based control for aero-engines, it is indispensable to design a thrust estimator with high accuracy, so a scheme for thrust estimator design based on the least squares support vector regression machine is proposed to solve this problem. Furthermore, numerical simulations confirm the effectiveness of the presented scheme. During the process of estimator design, a wrapper criterion that can not only reduce the computational complexity but also enhance the generalization performance is proposed to select input variables for the estimator.

  15. An Empirical State Error Covariance Matrix for the Weighted Least Squares Estimation Method

    Science.gov (United States)

    Frisbee, Joseph H., Jr.

    2011-01-01

    State estimation techniques effectively provide mean state estimates. However, the theoretical state error covariance matrices provided as part of these techniques often suffer from a lack of confidence in their ability to describe the uncertainty in the estimated states. By a reinterpretation of the equations involved in the weighted least squares algorithm, it is possible to directly arrive at an empirical state error covariance matrix. This proposed empirical state error covariance matrix will contain the effect of all error sources, known or not. Results based on the proposed technique will be presented for a simple, two-observer, measurement-error-only problem.
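
    For reference, the weighted least squares estimate and its theoretical state error covariance (the quantity the paper reinterprets) take a standard closed form. A minimal sketch, assuming a linear measurement model with weight matrix W; this is the textbook construction, not the paper's empirical one.

```python
# Weighted least squares with its theoretical state error covariance
# (standard textbook form; not the paper's empirical construction).
import numpy as np

def wls(A, b, W):
    # Solve min_x (Ax - b)^T W (Ax - b); W is typically the inverse
    # measurement covariance.
    N = A.T @ W @ A                   # normal matrix
    x = np.linalg.solve(N, A.T @ W @ b)
    cov_x = np.linalg.inv(N)          # theoretical state error covariance
    return x, cov_x

rng = np.random.default_rng(5)
A = rng.normal(size=(30, 4))
x_true = np.array([1.0, -2.0, 0.5, 3.0])
sigma = 0.1
b = A @ x_true + sigma * rng.normal(size=30)
x_hat, cov = wls(A, b, np.eye(30) / sigma ** 2)
```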

  16. LEAST-SQUARES MIXED FINITE ELEMENT METHODS FOR THE INCOMPRESSIBLE MAGNETOHYDRODYNAMIC EQUATIONS

    Institute of Scientific and Technical Information of China (English)

    Shao-qin Gao

    2005-01-01

    Least-squares mixed finite element methods are proposed and analyzed for the incompressible magnetohydrodynamic equations, where the two vorticities are additionally introduced as independent variables so that the primal equations are transformed into first-order systems. We show that coerciveness and optimal error bounds hold in appropriate norms for all variables under consideration, which can be approximated by all kinds of continuous elements. Consequently, the Babuska-Brezzi condition (i.e. the inf-sup condition) and the indefiniteness, which are essential features of the classical mixed methods, are avoided.

  17. Voronoi based discrete least squares meshless method for heat conduction simulation in highly irregular geometries

    Science.gov (United States)

    Labibzadeh, Mojtaba

    2016-01-01

    A new technique is used in the Discrete Least Squares Meshfree (DLSM) method to remove the common deficiencies of meshfree methods in handling problems containing cracks or concave boundaries. An enhanced Discrete Least Squares Meshless method, named VDLSM (Voronoi-based Discrete Least Squares Meshless), is developed in order to solve the steady-state heat conduction problem in irregular solid domains including concave boundaries or cracks. Existing meshless methods cannot precisely estimate the required unknowns in the vicinity of the above-mentioned boundaries, and published research is limited to domains with regular convex boundaries. To this end, the advantages of the Voronoi tessellation algorithm are employed: the support domains of the sampling points are determined using a Voronoi tessellation. For the weight functions, a cubic spline polynomial is used, based on a normalized distance variable which can provide a high degree of smoothness near the discontinuities mentioned above. Finally, Moving Least Squares (MLS) shape functions are constructed using a variational method. This straightforward scheme can properly estimate the unknowns (in this particular study, the temperatures at the nodal points) near and on the crack faces, crack tip or concave boundaries without the need for extra backward corrective procedures, i.e. iterative calculations to modify the shape functions of the nodes located near or on these types of complex boundaries. The accuracy and efficiency of the presented method are investigated by analyzing four particular examples. Results obtained from VDLSM are compared with available analytical results or with results of the well-known Finite Element Method (FEM) when an analytical solution is not available. The comparisons reveal that the proposed technique gives high accuracy for the solution of steady-state heat conduction problems within cracked domains or domains with concave boundaries.

  18. A Least Square Approach to Analyze Usage Data for Effective Web Personalization

    Directory of Open Access Journals (Sweden)

    S. S. Patil

    2011-09-01

    Full Text Available Web server logs contain abundant information about the nature of the users accessing a site. Web usage mining, in conjunction with standard approaches to personalization, helps to address some of the shortcomings of these techniques, including reliance on subjective user ratings, lack of scalability, poor performance, and sparse data. However, it is not sufficient to discover patterns from usage data for performing the personalization tasks; it is necessary to derive a good quality of aggregate usage profiles, which indeed will help to devise efficient recommendations for web personalization [11, 12, 13]. This paper presents and experimentally evaluates a technique for finely tuning user clusters based on similar web access patterns in their usage profiles, approximated through a least squares approach. Each cluster contains users with similar browsing patterns. These clusters are useful in web personalization so that the site communicates better with its users. Experimental results indicate that using the generated aggregate usage profiles, with clusters approximated through the least squares approach, effectively personalizes at early stages of user visits to a site without deeper knowledge about the users.

  19. Online Least Squares Estimation with Self-Normalized Processes: An Application to Bandit Problems

    CERN Document Server

    Abbasi-Yadkori, Yasin; Szepesvari, Csaba

    2011-01-01

    The analysis of online least squares estimation is at the heart of many stochastic sequential decision making problems. We employ tools from self-normalized processes to provide a simple and self-contained proof of a tail bound of a vector-valued martingale. We use the bound to construct new, tighter confidence sets for the least squares estimate. We apply the confidence sets to several online decision problems, such as the multi-armed and the linearly parametrized bandit problems. The confidence sets are potentially applicable to other problems such as sleeping bandits, generalized linear bandits, and other linear control problems. We improve the regret bound of the Upper Confidence Bound (UCB) algorithm of Auer et al. (2002) and show that its regret is, with high probability, a problem-dependent constant. In the case of linear bandits (Dani et al., 2008), we improve the problem-dependent bound in the dimension and number of time steps. Furthermore, as opposed to the previous result, we prove that our bou...

  20. Optimization of Active Muscle Force-Length Models Using Least Squares Curve Fitting.

    Science.gov (United States)

    Mohammed, Goran Abdulrahman; Hou, Ming

    2016-03-01

    The objective of this paper is to propose an asymmetric Gaussian function as an alternative to the existing active force-length models, and to optimize this model, along with several other existing models, by using the least squares curve fitting method. The minimal set of coefficients is identified for each of these models to facilitate the least squares curve fitting. Sarcomere simulated data and one set of rabbit extensor digitorum II experimental data are used to illustrate optimal curve fitting of the selected force-length functions. The results show that all the curves fit reasonably well with the simulated and experimental data, while the Gordon-Huxley-Julian model and the asymmetric Gaussian function are better than the other functions in terms of the statistical scores root mean squared error (RMSE) and R-squared. However, the differences in RMSE scores are insignificant (0.3-6% for simulated data and 0.2-5% for experimental data). The proposed asymmetric Gaussian model and the method of parametrization of this and the other force-length models mentioned above can be used in studies on active force-length relationships of skeletal muscles that generate forces to cause movements of human and animal bodies. PMID:26276984
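
    Least squares curve fitting of such a model is a one-liner with SciPy. The sketch below uses one plausible asymmetric Gaussian parameterization (different widths on each side of the peak) and synthetic force-length data; both are illustrative assumptions, not the paper's exact model or data.

```python
# Least squares fit of an asymmetric Gaussian force-length curve
# (one plausible parameterization; synthetic data for illustration).
import numpy as np
from scipy.optimize import curve_fit

def asym_gaussian(l, a, mu, s_left, s_right):
    # Peak force a at optimal length mu; different widths on each side.
    s = np.where(l < mu, s_left, s_right)
    return a * np.exp(-0.5 * ((l - mu) / s) ** 2)

rng = np.random.default_rng(6)
L = np.linspace(0.5, 1.5, 50)                 # normalized muscle length
F = asym_gaussian(L, 1.0, 1.0, 0.15, 0.25) + 0.02 * rng.normal(size=L.size)

popt, pcov = curve_fit(asym_gaussian, L, F, p0=[1.0, 1.0, 0.2, 0.2])
rmse = np.sqrt(np.mean((asym_gaussian(L, *popt) - F) ** 2))
```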

  1. Least-squares migration of multisource data with a deblurring filter

    KAUST Repository

    Dai, Wei

    2011-09-01

    Least-squares migration (LSM) has been shown to be able to produce high-quality migration images, but its computational cost is considered to be too high for practical imaging. We have developed a multisource least-squares migration algorithm (MLSM) to increase the computational efficiency by using the blended sources processing technique. To expedite convergence, a multisource deblurring filter is used as a preconditioner to reduce the data residual. This MLSM algorithm is applicable with Kirchhoff migration, wave-equation migration, or reverse time migration, and the gain in computational efficiency depends on the choice of migration method. Numerical results with Kirchhoff LSM on the 2D SEG/EAGE salt model show that an accurate image is obtained by migrating a supergather of 320 phase-encoded shots. When the encoding functions are the same for every iteration, the input/output cost of MLSM is reduced by 320 times. Empirical results show that the crosstalk noise introduced by blended sources is more effectively reduced when the encoding functions are changed at every iteration. The analysis of signal-to-noise ratio (S/N) suggests that not too many iterations are needed to enhance the S/N to an acceptable level. Therefore, when implemented with wave-equation migration or reverse time migration methods, the MLSM algorithm can be more efficient than the conventional migration method. © 2011 Society of Exploration Geophysicists.

  2. Online segmentation of time series based on polynomial least-squares approximations.

    Science.gov (United States)

    Fuchs, Erich; Gruber, Thiemo; Nitschke, Jiri; Sick, Bernhard

    2010-12-01

    The paper presents SwiftSeg, a novel technique for online time series segmentation and piecewise polynomial representation. The segmentation approach is based on a least-squares approximation of time series in sliding and/or growing time windows utilizing a basis of orthogonal polynomials. This allows the definition of fast update steps for the approximating polynomial, where the computational effort depends only on the degree of the approximating polynomial and not on the length of the time window. The coefficients of the orthogonal expansion of the approximating polynomial-obtained by means of the update steps-can be interpreted as optimal (in the least-squares sense) estimators for average, slope, curvature, change of curvature, etc., of the signal in the time window considered. These coefficients, as well as the approximation error, may be used in a very intuitive way to define segmentation criteria. The properties of SwiftSeg are evaluated by means of some artificial and real benchmark time series. It is compared to three different offline and online techniques to assess its accuracy and runtime. It is shown that SwiftSeg-which is suitable for many data streaming applications-offers high accuracy at very low computational costs. PMID:20975120
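
    The heart of SwiftSeg is a least-squares polynomial fit maintained over a sliding window; the paper's contribution is the O(degree) incremental update in an orthogonal polynomial basis. The naive version below refits each window from scratch and is only meant to make the underlying quantities (coefficients and approximation error per window) concrete; the window length and degree are illustrative assumptions.

```python
# Naive sliding-window polynomial least-squares fit; SwiftSeg replaces the
# per-window refit below with fast incremental updates in an orthogonal
# polynomial basis (window length and degree are illustrative assumptions).
import numpy as np

def sliding_poly(y, window=20, degree=2):
    t = np.arange(window)
    coeffs, sse = [], []
    for i in range(len(y) - window + 1):
        c = np.polyfit(t, y[i:i + window], degree)
        r = y[i:i + window] - np.polyval(c, t)
        coeffs.append(c)              # average/slope/curvature-like descriptors
        sse.append(float(r @ r))      # approximation error -> segmentation cue
    return np.array(coeffs), np.array(sse)

rng = np.random.default_rng(7)
signal = np.concatenate([np.linspace(0, 1, 100), np.linspace(1, 0, 100)])
coeffs, sse = sliding_poly(signal + 0.01 * rng.normal(size=200))
```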

  3. Non-negative least-squares variance component estimation with application to GPS time series

    Science.gov (United States)

    Amiri-Simkooei, A. R.

    2016-05-01

    The problem of negative variance components is likely to occur in many geodetic applications. This problem can be avoided if non-negativity constraints on the variance components (VCs) are introduced into the stochastic model. Based on the standard non-negative least-squares (NNLS) theory, this contribution presents the method of non-negative least-squares variance component estimation (NNLS-VCE). The method is easy to understand, simple to implement, and efficient in practice. The NNLS-VCE is then applied to the coordinate time series of permanent GPS stations to simultaneously estimate the amplitudes of different noise components such as white noise, flicker noise, and random walk noise. If a noise model is unlikely to be present, its amplitude is automatically estimated to be zero. The results obtained from 350 GPS permanent stations indicate that the noise characteristics of the GPS time series are well described by a combination of white noise and flicker noise: all time series contain positive noise amplitudes for white and flicker noise. In addition, around two-thirds of the series contain random walk noise, with (small) average amplitudes of 0.16, 0.13, and 0.45 mm/year^{1/2} for the north, east, and up components, respectively. Also, about half of the positive estimated amplitudes of random walk noise are statistically significant, indicating that one-third of the total time series have significant random walk noise.
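
    The standard NNLS solver that this method builds on is available in SciPy. A minimal sketch on a toy stand-in problem (the design matrix and "noise signature" interpretation are illustrative assumptions, not the NNLS-VCE algorithm itself):

```python
# Non-negative least squares with SciPy: amplitudes constrained to >= 0,
# with an absent component driven to zero (toy stand-in for NNLS-VCE).
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(8)
A = np.abs(rng.normal(size=(50, 3)))   # columns: white/flicker/random-walk "shapes"
x_true = np.array([1.0, 0.5, 0.0])     # third noise component truly absent
b = A @ x_true + 0.01 * rng.random(50)

x_hat, rnorm = nnls(A, b)              # x_hat >= 0; third entry comes out ~0
```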

  4. Limited-memory BFGS based least-squares pre-stack Kirchhoff depth migration

    Science.gov (United States)

    Wu, Shaojiang; Wang, Yibo; Zheng, Yikang; Chang, Xu

    2015-08-01

    Least-squares migration (LSM) is a linearized inversion technique for subsurface reflectivity estimation. Compared to conventional migration algorithms, it can improve spatial resolution significantly within a few iterative calculations. There are three key steps in LSM: (1) calculate data residuals between observed data and demigrated data using the inverted reflectivity model; (2) migrate the data residuals to form the reflectivity gradient; and (3) update the reflectivity model using optimization methods. In order to obtain an accurate and high-resolution inversion result, a good estimate of the inverse Hessian matrix plays a crucial role. However, due to the large size of the Hessian matrix, computing its inverse is always a tough task. The limited-memory BFGS (L-BFGS) method can evaluate the Hessian matrix indirectly using a limited amount of computer memory, maintaining only a history of the past m gradients (often m < 10). We combine the L-BFGS method with least-squares pre-stack Kirchhoff depth migration. Then, we validate the introduced approach on the 2-D Marmousi synthetic data set and a 2-D marine data set. The results show that the introduced method can effectively recover the reflectivity model and has a faster convergence rate than two comparison gradient methods. It might be significant for general complex subsurface imaging.
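
    The optimization step can be prototyped with an off-the-shelf limited-memory solver. The sketch below applies SciPy's L-BFGS-B to a linear least-squares misfit, with an explicit matrix standing in for the demigration operator; in real LSM the gradient comes from migrating the data residuals rather than from a matrix product, so everything here is an illustrative assumption.

```python
# L-BFGS on a linear least-squares misfit (toy stand-in for LSM; the real
# gradient comes from migrating data residuals, not an explicit matrix).
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(9)
G = rng.normal(size=(200, 50))          # stand-in demigration operator
m_true = rng.normal(size=50)
d = G @ m_true + 0.05 * rng.normal(size=200)

def misfit(m):
    r = G @ m - d
    return 0.5 * (r @ r), G.T @ r       # objective and its gradient

res = minimize(misfit, np.zeros(50), jac=True, method="L-BFGS-B",
               options={"maxcor": 10})  # keep only the last 10 gradient pairs
```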

  5. Equalization of Loudspeaker and Room Responses Using Kautz Filters: Direct Least Squares Design

    Directory of Open Access Journals (Sweden)

    Tuomas Paatero

    2007-01-01

    Full Text Available DSP-based correction of loudspeaker and room responses is becoming an important part of improving sound reproduction. Such response equalization (EQ) is based on using a digital filter in cascade with the reproduction channel to counteract the response errors introduced by loudspeakers and room acoustics. Several FIR and IIR filter design techniques have been proposed for equalization purposes. In this paper we investigate Kautz filters, an interesting class of IIR filters, from the point of view of direct least squares EQ design. Kautz filters can be seen as generalizations of FIR filters and their frequency-warped counterparts. They provide a flexible means to obtain a desired frequency resolution behavior, which allows low filter orders even for complex corrections. Kautz filters also have the desirable property of avoiding the inversion of dips in the transfer function into sharp, long-ringing resonances in the equalizer. Furthermore, the direct least squares design is applicable to nonminimum-phase EQ design and allows the use of a desired target response. The proposed method is demonstrated by case examples with measured and synthetic loudspeaker and room responses.

  6. Kernel Recursive Least-Squares Temporal Difference Algorithms with Sparsification and Regularization

    Science.gov (United States)

    Zhu, Qingxin; Niu, Xinzheng

    2016-01-01

    By combining with sparse kernel methods, least-squares temporal difference (LSTD) algorithms can construct the feature dictionary automatically and obtain a better generalization ability. However, the previous kernel-based LSTD algorithms do not consider regularization and their sparsification processes are batch or offline, which hinder their widespread applications in online learning problems. In this paper, we combine the following five techniques and propose two novel kernel recursive LSTD algorithms: (i) online sparsification, which can cope with unknown state regions and be used for online learning, (ii) L2 and L1 regularization, which can avoid overfitting and eliminate the influence of noise, (iii) recursive least squares, which can eliminate matrix-inversion operations and reduce computational complexity, (iv) a sliding-window approach, which can avoid caching all history samples and reduce the computational cost, and (v) the fixed-point subiteration and online pruning, which can make L1 regularization easy to implement. Finally, simulation results on two 50-state chain problems demonstrate the effectiveness of our algorithms. PMID:27436996

  7. Comparison of approaches for parameter estimation on stochastic models: Generic least squares versus specialized approaches.

    Science.gov (United States)

    Zimmer, Christoph; Sahle, Sven

    2016-04-01

    Parameter estimation for models with intrinsic stochasticity poses specific challenges that do not exist for deterministic models. Therefore, specialized numerical methods for parameter estimation in stochastic models have been developed. Here, we study whether dedicated algorithms for stochastic models are indeed superior to the naive approach of applying the readily available least squares algorithm designed for deterministic models. We compare the performance of the recently developed multiple shooting for stochastic systems (MSS) method designed for parameter estimation in stochastic models, a stochastic differential equations based Bayesian approach and chemical master equation based techniques with the least squares approach for parameter estimation in models of ordinary differential equations (ODE). As test data, 1000 realizations of the stochastic models are simulated. For each realization an estimation is performed with each method, resulting in 1000 estimates for each approach. These are compared with respect to their deviation from the true parameter and, for the genetic toggle switch, also their ability to reproduce the symmetry of the switching behavior. Results are shown for different sets of parameter values of a genetic toggle switch, leading to symmetric and asymmetric switching behavior, as well as for an immigration-death and a susceptible-infected-recovered model. This comparison shows that it is important to choose a parameter estimation technique that can treat intrinsic stochasticity, and that the specific choice of this algorithm shows only minor performance differences. PMID:26826353

  8. A new finite element formulation for CFD:VIII. The Galerkin/least-squares method for advective-diffusive equations

    International Nuclear Information System (INIS)

    Galerkin/least-squares finite element methods are presented for advective-diffusive equations. Galerkin/least-squares represents a conceptual simplification of SUPG, and is in fact applicable to a wide variety of other problem types. A convergence analysis and error estimates are presented. (author)

  9. Radio astronomical image formation using constrained least squares and Krylov subspaces

    Science.gov (United States)

    Mouri Sardarabadi, Ahmad; Leshem, Amir; van der Veen, Alle-Jan

    2016-04-01

    Aims: Image formation for radio astronomy can be defined as estimating the spatial intensity distribution of celestial sources throughout the sky, given an array of antennas. One of the challenges with image formation is that the problem becomes ill-posed as the number of pixels becomes large. The introduction of constraints that incorporate a priori knowledge is crucial. Methods: In this paper we show that in addition to non-negativity, the magnitude of each pixel in an image is also bounded from above. Indeed, the classical "dirty image" is an upper bound, but a much tighter upper bound can be formed from the data using array processing techniques. This formulates image formation as a least squares optimization problem with inequality constraints. We propose to solve this constrained least squares problem using active set techniques, and the steps needed to implement it are described. It is shown that the least squares part of the problem can be efficiently implemented with Krylov-subspace-based techniques. We also propose a method for correcting for the possible mismatch between source positions and the pixel grid. This correction improves both the detection of sources and their estimated intensities. The performance of these algorithms is evaluated using simulations. Results: Based on parametric modeling of the astronomical data, a new imaging algorithm based on convex optimization, active sets, and Krylov-subspace-based solvers is presented. The relation between the proposed algorithm and sequential source removing techniques is explained, and it gives a better mathematical framework for analyzing existing algorithms. We show that by using the structure of the algorithm, an efficient implementation that allows massive parallelism and storage reduction is feasible. Simulations are used to compare the new algorithm to classical CLEAN. Results illustrate that for a discrete point model, the proposed algorithm is capable of detecting the correct number of sources
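
    The constrained formulation described above (non-negative pixels plus a per-pixel upper bound) can be prototyped with an off-the-shelf bounded least-squares solver before moving to the paper's active-set/Krylov machinery. A minimal sketch with a synthetic measurement operator and a constant stand-in for the upper-bound image; all names and data are illustrative assumptions.

```python
# Bounded least-squares image formation prototype: non-negative pixels with
# a per-pixel upper bound (names and data are illustrative assumptions; the
# paper uses active-set iterations with Krylov solvers at scale).
import numpy as np
from scipy.optimize import lsq_linear

rng = np.random.default_rng(10)
M = rng.normal(size=(100, 30))               # stand-in measurement operator
x_true = np.clip(rng.normal(0.3, 0.2, 30), 0.0, None)
y = M @ x_true + 0.01 * rng.normal(size=100)

upper = np.full(30, 1.0)                     # e.g. a dirty-image-like bound
sol = lsq_linear(M, y, bounds=(0.0, upper))  # solves min ||Mx - y|| s.t. 0 <= x <= upper
```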

  10. Genetic and least squares algorithms for estimating spectral EIS parameters of prostatic tissues

    International Nuclear Information System (INIS)

    We employed electrical impedance spectroscopy (EIS) to evaluate the electrical properties of prostatic tissues. We collected freshly excised prostates from 23 men immediately following radical prostatectomy. The prostates were sectioned into 3 mm slices, and electrical property measurements of complex resistivity were recorded from each of the slices using an impedance probe over the frequency range of 100 Hz to 100 kHz. The area probed was marked so that, following tissue fixation and slide preparation, histological assessment could be correlated directly with the recorded EIS spectra. Prostate cancer (CaP), benign prostatic hyperplasia (BPH), non-hyperplastic glandular tissue and stroma were the primary prostatic tissue types probed. Genetic and least squares parameter estimation algorithms were implemented for fitting a Cole-type resistivity model to the measured data. The four multi-frequency-based spectral parameters defining the recorded spectrum (ρ∞, Δρ, fc and α) were determined using these algorithms and statistically analyzed with respect to tissue type. Both algorithms fit the measured data well, with the least squares algorithm having a better average goodness of fit (95.2 mΩ m versus 109.8 mΩ m) and a faster execution time (80.9 ms versus 13 637 ms) than the genetic algorithm. The mean parameters, from all tissue samples, estimated using the genetic algorithm ranged from 4.44 to 5.55 Ω m, 2.42 to 7.14 Ω m, 3.26 to 6.07 kHz and 0.565 to 0.654 for ρ∞, Δρ, fc and α, respectively. These same parameters estimated using the least squares algorithm ranged from 4.58 to 5.79 Ω m, 2.18 to 6.98 Ω m, 2.97 to 5.06 kHz and 0.621 to 0.742 for ρ∞, Δρ, fc and α, respectively. The ranges of these parameters were similar to those reported in the literature. Further, significant differences (p < 0.05) were found between CaP and BPH; this is especially important since current prostate cancer screening methods do not reliably differentiate between these two tissue types.
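
    Fitting a Cole-type model by least squares can be sketched with SciPy by stacking the real and imaginary residuals. The parameterization below (ρ∞ + Δρ / (1 + (jf/fc)^α)) is one common form of the Cole model and the spectrum is synthetic; neither is taken from the paper.

```python
# Least squares fit of a Cole-type complex resistivity model (one common
# parameterization; synthetic spectrum over 100 Hz - 100 kHz for illustration).
import numpy as np
from scipy.optimize import least_squares

def cole(f, rho_inf, d_rho, fc, alpha):
    return rho_inf + d_rho / (1.0 + (1j * f / fc) ** alpha)

def residuals(p, f, z):
    m = cole(f, *p)
    return np.concatenate([(m - z).real, (m - z).imag])

rng = np.random.default_rng(11)
f = np.logspace(2, 5, 40)                       # 100 Hz to 100 kHz
z = cole(f, 5.0, 4.0, 4.0e3, 0.6) + 0.01 * rng.normal(size=f.size)

fit = least_squares(residuals, x0=[4.0, 3.0, 3.0e3, 0.5], args=(f, z))
rho_inf, d_rho, fc, alpha = fit.x
```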

  11. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    International Nuclear Information System (INIS)

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues

  12. [Modelling a penicillin fed-batch fermentation using least squares support vector machines].

    Science.gov (United States)

    Liu, Yi; Wang, Hai-Qing

    2006-01-01

    Biochemical processes are usually characterized as severely time-varying and nonlinear dynamic systems. Building their first-principle models is very costly and difficult, due to the absence of inherent mechanistic descriptions and efficient on-line sensors. Furthermore, such detailed and complicated models do not necessarily guarantee good performance in practice. An approach via least squares support vector machines (LS-SVM), based on the Pensim simulator, is proposed for modelling the penicillin fed-batch fermentation process, and an adjustment strategy for the parameters of the LS-SVM is presented. Based on the proposed modelling method, predictive models of penicillin concentration, biomass concentration and substrate concentration are obtained using very limited on-line measurements. The results show that the established models are more accurate and efficient, and suffice for the requirements of control and optimization of biochemical processes. PMID:16572855

  13. Sparsity-Cognizant Total Least-Squares for Perturbed Compressive Sampling

    CERN Document Server

    Zhu, Hao; Giannakis, Georgios B

    2010-01-01

    Solving linear regression problems based on the total least-squares (TLS) criterion has well-documented merits in various applications, where perturbations appear both in the data vector as well as in the regression matrix. However, existing TLS approaches do not account for sparsity possibly present in the unknown vector of regression coefficients. On the other hand, sparsity is the key attribute exploited by modern compressive sampling and variable selection approaches to linear regression, which include noise in the data, but do not account for perturbations in the regression matrix. The present paper fills this gap by formulating and solving TLS optimization problems under sparsity constraints. Near-optimum and reduced-complexity suboptimum sparse (S-) TLS algorithms are developed to address the perturbed compressive sampling (and the related dictionary learning) challenge, when there is a mismatch between the true and adopted bases over which the unknown vector is sparse. The novel S-TLS schemes also all...
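
    For context, the classical (unconstrained, non-sparse) TLS solution that S-TLS extends has a closed form via the SVD of the augmented matrix [A | b]. A minimal sketch of that baseline, with synthetic perturbations in both the regression matrix and the data vector (this is not the paper's sparse algorithm):

```python
# Classical (non-sparse) total least squares via SVD of [A | b] - the
# baseline that S-TLS extends with sparsity constraints (illustrative).
import numpy as np

def tls(A, b):
    n = A.shape[1]
    _, _, Vt = np.linalg.svd(np.column_stack([A, b]))
    v = Vt[-1]                 # right singular vector of the smallest value
    return -v[:n] / v[n]       # TLS estimate of the regression coefficients

rng = np.random.default_rng(12)
A = rng.normal(size=(100, 5))
x_true = rng.normal(size=5)
# Perturbations appear in both the regression matrix and the data vector:
b = (A + 0.01 * rng.normal(size=A.shape)) @ x_true + 0.01 * rng.normal(size=100)
x_tls = tls(A, b)
```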

  14. A Robust PCT Method Based on Complex Least Squares Adjustment Method

    Science.gov (United States)

    Haiqiang, F.; Jianjun, Z.; Changcheng, W.; Qinghua, X.; Rong, Z.

    2013-07-01

    The Polarization Coherence Tomography (PCT) method performs well in deriving vegetation vertical structure. However, errors caused by temporal decorrelation and by the vegetation height and ground phase estimates always propagate into the data analysis and contaminate the results. In order to overcome this disadvantage, we exploit a complex least squares adjustment method to compute vegetation height and ground phase based on the Random Volume over Ground and Volume Temporal Decorrelation (RVoG + VTD) model. By the fusion of different polarimetric InSAR data, we can use more observations to obtain more robust estimates of temporal decorrelation and vegetation height, and then introduce them into PCT to acquire a more accurate vegetation vertical structure. Finally, the new approach is validated on E-SAR data of Oberpfaffenhofen, Germany. The results demonstrate that the robust method can greatly improve the estimation of vegetation vertical structure.

  15. Prediction of chaotic systems with multidimensional recurrent least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Sun Jian-Cheng; Zhou Ya-Tong; Luo Jian-Guo

    2006-01-01

    In this paper, we propose a multidimensional version of recurrent least squares support vector machines (MDRLS-SVM) to solve the problem of predicting chaotic systems. To acquire better prediction performance, the high-dimensional space, which provides more information on the system than the scalar time series, is first reconstructed using Takens's embedding theorem. Then the MDRLS-SVM, instead of the traditional RLS-SVM, is used in the high-dimensional space, and the prediction performance can be improved from the point of view of the reconstructed embedding phase space. In addition, the MDRLS-SVM algorithm is analysed in the context of noise, and we find that the MDRLS-SVM has lower sensitivity to noise than the RLS-SVM.

  16. Least-Squares Solution of Inverse Problem for Hermitian Anti-reflexive Matrices and Its Approximation

    Institute of Scientific and Technical Information of China (English)

    Zhen Yun PENG; Yuan Bei DENG; Jin Wang LIU

    2006-01-01

    In this paper, we first consider the least-squares solution of the matrix inverse problem as follows: find a Hermitian anti-reflexive matrix A, corresponding to a given generalized reflection matrix J, such that for given matrices X and B the norm ‖AX − B‖ is minimized over A. The existence theorems are obtained, and a general representation of such a matrix is presented. We denote the set of such matrices by SE. Then the matrix nearness problem for the matrix inverse problem is discussed. That is: given an arbitrary A*, find a matrix A ∈ SE which is nearest to A* in the Frobenius norm. We show that the nearest matrix is unique and provide an expression for this nearest matrix.

  17. First-Order System Least Squares for the Stokes Equations, with Application to Linear Elasticity

    Science.gov (United States)

    Cai, Z.; Manteuffel, T. A.; McCormick, S. F.

    1996-01-01

    Following our earlier work on general second-order scalar equations, here we develop a least-squares functional for the two- and three-dimensional Stokes equations, generalized slightly by allowing a pressure term in the continuity equation. By introducing a velocity flux variable and associated curl and trace equations, we are able to establish ellipticity in an H^1 product norm appropriately weighted by the Reynolds number. This immediately yields optimal discretization error estimates for finite element spaces in this norm and optimal algebraic convergence estimates for multiplicative and additive multigrid methods applied to the resulting discrete systems. Both estimates are uniform in the Reynolds number. Moreover, our pressure-perturbed form of the generalized Stokes equations allows us to develop an analogous result for the Dirichlet problem for linear elasticity, with estimates that are uniform in the Lamé constants.

  18. A multivariate partial least squares approach to joint association analysis for multiple correlated traits

    Institute of Scientific and Technical Information of China (English)

    Yang Xu; Wenming Hu; Zefeng Yang; Chenwu Xu

    2016-01-01

    Many complex traits are highly correlated rather than independent. By taking the correlation structure of multiple traits into account, joint association analyses can achieve both higher statistical power and more accurate estimation. To develop a statistical approach to joint association analysis that includes allele detection and genetic effect estimation, we combined multivariate partial least squares regression with variable selection strategies and selected the optimal model using the Bayesian Information Criterion (BIC). We then performed extensive simulations under varying heritabilities and sample sizes to compare the performance achieved using our method with those obtained by single-trait multilocus methods. Joint association analysis has measurable advantages over single-trait methods, as it exhibits superior gene detection power, especially for pleiotropic genes. Sample size, heritability, polymorphic information content (PIC), and magnitude of gene effects influence the statistical power, accuracy and precision of effect estimation by the joint association analysis.

  19. Least Squares Shadowing Sensitivity Analysis of Chaotic Flow Around a Two-Dimensional Airfoil

    Science.gov (United States)

    Blonigan, Patrick J.; Wang, Qiqi; Nielsen, Eric J.; Diskin, Boris

    2016-01-01

    Gradient-based sensitivity analysis has proven to be an enabling technology for many applications, including design of aerospace vehicles. However, conventional sensitivity analysis methods break down when applied to long-time averages of chaotic systems. This breakdown is a serious limitation because many aerospace applications involve physical phenomena that exhibit chaotic dynamics, most notably high-resolution large-eddy and direct numerical simulations of turbulent aerodynamic flows. A recently proposed methodology, Least Squares Shadowing (LSS), avoids this breakdown and advances the state of the art in sensitivity analysis for chaotic flows. The first application of LSS to a chaotic flow simulated with a large-scale computational fluid dynamics solver is presented. The LSS sensitivity computed for this chaotic flow is verified and shown to be accurate, but the computational cost of the current LSS implementation is high.

  20. Least squares parameter estimation methods for material decomposition with energy discriminating detectors

    Energy Technology Data Exchange (ETDEWEB)

    Le, Huy Q.; Molloi, Sabee [Department of Radiological Sciences, University of California, Irvine, California 92697 (United States)

    2011-01-15

    Purpose: Energy resolving detectors provide more than one spectral measurement in one image acquisition. The purpose of this study is to investigate, with simulation, the ability to decompose four materials using energy discriminating detectors and least squares minimization techniques. Methods: Three least squares parameter estimation decomposition techniques were investigated for four-material breast imaging tasks in the image domain. The first technique treats the voxel as if it consisted of fractions of all the materials. The second method assumes that a voxel primarily contains one material and divides the decomposition process into segmentation and quantification tasks. The third is similar to the second method but a calibration was used. The simulated computed tomography (CT) system consisted of an 80 kVp spectrum and a CdZnTe (CZT) detector that could resolve the x-ray spectrum into five energy bins. A postmortem breast specimen was imaged with flat panel CT to provide a model for the digital phantoms. Hydroxyapatite (HA) (50, 150, 250, 350, 450, and 550 mg/ml) and iodine (4, 12, 20, 28, 36, and 44 mg/ml) contrast elements were embedded into the glandular region of the phantoms. Calibration phantoms consisted of a 30/70 glandular-to-adipose tissue ratio with embedded HA (100, 200, 300, 400, and 500 mg/ml) and iodine (5, 15, 25, 35, and 45 mg/ml). The x-ray transport process was simulated where the Beer-Lambert law, Poisson process, and CZT absorption efficiency were applied. Qualitative and quantitative evaluations of the decomposition techniques were performed and compared. The effect of breast size was also investigated. Results: The first technique decomposed iodine adequately but failed for other materials. The second method separated the materials but was unable to quantify the materials. With the addition of a calibration, the third technique provided good separation and quantification of hydroxyapatite, iodine, glandular, and adipose tissues

  1. Least Squares Temporal Difference Actor-Critic Methods with Applications to Robot Motion Control

    CERN Document Server

    Estanjini, Reza Moazzez; Lahijanian, Morteza; Wang, Jing; Belta, Calin A; Paschalidis, Ioannis Ch

    2011-01-01

    We consider the problem of finding a control policy for a Markov Decision Process (MDP) to maximize the probability of reaching some states while avoiding some other states. This problem is motivated by applications in robotics, where such problems naturally arise when probabilistic models of robot motion are required to satisfy temporal logic task specifications. We transform this problem into a Stochastic Shortest Path (SSP) problem and develop a new approximate dynamic programming algorithm to solve it. This algorithm is of the actor-critic type and uses a least-squares temporal difference learning method. It operates on sample paths of the system and optimizes the policy within a pre-specified class parameterized by a parsimonious set of parameters. We show its convergence to a policy corresponding to a stationary point in the parameter space. Simulation results confirm the effectiveness of the proposed solution.
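
    The critic's evaluation step can be illustrated compactly. The following is a minimal numpy sketch of least-squares temporal difference (LSTD) value estimation, not the authors' actor-critic algorithm: it solves A w = b built from sampled transitions, and the feature vectors and two-state chain are invented for the example.

    ```python
    import numpy as np

    def lstd(samples, gamma=0.95, reg=1e-6):
        """LSTD value estimation: solve A w = b with
        A = sum phi (phi - gamma * phi')^T and b = sum phi * r."""
        d = len(samples[0][0])
        A = reg * np.eye(d)                  # small ridge keeps A invertible
        b = np.zeros(d)
        for phi, r, phi_next in samples:
            A += np.outer(phi, phi - gamma * phi_next)
            b += phi * r
        return np.linalg.solve(A, b)

    # Two-state chain with one-hot features: 0 -> 1 (reward 0), 1 -> 1 (reward 1).
    phi = np.eye(2)
    data = [(phi[0], 0.0, phi[1]), (phi[1], 1.0, phi[1])]
    print(lstd(data))   # approximately [19, 20] = the discounted state values
    ```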

  2. Two new methods for solving large scale least squares in geodetic surveying computations

    Science.gov (United States)

    Murigande, Ch.; Toint, Ph. L.; Paquet, P.

    1986-12-01

    This paper considers the solution of linear least squares problems arising in space geodesy, with a special application to multistation adjustment by a short arc method based on Doppler observations. The widely used second-order regression algorithm due to Brown (1976) for reducing the normal equations system is briefly recalled. Then two algorithms which avoid the use of the normal equations are proposed. The first one is a direct method that applies orthogonal transformations to the observation matrix directly, in order to reduce it to upper triangular form. The solution is then obtained by back-substitution. The second method is iterative and uses a preconditioned conjugate gradient technique. A comparison of the three procedures is provided on data of the second European Doppler Observation Campaign.
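
    The contrast between the normal-equations route and direct orthogonal reduction can be seen in a few lines. This is a generic numpy sketch on invented random data, not the authors' geodetic implementation; np.linalg.qr plays the role of the orthogonal transformations applied to the observation matrix.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    A = rng.standard_normal((1000, 20))        # observation matrix
    x_true = rng.standard_normal(20)
    b = A @ x_true + 0.01 * rng.standard_normal(1000)

    # Normal equations: forms A^T A, which squares the condition number.
    x_ne = np.linalg.solve(A.T @ A, A.T @ b)

    # Orthogonal reduction: QR factorization applied to A directly,
    # then back-substitution on the triangular factor.
    Q, R = np.linalg.qr(A)
    x_qr = np.linalg.solve(R, Q.T @ b)

    print(np.allclose(x_ne, x_qr, atol=1e-8))  # agree on well-conditioned data
    ```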

  3. Underwater terrain positioning method based on least squares estimation for AUV

    Science.gov (United States)

    Chen, Peng-yun; Li, Ye; Su, Yu-min; Chen, Xiao-long; Jiang, Yan-qing

    2015-12-01

    To achieve accurate positioning of autonomous underwater vehicles, an appropriate underwater terrain database storage format for underwater terrain-matching positioning is established using multi-beam data as the underwater terrain-matching data. An underwater terrain interpolation error compensation method based on fractional Brownian motion is proposed to address the defects of normal terrain interpolation, and an underwater terrain-matching positioning method based on least squares estimation (LSE) is proposed for correlation analysis of topographic features. The Fisher method is introduced as a secondary criterion for the pseudo-localization that appears in areas with flat topographic features, effectively reducing the impact of pseudo-positioning points on matching accuracy and improving the positioning accuracy in flat terrain areas. Simulation experiments based on electronic chart and multi-beam sea trial data show that drift errors of an inertial navigation system can be corrected effectively using the proposed method. The positioning accuracy and practicality are high, satisfying the requirement of accurate underwater positioning.

  4. Comparison of the least squares and the maximum likelihood estimators for gamma-spectrometry

    International Nuclear Information System (INIS)

    A comparison of the characteristics of the maximum likelihood (ML) and the least squares (LS) estimators of nuclide activities for low-intensity scintillation γ-spectra has been carried out by computer simulation. It was found that some of the LS estimators give biased activity estimates, and the bias grows as the multichannel analyzer resolution (the number of spectrum channels) increases. Such bias leads to a significant deterioration of the estimation accuracy for low-intensity spectra; consequently, the detection threshold for nuclides rises by a factor of 2-10 in comparison with the ML estimator. It was shown that the ML estimator and a special LS estimator provide unbiased estimates of nuclide activities. Thus, these estimators are optimal for practical application to low-intensity spectrometry. (Copyright (c) 1998 Elsevier Science B.V., Amsterdam. All rights reserved.)
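
    A toy simulation along these lines is easy to set up. The sketch below compares unweighted least squares with Poisson maximum likelihood on a low-count two-component spectrum; the Gaussian photopeak shapes and intensities are invented for illustration, and the bias effect reported in the paper depends on the particular LS weighting used.

    ```python
    import numpy as np
    from scipy.optimize import minimize

    rng = np.random.default_rng(2)
    ch = np.arange(128)
    # Two library shapes (toy Gaussian photopeaks) and a low-intensity spectrum.
    lib = np.stack([np.exp(-0.5 * ((ch - 40) / 4) ** 2),
                    np.exp(-0.5 * ((ch - 80) / 4) ** 2)], axis=1)
    a_true = np.array([30.0, 15.0])
    counts = rng.poisson(lib @ a_true)

    # Unweighted least-squares estimate of the two activities.
    a_ls, *_ = np.linalg.lstsq(lib, counts, rcond=None)

    # Poisson maximum likelihood: minimize the negative log-likelihood.
    def nll(a):
        mu = np.clip(lib @ a, 1e-9, None)
        return np.sum(mu - counts * np.log(mu))

    a_ml = minimize(nll, a_ls, method="Nelder-Mead").x
    print("LS:", a_ls, "ML:", a_ml)
    ```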

  5. First-order system least squares for the pure traction problem in planar linear elasticity

    Energy Technology Data Exchange (ETDEWEB)

    Cai, Z.; Manteuffel, T.; McCormick, S.; Parter, S.

    1996-12-31

    This talk will develop two first-order system least squares (FOSLS) approaches for the solution of the pure traction problem in planar linear elasticity. Both are two-stage algorithms that first solve for the gradients of displacement, then for the displacement itself. One approach, which uses L{sup 2} norms to define the FOSLS functional, is shown under certain H{sup 2} regularity assumptions to admit optimal H{sup 1}-like performance for standard finite element discretization and standard multigrid solution methods that is uniform in the Poisson ratio for all variables. The second approach, which is based on H{sup -1} norms, is shown under general assumptions to admit optimal uniform performance for displacement flux in an L{sup 2} norm and for displacement in an H{sup 1} norm. These methods do not degrade as other methods generally do when the material properties approach the incompressible limit.

  6. The Helmholtz equation least squares method for reconstructing and predicting acoustic radiation

    CERN Document Server

    Wu, Sean F

    2015-01-01

    This book gives a comprehensive introduction to the Helmholtz Equation Least Squares (HELS) method and its use in diagnosing noise and vibration problems. In contrast to the traditional NAH technologies, the HELS method does not seek an exact solution to the acoustic field produced by an arbitrarily shaped structure. Rather, it attempts to obtain the best approximation of an acoustic field through the expansion of certain basis functions. Therefore, it significantly simplifies the complexities of the reconstruction process, yet still enables one to acquire an understanding of the root causes of different noise and vibration problems that involve arbitrarily shaped surfaces in non-free space using far fewer measurement points than either Fourier acoustics or BEM based NAH. The examples given in this book illustrate that the HELS method may potentially become a practical and versatile tool for engineers to tackle a variety of complex noise and vibration issues in engineering applications.

  7. Analysis of Shift and Deformation of Planar Surfaces Using the Least Squares Plane

    Directory of Open Access Journals (Sweden)

    Hrvoje Matijević

    2006-12-01

    Full Text Available Modern methods of measurement developed on the basis of advanced reflectorless distance measurement have paved the way for easier detection and analysis of shift and deformation. A large quantity of collected data points will often require a mathematical model of the surface that best fits them. Although this can be a complex task, in the case of planar surfaces it is easily done, enabling further processing and analysis of measurement results. The paper describes the fitting of a plane to a set of collected points using least squares, with outliers previously excluded via the RANSAC algorithm. Based on that, a method for the analysis of the deformation and shift of planar surfaces is also described.
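
    A minimal sketch of the plane-fitting pipeline described here, assuming numpy and a simple distance-threshold RANSAC; the tolerance, iteration count and demo data are arbitrary placeholders:

    ```python
    import numpy as np

    def fit_plane_lsq(pts):
        """Least-squares plane through points: returns (centroid, unit normal)."""
        c = pts.mean(axis=0)
        _, _, vt = np.linalg.svd(pts - c)
        return c, vt[-1]                       # smallest singular vector = normal

    def ransac_plane(pts, n_iter=200, tol=0.02, rng=np.random.default_rng(3)):
        best = None
        for _ in range(n_iter):
            sample = pts[rng.choice(len(pts), 3, replace=False)]
            c, n = fit_plane_lsq(sample)
            inliers = np.abs((pts - c) @ n) < tol
            if best is None or inliers.sum() > best.sum():
                best = inliers
        return fit_plane_lsq(pts[best])        # refit on all inliers

    # Demo: noisy points on the plane z = 0.1x + 0.2y with a few outliers.
    gen = np.random.default_rng(30)
    xy = gen.uniform(-1, 1, (200, 2))
    z = 0.1 * xy[:, 0] + 0.2 * xy[:, 1] + 0.005 * gen.standard_normal(200)
    pts = np.column_stack([xy, z])
    pts[:10, 2] += 1.0                         # gross outliers
    c, n = ransac_plane(pts)
    ```

    The SVD-based fit minimizes the sum of squared orthogonal distances to the plane, which is the natural least-squares criterion for surface points measured in all three coordinates.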

  8. A Hybridization of Enhanced Artificial Bee Colony-Least Squares Support Vector Machines for Price Forecasting

    Directory of Open Access Journals (Sweden)

    Yuhanis Yusof

    2012-01-01

    Full Text Available Problem statement: Since the performance of Least Squares Support Vector Machines (LSSVM) relies heavily on the values of its regularization parameter, γ, and kernel parameter, σ2, manual tuning is clearly not an appropriate solution, as it may lead to a degree of blindness in the search. In addition, manual tuning is time consuming and unsystematic, which consequently affects the generalization performance of the LSSVM. Approach: This study presents an enhanced Artificial Bee Colony (ABC) algorithm to automatically optimize the hyperparameters of interest. The enhancement involves modifications that provide better exploitation by the bees during the search and prevent premature convergence. The prediction itself is then carried out by the LSSVM. Results and Conclusion: Empirical results indicate that the proposed technique performs satisfactorily, producing better prediction accuracy than the standard ABC-LSSVM and a Back Propagation Neural Network.
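
    For reference, the LSSVM model that the bee colony tunes reduces to one linear system. The sketch below is a generic numpy implementation under a common formulation (RBF kernel, dual system with a bias row); it is not the authors' code, and a plain grid search over (γ, σ2) would stand in here for the enhanced ABC optimizer.

    ```python
    import numpy as np

    def rbf(X1, X2, sigma2):
        """RBF kernel K(x, z) = exp(-||x - z||^2 / (2 sigma2)) (one common convention)."""
        d2 = ((X1[:, None, :] - X2[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma2))

    def lssvm_fit(X, y, gamma, sigma2):
        """Solve the LSSVM dual system [[0, 1^T], [1, K + I/gamma]] [b; alpha] = [0; y]."""
        n = len(y)
        A = np.zeros((n + 1, n + 1))
        A[0, 1:] = 1.0
        A[1:, 0] = 1.0
        A[1:, 1:] = rbf(X, X, sigma2) + np.eye(n) / gamma
        sol = np.linalg.solve(A, np.concatenate(([0.0], y)))
        return sol[0], sol[1:]                 # bias b, coefficients alpha

    def lssvm_predict(Xnew, Xtr, b, alpha, sigma2):
        return rbf(Xnew, Xtr, sigma2) @ alpha + b
    ```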

  9. On the Semivalues and the Least Square Values Average Per Capita Formulas and Relationships

    Institute of Scientific and Technical Information of China (English)

    Irinel DRAGAN

    2006-01-01

    In this paper, it is shown that both the Semivalues and the Least Square Values of cooperative transferable utility games can be expressed in terms of n^2 averages of values of the characteristic function of the game, by means of what we call the Average per capita formulas. Moreover, as in the case of the Shapley value considered earlier, the terms of the formulas can be computed in parallel, and an algorithm is derived. From these results, it follows that each of the two values mentioned above is the Shapley value of a game easily obtained from the given game, and this fact offers another computational opportunity, as soon as the computation of the Shapley value is done efficiently.

  10. A Note on the Nonparametric Least-squares Test for Checking a Polynomial Relationship

    Institute of Scientific and Technical Information of China (English)

    Chang-lin Mei; Shu-yuan He; Yan-hua Wang

    2003-01-01

    Recently, Gijbels and Rousson [6] suggested a new approach, called the nonparametric least-squares test, to check polynomial regression relationships. Although this test procedure is not only simple but also powerful in most cases, there are several other parameters to be chosen in addition to the kernel and bandwidth. As shown in their paper, the choice of these parameters is crucial but sometimes intractable. We propose in this paper a new statistic which is based on the sample variance of the locally estimated pth derivative of the regression function at each design point. The resulting test is still simple but includes no extra parameters to be determined besides the kernel and bandwidth that are necessary for nonparametric smoothing techniques. Comparison by simulations demonstrates that our test performs as well as or even better than Gijbels and Rousson's approach. Furthermore, a real-life data set is analyzed by our method and the results obtained are satisfactory.

  11. Least Squares Inference on Integrated Volatility and the Relationship between Efficient Prices and Noise

    DEFF Research Database (Denmark)

    Nolte, Ingmar; Voev, Valeri

    The expected value of sums of squared intraday returns (realized variance) gives rise to a least squares regression which adapts itself to the assumptions of the noise process and allows for a joint inference on integrated volatility (IV), noise moments and price-noise relations. In the iid noise... "increasing" type of dependence and analyze its ability to cope with the empirically observed price-noise dependence in quote data. In the empirical section of the paper we apply the LS methodology to estimate the integrated volatility as well as the noise properties of 25 liquid stocks both with midquote and... transaction price data. We find that while iid noise is an oversimplification, its non-iid characteristics have a decidedly negligible effect on volatility estimation within our framework, for which we provide a sound theoretical reason. In terms of noise-price endogeneity, we are not able to find empirical

  12. Modelling of chaotic systems based on modified weighted recurrent least squares support vector machines

    Institute of Scientific and Technical Information of China (English)

    Sun Jian-Cheng; Zhang Tai-Yi; Liu Feng

    2004-01-01

    Positive Lyapunov exponents cause the errors in modelling of a chaotic time series to grow exponentially. In this paper, we propose a modified version of support vector machines (SVM) to deal with this problem. Based on recurrent least squares support vector machines (RLS-SVM), we introduce a weighted term into the cost function to compensate for the prediction errors resulting from the positive global Lyapunov exponents. To demonstrate the effectiveness of our algorithm, we use the power spectrum and dynamic invariants involving the Lyapunov exponents and the correlation dimension as criteria, and then apply our method to the Santa Fe competition time series. The simulation results show that the proposed method can capture the dynamics of the chaotic time series effectively.

  13. Defense of the Least Squares Solution to Peelle’s Pertinent Puzzle

    Directory of Open Access Journals (Sweden)

    Nicolas Hengartner

    2011-02-01

    Full Text Available Generalized least squares (GLS) for model parameter estimation has a long and successful history dating to its development by Gauss in 1795. Alternatives can outperform GLS in some settings, and alternatives to GLS are sometimes sought when GLS exhibits curious behavior, such as in Peelle's Pertinent Puzzle (PPP). PPP was described in 1987 in the context of estimating fundamental parameters that arise in nuclear interaction experiments. In PPP, GLS estimates fell outside the range of the data, eliciting concerns that GLS was somehow flawed. These concerns have led to suggested alternatives to GLS estimators. This paper defends GLS in the PPP context, investigates when PPP can occur, illustrates when PPP can be beneficial for parameter estimation, reviews optimality properties of GLS estimators, and gives an example in which PPP does occur.
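
    The puzzle is easy to reproduce numerically. The sketch below uses the commonly cited PPP numbers (two discrepant measurements of the same quantity with 10% independent errors and a 20% fully correlated normalization error) and shows the GLS estimate landing below both data points:

    ```python
    import numpy as np

    x = np.array([1.5, 1.0])              # two measurements of the same quantity
    stat = 0.10 * x                       # independent error components
    norm = 0.20 * x                       # fully correlated normalization errors
    V = np.diag(stat ** 2) + np.outer(norm, norm)

    ones = np.ones(2)
    Vi = np.linalg.inv(V)
    theta = (ones @ Vi @ x) / (ones @ Vi @ ones)
    print(theta)                          # about 0.88 -- below both measurements
    ```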

  14. Facial Expression Recognition via Non-Negative Least-Squares Sparse Coding

    Directory of Open Access Journals (Sweden)

    Ying Chen

    2014-05-01

    Full Text Available Sparse coding is an active research subject in signal processing, computer vision, and pattern recognition. A novel method of facial expression recognition via non-negative least squares (NNLS) sparse coding is presented in this paper. The NNLS sparse coding is used to form a facial expression classifier. To test the performance of the presented method, local binary patterns (LBP) and the raw pixels are extracted for facial feature representation. Facial expression recognition experiments are conducted on the Japanese Female Facial Expression (JAFFE) database. Compared with other widely used methods such as linear support vector machines (SVM), the sparse representation-based classifier (SRC), the nearest subspace classifier (NSC), K-nearest neighbor (KNN) and radial basis function neural networks (RBFNN), the experimental results indicate that the presented NNLS method performs better than the other methods on facial expression recognition tasks.
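
    The coding step itself is a single call to a non-negative least-squares solver. A minimal sketch with an invented random dictionary is shown below; in the classifier described above, the test sample would be coded against training features and assigned to the class with the smallest class-wise reconstruction residual.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(4)
    D = rng.random((64, 40))                   # dictionary: 40 training feature vectors
    y = D[:, [3, 17]] @ np.array([0.7, 0.3])   # test sample built from two atoms

    coef, resid = nnls(D, y)                   # non-negative least-squares code
    print(np.nonzero(coef > 1e-8)[0], resid)   # recovers atoms 3 and 17, residual ~0
    ```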

  15. Nonlinear Spline Kernel-based Partial Least Squares Regression Method and Its Application

    Institute of Scientific and Technical Information of China (English)

    JIA Jin-ming; WEN Xiang-jun

    2008-01-01

    Inspired by the traditional Wold's nonlinear PLS algorithm, which comprises the NIPALS approach and a spline inner-function model, a novel nonlinear partial least squares algorithm based on a spline kernel (named SK-PLS) is proposed for nonlinear modeling in the presence of multicollinearity. Based on the inner-product kernel spanned by the spline basis functions with an infinite number of nodes, this method first maps the input data into a high-dimensional feature space, then calculates a linear PLS model with a reformed NIPALS procedure in the feature space, and in consequence gives a unified framework for traditional PLS "kernel" algorithms. The linear PLS in the feature space corresponds to a nonlinear PLS in the original input (primal) space. The good approximating property of the spline kernel function enhances the generalization ability of the novel model, and two numerical experiments are given to illustrate the feasibility of the proposed method.

  16. A Least-Squares Finite Element Method for Electromagnetic Scattering Problems

    Science.gov (United States)

    Wu, Jie; Jiang, Bo-nan

    1996-01-01

    The least-squares finite element method (LSFEM) is applied to electromagnetic scattering and radar cross section (RCS) calculations. In contrast to most existing numerical approaches, in which divergence-free constraints are omitted, the LSFEM directly incorporates two divergence equations in the discretization process. The importance of including the divergence equations is demonstrated by showing that otherwise spurious solutions with large divergence occur near the scatterers. The LSFEM is based on unstructured grids and possesses full flexibility in handling complex geometry and local refinement. Moreover, the LSFEM does not require any special handling, such as upwinding, staggered grids, artificial dissipation, flux-differencing, etc. Implicit time discretization is used and the scheme is unconditionally stable. By using a matrix-free iterative method, the computational cost and memory requirement of the present scheme are competitive with other approaches. The accuracy of the LSFEM is verified by several benchmark test problems.

  17. The least-squares finite element method for low-mach-number compressible viscous flows

    Science.gov (United States)

    Yu, Sheng-Tao

    1994-01-01

    The present paper reports the development of the Least-Squares Finite Element Method (LSFEM) for simulating compressible viscous flows at low Mach numbers, with incompressible flow as the limiting case. Conventional approaches require special treatment for low-speed flow calculations: finite difference and finite volume methods are based on the use of staggered grids or preconditioning techniques, and finite element methods rely on the mixed method and the operator-splitting method. In this paper, however, we show that no such difficulty exists for the LSFEM and no special treatment is needed. The LSFEM always leads to a symmetric, positive-definite matrix through which the compressible flow equations can be effectively solved. Two numerical examples are included to demonstrate the method: first, driven cavity flows at various Reynolds numbers; and second, buoyancy-driven flows with significant density variation. Both examples are calculated using the full compressible flow equations.

  18. A Collocation Method by Moving Least Squares Applicable to European Option Pricing

    Directory of Open Access Journals (Sweden)

    M. Amirfakhrian

    2016-05-01

    Full Text Available This paper addresses the numerical pricing of European options. To compute the numerical prices of European options, a scheme is constructed that is independent of any kind of mesh and instead powered by moving least squares (MLS) estimation. In practical terms, the time variable is first discretized, and an MLS-based method is then applied for the spatial approximation. Since, unlike other methods, this course of action does not rely on a mesh, it can firmly be categorized as a mesh-less method. At the end of the paper, various experiments are offered to demonstrate how efficient and powerful the introduced approach is.

  19. Credit Risk Evaluation Using a C-Variable Least Squares Support Vector Classification Model

    Science.gov (United States)

    Yu, Lean; Wang, Shouyang; Lai, K. K.

    Credit risk evaluation is one of the most important issues in financial risk management. In this paper, a C-variable least squares support vector classification (C-VLSSVC) model is proposed for credit risk analysis. The main idea of this model is based on the prior knowledge that different classes may have different importance for modeling and that more weight should be given to classes with greater importance. The C-VLSSVC model can be constructed by a simple modification of the regularization parameter in LSSVC, whereby more weight is given to the least squares classification errors of important classes than to those of unimportant classes, while keeping the regularized terms in their original form. For illustration purposes, a real-world credit dataset is used to test the effectiveness of the C-VLSSVC model.

  20. Least squares approach for initial data recovery in dynamic data-driven applications simulations

    KAUST Repository

    Douglas, C.

    2010-12-01

    In this paper, we consider the initial data recovery and the solution update based on the local measured data that are acquired during simulations. Each time new data is obtained, the initial condition, which is a representation of the solution at a previous time step, is updated. The update is performed using the least squares approach. The objective function is set up based on both a measurement error as well as a penalization term that depends on the prior knowledge about the solution at previous time steps (or initial data). Various numerical examples are considered, where the penalization term is varied during the simulations. Numerical examples demonstrate that the predictions are more accurate if the initial data are updated during the simulations. © Springer-Verlag 2011.
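
    The update described here is a standard penalized least-squares problem. The following sketch, with an invented measurement operator and prior, minimizes ||Gm - d||^2 + λ||m - m_prior||^2 by stacking the penalty into an augmented system:

    ```python
    import numpy as np

    rng = np.random.default_rng(5)
    G = rng.standard_normal((30, 80))      # sparse measurements of a fine grid
    m_prior = np.zeros(80)                 # prior knowledge about the initial data
    m_true = np.sin(np.linspace(0, np.pi, 80))
    d = G @ m_true + 0.05 * rng.standard_normal(30)

    lam = 0.1                              # penalization weight
    # Stacked system: minimize ||G m - d||^2 + lam * ||m - m_prior||^2
    A = np.vstack([G, np.sqrt(lam) * np.eye(80)])
    rhs = np.concatenate([d, np.sqrt(lam) * m_prior])
    m_hat, *_ = np.linalg.lstsq(A, rhs, rcond=None)
    ```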

  1. Slip distribution of the 2010 Mentawai earthquake from GPS observation using least squares inversion method

    Science.gov (United States)

    Awaluddin, Moehammad; Yuwono, Bambang Darmo; Puspita, Yolanda Adya

    2016-05-01

    Continuous Global Positioning System (GPS) observations showed significant crustal displacements as a result of the 2010 Mentawai earthquake. Least-squares inversion of the Mentawai earthquake slip distribution from SuGAR observations yielded an optimum slip distribution by applying a smoothing-constraint weight and constraining the slip values to 0 at the edge of the earthquake rupture area. The maximum coseismic slip from the inversion calculation was 1.997 m, concentrated around station PRKB (Pagai Island). In addition, the dip-slip component tends to be dominant. The seismic moment calculated from the slip distribution was 6.89 × 10^20 Nm, which is equivalent to a magnitude of 7.8.
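
    A toy version of a smoothing-constrained slip inversion can be written in a few lines. Everything here (Green's functions, noise level, smoothing weight) is invented for illustration; non-negative least squares enforces one-signed slip, loosely echoing the constraints the authors apply.

    ```python
    import numpy as np
    from scipy.optimize import nnls

    rng = np.random.default_rng(6)
    n_sta, n_patch = 12, 20
    G = np.abs(rng.standard_normal((n_sta, n_patch)))   # toy Green's functions
    slip_true = np.exp(-0.5 * ((np.arange(n_patch) - 10) / 3) ** 2)
    d = G @ slip_true + 0.01 * rng.standard_normal(n_sta)

    # Second-difference smoothing operator; w trades data fit against roughness.
    L = np.diff(np.eye(n_patch), n=2, axis=0)
    w = 0.5
    A = np.vstack([G, w * L])
    rhs = np.concatenate([d, np.zeros(L.shape[0])])
    slip_hat, _ = nnls(A, rhs)             # slip constrained to be non-negative
    ```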

  2. STUDY ON PARAMETERS FOR TOPOLOGICAL VARIABLES FIELD INTERPOLATED BY MOVING LEAST SQUARE APPROXIMATION

    Institute of Scientific and Technical Information of China (English)

    Kai Long; Zhengxing Zuo; Rehan H. Zuberi

    2009-01-01

    This paper presents a new approach to the structural topology optimization of continuum structures. Material-point independent variables are presented to indicate the existence or absence of the material points and their vicinity, instead of the elements or nodes used in popular topology optimization methods. The topological variables field is constructed by moving least square approximation, which is used as a shape function in the meshless method. Combined with finite element analyses, not only are checkerboard patterns and mesh-dependence phenomena overcome by this continuous and smooth topological variables field, but the locations and numbers of topological variables can also be arbitrary. The effects of parameters, including the number of quadrature points, the scaling parameter and the weight function, on the optimum topological configurations are discussed. Two classic topology optimization problems are solved successfully by the proposed method. The method is found to be robust, and no numerical instabilities are found with proper parameters.

  3. Comparison between the basic least squares and the Bayesian approach for elastic constants identification

    Energy Technology Data Exchange (ETDEWEB)

    Gogu, C; Le Riche, R; Molimard, J; Vautrin, A [Ecole des Mines de Saint Etienne, 158 cours Fauriel, 42023 Saint Etienne (France); Haftka, R; Sankar, B [University of Florida, PO Box 116250, Gainesville, FL, 32611 (United States)], E-mail: gogu@emse.fr

    2008-11-01

    The basic formulation of the least squares method, based on the L{sub 2} norm of the misfit, is still widely used today for identifying elastic material properties from experimental data. An alternative statistical approach is the Bayesian method. We seek here situations with significant difference between the material properties found by the two methods. For a simple three bar truss example we illustrate three such situations in which the Bayesian approach leads to more accurate results: different magnitude of the measurements, different uncertainty in the measurements and correlation among measurements. When all three effects add up, the Bayesian approach can have a large advantage. We then compared the two methods for identification of elastic constants from plate vibration natural frequencies.

  4. Wavelet Neural Networks for Adaptive Equalization by Using the Orthogonal Least Square Algorithm

    Institute of Scientific and Technical Information of China (English)

    JIANG Minghu(江铭虎); DENG Beixing(邓北星); Georges Gielen

    2004-01-01

    Equalizers are widely used in digital communication systems for corrupted or time varying channels. To overcome performance decline for noisy and nonlinear channels, many kinds of neural network models have been used in nonlinear equalization. In this paper, we propose a new nonlinear channel equalization, which is structured by wavelet neural networks. The orthogonal least square algorithm is applied to update the weighting matrix of wavelet networks to form a more compact wavelet basis unit, thus obtaining good equalization performance. The experimental results show that performance of the proposed equalizer based on wavelet networks can significantly improve the neural modeling accuracy and outperform conventional neural network equalization in signal to noise ratio and channel non-linearity.

  5. NEW RESULTS ABOUT THE RELATIONSHIP BETWEEN OPTIMALLY WEIGHTED LEAST SQUARES ESTIMATE AND LINEAR MINIMUM VARIANCE ESTIMATE

    Institute of Scientific and Technical Information of China (English)

    Juan ZHAO; Yunmin ZHU

    2009-01-01

    The optimally weighted least squares estimate and the linear minimum variance estimate are two of the most popular estimation methods for a linear model. In this paper, the authors make a comprehensive discussion about the relationship between the two estimates. Firstly, the authors consider the classical linear model in which the coefficient matrix of the linear model is deterministic, and the necessary and sufficient condition for equivalence of the two estimates is derived. Moreover, under certain conditions on variance matrix invertibility, the two estimates can be identical provided that they use the same a priori information of the parameter being estimated. Secondly, the authors consider the linear model with random coefficient matrix, which is called the extended linear model; under certain conditions on variance matrix invertibility, it is proved that the former outperforms the latter when using the same a priori information of the parameter.

  6. Least-Square Collaborative Beamforming Linear Array for Steering Capability in Green Wireless Sensor Networks

    Institute of Scientific and Technical Information of China (English)

    NikNoordini NikAbdMalik; Mazlina Esa; Nurul Mu’azzah Abdul Latiff

    2016-01-01

    This paper presents a collaborative beamforming (CB) technique to organize the sensor nodes' locations in a linear array for green wireless sensor network (WSN) applications. In this method, only selected clusters and active CB nodes are needed each time to perform CB in WSNs. The proposed least-square linear array (LSLA) technique selects nodes to perform as a linear antenna array (LAA) that performs comparably to the conventional uniform linear array (ULA). The LSLA technique is also able to solve the positioning error problems that exist in random node deployment. The beampattern fluctuations due to the random positions of sensor nodes have been analyzed. Performances in terms of normalized power gains are given. It is demonstrated by simulation that the proposed technique gives performances similar to the conventional ULA and at the same time exhibits lower complexity.

  7. Recursive N-way partial least squares for brain-computer interface.

    Directory of Open Access Journals (Sweden)

    Andrey Eliseyev

    Full Text Available In this article, tensor-input/tensor-output blockwise Recursive N-way Partial Least Squares (RNPLS) regression is considered. It combines multi-way tensor decomposition with a consecutive calculation scheme and allows blockwise treatment of tensor data arrays with huge dimensions, as well as adaptive modeling of time-dependent processes with tensor variables. A numerical study of the algorithm is undertaken. The RNPLS algorithm demonstrates fast and stable convergence of the regression coefficients. Applied to brain-computer interface system calibration, the algorithm provides an efficient adjustment of the decoding model. Combining online adaptation with easy interpretation of results, the method can be effectively applied in a variety of multi-modal neural activity flow modeling tasks.

  8. Research on mine noise sources analysis based on least squares wave-let transform

    Institute of Scientific and Technical Information of China (English)

    CHENG Gen-yin; YU Sheng-chen; CHEN Shao-jie; WEI Zhi-yong; ZHANG Xiao-chen

    2010-01-01

    In order to determine the characteristics of noise sources accurately, the noise distribution at different frequencies was determined by taking into account the differences between aerodynamic noise, mechanical noise and electrical noise in terms of frequency and intensity. A least squares wavelet with high precision and special effectiveness for strong interference zones (multi-source noise) was designed, which is applicable to the analysis of the strong noise produced in underground mines, and the distribution of noise in different frequency bands was obtained with good results. According to the results of the decomposition, the characteristics of noise source production can be determined more accurately, which lays a good foundation for focused and targeted noise control in follow-up work, and provides a new method that is widely applicable for testing and analyzing noise control.

  9. Uncertainty evaluation for ordinary least-square fitting with arbitrary order polynomial in joule balance method

    International Nuclear Information System (INIS)

    Ordinary least-squares fitting with polynomials is used in both the dynamic phase of the watt balance method and the weighting phase of the joule balance method, but little research has been conducted to evaluate the uncertainty of the fitted data in the electrical balance methods. In this paper, a matrix-calculation method for evaluating the uncertainty of the polynomial fitting data is derived and the properties of this method are studied by simulation. Based on this, two further derived methods are proposed. One is used to find the optimal fitting order for the watt or joule balance methods; the accuracy and effective factors of this method are examined with simulations. The other is used to evaluate the uncertainty of the integral of the fitting data for the joule balance, which is demonstrated with an experiment from the NIM-1 joule balance. (paper)
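
    The matrix calculation at the heart of such an evaluation is the coefficient covariance (X^T X)^(-1) σ² propagated through the design matrix. A minimal numpy sketch on invented data:

    ```python
    import numpy as np

    rng = np.random.default_rng(7)
    t = np.linspace(0, 1, 50)
    y = 2.0 + 1.5 * t - 0.8 * t ** 2 + 0.01 * rng.standard_normal(50)

    deg = 2
    X = np.vander(t, deg + 1)                  # polynomial design matrix
    coef, res, *_ = np.linalg.lstsq(X, y, rcond=None)
    dof = len(t) - (deg + 1)
    sigma2 = res[0] / dof                      # residual variance estimate
    cov = sigma2 * np.linalg.inv(X.T @ X)      # covariance of the coefficients
    # Standard uncertainty of the fitted curve at each point:
    u_fit = np.sqrt(np.einsum("ij,jk,ik->i", X, cov, X))
    ```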

  10. Comparison and Analysis of Nonlinear Least Squares Methods for Vision Based Navigation (VBN) Algorithms

    Science.gov (United States)

    Sheta, B.; Elhabiby, M.; Sheimy, N.

    2012-07-01

    A robust scale- and rotation-invariant image matching algorithm is vital for the Vision Based Navigation (VBN) of aerial vehicles, where matches between existing geo-referenced database images and the real-time captured images are used to georeference (i.e. estimate the six transformation parameters - three rotations and three translations of) the real-time captured image from the UAV through the collinearity equations. The georeferencing information is then used in aiding the INS integration Kalman filter as a Coordinate UPdaTe (CUPT). It is critical for the collinearity equations to use the proper optimization algorithm to ensure accurate and fast convergence of the georeferencing parameters with the minimum number of conjugate points necessary for convergence. Fast convergence to a global minimum will require a non-linear approach to overcome the high degree of non-linearity that will exist in the case of large oblique images (i.e. large rotation angles). The main objective of this paper is investigating the estimation of the georeferencing parameters necessary for VBN of aerial vehicles in the case of large rotation angles, which lead to non-linearity of the estimation model. In this case, traditional least squares approaches will fail to estimate the georeferencing parameters because of the expected non-linearity of the mathematical model. Five different nonlinear least squares methods are presented for estimating the transformation parameters. Four gradient-based nonlinear least squares methods (Trust region, Trust region dogleg algorithm, Levenberg-Marquardt, and Quasi-Newton line search method) and one non-gradient method (Nelder-Mead simplex direct search) are employed for the six-transformation-parameter estimation process. The research was done on simulated data and the results showed that the Nelder-Mead method failed because of its dependency on the objective function without any derivative information. Although the tested gradient methods

  11. Window least squares method applied to statistical noise smoothing of positron annihilation data

    International Nuclear Information System (INIS)

    The paper deals with the off-line processing of experimental data obtained by the two-dimensional angular correlation of electron-positron annihilation radiation (2D-ACAR) technique on high-temperature superconductors. A piecewise continuous window least squares (WLS) method devoted to the statistical noise smoothing of 2D-ACAR data, under close control of the crystal reciprocal lattice periodicity, is derived. Reliability evaluation of the constant local weight WLS smoothing formula (CW-WLSF) shows that consistent processing of 2D-ACAR data by CW-WLSF is possible. CW-WLSF analysis of 2D-ACAR data collected on untwinned YBa2Cu3O7-δ single crystals yields a significantly improved signature of the Fermi surface ridge at second Umklapp processes and resolves, for the first time, the ridge signature at third Umklapp processes. (author). 24 refs, 9 figs

  12. A comparative analysis of the EEDF obtained by Regularization and by Least square fit methods

    International Nuclear Information System (INIS)

    The second derivative of the current-voltage (I-V) characteristic curve of a Langmuir probe is numerically calculated using the Tikhonov method to determine the electron energy distribution function (EEDF). A comparison between the EEDF obtained this way and one obtained by a least squares (LS) fit is discussed. The experimental I-V curve is obtained with a cylindrical probe in an electron cyclotron resonance (ECR) plasma source. The plasma parameters are determined from the EEDF by means of the Laframboise theory. For the LS fit, the results obtained are similar to those obtained by the Tikhonov method, but in the former case the procedure is slow to achieve the best fit. (Author)
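
    A compact way to see the Tikhonov step is as a penalized least-squares smoothing of the probe characteristic before differentiation. This is a generic sketch, not the authors' implementation; the uniform voltage grid and the value of the regularization parameter λ are assumptions.

    ```python
    import numpy as np

    def tikhonov_second_derivative(v, i, lam=1e-3):
        """Estimate d2I/dV2 from a noisy I-V curve by Tikhonov smoothing."""
        n = len(i)
        h = v[1] - v[0]                            # assume a uniform voltage grid
        D2 = np.diff(np.eye(n), n=2, axis=0) / h ** 2
        # Smooth: minimize ||f - i||^2 + lam * ||D2 f||^2
        f = np.linalg.solve(np.eye(n) + lam * (D2.T @ D2), i)
        return D2 @ f                              # 2nd derivative of smoothed curve
    ```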

  13. A hybrid least squares support vector machines and GMDH approach for river flow forecasting

    Directory of Open Access Journals (Sweden)

    R. Samsudin

    2010-06-01

    Full Text Available This paper proposes a novel hybrid forecasting model, which combines the group method of data handling (GMDH) and the least squares support vector machine (LSSVM), known as GLSSVM. The GMDH is used to determine the useful input variables for the LSSVM model, and the LSSVM model performs the time series forecasting. In this study the application of GLSSVM to monthly river flow forecasting of the Selangor and Bernam Rivers is investigated. The results of the proposed GLSSVM approach are compared with conventional artificial neural network (ANN) models, the Autoregressive Integrated Moving Average (ARIMA) model, and the GMDH and LSSVM models using long-term observations of monthly river flow discharge. The standard statistical measures, root mean square error (RMSE) and coefficient of correlation (R), are employed to evaluate the performance of the various models developed. Experimental results indicate that the hybrid model is a powerful tool for modeling discharge time series and can be applied successfully in complex hydrological modeling.

  14. Quantification of anaesthetic effects on atrial fibrillation rate by partial least-squares

    International Nuclear Information System (INIS)

    The mechanism underlying atrial fibrillation (AF) remains poorly understood; whether multiple wandering propagation wavelets drift through both atria or a hierarchical model applies is not settled. Some pharmacological drugs, known as antiarrhythmics, modify the cardiac ionic currents supporting the fibrillation process within the atria and may alter the AF propagation dynamics, terminating the fibrillation process. Other medications, theoretically non-antiarrhythmic, may slightly affect the fibrillation process through undefined mechanisms. We evaluated whether the most commonly used anaesthetic agent, propofol, affects AF patterns. Partial least-squares (PLS) analysis was performed to compress the noisy measurements into the main latent variables and to find the differences between groups. The final results showed an excellent discrimination between groups, with slower atrial activity during the propofol infusion. (paper)

  15. Modeling Hotel Room Occupancy Rates in Kendari with Continuous Wavelet Transformation and Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Margaretha Ohyver

    2014-12-01

    Full Text Available Multicollinearity and outliers are common problems when estimating a regression model. Multicollinearity occurs when there are high correlations among predictor variables, leading to difficulties in separating the effects of each independent variable on the response variable. Meanwhile, if outliers are present in the data to be analyzed, the assumption of normality in the regression will be violated and the results of the analysis may be incorrect or misleading. Both of these problems occurred in the data on the room occupancy rate of hotels in Kendari. The purpose of this study is to find a model for the data that is free of multicollinearity and outliers, and to determine the factors that affect the room occupancy rate of hotels in Kendari. The methods used are the Continuous Wavelet Transformation and Partial Least Squares. The result of this research is a regression model that is free of multicollinearity and that resolves the presence of outliers.

  16. Novel passive localization algorithm based on double side matrix-restricted total least squares

    Institute of Scientific and Technical Information of China (English)

    Xu Zheng; Qu Changwen; Wang Changhai

    2013-01-01

    In order to solve the bearings-only passive localization problem in the presence of erroneous observer position, a novel algorithm based on double side matrix-restricted total least squares (DSMRTLS) is proposed. First, the aforementioned passive localization problem is transferred to the DSMRTLS problem by deriving a multiplicative structure for both the observation matrix and the observation vector. Second, the corresponding optimization problem of the DSMRTLS problem without constraint is derived, which can be approximated as the generalized Rayleigh quotient minimization problem. Then, the localization solution, which is globally optimal and asymptotically unbiased, can be obtained by generalized eigenvalue decomposition. Simulation results verify the rationality of the approximation and the good performance of the proposed algorithm compared with several typical algorithms.

  17. Simultaneous evaluation of interrelated cross sections by generalized least-squares and related data file requirements

    International Nuclear Information System (INIS)

    Though several cross sections have been designated as standards, they are not basic units and are interrelated by ratio measurements. Moreover, as such interactions as 6Li + n and 10B + n involve only two and three cross sections respectively, total cross section data become useful for the evaluation process. The problem can be resolved by a simultaneous evaluation of the available absolute and shape data for cross sections, ratios, sums, and average cross sections by generalized least-squares. A data file is required for such evaluation which contains the originally measured quantities and their uncertainty components. Establishing such a file is a substantial task because data were frequently reported as absolute cross sections where ratios were measured without sufficient information on which reference cross section and which normalization were utilized. Reporting of uncertainties is often missing or incomplete. The requirements for data reporting will be discussed

  18. Influence and interaction indexes for pseudo-Boolean functions: a unified least squares approach

    CERN Document Server

    Marichal, Jean-Luc

    2012-01-01

    The Banzhaf power and interaction indexes for a pseudo-Boolean function (or a cooperative game) appear naturally as leading coefficients in the standard least squares approximation of the function by a pseudo-Boolean function of a specified degree. We first observe that this property still holds if we consider approximations by pseudo-Boolean functions depending only on specified variables. We then show that the Banzhaf influence index can also be obtained from the latter approximation problem. Considering certain weighted versions of this approximation problem, we introduce a class of weighted Banzhaf influence indexes, analyze their most important properties, and point out similarities between the weighted Banzhaf influence index and the corresponding weighted Banzhaf interaction index.

  19. Multiexponential analysis of experimental data by an automatic peeling technique followed by non-linear least-squares adaption

    International Nuclear Information System (INIS)

    This report is concerned with multi-exponential fitting of a model function f(t) = Σ_{j=1}^{n} a_j exp(−α_j t) + η(t), with a_j, α_j > 0 for 1 ≤ j ≤ n and η(t) = a + bt (*), to given experimental data (t_k, y_k), 1 ≤ k ≤ m, where the number n of exponential terms contained in (*) is not known in advance. An automatic version of the well-known manually performed peeling technique is realized and implemented in the subroutine SEARCH. This program yields the above-mentioned number n and, in addition, initial values for the parameters a, b, a_j, α_j, 1 ≤ j ≤ n, which serve as input data for a final non-linear fitting of model (*) by a convenient non-linear fit program, e.g. VARPRO (from FORTLIB of KFA) or VA13AD (from the Harwell Subroutine Library). Moreover, auxiliary programs for evaluation of f, the partial exponential terms in f, and the appertaining (possibly weighted) least squares functional F, as well as subroutines for determination of the first and second partial derivatives of f and F with respect to the parameters, are made accessible. Characteristic examples of multi-exponential fitting to simulated and experimental data demonstrate the efficiency of the presented method. (orig.)
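
    The peeling idea translates directly into code: fit the slowest exponential on a log scale over the tail of the data, subtract it, and repeat, then hand the resulting initial values to a nonlinear least-squares refiner. The sketch below uses scipy's curve_fit in place of VARPRO/VA13AD and fixes n = 2 for simplicity; the noise floor and window choices are arbitrary placeholders.

    ```python
    import numpy as np
    from scipy.optimize import curve_fit

    def peel(t, y, n_terms, floor=0.05):
        """Automatic peeling: fit the slowest exponential on a log scale over
        the tail still above the noise floor, subtract it, and repeat."""
        params, resid = [], y.astype(float).copy()
        for _ in range(n_terms):
            above = resid > floor
            stop = len(resid) if above.all() else int(np.argmin(above))
            window = slice(stop // 2, stop)        # later half of the usable range
            slope, logc = np.polyfit(t[window], np.log(resid[window]), 1)
            a, alpha = np.exp(logc), -slope
            params += [a, alpha]
            resid = np.clip(resid - a * np.exp(-alpha * t), 1e-12, None)
        return params

    def model(t, a1, l1, a2, l2):                  # n = 2 fixed for the sketch
        return a1 * np.exp(-l1 * t) + a2 * np.exp(-l2 * t)

    t = np.linspace(0, 5, 200)
    rng = np.random.default_rng(8)
    y = model(t, 5.0, 3.0, 2.0, 0.5) + 0.02 * rng.standard_normal(200)
    p0 = peel(t, y, 2)                             # automatic initial values
    popt, _ = curve_fit(model, t, y, p0=p0)        # final nonlinear refinement
    ```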

  20. Applications of the two-dimensional differential transform and least square method for solving nonlinear wave equations

    Directory of Open Access Journals (Sweden)

    Davood Domiri Ganji

    2014-10-01

    Full Text Available The differential transform and least square methods are analytical methods for solving differential equations. In this article, the two-dimensional Differential Transform Method (2D DTM) and the Least Square Method (LSM) are applied to obtain analytic solutions of two-dimensional nonlinear wave equations. We demonstrate that the differential transform and least square methods are very effective and convenient for achieving analytical solutions of linear or nonlinear partial differential equations. Three examples are given to demonstrate the accuracy of the methods, and the results of these methods are compared with the exact solutions.

  1. An integrated approach to the simultaneous selection of variables, mathematical pre-processing and calibration samples in partial least-squares multivariate calibration.

    Science.gov (United States)

    Allegrini, Franco; Olivieri, Alejandro C

    2013-10-15

    A new optimization strategy for multivariate partial-least-squares (PLS) regression analysis is described. It was achieved by integrating three efficient strategies to improve PLS calibration models: (1) variable selection based on ant colony optimization, (2) mathematical pre-processing selection by a genetic algorithm, and (3) sample selection through a distance-based procedure. Outlier detection has also been included as part of the model optimization. All the above procedures have been combined into a single algorithm, whose aim is to find the best PLS calibration model within a Monte Carlo-type philosophy. Simulated and experimental examples are employed to illustrate the success of the proposed approach. PMID:24054659

  2. The MCLIB library: Monte Carlo simulation of neutron scattering instruments

    International Nuclear Information System (INIS)

    Monte Carlo is a method to integrate over a large number of variables. Random numbers are used to select a value for each variable, and the integrand is evaluated. The process is repeated a large number of times and the resulting values are averaged. For a neutron transport problem, one first selects a neutron from the source distribution, projects it through the instrument using either deterministic or probabilistic algorithms to describe its interaction whenever it hits something, and then (if it hits the detector) tallies it in a histogram representing where and when it was detected. This is intended to simulate the process of running an actual experiment (but it is much slower). This report describes the philosophy and structure of MCLIB, a Fortran library of Monte Carlo subroutines which has been developed for the design of neutron scattering instruments. A pair of programs (LQDGEOM and MC RUN) which use the library are shown as an example.
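
    The sample-project-tally loop described above can be caricatured in a few lines. This toy slab-transmission estimator is only meant to show the structure of such a simulation; it is not MCLIB's interface, and the cross-section, absorption probability and isotropic-scatter model are invented for the example.

    ```python
    import numpy as np

    rng = np.random.default_rng(9)
    n_neutrons = 100_000
    sigma_t = 1.0          # total macroscopic cross-section (1/cm), invented
    p_absorb = 0.3         # absorption probability per collision, invented
    thickness = 2.0        # slab thickness (cm)

    transmitted = 0
    for _ in range(n_neutrons):
        x, mu = 0.0, 1.0                             # position, direction cosine
        while True:
            x += mu * rng.exponential(1.0 / sigma_t) # distance to next collision
            if x >= thickness:
                transmitted += 1                     # tally: reached the far side
                break
            if x < 0 or rng.random() < p_absorb:
                break                                # leaked back or absorbed
            mu = rng.uniform(-1.0, 1.0)              # isotropic scatter
    print("transmission fraction:", transmitted / n_neutrons)
    ```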

  3. Library Design in Combinatorial Chemistry by Monte Carlo Methods

    OpenAIRE

    Falcioni, Marco; Michael W. Deem

    2000-01-01

    Strategies for searching the space of variables in combinatorial chemistry experiments are presented, and a random energy model of combinatorial chemistry experiments is introduced. The search strategies, derived by analogy with the computer modeling technique of Monte Carlo, effectively search the variable space even in combinatorial chemistry experiments of modest size. Efficient implementations of the library design and redesign strategies are feasible with current experimental capabilities.

  4. [Main Components of Xinjiang Lavender Essential Oil Determined by Partial Least Squares and Near Infrared Spectroscopy].

    Science.gov (United States)

    Liao, Xiang; Wang, Qing; Fu, Ji-hong; Tang, Jun

    2015-09-01

    This work was undertaken to establish a quantitative analysis model which can rapidly determine the content of linalool and linalyl acetate in Xinjiang lavender essential oil. A total of 165 lavender essential oil samples were measured by near infrared absorption spectroscopy (NIR). After analyzing the near infrared spectral absorption peaks of all samples, it was found that lavender essential oil has abundant chemical information and the interference of random noise is relatively low in the spectral interval of 7100~4500 cm(-1); thus, the PLS models were constructed using this interval for further analysis. 8 abnormal samples were eliminated. Through a clustering method, the remaining 157 lavender essential oil samples were divided into 105 calibration set samples and 52 validation set samples. Gas chromatography mass spectrometry (GC-MS) was used as a reference tool to determine the content of linalool and linalyl acetate in lavender essential oil. The matrix was then established with the GC-MS raw data of the two compounds in combination with the original NIR data. In order to optimize the model, different pretreatment methods were used to preprocess the raw NIR spectra and to contrast the spectral filtering effects. After analyzing the quantitative model results for linalool and linalyl acetate, the root mean square errors of prediction (RMSEP) with orthogonal signal correction (OSC) were 0.226 and 0.558, respectively; OSC was therefore the optimum pretreatment method. In addition, the forward interval partial least squares (FiPLS) method was used to exclude wavelength points which have nothing to do with the determined composition or which present nonlinear correlation; finally, 8 spectral intervals with 160 wavelength points in total were obtained as the dataset. Combining the datasets optimized by OSC-FiPLS with partial least squares (PLS), a rapid quantitative analysis model for determining the content of linalool and linalyl acetate in Xinjiang lavender essential oil was established; numbers of hidden variables of two

  5. Quantitative simultaneous determination some phenolic compounds by orthogonal signal correction partial least squares in real samples

    International Nuclear Information System (INIS)

    Complete text of publication follows. Phenolic components belong to a class of polluting chemicals, easily absorbed by animals and humans through the skin and mucous membranes (S. Canofeni, S. DiSario, J. Mela, R. Pilloton, Anal. Lett. 27 (1994) 1659-1669). A simple, rapid and reliable method for the simultaneous determination of multi-component phenolic pollutants has been developed. PLS modeling is a powerful multivariate statistical tool and can be performed with easily accessible statistical software (D.M. Haaland, E.V. Thomas, Anal. Chem. 60 (1988) 1193-1202). Pre-processing methods can be applied in such situations to enhance the relevant information and to make the resulting models simpler and easier to interpret. Wold et al. (S. Wold, H. Antti, F. Lindgren, J. Ohman, Chemom. Intell. Lab. Syst. 44 (1998) 175-185) introduced orthogonal signal correction (OSC) as a pre-processing step that improves the calibration model by filtering strong structured (i.e. systematic) variation in X that is not correlated to Y. We used the orthogonal signal correction partial least squares (OSC-PLS) method to process the data. A mixture design for standards was used in the calibration step to build the OSC-PLS model. The linear ranges were 0.94-75.20, 1.39-41.64 and 1.10-66.24 μg ml-1 for phenol (PH), 4-nitrophenol (4-NP) and hydroquinone (HQ), respectively. The cross-validation method was used to select the number of factors. The numbers of principal components for PH, 4-NP and HQ with and without OSC were 3, 3, 3 and 3, 4, 5, respectively. To check the accuracy of the proposed method, a recovery study on real samples was carried out. The PRESS values for PH, 4-NP and HQ with and without OSC were 3.7608, 2.0331, 3.5863 and 4.8348, 2.4052, 3.9598, respectively. The RMSEP values for PH, 4-NP and HQ with and without OSC were 0.7330, 0.5389, 0.7158 and 0.8311, 0.5862, 0.7521, respectively. The results show that the proposed method was successfully applied to the

  6. [Near infrared spectroscopy quantitative analysis model based on incremental neural network with partial least squares].

    Science.gov (United States)

    Cao, Hui; Li, Da-Hang; Liu, Ling; Zhou, Yan

    2014-10-01

    This paper proposes a near infrared spectroscopy quantitative analysis model based on an incremental neural network with partial least squares. The proposed model adopts the typical three-layer back-propagation neural network (BPNN), with the absorbances at different wavelengths as the inputs and the component concentrations as the outputs. Partial least squares (PLS) regression is performed on the history training samples first, and the obtained history loading matrices of the independent variables and the dependent variables are used for determining the initial weights of the input layer and the output layer, respectively. The number of hidden layer nodes is set to the number of principal components of the independent variables. After a set of new training samples is collected, PLS regression is performed on the combined dataset consisting of the new samples and the history loading matrices to calculate the new loading matrices. The history loading matrices and the new loading matrices are fused to obtain the new initial weights of the input layer and the output layer of the proposed model. Then the new samples are used for training the proposed model to realize the incremental update. The proposed model is compared with PLS, BPNN, the BPNN based on PLS (PLS-BPNN) and the recursive PLS (RPLS) using the spectral data of the flue gas of natural gas combustion. For the concentration prediction of carbon dioxide in the flue gas, the root mean square errors of prediction (RMSEP) of the proposed model are reduced by 27.27%, 58.12%, 19.24% and 14.26% compared with those of PLS, BPNN, PLS-BPNN and RPLS, respectively. For the concentration prediction of carbon monoxide in the flue gas, the RMSEP of the proposed model are reduced by 20.65%, 24.69%, 18.54% and 19.42% compared with those of PLS, BPNN, PLS-BPNN and RPLS, respectively. For the concentration prediction of methane in the flue gas, the RMSEP of the proposed model are reduced by 27

  7. Retrieve the evaporation duct height by least-squares support vector machine algorithm

    Science.gov (United States)

    Douvenot, Remi; Fabbro, Vincent; Bourlier, Christophe; Saillard, Joseph; Fuchs, Hans-Hellmuth; Essen, Helmut; Foerster, Joerg

    2009-01-01

    The detection and tracking of naval targets, including low Radar Cross Section (RCS) objects like inflatable boats or sea-skimming missiles, requires a thorough knowledge of the propagation properties of the maritime boundary layer. Models exist that allow a prediction of the propagation factor using the parabolic equation algorithm. As a necessary input, the refractive index has to be known. This index, however, is strongly influenced by the actual atmospheric conditions, characterized mainly by temperature, humidity and air pressure. An approach is initiated to retrieve the vertical profile of the refractive index from the propagation factor measured on an onboard target. The method is based on the LS-SVM (Least-Squares Support Vector Machines) theory. The inversion method is used here to determine the refractive index from data measured during the VAMPIRA campaign (Validation Measurement for Propagation in the Infrared and RAdar), conducted as a multinational effort over a transmission path across the Baltic Sea. As the propagation factor was measured on two reference reflectors mounted onboard a naval vessel at different heights, the inversion method can be tested on both heights. The paper describes the experimental campaign and validates the LS-SVM inversion method for refractivity from the propagation factor on simple measured data.
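
    For readers unfamiliar with LS-SVM regression, the core computation is a single linear solve of the dual KKT system rather than a quadratic program. The sketch below is a generic toy example (random data with an arbitrary RBF kernel width and regularization constant, not the VAMPIRA measurements):

```python
# Minimal LS-SVM regression sketch: solve the KKT system
#   [ 0    1^T        ] [b]   [0]
#   [ 1    K + I/gamma] [a] = [y]
import numpy as np

def rbf_kernel(A, B, sigma=1.0):
    """Gaussian RBF kernel matrix between the rows of A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def lssvm_fit(X, y, gamma=10.0, sigma=1.0):
    n = len(y)
    K = rbf_kernel(X, X, sigma)
    M = np.zeros((n + 1, n + 1))
    M[0, 1:] = 1.0
    M[1:, 0] = 1.0
    M[1:, 1:] = K + np.eye(n) / gamma
    sol = np.linalg.solve(M, np.concatenate(([0.0], y)))
    return sol[0], sol[1:]                    # bias b, support values alpha

def lssvm_predict(X_train, alpha, b, X_new, sigma=1.0):
    return rbf_kernel(X_new, X_train, sigma) @ alpha + b

# toy usage: recover a smooth profile from noisy samples
rng = np.random.default_rng(0)
X = rng.uniform(0, 5, (40, 1))
y = np.sin(X[:, 0]) + 0.05 * rng.standard_normal(40)
b, alpha = lssvm_fit(X, y)
print(lssvm_predict(X, alpha, b, np.array([[2.5]])))   # approx sin(2.5)
```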

  8. Prediction of Placental Barrier Permeability: A Model Based on Partial Least Squares Variable Selection Procedure

    Directory of Open Access Journals (Sweden)

    Yong-Hong Zhang

    2015-05-01

    Full Text Available Assessing the human placental barrier permeability of drugs is very important to guarantee drug safety during pregnancy. The quantitative structure–activity relationship (QSAR) method has been used as an effective assessment tool for placental transfer studies of drugs, while in vitro human placental perfusion is the most widely used experimental method. In this study, the partial least squares (PLS) variable selection and modeling procedure was used to pick out optimal descriptors from a pool of 620 descriptors of 65 compounds and to simultaneously develop a QSAR model between the descriptors and the placental barrier permeability expressed by the clearance indices (CI). The model was subjected to internal validation by cross-validation and y-randomization and to external validation by predicting CI values of 19 compounds. It was shown that the model developed is robust and has a good predictive potential (r2 = 0.9064, RMSE = 0.09, q2 = 0.7323, rp2 = 0.7656, RMSP = 0.14). The mechanistic interpretation of the final model was given by the high variable importance in projection values of descriptors. Using the PLS procedure, we can rapidly and effectively select optimal descriptors and thus construct a model with good stability and predictability. This analysis can provide an effective tool for the high-throughput screening of the placental barrier permeability of drugs.

  9. Semiparametric regression of multidimensional genetic pathway data: least-squares kernel machines and linear mixed models.

    Science.gov (United States)

    Liu, Dawei; Lin, Xihong; Ghosh, Debashis

    2007-12-01

    We consider a semiparametric regression model that relates a normal outcome to covariates and a genetic pathway, where the covariate effects are modeled parametrically and the pathway effect of multiple gene expressions is modeled parametrically or nonparametrically using least-squares kernel machines (LSKMs). This unified framework allows a flexible function for the joint effect of multiple genes within a pathway by specifying a kernel function and allows for the possibility that each gene expression effect might be nonlinear and the genes within the same pathway are likely to interact with each other in a complicated way. This semiparametric model also makes it possible to test for the overall genetic pathway effect. We show that the LSKM semiparametric regression can be formulated using a linear mixed model. Estimation and inference hence can proceed within the linear mixed model framework using standard mixed model software. Both the regression coefficients of the covariate effects and the LSKM estimator of the genetic pathway effect can be obtained using the best linear unbiased predictor in the corresponding linear mixed model formulation. The smoothing parameter and the kernel parameter can be estimated as variance components using restricted maximum likelihood. A score test is developed to test for the genetic pathway effect. Model/variable selection within the LSKM framework is discussed. The methods are illustrated using a prostate cancer data set and evaluated using simulations. PMID:18078480

  10. First-order system least-squares for the Helmholtz equation

    Energy Technology Data Exchange (ETDEWEB)

    Lee, B.; Manteuffel, T.; McCormick, S.; Ruge, J.

    1996-12-31

    We apply the FOSLS methodology to the exterior Helmholtz equation Δp + k²p = 0. Several least-squares functionals, some of which include both H⁻¹(Ω) and L²(Ω) terms, are examined. We show that in a special subspace of [H(div; Ω) ∩ H(curl; Ω)] × H¹(Ω), each of these functionals is equivalent, independently of k, to a scaled H¹(Ω) norm of p and u = ∇p. This special subspace does not include the oscillatory near-nullspace components ce^{ik(αx+βy)}, where c is a complex vector and α² + β² = 1. These components are eliminated by applying a non-standard coarsening scheme. We achieve this scheme by introducing "ray" basis functions which depend on the parameter pair (α, β), and which approximate ce^{ik(αx+βy)} well on the coarser levels where bilinears cannot. We use several pairs of these parameters on each of the coarser levels so that several coarse grid problems are spun off from the finer levels. Some extensions of this theory to the transverse electric wave solution for Maxwell's equations will also be presented.

  11. Programs for least square approximation and graphic display in an experimental data processing computer

    International Nuclear Information System (INIS)

    In the experimental data processing computer PANAFACOM U-400 at the Institute of Plasma Physics, Nagoya University, general-purpose programs have been prepared for checking the data stored in it. These programs were originally developed for use in the on-line data processing system for the JIPP T-2 experiment. They comprise two subroutines for obtaining the straight line that best fits data points by the method of least squares, and several subroutines for the graphic display of data points. With these subroutines, graphic display, statistical processing and the display of its results can be carried out for experimental data. The programs are cataloged as execution load modules in disk files. To use them, it is only necessary to assign the required arguments and then call the subroutines with FORTRAN CALL statements. The graphic display subroutines are based on the standard GRASP of the U-400 graphic processing software. No knowledge of GRASP is required, however. Users can readily use the programs simply by referring to the present report. (J.P.N.)
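
    The report's subroutines themselves are not reproduced here, but the underlying straight-line fit is the textbook normal-equations solution; a minimal Python sketch (function name invented) follows:

```python
# Straight-line least-squares fit via the normal equations.
import numpy as np

def fit_line(x, y):
    """Return (slope a, intercept b) minimising sum (y - (a*x + b))^2."""
    n = len(x)
    sx, sy = x.sum(), y.sum()
    sxx, sxy = (x * x).sum(), (x * y).sum()
    a = (n * sxy - sx * sy) / (n * sxx - sx ** 2)
    b = (sy - a * sx) / n
    return a, b

x = np.array([0.0, 1.0, 2.0, 3.0])
y = np.array([0.1, 0.9, 2.1, 2.9])
print(fit_line(x, y))   # close to slope 1, intercept 0
```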

  12. Weighted Least Squares Techniques for Improved Received Signal Strength Based Localization

    Directory of Open Access Journals (Sweden)

    José R. Casar

    2011-09-01

    Full Text Available The practical deployment of wireless positioning systems requires minimizing the calibration procedures while improving the location estimation accuracy. Received Signal Strength localization techniques using propagation channel models are the simplest alternative, but they are usually designed under the assumption that the radio propagation model is perfectly characterized a priori. In practice, this assumption does not hold, and the localization results are affected by the inaccuracies of the theoretical, roughly calibrated or just imperfect channel models used to compute location. In this paper, we propose the use of weighted multilateration techniques to gain robustness with respect to these inaccuracies, reducing the dependency on an optimal channel model. In particular, we propose two weighted least squares techniques based on the standard hyperbolic and circular positioning algorithms that specifically consider the accuracies of the different measurements to obtain a better estimation of the position. These techniques are compared to the standard hyperbolic and circular positioning techniques through both numerical simulations and an exhaustive set of real experiments on different types of wireless networks (a wireless sensor network, a WiFi network and a Bluetooth network). The algorithms not only produce better localization results with a very limited overhead in terms of computational cost but also achieve a greater robustness to inaccuracies in channel modeling.
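
    As a rough illustration of the weighted circular positioning idea, the following sketch (invented anchor layout and noise levels; ranges assumed to have been derived from RSS through some channel model beforehand) solves the standard linearized multilateration problem with inverse-variance weights:

```python
# Weighted least-squares circular multilateration (toy data).
import numpy as np

def wls_position(anchors, d, var):
    """Weighted LS fix from anchor positions, range estimates d and range
    variances var, linearised against the first anchor:
    theta = (A^T W A)^-1 A^T W b."""
    x1, y1 = anchors[0]
    d1 = d[0]
    A, b, w = [], [], []
    for (xi, yi), di, vi in zip(anchors[1:], d[1:], var[1:]):
        A.append([2 * (xi - x1), 2 * (yi - y1)])
        b.append(d1**2 - di**2 - x1**2 - y1**2 + xi**2 + yi**2)
        w.append(1.0 / (vi + var[0]))     # crude weight: inverse variance
    A, b, W = np.array(A), np.array(b), np.diag(w)
    return np.linalg.solve(A.T @ W @ A, A.T @ W @ b)

anchors = np.array([[0.0, 0.0], [10.0, 0.0], [0.0, 10.0], [10.0, 10.0]])
true = np.array([3.0, 4.0])
d = np.linalg.norm(anchors - true, axis=1) \
    + np.random.default_rng(1).normal(0, 0.1, 4)
print(wls_position(anchors, d, np.full(4, 0.1**2)))   # approx (3, 4)
```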

  13. Plane-Wave Least-Squares Reverse Time Migration for Rugged Topography

    Institute of Scientific and Technical Information of China (English)

    Jianping Huang; Chuang Li; Rongrong Wang; Qingyang Li

    2015-01-01

    We present a method based on least-squares reverse time migration with plane-wave encoding (P-LSRTM) for rugged topography. Instead of modifying the wavefield before migration, we modify the plane-wave encoding function and fill constant velocity into the area above the rugged topography in the model, so that P-LSRTM can be performed directly from the rugged surface in the same way as shot-domain reverse time migration. In order to improve efficiency and reduce I/O (input/output) cost, a dynamic encoding strategy and a hybrid encoding strategy are implemented. Numerical tests on the SEG rugged topography model show that P-LSRTM can suppress migration artifacts in the migration image and compensate amplitudes in the middle-to-deep part efficiently. Without data correction, P-LSRTM can produce a satisfactory image of the near surface if an accurate near-surface velocity model is available. Moreover, the pre-stack P-LSRTM is more robust than conventional RTM in the presence of migration velocity errors.

  14. Automatic retinal vessel classification using a Least Square-Support Vector Machine in VAMPIRE.

    Science.gov (United States)

    Relan, D; MacGillivray, T; Ballerini, L; Trucco, E

    2014-01-01

    It is important to classify retinal blood vessels into arterioles and venules for computerised analysis of the vasculature and to aid discovery of disease biomarkers. For instance, zone B is the standardised region of a retinal image utilised for the measurement of the arteriole-to-venule width ratio (AVR), a parameter indicative of microvascular health and systemic disease. We introduce a Least Square-Support Vector Machine (LS-SVM) classifier, for the first time to the best of our knowledge, to label arterioles and venules automatically. We use only 4 image features and consider vessels inside zone B (802 vessels from 70 fundus camera images) and in an extended zone (1,207 vessels, 70 fundus camera images). We achieve an accuracy of 94.88% and 93.96% in zone B and the extended zone, respectively, with a training set of 10 images and a testing set of 60 images. With a smaller training set of only 5 images and the same testing set, we achieve an accuracy of 94.16% and 93.95%, respectively. This experiment was repeated five times by randomly choosing 10 and 5 images for the training set. Mean classification accuracies are close to the above-mentioned results. We conclude that the performance of our system is very promising and outperforms most recently reported systems. Our approach requires smaller training data sets compared to others but still results in a similar or higher classification rate. PMID:25569917

  15. Multidimensional model of apathy in older adults using partial least squares-path modeling.

    Science.gov (United States)

    Raffard, Stéphane; Bortolon, Catherine; Burca, Marianna; Gely-Nargeot, Marie-Christine; Capdevielle, Delphine

    2016-06-01

    Apathy, defined as a mental state characterized by a lack of goal-directed behavior, is prevalent and associated with poor functioning in older adults. The main objective of this study was to identify factors contributing to the distinct dimensions of apathy (cognitive, emotional, and behavioral) in older adults without dementia. One hundred and fifty participants (mean age, 80.42) completed self-rated questionnaires assessing apathy, emotional distress, anticipatory pleasure, motivational systems, physical functioning, quality of life, and cognitive functioning. Data were analyzed using partial least squares variance-based structural equation modeling in order to examine factors contributing to the three different dimensions of apathy in our sample. Overall, the different facets of apathy were associated with cognitive functioning, anticipatory pleasure, sensitivity to reward, and physical functioning, but the contributions of these different factors to the three dimensions of apathy differed significantly. More specifically, the impact of anticipatory pleasure and physical functioning was stronger for cognitive than for emotional apathy. Conversely, the impact of sensitivity to reward, although small, was slightly stronger on emotional apathy. Regarding behavioral apathy, we again found similar latent variables, except for cognitive functioning, whose impact was not statistically significant. Our results highlight the need to take into account the various mechanisms involved in the different facets of apathy in older adults without dementia, including not only cognitive factors but also motivational variables and aspects related to physical disability. Clinical implications are discussed. PMID:27153818

  16. Analysis and application of partial least square regression in arc welding process

    Institute of Scientific and Technical Information of China (English)

    YANG Hai-lan; CAI Yan; BAO Ye-feng; ZHOU Yun

    2005-01-01

    Because of the correlations among the parameters, partial least squares regression (PLSR) was applied to build the model and obtain the regression equation. The improved algorithm simplified the calculating process greatly because of the reduction in computation. An orthogonal design was adopted in this experiment. Every sample had strong representativeness, which could reduce the experimental time and provide comprehensive test data. In connection with the weld formation problem in gas metal arc welding with large current, the auxiliary analysis technique of PLSR was discussed, and the regression equation of the form factors (i.e. surface width, weld penetration and weld reinforcement) with respect to the process parameters (i.e. wire feed rate, wire extension, welding speed, gas flow, welding voltage and welding current) was given. The correlation structure among the variables was analyzed, and there was a certain correlation between the independent variables matrix X and the dependent variables matrix Y. The regression analysis shows that the welding speed mainly influences the weld formation, while variation of the gas flow within a certain range has little influence on the formation of the weld. The fitting plot of regression accuracy is given. The fitting quality of the regression equation is basically satisfactory.

  17. Modeling of a PEM Fuel Cell Stack using Partial Least Squares and Artificial Neural Networks

    Energy Technology Data Exchange (ETDEWEB)

    Han, In-Su; Shin, Hyun Khil [GS Caltex Corp, Daejeon (Korea, Republic of)

    2015-04-15

    We present two data-driven modeling methods, partial least squares (PLS) and artificial neural networks (ANN), to predict the major operating and performance variables of a polymer electrolyte membrane (PEM) fuel cell stack. PLS and ANN models were constructed using the experimental data obtained from the testing of a 30 kW-class PEM fuel cell stack, and then were compared with each other in terms of their prediction and computational performances. To reduce the complexity of the models, we incorporated variable importance on PLS projection (VIP) as a variable selection method into the modeling procedure, in which the predictor variables are selected from a set of input operation variables. The modeling results showed that the ANN models outperformed the PLS models in predicting the average cell voltage and cathode outlet temperature of the fuel cell stack. However, the PLS models also offered satisfactory prediction performances, although they can only capture linear correlations between the predictor and output variables. Depending on the required degree of modeling accuracy and speed, both ANN and PLS models can be employed for performance predictions, offline and online optimizations, controls, and fault diagnoses in the field of PEM fuel cell design and operation.
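
    The VIP screening step mentioned above can be sketched compactly. The following hypothetical example (synthetic data, scikit-learn's PLSRegression, and the customary VIP > 1 retention rule) computes VIP scores from a fitted PLS model:

```python
# VIP scores from a fitted PLS model for variable selection (toy data).
import numpy as np
from sklearn.cross_decomposition import PLSRegression

def vip_scores(pls, X):
    T = pls.x_scores_            # (n, A) latent scores
    W = pls.x_weights_           # (p, A) X-weights
    Q = pls.y_loadings_          # (m, A) Y-loadings
    p, A = W.shape
    # explained sum of squares of Y per component
    ssy = np.array([(T[:, a] @ T[:, a]) * (Q[:, a] @ Q[:, a]) for a in range(A)])
    wnorm = W / np.linalg.norm(W, axis=0)
    return np.sqrt(p * (wnorm ** 2 @ ssy) / ssy.sum())

rng = np.random.default_rng(0)
X = rng.standard_normal((60, 8))
y = 3 * X[:, 0] - 2 * X[:, 2] + 0.1 * rng.standard_normal(60)
pls = PLSRegression(n_components=3).fit(X, y)
vip = vip_scores(pls, X)
print(np.where(vip > 1.0)[0])    # retained predictors (expected: 0 and 2)
```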

  18. Prediction for human intelligence using morphometric characteristics of cortical surface: partial least square analysis.

    Science.gov (United States)

    Yang, J-J; Yoon, U; Yun, H J; Im, K; Choi, Y Y; Lee, K H; Park, H; Hough, M G; Lee, J-M

    2013-08-29

    A number of imaging studies have reported neuroanatomical correlates of human intelligence with various morphological characteristics of the cerebral cortex. However, it is not yet clear whether these morphological properties of the cerebral cortex account for human intelligence. We assumed that the complex structure of the cerebral cortex could be explained effectively considering cortical thickness, surface area, sulcal depth and absolute mean curvature together. In 78 young healthy adults (age range: 17-27, male/female: 39/39), we used the full-scale intelligence quotient (FSIQ) and the cortical measurements calculated in native space from each subject to determine how much combining various cortical measures explained human intelligence. Since each cortical measure is thought to be not independent but highly inter-related, we applied partial least square (PLS) regression, which is one of the most promising multivariate analysis approaches, to overcome multicollinearity among cortical measures. Our results showed that 30% of FSIQ was explained by the first latent variable extracted from PLS regression analysis. Although it is difficult to relate the first derived latent variable with specific anatomy, we found that cortical thickness measures had a substantial impact on the PLS model supporting the most significant factor accounting for FSIQ. Our results presented here strongly suggest that the new predictor combining different morphometric properties of complex cortical structure is well suited for predicting human intelligence. PMID:23643979

  19. [UV spectroscopy coupled with partial least squares to determine the enantiomeric composition in chiral drugs].

    Science.gov (United States)

    Li, Qian-qian; Wu, Li-jun; Liu, Wei; Cao, Jin-li; Duan, Jia; Huang, Yue; Min, Shun-geng

    2012-02-01

    In the present study, sucrose was used as a chiral selector to detect the molar fraction of R-metalaxyl and S-ibuprofen due to the UV spectral difference caused by the interaction of the R- and S-isomers with sucrose. The quantitative model of the molar fraction of R-metalaxyl was established by partial least squares (PLS) regression and the robustness of the model was evaluated with 6 independent validation samples. The determination coefficient R2 and the standard error of the calibration set (SEC) were 99.98% and 0.003, respectively. The correlation coefficient of estimated versus specified values, the standard error and the relative standard deviation (RSD) of the independent validation samples were 0.9998, 0.0004 and 0.054%, respectively. The quantitative model of the molar fraction of S-ibuprofen was established by PLS and the robustness of the model was evaluated. The determination coefficient R2 and the standard error of the calibration set (SEC) were 99.82% and 0.007, respectively. The correlation coefficient of estimated versus specified values of the independent validation samples was 0.9981. The standard error of prediction (SEP) was 0.002 and the relative standard deviation (RSD) was 0.2%. The results demonstrate that sucrose is an ideal chiral selector for building a stable regression model to determine the enantiomeric composition. PMID:22512198

  20. Least-squares reverse time migration of marine data with frequency-selection encoding

    KAUST Repository

    Dai, Wei

    2013-06-24

    The phase-encoding technique can sometimes increase the efficiency of the least-squares reverse time migration (LSRTM) by more than one order of magnitude. However, traditional random encoding functions require all the encoded shots to share the same receiver locations, thus limiting the usage to seismic surveys with a fixed spread geometry. We implement a frequency-selection encoding strategy that accommodates data with a marine streamer geometry. The encoding functions are delta functions in the frequency domain, so that all the encoded shots have unique nonoverlapping frequency content, and the receivers can distinguish the wavefield from each shot with a unique frequency band. Because the encoding functions are orthogonal to each other, there will be no crosstalk between different shots during modeling and migration. With the frequency-selection encoding method, the computational efficiency of LSRTM is increased so that its cost is comparable to conventional RTM for the Marmousi2 model and a marine data set recorded in the Gulf of Mexico. With more iterations, the LSRTM image quality is further improved by suppressing migration artifacts, balancing reflector amplitudes, and enhancing the spatial resolution. We conclude that LSRTM with frequency-selection is an efficient migration method that can sometimes produce more focused images than conventional RTM. © 2013 Society of Exploration Geophysicists.

  1. Prediction of ferric iron precipitation in bioleaching process using partial least squares and artificial neural network

    Directory of Open Access Journals (Sweden)

    Golmohammadi Hassan

    2013-01-01

    Full Text Available A quantitative structure-property relationship (QSPR) study based on partial least squares (PLS) and an artificial neural network (ANN) was developed for the prediction of ferric iron precipitation in the bioleaching process. The leaching temperature, initial pH, oxidation/reduction potential (ORP), ferrous concentration and particle size of the ore were used as inputs to the network. The output of the model was the ferric iron precipitation. The optimal condition of the neural network was obtained by adjusting various parameters by trial-and-error. After optimization and training of the network according to the back-propagation algorithm, a 5-5-1 neural network was generated for the prediction of ferric iron precipitation. The root mean square errors of the neural-network-calculated ferric iron precipitation for the training, prediction and validation sets are 32.860, 40.739 and 35.890, respectively, which are smaller than those obtained by the PLS model (180.972, 165.047 and 149.950, respectively). The results obtained reveal the reliability and good predictivity of the neural network model for the prediction of ferric iron precipitation in the bioleaching process.

  2. Amplitude differences least squares method applied to temporal cardiac beat alignment

    Science.gov (United States)

    Correa, R. O.; Laciar, E.; Valentinuzzi, M. E.

    2007-11-01

    High resolution averaged ECG is an important diagnostic technique in post-infarcted and/or chagasic patients with a high risk of ventricular tachycardia (VT). It calls for precise determination of the synchronism point (fiducial point) in each beat to be averaged. Cross-correlation (CC) between each detected beat and a reference beat is, by and large, the standard alignment procedure. However, the fiducial point determination is not precise in records contaminated with high levels of noise. Herein, we propose an alignment procedure based on the least squares calculation of the amplitude differences (LSAD) between the ECG samples and a reference or template beat. Both techniques, CC and LSAD, were tested in high resolution ECGs corrupted with white noise and 50 Hz line interference of varying amplitudes (RMS range: 0-100 μV). Results show that LSAD produced a lower alignment error in all contaminated records, while in those blurred by power line interference better results were found only within the 0-40 μV range. It is concluded that the proposed method represents a valid alignment alternative.
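
    A minimal sketch of the LSAD idea follows (synthetic template and record rather than real ECG): slide the template across the record and take the lag minimizing the sum of squared amplitude differences as the fiducial point.

```python
# Least-squares amplitude-difference alignment on synthetic signals.
import numpy as np

def lsad_align(record, template):
    """Return the lag minimising sum((record_segment - template)^2)."""
    m = len(template)
    costs = [np.sum((record[k:k + m] - template) ** 2)
             for k in range(len(record) - m + 1)]
    return int(np.argmin(costs))

rng = np.random.default_rng(2)
beat = np.sin(np.linspace(0, np.pi, 50))            # template beat
record = np.concatenate([np.zeros(30), beat, np.zeros(20)])
record += 0.05 * rng.standard_normal(record.size)   # additive noise
print(lsad_align(record, beat))                     # approx 30
```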

  3. Lossless compression of hyperspectral images using conventional recursive least-squares predictor with adaptive prediction bands

    Science.gov (United States)

    Gao, Fang; Guo, Shuxu

    2016-01-01

    An efficient lossless compression scheme for hyperspectral images using a conventional recursive least-squares (CRLS) predictor with adaptive prediction bands is proposed. The proposed scheme first calculates the preliminary estimates to form the input vector of the CRLS predictor. Then the number of bands used in prediction is adaptively selected by an exhaustive search for the number that minimizes the prediction residual. Finally, after prediction, the prediction residuals are sent to an adaptive arithmetic coder. Experiments on the newer airborne visible/infrared imaging spectrometer (AVIRIS) images in the consultative committee for space data systems (CCSDS) test set show that the proposed scheme yields an average compression performance of 3.29 bits/pixel, 5.57 bits/pixel, and 2.44 bits/pixel on the 16-bit calibrated images, the 16-bit uncalibrated images, and the 12-bit uncalibrated images, respectively. Experimental results demonstrate that the proposed scheme obtains compression results very close to clustered differential pulse code modulation with adaptive prediction length, which achieves the best lossless compression performance for AVIRIS images in the CCSDS test set, and outperforms other current state-of-the-art schemes with relatively low computational complexity.

  4. Least Squares Magnetic-Field Optimization for Portable Nuclear Magnetic Resonance Magnet Design

    International Nuclear Information System (INIS)

    Single-sided and mobile nuclear magnetic resonance (NMR) sensors have the advantages of portability, low cost, and low power consumption compared to conventional high-field NMR and magnetic resonance imaging (MRI) systems. We present fast, flexible, and easy-to-implement target field algorithms for mobile NMR and MRI magnet design. The optimization finds a global optimum in a cost function that minimizes the error in the target magnetic field in the sense of least squares. When the technique is tested on a ring array of permanent-magnet elements, the solution matches the classical dipole Halbach solution. For a single-sided handheld NMR sensor, the algorithm yields a 640 G field homogeneous to 16,100 ppm across a 1.9 cc volume located 1.5 cm above the top of the magnets and homogeneous to 32,200 ppm over a 7.6 cc volume. This regime is adequate for MRI applications. We demonstrate that the homogeneous region can be continuously moved away from the sensor by rotating magnet rod elements, opening the way for NMR sensors with adjustable 'sensitive volumes'
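
    A heavily simplified sketch of the least-squares target-field idea follows (an invented 1-D geometry with a crude 1/r^3 dipole falloff standing in for a real field model, and free element strengths in place of the rotated fixed-strength rods actually used):

```python
# Target-field magnet design as a linear least-squares problem (toy model).
import numpy as np

# magnet rod positions along a line, target points above them
elem_x = np.linspace(-3.0, 3.0, 12)
targ_x = np.linspace(-0.5, 0.5, 50)
z = 1.5                                          # standoff distance (invented)
# contribution of a unit-strength element at each target point, ~ 1/r^3
r = np.sqrt((targ_x[:, None] - elem_x[None, :]) ** 2 + z ** 2)
A = 1.0 / r ** 3
b = np.full(len(targ_x), 640.0)                  # uniform 640 G target field

m, *_ = np.linalg.lstsq(A, b, rcond=None)        # element strengths
ppm = (A @ m - b) / 640.0 * 1e6
print(float(np.max(np.abs(ppm))), "ppm worst-case deviation")
```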

  5. A weighted least squares analysis of globalization and the Nigerian stock market performance

    Directory of Open Access Journals (Sweden)

    Alenoghena Osi Raymond

    2013-12-01

    Full Text Available The study empirically investigates the impact of globalization on the performance of the Nigerian stock market. The study seeks to verify the existence of a linking mechanism between globalization, through trade openness, net inflow of capital, participation in the international capital market and financial development, and stock market performance over the period 1981 to 2011. The methodology adopted examines the stochastic characteristics of each time series by testing their stationarity using the Im, Pesaran and Shin W-stat test. The weighted least squares regression method was employed to ascertain the different levels of impact on the above subject matter. The findings were reinforced by the presence of a long-term equilibrium relationship, as evidenced by the cointegrating equation of the VECM. The model ascertained that the globalization variables positively impacted stock market performance. The findings further reveal that net capital inflows and participation in the international capital market had the greater impact on Nigerian stock market performance during the period under review. Accordingly, it is advised that in formulating foreign policy, policy makers should take strategic views on the international economy and make new creative policies that will foster economic integration between Nigeria and its existing trade allies. These creative policies will also help create avenues for making new trade agreements with other nations of the world which hitherto were not trade partners with Nigeria.

  6. A least-squares finite-element method for the simulation of free-surface flows

    International Nuclear Information System (INIS)

    This paper presents simulations of free-surface flows involving two fluids (air and water) by the least-squares finite-element method. The motion of both fluids is governed by the two-dimensional Navier-Stokes equations in velocity-pressure-vorticity form. The free surface, or moving interface, is treated as the surface of density discontinuity between gas and liquid. A field variable is used to represent the fractional volume of both fluids so that the profile and position of the interface can be calculated accurately. For the time-dependent nonlinear equations, iteration with linearization is performed within each time-step. An element-by-element conjugate gradient method is applied to solve the discretized systems. The model is validated against experimental measurements of the dam-break problem. The simulations of free-surface surges through a sluice gate and over a free fall show encouraging results for representing complicated free-surface profiles, especially the simulated vortices distributed in the circulation zone. The model also has a strong ability to simulate practical engineering problems with complex geometry. Refs. 3 (author)

  7. A generalization of variable elimination for separable inverse problems beyond least squares

    International Nuclear Information System (INIS)

    In linear inverse problems, we have data derived from a noisy linear transformation of some unknown parameters, and we wish to estimate these unknowns from the data. Separable inverse problems are a powerful generalization in which the transformation itself depends on additional unknown parameters and we wish to determine both sets of parameters simultaneously. When separable problems are solved by optimization, convergence can often be accelerated by elimination of the linear variables, a strategy which appears most prominently in the variable projection methods due to Golub and Pereyra. Existing variable elimination methods require an explicit formula for the optimal value of the linear variables, so they cannot be used in problems with Poisson likelihoods, bound constraints, or other important departures from least squares. To address this limitation, we propose a generalization of variable elimination in which standard optimization methods are modified to behave as though a variable has been eliminated. We verify that this approach is a proper generalization by using it to re-derive several existing variable elimination techniques. We then extend the approach to bound-constrained and Poissonian problems, showing in the process that many of the best features of variable elimination methods can be duplicated in our framework. Tests on difficult exponential sum fitting and blind deconvolution problems indicate that the proposed approach can have significant speed and robustness advantages over standard methods. (paper)
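
    The classical least-squares case that this work generalizes is easy to sketch: in an exponential-sum fit, the linear amplitudes are eliminated by an inner linear solve, so the outer optimizer sees only the nonlinear rates. A toy example (synthetic data; Nelder-Mead as an arbitrary outer optimizer):

```python
# Variable projection on an exponential-sum fit (toy, least-squares case).
import numpy as np
from scipy.optimize import minimize

t = np.linspace(0, 4, 200)
y = 2.0 * np.exp(-1.0 * t) + 0.5 * np.exp(-3.0 * t)   # synthetic data

def projected_residual(rates):
    Phi = np.exp(-np.outer(t, rates))            # design matrix for fixed rates
    c, *_ = np.linalg.lstsq(Phi, y, rcond=None)  # eliminated linear variables
    r = y - Phi @ c
    return 0.5 * r @ r

res = minimize(projected_residual, x0=np.array([0.5, 2.0]),
               method="Nelder-Mead")
rates = res.x
Phi = np.exp(-np.outer(t, rates))
amps, *_ = np.linalg.lstsq(Phi, y, rcond=None)
print(rates, amps)    # approx rates (1, 3), amplitudes (2, 0.5)
```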

  8. Estimation of active pharmaceutical ingredients content using locally weighted partial least squares and statistical wavelength selection.

    Science.gov (United States)

    Kim, Sanghong; Kano, Manabu; Nakagawa, Hiroshi; Hasebe, Shinji

    2011-12-15

    Development of quality estimation models using near infrared spectroscopy (NIRS) and multivariate analysis has been accelerated as a process analytical technology (PAT) tool in the pharmaceutical industry. Although linear regression methods such as partial least squares (PLS) are widely used, they cannot always achieve high estimation accuracy because physical and chemical properties of a measuring object have a complex effect on NIR spectra. In this research, locally weighted PLS (LW-PLS) which utilizes a newly defined similarity between samples is proposed to estimate active pharmaceutical ingredient (API) content in granules for tableting. In addition, a statistical wavelength selection method which quantifies the effect of API content and other factors on NIR spectra is proposed. LW-PLS and the proposed wavelength selection method were applied to real process data provided by Daiichi Sankyo Co., Ltd., and the estimation accuracy was improved by 38.6% in root mean square error of prediction (RMSEP) compared to the conventional PLS using wavelengths selected on the basis of variable importance on the projection (VIP). The results clearly show that the proposed calibration modeling technique is useful for API content estimation and is superior to the conventional one. PMID:22001843

  9. Attenuation compensation for least-squares reverse time migration using the viscoacoustic-wave equation

    KAUST Repository

    Dutta, Gaurav

    2014-10-01

    Strong subsurface attenuation leads to distortion of the amplitudes and phases of seismic waves propagating inside the earth. Conventional acoustic reverse time migration (RTM) and least-squares reverse time migration (LSRTM) do not account for this distortion, which can lead to defocusing of migration images in highly attenuative geologic environments. To correct for this distortion, we used a linearized inversion method, denoted as Qp-LSRTM. During the least-squares iterations, we used a linearized viscoacoustic modeling operator for forward modeling. The adjoint equations were derived using the adjoint-state method for back propagating the residual wavefields. The merit of this approach compared with conventional RTM and LSRTM was that Qp-LSRTM compensated for the amplitude loss due to attenuation and could produce images with better balanced amplitudes and more resolution below highly attenuative layers. Numerical tests on synthetic and field data illustrated the advantages of Qp-LSRTM over RTM and LSRTM when the recorded data had strong attenuation effects. Similar to standard LSRTM, the sensitivity tests for background velocity and Qp errors revealed that the liability of this method is the requirement for smooth and accurate migration velocity and attenuation models.

  10. Least-squares reverse-time migration with cost-effective computation and memory storage

    Science.gov (United States)

    Liu, Xuejian; Liu, Yike; Huang, Xiaogang; Li, Peng

    2016-06-01

    Least-squares reverse-time migration (LSRTM), which involves several iterations of reverse-time migration (RTM) and Born modeling procedures, can provide subsurface images with better balanced amplitudes, higher resolution and fewer artifacts than standard migration. However, the same source wavefield is repetitively computed during the Born modeling and RTM procedures of different iterations. We developed a new LSRTM method with modified excitation-amplitude imaging conditions, where the source wavefield for RTM is forward propagated only once while the maximum amplitude and its excitation time at each grid point are stored. The RTM procedure of each iteration then involves only: (1) backward propagation of the residual between the Born-modeled and acquired data, and (2) implementation of the modified excitation-amplitude imaging condition by multiplying the maximum amplitude by the back-propagated data residuals, only at the grid points that satisfy the imaging time at each time-step. For a complex model, two or three local peak amplitudes and corresponding traveltimes should be identified and stored for all grid points so that multi-arrival information of the source wavefield can be utilized for imaging. Numerical experiments on a three-layer model and the Marmousi2 model demonstrate that the proposed LSRTM method yields large savings in computation and memory cost.

  11. A least-squares parameter estimation algorithm for switched hammerstein systems with applications to the VOR

    Science.gov (United States)

    Kukreja, Sunil L.; Kearney, Robert E.; Galiana, Henrietta L.

    2005-01-01

    A "Multimode" or "switched" system is one that switches between various modes of operation. When a switch occurs from one mode to another, a discontinuity may result followed by a smooth evolution under the new regime. Characterizing the switching behavior of these systems is not well understood and, therefore, identification of multimode systems typically requires a preprocessing step to classify the observed data according to a mode of operation. A further consequence of the switched nature of these systems is that data available for parameter estimation of any subsystem may be inadequate. As such, identification and parameter estimation of multimode systems remains an unresolved problem. In this paper, we 1) show that the NARMAX model structure can be used to describe the impulsive-smooth behavior of switched systems, 2) propose a modified extended least squares (MELS) algorithm to estimate the coefficients of such models, and 3) demonstrate its applicability to simulated and real data from the Vestibulo-Ocular Reflex (VOR). The approach will also allow the identification of other nonlinear bio-systems, suspected of containing "hard" nonlinearities.

  12. A New Least Squares Support Vector Machines Ensemble Model for Aero Engine Performance Parameter Chaotic Prediction

    Directory of Open Access Journals (Sweden)

    Dangdang Du

    2016-01-01

    Full Text Available Aiming at the nonlinearity, chaos, and small sample size of aero engine performance parameter data, a new ensemble model, named the least squares support vector machine (LSSVM) ensemble model with phase space reconstruction (PSR) and particle swarm optimization (PSO), is presented. First, to guarantee the diversity of individual members, different single-kernel LSSVMs are selected as base predictors, and they output the primary prediction results independently. Then, all the primary prediction results are integrated to produce the most appropriate prediction results by another particular LSSVM, a multiple-kernel LSSVM, which reduces the dependence of modeling accuracy on the kernel function and its parameters. Phase space reconstruction theory is applied to extract the chaotic characteristics of the input data source and reconstruct the data sample, and the particle swarm optimization algorithm is used to obtain the best LSSVM individual members. A case study with real operation data of an aero engine is employed to verify the effectiveness of the presented model. The results show that the prediction accuracy of the proposed model improves noticeably compared with the other three models.

  13. Relationship of Fiber Properties to Vortex Yarn Quality via Partial Least Squares

    Directory of Open Access Journals (Sweden)

    Calvin Price

    2009-12-01

    Full Text Available The Cotton Quality Research Station (CQRS) of the USDA-ARS recently completed a comprehensive study of the relationship of cotton fiber properties to the quality of spun yarn. The five-year study, begun in 2001, utilized commercial variety cotton grown, harvested and ginned in each of three major growing regions in the US (Georgia, Mississippi, and Texas). CQRS made extensive measurements of the raw cotton properties (both physical and chemical) of 154 lots of blended cotton. These lots were then spun into yarn in the CQRS laboratory by vortex spinning, with several characteristics of the yarn and spinning efficiency measured for each lot. This study examines the use of a multivariate statistical method, partial least squares (PLS), to relate fiber properties to spun yarn quality for vortex spinning. Two different sets of predictors were used to forecast yarn quality response variables: one set being only HVI™ variables, and the second set consisting of both HVI™ and AFIS™ variables. The quality of predictions was not found to significantly change with the addition of AFIS™ variables.

  14. Denoising spectroscopic data by means of the improved Least-Squares Deconvolution method

    CERN Document Server

    Tkachenko, A; Tsymbal, V; Aerts, C; Kochukhov, O; Debosscher, J

    2013-01-01

    The MOST, CoRoT, and Kepler space missions led to the discovery of a large number of intriguing, and in some cases unique, objects, among which are pulsating stars, stars hosting exoplanets, binaries, etc. Although the space missions deliver photometric data of unprecedented quality, these data lack any spectral information, and we are still in need of ground-based spectroscopic and/or multicolour photometric follow-up observations for a solid interpretation. Both the faintness of most of the observed stars and the required high S/N of spectroscopic data imply the need for large telescopes, access to which is limited. In this paper, we look for an alternative and aim at the development of a technique for denoising originally low-S/N spectroscopic data, making observations of faint targets with small telescopes possible and effective. We present a generalization of the original Least-Squares Deconvolution (LSD) method by implementing a multicomponent average profile and a line strengths corre...

  15. Globally Conservative, Hybrid Self-Adjoint Angular Flux and Least-Squares Method Compatible with Void

    CERN Document Server

    Laboure, Vincent M; Wang, Yaqi

    2016-01-01

    In this paper, we derive a method for the second-order form of the transport equation that is both globally conservative and compatible with voids, using Continuous Finite Element Methods (CFEM). The main idea is to use the Least-Squares (LS) form of the transport equation in the void regions and the Self-Adjoint Angular Flux (SAAF) form elsewhere. While the SAAF formulation is globally conservative, the LS formulation needs a correction in void. The price to pay for this fix is the loss of symmetry of the bilinear form. We first derive this Conservative LS (CLS) formulation in void. Second, we combine the SAAF and CLS forms and end up with a hybrid SAAF-CLS method having the desired properties. We show that extending the theory to near-void regions is a minor complication and can be done without affecting the global conservation of the scheme. Being angular-discretization agnostic, this method can be applied to both discrete ordinates (SN) and spherical harmonics (PN) methods. However, since a globally conse...

  16. Comparison of two terrain extraction algorithms: hierarchical relaxation correlation and global least squares matching

    Science.gov (United States)

    Hermanson, Greg A.; Hinchman, John H.; Rauhala, Urho A.; Mueller, Walter J.

    1993-09-01

    Automated extraction of elevation data from stereo images requires automated image registration followed by photogrammetric mapping into a Digital Elevation Model (DEM). The Digital Production System (DPS) Data Extraction Segment (DE/S) of the Defense Mapping Agency (DMA) currently uses an image pyramid registration technique known as Hierarchical Relaxation Correlation (HRC) to perform Automated Terrain Extraction (ATE). Under an internal research and development project, GDE Systems has developed the Global Least Squares Matching (GLSM) technique of nonlinear estimation, which requires a simultaneous array algebra solution of a dense DEM as part of the matching process. This paper focuses on traditional low-density DEM production, where the coarse-to-fine process of HRC and GLSM is stopped at lower image resolutions once the required DEM quality is reached. Tests were made comparing the HRC and GLSM results at various image resolutions against carefully edited and averaged check points from four cartographers using 1:40,000 and 1:80,000 softcopy stereo models. The results show that both HRC and GLSM far exceed the traditional mapping standard, allowing economic use of lower-resolution source images. GLSM allowed up to five times lower image resolution than HRC, producing acceptable contour plots with no manual editing from 1:40,000-1:80,000 softcopy stereo models versus the traditional DEM collection from a 1:40,000 analytical stereo model.

  17. POSITIONING BASED ON INTEGRATION OF MUTI-SENSOR SYSTEMS USING KALMAN FILTER AND LEAST SQUARE ADJUSTMENT

    Directory of Open Access Journals (Sweden)

    M. Omidalizarandi

    2013-09-01

    Full Text Available Sensor fusion is the combination of sensor data from different sources in order to build a more accurate model. In this research, different sensors (Optical Speed Sensor, Bosch Sensor, Odometer, XSENS, Silicon and GPS receiver) have been utilized to obtain different kinds of datasets, to implement the multi-sensor system and to compare the accuracy of each sensor with the others. The scope of this research is to estimate the current position and orientation of the van. The van's position can also be estimated by integrating its velocity and direction over time. To make these components work together, an interface is needed that can bridge them in a data acquisition module. The interface in this research has been developed in the Labview software environment. Data are transferred to the PC via an A/D converter (LabJack). In order to synchronize all the sensors, the calibration parameters of each sensor are determined in a preparatory step. Each sensor delivers results in a sensor-specific coordinate system, with a different location on the object, a different definition of the coordinate axes, and different dimensions and units. Different test scenarios (straight-line approach and circle approach) with different algorithms (Kalman filter, least squares adjustment) have been examined and the results of the different approaches are compared.

  18. Gemini Planet Imager Observational Calibrations IX: Least-Squares Inversion Flux Extraction

    CERN Document Server

    Draper, Zachary H; Wolff, Schuyler; Perrin, Marshall; Ingraham, Patrick; Ruffio, Jean-Baptiste; Rantakyrö, Fredrik T; Hartung, Markus; Goodsell, Stephen J

    2014-01-01

    The Gemini Planet Imager (GPI) is an instrument designed to directly image planets and circumstellar disks from 0.9 to 2.5 microns (the YJHK infrared bands) using high-contrast adaptive optics with a lenslet-based integral field spectrograph. We develop an extraction algorithm based on a least-squares method to disentangle the spectra and systematic noise contributions simultaneously. We utilize two approaches to adjust for the effect of flexure of the GPI optics, which moves the position of light incident on the detector. The first method is to iterate the extraction to achieve minimum residual, and the second is to cross-correlate the detector image with a model image in iterative extraction steps to determine an offset. Thus far, this process has made clear qualitative improvements to the cube extraction by reducing the Moiré pattern. There are also improvements to the automated routines for finding flexure offsets, which are now reliable to within ~0.5 pixel, compared to single-pixel accuracy previously. Fur...

  19. MANUFACTURING AND CONTINUOUS IMPROVEMENT AREAS USING PARTIAL LEAST SQUARE PATH MODELING WITH MULTIPLE REGRESSION COMPARISON

    Directory of Open Access Journals (Sweden)

    Carlos Monge Perry

    2014-07-01

    Full Text Available Structural equation modeling (SEM) has traditionally been deployed in areas of marketing, consumer satisfaction and preferences, human behavior, and recently in strategic planning. These areas are considered its niches; however, there is a remarkable tendency in empirical research studies toward a more diversified use of the technique. This paper shows the application of structural equation modeling using partial least squares (PLS-SEM) in areas of manufacturing, quality, continuous improvement, operational efficiency, and environmental responsibility in Mexico's medium and large manufacturing plants, while using a small sample (n = 40). The results obtained from the PLS-SEM model application are highly positive, relevant, and statistically significant. Also shown in this paper, for purposes of confirming the validity, reliability, and statistical power of PLS-SEM, is a comparative analysis against multiple regression showing very similar results to those obtained by PLS-SEM. This fact validates the use of PLS-SEM in nontraditional areas of scientific research, and suggests and invites the use of the technique in diversified fields of scientific research.

  20. Solution of shallow-water equations using least-squares finite-element method

    Institute of Scientific and Technical Information of China (English)

    Shin-Jye Liang; Jyh-Haw Tang; Ming-Shun Wu

    2008-01-01

    A least-squares finite-element method (LSFEM) for the non-conservative shallow-water equations is presented. The model is capable of handling complex topography, steady and unsteady flows, subcritical and supercritical flows, and flows with smooth and sharp gradient changes. Advantages of the model include: (1) source terms, such as the bottom slope, surface stresses and bed friction, can be treated easily without any special treatment; (2) no upwind scheme is needed; (3) a single approximating space can be used for all variables, and its choice is not subject to the Ladyzhenskaya-Babuska-Brezzi (LBB) condition; and (4) the resulting system of equations is symmetric and positive-definite (SPD), which can be solved efficiently with the preconditioned conjugate gradient method. The model is verified with flow over a bump, tide-induced flow, and dam-break. Computed results are compared with analytic solutions or other numerical results, and show the model is conservative and accurate. The model is then used to simulate flow past a circular cylinder. Important flow characteristics, such as the variation of the water surface around the cylinder and vortex shedding behind the cylinder, are investigated. Computed results compare well with experimental data and other numerical results.

  2. Time series online prediction algorithm based on least squares support vector machine

    Institute of Scientific and Technical Information of China (English)

    WU Qiong; LIU Wen-ying; YANG Yi-han

    2007-01-01

    Deficiencies in applying the traditional least squares support vector machine (LS-SVM) to time series online prediction were identified. Based on the properties of the kernel function matrix and using the recursive calculation of block matrices, a new time series online prediction algorithm based on an improved LS-SVM was proposed. The historical training results are fully utilized and the computing speed of the LS-SVM is enhanced. The improved algorithm was then applied to time series online prediction. Based on operational data provided by the Northwest Power Grid of China, the method was used in transient stability prediction of the electric power system. The results show that, compared with the calculation time of the traditional LS-SVM (75-1600 ms), that of the proposed method in different time windows is 40-60 ms, and the prediction accuracy (normalized root mean squared error) of the proposed method is above 0.8. The improved method is thus better than the traditional LS-SVM and more suitable for time series online prediction.

  3. A Least Squares Collocation Method for Accuracy Improvement of Mobile LiDAR Systems

    Directory of Open Access Journals (Sweden)

    Qingzhou Mao

    2015-06-01

    Full Text Available In environments that are hostile to Global Navigation Satellite Systems (GNSS), the precision achieved by a mobile light detection and ranging (LiDAR) system (MLS) can deteriorate into the sub-meter or even the meter range due to errors in the positioning and orientation system (POS). This paper proposes a novel least squares collocation (LSC)-based method to improve the accuracy of the MLS in these hostile environments. Through a thorough consideration of the characteristics of POS errors, the proposed LSC-based method effectively corrects these errors using LiDAR control points, thereby improving the accuracy of the MLS. The method is also applied to the calibration of misalignment between the laser scanner and the POS. Several datasets from different scenarios have been adopted in order to evaluate the effectiveness of the proposed method. The experimental results indicate that this method significantly improves the accuracy of the MLS in GNSS-hostile environments and is also effective for the calibration of misalignment.

  4. LP Norm SAR Tomography by Iteratively Reweighted Least Squares: First Results on Hong Kong

    Science.gov (United States)

    Mancon, Simone; Tebaldini, Stefano; Monti Guarnieri, Andre

    2014-11-01

    Synthetic aperture radar tomography (TomoSAR) is the natural extension to 3-D of conventional 2-D synthetic aperture radar (SAR) imaging. In this work, we focus on urban scenarios where targets of interest are point-like and radiometrically strong, i.e. the reflectivity profile in elevation is sparse. Accordingly, the method for TomoSAR imaging suggested in this work is based on compressive sensing (CS) theory. CS problems are typically solved by looking for the minimal solution in some Lp norm, where 0 ≤ p ≤ 1. The solution that minimizes an arbitrary Lp norm can be obtained using the Iteratively Reweighted Least Squares (IRLS) algorithm. Based on an experimental comparison among different choices for p, the conclusion drawn is that the usual choice p = 1 is the best trade-off between resolution and robustness to noise. Results from real data are discussed by reporting a TomoSAR reconstruction of an area in Hong Kong (China) acquired by COSMO-SkyMed.
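
    A minimal sketch of an IRLS scheme for minimum-Lp-norm recovery follows (epsilon-smoothed, FOCUSS-style reweighting on a toy random system rather than SAR data):

```python
# IRLS for approx. argmin ||x||_p subject to A x = b (underdetermined).
import numpy as np

def irls_lp(A, b, p=1.0, iters=50, eps=1e-6):
    """Each pass solves the weighted minimum-norm problem
    x = W A^T (A W A^T)^-1 b with W = diag(|x|^(2-p) + eps),
    so that sum w_i^-1 x_i^2 approximates ||x||_p^p."""
    x = np.linalg.lstsq(A, b, rcond=None)[0]    # start from min-L2 solution
    for _ in range(iters):
        w = np.abs(x) ** (2.0 - p) + eps
        AW = A * w                              # A @ diag(w)
        x = w * (A.T @ np.linalg.solve(AW @ A.T, b))
    return x

rng = np.random.default_rng(3)
A = rng.standard_normal((20, 60))
x_true = np.zeros(60)
x_true[[5, 17, 42]] = [1.0, -2.0, 0.5]          # sparse elevation profile
b = A @ x_true
x_hat = irls_lp(A, b, p=1.0)
print(np.round(x_hat[[5, 17, 42]], 3))          # recovers the three spikes
```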

  5. Prediction of olive oil sensory descriptors using instrumental data fusion and partial least squares (PLS) regression.

    Science.gov (United States)

    Borràs, Eva; Ferré, Joan; Boqué, Ricard; Mestres, Montserrat; Aceña, Laura; Calvo, Angels; Busto, Olga

    2016-08-01

    Headspace-Mass Spectrometry (HS-MS), Fourier Transform Mid-Infrared spectroscopy (FT-MIR) and UV-Visible spectrophotometry (UV-vis) instrumental responses have been combined to predict virgin olive oil sensory descriptors. 343 olive oil samples analyzed during four consecutive harvests (2010-2014) were used to build multivariate calibration models using partial least squares (PLS) regression. The reference values of the sensory attributes were provided by expert assessors from an official taste panel. The instrumental data were modeled individually and also using data fusion approaches. The use of fused data with both low- and mid-level abstraction improved the PLS predictions for all the olive oil descriptors. The best PLS models were obtained for two positive attributes (fruity and bitter) and two defective descriptors (fusty and musty), all of them using data fusion of MS and MIR spectral fingerprints. Although good predictions were not obtained for some sensory descriptors, the results are encouraging, especially considering that the legal categorization of virgin olive oils only requires the determination of fruity and defective descriptors. PMID:27216664

  6. An efficient recursive least square-based condition monitoring approach for a rail vehicle suspension system

    Science.gov (United States)

    Liu, X. Y.; Alfi, S.; Bruni, S.

    2016-06-01

    A model-based condition monitoring strategy for the railway vehicle suspension is proposed in this paper. This approach is based on the recursive least squares (RLS) algorithm, focusing on a deterministic 'input-output' model. RLS has a Kalman filtering feature and is able to identify the unknown parameters of a noisy dynamic system by memorising the correlation properties of the variables. The identification of suspension parameters is achieved by learning the relationship between excitation and response in the vehicle dynamic system. A fault detection method for the vertical primary suspension is illustrated as an instance of this condition monitoring scheme. Simulation results from the rail vehicle dynamics software 'ADTreS' are utilised as 'virtual measurements', considering a trailer car of the Italian ETR500 high-speed train. Field test data from an E464 locomotive are also employed to validate the feasibility of this strategy for real applications. Results of the parameter identification indicate that the estimated suspension parameters are consistent with, or close to, the reference values. These results provide supporting evidence that this fault diagnosis technique is capable of paving the way for future vehicle condition monitoring systems.
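
    A generic recursive least squares identifier with a forgetting factor is sketched below to make the 'input-output' estimation step concrete; the regressor construction for a real suspension model is problem-specific, and the toy two-parameter system is an assumption for the example.

    ```python
    # Generic RLS parameter identifier for y_k = phi_k @ theta + noise.
    import numpy as np

    def rls(phi_stream, y_stream, n_params, lam=0.995, delta=1e3):
        """Recursively estimate theta; lam is the forgetting factor."""
        theta = np.zeros(n_params)
        P = delta * np.eye(n_params)                   # large initial covariance
        for phi, y in zip(phi_stream, y_stream):
            k = P @ phi / (lam + phi @ P @ phi)        # gain vector
            theta = theta + k * (y - phi @ theta)      # innovation update
            P = (P - np.outer(k, phi) @ P) / lam       # covariance update
        return theta

    # Toy system: y = 2*u1 - 0.5*u2, identified from noisy samples
    rng = np.random.default_rng(2)
    Phi = rng.standard_normal((500, 2))
    Y = Phi @ np.array([2.0, -0.5]) + 0.01 * rng.standard_normal(500)
    print(rls(Phi, Y, n_params=2))                     # -> approx [ 2.  -0.5]
    ```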

  7. Bias correction for the least squares estimator of Weibull shape parameter with complete and censored data

    Energy Technology Data Exchange (ETDEWEB)

    Zhang, L.F. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore); Xie, M. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore)]. E-mail: mxie@nus.edu.sg; Tang, L.C. [Department of Industrial and Systems Engineering, National University of Singapore, 10 Kent Ridge Crescent, Singapore 119260 (Singapore)

    2006-08-15

    Estimation of the Weibull shape parameter is important in reliability engineering. However, commonly used methods such as maximum likelihood estimation (MLE) and least squares estimation (LSE) are known to be biased. Bias correction methods for MLE have been studied in the literature. This paper investigates methods for bias correction when model parameters are estimated with LSE based on the probability plot. The Weibull probability plot is very simple and commonly used by practitioners, so such a study is useful. The bias of the LS shape parameter estimator for multiply censored data is also examined. It is found that the bias can be modeled as a function of the sample size and the censoring level, and is mainly dependent on the latter. A simple bias function is introduced and bias-correcting formulas are proposed for both complete and censored data. Simulation results are also presented. The proposed bias correction methods are very easy to use and they can typically reduce the bias of the LSE of the shape parameter to less than half a percent.
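
    The following is a minimal sketch of the probability-plot LSE of the shape parameter for complete data, assuming Bernard's median-rank approximation for the plotting positions; the paper's bias-correction formulas themselves are not reproduced here.

    ```python
    # LSE of the Weibull shape parameter from a probability plot (complete data).
    import numpy as np

    def weibull_lse_shape(t):
        """Least squares estimate of the Weibull shape from complete data t."""
        t = np.sort(np.asarray(t))
        n = len(t)
        F = (np.arange(1, n + 1) - 0.3) / (n + 0.4)    # Bernard's median ranks
        x = np.log(t)                                  # plot abscissa
        y = np.log(-np.log(1.0 - F))                   # plot ordinate
        beta, _ = np.polyfit(x, y, 1)                  # slope = shape estimate
        return beta

    rng = np.random.default_rng(3)
    sample = rng.weibull(1.5, size=20) * 100.0         # true shape 1.5, scale 100
    print(weibull_lse_shape(sample))                   # biased for small n
    ```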

  8. A bifurcation identifier for IV-OCT using orthogonal least squares and supervised machine learning.

    Science.gov (United States)

    Macedo, Maysa M G; Guimarães, Welingson V N; Galon, Micheli Z; Takimura, Celso K; Lemos, Pedro A; Gutierrez, Marco Antonio

    2015-12-01

    Intravascular optical coherence tomography (IV-OCT) is an in-vivo imaging modality based on the intravascular introduction of a catheter which provides a view of the inner wall of blood vessels with a spatial resolution of 10-20 μm. Recent studies in IV-OCT have demonstrated the importance of the bifurcation regions. Therefore, the development of an automated tool to classify hundreds of coronary OCT frames as bifurcation or non-bifurcation can be an important step towards improving automated methods for atherosclerotic plaque quantification, stent analysis and co-registration between different modalities. This paper describes a fully automated method to identify IV-OCT frames in bifurcation regions. The method is divided into lumen detection, feature extraction, and classification, providing a lumen area quantification, geometrical features of the cross-sectional lumen and labeled slices. This classification method is a combination of supervised machine learning algorithms and feature selection using orthogonal least squares methods. Training and tests were performed on sets with a maximum of 1460 human coronary OCT frames. The lumen segmentation achieved a mean difference in lumen area of 0.11 mm² compared with manual segmentation, and the AdaBoost classifier presented the best result, reaching an F-measure score of 97.5% using 104 features. PMID:26433615

  9. Passive shimming of a superconducting magnet using the L1-norm regularized least square algorithm

    Science.gov (United States)

    Kong, Xia; Zhu, Minhua; Xia, Ling; Wang, Qiuliang; Li, Yi; Zhu, Xuchen; Liu, Feng; Crozier, Stuart

    2016-02-01

    The uniformity of the static magnetic field B0 is of prime importance for an MRI system. The passive shimming technique is usually applied to improve the uniformity of the static field by optimizing the layout of a series of steel shims. The steel pieces are fixed in drawers in the inner bore of the superconducting magnet, and produce a magnetizing field in the imaging region to compensate for the inhomogeneity of the B0 field. In practice, the total mass of steel used for shimming should be minimized, in addition to the field uniformity requirement, because the presence of steel shims may introduce a thermal stability problem. The passive shimming procedure is typically realized using the linear programming (LP) method. The LP approach, however, is generally slow and also has difficulty balancing the field quality and the total amount of steel used for shimming. In this paper, we have developed a new algorithm that is better able to balance the dual constraints of field uniformity and total shim mass. The least squares method is used to minimize the magnetic field inhomogeneity over the imaging surface, with the total mass of steel being controlled by an L1-norm based constraint. The proposed algorithm has been tested with practical field data, and the results show that, with similar computational cost and mass of shim material, the new algorithm achieves superior field uniformity (43% better for the test case) compared with the conventional linear programming approach.
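
    The L1-regularized least squares formulation can be sketched as follows, solved here with plain ISTA (proximal gradient) rather than whatever solver the authors used; the toy sensitivity matrix and regularization weight are assumptions, and the non-negativity of shim masses is not enforced.

    ```python
    # L1-regularized least squares via ISTA: min 0.5*||A x - b||^2 + lam*||x||_1.
    import numpy as np

    def ista_l1(A, b, lam, n_iter=500):
        L = np.linalg.norm(A, 2) ** 2                  # Lipschitz constant of the gradient
        x = np.zeros(A.shape[1])
        for _ in range(n_iter):
            g = A.T @ (A @ x - b)                      # gradient of the LS term
            z = x - g / L
            x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft threshold
        return x

    # Toy shimming-style problem: few field samples, many candidate shim slots
    rng = np.random.default_rng(4)
    A = rng.standard_normal((30, 120))                 # field produced per unit shim mass
    b = rng.standard_normal(30)                        # measured inhomogeneity to cancel
    x = ista_l1(A, b, lam=0.1)
    print("nonzero shims:", np.count_nonzero(x))       # sparsity from the L1 term
    ```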

  10. Partial least squares regression for predicting economic loss of vegetables caused by acid rain

    Institute of Scientific and Technical Information of China (English)

    WANG Ju; MENG He; DONG De-ming; LI Wei; FANG Chun-sheng

    2009-01-01

    To predict the economic loss of crops caused by acid rain, we used partial least squares (PLS) regression to build a model with a single dependent variable, the economic loss calculated from the decrease in yield, related to the pH value and the levels of Ca2+, NH4+, Na+, K+, Mg2+, SO42-, NO3-, and Cl- in acid rain. We selected vegetables that are sensitive to acid rain as the sample crops, and collected 12 groups of data, of which 8 groups were used for modeling and 4 groups for testing. Using the cross validation method to evaluate the performance of this prediction model indicates that the optimum number of principal components was 3, determined by the minimum of the prediction residual error sum of squares, and that the prediction error of the regression equation ranges from -2.25% to 4.32%. The model predicted that the economic loss of vegetables from acid rain is negatively correlated with pH and the concentrations of NH4+, SO42-, NO3-, and Cl- in the rain, and positively correlated with the concentrations of Ca2+, Na+, K+ and Mg2+. The precision of the model may be improved if the non-linearity of the original data is addressed.

  11. Computing ordinary least-squares parameter estimates for the National Descriptive Model of Mercury in Fish

    Science.gov (United States)

    Donato, David I.

    2013-01-01

    A specialized technique is used to compute weighted ordinary least-squares (OLS) estimates of the parameters of the National Descriptive Model of Mercury in Fish (NDMMF) in less time using less computer memory than general methods. The characteristics of the NDMMF allow the two products X'X and X'y in the normal equations to be filled out in a second or two of computer time during a single pass through the N data observations. As a result, the matrix X does not have to be stored in computer memory and the computationally expensive matrix multiplications generally required to produce X'X and X'y do not have to be carried out. The normal equations may then be solved to determine the best-fit parameters in the OLS sense. The computational solution based on this specialized technique requires O(8p2+16p) bytes of computer memory for p parameters on a machine with 8-byte double-precision numbers. This publication includes a reference implementation of this technique and a Gaussian-elimination solver in preliminary custom software.
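
    A compact sketch of the single-pass idea, under the assumption of a generic weighted OLS problem (the NDMMF design matrix itself is not reproduced): X'X and X'y are accumulated while streaming the observations, so X is never stored; the system is solved here with numpy rather than the publication's Gaussian-elimination solver.

    ```python
    # Single-pass accumulation of the normal equations for weighted OLS.
    import numpy as np

    def stream_ols(rows, p):
        """rows yields (x, y, w) triples; returns weighted OLS estimates."""
        XtX = np.zeros((p, p))                         # O(p^2) memory; X never stored
        Xty = np.zeros(p)
        for x, y, w in rows:
            XtX += w * np.outer(x, x)                  # rank-1 update per observation
            Xty += w * y * x
        return np.linalg.solve(XtX, Xty)               # solve the normal equations

    # Toy stream: y = 1 + 2*x1 with unit weights, consumed one row at a time
    rng = np.random.default_rng(5)
    data = ((np.array([1.0, x]), 1.0 + 2.0 * x + 0.01 * rng.standard_normal(), 1.0)
            for x in rng.standard_normal(1000))
    print(stream_ols(data, p=2))                       # -> approx [1. 2.]
    ```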

  12. Evaluation of milk compositional variables on coagulation properties using partial least squares.

    Science.gov (United States)

    Bland, Julie H; Grandison, Alistair S; Fagan, Colette C

    2015-02-01

    The aim of this study was to investigate the effects of numerous milk compositional factors on milk coagulation properties using Partial Least Squares (PLS). Milk from herds of Jersey and Holstein-Friesian cattle was collected across the year and blended (n=55) to maximise variation in composition and coagulation. The milk was analysed for casein, protein, fat, titratable acidity, lactose, Ca2+, urea content, casein micelle size (CMS), fat globule size, somatic cell count and pH. Milk coagulation properties were defined as coagulation time, curd firmness and curd firmness rate, measured by a controlled strain rheometer. The models derived from PLS had higher predictive power than previous models, demonstrating the value of measuring more milk components. In addition to the well-established relationships with casein and protein levels, CMS and fat globule size were found to have a similarly strong impact on all three models. The study also found a positive impact of fat on milk coagulation properties, and strong relationships between lactose and curd firmness, and between urea and curd firmness rate, all of which warrant further investigation due to the current lack of knowledge of the underlying mechanisms. These findings demonstrate the importance of using a wider range of milk compositional variables for the prediction of milk coagulation properties, and hence as indicators of milk suitability for cheese making. PMID:25287607

  13. Radial Basis Function-Sparse Partial Least Squares for Application to Brain Imaging Data

    Directory of Open Access Journals (Sweden)

    Hisako Yoshida

    2013-01-01

    Full Text Available Magnetic resonance imaging (MRI) data is an invaluable tool in brain morphology research. Here, we propose a novel statistical method for investigating the relationship between clinical characteristics and brain morphology based on three-dimensional MRI data via radial basis function-sparse partial least squares (RBF-sPLS). Our data consisted of MRI image intensities for multi-million voxels in a 3D array along with 73 clinical variables. This dataset represents a suitable application of RBF-sPLS because of potential correlation among voxels as well as among clinical characteristics. Additionally, this method can simultaneously select both effective brain regions and clinical characteristics based on sparse modeling. This is in contrast to existing methods, which consider prespecified brain regions because of the computational difficulties involved in processing high-dimensional data. RBF-sPLS employs dimensionality reduction in order to overcome this obstacle. We have applied RBF-sPLS to a real dataset composed of 102 chronic kidney disease patients, while a comparison study used a simulated dataset. RBF-sPLS identified two brain regions of interest from our patient data: the temporal lobe and the occipital lobe, which are associated with aging and anemia, respectively. Our simulation study suggested that such brain regions are extracted with excellent accuracy using our method.

  14. Least Squares Evaluations for Form and Profile Errors of Ellipse Using Coordinate Data

    Science.gov (United States)

    Liu, Fei; Xu, Guanghua; Liang, Lin; Zhang, Qing; Liu, Dan

    2016-04-01

    To improve the measurement and evaluation of the form error of an elliptic section, an evaluation method based on least squares fitting is investigated to analyze the form and profile errors of an ellipse using coordinate data. Two error indicators for defining ellipticity are discussed, namely the form error and the profile error, and the difference between the two is considered as the main parameter for evaluating the machining quality of surface and profile. Because the form error and the profile error rely on different evaluation benchmarks, the major axis and the foci, rather than the centre of the ellipse, are used as the evaluation benchmarks, which makes it possible to accurately evaluate a tolerance range with the form error and profile error of the workpiece separated. Additionally, an evaluation program based on the LS model is developed to extract the form error and the profile error of the elliptic section, and is well suited to separating the two errors in a standard program. Finally, the evaluation method for the form and profile errors of the ellipse is applied to the measurement of the skirt line of a piston, and the results indicate the effectiveness of the evaluation. This approach provides new evaluation indicators for the measurement of form and profile errors of an ellipse, is found to have better accuracy, and can thus be used to address the difficulty of measuring and evaluating pistons in industrial production.
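
    As a hedged, simplified illustration of least squares ellipse fitting (not the paper's form/profile separation, which uses the major axis and foci as benchmarks), the sketch below fits a general conic to coordinate data, assuming the ellipse does not pass through the origin so the constant term can be normalized to 1.

    ```python
    # Algebraic least squares conic fit: a x^2 + b xy + c y^2 + d x + e y = 1.
    import numpy as np

    def fit_conic_ls(xy):
        x, y = xy[:, 0], xy[:, 1]
        D = np.column_stack([x * x, x * y, y * y, x, y])   # design matrix
        coef, *_ = np.linalg.lstsq(D, np.ones(len(x)), rcond=None)
        return coef                                        # (a, b, c, d, e)

    # Sample a noisy ellipse (semi-axes 3 and 2, centre (1, -0.5)) and fit it
    rng = np.random.default_rng(6)
    t = rng.uniform(0, 2 * np.pi, 200)
    pts = np.column_stack([3 * np.cos(t) + 1, 2 * np.sin(t) - 0.5])
    pts += 0.01 * rng.standard_normal(pts.shape)           # measurement noise
    print(fit_conic_ls(pts).round(3))
    ```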

  15. Kinetic microplate bioassays for relative potency of antibiotics improved by partial Least Square (PLS) regression.

    Science.gov (United States)

    Francisco, Fabiane Lacerda; Saviano, Alessandro Morais; Almeida, Túlia de Souza Botelho; Lourenço, Felipe Rebello

    2016-05-01

    Microbiological assays are widely used to estimate the relative potencies of antibiotics in order to guarantee the efficacy, safety, and quality of drug products. Despite the advantages of turbidimetric bioassays compared to other methods, they have limitations concerning the linearity and range of the dose-response curve determination. Here, we propose to use partial least squares (PLS) regression to overcome these limitations and to improve the prediction of the relative potencies of antibiotics. Kinetic-reading microplate turbidimetric bioassays for apramycin and vancomycin were performed using Escherichia coli (ATCC 8739) and Bacillus subtilis (ATCC 6633), respectively. Microbial growth was measured as absorbance up to 180 and 300 min for the apramycin and vancomycin turbidimetric bioassays, respectively. Conventional dose-response curves (absorbances or area under the microbial growth curve vs. log of antibiotic concentration) showed significant regression, but there was significant deviation from linearity. Thus, they could not be used for relative potency estimations. PLS regression allowed us to construct a predictive model for estimating the relative potencies of apramycin and vancomycin without over-fitting, and it improved the linear range of the turbidimetric bioassay. In addition, PLS regression provided predictions of relative potencies equivalent to those obtained from the official agar diffusion methods. Therefore, we conclude that PLS regression may be used to estimate the relative potencies of antibiotics with significant advantages over conventional dose-response curve determination. PMID:26971814

  16. On the Potential of Least Squares Response Method for the Calibration of Superconducting Gravimeters

    Directory of Open Access Journals (Sweden)

    Mahmoud Abd El-Gelil

    2012-01-01

    Full Text Available One of the most important operating procedures after the installation of a superconducting gravimeter (SG) is its calibration. The calibration process can identify and evaluate possible time variability in the scale factor and in the hardware anti-aliasing filter response. The SG installed in Cantley, Canada is calibrated using two absolute gravimeters, and the data are analysed in the time and frequency domains to estimate the SG scale factor. In the time domain, we use the weighted linear regression method, whereas in the frequency domain we use the least squares response method. Rigorous statistical procedures are applied to define data disturbances, outliers, and realistic data noise levels. Using data from JILA-2 and FG5-236 separately, the scale factor is estimated in the time and frequency domains as −78.374±0.012 μGal/V and −78.403±0.075 μGal/V, respectively. The relative accuracy in the time domain is 0.015%. We cannot identify any significant periodicity in the scale factor. The hardware anti-aliasing filter response is tested by injecting known waves into the control electronics of the system. Results show that the anti-aliasing filter response is stable and conforms to the Global Geodynamics Project standards.

  17. Intelligent Control of a Sensor-Actuator System via Kernelized Least-Squares Policy Iteration

    Directory of Open Access Journals (Sweden)

    Bo Liu

    2012-02-01

    Full Text Available In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data is projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces.

  18. Intelligent control of a sensor-actuator system via kernelized least-squares policy iteration.

    Science.gov (United States)

    Liu, Bo; Chen, Sanfeng; Li, Shuai; Liang, Yongsheng

    2012-01-01

    In this paper a new framework, called Compressive Kernelized Reinforcement Learning (CKRL), for computing near-optimal policies in sequential decision making under uncertainty is proposed, incorporating non-adaptive, data-independent Random Projections and nonparametric Kernelized Least-Squares Policy Iteration (KLSPI). Random Projections are a fast, non-adaptive dimensionality reduction framework in which high-dimensional data is projected onto a random lower-dimensional subspace via spherically random rotation and coordinate sampling. KLSPI introduces the kernel trick into the LSPI framework for Reinforcement Learning, often achieving faster convergence and providing automatic feature selection via various kernel sparsification approaches. In this approach, policies are computed in a low-dimensional subspace generated by projecting the high-dimensional features onto a set of random bases. We first show how Random Projections constitute an efficient sparsification technique and how our method often converges faster than regular LSPI, while at lower computational costs. The theoretical foundation underlying this approach is a fast approximation of the Singular Value Decomposition (SVD). Finally, simulation results are exhibited on benchmark MDP domains, which confirm gains both in computation time and in performance in large feature spaces. PMID:22736969

  19. Multi-classification algorithm and its realization based on least square support vector machine algorithm

    Institute of Scientific and Technical Information of China (English)

    Fan Youping; Chen Yunping; Sun Wansheng; Li Yu

    2005-01-01

    As a new type of learning machine developed on the basis of statistical learning theory, the support vector machine (SVM) plays an important role in knowledge discovery and knowledge updating by constructing a non-linear optimal classifier. However, realizing an SVM requires solving a quadratic programming problem under inequality constraints, which becomes computationally difficult as the number of learning samples grows. Besides, the standard SVM is incapable of tackling multi-classification. To overcome this bottleneck, a training algorithm is presented that converts the quadratic programming problem into the solution of a linear system of equations composed of a group of equality constraints, by adopting the least squares SVM (LS-SVM) and introducing a modifying variable that changes the inequality constraints into equality constraints, which simplifies the calculation. With regard to multi-classification, an LS-SVM applicable to multi-classification is deduced. Finally, the efficiency of the algorithm is checked using the well-known 'circle in square' and 'two-spirals' problems to measure the performance of the classifier.
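
    A minimal binary LS-SVM sketch follows: as the abstract describes, the inequality-constrained quadratic program of the standard SVM is replaced by a single linear system. The RBF kernel, the hyperparameters and the toy XOR data are assumptions for the example; the multi-class extension derived in the paper is omitted.

    ```python
    # Binary LS-SVM classifier: solve [[0, y^T], [y, Omega + I/gamma]] [b; a] = [0; 1].
    import numpy as np

    def lssvm_train(X, y, gamma=10.0, sigma=1.0):
        n = len(y)
        sq = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=2)
        K = np.exp(-sq / (2 * sigma**2))               # RBF kernel matrix
        Omega = np.outer(y, y) * K                     # Omega_ij = y_i y_j K_ij
        Abig = np.zeros((n + 1, n + 1))
        Abig[0, 1:], Abig[1:, 0] = y, y
        Abig[1:, 1:] = Omega + np.eye(n) / gamma       # equality-constraint system
        sol = np.linalg.solve(Abig, np.concatenate([[0.0], np.ones(n)]))
        b, a = sol[0], sol[1:]
        def predict(Z):                                # f(z) = sign(sum a_i y_i K(z,x_i) + b)
            sqz = np.sum((Z[:, None, :] - X[None, :, :]) ** 2, axis=2)
            return np.sign(np.exp(-sqz / (2 * sigma**2)) @ (a * y) + b)
        return predict

    # XOR-like toy problem, not linearly separable
    X = np.array([[0., 0.], [1., 1.], [0., 1.], [1., 0.]])
    y = np.array([1., 1., -1., -1.])
    print(lssvm_train(X, y)(X))                        # -> [ 1.  1. -1. -1.]
    ```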

  20. Partial least squares for efficient models of fecal indicator bacteria on Great Lakes beaches

    Science.gov (United States)

    Brooks, Wesley R.; Fienen, Michael J.; Corsi, Steven R.

    2013-01-01

    At public beaches, it is now common to mitigate the impact of water-borne pathogens by posting a swimmer's advisory when the concentration of fecal indicator bacteria (FIB) exceeds an action threshold. Since culturing the bacteria delays public notification when dangerous conditions exist, regression models are sometimes used to predict the FIB concentration based on readily-available environmental measurements. It is hard to know which environmental parameters are relevant to predicting FIB concentration, and the parameters are usually correlated, which can hurt the predictive power of a regression model. Here the method of partial least squares (PLS) is introduced to automate the regression modeling process. Model selection is reduced to the process of setting a tuning parameter to control the decision threshold that separates predicted exceedances of the standard from predicted non-exceedances. The method is validated by application to four Great Lakes beaches during the summer of 2010. Performance of the PLS models compares favorably to that of the existing state-of-the-art regression models at these four sites.

  1. The Effect of Coherence on Sampling from Matrices with Orthonormal Columns, and Preconditioned Least Squares Problems

    CERN Document Server

    Ipsen, Ilse C F

    2012-01-01

    We consider two strategies for sampling rows from m by n matrices Q with orthonormal columns. The first strategy samples c rows with replacement, while the second one treats each row as an i.i.d. Bernoulli random variable, and samples it with probability γ = c/m. We derive several bounds for the condition numbers of the sampled matrices and express them in terms of the coherence, μ, of Q. In particular, we show that for both sampling strategies the two-norm condition number of the sampled matrix SQ is bounded by sqrt((1+ε)/(1−ε)) with probability at least 1−δ if c ≥ 3mμ ln(2n/δ)/ε². Numerical experiments confirm the accuracy of the bounds, even for small matrix dimensions. We also present algorithms to generate matrices with user-specified coherence, and apply the bounds to the solution of general, full-rank least squares problems with the randomized preconditioner from Blendenpik. A Matlab package, κ(SQ), implements the matrix generation algorithms and the two s...
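
    A small numerical check of the Bernoulli sampling strategy is sketched below, taking coherence to be the largest squared row norm (largest leverage score) of Q, which is one common definition; the dimensions are arbitrary and the experiment is purely illustrative, not the paper's Matlab package.

    ```python
    # Bernoulli row sampling from a matrix with orthonormal columns.
    import numpy as np

    rng = np.random.default_rng(7)
    m, n, c = 2000, 20, 400
    Q, _ = np.linalg.qr(rng.standard_normal((m, n)))   # orthonormal columns
    mu = np.max(np.sum(Q**2, axis=1))                  # coherence: largest squared row norm
    keep = rng.random(m) < c / m                       # keep each row with prob. c/m
    SQ = Q[keep]
    print(f"coherence mu = {mu:.4f}, cond(SQ) = {np.linalg.cond(SQ):.3f}")
    ```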

  2. Eddy current characterization of small cracks using least square support vector machine

    Science.gov (United States)

    Chelabi, M.; Hacib, T.; Le Bihan, Y.; Ikhlef, N.; Boughedda, H.; Mekideche, M. R.

    2016-04-01

    Eddy current (EC) sensors are used for non-destructive testing since they are able to probe conductive materials. Although EC testing is a conventional technique for defect detection and localization, its main weakness is defect characterization, i.e., the exact determination of defect shape and dimensions. In this work, we demonstrate the capability of sizing small cracks using signals acquired from an EC sensor. We report our effort to develop a systematic approach to estimate the size (length and depth) of rectangular, thin defects in a conductive plate. The approach combines a finite element method (FEM) with a statistical learning method, the least squares support vector machine (LS-SVM). First, we use the FEM to model the forward problem. Next, an algorithm is used to build an adaptive database. Finally, the LS-SVM is used to solve the inverse problem, creating polynomial functions able to approximate the correlation between the crack dimensions and the signal picked up by the EC sensor. Several methods are used to find the parameters of the LS-SVM; in this study, particle swarm optimization (PSO) and a genetic algorithm (GA) are proposed for tuning the LS-SVM. The results of the design and the inversions were compared to both simulated and experimental data, and the accuracy was experimentally verified. These results prove the applicability of the presented approach.

  3. Texture discrimination of green tea categories based on least squares support vector machine (LSSVM) classifier

    Science.gov (United States)

    Li, Xiaoli; He, Yong; Qiu, Zhengjun; Wu, Di

    2008-03-01

    This research aimed to develop a multi-spectral imaging technique for green tea category discrimination based on texture analysis. Three key wavelengths of 550, 650 and 800 nm were implemented in a common-aperture multi-spectral charge-coupled device camera, and 190 unique images were acquired for a data set of four different kinds of green tea. An image data set consisting of 15 texture features for each image was generated based on texture analysis techniques, including the grey level co-occurrence method (GLCM) and texture filtering. To optimize the texture features, 5 features that were not correlated with the tea category were eliminated. Unsupervised cluster analysis was conducted on the optimized texture features based on principal component analysis. The cluster analysis showed that the four kinds of green tea could be separated in the space of the first two principal components, although there was overlap among the different kinds of green tea. To enhance the discrimination performance, a least squares support vector machine (LSSVM) classifier was developed based on the optimized texture features. Excellent discrimination performance for samples in the prediction set was obtained, with 100%, 100%, 75% and 100% accuracy for the four kinds of green tea, respectively. It can be concluded that texture discrimination of green tea categories based on multi-spectral image technology is feasible.

  4. Partial least squares prediction of the first hyperpolarizabilities of donor-acceptor polyenic derivatives

    International Nuclear Information System (INIS)

    Graphical abstract: PLS regression equations predict static β values quite well for a large set of donor-acceptor organic molecules, in close agreement with the available experimental data. Highlights: → PLS regression predicts static β values of 35 push-pull organic molecules. → PLS equations show correlation of β with structural-electronic parameters. → PLS regression selects best components of push-bridge-pull nonlinear compounds. → PLS analyses can be routinely used to select novel second-order materials. - Abstract: A partial least squares regression analysis of a large set of donor-acceptor organic molecules was performed to predict the magnitude of their static first hyperpolarizabilities (β's). Polyenes, phenylpolyenes and biphenylpolyenes with augmented chain lengths displayed large β values, in agreement with the available experimental data. The regressors used were the HOMO-LUMO energy gap, the ground-state dipole moment, the HOMO energy AM1 values and the number of π-electrons. The regression equation predicts the static β values for the molecules investigated quite well and can be used to model new organic-based materials with enhanced nonlinear responses.

  5. Multimodal Classification of Mild Cognitive Impairment Based on Partial Least Squares.

    Science.gov (United States)

    Wang, Pingyue; Chen, Kewei; Yao, Li; Hu, Bin; Wu, Xia; Zhang, Jiacai; Ye, Qing; Guo, Xiaojuan

    2016-08-10

    In recent years, increasing attention has been given to the identification of the conversion of mild cognitive impairment (MCI) to Alzheimer's disease (AD). Brain neuroimaging techniques have been widely used to support the classification or prediction of MCI. The present study combined magnetic resonance imaging (MRI), 18F-fluorodeoxyglucose PET (FDG-PET), and 18F-florbetapir PET (florbetapir-PET) to discriminate MCI converters (MCI-c, individuals with MCI who convert to AD) from MCI non-converters (MCI-nc, individuals with MCI who have not converted to AD in the follow-up period) based on the partial least squares (PLS) method. Two types of PLS models (informed PLS and agnostic PLS) were built based on 64 MCI-c and 65 MCI-nc from the Alzheimer's Disease Neuroimaging Initiative (ADNI) database. The results showed that the three-modality informed PLS model achieved better classification accuracy of 81.40%, sensitivity of 79.69%, and specificity of 83.08% compared with the single-modality model, and the three-modality agnostic PLS model also achieved better classification compared with the two-modality model. Moreover, combining the three modalities with the clinical test score (ADAS-cog), the agnostic PLS model (independent data: florbetapir-PET; dependent data: FDG-PET and MRI) achieved optimal accuracy of 86.05%, sensitivity of 81.25%, and specificity of 90.77%. In addition, a comparison of PLS, support vector machine (SVM), and random forest (RF) showed that PLS had greater diagnostic power. These results suggest that our multimodal PLS model has the potential to discriminate MCI-c from MCI-nc and may therefore be helpful in the early diagnosis of AD. PMID:27567818

  6. Exploring Omics data from designed experiments using analysis of variance multiblock Orthogonal Partial Least Squares.

    Science.gov (United States)

    Boccard, Julien; Rudaz, Serge

    2016-05-12

    Many experimental factors may have an impact on chemical or biological systems. A thorough investigation of the potential effects and interactions between the factors is made possible by rationally planning the trials using systematic procedures, i.e., design of experiments. However, assessing factors' influences often remains a challenging task when dealing with hundreds to thousands of correlated variables, whereas only a limited number of samples is available. In that context, most of the existing strategies involve the ANOVA-based partitioning of sources of variation and the separate analysis of ANOVA submatrices using multivariate methods, to account for both the intrinsic characteristics of the data and the study design. However, these approaches lack the ability to summarise the data using a single model and remain somewhat limited for detecting and interpreting subtle perturbations hidden in complex Omics datasets. In the present work, a supervised multiblock algorithm based on the Orthogonal Partial Least Squares (OPLS) framework is proposed for the joint analysis of ANOVA submatrices. This strategy has several advantages: (i) the evaluation of a unique multiblock model accounting for all sources of variation; (ii) the computation of a robust estimator (goodness of fit) for assessing the ANOVA decomposition reliability; (iii) the investigation of an effect-to-residuals ratio to quickly evaluate the relative importance of each effect and (iv) an easy interpretation of the model with appropriate outputs. Case studies from metabolomics and transcriptomics, highlighting the ability of the method to handle Omics data obtained from fixed-effects full factorial designs, are proposed for illustration purposes. Signal variations are easily related to main effects or interaction terms, while relevant biochemical information can be derived from the models. PMID:27114219

  7. Partial least square modeling of hydrolysis: analyzing the impacts of pH and acetate

    Institute of Scientific and Technical Information of China (English)

    LÜ Fan; HE Pin-jing; SHAO Li-ming

    2006-01-01

    pH and volatile fatty acids may both affect the further hydrolysis of particulate solid waste, which is the limiting step of anaerobic digestion. To clarify the individual effects of pH and volatile fatty acids, batch experiments were conducted at fixed pH values (pH 5-9) with or without acetate (20 g/L). The hydrolysis efficiencies of carbohydrate and protein were evaluated by the carbon and nitrogen content of the solids, amylase activity and proteinase activity. The trend of carbohydrate hydrolysis with pH was not affected by the addition of acetate, following the sequence pH 7 > pH 8 > pH 9 > pH 6 > pH 5, but the inhibition by acetate (20 g/L) was obvious, at 10%-60%. The evolution of residual nitrogen showed that the effect of pH on protein hydrolysis was minor, while acetate was seriously inhibitory, especially at alkaline conditions, by 45%-100%. The relationship between the factors (pH and acetate) and the response variables was evaluated by partial least squares (PLS) modeling. The PLS analysis demonstrated that the hydrolysis of carbohydrate was affected by both pH and acetate, with pH the more important factor. Therefore, the inhibition by acetate of carbohydrate hydrolysis was mainly due to the corresponding decline in pH rather than to the presence of acetate species themselves, while acetate species were the dominant factor for the hydrolysis of protein.

  8. Identifying grey matter changes in schizotypy using partial least squares correlation.

    Science.gov (United States)

    Wiebels, Kristina; Waldie, Karen E; Roberts, Reece P; Park, Haeme R P

    2016-08-01

    Neuroimaging research into the brain structure of schizophrenia patients has shown consistent reductions in grey matter volume relative to healthy controls. Examining structural differences in individuals with high levels of schizotypy may help elucidate the course of disorder progression, and provide further support for the schizotypy-schizophrenia continuum. Thus far, the few studies investigating grey matter differences in schizotypy have produced inconsistent results. In the current study, we used a multivariate partial least squares (PLS) approach to clarify the relationship between psychometric schizotypy (measured by the Oxford-Liverpool Inventory of Feelings and Experiences) and grey matter volume in 49 healthy adults. We found a negative association between all schizotypy dimensions and grey matter volume in the frontal and temporal lobes, as well as the insula. We also found a positive association between all schizotypy dimensions and grey matter volume in the parietal and temporal lobes, and in subcortical regions. Further correlational analyses revealed that positive and disorganised schizotypy were strongly associated with key regions (left superior temporal gyrus and insula) most consistently reported to be affected in schizophrenia and schizotypy. We also compared PLS with the typically used General Linear Model (GLM) and demonstrate that PLS can be reliably used as an extension to voxel-based morphometry (VBM) data. This may be particularly valuable for schizotypy research due to PLS' ability to detect small, but reliable effects. Together, the findings indicate that healthy schizotypal individuals exhibit structural changes in regions associated with schizophrenia. This adds to the evidence of an overlap of phenotypic expression between schizotypy and schizophrenia, and may help establish biological endophenotypes for the disorder. PMID:27208815

  9. Multilocus association testing of quantitative traits based on partial least-squares analysis.

    Directory of Open Access Journals (Sweden)

    Feng Zhang

    Full Text Available Because they combine the genetic information of multiple loci, multilocus association studies (MLAS) are expected to be more powerful than single locus association studies (SLAS) in disease gene mapping. However, some researchers found that MLAS had similar or reduced power relative to SLAS, which was partly attributed to the increased degrees of freedom (dfs) in MLAS. Based on partial least-squares (PLS) analysis, we develop a MLAS approach that avoids large dfs in MLAS. In this approach, genotypes are first decomposed into PLS components that not only capture the majority of the genetic information of multiple loci, but are also relevant to the target traits. The extracted PLS components are then regressed on the target traits to detect association under multilinear regression. A simulation study based on real data from the HapMap project was used to assess the performance of our PLS-based MLAS as well as other popular multilinear regression-based MLAS approaches under various scenarios, considering the genetic effects and linkage disequilibrium structure of candidate genetic regions. Using the PLS-based MLAS approach, we conducted a genome-wide MLAS of lean body mass, and compared it with our previous genome-wide SLAS of lean body mass. Simulations and real data analyses support the improved power of our PLS-based MLAS in disease gene mapping relative to the other three MLAS approaches investigated in this study. We aim to provide an effective and powerful MLAS approach, which may help to overcome the limitations of SLAS in disease gene mapping.

  10. Comparison of Kriging and Moving Least Square Methods to Change the Geometry of Human Body Models.

    Science.gov (United States)

    Jolivet, Erwan; Lafon, Yoann; Petit, Philippe; Beillas, Philippe

    2015-11-01

    Finite Element Human Body Models (HBM) have become powerful tools to study the response to impact. However, they are typically only developed for a limited number of sizes and ages. Various approaches driven by control points have been reported in the literature for the non-linear scaling of these HBM into models with different geometrical characteristics. The purpose of this study is to compare the performance of commonly used control-point-based interpolation methods in different usage scenarios. Performance metrics include the respect of targets, the mesh quality and the runnability. For this study, the Kriging and Moving Least Squares interpolation approaches were compared in three test cases. The first two cases correspond to changes of anthropometric dimensions of (1) a child model (from 6 to 1.5 years old) and (2) the GHBMC M50 model (Global Human Body Models Consortium, from 50th percentile male to 5th percentile female). For the third case, the GHBMC M50 ribcage was scaled to match the rib cage geometry derived from a CT-scan. In the first two test cases, all tested methods provided similar shapes with acceptable results in terms of the time needed for the deformation (a few minutes at most), overall respect of the targets, element quality distribution and time step for explicit simulation. The personalization of the rib cage proved to be much more challenging. None of the methods tested provided fully satisfactory results at the level of the rib trajectory and section, and there were corrugated local deformations unless a smooth regression through relaxation was used. Overall, the results highlight the importance of the target definition over the interpolation method. PMID:26660750

  11. Bayesian inference for data assimilation using Least-Squares Finite Element methods

    International Nuclear Information System (INIS)

    It has recently been observed that Least-Squares Finite Element methods (LS-FEMs) can be used to assimilate experimental data into approximations of PDEs in a natural way, as shown by Heyes et al. in the case of incompressible Navier-Stokes flow. The approach was shown to be effective without regularization terms, and can handle substantial noise in the experimental data without filtering. Of great practical importance is that - unlike other data assimilation techniques - it is not significantly more expensive than a single physical simulation. However, the method as presented so far in the literature is not set in the context of an inverse problem framework, so that, for example, the meaning of the final result is unclear. In this paper it is shown that the method can be interpreted as finding a maximum a posteriori (MAP) estimator in a Bayesian approach to data assimilation, with normally distributed observational noise and a Bayesian prior based on an appropriate norm of the governing equations. In this setting the method may be seen to have several desirable properties: most importantly, discretization and modelling error in the simulation code does not affect the solution in the limit of complete experimental information, so these errors do not have to be modelled statistically. The Bayesian interpretation also better justifies the choice of the method, and some useful generalizations become apparent. The technique is applied to incompressible Navier-Stokes flow in a pipe with added velocity data, where its effectiveness, robustness to noise, and application to inverse problems are demonstrated.

  12. Using a partial least squares (PLS) method for estimating cyanobacterial pigments in eutrophic inland waters

    Science.gov (United States)

    Robertson, A. L.; Li, L.; Tedesco, L.; Wilson, J.; Soyeux, E.

    2009-08-01

    Midwestern lakes and reservoirs are commonly exposed to anthropogenic eutrophication. Cyanobacteria thrive in these nutrient-rich waters and some species pose three threats: 1) taste & odor (drinking), 2) toxins (drinking + recreational) and 3) water treatment process disturbance. Managers for drinking water production are interested in the rapid identification of cyanobacterial blooms to minimize effects caused by harmful cyanobacteria. There is potential to monitor cyanobacteria through the remote sensing of two algal pigments: chlorophyll a (CHL) and phycocyanin (PC). Several empirical methods that develop spectral parameters (e.g., simple band ratios) sensitive to these two pigments and map reflectance to pigment concentration have been used in a number of investigations using field-based spectroradiometers. This study tests a multivariate analysis approach, partial least squares (PLS) regression, for the estimation of CHL and PC. PLS models were trained with 35 spectra collected from three central Indiana reservoirs during a 2007 field campaign with dual-headed Ocean Optics USB4000 field spectroradiometers (355-802 nm, nominal 1.0 nm intervals), with the CHL and PC concentrations of the corresponding water samples analyzed at Indiana University-Purdue University Indianapolis. Validation of these models with the 19 remaining spectra shows that PLS (CHL: R2=0.90, slope=0.91, RMSE=20.61 μg/L; PC: R2=0.65, slope=1.15, RMSE=23.04 μg/L) performed equally well to the band tuning model based on Gitelson et al. 2005 (CHL: R2=0.75, slope=0.84, RMSE=40.16 μg/L; PC: R2=0.59, slope=1.14, RMSE=20.24 μg/L).

  13. Detection of epileptic seizure in EEG signals using linear least squares preprocessing.

    Science.gov (United States)

    Roshan Zamir, Z

    2016-09-01

    An epileptic seizure is a transient event of abnormal excessive neuronal discharge in the brain. This unwanted event can be obstructed by detecting the electrical changes in the brain that happen before the seizure takes place. The automatic detection of seizures is necessary since the visual screening of EEG recordings is a time-consuming task and requires experts to improve the diagnosis. Much of the prior research on seizure detection has been based on artificial neural networks, genetic programming, and wavelet transforms. Although the highest achieved classification accuracy is 100%, there are drawbacks, such as the existence of unbalanced datasets and the lack of investigation into the consistency of performance. To address these, four linear least squares-based preprocessing models are proposed to extract key features of an EEG signal in order to detect seizures. The first two models are newly developed. The original signal (EEG) is approximated by a sinusoidal curve. Its amplitude is formed by a polynomial function and compared with the predeveloped spline function. Different statistical measures, namely classification accuracy, true positive and negative rates, false positive and negative rates and precision, are utilised to assess the performance of the proposed models. These metrics are derived from confusion matrices obtained from classifiers. Different classifiers are used over the original dataset and the set of extracted features. The proposed models significantly reduce the dimension of the classification problem and the computational time while the classification accuracy is improved in most cases. The first and third models are promising feature extraction methods with a classification accuracy of 100%. Logistic, LazyIB1, LazyIB5, and J48 are the best classifiers. Their true positive and negative rates are 1 while false positive and negative rates are 0 and the corresponding precision values are 1. Numerical results suggest that these

  14. Phase-space finite elements in a least-squares solution of the transport equation

    International Nuclear Information System (INIS)

    The linear Boltzmann transport equation is solved using a least-squares finite element approximation in the space, angular and energy phase-space variables. The method is applied to both neutral particle transport and also to charged particle transport in the presence of an electric field, where the angular and energy derivative terms are handled with the energy/angular finite elements approximation, in a manner analogous to the way the spatial streaming term is handled. For multi-dimensional problems, a novel approach is used for the angular finite elements: mapping the surface of a unit sphere to a two-dimensional planar region and using a meshing tool to generate a mesh. In this manner, much of the spatial finite-elements machinery can be easily adapted to handle the angular variable. The energy variable and the angular variable for one-dimensional problems make use of edge/beam elements, also building upon the spatial finite elements capabilities. The methods described here can make use of either continuous or discontinuous finite elements in space, angle and/or energy, with the use of continuous finite elements resulting in a smaller problem size and the use of discontinuous finite elements resulting in more accurate solutions for certain types of problems. The work described in this paper makes use of continuous finite elements, so that the resulting linear system is symmetric positive definite and can be solved with a highly efficient parallel preconditioned conjugate gradients algorithm. The phase-space finite elements capability has been built into the Sceptre code and applied to several test problems, including a simple one-dimensional problem with an analytic solution available, a two-dimensional problem with an isolated source term, showing how the method essentially eliminates ray effects encountered with discrete ordinates, and a simple one-dimensional charged-particle transport problem in the presence of an electric field. (authors)

  15. Prediction of Biomass Production and Nutrient Uptake in Land Application Using Partial Least Squares Regression Analysis

    Directory of Open Access Journals (Sweden)

    Vasileios A. Tzanakakis

    2014-12-01

    Full Text Available Partial Least Squares Regression (PLSR) can integrate a great number of variables and overcome collinearity problems, a fact that makes it suitable for intensive agronomical practices such as land application. In the present study a PLSR model was developed to predict important management goals, including biomass production and nutrient recovery (i.e., nitrogen and phosphorus), associated with treatment potential, environmental impacts, and economic benefits. Effluent loading and a considerable number of soil parameters commonly monitored in effluent-irrigated lands were considered as potential predictor variables during the model development. All data were derived from a three-year field trial including plantations of four different plant species (Acacia cyanophylla, Eucalyptus camaldulensis, Populus nigra, and Arundo donax), irrigated with pre-treated domestic effluent. The PLSR method was very effective despite the small sample size and the wide nature of the data set (with many highly correlated inputs and several highly correlated responses). Through the PLSR method the number of initial predictor variables was reduced and only a few variables remained and were included in the final PLSR model. The important input variables retained were: effluent loading, electrical conductivity (EC), available phosphorus (Olsen-P), Na+, Ca2+, Mg2+, K+, SAR, and NO3−-N. Among these variables, effluent loading, EC, and nitrates had the greatest contribution to the final PLSR model. PLSR is highly compatible with intensive agronomical practices such as land application, in which a large number of highly collinear and noisy input variables is monitored to assess plant species performance and to detect impacts on the environment.

  16. Prediction of CO concentrations based on a hybrid Partial Least Square and Support Vector Machine model

    Science.gov (United States)

    Yeganeh, B.; Motlagh, M. Shafie Pour; Rashidi, Y.; Kamalan, H.

    2012-08-01

    Due to the health impacts caused by exposure to air pollutants in urban areas, the monitoring and forecasting of air quality parameters have become an important topic in atmospheric and environmental research today. Knowledge of the dynamics and complexity of air pollutant behavior has made artificial intelligence models a useful tool for more accurate pollutant concentration prediction. This paper focuses on an innovative method of daily air pollution prediction using a combination of a Support Vector Machine (SVM) as predictor and Partial Least Squares (PLS) as a data selection tool, based on measured CO concentrations. The CO concentrations of the Rey monitoring station in the south of Tehran, from Jan. 2007 to Feb. 2011, have been used to test the effectiveness of this method. The hourly CO concentrations have been predicted using the SVM and the hybrid PLS-SVM models. Similarly, daily CO concentrations have been predicted based on the aforementioned four years of measured data. Results demonstrated that both models have good prediction ability; however, the hybrid PLS-SVM has better accuracy. In the analysis presented in this paper, statistical estimators including relative mean errors, root mean squared errors and the mean absolute relative error have been employed to compare the performance of the models. It was concluded that the errors decrease after size reduction, and the coefficients of determination increase from 56-81% for the SVM model to 65-85% for the hybrid PLS-SVM model. It was also found that the hybrid PLS-SVM model required lower computational time than the SVM model, as expected, hence supporting the more accurate and faster prediction ability of the hybrid PLS-SVM model.
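
    A hedged sketch of the hybrid idea follows: PLS reduces the predictor space and an SVM regressor is then trained on the latent scores. The synthetic inputs, component count and kernel settings are assumptions for illustration, not the study's configuration.

    ```python
    # Hybrid PLS-SVM sketch: PLS scores as reduced inputs to an SVM regressor.
    import numpy as np
    from sklearn.cross_decomposition import PLSRegression
    from sklearn.svm import SVR

    rng = np.random.default_rng(8)
    X = rng.standard_normal((800, 25))            # stand-in meteorological/traffic inputs
    y = np.tanh(X[:, 0]) + 0.3 * X[:, 1] ** 2 + 0.1 * rng.standard_normal(800)

    pls = PLSRegression(n_components=5).fit(X, y)  # supervised dimension reduction
    T = pls.transform(X)                           # latent scores replace raw predictors
    svr = SVR(kernel="rbf", C=10.0).fit(T, y)      # nonlinear predictor on the scores
    print("train R^2:", svr.score(T, y))
    ```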

  17. HYDRA: a Java library for Markov Chain Monte Carlo

    Directory of Open Access Journals (Sweden)

    Gregory R. Warnes

    2002-03-01

    Full Text Available Hydra is an open-source, platform-neutral library for performing Markov Chain Monte Carlo. It implements the logic of standard MCMC samplers within a framework designed to be easy to use, extend, and integrate with other software tools. In this paper, we describe the problem that motivated our work, outline our goals for the Hydra project, and describe the current features of the Hydra library. We then provide a step-by-step example of using Hydra to simulate from a mixture model drawn from cancer genetics, first using a variable-at-a-time Metropolis sampler and then a Normal Kernel Coupler. We conclude with a discussion of future directions for Hydra.
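
    Hydra itself is a Java library and its API is not reproduced here; as a language-neutral illustration of the variable-at-a-time Metropolis sampler mentioned in the abstract, a minimal Python sketch on an assumed 2-D Gaussian target follows.

    ```python
    # Variable-at-a-time Metropolis: propose and accept one coordinate at a time.
    import numpy as np

    def metropolis_vat(logp, x0, n_samples, step=1.0, seed=9):
        rng = np.random.default_rng(seed)
        x = np.array(x0, dtype=float)
        out = np.empty((n_samples, len(x)))
        for i in range(n_samples):
            for j in range(len(x)):                    # sweep the coordinates
                prop = x.copy()
                prop[j] += step * rng.standard_normal()  # symmetric 1-D proposal
                if np.log(rng.random()) < logp(prop) - logp(x):
                    x = prop                           # accept the coordinate move
            out[i] = x
        return out

    # Correlated 2-D Gaussian as a stand-in for a mixture-model posterior
    cov_inv = np.linalg.inv(np.array([[1.0, 0.8], [0.8, 1.0]]))
    logp = lambda v: -0.5 * v @ cov_inv @ v            # unnormalized log density
    draws = metropolis_vat(logp, [0.0, 0.0], 5000)
    print(draws.mean(axis=0), np.cov(draws.T))
    ```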

  18. Kernelized partial least squares for feature reduction and classification of gene microarray data

    Directory of Open Access Journals (Sweden)

    Land Walker H

    2011-12-01

    Full Text Available Abstract Background: The primary objectives of this paper are: 1. to apply Statistical Learning Theory (SLT), specifically Partial Least Squares (PLS) and Kernelized PLS (K-PLS), to the universal "feature-rich/case-poor" (also known as "large p small n", or "high-dimension, low-sample size") microarray problem by eliminating those features (or probes) that do not contribute to the "best" chromosome bio-markers for lung cancer, and 2. to quantitatively measure and verify (by an independent means) the efficacy of this PLS process. A secondary objective is to integrate these significant improvements in diagnostic and prognostic biomedical applications into the clinical research arena. That is, to devise a framework for converting SLT results into direct, useful clinical information for patient care or pharmaceutical research. We therefore propose, and preliminarily evaluate, a process whereby PLS, K-PLS, and Support Vector Machines (SVM) may be integrated with the accepted and well understood traditional biostatistical "gold standard", the Cox Proportional Hazard model and Kaplan-Meier survival analysis methods. Specifically, this new combination will be illustrated with both PLS and Kaplan-Meier followed by PLS and Cox Hazard Ratios (CHR), and can be easily extended to both the K-PLS and SVM paradigms. Finally, these previously described processes are contained in the Fine Feature Selection (FFS) component of our overall feature reduction/evaluation process, which consists of the following components: 1. coarse feature reduction, 2. fine feature selection and 3. classification (as described in this paper) and prediction. Results: Our results for PLS and K-PLS showed that these techniques, as part of our overall feature reduction process, performed well on noisy microarray data. The best performance was a good 0.794 Area Under a Receiver Operating Characteristic (ROC) Curve (AUC) for classification of recurrence prior to or after 36 months and a strong 0.869 AUC for

  19. Current identification in vacuum circuit breakers as a least squares problem*

    Directory of Open Access Journals (Sweden)

    Ghezzi Luca

    2013-01-01

    Full Text Available In this work, a magnetostatic inverse problem is solved in order to reconstruct the electric current distribution inside high-voltage vacuum circuit breakers from measurements of the outside magnetic field. The final (rectangular) algebraic linear system is solved in the least squares sense, using a regularized singular value decomposition of the system matrix. An approximated distribution of the electric current is thus returned, without the theoretical problem encountered with optical methods of matching light to temperature and finally to current density. The feasibility is justified from the computational point of view, as the (industrial) goal is to evaluate whether, or to what extent in terms of accuracy, a given experimental set-up (number and noise level of sensors) is adequate to work as a "magnetic camera" for a given circuit breaker.
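
    A short sketch of the regularized SVD solve described above: singular values below a threshold are discarded when inverting, so the reconstructed current distribution is not dominated by noise-amplifying modes. The truncation tolerance and the toy system are assumptions for the example.

    ```python
    # Truncated-SVD (regularized) least squares solution of A x ~= b.
    import numpy as np

    def tsvd_solve(A, b, rel_tol=1e-2):
        U, s, Vt = np.linalg.svd(A, full_matrices=False)
        keep = s > rel_tol * s[0]                      # discard ill-conditioned modes
        return Vt[keep].T @ ((U[:, keep].T @ b) / s[keep])

    # Toy setup: more field sensors than current unknowns, mildly ill-posed
    rng = np.random.default_rng(10)
    A = rng.standard_normal((60, 30)) @ np.diag(np.logspace(0, -6, 30))
    x_true = rng.standard_normal(30)
    b = A @ x_true + 1e-4 * rng.standard_normal(60)    # noisy field measurements
    x_hat = tsvd_solve(A, b)                           # truncated modes stay unrecovered
    print("relative error:", np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true))
    ```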

  20. Least-squares Migration and Full Waveform Inversion with Multisource Frequency Selection

    KAUST Repository

    Huang, Yunsong

    2013-09-01

    Multisource Least-Squares Migration (LSM) of phase-encoded supergathers has shown great promise in reducing the computational cost of conventional migration. But for the marine acquisition geometry this approach faces the challenge of erroneous misfit due to the mismatch between the limited number of live traces/shot recorded in the field and the pervasive number of traces generated by the finite-difference modeling method. To tackle this mismatch problem, I present a frequency selection strategy with LSM of supergathers. The key idea is, at each LSM iteration, to assign a unique frequency band to each shot gather, so that the spectral overlap among those shots, and therefore their crosstalk, is zero. Consequently, each receiver can unambiguously identify and then discount the superfluous sources, i.e. those that are not associated with the receiver in marine acquisition. To compare with standard migration, I apply the proposed method to the 2D SEG/EAGE salt model and obtain better resolved images computed at about 1/8 the cost; results for the 3D SEG/EAGE salt model, with an Ocean Bottom Seismometer (OBS) survey, show a speedup of 40×. This strategy is next extended to multisource Full Waveform Inversion (FWI) of supergathers for marine streamer data, with the same advantages of computational efficiency and storage savings. In the Finite-Difference Time-Domain (FDTD) method, to mitigate spectral leakage due to delayed onsets of sine waves detected at receivers, I double the simulation time and retain only the second half of the simulated records. To compare with standard FWI, I apply the proposed method to the 2D velocity model of the SEG/EAGE salt and to Gulf Of Mexico (GOM) field data, and obtain speedups of about 4× and 8×, respectively. Formulas are then derived for the resolution limits of various constituent wavepaths pertaining to FWI: diving waves, primary reflections, diffractions, and multiple reflections. They suggest that inverting multiples can provide some low and intermediate
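    A toy numpy sketch of the frequency-selection idea above, assuming FFT bins are simply partitioned round-robin among the shots so that the assigned bands are disjoint; how the bands rotate between iterations is not specified here:

```python
import numpy as np

def assign_disjoint_bands(n_shots, n_freq_bins):
    """Partition frequency bins among shots so spectral overlap is zero.

    Returns a boolean mask of shape (n_shots, n_freq_bins); row i marks
    the bins assigned to shot i. Disjoint bands mean zero crosstalk
    between shots within one LSM/FWI iteration.
    """
    owner = np.arange(n_freq_bins) % n_shots          # round-robin owner per bin
    return owner[None, :] == np.arange(n_shots)[:, None]

mask = assign_disjoint_bands(n_shots=4, n_freq_bins=16)
assert not np.any(mask.sum(axis=0) > 1)   # no bin shared by two shots
print(mask.astype(int))
```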

  1. Neutron spectrum unfolding using artificial neural network and modified least square method

    Science.gov (United States)

    Hosseini, Seyed Abolfazl

    2016-09-01

    In the present paper, the neutron spectrum is reconstructed using the Artificial Neural Network (ANN) and Modified Least Square (MLSQR) methods. The detector's response (pulse height distribution), the data required for unfolding the energy spectrum, is calculated using the developed MCNPX-ESUT computational code (MCNPX-Energy engineering of Sharif University of Technology). Unlike the usual methods that apply inversion procedures to unfold the energy spectrum from the Fredholm integral equation, the MLSQR method uses the direct procedure. Since liquid organic scintillators like NE-213 are well suited and routinely used for spectrometry of neutron sources, the neutron pulse height distribution is simulated/measured in the NE-213 detector. The response matrix is calculated using the MCNPX-ESUT computational code through the simulation of the NE-213 detector's response to monoenergetic neutron sources. For a known neutron pulse height distribution, the energy spectrum of the neutron source is unfolded using the MLSQR method. In the developed multilayer perceptron neural network for reconstruction of the energy spectrum of the neutron source, there is no need to form the response matrix. The multilayer perceptron neural network is developed based on logsig, tansig and purelin transfer functions. The developed artificial neural network consists of two hidden layers of hyperbolic tangent sigmoid transfer functions and a linear transfer function in the output layer. The motivation for applying the ANN method is that no matrix inversion is needed for energy spectrum unfolding. The simulated neutron pulse height distributions in each light bin due to randomly generated neutron spectra are considered as the input data of the ANN, and the randomly generated energy spectra are considered as the output data. The energy spectrum of the neutron source is identified with high accuracy using both the MLSQR and ANN methods. The results obtained from
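    A minimal scikit-learn sketch of a network with the stated shape (two hyperbolic-tangent hidden layers and a linear output layer), assuming hypothetical layer widths and synthetic pulse-height/spectrum training pairs rather than the paper's MCNPX-ESUT data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(1)
n_bins, n_energies, n_samples = 32, 16, 500

# Hypothetical response matrix and randomly generated spectra, as in
# the record: pulse-height distributions are inputs, spectra are targets.
R = np.abs(rng.standard_normal((n_bins, n_energies)))
spectra = np.abs(rng.standard_normal((n_samples, n_energies)))
pulse_heights = spectra @ R.T + 0.01 * rng.standard_normal((n_samples, n_bins))

# Two tanh hidden layers; sklearn uses a linear output layer for regression.
model = MLPRegressor(hidden_layer_sizes=(64, 64), activation="tanh",
                     max_iter=2000, random_state=0)
model.fit(pulse_heights, spectra)
print(model.predict(pulse_heights[:1]))   # unfolded spectrum estimate
```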

  2. First-order system least-squares for second-order elliptic problems with discontinuous coefficients: Further results

    Energy Technology Data Exchange (ETDEWEB)

    Bloechle, B.; Manteuffel, T.; McCormick, S.; Starke, G.

    1996-12-31

    Many physical phenomena are modeled as scalar second-order elliptic boundary value problems with discontinuous coefficients. The first-order system least-squares (FOSLS) methodology is an alternative to standard mixed finite element methods for such problems. The occurrence of singularities at interface corners and cross-points requires that care be taken when implementing the least-squares finite element method in the FOSLS context. We introduce two methods of handling the challenges resulting from singularities. The first method is based on a weighted least-squares functional and results in non-conforming finite elements. The second method is based on the use of singular basis functions and results in conforming finite elements. We also share numerical results comparing the two approaches.

  3. An Iterative Method for the Least-Squares Problems of a General Matrix Equation Subject to Submatrix Constraints

    Directory of Open Access Journals (Sweden)

    Li-fang Dai

    2013-01-01

    Full Text Available An iterative algorithm is proposed for solving the least-squares problem of a general matrix equation ∑_{i=1}^{t} M_i Z_i N_i = F, where the Z_i (i = 1, 2, …, t) are centro-symmetric matrices to be determined, with given central principal submatrices. For any initial iterative matrices, we show that the least-squares solution can be derived by this method within finitely many iteration steps in the absence of roundoff errors. Meanwhile, the unique optimal approximation solution pair for given matrices Z̃_i can also be obtained from the least-norm least-squares solution of the matrix equation ∑_{i=1}^{t} M_i Z̄_i N_i = F̄, in which Z̄_i = Z_i − Z̃_i and F̄ = F − ∑_{i=1}^{t} M_i Z̃_i N_i. The given numerical examples illustrate the efficiency of this algorithm.

  4. Mitigation of defocusing by statics and near-surface velocity errors by interferometric least-squares migration

    KAUST Repository

    Sinha, Mrinal

    2015-08-19

    We propose an interferometric least-squares migration method that can significantly reduce migration artifacts due to statics and errors in the near-surface velocity model. We first choose a reference reflector whose topography is well known from, e.g., well logs. Reflections from this reference layer are correlated with the traces associated with reflections from deeper interfaces to get crosscorrelograms. These crosscorrelograms are then migrated using interferometric least-squares migration (ILSM). In this way, statics and velocity errors at the near surface are largely eliminated for the examples in our paper.

  5. DISCRETE MINUS ONE NORM LEAST-SQUARES FOR THE STRESS FORMULATION OF LINEAR ELASTICITY WITH NUMERICAL RESULTS

    Institute of Scientific and Technical Information of China (English)

    Sang Dong Kim; Byeong Chun Shin; Seokchan Kim; Gyungsoo Woo

    2003-01-01

    This paper studies the discrete minus one norm least-squares methods for the stress formulation of pure displacement linear elasticity in two dimensions. The proposed least-squares functional is defined as the sum of the L2- and H^{-1}-norms of the residual equations, weighted appropriately. The minus one norm in the functional is replaced by the discrete minus one norm, and the discrete minus one norm least-squares methods are then analyzed with various numerical results focusing on finite element accuracy and multigrid convergence performance.

  6. Application of a neural network model coupled with the partial least-squares method for forecasting the water yield of a mine

    Institute of Scientific and Technical Information of China (English)

    CHEN Nan-xiang; CAO Lian-hai; HUANG Qiang

    2005-01-01

    Scientific forecasting of the water yield of a mine is of great significance to safe production and to the integrated use of water resources. This paper establishes a forecasting model for the water yield of a mine by combining a neural network with the partial least-squares method. Treating the independent variables with partial least squares not only resolves the multicollinearity among them but also reduces the input dimension of the neural network model; the neural network then handles the nonlinear part of the problem. The result of an example shows that the prediction achieves higher precision in both forecasting and fitting.
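    A short scikit-learn sketch of the two-stage idea, assuming PLS scores are fed to a small neural network; the component count, network size and synthetic data are illustrative:

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 10))                          # weather factors
X[:, 5:] = X[:, :5] + 0.1 * rng.standard_normal((200, 5))   # multicollinearity
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.standard_normal(200)

# Stage 1: PLS compresses the correlated inputs into a few components.
pls = PLSRegression(n_components=3).fit(X, y)
T = pls.transform(X)                                        # latent scores (200, 3)

# Stage 2: a neural network models the remaining nonlinearity.
net = MLPRegressor(hidden_layer_sizes=(16,), max_iter=3000, random_state=0)
net.fit(T, y)
print(net.predict(pls.transform(X[:3])))
```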

  7. Signs of divided differences yield least squares data fitting with constrained monotonicity or convexity

    Science.gov (United States)

    Demetriou, I. C.

    2002-09-01

    Methods are presented for least squares data smoothing by using the signs of divided differences of the smoothed values. Professor M.J.D. Powell initiated the subject in the early 1980s and since then, theory, algorithms and FORTRAN software have made it applicable to several disciplines in various ways. Let us consider n data measurements of a univariate function which have been altered by random errors. Then it is usual for the divided differences of the measurements to show sign alterations, which are probably due to data errors. We make the least sum of squares change to the measurements, by requiring the sequence of divided differences of order m to have at most q sign changes for some prescribed integer q. The positions of the sign changes are integer variables of the optimization calculation, which implies a combinatorial problem whose solution can require about O(n^q) quadratic programming calculations in n variables and n-m constraints. Suitable methods have been developed for the following cases. It has been found that a dynamic programming procedure can calculate the global minimum for the important cases of piecewise monotonicity (m=1, q≥1) and piecewise convexity/concavity (m=2, q≥1) of the smoothed values. The complexity of the procedure in the case of m=1 is O(n^2 + qn log_2 n) computer operations, while it is reduced to only O(n) when q=0 (monotonicity) and q=1 (increasing/decreasing monotonicity). The case m=2, q≥1 requires O(qn^2) computer operations and n^2 quadratic programming calculations, which is reduced to one and n-2 quadratic programming calculations when m=2, q=0, i.e. convexity, and m=2, q=1, i.e. convexity/concavity, respectively. Unfortunately, the technique that achieves this efficiency cannot be generalized to the highly nonlinear case m≥3, q≥2. However, the case m≥3, q=0 is solved by a special strictly
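    For the simplest case above (m=1, q=0, i.e. monotonic smoothing), the least-squares fit is the classical isotonic regression, computable by the pool-adjacent-violators algorithm; a scikit-learn sketch of this special case only, not of the paper's FORTRAN software:

```python
import numpy as np
from sklearn.isotonic import IsotonicRegression

rng = np.random.default_rng(3)
x = np.linspace(0, 1, 50)
y = x ** 2 + 0.05 * rng.standard_normal(50)   # increasing trend + noise

# Pool-adjacent-violators: least-squares change to y so that the first
# divided differences have no sign changes (m=1, q=0).
y_fit = IsotonicRegression(increasing=True).fit_transform(x, y)
assert np.all(np.diff(y_fit) >= 0)            # monotone smoothed values
```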

  8. Least Squares Pure Imaginary Solution and Real Solution of the Quaternion Matrix Equation AXB+CXD=E with the Least Norm

    Directory of Open Access Journals (Sweden)

    Shi-Fang Yuan

    2014-01-01

    Full Text Available Using the Kronecker product of matrices, the Moore-Penrose generalized inverse, and the complex representation of quaternion matrices, we derive the expressions of least squares solution with the least norm, least squares pure imaginary solution with the least norm, and least squares real solution with the least norm of the quaternion matrix equation AXB+CXD=E, respectively.

  9. Mass spectrometry and partial least-squares regression: a tool for identification of wheat variety and end-use quality

    DEFF Research Database (Denmark)

    Sørensen, Helle Aagaard; Petersen, Marianne Kjerstine; Jacobsen, Susanne;

    2004-01-01

    The whole process takes ∼30 min. Extracts of alcohol-soluble storage proteins (gliadins) from wheat were analysed by matrix-assisted laser desorption/ionization time-of-flight mass spectrometry. Partial least-squares regression was subsequently applied using these mass spectra for making

  10. PROPOSED MODIFICATIONS OF K2-TEMPERATURE RELATION AND LEAST SQUARES ESTIMATES OF BOD (BIOCHEMICAL OXYGEN DEMAND) PARAMETERS

    Science.gov (United States)

    A technique is presented for finding the least squares estimates for the ultimate biochemical oxygen demand (BOD) and rate coefficient for the BOD reaction without resorting to complicated computer algorithms or subjective graphical methods. This may be used in stream water quali...

  11. Regional estimation of rainfall intensity-duration-frequency curves using generalized least squares regression of partial duration series statistics

    DEFF Research Database (Denmark)

    Madsen, H.; Mikkelsen, Peter Steen; Rosbjerg, Dan; Harremoës, Poul

    2002-01-01

    mean value of the exceedance magnitudes, and the coefficient of L-variation (LCV) are considered as regional variables. A generalized least squares (GLS) regression model that explicitly accounts for intersite correlation and sampling uncertainties is applied for evaluating the regional heterogeneity of

  12. SIMULATIONS OF 2D AND 3D THERMOCAPILLARY FLOWS BY A LEAST-SQUARES FINITE ELEMENT METHOD. (R825200)

    Science.gov (United States)

    Numerical results for time-dependent 2D and 3D thermocapillary flows are presented in this work. The numerical algorithm is based on the Crank-Nicolson scheme for time integration, Newton's method for linearization, and a least-squares finite element method, together with a matri...

  13. Optimal Least-Squares Unidimensional Scaling: Improved Branch-and-Bound Procedures and Comparison to Dynamic Programming

    Science.gov (United States)

    Brusco, Michael J.; Stahl, Stephanie

    2005-01-01

    There are two well-known methods for obtaining a guaranteed globally optimal solution to the problem of least-squares unidimensional scaling of a symmetric dissimilarity matrix: (a) dynamic programming, and (b) branch-and-bound. Dynamic programming is generally more efficient than branch-and-bound, but the former is limited to matrices with…

  14. Online Low-Rank Tensor Subspace Tracking from Incomplete Data by CP Decomposition using Recursive Least Squares

    OpenAIRE

    Kasai, Hiroyuki

    2016-01-01

    We propose an online tensor subspace tracking algorithm based on the CP decomposition exploiting recursive least squares (RLS), dubbed OnLine Low-rank Subspace tracking by TEnsor CP Decomposition (OLSTEC). Numerical evaluations show that the proposed OLSTEC algorithm gives faster convergence per iteration compared with state-of-the-art online algorithms.
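    A textbook recursive least-squares update in numpy, to illustrate the RLS building block the record refers to; this is the generic rank-one update with a forgetting factor, not OLSTEC itself:

```python
import numpy as np

def rls_update(w, P, x, d, lam=0.99):
    """One recursive least-squares step with forgetting factor lam.

    w: current weights; P: inverse correlation matrix estimate;
    x: new regressor; d: new target. Returns updated (w, P).
    """
    Px = P @ x
    k = Px / (lam + x @ Px)     # gain vector
    e = d - w @ x               # a-priori error
    w = w + k * e
    P = (P - np.outer(k, Px)) / lam
    return w, P

rng = np.random.default_rng(4)
w_true = np.array([1.0, -2.0, 0.5])
w, P = np.zeros(3), np.eye(3) * 1e3
for _ in range(200):
    x = rng.standard_normal(3)
    d = w_true @ x + 0.01 * rng.standard_normal()
    w, P = rls_update(w, P, x, d)
print(w)   # close to w_true
```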

  15. Non-negative least squares for high-dimensional linear models: consistency and sparse recovery without regularization

    CERN Document Server

    Slawski, Martin

    2012-01-01

    Least squares fitting is in general not useful for high-dimensional linear models, in which the number of predictors is of the same or even larger order of magnitude than the number of samples. Theory developed in recent years has coined a paradigm according to which sparsity-promoting regularization is regarded as a necessity in such a setting. Deviating from this paradigm, we show that non-negativity constraints on the regression coefficients may be similarly effective as explicit regularization. For a broad range of designs whose Gram matrix has non-negative entries, we establish bounds on the $\ell_2$-prediction error of non-negative least squares (NNLS) whose form qualitatively matches corresponding results for $\ell_1$-regularization. Under slightly stronger conditions, it is established that NNLS followed by hard thresholding performs excellently in terms of support recovery of an (approximately) sparse target, in some cases improving over $\ell_1$-regularization. A substantial advantage of NNLS over r...
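    A small scipy sketch of NNLS followed by hard thresholding, the procedure analyzed above; the problem sizes, noise level and threshold are arbitrary illustrations:

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(5)
n, p = 50, 200                              # high-dimensional: p >> n
X = np.abs(rng.standard_normal((n, p)))     # design with non-negative Gram entries
beta = np.zeros(p)
beta[:5] = rng.uniform(1, 2, 5)             # sparse non-negative target
y = X @ beta + 0.05 * rng.standard_normal(n)

beta_hat, _ = nnls(X, y)                    # non-negative least squares
beta_thr = np.where(beta_hat > 0.5, beta_hat, 0.0)   # hard thresholding
print(np.nonzero(beta_thr)[0])              # recovered support (ideally 0..4)
```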

  16. A consensus least squares support vector regression (LS-SVR) for analysis of near-infrared spectra of plant samples.

    Science.gov (United States)

    Li, Yankun; Shao, Xueguang; Cai, Wensheng

    2007-04-15

    Consensus modeling, which combines the results of multiple independent models to produce a single prediction, avoids the instability of a single model. Based on the principle of consensus modeling, a consensus least squares support vector regression (LS-SVR) method for calibrating near-infrared (NIR) spectra was proposed. In the proposed approach, NIR spectra of plant samples were first preprocessed using the discrete wavelet transform (DWT) to filter the spectral background and noise; then the consensus LS-SVR technique was used for building the calibration model. With an optimization of the parameters involved in the modeling, a satisfactory model was achieved for predicting the content of reducing sugar in plant samples. The predicted results show that the consensus LS-SVR model is more robust and reliable than the conventional partial least squares (PLS) and LS-SVR methods. PMID:19071605

  17. Based on Partial Least-squares Regression to Build up and Analyze the Model of Rice Evapotranspiration

    Institute of Scientific and Technical Information of China (English)

    2003-01-01

    During the course of calculating rice evapotranspiration from weather factors, we often find that some independent variables are mutually correlated. This phenomenon can distort the traditional multivariate regression model based on the least-squares method, and the stability of the model is lost. In this paper the model is built on partial least-squares regression: applying the ideas of principal component analysis and canonical correlation analysis, components are extracted from the original material. A model of rice evapotranspiration is thus built that resolves the multicollinearity among the independent variables (the weather factors). Finally, the model is analyzed in parts and gives satisfactory results.

  18. Finite element solution of multi-scale transport problems using the least squares based bubble function enrichment

    CERN Document Server

    Yazdani, A

    2011-01-01

    This paper presents an optimum technique based on the least squares method for the derivation of the bubble functions to enrich the standard linear finite elements employed in the formulation of Galerkin weighted-residual statements. The element-level linear shape functions are enhanced with supplementary polynomial bubble functions with undetermined coefficients. The best least squares minimization of the residual functional obtained from the insertion of these trial functions into model equations results in an algebraic system of equations whose solution provides the unknown coefficients in terms of element-level nodal values. The normal finite element procedures for the construction of stiffness matrices may then be followed with no extra degree of freedom incurred as a result of such enrichment. The performance of the proposed method has been tested on a number of benchmark linear transport equations with the results compared against the exact and standard linear element solutions. It has been observed th...

  19. Least Square Fast Learning Network for modeling the combustion efficiency of a 300 MW coal-fired boiler.

    Science.gov (United States)

    Li, Guoqiang; Niu, Peifeng; Wang, Huaibao; Liu, Yongchao

    2014-03-01

    This paper presents a novel artificial neural network with a very fast learning speed, all of whose weights and biases are determined by applying the least squares method twice; it is therefore called the Least Square Fast Learning Network (LSFLN). In addition, it differs from conventional neural networks in that the output neurons of the LSFLN not only receive information from the hidden-layer neurons but also receive the external information directly from the input neurons. In order to test the validity of the LSFLN, it is applied to 6 classical regression applications and is also employed to build the functional relation between the combustion efficiency and the operating parameters of a 300 MW coal-fired boiler. Experimental results show that, compared with other methods, the LSFLN with far fewer hidden neurons can achieve much better regression precision and generalization ability at a much faster learning speed. PMID:24373896

  20. FTIR Spectroscopy Combined with Partial Least Square for Analysis of Red Fruit Oil in Ternary Mixture System

    OpenAIRE

    Rohman, A.; Dwi Larasati Setyaningrum; Sugeng Riyanto

    2014-01-01

    FTIR spectroscopy is a promising method for quantification of edible oils. Three edible oils, namely, red fruit oil (RFO), corn oil (CO), and soybean oil (SO), in ternary mixture system were quantitatively analyzed using FTIR spectroscopy in combination with partial least square (PLS). FTIR spectra of edible oils in ternary mixture were subjected to several treatments including normal spectra and their derivative. Using PLS calibration, the first derivative FTIR spectra can be exploited for d...

  1. Modeling and forecasting monthly movement of annual average solar insolation based on the least-squares Fourier-model

    International Nuclear Information System (INIS)

    Highlights: • Introduces a finite Fourier-series model for evaluating the monthly movement of annual average solar insolation. • Presents a forecast method for predicting its movement based on the Fourier-series model extended in the least-squares sense. • Shows that its movement is well described by a low number of harmonics, approximately a 6-term Fourier series. • Predicts its movement best with fewer than 6 Fourier terms. - Abstract: Solar insolation is one of the most important measured parameters in many fields. Modeling and forecasting the monthly movement of annual average solar insolation is of increasing importance in engineering, science and economics. In this study, Fourier analysis employing a finite Fourier series is proposed for evaluating the monthly movement of annual average solar insolation and is extended in the least-squares sense for forecasting. The conventional Fourier analysis, which is the most common analysis method in the frequency domain, cannot be directly applied for prediction. Incorporating the least-squares method, the introduced Fourier-series model is extended to predict this movement. The extended Fourier-series forecasting model obtains its optimum Fourier coefficients in the least-squares sense based on previous monthly movements. The proposed method is applied to experiments and yields satisfying results in different cities (states). It is indicated that the monthly movement of annual average solar insolation is well described by a low number of harmonics, approximately a 6-term Fourier series. The extended Fourier forecasting model predicts the monthly movement of annual average solar insolation best with fewer than 6 Fourier terms.
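    A numpy sketch of a least-squares finite Fourier-series fit and forecast, assuming a 12-month period and an illustrative synthetic series; the harmonic count mirrors the roughly 6-term finding above:

```python
import numpy as np

def fourier_design(t, n_harmonics, period=12.0):
    """Design matrix [1, cos(k*w*t), sin(k*w*t)] for a finite Fourier series."""
    w = 2 * np.pi / period
    cols = [np.ones_like(t)]
    for k in range(1, n_harmonics + 1):
        cols += [np.cos(k * w * t), np.sin(k * w * t)]
    return np.column_stack(cols)

# Hypothetical monthly series: 10 years of a seasonal signal plus noise.
rng = np.random.default_rng(6)
t = np.arange(120.0)
y = 5 + 2 * np.sin(2 * np.pi * t / 12) + 0.3 * rng.standard_normal(120)

A = fourier_design(t, n_harmonics=6)
coef, *_ = np.linalg.lstsq(A, y, rcond=None)   # least-squares Fourier coefficients
t_future = np.arange(120.0, 132.0)             # forecast the next 12 months
forecast = fourier_design(t_future, 6) @ coef
print(forecast.round(2))
```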

  2. Least Squares Fitting of Chacón-Gielis Curves by the Particle Swarm Method of Optimization

    OpenAIRE

    Mishra, SK

    2006-01-01

    Ricardo Chacón generalized Johan Gielis's superformula by introducing elliptic functions in place of trigonometric functions. In this paper an attempt has been made to fit the Chacón-Gielis curves (modified by various functions) to simulated data by the least squares principle. Estimation has been done by the Particle Swarm (PS) methods of global optimization. The Repulsive Particle Swarm optimization algorithm has been used. It has been found that although the curve-fitting exercise may be s...

  3. Penalized least squares methods with piecewise polynomial functions for the solution of partial differential equations

    OpenAIRE

    Pechmann, Patrick R.

    2008-01-01

    The main topic of this work is the approximation of solutions of partial differential equations with Dirichlet boundary conditions by spline functions. Partial differential equations find application, for example, in electrostatics, elasticity theory, fluid mechanics, and the study of the propagation of heat and sound. Some approximation problems possess no unique solution. By applying the penalized least squares method it was shown...

  4. On discrete least square projection in unbounded domain with random evaluations and its application to parametric uncertainty quantification

    OpenAIRE

    TANG, TAO; Zhou, Tao

    2014-01-01

    This work is concerned with approximating multivariate functions in an unbounded domain by using discrete least-squares projection with random point evaluations. Particular attention is given to functions with random Gaussian or Gamma parameters. We first demonstrate that the traditional Hermite (Laguerre) polynomial chaos expansion suffers from instability in the sense that an unfeasible number of points, which is relevant to the dimension of the approximation space, is...

  5. Solve: a nonlinear least-squares code and its application to the optimal placement of torsatron vertical field coils

    International Nuclear Information System (INIS)

    A computational method was developed which alleviates the need for lengthy parametric scans as part of a design process. The method makes use of a least-squares algorithm to find the optimal value of a parameter vector, where optimal is defined in terms of a utility function prescribed by the user. The placement of the vertical field coils of a torsatron is such a nonlinear problem.

  6. Recursive Total Least-Squares Algorithm Based on Inverse Power Method and Dichotomous Coordinate-Descent Iterations

    OpenAIRE

    Arablouei, Reza; Doğançay, Kutluyıl; Werner, Stefan

    2014-01-01

    We develop a recursive total least-squares (RTLS) algorithm for errors-in-variables system identification utilizing the inverse power method and the dichotomous coordinate-descent (DCD) iterations. The proposed algorithm, called DCD-RTLS, outperforms the previously-proposed RTLS algorithms, which are based on the line-search method, with reduced computational complexity. We perform a comprehensive analysis of the DCD-RTLS algorithm and show that it is asymptotically unbiased as well as being ...

  7. Application of the European customer satisfaction index to postal services. Structural equation models versus partial least squares

    OpenAIRE

    O'Loughlin, Christina; Coenders, Germà

    2002-01-01

    Customer satisfaction and retention are key issues for organizations in today’s competitive market place. As such, much research and revenue has been invested in developing accurate ways of assessing consumer satisfaction at both the macro (national) and micro (organizational) level, facilitating comparisons in performance both within and between industries. Since the instigation of the national customer satisfaction indices (CSI), partial least squares (PLS) has been used to estimate the CSI...

  8. Novel approach of crater detection by crater candidate region selection and matrix-pattern-oriented least squares support vector machine

    Institute of Scientific and Technical Information of China (English)

    Ding Meng; Cao Yunfeng; Wu Qingxian

    2013-01-01

    Impact craters are commonly found on the surfaces of planets, satellites, asteroids and other solar system bodies. In order to speed up the rate of constructing databases of craters, it is important to develop crater detection algorithms. This paper presents a novel approach to automatically detect craters on planetary surfaces. The approach contains two parts: crater candidate region selection and crater detection. In the first part, crater candidate region selection is achieved by the Kanade-Lucas-Tomasi (KLT) detector. Matrix-pattern-oriented least squares support vector machine (MatLSSVM), as the matrixized version of the least squares support vector machine (LSSVM), inherits the advantages of LSSVM, greatly reduces storage space, and preserves the spatial redundancies within each image matrix compared with general LSSVM. The second part of the approach employs MatLSSVM to design the classifier for crater detection. Experimental results on a dataset comprising 160 preprocessed image patches from Google Mars demonstrate that the accuracy rate of crater detection can be up to 88%. In addition, an outstanding feature of the approach introduced in this paper is that it takes the resized crater candidate region directly as the input pattern to perform crater detection. The results of the last experiment demonstrate that the MatLSSVM-based classifier can detect crater regions effectively on the basis of KLT-based crater candidate region selection.

  9. A new formulation for total least square error method in d-dimensional space with mapping to a parametric line

    Science.gov (United States)

    Skala, Vaclav

    2016-06-01

    There are many practical applications based on the Least Square Error (LSE) or Total Least Square Error (TLSE) methods. Usually the standard least square error is used due to its simplicity, but it is not an optimal solution, as it does not optimize distance, but the square of a distance. The TLSE method, respecting the orthogonality of the distance measurement, is computed in d-dimensional space; i.e. for points given in E², a line π in E², resp. for points given in E³, a plane ρ in E³, fitting the TLSE criterion is found. However, some tasks in the physical sciences lead to a slightly different problem. In this paper, a new TLSE method is introduced for solving the problem in which data are given in E³ and a line π ∈ E³ fitting the TLSE criterion is to be found. The presented approach is applicable to the general d-dimensional case, i.e. when points are given in E^d and a line π ∈ E^d is to be found. This formulation is different from the usual TLSE formulation.
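    A numpy sketch of a TLS line fit in E^d via the SVD: the line minimizing the sum of squared orthogonal distances passes through the centroid along the leading right singular vector (the data here are synthetic; the paper's own formulation may differ in detail):

```python
import numpy as np

def tls_line_fit(points):
    """Total-least-squares line through a point cloud in E^d.

    Returns (centroid, direction): the orthogonal-distance-minimizing
    line passes through the centroid along the first principal direction.
    """
    c = points.mean(axis=0)
    _, _, Vt = np.linalg.svd(points - c, full_matrices=False)
    return c, Vt[0]

rng = np.random.default_rng(7)
t = rng.uniform(-1, 1, (100, 1))
d_true = np.array([1.0, 2.0, -1.0]) / np.sqrt(6)
pts = 3.0 + t * d_true + 0.01 * rng.standard_normal((100, 3))
c, d = tls_line_fit(pts)
print(c.round(2), d.round(3))   # direction matches ±d_true
```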

  10. Hourly cooling load forecasting using time-indexed ARX models with two-stage weighted least squares regression

    International Nuclear Information System (INIS)

    Highlights: • Developed hourly-indexed ARX models for robust cooling-load forecasting. • Proposed a two-stage weighted least-squares regression approach. • Considered the effect of outliers as well as trend of cooling load and weather patterns. • Included higher order terms and day type patterns in the forecasting models. • Demonstrated better accuracy compared with some ARX and ANN models. - Abstract: This paper presents a robust hourly cooling-load forecasting method based on time-indexed autoregressive with exogenous inputs (ARX) models, in which the coefficients are estimated through a two-stage weighted least squares regression. The prediction method includes a combination of two separate time-indexed ARX models to improve prediction accuracy of the cooling load over different forecasting periods. The two-stage weighted least-squares regression approach in this study is robust to outliers and suitable for fast and adaptive coefficient estimation. The proposed method is tested on a large-scale central cooling system in an academic institution. The numerical case studies show the proposed prediction method performs better than some ANN and ARX forecasting models for the given test data set
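    A compact numpy sketch of a generic two-stage weighted least-squares estimate in the spirit described in the highlights: an ordinary fit produces residuals, and outlier-down-weighting weights drive a second fit. The weighting rule here is a simple illustrative choice, not the paper's:

```python
import numpy as np

def two_stage_wls(X, y):
    """Stage 1: OLS for residuals; stage 2: WLS down-weighting outliers."""
    beta0, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta0
    scale = np.median(np.abs(r)) / 0.6745 + 1e-12        # robust scale (MAD)
    w = np.minimum(1.0, 2.0 * scale / (np.abs(r) + 1e-12))  # Huber-like weights
    sw = np.sqrt(w)                                      # WLS via sqrt-weighting
    beta, *_ = np.linalg.lstsq(X * sw[:, None], sw * y, rcond=None)
    return beta

rng = np.random.default_rng(8)
X = np.column_stack([np.ones(100), rng.standard_normal(100)])
y = X @ np.array([1.0, 3.0]) + 0.1 * rng.standard_normal(100)
y[::17] += 5.0                                           # inject outliers
print(two_stage_wls(X, y).round(3))                      # near [1, 3]
```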

  11. Comparative evaluation of photon cross section libraries for materials of interest in PET Monte Carlo simulations

    CERN Document Server

    Zaidi, H

    1999-01-01

    The many applications of Monte Carlo modelling in nuclear medicine imaging make it desirable to increase the accuracy and computational speed of Monte Carlo codes. The accuracy of Monte Carlo simulations strongly depends on the accuracy of the probability functions and thus on the cross section libraries used for photon transport calculations. A comparison between different photon cross section libraries and parametrizations implemented in Monte Carlo simulation packages developed for positron emission tomography and the most recent Evaluated Photon Data Library (EPDL97) developed by the Lawrence Livermore National Laboratory was performed for several human tissues and common detector materials for energies from 1 keV to 1 MeV. Different photon cross section libraries and parametrizations show quite large variations as compared to the EPDL97 coefficients. This latter library is more accurate and was carefully designed in the form of look-up tables providing efficient data storage, access, and management. Toge...

  12. Dependence of Monte Carlo Prediction on Evaluated Nuclear Data Library in Continuous Energy Criticality Calculations

    International Nuclear Information System (INIS)

    Monte Carlo neutronics calculations can estimate accurate nuclear parameters from continuous-energy nuclear libraries and detailed geometry. The continuous-energy nuclear library for Monte Carlo simulations can be generated from several evaluated nuclear data files (ENDF/B-VI.8, JENDL-3.3, JEFF-3.0, etc.) by NJOY99. The objective of this paper is to quantify the effects of evaluated nuclear data files on nuclear parameters estimated by Monte Carlo calculations for various critical experiment problems. In this study, Monte Carlo calculations are conducted with the MCCARD code, which is designed exclusively for neutron transport calculation.

  13. Assessment of the Influence of Thermal Scattering Library on Monte-Carlo Calculation

    International Nuclear Information System (INIS)

    Monte Carlo neutron transport codes generally use continuous-energy neutron libraries. Thermal scattering libraries are also used to fully represent thermal neutron scattering by molecules and crystalline solids. Both neutron libraries and thermal scattering libraries are generated by NJOY based on ENDF data. While a neutron library can be generated for any specific temperature, a thermal scattering library can be generated only for restricted temperatures when using ENDF data. However, it is possible to generate a thermal scattering library for any specific temperature by using the LEAPR module in NJOY instead of ENDF data. In this study, thermal scattering libraries of hydrogen bound in light water and carbon bound in graphite are generated using the LEAPR module and ENDF data, and the influence of each library on Monte Carlo calculations is assessed. In addition, the influence of the library temperature on Monte Carlo calculations is assessed. The thermal scattering libraries are generated with the LEAPR module in NJOY, and an NIM program was developed for this work. The libraries generated with the LEAPR module are compared with libraries generated from ENDF thermal scattering data; the comparison was carried out for H in H2O and C in graphite. As a result, the libraries generated from the LEAPR module gave results similar to those generated from ENDF thermal scattering data. It is therefore concluded that generating thermal scattering libraries with the LEAPR module is appropriate, and that libraries can be generated at user-specified temperatures. The extent to which the temperature of a thermal scattering library influences Monte Carlo calculations is also assessed.

  14. Assessment of the Influence of Thermal Scattering Library on Monte-Carlo Calculation

    Energy Technology Data Exchange (ETDEWEB)

    Kim, Gwanyoung; Woo, Swengwoong [Korea Institute of Nuclear Safety, Daejeon (Korea, Republic of)]

    2014-05-15

    Monte Carlo neutron transport codes generally use continuous-energy neutron libraries. Thermal scattering libraries are also used to fully represent thermal neutron scattering by molecules and crystalline solids. Both neutron libraries and thermal scattering libraries are generated by NJOY based on ENDF data. While a neutron library can be generated for any specific temperature, a thermal scattering library can be generated only for restricted temperatures when using ENDF data. However, it is possible to generate a thermal scattering library for any specific temperature by using the LEAPR module in NJOY instead of ENDF data. In this study, thermal scattering libraries of hydrogen bound in light water and carbon bound in graphite are generated using the LEAPR module and ENDF data, and the influence of each library on Monte Carlo calculations is assessed. In addition, the influence of the library temperature on Monte Carlo calculations is assessed. The thermal scattering libraries are generated with the LEAPR module in NJOY, and an NIM program was developed for this work. The libraries generated with the LEAPR module are compared with libraries generated from ENDF thermal scattering data; the comparison was carried out for H in H2O and C in graphite. As a result, the libraries generated from the LEAPR module gave results similar to those generated from ENDF thermal scattering data. It is therefore concluded that generating thermal scattering libraries with the LEAPR module is appropriate, and that libraries can be generated at user-specified temperatures. The extent to which the temperature of a thermal scattering library influences Monte Carlo calculations is also assessed.

  15. Dynamic least-squares kernel density modeling of Fokker-Planck equations with application to neural population

    Science.gov (United States)

    Shotorban, Babak

    2010-04-01

    The dynamic least-squares kernel density (LSQKD) model [C. Pantano and B. Shotorban, Phys. Rev. E 76, 066705 (2007)] is used to solve the Fokker-Planck equations. In this model the probability density function (PDF) is approximated by a linear combination of basis functions with unknown parameters whose governing equations are determined by a global least-squares approximation of the PDF in the phase space. In this work basis functions are set to be Gaussian for which the mean, variance, and covariances are governed by a set of partial differential equations (PDEs) or ordinary differential equations (ODEs) depending on what phase-space variables are approximated by Gaussian functions. Three sample problems of univariate double-well potential, bivariate bistable neurodynamical system [G. Deco and D. Martí, Phys. Rev. E 75, 031913 (2007)], and bivariate Brownian particles in a nonuniform gas are studied. The LSQKD is verified for these problems as its results are compared against the results of the method of characteristics in nondiffusive cases and the stochastic particle method in diffusive cases. For the double-well potential problem it is observed that for low to moderate diffusivity the dynamic LSQKD well predicts the stationary PDF for which there is an exact solution. A similar observation is made for the bistable neurodynamical system. In both these problems least-squares approximation is made on all phase-space variables resulting in a set of ODEs with time as the independent variable for the Gaussian function parameters. In the problem of Brownian particles in a nonuniform gas, this approximation is made only for the particle velocity variable leading to a set of PDEs with time and particle position as independent variables. Solving these PDEs, a very good performance by LSQKD is observed for a wide range of diffusivities.

  16. Least Squares Support Vector Machine Based Real-Time Fault Diagnosis Model for Gas Path Parameters of Aero Engines

    Institute of Scientific and Technical Information of China (English)

    WANG Xu-hui; HUANG Sheng-guo; WANG Ye; LIU Yong-jian; SHU Ping

    2009-01-01

    Least squares support vector machine (LS-SVM) is applied to gas path fault diagnosis for aero engines. Firstly, the deviation data of engine cruise are analyzed. Then, model selection is conducted using the pattern search method. Finally, by decoding aircraft communications addressing and reporting system (ACARS) reports, a real-time cruise data set is acquired, and the diagnosis model is adopted to process the data. In contrast to the radial basis function (RBF) neural network, LS-SVM is more suitable for real-time diagnosis of gas turbine engines.

  17. Resolution of quaternary mixtures of cadaverine, histamine, putrescine and tyramine by the square wave voltammetry and partial least squares method.

    Science.gov (United States)

    Henao-Escobar, W; Domínguez-Renedo, O; Alonso-Lomillo, M A; Arcos-Martínez, M J

    2015-10-01

    This work presents the simultaneous determination of cadaverine, histamine, putrescine and tyramine by square wave voltammetry using a boron-doped diamond electrode. A multivariate calibration method based on partial least squares regression has allowed the resolution of the highly overlapped voltammetric signals obtained for the analyzed biogenic amines. Prediction errors lower than 9% were obtained when the concentrations of the quaternary mixtures were calculated. The developed procedure has been applied to the analysis of ham samples, and the results are in good agreement with those obtained using the standard HPLC method. PMID:26078134

  18. An online algorithm for least-square spectral analysis: Applied to time-frequency analysis of heart rate.

    Science.gov (United States)

    Zhang, Zhe; Leong, Philip H W

    2015-08-01

    We propose a novel online algorithm for computing least-squares-based periodograms, otherwise known as the Lomb-Scargle periodogram. Our spectral analysis technique has been shown to be superior to traditional discrete Fourier transform (DFT) based methods, and we introduce an algorithm with O(N) time complexity per input sample. The technique is suitable for real-time embedded implementations, and its utility is demonstrated through an application to high-resolution time-frequency domain analysis of heart rate variability (HRV). PMID:26736732
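    For reference, scipy ships a (batch, not online) Lomb-Scargle implementation; a sketch on an illustrative unevenly sampled signal:

```python
import numpy as np
from scipy.signal import lombscargle

# Unevenly sampled signal (times and rates here are illustrative).
rng = np.random.default_rng(9)
t = np.sort(rng.uniform(0, 60, 300))            # irregular sample times [s]
y = np.sin(2 * np.pi * 0.25 * t) + 0.2 * rng.standard_normal(300)

freqs = np.linspace(0.01, 0.5, 500)             # frequencies of interest [Hz]
omega = 2 * np.pi * freqs                       # lombscargle expects rad/s
pgram = lombscargle(t, y - y.mean(), omega)     # least-squares periodogram
print(freqs[np.argmax(pgram)])                  # peak near 0.25 Hz
```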

  19. Calculation of the velocity components of a continuous GNSS station by applying the least-squares adjustment algorithm.

    Directory of Open Access Journals (Sweden)

    Jorge Moya Zamora

    2014-06-01

    Full Text Available The calculation of the velocity of a continuous GNSS observation station is a key input in modern surveying. Determining the positions of GNSS stations daily makes it possible to establish time series for the stations, from which information can be derived about phenomena affecting their behavior. This article describes the least-squares algorithm adapted and applied to the determination of the velocity components of continuous observation stations. Furthermore, this algorithm is applied to calculating the velocity of the ETCG station belonging to the Geocentric Reference System for the Americas (SIRGAS).
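    A minimal numpy sketch of the underlying estimate: the station velocity is the least-squares slope of the daily position time series (the data here are synthetic, with a hypothetical 12 mm/yr drift):

```python
import numpy as np

# Daily north-component positions of a hypothetical station over 3 years,
# moving at 12 mm/yr plus noise; the velocity is the least-squares slope.
rng = np.random.default_rng(10)
t_years = np.arange(0, 3, 1 / 365)
north_mm = 12.0 * t_years + 2.0 * rng.standard_normal(t_years.size)

A = np.column_stack([np.ones_like(t_years), t_years])   # [offset, slope]
(offset, vel), *_ = np.linalg.lstsq(A, north_mm, rcond=None)
print(f"estimated velocity: {vel:.2f} mm/yr")           # close to 12 mm/yr
```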

  20. 'AJUSTAR', an interactive processor to fit, by means of least squares, one-variable polynomials (of arbitrary degree) to experimental points

    International Nuclear Information System (INIS)

    In this report, a numerical tool is offered to scientists and technicians: a FORTRAN program, of interactive use, for making linear least-squares fits to any set of empirical observations. The method is based on orthogonal functions (for the discrete case) instead of directly solving the system of equations. The procedure also includes the optional facilities of: change of variable, direct interpolation, nonlinear correlation factor, 'weights' for the points, confidence intervals (Scheffé, Miller, Student), and plotting of results. (Author). 10 refs

  1. Ajustar: an interactive processor to fit, by means of least squares, one-variable polynomials (of arbitrary degree) to experimental points

    International Nuclear Information System (INIS)

    In this report, a numerical tool is offered to scientists and technicians: a FORTRAN program, of interactive use, for making linear least-squares fits to any set of empirical observations. The method is based on orthogonal functions (for the discrete case) instead of directly solving the system of equations. The procedure also includes the optional facilities of: change of variable, direct interpolation, nonlinear correlation factor, 'weights' for the points, confidence intervals (Scheffé, Miller, Student), and plotting of results. (Author) 10 refs
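    The orthogonal-function approach these two records describe corresponds to fitting in an orthogonal polynomial basis rather than solving the ill-conditioned normal equations of the monomial basis; numpy's Legendre class performs this kind of stable discrete fit (the data are illustrative):

```python
import numpy as np
from numpy.polynomial import Legendre, Polynomial

rng = np.random.default_rng(11)
x = np.linspace(-1, 1, 80)
y = 0.5 - x + 2 * x ** 3 + 0.05 * rng.standard_normal(80)

# Degree-3 least-squares fit in an orthogonal (Legendre) basis: far
# better conditioned than normal equations in the monomial basis.
fit = Legendre.fit(x, y, deg=3)
print(fit.convert(kind=Polynomial).coef.round(2))   # near [0.5, -1, 0, 2]
```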

  2. Spectroscopic Determination of Aboveground Biomass in Grasslands Using Spectral Transformations, Support Vector Machine and Partial Least Squares Regression

    Directory of Open Access Journals (Sweden)

    Miguel Marabel

    2013-08-01

    Full Text Available Aboveground biomass (AGB) is one of the strategic biophysical variables of interest in vegetation studies. The main objective of this study was to evaluate the Support Vector Machine (SVM) and Partial Least Squares Regression (PLSR) for estimating the AGB of grasslands from field spectrometer data and to find out which data pre-processing approach was the most suitable. The most accurate model to predict the total AGB involved PLSR and the Maximum Band Depth index derived from the continuum-removed reflectance in the absorption features between 916–1,120 nm and 1,079–1,297 nm (R² = 0.939, RMSE = 7.120 g/m²). Regarding the green fraction of the AGB, the Area Over the Minimum index derived from the continuum-removed spectra provided the most accurate model overall (R² = 0.939, RMSE = 3.172 g/m²). Identifying the appropriate absorption features proved to be crucial to improve the performance of PLSR in estimating the total and green aboveground biomass, by using the indices derived from those spectral regions. Ordinary Least Squares Regression could be used as a surrogate for the PLSR approach with the Area Over the Minimum index as the independent variable, although the resulting model would not be as accurate.

  3. Regression model of support vector machines for least squares prediction of crystallinity of cracking catalysts by infrared spectroscopy

    International Nuclear Information System (INIS)

    The recent introduction of the least squares support vector machine method for regression purposes in the field of chemometrics has brought several advantages to linear and nonlinear multivariate calibration methods. The objective of this paper is to propose the least squares support vector machine as an alternative multivariate calibration method for the prediction of the percentage of crystallinity of fluidized catalytic cracking catalysts by means of Fourier transform mid-infrared spectroscopy. A linear kernel was used in the calculations of the regression model. The optimization of its gamma parameter was carried out using the leave-one-out cross-validation procedure. The root mean square error of prediction was used to measure the performance of the model. The accuracy of the results obtained with the application of the method is in accordance with the uncertainty of the X-ray powder diffraction reference method. To assess the generalization capability of the developed method, a comparison study was carried out between the results achieved with the new model and those reached through the application of linear calibration methods. The developed method can be easily implemented in refinery laboratories.

  4. A task specific uncertainty analysis method for least-squares-based form characterization of ultra-precision freeform surfaces

    International Nuclear Information System (INIS)

    In the measurement of ultra-precision freeform surfaces, least-squares-based form characterization methods are widely used to evaluate the form error of the measured surfaces. Although many methodologies have been proposed in recent years to improve the efficiency of the characterization process, relatively little research has been conducted on the analysis of the associated uncertainty in the characterization results that may arise from the characterization methods being used. This paper therefore presents a task-specific uncertainty analysis method with application to the least-squares-based form characterization of ultra-precision freeform surfaces. That is, the associated uncertainty in the form characterization results is estimated when the measured data are extracted from a specific surface with a specific sampling strategy. Three factors are considered in this study: measurement error, surface form error and sample size. The task-specific uncertainty analysis method has been evaluated through a series of experiments. The results show that the method can effectively estimate the uncertainty of the form characterization results for a specific freeform surface measurement.

  5. Factors Contributing to Safety and Health Performance of Malaysian Low-cost Housing: Partial Least Squares Approach

    Directory of Open Access Journals (Sweden)

    Azuin Ramli

    2014-05-01

    Full Text Available Sustainable development is fast emerging as one of the main priorities of the construction industry in Malaysia. Malaysians of all income levels, particularly the low-income group, should have access to adequate, affordable and quality shelter. As a result, safety and health performance in low-cost housing has become a rising concern. This study explores the influence of architecture, building services, external environment, operation and maintenance, and management approaches on building safety and health performance among construction practitioners in Malaysia, and their subsequent personal responsibility. The study used Partial Least Squares (PLS) and Structural Equation Modelling (SEM) tools to test the hypotheses generated. Findings from the partial least squares analysis revealed that architecture, building services, external environment, operation and maintenance, and management approaches are vital determinants contributing to the safety and health performance of low-cost housing in the Malaysian context. In turn, these determinants will largely shape whether construction practitioners take personal responsibility for building safety and health performance. Implications, limitations and suggestions for future research are discussed accordingly.

  6. An improved algorithm for the determination of the system parameters of a visual binary by least squares

    International Nuclear Information System (INIS)

    The problem of computing the orbit of a visual binary from a set of observed positions is reconsidered. It is a least-squares adjustment problem if the observational errors follow a bias-free multivariate Gaussian distribution and the covariance matrix of the observations is assumed to be known. The condition equations are constructed to satisfy both the conic section equation and the area theorem, and they are nonlinear in both the observations and the adjustment parameters. The traditional least-squares algorithm, which employs condition equations that are solved with respect to the uncorrelated observations and are either linear in the adjustment parameters or linearized by a first-order Taylor expansion, is inadequate for the orbit problem. Not long ago, a completely general solution was published by W. H. Jefferys, who proposed a rigorous adjustment algorithm for models in which the observations appear nonlinearly in the condition equations and may be correlated, and in which construction of the normal equations and the residual function involves no approximation. This method was successfully applied to the present problem. The normal equations were first solved by Newton's scheme. Newton's method was then modified to yield a definitive solution in cases where the normal approach fails, by combining it with the method of steepest descent and other sophisticated algorithms. Practical examples show that the modified Newton scheme can always lead to a final solution. The weighting of observations, the orthogonal parameters and the efficiency of a set of adjustment parameters are also considered.
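    The blend of Newton-like steps with steepest descent that the record describes is the idea behind the Levenberg-Marquardt method; a generic scipy sketch on a toy nonlinear model (the actual orbital residual function is beyond this illustration):

```python
import numpy as np
from scipy.optimize import least_squares

rng = np.random.default_rng(12)
t = np.linspace(0, 10, 60)
y = 2.0 * np.exp(-0.3 * t) + 0.02 * rng.standard_normal(60)

def residuals(p):
    a, k = p
    return a * np.exp(-k * t) - y        # model minus observations

# 'lm' is Levenberg-Marquardt: Newton-like steps blended with
# steepest descent, as in the modified scheme described above.
sol = least_squares(residuals, x0=[1.0, 1.0], method="lm")
print(sol.x.round(3))                    # near [2.0, 0.3]
```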

  7. Control System of Substance and Energy Balances of Combined Heat-and-Power Plants Applying the Least Squares Adjustment Method

    Directory of Open Access Journals (Sweden)

    Henryk Rusinowsk

    1999-12-01

    Full Text Available The set of substance and energy balance equations of the water and steam collectors, together with the balances of the boilers and turbines (including the regeneration system), is the basis of the control system for the exploitation of a combined heat-and-power plant. The input values of this system are the results of measurements taken in the course of exploitation. Due to inevitable measurement errors, the substance balances display discrepancies between the balance of steam and the balance of feed water, and the calculated technical indices are uncertain. In order to increase the reliability of the results of the technical analysis of exploitation and to reconcile the balance equations, the least squares adjustment method may be used. It can be applied on the condition that there is a sufficient surplus of measuring data: the number of balance equations must exceed the number of unknown values. This makes it possible to apply the least-squares criterion, which guarantees a maximum of the likelihood function in an n-dimensional space of errors (n being the number of measuring data). This method has been applied to the balance systems of the combined heat-and-power generating plant.
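    A numpy sketch of the least-squares adjustment (data reconciliation) step: measurements m are minimally corrected, in the weighted least-squares sense, so that the linear balance equations hold exactly. The splitter balance here is a hypothetical example:

```python
import numpy as np

def reconcile(m, W, A, b):
    """Least-squares adjustment: min (x-m)^T W (x-m)  s.t.  A x = b.

    Closed form via Lagrange multipliers; W is the weight matrix
    (inverse measurement covariance).
    """
    Wi = np.linalg.inv(W)
    lam = np.linalg.solve(A @ Wi @ A.T, A @ m - b)
    return m - Wi @ A.T @ lam

# Hypothetical splitter: stream 1 splits into streams 2 and 3, so the
# balance is x1 - x2 - x3 = 0; the raw measurements violate it.
m = np.array([100.0, 61.0, 41.5])              # measured flows [t/h]
W = np.diag(1 / np.array([1.0, 0.5, 0.5])**2)  # weights = 1/sigma^2
A = np.array([[1.0, -1.0, -1.0]])
x = reconcile(m, W, A, b=np.zeros(1))
print(x, A @ x)                                # balanced flows, residual ~ 0
```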

  8. Least Square Support Vector Machine Modelling of Breakdown Voltage of Solid Insulating Materials in the Presence of Voids

    Science.gov (United States)

    Behera, S.; Tripathy, R. K.; Mohanty, S.

    2013-03-01

    The least-squares formulation of the support vector machine (SVM) was recently proposed and derived from statistical learning theory, and is regarded as a new development in learning from examples, alongside neural networks, radial basis functions, splines and other functions. Here the least squares support vector machine (LS-SVM) is used as a machine learning technique for predicting the breakdown voltage of solid insulators. The breakdown voltage, due to partial discharge in five solid insulating materials under ac conditions, is predicted as a function of four input parameters: the thickness of the insulating sample 't', the diameter of the void 'd', the thickness of the void 't1' and the relative permittivity of the material 'ɛr', using the LS-SVM model. The requisite training data were obtained from experimental studies performed on a cylindrical-plane electrode system, with voids of different dimensions created artificially. Detailed studies have been carried out to determine the LS-SVM parameters which give the best result. On completion of training, it is found that the LS-SVM model is capable of predicting the breakdown voltage V_b = f(t, t1, d, ɛr) very efficiently and with a small mean absolute error.

  9. Model-Based Least Squares Reconstruction of Coded Source Neutron Radiographs: Integrating the ORNL HFIR CG1D Source Model

    Energy Technology Data Exchange (ETDEWEB)

    Santos-Villalobos, Hector J [ORNL; Gregor, Jens [University of Tennessee, Knoxville (UTK); Bingham, Philip R [ORNL

    2014-01-01

    At present, neutron sources cannot be fabricated small and powerful enough to achieve high-resolution radiography while maintaining an adequate flux. One solution is to employ computational imaging techniques such as a magnified Coded Source Imaging (CSI) system. A coded mask is placed between the neutron source and the object. The system resolution is increased by reducing the size of the mask holes, and the flux is increased by increasing the size of the coded mask and/or the number of holes. One limitation of such a system is that the resolution of current state-of-the-art scintillator-based detectors caps at around 50 μm. To overcome this challenge, the coded mask and object are magnified by making the distance from the coded mask to the object much smaller than the distance from the object to the detector. In previous work, we have shown via synthetic experiments that our least squares method outperforms other methods in image quality and reconstruction precision because of its modeling of the CSI system components. However, the validation experiments were limited to simplistic neutron sources. In this work, we aim to model the flux distribution of a real neutron source and incorporate such a model in our least squares computational system. We provide a full description of the methodology used to characterize the neutron source and validate the method with synthetic experiments.

  10. Imputation And Classification Of Missing Data Using Least Square Support Vector Machines – A New Approach In Dementia Diagnosis

    Directory of Open Access Journals (Sweden)

    T R Sivapriya

    2012-07-01

    Full Text Available This paper presents a comparison of different data imputation approaches used to fill missing data and proposes a combined approach to accurately estimate missing attribute values in a patient database. The study suggests a more robust technique that is likely to supply a value close to the missing one, for effective classification and diagnosis. Initially the data are clustered, and the z-score method is used to select possible values for an instance with missing attribute values. Then multiple imputation using the least squares support vector machine (LS-SVM) is applied to select the most appropriate values for the missing attributes. Five imputed datasets were used to demonstrate the performance of the proposed method. Experimental results show that the method outperforms conventional methods of multiple imputation and mean substitution. Moreover, the proposed method, CZLSSVM (Clustered Z-score Least Squares Support Vector Machine), was evaluated on two classification problems with incomplete data, the efficacy of the imputation methods being assessed with an LS-SVM classifier. Experimental results indicate that classification accuracy increases with CZLSSVM in the case of missing attribute value estimation, and that CZLSSVM outperforms other data imputation approaches such as decision trees, rough sets, artificial neural networks, K-NN (K-nearest neighbour) and SVM. Furthermore, CZLSSVM yields 95 per cent accuracy, a higher prediction capability than the other methods tested in the study.
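
    A rough sketch of the cluster-then-impute idea described above, under loudly stated assumptions: kernel ridge regression (closely related to LS-SVM regression) stands in for the LS-SVM step, the z-score rule is interpreted as discarding |z| > 2 outliers within a cluster, and the data are synthetic.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.kernel_ridge import KernelRidge

rng = np.random.default_rng(2)
X = rng.standard_normal((200, 4))
X[:, 3] = X[:, 0] - 0.5 * X[:, 1] + 0.1 * rng.standard_normal(200)  # column 3 depends on the others

missing = rng.choice(200, 20, replace=False)      # rows whose column 3 is "missing"
observed = np.setdiff1d(np.arange(200), missing)
truth = X[missing, 3].copy()                      # keep the true values for scoring

# 1) cluster the complete rows on the observed attributes
km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X[observed][:, :3])

# 2) for each incomplete row, regress within its own cluster,
#    dropping cluster members whose target is a z-score outlier
for i in missing:
    c = km.predict(X[i, :3].reshape(1, -1))[0]
    rows = observed[km.labels_ == c]
    t = X[rows, 3]
    keep = rows[np.abs((t - t.mean()) / t.std()) < 2.0]
    model = KernelRidge(kernel="rbf", alpha=0.1).fit(X[keep][:, :3], X[keep, 3])
    X[i, 3] = model.predict(X[i, :3].reshape(1, -1))[0]

print("imputation RMSE:", np.sqrt(np.mean((X[missing, 3] - truth) ** 2)))
```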

  11. A semi-implicit finite strain shell algorithm using in-plane strains based on least-squares

    Science.gov (United States)

    Areias, P.; Rabczuk, T.; de Sá, J. César; Natal Jorge, R.

    2015-04-01

    The use of a semi-implicit algorithm at the constitutive level allows a robust and concise implementation of low-order effective shell elements. We perform a semi-implicit integration in the stress update algorithm for finite strain plasticity: rotation terms (highly nonlinear trigonometric functions) are integrated explicitly and correspond to a change in the (here evolving) reference configuration, while relative Green-Lagrange strains (quadratic) are used implicitly to account for the change in the equilibrium configuration. We parametrize both the reference and equilibrium configurations, in contrast with so-called objective stress integration algorithms, which use a common configuration. A finite strain quadrilateral element with least-squares assumed in-plane shear strains (in curvilinear coordinates) and classical transverse shear assumed strains is introduced. It is an alternative to enhanced-assumed-strain (EAS) formulations and, unlike these, produces an element that satisfies the patch test ab initio. No additional degrees of freedom are present, in contrast with EAS. The least-squares fit allows the derivation of invariant finite strain elements that are free of both in-plane and out-of-plane shear locking and amenable to standardization in commercial codes. Two thickness parameters per node are adopted to reproduce the Poisson effect in bending. The metric components are fully deduced, and exact linearization of the shell element is performed. Both isotropic and anisotropic behaviors are presented in elasto-plastic and hyperelastic examples.

  12. Phase discrepancy induced from least squares wavefront reconstruction of wrapped phase measurements with high noise or large localized wavefront gradients

    Science.gov (United States)

    Steinbock, Michael J.; Hyde, Milo W.

    2012-10-01

    Adaptive optics is used in applications such as laser communication, remote sensing, and laser weapon systems to estimate and correct for atmospheric distortions of propagated light in real time. Within an adaptive optics system, a reconstruction process interprets the raw wavefront sensor measurements and calculates an estimate of the unwrapped phase function to be sent through a control law and applied to a wavefront correction device. This research focuses on adaptive optics using a self-referencing interferometer wavefront sensor, which directly measures the wrapped wavefront phase; its measurements must therefore be reconstructed for use on a continuous-facesheet deformable mirror. In testing and evaluating a novel class of branch-point-tolerant wavefront reconstructors based on the post-processing congruence operation technique, an increase in Strehl ratio compared with a traditional least squares reconstructor was noted even in non-scintillated fields. To investigate this further, this paper uses wave-optics simulations to eliminate many of the variables of a hardware adaptive optics system and focus on the reconstruction techniques alone. The simulation results are provided along with a discussion of the physical reasoning for this phenomenon. For any application using a self-referencing interferometer wavefront sensor with low signal levels or large localized wavefront gradients, understanding this phenomenon is critical when applying a traditional least squares wavefront reconstructor.
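
    For reference, the traditional least squares reconstructor discussed above can be sketched in a few lines: wrapped pixel-to-pixel phase differences serve as gradient estimates, and a sparse least squares solve integrates them into an unwrapped phase. This is a Hudgin-style geometry with a hypothetical test phase and grid size, and no branch points are present, which is exactly the regime where the reconstructor behaves well.

```python
import numpy as np
from scipy.sparse import diags, eye, kron, vstack
from scipy.sparse.linalg import lsqr

def wrap(p):
    return (p + np.pi) % (2 * np.pi) - np.pi

n = 16
yy, xx = np.mgrid[0:n, 0:n]
phi = 0.08 * (xx - n / 2) ** 2 + 0.5 * yy      # smooth "true" unwrapped phase
psi = wrap(phi)                                # wrapped sensor measurement

gx = wrap(np.diff(psi, axis=1)).ravel()        # wrapped x-differences ~ true gradient
gy = wrap(np.diff(psi, axis=0)).ravel()        # wrapped y-differences

# sparse difference operators acting on the flattened n x n phase vector
D1 = diags([-np.ones(n - 1), np.ones(n - 1)], [0, 1], shape=(n - 1, n))
Dx = kron(eye(n), D1)                          # horizontal differences
Dy = kron(D1, eye(n))                          # vertical differences
A = vstack([Dx, Dy]).tocsr()
g = np.concatenate([gx, gy])

phi_ls = lsqr(A, g)[0].reshape(n, n)           # defined up to an additive (piston) constant
phi_ls += (phi - phi_ls).mean()                # remove piston for comparison
print("max reconstruction error:", np.abs(phi_ls - phi).max())
```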

  13. Use of near-infrared spectroscopy and least-squares support vector machine to determine quality change of tomato juice

    Institute of Scientific and Technical Information of China (English)

    Li-juan XIE; Yi-bin YING

    2009-01-01

    Near-infrared (NIR) transmittance spectroscopy combined with the least-squares support vector machine (LS-SVM) was investigated to study the quality change of tomato juice during storage. A total of 100 tomato juice samples were used. The spectrum of each sample was collected twice: the first measurement was taken when the juice was fresh and had not undergone any changes, and the second was taken after a month. Principal component analysis (PCA) was used to examine the potential for separating juice measured before and after storage. The soluble solids content (SSC) and pH of the juice samples were also determined. The results show that changes in certain compounds between tomato juice before and after the storage period were obvious. The LS-SVM model achieved excellent precision compared with the discriminant partial least squares (DPLS), soft independent modeling of class analogy (SIMCA), and discriminant analysis (DA) models, with a total accuracy of 100%. NIR spectroscopy coupled with LS-SVM, DPLS, SIMCA, or DA can thus be used to monitor the quality change of tomato juice during storage.
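
    A small synthetic sketch of the PCA screening step mentioned in this record: if storage shifts the spectra, the two groups separate along the leading principal components. The "spectra" here are invented Gaussian bands; real NIR data would involve instrument preprocessing and many more channels.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(3)
wl = np.linspace(0, 1, 200)                         # hypothetical wavelength axis
fresh  = np.exp(-((wl - 0.40) / 0.1) ** 2) + 0.02 * rng.standard_normal((50, 200))
stored = np.exp(-((wl - 0.45) / 0.1) ** 2) + 0.02 * rng.standard_normal((50, 200))
X = np.vstack([fresh, stored])

scores = PCA(n_components=2).fit_transform(X)
# If storage changes the spectra, the groups separate along the leading PCs:
print("fresh  PC1 mean:", scores[:50, 0].mean())
print("stored PC1 mean:", scores[50:, 0].mean())
```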

  14. A novel approach to the experimental study on methane/steam reforming kinetics using the Orthogonal Least Squares method

    Science.gov (United States)

    Sciazko, Anna; Komatsu, Yosuke; Brus, Grzegorz; Kimijima, Shinji; Szmyd, Janusz S.

    2014-09-01

    For a mathematical model based on the results of physical measurements, it becomes possible to determine their influence on the final solution and its accuracy. In classical approaches, however, the influence of different model simplifications on the reliability of the obtained results is usually not comprehensively discussed. This paper presents a novel approach to the study of methane/steam reforming kinetics based on an advanced methodology, the Orthogonal Least Squares method. Previously published kinetics of the reforming process are mutually divergent. To obtain the most probable values of the kinetic parameters and to enable direct and objective model verification, an appropriate calculation procedure needs to be proposed. The applied Generalized Least Squares (GLS) method includes all the experimental results in the mathematical model, which becomes internally contradictory (over-determined), as the number of equations exceeds the number of unknown variables. The GLS method is adopted to select the most probable values of the results and simultaneously determine the uncertainty coupled with all the variables in the system. In this paper, the reaction rate was evaluated after being pre-determined by a preliminary calculation based on experimental results obtained over a nickel/yttria-stabilized zirconia catalyst.
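
    A minimal illustration of a generalized least squares estimate for an over-determined system, using a toy linearized Arrhenius-type rate model; the data, weights and parameter values are hypothetical. The same matrix that yields the estimate also yields the parameter covariance, i.e. the uncertainty coupled with the fitted variables.

```python
import numpy as np

rng = np.random.default_rng(4)
T = np.linspace(873, 1073, 12)                  # temperatures, K (12 equations, 2 unknowns)
beta_true = np.array([18.0, -9.5e3])            # ln k0 and -Ea/R (hypothetical)
X = np.c_[np.ones_like(T), 1.0 / T]             # linearized model: ln k = ln k0 - Ea/(R T)
sig = 0.05 + 0.1 * rng.random(12)               # unequal measurement uncertainties
y = X @ beta_true + sig * rng.standard_normal(12)

W = np.diag(1.0 / sig**2)                       # inverse-variance weight matrix
XtWX = X.T @ W @ X
beta = np.linalg.solve(XtWX, X.T @ W @ y)       # GLS estimate
cov = np.linalg.inv(XtWX)                       # covariance of the estimated parameters

print("estimate:", beta)
print("1-sigma uncertainties:", np.sqrt(np.diag(cov)))
```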

  15. Diagnosis of Periodontal Disease from Saliva Samples Using Fourier Transform Infrared Microscopy Coupled with Partial Least Squares Discriminant Analysis.

    Science.gov (United States)

    Fujii, Satoshi; Sato, Shinobu; Fukuda, Keisuke; Okinaga, Toshinori; Ariyoshi, Wataru; Usui, Michihiko; Nakashima, Keisuke; Nishihara, Tatsuji; Takenaka, Shigeori

    2016-01-01

    Diagnosis of periodontal disease by a Fourier transform infrared (FT-IR) microscopic technique was achieved for saliva samples. Twenty-two saliva samples, collected from 10 patients with periodontal disease and 12 healthy volunteers, were pre-processed and analyzed by FT-IR microscopy. We found that the periodontal samples showed larger raw IR spectra than the control samples. In addition, the shape of the second-derivative spectrum clearly differed between the periodontal and control samples. Furthermore, the saliva content and mixture ratio differed between the two groups. Partial least squares discriminant analysis was used to discriminate the periodontal samples on the basis of the second-derivative spectrum. The leave-one-out cross-validation discrimination accuracy was 94.3%. These results show that periodontal disease may be diagnosed by analyzing saliva samples with FT-IR microscopy. PMID:26860570
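
    A hedged sketch of PLS-DA with leave-one-out cross-validation on synthetic "second-derivative spectra". Only the group sizes (10 vs. 12) mirror the study; the data, class separation and number of PLS components are arbitrary choices.

```python
import numpy as np
from sklearn.cross_decomposition import PLSRegression
from sklearn.model_selection import LeaveOneOut

rng = np.random.default_rng(5)
X_pd = rng.standard_normal((10, 300)) + 0.8      # "periodontal" group, shifted
X_ct = rng.standard_normal((12, 300))            # control group
X = np.vstack([X_pd, X_ct])
y = np.r_[np.ones(10), -np.ones(12)]             # class labels coded +1 / -1

correct = 0
for train, test in LeaveOneOut().split(X):
    pls = PLSRegression(n_components=2).fit(X[train], y[train])
    pred = np.sign(pls.predict(X[test]))[0, 0]   # threshold the PLS response at 0
    correct += pred == y[test][0]
print("LOO accuracy:", correct / len(y))
```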

  16. Derivation of decay heat benchmarks for U235 and Pu239 by a least squares fit to measured data

    International Nuclear Information System (INIS)

    A least squares technique used by previous authors has been applied to an extended set of available decay heat measurements for both U235 and Pu239 to yield simultaneous fits to the corresponding beta, gamma and total decay heat. The analysis takes account of both systematic and statistical uncertainties, including correlations, via calculations which use covariance matrices constructed for the measured data. The results of the analysis are given in the form of beta, gamma and total decay heat estimates following fission pulses and a range of irradiation times in both U235 and Pu239. These decay heat estimates are considered to form a consistent set of benchmarks for use in the assessment of summation calculations. (author)
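
    A compact sketch of the kind of fit described here: a least squares fit whose data carry a full covariance matrix combining uncorrelated statistical and fully correlated systematic parts. Whitening with the Cholesky factor of the covariance reduces the problem to ordinary least squares; the two-term decay-heat shape and all numbers below are invented.

```python
import numpy as np

rng = np.random.default_rng(6)
t = np.linspace(1.0, 100.0, 15)                        # cooling times, s
X = np.c_[t**-0.8, t**-1.2]                            # toy two-term decay-heat shape
a_true = np.array([1.5, 0.8])

stat = np.diag((0.02 * (X @ a_true)) ** 2)             # 2% statistical, uncorrelated
syst = np.outer(0.03 * (X @ a_true), 0.03 * (X @ a_true))  # fully correlated 3% systematic
C = stat + syst                                        # total covariance of the data

L = np.linalg.cholesky(C)
y = X @ a_true + L @ rng.standard_normal(15)           # draw correlated measurement errors

Xw = np.linalg.solve(L, X)                             # whiten: L^{-1} X and L^{-1} y
yw = np.linalg.solve(L, y)
a_hat, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
chi2 = np.sum((yw - Xw @ a_hat) ** 2)                  # chi-square of the fit

print("fitted coefficients:", a_hat, " chi2/dof:", chi2 / (15 - 2))
```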

  17. The estimation of the b factor of the FAO24 Blaney-Criddle method with the use of weighted least squares

    Science.gov (United States)

    Ampas, Vasilios; Baltas, Evangelos

    2010-05-01

    The purpose of this paper is the estimation of the factor b in the FAO 24 Blaney-Criddle method for reference crop evapotranspiration. The existing relationships of Frevert et al. and of Allen and Pruitt for estimating the b factor show variations of 15% and 12%, respectively, from the values given in the tables of Doorenbos and Pruitt. These relationships were developed using multiple regression analysis of the variables and pseudovariables. In this research work a new method is proposed for estimating the b factor, based on weighted least squares applied to the variables and pseudovariables, with the weighting factors derived from the frequency of each meteorological parameter. The reference evapotranspiration estimated using this b factor gives results close to those of the FAO 24 Blaney-Criddle method.
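
    The weighting idea can be sketched in a few lines: each grouped observation enters the fit scaled by the square root of its frequency of occurrence. The one-variable model and the numbers below are hypothetical; the actual b-factor regression uses several meteorological variables and pseudovariables.

```python
import numpy as np

# Weighted least squares with frequency-derived weights (hypothetical grouped data).
u = np.array([0.2, 0.4, 0.6, 0.8])                # predictor value per group
b_obs = np.array([0.55, 0.70, 0.90, 1.02])        # observed response per group
freq = np.array([30.0, 55.0, 12.0, 3.0])          # how often each condition occurred

X = np.c_[np.ones_like(u), u]                     # linear model: b = c0 + c1 * u
Xw = X * np.sqrt(freq)[:, None]                   # row-scaling by sqrt(weight)
yw = b_obs * np.sqrt(freq)
coef, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
print("weighted fit coefficients:", coef)
```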

  18. Robust anti-synchronization of uncertain chaotic systems based on multiple-kernel least squares support vector machine modeling

    International Nuclear Information System (INIS)

    Highlights: • Model uncertainty of the system is approximated by a multiple-kernel LS-SVM. • Approximation errors and disturbances are compensated for in the controller design. • Asymptotic anti-synchronization is achieved under model uncertainty and disturbances. Abstract: In this paper, we propose a robust anti-synchronization scheme based on multiple-kernel least squares support vector machine (MK-LSSVM) modeling for two uncertain chaotic systems. The multiple-kernel regression, a linear combination of basic kernels, is designed to approximate the system uncertainties by constructing a multiple-kernel Lagrangian function and computing the corresponding regression parameters. A robust feedback control based on the MK-LSSVM model is then presented, and an improved update law is employed to estimate the unknown bound of the approximation error. The proposed control scheme guarantees asymptotic convergence of the anti-synchronization errors in the presence of system uncertainties and external disturbances. Numerical examples are provided to show the effectiveness of the proposed method.
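
    A minimal sketch of the multiple-kernel ingredient: the model kernel is a convex combination of basic kernels (here RBF plus polynomial, with assumed fixed weights), after which the standard LS-SVM linear system is solved. The paper additionally learns the combination and wraps the model in a robust feedback controller, neither of which is attempted here.

```python
import numpy as np

def rbf(A, B, s):
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * s**2))

def poly(A, B, d=2):
    return (A @ B.T + 1.0) ** d

rng = np.random.default_rng(7)
X = rng.uniform(-2, 2, (60, 1))
y = X[:, 0] ** 2 + np.sin(3 * X[:, 0]) + 0.05 * rng.standard_normal(60)

w = (0.6, 0.4)                                   # assumed kernel weights
K = w[0] * rbf(X, X, 0.5) + w[1] * poly(X, X)    # multiple kernel = weighted sum

gamma, n = 50.0, len(y)
M = np.block([[np.zeros((1, 1)), np.ones((1, n))],
              [np.ones((n, 1)), K + np.eye(n) / gamma]])
sol = np.linalg.solve(M, np.r_[0.0, y])          # standard LS-SVM system
b, alpha = sol[0], sol[1:]

Xt = np.array([[-1.0], [0.0], [1.0]])
f = (w[0] * rbf(Xt, X, 0.5) + w[1] * poly(Xt, X)) @ alpha + b
print(np.c_[Xt[:, 0], f, Xt[:, 0] ** 2 + np.sin(3 * Xt[:, 0])])
```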

  19. Least-squares analysis of clock frequency comparison data to deduce optimized frequency and frequency ratio values

    CERN Document Server

    Margolis, H S

    2015-01-01

    A method is presented for analysing over-determined sets of clock frequency comparison data involving standards based on a number of different reference transitions. This least-squares adjustment procedure, based on the method used by CODATA to derive a self-consistent set of values for the fundamental physical constants, can be used to derive optimized values for the frequency ratios of all possible pairs of reference transitions. It is demonstrated to reproduce the frequency values recommended by the International Committee for Weights and Measures when using the same input data used to derive those values. The effects of including more recently published data in the evaluation are discussed, and the importance of accounting for correlations between the input data is emphasised.
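
    A toy version of such an adjustment: with log-frequencies as unknowns, every measured frequency ratio becomes a linear equation, and an over-determined set (here four ratios among three hypothetical standards, one measured twice) is reconciled by weighted least squares. For brevity the weight matrix is diagonal; correlated input data would require the full covariance, as the record emphasises.

```python
import numpy as np

# measured ratios: (i, j, ratio f_i/f_j, relative uncertainty); all values invented
meas = [(0, 1, 2.000000010, 1e-8),     # f_A / f_B
        (1, 2, 1.499999996, 2e-8),     # f_B / f_C
        (0, 2, 3.000000020, 1e-8),     # f_A / f_C
        (0, 2, 2.999999990, 3e-8)]     # f_A / f_C, a second measurement

# unknowns: ln(f_B/f_A), ln(f_C/f_A); standard A is the fixed reference
m = len(meas)
X, y, w = np.zeros((m, 2)), np.zeros(m), np.zeros(m)
for k, (i, j, r, u) in enumerate(meas):
    if i > 0: X[k, i - 1] += 1.0       # + ln f_i
    if j > 0: X[k, j - 1] -= 1.0       # - ln f_j
    y[k] = np.log(r)                   # each ratio is linear in the log-frequencies
    w[k] = 1.0 / u**2                  # weight from the relative uncertainty

W = np.diag(w)
lnf = np.linalg.solve(X.T @ W @ X, X.T @ W @ y)   # adjusted log-frequencies
cov = np.linalg.inv(X.T @ W @ X)

print("optimized f_A/f_C:", np.exp(-lnf[1]))
print("its relative uncertainty:", np.sqrt(cov[1, 1]))
```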

  20. Electric Load Forecasting Based on a Least Squares Support Vector Machine with Fuzzy Time Series and Global Harmony Search Algorithm

    Directory of Open Access Journals (Sweden)

    Yan Hong Chen

    2016-01-01

    Full Text Available This paper proposes a new electric load forecasting model that hybridizes fuzzy time series (FTS) and the global harmony search algorithm (GHSA) with least squares support vector machines (LSSVM), namely the GHSA-FTS-LSSVM model. Firstly, the fuzzy c-means (FCM) clustering algorithm is used to calculate the clustering center of each cluster. Secondly, the LSSVM is applied to model the resultant series, optimized by GHSA. Finally, a real-world example is adopted to test the performance of the proposed model. In this investigation, the proposed model is verified using experimental datasets from the Guangdong Province Industrial Development Database, and the results are compared against the autoregressive integrated moving average (ARIMA) model and other algorithms hybridized with LSSVM, including the genetic algorithm (GA), particle swarm optimization (PSO), harmony search, and so on. The forecasting results indicate that the proposed GHSA-FTS-LSSVM model generates more accurate predictive results.
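
    For orientation, here is a minimal harmony search in its basic form (not the global-best GHSA variant used in the paper), minimizing a stand-in objective; in the paper's setting the objective would be the LS-SVM forecasting error on validation data, and all parameter values below are assumptions.

```python
import numpy as np

rng = np.random.default_rng(8)
lo, hi = np.array([-5.0, -5.0]), np.array([5.0, 5.0])
HMS, HMCR, PAR, bw, iters = 10, 0.9, 0.3, 0.2, 2000   # assumed HS parameters

def objective(v):                       # hypothetical error surface to minimize
    return (v[0] - 1.0) ** 2 + 5 * (v[1] + 2.0) ** 2

HM = rng.uniform(lo, hi, (HMS, 2))      # harmony memory: candidate solutions
cost = np.array([objective(v) for v in HM])

for _ in range(iters):
    new = np.empty(2)
    for d in range(2):
        if rng.random() < HMCR:                         # memory consideration
            new[d] = HM[rng.integers(HMS), d]
            if rng.random() < PAR:                      # pitch adjustment
                new[d] += bw * rng.uniform(-1, 1)
        else:                                           # random selection
            new[d] = rng.uniform(lo[d], hi[d])
    new = np.clip(new, lo, hi)
    c = objective(new)
    worst = np.argmax(cost)
    if c < cost[worst]:                                 # replace the worst harmony
        HM[worst], cost[worst] = new, c

best = HM[np.argmin(cost)]
print("best parameters:", best, " cost:", cost.min())
```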